Sample records for reconstruction method called

  1. Petz recovery versus matrix reconstruction

    NASA Astrophysics Data System (ADS)

    Holzäpfel, Milan; Cramer, Marcus; Datta, Nilanjana; Plenio, Martin B.

    2018-04-01

    The reconstruction of the state of a multipartite quantum mechanical system represents a fundamental task in quantum information science. At its most basic, it concerns a state of a bipartite quantum system whose subsystems are subjected to local operations. We compare two different methods for obtaining the original state from the state resulting from the action of these operations. The first method involves quantum operations called Petz recovery maps, acting locally on the two subsystems. The second method is called matrix (or state) reconstruction and involves local, linear maps that are not necessarily completely positive. Moreover, we compare the quantities on which the maps employed in the two methods depend. We show that any state that admits Petz recovery also admits state reconstruction. However, the latter is successful for a strictly larger set of states. We also compare these methods in the context of a finite spin chain. Here, the state of a finite spin chain is reconstructed from the reduced states of a few neighbouring spins. In this setting, state reconstruction is the same as the matrix product operator reconstruction proposed by Baumgratz et al. [Phys. Rev. Lett. 111, 020401 (2013)]. Finally, we generalize both these methods so that they employ long-range measurements instead of relying solely on short-range correlations embodied in such local reduced states. Long-range measurements enable the reconstruction of states which cannot be reconstructed from measurements of local few-body observables alone, thereby improving existing methods for quantum state tomography of quantum many-body systems.
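
    For illustration only, here is a minimal numerical sketch (not from the paper) of a Petz recovery map for the simplest relevant channel, the partial trace. The dimensions, the reference state sigma, and the sanity checks are assumptions made for the example; the paper's setting of local operations on both subsystems is not reproduced.

      # Sketch: Petz recovery map for the partial-trace channel N(rho_AB) = Tr_A[rho_AB],
      # built around a full-rank reference state sigma_AB (toy sizes).
      # R(X) = sigma^{1/2} (I_A (x) sigma_B^{-1/2} X sigma_B^{-1/2}) sigma^{1/2}
      import numpy as np
      from scipy.linalg import sqrtm

      dA, dB = 2, 2
      rng = np.random.default_rng(0)

      def random_state(d):
          """Random full-rank density matrix."""
          G = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
          rho = G @ G.conj().T
          return rho / np.trace(rho)

      def partial_trace_A(rho):
          """Trace out subsystem A of a (dA*dB) x (dA*dB) state."""
          return np.einsum('abac->bc', rho.reshape(dA, dB, dA, dB))

      sigma = random_state(dA * dB)            # reference state
      sigma_B = partial_trace_A(sigma)
      sB_inv_half = np.linalg.inv(sqrtm(sigma_B))
      s_half = sqrtm(sigma)

      def petz_recover(X_B):
          """Petz recovery map acting on a state of subsystem B alone."""
          inner = np.kron(np.eye(dA), sB_inv_half @ X_B @ sB_inv_half)
          return s_half @ inner @ s_half

      # Sanity checks: trace preservation and exact recovery of the reference state.
      recovered = petz_recover(sigma_B)
      print(np.isclose(np.trace(recovered), 1.0))       # True
      print(np.allclose(recovered, sigma, atol=1e-10))  # True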

  2. Non-homogeneous updates for the iterative coordinate descent algorithm

    NASA Astrophysics Data System (ADS)

    Yu, Zhou; Thibault, Jean-Baptiste; Bouman, Charles A.; Sauer, Ken D.; Hsieh, Jiang

    2007-02-01

    Statistical reconstruction methods show great promise for improving resolution, and reducing noise and artifacts in helical X-ray CT. In fact, statistical reconstruction seems to be particularly valuable in maintaining reconstructed image quality when the dosage is low and the noise is therefore high. However, high computational cost and long reconstruction times remain a barrier to the use of statistical reconstruction in practical applications. Among the various iterative methods that have been studied for statistical reconstruction, iterative coordinate descent (ICD) has been found to have relatively low overall computational requirements due to its fast convergence. This paper presents a novel method for further speeding the convergence of the ICD algorithm, and therefore reducing the overall reconstruction time for statistical reconstruction. The method, which we call nonhomogeneous iterative coordinate descent (NH-ICD), uses spatially non-homogeneous updates to speed convergence by focusing computation where it is most needed. Experimental results with real data indicate that the method speeds reconstruction by roughly a factor of two for typical 3D multi-slice geometries.
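
    The following toy sketch (not the authors' implementation or selection criterion) conveys the idea of non-homogeneous coordinate updates: a plain coordinate-descent least-squares solver that spends extra updates on the coordinates whose last change was largest.

      # Toy sketch: coordinate descent on 0.5*||A x - b||^2, with extra updates
      # focused on the coordinates that changed most recently -- a crude stand-in
      # for NH-ICD's non-homogeneous update map.
      import numpy as np

      rng = np.random.default_rng(1)
      A = rng.normal(size=(200, 50))
      x_true = rng.normal(size=50)
      b = A @ x_true

      x = np.zeros(50)
      r = b - A @ x                      # running residual
      col_sq = (A ** 2).sum(axis=0)      # per-coordinate curvature
      change = np.full(50, np.inf)       # magnitude of last update per coordinate

      def update(j):
          """Exact minimization over coordinate j (the ICD step)."""
          global r
          delta = (A[:, j] @ r) / col_sq[j]
          x[j] += delta
          r -= delta * A[:, j]
          change[j] = abs(delta)

      for sweep in range(10):
          for j in range(50):            # homogeneous full sweep
              update(j)
          hot = np.argsort(change)[-10:] # extra, non-homogeneous updates
          for j in hot:
              update(j)

      print(np.linalg.norm(x - x_true))  # close to zero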

  3. Convex Accelerated Maximum Entropy Reconstruction

    PubMed Central

    Worley, Bradley

    2016-01-01

    Maximum entropy (MaxEnt) spectral reconstruction methods provide a powerful framework for spectral estimation of nonuniformly sampled datasets. Many methods exist within this framework, usually defined based on the magnitude of a Lagrange multiplier in the MaxEnt objective function. An algorithm is presented here that utilizes accelerated first-order convex optimization techniques to rapidly and reliably reconstruct nonuniformly sampled NMR datasets using the principle of maximum entropy. This algorithm – called CAMERA for Convex Accelerated Maximum Entropy Reconstruction Algorithm – is a new approach to spectral reconstruction that exhibits fast, tunable convergence in both constant-aim and constant-lambda modes. A high-performance, open source NMR data processing tool is described that implements CAMERA, and brief comparisons to existing reconstruction methods are made on several example spectra. PMID:26894476

  4. A CLASS OF RECONSTRUCTED DISCONTINUOUS GALERKIN METHODS IN COMPUTATIONAL FLUID DYNAMICS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hong Luo; Yidong Xia; Robert Nourgaliev

    2011-05-01

    A class of reconstructed discontinuous Galerkin (DG) methods is presented to solve compressible flow problems on arbitrary grids. The idea is to combine the efficiency of the reconstruction methods in finite volume methods and the accuracy of the DG methods to obtain a better numerical algorithm in computational fluid dynamics. The beauty of the resulting reconstructed discontinuous Galerkin (RDG) methods is that they provide a unified formulation for both finite volume and DG methods, and contain both classical finite volume and standard DG methods as two special cases of the RDG methods, and thus allow for a direct efficiency comparison. Both Green-Gauss and least-squares reconstruction methods and a least-squares recovery method are presented to obtain a quadratic polynomial representation of the underlying linear discontinuous Galerkin solution on each cell via a so-called in-cell reconstruction process. The devised in-cell reconstruction aims to augment the accuracy of the discontinuous Galerkin method by increasing the order of the underlying polynomial solution. These three reconstructed discontinuous Galerkin methods are used to compute a variety of compressible flow problems on arbitrary meshes to assess their accuracy. The numerical experiments demonstrate that all three reconstructed discontinuous Galerkin methods can significantly improve the accuracy of the underlying second-order DG method, with the least-squares reconstructed DG method providing the best performance in terms of accuracy, efficiency, and robustness.

  5. Fast High Resolution Volume Carving for 3D Plant Shoot Reconstruction

    PubMed Central

    Scharr, Hanno; Briese, Christoph; Embgenbroich, Patrick; Fischbach, Andreas; Fiorani, Fabio; Müller-Linow, Mark

    2017-01-01

    Volume carving is a well-established method for visual hull reconstruction and has been successfully applied in plant phenotyping, especially for 3D reconstruction of small plants and seeds. When imaging larger plants at still relatively high spatial resolution (≤1 mm), well known implementations become slow or have prohibitively large memory needs. Here we present and evaluate a computationally efficient algorithm for volume carving, allowing, e.g., 3D reconstruction of plant shoots. It combines a well-known multi-grid representation called “Octree” with an efficient image region integration scheme called “Integral image.” Speedup with respect to less efficient octree implementations is about 2 orders of magnitude, due to the introduced refinement strategy “Mark and refine.” Speedup is about a factor of 1.6 compared to a highly optimized GPU implementation using equidistant voxel grids, even without using any parallelization. We demonstrate the application of this method for trait derivation of banana and maize plants. PMID:29033961
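
    As a concrete illustration of the integral-image ingredient, the sketch below (toy mask and cell sizes, not the authors' code) shows how a summed-area table answers in constant time whether the projection of an octree cell overlaps the silhouette, which is the test that drives carve / keep / refine decisions.

      # Sketch: summed-area table ("integral image") query for a projected cell.
      import numpy as np

      def summed_area_table(mask):
          """Cumulative sum over both axes of a binary silhouette mask."""
          return mask.astype(np.int64).cumsum(axis=0).cumsum(axis=1)

      def box_sum(sat, r0, c0, r1, c1):
          """Sum of mask[r0:r1, c0:c1], read off the summed-area table in O(1)."""
          total = sat[r1 - 1, c1 - 1]
          if r0 > 0: total -= sat[r0 - 1, c1 - 1]
          if c0 > 0: total -= sat[r1 - 1, c0 - 1]
          if r0 > 0 and c0 > 0: total += sat[r0 - 1, c0 - 1]
          return total

      mask = np.zeros((480, 640), dtype=bool)
      mask[100:200, 300:400] = True          # toy plant silhouette
      sat = summed_area_table(mask)

      area = box_sum(sat, 90, 290, 210, 410) # projected bounding box of a cell
      if area == 0:
          print("cell outside the visual hull: carve it")
      elif area == (210 - 90) * (410 - 290):
          print("cell fully inside the silhouette: keep, no refinement needed")
      else:
          print("cell partially covered: mark and refine its octree children")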

  6. Dictionary-learning-based reconstruction method for electron tomography.

    PubMed

    Liu, Baodong; Yu, Hengyong; Verbridge, Scott S; Sun, Lizhi; Wang, Ge

    2014-01-01

    Electron tomography usually suffers from so-called “missing wedge” artifacts caused by limited tilt angle range. An equally sloped tomography (EST) acquisition scheme (which should be called the linogram sampling scheme) was recently applied to achieve 2.4-angstrom resolution. On the other hand, a compressive sensing inspired reconstruction algorithm, known as adaptive dictionary based statistical iterative reconstruction (ADSIR), has been reported for X-ray computed tomography. In this paper, we evaluate the EST, ADSIR, and an ordered-subset simultaneous algebraic reconstruction technique (OS-SART), and compare the equally sloped (ES) and equally angled (EA) data acquisition modes. Our results show that OS-SART is comparable to EST, and ADSIR outperforms EST and OS-SART. Furthermore, the equally sloped projection data acquisition mode has no advantage over the conventional equally angled mode in this context.
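
    For orientation, here is a toy sketch of an ordered-subset SART pass (one of the algorithms compared above), using a random stand-in system matrix rather than an electron-tomography geometry; sizes and subset counts are invented for the example.

      # Toy sketch: one OS-SART loop for a linear model A x = b with row subsets.
      import numpy as np

      rng = np.random.default_rng(2)
      A = np.abs(rng.normal(size=(120, 64)))   # stand-in system matrix
      x_true = np.abs(rng.normal(size=64))
      b = A @ x_true

      x = np.zeros(64)
      subsets = np.array_split(np.arange(120), 6)   # 6 ordered subsets of rows

      for it in range(20):
          for idx in subsets:
              As, bs = A[idx], b[idx]
              resid = (bs - As @ x) / As.sum(axis=1)   # row-normalized residual
              x += (As.T @ resid) / As.sum(axis=0)     # column-normalized backprojection
              x = np.maximum(x, 0.0)                   # nonnegativity
      print(np.linalg.norm(x - x_true) / np.linalg.norm(x_true))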

  7. Fast algorithm for wavefront reconstruction in XAO/SCAO with pyramid wavefront sensor

    NASA Astrophysics Data System (ADS)

    Shatokhina, Iuliia; Obereder, Andreas; Ramlau, Ronny

    2014-08-01

    We present a fast wavefront reconstruction algorithm developed for an extreme adaptive optics system equipped with a pyramid wavefront sensor on a 42 m telescope. The method is called the Preprocessed Cumulative Reconstructor with domain decomposition (P-CuReD). The algorithm is based on the theoretical relationship between pyramid and Shack-Hartmann wavefront sensor data. The algorithm consists of two consecutive steps: a data preprocessing step, followed by an application of the CuReD algorithm, which is a fast method for wavefront reconstruction from Shack-Hartmann sensor data. The closed loop simulation results show that the P-CuReD method provides the same reconstruction quality and is significantly faster than a matrix-vector multiplication (MVM) approach.

  8. Image reconstruction of muon tomographic data using a density-based clustering method

    NASA Astrophysics Data System (ADS)

    Perry, Kimberly B.

    Muons are subatomic particles capable of reaching the Earth's surface before decaying. When these particles collide with an object that has a high atomic number (Z), their path of travel changes substantially. Tracking muon movement through shielded containers can indicate what types of materials lie inside. This thesis proposes using a density-based clustering algorithm called OPTICS to perform image reconstructions using muon tomographic data. The results show that this method is capable of detecting high-Z materials quickly, and can also produce detailed reconstructions with large amounts of data.
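
    As an illustration of the clustering step, the sketch below applies scikit-learn's OPTICS to synthetic 3D scattering points (for example, point-of-closest-approach estimates); the point distributions and parameter values are invented for the example and are not from the thesis.

      # Sketch: density-based clustering of toy 3D scattering points with OPTICS.
      import numpy as np
      from sklearn.cluster import OPTICS

      rng = np.random.default_rng(3)
      background = rng.uniform(-50, 50, size=(500, 3))                 # diffuse scatter
      block = rng.normal(loc=[10, 0, -5], scale=1.5, size=(300, 3))    # dense, high-Z-like cluster
      points = np.vstack([background, block])

      labels = OPTICS(min_samples=20, xi=0.05).fit_predict(points)
      for lab in np.unique(labels):
          if lab == -1:
              continue                                                 # noise points
          cluster = points[labels == lab]
          print(f"cluster {lab}: {len(cluster)} points, centroid {cluster.mean(axis=0).round(1)}")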

  9. Fast ancestral gene order reconstruction of genomes with unequal gene content.

    PubMed

    Feijão, Pedro; Araujo, Eloi

    2016-11-11

    During evolution, genomes are modified by large scale structural events, such as rearrangements, deletions or insertions of large blocks of DNA. Of particular interest, in order to better understand how this type of genomic evolution happens, is the reconstruction of ancestral genomes, given a phylogenetic tree with extant genomes at its leaves. One way of solving this problem is to assume a rearrangement model, such as Double Cut and Join (DCJ), and find a set of ancestral genomes that minimizes the number of events on the input tree. Since this problem is NP-hard for most rearrangement models, exact solutions are practical only for small instances, and heuristics have to be used for larger datasets. This type of approach can be called event-based. Another common approach is based on finding conserved structures between the input genomes, such as adjacencies between genes, possibly also assigning weights that indicate a measure of confidence or probability that this particular structure is present on each ancestral genome, and then finding a set of non-conflicting adjacencies that optimize some given function, usually trying to maximize total weight while minimizing character changes in the tree. We call this type of method homology-based. In previous work, we proposed an ancestral reconstruction method that combines homology- and event-based ideas, using the concept of intermediate genomes, which arise in DCJ rearrangement scenarios. This method showed a better rate of correctly reconstructed adjacencies than other methods, while also being faster, since the use of intermediate genomes greatly reduces the search space. Here, we generalize the intermediate genome concept to genomes with unequal gene content, extending our method to account for gene insertions and deletions of any length. In many of the simulated datasets, our proposed method had better results than MLGO and MGRA, two state-of-the-art algorithms for ancestral reconstruction with unequal gene content, while running much faster, making it more scalable to larger datasets. Studying ancestral reconstruction problems in a new light, using the concept of intermediate genomes, allows the design of very fast algorithms by greatly reducing the solution search space, while also giving very good results. The algorithms introduced in this paper were implemented in an open-source software called RINGO (ancestral Reconstruction with INtermediate GenOmes), available at https://github.com/pedrofeijao/RINGO.

  10. Differential Binary Encoding Method for Calibrating Image Sensors Based on IOFBs

    PubMed Central

    Fernández, Pedro R.; Lázaro-Galilea, José Luis; Gardel, Alfredo; Espinosa, Felipe; Bravo, Ignacio; Cano, Ángel

    2012-01-01

    Image transmission using incoherent optical fiber bundles (IOFBs) requires prior calibration to obtain the spatial in-out fiber correspondence necessary to reconstruct the image captured by the pseudo-sensor. This information is recorded in a Look-Up Table called the Reconstruction Table (RT), used later for reordering the fiber positions and reconstructing the original image. This paper presents a very fast method based on image-scanning using spaces encoded by a weighted binary code to obtain the in-out correspondence. The results demonstrate that this technique yields a remarkable reduction in processing time and the image reconstruction quality is very good compared to previous techniques based on spot or line scanning, for example. PMID:22666023
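
    A minimal sketch of the underlying idea follows; it is not the paper's actual patterns, geometry, or differential scheme. Each input column is encoded with a weighted binary code, one stripe pattern is projected per bit, and the decoded bit sequence at each output pixel yields its entry in the Reconstruction Table.

      # Sketch: weighted-binary-coded stripe patterns recover an unknown in-out mapping.
      import numpy as np

      W = 256                                   # toy input pattern width
      n_bits = int(np.ceil(np.log2(W)))
      columns = np.arange(W)

      # One binary stripe pattern per bit plane (most significant bit first).
      patterns = np.array([(columns >> (n_bits - 1 - k)) & 1 for k in range(n_bits)])

      # A toy "fiber bundle": each output pixel looks at some unknown input column.
      rng = np.random.default_rng(4)
      true_column = rng.permutation(W)          # unknown scrambling to recover

      # Each output pixel observes one bit per projected pattern.
      observed_bits = patterns[:, true_column]  # shape (n_bits, W)

      # Decode: the weighted sum of observed bits gives the input column index.
      weights = 2 ** np.arange(n_bits - 1, -1, -1)
      reconstruction_table = weights @ observed_bits
      print(np.array_equal(reconstruction_table, true_column))   # True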

  11. Three-Dimensional High-Order Spectral Finite Volume Method for Unstructured Grids

    NASA Technical Reports Server (NTRS)

    Liu, Yen; Vinokur, Marcel; Wang, Z. J.; Kwak, Dochan (Technical Monitor)

    2002-01-01

    Many areas require a very high-order accurate numerical solution of conservation laws for complex shapes. This paper deals with the extension to three dimensions of the Spectral Finite Volume (SV) method for unstructured grids, which was developed to solve such problems. We first summarize the limitations of traditional methods such as finite-difference, and finite-volume for both structured and unstructured grids. We then describe the basic formulation of the spectral finite volume method. What distinguishes the SV method from conventional high-order finite-volume methods for unstructured triangular or tetrahedral grids is the data reconstruction. Instead of using a large stencil of neighboring cells to perform a high-order reconstruction, the stencil is constructed by partitioning each grid cell, called a spectral volume (SV), into 'structured' sub-cells, called control volumes (CVs). One can show that if all the SV cells are partitioned into polygonal or polyhedral CV sub-cells in a geometrically similar manner, the reconstructions for all the SVs become universal, irrespective of their shapes, sizes, orientations, or locations. It follows that the reconstruction is reduced to a weighted sum of unknowns involving just a few simple adds and multiplies, and those weights are universal and can be pre-determined once for all. The method is thus very efficient, accurate, and yet geometrically flexible. The most critical part of the SV method is the partitioning of the SV into CVs. In this paper we present the partitioning of a tetrahedral SV into polyhedral CVs with one free parameter for polynomial reconstructions up to degree of precision five. (Note that the order of accuracy of the method is one order higher than the reconstruction degree of precision.) The free parameter will be determined by minimizing the Lebesgue constant of the reconstruction matrix or similar criteria to obtain optimized partitions. The details of an efficient, parallelizable code to solve three-dimensional problems for any order of accuracy are then presented. Important aspects of the data structure are discussed. Comparisons with the Discontinuous Galerkin (DG) method are made. Numerical examples for wave propagation problems are presented.
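
    To make the "precomputed universal weights" point concrete, here is a 1D analogue (an illustrative sketch only; the paper's construction is for tetrahedral spectral volumes in 3D): the map from control-volume averages to polynomial coefficients is a single matrix computed once on a reference cell and reused everywhere.

      # Sketch: reconstruct a quadratic from sub-cell averages via fixed weights.
      import numpy as np

      # Reference cell [0,1] split into 3 equal control volumes; basis {1, x, x^2}.
      edges = np.linspace(0.0, 1.0, 4)
      M = np.zeros((3, 3))
      for i in range(3):
          a, b = edges[i], edges[i + 1]
          # average of x^k over [a,b] is (b^{k+1}-a^{k+1}) / ((k+1)(b-a))
          M[i] = [(b**(k + 1) - a**(k + 1)) / ((k + 1) * (b - a)) for k in range(3)]

      W = np.linalg.inv(M)        # universal weights: averages -> coefficients

      # Check on an arbitrary quadratic u(x) = 2 - 3x + 5x^2.
      coeff_true = np.array([2.0, -3.0, 5.0])
      averages = M @ coeff_true   # what a finite-volume scheme would carry
      print(np.allclose(W @ averages, coeff_true))   # True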

  12. A simple method to achieve full-field and real-scale reconstruction using a movable stereo rig

    NASA Astrophysics Data System (ADS)

    Gu, Feifei; Zhao, Hong; Song, Zhan; Tang, Suming

    2018-06-01

    This paper introduces a simple method to achieve full-field and real-scale reconstruction using a movable binocular vision system (MBVS). The MBVS is composed of two cameras, one is called the tracking camera, and the other is called the working camera. The tracking camera is used for tracking the positions of the MBVS and the working camera is used for the 3D reconstruction task. The MBVS has several advantages compared with a single moving camera or multi-camera networks. Firstly, the MBVS can recover real-scale depth information from the captured image sequences without using auxiliary objects whose geometry or motion should be precisely known. Secondly, the movability of the system guarantees appropriate baselines to supply more robust point correspondences. Additionally, using one camera for reconstruction avoids a drawback of multi-camera networks, namely that variability in camera parameters and performance can significantly affect the accuracy and robustness of the feature extraction and stereo matching methods. The proposed framework consists of local reconstruction and initial pose estimation of the MBVS based on transferable features, followed by overall optimization and accurate integration of multi-view 3D reconstruction data. The whole process requires no information other than the input images. The framework has been verified with real data, and very good results have been obtained.

  13. Technical Note: FreeCT_ICD: An Open Source Implementation of a Model-Based Iterative Reconstruction Method using Coordinate Descent Optimization for CT Imaging Investigations.

    PubMed

    Hoffman, John M; Noo, Frédéric; Young, Stefano; Hsieh, Scott S; McNitt-Gray, Michael

    2018-06-01

    To facilitate investigations into the impacts of acquisition and reconstruction parameters on quantitative imaging, radiomics and CAD using CT imaging, we previously released an open source implementation of a conventional weighted filtered backprojection reconstruction called FreeCT_wFBP. Our purpose was to extend that work by providing an open-source implementation of a model-based iterative reconstruction method using coordinate descent optimization, called FreeCT_ICD. Model-based iterative reconstruction offers the potential for substantial radiation dose reduction, but can impose substantial computational processing and storage requirements. FreeCT_ICD is an open source implementation of a model-based iterative reconstruction method that provides a reasonable tradeoff between these requirements. This was accomplished by adapting a previously proposed method that allows the system matrix to be stored with a reasonable memory requirement. The method amounts to describing the attenuation coefficient using rotating slices that follow the helical geometry. In the initially-proposed version, the rotating slices are themselves described using blobs. We have replaced this description by a unique model that relies on tri-linear interpolation together with the principles of Joseph's method. This model offers an improvement in memory requirement while still allowing highly accurate reconstruction for conventional CT geometries. The system matrix is stored column-wise and combined with an iterative coordinate descent (ICD) optimization. The result is FreeCT_ICD, which is a reconstruction program developed on the Linux platform using C++ libraries and the open source GNU GPL v2.0 license. The software is capable of reconstructing raw projection data of helical CT scans. In this work, the software has been described and evaluated by reconstructing datasets exported from a clinical scanner which consisted of an ACR accreditation phantom dataset and a clinical pediatric thoracic scan. For the ACR phantom, image quality was comparable to clinical reconstructions as well as reconstructions using open-source FreeCT_wFBP software. The pediatric thoracic scan also yielded acceptable results. In addition, we did not observe any deleterious impact in image quality associated with the utilization of rotating slices. These evaluations also demonstrated reasonable tradeoffs in storage requirements and computational demands. FreeCT_ICD is an open-source implementation of a model-based iterative reconstruction method that extends the capabilities of previously released open source reconstruction software and provides the ability to perform vendor-independent reconstructions of clinically acquired raw projection data. This implementation represents a reasonable tradeoff between storage and computational requirements and has demonstrated acceptable image quality in both simulated and clinical image datasets.

  14. Reconstruction of interatomic vectors by principal component analysis of nuclear magnetic resonance data in multiple alignments

    NASA Astrophysics Data System (ADS)

    Hus, Jean-Christophe; Bruschweiler, Rafael

    2002-07-01

    A general method is presented for the reconstruction of interatomic vector orientations from nuclear magnetic resonance (NMR) spectroscopic data of tensor interactions of rank 2, such as dipolar coupling and chemical shielding anisotropy interactions, in solids and partially aligned liquid-state systems. The method, called PRIMA, is based on a principal component analysis of the covariance matrix of the NMR parameters collected for multiple alignments. The five nonzero eigenvalues and their eigenvectors efficiently allow the approximate reconstruction of the vector orientations of the underlying interactions. The method is demonstrated for an isotropic distribution of sample orientations as well as for finite sets of orientations and internuclear vectors encountered in protein systems.

  15. Fast implementations of reconstruction-based scatter compensation in fully 3D SPECT image reconstruction

    NASA Astrophysics Data System (ADS)

    Kadrmas, Dan J.; Frey, Eric C.; Karimi, Seemeen S.; Tsui, Benjamin M. W.

    1998-04-01

    Accurate scatter compensation in SPECT can be performed by modelling the scatter response function during the reconstruction process. This method is called reconstruction-based scatter compensation (RBSC). It has been shown that RBSC has a number of advantages over other methods of compensating for scatter, but using RBSC for fully 3D compensation has resulted in prohibitively long reconstruction times. In this work we propose two new methods that can be used in conjunction with existing methods to achieve marked reductions in RBSC reconstruction times. The first method, coarse-grid scatter modelling, significantly accelerates the scatter model by exploiting the fact that scatter is dominated by low-frequency information. The second method, intermittent RBSC, further accelerates the reconstruction process by limiting the number of iterations during which scatter is modelled. The fast implementations were evaluated using a Monte Carlo simulated experiment of the 3D MCAT phantom with tracer, and also using experimentally acquired data with tracer. Results indicated that these fast methods can reconstruct, with fully 3D compensation, images very similar to those obtained using standard RBSC methods, and in reconstruction times that are an order of magnitude shorter. Using these methods, fully 3D iterative reconstruction with RBSC can be performed well within the realm of clinically realistic times (under 10 minutes for image reconstruction).

  16. Automated Probabilistic Reconstruction of White-Matter Pathways in Health and Disease Using an Atlas of the Underlying Anatomy

    PubMed Central

    Yendiki, Anastasia; Panneck, Patricia; Srinivasan, Priti; Stevens, Allison; Zöllei, Lilla; Augustinack, Jean; Wang, Ruopeng; Salat, David; Ehrlich, Stefan; Behrens, Tim; Jbabdi, Saad; Gollub, Randy; Fischl, Bruce

    2011-01-01

    We have developed a method for automated probabilistic reconstruction of a set of major white-matter pathways from diffusion-weighted MR images. Our method is called TRACULA (TRActs Constrained by UnderLying Anatomy) and utilizes prior information on the anatomy of the pathways from a set of training subjects. By incorporating this prior knowledge in the reconstruction procedure, our method obviates the need for manual interaction with the tract solutions at a later stage and thus facilitates the application of tractography to large studies. In this paper we illustrate the application of the method on data from a schizophrenia study and investigate whether the inclusion of both patients and healthy subjects in the training set affects our ability to reconstruct the pathways reliably. We show that, since our method does not constrain the exact spatial location or shape of the pathways but only their trajectory relative to the surrounding anatomical structures, a set of healthy training subjects can be used to reconstruct the pathways accurately in patients as well as in controls. PMID:22016733

  17. Automatic face naming by learning discriminative affinity matrices from weakly labeled images.

    PubMed

    Xiao, Shijie; Xu, Dong; Wu, Jianxin

    2015-10-01

    Given a collection of images, where each image contains several faces and is associated with a few names in the corresponding caption, the goal of face naming is to infer the correct name for each face. In this paper, we propose two new methods to effectively solve this problem by learning two discriminative affinity matrices from these weakly labeled images. We first propose a new method called regularized low-rank representation by effectively utilizing weakly supervised information to learn a low-rank reconstruction coefficient matrix while exploring multiple subspace structures of the data. Specifically, by introducing a specially designed regularizer to the low-rank representation method, we penalize the corresponding reconstruction coefficients related to the situations where a face is reconstructed by using face images from other subjects or by using itself. With the inferred reconstruction coefficient matrix, a discriminative affinity matrix can be obtained. Moreover, we also develop a new distance metric learning method called ambiguously supervised structural metric learning by using weakly supervised information to seek a discriminative distance metric. Hence, another discriminative affinity matrix can be obtained using the similarity matrix (i.e., the kernel matrix) based on the Mahalanobis distances of the data. Observing that these two affinity matrices contain complementary information, we further combine them to obtain a fused affinity matrix, based on which we develop a new iterative scheme to infer the name of each face. Comprehensive experiments demonstrate the effectiveness of our approach.

  18. Design of an essentially non-oscillatory reconstruction procedure on finite-element type meshes

    NASA Technical Reports Server (NTRS)

    Abgrall, R.

    1991-01-01

    An essentially non-oscillatory reconstruction for functions defined on finite-element type meshes was designed. Two related problems are studied: the interpolation of possibly unsmooth multivariate functions on arbitrary meshes and the reconstruction of a function from its average in the control volumes surrounding the nodes of the mesh. Concerning the first problem, we have studied the behavior of the highest coefficients of the Lagrange interpolation of functions which may admit discontinuities across locally regular curves. This enables us to choose the best stencil for the interpolation. The choice of the smallest possible number of stencils is addressed. Concerning the reconstruction problem, because of the very nature of the mesh, the only method that may work is the so-called reconstruction via deconvolution method. Unfortunately, as we show, it is well suited only for regular meshes, but we also show how to overcome this difficulty. The global method has the expected order of accuracy but is conservative up to a high order quadrature formula only. Some numerical examples are given which demonstrate the efficiency of the method.

  19. Approximate static condensation algorithm for solving multi-material diffusion problems on meshes non-aligned with material interfaces

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kikinzon, Evgeny; Kuznetsov, Yuri; Lipnikov, Konstatin

    In this study, we describe a new algorithm for solving multi-material diffusion problems when material interfaces are not aligned with the mesh. In this case interface reconstruction methods are used to construct an approximate representation of the interfaces between materials. They produce so-called multi-material cells, in which materials are represented by material polygons that contain only one material. The reconstructed interface is not continuous between cells. Finally, we suggest a new method for solving multi-material diffusion problems on such meshes and compare its performance with known homogenization methods.

  20. Approximate static condensation algorithm for solving multi-material diffusion problems on meshes non-aligned with material interfaces

    DOE PAGES

    Kikinzon, Evgeny; Kuznetsov, Yuri; Lipnikov, Konstatin; ...

    2017-07-08

    In this study, we describe a new algorithm for solving multi-material diffusion problems when material interfaces are not aligned with the mesh. In this case interface reconstruction methods are used to construct an approximate representation of the interfaces between materials. They produce so-called multi-material cells, in which materials are represented by material polygons that contain only one material. The reconstructed interface is not continuous between cells. Finally, we suggest a new method for solving multi-material diffusion problems on such meshes and compare its performance with known homogenization methods.

  1. Advanced Imaging Methods for Long-Baseline Optical Interferometry

    NASA Astrophysics Data System (ADS)

    Le Besnerais, G.; Lacour, S.; Mugnier, L. M.; Thiebaut, E.; Perrin, G.; Meimon, S.

    2008-11-01

    We address the data processing methods needed for imaging with a long baseline optical interferometer. We first describe parametric reconstruction approaches and adopt a general formulation of nonparametric image reconstruction as the solution of a constrained optimization problem. Within this framework, we present two recent reconstruction methods, Mira and Wisard, representative of the two generic approaches for dealing with the missing phase information. Mira is based on an implicit approach and a direct optimization of a Bayesian criterion while Wisard adopts a self-calibration approach and an alternating minimization scheme inspired by radio astronomy. Both methods can handle various regularization criteria. We review commonly used regularization terms and introduce an original quadratic regularization called “soft support constraint” that favors object compactness. It yields images of quality comparable to nonquadratic regularizations on the synthetic data we have processed. We then perform image reconstructions, both parametric and nonparametric, on astronomical data from the IOTA interferometer, and discuss the respective roles of parametric and nonparametric approaches for optical interferometric imaging.
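
    The sketch below is a toy 1D illustration of such a quadratic "soft support" penalty (hypothetical measurement operator and object, closed-form normal-equation solve; not the Mira/Wisard pipeline): flux far from the image centre is increasingly penalized, which favors compact objects.

      # Sketch: quadratic soft-support regularization in a toy linear inverse problem.
      import numpy as np

      rng = np.random.default_rng(5)
      n, m = 64, 40
      H = rng.normal(size=(m, n))                 # toy, incomplete measurement operator
      x_true = np.exp(-0.5 * ((np.arange(n) - 32) / 4.0) ** 2)   # compact object
      y = H @ x_true

      dist2 = (np.arange(n) - 32.0) ** 2
      Wsupp = np.diag(dist2 / dist2.max())        # weights grow with distance from centre
      mu = 1.0

      # minimize ||H x - y||^2 + mu * x^T Wsupp x   (favors compactness)
      x_hat = np.linalg.solve(H.T @ H + mu * Wsupp, H.T @ y)
      print(np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))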

  2. Vitis Phylogenomics: Hybridization Intensities from a SNP Array Outperform Genotype Calls

    PubMed Central

    Miller, Allison J.; Matasci, Naim; Schwaninger, Heidi; Aradhya, Mallikarjuna K.; Prins, Bernard; Zhong, Gan-Yuan; Simon, Charles; Buckler, Edward S.; Myles, Sean

    2013-01-01

    Understanding relationships among species is a fundamental goal of evolutionary biology. Single nucleotide polymorphisms (SNPs) identified through next generation sequencing and related technologies enable phylogeny reconstruction by providing unprecedented numbers of characters for analysis. One approach to SNP-based phylogeny reconstruction is to identify SNPs in a subset of individuals, and then to compile SNPs on an array that can be used to genotype additional samples at hundreds or thousands of sites simultaneously. Although powerful and efficient, this method is subject to ascertainment bias because applying variation discovered in a representative subset to a larger sample favors identification of SNPs with high minor allele frequencies and introduces bias against rare alleles. Here, we demonstrate that the use of hybridization intensity data, rather than genotype calls, reduces the effects of ascertainment bias. Whereas traditional SNP calls assess known variants based on diversity housed in the discovery panel, hybridization intensity data survey variation in the broader sample pool, regardless of whether those variants are present in the initial SNP discovery process. We apply SNP genotype and hybridization intensity data derived from the Vitis9kSNP array developed for grape to show the effects of ascertainment bias and to reconstruct evolutionary relationships among Vitis species. We demonstrate that phylogenies constructed using hybridization intensities suffer less from the distorting effects of ascertainment bias, and are thus more accurate than phylogenies based on genotype calls. Moreover, we reconstruct the phylogeny of the genus Vitis using hybridization data, show that North American subgenus Vitis species are monophyletic, and resolve several previously poorly known relationships among North American species. This study builds on earlier work that applied the Vitis9kSNP array to evolutionary questions within Vitis vinifera and has general implications for addressing ascertainment bias in array-enabled phylogeny reconstruction. PMID:24236035

  3. Time-based Reconstruction of Free-streaming Data in CBM

    NASA Astrophysics Data System (ADS)

    Akishina, Valentina; Kisel, Ivan; Vassiliev, Iouri; Zyzak, Maksym

    2018-02-01

    Traditional latency-limited trigger architectures typical for conventional experiments are inapplicable for the CBM experiment. Instead, CBM will ship and collect time-stamped data into a readout buffer in the form of time-slices of a certain length and deliver them to a large computer farm, where online event reconstruction and selection will be performed. Grouping measurements into physical collisions must be performed in software and requires reconstruction not only in space, but also in time, the so-called 4-dimensional track reconstruction and event building. The tracks, reconstructed with the 4D Cellular Automaton track finder, are combined into event-corresponding clusters according to the estimated time at the target position and its errors, obtained with the Kalman Filter method. The reconstructed events are given as inputs to the KF Particle Finder package for short-lived particle reconstruction. The results of time-based reconstruction of simulated collisions in CBM are presented and discussed in detail.

  4. Sea level reconstructions from altimetry and tide gauges using independent component analysis

    NASA Astrophysics Data System (ADS)

    Brunnabend, Sandra-Esther; Kusche, Jürgen; Forootan, Ehsan

    2017-04-01

    Many reconstructions of global and regional sea level rise derived from tide gauges and satellite altimetry have used the method of empirical orthogonal functions (EOF) to reduce noise, improve the spatial resolution of the reconstructed outputs and investigate the different signals in climate time series. However, the second-order EOF method has some limitations, e.g. in the separation of individual physical signals into different modes of sea level variations and in the capability to physically interpret the different modes, as they are assumed to be orthogonal. Therefore, we investigate the use of the more advanced statistical signal decomposition technique called independent component analysis (ICA) to reconstruct global and regional sea level change from satellite altimetry and tide gauge records. Our results indicate that the choice of method has almost no influence on the reconstruction of global mean sea level change (1.6 mm/yr from 1960-2010 and 2.9 mm/yr from 1993-2013). Only different numbers of modes are needed for the reconstruction. Using the ICA method is advantageous for separating independent climate variability signals from regional sea level variations, as the mixing problem of the EOF method is strongly reduced. As an example, the modes most dominated by the El Niño-Southern Oscillation (ENSO) signal are compared. Regional sea level changes near Tianjin, China, Los Angeles, USA, and Majuro, Marshall Islands are reconstructed and the contributions from ENSO are identified.
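
    For illustration only, the sketch below runs FastICA on synthetic mixed time series (invented signals and mixing matrix, not the altimetry/tide-gauge data), showing how statistically independent modes are separated where an EOF/PCA decomposition would tend to blend them.

      # Sketch: unmixing two independent toy signals observed at three "stations".
      import numpy as np
      from sklearn.decomposition import FastICA

      t = np.linspace(0, 20, 2000)
      s1 = np.sign(np.sin(2 * np.pi * 0.3 * t))        # ENSO-like stand-in signal
      s2 = np.sin(2 * np.pi * 1.1 * t)                 # seasonal-like stand-in signal
      S = np.column_stack([s1, s2])

      A = np.array([[1.0, 0.6], [0.4, 1.0], [0.8, 0.3]])   # mixing into 3 records
      X = S @ A.T + 0.05 * np.random.default_rng(6).normal(size=(2000, 3))

      ica = FastICA(n_components=2, random_state=0)
      modes = ica.fit_transform(X)     # estimated independent components
      print(modes.shape)               # (2000, 2)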

  5. A Reconstructed Discontinuous Galerkin Method for the Compressible Navier-Stokes Equations on Arbitrary Grids

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hong Luo; Luqing Luo; Robert Nourgaliev

    2010-09-01

    A reconstruction-based discontinuous Galerkin (RDG) method is presented for the solution of the compressible Navier–Stokes equations on arbitrary grids. The RDG method, originally developed for the compressible Euler equations, is extended to discretize viscous and heat fluxes in the Navier–Stokes equations using a so-called inter-cell reconstruction, where a smooth solution is locally reconstructed using a least-squares method from the underlying discontinuous DG solution. Similar to the recovery-based DG (rDG) methods, this reconstructed DG method eliminates the introduction of ad hoc penalty or coupling terms commonly found in traditional DG methods. Unlike rDG methods, this RDG method does not need to judiciously choose a proper form of a recovered polynomial, thus is simple, flexible, and robust, and can be used on arbitrary grids. The developed RDG method is used to compute a variety of flow problems on arbitrary meshes to demonstrate its accuracy, efficiency, robustness, and versatility. The numerical results indicate that this RDG method is able to deliver the same accuracy as the well-known Bassi–Rebay II scheme, at half of its computing cost for the discretization of the viscous fluxes in the Navier–Stokes equations, clearly demonstrating its superior performance over the existing DG methods for solving the compressible Navier–Stokes equations.

  6. A Reconstructed Discontinuous Galerkin Method for the Compressible Navier-Stokes Equations on Arbitrary Grids

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hong Luo; Luqing Luo; Robert Nourgaliev

    2010-01-01

    A reconstruction-based discontinuous Galerkin (RDG) method is presented for the solution of the compressible Navier-Stokes equations on arbitrary grids. The RDG method, originally developed for the compressible Euler equations, is extended to discretize viscous and heat fluxes in the Navier-Stokes equations using a so-called inter-cell reconstruction, where a smooth solution is locally reconstructed using a least-squares method from the underlying discontinuous DG solution. Similar to the recovery-based DG (rDG) methods, this reconstructed DG method eliminates the introduction of ad hoc penalty or coupling terms commonly found in traditional DG methods. Unlike rDG methods, this RDG method does not need to judiciously choose a proper form of a recovered polynomial, thus is simple, flexible, and robust, and can be used on arbitrary grids. The developed RDG method is used to compute a variety of flow problems on arbitrary meshes to demonstrate its accuracy, efficiency, robustness, and versatility. The numerical results indicate that this RDG method is able to deliver the same accuracy as the well-known Bassi-Rebay II scheme, at half of its computing cost for the discretization of the viscous fluxes in the Navier-Stokes equations, clearly demonstrating its superior performance over the existing DG methods for solving the compressible Navier-Stokes equations.

  7. Realtime Reconstruction of an Animating Human Body from a Single Depth Camera.

    PubMed

    Chen, Yin; Cheng, Zhi-Quan; Lai, Chao; Martin, Ralph R; Dang, Gang

    2016-08-01

    We present a method for realtime reconstruction of an animating human body, which produces a sequence of deforming meshes representing a given performance captured by a single commodity depth camera. We achieve realtime single-view mesh completion by enhancing the parameterized SCAPE model. Our method, which we call Realtime SCAPE, performs full-body reconstruction without the use of markers. In Realtime SCAPE, estimations of body shape parameters and pose parameters, needed for reconstruction, are decoupled. Intrinsic body shape is first precomputed for a given subject, by determining shape parameters with the aid of a body shape database. Subsequently, per-frame pose parameter estimation is performed by means of linear blending skinning (LBS); the problem is decomposed into separately finding skinning weights and transformations. The skinning weights are also determined offline from the body shape database, reducing online reconstruction to simply finding the transformations in LBS. Doing so is formulated as a linear variational problem; carefully designed constraints are used to impose temporal coherence and alleviate artifacts. Experiments demonstrate that our method can produce full-body mesh sequences with high fidelity.
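
    A minimal sketch of the generic linear blend skinning model referred to above (made-up vertices, weights, and bone transforms; not Realtime SCAPE itself): each deformed vertex is a weight-blended sum of bone transformations applied to its rest position.

      # Sketch: generic linear blend skinning of a tiny 3-vertex "mesh" with 2 bones.
      import numpy as np

      def lbs(rest_vertices, weights, transforms):
          """rest_vertices: (V,3); weights: (V,B); transforms: (B,4,4) per-bone."""
          V = rest_vertices.shape[0]
          homog = np.hstack([rest_vertices, np.ones((V, 1))])          # (V,4)
          per_bone = np.einsum('bij,vj->bvi', transforms, homog)       # (B,V,4)
          blended = np.einsum('vb,bvi->vi', weights, per_bone)         # (V,4)
          return blended[:, :3]

      rest = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [2.0, 0.0, 0.0]])
      w = np.array([[1.0, 0.0], [0.5, 0.5], [0.0, 1.0]])               # skinning weights
      T0 = np.eye(4)                                                    # bone 0 fixed
      T1 = np.eye(4); T1[:3, 3] = [0.0, 1.0, 0.0]                       # bone 1 translated up
      print(lbs(rest, w, np.stack([T0, T1])))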

  8. DART: a practical reconstruction algorithm for discrete tomography.

    PubMed

    Batenburg, Kees Joost; Sijbers, Jan

    2011-09-01

    In this paper, we present an iterative reconstruction algorithm for discrete tomography, called discrete algebraic reconstruction technique (DART). DART can be applied if the scanned object is known to consist of only a few different compositions, each corresponding to a constant gray value in the reconstruction. Prior knowledge of the gray values for each of the compositions is exploited to steer the current reconstruction towards a reconstruction that contains only these gray values. Based on experiments with both simulated CT data and experimental μCT data, it is shown that DART is capable of computing more accurate reconstructions from a small number of projection images, or from a small angular range, than alternative methods. It is also shown that DART can deal effectively with noisy projection data and that the algorithm is robust with respect to errors in the estimation of the gray values.
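
    The sketch below illustrates, in simplified form, the discretization step at the core of DART (toy gray values and image; the full algorithm alternates this with algebraic updates applied to the free boundary pixels only): pixels are snapped to the nearest known gray value and boundary pixels are marked as the ones left free in the next iteration.

      # Sketch: DART-style segmentation and boundary-pixel selection.
      import numpy as np

      gray_values = np.array([0.0, 0.5, 1.0])      # known compositions

      def segment(image):
          """Map every pixel to the nearest known gray value."""
          idx = np.argmin(np.abs(image[..., None] - gray_values), axis=-1)
          return gray_values[idx]

      def boundary_mask(segmented):
          """Pixels with a 4-neighbour of a different label stay free."""
          m = np.zeros_like(segmented, dtype=bool)
          m[:-1, :] |= segmented[:-1, :] != segmented[1:, :]
          m[1:, :]  |= segmented[1:, :] != segmented[:-1, :]
          m[:, :-1] |= segmented[:, :-1] != segmented[:, 1:]
          m[:, 1:]  |= segmented[:, 1:] != segmented[:, :-1]
          return m

      recon = np.clip(np.random.default_rng(7).normal(0.5, 0.3, size=(8, 8)), 0, 1)
      seg = segment(recon)
      print(boundary_mask(seg).sum(), "boundary pixels remain free")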

  9. Reconstructing metastatic seeding patterns of human cancers

    PubMed Central

    Reiter, Johannes G.; Makohon-Moore, Alvin P.; Gerold, Jeffrey M.; Bozic, Ivana; Chatterjee, Krishnendu; Iacobuzio-Donahue, Christine A.; Vogelstein, Bert; Nowak, Martin A.

    2017-01-01

    Reconstructing the evolutionary history of metastases is critical for understanding their basic biological principles and has profound clinical implications. Genome-wide sequencing data has enabled modern phylogenomic methods to accurately dissect subclones and their phylogenies from noisy and impure bulk tumour samples at unprecedented depth. However, existing methods are not designed to infer metastatic seeding patterns. Here we develop a tool, called Treeomics, to reconstruct the phylogeny of metastases and map subclones to their anatomic locations. Treeomics infers comprehensive seeding patterns for pancreatic, ovarian, and prostate cancers. Moreover, Treeomics correctly disambiguates true seeding patterns from sequencing artifacts; 7% of variants were misclassified by conventional statistical methods. These artifacts can skew phylogenies by creating illusory tumour heterogeneity among distinct samples. In silico benchmarking on simulated tumour phylogenies across a wide range of sample purities (15–95%) and sequencing depths (25–800×) demonstrates the accuracy of Treeomics compared with existing methods. PMID:28139641

  10. LCAMP: Location Constrained Approximate Message Passing for Compressed Sensing MRI

    PubMed Central

    Sung, Kyunghyun; Daniel, Bruce L; Hargreaves, Brian A

    2016-01-01

    Iterative thresholding methods have been extensively studied as faster alternatives to convex optimization methods for solving large-sized problems in compressed sensing. A novel iterative thresholding method called LCAMP (Location Constrained Approximate Message Passing) is presented for reducing computational complexity and improving reconstruction accuracy when a nonzero location (or sparse support) constraint can be obtained from view shared images. LCAMP modifies the existing approximate message passing algorithm by replacing the thresholding stage with a location constraint, which avoids adjusting regularization parameters or thresholding levels. This work is first compared with other conventional reconstruction methods using random 1D signals and then applied to dynamic contrast-enhanced breast MRI to demonstrate the excellent reconstruction accuracy (less than 2% absolute difference) and low computation time (5 - 10 seconds using Matlab) with highly undersampled 3D data (244 × 128 × 48; overall reduction factor = 10). PMID:23042658

  11. Design of an essentially non-oscillatory reconstruction procedure in finite-element type meshes

    NASA Technical Reports Server (NTRS)

    Abgrall, Remi

    1992-01-01

    An essentially non-oscillatory reconstruction for functions defined on finite element type meshes is designed. Two related problems are studied: the interpolation of possibly unsmooth multivariate functions on arbitrary meshes and the reconstruction of a function from its averages in the control volumes surrounding the nodes of the mesh. Concerning the first problem, the behavior of the highest coefficients of two polynomial interpolations of a function that may admit discontinuities across locally regular curves is studied: the Lagrange interpolation and an approximation such that the mean of the polynomial on any control volume is equal to that of the function to be approximated. This enables the best stencil for the approximation to be chosen. The choice of the smallest possible number of stencils is addressed. Concerning the reconstruction problem, two methods were studied: one based on an adaptation of the so-called reconstruction via deconvolution method to irregular meshes, and one that relies on the approximation of the mean as defined above. The first method is conservative up to a quadrature formula and the second one is exactly conservative. The two methods have the expected order of accuracy, but the second one is much less expensive than the first one. Some numerical examples are given which demonstrate the efficiency of the reconstruction.

  12. Background oriented schlieren in a density stratified fluid.

    PubMed

    Verso, Lilly; Liberzon, Alex

    2015-10-01

    Non-intrusive quantitative fluid density measurement methods are essential in stratified flow experiments. Digital imaging leads to synthetic schlieren methods in which the variations of the index of refraction are reconstructed computationally. In this study, an extension to one of these methods, called background oriented schlieren, is proposed. The extension enables an accurate reconstruction of the density field in stratified liquid experiments. Typically, the experiments are performed with the light source, background pattern, and camera positioned on opposite sides of a transparent vessel. The multimedia imaging through air-glass-water-glass-air leads to an additional aberration that destroys the reconstruction. A two-step calibration and image remapping transform are the key components that correct the images through the stratified media and provide non-intrusive full-field density measurements of transparent liquids.

  13. Three-dimensional focus of attention for iterative cone-beam micro-CT reconstruction

    NASA Astrophysics Data System (ADS)

    Benson, T. M.; Gregor, J.

    2006-09-01

    Three-dimensional iterative reconstruction of high-resolution, circular orbit cone-beam x-ray CT data is often considered impractical due to the demand for vast amounts of computer cycles and associated memory. In this paper, we show that the computational burden can be reduced by limiting the reconstruction to a small, well-defined portion of the image volume. We first discuss using the support region defined by the set of voxels covered by all of the projection views. We then present a data-driven preprocessing technique called focus of attention that heuristically separates both image and projection data into object and background before reconstruction, thereby further reducing the reconstruction region of interest. We present experimental results for both methods based on mouse data and a parallelized implementation of the SIRT algorithm. The computational savings associated with the support region are substantial. However, the results for focus of attention are even more impressive in that only about one quarter of the computer cycles and memory are needed compared with reconstruction of the entire image volume. The image quality is not compromised by either method.

  14. Compressed sensing and the reconstruction of ultrafast 2D NMR data: Principles and biomolecular applications.

    PubMed

    Shrot, Yoav; Frydman, Lucio

    2011-04-01

    A topic of active investigation in 2D NMR relates to the minimum number of scans required for acquiring this kind of spectra, particularly when these are dictated by sampling rather than by sensitivity considerations. Reductions in this minimum number of scans have been achieved by departing from the regular sampling used to monitor the indirect domain, and relying instead on non-uniform sampling and iterative reconstruction algorithms. Alternatively, so-called "ultrafast" methods can compress the minimum number of scans involved in 2D NMR all the way to a minimum number of one, by spatially encoding the indirect domain information and subsequently recovering it via oscillating field gradients. Given ultrafast NMR's simultaneous recording of the indirect- and direct-domain data, this experiment couples the spectral constraints of these orthogonal domains - often calling for the use of strong acquisition gradients and large filter widths to fulfill the desired bandwidth and resolution demands along all spectral dimensions. This study discusses a way to alleviate these demands, and thereby enhance the method's performance and applicability, by combining spatial encoding with iterative reconstruction approaches. Examples of these new principles are given based on the compressed-sensed reconstruction of biomolecular 2D HSQC ultrafast NMR data, an approach that we show enables a decrease of the gradient strengths demanded in this type of experiments by up to 80%.

  15. Reconstruction of audio waveforms from spike trains of artificial cochlea models

    PubMed Central

    Zai, Anja T.; Bhargava, Saurabh; Mesgarani, Nima; Liu, Shih-Chii

    2015-01-01

    Spiking cochlea models describe the analog processing and spike generation process within the biological cochlea. Reconstructing the audio input from the artificial cochlea spikes is therefore useful for understanding the fidelity of the information preserved in the spikes. The reconstruction process is challenging particularly for spikes from the mixed signal (analog/digital) integrated circuit (IC) cochleas because of multiple non-linearities in the model and the additional variance caused by random transistor mismatch. This work proposes an offline method for reconstructing the audio input from spike responses of both a particular spike-based hardware model called the AEREAR2 cochlea and an equivalent software cochlea model. This method was previously used to reconstruct the auditory stimulus based on the peri-stimulus histogram of spike responses recorded in the ferret auditory cortex. The reconstructed audio from the hardware cochlea is evaluated against an analogous software model using objective measures of speech quality and intelligibility; and further tested in a word recognition task. The reconstructed audio under low signal-to-noise (SNR) conditions (SNR < –5 dB) gives a better classification performance than the original SNR input in this word recognition task. PMID:26528113

  16. More IMPATIENT: A Gridding-Accelerated Toeplitz-based Strategy for Non-Cartesian High-Resolution 3D MRI on GPUs

    PubMed Central

    Gai, Jiading; Obeid, Nady; Holtrop, Joseph L.; Wu, Xiao-Long; Lam, Fan; Fu, Maojing; Haldar, Justin P.; Hwu, Wen-mei W.; Liang, Zhi-Pei; Sutton, Bradley P.

    2013-01-01

    Several recent methods have been proposed to obtain significant speed-ups in MRI image reconstruction by leveraging the computational power of GPUs. Previously, we implemented a GPU-based image reconstruction technique called the Illinois Massively Parallel Acquisition Toolkit for Image reconstruction with ENhanced Throughput in MRI (IMPATIENT MRI) for reconstructing data collected along arbitrary 3D trajectories. In this paper, we improve IMPATIENT by removing computational bottlenecks by using a gridding approach to accelerate the computation of various data structures needed by the previous routine. Further, we enhance the routine with capabilities for off-resonance correction and multi-sensor parallel imaging reconstruction. Through implementation of optimized gridding into our iterative reconstruction scheme, speed-ups of more than a factor of 200 are provided in the improved GPU implementation compared to the previous accelerated GPU code. PMID:23682203

  17. Fast alternating projection methods for constrained tomographic reconstruction

    PubMed Central

    Liu, Li; Han, Yongxin

    2017-01-01

    The alternating projection algorithms are easy to implement and effective for large-scale complex optimization problems, such as constrained reconstruction of X-ray computed tomography (CT). A typical method is to use projection onto convex sets (POCS) for data fidelity, nonnegative constraints combined with total variation (TV) minimization (so called TV-POCS) for sparse-view CT reconstruction. However, this type of method relies on empirically selected parameters for satisfactory reconstruction and is generally slow and lacks convergence analysis. In this work, we use a convex feasibility set approach to address the problems associated with TV-POCS and propose a framework using full sequential alternating projections or POCS (FS-POCS) to find the solution in the intersection of convex constraints of bounded TV function, bounded data fidelity error and non-negativity. The rationale behind FS-POCS is that the mathematically optimal solution of the constrained objective function may not be the physically optimal solution. The breakdown of constrained reconstruction into an intersection of several feasible sets can lead to faster convergence and better quantification of reconstruction parameters in a physically meaningful way, rather than in an empirical way by trial and error. In addition, for large-scale optimization problems, first-order methods are usually used. Not only is the condition for convergence of gradient-based methods derived, but also a primal-dual hybrid gradient (PDHG) method is used for fast convergence of bounded TV. The newly proposed FS-POCS is evaluated and compared with TV-POCS and another convex feasibility projection method (CPTV) using both digital phantom and pseudo-real CT data to show its superior performance on reconstruction speed, image quality and quantification. PMID:28253298
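
    A bare-bones sketch of the alternating-projection idea itself (toy linear system and constraints; this is generic POCS with Kaczmarz steps, not FS-POCS): cycling through projections onto the measurement hyperplanes and onto the nonnegativity set drives the iterate toward their intersection.

      # Sketch: alternating projections (Kaczmarz rows + nonnegativity) on a toy system.
      import numpy as np

      rng = np.random.default_rng(8)
      A = rng.normal(size=(60, 100))
      x_true = np.abs(rng.normal(size=100))
      b = A @ x_true

      x = np.zeros(100)
      row_norm2 = (A ** 2).sum(axis=1)
      for sweep in range(200):
          for i in range(60):                        # project onto {x : a_i . x = b_i}
              x += (b[i] - A[i] @ x) / row_norm2[i] * A[i]
          x = np.maximum(x, 0.0)                     # project onto {x >= 0}
      print(np.linalg.norm(A @ x - b))               # small residual in the intersection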

  18. Super-Resolution Image Reconstruction Applied to Medical Ultrasound

    NASA Astrophysics Data System (ADS)

    Ellis, Michael

    Ultrasound is the preferred imaging modality for many diagnostic applications due to its real-time image reconstruction and low cost. Nonetheless, conventional ultrasound is not used in many applications because of limited spatial resolution and soft tissue contrast. Most commercial ultrasound systems reconstruct images using a simple delay-and-sum architecture on receive, which is fast and robust but does not utilize all information available in the raw data. Recently, more sophisticated image reconstruction methods have been developed that make use of far more information in the raw data to improve resolution and contrast. One such method is the Time-Domain Optimized Near-Field Estimator (TONE), which employs a maximum a posteriori estimation to solve a highly underdetermined problem, given a well-defined system model. TONE has been shown to significantly improve both the contrast and resolution of ultrasound images when compared to conventional methods. However, TONE's lack of robustness to variations from the system model and extremely high computational cost hinder it from being readily adopted in clinical scanners. This dissertation aims to reduce the impact of TONE's shortcomings, transforming it from an academic construct to a clinically viable image reconstruction algorithm. By altering the system model from a collection of individual hypothetical scatterers to a collection of weighted, diffuse regions, dTONE is able to achieve much greater robustness to modeling errors. A method for efficient parallelization of dTONE is presented that reduces reconstruction time by more than an order of magnitude with little loss in image fidelity. An alternative reconstruction algorithm, called qTONE, is also developed and is able to reduce reconstruction times by another two orders of magnitude while simultaneously improving image contrast. Each of these methods for improving TONE is presented, its limitations are explored, and all are used in concert to reconstruct in vivo images of a human testicle. In all instances, the methods presented here outperform conventional image reconstruction methods by a significant margin. As TONE and its variants are general image reconstruction techniques, the theories and research presented here have the potential to significantly improve not only ultrasound's clinical utility, but that of other imaging modalities as well.

  19. Improving prediction accuracy of cooling load using EMD, PSR and RBFNN

    NASA Astrophysics Data System (ADS)

    Shen, Limin; Wen, Yuanmei; Li, Xiaohong

    2017-08-01

    To increase the accuracy of cooling load demand prediction, this work presents an EMD (empirical mode decomposition)-PSR (phase space reconstruction) based RBFNN (radial basis function neural network) method. First, the chaotic nature of the real cooling load demand is analyzed, and the non-stationary historical cooling load data are decomposed into several stationary intrinsic mode functions (IMFs) using EMD. Second, the RBFNN prediction accuracy of each IMF is compared, and an IMF combining scheme is proposed that combines the lower-frequency components (IMF4-IMF6) while keeping the higher-frequency components (IMF1, IMF2, IMF3) and the residual unchanged. Third, the phase space of each combined component is reconstructed separately, the highest-frequency component (IMF1) is processed by a differential method, and predictions are made with RBFNN in the reconstructed phase spaces. Real cooling load data from a centralized ice-storage cooling system in Guangzhou are used for simulation. The results show that the proposed hybrid method outperforms the traditional methods.
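
    Phase space reconstruction (PSR) by time-delay embedding, one step of the proposed pipeline, is straightforward to illustrate. A minimal sketch, with the embedding dimension and lag chosen arbitrarily for a synthetic series (the paper's EMD decomposition and RBFNN prediction are not shown):

```python
import numpy as np

def delay_embed(x, dim, tau):
    """Phase space reconstruction by time-delay embedding: row i is
    [x[i], x[i + tau], ..., x[i + (dim - 1) * tau]]."""
    x = np.asarray(x)
    n = len(x) - (dim - 1) * tau
    return np.column_stack([x[i * tau : i * tau + n] for i in range(dim)])

# usage: embed a cooling-load-like synthetic series in a 3-D phase space with lag 2
series = np.sin(np.linspace(0, 20, 200)) + 0.1 * np.random.default_rng(1).standard_normal(200)
X = delay_embed(series, dim=3, tau=2)
print(X.shape)  # (196, 3): each row is one reconstructed phase-space point
```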

  20. Nuclear norm-based 2-DPCA for extracting features from images.

    PubMed

    Zhang, Fanlong; Yang, Jian; Qian, Jianjun; Xu, Yong

    2015-10-01

    The 2-D principal component analysis (2-DPCA) is a widely used method for image feature extraction. However, it can be equivalently implemented via image-row-based principal component analysis. This paper presents a structured 2-D method called nuclear norm-based 2-DPCA (N-2-DPCA), which uses a nuclear norm-based reconstruction error criterion. The nuclear norm is a matrix norm, which can provide a structured 2-D characterization for the reconstruction error image. The reconstruction error criterion is minimized by converting the nuclear norm-based optimization problem into a series of F-norm-based optimization problems. In addition, N-2-DPCA is extended to a bilateral projection-based N-2-DPCA (N-B2-DPCA). The virtue of N-B2-DPCA over N-2-DPCA is that an image can be represented with fewer coefficients. N-2-DPCA and N-B2-DPCA are applied to face recognition and reconstruction and evaluated using the Extended Yale B, CMU PIE, FRGC, and AR databases. Experimental results demonstrate the effectiveness of the proposed methods.
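
    The criterion at the heart of N-2-DPCA, the nuclear norm of a reconstruction-error image, is easy to compute via the singular value decomposition. A small illustrative sketch contrasting it with the Frobenius (F-) norm on a hypothetical rank-k reconstruction (this shows only the criterion, not the N-2-DPCA optimization itself):

```python
import numpy as np

def nuclear_norm(E):
    """Nuclear norm of a matrix: the sum of its singular values."""
    return np.linalg.svd(E, compute_uv=False).sum()

def frobenius_norm(E):
    """F-norm of a matrix, for comparison with the nuclear norm criterion."""
    return np.sqrt((E * E).sum())

# usage: error image between an image A and a rank-5 approximation of it
rng = np.random.default_rng(0)
A = rng.random((64, 48))
U, s, Vt = np.linalg.svd(A, full_matrices=False)
A_k = (U[:, :5] * s[:5]) @ Vt[:5]      # rank-5 reconstruction of A
E = A - A_k                            # structured 2-D reconstruction-error image
print(nuclear_norm(E), frobenius_norm(E))
```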

  1. Efficient and robust method for simultaneous reconstruction of the temperature distribution and radiative properties in absorbing, emitting, and scattering media

    NASA Astrophysics Data System (ADS)

    Niu, Chun-Yang; Qi, Hong; Huang, Xing; Ruan, Li-Ming; Tan, He-Ping

    2016-11-01

    A rapid computational method called the generalized source multi-flux method (GSMFM) was developed to simulate outgoing radiative intensities in arbitrary directions at the boundary surfaces of absorbing, emitting, and scattering media, which served as input for the inverse analysis. A hybrid least-squares QR decomposition-stochastic particle swarm optimization (LSQR-SPSO) algorithm based on the forward GSMFM solution was developed to simultaneously reconstruct the multi-dimensional temperature distribution and the absorption and scattering coefficients of the cylindrical participating media. The retrieval results for axisymmetric and non-axisymmetric temperature distributions indicated that the temperature distribution and the scattering and absorption coefficients could be retrieved accurately using the LSQR-SPSO algorithm even with noisy data. Moreover, the influences of the extinction coefficient and scattering albedo on the accuracy of the estimation were investigated, and the results suggested that the reconstruction accuracy decreases as the extinction coefficient and the scattering albedo increase. Finally, a non-contact measurement platform for the flame temperature field based on light field imaging was set up to validate the reconstruction model experimentally.

  2. Three-dimensional electrical impedance tomography based on the complete electrode model.

    PubMed

    Vauhkonen, P J; Vauhkonen, M; Savolainen, T; Kaipio, J P

    1999-09-01

    In electrical impedance tomography an approximation for the internal resistivity distribution is computed based on the knowledge of the injected currents and measured voltages on the surface of the body. It is often assumed that the injected currents are confined to the two-dimensional (2-D) electrode plane and the reconstruction is based on 2-D assumptions. However, the currents spread out in three dimensions and, therefore, off-plane structures have a significant effect on the reconstructed images. In this paper we propose a finite element-based method for the reconstruction of three-dimensional resistivity distributions. The proposed method is based on the so-called complete electrode model that takes into account the presence of the electrodes and the contact impedances. Both the forward and the inverse problems are discussed and results from static and dynamic (difference) reconstructions with real measurement data are given. It is shown that in phantom experiments with accurate finite element computations it is possible to obtain static images that are comparable with difference images that are reconstructed from the same object with the empty (saline filled) tank as a reference.

  3. Reconstructing genealogies of serial samples under the assumption of a molecular clock using serial-sample UPGMA.

    PubMed

    Drummond, A; Rodrigo, A G

    2000-12-01

    Reconstruction of evolutionary relationships from noncontemporaneous molecular samples provides a new challenge for phylogenetic reconstruction methods. With recent biotechnological advances there has been an increase in molecular sequencing throughput, and the potential to obtain serial samples of sequences from populations, including rapidly evolving pathogens, is fast being realized. A new method called the serial-sample unweighted pair grouping method with arithmetic means (sUPGMA) is presented that reconstructs a genealogy or phylogeny of sequences sampled serially in time using a matrix of pairwise distances. The resulting tree depicts the terminal lineages of each sample ending at a different level consistent with the sample's temporal order. Since sUPGMA is a variant of UPGMA, it will perform best when sequences have evolved at a constant rate (i.e., according to a molecular clock). On simulated data, this new method performs better than standard cluster analysis under a variety of longitudinal sampling strategies. Serial-sample UPGMA is particularly useful for analysis of longitudinal samples of viruses and bacteria, as well as ancient DNA samples, with the minimal requirement that samples of sequences be ordered in time.

  4. Performance measurement of PSF modeling reconstruction (True X) on Siemens Biograph TruePoint TrueV PET/CT.

    PubMed

    Lee, Young Sub; Kim, Jin Su; Kim, Kyeong Min; Kang, Joo Hyun; Lim, Sang Moo; Kim, Hee-Joung

    2014-05-01

    The Siemens Biograph TruePoint TrueV (B-TPTV) positron emission tomography (PET) scanner performs 3D PET reconstruction using a system matrix with point spread function (PSF) modeling (called the True X reconstruction). PET resolution was dramatically improved with the True X method. In this study, we assessed the spatial resolution and image quality on a B-TPTV PET scanner. In addition, we assessed the feasibility of animal imaging with the B-TPTV PET and compared it with a microPET R4 scanner. Spatial resolution was measured at the center and at 8 cm offset from the center in the transverse plane with warm background activity. True X, ordered subset expectation maximization (OSEM) without PSF modeling, and filtered back-projection (FBP) reconstruction methods were used. Percent contrast (% contrast) and percent background variability (% BV) were assessed according to NEMA NU2-2007. The recovery coefficient (RC), non-uniformity, spill-over ratio (SOR), and PET imaging of the Micro Deluxe Phantom were assessed to compare the image quality of B-TPTV PET with that of the microPET R4. When True X reconstruction was used, spatial resolution was <3.65 mm with warm background activity. The % contrast and % BV with True X reconstruction were higher than those with the OSEM reconstruction algorithm without PSF modeling. In addition, the RC with True X reconstruction was higher than that with the FBP method and the OSEM without PSF modeling method on the microPET R4. The non-uniformity with True X reconstruction was higher than that with FBP and OSEM without PSF modeling on the microPET R4. The SOR with True X reconstruction was better than that with FBP or OSEM without PSF modeling on the microPET R4. This study assessed the performance of the True X reconstruction. Spatial resolution with True X reconstruction was improved by 45% and its % contrast was significantly improved compared to that of the conventional OSEM reconstruction without PSF modeling. The noise level was higher than that with the other reconstruction algorithms. Therefore, True X reconstruction should be used with caution when quantifying PET data.

  5. Higher order total variation regularization for EIT reconstruction.

    PubMed

    Gong, Bo; Schullcke, Benjamin; Krueger-Ziolek, Sabine; Zhang, Fan; Mueller-Lisse, Ullrich; Moeller, Knut

    2018-01-08

    Electrical impedance tomography (EIT) attempts to reveal the conductivity distribution of a domain based on the electrical boundary condition. This is an ill-posed inverse problem; its solution is very unstable. Total variation (TV) regularization is one of the techniques commonly employed to stabilize reconstructions. However, it is well known that TV regularization induces staircase effects, which are not realistic in clinical applications. To reduce such artifacts, modified TV regularization terms considering a higher order differential operator were developed in several previous studies. One of them is called total generalized variation (TGV) regularization. TGV regularization has been successfully applied in image processing in a regular grid context. In this study, we adapted TGV regularization to the finite element model (FEM) framework for EIT reconstruction. Reconstructions using simulation and clinical data were performed. First results indicate that, in comparison to TV regularization, TGV regularization promotes more realistic images. Graphical abstract: reconstructed conductivity changes along selected left and right vertical lines are plotted for the ground truth (GT), the total variation (TV) method, and the total generalized variation (TGV) method; reconstructed conductivity distributions from the GREIT algorithm are also shown.

  6. Soft x-ray holographic tomography for biological specimens

    NASA Astrophysics Data System (ADS)

    Gao, Hongyi; Chen, Jianwen; Xie, Honglan; Li, Ruxin; Xu, Zhizhan; Jiang, Shiping; Zhang, Yuxuan

    2003-10-01

    In this paper, we present some experimental results on X-ray holography and holographic tomography, and a new holographic tomography method called pre-amplified holographic tomography is proposed. Due to the shorter wavelength and the larger penetration depths, X-rays provide the potential of higher resolution in imaging techniques, and have the ability to image intact, living, hydrated cells without slicing, dehydration, chemical fixation or staining. Recently, using the X-ray source at the National Synchrotron Radiation Laboratory (NSRL) in Hefei, we successfully performed soft X-ray holography experiments on biological specimens. The specimen used in the experiments was garlic clove epidermis; we recorded its X-ray holograms and then reconstructed them by computer programs, and the features of the cell walls, the nuclei and some cytoplasm were clearly resolved. However, there still exist some problems in the realization of practical 3D microscopic imaging due to the near-unity refractive index of the matter. There is no X-ray optic with a sufficiently high numerical aperture to achieve a depth resolution comparable to the transverse resolution. On the other hand, computed tomography needs a record of hundreds of views of the test object at different angles for high resolution, because the number of views required for a densely packed object is equal to the object radius divided by the desired depth resolution. Clearly, this is impractical for a radiation-sensitive biological specimen. Moreover, the X-ray diffraction effect blurs the projection data, which badly degrades the resolution of the reconstructed image. In order to observe the 3D structure of biological specimens, McNulty proposed a new method for 3D imaging called "holographic tomography (HT)", in which several holograms of the specimen are recorded from various illumination directions and combined in the reconstruction step. This permits the specimen to be sampled over a wide range of spatial frequencies to improve the depth resolution. At NSRL, we performed soft X-ray holographic tomography experiments. The specimens were spider filaments, with PMMA as the recording medium. By 3D CT reconstruction of the projection data, the three-dimensional density distribution of the specimen was obtained. We also developed a new X-ray holographic tomography method called pre-amplified holographic tomography. The method permits digital real-time 3D reconstruction with high resolution and a simple and compact experimental setup.

  7. Dual-resolution image reconstruction for region-of-interest CT scan

    NASA Astrophysics Data System (ADS)

    Jin, S. O.; Shin, K. Y.; Yoo, S. K.; Kim, J. G.; Kim, K. H.; Huh, Y.; Lee, S. Y.; Kwon, O.-K.

    2014-07-01

    In an ordinary CT scan, the so-called full field-of-view (FFOV) scan, in which the x-ray beam span covers the whole section of the body, a large number of projections are necessary to reconstruct high-resolution images. However, excessive x-ray dose is a great concern in FFOV scans. Region-of-interest (ROI) scanning is a method to visualize the ROI at high resolution while reducing the x-ray dose. However, ROI scanning suffers from bright-band artifacts which may hamper CT-number accuracy. In this study, we propose an image reconstruction method to eliminate the band artifacts in the ROI scan. In addition to the ROI scan with a high sampling rate in the view direction, we acquire FFOV projection data with a much lower sampling rate. Then, we reconstruct images in the compressed sensing (CS) framework with dual resolutions, that is, high resolution in the ROI and low resolution outside the ROI. For the dual-resolution image reconstruction, we implemented the dual-CS reconstruction algorithm in which data fidelity and total variation (TV) terms were enforced twice in the framework of adaptive steepest descent projection onto convex sets (ASD-POCS). The proposed method has remarkably reduced the bright-band artifacts around the ROI boundary, and it has also effectively suppressed the streak artifacts over the entire image. We expect the proposed method to be useful for dual-resolution imaging with reduced radiation dose, artifacts and scan time.

  8. Hybrid x-space: a new approach for MPI reconstruction.

    PubMed

    Tateo, A; Iurino, A; Settanni, G; Andrisani, A; Stifanelli, P F; Larizza, P; Mazzia, F; Mininni, R M; Tangaro, S; Bellotti, R

    2016-06-07

    Magnetic particle imaging (MPI) is a new medical imaging technique capable of recovering the distribution of superparamagnetic particles from their measured induced signals. In the literature there are two main MPI reconstruction techniques: measurement-based (MB) and x-space (XS). The MB method is expensive because it requires a long calibration procedure as well as a reconstruction phase that can be numerically costly. On the other hand, the XS method is simpler than MB, but exact knowledge of the field free point (FFP) motion is essential for its implementation. Our simulation work focuses on the implementation of a new approach for MPI reconstruction: it is called hybrid x-space (HXS), representing a combination of the previous methods. Specifically, our approach is based on XS reconstruction because it requires the knowledge of the FFP position and velocity at each time instant. The difference with respect to the original XS formulation is how the FFP velocity is computed: we estimate it from the experimental measurements of the calibration scans, typical of the MB approach. Moreover, a compressive sensing technique is applied in order to reduce the calibration time, using a smaller number of sampling positions. Simulations highlight that the HXS and XS methods give similar results. Furthermore, an appropriate use of compressive sensing is crucial for obtaining a good balance between time reduction and reconstructed image quality. Our proposal is suitable for open-geometry configurations of human-size devices, where incidental factors could make the currents, the fields and the FFP trajectory irregular.

  9. Experimental investigations on airborne gravimetry based on compressed sensing.

    PubMed

    Yang, Yapeng; Wu, Meiping; Wang, Jinling; Zhang, Kaidong; Cao, Juliang; Cai, Shaokun

    2014-03-18

    Gravity surveys are an important research topic in geophysics and geodynamics. This paper investigates a method for high accuracy large scale gravity anomaly data reconstruction. Based on the airborne gravimetry technology, a flight test was carried out in China with the strap-down airborne gravimeter (SGA-WZ) developed by the Laboratory of Inertial Technology of the National University of Defense Technology. Taking into account the sparsity of airborne gravimetry by the discrete Fourier transform (DFT), this paper proposes a method for gravity anomaly data reconstruction using the theory of compressed sensing (CS). The gravity anomaly data reconstruction is an ill-posed inverse problem, which can be transformed into a sparse optimization problem. This paper uses the zero-norm as the objective function and presents a greedy algorithm called Orthogonal Matching Pursuit (OMP) to solve the corresponding minimization problem. The test results have revealed that the compressed sampling rate is approximately 14%, the standard deviation of the reconstruction error by OMP is 0.03 mGal and the signal-to-noise ratio (SNR) is 56.48 dB. In contrast, the standard deviation of the reconstruction error by the existing nearest-interpolation method (NIPM) is 0.15 mGal and the SNR is 42.29 dB. These results have shown that the OMP algorithm can reconstruct the gravity anomaly data with higher accuracy and fewer measurements.
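
    The greedy OMP solver referred to above is standard and compact. A minimal, generic sketch (random Gaussian measurements stand in for the airborne gravimetry setup; all sizes are illustrative):

```python
import numpy as np

def omp(Phi, y, k, tol=1e-10):
    """Orthogonal Matching Pursuit: greedily select up to k atoms of Phi that
    best explain y, re-fitting the coefficients by least squares at every step."""
    m, n = Phi.shape
    residual = y.copy()
    support = []
    x = np.zeros(n)
    for _ in range(k):
        # pick the atom most correlated with the current residual
        idx = int(np.argmax(np.abs(Phi.T @ residual)))
        if idx not in support:
            support.append(idx)
        coef, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
        residual = y - Phi[:, support] @ coef
        if np.linalg.norm(residual) < tol:
            break
    x[support] = coef
    return x

# usage: recover a 5-sparse vector from 40 random measurements
rng = np.random.default_rng(2)
Phi = rng.standard_normal((40, 128)) / np.sqrt(40)
x_true = np.zeros(128)
x_true[rng.choice(128, 5, replace=False)] = rng.standard_normal(5)
y = Phi @ x_true
x_hat = omp(Phi, y, k=5)
print(np.linalg.norm(x_hat - x_true))  # reconstruction error
```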

  10. Experimental Investigations on Airborne Gravimetry Based on Compressed Sensing

    PubMed Central

    Yang, Yapeng; Wu, Meiping; Wang, Jinling; Zhang, Kaidong; Cao, Juliang; Cai, Shaokun

    2014-01-01

    Gravity surveys are an important research topic in geophysics and geodynamics. This paper investigates a method for high accuracy large scale gravity anomaly data reconstruction. Based on the airborne gravimetry technology, a flight test was carried out in China with the strap-down airborne gravimeter (SGA-WZ) developed by the Laboratory of Inertial Technology of the National University of Defense Technology. Taking into account the sparsity of airborne gravimetry by the discrete Fourier transform (DFT), this paper proposes a method for gravity anomaly data reconstruction using the theory of compressed sensing (CS). The gravity anomaly data reconstruction is an ill-posed inverse problem, which can be transformed into a sparse optimization problem. This paper uses the zero-norm as the objective function and presents a greedy algorithm called Orthogonal Matching Pursuit (OMP) to solve the corresponding minimization problem. The test results have revealed that the compressed sampling rate is approximately 14%, the standard deviation of the reconstruction error by OMP is 0.03 mGal and the signal-to-noise ratio (SNR) is 56.48 dB. In contrast, the standard deviation of the reconstruction error by the existing nearest-interpolation method (NIPM) is 0.15 mGal and the SNR is 42.29 dB. These results have shown that the OMP algorithm can reconstruct the gravity anomaly data with higher accuracy and fewer measurements. PMID:24647125

  11. An EGO-like optimization framework for sensor placement optimization in modal analysis

    NASA Astrophysics Data System (ADS)

    Morlier, Joseph; Basile, Aniello; Chiplunkar, Ankit; Charlotte, Miguel

    2018-07-01

    In aircraft design, ground/flight vibration tests are conducted to extract the aircraft's modal parameters (natural frequencies, damping ratios and mode shapes), also known as the modal basis. The main problem in aircraft modal identification is the large number of sensors needed, which increases operational time and costs. The goal of this paper is to minimize the number of sensors by optimizing their locations in order to reconstruct a truncated modal basis of N mode shapes with a high level of reconstruction accuracy. There are several methods for solving sensor placement optimization (SPO) problems, but here an original approach is established based on an iterative process for mode-shape reconstruction through an adaptive Kriging metamodeling approach, so-called efficient global optimization (EGO)-SPO. The main idea in this publication is to solve an optimization problem where the sensor locations are variables and the objective function is defined by maximizing the trace of the so-called AutoMAC criterion. The results on a 2D wing demonstrate a reduction of sensors by 30% using our EGO-SPO strategy.
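
    The AutoMAC matrix that enters the objective is built from Modal Assurance Criterion values between mode shapes restricted to a candidate sensor set. A small sketch of that computation, with a random mode-shape matrix and an arbitrary candidate placement standing in for the real modal basis (the EGO/Kriging optimization loop is not shown):

```python
import numpy as np

def automac(Phi):
    """AutoMAC matrix of a mode-shape matrix Phi (rows: sensor DOFs, cols: modes).
    Entry (i, j) is the Modal Assurance Criterion between modes i and j."""
    G = Phi.T @ Phi                      # cross products between mode shapes
    d = np.diag(G)
    return (np.abs(G) ** 2) / np.outer(d, d)

# usage: evaluate a candidate sensor subset of a larger mode-shape matrix
rng = np.random.default_rng(3)
Phi_full = rng.standard_normal((200, 6))   # 200 candidate DOFs, 6 modes (synthetic)
sensors = [4, 17, 58, 99, 150, 180, 190]   # hypothetical candidate placement
M = automac(Phi_full[sensors, :])
print(np.round(M, 2))                      # diagonal is 1 by construction
```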

  12. Reconstruction and separation of vibratory field using structural holography

    NASA Astrophysics Data System (ADS)

    Chesnais, C.; Totaro, N.; Thomas, J.-H.; Guyader, J.-L.

    2017-02-01

    A method for reconstructing and separating vibratory fields on a plate-like structure is presented. The method, called "Structural Holography", is derived from classical Near-field Acoustic Holography (NAH) but operates in the vibratory domain. In this case, the plate displacement is measured on one-dimensional lines (the holograms) and used to reconstruct the entire two-dimensional displacement field. As a consequence, remote measurements on zones that are not directly accessible are possible with Structural Holography. Moreover, as it is based on the decomposition of the field into forward and backward waves, Structural Holography makes it possible to separate forces in the case of multi-source excitation. The theoretical background of the Structural Holography method is described first. Then, to illustrate the process and the possibilities of Structural Holography, the academic test case of an infinite plate excited by a few point forces is presented. With the principle of vibratory field separation, the displacement field produced by each point force is reconstructed separately. However, the displacement field is not always meaningful and some additional processing is needed, for example to localize the positions of the point forces. From the simple example of an infinite plate, a post-processing step based on the reconstruction of the structural intensity field is thus proposed. Finally, Structural Holography is generalized to finite plates and applied to real experimental measurements.

  13. Image-based reconstruction of three-dimensional myocardial infarct geometry for patient-specific modeling of cardiac electrophysiology

    PubMed Central

    Ukwatta, Eranga; Arevalo, Hermenegild; Rajchl, Martin; White, James; Pashakhanloo, Farhad; Prakosa, Adityo; Herzka, Daniel A.; McVeigh, Elliot; Lardo, Albert C.; Trayanova, Natalia A.; Vadakkumpadan, Fijoy

    2015-01-01

    Purpose: Accurate three-dimensional (3D) reconstruction of myocardial infarct geometry is crucial to patient-specific modeling of the heart aimed at providing therapeutic guidance in ischemic cardiomyopathy. However, myocardial infarct imaging is clinically performed using two-dimensional (2D) late-gadolinium enhanced cardiac magnetic resonance (LGE-CMR) techniques, and a method to build accurate 3D infarct reconstructions from the 2D LGE-CMR images has been lacking. The purpose of this study was to address this need. Methods: The authors developed a novel methodology to reconstruct 3D infarct geometry from segmented low-resolution (Lo-res) clinical LGE-CMR images. Their methodology employed the so-called logarithm of odds (LogOdds) function to implicitly represent the shape of the infarct in segmented image slices as LogOdds maps. These 2D maps were then interpolated into a 3D image, and the result transformed via the inverse of LogOdds to a binary image representing the 3D infarct geometry. To assess the efficacy of this method, the authors utilized 39 high-resolution (Hi-res) LGE-CMR images, including 36 in vivo acquisitions of human subjects with prior myocardial infarction and 3 ex vivo scans of canine hearts following coronary ligation to induce infarction. The infarct was manually segmented by trained experts in each slice of the Hi-res images, and the segmented data were downsampled to typical clinical resolution. The proposed method was then used to reconstruct 3D infarct geometry from the downsampled images, and the resulting reconstructions were compared with the manually segmented data. The method was extensively evaluated using metrics based on geometry as well as results of electrophysiological simulations of cardiac sinus rhythm and ventricular tachycardia in individual hearts. Several alternative reconstruction techniques were also implemented and compared with the proposed method. Results: The accuracy of the LogOdds method in reconstructing 3D infarct geometry, as measured by the Dice similarity coefficient, was 82.10% ± 6.58%, a significantly higher value than those of the alternative reconstruction methods. Among outcomes of electrophysiological simulations with infarct reconstructions generated by various methods, the simulation results corresponding to the LogOdds method showed the smallest deviation from those corresponding to the manual reconstructions, as measured by metrics based on both activation maps and pseudo-ECGs. Conclusions: The authors have developed a novel method for reconstructing 3D infarct geometry from segmented slices of Lo-res clinical 2D LGE-CMR images. This method outperformed alternative approaches in reproducing expert manual 3D reconstructions and in electrophysiological simulations. PMID:26233186
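
    The LogOdds idea, mapping segmentations to the logit domain, interpolating there, and mapping back, can be illustrated with a simplified slice-to-slice linear interpolation. The sketch below is only a schematic of that transform chain, not the authors' full 3D reconstruction pipeline; the masks and parameters are hypothetical.

```python
import numpy as np

def logodds(p, eps=1e-6):
    """Map a probability-like segmentation map to the LogOdds (logit) domain."""
    p = np.clip(p, eps, 1.0 - eps)
    return np.log(p / (1.0 - p))

def inverse_logodds(l):
    """Inverse LogOdds (logistic function), back to probabilities."""
    return 1.0 / (1.0 + np.exp(-l))

def interp_slices(slice_a, slice_b, n_between):
    """Linearly interpolate two 2-D LogOdds maps along the slice direction."""
    w = np.linspace(0.0, 1.0, n_between + 2)[1:-1]
    return [(1 - wi) * slice_a + wi * slice_b for wi in w]

# usage: interpolate between two binary infarct-like masks via the LogOdds domain
a = np.zeros((32, 32)); a[10:20, 10:20] = 1.0
b = np.zeros((32, 32)); b[12:24, 12:24] = 1.0
mid_maps = interp_slices(logodds(a), logodds(b), n_between=3)
masks = [inverse_logodds(m) > 0.5 for m in mid_maps]   # back to binary geometry
print([int(m.sum()) for m in masks])
```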

  14. Image-based reconstruction of three-dimensional myocardial infarct geometry for patient-specific modeling of cardiac electrophysiology

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ukwatta, Eranga, E-mail: eukwatt1@jhu.edu; Arevalo, Hermenegild; Pashakhanloo, Farhad

    Purpose: Accurate three-dimensional (3D) reconstruction of myocardial infarct geometry is crucial to patient-specific modeling of the heart aimed at providing therapeutic guidance in ischemic cardiomyopathy. However, myocardial infarct imaging is clinically performed using two-dimensional (2D) late-gadolinium enhanced cardiac magnetic resonance (LGE-CMR) techniques, and a method to build accurate 3D infarct reconstructions from the 2D LGE-CMR images has been lacking. The purpose of this study was to address this need. Methods: The authors developed a novel methodology to reconstruct 3D infarct geometry from segmented low-resolution (Lo-res) clinical LGE-CMR images. Their methodology employed the so-called logarithm of odds (LogOdds) function to implicitly represent the shape of the infarct in segmented image slices as LogOdds maps. These 2D maps were then interpolated into a 3D image, and the result transformed via the inverse of LogOdds to a binary image representing the 3D infarct geometry. To assess the efficacy of this method, the authors utilized 39 high-resolution (Hi-res) LGE-CMR images, including 36 in vivo acquisitions of human subjects with prior myocardial infarction and 3 ex vivo scans of canine hearts following coronary ligation to induce infarction. The infarct was manually segmented by trained experts in each slice of the Hi-res images, and the segmented data were downsampled to typical clinical resolution. The proposed method was then used to reconstruct 3D infarct geometry from the downsampled images, and the resulting reconstructions were compared with the manually segmented data. The method was extensively evaluated using metrics based on geometry as well as results of electrophysiological simulations of cardiac sinus rhythm and ventricular tachycardia in individual hearts. Several alternative reconstruction techniques were also implemented and compared with the proposed method. Results: The accuracy of the LogOdds method in reconstructing 3D infarct geometry, as measured by the Dice similarity coefficient, was 82.10% ± 6.58%, a significantly higher value than those of the alternative reconstruction methods. Among outcomes of electrophysiological simulations with infarct reconstructions generated by various methods, the simulation results corresponding to the LogOdds method showed the smallest deviation from those corresponding to the manual reconstructions, as measured by metrics based on both activation maps and pseudo-ECGs. Conclusions: The authors have developed a novel method for reconstructing 3D infarct geometry from segmented slices of Lo-res clinical 2D LGE-CMR images. This method outperformed alternative approaches in reproducing expert manual 3D reconstructions and in electrophysiological simulations.

  15. Variance based joint sparsity reconstruction of synthetic aperture radar data for speckle reduction

    NASA Astrophysics Data System (ADS)

    Scarnati, Theresa; Gelb, Anne

    2018-04-01

    In observing multiple synthetic aperture radar (SAR) images of the same scene, it is apparent that the brightness distributions of the images are not smooth, but rather composed of complicated granular patterns of bright and dark spots. Further, these brightness distributions vary from image to image. This salt-and-pepper-like feature of SAR images, called speckle, reduces the contrast in the images and negatively affects texture-based image analysis. This investigation uses the variance-based joint sparsity reconstruction method for forming SAR images from the multiple SAR images. In addition to reducing speckle, the method has the advantage of being non-parametric, and can therefore be used in a variety of autonomous applications. Numerical examples include reconstructions of both simulated phase history data that result in speckled images as well as the images from the MSTAR T-72 database.

  16. Shrink-wrapped isosurface from cross sectional images

    PubMed Central

    Choi, Y. K.; Hahn, J. K.

    2010-01-01

    Summary This paper addresses a new surface reconstruction scheme for approximating the isosurface from a set of tomographic cross-sectional images. Unlike the Marching Cubes (MC) algorithm, our method does not extract the iso-density surface (isosurface) directly from the voxel data but calculates the iso-density points (isopoints) first. After building a coarse initial mesh approximating the ideal isosurface by the cell-boundary representation, it metamorphoses the mesh into the final isosurface by a relaxation scheme called the shrink-wrapping process. Compared with the MC algorithm, our method is robust and does not create any cracks on the surface. Furthermore, since it is possible to utilize many additional isopoints during the surface reconstruction process by extending the adjacency definition, the resulting surface can theoretically be of better quality than that of the MC algorithm. Experiments show it to be very robust and efficient for isosurface reconstruction from cross-sectional images. PMID:20703361

  17. Cell-centered high-order hyperbolic finite volume method for diffusion equation on unstructured grids

    NASA Astrophysics Data System (ADS)

    Lee, Euntaek; Ahn, Hyung Taek; Luo, Hong

    2018-02-01

    We apply a hyperbolic cell-centered finite volume method to solve a steady diffusion equation on unstructured meshes. This method, originally proposed by Nishikawa using a node-centered finite volume method, reformulates the elliptic nature of viscous fluxes into a set of augmented equations that makes the entire system hyperbolic. We introduce an efficient and accurate solution strategy for the cell-centered finite volume method. To obtain high-order accuracy for both solution and gradient variables, we use a successive-order solution reconstruction: constant, linear, and quadratic (k-exact) reconstruction with an efficient reconstruction stencil, a so-called wrapping stencil. By virtue of the cell-centered scheme, the source term evaluation was greatly simplified regardless of the solution order. For uniform schemes, we obtain the same order of accuracy, i.e., first, second, and third order, for both the solution and its gradient variables. For hybrid schemes, recycling the gradient variable information for solution variable reconstruction makes one additional order of accuracy, i.e., second, third, and fourth order, possible for the solution variable with less computational work than uniform schemes require. In general, the hyperbolic method can be an effective solution technique for diffusion problems, but instability is also observed for discontinuous diffusion coefficient cases, which calls for further investigation of monotonicity-preserving hyperbolic diffusion methods.

  18. Split Bregman's optimization method for image construction in compressive sensing

    NASA Astrophysics Data System (ADS)

    Skinner, D.; Foo, S.; Meyer-Bäse, A.

    2014-05-01

    The theory of compressive sampling (CS) was reintroduced by Candes, Romberg and Tao, and D. Donoho in 2006. Using a priori knowledge that a signal is sparse, it has been mathematically proven that CS can defy the Nyquist sampling theorem. Theoretically, reconstruction of a CS image relies on minimization and optimization techniques to solve this complex, almost NP-complete problem. There are many paths to consider when compressing and reconstructing an image, but these methods have remained untested and unclear on natural images, such as underwater sonar images. The goal of this research is to perfectly reconstruct the original sonar image from a sparse signal while maintaining pertinent information, such as mine-like objects, in side-scan sonar (SSS) images. Goldstein and Osher have shown how to reconstruct the original image through an iterative method called Split Bregman iteration. This method "decouples" the energies using portions of the energy from both the ℓ1 and ℓ2 norms. Once the energies are split, Bregman iteration is used to solve the unconstrained optimization problem by iteratively solving the resulting subproblems. The faster these two steps can be solved, the faster the overall method becomes. While the majority of CS research is still focused on the medical field, this paper demonstrates the effectiveness of the Split Bregman method on sonar images.
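
    As an illustration of the Split Bregman idea of splitting the ℓ1 (here TV) and ℓ2 energies and coupling them with Bregman variables, the following is a minimal anisotropic TV denoising sketch with the quadratic subproblem solved in the Fourier domain under periodic boundary conditions. It is not the sonar reconstruction described in the paper (there is no measurement operator or sparse sampling), and the parameter values and toy image are arbitrary.

```python
import numpy as np

def shrink(v, t):
    """Soft-thresholding: the closed-form l1 proximal step used by Split Bregman."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def split_bregman_tv(f, mu=10.0, lam=5.0, n_iters=60):
    """Anisotropic TV denoising, min_u (mu/2)||u - f||^2 + |Dx u| + |Dy u|,
    with the u-subproblem solved exactly in the Fourier domain (periodic BCs)."""
    ny, nx = f.shape
    dx  = lambda u: np.roll(u, -1, axis=1) - u      # forward differences
    dy  = lambda u: np.roll(u, -1, axis=0) - u
    dxt = lambda u: np.roll(u, 1, axis=1) - u       # their adjoints
    dyt = lambda u: np.roll(u, 1, axis=0) - u
    # Fourier symbol of mu*I + lam*(Dx^T Dx + Dy^T Dy)
    wx = 2.0 - 2.0 * np.cos(2.0 * np.pi * np.arange(nx) / nx)
    wy = 2.0 - 2.0 * np.cos(2.0 * np.pi * np.arange(ny) / ny)
    denom = mu + lam * (wx[None, :] + wy[:, None])
    u = f.copy()
    d_x = np.zeros_like(f); b_x = np.zeros_like(f)
    d_y = np.zeros_like(f); b_y = np.zeros_like(f)
    for _ in range(n_iters):
        # l2 (quadratic) subproblem for u
        rhs = mu * f + lam * (dxt(d_x - b_x) + dyt(d_y - b_y))
        u = np.real(np.fft.ifft2(np.fft.fft2(rhs) / denom))
        # l1 subproblems for the split gradient variables
        d_x = shrink(dx(u) + b_x, 1.0 / lam)
        d_y = shrink(dy(u) + b_y, 1.0 / lam)
        # Bregman updates coupling the two energies
        b_x = b_x + dx(u) - d_x
        b_y = b_y + dy(u) - d_y
    return u

# usage: denoise a noisy square
rng = np.random.default_rng(0)
clean = np.zeros((64, 64)); clean[16:48, 16:48] = 1.0
noisy = clean + 0.2 * rng.standard_normal(clean.shape)
denoised = split_bregman_tv(noisy)
print(np.abs(denoised - clean).mean())
```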

  19. 3D imaging of nanomaterials by discrete tomography.

    PubMed

    Batenburg, K J; Bals, S; Sijbers, J; Kübel, C; Midgley, P A; Hernandez, J C; Kaiser, U; Encina, E R; Coronado, E A; Van Tendeloo, G

    2009-05-01

    The field of discrete tomography focuses on the reconstruction of samples that consist of only a few different materials. Ideally, a three-dimensional (3D) reconstruction of such a sample should contain only one grey level for each of the compositions in the sample. By exploiting this property in the reconstruction algorithm, either the quality of the reconstruction can be improved significantly, or the number of required projection images can be reduced. The discrete reconstruction typically contains fewer artifacts and does not have to be segmented, as it already contains one grey level for each composition. Recently, a new algorithm, called discrete algebraic reconstruction technique (DART), has been proposed that can be used effectively on experimental electron tomography datasets. In this paper, we propose discrete tomography as a general reconstruction method for electron tomography in materials science. We describe the basic principles of DART and show that it can be applied successfully to three different types of samples, consisting of embedded ErSi(2) nanocrystals, a carbon nanotube grown from a catalyst particle and a single gold nanoparticle, respectively.

  20. Numerical reconstruction of unknown Robin inclusions inside a heat conductor by a non-iterative method

    NASA Astrophysics Data System (ADS)

    Nakamura, Gen; Wang, Haibing

    2017-05-01

    Consider the problem of reconstructing unknown Robin inclusions inside a heat conductor from boundary measurements. This problem arises from active thermography and is formulated as an inverse boundary value problem for the heat equation. In our previous works, we proposed a sampling-type method for reconstructing the boundary of the Robin inclusion and gave its rigorous mathematical justification. This method is non-iterative and based on the characterization of the solution to the so-called Neumann-to-Dirichlet map gap equation. In this paper, we give a further investigation of the reconstruction method from both the theoretical and numerical points of view. First, we clarify the solvability of the Neumann-to-Dirichlet map gap equation and establish a relation of its solution to the Green function associated with an initial-boundary value problem for the heat equation inside the Robin inclusion. This naturally provides a way of computing this Green function from the Neumann-to-Dirichlet map and explains what the input for the linear sampling method is. Assuming that the Neumann-to-Dirichlet map gap equation has a unique solution, we also show the convergence of our method for noisy measurements. Second, we give the numerical implementation of the reconstruction method for two-dimensional spatial domains. The measurements for our inverse problem are simulated by solving the forward problem via the boundary integral equation method. Numerical results are presented to illustrate the efficiency and stability of the proposed method. By using a finite sequence of transient inputs over a time interval, we also propose a new sampling method over the time interval that uses a single measurement, which is more likely to be practical.

  1. Constructing a cosmological model-independent Hubble diagram of type Ia supernovae with cosmic chronometers

    NASA Astrophysics Data System (ADS)

    Li, Zhengxiang; Gonzalez, J. E.; Yu, Hongwei; Zhu, Zong-Hong; Alcaniz, J. S.

    2016-02-01

    We apply two methods, i.e., the Gaussian processes and the nonparametric smoothing procedure, to reconstruct the Hubble parameter H (z ) as a function of redshift from 15 measurements of the expansion rate obtained from age estimates of passively evolving galaxies. These reconstructions enable us to derive the luminosity distance to a certain redshift z , calibrate the light-curve fitting parameters accounting for the (unknown) intrinsic magnitude of type Ia supernova (SNe Ia), and construct cosmological model-independent Hubble diagrams of SNe Ia. In order to test the compatibility between the reconstructed functions of H (z ), we perform a statistical analysis considering the latest SNe Ia sample, the so-called joint light-curve compilation. We find that, for the Gaussian processes, the reconstructed functions of Hubble parameter versus redshift, and thus the following analysis on SNe Ia calibrations and cosmological implications, are sensitive to prior mean functions. However, for the nonparametric smoothing method, the reconstructed functions are not dependent on initial guess models, and consistently require high values of H0, which are in excellent agreement with recent measurements of this quantity from Cepheids and other local distance indicators.
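
    A Gaussian-process reconstruction of H(z) of the kind described can be sketched with an off-the-shelf GP regressor. The data values below are illustrative stand-ins, not the 15 measurements used in the paper, and the zero prior mean and RBF kernel are just one possible choice, whose influence on the result is exactly the sensitivity the abstract highlights.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel as C

# hypothetical cosmic-chronometer-style data: redshift, H(z) in km/s/Mpc, 1-sigma errors
z = np.array([0.09, 0.17, 0.27, 0.40, 0.48, 0.68, 0.88, 1.04, 1.30, 1.75])
Hz = np.array([69.0, 83.0, 77.0, 95.0, 97.0, 92.0, 90.0, 154.0, 168.0, 202.0])
sigma = np.array([12.0, 8.0, 14.0, 17.0, 62.0, 8.0, 40.0, 20.0, 17.0, 40.0])

# RBF kernel with a free amplitude; per-point noise enters through alpha
kernel = C(1e4, (1e2, 1e6)) * RBF(length_scale=1.0, length_scale_bounds=(0.1, 10.0))
gp = GaussianProcessRegressor(kernel=kernel, alpha=sigma**2)
gp.fit(z.reshape(-1, 1), Hz)

z_grid = np.linspace(0.0, 2.0, 50).reshape(-1, 1)
H_rec, H_std = gp.predict(z_grid, return_std=True)   # reconstructed H(z) and its uncertainty
print(H_rec[0], H_std[0])                            # value extrapolated to z = 0 (an H0-like estimate)
```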

  2. Task Performance with List-Mode Data

    NASA Astrophysics Data System (ADS)

    Caucci, Luca

    This dissertation investigates the application of list-mode data to detection, estimation, and image reconstruction problems, with an emphasis on emission tomography in medical imaging. We begin by introducing a theoretical framework for list-mode data and we use it to define two observers that operate on list-mode data. These observers are applied to the problem of detecting a signal (known in shape and location) buried in a random lumpy background. We then consider maximum-likelihood methods for the estimation of numerical parameters from list-mode data, and we characterize the performance of these estimators via the so-called Fisher information matrix. Reconstruction from PET list-mode data is then considered. In a process we called "double maximum-likelihood" reconstruction, we consider a simple PET imaging system and we use maximum-likelihood methods to first estimate a parameter vector for each pair of gamma-ray photons that is detected by the hardware. The collection of these parameter vectors forms a list, which is then fed to another maximum-likelihood algorithm for volumetric reconstruction over a grid of voxels. Efficient parallel implementation of the algorithms discussed above is then presented. In this work, we take advantage of two low-cost, mass-produced computing platforms that have recently appeared on the market, and we provide some details on implementing our algorithms on these devices. We conclude this dissertation work by elaborating on a possible application of list-mode data to X-ray digital mammography. We argue that today's CMOS detectors and computing platforms have become fast enough to make X-ray digital mammography list-mode data acquisition and processing feasible.

  3. Breast Reconstruction After Mastectomy

    MedlinePlus

    ... reconstruct the breast? In autologous tissue reconstruction, a piece of tissue containing skin, fat, blood vessels, and ... body and used to rebuild the breast. This piece of tissue is called a flap. Different sites ...

  4. Combining multi-atlas segmentation with brain surface estimation

    NASA Astrophysics Data System (ADS)

    Huo, Yuankai; Carass, Aaron; Resnick, Susan M.; Pham, Dzung L.; Prince, Jerry L.; Landman, Bennett A.

    2016-03-01

    Whole brain segmentation (with comprehensive cortical and subcortical labels) and cortical surface reconstruction are two essential techniques for investigating the human brain. The two tasks are typically conducted independently, however, which leads to spatial inconsistencies and hinders further integrated cortical analyses. To obtain self-consistent whole brain segmentations and surfaces, FreeSurfer segregates the subcortical and cortical segmentations before and after the cortical surface reconstruction. However, this "segmentation to surface to parcellation" strategy has shown limitation in various situations. In this work, we propose a novel "multi-atlas segmentation to surface" method called Multi-atlas CRUISE (MaCRUISE), which achieves self-consistent whole brain segmentations and cortical surfaces by combining multi-atlas segmentation with the cortical reconstruction method CRUISE. To our knowledge, this is the first work that achieves the reliability of state-of-the-art multi-atlas segmentation and labeling methods together with accurate and consistent cortical surface reconstruction. Compared with previous methods, MaCRUISE has three features: (1) MaCRUISE obtains 132 cortical/subcortical labels simultaneously from a single multi-atlas segmentation before reconstructing volume consistent surfaces; (2) Fuzzy tissue memberships are combined with multi-atlas segmentations to address partial volume effects; (3) MaCRUISE reconstructs topologically consistent cortical surfaces by using the sulci locations from multi-atlas segmentation. Two data sets, one consisting of five subjects with expertly traced landmarks and the other consisting of 100 volumes from elderly subjects are used for validation. Compared with CRUISE, MaCRUISE achieves self-consistent whole brain segmentation and cortical reconstruction without compromising on surface accuracy. MaCRUISE is comparably accurate to FreeSurfer while achieving greater robustness across an elderly population.

  5. Combining Multi-atlas Segmentation with Brain Surface Estimation.

    PubMed

    Huo, Yuankai; Carass, Aaron; Resnick, Susan M; Pham, Dzung L; Prince, Jerry L; Landman, Bennett A

    2016-02-27

    Whole brain segmentation (with comprehensive cortical and subcortical labels) and cortical surface reconstruction are two essential techniques for investigating the human brain. The two tasks are typically conducted independently, however, which leads to spatial inconsistencies and hinders further integrated cortical analyses. To obtain self-consistent whole brain segmentations and surfaces, FreeSurfer segregates the subcortical and cortical segmentations before and after the cortical surface reconstruction. However, this "segmentation to surface to parcellation" strategy has shown limitations in various situations. In this work, we propose a novel "multi-atlas segmentation to surface" method called Multi-atlas CRUISE (MaCRUISE), which achieves self-consistent whole brain segmentations and cortical surfaces by combining multi-atlas segmentation with the cortical reconstruction method CRUISE. To our knowledge, this is the first work that achieves the reliability of state-of-the-art multi-atlas segmentation and labeling methods together with accurate and consistent cortical surface reconstruction. Compared with previous methods, MaCRUISE has three features: (1) MaCRUISE obtains 132 cortical/subcortical labels simultaneously from a single multi-atlas segmentation before reconstructing volume consistent surfaces; (2) Fuzzy tissue memberships are combined with multi-atlas segmentations to address partial volume effects; (3) MaCRUISE reconstructs topologically consistent cortical surfaces by using the sulci locations from multi-atlas segmentation. Two data sets, one consisting of five subjects with expertly traced landmarks and the other consisting of 100 volumes from elderly subjects are used for validation. Compared with CRUISE, MaCRUISE achieves self-consistent whole brain segmentation and cortical reconstruction without compromising on surface accuracy. MaCRUISE is comparably accurate to FreeSurfer while achieving greater robustness across an elderly population.

  6. Accelerated gradient based diffuse optical tomographic image reconstruction.

    PubMed

    Biswas, Samir Kumar; Rajan, K; Vasu, R M

    2011-01-01

    We present fast reconstruction of the interior optical parameter distribution of a tissue and a tissue-mimicking phantom from boundary measurement data in diffuse optical tomography (DOT), using a new approach called Broyden-based model iterative image reconstruction (BMOBIIR) and adjoint Broyden-based MOBIIR (ABMOBIIR). DOT is a nonlinear and ill-posed inverse problem. The Newton-based MOBIIR algorithm, which is generally used, requires repeated evaluation of the Jacobian, which consumes the bulk of the computation time for reconstruction. In this study, we propose a Broyden-approach-based accelerated scheme for Jacobian computation, combined with a conjugate gradient scheme (CGS) for fast reconstruction. The method makes explicit use of secant and adjoint information that can be obtained from the forward solution of the diffusion equation. This approach reduces the computational time many-fold by approximating the system Jacobian successively through low-rank updates. Simulation studies have been carried out with single as well as multiple inhomogeneities. The algorithms are validated using an experimental study carried out on pork tissue with fat acting as an inhomogeneity. The results obtained through the proposed BMOBIIR and ABMOBIIR approaches are compared with those of the Newton-based MOBIIR algorithm. The mean squared error and execution time are used as metrics for comparing the results of reconstruction. We have shown through experimental and simulation studies that the Broyden-based MOBIIR and adjoint-Broyden-based methods are capable of reconstructing single as well as multiple inhomogeneities in tissue and a tissue-mimicking phantom. The Broyden MOBIIR and adjoint Broyden MOBIIR methods are computationally simple and result in much faster implementations because they avoid direct evaluation of the Jacobian. The image reconstructions have been carried out with different initial values using the Newton, Broyden, and adjoint Broyden approaches. These algorithms work well when the initial guess is close to the true solution. However, when the initial guess is far from the true solution, Newton-based MOBIIR gives better reconstructed images. The proposed methods are found to be stable with noisy measurement data.
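
    The key ingredient of the Broyden-based acceleration is the rank-one secant update of the Jacobian approximation, which avoids recomputing the full Jacobian at every iteration. A generic sketch on a toy two-parameter forward model (not the diffuse optical tomography forward solver of the paper):

```python
import numpy as np

def broyden_update(J, dx, df):
    """'Good' Broyden rank-one update: adjust the Jacobian approximation J so
    that it maps the last parameter step dx to the observed change df in the
    forward data, instead of recomputing the full Jacobian."""
    dx = dx.reshape(-1, 1)
    df = df.reshape(-1, 1)
    return J + (df - J @ dx) @ dx.T / float(dx.T @ dx)

# usage: track the Jacobian of a toy nonlinear forward model F(x)
F = lambda x: np.array([x[0] ** 2 + x[1], np.sin(x[0]) + 3.0 * x[1]])
x0 = np.array([1.0, 2.0]); x1 = np.array([1.1, 1.9])
J0 = np.array([[2.0 * x0[0], 1.0], [np.cos(x0[0]), 3.0]])   # exact Jacobian at x0
J1 = broyden_update(J0, x1 - x0, F(x1) - F(x0))             # cheap rank-one approximation near x1
print(J1)
```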

  7. Reduction of Metal Artifact in Single Photon-Counting Computed Tomography by Spectral-Driven Iterative Reconstruction Technique

    PubMed Central

    Nasirudin, Radin A.; Mei, Kai; Panchev, Petar; Fehringer, Andreas; Pfeiffer, Franz; Rummeny, Ernst J.; Fiebich, Martin; Noël, Peter B.

    2015-01-01

    Purpose The exciting prospect of Spectral CT (SCT) using photon-counting detectors (PCD) will lead to new techniques in computed tomography (CT) that take advantage of the additional spectral information provided. We introduce a method to reduce metal artifact in X-ray tomography by incorporating knowledge obtained from SCT into a statistical iterative reconstruction scheme. We call our method Spectral-driven Iterative Reconstruction (SPIR). Method The proposed algorithm consists of two main components: material decomposition and penalized maximum likelihood iterative reconstruction. In this study, the spectral data acquisitions with an energy-resolving PCD were simulated using a Monte-Carlo simulator based on EGSnrc C++ class library. A jaw phantom with a dental implant made of gold was used as an object in this study. A total of three dental implant shapes were simulated separately to test the influence of prior knowledge on the overall performance of the algorithm. The generated projection data was first decomposed into three basis functions: photoelectric absorption, Compton scattering and attenuation of gold. A pseudo-monochromatic sinogram was calculated and used as input in the reconstruction, while the spatial information of the gold implant was used as a prior. The results from the algorithm were assessed and benchmarked with state-of-the-art reconstruction methods. Results Decomposition results illustrate that gold implant of any shape can be distinguished from other components of the phantom. Additionally, the result from the penalized maximum likelihood iterative reconstruction shows that artifacts are significantly reduced in SPIR reconstructed slices in comparison to other known techniques, while at the same time details around the implant are preserved. Quantitatively, the SPIR algorithm best reflects the true attenuation value in comparison to other algorithms. Conclusion It is demonstrated that the combination of the additional information from Spectral CT and statistical reconstruction can significantly improve image quality, especially streaking artifacts caused by the presence of materials with high atomic numbers. PMID:25955019

  8. Compressive sensing of electrocardiogram signals by promoting sparsity on the second-order difference and by using dictionary learning.

    PubMed

    Pant, Jeevan K; Krishnan, Sridhar

    2014-04-01

    A new algorithm for the reconstruction of electrocardiogram (ECG) signals and a dictionary learning algorithm for the enhancement of its reconstruction performance for a class of signals are proposed. The signal reconstruction algorithm is based on minimizing the lp pseudo-norm of the second-order difference, called the lp(2d) pseudo-norm, of the signal. The optimization involved is carried out using a sequential conjugate-gradient algorithm. The dictionary learning algorithm uses an iterative procedure wherein signal reconstruction and dictionary update steps are repeated until a convergence criterion is satisfied. The signal reconstruction step is implemented by using the proposed signal reconstruction algorithm and the dictionary update step is implemented by using the linear least-squares method. Extensive simulation results demonstrate that the proposed algorithm yields improved reconstruction performance for temporally correlated ECG signals relative to the state-of-the-art lp(1d)-regularized least-squares and Bayesian learning based algorithms. Also, for a known class of signals, the reconstruction performance of the proposed algorithm can be improved by applying it in conjunction with a dictionary obtained using the proposed dictionary learning algorithm.
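
    The regularizer itself, an lp pseudo-norm of the second-order difference of the signal, is simple to evaluate. A small sketch using a smoothed (epsilon-regularized) form of the pseudo-norm on a synthetic piecewise-linear signal; the reconstruction and dictionary learning algorithms of the paper are not shown.

```python
import numpy as np

def lp_2d_pseudonorm(x, p=0.5, eps=1e-8):
    """Smoothed lp pseudo-norm (0 < p < 1) of the second-order difference of a
    signal: the sparsity-promoting quantity referred to as lp(2d) above."""
    d2 = x[2:] - 2.0 * x[1:-1] + x[:-2]        # second-order finite difference
    return np.sum((d2 ** 2 + eps) ** (p / 2.0))

# usage: a piecewise-linear signal has a much smaller lp(2d) value than a noisy one
t = np.linspace(0.0, 1.0, 200)
smooth = np.piecewise(t, [t < 0.5, t >= 0.5], [lambda s: 2 * s, lambda s: 2 - 2 * s])
noisy = smooth + 0.05 * np.random.default_rng(0).standard_normal(t.size)
print(lp_2d_pseudonorm(smooth), lp_2d_pseudonorm(noisy))
```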

  9. Promoting Conceptual Development in Physics Teacher Education: Cognitive-Historical Reconstruction of Electromagnetic Induction Law

    ERIC Educational Resources Information Center

    Mantyla, Terhi

    2013-01-01

    In teaching physics, the history of physics offers fruitful starting points for designing instruction. I introduce here an approach that uses historical cognitive processes to enhance the conceptual development of pre-service physics teachers' knowledge. It applies a method called cognitive-historical approach, introduced to the cognitive sciences…

  10. Refocusing-range and image-quality enhanced optical reconstruction of 3-D objects from integral images using a principal periodic δ-function array

    NASA Astrophysics Data System (ADS)

    Ai, Lingyu; Kim, Eun-Soo

    2018-03-01

    We propose a method for refocusing-range and image-quality enhanced optical reconstruction of three-dimensional (3-D) objects from integral images using only a 3 × 3 periodic δ-function array (PDFA), which is called a principal PDFA (P-PDFA). By directly convolving the elemental image array (EIA) captured from 3-D objects with P-PDFAs whose spatial periods correspond to each object's depth, a set of spatially filtered EIAs (SF-EIAs) is extracted, from which 3-D objects can be reconstructed refocused at their real depths. Since the convolution operations are performed directly on each of the minimum 3 × 3 EIs of the picked-up EIA, the capturing and refocused-depth ranges of 3-D objects can be greatly enhanced, and 3-D objects with much improved image quality can be reconstructed without any preprocessing operations. Through ray-optical analysis and optical experiments with actual 3-D objects, the feasibility of the proposed method has been confirmed.

  11. Monte Carlo-based Reconstruction in Water Cherenkov Detectors using Chroma

    NASA Astrophysics Data System (ADS)

    Seibert, Stanley; Latorre, Anthony

    2012-03-01

    We demonstrate the feasibility of event reconstruction---including position, direction, energy and particle identification---in water Cherenkov detectors with a purely Monte Carlo-based method. Using a fast optical Monte Carlo package we have written, called Chroma, in combination with several variance reduction techniques, we can estimate the value of a likelihood function for an arbitrary event hypothesis. The likelihood can then be maximized over the parameter space of interest using a form of gradient descent designed for stochastic functions. Although slower than more traditional reconstruction algorithms, this completely Monte Carlo-based technique is universal and can be applied to a detector of any size or shape, which is a major advantage during the design phase of an experiment. As a specific example, we focus on reconstruction results from a simulation of the 200 kiloton water Cherenkov far detector option for LBNE.

  12. ICON: 3D reconstruction with 'missing-information' restoration in biological electron tomography.

    PubMed

    Deng, Yuchen; Chen, Yu; Zhang, Yan; Wang, Shengliu; Zhang, Fa; Sun, Fei

    2016-07-01

    Electron tomography (ET) plays an important role in revealing biological structures, ranging from the macromolecular to the subcellular scale. Due to limited tilt angles, ET reconstruction always suffers from the 'missing wedge' artifacts, which severely weaken further biological interpretation. In this work, we developed an algorithm called Iterative Compressed-sensing Optimized Non-uniform fast Fourier transform reconstruction (ICON) based on the theory of compressed sensing and the assumption of sparsity of biological specimens. ICON can significantly restore the missing information in comparison with other reconstruction algorithms. More importantly, we used the leave-one-out method to verify the validity of the restored information for both simulated and experimental data. The significant improvement in sub-tomogram averaging by ICON indicates its great potential in the future application of high-resolution structural determination of macromolecules in situ. Copyright © 2016 The Authors. Published by Elsevier Inc. All rights reserved.

  13. High resolution human diffusion tensor imaging using 2-D navigated multi-shot SENSE EPI at 7 Tesla

    PubMed Central

    Jeong, Ha-Kyu; Gore, John C.; Anderson, Adam W.

    2012-01-01

    The combination of parallel imaging with partial Fourier acquisition has greatly improved the performance of diffusion-weighted single-shot EPI and is the preferred method for acquisitions at low to medium magnetic field strength such as 1.5 or 3 Tesla. Increased off-resonance effects and reduced transverse relaxation times at 7 Tesla, however, generate more significant artifacts than at lower magnetic field strength and limit data acquisition. Additional acceleration of k-space traversal using a multi-shot approach, which acquires a subset of k-space data after each excitation, reduces these artifacts relative to conventional single-shot acquisitions. However, corrections for motion-induced phase errors are not straightforward in accelerated, diffusion-weighted multi-shot EPI because of phase aliasing. In this study, we introduce a simple acquisition and corresponding reconstruction method for diffusion-weighted multi-shot EPI with parallel imaging suitable for use at high field. The reconstruction uses a simple modification of the standard SENSE algorithm to account for shot-to-shot phase errors; the method is called Image Reconstruction using Image-space Sampling functions (IRIS). Using this approach, reconstruction from highly aliased in vivo image data using 2-D navigator phase information is demonstrated for human diffusion-weighted imaging studies at 7 Tesla. The final reconstructed images show submillimeter in-plane resolution with no ghosts and much reduced blurring and off-resonance artifacts. PMID:22592941
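
    IRIS extends the standard SENSE formulation, in which each aliased pixel is unfolded by solving a small least-squares system built from the coil sensitivities; the paper's contribution is to fold shot-to-shot phase maps into that encoding. The sketch below shows only the conventional Cartesian SENSE unfolding step for a uniform acceleration factor R, as a minimal illustration rather than the IRIS algorithm itself.

    ```python
    import numpy as np

    def sense_unfold(aliased, sens, R):
        """Basic Cartesian SENSE unfolding for acceleration factor R along axis 0.

        aliased : (n_coils, ny // R, nx) aliased coil images.
        sens    : (n_coils, ny, nx) coil sensitivity maps.
        (IRIS additionally multiplies per-shot phase maps into the encoding
        matrix; that extension is not reproduced here.)
        """
        n_coils, ny, nx = sens.shape
        ny_alias = ny // R
        recon = np.zeros((ny, nx), dtype=complex)
        for y in range(ny_alias):
            rows = [y + r * ny_alias for r in range(R)]   # pixels folded together
            for x in range(nx):
                E = sens[:, rows, x]                       # (n_coils, R) encoding
                b = aliased[:, y, x]
                sol, *_ = np.linalg.lstsq(E, b, rcond=None)
                recon[rows, x] = sol
        return recon
    ```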

  14. Charged-particle emission tomography

    PubMed Central

    Ding, Yijun; Caucci, Luca; Barrett, Harrison H.

    2018-01-01

    Purpose Conventional charged-particle imaging techniques, such as autoradiography, provide only two-dimensional (2D) ex vivo images of thin tissue slices. In order to obtain volumetric information, images of multiple thin slices are stacked. This process is time consuming and prone to distortions, as registration of 2D images is required. We propose a direct three-dimensional (3D) autoradiography technique, which we call charged-particle emission tomography (CPET). This 3D imaging technique enables imaging of thick tissue sections, thus increasing laboratory throughput and eliminating distortions due to registration. CPET also has the potential to enable in vivo charged-particle imaging with a window chamber or an endoscope. Methods Our approach to charged-particle emission tomography uses particle-processing detectors (PPDs) to estimate attributes of each detected particle. The attributes we estimate include location, direction of propagation, and/or the energy deposited in the detector. Estimated attributes are then fed into a reconstruction algorithm to reconstruct the 3D distribution of charged-particle-emitting radionuclides. Several setups to realize PPDs are designed. Reconstruction algorithms for CPET are developed. Results Reconstruction results from simulated data showed that a PPD enables CPET if the PPD measures more attributes than just the position of each detected particle. Experiments showed that a two-foil charged-particle detector is able to measure the position and direction of incident alpha particles. Conclusions We proposed a new volumetric imaging technique for charged-particle-emitting radionuclides, which we have called charged-particle emission tomography (CPET). We also proposed a new class of charged-particle detectors, which we have called particle-processing detectors (PPDs). When a PPD is used to measure the direction and/or energy attributes along with the position attributes, CPET is feasible. PMID:28370094

  15. BRDF invariant stereo using light transport constancy.

    PubMed

    Wang, Liang; Yang, Ruigang; Davis, James E

    2007-09-01

    Nearly all existing methods for stereo reconstruction assume that scene reflectance is Lambertian and make use of brightness constancy as a matching invariant. We introduce a new invariant for stereo reconstruction called light transport constancy (LTC), which allows completely arbitrary scene reflectance (bidirectional reflectance distribution functions (BRDFs)). This invariant can be used to formulate a rank constraint on multiview stereo matching when the scene is observed by several lighting configurations in which only the lighting intensity varies. In addition, we show that this multiview constraint can be used with as few as two cameras and two lighting configurations. Unlike previous methods for BRDF invariant stereo, LTC does not require precisely configured or calibrated light sources or calibration objects in the scene. Importantly, the new constraint can be used to provide BRDF invariance to any existing stereo method whenever appropriate lighting variation is available.

  16. Computer tomography of flows external to test models

    NASA Technical Reports Server (NTRS)

    Prikryl, I.; Vest, C. M.

    1982-01-01

    Computer tomographic techniques for the reconstruction of three-dimensional aerodynamic density fields from interferograms recorded from several different viewing directions were studied. Emphasis is on the case in which an opaque object, such as a test model in a wind tunnel, obscures significant regions of the interferograms (projection data). A method called the Iterative Convolution Method (ICM), existing methods in which the field is represented by series expansions, and the analysis of real experimental data in the form of aerodynamic interferograms are discussed.

  17. Shift-Invariant Image Reconstruction of Speckle-Degraded Images Using Bispectrum Estimation

    DTIC Science & Technology

    1990-05-01

    process with the requisite negative exponential pdf. I call this model the Negative Exponential Model (NENI). [The remainder of this record is figure-caption residue: the NENI flowchart (Figure 6), statistical histograms and phase plots, and a truth object speckled via the NENI.]

  18. Consistent cortical reconstruction and multi-atlas brain segmentation.

    PubMed

    Huo, Yuankai; Plassard, Andrew J; Carass, Aaron; Resnick, Susan M; Pham, Dzung L; Prince, Jerry L; Landman, Bennett A

    2016-09-01

    Whole brain segmentation and cortical surface reconstruction are two essential techniques for investigating the human brain. Spatial inconsistencies, which can hinder further integrated analyses of brain structure, can arise because these two tasks are typically conducted independently of each other. FreeSurfer obtains self-consistent whole brain segmentations and cortical surfaces. It starts with subcortical segmentation, then carries out cortical surface reconstruction, and ends with cortical segmentation and labeling. However, this "segmentation to surface to parcellation" strategy has shown limitations in various cohorts such as older populations with large ventricles. In this work, we propose a novel "multi-atlas segmentation to surface" method called Multi-atlas CRUISE (MaCRUISE), which achieves self-consistent whole brain segmentations and cortical surfaces by combining multi-atlas segmentation with the cortical reconstruction method CRUISE. A modification called MaCRUISE(+) is designed to perform well when white matter lesions are present. Compared with the benchmarks CRUISE and FreeSurfer, the surface accuracy of MaCRUISE and MaCRUISE(+) is validated using two independent datasets with expertly placed cortical landmarks. A third independent dataset with expertly delineated volumetric labels is employed to compare segmentation performance. Finally, 200 MR volumetric images from an older adult sample are used to assess the robustness of MaCRUISE and FreeSurfer. The advantages of MaCRUISE are: (1) MaCRUISE constructs self-consistent voxelwise segmentations and cortical surfaces, while MaCRUISE(+) is robust to white matter pathology. (2) MaCRUISE achieves more accurate whole brain segmentations than independently conducting the multi-atlas segmentation. (3) MaCRUISE is comparable in accuracy to FreeSurfer (when FreeSurfer does not exhibit global failures) while achieving greater robustness across an older adult population. MaCRUISE has been made freely available in open source. Copyright © 2016 Elsevier Inc. All rights reserved.

  19. Context-specific metabolic networks are consistent with experiments.

    PubMed

    Becker, Scott A; Palsson, Bernhard O

    2008-05-16

    Reconstructions of cellular metabolism are publicly available for a variety of different microorganisms and some mammalian genomes. To date, these reconstructions are "genome-scale" and strive to include all reactions implied by the genome annotation, as well as those with direct experimental evidence. Clearly, many of the reactions in a genome-scale reconstruction will not be active under particular conditions or in a particular cell type. Methods to tailor these comprehensive genome-scale reconstructions into context-specific networks will aid predictive in silico modeling for a particular situation. We present a method called Gene Inactivity Moderated by Metabolism and Expression (GIMME) to achieve this goal. The GIMME algorithm uses quantitative gene expression data and one or more presupposed metabolic objectives to produce the context-specific reconstruction that is most consistent with the available data. Furthermore, the algorithm provides a quantitative inconsistency score indicating how consistent a set of gene expression data is with a particular metabolic objective. We show that this algorithm produces results consistent with biological experiments and intuition for adaptive evolution of bacteria, rational design of metabolic engineering strains, and human skeletal muscle cells. This work represents progress towards producing constraint-based models of metabolism that are specific to the conditions where the expression profiling data is available.

  20. Path integration guided with a quality map for shape reconstruction in the fringe reflection technique

    NASA Astrophysics Data System (ADS)

    Jing, Xiaoli; Cheng, Haobo; Wen, Yongfu

    2018-04-01

    A new local integration algorithm called quality map path integration (QMPI) is reported for shape reconstruction in the fringe reflection technique. A quality map is proposed to evaluate the quality of the gradient data locally, and it functions as a guideline for the integration path. The presented method can be employed for wavefront estimation from its slopes over generally shaped surfaces, with slope noise equivalent to that in practical measurements. Moreover, QMPI is much better at handling slope data with local noise, which may be caused by irregular shapes of the surface under test. The performance of QMPI is discussed through simulations and experiment. It is shown that QMPI not only improves the accuracy of local integration, but can also be easily implemented with no iteration compared to Southwell zonal reconstruction (SZR). From an engineering point of view, the proposed method may also provide an efficient and stable approach for different shapes with high-precision requirements.
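
    The essential idea, integrating the slope field along paths that visit high-quality pixels first so that noisy regions do not contaminate the rest of the surface, can be illustrated with a priority-queue traversal in the same spirit as quality-guided phase unwrapping. The sketch below assumes forward-difference slopes on a unit pixel grid and an arbitrary per-pixel quality map; it is not the specific quality metric or path rule defined in the QMPI paper.

    ```python
    import heapq
    import numpy as np

    def quality_guided_integration(gx, gy, quality):
        """Toy quality-map-guided path integration of slope data (gx, gy)."""
        ny, nx = quality.shape
        z = np.zeros((ny, nx))
        done = np.zeros((ny, nx), dtype=bool)
        start = np.unravel_index(np.argmax(quality), quality.shape)
        done[start] = True
        heap = []

        def push_neighbors(i, j):
            for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                ni, nj = i + di, j + dj
                if 0 <= ni < ny and 0 <= nj < nx and not done[ni, nj]:
                    heapq.heappush(heap, (-quality[ni, nj], ni, nj, i, j))

        push_neighbors(*start)
        while heap:
            _, i, j, pi, pj = heapq.heappop(heap)
            if done[i, j]:
                continue
            # integrate one step from the already-integrated parent (pi, pj)
            z[i, j] = z[pi, pj] + gy[pi, pj] * (i - pi) + gx[pi, pj] * (j - pj)
            done[i, j] = True
            push_neighbors(i, j)
        return z
    ```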

  1. Automatic reconstruction of fault networks from seismicity catalogs: Three-dimensional optimal anisotropic dynamic clustering

    NASA Astrophysics Data System (ADS)

    Ouillon, G.; Ducorbier, C.; Sornette, D.

    2008-01-01

    We propose a new pattern recognition method that is able to reconstruct the three-dimensional structure of the active part of a fault network using the spatial location of earthquakes. The method is a generalization of the so-called dynamic clustering (or k-means) method, which partitions a set of data points into clusters, using a global minimization criterion of the variance of the hypocenter locations about their center of mass. The new method improves on the original k-means method by taking into account the full spatial covariance tensor of each cluster in order to partition the data set into fault-like, anisotropic clusters. Given a catalog of seismic events, the output is the optimal set of plane segments that fits the spatial structure of the data. Each plane segment is fully characterized by its location, size, and orientation. The main tunable parameter is the accuracy of the earthquake locations, which fixes the resolution, i.e., the residual variance of the fit. The resolution determines the number of fault segments needed to describe the earthquake catalog: the better the resolution, the finer the structure of the reconstructed fault segments. The algorithm successfully reconstructs the fault segments of synthetic earthquake catalogs. Applied to the real catalog consisting of a subset of the aftershock sequence of the 28 June 1992 Landers earthquake in southern California, the reconstructed plane segments fully agree with faults already known on geological maps or with blind faults that appear quite obvious in longer-term catalogs. Future improvements of the method are discussed, as well as its potential use in the multiscale study of the inner structure of fault zones.
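
    The clustering step can be pictured as a k-means variant in which each cluster is summarized by a best-fit plane (from the eigenvectors of its covariance) rather than only a centroid, and points are reassigned by orthogonal distance to those planes. The following is a minimal sketch of that idea under simplifying assumptions; the published algorithm additionally handles location uncertainties, segment extents, and the choice of the number of segments.

    ```python
    import numpy as np

    def anisotropic_kmeans(points, n_planes, n_iter=50, seed=0):
        """Toy anisotropic dynamic clustering: fit plane segments to 3-D points."""
        rng = np.random.default_rng(seed)
        labels = rng.integers(0, n_planes, size=len(points))
        for _ in range(n_iter):
            centroids, normals = [], []
            for k in range(n_planes):
                cluster = points[labels == k]
                if len(cluster) < 3:          # re-seed empty/degenerate clusters
                    cluster = points[rng.choice(len(points), 3, replace=False)]
                cov = np.cov(cluster.T)
                w, v = np.linalg.eigh(cov)    # eigenvalues in ascending order
                centroids.append(cluster.mean(axis=0))
                normals.append(v[:, 0])       # plane normal = flattest direction
            centroids, normals = np.array(centroids), np.array(normals)
            diff = points[:, None, :] - centroids[None, :, :]       # (N, K, 3)
            dist = np.abs(np.einsum("nkd,kd->nk", diff, normals))   # point-to-plane
            new_labels = dist.argmin(axis=1)
            if np.array_equal(new_labels, labels):
                break
            labels = new_labels
        return labels, centroids, normals
    ```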

  2. GPU-accelerated regularized iterative reconstruction for few-view cone beam CT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Matenine, Dmitri, E-mail: dmitri.matenine.1@ulaval.ca; Goussard, Yves, E-mail: yves.goussard@polymtl.ca; Després, Philippe, E-mail: philippe.despres@phy.ulaval.ca

    2015-04-15

    Purpose: The present work proposes an iterative reconstruction technique designed for x-ray transmission computed tomography (CT). The main objective is to provide a model-based solution to the cone-beam CT reconstruction problem, yielding accurate low-dose images via few-views acquisitions in clinically acceptable time frames. Methods: The proposed technique combines a modified ordered subsets convex (OSC) algorithm and the total variation minimization (TV) regularization technique and is called OSC-TV. The number of subsets of each OSC iteration follows a reduction pattern in order to ensure the best performance of the regularization method. Considering the high computational cost of the algorithm, it is implemented on a graphics processing unit, using parallelization to accelerate computations. Results: The reconstructions were performed on computer-simulated as well as human pelvic cone-beam CT projection data and image quality was assessed. In terms of convergence and image quality, OSC-TV performs well in reconstruction of low-dose cone-beam CT data obtained via a few-view acquisition protocol. It compares favorably to the few-view TV-regularized projections onto convex sets (POCS-TV) algorithm. It also appears to be a viable alternative to full-dataset filtered backprojection. Execution times are of 1–2 min and are compatible with the typical clinical workflow for nonreal-time applications. Conclusions: Considering the image quality and execution times, this method may be useful for reconstruction of low-dose clinical acquisitions. It may be of particular benefit to patients who undergo multiple acquisitions by reducing the overall imaging radiation dose and associated risks.

  3. Design of Multishell Sampling Schemes with Uniform Coverage in Diffusion MRI

    PubMed Central

    Caruyer, Emmanuel; Lenglet, Christophe; Sapiro, Guillermo; Deriche, Rachid

    2017-01-01

    Purpose In diffusion MRI, a technique known as diffusion spectrum imaging reconstructs the propagator with a discrete Fourier transform, from a Cartesian sampling of the diffusion signal. Alternatively, it is possible to directly reconstruct the orientation distribution function in q-ball imaging, providing so-called high angular resolution diffusion imaging. In between these two techniques, acquisitions on several spheres in q-space offer an interesting trade-off between the angular resolution and the radial information gathered in diffusion MRI. A careful design is central to the success of multishell acquisition and reconstruction techniques. Methods However, the design of multishell acquisition schemes remains an open and active field of research. In this work, we provide a general method to design multishell acquisitions with uniform angular coverage. This method is based on a generalization of electrostatic repulsion to multishell sampling. Results We evaluate the impact of our method using simulations, on the angular resolution in one- and two-fiber-bundle configurations. Compared to more commonly used radial sampling, we show that our method improves the angular resolution, as well as fiber crossing discrimination. Discussion We propose a novel method to design sampling schemes with optimal angular coverage and show the positive impact on angular resolution in diffusion MRI. PMID:23625329
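
    A toy version of the underlying idea, directions that repel one another both within each shell and across shells while respecting antipodal symmetry, is sketched below. The weights, step size, and simple projected update are illustrative assumptions; the paper formulates and optimizes the generalized electrostatic cost differently.

    ```python
    import numpy as np

    def repulsion_design(points_per_shell, n_iter=2000, step=1e-3, w_inter=0.5, seed=0):
        """Toy generalized electrostatic repulsion for multishell q-space sampling."""
        rng = np.random.default_rng(seed)
        n_total = sum(points_per_shell)
        dirs = rng.normal(size=(n_total, 3))
        dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
        shell = np.repeat(np.arange(len(points_per_shell)), points_per_shell)
        for _ in range(n_iter):
            force = np.zeros_like(dirs)
            for i in range(n_total):
                for j in range(n_total):
                    if i == j:
                        continue
                    w = 1.0 if shell[i] == shell[j] else w_inter
                    for sign in (1.0, -1.0):          # antipodal symmetry
                        d = dirs[i] - sign * dirs[j]
                        force[i] += w * d / (np.linalg.norm(d) ** 3 + 1e-12)
            dirs += step * force
            dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)   # back to sphere
        return dirs, shell

    # Example: 3 shells with 30 directions each (illustrative sizes).
    # directions, shell_index = repulsion_design([30, 30, 30])
    ```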

  4. An ECG signals compression method and its validation using NNs.

    PubMed

    Fira, Catalina Monica; Goras, Liviu

    2008-04-01

    This paper presents a new algorithm for electrocardiogram (ECG) signal compression based on local extrema extraction, adaptive hysteretic filtering and Lempel-Ziv-Welch (LZW) coding. The algorithm has been verified using eight of the most frequent normal and pathological types of cardiac beats and a multi-layer perceptron (MLP) neural network trained with original cardiac patterns and tested with reconstructed ones. Aspects regarding the possibility of using principal component analysis (PCA) for cardiac pattern classification have been investigated as well. A new compression measure called "quality score," which takes into account both the reconstruction errors and the compression ratio, is proposed.
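
    The abstract names the "quality score" but does not give its formula. The sketch below computes the two standard ingredients (percent RMS difference and compression ratio) and combines them as CR/PRD, which is one plausible form of such a score rather than the paper's verbatim definition.

    ```python
    import numpy as np

    def percent_rms_difference(original, reconstructed):
        """PRD: a standard normalized reconstruction-error measure for ECG."""
        original = np.asarray(original, dtype=float)
        reconstructed = np.asarray(reconstructed, dtype=float)
        return 100.0 * np.sqrt(np.sum((original - reconstructed) ** 2)
                               / np.sum(original ** 2))

    def quality_score(original, reconstructed, compression_ratio):
        """Illustrative quality score combining error and compression ratio."""
        return compression_ratio / percent_rms_difference(original, reconstructed)
    ```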

  5. A configuration space of homologous proteins conserving mutual information and allowing a phylogeny inference based on pair-wise Z-score probabilities.

    PubMed

    Bastien, Olivier; Ortet, Philippe; Roy, Sylvaine; Maréchal, Eric

    2005-03-10

    Popular methods to reconstruct molecular phylogenies are based on multiple sequence alignments, in which addition or removal of data may change the resulting tree topology. We have sought a representation of homologous proteins that would conserve the information of pair-wise sequence alignments, respect probabilistic properties of Z-scores (Monte Carlo methods applied to pair-wise comparisons) and be the basis for a novel method of consistent and stable phylogenetic reconstruction. We have built up a spatial representation of protein sequences using concepts from particle physics (configuration space) and respecting a frame of constraints deduced from pair-wise alignment score properties in information theory. The obtained configuration space of homologous proteins (CSHP) allows the representation of real and shuffled sequences, and thereupon an expression of the TULIP theorem for Z-score probabilities. Based on the CSHP, we propose a phylogeny reconstruction using Z-scores. Deduced trees, called TULIP trees, are consistent with multiple-alignment based trees. Furthermore, the TULIP tree reconstruction method provides a solution for some previously reported incongruent results, such as the apicomplexan enolase phylogeny. The CSHP is a unified model that conserves mutual information between proteins in the way physical models conserve energy. Applications include the reconstruction of evolutionary consistent and robust trees, the topology of which is based on a spatial representation that is not reordered after addition or removal of sequences. The CSHP and its assigned phylogenetic topology, provide a powerful and easily updated representation for massive pair-wise genome comparisons based on Z-score computations.
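
    Pairwise Z-scores of the kind this representation builds on compare an observed similarity score against the score distribution obtained after shuffling one of the sequences (a Monte Carlo null model). The sketch below illustrates that computation with a deliberately simple shared-k-mer score standing in for a real alignment score; the function names and the scoring rule are illustrative assumptions.

    ```python
    import random
    from collections import Counter

    def kmer_score(a, b, k=3):
        """Toy similarity score: number of shared k-mers (a stand-in for a
        real pairwise alignment score)."""
        ka = Counter(a[i:i + k] for i in range(len(a) - k + 1))
        kb = Counter(b[i:i + k] for i in range(len(b) - k + 1))
        return sum((ka & kb).values())

    def monte_carlo_zscore(seq_a, seq_b, n_shuffles=500, seed=0):
        """Z-score of the observed score against scores of shuffled sequences."""
        rng = random.Random(seed)
        observed = kmer_score(seq_a, seq_b)
        null = []
        for _ in range(n_shuffles):
            shuffled = list(seq_b)
            rng.shuffle(shuffled)
            null.append(kmer_score(seq_a, "".join(shuffled)))
        mean = sum(null) / len(null)
        var = sum((x - mean) ** 2 for x in null) / (len(null) - 1)
        return (observed - mean) / (var ** 0.5 + 1e-12)
    ```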

  6. Noniterative MAP reconstruction using sparse matrix representations.

    PubMed

    Cao, Guangzhi; Bouman, Charles A; Webb, Kevin J

    2009-09-01

    We present a method for noniterative maximum a posteriori (MAP) tomographic reconstruction which is based on the use of sparse matrix representations. Our approach is to precompute and store the inverse matrix required for MAP reconstruction. This approach has generally not been used in the past because the inverse matrix is typically large and fully populated (i.e., not sparse). In order to overcome this problem, we introduce two new ideas. The first idea is a novel theory for the lossy source coding of matrix transformations which we refer to as matrix source coding. This theory is based on a distortion metric that reflects the distortions produced in the final matrix-vector product, rather than the distortions in the coded matrix itself. The resulting algorithms are shown to require orthonormal transformations of both the measurement data and the matrix rows and columns before quantization and coding. The second idea is a method for efficiently storing and computing the required orthonormal transformations, which we call a sparse-matrix transform (SMT). The SMT is a generalization of the classical FFT in that it uses butterflies to compute an orthonormal transform; but unlike an FFT, the SMT uses the butterflies in an irregular pattern, and is numerically designed to best approximate the desired transforms. We demonstrate the potential of the noniterative MAP reconstruction with examples from optical tomography. The method requires offline computation to encode the inverse transform. However, once these offline computations are completed, the noniterative MAP algorithm is shown to reduce both storage and computation by well over two orders of magnitude, as compared to linear iterative reconstruction methods.
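
    For a linear-Gaussian model with a quadratic prior, the MAP estimate is a fixed linear map of the data, so it can be precomputed offline and applied online as a single matrix-vector product. The sketch below shows that offline/online split; the thresholding step is only a crude stand-in for the paper's matrix source coding and sparse-matrix transform.

    ```python
    import numpy as np
    from scipy import sparse

    def precompute_map_operator(A, lam, R=None):
        """Offline step: H such that x_MAP = H @ y, with
        H = (A^T A + lam * R)^{-1} A^T for a quadratic prior."""
        A = np.asarray(A, dtype=float)
        n = A.shape[1]
        R = np.eye(n) if R is None else R
        return np.linalg.solve(A.T @ A + lam * R, A.T)

    def compress_operator(H, keep=0.05):
        """Keep only the largest entries of H and store it sparsely (a crude
        illustration of the storage/compute gain, not the SMT itself)."""
        thresh = np.quantile(np.abs(H), 1.0 - keep)
        return sparse.csr_matrix(np.where(np.abs(H) >= thresh, H, 0.0))

    # Online step: reconstruction is a single sparse matrix-vector product,
    # x_hat = H_compressed @ y
    ```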

  7. Group foliation of finite difference equations

    NASA Astrophysics Data System (ADS)

    Thompson, Robert; Valiquette, Francis

    2018-06-01

    Using the theory of equivariant moving frames, a group foliation method for invariant finite difference equations is developed. This method is analogous to the group foliation of differential equations and uses the symmetry group of the equation to decompose the solution process into two steps, called resolving and reconstruction. Our constructions are performed algorithmically and symbolically by making use of discrete recurrence relations among joint invariants. Applications to invariant finite difference equations that approximate differential equations are given.

  8. Regional regularization method for ECT based on spectral transformation of Laplacian

    NASA Astrophysics Data System (ADS)

    Guo, Z. H.; Kan, Z.; Lv, D. C.; Shao, F. Q.

    2016-10-01

    Image reconstruction in electrical capacitance tomography is an ill-posed inverse problem, and regularization techniques are usually used to solve the problem by suppressing noise. An anisotropic regional regularization algorithm for electrical capacitance tomography is constructed using a novel approach called spectral transformation. Its function is derived and applied to the weighted gradient magnitude of the sensitivity of the Laplacian as a regularization term. With the optimal regional regularizer, a priori knowledge of the local nonlinearity degree of the forward map is incorporated into the proposed online reconstruction algorithm. Simulation experiments were performed to verify that the new regularization algorithm reconstructs images of superior quality compared with two conventional Tikhonov regularization approaches. The advantage of the new algorithm in improving performance and reducing shape distortion is demonstrated with experimental data.

  9. Simulation study on compressive laminar optical tomography for cardiac action potential propagation

    PubMed Central

    Harada, Takumi; Tomii, Naoki; Manago, Shota; Kobayashi, Etsuko; Sakuma, Ichiro

    2017-01-01

    To measure the activity of tissue at the microscopic level, laminar optical tomography (LOT), which is a microscopic form of diffuse optical tomography, has been developed. However, obtaining sufficient recording speed to capture rapidly changing dynamic activity remains a major challenge. To achieve a high frame rate for the reconstructed data, we here propose a new LOT method using compressed sensing theory, called compressive laminar optical tomography (CLOT), in which novel digital micromirror device-based illumination and data reduction in a single reconstruction are applied. In the simulation experiments, the reconstructed volumetric images of the action potentials that were acquired from 5 measured images with random patterns featured a wave border to a depth of at least 2.5 mm. Consequently, it was shown that CLOT has the potential to reach the frame rates of over 200 fps required for cardiac electrophysiological phenomena. PMID:28736675

  10. A Complete Readout Chain of the ATLAS Tile Calorimeter for the HL-LHC: from FATALIC Front-End Electronics to Signal Reconstruction

    NASA Astrophysics Data System (ADS)

    Senkin, Sergey

    2018-01-01

    The ATLAS Collaboration has started a vast programme of upgrades in the context of high-luminosity LHC (HL-LHC) foreseen in 2024. We present here one of the frontend readout options, an ASIC called FATALIC, proposed for the high-luminosity phase LHC upgrade of the ATLAS Tile Calorimeter. Based on a 130 nm CMOS technology, FATALIC performs the complete signal processing, including amplification, shaping and digitisation. We describe the full characterisation of FATALIC and also the Optimal Filtering signal reconstruction method adapted to fully exploit the FATALIC three-range layout. Additionally we present the resolution performance of the whole chain measured using the charge injection system designed for calibration. Finally we discuss the results of the signal reconstruction used on real data collected during a preliminary beam test at CERN.
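
    Optimal filtering, mentioned at the end of the abstract, reconstructs the signal amplitude as a noise-weighted linear combination of the digitized samples. The sketch below shows the textbook construction of such weights from a pulse shape, its time derivative, and the noise covariance; it is a generic illustration, not the FATALIC-specific three-range implementation.

    ```python
    import numpy as np

    def optimal_filter_weights(pulse_shape, pulse_derivative, noise_cov):
        """Weights minimizing the noise variance of A = sum_i a_i * s_i subject
        to unit response to the pulse shape and zero response to its time
        derivative (first-order insensitivity to small phase shifts)."""
        G = np.column_stack([pulse_shape, pulse_derivative])   # constraint matrix
        Cinv = np.linalg.inv(noise_cov)
        lagrange = np.linalg.solve(G.T @ Cinv @ G, np.array([1.0, 0.0]))
        return Cinv @ G @ lagrange

    def reconstruct_amplitude(samples, weights):
        """Signal amplitude as a weighted sum of the digitized samples."""
        return float(np.dot(weights, samples))
    ```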

  11. Detecting and quantifying stellar magnetic fields. Sparse Stokes profile approximation using orthogonal matching pursuit

    NASA Astrophysics Data System (ADS)

    Carroll, T. A.; Strassmeier, K. G.

    2014-03-01

    Context. In recent years, we have seen a rapidly growing number of stellar magnetic field detections for various types of stars. Many of these magnetic fields are estimated from spectropolarimetric observations (Stokes V) by using the so-called center-of-gravity (COG) method. Unfortunately, the accuracy of this method rapidly deteriorates with increasing noise and thus calls for a more robust procedure that combines signal detection and field estimation. Aims: We introduce an estimation method that provides not only the effective or mean longitudinal magnetic field from an observed Stokes V profile but also uses the net absolute polarization of the profile to obtain an estimate of the apparent (i.e., velocity resolved) absolute longitudinal magnetic field. Methods: By combining the COG method with an orthogonal-matching-pursuit (OMP) approach, we were able to decompose observed Stokes profiles with an overcomplete dictionary of wavelet-basis functions to reliably reconstruct the observed Stokes profiles in the presence of noise. The elementary wave functions of the sparse reconstruction process were utilized to estimate the effective longitudinal magnetic field and the apparent absolute longitudinal magnetic field. A multiresolution analysis complements the OMP algorithm to provide a robust detection and estimation method. Results: An extensive Monte-Carlo simulation confirms the reliability and accuracy of the magnetic OMP approach where a mean error of under 2% is found. Its full potential is obtained for heavily noise-corrupted Stokes profiles with signal-to-noise variance ratios down to unity. In this case a conventional COG method yields a mean error for the effective longitudinal magnetic field of up to 50%, whereas the OMP method gives a maximum error of 18%. It is, moreover, shown that even in the case of very small residual noise on a level between 10-3 and 10-5, a regime reached by current multiline reconstruction techniques, the conventional COG method incorrectly interprets a large portion of the residual noise as a magnetic field, with values of up to 100 G. The magnetic OMP method, on the other hand, remains largely unaffected by the noise, regardless of the noise level the maximum error is no greater than 0.7 G.
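
    At the heart of the approach is a sparse decomposition of the noisy profile over an overcomplete dictionary via orthogonal matching pursuit. The following is a generic OMP sketch for an arbitrary dictionary and signal; the wavelet dictionary, the multiresolution analysis, and the subsequent field estimates of the paper are not reproduced.

    ```python
    import numpy as np

    def orthogonal_matching_pursuit(D, y, n_atoms):
        """Generic OMP: sparse approximation of y over the columns of D
        (columns are assumed to be normalized)."""
        residual = y.astype(float).copy()
        support, coeffs = [], None
        for _ in range(n_atoms):
            correlations = np.abs(D.T @ residual)
            correlations[support] = 0.0            # do not pick an atom twice
            support.append(int(np.argmax(correlations)))
            coeffs, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
            residual = y - D[:, support] @ coeffs
        x = np.zeros(D.shape[1])
        x[support] = coeffs
        return x, support
    ```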

  12. Mastectomy

    MedlinePlus

    ... reconstruction is a complex procedure performed by a plastic surgeon, also called a reconstructive surgeon. If you' ... as a mastectomy, you'll meet with the plastic surgeon before the surgery. Preparing for your surgery ...

  13. Bessel Fourier Orientation Reconstruction (BFOR): An Analytical Diffusion Propagator Reconstruction for Hybrid Diffusion Imaging and Computation of q-Space Indices

    PubMed Central

    Hosseinbor, A. Pasha; Chung, Moo K.; Wu, Yu-Chien; Alexander, Andrew L.

    2012-01-01

    The ensemble average propagator (EAP) describes the 3D average diffusion process of water molecules, capturing both its radial and angular contents. The EAP can thus provide richer information about complex tissue microstructure properties than the orientation distribution function (ODF), an angular feature of the EAP. Recently, several analytical EAP reconstruction schemes for multiple q-shell acquisitions have been proposed, such as diffusion propagator imaging (DPI) and spherical polar Fourier imaging (SPFI). In this study, a new analytical EAP reconstruction method is proposed, called Bessel Fourier orientation reconstruction (BFOR), whose solution is based on heat equation estimation of the diffusion signal for each shell acquisition, and is validated on both synthetic and real datasets. A significant portion of the paper is dedicated to comparing BFOR, SPFI, and DPI using hybrid, non-Cartesian sampling for multiple b-value acquisitions. Ways to mitigate the effects of Gibbs ringing on EAP reconstruction are also explored. In addition to analytical EAP reconstruction, the aforementioned modeling bases can be used to obtain rotationally invariant q-space indices of potential clinical value, an avenue which has not yet been thoroughly explored. Three such measures are computed: zero-displacement probability (Po), mean squared displacement (MSD), and generalized fractional anisotropy (GFA). PMID:22963853

  14. Regularized spherical polar fourier diffusion MRI with optimal dictionary learning.

    PubMed

    Cheng, Jian; Jiang, Tianzi; Deriche, Rachid; Shen, Dinggang; Yap, Pew-Thian

    2013-01-01

    Compressed Sensing (CS) takes advantage of signal sparsity or compressibility and allows superb signal reconstruction from relatively few measurements. Based on CS theory, a suitable dictionary for sparse representation of the signal is required. In diffusion MRI (dMRI), CS methods proposed for reconstruction of diffusion-weighted signal and the Ensemble Average Propagator (EAP) utilize two kinds of Dictionary Learning (DL) methods: 1) Discrete Representation DL (DR-DL), and 2) Continuous Representation DL (CR-DL). DR-DL is susceptible to numerical inaccuracy owing to interpolation and regridding errors in a discretized q-space. In this paper, we propose a novel CR-DL approach, called Dictionary Learning - Spherical Polar Fourier Imaging (DL-SPFI) for effective compressed-sensing reconstruction of the q-space diffusion-weighted signal and the EAP. In DL-SPFI, a dictionary that sparsifies the signal is learned from the space of continuous Gaussian diffusion signals. The learned dictionary is then adaptively applied to different voxels using a weighted LASSO framework for robust signal reconstruction. Compared with the state-of-the-art CR-DL and DR-DL methods proposed by Merlet et al. and Bilgic et al., respectively, our work offers the following advantages. First, the learned dictionary is proved to be optimal for Gaussian diffusion signals. Second, to our knowledge, this is the first work to learn a voxel-adaptive dictionary. The importance of the adaptive dictionary in EAP reconstruction will be demonstrated theoretically and empirically. Third, optimization in DL-SPFI is only performed in a small subspace spanned by the SPF coefficients, as opposed to the q-space approach utilized by Merlet et al. We experimentally evaluated DL-SPFI with respect to L1-norm regularized SPFI (L1-SPFI), which uses the original SPF basis, and the DR-DL method proposed by Bilgic et al. The experimental results on synthetic and real data indicate that the learned dictionary produces sparser coefficients than the original SPF basis and results in significantly lower reconstruction error than Bilgic et al.'s method.

  15. Simultaneous reconstruction of the activity image and registration of the CT image in TOF-PET

    NASA Astrophysics Data System (ADS)

    Rezaei, Ahmadreza; Michel, Christian; Casey, Michael E.; Nuyts, Johan

    2016-02-01

    Previously, maximum-likelihood methods have been proposed to jointly estimate the activity image and the attenuation image or the attenuation sinogram from time-of-flight (TOF) positron emission tomography (PET) data. In this contribution, we propose a method that addresses the possible alignment problem of the TOF-PET emission data and the computed tomography (CT) attenuation data, by combining reconstruction and registration. The method, called MLRR, iteratively reconstructs the activity image while registering the available CT-based attenuation image, so that the pair of activity and attenuation images maximise the likelihood of the TOF emission sinogram. The algorithm is slow to converge, but some acceleration could be achieved by using Nesterov’s momentum method and by applying a multi-resolution scheme for the non-rigid displacement estimation. The latter also helps to avoid local optima, although convergence to the global optimum cannot be guaranteed. The results are evaluated on 2D and 3D simulations as well as a respiratory gated clinical scan. Our experiments indicate that the proposed method is able to correct for possible misalignment of the CT-based attenuation image, and is therefore a very promising approach to suppressing attenuation artefacts in clinical PET/CT. When applied to respiratory gated data of a patient scan, it produced deformations that are compatible with breathing motion and which reduced the well known attenuation artefact near the dome of the liver. Since the method makes use of the energy-converted CT attenuation image, the scale problem of joint reconstruction is automatically solved.

  16. Integration of prior CT into CBCT reconstruction for improved image quality via reconstruction of difference: first patient studies

    NASA Astrophysics Data System (ADS)

    Zhang, Hao; Gang, Grace J.; Lee, Junghoon; Wong, John; Stayman, J. Webster

    2017-03-01

    Purpose: There are many clinical situations where diagnostic CT is used for an initial diagnosis or treatment planning, followed by one or more CBCT scans that are part of an image-guided intervention. Because the high-quality diagnostic CT scan is a rich source of patient-specific anatomical knowledge, this provides an opportunity to incorporate the prior CT image into subsequent CBCT reconstruction for improved image quality. We propose a penalized-likelihood method called reconstruction of difference (RoD), to directly reconstruct differences between the CBCT scan and the CT prior. In this work, we demonstrate the efficacy of RoD with clinical patient datasets. Methods: We introduce a data processing workflow using the RoD framework to reconstruct anatomical changes between the prior CT and current CBCT. This workflow includes processing steps to account for non-anatomical differences between the two scans including 1) scatter correction for CBCT datasets due to increased scatter fractions in CBCT data; 2) histogram matching for attenuation variations between CT and CBCT; and 3) registration for different patient positioning. CBCT projection data and CT planning volumes for two radiotherapy patients - one abdominal study and one head-and-neck study - were investigated. Results: In comparisons between the proposed RoD framework and more traditional FDK and penalized-likelihood reconstructions, we find a significant improvement in image quality when prior CT information is incorporated into the reconstruction. RoD is able to provide additional low-contrast details while correctly incorporating actual physical changes in patient anatomy. Conclusions: The proposed framework provides an opportunity to either improve image quality or relax data fidelity constraints for CBCT imaging when prior CT studies of the same patient are available. Possible clinical targets include CBCT image-guided radiotherapy and CBCT image-guided surgeries.

  17. Surface Temperature Reconstructions for the Last 1000 Years

    NASA Astrophysics Data System (ADS)

    North, G. R.

    2006-12-01

    This is a presentation of results from a recently released report written by a committee established by the National Research Council and chaired by the speaker. The report shares its title with this talk. It focused on methods of reconstructing the large-scale features of surface temperature fields, which have been the subject of considerable discussion in the scientific literature, in assessments such as the IPCC, in the popular press, in blogs, and even in Congressional hearings. The so-called 'hockey stick' curve indicates a gradual cooling from the beginning of the record at about 1000 AD until roughly 150 years ago, when the curve takes a steep upward trend (the so-called global warming). The original publications by Mann, Bradley and Hughes were careful to present and emphasize error margins that have been ignored by many in the controversy. The Committee found that numerous subsequent publications have reported reconstructions that utilized different data and different statistical assumptions. These all fall within the error margins of the original studies. While the committee has some reservations about the period prior to the year 1600 AD, it still concludes that surface temperatures averaged over the Northern Hemisphere over the last three decades are plausibly the warmest for any comparable period in the last 1000 years.

  18. [Technical information: The 'so called' Camille Bernard lower lip reconstruction: An eponymous confusion clarified].

    PubMed

    Marck, K W; Martin, D

    2017-12-01

    The use of eponyms honours those who have contributed to the development of medicine and facilitates communication between colleagues. Eponyms rely on historical knowledge of who was the first to use a given technique. In the previous century, two different operative procedures were attached to the 'so-called' Bernard lower lip reconstruction. A review of the historical literature on lip reconstruction, with a focus on the years 1853-1855, elucidates the roles of Bernard, Saeman, Desgranges and Burow, and gives suggestions for eponyms that do justice to the innovating surgeons Bernard, Burow and Desgranges. Copyright © 2017. Published by Elsevier Masson SAS.

  19. [Revealing three psychological states before an acting out in 32 patients hospitalized for suicide attempt].

    PubMed

    Vandevoorde, J

    2013-09-01

    The purpose of this study was to reconstruct the psychological state of suicidal subjects at the time of the execution of the gesture according to their thoughts, their emotions, their actions, their fantasy life and consciousness. Thirty-three adult subjects agreed, just days after their suicide attempt, to answer the Interview Method for Suicidal Acts (IMSA). The object of this semi-structured interview is to invite the suicidal subject to reconstruct mentally and chronologically their suicide attempt. IMSA can follow the thoughts, behavior, consciousness, emotions and activity of the suicidal scenario by helping the patient to reconstruct the phenomenology of his/her actions until the final suicidal gesture. The data were processed using the TwoStep classification method in SPSS, based on the Schwarz Bayesian criterion. The results highlight three main types of psychological state: (1) a "kinesthetic" psychological state (called "type K") is characterized by a rupture between the subjective sensation of motor movement and effective motility (motor automatism), the presence of a dissociative state, an "empty" feeling of thought and the absence of an external triggering factor; (2) a "cognitive" psychological state (called "type C") is characterized by a significant reflection on the decision to die and infiltration of the morbid thought, an intense fantasy life around the suicidal scenario, a clear state of consciousness, and an absence of loss of motor control; (3) an "emotional" psychological state (called "type E") is characterized by confusing and chaotic emotional processes, the emergence of a dissociative state, and a significant impact of external events on the onset of the suicide attempt. This classification of suicide attempts allows us to identify the different combinations of the suicidal process and opens up new therapeutic strategies. Copyright © 2013 L'Encéphale, Paris. Published by Elsevier Masson SAS. All rights reserved.

  20. Fast parallel MR image reconstruction via B1-based, adaptive restart, iterative soft thresholding algorithms (BARISTA).

    PubMed

    Muckley, Matthew J; Noll, Douglas C; Fessler, Jeffrey A

    2015-02-01

    Sparsity-promoting regularization is useful for combining compressed sensing assumptions with parallel MRI for reducing scan time while preserving image quality. Variable splitting algorithms are the current state-of-the-art algorithms for SENSE-type MR image reconstruction with sparsity-promoting regularization. These methods are very general and have been observed to work with almost any regularizer; however, the tuning of associated convergence parameters is a commonly-cited hindrance in their adoption. Conversely, majorize-minimize algorithms based on a single Lipschitz constant have been observed to be slow in shift-variant applications such as SENSE-type MR image reconstruction since the associated Lipschitz constants are loose bounds for the shift-variant behavior. This paper bridges the gap between the Lipschitz constant and the shift-variant aspects of SENSE-type MR imaging by introducing majorizing matrices in the range of the regularizer matrix. The proposed majorize-minimize methods (called BARISTA) converge faster than state-of-the-art variable splitting algorithms when combined with momentum acceleration and adaptive momentum restarting. Furthermore, the tuning parameters associated with the proposed methods are unitless convergence tolerances that are easier to choose than the constraint penalty parameters required by variable splitting algorithms.
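
    The momentum-plus-adaptive-restart ingredient can be illustrated on the simplest case, iterative soft thresholding with a single Lipschitz constant for a LASSO-type problem. The sketch below restarts the momentum whenever it points "uphill" (the usual gradient-based restart test); BARISTA's actual contribution, replacing the scalar constant with B1-based majorizing matrices, is not reproduced here.

    ```python
    import numpy as np

    def soft_threshold(x, t):
        return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

    def fista_adaptive_restart(A, y, lam, n_iter=200):
        """Momentum iterative soft thresholding with adaptive restart for
        min_x 0.5 * ||A x - y||^2 + lam * ||x||_1 (generic sketch)."""
        L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
        x = np.zeros(A.shape[1])
        z, t = x.copy(), 1.0
        for _ in range(n_iter):
            x_prev = x
            grad = A.T @ (A @ z - y)
            x = soft_threshold(z - grad / L, lam / L)
            if np.dot(z - x, x - x_prev) > 0:  # restart: momentum points uphill
                t, z = 1.0, x.copy()
            else:
                t_next = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t * t))
                z = x + ((t - 1.0) / t_next) * (x - x_prev)
                t = t_next
        return x
    ```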

  2. Atmospheric turbulence profiling with unknown power spectral density

    NASA Astrophysics Data System (ADS)

    Helin, Tapio; Kindermann, Stefan; Lehtonen, Jonatan; Ramlau, Ronny

    2018-04-01

    Adaptive optics (AO) is a technology used in modern ground-based optical telescopes to compensate for the wavefront distortions caused by atmospheric turbulence. One method that allows information about the atmosphere to be retrieved from telescope data is the so-called SLODAR approach, in which the atmospheric turbulence profile is estimated based on correlation data of Shack-Hartmann wavefront measurements. This approach relies on a layered Kolmogorov turbulence model. In this article, we propose a novel extension of the SLODAR concept by including a general non-Kolmogorov turbulence layer close to the ground with an unknown power spectral density. We prove that the joint estimation problem of the turbulence profile above ground simultaneously with the unknown power spectral density at the ground is ill-posed and propose three numerical reconstruction methods. We demonstrate by numerical simulations that our methods lead to substantial improvements in the turbulence profile reconstruction compared to the standard SLODAR-type approach. Also, our methods can accurately locate local perturbations in non-Kolmogorov power spectral densities.

  3. Cardiac-gated parametric images from 82 Rb PET from dynamic frames and direct 4D reconstruction.

    PubMed

    Germino, Mary; Carson, Richard E

    2018-02-01

    Cardiac perfusion PET data can be reconstructed as a dynamic sequence and kinetic modeling performed to quantify myocardial blood flow, or reconstructed as static gated images to quantify function. Parametric images from dynamic PET are conventionally not gated, to allow use of all events with lower noise. An alternative method for dynamic PET is to incorporate the kinetic model into the reconstruction algorithm itself, bypassing the generation of a time series of emission images and directly producing parametric images. So-called "direct reconstruction" can produce parametric images with lower noise than the conventional method because the noise distribution is more easily modeled in projection space than in image space. In this work, we develop direct reconstruction of cardiac-gated parametric images for 82 Rb PET with an extension of the Parametric Motion compensation OSEM List mode Algorithm for Resolution-recovery reconstruction for the one tissue model (PMOLAR-1T). PMOLAR-1T was extended to accommodate model terms to account for spillover from the left and right ventricles into the myocardium. The algorithm was evaluated on a 4D simulated 82 Rb dataset, including a perfusion defect, as well as a human 82 Rb list mode acquisition. The simulated list mode was subsampled into replicates, each with counts comparable to one gate of a gated acquisition. Parametric images were produced by the indirect (separate reconstructions and modeling) and direct methods for each of eight low-count and eight normal-count replicates of the simulated data, and each of eight cardiac gates for the human data. For the direct method, two initialization schemes were tested: uniform initialization, and initialization with the filtered iteration 1 result of the indirect method. For the human dataset, event-by-event respiratory motion compensation was included. The indirect and direct methods were compared for the simulated dataset in terms of bias and coefficient of variation as a function of iteration. Convergence of direct reconstruction was slow with uniform initialization; lower bias was achieved in fewer iterations by initializing with the filtered indirect iteration 1 images. For most parameters and regions evaluated, the direct method achieved the same or lower absolute bias at matched iteration as the indirect method, with 23%-65% lower noise. Additionally, the direct method gave better contrast between the perfusion defect and surrounding normal tissue than the indirect method. Gated parametric images from the human dataset had comparable relative performance of indirect and direct, in terms of mean parameter values per iteration. Changes in myocardial wall thickness and blood pool size across gates were readily visible in the gated parametric images, with higher contrast between myocardium and left ventricle blood pool in parametric images than gated SUV images. Direct reconstruction can produce parametric images with less noise than the indirect method, opening the potential utility of gated parametric imaging for perfusion PET. © 2017 American Association of Physicists in Medicine.

  4. Total variation superiorized conjugate gradient method for image reconstruction

    NASA Astrophysics Data System (ADS)

    Zibetti, Marcelo V. W.; Lin, Chuan; Herman, Gabor T.

    2018-03-01

    The conjugate gradient (CG) method is commonly used for the relatively-rapid solution of least squares problems. In image reconstruction, the problem can be ill-posed and also contaminated by noise; due to this, approaches such as regularization should be utilized. Total variation (TV) is a useful regularization penalty, frequently utilized in image reconstruction for generating images with sharp edges. When a non-quadratic norm is selected for regularization, as is the case for TV, then it is no longer possible to use CG. Non-linear CG is an alternative, but it does not share the efficiency that CG shows with least squares and methods such as fast iterative shrinkage-thresholding algorithms (FISTA) are preferred for problems with TV norm. A different approach to including prior information is superiorization. In this paper it is shown that the conjugate gradient method can be superiorized. Five different CG variants are proposed, including preconditioned CG. The CG methods superiorized by the total variation norm are presented and their performance in image reconstruction is demonstrated. It is illustrated that some of the proposed variants of the superiorized CG method can produce reconstructions of superior quality to those produced by FISTA and in less computational time, due to the speed of the original CG for least squares problems. In the Appendix we examine the behavior of one of the superiorized CG methods (we call it S-CG); one of its input parameters is a positive number ɛ. It is proved that, for any given ɛ that is greater than the half-squared-residual for the least squares solution, S-CG terminates in a finite number of steps with an output for which the half-squared-residual is less than or equal to ɛ. Importantly, it is also the case that the output will have a lower value of TV than what would be provided by unsuperiorized CG for the same value ɛ of the half-squared residual.
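
    The superiorization pattern itself is simple to sketch: interleave ordinary CG iterations on the least-squares term with small, bounded, diminishing perturbations along a direction that reduces total variation. The code below is a minimal illustration of that alternation under simplifying assumptions (anisotropic TV, CG on the normal equations, fixed inner iteration counts); it is not the specific S-CG variant analyzed in the paper.

    ```python
    import numpy as np

    def tv_subgradient(img):
        """A subgradient of anisotropic total variation of a 2-D image."""
        gx = np.diff(img, axis=0, append=img[-1:, :])
        gy = np.diff(img, axis=1, append=img[:, -1:])
        sx, sy = np.sign(gx), np.sign(gy)
        g = -sx - sy
        g[1:, :] += sx[:-1, :]
        g[:, 1:] += sy[:, :-1]
        return g

    def superiorized_cg(A, b, shape, n_outer=20, n_cg=5, n_pert=3, beta0=0.5, kappa=0.9):
        """Alternate TV-reducing perturbations with CG steps on ||Ax - b||^2."""
        x = np.zeros(A.shape[1])
        beta = beta0
        for _ in range(n_outer):
            for _ in range(n_pert):               # bounded, diminishing perturbations
                g = tv_subgradient(x.reshape(shape)).ravel()
                norm = np.linalg.norm(g)
                if norm > 0:
                    x = x - beta * g / norm
                beta *= kappa
            r = A.T @ (b - A @ x)                  # restarted CG on normal equations
            p = r.copy()
            for _ in range(n_cg):
                Ap = A.T @ (A @ p)
                alpha = (r @ r) / (p @ Ap + 1e-12)
                x = x + alpha * p
                r_new = r - alpha * Ap
                p = r_new + ((r_new @ r_new) / (r @ r + 1e-12)) * p
                r = r_new
        return x.reshape(shape)
    ```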

  5. Compressive Sampling Based Interior Reconstruction for Dynamic Carbon Nanotube Micro-CT

    PubMed Central

    Yu, Hengyong; Cao, Guohua; Burk, Laurel; Lee, Yueh; Lu, Jianping; Santago, Pete; Zhou, Otto; Wang, Ge

    2010-01-01

    In the computed tomography (CT) field, one recent invention is the so-called carbon nanotube (CNT) based field emission x-ray technology. On the other hand, compressive sampling (CS) based interior tomography is a new innovation. Combining the strengths of these two novel subjects, we apply the interior tomography technique to local mouse cardiac imaging using respiration and cardiac gating with a CNT based micro-CT scanner. The major features of our method are: (1) it does not need exact prior knowledge inside an ROI; and (2) two orthogonal scout projections are employed to regularize the reconstruction. Both numerical simulations and in vivo mouse studies are performed to demonstrate the feasibility of our methodology. PMID:19923686

  6. Recent Advances in X-ray Cone-beam Computed Laminography.

    PubMed

    O'Brien, Neil S; Boardman, Richard P; Sinclair, Ian; Blumensath, Thomas

    2016-10-06

    X-ray computed tomography is an established volume imaging technique used routinely in medical diagnosis, industrial non-destructive testing, and a wide range of scientific fields. Traditionally, computed tomography uses scanning geometries with a single axis of rotation together with reconstruction algorithms specifically designed for this setup. Recently, however, there has been increasing interest in more complex scanning geometries. These include so-called X-ray computed laminography systems capable of imaging specimens with large lateral dimensions or large aspect ratios, neither of which are well suited to conventional CT scanning procedures. Developments throughout this field have thus been rapid, including the introduction of novel system trajectories, the application and refinement of various reconstruction methods, and the use of recently developed computational hardware and software techniques to accelerate reconstruction times. Here we examine the advances made in the last several years and consider their impact on the state of the art.

  7. A user's guide to localization-based super-resolution fluorescence imaging.

    PubMed

    Dempsey, Graham T

    2013-01-01

    Advances in far-field fluorescence microscopy over the past decade have led to the development of super-resolution imaging techniques that provide more than an order of magnitude improvement in spatial resolution compared to conventional light microscopy. One such approach, called Stochastic Optical Reconstruction Microscopy (STORM) uses the sequential, nanometer-scale localization of individual fluorophores to reconstruct a high-resolution image of a structure of interest. This is an attractive method for biological investigation at the nanoscale due to its relative simplicity, both conceptually and practically in the laboratory. Like most research tools, however, the devil is in the details. The aim of this chapter is to serve as a guide for applying STORM to the study of biological samples. This chapter will discuss considerations for choosing a photoswitchable fluorescent probe, preparing a sample, selecting hardware for data acquisition, and collecting and analyzing data for image reconstruction. Copyright © 2013 Elsevier Inc. All rights reserved.
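
    The localization step at the core of STORM-type imaging is conceptually simple: fit a model point-spread function to each isolated single-molecule spot and keep the sub-pixel center. The sketch below fits an isotropic 2-D Gaussian with SciPy's curve_fit as one common choice; maximum-likelihood fitting and more elaborate PSF models are also widely used, and the pixel size given is an arbitrary example value.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def gaussian2d(coords, amplitude, x0, y0, sigma, offset):
        x, y = coords
        g = offset + amplitude * np.exp(
            -((x - x0) ** 2 + (y - y0) ** 2) / (2.0 * sigma ** 2))
        return g.ravel()

    def localize_spot(roi, pixel_size_nm=160.0):
        """Estimate the sub-pixel center of a single-emitter spot by fitting an
        isotropic 2-D Gaussian to a small camera ROI; returns (x, y) in nm."""
        ny, nx = roi.shape
        y, x = np.mgrid[0:ny, 0:nx]
        p0 = (float(roi.max() - roi.min()), nx / 2.0, ny / 2.0, 1.5, float(roi.min()))
        popt, _ = curve_fit(gaussian2d, (x, y), roi.ravel().astype(float), p0=p0)
        _, x0, y0, _, _ = popt
        return x0 * pixel_size_nm, y0 * pixel_size_nm
    ```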

  8. Localization of synchronous cortical neural sources.

    PubMed

    Zerouali, Younes; Herry, Christophe L; Jemel, Boutheina; Lina, Jean-Marc

    2013-03-01

    Neural synchronization is a key mechanism to a wide variety of brain functions, such as cognition, perception, or memory. The high temporal resolution achieved by EEG recordings allows the study of the dynamical properties of synchronous patterns of activity at a very fine temporal scale but with very low spatial resolution. Spatial resolution can be improved by retrieving the neural sources of the EEG signal, thus solving the so-called inverse problem. Although many methods have been proposed to solve the inverse problem and localize brain activity, few of them target the synchronous brain regions. In this paper, we propose a novel algorithm aimed at localizing specifically synchronous brain regions and reconstructing the time course of their activity. Using multivariate wavelet ridge analysis, we extract signals capturing the synchronous events buried in the EEG and then solve the inverse problem on these signals. Using simulated data, we compare the source reconstruction accuracy achieved by our method to that of a standard source reconstruction approach. We show that the proposed method performs better across a wide range of noise levels and source configurations. In addition, we applied our method to a real dataset and successfully identified cortical areas involved in the functional network underlying visual face perception. We conclude that the proposed approach allows an accurate localization of synchronous brain regions and a robust estimation of their activity.

  9. Digital tomosynthesis (DTS) with a Circular X-ray tube: Its image reconstruction based on total-variation minimization and the image characteristics

    NASA Astrophysics Data System (ADS)

    Park, Y. O.; Hong, D. K.; Cho, H. S.; Je, U. K.; Oh, J. E.; Lee, M. S.; Kim, H. J.; Lee, S. H.; Jang, W. S.; Cho, H. M.; Choi, S. I.; Koo, Y. S.

    2013-09-01

    In this paper, we introduce an effective imaging system for digital tomosynthesis (DTS) with a circular X-ray tube, the so-called circular-DTS (CDTS) system, and its image reconstruction algorithm based on the total-variation (TV) minimization method for low-dose, high-accuracy X-ray imaging. Here, the X-ray tube is equipped with a series of cathodes distributed around a rotating anode, and the detector remains stationary throughout the image acquisition. We considered a TV-based reconstruction algorithm that exploited the sparsity of the image with substantially high image accuracy. We implemented the algorithm for the CDTS geometry and successfully reconstructed images of high accuracy. The image characteristics were investigated quantitatively by using some figures of merit, including the universal-quality index (UQI) and the depth resolution. For selected tomographic angles of 20, 40, and 60°, the corresponding UQI values in the tomographic view were estimated to be about 0.94, 0.97, and 0.98, and the depth resolutions were about 4.6, 3.1, and 1.2 voxels in full width at half maximum (FWHM), respectively. We expect the proposed method to be applicable to developing a next-generation dental or breast X-ray imaging system.

  10. Precise signal amplitude retrieval for a non-homogeneous diagnostic beam using complex interferometry approach

    NASA Astrophysics Data System (ADS)

    Krupka, M.; Kalal, M.; Dostal, J.; Dudzak, R.; Juha, L.

    2017-08-01

    Classical interferometry has become a widely used method of active optical diagnostics. Its more advanced version, allowing reconstruction of three sets of data from just one especially designed interferogram (the so-called complex interferogram), was developed in the past and became known as complex interferometry. Along with the phase shift, which can also be retrieved using classical interferometry, the amplitude modifications of the probing part of the diagnostic beam caused by the object under study (to be called the signal amplitude) as well as the contrast of the interference fringes can be retrieved using the complex interferometry approach. In order to partially compensate for errors in the reconstruction due to imperfections in the diagnostic beam intensity structure as well as for errors caused by a non-ideal optical setup of the interferometer itself (including the quality of its optical components), a reference interferogram can be put to good use. This method of interferogram analysis of experimental data has been successfully implemented in practice. However, in the majority of interferometer setups (especially those employing wavefront division) the probe and the reference part of the diagnostic beam would feature different intensity distributions over their respective cross sections. This introduces an additional error into the reconstruction of the signal amplitude and the fringe contrast, which cannot be resolved using the reference interferogram alone. In order to deal with this error it was found that additional separately recorded images of the intensity distribution of the probe and the reference part of the diagnostic beam (with no signal present) are needed. For the best results a sufficient shot-to-shot stability of the whole diagnostic system is required. In this paper, the efficiency of the complex interferometry approach for obtaining the highest possible accuracy of the signal amplitude reconstruction is verified using computer-generated complex and reference interferograms containing artificially introduced intensity variations in the probe and the reference part of the diagnostic beam. These sets of data are subsequently analyzed and the errors of the signal amplitude reconstruction are evaluated.

  11. Priori mask guided image reconstruction (p-MGIR) for ultra-low dose cone-beam computed tomography

    NASA Astrophysics Data System (ADS)

    Park, Justin C.; Zhang, Hao; Chen, Yunmei; Fan, Qiyong; Kahler, Darren L.; Liu, Chihray; Lu, Bo

    2015-11-01

    Recently, the compressed sensing (CS) based iterative reconstruction method has received attention because of its ability to reconstruct cone beam computed tomography (CBCT) images with good quality using sparsely sampled or noisy projections, thus enabling dose reduction. However, some challenges remain. In particular, there is always a tradeoff between image resolution and noise/streak artifact reduction based on the amount of regularization weighting that is applied uniformly across the CBCT volume. The purpose of this study is to develop a novel low-dose CBCT reconstruction algorithm framework called priori mask guided image reconstruction (p-MGIR) that allows reconstruction of high-quality low-dose CBCT images while preserving the image resolution. In p-MGIR, the unknown CBCT volume was mathematically modeled as a combination of two regions: (1) where anatomical structures are complex, and (2) where intensities are relatively uniform. The priori mask, which is the key concept of the p-MGIR algorithm, was defined as the matrix that distinguishes between the two separate CBCT regions where the resolution needs to be preserved and where streak or noise needs to be suppressed. We then alternately updated each part of the image by solving two sub-minimization problems iteratively, where one minimization was focused on preserving the edge information of the first part while the other concentrated on the removal of noise/artifacts from the latter part. To evaluate the performance of the p-MGIR algorithm, a numerical head-and-neck phantom, a Catphan 600 physical phantom, and a clinical head-and-neck cancer case were used for analysis. The results were compared with the standard Feldkamp-Davis-Kress as well as conventional CS-based algorithms. Examination of the p-MGIR algorithm showed that high-quality low-dose CBCT images can be reconstructed without compromising the image resolution. For both the phantom and the patient cases, p-MGIR is able to achieve a clinically-reasonable image with 60 projections. Therefore, a clinically-viable, high-resolution head-and-neck CBCT image can be obtained while cutting the dose by 83%. Moreover, the image quality obtained using p-MGIR is better than the quality obtained using other algorithms. In this work, we propose a novel low-dose CBCT reconstruction algorithm called p-MGIR. It can potentially be used as a CBCT reconstruction algorithm for low-dose scan protocols.

  12. Network reconstructions with partially available data

    NASA Astrophysics Data System (ADS)

    Zhang, Chaoyang; Chen, Yang; Hu, Gang

    2017-06-01

    Many practical systems in the natural and social sciences can be described by dynamical networks. Day by day we have measured and accumulated huge amounts of data from these networks, which can be used to further our understanding of the world. The structures of the networks producing these data are often unknown. Consequently, understanding the structures of these networks from available data turns out to be one of the central issues in interdisciplinary fields, which is called the network reconstruction problem. In this paper, we considered problems of network reconstruction using partially available data and situations where data availability is not sufficient for conventional network reconstruction. Furthermore, we proposed to infer a subnetwork when data are available only for that subnetwork and the other nodes of the entire network are hidden; to depict group-group interactions in networks when only averages over groups of node variables are available; and to perform network reconstruction from node-variable data alone when networks are driven by both unknown internal fast-varying noises and unknown external slowly-varying signals. All these situations are expected to be common in practical systems and the methods and results may be useful for real-world applications.

  13. Intra-Operative Dosimetry in Prostate Brachytherapy

    DTIC Science & Technology

    2007-11-01

    of the focal spot. 2.1. Model for Reconstruction Space Transformation As illustrated in Figure 8, let A & B ( with reference frames FA & FB) be the two...simplex optimization method in MATLAB 7.0 with the search space being defined by the distortion modes from PCA. A linear combination of the modes would...arm is tracked with an X-ray fiducial system called FTRAC that is composed of optimally selected polynomial

  14. Adaptive multimode signal reconstruction from time–frequency representations

    PubMed Central

    Meignen, Sylvain; Oberlin, Thomas; Depalle, Philippe; Flandrin, Patrick

    2016-01-01

    This paper discusses methods for the adaptive reconstruction of the modes of multicomponent AM–FM signals by their time–frequency (TF) representation derived from their short-time Fourier transform (STFT). The STFT of an AM–FM component or mode spreads the information relative to that mode in the TF plane around curves commonly called ridges. An alternative view is to consider a mode as a particular TF domain termed a basin of attraction. Here we discuss two new approaches to mode reconstruction. The first determines the ridge associated with a mode by considering the location where the direction of the reassignment vector sharply changes, the technique used to determine the basin of attraction being directly derived from that used for ridge extraction. The second uses the fact that the STFT of a signal is fully characterized by its zeros (and the particular distribution of these zeros for Gaussian noise) to deduce an algorithm to compute the mode domains. For both techniques, mode reconstruction is then carried out by simply integrating the information inside these basins of attraction or domains. PMID:26953184
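
    As a rough orientation for the ridge idea described above (not the authors' reassignment-vector or spectrogram-zeros algorithms), the sketch below extracts a single dominant mode by locating the maximum-magnitude STFT bin in each frame and inverting the STFT restricted to a narrow band around that ridge; the function name and the fixed band half-width are illustrative choices.

```python
import numpy as np
from scipy.signal import stft, istft

def extract_mode(x, fs, band_halfwidth=3, nperseg=256):
    """Recover one AM-FM mode by masking the STFT around its ridge.

    The ridge is taken as the maximum-magnitude frequency bin in each frame;
    the mode is rebuilt by inverting the STFT restricted to a small band
    around that ridge (a crude stand-in for a basin of attraction).
    """
    f, t, Z = stft(x, fs=fs, nperseg=nperseg)
    ridge = np.argmax(np.abs(Z), axis=0)                 # one bin per frame
    mask = np.zeros(Z.shape, dtype=bool)
    for j, r in enumerate(ridge):
        lo, hi = max(r - band_halfwidth, 0), min(r + band_halfwidth + 1, Z.shape[0])
        mask[lo:hi, j] = True
    _, mode = istft(np.where(mask, Z, 0.0), fs=fs, nperseg=nperseg)
    return mode[:len(x)], f[ridge]                       # mode signal, ridge frequency
```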

  15. Reconstruction and 3D visualisation based on objective real 3D based documentation.

    PubMed

    Bolliger, Michael J; Buck, Ursula; Thali, Michael J; Bolliger, Stephan A

    2012-09-01

    Reconstructions based directly upon forensic evidence alone are called primary information. Historically this consists of documentation of findings by verbal protocols, photographs and other visual means. Currently modern imaging techniques such as 3D surface scanning and radiological methods (computer tomography, magnetic resonance imaging) are also applied. Secondary interpretation is based on facts and the examiner's experience. Usually such reconstructive expertises are given in written form, and are often enhanced by sketches. However, narrative interpretations can, especially in complex courses of action, be difficult to present and can be misunderstood. In this report we demonstrate the use of graphic reconstruction of secondary interpretation with supporting pictorial evidence, applying digital visualisation (using 'Poser') or scientific animation (using '3D Studio Max', 'Maya') and present methods of clearly distinguishing between factual documentation and examiners' interpretation based on three cases. The first case involved a pedestrian who was initially struck by a car on a motorway and was then run over by a second car. The second case involved a suicidal gunshot to the head with a rifle, in which the trigger was pushed with a rod. The third case dealt with a collision between two motorcycles. Pictorial reconstruction of the secondary interpretation of these cases has several advantages. The images enable an immediate overview, give rise to enhanced clarity, and compel the examiner to look at all details if he or she is to create a complete image.

  16. Motion-aware temporal regularization for improved 4D cone-beam computed tomography

    NASA Astrophysics Data System (ADS)

    Mory, Cyril; Janssens, Guillaume; Rit, Simon

    2016-09-01

    Four-dimensional cone-beam computed tomography (4D-CBCT) of the free-breathing thorax is a valuable tool in image-guided radiation therapy of the thorax and the upper abdomen. It allows the determination of the position of a tumor throughout the breathing cycle, while only its mean position can be extracted from three-dimensional CBCT. The classical approaches are not fully satisfactory: respiration-correlated methods allow one to accurately locate high-contrast structures in any frame, but contain strong streak artifacts unless the acquisition is significantly slowed down. Motion-compensated methods can yield streak-free, but static, reconstructions. This work proposes a 4D-CBCT method that can be seen as a trade-off between respiration-correlated and motion-compensated reconstruction. It builds upon the existing reconstruction using spatial and temporal regularization (ROOSTER) and is called motion-aware ROOSTER (MA-ROOSTER). It performs temporal regularization along curved trajectories, following the motion estimated on a prior 4D CT scan. MA-ROOSTER does not involve motion-compensated forward and back projections: the input motion is used only during temporal regularization. MA-ROOSTER is compared to ROOSTER, motion-compensated Feldkamp-Davis-Kress (MC-FDK), and two respiration-correlated methods, on CBCT acquisitions of one physical phantom and two patients. It yields streak-free reconstructions, visually similar to MC-FDK, and robust information on tumor location throughout the breathing cycle. MA-ROOSTER also allows a variation of the lung tissue density during the breathing cycle, similar to that of planning CT, which is required for quantitative post-processing.

  17. Reconstructing genome-wide regulatory network of E. coli using transcriptome data and predicted transcription factor activities

    PubMed Central

    2011-01-01

    Background Gene regulatory networks play essential roles in living organisms to control growth, keep internal metabolism running and respond to external environmental changes. Understanding the connections and the activity levels of regulators is important for the research of gene regulatory networks. While relevance score based algorithms that reconstruct gene regulatory networks from transcriptome data can infer genome-wide gene regulatory networks, they are unfortunately prone to false positive results. Transcription factor activities (TFAs) quantitatively reflect the ability of the transcription factor to regulate target genes. However, classic relevance score based gene regulatory network reconstruction algorithms use models that do not include the TFA layer, thus missing a key regulatory element. Results This work integrates TFA prediction algorithms with relevance score based network reconstruction algorithms to reconstruct gene regulatory networks with improved accuracy over classic relevance score based algorithms. This method is called Gene expression and Transcription factor activity based Relevance Network (GTRNetwork). Different combinations of TFA prediction algorithms and relevance score functions have been applied to find the most efficient combination. When the integrated GTRNetwork method was applied to E. coli data, the reconstructed genome-wide gene regulatory network predicted 381 new regulatory links. This reconstructed gene regulatory network, including the predicted new regulatory links, shows promising biological significance. Many of the new links are verified by known TF binding site information, and many other links can be verified from the literature and databases such as EcoCyc. The reconstructed gene regulatory network is applied to a recent transcriptome analysis of E. coli during isobutanol stress. In addition to the 16 significantly changed TFAs detected in the original paper, another 7 significantly changed TFAs have been detected by using our reconstructed network. Conclusions The GTRNetwork algorithm introduces the hidden layer TFA into classic relevance score-based gene regulatory network reconstruction processes. Integrating the TFA biological information with regulatory network reconstruction algorithms significantly improves the detection of new links and reduces the rate of false positives. The application of GTRNetwork on E. coli gene transcriptome data gives a set of potential regulatory links with promising biological significance for isobutanol stress and other conditions. PMID:21668997
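
    The following toy sketch (not the published GTRNetwork code, whose TFA predictors and relevance score functions are interchangeable) illustrates the two-stage idea: estimate TF activities from expression data and a partial connectivity matrix, then score TF-gene pairs by the correlation between the estimated TFA profiles and gene expression; the matrix shapes and the least-squares TFA step are simplifying assumptions.

```python
import numpy as np

def tfa_relevance(expr, conn):
    """Toy GTRNetwork-style pipeline (illustrative only).

    expr : (genes x samples) expression matrix
    conn : (genes x TFs) known/partial connectivity (control strengths)

    Step 1: estimate TF activities by least squares,  expr ~ conn @ tfa
    Step 2: score each TF-gene pair by the absolute Pearson correlation
            between the estimated TFA profile and the gene's expression.
    """
    tfa, *_ = np.linalg.lstsq(conn, expr, rcond=None)          # (TFs x samples)
    ez = (expr - expr.mean(1, keepdims=True)) / (expr.std(1, keepdims=True) + 1e-12)
    tz = (tfa - tfa.mean(1, keepdims=True)) / (tfa.std(1, keepdims=True) + 1e-12)
    relevance = np.abs(ez @ tz.T) / expr.shape[1]              # (genes x TFs)
    return tfa, relevance
```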

  18. Adaptive compressive ghost imaging based on wavelet trees and sparse representation.

    PubMed

    Yu, Wen-Kai; Li, Ming-Fei; Yao, Xu-Ri; Liu, Xue-Feng; Wu, Ling-An; Zhai, Guang-Jie

    2014-03-24

    Compressed sensing is a theory which can reconstruct an image almost perfectly with only a few measurements by finding its sparsest representation. However, the computation time consumed for large images may be a few hours or more. In this work, we both theoretically and experimentally demonstrate a method that combines the advantages of both adaptive computational ghost imaging and compressed sensing, which we call adaptive compressive ghost imaging, whereby both the reconstruction time and measurements required for any image size can be significantly reduced. The technique can be used to improve the performance of all computational ghost imaging protocols, especially when measuring ultra-weak or noisy signals, and can be extended to imaging applications at any wavelength.
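
    For concreteness, a minimal (non-adaptive) compressed-sensing reconstruction of a ghost image is sketched below with plain ISTA; the adaptive wavelet-tree sampling that gives the paper its speed-up is not reproduced, and sparsity is assumed directly in the pixel basis for brevity.

```python
import numpy as np

def ista_ghost_imaging(A, y, lam=0.01, n_iter=200):
    """Minimal ISTA reconstruction sketch for computational ghost imaging.

    A : (measurements x pixels) matrix of illumination patterns
    y : bucket-detector measurements
    Solves  min_x 0.5*||A x - y||^2 + lam*||x||_1  (real images usually need
    a wavelet/DCT basis rather than pixel-domain sparsity).
    """
    L = np.linalg.norm(A, 2) ** 2            # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - y)
        z = x - grad / L
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)   # soft threshold
    return x
```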

  19. A Multi Directional Perfect Reconstruction Filter Bank Designed with 2-D Eigenfilter Approach: Application to Ultrasound Speckle Reduction.

    PubMed

    Nagare, Mukund B; Patil, Bhushan D; Holambe, Raghunath S

    2017-02-01

    B-Mode ultrasound images are degraded by inherent noise called speckle, which has a considerable impact on image quality. This noise reduces the accuracy of image analysis and interpretation. Therefore, reduction of speckle noise is an essential task which improves the accuracy of clinical diagnostics. In this paper, a multi-directional perfect-reconstruction (PR) filter bank is proposed based on a 2-D eigenfilter approach. The proposed method is used for the design of a two-dimensional (2-D) two-channel linear-phase FIR perfect-reconstruction filter bank. In this method, fan-shaped, diamond-shaped and checkerboard-shaped filters are designed. The quadratic measure of the error function between the passband and stopband of the filter has been used as the objective function. First, the low-pass analysis filter is designed and then the PR condition is expressed as a set of linear constraints on the corresponding synthesis low-pass filter. Subsequently, the corresponding synthesis filter is designed using the eigenfilter design method with linear constraints. The newly designed 2-D filters are used in a translation-invariant pyramidal directional filter bank (TIPDFB) for reduction of speckle noise in ultrasound images. The proposed 2-D filters give better symmetry, regularity and frequency selectivity in comparison to existing design methods. The proposed method is validated on synthetic and real ultrasound data, which shows improvement in the quality of ultrasound images and efficient suppression of speckle noise compared to existing methods.

  20. WE-G-207-04: Non-Local Total-Variation (NLTV) Combined with Reweighted L1-Norm for Compressed Sensing Based CT Reconstruction

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kim, H; Chen, J; Pouliot, J

    2015-06-15

    Purpose: Compressed sensing (CS) has been used for CT (4DCT/CBCT) reconstruction with few projections to reduce the dose of radiation. Total-variation (TV) in L1-minimization (min.) with local information is the prevalent technique in CS, but it can be prone to noise. To address the problem, this work proposes to apply a new image processing technique, called non-local TV (NLTV), to CS based CT reconstruction, and incorporate the reweighted L1-norm into it for more precise reconstruction. Methods: TV minimizes intensity variations by considering two local neighboring voxels, which can be prone to noise, possibly damaging the reconstructed CT image. NLTV, in contrast, utilizes more global information by computing a weight function of the current voxel relative to a surrounding search area. In fact, it might be challenging to obtain an optimal solution due to the difficulty in defining the weight function with appropriate parameters. Introducing reweighted L1-min., designed to approximate the ideal L0-min., can reduce the dependence on defining the weight function, therefore improving the accuracy of the solution. This work implemented the NLTV combined with reweighted L1-min. by the Split Bregman iterative method. For evaluation, a noisy digital phantom and pelvic CT images are employed to compare the quality of images reconstructed by TV, NLTV and reweighted NLTV. Results: In both cases, conventional and reweighted NLTV outperform TV min. in signal-to-noise ratio (SNR) and root-mean squared errors of the reconstructed images. Relative to conventional NLTV, NLTV with the reweighted L1-norm was able to slightly improve SNR, while greatly increasing the contrast between tissues due to the additional iterative reweighting process. Conclusion: NLTV min. can provide more precise compressed sensing based CT image reconstruction by incorporating the reweighted L1-norm, while maintaining greater robustness to the noise effect than TV min.
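
    The reweighting mechanism itself is simple to state; the sketch below shows it on a plain denoising surrogate (soft thresholding of coefficients) rather than inside the NLTV/Split Bregman solver used in the abstract, so the variable names and the closed-form inner step are illustrative only.

```python
import numpy as np

def reweighted_l1_denoise(y, lam=0.1, n_outer=5, eps=1e-3):
    """Sketch of the reweighted L1 idea on plain coefficients (not NLTV).

    Each outer pass solves  min_x 0.5*||x - y||^2 + lam*sum_i w_i |x_i|
    in closed form (soft thresholding), then updates w_i = 1/(|x_i| + eps)
    so that large coefficients are penalized less, approximating L0.
    """
    w = np.ones_like(y)
    x = y.copy()
    for _ in range(n_outer):
        thresh = lam * w
        x = np.sign(y) * np.maximum(np.abs(y) - thresh, 0.0)
        w = 1.0 / (np.abs(x) + eps)
    return x
```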

  1. Representation of photon limited data in emission tomography using origin ensembles

    NASA Astrophysics Data System (ADS)

    Sitek, A.

    2008-06-01

    Representation and reconstruction of data obtained by emission tomography scanners are challenging due to high noise levels in the data. Typically, images obtained using tomographic measurements are represented using grids. In this work, we define images as sets of origins of events detected during tomographic measurements; we call these origin ensembles (OEs). A state in the ensemble is characterized by a vector of 3N parameters Y, where the parameters are the coordinates of origins of detected events in a three-dimensional space and N is the number of detected events. The 3N-dimensional probability density function (PDF) for that ensemble is derived, and we present an algorithm for OE image estimation from tomographic measurements. A displayable image (e.g. grid based image) is derived from the OE formulation by calculating ensemble expectations based on the PDF using the Markov chain Monte Carlo method. The approach was applied to computer-simulated 3D list-mode positron emission tomography data. The reconstruction errors for a simulated 10 000 000 event acquisition ranged from 0.1 to 34.8%, depending on object size and sampling density. The method was also applied to experimental data and the results of the OE method were consistent with those obtained by a standard maximum-likelihood approach. The method is a new approach to representation and reconstruction of data obtained by photon-limited emission tomography measurements.

  2. SU-D-206-02: Evaluation of Partial Storage of the System Matrix for Cone Beam Computed Tomography Using a GPU Platform

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Matenine, D; Cote, G; Mascolo-Fortin, J

    2016-06-15

    Purpose: Iterative reconstruction algorithms in computed tomography (CT) require a fast method for computing the intersections between the photons’ trajectories and the object, also called ray-tracing or system matrix computation. This work evaluates different ways to store the system matrix, aiming to reconstruct dense image grids in reasonable time. Methods: We propose an optimized implementation of Siddon’s algorithm using graphics processing units (GPUs) with a novel data storage scheme. The algorithm computes a part of the system matrix on demand, typically, for one projection angle. The proposed method was enhanced with accelerating options: storage of larger subsets of the system matrix, systematic reuse of data via geometric symmetries, an arithmetic-rich parallel code and code configuration via machine learning. It was tested on geometries mimicking a cone beam CT acquisition of a human head. To realistically assess the execution time, the ray-tracing routines were integrated into a regularized Poisson-based reconstruction algorithm. The proposed scheme was also compared to a different approach, where the system matrix is fully pre-computed and loaded at reconstruction time. Results: Fast ray-tracing of realistic acquisition geometries, which often lack spatial symmetry properties, was enabled via the proposed method. Ray-tracing interleaved with projection and backprojection operations required significant additional time. In most cases, ray-tracing was shown to use about 66 % of the total reconstruction time. In absolute terms, tracing times varied from 3.6 s to 7.5 min, depending on the problem size. The presence of geometrical symmetries allowed for non-negligible ray-tracing and reconstruction time reduction. Arithmetic-rich parallel code and machine learning permitted a modest reconstruction time reduction, in the order of 1 %. Conclusion: Partial system matrix storage permitted the reconstruction of higher 3D image grid sizes and larger projection datasets at the cost of additional time, when compared to the fully pre-computed approach. This work was supported in part by the Fonds de recherche du Quebec - Nature et technologies (FRQ-NT). The authors acknowledge partial support by the CREATE Medical Physics Research Training Network grant of the Natural Sciences and Engineering Research Council of Canada (Grant No. 432290).
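
    As a reference point for what computing part of the system matrix on demand involves, here is a simplified, pure-Python 2D version of Siddon-style ray tracing for a single ray; the GPU implementation, symmetry reuse and machine-learning tuning described in the abstract are not reproduced, and the function name and grid conventions are illustrative.

```python
import numpy as np

def siddon_ray_2d(p1, p2, nx, ny, dx=1.0, dy=1.0, origin=(0.0, 0.0)):
    """Return (pixel_index, intersection_length) pairs for the ray p1 -> p2
    through an nx-by-ny pixel grid (i.e. one row of the system matrix)."""
    p1, p2 = np.asarray(p1, float), np.asarray(p2, float)
    d = p2 - p1
    # parametric positions where the ray crosses the vertical/horizontal grid lines
    ax = ((origin[0] + np.arange(nx + 1) * dx) - p1[0]) / d[0] if d[0] != 0 else np.array([])
    ay = ((origin[1] + np.arange(ny + 1) * dy) - p1[1]) / d[1] if d[1] != 0 else np.array([])
    alphas = np.unique(np.concatenate(([0.0, 1.0], ax, ay)))
    alphas = alphas[(alphas >= 0.0) & (alphas <= 1.0)]
    ray_length = np.linalg.norm(d)
    entries = []
    for a0, a1 in zip(alphas[:-1], alphas[1:]):
        mid = p1 + 0.5 * (a0 + a1) * d                 # segment midpoint identifies the pixel
        ix = int((mid[0] - origin[0]) // dx)
        iy = int((mid[1] - origin[1]) // dy)
        if 0 <= ix < nx and 0 <= iy < ny:
            entries.append((iy * nx + ix, (a1 - a0) * ray_length))
    return entries
```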

  3. Extreme 3D reconstruction of the final ROSETTA/PHILAE landing site

    NASA Astrophysics Data System (ADS)

    Capanna, Claire; Jorda, Laurent; Lamy, Philippe; Gesquiere, Gilles; Delmas, Cédric; Durand, Joelle; Garmier, Romain; Gaudon, Philippe; Jurado, Eric

    2016-04-01

    The Philae lander aboard the Rosetta spacecraft successfully landed at the surface of comet 67P/Churyumov-Gerasimenko (hereafter 67P/C-G) after two rebounds on November 12, 2014. The final landing site, now known as « Abydos », has been identified on images acquired by the OSIRIS imaging system onboard the Rosetta orbiter[1]. The available images of Abydos are very limited in number and reveal a very extreme topography containing cliffs and overhangs. Furthermore, the surface is only observed under very high incidence angles of 60° on average, which implies that the images also exhibit many cast shadows. This makes it very difficult to reconstruct the 3D topography with standard methods such as photogrammetry or standard clinometry. We apply a new method called "Multiresolution PhotoClinometry by Deformation" (MPCD, [2]) to retrieve the 3D topography of the area around Abydos. The method works in two main steps: (i) a DTM of this region is extracted from a low resolution MPCD global shape model of comet 67P/C-G, and (ii) the resulting triangular mesh is progressively deformed at increasing spatial sampling down to 0.25 m in order to match a set of 14 images of Abydos with projected pixel scales between 1 and 8 m. The method used to perform the image matching is a quasi-Newton non-linear optimization method called L-BFGS-b[3] especially suited to large-scale problems. Finally, we also checked the compatibility of the final MPCD digital terrain model with a set of five panoramic images obtained by the CIVA-P instrument aboard Philae[4]. [1] Lamy et al., 2016, submitted. [2] Capanna et al., Three-dimensional reconstruction using multiresolution photoclinometry by deformation, The Visual Computer, v. 29(6-8) pp. 825-835, 2013. [3] Morales et al., Remark on "Algorithm 778: L-BFGS-B: Fortran subroutines for large-scale bound constrained optimization", v. 38(1) pp. 1-4, ACM Trans. Math. Softw., 2011. [4] Bibring et al., 67P/Churyumov-Gerasimenko surface properties as derived from CIVA panoramic images, Science, v. 349(6247), 2015.

  4. P-Finder: Reconstruction of Signaling Networks from Protein-Protein Interactions and GO Annotations.

    PubMed

    Young-Rae Cho; Yanan Xin; Speegle, Greg

    2015-01-01

    Because most complex genetic diseases are caused by defects of cell signaling, illuminating a signaling cascade is essential for understanding their mechanisms. We present three novel computational algorithms to reconstruct signaling networks between a starting protein and an ending protein using genome-wide protein-protein interaction (PPI) networks and gene ontology (GO) annotation data. A signaling network is represented as a directed acyclic graph in a merged form of multiple linear pathways. An advanced semantic similarity metric is applied for weighting PPIs as the preprocessing of all three methods. The first algorithm repeatedly extends the list of nodes based on path frequency towards an ending protein. The second algorithm repeatedly appends edges based on the occurrence of network motifs which indicate the link patterns more frequently appearing in a PPI network than in a random graph. The last algorithm uses the information propagation technique which iteratively updates edge orientations based on the path strength and merges the selected directed edges. Our experimental results demonstrate that the proposed algorithms achieve higher accuracy than previous methods when they are tested on well-studied pathways of S. cerevisiae. Furthermore, we introduce an interactive web application tool, called P-Finder, to visualize reconstructed signaling networks.

  5. The 2016 interferometric imaging beauty contest

    NASA Astrophysics Data System (ADS)

    Sanchez-Bermudez, J.; Thiébaut, E.; Hofmann, K.-H.; Heininger, M.; Schertl, D.; Weigelt, G.; Millour, F.; Schutz, A.; Ferrari, A.; Vannier, M.; Mary, D.; Young, J.

    2016-08-01

    Image reconstruction in optical interferometry has gained considerable importance for astrophysical studies during the last decade. This has been mainly due to improvements in the imaging capabilities of existing interferometers and the expectation of new facilities in the coming years. However, despite the advances made so far, image synthesis in optical interferometry is still an open field of research. Since 2004, the community has organized a biennial contest to formally test the different methods and algorithms for image reconstruction. In 2016, we celebrated the 7th edition of the "Interferometric Imaging Beauty Contest". This initiative represented an open call to participate in the reconstruction of a selected set of simulated targets with a wavelength-dependent morphology as they could be observed by the 2nd generation of VLTI instruments. This contest represents a unique opportunity to benchmark, in a systematic way, the current advances and limitations in the field, as well as to discuss possible future approaches. In this contribution, we summarize: (a) the rules of the 2016 contest; (b) the different data sets used and the selection procedure; (c) the methods and results obtained by each one of the participants; and (d) the metric used to select the best reconstructed images. Finally, we named Karl-Heinz Hofmann and the group of the Max-Planck-Institut fur Radioastronomie as winners of this edition of the contest.

  6. Performance Assessment of Different Pulse Reconstruction Algorithms for the ATHENA X-Ray Integral Field Unit

    NASA Technical Reports Server (NTRS)

    Peille, Phillip; Ceballos, Maria Teresa; Cobo, Beatriz; Wilms, Joern; Bandler, Simon; Smith, Stephen J.; Dauser, Thomas; Brand, Thorsten; Den Haretog, Roland; de Plaa, Jelle

    2016-01-01

    The X-ray Integral Field Unit (X-IFU) microcalorimeter, on-board Athena, with its focal plane comprising 3840 Transition Edge Sensors (TESs) operating at 90 mK, will provide unprecedented spectral-imaging capability in the 0.2-12 keV energy range. It will rely on the on-board digital processing of current pulses induced by the heat deposited in the TES absorber, so as to recover the energy of each individual event. Assessing the capabilities of the pulse reconstruction is required to understand the overall scientific performance of the X-IFU, notably in terms of energy resolution degradation with both increasing energies and count rates. Using synthetic data streams generated by the X-IFU End-to-End simulator, we present here a comprehensive benchmark of various pulse reconstruction techniques, ranging from standard optimal filtering to more advanced algorithms based on noise covariance matrices. Besides deriving the spectral resolution achieved by the different algorithms, a first assessment of the computing power and ground calibration needs is presented. Overall, all methods show similar performances, with the reconstruction based on noise covariance matrices showing the best improvement with respect to the standard optimal filtering technique. Due to prohibitive calibration needs, this method might however not be applicable to the X-IFU, and the best compromise currently appears to be the so-called resistance space analysis, which also features very promising high count rate capabilities.

  7. Environmental constraints and call evolution in torrent-dwelling frogs.

    PubMed

    Goutte, Sandra; Dubois, Alain; Howard, Samuel D; Marquez, Rafael; Rowley, Jodi J L; Dehling, J Maximilian; Grandcolas, Philippe; Rongchuan, Xiong; Legendre, Frédéric

    2016-04-01

    Although acoustic signals are important for communication in many taxa, signal propagation is affected by environmental properties. Strong environmental constraints should drive call evolution, favoring signals with greater transmission distance and content integrity in a given calling habitat. Yet, few empirical studies have verified this prediction, possibly due to a shortcoming in habitat characterization, which is often too broad. Here we assess the potential impact of environmental constraints on the evolution of advertisement call in four groups of torrent-dwelling frogs in the family Ranidae. We reconstruct the evolution of calling site preferences, both broadly categorized and at a finer scale, onto a phylogenetic tree for 148 species with five markers (∼3600 bp). We test models of evolution for six call traits for 79 species with regard to the reconstructed history of calling site preferences and estimate their ancestral states. We find that in spite of existing morphological constraints, vocalizations of torrent-dwelling species are most probably constrained by the acoustic specificities of torrent habitats and particularly their high level of ambient noise. We also show that a fine-scale characterization of calling sites allows a better perception of the impact of environmental constraints on call evolution. © 2016 The Author(s). Evolution © 2016 The Society for the Study of Evolution.

  8. Regularized reconstruction of absorbing and phase objects from a single in-line hologram, application to fluid mechanics and micro-biology.

    PubMed

    Jolivet, Frédéric; Momey, Fabien; Denis, Loïc; Méès, Loïc; Faure, Nicolas; Grosjean, Nathalie; Pinston, Frédéric; Marié, Jean-Louis; Fournier, Corinne

    2018-04-02

    Reconstruction of phase objects is a central problem in digital holography, whose various applications include microscopy, biomedical imaging, and fluid mechanics. Starting from a single in-line hologram, there is no direct way to recover the phase of the diffracted wave in the hologram plane. The reconstruction of absorbing and phase objects therefore requires the inversion of the non-linear hologram formation model. We propose a regularized reconstruction method that includes several physically-grounded constraints such as bounds on transmittance values, maximum/minimum phase, spatial smoothness or the absence of any object in parts of the field of view. To solve the non-convex and non-smooth optimization problem induced by our modeling, a variable splitting strategy is applied and the closed-form solution of the sub-problem (the so-called proximal operator) is derived. The resulting algorithm is efficient and is shown to lead to quantitative phase estimation on reconstructions of accurate simulations of in-line holograms based on the Mie theory. As our approach is adaptable to several in-line digital holography configurations, we present and discuss the promising results of reconstructions from experimental in-line holograms obtained in two different applications: the tracking of an evaporating droplet (size ∼ 100μm) and the microscopic imaging of bacteria (size ∼ 1μm).

  9. A study of glasses-type color CGH using a color filter considering reduction of blurring

    NASA Astrophysics Data System (ADS)

    Iwami, Saki; Sakamoto, Yuji

    2009-02-01

    We have developed a glasses-type color computer generated hologram (CGH) by using a color filter. The proposed glasses consist of two "lenses" made of overlapping holograms and color filters. The holograms, which are calculated to reconstruct images in each primary color, are divided into small areas, which we call cells, and superimposed on one hologram. In the same way, the colors of the filter correspond to the hologram cells. We can configure it very simply without a complex optical system, and the configuration yields a small and lightweight system suitable for glasses. When the cell is small enough, the colors are mixed and reconstructed color images are observed. In addition, the color expression of the reconstructed images improves. However, using small cells blurs the reconstructed images for the following reasons: (1) interference between cells due to the correlation between them, and (2) reduction of resolution caused by the size of the cell hologram. We are investigating how to make a hologram that yields high-resolution reconstructed color images without ghost images. In this paper, we discuss (1) the details of the proposed glasses-type color CGH, (2) the appropriate cell size for an eye system, (3) the effects of cell shape on the reconstructed images, and (4) a new method to reduce the blurring of the images.

  10. 3D medical volume reconstruction using web services.

    PubMed

    Kooper, Rob; Shirk, Andrew; Lee, Sang-Chul; Lin, Amy; Folberg, Robert; Bajcsy, Peter

    2008-04-01

    We address the problem of 3D medical volume reconstruction using web services. The use of the proposed web services is motivated by the fact that the problem of 3D medical volume reconstruction requires significant computer resources and human expertise in the medical and computer science areas. Web services are implemented as an additional layer to a dataflow framework called Data to Knowledge. In the collaboration between UIC and NCSA, pre-processed input images at NCSA are made accessible to medical collaborators for registration. Every time the UIC medical collaborators inspect images and select corresponding features for registration, the web service at NCSA is contacted and the registration processing query is executed using the Image to Knowledge library of registration methods. Co-registered frames are returned for verification by the medical collaborators in a new window. In this paper, we present the 3D volume reconstruction problem requirements and the architecture of the developed prototype system at http://isda.ncsa.uiuc.edu/MedVolume. We also explain the tradeoffs of our system design and provide experimental data to support our system implementation. The prototype system has been used for multiple 3D volume reconstructions of blood vessels and vasculogenic mimicry patterns in histological sections of uveal melanoma studied by fluorescent confocal laser scanning microscope.

  11. Comparison Study of Regularizations in Spectral Computed Tomography Reconstruction

    NASA Astrophysics Data System (ADS)

    Salehjahromi, Morteza; Zhang, Yanbo; Yu, Hengyong

    2018-12-01

    The energy-resolving photon-counting detectors in spectral computed tomography (CT) can acquire projections of an object in different energy channels. In other words, they are able to reliably distinguish the received photon energies. These detectors lead to the emerging spectral CT, which is also called multi-energy CT, energy-selective CT, color CT, etc. Spectral CT can provide additional information in comparison with the conventional CT in which energy integrating detectors are used to acquire polychromatic projections of an object being investigated. The measurements obtained by X-ray CT detectors are noisy in reality, especially in spectral CT where the photon number is low in each energy channel. Therefore, some regularization should be applied to obtain a better image quality for this ill-posed problem in spectral CT image reconstruction. Quadratic-based regularizations are not often satisfactory as they blur the edges in the reconstructed images. As a result, different edge-preserving regularization methods have been adopted for reconstructing high quality images in the last decade. In this work, we numerically evaluate the performance of different regularizers in spectral CT, including total variation, non-local means and anisotropic diffusion. The goal is to provide some practical guidance to accurately reconstruct the attenuation distribution in each energy channel of the spectral CT data.
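
    For readers wanting a concrete starting point, a short sketch of one of the compared edge-preserving regularizers (Perona-Malik anisotropic diffusion) applied to a single energy-channel image is given below; the parameter values and the periodic border handling are illustrative simplifications, and the total variation and non-local means regularizers discussed in the abstract are not shown.

```python
import numpy as np

def anisotropic_diffusion(channel_img, n_iter=20, kappa=30.0, dt=0.15):
    """Perona-Malik anisotropic diffusion on one energy-channel image."""
    u = channel_img.astype(float).copy()
    for _ in range(n_iter):
        # differences to the four neighbours (periodic border via np.roll, for brevity)
        dn = np.roll(u, -1, axis=0) - u
        ds = np.roll(u,  1, axis=0) - u
        de = np.roll(u, -1, axis=1) - u
        dw = np.roll(u,  1, axis=1) - u
        # conduction coefficients: small across strong edges, so edges are preserved
        cn, cs = np.exp(-(dn / kappa) ** 2), np.exp(-(ds / kappa) ** 2)
        ce, cw = np.exp(-(de / kappa) ** 2), np.exp(-(dw / kappa) ** 2)
        u += dt * (cn * dn + cs * ds + ce * de + cw * dw)
    return u
```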

  12. Off-axis phase-only holograms of 3D objects using accelerated point-based Fresnel diffraction algorithm

    NASA Astrophysics Data System (ADS)

    Zeng, Zhenxiang; Zheng, Huadong; Yu, Yingjie; Asundi, Anand K.

    2017-06-01

    A method for calculating off-axis phase-only holograms of a three-dimensional (3D) object using an accelerated point-based Fresnel diffraction algorithm (PB-FDA) is proposed. The complex amplitudes of the object points on the z-axis in the hologram plane are calculated using the Fresnel diffraction formula and are called principal complex amplitudes (PCAs). The complex amplitudes of off-axis object points at the same depth can be obtained by 2D shifting of the PCAs. In order to improve the calculation speed of the PB-FDA, a convolution operation based on the fast Fourier transform (FFT) is used to calculate the holograms rather than point-by-point spatial 2D shifting of the PCAs. The shortest recording distance of the PB-FDA is analyzed in order to remove the influence of multiple-order images in reconstructed images. The optimal recording distance of the PB-FDA is also analyzed to improve the quality of reconstructed images. Numerical reconstructions and optical reconstructions with a phase-only spatial light modulator (SLM) show that holographic 3D display is feasible with the proposed algorithm. The proposed PB-FDA can also avoid the influence of the zero-order image introduced by the SLM in optically reconstructed images.
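
    A minimal sketch of the FFT-convolution idea is given below, under the assumption that all object points of one depth layer are gathered into a complex point map which is convolved with that layer's on-axis PCA pattern; the paraxial kernel, the circular convolution and the function names are illustrative simplifications rather than the paper's exact formulation.

```python
import numpy as np

def fresnel_kernel(n, pitch, wavelength, z):
    """Principal complex amplitude of an on-axis point at depth z
    (paraxial Fresnel impulse response, constant phase factors dropped)."""
    x = (np.arange(n) - n // 2) * pitch
    X, Y = np.meshgrid(x, x)
    return np.exp(1j * np.pi * (X**2 + Y**2) / (wavelength * z))

def layer_hologram_fft(point_map, kernel):
    """Sum the shifted PCAs of all points at one depth via FFT convolution
    instead of point-by-point 2D shifting (circular convolution and no
    centering shift of the kernel, for brevity; zero padding and ifftshift
    would be used in practice)."""
    return np.fft.ifft2(np.fft.fft2(point_map) * np.fft.fft2(kernel))
```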

  13. A Gauss-Seidel Iteration Scheme for Reference-Free 3-D Histological Image Reconstruction

    PubMed Central

    Daum, Volker; Steidl, Stefan; Maier, Andreas; Köstler, Harald; Hornegger, Joachim

    2015-01-01

    Three-dimensional (3-D) reconstruction of histological slice sequences offers great benefits in the investigation of different morphologies. It features very high-resolution which is still unmatched by in-vivo 3-D imaging modalities, and tissue staining further enhances visibility and contrast. One important step during reconstruction is the reversal of slice deformations introduced during histological slice preparation, a process also called image unwarping. Most methods use an external reference, or rely on conservative stopping criteria during the unwarping optimization to prevent straightening of naturally curved morphology. Our approach shows that the problem of unwarping is based on the superposition of low-frequency anatomy and high-frequency errors. We present an iterative scheme that transfers the ideas of the Gauss-Seidel method to image stacks to separate the anatomy from the deformation. In particular, the scheme is universally applicable without restriction to a specific unwarping method, and uses no external reference. The deformation artifacts are effectively reduced in the resulting histology volumes, while the natural curvature of the anatomy is preserved. The validity of our method is shown on synthetic data, simulated histology data using a CT data set and real histology data. In the case of the simulated histology where the ground truth was known, the mean Target Registration Error (TRE) between the unwarped and original volume could be reduced to less than 1 pixel on average after 6 iterations of our proposed method. PMID:25312918

  14. Creative display of museum objects within their cultural context

    NASA Astrophysics Data System (ADS)

    Wang, Shuo; Osanlou, Ardieshir; Excell, Peter

    2014-02-01

    Most existing holographic display methods concentrate on real object reconstruction, but there is a lack of research on object stories (revealing and presenting histories). To address this challenge, we propose a method, called 4 'ER' (leader, manager, implementer, presenter), to experience and respond to objects in a special immersive environment. The key innovation of the 4 'ER' method is to introduce the stories (political, historical, etc.) into hard copy holography, so as to synergize art and science for museum object display. A hologram of an imitation of a blue and white porcelain jar from The Palace Museum, Beijing, China has been made, showing good performance and reflecting a different pathway to knowledge.

  15. GREIT: a unified approach to 2D linear EIT reconstruction of lung images.

    PubMed

    Adler, Andy; Arnold, John H; Bayford, Richard; Borsic, Andrea; Brown, Brian; Dixon, Paul; Faes, Theo J C; Frerichs, Inéz; Gagnon, Hervé; Gärber, Yvo; Grychtol, Bartłomiej; Hahn, Günter; Lionheart, William R B; Malik, Anjum; Patterson, Robert P; Stocks, Janet; Tizzard, Andrew; Weiler, Norbert; Wolf, Gerhard K

    2009-06-01

    Electrical impedance tomography (EIT) is an attractive method for clinically monitoring patients during mechanical ventilation, because it can provide a non-invasive continuous image of pulmonary impedance which indicates the distribution of ventilation. However, most clinical and physiological research in lung EIT is done using older and proprietary algorithms; this is an obstacle to interpretation of EIT images because the reconstructed images are not well characterized. To address this issue, we develop a consensus linear reconstruction algorithm for lung EIT, called GREIT (Graz consensus Reconstruction algorithm for EIT). This paper describes the unified approach to linear image reconstruction developed for GREIT. The framework for the linear reconstruction algorithm consists of (1) detailed finite element models of a representative adult and neonatal thorax, (2) consensus on the performance figures of merit for EIT image reconstruction and (3) a systematic approach to optimize a linear reconstruction matrix to desired performance measures. Consensus figures of merit, in order of importance, are (a) uniform amplitude response, (b) small and uniform position error, (c) small ringing artefacts, (d) uniform resolution, (e) limited shape deformation and (f) high resolution. Such figures of merit must be attained while maintaining small noise amplification and small sensitivity to electrode and boundary movement. This approach represents the consensus of a large and representative group of experts in EIT algorithm design and clinical applications for pulmonary monitoring. All software and data to implement and test the algorithm have been made available under an open source license which allows free research and commercial use.
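
    The core of such a training-based linear reconstruction can be sketched as a regularized least-squares fit of the reconstruction matrix to simulated target/measurement pairs, as below; the actual GREIT algorithm additionally weights the training data to meet the listed figures of merit, which this sketch omits, and the variable names are illustrative.

```python
import numpy as np

def train_linear_reconstructor(V, X, lam=1e-2):
    """Fit a linear EIT reconstruction matrix R from simulated training data.

    V : (n_meas x n_train) boundary-voltage differences for small targets
    X : (n_pix  x n_train) corresponding desired images of those targets
    Minimizes  sum_k ||R v_k - x_k||^2 + lam*||R||_F^2, whose solution is the
    ridge estimate  R = X V^T (V V^T + lam*I)^{-1}.
    """
    n_meas = V.shape[0]
    R = X @ V.T @ np.linalg.inv(V @ V.T + lam * np.eye(n_meas))
    return R   # image = R @ measured_voltage_difference
```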

  16. Ultrafast and scalable cone-beam CT reconstruction using MapReduce in a cloud computing environment

    PubMed Central

    Meng, Bowen; Pratx, Guillem; Xing, Lei

    2011-01-01

    Purpose: Four-dimensional CT (4DCT) and cone beam CT (CBCT) are widely used in radiation therapy for accurate tumor target definition and localization. However, high-resolution and dynamic image reconstruction is computationally demanding because of the large amount of data processed. Efficient use of these imaging techniques in the clinic requires high-performance computing. The purpose of this work is to develop a novel ultrafast, scalable and reliable image reconstruction technique for 4D CBCT/CT using a parallel computing framework called MapReduce. We show the utility of MapReduce for solving large-scale medical physics problems in a cloud computing environment. Methods: In this work, we accelerated the Feldkamp-Davis-Kress (FDK) algorithm by porting it to Hadoop, an open-source MapReduce implementation. Gated phases from a 4DCT scan were reconstructed independently. Following the MapReduce formalism, Map functions were used to filter and backproject subsets of projections, and a Reduce function to aggregate those partial backprojections into the whole volume. MapReduce automatically parallelized the reconstruction process on a large cluster of computer nodes. As a validation, reconstruction of a digital phantom and an acquired CatPhan 600 phantom was performed on a commercial cloud computing environment using the proposed 4D CBCT/CT reconstruction algorithm. Results: Speedup of reconstruction time is found to be roughly linear with the number of nodes employed. For instance, greater than 10 times speedup was achieved using 200 nodes for all cases, compared to the same code executed on a single machine. Without modifying the code, faster reconstruction is readily achievable by allocating more nodes in the cloud computing environment. The root mean square error between the images obtained using MapReduce and a single-threaded reference implementation was on the order of 10^-7. Our study also proved that cloud computing with MapReduce is fault tolerant: the reconstruction completed successfully with identical results even when half of the nodes were manually terminated in the middle of the process. Conclusions: An ultrafast, reliable and scalable 4D CBCT/CT reconstruction method was developed using the MapReduce framework. Unlike other parallel computing approaches, the parallelization and speedup required little modification of the original reconstruction code. MapReduce provides an efficient and fault tolerant means of solving large-scale computing problems in a cloud computing environment. PMID:22149842
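
    The Map/Reduce split described above can be illustrated with a toy, parallel-beam, unfiltered backprojection in plain Python (no Hadoop, no ramp filtering, no cone-beam weighting); the chunk format and image size are assumptions made for the sketch.

```python
import numpy as np
from functools import reduce
from scipy.ndimage import rotate

def bp_map(chunk, size=256):
    """Map step: backproject one chunk of (angle_deg, detector_row) pairs into
    a partial image; detector rows are assumed to have `size` samples, and a
    real FDK map would also ramp-filter and weight the projections."""
    partial = np.zeros((size, size))
    for angle_deg, row in chunk:
        smear = np.tile(row, (size, 1))                     # smear the row along its rays
        partial += rotate(smear, angle_deg, reshape=False, order=1)
    return partial

def bp_reduce(a, b):
    """Reduce step: partial backprojections of disjoint chunks simply add up."""
    return a + b

# image = reduce(bp_reduce, map(bp_map, chunks))   # chunks: lists of (angle, row) pairs
```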

  17. Minimizing camera-eye optical aberrations during the 3D reconstruction of retinal structures

    NASA Astrophysics Data System (ADS)

    Aldana-Iuit, Javier; Martinez-Perez, M. Elena; Espinosa-Romero, Arturo; Diaz-Uribe, Rufino

    2010-05-01

    3D reconstruction of blood vessels is a powerful visualization tool for physicians, since it allows them to refer to a qualitative representation of their subject of study. In this paper we propose a 3D reconstruction method of retinal vessels from fundus images. The reconstruction method proposed herein uses images of the same retinal structure in epipolar geometry. Images are preprocessed by the RISA system for segmenting blood vessels and obtaining feature points for correspondences. The point correspondence process is solved using correlation. LMedS analysis and the Graph Transformation Matching algorithm are used for outlier suppression. Camera projection matrices are computed with the normalized eight point algorithm. Finally, we retrieve the 3D position of the retinal tree points by linear triangulation. In order to increase the power of visualization, 3D tree skeletons are represented by surfaces via generalized cylinders whose radii correspond to morphological measurements obtained by RISA. In this paper the complete calibration process, including the fundus camera and the optical properties of the eye (the so-called camera-eye system), is proposed. On the one hand, the internal parameters of the fundus camera are obtained by classical algorithms using a reference pattern. On the other hand, we minimize the undesirable effects of the aberrations induced by the eyeball optical system assuming that the contact enlarging lens corrects astigmatism, spherical and coma aberrations are reduced by changing the aperture size, and eye refractive errors are suppressed by adjusting the camera focus during image acquisition. Evaluation of two self-calibration proposals and results of 3D blood vessel surface reconstruction are presented.
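
    Only the final linear-triangulation step lends itself to a compact illustration; the sketch below uses the standard DLT formulation (the segmentation, matching, outlier rejection and camera-eye calibration steps of the paper are not reproduced, and the function name is illustrative).

```python
import numpy as np

def triangulate_point(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one 3D point from two views.

    P1, P2 : 3x4 camera projection matrices
    x1, x2 : (u, v) image coordinates of the matched point in each view
    """
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]          # back from homogeneous coordinates
```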

  18. Programmable CGH on photochromic material using DMD

    NASA Astrophysics Data System (ADS)

    Alata, Romain; Pariani, Giorgio; Zamkotsian, Frederic; Lanzoni, Patrick; Bianco, Andrea; Bertarelli, Chiara

    2016-07-01

    Computer Generated Holograms (CGHs) are useful for wavefront shaping and complex optics testing, including aspherical and free-form optics. Today, CGHs are recorded directly with a laser or through intermediate masks, but this allows only binary CGHs to be recorded; binary CGHs are efficient but can reconstruct only pixelated images. We propose to use a Digital Micro-mirror Device (DMD) for writing binary CGHs as well as grayscale CGHs, able to reconstruct filled images. A DMD is currently being studied at LAM for generating programmable slit masks in multi-object spectrographs. It is composed of 2048x1080 individually controllable micro-mirrors, with a pitch of 13.68 μm. This is a real-time reconfigurable mask, perfect for recording CGHs. A first setup has been developed for hologram recording, where the DMD is illuminated with a collimated beam and illuminates a photosensitive plate through an Offner relay, with a magnification of 1:1. Our setup resolution is 2-3 μm, leading to a CGH resolution equal to the DMD micro-mirror size. In order to write and erase CGHs during a test procedure or on request, we use a photochromic plate called PUR-GD71-50-ST developed at Politecnico di Milano. It is opaque at rest, and becomes transparent when it is illuminated with visible light, between 500 and 700 nm; it can then be erased by a UV flash. We choose to code the CGHs in equally spaced levels, so-called stepped CGHs. We recorded CGHs of up to 1000x1000 pixels with a contrast greater than 50, knowing that the material is able to reach an ultimate contrast of 1000. A second bench has also been developed, dedicated to the reconstruction of the recorded images with a 632.8 nm He-Ne laser beam. Very faithful reconstructions have been obtained. Thanks to our recording and reconstruction setups, we have been able to successfully record binary and stepped CGHs, and reconstruct them with high fidelity, revealing the potential of this method for generating programmable/rewritable stepped CGHs on photochromic materials.

  19. A transversal approach for patch-based label fusion via matrix completion

    PubMed Central

    Sanroma, Gerard; Wu, Guorong; Gao, Yaozong; Thung, Kim-Han; Guo, Yanrong; Shen, Dinggang

    2015-01-01

    Recently, multi-atlas patch-based label fusion has received an increasing interest in the medical image segmentation field. After warping the anatomical labels from the atlas images to the target image by registration, label fusion is the key step to determine the latent label for each target image point. Two popular types of patch-based label fusion approaches are (1) reconstruction-based approaches that compute the target labels as a weighted average of atlas labels, where the weights are derived by reconstructing the target image patch using the atlas image patches; and (2) classification-based approaches that determine the target label as a mapping of the target image patch, where the mapping function is often learned using the atlas image patches and their corresponding labels. Both approaches have their advantages and limitations. In this paper, we propose a novel patch-based label fusion method to combine the above two types of approaches via matrix completion (and hence, we call it transversal). As we will show, our method overcomes the individual limitations of both reconstruction-based and classification-based approaches. Since the labeling confidences may vary across the target image points, we further propose a sequential labeling framework that first labels the highly confident points and then gradually labels more challenging points in an iterative manner, guided by the label information determined in the previous iterations. We demonstrate the performance of our novel label fusion method in segmenting the hippocampus in the ADNI dataset, subcortical and limbic structures in the LONI dataset, and mid-brain structures in the SATA dataset. We achieve more accurate segmentation results than both reconstruction-based and classification-based approaches. Our label fusion method is also ranked 1st in the online SATA Multi-Atlas Segmentation Challenge. PMID:26160394
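
    As a baseline for the similarity-weighted family discussed above (not the matrix-completion fusion proposed in the paper), the sketch below fuses one target point's label from warped atlas patches using non-local-means style weights; the patch format and the weighting kernel are illustrative assumptions.

```python
import numpy as np

def fuse_labels(target_patch, atlas_patches, atlas_labels, h=0.1):
    """Similarity-weighted label fusion for a single target point.

    target_patch  : (p,) flattened intensity patch around the target voxel
    atlas_patches : (n_atlas, p) corresponding warped atlas patches
    atlas_labels  : (n_atlas,) labels carried by the atlas patch centres
    """
    d2 = np.sum((atlas_patches - target_patch) ** 2, axis=1)
    w = np.exp(-d2 / (h * np.mean(d2) + 1e-12))     # Gaussian of the patch distance
    w /= w.sum()
    votes = {}
    for label, weight in zip(atlas_labels, w):
        votes[label] = votes.get(label, 0.0) + weight
    return max(votes, key=votes.get)                # weighted majority vote
```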

  20. Turboprop: improved PROPELLER imaging.

    PubMed

    Pipe, James G; Zwart, Nicholas

    2006-02-01

    A variant of periodically rotated overlapping parallel lines with enhanced reconstruction (PROPELLER) MRI, called turboprop, is introduced. This method employs an oscillating readout gradient during each spin echo of the echo train to collect more lines of data per echo train, which reduces the minimum scan time, motion-related artifact, and specific absorption rate (SAR) while increasing sampling efficiency. It can be applied to conventional fast spin-echo (FSE) imaging; however, this article emphasizes its application in diffusion-weighted imaging (DWI). The method is described and compared with conventional PROPELLER imaging, and clinical images collected with this PROPELLER variant are shown. Copyright 2006 Wiley-Liss, Inc.

  1. A computational method for sharp interface advection.

    PubMed

    Roenby, Johan; Bredmose, Henrik; Jasak, Hrvoje

    2016-11-01

    We devise a numerical method for passive advection of a surface, such as the interface between two incompressible fluids, across a computational mesh. The method is called isoAdvector, and is developed for general meshes consisting of arbitrary polyhedral cells. The algorithm is based on the volume of fluid (VOF) idea of calculating the volume of one of the fluids transported across the mesh faces during a time step. The novelty of the isoAdvector concept consists of two parts. First, we exploit an isosurface concept for modelling the interface inside cells in a geometric surface reconstruction step. Second, from the reconstructed surface, we model the motion of the face-interface intersection line for a general polygonal face to obtain the time evolution within a time step of the submerged face area. Integrating this submerged area over the time step leads to an accurate estimate for the total volume of fluid transported across the face. The method was tested on simple two-dimensional and three-dimensional interface advection problems on both structured and unstructured meshes. The results are very satisfactory in terms of volume conservation, boundedness, surface sharpness and efficiency. The isoAdvector method was implemented as an OpenFOAM ® extension and is published as open source.

  2. Reconstructing Progressive Education

    ERIC Educational Resources Information Center

    Kaplan, Andy

    2013-01-01

    The work of Colonel Francis W. Parker, the man whom Dewey called "the father of progressive education," provides a starting point for reconstructing the loose ambiguities of progressive education into a coherent social and educational philosophy. Although progressives have claimed their approach is more humane and sensitive to children, we need…

  3. DendroBLAST: approximate phylogenetic trees in the absence of multiple sequence alignments.

    PubMed

    Kelly, Steven; Maini, Philip K

    2013-01-01

    The rapidly growing availability of genome information has created considerable demand for both fast and accurate phylogenetic inference algorithms. We present a novel method called DendroBLAST for reconstructing phylogenetic dendrograms/trees from protein sequences using BLAST. This method differs from other methods by incorporating a simple model of sequence evolution to test the effect of introducing sequence changes on the reliability of the bipartitions in the inferred tree. Using realistic simulated sequence data, we demonstrate that this method produces phylogenetic trees that are more accurate than other commonly used distance-based methods, though not as accurate as maximum-likelihood methods applied to good-quality multiple sequence alignments. In addition to tests on simulated data, we use DendroBLAST to generate input trees for a supertree reconstruction of the phylogeny of the Archaea. This independent analysis produces an approximate phylogeny of the Archaea that has both high precision and recall when compared to a previously published analysis of the same dataset using conventional methods. Taken together, these results demonstrate that approximate phylogenetic trees can be produced in the absence of multiple sequence alignments, and we propose that these trees will provide a platform for improving and informing downstream bioinformatic analysis. A web implementation of the DendroBLAST method is freely available for use at http://www.dendroblast.com/.

  4. UNFOLD-SENSE: a parallel MRI method with self-calibration and artifact suppression.

    PubMed

    Madore, Bruno

    2004-08-01

    This work aims at improving the performance of parallel imaging by using it with our "unaliasing by Fourier-encoding the overlaps in the temporal dimension" (UNFOLD) temporal strategy. A self-calibration method called "self, hybrid referencing with UNFOLD and GRAPPA" (SHRUG) is presented. SHRUG combines the UNFOLD-based sensitivity mapping strategy introduced in the TSENSE method by Kellman et al. (5) with the strategy introduced in the GRAPPA method by Griswold et al. (10). SHRUG merges the two approaches to alleviate their respective limitations, and provides fast self-calibration at any given acceleration factor. UNFOLD-SENSE further includes an UNFOLD artifact suppression scheme to significantly suppress artifacts and amplified noise produced by parallel imaging. This suppression scheme, which was published previously (4), is related to another method that was presented independently as part of TSENSE. While the two are equivalent at accelerations ≤ 2.0, the present approach is shown here to be significantly superior at accelerations > 2.0, with up to double the artifact suppression at high accelerations. Furthermore, a slight modification of Cartesian SENSE is introduced, which allows departures from purely Cartesian sampling grids. This technique, termed variable-density SENSE (vdSENSE), allows the variable-density data required by SHRUG to be reconstructed with the simplicity and fast processing of Cartesian SENSE. UNFOLD-SENSE is given by the combination of SHRUG for sensitivity mapping, vdSENSE for reconstruction, and UNFOLD for artifact/amplified-noise suppression. The method was implemented, with online reconstruction, on both an SSFP and a myocardium-perfusion sequence. The results from six patients scanned with UNFOLD-SENSE are presented.

  5. J-substitution algorithm in magnetic resonance electrical impedance tomography (MREIT): phantom experiments for static resistivity images.

    PubMed

    Khang, Hyun Soo; Lee, Byung Il; Oh, Suk Hoon; Woo, Eung Je; Lee, Soo Yeol; Cho, Min Hyoung; Kwon, Ohin; Yoon, Jeong Rock; Seo, Jin Keun

    2002-06-01

    Recently, a new static resistivity image reconstruction algorithm was proposed that utilizes internal current density data obtained by the magnetic resonance current density imaging technique. This new imaging method is called magnetic resonance electrical impedance tomography (MREIT). The derivation and performance of the J-substitution algorithm in MREIT, a new accurate and high-resolution static impedance imaging technique, have previously been reported based on computer simulations. In this paper, we present experimental procedures, denoising techniques, and image reconstructions using a 0.3-tesla (T) experimental MREIT system and saline phantoms. MREIT using the J-substitution algorithm effectively utilizes internal current density information, resolving the problem inherent in conventional EIT, namely the low sensitivity of boundary measurements to changes in internal tissue resistivity. Resistivity images of saline phantoms show an accuracy of 6.8%-47.2% and a spatial resolution of 64 x 64. Both can be significantly improved by using an MRI system with a better signal-to-noise ratio.

  6. AIDA: an adaptive image deconvolution algorithm with application to multi-frame and three-dimensional data

    PubMed Central

    Hom, Erik F. Y.; Marchis, Franck; Lee, Timothy K.; Haase, Sebastian; Agard, David A.; Sedat, John W.

    2011-01-01

    We describe an adaptive image deconvolution algorithm (AIDA) for myopic deconvolution of multi-frame and three-dimensional data acquired through astronomical and microscopic imaging. AIDA is a reimplementation and extension of the MISTRAL method developed by Mugnier and co-workers and shown to yield object reconstructions with excellent edge preservation and photometric precision [J. Opt. Soc. Am. A 21, 1841 (2004)]. Written in Numerical Python with calls to a robust constrained conjugate gradient method, AIDA has significantly improved run times over the original MISTRAL implementation. Included in AIDA is a scheme to automatically balance maximum-likelihood estimation and object regularization, which significantly decreases the amount of time and effort needed to generate satisfactory reconstructions. We validated AIDA using synthetic data spanning a broad range of signal-to-noise ratios and image types and demonstrated the algorithm to be effective for experimental data from adaptive optics–equipped telescope systems and wide-field microscopy. PMID:17491626

  7. Direct single-shot phase retrieval from the diffraction pattern of separated objects

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Leshem, Ben; Xu, Rui; Dallal, Yehonatan

    The non-crystallographic phase problem arises in numerous scientific and technological fields. An important application is coherent diffractive imaging. Recent advances in X-ray free-electron lasers allow capturing of the diffraction pattern from a single nanoparticle before it disintegrates, in so-called ‘diffraction before destruction’ experiments. Presently, the phase is reconstructed by iterative algorithms, imposing a non-convex computational challenge, or by Fourier holography, requiring a well-characterized reference field. Here we present a convex scheme for single-shot phase retrieval for two (or more) sufficiently separated objects, demonstrated in two dimensions. In our approach, the objects serve as unknown references to one another, reducing the phase problem to a solvable set of linear equations. We establish our method numerically and experimentally in the optical domain and demonstrate a proof-of-principle single-shot coherent diffractive imaging using X-ray free-electron laser pulses. Lastly, our scheme alleviates several limitations of current methods, offering a new pathway towards direct reconstruction of complex objects.

  8. Direct single-shot phase retrieval from the diffraction pattern of separated objects

    DOE PAGES

    Leshem, Ben; Xu, Rui; Dallal, Yehonatan; ...

    2016-02-22

    The non-crystallographic phase problem arises in numerous scientific and technological fields. An important application is coherent diffractive imaging. Recent advances in X-ray free-electron lasers allow capturing of the diffraction pattern from a single nanoparticle before it disintegrates, in so-called ‘diffraction before destruction’ experiments. Presently, the phase is reconstructed by iterative algorithms, imposing a non-convex computational challenge, or by Fourier holography, requiring a well-characterized reference field. Here we present a convex scheme for single-shot phase retrieval for two (or more) sufficiently separated objects, demonstrated in two dimensions. In our approach, the objects serve as unknown references to one another, reducing the phase problem to a solvable set of linear equations. We establish our method numerically and experimentally in the optical domain and demonstrate a proof-of-principle single-shot coherent diffractive imaging using X-ray free-electron laser pulses. Lastly, our scheme alleviates several limitations of current methods, offering a new pathway towards direct reconstruction of complex objects.

  9. Maximum parsimony, substitution model, and probability phylogenetic trees.

    PubMed

    Weng, J F; Thomas, D A; Mareels, I

    2011-01-01

    The problem of inferring phylogenies (phylogenetic trees) is one of the main problems in computational biology. There are three main methods for inferring phylogenies: Maximum Parsimony (MP), Distance Matrix (DM), and Maximum Likelihood (ML), of which MP is the most thoroughly studied and popular. In the MP method, the optimization criterion is the number of nucleotide substitutions, computed from the differences between the investigated nucleotide sequences. However, the MP method is often criticized because it counts only the substitutions observable at the current time and omits the unobservable substitutions that actually occurred over evolutionary history. In order to take the unobservable substitutions into account, substitution models have been established and are now widely used in the DM and ML methods, but these substitution models cannot be used within the classical MP method. Recently the authors proposed a probability representation model for phylogenetic trees, and the trees reconstructed in this model are called probability phylogenetic trees. One of the advantages of the probability representation model is that it can include a substitution model to infer phylogenetic trees based on the MP principle. In this paper we explain how to use a substitution model in the reconstruction of probability phylogenetic trees and show the advantage of this approach with examples.
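
    For illustration, the classical Fitch procedure below counts the observable substitutions at a single site on a fixed rooted binary tree, which is exactly the quantity the MP criterion optimizes and which, as the abstract notes, ignores unobservable substitutions. The nested-tuple tree encoding is an assumption of this sketch, not part of the paper:

    ```python
    def fitch(tree):
        """Return (state set, substitution count) for a rooted binary tree given as
        nested tuples whose leaves are observed nucleotides, e.g. (("A","C"),("A","G"))."""
        if isinstance(tree, str):                  # leaf: its state set is the observed base
            return {tree}, 0
        (left, nl), (right, nr) = fitch(tree[0]), fitch(tree[1])
        common = left & right
        if common:                                 # non-empty intersection: no extra change
            return common, nl + nr
        return left | right, nl + nr + 1           # empty intersection: one substitution

    states, changes = fitch((("A", "C"), ("A", "G")))
    print(changes)  # 2 substitutions are required at this site under parsimony
    ```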

  10. MacCormack's technique-based pressure reconstruction approach for PIV data in compressible flows with shocks

    NASA Astrophysics Data System (ADS)

    Liu, Shun; Xu, Jinglei; Yu, Kaikai

    2017-06-01

    This paper proposes an improved approach for extracting pressure fields from velocity data, such as obtained by particle image velocimetry (PIV), especially for steady compressible flows with strong shocks. The approach is derived from the Navier-Stokes equations, assuming adiabatic conditions and neglecting viscosity at the flow-field boundaries measured by PIV. The computing method is based on MacCormack's technique from computational fluid dynamics; thus, this approach is called the MacCormack method. Moreover, the MacCormack method is compared with several approaches proposed in previous literature, including the isentropic method, spatial integration, and the Poisson method. The effects of velocity error level and PIV spatial resolution on these approaches are also quantified by using artificial velocity data containing shock waves. The results demonstrate that the MacCormack method has higher reconstruction accuracy than the other approaches, and its advantage becomes more pronounced as the shocks strengthen. Furthermore, the performance of the MacCormack method is also validated using synthetic PIV images with an oblique shock wave, confirming the feasibility and advantage of this approach in real PIV experiments. This work is relevant to studies in aerospace engineering, especially the external flow fields of supersonic aircraft and the internal flow fields of ramjets.
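
    To make the predictor-corrector idea concrete, the sketch below applies MacCormack's scheme to the 1-D linear advection equation on a periodic grid; the paper applies the same two-stage differencing to its pressure equation driven by PIV velocity data, which is not reproduced here:

    ```python
    import numpy as np

    def maccormack_step(u, c, dx, dt):
        """One MacCormack step for u_t + c u_x = 0 on a periodic 1-D grid."""
        # Predictor: forward difference in space
        u_pred = u - c * dt / dx * (np.roll(u, -1) - u)
        # Corrector: backward difference on the predicted field, then average
        u_corr = u_pred - c * dt / dx * (u_pred - np.roll(u_pred, 1))
        return 0.5 * (u + u_corr)
    ```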

  11. McCall Glacier record of Arctic climate change: Interpreting a northern Alaska ice core with regional water isotopes

    NASA Astrophysics Data System (ADS)

    Klein, E. S.; Nolan, M.; McConnell, J.; Sigl, M.; Cherry, J.; Young, J.; Welker, J. M.

    2016-01-01

    We explored modern precipitation and ice core isotope ratios to better understand both modern and paleo climate in the Arctic. Paleoclimate reconstructions require an understanding of how modern synoptic climate influences proxies used in those reconstructions, such as water isotopes. Therefore we measured periodic precipitation samples at Toolik Lake Field Station (Toolik) in the northern foothills of the Brooks Range in the Alaskan Arctic to determine δ18O and δ2H. We applied this multi-decadal local precipitation δ18O/temperature regression to ∼65 years of McCall Glacier (also in the Brooks Range) ice core isotope measurements and found an increase in reconstructed temperatures over the late-20th and early-21st centuries. We also show that the McCall Glacier δ18O isotope record is negatively correlated with the winter bidecadal North Pacific Index (NPI) climate oscillation. McCall Glacier deuterium excess (d-excess, δ2H - 8*δ18O) values display a bidecadal periodicity coherent with the NPI and suggest shifts from more southwestern Bering Sea moisture sources with less sea ice (lower d-excess values) to more northern Arctic Ocean moisture sources with more sea ice (higher d-excess values). Northern ice covered Arctic Ocean McCall Glacier moisture sources are associated with weak Aleutian Low (AL) circulation patterns and the southern moisture sources with strong AL patterns. Ice core d-excess values significantly decrease over the record, coincident with warmer temperatures and a significant reduction in Alaska sea ice concentration, which suggests that ice free northern ocean waters are increasingly serving as terrestrial precipitation moisture sources; a concept recently proposed by modeling studies and also present in Greenland ice core d-excess values during previous transitions to warm periods. This study also shows the efficacy and importance of using ice cores from Arctic valley glaciers in paleoclimate reconstructions.
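
    The deuterium-excess definition quoted in the abstract can be computed directly; the regression step is shown only schematically below, with made-up placeholder coefficients and example values rather than the Toolik-derived transfer function or the actual ice-core data:

    ```python
    import numpy as np

    def d_excess(delta2H, delta18O):
        """Deuterium excess as defined in the abstract: d = delta2H - 8 * delta18O."""
        return np.asarray(delta2H) - 8.0 * np.asarray(delta18O)

    # Schematic temperature reconstruction: apply a linear delta18O/temperature
    # regression (slope and intercept are placeholders, not the Toolik values).
    slope, intercept = 1.2, 10.0
    ice_core_d18O = np.array([-28.3, -24.1])        # hypothetical example values (per mil)
    reconstructed_T = slope * ice_core_d18O + intercept
    ```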

  12. HERMES: Hadamard Encoding and Reconstruction of MEGA-Edited Spectroscopy

    PubMed Central

    Chan, Kimberly L.; Puts, Nicolaas A. J.; Schär, Michael; Barker, Peter B.; Edden, Richard A. E.

    2017-01-01

    Purpose To investigate a novel Hadamard-encoded spectral editing scheme and evaluate its performance in simultaneously quantifying N-acetyl aspartate (NAA) and N-acetyl aspartyl glutamate (NAAG) at 3 Tesla. Methods Editing pulses applied according to a Hadamard encoding scheme allow the simultaneous acquisition of multiple metabolites. The method, called HERMES (Hadamard Encoding and Reconstruction of MEGA-Edited Spectroscopy), was optimized to detect NAA and NAAG simultaneously using density-matrix simulations and validated in phantoms at 3T. In vivo data were acquired in the centrum semiovale of 12 normal subjects. The NAA:NAAG concentration ratio was determined by modeling in vivo data using simulated basis functions. Simulations were also performed for potentially coedited molecules with signals within the detected NAA/NAAG region. Results Simulations and phantom experiments show excellent segregation of NAA and NAAG signals into the intended spectra, with minimal crosstalk. Multiplet patterns show good agreement between simulations and phantom and in vivo data. In vivo measurements show that the relative peak intensities of the NAA and NAAG spectra are consistent with a NAA:NAAG concentration ratio of 4.22:1 in good agreement with literature. Simulations indicate some coediting of aspartate and glutathione near the detected region (editing efficiency: 4.5% and 78.2%, respectively, for the NAAG reconstruction and 5.1% and 19.5%, respectively, for the NAA reconstruction). Conclusion The simultaneous and separable detection of two otherwise overlapping metabolites using HERMES is possible at 3T. PMID:27089868
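
    A generic sketch of the Hadamard encode/decode idea behind this kind of editing: four sub-acquisitions with editing pulses applied ON/OFF to the two target metabolites are combined with +/- signs to yield two separate edited spectra. The sub-experiment ordering and names are assumptions of this illustration, not the published pulse-sequence details:

    ```python
    import numpy as np

    def hadamard_reconstruct(sub1, sub2, sub3, sub4):
        """sub1..sub4: spectra acquired with editing pulses applied as
        sub1: ON_A ON_B, sub2: ON_A OFF_B, sub3: OFF_A ON_B, sub4: OFF_A OFF_B."""
        s1, s2, s3, s4 = map(np.asarray, (sub1, sub2, sub3, sub4))
        spec_A = (s1 + s2) - (s3 + s4)   # edited spectrum of metabolite A
        spec_B = (s1 + s3) - (s2 + s4)   # edited spectrum of metabolite B
        return spec_A, spec_B
    ```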

  13. Proposal for Standardized Tabular Reporting of Observational Surgical Studies Illustrated in a Study on Primary Repair of Bile Duct Injuries.

    PubMed

    Cho, Jai Young; Jaeger, Allison R; Sanford, Dominic E; Fields, Ryan C; Strasberg, Steven M

    2015-09-01

    A standard format for reporting observational surgical studies does not exist. This creates difficulties in comparing studies and in performing synthesis through systematic reviews and meta-analyses. This article proposes a method called "standard tabular reporting" and illustrates its use in a case series of bile duct reconstructions for biliary injuries occurring during cholecystectomy. A database dealing with biliary injuries was constructed in sections. Each section was designed to be turned into a table covering one element of the subject. Whenever possible, American College of Surgeons NSQIP "Classic Variables and Definitions" were used for forming sections and tables. However, most tables are original and specific to biliary injury. The database was populated from clinical records of patients who sustained a biliary injury during cholecystectomy. Tables were created dealing with the following subjects: demographics, index operation, presentation, classification of injury, preoperative risk assessment, preoperative laboratory values, operative repair technique, postoperative complications, and long-term outcomes. Between 1997 and 2013, 122 primary bile duct reconstructions were performed, with 1 mortality and 47 complications. Good long-term results were obtained in 113 (92.6%) patients. No secondary surgical reconstructions have been needed. Presentation of data in a standard format would facilitate comparison and synthesis of observational studies on the same subject. The biliary reconstructive methods used resulted in very satisfactory outcomes. Copyright © 2015 American College of Surgeons. Published by Elsevier Inc. All rights reserved.

  14. Source fields reconstruction with 3D mapping by means of the virtual acoustic volume concept

    NASA Astrophysics Data System (ADS)

    Forget, S.; Totaro, N.; Guyader, J. L.; Schaeffer, M.

    2016-10-01

    This paper presents the theoretical framework of the virtual acoustic volume concept and two related inverse Patch Transfer Functions (iPTF) identification methods (called u-iPTF and m-iPTF depending on the chosen boundary conditions for the virtual volume). They are based on the application of Green's identity on an arbitrary closed virtual volume defined around the source. The reconstruction of sound source fields combines discrete acoustic measurements performed at accessible positions around the source with the modal behavior of the chosen virtual acoustic volume. The mode shapes of the virtual volume can be computed by a Finite Element solver to handle the geometrical complexity of the source. As a result, it is possible to identify all the acoustic source fields at the real surface of an irregularly shaped structure, irrespective of its acoustic environment. The m-iPTF method is introduced for the first time in this paper. In contrast to the previously published u-iPTF method, the m-iPTF method requires only acoustic pressure measurements and avoids particle velocity measurements. This paper is focused on its validation, both with numerical computations and by experiments on a baffled oil pan.

  15. Errors due to the truncation of the computational domain in static three-dimensional electrical impedance tomography.

    PubMed

    Vauhkonen, P J; Vauhkonen, M; Kaipio, J P

    2000-02-01

    In electrical impedance tomography (EIT), an approximation for the internal resistivity distribution is computed based on the knowledge of the injected currents and measured voltages on the surface of the body. The currents spread out in three dimensions and therefore off-plane structures have a significant effect on the reconstructed images. A question arises: how far from the current carrying electrodes should the discretized model of the object be extended? If the model is truncated too near the electrodes, errors are produced in the reconstructed images. On the other hand if the model is extended very far from the electrodes the computational time may become too long in practice. In this paper the model truncation problem is studied with the extended finite element method. Forward solutions obtained using so-called infinite elements, long finite elements and separable long finite elements are compared to the correct solution. The effects of the truncation of the computational domain on the reconstructed images are also discussed and results from the three-dimensional (3D) sensitivity analysis are given. We show that if the finite element method with ordinary elements is used in static 3D EIT, the dimension of the problem can become fairly large if the errors associated with the domain truncation are to be avoided.

  16. Time-resolved flow reconstruction with indirect measurements using regression models and Kalman-filtered POD ROM

    NASA Astrophysics Data System (ADS)

    Leroux, Romain; Chatellier, Ludovic; David, Laurent

    2018-01-01

    This article is devoted to the estimation of time-resolved particle image velocimetry (TR-PIV) flow fields from time-resolved point measurements of a voltage signal obtained by hot-film anemometry. A multiple linear regression model is first defined to map the TR-PIV flow fields onto the voltage signal. Due to the high temporal resolution of the signal acquired by the hot-film sensor, the estimates of the TR-PIV flow fields are obtained with a multiple linear regression method called orthonormalized partial least squares regression (OPLSR). Subsequently, this model is incorporated as the observation equation in an ensemble Kalman filter (EnKF) applied to a proper orthogonal decomposition reduced-order model to stabilize it while reducing the effects of the hot-film sensor noise. This method is assessed for the reconstruction of the flow around a NACA0012 airfoil at a Reynolds number of 1000 and an angle of attack of 20°. Comparisons with multi-time-delay modified linear stochastic estimation show that both OPLSR and the EnKF combined with OPLSR are more accurate, as they produce a much lower relative estimation error and provide a faithful reconstruction of the time evolution of the velocity flow fields.
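
    A stripped-down sketch of the estimation chain described above, using ordinary least squares in place of the orthonormalized PLS regression (OPLSR) and omitting the Kalman-filtered reduced-order model; all array names are assumptions of this illustration:

    ```python
    import numpy as np

    def estimate_fields(V_train, A_train, Phi, mean_field, V_new):
        """V_train: (t, m) sensor voltages (training), A_train: (t, r) POD temporal
        coefficients of the training TR-PIV fields, Phi: (n, r) POD spatial modes,
        mean_field: (n,) mean flow, V_new: (k, m) new sensor measurements."""
        # Linear map from sensor signal to POD coefficients (OLS stand-in for OPLSR)
        B, *_ = np.linalg.lstsq(V_train, A_train, rcond=None)
        A_new = V_new @ B
        # Reconstruct velocity fields from the estimated POD coefficients
        return mean_field + A_new @ Phi.T
    ```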

  17. XD-GRASP: Golden-Angle Radial MRI with Reconstruction of Extra Motion-State Dimensions Using Compressed Sensing

    PubMed Central

    Feng, Li; Axel, Leon; Chandarana, Hersh; Block, Kai Tobias; Sodickson, Daniel K.; Otazo, Ricardo

    2015-01-01

    Purpose To develop a novel framework for free-breathing MRI called XD-GRASP, which sorts dynamic data into extra motion-state dimensions using the self-navigation properties of radial imaging and reconstructs the multidimensional dataset using compressed sensing. Methods Radial k-space data are continuously acquired using the golden-angle sampling scheme and sorted into multiple motion-states based on respiratory and/or cardiac motion signals derived directly from the data. The resulting under-sampled multidimensional dataset is reconstructed using a compressed sensing approach that exploits sparsity along the new dynamic dimensions. The performance of XD-GRASP is demonstrated for free-breathing three-dimensional (3D) abdominal imaging, two-dimensional (2D) cardiac cine imaging and 3D dynamic contrast-enhanced (DCE) MRI of the liver, comparing against reconstructions without motion sorting in both healthy volunteers and patients. Results XD-GRASP separates respiratory motion from cardiac motion in cardiac imaging, and respiratory motion from contrast enhancement in liver DCE-MRI, which improves image quality and reduces motion-blurring artifacts. Conclusion XD-GRASP represents a new use of sparsity for motion compensation and a novel way to handle motions in the context of a continuous acquisition paradigm. Instead of removing or correcting motion, extra motion-state dimensions are reconstructed, which improves image quality and also offers new physiological information of potential clinical value. PMID:25809847
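
    The data-sorting step can be pictured as amplitude binning of the continuously acquired radial spokes according to the self-navigated respiratory signal; the equal-count binning and variable names below are a simplified sketch, and the compressed-sensing reconstruction itself is not shown:

    ```python
    import numpy as np

    def sort_into_motion_states(spokes, resp_signal, n_states=4):
        """spokes: (n_spokes, n_readout) radial k-space lines in acquisition order;
        resp_signal: (n_spokes,) respiratory amplitude derived from the data itself."""
        order = np.argsort(resp_signal)              # end-expiration ... end-inspiration
        bins = np.array_split(order, n_states)       # equal-count motion states
        return [spokes[idx] for idx in bins]
    ```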

  18. 3D Lunar Terrain Reconstruction from Apollo Images

    NASA Technical Reports Server (NTRS)

    Broxton, Michael J.; Nefian, Ara V.; Moratto, Zachary; Kim, Taemin; Lundy, Michael; Segal, Alkeksandr V.

    2009-01-01

    Generating accurate three-dimensional planetary models is becoming increasingly important as NASA plans manned missions to return to the Moon in the next decade. This paper describes a 3D surface reconstruction system called the Ames Stereo Pipeline that is designed to produce such models automatically by processing orbital stereo imagery. We discuss two important core aspects of this system: (1) refinement of satellite station positions and pose estimates through least squares bundle adjustment; and (2) a stochastic plane fitting algorithm that generalizes the Lucas-Kanade method for optimal matching between stereo pair images. These techniques allow us to automatically produce seamless, highly accurate digital elevation models from multiple stereo image pairs while significantly reducing the influence of image noise. Our technique is demonstrated on a set of 71 high-resolution scanned images from the Apollo 15 mission.

  19. Self-calibration for lensless color microscopy.

    PubMed

    Flasseur, Olivier; Fournier, Corinne; Verrier, Nicolas; Denis, Loïc; Jolivet, Frédéric; Cazier, Anthony; Lépine, Thierry

    2017-05-01

    Lensless color microscopy (also called in-line digital color holography) is a recent quantitative 3D imaging method used in several areas including biomedical imaging and microfluidics. By targeting cost-effective and compact designs, the wavelength of the low-end sources used is known only imprecisely, in particular because of their dependence on temperature and power supply voltage. This imprecision is the source of biases during the reconstruction step. An additional source of error is the crosstalk phenomenon, i.e., the mixture in color sensors of signals originating from different color channels. We propose to use a parametric inverse problem approach to achieve self-calibration of a digital color holographic setup. This process provides an estimation of the central wavelengths and crosstalk. We show that taking the crosstalk phenomenon into account in the reconstruction step improves its accuracy.

  20. A new approach for beam hardening correction based on the local spectrum distributions

    NASA Astrophysics Data System (ADS)

    Rasoulpour, Naser; Kamali-Asl, Alireza; Hemmati, Hamidreza

    2015-09-01

    The energy dependence of material absorption and the polychromatic nature of x-ray beams in Computed Tomography (CT) cause a phenomenon called "beam hardening". The purpose of this study is to provide a novel approach for Beam Hardening (BH) correction. The approach is based on the linear attenuation coefficients of Local Spectrum Distributions (LSDs) at various depths of a phantom. The proposed method includes two steps. First, the hardened spectra at various depths of the phantom (the LSDs) are estimated based on the Expectation Maximization (EM) algorithm for an arbitrary thickness interval of known materials in the phantom. The performance of the LSD estimation technique is evaluated by applying random Gaussian noise to the transmission data. Then, the linear attenuation coefficients corresponding to the mean energies of the LSDs are obtained. Second, a correction function based on the calculated attenuation coefficients is derived in order to correct the polychromatic raw data. Since the correction function converts the polychromatic data to monochromatic data, the effect of BH in the proposed reconstruction is reduced in comparison with the polychromatic reconstruction. The proposed approach has been assessed in phantoms involving less than two materials, but the correction function has been extended for use in phantoms constructed with more than two materials. The relative mean energy difference in the LSD estimates based on noise-free transmission data was less than 1.5%, and it remains acceptable when random Gaussian noise is applied to the transmission data. The cupping artifact in the proposed reconstruction is effectively reduced, and the reconstructed profile is more uniform than the polychromatic reconstruction profile.

  1. After Late- and Postmodernism: A Wittgensteinian Reconstructive and Transformative Aesthetics, Art Practice, and Art Education

    ERIC Educational Resources Information Center

    Cunliffe, Leslie

    2001-01-01

    Ludwig Wittgenstein's thought embraces two complementary projects: what he called his therapeutic work which was aimed at treating philosophical questions as though they were an illness, and his reconstructive work which emerges from this therapeutic endeavor. Wittgenstein describes his therapeutic work as an exercise that involves destroying…

  2. The Corpus Status of Literature in Teaching Sociology: Novels as "Sociological Reconstruction"

    ERIC Educational Resources Information Center

    Carlin, Andrew P.

    2010-01-01

    Using fiction in teaching sociology involves what Harvey Sacks calls "sociological reconstruction". Numerous comments on teaching sociology provide advice and suggestions on the use of literature and "what counts" as "sociological" literature, including specific titles. This paper goes further: while the use of literature is a routine feature of…

  3. Thirty-four years of liposuction: past, present and future.

    PubMed

    Sterodimas, A; Boriani, F; Magarakis, E; Nicaretta, B; Pereira, L H; Illouz, Y G

    2012-03-01

    Initial, variably successful attempts at fat sculpting date back to the beginning of the 20th Century, but Gerard Illouz was the first to introduce the modern, safe, widespread method of liposuction. Preoperative injection of local anaesthesia, saline, distilled water, adrenaline and hyaluronidase, known as the wet technique, became established as a safe and effective adjunct to lipoaspiration. This procedure was initially based on an automatic pump system, but the accuracy of syringe aspiration was later popularized by Toledo in the eighties. Liposuction in the subcutaneous tissue, just 3-4 mm deep to the dermis, also called superficial liposuction, is an effective modern evolution of the technique, but requires mastery in order to avoid disfiguring outcomes. Ultrasound and laser lipoplasty methods have provided further advancement in the range of technical choices offered to the plastic surgeon. Liposuction is a purely surgical procedure and, as such, carries risks of minor and major complications. In the nineties, combining abdominoplasty and abdominal liposuction as simultaneous procedures, also called lipoabdominoplasty, became more and more popular. Reinjection of the harvested fat for liposculpture, in both reconstructive and cosmetic indications, is a relatively recent development that has become established as a successful, widely accepted procedure. Adipose stem cells, extracted from the unlimited source represented by human adipose tissue, hold great promise for future tissue engineering. Liposuction has nowadays become a safe, effective, popular procedure for contouring adipose tissue and the human body in general, in many reconstructive and cosmetic indications.

  4. Anchoring quartet-based phylogenetic distances and applications to species tree reconstruction.

    PubMed

    Sayyari, Erfan; Mirarab, Siavash

    2016-11-11

    Inferring species trees from gene trees using coalescent-based summary methods has been the subject of much attention, yet new scalable and accurate methods are needed. We introduce DISTIQUE, a new statistically consistent summary method for inferring species trees from gene trees under the coalescent model. We generalize our results to arbitrary phylogenetic inference problems; we show that two arbitrarily chosen leaves, called anchors, can be used to estimate relative distances between all other pairs of leaves by inferring relevant quartet trees. This results in a family of distance-based tree inference methods, with running times ranging from quadratic to quartic in the number of leaves. We show in simulated studies that DISTIQUE has comparable accuracy to leading coalescent-based summary methods and reduced running times.

  5. Alpha image reconstruction (AIR): A new iterative CT image reconstruction approach using voxel-wise alpha blending

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hofmann, Christian; Sawall, Stefan; Knaup, Michael

    2014-06-15

    Purpose: Iterative image reconstruction gains more and more interest in clinical routine, as it promises to reduce image noise (and thereby patient dose), to reduce artifacts, or to improve spatial resolution. Among vendors and researchers, however, there is no consensus on how best to achieve these aims. The general approach is to incorporate a priori knowledge into iterative image reconstruction, for example, by adding constraints to the cost function which penalize variations between neighboring voxels. However, this approach to regularization in general poses a resolution-noise trade-off, because the stronger the regularization, and thus the noise reduction, the stronger the loss of spatial resolution and thus the loss of anatomical detail. The authors propose a method which tries to improve this trade-off. The proposed reconstruction algorithm is called alpha image reconstruction (AIR). One starts by generating basis images which emphasize certain desired image properties, like high resolution or low noise. The AIR algorithm reconstructs voxel-specific weighting coefficients that are applied to combine the basis images. By combining the desired properties of each basis image, one can generate an image with lower noise and maintained high contrast resolution, thus improving the resolution-noise trade-off. Methods: All simulations and reconstructions are performed in native fan-beam geometry. A water phantom with resolution bar patterns and low-contrast disks is simulated. A filtered backprojection (FBP) reconstruction with a Ram-Lak kernel is used as a reference reconstruction. The results of AIR are compared against the FBP results and against a penalized weighted least squares reconstruction which uses total variation as regularization. The simulations are based on the geometry of the Siemens Somatom Definition Flash scanner. To quantitatively assess image quality, the authors analyze line profiles through resolution patterns to define a contrast factor for contrast-resolution plots. Furthermore, the authors calculate the contrast-to-noise ratio with the low-contrast disks and compare the agreement of the reconstructions with the ground truth by calculating the normalized cross-correlation and the root-mean-square deviation. To evaluate the clinical performance of the proposed method, the authors reconstruct patient data acquired with a Somatom Definition Flash dual source CT scanner (Siemens Healthcare, Forchheim, Germany). Results: The results of the simulation study show that among the compared algorithms AIR achieves the highest resolution and the highest agreement with the ground truth. Compared to the reference FBP reconstruction, AIR is able to reduce the relative pixel noise by up to 50% and at the same time achieve a higher resolution by maintaining the edge information from the basis images. These results are confirmed by the patient data. Conclusions: To evaluate the AIR algorithm, simulated data and measured patient data from a state-of-the-art clinical CT system were processed. It is shown that generating CT images through the reconstruction of weighting coefficients has the potential to improve the resolution-noise trade-off and thus to improve the dose usage in clinical CT.
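
    The combination step that the abstract describes, voxel-wise alpha blending of basis images, reduces to a weighted sum; how the weighting coefficients themselves are reconstructed iteratively is the substance of the method and is not sketched here:

    ```python
    import numpy as np

    def alpha_blend(basis_sharp, basis_smooth, alpha):
        """Blend a high-resolution basis image with a low-noise basis image using a
        voxel-wise weighting map alpha in [0, 1] (illustrative only)."""
        alpha = np.clip(alpha, 0.0, 1.0)
        return alpha * basis_sharp + (1.0 - alpha) * basis_smooth
    ```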

  6. Testing light-traces-mass in Hubble Frontier Fields Cluster MACS-J0416.1-2403

    DOE PAGES

    Sebesta, Kevin; Williams, Liliya L. R.; Mohammed, Irshad; ...

    2016-06-17

    Here, we reconstruct the projected mass distribution of a massive merging Hubble Frontier Fields cluster MACSJ0416 using the genetic algorithm based free-form technique called Grale. The reconstructions are constrained by 149 lensed images identified by Jauzac et al. using HFF data. No information about cluster galaxies or light is used, which makes our reconstruction unique in this regard. Using visual inspection of the maps, as well as galaxy-mass correlation functions, we conclude that overall light does follow mass. Furthermore, the fact that brighter galaxies are more strongly clustered with mass is an important confirmation of the standard biasing scenario in galaxy clusters. On the smallest scales, approximately less than a few arcseconds, the resolution afforded by 149 images is still not sufficient to confirm or rule out galaxy-mass offsets of the kind observed in ACO 3827. We also compare the mass maps of MACSJ0416 obtained by three different groups: Grale, and two parametric Lenstool reconstructions from the CATS and Sharon/Johnson teams. Overall, the three agree well; one interesting discrepancy between Grale and Lenstool galaxy-mass correlation functions occurs on scales of tens of kpc and may suggest that cluster galaxies are more biased tracers of mass than parametric methods generally assume.

  7. Testing light-traces-mass in Hubble Frontier Fields Cluster MACS-J0416.1-2403

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sebesta, Kevin; Williams, Liliya L. R.; Mohammed, Irshad

    Here, we reconstruct the projected mass distribution of a massive merging Hubble Frontier Fields cluster MACSJ0416 using the genetic algorithm based free-form technique called Grale. The reconstructions are constrained by 149 lensed images identified by Jauzac et al. using HFF data. No information about cluster galaxies or light is used, which makes our reconstruction unique in this regard. Using visual inspection of the maps, as well as galaxy-mass correlation functions, we conclude that overall light does follow mass. Furthermore, the fact that brighter galaxies are more strongly clustered with mass is an important confirmation of the standard biasing scenario in galaxy clusters. On the smallest scales, approximately less than a few arcseconds, the resolution afforded by 149 images is still not sufficient to confirm or rule out galaxy-mass offsets of the kind observed in ACO 3827. We also compare the mass maps of MACSJ0416 obtained by three different groups: Grale, and two parametric Lenstool reconstructions from the CATS and Sharon/Johnson teams. Overall, the three agree well; one interesting discrepancy between Grale and Lenstool galaxy-mass correlation functions occurs on scales of tens of kpc and may suggest that cluster galaxies are more biased tracers of mass than parametric methods generally assume.

  8. Analyzation of photopolymer materials shrunken influence for thick hologram gratings

    NASA Astrophysics Data System (ADS)

    Li, Zhenzhen; Xiao, Xue; Chen, Wei; Kang, Guoguo; Huang, Yong; Tan, Xiaodi

    2016-09-01

    Photopolymer materials are good media for recording thick hologram gratings because they offer high resolution, low cost, and simple processing. According to coupled-wave theory for thick hologram gratings, the same object beam can be reconstructed if the same reference beam is used to retrieve the grating. However, shrinkage always occurs in photopolymer materials because of environmental temperature, humidity, vibration, etc. As a consequence, the same object beam cannot be reconstructed even when the same reference beam is used. In this paper, we analyze the influence of photopolymer shrinkage on thick hologram gratings. We divide the photopolymer material into several geometric layers and analyze the reconstruction characteristics of each layer separately, based on Kogelnik's coupled-wave theory. By gradually changing the angle between the grating and the boundary (which we call the slant angle), we build a geometric model of grating bending caused by material shrinkage. We calculate the complex amplitude of the wave diffracted from every layer and superpose them to compute the total diffraction efficiency. We simulate this model in Matlab to obtain the curve of diffraction efficiency versus reconstruction wavelength. By comparing the simulated results with experimental results, we can deduce the likely bending of thick hologram gratings after the photopolymer shrinks.

  9. A computational method for sharp interface advection

    PubMed Central

    Bredmose, Henrik; Jasak, Hrvoje

    2016-01-01

    We devise a numerical method for passive advection of a surface, such as the interface between two incompressible fluids, across a computational mesh. The method is called isoAdvector, and is developed for general meshes consisting of arbitrary polyhedral cells. The algorithm is based on the volume of fluid (VOF) idea of calculating the volume of one of the fluids transported across the mesh faces during a time step. The novelty of the isoAdvector concept consists of two parts. First, we exploit an isosurface concept for modelling the interface inside cells in a geometric surface reconstruction step. Second, from the reconstructed surface, we model the motion of the face–interface intersection line for a general polygonal face to obtain the time evolution within a time step of the submerged face area. Integrating this submerged area over the time step leads to an accurate estimate for the total volume of fluid transported across the face. The method was tested on simple two-dimensional and three-dimensional interface advection problems on both structured and unstructured meshes. The results are very satisfactory in terms of volume conservation, boundedness, surface sharpness and efficiency. The isoAdvector method was implemented as an OpenFOAM® extension and is published as open source. PMID:28018619

  10. Leapfrog Diffusion Mechanism for One-Dimensional Chains on Missing-Row Reconstructed Surfaces

    NASA Astrophysics Data System (ADS)

    Montalenti, F.; Ferrando, R.

    1999-02-01

    We analyze the in-channel diffusion of dimers and longer n-adatom chains on Au and Pt(110) (1×2) surfaces by molecular dynamics simulations. Our calculations show that, on the missing-row reconstructed surface, a novel diffusion process, called leapfrog, dominates over concerted jumps and thus becomes the most frequent diffusion mechanism.

  11. RADRUE METHOD FOR RECONSTRUCTION OF EXTERNAL PHOTON DOSES TO CHERNOBYL LIQUIDATORS IN EPIDEMIOLOGICAL STUDIES

    PubMed Central

    Kryuchkov, Victor; Chumak, Vadim; Maceika, Evaldas; Anspaugh, Lynn R.; Cardis, Elisabeth; Bakhanova, Elena; Golovanov, Ivan; Drozdovitch, Vladimir; Luckyanov, Nickolas; Kesminiene, Ausrele; Voillequé, Paul; Bouville, André

    2010-01-01

    Between 1986 and 1990, several hundred thousand workers, called “liquidators” or “clean-up workers”, took part in decontamination and recovery activities within the 30-km zone around the Chernobyl nuclear power plant in Ukraine, where a major accident occurred in April 1986. The Chernobyl liquidators were mainly exposed to external ionizing radiation levels that depended primarily on their work locations and the time after the accident when the work was performed. Because individual doses were often monitored inadequately or were not monitored at all for the majority of liquidators, a new method of photon (i.e. gamma and x-rays) dose assessment, called “RADRUE” (Realistic Analytical Dose Reconstruction with Uncertainty Estimation) was developed to obtain unbiased and reasonably accurate estimates for use in three epidemiologic studies of hematological malignancies and thyroid cancer among liquidators. The RADRUE program implements a time-and-motion dose reconstruction method that is flexible and conceptually easy to understand. It includes a large exposure rate database and interpolation and extrapolation techniques to calculate exposure rates at places where liquidators lived and worked within ~70 km of the destroyed reactor. The RADRUE technique relies on data collected from subjects’ interviews conducted by trained interviewers, and on expert dosimetrists to interpret the information and provide supplementary information, when necessary, based upon their own Chernobyl experience. The RADRUE technique was used to estimate doses from external irradiation, as well as uncertainties, to the bone-marrow for 929 subjects and to the thyroid gland for 530 subjects enrolled in epidemiologic studies. Individual bone-marrow dose estimates were found to range from less than one μGy to 3,300 mGy, with an arithmetic mean of 71 mGy. Individual thyroid dose estimates were lower and ranged from 20 μGy to 507 mGy, with an arithmetic mean of 29 mGy. The uncertainties, expressed in terms of geometric standard deviations, ranged from 1.1 to 5.8, with an arithmetic mean of 1.9. PMID:19741357
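
    The time-and-motion calculation at the heart of RADRUE amounts to summing, over the segments of a liquidator's reconstructed itinerary, the exposure rate at each work or residence location multiplied by the time spent there; the function signature below is purely illustrative:

    ```python
    def time_and_motion_dose(itinerary, exposure_rate):
        """itinerary: iterable of (location, date, hours) tuples from the interview;
        exposure_rate: function (location, date) -> dose rate (hypothetical units,
        e.g. mGy per hour)."""
        return sum(exposure_rate(loc, date) * hours for loc, date, hours in itinerary)
    ```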

  12. Vector tomography for reconstructing electric fields with non-zero divergence in bounded domains

    NASA Astrophysics Data System (ADS)

    Koulouri, Alexandra; Brookes, Mike; Rimpiläinen, Ville

    2017-01-01

    In vector tomography (VT), the aim is to reconstruct an unknown multi-dimensional vector field using line integral data. In the case of a 2-dimensional VT, two types of line integral data are usually required. These data correspond to integration of the parallel and perpendicular projection of the vector field along the integration lines and are called the longitudinal and transverse measurements, respectively. In most cases, however, the transverse measurements cannot be physically acquired. Therefore, the VT methods are typically used to reconstruct divergence-free (or source-free) velocity and flow fields that can be reconstructed solely from the longitudinal measurements. In this paper, we show how vector fields with non-zero divergence in a bounded domain can also be reconstructed from the longitudinal measurements without the need of explicitly evaluating the transverse measurements. To the best of our knowledge, VT has not previously been used for this purpose. In particular, we study low-frequency, time-harmonic electric fields generated by dipole sources in convex bounded domains which arise, for example, in electroencephalography (EEG) source imaging. We explain in detail the theoretical background, the derivation of the electric field inverse problem and the numerical approximation of the line integrals. We show that fields with non-zero divergence can be reconstructed from the longitudinal measurements with the help of two sparsity constraints that are constructed from the transverse measurements and the vector Laplace operator. As a comparison to EEG source imaging, we note that VT does not require mathematical modeling of the sources. By numerical simulations, we show that the pattern of the electric field can be correctly estimated using VT and the location of the source activity can be determined accurately from the reconstructed magnitudes of the field.
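
    In the notation usually adopted for 2-D vector tomography (assumed here, not quoted from the paper), the two measurement types mentioned above are line integrals of the tangential and normal components of the field, with only the first taken to be measurable in this work:

    ```latex
    L_{\parallel}(\mathbf{f}) = \int_{L} \mathbf{f}\cdot\boldsymbol{\tau}\,\mathrm{d}s
    \quad \text{(longitudinal)}, \qquad
    L_{\perp}(\mathbf{f}) = \int_{L} \mathbf{f}\cdot\mathbf{n}\,\mathrm{d}s
    \quad \text{(transverse)},
    ```

    where L is the integration line and τ and n are its unit tangent and unit normal.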

  13. Initial phantom study comparing image quality in computed tomography using adaptive statistical iterative reconstruction and new adaptive statistical iterative reconstruction v.

    PubMed

    Lim, Kyungjae; Kwon, Heejin; Cho, Jinhan; Oh, Jongyoung; Yoon, Seongkuk; Kang, Myungjin; Ha, Dongho; Lee, Jinhwa; Kang, Eunju

    2015-01-01

    The purpose of this study was to assess the image quality of a novel advanced iterative reconstruction (IR) method called "adaptive statistical IR V" (ASIR-V) by comparing its image noise, contrast-to-noise ratio (CNR), and spatial resolution with those of filtered back projection (FBP) and adaptive statistical IR (ASIR) on computed tomography (CT) phantom images. We performed CT scans at 5 different tube currents (50, 70, 100, 150, and 200 mA) using 3 types of CT phantoms. Scanned images were subsequently reconstructed with 7 different settings: FBP, and 3 levels each of ASIR and ASIR-V (30%, 50%, and 70%). Image noise was measured in the first study using a body phantom. CNR was measured in the second study using a contrast phantom, and spatial resolution was measured in the third study using a high-resolution phantom. We compared the image noise, CNR, and spatial resolution among the 7 reconstruction settings to determine whether noise reduction, high CNR, and high spatial resolution could be achieved with ASIR-V. Quantitative analysis of the first and second studies showed that images reconstructed using ASIR-V had reduced image noise and improved CNR compared with those of FBP and ASIR (P < 0.001). Qualitative analysis of the third study also showed that images reconstructed using ASIR-V had significantly better spatial resolution than those of FBP and ASIR (P < 0.001). Our phantom studies show that ASIR-V provides a significant reduction in image noise and a significant improvement in CNR as well as spatial resolution. Therefore, this technique has the potential to further reduce the radiation dose without compromising image quality.
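
    For reference, the noise and CNR figures of merit used in phantom studies like this one are typically computed from regions of interest as below; the abstract does not state the authors' exact formulas, so this is one common convention only:

    ```python
    import numpy as np

    def image_noise(roi_uniform):
        """Noise as the standard deviation of voxel values in a uniform ROI."""
        return float(np.std(roi_uniform))

    def cnr(roi_object, roi_background):
        """Contrast-to-noise ratio between a low-contrast object and its background."""
        return abs(np.mean(roi_object) - np.mean(roi_background)) / np.std(roi_background)
    ```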

  14. How Are We Measuring Patient Satisfaction After Anterior Cruciate Ligament Reconstruction?

    PubMed Central

    Kahlenberg, Cynthia A.; Nwachukwu, Benedict U.; Ferraro, Richard A.; Schairer, William W.; Steinhaus, Michael E.; Allen, Answorth A.

    2016-01-01

    Background: Reconstruction of the anterior cruciate ligament (ACL) is one of the most common orthopaedic operations in the United States. The long-term impact of ACL reconstruction is controversial, however, as longer term data have failed to demonstrate that ACL reconstruction helps alter the natural history of early onset osteoarthritis that occurs after ACL injury. There is significant interest in evaluating the value of ACL reconstruction surgeries. Purpose: To examine the quality of patient satisfaction reporting after ACL reconstruction surgery. Study Design: Systematic review; Level of evidence, 4. Methods: A systematic review of the MEDLINE database was performed using the PubMed interface. The Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines as well as the PRISMA checklist were employed. The initial search yielded 267 studies. The inclusion criteria were: English language, US patient population, clinical outcome study of ACL reconstruction surgery, and reporting of patient satisfaction included in the study. Study quality was assessed using the Newcastle-Ottawa scale. Results: A total of 22 studies met the inclusion criteria. These studies comprised a total of 1984 patients with a mean age of 31.9 years at the time of surgery and a mean follow-up period of 59.3 months. The majority of studies were evidence level 4 (n = 18; 81.8%), had a mean Newcastle-Ottawa scale score of 5.5, and were published before 2006 (n = 17; 77.3%); 5 studies (22.7%) failed to clearly describe their method for determining patient satisfaction. The most commonly used method for assessing satisfaction was a 0 to 10 satisfaction scale (n = 11; 50.0%). Among studies using a 0 to 10 scale, mean satisfaction ranged from 7.4 to 10.0. Patient-reported outcome and objective functional measures for ACL stability and knee function were positively correlated with patient satisfaction. Degenerative knee change was negatively correlated with satisfaction. Conclusion: The level of evidence for studies reporting patient satisfaction is low, and the methodologies for reporting patient satisfaction are variable. Additionally, within the past decade there has been a significant decline in the inclusion of this outcome measure within published ACL studies. As sports surgeons are increasingly called on to demonstrate the value of operative procedures, attention should be paid to understanding and reporting patient satisfaction. PMID:28203583

  15. Simulation of a fast diffuse optical tomography system based on radiative transfer equation

    NASA Astrophysics Data System (ADS)

    Motevalli, S. M.; Payani, A.

    2016-12-01

    Studies show that near-infrared (NIR) light (light with wavelength between 700 nm and 1300 nm) undergoes two interactions, absorption and scattering, when it penetrates tissue. Since scattering is the predominant interaction, the calculation of the light distribution in the tissue and the image reconstruction of absorption and scattering coefficients are very complicated. Analytical and numerical methods, such as the radiative transport equation and the Monte Carlo method, have been used to simulate light penetration in tissue. Recently, several investigators have tried to develop diffuse optical tomography systems. In these systems, NIR light penetrates and passes through the tissue, and the light exiting the tissue is measured by NIR detectors placed around it. These data are collected from all the detectors and transferred to the computational components (hardware and software), which produce a cross-sectional image of the tissue after performing the necessary computational processes. In this paper, the results of the simulation of a diffuse optical tomography system are presented. This simulation involves two stages: a) simulation of the forward problem (light penetration in the tissue), which is performed by solving the diffusion approximation equation in the stationary state using FEM; b) simulation of the inverse problem (image reconstruction), which is performed by the optimization algorithm called Broyden quasi-Newton. This method of image reconstruction is faster than other Newton-based optimization algorithms, such as Levenberg-Marquardt.
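
    The inverse-problem stage names Broyden's quasi-Newton method; a generic root-finding form of that iteration is sketched below, leaving out the diffuse-optical specifics (forward FEM model, Jacobian initialization, regularization), which are not described in the abstract:

    ```python
    import numpy as np

    def broyden_solve(F, x0, J0, tol=1e-8, max_iter=100):
        """Solve F(x) = 0 with Broyden's rank-one quasi-Newton updates (generic sketch)."""
        x = np.array(x0, dtype=float)
        J = np.array(J0, dtype=float)                      # initial Jacobian approximation
        for _ in range(max_iter):
            f = F(x)
            if np.linalg.norm(f) < tol:
                break
            dx = np.linalg.solve(J, -f)                    # Newton-like step
            x_new = x + dx
            df = F(x_new) - f
            J += np.outer(df - J @ dx, dx) / (dx @ dx)     # Broyden Jacobian update
            x = x_new
        return x
    ```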

  16. [Technologies for hair reconstruction and their applicability for pharmaceutical research].

    PubMed

    Matsuzaki, Takashi

    2008-01-01

    Hair follicles are the organs that produce hair shafts. They periodically regenerate throughout the life of the organism, a process called the hair cycle. To develop new drugs to treat hair disorders and diseases, reproducible and high-throughput assays or screening methods are required to estimate the efficacy of various factors on hair follicle function. Although organ culture of hair follicles is one useful way to carry out such research, it is not suitable for manipulating the genes or cells present in hair follicles. The Patch assay is a method used to reconstruct hair follicles from enzymatically dissociated skin cells and has many advantages compared to the conventional Chamber method. Using the Patch method, transferring genes into follicular cells becomes easier than ever before. Chimeric follicles can be produced from dissociated cells by modifying the combination of cells or by simply merging cells of different origins. These applications certainly help the progress of hair research. However, we recently found that some functions of dermal papillae and follicular epithelia change during the growing phase (anagen) of the hair cycle. Dermal papillae produce different factors in early anagen and mid anagen. The signals from dermal papillae in early anagen could produce hair bulbs with clonogenic epithelial precursors but not with dormant epithelial precursors. On the other hand, the signals from dermal papillae in mid anagen strongly promote hair formation with dormant epithelial precursors. Therefore, more attention should be given to the hair cycle stages when using organ culture of hair follicles and conducting reconstruction experiments with follicle-derived cells.

  17. Interval-based reconstruction for uncertainty quantification in PET

    NASA Astrophysics Data System (ADS)

    Kucharczak, Florentin; Loquin, Kevin; Buvat, Irène; Strauss, Olivier; Mariano-Goulart, Denis

    2018-02-01

    A new directed interval-based tomographic reconstruction algorithm, called non-additive interval-based expectation maximization (NIBEM), is presented. It uses non-additive modeling of the forward operator that provides intervals instead of single-valued projections. The approach is an extension of the maximum-likelihood expectation-maximization (MLEM) algorithm to intervals. The main motivation for this extension is that the resulting intervals have appealing properties for estimating the statistical uncertainty associated with the reconstructed activity values. After reviewing previously published theoretical concepts related to interval-based projectors, this paper describes the NIBEM algorithm and gives examples that highlight the properties and advantages of this interval-valued reconstruction.
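
    For reference, the scalar MLEM update that NIBEM generalizes to interval-valued projections can be written, in standard notation with system-matrix elements $a_{ij}$, measured counts $y_i$ and activity estimate $\lambda_j$, as

        $\lambda_j^{(k+1)} = \dfrac{\lambda_j^{(k)}}{\sum_i a_{ij}} \sum_i a_{ij} \, \dfrac{y_i}{\sum_{j'} a_{ij'} \lambda_{j'}^{(k)}}$.

    Roughly speaking, NIBEM replaces the single-valued projections $\sum_{j'} a_{ij'} \lambda_{j'}^{(k)}$ in this update by the intervals produced by its non-additive projection operator; this reading is inferred from the abstract rather than taken from the paper itself.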

  18. Stereolithographic Surgical Template: A Review

    PubMed Central

    Dandekeri, Shilpa Sudesh; Sowmya, M.K.; Bhandary, Shruthi

    2013-01-01

    Implant placement has become a routine modality of dental care. Improvements in surgical reconstructive methods, as well as increased prosthetic demands, require highly accurate diagnosis, planning, and placement. Recently, computer-aided design and manufacturing have made it possible to use data from computerised tomography not only to plan implant rehabilitation, but also to transfer this information to the surgery. A review of one such technique, called stereolithography, is presented in this article. It permits graphic and complex 3D implant placement and the fabrication of stereolithographic surgical templates, and it offers many significant benefits over traditional procedures. PMID:24179955

  19. Bayesian reconstruction of transmission within outbreaks using genomic variants.

    PubMed

    De Maio, Nicola; Worby, Colin J; Wilson, Daniel J; Stoesser, Nicole

    2018-04-01

    Pathogen genome sequencing can reveal details of transmission histories and is a powerful tool in the fight against infectious disease. In particular, within-host pathogen genomic variants identified through heterozygous nucleotide base calls are a potential source of information to identify linked cases and infer direction and time of transmission. However, using such data effectively to model disease transmission presents a number of challenges, including differentiating genuine variants from those observed due to sequencing error, as well as the specification of a realistic model for within-host pathogen population dynamics. Here we propose a new Bayesian approach to transmission inference, BadTrIP (BAyesian epiDemiological TRansmission Inference from Polymorphisms), that explicitly models evolution of pathogen populations in an outbreak, transmission (including transmission bottlenecks), and sequencing error. BadTrIP enables the inference of host-to-host transmission from pathogen sequencing data and epidemiological data. By assuming that genomic variants are unlinked, our method does not require the computationally intensive and unreliable reconstruction of individual haplotypes. Using simulations we show that BadTrIP is robust in most scenarios and can accurately infer transmission events by efficiently combining information from genetic and epidemiological sources; thanks to its realistic model of pathogen evolution and the inclusion of epidemiological data, BadTrIP is also more accurate than existing approaches. BadTrIP is distributed as an open source package (https://bitbucket.org/nicofmay/badtrip) for the phylogenetic software BEAST2. We apply our method to reconstruct transmission history at the early stages of the 2014 Ebola outbreak, showcasing the power of within-host genomic variants to reconstruct transmission events.

  20. Efficient tiled calculation of over-10-gigapixel holograms using ray-wavefront conversion.

    PubMed

    Igarashi, Shunsuke; Nakamura, Tomoya; Matsushima, Kyoji; Yamaguchi, Masahiro

    2018-04-16

    In the calculation of large-scale computer-generated holograms, an approach called "tiling," which divides the hologram plane into small rectangles, is often employed due to limitations on computational memory. However, the total computational cost increases severely with the number of divisions. In this paper, we propose an efficient method for calculating tiled large-scale holograms using ray-wavefront conversion. In experiments, the effectiveness of the proposed method was verified by comparing its calculation cost with that of the previous method. Additionally, a hologram of 128K × 128K pixels was calculated and fabricated by a laser-lithography system, and a high-quality 105 mm × 105 mm 3D image including complicated reflection and translucency was optically reconstructed.

  1. New approach to wireless data communication in a propagation environment

    NASA Astrophysics Data System (ADS)

    Hunek, Wojciech P.; Majewski, Paweł

    2017-10-01

    This paper presents a new approach to perfect signal reconstruction in multivariable wireless communication systems with different numbers of transmitting and receiving antennas. The proposed approach is based on the polynomial matrix S-inverse associated with the Smith factorization. Crucially, this inverse provides the so-called degrees of freedom. A simulation study confirms that these degrees of freedom can minimize the negative impact of the propagation environment and thereby increase the robustness of the whole signal reconstruction process. The parasitic dynamic ISI and ICI effects can thus be eliminated within a framework described by polynomial calculus. Therefore, the new method not only reduces cost but, more importantly, potentially yields systems with lower energy consumption than classical ones. To show the potential of the new approach, simulation studies were performed with the authors' simulator based on the well-known OFDM technique.

  2. Neural-network quantum state tomography

    NASA Astrophysics Data System (ADS)

    Torlai, Giacomo; Mazzola, Guglielmo; Carrasquilla, Juan; Troyer, Matthias; Melko, Roger; Carleo, Giuseppe

    2018-05-01

    The experimental realization of increasingly complex synthetic quantum systems calls for the development of general theoretical methods to validate and fully exploit quantum resources. Quantum state tomography (QST) aims to reconstruct the full quantum state from simple measurements, and therefore provides a key tool to obtain reliable analytics [1-3]. However, exact brute-force approaches to QST place a high demand on computational resources, making them unfeasible for anything except small systems [4,5]. Here we show how machine learning techniques can be used to perform QST of highly entangled states with more than a hundred qubits, to a high degree of accuracy. We demonstrate that machine learning allows one to reconstruct traditionally challenging many-body quantities—such as the entanglement entropy—from simple, experimentally accessible measurements. This approach can benefit existing and future generations of devices ranging from quantum computers to ultracold-atom quantum simulators [6-8].

  3. Equilibrium Reconstruction on the Large Helical Device

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Samuel A. Lazerson, D. Gates, D. Monticello, H. Neilson, N. Pomphrey, A. Reiman, S. Sakakibara, and Y. Suzuki

    Equilibrium reconstruction is commonly applied to axisymmetric toroidal devices. Recent advances in computational power and equilibrium codes have allowed for reconstructions of three-dimensional fields in stellarators and heliotrons. We present the first reconstructions of finite-beta discharges in the Large Helical Device (LHD). The plasma boundary and magnetic axis are constrained by the pressure profile from Thomson scattering. This results in a calculation of plasma beta without a priori assumptions about the equipartition of energy between species. Saddle loop arrays place additional constraints on the equilibrium. These reconstructions utilize STELLOPT, which calls VMEC. The VMEC equilibrium code assumes good nested flux surfaces. Reconstructed magnetic fields are fed into the PIES code, which relaxes this constraint, allowing for the examination of the effect of islands and stochastic regions on the magnetic measurements.

  4. A bootstrap algorithm for temporal signal reconstruction in the presence of noise from its fractional Fourier transformed intensity spectra

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tan, Cheng-Yang; /Fermilab

    2011-02-01

    A bootstrap algorithm for reconstructing the temporal signal from four of its fractional Fourier intensity spectra in the presence of noise is described. An optical arrangement is proposed which realises the bootstrap method for the measurement of ultrashort laser pulses. The measurement of short laser pulses of less than 1 ps is an ongoing challenge in optical physics. One reason is that no oscilloscope exists today which can directly measure the time structure of these pulses, so other techniques must be devised that indirectly provide the information necessary for temporal pulse reconstruction. One method called FROG (frequency resolved optical gating) has been in use since 1991 and is one of the popular methods for recovering these types of short pulses. The idea behind FROG is the use of multiple time-correlated pulse measurements in the frequency domain for the reconstruction. Multiple data sets are required because only intensity information is recorded, not phase; by collecting multiple data sets there are enough redundant measurements to yield the original time structure, although not necessarily uniquely (or even up to an arbitrary constant phase offset). The objective of this paper is to describe another method which is simpler than FROG. Instead of collecting many auto-correlated data sets, only two spectral intensity measurements of the temporal signal are needed in the absence of noise: the first from the intensity components of its usual Fourier transform and the second from its fractional Fourier transform (FrFT). In the presence of noise, a minimum of four measurements are required with the same FrFT order but with two different apertures. Armed with these two or four measurements, a unique solution up to a constant phase offset can be constructed.

  5. Nonlinear Force-free Coronal Magnetic Stereoscopy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chifu, Iulia; Wiegelmann, Thomas; Inhester, Bernd, E-mail: chifu@mps.mpg.de

    Insights into the 3D structure of the solar coronal magnetic field have been obtained in the past by two completely different approaches. The first approach is nonlinear force-free field (NLFFF) extrapolation, which uses photospheric vector magnetograms as a boundary condition. The second approach uses stereoscopy of coronal magnetic loops observed in EUV coronal images from different vantage points. Both approaches have their strengths and weaknesses. Extrapolation methods are sensitive to noise and inconsistencies in the boundary data, and the accuracy of stereoscopy is affected by the ability to identify the same structure in different images and by the separation angle between the view directions. As a consequence, for the same observational data, the 3D coronal magnetic fields computed with the two methods do not necessarily coincide. In an earlier work (Paper I) we extended our NLFFF optimization code by including stereoscopic constraints. The method was successfully tested with synthetic data, and within this work, we apply the newly developed code to a combined data set from SDO/HMI, SDO/AIA, and the two STEREO spacecraft. The extended method (called S-NLFFF) contains an additional term that monitors and minimizes the angle between the local magnetic field direction and the orientation of the 3D coronal loops reconstructed by stereoscopy. We find that when we prescribe the shape of the 3D stereoscopically reconstructed loops, the S-NLFFF method leads to a much better agreement between the modeled field and the stereoscopically reconstructed loops. We also find an appreciable decrease, by a factor of two, in the angle between the current and the magnetic field. This indicates the improved quality of the force-free solution obtained by S-NLFFF.

  6. Nonlinear Force-free Coronal Magnetic Stereoscopy

    NASA Astrophysics Data System (ADS)

    Chifu, Iulia; Wiegelmann, Thomas; Inhester, Bernd

    2017-03-01

    Insights into the 3D structure of the solar coronal magnetic field have been obtained in the past by two completely different approaches. The first approach is nonlinear force-free field (NLFFF) extrapolation, which uses photospheric vector magnetograms as a boundary condition. The second approach uses stereoscopy of coronal magnetic loops observed in EUV coronal images from different vantage points. Both approaches have their strengths and weaknesses. Extrapolation methods are sensitive to noise and inconsistencies in the boundary data, and the accuracy of stereoscopy is affected by the ability to identify the same structure in different images and by the separation angle between the view directions. As a consequence, for the same observational data, the 3D coronal magnetic fields computed with the two methods do not necessarily coincide. In an earlier work (Paper I) we extended our NLFFF optimization code by including stereoscopic constraints. The method was successfully tested with synthetic data, and within this work, we apply the newly developed code to a combined data set from SDO/HMI, SDO/AIA, and the two STEREO spacecraft. The extended method (called S-NLFFF) contains an additional term that monitors and minimizes the angle between the local magnetic field direction and the orientation of the 3D coronal loops reconstructed by stereoscopy. We find that when we prescribe the shape of the 3D stereoscopically reconstructed loops, the S-NLFFF method leads to a much better agreement between the modeled field and the stereoscopically reconstructed loops. We also find an appreciable decrease, by a factor of two, in the angle between the current and the magnetic field. This indicates the improved quality of the force-free solution obtained by S-NLFFF.

  7. Compressive sensing-based electrostatic sensor array signal processing and exhausted abnormal debris detecting

    NASA Astrophysics Data System (ADS)

    Tang, Xin; Chen, Zhongsheng; Li, Yue; Yang, Yongmin

    2018-05-01

    When faults occur in the gas-path components of gas turbines, sparsely distributed charged debris is generated and released into the exhaust gas. This debris is called abnormal debris. Electrostatic sensors can detect the debris online and thereby indicate the faults. It is generally considered that, under a specific working condition, a more serious fault generates more and larger debris, and a larger piece of debris carries more charge. Therefore, the amount and charge of the abnormal debris are important indicators of fault severity. However, because an electrostatic sensor can only detect the superposed effect of all the debris on the electrostatic field, it can hardly identify the amount and position of the debris. Moreover, because the signals of electrostatic sensors depend not only on the charge but also on the position of the debris, and the position information is difficult to acquire, measuring debris charge accurately with the electrostatic detection method remains a technical difficulty. To solve these problems, a hemisphere-shaped electrostatic sensor circular array (HSESCA) is used, and an array signal processing method based on compressive sensing (CS) is proposed in this paper. To work within the theoretical framework of CS, the measurement model of the HSESCA is discretized into a sparse representation by meshing. In this way, the amount and charge of the abnormal debris are described by a sparse vector, which is then reconstructed by minimizing its l1-norm subject to an underdetermined system of equations, as formulated below. In addition, a pre-processing method based on singular value decomposition and a result calibration method based on a weighted-centroid algorithm are applied to ensure the accuracy of the reconstruction. The proposed method is validated by both numerical simulations and experiments. Reconstruction errors, characteristics of the results, and some related factors are discussed.
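
    In outline (standard compressive sensing notation, not the paper's exact symbols), the reconstruction amounts to recovering a sparse charge vector $x$ from the underdetermined sensor measurements $y = \Phi x + \varepsilon$ by solving

        $\hat{x} = \arg\min_x \|x\|_1 \quad \text{subject to} \quad \|\Phi x - y\|_2 \le \epsilon$,

    where $\Phi$ is the discretized (meshed) measurement model of the sensor array and $\epsilon$ bounds the measurement noise.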

  8. Load identification approach based on basis pursuit denoising algorithm

    NASA Astrophysics Data System (ADS)

    Ginsberg, D.; Ruby, M.; Fritzen, C. P.

    2015-07-01

    Information about external loads is of great interest in many fields of structural analysis, such as structural health monitoring (SHM) systems or the assessment of damage after extreme events. However, in most cases it is not possible to measure the external forces directly, so they need to be reconstructed. Load reconstruction refers to the problem of estimating an input to a dynamic system when the system output and the impulse response functions are known. Generally, this leads to a so-called ill-posed inverse problem, which involves solving an underdetermined linear system of equations. For most practical applications it can be assumed that the applied loads are not arbitrarily distributed in time and space; at least some specific characteristics of the external excitation are known a priori. In this contribution this knowledge is used to develop a more suitable force reconstruction method, which identifies the time history and the force location simultaneously while employing significantly fewer sensors than other reconstruction approaches. The properties of the external force are used to transform the ill-posed problem into a sparse recovery task. The sparse solution is acquired by solving a minimization problem known as basis pursuit denoising (BPDN), sketched below. The possibility of reconstructing loads from noisy structural measurement signals is demonstrated for two frequently occurring loading conditions: harmonic excitation and impact events, separately and combined. First a simulation study of a simple plate structure is carried out, and thereafter an experimental investigation of a real beam is performed.
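
    A minimal sketch of one common way to solve the BPDN problem min_x 0.5*||Ax - b||^2 + lam*||x||_1 is the iterative soft-thresholding algorithm (ISTA) shown below; here A would be assembled from the impulse response functions and b from the measured response, and all names are illustrative rather than the authors' code.

      import numpy as np

      def soft_threshold(v, t):
          """Elementwise soft-thresholding, the proximal operator of t*||.||_1."""
          return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

      def ista_bpdn(A, b, lam, n_iter=500):
          """Solve min_x 0.5*||A x - b||_2^2 + lam*||x||_1 by iterative soft-thresholding (ISTA)."""
          L = np.linalg.norm(A, 2) ** 2         # Lipschitz constant of the gradient of the data term
          x = np.zeros(A.shape[1])
          for _ in range(n_iter):
              grad = A.T @ (A @ x - b)          # gradient of the least-squares data term
              x = soft_threshold(x - grad / L, lam / L)
          return x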

  9. Reconstructing spatial organizations of chromosomes through manifold learning

    PubMed Central

    Deng, Wenxuan; Hu, Hailin; Ma, Rui; Zhang, Sai; Yang, Jinglin; Peng, Jian; Kaplan, Tommy; Zeng, Jianyang

    2018-01-01

    Abstract Decoding the spatial organization of chromosomes has crucial implications for studying eukaryotic gene regulation. Recently, chromosomal conformation capture based technologies, such as Hi-C, have been widely used to uncover the interaction frequencies of genomic loci in a high-throughput and genome-wide manner and provide new insights into the folding of three-dimensional (3D) genome structure. In this paper, we develop a novel manifold learning based framework, called GEM (Genomic organization reconstructor based on conformational Energy and Manifold learning), to reconstruct the three-dimensional organization of chromosomes by integrating Hi-C data with biophysical feasibility. Unlike previous methods, which explicitly assume specific relationships between Hi-C interaction frequencies and spatial distances, our model directly embeds the neighboring affinities from Hi-C space into 3D Euclidean space. Extensive validations demonstrated that GEM not only greatly outperformed other state-of-the-art modeling methods but also provided physically and physiologically valid 3D representations of the organization of chromosomes. Furthermore, we apply, for the first time, the modeled chromatin structures to recover long-range genomic interactions missing from the original Hi-C data. PMID:29408992
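
    As a rough illustration of the general idea (not the GEM algorithm itself, which avoids an explicit frequency-to-distance assumption and additionally enforces conformational-energy feasibility), one can convert Hi-C interaction frequencies into dissimilarities and embed the loci in 3D with an off-the-shelf multidimensional scaling routine:

      import numpy as np
      from sklearn.manifold import MDS

      def embed_hic(freq, alpha=1.0, eps=1e-6):
          """Toy 3D embedding of a symmetric Hi-C contact-frequency matrix.

          Higher contact frequency is mapped to a smaller dissimilarity, and the loci are
          embedded in 3D Euclidean space with metric MDS. Purely a conceptual stand-in for
          GEM's energy-constrained manifold learning.
          """
          dissim = 1.0 / (np.asarray(freq, dtype=float) + eps) ** alpha
          np.fill_diagonal(dissim, 0.0)
          mds = MDS(n_components=3, dissimilarity="precomputed", random_state=0)
          return mds.fit_transform(dissim)      # (n_loci, 3) coordinates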

  10. Effect of Intercalated Water on Potassium Ion Transport through Kv1.2 Channels Studied via On-the-Fly Free-Energy Parametrization.

    PubMed

    Paz, S Alexis; Maragliano, Luca; Abrams, Cameron F

    2018-05-08

    We introduce a two-dimensional version of the method called on-the-fly free energy parametrization (OTFP) to reconstruct free-energy surfaces using molecular dynamics simulations, which we name OTFP-2D. We first test the new method by reconstructing the well-known dihedral-angle free-energy surface of solvated alanine dipeptide. Then, we use it to investigate the translocation of K+ ions inside the Kv1.2 channel. By comparing a series of two-dimensional free-energy surfaces for ion movement calculated under different conditions on the intercalated water molecules, we first recapitulate the widely accepted knock-on mechanism for ion translocation and then confirm that permeation occurs with water molecules alternated among the ions, in accordance with the latest experimental findings. From a methodological standpoint, our new OTFP-2D algorithm demonstrates the excellent sampling acceleration of temperature-accelerated molecular dynamics and the ability to efficiently compute 2D free-energy surfaces. It will therefore be useful in a large variety of complex biomacromolecular simulations.

  11. High-resolution three-dimensional structural microscopy by single-angle Bragg ptychography

    DOE PAGES

    Hruszkewycz, S. O.; Allain, M.; Holt, M. V.; ...

    2016-11-21

    Coherent X-ray microscopy by phase retrieval of Bragg diffraction intensities enables lattice distortions within a crystal to be imaged at nanometre-scale spatial resolution in three dimensions. While this capability can be used to resolve structure–property relationships at the nanoscale under working conditions, strict data measurement requirements can limit the application of current approaches. In this work, we introduce an efficient method of imaging three-dimensional (3D) nanoscale lattice behaviour and strain fields in crystalline materials with a methodology that we call 3D Bragg projection ptychography (3DBPP). This method enables 3D image reconstruction of a crystal volume from a series of two-dimensional X-ray Bragg coherent intensity diffraction patterns measured at a single incident beam angle. Structural information about the sample is encoded along two reciprocal-space directions normal to the Bragg-diffracted exit beam, and along the third dimension in real space by the scanning beam. Finally, we present our approach with an analytical derivation, a numerical demonstration, and an experimental reconstruction of lattice distortions in a component of a nanoelectronic prototype device.

  12. Reconstructing spatial organizations of chromosomes through manifold learning.

    PubMed

    Zhu, Guangxiang; Deng, Wenxuan; Hu, Hailin; Ma, Rui; Zhang, Sai; Yang, Jinglin; Peng, Jian; Kaplan, Tommy; Zeng, Jianyang

    2018-05-04

    Decoding the spatial organization of chromosomes has crucial implications for studying eukaryotic gene regulation. Recently, chromosomal conformation capture based technologies, such as Hi-C, have been widely used to uncover the interaction frequencies of genomic loci in a high-throughput and genome-wide manner and provide new insights into the folding of three-dimensional (3D) genome structure. In this paper, we develop a novel manifold learning based framework, called GEM (Genomic organization reconstructor based on conformational Energy and Manifold learning), to reconstruct the three-dimensional organization of chromosomes by integrating Hi-C data with biophysical feasibility. Unlike previous methods, which explicitly assume specific relationships between Hi-C interaction frequencies and spatial distances, our model directly embeds the neighboring affinities from Hi-C space into 3D Euclidean space. Extensive validations demonstrated that GEM not only greatly outperformed other state-of-the-art modeling methods but also provided physically and physiologically valid 3D representations of the organization of chromosomes. Furthermore, we apply, for the first time, the modeled chromatin structures to recover long-range genomic interactions missing from the original Hi-C data.

  13. Measurement of the CP-Violation Parameter sin2Φ₁ with a New Tagging Method at the Υ(5S) Resonance

    DOE PAGES

    Sato, Y.; Yamamoto, H.; Aihara, H.; ...

    2012-04-23

    We report a measurement of the CP-violation parameter sin2Φ₁ at the Υ(5S) resonance using a new tagging method, called "B-π tagging." In Υ(5S) decays containing a neutral B meson, a charged B, and a charged pion, the neutral B is reconstructed in the J/ψ K⁰S CP-eigenstate decay channel. The initial flavor of the neutral B meson at the moment of the Υ(5S) decay is opposite to that of the charged B and may thus be inferred from the charge of the pion without reconstructing the charged B. From the asymmetry between the B π⁺ and B π⁻ tagged J/ψ K⁰S yields, we determine sin2Φ₁ = 0.57 ± 0.58(stat) ± 0.06(syst). The results are based on 121 fb⁻¹ of data recorded by the Belle detector at the KEKB e⁺e⁻ collider.

  14. Fast myopic 2D-SIM super resolution microscopy with joint modulation pattern estimation

    NASA Astrophysics Data System (ADS)

    Orieux, François; Loriette, Vincent; Olivo-Marin, Jean-Christophe; Sepulveda, Eduardo; Fragola, Alexandra

    2017-12-01

    Super-resolution in structured illumination microscopy (SIM) is obtained through de-aliasing of modulated raw images, in which high frequencies are measured indirectly inside the optical transfer function. Usual approaches that use 9 or 15 images are often too slow for dynamic studies. Moreover, as experimental conditions change with time, modulation parameters must be estimated from the images themselves. This paper tackles the problem of image reconstruction for fast super-resolution in SIM, where the number of available raw images is reduced to four instead of nine or fifteen. Within an optimization framework, the solution is inferred via a joint myopic criterion for the image and the modulation (or acquisition) parameters, leading to what is frequently called a myopic or semi-blind inversion problem. The estimate is chosen as the minimizer of the nonlinear criterion, numerically calculated by means of a block coordinate optimization algorithm. The effectiveness of the proposed method is demonstrated on simulated and experimental examples. The results show precise estimation of the modulation parameters jointly with the reconstruction of the super-resolution image. The method also shows its effectiveness for thick biological samples.

  15. Chroma intra prediction based on inter-channel correlation for HEVC.

    PubMed

    Zhang, Xingyu; Gisquet, Christophe; François, Edouard; Zou, Feng; Au, Oscar C

    2014-01-01

    In this paper, we investigate a new inter-channel coding mode called the LM mode, proposed for the next-generation video coding standard, High Efficiency Video Coding (HEVC). This mode exploits inter-channel correlation by using reconstructed luma to predict chroma linearly, with parameters derived from neighboring reconstructed luma and chroma pixels at both the encoder and decoder to avoid overhead signaling. We analyze the LM mode and prove that the LM parameters for predicting the original chroma and the reconstructed chroma are statistically the same. We also analyze the error sensitivity of the LM parameters. We identify some situations in which the LM mode is problematic and propose three novel LM-like modes, called LMA, LML, and LMO, to address them. To limit the increase in complexity due to the LM-like modes, we propose fast algorithms with the help of new cost functions. We further identify some potentially problematic conditions in the parameter estimation (including the regression dilution problem) and introduce a novel model correction technique to detect and correct those conditions. Simulation results suggest that considerable BD-rate reduction can be achieved by the proposed LM-like modes and model correction technique. In addition, the performance gains of the two techniques appear to be essentially additive when combined.
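
    As a sketch of the kind of parameter derivation the LM mode performs (a simplified floating-point version; the standard uses fixed-point integer arithmetic), the linear predictor chroma ≈ alpha*luma + beta can be fitted from the neighbouring reconstructed pixels by least squares:

      import numpy as np

      def lm_parameters(neigh_luma, neigh_chroma):
          """Least-squares fit of chroma = alpha * luma + beta from neighbouring reconstructed pixels."""
          l = np.asarray(neigh_luma, dtype=float)
          c = np.asarray(neigh_chroma, dtype=float)
          var_l = np.mean(l * l) - np.mean(l) ** 2
          cov_lc = np.mean(l * c) - np.mean(l) * np.mean(c)
          alpha = cov_lc / var_l if var_l > 0 else 0.0
          beta = np.mean(c) - alpha * np.mean(l)
          return alpha, beta

      def lm_predict(rec_luma_block, alpha, beta):
          """Predict a chroma block linearly from the co-located reconstructed luma samples."""
          return alpha * np.asarray(rec_luma_block, dtype=float) + beta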

  16. Direction-aware Slope Limiter for 3D Cubic Grids with Adaptive Mesh Refinement

    DOE PAGES

    Velechovsky, Jan; Francois, Marianne M.; Masser, Thomas

    2018-06-07

    In the context of finite volume methods for hyperbolic systems of conservation laws, slope limiters are an effective way to suppress the creation of unphysical local extrema and/or oscillations near discontinuities. We investigate properties of these limiters as applied to piecewise linear reconstructions of conservative fluid quantities in three-dimensional simulations. In particular, we are interested in linear reconstructions on Cartesian adaptively refined meshes, where a reconstructed fluid quantity at a face center depends on more than a single gradient component of the quantity. We design a new slope limiter, which combines the robustness of a minmod limiter with the accuracy of a van Leer limiter. The limiter is called the Direction-Aware Limiter (DAL), because the combination is based on a principal flow direction. In particular, DAL is useful in situations where the Barth–Jespersen limiter for general meshes fails to maintain global linear functions, such as on cubic computational meshes with stencils including only face-neighboring cells. We verify the new slope limiter on a suite of standard hydrodynamic test problems on Cartesian adaptively refined meshes. Lastly, we demonstrate reduced mesh imprinting; for radially symmetric problems such as the Sedov blast wave or the Noh implosion test cases, the results with DAL show better preservation of radial symmetry compared to the other standard methods on Cartesian meshes.

  17. Direction-aware Slope Limiter for 3D Cubic Grids with Adaptive Mesh Refinement

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Velechovsky, Jan; Francois, Marianne M.; Masser, Thomas

    In the context of finite volume methods for hyperbolic systems of conservation laws, slope limiters are an effective way to suppress the creation of unphysical local extrema and/or oscillations near discontinuities. We investigate properties of these limiters as applied to piecewise linear reconstructions of conservative fluid quantities in three-dimensional simulations. In particular, we are interested in linear reconstructions on Cartesian adaptively refined meshes, where a reconstructed fluid quantity at a face center depends on more than a single gradient component of the quantity. We design a new slope limiter, which combines the robustness of a minmod limiter with the accuracy of a van Leer limiter. The limiter is called the Direction-Aware Limiter (DAL), because the combination is based on a principal flow direction. In particular, DAL is useful in situations where the Barth–Jespersen limiter for general meshes fails to maintain global linear functions, such as on cubic computational meshes with stencils including only face-neighboring cells. We verify the new slope limiter on a suite of standard hydrodynamic test problems on Cartesian adaptively refined meshes. Lastly, we demonstrate reduced mesh imprinting; for radially symmetric problems such as the Sedov blast wave or the Noh implosion test cases, the results with DAL show better preservation of radial symmetry compared to the other standard methods on Cartesian meshes.
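
    For context, minimal scalar versions of the two classical limiter functions that DAL combines are sketched below in their standard textbook forms, written for the ratio r of consecutive slopes; this is not the DAL implementation itself, which additionally weights the components according to the principal flow direction.

      import numpy as np

      def minmod(r):
          """Minmod limiter: the most robust (and most diffusive) classical choice."""
          return np.maximum(0.0, np.minimum(1.0, r))

      def van_leer(r):
          """Van Leer limiter: smoother and more accurate away from extrema."""
          return (r + np.abs(r)) / (1.0 + np.abs(r))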

  18. Three-dimensional refractive index and fluorescence tomography using structured illumination (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Park, GwangSik; Shin, SeungWoo; Kim, Kyoohyun; Park, YongKeun

    2017-02-01

    Optical diffraction tomography (ODT) has emerged as an optical technique for label-free imaging of the three-dimensional (3-D) refractive index (RI) distribution of biological samples. ODT employs interferometric microscopy to measure multiple holograms of samples at various incident angles, from which the Fourier diffraction theorem reconstructs the 3-D RI distribution from the retrieved complex optical fields. Since the RI value is linearly proportional to the protein concentration of biological samples, with a proportionality coefficient called the refractive index increment (RII), reconstructed 3-D RI tomograms provide precise structural and biochemical information about individual biological samples. Because most proteins have similar RII values, however, ODT has limited molecular specificity, especially for imaging eukaryotic cells having various types of proteins and subcellular organelles. Here, we present an ODT system combined with structured illumination microscopy which can measure the 3-D RI distribution of biological samples as well as 3-D super-resolution fluorescence images in the same optical setup. A digital micromirror device (DMD) controls the incident angle of the illumination beam for tomogram reconstruction, and the same DMD modulates the structured illumination pattern of the excitation beam for super-resolution fluorescence imaging. We first validate the proposed method by simultaneous optical diffraction tomographic imaging and super-resolution fluorescence imaging of fluorescent beads. The proposed method is also applied to various biological samples.

  19. A fast rebinning algorithm for 3D positron emission tomography using John's equation

    NASA Astrophysics Data System (ADS)

    Defrise, Michel; Liu, Xuan

    1999-08-01

    Volume imaging in positron emission tomography (PET) requires the inversion of the three-dimensional (3D) x-ray transform. The usual solution to this problem is based on 3D filtered-backprojection (FBP), but is slow. Alternative methods have been proposed which factor the 3D data into independent 2D data sets corresponding to the 2D Radon transforms of a stack of parallel slices. Each slice is then reconstructed using 2D FBP. These so-called rebinning methods are numerically efficient but are approximate. In this paper a new exact rebinning method is derived by exploiting the fact that the 3D x-ray transform of a function is the solution to the second-order partial differential equation first studied by John. The method is proposed for two sampling schemes, one corresponding to a pair of infinite plane detectors and another one corresponding to a cylindrical multi-ring PET scanner. The new FORE-J algorithm has been implemented for this latter geometry and was compared with the approximate Fourier rebinning algorithm FORE and with another exact rebinning algorithm, FOREX. Results with simulated data demonstrate a significant improvement in accuracy compared to FORE, while the reconstruction time is doubled. Compared to FOREX, the FORE-J algorithm is slightly less accurate but more than three times faster.
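
    For reference, one common statement of John's equation (assumed here; the paper works in scanner-specific coordinates): if $g(x_1, x_2; y_1, y_2)$ denotes the X-ray transform of a function along the line joining the point $(x_1, x_2)$ on one detector plane to the point $(y_1, y_2)$ on a parallel plane, then

        $\dfrac{\partial^2 g}{\partial x_1 \, \partial y_2} - \dfrac{\partial^2 g}{\partial x_2 \, \partial y_1} = 0$.

    Exact rebinning methods such as FORE-J exploit this consistency condition, rewritten in the native coordinates of the scanner geometry, to map oblique 3D data onto a stack of 2D sinograms.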

  20. Spatial-temporal forecasting the sunspot diagram

    NASA Astrophysics Data System (ADS)

    Covas, Eurico

    2017-09-01

    Aims: We attempt to forecast the Sun's sunspot butterfly diagram in both space (i.e. in latitude) and time, instead of the usual one-dimensional time series forecasts prevalent in the scientific literature. Methods: We use a prediction method based on the non-linear embedding of data series in high dimensions. We use this method to forecast both in latitude (space) and in time, using a full spatial-temporal series of the sunspot diagram from 1874 to 2015. Results: The analysis of the results shows that it is indeed possible to reconstruct the overall shape and amplitude of the spatial-temporal pattern of sunspots, but that the method in its current form does not have real predictive power. We also apply a metric called structural similarity to compare the forecasted and the observed butterfly cycles, showing that this metric can be a useful addition to the usual root mean square error metric when analysing the efficiency of different prediction methods. Conclusions: We conclude that it is in principle possible to reconstruct the full sunspot butterfly diagram for at least one cycle using this approach, and that this method and others should be explored, since just looking at metrics such as sunspot count number or sunspot total area coverage is too reductive given the spatial-temporal dynamical complexity of the sunspot butterfly diagram. However, more data and/or an improved approach are probably necessary to achieve true predictive power.
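
    The structural similarity metric referred to here is presumably the standard SSIM index, which for two image patches x and y reads

        $\mathrm{SSIM}(x, y) = \dfrac{(2\mu_x\mu_y + c_1)(2\sigma_{xy} + c_2)}{(\mu_x^2 + \mu_y^2 + c_1)(\sigma_x^2 + \sigma_y^2 + c_2)}$,

    with local means $\mu$, variances $\sigma^2$, covariance $\sigma_{xy}$ and small stabilising constants $c_1, c_2$; unlike a plain root mean square error, it is sensitive to the spatial structure of the butterfly pattern rather than only to pointwise amplitude differences.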

  1. Bayesian reconstruction of gravitational wave bursts using chirplets

    NASA Astrophysics Data System (ADS)

    Millhouse, Margaret; Cornish, Neil; Littenberg, Tyson

    2017-01-01

    The BayesWave algorithm has been shown to accurately reconstruct unmodeled short duration gravitational wave bursts and to distinguish between astrophysical signals and transient noise events. BayesWave does this by using a variable number of sine-Gaussian (Morlet) wavelets to reconstruct data in multiple interferometers. While the Morlet wavelets can be summed together to produce any possible waveform, there could be other wavelet functions that improve the performance. Because we expect most astrophysical gravitational wave signals to evolve in frequency, modified Morlet wavelets with linear frequency evolution - called chirplets - may better reconstruct signals with fewer wavelets. We compare the performance of BayesWave using Morlet wavelets and chirplets on a variety of simulated signals.
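
    One common parameterization of such a chirplet (assumed here; the convention used by the authors may differ in detail) adds a quadratic phase term to the sine-Gaussian:

        $\Psi(t) = A \, e^{-(t - t_0)^2/\tau^2} \cos\!\left( 2\pi f_0 (t - t_0) + \pi \dot{f} (t - t_0)^2 + \phi_0 \right)$,

    so that the instantaneous frequency evolves linearly as $f(t) = f_0 + \dot{f}\,(t - t_0)$; setting $\dot{f} = 0$ recovers the ordinary Morlet (sine-Gaussian) wavelet.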

  2. Local motion-compensated method for high-quality 3D coronary artery reconstruction

    PubMed Central

    Liu, Bo; Bai, Xiangzhi; Zhou, Fugen

    2016-01-01

    The 3D reconstruction of coronary arteries from X-ray angiograms rotationally acquired on a C-arm has great clinical value. While cardiac-gated reconstruction has shown promising results, it suffers from residual motion. This work proposes a new local motion-compensated reconstruction method to handle this issue. An initial image is first reconstructed using a regularized iterative reconstruction method. Then a 3D/2D registration method is proposed to estimate the residual vessel motion. Finally, the residual motion is compensated in the final reconstruction using the extended iterative reconstruction method. Through quantitative evaluation, it was found that a high-quality 3D reconstruction could be obtained and the result was comparable to a state-of-the-art method. PMID:28018741

  3. 3DSEM++: Adaptive and intelligent 3D SEM surface reconstruction.

    PubMed

    Tafti, Ahmad P; Holz, Jessica D; Baghaie, Ahmadreza; Owen, Heather A; He, Max M; Yu, Zeyun

    2016-08-01

    Structural analysis of microscopic objects is a longstanding topic in several scientific disciplines, such as the biological, mechanical, and materials sciences. The scanning electron microscope (SEM), a promising imaging instrument, has been used for decades to determine the surface properties (e.g., compositions or geometries) of specimens, achieving high magnification, contrast, and resolution better than one nanometer. Whereas SEM micrographs remain two-dimensional (2D), many research and educational questions truly require knowledge of the three-dimensional (3D) structures. 3D surface reconstruction from SEM images leads to a remarkable understanding of microscopic surfaces, allowing informative and qualitative visualization of the samples being investigated. In this contribution, we integrate several computational technologies, including machine learning, a contrario methodology, and epipolar geometry, to design and develop a novel and efficient method called 3DSEM++ for multi-view 3D SEM surface reconstruction in an adaptive and intelligent fashion. Experiments performed on real and synthetic data show that the approach achieves significant precision in both SEM extrinsic calibration and 3D surface modeling. Copyright © 2016 Elsevier Ltd. All rights reserved.

  4. SparseBeads data: benchmarking sparsity-regularized computed tomography

    NASA Astrophysics Data System (ADS)

    Jørgensen, Jakob S.; Coban, Sophia B.; Lionheart, William R. B.; McDonald, Samuel A.; Withers, Philip J.

    2017-12-01

    Sparsity regularization (SR), such as total variation (TV) minimization, allows accurate image reconstruction in x-ray computed tomography (CT) from fewer projections than analytical methods. Exactly how few projections suffice and how this number may depend on the image remain poorly understood. Compressive sensing connects the critical number of projections to the image sparsity but does not cover CT; however, empirical results suggest a similar connection. The present work establishes, for real CT data, a connection between gradient sparsity and the number of projections sufficient for accurate TV-regularized reconstruction. A collection of 48 x-ray CT datasets called SparseBeads was designed for benchmarking SR reconstruction algorithms. Beadpacks comprising glass beads of five different sizes, as well as mixtures, were scanned in a micro-CT scanner to provide structured datasets with variable image sparsity levels, numbers of projections and noise levels, allowing the systematic assessment of parameters affecting the performance of SR reconstruction algorithms. Using the SparseBeads data, TV-regularized reconstruction quality was assessed as a function of the number of projections and the gradient sparsity. The critical number of projections for satisfactory TV-regularized reconstruction increased almost linearly with the gradient sparsity. This establishes a quantitative guideline from which one may predict how few projections to acquire based on the expected sample sparsity level, as an aid in planning dose- or time-critical experiments. The results are expected to hold for samples of similar characteristics, i.e. consisting of few, distinct phases with relatively simple structure. Such cases are plentiful in porous media, composite materials, and foams, as well as in non-destructive testing and metrology. For samples with other characteristics, the proposed methodology may be used to investigate similar relations.
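
    In its generic unconstrained form (notation ours, not the paper's), the TV-regularized reconstruction assessed here solves

        $\hat{x} = \arg\min_x \; \tfrac{1}{2}\|A x - b\|_2^2 + \lambda \sum_j \|(\nabla x)_j\|_2$,

    where $A$ is the CT system matrix, $b$ the measured projections, $(\nabla x)_j$ the discrete spatial gradient at pixel $j$ and $\lambda$ the regularization weight; the gradient sparsity discussed in the text is essentially the number of pixels with nonzero gradient magnitude.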

  5. Phylogenomic reconstruction supports supercontinent origins for Leishmania.

    PubMed

    Harkins, Kelly M; Schwartz, Rachel S; Cartwright, Reed A; Stone, Anne C

    2016-03-01

    Leishmania, a genus of parasites transmitted to human hosts and mammalian/reptilian reservoirs by an insect vector, is the causative agent of the human disease complex leishmaniasis. The evolutionary relationships within the genus Leishmania and its origins are the source of ongoing debate, reflected in conflicting phylogenetic and biogeographic reconstructions. This study employs a recently described bioinformatics method, SISRS, to identify over 200,000 informative sites across the genome from newly sequenced and publicly available Leishmania data. This dataset is used to reconstruct the evolutionary relationships of this genus. Additionally, we constructed a large multi-gene dataset, using it to reconstruct the phylogeny and estimate divergence dates for species. We conclude that the genus Leishmania evolved at least 90-100 million years ago, supporting a modified version of the Multiple Origins hypothesis that we call the Supercontinent hypothesis. According to this scenario, separate Leishmania clades emerged prior to, and during, the breakup of Gondwana. Additionally, we confirm that reptile-infecting Leishmania are derived from mammalian forms and that the species that infect porcupines and sloths form a clade long separated from other species. Finally, we firmly place the guinea-pig-infecting species Leishmania enriettii, the globally dispersed Leishmania siamensis, and the newly identified Australian species from a kangaroo as sibling species whose distribution arises from the ancient connection between Australia, Antarctica, and South America. Copyright © 2015 Elsevier B.V. All rights reserved.

  6. Coupled numerical approach combining finite volume and lattice Boltzmann methods for multi-scale multi-physicochemical processes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, Li; He, Ya-Ling; Kang, Qinjun

    2013-12-15

    A coupled (hybrid) simulation strategy spatially combining the finite volume method (FVM) and the lattice Boltzmann method (LBM), called CFVLBM, is developed to simulate coupled multi-scale multi-physicochemical processes. In the CFVLBM, the computational domain of multi-scale problems is divided into two sub-domains, i.e., an open, free fluid region and a region filled with porous materials. The FVM and LBM are used for these two regions, respectively, with information exchanged at the interface between the two sub-domains. A general reconstruction operator (RO) is proposed to derive the distribution functions in the LBM from the corresponding macro scalar, the governing equation of which obeys the convection–diffusion equation. The CFVLBM and the RO are validated on several typical physicochemical problems and then applied to simulate complex multi-scale coupled fluid flow, heat transfer, mass transport, and chemical reaction in a wall-coated micro reactor. The maximum ratio of the grid size between the FVM and LBM regions is explored and discussed. -- Highlights: • A coupled simulation strategy for simulating multi-scale phenomena is developed. • Finite volume method and lattice Boltzmann method are coupled. • A reconstruction operator is derived to transfer information at the sub-domain interface. • Coupled multi-scale multiple physicochemical processes in a micro reactor are simulated. • Techniques to save computational resources and improve the efficiency are discussed.

  7. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Park, C; Zhang, H; Chen, Y

    Purpose: Recently, compressed sensing (CS) based iterative reconstruction (IR) methods have been receiving attention for reconstructing high-quality cone beam computed tomography (CBCT) images from sparsely sampled or noisy projections. The aim of this study is to develop a novel baseline algorithm called Mask Guided Image Reconstruction (MGIR), which can provide superior image quality for both low-dose 3DCBCT and 4DCBCT within a single mathematical framework. Methods: In MGIR, the unknown CBCT volume is mathematically modeled as a combination of two regions, in which anatomical structures are 1) within a priori-defined mask and 2) outside the mask. Each part of the image is then updated alternately by solving minimization problems based on CS-type IR. For low-dose 3DCBCT, the former region is defined as the anatomically complex region, where the focus is on preserving edge information, while the latter region is defined as having uniform contrast and is hence aggressively updated to remove noise and artifacts. In 4DCBCT, the regions are separated into a common static part and a moving part. The static volume and the moving volumes are then updated with the global and the phase-sorted projections, respectively, to optimize the image quality of both parts simultaneously. Results: Examination of the MGIR algorithm showed that high-quality low-dose 3DCBCT and 4DCBCT images can be reconstructed without compromising image resolution, imaging dose or scanning time. For low-dose 3DCBCT, a clinically viable, high-resolution head-and-neck image can be obtained while cutting the dose by 83%. In 4DCBCT, excellent-quality 4DCBCT images could be reconstructed while requiring no more projection data and imaging dose than a typical clinical 3DCBCT scan. Conclusion: The results show that the image quality of MGIR is superior to other published CS-based IR algorithms for both 4DCBCT and low-dose 3DCBCT. This makes the MGIR algorithm potentially useful in various on-line clinical applications. Provisional Patent: UF#15476; WGS Ref. No. U1198.70067US00.

  8. Ultrafast and scalable cone-beam CT reconstruction using MapReduce in a cloud computing environment.

    PubMed

    Meng, Bowen; Pratx, Guillem; Xing, Lei

    2011-12-01

    Four-dimensional CT (4DCT) and cone beam CT (CBCT) are widely used in radiation therapy for accurate tumor target definition and localization. However, high-resolution and dynamic image reconstruction is computationally demanding because of the large amount of data processed. Efficient use of these imaging techniques in the clinic requires high-performance computing. The purpose of this work is to develop a novel ultrafast, scalable and reliable image reconstruction technique for 4D CBCT/CT using a parallel computing framework called MapReduce. We show the utility of MapReduce for solving large-scale medical physics problems in a cloud computing environment. In this work, we accelerated the Feldkamp-Davis-Kress (FDK) algorithm by porting it to Hadoop, an open-source MapReduce implementation. Gated phases from a 4DCT scan were reconstructed independently. Following the MapReduce formalism, Map functions were used to filter and backproject subsets of projections, and a Reduce function to aggregate those partial backprojections into the whole volume, as sketched below. MapReduce automatically parallelized the reconstruction process on a large cluster of computer nodes. As a validation, reconstruction of a digital phantom and an acquired CatPhan 600 phantom was performed in a commercial cloud computing environment using the proposed 4D CBCT/CT reconstruction algorithm. The speedup of the reconstruction time is found to be roughly linear with the number of nodes employed. For instance, a speedup greater than 10 times was achieved using 200 nodes for all cases, compared to the same code executed on a single machine. Without modifying the code, faster reconstruction is readily achievable by allocating more nodes in the cloud computing environment. The root mean square error between the images obtained using MapReduce and a single-threaded reference implementation was on the order of 10^(-7). Our study also proved that cloud computing with MapReduce is fault tolerant: the reconstruction completed successfully with identical results even when half of the nodes were manually terminated in the middle of the process. An ultrafast, reliable and scalable 4D CBCT/CT reconstruction method was developed using the MapReduce framework. Unlike other parallel computing approaches, the parallelization and speedup required little modification of the original reconstruction code. MapReduce provides an efficient and fault tolerant means of solving large-scale computing problems in a cloud computing environment.
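
    The map/reduce decomposition described above can be sketched, purely illustratively and in plain Python rather than Hadoop, with a placeholder filtered-backprojection routine standing in for the FDK kernel of each Map task:

      import numpy as np
      from functools import reduce

      def map_backproject(projection_subset, fbp_partial, volume_shape):
          """Map step: filter and backproject one subset of projections into a partial volume."""
          return fbp_partial(projection_subset, volume_shape)   # fbp_partial is a placeholder FDK kernel

      def reduce_volumes(vol_a, vol_b):
          """Reduce step: partial backprojections simply add up into the full volume."""
          return vol_a + vol_b

      def reconstruct(projection_subsets, fbp_partial, volume_shape):
          partials = (map_backproject(s, fbp_partial, volume_shape) for s in projection_subsets)
          return reduce(reduce_volumes, partials, np.zeros(volume_shape))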

  9. Three Dimensional Reconstruction Workflows for Lost Cultural Heritage Monuments Exploiting Public Domain and Professional Photogrammetric Imagery

    NASA Astrophysics Data System (ADS)

    Wahbeh, W.; Nebiker, S.

    2017-08-01

    In our paper, we document experiments and results of image-based 3D reconstructions of famous heritage monuments which were recently damaged or completely destroyed by the so-called Islamic State in Syria and Iraq. The specific focus of our research is on the combined use of professional photogrammetric imagery and publicly available imagery from the web for optimal 3D reconstruction of those monuments. The investigated photogrammetric reconstruction techniques include automated bundle adjustment and dense multi-view 3D reconstruction using public-domain and professional imagery on the one hand, and interactive polygonal modelling based on projected panoramas on the other. Our investigations show that the combination of these two image-based modelling techniques delivers better results in terms of model completeness, level of detail and appearance.

  10. Visualization of Skin Perfusion by Indocyanine Green Fluorescence Angiography—A Feasibility Study

    PubMed Central

    Steinbacher, Johannes; Yoshimatsu, Hidehiko; Meng, Stefan; Hamscha, Ulrike M.; Chan, Chun-Sheng; Weninger, Wolfgang J.; Wu, Chieh-Tsai; Cheng, Ming-Huei

    2017-01-01

    Summary: Plastic and reconstructive surgery relies on the knowledge of angiosomes in the raising of microsurgical flaps. Growing interest in muscle-sparing perforator flaps calls for reliable methods to assess the clinical feasibility of new donor sites in anatomical studies. Several injection techniques are known for the evaluation of vascular territories. Indocyanine green–based fluorescence angiography has found wide application in the clinical assessment of tissue perfusion. In this article, the use of indocyanine green–based fluorescence angiography for the assessment of perforasomes in anatomical studies is described for the first time. PMID:29062637

  11. Augmented Performance Environment for Enhancing Interagency Coordination in Stability, Security, Transition, and Reconstruction (SSTR) Operations

    DTIC Science & Technology

    2009-02-01

    assessments and meeting rehearsal and individual learning materials • Specify the metrics to be used to capture the quality of interagency...government; • Improve security; and • Promote reconstruction (Barno, 2004; Dziedzic & Siedl, 2005; Center for Army Lessons Learned (CALL), 2007). The...their orientations and creating group-level, hierarchical orientations out of the aggregated individual orientations (Wan, Chiu, Peng, & Tam, 2007

  12. Charged-particle emission tomography.

    PubMed

    Ding, Yijun; Caucci, Luca; Barrett, Harrison H

    2017-06-01

    Conventional charged-particle imaging techniques - such as autoradiography - provide only two-dimensional (2D) ex vivo images of thin tissue slices. In order to get volumetric information, images of multiple thin slices are stacked. This process is time consuming and prone to distortions, as registration of the 2D images is required. We propose a direct three-dimensional (3D) autoradiography technique, which we call charged-particle emission tomography (CPET). This 3D imaging technique enables imaging of thick tissue sections, thus increasing laboratory throughput and eliminating distortions due to registration. CPET also has the potential to enable in vivo charged-particle imaging with a window chamber or an endoscope. Our approach to charged-particle emission tomography uses particle-processing detectors (PPDs) to estimate attributes of each detected particle. The attributes we estimate include location, direction of propagation, and/or the energy deposited in the detector. Estimated attributes are then fed into a reconstruction algorithm to reconstruct the 3D distribution of charged-particle-emitting radionuclides. Several setups to realize PPDs are designed. Reconstruction algorithms for CPET are developed. Reconstruction results from simulated data showed that a PPD enables CPET if the PPD measures more attributes than just the position of each detected particle. Experiments showed that a two-foil charged-particle detector is able to measure the position and direction of incident alpha particles. We proposed a new volumetric imaging technique for charged-particle-emitting radionuclides, which we have called charged-particle emission tomography (CPET). We also proposed a new class of charged-particle detectors, which we have called particle-processing detectors (PPDs). When a PPD is used to measure the direction and/or energy attributes along with the position attributes, CPET is feasible. © 2017 The Authors. Medical Physics published by Wiley Periodicals, Inc. on behalf of American Association of Physicists in Medicine.

  13. Standardization of Disposable Instruments in Microvascular Breast Reconstruction: A Case Study in Cost Reduction.

    PubMed

    Still, Brady R; Christianson, Laura W; Mhlaba, Julie M; O'Malley, Ian P; Song, David H; Langerman, Alexander J

    2017-02-01

    Background  A key avoidable expense in the surgical setting is the wastage of disposable surgical items, which are discarded after cases even if they go unused. A major contributor to the wastage of these items is the inaccuracy of surgeon preference cards, which are rarely examined or updated. The authors report the application of a novel technique called cost heatmapping to facilitate the standardization of preference cards for microvascular breast reconstruction. Methods  Preference card data were obtained for all surgeons performing microvascular breast reconstruction at the authors' institution. These data were visualized using the heatmap.2 function in the gplots package for R. The resulting cost heatmaps were shown to all surgeons performing microvascular breast reconstruction at our institution; each surgeon was asked to classify the items on the heatmap as "always needed," "sometimes needed," or "never needed." This feedback was used to generate a lean standardized preference card for all surgeons. This card was validated by all surgeons performing the case and by nursing leadership familiar with the supply needs of microvascular breast reconstruction before implementation. Cost savings associated with implementation were calculated. Results  Implementation of the preference card changes will lead to an estimated per annum savings of $17,981.20 and a per annum reduction of 1,693 individual items listed on preference cards. Conclusion  Cost heatmapping is a powerful tool for increasing surgeon awareness of cost and for facilitating the comparison and standardization of surgeon preference cards. Thieme Medical Publishers 333 Seventh Avenue, New York, NY 10001, USA.

  14. Characterization of photon-counting multislit breast tomosynthesis.

    PubMed

    Berggren, Karl; Cederström, Björn; Lundqvist, Mats; Fredenberg, Erik

    2018-02-01

    It has been shown that breast tomosynthesis may improve sensitivity and specificity compared to two-dimensional mammography, resulting in an increased detection rate of cancers or lowered call-back rates. The purpose of this study is to characterize a spectral photon-counting multislit breast tomosynthesis system that is able to do single-scan spectral imaging with multiple collimated x-ray beams. The system differs in many aspects from conventional tomosynthesis using energy-integrating flat-panel detectors. The investigated system was a prototype consisting of a dual-threshold photon-counting detector with 21 collimated line detectors scanning across the compressed breast. A review of the system is done in terms of detector, acquisition geometry, and reconstruction methods. Three reconstruction methods were used: simple back-projection, filtered back-projection, and an iterative algebraic reconstruction technique. The image quality was evaluated by measuring the modulation transfer function (MTF), normalized noise power spectrum, detective quantum efficiency (DQE), and artifact spread function (ASF) on reconstructed spectral tomosynthesis images for a total-energy bin (defined by a low-energy threshold calibrated to remove electronic noise) and for a high-energy bin (with a threshold calibrated to split the spectrum in roughly equal parts). Acquisition was performed using a 29 kVp W/Al x-ray spectrum at a 0.24 mGy exposure. The difference in MTF between the two energy bins was negligible, that is, there was no energy dependence of resolution. The MTF dropped to 50% at 1.5 lp/mm to 2.3 lp/mm in the scan direction and 2.4 lp/mm to 3.3 lp/mm in the slit direction, depending on the reconstruction method. The full width at half maximum of the ASF was found to range from 13.8 mm to 18.0 mm for the different reconstruction methods. The zero-frequency DQE of the system was found to be 0.72. The fraction of counts in the high-energy bin was measured to be 59% of the total detected spectrum. Scan times ranged from 4 s to 16.5 s depending on voltage and current settings. The characterized system generates spectral tomosynthesis images with a dual-energy photon-counting detector. Measurements show a high DQE, enabling high image quality at a low dose, which is beneficial for low-dose applications such as screening. The single-scan spectral images open the door to applications such as quantitative material decomposition and contrast-enhanced tomosynthesis. © 2017 American Association of Physicists in Medicine.

  15. Theory and Application of Auger and Photoelectron Diffraction and Holography

    NASA Astrophysics Data System (ADS)

    Chen, Xiang

    This dissertation addresses the theories and applications of three important surface analysis techniques: Auger electron diffraction (AED), x-ray photoelectron diffraction (XPD), and Auger and photoelectron holography. A full multiple-scattering scheme for the calculations of XPD, AED, and Kikuchi electron diffraction patterns from a surface cluster is described. It is used to simulate 64 eV M2,3VV and 913 eV L3VV AED patterns from Cu(001) surfaces, in order to test assertions in the literature that they are explicable by a classical "blocking" and channeling model. We find that this contention is not valid, and that only a quantum mechanical multiple-scattering calculation is able to simulate these patterns well. The same multiple-scattering simulation scheme is also used to investigate the anomalous phenomena of peak shifts off the forward-scattering directions in photoelectron diffraction patterns of Mg KLL (1180 eV) and O 1s (955 eV) from MgO(001) surfaces. These shifts are explained by calculations assuming a short electron mean free path. Similar simulations of XPD from a CoSi2(111) surface for Co-3p and Si-2p normal emission agree well with experimental diffraction patterns. A filtering process aimed at eliminating the self-interference effect in photoelectron holography is developed. A better reconstructed image from Si-2p XPD from a Si(001)(2×1) surface is seen at atomic resolution. A reconstruction algorithm which corrects for the anisotropic emitter waves as well as the anisotropic atomic scattering factors is used for holographic reconstruction from a Co-3p XPD pattern from a CoSi2 surface. This new algorithm considerably improves the reconstructed image. Finally, a new reconstruction algorithm called "atomic position recovery by iterative optimization of reconstructed intensities" (APRIORI), which takes account of the self-interference terms omitted by the other holographic algorithms, is developed. Tests on a Ni-C-O chain and a Si(111)(√3×√3)B surface suggest that this new method may overcome the twin image problem in the traditional holographic methods, reduce the artifacts in real space, and even separately identify the chemical species of the scatterers.

  16. Ab initio nanostructure determination

    NASA Astrophysics Data System (ADS)

    Gujarathi, Saurabh

    Reconstruction of complex structures is an inverse problem arising in virtually all areas of science and technology, from protein structure determination to bulk heterostructure solar cells and the structure of nanoparticles. This problem is cast as a complex network problem in which the edges of a network have weights equal to the Euclidean distance between their endpoints. A method, called Tribond, for the reconstruction of the locations of the nodes of the network given only the edge weights of the Euclidean network is presented. The timing results indicate that the algorithm's running time is a low-order polynomial in the number of nodes in the network in two dimensions. With this implementation, Euclidean networks of about one thousand nodes can be reconstructed in two dimensions in approximately twenty-four hours on a desktop computer. In three dimensions, the computational cost of the reconstruction is a higher-order polynomial in the number of nodes, and the reconstruction of small Euclidean networks in three dimensions is demonstrated. If a starting network of size five is assumed to be given, then for a network of size 100 the remaining reconstruction can be done in about two hours on a desktop computer. In situations where the data are less precise, modifications of the method may be necessary; these are discussed. A related problem in one dimension, known as the optimal Golomb ruler (OGR) problem, is also studied. A statistical physics Hamiltonian to describe the OGR problem is introduced, and the first-order phase transition from a symmetric low-constraint phase to a complex symmetry-broken phase at high constraint is studied. Despite the fact that the Hamiltonian is not disordered, the asymmetric phase is highly irregular, with geometric frustration. The phase diagram is obtained, and it is seen that even at a very low temperature T there is a phase transition at a finite, non-zero value of the constraint parameter γ/μ. Analytic calculations for the scaling of the density and free energy of the ruler are done and compared with those from the mean-field approach. A scaling law is also derived for the length of the OGR, which is consistent with the Erdős conjecture and with numerical results.
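
    The abstract does not spell out the Tribond algorithm itself; for orientation, the easier assigned variant of the problem, where the distance matrix is complete and labeled, can be solved by classical multidimensional scaling. The following is a minimal sketch of that simpler case only, not of Tribond, which works from the unassigned list of edge weights.

    ```python
    # Classical multidimensional scaling: recover 2-D coordinates (up to a rigid
    # motion and reflection) from a complete matrix of pairwise distances.
    import numpy as np

    def classical_mds(D, dim=2):
        n = D.shape[0]
        J = np.eye(n) - np.ones((n, n)) / n      # centering matrix
        B = -0.5 * J @ (D ** 2) @ J              # double-centered squared distances
        w, V = np.linalg.eigh(B)
        idx = np.argsort(w)[::-1][:dim]          # keep the largest eigenvalues
        return V[:, idx] * np.sqrt(np.maximum(w[idx], 0.0))

    rng = np.random.default_rng(0)
    X = rng.uniform(size=(50, 2))                # hypothetical node positions
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    X_rec = classical_mds(D)                     # matches X up to rotation/reflection/translation
    ```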

  17. Fast-SG: an alignment-free algorithm for hybrid assembly.

    PubMed

    Di Genova, Alex; Ruz, Gonzalo A; Sagot, Marie-France; Maass, Alejandro

    2018-05-01

    Long-read sequencing technologies are the ultimate solution for genome repeats, allowing near reference-level reconstructions of large genomes. However, long-read de novo assembly pipelines are computationally intense and require a considerable amount of coverage, thereby hindering their broad application to the assembly of large genomes. Alternatively, hybrid assembly methods that combine short- and long-read sequencing technologies can reduce the time and cost required to produce de novo assemblies of large genomes. Here, we propose a new method, called Fast-SG, that uses a new ultrafast alignment-free algorithm specifically designed for constructing a scaffolding graph using light-weight data structures. Fast-SG can construct the graph from either short or long reads. This allows the reuse of efficient algorithms designed for short-read data and permits the definition of novel modular hybrid assembly pipelines. Using comprehensive standard datasets and benchmarks, we show how Fast-SG outperforms the state-of-the-art short-read aligners when building the scaffolding graph and can be used to extract linking information from either raw or error-corrected long reads. We also show how a hybrid assembly approach using Fast-SG with shallow long-read coverage (5X) and moderate computational resources can produce long-range and accurate reconstructions of the genomes of Arabidopsis thaliana (Ler-0) and human (NA12878). Fast-SG opens the door to achieving accurate hybrid long-range reconstructions of large genomes with low effort, high portability, and low cost.

  18. Anatomic Double Bundle single tunnel Foreign Material Free ACL-Reconstruction – a technical note

    PubMed Central

    Felmet, Gernot

    2011-01-01

    Summary The anterior cruciate ligament (ACL) consists of two bundles, the anteromedial (AM) and the posterolateral (PL) bundle. Double bundle reconstructions appear to give better rotational stability. The usual technique is to make two tunnels in the femur and two in the tibia. This is difficult and in small knees may not even be possible. We have developed a foreign-material-free press-fit fixation for double bundle ACL reconstruction using a single femoral tunnel (R). This is based on the ALL PRESS FIT ACL reconstruction. It is suitable for the most common medium-sized knees and for the otherwise difficult small knees. Method: Using diamond-edged wet-grinding hollow reamers, bone cylinders of different diameters are harvested from the implantation tunnels of the tibia and femur and used for the press-fit fixation. Using the press-fit technique, the graft is first fixed in the tibia. It is then similarly fixed under tension on the femoral side with the knee in 120 degrees of flexion. This is called Bottom To Top Fixation (BTT). On extending the knee, the graft tension is self-adapting. Depending on the size of the individual knee, the diameter of the femoral bone plug is varied from 8 to 13 mm to achieve an anatomic spread with a double bundle-like insertion. The tibial tunnel can be created from two overlapping tunnels of 7 or 8 mm diameter, forming a semi-oval tunnel of 10 to 13 mm. Results: Since May 2003 we have carried out ACL reconstructions with hamstring grafts without foreign material using the ALL PRESS FIT technique. Initially, an 8 mm press-fit fixation was used proximally with good results. Since April 2008, the range of diameters has been increased up to 13 mm. The results of the Lachman tests have been good to excellent. Results of the pivot shift test suggested more stability with broader femoral diameters of 9.5 to 13 mm. Conclusions: The foreign-material-free fixation of hamstring grafts with the ALL PRESS FIT Bottom To Top Fixation is a successful method for ACL reconstruction. The diamond instruments and tubed guiding devices are precise, reliable and easy to manage. On this basis a double bundle reconstruction is achieved using a single tunnel. A broad anatomic femoral insertion with autogenous bone plugs inserted near the cortex seems to improve rotational stability. PMID:23738263

  19. How Are We Measuring Patient Satisfaction After Anterior Cruciate Ligament Reconstruction?

    PubMed

    Kahlenberg, Cynthia A; Nwachukwu, Benedict U; Ferraro, Richard A; Schairer, William W; Steinhaus, Michael E; Allen, Answorth A

    2016-12-01

    Reconstruction of the anterior cruciate ligament (ACL) is one of the most common orthopaedic operations in the United States. The long-term impact of ACL reconstruction is controversial, however, as longer term data have failed to demonstrate that ACL reconstruction helps alter the natural history of early onset osteoarthritis that occurs after ACL injury. There is significant interest in evaluating the value of ACL reconstruction surgeries. To examine the quality of patient satisfaction reporting after ACL reconstruction surgery. Systematic review; Level of evidence, 4. A systematic review of the MEDLINE database was performed using the PubMed interface. The Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines as well as the PRISMA checklist were employed. The initial search yielded 267 studies. The inclusion criteria were: English language, US patient population, clinical outcome study of ACL reconstruction surgery, and reporting of patient satisfaction included in the study. Study quality was assessed using the Newcastle-Ottawa scale. A total of 22 studies met the inclusion criteria. These studies comprised a total of 1984 patients with a mean age of 31.9 years at the time of surgery and a mean follow-up period of 59.3 months. The majority of studies were evidence level 4 (n = 18; 81.8%), had a mean Newcastle-Ottawa scale score of 5.5, and were published before 2006 (n = 17; 77.3%); 5 studies (22.7%) failed to clearly describe their method for determining patient satisfaction. The most commonly used method for assessing satisfaction was a 0 to 10 satisfaction scale (n = 11; 50.0%). Among studies using a 0 to 10 scale, mean satisfaction ranged from 7.4 to 10.0. Patient-reported outcome and objective functional measures for ACL stability and knee function were positively correlated with patient satisfaction. Degenerative knee change was negatively correlated with satisfaction. The level of evidence for studies reporting patient satisfaction is low, and the methodologies for reporting patient satisfaction are variable. Additionally, within the past decade there has been a significant decline in the inclusion of this outcome measure within published ACL studies. As sports surgeons are increasingly called on to demonstrate the value of operative procedures, attention should be paid to understanding and reporting patient satisfaction.

  20. Automatic alignment for three-dimensional tomographic reconstruction

    NASA Astrophysics Data System (ADS)

    van Leeuwen, Tristan; Maretzke, Simon; Joost Batenburg, K.

    2018-02-01

    In tomographic reconstruction, the goal is to reconstruct an unknown object from a collection of line integrals. Given a complete sampling of such line integrals for various angles and directions, explicit inverse formulas exist to reconstruct the object. Given noisy and incomplete measurements, the inverse problem is typically solved through a regularized least-squares approach. A challenge for both approaches is that in practice the exact directions and offsets of the x-rays are only known approximately due to, e.g. calibration errors. Such errors lead to artifacts in the reconstructed image. In the case of sufficient sampling and geometrically simple misalignment, the measurements can be corrected by exploiting so-called consistency conditions. In other cases, such conditions may not apply and we have to solve an additional inverse problem to retrieve the angles and shifts. In this paper we propose a general algorithmic framework for retrieving these parameters in conjunction with an algebraic reconstruction technique. The proposed approach is illustrated by numerical examples for both simulated data and an electron tomography dataset.

  1. Optimal resolution in maximum entropy image reconstruction from projections with multigrid acceleration

    NASA Technical Reports Server (NTRS)

    Limber, Mark A.; Manteuffel, Thomas A.; Mccormick, Stephen F.; Sholl, David S.

    1993-01-01

    We consider the problem of image reconstruction from a finite number of projections over the space L¹(Ω), where Ω is a compact subset of ℝ². We prove that, given a discretization of the projection space, the function that generates the correct projection data and maximizes the Boltzmann-Shannon entropy is piecewise constant on a certain discretization of Ω, which we call the 'optimal grid'. It is on this grid that one obtains the maximum resolution given the problem setup. The size of this grid grows very quickly as the number of projections and number of cells per projection grow, indicating fast computational methods are essential to make its use feasible. We use a Fenchel duality formulation of the problem to keep the number of variables small while still using the optimal discretization, and propose a multilevel scheme to improve convergence of a simple cyclic maximization scheme applied to the dual problem.
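
    Schematically, and in notation of our own rather than the paper's, the optimization problem being described is the entropy maximization

    ```latex
    % f is the image over the domain Omega, R the discretized projection
    % operator, and b_i the measured projection data (schematic form only).
    \max_{f \ge 0} \; -\int_{\Omega} f(x)\,\log f(x)\, dx
    \qquad \text{subject to} \qquad (R f)_i = b_i,\; i = 1,\dots,m ,
    ```

    whose Fenchel dual carries one variable per measurement rather than one per grid cell, which is what keeps the variable count small even on the fine optimal grid.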

  2. Detection and measurement of the intracellular calcium variation in follicular cells.

    PubMed

    Herrera-Navarro, Ana M; Terol-Villalobos, Iván R; Jiménez-Hernández, Hugo; Peregrina-Barreto, Hayde; Gonzalez-Barboza, José-Joel

    2014-01-01

    This work presents a new method for measuring the variation of intracellular calcium in follicular cells. The proposal consists of two stages: (i) the detection of the cells' nuclei and (ii) the analysis of the fluorescence variations. The first stage is performed via a modified watershed transformation, in which the labeling process is controlled. The detection process uses the contours of the cells as descriptors, and these are enhanced with a morphological filter that homogenizes the luminance variation of the image. In the second stage, the fluorescence variations are modeled as an exponentially decreasing function, where the fluorescence variations are highly correlated with the changes of intracellular free Ca(2+). Additionally, a new morphological filter called the medium reconstruction process is introduced, which helps to enhance the data for the modeling process. This filter exploits the undermodeling and overmodeling properties of reconstruction operators, such that it preserves the structure of the original signal. Finally, an experimental process shows evidence of the capabilities of the proposal.

  3. Detection and Measurement of the Intracellular Calcium Variation in Follicular Cells

    PubMed Central

    Herrera-Navarro, Ana M.; Terol-Villalobos, Iván R.; Jiménez-Hernández, Hugo; Peregrina-Barreto, Hayde; Gonzalez-Barboza, José-Joel

    2014-01-01

    This work presents a new method for measuring the variation of intracellular calcium in follicular cells. The proposal consists of two stages: (i) the detection of the cells' nuclei and (ii) the analysis of the fluorescence variations. The first stage is performed via a modified watershed transformation, in which the labeling process is controlled. The detection process uses the contours of the cells as descriptors, and these are enhanced with a morphological filter that homogenizes the luminance variation of the image. In the second stage, the fluorescence variations are modeled as an exponentially decreasing function, where the fluorescence variations are highly correlated with the changes of intracellular free Ca2+. Additionally, a new morphological filter called the medium reconstruction process is introduced, which helps to enhance the data for the modeling process. This filter exploits the undermodeling and overmodeling properties of reconstruction operators, such that it preserves the structure of the original signal. Finally, an experimental process shows evidence of the capabilities of the proposal. PMID:25342958

  4. 3D virtual character reconstruction from projections: a NURBS-based approach

    NASA Astrophysics Data System (ADS)

    Triki, Olfa; Zaharia, Titus B.; Preteux, Francoise J.

    2004-05-01

    This work has been carried out within the framework of the industrial project TOON, supported by the French government. TOON aims at developing tools for automating traditional 2D cartoon content production. This paper presents preliminary results of the TOON platform. The proposed methodology concerns the issues of 2D/3D reconstruction from a limited number of drawn projections, and of 2D/3D manipulation/deformation/refinement of virtual characters. Specifically, we show that the NURBS-based modeling approach developed here offers a well-suited framework for generating deformable 3D virtual characters from incomplete 2D information. Furthermore, crucial functionalities such as animation and non-rigid deformation can also be efficiently handled and solved. Note that user interaction is enabled exclusively in 2D by means of a multiview constraint specification method. This is fully consistent and compliant with cartoon creators' traditional practice and makes it possible to avoid the use of 3D modeling software packages, which are generally complex to manipulate.

  5. Reconstruction of 7T-Like Images From 3T MRI

    PubMed Central

    Bahrami, Khosro; Shi, Feng; Zong, Xiaopeng; Shin, Hae Won; An, Hongyu

    2016-01-01

    Ultra-high-field (7T) MR imaging provides higher resolution and better tissue contrast than routine 3T MRI, which may help in earlier and more accurate diagnosis of brain diseases. However, 7T MRI scanners are currently more expensive and less available at clinical and research centers. This motivates us to propose a method for reconstructing, from 3T MRI, images close to the quality of 7T MRI, called 7T-like images, to improve the quality in terms of resolution and contrast. By doing so, post-processing tasks, such as tissue segmentation, can be done more accurately, and brain tissue details can be seen with higher resolution and contrast. To do this, we have acquired a unique dataset which includes paired 3T and 7T images scanned from the same subjects, and we then propose a hierarchical reconstruction based on group sparsity in a novel multi-level Canonical Correlation Analysis (CCA) space, to improve the quality of a 3T MR image toward 7T-like MRI. First, overlapping patches are extracted from the input 3T MR image. Then, by extracting the most similar patches from all the aligned 3T and 7T images in the training set, paired 3T and 7T dictionaries are constructed for each patch. It is worth noting that, for training, we use pairs of 3T and 7T MR images from each training subject. Then, we propose multi-level CCA to map the paired 3T and 7T patch sets to a common space to increase their correlations. In this space, each input 3T MRI patch is sparsely represented by the 3T dictionary, and the obtained sparse coefficients are then used together with the corresponding 7T dictionary to reconstruct the 7T-like patch. Also, to maintain structural consistency between adjacent patches, group sparsity is employed. This reconstruction is performed with changing patch sizes in a hierarchical framework. Experiments have been done using 13 subjects with both 3T and 7T MR images. The results show that our method outperforms previous methods and is able to recover better structural details. Also, to place our proposed method in a medical application context, we evaluated the influence of post-processing methods such as brain tissue segmentation on the reconstructed 7T-like MR images. Results show that our 7T-like images lead to higher accuracy in segmentation of white matter (WM), gray matter (GM), cerebrospinal fluid (CSF), and skull, compared to segmentation of 3T MR images. PMID:27046894
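
    As a rough sketch of the coupled-dictionary step described above (leaving out the multi-level CCA mapping, the group-sparsity term, and the hierarchical patch sizes), one could sparse-code a 3T patch in the 3T dictionary and then apply the same coefficients to the paired 7T dictionary. The dictionaries and patch sizes below are random placeholders, not data from the paper.

    ```python
    # Coupled-dictionary sparse coding sketch: code a 3T patch with the 3T
    # dictionary, then rebuild the 7T-like patch with the paired 7T dictionary.
    import numpy as np
    from sklearn.decomposition import SparseCoder

    rng = np.random.default_rng(6)
    n_atoms, patch_dim = 64, 125                       # hypothetical sizes (e.g. 5x5x5 patches)
    D3 = rng.standard_normal((n_atoms, patch_dim))     # paired dictionaries (rows are atoms)
    D7 = rng.standard_normal((n_atoms, patch_dim))
    D3 /= np.linalg.norm(D3, axis=1, keepdims=True)

    coder = SparseCoder(dictionary=D3, transform_algorithm="omp",
                        transform_n_nonzero_coefs=5)
    patch_3t = rng.standard_normal((1, patch_dim))     # stand-in for a real 3T patch
    alpha = coder.transform(patch_3t)                  # sparse coefficients in the 3T dictionary
    patch_7t_like = alpha @ D7                         # 7T-like patch from the paired dictionary
    ```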

  6. Spectral Retrieval of Latent Heating Profiles from TRMM PR data. Part 3; Moistening Estimates over Tropical Ocean Regions

    NASA Technical Reports Server (NTRS)

    Shige, S.; Takayabu, Y.; Tao, W.-K.

    2007-01-01

    The global hydrological cycle is central to the Earth's climate system, with rainfall and the physics of precipitation formation acting as the key links in the cycle. Two-thirds of global rainfall occurs in the tropics, with the associated latent heating (LH) accounting for three-fourths of the total heat energy available to the Earth's atmosphere. In the last decade, it has been established that standard products of LH from satellite measurements, particularly TRMM measurements, would be a valuable resource for scientific research and applications. Such products would enable new insights and investigations concerning the complexities of convection system life cycles, the diabatic heating controls and feedbacks related to mesosynoptic circulations and their forecasting, the relationship of tropical patterns of LH to the global circulation and climate, and strategies for improving cloud parameterizations in environmental prediction models. However, LH is closely related to the water vapor profile or budget (called the apparent moisture sink, or Q2). This paper presented the development of an algorithm for retrieving Q2 using the TRMM precipitation radar. Since there is no direct measurement of LH and Q2, the validation of the algorithm usually applies a method called the consistency check. Consistency checking, which compares Cloud Resolving Model (CRM)-generated LH and Q2 profiles with the algorithm-reconstructed ones, is a useful step in evaluating the performance of a given algorithm. In this process, the CRM simulation of a time-dependent precipitation process (a multiple-day time series) is used to obtain the required input parameters for a given algorithm. The algorithm is then used to reconstruct the heating and moisture profiles that the CRM simulation originally produced, and finally both sets of estimates (model and algorithm) are compared with each other. The results indicate that discrepancies between the reconstructed and CRM-simulated profiles for Q2, especially at low levels, are larger than those for latent heat. The larger discrepancies in Q2 at low levels are due to moistening in non-precipitating regions, which the algorithm cannot reconstruct. Nevertheless, the algorithm-reconstructed total Q2 profiles are in good agreement with the CRM-simulated ones.

  7. Reconstruction of methods of execution of the death penalty by shooting in the years 1949-1954 based on exhumation research of "prison fields" in Osobowicki Cemetery in Wroclaw. Part I--Historical outline and results of research conducted prior to exhumations performed in 2011.

    PubMed

    Szleszkowski, Łukasz

    2012-01-01

    In the period between October and December 2011, a series of exhumation research of the so-called prison quarters dating back to 1949-1954 was conducted in Osobowicki Cemetery in Wrocław. Among the buried there were political prisoners executed by shooting--genuine or alleged members of post-war independence organizations. It was a unique opportunity to determine the method of execution of the death penalty in that period because, according to historical data and the results of two test exhumations, this method considerably differed from instructions on the use of a firing squad during execution of the death penalty.

  8. Reconstruction-Based Digital Dental Occlusion of the Partially Edentulous Dentition.

    PubMed

    Zhang, Jian; Xia, James J; Li, Jianfu; Zhou, Xiaobo

    2017-01-01

    Partially edentulous dentition presents a challenging problem for the surgical planning of digital dental occlusion in the field of craniomaxillofacial surgery because of the incorrect maxillomandibular distance caused by missing teeth. We propose an innovative approach called Dental Reconstruction with Symmetrical Teeth (DRST) to achieve accurate dental occlusion for the partially edentulous cases. In this DRST approach, the rigid transformation between two symmetrical teeth existing on the left and right dental model is estimated through probabilistic point registration by matching the two shapes. With the estimated transformation, the partially edentulous space can be virtually filled with the teeth in its symmetrical position. Dental alignment is performed by digital dental occlusion reestablishment algorithm with the reconstructed complete dental model. Satisfactory reconstruction and occlusion results are demonstrated with the synthetic and real partially edentulous models.
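
    The registration step is probabilistic in the paper; as a simplified sketch of the underlying geometry only, the rigid transform that maps a mirrored tooth onto its contralateral counterpart can be estimated with the Kabsch/Procrustes solution when point correspondences are assumed known. Everything below (point clouds, mirror plane, transform) is synthetic.

    ```python
    # Kabsch/Procrustes sketch: rigid (R, t) aligning a mirrored tooth point
    # cloud to its contralateral counterpart, assuming known correspondences.
    import numpy as np

    def rigid_align(P, Q):
        """Return R, t minimizing sum ||R p_i + t - q_i||^2 for N x 3 point sets."""
        cp, cq = P.mean(0), Q.mean(0)
        H = (P - cp).T @ (Q - cq)                            # cross-covariance
        U, _, Vt = np.linalg.svd(H)
        d = np.sign(np.linalg.det(Vt.T @ U.T))               # avoid an improper rotation
        R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
        return R, cq - R @ cp

    rng = np.random.default_rng(5)
    left = rng.random((200, 3))                              # synthetic left tooth
    mirrored = left * np.array([-1.0, 1.0, 1.0])             # reflect across the sagittal plane
    a = 0.2
    Rz = np.array([[np.cos(a), -np.sin(a), 0.0],
                   [np.sin(a),  np.cos(a), 0.0],
                   [0.0, 0.0, 1.0]])
    right = mirrored @ Rz.T + np.array([5.0, 0.0, 0.0])      # synthetic contralateral tooth
    R, t = rigid_align(mirrored, right)                      # recovers Rz and the offset
    ```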

  9. Joint sparse reconstruction of multi-contrast MRI images with graph based redundant wavelet transform.

    PubMed

    Lai, Zongying; Zhang, Xinlin; Guo, Di; Du, Xiaofeng; Yang, Yonggui; Guo, Gang; Chen, Zhong; Qu, Xiaobo

    2018-05-03

    Multi-contrast images in magnetic resonance imaging (MRI) provide abundant contrast information reflecting the characteristics of the internal tissues of human bodies, and thus have been widely utilized in clinical diagnosis. However, long acquisition times limit the application of multi-contrast MRI. One efficient way to accelerate data acquisition is to under-sample the k-space data and then reconstruct images with a sparsity constraint. However, image quality is compromised at high acceleration factors if the images are reconstructed individually. We aim to improve the images with a jointly sparse reconstruction and a graph-based redundant wavelet transform (GBRWT). First, a sparsifying transform, GBRWT, is trained to reflect the similarity of tissue structures in multi-contrast images. Second, joint multi-contrast image reconstruction is formulated as an ℓ2,1-norm optimization problem under GBRWT representations. Third, the optimization problem is numerically solved using a derived alternating direction method. Experimental results on synthetic and in vivo MRI data demonstrate that the proposed joint reconstruction method can achieve lower reconstruction errors and better preserve image structures than the compared joint reconstruction methods. Besides, the proposed method outperforms single-image reconstruction, owing to the joint sparsity constraint across the multi-contrast images. The proposed method explores the joint sparsity of multi-contrast MRI images under the graph-based redundant wavelet transform and realizes joint sparse reconstruction of multi-contrast images. Experiments demonstrate that the proposed method outperforms the compared joint reconstruction methods as well as individual reconstructions. With this high-quality image reconstruction method, it is possible to achieve high acceleration factors by exploiting the complementary information provided by multi-contrast MRI.
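
    The coupling across contrasts enters through the ℓ2,1 norm, whose proximal operator is a row-wise (group) soft-thresholding: rows that stack one coefficient position across all contrasts are shrunk jointly. A minimal generic sketch of that operator follows; it is not the paper's full alternating-direction solver.

    ```python
    # Proximal operator of lam * ||W||_{2,1}: each row of W holds one transform
    # coefficient position across the multi-contrast images; rows with small
    # joint energy are set to zero, the rest are shrunk toward zero.
    import numpy as np

    def prox_l21(W, lam):
        row_norm = np.linalg.norm(W, axis=1, keepdims=True)
        scale = np.maximum(1.0 - lam / np.maximum(row_norm, 1e-12), 0.0)
        return scale * W

    W = np.random.randn(1000, 3)      # 1000 coefficient positions, 3 contrasts
    W_shrunk = prox_l21(W, lam=0.5)
    ```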

  10. Advanced prior modeling for 3D bright field electron tomography

    NASA Astrophysics Data System (ADS)

    Sreehari, Suhas; Venkatakrishnan, S. V.; Drummy, Lawrence F.; Simmons, Jeffrey P.; Bouman, Charles A.

    2015-03-01

    Many important imaging problems in material science involve reconstruction of images containing repetitive non-local structures. Model-based iterative reconstruction (MBIR) could in principle exploit such redundancies through the selection of a log prior probability term. However, in practice, determining such a log prior term that accounts for the similarity between distant structures in the image is quite challenging. Much progress has been made in the development of denoising algorithms like non-local means and BM3D, and these are known to successfully capture non-local redundancies in images. But the fact that these denoising operations are not explicitly formulated as cost functions makes it unclear as to how to incorporate them in the MBIR framework. In this paper, we formulate a solution to bright field electron tomography by augmenting the existing bright field MBIR method to incorporate any non-local denoising operator as a prior model. We accomplish this using a framework we call plug-and-play priors that decouples the log likelihood and the log prior probability terms in the MBIR cost function. We specifically use 3D non-local means (NLM) as the prior model in the plug-and-play framework, and showcase high quality tomographic reconstructions of a simulated aluminum spheres dataset, and two real datasets of aluminum spheres and ferritin structures. We observe that streak and smear artifacts are visibly suppressed, and that edges are preserved. Also, we report lower RMSE values compared to the conventional MBIR reconstruction using qGGMRF as the prior model.
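
    The plug-and-play construction replaces the prior's proximal step in an ADMM-style splitting with a call to an arbitrary denoiser. A minimal generic sketch of that loop is given below; the dense forward matrix, the moving-average "denoiser", and all parameters are toy placeholders, not the paper's bright-field MBIR implementation (which uses a tomographic forward model and denoisers such as 3D non-local means).

    ```python
    # Plug-and-play ADMM sketch: data term handled by a least-squares proximal
    # step, prior term handled by an arbitrary denoiser D(.).
    import numpy as np

    def pnp_admm(A, y, denoise, rho=1.0, n_iter=50):
        n = A.shape[1]
        x, v, u = np.zeros(n), np.zeros(n), np.zeros(n)
        Aty = A.T @ y
        H = np.linalg.inv(A.T @ A + rho * np.eye(n))   # fine for toy sizes; use CG in practice
        for _ in range(n_iter):
            x = H @ (Aty + rho * (v - u))              # proximal step for the data-fit term
            v = denoise(x + u)                         # "prior" step: any denoiser (NLM, BM3D, ...)
            u = u + x - v                              # dual update
        return x

    rng = np.random.default_rng(1)
    A = rng.standard_normal((80, 120))
    x_true = np.zeros(120); x_true[30:60] = 1.0
    y = A @ x_true + 0.05 * rng.standard_normal(80)
    smooth = lambda z: np.convolve(z, np.ones(5) / 5, mode="same")  # toy denoiser
    x_hat = pnp_admm(A, y, smooth)
    ```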

  11. The SF3M approach to 3-D photo-reconstruction for non-expert users: application to a gully network

    NASA Astrophysics Data System (ADS)

    Castillo, C.; James, M. R.; Redel-Macías, M. D.; Pérez, R.; Gómez, J. A.

    2015-04-01

    3-D photo-reconstruction (PR) techniques have been successfully used to produce high resolution elevation models for different applications and over different spatial scales. However, innovative approaches are required to overcome some limitations that this technique may present in challenging scenarios. Here, we evaluate SF3M, a new graphical user interface for implementing a complete PR workflow based on freely available software (including external calls to VisualSFM and CloudCompare), in combination with a low-cost survey design for the reconstruction of a several-hundred-meters-long gully network. SF3M provided a semi-automated workflow for 3-D reconstruction requiring ~ 49 h (of which only 17% required operator assistance) for obtaining a final gully network model of > 17 million points over a gully plan area of 4230 m2. We show that a walking itinerary along the gully perimeter using two light-weight automatic cameras (1 s time-lapse mode) and a 6 m-long pole is an efficient method for 3-D monitoring of gullies, at a low cost (about EUR 1000 budget for the field equipment) and time requirements (~ 90 min for image collection). A mean error of 6.9 cm at the ground control points was found, mainly due to model deformations derived from the linear geometry of the gully and residual errors in camera calibration. The straightforward image collection and processing approach can be of great benefit for non-expert users working on gully erosion assessment.

  12. Polyenergetic known-component CT reconstruction with unknown material compositions and unknown x-ray spectra

    NASA Astrophysics Data System (ADS)

    Xu, S.; Uneri, A.; Khanna, A. Jay; Siewerdsen, J. H.; Stayman, J. W.

    2017-04-01

    Metal artifacts can cause substantial image quality issues in computed tomography. This is particularly true in interventional imaging, where surgical tools or metal implants are in the field of view. Moreover, the region of interest is often near such devices, which is exactly where image quality degradations are largest. Previous work on known-component reconstruction (KCR) has shown that the incorporation of a physical model (e.g., shape, material composition) of the metal component into the reconstruction algorithm can significantly reduce artifacts even near the edge of a metal component. However, for such approaches to be effective, they must have an accurate model of the component that includes energy-dependent properties of both the metal device and the CT scanner, placing a burden on system characterization and component material knowledge. In this work, we propose a modified KCR approach that adopts a mixed forward model with a polyenergetic model for the component and a monoenergetic model for the background anatomy. This new approach, called Poly-KCR, jointly estimates a spectral transfer function associated with known components in addition to the background attenuation values. Thus, this approach eliminates both the need to know the component material composition a priori and the requirement for an energy-dependent characterization of the CT scanner. We demonstrate the efficacy of this novel approach and illustrate its improved performance over traditional and model-based iterative reconstruction methods in both simulation studies and physical data, including an implanted cadaver sample.

  13. Scatter correction using a primary modulator on a clinical angiography C-arm CT system.

    PubMed

    Bier, Bastian; Berger, Martin; Maier, Andreas; Kachelrieß, Marc; Ritschl, Ludwig; Müller, Kerstin; Choi, Jang-Hwan; Fahrig, Rebecca

    2017-09-01

    Cone beam computed tomography (CBCT) suffers from a large amount of scatter, resulting in severe scatter artifacts in the reconstructions. Recently, a new scatter correction approach, called improved primary modulator scatter estimation (iPMSE), was introduced. That approach utilizes a primary modulator that is inserted between the X-ray source and the object. This modulation enables estimation of the scatter in the projection domain by optimizing an objective function with respect to the scatter estimate. Up to now the approach has not been implemented on a clinical angiography C-arm CT system. In our work, the iPMSE method is transferred to a clinical C-arm CBCT. Additional processing steps are added in order to compensate for the C-arm scanner motion and the automatic X-ray tube current modulation. These challenges were overcome by establishing a reference modulator database and a block-matching algorithm. Experiments with phantom and experimental in vivo data were performed to evaluate the method. We show that scatter correction using primary modulation is possible on a clinical C-arm CBCT. Scatter artifacts in the reconstructions are reduced with the newly extended method. Compared to a scan with a narrow collimation, our approach showed superior results with an improvement of the contrast and the contrast-to-noise ratio for the phantom experiments. In vivo data are evaluated by comparing the results with a scan with a narrow collimation and with a constant scatter correction approach. Scatter correction using primary modulation is possible on a clinical CBCT by compensating for the scanner motion and the tube current modulation. Scatter artifacts could be reduced in the reconstructions of phantom scans and in experimental in vivo data. © 2017 American Association of Physicists in Medicine.

  14. A comparison of manual neuronal reconstruction from biocytin histology or 2-photon imaging: morphometry and computer modeling

    PubMed Central

    Blackman, Arne V.; Grabuschnig, Stefan; Legenstein, Robert; Sjöström, P. Jesper

    2014-01-01

    Accurate 3D reconstruction of neurons is vital for applications linking anatomy and physiology. Reconstructions are typically created using Neurolucida after biocytin histology (BH). An alternative inexpensive and fast method is to use freeware such as Neuromantic to reconstruct from fluorescence imaging (FI) stacks acquired using 2-photon laser-scanning microscopy during physiological recording. We compare these two methods with respect to morphometry, cell classification, and multicompartmental modeling in the NEURON simulation environment. Quantitative morphological analysis of the same cells reconstructed using both methods reveals that whilst biocytin reconstructions facilitate tracing of more distal collaterals, both methods are comparable in representing the overall morphology: automated clustering of reconstructions from both methods successfully separates neocortical basket cells from pyramidal cells but not BH from FI reconstructions. BH reconstructions suffer more from tissue shrinkage and compression artifacts than FI reconstructions do. FI reconstructions, on the other hand, consistently have larger process diameters. Consequently, significant differences in NEURON modeling of excitatory post-synaptic potential (EPSP) forward propagation are seen between the two methods, with FI reconstructions exhibiting smaller depolarizations. Simulated action potential backpropagation (bAP), however, is indistinguishable between reconstructions obtained with the two methods. In our hands, BH reconstructions are necessary for NEURON modeling and detailed morphological tracing, and thus remain state of the art, although they are more labor intensive, more expensive, and suffer from a higher failure rate due to the occasional poor outcome of histological processing. However, for a subset of anatomical applications such as cell type identification, FI reconstructions are superior, because of indistinguishable classification performance with greater ease of use, essentially 100% success rate, and lower cost. PMID:25071470

  15. Palaeoclimate reconstruction within the upper Eocene in central Germany using fossil plants

    NASA Astrophysics Data System (ADS)

    Moraweck, Karolin; Kunzmann, Lutz; Uhl, Dieter; Kleber, Arno

    2013-04-01

    The Eocene has commonly been called "the world's last greenhouse period", covering the Paleocene-Eocene Thermal Maximum (PETM) as well as the Eocene-Oligocene turnover. In the mid-latitudes of Europe this turnover was characterized by pronounced climatic changes from subtropical towards temperate conditions that were accompanied by significant vegetational changes on land. Fossil plants are regarded as excellent palaeoenvironmental proxies, because leaf physiognomy often reflects climate conditions. The study site, the Paleogene Weißelster basin in central Germany, which includes fluvial, estuarine and lacustrine deposits, provides several excellently preserved megafloras for reconstructions of terrestrial palaeoclimate. For our case study we used material from different stratigraphic horizons within the late Eocene Zeitz megafloral assemblage recovered from the open-cast mines of Profen and Schleenhain. These horizons cover a time interval of ca. 3 Ma. The Zeitz megafloral assemblage ("Florenkomplex") was characterized by mainly evergreen, notophyllous vegetation, consisting of warm-temperate to subtropical elements. Tropical species are present but very rare. To infer the regional climatic conditions and putative climate changes from these fossil plants, we compare proxy data obtained by the application of standard methods for quantitative reconstruction of palaeoclimate data: the coexistence approach (CA), leaf margin analysis (LMA) and the Climate Leaf Analysis Multivariate Program (CLAMP). Before the CA was applied to the material, the list of putative nearest living relative species (NLR) was carefully revisited and partly revised. In the case of the LMA approach, information on so-called "silent taxa" (fossil species preserved as diaspores, whose leaf margin state is inferred from NLR data) was partly included in the data set. The four floras from the Zeitz megafloral assemblage show slightly different floral compositions caused by various taphonomic processes. One aim of the investigations was to test whether or not these differences lead to differences in calculated mean annual temperatures (MAT). The MATs calculated by LMA for the four sites differ markedly depending on whether "silent taxa", when present, are incorporated. MAT based on leaf remains only is often higher, because of the overrepresentation of laurophyllous entire-margined leaves in the respective taphocoenoses. Inclusion of "silent taxa", which often represent species with un-toothed leaves, significantly decreases the calculated MAT. It is expected that CLAMP and CA will render more reliable results, which will be part of the discussion. The contribution will also focus on problems in the use of leaf physiognomy as a palaeoclimatic proxy and on the comparison of results obtained from a single plant taphocoenosis using different methods for quantitative reconstruction of MAT.

  16. USAID Spent Almost $400 Million on an Afghan Stabilization Project despite Uncertain Results, but Has Taken Steps to Better Assess Similar Efforts

    DTIC Science & Technology

    2012-04-25

    totaling $151 million for a program called Stabilization in Key Areas ( SIKA ). This report assesses (1) the cost and outcomes of the LGCD project and...services to be provided, SIGAR recommends that the Mission Director, USAID/Afghanistan, direct contracting officers to ensure that the SIKA contracts...Provincial Reconstruction Team RFTOP Request for Task Order Proposal SIGAR Special Inspector General for Afghanistan Reconstruction SIKA Stabilization

  17. The perforator pedicled propeller (PPP) flap method: report of two cases.

    PubMed

    Hyakusoku, Hiko; Ogawa, Rei; Oki, Koichiro; Ishii, Nobuaki

    2007-10-01

    Perforator flaps are thin free-tissue transfers consisting of skin and subcutaneous tissue which have the advantage of decreasing donor site morbidity. We have reconstructed postburn scar contractures using "propeller flaps" of the remaining healthy skin around the recipient sites. In this paper, we report on two cases and describe the concept of using "perforator flaps" and "propeller flaps" together as what are called "perforator pedicled propeller (PPP) flaps." Patient 1 was an 18-year-old man with a sacral pressure ulcer. The soft tissue defect was reconstructed with a rotated superior gluteal artery PPP flap. Patient 2 was a 53-year-old woman who presented with an open fracture of the right elbow. The skin defect over the fracture was covered with a rotated deep brachial artery PPP flap raised on the lateral upper arm. The PPP flaps are useful for burn reconstruction and repairing various types of wound. Moreover, microsurgery is unnecessary. The PPP flap may be classified into two types: the central axis type and the acentric axis type. The central axis PPP flap is significant when used as a 90-degree-rotation island flap, and the acentric axis PPP flap is significant when used as a 180-degree-rotation island flap. Both types are easy to harvest and useful for repairing various kinds of wound.

  18. Wind speed time series reconstruction using a hybrid neural genetic approach

    NASA Astrophysics Data System (ADS)

    Rodriguez, H.; Flores, J. J.; Puig, V.; Morales, L.; Guerra, A.; Calderon, F.

    2017-11-01

    Currently, electric energy is used in practically all modern human activities. Most of the energy produced comes from fossil fuels, causing irreversible damage to the environment. Lately, there has been an effort by nations to produce energy using clean methods, such as solar and wind energy, among others. Wind energy is one of the cleanest alternatives. However, the wind speed is not constant, making the planning and operation of electric power systems a difficult activity. Knowing in advance the amount of raw material (wind speed) used for energy production allows us to estimate the energy to be generated by the power plant, helping with maintenance planning, operational management, and optimal operational cost. For these reasons, the forecast of wind speed becomes a necessary task. The forecast process involves the use of past observations of the variable to forecast (wind speed). To measure wind speed, weather stations use devices called anemometers, but due to poor maintenance, connection errors, or natural wear, these may produce false or missing data. In this work, a hybrid methodology is proposed that uses a compact genetic algorithm with an artificial neural network to reconstruct wind speed time series. The proposed methodology reconstructs the time series using an ANN defined by a compact genetic algorithm.
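
    A compact genetic algorithm replaces an explicit population with a probability vector over bits: two candidates are sampled, compared, and the vector is nudged toward the winner. The sketch below is generic; how the bits would encode the network configuration or weights, and the wind-speed fitness function, are placeholders rather than the paper's actual setup.

    ```python
    # Compact genetic algorithm (cGA) over bit strings of length n_bits.
    import numpy as np

    def compact_ga(fitness, n_bits, pop_size=50, n_steps=2000, seed=0):
        rng = np.random.default_rng(seed)
        p = np.full(n_bits, 0.5)                       # probability that each bit is 1
        for _ in range(n_steps):
            a = (rng.random(n_bits) < p).astype(int)   # sample two competing individuals
            b = (rng.random(n_bits) < p).astype(int)
            winner, loser = (a, b) if fitness(a) >= fitness(b) else (b, a)
            p += (winner - loser) / pop_size           # shift probabilities toward the winner
            p = np.clip(p, 1.0 / n_bits, 1.0 - 1.0 / n_bits)
        return (p > 0.5).astype(int)

    # Toy fitness (OneMax). In the paper's setting the bits would encode the ANN
    # and the fitness would be the reconstruction error on held-out wind-speed data.
    best = compact_ga(lambda bits: bits.sum(), n_bits=32)
    ```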

  19. A Sparsity-Promoted Decomposition for Compressed Fault Diagnosis of Roller Bearings

    PubMed Central

    Wang, Huaqing; Ke, Yanliang; Song, Liuyang; Tang, Gang; Chen, Peng

    2016-01-01

    The traditional approaches for condition monitoring of roller bearings are almost always achieved under Shannon sampling theorem conditions, leading to a big-data problem. The compressed sensing (CS) theory provides a new solution to the big-data problem. However, the vibration signals are insufficiently sparse and it is difficult to achieve sparsity using the conventional techniques, which impedes the application of CS theory. Therefore, it is of great significance to promote the sparsity when applying the CS theory to fault diagnosis of roller bearings. To increase the sparsity of vibration signals, a sparsity-promoting method called the tunable Q-factor wavelet transform, which decomposes the analyzed signals into transient impact components and high-oscillation components, is utilized in this work. The former become sparser than the raw signals, with noise eliminated, whereas the latter include the noise. Thus, the decomposed transient impact components replace the original signals for analysis. The CS theory is applied to extract the fault features without complete reconstruction, which means that the reconstruction can be completed once the components at the frequencies of interest are detected, and the fault diagnosis can be achieved during the reconstruction procedure. The application cases prove that the CS theory assisted by the tunable Q-factor wavelet transform can successfully extract the fault features from the compressed samples. PMID:27657063
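
    Once a sparse representation is available, recovery from compressed samples is a standard sparse-reconstruction step; orthogonal matching pursuit is one common choice. The sketch below is a generic illustration of that step with a random sensing matrix and a synthetic sparse signal, not the authors' bearing-diagnosis pipeline.

    ```python
    # Orthogonal matching pursuit: recover a k-sparse vector x from y = Phi @ x.
    import numpy as np

    def omp(Phi, y, k):
        residual = y.copy()
        support = []
        x_hat = np.zeros(Phi.shape[1])
        for _ in range(k):
            j = int(np.argmax(np.abs(Phi.T @ residual)))   # most correlated atom
            support.append(j)
            coef, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
            residual = y - Phi[:, support] @ coef
        x_hat[support] = coef
        return x_hat

    rng = np.random.default_rng(2)
    Phi = rng.standard_normal((64, 256)) / np.sqrt(64)     # compressive sensing matrix
    x = np.zeros(256); x[[10, 100, 200]] = [1.0, -0.8, 0.5]
    x_rec = omp(Phi, Phi @ x, k=3)                          # recovers the three nonzeros
    ```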

  20. Image reconstruction of x-ray tomography by using image J platform

    NASA Astrophysics Data System (ADS)

    Zain, R. M.; Razali, A. M.; Salleh, K. A. M.; Yahya, R.

    2017-01-01

    A tomogram is the technical term for a CT image. It is also called a slice because it corresponds to what the object being scanned would look like if it were sliced open along a plane. A CT slice corresponds to a certain thickness of the object being scanned. So, while a typical digital image is composed of pixels, a CT slice image is composed of voxels (volume elements). In the case of x-ray tomography, as in x-ray radiography, the quantity being imaged is the distribution of the attenuation coefficient μ(x) within the object of interest. The difference lies only in the technique used to produce the image. A radiographic image can be produced directly after exposure to x-rays, while a tomographic image is produced by combining radiographic images acquired at every projection angle. A number of image reconstruction methods that convert x-ray attenuation data into a tomographic image have been produced by researchers. In this work, the ramp filter in "filtered back projection" has been applied. The linear data acquired at each angular orientation are convolved with a specially designed filter and then back projected across a pixel field at the same angle. This paper describes the steps of using the ImageJ software to produce an image reconstruction for x-ray tomography.
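
    The ramp-filtered back projection described here is available off the shelf; a minimal sketch using scikit-image follows (assuming a recent version in which iradon accepts filter_name). This is a generic parallel-beam illustration, not the paper's ImageJ workflow.

    ```python
    # Filtered back projection of a parallel-beam sinogram with a ramp filter.
    import numpy as np
    from skimage.data import shepp_logan_phantom
    from skimage.transform import radon, iradon, rescale

    image = rescale(shepp_logan_phantom(), 0.25)               # small test object
    theta = np.linspace(0.0, 180.0, max(image.shape), endpoint=False)
    sinogram = radon(image, theta=theta)                       # projections at each angle
    reconstruction = iradon(sinogram, theta=theta, filter_name="ramp")
    rmse = np.sqrt(np.mean((reconstruction - image) ** 2))     # sanity check against the phantom
    ```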

  1. Vector tomography for reconstructing electric fields with non-zero divergence in bounded domains

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Koulouri, Alexandra, E-mail: koulouri@uni-muenster.de; Department of Electrical and Electronic Engineering, Imperial College London, Exhibition Road, London SW7 2BT; Brookes, Mike

    In vector tomography (VT), the aim is to reconstruct an unknown multi-dimensional vector field using line integral data. In the case of a 2-dimensional VT, two types of line integral data are usually required. These data correspond to integration of the parallel and perpendicular projection of the vector field along the integration lines and are called the longitudinal and transverse measurements, respectively. In most cases, however, the transverse measurements cannot be physically acquired. Therefore, the VT methods are typically used to reconstruct divergence-free (or source-free) velocity and flow fields that can be reconstructed solely from the longitudinal measurements. In this paper, we show how vector fields with non-zero divergence in a bounded domain can also be reconstructed from the longitudinal measurements without the need of explicitly evaluating the transverse measurements. To the best of our knowledge, VT has not previously been used for this purpose. In particular, we study low-frequency, time-harmonic electric fields generated by dipole sources in convex bounded domains which arise, for example, in electroencephalography (EEG) source imaging. We explain in detail the theoretical background, the derivation of the electric field inverse problem and the numerical approximation of the line integrals. We show that fields with non-zero divergence can be reconstructed from the longitudinal measurements with the help of two sparsity constraints that are constructed from the transverse measurements and the vector Laplace operator. As a comparison to EEG source imaging, we note that VT does not require mathematical modeling of the sources. By numerical simulations, we show that the pattern of the electric field can be correctly estimated using VT and the location of the source activity can be determined accurately from the reconstructed magnitudes of the field. - Highlights: • Vector tomography is used to reconstruct electric fields generated by dipole sources. • Inverse solutions are based on longitudinal and transverse line integral measurements. • Transverse line integral measurements are used as a sparsity constraint. • Numerical procedure to approximate the line integrals is described in detail. • Patterns of the studied electric fields are correctly estimated.
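
    For reference, the two data types named above can be written as line integrals of the tangential and normal components of the field; the notation below is schematic rather than taken from the paper.

    ```latex
    % Longitudinal and transverse measurements of a 2-D vector field F along a
    % line L with unit tangent t-hat and unit normal n-hat.
    D_{\parallel}[\mathbf{F}](L) = \int_{L} \mathbf{F}\cdot\hat{\mathbf{t}}\,\mathrm{d}s ,
    \qquad
    D_{\perp}[\mathbf{F}](L) = \int_{L} \mathbf{F}\cdot\hat{\mathbf{n}}\,\mathrm{d}s .
    ```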

  2. AutoStitcher: An Automated Program for Efficient and Robust Reconstruction of Digitized Whole Histological Sections from Tissue Fragments

    NASA Astrophysics Data System (ADS)

    Penzias, Gregory; Janowczyk, Andrew; Singanamalli, Asha; Rusu, Mirabela; Shih, Natalie; Feldman, Michael; Stricker, Phillip D.; Delprado, Warick; Tiwari, Sarita; Böhm, Maret; Haynes, Anne-Maree; Ponsky, Lee; Viswanath, Satish; Madabhushi, Anant

    2016-07-01

    In applications involving large tissue specimens that have been sectioned into smaller tissue fragments, manual reconstruction of a “pseudo whole-mount” histological section (PWMHS) can facilitate (a) pathological disease annotation, and (b) image registration and correlation with radiological images. We have previously presented a program called HistoStitcher, which allows for more efficient manual reconstruction than general purpose image editing tools (such as Photoshop). However HistoStitcher is still manual and hence can be laborious and subjective, especially when doing large cohort studies. In this work we present AutoStitcher, a novel automated algorithm for reconstructing PWMHSs from digitized tissue fragments. AutoStitcher reconstructs (“stitches”) a PWMHS from a set of 4 fragments by optimizing a novel cost function that is domain-inspired to ensure (i) alignment of similar tissue regions, and (ii) contiguity of the prostate boundary. The algorithm achieves computational efficiency by performing reconstruction in a multi-resolution hierarchy. Automated PWMHS reconstruction results (via AutoStitcher) were quantitatively and qualitatively compared to manual reconstructions obtained via HistoStitcher for 113 prostate pathology sections. Distances between corresponding fiducials placed on each of the automated and manual reconstruction results were between 2.7%-3.2%, reflecting their excellent visual similarity.

  3. Efficient continuous-variable state tomography using Padua points

    NASA Astrophysics Data System (ADS)

    Landon-Cardinal, Olivier; Govia, Luke C. G.; Clerk, Aashish A.

    Further development of quantum technologies calls for efficient characterization methods for quantum systems. While recent work has focused on discrete systems of qubits, much remains to be done for continuous-variable systems such as a microwave mode in a cavity. We introduce a novel technique to reconstruct the full Husimi Q or Wigner function from measurements done at the Padua points in phase space, the optimal sampling points for interpolation in 2D. Our technique not only reduces the number of experimental measurements, but remarkably, also allows for the direct estimation of any density matrix element in the Fock basis, including off-diagonal elements. OLC acknowledges financial support from NSERC.

  4. Muscle Activity Map Reconstruction from High Density Surface EMG Signals With Missing Channels Using Image Inpainting and Surface Reconstruction Methods.

    PubMed

    Ghaderi, Parviz; Marateb, Hamid R

    2017-07-01

    The aim of this study was to reconstruct low-quality high-density surface EMG (HDsEMG) signals, recorded with 2-D electrode arrays, using image inpainting and surface reconstruction methods. It is common that some fraction of the electrodes may provide low-quality signals. We used a variety of image inpainting methods, based on partial differential equations (PDEs), and surface reconstruction methods to reconstruct the time-averaged or instantaneous muscle activity maps of those outlier channels. Two novel reconstruction algorithms were also proposed. HDsEMG signals were recorded from the biceps femoris and brachial biceps muscles during low-to-moderate-level isometric contractions, and some of the channels (5-25%) were randomly marked as outliers. The root-mean-square error (RMSE) between the original and reconstructed maps was then calculated. Overall, the proposed Poisson and wave PDE methods outperformed the other methods (average RMSE 8.7 μV rms ± 6.1 μV rms and 7.5 μV rms ± 5.9 μV rms) for the time-averaged single-differential and monopolar map reconstruction, respectively. Biharmonic spline, the discrete cosine transform, and the Poisson PDE outperformed the other methods for the instantaneous map reconstruction. The running time of the proposed Poisson and wave PDE methods, implemented using a vectorization package, was 4.6 ± 5.7 ms and 0.6 ± 0.5 ms, respectively, for each signal epoch or time sample in each channel. The proposed reconstruction algorithms could be promising new tools for reconstructing muscle activity maps in real-time applications. Proper reconstruction methods could recover the information of low-quality recorded channels in HDsEMG signals.
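
    As a rough illustration of the PDE idea (though not the authors' Poisson or wave formulations), missing channels in a 2-D activity map can be filled harmonically by iterating a local-average update on the unknown pixels while the measured channels stay fixed. The array size and outlier fraction below are arbitrary.

    ```python
    # Harmonic (Laplace) inpainting sketch: unknown pixels are repeatedly replaced
    # by the mean of their four neighbours (periodic boundaries for brevity).
    import numpy as np

    def laplace_inpaint(grid, missing_mask, n_iter=500):
        filled = grid.copy()
        filled[missing_mask] = grid[~missing_mask].mean()      # crude initial guess
        for _ in range(n_iter):
            neighbours = (np.roll(filled, 1, 0) + np.roll(filled, -1, 0) +
                          np.roll(filled, 1, 1) + np.roll(filled, -1, 1)) / 4.0
            filled[missing_mask] = neighbours[missing_mask]    # keep measured channels fixed
        return filled

    rng = np.random.default_rng(3)
    amplitude_map = rng.random((8, 16))        # hypothetical RMS map of an 8 x 16 array
    bad = rng.random((8, 16)) < 0.15           # ~15% low-quality channels
    reconstructed = laplace_inpaint(amplitude_map, bad)
    ```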

  5. Flip-avoiding interpolating surface registration for skull reconstruction.

    PubMed

    Xie, Shudong; Leow, Wee Kheng; Lee, Hanjing; Lim, Thiam Chye

    2018-03-30

    Skull reconstruction is an important and challenging task in craniofacial surgery planning, forensic investigation and anthropological studies. Existing methods typically reconstruct approximating surfaces that regard corresponding points on the target skull as soft constraints, thus incurring non-zero error even for non-defective parts and high overall reconstruction error. This paper proposes a novel geometric reconstruction method that non-rigidly registers an interpolating reference surface that regards corresponding target points as hard constraints, thus achieving low reconstruction error. To overcome the shortcoming of interpolating a surface, a flip-avoiding method is used to detect and exclude conflicting hard constraints that would otherwise cause surface patches to flip and self-intersect. Comprehensive test results show that our method is more accurate and robust than existing skull reconstruction methods. By incorporating symmetry constraints, it can produce more symmetric and normal results than other methods in reconstructing defective skulls with a large number of defects. It is robust against severe outliers such as radiation artifacts in computed tomography due to dental implants. In addition, test results also show that our method outperforms thin-plate spline for model resampling, which enables the active shape model to yield more accurate reconstruction results. As the reconstruction accuracy of defective parts varies with the use of different reference models, we also study the implication of reference model selection for skull reconstruction. Copyright © 2018 John Wiley & Sons, Ltd.

  6. Image encryption using random sequence generated from generalized information domain

    NASA Astrophysics Data System (ADS)

    Xia-Yan, Zhang; Guo-Ji, Zhang; Xuan, Li; Ya-Zhou, Ren; Jie-Hua, Wu

    2016-05-01

    A novel image encryption method based on the random sequence generated from the generalized information domain and permutation-diffusion architecture is proposed. The random sequence is generated by reconstruction from the generalized information file and discrete trajectory extraction from the data stream. The trajectory address sequence is used to generate a P-box to shuffle the plain image while random sequences are treated as keystreams. A new factor called drift factor is employed to accelerate and enhance the performance of the random sequence generator. An initial value is introduced to make the encryption method an approximately one-time pad. Experimental results show that the random sequences pass the NIST statistical test with a high ratio and extensive analysis demonstrates that the new encryption scheme has superior security.
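
    As a rough illustration of the permutation-diffusion architecture mentioned above (not of the paper's generalized-information-domain generator or drift factor, which are specific to that work), the toy sketch below shuffles pixel positions with a key-driven P-box and then chains an XOR diffusion with a keystream; the seed, keystream and image sizes are made up.

        import numpy as np

        def permute_diffuse(image, perm_seed, keystream):
            """Toy permutation-diffusion round: shuffle pixel positions with a
            seed-driven permutation (the P-box), then diffuse by chaining XOR
            with a keystream so each cipher byte depends on the previous one."""
            flat = image.flatten()
            perm = np.random.default_rng(perm_seed).permutation(flat.size)
            shuffled = flat[perm]
            cipher = np.empty_like(shuffled)
            prev = np.uint8(0)
            for i in range(shuffled.size):
                cipher[i] = shuffled[i] ^ keystream[i] ^ prev
                prev = cipher[i]
            return cipher.reshape(image.shape), perm

        rng = np.random.default_rng(42)
        plain = rng.integers(0, 256, size=(8, 8), dtype=np.uint8)
        stream = rng.integers(0, 256, size=plain.size, dtype=np.uint8)
        cipher, perm = permute_diffuse(plain, perm_seed=1234, keystream=stream)
        print(cipher[0])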

  7. [Application of Fourier transform profilometry in 3D-surface reconstruction].

    PubMed

    Shi, Bi'er; Lu, Kuan; Wang, Yingting; Li, Zhen'an; Bai, Jing

    2011-08-01

    With improvements in system design and reconstruction methods for fluorescence molecular tomography (FMT), FMT has become widely used as an important experimental tool in biomedical research. The 3D surface profile of the experimental object is needed as a boundary constraint for FMT reconstruction algorithms. We proposed a new 3D surface reconstruction method based on Fourier transform profilometry (FTP) under blue-purple light illumination. The slice images were reconstructed using appropriate image processing, frequency spectrum analysis and filtering. The experimental results showed that the method properly reconstructed the 3D surface of objects with mm-level accuracy. Compared to other methods, this one is simple and fast. Beyond its reconstruction quality, the proposed method can help monitor the behavior of the object during the experiment to ensure consistency of the imaging process. Furthermore, the method uses blue-purple light as its source to avoid interference with fluorescence imaging.
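
    The core FTP step can be sketched as follows: take the Fourier transform of a fringe image along the carrier direction, keep only the spectral lobe around the carrier frequency, shift it to baseband and read the phase, which encodes the surface profile. The carrier frequency, filter bandwidth and synthetic fringe pattern below are illustrative assumptions, not the authors' setup.

        import numpy as np

        def ftp_phase(fringe, carrier_freq, bandwidth):
            """Fourier transform profilometry phase extraction: isolate the +f0
            spectral lobe, demodulate to baseband and unwrap the resulting phase."""
            cols = fringe.shape[1]
            spectrum = np.fft.fft(fringe, axis=1)
            freqs = np.fft.fftfreq(cols)
            keep = np.abs(freqs - carrier_freq) < bandwidth        # band-pass around +f0
            analytic = np.fft.ifft(spectrum * keep, axis=1)
            demodulated = analytic * np.exp(-2j * np.pi * carrier_freq * np.arange(cols))
            return np.unwrap(np.angle(demodulated), axis=1)

        # synthetic fringe pattern deformed by a smooth bump-shaped phase
        X, Y = np.meshgrid(np.arange(256), np.arange(256))
        phi = 3.0 * np.exp(-((X - 128) ** 2 + (Y - 128) ** 2) / (2 * 40.0 ** 2))
        f0 = 1.0 / 16.0                                            # one fringe per 16 pixels
        fringe = 128 + 100 * np.cos(2 * np.pi * f0 * X + phi)
        phase = ftp_phase(fringe, carrier_freq=f0, bandwidth=f0 / 2)
        print("peak recovered phase:", round(float(phase.max()), 2))   # close to 3.0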

  8. Computer Reconstruction of Spirit Predicament

    NASA Image and Video Library

    2009-11-04

    A screen shot from software used by the Mars Exploration Rover team for assessing movements by Spirit and Opportunity illustrates the degree to which Spirit wheels have become embedded in soft material at the location called Troy.

  9. Reconstruction of fluorescence molecular tomography with a cosinoidal level set method.

    PubMed

    Zhang, Xuanxuan; Cao, Xu; Zhu, Shouping

    2017-06-27

    The implicit shape-based reconstruction method in fluorescence molecular tomography (FMT) is capable of achieving higher image clarity than the image-based reconstruction method. However, the implicit shape method suffers from a low convergence speed and performs unstably due to the utilization of gradient-based optimization methods. Moreover, the implicit shape method requires a priori information about the number of targets. A shape-based reconstruction scheme of FMT with a cosinoidal level set method is proposed in this paper. The Heaviside function in the classical implicit shape method is replaced with a cosine function, and then the reconstruction can be accomplished with the Levenberg-Marquardt method rather than gradient-based methods. As a result, a priori information about the number of targets is no longer required and the choice of step length is avoided. Numerical simulations and phantom experiments were carried out to validate the proposed method. Results of the proposed method show higher contrast to noise ratios and Pearson correlations than the implicit shape method and image-based reconstruction method. Moreover, the number of iterations required in the proposed method is much less than the implicit shape method. The proposed method performs more stably, provides a faster convergence speed than the implicit shape method, and achieves higher image clarity than the image-based reconstruction method.

  10. Reconstruction-based Digital Dental Occlusion of the Partially Edentulous Dentition

    PubMed Central

    Zhang, Jian; Xia, James J.; Li, Jianfu; Zhou, Xiaobo

    2016-01-01

    Partially edentulous dentition presents a challenging problem for the surgical planning of digital dental occlusion in the field of craniomaxillofacial surgery because of the incorrect maxillomandibular distance caused by missing teeth. We propose an innovative approach called Dental Reconstruction with Symmetrical Teeth (DRST) to achieve accurate dental occlusion for the partially edentulous cases. In this DRST approach, the rigid transformation between two symmetrical teeth existing on the left and right dental model is estimated through probabilistic point registration by matching the two shapes. With the estimated transformation, the partially edentulous space can be virtually filled with the teeth in its symmetrical position. Dental alignment is performed by digital dental occlusion reestablishment algorithm with the reconstructed complete dental model. Satisfactory reconstruction and occlusion results are demonstrated with the synthetic and real partially edentulous models. PMID:26584502

  11. Hidden Markov induced Dynamic Bayesian Network for recovering time evolving gene regulatory networks

    NASA Astrophysics Data System (ADS)

    Zhu, Shijia; Wang, Yadong

    2015-12-01

    Dynamic Bayesian Networks (DBNs) have been widely used to recover gene regulatory relationships from time-series data in computational systems biology. Their standard assumption is ‘stationarity’, and therefore several research efforts have recently been proposed to relax this restriction. However, those methods suffer from three challenges: long running time, low accuracy and reliance on parameter settings. To address these problems, we propose a novel non-stationary DBN model by extending each hidden node of a Hidden Markov Model into a DBN (called HMDBN), which properly handles the underlying time-evolving networks. Correspondingly, an improved structural EM algorithm is proposed to learn the HMDBN. It dramatically reduces the searching space, thereby substantially improving computational efficiency. Additionally, we derived a novel generalized Bayesian Information Criterion under the non-stationary assumption (called BWBIC), which can help significantly improve the reconstruction accuracy and largely reduce over-fitting. Moreover, the re-estimation formulas for all parameters of our model are derived, enabling us to avoid reliance on parameter settings. Compared to the state-of-the-art methods, the experimental evaluation of our proposed method on both synthetic and real biological data demonstrates consistently high prediction accuracy and significantly improved computational efficiency, even with no prior knowledge and parameter settings.

  12. Level-set-based reconstruction algorithm for EIT lung images: first clinical results.

    PubMed

    Rahmati, Peyman; Soleimani, Manuchehr; Pulletz, Sven; Frerichs, Inéz; Adler, Andy

    2012-05-01

    We show the first clinical results using the level-set-based reconstruction algorithm for electrical impedance tomography (EIT) data. The level-set-based reconstruction method (LSRM) allows the reconstruction of non-smooth interfaces between image regions, which are typically smoothed by traditional voxel-based reconstruction methods (VBRMs). We develop a time difference formulation of the LSRM for 2D images. The proposed reconstruction method is applied to reconstruct clinical EIT data of a slow flow inflation pressure-volume manoeuvre in lung-healthy and adult lung-injury patients. Images from the LSRM and the VBRM are compared. The results show comparable reconstructed images, but with an improved ability to reconstruct sharp conductivity changes in the distribution of lung ventilation using the LSRM.

  13. High resolution x-ray CMT: Reconstruction methods

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brown, J.K.

    This paper qualitatively discusses the primary characteristics of methods for reconstructing tomographic images from a set of projections. These reconstruction methods can be categorized as either “analytic” or “iterative” techniques. Analytic algorithms are derived from the formal inversion of equations describing the imaging process, while iterative algorithms incorporate a model of the imaging process and provide a mechanism to iteratively improve image estimates. Analytic reconstruction algorithms are typically computationally more efficient than iterative methods; however, analytic algorithms are available for a relatively limited set of imaging geometries and situations. Thus, the framework of iterative reconstruction methods is better suited for high-accuracy tomographic reconstruction codes.
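
    The analytic/iterative distinction can be illustrated on a toy linear imaging model: an iterative scheme repeatedly backprojects the mismatch between measured and modelled projections to refine the image, whereas an analytic method would invert the model in closed form. The tiny random system below is a stand-in for a real projection geometry, and the update shown is a simple SIRT-style iteration rather than any specific code discussed in the paper.

        import numpy as np

        rng = np.random.default_rng(0)
        A = rng.random((40, 16))              # hypothetical projection matrix: 40 rays, 4x4 image
        x_true = rng.random(16)
        b = A @ x_true                        # noiseless projection data

        x = np.zeros(16)
        row_sums = A.sum(axis=1)              # normalisation over each ray
        col_sums = A.sum(axis=0)              # normalisation over each pixel
        for _ in range(500):                  # SIRT-style update: backproject normalised residuals
            residual = (b - A @ x) / row_sums
            x += (A.T @ residual) / col_sums
        print("max reconstruction error:", float(np.max(np.abs(x - x_true))))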

  14. Comparison of reconstruction methods and quantitative accuracy in Siemens Inveon PET scanner

    NASA Astrophysics Data System (ADS)

    Ram Yu, A.; Kim, Jin Su; Kang, Joo Hyun; Moo Lim, Sang

    2015-04-01

    PET reconstruction is key to the quantification of PET data. To our knowledge, no comparative study of reconstruction methods has been performed to date. In this study, we compared reconstruction methods with various filters in terms of their spatial resolution, non-uniformities (NU), recovery coefficients (RCs), and spillover ratios (SORs). In addition, the linearity between measured and true radioactivity concentrations was also assessed. A Siemens Inveon PET scanner was used in this study. Spatial resolution was measured according to the NEMA standard using a 1 mm3 18F point source. Image quality was assessed in terms of NU, RC and SOR. To measure the effect of reconstruction algorithms and filters, data were reconstructed using FBP, the 3D reprojection algorithm (3DRP), ordered subset expectation maximization 2D (OSEM 2D), and maximum a posteriori (MAP) with various filters or smoothing factors (β). To assess the linearity of reconstructed radioactivity, an image quality phantom filled with 18F was reconstructed using FBP, OSEM and MAP (β = 1.5 and 5 × 10-5). The highest achievable volumetric resolution was 2.31 mm3 and the highest RCs were obtained when OSEM 2D was used. SOR was 4.87% for air and 3.97% for water when OSEM 2D reconstruction was used. The measured radioactivity of the reconstructed image was proportional to the injected activity below 16 MBq/ml when the FBP or OSEM 2D reconstruction methods were used. By contrast, when the MAP reconstruction method was used, the activity of the reconstructed image increased proportionally regardless of the amount of injected radioactivity. When OSEM 2D or FBP were used, the measured radioactivity concentration was reduced by 53% compared with the true injected radioactivity for radioactivity <16 MBq/ml. The OSEM 2D reconstruction method provides the highest achievable volumetric resolution and the highest RC among all the tested methods and yields a linear relation between the measured and true concentrations for radioactivity below 16 MBq/ml. Our data collectively showed that the OSEM 2D reconstruction method provides quantitatively accurate reconstructed PET data.

  15. Standardized shrinking LORETA-FOCUSS (SSLOFO): a new algorithm for spatio-temporal EEG source reconstruction.

    PubMed

    Liu, Hesheng; Schimpf, Paul H; Dong, Guoya; Gao, Xiaorong; Yang, Fusheng; Gao, Shangkai

    2005-10-01

    This paper presents a new algorithm called Standardized Shrinking LORETA-FOCUSS (SSLOFO) for solving the electroencephalogram (EEG) inverse problem. Multiple techniques are combined in a single procedure to robustly reconstruct the underlying source distribution with high spatial resolution. This algorithm uses a recursive process which takes the smooth estimate of sLORETA as initialization and then employs the re-weighted minimum norm introduced by FOCUSS. An important technique called standardization is involved in the recursive process to enhance the localization ability. The algorithm is further improved by automatically adjusting the source space according to the estimate of the previous step, and by the inclusion of temporal information. Simulation studies are carried out on both spherical and realistic head models. The algorithm achieves very good localization ability on noise-free data. It is capable of recovering complex source configurations with arbitrary shapes and can produce high quality images of extended source distributions. We also characterized the performance with noisy data in a realistic head model. An important feature of this algorithm is that the temporal waveforms are clearly reconstructed, even for closely spaced sources. This provides a convenient way to estimate neural dynamics directly from the cortical sources.
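
    The re-weighted minimum-norm idea at the heart of FOCUSS-style methods can be sketched in a few lines: each iteration solves a weighted minimum-norm problem and then uses the magnitude of the current estimate as the next set of weights, which progressively focuses energy onto a few sources. The random lead field, regularization value and source counts below are illustrative only; SSLOFO's sLORETA initialization, standardization and source-space shrinking steps are not reproduced here.

        import numpy as np

        def reweighted_min_norm(leadfield, data, n_iter=10, lam=1e-8):
            """FOCUSS-style iteration: x = W (LW)^T ((LW)(LW)^T + lam*I)^(-1) y,
            with the diagonal weights W updated from the previous estimate."""
            n_chan, n_src = leadfield.shape
            weights = np.ones(n_src)
            estimate = np.zeros(n_src)
            for _ in range(n_iter):
                lw = leadfield * weights                      # L @ diag(w)
                gram = lw @ lw.T + lam * np.eye(n_chan)
                estimate = weights * (lw.T @ np.linalg.solve(gram, data))
                weights = np.abs(estimate)                    # focus onto strong sources
            return estimate

        rng = np.random.default_rng(0)
        L = rng.standard_normal((32, 500))                    # 32 channels, 500 candidate sources
        x_true = np.zeros(500)
        x_true[[40, 210]] = [1.0, -0.7]                       # two focal sources
        y = L @ x_true
        x_hat = reweighted_min_norm(L, y)
        print("largest recovered sources:", np.argsort(np.abs(x_hat))[-2:])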

  16. Extracting foreground-obscured μ-distortion anisotropies to constrain primordial non-Gaussianity

    NASA Astrophysics Data System (ADS)

    Remazeilles, M.; Chluba, J.

    2018-07-01

    Correlations between cosmic microwave background (CMB) temperature, polarization, and spectral distortion anisotropies can be used as a probe of primordial non-Gaussianity. Here, we perform a reconstruction of μ-distortion anisotropies in the presence of Galactic and extragalactic foregrounds, applying the so-called Constrained ILC component separation method to simulations of proposed CMB space missions (PIXIE, LiteBIRD, CORE, and PICO). Our sky simulations include Galactic dust, Galactic synchrotron, Galactic free-free, thermal Sunyaev-Zeldovich effect, as well as primary CMB temperature and μ-distortion anisotropies, the latter being added as a correlated field. The Constrained ILC method allows us to null the CMB temperature anisotropies in the reconstructed μ-map (and vice versa), in addition to mitigating the contaminations from astrophysical foregrounds and instrumental noise. We compute the cross-power spectrum between the reconstructed (CMB-free) μ-distortion map and the (μ-free) CMB temperature map, after foreground removal and component separation. Since the cross-power spectrum is proportional to the primordial non-Gaussianity parameter, fNL, on scales k ≃ 740 Mpc^{-1}, this allows us to derive fNL-detection limits for the aforementioned future CMB experiments. Our analysis shows that foregrounds degrade the theoretical detection limits (based mostly on instrumental noise) by more than one order of magnitude, with PICO standing the best chance at placing upper limits on scale-dependent non-Gaussianity. We also discuss the dependence of the constraints on the channel sensitivities and chosen bands. Like for B-mode polarization measurements, extended coverage at frequencies ν ≲ 40 GHz and ν ≳ 400 GHz provides more leverage than increased channel sensitivity.

  17. Extracting foreground-obscured μ-distortion anisotropies to constrain primordial non-Gaussianity

    NASA Astrophysics Data System (ADS)

    Remazeilles, M.; Chluba, J.

    2018-04-01

    Correlations between cosmic microwave background (CMB) temperature, polarization and spectral distortion anisotropies can be used as a probe of primordial non-Gaussianity. Here, we perform a reconstruction of μ-distortion anisotropies in the presence of Galactic and extragalactic foregrounds, applying the so-called Constrained ILC component separation method to simulations of proposed CMB space missions (PIXIE, LiteBIRD, CORE, PICO). Our sky simulations include Galactic dust, Galactic synchrotron, Galactic free-free, thermal Sunyaev-Zeldovich effect, as well as primary CMB temperature and μ-distortion anisotropies, the latter being added as a correlated field. The Constrained ILC method allows us to null the CMB temperature anisotropies in the reconstructed μ-map (and vice versa), in addition to mitigating the contaminations from astrophysical foregrounds and instrumental noise. We compute the cross-power spectrum between the reconstructed (CMB-free) μ-distortion map and the (μ-free) CMB temperature map, after foreground removal and component separation. Since the cross-power spectrum is proportional to the primordial non-Gaussianity parameter, fNL, on scales k ≃ 740 Mpc^{-1}, this allows us to derive fNL-detection limits for the aforementioned future CMB experiments. Our analysis shows that foregrounds degrade the theoretical detection limits (based mostly on instrumental noise) by more than one order of magnitude, with PICO standing the best chance at placing upper limits on scale-dependent non-Gaussianity. We also discuss the dependence of the constraints on the channel sensitivities and chosen bands. Like for B-mode polarization measurements, extended coverage at frequencies ν ≲ 40 GHz and ν ≳ 400 GHz provides more leverage than increased channel sensitivity.

  18. Segmented Separable Footprint Projector for Digital Breast Tomosynthesis and Its application for Subpixel Reconstruction

    PubMed Central

    Zheng, Jiabei; Fessler, Jeffrey A; Chan, Heang-Ping

    2017-01-01

    Purpose: Digital forward and back projectors play a significant role in iterative image reconstruction. The accuracy of the projector affects the quality of the reconstructed images. Digital breast tomosynthesis (DBT) often uses the ray-tracing (RT) projector that ignores finite detector element size. This paper proposes a modified version of the separable footprint (SF) projector, called the segmented separable footprint (SG) projector, that calculates efficiently the Radon transform mean value over each detector element. The SG projector is specifically designed for DBT reconstruction because of the large height-to-width ratio of the voxels generally used in DBT. This study evaluates the effectiveness of the SG projector in reducing projection error and improving DBT reconstruction quality. Methods: We quantitatively compared the projection error of the RT and the SG projector at different locations and their performance in regular and subpixel DBT reconstruction. Subpixel reconstructions used finer voxels in the imaged volume than the detector pixel size. Subpixel reconstruction with RT projector uses interpolated projection views as input to provide adequate coverage of the finer voxel grid with the traced rays. Subpixel reconstruction with the SG projector, however, uses the measured projection views without interpolation. We simulated DBT projections of a test phantom using CatSim (GE Global Research, Niskayuna, NY) under idealized imaging conditions without noise and blur, to analyze the effects of the projectors and subpixel reconstruction without other image degrading factors. The phantom contained an array of horizontal and vertical line pair patterns (1 to 9.5 line pairs/mm) and pairs of closely spaced spheres (diameters 0.053 to 0.5 mm) embedded at the mid-plane of a 5-cm-thick breast-tissue-equivalent uniform volume. The images were reconstructed with regular simultaneous algebraic reconstruction technique (SART) and subpixel SART using different projectors. The resolution and contrast of the test objects in the reconstructed images and the computation times were compared under different reconstruction conditions. Results: The SG projector reduced the projector error by 1 to 2 orders of magnitude at most locations. In the worst case, the SG projector still reduced the projection error by about 50%. In the DBT reconstructed slices parallel to the detector plane, the SG projector not only increased the contrast of the line pairs and spheres, but also produced more smooth and continuous reconstructed images whereas the discrete and sparse nature of the RT projector caused artifacts appearing as patterned noise. For subpixel reconstruction, the SG projector significantly increased object contrast and computation speed, especially for high subpixel ratios, compared with the RT projector implemented with accelerated Siddon's algorithm. The difference in the depth resolution among the projectors is negligible under the conditions studied. Our results also demonstrated that subpixel reconstruction can improve the spatial resolution of the reconstructed images, and can exceed the Nyquist limit of the detector under some conditions. Conclusions: The SG projector was more accurate and faster than the RT projector. The SG projector also substantially reduced computation time and improved the image quality for the tomosynthesized images with and without subpixel reconstruction. PMID:28058719

  19. Joint image and motion reconstruction for PET using a B-spline motion model.

    PubMed

    Blume, Moritz; Navab, Nassir; Rafecas, Magdalena

    2012-12-21

    We present a novel joint image and motion reconstruction method for PET. The method is based on gated data and reconstructs an image together with a motion function. The motion function can be used to transform the reconstructed image to any of the input gates. All available events (from all gates) are used in the reconstruction. The presented method uses a B-spline motion model, together with a novel motion regularization procedure that does not need a regularization parameter (which is usually extremely difficult to adjust). Several image and motion grid levels are used in order to reduce the reconstruction time. In a simulation study, the presented method is compared to a recently proposed joint reconstruction method. While the presented method provides comparable reconstruction quality, it is much easier to use since no regularization parameter has to be chosen. Furthermore, since the B-spline discretization of the motion function depends on fewer parameters than a displacement field, the presented method is considerably faster and consumes less memory than its counterpart. The method is also applied to clinical data, for which a novel purely data-driven gating approach is presented.
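
    The appeal of a B-spline motion model is that the motion between gates is described by a small set of control-point coefficients rather than a dense displacement value per voxel and per gate. The 1-D sketch below (control-point values, knot construction and gate count are all made-up illustrations, not the paper's parametrization) shows how a handful of cubic B-spline coefficients yields a smooth motion curve that can be evaluated at any gate.

        import numpy as np
        from scipy.interpolate import BSpline

        degree = 3
        coeffs = np.array([0.0, 2.0, 6.0, 8.0, 6.5, 2.5, 0.0])    # coarse displacement amplitudes (mm)
        n_ctrl = len(coeffs)
        # clamped knot vector so the curve starts and ends at the first/last coefficient
        knots = np.concatenate((np.zeros(degree),
                                np.linspace(0.0, 1.0, n_ctrl - degree + 1),
                                np.ones(degree)))
        motion = BSpline(knots, coeffs, degree)

        gates = np.linspace(0.0, 1.0, 8)                           # normalised gate positions
        print(np.round(motion(gates), 2))                          # smooth displacement per gate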

  20. Direct 2-D reconstructions of conductivity and permittivity from EIT data on a human chest.

    PubMed

    Herrera, Claudia N L; Vallejo, Miguel F M; Mueller, Jennifer L; Lima, Raul G

    2015-01-01

    A novel direct D-bar reconstruction algorithm is presented for reconstructing a complex conductivity distribution from 2-D EIT data. The method is applied to simulated data and archival human chest data. Permittivity reconstructions with the aforementioned method and conductivity reconstructions with the previously existing nonlinear D-bar method for real-valued conductivities depicting ventilation and perfusion in the human chest are presented. This constitutes the first fully nonlinear D-bar reconstructions of human chest data and the first D-bar permittivity reconstructions of experimental data. The results of the human chest data reconstructions are compared on a circular domain versus a chest-shaped domain.

  1. [Evaluation of the external tissue extender (Ete) in secondary wound closure].

    PubMed

    Zutt, Markus; Beckmann, Iris; Kretschmer, Lutz

    2003-09-01

    For surgical closure of large skin defects, elaborate reconstructive plastic surgery or other methods such as internal subcutaneous balloon tissue expanders are required in order to avoid tension on the closure margins. Here we point to the benefits and disadvantages of an improved and simple method of secondary wound closure by secondary sutures. We employed a system called External Tissue Extender (ETE), which consists of silicone strings and plastic stoppers pulling the corresponding surgical sites together and evenly distributing the tension. Possible indications in dermatologic surgery and our experiences with this technique are outlined. Implantation and handling of the ETE are very easy and fast. The functional results are good and the cosmetic outcome satisfactory. More invasive surgical procedures can be avoided by using this method. A major disadvantage is the possibility of developing necrosis under the plastic stoppers. According to our experience, the ETE is a useful alternative indicated in certain dermatosurgical situations.

  2. Fast computation of radiation pressure force exerted by multiple laser beams on red blood cell-like particles

    NASA Astrophysics Data System (ADS)

    Gou, Ming-Jiang; Yang, Ming-Lin; Sheng, Xin-Qing

    2016-10-01

    Mature red blood cells (RBCs) do not contain nuclei or large complex organelles, which allows them to be approximately regarded as homogeneous medium particles. To compute the radiation pressure force (RPF) exerted by multiple laser beams on such arbitrarily shaped homogeneous particles, a fast electromagnetic optics method is demonstrated. Based on Maxwell's equations, the matrix equation formed by the method of moments (MoM) has many right-hand sides (RHSs), one for each laser beam. To accelerate solving the matrix equation, the algorithm performs a low-rank decomposition of the excitation matrix consisting of all RHSs, identifying so-called skeleton laser beams by interpolative decomposition (ID). After the solutions corresponding to the skeletons are obtained, the desired responses can be reconstructed efficiently. Numerical results are presented to validate the developed method.
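
    The skeleton idea can be sketched with SciPy's interpolative decomposition: if the excitation matrix of all right-hand sides is numerically low-rank, an ID picks a few skeleton columns and an interpolation matrix such that every other column is a linear combination of the skeletons, so the system only needs to be solved for the skeleton beams. The system matrix, beam count and rank below are synthetic stand-ins, not an actual MoM discretization.

        import numpy as np
        from scipy.linalg import interpolative as sli

        rng = np.random.default_rng(1)
        n, n_beams, true_rank = 400, 60, 8

        system = rng.standard_normal((n, n)) + n * np.eye(n)     # well-conditioned stand-in for the MoM matrix
        rhs = rng.standard_normal((n, true_rank)) @ rng.standard_normal((true_rank, n_beams))

        # ID: rhs ~= rhs[:, idx[:k]] @ P, where the selected columns are the "skeleton" beams
        k, idx, proj = sli.interp_decomp(rhs, 1e-10)
        P = sli.reconstruct_interp_matrix(idx, proj)

        x_skeleton = np.linalg.solve(system, rhs[:, idx[:k]])    # solve only the k skeleton excitations
        x_all = x_skeleton @ P                                   # reconstruct the responses to all beams

        x_direct = np.linalg.solve(system, rhs)
        print("skeleton beams:", k, " max error:", float(np.max(np.abs(x_all - x_direct))))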

  3. A spot pattern test chart technique for measurement of geometric aberrations caused by an intervening medium—a novel method

    NASA Astrophysics Data System (ADS)

    Ganesan, A. R.; Arulmozhivarman, P.; Jesson, M.

    2005-12-01

    Accurate surface metrology and transmission characteristics measurements have become vital for certifying manufacturing excellence for glass visors, windshields, menu boards and the transportation industry. We report a simple, cost-effective and novel technique for measuring geometric aberrations in transparent materials such as glass sheets and Perspex. The technique makes use of an array of spots, which we call the spot pattern test chart technique, placed at the diffraction-limited imaging position with a large field of view. Performance features include variable angular dynamic range and angular sensitivity. Aberrations caused by transparent sheets introduced in the line of sight as the intervening medium are estimated in real time using the Zernike reconstruction method. A quantitative comparative analysis between a Shack-Hartmann wavefront sensor and the proposed new method is presented and the results are discussed.

  4. Surface Profile and Stress Field Evaluation using Digital Gradient Sensing Method

    DOE PAGES

    Miao, C.; Sundaram, B. M.; Huang, L.; ...

    2016-08-09

    Shape and surface topography evaluation from measured orthogonal slope/gradient data is of considerable engineering significance since many full-field optical sensors and interferometers readily output accurate data of that kind. This has applications ranging from metrology of optical and electronic elements (lenses, silicon wafers, thin film coatings), surface profile estimation, wave front and shape reconstruction, to name a few. In this context, a new methodology for surface profile and stress field determination based on a recently introduced non-contact, full-field optical method called digital gradient sensing (DGS) capable of measuring small angular deflections of light rays coupled with a robust finite-difference-based least-squares integration (HFLI) scheme in the Southwell configuration is advanced here. The method is demonstrated by evaluating (a) surface profiles of mechanically warped silicon wafers and (b) stress gradients near growing cracks in planar phase objects.
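
    The reconstruction step (turning two measured slope maps back into a surface) can be illustrated with a generic Fourier-domain least-squares integrator; this is a common alternative to the finite-difference/Southwell scheme the authors use, so treat it as a sketch of the idea rather than their HFLI implementation. The test surface and grid below are synthetic.

        import numpy as np

        def integrate_gradients_fft(gx, gy, dx=1.0, dy=1.0):
            """Least-squares integration of orthogonal slope maps gx = dz/dx and
            gy = dz/dy into a surface z (up to an arbitrary constant offset)."""
            rows, cols = gx.shape
            wx = 2j * np.pi * np.fft.fftfreq(cols, d=dx)[None, :]
            wy = 2j * np.pi * np.fft.fftfreq(rows, d=dy)[:, None]
            Gx, Gy = np.fft.fft2(gx), np.fft.fft2(gy)
            denom = (wx * np.conj(wx) + wy * np.conj(wy)).real
            denom[0, 0] = 1.0                                   # avoid division by zero at DC
            Z = (np.conj(wx) * Gx + np.conj(wy) * Gy) / denom
            Z[0, 0] = 0.0                                       # the mean height (piston) is arbitrary
            return np.fft.ifft2(Z).real

        # synthetic periodic surface, differentiated numerically and re-integrated
        y, x = np.mgrid[0:128, 0:128]
        z_true = np.sin(2 * np.pi * x / 64) * np.cos(2 * np.pi * y / 64)
        gy, gx = np.gradient(z_true)
        z_rec = integrate_gradients_fft(gx, gy)
        print("RMS reconstruction error:", float(np.sqrt(np.mean((z_rec - z_true) ** 2))))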

  5. An object-oriented simulator for 3D digital breast tomosynthesis imaging system.

    PubMed

    Seyyedi, Saeed; Cengiz, Kubra; Kamasak, Mustafa; Yildirim, Isa

    2013-01-01

    Digital breast tomosynthesis (DBT) is an innovative imaging modality that provides 3D reconstructed images of the breast for breast cancer detection. Projections obtained with an X-ray source moving over a limited angular interval are used to reconstruct a 3D image of the breast. Several reconstruction algorithms are available for DBT imaging. The filtered back projection algorithm has traditionally been used to reconstruct images from projections. Iterative reconstruction algorithms such as the algebraic reconstruction technique (ART) were later developed. Recently, compressed sensing based methods have been proposed for the tomosynthesis imaging problem. We have developed an object-oriented simulator for a 3D digital breast tomosynthesis (DBT) imaging system using the C++ programming language. The simulator is capable of implementing different iterative and compressed sensing based reconstruction methods on 3D digital tomosynthesis data sets and phantom models. A user-friendly graphical user interface (GUI) helps users to select and run the desired methods on the designed phantom models or real data sets. The simulator has been tested on a phantom study that simulates the breast tomosynthesis imaging problem. Results obtained with various methods including the algebraic reconstruction technique (ART) and total variation regularized reconstruction techniques (ART+TV) are presented. Reconstruction results of the methods are compared both visually and quantitatively by evaluating the performance of the methods using mean structural similarity (MSSIM) values.

  6. An Object-Oriented Simulator for 3D Digital Breast Tomosynthesis Imaging System

    PubMed Central

    Cengiz, Kubra

    2013-01-01

    Digital breast tomosynthesis (DBT) is an innovative imaging modality that provides 3D reconstructed images of the breast for breast cancer detection. Projections obtained with an X-ray source moving over a limited angular interval are used to reconstruct a 3D image of the breast. Several reconstruction algorithms are available for DBT imaging. The filtered back projection algorithm has traditionally been used to reconstruct images from projections. Iterative reconstruction algorithms such as the algebraic reconstruction technique (ART) were later developed. Recently, compressed sensing based methods have been proposed for the tomosynthesis imaging problem. We have developed an object-oriented simulator for a 3D digital breast tomosynthesis (DBT) imaging system using the C++ programming language. The simulator is capable of implementing different iterative and compressed sensing based reconstruction methods on 3D digital tomosynthesis data sets and phantom models. A user-friendly graphical user interface (GUI) helps users to select and run the desired methods on the designed phantom models or real data sets. The simulator has been tested on a phantom study that simulates the breast tomosynthesis imaging problem. Results obtained with various methods including the algebraic reconstruction technique (ART) and total variation regularized reconstruction techniques (ART+TV) are presented. Reconstruction results of the methods are compared both visually and quantitatively by evaluating the performance of the methods using mean structural similarity (MSSIM) values. PMID:24371468

  7. Efficient reconstruction method for ground layer adaptive optics with mixed natural and laser guide stars.

    PubMed

    Wagner, Roland; Helin, Tapio; Obereder, Andreas; Ramlau, Ronny

    2016-02-20

    The imaging quality of modern ground-based telescopes such as the planned European Extremely Large Telescope is affected by atmospheric turbulence. In consequence, they heavily depend on stable and high-performance adaptive optics (AO) systems. Using measurements of incoming light from guide stars, an AO system compensates for the effects of turbulence by adjusting so-called deformable mirror(s) (DMs) in real time. In this paper, we introduce a novel reconstruction method for ground layer adaptive optics. In the literature, a common approach to this problem is to use Bayesian inference in order to model the specific noise structure appearing due to spot elongation. This approach leads to large coupled systems with high computational effort. Recently, fast solvers of linear order, i.e., with computational complexity O(n), where n is the number of DM actuators, have emerged. However, the quality of such methods typically degrades in low flux conditions. Our key contribution is to achieve the high quality of the standard Bayesian approach while at the same time maintaining the linear order speed of the recent solvers. Our method is based on performing a separate preprocessing step before applying the cumulative reconstructor (CuReD). The efficiency and performance of the new reconstructor are demonstrated using the OCTOPUS, the official end-to-end simulation environment of the ESO for extremely large telescopes. For more specific simulations we also use the MOST toolbox.

  8. Poisson denoising on the sphere

    NASA Astrophysics Data System (ADS)

    Schmitt, J.; Starck, J. L.; Fadili, J.; Grenier, I.; Casandjian, J. M.

    2009-08-01

    In the scope of the Fermi mission, Poisson noise removal should improve data quality and make source detection easier. This paper presents a method for Poisson data denoising on the sphere, called the Multi-Scale Variance Stabilizing Transform on the Sphere (MS-VSTS). This method is based on a variance stabilizing transform (VST), a transform which aims to stabilize a Poisson data set such that each stabilized sample has an (asymptotically) constant variance. In addition, for the VST used in the method, the transformed data are asymptotically Gaussian. Thus, MS-VSTS consists of decomposing the data into a sparse multi-scale dictionary (wavelets, curvelets, ridgelets, ...), and then applying a VST to the coefficients in order to obtain quasi-Gaussian stabilized coefficients. In the present article, the multi-scale transform used is the Isotropic Undecimated Wavelet Transform. Hypothesis tests are then made to detect significant coefficients, and the denoised image is reconstructed with an iterative method based on Hybrid Steepest Descent (HST). The method is tested on simulated Fermi data.
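
    The variance-stabilization idea can be illustrated with the classical Anscombe transform, a standard VST for Poisson data (the paper applies a VST to multi-scale coefficients on the sphere, which is not reproduced here). After stabilization the noise behaves approximately like unit-variance Gaussian noise, so a Gaussian denoiser can be applied before inverting the transform; the 1-D signal and moving-average denoiser below are purely illustrative.

        import numpy as np

        def anscombe(counts):
            """Variance-stabilizing transform for Poisson data: the output has
            approximately unit variance for moderate count levels."""
            return 2.0 * np.sqrt(counts + 3.0 / 8.0)

        def inverse_anscombe(values):
            """Simple algebraic inverse (more refined unbiased inverses exist)."""
            return (values / 2.0) ** 2 - 3.0 / 8.0

        rng = np.random.default_rng(0)
        t = np.arange(512)
        intensity = 2.0 + 20.0 * np.exp(-0.5 * ((t - 256) / 40.0) ** 2)   # smooth "source" on a flat background
        counts = rng.poisson(intensity).astype(float)

        stabilized = anscombe(counts)
        kernel = np.ones(15) / 15.0                     # moving average stands in for wavelet thresholding
        denoised = inverse_anscombe(np.convolve(stabilized, kernel, mode='same'))
        print("noisy RMS error:   ", round(float(np.std(counts - intensity)), 2))
        print("denoised RMS error:", round(float(np.std(denoised - intensity)), 2))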

  9. Establishing mesh topology in multi-material cells: enabling technology for robust and accurate multi-material simulations

    DOE PAGES

    Kikinzon, Evgeny; Shashkov, Mikhail Jurievich; Garimella, Rao Veerabhadra

    2018-05-29

    Real world problems are typically multi-material, combining materials such as gases, liquids and solids that have very different properties. The material interfaces may be fixed in time or can be a part of the solution, as in fluid-structure interactions or air-water dynamics, and therefore move and change shape. In such problems the computational mesh may be non-conformal to interfaces due to complexity of these interfaces, presence of small fractions of materials, or because the mesh does not move with the flow, as in the arbitrary Lagrangian–Eulerian (ALE) methods. In order to solve problems of interest on such meshes, interface reconstruction methods are usually used to recover an approximation of material regions within the cells. For a cell intersecting multiple material regions, these approximations of contained subregions can be considered as single-material subcells in a local mesh that we call a minimesh. In this paper, we discuss some of the requirements that discretization methods have on topological information in the resulting hierarchical meshes and present an approach that allows incorporating the buildup of sufficiently detailed topology into the nested dissections based PLIC-type reconstruction algorithms (e.g. Volume-of-Fluid, Moment-of-Fluid) in an efficient and robust manner. Specifically, we describe the X-MOF interface reconstruction algorithm in 2D, which extends the Moment-Of-Fluid (MOF) method to include the topology of minimeshes created inside of multi-material cells and parent-child relations between corresponding mesh entities on different hierarchy levels. X-MOF retains the property of being local to a cell and not requiring external communication, which makes it suitable for massively parallel applications. Here, we demonstrate some scaling results for the X-MOF implementation in Tangram, a modern interface reconstruction framework for exascale computing.

  10. Establishing mesh topology in multi-material cells: enabling technology for robust and accurate multi-material simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kikinzon, Evgeny; Shashkov, Mikhail Jurievich; Garimella, Rao Veerabhadra

    Real world problems are typically multi-material, combining materials such as gases, liquids and solids that have very different properties. The material interfaces may be fixed in time or can be a part of the solution, as in fluid-structure interactions or air-water dynamics, and therefore move and change shape. In such problems the computational mesh may be non-conformal to interfaces due to complexity of these interfaces, presence of small fractions of materials, or because the mesh does not move with the flow, as in the arbitrary Lagrangian–Eulerian (ALE) methods. In order to solve problems of interest on such meshes, interface reconstruction methods are usually used to recover an approximation of material regions within the cells. For a cell intersecting multiple material regions, these approximations of contained subregions can be considered as single-material subcells in a local mesh that we call a minimesh. In this paper, we discuss some of the requirements that discretization methods have on topological information in the resulting hierarchical meshes and present an approach that allows incorporating the buildup of sufficiently detailed topology into the nested dissections based PLIC-type reconstruction algorithms (e.g. Volume-of-Fluid, Moment-of-Fluid) in an efficient and robust manner. Specifically, we describe the X-MOF interface reconstruction algorithm in 2D, which extends the Moment-Of-Fluid (MOF) method to include the topology of minimeshes created inside of multi-material cells and parent-child relations between corresponding mesh entities on different hierarchy levels. X-MOF retains the property of being local to a cell and not requiring external communication, which makes it suitable for massively parallel applications. Here, we demonstrate some scaling results for the X-MOF implementation in Tangram, a modern interface reconstruction framework for exascale computing.

  11. Bag-of-features based medical image retrieval via multiple assignment and visual words weighting.

    PubMed

    Wang, Jingyan; Li, Yongping; Zhang, Ying; Wang, Chao; Xie, Honglan; Chen, Guoling; Gao, Xin

    2011-11-01

    Bag-of-features based approaches have become prominent for image retrieval and image classification tasks in the past decade. Such methods represent an image as a collection of local features, such as image patches and key points with scale invariant feature transform (SIFT) descriptors. To improve the bag-of-features methods, we first model the assignments of local descriptors as contribution functions, and then propose a novel multiple assignment strategy. Assuming the local features can be reconstructed by their neighboring visual words in a vocabulary, reconstruction weights can be solved by quadratic programming. The weights are then used to build contribution functions, resulting in a novel assignment method, called quadratic programming (QP) assignment. We further propose a novel visual word weighting method. The discriminative power of each visual word is analyzed by the sub-similarity function in the bin that corresponds to the visual word. Each sub-similarity function is then treated as a weak classifier. A strong classifier is learned by boosting methods that combine those weak classifiers. The weighting factors of the visual words are learned accordingly. We evaluate the proposed methods on medical image retrieval tasks. The methods are tested on three well-known data sets, i.e., the ImageCLEFmed data set, the 304 CT Set, and the basal-cell carcinoma image set. Experimental results demonstrate that the proposed QP assignment outperforms the traditional nearest neighbor assignment, the multiple assignment, and the soft assignment, whereas the proposed boosting based weighting strategy outperforms the state-of-the-art weighting methods, such as the term frequency weights and the term frequency-inverse document frequency weights.
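
    The reconstruction-based assignment step can be sketched as a small constrained least-squares problem per descriptor: find weights over the k nearest visual words that best reconstruct the descriptor subject to the weights summing to one (here solved in closed form, a common simplification of the quadratic program described above). The vocabulary size, descriptor dimension and neighbourhood size are arbitrary illustrative values.

        import numpy as np

        def reconstruction_weights(descriptor, vocabulary, k=5, reg=1e-4):
            """Assign a descriptor to its k nearest visual words with weights that
            best reconstruct it under a sum-to-one constraint."""
            dists = np.linalg.norm(vocabulary - descriptor, axis=1)
            nearest = np.argsort(dists)[:k]
            centred = vocabulary[nearest] - descriptor         # shift words onto the descriptor
            gram = centred @ centred.T + reg * np.eye(k)       # regularised local Gram matrix
            w = np.linalg.solve(gram, np.ones(k))
            w /= w.sum()                                       # enforce sum(w) == 1
            assignment = np.zeros(len(vocabulary))
            assignment[nearest] = w                            # soft, multiple assignment
            return assignment

        rng = np.random.default_rng(0)
        vocabulary = rng.standard_normal((200, 128))           # 200 visual words, SIFT-length descriptors
        descriptor = rng.standard_normal(128)
        hist_contrib = reconstruction_weights(descriptor, vocabulary)
        print(np.round(hist_contrib[hist_contrib != 0], 3))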

  12. Integration of TomoPy and the ASTRA toolbox for advanced processing and reconstruction of tomographic synchrotron data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pelt, Daniël M.; Gürsoy, Dogˇa; Palenstijn, Willem Jan

    2016-04-28

    The processing of tomographic synchrotron data requires advanced and efficient software to be able to produce accurate results in reasonable time. In this paper, the integration of two software toolboxes, TomoPy and the ASTRA toolbox, which, together, provide a powerful framework for processing tomographic data, is presented. The integration combines the advantages of both toolboxes, such as the user-friendliness and CPU-efficient methods of TomoPy and the flexibility and optimized GPU-based reconstruction methods of the ASTRA toolbox. It is shown that both toolboxes can be easily installed and used together, requiring only minor changes to existing TomoPy scripts. Furthermore, it is shown that the efficient GPU-based reconstruction methods of the ASTRA toolbox can significantly decrease the time needed to reconstruct large datasets, and that advanced reconstruction methods can improve reconstruction quality compared with TomoPy's standard reconstruction method.

  13. Integration of TomoPy and the ASTRA toolbox for advanced processing and reconstruction of tomographic synchrotron data

    PubMed Central

    Pelt, Daniël M.; Gürsoy, Doǧa; Palenstijn, Willem Jan; Sijbers, Jan; De Carlo, Francesco; Batenburg, Kees Joost

    2016-01-01

    The processing of tomographic synchrotron data requires advanced and efficient software to be able to produce accurate results in reasonable time. In this paper, the integration of two software toolboxes, TomoPy and the ASTRA toolbox, which, together, provide a powerful framework for processing tomographic data, is presented. The integration combines the advantages of both toolboxes, such as the user-friendliness and CPU-efficient methods of TomoPy and the flexibility and optimized GPU-based reconstruction methods of the ASTRA toolbox. It is shown that both toolboxes can be easily installed and used together, requiring only minor changes to existing TomoPy scripts. Furthermore, it is shown that the efficient GPU-based reconstruction methods of the ASTRA toolbox can significantly decrease the time needed to reconstruct large datasets, and that advanced reconstruction methods can improve reconstruction quality compared with TomoPy’s standard reconstruction method. PMID:27140167
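
    A minimal usage sketch of the integration described above is shown below, following the pattern documented for the TomoPy/ASTRA wrapper; it assumes both packages are installed with CUDA support, and option names such as 'SIRT_CUDA' may differ between versions, so treat it as indicative rather than exact.

        import tomopy

        obj = tomopy.shepp3d(size=128)                    # synthetic 3D phantom
        theta = tomopy.angles(180)                        # projection angles
        proj = tomopy.project(obj, theta)                 # simulated projection data

        # TomoPy's own CPU-based reconstruction:
        rec_gridrec = tomopy.recon(proj, theta, algorithm='gridrec')

        # GPU-based iterative reconstruction through the ASTRA toolbox wrapper:
        options = {'proj_type': 'cuda', 'method': 'SIRT_CUDA', 'num_iter': 200}
        rec_sirt = tomopy.recon(proj, theta, algorithm=tomopy.astra, options=options)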

  14. Volumetric MRI of the lungs during forced expiration.

    PubMed

    Berman, Benjamin P; Pandey, Abhishek; Li, Zhitao; Jeffries, Lindsie; Trouard, Theodore P; Oliva, Isabel; Cortopassi, Felipe; Martin, Diego R; Altbach, Maria I; Bilgin, Ali

    2016-06-01

    Lung function is typically characterized by spirometer measurements, which do not offer spatially specific information. Imaging during exhalation provides spatial information but is challenging due to large movement over a short time. The purpose of this work is to provide a solution to lung imaging during forced expiration using accelerated magnetic resonance imaging. The method uses radial golden angle stack-of-stars gradient echo acquisition and compressed sensing reconstruction. A technique for dynamic three-dimensional imaging of the lungs from highly undersampled data is developed and tested on six subjects. This method takes advantage of image sparsity, both spatially and temporally, including the use of reference frames called bookends. Sparsity with respect to total variation, and the residual from the bookends, enable reconstruction from an extremely limited amount of data. Dynamic three-dimensional images can be captured at sub-150 ms temporal resolution, using only three (or fewer) acquired radial lines per slice per timepoint. The images have a spatial resolution of 4.6×4.6×10 mm. Lung volume calculations based on image segmentation are compared to those from simultaneously acquired spirometer measurements. Dynamic lung imaging during forced expiration is made possible by compressed sensing accelerated dynamic three-dimensional radial magnetic resonance imaging. Magn Reson Med 75:2295-2302, 2016. © 2015 Wiley Periodicals, Inc.

  15. Enhanced analysis of real-time PCR data by using a variable efficiency model: FPK-PCR

    PubMed Central

    Lievens, Antoon; Van Aelst, S.; Van den Bulcke, M.; Goetghebeur, E.

    2012-01-01

    Current methodology in real-time Polymerase chain reaction (PCR) analysis performs well provided PCR efficiency remains constant over reactions. Yet, small changes in efficiency can lead to large quantification errors. Particularly in biological samples, the possible presence of inhibitors forms a challenge. We present a new approach to single reaction efficiency calculation, called Full Process Kinetics-PCR (FPK-PCR). It combines a kinetically more realistic model with flexible adaptation to the full range of data. By reconstructing the entire chain of cycle efficiencies, rather than restricting the focus on a ‘window of application’, one extracts additional information and loses a level of arbitrariness. The maximal efficiency estimates returned by the model are comparable in accuracy and precision to both the golden standard of serial dilution and other single reaction efficiency methods. The cycle-to-cycle changes in efficiency, as described by the FPK-PCR procedure, stay considerably closer to the data than those from other S-shaped models. The assessment of individual cycle efficiencies returns more information than other single efficiency methods. It allows in-depth interpretation of real-time PCR data and reconstruction of the fluorescence data, providing quality control. Finally, by implementing a global efficiency model, reproducibility is improved as the selection of a window of application is avoided. PMID:22102586

  16. Modeling repetitive motions using structured light.

    PubMed

    Xu, Yi; Aliaga, Daniel G

    2010-01-01

    Obtaining models of dynamic 3D objects is an important part of content generation for computer graphics. Numerous methods have been extended from static scenarios to model dynamic scenes. If the states or poses of the dynamic object repeat often during a sequence (but not necessarily periodically), we call such a motion a repetitive motion. There are many objects, such as toys, machines, and humans, that undergo repetitive motions. Our key observation is that when a motion state repeats, we can sample the scene under the same motion state again but using a different set of parameters, thus providing more information about each motion state. This enables dense 3D information, which is otherwise difficult to acquire for objects in motion, to be captured robustly using only simple hardware. After the motion sequence, we group temporally disjoint observations of the same motion state together and produce a smooth space-time reconstruction of the scene. Effectively, the dynamic scene modeling problem is converted to a series of static scene reconstructions, which are easier to tackle. The varying sampling parameters can be, for example, structured-light patterns, illumination directions, and viewpoints, resulting in different modeling techniques. Based on this observation, we present an image-based motion-state framework and demonstrate our paradigm using either a synchronized or an unsynchronized structured-light acquisition method.

  17. Inference of Ancestral Recombination Graphs through Topological Data Analysis

    PubMed Central

    Cámara, Pablo G.; Levine, Arnold J.; Rabadán, Raúl

    2016-01-01

    The recent explosion of genomic data has underscored the need for interpretable and comprehensive analyses that can capture complex phylogenetic relationships within and across species. Recombination, reassortment and horizontal gene transfer constitute examples of pervasive biological phenomena that cannot be captured by tree-like representations. Starting from hundreds of genomes, we are interested in the reconstruction of potential evolutionary histories leading to the observed data. Ancestral recombination graphs represent potential histories that explicitly accommodate recombination and mutation events across orthologous genomes. However, they are computationally costly to reconstruct, usually being infeasible for more than few tens of genomes. Recently, Topological Data Analysis (TDA) methods have been proposed as robust and scalable methods that can capture the genetic scale and frequency of recombination. We build upon previous TDA developments for detecting and quantifying recombination, and present a novel framework that can be applied to hundreds of genomes and can be interpreted in terms of minimal histories of mutation and recombination events, quantifying the scales and identifying the genomic locations of recombinations. We implement this framework in a software package, called TARGet, and apply it to several examples, including small migration between different populations, human recombination, and horizontal evolution in finches inhabiting the Galápagos Islands. PMID:27532298

  18. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liu, Yang; Li, Si-Yu; Li, Yong-Ping

    The study of reionization history plays an important role in understanding the evolution of our universe. It is commonly believed that the intergalactic medium (IGM) in our universe is fully ionized today; however, the reionization process remains mysterious. A simple instantaneous reionization process is usually adopted in modern cosmology without direct observational evidence. However, the history of the ionization fraction, x_e(z), will influence CMB observables and constraints on the optical depth τ. With mock future data sets based on a featured reionization model, we find that the bias on τ introduced by the instantaneous model cannot be neglected. In this paper, we study the cosmic reionization history in a model-independent way, the so-called principal component analysis (PCA) method, and reconstruct x_e(z) at different redshifts z with the data sets of Planck and WMAP 9-year temperature and polarization power spectra, combined with baryon acoustic oscillation (BAO) measurements from galaxy surveys and the type Ia supernovae (SN) Union 2.1 sample, respectively. The results show that the reconstructed x_e(z) is consistent with instantaneous behavior; however, there exists a slight deviation from this behavior at some epochs. With the PCA method, after abandoning the noisy modes, we obtain stronger constraints, and the hints for featured x_e(z) evolution could become a little more obvious.

  19. Interior reconstruction method based on rotation-translation scanning model.

    PubMed

    Wang, Xianchao; Tang, Ziyue; Yan, Bin; Li, Lei; Bao, Shanglian

    2014-01-01

    In various applications of computed tomography (CT), it is common that the reconstructed object extends beyond the field of view (FOV), or we may intend to use a FOV which only covers the region of interest (ROI) for the sake of reducing radiation dose. These kinds of imaging situations often lead to interior reconstruction problems, which are difficult cases in CT reconstruction due to the truncated projection data at every view angle. In this paper, an interior reconstruction method is developed based on a rotation-translation (RT) scanning model. The method is implemented by first scanning the reconstructed region, and then scanning a small region outside the support of the reconstructed object after translating the rotation centre. The differentiated backprojection (DBP) images of the reconstruction region and of the small region outside the object can be obtained from the two scanning passes without a data rebinning process. Finally, the projection onto convex sets (POCS) algorithm is applied to reconstruct the interior region. Numerical simulations are conducted to validate the proposed reconstruction method.

  20. Integration of Component Knowledge in Penalized-Likelihood Reconstruction with Morphological and Spectral Uncertainties.

    PubMed

    Stayman, J Webster; Tilley, Steven; Siewerdsen, Jeffrey H

    2014-01-01

    Previous investigations [1-3] have demonstrated that integrating specific knowledge of the structure and composition of components like surgical implants, devices, and tools into a model-based reconstruction framework can improve image quality and allow for potential exposure reductions in CT. Using device knowledge in practice is complicated by uncertainties in the exact shape of components and their particular material composition. Such unknowns in the morphology and attenuation properties lead to errors in the forward model that limit the utility of component integration. In this work, a methodology is presented to accommodate both uncertainties in shape as well as unknown energy-dependent attenuation properties of the surgical devices. This work leverages the so-called known-component reconstruction (KCR) framework [1] with a generalized deformable registration operator and modifications to accommodate a spectral transfer function in the component model. Moreover, since this framework decomposes the object into separate background anatomy and "known" component factors, a mixed fidelity forward model can be adopted so that measurements associated with projections through the surgical devices can be modeled with much greater accuracy. A deformable KCR (dKCR) approach using the mixed fidelity model is introduced and applied to a flexible wire component with unknown structure and composition. Image quality advantages of dKCR over traditional reconstruction methods are illustrated in cone-beam CT (CBCT) data acquired on a testbench emulating a 3D-guided needle biopsy procedure - i.e., a deformable component (needle) with strong energy-dependent attenuation characteristics (steel) within a complex soft-tissue background.

  1. SF3M software: 3-D photo-reconstruction for non-expert users and its application to a gully network

    NASA Astrophysics Data System (ADS)

    Castillo, C.; James, M. R.; Redel-Macías, M. D.; Pérez, R.; Gómez, J. A.

    2015-08-01

    Three-dimensional photo-reconstruction (PR) techniques have been successfully used to produce high-resolution surface models for different applications and over different spatial scales. However, innovative approaches are required to overcome some limitations that this technique may present for field image acquisition in challenging scene geometries. Here, we evaluate SF3M, a new graphical user interface for implementing a complete PR workflow based on freely available software (including external calls to VisualSFM and CloudCompare), in combination with a low-cost survey design for the reconstruction of a several-hundred-metres-long gully network. SF3M provided a semi-automated workflow for 3-D reconstruction requiring ~ 49 h (of which only 17 % required operator assistance) for obtaining a final gully network model of > 17 million points over a gully plan area of 4230 m2. We show that a walking itinerary along the gully perimeter using two lightweight automatic cameras (1 s time-lapse mode) and a 6 m long pole is an efficient method for 3-D monitoring of gullies, at low cost (~ EUR 1000 for the field equipment) and with modest time requirements (~ 90 min for image collection). A mean error of 6.9 cm at the ground control points was found, mainly due to model deformations derived from the linear geometry of the gully and residual errors in camera calibration. The straightforward image collection and processing approach can be of great benefit for non-expert users working on gully erosion assessment.

  2. Forensic Facial Reconstruction: The Final Frontier.

    PubMed

    Gupta, Sonia; Gupta, Vineeta; Vij, Hitesh; Vij, Ruchieka; Tyagi, Nutan

    2015-09-01

    Forensic facial reconstruction can be used to identify unknown human remains when other techniques fail. Through this article, we attempt to review the different methods of facial reconstruction reported in literature. There are several techniques of doing facial reconstruction, which vary from two dimensional drawings to three dimensional clay models. With the advancement in 3D technology, a rapid, efficient and cost effective computerized 3D forensic facial reconstruction method has been developed which has brought down the degree of error previously encountered. There are several methods of manual facial reconstruction but the combination Manchester method has been reported to be the best and most accurate method for the positive recognition of an individual. Recognition allows the involved government agencies to make a list of suspected victims. This list can then be narrowed down and a positive identification may be given by the more conventional method of forensic medicine. Facial reconstruction makes visual identification by the individual's family and associates easier and more definite.

  3. Extending the FairRoot framework to allow for simulation and reconstruction of free streaming data

    NASA Astrophysics Data System (ADS)

    Al-Turany, M.; Klein, D.; Manafov, A.; Rybalchenko, A.; Uhlig, F.

    2014-06-01

    The FairRoot framework is the standard framework for simulation, reconstruction and data analysis for the FAIR experiments. The framework is designed to optimise the accessibility for beginners and developers, to be flexible and to cope with future developments. FairRoot enhances the synergy between the different physics experiments. As a first step toward simulation of free streaming data, the time based simulation was introduced to the framework. The next step is the event source simulation. This is achieved via a client-server system. After digitization, the so-called "samplers" can be started; each sampler reads the data of the corresponding detector from the simulation files and makes it available to the reconstruction clients. The system makes it possible to develop and validate the online reconstruction algorithms. In this work, the design and implementation of the new architecture and the communication layer will be described.

  4. Step-by-Step Technique for Segmental Reconstruction of Reverse Hill-Sachs Lesions Using Homologous Osteochondral Allograft.

    PubMed

    Alkaduhimi, Hassanin; van den Bekerom, Michel P J; van Deurzen, Derek F P

    2017-06-01

    Posterior shoulder dislocations are accompanied by high forces and can result in an anteromedial humeral head impression fracture called a reverse Hill-Sachs lesion. This reverse Hill-Sachs lesion can result in serious complications including posttraumatic osteoarthritis, posterior dislocations, osteonecrosis, persistent joint stiffness, and loss of shoulder function. Treatment is challenging and depends on the amount of bone loss. Several techniques have been reported to describe the surgical treatment of lesions larger than 20%. However, there is still limited evidence with regard to the optimal procedure. Favorable results have been reported by performing segmental reconstruction of the reverse Hill-Sachs lesion with bone allograft. Although the procedure of segmental reconstruction has been used in several studies, its technique has not yet been well described in detail. In this report we provide a step-by-step description of how to perform a segmental reconstruction of a reverse Hill-Sachs defect.

  5. Future Research Challenges for a Computer-Based Interpretative 3D Reconstruction of Cultural Heritage - A German Community's View

    NASA Astrophysics Data System (ADS)

    Münster, S.; Kuroczyński, P.; Pfarr-Harfst, M.; Grellert, M.; Lengyel, D.

    2015-08-01

    The workgroup for Digital Reconstruction of the Digital Humanities in the German-speaking area association (Digital Humanities im deutschsprachigen Raum e.V.) was founded in 2014 as a cross-disciplinary scientific society dealing with all aspects of digital reconstruction of cultural heritage and currently involves more than 40 German researchers. Moreover, the workgroup is dedicated to synchronising and fostering methodological research on these topics. As one preliminary result, a memorandum was created to name urgent research challenges and prospects in a condensed way and to assemble a research agenda that could propose demands for further research and development activities within the next few years. The version presented within this paper was originally created as a contribution to the so-called agenda development process initiated by the German Federal Ministry of Education and Research (BMBF) in 2014 and has been amended during a joint meeting of the digital reconstruction workgroup in November 2014.

  6. Quantitative reconstruction of refractive index distribution and imaging of glucose concentration by using diffusing light.

    PubMed

    Liang, Xiaoping; Zhang, Qizhi; Jiang, Huabei

    2006-11-10

    We show that a two-step reconstruction method can be adapted to improve the quantitative accuracy of the refractive index reconstruction in phase-contrast diffuse optical tomography (PCDOT). We also describe the possibility of imaging tissue glucose concentration with PCDOT. In this two-step method, we first use our existing finite-element reconstruction algorithm to recover the position and shape of a target. We then use the position and size of the target as a priori information to reconstruct a single value of the refractive index within the target and background regions using a region reconstruction method. Due to the extremely low contrast available in the refractive index reconstruction, we incorporate a data normalization scheme into the two-step reconstruction to combat the associated low signal-to-noise ratio. Through a series of phantom experiments we find that this two-step reconstruction method can considerably improve the quantitative accuracy of the refractive index reconstruction. The results show that the relative error of the reconstructed refractive index is reduced from 20% to within 1.5%. We also demonstrate the possibility of PCDOT for recovering glucose concentration using these phantom experiments.

  7. SU-D-206-03: Segmentation Assisted Fast Iterative Reconstruction Method for Cone-Beam CT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wu, P; Mao, T; Gong, S

    2016-06-15

    Purpose: Total Variation (TV) based iterative reconstruction (IR) methods enable accurate CT image reconstruction from low-dose measurements with sparse projection acquisition, because most CT images are sparsifiable under the gradient operator. However, conventional solutions require a large number of iterations to generate a decent reconstructed image. One major reason is that the expected piecewise constant property is not taken into consideration at the optimization starting point. In this work, we propose an iterative reconstruction method for cone-beam CT (CBCT) using image segmentation to guide the optimization path more efficiently on the regularization term at the beginning of the optimization trajectory. Methods: Our method applies the general knowledge that one tissue component in the CT image contains a relatively uniform distribution of CT number. This general knowledge is incorporated into the proposed reconstruction using an image segmentation technique to generate a piecewise constant template from the first-pass low-quality CT image reconstructed with an analytical algorithm. The template image is applied as an initial value in the optimization process. Results: The proposed method is evaluated on the Shepp-Logan phantom at low and high noise levels, and on a head patient. The number of iterations is reduced by approximately 40% overall. Moreover, our proposed method tends to generate a smoother reconstructed image with the same TV value. Conclusion: We propose a computationally efficient iterative reconstruction method for CBCT imaging. Our method achieves a better optimization trajectory and a faster convergence behavior. It does not rely on prior information and can be readily incorporated into existing iterative reconstruction frameworks. Our method is thus practical and attractive as a general solution to CBCT iterative reconstruction. This work is supported by the Zhejiang Provincial Natural Science Foundation of China (Grant No. LR16F010001) and the National High-tech R&D Program for Young Scientists by the Ministry of Science and Technology of China (Grant No. 2015AA020917).
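
    A minimal sketch of the initialization idea described above, assuming the first-pass analytical reconstruction is already available as a NumPy array: simple threshold-based segmentation groups voxels into tissue classes and replaces each class by its mean, and the resulting piecewise-constant template is handed to the iterative solver as the starting image. The three-class thresholds and the toy first_pass image are illustrative assumptions, not values from the abstract.

      import numpy as np

      def piecewise_constant_template(first_pass, thresholds=(0.2, 0.8)):
          """Build a piecewise-constant initial image from a first-pass reconstruction.

          Voxels are grouped by simple thresholding (air / soft tissue / bone in this
          toy example) and each class is replaced by its mean value.
          """
          template = np.empty_like(first_pass)
          lo, hi = thresholds
          classes = [first_pass < lo,
                     (first_pass >= lo) & (first_pass < hi),
                     first_pass >= hi]
          for mask in classes:
              if mask.any():
                  template[mask] = first_pass[mask].mean()
          return template

      # toy usage: a noisy 64x64 "first-pass" image serves as input
      rng = np.random.default_rng(0)
      first_pass = np.clip(rng.normal(0.5, 0.3, size=(64, 64)), 0.0, 1.5)
      x0 = piecewise_constant_template(first_pass)   # initial value for the TV-based solver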

  8. Forward model with space-variant of source size for reconstruction on X-ray radiographic image

    NASA Astrophysics Data System (ADS)

    Liu, Jin; Liu, Jun; Jing, Yue-feng; Xiao, Bo; Wei, Cai-hua; Guan, Yong-hong; Zhang, Xuan

    2018-03-01

    The Forward Imaging Technique is a method to solve the inverse problem of density reconstruction in radiographic imaging. In this paper, we introduce the forward projection equation (IFP model) for the radiographic system with areal source blur and detector blur. Our forward projection equation, based on X-ray tracing, is combined with the Constrained Conjugate Gradient method to form a new method for density reconstruction. We demonstrate the effectiveness of the new technique by reconstructing density distributions from simulated and experimental images. We show that for radiographic systems with source sizes larger than the pixel size, the effect of blur on the density reconstruction is reduced through our method and can be controlled within one or two pixels. The method is also suitable for reconstruction of non-homogeneous objects.

  9. Impact of prehistoric cooking practices on paleoenvironmental proxies in shell midden constituents

    NASA Astrophysics Data System (ADS)

    Müller, Peter; Staudigel, Philip; Murray, Sean T.; Westphal, Hildegard; Swart, Peter K.

    2016-04-01

    Paleoenvironmental proxy records such as oxygen isotopes of calcareous skeletal structures like fish otoliths or mollusk shells provide the highest-resolution information about environmental conditions experienced by the organism. Accumulations of such skeletal structures by ancient coastal populations in so-called "shell midden" deposits provide us with sub-seasonally resolved paleoclimate records covering time spans up to several millennia. Given their high temporal resolution, these deposits are increasingly used for paleoclimate reconstructions and complement our understanding of ancient climate changes. However, gathered as comestibles, most of these skeletal remains were subject to prehistoric cooking methods prior to deposition. The associated alteration of the chemical proxy signatures as well as the subsequent error for paleoenvironmental reconstructions have so far remained almost entirely neglected. Here, we present clumped isotope, conventional oxygen and carbon isotopes as well as element:Ca ratios measured in modern bivalve shells after exposing them to different prehistoric cooking methods. Our data show that most cooking methods considerably alter commonly used paleoclimate proxy systems, which can lead to substantial misinterpretations of ancient climate conditions. Since the magnitude of chemical alteration is not distinguishable from natural temperature variability in most coastal settings, the alteration of shell midden constituents by prehistoric cooking likely remains unnoticed in most cases. Thus, depending on the cooking method, pre-depositional heating might have introduced considerable errors into previous paleoclimate studies. However, our data also show that clumped isotope thermometry represents a suitable diagnostic tool to detect such pre-depositional cooking events and also allows differentiating between the most commonly applied prehistoric cooking methods.

  10. Energy-efficient ECG compression on wireless biosensors via minimal coherence sensing and weighted ℓ₁ minimization reconstruction.

    PubMed

    Zhang, Jun; Gu, Zhenghui; Yu, Zhu Liang; Li, Yuanqing

    2015-03-01

    Low energy consumption is crucial for body area networks (BANs). In BAN-enabled ECG monitoring, continuous monitoring requires the sensor nodes to transmit a large amount of data to the sink node, which leads to excessive energy consumption. To reduce airtime over energy-hungry wireless links, this paper presents an energy-efficient compressed sensing (CS)-based approach for on-node ECG compression. First, an algorithm called minimal mutual coherence pursuit is proposed to construct sparse binary measurement matrices, which can be used to encode the ECG signals with superior performance and extremely low complexity. Second, in order to minimize the data rate required for faithful reconstruction, a weighted ℓ1 minimization model is derived by exploring the multisource prior knowledge in the wavelet domain. Experimental results on the MIT-BIH arrhythmia database reveal that the proposed approach can achieve a higher compression ratio than the state-of-the-art CS-based methods. Together with its low encoding complexity, our approach can achieve significant energy saving in both the encoding process and wireless transmission.
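
    The reconstruction side of such a scheme can be sketched in a few lines, assuming a sparse binary sensing matrix and a weighted soft-thresholding (ISTA) iteration standing in for the paper's weighted ℓ1 solver; the weights, regularization strength, matrix density, and the omission of the wavelet transform are all illustrative assumptions rather than the authors' choices.

      import numpy as np

      def weighted_ista(Phi, y, weights, lam=0.05, step=None, n_iter=200):
          """Weighted l1 recovery:  min_x 0.5*||Phi x - y||^2 + lam * sum_i w_i |x_i|,
          solved with iterative soft thresholding (ISTA)."""
          if step is None:
              step = 1.0 / np.linalg.norm(Phi, 2) ** 2      # 1 / Lipschitz constant
          x = np.zeros(Phi.shape[1])
          for _ in range(n_iter):
              z = x - step * (Phi.T @ (Phi @ x - y))        # gradient step on the data term
              thr = step * lam * weights
              x = np.sign(z) * np.maximum(np.abs(z) - thr, 0.0)   # weighted soft threshold
          return x

      rng = np.random.default_rng(1)
      n, m, k = 256, 96, 10                                 # signal length, measurements, sparsity
      Phi = (rng.random((m, n)) < 0.05).astype(float)       # sparse binary measurement matrix
      x_true = np.zeros(n)
      x_true[rng.choice(n, k, replace=False)] = rng.normal(size=k)
      y = Phi @ x_true
      w = np.ones(n)      # prior knowledge would down-weight coefficients expected to be active
      x_hat = weighted_ista(Phi, y, w)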

  11. Reconstructing the Auditory Apparatus of Therapsids by Means of Neutron Tomography

    NASA Astrophysics Data System (ADS)

    Laaß, Michael; Schillinger, Burkhard

    The internal cranial structure of mammalian ancestors, i.e. the therapsids or "mammal-like reptiles", is crucial for understanding the early mammalian evolution. In the past therapsid skulls were investigated by mechanical sectioning or serial grinding, which was a very time-consuming and destructive process and could only be applied to non-valuable or poorly preserved specimens. As most therapsid skulls are embedded in terrestrial iron-rich sediments of Late Permian or Triassic age, i.e. so-called "Red beds", a successful investigation with X-Rays is often not possible. We successfully investigated therapsid skulls by means of neutron tomography at the facility ANTARES at FRM II in Munich using cold neutron radiation. This kind of radiation is able to penetrate iron-rich substances in the range between 5 and 15 cm and produces a good contrast between matrix and bones, which enables segmentation of internal cranial structures such as bones, cavities and canals of nerves and blood vessels. In particular, neutron tomography combined with methods of 3D modeling was used here for the investigation and reconstruction of the auditory apparatus of therapsids.

  12. Magnetic resonance spectroscopic imaging at superresolution: Overview and perspectives

    NASA Astrophysics Data System (ADS)

    Kasten, Jeffrey; Klauser, Antoine; Lazeyras, François; Van De Ville, Dimitri

    2016-02-01

    The notion of non-invasive, high-resolution spatial mapping of metabolite concentrations has long enticed the medical community. While magnetic resonance spectroscopic imaging (MRSI) is capable of achieving the requisite spatio-spectral localization, it has traditionally been encumbered by significant resolution constraints that have thus far undermined its clinical utility. To surpass these obstacles, research efforts have primarily focused on hardware enhancements or the development of accelerated acquisition strategies to improve the experimental sensitivity per unit time. Concomitantly, a number of innovative reconstruction techniques have emerged as alternatives to the standard inverse discrete Fourier transform (DFT). While perhaps lesser known, these latter methods strive to effect commensurate resolution gains by exploiting known properties of the underlying MRSI signal in concert with advanced image and signal processing techniques. This review article aims to aggregate and provide an overview of the past few decades of so-called "superresolution" MRSI reconstruction methodologies, and to introduce readers to current state-of-the-art approaches. A number of perspectives are then offered as to the future of high-resolution MRSI, with a particular focus on translation into clinical settings.

  13. Input reconstruction of chaos sensors.

    PubMed

    Yu, Dongchuan; Liu, Fang; Lai, Pik-Yin

    2008-06-01

    Although the sensitivity of sensors can be significantly enhanced using chaotic dynamics due to its extremely sensitive dependence on initial conditions and parameters, how to reconstruct the measured signal from the distorted sensor response becomes challenging. In this paper we suggest an effective method to reconstruct the measured signal from the distorted (chaotic) response of chaos sensors. This measurement signal reconstruction method applies neural network techniques for system structure identification and therefore does not require precise information about the sensor's dynamics. We also discuss how to improve the robustness of the reconstruction. Some examples are presented to illustrate the suggested measurement signal reconstruction method.

  14. WE-FG-207B-02: Material Reconstruction for Spectral Computed Tomography with Detector Response Function

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liu, J; Gao, H

    2016-06-15

    Purpose: Different from conventional computed tomography (CT), spectral CT based on energy-resolved photon-counting detectors is able to provide unprecedented material composition information. However, an important missing piece for accurate spectral CT is to incorporate the detector response function (DRF), which is distorted by factors such as pulse pileup and charge-sharing. In this work, we propose material reconstruction methods for spectral CT with DRF. Methods: The polyenergetic X-ray forward model takes the DRF into account for accurate material reconstruction. Two image reconstruction methods are proposed: a direct method based on the nonlinear data fidelity from the DRF-based forward model, and a linear-data-fidelity-based method that relies on spectral rebinning so that the corresponding DRF matrix is invertible. The image reconstruction problem is then regularized with an isotropic TV term and solved by the alternating direction method of multipliers. Results: The simulation results suggest that the proposed methods provided more accurate material compositions than the standard method without DRF. Moreover, the proposed method with linear data fidelity had improved reconstruction quality compared with the proposed method with nonlinear data fidelity. Conclusion: We have proposed material reconstruction methods for spectral CT with DRF, which provided more accurate material compositions than the standard methods without DRF. Moreover, the proposed method with linear data fidelity had improved reconstruction quality compared with the proposed method with nonlinear data fidelity. Jiulong Liu and Hao Gao were partially supported by the NSFC (#11405105), the 973 Program (#2015CB856000), and the Shanghai Pujiang Talent Program (#14PJ1404500).

  15. TreeShrink: fast and accurate detection of outlier long branches in collections of phylogenetic trees.

    PubMed

    Mai, Uyen; Mirarab, Siavash

    2018-05-08

    Sequence data used in reconstructing phylogenetic trees may include various sources of error. Typically errors are detected at the sequence level, but when missed, the erroneous sequences often appear as unexpectedly long branches in the inferred phylogeny. We propose an automatic method to detect such errors. We build a phylogeny including all the data then detect sequences that artificially inflate the tree diameter. We formulate an optimization problem, called the k-shrink problem, that seeks to find k leaves that could be removed to maximally reduce the tree diameter. We present an algorithm to find the exact solution for this problem in polynomial time. We then use several statistical tests to find outlier species that have an unexpectedly high impact on the tree diameter. These tests can use a single tree or a set of related gene trees and can also adjust to species-specific patterns of branch length. The resulting method is called TreeShrink. We test our method on six phylogenomic biological datasets and an HIV dataset and show that the method successfully detects and removes long branches. TreeShrink removes sequences more conservatively than rogue taxon removal and often reduces gene tree discordance more than rogue taxon removal once the amount of filtering is controlled. TreeShrink is an effective method for detecting sequences that lead to unrealistically long branch lengths in phylogenetic trees. The tool is publicly available at https://github.com/uym2/TreeShrink .
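
    The exact polynomial-time algorithm for the k-shrink problem is not spelled out in the abstract; the sketch below is only a greedy stand-in under the assumption that the tree is given as an adjacency map with branch lengths. It repeatedly removes the leaf whose removal most reduces the tree diameter, which illustrates the objective but is not the authors' algorithm.

      from collections import defaultdict

      def farthest(tree, start):
          """Return the (node, distance) pair farthest from start in a weighted tree."""
          best, stack, seen = (start, 0.0), [(start, 0.0)], {start}
          while stack:
              node, dist = stack.pop()
              if dist > best[1]:
                  best = (node, dist)
              for nbr, w in tree[node]:
                  if nbr not in seen:
                      seen.add(nbr)
                      stack.append((nbr, dist + w))
          return best

      def diameter(tree):
          a, _ = farthest(tree, next(iter(tree)))
          return farthest(tree, a)[1]

      def greedy_k_shrink(tree, k):
          """Greedily drop k leaves, each time choosing the leaf whose removal shrinks the diameter most."""
          tree = {n: list(e) for n, e in tree.items()}
          removed = []
          for _ in range(k):
              leaves = [n for n, e in tree.items() if len(e) == 1]
              best_leaf, best_diam = None, None
              for leaf in leaves:
                  pruned = {n: [(v, w) for v, w in e if v != leaf]
                            for n, e in tree.items() if n != leaf}
                  d = diameter(pruned)
                  if best_diam is None or d < best_diam:
                      best_leaf, best_diam = leaf, d
              removed.append(best_leaf)
              tree = {n: [(v, w) for v, w in e if v != best_leaf]
                      for n, e in tree.items() if n != best_leaf}
          return removed

      # toy 5-taxon tree with one suspiciously long terminal branch ("E")
      edges = [("A", "u", 1.0), ("B", "u", 1.0), ("u", "v", 1.0),
               ("C", "v", 1.0), ("v", "w", 1.0), ("D", "w", 1.0), ("E", "w", 9.0)]
      tree = defaultdict(list)
      for a, b, w in edges:
          tree[a].append((b, w))
          tree[b].append((a, w))
      print(greedy_k_shrink(dict(tree), k=1))   # -> ['E']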

  16. Resolving the morphology of niobium carbonitride nano-precipitates in steel using atom probe tomography.

    PubMed

    Breen, Andrew J; Xie, Kelvin Y; Moody, Michael P; Gault, Baptiste; Yen, Hung-Wei; Wong, Christopher C; Cairney, Julie M; Ringer, Simon P

    2014-08-01

    Atom probe is a powerful technique for studying the composition of nano-precipitates, but their morphology within the reconstructed data is distorted due to the so-called local magnification effect. A new technique has been developed to mitigate this limitation by characterizing the distribution of the surrounding matrix atoms, rather than those contained within the nano-precipitates themselves. A comprehensive chemical analysis enables further information on size and chemistry to be obtained. The method enables new insight into the morphology and chemistry of niobium carbonitride nano-precipitates within ferrite for a series of Nb-microalloyed ultra-thin cast strip steels. The results are supported by complementary high-resolution transmission electron microscopy.

  17. Strong Measurements Give a Better Direct Measurement of the Quantum Wave Function.

    PubMed

    Vallone, Giuseppe; Dequal, Daniele

    2016-01-29

    Weak measurements have thus far been considered instrumental in the so-called direct measurement of the quantum wave function [J. S. Lundeen, Nature (London) 474, 188 (2011)]. Here we show that a direct measurement of the wave function can be obtained by using measurements of arbitrary strength. In particular, in the case of strong measurements, i.e., those in which the coupling between the system and the measuring apparatus is maximum, we compare the precision and the accuracy of the two methods, showing that strong measurements outperform weak measurements in both, for arbitrary quantum states, in most cases. We also give the exact expression of the difference between the original and reconstructed wave function obtained by the weak measurement approach; this will allow one to define the range of applicability of such a method.

  18. The historical biogeography of Mammalia

    PubMed Central

    Springer, Mark S.; Meredith, Robert W.; Janecka, Jan E.; Murphy, William J.

    2011-01-01

    Palaeobiogeographic reconstructions are underpinned by phylogenies, divergence times and ancestral area reconstructions, which together yield ancestral area chronograms that provide a basis for proposing and testing hypotheses of dispersal and vicariance. Methods for area coding include multi-state coding with a single character, binary coding with multiple characters and string coding. Ancestral reconstruction methods are divided into parsimony versus Bayesian/likelihood approaches. We compared nine methods for reconstructing ancestral areas for placental mammals. Ambiguous reconstructions were a problem for all methods. Important differences resulted from coding areas based on the geographical ranges of extant species versus the geographical provenance of the oldest fossil for each lineage. Africa and South America were reconstructed as the ancestral areas for Afrotheria and Xenarthra, respectively. Most methods reconstructed Eurasia as the ancestral area for Boreoeutheria, Euarchontoglires and Laurasiatheria. The coincidence of molecular dates for the separation of Afrotheria and Xenarthra at approximately 100 Ma with the plate tectonic sundering of Africa and South America hints at the importance of vicariance in the early history of Placentalia. Dispersal has also been important including the origins of Madagascar's endemic mammal fauna. Further studies will benefit from increased taxon sampling and the application of new ancestral area reconstruction methods. PMID:21807730

  19. Dynamic PET Image reconstruction for parametric imaging using the HYPR kernel method

    NASA Astrophysics Data System (ADS)

    Spencer, Benjamin; Qi, Jinyi; Badawi, Ramsey D.; Wang, Guobao

    2017-03-01

    Dynamic PET image reconstruction is a challenging problem because of the ill-conditioned nature of PET and the low counting statistics resulting from short time frames in dynamic imaging. The kernel method for image reconstruction has been developed to improve image reconstruction of low-count PET data by incorporating prior information derived from high-count composite data. In contrast to most of the existing regularization-based methods, the kernel method embeds image prior information in the forward projection model and does not require an explicit regularization term in the reconstruction formula. Inspired by the existing highly constrained back-projection (HYPR) algorithm for dynamic PET image denoising, we propose in this work a new type of kernel that is simpler to implement and further improves the kernel-based dynamic PET image reconstruction. Our evaluation study using a physical phantom scan with synthetic FDG tracer kinetics has demonstrated that the new HYPR kernel-based reconstruction can achieve a better region-of-interest (ROI) bias versus standard deviation trade-off for dynamic PET parametric imaging than the post-reconstruction HYPR denoising method and the previously used nonlocal-means kernel.

  20. Reconstructing the temporal ordering of biological samples using microarray data.

    PubMed

    Magwene, Paul M; Lizardi, Paul; Kim, Junhyong

    2003-05-01

    Accurate time series for biological processes are difficult to estimate due to problems of synchronization, temporal sampling and rate heterogeneity. Methods are needed that can utilize multi-dimensional data, such as those resulting from DNA microarray experiments, in order to reconstruct time series from unordered or poorly ordered sets of observations. We present a set of algorithms for estimating temporal orderings from unordered sets of sample elements. The techniques we describe are based on modifications of a minimum-spanning tree calculated from a weighted, undirected graph. We demonstrate the efficacy of our approach by applying these techniques to an artificial data set as well as several gene expression data sets derived from DNA microarray experiments. In addition to estimating orderings, the techniques we describe also provide useful heuristics for assessing relevant properties of sample datasets such as noise and sampling intensity, and we show how a data structure called a PQ-tree can be used to represent uncertainty in a reconstructed ordering. Academic implementations of the ordering algorithms are available as source code (in the programming language Python) on our web site, along with documentation on their use. The artificial 'jelly roll' data set upon which the algorithm was tested is also available from this web site. The publicly available gene expression data may be found at http://genome-www.stanford.edu/cellcycle/ and http://caulobacter.stanford.edu/CellCycle/.
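
    One simple way to realize the minimum-spanning-tree idea, assuming numerical expression profiles in a NumPy array, is to take the diameter path of the MST of the sample distance graph as a candidate ordering; the authors' actual graph modifications and the PQ-tree representation of uncertainty are not reproduced here.

      import numpy as np
      from scipy.sparse.csgraph import minimum_spanning_tree, shortest_path
      from scipy.spatial.distance import pdist, squareform

      def mst_ordering(samples):
          """Order samples along the diameter (longest) path of the MST of their distance graph."""
          d = squareform(pdist(samples))                 # pairwise Euclidean distances
          mst = minimum_spanning_tree(d)                 # sparse n x n spanning tree
          dist, pred = shortest_path(mst, directed=False, return_predecessors=True)
          i, j = np.unravel_index(np.argmax(dist), dist.shape)   # endpoints of the longest tree path
          path, node = [], j
          while node != i:
              path.append(node)
              node = pred[i, node]
          path.append(i)
          return path[::-1]

      # toy example: samples generated along a 1-D trajectory, then shuffled
      rng = np.random.default_rng(2)
      t = np.linspace(0.0, 1.0, 30)
      data = np.c_[t, np.sin(2 * np.pi * t)] + 0.01 * rng.normal(size=(30, 2))
      order = mst_ordering(data[rng.permutation(30)])    # approximately recovers the temporal ordering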

  1. Phylogeny Inference of Closely Related Bacterial Genomes: Combining the Features of Both Overlapping Genes and Collinear Genomic Regions

    PubMed Central

    Zhang, Yan-Cong; Lin, Kui

    2015-01-01

    Overlapping genes (OGs) represent one type of widespread genomic feature in bacterial genomes and have been used as rare genomic markers in phylogeny inference of closely related bacterial species. However, the inference may experience a decrease in performance for phylogenomic analysis of too closely or too distantly related genomes. Another drawback of OGs as phylogenetic markers is that they usually take little account of the effects of genomic rearrangement on the similarity estimation, such as intra-chromosome/genome translocations, horizontal gene transfer, and gene losses. To explore such effects on the accuracy of phylogeny reconstruction, we combine phylogenetic signals of OGs with collinear genomic regions, here called locally collinear blocks (LCBs). By putting these together, we refine our previous metric of pairwise similarity between two closely related bacterial genomes. As a case study, we used this new method to reconstruct the phylogenies of 88 Enterobacteriale genomes of the class Gammaproteobacteria. Our results demonstrated that the topological accuracy of the inferred phylogeny was improved when both OGs and LCBs were simultaneously considered, suggesting that combining these two phylogenetic markers may reduce, to some extent, the influence of gene loss on phylogeny inference. Such phylogenomic studies, we believe, will help us to explore a more effective approach to increasing the robustness of phylogeny reconstruction of closely related bacterial organisms. PMID:26715828

  2. High-Order Space-Time Methods for Conservation Laws

    NASA Technical Reports Server (NTRS)

    Huynh, H. T.

    2013-01-01

    Current high-order methods such as discontinuous Galerkin and/or flux reconstruction can provide effective discretization for the spatial derivatives. Together with a time discretization, such methods result in either too small a time step size in the case of an explicit scheme or a very large system in the case of an implicit one. To tackle these problems, two new high-order space-time schemes for conservation laws are introduced: the first is explicit and the second, implicit. The explicit method here, also called the moment scheme, achieves a Courant-Friedrichs-Lewy (CFL) condition of 1 for the case of one spatial dimension regardless of the degree of the polynomial approximation. (For standard explicit methods, if the spatial approximation is of degree p, then the time step sizes are typically proportional to 1/p^2.) Fourier analyses for the one- and two-dimensional cases are carried out. The property of super accuracy (or super convergence) is discussed. The implicit method is a simplified but optimal version of the discontinuous Galerkin scheme applied to time. It reduces to a collocation implicit Runge-Kutta (RK) method for ordinary differential equations (ODE) called Radau IIA. The explicit and implicit schemes are closely related since they employ the same intermediate time levels, and the former can serve as a key building block in an iterative procedure for the latter. A limiting technique for the piecewise linear scheme is also discussed. The technique can suppress oscillations near a discontinuity while preserving accuracy near extrema. Preliminary numerical results are shown.

  3. SU-D-206-01: Employing a Novel Consensus Optimization Strategy to Achieve Iterative Cone Beam CT Reconstruction On a Multi-GPU Platform

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, B; Southern Medical University, Guangzhou, Guangdong; Tian, Z

    Purpose: While compressed sensing-based cone-beam CT (CBCT) iterative reconstruction techniques have demonstrated tremendous capability of reconstructing high-quality images from undersampled noisy data, their long computation time still hinders wide application in routine clinical practice. The purpose of this study is to develop a reconstruction framework that employs modern consensus optimization techniques to achieve CBCT reconstruction on a multi-GPU platform for improved computational efficiency. Methods: Total projection data were evenly distributed to multiple GPUs. Each GPU performed reconstruction using its own projection data with a conventional total variation regularization approach to ensure image quality. In addition, the solutions from GPUs were subject to a consistency constraint that they should be identical. We solved the optimization problem with all the constraints considered rigorously using an alternating direction method of multipliers (ADMM) algorithm. The reconstruction framework was implemented using OpenCL on a platform with two Nvidia GTX590 GPU cards, each with two GPUs. We studied the performance of our method and demonstrated its advantages through a simulation case with an NCAT phantom and an experimental case with a Catphan phantom. Results: Compared with the CBCT images reconstructed using the conventional FDK method with full projection datasets, our proposed method achieved comparable image quality with about one third of the projections. The computation time on the multi-GPU platform was ∼55 s and ∼ 35 s in the two cases respectively, achieving a speedup factor of ∼ 3.0 compared with single GPU reconstruction. Conclusion: We have developed a consensus ADMM-based CBCT reconstruction method which enabled performing reconstruction on a multi-GPU platform. The achieved efficiency made this method clinically attractive.
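
    The consensus formulation itself is easy to illustrate on a toy problem, assuming a plain least-squares objective split across several "nodes" standing in for GPUs; the TV regularization, the CBCT system model and the OpenCL implementation are all omitted, so this is a sketch of the optimization structure only.

      import numpy as np

      def consensus_admm(A_blocks, b_blocks, rho=1.0, n_iter=100):
          """Consensus ADMM for  min_x sum_i 0.5*||A_i x - b_i||^2,
          with each (A_i, b_i) block handled by its own node."""
          n = A_blocks[0].shape[1]
          z = np.zeros(n)
          xs = [np.zeros(n) for _ in A_blocks]
          us = [np.zeros(n) for _ in A_blocks]
          lhs = [A.T @ A + rho * np.eye(n) for A in A_blocks]       # local normal equations
          rhs = [A.T @ b for A, b in zip(A_blocks, b_blocks)]
          for _ in range(n_iter):
              for i in range(len(A_blocks)):                        # local (per-GPU) updates
                  xs[i] = np.linalg.solve(lhs[i], rhs[i] + rho * (z - us[i]))
              z = np.mean([x + u for x, u in zip(xs, us)], axis=0)  # consensus update
              for i in range(len(A_blocks)):                        # dual updates
                  us[i] += xs[i] - z
          return z

      rng = np.random.default_rng(3)
      x_true = rng.normal(size=50)
      A_blocks = [rng.normal(size=(40, 50)) for _ in range(4)]      # 4 nodes, 40 "projections" each
      b_blocks = [A @ x_true + 0.01 * rng.normal(size=40) for A in A_blocks]
      x_hat = consensus_admm(A_blocks, b_blocks)                    # all nodes agree on one solution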

  4. Evolution of the solar irradiance during the Holocene

    NASA Astrophysics Data System (ADS)

    Vieira, L. E. A.; Solanki, S. K.; Krivova, N. A.; Usoskin, I.

    2011-07-01

    Context. Long-term records of solar radiative output are vital for understanding solar variability and past climate change. Measurements of solar irradiance are available for only the last three decades, which calls for reconstructions of this quantity over longer time scales using suitable models. Aims: We present a physically consistent reconstruction of the total solar irradiance for the Holocene. Methods: We extend the SATIRE (Spectral And Total Irradiance REconstruction) models to estimate the evolution of the total (and partly spectral) solar irradiance over the Holocene. The basic assumption is that the variations of the solar irradiance are due to the evolution of the dark and bright magnetic features on the solar surface. The evolution of the decadally averaged magnetic flux is computed from decadal values of cosmogenic isotope concentrations recorded in natural archives employing a series of physics-based models connecting the processes from the modulation of the cosmic ray flux in the heliosphere to their record in natural archives. We then compute the total solar irradiance (TSI) as a linear combination of the jth and jth + 1 decadal values of the open magnetic flux. In order to evaluate the uncertainties due to the evolution of the Earth's magnetic dipole moment, we employ four reconstructions of the open flux which are based on conceptually different paleomagnetic models. Results: Reconstructions of the TSI over the Holocene, each valid for a different paleomagnetic time series, are presented. Our analysis suggests that major sources of uncertainty in the TSI in this model are the heritage of the uncertainty of the TSI since 1610 reconstructed from sunspot data and the uncertainty of the evolution of the Earth's magnetic dipole moment. The analysis of the distribution functions of the reconstructed irradiance for the last 3000 years, which is the period that the reconstructions overlap, indicates that the estimates based on the virtual axial dipole moment are significantly lower at earlier times than the reconstructions based on the virtual dipole moment. We also present a combined reconstruction, which represents our best estimate of total solar irradiance for any given time during the Holocene. Conclusions: We present the first physics-based reconstruction of the total solar irradiance over the Holocene, which will be of interest for studies of climate change over the last 11 500 years. The reconstruction indicates that the decadally averaged total solar irradiance ranges over approximately 1.5 W/m2 from grand maxima to grand minima. Appendix A is available in electronic form at http://www.aanda.org. The TSI data is only available at the CDS via anonymous ftp to cdsarc.u-strasbg.fr (130.79.128.5) or via http://cdsarc.u-strasbg.fr/viz-bin/qcat?J/A+A/531/A6

  5. MO-DE-207A-11: Sparse-View CT Reconstruction Via a Novel Non-Local Means Method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, Z; Qi, H; Wu, S

    2016-06-15

    Purpose: Sparse-view computed tomography (CT) reconstruction is an effective strategy to reduce the radiation dose delivered to patients. Due to its insufficiency of measurements, traditional non-local means (NLM) based reconstruction methods often lead to over-smoothness in image edges. To address this problem, an adaptive NLM reconstruction method based on rotational invariance (RIANLM) is proposed. Methods: The method consists of four steps: 1) Initializing parameters; 2) Algebraic reconstruction technique (ART) reconstruction using raw projection data; 3) Positivity constraint of the image reconstructed by ART; 4) Update of the reconstructed image by RIANLM filtering. In RIANLM, a novel similarity metric that is rotationally invariant is proposed and used to calculate the distance between two patches. In this way, any patch with similar structure but different orientation to the reference patch wins a relatively large weight, avoiding an over-smoothed image. Moreover, the parameter h in RIANLM, which controls the decay of the weights, is adaptive to avoid over-smoothness, while in NLM it is not adaptive during the whole reconstruction process. The proposed method is named ART-RIANLM and validated on the Shepp-Logan phantom and on clinical projection data. Results: In our experiments, the searching neighborhood size is set to 15 by 15 and the similarity window to 3 by 3. For the simulated case with a 256 by 256 Shepp-Logan phantom, ART-RIANLM produces a reconstructed image with higher SNR (35.38 dB versus 24.00 dB) and lower MAE (0.0006 versus 0.0023) than ART-NLM. Visual inspection demonstrated that the proposed method could suppress artifacts or noise more effectively and preserve image edges better. Similar results were found for the clinical data case. Conclusion: A novel ART-RIANLM method for sparse-view CT reconstruction is presented with superior image quality. Compared to the conventional ART-NLM method, the SNR from ART-RIANLM increases by 47% and the MAE decreases by 74%.
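
    The abstract does not give the exact form of the rotation-invariant metric; the fragment below is only an illustration of the idea, assuming that invariance to the four axis-aligned rotations of a patch is enough to show how such a distance can feed an NLM-style weight.

      import numpy as np

      def rotation_invariant_distance(p, q):
          """Squared patch distance that ignores 90-degree rotations of the second patch."""
          return min(np.sum((p - np.rot90(q, k)) ** 2) for k in range(4))

      def nlm_weight(p, q, h):
          """NLM-style weight using the rotation-tolerant distance; h controls the decay."""
          return np.exp(-rotation_invariant_distance(p, q) / (h ** 2))

      # toy check: a patch and its rotated copy receive the full weight of 1.0
      rng = np.random.default_rng(4)
      patch = rng.normal(size=(3, 3))
      print(nlm_weight(patch, np.rot90(patch), h=0.5))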

  6. Implementing Kernel Methods Incrementally by Incremental Nonlinear Projection Trick.

    PubMed

    Kwak, Nojun

    2016-05-20

    Recently, the nonlinear projection trick (NPT) was introduced, enabling direct computation of coordinates of samples in a reproducing kernel Hilbert space. With NPT, any machine learning algorithm can be extended to a kernel version without relying on the so-called kernel trick. However, NPT is inherently difficult to implement incrementally because an ever-increasing kernel matrix must be handled as additional training samples are introduced. In this paper, an incremental version of the NPT (INPT) is proposed based on the observation that the centerization step in NPT is unnecessary. Because the proposed INPT does not change the coordinates of the old data, the coordinates obtained by INPT can be used directly in any incremental method to implement a kernel version of the incremental methods. The effectiveness of the INPT is shown by applying it to implement incremental versions of kernel methods such as kernel singular value decomposition, kernel principal component analysis, and kernel discriminant analysis, which are utilized for problems of kernel matrix reconstruction, letter classification, and face image retrieval, respectively.
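
    The batch form of the trick is compact enough to sketch, under the assumption that explicit coordinates X with X X^T = K are obtained from an eigendecomposition of the kernel matrix; the incremental bookkeeping that defines INPT is not reproduced here.

      import numpy as np

      def npt_coordinates(K, tol=1e-10):
          """Explicit sample coordinates X such that X @ X.T ~= K for a PSD kernel matrix K.
          Any learning algorithm can then run directly on X instead of using the kernel trick."""
          vals, vecs = np.linalg.eigh(K)
          keep = vals > tol
          return vecs[:, keep] * np.sqrt(vals[keep])

      # toy usage with an RBF kernel on random 2-D points
      rng = np.random.default_rng(5)
      pts = rng.normal(size=(20, 2))
      sq = np.sum((pts[:, None, :] - pts[None, :, :]) ** 2, axis=-1)
      K = np.exp(-sq / 2.0)
      X = npt_coordinates(K)
      print(np.allclose(X @ X.T, K))   # True up to numerical tolerance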

  7. An Accurate Framework for Arbitrary View Pedestrian Detection in Images

    NASA Astrophysics Data System (ADS)

    Fan, Y.; Wen, G.; Qiu, S.

    2018-01-01

    We consider the problem of detecting pedestrians in images collected under various viewpoints. This paper utilizes a novel framework called locality-constrained affine subspace coding (LASC). Firstly, the positive training samples are clustered into similar entities, each of which represents a similar viewpoint. Then Principal Component Analysis (PCA) is used to obtain the shared feature of each viewpoint. Finally, the samples that can be reconstructed by linear approximation using their top-k nearest shared features with a small error are regarded as correct detections. No negative samples are required for our method. Histograms of oriented gradient (HOG) features are used as the feature descriptors, and the sliding window scheme is adopted to detect humans in images. The proposed method exploits the sparse property of intrinsic information and the correlations among the multiple-view samples. Experimental results on the INRIA and SDL human datasets show that the proposed method achieves higher performance than state-of-the-art methods in terms of effectiveness and efficiency.

  8. Blind decomposition of Herschel-HIFI spectral maps of the NGC 7023 nebula

    NASA Astrophysics Data System (ADS)

    Berné, O.; Joblin, C.; Deville, Y.; Pilleri, P.; Pety, J.; Teyssier, D.; Gerin, M.; Fuente, A.

    2012-12-01

    Large spatial-spectral surveys are increasingly common in astronomy. This calls for new methods to analyze such mega- to giga-pixel data cubes. In this paper we present a method to decompose such observations into a limited and comprehensive set of components. The original data can then be interpreted in terms of linear combinations of these components. The method uses non-negative matrix factorization (NMF) to extract latent spectral end-members in the data. The number of needed end-members is estimated based on the level of noise in the data. A Monte-Carlo scheme is adopted to estimate the optimal end-members and their standard deviations. Finally, the maps of linear coefficients are reconstructed using non-negative least squares. We apply this method to a set of hyperspectral data of the NGC 7023 nebula, obtained recently with the HIFI instrument onboard the Herschel space observatory, and provide a first interpretation of the results in terms of the 3-dimensional dynamical structure of the region.
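
    With standard library routines the two stages of such a pipeline can be sketched directly, assuming the data cube has been flattened to a pixels-by-channels matrix; the noise-based choice of the number of end-members and the Monte-Carlo error estimation described above are omitted.

      import numpy as np
      from scipy.optimize import nnls
      from sklearn.decomposition import NMF

      rng = np.random.default_rng(6)

      # synthetic cube: 100 pixels x 40 spectral channels built from 3 end-members
      true_S = np.abs(rng.normal(size=(3, 40)))            # spectral end-members
      true_A = np.abs(rng.normal(size=(100, 3)))           # abundance maps
      X = true_A @ true_S + 0.01 * np.abs(rng.normal(size=(100, 40)))

      # stage 1: NMF extracts latent non-negative spectral components
      model = NMF(n_components=3, init="nndsvda", max_iter=500, random_state=0)
      model.fit(X)
      S = model.components_                                # estimated end-member spectra

      # stage 2: per-pixel non-negative least squares rebuilds the coefficient maps
      coeffs = np.array([nnls(S.T, x)[0] for x in X])      # shape (pixels, end-members)
      reconstruction = coeffs @ S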

  9. Forensic Science Technician

    ERIC Educational Resources Information Center

    Tech Directions, 2010

    2010-01-01

    Forensic science technicians, also called crime laboratory technicians or police science technicians, help solve crimes. They examine and identify physical evidence to reconstruct a crime scene. This article discusses everything students need to know about careers for forensic science technicians--wages, responsibilities, skills needed, career…

  10. Myths of Childhood.

    ERIC Educational Resources Information Center

    Paris, Joel

    This book calls into question the degree to which early childhood experiences affect psychological development, critiquing three related myths: (1) personality is formed by early childhood experiences; (2) mental disorders are caused by early childhood experiences; and (3) effective psychotherapy depends on reconstructing childhood experiences.…

  11. Simultaneous maximum a posteriori longitudinal PET image reconstruction

    NASA Astrophysics Data System (ADS)

    Ellis, Sam; Reader, Andrew J.

    2017-09-01

    Positron emission tomography (PET) is frequently used to monitor functional changes that occur over extended time scales, for example in longitudinal oncology PET protocols that include routine clinical follow-up scans to assess the efficacy of a course of treatment. In these contexts PET datasets are currently reconstructed into images using single-dataset reconstruction methods. Inspired by recently proposed joint PET-MR reconstruction methods, we propose to reconstruct longitudinal datasets simultaneously by using a joint penalty term in order to exploit the high degree of similarity between longitudinal images. We achieved this by penalising voxel-wise differences between pairs of longitudinal PET images in a one-step-late maximum a posteriori (MAP) fashion, resulting in the MAP simultaneous longitudinal reconstruction (SLR) method. The proposed method reduced reconstruction errors and visually improved images relative to standard maximum likelihood expectation-maximisation (ML-EM) in simulated 2D longitudinal brain tumour scans. In reconstructions of split real 3D data with inserted simulated tumours, noise across images reconstructed with MAP-SLR was reduced to levels equivalent to doubling the number of detected counts when using ML-EM. Furthermore, quantification of tumour activities was largely preserved over a variety of longitudinal tumour changes, including changes in size and activity, with larger changes inducing larger biases relative to standard ML-EM reconstructions. Similar improvements were observed for a range of counts levels, demonstrating the robustness of the method when used with a single penalty strength. The results suggest that longitudinal regularisation is a simple but effective method of improving reconstructed PET images without using resolution degrading priors.
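
    A toy version of the coupled update can be written down directly, assuming a small non-negative system matrix, Poisson data for two scans, and a quadratic coupling penalty handled in one-step-late fashion; the clinical system model and the exact penalty used in MAP-SLR are not reproduced.

      import numpy as np

      def mlem_osl_longitudinal(A, y1, y2, beta=0.05, n_iter=50, eps=1e-9):
          """One-step-late MAP-EM for two longitudinal scans coupled by the quadratic
          penalty 0.5*beta*sum((x1 - x2)^2); a simplified stand-in for joint reconstruction."""
          n = A.shape[1]
          x1, x2 = np.ones(n), np.ones(n)
          sens = A.sum(axis=0)                              # A^T 1
          for _ in range(n_iter):
              grad1, grad2 = x1 - x2, x2 - x1               # penalty gradients at the old estimates
              ratio1 = A.T @ (y1 / np.maximum(A @ x1, eps))
              ratio2 = A.T @ (y2 / np.maximum(A @ x2, eps))
              x1 = x1 * ratio1 / np.maximum(sens + beta * grad1, eps)
              x2 = x2 * ratio2 / np.maximum(sens + beta * grad2, eps)
          return x1, x2

      rng = np.random.default_rng(7)
      A = rng.random((120, 40))                             # toy non-negative "system matrix"
      baseline = rng.random(40) + 0.5
      followup = baseline.copy()
      followup[10:15] *= 1.5                                # a localized longitudinal change
      y1 = rng.poisson(A @ baseline).astype(float)
      y2 = rng.poisson(A @ followup).astype(float)
      x1_hat, x2_hat = mlem_osl_longitudinal(A, y1, y2)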

  12. Split Bregman multicoil accelerated reconstruction technique: A new framework for rapid reconstruction of cardiac perfusion MRI

    PubMed Central

    Kamesh Iyer, Srikant; Tasdizen, Tolga; Likhite, Devavrat; DiBella, Edward

    2016-01-01

    Purpose: Rapid reconstruction of undersampled multicoil MRI data with iterative constrained reconstruction methods is a challenge. The authors sought to develop a new substitution-based variable splitting algorithm for faster reconstruction of multicoil cardiac perfusion MRI data. Methods: The new method, split Bregman multicoil accelerated reconstruction technique (SMART), uses a combination of split Bregman-based variable splitting and iterative reweighting techniques to achieve fast convergence. Total variation constraints are used along the spatial and temporal dimensions. The method is tested on nine ECG-gated dog perfusion datasets, acquired with a 30-ray golden ratio radial sampling pattern, and ten ungated human perfusion datasets, acquired with a 24-ray golden ratio radial sampling pattern. Image quality and reconstruction speed are evaluated and compared to a gradient descent (GD) implementation and to multicoil k-t SLR, a reconstruction technique that uses a combination of sparsity and low rank constraints. Results: Comparisons based on blur metric and visual inspection showed that SMART images had lower blur and better texture as compared to the GD implementation. On average, the GD-based images had an ∼18% higher blur metric as compared to SMART images. Reconstruction of dynamic contrast enhanced (DCE) cardiac perfusion images using the SMART method was ∼6 times faster than standard gradient descent methods. k-t SLR and SMART produced images with comparable image quality, though SMART was ∼6.8 times faster than k-t SLR. Conclusions: The SMART method is a promising approach to reconstruct good quality multicoil images from undersampled DCE cardiac perfusion data rapidly.

  13. WE-FG-207B-05: Iterative Reconstruction Via Prior Image Constrained Total Generalized Variation for Spectral CT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Niu, S; Zhang, Y; Ma, J

    Purpose: To investigate iterative reconstruction via prior image constrained total generalized variation (PICTGV) for spectral computed tomography (CT) using fewer projections while achieving greater image quality. Methods: The proposed PICTGV method is formulated as an optimization problem, which balances the data fidelity and the prior image constrained total generalized variation of reconstructed images in one framework. The PICTGV method is based on structure correlations among images in the energy domain and uses high-quality images to guide the reconstruction of energy-specific images. In the PICTGV method, the high-quality image is reconstructed from all detector-collected X-ray signals and is referred to as the broad-spectrum image. Distinct from existing reconstruction methods applied to images with a first-order derivative, a higher-order derivative of the images is incorporated into the PICTGV method. An alternating optimization algorithm is used to minimize the PICTGV objective function. We evaluate the performance of PICTGV on noise and artifact suppression using phantom studies and compare the method with the conventional filtered back-projection method as well as a TGV-based method without prior image. Results: On the digital phantom, the proposed method outperforms the existing TGV method in terms of noise reduction, artifact suppression, and edge detail preservation. Compared to that obtained by the TGV-based method without prior image, the relative root mean square error in the images reconstructed by the proposed method is reduced by over 20%. Conclusion: The authors propose an iterative reconstruction via prior image constrained total generalized variation for spectral CT. We have also developed an alternating optimization algorithm and numerically demonstrated the merits of our approach. Results show that the proposed PICTGV method outperforms the TGV method for spectral CT.

  14. Expansion method in secondary total ear reconstruction for undesirable reconstructed ear.

    PubMed

    Liu, Tun; Hu, Jintian; Zhou, Xu; Zhang, Qingguo

    2014-09-01

    Ear reconstruction by autologous costal cartilage grafting is the most widely applied technique with fewer complications. However, undesirable ear reconstruction brings more problems to plastic surgeons. Some authors resort to free flap or osseointegration technique with prosthetic ear. In this article, we introduce a secondary total ear reconstruction method using an expanded skin flap. From July 2010 to April 2012, 7 cases of undesirable ear reconstruction were repaired by the tissue expansion method. Procedures including removal of the previous cartilage framework, soft tissue expander insertion, and second-stage cartilage framework insertion were performed in each case according to its local conditions. The follow-up time ranged from 6 months to 2.5 years. All of the cases recovered well with good 3-dimensional forms, symmetrical auriculocephalic angle, and stable fixation. All this evidence showed that this novel expansion method is safe, stable, and less traumatic for secondary total ear reconstruction. With a sufficient expanded skin flap and a refabricated cartilage framework, a lifelike appearance of the reconstructed ear could be acquired without causing additional injury.

  15. [Development and current situation of reconstruction methods following total sacrectomy].

    PubMed

    Huang, Siyi; Ji, Tao; Guo, Wei

    2018-05-01

    To review the development of the reconstruction methods following total sacrectomy, and to provide reference for finding a better reconstruction method following total sacrectomy. The case reports and biomechanical and finite element studies of reconstruction following total sacrectomy at home and abroad were searched. Development and current situation were summarized. After developing for nearly 30 years, great progress has been made in the reconstruction concept and fixation techniques. The fixation methods can be summarized as the following three strategies: spinopelvic fixation (SPF), posterior pelvic ring fixation (PPRF), and anterior spinal column fixation (ASCF). SPF has undergone technical progress from intrapelvic rod and hook constructs to pedicle and iliac screw-rod systems. PPRF and ASCF could improve the stability of the reconstruction system. Reconstruction following total sacrectomy remains a challenge. Reconstruction combining SPF, PPRF, and ASCF is the developmental direction to achieve mechanical stability. How to gain biological fixation to improve the long-term stability is an urgent problem to be solved.

  16. Maximum entropy reconstruction of poloidal magnetic field and radial electric field profiles in tokamaks

    NASA Astrophysics Data System (ADS)

    Chen, Yihang; Xiao, Chijie; Yang, Xiaoyi; Wang, Tianbo; Xu, Tianchao; Yu, Yi; Xu, Min; Wang, Long; Lin, Chen; Wang, Xiaogang

    2017-10-01

    The Laser-driven Ion beam trace probe (LITP) is a new diagnostic method for measuring poloidal magnetic field (Bp) and radial electric field (Er) in tokamaks. LITP injects a laser-driven ion beam into the tokamak, and Bp and Er profiles can be reconstructed using tomography methods. A reconstruction code has been developed to validate the LITP theory, and both 2D reconstruction of Bp and simultaneous reconstruction of Bp and Er have been attained. To reconstruct from experimental data with noise, Maximum Entropy and Gaussian-Bayesian tomography methods were applied and improved according to the characteristics of the LITP problem. With these improved methods, a reconstruction error level below 15% has been attained with a data noise level of 10%. These methods will be further tested and applied in the following LITP experiments. Supported by the ITER-CHINA program 2015GB120001, CHINA MOST under 2012YQ030142 and the National Natural Science Foundation of China under 11575014 and 11375053.

  17. Free-style puzzle flap: the concept of recycling a perforator flap.

    PubMed

    Feng, Kuan-Ming; Hsieh, Ching-Hua; Jeng, Seng-Feng

    2013-02-01

    Theoretically, a flap can be supplied by any perforator based on the angiosome theory. In this study, the technique of free-style perforator flap dissection was used to harvest a pedicled or free skin flap from a previous free flap for a second difficult reconstruction. The authors call this a free-style puzzle flap. For the past 3 years, the authors treated 13 patients in whom 12 pedicled free-style puzzle flaps were harvested from previous redundant free flaps and recycled to reconstruct soft-tissue defects at various anatomical locations. One free-style free puzzle flap was harvested from a previous anterolateral thigh flap for buccal cancer to reconstruct a foot defect. Total flap survival was attained in 12 of 13 flaps. One transferred flap failed completely. This patient had received postoperative radiotherapy after the initial cancer ablation and free anterolateral thigh flap reconstruction. Another free flap was used to close and reconstruct the wound. All the donor sites could be closed primarily. The free-style puzzle flap, harvested from a previous redundant free flap and used as a perforator flap to reconstruct a new defect, has proven to be versatile and reliable. When indicated, it is an alternative donor site for further reconstruction of soft-tissue defects.

  18. Reconstruction of improvised explosive device blast loading to personnel in the open

    NASA Astrophysics Data System (ADS)

    Wiri, Suthee; Needham, Charles

    2016-05-01

    Significant advances in reconstructing attacks by improvised explosive devices (IEDs) and other blast events are reported. A high-fidelity three-dimensional computational fluid dynamics tool, called Second-order Hydrodynamic Automatic Mesh Refinement Code, was used for the analysis. Computer-aided design models for subjects or vehicles in the scene accurately represent geometries of objects in the blast field. A wide range of scenario types and blast exposure levels were reconstructed including free field blast, enclosed space of vehicle cabin, IED attack on a vehicle, buried charges, recoilless rifle operation, rocket-propelled grenade attack and missile attack with single subject or multiple subject exposure to pressure levels from ~27.6 kPa (~4 psi) to greater than 690 kPa (>100 psi). To create a full 3D pressure time-resolved reconstruction of a blast event for injury and blast exposure analysis, a combination of intelligence data and Blast Gauge data can be used to reconstruct an actual in-theatre blast event. The methodology to reconstruct an event and the "lessons learned" from multiple reconstructions in open space are presented. The analysis uses records of blast pressure at discrete points, and the output is a spatial and temporal blast load distribution for all personnel involved.

  19. Beyond maximum entropy: Fractal Pixon-based image reconstruction

    NASA Technical Reports Server (NTRS)

    Puetter, Richard C.; Pina, R. K.

    1994-01-01

    We have developed a new Bayesian image reconstruction method that has been shown to be superior to the best implementations of other competing methods, including Goodness-of-Fit methods such as Least-Squares fitting and Lucy-Richardson reconstruction, as well as Maximum Entropy (ME) methods such as those embodied in the MEMSYS algorithms. Our new method is based on the concept of the pixon, the fundamental, indivisible unit of picture information. Use of the pixon concept provides an improved image model, resulting in an image prior which is superior to that of standard ME. Our past work has shown how uniform information content pixons can be used to develop a 'Super-ME' method in which entropy is maximized exactly. Recently, however, we have developed a superior pixon basis for the image, the Fractal Pixon Basis (FPB). Unlike the Uniform Pixon Basis (UPB) of our 'Super-ME' method, the FPB basis is selected by employing fractal dimensional concepts to assess the inherent structure in the image. The Fractal Pixon Basis results in the best image reconstructions to date, superior to both UPB and the best ME reconstructions. In this paper, we review the theory of the UPB and FPB pixon and apply our methodology to the reconstruction of far-infrared imaging of the galaxy M51. The results of our reconstruction are compared to published reconstructions of the same data using the Lucy-Richardson algorithm, the Maximum Correlation Method developed at IPAC, and the MEMSYS ME algorithms. The results show that our reconstructed image has a spatial resolution a factor of two better than best previous methods (and a factor of 20 finer than the width of the point response function), and detects sources two orders of magnitude fainter than other methods.

  20. A modified sparse reconstruction method for three-dimensional synthetic aperture radar image

    NASA Astrophysics Data System (ADS)

    Zhang, Ziqiang; Ji, Kefeng; Song, Haibo; Zou, Huanxin

    2018-03-01

    There is increasing interest in three-dimensional Synthetic Aperture Radar (3-D SAR) imaging from observed sparse scattering data. However, existing 3-D sparse imaging methods require long computing times and large storage capacity. In this paper, we propose a modified method for sparse 3-D SAR imaging. The method processes the collection of noisy SAR measurements, usually collected over nonlinear flight paths, and outputs 3-D SAR imagery. First, the 3-D sparse reconstruction problem is transformed by range compression into a series of 2-D slice reconstruction problems. The slices are then reconstructed with a modified SL0 (smoothed l0 norm) reconstruction algorithm. The improved algorithm uses a hyperbolic tangent function instead of the Gaussian function to approximate the l0 norm and uses the Newton direction instead of the steepest descent direction, which speeds up the convergence of the SL0 algorithm. Finally, numerical simulation results are given to demonstrate the effectiveness of the proposed algorithm. Compared with the existing 3-D sparse imaging method, our method performs better in both reconstruction quality and reconstruction time.
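
    The slice-wise step above reduces to solving a sparse linear inverse problem with an SL0-type iteration in which the Gaussian kernel is replaced by a hyperbolic tangent surrogate. The sketch below illustrates that idea for a generic linear model y = A x; the exact surrogate, the step size, and the Newton scaling used in the paper are not given in the abstract, so the choices here (including the plain gradient step in place of the Newton direction) are illustrative assumptions only.

    ```python
    import numpy as np

    def modified_sl0(A, y, sigma_min=1e-3, sigma_decay=0.7, inner_iters=3, mu=1.0):
        """SL0-style sparse recovery with a tanh surrogate for the l0 norm (sketch).

        Assumptions not taken from the paper: the surrogate is
        F_sigma(x) = sum_i tanh(x_i**2 / (2*sigma**2)), and the paper's Newton
        direction is replaced by a plain gradient step for brevity.
        """
        A_pinv = np.linalg.pinv(A)
        x = A_pinv @ y                               # minimum-l2-norm starting point
        sigma = 2.0 * np.max(np.abs(x)) + 1e-12
        while sigma > sigma_min:
            for _ in range(inner_iters):
                t = np.tanh(x**2 / (2.0 * sigma**2))
                grad = (1.0 - t**2) * x / sigma**2   # gradient of the tanh surrogate
                x = x - mu * sigma**2 * grad         # push small coefficients toward zero
                x = x - A_pinv @ (A @ x - y)         # project back onto {x : A x = y}
            sigma *= sigma_decay                     # anneal sigma (graduated non-convexity)
        return x
    ```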

  1. Reconstruction of Cyber and Physical Software Using Novel Spread Method

    NASA Astrophysics Data System (ADS)

    Ma, Wubin; Deng, Su; Huang, Hongbin

    2018-03-01

    Cyber-physical software has been a subject of concern since 2010. Many researchers would disagree with deploying the traditional Spread Method for the reconstruction of cyber-physical software, which embodies the key principles of cyber-physical system reconstruction. NSM (Novel Spread Method), our new methodology for the reconstruction of cyber-physical software, is proposed as a solution to these challenges.

  2. The algorithm of central axis in surface reconstruction

    NASA Astrophysics Data System (ADS)

    Zhao, Bao Ping; Zhang, Zheng Mei; Cai Li, Ji; Sun, Da Ming; Cao, Hui Ying; Xing, Bao Liang

    2017-09-01

    Reverse engineering is an important technical means of product imitation and new product development. Its core technology, surface reconstruction, is an active research topic. Among the various algorithms for surface reconstruction, reconstruction based on the medial axis is an important class of methods. This paper summarizes the medial-axis algorithms used for various reconstructions, points out the problems in the existing methods and where they need improvement, and discusses later surface reconstruction and the development of the axial-direction approach.

  3. Development of an iterative reconstruction method to overcome 2D detector low resolution limitations in MLC leaf position error detection for 3D dose verification in IMRT.

    PubMed

    Visser, R; Godart, J; Wauben, D J L; Langendijk, J A; Van't Veld, A A; Korevaar, E W

    2016-05-21

    The objective of this study was to introduce a new iterative method to reconstruct multi leaf collimator (MLC) positions based on low resolution ionization detector array measurements and to evaluate its error detection performance. The iterative reconstruction method consists of a fluence model, a detector model and an optimizer. Expected detector response was calculated using a radiotherapy treatment plan in combination with the fluence model and detector model. MLC leaf positions were reconstructed by minimizing differences between expected and measured detector response. The iterative reconstruction method was evaluated for an Elekta SLi with 10.0 mm MLC leafs in combination with the COMPASS system and the MatriXX Evolution (IBA Dosimetry) detector with a spacing of 7.62 mm. The detector was positioned in such a way that each leaf pair of the MLC was aligned with one row of ionization chambers. Known leaf displacements were introduced in various field geometries ranging from  -10.0 mm to 10.0 mm. Error detection performance was tested for MLC leaf position dependency relative to the detector position, gantry angle dependency, monitor unit dependency, and for ten clinical intensity modulated radiotherapy (IMRT) treatment beams. For one clinical head and neck IMRT treatment beam, influence of the iterative reconstruction method on existing 3D dose reconstruction artifacts was evaluated. The described iterative reconstruction method was capable of individual MLC leaf position reconstruction with millimeter accuracy, independent of the relative detector position within the range of clinically applied MU's for IMRT. Dose reconstruction artifacts in a clinical IMRT treatment beam were considerably reduced as compared to the current dose verification procedure. The iterative reconstruction method allows high accuracy 3D dose verification by including actual MLC leaf positions reconstructed from low resolution 2D measurements.
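
    The loop described above can be pictured as a nonlinear least-squares fit of leaf positions to the measured array response through a combined fluence/detector forward model. The sketch below is only that generic picture: the `forward_model` callable, the use of `scipy.optimize.least_squares`, and the Levenberg-Marquardt option are assumptions for illustration, not details taken from the paper.

    ```python
    import numpy as np
    from scipy.optimize import least_squares

    def reconstruct_leaf_positions(measured_response, planned_positions, forward_model):
        """Iteratively refine MLC leaf positions (sketch).

        forward_model(positions) -> expected low-resolution detector response;
        it is assumed to wrap the fluence model and the detector model.
        """
        def residual(positions):
            return forward_model(positions) - measured_response

        fit = least_squares(residual, x0=np.asarray(planned_positions, float), method="lm")
        return fit.x  # reconstructed leaf positions (same units as the plan, e.g. mm)
    ```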

  4. Rapid construction of pinhole SPECT system matrices by distance-weighted Gaussian interpolation method combined with geometric parameter estimations

    NASA Astrophysics Data System (ADS)

    Lee, Ming-Wei; Chen, Yi-Chun

    2014-02-01

    In pinhole SPECT applied to small-animal studies, it is essential to have an accurate imaging system matrix, called H matrix, for high-spatial-resolution image reconstructions. Generally, an H matrix can be obtained by various methods, such as measurements, simulations or some combinations of both methods. In this study, a distance-weighted Gaussian interpolation method combined with geometric parameter estimations (DW-GIMGPE) is proposed. It utilizes a simplified grid-scan experiment on selected voxels and parameterizes the measured point response functions (PRFs) into 2D Gaussians. The PRFs of missing voxels are interpolated by the relations between the Gaussian coefficients and the geometric parameters of the imaging system with distance-weighting factors. The weighting factors are related to the projected centroids of voxels on the detector plane. A full H matrix is constructed by combining the measured and interpolated PRFs of all voxels. The PRFs estimated by DW-GIMGPE showed similar profiles as the measured PRFs. OSEM reconstructed images of a hot-rod phantom and normal rat myocardium demonstrated the effectiveness of the proposed method. The detectability of a SKE/BKE task on a synthetic spherical test object verified that the constructed H matrix provided comparable detectability to that of the H matrix acquired by a full 3D grid-scan experiment. The reduction in the acquisition time of a full 1.0-mm grid H matrix was about 15.2 and 62.2 times with the simplified grid pattern on 2.0-mm and 4.0-mm grid, respectively. A finer-grid H matrix down to 0.5-mm spacing interpolated by the proposed method would shorten the acquisition time by 8 times, additionally.
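
    The interpolation step can be read as an inverse-distance-weighted average of the 2D-Gaussian PRF coefficients of measured voxels, with distances taken between projected centroids on the detector plane. The inverse-power weight law and the coefficient layout in the sketch below are assumptions for illustration.

    ```python
    import numpy as np

    def interpolate_prf_coeffs(measured_centroids, measured_coeffs, query_centroid, power=2.0):
        """Distance-weighted interpolation of 2D-Gaussian PRF coefficients (sketch).

        measured_centroids : (n, 2) projected centroids of measured voxels on the detector.
        measured_coeffs    : (n, k) Gaussian parameters (e.g. amplitude, centre, widths).
        query_centroid     : (2,)   projected centroid of the voxel to be interpolated.
        """
        d = np.linalg.norm(measured_centroids - query_centroid, axis=1)
        w = 1.0 / np.maximum(d, 1e-9) ** power     # inverse-distance weights (assumed form)
        w /= w.sum()
        return measured_coeffs.T @ w               # weighted average of each coefficient
    ```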

  5. Maximum likelihood inference implies a high, not a low, ancestral haploid chromosome number in Araceae, with a critique of the bias introduced by ‘x’

    PubMed Central

    Cusimano, Natalie; Sousa, Aretuza; Renner, Susanne S.

    2012-01-01

    Background and Aims For 84 years, botanists have relied on calculating the highest common factor for series of haploid chromosome numbers to arrive at a so-called basic number, x. This was done without consistent (reproducible) reference to species relationships and frequencies of different numbers in a clade. Likelihood models that treat polyploidy, chromosome fusion and fission as events with particular probabilities now allow reconstruction of ancestral chromosome numbers in an explicit framework. We have used a modelling approach to reconstruct chromosome number change in the large monocot family Araceae and to test earlier hypotheses about basic numbers in the family. Methods Using a maximum likelihood approach and chromosome counts for 26 % of the 3300 species of Araceae and representative numbers for each of the other 13 families of Alismatales, polyploidization events and single chromosome changes were inferred on a genus-level phylogenetic tree for 113 of the 117 genera of Araceae. Key Results The previously inferred basic numbers x = 14 and x = 7 are rejected. Instead, maximum likelihood optimization revealed an ancestral haploid chromosome number of n = 16, Bayesian inference of n = 18. Chromosome fusion (loss) is the predominant inferred event, whereas polyploidization events occurred less frequently and mainly towards the tips of the tree. Conclusions The bias towards low basic numbers (x) introduced by the algebraic approach to inferring chromosome number changes, prevalent among botanists, may have contributed to an unrealistic picture of ancestral chromosome numbers in many plant clades. The availability of robust quantitative methods for reconstructing ancestral chromosome numbers on molecular phylogenetic trees (with or without branch length information), with confidence statistics, makes the calculation of x an obsolete approach, at least when applied to large clades. PMID:22210850

  6. Partition-based acquisition model for speed up navigated beta-probe surface imaging

    NASA Astrophysics Data System (ADS)

    Monge, Frédéric; Shakir, Dzhoshkun I.; Navab, Nassir; Jannin, Pierre

    2016-03-01

    Although gross total resection in low-grade glioma surgery leads to a better patient outcome, the in-vivo control of resection borders remains challenging. For this purpose, navigated beta-probe systems combined with 18F-based radiotracers, relying on estimation of the activity distribution on the surface, have been proposed to generate reconstructed images. Early studies have outlined the clinical relevance of the intraoperative functional information they provide, although the reconstructions have low spatial resolution. To improve reconstruction quality, multiple acquisition models have been proposed. They involve the definition of an attenuation matrix describing the radiation detection physics, yet they require high computational power for efficient intraoperative use. To address this problem, we propose a new acquisition model, called the Partition Model (PM), building on an existing model in which the coefficients of the matrix are taken from a look-up table (LUT). Our model is based on dividing the LUT into averaged homogeneous values for assigning attenuation coefficients. We validated our model using in vitro datasets in which tumors and peri-tumoral tissues were simulated. We compared our acquisition model with the off-the-shelf LUT and the raw method. Both acquisition models outperformed the raw method in terms of tumor contrast (7.97:1 mean T:B) but are harder to use in real time. Both acquisition models reached the same detection performance against the references (0.8 mean AUC and 0.77 mean NCC), while PM slightly improves the mean tumor contrast (10.1:1 vs 9.9:1 with the LUT model) and, more importantly, reduces the mean computation time by 7.5%. Our model gives a faster solution for intraoperative use of a navigated beta-probe surface imaging system, with improved image quality.
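
    One way to read the Partition Model is as a quantization of the attenuation look-up table: entries are grouped into a small number of partitions and each entry is replaced by its partition mean, so the acquisition model handles only a few distinct coefficients. The value-based grouping in the sketch below is an assumption about how the "averaged homogeneous values" are formed.

    ```python
    import numpy as np

    def partition_lut(lut, n_parts=8):
        """Replace each look-up-table entry by the mean of its partition (sketch)."""
        edges = np.linspace(lut.min(), lut.max(), n_parts + 1)
        idx = np.clip(np.digitize(lut, edges) - 1, 0, n_parts - 1)
        means = np.array([lut[idx == k].mean() if np.any(idx == k) else 0.0
                          for k in range(n_parts)])
        return means[idx]
    ```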

  7. Some Nonlinear Reconstruction Algorithms for Electrical Impedance Tomography

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Berryman, J G

    2001-03-09

    An impedance camera [Henderson and Webster, 1978; Dines and Lytle, 1981]--or what is now more commonly called electrical impedance tomography--attempts to image the electrical impedance (or just the conductivity) distribution inside a body using electrical measurements on its boundary. The method has been used successfully in both biomedical [Brown, 1983; Barber and Brown, 1986; J. C. Newell, D. G. Gisser, and D. Isaacson, 1988; Webster, 1990] and geophysical applications [Wexler, Fry, and Neuman, 1985; Daily, Lin, and Buscheck, 1987], but the analysis of optimal reconstruction algorithms is still progressing [Murai and Kagawa, 1985; Wexler, Fry, and Neuman, 1985; Kohn and Vogelius, 1987; Yorkey and Webster, 1987; Yorkey, Webster, and Tompkins, 1987; Berryman and Kohn, 1990; Kohn and McKenney, 1990; Santosa and Vogelius, 1990; Yorkey, 1990]. The most common application is monitoring the influx or efflux of a highly conducting fluid (such as brine in a porous rock or blood in the human body) through the volume being imaged. For biomedical applications, this method does not have the resolution of radiological methods, but it is comparatively safe and inexpensive and therefore provides a valuable alternative when continuous monitoring of a patient or process is desired. The following discussion is intended first to summarize the physics of electrical impedance tomography, then to provide a few details of the data analysis and forward modeling requirements, and finally to outline some of the reconstruction algorithms that have proven to be most useful in practice. Pointers to the literature are provided throughout this brief narrative and the reader is encouraged to explore the references for more complete discussions of the various issues raised here.

  8. Quantitative damage imaging using Lamb wave diffraction tomography

    NASA Astrophysics Data System (ADS)

    Zhang, Hai-Yan; Ruan, Min; Zhu, Wen-Fa; Chai, Xiao-Dong

    2016-12-01

    In this paper, we investigate diffraction tomography for quantitative imaging of damage in the form of partly-through-thickness holes of various shapes in isotropic plates, using numerically generated converted and non-converted scattered Lamb waves. Finite element simulations are carried out to provide the scattered wave data. The validity of the finite element model is confirmed by comparing the scattering directivity pattern (SDP) of a circular blind-hole damage between the finite element simulations and analytical results. The imaging method is based on a theoretical relation between the one-dimensional (1D) Fourier transform of the scattered projection and the two-dimensional (2D) spatial Fourier transform of the scattering object. A quantitative image of the damage is obtained by carrying out the 2D inverse Fourier transform of the scattering object. The proposed approach employs a circular transducer network containing forward and backward projections, which lead to so-called transmission-mode (TMDT) and reflection-mode diffraction tomography (RMDT), respectively. The reconstructed results of the two projections for a non-converted S0 scattered mode are investigated to illuminate the influence of the scattering field data. The results show that Lamb wave diffraction tomography using the combination of TMDT and RMDT improves the imaging compared with using only TMDT or RMDT. The scattered data of the converted A0 mode are also used to assess the performance of the diffraction tomography method. It is found that the circular and elliptical shaped damages can still be reasonably identified from the reconstructed images, while the reconstructed results for other, more complex shaped damages such as crisscross rectangles and racecourse shapes are relatively poor. Project supported by the National Natural Science Foundation of China (Grant Nos. 11474195, 11274226, 11674214, and 51478258).

  9. The structure of reconstructed chalcopyrite surfaces

    NASA Astrophysics Data System (ADS)

    Thinius, Sascha; Islam, Mazharul M.; Bredow, Thomas

    2018-03-01

    Chalcopyrite (CuFeS2) surfaces are of major interest for copper exploitation in aqueous solution, called leaching. Since leaching is a surface process, knowledge of the surface structure, bonding pattern and oxidation states is important for improving its efficiency. At present such information is not available from experimental studies. Therefore a detailed computational study of chalcopyrite surfaces is performed. The structures of low-index stoichiometric chalcopyrite surfaces {hkl} h, k, l ∈ {0, 1, 2} have been studied with density functional theory (DFT) and global optimization strategies. We have applied ab initio molecular dynamics (MD) in combination with simulated annealing (SA) in order to explore possible reconstructions via a minima hopping (MH) algorithm. In almost all cases reconstruction involving substantial rearrangement has occurred, accompanied by a reduction of the surface energy. The analysis of the changes in the coordination sphere and of the migration during reconstruction reveals that S-S dimers are formed on the surface. Further, it was observed that metal atoms near the surface move toward the bulk, forming metal alloys passivated by sulfur. The obtained surface energies of reconstructed surfaces are in the range of 0.53-0.95 J/m2.

  10. 'Reverse expansion': A new technique of breast reconstruction with autologous tissue.

    PubMed

    Fabiocchi, L; Semprini, G; Cattin, F; Dellachiesa, L; Fogacci, T; Frisoni, G; Samorani, D

    2017-11-01

    The treatment for breast cancer is sometimes long and requires a multidisciplinary approach. In 2010, in our centre, we began to perform fat grafting for breast reconstruction using the so-called 'reverse expansion' technique. This consists of the insertion of a skin expander during mastectomy, its expansion, and then its gradual deflation in the surgical theatre during fat grafting. We performed a complete breast reconstruction in 57 patients by reverse expansion. We harvested fat from areas of excess fat using a normal liposuction cannula. From each patient, an average of 640 cc of fat was collected and then centrifuged at 4000 rpm for 3 min. The obtained adipocytes were then injected into the operated breast using a normal lipofilling cannula. We injected an average of 318.05 cc of adipocytes per patient per session. The average number of sessions per patient was 3.6. Reverse expansion can be a safe and effective technique for breast reconstruction in all breast cancer patients. Copyright © 2017 British Association of Plastic, Reconstructive and Aesthetic Surgeons. Published by Elsevier Ltd. All rights reserved.

  11. A reconstruction of global hydroclimate and dynamical variables over the Common Era.

    PubMed

    Steiger, Nathan J; Smerdon, Jason E; Cook, Edward R; Cook, Benjamin I

    2018-05-22

    Hydroclimate extremes critically affect human and natural systems, but there remain many unanswered questions about their causes and how to interpret their dynamics in the past and in climate change projections. These uncertainties are due, in part, to the lack of long-term, spatially resolved hydroclimate reconstructions and information on the underlying physical drivers for many regions. Here we present the first global reconstructions of hydroclimate and associated climate dynamical variables over the past two thousand years. We use a data assimilation approach tailored to reconstruct hydroclimate that optimally combines 2,978 paleoclimate proxy-data time series with the physical constraints of an atmosphere-ocean climate model. The global reconstructions are annually or seasonally resolved and include two spatiotemporal drought indices, near-surface air temperature, an index of North Atlantic variability, the location of the intertropical convergence zone, and monthly Niño indices. This database, called the Paleo Hydrodynamics Data Assimilation product (PHYDA), will provide a critical new platform for investigating the causes of past climate variability and extremes, while informing interpretations of future hydroclimate projections.

  12. ACL reconstruction

    MedlinePlus

    ... Your hamstrings are the muscles behind your knee. Tissue taken from a donor is called an allograft. The procedure is usually performed with the help of knee arthroscopy. With arthroscopy, a tiny camera is inserted into ... ligaments and other tissues of your knee. Your surgeon will make other ...

  13. A shape-based quality evaluation and reconstruction method for electrical impedance tomography.

    PubMed

    Antink, Christoph Hoog; Pikkemaat, Robert; Malmivuo, Jaakko; Leonhardt, Steffen

    2015-06-01

    Linear methods of reconstruction play an important role in medical electrical impedance tomography (EIT) and there is a wide variety of algorithms based on several assumptions. With the Graz consensus reconstruction algorithm for EIT (GREIT), a novel linear reconstruction algorithm as well as a standardized framework for evaluating and comparing methods of reconstruction were introduced that found widespread acceptance in the community. In this paper, we propose a two-sided extension of this concept by first introducing a novel method of evaluation. Instead of being based on point-shaped resistivity distributions, we use 2759 pairs of real lung shapes for evaluation that were automatically segmented from human CT data. Necessarily, the figures of merit defined in GREIT were adjusted. Second, a linear method of reconstruction that uses orthonormal eigenimages as training data and a tunable desired point spread function are proposed. Using our novel method of evaluation, this approach is compared to the classical point-shaped approach. Results show that most figures of merit improve with the use of eigenimages as training data. Moreover, the possibility of tuning the reconstruction by modifying the desired point spread function is shown. Finally, the reconstruction of real EIT data shows that higher contrasts and fewer artifacts can be achieved in ventilation- and perfusion-related images.
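
    The training step for such a linear reconstructor can be summarized as a regularized least-squares fit from simulated measurements of the training images (here eigenimages) to their desired reconstructions, where the desired images encode the tunable point spread function. The ridge-type solution below is a generic sketch of that fit; the variable shapes and the noise regularization term are assumptions, not the paper's exact formulation.

    ```python
    import numpy as np

    def train_linear_reconstructor(V_train, X_desired, noise_var=1e-3):
        """Fit a linear EIT reconstruction matrix R with x_hat = R @ v (sketch).

        V_train   : (n_meas, n_train) simulated voltage differences, one training image per column.
        X_desired : (n_pix,  n_train) desired reconstructions (training images blurred by the
                    chosen desired point spread function).
        """
        G = V_train @ V_train.T + noise_var * np.eye(V_train.shape[0])
        return X_desired @ V_train.T @ np.linalg.inv(G)
    ```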

  14. Evaluation of a 3D point cloud tetrahedral tomographic reconstruction method

    PubMed Central

    Pereira, N F; Sitek, A

    2011-01-01

    Tomographic reconstruction on an irregular grid may be superior to reconstruction on a regular grid. This is achieved through an appropriate choice of the image space model, the selection of an optimal set of points and the use of any available prior information during the reconstruction process. Accordingly, a number of reconstruction-related parameters must be optimized for best performance. In this work, a 3D point cloud tetrahedral mesh reconstruction method is evaluated for quantitative tasks. A linear image model is employed to obtain the reconstruction system matrix and five point generation strategies are studied. The evaluation is performed using the recovery coefficient, as well as voxel- and template-based estimates of bias and variance measures, computed over specific regions in the reconstructed image. A similar analysis is performed for regular grid reconstructions that use voxel basis functions. The maximum likelihood expectation maximization reconstruction algorithm is used. For the tetrahedral reconstructions, of the five point generation methods that are evaluated, three use image priors. For evaluation purposes, an object consisting of overlapping spheres with varying activity is simulated. The exact parallel projection data of this object are obtained analytically using a parallel projector, and multiple Poisson noise realizations of these exact data are generated and reconstructed using the different point generation strategies. The unconstrained nature of point placement in some of the irregular mesh-based reconstruction strategies has superior activity recovery for small, low-contrast image regions. The results show that, with an appropriately generated set of mesh points, the irregular grid reconstruction methods can out-perform reconstructions on a regular grid for mathematical phantoms, in terms of the performance measures evaluated. PMID:20736496

  15. Penalized maximum likelihood reconstruction for x-ray differential phase-contrast tomography

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brendel, Bernhard, E-mail: bernhard.brendel@philips.com; Teuffenbach, Maximilian von; Noël, Peter B.

    2016-01-15

    Purpose: The purpose of this work is to propose a cost function with regularization to iteratively reconstruct attenuation, phase, and scatter images simultaneously from differential phase contrast (DPC) acquisitions, without the need of phase retrieval, and examine its properties. Furthermore this reconstruction method is applied to an acquisition pattern that is suitable for a DPC tomographic system with continuously rotating gantry (sliding window acquisition), overcoming the severe smearing in noniterative reconstruction. Methods: We derive a penalized maximum likelihood reconstruction algorithm to directly reconstruct attenuation, phase, and scatter image from the measured detector values of a DPC acquisition. The proposed penalty comprises, for each of the three images, an independent smoothing prior. Image quality of the proposed reconstruction is compared to images generated with FBP and iterative reconstruction after phase retrieval. Furthermore, the influence between the priors is analyzed. Finally, the proposed reconstruction algorithm is applied to experimental sliding window data acquired at a synchrotron and results are compared to reconstructions based on phase retrieval. Results: The results show that the proposed algorithm significantly increases image quality in comparison to reconstructions based on phase retrieval. No significant mutual influence between the proposed independent priors could be observed. Further it could be illustrated that the iterative reconstruction of a sliding window acquisition results in images with substantially reduced smearing artifacts. Conclusions: Although the proposed cost function is inherently nonconvex, it can be used to reconstruct images with less aliasing artifacts and less streak artifacts than reconstruction methods based on phase retrieval. Furthermore, the proposed method can be used to reconstruct images of sliding window acquisitions with negligible smearing artifacts.
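
    The cost function described above can be sketched as a data negative log-likelihood plus three independent roughness penalties, one per image (attenuation, phase, scatter). The Poisson likelihood, the first-difference penalty, and the dictionary layout below are illustrative assumptions; the paper's actual noise model and priors may differ.

    ```python
    import numpy as np

    def dpc_objective(images, measured, forward_model, betas):
        """Penalized-likelihood objective for DPC reconstruction (sketch).

        images        : dict with 'mu' (attenuation), 'phi' (phase) and 's' (scatter) arrays.
        forward_model : callable returning expected detector counts for all phase steps.
        betas         : dict of penalty weights for the three independent smoothing priors.
        """
        expected = forward_model(images)
        nll = np.sum(expected - measured * np.log(expected + 1e-12))  # Poisson NLL (up to const.)

        def roughness(img):  # simple squared first differences
            return np.sum(np.diff(img, axis=0) ** 2) + np.sum(np.diff(img, axis=1) ** 2)

        return nll + sum(betas[k] * roughness(images[k]) for k in ("mu", "phi", "s"))
    ```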

  16. Evaluation of a 3D point cloud tetrahedral tomographic reconstruction method

    NASA Astrophysics Data System (ADS)

    Pereira, N. F.; Sitek, A.

    2010-09-01

    Tomographic reconstruction on an irregular grid may be superior to reconstruction on a regular grid. This is achieved through an appropriate choice of the image space model, the selection of an optimal set of points and the use of any available prior information during the reconstruction process. Accordingly, a number of reconstruction-related parameters must be optimized for best performance. In this work, a 3D point cloud tetrahedral mesh reconstruction method is evaluated for quantitative tasks. A linear image model is employed to obtain the reconstruction system matrix and five point generation strategies are studied. The evaluation is performed using the recovery coefficient, as well as voxel- and template-based estimates of bias and variance measures, computed over specific regions in the reconstructed image. A similar analysis is performed for regular grid reconstructions that use voxel basis functions. The maximum likelihood expectation maximization reconstruction algorithm is used. For the tetrahedral reconstructions, of the five point generation methods that are evaluated, three use image priors. For evaluation purposes, an object consisting of overlapping spheres with varying activity is simulated. The exact parallel projection data of this object are obtained analytically using a parallel projector, and multiple Poisson noise realizations of these exact data are generated and reconstructed using the different point generation strategies. The unconstrained nature of point placement in some of the irregular mesh-based reconstruction strategies has superior activity recovery for small, low-contrast image regions. The results show that, with an appropriately generated set of mesh points, the irregular grid reconstruction methods can out-perform reconstructions on a regular grid for mathematical phantoms, in terms of the performance measures evaluated.

  17. 360° Fourier transform profilometry in surface reconstruction for fluorescence molecular tomography.

    PubMed

    Shi, Bi'er; Zhang, Bin; Liu, Fei; Luo, Jianwen; Bai, Jing

    2013-05-01

    Fluorescence molecular tomography (FMT) is an emerging tool in the observation of diseases. A fast and accurate surface reconstruction of the experimental object is needed as a boundary constraint for FMT reconstruction. In this paper, an automatic, noncontact, and 3-D surface reconstruction method named 360° Fourier transform profilometry (FTP) is proposed to reconstruct 3-D surface profiles for an FMT system. This method can reconstruct 360° integrated surface profiles utilizing single-frame FTP at different angles. Results show that the relative mean error of the surface reconstruction of this method is less than 1.4% in phantom experiments, and is no more than 2.9% in mouse experiments in vivo. Compared with the Radon transform method, the proposed method reduces the computation time by more than 90% with a minimal error increase. Finally, a combined 360° FTP/FMT experiment is conducted on a nude mouse. Not only can the 360° FTP system operate with the FMT system simultaneously, but it can also help to monitor the status of the animals. Moreover, the 360° FTP system is independent of the FMT system and can be used to reconstruct the surface by itself.
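
    Single-frame FTP extracts the phase of a deformed fringe pattern by isolating the fundamental lobe in the Fourier domain and shifting it to baseband; the unwrapped phase is then proportional to the surface height. The one-row sketch below assumes a known carrier bin and filter half-width, which in practice would be estimated from a reference fringe pattern.

    ```python
    import numpy as np

    def ftp_phase_row(fringe_row, carrier_bin, half_width):
        """Fourier-transform-profilometry phase extraction on a single image row (sketch)."""
        F = np.fft.fft(fringe_row.astype(float))
        kept = np.zeros_like(F)
        lo, hi = carrier_bin - half_width, carrier_bin + half_width + 1
        kept[lo:hi] = F[lo:hi]                              # keep only the fundamental lobe
        analytic = np.fft.ifft(np.roll(kept, -carrier_bin)) # shift the carrier to baseband
        return np.unwrap(np.angle(analytic))                # phase, proportional to height
    ```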

  18. Reconstructing source terms from atmospheric concentration measurements: Optimality analysis of an inversion technique

    NASA Astrophysics Data System (ADS)

    Turbelin, Grégory; Singh, Sarvesh Kumar; Issartel, Jean-Pierre

    2014-12-01

    In the event of an accidental or intentional contaminant release in the atmosphere, it is imperative, for managing emergency response, to diagnose the release parameters of the source from measured data. Reconstruction of the source information exploiting measured data is called an inverse problem. To solve such a problem, several techniques are currently being developed. The first part of this paper provides a detailed description of one of them, known as the renormalization method. This technique, proposed by Issartel (2005), has been derived using an approach different from that of standard inversion methods and gives a linear solution to the continuous Source Term Estimation (STE) problem. In the second part of this paper, the discrete counterpart of this method is presented. By using matrix notation, common in data assimilation and suitable for numerical computing, it is shown that the discrete renormalized solution belongs to a family of well-known inverse solutions (minimum weighted norm solutions), which can be computed by using the concept of generalized inverse operator. It is shown that, when the weight matrix satisfies the renormalization condition, this operator satisfies the criteria used in geophysics to define good inverses. Notably, by means of the Model Resolution Matrix (MRM) formalism, we demonstrate that the renormalized solution fulfils optimal properties for the localization of single point sources. Throughout the article, the main concepts are illustrated with data from a wind tunnel experiment conducted at the Environmental Flow Research Centre at the University of Surrey, UK.
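
    The generalized-inverse step underlying the discrete renormalized solution is a minimum weighted-norm inverse. The sketch below shows only that step; the renormalization iteration that chooses the weights so that the consistency condition holds is omitted, and the variable layout is an assumption for illustration.

    ```python
    import numpy as np

    def min_weighted_norm_source(A, y, w):
        """Minimum weighted-norm source estimate (sketch of the generalized-inverse step).

        A : (n_meas, n_cells) matrix linking the discretized source field to the measurements.
        w : (n_cells,) positive weights; the renormalization method iterates on these weights
            until its consistency condition holds, an outer loop omitted here.
        Returns s minimizing sum(w * s**2) subject to A @ s = y.
        """
        Winv = np.diag(1.0 / np.asarray(w, float))
        G = A @ Winv @ A.T
        return Winv @ A.T @ np.linalg.solve(G, y)
    ```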

  19. Automatic Fontanel Extraction from Newborns' CT Images Using Variational Level Set

    NASA Astrophysics Data System (ADS)

    Kazemi, Kamran; Ghadimi, Sona; Lyaghat, Alireza; Tarighati, Alla; Golshaeyan, Narjes; Abrishami-Moghaddam, Hamid; Grebe, Reinhard; Gondary-Jouet, Catherine; Wallois, Fabrice

    A realistic head model is needed for source localization methods used for the study of epilepsy in neonates applying Electroencephalographic (EEG) measurements from the scalp. The earliest models consider the head as a series of concentric spheres, each layer corresponding to a different tissue whose conductivity is assumed to be homogeneous. The results of the source reconstruction depend highly on the electric conductivities of the tissues forming the head. The most used model is constituted of three layers (scalp, skull, and intracranial). Most of the major bones of the neonates’ skull are ossified at birth but can slightly move relative to each other. This is due to the sutures, fibrous membranes that at this stage of development connect the already ossified flat bones of the neurocranium. These weak parts of the neurocranium are called fontanels. Thus it is important to enter the exact geometry of the fontanels and flat bones in a source reconstruction, because they show pronounced differences in conductivity. Computed tomography (CT) imaging provides an excellent tool for non-invasive investigation of the skull, which appears in high contrast to all other tissues, while the fontanels can only be identified as an absence of bone, i.e., gaps in the skull between the flat bones. Therefore, the aim of this paper is to extract the fontanels from CT images applying a variational level set method. We applied the proposed method to CT images of five different subjects. The automatically extracted fontanels show good agreement with the manually extracted ones.

  20. Reconstruction After Hemipelvectomy With the Ice-Cream Cone Prosthesis: What Are the Short-term Clinical Results?

    PubMed

    Barrientos-Ruiz, Irene; Ortiz-Cruz, Eduardo José; Peleteiro-Pensado, Manuel

    2017-03-01

    Reconstruction after internal hemipelvectomy resection likely provides better function than hindquarter amputation. However, many reconstruction methods have been used, complications with these approaches are common, and function often is poor; because of these issues, it seems important to investigate alternative implants and surgical techniques. The purposes of this study were (1) to identify the frequency of surgical site complications and infection associated with the use of the Ice-Cream Cone prosthesis for reconstruction after hemipelvectomy for oncological indications; (2) to evaluate the Musculoskeletal Tumor Society (MSTS) outcomes scores in a small group of patients treated with this implant in the short term; and (3) to quantify the surgical margins and frequency of local recurrence in the short term in this group of patients. Between 2008 and 2013, one center performed a total of 27 internal hemipelvectomies for oncological indications. Of those, 23 (85%) were treated with reconstruction. Our general indications for reconstruction were patients whose pelvic stability was affected by the resection and whose general condition was sufficiently strong to tolerate the reconstructive procedure. Of those patients undergoing reconstruction, 14 (61%) were treated with an Ice-Cream Cone-style implant (Coned ® ; Stanmore Worldwide Ltd, Elstree, UK; and Socincer ® custom-made implant for the pelvis, Gijón, Spain), whereas nine others were treated with other implants or allografts. The indications during this time for using the Ice-Cream Cone implant were pelvic tumors affecting the periacetabular area without iliac wing involvement. Of those 14, 10 were available for followup at a minimum of 2 years (median, 3 years; range, 2-5 years) unless a study endpoint (wound complication, infection, or local recurrence) was observed earlier. Study endpoints were ascertained by chart review performed by one of the authors. Surgical site complications occurred in five patients. Of those, two developed superficial infections with necrosis, two developed deep infections, and one patient developed wound necrosis without apparent infection. No prostheses were removed as a result of these complications [corrected]. Median MSTS score was 19 out of 30 when 0 is the worst possible result and 30 a perfect function and emotional status. Five of seven primary tumors had wide margin surgery and three of seven developed local recurrences by the end of the followup. Pelvic reconstruction with the Ice-Cream Cone prosthesis yielded fair functional results at short-term followup. Longer term surveillance is called for to see whether this implant will represent an improvement over available reconstructive alternatives such as allograft, custom-made implants, and saddle prostheses. We are cautiously optimistic and continue to use this implant when we need to reconstruct the periacetabular area in patients without Enneking Zone 1 involvement. Level IV, therapeutic study.

  1. Nonnegative definite EAP and ODF estimation via a unified multi-shell HARDI reconstruction.

    PubMed

    Cheng, Jian; Jiang, Tianzi; Deriche, Rachid

    2012-01-01

    In High Angular Resolution Diffusion Imaging (HARDI), the Orientation Distribution Function (ODF) and the Ensemble Average Propagator (EAP) are two important Probability Density Functions (PDFs) which reflect the water diffusion and fiber orientations. Spherical Polar Fourier Imaging (SPFI) is a recent model-free multi-shell HARDI method which estimates both the EAP and the ODF from diffusion signals with multiple b values. As physical PDFs, ODFs and EAPs are nonnegative definite in their respective domains S2 and R3. However, existing ODF/EAP estimation methods like SPFI seldom consider this natural constraint. Although some works considered the nonnegativity constraint on given discrete samples of the ODF/EAP, the estimated ODF/EAP is not guaranteed to be nonnegative definite in the whole continuous domain. The Riemannian framework for ODFs and EAPs has been proposed via the square-root parameterization based on ODFs and EAPs pre-estimated by other methods like SPFI. However, there is no work on how to estimate the square root of the ODF/EAP, called the wavefunction, directly from diffusion signals. In this paper, based on the Riemannian framework for ODFs/EAPs and the Spherical Polar Fourier (SPF) basis representation, we propose a unified model-free multi-shell HARDI method, named Square Root Parameterized Estimation (SRPE), to simultaneously estimate both the wavefunction of the EAP and the nonnegative definite ODFs and EAPs from diffusion signals. The experiments on synthetic data and real data showed that SRPE is more robust to noise and has better EAP reconstruction than SPFI, especially for EAP profiles at large radius.

  2. Method of Breast Reconstruction Determines Venous Thromboembolism Risk Better Than Current Prediction Models

    PubMed Central

    Patel, Niyant V.; Wagner, Douglas S.

    2015-01-01

    Background: Venous thromboembolism (VTE) risk models including the Davison risk score and the 2005 Caprini risk assessment model have been validated in plastic surgery patients. However, their utility and predictive value in breast reconstruction has not been well described. We sought to determine the utility of current VTE risk models in this population and the VTE rate observed in various methods of breast reconstruction. Methods: A retrospective review of breast reconstructions by a single surgeon was performed. One hundred consecutive transverse rectus abdominis myocutaneous (TRAM) patients, 100 consecutive implant patients, and 100 consecutive latissimus dorsi patients were identified over a 10-year period. Patient demographics and presence of symptomatic VTE were collected. 2005 Caprini risk scores and Davison risk scores were calculated for each patient. Results: The TRAM reconstruction group was found to have a higher VTE rate (6%) than the implant (0%) and latissimus (0%) reconstruction groups (P < 0.01). Mean Davison risk scores and 2005 Caprini scores were similar across all reconstruction groups (P > 0.1). The vast majority of patients were stratified as high risk (87.3%) by the VTE risk models. However, only TRAM reconstruction patients demonstrated significant VTE risk. Conclusions: TRAM reconstruction appears to have a significantly higher risk of VTE than both implant and latissimus reconstruction. Current risk models do not effectively stratify breast reconstruction patients at risk for VTE. The method of breast reconstruction appears to have a significant role in patients’ VTE risk. PMID:26090287

  3. Exploring Normalization and Network Reconstruction Methods using In Silico and In Vivo Models

    EPA Science Inventory

    Abstract: Lessons learned from the recent DREAM competitions include: The search for the best network reconstruction method continues, and we need more complete datasets with ground truth from more complex organisms. It has become obvious that the network reconstruction methods t...

  4. Multi-grid finite element method used for enhancing the reconstruction accuracy in Cerenkov luminescence tomography

    NASA Astrophysics Data System (ADS)

    Guo, Hongbo; He, Xiaowei; Liu, Muhan; Zhang, Zeyu; Hu, Zhenhua; Tian, Jie

    2017-03-01

    Cerenkov luminescence tomography (CLT), as a promising optical molecular imaging modality, can be applied to cancer diagnosis and therapy. Most research on CLT reconstruction is based on the finite element method (FEM) framework. However, the quality of the FEM mesh grid remains a vital factor restricting the accuracy of the CLT reconstruction results. In this paper, we propose a multi-grid finite element method framework that is able to improve the accuracy of the reconstruction. Meanwhile, the multilevel scheme adaptive algebraic reconstruction technique (MLS-AART), based on a modified iterative algorithm, was applied to improve the reconstruction accuracy. The feasibility of our proposed method was evaluated in numerical simulation experiments. Results showed that the multi-grid strategy could obtain 3D spatial information of the Cerenkov source more accurately compared with the traditional single-grid FEM.

  5. Restoration of virginity: women's demand and health care providers' response in Switzerland.

    PubMed

    Tschudin, Sibil; Schuster, Sylvie; Dumont dos Santos, Denise; Huang, Dorothy; Bitzer, Johannes; Leeners, Brigitte

    2013-09-01

    As a result of transnational migration, health institutions are faced with growing demand for "restoration" of virginity. The practice of hymen reconstruction constitutes a challenge for health care providers in medical, ethical, judicial, social, and cultural dimensions, for which they are not well prepared. The aim of the presented nationwide survey was to investigate the experience of Swiss gynecologists with women requesting hymen reconstruction. A questionnaire specifically designed for this purpose was sent to 100 public hospitals. Main outcome measures included demands for (number of requests, origin of women) and attitudes toward hymen reconstruction (requests granted, decision-making for or against intervention, surgical technique applied, problems associated with the requests for hymen repair, cost coverage, need for further information) in Switzerland. The response rate was 68%. Of the 43 clinics (63.2%) confronted with requests for hymen reconstruction, 38 (90.5%) claimed to see up to five patients per year. The predominantly mentioned countries of origin were Turkey in the German-speaking part and Arab countries in the French-speaking part. More than half of the clinics (27/64.3%) reported that they always (12/28.6%) or mostly (15/35.7%) granted the request. Decision for surgery was made after intensive counseling in 44.2% and on demand of the patient after brief counseling in 32.7%. The so-called approximation method was the most frequently applied surgical technique. A third of the participants (19/35.2%) reported problems with confidentiality. More than half of the clinics expressed their need for further information on this topic. Hymen reconstruction is rarely performed in Switzerland, even though two-thirds of the responding hospitals are confronted with this issue several times per year. No guidelines exist on how health professionals should deal with these requests. Interdisciplinary research on how to meet the needs of women and health care providers in such cross-cultural encounters is needed. © 2013 International Society for Sexual Medicine.

  6. Transformation diffusion reconstruction of three-dimensional histology volumes from two-dimensional image stacks.

    PubMed

    Casero, Ramón; Siedlecka, Urszula; Jones, Elizabeth S; Gruscheski, Lena; Gibb, Matthew; Schneider, Jürgen E; Kohl, Peter; Grau, Vicente

    2017-05-01

    Traditional histology is the gold standard for tissue studies, but it is intrinsically reliant on two-dimensional (2D) images. Study of volumetric tissue samples such as whole hearts produces a stack of misaligned and distorted 2D images that need to be reconstructed to recover a congruent volume with the original sample's shape. In this paper, we develop a mathematical framework called Transformation Diffusion (TD) for stack alignment refinement as a solution to the heat diffusion equation. This general framework does not require contour segmentation, is independent of the registration method used, and is trivially parallelizable. After the first stack sweep, we also replace registration operations by operations in the space of transformations, several orders of magnitude faster and less memory-consuming. Implementing TD with operations in the space of transformations produces our Transformation Diffusion Reconstruction (TDR) algorithm, applicable to general transformations that are closed under inversion and composition. In particular, we provide formulas for translation and affine transformations. We also propose an Approximated TDR (ATDR) algorithm that extends the same principles to tensor-product B-spline transformations. Using TDR and ATDR, we reconstruct a full mouse heart at pixel size 0.92µm×0.92µm, cut 10µm thick, spaced 20µm (84G). Our algorithms employ only local information from transformations between neighboring slices, but the TD framework allows theoretical analysis of the refinement as applying a global Gaussian low-pass filter to the unknown stack misalignments. We also show that reconstruction without an external reference produces large shape artifacts in a cardiac specimen while still optimizing slice-to-slice alignment. To overcome this problem, we use a pre-cutting blockface imaging process previously developed by our group that takes advantage of Brewster's angle and a polarizer to capture the outline of only the topmost layer of wax in the block containing embedded tissue for histological sectioning. Copyright © 2017 The Authors. Published by Elsevier B.V. All rights reserved.
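
    Restricted to 2D translations, one sweep of the diffusion refinement can be written as an explicit heat-equation step along the stack; repeated sweeps act as the global Gaussian low-pass filter on the misalignments mentioned above. The translation-only setting, the replicated boundary handling, and the step size in the sketch below are illustrative assumptions, not the paper's general formulation.

    ```python
    import numpy as np

    def diffuse_translations(t, n_sweeps=50, alpha=0.25):
        """Heat-diffusion refinement of per-slice 2D translations (sketch).

        t : (n_slices, 2) translations estimated from neighbour-to-neighbour registrations.
        """
        t = np.asarray(t, float).copy()
        for _ in range(n_sweeps):
            padded = np.vstack([t[:1], t, t[-1:]])     # replicate the end slices
            laplacian = padded[:-2] - 2.0 * t + padded[2:]
            t = t + alpha * laplacian                  # explicit diffusion step along the stack
        return t
    ```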

  7. Optimizing and evaluating the reconstruction of Metagenome-assembled microbial genomes.

    PubMed

    Papudeshi, Bhavya; Haggerty, J Matthew; Doane, Michael; Morris, Megan M; Walsh, Kevin; Beattie, Douglas T; Pande, Dnyanada; Zaeri, Parisa; Silva, Genivaldo G Z; Thompson, Fabiano; Edwards, Robert A; Dinsdale, Elizabeth A

    2017-11-28

    Microbiome/host interactions describe characteristics that affect the host's health. Shotgun metagenomics includes sequencing a random subset of the microbiome to analyze its taxonomic and metabolic potential. Reconstruction of DNA fragments into genomes from metagenomes (called metagenome-assembled genomes) assigns unknown fragments to taxa/function and facilitates discovery of novel organisms. Genome reconstruction incorporates sequence assembly and sorting of assembled sequences into bins, characteristic of a genome. However, the microbial community composition, including taxonomic and phylogenetic diversity, may influence genome reconstruction. We determine the optimal reconstruction method for four microbiome projects that had variable sequencing platforms (IonTorrent and Illumina), diversity (high or low), and environment (coral reefs and kelp forests), using a set of parameters to select for optimal assembly and binning tools. We tested the effects of the assembly and binning processes on population genome reconstruction using 105 marine metagenomes from 4 projects. Reconstructed genomes were obtained from each project using 3 assemblers (IDBA, MetaVelvet, and SPAdes) and 2 binning tools (GroopM and MetaBat). We assessed the efficiency of assemblers using statistics including contig continuity and contig chimerism, and the effectiveness of binning tools using genome completeness and taxonomic identification. We concluded that SPAdes assembled more contigs (143,718 ± 124 contigs) of longer length (N50 = 1632 ± 108 bp), and incorporated the most sequences (sequences-assembled = 19.65%). The microbial richness and evenness were maintained across the assembly, suggesting a low level of chimeric contigs. The SPAdes assembly was responsive to the biological and technological variations within the project, compared with the other assemblers. Among binning tools, we conclude that MetaBat produced bins with less variation in GC content (average standard deviation: 1.49), low species richness (4.91 ± 0.66), and higher genome completeness (40.92 ± 1.75) across all projects. MetaBat extracted 115 bins from the 4 projects, of which 66 bins were identified as reconstructed metagenome-assembled genomes with sequences belonging to a specific genus. We identified 13 novel genomes, some of which were 100% complete, but show low similarity to genomes within databases. In conclusion, we present a set of biologically relevant parameters for evaluation to select for optimal assembly and binning tools. For the tools we tested, the SPAdes assembler and MetaBat binning tool reconstructed quality metagenome-assembled genomes for the four projects. We also conclude that metagenomes from microbial communities that have high coverage of phylogenetically distinct taxa, and low taxonomic diversity, result in the highest quality metagenome-assembled genomes.

  8. Blind deconvolution of 2-D and 3-D fluorescent micrographs

    NASA Astrophysics Data System (ADS)

    Krishnamurthi, Vijaykumar; Liu, Yi-Hwa; Holmes, Timothy J.; Roysam, Badrinath; Turner, James N.

    1992-06-01

    This paper presents recent results of our reconstructions of 3-D data from Drosophila chromosomes as well as our simulations with a refined version of the algorithm used in the former. It is well known that the calibration of the point spread function (PSF) of a fluorescence microscope is a tedious process and involves esoteric techniques in most cases. This problem is further compounded in the case of confocal microscopy where the measured intensities are usually low. A number of techniques have been developed to solve this problem, all of which are methods in blind deconvolution. These are so called because the measured PSF is not required in the deconvolution of degraded images from any optical system. Our own efforts in this area involved the maximum likelihood (ML) method, the numerical solution to which is obtained by the expectation maximization (EM) algorithm. Based on the reasonable early results obtained during our simulations with 2-D phantoms, we carried out experiments with real 3-D data. We found that the blind deconvolution method using the ML approach gave reasonable reconstructions. Next we tried to perform the reconstructions using some 2-D data, but we found that the results were not encouraging. We surmised that the poor reconstructions were primarily due to the large values of dark current in the input data. This, coupled with the fact that we are likely to have similar data with considerable dark current from a confocal microscope prompted us to look into ways of constraining the solution of the PSF. We observed that in the 2-D case, the reconstructed PSF has a tendency to retain values larger than those of the theoretical PSF in regions away from the center (outside of those we considered to be its region of support). This observation motivated us to apply an upper bound constraint on the PSF in these regions. Furthermore, we constrain the solution of the PSF to be a bandlimited function, as in the case in the true situation. We have derived two separate approaches for implementing the constraint. One approach involves the mathematical rigors of Lagrange multipliers. This approach is discussed in another paper. The second approach involves an adaptation of the Gershberg Saxton algorithm, which ensures bandlimitedness and non-negativity of the PSF. Although the latter approach is mathematically less rigorous than the former, we currently favor it because it has a simpler implementation on a computer and has smaller memory requirements. The next section describes briefly the theory and derivation of these constraint equations using Lagrange multipliers.
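
    A common way to realize the ML/EM blind-deconvolution idea described above is the alternating Richardson-Lucy scheme, which keeps both the object and the PSF nonnegative by construction. The sketch below omits the band-limit and support constraints discussed in the text and assumes the PSF array has the same shape as the image; it illustrates the general scheme, not the authors' exact implementation.

    ```python
    import numpy as np
    from scipy.signal import fftconvolve

    def blind_richardson_lucy(image, psf_init, n_outer=20, n_inner=5, eps=1e-12):
        """Alternating multiplicative (EM) updates of object and PSF (sketch)."""
        obj = np.full_like(image, image.mean(), dtype=float)
        psf = psf_init / psf_init.sum()          # psf_init must have the same shape as image
        flip = lambda a: a[::-1, ::-1]
        for _ in range(n_outer):
            for _ in range(n_inner):                          # update the object
                est = fftconvolve(obj, psf, mode="same") + eps
                obj *= fftconvolve(image / est, flip(psf), mode="same")
            for _ in range(n_inner):                          # update the PSF
                est = fftconvolve(obj, psf, mode="same") + eps
                psf *= fftconvolve(image / est, flip(obj), mode="same")
                psf /= psf.sum()
        return obj, psf
    ```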

  9. Modes of uncontrolled rotational motion of the Progress M-29M spacecraft

    NASA Astrophysics Data System (ADS)

    Belyaev, M. Yu.; Matveeva, T. V.; Monakhov, M. I.; Rulev, D. N.; Sazonov, V. V.

    2018-01-01

    We have reconstructed the uncontrolled rotational motion of the Progress M-29M transport cargo spacecraft in the single-axis solar orientation mode (the so-called sunward spin) and in the mode of gravitational orientation of a rotating satellite. The modes were implemented on April 3-7, 2016 as a part of preparation for experiments with the DAKON convection sensor onboard the Progress spacecraft. The reconstruction was performed by integral statistical techniques using the measurements of the spacecraft's angular velocity and of the electric current from its solar arrays. The measurement data obtained in a certain time interval have been jointly processed using the least-squares method by integrating the equations of the spacecraft's motion relative to the center of mass. As a result of the processing, the initial conditions of motion and the parameters of the mathematical model have been estimated. The motion in the sunward spin mode is the rotation of the spacecraft with an angular velocity of 2.2 deg/s about the normal to the plane of the solar arrays; the normal is oriented toward the Sun or forms a small angle with this direction. The duration of the mode is several orbit passes. The reconstruction was performed over time intervals of up to 1 h. As a result, the actual rotational motion of the spacecraft relative to the Earth-Sun direction was obtained. In the gravitational orientation mode, the spacecraft was rotated about its longitudinal axis with an angular velocity of 0.1-0.2 deg/s; the longitudinal axis executed small oscillations relative to the local vertical. The reconstruction of motion relative to the orbital coordinate system was performed over time intervals of up to 7 h using only the angular velocity measurements. The measurements of the electric current from the solar arrays were used for verification.

  10. Joint 6D k-q Space Compressed Sensing for Accelerated High Angular Resolution Diffusion MRI.

    PubMed

    Cheng, Jian; Shen, Dinggang; Basser, Peter J; Yap, Pew-Thian

    2015-01-01

    High Angular Resolution Diffusion Imaging (HARDI) avoids the Gaussian diffusion assumption that is inherent in Diffusion Tensor Imaging (DTI), and is capable of characterizing complex white matter micro-structure with greater precision. However, HARDI methods such as Diffusion Spectrum Imaging (DSI) typically require significantly more signal measurements than DTI, resulting in prohibitively long scanning times. One of the goals in HARDI research is therefore to improve estimation of quantities such as the Ensemble Average Propagator (EAP) and the Orientation Distribution Function (ODF) with a limited number of diffusion-weighted measurements. A popular approach to this problem, Compressed Sensing (CS), affords highly accurate signal reconstruction using significantly fewer (sub-Nyquist) data points than required traditionally. Existing approaches to CS diffusion MRI (CS-dMRI) mainly focus on applying CS in the q-space of diffusion signal measurements and fail to take into consideration information redundancy in the k-space. In this paper, we propose a framework, called 6-Dimensional Compressed Sensing diffusion MRI (6D-CS-dMRI), for reconstruction of the diffusion signal and the EAP from data sub-sampled in both 3D k-space and 3D q-space. To our knowledge, 6D-CS-dMRI is the first work that applies compressed sensing in the full 6D k-q space and reconstructs the diffusion signal in the full continuous q-space and the EAP in continuous displacement space. Experimental results on synthetic and real data demonstrate that, compared with full DSI sampling in k-q space, 6D-CS-dMRI yields excellent diffusion signal and EAP reconstruction with low root-mean-square error (RMSE) using 11 times fewer samples (3-fold reduction in k-space and 3.7-fold reduction in q-space).
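
    The compressed-sensing principle underlying 6D-CS-dMRI, recovering a sparse representation from sub-Nyquist samples, can be illustrated with a generic l1-regularized least-squares solver. The ISTA iteration below is only that generic illustration for a real-valued sensing matrix; it is not the joint 6D k-q reconstruction itself.

    ```python
    import numpy as np

    def ista_l1(A, y, lam=0.05, n_iter=200):
        """Iterative soft-thresholding for min_x 0.5*||A x - y||^2 + lam*||x||_1 (sketch)."""
        L = np.linalg.norm(A, 2) ** 2            # Lipschitz constant of the data-term gradient
        x = np.zeros(A.shape[1])
        for _ in range(n_iter):
            z = x - A.T @ (A @ x - y) / L        # gradient step on the data term
            x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft threshold
        return x
    ```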

  11. Web-based volume slicer for 3D electron-microscopy data from EMDB

    PubMed Central

    Salavert-Torres, José; Iudin, Andrii; Lagerstedt, Ingvar; Sanz-García, Eduardo; Kleywegt, Gerard J.; Patwardhan, Ardan

    2016-01-01

    We describe the functionality and design of the Volume slicer – a web-based slice viewer for EMDB entries. This tool uniquely provides the facility to view slices from 3D EM reconstructions along the three orthogonal axes and to rapidly switch between them and navigate through the volume. We have employed multiple rounds of user-experience testing with members of the EM community to ensure that the interface is easy and intuitive to use and the information provided is relevant. The impetus to develop the Volume slicer has been calls from the EM community to provide web-based interactive visualisation of 2D slice data. This would be useful for quick initial checks of the quality of a reconstruction. Again in response to calls from the community, we plan to further develop the Volume slicer into a fully-fledged Volume browser that provides integrated visualisation of EMDB and PDB entries from the molecular to the cellular scale. PMID:26876163

  12. Real-time implementing wavefront reconstruction for adaptive optics

    NASA Astrophysics Data System (ADS)

    Wang, Caixia; Li, Mei; Wang, Chunhong; Zhou, Luchun; Jiang, Wenhan

    2004-12-01

    Real-time wave-front reconstruction capability is important for an adaptive optics (AO) system. The system bandwidth and the real-time processing ability of the wave-front processor are mainly determined by the speed of calculation. The system requires a sufficient number of subapertures and a high sampling frequency to compensate for atmospheric turbulence, and the number of reconstruction operations increases accordingly. Since the performance of an AO system improves with decreasing calculation latency, it is necessary to study how to increase the speed of wavefront reconstruction. There are two methods to improve the real-time performance of the reconstruction. One is to transform the wavefront reconstruction matrix, for example by wavelets or the FFT. The other is to enhance the performance of the processing element. Analysis shows that the former method cuts latency at the cost of reconstruction precision. In this article, the latter method is adopted. Based on the characteristics of the wavefront reconstruction algorithm, a systolic array implemented in an FPGA is designed to perform real-time wavefront reconstruction. The system delay is greatly reduced by the use of pipelining and parallel processing. The minimum latency of reconstruction equals the reconstruction calculation for a single subaperture.
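    For orientation, the sketch below shows the matrix-vector step that such a processor pipelines, as a 1-D zonal least-squares reconstructor (phase from slope measurements). The geometry, array sizes and noise level are invented for the illustration and are not taken from the paper.

    ```python
    import numpy as np

    # Minimal 1-D illustration: slopes are finite differences of the phase,
    # s_i = (phi_{i+1} - phi_i) / d, so the geometry matrix G is bidiagonal.
    n = 8                                    # number of phase points (illustrative)
    d = 1.0                                  # subaperture spacing
    G = (np.eye(n - 1, n, k=1) - np.eye(n - 1, n)) / d

    # Precompute the least-squares reconstructor once (offline step).
    R = np.linalg.pinv(G)

    phi_true = np.cumsum(np.random.randn(n))            # some wavefront
    slopes = G @ phi_true + 0.01 * np.random.randn(n - 1)

    # Real-time step: one matrix-vector product per frame; this is the operation
    # a systolic array can pipeline subaperture by subaperture.
    phi_hat = R @ slopes

    # The piston term is unobservable from slopes, so compare after removing means.
    print(np.allclose(phi_hat - phi_hat.mean(), phi_true - phi_true.mean(), atol=0.1))
    ```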

  13. MO-C-18A-01: Advances in Model-Based 3D Image Reconstruction

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, G; Pan, X; Stayman, J

    2014-06-15

    Recent years have seen the emergence of CT image reconstruction techniques that exploit physical models of the imaging system, photon statistics, and even the patient to achieve improved 3D image quality and/or reduction of radiation dose. With numerous advantages in comparison to conventional 3D filtered backprojection, such techniques bring a variety of challenges as well, including: a demanding computational load associated with sophisticated forward models and iterative optimization methods; nonlinearity and nonstationarity in image quality characteristics; a complex dependency on multiple free parameters; and the need to understand how best to incorporate prior information (including patient-specific prior images) within the reconstruction process. The advantages, however, are even greater – for example: improved image quality; reduced dose; robustness to noise and artifacts; task-specific reconstruction protocols; suitability to novel CT imaging platforms and noncircular orbits; and incorporation of known characteristics of the imager and patient that are conventionally discarded. This symposium features experts in 3D image reconstruction, image quality assessment, and the translation of such methods to emerging clinical applications. Dr. Chen will address novel methods for the incorporation of prior information in 3D and 4D CT reconstruction techniques. Dr. Pan will show recent advances in optimization-based reconstruction that enable potential reduction of dose and sampling requirements. Dr. Stayman will describe a “task-based imaging” approach that leverages models of the imaging system and patient in combination with a specification of the imaging task to optimize both the acquisition and reconstruction process. Dr. Samei will describe the development of methods for image quality assessment in such nonlinear reconstruction techniques and the use of these methods to characterize and optimize image quality and dose in a spectrum of clinical applications. Learning Objectives: Learn the general methodologies associated with model-based 3D image reconstruction. Learn the potential advantages in image quality and dose associated with model-based image reconstruction. Learn the challenges associated with computational load and image quality assessment for such reconstruction methods. Learn how imaging task can be incorporated as a means to drive optimal image acquisition and reconstruction techniques. Learn how model-based reconstruction methods can incorporate prior information to improve image quality, ease sampling requirements, and reduce dose.

  14. Noise robustness of a combined phase retrieval and reconstruction method for phase-contrast tomography.

    PubMed

    Kongskov, Rasmus Dalgas; Jørgensen, Jakob Sauer; Poulsen, Henning Friis; Hansen, Per Christian

    2016-04-01

    Classical reconstruction methods for phase-contrast tomography consist of two stages: phase retrieval and tomographic reconstruction. A novel algebraic method combining the two was suggested by Kostenko et al. [Opt. Express 21, 12185 (2013), doi:10.1364/OE.21.012185], and preliminary results demonstrated improved reconstruction compared with a given two-stage method. Using simulated free-space propagation experiments with a single sample-detector distance, we thoroughly compare the novel method with the two-stage method to address limitations of the preliminary results. We demonstrate that the novel method is substantially more robust toward noise; our simulations point to a possible reduction in counting times by an order of magnitude.

  15. 42 CFR 82.2 - What are the basics of dose reconstruction?

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... SAFETY AND HEALTH RESEARCH AND RELATED ACTIVITIES METHODS FOR CONDUCTING DOSE RECONSTRUCTION UNDER THE... this exposure environment. Then methods are applied to translate exposure to radiation into quantified... workers. A hierarchy of methods is used in a dose reconstruction, depending on the nature of the exposure...

  16. 42 CFR 82.2 - What are the basics of dose reconstruction?

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... SAFETY AND HEALTH RESEARCH AND RELATED ACTIVITIES METHODS FOR CONDUCTING DOSE RECONSTRUCTION UNDER THE... this exposure environment. Then methods are applied to translate exposure to radiation into quantified... workers. A hierarchy of methods is used in a dose reconstruction, depending on the nature of the exposure...

  17. Application of kernel method in fluorescence molecular tomography

    NASA Astrophysics Data System (ADS)

    Zhao, Yue; Baikejiang, Reheman; Li, Changqing

    2017-02-01

    Reconstruction of fluorescence molecular tomography (FMT) is an ill-posed inverse problem. Anatomical guidance can make FMT reconstruction more efficient. We have developed a kernel method to introduce anatomical guidance into FMT robustly and easily. The kernel method comes from machine learning for pattern analysis and is an efficient way to represent anatomical features. For finite element method based FMT reconstruction, we calculate a kernel function for each finite element node from an anatomical image, such as a micro-CT image. The fluorophore concentration at each node is then represented by a kernel coefficient vector and the corresponding kernel function. In the FMT forward model, we obtain a new system matrix by multiplying the sensitivity matrix with the kernel matrix. Thus, the kernel coefficient vector is the unknown to be reconstructed following a standard iterative reconstruction process, and the FMT reconstruction problem is converted into a kernel coefficient reconstruction problem. The desired fluorophore concentration at each node can be calculated accordingly. Numerical simulation studies have demonstrated that the proposed kernel-based algorithm can improve the spatial resolution of the reconstructed FMT images. In the proposed kernel method, the anatomical guidance is obtained directly from the anatomical image and is included in the forward modeling; one advantage is that we do not need to segment the anatomical image into targets and background.
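    The following Python sketch illustrates this kind of kernelised forward model under assumptions of ours: a Gaussian kernel on per-node anatomical features with a fixed neighbourhood size, row normalisation, and nonnegative Landweber iterations standing in for the "standard iterative reconstruction". The function names (`build_kernel_matrix`, `reconstruct_kernel_coefficients`) are illustrative, not the authors' code.

    ```python
    import numpy as np

    def build_kernel_matrix(anat_features, n_neighbors=20, sigma=1.0):
        """Kernel matrix K from per-node anatomical features (n_nodes x n_feat).

        K[i, j] = exp(-||f_i - f_j||^2 / (2 sigma^2)) for the n_neighbors nearest
        nodes of i in feature space, 0 elsewhere (illustrative choice of kernel).
        """
        n = anat_features.shape[0]
        K = np.zeros((n, n))
        for i in range(n):
            d2 = np.sum((anat_features - anat_features[i]) ** 2, axis=1)
            nbrs = np.argsort(d2)[:n_neighbors]
            K[i, nbrs] = np.exp(-d2[nbrs] / (2.0 * sigma ** 2))
        return K / K.sum(axis=1, keepdims=True)      # row-normalise

    def reconstruct_kernel_coefficients(A, K, y, n_iter=200, step=None):
        """Nonnegative Landweber iterations for the kernelised model y = (A K) alpha."""
        AK = A @ K                                   # new system matrix
        if step is None:
            step = 1.0 / np.linalg.norm(AK, 2) ** 2  # ensures convergence
        alpha = np.zeros(K.shape[1])
        for _ in range(n_iter):
            alpha += step * AK.T @ (y - AK @ alpha)
            alpha = np.maximum(alpha, 0.0)           # concentration is nonnegative
        return K @ alpha                             # back to nodal concentrations
    ```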

  18. Computerized planning of prostate cryosurgery using variable cryoprobe insertion depth.

    PubMed

    Rossi, Michael R; Tanaka, Daigo; Shimada, Kenji; Rabin, Yoed

    2010-02-01

    The current study presents a computerized planning scheme for prostate cryosurgery using a variable insertion depth strategy. This study is a part of an ongoing effort to develop computerized tools for cryosurgery. Based on typical clinical practices, previous automated planning schemes have required that all cryoprobes be aligned at a single insertion depth. The current study investigates the benefit of removing this constraint, in comparison with results based on uniform insertion depth planning as well as the so-called "pullback procedure". Planning is based on the so-called "bubble-packing method", and its quality is evaluated with bioheat transfer simulations. This study is based on five 3D prostate models, reconstructed from ultrasound imaging, and cryoprobe active length in the range of 15-35 mm. The variable insertion depth technique is found to consistently provide superior results when compared to the other placement methods. Furthermore, it is shown that both the optimal active length and the optimal number of cryoprobes vary among prostate models, based on the size and shape of the target region. Due to its low computational cost, the new scheme can be used to determine the optimal cryoprobe layout for a given prostate model in real time. Copyright 2008 Elsevier Inc. All rights reserved.

  19. General phase regularized reconstruction using phase cycling.

    PubMed

    Ong, Frank; Cheng, Joseph Y; Lustig, Michael

    2018-07-01

    To develop a general phase regularized image reconstruction method, with applications to partial Fourier imaging, water-fat imaging and flow imaging. The problem of enforcing phase constraints in reconstruction was studied under a regularized inverse problem framework. A general phase regularized reconstruction algorithm was proposed to enable various joint reconstructions of partial Fourier imaging, water-fat imaging and flow imaging, along with parallel imaging (PI) and compressed sensing (CS). Since phase regularized reconstruction is inherently non-convex and sensitive to phase wraps in the initial solution, a reconstruction technique, named phase cycling, was proposed to render the overall algorithm invariant to phase wraps. The proposed method was applied to retrospectively under-sampled in vivo datasets and compared with state-of-the-art reconstruction methods. Phase cycling reconstructions showed reduction of artifacts compared to reconstructions without phase cycling and achieved performance similar to state-of-the-art results in partial Fourier, water-fat and divergence-free regularized flow reconstruction. Joint reconstruction of partial Fourier + water-fat imaging + PI + CS, and partial Fourier + divergence-free regularized flow imaging + PI + CS were demonstrated. The proposed phase cycling reconstruction provides an alternative way to perform phase regularized reconstruction, without the need to perform phase unwrapping. It is robust to the choice of initial solutions and encourages the joint reconstruction of phase imaging applications. Magn Reson Med 80:112-125, 2018. © 2017 International Society for Magnetic Resonance in Medicine.

  20. Eigenspace-based minimum variance adaptive beamformer combined with delay multiply and sum: experimental study

    NASA Astrophysics Data System (ADS)

    Mozaffarzadeh, Moein; Mahloojifar, Ali; Nasiriavanaki, Mohammadreza; Orooji, Mahdi

    2018-02-01

    Delay and sum (DAS) is the most common beamforming algorithm in linear-array photoacoustic imaging (PAI) as a result of its simple implementation. However, it leads to low resolution and high sidelobes. Delay multiply and sum (DMAS) was introduced to address the shortcomings of DAS and provides higher image quality, but its resolution improvement still falls short of eigenspace-based minimum variance (EIBMV). In this paper, the EIBMV beamformer is combined with the DMAS algebra, using the expansion of the DMAS algorithm; the resulting method is called EIBMV-DMAS. The proposed method is used as the reconstruction algorithm in linear-array PAI. EIBMV-DMAS is experimentally evaluated, and the quantitative and qualitative results show that it outperforms DAS, DMAS and EIBMV. The proposed method reduces the sidelobes by about 365%, 221% and 40% compared to DAS, DMAS and EIBMV, respectively. Moreover, EIBMV-DMAS improves the SNR by about 158%, 63% and 20%, respectively.
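    For orientation, the sketch below contrasts plain DAS with the pairwise DMAS combination on delay-aligned channel data. It is a minimal illustration only and omits the eigenspace-based minimum variance weighting that EIBMV-DMAS adds on top; the input is assumed to be an array of already delay-compensated channel signals.

    ```python
    import numpy as np

    def das(delayed):
        """Delay-and-sum: `delayed` is (n_elements, n_samples), already delay-aligned."""
        return delayed.sum(axis=0)

    def dmas(delayed):
        """Delay-multiply-and-sum: signed geometric means of all channel pairs."""
        n = delayed.shape[0]
        out = np.zeros(delayed.shape[1])
        for i in range(n - 1):
            for j in range(i + 1, n):
                prod = delayed[i] * delayed[j]
                out += np.sign(prod) * np.sqrt(np.abs(prod))
        return out
    ```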

  1. Aggregative Learning Method and Its Application for Communication Quality Evaluation

    NASA Astrophysics Data System (ADS)

    Akhmetov, Dauren F.; Kotaki, Minoru

    2007-12-01

    In this paper, the so-called Aggregative Learning Method (ALM) is proposed to improve and simplify the learning and classification abilities of different data processing systems. It provides a universal basis for the design and analysis of a wide class of mathematical models. A procedure was elaborated for time series model reconstruction and analysis in both linear and nonlinear cases. Data approximation accuracy (during the learning phase) and data classification quality (during the recall phase) are estimated from the introduced statistical parameters. The validity and efficiency of the proposed approach have been demonstrated through its application to monitoring of wireless communication quality, namely for a Fixed Wireless Access (FWA) system. Low memory and computation resources were shown to be needed for the procedure, especially for the data classification (recall) stage. Characterized by high computational efficiency and a simple decision-making procedure, the derived approaches can be useful for simple and reliable real-time surveillance and control system design.

  2. Lensless Photoluminescence Hyperspectral Camera Employing Random Speckle Patterns.

    PubMed

    Žídek, Karel; Denk, Ondřej; Hlubuček, Jiří

    2017-11-10

    We propose and demonstrate a spectrally-resolved photoluminescence imaging setup based on the so-called single pixel camera - a technique of compressive sensing, which enables imaging by using a single-pixel photodetector. The method relies on encoding an image by a series of random patterns. In our approach, the image encoding was maintained via laser speckle patterns generated by an excitation laser beam scattered on a diffusor. By using a spectrometer as the single-pixel detector we attained a realization of a spectrally-resolved photoluminescence camera with unmatched simplicity. We present reconstructed hyperspectral images of several model scenes. We also discuss parameters affecting the imaging quality, such as the correlation degree of speckle patterns, pattern fineness, and number of datapoints. Finally, we compare the presented technique to hyperspectral imaging using sample scanning. The presented method enables photoluminescence imaging for a broad range of coherent excitation sources and detection spectral areas.

  3. MetaBAT, an efficient tool for accurately reconstructing single genomes from complex microbial communities

    DOE PAGES

    Kang, Dongwan D.; Froula, Jeff; Egan, Rob; ...

    2015-01-01

    Grouping large genomic fragments assembled from shotgun metagenomic sequences to deconvolute complex microbial communities, or metagenome binning, enables the study of individual organisms and their interactions. Because of the complex nature of these communities, existing metagenome binning methods often miss a large number of microbial species. In addition, most of the tools are not scalable to large datasets. Here we introduce automated software called MetaBAT that integrates empirical probabilistic distances of genome abundance and tetranucleotide frequency for accurate metagenome binning. MetaBAT outperforms alternative methods in accuracy and computational efficiency on both synthetic and real metagenome datasets. Lastly, it automatically forms hundreds of high quality genome bins on a very large assembly consisting of millions of contigs in a matter of hours on a single node. MetaBAT is open source software and available at https://bitbucket.org/berkeleylab/metabat.

  4. Rolling Shutter Effect aberration compensation in Digital Holographic Microscopy

    NASA Astrophysics Data System (ADS)

    Monaldi, Andrea C.; Romero, Gladis G.; Cabrera, Carlos M.; Blanc, Adriana V.; Alanís, Elvio E.

    2016-05-01

    Due to the sequential-readout nature of most CMOS sensors, each row of the sensor array is exposed at a different time, resulting in the so-called rolling shutter effect, which induces geometric distortion in the image if the video camera or the object moves during image acquisition. Particularly in digital hologram recording, while the sensor captures each row of the hologram progressively, the interferometric fringes can oscillate due to external vibrations and/or noise even when the object under study remains motionless. The sensor records each hologram row at a different instant of these disturbances. As a final effect, the phase information is corrupted, degrading the quality of the reconstructed holograms. We present a fast and simple method for compensating this effect based on image processing tools. The method is exemplified with holograms of microscopic biological static objects. The results encourage adopting CMOS sensors over CCDs in Digital Holographic Microscopy, owing to their better resolution and lower cost.

  5. Calculation of susceptibility through multiple orientation sampling (COSMOS): a method for conditioning the inverse problem from measured magnetic field map to susceptibility source image in MRI.

    PubMed

    Liu, Tian; Spincemaille, Pascal; de Rochefort, Ludovic; Kressler, Bryan; Wang, Yi

    2009-01-01

    Magnetic susceptibility differs among tissues based on their contents of iron, calcium, contrast agent, and other molecular compositions. Susceptibility modifies the magnetic field detected in the MR signal phase. The determination of an arbitrary susceptibility distribution from the induced field shifts is a challenging, ill-posed inverse problem. A method called "calculation of susceptibility through multiple orientation sampling" (COSMOS) is proposed to stabilize this inverse problem. The field created by the susceptibility distribution is sampled at multiple orientations with respect to the polarization field, B(0), and the susceptibility map is reconstructed by weighted linear least squares to account for field noise and the signal void region. Numerical simulations and phantom and in vitro imaging validations demonstrated that COSMOS is a stable and precise approach to quantify a susceptibility distribution using MRI.
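    A minimal sketch of the COSMOS idea is given below, assuming field maps already normalised by B0 and a single scalar weight per orientation (the published method weights by per-voxel signal and noise, which is not reproduced here). With those simplifications the weighted least-squares solution is closed form at every k-space point.

    ```python
    import numpy as np

    def dipole_kernel(shape, b0_dir, voxel_size=(1.0, 1.0, 1.0)):
        """k-space dipole kernel D(k) = 1/3 - (k . b0_hat)^2 / |k|^2 for one orientation."""
        ks = [np.fft.fftfreq(n, d=v) for n, v in zip(shape, voxel_size)]
        grids = np.meshgrid(*ks, indexing="ij")
        k2 = sum(g ** 2 for g in grids)
        k2.flat[0] = 1.0                              # avoid 0/0 at the k-space origin
        kdotb = sum(g * b for g, b in zip(grids, b0_dir))
        D = 1.0 / 3.0 - kdotb ** 2 / k2
        D.flat[0] = 0.0
        return D

    def cosmos(field_maps, b0_dirs, weights=None, voxel_size=(1.0, 1.0, 1.0)):
        """Per-k-point weighted least squares: chi(k) = sum_i w_i D_i dB_i / sum_i w_i D_i^2."""
        shape = field_maps[0].shape
        weights = weights or [1.0] * len(field_maps)
        num = np.zeros(shape, dtype=complex)
        den = np.zeros(shape)
        for f, b0, w in zip(field_maps, b0_dirs, weights):
            b0 = np.asarray(b0, float)
            D = dipole_kernel(shape, b0 / np.linalg.norm(b0), voxel_size)
            num += w * D * np.fft.fftn(f)
            den += w * D ** 2
        chi_k = num / np.maximum(den, 1e-8)           # well conditioned wherever some D_i != 0
        return np.real(np.fft.ifftn(chi_k))
    ```

    The multiple orientations matter because the zeros of the dipole kernel (the conical surfaces where D vanishes) occur at different k-space locations for each orientation, so the denominator stays away from zero almost everywhere.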

  6. A high order compact least-squares reconstructed discontinuous Galerkin method for the steady-state compressible flows on hybrid grids

    NASA Astrophysics Data System (ADS)

    Cheng, Jian; Zhang, Fan; Liu, Tiegang

    2018-06-01

    In this paper, a class of new high order reconstructed DG (rDG) methods based on the compact least-squares (CLS) reconstruction [23,24] is developed for simulating two dimensional steady-state compressible flows on hybrid grids. The proposed method combines the advantages of the DG discretization with the flexibility of the compact least-squares reconstruction, which exhibits superior potential in enhancing the level of accuracy and reducing the computational cost compared to the underlying DG methods with the same number of degrees of freedom. To be specific, a third-order compact least-squares rDG(p1p2) method and a fourth-order compact least-squares rDG(p2p3) method are developed and investigated in this work. In this compact least-squares rDG method, the low order degrees of freedom are evolved through the underlying DG(p1) method and DG(p2) method, respectively, while the high order degrees of freedom are reconstructed through the compact least-squares reconstruction, in which the constitutive relations are built by requiring the reconstructed polynomial and its spatial derivatives on the target cell to conserve the cell averages and the corresponding spatial derivatives on the face-neighboring cells. The large sparse linear system resulting from the compact least-squares reconstruction can be solved relatively efficiently when it is coupled with the temporal discretization in steady-state simulations. A number of test cases are presented to assess the performance of the high order compact least-squares rDG methods, which demonstrates their potential to be an alternative approach for the high order numerical simulation of steady-state compressible flows.

  7. Selective structural source identification

    NASA Astrophysics Data System (ADS)

    Totaro, Nicolas

    2018-04-01

    In the field of acoustic source reconstruction, the inverse Patch Transfer Function (iPTF) method has recently been proposed and has shown satisfactory results whatever the shape of the vibrating surface and whatever the acoustic environment. These two interesting features are due to the virtual acoustic volume concept underlying the iPTF methods. The aim of the present article is to show how this concept of virtual subsystem can be used in structures to reconstruct the applied force distribution. Virtual boundary conditions can be applied on a part of the structure, called the virtual testing structure, to identify the force distribution applied in that zone regardless of the presence of other sources outside the zone under consideration. In the present article, the applicability of the method is demonstrated only on planar structures. However, the final example shows how the method can be applied to a planar structure of complex shape with point-welded stiffeners, even in the tested zone. In that case, if the virtual testing structure includes the stiffeners, the identified force distribution exhibits only the positions of the externally applied forces. If the virtual testing structure does not include the stiffeners, the identified force distribution makes it possible to localize the forces due to the coupling between the structure and the stiffeners through the welded points as well as those due to the external forces. This is why this approach is considered here as a selective structural source identification method. It is demonstrated that this approach clearly falls in the same framework as the Force Analysis Technique, the Virtual Fields Method or the 2D spatial Fourier transform. Even if this approach has much in common with the latter methods, it has some interesting particularities, such as its low sensitivity to measurement noise.

  8. Charm: Cosmic history agnostic reconstruction method

    NASA Astrophysics Data System (ADS)

    Porqueres, Natalia; Ensslin, Torsten A.

    2017-03-01

    Charm (cosmic history agnostic reconstruction method) reconstructs the cosmic expansion history in the framework of Information Field Theory. The reconstruction is performed via the iterative Wiener filter from an agnostic or from an informative prior. The charm code allows one to test the compatibility of several different data sets with the LambdaCDM model in a non-parametric way.

  9. A novel mechanochemical method for reconstructing the moisture-degraded HKUST-1.

    PubMed

    Sun, Xuejiao; Li, Hao; Li, Yujie; Xu, Feng; Xiao, Jing; Xia, Qibin; Li, Yingwei; Li, Zhong

    2015-07-11

    A novel mechanochemical method was proposed to quickly reconstruct moisture-degraded HKUST-1. The degraded HKUST-1 can be restored within minutes. The reconstructed samples were characterized and confirmed to retain 95% of the surface area and 92% of the benzene capacity of fresh HKUST-1. It is a simple and effective strategy for the reconstruction of degraded MOFs.

  10. MO-DE-207A-08: Four-Dimensional Cone-Beam CT Iterative Reconstruction with Time-Ordered Chain Graph Model for Non-Periodic Organ Motion and Deformation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nakano, M; Haga, A; Hanaoka, S

    2016-06-15

    Purpose: The purpose of this study is to propose a new concept of four-dimensional (4D) cone-beam CT (CBCT) reconstruction for non-periodic organ motion using the Time-ordered Chain Graph Model (TCGM), and to compare the reconstructed results with previously proposed methods, the total variation-based compressed sensing (TVCS) and prior-image constrained compressed sensing (PICCS). Methods: The CBCT reconstruction method introduced in this study consists of maximum a posteriori (MAP) iterative reconstruction combined with a regularization term derived from the TCGM concept, which includes a constraint coming from the images of neighbouring time-phases. The time-ordered image series were concurrently reconstructed in the MAP iterative reconstruction framework. The angular range of projections for each time-phase was 90 degrees for TCGM and PICCS, and 200 degrees for TVCS. Two kinds of projection data, elliptic-cylindrical digital phantom data and data from two clinical patients, were used for reconstruction. The digital phantom contained an air sphere moving 3 cm along the longitudinal axis, and the temporal resolution of each method was evaluated by measuring the penumbral width of the reconstructed moving air sphere. The clinical feasibility of non-periodic time-ordered 4D CBCT reconstruction was also examined using projection data of prostate cancer patients. Results: The results for the reconstructed digital phantom show that TCGM yielded the narrowest penumbral width; PICCS and TCGM were 10.6% and 17.4% narrower than TVCS, respectively. This suggests that TCGM has better temporal resolution than the other methods. The patients' CBCT projection data were also reconstructed, and all three reconstructed results showed motion of rectal gas and stool. The TCGM result provided visually clearer and less blurred images. Conclusion: The present study demonstrates that the new concept for 4D CBCT reconstruction, TCGM, combined with the MAP iterative reconstruction framework, enables time-ordered image reconstruction with a narrower time-window.

  11. The transmission/disequilibrium test and parental-genotype reconstruction: the reconstruction-combined transmission/ disequilibrium test.

    PubMed Central

    Knapp, M

    1999-01-01

    Spielman and Ewens recently proposed a method for testing a marker for linkage with a disease, which combines data from families with and without information on parental genotypes. For some families without parental-genotype information, it may be possible to reconstruct missing parental genotypes from the genotypes of their offspring. The treatment of such a reconstructed family as if parental genotypes have been typed, however, can introduce bias. In the present study, a new method is presented that employs parental-genotype reconstruction and corrects for the biases resulting from reconstruction. The results of an application of this method to a real data set and of a simulation study suggest that this approach may increase the power to detect linkage. PMID:10053021

  12. AIR-MRF: Accelerated iterative reconstruction for magnetic resonance fingerprinting.

    PubMed

    Cline, Christopher C; Chen, Xiao; Mailhe, Boris; Wang, Qiu; Pfeuffer, Josef; Nittka, Mathias; Griswold, Mark A; Speier, Peter; Nadar, Mariappan S

    2017-09-01

    Existing approaches for reconstruction of multiparametric maps with magnetic resonance fingerprinting (MRF) are currently limited by their estimation accuracy and reconstruction time. We aimed to address these issues with a novel combination of iterative reconstruction, fingerprint compression, additional regularization, and accelerated dictionary search methods. The pipeline described here, accelerated iterative reconstruction for magnetic resonance fingerprinting (AIR-MRF), was evaluated with simulations as well as phantom and in vivo scans. We found that the AIR-MRF pipeline provided reduced parameter estimation errors compared to non-iterative and other iterative methods, particularly at shorter sequence lengths. Accelerated dictionary search methods incorporated into the iterative pipeline reduced the reconstruction time at little cost in quality. Copyright © 2017 Elsevier Inc. All rights reserved.

  13. An Image Processing Algorithm Based On FMAT

    NASA Technical Reports Server (NTRS)

    Wang, Lui; Pal, Sankar K.

    1995-01-01

    Information deleted in ways minimizing adverse effects on reconstructed images. New grey-scale generalization of medial axis transformation (MAT), called FMAT (short for Fuzzy MAT) proposed. Formulated by making natural extension to fuzzy-set theory of all definitions and conditions (e.g., characteristic function of disk, subset condition of disk, and redundancy checking) used in defining MAT of crisp set. Does not need image to have any kind of priori segmentation, and allows medial axis (and skeleton) to be fuzzy subset of input image. Resulting FMAT (consisting of maximal fuzzy disks) capable of reconstructing exactly original image.

  14. The cardiorespiratory interaction: a nonlinear stochastic model and its synchronization properties

    NASA Astrophysics Data System (ADS)

    Bahraminasab, A.; Kenwright, D.; Stefanovska, A.; McClintock, P. V. E.

    2007-06-01

    We address the problem of interactions between the phases of the cardiac and respiratory oscillatory components. The coupling between these two quantities is investigated experimentally using the theory of stochastic Markovian processes. The so-called Markov analysis allows us to derive nonlinear stochastic equations for the reconstruction of the cardiorespiratory signals. The properties of these equations provide interesting new insights into the strength and direction of coupling, which enable us to divide the coupling into two parts: deterministic and stochastic. It is shown that the synchronization behaviors of the reconstructed signals are statistically identical to those of the original ones.

  15. Painlevé equations, topological type property and reconstruction by the topological recursion

    NASA Astrophysics Data System (ADS)

    Iwaki, K.; Marchal, O.; Saenz, A.

    2018-01-01

    In this article we prove that Lax pairs associated with ħ-dependent six Painlevé equations satisfy the topological type property proposed by Bergère, Borot and Eynard for any generic choice of the monodromy parameters. Consequently we show that one can reconstruct the formal ħ-expansion of the isomonodromic τ-function and of the determinantal formulas by applying the so-called topological recursion to the spectral curve attached to the Lax pair in all six Painlevé cases. Finally we illustrate the former results with the explicit computations of the first orders of the six τ-functions.

  16. TH-AB-202-08: A Robust Real-Time Surface Reconstruction Method On Point Clouds Captured From a 3D Surface Photogrammetry System

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liu, W; Sawant, A; Ruan, D

    2016-06-15

    Purpose: Surface photogrammetry (e.g. VisionRT, C-Rad) provides a noninvasive way to obtain high-frequency measurement for patient motion monitoring in radiotherapy. This work aims to develop a real-time surface reconstruction method on the acquired point clouds, whose acquisitions are subject to noise and missing measurements. In contrast to existing surface reconstruction methods that are usually computationally expensive, the proposed method reconstructs continuous surfaces with comparable accuracy in real-time. Methods: The key idea in our method is to solve and propagate a sparse linear relationship from the point cloud (measurement) manifold to the surface (reconstruction) manifold, taking advantage of the similarity in local geometric topology in both manifolds. With consistent point cloud acquisition, we propose a sparse regression (SR) model to directly approximate the target point cloud as a sparse linear combination from the training set, building the point correspondences by the iterative closest point (ICP) method. To accommodate changing noise levels and/or presence of inconsistent occlusions, we further propose a modified sparse regression (MSR) model to account for the large and sparse error built by ICP, with a Laplacian prior. We evaluated our method on both clinical acquired point clouds under consistent conditions and simulated point clouds with inconsistent occlusions. The reconstruction accuracy was evaluated w.r.t. root-mean-squared-error, by comparing the reconstructed surfaces against those from the variational reconstruction method. Results: On clinical point clouds, both the SR and MSR models achieved sub-millimeter accuracy, with mean reconstruction time reduced from 82.23 seconds to 0.52 seconds and 0.94 seconds, respectively. On simulated point cloud with inconsistent occlusions, the MSR model has demonstrated its advantage in achieving consistent performance despite the introduced occlusions. Conclusion: We have developed a real-time and robust surface reconstruction method on point clouds acquired by photogrammetry systems. It serves an important enabling step for real-time motion tracking in radiotherapy. This work is supported in part by NIH grant R01 CA169102-02.

  17. Sparsity-promoting orthogonal dictionary updating for image reconstruction from highly undersampled magnetic resonance data.

    PubMed

    Huang, Jinhong; Guo, Li; Feng, Qianjin; Chen, Wufan; Feng, Yanqiu

    2015-07-21

    Image reconstruction from undersampled k-space data accelerates magnetic resonance imaging (MRI) by exploiting image sparseness in certain transform domains. Employing image patch representation over a learned dictionary has the advantage of being adaptive to local image structures and thus can better sparsify images than using fixed transforms (e.g. wavelets and total variations). Dictionary learning methods have recently been introduced to MRI reconstruction, and these methods demonstrate significantly reduced reconstruction errors compared to sparse MRI reconstruction using fixed transforms. However, the synthesis sparse coding problem in dictionary learning is NP-hard and computationally expensive. In this paper, we present a novel sparsity-promoting orthogonal dictionary updating method for efficient image reconstruction from highly undersampled MRI data. The orthogonality imposed on the learned dictionary enables the minimization problem in the reconstruction to be solved by an efficient optimization algorithm which alternately updates representation coefficients, orthogonal dictionary, and missing k-space data. Moreover, both sparsity level and sparse representation contribution using updated dictionaries gradually increase during iterations to recover more details, assuming the progressively improved quality of the dictionary. Simulation and real data experimental results both demonstrate that the proposed method is approximately 10 to 100 times faster than the K-SVD-based dictionary learning MRI method and simultaneously improves reconstruction accuracy.
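    The two inner updates that orthogonality makes cheap can be sketched as follows. This is our illustration of the idea (hard thresholding for the sparse code, an orthogonal Procrustes solution for the dictionary), not the authors' full pipeline, which additionally alternates with a k-space data-consistency update.

    ```python
    import numpy as np

    def hard_threshold(C, sparsity):
        """Keep the `sparsity` largest-magnitude coefficients in each column of C."""
        thresh = np.sort(np.abs(C), axis=0)[-sparsity]
        return np.where(np.abs(C) >= thresh, C, 0.0)

    def orthogonal_dictionary_learning(patches, n_iter=30, sparsity=8):
        """Alternate closed-form sparse coding and an orthogonal-Procrustes dictionary
        update on a (patch_dim x n_patches) matrix."""
        d = patches.shape[0]
        D = np.linalg.qr(np.random.randn(d, d))[0]       # random orthogonal initialisation
        for _ in range(n_iter):
            # Sparse coding: with orthogonal D it is just a transform plus thresholding.
            C = hard_threshold(D.T @ patches, sparsity)
            # Dictionary update: Procrustes problem min ||patches - D C||_F over orthogonal D.
            U, _, Vt = np.linalg.svd(patches @ C.T)
            D = U @ Vt
        return D, hard_threshold(D.T @ patches, sparsity)
    ```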

  18. Matrix completion-based reconstruction for undersampled magnetic resonance fingerprinting data.

    PubMed

    Doneva, Mariya; Amthor, Thomas; Koken, Peter; Sommer, Karsten; Börnert, Peter

    2017-09-01

    An iterative reconstruction method for undersampled magnetic resonance fingerprinting data is presented. The method performs the reconstruction entirely in k-space and is related to low rank matrix completion methods. A low dimensional data subspace is estimated from a small number of k-space locations fully sampled in the temporal direction and used to reconstruct the missing k-space samples before MRF dictionary matching. Performing the iterations in k-space eliminates the need for applying a forward and an inverse Fourier transform in each iteration required in previously proposed iterative reconstruction methods for undersampled MRF data. A projection onto the low dimensional data subspace is performed as a matrix multiplication instead of a singular value thresholding typically used in low rank matrix completion, further reducing the computational complexity of the reconstruction. The method is theoretically described and validated in phantom and in-vivo experiments. The quality of the parameter maps can be significantly improved compared to direct matching on undersampled data. Copyright © 2017 Elsevier Inc. All rights reserved.
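    A compact sketch of the k-space-only iteration described above is given below, assuming the data are arranged as a (k-locations x time-points) matrix, the calibration rows are fully sampled in time, and a simple alternation between subspace projection and data consistency; the variable names are ours.

    ```python
    import numpy as np

    def estimate_subspace(calib, rank):
        """Temporal subspace from calibration data (n_calib_points x n_timepoints)."""
        _, _, Vh = np.linalg.svd(calib, full_matrices=False)
        return Vh[:rank].conj().T                 # (n_timepoints, rank)

    def subspace_completion(kdata, mask, Ur, n_iter=50):
        """Fill unmeasured k-t samples by alternating a low-rank subspace projection
        (a matrix multiplication, no Fourier transforms) with data consistency.
        kdata, mask: (n_k, n_t); mask is True where samples were measured."""
        X = kdata.copy()
        P = Ur @ Ur.conj().T                      # projector onto the temporal subspace
        for _ in range(n_iter):
            X = X @ P                             # subspace projection
            X[mask] = kdata[mask]                 # keep the measured samples
        return X
    ```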

  19. Blind compressed sensing image reconstruction based on alternating direction method

    NASA Astrophysics Data System (ADS)

    Liu, Qinan; Guo, Shuxu

    2018-04-01

    To address the problem of reconstructing the original image when the sparse basis is unknown, this paper proposes an image reconstruction method based on a blind compressed sensing model. In this model, the image signal is regarded as the product of a sparse coefficient matrix and a dictionary matrix. Based on the existing blind compressed sensing theory, the optimal solution is obtained by the alternating minimization method. The proposed method solves the problem that the sparse basis in compressed sensing is difficult to specify, suppresses noise, and improves the quality of the reconstructed image. The method ensures that the blind compressed sensing problem has a unique solution and can recover the original image signal from a complex environment with stronger self-adaptability. The experimental results show that the image reconstruction algorithm based on blind compressed sensing proposed in this paper can recover high quality image signals under under-sampling conditions.

  20. Theoretical Analysis of Penalized Maximum-Likelihood Patlak Parametric Image Reconstruction in Dynamic PET for Lesion Detection.

    PubMed

    Yang, Li; Wang, Guobao; Qi, Jinyi

    2016-04-01

    Detecting cancerous lesions is a major clinical application of emission tomography. In a previous work, we studied penalized maximum-likelihood (PML) image reconstruction for lesion detection in static PET. Here we extend our theoretical analysis of static PET reconstruction to dynamic PET. We study both the conventional indirect reconstruction and direct reconstruction for Patlak parametric image estimation. In indirect reconstruction, Patlak parametric images are generated by first reconstructing a sequence of dynamic PET images, and then performing Patlak analysis on the time activity curves (TACs) pixel-by-pixel. In direct reconstruction, Patlak parametric images are estimated directly from raw sinogram data by incorporating the Patlak model into the image reconstruction procedure. PML reconstruction is used in both the indirect and direct reconstruction methods. We use a channelized Hotelling observer (CHO) to assess lesion detectability in Patlak parametric images. Simplified expressions for evaluating the lesion detectability have been derived and applied to the selection of the regularization parameter value to maximize detection performance. The proposed method is validated using computer-based Monte Carlo simulations. Good agreements between the theoretical predictions and the Monte Carlo results are observed. Both theoretical predictions and Monte Carlo simulation results show the benefit of the indirect and direct methods under optimized regularization parameters in dynamic PET reconstruction for lesion detection, when compared with the conventional static PET reconstruction.
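    As a reminder of what the indirect route computes, the sketch below fits the Patlak model C_T(t)/C_p(t) = Ki * (integral of C_p)/C_p(t) + V by linear least squares on reconstructed time-activity curves. The frame timing, the choice of t*, and the trapezoidal integration are illustrative assumptions, not details taken from the paper.

    ```python
    import numpy as np

    def patlak_indirect(tacs, cp, times, t_star):
        """Pixel-wise Patlak slope (Ki) and intercept (V) by linear least squares.

        tacs  : (n_voxels, n_frames) reconstructed time-activity curves
        cp    : (n_frames,) plasma input function sampled at the frame times
        times : (n_frames,) frame mid-times; t_star: start of the linear Patlak regime
        """
        # Cumulative integral of the input function (trapezoidal rule).
        int_cp = np.concatenate(([0.0],
                                 np.cumsum(np.diff(times) * 0.5 * (cp[1:] + cp[:-1]))))
        keep = times >= t_star
        x = int_cp[keep] / cp[keep]                       # Patlak "stretched time"
        A = np.stack([x, np.ones_like(x)], axis=1)        # design matrix [x, 1]
        y = tacs[:, keep] / cp[keep]                      # normalised tissue curves
        coef, *_ = np.linalg.lstsq(A, y.T, rcond=None)    # all voxels solved at once
        Ki, V = coef[0], coef[1]
        return Ki, V
    ```

    The direct approach instead folds this linear model into the system matrix so that Ki and V are estimated from the sinograms themselves, which is what the penalized maximum-likelihood analysis above addresses.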

  1. Markov random field based automatic image alignment for electron tomography.

    PubMed

    Amat, Fernando; Moussavi, Farshid; Comolli, Luis R; Elidan, Gal; Downing, Kenneth H; Horowitz, Mark

    2008-03-01

    We present a method for automatic full-precision alignment of the images in a tomographic tilt series. Full-precision automatic alignment of cryo electron microscopy images has remained a difficult challenge to date, due to the limited electron dose and low image contrast. These facts lead to poor signal to noise ratio (SNR) in the images, which causes automatic feature trackers to generate errors, even with high contrast gold particles as fiducial features. To enable fully automatic alignment for full-precision reconstructions, we frame the problem probabilistically as finding the most likely particle tracks given a set of noisy images, using contextual information to make the solution more robust to the noise in each image. To solve this maximum likelihood problem, we use Markov Random Fields (MRF) to establish the correspondence of features in alignment and robust optimization for projection model estimation. The resulting algorithm, called Robust Alignment and Projection Estimation for Tomographic Reconstruction, or RAPTOR, has not needed any manual intervention for the difficult datasets we have tried, and has provided sub-pixel alignment that is as good as the manual approach by an expert user. We are able to automatically map complete and partial marker trajectories and thus obtain highly accurate image alignment. Our method has been applied to challenging cryo electron tomographic datasets with low SNR from intact bacterial cells, as well as several plastic section and X-ray datasets.

  2. The compression and storage method of the same kind of medical images: DPCM

    NASA Astrophysics Data System (ADS)

    Zhao, Xiuying; Wei, Jingyuan; Zhai, Linpei; Liu, Hong

    2006-09-01

    Medical imaging has started to take advantage of digital technology, opening the way for advanced medical imaging and teleradiology. Medical images, however, require large amounts of memory. At over 1 million bytes per image, a typical hospital needs a staggering amount of storage (over one trillion bytes per year), and transmitting an image over a network (even the promised superhighway) could take minutes--too slow for interactive teleradiology. This calls for image compression to reduce significantly the amount of data needed to represent an image. Several compression techniques with different compression ratios have been developed. However, the lossless techniques, which allow for perfect reconstruction of the original images, yield modest compression ratios, while the techniques that yield higher compression ratios are lossy, that is, the original image is reconstructed only approximately. Medical imaging poses the great challenge of having compression algorithms that are lossless (for diagnostic and legal reasons) and yet have high compression ratios for reduced storage and transmission time. To meet this challenge, we are developing and studying compression schemes that are either strictly lossless or diagnostically lossless, taking advantage of the peculiarities of medical images and of medical practice. In order to increase the signal-to-noise ratio (SNR) by exploiting correlations within the source signal, a method employing differential pulse code modulation (DPCM) is presented.
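    A minimal sketch of lossless row-wise DPCM of the kind referred to above is shown below: the prediction is simply the previous pixel in the row, so the decoder reconstructs the image exactly and the small residuals entropy-code well. The abstract does not specify the predictor, so this first-order choice is an assumption of the sketch.

    ```python
    import numpy as np

    def dpcm_encode(image):
        """Lossless DPCM along rows: transmit the first column plus horizontal
        prediction residuals. Works on integer-valued images."""
        img = image.astype(np.int32)
        residuals = np.diff(img, axis=1)          # e[i, j] = x[i, j] - x[i, j-1]
        return img[:, :1], residuals

    def dpcm_decode(first_col, residuals):
        """Exact inverse of dpcm_encode (perfect reconstruction)."""
        return np.cumsum(np.concatenate([first_col, residuals], axis=1), axis=1)

    # Smooth images give small residuals, which is where the SNR/compression gain comes from.
    img = (np.arange(16).reshape(4, 4) * 7).astype(np.int32)
    fc, res = dpcm_encode(img)
    assert np.array_equal(dpcm_decode(fc, res), img)
    ```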

  3. Boosting probabilistic graphical model inference by incorporating prior knowledge from multiple sources.

    PubMed

    Praveen, Paurush; Fröhlich, Holger

    2013-01-01

    Inferring regulatory networks from experimental data via probabilistic graphical models is a popular framework to gain insights into biological systems. However, the inherent noise in experimental data coupled with a limited sample size reduces the performance of network reverse engineering. Prior knowledge from existing sources of biological information can address this low signal to noise problem by biasing the network inference towards biologically plausible network structures. Although integrating various sources of information is desirable, their heterogeneous nature makes this task challenging. We propose two computational methods to incorporate various information sources into a probabilistic consensus structure prior to be used in graphical model inference. Our first model, called Latent Factor Model (LFM), assumes a high degree of correlation among external information sources and reconstructs a hidden variable as a common source in a Bayesian manner. The second model, a Noisy-OR, picks up the strongest support for an interaction among information sources in a probabilistic fashion. Our extensive computational studies on KEGG signaling pathways as well as on gene expression data from breast cancer and yeast heat shock response reveal that both approaches can significantly enhance the reconstruction accuracy of Bayesian Networks compared to other competing methods as well as to the situation without any prior. Our framework allows for using diverse information sources, like pathway databases, GO terms and protein domain data, etc. and is flexible enough to integrate new sources, if available.

  4. Analysis of impulse signals with Hylaty ELF station

    NASA Astrophysics Data System (ADS)

    Kulak, A.; Mlynarczyk, J.; Ostrowski, M.; Kubisz, J.; Michalec, A.

    2012-04-01

    Lightning discharges generate electromagnetic field pulses that propagate in the Earth-ionosphere waveguide. The attenuation in the ELF range is so small that the pulses originating from strong atmospheric discharges can be observed even several thousand kilometers away from the individual discharge. The recorded waveform depends on the discharge process, the Earth-ionosphere waveguide properties on the source-receiver path, and the transfer function of the receiver. If the distance from the source is known, an inverse method can be used for reconstructing the current moment waveform and the charge moment of the discharge. In order to reconstruct the source parameters from the recorded signal, a reliable model of radio wave propagation in the Earth-ionosphere waveguide as well as practical signal processing techniques are necessary. We present two methods, both based on analytical formulas. The first method allows for fast calculation of the charge moment of relatively short atmospheric discharges. It is based on peak amplitude measurement of the recorded magnetic component of the ELF EM field and it takes into account the receiver characteristics. The second method, called the "inverse channel method", allows reconstructing the complete current moment waveform of strong atmospheric discharges that exhibit a continuing current phase, such as Gigantic Jets and Sprites. The method makes it possible to fully remove from the observed waveform the distortions related to the receiver's impulse response as well as the influence of the Earth-ionosphere propagation channel. Our ELF station is equipped with two magnetic antennas for Bx and By component measurement in the 0.03 to 55 Hz frequency range. ELF data recording has been carried out since 1993, with continuous data acquisition since 2005. The station features a low noise level and precise timing. It is battery powered and located in a sparsely populated area, far from major electric power lines, which results in high quality signal recordings and allows for precise calculations of the charge moments of upward discharges and strong cloud-to-ground discharges originating from distant sources. The same data are used for Schumann resonance observation. We demonstrate the use of our methods on recent recordings from the Hylaty ELF station. We include examples of GJ (Gigantic Jet) and TGF (Terrestrial Gamma-ray Flash) related discharges.

  5. An Improved DINEOF Algorithm for Filling Missing Values in Spatio-Temporal Sea Surface Temperature Data.

    PubMed

    Ping, Bo; Su, Fenzhen; Meng, Yunshan

    2016-01-01

    In this study, an improved Data INterpolating Empirical Orthogonal Functions (DINEOF) algorithm for determination of missing values in a spatio-temporal dataset is presented. Compared with the ordinary DINEOF algorithm, the iterative reconstruction procedure until convergence based on every fixed EOF to determine the optimal EOF mode is not necessary, and the convergence criterion is reached only once in the improved DINEOF algorithm. Moreover, in the ordinary DINEOF algorithm, after optimal EOF mode determination, the initial matrix with missing data is iteratively reconstructed based on the optimal EOF mode until the reconstruction converges. However, the optimal EOF mode may not be the best EOF for some reconstructed matrices generated in the intermediate steps. Hence, instead of using a single EOF to fill in the missing data, in the improved algorithm the optimal EOFs for reconstruction are variable (because the optimal EOFs are variable, the improved algorithm is called the VE-DINEOF algorithm in this study). To validate the accuracy of the VE-DINEOF algorithm, a sea surface temperature (SST) data set is reconstructed using the DINEOF, I-DINEOF (proposed in 2015) and VE-DINEOF algorithms. Four parameters (Pearson correlation coefficient, signal-to-noise ratio, root-mean-square error, and mean absolute difference) are used as measures of reconstruction accuracy. Compared with the DINEOF and I-DINEOF algorithms, the VE-DINEOF algorithm can significantly enhance the accuracy of reconstruction and shorten the computational time.
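    For context, a minimal DINEOF-style gap-filling loop is sketched below with a fixed number of EOF modes. The improvement proposed in the paper is precisely to let the optimal modes vary between intermediate reconstructions, which this sketch does not implement; the stopping rule and mode count are illustrative choices.

    ```python
    import numpy as np

    def eof_gap_fill(X, missing_mask, n_modes=5, n_iter=100, tol=1e-6):
        """DINEOF-style gap filling of a space x time matrix.

        Missing entries (missing_mask == True) are initialised with the mean of the
        observed data and iteratively replaced by a truncated-EOF reconstruction.
        """
        X = X.copy()
        X[missing_mask] = X[~missing_mask].mean()
        prev = X[missing_mask].copy()
        for _ in range(n_iter):
            U, s, Vt = np.linalg.svd(X, full_matrices=False)
            recon = (U[:, :n_modes] * s[:n_modes]) @ Vt[:n_modes]
            X[missing_mask] = recon[missing_mask]           # update only the gaps
            if np.linalg.norm(X[missing_mask] - prev) < tol * (np.linalg.norm(prev) + 1e-12):
                break
            prev = X[missing_mask].copy()
        return X
    ```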

  6. Trace: a high-throughput tomographic reconstruction engine for large-scale datasets.

    PubMed

    Bicer, Tekin; Gürsoy, Doğa; Andrade, Vincent De; Kettimuthu, Rajkumar; Scullin, William; Carlo, Francesco De; Foster, Ian T

    2017-01-01

    Modern synchrotron light sources and detectors produce data at such scale and complexity that large-scale computation is required to unleash their full power. One of the widely used imaging techniques that generates data at tens of gigabytes per second is computed tomography (CT). Although CT experiments result in rapid data generation, the analysis and reconstruction of the collected data may require hours or even days of computation time with a medium-sized workstation, which hinders the scientific progress that relies on the results of analysis. We present Trace, a data-intensive computing engine that we have developed to enable high-performance implementation of iterative tomographic reconstruction algorithms for parallel computers. Trace provides fine-grained reconstruction of tomography datasets using both (thread-level) shared memory and (process-level) distributed memory parallelization. Trace utilizes a special data structure called replicated reconstruction object to maximize application performance. We also present the optimizations that we apply to the replicated reconstruction objects and evaluate them using tomography datasets collected at the Advanced Photon Source. Our experimental evaluations show that our optimizations and parallelization techniques can provide 158× speedup using 32 compute nodes (384 cores) over a single-core configuration and decrease the end-to-end processing time of a large sinogram (with 4501 × 1 × 22,400 dimensions) from 12.5 h to <5 min per iteration. The proposed tomographic reconstruction engine can efficiently process large-scale tomographic data using many compute nodes and minimize reconstruction times.

  7. Reconstruction of noisy and blurred images using blur kernel

    NASA Astrophysics Data System (ADS)

    Ellappan, Vijayan; Chopra, Vishal

    2017-11-01

    Blur is common in many digital images. It can be caused by motion of the camera or of objects in the scene. In this work we propose a new method for deblurring images that uses sparse representation to identify the blur kernel. By analyzing the image in a coarse-to-fine manner, we estimate the kernel from the image coordinates and thereby obtain the motion angle of the shaken or blurred image. We then calculate the length of the motion kernel using the Radon transform and the Fourier spectrum of the image, and apply the Richardson-Lucy algorithm, a non-blind deconvolution (NBID) algorithm, to obtain a cleaner, less noisy output image. All these operations are performed in the MATLAB IDE.
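    The non-blind deconvolution step can be sketched as below. The linear-motion PSF constructor and the plain Richardson-Lucy iteration are generic textbook versions written for illustration, not the authors' MATLAB implementation, and the kernel-estimation stage (angle via the Radon transform, length via the Fourier spectrum) is not reproduced here.

    ```python
    import numpy as np
    from scipy.signal import fftconvolve

    def motion_psf(length, angle_deg, size=31):
        """Linear-motion blur kernel of a given length (pixels) and angle (degrees).
        Assumes length < size so the line fits inside the kernel support."""
        psf = np.zeros((size, size))
        c = size // 2
        theta = np.deg2rad(angle_deg)
        for t in np.linspace(-length / 2, length / 2, 4 * int(length) + 1):
            r, col = c + t * np.sin(theta), c + t * np.cos(theta)
            psf[int(round(r)), int(round(col))] = 1.0
        return psf / psf.sum()

    def richardson_lucy(observed, psf, n_iter=30):
        """Classical (non-blind) Richardson-Lucy deconvolution."""
        estimate = np.full_like(observed, observed.mean(), dtype=float)
        psf_mirror = psf[::-1, ::-1]
        for _ in range(n_iter):
            blurred = fftconvolve(estimate, psf, mode="same")
            ratio = observed / np.maximum(blurred, 1e-12)   # avoid division by zero
            estimate *= fftconvolve(ratio, psf_mirror, mode="same")
        return estimate
    ```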

  8. Quantum secret sharing using orthogonal multiqudit entangled states

    NASA Astrophysics Data System (ADS)

    Bai, Chen-Ming; Li, Zhi-Hui; Liu, Cheng-Ji; Li, Yong-Ming

    2017-12-01

    In this work, we investigate the distinguishability of orthogonal multiqudit entangled states under restricted local operations and classical communication. According to these properties, we propose a quantum secret sharing scheme to realize three types of access structures, i.e., the ( n, n)-threshold, the restricted (3, n)-threshold and restricted (4, n)-threshold schemes (called LOCC-QSS scheme). All cooperating players in the restricted threshold schemes are from two disjoint groups. In the proposed protocol, the participants use the computational basis measurement and classical communication to distinguish between those orthogonal states and reconstruct the original secret. Furthermore, we also analyze the security of our scheme in four primary quantum attacks and give a simple encoding method in order to better prevent the participant conspiracy attack.

  9. Some aspects of cool main sequence star ages derived from stellar rotation (gyrochronology)

    NASA Astrophysics Data System (ADS)

    Barnes, S. A.; Spada, F.; Weingrill, J.

    2016-09-01

    Rotation periods for cool stars can be measured with good precision by monitoring starspot light modulation. Observations have shown that the rotation periods of dwarf stars of roughly solar metallicity have such systematic dependencies on stellar age and mass that they can be used to derive reliable ages, a procedure called gyrochronology. We review the method and show illustrative cases, including recent ground- and space-based data. The age uncertainties approach 10 % in the best cases, making them a valuable complement to, and constraint on, asteroseismic or other ages. Edited, updated, and refereed version of a presentation at the WE-Heraeus-Seminar in Bad Honnef, Germany: Reconstructing the Milky Way's History: Spectroscopic Surveys, Asteroseismology and Chemodynamical Models

  10. A density based algorithm to detect cavities and holes from planar points

    NASA Astrophysics Data System (ADS)

    Zhu, Jie; Sun, Yizhong; Pang, Yueyong

    2017-12-01

    Delaunay-based shape reconstruction algorithms are widely used for approximating the shape of planar point sets. However, these algorithms cannot ensure the optimality of the varied reconstructed cavity boundaries and hole boundaries. This inadequate reconstruction can be primarily attributed to the lack of an efficient mathematical formulation for the two structures (hole and cavity). In this paper, we develop an efficient algorithm for generating cavities and holes from planar points. The algorithm yields the final boundary based on an iterative removal of the Delaunay triangulation. Our algorithm is divided into two main steps, namely, rough and refined shape reconstruction. The rough shape reconstruction is controlled by a relative parameter. Based on the rough result, the refined shape reconstruction mainly aims to detect holes and pure cavities. Cavities and holes are conceptualized as structures in which a low-density region is surrounded by a high-density region. With this structure, a cavity or hole is characterized by a mathematical formulation called the compactness of a point, formed by the length variation of the edges incident to that point in the Delaunay triangulation. The boundaries of cavities and holes are then found by locating a gradient change in the compactness of the point set. The experimental comparison with other shape reconstruction approaches shows that the proposed algorithm is able to accurately yield the boundaries of cavities and holes for varying point set densities and distributions.

  11. A novel algorithm of super-resolution image reconstruction based on multi-class dictionaries for natural scene

    NASA Astrophysics Data System (ADS)

    Wu, Wei; Zhao, Dewei; Zhang, Huan

    2015-12-01

    Super-resolution image reconstruction is an effective method to improve image quality and has important research significance in the field of image processing. However, the choice of the dictionary directly affects the efficiency of image reconstruction. Sparse representation theory is introduced into the problem of nearest neighbor selection. Building on the sparse-representation super-resolution reconstruction method, a super-resolution image reconstruction algorithm based on multi-class dictionaries is analyzed. This method avoids the redundancy problem of training a single overcomplete dictionary, makes the sub-dictionaries more representative, and replaces the traditional Euclidean distance computation to improve the quality of the whole image reconstruction. In addition, non-local self-similarity regularization is introduced to handle the ill-posed problem. Experimental results show that the algorithm achieves much better results than state-of-the-art algorithms in terms of both PSNR and visual perception.

  12. A robust real-time surface reconstruction method on point clouds captured from a 3D surface photogrammetry system

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liu, Wenyang; Cheung, Yam; Sawant, Amit

    2016-05-15

    Purpose: To develop a robust and real-time surface reconstruction method on point clouds captured from a 3D surface photogrammetry system. Methods: The authors have developed a robust and fast surface reconstruction method on point clouds acquired by the photogrammetry system, without explicitly solving the partial differential equation required by a typical variational approach. Taking advantage of the overcomplete nature of the acquired point clouds, their method solves and propagates a sparse linear relationship from the point cloud manifold to the surface manifold, assuming both manifolds share similar local geometry. With relatively consistent point cloud acquisitions, the authors propose a sparse regression (SR) model to directly approximate the target point cloud as a sparse linear combination from the training set, assuming that the point correspondences built by the iterative closest point (ICP) are reasonably accurate and have residual errors following a Gaussian distribution. To accommodate changing noise levels and/or presence of inconsistent occlusions during the acquisition, the authors further propose a modified sparse regression (MSR) model to model the potentially large and sparse error built by ICP with a Laplacian prior. The authors evaluated the proposed method on both clinical point clouds acquired under consistent acquisition conditions and on point clouds with inconsistent occlusions. The authors quantitatively evaluated the reconstruction performance with respect to root-mean-squared error, by comparing its reconstruction results against that from the variational method. Results: On clinical point clouds, both the SR and MSR models have achieved sub-millimeter reconstruction accuracy and reduced the reconstruction time by two orders of magnitude to a subsecond reconstruction time. On point clouds with inconsistent occlusions, the MSR model has demonstrated its advantage in achieving consistent and robust performance despite the introduced occlusions. Conclusions: The authors have developed a fast and robust surface reconstruction method on point clouds captured from a 3D surface photogrammetry system, with demonstrated sub-millimeter reconstruction accuracy and subsecond reconstruction time. It is suitable for real-time motion tracking in radiotherapy, with clear surface structures for better quantifications.
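
    A hedged sketch of the sparse-regression (SR) idea described above: the target point cloud is approximated as a sparse linear combination of registered training clouds. It assumes all clouds have already been brought into point-to-point correspondence (e.g., via ICP) and uses scikit-learn's Lasso as the sparse solver; the `alpha` value is illustrative, not the authors' setting.

```python
import numpy as np
from sklearn.linear_model import Lasso

def sparse_reconstruct(training_clouds, target_cloud, alpha=0.01):
    """Approximate a target cloud as a sparse combination of training clouds.

    training_clouds: array of shape (K, N, 3); target_cloud: array of shape (N, 3).
    Assumes a common point ordering across clouds (ICP correspondences).
    """
    K, N, _ = training_clouds.shape
    A = training_clouds.reshape(K, -1).T      # (3N, K) dictionary of flattened clouds
    b = target_cloud.reshape(-1)              # (3N,) flattened target
    model = Lasso(alpha=alpha, fit_intercept=False, max_iter=5000)
    model.fit(A, b)
    w = model.coef_                           # sparse combination weights
    surface = (A @ w).reshape(N, 3)           # reconstructed (denoised) cloud
    return surface, w
```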

  13. A robust real-time surface reconstruction method on point clouds captured from a 3D surface photogrammetry system

    PubMed Central

    Liu, Wenyang; Cheung, Yam; Sawant, Amit; Ruan, Dan

    2016-01-01

    Purpose: To develop a robust and real-time surface reconstruction method on point clouds captured from a 3D surface photogrammetry system. Methods: The authors have developed a robust and fast surface reconstruction method on point clouds acquired by the photogrammetry system, without explicitly solving the partial differential equation required by a typical variational approach. Taking advantage of the overcomplete nature of the acquired point clouds, their method solves and propagates a sparse linear relationship from the point cloud manifold to the surface manifold, assuming both manifolds share similar local geometry. With relatively consistent point cloud acquisitions, the authors propose a sparse regression (SR) model to directly approximate the target point cloud as a sparse linear combination from the training set, assuming that the point correspondences built by the iterative closest point (ICP) are reasonably accurate and have residual errors following a Gaussian distribution. To accommodate changing noise levels and/or presence of inconsistent occlusions during the acquisition, the authors further propose a modified sparse regression (MSR) model to model the potentially large and sparse error built by ICP with a Laplacian prior. The authors evaluated the proposed method on both clinical point clouds acquired under consistent acquisition conditions and on point clouds with inconsistent occlusions. The authors quantitatively evaluated the reconstruction performance with respect to root-mean-squared error, by comparing its reconstruction results against that from the variational method. Results: On clinical point clouds, both the SR and MSR models have achieved sub-millimeter reconstruction accuracy and reduced the reconstruction time by two orders of magnitude to a subsecond reconstruction time. On point clouds with inconsistent occlusions, the MSR model has demonstrated its advantage in achieving consistent and robust performance despite the introduced occlusions. Conclusions: The authors have developed a fast and robust surface reconstruction method on point clouds captured from a 3D surface photogrammetry system, with demonstrated sub-millimeter reconstruction accuracy and subsecond reconstruction time. It is suitable for real-time motion tracking in radiotherapy, with clear surface structures for better quantifications. PMID:27147347

  14. A tale of two arcs? Plate tectonics of the Izu-Bonin-Mariana (IBM) arc using subducted slab constraints

    NASA Astrophysics Data System (ADS)

    Wu, J. E.; Suppe, J.; Renqi, L.; Kanda, R. V. S.

    2014-12-01

    Published plate reconstructions typically show the Izu-Bonin Marianas arc (IBM) forming as a result of long-lived ~50 Ma Pacific subduction beneath the Philippine Sea. These reconstructions rely on the critical assumption that the Philippine Sea was continuously coupled to the Pacific during the lifetime of the IBM arc. Because of this assumption, significant (up to 1500 km) Pacific trench retreat is required to accommodate the 2000 km of Philippine Sea/IBM northward motion since the Eocene that is constrained by paleomagnetic data. In this study, we have mapped subducted slabs of mantle lithosphere from MITP08 global seismic tomography (Li et al., 2008) and restored them to a model Earth surface to constrain plate tectonic reconstructions. Here we present two subducted slab constraints that call into question current IBM arc reconstructions: 1) The northern and central Marianas slabs form a sub-vertical 'slab wall' down to maximum 1500 km depths in the lower mantle. This slab geometry is best explained by a near-stationary Marianas trench that has remained +/- 250 km E-W of its present-day position since ~45 Ma, and does not support any significant Pacific slab retreat. 2) A vanished ocean is revealed by an extensive swath of sub-horizontal slabs at 700 to 1000 km depths in the lower mantle below present-day Philippine Sea to Papua New Guinea. We call this vanished ocean the 'East Asian Sea'. When placed in an Eocene plate reconstruction, the East Asian Sea fits west of the reconstructed Marianas Pacific trench position and north of the Philippine Sea plate. This implies that the Philippine Sea and Pacific were not adjacent at IBM initiation, but were in fact separated by a lost ocean. Here we propose a new IBM arc reconstruction constrained by subducted slabs mapped under East Asia. At ~50 Ma, the present-day IBM arc initiated at equatorial latitudes from East Asian Sea subduction below the Philippine Sea. A separate arc was formed from Pacific subduction below the East Asian Sea. The Philippine Sea plate moved northwards, overrunning the East Asian Sea and the two arcs collided between 15 to 20 Ma. From 15 Ma to the present, IBM arc magmatism was produced by Pacific subduction beneath the Philippine Sea.

  15. Ancestral sequence reconstruction in primate mitochondrial DNA: compositional bias and effect on functional inference.

    PubMed

    Krishnan, Neeraja M; Seligmann, Hervé; Stewart, Caro-Beth; De Koning, A P Jason; Pollock, David D

    2004-10-01

    Reconstruction of ancestral DNA and amino acid sequences is an important means of inferring information about past evolutionary events. Such reconstructions suggest changes in molecular function and evolutionary processes over the course of evolution and are used to infer adaptation and convergence. Maximum likelihood (ML) is generally thought to provide relatively accurate reconstructed sequences compared to parsimony, but both methods lead to the inference of multiple directional changes in nucleotide frequencies in primate mitochondrial DNA (mtDNA). To better understand this surprising result, as well as to better understand how parsimony and ML differ, we constructed a series of computationally simple "conditional pathway" methods that differed in the number of substitutions allowed per site along each branch, and we also evaluated the entire Bayesian posterior frequency distribution of reconstructed ancestral states. We analyzed primate mitochondrial cytochrome b (Cyt-b) and cytochrome oxidase subunit I (COI) genes and found that ML reconstructs ancestral frequencies that are often more different from tip sequences than are parsimony reconstructions. In contrast, frequency reconstructions based on the posterior ensemble more closely resemble extant nucleotide frequencies. Simulations indicate that these differences in ancestral sequence inference are probably due to deterministic bias caused by high uncertainty in the optimization-based ancestral reconstruction methods (parsimony, ML, Bayesian maximum a posteriori). In contrast, ancestral nucleotide frequencies based on an average of the Bayesian set of credible ancestral sequences are much less biased. The methods involving simpler conditional pathway calculations have slightly reduced likelihood values compared to full likelihood calculations, but they can provide fairly unbiased nucleotide reconstructions and may be useful in more complex phylogenetic analyses than considered here due to their speed and flexibility. To determine whether biased reconstructions using optimization methods might affect inferences of functional properties, ancestral primate mitochondrial tRNA sequences were inferred and helix-forming propensities for conserved pairs were evaluated in silico. For ambiguously reconstructed nucleotides at sites with high base composition variability, ancestral tRNA sequences from Bayesian analyses were more compatible with canonical base pairing than were those inferred by other methods. Thus, nucleotide bias in reconstructed sequences apparently can lead to serious bias and inaccuracies in functional predictions.

  16. Apparatus And Method For Reconstructing Data Using Cross-Parity Stripes On Storage Media

    DOEpatents

    Hughes, James Prescott

    2003-06-17

    An apparatus and method for reconstructing missing data using cross-parity stripes on a storage medium is provided. The apparatus and method may operate on data symbols having sizes greater than a data bit. The apparatus and method makes use of a plurality of parity stripes for reconstructing missing data stripes. The parity symbol values in the parity stripes are used as a basis for determining the value of the missing data symbol in a data stripe. A correction matrix is shifted along the data stripes, correcting missing data symbols as it is shifted. The correction is performed from the outside data stripes towards the inner data stripes to thereby use previously reconstructed data symbols to reconstruct other missing data symbols.
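
    A toy illustration in the spirit of the apparatus described above, showing only the basic single-stripe case in which a missing data symbol is recovered as the XOR of the parity symbol with the surviving symbols; the patent's multiple cross-parity stripes and shifting correction matrix are not reproduced here.

```python
import numpy as np

def make_parity(stripe):
    """Parity symbol for a data stripe: XOR of all data symbols."""
    return np.bitwise_xor.reduce(stripe)

def recover_missing(stripe, missing_index, parity):
    """Recover one missing symbol from the remaining symbols and the parity."""
    known = np.delete(stripe, missing_index)      # ignore the (assumed lost) symbol
    return np.bitwise_xor.reduce(known) ^ parity

data = np.array([0x3A, 0x7F, 0x10, 0xC4], dtype=np.uint8)
parity = make_parity(data)
recovered = recover_missing(data, 2, parity)      # pretend symbol 2 was lost
assert recovered == data[2]
```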

  17. A streaming multi-GPU implementation of image simulation algorithms for scanning transmission electron microscopy

    DOE PAGES

    Pryor, Alan; Ophus, Colin; Miao, Jianwei

    2017-10-25

    Simulation of atomic-resolution image formation in scanning transmission electron microscopy can require significant computation times using traditional methods. A recently developed method, termed plane-wave reciprocal-space interpolated scattering matrix (PRISM), demonstrates potential for significant acceleration of such simulations with negligible loss of accuracy. In this paper, we present a software package called Prismatic for parallelized simulation of image formation in scanning transmission electron microscopy (STEM) using both the PRISM and multislice methods. By distributing the workload between multiple CUDA-enabled GPUs and multicore processors, accelerations as high as 1000 × for PRISM and 15 × for multislice are achieved relative to traditional multislice implementations using a single 4-GPU machine. We demonstrate a potentially important application of Prismatic, using it to compute images for atomic electron tomography at sufficient speeds to include in the reconstruction pipeline. Prismatic is freely available both as an open-source CUDA/C++ package with a graphical user interface and as a Python package, PyPrismatic.

  18. Light Microscopy at Maximal Precision

    NASA Astrophysics Data System (ADS)

    Bierbaum, Matthew; Leahy, Brian D.; Alemi, Alexander A.; Cohen, Itai; Sethna, James P.

    2017-10-01

    Microscopy is the workhorse of the physical and life sciences, producing crisp images of everything from atoms to cells well beyond the capabilities of the human eye. However, the analysis of these images is frequently little more accurate than manual marking. Here, we revolutionize the analysis of microscopy images, extracting all the useful information theoretically contained in a complex microscope image. Using a generic, methodological approach, we extract the information by fitting experimental images with a detailed optical model of the microscope, a method we call parameter extraction from reconstructing images (PERI). As a proof of principle, we demonstrate this approach with a confocal image of colloidal spheres, improving measurements of particle positions and radii by 10-100 times over current methods and attaining the maximum possible accuracy. With this unprecedented accuracy, we measure nanometer-scale colloidal interactions in dense suspensions solely with light microscopy, a previously impossible feat. Our approach is generic and applicable to imaging methods from brightfield to electron microscopy, where we expect accuracies of 1 nm and 0.1 pm, respectively.

  19. A streaming multi-GPU implementation of image simulation algorithms for scanning transmission electron microscopy.

    PubMed

    Pryor, Alan; Ophus, Colin; Miao, Jianwei

    2017-01-01

    Simulation of atomic-resolution image formation in scanning transmission electron microscopy can require significant computation times using traditional methods. A recently developed method, termed plane-wave reciprocal-space interpolated scattering matrix (PRISM), demonstrates potential for significant acceleration of such simulations with negligible loss of accuracy. Here, we present a software package called Prismatic for parallelized simulation of image formation in scanning transmission electron microscopy (STEM) using both the PRISM and multislice methods. By distributing the workload between multiple CUDA-enabled GPUs and multicore processors, accelerations as high as 1000 × for PRISM and 15 × for multislice are achieved relative to traditional multislice implementations using a single 4-GPU machine. We demonstrate a potentially important application of Prismatic, using it to compute images for atomic electron tomography at sufficient speeds to include in the reconstruction pipeline. Prismatic is freely available both as an open-source CUDA/C++ package with a graphical user interface and as a Python package, PyPrismatic.

  20. A streaming multi-GPU implementation of image simulation algorithms for scanning transmission electron microscopy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pryor, Alan; Ophus, Colin; Miao, Jianwei

    Simulation of atomic-resolution image formation in scanning transmission electron microscopy can require significant computation times using traditional methods. A recently developed method, termed plane-wave reciprocal-space interpolated scattering matrix (PRISM), demonstrates potential for significant acceleration of such simulations with negligible loss of accuracy. In this paper, we present a software package called Prismatic for parallelized simulation of image formation in scanning transmission electron microscopy (STEM) using both the PRISM and multislice methods. By distributing the workload between multiple CUDA-enabled GPUs and multicore processors, accelerations as high as 1000 × for PRISM and 15 × for multislice are achieved relative to traditional multislice implementations using a single 4-GPU machine. We demonstrate a potentially important application of Prismatic, using it to compute images for atomic electron tomography at sufficient speeds to include in the reconstruction pipeline. Prismatic is freely available both as an open-source CUDA/C++ package with a graphical user interface and as a Python package, PyPrismatic.

  1. Identification of the Structure Model of the Si(111)-(5×2)-Au Surface

    NASA Astrophysics Data System (ADS)

    Shirasawa, Tetsuroh; Voegeli, Wolfgang; Nojima, Takehiro; Iwasawa, Yusaku; Yamaguchi, Yudai; Takahashi, Toshio

    2014-10-01

    The atomic structure of the Si(111)-(5×2)-Au surface, a periodic gold chain on the silicon surface, has been a long-debated issue in surface science. The three recent candidates, the so-called Erwin-Barke-Himpsel (EBH) model [S. C. Erwin, I. Barke, and F. J. Himpsel, Phys. Rev. B 80, 155409 (2009)], the Abukawa-Nishigaya (AN) model [T. Abukawa and Y. Nishigaya, Phys. Rev. Lett. 110, 036102 (2013)], and the Kwon-Kang (KK) model [S. G. Kwon and M. H. Kang, Phys. Rev. Lett. 113, 086101 (2014)], which has one more Au atom than the EBH model, are tested against surface x-ray diffraction data. A two-dimensional Patterson map constructed from the in-plane diffraction intensities rejects the AN model and prefers the KK model over the EBH model. On the basis of the arrangement of Au obtained from the Patterson map, all the reconstructed Si atoms, such as the so-called honeycomb chain structure, are directly imaged using a holographic method. The KK model reproduces out-of-plane diffraction data as well.
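
    For reference, a Patterson map of the kind used above is the Fourier transform of the measured intensities (the autocorrelation of the projected density), so its peaks mark interatomic vectors. A minimal sketch, assuming the in-plane intensities have already been sampled onto a regular (h, k) grid, which is not how the actual data are described in the record:

```python
import numpy as np

def patterson_map(intensity_hk):
    """2D Patterson map from in-plane diffraction intensities |F(h,k)|^2.

    intensity_hk: 2D array with the (0, 0) reflection at index [0, 0]
    (apply np.fft.ifftshift first if it is centred).
    """
    p = np.fft.ifft2(intensity_hk)
    # Centre the map; peaks appear at interatomic (e.g., Au-Au) vector positions.
    return np.real(np.fft.fftshift(p))
```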

  2. Optimal Ranking Regime Analysis of TreeFlow Dendrohydrological Reconstructions

    NASA Astrophysics Data System (ADS)

    Mauget, S. A.

    2017-12-01

    The Optimal Ranking Regime (ORR) method was used to identify 6-100 year time windows containing significant ranking sequences in 55 western U.S. streamflow reconstructions, and reconstructions of the level of the Great Salt Lake and San Francisco Bay salinity during 1500-2007. The method's ability to identify optimally significant and non-overlapping runs of low and high rankings allows it to re-express a reconstruction time series as a simplified sequence of regime segments marking intra- to multi-decadal (IMD) periods of low or high streamflow, lake level, or salinity. Those ORR sequences, referred to here as Z-lines, can be plotted to identify consistent regime patterns in the analysis of numerous reconstructions. The Z-lines for the 57 reconstructions evaluated here show a common pattern of IMD cycles of drought and pluvial periods during the late 16th and 17th centuries, a relatively dormant period during the 18th century, and the reappearance of alternating dry and wet IMD periods during the 19th and early 20th centuries. Although this pattern suggests the possibility of similarly active and inactive oceanic modes in the North Pacific and North Atlantic, such centennial-scale patterns are not evident in the ORR analyses of reconstructed Pacific Decadal Oscillation (PDO), El Niño-Southern Oscillation, and North Atlantic seas-surface temperature variation. But given the inconsistency in the analyses of four PDO reconstructions the possible role of centennial-scale oceanic mechanisms is uncertain. In future research the ORR method might be applied to climate reconstructions around the Pacific Basin to try to resolve this uncertainty. Given its ability to compare regime patterns in climate reconstructions derived using different methods and proxies, the method may also be used in future research to evaluate long-term regional temperature reconstructions.

  3. Reconstruction of a digital core containing clay minerals based on a clustering algorithm.

    PubMed

    He, Yanlong; Pu, Chunsheng; Jing, Cheng; Gu, Xiaoyu; Chen, Qingdong; Liu, Hongzhi; Khan, Nasir; Dong, Qiaoling

    2017-10-01

    It is difficult to obtain core samples and the information needed for digital core reconstruction of mature sandstone reservoirs around the world, especially for unconsolidated sandstone reservoirs. Reconstruction and division of clay minerals play a vital role in building digital cores, even though two-dimensional data-based reconstruction methods are well suited to simulating the microstructure of sandstone reservoirs. Accurately reconstructing the various clay minerals within digital cores nevertheless remains a research challenge. In the present work, the clay-mineral content was considered on the basis of two-dimensional information about the reservoir. After applying the hybrid method, and in comparison with a model reconstructed by a process-based method, the output was a digital core containing clay clusters without labels for the clusters' number, size, and texture. The statistics and geometry of the reconstructed model were similar to those of the reference model. The Hoshen-Kopelman algorithm was then used to label the connected, unclassified clay clusters in the initial model, and the number and size of the clay clusters were recorded. The K-means clustering algorithm was subsequently applied to divide large labeled connected clusters into smaller clusters on the basis of differences in cluster characteristics. According to the clay minerals' characteristics, such as type, texture, and distribution, the digital core containing clay minerals was reconstructed by means of the clustering algorithm and a structural judgment of the clay clusters. The resulting distributions and textures of the clay minerals in the digital core were reasonable. The clustering algorithm improved the digital core reconstruction and provides an alternative method for simulating different clay minerals in digital cores.
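
    A schematic sketch of the two labelling steps described above, using scipy.ndimage.label as a stand-in for the Hoshen-Kopelman algorithm and K-means on voxel coordinates to split over-large connected clay clusters; the size threshold and sub-cluster count are illustrative assumptions, not values from the paper.

```python
import numpy as np
from scipy import ndimage
from sklearn.cluster import KMeans

def label_and_split(clay_mask, max_cluster_size=5000, n_subclusters=3):
    """Label connected clay clusters, then split clusters larger than a threshold."""
    labels, n = ndimage.label(clay_mask)                       # connected-component labelling
    sizes = ndimage.sum(clay_mask, labels, index=np.arange(1, n + 1))
    next_label = n + 1
    for lab, size in enumerate(np.atleast_1d(sizes), start=1):
        if size <= max_cluster_size:
            continue
        coords = np.argwhere(labels == lab)                    # voxels of the large cluster
        km = KMeans(n_clusters=n_subclusters, n_init=10).fit(coords)
        for sub in range(1, n_subclusters):                    # sub-cluster 0 keeps label `lab`
            sel = coords[km.labels_ == sub]
            labels[tuple(sel.T)] = next_label
            next_label += 1
    return labels
```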

  4. Detrimental Behaviours in Collaborative Tasks.

    ERIC Educational Resources Information Center

    Lim, Wai Lee; Jacobs, George M.

    Using a Vygotskian perspective, the researchers investigated the interaction of secondary school language learners engaged in a dictogloss task that called for collaborative reconstruction of a text. The investigation focused on the students' behaviors that were detrimental to effective interaction and made it less likely that students would be…

  5. Social Justice and Education as Discursive Initiation

    ERIC Educational Resources Information Center

    Stojanov, Krassimir

    2016-01-01

    In this essay Krassimir Stojanov attempts first to reconstruct the "heart" of Jürgen Habermas's discourse ethics, namely the so-called "principle of universalization" of ethical norms. This principle grounds Habermas's proceduralist account of social justice via equal access of all concerned to the practices of deliberative…

  6. 40 CFR 52.870 - Identification of plan.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 66219; at EPA Air and Radiation Docket and Information Center, EPA West Building, 1301 Constitution... (202) 566-1742. For information on the availability of this material at NARA, call (202) 741-6030, or... definition of the terms “building, structure, facility, or installation”; “installation”; and “reconstruction...

  7. An extended algebraic reconstruction technique (E-ART) for dual spectral CT.

    PubMed

    Zhao, Yunsong; Zhao, Xing; Zhang, Peng

    2015-03-01

    Compared with standard computed tomography (CT), dual spectral CT (DSCT) has many advantages for object separation, contrast enhancement, artifact reduction, and material composition assessment. However, it is generally difficult to reconstruct images from polychromatic projections acquired by DSCT, because of the nonlinear relation between the polychromatic projections and the images to be reconstructed. This paper first models the DSCT reconstruction problem as a nonlinear system and then extends the classic ART method to solve it. One feature of the proposed method is its flexibility: it fits any commonly used scanning configuration and does not require consistent rays for different X-ray spectra. Another feature is its high degree of parallelism, which makes the method suitable for acceleration on GPUs (graphics processing units) or other parallel systems. The method is validated with numerical experiments on simulated noise-free and noisy data. High-quality images are reconstructed with the proposed method from the polychromatic projections of DSCT. The reconstructed images remain satisfactory even if there are certain errors in the estimated X-ray spectra.
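
    For context, a minimal sketch of the classical (single-spectrum) ART/Kaczmarz update that E-ART generalizes to the coupled nonlinear dual-spectral system; `A` is the projection matrix, `p` the measured projections, and the relaxation factor and iteration count are assumed values.

```python
import numpy as np

def art(A, p, n_iters=10, lam=0.5):
    """Classical ART (Kaczmarz) reconstruction: cycle over rays, project onto each row."""
    x = np.zeros(A.shape[1])
    row_norms = np.einsum("ij,ij->i", A, A)          # squared norm of each ray's row
    for _ in range(n_iters):
        for i in range(A.shape[0]):
            if row_norms[i] == 0:
                continue
            residual = p[i] - A[i] @ x               # mismatch for ray i
            x += lam * residual / row_norms[i] * A[i]
    return x
```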

  8. Baryon Acoustic Oscillations reconstruction with pixels

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Obuljen, Andrej; Villaescusa-Navarro, Francisco; Castorina, Emanuele

    2017-09-01

    Gravitational non-linear evolution induces a shift in the position of the baryon acoustic oscillations (BAO) peak together with a damping and broadening of its shape, which bias and degrade the accuracy with which the position of the peak can be determined. BAO reconstruction is a technique developed to undo part of the effect of non-linearities. We present and analyse a reconstruction method that consists of displacing pixels instead of galaxies and whose implementation is easier than the standard reconstruction method. We show that this method is equivalent to the standard reconstruction technique in the limit where the number of pixels becomes very large. This method is particularly useful in surveys where individual galaxies are not resolved, as in 21cm intensity mapping observations. We validate this method by reconstructing mock pixelated maps, which we build from the distribution of matter and halos in real- and redshift-space, from a large set of numerical simulations. We find that this method is able to decrease the uncertainty in the BAO peak position by 30-50% over the typical angular resolution scales of 21 cm intensity mapping experiments.

  9. GPU-accelerated Kernel Regression Reconstruction for Freehand 3D Ultrasound Imaging.

    PubMed

    Wen, Tiexiang; Li, Ling; Zhu, Qingsong; Qin, Wenjian; Gu, Jia; Yang, Feng; Xie, Yaoqin

    2017-07-01

    The volume reconstruction method plays an important role in improving reconstructed volumetric image quality for freehand three-dimensional (3D) ultrasound imaging. By utilizing the capability of a programmable graphics processing unit (GPU), we can achieve real-time incremental volume reconstruction at a speed of 25-50 frames per second (fps). After incremental reconstruction and visualization, hole-filling is performed on the GPU to fill remaining empty voxels. However, traditional pixel nearest neighbor-based hole-filling fails to reconstruct volumes with high image quality. In contrast, kernel regression provides an accurate volume reconstruction method for 3D ultrasound imaging, but at the cost of heavy computational complexity. In this paper, a GPU-based fast kernel regression method is proposed for high-quality volume reconstruction after the incremental reconstruction of freehand ultrasound. The experimental results show that improved image quality for speckle reduction and detail preservation can be obtained with the parameter setting of kernel window size of [Formula: see text] and kernel bandwidth of 1.0. The computational performance of the proposed GPU-based method can be over 200 times faster than that on a central processing unit (CPU), and a volume of 50 million voxels in our experiment can be reconstructed within 10 seconds.
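
    A CPU-side sketch of the kernel-regression estimate applied per voxel, i.e., a Nadaraya-Watson average of nearby scattered ultrasound samples with a Gaussian kernel; the paper's contribution is executing an equivalent computation efficiently on the GPU, and the bandwidth and window values here are assumptions.

```python
import numpy as np

def kernel_regression_voxel(voxel_center, sample_pos, sample_val,
                            bandwidth=1.0, window=3.0):
    """Estimate one voxel value from scattered samples by Gaussian kernel regression.

    sample_pos: (M, 3) positions of B-mode pixels mapped into volume space;
    sample_val: (M,) corresponding intensities.
    """
    d2 = np.sum((sample_pos - voxel_center) ** 2, axis=1)
    mask = d2 < window ** 2                      # restrict to a local kernel window
    if not np.any(mask):
        return 0.0                               # leave empty; hole-filling handles it
    w = np.exp(-d2[mask] / (2.0 * bandwidth ** 2))
    return float(np.sum(w * sample_val[mask]) / np.sum(w))
```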

  10. Trace: a high-throughput tomographic reconstruction engine for large-scale datasets

    DOE PAGES

    Bicer, Tekin; Gursoy, Doga; Andrade, Vincent De; ...

    2017-01-28

    Here, synchrotron light source and detector technologies enable scientists to perform advanced experiments. These scientific instruments and experiments produce data at such scale and complexity that large-scale computation is required to unleash their full power. One of the widely used data acquisition techniques at light sources is computed tomography, which can generate tens of GB/s depending on x-ray range. A large-scale tomographic dataset, such as a mouse brain, may require hours of computation time with a medium-size workstation. In this paper, we present Trace, a data-intensive computing middleware we developed for implementation and parallelization of iterative tomographic reconstruction algorithms. Trace provides fine-grained reconstruction of tomography datasets using both (thread level) shared memory and (process level) distributed memory parallelization. Trace utilizes a special data structure called replicated reconstruction object to maximize application performance. We also present the optimizations we have done on the replicated reconstruction objects and evaluate them using a shale and a mouse brain sinogram. Our experimental evaluations show that the applied optimizations and parallelization techniques can provide 158x speedup (using 32 compute nodes) over a single-core configuration, which decreases the reconstruction time of a sinogram (with 4501 projections and 22400 detector resolution) from 12.5 hours to less than 5 minutes per iteration.

  11. Trace: a high-throughput tomographic reconstruction engine for large-scale datasets

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bicer, Tekin; Gursoy, Doga; Andrade, Vincent De

    Here, synchrotron light source and detector technologies enable scientists to perform advanced experiments. These scientific instruments and experiments produce data at such scale and complexity that large-scale computation is required to unleash their full power. One of the widely used data acquisition techniques at light sources is computed tomography, which can generate tens of GB/s depending on x-ray range. A large-scale tomographic dataset, such as a mouse brain, may require hours of computation time with a medium-size workstation. In this paper, we present Trace, a data-intensive computing middleware we developed for implementation and parallelization of iterative tomographic reconstruction algorithms. Trace provides fine-grained reconstruction of tomography datasets using both (thread level) shared memory and (process level) distributed memory parallelization. Trace utilizes a special data structure called replicated reconstruction object to maximize application performance. We also present the optimizations we have done on the replicated reconstruction objects and evaluate them using a shale and a mouse brain sinogram. Our experimental evaluations show that the applied optimizations and parallelization techniques can provide 158x speedup (using 32 compute nodes) over a single-core configuration, which decreases the reconstruction time of a sinogram (with 4501 projections and 22400 detector resolution) from 12.5 hours to less than 5 minutes per iteration.

  12. Are reconstruction filters necessary?

    NASA Astrophysics Data System (ADS)

    Holst, Gerald C.

    2006-05-01

    Shannon's sampling theorem (also called the Shannon-Whittaker-Kotel'nikov theorem) was developed for the digitization and reconstruction of sinusoids. Strict adherence is required when frequency preservation is important. Three conditions must be met to satisfy the sampling theorem: (1) The signal must be band-limited, (2) the digitizer must sample the signal at an adequate rate, and (3) a low-pass reconstruction filter must be present. In an imaging system, the signal is band-limited by the optics. For most imaging systems, the signal is not adequately sampled resulting in aliasing. While the aliasing seems excessive mathematically, it does not significantly affect the perceived image. The human visual system detects intensity differences, spatial differences (shapes), and color differences. The eye is less sensitive to frequency effects and therefore sampling artifacts have become quite acceptable. Indeed, we love our television even though it is significantly undersampled. The reconstruction filter, although absolutely essential, is rarely discussed. It converts digital data (which we cannot see) into a viewable analog signal. There are several reconstruction filters: electronic low-pass filters, the display media (monitor, laser printer), and your eye. These are often used in combination to create a perceived continuous image. Each filter modifies the MTF in a unique manner. Therefore image quality and system performance depends upon the reconstruction filter(s) used. The selection depends upon the application.
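
    As a concrete illustration of the ideal reconstruction filter discussed above, the Whittaker-Shannon interpolation below rebuilds a band-limited signal from its samples by summing shifted sinc kernels; displays and the eye only approximate this low-pass behaviour, and the signal and rates in the example are arbitrary.

```python
import numpy as np

def sinc_reconstruct(samples, T, t_query):
    """Whittaker-Shannon reconstruction of a band-limited signal sampled at interval T."""
    n = np.arange(len(samples))
    # One sinc kernel per sample, evaluated at the requested output times.
    kernels = np.sinc((t_query[:, None] - n[None, :] * T) / T)
    return kernels @ samples

# Example: reconstruct a 3 Hz sine sampled at 10 Hz onto a fine time grid.
T = 0.1
t_samp = np.arange(0, 1, T)
x = np.sin(2 * np.pi * 3 * t_samp)
t_fine = np.linspace(0, 1 - T, 200)
x_rec = sinc_reconstruct(x, T, t_fine)
```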

  13. Forced knee extension test is a manual test that correlates with the unstable feelings of patients with ACL injury before and after reconstruction.

    PubMed

    Shirasawa, Shinichi; Koga, Hideyuki; Horie, Masafumi; Nakamura, Tomomasa; Watanabe, Toshifumi; Sekiya, Ichiro; Muneta, Takeshi

    2016-12-01

    To investigate fear in patients with anterior cruciate ligament (ACL) injury before and after reconstruction, a forced knee extension (FKE) test was performed. The correlation of the test results was evaluated with the subjective function, sports performance and objective parameters. The study included 102 patients with unilateral ACL reconstruction using a semitendinosus tendon with full clinical evaluation. This study was retrospective and determined the longitudinal results of the FKE test and investigated the effects on the subjective and objective outcomes at 2 years. Preoperatively, 47% of patients showed positive FKE tests. The number of positive FKE tests was 31% at six months and 15% at 24 months after ACL reconstruction. At two years, there were statistically significant differences between the FKE test positives and negatives regarding both subjective knee recovery (P=0.0095) and sports performance (P=0.0006). A new manual test, called the forced knee extension test, for fear in patients with ACL injury before and after reconstruction was introduced. The apprehension remained positive in 15% of the patients two years after ACL reconstruction, which affected subjective recovery of knee function and sports performance. Copyright © 2016 Elsevier B.V. All rights reserved.

  14. A hierarchical Bayesian method for vibration-based time domain force reconstruction problems

    NASA Astrophysics Data System (ADS)

    Li, Qiaofeng; Lu, Qiuhai

    2018-05-01

    Traditional force reconstruction techniques require prior knowledge on the force nature to determine the regularization term. When such information is unavailable, the inappropriate term is easily chosen and the reconstruction result becomes unsatisfactory. In this paper, we propose a novel method to automatically determine the appropriate q as in ℓq regularization and reconstruct the force history. The method incorporates all to-be-determined variables such as the force history, precision parameters and q into a hierarchical Bayesian formulation. The posterior distributions of variables are evaluated by a Metropolis-within-Gibbs sampler. The point estimates of variables and their uncertainties are given. Simulations of a cantilever beam and a space truss under various loading conditions validate the proposed method in providing adaptive determination of q and better reconstruction performance than existing Bayesian methods.

  15. Markov prior-based block-matching algorithm for superdimension reconstruction of porous media

    NASA Astrophysics Data System (ADS)

    Li, Yang; He, Xiaohai; Teng, Qizhi; Feng, Junxi; Wu, Xiaohong

    2018-04-01

    A superdimension reconstruction algorithm is used for the reconstruction of three-dimensional (3D) structures of a porous medium based on a single two-dimensional image. The algorithm borrows the concepts of "blocks," "learning," and "dictionary" from learning-based superresolution reconstruction and applies them to the 3D reconstruction of a porous medium. In the neighborhood-matching process of the conventional superdimension reconstruction algorithm, the Euclidean distance is used as a criterion, although it may not really reflect the structural correlation between adjacent blocks in an actual situation. Hence, in this study, regular items are adopted as prior knowledge in the reconstruction process, and a Markov prior-based block-matching algorithm for superdimension reconstruction is developed for more accurate reconstruction. The algorithm simultaneously takes into consideration the probabilistic relationship between the already reconstructed blocks in three different perpendicular directions (x , y , and z ) and the block to be reconstructed, and the maximum value of the probability product of the blocks to be reconstructed (as found in the dictionary for the three directions) is adopted as the basis for the final block selection. Using this approach, the problem of an imprecise spatial structure caused by a point simulation can be overcome. The problem of artifacts in the reconstructed structure is also addressed through the addition of hard data and by neighborhood matching. To verify the improved reconstruction accuracy of the proposed method, the statistical and morphological features of the results from the proposed method and traditional superdimension reconstruction method are compared with those of the target system. The proposed superdimension reconstruction algorithm is confirmed to enable a more accurate reconstruction of the target system while also eliminating artifacts.

  16. 3-D ultrasound volume reconstruction using the direct frame interpolation method.

    PubMed

    Scheipers, Ulrich; Koptenko, Sergei; Remlinger, Rachel; Falco, Tony; Lachaine, Martin

    2010-11-01

    A new method for 3-D ultrasound volume reconstruction using tracked freehand 3-D ultrasound is proposed. The method is based on solving the forward volume reconstruction problem using direct interpolation of high-resolution ultrasound B-mode image frames. A series of ultrasound B-mode image frames (an image series) is acquired using the freehand scanning technique and position sensing via optical tracking equipment. The proposed algorithm creates additional intermediate image frames by directly interpolating between two or more adjacent image frames of the original image series. The target volume is filled using the original frames in combination with the additionally constructed frames. Compared with conventional volume reconstruction methods, no additional filling of empty voxels or holes within the volume is required, because the whole extent of the volume is defined by the arrangement of the original and the additionally constructed B-mode image frames. The proposed direct frame interpolation (DFI) method was tested on two different data sets acquired while scanning the head and neck region of different patients. The first data set consisted of eight B-mode 2-D frame sets acquired under optimal laboratory conditions. The second data set consisted of 73 image series acquired during a clinical study. Sample volumes were reconstructed for all 81 image series using the proposed DFI method with four different interpolation orders, as well as with the pixel nearest-neighbor method using three different interpolation neighborhoods. In addition, volumes based on a reduced number of image frames were reconstructed for comparison of the different methods' accuracy and robustness in reconstructing image data that lies between the original image frames. The DFI method is based on a forward approach making use of a priori information about the position and shape of the B-mode image frames (e.g., masking information) to optimize the reconstruction procedure and to reduce computation times and memory requirements. The method is straightforward, independent of additional input or parameters, and uses the high-resolution B-mode image frames instead of usually lower-resolution voxel information for interpolation. The DFI method can be considered as a valuable alternative to conventional 3-D ultrasound reconstruction methods based on pixel or voxel nearest-neighbor approaches, offering better quality and competitive reconstruction time.
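
    A highly simplified sketch of the direct frame interpolation idea: intermediate frames are synthesized between two adjacent tracked B-mode frames by blending pixel intensities and frame positions. Orientation interpolation and masking of the original frames are omitted, and the blending scheme and frame count are assumptions rather than the authors' implementation.

```python
import numpy as np

def interpolate_frames(img_a, img_b, pos_a, pos_b, n_intermediate=3):
    """Create intermediate frames between two adjacent tracked B-mode frames.

    img_a, img_b: 2D intensity arrays; pos_a, pos_b: tracked frame origins (3-vectors).
    """
    frames = []
    for k in range(1, n_intermediate + 1):
        t = k / (n_intermediate + 1)
        img = (1.0 - t) * img_a + t * img_b          # blended pixel intensities
        pos = (1.0 - t) * pos_a + t * pos_b          # blended frame origin
        frames.append((img, pos))
    return frames
```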

  17. Variability in CT lung-nodule volumetry: Effects of dose reduction and reconstruction methods.

    PubMed

    Young, Stefano; Kim, Hyun J Grace; Ko, Moe Moe; Ko, War War; Flores, Carlos; McNitt-Gray, Michael F

    2015-05-01

    Measuring the size of nodules on chest CT is important for lung cancer staging and measuring therapy response. 3D volumetry has been proposed as a more robust alternative to 1D and 2D sizing methods. There have also been substantial advances in methods to reduce radiation dose in CT. The purpose of this work was to investigate the effect of dose reduction and reconstruction methods on variability in 3D lung-nodule volumetry. Reduced-dose CT scans were simulated by applying a noise-addition tool to the raw (sinogram) data from clinically indicated patient scans acquired on a multidetector-row CT scanner (Definition Flash, Siemens Healthcare). Scans were simulated at 25%, 10%, and 3% of the dose of their clinical protocol (CTDIvol of 20.9 mGy), corresponding to CTDIvol values of 5.2, 2.1, and 0.6 mGy. Simulated reduced-dose data were reconstructed with both conventional filtered backprojection (B45 kernel) and iterative reconstruction methods (SAFIRE: I44 strength 3 and I50 strength 3). Three lab technologist readers contoured "measurable" nodules in 33 patients under each of the different acquisition/reconstruction conditions in a blinded study design. Of the 33 measurable nodules, 17 were used to estimate repeatability with their clinical reference protocol, as well as interdose and inter-reconstruction-method reproducibilities. The authors compared the resulting distributions of proportional differences across dose and reconstruction methods by analyzing their means, standard deviations (SDs), and t-test and F-test results. The clinical-dose repeatability experiment yielded a mean proportional difference of 1.1% and SD of 5.5%. The interdose reproducibility experiments gave mean differences ranging from -5.6% to -1.7% and SDs ranging from 6.3% to 9.9%. The inter-reconstruction-method reproducibility experiments gave mean differences of 2.0% (I44 strength 3) and -0.3% (I50 strength 3), and SDs were identical at 7.3%. For the subset of repeatability cases, inter-reconstruction-method mean/SD pairs were (1.4%, 6.3%) and (-0.7%, 7.2%) for I44 strength 3 and I50 strength 3, respectively. Analysis of representative nodules confirmed that reader variability appeared unaffected by dose or reconstruction method. Lung-nodule volumetry was extremely robust to the radiation-dose level, down to the minimum scanner-supported dose settings. In addition, volumetry was robust to the reconstruction methods used in this study, which included both conventional filtered backprojection and iterative methods.

  18. Pancreaticoduodenectomy following gastrectomy reconstructed with Billroth II or Roux-en-Y method: Case series and literature review.

    PubMed

    Kawamoto, Yusuke; Ome, Yusuke; Kouda, Yusuke; Saga, Kennichi; Park, Taebum; Kawamoto, Kazuyuki

    2017-01-01

    The ideal reconstruction method for pancreaticoduodenectomy following a gastrectomy with Billroth II or Roux-en-Y reconstruction is unclear. We reviewed a series of seven pancreaticoduodenectomies performed after gastrectomy with the Billroth II or Roux-en-Y method. While preserving the existing gastrojejunostomy or esophagojejunostomy, pancreaticojejunostomy and hepaticojejunostomy were performed by the Roux-en-Y method using a new Roux limb in all cases. Four patients experienced postoperative complications, although the specific complications varied. A review of the literature revealed 13 cases of pancreaticoduodenectomy following gastrectomy with Billroth II or Roux-en-Y reconstruction. Three patients out of six (50%) in whom the past afferent limb was used for the reconstruction of the pancreaticojejunostomy and hepaticojejunostomy experienced afferent loop syndrome, while 14 previous and current patients in whom a new jejeunal limb was used did not experience this complication. The Roux-en-Y method, using the distal intestine of previous gastrojejunostomy or jejunojejunostomy as a new jejunal limb for pancreaticojejunostomy and hepaticojejunostomy, may be a better reconstruction method to avoid the complication of afferent loop syndrome after previous gastrectomy with Billroth II or Roux-en-Y reconstruction if the afferent limb is less than 40cm. Copyright © 2017 The Authors. Published by Elsevier Ltd.. All rights reserved.

  19. White Matter Tract Segmentation as Multiple Linear Assignment Problems

    PubMed Central

    Sharmin, Nusrat; Olivetti, Emanuele; Avesani, Paolo

    2018-01-01

    Diffusion magnetic resonance imaging (dMRI) allows reconstruction of the main pathways of axons within the white matter of the brain as a set of polylines, called streamlines. The set of streamlines of the whole brain is called the tractogram. Organizing tractograms into anatomically meaningful structures, called tracts, is known as the tract segmentation problem, with important applications to neurosurgical planning and tractometry. Automatic tract segmentation techniques can be unsupervised or supervised. A common criticism of unsupervised methods, like clustering, is that there is no guarantee of obtaining anatomically meaningful tracts. In this work, we focus on supervised tract segmentation, which is driven by prior knowledge from anatomical atlases or from examples, i.e., segmented tracts from different subjects. We present a supervised tract segmentation method that segments a given tract of interest in the tractogram of a new subject using multiple examples as prior information. Our proposed tract segmentation method is based on the idea of streamline correspondence, i.e., on finding corresponding streamlines across different tractograms. In the literature, streamline correspondence has been addressed with the nearest neighbor (NN) strategy. In contrast, here we formulate the problem of streamline correspondence as a linear assignment problem (LAP), which is a cornerstone of combinatorial optimization. With respect to the NN, the LAP introduces a constraint of one-to-one correspondence between streamlines, which forces the correspondences to follow the local anatomical differences between the example and the target tract, neglected by the NN. In the proposed solution, we combine the Jonker-Volgenant algorithm (LAPJV) for solving the LAP with an efficient way of computing the nearest neighbors of a streamline, which massively reduces the total amount of computations needed to segment a tract. Moreover, we propose a ranking strategy to merge correspondences coming from different examples. We validate the proposed method on tractograms generated from the human connectome project (HCP) dataset and compare the segmentations with the NN method and the ROI-based method. The results show that LAP-based segmentation is vastly more accurate than ROI-based segmentation and substantially more accurate than the NN strategy. We provide a Free/OpenSource implementation of the proposed method. PMID:29467600
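
    A small sketch of streamline correspondence posed as a linear assignment problem, using SciPy's linear_sum_assignment (a Jonker-Volgenant-style solver) in place of the authors' LAPJV implementation; streamlines are assumed to be resampled to a common number of points, and the flip-invariant cost below is one reasonable choice, not necessarily the one used in the paper.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def streamline_cost(s1, s2):
    """Orientation-invariant mean point-to-point distance between two streamlines.

    s1, s2: (n_points, 3) arrays resampled to the same number of points.
    """
    direct = np.mean(np.linalg.norm(s1 - s2, axis=1))
    flipped = np.mean(np.linalg.norm(s1 - s2[::-1], axis=1))
    return min(direct, flipped)

def match_streamlines(example_tract, target_candidates):
    """One-to-one correspondences from example-tract streamlines to target candidates."""
    cost = np.array([[streamline_cost(e, t) for t in target_candidates]
                     for e in example_tract])
    rows, cols = linear_sum_assignment(cost)       # solves the LAP
    return list(zip(rows, cols)), cost[rows, cols].sum()
```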

  20. White Matter Tract Segmentation as Multiple Linear Assignment Problems.

    PubMed

    Sharmin, Nusrat; Olivetti, Emanuele; Avesani, Paolo

    2017-01-01

    Diffusion magnetic resonance imaging (dMRI) allows reconstruction of the main pathways of axons within the white matter of the brain as a set of polylines, called streamlines. The set of streamlines of the whole brain is called the tractogram. Organizing tractograms into anatomically meaningful structures, called tracts, is known as the tract segmentation problem, with important applications to neurosurgical planning and tractometry. Automatic tract segmentation techniques can be unsupervised or supervised. A common criticism of unsupervised methods, like clustering, is that there is no guarantee of obtaining anatomically meaningful tracts. In this work, we focus on supervised tract segmentation, which is driven by prior knowledge from anatomical atlases or from examples, i.e., segmented tracts from different subjects. We present a supervised tract segmentation method that segments a given tract of interest in the tractogram of a new subject using multiple examples as prior information. Our proposed tract segmentation method is based on the idea of streamline correspondence, i.e., on finding corresponding streamlines across different tractograms. In the literature, streamline correspondence has been addressed with the nearest neighbor (NN) strategy. In contrast, here we formulate the problem of streamline correspondence as a linear assignment problem (LAP), which is a cornerstone of combinatorial optimization. With respect to the NN, the LAP introduces a constraint of one-to-one correspondence between streamlines, which forces the correspondences to follow the local anatomical differences between the example and the target tract, neglected by the NN. In the proposed solution, we combine the Jonker-Volgenant algorithm (LAPJV) for solving the LAP with an efficient way of computing the nearest neighbors of a streamline, which massively reduces the total amount of computations needed to segment a tract. Moreover, we propose a ranking strategy to merge correspondences coming from different examples. We validate the proposed method on tractograms generated from the human connectome project (HCP) dataset and compare the segmentations with the NN method and the ROI-based method. The results show that LAP-based segmentation is vastly more accurate than ROI-based segmentation and substantially more accurate than the NN strategy. We provide a Free/OpenSource implementation of the proposed method.

  1. Using learned under-sampling pattern for increasing speed of cardiac cine MRI based on compressive sensing principles

    NASA Astrophysics Data System (ADS)

    Zamani, Pooria; Kayvanrad, Mohammad; Soltanian-Zadeh, Hamid

    2012-12-01

    This article presents a compressive sensing approach for reducing data acquisition time in cardiac cine magnetic resonance imaging (MRI). In cardiac cine MRI, several images are acquired throughout the cardiac cycle, each of which is reconstructed from the raw data acquired in the Fourier transform domain, traditionally called k-space. In the proposed approach, a majority, e.g., 62.5%, of the k-space lines (trajectories) are acquired at the odd time points and a minority, e.g., 37.5%, of the k-space lines are acquired at the even time points of the cardiac cycle. Optimal data acquisition at the even time points is learned from the data acquired at the odd time points. To this end, statistical features of the k-space data at the odd time points are clustered by fuzzy c-means and the results are considered as the states of Markov chains. The resulting data is used to train hidden Markov models and find their transition matrices. Then, the trajectories corresponding to transition matrices far from an identity matrix are selected for data acquisition. At the end, an iterative thresholding algorithm is used to reconstruct the images from the under-sampled k-space datasets. The proposed approaches for selecting the k-space trajectories and reconstructing the images generate more accurate images compared to alternative methods. The proposed under-sampling approach achieves an acceleration factor of 2 for cardiac cine MRI.
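
    A minimal sketch of the final reconstruction step, i.e., an iterative soft-thresholding loop that alternates a sparsity-promoting shrinkage with re-insertion of the acquired k-space lines. For brevity the sparsifying transform is the identity (image-domain thresholding) rather than a wavelet transform, and the threshold and iteration count are assumptions, not the article's settings.

```python
import numpy as np

def ist_reconstruct(kspace, mask, n_iters=50, thresh=0.01):
    """Iterative soft-thresholding reconstruction from under-sampled k-space.

    kspace: 2D measured k-space (zeros at unsampled locations);
    mask: boolean sampling pattern of the same shape.
    """
    img = np.fft.ifft2(kspace)                      # zero-filled initial estimate
    for _ in range(n_iters):
        # Soft-threshold the complex image (sparsity prior on the identity transform).
        mag = np.abs(img)
        phase = np.where(mag > 0, img / np.maximum(mag, 1e-12), 0)
        img = phase * np.maximum(mag - thresh, 0)
        # Re-impose consistency with the acquired k-space samples.
        k_est = np.fft.fft2(img)
        k_est[mask] = kspace[mask]
        img = np.fft.ifft2(k_est)
    return np.abs(img)
```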

  2. Lattice Boltzmann simulation of the gas-solid adsorption process in reconstructed random porous media.

    PubMed

    Zhou, L; Qu, Z G; Ding, T; Miao, J Y

    2016-04-01

    The gas-solid adsorption process in reconstructed random porous media is numerically studied with the lattice Boltzmann (LB) method at the pore scale with consideration of interparticle, interfacial, and intraparticle mass transfer performances. Adsorbent structures are reconstructed in two dimensions by employing the quartet structure generation set approach. To implement boundary conditions accurately, all the porous interfacial nodes are recognized and classified into 14 types using a proposed universal program called the boundary recognition and classification program. The multiple-relaxation-time LB model and single-relaxation-time LB model are adopted to simulate flow and mass transport, respectively. The interparticle, interfacial, and intraparticle mass transfer capacities are evaluated with the permeability factor and interparticle transfer coefficient, Langmuir adsorption kinetics, and the solid diffusion model, respectively. Adsorption processes are performed in two groups of adsorbent media with different porosities and particle sizes. External and internal mass transfer resistances govern the adsorption system. A large porosity leads to an early time for adsorption equilibrium because of the controlling factor of external resistance. External and internal resistances are dominant at small and large particle sizes, respectively. Particle size, under which the total resistance is minimum, ranges from 3 to 7 μm with the preset parameters. Pore-scale simulation clearly explains the effect of both external and internal mass transfer resistances. The present paper provides both theoretical and practical guidance for the design and optimization of adsorption systems.

  3. Lattice Boltzmann simulation of the gas-solid adsorption process in reconstructed random porous media

    NASA Astrophysics Data System (ADS)

    Zhou, L.; Qu, Z. G.; Ding, T.; Miao, J. Y.

    2016-04-01

    The gas-solid adsorption process in reconstructed random porous media is numerically studied with the lattice Boltzmann (LB) method at the pore scale with consideration of interparticle, interfacial, and intraparticle mass transfer performances. Adsorbent structures are reconstructed in two dimensions by employing the quartet structure generation set approach. To implement boundary conditions accurately, all the porous interfacial nodes are recognized and classified into 14 types using a proposed universal program called the boundary recognition and classification program. The multiple-relaxation-time LB model and single-relaxation-time LB model are adopted to simulate flow and mass transport, respectively. The interparticle, interfacial, and intraparticle mass transfer capacities are evaluated with the permeability factor and interparticle transfer coefficient, Langmuir adsorption kinetics, and the solid diffusion model, respectively. Adsorption processes are performed in two groups of adsorbent media with different porosities and particle sizes. External and internal mass transfer resistances govern the adsorption system. A large porosity leads to an early time for adsorption equilibrium because of the controlling factor of external resistance. External and internal resistances are dominant at small and large particle sizes, respectively. Particle size, under which the total resistance is minimum, ranges from 3 to 7 μm with the preset parameters. Pore-scale simulation clearly explains the effect of both external and internal mass transfer resistances. The present paper provides both theoretical and practical guidance for the design and optimization of adsorption systems.

  4. Generation of dense statistical connectomes from sparse morphological data

    PubMed Central

    Egger, Robert; Dercksen, Vincent J.; Udvary, Daniel; Hege, Hans-Christian; Oberlaender, Marcel

    2014-01-01

    Sensory-evoked signal flow, at cellular and network levels, is primarily determined by the synaptic wiring of the underlying neuronal circuitry. Measurements of synaptic innervation, connection probabilities and subcellular organization of synaptic inputs are thus among the most active fields of research in contemporary neuroscience. Methods to measure these quantities range from electrophysiological recordings over reconstructions of dendrite-axon overlap at light-microscopic levels to dense circuit reconstructions of small volumes at electron-microscopic resolution. However, quantitative and complete measurements at subcellular resolution and mesoscopic scales to obtain all local and long-range synaptic in/outputs for any neuron within an entire brain region are beyond present methodological limits. Here, we present a novel concept, implemented within an interactive software environment called NeuroNet, which allows (i) integration of sparsely sampled (sub)cellular morphological data into an accurate anatomical reference frame of the brain region(s) of interest, (ii) up-scaling to generate an average dense model of the neuronal circuitry within the respective brain region(s) and (iii) statistical measurements of synaptic innervation between all neurons within the model. We illustrate our approach by generating a dense average model of the entire rat vibrissal cortex, providing the required anatomical data, and illustrate how to measure synaptic innervation statistically. Comparing our results with data from paired recordings in vitro and in vivo, as well as with reconstructions of synaptic contact sites at light- and electron-microscopic levels, we find that our in silico measurements are in line with previous results. PMID:25426033

  5. High-resolution reconstruction for terahertz imaging.

    PubMed

    Xu, Li-Min; Fan, Wen-Hui; Liu, Jia

    2014-11-20

    We present a high-resolution (HR) reconstruction model and algorithms for terahertz imaging, taking advantage of super-resolution methodology and algorithms. The algorithms used include the projection onto convex sets approach, the iterative backprojection approach, Lucy-Richardson iteration, and 2D wavelet decomposition reconstruction. Using the first two HR reconstruction methods, we successfully obtain HR terahertz images with improved definition and lower noise from four low-resolution (LR) 22×24 terahertz images taken from our homemade THz-TDS system under the same experimental conditions with a 1.0 mm pixel size. Using the last two HR reconstruction methods, we transform one relatively LR terahertz image into an HR terahertz image with decreased noise. This indicates the potential application of HR reconstruction methods in terahertz imaging with pulsed and continuous wave terahertz sources.
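
    One of the algorithms named above, Lucy-Richardson iteration, is a standard deconvolution scheme. A minimal Python sketch is given below for illustration only; it assumes a known, normalized point spread function (psf) and non-negative images, and is not the authors' exact pipeline.

        import numpy as np
        from scipy.signal import fftconvolve

        def richardson_lucy(blurred, psf, n_iter=30):
            # Start from a flat non-negative estimate; psf is assumed known and normalized.
            estimate = np.full_like(blurred, 0.5, dtype=float)
            psf_mirror = psf[::-1, ::-1]
            for _ in range(n_iter):
                denom = fftconvolve(estimate, psf, mode='same')
                denom = np.where(denom == 0, np.finfo(float).eps, denom)
                ratio = blurred / denom
                estimate = estimate * fftconvolve(ratio, psf_mirror, mode='same')
            return estimate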

  6. Computed myography: three-dimensional reconstruction of motor functions from surface EMG data

    NASA Astrophysics Data System (ADS)

    van den Doel, Kees; Ascher, Uri M.; Pai, Dinesh K.

    2008-12-01

    We describe a methodology called computed myography to qualitatively and quantitatively determine the activation level of individual muscles by voltage measurements from an array of voltage sensors on the skin surface. A finite element model for electrostatics simulation is constructed from morphometric data. For the inverse problem, we utilize a generalized Tikhonov regularization. This imposes smoothness on the reconstructed sources inside the muscles and suppresses sources outside the muscles using a penalty term. Results from experiments with simulated and human data are presented for activation reconstructions of three muscles in the upper arm (biceps brachii, brachialis and triceps). This approach potentially offers a new clinical tool to sensitively assess muscle function in patients suffering from neurological disorders (e.g., spinal cord injury), and could more accurately guide advances in the evaluation of specific rehabilitation training regimens.

  7. A heuristic statistical stopping rule for iterative reconstruction in emission tomography.

    PubMed

    Ben Bouallègue, F; Crouzet, J F; Mariano-Goulart, D

    2013-01-01

    We propose a statistical stopping criterion for iterative reconstruction in emission tomography based on a heuristic statistical description of the reconstruction process. The method was assessed for MLEM reconstruction. Based on Monte Carlo numerical simulations and using a perfectly modeled system matrix, our method was compared with classical iterative reconstruction followed by low-pass filtering in terms of Euclidean distance to the exact object, noise, and resolution. The stopping criterion was then evaluated with realistic PET data of a Hoffman brain phantom produced using the GATE platform for different count levels. The numerical experiments showed that compared with the classical method, our technique yielded a significant improvement of the noise-resolution tradeoff for a wide range of counting statistics compatible with routine clinical settings. When working with realistic data, the stopping rule allowed a qualitatively and quantitatively efficient determination of the optimal image. Our method appears to give a reliable estimation of the optimal stopping point for iterative reconstruction. It should thus be of practical interest as it produces images with similar or better quality than classical post-filtered iterative reconstruction with a controlled computation time.
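
    For context, the underlying MLEM iteration that such a stopping rule would monitor can be sketched as follows. This is a generic illustration with an explicit system matrix A and measured counts y; the paper's specific heuristic stopping statistic is not reproduced here.

        import numpy as np

        def mlem(A, y, n_iter=50):
            # A: system matrix (n_bins x n_voxels), y: measured sinogram counts.
            x = np.ones(A.shape[1])
            sens = A.sum(axis=0)                       # sensitivity image, sum_i a_ij
            sens = np.where(sens == 0, 1.0, sens)
            history = []
            for _ in range(n_iter):
                proj = A @ x
                proj = np.where(proj == 0, np.finfo(float).eps, proj)
                x = x * (A.T @ (y / proj)) / sens
                history.append(x.copy())               # a stopping rule would inspect these iterates
            return x, history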

  8. Integrative Treatment of Personality Disorder. Part I: Psychotherapy.

    PubMed

    Jovanovic, Mirjana Divac; Svrakic, Dragan

    2017-03-01

    In this paper, we outline the concept of integrative therapy of borderline personality, also referred to as fragmented personality, which we consider to be the core psychopathology underlying all clinical subtypes of personality disorder. Hence, the terms borderline personality, borderline disorder, fragmented personality, and personality disorder are used interchangeably, as synonyms. Our integrative approach combines pharmacotherapy and psychotherapy, each specifically tailored to accomplish a positive feedback modulation of their respective effects. We argue that pharmacotherapy and psychotherapy of personality disorder complement each other. Pharmacological control of disruptive affects clears the stage, in some cases builds the stage, for the psychotherapeutic process to take place. In turn, psychotherapy promotes integration of personality fragments into more cohesive structures of self and identity, ultimately establishing self-regulation of mood and anxiety. We introduce our original method of psychotherapy, called reconstructive interpersonal therapy (RIT). The RIT integrates humanistic-existential and psychodynamic paradigms, and is thereby designed to accomplish a deep reconstruction of core psychopathology within the setting of high structure. We review and comment on the current literature on the strategies, goals, therapy process, priorities, and phases of psychotherapy of borderline disorders, and describe in detail the fundamental principles of RIT.

  9. In-situ observations of flux ropes formed in association with a pair of spiral nulls in magnetotail plasmas

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Guo, Ruilong; Xie, Lun; He, Jiansen

    Signatures of secondary islands are frequently observed in the magnetic reconnection regions of magnetotail plasmas. In this paper, magnetic structures with the secondary-island signatures observed by Cluster are reassembled by a fitting-reconstruction method. The results show three-dimensionally that a secondary island event can manifest the flux rope formed with an A_s-type null and a B_s-type null paired via their spines. We call this A_s-spine-B_s-like configuration the helically wrapped spine model. The reconstructed field lines wrap around the spine to form the flux rope, and an O-type topology is therefore seen on the plane perpendicular to the spine. Magnetized electrons are found to rotate on and cross the fan surface, suggesting that both the torsional-spine and the spine-fan reconnection take place in the configuration. Furthermore, detailed analysis implies that the spiral nulls and flux ropes were locally generated near the spacecraft in the reconnection outflow region, indicating that secondary reconnection may occur in the exhaust away from the primary reconnection site.

  10. X-PROP: a fast and robust diffusion-weighted propeller technique.

    PubMed

    Li, Zhiqiang; Pipe, James G; Lee, Chu-Yu; Debbins, Josef P; Karis, John P; Huo, Donglai

    2011-08-01

    Diffusion-weighted imaging (DWI) has shown great benefits in clinical MR exams. However, current DWI techniques suffer from sensitivity to distortion, long scan times, or a combination of the two. Diffusion-weighted echo-planar imaging (EPI) is fast but suffers from severe geometric distortion. Periodically rotated overlapping parallel lines with enhanced reconstruction diffusion-weighted imaging (PROPELLER DWI) is free of geometric distortion, but the scan time is usually long and imposes a high specific absorption rate (SAR), especially at high fields. TurboPROP was proposed to accelerate the scan by combining signal from gradient echoes, but the off-resonance artifacts from gradient echoes can still degrade the image quality. In this study, a new method called X-PROP is presented. Similar to TurboPROP, it uses gradient echoes to reduce the scan time. By separating the gradient and spin echoes into individual blades and removing the off-resonance phase, the off-resonance artifacts in X-PROP are minimized. Special reconstruction processes are applied to these blades to correct for the motion artifacts. In vivo results show its advantages over EPI, PROPELLER DWI, and TurboPROP techniques. Copyright © 2011 Wiley-Liss, Inc.

  11. Magnetic resonance spectroscopic imaging at superresolution: Overview and perspectives.

    PubMed

    Kasten, Jeffrey; Klauser, Antoine; Lazeyras, François; Van De Ville, Dimitri

    2016-02-01

    The notion of non-invasive, high-resolution spatial mapping of metabolite concentrations has long enticed the medical community. While magnetic resonance spectroscopic imaging (MRSI) is capable of achieving the requisite spatio-spectral localization, it has traditionally been encumbered by significant resolution constraints that have thus far undermined its clinical utility. To surpass these obstacles, research efforts have primarily focused on hardware enhancements or the development of accelerated acquisition strategies to improve the experimental sensitivity per unit time. Concomitantly, a number of innovative reconstruction techniques have emerged as alternatives to the standard inverse discrete Fourier transform (DFT). While perhaps lesser known, these latter methods strive to effect commensurate resolution gains by exploiting known properties of the underlying MRSI signal in concert with advanced image and signal processing techniques. This review article aims to aggregate and provide an overview of the past few decades of so-called "superresolution" MRSI reconstruction methodologies, and to introduce readers to current state-of-the-art approaches. A number of perspectives are then offered as to the future of high-resolution MRSI, with a particular focus on translation into clinical settings. Copyright © 2015 Elsevier Inc. All rights reserved.

  12. Fast and robust reconstruction for fluorescence molecular tomography via a sparsity adaptive subspace pursuit method.

    PubMed

    Ye, Jinzuo; Chi, Chongwei; Xue, Zhenwen; Wu, Ping; An, Yu; Xu, Han; Zhang, Shuang; Tian, Jie

    2014-02-01

    Fluorescence molecular tomography (FMT), as a promising imaging modality, can three-dimensionally locate the specific tumor position in small animals. However, effective and robust reconstruction of the fluorescent probe distribution in animals remains challenging. In this paper, we present a novel method based on sparsity adaptive subspace pursuit (SASP) for FMT reconstruction. Some innovative strategies, including subspace projection, the bottom-up sparsity adaptive approach, and a backtracking technique, are associated with the SASP method, which guarantees the accuracy, efficiency, and robustness of FMT reconstruction. Three numerical experiments based on a mouse-mimicking heterogeneous phantom have been performed to validate the feasibility of the SASP method. The results show that the proposed SASP method can achieve satisfactory source localization with a bias of less than 1 mm; the method is much faster than mainstream reconstruction methods; and this approach is robust even under quite ill-posed conditions. Furthermore, we have applied this method to an in vivo mouse model, and the results demonstrate the feasibility of the practical FMT application with the SASP method.

  13. Shading correction assisted iterative cone-beam CT reconstruction

    NASA Astrophysics Data System (ADS)

    Yang, Chunlin; Wu, Pengwei; Gong, Shutao; Wang, Jing; Lyu, Qihui; Tang, Xiangyang; Niu, Tianye

    2017-11-01

    Recent advances in total variation (TV) technology enable accurate CT image reconstruction from highly under-sampled and noisy projection data. The standard iterative reconstruction algorithms, which work well in conventional CT imaging, fail to perform as expected in cone beam CT (CBCT) applications, wherein the non-ideal physics issues, including scatter and beam hardening, are more severe. These physics issues result in large areas of shading artifacts and cause deterioration of the piecewise constant property assumed in reconstructed images. To overcome this obstacle, we incorporate a shading correction scheme into low-dose CBCT reconstruction and propose a clinically acceptable and stable three-dimensional iterative reconstruction method that is referred to as the shading correction assisted iterative reconstruction. In the proposed method, we modify the TV regularization term by adding a shading compensation image to the reconstructed image to compensate for the shading artifacts while leaving the data fidelity term intact. This compensation image is generated empirically, using image segmentation and low-pass filtering, and updated in the iterative process whenever necessary. When the compensation image is determined, the objective function is minimized using the fast iterative shrinkage-thresholding algorithm accelerated on a graphics processing unit. The proposed method is evaluated using CBCT projection data of the Catphan© 600 phantom and two pelvis patients. Compared with iterative reconstruction without shading correction, the proposed method reduces the overall CT number error from around 200 HU to around 25 HU and increases the spatial uniformity by 20 percent, given the same number of sparsely sampled projections. A clinically acceptable and stable iterative reconstruction algorithm for CBCT is proposed in this paper. Differing from existing algorithms, this algorithm incorporates a shading correction scheme into low-dose CBCT reconstruction and achieves a more stable optimization path and a more clinically acceptable reconstructed image. The proposed method does not rely on prior information and is thus practically attractive for low-dose CBCT imaging applications in the clinic.

  14. Training Community Modeling and Simulation Business Plan, 2007 Edition. Volume 2: Data Call Responses and Analysis

    DTIC Science & Technology

    2009-02-01

    services; and • Other reconstruction assistance. 17. Train Forces on Military Assistance to Civil Authorities (MACA): Develop environments...for training in the planning and execution of MACA in support of disaster relief (natural and man-made), military assistance for civil disturbances

  15. Re-Imagining the Land, North Sutherland, Scotland

    ERIC Educational Resources Information Center

    Mackenzie, A. F. D.

    2004-01-01

    This paper focuses on contemporary re-imaginings of the land in North Sutherland that counter global, modernist discourse. One narrative concerns the reinvention of the past; the other concerns the reconstruction of the present. Through both, people create what Edward Said (Culture and Imperialism. Alfred A. Knopf, New York, 1994) calls a…

  16. Camera calibration correction in shape from inconsistent silhouette

    USDA-ARS?s Scientific Manuscript database

    The use of shape from silhouette for reconstruction tasks is plagued by two types of real-world errors: camera calibration error and silhouette segmentation error. When either error is present, we call the problem the Shape from Inconsistent Silhouette (SfIS) problem. In this paper, we show how sm...

  17. Healing and Building Soil on Prairie Birthday Farm

    USDA-ARS?s Scientific Manuscript database

    Native tallgrass prairie was restored as an integral part of a small, food-producing farm called Prairie Birthday Farm in Clay County, Missouri. Reconstruction of native prairie was essential to the farm’s goal of producing high quality food for the family, area residents and restaurant chefs. Impro...

  18. School District Triggers for Reconstructing Professional Knowledge

    ERIC Educational Resources Information Center

    Hannay, Lynne M.; Earl, Lorna

    2012-01-01

    In a recent publication, Senge ("All systems go: the change imperative for whole system reform." Corwin Press, Thousand Oaks, 2010, x) stated "at no time in history has there been a more powerful need for a new vision of the purpose of education." Increasingly citizens, academics and practitioners are calling for radical…

  19. Languages of Domination and Rebellion in Highland Peru.

    ERIC Educational Resources Information Center

    Isbell, Billie Jean

    1985-01-01

    Andean society is polarized into two key segments: the national Spanish-speaking culture and the native Quechua-speaking culture. Each group ignores or reconstructs history in order to perpetuate its deep-seated prejudices. A group called "The Shining Path" is trying to transform both cultures through violent revolution. (RM)

  20. 40 CFR 52.870 - Identification of plan.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 66101; at the EPA, Air and Radiation Docket and Information Center, Room Number 3334, EPA West Building... Docket at (202) 566-1742. For information on the availability of this material at NARA, call (202) 741... definition of the terms “building, structure, facility, or installation”; “installation”; and “reconstruction...

  1. Composition: What's Love Got To Do with It?

    ERIC Educational Resources Information Center

    Ballif, Michelle

    A recent trend in composition studies has been a call for the "feminization" of composition pedagogy. Collaborative learning pedagogues have sought to reconstruct the classroom as a site of social cooperation, connectedness, and nurturance and have re-envisioned composition as an act of understanding rather than of agonistics.…

  2. Simulations in the Analysis of Experimental Data Measured by BM@N Drift Chambers

    NASA Astrophysics Data System (ADS)

    Fedorišin, Ján

    2018-02-01

    The drift chambers (DCHs) are an important part of the tracking system of the BM@N experiment designed to study the production of baryonic matter at Nuclotron energies. The method of particle hit and track reconstruction in the drift chambers has already been proposed and tested on the BM@N deuteron beam data. In this study the DCHs are first locally and globally aligned, and subsequently the consistency of the track reconstruction chain is tested by two methods. The first one is based on the backward extrapolation of the DCH-reconstructed deuteron beam to a position where its deflection in the BM@N magnetic field begins. The second method reconstructs the deuteron beam momentum through its deflection angle. Both methods confirm the correctness of the track reconstruction algorithm.
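
    The second check rests on the standard relation between a charged particle's momentum and its bending in a magnetic field. A minimal, generic sketch is given below; the function name and parameters are hypothetical and the small-angle relation shown is a textbook spectrometer formula, not the BM@N-specific algorithm.

        def momentum_from_deflection(b_field_tesla, path_length_m, deflection_rad, charge=1.0):
            # Small-angle spectrometer relation: p [GeV/c] ~ 0.3 * q * B * L / theta,
            # with q in units of the elementary charge, B in tesla, L in metres.
            return 0.3 * charge * b_field_tesla * path_length_m / deflection_rad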

  3. Car-to-pedestrian collision reconstruction with injury as an evaluation index.

    PubMed

    Weng, Yiliu; Jin, Xianlong; Zhao, Zhijie; Zhang, Xiaoyun

    2010-07-01

    Reconstruction of accidents is currently considered a useful means in the analysis of accidents. Using multi-body dynamics and numerical methods, and adopting vehicle and pedestrian models, the crash scenario can often be simulated. When reconstructing the collisions, questions often arise regarding the criteria for the evaluation of simulation results. This paper proposes a reconstruction method for car-to-pedestrian collisions based on injuries of the pedestrians. In this method, pedestrian injury becomes a critical index in judging the correctness of the reconstruction result and guiding the simulation process. Application of this method to a real accident case is also presented in this paper. The study showed good agreement between injuries obtained by numerical simulation and those obtained by forensic identification. Copyright 2010 Elsevier Ltd. All rights reserved.

  4. Anatomically-Aided PET Reconstruction Using the Kernel Method

    PubMed Central

    Hutchcroft, Will; Wang, Guobao; Chen, Kevin T.; Catana, Ciprian; Qi, Jinyi

    2016-01-01

    This paper extends the kernel method that was proposed previously for dynamic PET reconstruction, to incorporate anatomical side information into the PET reconstruction model. In contrast to existing methods that incorporate anatomical information using a penalized likelihood framework, the proposed method incorporates this information in the simpler maximum likelihood (ML) formulation and is amenable to ordered subsets. The new method also does not require any segmentation of the anatomical image to obtain edge information. We compare the kernel method with the Bowsher method for anatomically-aided PET image reconstruction through a simulated data set. Computer simulations demonstrate that the kernel method offers advantages over the Bowsher method in region of interest (ROI) quantification. Additionally the kernel method is applied to a 3D patient data set. The kernel method results in reduced noise at a matched contrast level compared with the conventional ML expectation maximization (EM) algorithm. PMID:27541810
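
    The kernel method parameterizes the image as x = Kα, where the kernel matrix K is built from (here anatomical) prior features, and runs EM on the coefficients α. A minimal sketch of that kernelized EM update is shown below; it assumes explicit dense matrices A (system matrix) and K, which in practice are large and sparse, and it omits ordered subsets.

        import numpy as np

        def kernel_em(A, K, y, n_iter=50):
            # Kernelized ML-EM: the image is x = K @ alpha; EM updates act on alpha.
            alpha = np.ones(K.shape[1])
            sens = K.T @ (A.T @ np.ones(A.shape[0]))    # (A K)^T 1
            sens = np.where(sens == 0, 1.0, sens)
            for _ in range(n_iter):
                proj = A @ (K @ alpha)
                proj = np.where(proj == 0, np.finfo(float).eps, proj)
                alpha = alpha * (K.T @ (A.T @ (y / proj))) / sens
            return K @ alpha                             # reconstructed image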

  5. Anatomically-aided PET reconstruction using the kernel method.

    PubMed

    Hutchcroft, Will; Wang, Guobao; Chen, Kevin T; Catana, Ciprian; Qi, Jinyi

    2016-09-21

    This paper extends the kernel method that was proposed previously for dynamic PET reconstruction, to incorporate anatomical side information into the PET reconstruction model. In contrast to existing methods that incorporate anatomical information using a penalized likelihood framework, the proposed method incorporates this information in the simpler maximum likelihood (ML) formulation and is amenable to ordered subsets. The new method also does not require any segmentation of the anatomical image to obtain edge information. We compare the kernel method with the Bowsher method for anatomically-aided PET image reconstruction through a simulated data set. Computer simulations demonstrate that the kernel method offers advantages over the Bowsher method in region of interest quantification. Additionally the kernel method is applied to a 3D patient data set. The kernel method results in reduced noise at a matched contrast level compared with the conventional ML expectation maximization algorithm.

  6. Anatomically-aided PET reconstruction using the kernel method

    NASA Astrophysics Data System (ADS)

    Hutchcroft, Will; Wang, Guobao; Chen, Kevin T.; Catana, Ciprian; Qi, Jinyi

    2016-09-01

    This paper extends the kernel method that was proposed previously for dynamic PET reconstruction, to incorporate anatomical side information into the PET reconstruction model. In contrast to existing methods that incorporate anatomical information using a penalized likelihood framework, the proposed method incorporates this information in the simpler maximum likelihood (ML) formulation and is amenable to ordered subsets. The new method also does not require any segmentation of the anatomical image to obtain edge information. We compare the kernel method with the Bowsher method for anatomically-aided PET image reconstruction through a simulated data set. Computer simulations demonstrate that the kernel method offers advantages over the Bowsher method in region of interest quantification. Additionally the kernel method is applied to a 3D patient data set. The kernel method results in reduced noise at a matched contrast level compared with the conventional ML expectation maximization algorithm.

  7. Search for dark matter produced in association with a Higgs boson decaying to two bottom quarks at ATLAS

    NASA Astrophysics Data System (ADS)

    Cheng, Yangyang

    This thesis presents a search for dark matter production in association with a Higgs boson decaying to a pair of bottom quarks, using data from 20.3 fb⁻¹ of proton-proton collisions at a center-of-mass energy of 8 TeV collected by the ATLAS detector at the LHC. The dark matter particles are assumed to be Weakly Interacting Massive Particles, and can be produced in pairs at collider experiments. Events with large missing transverse energy are selected when produced in association with high momentum jets, of which at least two are identified as jets containing b-quarks consistent with those from a Higgs boson decay. To maintain good detector acceptance and selection efficiency of the signal across a wide kinematic range, two methods of Higgs boson reconstruction are used. The Higgs boson is reconstructed either as a pair of small-radius jets both containing b-quarks, called the "resolved" analysis, or as a single large-radius jet with substructure consistent with a high-momentum bb̄ system, called the "boosted" analysis. The resolved analysis is the focus of this thesis. The observed data are found to be consistent with the expected Standard Model backgrounds. The result from the resolved analysis is interpreted using a simplified model with a Z' gauge boson decaying into different Higgs bosons predicted in a two-Higgs-doublet model, of which the heavy pseudoscalar Higgs decays into a pair of dark matter particles. Exclusion limits are set in regions of parameter space for this model. Model-independent upper limits are also placed on the visible cross-sections for events with a Higgs boson decaying into bb̄ and large missing transverse momentum with thresholds ranging from 150 GeV to 400 GeV.

  8. A robust real-time surface reconstruction method on point clouds captured from a 3D surface photogrammetry system.

    PubMed

    Liu, Wenyang; Cheung, Yam; Sawant, Amit; Ruan, Dan

    2016-05-01

    To develop a robust and real-time surface reconstruction method on point clouds captured from a 3D surface photogrammetry system. The authors have developed a robust and fast surface reconstruction method on point clouds acquired by the photogrammetry system, without explicitly solving the partial differential equation required by a typical variational approach. Taking advantage of the overcomplete nature of the acquired point clouds, their method solves and propagates a sparse linear relationship from the point cloud manifold to the surface manifold, assuming both manifolds share similar local geometry. With relatively consistent point cloud acquisitions, the authors propose a sparse regression (SR) model to directly approximate the target point cloud as a sparse linear combination from the training set, assuming that the point correspondences built by the iterative closest point (ICP) algorithm are reasonably accurate and have residual errors following a Gaussian distribution. To accommodate changing noise levels and/or the presence of inconsistent occlusions during the acquisition, the authors further propose a modified sparse regression (MSR) model to model the potentially large and sparse error built by ICP with a Laplacian prior. The authors evaluated the proposed method on both clinical point clouds acquired under consistent acquisition conditions and on point clouds with inconsistent occlusions. The authors quantitatively evaluated the reconstruction performance with respect to root-mean-square error, by comparing the reconstruction results against those from the variational method. On clinical point clouds, both the SR and MSR models achieved sub-millimeter reconstruction accuracy and reduced the reconstruction time by two orders of magnitude to a subsecond reconstruction time. On point clouds with inconsistent occlusions, the MSR model demonstrated its advantage in achieving consistent and robust performance despite the introduced occlusions. The authors have developed a fast and robust surface reconstruction method on point clouds captured from a 3D surface photogrammetry system, with demonstrated sub-millimeter reconstruction accuracy and subsecond reconstruction time. It is suitable for real-time motion tracking in radiotherapy, with clear surface structures for better quantification.
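
    The core idea of the SR model, approximating a newly acquired cloud as a sparse linear combination of training clouds, can be illustrated with an l1-penalized fit. The sketch below uses scikit-learn's Lasso as a stand-in solver and assumes pre-aligned clouds with point correspondences; it shows only the sparse-combination step, not the propagation of the weights to the surface manifold or the Laplacian-prior MSR variant.

        import numpy as np
        from sklearn.linear_model import Lasso

        def sparse_regression_reconstruct(training_clouds, target_cloud, alpha=1e-3):
            # training_clouds: (n_train, n_points, 3) pre-aligned point clouds with correspondences;
            # target_cloud: (n_points, 3) newly acquired cloud to be approximated.
            X = training_clouds.reshape(len(training_clouds), -1).T   # columns = training samples
            y = target_cloud.ravel()
            model = Lasso(alpha=alpha, fit_intercept=False, max_iter=10000)
            model.fit(X, y)
            weights = model.coef_                                      # sparse combination weights
            return (X @ weights).reshape(target_cloud.shape), weights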

  9. TU-E-217BCD-09: The Feasibility of the Dual-Dictionary Method for Breast Computed Tomography Based on Photon-Counting Detectors.

    PubMed

    Zhao, B; Ding, H; Lu, Y; Wang, G; Zhao, J; Molloi, S

    2012-06-01

    To investigate the feasibility of an iterative reconstruction (IR) method utilizing the algebraic reconstruction technique coupled with dual-dictionary learning for dedicated breast computed tomography (CT) based on a photon-counting detector. Postmortem breast samples were scanned in an experimental fan beam CT system based on a Cadmium-Zinc-Telluride (CZT) photon-counting detector. Images were reconstructed from various numbers of projections with both the IR and Filtered-Back-Projection (FBP) methods. The Contrast-to-Noise Ratio (CNR) between the glandular and adipose tissue of the postmortem breast samples was calculated to evaluate the quality of images reconstructed with IR and FBP. In addition to CNR, the spatial resolution was also used as a metric to evaluate the quality of images reconstructed from the two methods. This was further studied with a high-resolution phantom consisting of a 14 cm diameter, 10 cm length polymethylmethacrylate (PMMA) cylinder. A 5 cm diameter coaxial volume-of-interest insert containing fine aluminum wires of various diameters was used to determine spatial resolution. The spatial resolution and CNR were better when identical sinograms were reconstructed with IR as compared to FBP. In comparison with FBP reconstruction, a similar CNR was achieved using the IR method with up to a factor of 5 fewer projections. The results of this study suggest that the IR method can significantly reduce the required number of projections for a CT reconstruction compared to the FBP method while achieving an equivalent CNR. Therefore, the scanning time of a CZT-based CT system using the IR method can potentially be reduced. © 2012 American Association of Physicists in Medicine.

  10. A three-step reconstruction method for fluorescence molecular tomography based on compressive sensing

    NASA Astrophysics Data System (ADS)

    Zhu, Yansong; Jha, Abhinav K.; Dreyer, Jakob K.; Le, Hanh N. D.; Kang, Jin U.; Roland, Per E.; Wong, Dean F.; Rahmim, Arman

    2017-02-01

    Fluorescence molecular tomography (FMT) is a promising tool for real time in vivo quantification of neurotransmission (NT) as we pursue in our BRAIN initiative effort. However, the acquired image data are noisy and the reconstruction problem is ill-posed. Further, while spatial sparsity of the NT effects could be exploited, traditional compressive-sensing methods cannot be directly applied as the system matrix in FMT is highly coherent. To overcome these issues, we propose and assess a three-step reconstruction method. First, truncated singular value decomposition is applied to the data to reduce matrix coherence. The resultant image data are input to a homotopy-based reconstruction strategy that exploits sparsity via l1 regularization. The reconstructed image is then input to a maximum-likelihood expectation maximization (MLEM) algorithm that retains the sparseness of the input estimate and improves upon the quantitation by accurate Poisson noise modeling. The proposed reconstruction method was evaluated in a three-dimensional simulated setup with fluorescent sources in a cuboidal scattering medium with optical properties simulating human brain cortex (reduced scattering coefficient: 9.2 cm⁻¹, absorption coefficient: 0.1 cm⁻¹) and tomographic measurements made using pixelated detectors. In different experiments, fluorescent sources of varying size and intensity were simulated. The proposed reconstruction method provided accurate estimates of the fluorescent source intensity, with a 20% lower root mean square error on average compared to the pure-homotopy method for all considered source intensities and sizes. Further, compared with a conventional l2-regularized algorithm, the proposed method overall reconstructed a substantially more accurate fluorescence distribution. The proposed method shows considerable promise and will be tested using more realistic simulations and experimental setups.
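
    The first step, truncated SVD preconditioning to reduce system-matrix coherence, can be sketched as below. This is an illustrative interpretation under the assumption that the k largest singular components are kept and both the matrix and the data are transformed consistently; the truncation level and the subsequent homotopy and MLEM steps are not shown.

        import numpy as np

        def tsvd_precondition(A, y, k):
            # Keep the k largest singular components of the system matrix and transform
            # the data accordingly, reducing coherence before l1 (homotopy) recovery.
            U, s, Vt = np.linalg.svd(A, full_matrices=False)
            A_tilde = Vt[:k, :]                  # preconditioned system matrix
            y_tilde = (U[:, :k] / s[:k]).T @ y   # equals diag(1/s_k) @ U_k.T @ y
            return A_tilde, y_tilde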

  11. Evaluation of algorithms for point cloud surface reconstruction through the analysis of shape parameters

    NASA Astrophysics Data System (ADS)

    Cao, Lu; Verbeek, Fons J.

    2012-03-01

    In computer graphics and visualization, reconstruction of a 3D surface from a point cloud is an important research area. As the surface contains information that can be measured, i.e. expressed in features, surface reconstruction is potentially important for bio-imaging applications. Opportunities in this application area are the motivation for this study. In the past decade, a number of algorithms for surface reconstruction have been proposed. Generally speaking, these methods can be separated into two categories: explicit representation and implicit approximation. Most of the aforementioned methods are firmly based in theory; however, so far, no analytical comparison of these methods has been presented. The straightforward way of evaluation has been visual inspection. Through evaluation we search for a method that can precisely preserve the surface characteristics and that is robust in the presence of noise. The outcome will be used to improve reliability in surface reconstruction of biological models. We, therefore, use an analytical approach by selecting features as surface descriptors and measuring these features under varying conditions. We selected surface distance, surface area and surface curvature as three major features to compare the quality of the surface created by the different algorithms. Our starting point has been ground truth values obtained from analytical shapes such as the sphere and the ellipsoid. In this paper we present four classical surface reconstruction methods from the two categories mentioned above, i.e. the Power Crust, the Robust Cocone, the Fourier-based method and the Poisson reconstruction method. The results obtained from our experiments indicate that the Poisson reconstruction method performs best in the presence of noise.

  12. Virtual reconstruction of glenoid bone defects using a statistical shape model.

    PubMed

    Plessers, Katrien; Vanden Berghe, Peter; Van Dijck, Christophe; Wirix-Speetjens, Roel; Debeer, Philippe; Jonkers, Ilse; Vander Sloten, Jos

    2018-01-01

    Description of the native shape of a glenoid helps surgeons to preoperatively plan the position of a shoulder implant. A statistical shape model (SSM) can be used to virtually reconstruct a glenoid bone defect and to predict the inclination, version, and center position of the native glenoid. An SSM-based reconstruction method has already been developed for acetabular bone reconstruction. The goal of this study was to evaluate the SSM-based method for the reconstruction of glenoid bone defects and the prediction of native anatomic parameters. First, an SSM was created on the basis of 66 healthy scapulae. Then, artificial bone defects were created in all scapulae and reconstructed using the SSM-based reconstruction method. For each bone defect, the reconstructed surface was compared with the original surface. Furthermore, the inclination, version, and glenoid center point of the reconstructed surface were compared with the original parameters of each scapula. For small glenoid bone defects, the healthy surface of the glenoid was reconstructed with a root mean square error of 1.2 ± 0.4 mm. Inclination, version, and glenoid center point were predicted with an accuracy of 2.4° ± 2.1°, 2.9° ± 2.2°, and 1.8 ± 0.8 mm, respectively. The SSM-based reconstruction method is able to accurately reconstruct the native glenoid surface and to predict the native anatomic parameters. Based on this outcome, statistical shape modeling can be considered a successful technique for use in the preoperative planning of shoulder arthroplasty. Copyright © 2017 Journal of Shoulder and Elbow Surgery Board of Trustees. Published by Elsevier Inc. All rights reserved.
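
    The essence of an SSM-based defect reconstruction is a principal-component shape model fitted only to the intact part of the bone. The Python sketch below assumes landmark correspondences are already established and ignores rigid alignment and mode truncation criteria; it is a simplified illustration, not the validated method of the paper.

        import numpy as np

        def build_ssm(shapes):
            # shapes: (n_samples, 3 * n_landmarks) matrix of corresponding landmark coordinates.
            mean = shapes.mean(axis=0)
            _, _, Vt = np.linalg.svd(shapes - mean, full_matrices=False)
            return mean, Vt                      # rows of Vt are principal modes of variation

        def fit_ssm_to_defect(mean, modes, target, intact_mask, n_modes=10):
            # Fit only to coordinates that survive on the defective glenoid (intact_mask),
            # then read off the full surface, including the reconstructed defect region.
            P = modes[:n_modes].T                # (3 * n_landmarks, n_modes)
            b, *_ = np.linalg.lstsq(P[intact_mask], target[intact_mask] - mean[intact_mask], rcond=None)
            return mean + P @ b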

  13. Review of digital holography reconstruction methods

    NASA Astrophysics Data System (ADS)

    Dovhaliuk, Rostyslav Yu.

    2018-01-01

    The development of digital holography has opened new ways for the non-destructive study of both transparent and opaque objects. In this paper, the digital hologram reconstruction process is investigated. The advantages and limitations of common wave propagation methods are discussed. The details of a software implementation of digital hologram reconstruction methods are presented. Finally, the performance of each wave propagation method is evaluated, and recommendations about possible use cases for each of them are given.
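
    One commonly used wave propagation method in digital hologram reconstruction is the angular spectrum method. The sketch below is a generic implementation for a square-sampled complex field; it assumes equal pixel pitch in both directions and suppresses evanescent components, and is offered only as an illustration of the class of methods reviewed.

        import numpy as np

        def angular_spectrum_propagate(field, wavelength, dx, z):
            # Propagate a sampled complex field (e.g. the hologram plane) over a distance z.
            ny, nx = field.shape
            fx = np.fft.fftfreq(nx, d=dx)
            fy = np.fft.fftfreq(ny, d=dx)
            FX, FY = np.meshgrid(fx, fy)
            arg = 1.0 - (wavelength * FX) ** 2 - (wavelength * FY) ** 2
            kz = 2 * np.pi / wavelength * np.sqrt(np.maximum(arg, 0.0))
            H = np.exp(1j * kz * z) * (arg > 0)          # evanescent components suppressed
            return np.fft.ifft2(np.fft.fft2(field) * H)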

  14. Combining Acceleration Techniques for Low-Dose X-Ray Cone Beam Computed Tomography Image Reconstruction.

    PubMed

    Huang, Hsuan-Ming; Hsiao, Ing-Tsung

    2017-01-01

    Over the past decade, image quality in low-dose computed tomography has been greatly improved by various compressive sensing- (CS-) based reconstruction methods. However, these methods have some disadvantages including high computational cost and slow convergence rate. Many different speed-up techniques for CS-based reconstruction algorithms have been developed. The purpose of this paper is to propose a fast reconstruction framework that combines a CS-based reconstruction algorithm with several speed-up techniques. First, total difference minimization (TDM) was implemented using the soft-threshold filtering (STF). Second, we combined TDM-STF with the ordered subsets transmission (OSTR) algorithm for accelerating the convergence. To further speed up the convergence of the proposed method, we applied the power factor and the fast iterative shrinkage thresholding algorithm to OSTR and TDM-STF, respectively. Results obtained from simulation and phantom studies showed that many speed-up techniques could be combined to greatly improve the convergence speed of a CS-based reconstruction algorithm. More importantly, the increased computation time (≤10%) was minor as compared to the acceleration provided by the proposed method. In this paper, we have presented a CS-based reconstruction framework that combines several acceleration techniques. Both simulation and phantom studies provide evidence that the proposed method has the potential to satisfy the requirement of fast image reconstruction in practical CT.
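
    Two of the building blocks named above, soft-threshold filtering and FISTA acceleration, are standard and can be sketched generically. The code below is an illustrative, self-contained FISTA loop with a soft-thresholding operator; the actual OSTR data term, the total-difference regularizer, and the power-factor trick from the paper are not reproduced.

        import numpy as np

        def soft_threshold(x, tau):
            # Soft-thresholding operator, the core of the STF step.
            return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

        def fista(grad_f, prox, x0, step, n_iter=100):
            # Generic FISTA: gradient step on the data-fidelity term, proximal step on the regularizer.
            x_prev, y, t = x0.copy(), x0.copy(), 1.0
            for _ in range(n_iter):
                x = prox(y - step * grad_f(y), step)
                t_next = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t * t))
                y = x + (t - 1.0) / t_next * (x - x_prev)
                x_prev, t = x, t_next
            return x_prev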

  15. Iterative reconstruction methods in atmospheric tomography: FEWHA, Kaczmarz and Gradient-based algorithm

    NASA Astrophysics Data System (ADS)

    Ramlau, R.; Saxenhuber, D.; Yudytskiy, M.

    2014-07-01

    The problem of atmospheric tomography arises in ground-based telescope imaging with adaptive optics (AO), where one aims to compensate in real-time for the rapidly changing optical distortions in the atmosphere. Many of these systems depend on a sufficient reconstruction of the turbulence profiles in order to obtain a good correction. Due to steadily growing telescope sizes, there is a strong increase in the computational load for atmospheric reconstruction with current methods, first and foremost the MVM. In this paper we present and compare three novel iterative reconstruction methods. The first iterative approach is the Finite Element-Wavelet Hybrid Algorithm (FEWHA), which combines wavelet-based techniques and conjugate gradient schemes to efficiently and accurately tackle the problem of atmospheric reconstruction. The method is extremely fast, highly flexible and yields superior quality. Another novel iterative reconstruction algorithm is the three-step approach, which decouples the problem into the reconstruction of the incoming wavefronts, the reconstruction of the turbulent layers (atmospheric tomography) and the computation of the best mirror correction (fitting step). For the atmospheric tomography problem within the three-step approach, the Kaczmarz algorithm and the Gradient-based method have been developed. We present a detailed comparison of our reconstructors both in terms of quality and speed performance in the context of a Multi-Object Adaptive Optics (MOAO) system for the E-ELT setting on OCTOPUS, the ESO end-to-end simulation tool.
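
    The Kaczmarz algorithm referred to above is a classical row-action solver for linear systems. The sketch below shows the generic cyclic form with a relaxation parameter; it operates on an explicit dense matrix purely for illustration, whereas an AO reconstructor would exploit the sparse, structured tomography operator.

        import numpy as np

        def kaczmarz(A, b, n_sweeps=10, relax=1.0):
            # Cyclic Kaczmarz sweeps: project the iterate onto one measurement hyperplane at a time.
            x = np.zeros(A.shape[1])
            row_norms = np.einsum('ij,ij->i', A, A)
            for _ in range(n_sweeps):
                for i in range(A.shape[0]):
                    if row_norms[i] == 0.0:
                        continue
                    residual = b[i] - A[i] @ x
                    x = x + relax * residual / row_norms[i] * A[i]
            return x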

  16. Validation of a fibula graft cutting guide for mandibular reconstruction: experiment with rapid prototyping mandible model.

    PubMed

    Lim, Se-Ho; Kim, Yeon-Ho; Kim, Moon-Key; Nam, Woong; Kang, Sang-Hoon

    2016-12-01

    We examined whether cutting a fibula graft with a surgical guide template, prepared with computer-aided design/computer-aided manufacturing (CAD/CAM), would improve the precision and accuracy of mandibular reconstruction. Thirty mandibular rapid prototype (RP) models were allocated to experimental (N = 15) and control (N = 15) groups. Thirty identical fibular RP models were assigned randomly, 15 to each group. For reference, we prepared a reconstructed mandibular RP model with a three-dimensional printer, based on surgical simulation. In the experimental group, a stereolithography (STL) surgical guide template, based on simulation, was used for cutting the fibula graft. In the control group, the fibula graft was cut manually, with reference to the reconstructed RP mandible model. The mandibular reconstructions were compared to the surgical simulation, and errors were calculated for both the STL surgical guide and the manual methods. The average differences in three-dimensional, minimum distances between the reconstruction and simulation were 9.87 ± 6.32 mm (mean ± SD) for the STL surgical guide method and 14.76 ± 10.34 mm (mean ± SD) for the manual method. The STL surgical guide method incurred less error than the manual method in mandibular reconstruction. A fibula cutting guide improved the precision of reconstructing the mandible with a fibula graft.

  17. Linearized image reconstruction method for ultrasound modulated electrical impedance tomography based on power density distribution

    NASA Astrophysics Data System (ADS)

    Song, Xizi; Xu, Yanbin; Dong, Feng

    2017-04-01

    Electrical resistance tomography (ERT) is a promising measurement technique with important industrial and clinical applications. However, with limited effective measurements, it suffers from poor spatial resolution due to the ill-posedness of the inverse problem. Recently, there has been an increasing research interest in hybrid imaging techniques, utilizing couplings of physical modalities, because these techniques obtain much more effective measurement information and promise high resolution. Ultrasound modulated electrical impedance tomography (UMEIT) is one of the newly developed hybrid imaging techniques, which combines electric and acoustic modalities. A linearized image reconstruction method based on power density is proposed for UMEIT. The interior data, the power density distribution, are adopted to reconstruct the conductivity distribution with the proposed image reconstruction method. At the same time, relating the power density change to the change in conductivity, the Jacobian matrix is employed to turn the nonlinear problem into a linear one. The analytic formulation of this Jacobian matrix is derived and its effectiveness is also verified. In addition, different excitation patterns are tested and analyzed, and opposite excitation provides the best performance with the proposed method. Also, multiple power density distributions are combined to implement image reconstruction. Finally, image reconstruction is implemented with the linear back-projection (LBP) algorithm. Compared with ERT, with the proposed image reconstruction method, UMEIT can produce reconstructed images with higher quality and better quantitative evaluation results.
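
    A one-step linear back-projection of the interior data can be sketched as below. This follows the generic LBP form used in electrical tomography (transposed Jacobian with row-sum normalization) and is only an assumed illustration, not necessarily the exact formulation used in the paper.

        import numpy as np

        def linear_back_projection(jacobian, delta_power_density):
            # One-step LBP estimate of the conductivity change: back-project the power-density
            # change through the transposed Jacobian and normalize by the sensitivity sum.
            raw = jacobian.T @ delta_power_density
            norm = jacobian.T @ np.ones(jacobian.shape[0])
            norm = np.where(norm == 0, 1.0, norm)
            return raw / norm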

  18. Multi-thread parallel algorithm for reconstructing 3D large-scale porous structures

    NASA Astrophysics Data System (ADS)

    Ju, Yang; Huang, Yaohui; Zheng, Jiangtao; Qian, Xu; Xie, Heping; Zhao, Xi

    2017-04-01

    Geomaterials inherently contain many discontinuous, multi-scale, geometrically irregular pores, forming a complex porous structure that governs their mechanical and transport properties. The development of an efficient reconstruction method for representing porous structures can significantly contribute toward providing a better understanding of the governing effects of porous structures on the properties of porous materials. In order to improve the efficiency of reconstructing large-scale porous structures, a multi-thread parallel scheme was incorporated into the simulated annealing reconstruction method. In the method, four correlation functions, which include the two-point probability function, the linear-path functions for the pore phase and the solid phase, and the fractal system function for the solid phase, were employed for better reproduction of the complex well-connected porous structures. In addition, a random sphere packing method and a self-developed pre-conditioning method were incorporated to cast the initial reconstructed model and select independent interchanging pairs for parallel multi-thread calculation, respectively. The accuracy of the proposed algorithm was evaluated by examining the similarity between the reconstructed structure and a prototype in terms of their geometrical, topological, and mechanical properties. Comparisons of the reconstruction efficiency of porous models with various scales indicated that the parallel multi-thread scheme significantly shortened the execution time for reconstruction of a large-scale well-connected porous model compared to a sequential single-thread procedure.
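
    One of the statistical descriptors matched during simulated annealing, the two-point probability function, is straightforward to compute. The sketch below evaluates S2(r) along one axis of a 2D binary image under an assumed periodic boundary; the linear-path and fractal descriptors, and the annealing pixel-swap loop itself, are not shown.

        import numpy as np

        def two_point_probability(phase, max_r):
            # S2(r) along the x-direction for a 2D binary image (1 = phase of interest).
            return np.array([np.mean(phase * np.roll(phase, -r, axis=1))
                             for r in range(max_r + 1)])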

  19. Metal artifact reduction using a patch-based reconstruction for digital breast tomosynthesis

    NASA Astrophysics Data System (ADS)

    Borges, Lucas R.; Bakic, Predrag R.; Maidment, Andrew D. A.; Vieira, Marcelo A. C.

    2017-03-01

    Digital breast tomosynthesis (DBT) is rapidly emerging as the main clinical tool for breast cancer screening. Although several reconstruction methods for DBT are described in the literature, one common issue is interplane artifacts caused by out-of-focus features. For breasts containing highly attenuating features, such as surgical clips and large calcifications, the artifacts are even more apparent and can limit the detection and characterization of lesions by the radiologist. In this work, we propose a novel method of combining backprojected data into tomographic slices using a patch-based approach, commonly used in denoising. Preliminary tests were performed on a geometry phantom and on an anthropomorphic phantom containing metal inserts. The reconstructed images were compared to a commercial reconstruction solution. Qualitative assessment of the reconstructed images provides evidence that the proposed method reduces artifacts while maintaining low noise levels. Objective assessment supports the visual findings. The artifact spread function shows that the proposed method is capable of suppressing artifacts generated by highly attenuating features. The signal-difference-to-noise ratio shows that the noise levels of the proposed and commercial methods are comparable, even though the commercial method applies post-processing filtering steps which were not implemented in the proposed method. Thus, the proposed method can produce tomosynthesis reconstructions with reduced artifacts and low noise levels.

  20. Tomography for two-dimensional gas temperature distribution based on TDLAS

    NASA Astrophysics Data System (ADS)

    Luo, Can; Wang, Yunchu; Xing, Fei

    2018-03-01

    Based on tunable diode laser absorption spectroscopy (TDLAS), tomography is used to reconstruct the combustion gas temperature distribution. The effects of the number of rays, the number of grid cells, and the spacing of rays on the temperature reconstruction results for parallel rays are investigated. The reconstruction quality improves with the number of rays but levels off once the ray number exceeds a certain value. The best quality is achieved when η is between 0.5 and 1. A virtual ray method combined with the reconstruction algorithms is tested. It is found that the virtual ray method is effective in improving the accuracy of the reconstruction results compared with the original method. The linear interpolation method and the cubic spline interpolation method are used to improve the calculation accuracy of the virtual ray absorption values. According to the calculation results, cubic spline interpolation is better. Moreover, the temperature distribution of a TBCC combustion chamber is used to validate these conclusions.

  1. The performance of diphoton primary vertex reconstruction methods in H → γγ+Met channel of ATLAS experiment

    NASA Astrophysics Data System (ADS)

    Tomiwa, K. G.

    2017-09-01

    The search for new physics in the H → γγ+Met channel relies on how well the missing transverse energy is reconstructed. The Met algorithm used by the ATLAS experiment in turn uses input variables such as photons and jets, which depend on the reconstruction of the primary vertex. This document presents the performance of di-photon vertex reconstruction algorithms (the hardest vertex method and the Neural Network method). Comparing the performance of these algorithms for the nominal Standard Model sample and the Beyond Standard Model sample, we find that the Neural Network method of primary vertex selection performs better overall than the hardest vertex method.

  2. Reconstructing El Niño Southern Oscillation using data from ships' logbooks, 1815-1854. Part I: methodology and evaluation

    NASA Astrophysics Data System (ADS)

    Barrett, Hannah G.; Jones, Julie M.; Bigg, Grant R.

    2018-02-01

    The meteorological information found within ships' logbooks is a unique and fascinating source of data for historical climatology. This study uses wind observations from logbooks covering the period 1815 to 1854 to reconstruct an index of El Niño Southern Oscillation (ENSO) for boreal winter (DJF). Statistically-based reconstructions of the Southern Oscillation Index (SOI) are obtained using two methods: principal component regression (PCR) and composite-plus-scale (CPS). Calibration and validation are carried out over the modern period 1979-2014, assessing the relationship between re-gridded seasonal ERA-Interim reanalysis wind data and the instrumental SOI. The reconstruction skill of both the PCR and CPS methods is found to be high with reduction of error skill scores of 0.80 and 0.75, respectively. The relationships derived during the fitting period are then applied to the logbook wind data to reconstruct the historical SOI. We develop a new method to assess the sensitivity of the reconstructions to using a limited number of observations per season and find that the CPS method performs better than PCR with a limited number of observations. A difference in the distribution of wind force terms used by British and Dutch ships is found, and its impact on the reconstruction assessed. The logbook reconstructions agree well with a previous SOI reconstructed from Jakarta rain day counts, 1830-1850, adding robustness to our reconstructions. Comparisons to additional documentary and proxy data sources are provided in a companion paper.
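
    Of the two statistical methods named, principal component regression has a compact generic form: regress the target index on the leading principal components of the gridded predictor field. The sketch below is an illustrative fit/predict pair under that generic form (no cross-validation or mode-selection step), not the calibration procedure of the paper.

        import numpy as np

        def pcr_fit(X, y, n_pc=5):
            # X: (n_years, n_gridpoints) predictor field (e.g. seasonal winds); y: target index (e.g. SOI).
            x_mean, y_mean = X.mean(axis=0), y.mean()
            _, _, Vt = np.linalg.svd(X - x_mean, full_matrices=False)
            comps = Vt[:n_pc]
            scores = (X - x_mean) @ comps.T
            coef, *_ = np.linalg.lstsq(scores, y - y_mean, rcond=None)
            return x_mean, y_mean, comps, coef

        def pcr_predict(X_new, x_mean, y_mean, comps, coef):
            return y_mean + (X_new - x_mean) @ comps.T @ coef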

  3. Strategy on energy saving reconstruction of distribution networks based on life cycle cost

    NASA Astrophysics Data System (ADS)

    Chen, Xiaofei; Qiu, Zejing; Xu, Zhaoyang; Xiao, Chupeng

    2017-08-01

    Because the funds available for actual distribution network reconstruction projects are often limited, the cost-benefit model and the decision-making method are crucial for distribution network energy-saving reconstruction projects. From the perspective of life cycle cost (LCC), the life cycle under study is first determined for the energy-saving reconstruction of distribution networks with multiple devices. Then, a new life cycle cost-benefit model for energy-saving reconstruction of distribution networks is developed, in which the modification schemes include distribution transformer replacement, line replacement and reactive power compensation. For the operation loss cost and maintenance cost, an operation cost model considering the influence of seasonal load characteristics and a segmental maintenance cost model for transformers are proposed. Finally, aiming at the highest energy-saving profit per unit LCC, a decision-making method is developed that also considers financial and technical constraints. The model and method are applied to a real distribution network reconstruction, and the results show that they are effective.

  4. Improving reflectance reconstruction from tristimulus values by adaptively combining colorimetric and reflectance similarities

    NASA Astrophysics Data System (ADS)

    Cao, Bin; Liao, Ningfang; Li, Yasheng; Cheng, Haobo

    2017-05-01

    The use of spectral reflectance as fundamental color information finds application in diverse fields related to imaging. Many approaches use training sets to train the algorithm used for color classification. In this context, we note that the modification of training sets obviously impacts the accuracy of reflectance reconstruction based on classical reflectance reconstruction methods. Different modifying criteria are not always consistent with each other, since they have different emphases; spectral reflectance similarity focuses on the deviation of reconstructed reflectance, whereas colorimetric similarity emphasizes human perception. We present a method to improve the accuracy of the reconstructed spectral reflectance by adaptively combining colorimetric and spectral reflectance similarities. The different exponential factors of the weighting coefficients were investigated. The spectral reflectance reconstructed by the proposed method exhibits considerable improvements in terms of the root-mean-square error and goodness-of-fit coefficient of the spectral reflectance errors as well as color differences under different illuminants. Our method is applicable to diverse areas such as textiles, printing, art, and other industries.

  5. Development of a strain rate dependent material model of human cortical bone for computer-aided reconstruction of injury mechanisms.

    PubMed

    Asgharpour, Zahra; Zioupos, Peter; Graw, Matthias; Peldschus, Steffen

    2014-03-01

    Computer-aided methods such as finite-element simulation offer great potential in the forensic reconstruction of injury mechanisms. Numerous studies have been performed on understanding and analysing the mechanical properties of bone and the mechanism of its fracture. Determination of the mechanical properties of bones is made on the same basis used for other structural materials. The mechanical behaviour of bones is affected by the mechanical properties of the bone material, the geometry, the loading direction and mode and, of course, the loading rate. The strain rate dependency of the mechanical properties of cortical bone has been well demonstrated in literature studies, but as many of these were performed on animal bones and at non-physiological strain rates, it is questionable how they apply to human situations. High strain rates dominate in many forensic applications, such as automotive crashes and assault scenarios. There is an overwhelming need for a model which can describe the complex behaviour of bone at lower strain rates as well as higher ones. Some attempts have been made to model the viscoelastic and viscoplastic properties of bone at high strain rates using constitutive mathematical models, with little demonstrated success. The main objective of the present study is to model the rate-dependent behaviour of bones based on experimental data. An isotropic material model of human cortical bone with strain rate dependency effects is implemented using the LS-DYNA material library. We employed a human finite element model called THUMS (Total Human Model for Safety), developed by Toyota R&D Labs and Wayne State University, USA. The finite element model of the human femur is extracted from the THUMS model. Different methods have been employed to develop a strain rate dependent material model for the femur bone. Results of one of the recent experimental studies on the human femur have been employed to obtain the numerical model for the cortical femur. A forensic application of the model is explained in which impacts to the arm have been reconstructed using the finite element model of THUMS. The advantage of the numerical method is that a wide range of impact conditions can be easily reconstructed. Impact velocity has been varied as a parameter to find the tolerance levels of injuries to the lower arm. The method can be further developed to study assaults and the injury mechanisms that can lead to severe traumatic injuries in forensic cases. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.

  6. Current strategies with 1-stage prosthetic breast reconstruction

    PubMed Central

    2015-01-01

    Background: 1-stage prosthetic breast reconstruction is gaining traction as a preferred method of breast reconstruction in select patients who undergo mastectomy for cancer or prevention. Methods: Critical elements of the procedure, including patient selection, technique, surgical judgment, and postoperative care, were reviewed. Results: Outcomes series reveal that in properly selected patients, direct-to-implant (DTI) reconstruction has similar low rates of complications and high rates of patient satisfaction compared to traditional 2-stage reconstruction. Conclusions: 1-stage prosthetic breast reconstruction may be the procedure of choice in select patients undergoing mastectomy. Advantages include the potential for the entire reconstructive process to be complete in one surgery, the quick return to normal activities, and lack of donor site morbidity. PMID:26005643

  7. A new method for reconstruction of solar irradiance

    NASA Astrophysics Data System (ADS)

    Privalsky, Victor

    2018-07-01

    The purpose of this research is to show how time series should be reconstructed, using an example with data on the total solar irradiance (TSI) of the Earth and on sunspot numbers (SSN) since 1749. The traditional approach through regression equation(s) is designed for time-invariant vectors of random variables and is not applicable to time series, which are random functions of time. The autoregressive reconstruction (ARR) method suggested here requires fitting a multivariate stochastic difference equation to the target/proxy time series. The reconstruction is done through the scalar equation for the target time series with the white noise term excluded. The time series approach is shown to provide a better reconstruction of TSI than the correlation/regression method. A reconstruction criterion is introduced which allows one to define in advance the achievable level of success in the reconstruction. The conclusion is that time series, including the total solar irradiance, cannot be reconstructed properly if the data are not treated as sample records of random processes and analyzed in both the time and frequency domains.
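
    As a concrete illustration of the ARR recipe (fit a bivariate stochastic difference equation, then run the target equation with the noise term dropped), the sketch below gives one plausible, simplified reading in Python. The ordinary-least-squares VAR fit, the model order p, and the feedback of reconstructed target values are assumptions made for illustration; they are not the author's implementation.

      import numpy as np

      def fit_var_ols(x, p):
          """Fit a bivariate VAR(p), x_t = c + sum_k A_k x_{t-k} + e_t, by OLS.

          x is a (T, 2) array whose columns are target (e.g. TSI) and proxy (SSN).
          Returns the intercept c (shape (2,)) and the list of (2, 2) matrices A_k.
          """
          T, d = x.shape
          Z = np.array([np.concatenate([[1.0]] + [x[t - k] for k in range(1, p + 1)])
                        for t in range(p, T)])           # regressors, shape (T-p, 1 + d*p)
          B, *_ = np.linalg.lstsq(Z, x[p:], rcond=None)  # coefficients, shape (1 + d*p, d)
          c = B[0]
          A = [B[1 + k * d: 1 + (k + 1) * d].T for k in range(p)]
          return c, A

      def reconstruct_target(target_init, proxy, c, A):
          """Run only the scalar target equation, with the white-noise term excluded.

          target_init supplies the p starting values of the target; the observed
          proxy series drives the reconstruction thereafter (an assumed setup).
          """
          p, d = len(A), 2
          x = np.zeros((len(proxy), d))
          x[:p, 0] = target_init
          x[:, 1] = proxy
          for t in range(p, len(proxy)):
              pred = c + sum(A[k] @ x[t - 1 - k] for k in range(p))
              x[t, 0] = pred[0]                          # keep the target equation only
          return x[:, 0]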

  8. TH-EF-207A-05: Feasibility of Applying SMEIR Method On Small Animal 4D Cone Beam CT Imaging

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhong, Y; Zhang, Y; Shao, Y

    Purpose: Small animal cone beam CT imaging has been widely used in preclinical research. Due to the higher respiratory and heart rates of small animals, motion blurring is inevitable and needs to be corrected in the reconstruction. The simultaneous motion estimation and image reconstruction (SMEIR) method, which uses projection images of all phases, has proved to be effective in motion model estimation and able to reconstruct motion-compensated images. We demonstrate the application of SMEIR to small animal 4D cone beam CT imaging by computer simulations on a digital rat model. Methods: The small animal CBCT imaging system was simulated with a source-to-detector distance of 300 mm and a source-to-object distance of 200 mm. A sequence of rat phantoms was generated with 0.4 mm³ voxel size. The respiratory cycle was taken as 1.0 second and the motions were simulated with a diaphragm motion of 2.4 mm and an anterior-posterior expansion of 1.6 mm. The projection images were calculated using a ray-tracing method, and 4D-CBCT were reconstructed using the SMEIR and FDK methods. The SMEIR method iterates over two alternating steps: 1) motion-compensated iterative image reconstruction using projections from all respiration phases and 2) motion model estimation directly from projections through a 2D-3D deformable registration of the image obtained in the first step to the projection images of the other phases. Results: The images reconstructed using the SMEIR method reproduced the features of the original phantom. Projections from the same phase were also reconstructed using the FDK method. Compared with the FDK results, the SMEIR images substantially improve the image quality with minimal artifacts. Conclusion: We demonstrate that it is viable to apply the SMEIR method to reconstruct small animal 4D-CBCT images.

  9. SU-F-I-49: Vendor-Independent, Model-Based Iterative Reconstruction On a Rotating Grid with Coordinate-Descent Optimization for CT Imaging Investigations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Young, S; Hoffman, J; McNitt-Gray, M

    Purpose: Iterative reconstruction methods show promise for improving image quality and lowering the dose in helical CT. We aim to develop a novel model-based reconstruction method that offers potential for dose reduction with reasonable computation speed and storage requirements for vendor-independent reconstruction from clinical data on a normal desktop computer. Methods: In 2012, Xu proposed reconstructing on rotating slices to exploit helical symmetry and reduce the storage requirements for the CT system matrix. Inspired by this concept, we have developed a novel reconstruction method incorporating the stored-system-matrix approach together with iterative coordinate-descent (ICD) optimization. A penalized-least-squares objective function with a quadratic penalty term is solved analytically voxel-by-voxel, sequentially iterating along the axial direction first, followed by the transaxial direction. 8 in-plane (transaxial) neighbors are used for the ICD algorithm. The forward problem is modeled via a unique approach that combines the principle of Joseph’s method with trilinear B-spline interpolation to enable accurate reconstruction with low storage requirements. Iterations are accelerated with multi-CPU OpenMP libraries. For preliminary evaluations, we reconstructed (1) a simulated 3D ellipse phantom and (2) an ACR accreditation phantom dataset exported from a clinical scanner (Definition AS, Siemens Healthcare). Image quality was evaluated in the resolution module. Results: Image quality was excellent for the ellipse phantom. For the ACR phantom, image quality was comparable to clinical reconstructions and reconstructions using open-source FreeCT-wFBP software. Also, we did not observe any deleterious impact associated with the utilization of rotating slices. The system matrix storage requirement was only 4.5GB, and reconstruction time was 50 seconds per iteration. Conclusion: Our reconstruction method shows potential for furthering research in low-dose helical CT, in particular as part of our ongoing development of an acquisition/reconstruction pipeline for generating images under a wide range of conditions. Our algorithm will be made available open-source as “FreeCT-ICD”. NIH U01 CA181156; Disclosures (McNitt-Gray): Institutional research agreement, Siemens Healthcare; Past recipient, research grant support, Siemens Healthcare; Consultant, Toshiba America Medical Systems; Consultant, Samsung Electronics.
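
    To make the voxel-by-voxel update concrete, the following is a minimal coordinate-descent sketch for a penalized least-squares objective. It substitutes a plain ridge penalty for the quadratic neighborhood penalty, uses a dense system matrix rather than the stored rotating-slice matrix, and omits the Joseph/B-spline forward model, so it illustrates only the ICD update pattern, not FreeCT-ICD itself.

      import numpy as np

      def icd_penalized_least_squares(A, y, beta=0.1, n_sweeps=20):
          """Coordinate descent for min_x ||A x - y||^2 + beta * ||x||^2.

          Each voxel x_j is updated with a closed-form 1D solve while all other
          voxels are held fixed; a running residual keeps every update cheap.
          """
          m, n = A.shape
          x = np.zeros(n)
          r = y - A @ x                        # residual for the current estimate
          col_norms = np.sum(A * A, axis=0)    # ||A_j||^2 for every column
          for _ in range(n_sweeps):
              for j in range(n):
                  num = A[:, j] @ r + col_norms[j] * x[j]
                  new_xj = num / (col_norms[j] + beta)
                  r += A[:, j] * (x[j] - new_xj)   # keep the residual consistent
                  x[j] = new_xj
          return x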

  10. Improved magnetic resonance fingerprinting reconstruction with low-rank and subspace modeling.

    PubMed

    Zhao, Bo; Setsompop, Kawin; Adalsteinsson, Elfar; Gagoski, Borjan; Ye, Huihui; Ma, Dan; Jiang, Yun; Ellen Grant, P; Griswold, Mark A; Wald, Lawrence L

    2018-02-01

    This article introduces a constrained imaging method based on low-rank and subspace modeling to improve the accuracy and speed of MR fingerprinting (MRF). A new model-based imaging method is developed for MRF to reconstruct high-quality time-series images and accurate tissue parameter maps (e.g., T1, T2, and spin density maps). Specifically, the proposed method exploits low-rank approximations of MRF time-series images, and further enforces temporal subspace constraints to capture magnetization dynamics. This allows the time-series image reconstruction problem to be formulated as a simple linear least-squares problem, which enables efficient computation. After image reconstruction, tissue parameter maps are estimated via dictionary-based pattern matching, as in the conventional approach. The effectiveness of the proposed method was evaluated with in vivo experiments. Compared with the conventional MRF reconstruction, the proposed method reconstructs time-series images with significantly reduced aliasing artifacts and noise contamination. Although the conventional approach exhibits some robustness to these corruptions, the improved time-series image reconstruction in turn provides more accurate tissue parameter maps. The improvement is pronounced especially when the acquisition time becomes short. The proposed method significantly improves the accuracy of MRF, and also reduces data acquisition time. Magn Reson Med 79:933-942, 2018. © 2017 International Society for Magnetic Resonance in Medicine.

  11. An External Wire Frame Fixation Method of Skin Grafting for Burn Reconstruction.

    PubMed

    Yoshino, Yukiko; Ueda, Hyakuzoh; Ono, Simpei; Ogawa, Rei

    2017-06-28

    The skin graft is a prevalent reconstructive method for burn injuries. We have been applying external wire frame fixation methods in combination with skin grafts since 1986 and have achieved better outcomes in terms of the percentage of successful graft take. The overall purpose of this method is to further secure skin graft adherence to wound beds in areas that are hard to stabilize. There are also location-specific benefits to this technique, such as eliminating the need for tarsorrhaphy in the periorbital area, allowing immediate food intake after surgery in the perioral area, and permitting less invasive fixation in the digits. The purpose of this study was to clarify its benefits and applicable locations. We reviewed 22 postburn patients with skin graft reconstructions using the external wire frame method at our institution from December 2012 through September 2016. Details of the surgical technique and individual reports are also discussed. Of the 22 cases, 15 (68%) were split-thickness skin grafts and 7 (32%) were full-thickness skin grafts. Five cases (23%) involved periorbital reconstruction, 5 (23%) involved perioral reconstruction, 2 (9%) involved lower limb reconstruction, and 10 (45%) involved digital reconstruction. Complete (100%) survival of the skin graft was attained in all cases. No signs of complication were observed. Drawing on 30 years of combined experience, we summarize recommendations for successful graft survival, with an emphasis on the locations where the technique is applied.

  12. Sparse deconvolution for the large-scale ill-posed inverse problem of impact force reconstruction

    NASA Astrophysics Data System (ADS)

    Qiao, Baijie; Zhang, Xingwu; Gao, Jiawei; Liu, Ruonan; Chen, Xuefeng

    2017-01-01

    Most previous regularization methods for solving the inverse problem of force reconstruction minimize the l2-norm of the desired force. However, these traditional regularization methods, such as Tikhonov regularization and truncated singular value decomposition, commonly fail to solve the large-scale ill-posed inverse problem at moderate computational cost. In this paper, taking into account the sparse characteristic of impact force, the idea of sparse deconvolution is first introduced to the field of impact force reconstruction and a general sparse deconvolution model of impact force is constructed. Second, a novel impact force reconstruction method based on the primal-dual interior point method (PDIPM) is proposed to solve such a large-scale sparse deconvolution model, where minimizing the l2-norm is replaced by minimizing the l1-norm. Meanwhile, the preconditioned conjugate gradient algorithm is used to compute the search direction of PDIPM with high computational efficiency. Finally, two experiments, including small- or medium-scale single impact force reconstruction and relatively large-scale consecutive impact force reconstruction, are conducted on a composite wind turbine blade and a shell structure to illustrate the advantage of PDIPM. Compared with Tikhonov regularization, PDIPM is more efficient, accurate and robust in both single and consecutive impact force reconstruction.
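
    For orientation, the sparse deconvolution model replaces the usual l2 penalty on the force with an l1 penalty: minimize 0.5*||Hf - y||^2 + lambda*||f||_1, where H is the convolution (transfer) matrix. The sketch below solves this model with ISTA, a simple proximal-gradient method, rather than the PDIPM solver proposed in the paper; the impulse response, problem sizes, and regularization weight are hypothetical.

      import numpy as np

      def ista_sparse_deconvolution(H, y, lam=0.1, n_iter=500):
          """Minimal ISTA sketch for min_f 0.5*||H f - y||^2 + lam*||f||_1.

          Illustrates the sparse-deconvolution model for impact force
          reconstruction; the paper itself solves it with PDIPM, not ISTA.
          """
          f = np.zeros(H.shape[1])
          step = 1.0 / np.linalg.norm(H, 2) ** 2   # 1 / Lipschitz constant of the gradient
          for _ in range(n_iter):
              grad = H.T @ (H @ f - y)
              z = f - step * grad
              # soft-thresholding: proximal operator of the l1 norm
              f = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)
          return f

      # toy usage: convolution matrix H built from a hypothetical impulse response h
      rng = np.random.default_rng(0)
      n = 200
      h = np.exp(-np.arange(30) / 5.0)                      # assumed impulse response
      H = np.array([np.convolve(np.eye(n)[i], h)[:n] for i in range(n)]).T
      f_true = np.zeros(n); f_true[[40, 120]] = [1.0, 0.6]  # two sparse impacts
      y = H @ f_true + 0.01 * rng.standard_normal(n)
      f_hat = ista_sparse_deconvolution(H, y, lam=0.05)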

  13. Direct reconstruction of pharmacokinetic parameters in dynamic fluorescence molecular tomography by the augmented Lagrangian method

    NASA Astrophysics Data System (ADS)

    Zhu, Dianwen; Zhang, Wei; Zhao, Yue; Li, Changqing

    2016-03-01

    Dynamic fluorescence molecular tomography (FMT) has the potential to quantify physiological or biochemical information, known as pharmacokinetic parameters, which are important for cancer detection, drug development and delivery, etc. To image those parameters, there are indirect methods, which are easier to implement but tend to provide images with low signal-to-noise ratio, and direct methods, which model all the measurement noises together and are statistically more efficient. The direct reconstruction methods in dynamic FMT have attracted a lot of attention recently. However, the coupling of tomographic image reconstruction and the nonlinearity of kinetic parameter estimation due to the compartment modeling has imposed a huge computational burden on the direct reconstruction of the kinetic parameters. In this paper, we propose to take advantage of both the direct and indirect reconstruction ideas through a variable splitting strategy under the augmented Lagrangian framework. Each iteration of the direct reconstruction is split into two steps: the dynamic FMT image reconstruction and the node-wise nonlinear least squares fitting of the pharmacokinetic parameter images. Through numerical simulation studies, we have found that the proposed algorithm can achieve good reconstruction results within a small amount of time. This will be the first step towards combined dynamic PET and FMT imaging in the future.

  14. 3D SAPIV particle field reconstruction method based on adaptive threshold.

    PubMed

    Qu, Xiangju; Song, Yang; Jin, Ying; Li, Zhenhua; Wang, Xuezhen; Guo, ZhenYan; Ji, Yunjing; He, Anzhi

    2018-03-01

    Particle image velocimetry (PIV) is an essential flow field diagnostic technique that provides instantaneous velocity information non-intrusively. Three-dimensional (3D) PIV methods can supply a full understanding of the 3D structure, the complete stress tensor, and the vorticity vector in complex flows. In synthetic aperture particle image velocimetry (SAPIV), the flow field can be measured with large particle intensities from the same direction by different cameras. During SAPIV particle reconstruction, particles are commonly reconstructed by manually setting a threshold to filter out unfocused particles in the refocused images. In this paper, the particle intensity distribution in refocused images is analyzed, and a SAPIV particle field reconstruction method based on an adaptive threshold is presented. By using the adaptive threshold to filter the 3D measurement volume integrally, the three-dimensional location information of the focused particles can be reconstructed. The cross correlations between the images captured by the cameras and the images projected from the reconstructed particle field are calculated for different threshold values. The optimal threshold is determined by cubic curve fitting and is defined as the threshold value that causes the correlation coefficient to reach its maximum. A numerical simulation of a 16-camera array and a particle field at two adjacent time instants quantitatively evaluates the performance of the proposed method. An experimental system consisting of an array of 16 cameras was used to reconstruct four adjacent frames in a vortex flow field. The results show that the proposed reconstruction method can effectively reconstruct 3D particle fields.
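
    The threshold-selection step can be sketched as follows: for a set of candidate thresholds, correlate the camera images with reprojections of the thresholded volume, fit a cubic to the (threshold, correlation) samples, and take the maximizer of the fitted curve. The project routine below is a hypothetical stand-in for the SAPIV forward-projection model, so this is only an outline of the selection logic, not the authors' code.

      import numpy as np

      def correlation(a, b):
          """Pearson correlation between two images (flattened)."""
          return np.corrcoef(a.ravel(), b.ravel())[0, 1]

      def select_threshold(refocused_volume, camera_images, project, thresholds):
          """Pick the threshold whose reprojections best match the camera images.

          project(volume, cam) is a hypothetical forward-projection routine that
          returns the synthetic image seen by camera index cam. At least four
          candidate thresholds are needed for the cubic fit.
          """
          scores = []
          for t in thresholds:
              particles = np.where(refocused_volume >= t, refocused_volume, 0.0)
              cc = np.mean([correlation(project(particles, c), img)
                            for c, img in enumerate(camera_images)])
              scores.append(cc)
          # cubic fit of correlation vs threshold; return the maximizer of the fit
          coeffs = np.polyfit(thresholds, scores, 3)
          fine = np.linspace(min(thresholds), max(thresholds), 1000)
          return fine[np.argmax(np.polyval(coeffs, fine))]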

  15. Adaptive multiple super fast simulated annealing for stochastic microstructure reconstruction

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ryu, Seun; Lin, Guang; Sun, Xin

    2013-01-01

    Fast image reconstruction from statistical information is critical for image fusion from multimodality chemical imaging instrumentation to create high-resolution images over large domains. Stochastic methods have been widely used for image reconstruction from two-point correlation functions. The main challenge is to increase the efficiency of the reconstruction. A novel simulated annealing method is proposed for fast image reconstruction. Combining the advantages of very fast cooling schedules, dynamic adaptation, and parallelization, the new simulated annealing algorithm increases efficiency by several orders of magnitude, making large-domain image fusion feasible.
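
    A minimal pixel-swap annealing loop for matching a target two-point correlation function is sketched below. It uses a single geometric cooling schedule, a row-wise correlation estimate, and serial updates, so it illustrates the general stochastic reconstruction idea rather than the adaptive, parallelized, very-fast-cooling algorithm described above; all sizes and schedule constants are assumptions.

      import numpy as np

      def two_point_correlation(img, max_r):
          """Row-wise two-point correlation S2(r), r = 0 .. max_r-1 (periodic)."""
          return np.array([np.mean(img * np.roll(img, r, axis=1)) for r in range(max_r)])

      def sa_reconstruct(target_s2, shape=(64, 64), vol_frac=0.3,
                         t0=1e-4, cooling=0.999, n_steps=5000, seed=0):
          """Simulated-annealing reconstruction: swap a solid and a void pixel and
          accept the swap if it lowers the misfit to target_s2, or with a
          Boltzmann probability otherwise."""
          rng = np.random.default_rng(seed)
          img = (rng.random(shape) < vol_frac).astype(float)
          max_r = len(target_s2)
          energy = np.sum((two_point_correlation(img, max_r) - target_s2) ** 2)
          temp = t0
          for _ in range(n_steps):
              solid = np.argwhere(img == 1.0)
              void = np.argwhere(img == 0.0)
              i = tuple(solid[rng.integers(len(solid))])
              j = tuple(void[rng.integers(len(void))])
              img[i], img[j] = 0.0, 1.0                      # trial swap
              new_energy = np.sum((two_point_correlation(img, max_r) - target_s2) ** 2)
              accept = (new_energy <= energy or
                        rng.random() < np.exp((energy - new_energy) / temp))
              if accept:
                  energy = new_energy
              else:
                  img[i], img[j] = 1.0, 0.0                  # undo the swap
              temp = max(temp * cooling, 1e-12)
          return img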

  16. Segmentation-free empirical beam hardening correction for CT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schüller, Sören; Sawall, Stefan; Stannigel, Kai

    2015-02-15

    Purpose: The polychromatic nature of the x-ray beams and their effects on the reconstructed image are often disregarded during standard image reconstruction. This leads to cupping and beam hardening artifacts inside the reconstructed volume. To correct for a general cupping, methods like water precorrection exist. They correct the hardening of the spectrum during the penetration of the measured object only for the major tissue class. In contrast, more complex artifacts like streaks between dense objects need other techniques of correction. If using only the information of one single energy scan, there are two types of corrections. The first one is a physical approach. Thereby, artifacts can be reproduced and corrected within the original reconstruction by using assumptions in a polychromatic forward projector. These assumptions could be the spectrum used, the detector response, and the physical attenuation and scatter properties of the intersected materials. A second method is an empirical approach, which does not rely on much prior knowledge. This so-called empirical beam hardening correction (EBHC) and the previously mentioned physical-based technique both rely on a segmentation of the present tissues inside the patient. The difficulty thereby is that beam hardening itself, scatter, and other effects, which diminish the image quality, also disturb the correct tissue classification and thereby reduce the accuracy of the two known classes of correction techniques. The herein proposed method works similarly to the empirical beam hardening correction but does not require a tissue segmentation and therefore shows improvements on image data which are highly degraded by noise and artifacts. Furthermore, the new algorithm is designed in a way that no additional calibration or parameter fitting is needed. Methods: To overcome the segmentation of tissues, the authors propose a histogram deformation of their primary reconstructed CT image. This step is essential for the proposed algorithm to be segmentation-free (sf). This deformation leads to a nonlinear accentuation of higher CT-values. The original volume and the gray value deformed volume are monochromatically forward projected. The two projection sets are then monomially combined and reconstructed to generate sets of basis volumes which are used for correction. This is done by maximizing the image flatness through additionally adding a weighted sum of these basis images. sfEBHC is evaluated on polychromatic simulations, phantom measurements, and patient data. The raw data sets were acquired by a dual source spiral CT scanner, a digital volume tomograph, and a dual source micro CT. Different phantom and patient data were used to illustrate the performance and wide range of usability of sfEBHC across different scanning scenarios. The artifact correction capabilities are compared to EBHC. Results: All investigated cases show equal or improved image quality compared to the standard EBHC approach. The artifact correction is capable of correcting beam hardening artifacts for different scan parameters and scan scenarios. Conclusions: sfEBHC generates beam hardening-reduced images and is furthermore capable of dealing with images which are affected by high noise and strong artifacts. The algorithm can be used to recover structures which are hardly visible inside the beam hardening-affected regions.

  17. Reconstructing the Sky Location of Gravitational-Wave Detected Compact Binary Systems: Methodology for Testing and Comparison

    NASA Technical Reports Server (NTRS)

    Sidery, T.; Aylott, B.; Christensen, N.; Farr, B.; Farr, W.; Feroz, F.; Gair, J.; Grover, K.; Graff, P.; Hanna, C.; et al.

    2014-01-01

    The problem of reconstructing the sky position of compact binary coalescences detected via gravitational waves is a central one for future observations with the ground-based network of gravitational-wave laser interferometers, such as Advanced LIGO and Advanced Virgo. Different techniques for sky localization have been independently developed. They can be divided in two broad categories: fully coherent Bayesian techniques, which are high latency and aimed at in-depth studies of all the parameters of a source, including sky position, and "triangulation-based" techniques, which exploit the data products from the search stage of the analysis to provide an almost real-time approximation of the posterior probability density function of the sky location of a detection candidate. These techniques have previously been applied to data collected during the last science runs of gravitational-wave detectors operating in the so-called initial configuration. Here, we develop and analyze methods for assessing the self consistency of parameter estimation methods and carrying out fair comparisons between different algorithms, addressing issues of efficiency and optimality. These methods are general, and can be applied to parameter estimation problems other than sky localization. We apply these methods to two existing sky localization techniques representing the two above-mentioned categories, using a set of simulated inspiral-only signals from compact binary systems with a total mass of ≤20 M⊙ and nonspinning components. We compare the relative advantages and costs of the two techniques and show that sky location uncertainties are on average a factor ≈20 smaller for fully coherent techniques than for the specific variant of the triangulation-based technique used during the last science runs, at the expense of a factor ≈1000 longer processing time.

  18. Reconstructing the sky location of gravitational-wave detected compact binary systems: Methodology for testing and comparison

    NASA Astrophysics Data System (ADS)

    Sidery, T.; Aylott, B.; Christensen, N.; Farr, B.; Farr, W.; Feroz, F.; Gair, J.; Grover, K.; Graff, P.; Hanna, C.; Kalogera, V.; Mandel, I.; O'Shaughnessy, R.; Pitkin, M.; Price, L.; Raymond, V.; Röver, C.; Singer, L.; van der Sluys, M.; Smith, R. J. E.; Vecchio, A.; Veitch, J.; Vitale, S.

    2014-04-01

    The problem of reconstructing the sky position of compact binary coalescences detected via gravitational waves is a central one for future observations with the ground-based network of gravitational-wave laser interferometers, such as Advanced LIGO and Advanced Virgo. Different techniques for sky localization have been independently developed. They can be divided in two broad categories: fully coherent Bayesian techniques, which are high latency and aimed at in-depth studies of all the parameters of a source, including sky position, and "triangulation-based" techniques, which exploit the data products from the search stage of the analysis to provide an almost real-time approximation of the posterior probability density function of the sky location of a detection candidate. These techniques have previously been applied to data collected during the last science runs of gravitational-wave detectors operating in the so-called initial configuration. Here, we develop and analyze methods for assessing the self consistency of parameter estimation methods and carrying out fair comparisons between different algorithms, addressing issues of efficiency and optimality. These methods are general, and can be applied to parameter estimation problems other than sky localization. We apply these methods to two existing sky localization techniques representing the two above-mentioned categories, using a set of simulated inspiral-only signals from compact binary systems with a total mass of ≤20M⊙ and nonspinning components. We compare the relative advantages and costs of the two techniques and show that sky location uncertainties are on average a factor ≈20 smaller for fully coherent techniques than for the specific variant of the triangulation-based technique used during the last science runs, at the expense of a factor ≈1000 longer processing time.

  19. WE-AB-207A-04: Random Undersampled Cone Beam CT: Theoretical Analysis and a Novel Reconstruction Method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shen, C; Chen, L; Jia, X

    2016-06-15

    Purpose: Reducing x-ray exposure and speeding up data acquisition have motivated studies on projection data undersampling. For a given undersampling ratio, it is an important question what the optimal undersampling approach is. In this study, we propose a new undersampling scheme: random-ray undersampling. We mathematically analyze its projection matrix properties and demonstrate its advantages. We also propose a new reconstruction method that simultaneously performs CT image reconstruction and projection domain data restoration. Methods: By representing the projection operator under the basis of singular vectors of the full projection operator, matrix representations for an undersampling case can be generated and numerical singular value decomposition can be performed. We compared properties of matrices among three undersampling approaches: regular-view undersampling, regular-ray undersampling, and the proposed random-ray undersampling. To accomplish CT reconstruction for random undersampling, we developed a novel method that iteratively performs CT reconstruction and missing projection data restoration via regularization approaches. Results: For a given undersampling ratio, random-ray undersampling preserved the mathematical properties of the full projection operator better than the other two approaches. This translates to advantages in reconstructing CT images at lower errors. Different types of image artifacts were observed depending on the undersampling strategy, which were ascribed to the unique singular vectors of the sampling operators in the image domain. We tested the proposed reconstruction algorithm on a FORBILD phantom with only 30% of the projection data randomly acquired. Reconstructed image error was reduced from 9.4% with a TV method to 7.6% with the proposed method. Conclusion: The proposed random-ray undersampling is mathematically advantageous over other typical undersampling approaches. It may permit better image reconstruction at the same undersampling ratio. The novel algorithm suitable for this random-ray undersampling was able to reconstruct high-quality images.

  20. Sparse Reconstruction Techniques in MRI: Methods, Applications, and Challenges to Clinical Adoption

    PubMed Central

    Yang, Alice Chieh-Yu; Kretzler, Madison; Sudarski, Sonja; Gulani, Vikas; Seiberlich, Nicole

    2016-01-01

    The family of sparse reconstruction techniques, including the recently introduced compressed sensing framework, has been extensively explored to reduce scan times in Magnetic Resonance Imaging (MRI). While there are many different methods that fall under the general umbrella of sparse reconstructions, they all rely on the idea that a priori information about the sparsity of MR images can be employed to reconstruct full images from undersampled data. This review describes the basic ideas behind sparse reconstruction techniques, how they could be applied to improve MR imaging, and the open challenges to their general adoption in a clinical setting. The fundamental principles underlying different classes of sparse reconstruction techniques are examined, and the requirements that each makes on the undersampled data are outlined. Applications that could potentially benefit from the accelerations that sparse reconstructions could provide are described, and clinical studies using sparse reconstructions are reviewed. Lastly, technical and clinical challenges to widespread implementation of sparse reconstruction techniques, including optimization, reconstruction times, artifact appearance, and comparison with current gold-standards, are discussed. PMID:27003227

  1. OPERA, an automatic PSF reconstruction software for Shack-Hartmann AO systems: application to Altair

    NASA Astrophysics Data System (ADS)

    Jolissaint, Laurent; Veran, Jean-Pierre; Marino, Jose

    2004-10-01

    When doing high angular resolution imaging with adaptive optics (AO), it is of crucial importance to have an accurate knowledge of the point spread function associated with each observation. Applications are numerous: image contrast enhancement by deconvolution, improved photometry and astrometry, as well as real-time AO performance evaluation. In this paper, we present our work on automatic PSF reconstruction based on control loop data, acquired simultaneously with the observation. This problem has already been solved for curvature AO systems. To adapt this method to another type of WFS, a specific analytical noise propagation model must be established. For the Shack-Hartmann WFS, we are able to derive a very accurate estimate of the noise on each slope measurement, based on the covariances of the WFS CCD pixel values in the corresponding sub-aperture. These covariances can be either derived off-line from telemetry data, or calculated by the AO computer during the acquisition. We present improved methods to determine 1) r0 from the DM drive commands, including an estimation of the outer scale L0, and 2) the contribution of the high spatial frequency component of the turbulent phase, which is not corrected by the AO system and is scaled by r0. This new method has been implemented in IDL-based software called OPERA (Performance of Adaptive Optics). We have tested OPERA on Altair, the recently commissioned Gemini-North AO system, and present our preliminary results. We also summarize the AO data required to run OPERA on any other AO system.

  2. Estimating the timing of quantal releases during end-plate currents at the frog neuromuscular junction.

    PubMed Central

    Van der Kloot, W

    1988-01-01

    1. Following motor nerve stimulation there is a period of greatly enhanced quantal release, called the early release period or ERP (Barrett & Stevens, 1972b). Until now, measurements of the probability of quantal releases at different points in the ERP have come from experiments in which quantal output was greatly reduced, so that the time of release of individual quanta could be detected or so that the latency to the release of the first quantum could be measured. 2. A method has been developed to estimate the timing of quantal release during the ERP that can be used at much higher levels of quantal output. The assumption is made that each quantal release generates an end-plate current (EPC) that rises instantaneously and then decays exponentially. The peak amplitude of the quantal currents and the time constant for their decay are measured from miniature end-plate currents (MEPCs). Then a number of EPCs are averaged, and the times of release of the individual quanta during the ERP estimated by a simple mathematical method for deconvolution derived by Cohen, Van der Kloot & Attwell (1981). 3. The deconvolution method was tested using data from preparations in high-Mg2+ low-Ca2+ solution. One test was to reconstitute the averaged EPCs from the estimated times of quantal release and the quantal currents, by using Fourier convolution. The reconstructions fit well to the originals. 4. Reconstructions were also made from averaged MEPCs which do not rise instantaneously and the estimated times of quantal release.(ABSTRACT TRUNCATED AT 250 WORDS) PMID:2466987
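
    Point 3 of the abstract checks the deconvolution by reconstituting the averaged EPC from the estimated release times via Fourier convolution. A hedged sketch of that reconstitution step, assuming an idealized quantal current with an instantaneous rise and a single exponential decay (the amplitudes, time constants and sampling step below are illustrative values, not the paper's data), is:

      import numpy as np

      def quantal_current(n, amp, tau, dt):
          """Idealized quantal EPC: instantaneous rise, exponential decay."""
          return amp * np.exp(-np.arange(n) * dt / tau)

      def reconstitute_epc(release_rate, amp, tau, dt):
          """Fourier (circular) convolution of the estimated release-time histogram
          with the quantal current; wrap-around is negligible when release occurs
          early in the record and the current has decayed by its end."""
          q = quantal_current(len(release_rate), amp, tau, dt)
          return np.fft.irfft(np.fft.rfft(release_rate) * np.fft.rfft(q),
                              n=len(release_rate)) * dt

      # illustrative example: release concentrated in a short early release period
      dt = 0.05                      # ms per sample (assumed)
      rate = np.zeros(400)
      rate[20:40] = 5.0              # quanta per ms during the ERP (assumed)
      epc = reconstitute_epc(rate, amp=-2.0, tau=1.2, dt=dt)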

  3. NetMiner-an ensemble pipeline for building genome-wide and high-quality gene co-expression network using massive-scale RNA-seq samples.

    PubMed

    Yu, Hua; Jiao, Bingke; Lu, Lu; Wang, Pengfei; Chen, Shuangcheng; Liang, Chengzhi; Liu, Wei

    2018-01-01

    Accurately reconstructing gene co-expression networks is of great importance for uncovering the genetic architecture underlying complex and various phenotypes. The recent availability of high-throughput RNA-seq sequencing has made genome-wide detection and quantification of novel, rare and low-abundance transcripts practical. However, its potential merits in reconstructing gene co-expression networks have still not been well explored. Using massive-scale RNA-seq samples, we have designed an ensemble pipeline, called NetMiner, for building a genome-scale and high-quality Gene Co-expression Network (GCN) by integrating three frequently used inference algorithms. We constructed an RNA-seq-based GCN in one monocot species, rice. The quality of the network obtained by our method was verified and evaluated against curated gene functional association data sets, and it clearly outperformed each single method. In addition, the powerful capability of the network for associating genes with functions and agronomic traits was shown by enrichment analysis and case studies. In particular, we demonstrated the potential value of our proposed method to predict the biological roles of unknown protein-coding genes, long non-coding RNA (lncRNA) genes and circular RNA (circRNA) genes. Our results provide a valuable and highly reliable data source for selecting key candidate genes for subsequent experimental validation. To facilitate the identification of novel genes regulating important biological processes and phenotypes in other plants or animals, we have published the source code of NetMiner, making it freely available at https://github.com/czllab/NetMiner.

  4. Fast dictionary-based reconstruction for diffusion spectrum imaging.

    PubMed

    Bilgic, Berkin; Chatnuntawech, Itthi; Setsompop, Kawin; Cauley, Stephen F; Yendiki, Anastasia; Wald, Lawrence L; Adalsteinsson, Elfar

    2013-11-01

    Diffusion spectrum imaging reveals detailed local diffusion properties at the expense of substantially long imaging times. It is possible to accelerate acquisition by undersampling in q-space, followed by image reconstruction that exploits prior knowledge on the diffusion probability density functions (pdfs). Previously proposed methods impose this prior in the form of sparsity under wavelet and total variation transforms, or under adaptive dictionaries that are trained on example datasets to maximize the sparsity of the representation. These compressed sensing (CS) methods require full-brain processing times on the order of hours using MATLAB running on a workstation. This work presents two dictionary-based reconstruction techniques that use analytical solutions, and are two orders of magnitude faster than the previously proposed dictionary-based CS approach. The first method generates a dictionary from the training data using principal component analysis (PCA), and performs the reconstruction in the PCA space. The second proposed method applies reconstruction using pseudoinverse with Tikhonov regularization with respect to a dictionary. This dictionary can either be obtained using the K-SVD algorithm, or it can simply be the training dataset of pdfs without any training. All of the proposed methods achieve reconstruction times on the order of seconds per imaging slice, and have reconstruction quality comparable to that of dictionary-based CS algorithm.
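
    The second proposed method (pseudoinverse with Tikhonov regularization with respect to a dictionary) reduces to a single closed-form solve once the undersampling operator and the dictionary are fixed. A minimal sketch is below; the random undersampling operator, the "dictionary" of training pdfs and the regularization weight are toy stand-ins, not the DSI acquisition model.

      import numpy as np

      def tikhonov_dictionary_reconstruction(y, A, D, lam=1e-2):
          """Reconstruct a signal as p = D c with Tikhonov-regularized coefficients.

          y : undersampled measurements
          A : measurement operator, shape (m, n)
          D : dictionary whose columns are training pdfs, shape (n, k)
          Solves c = argmin ||A D c - y||^2 + lam*||c||^2 in closed form, so the
          reconstruction is a single matrix solve rather than an iterative CS fit.
          """
          M = A @ D                                   # measurement applied to dictionary atoms
          c = np.linalg.solve(M.T @ M + lam * np.eye(D.shape[1]), M.T @ y)
          return D @ c

      # toy usage with a random undersampling of an identity "q-space" transform
      rng = np.random.default_rng(1)
      n, k, m = 128, 40, 48
      D = np.abs(rng.standard_normal((n, k)))         # hypothetical training pdfs
      p_true = D @ np.abs(rng.standard_normal(k))     # signal within the dictionary span
      A = np.eye(n)[rng.choice(n, size=m, replace=False)]   # keep m of n samples
      y = A @ p_true
      p_hat = tikhonov_dictionary_reconstruction(y, A, D, lam=1e-3)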

  5. Neutron Tomography of a Fuel Cell: Statistical Learning Implementation of a Penalized Likelihood Method

    NASA Astrophysics Data System (ADS)

    Coakley, Kevin J.; Vecchia, Dominic F.; Hussey, Daniel S.; Jacobson, David L.

    2013-10-01

    At the NIST Neutron Imaging Facility, we collect neutron projection data for both the dry and wet states of a Proton-Exchange-Membrane (PEM) fuel cell. Transmitted thermal neutrons captured in a scintillator doped with lithium-6 produce scintillation light that is detected by an amorphous silicon detector. Based on joint analysis of the dry and wet state projection data, we reconstruct a residual neutron attenuation image using a Penalized Likelihood method with an edge-preserving Huber penalty function whose two parameters control how well jumps in the reconstruction are preserved and how well noisy fluctuations are smoothed out. The choice of these parameters greatly influences the resulting reconstruction. We present a data-driven method that objectively selects these parameters, and study its performance for both simulated and experimental data. Before reconstruction, we transform the projection data so that the variance-to-mean ratio is approximately one. For both simulated and measured projection data, the Penalized Likelihood method reconstruction is visually sharper than a reconstruction yielded by a standard Filtered Back Projection method. In an idealized simulation experiment, we demonstrate that the cross-validation procedure selects regularization parameters that yield a reconstruction that is nearly optimal according to a root-mean-square prediction error criterion.
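
    The role of the two Huber-penalty parameters can be illustrated with a 1D penalized least-squares denoiser: the weight beta controls how strongly neighboring differences are penalized and the threshold delta sets where the penalty switches from quadratic (smoothing) to linear (jump-preserving) behaviour. This gradient-descent sketch only illustrates the penalty; it is not the tomographic Penalized Likelihood reconstruction or the data-driven parameter selection described above.

      import numpy as np

      def huber(t, delta):
          """Huber penalty: quadratic near zero, linear in the tails."""
          a = np.abs(t)
          return np.where(a <= delta, 0.5 * t ** 2, delta * a - 0.5 * delta ** 2)

      def huber_grad(t, delta):
          """Derivative of the Huber penalty (a clipped identity)."""
          return np.clip(t, -delta, delta)

      def huber_denoise_1d(y, beta=1.0, delta=0.1, n_iter=500, step=0.1):
          """Minimize 0.5*||x - y||^2 + beta * sum_i huber(x[i+1] - x[i], delta)
          by plain gradient descent on a 1D signal."""
          x = y.copy()
          for _ in range(n_iter):
              d = np.diff(x)
              g = x - y
              g[:-1] += beta * (-huber_grad(d, delta))   # d objective / d x_i (left end of each pair)
              g[1:] += beta * huber_grad(d, delta)       # d objective / d x_i (right end of each pair)
              x -= step * g
          return x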

  6. Evaluation of a Fully 3-D Bpf Method for Small Animal PET Images on Mimd Architectures

    NASA Astrophysics Data System (ADS)

    Bevilacqua, A.

    Positron Emission Tomography (PET) images can be reconstructed using Fourier transform methods. This paper describes the performance of a fully 3-D Backprojection-Then-Filter (BPF) algorithm on the Cray T3E machine and on a cluster of workstations. PET reconstruction of small animals is a class of problems characterized by poor counting statistics. The low-count nature of these studies necessitates 3-D reconstruction in order to improve the sensitivity of the PET system: by including axially oblique Lines Of Response (LORs), 3-D acquisition and reconstruction significantly improve the system's sensitivity. The BPF method is widely used in clinical studies because of its speed and ease of implementation. Moreover, the BPF method is suitable for on-line 3-D reconstruction as it does not need any sinogram or rearranged data. In order to investigate the possibility of on-line processing, we reconstruct a phantom using the data stored in the list-mode format by the data acquisition system. We show how the intrinsically parallel nature of the BPF method makes it suitable for on-line reconstruction on a MIMD system such as the Cray T3E. Lastly, we analyze the performance of this algorithm on a cluster of workstations.

  7. Fast Dictionary-Based Reconstruction for Diffusion Spectrum Imaging

    PubMed Central

    Bilgic, Berkin; Chatnuntawech, Itthi; Setsompop, Kawin; Cauley, Stephen F.; Yendiki, Anastasia; Wald, Lawrence L.; Adalsteinsson, Elfar

    2015-01-01

    Diffusion Spectrum Imaging (DSI) reveals detailed local diffusion properties at the expense of substantially long imaging times. It is possible to accelerate acquisition by undersampling in q-space, followed by image reconstruction that exploits prior knowledge on the diffusion probability density functions (pdfs). Previously proposed methods impose this prior in the form of sparsity under wavelet and total variation (TV) transforms, or under adaptive dictionaries that are trained on example datasets to maximize the sparsity of the representation. These compressed sensing (CS) methods require full-brain processing times on the order of hours using Matlab running on a workstation. This work presents two dictionary-based reconstruction techniques that use analytical solutions, and are two orders of magnitude faster than the previously proposed dictionary-based CS approach. The first method generates a dictionary from the training data using Principal Component Analysis (PCA), and performs the reconstruction in the PCA space. The second proposed method applies reconstruction using pseudoinverse with Tikhonov regularization with respect to a dictionary. This dictionary can either be obtained using the K-SVD algorithm, or it can simply be the training dataset of pdfs without any training. All of the proposed methods achieve reconstruction times on the order of seconds per imaging slice, and have reconstruction quality comparable to that of dictionary-based CS algorithm. PMID:23846466

  8. Estimation of 3D reconstruction errors in a stereo-vision system

    NASA Astrophysics Data System (ADS)

    Belhaoua, A.; Kohler, S.; Hirsch, E.

    2009-06-01

    The paper presents an approach for error estimation for the various steps of an automated 3D vision-based procedure for reconstructing manufactured workpieces. The process is based on a priori planning of the task and built around a cognitive intelligent sensory system using so-called Situation Graph Trees (SGT) as a planning tool. Such an automated quality control system requires the coordination of a set of complex processes sequentially performing data acquisition, its quantitative evaluation and the comparison with a reference model (e.g., a CAD object model) in order to evaluate the object quantitatively. To ensure efficient quality control, the aim is to be able to state whether reconstruction results fulfill tolerance rules or not. Thus, the goal is to evaluate the error independently for each step of the stereo-vision based 3D reconstruction (e.g., for calibration, contour segmentation, matching and reconstruction) and then to estimate the error for the whole system. In this contribution, we particularly analyze the segmentation error due to localization errors for extracted edge points supposed to belong to lines and curves composing the outline of the workpiece under evaluation. The fitting parameters describing these geometric features are used as a quality measure to determine confidence intervals and finally to estimate the segmentation errors. These errors are then propagated through the whole reconstruction procedure, enabling evaluation of their effect on the final 3D reconstruction result, specifically on position uncertainties. Lastly, analysis of these error estimates enables assessment of the quality of the 3D reconstruction, as illustrated by the experimental results shown.

  9. Microsurgery within reconstructive surgery of extremities.

    PubMed

    Pheradze, I; Pheradze, T; Tsilosani, G; Goginashvili, Z; Mosiava, T

    2006-05-01

    Reconstructive surgery of the extremities is an object of special attention for surgeons. Vessel and nerve damage, deficiency of soft tissue and bone, and associated infection can result in a complete loss of extremity function and raise the question of amputation. The goal of the study was to assess the role of microsurgery in reconstructive surgery of the limbs. We operated on 294 patients with various diseases and injuries of the extremities: pathology of nerves, vessels, and tissue loss. An original method of treatment of large simultaneous functional defects of the limbs has been used. Good functional and aesthetic results were obtained. The results of reconstructive operations on the extremities might be improved by the use of microsurgical methods. Microsurgery is deemed the method of choice for reconstructive surgery of the extremities, as the outcomes achieved with microsurgical techniques significantly surpass those obtained with routine surgical methods.

  10. Tomographic reconstruction of tracer gas concentration profiles in a room with the use of a single OP-FTIR and two iterative algorithms: ART and PWLS.

    PubMed

    Park, D Y; Fessler, J A; Yost, M G; Levine, S P

    2000-03-01

    Computed tomographic (CT) reconstructions of air contaminant concentration fields were conducted in a room-sized chamber employing a single open-path Fourier transform infrared (OP-FTIR) instrument and a combination of 52 flat mirrors and 4 retroreflectors. A total of 56 beam path data were repeatedly collected for around 1 hr while maintaining a stable concentration gradient. The plane of the room was divided into 195 pixels (13 x 15) for reconstruction. The algebraic reconstruction technique (ART) failed to reconstruct the original concentration gradient patterns for most cases. These poor results were caused by the "highly underdetermined condition" in which the number of unknown values (156 pixels) exceeds that of known data (56 path integral concentrations) in the experimental setting. A new CT algorithm, called the penalized weighted least-squares (PWLS), was applied to remedy this condition. The peak locations were correctly positioned in the PWLS-CT reconstructions. A notable feature of the PWLS-CT reconstructions was a significant reduction of highly irregular noise peaks found in the ART-CT reconstructions. However, the peak heights were slightly reduced in the PWLS-CT reconstructions due to the nature of the PWLS algorithm. PWLS could converge on the original concentration gradient even when a fairly high error was embedded into some experimentally measured path integral concentrations. It was also found in the simulation tests that the PWLS algorithm was very robust with respect to random errors in the path integral concentrations. This beam geometry and the use of a single OP-FTIR scanning system, in combination with the PWLS algorithm, is a system applicable to both environmental and industrial settings.
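
    The ART step referred to above is the classical additive (Kaczmarz) update: each path-integral measurement in turn adjusts the pixels along its beam in proportion to the current residual. A minimal sketch is given below with a generic path-length matrix standing in for the 56-beam chamber geometry; the relaxation factor and the non-negativity clamp are common practical choices, not details taken from the paper.

      import numpy as np

      def art_reconstruct(A, y, n_sweeps=50, relax=0.5):
          """Additive ART (Kaczmarz) for path-integral concentration data.

          A : (n_paths, n_pixels) matrix of path lengths through each pixel
          y : (n_paths,) measured path-integral concentrations
          Each sweep projects the current image onto the hyperplane of every
          beam equation in turn; relax < 1 damps the update for noisy data.
          """
          x = np.zeros(A.shape[1])
          row_norms = np.sum(A * A, axis=1)
          for _ in range(n_sweeps):
              for i in range(A.shape[0]):
                  if row_norms[i] == 0:
                      continue
                  r = y[i] - A[i] @ x                  # residual for beam i
                  x += relax * r / row_norms[i] * A[i]
          return np.maximum(x, 0.0)                    # concentrations cannot be negative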

  11. Tomographic Reconstruction of Tracer Gas Concentration Profiles in a Room with the Use of a Single OP-FTIR and Two Iterative Algorithms: ART and PWLS.

    PubMed

    Park, Doo Y; Fessler, Jeffrey A; Yost, Michael G; Levine, Steven P

    2000-03-01

    Computed tomographic (CT) reconstructions of air contaminant concentration fields were conducted in a room-sized chamber employing a single open-path Fourier transform infrared (OP-FTIR) instrument and a combination of 52 flat mirrors and 4 retroreflectors. A total of 56 beam path data were repeatedly collected for around 1 hr while maintaining a stable concentration gradient. The plane of the room was divided into 195 pixels (13 × 15) for reconstruction. The algebraic reconstruction technique (ART) failed to reconstruct the original concentration gradient patterns for most cases. These poor results were caused by the "highly underdetermined condition" in which the number of unknown values (156 pixels) exceeds that of known data (56 path integral concentrations) in the experimental setting. A new CT algorithm, called the penalized weighted least-squares (PWLS), was applied to remedy this condition. The peak locations were correctly positioned in the PWLS-CT reconstructions. A notable feature of the PWLS-CT reconstructions was a significant reduction of highly irregular noise peaks found in the ART-CT reconstructions. However, the peak heights were slightly reduced in the PWLS-CT reconstructions due to the nature of the PWLS algorithm. PWLS could converge on the original concentration gradient even when a fairly high error was embedded into some experimentally measured path integral concentrations. It was also found in the simulation tests that the PWLS algorithm was very robust with respect to random errors in the path integral concentrations. This beam geometry and the use of a single OP-FTIR scanning system, in combination with the PWLS algorithm, is a system applicable to both environmental and industrial settings.

  12. Method for positron emission mammography image reconstruction

    DOEpatents

    Smith, Mark Frederick

    2004-10-12

    An image reconstruction method comprising accepting coincidence data from either a data file or in real time from a pair of detector heads, culling event data that is outside a desired energy range, optionally saving the desired data for each detector position or for each pair of detector pixels on the two detector heads, and then reconstructing the image either by backprojection image reconstruction or by iterative image reconstruction. In the backprojection image reconstruction mode, rays are traced between centers of lines of response (LORs), counts are then either allocated by nearest pixel interpolation or allocated by an overlap method, and then corrected for geometric effects and attenuation and the data file updated. If the iterative image reconstruction option is selected, one implementation is to compute a grid Siddon ray tracing, and to perform maximum likelihood expectation maximization (MLEM) computed by either: a) tracing parallel rays between subpixels on opposite detector heads; or b) tracing rays between randomized endpoint locations on opposite detector heads.

  13. A singular K-space model for fast reconstruction of magnetic resonance images from undersampled data.

    PubMed

    Luo, Jianhua; Mou, Zhiying; Qin, Binjie; Li, Wanqing; Ogunbona, Philip; Robini, Marc C; Zhu, Yuemin

    2018-07-01

    Reconstructing magnetic resonance images from undersampled k-space data is a challenging problem. This paper introduces a novel method of image reconstruction from undersampled k-space data based on the concept of singularizing operators and a novel singular k-space model. Exploring the sparsity of an image in k-space, the singular k-space model (SKM) is proposed in terms of the k-space functions of a singularizing operator. The singularizing operator is constructed by combining basic difference operators. An algorithm is developed to reliably estimate the model parameters from undersampled k-space data. The estimated parameters are then used to recover the missing k-space data through the model, subsequently achieving high-quality reconstruction of the image using the inverse Fourier transform. Experiments on physical phantom and real brain MR images have shown that the proposed SKM method consistently outperforms the popular total variation (TV) and the classical zero-filling (ZF) methods regardless of the undersampling rates, the noise levels, and the image structures. For the same objective quality of the reconstructed images, the proposed method requires much less k-space data than the TV method. The SKM method is an effective method for fast MRI reconstruction from undersampled k-space data. Graphical abstract: two real images and their sparsified counterparts produced by the singularizing operator.

  14. 3D Reconstruction from Multi-View Medical X-Ray Images - Review and Evaluation of Existing Methods

    NASA Astrophysics Data System (ADS)

    Hosseinian, S.; Arefi, H.

    2015-12-01

    The 3D concept is extremely important in clinical studies of the human body. Accurate 3D models of bony structures are currently required in clinical routine for diagnosis, patient follow-up, surgical planning, computer-assisted surgery and biomechanical applications. However, conventional 3D medical imaging techniques such as computed tomography (CT) and magnetic resonance imaging (MRI) have serious limitations, such as their use in non-weight-bearing positions, costs, and high radiation dose (for CT). Therefore, 3D reconstruction methods from biplanar X-ray images have been taken into consideration as reliable alternative methods for achieving accurate 3D models with low radiation dose in weight-bearing positions. Different methods have been proposed for 3D reconstruction from X-ray images using photogrammetry, and these should be assessed. In this paper, after demonstrating the principles of 3D reconstruction from X-ray images, different existing methods of 3D reconstruction of bony structures from radiographs are classified and evaluated with various metrics, and their advantages and disadvantages are discussed. Finally, the presented methods are compared with respect to several metrics such as accuracy, reconstruction time and applications. Each method has several advantages and disadvantages which should be considered for a specific application.

  15. Cerenkov luminescence tomography based on preconditioning orthogonal matching pursuit

    NASA Astrophysics Data System (ADS)

    Liu, Haixiao; Hu, Zhenhua; Wang, Kun; Tian, Jie; Yang, Xin

    2015-03-01

    Cerenkov luminescence imaging (CLI) is a novel optical imaging method that has been shown to be a potential substitute for traditional radionuclide imaging such as positron emission tomography (PET) and single-photon emission computed tomography (SPECT). This imaging method inherits the high sensitivity of nuclear medicine and the low cost of optical molecular imaging. To obtain the depth information of the radioactive isotope, Cerenkov luminescence tomography (CLT) is established and the 3D distribution of the isotope is reconstructed. However, because of strong absorption and scatter, the reconstruction of the CLT sources is always converted into an ill-posed linear system which is hard to solve. In this work, the sparse nature of the light source was taken into account and the preconditioning orthogonal matching pursuit (POMP) method was established to effectively reduce the ill-posedness and obtain better reconstruction accuracy. To demonstrate the accuracy and speed of this algorithm, a heterogeneous numerical phantom experiment and an in vivo mouse experiment were conducted. Both the simulation results and the mouse experiment showed that our reconstruction method can provide more accurate reconstruction results than the traditional Tikhonov regularization method and the ordinary orthogonal matching pursuit (OMP) method. Our reconstruction method will provide technical support for biological applications of Cerenkov luminescence.
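
    For reference, the ordinary OMP baseline that POMP builds on greedily selects the source voxel (column) most correlated with the residual and then refits all selected columns by least squares. The sketch below shows that baseline only; the preconditioning step that gives POMP its name, and the CLT system matrix itself, are not included.

      import numpy as np

      def omp(A, y, sparsity):
          """Orthogonal matching pursuit for y ≈ A x with a k-sparse x.

          Greedily picks the column most correlated with the residual, then
          re-fits all selected columns by least squares. This is the ordinary
          OMP baseline; the paper's POMP additionally preconditions A and y
          before running the same greedy loop.
          """
          residual = y.copy()
          support = []
          x = np.zeros(A.shape[1])
          for _ in range(sparsity):
              j = int(np.argmax(np.abs(A.T @ residual)))
              if j not in support:
                  support.append(j)
              coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
              residual = y - A[:, support] @ coef
          x[support] = coef
          return x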

  16. Compensation of missing wedge effects with sequential statistical reconstruction in electron tomography.

    PubMed

    Paavolainen, Lassi; Acar, Erman; Tuna, Uygar; Peltonen, Sari; Moriya, Toshio; Soonsawad, Pan; Marjomäki, Varpu; Cheng, R Holland; Ruotsalainen, Ulla

    2014-01-01

    Electron tomography (ET) of biological samples is used to study the organization and the structure of the whole cell and subcellular complexes in great detail. However, projections cannot be acquired over full tilt angle range with biological samples in electron microscopy. ET image reconstruction can be considered an ill-posed problem because of this missing information. This results in artifacts, seen as the loss of three-dimensional (3D) resolution in the reconstructed images. The goal of this study was to achieve isotropic resolution with a statistical reconstruction method, sequential maximum a posteriori expectation maximization (sMAP-EM), using no prior morphological knowledge about the specimen. The missing wedge effects on sMAP-EM were examined with a synthetic cell phantom to assess the effects of noise. An experimental dataset of a multivesicular body was evaluated with a number of gold particles. An ellipsoid fitting based method was developed to realize the quantitative measures elongation and contrast in an automated, objective, and reliable way. The method statistically evaluates the sub-volumes containing gold particles randomly located in various parts of the whole volume, thus giving information about the robustness of the volume reconstruction. The quantitative results were also compared with reconstructions made with widely-used weighted backprojection and simultaneous iterative reconstruction technique methods. The results showed that the proposed sMAP-EM method significantly suppresses the effects of the missing information producing isotropic resolution. Furthermore, this method improves the contrast ratio, enhancing the applicability of further automatic and semi-automatic analysis. These improvements in ET reconstruction by sMAP-EM enable analysis of subcellular structures with higher three-dimensional resolution and contrast than conventional methods.

  17. Magnetic resonance electrical impedance tomography (MREIT): simulation study of J-substitution algorithm.

    PubMed

    Kwon, Ohin; Woo, Eung Je; Yoon, Jeong-Rock; Seo, Jin Keun

    2002-02-01

    We developed a new image reconstruction algorithm for magnetic resonance electrical impedance tomography (MREIT). MREIT is a new EIT imaging technique integrated into a magnetic resonance imaging (MRI) system. Based on the assumption that the internal current density distribution is obtained using an MRI technique, the new image reconstruction algorithm, called the J-substitution algorithm, produces cross-sectional static images of resistivity (or conductivity) distributions. Computer simulations show that the spatial resolution of the resistivity image is comparable to that of MRI. MREIT provides accurate high-resolution cross-sectional resistivity images, making resistivity values of various human tissues available for many biomedical applications.

  18. Direct Detection of the Helical Magnetic Field Geometry from 3D Reconstruction of Prominence Knot Trajectories

    NASA Astrophysics Data System (ADS)

    Zapiór, Maciej; Martínez-Gómez, David

    2016-02-01

    Based on the data collected by the Vacuum Tower Telescope located in the Teide Observatory in the Canary Islands, we analyzed the three-dimensional (3D) motion of so-called knots in a solar prominence of 2014 June 9. Trajectories of seven knots were reconstructed, giving information on the 3D geometry of the magnetic field. Helical motion was detected. From the equipartition principle, we estimated the lower limit of the magnetic field in the prominence to be ≈1-3 G, and from Ampère's law the lower limit of the electric current to be ≈1.2 × 10⁹ A.
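
    For reference, one standard reading of the equipartition estimate equates the magnetic energy density with the kinetic energy density of the moving knots, giving a lower bound on B from the measured knot speed v and an assumed mass density rho, while Ampère's law relates the enclosed current to a line integral of the field. The symbols below are generic; the specific densities and speeds used by the authors are not given in the abstract.

      \frac{B^2}{2\mu_0} \;\gtrsim\; \frac{1}{2}\rho v^2
      \quad\Longrightarrow\quad
      B \;\gtrsim\; v\sqrt{\mu_0\,\rho},
      \qquad\qquad
      I \;=\; \frac{1}{\mu_0}\oint_C \mathbf{B}\cdot d\boldsymbol{\ell}.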

  19. Faultfinder: A diagnostic expert system with graceful degradation for onboard aircraft applications

    NASA Technical Reports Server (NTRS)

    Abbott, Kathy H.; Schutte, Paul C.; Palmer, Michael T.; Ricks, Wendell R.

    1988-01-01

    A research effort was conducted to explore the application of artificial intelligence technology to automation of fault monitoring and diagnosis as an aid to the flight crew. Human diagnostic reasoning was analyzed and actual accident and incident cases were reconstructed. Based on this analysis and reconstruction, diagnostic concepts were conceived and implemented for an aircraft's engine and hydraulic subsystems. These concepts are embedded within a multistage approach to diagnosis that reasons about time-based, causal, and qualitative information, and enables a certain amount of graceful degradation. The diagnostic concepts are implemented in a computer program called Faultfinder that serves as a research prototype.

  20. 'S.W.' and C.G. Jung: mediumship, psychiatry and serial exemplarity.

    PubMed

    Shamdasani, Sonu

    2015-09-01

    On the basis of unpublished materials, this essay reconstructs Jung's seances with his cousin, Helene Preiswerk, which formed the basis of his 1902 medical dissertation, The Psychology and Pathology of so-called Occult Phenomena. It separates out Jung's contemporaneous approach to the mediumistic phenomena she exhibited from his subsequent sceptical psychological reworking of the case. It traces the reception of the work and its significance for his own self-experimentation from 1913 onwards. Finally, it reconstructs the manner in which Jung continually returned to his first model and reframed it as an exemplar of his developing theories. © The Author(s) 2015.
