Sample records for statistical reconstruction method

  1. Non-homogeneous updates for the iterative coordinate descent algorithm

    NASA Astrophysics Data System (ADS)

    Yu, Zhou; Thibault, Jean-Baptiste; Bouman, Charles A.; Sauer, Ken D.; Hsieh, Jiang

    2007-02-01

    Statistical reconstruction methods show great promise for improving resolution, and reducing noise and artifacts in helical X-ray CT. In fact, statistical reconstruction seems to be particularly valuable in maintaining reconstructed image quality when the dosage is low and the noise is therefore high. However, high computational cost and long reconstruction times remain a barrier to the use of statistical reconstruction in practical applications. Among the various iterative methods that have been studied for statistical reconstruction, iterative coordinate descent (ICD) has been found to have relatively low overall computational requirements due to its fast convergence. This paper presents a novel method for further speeding the convergence of the ICD algorithm, and therefore reducing the overall reconstruction time for statistical reconstruction. The method, which we call non-homogeneous iterative coordinate descent (NH-ICD), uses spatially non-homogeneous updates to speed convergence by focusing computation where it is most needed. Experimental results with real data indicate that the method speeds reconstruction by roughly a factor of two for typical 3D multi-slice geometries.
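
The core idea, a coordinate-descent loop that spends extra updates on the coordinates that changed most recently, can be sketched for a generic least-squares problem. This is a simplified illustration, not the authors' implementation; the function name, the homogeneous-sweep period, and the selection fraction are all assumptions.

```python
import numpy as np

def nh_icd(A, y, n_sweeps=200, frac=0.3):
    """Toy non-homogeneous ICD for min_x ||A x - y||^2.

    Each coordinate update solves its 1-D subproblem exactly; most sweeps
    revisit only the coordinates with the largest recent changes, which is
    the spirit of NH-ICD's spatially non-homogeneous updates.
    """
    m, n = A.shape
    x = np.zeros(n)
    r = y - A @ x                      # running residual
    col_sq = (A ** 2).sum(axis=0)      # per-coordinate curvature
    last_change = np.full(n, np.inf)   # every coordinate starts "urgent"
    for sweep in range(n_sweeps):
        if sweep % 5 == 0:
            order = np.arange(n)       # periodic homogeneous sweep
        else:                          # focus where updates were largest
            k = max(1, int(frac * n))
            order = np.argsort(-last_change)[:k]
        for j in order:
            delta = (A[:, j] @ r) / col_sq[j]  # exact 1-D minimizer
            x[j] += delta
            r -= delta * A[:, j]
            last_change[j] = abs(delta)
    return x
```

On a consistent system the loop converges to the least-squares solution; the non-homogeneous passes simply reorder the work.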

  2. A heuristic statistical stopping rule for iterative reconstruction in emission tomography.

    PubMed

    Ben Bouallègue, F; Crouzet, J F; Mariano-Goulart, D

    2013-01-01

    We propose a statistical stopping criterion for iterative reconstruction in emission tomography based on a heuristic statistical description of the reconstruction process. The method was assessed for MLEM reconstruction. Based on Monte-Carlo numerical simulations and using a perfectly modeled system matrix, our method was compared with classical iterative reconstruction followed by low-pass filtering in terms of Euclidean distance to the exact object, noise, and resolution. The stopping criterion was then evaluated with realistic PET data of a Hoffman brain phantom produced using the GATE platform for different count levels. The numerical experiments showed that compared with the classical method, our technique yielded a significant improvement of the noise-resolution tradeoff for a wide range of counting statistics compatible with routine clinical settings. When working with realistic data, the stopping rule allowed a qualitatively and quantitatively efficient determination of the optimal image. Our method appears to give a reliable estimation of the optimal stopping point for iterative reconstruction. It should thus be of practical interest, as it produces images with similar or better quality than classical post-filtered iterative reconstruction with a controlled computation time.
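
The MLEM iteration being stopped can be sketched as below. The stall criterion used here is a generic log-likelihood test for illustration only, not the heuristic statistical rule the paper proposes.

```python
import numpy as np

def mlem_with_stop(A, y, n_max=2000, tol=1e-10):
    """MLEM for Poisson data y ~ Poisson(A x), stopped when the per-
    iteration gain in log-likelihood stalls. The stall test is a generic
    stand-in, not the paper's statistical stopping rule."""
    x = np.ones(A.shape[1])
    sens = A.sum(axis=0)                   # sensitivity image A^T 1
    last_ll = -np.inf
    for it in range(n_max):
        x *= (A.T @ (y / (A @ x))) / sens  # multiplicative EM update
        proj = A @ x
        ll = float(np.sum(y * np.log(proj) - proj))
        if ll - last_ll < tol:
            break                          # likelihood gain has stalled
        last_ll = ll
    return x, it
```

The point of a principled stopping rule is to halt before noise amplification; with noiseless data the iteration simply converges to a solution of A x = y.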

  3. Performance comparison between total variation (TV)-based compressed sensing and statistical iterative reconstruction algorithms.

    PubMed

    Tang, Jie; Nett, Brian E; Chen, Guang-Hong

    2009-10-07

    Of all available reconstruction methods, statistical iterative reconstruction algorithms appear particularly promising since they enable accurate physical noise modeling. The newly developed compressive sampling/compressed sensing (CS) algorithm has shown the potential to accurately reconstruct images from highly undersampled data. The CS algorithm can be implemented in the statistical reconstruction framework as well. In this study, we compared the performance of two standard statistical reconstruction algorithms (penalized weighted least squares and q-GGMRF) to the CS algorithm. In assessing the image quality using these iterative reconstructions, it is critical to utilize realistic background anatomy as the reconstruction results are object dependent. A cadaver head was scanned on a Varian Trilogy system at different dose levels. Several figures of merit including the relative root mean square error and a quality factor which accounts for the noise performance and the spatial resolution were introduced to objectively evaluate reconstruction performance. A comparison is presented between the three algorithms at a constant undersampling factor and at several dose levels. To facilitate this comparison, the original CS method was formulated in the framework of the statistical image reconstruction algorithms. Important conclusions from our studies are that (1) for realistic neuro-anatomy, over 100 projections are required to avoid streak artifacts in the reconstructed images even with CS reconstruction, (2) regardless of the algorithm employed, it is beneficial to distribute the total dose to more views as long as each view remains quantum noise limited and (3) the total variation-based CS method is not appropriate for very low dose levels because while it can mitigate streaking artifacts, the images exhibit patchy behavior, which is potentially harmful for medical diagnosis.
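
A TV penalty inside a statistical (weighted least-squares) objective can be sketched in one dimension. This is a toy stand-in under stated assumptions: 1-D signal, smoothed absolute value, plain gradient descent; it is not the paper's implementation.

```python
import numpy as np

def pwls_tv(A, y, w, beta=0.1, eps=1e-2, n_iters=3000):
    """Gradient descent on a penalized weighted least-squares objective
    with a smoothed 1-D total-variation penalty:
        F(x) = 0.5 * sum_i w_i ([Ax]_i - y_i)^2
               + beta * sum_j sqrt((x_{j+1} - x_j)^2 + eps^2)
    """
    x = np.zeros(A.shape[1])
    # conservative step size from a Lipschitz bound on the gradient
    step = 1.0 / (np.linalg.norm(A, 2) ** 2 * w.max() + 4.0 * beta / eps)
    for _ in range(n_iters):
        grad = A.T @ (w * (A @ x - y))
        d = np.diff(x)
        g = d / np.sqrt(d ** 2 + eps ** 2)  # derivative of smoothed |.|
        grad[:-1] -= beta * g               # d/dx_j of term j
        grad[1:] += beta * g                # d/dx_{j+1} of term j
        x -= step * grad
    return x
```

Applied to a noisy piecewise-constant signal, the TV term suppresses oscillations between the jumps, which is also where the "patchy" artifacts the paper warns about originate when the penalty dominates.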

  4. Full statistical mode reconstruction of a light field via a photon-number-resolved measurement

    NASA Astrophysics Data System (ADS)

    Burenkov, I. A.; Sharma, A. K.; Gerrits, T.; Harder, G.; Bartley, T. J.; Silberhorn, C.; Goldschmidt, E. A.; Polyakov, S. V.

    2017-05-01

    We present a method to reconstruct the complete statistical mode structure and optical losses of multimode conjugated optical fields using an experimentally measured joint photon-number probability distribution. We demonstrate that this method evaluates classical and nonclassical properties using a single measurement technique and is well suited for quantum mesoscopic state characterization. We obtain a nearly perfect reconstruction of a field comprised of up to ten modes based on a minimal set of assumptions. To show the utility of this method, we use it to reconstruct the mode structure of an unknown bright parametric down-conversion source.
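
One ingredient of such reconstructions can be illustrated simply: for M equally populated thermal modes the normalized second-order correlation is g(2) = 1 + 1/M, and it is unchanged by optical loss, so an effective mode number can be read directly from photon-number statistics. The simulation below is a toy model (function names, parameters, and the equal-population assumption are illustrative), not the authors' full joint-distribution reconstruction.

```python
import random

def sample_counts(n_modes, mean_per_mode, eta, n_samples, seed=1):
    """Simulate photon counts from independent thermal modes, each
    detected with efficiency eta (geometric photon number + binomial loss)."""
    rng = random.Random(seed)
    p = 1.0 / (1.0 + mean_per_mode)        # geometric success probability
    counts = []
    for _ in range(n_samples):
        n = 0
        for _ in range(n_modes):
            k = 0
            while rng.random() > p:        # thermal (geometric) sampling
                k += 1
            n += sum(1 for _ in range(k) if rng.random() < eta)
        counts.append(n)
    return counts

def estimate_modes(counts):
    """Effective mode number from g2 = 1 + 1/M, which holds for M equally
    populated thermal modes and is independent of loss."""
    mean = sum(counts) / len(counts)
    mean2 = sum(c * c for c in counts) / len(counts)
    g2 = (mean2 - mean) / mean ** 2
    return 1.0 / (g2 - 1.0)
```
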

  5. Statistical reconstruction for cosmic ray muon tomography.

    PubMed

    Schultz, Larry J; Blanpied, Gary S; Borozdin, Konstantin N; Fraser, Andrew M; Hengartner, Nicolas W; Klimenko, Alexei V; Morris, Christopher L; Orum, Chris; Sossong, Michael J

    2007-08-01

    Highly penetrating cosmic ray muons constantly shower the earth at a rate of about 1 muon per cm2 per minute. We have developed a technique which exploits the multiple Coulomb scattering of these particles to perform nondestructive inspection without the use of artificial radiation. In prior work [1]-[3], we have described heuristic methods for processing muon data to create reconstructed images. In this paper, we present a maximum likelihood/expectation maximization tomographic reconstruction algorithm designed for the technique. This algorithm borrows much from techniques used in medical imaging, particularly emission tomography, but the statistics of muon scattering dictates differences. We describe the statistical model for multiple scattering, derive the reconstruction algorithm, and present simulated examples. We also propose methods to improve the robustness of the algorithm to experimental errors and events departing from the statistical model.
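
The statistical model's one-voxel special case has a closed-form maximum-likelihood estimate, which gives the flavor of the approach: under the Gaussian multiple-scattering model the scattering angle satisfies theta ~ N(0, lambda * L) for scattering density lambda and path length L. This is an illustrative toy; the paper's ML/EM algorithm estimates a full voxelized density map.

```python
import math
import random

def ml_scattering_density(thetas, path_len):
    """ML estimate of the scattering density lambda for a single voxel
    under theta ~ N(0, lambda * L): the mean squared scattering angle
    divided by the traversed path length."""
    return sum(t * t for t in thetas) / (len(thetas) * path_len)
```
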

  6. Robust statistical reconstruction for charged particle tomography

    DOEpatents

    Schultz, Larry Joe; Klimenko, Alexei Vasilievich; Fraser, Andrew Mcleod; Morris, Christopher; Orum, John Christopher; Borozdin, Konstantin N; Sossong, Michael James; Hengartner, Nicolas W

    2013-10-08

    Systems and methods for charged particle detection including statistical reconstruction of object volume scattering density profiles from charged particle tomographic data, to determine the probability distribution of charged particle scattering using a statistical multiple scattering model and to determine a substantially maximum likelihood estimate of object volume scattering density using an expectation maximization (ML/EM) algorithm to reconstruct the object volume scattering density. The presence and/or type of object occupying the volume of interest can be identified from the reconstructed volume scattering density profile. The charged particle tomographic data can be cosmic ray muon tomographic data from a muon tracker for scanning packages, containers, vehicles or cargo. The method can be implemented using a computer program which is executable on a computer.

  7. MO-DE-207A-01: Impact of Statistical Weights On Detection of Low-Contrast Details in Model-Based Iterative CT Reconstruction

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Noo, F; Guo, Z

    2016-06-15

    Purpose: Penalized weighted least-squares reconstruction has become an important research topic in CT, to reduce dose without affecting image quality. Two components impact image quality in this reconstruction: the statistical weights and the use of an edge-preserving penalty term. We are interested in assessing the influence of statistical weights on their own, without the edge-preserving feature. Methods: The influence of statistical weights on image quality was assessed in terms of low-contrast detail detection using LROC analysis. The task amounted to detecting and localizing a 6-mm lesion with random contrast inside the FORBILD head phantom. A two-alternative forced-choice experiment was used with two human observers performing the task. Reconstructions without and with statistical weights were compared, both using the same quadratic penalty term. The beam energy was set to 30 keV to amplify spatial differences in attenuation and thereby the role of statistical weights. A fan-beam data acquisition geometry was used. Results: Visual inspection of images clearly showed a difference in noise between the two reconstruction methods. As expected, the reconstruction without statistical weights exhibited noise streaks. The other reconstruction appeared better in this aspect, but presented other disturbing noise patterns and artifacts induced by the weights. The LROC analysis yielded the following 95-percent confidence interval for the difference in reader-averaged AUC (reconstruction without weights minus reconstruction with weights): [0.0026, 0.0599]. The mean AUC value was 0.9094. Conclusion: We have investigated the impact of statistical weights without the use of an edge-preserving penalty in penalized weighted least-squares reconstruction. A decrease rather than an increase in image quality was observed when using statistical weights. Thus, the observers were better able to cope with the noise streaks than with the noise patterns and artifacts induced by the statistical weights. It may be that different results would be obtained if the penalty term was used with a pixel-dependent weight. F Noo receives research support from Siemens Healthcare GmbH.
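
The estimator being compared has a closed form in the linear case. The sketch below (identifiers illustrative) includes a Monte-Carlo check of the classical variance argument for statistical weights; note that the paper's finding concerns noise texture and detectability, which this simple variance argument does not capture.

```python
import numpy as np

def pwls(A, y, w, beta=1e-6):
    """Penalized (weighted) least squares with a quadratic penalty:
        x = argmin (y - Ax)^T diag(w) (y - Ax) + beta * ||x||^2
    Setting w = 1 gives the unweighted variant from the comparison."""
    AtW = A.T * w                      # A^T diag(w)
    H = AtW @ A + beta * np.eye(A.shape[1])
    return np.linalg.solve(H, AtW @ y)
```

Under heteroscedastic noise, weighting by the inverse variances lowers the estimator variance (the Gauss-Markov argument that motivates statistical weights in the first place).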

  8. Statistical image reconstruction from correlated data with applications to PET

    PubMed Central

    Alessio, Adam; Sauer, Ken; Kinahan, Paul

    2008-01-01

    Most statistical reconstruction methods for emission tomography are designed for data modeled as conditionally independent Poisson variates. In reality, due to scanner detectors, electronics and data processing, correlations are introduced into the data resulting in dependent variates. In general, these correlations are ignored because they are difficult to measure and lead to computationally challenging statistical reconstruction algorithms. This work addresses the second concern, seeking to simplify the reconstruction of correlated data and provide a more precise image estimate than the conventional independent methods. In general, correlated variates have a large non-diagonal covariance matrix that is computationally challenging to use as a weighting term in a reconstruction algorithm. This work proposes two methods to simplify the use of a non-diagonal covariance matrix as the weighting term by (a) limiting the number of dimensions in which the correlations are modeled and (b) adopting flexible, yet computationally tractable, models for correlation structure. We apply and test these methods with simple simulated PET data and data processed with the Fourier rebinning algorithm which include the one-dimensional correlations in the axial direction and the two-dimensional correlations in the transaxial directions. The methods are incorporated into a penalized weighted least-squares 2D reconstruction and compared with a conventional maximum a posteriori approach. PMID:17921576
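
One tractable way to realize simplification (b), a flexible yet computationally simple correlation model, is an AR(1) structure along a single direction, whose inverse covariance is tridiagonal. This is an illustrative choice, not necessarily the paper's exact model.

```python
import numpy as np

def ar1_precision(n, rho, sigma2=1.0):
    """Tridiagonal inverse of the AR(1) covariance
        Sigma_ij = sigma2 * rho**|i - j|.
    Correlations are modeled along one direction only, so the weighting
    matrix stays sparse and usable inside a PWLS reconstruction."""
    c = 1.0 / (sigma2 * (1.0 - rho ** 2))
    P = np.zeros((n, n))
    for i in range(n):
        P[i, i] = c * ((1.0 + rho ** 2) if 0 < i < n - 1 else 1.0)
        if i + 1 < n:
            P[i, i + 1] = P[i + 1, i] = -c * rho
    return P
```

Because the precision matrix is banded, the normal equations of the weighted reconstruction retain the sparsity that makes the correlated model computationally tractable.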

  9. Virtual reconstruction of glenoid bone defects using a statistical shape model.

    PubMed

    Plessers, Katrien; Vanden Berghe, Peter; Van Dijck, Christophe; Wirix-Speetjens, Roel; Debeer, Philippe; Jonkers, Ilse; Vander Sloten, Jos

    2018-01-01

    Description of the native shape of a glenoid helps surgeons to preoperatively plan the position of a shoulder implant. A statistical shape model (SSM) can be used to virtually reconstruct a glenoid bone defect and to predict the inclination, version, and center position of the native glenoid. An SSM-based reconstruction method has already been developed for acetabular bone reconstruction. The goal of this study was to evaluate the SSM-based method for the reconstruction of glenoid bone defects and the prediction of native anatomic parameters. First, an SSM was created on the basis of 66 healthy scapulae. Then, artificial bone defects were created in all scapulae and reconstructed using the SSM-based reconstruction method. For each bone defect, the reconstructed surface was compared with the original surface. Furthermore, the inclination, version, and glenoid center point of the reconstructed surface were compared with the original parameters of each scapula. For small glenoid bone defects, the healthy surface of the glenoid was reconstructed with a root mean square error of 1.2 ± 0.4 mm. Inclination, version, and glenoid center point were predicted with an accuracy of 2.4° ± 2.1°, 2.9° ± 2.2°, and 1.8 ± 0.8 mm, respectively. The SSM-based reconstruction method is able to accurately reconstruct the native glenoid surface and to predict the native anatomic parameters. Based on this outcome, statistical shape modeling can be considered a successful technique for use in the preoperative planning of shoulder arthroplasty. Copyright © 2017 Journal of Shoulder and Elbow Surgery Board of Trustees. Published by Elsevier Inc. All rights reserved.
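
The reconstruction step, fitting shape-model coefficients to the intact region and evaluating the full model over the defect, can be sketched in a low-dimensional analogue. The 1-D "shapes", the two-mode model, and the function names are illustrative, not the study's surface pipeline.

```python
import numpy as np

def ssm_reconstruct(mean_shape, modes, observed_idx, observed_vals):
    """Fit statistical-shape-model coefficients to the intact points by
    least squares, then evaluate the full model to fill in the defect
    (a simplified analogue of SSM-based glenoid reconstruction)."""
    B = modes[observed_idx, :]
    c, *_ = np.linalg.lstsq(B, observed_vals - mean_shape[observed_idx],
                            rcond=None)
    return mean_shape + modes @ c
```

When the unseen shape truly lies in the model subspace, the defect region is recovered exactly; real defects add model error on top, which is what the reported 1.2 mm RMS reflects.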

  10. Covariance approximation for fast and accurate computation of channelized Hotelling observer statistics

    NASA Astrophysics Data System (ADS)

    Bonetto, P.; Qi, Jinyi; Leahy, R. M.

    2000-08-01

    Describes a method for computing linear observer statistics for maximum a posteriori (MAP) reconstructions of PET images. The method is based on a theoretical approximation for the mean and covariance of MAP reconstructions. In particular, the authors derive here a closed form for the channelized Hotelling observer (CHO) statistic applied to 2D MAP images. The theoretical analysis models both the Poisson statistics of PET data and the inhomogeneity of tracer uptake. The authors show reasonably good correspondence between these theoretical results and Monte Carlo studies. The accuracy and low computational cost of the approximation allow the authors to analyze the observer performance over a wide range of operating conditions and parameter settings for the MAP reconstruction algorithm.
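
The CHO statistic itself is compact: project images onto a few channels, form the Hotelling template from channel means and the average channel covariance, and score each image. The sketch below uses sample statistics and random orthonormal channels for illustration; the paper's contribution is precisely to replace the sample covariance with a theoretical approximation.

```python
import numpy as np

def cho_statistics(train_sig, train_bkg, test_imgs, U):
    """Channelized Hotelling observer scores: v = U^T g per image,
    template w = S^-1 (mean_sig - mean_bkg) with S the average channel
    covariance, decision statistic t = w^T v."""
    vs, vb = train_sig @ U, train_bkg @ U
    S = 0.5 * (np.cov(vs, rowvar=False) + np.cov(vb, rowvar=False))
    w = np.linalg.solve(S, vs.mean(axis=0) - vb.mean(axis=0))
    return (test_imgs @ U) @ w
```
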

  11. Compensation of missing wedge effects with sequential statistical reconstruction in electron tomography.

    PubMed

    Paavolainen, Lassi; Acar, Erman; Tuna, Uygar; Peltonen, Sari; Moriya, Toshio; Soonsawad, Pan; Marjomäki, Varpu; Cheng, R Holland; Ruotsalainen, Ulla

    2014-01-01

    Electron tomography (ET) of biological samples is used to study the organization and the structure of the whole cell and subcellular complexes in great detail. However, projections cannot be acquired over full tilt angle range with biological samples in electron microscopy. ET image reconstruction can be considered an ill-posed problem because of this missing information. This results in artifacts, seen as the loss of three-dimensional (3D) resolution in the reconstructed images. The goal of this study was to achieve isotropic resolution with a statistical reconstruction method, sequential maximum a posteriori expectation maximization (sMAP-EM), using no prior morphological knowledge about the specimen. The missing wedge effects on sMAP-EM were examined with a synthetic cell phantom to assess the effects of noise. An experimental dataset of a multivesicular body was evaluated with a number of gold particles. An ellipsoid fitting based method was developed to realize the quantitative measures elongation and contrast in an automated, objective, and reliable way. The method statistically evaluates the sub-volumes containing gold particles randomly located in various parts of the whole volume, thus giving information about the robustness of the volume reconstruction. The quantitative results were also compared with reconstructions made with widely-used weighted backprojection and simultaneous iterative reconstruction technique methods. The results showed that the proposed sMAP-EM method significantly suppresses the effects of the missing information producing isotropic resolution. Furthermore, this method improves the contrast ratio, enhancing the applicability of further automatic and semi-automatic analysis. These improvements in ET reconstruction by sMAP-EM enable analysis of subcellular structures with higher three-dimensional resolution and contrast than conventional methods.
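
The elongation measure can be illustrated with second moments: fit an ellipsoid through the covariance of a particle's points and compare its extreme axes. This is a simplified stand-in for the paper's ellipsoid-fitting procedure; missing-wedge smearing shows up as values well above 1 for nominally spherical gold particles.

```python
import numpy as np

def elongation(points):
    """Elongation of a point cloud: square root of the ratio of the
    largest to smallest eigenvalue of its second-moment (covariance)
    matrix, i.e. the axis ratio of the fitted ellipsoid."""
    C = np.cov(np.asarray(points).T)
    ev = np.linalg.eigvalsh(C)          # eigenvalues in ascending order
    return float(np.sqrt(ev[-1] / ev[0]))
```
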

  12. Statistical inference approach to structural reconstruction of complex networks from binary time series

    NASA Astrophysics Data System (ADS)

    Ma, Chuang; Chen, Han-Shuang; Lai, Ying-Cheng; Zhang, Hai-Feng

    2018-02-01

    Complex networks hosting binary-state dynamics arise in a variety of contexts. Despite previous work, fully reconstructing the network structure from observed binary data remains challenging. We articulate a statistical inference based approach to this problem. In particular, exploiting the expectation-maximization (EM) algorithm, we develop a method to ascertain the neighbors of any node in the network based solely on binary data, thereby recovering the full topology of the network. A key ingredient of our method is the maximum-likelihood estimation of the probabilities associated with actual or nonexistent links, and we show that the EM algorithm can distinguish the two kinds of probability values without any ambiguity, insofar as the length of the available binary time series is reasonably long. Our method does not require any a priori knowledge of the detailed dynamical processes, is parameter-free, and is capable of accurate reconstruction even in the presence of noise. We demonstrate the method using combinations of distinct types of binary dynamical processes and network topologies, and provide a physical understanding of the underlying reconstruction mechanism. Our statistical inference based reconstruction method contributes an additional piece to the rapidly expanding "toolbox" of data based reverse engineering of complex networked systems.
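
The key separation step, EM resolving estimated probabilities into two unambiguous groups (actual links versus nonexistent links), can be mimicked with a plain two-component mixture fit. This is a minimal 1-D analogue for intuition, not the paper's network algorithm.

```python
import math
import random

def em_two_gaussians(xs, n_iters=200):
    """EM for a 1-D two-component Gaussian mixture: an analogue of how
    EM separates maximum-likelihood probability estimates for actual
    versus nonexistent links into two well-resolved groups."""
    mu, sd, pi = [min(xs), max(xs)], [0.5, 0.5], [0.5, 0.5]
    for _ in range(n_iters):
        # E-step: component responsibilities for every observation
        resp = []
        for x in xs:
            p = [pi[k] / sd[k] * math.exp(-0.5 * ((x - mu[k]) / sd[k]) ** 2)
                 for k in (0, 1)]
            s = p[0] + p[1]
            resp.append((p[0] / s, p[1] / s))
        # M-step: reweighted means, spreads, and mixing fractions
        for k in (0, 1):
            nk = sum(r[k] for r in resp)
            pi[k] = nk / len(xs)
            mu[k] = sum(r[k] * x for r, x in zip(resp, xs)) / nk
            var = sum(r[k] * (x - mu[k]) ** 2 for r, x in zip(resp, xs)) / nk
            sd[k] = max(1e-3, math.sqrt(var))
    return mu, sd, pi
```
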

  13. Statistical inference approach to structural reconstruction of complex networks from binary time series.

    PubMed

    Ma, Chuang; Chen, Han-Shuang; Lai, Ying-Cheng; Zhang, Hai-Feng

    2018-02-01

    Complex networks hosting binary-state dynamics arise in a variety of contexts. Despite previous work, fully reconstructing the network structure from observed binary data remains challenging. We articulate a statistical inference based approach to this problem. In particular, exploiting the expectation-maximization (EM) algorithm, we develop a method to ascertain the neighbors of any node in the network based solely on binary data, thereby recovering the full topology of the network. A key ingredient of our method is the maximum-likelihood estimation of the probabilities associated with actual or nonexistent links, and we show that the EM algorithm can distinguish the two kinds of probability values without any ambiguity, insofar as the length of the available binary time series is reasonably long. Our method does not require any a priori knowledge of the detailed dynamical processes, is parameter-free, and is capable of accurate reconstruction even in the presence of noise. We demonstrate the method using combinations of distinct types of binary dynamical processes and network topologies, and provide a physical understanding of the underlying reconstruction mechanism. Our statistical inference based reconstruction method contributes an additional piece to the rapidly expanding "toolbox" of data based reverse engineering of complex networked systems.

  14. Quantitative evaluation of ASiR image quality: an adaptive statistical iterative reconstruction technique

    NASA Astrophysics Data System (ADS)

    Van de Casteele, Elke; Parizel, Paul; Sijbers, Jan

    2012-03-01

    Adaptive statistical iterative reconstruction (ASiR) is a new reconstruction algorithm used in the field of medical X-ray imaging. This reconstruction method combines the idealized system representation known from the standard Filtered Back Projection (FBP) algorithm with the strength of iterative reconstruction, by including a noise model in the reconstruction scheme. The algorithm models how noise propagates through the reconstruction steps, feeds this model back into the loop, and iteratively reduces noise in the reconstructed image without affecting spatial resolution. In this paper the effect of ASiR on the contrast-to-noise ratio is studied using the low contrast module of the Catphan phantom. The experiments were done on a GE LightSpeed VCT system at different voltages and currents. The results show reduced noise and increased contrast for the ASiR reconstructions compared to the standard FBP method. For the same contrast-to-noise ratio, the images from ASiR can be obtained using 60% less current, leading to a dose reduction of the same amount.
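
The dose claim follows from quantum-noise scaling: image noise varies roughly as 1/sqrt(tube current), so a noise reduction by a factor f at fixed dose translates into a current reduction to 1/f^2 of the original at matched CNR. A quick sketch (function names illustrative):

```python
import math

def cnr(mean_insert, mean_bkg, std_bkg):
    """Contrast-to-noise ratio of a low-contrast insert."""
    return abs(mean_insert - mean_bkg) / std_bkg

def matched_cnr_current_fraction(noise_reduction):
    """Fraction of the original tube current that keeps CNR constant
    when a reconstruction lowers noise by `noise_reduction` at fixed
    dose, assuming sigma proportional to 1/sqrt(mAs)."""
    return 1.0 / noise_reduction ** 2
```

For example, a noise reduction factor of sqrt(2.5) (about 1.58) corresponds to running at 40% of the original current, i.e. the 60% saving reported above.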

  15. Initial evaluation of discrete orthogonal basis reconstruction of ECT images

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Moody, E.B.; Donohue, K.D.

    1996-12-31

    Discrete orthogonal basis restoration (DOBR) is a linear, non-iterative, and robust method for solving inverse problems for systems characterized by shift-variant transfer functions. This simulation study evaluates the feasibility of using DOBR for reconstructing emission computed tomographic (ECT) images. The imaging system model uses typical SPECT parameters and incorporates the effects of attenuation, spatially-variant PSF, and Poisson noise in the projection process. Sample reconstructions and statistical error analyses for a class of digital phantoms compare the DOBR performance for Hartley and Walsh basis functions. Test results confirm that DOBR with either basis set produces images with good statistical properties. No problems were encountered with reconstruction instability. The flexibility of the DOBR method and its consistent performance warrant further investigation of DOBR as a means of ECT image reconstruction.

  16. Non-linear scaling of a musculoskeletal model of the lower limb using statistical shape models.

    PubMed

    Nolte, Daniel; Tsang, Chui Kit; Zhang, Kai Yu; Ding, Ziyun; Kedgley, Angela E; Bull, Anthony M J

    2016-10-03

    Accurate muscle geometry for musculoskeletal models is important to enable accurate subject-specific simulations. Commonly, linear scaling is used to obtain individualised muscle geometry. More advanced methods include non-linear scaling using segmented bone surfaces and manual or semi-automatic digitisation of muscle paths from medical images. In this study, a new scaling method combining non-linear scaling with reconstructions of bone surfaces using statistical shape modelling is presented. Statistical Shape Models (SSMs) of the femur and tibia/fibula were used to reconstruct bone surfaces of nine subjects. Reference models were created by morphing manually digitised muscle paths to mean shapes of the SSMs using non-linear transformations, and inter-subject variability was calculated. Subject-specific models of muscle attachment and via points were created from three reference models. The accuracy was evaluated by calculating the differences between the scaled and manually digitised models. The points defining the muscle paths showed large inter-subject variability at the thigh and shank (up to 26 mm); this was found to limit the accuracy of all studied scaling methods. Errors for the subject-specific muscle point reconstructions of the thigh could be decreased by 9% to 20% by using the non-linear scaling compared to a typical linear scaling method. We conclude that the proposed non-linear scaling method is more accurate than linear scaling methods. Thus, when combined with the ability to reconstruct bone surfaces from incomplete or scattered geometry data using statistical shape models, our proposed method is an alternative to linear scaling methods. Copyright © 2016 The Author. Published by Elsevier Ltd. All rights reserved.

  17. Markov chain Monte Carlo estimation of quantum states

    NASA Astrophysics Data System (ADS)

    Diguglielmo, James; Messenger, Chris; Fiurášek, Jaromír; Hage, Boris; Samblowski, Aiko; Schmidt, Tabea; Schnabel, Roman

    2009-03-01

    We apply a Bayesian data analysis scheme known as the Markov chain Monte Carlo to the tomographic reconstruction of quantum states. This method yields a vector, known as the Markov chain, which contains the full statistical information concerning all reconstruction parameters including their statistical correlations with no a priori assumptions as to the form of the distribution from which it has been obtained. From this vector we can derive, e.g., the marginal distributions and uncertainties of all model parameters, and also of other quantities such as the purity of the reconstructed state. We demonstrate the utility of this scheme by reconstructing the Wigner function of phase-diffused squeezed states. These states possess non-Gaussian statistics and therefore represent a nontrivial case of tomographic reconstruction. We compare our results to those obtained through pure maximum-likelihood and Fisher information approaches.
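
The chain-based error analysis described above can be illustrated with a minimal random-walk Metropolis sampler (a generic sketch, not the authors' tomographic implementation): every derived quantity's mean and uncertainty comes from simple statistics of the chain, with no assumption on the form of the posterior.

```python
import math
import random

def metropolis(log_post, x0, n_samples, step=0.5, seed=0):
    """Random-walk Metropolis sampler. The returned Markov chain carries
    the full statistical information about the parameter: means, error
    bars, and correlations of derived quantities are plain chain averages."""
    rng = random.Random(seed)
    x, lp = x0, log_post(x0)
    chain = []
    for _ in range(n_samples):
        xp = x + rng.gauss(0.0, step)      # symmetric proposal
        lpp = log_post(xp)
        if lpp >= lp or rng.random() < math.exp(lpp - lp):
            x, lp = xp, lpp                # accept the proposal
        chain.append(x)                    # rejected moves repeat x
    return chain
```
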

  18. Crossing statistic: reconstructing the expansion history of the universe

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shafieloo, Arman, E-mail: arman@ewha.ac.kr

    2012-08-01

    We show that by combining the Crossing Statistic [1,2] and the Smoothing method [3-5] one can reconstruct the expansion history of the universe with very high precision without assuming any prior on cosmological quantities such as the equation of state of dark energy. We show that the presented method performs very well in reconstructing the expansion history of the universe independent of the underlying models, and it works well even for non-trivial dark energy models with fast or slow changes in the equation of state of dark energy. The accuracy of the reconstructed quantities, along with the independence of the method from any prior or assumption, gives the proposed method advantages over the other non-parametric methods proposed before in the literature. Applying it to the Union 2.1 supernovae combined with WiggleZ BAO data, we present the reconstructed results and test the consistency of the two data sets in a model independent manner. Results show that the latest available supernovae and BAO data are in good agreement with each other, and the spatially flat ΛCDM model is in concordance with the current data.

  19. WE-G-18A-04: 3D Dictionary Learning Based Statistical Iterative Reconstruction for Low-Dose Cone Beam CT Imaging

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bai, T; UT Southwestern Medical Center, Dallas, TX; Yan, H

    2014-06-15

    Purpose: To develop a 3D dictionary learning based statistical reconstruction algorithm on graphics processing units (GPU), to improve the quality of low-dose cone beam CT (CBCT) imaging with high efficiency. Methods: A 3D dictionary containing 256 small volumes (atoms) of 3x3x3 voxels was trained from a high quality volume image. During reconstruction, we utilized a Cholesky decomposition based orthogonal matching pursuit algorithm to find a sparse representation of each patch in the reconstructed image on this dictionary basis, in order to regularize the image quality. To accelerate the time-consuming sparse coding in the 3D case, we implemented our algorithm in a parallel fashion by taking advantage of the tremendous computational power of the GPU. Evaluations are performed based on a head-neck patient case. FDK reconstruction with the full dataset of 364 projections is used as the reference. We compared the proposed 3D dictionary learning based method with a tight frame (TF) based one using a subset of 121 projections. The image qualities under different resolutions in the z-direction, with or without statistical weighting, are also studied. Results: Compared to the TF-based CBCT reconstruction, our experiments indicated that 3D dictionary learning based CBCT reconstruction is able to recover finer structures, to remove more streaking artifacts, and is less susceptible to blocky artifacts. It is also observed that the statistical reconstruction approach is sensitive to inconsistency between the forward and backward projection operations in parallel computing. Using a high spatial resolution along the z direction helps improve the algorithm's robustness. Conclusion: The 3D dictionary learning based CBCT reconstruction algorithm is able to sense the structural information while suppressing noise, and hence to achieve high quality reconstruction. The GPU realization of the whole algorithm offers a significant efficiency enhancement, making this algorithm more feasible for potential clinical application. A high z-resolution is preferred to stabilize statistical iterative reconstruction. This work was supported in part by NIH (1R01CA154747-01), NSFC (No. 61172163), the Research Fund for the Doctoral Program of Higher Education of China (No. 20110201110011), and the China Scholarship Council.
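
The sparse-coding step at the heart of this approach is orthogonal matching pursuit. The sketch below uses a plain per-iteration least-squares refit rather than the Cholesky-decomposition update the paper accelerates on the GPU, and the dictionary sizes are illustrative.

```python
import numpy as np

def omp(D, x, sparsity):
    """Orthogonal matching pursuit: greedily add the atom most correlated
    with the residual, then re-fit all selected coefficients by least
    squares so the residual stays orthogonal to the chosen atoms."""
    residual = x.astype(float).copy()
    support = []
    for _ in range(sparsity):
        j = int(np.argmax(np.abs(D.T @ residual)))
        if j not in support:
            support.append(j)
        coef, *_ = np.linalg.lstsq(D[:, support], x, rcond=None)
        residual = x - D[:, support] @ coef
    code = np.zeros(D.shape[1])
    code[support] = coef
    return code
```

In the reconstruction, each 3x3x3 patch (a 27-vector) is coded against the 256-atom dictionary this way, and the sparse approximations act as the regularizer.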

  20. Adaptive multiple super fast simulated annealing for stochastic microstructure reconstruction

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ryu, Seun; Lin, Guang; Sun, Xin

    2013-01-01

    Fast image reconstruction from statistical information is critical in image fusion from multimodality chemical imaging instrumentation, where the goal is to create a high-resolution image over a large domain. Stochastic methods have been used widely in image reconstruction from the two-point correlation function; the main challenge is to increase the efficiency of reconstruction. A novel simulated annealing method is proposed for fast image reconstruction. Combining the advantages of very fast cooling schedules, dynamic adaptation, and parallelization, the new simulated annealing algorithm increases efficiency by several orders of magnitude, making large-domain image fusion feasible.
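
The underlying stochastic reconstruction loop can be sketched in one dimension, Yeong-Torquato style: swap unlike pixels and accept by Metropolis under a fast exponential cooling schedule. This single-chain sketch does not reproduce the paper's dynamic adaptation or parallel chains; sizes and the schedule are illustrative.

```python
import math
import random

def two_point(img, max_r):
    """S2(r): probability that two sites a distance r apart are both
    phase 1 (1-D, periodic boundary)."""
    n = len(img)
    return [sum(img[i] & img[(i + r) % n] for i in range(n)) / n
            for r in range(1, max_r + 1)]

def anneal(target_s2, n, n_ones, n_steps=15000, t0=0.05, seed=0):
    """Reconstruct a 1-D two-phase microstructure matching a target
    two-point correlation: swap a 0 and a 1, accept by Metropolis with
    a very fast exponential cooling schedule."""
    rng = random.Random(seed)
    img = [1] * n_ones + [0] * (n - n_ones)
    rng.shuffle(img)
    r_max = len(target_s2)
    energy = sum((a - b) ** 2
                 for a, b in zip(two_point(img, r_max), target_s2))
    for step in range(n_steps):
        t = t0 * 0.999 ** step                  # fast cooling
        i, j = rng.randrange(n), rng.randrange(n)
        if img[i] == img[j]:
            continue                            # swap must change the image
        img[i], img[j] = img[j], img[i]         # trial swap
        e_new = sum((a - b) ** 2
                    for a, b in zip(two_point(img, r_max), target_s2))
        if e_new <= energy or rng.random() < math.exp((energy - e_new) / t):
            energy = e_new                      # accept
        else:
            img[i], img[j] = img[j], img[i]     # revert
    return img, energy
```

Swapping pixels conserves the phase fractions exactly, which is why swap moves (rather than flips) are the standard choice in correlation-function reconstruction.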

  1. Evaluating image reconstruction methods for tumor detection performance in whole-body PET oncology imaging

    NASA Astrophysics Data System (ADS)

    Lartizien, Carole; Kinahan, Paul E.; Comtat, Claude; Lin, Michael; Swensson, Richard G.; Trebossen, Regine; Bendriem, Bernard

    2000-04-01

    This work presents initial results from observer detection performance studies using the same volume visualization software tools that are used in clinical PET oncology imaging. Research into the FORE+OSEM and FORE+AWOSEM statistical image reconstruction methods tailored to whole-body 3D PET oncology imaging has indicated potential improvements in image SNR compared to currently used analytic reconstruction methods (FBP). To assess the resulting impact of these reconstruction methods on the performance of human observers in detecting and localizing tumors, we use a non-Monte Carlo technique to generate multiple statistically accurate realizations of 3D whole-body PET data, based on an extended MCAT phantom and with clinically realistic levels of statistical noise. For each realization, we add a fixed number of randomly located 1 cm diam. lesions whose contrast is varied among pre-calibrated values so that the range of true positive fractions is well sampled. The observer is told the number of tumors and, similar to the AFROC method, asked to localize all of them. The true positive fraction for the three algorithms (FBP, FORE+OSEM, FORE+AWOSEM) as a function of lesion contrast is calculated, although other protocols could be compared. A confidence level for each tumor is also recorded for incorporation into later AFROC analysis.

  2. Expectation maximization for hard X-ray count modulation profiles

    NASA Astrophysics Data System (ADS)

    Benvenuto, F.; Schwartz, R.; Piana, M.; Massone, A. M.

    2013-07-01

    Context. This paper is concerned with the image reconstruction problem when the measured data are solar hard X-ray modulation profiles obtained from the Reuven Ramaty High Energy Solar Spectroscopic Imager (RHESSI) instrument. Aims: Our goal is to demonstrate that a statistical iterative method classically applied to the image deconvolution problem is very effective when utilized to analyze count modulation profiles in solar hard X-ray imaging based on rotating modulation collimators. Methods: The algorithm described in this paper solves the maximum likelihood problem iteratively and encodes a positivity constraint into the iterative optimization scheme. The result is therefore a classical expectation maximization method this time applied not to an image deconvolution problem but to image reconstruction from count modulation profiles. The technical reason that makes our implementation particularly effective in this application is the use of a very reliable stopping rule which is able to regularize the solution providing, at the same time, a very satisfactory Cash-statistic (C-statistic). Results: The method is applied to both reproduce synthetic flaring configurations and reconstruct images from experimental data corresponding to three real events. In this second case, the performance of expectation maximization, when compared to Pixon image reconstruction, shows a comparable accuracy and a notably reduced computational burden; when compared to CLEAN, shows a better fidelity with respect to the measurements with a comparable computational effectiveness. Conclusions: If optimally stopped, expectation maximization represents a very reliable method for image reconstruction in the RHESSI context when count modulation profiles are used as input data.
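
As a concrete reference point for the expectation maximization scheme described above, here is the textbook ML-EM iteration for Poisson data, whose multiplicative update encodes the positivity constraint mentioned in the abstract. This is not the authors' RHESSI implementation: in particular, the C-statistic-based stopping rule is not reproduced, and a fixed iteration count stands in for it.

```python
import numpy as np

def mlem(A, y, n_iter=100, eps=1e-12):
    """Maximum-likelihood EM for y ~ Poisson(A f); the multiplicative
    update keeps f non-negative at every iteration."""
    f = np.ones(A.shape[1])
    sens = np.maximum(A.sum(axis=0), eps)      # back-projection of ones
    for _ in range(n_iter):
        proj = np.maximum(A @ f, eps)          # forward projection
        f *= (A.T @ (y / proj)) / sens         # EM multiplicative update
    return f
```

For noisy data, the iteration would be stopped early (here, when the C-statistic of the fit becomes acceptable) rather than run to convergence.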

  3. Time-of-flight PET image reconstruction using origin ensembles.

    PubMed

    Wülker, Christian; Sitek, Arkadiusz; Prevrhal, Sven

    2015-03-07

    The origin ensemble (OE) algorithm is a novel statistical method for minimum-mean-square-error (MMSE) reconstruction of emission tomography data. This method allows one to perform reconstruction entirely in the image domain, i.e. without the use of forward and backprojection operations. We have investigated the OE algorithm in the context of list-mode (LM) time-of-flight (TOF) PET reconstruction. In this paper, we provide a general introduction to MMSE reconstruction, and a statistically rigorous derivation of the OE algorithm. We show how to efficiently incorporate TOF information into the reconstruction process, and how to correct for random coincidences and scattered events. To examine the feasibility of LM-TOF MMSE reconstruction with the OE algorithm, we applied MMSE-OE and standard maximum-likelihood expectation-maximization (ML-EM) reconstruction to LM-TOF phantom data with a count number typically registered in clinical PET examinations. We analyzed the convergence behavior of the OE algorithm, and compared reconstruction time and image quality to that of the EM algorithm. In summary, during the reconstruction process, MMSE-OE contrast recovery (CRV) remained approximately the same, while background variability (BV) gradually decreased with an increasing number of OE iterations. The final MMSE-OE images exhibited lower BV and a slightly lower CRV than the corresponding ML-EM images. The reconstruction time of the OE algorithm was approximately 1.3 times longer. At the same time, the OE algorithm can inherently provide a comprehensive statistical characterization of the acquired data. This characterization can be utilized for further data processing, e.g. in kinetic analysis and image registration, making the OE algorithm a promising approach in a variety of applications.

  4. Time-of-flight PET image reconstruction using origin ensembles

    NASA Astrophysics Data System (ADS)

    Wülker, Christian; Sitek, Arkadiusz; Prevrhal, Sven

    2015-03-01

    The origin ensemble (OE) algorithm is a novel statistical method for minimum-mean-square-error (MMSE) reconstruction of emission tomography data. This method allows one to perform reconstruction entirely in the image domain, i.e. without the use of forward and backprojection operations. We have investigated the OE algorithm in the context of list-mode (LM) time-of-flight (TOF) PET reconstruction. In this paper, we provide a general introduction to MMSE reconstruction, and a statistically rigorous derivation of the OE algorithm. We show how to efficiently incorporate TOF information into the reconstruction process, and how to correct for random coincidences and scattered events. To examine the feasibility of LM-TOF MMSE reconstruction with the OE algorithm, we applied MMSE-OE and standard maximum-likelihood expectation-maximization (ML-EM) reconstruction to LM-TOF phantom data with a count number typically registered in clinical PET examinations. We analyzed the convergence behavior of the OE algorithm, and compared reconstruction time and image quality to that of the EM algorithm. In summary, during the reconstruction process, MMSE-OE contrast recovery (CRV) remained approximately the same, while background variability (BV) gradually decreased with an increasing number of OE iterations. The final MMSE-OE images exhibited lower BV and a slightly lower CRV than the corresponding ML-EM images. The reconstruction time of the OE algorithm was approximately 1.3 times longer. At the same time, the OE algorithm can inherently provide a comprehensive statistical characterization of the acquired data. This characterization can be utilized for further data processing, e.g. in kinetic analysis and image registration, making the OE algorithm a promising approach in a variety of applications.

  5. Task-based data-acquisition optimization for sparse image reconstruction systems

    NASA Astrophysics Data System (ADS)

    Chen, Yujia; Lou, Yang; Kupinski, Matthew A.; Anastasio, Mark A.

    2017-03-01

    Conventional wisdom dictates that imaging hardware should be optimized by use of an ideal observer (IO) that exploits full statistical knowledge of the class of objects to be imaged, without consideration of the reconstruction method to be employed. However, accurate and tractable models of the complete object statistics are often difficult to determine in practice. Moreover, in imaging systems that employ compressive sensing concepts, imaging hardware and (sparse) image reconstruction are innately coupled technologies. We have previously proposed a sparsity-driven ideal observer (SDIO) that can be employed to optimize hardware by use of a stochastic object model that describes object sparsity. The SDIO and sparse reconstruction method can therefore be "matched" in the sense that they both utilize the same statistical information regarding the class of objects to be imaged. To efficiently compute SDIO performance, the posterior distribution is estimated by use of computational tools developed recently for variational Bayesian inference. Subsequently, the SDIO test statistic can be computed semi-analytically. The advantages of employing the SDIO instead of a Hotelling observer are systematically demonstrated in case studies in which magnetic resonance imaging (MRI) data acquisition schemes are optimized for signal detection tasks.

  6. An experimental comparison of various methods of nearfield acoustic holography

    DOE PAGES

    Chelliah, Kanthasamy; Raman, Ganesh; Muehleisen, Ralph T.

    2017-05-19

    An experimental comparison of four different methods of nearfield acoustic holography (NAH) is presented in this study for planar acoustic sources. The four NAH methods considered in this study are based on: (1) spatial Fourier transform, (2) equivalent sources model, (3) boundary element methods and (4) statistically optimized NAH. Two dimensional measurements were obtained at different distances in front of a tonal sound source and the NAH methods were used to reconstruct the sound field at the source surface. Reconstructed particle velocity and acoustic pressure fields presented in this study showed that the equivalent sources model based algorithm along with Tikhonov regularization provided the best localization of the sources. Reconstruction errors were found to be smaller for the equivalent sources model based algorithm and the statistically optimized NAH algorithm. The effect of hologram distance on the performance of the various algorithms is discussed in detail. The study also compares the computational time required by each algorithm. Four different regularization parameter choice methods were compared. The L-curve method provided more accurate reconstructions than the generalized cross validation and the Morozov discrepancy principle. Finally, the performance of fixed parameter regularization was comparable to that of the L-curve method.

  7. An experimental comparison of various methods of nearfield acoustic holography

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chelliah, Kanthasamy; Raman, Ganesh; Muehleisen, Ralph T.

    An experimental comparison of four different methods of nearfield acoustic holography (NAH) is presented in this study for planar acoustic sources. The four NAH methods considered in this study are based on: (1) spatial Fourier transform, (2) equivalent sources model, (3) boundary element methods and (4) statistically optimized NAH. Two dimensional measurements were obtained at different distances in front of a tonal sound source and the NAH methods were used to reconstruct the sound field at the source surface. Reconstructed particle velocity and acoustic pressure fields presented in this study showed that the equivalent sources model based algorithm along with Tikhonov regularization provided the best localization of the sources. Reconstruction errors were found to be smaller for the equivalent sources model based algorithm and the statistically optimized NAH algorithm. The effect of hologram distance on the performance of the various algorithms is discussed in detail. The study also compares the computational time required by each algorithm. Four different regularization parameter choice methods were compared. The L-curve method provided more accurate reconstructions than the generalized cross validation and the Morozov discrepancy principle. Finally, the performance of fixed parameter regularization was comparable to that of the L-curve method.
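
The equivalent-sources-with-Tikhonov pipeline that performed best in the records above reduces, at its core, to a regularized least-squares solve plus a regularization parameter choice rule. In this sketch, `G` is an assumed propagator matrix from equivalent sources to hologram points, and the corner heuristic (minimizing the product of residual and solution norms) is a crude stand-in for a true L-curve curvature maximization.

```python
import numpy as np

def tikhonov(G, p, lam):
    """Regularized equivalent-source solve: min ||G q - p||^2 + lam ||q||^2."""
    n = G.shape[1]
    return np.linalg.solve(G.conj().T @ G + lam * np.eye(n), G.conj().T @ p)

def l_curve_corner(G, p, lams):
    """Very crude L-curve 'corner': pick the lambda minimizing the product of
    residual norm and solution norm (real implementations instead maximize
    the curvature of the log-log residual/solution-norm curve)."""
    def score(lam):
        q = tikhonov(G, p, lam)
        return np.linalg.norm(G @ q - p) * np.linalg.norm(q)
    return min(lams, key=score)
```

Once `q` is solved for, the sound field at the source surface is re-propagated from the equivalent sources, which is where the method's localization advantage reported above comes in.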

  8. 3D Representative Volume Element Reconstruction of Fiber Composites via Orientation Tensor and Substructure Features

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, Yi; Chen, Wei; Xu, Hongyi

    To provide a seamless integration of manufacturing processing simulation and fiber microstructure modeling, two new stochastic 3D microstructure reconstruction methods are proposed for two types of random fiber composites: random short fiber composites, and Sheet Molding Compounds (SMC) chopped fiber composites. A Random Sequential Adsorption (RSA) algorithm is first developed to embed statistical orientation information into 3D RVE reconstruction of random short fiber composites. For the SMC composites, an optimized Voronoi diagram based approach is developed for capturing the substructure features of SMC chopped fiber composites. The proposed methods are distinguished from other reconstruction works by providing a way of integrating statistical information (fiber orientation tensor) obtained from material processing simulation, as well as capturing the multiscale substructures of the SMC composites.
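
The Random Sequential Adsorption step mentioned above can be illustrated in its simplest form: sequential placement of non-overlapping inclusions with rejection. The sketch below uses equal disks in 2D rather than oriented fibers, and does not embed the orientation-tensor statistics that the actual method integrates.

```python
import numpy as np

def rsa_disks(n_disks, radius, box=1.0, max_tries=10000, seed=0):
    """Random Sequential Adsorption: propose uniform random centres and
    accept only those that do not overlap any previously accepted disk."""
    rng = np.random.default_rng(seed)
    centres = []
    for _ in range(max_tries):
        if len(centres) == n_disks:
            break
        c = rng.random(2) * box
        if all(np.linalg.norm(c - q) >= 2 * radius for q in centres):
            centres.append(c)
    return np.array(centres)
```

In the fiber case each proposal would also carry an orientation drawn to match the target orientation tensor, with the overlap test performed on capsules rather than disks.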

  9. Reconstruction of three-dimensional porous media using a single thin section

    NASA Astrophysics Data System (ADS)

    Tahmasebi, Pejman; Sahimi, Muhammad

    2012-06-01

    The purpose of any reconstruction method is to generate realizations of two- or multiphase disordered media that honor limited data for them, with the hope that the realizations provide accurate predictions for those properties of the media for which there are no data available, or their measurement is difficult. An important example of such stochastic systems is porous media for which the reconstruction technique must accurately represent their morphology—the connectivity and geometry—as well as their flow and transport properties. Many of the current reconstruction methods are based on low-order statistical descriptors that fail to provide accurate information on the properties of heterogeneous porous media. On the other hand, due to the availability of high resolution two-dimensional (2D) images of thin sections of a porous medium, and at the same time, the high cost, computational difficulties, and even unavailability of complete 3D images, the problem of reconstructing porous media from 2D thin sections remains an outstanding unsolved problem. We present a method based on multiple-point statistics in which a single 2D thin section of a porous medium, represented by a digitized image, is used to reconstruct the 3D porous medium to which the thin section belongs. The method utilizes a 1D raster path for inspecting the digitized image, and combines it with a cross-correlation function, a grid splitting technique for deciding the resolution of the computational grid used in the reconstruction, and the Shannon entropy as a measure of the heterogeneity of the porous sample, in order to reconstruct the 3D medium. It also utilizes an adaptive technique for identifying the locations and optimal number of hard (quantitative) data points that one can use in the reconstruction process. The method is tested on high resolution images for Berea sandstone and a carbonate rock sample, and the results are compared with the data. To make the comparison quantitative, two sets of statistical tests consisting of the autocorrelation function, histogram matching of the local coordination numbers, the pore and throat size distributions, multiple-point connectivity, and single- and two-phase flow permeabilities are used. The comparison indicates that the proposed method reproduces the long-range connectivity of the porous media, with the computed properties being in good agreement with the data for both porous samples. The computational efficiency of the method is also demonstrated.
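
One of the statistical tests mentioned above, the autocorrelation function, can be computed efficiently for a digitized binary image via the FFT, assuming periodic boundaries; this is a generic sketch, not the authors' implementation.

```python
import numpy as np

def autocorrelation(img):
    """Two-point autocorrelation of a (binary) image via the FFT,
    assuming periodic boundaries; zero lag ends up at the array centre."""
    f = np.fft.fft2(img)
    ac = np.fft.ifft2(f * np.conj(f)).real / img.size
    return np.fft.fftshift(ac)
```

At zero lag the value equals the phase (volume) fraction of a binary image, a quick sanity check when comparing a reconstruction against the reference sample.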

  10. Fallacies and fantasies: the theoretical underpinnings of the Coexistence Approach for palaeoclimate reconstruction

    NASA Astrophysics Data System (ADS)

    Grimm, G. W.; Potts, A. J.

    2015-12-01

    The Coexistence Approach has been used to infer palaeoclimates for many Eurasian fossil plant assemblages. However, the theory that underpins the method has never been examined in detail. Here we discuss acknowledged and implicit assumptions, and assess the statistical nature and pseudo-logic of the method. We also compare the Coexistence Approach theory with the active field of species distribution modelling. We argue that the assumptions will inevitably be violated to some degree and that the method has no means to identify and quantify these violations. The lack of a statistical framework makes the method highly vulnerable to the vagaries of statistical outliers and exotic elements. In addition, we find numerous logical inconsistencies, such as how climate shifts are quantified (the use of a "center value" of a coexistence interval) and the ability to reconstruct "extinct" climates from modern plant distributions. Given the problems that have surfaced in species distribution modelling, accurate and precise quantitative reconstructions of palaeoclimates (or even climate shifts) using the nearest-living-relative principle and rectilinear niches (the basis of the method) will not be possible. The Coexistence Approach can be summarised as an exercise that shoe-horns a plant fossil assemblage into coexistence and then naively assumes that this must be the climate. Given these theoretical issues, and the methodological issues highlighted elsewhere, we suggest that the method be discontinued and that all past reconstructions be disregarded and revisited using less fallacious methods.

  11. Fallacies and fantasies: the theoretical underpinnings of the Coexistence Approach for palaeoclimate reconstruction

    NASA Astrophysics Data System (ADS)

    Grimm, Guido W.; Potts, Alastair J.

    2016-03-01

    The Coexistence Approach has been used to infer palaeoclimates for many Eurasian fossil plant assemblages. However, the theory that underpins the method has never been examined in detail. Here we discuss acknowledged and implicit assumptions and assess the statistical nature and pseudo-logic of the method. We also compare the Coexistence Approach theory with the active field of species distribution modelling. We argue that the assumptions will inevitably be violated to some degree and that the method lacks any substantive means to identify or quantify these violations. The absence of a statistical framework makes the method highly vulnerable to the vagaries of statistical outliers and exotic elements. In addition, we find numerous logical inconsistencies, such as how climate shifts are quantified (the use of a "centre value" of a coexistence interval) and the ability to reconstruct "extinct" climates from modern plant distributions. Given the problems that have surfaced in species distribution modelling, accurate and precise quantitative reconstructions of palaeoclimates (or even climate shifts) using the nearest-living-relative principle and rectilinear niches (the basis of the method) will not be possible. The Coexistence Approach can be summarised as an exercise that shoehorns a plant fossil assemblage into coexistence and then assumes that this must be the climate. Given the theoretical issues and methodological issues highlighted elsewhere, we suggest that the method be discontinued and that all past reconstructions be disregarded and revisited using less fallacious methods. We outline six steps for (further) validation of available and future taxon-based methods and advocate developing (semi-quantitative) methods that prioritise robustness over precision.

  12. Reconstruction of spatio-temporal temperature from sparse historical records using robust probabilistic principal component regression

    USGS Publications Warehouse

    Tipton, John; Hooten, Mevin B.; Goring, Simon

    2017-01-01

    Scientific records of temperature and precipitation have been kept for several hundred years, but for many areas, only a shorter record exists. To understand climate change, there is a need for rigorous statistical reconstructions of the paleoclimate using proxy data. Paleoclimate proxy data are often sparse, noisy, indirect measurements of the climate process of interest, making each proxy uniquely challenging to model statistically. We reconstruct spatially explicit temperature surfaces from sparse and noisy measurements recorded at historical United States military forts and other observer stations from 1820 to 1894. One common method for reconstructing the paleoclimate from proxy data is principal component regression (PCR). With PCR, one learns a statistical relationship between the paleoclimate proxy data and a set of climate observations that are used as patterns for potential reconstruction scenarios. We explore PCR in a Bayesian hierarchical framework, extending classical PCR in a variety of ways. First, we model the latent principal components probabilistically, accounting for measurement error in the observational data. Next, we extend our method to better accommodate outliers that occur in the proxy data. Finally, we explore alternatives to the truncation of lower-order principal components using different regularization techniques. One fundamental challenge in paleoclimate reconstruction efforts is the lack of out-of-sample data for predictive validation. Cross-validation is of potential value, but is computationally expensive and potentially sensitive to outliers in sparse data scenarios. To overcome the limitations that a lack of out-of-sample records presents, we test our methods using a simulation study, applying proper scoring rules including a computationally efficient approximation to leave-one-out cross-validation using the log score to validate model performance. The result of our analysis is a spatially explicit reconstruction of spatio-temporal temperature from a very sparse historical record.
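
The classical PCR backbone that the Bayesian hierarchical model above extends can be sketched as plain least squares on the leading principal component scores; the probabilistic components, outlier robustness, and regularization alternatives described in the record are not reproduced here.

```python
import numpy as np

def pcr_fit(X, y, k):
    """Classical principal component regression: OLS of centred y on the
    scores of the first k principal components of centred X."""
    xm, ym = X.mean(axis=0), y.mean()
    Xc = X - xm
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    Z = Xc @ Vt[:k].T                          # component scores
    beta_z, *_ = np.linalg.lstsq(Z, y - ym, rcond=None)
    beta = Vt[:k].T @ beta_z                   # coefficients in X space
    return beta, ym - xm @ beta                # slope, intercept
```

Truncating to k < rank(X) is exactly the step the record replaces with probabilistic modelling and other regularization schemes.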

  13. Estimated Accuracy of Three Common Trajectory Statistical Methods

    NASA Technical Reports Server (NTRS)

    Kabashnikov, Vitaliy P.; Chaikovsky, Anatoli P.; Kucsera, Tom L.; Metelskaya, Natalia S.

    2011-01-01

    Three well-known trajectory statistical methods (TSMs), namely the concentration field (CF), concentration weighted trajectory (CWT), and potential source contribution function (PSCF) methods, were tested using known sources and artificially generated data sets to determine the ability of TSMs to reproduce the spatial distribution of the sources. In the works of other authors, the accuracy of trajectory statistical methods was estimated for particular species and at specified receptor locations. We have obtained a more general statistical estimation of the accuracy of source reconstruction and have found optimum conditions for reconstructing source distributions of atmospheric trace substances. Only virtual pollutants of the primary type were considered. In real-world experiments, TSMs are intended for application to a priori unknown sources. Therefore, the accuracy of TSMs has to be tested with all possible spatial distributions of sources. An ensemble of geographical distributions of virtual sources was generated. Spearman's rank order correlation coefficient between the spatial distributions of the known virtual and the reconstructed sources was taken as a quantitative measure of the accuracy. Statistical estimates of the mean correlation coefficient and a range of the most probable values of correlation coefficients were obtained. All the TSMs considered here showed similar, close results. The maximum of the ratio of the mean correlation to the width of the correlation interval containing the most probable correlation values determines the optimum conditions for reconstruction. An optimal geographical domain roughly coincides with the area supplying most of the substance to the receptor. The optimal domain's size depends on the substance decay time. Under optimum reconstruction conditions, the mean correlation coefficients can reach 0.70-0.75. The boundaries of the interval with the most probable correlation values are 0.6-0.9 for a decay time of 240 h and 0.5-0.95 for a decay time of 12 h. The best results of source reconstruction can be expected for trace substances with a decay time on the order of several days. Although the methods considered in this paper do not guarantee high accuracy, they are computationally simple and fast. Using the TSMs under optimum conditions and taking into account the range of uncertainties, one can obtain a first hint of potential source areas.
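
Of the three TSMs tested above, the potential source contribution function is the simplest to state: for each grid cell, the fraction of trajectories passing through it whose receptor concentration exceeds a threshold. A minimal sketch, with grid cells as opaque hashable IDs and no residence-time weighting:

```python
def pscf(traj_cells, conc, threshold):
    """Potential source contribution function: per grid cell, the fraction
    of back-trajectories through that cell whose receptor concentration
    exceeds the threshold."""
    n_total, n_high = {}, {}
    for cells, c in zip(traj_cells, conc):
        for cell in set(cells):                # count each cell once per trajectory
            n_total[cell] = n_total.get(cell, 0) + 1
            if c > threshold:
                n_high[cell] = n_high.get(cell, 0) + 1
    return {cell: n_high.get(cell, 0) / n for cell, n in n_total.items()}
```

CF and CWT follow the same counting pattern but average the concentrations themselves over the trajectories through each cell rather than thresholding them.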

  14. Dynamic PET Image reconstruction for parametric imaging using the HYPR kernel method

    NASA Astrophysics Data System (ADS)

    Spencer, Benjamin; Qi, Jinyi; Badawi, Ramsey D.; Wang, Guobao

    2017-03-01

    Dynamic PET image reconstruction is a challenging problem because of the ill-conditioned nature of PET and the low counting statistics resulting from the short time frames in dynamic imaging. The kernel method for image reconstruction has been developed to improve image reconstruction of low-count PET data by incorporating prior information derived from high-count composite data. In contrast to most of the existing regularization-based methods, the kernel method embeds image prior information in the forward projection model and does not require an explicit regularization term in the reconstruction formula. Inspired by the existing highly constrained back-projection (HYPR) algorithm for dynamic PET image denoising, we propose in this work a new type of kernel that is simpler to implement and further improves the kernel-based dynamic PET image reconstruction. Our evaluation study using a physical phantom scan with synthetic FDG tracer kinetics has demonstrated that the new HYPR kernel-based reconstruction can achieve a better region-of-interest (ROI) bias versus standard deviation trade-off for dynamic PET parametric imaging than the post-reconstruction HYPR denoising method and the previously used nonlocal-means kernel.
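
The kernel method's key move, representing the image as x = K a and running EM on the coefficients a with the composite system matrix A K, can be sketched generically. The HYPR kernel construction itself is not reproduced here; `K` is left as an arbitrary matrix (identity in the test) purely for illustration.

```python
import numpy as np

def kernel_em(A, K, y, n_iter=200, eps=1e-12):
    """EM on kernel coefficients: the image is x = K @ a, so the effective
    system matrix is A @ K and the multiplicative update acts on a."""
    AK = A @ K
    a = np.ones(K.shape[1])
    sens = np.maximum(AK.sum(axis=0), eps)
    for _ in range(n_iter):
        proj = np.maximum(AK @ a, eps)
        a *= (AK.T @ (y / proj)) / sens
    return K @ a                               # return the image, not a
```

With K built from high-count composite frames, smoothing enters through the forward model itself, which is why no explicit regularization term appears in the update.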

  15. PET image reconstruction: a robust state space approach.

    PubMed

    Liu, Huafeng; Tian, Yi; Shi, Pengcheng

    2005-01-01

    Statistical iterative reconstruction algorithms have shown improved image quality over conventional nonstatistical methods in PET by using accurate system response models and measurement noise models. Strictly speaking, however, PET measurements, pre-corrected for accidental coincidences, are neither Poisson nor Gaussian distributed and thus do not meet the basic assumptions of these algorithms. In addition, the difficulty of determining the proper system response model also greatly affects the quality of the reconstructed images. In this paper, we explore the use of state space principles for the estimation of the activity map in tomographic PET imaging. The proposed strategy formulates the organ activity distribution through tracer kinetics models and the photon-counting measurements through observation equations, thus making it possible to unify the dynamic and static reconstruction problems in a general framework. Further, it coherently treats the uncertainties of the statistical model of the imaging system and the noisy nature of the measurement data. Since the H(infinity) filter seeks minimax-error estimates without any assumptions on the system and data noise statistics, it is particularly suited to PET image reconstruction, where the statistical properties of the measurement data and the system model are very complicated. The performance of the proposed framework is evaluated using Shepp-Logan simulated phantom data and real phantom data with favorable results.

  16. Low-dose X-ray computed tomography image reconstruction with a combined low-mAs and sparse-view protocol.

    PubMed

    Gao, Yang; Bian, Zhaoying; Huang, Jing; Zhang, Yunwan; Niu, Shanzhou; Feng, Qianjin; Chen, Wufan; Liang, Zhengrong; Ma, Jianhua

    2014-06-16

    To realize low-dose imaging in X-ray computed tomography (CT) examination, lowering milliampere-seconds (low-mAs) or reducing the required number of projection views (sparse-view) per rotation around the body has been widely studied as an easy and effective approach. In this study, we focus on low-dose CT image reconstruction from sinograms acquired with a combined low-mAs and sparse-view protocol and propose a two-step image reconstruction strategy. Specifically, to suppress significant statistical noise in the noisy and insufficient sinograms, an adaptive sinogram restoration (ASR) method is first proposed with consideration of the statistical property of sinogram data, and then, to further acquire a high-quality image, a total variation based projection onto convex sets (TV-POCS) method is adopted with a slight modification. For simplicity, the present reconstruction strategy was termed "ASR-TV-POCS." To evaluate the present ASR-TV-POCS method, both qualitative and quantitative studies were performed on a physical phantom. Experimental results have demonstrated that the present ASR-TV-POCS method can achieve promising gains over other existing methods in terms of noise reduction, contrast-to-noise ratio, and edge detail preservation.
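
The TV-POCS half of the two-step strategy above can be illustrated with an ART sweep as the data-consistency projection followed by a few TV steepest-descent steps, loosely following the standard ASD-POCS pattern. The adaptive sinogram restoration step is not reproduced, and all step sizes are illustrative.

```python
import numpy as np

def tv_gradient(img, eps=1e-8):
    """Gradient of isotropic total variation (periodic boundaries)."""
    dx = np.roll(img, -1, axis=0) - img
    dy = np.roll(img, -1, axis=1) - img
    mag = np.sqrt(dx ** 2 + dy ** 2 + eps)
    px, py = dx / mag, dy / mag
    div = (px - np.roll(px, 1, axis=0)) + (py - np.roll(py, 1, axis=1))
    return -div

def tv_pocs(A, y, shape, n_outer=10, n_tv=5, alpha=0.2):
    """Alternate an ART sweep (projection onto the data-consistency sets)
    with a few TV steepest-descent steps."""
    x = np.zeros(A.shape[1])
    row_norms = np.sum(A ** 2, axis=1)
    for _ in range(n_outer):
        for i in range(A.shape[0]):            # ART: one hyperplane at a time
            if row_norms[i] > 0:
                x += (y[i] - A[i] @ x) / row_norms[i] * A[i]
        x = np.clip(x, 0.0, None)              # positivity
        img = x.reshape(shape)
        for _ in range(n_tv):                  # TV descent
            g = tv_gradient(img)
            gn = np.linalg.norm(g)
            if gn > 0:
                img = img - alpha * g / gn
        x = img.ravel()
    return img
```

In the full ASR-TV-POCS pipeline, `y` would be the restored (denoised) sinogram produced by the ASR step rather than the raw low-mAs data.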

  17. A feature refinement approach for statistical interior CT reconstruction

    NASA Astrophysics Data System (ADS)

    Hu, Zhanli; Zhang, Yunwan; Liu, Jianbo; Ma, Jianhua; Zheng, Hairong; Liang, Dong

    2016-07-01

    Interior tomography is clinically desired to reduce the radiation dose rendered to patients. In this work, a new statistical interior tomography approach for computed tomography is proposed. The developed design focuses on taking into account the statistical nature of local projection data and recovering fine structures which are lost in the conventional total-variation (TV)-minimization reconstruction. The proposed method falls within the compressed sensing framework of TV minimization, which only assumes that the interior ROI is piecewise constant or polynomial and does not need any additional prior knowledge. To integrate the statistical distribution property of the projection data, the objective function is built under the criterion of penalized weighted least-squares (PWLS-TV). In the implementation of the proposed method, an interior projection extrapolation based FBP reconstruction is first used as the initial guess to mitigate truncation artifacts and also provide an extended field-of-view. Moreover, an interior feature refinement step, as an important processing operation, is performed after each iteration of PWLS-TV to recover the desired structure information which is lost during the TV minimization. Here, a feature descriptor is specifically designed and employed to distinguish structure from noise and noise-like artifacts. A modified steepest descent algorithm is adopted to minimize the associated objective function. The proposed method is applied to both digital phantom and in vivo Micro-CT datasets, and compared to FBP, ART-TV and PWLS-TV. The reconstruction results demonstrate that the proposed method performs better than the other methods in suppressing noise, reducing truncation and streak artifacts, and preserving features. The proposed approach demonstrates its potential usefulness for feature preservation of interior tomography under truncated projection measurements.

  18. A feature refinement approach for statistical interior CT reconstruction.

    PubMed

    Hu, Zhanli; Zhang, Yunwan; Liu, Jianbo; Ma, Jianhua; Zheng, Hairong; Liang, Dong

    2016-07-21

    Interior tomography is clinically desired to reduce the radiation dose rendered to patients. In this work, a new statistical interior tomography approach for computed tomography is proposed. The developed design focuses on taking into account the statistical nature of local projection data and recovering fine structures which are lost in the conventional total-variation (TV)-minimization reconstruction. The proposed method falls within the compressed sensing framework of TV minimization, which only assumes that the interior ROI is piecewise constant or polynomial and does not need any additional prior knowledge. To integrate the statistical distribution property of the projection data, the objective function is built under the criterion of penalized weighted least-squares (PWLS-TV). In the implementation of the proposed method, an interior projection extrapolation based FBP reconstruction is first used as the initial guess to mitigate truncation artifacts and also provide an extended field-of-view. Moreover, an interior feature refinement step, as an important processing operation, is performed after each iteration of PWLS-TV to recover the desired structure information which is lost during the TV minimization. Here, a feature descriptor is specifically designed and employed to distinguish structure from noise and noise-like artifacts. A modified steepest descent algorithm is adopted to minimize the associated objective function. The proposed method is applied to both digital phantom and in vivo Micro-CT datasets, and compared to FBP, ART-TV and PWLS-TV. The reconstruction results demonstrate that the proposed method performs better than the other methods in suppressing noise, reducing truncation and streak artifacts, and preserving features. The proposed approach demonstrates its potential usefulness for feature preservation of interior tomography under truncated projection measurements.
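
The PWLS objective at the heart of PWLS-TV can be illustrated with a quadratic roughness penalty standing in for TV, which keeps the sketch differentiable; the weights w model the per-ray variance of the projection data. The Lipschitz bound and step size below are crude and purely illustrative, and the interior feature refinement step is not reproduced.

```python
import numpy as np

def pwls(A, y, w, beta, n_iter=500):
    """Penalized weighted least squares with a quadratic roughness penalty:
       minimize (y - A x)^T W (y - A x) + beta ||D x||^2,
    W = diag(w) (inverse data variances), D = circular first differences.
    Solved by fixed-step gradient descent with a crude Lipschitz bound."""
    n = A.shape[1]
    D = np.eye(n) - np.roll(np.eye(n), 1, axis=1)
    L = 2.0 * (np.linalg.norm((A.T * w) @ A, 2) + 4.0 * beta)
    step = 1.0 / L
    x = np.zeros(n)
    for _ in range(n_iter):
        grad = 2.0 * A.T @ (w * (A @ x - y)) + 2.0 * beta * (D.T @ (D @ x))
        x -= step * grad
    return x
```

Swapping the quadratic penalty for TV recovers the non-smooth objective of the record, which is why the authors resort to a modified steepest descent scheme.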

  19. Evaluation of accelerated iterative x-ray CT image reconstruction using floating point graphics hardware.

    PubMed

    Kole, J S; Beekman, F J

    2006-02-21

    Statistical reconstruction methods offer possibilities to improve image quality compared with analytical methods, but current reconstruction times prohibit routine application in clinical and micro-CT. In particular, for cone-beam x-ray CT, the use of graphics hardware has been proposed to accelerate the forward- and back-projection operations in order to reduce reconstruction times. In the past, wide application of this texture-mapping hardware approach was hampered by its limited intrinsic accuracy. Recently, however, floating point precision has become available in the latest generation of commodity graphics cards. In this paper, we utilize this feature to construct a graphics hardware accelerated version of the ordered subset convex reconstruction algorithm. The aims of this paper are (i) to study the impact of graphics hardware acceleration on reconstructed image accuracy for statistical reconstruction and (ii) to measure the speed increase one can obtain by using graphics hardware acceleration. We compare the unaccelerated algorithm with the graphics hardware accelerated version, and for the latter we consider two different interpolation techniques. A simulation study of a micro-CT scanner with a mathematical phantom shows that speed-ups by a factor of 40 to 222 over the unaccelerated algorithm can be achieved, depending on the phantom and detector sizes, with almost no loss of reconstructed image accuracy. Reconstruction from physical phantom data confirms the usability of the accelerated algorithm for practical cases.

  20. Statistical iterative reconstruction for streak artefact reduction when using multidetector CT to image the dento-alveolar structures.

    PubMed

    Dong, J; Hayakawa, Y; Kober, C

    2014-01-01

    When metallic prosthetic appliances and dental fillings are present in the oral cavity, metal-induced streak artefacts are unavoidable in CT images. The aim of this study was to develop a method for artefact reduction using statistical reconstruction on multidetector row CT images. Adjacent CT images often depict similar anatomical structures. Therefore, images with weak artefacts were first reconstructed using projection data from an artefact-free image in a neighbouring thin slice. Images with moderate and strong artefacts were then processed in sequence by successive iterative restoration, where the projection data were generated from the adjacent reconstructed slice. First, the basic maximum likelihood-expectation maximization algorithm was applied. Next, the ordered subset-expectation maximization algorithm was examined. Alternatively, a small region of interest was designated. Finally, a general-purpose graphics processing unit was applied in both situations. The algorithms reduced the metal-induced streak artefacts on multidetector row CT images when the sequential processing method was applied. The ordered subset-expectation maximization algorithm and the small region of interest reduced the processing duration without apparent detriment. The general-purpose graphics processing unit realized high performance. In summary, a statistical reconstruction method was applied for streak artefact reduction, and the alternative algorithms were effective. Both software and hardware tools, such as ordered subset-expectation maximization, a small region of interest and a general-purpose graphics processing unit, achieved fast artefact correction.
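
    The maximum likelihood-expectation maximization update applied first above has a compact multiplicative form, x ← x · Aᵀ(y / Ax) / Aᵀ1. A toy, illustrative sketch on a 2-pixel, 3-ray system (not the study's implementation):

```python
import numpy as np

def mlem(y, A, n_iter=200):
    """MLEM: multiplicative update x <- x * A^T(y / Ax) / A^T 1."""
    x = np.ones(A.shape[1])      # positive initial estimate
    sens = A.sum(axis=0)         # sensitivity image A^T 1
    for _ in range(n_iter):
        ratio = y / (A @ x)      # measured / forward-projected counts
        x *= (A.T @ ratio) / sens
    return x

A = np.array([[1.0, 0.0],        # toy system matrix: 3 rays, 2 pixels
              [0.0, 1.0],
              [1.0, 1.0]])
x_true = np.array([2.0, 5.0])
y = A @ x_true                   # noiseless "projections"
x = mlem(y, A)
```

    Ordered subset-expectation maximization accelerates this by applying the same update to subsets of the rays in turn.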

  1. Statistical iterative reconstruction to improve image quality for digital breast tomosynthesis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Xu, Shiyu, E-mail: shiyu.xu@gmail.com; Chen, Ying, E-mail: adachen@siu.edu; Lu, Jianping

    2015-09-15

    Purpose: Digital breast tomosynthesis (DBT) is a novel modality with the potential to improve early detection of breast cancer by providing three-dimensional (3D) imaging with a low radiation dose. 3D image reconstruction presents some challenges: cone-beam and flat-panel geometry, and highly incomplete sampling. A promising means to overcome these challenges is statistical iterative reconstruction (IR), since it provides the flexibility of accurate physics modeling and a general description of system geometry. The authors’ goal was to develop techniques for applying statistical IR to tomosynthesis imaging data. Methods: These techniques include the following: a physics model with a local voxel-pair based prior with flexible parameters to fine-tune image quality; a precomputed parameter λ in the prior, to remove data dependence and to achieve a uniform resolution property; an effective ray-driven technique to compute the forward and backprojection; and an oversampled, ray-driven method to perform high resolution reconstruction with a practical region-of-interest technique. To assess the performance of these techniques, the authors acquired phantom data on the stationary DBT prototype system. To solve the estimation problem, the authors proposed an optimization-transfer based algorithm framework that potentially allows fewer iterations to achieve an acceptably converged reconstruction. Results: IR improved the detectability of low-contrast and small microcalcifications, reduced cross-plane artifacts, improved spatial resolution, and lowered noise in reconstructed images. Conclusions: Although the computational load remains a significant challenge for practical development, the superior image quality provided by statistical IR, combined with advancing computational techniques, may bring benefits to screening, diagnostics, and intraoperative imaging in clinical applications.

  2. Potential benefit of the CT adaptive statistical iterative reconstruction method for pediatric cardiac diagnosis

    NASA Astrophysics Data System (ADS)

    Miéville, Frédéric A.; Ayestaran, Paul; Argaud, Christophe; Rizzo, Elena; Ou, Phalla; Brunelle, Francis; Gudinchet, François; Bochud, François; Verdun, Francis R.

    2010-04-01

    Adaptive Statistical Iterative Reconstruction (ASIR) is a new image reconstruction technique recently introduced by General Electric (GE). When combined with a conventional filtered back-projection (FBP) approach, this technique is able to reduce image noise. To quantify the benefits of the ASIR method over pure FBP in terms of image quality and dose reduction, the standard deviation (SD), the modulation transfer function (MTF), the noise power spectrum (NPS), the image uniformity and the noise homogeneity were examined. Measurements were performed on a quality control phantom while varying the CT dose index (CTDIvol) and the reconstruction kernels. A 64-MDCT was employed, and raw data were reconstructed with different percentages of ASIR on a CT console dedicated to ASIR reconstruction. Three radiologists also assessed a pediatric cardiac exam reconstructed with different ASIR percentages using the visual grading analysis (VGA) method. For the standard, soft and bone reconstruction kernels, the SD is reduced as the ASIR percentage increases up to 100%, with a greater benefit at low CTDIvol. MTF medium frequencies were slightly enhanced and modifications of the NPS shape curve were observed. However, for the pediatric cardiac CT exam, VGA scores indicate an upper limit to the ASIR benefit: 40% ASIR was observed to be the best trade-off between noise reduction and clinical realism of organ images. Using the phantom results, 40% ASIR corresponded to an estimated dose reduction of 30% under pediatric cardiac protocol conditions. In spite of this discrepancy between phantom and clinical results, the ASIR method is an important option when considering the reduction of radiation dose, especially for pediatric patients.
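
    As the abstract describes, the console exposes ASIR as a percentage combined with conventional FBP. The exact GE blending is proprietary; a common simplified reading, assumed here purely for illustration, is a linear mix of the FBP image and a fully iteratively reconstructed image:

```python
def blend(fbp_img, ir_img, asir_percent):
    """Toy ASIR-style mix: 0% -> pure FBP, 100% -> pure iterative image."""
    a = asir_percent / 100.0
    return [(1 - a) * f + a * i for f, i in zip(fbp_img, ir_img)]

fbp = [10.0, 12.0, 11.0]    # noisier FBP pixel values (toy numbers)
ir  = [10.5, 11.0, 11.2]    # smoother IR pixel values (toy numbers)
img40 = blend(fbp, ir, 40)  # 40% ASIR, the study's preferred trade-off
```

    Under this reading, the study's finding is that 40% weighting of the iterative image preserved clinical realism while allowing a ~30% dose reduction.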

  3. Measuring the Autocorrelation Function of Nanoscale Three-Dimensional Density Distribution in Individual Cells Using Scanning Transmission Electron Microscopy, Atomic Force Microscopy, and a New Deconvolution Algorithm.

    PubMed

    Li, Yue; Zhang, Di; Capoglu, Ilker; Hujsak, Karl A; Damania, Dhwanil; Cherkezyan, Lusik; Roth, Eric; Bleher, Reiner; Wu, Jinsong S; Subramanian, Hariharan; Dravid, Vinayak P; Backman, Vadim

    2017-06-01

    Essentially all biological processes are highly dependent on the nanoscale architecture of the cellular components where these processes take place. Statistical measures, such as the autocorrelation function (ACF) of the three-dimensional (3D) mass-density distribution, are widely used to characterize cellular nanostructure. However, conventional methods of reconstruction of the deterministic 3D mass-density distribution, from which these statistical measures can be calculated, have been inadequate for thick biological structures, such as whole cells, due to the conflict between the need for nanoscale resolution and its inverse relationship with thickness after conventional tomographic reconstruction. To tackle the problem, we have developed a robust method to calculate the ACF of the 3D mass-density distribution without tomography. Assuming the biological mass distribution is isotropic, our method allows for accurate statistical characterization of the 3D mass-density distribution by ACF with two data sets: a single projection image by scanning transmission electron microscopy and a thickness map by atomic force microscopy. Here we present validation of the ACF reconstruction algorithm, as well as its application to calculate the statistics of the 3D distribution of mass-density in a region containing the nucleus of an entire mammalian cell. This method may provide important insights into architectural changes that accompany cellular processes.
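
    The core statistic here, the ACF of density fluctuations, is conveniently computed through the power spectrum (Wiener-Khinchin theorem). Below is a 1D toy sketch of that computation only; the paper's actual two-input algorithm, combining a STEM projection with an AFM thickness map under an isotropy assumption, is not reproduced.

```python
import numpy as np

def acf(field):
    """Normalized autocorrelation of a 1D field via its power spectrum."""
    f = field - field.mean()                  # density fluctuations
    power = np.abs(np.fft.fft(f)) ** 2        # power spectrum
    corr = np.fft.ifft(power).real / f.size   # inverse FFT -> autocorrelation
    return corr / corr[0]                     # normalize so ACF(0) = 1

rho = np.sin(np.linspace(0, 8 * np.pi, 256))  # toy periodic "density" profile
r = acf(rho)
```
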

  5. Reconstruction of a Real World Social Network using the Potts Model and Loopy Belief Propagation.

    PubMed

    Bisconti, Cristian; Corallo, Angelo; Fortunato, Laura; Gentile, Antonio A; Massafra, Andrea; Pellè, Piergiuseppe

    2015-01-01

    The scope of this paper is to test the adoption of a statistical model derived from condensed matter physics for the reconstruction of the structure of a social network. The inverse Potts model, traditionally applied to recursive observations of quantum states in an ensemble of particles, is here applied to observations of members' states in an organization and their (anti)correlations, thus inferring interactions as links among the members. Adopting proper (Bethe) approximations, this inverse problem is shown to be tractable. Within an operational framework, the network-reconstruction method is tested on a small real-world social network, the Italian parliament. In this case study, it is easy to track the statuses of parliament members, using (co)sponsorships of law proposals as the initial dataset. In previous studies of similar activity-based networks, the graph structure was inferred directly from activity co-occurrences; here we compare our statistical reconstruction with such standard methods, outlining discrepancies and advantages.
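
    The "standard method" baseline mentioned at the end, inferring the graph directly from activity co-occurrences, can be sketched in a few lines. The co-sponsorship matrix and the threshold below are toy assumptions; the paper's inverse-Potts/loopy-BP reconstruction is substantially more involved and is not reproduced here.

```python
import numpy as np

# Rows are members, columns are bills; 1 = (co)sponsored. Toy data.
votes = np.array([[1, 0, 1, 1, 0],   # member A
                  [1, 0, 1, 1, 0],   # member B behaves like A
                  [0, 1, 0, 0, 1]])  # member C anti-correlates with A and B

corr = np.corrcoef(votes)            # member-by-member correlation matrix
# Declare a link where |correlation| exceeds a chosen threshold (off-diagonal).
links = (np.abs(corr) > 0.8) & ~np.eye(3, dtype=bool)
```

    Because this baseline reads links directly off (anti)correlations, it cannot distinguish direct interactions from indirect ones, which is one motivation for the inverse-model approach.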

  7. Low-dose X-ray computed tomography image reconstruction with a combined low-mAs and sparse-view protocol

    PubMed Central

    Gao, Yang; Bian, Zhaoying; Huang, Jing; Zhang, Yunwan; Niu, Shanzhou; Feng, Qianjin; Chen, Wufan; Liang, Zhengrong; Ma, Jianhua

    2014-01-01

    To realize low-dose imaging in X-ray computed tomography (CT) examinations, lowering the milliampere-seconds (low-mAs) or reducing the required number of projection views per rotation around the body (sparse-view) has been widely studied as an easy and effective approach. In this study, we focus on low-dose CT image reconstruction from sinograms acquired with a combined low-mAs and sparse-view protocol and propose a two-step image reconstruction strategy. Specifically, to suppress the significant statistical noise in the noisy and insufficient sinograms, an adaptive sinogram restoration (ASR) method is first proposed with consideration of the statistical property of sinogram data; then, to further acquire a high-quality image, a total variation based projection onto convex sets (TV-POCS) method is adopted with a slight modification. For simplicity, the present reconstruction strategy is termed “ASR-TV-POCS.” To evaluate the present ASR-TV-POCS method, both qualitative and quantitative studies were performed on a physical phantom. Experimental results have demonstrated that the present ASR-TV-POCS method can achieve promising gains over other existing methods in terms of noise reduction, contrast-to-noise ratio, and edge detail preservation. PMID:24977611

  8. Improving Weak Lensing Mass Map Reconstructions using Gaussian and Sparsity Priors: Application to DES SV

    DOE PAGES

    Jeffrey, N.; Abdalla, F. B.; Lahav, O.; ...

    2018-05-15

    Mapping the underlying density field, including non-visible dark matter, using weak gravitational lensing measurements is now a standard tool in cosmology. Due to its importance to the science results of current and upcoming surveys, the quality of the convergence reconstruction methods should be well understood. We compare three different mass map reconstruction methods: Kaiser-Squires (KS), Wiener filter, and GLIMPSE. KS is a direct inversion method, taking no account of survey masks or noise. The Wiener filter is well motivated for Gaussian density fields in a Bayesian framework. The GLIMPSE method uses sparsity, with the aim of reconstructing non-linearities in the density field. We compare these methods with a series of tests on the public Dark Energy Survey (DES) Science Verification (SV) data and on realistic DES simulations. The Wiener filter and GLIMPSE methods offer substantial improvement on the standard smoothed KS with a range of metrics. For both the Wiener filter and GLIMPSE convergence reconstructions we present a 12% improvement in Pearson correlation with the underlying truth from simulations. To compare the mapping methods' abilities to find mass peaks, we measure the difference between peak counts from simulated ΛCDM shear catalogues and catalogues with no mass fluctuations. This is a standard data vector when inferring cosmology from peak statistics. The maximum signal-to-noise value of these peak statistic data vectors was increased by a factor of 3.5 for the Wiener filter and by a factor of 9 using GLIMPSE. With simulations we measure the reconstruction of the harmonic phases, showing that the concentration of the phase residuals is improved 17% by GLIMPSE and 18% by the Wiener filter. We show that the correlation between the reconstructions from data and the foreground redMaPPer clusters is increased 18% by the Wiener filter and 32% by GLIMPSE. [Abridged]

  11. Reduction of Metal Artifact in Single Photon-Counting Computed Tomography by Spectral-Driven Iterative Reconstruction Technique

    PubMed Central

    Nasirudin, Radin A.; Mei, Kai; Panchev, Petar; Fehringer, Andreas; Pfeiffer, Franz; Rummeny, Ernst J.; Fiebich, Martin; Noël, Peter B.

    2015-01-01

    Purpose The exciting prospect of Spectral CT (SCT) using photon-counting detectors (PCD) will lead to new techniques in computed tomography (CT) that take advantage of the additional spectral information provided. We introduce a method to reduce metal artifacts in X-ray tomography by incorporating knowledge obtained from SCT into a statistical iterative reconstruction scheme. We call our method Spectral-driven Iterative Reconstruction (SPIR). Method The proposed algorithm consists of two main components: material decomposition and penalized maximum likelihood iterative reconstruction. In this study, spectral data acquisitions with an energy-resolving PCD were simulated using a Monte-Carlo simulator based on the EGSnrc C++ class library. A jaw phantom with a dental implant made of gold was used as the object in this study. A total of three dental implant shapes were simulated separately to test the influence of prior knowledge on the overall performance of the algorithm. The generated projection data were first decomposed into three basis functions: photoelectric absorption, Compton scattering and attenuation of gold. A pseudo-monochromatic sinogram was calculated and used as input to the reconstruction, while the spatial information of the gold implant was used as a prior. The results of the algorithm were assessed and benchmarked against state-of-the-art reconstruction methods. Results The decomposition results illustrate that a gold implant of any shape can be distinguished from the other components of the phantom. Additionally, the result of the penalized maximum likelihood iterative reconstruction shows that artifacts are significantly reduced in SPIR reconstructed slices in comparison to other known techniques, while at the same time details around the implant are preserved. Quantitatively, the SPIR algorithm best reflects the true attenuation value in comparison to the other algorithms. 
Conclusion It is demonstrated that the combination of the additional information from Spectral CT and statistical reconstruction can significantly improve image quality, especially by reducing the streak artifacts caused by the presence of materials with high atomic numbers. PMID:25955019

  12. Impact of joint statistical dual-energy CT reconstruction of proton stopping power images: Comparison to image- and sinogram-domain material decomposition approaches.

    PubMed

    Zhang, Shuangyue; Han, Dong; Politte, David G; Williamson, Jeffrey F; O'Sullivan, Joseph A

    2018-05-01

    The purpose of this study was to assess the performance of a novel dual-energy CT (DECT) approach for proton stopping power ratio (SPR) mapping that integrates image reconstruction and material characterization using a joint statistical image reconstruction (JSIR) method based on a linear basis vector model (BVM). A systematic comparison between the JSIR-BVM method and previously described DECT image- and sinogram-domain decomposition approaches was also carried out on synthetic data. The JSIR-BVM method was implemented to estimate the electron densities and mean excitation energies (I-values) required by the Bethe equation for SPR mapping. In addition, image- and sinogram-domain DECT methods based on three available SPR models, including BVM, were implemented for comparison. The intrinsic SPR modeling accuracy of the three models was first validated. Synthetic DECT transmission sinograms of two 330 mm diameter phantoms, each containing 17 soft and bony tissues (for a total of 34) of known composition, were then generated with spectra of 90 and 140 kVp. The estimation accuracy of the reconstructed SPR images was evaluated for the seven investigated methods. The impact of phantom size and insert location on SPR estimation accuracy was also investigated. All three selected DECT-SPR models predict the SPR of all tissue types with less than 0.2% RMS error under idealized conditions with no reconstruction uncertainties. When applied to synthetic sinograms, the JSIR-BVM method achieves the best performance, with mean and RMS-average errors of less than 0.05% and 0.3%, respectively, for all noise levels, while the image- and sinogram-domain decomposition methods show increasing mean and RMS-average errors with increasing noise level. The JSIR-BVM method also reduces statistical SPR variation sixfold compared to the other methods. 
A 25% phantom diameter change causes up to 4% SPR differences for the image-domain decomposition approach, while the JSIR-BVM method and sinogram-domain decomposition methods are insensitive to size change. Among all the investigated methods, the JSIR-BVM method achieves the best performance for SPR estimation in our simulation phantom study. This novel method is robust with respect to sinogram noise and residual beam-hardening effects, yielding SPR estimation errors comparable to intrinsic BVM modeling error. In contrast, the achievable SPR estimation accuracy of the image- and sinogram-domain decomposition methods is dominated by the CT image intensity uncertainties introduced by the reconstruction and decomposition processes. © 2018 American Association of Physicists in Medicine.
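
    The Bethe-equation mapping from (electron density, I-value) to SPR that all of the compared methods ultimately feed into can be sketched as follows. The tissue inputs and the fixed beta are illustrative placeholders, not values from the paper.

```python
import math

# SPR = rho_e * L(I) / L(I_water), where the stopping number is
#   L(I) = ln(2 m_e c^2 beta^2 / (I (1 - beta^2))) - beta^2.
ME_C2_EV = 0.511e6          # electron rest energy, eV
I_WATER = 75.0              # mean excitation energy of water, eV (common value)

def stopping_number(i_ev, beta2):
    return math.log(2 * ME_C2_EV * beta2 / (i_ev * (1 - beta2))) - beta2

def spr(rho_e, i_ev, beta2=0.29):   # beta^2 ~ 0.29 for a ~175 MeV proton
    """SPR from electron density relative to water and I-value in eV."""
    return rho_e * stopping_number(i_ev, beta2) / stopping_number(I_WATER, beta2)

spr_bone = spr(rho_e=1.70, i_ev=106.4)   # cortical-bone-like toy inputs
```

    The SPR depends only weakly on the proton energy through beta, which is why a single representative beta is often used for treatment-planning SPR maps.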

  13. Planning of skull reconstruction based on a statistical shape model combined with geometric morphometrics.

    PubMed

    Fuessinger, Marc Anton; Schwarz, Steffen; Cornelius, Carl-Peter; Metzger, Marc Christian; Ellis, Edward; Probst, Florian; Semper-Hogg, Wiebke; Gass, Mathieu; Schlager, Stefan

    2018-04-01

    Virtual reconstruction of large cranial defects is still a challenging task. The current reconstruction procedures depend on the surgeon's experience and skills in planning the reconstruction based on mirroring and manual adaptation. The aim of this study is to propose and evaluate a computer-based approach employing a statistical shape model (SSM) of the cranial vault. An SSM was created based on 131 CT scans of pathologically unaffected adult crania. After segmentation, the resulting surface mesh of one patient was established as template and subsequently registered to the entire sample. Using the registered surface meshes, an SSM was generated capturing the shape variability of the cranial vault. The knowledge about this shape variation in healthy patients was used to estimate the missing parts. The accuracy of the reconstruction was evaluated by using 31 CT scans not included in the SSM. Both unilateral and bilateral bony defects were created on each skull. The reconstruction was performed using the current gold standard of mirroring the intact to the affected side, and the result was compared to the outcome of our proposed SSM-driven method. The accuracy of the reconstruction was determined by calculating the distances to the corresponding parts on the intact skull. While unilateral defects could be reconstructed with both methods, the reconstruction of bilateral defects was, for obvious reasons, only possible employing the SSM-based method. Comparing all groups, the analysis shows a significantly higher precision of the SSM group, with a mean error of 0.47 mm compared to the mirroring group which exhibited a mean error of 1.13 mm. Reconstructions of bilateral defects yielded only slightly higher estimation errors than those of unilateral defects. The presented computer-based approach using SSM is a precise and simple tool in the field of computer-assisted surgery. 
It helps to reconstruct large defects of the skull while accounting for the natural asymmetry of the cranium, and it is not limited to unilateral defects.
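
    The estimation step, fitting the SSM's shape coefficients to the intact region and reading off its prediction over the defect, can be sketched with a toy linear shape model. The mean and modes below are random stand-ins; in the paper they come from 131 registered cranial meshes.

```python
import numpy as np

rng = np.random.default_rng(2)
n_pts, n_modes = 30, 4
mean = rng.normal(size=n_pts)                                # toy mean shape
modes = np.linalg.qr(rng.normal(size=(n_pts, n_modes)))[0]   # orthonormal modes

b_true = rng.normal(size=n_modes)
shape = mean + modes @ b_true        # a "patient" skull lying in model space

observed = np.arange(20)             # intact vertices
missing = np.arange(20, 30)          # simulated defect region

# Least-squares fit of the shape coefficients using only the intact region ...
b_hat, *_ = np.linalg.lstsq(modes[observed],
                            shape[observed] - mean[observed], rcond=None)
# ... then the SSM fills in the defect from the fitted coefficients.
estimate = mean[missing] + modes[missing] @ b_hat
```

    Because the model is fitted globally, a bilateral defect poses no special problem, unlike mirroring, which requires an intact contralateral side.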

  14. Preliminary frequency-domain analysis for the reconstructed spatial resolution of muon tomography

    NASA Astrophysics Data System (ADS)

    Yu, B.; Zhao, Z.; Wang, X.; Wang, Y.; Wu, D.; Zeng, Z.; Zeng, M.; Yi, H.; Luo, Z.; Yue, X.; Cheng, J.

    2014-11-01

    Muon tomography is an advanced technology for non-destructively detecting high atomic number materials. It exploits the multiple Coulomb scattering information of muons to reconstruct a scattering-density image of the traversed object. Because of the statistics of muon scattering, the measurement error of the system and the incompleteness of the data, the reconstruction is always accompanied by a certain level of interference, which influences the reconstructed spatial resolution. While statistical noise can be reduced by extending the measuring time, the system parameters determine the ultimate spatial resolution that a system can reach. In this paper, an effective frequency-domain model is proposed to analyze the reconstructed spatial resolution of muon tomography. The proposed method modifies the resolution analysis of conventional computed tomography (CT) to fit the different imaging mechanism of muon scattering tomography. The measured scattering information is described in the frequency domain, and a relationship between the measurements and the original image is derived in the Fourier domain, which we name the "Muon Central Slice Theorem". Furthermore, a preliminary analytical expression for the ultimate reconstructed spatial resolution is derived, and simulations are performed for validation. While the method is able to predict the ultimate spatial resolution of a given system, it can also be utilized for the optimization of system design and construction.

  15. Full dose reduction potential of statistical iterative reconstruction for head CT protocols in a predominantly pediatric population

    PubMed Central

    Mirro, Amy E.; Brady, Samuel L.; Kaufman, Robert A.

    2016-01-01

    Purpose To implement the maximum level of statistical iterative reconstruction that can be used to establish dose-reduced head CT protocols in a primarily pediatric population. Methods Select head examinations (brain, orbits, sinus, maxilla and temporal bones) were investigated. Dose-reduced head protocols using adaptive statistical iterative reconstruction (ASiR) were compared for image quality with the original filtered back projection (FBP) reconstructed protocols in phantom using the following metrics: image noise frequency (change in perceived appearance of noise texture), image noise magnitude, contrast-to-noise ratio (CNR), and spatial resolution. Dose reduction estimates were based on computed tomography dose index (CTDIvol) values. Patient CTDIvol and image noise magnitude were assessed in 737 examinations before and after dose reduction. Results Image noise texture was acceptable up to 60% ASiR for the Soft reconstruction kernel (at both 100 and 120 kVp), and up to 40% ASiR for the Standard reconstruction kernel. Implementation of 40% and 60% ASiR led to an average reduction in CTDIvol of 43% for brain, 41% for orbits, 30% for maxilla, 43% for sinus, and 42% for temporal bone protocols for patients between 1 month and 26 years, while maintaining an average noise magnitude difference of 0.1% (range: −3% to 5%), improving the CNR of low-contrast soft tissue targets, and improving the spatial resolution of high-contrast bony anatomy, as compared to FBP. Conclusion This study demonstrates a methodology for maximizing patient dose reduction while maintaining image quality using statistical iterative reconstruction for a primarily pediatric population undergoing head CT examination. PMID:27056425

  16. Detection of calcification clusters in digital breast tomosynthesis slices at different dose levels utilizing a SRSAR reconstruction and JAFROC

    NASA Astrophysics Data System (ADS)

    Timberg, P.; Dustler, M.; Petersson, H.; Tingberg, A.; Zackrisson, S.

    2015-03-01

    Purpose: To investigate detection performance for calcification clusters in reconstructed digital breast tomosynthesis (DBT) slices at different dose levels using a Super Resolution and Statistical Artifact Reduction (SRSAR) reconstruction method. Method: Simulated calcifications with irregular profiles (0.2 mm diameter) were combined to form clusters that were added to projection images (1-3 per abnormal image) acquired on a DBT system (Mammomat Inspiration, Siemens). The projection images were dose-reduced by software to form 35 abnormal cases and 25 normal cases as if acquired at the 100%, 75% and 50% dose levels (AGD of approximately 1.6 mGy for a 53 mm standard breast, measured according to EUREF v0.15). A standard FBP method and a SRSAR reconstruction method (utilizing IRIS iterative reconstruction filters, and outlier detection using Maximum-Intensity and Average-Intensity Projections) were used to reconstruct single central slices for a free-response task (60 images per observer and dose level). Six observers participated; their task was to detect the clusters and assign a confidence rating in randomly presented images from the whole image set (balanced by dose level). Trials were separated by one week to reduce possible memory bias. The outcome was analyzed for statistical differences using Jackknifed Alternative Free-response Receiver Operating Characteristics (JAFROC). Results: The results indicate that it is possible to reduce the dose by 50% with SRSAR without jeopardizing cluster detection. Conclusions: The detection performance for clusters can be maintained at a lower dose level by using SRSAR reconstruction.

  17. Impact of posterior rhabdosphincter reconstruction during robot-assisted radical prostatectomy: retrospective analysis of time to continence.

    PubMed

    Woo, Jason R; Shikanov, Sergey; Zorn, Kevin C; Shalhav, Arieh L; Zagaja, Gregory P

    2009-12-01

    Posterior rhabdosphincter (PR) reconstruction during robot-assisted radical prostatectomy (RARP) was introduced in an attempt to improve postoperative continence. In the present study, we evaluate time to achieve continence in patients who are undergoing RARP with and without PR reconstruction. A prospective RARP database was searched for most recent cases that were accomplished with PR reconstruction (group 1, n = 69) or with standard technique (group 2, n = 63). We performed the analysis applying two definitions of continence: 0 pads per day or 0-1 security pad per day. Patients were evaluated by telephone interview. Statistical analysis was carried out using the Kaplan-Meier method and log-rank test. With PR reconstruction, continence was improved when defined as 0-1 security pad per day (median time of 90 vs 150 days; P = 0.01). This difference did not achieve statistical significance when continence was defined as 0 pads per day (P = 0.12). A statistically significant improvement in continence rate and time to achieve continence is seen in patients who are undergoing PR reconstruction during RARP, with continence defined as 0-1 security/safety pad per day. A larger, prospective and randomized study is needed to better understand the impact of this technique on postoperative continence.
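    The Kaplan-Meier estimate behind this time-to-continence analysis can be sketched in a few lines. This is a generic estimator (assuming untied event times for simplicity), not the authors' statistical software:

    ```python
    import numpy as np

    def kaplan_meier(times, event):
        """Kaplan-Meier survival estimate. `times` are days to continence (or to
        last follow-up); `event` is 1 if continence was achieved, 0 if censored.
        Assumes untied event times for simplicity."""
        order = np.argsort(times)
        t = np.asarray(times, dtype=float)[order]
        e = np.asarray(event)[order]
        n = len(t)
        s, surv = 1.0, []
        for i in range(n):
            if e[i]:
                s *= 1.0 - 1.0 / (n - i)      # n - i patients still at risk
            surv.append(s)
        return t, np.array(surv)

    # Four patients, all achieving continence (no censoring): the "survival"
    # (still-incontinent) fraction steps down at each event time.
    t, s = kaplan_meier([30, 60, 90, 120], [1, 1, 1, 1])
    ```

    A log-rank test would then compare such curves between the reconstruction and standard-technique groups, as done in the study.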

  18. Noise correlation in PET, CT, SPECT and PET/CT data evaluated using autocorrelation function: a phantom study on data, reconstructed using FBP and OSEM.

    PubMed

    Razifar, Pasha; Sandström, Mattias; Schnieder, Harald; Långström, Bengt; Maripuu, Enn; Bengtsson, Ewert; Bergström, Mats

    2005-08-25

    Positron Emission Tomography (PET), Computed Tomography (CT), PET/CT and Single Photon Emission Tomography (SPECT) are non-invasive imaging tools used for creating two-dimensional (2D) cross-section images of three-dimensional (3D) objects. PET and SPECT can provide functional or biochemical information by measuring the distribution and kinetics of radiolabelled molecules, whereas CT visualizes X-ray density in the tissues of the body. PET/CT provides fused images representing both functional and anatomical information, with better precision in localization than PET alone. Images generated by these techniques are generally noisy, impairing their imaging potential and affecting the precision of quantitative values derived from them. It is therefore crucial to explore and understand the properties of noise in these imaging techniques. Here we used the autocorrelation function (ACF) to describe noise correlation and its non-isotropic behaviour in experimentally generated PET, CT, PET/CT and SPECT images. Experiments were performed using phantoms of different shapes. In the PET and PET/CT studies, data were acquired in 2D acquisition mode and reconstructed by both analytical filtered back projection (FBP) and iterative ordered subsets expectation maximisation (OSEM) methods. In the PET/CT studies, different magnitudes of X-ray dose in the transmission scan were employed by using different mA settings for the X-ray tube. In the CT studies, data were acquired using different slice thicknesses, with and without an applied dose reduction function, and the images were reconstructed by FBP. SPECT studies were performed in 2D and reconstructed using FBP and OSEM with post-reconstruction 3D filtering. ACF images were generated from the primary images, and profiles across the ACF images were used to describe the noise correlation in different directions. The variance of noise across the images was visualised as images and with profiles across these images. 
The most important finding was that the pattern of noise correlation is rotationally symmetric, or isotropic, independent of object shape in PET and PET/CT images reconstructed using the iterative method. This is, however, not the case in FBP images when the shape of the phantom is not circular. CT images reconstructed using FBP show the same non-isotropic pattern, independent of slice thickness and of whether the dose reduction function was used. SPECT images show an isotropic correlation of the noise independent of object shape or applied reconstruction algorithm. Noise in PET/CT images was identical regardless of the X-ray dose applied in the transmission part (CT), indicating that transmission noise at the applied doses does not propagate into the PET images and that the noise from the emission part is dominant. The results indicate that in human studies it is possible to use a low dose in the transmission part while maintaining the noise behaviour and quality of the images. The combined effect of noise correlation for asymmetric objects and a noise variance that varies across the image field significantly complicates the interpretation of the images when statistical methods are used, such as statistical estimates of precision in average values, statistical parametric mapping and principal component analysis. Hence it is recommended that iterative reconstruction methods be used for such applications. However, noise can be calculated analytically in images reconstructed by FBP but not in images reconstructed by iterative methods; therefore, for statistical analysis methods that depend on knowing the noise, FBP would be preferred.
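    The ACF of a noise image can be computed efficiently via the Wiener-Khinchin theorem. A minimal sketch of this kind of analysis (not the authors' exact implementation):

    ```python
    import numpy as np

    def noise_acf(image):
        """2D autocorrelation of a mean-subtracted noise image via the
        Wiener-Khinchin theorem (ACF = inverse FFT of the power spectrum),
        normalised so that the zero-lag value is 1."""
        x = image - image.mean()
        power = np.abs(np.fft.fft2(x)) ** 2
        acf = np.real(np.fft.ifft2(power))
        acf /= acf[0, 0]                      # unit autocorrelation at zero lag
        return np.fft.fftshift(acf)           # move zero lag to the centre

    # White (uncorrelated) noise: a sharp peak at zero lag, ~0 at other lags.
    rng = np.random.default_rng(1)
    acf = noise_acf(rng.normal(size=(128, 128)))
    center = acf[64, 64]                      # zero-lag value after fftshift
    ```

    Profiles through such an ACF image in different directions then reveal whether the noise correlation is isotropic, which is the diagnostic used in the study.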

  19. Reconstruction of electrical impedance tomography (EIT) images based on the expectation maximum (EM) method.

    PubMed

    Wang, Qi; Wang, Huaxiang; Cui, Ziqiang; Yang, Chengyi

    2012-11-01

    Electrical impedance tomography (EIT) calculates the internal conductivity distribution within a body from electrical contact measurements. Image reconstruction for EIT is an inverse problem that is both non-linear and ill-posed. Traditional regularization methods cannot avoid introducing negative values into the solution, and this negativity produces artifacts in the reconstructed images in the presence of noise. In this paper, a statistical method, namely the expectation maximization (EM) method, is used to solve the inverse problem for EIT. The mathematical model of EIT is transformed into a non-negatively constrained likelihood minimization problem, and the solution is obtained by the gradient projection-reduced Newton (GPRN) iteration method. This paper also discusses strategies for choosing parameters. Simulation and experimental results indicate that reconstructed images of higher quality can be obtained by the EM method, compared with the traditional Tikhonov and conjugate gradient (CG) methods, even with non-negative processing. Copyright © 2012 ISA. Published by Elsevier Ltd. All rights reserved.
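    The key property exploited above is that EM-type iterations preserve non-negativity automatically. A generic sketch of the classic multiplicative EM (Richardson-Lucy-type) update for a linear model (this is not the GPRN solver used by the authors):

    ```python
    import numpy as np

    def em_update(x, A, y, n_iter):
        """Multiplicative EM iteration for a linear model y ~ Poisson(A @ x).
        Starting from a positive x, every iterate stays non-negative, which
        avoids the negativity artifacts of unconstrained regularization."""
        s = np.maximum(A.sum(axis=0), 1e-12)    # sensitivity (column sums)
        for _ in range(n_iter):
            ratio = y / np.maximum(A @ x, 1e-12)
            x = x * (A.T @ ratio) / s
        return x

    # Tiny noiseless example: iterate from a flat non-negative start.
    rng = np.random.default_rng(2)
    A = rng.random((40, 10))
    x_true = rng.random(10)
    y = A @ x_true
    x = em_update(np.ones(10), A, y, n_iter=500)
    ```

    After a few hundred iterations the fit residual shrinks substantially while the solution remains non-negative throughout.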

  20. Acceleration of the direct reconstruction of linear parametric images using nested algorithms.

    PubMed

    Wang, Guobao; Qi, Jinyi

    2010-03-07

    Parametric imaging using dynamic positron emission tomography (PET) provides important information for biological research and clinical diagnosis. Indirect and direct methods have been developed for reconstructing linear parametric images from dynamic PET data. Indirect methods are relatively simple and easy to implement because the image reconstruction and kinetic modeling are performed in two separate steps. Direct methods estimate parametric images directly from raw PET data and are statistically more efficient. However, the convergence rate of direct algorithms can be slow due to the coupling between the reconstruction and kinetic modeling. Here we present two fast gradient-type algorithms for direct reconstruction of linear parametric images. The new algorithms decouple the reconstruction and linear parametric modeling at each iteration by employing the principle of optimization transfer. Convergence speed is accelerated by running more sub-iterations of linear parametric estimation because the computation cost of the linear parametric modeling is much less than that of the image reconstruction. Computer simulation studies demonstrated that the new algorithms converge much faster than the traditional expectation maximization (EM) and the preconditioned conjugate gradient algorithms for dynamic PET.

  1. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Naseri, M; Rajabi, H; Wang, J

    Purpose: Respiration causes lesion smearing, image blurring and quality degradation, affecting lesion contrast and the ability to define correct lesion size. The spatial resolution of current multi-pinhole SPECT (MPHS) scanners is sub-millimeter; the effect of motion is therefore more noticeable than in conventional SPECT scanners. Gated imaging aims to reduce motion artifacts. A major issue in gating is the lack of statistics: individual reconstructed frames are noisy. The increased noise in each frame deteriorates the quantitative accuracy of the MPHS images. The objective of this work is to enhance image quality in 4D-MPHS imaging by 4D image reconstruction. Methods: The new algorithm requires deformation vector fields (DVFs) that are calculated by non-rigid Demons registration. The algorithm is based on the motion-incorporated version of the ordered subset expectation maximization (OSEM) algorithm. This iterative algorithm is capable of making full use of all projections to reconstruct each individual frame. To evaluate the performance of the proposed algorithm, a simulation study was conducted. A fast ray-tracing method was used to generate MPHS projections of a 4D digital mouse phantom with a small tumor in the liver in eight different respiratory phases. To evaluate the potential of the 4D-OSEM algorithm, the tumor-to-liver activity ratio was compared with that of other image reconstruction methods, including 3D-MPHS and post-reconstruction registration with Demons-derived DVFs. Results: Image quality of 4D-MPHS is greatly improved by the 4D-OSEM algorithm. When all projections are used to reconstruct a 3D-MPHS image, motion blurring artifacts are present, leading to overestimation of the tumor size and 24% tumor contrast underestimation. This error was reduced to 16% and 10% for the post-reconstruction registration method and 4D-OSEM, respectively. Conclusion: The 4D-OSEM method can be used for motion correction in 4D-MPHS. 
The statistics and quantification are improved since all projection data are combined to update the image.

  2. WE-AB-204-09: Respiratory Motion Correction in 4D-PET by Simultaneous Motion Estimation and Image Reconstruction (SMEIR)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kalantari, F; Wang, J; Li, T

    2015-06-15

    Purpose: In conventional 4D-PET, images from different frames are reconstructed individually and aligned by registration methods. Two issues with these approaches are: 1) reconstruction algorithms do not make full use of all projection statistics; and 2) image registration between noisy images can result in poor alignment. In this study we investigated the use of the simultaneous motion estimation and image reconstruction (SMEIR) method, originally developed for cone-beam CT, for motion estimation/correction in 4D-PET. Methods: A modified ordered-subset expectation maximization algorithm coupled with total variation minimization (OSEM-TV) is used to obtain a primary motion-compensated PET (pmc-PET) from all projection data, using Demons-derived deformation vector fields (DVFs) as initialization. A motion model update is then performed to obtain an optimal set of DVFs between the pmc-PET and the other phases by matching the forward projection of the deformed pmc-PET with the measured projections of the other phases. Using the updated DVFs, OSEM-TV image reconstruction is repeated and new DVFs are estimated based on the updated images. A 4D XCAT phantom with typical FDG biodistribution and a 10 mm diameter tumor was used to evaluate the performance of the SMEIR algorithm. Results: Image quality of 4D-PET is greatly improved by the SMEIR algorithm. When all projections are used to reconstruct a 3D-PET image, motion blurring artifacts are present, leading to a more-than-fivefold overestimation of the tumor size and a 54% underestimation of the tumor-to-lung contrast ratio. This error was reduced to 37% and 20% for the post-reconstruction registration method and SMEIR, respectively. Conclusion: The SMEIR method can be used for motion estimation/correction in 4D-PET. The statistics are greatly improved since all projection data are combined to update the image. The performance of the SMEIR algorithm for 4D-PET is sensitive to the smoothness control parameters in the DVF estimation step.

  3. Methods to Approach Velocity Data Reduction and Their Effects on Conformation Statistics in Viscoelastic Turbulent Channel Flows

    NASA Astrophysics Data System (ADS)

    Samanta, Gaurab; Beris, Antony; Handler, Robert; Housiadas, Kostas

    2009-03-01

    Karhunen-Loeve (KL) analysis of DNS data of viscoelastic turbulent channel flows helps reveal more information on the time-dependent dynamics of the viscoelastic modification of turbulence [Samanta et al., J. Turbulence (in press), 2008]. A selected set of KL modes can be used for data reduction modeling of these flows. However, it is pertinent that verification be done against established DNS results. For this purpose, we compared velocity and conformation statistics and probability density functions (PDFs) of relevant quantities obtained from DNS and from fields reconstructed using selected KL modes and time-dependent coefficients. While the velocity statistics show good agreement between DNS and KL reconstructions even with just hundreds of KL modes, tens of thousands of KL modes are required to adequately capture the trace of the polymer conformation resulting from DNS. New modifications to the KL method have therefore been attempted to account for the differences in conformation statistics. The applicability and impact of these new modified KL methods will be discussed from the perspective of data reduction modeling.
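    A KL (proper orthogonal) decomposition of snapshot data and its truncated reconstruction can be sketched with the SVD. This is a generic illustration on random data, not the authors' DNS pipeline:

    ```python
    import numpy as np

    # Karhunen-Loeve / POD via the SVD: each column of X is one flow snapshot
    # with the mean field removed; truncating to the leading r modes gives the
    # reduced-order reconstruction whose statistics would be compared to DNS.
    rng = np.random.default_rng(3)
    X = rng.normal(size=(200, 50))            # 200 spatial points, 50 snapshots
    X = X - X.mean(axis=1, keepdims=True)     # subtract the mean field

    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    r = 10                                    # number of retained KL modes
    X_r = U[:, :r] @ np.diag(s[:r]) @ Vt[:r, :]

    # Fraction of total variance (KL "energy") captured by the leading r modes.
    energy = (s[:r] ** 2).sum() / (s ** 2).sum()
    ```

    By the Eckart-Young theorem, the truncated SVD is the best rank-r reconstruction in the Frobenius norm, with error equal to the energy in the discarded modes.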

  4. Muon tomography imaging algorithms for nuclear threat detection inside large volume containers with the Muon Portal detector

    NASA Astrophysics Data System (ADS)

    Riggi, S.; Antonuccio-Delogu, V.; Bandieramonte, M.; Becciani, U.; Costa, A.; La Rocca, P.; Massimino, P.; Petta, C.; Pistagna, C.; Riggi, F.; Sciacca, E.; Vitello, F.

    2013-11-01

    Muon tomographic visualization techniques try to reconstruct a 3D image as close as possible to the real localization of the objects being probed. Statistical algorithms under test for the reconstruction of muon tomographic images in the Muon Portal Project are discussed here. Autocorrelation analysis and clustering algorithms have been employed within the context of methods based on the Point Of Closest Approach (POCA) reconstruction tool. An iterative method based on the log-likelihood approach was also implemented. Relative merits of all such methods are discussed, with reference to full GEANT4 simulations of different scenarios, incorporating medium and high-Z objects inside a container.
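    The POCA reconstruction tool mentioned above assigns each muon a scattering point at the closest approach between its incoming and outgoing tracks. A minimal sketch of that geometric computation (variable names are illustrative):

    ```python
    import numpy as np

    def poca(p1, d1, p2, d2):
        """Point Of Closest Approach between an incoming track (p1 + t*d1)
        and an outgoing track (p2 + s*d2): the midpoint of the shortest
        segment connecting the two lines."""
        d1 = d1 / np.linalg.norm(d1)
        d2 = d2 / np.linalg.norm(d2)
        w = p1 - p2
        b = d1 @ d2
        denom = 1.0 - b * b                  # vanishes for parallel tracks
        if denom < 1e-12:
            return 0.5 * (p1 + p2)           # degenerate case: no unique POCA
        t = (b * (w @ d2) - (w @ d1)) / denom
        s = ((w @ d2) - b * (w @ d1)) / denom
        return 0.5 * ((p1 + t * d1) + (p2 + s * d2))

    # Two tracks that actually intersect at (1, 2, 3): POCA recovers the vertex.
    vertex = poca(np.array([-1.0, 2.0, 3.0]), np.array([1.0, 0.0, 0.0]),
                  np.array([1.0, -1.0, 3.0]), np.array([0.0, 1.0, 0.0]))
    ```

    Accumulating such points over many muons, weighted by scattering angle, yields the POCA image that the clustering and autocorrelation analyses then operate on.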

  5. Ultra-low-dose computed tomographic angiography with model-based iterative reconstruction compared with standard-dose imaging after endovascular aneurysm repair: a prospective pilot study.

    PubMed

    Naidu, Sailen G; Kriegshauser, J Scott; Paden, Robert G; He, Miao; Wu, Qing; Hara, Amy K

    2014-12-01

    An ultra-low-dose radiation protocol reconstructed with model-based iterative reconstruction was compared with our standard-dose protocol. This prospective study evaluated 20 men undergoing surveillance-enhanced computed tomography after endovascular aneurysm repair. All patients underwent standard-dose and ultra-low-dose venous phase imaging; images were compared after reconstruction with filtered back projection, adaptive statistical iterative reconstruction, and model-based iterative reconstruction. Objective measures of aortic contrast attenuation and image noise were averaged. Images were subjectively assessed (1 = worst, 5 = best) for diagnostic confidence, image noise, and vessel sharpness. Aneurysm sac diameter and endoleak detection were compared. Quantitative image noise was 26% less with ultra-low-dose model-based iterative reconstruction than with standard-dose adaptive statistical iterative reconstruction and 58% less than with ultra-low-dose adaptive statistical iterative reconstruction. Average subjective noise scores were not different between ultra-low-dose model-based iterative reconstruction and standard-dose adaptive statistical iterative reconstruction (3.8 vs. 4.0, P = .25). Subjective scores for diagnostic confidence were better with standard-dose adaptive statistical iterative reconstruction than with ultra-low-dose model-based iterative reconstruction (4.4 vs. 4.0, P = .002). Vessel sharpness was decreased with ultra-low-dose model-based iterative reconstruction compared with standard-dose adaptive statistical iterative reconstruction (3.3 vs. 4.1, P < .0001). Ultra-low-dose model-based iterative reconstruction and standard-dose adaptive statistical iterative reconstruction aneurysm sac diameters were not significantly different (4.9 vs. 4.9 cm); concordance for the presence of endoleak was 100% (P < .001). 
Compared with a standard-dose technique, an ultra-low-dose model-based iterative reconstruction protocol provides comparable image quality and diagnostic assessment at a 73% lower radiation dose.

  6. Comparison among Reconstruction Algorithms for Quantitative Analysis of 11C-Acetate Cardiac PET Imaging.

    PubMed

    Shi, Ximin; Li, Nan; Ding, Haiyan; Dang, Yonghong; Hu, Guilan; Liu, Shuai; Cui, Jie; Zhang, Yue; Li, Fang; Zhang, Hui; Huo, Li

    2018-01-01

    Kinetic modeling of dynamic 11C-acetate PET imaging provides quantitative information for myocardium assessment. The quality and quantitation of PET images are known to depend on the PET reconstruction method. This study aims to investigate the impact of reconstruction algorithms on the quantitative analysis of dynamic 11C-acetate cardiac PET imaging. Patients with suspected alcoholic cardiomyopathy (N = 24) underwent 11C-acetate dynamic PET imaging after a low-dose CT scan. PET images were reconstructed using four algorithms: filtered backprojection (FBP), ordered subsets expectation maximization (OSEM), OSEM with time-of-flight (TOF), and OSEM with both time-of-flight and point-spread-function modeling (TPSF). Standardized uptake values (SUVs) at different time points were compared among images reconstructed using the four algorithms. Time-activity curves (TACs) in the myocardium and ventricular blood pools were generated from the dynamic image series. Kinetic parameters K1 and k2 were derived using a 1-tissue-compartment model for kinetic modeling of cardiac flow from 11C-acetate PET images. Significant image quality improvement was found in the images reconstructed using the iterative OSEM-type algorithms (OSEM, TOF, and TPSF) compared with FBP. However, no statistical differences in SUVs were observed among the four reconstruction methods at the selected time points. The kinetic parameters K1 and k2 also exhibited no statistical difference among the four reconstruction algorithms in terms of mean value and standard deviation. In the correlation analysis, however, OSEM reconstruction presented relatively higher residuals in correlation with FBP reconstruction compared with TOF and TPSF, and the TOF and TPSF reconstructions were highly correlated with each other. All the tested reconstruction algorithms performed similarly for quantitative analysis of 11C-acetate cardiac PET imaging. 
TOF and TPSF yielded highly consistent kinetic parameter results with superior image quality compared with FBP; OSEM was relatively less reliable. Both TOF and TPSF are recommended for cardiac 11C-acetate kinetic analysis.
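    The 1-tissue-compartment model used to derive K1 and k2 predicts the tissue TAC as the plasma input convolved with a single-exponential impulse response. A generic sketch of the forward model (not the authors' fitting code; a uniform time grid is assumed):

    ```python
    import numpy as np

    def one_tissue_model(t, cp, K1, k2):
        """1-tissue-compartment model: the tissue time-activity curve is the
        plasma input cp convolved with the impulse response K1 * exp(-k2 * t).
        Assumes a uniform time grid t."""
        dt = t[1] - t[0]
        irf = K1 * np.exp(-k2 * t)
        return np.convolve(cp, irf)[: len(t)] * dt

    # With a unit-area bolus input at t = 0, the tissue TAC reduces to the
    # impulse response itself -- a quick sanity check of the convolution.
    t = np.linspace(0.0, 10.0, 101)
    cp = np.zeros_like(t)
    cp[0] = 1.0 / (t[1] - t[0])
    ct = one_tissue_model(t, cp, K1=0.8, k2=0.3)
    ```

    Fitting this forward model to measured TACs (with the blood-pool TAC as input) yields the K1 and k2 estimates compared across reconstruction algorithms.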

  7. Direct reconstruction of pharmacokinetic parameters in dynamic fluorescence molecular tomography by the augmented Lagrangian method

    NASA Astrophysics Data System (ADS)

    Zhu, Dianwen; Zhang, Wei; Zhao, Yue; Li, Changqing

    2016-03-01

    Dynamic fluorescence molecular tomography (FMT) has the potential to quantify physiological or biochemical information, known as pharmacokinetic parameters, which are important for cancer detection, drug development and delivery, etc. To image those parameters, there are indirect methods, which are easier to implement but tend to provide images with low signal-to-noise ratio, and direct methods, which model all the measurement noise together and are statistically more efficient. Direct reconstruction methods in dynamic FMT have attracted much attention recently. However, the coupling of tomographic image reconstruction and the nonlinearity of kinetic parameter estimation due to compartment modeling imposes a huge computational burden on the direct reconstruction of kinetic parameters. In this paper, we propose to take advantage of both the direct and indirect reconstruction ideas through a variable splitting strategy under the augmented Lagrangian framework. Each iteration of the direct reconstruction is split into two steps: dynamic FMT image reconstruction and node-wise nonlinear least squares fitting of the pharmacokinetic parameter images. Through numerical simulation studies, we have found that the proposed algorithm can achieve good reconstruction results within a small amount of time. This will be the first step toward combined dynamic PET and FMT imaging in the future.

  8. Initial phantom study comparing image quality in computed tomography using adaptive statistical iterative reconstruction and new adaptive statistical iterative reconstruction v.

    PubMed

    Lim, Kyungjae; Kwon, Heejin; Cho, Jinhan; Oh, Jongyoung; Yoon, Seongkuk; Kang, Myungjin; Ha, Dongho; Lee, Jinhwa; Kang, Eunju

    2015-01-01

    The purpose of this study was to assess the image quality of a novel advanced iterative reconstruction (IR) method called "adaptive statistical IR V" (ASIR-V) by comparing its image noise, contrast-to-noise ratio (CNR), and spatial resolution with those of filtered back projection (FBP) and adaptive statistical IR (ASIR) on computed tomography (CT) phantom images. We performed CT scans at 5 different tube currents (50, 70, 100, 150, and 200 mA) using 3 types of CT phantoms. Scanned images were subsequently reconstructed with 7 different settings: FBP, and 3 levels each of ASIR and ASIR-V (30%, 50%, and 70%). Image noise was measured in the first study using a body phantom, CNR in the second study using a contrast phantom, and spatial resolution in the third study using a high-resolution phantom. We compared the image noise, CNR, and spatial resolution among the 7 reconstruction settings to determine whether noise reduction, high CNR, and high spatial resolution could be achieved with ASIR-V. Quantitative analysis in the first and second studies showed that images reconstructed using ASIR-V had reduced image noise and improved CNR compared with those of FBP and ASIR (P < 0.001). Qualitative analysis in the third study showed that images reconstructed using ASIR-V also had significantly better spatial resolution than those of FBP and ASIR (P < 0.001). Our phantom studies showed that ASIR-V provides a significant reduction in image noise and a significant improvement in CNR as well as spatial resolution. Therefore, this technique has the potential to reduce the radiation dose further without compromising image quality.
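    The noise and CNR metrics compared above are typically computed from regions of interest in the phantom images. A minimal sketch with synthetic ROI values (the numbers are illustrative, not the study's data):

    ```python
    import numpy as np

    # Phantom-style metrics: noise as the standard deviation inside a uniform
    # background ROI, CNR from a target ROI against that background.
    rng = np.random.default_rng(5)
    background = 40 + 5 * rng.normal(size=(50, 50))   # HU-like values, sigma = 5
    target = 60 + 5 * rng.normal(size=(20, 20))       # higher-attenuation insert

    noise = background.std()
    cnr = abs(target.mean() - background.mean()) / background.std()
    ```

    Repeating these measurements for each reconstruction setting (FBP, ASIR, ASIR-V at each blending level) gives the values entering the statistical comparison.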

  9. Neutron Tomography of a Fuel Cell: Statistical Learning Implementation of a Penalized Likelihood Method

    NASA Astrophysics Data System (ADS)

    Coakley, Kevin J.; Vecchia, Dominic F.; Hussey, Daniel S.; Jacobson, David L.

    2013-10-01

    At the NIST Neutron Imaging Facility, we collect neutron projection data for both the dry and wet states of a Proton-Exchange-Membrane (PEM) fuel cell. Transmitted thermal neutrons captured in a scintillator doped with lithium-6 produce scintillation light that is detected by an amorphous silicon detector. Based on joint analysis of the dry- and wet-state projection data, we reconstruct a residual neutron attenuation image using a Penalized Likelihood method with an edge-preserving Huber penalty function whose two parameters control how well jumps in the reconstruction are preserved and how well noisy fluctuations are smoothed out. The choice of these parameters greatly influences the resulting reconstruction. We present a data-driven method that objectively selects these parameters, and study its performance for both simulated and experimental data. Before reconstruction, we transform the projection data so that the variance-to-mean ratio is approximately one. For both simulated and measured projection data, the Penalized Likelihood reconstruction is visually sharper than one yielded by a standard Filtered Back Projection method. In an idealized simulation experiment, we demonstrate that the cross-validation procedure selects regularization parameters that yield a reconstruction that is nearly optimal according to a root-mean-square prediction error criterion.
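    The Huber penalty at the heart of this method is quadratic for small differences (smoothing noise) and linear for large ones (preserving edges). A minimal sketch of the penalty applied to neighboring-pixel differences (a generic form; the crossover `delta` plays the role of one of the two parameters, a weight on the penalty would be the other):

    ```python
    import numpy as np

    def huber(t, delta):
        """Edge-preserving Huber penalty: 0.5*t^2 for |t| <= delta (smooths
        small, noise-like differences), delta*|t| - 0.5*delta^2 for |t| > delta
        (penalises genuine jumps only linearly, so edges survive)."""
        a = np.abs(t)
        return np.where(a <= delta, 0.5 * a ** 2, delta * a - 0.5 * delta ** 2)

    # Roughness penalty over horizontally adjacent pixel differences: the big
    # 0.1 -> 5.0 jump is charged linearly rather than quadratically.
    img = np.array([[0.0, 0.1, 5.0],
                    [0.0, 0.2, 5.1]])
    roughness = huber(np.diff(img, axis=1), delta=1.0).sum()
    ```

    In the full method this roughness term is added to the negative log-likelihood of the projection data, and the two penalty parameters are chosen by the data-driven cross-validation procedure described above.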

  10. Evaluation of a Fully 3-D Bpf Method for Small Animal PET Images on Mimd Architectures

    NASA Astrophysics Data System (ADS)

    Bevilacqua, A.

    Positron Emission Tomography (PET) images can be reconstructed using Fourier transform methods. This paper describes the performance of a fully 3-D Backprojection-Then-Filter (BPF) algorithm on the Cray T3E machine and on a cluster of workstations. PET reconstruction of small animals is a class of problems characterized by poor counting statistics. The low-count nature of these studies necessitates 3-D reconstruction in order to improve the sensitivity of the PET system: by including axially oblique Lines Of Response (LORs), the sensitivity of the system can be significantly improved through 3-D acquisition and reconstruction. The BPF method is widely used in clinical studies because of its speed and easy implementation. Moreover, the BPF method is suitable for on-line 3-D reconstruction as it does not need any sinogram or rearranged data. In order to investigate the possibility of on-line processing, we reconstruct a phantom using the data stored in list-mode format by the data acquisition system. We show how the intrinsically parallel nature of the BPF method makes it suitable for on-line reconstruction on a MIMD system such as the Cray T3E. Lastly, we analyze the performance of this algorithm on a cluster of workstations.

  11. Improving Weak Lensing Mass Map Reconstructions using Gaussian and Sparsity Priors: Application to DES SV

    NASA Astrophysics Data System (ADS)

    Jeffrey, N.; Abdalla, F. B.; Lahav, O.; Lanusse, F.; Starck, J.-L.; Leonard, A.; Kirk, D.; Chang, C.; Baxter, E.; Kacprzak, T.; Seitz, S.; Vikram, V.; Whiteway, L.; Abbott, T. M. C.; Allam, S.; Avila, S.; Bertin, E.; Brooks, D.; Rosell, A. Carnero; Kind, M. Carrasco; Carretero, J.; Castander, F. J.; Crocce, M.; Cunha, C. E.; D'Andrea, C. B.; da Costa, L. N.; Davis, C.; De Vicente, J.; Desai, S.; Doel, P.; Eifler, T. F.; Evrard, A. E.; Flaugher, B.; Fosalba, P.; Frieman, J.; García-Bellido, J.; Gerdes, D. W.; Gruen, D.; Gruendl, R. A.; Gschwend, J.; Gutierrez, G.; Hartley, W. G.; Honscheid, K.; Hoyle, B.; James, D. J.; Jarvis, M.; Kuehn, K.; Lima, M.; Lin, H.; March, M.; Melchior, P.; Menanteau, F.; Miquel, R.; Plazas, A. A.; Reil, K.; Roodman, A.; Sanchez, E.; Scarpine, V.; Schubnell, M.; Sevilla-Noarbe, I.; Smith, M.; Soares-Santos, M.; Sobreira, F.; Suchyta, E.; Swanson, M. E. C.; Tarle, G.; Thomas, D.; Walker, A. R.

    2018-05-01

    Mapping the underlying density field, including non-visible dark matter, using weak gravitational lensing measurements is now a standard tool in cosmology. Due to its importance to the science results of current and upcoming surveys, the quality of the convergence reconstruction methods should be well understood. We compare three methods: Kaiser-Squires (KS), Wiener filter, and GLIMPSE. KS is a direct inversion, not accounting for survey masks or noise. The Wiener filter is well-motivated for Gaussian density fields in a Bayesian framework. GLIMPSE uses sparsity, aiming to reconstruct non-linearities in the density field. We compare these methods with several tests using public Dark Energy Survey (DES) Science Verification (SV) data and realistic DES simulations. The Wiener filter and GLIMPSE offer substantial improvements over smoothed KS with a range of metrics. Both the Wiener filter and GLIMPSE convergence reconstructions show a 12% improvement in Pearson correlation with the underlying truth from simulations. To compare the mapping methods' abilities to find mass peaks, we measure the difference between peak counts from simulated ΛCDM shear catalogues and catalogues with no mass fluctuations (a standard data vector when inferring cosmology from peak statistics); the maximum signal-to-noise of these peak statistics is increased by a factor of 3.5 for the Wiener filter and 9 for GLIMPSE. With simulations we measure the reconstruction of the harmonic phases; the phase residuals' concentration is improved 17% by GLIMPSE and 18% by the Wiener filter. The correlation between reconstructions from data and foreground redMaPPer clusters is increased 18% by the Wiener filter and 32% by GLIMPSE.
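    The Kaiser-Squires baseline in this comparison is a direct Fourier-space inversion of the shear maps. A flat-sky sketch of the standard estimator (sign and FFT conventions vary between codes; this round-trip uses a self-consistent pair of forward and inverse relations, with no mask or noise):

    ```python
    import numpy as np

    def kaiser_squires(g1, g2):
        """Direct Kaiser-Squires inversion: shear maps (g1, g2) -> convergence,
        in Fourier space. The k = 0 (mean) mode is unconstrained and set to 0."""
        n = g1.shape[0]
        k1, k2 = np.meshgrid(np.fft.fftfreq(n), np.fft.fftfreq(n), indexing="ij")
        ksq = k1 ** 2 + k2 ** 2
        ksq[0, 0] = 1.0                        # avoid division by zero at k = 0
        kap = ((k1**2 - k2**2) * np.fft.fft2(g1)
               + 2 * k1 * k2 * np.fft.fft2(g2)) / ksq
        kap[0, 0] = 0.0
        return np.real(np.fft.ifft2(kap))

    # Round trip: shear generated from a mean-zero convergence field is
    # inverted back to that field exactly (noiseless, unmasked case).
    rng = np.random.default_rng(4)
    kappa = rng.normal(size=(64, 64))
    kappa -= kappa.mean()
    fk = np.fft.fft2(kappa)
    k1, k2 = np.meshgrid(np.fft.fftfreq(64), np.fft.fftfreq(64), indexing="ij")
    ksq = k1 ** 2 + k2 ** 2
    ksq[0, 0] = 1.0
    g1 = np.real(np.fft.ifft2((k1**2 - k2**2) / ksq * fk))
    g2 = np.real(np.fft.ifft2(2 * k1 * k2 / ksq * fk))
    ```

    With survey masks and shape noise this direct inversion degrades, which is where the Wiener filter and GLIMPSE priors provide the improvements measured in the paper.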

  12. Reconstructing ice-age palaeoclimates: Quantifying low-CO2 effects on plants

    NASA Astrophysics Data System (ADS)

    Prentice, I. C.; Cleator, S. F.; Huang, Y. H.; Harrison, S. P.; Roulstone, I.

    2017-02-01

    We present a novel method to quantify the ecophysiological effects of changes in CO2 concentration during the reconstruction of climate changes from fossil pollen assemblages. The method does not depend on any particular vegetation model. Instead, it makes use of general equations from ecophysiology and hydrology that link moisture index (MI) to transpiration and the ratio of leaf-internal to ambient CO2 (χ). Statistically reconstructed MI values are corrected post facto for effects of CO2 concentration. The correction is based on the principle that e, the rate of water loss per unit carbon gain, should be inversely related to effective moisture availability as sensed by plants. The method involves solving a non-linear equation that relates e to MI, temperature and CO2 concentration via the Fu-Zhang relation between evapotranspiration and MI, Monteith's empirical relationship between vapour pressure deficit and evapotranspiration, and recently developed theory that predicts the response of χ to vapour pressure deficit and temperature. The solution to this equation provides a correction term for MI. The numerical value of the correction depends on the reconstructed MI. It is slightly sensitive to temperature, but primarily sensitive to CO2 concentration. Under low LGM CO2 concentration the correction is always positive, implying that LGM climate was wetter than it would seem from vegetation composition. A statistical reconstruction of last glacial maximum (LGM, 21±1 kyr BP) palaeoclimates, based on a new compilation of modern and LGM pollen assemblage data from Australia, is used to illustrate the method in practice. Applying the correction brings pollen-reconstructed LGM moisture availability in southeastern Australia better into line with palaeohydrological estimates of LGM climate.
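    The correction requires solving a non-linear equation for MI. The paper's actual equation couples e to MI, temperature and CO2 via the Fu-Zhang and Monteith relations; the sketch below only illustrates the numerical pattern (a bracketing bisection solve) with a caller-supplied, hypothetical function g in place of the real ecophysiological relation.

```python
def solve_mi(e_target, g, lo=0.0, hi=10.0, tol=1e-10):
    """Find MI such that g(MI) == e_target by bisection.

    g must be continuous and monotonically decreasing in MI (water cost per
    unit carbon gain falls as moisture availability rises); the real
    functional form from the paper is replaced here by a hypothetical
    stand-in supplied by the caller.
    """
    f = lambda mi: g(mi) - e_target
    assert f(lo) * f(hi) <= 0, "root not bracketed"
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if f(lo) * f(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)
```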

  13. Angular reconstitution-based 3D reconstructions of nanomolecular structures from superresolution light-microscopy images

    PubMed Central

    Salas, Desirée; Le Gall, Antoine; Fiche, Jean-Bernard; Valeri, Alessandro; Ke, Yonggang; Bron, Patrick; Bellot, Gaetan

    2017-01-01

    Superresolution light microscopy allows the imaging of labeled supramolecular assemblies at a resolution surpassing the classical diffraction limit. A serious limitation of the superresolution approach is sample heterogeneity and the stochastic character of the labeling procedure. To increase the reproducibility and the resolution of the superresolution results, we apply multivariate statistical analysis methods and 3D reconstruction approaches originally developed for cryogenic electron microscopy of single particles. These methods allow for the reference-free 3D reconstruction of nanomolecular structures from two-dimensional superresolution projection images. Since these 2D projection images all show the structure in high-resolution directions of the optical microscope, the resulting 3D reconstructions have the best possible isotropic resolution in all directions. PMID:28811371

  14. Weighted regularized statistical shape space projection for breast 3D model reconstruction.

    PubMed

    Ruiz, Guillermo; Ramon, Eduard; García, Jaime; Sukno, Federico M; Ballester, Miguel A González

    2018-07-01

    The use of 3D imaging has increased as a practical and useful tool for plastic and aesthetic surgery planning. Specifically, the possibility of representing the patient's breast anatomy as a 3D shape and simulating aesthetic or plastic procedures is a great tool for communication between surgeon and patient during surgery planning. To obtain the specific 3D model of a patient's breast, model-based reconstruction methods can be used. In particular, 3D morphable models (3DMM) are a robust and widely used method to perform 3D reconstruction. However, if additional prior information (i.e., known landmarks) is combined with the 3DMM statistical model, shape constraints can be imposed to improve the 3DMM fitting accuracy. In this paper, we present a framework to fit a 3DMM of the breast to two possible inputs: 2D photos and 3D point clouds (scans). Our method consists of a Weighted Regularized (WR) projection into the shape space. The contribution of each point to the 3DMM shape is weighted, allowing more relevance to be assigned to those points that we want to impose as constraints. Our method is applied at multiple stages of the 3D reconstruction process. Firstly, it can be used to obtain a 3DMM initialization from a sparse set of 3D points. Additionally, we embed our method in the 3DMM fitting process, in which more reliable or already-known 3D points, or regions of points, can be weighted to preserve their shape information. The proposed method has been tested in two different input settings (scans and 2D pictures), assessing both reconstruction frameworks with very positive results. Copyright © 2018 Elsevier B.V. All rights reserved.
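    A weighted, regularized projection into a PCA shape space can be sketched as a ridge-type least-squares problem. This is a minimal reading of the idea, not the paper's exact formulation; the weighting schedule and regularizer used in the actual fitting pipeline are theirs.

```python
import numpy as np

def wr_project(x, mean, basis, weights, lam):
    """Weighted, regularized projection of a shape x into a PCA shape space.

    Solves  argmin_b || W^(1/2) (mean + basis @ b - x) ||^2 + lam * ||b||^2,
    where W = diag(weights) assigns per-coordinate importance (e.g. points
    imposed as landmark constraints get large weights).
    """
    W = np.diag(weights)
    A = basis.T @ W @ basis + lam * np.eye(basis.shape[1])
    b = np.linalg.solve(A, basis.T @ W @ (x - mean))
    return b, mean + basis @ b
```

    With lam = 0 and uniform weights this reduces to ordinary least-squares projection; raising lam shrinks the coefficients toward the mean shape, while raising individual weights pins the constrained points.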

  15. The effects of iterative reconstruction in CT on low-contrast liver lesion volumetry: a phantom study

    NASA Astrophysics Data System (ADS)

    Li, Qin; Berman, Benjamin P.; Schumacher, Justin; Liang, Yongguang; Gavrielides, Marios A.; Yang, Hao; Zhao, Binsheng; Petrick, Nicholas

    2017-03-01

    Tumor volume measured from computed tomography images is considered a biomarker for disease progression or treatment response. The estimation of the tumor volume depends on the imaging system parameters selected, as well as lesion characteristics. In this study, we examined how different image reconstruction methods affect the measurement of lesions in an anthropomorphic liver phantom with a non-uniform background. Iterative statistics-based and model-based reconstructions, as well as filtered back-projection, were evaluated and compared in this study. Statistics-based and filtered back-projection yielded similar estimation performance, while model-based yielded higher precision but lower accuracy in the case of small lesions. Iterative reconstructions exhibited higher signal-to-noise ratio but slightly lower contrast of the lesion relative to the background. A better understanding of lesion volumetry performance as a function of acquisition parameters and lesion characteristics can lead to its incorporation as a routine sizing tool.

  16. Probabilistic Modeling and Visualization of the Flexibility in Morphable Models

    NASA Astrophysics Data System (ADS)

    Lüthi, M.; Albrecht, T.; Vetter, T.

    Statistical shape models, and in particular morphable models, have gained widespread use in computer vision, computer graphics and medical imaging. Researchers have started to build models of almost any anatomical structure in the human body. While these models provide a useful prior for many image analysis tasks, relatively little information about the shape represented by the morphable model is exploited. We propose a method for computing and visualizing the remaining flexibility, when a part of the shape is fixed. Our method, which is based on Probabilistic PCA, not only leads to an approach for reconstructing the full shape from partial information, but also allows us to investigate and visualize the uncertainty of a reconstruction. To show the feasibility of our approach we performed experiments on a statistical model of the human face and the femur bone. The visualization of the remaining flexibility allows for greater insight into the statistical properties of the shape.
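    The "remaining flexibility" given a fixed part can be read as the conditional covariance of a Gaussian shape model. The sketch below assumes the standard Probabilistic PCA covariance C = ΦΦᵀ + σ²I and applies the textbook Gaussian conditioning formulas; it is an interpretation, not the authors' implementation.

```python
import numpy as np

def remaining_flexibility(mu, phi, sigma2, fixed_idx, fixed_val):
    """Conditional mean/covariance of a PPCA shape model given fixed coordinates.

    Model: x ~ N(mu, C) with C = phi @ phi.T + sigma2 * I.  Conditioning a
    Gaussian on the observed block o yields the free block u; cond_mean is
    the reconstruction of the missing part and diag(cond_cov) quantifies the
    flexibility that remains for visualization.
    """
    d = mu.size
    C = phi @ phi.T + sigma2 * np.eye(d)
    o = np.asarray(fixed_idx)
    u = np.setdiff1d(np.arange(d), o)
    gain = C[np.ix_(u, o)] @ np.linalg.inv(C[np.ix_(o, o)])
    cond_mean = mu[u] + gain @ (fixed_val - mu[o])
    cond_cov = C[np.ix_(u, u)] - gain @ C[np.ix_(o, u)]
    return cond_mean, cond_cov
```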

  17. Five Centuries of Tree Ring Reconstructed Streamflow and Projections for Future Water Risk over the Upper Indus Watershed

    NASA Astrophysics Data System (ADS)

    Rao, M. P.; Cook, E. R.; Cook, B.; Palmer, J. G.; Uriarte, M.; Devineni, N.; Lall, U.; D'Arrigo, R.; Woodhouse, C. A.; Ahmed, M.

    2017-12-01

    We present tree-ring reconstructions of streamflow at seven gauges in the Upper Indus River watershed over the past five centuries (1452-2008 C.E.) using Hierarchical Bayesian Regression (HBR) with partial pooling of information across gauges. Using HBR with partial pooling we can develop reconstructions for short gauge records with interspersed missing data. This overcomes a common limitation faced when using conventional tree-ring reconstruction methods such as point-by-point regression (PPR) in remote regions in developing countries. Six of these streamflow gauge reconstructions are produced for the first time while a reconstruction at one streamflow gauge has been previously produced using PPR. These new reconstructions are used to characterize long-term flow variability and drought risk in the region. For the one gauge where a prior reconstruction exists, the reconstruction of streamflow by HBR and the more traditional PPR are nearly identical and yield comparable uncertainty estimates and reconstruction skill statistics. These results highlight that tree-ring reconstructions of streamflow are not dependent on the choice of statistical method. We find that streamflow in the region peaks between May-September, and is primarily driven by a combination of winter (January-March) precipitation and summer (May-September) temperature, with summer temperature likely guiding the rate of snow and glacial melt. Our reconstructions indicate that current flow since the 1980s are higher than mean flow for the past five centuries at five out of seven gauges in the watershed. The increased flow is likely driven by enhanced rates of snow and glacial melt and regional wetting over recent decades. These results suggest that while in the near-term streamflow is expected to increase, future water risk in the region will be dependent on changes in snowfall and glacial mass balance due to projected warming.

  18. Filtered maximum likelihood expectation maximization based global reconstruction for bioluminescence tomography.

    PubMed

    Yang, Defu; Wang, Lin; Chen, Dongmei; Yan, Chenggang; He, Xiaowei; Liang, Jimin; Chen, Xueli

    2018-05-17

    The reconstruction of bioluminescence tomography (BLT) is severely ill-posed due to insufficient measurements and the diffuse nature of light propagation. A predefined permissible source region (PSR) combined with regularization terms is one common strategy to reduce such ill-posedness. However, the PSR is usually hard to determine and can easily be affected by subjective judgment. Hence, we theoretically developed a filtered maximum likelihood expectation maximization (fMLEM) method for BLT. Our method avoids predefining the PSR and provides a robust and accurate result for global reconstruction. In the method, the simplified spherical harmonics approximation (SP_N) was applied to characterize diffuse light propagation in the medium, and the statistical-estimation-based MLEM algorithm combined with a filter function was used to solve the inverse problem. We systematically demonstrated the performance of our method with regular-geometry- and digital-mouse-based simulations and a liver-cancer-based in vivo experiment. Graphical abstract: the filtered MLEM-based global reconstruction method for BLT.
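    The MLEM core of such a method can be sketched generically. The update below is the standard multiplicative MLEM step for Poisson data, with an optional smoothing callback between iterations standing in for the paper's filter function; the SP_N light-propagation model is not reproduced here.

```python
import numpy as np

def mlem(A, y, n_iter=50, filt=None):
    """MLEM for y ~ Poisson(A @ x), with an optional inter-iteration filter.

    A: nonnegative system matrix; y: measured counts.  `filt` is a stand-in
    for the paper's filter function; None reduces to plain MLEM.
    """
    x = np.ones(A.shape[1])
    sens = A.sum(axis=0)                          # sensitivity image, A^T 1
    for _ in range(n_iter):
        proj = A @ x                              # forward projection
        x = x / sens * (A.T @ (y / np.maximum(proj, 1e-12)))
        if filt is not None:
            x = filt(x)                           # e.g. a smoothing filter
    return x
```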

  19. MR Guided PET Image Reconstruction

    PubMed Central

    Bai, Bing; Li, Quanzheng; Leahy, Richard M.

    2013-01-01

    The resolution of PET images is limited by the physics of positron-electron annihilation and instrumentation for photon coincidence detection. Model based methods that incorporate accurate physical and statistical models have produced significant improvements in reconstructed image quality when compared to filtered backprojection reconstruction methods. However, it has often been suggested that by incorporating anatomical information, the resolution and noise properties of PET images could be improved, leading to better quantitation or lesion detection. With the recent development of combined MR-PET scanners, it is possible to collect intrinsically co-registered MR images. It is therefore now possible to routinely make use of anatomical information in PET reconstruction, provided appropriate methods are available. In this paper we review research efforts over the past 20 years to develop these methods. We discuss approaches based on the use of both Markov random field priors and joint information or entropy measures. The general framework for these methods is described and their performance and longer term potential and limitations discussed. PMID:23178087

  20. Effect of Low-Dose MDCT and Iterative Reconstruction on Trabecular Bone Microstructure Assessment.

    PubMed

    Kopp, Felix K; Holzapfel, Konstantin; Baum, Thomas; Nasirudin, Radin A; Mei, Kai; Garcia, Eduardo G; Burgkart, Rainer; Rummeny, Ernst J; Kirschke, Jan S; Noël, Peter B

    2016-01-01

    We investigated the effects of low-dose multi-detector computed tomography (MDCT) in combination with statistical iterative reconstruction algorithms on trabecular bone microstructure parameters. Twelve donated vertebrae were scanned with the routine radiation exposure used in our department (standard dose) and a low-dose protocol. Reconstructions were performed with filtered backprojection (FBP) and maximum-likelihood-based statistical iterative reconstruction (SIR). Trabecular bone microstructure parameters were assessed and statistically compared for each reconstruction. Moreover, fracture loads of the vertebrae were biomechanically determined and correlated to the assessed microstructure parameters. Trabecular bone microstructure parameters based on low-dose MDCT and SIR significantly correlated with vertebral bone strength. There was no significant difference between microstructure parameters calculated on low-dose SIR and standard-dose FBP images. However, the results revealed a strong dependency on the regularization strength applied during SIR. It was observed that stronger regularization might corrupt the microstructure analysis, because the trabecular structure is a very small detail that might get lost during the regularization process. As a consequence, the introduction of SIR for trabecular bone microstructure analysis requires a specific optimization of the regularization parameters. Moreover, in comparison to other approaches, superior noise-resolution trade-offs can be found with the proposed methods.

  1. Automated neurovascular tracing and analysis of the knife-edge scanning microscope Rat Nissl data set using a computing cluster.

    PubMed

    Lim, Sungjun; Nowak, Michael R; Choe, Yoonsuck

    2016-08-01

    We present a novel, parallelizable algorithm capable of automatically reconstructing and calculating anatomical statistics of cerebral vascular networks embedded in large volumes of Rat Nissl-stained data. In this paper, we report the results of our method on Rattus somatosensory cortical data acquired using Knife-Edge Scanning Microscopy. Our algorithm performs the reconstruction task with averaged precision, recall, and F2-score of 0.978, 0.892, and 0.902, respectively. The calculated anatomical statistics broadly agree with previously reported values. The results obtained with our method are expected to help explicate the relationship between the structural organization of the microcirculation and normal (and abnormal) cerebral functioning.
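    The F2-score combines precision and recall with recall weighted more heavily; a one-liner shows the standard F-beta relation. Note the reported values are averages over datasets, so they need not satisfy the pointwise formula exactly.

```python
def f_beta(precision, recall, beta=2.0):
    """F-beta score; beta > 1 weights recall more heavily than precision."""
    if precision == 0 and recall == 0:
        return 0.0
    b2 = beta * beta
    return (1 + b2) * precision * recall / (b2 * precision + recall)
```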

  2. Statistical shape model-based reconstruction of a scaled, patient-specific surface model of the pelvis from a single standard AP x-ray radiograph

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zheng Guoyan

    2010-04-15

    Purpose: The aim of this article is to investigate the feasibility of using a statistical shape model (SSM)-based reconstruction technique to derive a scaled, patient-specific surface model of the pelvis from a single standard anteroposterior (AP) x-ray radiograph and the feasibility of estimating the scale of the reconstructed surface model by performing a surface-based 3D/3D matching. Methods: Data sets of 14 pelvises (one plastic bone, 12 cadavers, and one patient) were used to validate the single-image-based reconstruction technique. This reconstruction technique is based on a hybrid 2D/3D deformable registration process combining a landmark-to-ray registration with an SSM-based 2D/3D reconstruction. The landmark-to-ray registration was used to find an initial scale and an initial rigid transformation between the x-ray image and the SSM. The estimated scale and rigid transformation were used to initialize the SSM-based 2D/3D reconstruction. The optimal reconstruction was then achieved in three stages by iteratively matching the projections of the apparent contours extracted from a 3D model derived from the SSM to the image contours extracted from the x-ray radiograph: Iterative affine registration, statistical instantiation, and iterative regularized shape deformation. The image contours are first detected by using a semiautomatic segmentation tool based on the Livewire algorithm and then approximated by a set of sparse dominant points that are adaptively sampled from the detected contours. The unknown scales of the reconstructed models were estimated by performing a surface-based 3D/3D matching between the reconstructed models and the associated ground truth models that were derived from a CT-based reconstruction method. Such a matching also allowed for computing the errors between the reconstructed models and the associated ground truth models.
Results: The technique could reconstruct the surface models of all 14 pelvises directly from the landmark-based initialization. Depending on the surface-based matching techniques, the reconstruction errors were slightly different. When a surface-based iterative affine registration was used, an average reconstruction error of 1.6 mm was observed. This error increased to 1.9 mm when a surface-based iterative scaled rigid registration was used. Conclusions: It is feasible to reconstruct a scaled, patient-specific surface model of the pelvis from a single standard AP x-ray radiograph using the present approach. The unknown scale of the reconstructed model can be estimated by performing a surface-based 3D/3D matching.

  3. Photon-number statistics of twin beams: Self-consistent measurement, reconstruction, and properties

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Peřina, Jan Jr.; Haderka, Ondřej; Michálek, Václav

    2014-12-04

    A method for the determination of the photon-number statistics of twin beams using the joint signal-idler photocount statistics obtained by an iCCD camera is described. It also provides the absolute quantum detection efficiency of the camera. Using the measured photocount statistics, quasi-distributions of integrated intensities are obtained. They attain negative values occurring in characteristic strips as a consequence of the pairing of photons in twin beams.
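    The link between a photon-number distribution and the measured photocount distribution, for a detector of efficiency η with independent (binomial) detection, is the standard Bernoulli-sampling formula. This sketch covers only that single-mode relation, not the paper's full joint signal-idler model.

```python
from math import comb

def photocounts(p_n, eta):
    """Photocount distribution from a photon-number distribution p_n under
    binomial detection with efficiency eta (each photon detected independently):
        p_m(k) = sum_n p_n * C(n, k) * eta^k * (1 - eta)^(n - k)
    """
    N = len(p_n)
    p_m = [0.0] * N
    for n, pn in enumerate(p_n):
        for k in range(n + 1):
            p_m[k] += pn * comb(n, k) * eta**k * (1 - eta)**(n - k)
    return p_m
```

    A familiar consistency check: a Poissonian beam of mean λ stays Poissonian with mean ηλ after lossy detection.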

  4. Fisher's method of scoring in statistical image reconstruction: comparison of Jacobi and Gauss-Seidel iterative schemes.

    PubMed

    Hudson, H M; Ma, J; Green, P

    1994-01-01

    Many algorithms for medical image reconstruction adopt versions of the expectation-maximization (EM) algorithm. In this approach, parameter estimates are obtained which maximize a complete data likelihood or penalized likelihood, in each iteration. Implicitly (and sometimes explicitly) penalized algorithms require smoothing of the current reconstruction in the image domain as part of their iteration scheme. In this paper, we discuss alternatives to EM which adapt Fisher's method of scoring (FS) and other methods for direct maximization of the incomplete data likelihood. Jacobi and Gauss-Seidel methods for non-linear optimization provide efficient algorithms applying FS in tomography. One approach uses smoothed projection data in its iterations. We investigate the convergence of Jacobi and Gauss-Seidel algorithms with clinical tomographic projection data.
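    Each Fisher-scoring step amounts to solving a linear system, which is where the Jacobi/Gauss-Seidel distinction enters: Jacobi updates every coordinate from the previous iterate, Gauss-Seidel uses freshly updated values immediately. The sketch below shows the two schemes on a generic diagonally dominant system, not the tomographic likelihood itself.

```python
import numpy as np

def jacobi(A, b, n_iter=100):
    """Jacobi iteration for A x = b: all coordinates updated from the old iterate."""
    x = np.zeros_like(b)
    D = np.diag(A)
    R = A - np.diag(D)                     # off-diagonal part
    for _ in range(n_iter):
        x = (b - R @ x) / D
    return x

def gauss_seidel(A, b, n_iter=100):
    """Gauss-Seidel: each coordinate update uses the freshest available values."""
    x = np.zeros_like(b)
    n = len(b)
    for _ in range(n_iter):
        for i in range(n):
            x[i] = (b[i] - A[i, :i] @ x[:i] - A[i, i+1:] @ x[i+1:]) / A[i, i]
    return x
```

    Gauss-Seidel typically converges in fewer sweeps but is inherently sequential, while Jacobi parallelizes trivially, the same trade-off that arises in the tomographic setting.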

  5. Reconstructing El Niño Southern Oscillation using data from ships' logbooks, 1815-1854. Part I: methodology and evaluation

    NASA Astrophysics Data System (ADS)

    Barrett, Hannah G.; Jones, Julie M.; Bigg, Grant R.

    2018-02-01

    The meteorological information found within ships' logbooks is a unique and fascinating source of data for historical climatology. This study uses wind observations from logbooks covering the period 1815 to 1854 to reconstruct an index of El Niño Southern Oscillation (ENSO) for boreal winter (DJF). Statistically-based reconstructions of the Southern Oscillation Index (SOI) are obtained using two methods: principal component regression (PCR) and composite-plus-scale (CPS). Calibration and validation are carried out over the modern period 1979-2014, assessing the relationship between re-gridded seasonal ERA-Interim reanalysis wind data and the instrumental SOI. The reconstruction skill of both the PCR and CPS methods is found to be high with reduction of error skill scores of 0.80 and 0.75, respectively. The relationships derived during the fitting period are then applied to the logbook wind data to reconstruct the historical SOI. We develop a new method to assess the sensitivity of the reconstructions to using a limited number of observations per season and find that the CPS method performs better than PCR with a limited number of observations. A difference in the distribution of wind force terms used by British and Dutch ships is found, and its impact on the reconstruction assessed. The logbook reconstructions agree well with a previous SOI reconstructed from Jakarta rain day counts, 1830-1850, adding robustness to our reconstructions. Comparisons to additional documentary and proxy data sources are provided in a companion paper.
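    Principal component regression can be sketched compactly: take the leading PCs of the predictor field (here, a gridded wind field) via SVD and regress the index on their scores. This is a generic PCR skeleton under simple assumptions; the paper's calibration/validation protocol, re-gridding and CPS variant are not reproduced.

```python
import numpy as np

def pcr_fit(X, y, n_pc):
    """Fit principal component regression: regress y on the first n_pc PCs of X."""
    x_mean = X.mean(axis=0)
    Xc = X - x_mean
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    V = Vt[:n_pc].T                        # leading loadings (EOFs)
    scores = Xc @ V                        # PC time series
    coef, *_ = np.linalg.lstsq(
        np.column_stack([np.ones(len(y)), scores]), y, rcond=None)
    return x_mean, V, coef

def pcr_predict(X_new, x_mean, V, coef):
    """Apply a fitted PCR model to new predictor fields (e.g. logbook winds)."""
    scores = (X_new - x_mean) @ V
    return coef[0] + scores @ coef[1:]
```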

  6. Segmentation-free statistical image reconstruction for polyenergetic x-ray computed tomography with experimental validation.

    PubMed

    Idris A, Elbakri; Fessler, Jeffrey A

    2003-08-07

    This paper describes a statistical image reconstruction method for x-ray CT that is based on a physical model that accounts for the polyenergetic x-ray source spectrum and the measurement nonlinearities caused by energy-dependent attenuation. Unlike our earlier work, the proposed algorithm does not require pre-segmentation of the object into the various tissue classes (e.g., bone and soft tissue) and allows mixed pixels. The attenuation coefficient of each voxel is modelled as the product of its unknown density and a weighted sum of energy-dependent mass attenuation coefficients. We formulate a penalized-likelihood function for this polyenergetic model and develop an iterative algorithm for estimating the unknown density of each voxel. Applying this method to simulated x-ray CT measurements of objects containing both bone and soft tissue yields images with significantly reduced beam hardening artefacts relative to conventional beam hardening correction methods. We also apply the method to real data acquired from a phantom containing various concentrations of potassium phosphate solution. The algorithm reconstructs an image with accurate density values for the different concentrations, demonstrating its potential for quantitative CT applications.
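    The polyenergetic measurement model underlying the method integrates the source spectrum over energies, with each voxel's attenuation the product of its density and energy-dependent mass attenuation coefficients. A sketch of the resulting forward model for one ray follows; spectrum and coefficient values in the test are made up.

```python
import numpy as np

def poly_forward(spectrum, mass_atten, densities, lengths):
    """Expected transmitted intensity for a polyenergetic source along one ray:
        I = sum_E S(E) * exp(-sum_m mu_m(E) * rho_m * l_m)

    spectrum:   (E,) photon counts per energy bin
    mass_atten: (E, M) mass attenuation coefficient per energy and material
    densities:  (M,) material densities
    lengths:    (M,) intersection lengths of the ray with each material
    """
    spectrum = np.asarray(spectrum, dtype=float)
    mass_atten = np.asarray(mass_atten, dtype=float)
    rho_l = np.asarray(densities, dtype=float) * np.asarray(lengths, dtype=float)
    return float(spectrum @ np.exp(-(mass_atten @ rho_l)))
```

    With a single energy bin this reduces to Beer's law; the nonlinearity introduced by summing over energies is exactly the beam-hardening effect the algorithm models.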

  7. Standard and reduced radiation dose liver CT images: adaptive statistical iterative reconstruction versus model-based iterative reconstruction-comparison of findings and image quality.

    PubMed

    Shuman, William P; Chan, Keith T; Busey, Janet M; Mitsumori, Lee M; Choi, Eunice; Koprowicz, Kent M; Kanal, Kalpana M

    2014-12-01

    To investigate whether reduced radiation dose liver computed tomography (CT) images reconstructed with model-based iterative reconstruction (MBIR) might compromise depiction of clinically relevant findings or might have decreased image quality when compared with clinical standard radiation dose CT images reconstructed with adaptive statistical iterative reconstruction (ASIR). With institutional review board approval, informed consent, and HIPAA compliance, 50 patients (39 men, 11 women) who underwent liver CT were prospectively included. After a portal venous pass with ASIR images, a 60% reduced radiation dose pass was added with MBIR images. One reviewer scored ASIR image quality and marked findings. Two additional independent reviewers noted whether marked findings were present on MBIR images and assigned scores for relative conspicuity, spatial resolution, image noise, and image quality. Liver and aorta Hounsfield units and image noise were measured. Volume CT dose index and size-specific dose estimate (SSDE) were recorded. Qualitative reviewer scores were summarized. Formal statistical inference for signal-to-noise ratio (SNR), contrast-to-noise ratio (CNR), volume CT dose index, and SSDE was made (paired t tests), with Bonferroni adjustment. Two independent reviewers identified all 136 ASIR image findings (n = 272) on MBIR images, scoring them as equal or better for conspicuity, spatial resolution, and image noise in 94.1% (256 of 272), 96.7% (263 of 272), and 99.3% (270 of 272), respectively. In 50 image sets, two reviewers (n = 100) scored overall image quality as sufficient or good with MBIR in 99% (99 of 100). Liver SNR was significantly greater for MBIR (10.8 ± 2.5 [standard deviation] vs 7.7 ± 1.4, P < .001); there was no difference for CNR (2.5 ± 1.4 vs 2.4 ± 1.4, P = .45). For ASIR and MBIR, respectively, volume CT dose index was 15.2 mGy ± 7.6 versus 6.2 mGy ± 3.6; SSDE was 16.4 mGy ± 6.6 versus 6.7 mGy ± 3.1 (P < .001). Liver CT images reconstructed with MBIR may allow up to 59% radiation dose reduction compared with the dose with ASIR, without compromising depiction of findings or image quality. © RSNA, 2014.

  8. Statistical distributions of ultra-low dose CT sinograms and their fundamental limits

    NASA Astrophysics Data System (ADS)

    Lee, Tzu-Cheng; Zhang, Ruoqiao; Alessio, Adam M.; Fu, Lin; De Man, Bruno; Kinahan, Paul E.

    2017-03-01

    Low dose CT imaging is typically constrained to be diagnostic. However, there are applications for even lower-dose CT imaging, including image registration across multi-frame CT images and attenuation correction for PET/CT imaging. We define this as the ultra-low-dose (ULD) CT regime, where the exposure level is a factor of 10 lower than current low-dose CT technique levels. In the ULD regime it is possible to use statistically principled image reconstruction methods that make full use of the raw data information. Since most statistical iterative reconstruction methods assume that the post-log noise distribution is close to Poisson or Gaussian, our goal is to understand the statistical distribution of ULD CT data with different non-positivity correction methods, and to understand when iterative reconstruction methods may be effective in producing images that are useful for image registration or attenuation correction in PET/CT imaging. We first used phantom measurements and calibrated simulation to reveal how the noise distribution deviates from the normal assumption in the ULD CT flux environment. In summary, our results indicate that there are three general regimes: (1) diagnostic CT, where post-log data are well modeled by a normal distribution; (2) low-dose CT, where the normal distribution remains a reasonable approximation and statistically principled (post-log) methods that assume a normal distribution have an advantage; (3) a ULD regime that is photon-starved and where the quadratic approximation is no longer effective. For instance, a total integral density of 4.8 (the ideal p_i for 24 cm of water) at 120 kVp and 0.5 mAs is the maximum p_i value for which a definitive maximum likelihood value could be found. This leads to fundamental limits in the estimation of ULD CT data when using a standard data processing stream.

  9. Zooming in on vibronic structure by lowest-value projection reconstructed 4D coherent spectroscopy

    NASA Astrophysics Data System (ADS)

    Harel, Elad

    2018-05-01

    A fundamental goal of chemical physics is an understanding of microscopic interactions in liquids at and away from equilibrium. In principle, this microscopic information is accessible by high-order and high-dimensionality nonlinear optical measurements. Unfortunately, the time required to execute such experiments increases exponentially with the dimensionality, while the signal decreases exponentially with the order of the nonlinearity. Recently, we demonstrated a non-uniform acquisition method based on radial sampling of the time-domain signal [W. O. Hutson et al., J. Phys. Chem. Lett. 9, 1034 (2018)]. The four-dimensional spectrum was then reconstructed by filtered back-projection using an inverse Radon transform. Here, we demonstrate an alternative reconstruction method based on the statistical analysis of different back-projected spectra, which results in a dramatic increase in sensitivity and at least a 100-fold increase in dynamic range compared to conventional uniform sampling and Fourier reconstruction. These results demonstrate that alternative sampling and reconstruction methods enable applications of increasingly high-order and high-dimensionality methods toward deeper insights into the vibronic structure of liquids.

  10. Efficient content-based low-altitude images correlated network and strips reconstruction

    NASA Astrophysics Data System (ADS)

    He, Haiqing; You, Qi; Chen, Xiaoyong

    2017-01-01

    The manual intervention method is widely used to reconstruct strips for further aerial triangulation in low-altitude photogrammetry. Clearly, such a method is not suitable for fully automatic photogrammetric data processing. In this paper, we explore a content-based approach that requires neither manual intervention nor external information for strip reconstruction. Feature descriptors of local spatial patterns are extracted by SIFT to construct a vocabulary tree, in which these features are encoded with the TF-IDF weighting scheme to generate a new representation for each low-altitude image. The image correlation network is then reconstructed by similarity measurement, image matching, and geometric graph theory. Finally, strips are reconstructed automatically by tracing straight lines and growing adjacent images gradually. Experimental results show that the proposed approach is highly effective in automatically rearranging strips of low-altitude images and can provide a rough relative orientation for further aerial triangulation.
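    The TF-IDF step can be sketched on images represented as bags of quantized visual-word ids (the leaves of the vocabulary tree), with cosine similarity as the pairwise score. This is a simplified stand-in for hierarchical vocabulary-tree scoring.

```python
import math
from collections import Counter

def tfidf_vectors(docs):
    """TF-IDF vectors for images represented as lists of visual-word ids."""
    n = len(docs)
    df = Counter(w for d in docs for w in set(d))          # document frequency
    idf = {w: math.log(n / c) for w, c in df.items()}
    vecs = []
    for d in docs:
        tf = Counter(d)
        vecs.append({w: (cnt / len(d)) * idf[w] for w, cnt in tf.items()})
    return vecs

def cosine(u, v):
    """Cosine similarity between two sparse (dict) TF-IDF vectors."""
    dot = sum(x * v.get(w, 0.0) for w, x in u.items())
    nu = math.sqrt(sum(x * x for x in u.values()))
    nv = math.sqrt(sum(x * x for x in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0
```

    Thresholding these similarities yields the edges of the image correlation network from which strips are traced.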

  11. Maximum Likelihood Reconstruction for Magnetic Resonance Fingerprinting

    PubMed Central

    Zhao, Bo; Setsompop, Kawin; Ye, Huihui; Cauley, Stephen; Wald, Lawrence L.

    2017-01-01

    This paper introduces a statistical estimation framework for magnetic resonance (MR) fingerprinting, a recently proposed quantitative imaging paradigm. Within this framework, we present a maximum likelihood (ML) formalism to estimate multiple parameter maps directly from highly undersampled, noisy k-space data. A novel algorithm, based on variable splitting, the alternating direction method of multipliers, and the variable projection method, is developed to solve the resulting optimization problem. Representative results from both simulations and in vivo experiments demonstrate that the proposed approach yields significantly improved accuracy in parameter estimation, compared to the conventional MR fingerprinting reconstruction. Moreover, the proposed framework provides new theoretical insights into the conventional approach. We show analytically that the conventional approach is an approximation to the ML reconstruction; more precisely, it is exactly equivalent to the first iteration of the proposed algorithm for the ML reconstruction, provided that a gridding reconstruction is used as an initialization. PMID:26915119
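    The "conventional MR fingerprinting reconstruction" referred to here is gridding followed by per-voxel dictionary matching. A toy sketch of the matching step follows, using a hypothetical exponential-decay dictionary; real MRF dictionaries come from Bloch simulations over a parameter grid.

```python
import numpy as np

def dictionary_match(signals, dictionary, params):
    """Per-voxel MRF dictionary matching: pick the dictionary atom with the
    largest normalized inner product with each measured signal evolution.

    signals:    (V, T) voxel time courses (e.g. from a gridding recon)
    dictionary: (D, T) simulated signal evolutions, one per parameter combo
    params:     (D,) or (D, P) tissue parameters for each atom
    """
    dict_norm = dictionary / np.linalg.norm(dictionary, axis=1, keepdims=True)
    sig_norm = signals / np.linalg.norm(signals, axis=1, keepdims=True)
    best = np.argmax(np.abs(sig_norm @ dict_norm.conj().T), axis=1)
    return params[best]
```

    The matching is invariant to per-voxel scaling, which is why only the normalized correlation matters; the paper's point is that this whole procedure coincides with the first iteration of the ML algorithm when initialized by gridding.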

  12. Maximum Likelihood Reconstruction for Magnetic Resonance Fingerprinting.

    PubMed

    Zhao, Bo; Setsompop, Kawin; Ye, Huihui; Cauley, Stephen F; Wald, Lawrence L

    2016-08-01

    This paper introduces a statistical estimation framework for magnetic resonance (MR) fingerprinting, a recently proposed quantitative imaging paradigm. Within this framework, we present a maximum likelihood (ML) formalism to estimate multiple MR tissue parameter maps directly from highly undersampled, noisy k-space data. A novel algorithm, based on variable splitting, the alternating direction method of multipliers, and the variable projection method, is developed to solve the resulting optimization problem. Representative results from both simulations and in vivo experiments demonstrate that the proposed approach yields significantly improved accuracy in parameter estimation, compared to the conventional MR fingerprinting reconstruction. Moreover, the proposed framework provides new theoretical insights into the conventional approach. We show analytically that the conventional approach is an approximation to the ML reconstruction; more precisely, it is exactly equivalent to the first iteration of the proposed algorithm for the ML reconstruction, provided that a gridding reconstruction is used as an initialization.

  13. A model-based approach to wildland fire reconstruction using sediment charcoal records

    USGS Publications Warehouse

    Itter, Malcolm S.; Finley, Andrew O.; Hooten, Mevin B.; Higuera, Philip E.; Marlon, Jennifer R.; Kelly, Ryan; McLachlan, Jason S.

    2017-01-01

    Lake sediment charcoal records are used in paleoecological analyses to reconstruct fire history, including the identification of past wildland fires. One challenge in applying sediment charcoal records to infer fire history is separating charcoal associated with local fire occurrence from charcoal originating from regional fire activity. Despite a variety of methods to identify local fires from sediment charcoal records, an integrated statistical framework for fire reconstruction is lacking. We develop a Bayesian point process model to estimate the probability of fire associated with charcoal counts from individual lake sediments and to estimate mean fire return intervals. A multivariate extension of the model combines records from multiple lakes to reduce uncertainty in local fire identification and estimate a regional mean fire return interval. The univariate and multivariate models are applied to 13 lakes in the Yukon Flats region of Alaska. Both models resulted in similar mean fire return intervals (100–350 years), with reduced uncertainty under the multivariate model due to improved estimation of regional charcoal deposition. The point process model offers an integrated statistical framework for paleofire reconstruction and extends existing methods to infer regional fire history from multiple lake records, with uncertainty following directly from posterior distributions.
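
    The paper's Bayesian point process model is far richer, but the core idea of estimating a mean fire return interval from event counts can be sketched with a conjugate Gamma prior on a homogeneous Poisson fire rate (the prior parameters and counts below are hypothetical):

```python
# Conjugate Gamma-Poisson sketch: N fires observed over T years of record.
# Posterior rate is Gamma(a + N, b + T); its mean gives a point estimate of
# the fire rate, and the reciprocal a mean fire return interval (FRI).
def posterior_mean_fri(n_fires, years, a=1.0, b=100.0):
    """Posterior-mean fire return interval (years) under a Gamma(a, b) prior."""
    rate = (a + n_fires) / (b + years)   # posterior mean of the Poisson rate
    return 1.0 / rate

# e.g. 40 local fires identified over an 8000-year sediment record:
fri = posterior_mean_fri(40, 8000.0)
```

    With no data the estimate falls back to the prior mean interval b/a; with long records the data dominate, which is the usual conjugate behaviour.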

  14. Multistatic synthetic aperture radar image formation.

    PubMed

    Krishnan, V; Swoboda, J; Yarman, C E; Yazici, B

    2010-05-01

    In this paper, we consider a multistatic synthetic aperture radar (SAR) imaging scenario in which a swarm of airborne antennas, some transmitting, receiving, or both, traverse arbitrary flight trajectories and transmit arbitrary waveforms without any form of multiplexing. The received signal at each receiving antenna may be corrupted by interference from the signals scattered due to multiple transmitters and by additive thermal noise at the receiver. In this scenario, standard bistatic SAR image reconstruction algorithms produce artifacts in the reconstructed images due to these interferences. We use microlocal analysis in a statistical setting to develop a filtered-backprojection (FBP) type analytic image formation method that suppresses artifacts due to interference while preserving the location and orientation of edges of the scene in the reconstructed image. Our FBP-type algorithm exploits the second-order statistics of the target and noise to suppress interference artifacts in a mean-square sense. We present numerical simulations comparing the performance of our multistatic SAR image formation algorithm with the FBP-type bistatic SAR image reconstruction algorithm. While we mainly focus on radar applications, our image formation method is also applicable to other problems arising in fields such as acoustic, geophysical and medical imaging.

  15. Reconstruction of a digital core containing clay minerals based on a clustering algorithm.

    PubMed

    He, Yanlong; Pu, Chunsheng; Jing, Cheng; Gu, Xiaoyu; Chen, Qingdong; Liu, Hongzhi; Khan, Nasir; Dong, Qiaoling

    2017-10-01

    It is difficult to obtain core samples and supporting information for digital core reconstruction of mature sandstone reservoirs around the world, especially for unconsolidated sandstone reservoirs. Meanwhile, the reconstruction and division of clay minerals play a vital role in digital core reconstruction: two-dimensional data-based reconstruction methods are well suited for simulating the microstructure of sandstone reservoirs, yet reconstructing the various clay minerals within a digital core remains a research challenge. In the present work, the content of clay minerals was accounted for on the basis of two-dimensional information about the reservoir. After the hybrid method was applied and compared with a model reconstructed by the process-based method, the output was a digital core containing clay clusters without labels for cluster number, size, or texture; the statistics and geometry of the reconstructed model were similar to those of the reference model. The Hoshen-Kopelman algorithm was then used to label the connected, unclassified clay clusters in the initial model, and the number and size of the clay clusters were recorded. In parallel, the K-means clustering algorithm was applied to divide the labeled, large connected clusters into smaller clusters according to differences in cluster characteristics. Based on the characteristics of the clay minerals, such as their types, textures, and distributions, the digital core containing clay minerals was reconstructed by means of the clustering algorithm and a structural judgment of the clay clusters. The resulting distributions and textures of the clay minerals in the digital core were reasonable. The clustering algorithm thus improves digital core reconstruction and provides an alternative method for simulating different clay minerals in digital cores.
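
    A minimal sketch of Hoshen-Kopelman-style labeling of connected clusters, using union-find on a binary 2D grid with 4-connectivity (the toy grid stands in for a segmented core slice; the paper works on 3D cores):

```python
# Label connected occupied cells (value 1) of a binary grid via union-find.
def label_clusters(grid):
    rows, cols = len(grid), len(grid[0])
    parent = {}

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]   # path halving
            x = parent[x]
        return x

    def union(a, b):
        ra, rb = find(a), find(b)
        if ra != rb:
            parent[rb] = ra

    for i in range(rows):
        for j in range(cols):
            if grid[i][j]:
                parent[(i, j)] = (i, j)
                if i > 0 and grid[i - 1][j]:    # merge with cell above
                    union((i - 1, j), (i, j))
                if j > 0 and grid[i][j - 1]:    # merge with cell to the left
                    union((i, j - 1), (i, j))
    clusters = {}
    for cell in parent:
        clusters.setdefault(find(cell), []).append(cell)
    return list(clusters.values())              # one list of cells per cluster

clusters = label_clusters([[1, 1, 0],
                           [0, 0, 1],
                           [1, 0, 1]])
```

    Recording `len(cluster)` per cluster gives the size statistics mentioned above; the subsequent K-means subdivision step is not reproduced here.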

  16. Percolation Analysis of a Wiener Reconstruction of the IRAS 1.2 Jy Redshift Catalog

    NASA Astrophysics Data System (ADS)

    Yess, Capp; Shandarin, Sergei F.; Fisher, Karl B.

    1997-01-01

    We present percolation analyses of Wiener reconstructions of the IRAS 1.2 Jy redshift survey. There are 10 reconstructions of galaxy density fields in real space spanning the range β = 0.1-1.0, where β = Ω^0.6/b, Ω is the present dimensionless density, and b is the bias factor. Our method uses the growth of the largest-cluster statistic to characterize the topology of a density field, with Gaussian-randomized versions of the reconstructions used as standards for analysis. For the reconstruction volume of radius R ~ 100 h^-1 Mpc, percolation analysis reveals a slight "meatball" topology for the real-space galaxy distribution of the IRAS survey.
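
    The largest-cluster-growth statistic can be illustrated on a toy random lattice: occupy sites above a density threshold and record the largest connected cluster as the threshold is lowered (this stand-in ignores the Wiener-reconstructed density fields themselves and uses uncorrelated noise):

```python
# Largest connected cluster of occupied lattice sites vs density threshold.
import random

def largest_cluster(occupied):
    """Size of the biggest 4-connected component of a set of (i, j) sites."""
    seen, best = set(), 0
    for start in occupied:
        if start in seen:
            continue
        stack, size = [start], 0
        seen.add(start)
        while stack:                       # iterative flood fill
            i, j = stack.pop()
            size += 1
            for nb in ((i + 1, j), (i - 1, j), (i, j + 1), (i, j - 1)):
                if nb in occupied and nb not in seen:
                    seen.add(nb)
                    stack.append(nb)
        best = max(best, size)
    return best

random.seed(0)
n = 40
field = {(i, j): random.random() for i in range(n) for j in range(n)}
growth = []
for thresh in (0.9, 0.6, 0.3):            # lowering the threshold fills the field
    occupied = {c for c, v in field.items() if v > thresh}
    growth.append(largest_cluster(occupied))
```

    Plotting the largest-cluster size against the filling fraction, and comparing against the Gaussian-randomized reference, is the essence of the topology test described above.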

  17. Sea level reconstructions from altimetry and tide gauges using independent component analysis

    NASA Astrophysics Data System (ADS)

    Brunnabend, Sandra-Esther; Kusche, Jürgen; Forootan, Ehsan

    2017-04-01

    Many reconstructions of global and regional sea level rise derived from tide gauges and satellite altimetry have used the method of empirical orthogonal functions (EOF) to reduce noise, improve the spatial resolution of the reconstructed outputs, and investigate the different signals in climate time series. However, the second-order EOF method has limitations, e.g., in separating individual physical signals into different modes of sea level variation and in the physical interpretability of the different modes, as they are assumed to be orthogonal. Therefore, we investigate the use of a more advanced statistical signal decomposition technique, independent component analysis (ICA), to reconstruct global and regional sea level change from satellite altimetry and tide gauge records. Our results indicate that the choice of method has almost no influence on the reconstruction of global mean sea level change (1.6 mm/yr for 1960-2010 and 2.9 mm/yr for 1993-2013); only different numbers of modes are needed for the reconstruction. The ICA method is advantageous for separating independent climate variability signals from regional sea level variations, as the mixing problem of the EOF method is strongly reduced. As an example, the modes most dominated by the El Niño-Southern Oscillation (ENSO) signal are compared. Regional sea level changes near Tianjin, China; Los Angeles, USA; and Majuro, Marshall Islands are reconstructed, and the contributions from ENSO are identified.
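
    The global mean rates quoted above are, at heart, least-squares trends of a time series. A minimal sketch on a synthetic record (the paper's estimates of course come from the reconstructed fields, not from a clean ramp):

```python
# Ordinary least-squares slope of a time series, in mm/yr.
def trend_per_year(years, values):
    n = len(years)
    my = sum(years) / n
    mv = sum(values) / n
    num = sum((y - my) * (v - mv) for y, v in zip(years, values))
    den = sum((y - my) ** 2 for y in years)
    return num / den

# Synthetic global-mean record rising at 2.9 mm/yr over 1993-2013:
yrs = list(range(1993, 2014))
sea = [2.9 * (y - 1993) for y in yrs]
rate = trend_per_year(yrs, sea)
```
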

  18. Computed Tomography Image Quality Evaluation of a New Iterative Reconstruction Algorithm in the Abdomen (Adaptive Statistical Iterative Reconstruction-V) a Comparison With Model-Based Iterative Reconstruction, Adaptive Statistical Iterative Reconstruction, and Filtered Back Projection Reconstructions.

    PubMed

    Goodenberger, Martin H; Wagner-Bartak, Nicolaus A; Gupta, Shiva; Liu, Xinming; Yap, Ramon Q; Sun, Jia; Tamm, Eric P; Jensen, Corey T

    The purpose of this study was to compare abdominopelvic computed tomography images reconstructed with adaptive statistical iterative reconstruction-V (ASIR-V) with model-based iterative reconstruction (Veo 3.0), ASIR, and filtered back projection (FBP). Abdominopelvic computed tomography scans for 36 patients (26 males and 10 females) were reconstructed using FBP, ASIR (80%), Veo 3.0, and ASIR-V (30%, 60%, 90%). Mean ± SD patient age was 32 ± 10 years with mean ± SD body mass index of 26.9 ± 4.4 kg/m². Images were reviewed by 2 independent readers in a blinded, randomized fashion. Hounsfield unit, noise, and contrast-to-noise ratio (CNR) values were calculated for each reconstruction algorithm for further comparison. Phantom evaluation of low-contrast detectability (LCD) and high-contrast resolution was performed. Adaptive statistical iterative reconstruction-V 30%, ASIR-V 60%, and ASIR 80% were generally superior qualitatively compared with ASIR-V 90%, Veo 3.0, and FBP (P < 0.05). Adaptive statistical iterative reconstruction-V 90% showed superior LCD and had the highest CNR in the liver, aorta, and pancreas, measuring 7.32 ± 3.22, 11.60 ± 4.25, and 4.60 ± 2.31, respectively, compared with the next best series, ASIR-V 60%, with respective CNR values of 5.54 ± 2.39, 8.78 ± 3.15, and 3.49 ± 1.77 (P < 0.0001). Veo 3.0 and ASIR 80% had the best and worst spatial resolution, respectively. Adaptive statistical iterative reconstruction-V 30% and ASIR-V 60% provided the best combination of qualitative and quantitative performance. Adaptive statistical iterative reconstruction 80% was equivalent qualitatively, but demonstrated inferior spatial resolution and LCD.
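
    CNR values like those reported above are typically computed from region-of-interest (ROI) statistics as |mean_ROI − mean_background| divided by the background noise SD; a sketch with hypothetical Hounsfield-unit samples (the ROI values below are invented, not the study's measurements):

```python
# Contrast-to-noise ratio from ROI pixel samples.
import statistics

def cnr(roi, background):
    mean_roi = statistics.mean(roi)
    mean_bg = statistics.mean(background)
    noise = statistics.stdev(background)      # sample SD as the noise estimate
    return abs(mean_roi - mean_bg) / noise

liver_roi = [62, 60, 61, 63, 59]              # hypothetical HU samples
fat_bg = [-95, -100, -105, -98, -102]
value = cnr(liver_roi, fat_bg)
```

    Exact conventions vary between studies (which region supplies the noise SD, whether the absolute value is taken), so treat this as one common definition rather than the study's formula.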

  19. Markov prior-based block-matching algorithm for superdimension reconstruction of porous media

    NASA Astrophysics Data System (ADS)

    Li, Yang; He, Xiaohai; Teng, Qizhi; Feng, Junxi; Wu, Xiaohong

    2018-04-01

    A superdimension reconstruction algorithm is used for the reconstruction of three-dimensional (3D) structures of a porous medium based on a single two-dimensional image. The algorithm borrows the concepts of "blocks," "learning," and "dictionary" from learning-based superresolution reconstruction and applies them to the 3D reconstruction of a porous medium. In the neighborhood-matching process of the conventional superdimension reconstruction algorithm, the Euclidean distance is used as a criterion, although it may not truly reflect the structural correlation between adjacent blocks in an actual situation. Hence, in this study, regularization terms are adopted as prior knowledge in the reconstruction process, and a Markov prior-based block-matching algorithm for superdimension reconstruction is developed for more accurate reconstruction. The algorithm simultaneously takes into consideration the probabilistic relationship between the already reconstructed blocks in three perpendicular directions (x, y, and z) and the block to be reconstructed, and the maximum of the product of the probabilities of the candidate blocks found in the dictionary for the three directions is adopted as the basis for the final block selection. Using this approach, the problem of an imprecise spatial structure caused by point simulation can be overcome. The problem of artifacts in the reconstructed structure is also addressed through the addition of hard data and by neighborhood matching. To verify the improved reconstruction accuracy of the proposed method, the statistical and morphological features of the results from the proposed method and the traditional superdimension reconstruction method are compared with those of the target system. The proposed superdimension reconstruction algorithm is confirmed to enable a more accurate reconstruction of the target system while also eliminating artifacts.
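
    The final block-selection rule, taking the candidate that maximizes the product of match probabilities against the already-reconstructed neighbours in the x, y, and z directions, can be sketched as follows (the candidate ids and probabilities are invented for illustration):

```python
# Pick the dictionary block with the largest product of directional match
# probabilities (p_x, p_y, p_z) against its reconstructed neighbours.
def select_block(candidates):
    """candidates: {block_id: (p_x, p_y, p_z)} -> block_id with max product."""
    def product(ps):
        out = 1.0
        for p in ps:
            out *= p
        return out
    return max(candidates, key=lambda b: product(candidates[b]))

probs = {"A": (0.9, 0.2, 0.8),    # strong in x/z, weak in y
         "B": (0.6, 0.7, 0.6),    # consistently moderate
         "C": (0.99, 0.99, 0.01)}
best = select_block(probs)
```

    The product form penalizes a candidate that fits two directions perfectly but contradicts the third, which is exactly why it improves on a single-direction distance criterion.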

  20. The adaptive statistical iterative reconstruction-V technique for radiation dose reduction in abdominal CT: comparison with the adaptive statistical iterative reconstruction technique

    PubMed Central

    Cho, Jinhan; Oh, Jongyeong; Kim, Dongwon; Cho, Junghyun; Kim, Sanghyun; Lee, Sangyun; Lee, Jihyun

    2015-01-01

    Objective: To investigate whether reduced radiation dose abdominal CT images reconstructed with adaptive statistical iterative reconstruction V (ASIR-V) compromise the depiction of clinically relevant features when compared with the currently used routine radiation dose CT images reconstructed with ASIR. Methods: 27 consecutive patients (mean body mass index: 23.55 kg m−2) underwent CT of the abdomen at two time points. At the first time point, abdominal CT was scanned at a noise index level of 21.45 with automatic current modulation at 120 kV. Images were reconstructed with 40% ASIR, the routine protocol of Dong-A University Hospital. At the second time point, follow-up scans were performed at a noise index level of 30. Images were reconstructed with filtered back projection (FBP), 40% ASIR, 30% ASIR-V, 50% ASIR-V and 70% ASIR-V for the reduced radiation dose. Both quantitative and qualitative analyses of image quality were conducted, and the CT dose index was recorded. Results: At the follow-up study, the mean dose reduction relative to the currently used routine radiation dose was 35.37% (range: 19-49%). The overall subjective image quality and diagnostic acceptability scores of 50% ASIR-V at the reduced radiation dose were nearly identical to those recorded with the initial routine-dose CT using 40% ASIR. Subjective ratings in the qualitative analysis revealed that, of all the reduced radiation dose CT series reconstructed, 30% ASIR-V and 50% ASIR-V yielded higher image quality, with lower noise and artefacts and good sharpness, compared with 40% ASIR and FBP. However, the sharpness score at 70% ASIR-V was worse than that at 40% ASIR. Objective image noise for 50% ASIR-V was 34.24% and 46.34% lower than that of 40% ASIR and FBP, respectively. Conclusion: Abdominal CT images reconstructed with ASIR-V facilitate radiation dose reductions of up to 35% when compared with ASIR.
Advances in knowledge: This study represents the first clinical research experiment to use ASIR-V, the newest version of iterative reconstruction. Use of the ASIR-V algorithm decreased image noise and increased image quality when compared with the ASIR and FBP methods. These results suggest that high-quality low-dose CT may represent a new clinical option. PMID:26234823

  1. The Detection of Focal Liver Lesions Using Abdominal CT: A Comparison of Image Quality Between Adaptive Statistical Iterative Reconstruction V and Adaptive Statistical Iterative Reconstruction.

    PubMed

    Lee, Sangyun; Kwon, Heejin; Cho, Jihan

    2016-12-01

    To investigate the image quality characteristics of abdominal computed tomography (CT) scans reconstructed with adaptive statistical iterative reconstruction V (ASIR-V) vs the currently applied adaptive statistical iterative reconstruction (ASIR). This institutional review board-approved study included 35 consecutive patients who underwent CT of the abdomen. Among these 35 patients, 27 with focal liver lesions underwent abdominal CT with a 128-slice multidetector unit using the following parameters: fixed noise index of 30, 1.25 mm slice thickness, 120 kVp, and a gantry rotation time of 0.5 seconds. CT images were analyzed according to the method of reconstruction: ASIR (30%, 50%, and 70%) vs ASIR-V (30%, 50%, and 70%). Three radiologists independently assessed randomized images in a blinded manner. Imaging sets were compared in terms of focal lesion detection numbers, overall image quality, and objective noise using a paired-sample t test. Interobserver agreement was assessed with the intraclass correlation coefficient. The detection of small focal liver lesions (<10 mm) was significantly higher when ASIR-V was used than with ASIR (P < 0.001). Subjective image noise, artifact, and objective image noise in the liver were generally significantly better for ASIR-V than for ASIR, especially with 50% ASIR-V. Image sharpness and diagnostic acceptability were significantly worse with 70% ASIR-V than with the various levels of ASIR. Images analyzed using 50% ASIR-V were significantly better than the three ASIR series or the other ASIR-V conditions at providing diagnostically acceptable CT scans without compromising image quality and at detecting focal liver lesions. Copyright © 2016 The Association of University Radiologists. Published by Elsevier Inc. All rights reserved.

  2. Determination of optimal imaging settings for urolithiasis CT using filtered back projection (FBP), statistical iterative reconstruction (IR) and knowledge-based iterative model reconstruction (IMR): a physical human phantom study

    PubMed Central

    Choi, Se Y; Ahn, Seung H; Choi, Jae D; Kim, Jung H; Lee, Byoung-Il; Kim, Jeong-In

    2016-01-01

    Objective: The purpose of this study was to compare CT image quality for evaluating urolithiasis using filtered back projection (FBP), statistical iterative reconstruction (IR) and knowledge-based iterative model reconstruction (IMR) according to various scan parameters and radiation doses. Methods: A 5 × 5 × 5 mm³ uric acid stone was placed in a physical human phantom at the level of the pelvis. 3 tube voltages (120, 100 and 80 kV) and 4 current-time products (100, 70, 30 and 15 mAs) were implemented in 12 scans. Each scan was reconstructed with FBP, statistical IR (Levels 5-7) and knowledge-based IMR (soft-tissue Levels 1-3). The radiation dose, objective image quality and signal-to-noise ratio (SNR) were evaluated, and subjective assessments were performed. Results: The effective doses ranged from 0.095 to 2.621 mSv. Knowledge-based IMR showed better objective image noise and SNR than did FBP and statistical IR. The subjective image noise of FBP was worse than that of statistical IR and knowledge-based IMR. The subjective assessment scores deteriorated beyond a break point of 100 kV and 30 mAs. Conclusion: At the setting of 100 kV and 30 mAs, the radiation dose can be decreased by approximately 84% while maintaining subjective image quality. Advances in knowledge: Patients with urolithiasis can be evaluated with ultralow-dose non-enhanced CT using a knowledge-based IMR algorithm at a substantially reduced radiation dose with the imaging quality preserved, thereby minimizing the risks of radiation exposure while providing clinically relevant diagnostic benefits for patients. PMID:26577542

  3. A Conway-Maxwell-Poisson (CMP) model to address data dispersion on positron emission tomography.

    PubMed

    Santarelli, Maria Filomena; Della Latta, Daniele; Scipioni, Michele; Positano, Vincenzo; Landini, Luigi

    2016-10-01

    Positron emission tomography (PET) in medicine exploits the properties of positron-emitting unstable nuclei. The pairs of γ-rays emitted after annihilation are revealed by coincidence detectors and stored as projections in a sinogram. It is well known that radioactive decay follows a Poisson distribution; however, deviation from Poisson statistics occurs in PET projection data prior to reconstruction due to physical effects, measurement errors, and the correction of deadtime, scatter, and random coincidences. A model that describes the statistical behavior of measured and corrected PET data can aid in understanding the statistical nature of the data: it is a prerequisite for developing efficient reconstruction and processing methods and for reducing noise. The deviation from Poisson statistics in PET data can be described by the Conway-Maxwell-Poisson (CMP) distribution model, which is characterized by the centring parameter λ and the dispersion parameter ν, the latter quantifying the deviation from a Poisson distribution model. In particular, the parameter ν allows quantifying over-dispersion (ν < 1) or under-dispersion (ν > 1) of the data. A simple and efficient method for estimating the λ and ν parameters is introduced and assessed using Monte Carlo simulation for a wide range of activity values. The application of the method to simulated and experimental PET phantom data demonstrated that the CMP distribution parameters can detect deviation from the Poisson distribution in both raw and corrected PET data. The model may be usefully implemented in image reconstruction algorithms and quantitative PET data analysis, especially for low-count emission data, as in dynamic PET, where the method demonstrated the best accuracy. Copyright © 2016 Elsevier Ltd. All rights reserved.
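
    A minimal sketch of the CMP probability mass function, P(X = k) ∝ λ^k / (k!)^ν, with a truncated normalizing constant; setting ν = 1 recovers the Poisson model exactly (the truncation limit is a practical choice here, not part of the paper's estimator):

```python
# Conway-Maxwell-Poisson pmf with truncated normalization Z(lam, nu).
import math

def cmp_pmf(k, lam, nu, jmax=50):
    """P(X = k) = lam^k / (k!)^nu / Z, with Z summed up to jmax - 1 terms."""
    z = sum(lam ** j / math.factorial(j) ** nu for j in range(jmax))
    return (lam ** k / math.factorial(k) ** nu) / z

def poisson_pmf(k, lam):
    return math.exp(-lam) * lam ** k / math.factorial(k)

# At nu = 1 the CMP pmf coincides with the Poisson pmf (up to truncation error).
p_cmp = cmp_pmf(3, lam=4.0, nu=1.0)
p_poi = poisson_pmf(3, 4.0)
```

    Values of ν below 1 fatten the tails (over-dispersion) and values above 1 concentrate mass near the mode (under-dispersion), which is exactly the deviation the paper's estimator quantifies.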

  4. Reconstruction of early phase deformations by integrated magnetic and mesotectonic data evaluation

    NASA Astrophysics Data System (ADS)

    Sipos, András A.; Márton, Emő; Fodor, László

    2018-02-01

    Markers of brittle faulting are widely used for recovering past deformation phases. Rocks often have oriented magnetic fabrics, which can be interpreted as connected to ductile deformation before cementation of the sediment. This paper reports a novel statistical procedure for the simultaneous evaluation of AMS (Anisotropy of Magnetic Susceptibility) and fault-slip data. The new method analyzes the AMS data without linearization techniques, so that weak AMS lineation and rotational AMS, which are beyond the scope of classical methods, can be assessed. This idea is extended to the evaluation of fault-slip data. While the traditional assumptions of stress inversion are not rejected, the method recovers the stress field via statistical hypothesis testing. In addition, it provides the statistical information needed for the combined evaluation of the AMS and the mesotectonic (0.1 to 10 m) data. In the combined evaluation, a statistical test is carried out that helps to decide whether the AMS lineation and the mesotectonic markers (in case of repeated deformation of the oldest set of markers) were formed in the same or different deformation phases. If this condition is met, the combined evaluation can improve the precision of the reconstruction. When the two data sets do not have a common solution for the direction of extension, the deformational origin of the AMS is questionable; in this case, the orientation of the stress field responsible for the AMS lineation might differ from that which caused the brittle deformation. Although most of the examples demonstrate the reconstruction of weak deformations in sediments, the new method is readily applicable to investigating the ductile-brittle transition of any rock formation as long as AMS and fault-slip data are available.

  5. Low-dose X-ray CT reconstruction via dictionary learning.

    PubMed

    Xu, Qiong; Yu, Hengyong; Mou, Xuanqin; Zhang, Lei; Hsieh, Jiang; Wang, Ge

    2012-09-01

    Although diagnostic medical imaging provides enormous benefits in the early detection and accurate diagnosis of various diseases, there are growing concerns about the potential side effects of radiation-induced genetic, cancerous and other diseases. How to reduce the radiation dose while maintaining diagnostic performance is a major challenge in the computed tomography (CT) field. Inspired by compressive sensing theory, the sparse constraint in terms of total variation (TV) minimization has already led to promising results for low-dose CT reconstruction. Compared to the discrete gradient transform used in the TV method, dictionary learning has proven to be an effective way to obtain sparse representations. On the other hand, it is important to consider the statistical properties of projection data in the low-dose CT case. Recently, we have developed a dictionary learning based approach for low-dose X-ray CT. In this paper, we present this method in detail and evaluate it in experiments. In our method, the sparse constraint in terms of a redundant dictionary is incorporated into an objective function in a statistical iterative reconstruction framework. The dictionary can be either predetermined before an image reconstruction task or adaptively defined during the reconstruction process. An alternating minimization scheme is developed to minimize the objective function. Our approach is evaluated with low-dose X-ray projections collected in animal and human CT studies, and the improvement associated with dictionary learning is quantified relative to filtered backprojection and TV-based reconstructions. The results show that the proposed approach might produce better images with lower noise and more detailed structural features in our selected cases. However, there is no proof that this is true for all kinds of structures.

  6. Low-Dose X-ray CT Reconstruction via Dictionary Learning

    PubMed Central

    Xu, Qiong; Zhang, Lei; Hsieh, Jiang; Wang, Ge

    2013-01-01

    Although diagnostic medical imaging provides enormous benefits in the early detection and accurate diagnosis of various diseases, there are growing concerns about the potential side effects of radiation-induced genetic, cancerous and other diseases. How to reduce the radiation dose while maintaining diagnostic performance is a major challenge in the computed tomography (CT) field. Inspired by compressive sensing theory, the sparse constraint in terms of total variation (TV) minimization has already led to promising results for low-dose CT reconstruction. Compared to the discrete gradient transform used in the TV method, dictionary learning has proven to be an effective way to obtain sparse representations. On the other hand, it is important to consider the statistical properties of projection data in the low-dose CT case. Recently, we have developed a dictionary learning based approach for low-dose X-ray CT. In this paper, we present this method in detail and evaluate it in experiments. In our method, the sparse constraint in terms of a redundant dictionary is incorporated into an objective function in a statistical iterative reconstruction framework. The dictionary can be either predetermined before an image reconstruction task or adaptively defined during the reconstruction process. An alternating minimization scheme is developed to minimize the objective function. Our approach is evaluated with low-dose X-ray projections collected in animal and human CT studies, and the improvement associated with dictionary learning is quantified relative to filtered backprojection and TV-based reconstructions. The results show that the proposed approach might produce better images with lower noise and more detailed structural features in our selected cases. However, there is no proof that this is true for all kinds of structures. PMID:22542666

  7. Texture-preserved penalized weighted least-squares reconstruction of low-dose CT image via image segmentation and high-order MRF modeling

    NASA Astrophysics Data System (ADS)

    Han, Hao; Zhang, Hao; Wei, Xinzhou; Moore, William; Liang, Zhengrong

    2016-03-01

    In this paper, we propose a low-dose computed tomography (LdCT) image reconstruction method aided by prior knowledge learned from previous high-quality or normal-dose CT (NdCT) scans. The well-established statistical penalized weighted least-squares (PWLS) algorithm was adopted for image reconstruction, with the penalty term formulated by a texture-based Gaussian Markov random field (gMRF) model. The NdCT scan was first segmented into different tissue types by a feature vector quantization (FVQ) approach. Then, for each tissue type, a set of tissue-specific coefficients for the gMRF penalty was statistically learned from the NdCT image via multiple linear regression analysis. We also propose a scheme to adaptively select the order of the gMRF model for coefficient prediction. The tissue-specific gMRF patterns learned from the NdCT image were finally used to form an adaptive MRF penalty for PWLS reconstruction of the LdCT image. The proposed texture-adaptive PWLS image reconstruction algorithm was shown to preserve image textures more effectively than the conventional PWLS algorithm, and we further demonstrated the gain from high-order MRF modeling for texture-preserved LdCT PWLS image reconstruction.
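
    A gradient step on a toy penalized weighted least-squares objective illustrates the PWLS framework the paper builds on, here with an identity "system matrix" and a plain quadratic neighbour penalty standing in for the texture-based gMRF model (all values are hypothetical):

```python
# Toy 1D PWLS: minimize sum_i w_i (x_i - y_i)^2 + beta * sum_i (x_{i+1} - x_i)^2
# by explicit gradient descent; low weights w_i mark noisy measurements.
def pwls_step(x, y, w, beta, step=0.1):
    n = len(x)
    g = [2 * w[i] * (x[i] - y[i]) for i in range(n)]    # data-fidelity gradient
    for i in range(n):                                   # smoothness gradient
        if i > 0:
            g[i] += 2 * beta * (x[i] - x[i - 1])
        if i < n - 1:
            g[i] += 2 * beta * (x[i] - x[i + 1])
    return [x[i] - step * g[i] for i in range(n)]

def objective(x, y, w, beta):
    data = sum(w[i] * (x[i] - y[i]) ** 2 for i in range(len(x)))
    pen = sum((x[i + 1] - x[i]) ** 2 for i in range(len(x) - 1))
    return data + beta * pen

y = [1.0, 5.0, 1.2, 1.1, 4.8]       # noisy "measurements"
w = [1.0, 0.2, 1.0, 1.0, 0.2]       # low weight = unreliable sample
x = y[:]
before = objective(x, y, w, beta=0.5)
for _ in range(20):
    x = pwls_step(x, y, w, beta=0.5)
after = objective(x, y, w, beta=0.5)
```

    In the paper the penalty coefficients are tissue-specific and learned from the NdCT scan; here a single β plays that role, and the step size is chosen small enough for monotone descent of this convex objective.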

  8. Bayesian reconstruction of projection reconstruction NMR (PR-NMR).

    PubMed

    Yoon, Ji Won

    2014-11-01

    Projection reconstruction nuclear magnetic resonance (PR-NMR) is a technique for generating multidimensional NMR spectra. A small number of projections from lower-dimensional NMR spectra are used to reconstruct the multidimensional NMR spectra. In our previous work, it was shown that multidimensional NMR spectra are efficiently reconstructed using a peak-by-peak reversible jump Markov chain Monte Carlo (RJMCMC) algorithm. We propose an extended and generalized RJMCMC algorithm that replaces the simple linear model with a linear mixed model to reconstruct closely spaced NMR peaks into true spectra. This statistical method generates samples in a Bayesian scheme. Our proposed algorithm is tested on a set of six projections derived from the three-dimensional 700 MHz HNCO spectrum of the protein HasA. Copyright © 2014 Elsevier Ltd. All rights reserved.
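
    RJMCMC adds trans-dimensional moves for the unknown number of peaks, but its fixed-dimension core is a Metropolis-Hastings sampler; a minimal random-walk sketch targeting a single Gaussian "peak" location (toy target, not the paper's linear mixed model):

```python
# Random-walk Metropolis sampler for a 1D log-density.
import math
import random

def metropolis(logp, x0, n, scale=1.0, seed=1):
    rng = random.Random(seed)
    x, out = x0, []
    lp = logp(x)
    for _ in range(n):
        prop = x + rng.gauss(0.0, scale)
        lp_prop = logp(prop)
        # Accept with probability min(1, p(prop) / p(x)).
        if math.log(rng.random()) < lp_prop - lp:
            x, lp = prop, lp_prop
        out.append(x)
    return out

# Toy target: Gaussian peak centred at 7.0 (e.g. a peak-position parameter).
samples = metropolis(lambda t: -0.5 * (t - 7.0) ** 2, x0=0.0, n=5000)
post = samples[1000:]                 # discard burn-in
mean = sum(post) / len(post)
```

    A reversible jump sampler wraps moves like this with birth/death proposals that add or remove peaks, using a Jacobian-corrected acceptance ratio.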

  9. Optimization of the Hartmann-Shack microlens array

    NASA Astrophysics Data System (ADS)

    de Oliveira, Otávio Gomes; de Lima Monteiro, Davies William

    2011-04-01

    In this work we propose to optimize the microlens-array geometry for a Hartmann-Shack wavefront sensor. The optimization makes it possible to replace regular microlens arrays containing many microlenses with arrays of fewer microlenses located at optimal sampling positions, with no increase in reconstruction error. The goal is to propose a straightforward and widely accessible numerical method to calculate an optimized microlens array for known aberration statistics. The optimization comprises the minimization of the wavefront reconstruction error and/or the number of necessary microlenses in the array. We numerically generate, sample and reconstruct the wavefront, and use a genetic algorithm to discover the optimal array geometry. Within an ophthalmological context, as a case study, we demonstrate that an array with only 10 suitably located microlenses can be used to produce reconstruction errors as small as those of a 36-microlens regular array. The same optimization procedure can be employed for any application where the wavefront statistics are known.
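
    A toy genetic algorithm in the spirit of the search described above: evolve k sampling positions on [0, 1] to minimize the largest unsampled gap, a crude stand-in for the wavefront reconstruction error (all parameters and the fitness function are illustrative, not the paper's):

```python
# Minimal elitist GA: one-point crossover plus Gaussian mutation.
import random

def fitness(pos):
    """Largest gap left uncovered by the sampling positions (smaller is better)."""
    p = sorted(pos)
    gaps = [p[0]] + [b - a for a, b in zip(p, p[1:])] + [1.0 - p[-1]]
    return max(gaps)

def evolve(k=10, pop_size=30, gens=60, seed=2):
    rng = random.Random(seed)
    pop = [[rng.random() for _ in range(k)] for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=fitness)
        parents = pop[: pop_size // 2]            # elitism: best half survives
        children = []
        while len(children) < pop_size - len(parents):
            a, b = rng.sample(parents, 2)
            cut = rng.randrange(1, k)
            child = a[:cut] + b[cut:]             # one-point crossover
            i = rng.randrange(k)                  # mutate one position
            child[i] = min(1.0, max(0.0, child[i] + rng.gauss(0, 0.05)))
            children.append(child)
        pop = parents + children
    return min(pop, key=fitness)

best = evolve()
score = fitness(best)
```

    In the paper the fitness is the actual wavefront reconstruction error over an aberration ensemble; swapping that in changes only the `fitness` function. Note that k points can never do better than a max gap of 1/(k+1).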

  10. Use of latissimus dorsi muscle onlay patch alternative to acellular dermal matrix in implant-based breast reconstruction

    PubMed Central

    Lee, Jeeyeon

    2015-01-01

    Background An acellular dermal matrix (ADM) is applied to release the surrounding muscles and prevent dislocation or rippling of the implant. We compared implant-based breast reconstruction using the latissimus dorsi (LD) muscle, referred to as an “LD muscle onlay patch,” with using an ADM. Method A total of 56 patients (60 breasts) underwent nipple sparing mastectomy with implant-based breast reconstruction using an ADM or LD muscle onlay patch. Cosmetic outcomes were assessed 4 weeks after chemotherapy or radiotherapy, and statistical analyses were performed. Results Mean surgical time and hospital stay were significantly longer in the LD muscle onlay patch group than the ADM group. However, there were no statistically significant differences between groups in postoperative complications. Cosmetic outcomes for breast symmetry and shape were higher in the LD muscle onlay patch group. Conclusions Implant-based breast reconstruction with an LD muscle onlay patch would be a feasible alternative to using an ADM. PMID:26161312

  11. Data Analysis Techniques for Physical Scientists

    NASA Astrophysics Data System (ADS)

    Pruneau, Claude A.

    2017-10-01

    Preface; How to read this book; 1. The scientific method; Part I. Foundation in Probability and Statistics: 2. Probability; 3. Probability models; 4. Classical inference I: estimators; 5. Classical inference II: optimization; 6. Classical inference III: confidence intervals and statistical tests; 7. Bayesian inference; Part II. Measurement Techniques: 8. Basic measurements; 9. Event reconstruction; 10. Correlation functions; 11. The multiple facets of correlation functions; 12. Data correction methods; Part III. Simulation Techniques: 13. Monte Carlo methods; 14. Collision and detector modeling; List of references; Index.

  12. Anatomical shape analysis of the mandible in Caucasian and Chinese for the production of preformed mandible reconstruction plates.

    PubMed

    Metzger, Marc C; Vogel, Mathias; Hohlweg-Majert, Bettina; Mast, Hansjörg; Fan, Xianqun; Rüdell, Alexandra; Schlager, Stefan

    2011-09-01

    The purpose of this study was to evaluate and analyze statistical shapes of the outer mandible contour of Caucasian and Chinese people, offering data for the production of preformed mandible reconstruction plates. A CT database of 925 Caucasians (male: n=463, female: n=462) and 960 Chinese (male: n=469, female: n=491), including scans of unaffected mandibles, was used and imported into the 3D modeling software Voxim (IVS-Solutions, Chemnitz, Germany). Anatomical landmarks (n=22 points for both sides) were set using the 3D view along the outer contour of the mandible in the area where reconstruction plates are commonly located. We used morphometric methods for statistical shape analysis. We found statistically relevant differences between the populations, including a distinct discrimination given by the landmarks at the mandible. After generating a metric model, however, the shape information that separated the populations appeared to be of no clinical relevance. The metric size information given by ramus length, by contrast, provided a profound basis for the production of standard reconstruction plates. Clustering by ramus length into three sizes and calculating the means of these size clusters seems to be a good solution for constructing preformed reconstruction plates that will fit a vast majority. Copyright © 2010 European Association for Cranio-Maxillo-Facial Surgery. Published by Elsevier Ltd. All rights reserved.

  13. Efficient 3D porous microstructure reconstruction via Gaussian random field and hybrid optimization.

    PubMed

    Jiang, Z; Chen, W; Burkhart, C

    2013-11-01

    Obtaining an accurate three-dimensional (3D) structure of a porous microstructure is important for assessing material properties based on finite element analysis. Whereas directly obtaining 3D images of the microstructure is impractical under many circumstances, two sets of methods have been developed in the literature to generate (reconstruct) a 3D microstructure from its 2D images: one characterizes the microstructure with certain statistical descriptors, typically the two-point correlation function and the cluster correlation function, and then performs an optimization process to build a 3D structure that matches those statistical descriptors; the other models the microstructure using stochastic models, such as a Gaussian random field, and generates a 3D structure directly from the function. The former obtains a relatively accurate 3D microstructure, but the optimization process can be computationally very intensive, especially for problems with large image sizes; the latter generates a 3D microstructure quickly but sacrifices accuracy owing to issues in numerical implementation. This paper proposes a hybrid optimization approach for modelling the 3D porous microstructure of random isotropic two-phase materials, which combines the two sets of methods and hence maintains the accuracy of the correlation-based method with improved efficiency. The proposed technique is verified for 3D reconstructions based on silica polymer composite images with different volume fractions. A comparison of the reconstructed microstructures and the optimization histories for both the original correlation-based method and our hybrid approach demonstrates the improved efficiency of the approach. © 2013 The Authors Journal of Microscopy © 2013 Royal Microscopical Society.
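    The Gaussian-random-field half of the hybrid approach can be sketched compactly: filter white noise in Fourier space to impose spatial correlation, level-cut the field to hit a target volume fraction, and evaluate the resulting two-point correlation by FFT autocorrelation. This is a generic 2D sketch under assumed parameters, not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(1)

def level_cut_grf(n=64, sigma_k=0.08, vf=0.35):
    """Two-phase microstructure from a Gaussian random field: filter white
    noise with a Gaussian spectral filter, then threshold ('level cut')
    so the solid phase occupies the target volume fraction vf."""
    kx, ky = np.meshgrid(np.fft.fftfreq(n), np.fft.fftfreq(n), indexing="ij")
    filt = np.exp(-(kx**2 + ky**2) / (2.0 * sigma_k**2))
    field = np.fft.ifft2(np.fft.fft2(rng.normal(size=(n, n))) * filt).real
    threshold = np.quantile(field, 1.0 - vf)
    return (field > threshold).astype(float)

def two_point_correlation(img):
    """Periodic two-point correlation S2 via FFT autocorrelation.
    At zero lag, S2 equals the volume fraction of the indicator image."""
    F = np.fft.fft2(img)
    return np.fft.ifft2(F * np.conj(F)).real / img.size

phase = level_cut_grf()
S2 = two_point_correlation(phase)
```

    In the correlation-based (optimization) branch, `S2` of the candidate structure would be compared against the target descriptors measured from 2D images; the hybrid idea is to start that optimization from a level-cut field like `phase` instead of from random noise.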

  14. Dynamic re-weighted total variation technique and statistic Iterative reconstruction method for x-ray CT metal artifact reduction

    NASA Astrophysics Data System (ADS)

    Peng, Chengtao; Qiu, Bensheng; Zhang, Cheng; Ma, Changyu; Yuan, Gang; Li, Ming

    2017-07-01

    Over the years, X-ray computed tomography (CT) has been used successfully in clinical diagnosis. However, when the body of the patient to be examined contains metal objects, the reconstructed image is corrupted by severe metal artifacts, which can affect the diagnosis. In this work, we propose a dynamic re-weighted total variation (DRWTV) technique combined with a statistical iterative reconstruction (SIR) method to reduce these artifacts. The DRWTV method builds on the total variation (TV) and re-weighted total variation (RWTV) techniques, but it provides a sparser representation than TV and protects tissue details better than RWTV. In addition, DRWTV suppresses artifacts and noise while accelerating SIR convergence. The performance of the algorithm is tested on both a simulated phantom dataset (a teeth phantom with two metal implants) and a clinical dataset (a skull with three metal implants). The proposed algorithm (SIR-DRWTV) is compared with two traditional iterative algorithms: SIR and SIR constrained by RWTV regularization (SIR-RWTV). The results show that the proposed algorithm performs best in reducing metal artifacts and protecting tissue details.
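    The re-weighting idea behind RWTV-style regularizers can be illustrated with a small iteratively re-weighted least squares (IRLS) denoiser: each pass solves a quadratic problem with weights w = 1/(|Dx| + eps), so locations with strong gradients (edges) are penalized less on the next pass. This is a generic 1D sketch of the re-weighting mechanism, not the paper's DRWTV (which additionally makes the weights dynamic and embeds them in SIR); all parameters are assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)

def irls_reweighted_tv(y, lam=2.0, eps=1e-2, iters=10):
    """Re-weighted TV denoising by IRLS: each pass solves
    (I + lam * D^T W D) x = y with W = diag(1 / (|Dx| + eps)),
    so edges found in the previous pass are smoothed less."""
    n = len(y)
    D = np.diff(np.eye(n), axis=0)              # forward-difference operator
    x = y.copy()
    for _ in range(iters):
        w = 1.0 / (np.abs(D @ x) + eps)         # the re-weighting step
        x = np.linalg.solve(np.eye(n) + lam * D.T @ (w[:, None] * D), y)
    return x

clean = np.repeat([0.0, 1.0, 0.3], 60)          # piecewise-constant test signal
noisy = clean + 0.1 * rng.normal(size=clean.size)
denoised = irls_reweighted_tv(noisy)
```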

  15. A defocus-information-free autostereoscopic three-dimensional (3D) digital reconstruction method using direct extraction of disparity information (DEDI)

    NASA Astrophysics Data System (ADS)

    Li, Da; Cheung, Chifai; Zhao, Xing; Ren, Mingjun; Zhang, Juan; Zhou, Liqiu

    2016-10-01

    Autostereoscopy-based three-dimensional (3D) digital reconstruction has been widely applied in medical science, entertainment, design, industrial manufacture, precision measurement and many other areas. A 3D digital model of the target can be reconstructed from the series of two-dimensional (2D) views acquired by the autostereoscopic system, which consists of multiple lenses and provides information about the target from multiple angles. This paper presents a generalized and precise autostereoscopic 3D digital reconstruction method based on Direct Extraction of Disparity Information (DEDI), which can be applied to any autostereoscopic system and provides accurate 3D reconstruction results through an error-elimination process based on statistical analysis. The feasibility of the DEDI method has been verified through a series of optical 3D digital reconstruction experiments on different autostereoscopic systems. The method efficiently performs direct full 3D model construction through a tomography-like operation on every depth plane while excluding defocused information. Using only the in-focus information extracted by the DEDI method, the 3D digital model of the target can be formed directly and precisely along the axial direction with the depth information.
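    Disparity extraction between two views is the basic ingredient such a method builds on. As a toy stand-in (not the paper's DEDI algorithm), the sketch below estimates per-pixel horizontal disparity between two rectified views by sum-of-squared-differences block matching; window size and disparity range are assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)

def disparity_map(left, right, max_d=6, half=2):
    """Per-pixel horizontal disparity by SSD block matching between two
    rectified views: for each left patch, find the best-matching right
    patch among candidate shifts d = 0..max_d."""
    h, w = left.shape
    disp = np.zeros((h, w), dtype=int)
    for y in range(half, h - half):
        for x in range(half + max_d, w - half):
            patch = left[y - half:y + half + 1, x - half:x + half + 1]
            costs = [np.sum((patch - right[y - half:y + half + 1,
                                           x - d - half:x - d + half + 1]) ** 2)
                     for d in range(max_d + 1)]
            disp[y, x] = int(np.argmin(costs))
    return disp

left = rng.random((16, 40))                 # random texture
right = np.roll(left, -3, axis=1)           # second view shifted by 3 pixels
disp = disparity_map(left, right)
```

    On this synthetic pair the true disparity is 3 everywhere in the valid interior, which the block matcher recovers exactly; real elemental images additionally require the defocus handling and statistical error elimination the abstract describes.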

  16. Ultralow-dose CT of the craniofacial bone for navigated surgery using adaptive statistical iterative reconstruction and model-based iterative reconstruction: 2D and 3D image quality.

    PubMed

    Widmann, Gerlig; Schullian, Peter; Gassner, Eva-Maria; Hoermann, Romed; Bale, Reto; Puelacher, Wolfgang

    2015-03-01

    OBJECTIVE. The purpose of this article is to evaluate 2D and 3D image quality of high-resolution ultralow-dose CT images of the craniofacial bone for navigated surgery using adaptive statistical iterative reconstruction (ASIR) and model-based iterative reconstruction (MBIR) in comparison with standard filtered backprojection (FBP). MATERIALS AND METHODS. A formalin-fixed human cadaver head was scanned using a clinical reference protocol at a CT dose index volume of 30.48 mGy and a series of five ultralow-dose protocols at 3.48, 2.19, 0.82, 0.44, and 0.22 mGy using FBP and ASIR at 50% (ASIR-50), ASIR at 100% (ASIR-100), and MBIR. Blinded 2D axial and 3D volume-rendered images were compared with each other by three readers using top-down scoring. Scores were analyzed per protocol or dose and reconstruction. All images were compared with the FBP reference at 30.48 mGy. A nonparametric Mann-Whitney U test was used. Statistical significance was set at p < 0.05. RESULTS. For 2D images, the FBP reference at 30.48 mGy did not statistically significantly differ from ASIR-100 at 3.48 mGy, ASIR-100 at 2.19 mGy, and MBIR at 0.82 mGy. MBIR at 2.19 and 3.48 mGy scored statistically significantly better than the FBP reference (p = 0.032 and 0.001, respectively). For 3D images, the FBP reference at 30.48 mGy did not statistically significantly differ from all reconstructions at 3.48 mGy; FBP and ASIR-100 at 2.19 mGy; FBP, ASIR-100, and MBIR at 0.82 mGy; MBIR at 0.44 mGy; and MBIR at 0.22 mGy. CONCLUSION. MBIR (2D and 3D) and ASIR-100 (2D) may significantly improve subjective image quality of ultralow-dose images and may allow more than 90% dose reductions.
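    The reader-score comparison in this study uses the nonparametric Mann-Whitney U test. A from-scratch sketch of the U statistic (with midranks for ties, which matter for ordinal top-down scores) is below; the score arrays are hypothetical, not the study's data.

```python
import numpy as np

def mann_whitney_u(a, b):
    """U statistic for sample a versus b, using midranks for tied values."""
    data = np.concatenate([a, b])
    order = np.argsort(data, kind="stable")
    ranks = np.empty(data.size)
    ranks[order] = np.arange(1, data.size + 1)
    for v in np.unique(data):                 # assign midranks to ties
        tie = data == v
        ranks[tie] = ranks[tie].mean()
    r1 = ranks[: len(a)].sum()
    return r1 - len(a) * (len(a) + 1) / 2.0

# Hypothetical top-down image-quality scores for two reconstructions
scores_fbp = np.array([3.0, 3, 4, 2, 3])
scores_mbir = np.array([4.0, 5, 4, 5, 4])
u = mann_whitney_u(scores_fbp, scores_mbir)
```

    In practice one would convert U to a p-value (exact tables for small samples, or a normal approximation with tie correction), as the study's p < 0.05 threshold implies.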

  17. Evaluation of reconstruction errors and identification of artefacts for JET gamma and neutron tomography

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Craciunescu, Teddy, E-mail: teddy.craciunescu@jet.uk; Tiseanu, Ion; Zoita, Vasile

    The Joint European Torus (JET) neutron profile monitor ensures 2D coverage of the gamma and neutron emissive region, which enables tomographic reconstruction. Because only two projection angles are available and the sampling is coarse, tomographic inversion is a limited-data-set problem. Several techniques have been developed for tomographic reconstruction of the 2D gamma and neutron emissivity on JET, but the problem of evaluating the errors associated with the reconstructed emissivity profile is still open. The reconstruction technique based on the maximum likelihood principle, which has already proved to be a powerful tool for JET tomography, has been used to develop a method for the numerical evaluation of the statistical properties of the uncertainties in gamma and neutron emissivity reconstructions. The image covariance calculation takes into account the additional techniques introduced in the reconstruction process to tackle the limited data set (projection resampling, smoothness regularization depending on the magnetic field). The method has been validated by numerical simulations and applied to JET data. Different sources of artefacts that may significantly influence the quality of reconstructions and the accuracy of the variance calculation have been identified.

  18. Accelerated Compressed Sensing Based CT Image Reconstruction.

    PubMed

    Hashemi, SayedMasoud; Beheshti, Soosan; Gill, Patrick R; Paul, Narinder S; Cobbold, Richard S C

    2015-01-01

    In X-ray computed tomography (CT) an important objective is to reduce the radiation dose without significantly degrading the image quality. Compressed sensing (CS) enables the radiation dose to be reduced by producing diagnostic images from a limited number of projections. However, conventional CS-based algorithms are computationally intensive and time-consuming. We propose a new algorithm that accelerates the CS-based reconstruction by using a fast pseudopolar Fourier based Radon transform and rebinning the diverging fan beams to parallel beams. The reconstruction process is analyzed using a maximum a posteriori approach, which is transformed into a weighted CS problem. The weights involved in the proposed model are calculated based on the statistical characteristics of the reconstruction process, which is formulated in terms of the measurement noise and rebinning interpolation error. Therefore, the proposed method not only accelerates the reconstruction, but also removes the rebinning and interpolation errors. Simulation results are shown for phantoms and a patient. For example, a 512 × 512 Shepp-Logan phantom when reconstructed from 128 rebinned projections using a conventional CS method had 10% error, whereas with the proposed method the reconstruction error was less than 1%. Moreover, computation times of less than 30 sec were obtained using a standard desktop computer without numerical optimization.
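    The "weighted CS problem" the abstract refers to has the generic form min 0.5||Ax − y||² + λ Σᵢ wᵢ|xᵢ|, solvable by proximal gradient descent with a weighted soft-threshold. The sketch below demonstrates that template on a small synthetic sparse-recovery problem; it is a minimal illustration, not the paper's pseudopolar-Radon implementation, and the sizes, λ, and uniform weights are assumptions.

```python
import numpy as np

rng = np.random.default_rng(4)

def weighted_ista(A, y, w, lam=0.05, iters=500):
    """Weighted CS recovery: minimize 0.5||Ax - y||^2 + lam * sum_i w_i |x_i|
    by proximal gradient descent (ISTA) with a weighted soft-threshold."""
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the data term
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        z = x - A.T @ (A @ x - y) / L
        x = np.sign(z) * np.maximum(np.abs(z) - lam * w / L, 0.0)
    return x

m, n, k = 40, 100, 5
A = rng.normal(size=(m, n)) / np.sqrt(m)   # random sensing matrix
x0 = np.zeros(n)
x0[rng.choice(n, k, replace=False)] = rng.choice([-1.0, 1.0], k)
y = A @ x0                                 # noise-free measurements
x_hat = weighted_ista(A, y, np.ones(n))
```

    In the paper's setting the weights are not uniform: they encode the statistics of measurement noise and rebinning interpolation error, so poorly measured components are penalized more.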

  20. BlastNeuron for Automated Comparison, Retrieval and Clustering of 3D Neuron Morphologies.

    PubMed

    Wan, Yinan; Long, Fuhui; Qu, Lei; Xiao, Hang; Hawrylycz, Michael; Myers, Eugene W; Peng, Hanchuan

    2015-10-01

    Characterizing the identity and types of neurons in the brain, as well as their associated function, requires a means of quantifying and comparing 3D neuron morphology. Presently, neuron comparison methods are based on statistics of neuronal morphology, such as size and number of branches, which are not fully suitable for detecting local similarities and differences in detailed structure. We developed BlastNeuron to compare neurons in terms of their global appearance, detailed arborization patterns, and topological similarity. BlastNeuron first compares and clusters 3D neuron reconstructions based on global morphology features and moment invariants, independent of their orientations, sizes, levels of reconstruction and other variations. Subsequently, BlastNeuron performs local alignment between any pair of retrieved neurons via a tree-topology-driven dynamic programming method. A 3D correspondence map can thus be generated at the resolution of single reconstruction nodes. We applied BlastNeuron to three datasets: (1) 10,000+ neuron reconstructions from a public morphology database, (2) 681 newly and manually reconstructed neurons, and (3) neuron reconstructions produced using several independent reconstruction methods. Our approach was able to accurately and efficiently retrieve morphologically and functionally similar neuron structures from a large morphology database, identify local common structures, and find clusters of neurons that share similarities in both morphology and molecular profiles.

  1. Flexible mini gamma camera reconstructions of extended sources using step and shoot and list mode.

    PubMed

    Gardiazabal, José; Matthies, Philipp; Vogel, Jakob; Frisch, Benjamin; Navab, Nassir; Ziegler, Sibylle; Lasser, Tobias

    2016-12-01

    Hand- and robot-guided mini gamma cameras have been introduced for the acquisition of single-photon emission computed tomography (SPECT) images. Less cumbersome than whole-body scanners, they allow for a fast acquisition of the radioactivity distribution, for example, to differentiate cancerous from hormonally hyperactive lesions inside the thyroid. This work compares acquisition protocols and reconstruction algorithms in an attempt to identify the most suitable approach for fast acquisition and efficient image reconstruction, suitable for localization of extended sources, such as lesions inside the thyroid. Our setup consists of a mini gamma camera with precise tracking information provided by a robotic arm, which also provides reproducible positioning for our experiments. Based on a realistic phantom of the thyroid including hot and cold nodules as well as background radioactivity, the authors compare "step and shoot" (SAS) and continuous data (CD) acquisition protocols in combination with two different statistical reconstruction methods: maximum-likelihood expectation-maximization (ML-EM) for time-integrated count values and list-mode expectation-maximization (LM-EM) for individually detected gamma rays. In addition, the authors simulate lower uptake values by statistically subsampling the experimental data in order to study the behavior of their approach without changing other aspects of the acquired data. All compared methods yield suitable results, resolving the hot nodules and the cold nodule from the background. However, the CD acquisition is twice as fast as the SAS acquisition, while yielding better coverage of the thyroid phantom, resulting in qualitatively more accurate reconstructions of the isthmus between the lobes. For CD acquisitions, the LM-EM reconstruction method is preferable, as it yields comparable image quality to ML-EM at significantly higher speeds, on average by an order of magnitude. This work identifies CD acquisition protocols combined with LM-EM reconstruction as a prime candidate for the wider introduction of SPECT imaging with flexible mini gamma cameras into clinical practice.
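    The ML-EM update the study uses is a simple multiplicative fixed-point iteration; list-mode EM applies the same update with one system-matrix row per detected event (each with count 1) rather than per time-integrated detector bin. A minimal sketch on a toy system (not the study's camera model) is:

```python
import numpy as np

def mlem(A, counts, n_iter=500):
    """ML-EM for emission tomography: lam <- lam * A^T(counts / (A lam)) / sens.
    List-mode EM uses the same update with one row per detected event and
    counts identically 1."""
    sens = A.sum(axis=0)                        # per-voxel detection sensitivity
    lam = np.ones(A.shape[1])                   # uniform initial estimate
    for _ in range(n_iter):
        ratio = counts / np.maximum(A @ lam, 1e-12)
        lam *= (A.T @ ratio) / np.maximum(sens, 1e-12)
    return lam

A = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])   # toy 3-bin, 2-voxel system
truth = np.array([2.0, 3.0])
lam = mlem(A, A @ truth)                              # noise-free data
```

    With noise-free, consistent data the iteration recovers the true activity; the speed advantage of LM-EM in the study comes from iterating over detected events only, which is cheap at low count rates.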

  2. Variational stereo imaging of oceanic waves with statistical constraints.

    PubMed

    Gallego, Guillermo; Yezzi, Anthony; Fedele, Francesco; Benetazzo, Alvise

    2013-11-01

    An image processing observational technique for the stereoscopic reconstruction of the waveform of oceanic sea states is developed. The technique incorporates the enforcement of any given statistical wave law modeling the quasi-Gaussianity of oceanic waves observed in nature. The problem is posed in a variational optimization framework, where the desired waveform is obtained as the minimizer of a cost functional that combines image observations, smoothness priors and a weak statistical constraint. The minimizer is obtained by combining gradient descent and multigrid methods on the necessary optimality equations of the cost functional. Robust photometric error criteria and a spatial intensity compensation model are also developed to improve the performance of the presented image matching strategy. The weak statistical constraint is thoroughly evaluated in combination with other elements presented to reconstruct and enforce constraints on experimental stereo data, demonstrating the improvement in the estimation of the observed ocean surface.

  3. Low-dose CT reconstruction with patch based sparsity and similarity constraints

    NASA Astrophysics Data System (ADS)

    Xu, Qiong; Mou, Xuanqin

    2014-03-01

    With the rapid growth of CT-based medical applications, low-dose CT reconstruction is becoming increasingly important to human health. Compared with other methods, statistical iterative reconstruction (SIR) usually performs better in the low-dose case. However, the reconstructed image quality of SIR depends heavily on the prior-based regularization, because low-dose data are insufficient. The frequently used regularizations are developed from pixel-based priors, such as smoothness between adjacent pixels. This kind of pixel-based constraint cannot distinguish noise from structures effectively. Recently, patch-based methods, such as dictionary learning and non-local means filtering, have outperformed conventional pixel-based methods. A patch is a small image region that expresses structural information about the image. In this paper, we propose to use patch-based constraints to improve the image quality of low-dose CT reconstruction. In the SIR framework, both patch-based sparsity and patch-based similarity are considered in the regularization term: sparsity is addressed by sparse representation and dictionary learning methods, while similarity is addressed by non-local means filtering. We conducted a real-data experiment to evaluate the proposed method. The experimental results validate that this method yields images with less noise and more detail than other methods in low-count and few-view cases.
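    The patch-based similarity idea (non-local means) is easy to sketch: each sample is replaced by a weighted average of samples whose surrounding patches look similar, so averaging happens across repeated structure rather than across edges. The 1D sketch below is a generic illustration of that filter, not the paper's regularizer, and the patch size and bandwidth are assumptions.

```python
import numpy as np

rng = np.random.default_rng(5)

def nlm_1d(y, half=2, h=0.2):
    """Non-local means on a 1D signal: weight each pair of samples by the
    similarity of their surrounding patches, then average."""
    n = y.size
    pad = np.pad(y, half, mode="edge")
    patches = np.stack([pad[i:i + 2 * half + 1] for i in range(n)])
    out = np.empty(n)
    for i in range(n):
        d2 = np.mean((patches - patches[i]) ** 2, axis=1)   # patch distances
        w = np.exp(-d2 / h**2)                              # similarity weights
        out[i] = w @ y / w.sum()
    return out

clean = np.repeat([0.0, 1.0], 50)          # piecewise-constant test signal
noisy = clean + 0.1 * rng.normal(size=clean.size)
denoised = nlm_1d(noisy)
```

    In the paper this filter acts as one regularization term inside the SIR objective; the sparsity term is instead enforced by coding each patch over a learned dictionary.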

  4. Reconstructing metastatic seeding patterns of human cancers

    PubMed Central

    Reiter, Johannes G.; Makohon-Moore, Alvin P.; Gerold, Jeffrey M.; Bozic, Ivana; Chatterjee, Krishnendu; Iacobuzio-Donahue, Christine A.; Vogelstein, Bert; Nowak, Martin A.

    2017-01-01

    Reconstructing the evolutionary history of metastases is critical for understanding their basic biological principles and has profound clinical implications. Genome-wide sequencing data has enabled modern phylogenomic methods to accurately dissect subclones and their phylogenies from noisy and impure bulk tumour samples at unprecedented depth. However, existing methods are not designed to infer metastatic seeding patterns. Here we develop a tool, called Treeomics, to reconstruct the phylogeny of metastases and map subclones to their anatomic locations. Treeomics infers comprehensive seeding patterns for pancreatic, ovarian, and prostate cancers. Moreover, Treeomics correctly disambiguates true seeding patterns from sequencing artifacts; 7% of variants were misclassified by conventional statistical methods. These artifacts can skew phylogenies by creating illusory tumour heterogeneity among distinct samples. In silico benchmarking on simulated tumour phylogenies across a wide range of sample purities (15–95%) and sequencing depths (25–800×) demonstrates the accuracy of Treeomics compared with existing methods. PMID:28139641

  5. [Impact to Z-score Mapping of Hyperacute Stroke Images by Computed Tomography in Adaptive Statistical Iterative Reconstruction].

    PubMed

    Watanabe, Shota; Sakaguchi, Kenta; Hosono, Makoto; Ishii, Kazunari; Murakami, Takamichi; Ichikawa, Katsuhiro

    The purpose of this study was to evaluate the effect of a hybrid-type iterative reconstruction method on Z-score mapping of hyperacute stroke in unenhanced computed tomography (CT) images. We used a hybrid-type iterative reconstruction method [adaptive statistical iterative reconstruction (ASiR)] implemented in a CT system (Optima CT660 Pro advance, GE Healthcare). For 15 normal brain cases, we reconstructed CT images with filtered back projection (FBP) and with ASiR at a blending factor of 100% (ASiR100%). Two standardized normal brain databases were created from the FBP images (FBP-NDB) and the ASiR100% images (ASiR-NDB), and standard deviation (SD) values in the basal ganglia were measured. Z-score mapping was performed for 12 hyperacute stroke cases using FBP-NDB and ASiR-NDB, and Z-score values in the hyperacute stroke area and the normal area were compared between FBP-NDB and ASiR-NDB. With ASiR-NDB, the SD value of the standardized brain decreased by 16%. The Z-score value of ASiR-NDB in the hyperacute stroke area was significantly higher than that of FBP-NDB (p<0.05). Therefore, using images reconstructed with ASiR100% for Z-score mapping has the potential to improve its accuracy.
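    The Z-score map itself is a voxel-wise standardization against the normal database, which makes clear why the 16% SD reduction matters: a smaller normal-database SD inflates |Z| for the same deviation. A minimal sketch (tiny hypothetical arrays, not the study's data):

```python
import numpy as np

def z_score_map(patient, normal_db, sd_floor=1e-6):
    """Voxel-wise Z-score of a patient image against a normal database
    (a stack of co-registered normal images along axis 0)."""
    mu = normal_db.mean(axis=0)
    sd = normal_db.std(axis=0, ddof=1)
    return (patient - mu) / np.maximum(sd, sd_floor)   # floor avoids division by 0

# 3 hypothetical normal images of 2 voxels each
normal_db = np.array([[1.0, 2.0], [3.0, 2.0], [2.0, 2.0]])
z = z_score_map(np.array([3.0, 2.0]), normal_db)
```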

  6. Denoised ordered subset statistically penalized algebraic reconstruction technique (DOS-SPART) in digital breast tomosynthesis

    NASA Astrophysics Data System (ADS)

    Garrett, John; Li, Yinsheng; Li, Ke; Chen, Guang-Hong

    2017-03-01

    Digital breast tomosynthesis (DBT) is a three-dimensional (3D) breast imaging modality in which projections are acquired over a limited angular span around the compressed breast and reconstructed into image slices parallel to the detector. DBT has been shown to help alleviate the tissue-overlap problems of two-dimensional (2D) mammography. Since overlapping tissues may simulate cancer masses or obscure true cancers, this improvement is critically important for breast cancer screening and diagnosis. In this work, a model-based image reconstruction method is presented to show that spatial resolution in DBT volumes can be maintained at reduced dose compared with a state-of-the-art commercial reconstruction technique. Spatial resolution was measured quantitatively in phantom images and assessed subjectively in a clinical dataset. Noise characteristics were explored in a cadaver study. In both the quantitative and subjective results, image sharpness and overall image quality were maintained at reduced doses when the model-based iterative reconstruction was used to reconstruct the volumes.

  7. Metal-induced streak artifact reduction using iterative reconstruction algorithms in x-ray computed tomography image of the dentoalveolar region.

    PubMed

    Dong, Jian; Hayakawa, Yoshihiko; Kannenberg, Sven; Kober, Cornelia

    2013-02-01

    The objective of this study was to reduce metal-induced streak artifacts in oral and maxillofacial x-ray computed tomography (CT) images by developing a fast statistical image reconstruction system based on iterative reconstruction algorithms. Adjacent CT images often depict similar anatomical structures in thin slices. Therefore, first, images were reconstructed using the same projection data as an artifact-free image. Second, images were processed by the successive iterative restoration method, in which projection data were generated from the reconstructed image in sequence. Besides the maximum likelihood-expectation maximization algorithm, the ordered subset-expectation maximization (OS-EM) algorithm was examined. Small region-of-interest (ROI) settings and reverse processing were also applied to improve performance. Both algorithms reduced artifacts while slightly decreasing gray levels. OS-EM and the small ROI reduced the processing duration without apparent detriment. Sequential and reverse processing did not show apparent effects. The two alternative iterative reconstruction methods were effective for artifact reduction. The OS-EM algorithm and small ROI setting improved performance. Copyright © 2012 Elsevier Inc. All rights reserved.
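    OS-EM's speedup over plain ML-EM comes from applying one multiplicative update per projection subset instead of per full pass over the data. The sketch below shows the subset mechanics on a toy system (not the study's dentoalveolar geometry); note the guard that leaves a voxel unchanged when a subset has no sensitivity to it.

```python
import numpy as np

def os_em(A, counts, n_subsets=2, n_iter=100):
    """Ordered-subset EM: one EM-style multiplicative update per projection
    subset; voxels a subset cannot see (zero sensitivity) are left unchanged."""
    m, n = A.shape
    subsets = [np.arange(s, m, n_subsets) for s in range(n_subsets)]
    lam = np.ones(n)
    for _ in range(n_iter):
        for idx in subsets:
            As = A[idx]
            sens = As.sum(axis=0)
            ratio = counts[idx] / np.maximum(As @ lam, 1e-12)
            update = np.where(sens > 0,
                              (As.T @ ratio) / np.maximum(sens, 1e-12),
                              1.0)                   # unseen voxels: no change
            lam *= update
    return lam

A = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])   # toy 3-bin, 2-voxel system
truth = np.array([2.0, 3.0])
lam = os_em(A, A @ truth)                             # noise-free data
```

    With n_subsets subsets, each full pass performs n_subsets updates for roughly the cost of one ML-EM iteration, which is the source of the reduced processing duration reported in the study.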

  8. Clinical Evaluation of Papilla Reconstruction Using Subepithelial Connective Tissue Graft

    PubMed Central

    Kaushik, Alka; PK, Pal; Chopra, Deepak; Chaurasia, Vishwajit Rampratap; Masamatti, Vinaykumar S; DK, Suresh; Babaji, Prashant

    2014-01-01

    Objective: The aesthetics of the patient can be improved by surgical reconstruction of the interdental papilla using an advanced papillary flap interposed with a subepithelial connective tissue graft. Materials and Methods: A total of fifteen sites from ten patients with black triangles/papilla recession in the maxillary anterior region were selected and subjected to presurgical evaluation. The sites were treated with an interposed subepithelial connective tissue graft placed under a coronally advanced flap. The integrity of the papilla was maintained by moving the whole gingivopapillary unit coronally. The various parameters were analysed at different intervals. Results: There was a mean decrease in the papilla presence index score and in the distance from the contact point to the gingival margin, but these decreases were not statistically significant. There was also an increase in the width of the keratinized gingiva, which was highly statistically significant. Conclusion: An advanced papillary flap with an interposed subepithelial connective tissue graft can offer predictable results for the reconstruction of the interdental papilla. If papilla loss occurs solely due to soft-tissue damage, reconstructive techniques can completely restore it; but if it is due to periodontal disease involving bone loss, reconstruction is generally incomplete and multiple surgical procedures may be required. PMID:25386529

  9. Comparison of Statistical Estimation Techniques for Mars Entry, Descent, and Landing Reconstruction from MEDLI-like Data Sources

    NASA Technical Reports Server (NTRS)

    Dutta, Soumyo; Braun, Robert D.; Russell, Ryan P.; Clark, Ian G.; Striepe, Scott A.

    2012-01-01

    Flight data from an entry, descent, and landing (EDL) sequence can be used to reconstruct the vehicle's trajectory, aerodynamic coefficients and the atmospheric profile experienced by the vehicle. Past Mars missions have carried instruments that do not provide direct measurement of the freestream atmospheric conditions. Thus, the uncertainties in the atmospheric reconstruction and the aerodynamic database knowledge could not be separated. The upcoming Mars Science Laboratory (MSL) will take measurements of the pressure distribution on the aeroshell forebody during entry and will allow freestream atmospheric conditions to be partially observable. These data provide a means to separate atmospheric and aerodynamic uncertainties and are part of the MSL EDL Instrumentation (MEDLI) project. Methods to estimate the flight performance statistically using on-board measurements are demonstrated here through the use of simulated Mars data. Different statistical estimators are used to determine which estimator best quantifies the uncertainties in the flight parameters. The techniques demonstrated herein are planned for application to the MSL flight dataset after the spacecraft lands on Mars in August 2012.
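    A Kalman filter is the archetypal statistical estimator for this kind of trajectory reconstruction from noisy measurements. The sketch below is a generic 1D constant-velocity filter on synthetic data, not the paper's EDL estimators (which are nonlinear and higher-dimensional); the process/measurement noise levels are assumptions.

```python
import numpy as np

rng = np.random.default_rng(6)

def kalman_filter(zs, dt=1.0, q=1e-4, r=0.25):
    """Linear Kalman filter with a constant-velocity state [position, velocity]
    and scalar position measurements with variance r."""
    F = np.array([[1.0, dt], [0.0, 1.0]])                     # state transition
    H = np.array([[1.0, 0.0]])                                # measurement model
    Q = q * np.array([[dt**3 / 3, dt**2 / 2], [dt**2 / 2, dt]])
    x, P = np.zeros(2), 10.0 * np.eye(2)
    est = []
    for z in zs:
        x, P = F @ x, F @ P @ F.T + Q                         # predict
        S = H @ P @ H.T + r                                   # innovation covariance
        K = (P @ H.T) / S                                     # Kalman gain
        x = x + K @ (z - H @ x)                               # update
        P = (np.eye(2) - K @ H) @ P
        est.append(x[0])
    return np.array(est)

t = np.arange(60.0)
pos = 0.5 * t                                  # true constant-velocity trajectory
zs = pos + 0.5 * rng.normal(size=t.size)       # noisy position measurements
est = kalman_filter(zs)
```

    Smoothers and batch least-squares estimators, as compared in the paper, trade this recursive structure for the ability to use all measurements when estimating each state.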

  10. ELUCID - Exploring the Local Universe with ReConstructed Initial Density Field III: Constrained Simulation in the SDSS Volume

    NASA Astrophysics Data System (ADS)

    Wang, Huiyuan; Mo, H. J.; Yang, Xiaohu; Zhang, Youcai; Shi, JingJing; Jing, Y. P.; Liu, Chengze; Li, Shijie; Kang, Xi; Gao, Yang

    2016-11-01

    A method we developed recently for the reconstruction of the initial density field in the nearby universe is applied to the Sloan Digital Sky Survey Data Release 7. A high-resolution N-body constrained simulation (CS) of the reconstructed initial conditions, with 3072³ particles evolved in a 500 h⁻¹ Mpc box, is carried out and analyzed in terms of the statistical properties of the final density field and its relation with the distribution of Sloan Digital Sky Survey galaxies. We find that the statistical properties of the cosmic web and the halo populations are accurately reproduced in the CS. The galaxy density field is strongly correlated with the CS density field, with a bias that depends on both galaxy luminosity and color. Our further investigations show that the CS provides robust quantities describing the environments within which the observed galaxies and galaxy systems reside. Cosmic variance is greatly reduced in the CS so that the statistical uncertainties can be controlled effectively, even for samples of small volumes.

  11. MO-C-18A-01: Advances in Model-Based 3D Image Reconstruction

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, G; Pan, X; Stayman, J

    2014-06-15

    Recent years have seen the emergence of CT image reconstruction techniques that exploit physical models of the imaging system, photon statistics, and even the patient to achieve improved 3D image quality and/or reduction of radiation dose. With numerous advantages in comparison to conventional 3D filtered backprojection, such techniques bring a variety of challenges as well, including: a demanding computational load associated with sophisticated forward models and iterative optimization methods; nonlinearity and nonstationarity in image quality characteristics; a complex dependency on multiple free parameters; and the need to understand how best to incorporate prior information (including patient-specific prior images) within the reconstruction process. The advantages, however, are even greater; for example: improved image quality; reduced dose; robustness to noise and artifacts; task-specific reconstruction protocols; suitability to novel CT imaging platforms and noncircular orbits; and incorporation of known characteristics of the imager and patient that are conventionally discarded. This symposium features experts in 3D image reconstruction, image quality assessment, and the translation of such methods to emerging clinical applications. Dr. Chen will address novel methods for the incorporation of prior information in 3D and 4D CT reconstruction techniques. Dr. Pan will show recent advances in optimization-based reconstruction that enable potential reduction of dose and sampling requirements. Dr. Stayman will describe a "task-based imaging" approach that leverages models of the imaging system and patient in combination with a specification of the imaging task to optimize both the acquisition and reconstruction process. Dr. Samei will describe the development of methods for image quality assessment in such nonlinear reconstruction techniques and the use of these methods to characterize and optimize image quality and dose in a spectrum of clinical applications. Learning Objectives: Learn the general methodologies associated with model-based 3D image reconstruction. Learn the potential advantages in image quality and dose associated with model-based image reconstruction. Learn the challenges associated with computational load and image quality assessment for such reconstruction methods. Learn how the imaging task can be incorporated as a means to drive optimal image acquisition and reconstruction techniques. Learn how model-based reconstruction methods can incorporate prior information to improve image quality, ease sampling requirements, and reduce dose.

  12. A PET reconstruction formulation that enforces non-negativity in projection space for bias reduction in Y-90 imaging

    NASA Astrophysics Data System (ADS)

    Lim, Hongki; Dewaraja, Yuni K.; Fessler, Jeffrey A.

    2018-02-01

    Most existing PET image reconstruction methods impose a nonnegativity constraint in the image domain that is natural physically but can lead to biased reconstructions. This bias is particularly problematic for Y-90 PET because of the low probability of positron production and the high random-coincidence fraction. This paper investigates a new PET reconstruction formulation that enforces nonnegativity of the projections instead of the voxel values. This formulation allows some negative voxel values, thereby potentially reducing bias. Unlike the previously reported NEG-ML approach, which modifies the Poisson log-likelihood to allow negative values, the new formulation retains the classical Poisson statistical model. To relax the nonnegativity constraint embedded in the standard methods for PET reconstruction, we used an alternating direction method of multipliers (ADMM). Because the choice of ADMM parameters can greatly influence the convergence rate, we applied an automatic parameter selection method to improve the convergence speed. We investigated the methods using lung-to-liver slices of the XCAT phantom. We simulated low true-coincidence count rates with high random fractions corresponding to the typical values from patient imaging in Y-90 microsphere radioembolization. We compared our new method with standard reconstruction algorithms and with NEG-ML and a regularized version thereof. Both our new method and NEG-ML allow more accurate quantification in all volumes of interest while yielding lower noise than the standard method. The performance of NEG-ML can degrade when its user-defined parameter is tuned poorly, whereas the proposed algorithm is robust at any count level without requiring parameter tuning.

  13. Reconstruction of ensembles of coupled time-delay systems from time series.

    PubMed

    Sysoev, I V; Prokhorov, M D; Ponomarenko, V I; Bezruchko, B P

    2014-06-01

    We propose a method to recover from time series the parameters of coupled time-delay systems and the architecture of couplings between them. The method is based on a reconstruction of model delay-differential equations and estimation of statistical significance of couplings. It can be applied to networks composed of nonidentical nodes with an arbitrary number of unidirectional and bidirectional couplings. We test our method on chaotic and periodic time series produced by model equations of ensembles of diffusively coupled time-delay systems in the presence of noise, and apply it to experimental time series obtained from electronic oscillators with delayed feedback coupled by resistors.

  14. Entry trajectory and atmosphere reconstruction methodologies for the Mars Exploration Rover mission

    NASA Astrophysics Data System (ADS)

    Desai, Prasun N.; Blanchard, Robert C.; Powell, Richard W.

    2004-02-01

    The Mars Exploration Rover (MER) mission will land two landers on the surface of Mars, arriving in January 2004. Both landers will deliver the rovers to the surface by decelerating with the aid of an aeroshell, a supersonic parachute, retro-rockets, and air bags for safely landing on the surface. The reconstruction of the MER descent trajectory and atmosphere profile will be performed for all the phases from hypersonic flight through landing. A description of multiple methodologies for the flight reconstruction is presented from simple parameter identification methods through a statistical Kalman filter approach.
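    The Kalman filter mentioned above is the workhorse of such statistical trajectory reconstruction. As a hedged illustration, here is a minimal linear Kalman filter on a toy one-dimensional descent with a constant-velocity state model and noisy altitude measurements; all values are illustrative, not the MER formulation or flight data.

    ```python
    import numpy as np

    # Toy Kalman-filter trajectory reconstruction sketch (illustrative only).
    dt = 1.0
    F = np.array([[1.0, dt], [0.0, 1.0]])   # state transition: [altitude, velocity]
    H = np.array([[1.0, 0.0]])              # we measure altitude only
    Q = 0.01 * np.eye(2)                    # assumed process-noise covariance
    R = np.array([[4.0]])                   # assumed measurement-noise covariance

    def kalman_filter(zs, x0, P0):
        """Run predict/update over the measurement sequence zs."""
        x, P = x0.astype(float), P0.astype(float)
        out = []
        for z in zs:
            # Predict
            x = F @ x
            P = F @ P @ F.T + Q
            # Update: innovation = measurement minus predicted measurement
            innov = z - H @ x
            S = H @ P @ H.T + R
            K = P @ H.T @ np.linalg.inv(S)   # Kalman gain
            x = x + K @ innov
            P = (np.eye(2) - K @ H) @ P
            out.append(x.copy())
        return np.array(out)

    rng = np.random.default_rng(0)
    true_alt = 1000.0 - 5.0 * np.arange(50)                # steady 5 m/s descent
    zs = true_alt[:, None] + rng.normal(0.0, 2.0, (50, 1))  # noisy altimeter
    est = kalman_filter(zs, np.array([1000.0, 0.0]), 10.0 * np.eye(2))
    ```

    Even though only altitude is measured, the filter recovers the descent rate from the sequence of innovations; the same predict/update structure scales to the full entry-state vectors used in EDL reconstruction.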

  15. Reconstruction of three-dimensional porous media using generative adversarial neural networks

    NASA Astrophysics Data System (ADS)

    Mosser, Lukas; Dubrule, Olivier; Blunt, Martin J.

    2017-10-01

    To evaluate the variability of multiphase flow properties of porous media at the pore scale, it is necessary to acquire a number of representative samples of the void-solid structure. While modern X-ray computed tomography has made it possible to extract three-dimensional images of the pore space, assessment of the variability in the inherent material properties is often experimentally not feasible. We present a method to reconstruct the solid-void structure of porous media by applying a generative neural network that allows an implicit description of the probability distribution represented by three-dimensional image data sets. We show, by using an adversarial learning approach for neural networks, that this method of unsupervised learning is able to generate representative samples of porous media that honor their statistics. We successfully compare measures of pore morphology, such as the Euler characteristic, two-point statistics, and directional single-phase permeability of synthetic realizations with the calculated properties of a bead pack, Berea sandstone, and Ketton limestone. Results show that generative adversarial networks can be used to reconstruct high-resolution three-dimensional images of porous media at different scales that are representative of the morphology of the images used to train the neural network. The fully convolutional nature of the trained neural network allows the generation of large samples while maintaining computational efficiency. Compared to classical stochastic methods of image reconstruction, the implicit representation of the learned data distribution can be stored and reused to generate multiple realizations of the pore structure very rapidly.

  16. Adaptive Statistical Iterative Reconstruction-V Versus Adaptive Statistical Iterative Reconstruction: Impact on Dose Reduction and Image Quality in Body Computed Tomography.

    PubMed

    Gatti, Marco; Marchisio, Filippo; Fronda, Marco; Rampado, Osvaldo; Faletti, Riccardo; Bergamasco, Laura; Ropolo, Roberto; Fonio, Paolo

    The aim of this study was to evaluate the impact on dose reduction and image quality of the new iterative reconstruction technique adaptive statistical iterative reconstruction-V (ASIR-V). Fifty consecutive oncologic patients acted as case controls, undergoing during their follow-up computed tomography scans with both ASIR and ASIR-V. Each study was analyzed in a double-blinded fashion by 2 radiologists. Both quantitative and qualitative analyses of image quality were conducted. Computed tomography scanner radiation output was 38% (29%-45%) lower (P < 0.0001) for the ASIR-V examinations than for the ASIR ones. The quantitative image noise was significantly lower (P < 0.0001) for ASIR-V. Adaptive statistical iterative reconstruction-V had a higher performance for the subjective image noise (P = 0.01 for 5 mm and P = 0.009 for 1.25 mm), with the other parameters (image sharpness, diagnostic acceptability, and overall image quality) being similar (P > 0.05). Adaptive statistical iterative reconstruction-V is a new iterative reconstruction technique that has the potential to provide image quality equal to or greater than ASIR, with a dose reduction of around 40%.

  17. Comparing observer models and feature selection methods for a task-based statistical assessment of digital breast tomosynthesis in reconstruction space

    NASA Astrophysics Data System (ADS)

    Park, Subok; Zhang, George Z.; Zeng, Rongping; Myers, Kyle J.

    2014-03-01

    A task-based assessment of image quality [1] for digital breast tomosynthesis (DBT) can be done in either the projected or reconstructed data space. As the choice of observer models and feature selection methods can vary depending on the type of task and data statistics, we previously investigated the performance of two channelized Hotelling observer models in conjunction with 2D Laguerre-Gauss (LG) and two implementations of partial least squares (PLS) channels, along with that of the Hotelling observer, in binary detection tasks involving DBT projections [2, 3]. The difference in these observers lies in how the spatial correlation in DBT angular projections is incorporated in the observer's strategy to perform the given task. In the current work, we extend our method to the reconstructed data space of DBT. We investigate how various model observers including the aforementioned compare for performing the binary detection of a spherical signal embedded in structured breast phantoms with the use of DBT slices reconstructed via filtered back projection. We explore how well the model observers incorporate the spatial correlation between different numbers of reconstructed DBT slices while varying the number of projections. For this, relatively small and large scan angles (24° and 96°) are used for comparison. Our results indicate that 1) given a particular scan angle, the number of projections needed to achieve the best performance for each observer is similar across all observer/channel combinations, i.e., Np = 25 for scan angle 96° and Np = 13 for scan angle 24°, and 2) given these sufficient numbers of projections, the number of slices for each observer to achieve the best performance differs depending on the channel/observer types, which is more pronounced in the narrow scan angle case.

  18. Reconstruction for proton computed tomography by tracing proton trajectories: A Monte Carlo study

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li Tianfang; Liang Zhengrong; Singanallur, Jayalakshmi V.

    Proton computed tomography (pCT) has been explored in the past decades because of its unique imaging characteristics, low radiation dose, and its possible use for treatment planning and on-line target localization in proton therapy. However, reconstruction of pCT images is challenging because the proton path within the object to be imaged is statistically affected by multiple Coulomb scattering. In this paper, we employ GEANT4-based Monte Carlo simulations of the two-dimensional pCT reconstruction of an elliptical phantom to investigate the possible use of the algebraic reconstruction technique (ART) with three different path-estimation methods for pCT reconstruction. The first method assumes a straight-line path (SLP) connecting the proton entry and exit positions, the second method uses the most-likely path (MLP) theoretically determined for a uniform medium, and the third method employs a cubic spline path (CSP). The ART reconstructions showed progressive improvement of spatial resolution when going from the SLP [2 line pairs (lp) cm⁻¹] to the curved CSP and MLP path estimates (5 lp cm⁻¹). The MLP-based ART algorithm had the fastest convergence and smallest residual error of all three estimates. This work demonstrates the advantage of tracking curved proton paths in conjunction with the ART algorithm.
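    The cubic-spline path idea can be illustrated with a Hermite cubic pinned to the measured entry/exit positions and unit directions. The parameterization below is a plausible CSP-style sketch, not necessarily the authors' exact formulation.

    ```python
    import numpy as np

    # Hedged sketch of a cubic-spline proton path estimate (illustrative).
    def cubic_spline_path(p_in, d_in, p_out, d_out, n=100):
        """Sample n points along a cubic Hermite curve from p_in to p_out."""
        t = np.linspace(0.0, 1.0, n)[:, None]
        h00 = 2 * t**3 - 3 * t**2 + 1          # Hermite basis functions
        h10 = t**3 - 2 * t**2 + t
        h01 = -2 * t**3 + 3 * t**2
        h11 = t**3 - t**2
        L = np.linalg.norm(p_out - p_in)       # chord length scales the tangents
        return h00 * p_in + h10 * L * d_in + h01 * p_out + h11 * L * d_out

    entry = np.array([0.0, 0.0])
    exit_ = np.array([10.0, 0.5])
    d_out = np.array([1.0, 0.1]) / np.linalg.norm([1.0, 0.1])
    path = cubic_spline_path(entry, np.array([1.0, 0.0]), exit_, d_out)
    ```

    The curve interpolates the entry and exit points exactly and bends smoothly in between, which is why curved path estimates like the CSP resolve finer detail than a straight-line assumption.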

  19. PET Image Reconstruction Incorporating 3D Mean-Median Sinogram Filtering

    NASA Astrophysics Data System (ADS)

    Mokri, S. S.; Saripan, M. I.; Rahni, A. A. Abd; Nordin, A. J.; Hashim, S.; Marhaban, M. H.

    2016-02-01

    Positron emission tomography (PET) projection data, or sinograms, suffer from poor counting statistics and randomness that produce noisy PET images. In order to improve the PET image, we propose an implementation of pre-reconstruction sinogram filtering based on a 3D mean-median filter. The proposed filter is designed with three aims: to minimise angular blurring artifacts, to smooth flat regions, and to preserve the edges in the reconstructed PET image. The performance of the pre-reconstruction sinogram filter prior to three established reconstruction methods, namely filtered backprojection (FBP), ordered-subsets maximum-likelihood expectation maximization (OSEM), and OSEM with median root prior (OSEM-MRP), is investigated using simulated NCAT phantom PET sinograms as generated by the PET Analytical Simulator (ASIM). The improvement in the quality of the reconstructed images with and without sinogram filtering is assessed by visual as well as quantitative evaluation based on global signal-to-noise ratio (SNR), local SNR, contrast-to-noise ratio (CNR), and edge-preservation capability. Further analysis of the achieved improvement is also carried out for the iterative OSEM and OSEM-MRP reconstruction methods with and without pre-reconstruction filtering in terms of contrast recovery curve (CRC) versus noise trade-off, normalised mean square error versus iteration, local CNR versus iteration, and lesion detectability. Overall, satisfactory results are obtained from both visual and quantitative evaluations.
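    The MLEM update underlying OSEM-type methods has a compact multiplicative form, x ← x · Aᵀ(y / Ax) / Aᵀ1. A minimal sketch on a toy system matrix follows; the sizes and simulated counts are illustrative, not the NCAT/ASIM setup described above.

    ```python
    import numpy as np

    # Hedged sketch of the multiplicative MLEM update (illustrative data).
    rng = np.random.default_rng(1)
    n_bins, n_pix = 32, 16
    A = rng.uniform(0.0, 1.0, (n_bins, n_pix))     # toy system matrix
    x_true = rng.uniform(1.0, 5.0, n_pix)          # toy activity image
    y = rng.poisson(A @ x_true).astype(float)      # Poisson sinogram counts

    def mlem(A, y, n_iter=50):
        x = np.ones(A.shape[1])                    # uniform nonnegative start
        sens = A.T @ np.ones(A.shape[0])           # sensitivity image A^T 1
        for _ in range(n_iter):
            ratio = y / np.maximum(A @ x, 1e-12)   # measured / forward-projected
            x *= (A.T @ ratio) / sens              # multiplicative EM update
        return x

    x_est = mlem(A, y)
    ```

    Because the update is multiplicative from a positive start, nonnegativity is preserved automatically; OSEM accelerates this same update by cycling over ordered subsets of the sinogram rows.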

  20. Stochastic Analysis and Design of Heterogeneous Microstructural Materials System

    NASA Astrophysics Data System (ADS)

    Xu, Hongyi

    Advanced materials systems are new materials comprised of multiple traditional constituents but with complex microstructure morphologies, which lead to properties superior to those of conventional materials. To accelerate the development of new advanced materials systems, the objective of this dissertation is to develop a computational design framework and the associated techniques for design automation of microstructural materials systems, with an emphasis on addressing the uncertainties associated with the heterogeneity of microstructural materials. Five key research tasks are identified: design representation, design evaluation, design synthesis, material informatics and uncertainty quantification. Design representation of microstructure includes statistical characterization and stochastic reconstruction. This dissertation develops a new descriptor-based methodology, which characterizes 2D microstructures using descriptors of composition, dispersion and geometry. Statistics of 3D descriptors are predicted based on 2D information to enable 2D-to-3D reconstruction. An efficient sequential reconstruction algorithm is developed to reconstruct statistically equivalent random 3D digital microstructures. In design evaluation, a stochastic decomposition and reassembly strategy is developed to deal with the high computational costs and uncertainties induced by material heterogeneity. The properties of Representative Volume Elements (RVEs) are predicted by stochastically reassembling Statistical Volume Elements (SVEs) with stochastic properties into a coarse representation of the RVE. In design synthesis, a new descriptor-based design framework is developed, which integrates computational methods of microstructure characterization and reconstruction, sensitivity analysis, Design of Experiments (DOE), metamodeling and optimization to enable parametric optimization of the microstructure for achieving the desired material properties. 
Material informatics is studied to efficiently reduce the dimension of microstructure design space. This dissertation develops a machine learning-based methodology to identify the key microstructure descriptors that highly impact properties of interest. In uncertainty quantification, a comparative study on data-driven random process models is conducted to provide guidance for choosing the most accurate model in statistical uncertainty quantification. Two new goodness-of-fit metrics are developed to provide quantitative measurements of random process models' accuracy. The benefits of the proposed methods are demonstrated by the example of designing the microstructure of polymer nanocomposites. This dissertation provides material-generic, intelligent modeling/design methodologies and techniques to accelerate the process of analyzing and designing new microstructural materials system.

  1. Parallelizable 3D statistical reconstruction for C-arm tomosynthesis system

    NASA Astrophysics Data System (ADS)

    Wang, Beilei; Barner, Kenneth; Lee, Denny

    2005-04-01

    Clinical diagnosis and security detection tasks increasingly require 3D information, which is difficult or impossible to obtain from 2D (two-dimensional) radiographs. As a 3D (three-dimensional) radiographic and non-destructive imaging technique, digital tomosynthesis is especially suited to cases where 3D information is required but a complete projection data set is not available. Nowadays, FBP (filtered back projection) is extensively used in industry for its speed and simplicity. However, it struggles in situations where only a limited number of projections from constrained directions are available, or where the SNR (signal-to-noise ratio) of the projections is low. In order to deal with noise and take into account a priori information about the object, a statistical image reconstruction method is described based on the acquisition model of X-ray projections. We formulate an ML (maximum likelihood) function for this model and develop an ordered-subsets iterative algorithm to estimate the unknown attenuation of the object. Simulations show that satisfactory results can be obtained after 1 to 2 iterations, after which there is no significant improvement in image quality. An adaptive Wiener filter is also applied to the reconstructed image to remove noise. Some approximations to speed up the reconstruction computation are also considered. Applying this method to computer-generated projections of a revised Shepp phantom and to true projections from diagnostic radiographs of a patient's hand and from mammography images yields reconstructions of impressive quality. Parallel programming is also implemented and tested. The quality of the reconstructed object is conserved, while the computation time is reduced by a factor of almost the number of threads used.

  2. Ultralow dose computed tomography attenuation correction for pediatric PET CT using adaptive statistical iterative reconstruction

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brady, Samuel L., E-mail: samuel.brady@stjude.org; Shulkin, Barry L.

    2015-02-15

    Purpose: To develop ultralow dose computed tomography (CT) attenuation correction (CTAC) acquisition protocols for pediatric positron emission tomography CT (PET CT). Methods: A GE Discovery 690 PET CT hybrid scanner was used to investigate the change to quantitative PET and CT measurements when operated at ultralow doses (10–35 mA s). CT quantitation: noise, low-contrast resolution, and CT numbers for 11 tissue substitutes were analyzed in-phantom. CT quantitation was analyzed down to a 90% reduction in volume computed tomography dose index (0.39/3.64 mGy) from baseline. To minimize noise infiltration, 100% adaptive statistical iterative reconstruction (ASiR) was used for CT reconstruction. PET images were reconstructed with the lower-dose CTAC iterations and analyzed for: maximum body weight standardized uptake value (SUV_bw) of various diameter targets (range 8–37 mm), background uniformity, and spatial resolution. Radiation dose and CTAC noise magnitude were compared for 140 patient examinations (76 post-ASiR implementation) to determine relative dose reduction and noise control. Results: CT numbers were constant to within 10% of the non-dose-reduced CTAC image for 90% dose reduction. No change in SUV_bw, background percent uniformity, or spatial resolution was found for PET images reconstructed with CTAC protocols down to 90% dose reduction. Patient population effective dose analysis demonstrated relative CTAC dose reductions between 62% and 86% (3.2/8.3–0.9/6.2). Noise magnitude in dose-reduced patient images increased but was not statistically different from pre-dose-reduced patient images. Conclusions: Using ASiR allowed for aggressive reduction in CT dose with no change in the reconstructed PET images while maintaining sufficient image quality for colocalization of hybrid CT anatomy and PET radioisotope uptake.

  3. Formulation and implementation of nonstationary adaptive estimation algorithm with applications to air-data reconstruction

    NASA Technical Reports Server (NTRS)

    Whitmore, S. A.

    1985-01-01

    The dynamics model and data sources used to perform air-data reconstruction are discussed, as well as the Kalman filter. The need for adaptive determination of the noise statistics of the process is indicated. The filter innovations are presented as a means of developing the adaptive criterion, which is based on the true mean and covariance of the filter innovations. A method for the numerical approximation of the mean and covariance of the filter innovations is presented. The algorithm as developed is applied to air-data reconstruction for the space shuttle, and data obtained from the third landing are presented. To verify the performance of the adaptive algorithm, the reconstruction is also performed using a constant covariance Kalman filter. The results of the reconstructions are compared, and the adaptive algorithm exhibits better performance.
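    The innovations-based criterion can be illustrated with a toy consistency check: for a well-tuned filter the innovations are zero-mean with covariance S = HPHᵀ + R, so a sample estimate far from the predicted S signals that the assumed noise statistics need adaptation. The function and numbers below are illustrative, not Whitmore's algorithm.

    ```python
    import numpy as np

    # Hedged sketch of innovations-based noise-statistics checking.
    def innovation_consistency(innovations, S_pred):
        """Ratio of sample innovations variance to the filter-predicted one."""
        return np.mean(np.square(innovations)) / S_pred

    rng = np.random.default_rng(2)
    S_assumed = 1.0                           # variance the filter believes in
    innov = rng.normal(0.0, 2.0, 10_000)      # actual innovations: variance 4
    ratio = innovation_consistency(innov, S_assumed)
    # A ratio well above 1 means the assumed measurement-noise covariance
    # is too small; an adaptive scheme would inflate it by roughly this factor.
    ```

    In an adaptive filter this check runs on a sliding window of recent innovations, so the covariances track nonstationary noise rather than being fixed in advance.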

  4. A method to reconstruct long precipitation series using systematic descriptive observations in weather diaries: the example of the precipitation series for Bern, Switzerland (1760-2003)

    NASA Astrophysics Data System (ADS)

    Gimmi, U.; Luterbacher, J.; Pfister, C.; Wanner, H.

    2007-01-01

    In contrast to barometric and thermometric records, early instrumental precipitation series are quite rare. Based on systematic descriptive daily records, a quantitative monthly precipitation series for Bern (Switzerland) was reconstructed back to the year 1760 (reconstruction based on documentary evidence). Since every observer had his own personal style to fill out his diary, the main focus was to avoid observer-specific bias in the reconstruction. An independent statistical monthly precipitation reconstruction was performed using instrumental data from European sites. Over most periods the reconstruction based on documentary evidence lies inside the 2 standard errors of the statistical estimates. The comparison between these two approaches enables an independent verification and a reliable error estimate. The analysis points to below normal rainfall totals in all seasons during the late 18th century and in the 1820s and 1830s. Increased precipitation occurred in the early 1850s and the late 1870s, particularly from spring to autumn. The annual precipitation totals generally tend to be higher in the 20th century than in the late 18th and 19th century. Precipitation changes are discussed in the context of socioeconomic impacts and Alpine glacier dynamics. The conceptual design of the reconstruction procedure is aimed at application for similar descriptive precipitation series, which are known to be abundant from the mid-18th century in Europe and the U.S.

  5. Isokinetic Testing in Evaluation Rehabilitation Outcome After ACL Reconstruction

    PubMed Central

    Cvjetkovic, Dragana Dragicevic; Bijeljac, Sinisa; Palija, Stanislav; Talic, Goran; Radulovic, Tatjana Nozica; Kosanovic, Milkica Glogovac; Manojlovic, Slavko

    2015-01-01

    Introduction: Numerous rehabilitation protocols have been used after ACL reconstruction. Isokinetic testing is an objective way to evaluate the dynamic stability of the knee joint and to estimate the quality of the rehabilitation outcome after ACL reconstruction. Our goal was to show the importance of isokinetic testing in evaluating thigh muscle strength in patients who underwent ACL reconstruction and a rehabilitation protocol. Subjects and methods: In a prospective study, we evaluated 40 subjects divided into two groups. The experimental group consisted of 20 recreational males who underwent ACL reconstruction with a hamstring tendon graft and a rehabilitation protocol 6 months before isokinetic testing. The control group (20 subjects) consisted of healthy recreational males. In all subjects, knee muscle testing was performed on a Biodex System 4 Pro isokinetic dynamometer at velocities of 60°/s and 180°/s. We followed the average peak torque to body weight ratio (PT/BW) and the classic H/Q ratio. Student's t-test was used for the statistical analysis. Results: There were statistically significant differences between the groups in all evaluated parameters except the mean value of PT/BW of the quadriceps at a velocity of 60°/s (p>0.05). Conclusion: Isokinetic testing of the dynamic stabilizers of the knee is needed in the diagnosis and treatment of thigh muscle imbalance. We believe that isokinetic testing provides an objective parameter for return to sport activities after ACL reconstruction. PMID:25870471

  6. Track and vertex reconstruction: From classical to adaptive methods

    NASA Astrophysics Data System (ADS)

    Strandlie, Are; Frühwirth, Rudolf

    2010-04-01

    This paper reviews classical and adaptive methods of track and vertex reconstruction in particle physics experiments. Adaptive methods have been developed to meet the experimental challenges at high-energy colliders, in particular, the CERN Large Hadron Collider. They can be characterized by the obliteration of the traditional boundaries between pattern recognition and statistical estimation, by the competition between different hypotheses about what constitutes a track or a vertex, and by a high level of flexibility and robustness achieved with a minimum of assumptions about the data. The theoretical background of some of the adaptive methods is described, and it is shown that there is a close connection between the two main branches of adaptive methods: neural networks and deformable templates, on the one hand, and robust stochastic filters with annealing, on the other hand. As both classical and adaptive methods of track and vertex reconstruction presuppose precise knowledge of the positions of the sensitive detector elements, the paper includes an overview of detector alignment methods and a survey of the alignment strategies employed by past and current experiments.

  7. Optimizing convergence rates of alternating minimization reconstruction algorithms for real-time explosive detection applications

    NASA Astrophysics Data System (ADS)

    Bosch, Carl; Degirmenci, Soysal; Barlow, Jason; Mesika, Assaf; Politte, David G.; O'Sullivan, Joseph A.

    2016-05-01

    X-ray computed tomography reconstruction for medical, security and industrial applications has evolved through 40 years of experience with rotating gantry scanners using analytic reconstruction techniques such as filtered back projection (FBP). In parallel, research into statistical iterative reconstruction algorithms has evolved to apply to sparse-view scanners in nuclear medicine, low data rate scanners in Positron Emission Tomography (PET) [5, 7, 10] and, more recently, to reduce exposure to ionizing radiation in conventional X-ray CT scanners. Multiple approaches to statistical iterative reconstruction have been developed, based primarily on variations of expectation maximization (EM) algorithms. The primary benefit of EM algorithms is the guarantee of convergence that is maintained when iterative corrections are made within the limits of convergent algorithms. The primary disadvantage, however, is that strict adherence to the correction limits of convergent algorithms extends the number of iterations and the ultimate timeline to complete a 3D volumetric reconstruction. Researchers have studied methods to accelerate convergence through more aggressive corrections [1], ordered subsets [1, 3, 4, 9] and spatially variant image updates. In this paper we describe the development of an alternating minimization (AM) reconstruction algorithm with accelerated convergence for use in a real-time explosive detection application for aviation security. By judiciously applying multiple acceleration techniques and advanced GPU processing architectures, we are able to perform 3D reconstruction of scanned passenger baggage at a rate of 75 slices per second. Analysis of the results on stream-of-commerce passenger bags demonstrates accelerated convergence by factors of 8 to 15 when comparing images from accelerated and strictly convergent algorithms.

  8. An Efficient Augmented Lagrangian Method for Statistical X-Ray CT Image Reconstruction.

    PubMed

    Li, Jiaojiao; Niu, Shanzhou; Huang, Jing; Bian, Zhaoying; Feng, Qianjin; Yu, Gaohang; Liang, Zhengrong; Chen, Wufan; Ma, Jianhua

    2015-01-01

    Statistical iterative reconstruction (SIR) for X-ray computed tomography (CT) under the penalized weighted least-squares criterion can yield significant gains over conventional analytical reconstruction from noisy measurements. However, due to the nonlinear expression of the objective function, most existing algorithms for SIR unavoidably suffer from a heavy computation load and slow convergence rate, especially when an edge-preserving or sparsity-based penalty or regularization is incorporated. In this work, to address the abovementioned issues of general SIR algorithms, we propose an adaptive nonmonotone alternating direction algorithm in the framework of the augmented Lagrangian multiplier method, termed "ALM-ANAD". The algorithm effectively combines an alternating direction technique with an adaptive nonmonotone line search to minimize the augmented Lagrangian function at each iteration. To evaluate the present ALM-ANAD algorithm, both qualitative and quantitative studies were conducted using digital and physical phantoms. Experimental results show that the present ALM-ANAD algorithm can achieve noticeable gains over the classical nonlinear conjugate gradient algorithm and the state-of-the-art split Bregman algorithm in terms of noise reduction, contrast-to-noise ratio, convergence rate, and universal quality index metrics.
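
    To make the alternating-direction idea concrete, here is a minimal ADMM sketch for a penalized weighted least-squares problem with an edge-preserving (1-D total-variation) penalty. This is not the authors' ALM-ANAD (it has no adaptive nonmonotone line search and uses a fixed penalty parameter rho); the system matrix, weights, signal, and regularization strength are all assumed toy values.

```python
import numpy as np

# Generic ADMM sketch for a penalized weighted least-squares problem
#   min_x 0.5 * ||A x - b||_W^2 + lam * ||D x||_1
# with D a 1-D first-difference operator (an edge-preserving TV penalty).
# Split z = D x; alternate x-update, soft-threshold z-update, dual update.
rng = np.random.default_rng(0)
n = 40
x_true = np.zeros(n)
x_true[10:25] = 1.0                                  # piecewise-constant "image"
A = rng.normal(size=(60, n)) / np.sqrt(60)           # assumed system matrix
b = A @ x_true + 0.01 * rng.normal(size=60)          # noisy measurements
W = np.eye(60)                                       # statistical weights (identity here)
D = np.eye(n, k=1)[:n - 1] - np.eye(n)[:n - 1]       # first differences, shape (n-1, n)
lam, rho = 0.01, 1.0                                 # penalty strength, ADMM parameter

x, z, u = np.zeros(n), np.zeros(n - 1), np.zeros(n - 1)
lhs = A.T @ W @ A + rho * D.T @ D                    # fixed normal-equation matrix
rhs0 = A.T @ W @ b
for _ in range(200):
    x = np.linalg.solve(lhs, rhs0 + rho * D.T @ (z - u))
    v = D @ x + u
    z = np.sign(v) * np.maximum(np.abs(v) - lam / rho, 0.0)  # soft threshold
    u = u + D @ x - z                                # dual ascent
```

    In a real CT setting A would be the forward projector and W the inverse measurement covariance; the structure of the iteration is unchanged.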

  9. Graph reconstruction using covariance-based methods.

    PubMed

    Sulaimanov, Nurgazy; Koeppl, Heinz

    2016-12-01

    Methods based on correlation and partial correlation are widely employed today to reconstruct a statistical interaction graph from high-throughput omics data. These dedicated methods work well even when the number of variables exceeds the number of samples. In this study, we investigate how the graphs extracted from covariance and concentration matrix estimates are related, using Neumann series and transitive closure and by discussing small concrete examples. Considering the ideal case where the true graph is available, we also compare correlation and partial correlation methods for large realistic graphs. In particular, we perform the comparisons with optimally selected parameters based on the true underlying graph and with data-driven approaches where the parameters are estimated directly from the data.
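
    A toy illustration of why covariance-based (marginal correlation) graphs and concentration-based (partial correlation) graphs differ: for a Gaussian chain graph, thresholding the correlation matrix adds "transitive" edges, while thresholding the normalized concentration matrix recovers only the chain. The graph, sample size, and threshold below are assumptions, not from the paper.

```python
import numpy as np

# Chain graph 0-1-2-3 encoded in a precision (concentration) matrix K:
# nonzero off-diagonals are exactly the graph edges.
K = np.array([[1.0, 0.4, 0.0, 0.0],
              [0.4, 1.0, 0.4, 0.0],
              [0.0, 0.4, 1.0, 0.4],
              [0.0, 0.0, 0.4, 1.0]])
Sigma = np.linalg.inv(K)                    # implied covariance matrix
rng = np.random.default_rng(1)
X = rng.multivariate_normal(np.zeros(4), Sigma, size=5000)

S = np.cov(X, rowvar=False)                 # sample covariance
P = np.linalg.inv(S)                        # sample concentration matrix

def edges(M, tau=0.15):
    """Edges where the normalized off-diagonal magnitude exceeds tau."""
    C = np.abs(M) / np.sqrt(np.outer(np.diag(M), np.diag(M)))
    return {(i, j) for i in range(4) for j in range(i + 1, 4) if C[i, j] > tau}

corr_edges = edges(S)    # marginal correlation: picks up extra transitive edges
pcorr_edges = edges(P)   # partial correlation: recovers only the chain
```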

  10. High-resolution image reconstruction technique applied to the optical testing of ground-based astronomical telescopes

    NASA Astrophysics Data System (ADS)

    Jin, Zhenyu; Lin, Jing; Liu, Zhong

    2008-07-01

    By studying the classical techniques (such as the Shack-Hartmann wave-front sensor) used to test the aberrations of ground-based astronomical optical telescopes, we propose two testing methods founded on high-resolution image reconstruction technology. One is based on the averaged short-exposure OTF and the other on the speckle interferometric OTF of Antoine Labeyrie. Research by J. Ohtsubo, F. Roddier, Richard Barakat and J.-Y. Zhang indicated that the speckle interferometric transfer function (SITF) statistics are affected by the telescope optical aberrations; that is, the SITF statistics are a function of the optical system aberration and the atmospheric Fried parameter (seeing). Diffraction-limited information about the telescope can be obtained through two statistical treatments of abundant speckle images: with the first method, we can extract low-frequency information such as the full width at half maximum (FWHM) of the telescope PSF to estimate the optical quality; with the second, we can obtain a more precise description of the telescope PSF including high-frequency information. We will apply the two testing methods to the 2.4 m optical telescope of the GMG Observatory in China to validate their repeatability and correctness, and compare the results with those obtained by the Shack-Hartmann wave-front sensor. This part is described in detail in our paper.

  11. Method and system for efficient video compression with low-complexity encoder

    NASA Technical Reports Server (NTRS)

    Chen, Jun (Inventor); He, Dake (Inventor); Sheinin, Vadim (Inventor); Jagmohan, Ashish (Inventor); Lu, Ligang (Inventor)

    2012-01-01

    Disclosed are a method and system for video compression, wherein the video encoder has low computational complexity and high compression efficiency. The disclosed system comprises a video encoder and a video decoder, wherein the method for encoding includes the steps of converting a source frame into a space-frequency representation; estimating conditional statistics of at least one vector of space-frequency coefficients; estimating encoding rates based on the said conditional statistics; and applying Slepian-Wolf codes with the said computed encoding rates. The preferred method for decoding includes the steps of: generating a side-information vector of frequency coefficients based on previously decoded source data, encoder statistics, and previous reconstructions of the source frequency vector; and performing Slepian-Wolf decoding of at least one source frequency vector based on the generated side-information, the Slepian-Wolf code bits and the encoder statistics.

  12. Efficacy of micronized acellular dermal graft for use in interproximal papillae regeneration.

    PubMed

    Geurs, Nico C; Romanos, Alain H; Vassilopoulos, Philip J; Reddy, Michael S

    2012-02-01

    The aim of this study was to evaluate interdental papillary reconstruction based on a micronized acellular dermal matrix allograft technique. Thirty-eight papillae in 12 patients with esthetic complaints of insufficient papillae were evaluated. Decreased gingival recession values were found postoperatively (P < .001). Chi-square analysis showed significantly higher postoperative Papilla Index values (chi-square = 43, P < .001), further supported by positive symmetry statistical analysis values (positive kappa and weighted kappa values). This procedure shows promise as a method for papillary reconstruction.

  13. A new statistical model for subgrid dispersion in large eddy simulations of particle-laden flows

    NASA Astrophysics Data System (ADS)

    Muela, Jordi; Lehmkuhl, Oriol; Pérez-Segarra, Carles David; Oliva, Asensi

    2016-09-01

    Dispersed multiphase turbulent flows are present in many industrial and commercial applications such as internal combustion engines, turbofans, dispersion of contaminants, steam turbines, etc. There is therefore a clear interest in the development of models and numerical tools capable of performing detailed and reliable simulations of these kinds of flows. Large Eddy Simulations offer good accuracy and reliable results together with reasonable computational requirements, making them an attractive basis for numerical tools for particle-laden turbulent flows. Nonetheless, in dispersed multiphase flows additional difficulties arise in LES, since the effect of the unresolved scales of the continuous phase on the dispersed phase is lost due to the filtering procedure. To solve this issue, a model able to reconstruct the subgrid velocity seen by the particles is required. In this work a new model for the reconstruction of the subgrid-scale effects on the dispersed phase is presented and assessed. This methodology is based on the reconstruction of statistics via probability density functions (PDFs).

  14. A statistical-based approach for acoustic tomography of the atmosphere.

    PubMed

    Kolouri, Soheil; Azimi-Sadjadi, Mahmood R; Ziemann, Astrid

    2014-01-01

    Acoustic travel-time tomography of the atmosphere is a nonlinear inverse problem which attempts to reconstruct temperature and wind velocity fields in the atmospheric surface layer using the dependence of sound speed on temperature and wind velocity along the propagation path. This paper presents a statistical acoustic travel-time tomography algorithm based on a dual state-parameter unscented Kalman filter (UKF), which is capable of reconstructing and tracking over time the temperature and wind velocity fields (state variables) as well as the dynamic model parameters within a specified investigation area. An adaptive 3-D spatial-temporal autoregressive model is used to capture the state evolution in the UKF. The observations used in the dual state-parameter UKF process consist of the acoustic times of arrival measured for every pair of transmitter/receiver nodes deployed in the investigation area. The proposed method is then applied to the data set collected at the Meteorological Observatory Lindenberg, Germany, as part of the STINHO experiment, and the reconstruction results are presented.

  15. Task-based statistical image reconstruction for high-quality cone-beam CT

    NASA Astrophysics Data System (ADS)

    Dang, Hao; Webster Stayman, J.; Xu, Jennifer; Zbijewski, Wojciech; Sisniega, Alejandro; Mow, Michael; Wang, Xiaohui; Foos, David H.; Aygun, Nafi; Koliatsos, Vassilis E.; Siewerdsen, Jeffrey H.

    2017-11-01

    Task-based analysis of medical imaging performance underlies many ongoing efforts in the development of new imaging systems. In statistical image reconstruction, regularization is often formulated in terms that encourage smoothness and/or sharpness (e.g. a linear, quadratic, or Huber penalty) but without explicit formulation of the task. We propose an alternative regularization approach in which a spatially varying penalty is determined that maximizes task-based imaging performance at every location in a 3D image. We apply the method to model-based image reconstruction (MBIR, viz. penalized weighted least-squares, PWLS) in cone-beam CT (CBCT) of the head, focusing on the task of detecting a small, low-contrast intracranial hemorrhage (ICH), and we test the performance of the algorithm in the context of a recently developed CBCT prototype for point-of-care imaging of brain injury. Theoretical predictions of local spatial resolution and noise are computed via an optimization in which the regularization (specifically, the quadratic penalty strength) is allowed to vary throughout the image to maximize the local task-based detectability index (d′). Simulation studies and test-bench experiments were performed using an anthropomorphic head phantom. Three PWLS implementations were tested: a conventional (constant) penalty; a certainty-based penalty derived to enforce a constant point-spread function (PSF); and the task-based penalty derived to maximize local detectability at each location. Conventional (constant) regularization exhibited a fairly strong degree of spatial variation in d′, and the certainty-based method achieved a uniform PSF, but each exhibited a reduction in detectability compared to the task-based method, which improved detectability by up to ~15%. The improvement was strongest in areas of high attenuation (skull base), where the conventional and certainty-based methods tended to over-smooth the data. The task-driven reconstruction method presents a promising regularization approach in MBIR by explicitly incorporating task-based imaging performance as the objective. The results demonstrate improved ICH conspicuity and support the development of high-quality CBCT systems.

  16. Dictionary-learning-based reconstruction method for electron tomography.

    PubMed

    Liu, Baodong; Yu, Hengyong; Verbridge, Scott S; Sun, Lizhi; Wang, Ge

    2014-01-01

    Electron tomography usually suffers from so-called “missing wedge” artifacts caused by limited tilt angle range. An equally sloped tomography (EST) acquisition scheme (which should be called the linogram sampling scheme) was recently applied to achieve 2.4-angstrom resolution. On the other hand, a compressive sensing inspired reconstruction algorithm, known as adaptive dictionary based statistical iterative reconstruction (ADSIR), has been reported for X-ray computed tomography. In this paper, we evaluate the EST, ADSIR, and an ordered-subset simultaneous algebraic reconstruction technique (OS-SART), and compare the ES and equally angled (EA) data acquisition modes. Our results show that OS-SART is comparable to EST, and the ADSIR outperforms EST and OS-SART. Furthermore, the equally sloped projection data acquisition mode has no advantage over the conventional equally angled mode in this context.

  17. Quasisecular cyclicity in the climate of the Earth's Northern Hemisphere and its possible relation to solar activity variations

    NASA Astrophysics Data System (ADS)

    Ogurtsov, M. G.; Jungner, H.; Lindholm, M.; Helama, S.; Dergachev, V. A.

    2009-12-01

    Paleoclimatological reconstructions of the temperature of the Earth's Northern Hemisphere for the last thousand years have been studied using up-to-date methods of statistical analysis. It has been indicated that the quasisecular (period of 60-130 years) cyclicity observed in the climate of the Earth's Northern Hemisphere has a bimodal structure, i.e., it is composed of 60-85 and 85-130 year periodicities. The possible relation of the quasisecular climatic rhythm to the corresponding Gleissberg solar cycle has been studied using solar activity reconstructions performed with the help of solar paleoastrophysics methods.

  18. Reconstructing White Walls: Multi-View Multi-Shot 3d Reconstruction of Textureless Surfaces

    NASA Astrophysics Data System (ADS)

    Ley, Andreas; Hänsch, Ronny; Hellwich, Olaf

    2016-06-01

    The reconstruction of the 3D geometry of a scene based on image sequences has been a very active field of research for decades. Nevertheless, challenges remain, in particular for homogeneous parts of objects. This paper proposes a solution to enhance the 3D reconstruction of weakly-textured surfaces using standard cameras as well as a standard multi-view stereo pipeline. The underlying idea of the proposed method is to improve the signal-to-noise ratio in weakly-textured regions while adaptively amplifying the local contrast to make better use of the limited numerical range of 8-bit images. Based on this premise, multiple shots per viewpoint are used to suppress statistically uncorrelated noise and enhance low-contrast texture. By only changing the image acquisition and adding a preprocessing step, a tremendous increase of up to 300% in the completeness of the 3D reconstruction is achieved.
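
    The core statistical claim, that averaging n shots with uncorrelated noise reduces the noise standard deviation by about sqrt(n) before contrast amplification, can be checked with a toy signal (all values below are assumed, not the paper's data):

```python
import numpy as np

# Averaging 16 shots of the same (weak) texture: uncorrelated noise drops
# by ~sqrt(16) = 4, after which the contrast can be stretched to 8-bit range.
rng = np.random.default_rng(3)
signal = 2.0 * np.sin(np.linspace(0.0, 3.0 * np.pi, 500))   # weak texture
shots = signal + rng.normal(0.0, 10.0, size=(16, 500))      # 16 noisy shots
avg = shots.mean(axis=0)

single_err = np.std(shots[0] - signal)      # roughly 10
avg_err = np.std(avg - signal)              # roughly 10 / 4

# Contrast amplification (here: a simple global stretch to [0, 255]).
stretched = (avg - avg.min()) / (avg.max() - avg.min()) * 255.0
```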

  19. Historical Phenological Observations: Past Climate Impact Analyses and Climate Reconstructions

    NASA Astrophysics Data System (ADS)

    Rutishauser, T.; Luterbacher, J.; Meier, N.; Jeanneret, F.; Pfister, C.; Wanner, H.

    2007-12-01

    Plant phenological observations have been found to be an important indicator of climate change impacts on seasonal and interannual vegetation development for the late 20th/early 21st century. Our contribution contains three parts that are essential for the understanding (part 1), the analysis (part 2) and the application (part 3) of historical phenological observations in global change research. First, we propose a definition for historical phenology (Rutishauser, 2007). We briefly portray the first appearance of phenological observations in medieval philosophical and literary sources, the usage and application of this method in the Age of Enlightenment (Carl von Linné, Charles Morren), as well as the development in the 20th century (Schnelle, Lieth) to present-day networks (COST725, USA-NPN). Second, we introduce a methodological approach to estimate 'statistical plants' from historical phenological observations (Rutishauser et al., JGR-Biogeoscience, in press). We combine spatial averaging methods and regression transfer modeling to estimate 'statistical plant' dates from historical observations that often contain gaps, changing observers and changing locations. We apply the concept to reconstruct a statistical 'Spring plant' as the weighted mean of the flowering dates of cherry and apple tree and of beech budburst in Switzerland, 1702-2005. Including dating uncertainty, we estimate a total uncertainty of 10 days at interannual and 3.4 days at decadal time scales. Third, we apply two long-term phenological records to describe the plant phenological response to spring temperature and to reconstruct warm-season temperatures from grape harvest dates (Rutishauser et al., submitted; Meier et al., GRL, in press).
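
    The weighted-mean 'statistical plant' idea can be sketched as follows; the species weights and day-of-year values are invented for illustration, and gaps are handled by renormalizing the weights over the observations present in each year:

```python
import numpy as np

# Hypothetical day-of-year series for three phenophases, with gaps (NaN).
cherry = np.array([110.0, 112.0, np.nan, 108.0])   # cherry flowering
apple = np.array([120.0, np.nan, 118.0, 116.0])    # apple flowering
beech = np.array([125.0, 127.0, 124.0, np.nan])    # beech budburst
series = np.vstack([cherry, apple, beech])

w = np.array([0.4, 0.3, 0.3])[:, None]             # assumed weights
mask = ~np.isnan(series)
# Weighted mean per year, renormalizing weights over available observations.
spring = np.nansum(series * w, axis=0) / (w * mask).sum(axis=0)
```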

  20. Synchrotron radiation μCT and histology evaluation of bone-to-implant contact.

    PubMed

    Neldam, Camilla Albeck; Sporring, Jon; Rack, Alexander; Lauridsen, Torsten; Hauge, Ellen-Margrethe; Jørgensen, Henrik L; Jørgensen, Niklas Rye; Feidenhansl, Robert; Pinholt, Else Marie

    2017-09-01

    The purpose of this study was to evaluate bone-to-implant contact (BIC) in two-dimensional (2D) histology compared to high-resolution three-dimensional (3D) synchrotron radiation micro computed tomography (SR micro-CT). High spatial resolution, excellent signal-to-noise ratio, and contrast establish SR micro-CT as the leading imaging modality for hard X-ray microtomography. Using SR micro-CT at a voxel size of 5 μm in an experimental goat mandible model, no statistically significant difference was found between the different treatment modalities nor between recipient and reconstructed bone. The histological evaluation showed a statistically significant difference between BIC in reconstructed and recipient bone (p < 0.0001). Further, no statistically significant difference was found between the different treatment modalities, which we attribute to large variation and consequently low statistical power. Comparing histology and SR micro-CT evaluation, a bias of 5.2% was found in the reconstructed area, and 15.3% in recipient bone. We conclude that SR micro-CT cannot be proven more precise than histology for evaluation of BIC; however, with this SR micro-CT method, one histologic bone section is comparable to the 3D evaluation. Further, the two methods complement each other with knowledge of BIC in 2D and 3D.

  1. Radio measurements of the energy and the depth of the shower maximum of cosmic-ray air showers by Tunka-Rex

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bezyazeekov, P.A.; Budnev, N.M.; Gress, O.A.

    2016-01-01

    We reconstructed the energy and the position of the shower maximum of air showers with energies E ≳ 100 PeV applying a method using radio measurements performed with Tunka-Rex. An event-to-event comparison to air-Cherenkov measurements of the same air showers with the Tunka-133 photomultiplier array confirms that the radio reconstruction works reliably. The Tunka-Rex reconstruction methods and absolute scales have been tuned on CoREAS simulations and yield energy and X_max values consistent with the Tunka-133 measurements. The results of two independent measurement seasons agree within statistical uncertainties, which gives additional confidence in the radio reconstruction. The energy precision of Tunka-Rex is comparable to the Tunka-133 precision of 15%, and exhibits a 20% uncertainty on the absolute scale dominated by the amplitude calibration of the antennas. For X_max, this is the first direct experimental correlation of radio measurements with a different, established method. At the moment, the X_max resolution of Tunka-Rex is approximately 40 g/cm². This resolution can probably be improved by deploying additional antennas and by further development of the reconstruction methods, since the present analysis does not yet reveal any principal limitations.

  2. Low Dose PET Image Reconstruction with Total Variation Using Alternating Direction Method.

    PubMed

    Yu, Xingjian; Wang, Chenye; Hu, Hongjie; Liu, Huafeng

    2016-01-01

    In this paper, a total variation (TV) minimization strategy is proposed to overcome the problem of sparse spatial resolution and large amounts of noise in low dose positron emission tomography (PET) imaging reconstruction. Two types of objective function were established based on two statistical models of measured PET data, least-square (LS) TV for the Gaussian distribution and Poisson-TV for the Poisson distribution. To efficiently obtain high quality reconstructed images, the alternating direction method (ADM) is used to solve these objective functions. As compared with the iterative shrinkage/thresholding (IST) based algorithms, the proposed ADM can make full use of the TV constraint and its convergence rate is faster. The performance of the proposed approach is validated through comparisons with the expectation-maximization (EM) method using synthetic and experimental biological data. In the comparisons, the results of both LS-TV and Poisson-TV are taken into consideration to find which models are more suitable for PET imaging, in particular low-dose PET. To evaluate the results quantitatively, we computed bias, variance, and the contrast recovery coefficient (CRC) and drew profiles of the reconstructed images produced by the different methods. The results show that both Poisson-TV and LS-TV can provide a high visual quality at a low dose level. The bias and variance of the proposed LS-TV and Poisson-TV methods are 20% to 74% less at all counting levels than those of the EM method. Poisson-TV gives the best performance in terms of high-accuracy reconstruction with the lowest bias and variance as compared to the ground truth (14.3% less bias and 21.9% less variance). In contrast, LS-TV gives the best performance in terms of the high contrast of the reconstruction with the highest CRC.
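
    The expectation-maximization baseline that both TV variants are compared against is, for Poisson data, the classical MLEM multiplicative update. A minimal sketch on a synthetic system follows; the system matrix, activity values, and iteration count are assumptions for illustration, not the paper's experimental setup.

```python
import numpy as np

# Classical MLEM for y ~ Poisson(A x): the multiplicative, positivity-preserving
# update x <- x * A^T(y / A x) / A^T 1 monotonically decreases the negative
# Poisson log-likelihood.
rng = np.random.default_rng(2)
n = 20
x_true = np.ones(n)
x_true[5:12] = 5.0                          # synthetic activity
A = rng.uniform(0.0, 1.0, size=(40, n))     # assumed system matrix
y = rng.poisson(A @ x_true).astype(float)   # measured counts

def nll(x):
    ax = A @ x
    return float(np.sum(ax - y * np.log(ax + 1e-12)))

x = np.ones(n)                              # positive initialization
sens = A.sum(axis=0)                        # sensitivity image A^T 1
start = nll(x)
for _ in range(200):
    x = x * (A.T @ (y / np.maximum(A @ x, 1e-12))) / sens
end = nll(x)                                # decreased by EM monotonicity
```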

  4. The adaptive statistical iterative reconstruction-V technique for radiation dose reduction in abdominal CT: comparison with the adaptive statistical iterative reconstruction technique.

    PubMed

    Kwon, Heejin; Cho, Jinhan; Oh, Jongyeong; Kim, Dongwon; Cho, Junghyun; Kim, Sanghyun; Lee, Sangyun; Lee, Jihyun

    2015-10-01

    To investigate whether reduced radiation dose abdominal CT images reconstructed with adaptive statistical iterative reconstruction V (ASIR-V) compromise the depiction of clinically relevant features when compared with the currently used routine radiation dose CT images reconstructed with ASIR. 27 consecutive patients (mean body mass index, 23.55 kg/m²) underwent CT of the abdomen at two time points. At the first time point, abdominal CT was scanned at a noise index level of 21.45 with automatic current modulation at 120 kV. Images were reconstructed with 40% ASIR, the routine protocol of Dong-A University Hospital. At the second time point, follow-up scans were performed at a noise index level of 30. Images were reconstructed with filtered back projection (FBP), 40% ASIR, 30% ASIR-V, 50% ASIR-V and 70% ASIR-V for the reduced radiation dose. Both quantitative and qualitative analyses of image quality were conducted. The CT dose index was also recorded. At the follow-up study, the mean dose reduction relative to the currently used common radiation dose was 35.37% (range: 19-49%). The overall subjective image quality and diagnostic acceptability scores of 50% ASIR-V at the reduced radiation dose were nearly identical to those recorded when using the initial routine-dose CT with 40% ASIR. Subjective ratings of the qualitative analysis revealed that, of all reduced radiation dose CT series reconstructed, 30% ASIR-V and 50% ASIR-V were associated with higher image quality, lower noise and artefacts, and good sharpness when compared with 40% ASIR and FBP. However, the sharpness score at 70% ASIR-V was considered to be worse than that at 40% ASIR. Objective image noise for 50% ASIR-V was 34.24% and 46.34% lower than that for 40% ASIR and FBP, respectively. Abdominal CT images reconstructed with ASIR-V facilitate radiation dose reductions of up to 35% when compared with ASIR. This study represents the first clinical research experiment to use ASIR-V, the newest version of iterative reconstruction. Use of the ASIR-V algorithm decreased image noise and increased image quality when compared with the ASIR and FBP methods. These results suggest that high-quality low-dose CT may represent a new clinical option.

  5. MO-DE-207A-09: Low-Dose CT Image Reconstruction Via Learning From Different Patient Normal-Dose Images

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Han, H; Xing, L; Liang, Z

    Purpose: To investigate a novel low-dose CT (LdCT) image reconstruction strategy for lung CT imaging in radiation therapy. Methods: The proposed approach consists of four steps: (1) use the traditional filtered back-projection (FBP) method to reconstruct the LdCT image; (2) calculate the structure similarity (SSIM) index between the FBP-reconstructed LdCT image and a set of normal-dose CT (NdCT) images, and select the NdCT image with the highest SSIM as the learning source; (3) segment the NdCT source image into lung and outside tissue regions via simple thresholding, and adopt multiple linear regression to learn a high-order Markov random field (MRF) pattern for each tissue region in the NdCT source image; (4) segment the FBP-reconstructed LdCT image into lung and outside regions as well, and apply the learnt MRF prior in each tissue region for statistical iterative reconstruction of the LdCT image following the penalized weighted least squares (PWLS) framework. Quantitative evaluation of the reconstructed images was based on the signal-to-noise ratio (SNR), local binary pattern (LBP) and histogram of oriented gradients (HOG) metrics. Results: It was observed that lung and outside tissue regions have different MRF patterns predicted from the NdCT. Visual inspection showed that our method clearly outperformed the traditional FBP method. Compared with the region-smoothing PWLS method, our method has, on average, a 13% increase in SNR, a 15% decrease in LBP difference, and a 12% decrease in HOG difference from the reference standard for all regions of interest, which indicates the superior performance of the proposed method in terms of image resolution and texture preservation. Conclusion: We proposed a novel LdCT image reconstruction method by learning similar image characteristics from a set of NdCT images; the to-be-learnt NdCT image does not need to be a scan of the same subject. This approach is particularly important for enhancing image quality in radiation therapy.
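
    Step (2) of the pipeline, selecting the learning source by structural similarity, can be sketched with a simplified global SSIM. The usual windowed SSIM is replaced here by one global statistic, and the images are random stand-ins; constants follow the common choice c1 = (0.01 L)² and c2 = (0.03 L)² with dynamic range L = 1.

```python
import numpy as np

def ssim_global(a, b, c1=1e-4, c2=9e-4):
    """Simplified single-window SSIM for images scaled to [0, 1]."""
    ma, mb = a.mean(), b.mean()
    cov = ((a - ma) * (b - mb)).mean()
    return ((2 * ma * mb + c1) * (2 * cov + c2)) / (
        (ma ** 2 + mb ** 2 + c1) * (a.var() + b.var() + c2))

rng = np.random.default_rng(5)
ld_fbp = rng.random((32, 32))                          # stand-in FBP low-dose image
library = [rng.random((32, 32)),                       # unrelated NdCT image
           ld_fbp + 0.05 * rng.normal(size=(32, 32)),  # structurally similar image
           rng.random((32, 32))]                       # unrelated NdCT image

scores = [ssim_global(ld_fbp, nd) for nd in library]
best = int(np.argmax(scores))                          # index of the learning source
```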

  6. Limited data tomographic image reconstruction via dual formulation of total variation minimization

    NASA Astrophysics Data System (ADS)

    Jang, Kwang Eun; Sung, Younghun; Lee, Kangeui; Lee, Jongha; Cho, Seungryong

    2011-03-01

    X-ray mammography is the primary imaging modality for breast cancer screening. For the dense breast, however, the mammogram is usually difficult to read due to the tissue overlap caused by the superposition of normal tissues. Digital breast tomosynthesis (DBT), which measures several low-dose projections over a limited angle range, may be an alternative modality for breast imaging, since it allows visualization of cross-sectional information of the breast. DBT, however, may suffer from aliasing artifacts and severe noise corruption. To overcome these problems, a total variation (TV) regularized statistical reconstruction algorithm is presented. Inspired by the dual formulation of TV minimization in denoising and deblurring problems, we derived a gradient-type algorithm based on a statistical model of X-ray tomography. The objective function is comprised of a data fidelity term derived from the statistical model and a TV regularization term. The gradient of the objective function can be easily calculated using simple operations in terms of auxiliary variables. After a descending step, the data fidelity term is renewed in each iteration. Since the proposed algorithm can be implemented without sophisticated operations such as matrix inversion, it provides an efficient way to include TV regularization in the statistical reconstruction method, which results in fast and robust estimation for low-dose projections over a limited angle range. Initial tests with an experimental DBT system confirmed our findings.

  7. Reporting clinical outcomes of breast reconstruction: a systematic review.

    PubMed

    Potter, S; Brigic, A; Whiting, P F; Cawthorn, S J; Avery, K N L; Donovan, J L; Blazeby, J M

    2011-01-05

    Breast reconstruction after mastectomy for cancer requires accurate evaluation to inform evidence-based participatory decision making, but the standards of outcome reporting after breast reconstruction have not previously been considered. We used extensive searches to identify articles reporting surgical outcomes of breast reconstruction. We extracted data using published criteria for complication reporting modified to reflect reconstructive practice. Study designs included randomized controlled trials, cohort studies, and case series. The Cochrane Risk of Bias tool was used to critically appraise all study designs. Other criteria used to assess the studies were selection and funding bias, statistical power calculations, and institutional review board approval. Wilcoxon signed rank tests were used to compare the breadth and frequency of study outcomes, and χ² tests were used to compare the number of studies in each group reporting each of the published criteria. All statistical tests were two-sided. Surgical complications following breast reconstruction in 42,146 women were evaluated in 134 studies. These included 11 (8.2%) randomized trials, 74 (55.2%) cohort studies, and 49 (36.6%) case series. Fifty-three percent of studies demonstrated a disparity between methods and results in the numbers of complications reported. Complications were defined by 87 (64.9%) studies and graded by 78 (58.2%). Details such as the duration of follow-up and risk factors for adverse outcomes were omitted from 47 (35.1%) and 58 (43.3%) studies, respectively. Overall, the studies defined fewer than 20% of the complications they reported, and the definitions were largely inconsistent. The results of this systematic review suggest that outcome reporting in breast reconstruction is inconsistent and lacks methodological rigor. The development of a standardized core outcome set is recommended to improve outcome reporting in breast reconstruction.

  8. Reconstruction of daily solar UV irradiation from 1893 to 2002 in Potsdam, Germany

    NASA Astrophysics Data System (ADS)

    Junk, Jürgen; Feister, Uwe; Helbig, Alfred

    2007-08-01

    Long-term records of solar UV radiation reaching the Earth’s surface are scarce. Radiative transfer calculations and statistical models are two options used to reconstruct decadal changes in solar UV radiation from long-term records of measured atmospheric parameters that contain information on the effects of clouds, atmospheric aerosols and ground albedo on UV radiation. Based on earlier studies, where the long-term variation of daily solar UV irradiation was derived from measured global and diffuse irradiation as well as atmospheric ozone by a non-linear regression method [Feister et al. (2002) Photochem Photobiol 76:281–293], we present another approach for the reconstruction of time series of solar UV radiation. An artificial neural network (ANN) was trained with measurements of solar UV irradiation taken at the Meteorological Observatory in Potsdam, Germany, as well as measured parameters with long-term records such as global and diffuse radiation, sunshine duration, horizontal visibility and column ozone. This study is focused on the reconstruction of daily broad-band UV-B (280–315 nm), UV-A (315–400 nm) and erythemal UV irradiation (ER). Due to the rapid changes in cloudiness at mid-latitude sites, solar UV irradiance exhibits appreciable short-term variability. One of the main advantages of the statistical method is that it uses doses of highly variable input parameters calculated from individual spot measurements taken at short time intervals, which thus represent the short-term variability of solar irradiance.
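    The statistical approach described above can be illustrated with a toy feed-forward network. The sketch below is a minimal stand-in, not the authors' model: it trains a one-hidden-layer network (plain NumPy, with synthetic standardized proxies in place of the Potsdam record, and a hypothetical nonlinear target relation) to map four input parameters to a UV-dose-like quantity.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for the proxy record: daily global radiation, diffuse
# radiation, sunshine duration and column ozone (all standardized).
n = 2000
X = rng.normal(size=(n, 4))
# Hypothetical nonlinear relation between proxies and a UV-dose-like target.
y = np.tanh(X[:, 0] + 0.5 * X[:, 1]) - 0.3 * X[:, 3] + 0.05 * rng.normal(size=n)

h = 8                                   # hidden units
W1 = rng.normal(scale=0.5, size=(4, h))
b1 = np.zeros(h)
w2 = rng.normal(scale=0.5, size=h)
b2 = 0.0
lr = 0.05

for _ in range(500):                    # full-batch gradient descent
    Z = np.tanh(X @ W1 + b1)            # hidden activations
    err = Z @ w2 + b2 - y               # residuals of the network output
    # Backpropagation of the squared-error loss.
    dZ = np.outer(err, w2) * (1 - Z**2)
    w2 -= lr * (Z.T @ err) / n
    b2 -= lr * err.mean()
    W1 -= lr * (X.T @ dZ) / n
    b1 -= lr * dZ.mean(axis=0)

mse = float(np.mean((np.tanh(X @ W1 + b1) @ w2 + b2 - y) ** 2))
```

    After training, the network's mean squared error should fall below the variance of the target, i.e. it beats the climatological mean as a predictor.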

  9. Reduced Radiation Dose with Model-based Iterative Reconstruction versus Standard Dose with Adaptive Statistical Iterative Reconstruction in Abdominal CT for Diagnosis of Acute Renal Colic.

    PubMed

    Fontarensky, Mikael; Alfidja, Agaïcha; Perignon, Renan; Schoenig, Arnaud; Perrier, Christophe; Mulliez, Aurélien; Guy, Laurent; Boyer, Louis

    2015-07-01

    To evaluate the accuracy of reduced-dose abdominal computed tomographic (CT) imaging by using a new-generation model-based iterative reconstruction (MBIR) to diagnose acute renal colic compared with standard-dose abdominal CT with 50% adaptive statistical iterative reconstruction (ASIR). This institutional review board-approved prospective study included 118 patients with symptoms of acute renal colic who underwent the following two successive CT examinations: standard-dose ASIR 50% and reduced-dose MBIR. Two radiologists independently reviewed both CT examinations for presence or absence of renal calculi, differential diagnoses, and associated abnormalities. The imaging findings, radiation dose estimates, and image quality of the two CT reconstruction methods were compared. Concordance was evaluated by κ coefficient, and descriptive statistics and t test were used for statistical analysis. Interobserver agreement was 100% for the diagnosis of renal calculi (κ = 1). Renal calculus (τ = 98.7%; κ = 0.97) and obstructive upper urinary tract disease (τ = 98.16%; κ = 0.95) were detected, and differential or alternative diagnosis was performed (τ = 98.87%; κ = 0.95). MBIR allowed a dose reduction of 84% versus standard-dose ASIR 50% (mean volume CT dose index, 1.7 mGy ± 0.8 [standard deviation] vs 10.9 mGy ± 4.6; mean size-specific dose estimate, 2.2 mGy ± 0.7 vs 13.7 mGy ± 3.9; P < .001) without a conspicuous deterioration in image quality (reduced-dose MBIR vs ASIR 50% mean scores, 3.83 ± 0.49 vs 3.92 ± 0.27, respectively; P = .32) or increase in noise (reduced-dose MBIR vs ASIR 50% mean, respectively, 18.36 HU ± 2.53 vs 17.40 HU ± 3.42). Its main drawback remains the long time required for reconstruction (mean, 40 minutes). A reduced-dose protocol with MBIR allowed a dose reduction of 84% without increasing noise and without a conspicuous deterioration in image quality in patients suspected of having renal colic.

  10. Joint amalgamation of most parsimonious reconciled gene trees

    PubMed Central

    Scornavacca, Celine; Jacox, Edwin; Szöllősi, Gergely J.

    2015-01-01

    Motivation: Traditionally, gene phylogenies have been reconstructed solely on the basis of molecular sequences; this, however, often does not provide enough information to distinguish between statistically equivalent relationships. To address this problem, several recent methods have incorporated information on the species phylogeny in gene tree reconstruction, leading to dramatic improvements in accuracy. Probabilistic methods are able to estimate all model parameters but are computationally expensive; parsimony methods, though generally more efficient computationally, require a prior estimate of parameters and of the statistical support. Results: Here, we present the Tree Estimation using Reconciliation (TERA) algorithm, a parsimony-based, species-tree-aware method for gene tree reconstruction based on a scoring scheme combining duplication, transfer and loss costs with an estimate of the sequence likelihood. TERA explores all reconciled gene trees that can be amalgamated from a sample of gene trees. Using a large-scale simulated dataset, we demonstrate that TERA achieves the same accuracy as the corresponding probabilistic method while being faster, and outperforms other parsimony-based methods in both accuracy and speed. Running TERA on a set of 1099 homologous gene families from complete cyanobacterial genomes, we find that incorporating knowledge of the species tree results in a two-thirds reduction in the number of apparent transfer events. Availability and implementation: The algorithm is implemented in our program TERA, which is freely available from http://mbb.univ-montp2.fr/MBB/download_sources/16__TERA. Contact: celine.scornavacca@univ-montp2.fr, ssolo@angel.elte.hu Supplementary information: Supplementary data are available at Bioinformatics online. PMID:25380957

  11. Bayesian approach to inverse statistical mechanics.

    PubMed

    Habeck, Michael

    2014-05-01

    Inverse statistical mechanics aims to determine particle interactions from ensemble properties. This article looks at this inverse problem from a Bayesian perspective and discusses several statistical estimators to solve it. In addition, a sequential Monte Carlo algorithm is proposed that draws the interaction parameters from their posterior probability distribution. The posterior probability involves an intractable partition function that is estimated along with the interactions. The method is illustrated for inverse problems of varying complexity, including the estimation of a temperature, the inverse Ising problem, maximum entropy fitting, and the reconstruction of molecular interaction potentials.
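    A minimal illustration of the Bayesian viewpoint, under strong simplifying assumptions: for a five-spin periodic Ising chain the partition function can be enumerated exactly, so the posterior over a single inverse temperature can be evaluated on a grid rather than by the sequential Monte Carlo scheme the abstract describes. The true temperature and sample size below are illustrative.

```python
import numpy as np
from itertools import product

rng = np.random.default_rng(1)

N = 5  # spins in a periodic 1-D Ising chain: small enough to enumerate exactly
states = np.array(list(product([-1, 1], repeat=N)))
# Energy of each configuration: E = -sum_i s_i * s_{i+1} (periodic boundary).
E = -np.sum(states * np.roll(states, -1, axis=1), axis=1)

# Draw "observed" configurations at a true inverse temperature.
beta_true = 0.7
p = np.exp(-beta_true * E)
p /= p.sum()
idx = rng.choice(len(states), size=400, p=p)
E_obs = E[idx]

# Posterior over a grid of beta values with a flat prior:
# log p(beta | data) ∝ -beta * sum(E_obs) - n_obs * log Z(beta)
grid = np.linspace(0.0, 2.0, 201)
logZ = np.array([np.log(np.sum(np.exp(-b * E))) for b in grid])
loglik = -grid * E_obs.sum() - len(E_obs) * logZ
post = np.exp(loglik - loglik.max())
post /= post.sum()
beta_map = grid[np.argmax(post)]       # posterior mode
```

    For larger systems Z is intractable, which is exactly why the article estimates the partition function along with the interactions.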

  12. Bayesian approach to inverse statistical mechanics

    NASA Astrophysics Data System (ADS)

    Habeck, Michael

    2014-05-01

    Inverse statistical mechanics aims to determine particle interactions from ensemble properties. This article looks at this inverse problem from a Bayesian perspective and discusses several statistical estimators to solve it. In addition, a sequential Monte Carlo algorithm is proposed that draws the interaction parameters from their posterior probability distribution. The posterior probability involves an intractable partition function that is estimated along with the interactions. The method is illustrated for inverse problems of varying complexity, including the estimation of a temperature, the inverse Ising problem, maximum entropy fitting, and the reconstruction of molecular interaction potentials.

  13. High-order noise analysis for low dose iterative image reconstruction methods: ASIR, IRIS, and MBAI

    NASA Astrophysics Data System (ADS)

    Do, Synho; Singh, Sarabjeet; Kalra, Mannudeep K.; Karl, W. Clem; Brady, Thomas J.; Pien, Homer

    2011-03-01

    Iterative reconstruction techniques (IRTs) have been shown to suppress noise significantly in low-dose CT imaging. However, medical doctors hesitate to accept this new technology because the visual impression of IRT images differs from that of full-dose filtered back-projection (FBP) images. The most common noise measurements, such as the mean and standard deviation of a homogeneous region in the image, do not provide sufficient characterization of noise statistics when the probability density function becomes non-Gaussian. In this study, we measure L-moments of intensity values of images acquired at 10% of normal dose and reconstructed by the IRT methods of two state-of-the-art clinical scanners (i.e., GE HDCT and Siemens DSCT Flash), keeping the dosage levels identical to each other. The high- and low-dose scans (i.e., 10% of high dose) were acquired from each scanner, and L-moments of noise patches were calculated for comparison.
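    For reference, sample L-moments can be computed from probability-weighted moments with the standard unbiased estimators. The sketch below (NumPy, synthetic Gaussian data in place of the scanner noise patches) shows the fixed linear combinations giving λ₁–λ₄ and the L-skewness ratio:

```python
import numpy as np

def l_moments(sample):
    """First four sample L-moments via probability-weighted moments b_r."""
    x = np.sort(np.asarray(sample, dtype=float))
    n = len(x)
    i = np.arange(1, n + 1)
    b0 = x.mean()
    b1 = np.sum((i - 1) / (n - 1) * x) / n
    b2 = np.sum((i - 1) * (i - 2) / ((n - 1) * (n - 2)) * x) / n
    b3 = np.sum((i - 1) * (i - 2) * (i - 3)
                / ((n - 1) * (n - 2) * (n - 3)) * x) / n
    # Fixed linear combinations of the b_r give the L-moments.
    l1 = b0
    l2 = 2 * b1 - b0
    l3 = 6 * b2 - 6 * b1 + b0
    l4 = 20 * b3 - 30 * b2 + 12 * b1 - b0
    return l1, l2, l3, l4

rng = np.random.default_rng(2)
patch = rng.normal(loc=10.0, scale=2.0, size=5000)  # stand-in noise patch
l1, l2, l3, l4 = l_moments(patch)
t3 = l3 / l2   # L-skewness: near zero for a symmetric distribution
```

    For a Gaussian, λ₂ equals σ/√π, and the L-skewness t₃ is zero; deviations of these ratios from their Gaussian values are what make L-moments useful for characterizing non-Gaussian noise.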

  14. LASER APPLICATIONS AND OTHER TOPICS IN QUANTUM ELECTRONICS: On the possibility of studying the temporal evolution of a surface relief directly during exposure to high-power radiation

    NASA Astrophysics Data System (ADS)

    Abramov, D. V.; Arakelyan, S. M.; Galkin, A. F.; Klimovskii, Ivan I.; Kucherik, A. O.; Prokoshev, V. G.

    2006-06-01

    The video image of the graphite surface exposed to focused laser radiation is obtained with the help of a laser monitor. A bright ring moving over the heated surface was observed. A method for reconstructing the surface relief from the video image is proposed and realised. The method is based on the measurement of the angular distribution of the light intensity scattered by the graphite sample surface. The surface relief of the graphite sample changing in time is reconstructed. The relative change in the relief height during laser excitation is measured. The statistical characteristics of the reconstructed graphite surface shape and their variation during laser irradiation are studied. It is found that a circular convexity appears within the bright ring. The formation mechanism of this convexity requires further investigations.

  15. Transtibial vs anatomical single bundle technique for anterior cruciate ligament reconstruction: A Retrospective Cohort Study.

    PubMed

    Kilinc, Bekir Eray; Kara, Adnan; Oc, Yunus; Celik, Haluk; Camur, Savas; Bilgin, Emre; Erten, Yunus Turgay; Sahinkaya, Turker; Eren, Osman Tugrul

    2016-05-01

    Most ACL reconstructions are performed with an isometric single-bundle technique. Traditionally, surgeons were trained to use the transtibial technique (TT) for drilling the femoral tunnel. Our study compared the early postoperative functional and clinical outcomes of patients who had ACL reconstruction with TT and patients who had ACL reconstruction with the anatomical single-bundle technique (AT). Fifty-five patients who had ACL reconstruction and adequate follow-up between January 2010 and December 2013 were included in the study. Patients were grouped by surgical technique: 28 patients were included in the anatomical single-bundle ACL reconstruction group (group 1) and 27 in the transtibial ACL reconstruction group (group 2). The average age of patients in group 1 and group 2 was 28.3 ± 6 and 27.9 ± 6.4 years, respectively. Lachman and Pivot-shift tests were performed on all patients. Laxity was measured with the KT-1000 arthrometer at 15, 20 and 30 pounds of force. Muscle strength of both extremities was evaluated with a Cybex II (Humac) dynamometer at 60°/sec and 240°/sec, using flexion and extension peak torque. The maximum force values of the non-operated and operated knees were compared. Groups were evaluated using the International Knee Documentation Committee (IKDC) standard knee ligament evaluation form, the IKDC activity scale, and the modified Lysholm and Cincinnati evaluation forms. Return-to-work and return-to-exercise times of patients were compared, as were functional and clinical outcomes of the two groups. NCSS 2007 and PASS 2008 Statistical Software were used for statistical analysis. There was no statistically significant difference between Lachman and Pivot-shift results (p > 0.01). A positive Pivot-shift test and anterior translation on the Lachman test were more frequent in the patients who had TT.
Of patients who had TT, Lysholm activity level was excellent in 33.3% (n = 9), good in 51.9% (n = 14) and moderate in 14.8% (n = 4); of patients who had AT, it was excellent in 57.1% (n = 16), good in 39.3% (n = 11) and moderate in 3.6% (n = 1). There was a statistically significant difference in Lysholm activity level between the groups (p < 0.01): the Lysholm activity level of patients who had AT was significantly higher than that of those who had TT. There was a statistically significant difference in modified Cincinnati activity level (p < 0.05): the modified Cincinnati activity level of patients who had AT was significantly higher than that of those who had TT. There was a statistically significant difference between the two groups in post-treatment IKDC activity level (p < 0.01): the rate of intense activity after treatment was significantly higher in patients who had AT. There was a statistically significant difference between the groups in the Cybex extension-flexion measurements at 60°/sec and the extension measurement at 240°/sec (p < 0.01). KT-1000 arthrometer results with AT were better than with TT for antero-posterior translation of the knee at 20 and 30 pounds of force. Return-to-exercise time of patients who had TT was significantly longer than that of those who had AT (p < 0.01). There was no statistically significant difference in return-to-work time (p > 0.05). Single-bundle anatomic ACL reconstruction was better than TT in terms of clinical, functional, and laboratory results. We believe that AT ACL reconstruction will increase in use and that the traditional TT technique will decline in the long term. Theoretically, anatomic relocation of the ACL can provide better knee kinematics. Copyright © 2016 IJS Publishing Group Ltd. Published by Elsevier Ltd. All rights reserved.

  16. Constructing a cosmological model-independent Hubble diagram of type Ia supernovae with cosmic chronometers

    NASA Astrophysics Data System (ADS)

    Li, Zhengxiang; Gonzalez, J. E.; Yu, Hongwei; Zhu, Zong-Hong; Alcaniz, J. S.

    2016-02-01

    We apply two methods, namely Gaussian processes and a nonparametric smoothing procedure, to reconstruct the Hubble parameter H(z) as a function of redshift from 15 measurements of the expansion rate obtained from age estimates of passively evolving galaxies. These reconstructions enable us to derive the luminosity distance to a given redshift z, calibrate the light-curve fitting parameters accounting for the (unknown) intrinsic magnitude of type Ia supernovae (SNe Ia), and construct cosmological model-independent Hubble diagrams of SNe Ia. In order to test the compatibility between the reconstructed functions of H(z), we perform a statistical analysis considering the latest SNe Ia sample, the so-called joint light-curve compilation. We find that, for Gaussian processes, the reconstructed functions of the Hubble parameter versus redshift, and thus the subsequent analysis of SNe Ia calibrations and cosmological implications, are sensitive to the choice of prior mean function. For the nonparametric smoothing method, however, the reconstructed functions do not depend on the initial guess model, and consistently require high values of H0, in excellent agreement with recent measurements of this quantity from Cepheids and other local distance indicators.
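    A minimal Gaussian-process reconstruction in the spirit described above, with illustrative synthetic H(z) data in place of the cosmic-chronometer measurements (the H0, Ωm, kernel amplitude and length scale below are assumptions), and a zero prior mean, which is itself a choice and echoes the sensitivity to the prior mean function noted in the abstract:

```python
import numpy as np

rng = np.random.default_rng(3)

# Illustrative synthetic expansion-rate data in place of the 15 cosmic-
# chronometer measurements (H0 = 70, Om = 0.3 are assumptions, not fits).
z = np.sort(rng.uniform(0.1, 1.8, size=15))
H_true = 70.0 * np.sqrt(0.3 * (1 + z) ** 3 + 0.7)
sigma = 5.0
H_obs = H_true + rng.normal(scale=sigma, size=z.size)

# GP regression with a squared-exponential kernel and zero prior mean.
def kernel(a, b, amp=100.0, ell=1.0):
    return amp**2 * np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ell**2)

K = kernel(z, z) + sigma**2 * np.eye(z.size)           # data covariance
z_star = np.linspace(0.0, 2.0, 50)                     # reconstruction grid
H_rec = kernel(z_star, z) @ np.linalg.solve(K, H_obs)  # posterior mean
```

    The reconstructed curve at z = 0 provides a model-independent estimate of H0; far outside the data range the posterior mean relaxes toward the prior mean, which is why the prior choice matters.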

  17. A phantom-based JAFROC observer study of two CT reconstruction methods: the search for optimisation of lesion detection and effective dose

    NASA Astrophysics Data System (ADS)

    Thompson, John D.; Chakraborty, Dev P.; Szczepura, Katy; Vamvakas, Ioannis; Tootell, Andrew; Manning, David J.; Hogg, Peter

    2015-03-01

    Purpose: To investigate the dose saving potential of iterative reconstruction (IR) in a computed tomography (CT) examination of the thorax. Materials and Methods: An anthropomorphic chest phantom containing various configurations of simulated lesions (5, 8, 10 and 12 mm; +100, -630 and -800 Hounsfield units, HU) was imaged on a modern CT system over a range of tube currents (20, 40, 60 and 80 mA). Images were reconstructed with IR and filtered back projection (FBP). An ATOM 701D (CIRS, Norfolk, VA) dosimetry phantom was used to measure organ dose. Effective dose was calculated. Eleven observers (15.11 ± 8.75 years of experience) completed a free-response study, localizing lesions in 544 single CT image slices. A modified jackknife alternative free-response receiver operating characteristic (JAFROC) analysis was completed to look for a significant effect of two factors: reconstruction method and tube current. Alpha was set at 0.05 to control the Type I error in this study. Results: For modified JAFROC analysis of reconstruction method there was no statistically significant difference in lesion detection performance between FBP and IR when figures of merit were averaged over tube current (F(1,10) = 0.08, p = 0.789). For tube current analysis, significant differences were revealed between multiple pairs of tube current settings (F(3,10) = 16.96, p < 0.001) when averaged over image reconstruction method. Conclusion: The free-response study suggests that lesion detection can be optimized at 40 mA in this phantom model, a measured effective dose of 0.97 mSv. In high-contrast regions the diagnostic value of IR, compared to FBP, is less clear.

  18. Reconstructing Spectral Scenes Using Statistical Estimation to Enhance Space Situational Awareness

    DTIC Science & Technology

    2006-12-01

    simultaneously spatially and spectrally deblur the images collected from ASIS. The algorithms are based on proven estimation theories and do not...collected with any system using a filtering technology known as Electronic Tunable Filters (ETFs). Previous methods to deblur spectral images collected...spectrally deblurring than the previously investigated methods. This algorithm expands on a method used for increasing the spectral resolution in gamma-ray

  19. SCENERY: a web application for (causal) network reconstruction from cytometry data

    PubMed Central

    Papoutsoglou, Georgios; Athineou, Giorgos; Lagani, Vincenzo; Xanthopoulos, Iordanis; Schmidt, Angelika; Éliás, Szabolcs; Tegnér, Jesper

    2017-01-01

    Flow and mass cytometry technologies can probe proteins as biological markers in thousands of individual cells simultaneously, providing unprecedented opportunities for reconstructing networks of protein interactions through machine learning algorithms. The network reconstruction (NR) problem has been well studied by the machine learning community. However, the potential of available methods remains largely unknown to the cytometry community, mainly due to their intrinsic complexity and the lack of comprehensive, powerful and easy-to-use NR software implementations specific to cytometry data. To bridge this gap, we present the Single CEll NEtwork Reconstruction sYstem (SCENERY), a web server featuring several standard and advanced cytometry data analysis methods coupled with NR algorithms in a user-friendly, on-line environment. In SCENERY, users may upload their data and set up their own study design. The server offers several data analysis options categorized into three classes of methods: data (pre)processing, statistical analysis and NR. It also provides interactive visualization and download of results as ready-to-publish images or multimedia reports. Its core is modular and based on the widely used and robust R platform, allowing power users to extend its functionalities by submitting their own NR methods. SCENERY is available at scenery.csd.uoc.gr or http://mensxmachina.org/en/software/. PMID:28525568
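    One classical NR approach of the kind such servers expose is partial-correlation networking (chosen here as an illustration, not necessarily SCENERY's default method): for roughly Gaussian data, near-zero partial correlations suggest the absence of a direct edge. The "proteins" and chain structure below are synthetic.

```python
import numpy as np

rng = np.random.default_rng(7)

# Synthetic "cytometry" data: 3 proteins over 2000 cells with a chain
# structure A -> B -> C, so A and C are linked only through B.
n = 2000
A = rng.normal(size=n)
B = 0.8 * A + rng.normal(scale=0.6, size=n)
C = 0.8 * B + rng.normal(scale=0.6, size=n)
X = np.column_stack([A, B, C])

# Partial correlations from the precision (inverse covariance) matrix:
# rho_ij = -P_ij / sqrt(P_ii * P_jj); near-zero entries -> no direct edge.
P = np.linalg.inv(np.cov(X, rowvar=False))
d = np.sqrt(np.diag(P))
pcor = -P / np.outer(d, d)
np.fill_diagonal(pcor, 1.0)

edge_AB = abs(pcor[0, 1])
edge_BC = abs(pcor[1, 2])
edge_AC = abs(pcor[0, 2])   # should be near zero: no direct A-C edge
```

    The raw correlation between A and C is substantial, but conditioning on B removes it, which is the basic signal that network-reconstruction methods exploit.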

  20. A comparison of linear interpolation models for iterative CT reconstruction.

    PubMed

    Hahn, Katharina; Schöndube, Harald; Stierstorfer, Karl; Hornegger, Joachim; Noo, Frédéric

    2016-12-01

    Recent reports indicate that model-based iterative reconstruction methods may improve image quality in computed tomography (CT). One difficulty with these methods is the number of options available to implement them, including the selection of the forward projection model and the penalty term. Currently, the literature is fairly scarce in terms of guidance regarding this selection step, whereas these options impact image quality. Here, the authors investigate the merits of three forward projection models that rely on linear interpolation: the distance-driven method, Joseph's method, and the bilinear method. The authors' selection is motivated by three factors: (1) in CT, linear interpolation is often seen as a suitable trade-off between discretization errors and computational cost, (2) the first two methods are popular with manufacturers, and (3) the third method enables assessing the importance of a key assumption in the other methods. One approach to evaluating forward projection models is to inspect their effect on discretized images, as well as the effect of their transpose on data sets, but the significance of such studies is unclear since the matrix and its transpose are always jointly used in iterative reconstruction. Another approach is to investigate the models in the context in which they are used, i.e., together with statistical weights and a penalty term. Unfortunately, this approach requires the selection of a preferred objective function and does not provide clear information on features that are intrinsic to the model. The authors adopted the following two-stage methodology. First, the authors analyze images that progressively include components of the singular value decomposition of the model in a reconstructed image without statistical weights and penalty term. Next, the authors examine the impact of weights and penalty on observed differences.
Image quality metrics were investigated for 16 different fan-beam imaging scenarios that enabled probing various aspects of all models. The metrics include a surrogate for computational cost, as well as bias, noise, and an estimation task, all at matched resolution. The analysis revealed fundamental differences in terms of both bias and noise. Task-based assessment appears to be required to appreciate the differences in noise; the estimation task the authors selected showed that these differences balance out to yield similar performance. Some scenarios highlighted merits for the distance-driven method in terms of bias but with an increase in computational cost. Three combinations of statistical weights and penalty term showed that the observed differences remain the same, but a strong edge-preserving penalty can dramatically reduce the magnitude of these differences. In many scenarios, Joseph's method seems to offer an interesting compromise between accuracy and computational cost. The distance-driven method offers the possibility to reduce bias, but at an increase in computational cost. The bilinear method indicated that a key assumption in the other two methods is highly robust. Last, a strong edge-preserving penalty can act as a compensator for insufficiencies in the forward projection model, bringing all models to similar levels in the most challenging imaging scenarios. The authors also find that their evaluation methodology helps in appreciating how the model, statistical weights, and penalty term interact.
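    As an illustration of one of the three models, a bare-bones 2-D version of Joseph-style interpolation can be sketched as follows (not the authors' implementation; the image, ray parameterization, and unit pixel size are assumptions). The ray is stepped one pixel at a time along its dominant axis, with linear interpolation between the two neighbours along the other axis:

```python
import numpy as np

def joseph_projection(img, theta, t):
    """Line integral through a square 2-D image using Joseph's method."""
    n = img.shape[0]
    c = (n - 1) / 2.0                                  # image centre
    d = np.array([np.cos(theta), np.sin(theta)])       # ray direction
    p0 = t * np.array([-np.sin(theta), np.cos(theta)]) # point on the ray
    total = 0.0
    if abs(d[0]) >= abs(d[1]):            # x is the dominant direction
        for ix in range(n):
            x = ix - c
            y = p0[1] + (x - p0[0]) * d[1] / d[0] + c
            iy = int(np.floor(y))
            f = y - iy
            if 0 <= iy < n - 1:           # interpolate between two rows
                total += (1 - f) * img[iy, ix] + f * img[iy + 1, ix]
        return total / abs(d[0])          # path-length weighting
    else:                                 # y dominant: roles swapped
        for iy in range(n):
            y = iy - c
            x = p0[0] + (y - p0[1]) * d[0] / d[1] + c
            ix = int(np.floor(x))
            f = x - ix
            if 0 <= ix < n - 1:
                total += (1 - f) * img[iy, ix] + f * img[iy, ix + 1]
        return total / abs(d[1])

# A centred 16-pixel square of ones: an axial ray through the centre
# should integrate to the square's width.
img = np.zeros((64, 64))
img[24:40, 24:40] = 1.0
ray = joseph_projection(img, theta=0.0, t=0.0)
```

    The bilinear method instead interpolates in both axes at fixed sample spacing, and the distance-driven method maps pixel and detector boundaries onto a common axis; the branch on the dominant direction above is exactly the "key assumption" the bilinear model lets one probe.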

  1. Simultaneous reconstruction of multiple depth images without off-focus points in integral imaging using a graphics processing unit.

    PubMed

    Yi, Faliu; Lee, Jieun; Moon, Inkyu

    2014-05-01

    The reconstruction of multiple depth images with a ray back-propagation algorithm in three-dimensional (3D) computational integral imaging is computationally burdensome. Further, a reconstructed depth image consists of a focus and an off-focus area. Focus areas are 3D points on the surface of an object that are located at the reconstructed depth, while off-focus areas include 3D points in free-space that do not belong to any object surface in 3D space. Generally, without being removed, the presence of an off-focus area would adversely affect the high-level analysis of a 3D object, including its classification, recognition, and tracking. Here, we use a graphics processing unit (GPU) that supports parallel processing with multiple processors to simultaneously reconstruct multiple depth images using a lookup table containing the shifted values along the x and y directions for each elemental image in a given depth range. Moreover, each 3D point on a depth image can be measured by analyzing its statistical variance with its corresponding samples, which are captured by the two-dimensional (2D) elemental images. These statistical variances can be used to classify depth image pixels as either focus or off-focus points. At this stage, the measurement of focus and off-focus points in multiple depth images is also implemented in parallel on a GPU. Our proposed method is conducted based on the assumption that there is no occlusion of the 3D object during the capture stage of the integral imaging process. Experimental results have demonstrated that this method is capable of removing off-focus points in the reconstructed depth image. The results also showed that using a GPU to remove the off-focus points could greatly improve the overall computational speed compared with using a CPU.
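    The variance-based focus test can be sketched with a toy stand-in for the elemental-image samples (NumPy; the threshold, image sizes, and noise levels are illustrative, and no occlusion is modeled, matching the stated assumption). Each reconstructed pixel is classified by the variance of its samples across the K elemental views:

```python
import numpy as np

rng = np.random.default_rng(4)

# Toy stand-in for integral imaging: K elemental-image samples per 3D point.
# In-focus points see consistent values across views; off-focus points do not.
K, H, W = 9, 32, 32
scene = rng.normal(size=(H, W))
views = scene + 0.05 * rng.normal(size=(K, H, W))   # agreeing samples + noise

# Corrupt one region so its samples disagree across views (off-focus stand-in).
views[:, 8:16, 8:16] = rng.normal(size=(K, 8, 8))

var_map = views.var(axis=0)        # per-pixel variance across the K samples
threshold = 0.5                    # illustrative cut-off
focus_mask = var_map < threshold   # True where samples agree -> in focus

in_focus_frac = focus_mask[20:, 20:].mean()      # consistent region
off_focus_frac = focus_mask[8:16, 8:16].mean()   # corrupted region
```

    On a GPU, the variance map for every depth plane can be computed by one thread per pixel, which is what makes the simultaneous multi-depth measurement described above practical.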

  2. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brady, S; Shulkin, B

    Purpose: To develop ultra-low dose computed tomography (CT) attenuation correction (CTAC) acquisition protocols for pediatric positron emission tomography CT (PET CT). Methods: A GE Discovery 690 PET CT hybrid scanner was used to investigate the change to quantitative PET and CT measurements when operated at ultra-low doses (10–35 mAs). CT quantitation: noise, low-contrast resolution, and CT numbers for eleven tissue substitutes were analyzed in-phantom. CT quantitation was analyzed to a reduction of 90% CTDIvol (0.39/3.64; mGy) radiation dose from baseline. To minimize noise infiltration, 100% adaptive statistical iterative reconstruction (ASiR) was used for CT reconstruction. PET images were reconstructed with the lower-dose CTAC iterations and analyzed for: maximum body weight standardized uptake value (SUVbw) of various diameter targets (range 8–37 mm), background uniformity, and spatial resolution. Radiation organ dose, as derived from the patient exam size-specific dose estimate (SSDE), was converted to effective dose using the standard ICRP report 103 method. Effective dose and CTAC noise magnitude were compared for 140 patient examinations (76 post-ASiR implementation) to determine relative patient population dose reduction and noise control. Results: CT numbers were constant to within 10% from the non-dose-reduced CTAC image down to 90% dose reduction. There was no change in SUVbw, background percent uniformity, or spatial resolution for PET images reconstructed with ASiR-based CTAC protocols down to 90% dose reduction. Patient population effective dose analysis demonstrated relative CTAC dose reductions between 62%–86% (3.2/8.3−0.9/6.2; mSv). Noise magnitude in dose-reduced patient images increased but was not statistically different from pre-dose-reduced patient images.
Conclusion: Using ASiR allowed for aggressive reduction in CTAC dose with no change in PET reconstructed images while maintaining sufficient image quality for co-localization of hybrid CT anatomy and PET radioisotope uptake.

  3. Model-based iterative reconstruction and adaptive statistical iterative reconstruction: dose-reduced CT for detecting pancreatic calcification

    PubMed Central

    Katsura, Masaki; Akahane, Masaaki; Sato, Jiro; Matsuda, Izuru; Ohtomo, Kuni

    2016-01-01

    Background Iterative reconstruction methods have attracted attention for reducing radiation doses in computed tomography (CT). Purpose To investigate the detectability of pancreatic calcification using dose-reduced CT reconstructed with model-based iterative reconstruction (MBIR) and adaptive statistical iterative reconstruction (ASIR). Material and Methods This prospective study, approved by the institutional review board, included 85 patients (57 men, 28 women; mean age, 69.9 years; mean body weight, 61.2 kg). Unenhanced CT was performed three times with different radiation doses (reference-dose CT [RDCT], low-dose CT [LDCT], ultralow-dose CT [ULDCT]). From RDCT, LDCT, and ULDCT, images were reconstructed with filtered-back projection (R-FBP, used for establishing the reference standard), ASIR (L-ASIR), and MBIR and ASIR (UL-MBIR and UL-ASIR), respectively. A lesion (pancreatic calcification) detection test was performed by two blinded radiologists with a five-point certainty level scale. Results Dose-length products of RDCT, LDCT, and ULDCT were 410, 97, and 36 mGy·cm, respectively. Nine patients had pancreatic calcification. The sensitivity for detecting pancreatic calcification with UL-MBIR was high (0.67–0.89) compared to L-ASIR or UL-ASIR (0.11–0.44), and a significant difference was seen between UL-MBIR and UL-ASIR for one reader (P = 0.014). The area under the receiver-operating characteristic curve for UL-MBIR (0.818–0.860) was comparable to that for L-ASIR (0.696–0.844). The specificity was lower with UL-MBIR (0.79–0.92) than with L-ASIR or UL-ASIR (0.96–0.99), and a significant difference was seen for one reader (P < 0.01). Conclusion With UL-MBIR, pancreatic calcification can be detected with high sensitivity; however, attention should be paid to the slightly lower specificity. PMID:27110389

  4. Jones matrix polarization-correlation mapping of biological crystals networks

    NASA Astrophysics Data System (ADS)

    Ushenko, O. G.; Ushenko, Yu. O.; Pidkamin, L. Y.; Sidor, M. I.; Vanchuliak, O.; Motrich, A. V.; Gorsky, M. P.; Meglinskiy, I.; Marchuk, Yu. F.

    2017-08-01

    We propose an optical model for the Jones-matrix description of the mechanisms of optical anisotropy in polycrystalline films of human bile, namely optical activity and birefringence. An algorithm is elaborated for reconstructing the distributions of the corresponding parameters, the optical rotation angles and phase shifts of the indicated anisotropy types. Objective criteria for differentiating bile films taken from healthy donors and patients with cholelithiasis are determined by means of statistical analysis of such distributions. The operational characteristics (sensitivity, specificity and accuracy) of the Jones-matrix reconstruction method for optical anisotropy parameters are defined.
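    A toy forward/inverse pair under a simplified model, assuming the film acts as a linear retarder (fast axis along x) followed by a rotator; the recovery formulas below hold only for this specific composition and small rotation angles, and are not the paper's full reconstruction algorithm:

```python
import numpy as np

def rotator(theta):
    # Jones matrix of optical activity: rotates the polarization plane.
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s], [s, c]], dtype=complex)

def retarder(delta):
    # Jones matrix of linear birefringence, fast axis along x.
    return np.diag([np.exp(1j * delta / 2), np.exp(-1j * delta / 2)])

# Forward model: the film retards, then rotates, the transmitted field.
theta_true, delta_true = 0.3, 0.8      # illustrative rotation / phase shift
J = rotator(theta_true) @ retarder(delta_true)

# Parameter recovery, valid for this composition with |theta| < pi/2:
# |J00| = cos(theta), |J10| = sin(theta), arg(J00/J11) = delta.
theta_rec = np.arctan2(abs(J[1, 0]), abs(J[0, 0]))
delta_rec = np.angle(J[0, 0] / J[1, 1])
```

    Mapping these two recovered parameters pixel by pixel over a film gives the kind of distributions whose statistics the study uses to separate donor groups.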

  5. WE-G-18A-01: JUNIOR INVESTIGATOR WINNER - Low-Dose C-Arm Cone-Beam CT with Model-Based Image Reconstruction for High-Quality Guidance of Neurosurgical Intervention

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, A; Stayman, J; Otake, Y

    Purpose: To address the challenges of image quality, radiation dose, and reconstruction speed in intraoperative cone-beam CT (CBCT) for neurosurgery by combining model-based image reconstruction (MBIR) with accelerated algorithmic and computational methods. Methods: Preclinical studies involved a mobile C-arm for CBCT imaging of two anthropomorphic head phantoms that included simulated imaging targets (ventricles, soft-tissue structures/bleeds) and neurosurgical procedures (deep brain stimulation (DBS) electrode insertion) for assessment of image quality. The penalized likelihood (PL) framework was used for MBIR, incorporating a statistical model with image regularization via an edgepreserving penalty. To accelerate PL reconstruction, the ordered-subset, separable quadratic surrogates (OS-SQS) algorithmmore » was modified to incorporate Nesterov's method and implemented on a multi-GPU system. A fair comparison of image quality between PL and conventional filtered backprojection (FBP) was performed by selecting reconstruction parameters that provided matched low-contrast spatial resolution. Results: CBCT images of the head phantoms demonstrated that PL reconstruction improved image quality (∼28% higher CNR) even at half the radiation dose (3.3 mGy) compared to FBP. A combination of Nesterov's method and fast projectors yielded a PL reconstruction run-time of 251 sec (cf., 5729 sec for OS-SQS, 13 sec for FBP). Insertion of a DBS electrode resulted in severe metal artifact streaks in FBP reconstructions, whereas PL was intrinsically robust against metal artifact. The combination of noise and artifact was reduced from 32.2 HU in FBP to 9.5 HU in PL, thereby providing better assessment of device placement and potential complications. Conclusion: The methods can be applied to intraoperative CBCT for guidance and verification of neurosurgical procedures (DBS electrode insertion, biopsy, tumor resection) and detection of complications (intracranial hemorrhage). 
Significant improvement in image quality, dose reduction, and reconstruction time of ∼4 min will enable practical deployment of low-dose C-arm CBCT within the operating room. AAPM Research Seed Funding (2013-2014); NIH Fellowship F32EB017571; Siemens Healthcare (XP Division)
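    The Nesterov acceleration referenced in this record can be illustrated with a minimal sketch on a toy linear least-squares problem (this is not the paper's OS-SQS implementation; the problem sizes, step size, and function names are hypothetical):

```python
import numpy as np

def grad(A, x, y):
    """Gradient of the least-squares data term 0.5 * ||A x - y||^2."""
    return A.T @ (A @ x - y)

def reconstruct_nesterov(A, y, step, n_iter):
    """Plain gradient descent accelerated with Nesterov momentum."""
    x = np.zeros(A.shape[1])
    z = x.copy()                       # momentum ("lookahead") point
    t = 1.0
    for _ in range(n_iter):
        x_new = z - step * grad(A, z, y)
        t_new = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t * t))
        z = x_new + ((t - 1.0) / t_new) * (x_new - x)   # momentum extrapolation
        x, t = x_new, t_new
    return x

rng = np.random.default_rng(0)
A = rng.normal(size=(40, 10))
x_true = rng.normal(size=10)
y = A @ x_true
step = 1.0 / np.linalg.norm(A, 2) ** 2    # 1 / Lipschitz constant of the gradient
x_hat = reconstruct_nesterov(A, y, step, 500)
```

    The extrapolation step reuses the previous iterate at no extra per-iteration cost, which is the kind of speedup the record reports when Nesterov's method is combined with OS-SQS.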

  6. Reconstructing Macroeconomics Based on Statistical Physics

    NASA Astrophysics Data System (ADS)

    Aoki, Masanao; Yoshikawa, Hiroshi

    We believe that the time has come to integrate the new approach based on statistical physics, or econophysics, into macroeconomics. Toward this goal, there must be more dialogue between physicists and economists. In this paper, we argue that there is no reason why the methods of statistical physics, so successful in many fields of the natural sciences, cannot be usefully applied to macroeconomics, which is meant to analyze a macroeconomy comprising a large number of economic agents. It is, in fact, odd to regard the macroeconomy as a homothetic enlargement of the representative micro agent. We trust in the bright future of the new approach to macroeconomics based on statistical physics.

  7. On modeling the paleohydrologic response of closed-basin lakes to fluctuations in climate: Methods, applications, and implications

    NASA Astrophysics Data System (ADS)

    Liu, Ganming; Schwartz, Franklin W.

    2014-04-01

    Climate reconstructions using tree rings and lake sediments have contributed significantly to the understanding of Holocene climates. Approaches focused specifically on reconstructing the temporal water-level response of lakes, however, are much less developed. This paper describes a statistical correlation approach based on time series of Palmer Drought Severity Index (PDSI) values derived from instrumental records or tree rings as a basis for reconstructing stage hydrographs for closed-basin lakes. We use a distributed lag correlation model to calculate a variable, ωt, that represents the water level of a lake at any time t as a result of integrated climatic forcing from preceding years. The method was validated using both synthetic and measured lake-stage data; the study found that a lake's "memory" of climate fades as time passes, following an exponential-decay function at rates determined by the correlation time lag. Calculated trends in ωt for Moon Lake, Rice Lake, and Lake Mina from A.D. 1401 to 1860 compared well with the established chronologies (salinity, moisture, and Mg/Ca ratios) reconstructed from sediments. This method provides an independent approach for developing high-resolution information on lake behavior in preinstrumental times and has been able to identify problems of climate-signal deterioration in sediment-based climate reconstructions in lakes with a long time lag.
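    The exponentially fading "memory" idea can be sketched as a weighted sum of current and past PDSI values (a toy illustration, not the authors' distributed lag correlation model; the decay constant and normalization are assumptions):

```python
import numpy as np

def lake_memory(pdsi, tau):
    """omega_t: weighted sum of current and past PDSI values, with weights
    proportional to exp(-k/tau) so the influence of year t-k fades exponentially."""
    n = len(pdsi)
    w = np.exp(-np.arange(n) / tau)          # weight for lag k
    omega = np.empty(n)
    for t in range(n):
        lags = pdsi[t::-1]                   # PDSI_t, PDSI_{t-1}, ..., PDSI_0
        omega[t] = np.dot(w[:t + 1], lags) / w[:t + 1].sum()
    return omega

# Thirty wet years followed by thirty dry years: the modeled level
# responds with an exponentially fading memory of the wet period.
pdsi = np.concatenate([np.full(30, 2.0), np.full(30, -2.0)])
omega = lake_memory(pdsi, tau=8.0)
```

    After the switch to drought, omega decays gradually toward the new equilibrium rather than jumping, which is the behavior the record describes for lake "memory" of climate.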

  8. Digital Reconstruction of 3D Polydisperse Dry Foam

    NASA Astrophysics Data System (ADS)

    Chieco, A.; Feitosa, K.; Roth, A. E.; Korda, P. T.; Durian, D. J.

    2012-02-01

    Dry foam is a disordered packing of bubbles that distort into familiar polyhedral shapes. We have implemented a method that uses optical axial tomography to reconstruct the internal structure of a dry foam in three dimensions. The technique consists of taking a series of photographs of the dry foam against a uniformly illuminated background at successive angles. By summing the projections we create images of the foam cross section. Image analysis of the cross sections allows us to locate Plateau borders and vertices. The vertices are then connected according to Plateau's rules to reconstruct the internal structure of the foam. Using this technique we are able to visualize a large number of bubbles of real 3D foams and obtain statistics of faces and edges.

  9. Psychoactive Drugs in Plastic Surgery

    PubMed Central

    Davison, Steven P.; Hayes, Kylie D.

    2017-01-01

    Background: Psychoactive drug use is on the rise in the United States, with plastic surgery patients a potentially susceptible group. This study aimed to determine the incidence of cosmetic and reconstructive patients in our practice taking psychoactive drugs and to compare those values with the national average. Furthermore, we discuss the patient safety concerns that arise when patients withhold medical history information over the course of their treatment. Methods: Urban private plastic surgery practice patients who underwent surgery in a closed practice from 2009 to 2016 were divided into cosmetic and reconstructive cohorts. Drug use was reviewed through medical scripts, patient history, and Surescripts drug reporting. Extracted information included age, race, procedure, psychoactive medications, and whether or not patients stated a mental health diagnosis on their medical history forms. Only patients with complete records were included. Results: A total of 830 patients were included in the statistical analysis. Because of the minimal cohort size, 70 men were excluded, as there were no comparative national data. Our analysis found that 33.6% of cosmetic patients and 46.3% of reconstructive patients used at least one psychoactive drug. Conclusion: There is a statistically significant difference between psychoactive drug use at our practice compared with the general population, and a significantly larger percentage of reconstructive patients taking drugs compared with the cosmetic cohort. PMID:28458985

  10. Evaluation of Bias and Variance in Low-count OSEM List Mode Reconstruction

    PubMed Central

    Jian, Y; Planeta, B; Carson, R E

    2016-01-01

    Statistical algorithms have been widely used in PET image reconstruction. The maximum likelihood expectation maximization (MLEM) reconstruction has been shown to produce bias in applications where images are reconstructed from a relatively small number of counts. In this study, image bias and variability in low-count OSEM reconstruction are investigated on images reconstructed with the MOLAR (motion-compensation OSEM list-mode algorithm for resolution-recovery reconstruction) platform. A human brain ([11C]AFM) and a NEMA phantom are used in the simulation and real experiments, respectively, for the HRRT and Biograph mCT. Image reconstructions were repeated with different combinations of subsets and iterations. Regions of interest (ROIs) were defined on low-activity and high-activity regions to evaluate the bias and noise at matched effective iteration numbers (iterations × subsets). Minimal negative biases and no positive biases were found at moderate count levels, and less than 5% negative bias was found using extremely low levels of counts (0.2 M NEC). At any given count level, other factors, such as subset numbers and frame-based scatter correction, may introduce small biases (1–5%) in the reconstructed images. The observed bias was substantially lower than that reported in the literature, perhaps due to the use of point spread function and/or other implementation methods in MOLAR. PMID:25479254

  11. Evaluation of bias and variance in low-count OSEM list mode reconstruction

    NASA Astrophysics Data System (ADS)

    Jian, Y.; Planeta, B.; Carson, R. E.

    2015-01-01

    Statistical algorithms have been widely used in PET image reconstruction. The maximum likelihood expectation maximization reconstruction has been shown to produce bias in applications where images are reconstructed from a relatively small number of counts. In this study, image bias and variability in low-count OSEM reconstruction are investigated on images reconstructed with the MOLAR (motion-compensation OSEM list-mode algorithm for resolution-recovery reconstruction) platform. A human brain ([11C]AFM) and a NEMA phantom are used in the simulation and real experiments, respectively, for the HRRT and Biograph mCT. Image reconstructions were repeated with different combinations of subsets and iterations. Regions of interest were defined on low-activity and high-activity regions to evaluate the bias and noise at matched effective iteration numbers (iterations × subsets). Minimal negative biases and no positive biases were found at moderate count levels, and less than 5% negative bias was found using extremely low levels of counts (0.2 M NEC). At any given count level, other factors, such as subset numbers and frame-based scatter correction, may introduce small biases (1-5%) in the reconstructed images. The observed bias was substantially lower than that reported in the literature, perhaps due to the use of point spread function and/or other implementation methods in MOLAR.
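    The MLEM update that underlies OSEM can be sketched on a toy, noise-free system (OSEM applies the same multiplicative update to subsets of rays in turn, which is why the effective iteration number is iterations × subsets; the matrix sizes here are arbitrary):

```python
import numpy as np

def mlem(A, y, n_iter):
    """Toy MLEM update: x <- x * [A^T (y / (A x))] / (A^T 1).
    A: system matrix (detector bins x voxels); y: measured counts."""
    x = np.ones(A.shape[1])               # strictly positive initial image
    sens = A.sum(axis=0)                  # sensitivity image, A^T 1
    for _ in range(n_iter):
        proj = np.maximum(A @ x, 1e-12)   # forward projection (guard /0)
        x = x * (A.T @ (y / proj)) / sens
    return x

rng = np.random.default_rng(1)
A = rng.uniform(0.0, 1.0, size=(64, 16))
x_true = rng.uniform(1.0, 10.0, size=16)
y = A @ x_true                            # noise-free "counts" for this sketch
x_hat = mlem(A, y, 500)
```

    The multiplicative form keeps the image non-negative at every iteration, a property the low-count bias discussion in this record hinges on: non-negativity clips noise asymmetrically in cold regions.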

  12. A unified framework for penalized statistical muon tomography reconstruction with edge preservation priors of lp norm type

    NASA Astrophysics Data System (ADS)

    Yu, Baihui; Zhao, Ziran; Wang, Xuewu; Wu, Dufan; Zeng, Zhi; Zeng, Ming; Wang, Yi; Cheng, Jianping

    2016-01-01

    The Tsinghua University MUon Tomography facilitY (TUMUTY) has been built and is used to reconstruct special objects with complex structure. Since fine imaging is required, the conventional Maximum Likelihood Scattering and Displacement (MLSD) algorithm is employed. However, due to the statistical characteristics of muon tomography and the incompleteness of the data, the reconstruction is often unstable and accompanied by severe noise. In this paper, we propose a Maximum a Posteriori (MAP) algorithm for muon tomography regularization, in which an edge-preserving prior on the scattering density image is added to the objective function. The prior takes the lp norm (p>0) of the image gradient magnitude, where p=1 and p=2 correspond to the well-known total-variation (TV) and Gaussian priors, respectively. The optimization transfer principle is used to minimize the objective function in a unified framework: at each iteration the problem is reduced to solving a cubic equation through paraboloidal surrogates. To validate the method, the French Test Object (FTO) is imaged both in numerical simulation and on TUMUTY. The proposed algorithm is used for reconstruction with different norms, including l2, l1, l0.5, and an l2-0.5 mixture norm, which are studied in detail. Compared with the MLSD method, MAP achieves better image quality in both structure preservation and noise reduction. Furthermore, compared with previous work in which only one-dimensional images were acquired, we achieve relatively clear three-dimensional images of the FTO, in which the inner air hole and the tungsten shell are visible.
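    The effect of an lp-norm prior on the gradient magnitude can be sketched in 1D with a smoothed penalty and plain gradient descent (this is a toy denoising illustration, not the paper's paraboloidal-surrogate cubic solve; beta, eps, and the step size are assumptions):

```python
import numpy as np

def lp_denoise(y, beta, p, n_iter=1000, step=0.02, eps=1e-3):
    """Minimize 0.5*||x - y||^2 + beta * sum_i (|x_{i+1}-x_i|^2 + eps)^(p/2)
    by gradient descent. p=2 is a Gaussian (smoothing) prior; p=1 with a
    small eps approximates the edge-preserving total-variation prior."""
    x = y.copy()
    for _ in range(n_iter):
        d = np.diff(x)                                   # image gradient
        w = p * (d * d + eps) ** (p / 2.0 - 1.0) * d     # penalty derivative
        g = x - y                                        # data-term gradient
        g[:-1] -= beta * w                               # chain rule through diff
        g[1:] += beta * w
        x = x - step * g
    return x

rng = np.random.default_rng(2)
truth = np.concatenate([np.zeros(50), np.ones(50)])      # a sharp edge
noisy = truth + 0.1 * rng.normal(size=100)
x_tv = lp_denoise(noisy, beta=0.5, p=1.0)                # TV-like prior
```

    With p=1 the penalty grows only linearly with the jump height, so the edge survives while small noise differences are suppressed, which is the edge-preservation behavior the record attributes to the TV-type prior.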

  13. Tropospheric wet refractivity tomography using multiplicative algebraic reconstruction technique

    NASA Astrophysics Data System (ADS)

    Xiaoying, Wang; Ziqiang, Dai; Enhong, Zhang; Fuyang, K. E.; Yunchang, Cao; Lianchun, Song

    2014-01-01

    Algebraic reconstruction techniques (ART) have been successfully used to reconstruct the total electron content (TEC) of the ionosphere and have in recent years been tentatively applied to tropospheric wet refractivity and water vapor tomography with ground-based GNSS. Previous research on ART for tropospheric water vapor tomography focused on the convergence and relaxation parameters of various algebraic reconstruction techniques and rarely discussed the impact of Gaussian constraints and the initial field on the iteration results. The existing accuracy evaluation parameters, calculated from slant wet delay, can only assess the precision of voxels penetrated by slant paths and cannot evaluate that of voxels not penetrated by any slant path. This paper proposes two new statistical parameters, Bias and RMS, calculated from the wet refractivity of all voxels, to remedy the deficiencies of the existing evaluation parameters, and then discusses the effect of the Gaussian constraints and the initial field on convergence and tomography results when the multiplicative algebraic reconstruction technique (MART) is used to reconstruct the 4D tropospheric wet refractivity field in a simulation study.
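    The MART iteration can be sketched on a toy consistent system (the exponent normalization and relaxation parameter below are common textbook choices, not necessarily those used in the paper; the matrix sizes are arbitrary):

```python
import numpy as np

def mart(A, y, n_sweeps, lam=1.0):
    """Multiplicative ART: for each ray i,
    x_j <- x_j * (y_i / <a_i, x>)^(lam * a_ij / max_j a_ij).
    Multiplicative corrections keep the reconstructed field positive."""
    x = np.ones(A.shape[1])
    for _ in range(n_sweeps):
        for i in range(A.shape[0]):
            ai = A[i]
            pred = ai @ x                    # predicted ray integral
            x = x * (y[i] / pred) ** (lam * ai / ai.max())
    return x

rng = np.random.default_rng(3)
A = rng.uniform(0.1, 1.0, size=(30, 9))      # toy ray-weight matrix
x_true = rng.uniform(0.5, 3.0, size=9)       # toy "wet refractivity" field
y = A @ x_true                               # consistent, noise-free slant data
x_hat = mart(A, y, 200)
```

    Because updates are ray-by-ray, voxels crossed by no ray are never touched, which is exactly the evaluation gap the proposed all-voxel Bias and RMS parameters are meant to close.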

  14. Multispectral x-ray CT: multivariate statistical analysis for efficient reconstruction

    NASA Astrophysics Data System (ADS)

    Kheirabadi, Mina; Mustafa, Wail; Lyksborg, Mark; Lund Olsen, Ulrik; Bjorholm Dahl, Anders

    2017-10-01

    Recent developments in multispectral X-ray detectors allow for efficient identification of materials based on their chemical composition. This has a range of applications including security inspection, which is our motivation. In this paper, we analyze data from a tomographic setup employing the MultiX detector, which records projection data in 128 energy bins covering the range from 20 to 160 keV. Obtaining all information from this data requires reconstructing 128 tomograms, which is computationally expensive. Instead, we propose to reduce the dimensionality of the projection data prior to reconstruction and to reconstruct from the reduced data. We analyze three linear methods for dimensionality reduction using a dataset with 37 equally spaced projection angles. Four bottles containing different materials are recorded; we obtain discrimination of their contents from a greatly reduced set of tomograms similar to that achieved with the full 128 tomograms that would otherwise be needed without dimensionality reduction.

  15. An Australasian hockey stick and associated climate wars

    NASA Astrophysics Data System (ADS)

    Karoly, David; Gergis, Joelle; Neukom, Raphael; Gallant, Ailie

    2017-04-01

    Multiproxy warm season (September-February) temperature reconstructions are presented for the combined land-ocean region of Australasia (0°-50°S, 110°E-180°) covering the last millennium (1000-2001CE). Using between 2 (R2) and 28 (R28) paleoclimate records, four 1000-member ensemble reconstructions of regional temperature are developed using four different statistical methods: principal component regression (PCR), composite plus scale (CPS), Bayesian hierarchical models (LNA), and pairwise comparison (PaiCo). The reconstructions are then compared with a three-member ensemble of GISS-E2-R climate model simulations and independent paleoclimate records. Decadal fluctuations in Australasian temperatures are remarkably similar between the four reconstruction methods. There are, however, differences in the amplitude of temperature variations between the different statistical methods and proxy networks. When the largest R28 network is used, the warmest 30-yr periods occur after 1950 in 77% of ensemble members over all methods. However, reconstructions based on only the longest records (R2 and R3 networks) indicate that single 30- and 10-yr periods of similar or slightly higher temperatures than in the late twentieth century may have occurred during the first half of the millennium. Regardless, the most recent instrumental temperatures (1985-2014) are above the 90th percentile of all 12 reconstruction ensembles (four reconstruction methods based on three proxy networks — R28, R3, and R2). An earlier manuscript describing this study and its results was accepted for publication in the Journal of Climate in May 2012, after two thorough rounds of review. However, as described by Gergis (2016), after the early online release of the paper, a typo in the methods section was identified. While the paper said the study had used "detrended" data - observed temperature data from which the longer-term trends had been removed - the study had in fact used raw data. 
Both raw and detrended data have been used in similar studies, and both are scientifically justifiable approaches. Instead of taking the easy way out and just correcting the single word in the page proof, we asked the publisher to put our paper on hold and remove the online version while we assessed the influence that the different method had on the results. Gergis (2016) describes the saga of attacks on the study and the authors by bloggers and online experts over the next four years, until the manuscript was finally accepted and published in July 2016 following a further three rounds of peer review and four new reviewers. This is another cautionary tale of the climate wars described by Mike Mann: efforts to discredit studies showing that recent large-scale warming is very likely outside the range of natural climate variability over the last millennium. Gergis, J., R. Neukom, A. J. E. Gallant and D. J. Karoly (2016) Australasian temperature reconstructions spanning the last millennium. J. Clim., 29, 5365-5392. Gergis, J. (2016) How a single word sparked a four-year saga of climate fact checking and blog backlash. The Conversation, 11 July 2016. https://theconversation.com/how-a-single-word-sparked-a-four-year-saga-of-climate-fact-checking-and-blog-backlash-62174

  16. Downscaling and hydrological uncertainties in 20th century hydrometeorological reconstructions over France

    NASA Astrophysics Data System (ADS)

    Vidal, Jean-Philippe; Caillouet, Laurie; Dayon, Gildas; Boé, Julien; Sauquet, Eric; Thirel, Guillaume; Graff, Benjamin

    2017-04-01

    The record length of streamflow observations is generally limited to the last 50 years, which is not enough to properly explore the natural hydrometeorological variability, a key to better understand the effects of anthropogenic climate change. This work proposes a comparison of different hydrometeorological reconstruction datasets over France built on the downscaling of the NOAA 20th century global extended reanalysis (20CR, Compo et al., 2011). It aims at assessing the uncertainties related to these reconstructions and improving our knowledge of the multi-decadal hydrometeorological variability over the 20th century. High-resolution daily meteorological reconstructions over the period 1871-2012 are obtained with two statistical downscaling methods based on the analogue approach: the deterministic ANALOG method (Dayon et al., 2015) and the probabilistic SCOPE method (Caillouet et al., 2016). These reconstructions are then used as forcings for the GR6J lumped conceptual rainfall-runoff model and the SIM physically-based distributed hydrological model, in order to derive daily streamflow reconstructions over a set of around 70 reference near-natural catchments. Results show a large multi-decadal streamflow variability over the last 140 years, which is however relatively consistent over France. Empirical estimates of three types of uncertainty - structure of the downscaling method, small-scale internal variability, and hydrological model structure - show roughly equal contributions to the streamflow uncertainty at the annual time scale, with values as high as 20% of the interannual mean. Caillouet, L., Vidal, J.-P., Sauquet, E., and Graff, B.: Probabilistic precipitation and temperature downscaling of the Twentieth Century Reanalysis over France, Clim. Past, 12, 635-662, doi:10.5194/cp-12-635-2016, 2016. Compo, G. P., Whitaker, J. S., Sardeshmukh, P. D., Matsui, N., Allan, R. J., Yin, X., Gleason, B. E., Vose, R. 
S., Rutledge, G., Bessemoulin, P., Brönnimann, S., Brunet, M., Crouthamel, R. I., Grant, A. N., Groisman, P. Y., Jones, P. D., Kruk, M. C., Kruger, A. C., Marshall, G. J., Maugeri, M., Mok, H. Y., Nordli, Ø., Ross, T. F., Trigo, R. M., Wang, X. L., Woodruff, S. D., and Worley, S. J.: The Twentieth Century Reanalysis Project. Q. J. R. Meteorol. Soc., 137, 1-28, doi:10.1002/qj.776, 2011. Dayon, G., Boé, J., and Martin, E.: Transferability in the future climate of a statistical downscaling method for precipitation in France, J. Geophys. Res.-Atmos., 120, 1023-1043, doi: 10.1002/2014JD022236, 2015.

  17. Restoration of the Patient-Specific Anatomy of the Proximal and Distal Parts of the Humerus: Statistical Shape Modeling Versus Contralateral Registration Method.

    PubMed

    Vlachopoulos, Lazaros; Lüthi, Marcel; Carrillo, Fabio; Gerber, Christian; Székely, Gábor; Fürnstahl, Philipp

    2018-04-18

    In computer-assisted reconstructive surgeries, the contralateral anatomy is established as the best available reconstruction template. However, existing intra-individual bilateral differences or a pathological, contralateral humerus may limit the applicability of the method. The aim of the study was to evaluate whether a statistical shape model (SSM) has the potential to predict accurately the pretraumatic anatomy of the humerus from the posttraumatic condition. Three-dimensional (3D) triangular surface models were extracted from the computed tomographic data of 100 paired cadaveric humeri without a pathological condition. An SSM was constructed, encoding the characteristic shape variations among the individuals. To predict the patient-specific anatomy of the proximal (or distal) part of the humerus with the SSM, we generated segments of the humerus of predefined length excluding the part to predict. The proximal and distal humeral prediction (p-HP and d-HP) errors, defined as the deviation of the predicted (bone) model from the original (bone) model, were evaluated. For comparison with the state-of-the-art technique, i.e., the contralateral registration method, we used the same segments of the humerus to evaluate whether the SSM or the contralateral anatomy yields a more accurate reconstruction template. The p-HP error (mean and standard deviation, 3.8° ± 1.9°) using 85% of the distal end of the humerus to predict the proximal humeral anatomy was significantly smaller (p = 0.001) compared with the contralateral registration method. The difference between the d-HP error (mean, 5.5° ± 2.9°), using 85% of the proximal part of the humerus to predict the distal humeral anatomy, and the contralateral registration method was not significant (p = 0.61). The restoration of the humeral length was not significantly different between the SSM and the contralateral registration method. 
SSMs accurately predict the patient-specific anatomy of the proximal and distal aspects of the humerus. The prediction errors of the SSM depend on the size of the healthy part of the humerus. The prediction of the patient-specific anatomy of the humerus is of fundamental importance for computer-assisted reconstructive surgeries.
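    The SSM prediction idea can be sketched with landmarks as vectors: build a PCA shape model from training shapes, fit its mode weights to the observed segment, and read off the missing segment (a purely synthetic toy, not the humeral data; all sizes are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(4)
n_shapes, n_points, n_modes = 200, 30, 4

# Synthetic "shapes": a mean plus a few modes of variation (landmark vectors).
mean = rng.normal(size=n_points)
modes = rng.normal(size=(n_modes, n_points))
coeffs = rng.normal(size=(n_shapes, n_modes))
shapes = mean + coeffs @ modes

# Build the statistical shape model by PCA on the training shapes.
mu = shapes.mean(axis=0)
U, s, Vt = np.linalg.svd(shapes - mu, full_matrices=False)
P = Vt[:n_modes]                      # principal modes (n_modes x n_points)

# New shape with its "distal" half missing: fit mode weights to the
# observed half, then predict the missing half from the model.
new = mean + rng.normal(size=n_modes) @ modes
obs = slice(0, 15)                    # observed landmarks
mis = slice(15, 30)                   # landmarks to predict
b, *_ = np.linalg.lstsq(P[:, obs].T, new[obs] - mu[obs], rcond=None)
pred = mu[mis] + b @ P[:, mis]
```

    In this noise-free toy the prediction is exact because the new shape lies in the model's span; with real anatomy, prediction error grows as the observed segment shrinks, matching the record's finding that accuracy depends on the size of the healthy part.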

  18. Superfast maximum-likelihood reconstruction for quantum tomography

    NASA Astrophysics Data System (ADS)

    Shang, Jiangwei; Zhang, Zhengyun; Ng, Hui Khoon

    2017-06-01

    Conventional methods for computing maximum-likelihood estimators (MLE) often converge slowly in practical situations, leading to a search for simplifying methods that rely on additional assumptions for their validity. In this work, we provide a fast and reliable algorithm for maximum-likelihood reconstruction that avoids this slow convergence. Our method utilizes the state-of-the-art convex optimization scheme, an accelerated projected-gradient method, that allows one to accommodate the quantum nature of the problem in a different way than in the standard methods. We demonstrate the power of our approach by comparing its performance with other algorithms for n-qubit state tomography. In particular, an eight-qubit situation that purportedly took weeks of computation time in 2005 can now be completed in under a minute for a single set of data, with far higher accuracy than previously possible. This refutes the common claim that MLE reconstruction is slow and reduces the need for alternative methods that often come with difficult-to-verify assumptions. In fact, recent methods assuming Gaussian statistics or relying on compressed sensing ideas are demonstrably inapplicable for the situation under consideration here. Our algorithm can be applied to general optimization problems over the quantum state space; the philosophy of projected gradients can further be utilized for optimization contexts with general constraints.
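    The projection step that lets a projected-gradient scheme respect the quantum constraints can be sketched as projection onto the density-matrix set, i.e. projecting the eigenvalues of a Hermitian iterate onto the probability simplex (a sketch of one ingredient, not the authors' full accelerated algorithm):

```python
import numpy as np

def project_simplex(v):
    """Euclidean projection of a real vector onto the probability simplex."""
    u = np.sort(v)[::-1]
    css = np.cumsum(u)
    j = np.nonzero(u + (1.0 - css) / np.arange(1, len(v) + 1) > 0)[0][-1]
    theta = (1.0 - css[j]) / (j + 1)
    return np.maximum(v + theta, 0.0)

def project_density(H):
    """Project a Hermitian matrix onto the set of density matrices
    (positive semidefinite, unit trace) by projecting its eigenvalues."""
    w, V = np.linalg.eigh(H)
    w_proj = project_simplex(w)
    return (V * w_proj) @ V.conj().T

# A Hermitian "gradient step" result with trace 1.3, pulled back to a state.
H = np.array([[0.9, 0.3 - 0.2j], [0.3 + 0.2j, 0.4]])
rho = project_density(H)
```

    Each gradient step on the log-likelihood can leave the state space; this projection restores positivity and unit trace, which is how projected-gradient methods "accommodate the quantum nature of the problem".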

  19. Bayesian statistical ionospheric tomography improved by incorporating ionosonde measurements

    NASA Astrophysics Data System (ADS)

    Norberg, Johannes; Virtanen, Ilkka I.; Roininen, Lassi; Vierinen, Juha; Orispää, Mikko; Kauristie, Kirsti; Lehtinen, Markku S.

    2016-04-01

    We validate two-dimensional ionospheric tomography reconstructions against EISCAT incoherent scatter radar measurements. Our tomography method is based on Bayesian statistical inversion with prior distribution given by its mean and covariance. We employ ionosonde measurements for the choice of the prior mean and covariance parameters and use the Gaussian Markov random fields as a sparse matrix approximation for the numerical computations. This results in a computationally efficient tomographic inversion algorithm with clear probabilistic interpretation. We demonstrate how this method works with simultaneous beacon satellite and ionosonde measurements obtained in northern Scandinavia. The performance is compared with results obtained with a zero-mean prior and with the prior mean taken from the International Reference Ionosphere 2007 model. In validating the results, we use EISCAT ultra-high-frequency incoherent scatter radar measurements as the ground truth for the ionization profile shape. We find that in comparison to the alternative prior information sources, ionosonde measurements improve the reconstruction by adding accurate information about the absolute value and the altitude distribution of electron density. With an ionosonde at continuous disposal, the presented method enhances stand-alone near-real-time ionospheric tomography for the given conditions significantly.
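    The linear-Gaussian Bayesian inversion at the core of such tomography can be sketched as the standard posterior-mean update (the prior mean and covariance below are synthetic stand-ins for the ionosonde-informed choices; sizes are arbitrary, and no sparse Markov-field approximation is used here):

```python
import numpy as np

def bayes_linear_inverse(A, y, m_prior, C_prior, noise_var):
    """Posterior for y = A x + e, e ~ N(0, noise_var*I), x ~ N(m_prior, C_prior):
    m_post = m + C A^T (A C A^T + R)^-1 (y - A m)."""
    R = noise_var * np.eye(len(y))
    K = C_prior @ A.T @ np.linalg.inv(A @ C_prior @ A.T + R)   # gain matrix
    m_post = m_prior + K @ (y - A @ m_prior)
    C_post = C_prior - K @ A @ C_prior
    return m_post, C_post

rng = np.random.default_rng(5)
n = 20
x_true = np.sin(np.linspace(0, np.pi, n))          # smooth "electron density" profile
A = rng.normal(size=(8, n))                        # a few integrated measurements
y = A @ x_true + 0.01 * rng.normal(size=8)
# Smooth squared-exponential prior covariance; zero prior mean for the sketch.
idx = np.arange(n)
C = np.exp(-((idx[:, None] - idx[None, :]) ** 2) / (2 * 4.0 ** 2))
m_post, C_post = bayes_linear_inverse(A, y, np.zeros(n), C, 1e-4)
```

    The record's point is that a well-chosen prior mean and covariance (from ionosonde profiles) enter exactly here, pinning down the altitude structure that the sparse ray geometry alone leaves ambiguous.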

  20. Position reconstruction in LUX

    NASA Astrophysics Data System (ADS)

    Akerib, D. S.; Alsum, S.; Araújo, H. M.; Bai, X.; Bailey, A. J.; Balajthy, J.; Beltrame, P.; Bernard, E. P.; Bernstein, A.; Biesiadzinski, T. P.; Boulton, E. M.; Brás, P.; Byram, D.; Cahn, S. B.; Carmona-Benitez, M. C.; Chan, C.; Currie, A.; Cutter, J. E.; Davison, T. J. R.; Dobi, A.; Druszkiewicz, E.; Edwards, B. N.; Fallon, S. R.; Fan, A.; Fiorucci, S.; Gaitskell, R. J.; Genovesi, J.; Ghag, C.; Gilchriese, M. G. D.; Hall, C. R.; Hanhardt, M.; Haselschwardt, S. J.; Hertel, S. A.; Hogan, D. P.; Horn, M.; Huang, D. Q.; Ignarra, C. M.; Jacobsen, R. G.; Ji, W.; Kamdin, K.; Kazkaz, K.; Khaitan, D.; Knoche, R.; Larsen, N. A.; Lenardo, B. G.; Lesko, K. T.; Lindote, A.; Lopes, M. I.; Manalaysay, A.; Mannino, R. L.; Marzioni, M. F.; McKinsey, D. N.; Mei, D.-M.; Mock, J.; Moongweluwan, M.; Morad, J. A.; Murphy, A. St. J.; Nehrkorn, C.; Nelson, H. N.; Neves, F.; O'Sullivan, K.; Oliver-Mallory, K. C.; Palladino, K. J.; Pease, E. K.; Rhyne, C.; Shaw, S.; Shutt, T. A.; Silva, C.; Solmaz, M.; Solovov, V. N.; Sorensen, P.; Sumner, T. J.; Szydagis, M.; Taylor, D. J.; Taylor, W. C.; Tennyson, B. P.; Terman, P. A.; Tiedt, D. R.; To, W. H.; Tripathi, M.; Tvrznikova, L.; Uvarov, S.; Velan, V.; Verbus, J. R.; Webb, R. C.; White, J. T.; Whitis, T. J.; Witherell, M. S.; Wolfs, F. L. H.; Xu, J.; Yazdani, K.; Young, S. K.; Zhang, C.

    2018-02-01

    The (x, y) position reconstruction method used in the analysis of the complete exposure of the Large Underground Xenon (LUX) experiment is presented. The algorithm is based on a statistical test that makes use of an iterative method to recover the photomultiplier tube (PMT) light response directly from the calibration data. The light response functions make use of a two-dimensional functional form to account for the photons reflected on the inner walls of the detector. To increase the resolution for small pulses, a photon counting technique was employed to describe the response of the PMTs. The reconstruction was assessed with calibration data including 83mKr (releasing a total energy of 41.5 keV) and 3H (β- with Q = 18.6 keV) decays, and a deuterium-deuterium (D-D) neutron beam (2.45 MeV). Within the detector's fiducial volume, the reconstruction has achieved an (x, y) position uncertainty of σ = 0.82 cm and σ = 0.17 cm for events of only 200 and 4,000 detected electroluminescence photons, respectively. Such signals are associated with electron recoils of energies ~0.25 keV and ~10 keV, respectively. The reconstructed position of the smallest events with a single electron emitted from the liquid surface (22 detected photons) has a horizontal (x, y) uncertainty of 2.13 cm.

  1. Zero-crossing approach to high-resolution reconstruction in frequency-domain optical-coherence tomography.

    PubMed

    Krishnan, Sunder Ram; Seelamantula, Chandra Sekhar; Bouwens, Arno; Leutenegger, Marcel; Lasser, Theo

    2012-10-01

    We address the problem of high-resolution reconstruction in frequency-domain optical-coherence tomography (FDOCT). The traditional method employed uses the inverse discrete Fourier transform, which is limited in resolution due to the Heisenberg uncertainty principle. We propose a reconstruction technique based on zero-crossing (ZC) interval analysis. The motivation for our approach lies in the observation that, for a multilayered specimen, the backscattered signal may be expressed as a sum of sinusoids, and each sinusoid manifests as a peak in the FDOCT reconstruction. The successive ZC intervals of a sinusoid exhibit high consistency, with the intervals being inversely related to the frequency of the sinusoid. The statistics of the ZC intervals are used for detecting the frequencies present in the input signal. The noise robustness of the proposed technique is improved by using a cosine-modulated filter bank for separating the input into different frequency bands, and the ZC analysis is carried out on each band separately. The design of the filter bank requires the design of a prototype, which we accomplish using a Kaiser window approach. We show that the proposed method gives good results on synthesized and experimental data. The resolution is enhanced, and noise robustness is higher compared with the standard Fourier reconstruction.
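    The core observation, that successive zero-crossings of a sinusoid are half a period apart, suggests a minimal frequency estimator (a sketch of the ZC-interval idea only, without the cosine-modulated filter-bank stage; the signal parameters are arbitrary):

```python
import numpy as np

def zc_frequency(signal, fs):
    """Estimate a sinusoid's frequency from its zero-crossing intervals:
    successive crossings are half a period apart, so f ~ fs / (2 * mean gap)."""
    signs = np.signbit(signal).astype(np.int8)
    crossings = np.nonzero(np.diff(signs))[0]    # sample index of each crossing
    gaps = np.diff(crossings)                    # samples between crossings
    return fs / (2.0 * gaps.mean())

fs = 10000.0
t = np.arange(0, 0.1, 1.0 / fs)
sig = np.sin(2 * np.pi * 440.0 * t)              # one "layer" = one sinusoid
f_hat = zc_frequency(sig, fs)
```

    The consistency of the gaps is what makes the statistic usable: for a multilayer specimen the signal is a sum of sinusoids, which is why the paper first separates frequency bands before applying the ZC analysis per band.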

  2. Alternative standardization approaches to improving streamflow reconstructions with ring-width indices of riparian trees

    USGS Publications Warehouse

    Meko, David M.; Friedman, Jonathan M.; Touchan, Ramzi; Edmondson, Jesse R.; Griffin, Eleanor R.; Scott, Julian A.

    2015-01-01

    Old, multi-aged populations of riparian trees provide an opportunity to improve reconstructions of streamflow. Here, ring widths of 394 plains cottonwood (Populus deltoides, ssp. monilifera) trees in the North Unit of Theodore Roosevelt National Park, North Dakota, are used to reconstruct streamflow along the Little Missouri River (LMR), North Dakota, US. Different versions of the cottonwood chronology are developed by (1) age-curve standardization (ACS), using age-stratified samples and a single estimated curve of ring width against estimated ring age, and (2) time-curve standardization (TCS), using a subset of longer ring-width series individually detrended with cubic smoothing splines of width against year. The cottonwood chronologies are combined with the first principal component of four upland conifer chronologies developed by conventional methods to investigate the possible value of riparian tree-ring chronologies for streamflow reconstruction of the LMR. Regression modeling indicates that the statistical signal for flow is stronger in the riparian cottonwood than in the upland chronologies. The flow signal from cottonwood complements rather than repeats the signal from upland conifers and is especially strong in young trees (e.g. 5–35 years). Reconstructions using a combination of cottonwoods and upland conifers are found to explain more than 50% of the variance of LMR flow over a 1935–1990 calibration period and to yield a reconstruction of flow back to 1658. The low-frequency component of reconstructed flow is sensitive to the choice of standardization method for the cottonwood. In contrast to the TCS version, the ACS reconstruction features persistent low flows in the 19th century. Results demonstrate the value of riparian cottonwood to streamflow reconstruction and suggest that more studies are needed to exploit the low-frequency streamflow signal in densely sampled, age-stratified stands of riparian trees.
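    Ring-width standardization, fitting an expected growth curve against ring age and dividing it out, can be sketched as follows (a toy illustration of the general idea; the curve form, noise model, and parameters are assumptions, not the ACS or TCS procedures of the paper):

```python
import numpy as np

rng = np.random.default_rng(6)
age = np.arange(1, 101)
# Synthetic ring widths: a biological age trend times a climate signal.
trend = 2.0 * np.exp(-age / 40.0) + 0.5
climate = 1.0 + 0.1 * rng.normal(size=age.size)
width = trend * climate

# Standardization: fit a smooth curve of width against ring age
# (a quadratic in log width here) and divide it out, leaving a
# dimensionless ring-width index that carries the climate signal.
coef = np.polyfit(age, np.log(width), deg=2)
fit = np.exp(np.polyval(coef, age))
index = width / fit
```

    Dividing by the fitted curve removes the age-related growth decline so that year-to-year index variation tracks climate; the ACS-versus-TCS contrast in the record comes down to whether that curve is estimated against ring age or against calendar year.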

  3. Evaluating the effect of a third-party implementation of resolution recovery on the quality of SPECT bone scan imaging using visual grading regression.

    PubMed

    Hay, Peter D; Smith, Julie; O'Connor, Richard A

    2016-02-01

    The aim of this study was to evaluate the benefit to SPECT bone scan image quality of applying resolution recovery (RR) during image reconstruction using software provided by a third-party supplier. Bone SPECT data from 90 clinical studies were reconstructed retrospectively using software supplied independently of the gamma camera manufacturer. The current clinical datasets contain 120×10 s projections and are reconstructed using an iterative method with a Butterworth postfilter. Five further reconstructions were created with the following characteristics: 10 s projections with a Butterworth postfilter (to assess intraobserver variation); 10 s projections with a Gaussian postfilter with and without RR; and 5 s projections with a Gaussian postfilter with and without RR. Two expert observers were asked to rate image quality on a five-point scale relative to our current clinical reconstruction. Datasets were anonymized and presented in random order. The benefits of RR on image scores were evaluated using ordinal logistic regression (visual grading regression). The application of RR during reconstruction increased the probability of both observers scoring image quality as better than the current clinical reconstruction, even where the dataset contained half the normal counts. Type of reconstruction and observer were both statistically significant variables in the ordinal logistic regression model. Visual grading regression was found to be a useful method for validating the local introduction of technological developments in nuclear medicine imaging. RR, as implemented by the independent software supplier, improved bone SPECT image quality when applied during image reconstruction. In the majority of clinical cases, acquisition times for bone SPECT intended for localization purposes can safely be halved (from 10 s to 5 s projections) when RR is applied.
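    Visual grading regression is ordinal logistic regression on graded image-quality scores. A minimal proportional-odds fit, sketched from scratch with simulated data (the covariate, effect size, and all names are illustrative, not the study's):

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import expit

def fit_vgr(scores, X, n_cat=5):
    """Proportional-odds model: P(score <= k | x) = expit(theta_k - x @ beta),
    fitted by maximum likelihood (BFGS with numerical gradients)."""
    n_feat = X.shape[1]

    def unpack(p):
        beta = p[:n_feat]
        # first cutpoint is free; later ones are forced upward via exp() increments
        theta = np.cumsum(np.concatenate([p[n_feat:n_feat + 1],
                                          np.exp(p[n_feat + 1:])]))
        return beta, theta

    def nll(p):
        beta, theta = unpack(p)
        eta = X @ beta
        # cumulative category probabilities, padded with 0 and 1
        cum = np.concatenate([np.zeros((len(eta), 1)),
                              expit(theta[None, :] - eta[:, None]),
                              np.ones((len(eta), 1))], axis=1)
        probs = np.diff(cum, axis=1)[np.arange(len(scores)), scores]
        return -np.sum(np.log(np.clip(probs, 1e-12, None)))

    p0 = np.zeros(n_feat + n_cat - 1)
    return unpack(minimize(nll, p0, method="BFGS").x)

# simulated study: a 0/1 indicator for "RR applied" shifts scores upward
rng = np.random.default_rng(1)
X = rng.integers(0, 2, size=(600, 1)).astype(float)
true_theta = np.array([-2.0, -1.0, 0.0, 1.0])
latent = 1.5 * X[:, 0] + rng.logistic(size=600)
scores = np.sum(latent[:, None] > true_theta[None, :], axis=1)  # categories 0..4

beta_hat, theta_hat = fit_vgr(scores, X)
```

    A positive fitted coefficient means the covariate (here, applying RR) raises the odds of a better score, which is how "statistically significant variables" would be read off the fitted model. In practice one would use an established implementation rather than hand-rolling the likelihood.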

  4. Orbital Floor Reconstruction with Free Flaps after Maxillectomy

    PubMed Central

    Sampathirao, Leela Mohan C. S. R.; Thankappan, Krishnakumar; Duraisamy, Sriprakash; Hedne, Naveen; Sharma, Mohit; Mathew, Jimmy; Iyer, Subramania

    2013-01-01

    Background The purpose of this study is to evaluate the outcome of orbital floor reconstruction with free flaps after maxillectomy. Methods This was a retrospective analysis of 34 consecutive patients who underwent maxillectomy with orbital floor removal for malignancies, reconstructed with free flaps. A cross-sectional survey to assess the functional and esthetic outcome was done in 28 patients who were alive and disease-free, with a minimum of 6 months of follow-up. Results Twenty-six patients had bony reconstruction, and eight had soft tissue reconstruction. Free fibula flap was the commonest flap used (n = 14). Visual acuity was normal in 86%. Eye movements were normal in 92%. Nine patients had an abnormal globe position. Esthetic satisfaction was good in 19 patients (68%). Though there was no statistically significant difference in outcome of visual acuity, eye movement, and patient esthetic satisfaction between patients with bony and soft tissue reconstruction, more patients without bony reconstruction had abnormal globe position (p = 0.040). Conclusion Free tissue transfer has improved the results of orbital floor reconstruction after total maxillectomy, preserving the eye. Good functional and esthetic outcome was achieved. Though our study favors a bony orbital reconstruction, a larger study with adequate power and equal distribution of patients among the groups would be needed to determine this. Free fibula flap remains the commonest choice when a bony reconstruction is contemplated. PMID:24436744

  5. Orbital floor reconstruction with free flaps after maxillectomy.

    PubMed

    Sampathirao, Leela Mohan C S R; Thankappan, Krishnakumar; Duraisamy, Sriprakash; Hedne, Naveen; Sharma, Mohit; Mathew, Jimmy; Iyer, Subramania

    2013-06-01

    Background The purpose of this study is to evaluate the outcome of orbital floor reconstruction with free flaps after maxillectomy. Methods This was a retrospective analysis of 34 consecutive patients who underwent maxillectomy with orbital floor removal for malignancies, reconstructed with free flaps. A cross-sectional survey to assess the functional and esthetic outcome was done in 28 patients who were alive and disease-free, with a minimum of 6 months of follow-up. Results Twenty-six patients had bony reconstruction, and eight had soft tissue reconstruction. Free fibula flap was the commonest flap used (n = 14). Visual acuity was normal in 86%. Eye movements were normal in 92%. Nine patients had an abnormal globe position. Esthetic satisfaction was good in 19 patients (68%). Though there was no statistically significant difference in outcome of visual acuity, eye movement, and patient esthetic satisfaction between patients with bony and soft tissue reconstruction, more patients without bony reconstruction had abnormal globe position (p = 0.040). Conclusion Free tissue transfer has improved the results of orbital floor reconstruction after total maxillectomy, preserving the eye. Good functional and esthetic outcome was achieved. Though our study favors a bony orbital reconstruction, a larger study with adequate power and equal distribution of patients among the groups would be needed to determine this. Free fibula flap remains the commonest choice when a bony reconstruction is contemplated.

  6. Combined iterative reconstruction and image-domain decomposition for dual energy CT using total-variation regularization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dong, Xue; Niu, Tianye; Zhu, Lei, E-mail: leizhu@gatech.edu

    2014-05-15

    Purpose: Dual-energy CT (DECT) is being increasingly used for its capability of material decomposition and energy-selective imaging. A generic problem of DECT, however, is that the decomposition process is unstable in the sense that the relative magnitude of decomposed signals is reduced due to signal cancellation, while the image noise accumulates from the two CT images of independent scans. Direct image decomposition therefore leads to severe degradation of the signal-to-noise ratio of the resultant images. Existing noise suppression techniques are typically implemented in DECT with the reconstruction and decomposition procedures performed independently, which does not exploit the statistical properties of the decomposed images during reconstruction for noise reduction. In this work, the authors propose an iterative approach that combines the reconstruction and signal decomposition procedures to minimize the DECT image noise without noticeable loss of resolution. Methods: The proposed algorithm is formulated as an optimization problem, which balances the data fidelity and total variation of decomposed images in one framework, and the decomposition step is carried out iteratively together with reconstruction. The noise in the CT images from the proposed algorithm becomes well correlated even though the noise of the raw projections is independent between the two CT scans. Due to this feature, the proposed algorithm avoids noise accumulation during the decomposition process. The authors evaluate the method's performance on noise suppression and spatial resolution using phantom studies and compare the algorithm with conventional denoising approaches as well as combined iterative reconstruction methods with different forms of regularization. Results: On the Catphan 600 phantom, the proposed method outperforms the existing denoising methods on preserving spatial resolution at the same level of noise suppression, i.e., a reduction of noise standard deviation by one order of magnitude. This improvement is mainly attributed to the high noise correlation in the CT images reconstructed by the proposed algorithm. Iterative reconstruction using different regularization, including quadratic or q-generalized Gaussian Markov random field regularization, achieves similar noise suppression from high noise correlation. However, the proposed TV regularization obtains better edge-preserving performance. Studies of electron density measurement also show that the method reduces the average estimation error from 9.5% to 7.1%. On the anthropomorphic head phantom, the proposed method suppresses the noise standard deviation of the decomposed images by a factor of ∼14 without blurring the fine structures in the sinus area. Conclusions: The authors propose a practical method for DECT imaging reconstruction, which combines image reconstruction and material decomposition into one optimization framework. Compared to existing approaches, the method achieves superior performance on DECT imaging with respect to decomposition accuracy, noise reduction, and spatial resolution.
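    The total-variation term in the optimization above penalizes image gradients while preserving edges. A minimal smoothed-TV gradient-descent denoiser illustrates that behavior in isolation; it is not the authors' joint reconstruction-decomposition algorithm, and the phantom, step size, and weight are illustrative.

```python
import numpy as np

def tv_grad(u, eps=1e-2):
    """Gradient of the smoothed isotropic TV seminorm sum(sqrt(ux^2+uy^2+eps^2)),
    computed as the adjoint of forward differences applied to the normalized
    gradient field (periodic boundaries via roll)."""
    ux = np.roll(u, -1, axis=0) - u
    uy = np.roll(u, -1, axis=1) - u
    mag = np.sqrt(ux ** 2 + uy ** 2 + eps ** 2)
    px, py = ux / mag, uy / mag
    return (np.roll(px, 1, axis=0) - px) + (np.roll(py, 1, axis=1) - py)

def tv_denoise(f, lam=0.12, step=0.1, iters=300):
    """Minimize 0.5*||u - f||^2 + lam*TV_eps(u) by plain gradient descent."""
    u = f.copy()
    for _ in range(iters):
        u -= step * ((u - f) + lam * tv_grad(u))
    return u

rng = np.random.default_rng(2)
clean = np.zeros((32, 32))
clean[8:24, 8:24] = 1.0                      # piecewise-constant phantom
noisy = clean + 0.2 * rng.standard_normal(clean.shape)
denoised = tv_denoise(noisy)
```

    The edge-preserving property comes from normalizing the gradient field: smoothing pressure is roughly constant regardless of edge height, so large jumps survive while small oscillations are flattened.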

  7. Optimizing Aesthetic Outcomes in Delayed Breast Reconstruction

    PubMed Central

    2017-01-01

    Background: The need to restore both the missing breast volume and breast surface area makes achieving excellent aesthetic outcomes in delayed breast reconstruction especially challenging. Autologous breast reconstruction can be used to achieve both goals. The aim of this study was to identify surgical maneuvers that can optimize aesthetic outcomes in delayed breast reconstruction. Methods: This is a retrospective review of operative and clinical records of all patients who underwent unilateral or bilateral delayed breast reconstruction with autologous tissue between April 2014 and January 2017. Three groups of delayed breast reconstruction patients were identified based on patient characteristics. Results: A total of 26 flaps were successfully performed in 17 patients. Key surgical maneuvers for achieving aesthetically optimal results were identified. A statistically significant difference in volume requirements was identified in cases where a delayed breast reconstruction and a contralateral immediate breast reconstruction were performed simultaneously. Conclusions: Optimal aesthetic results can be achieved with: (1) restoration of the breast skin envelope with tissue expansion when possible, (2) optimal positioning of a small skin paddle to be later incorporated entirely into a nipple-areola reconstruction when adequate breast skin surface area is present, (3) limiting the reconstructed breast mound to 2 skin tones when large-area skin resurfacing is required, (4) increasing breast volume by deepithelializing, not discarding, the inferior mastectomy flap skin, (5) eccentric division of abdominal flaps when immediate and delayed bilateral breast reconstructions are performed simultaneously, and (6) performing second-stage breast reconstruction revisions and fat grafting. PMID:28894666

  8. Real-space X-ray tomographic reconstruction of randomly oriented objects with sparse data frames.

    PubMed

    Ayyer, Kartik; Philipp, Hugh T; Tate, Mark W; Elser, Veit; Gruner, Sol M

    2014-02-10

    Schemes for X-ray imaging of single protein molecules using new X-ray sources, such as X-ray free-electron lasers (XFELs), require processing many frames of data that are obtained by taking temporally short snapshots of identical molecules, each with a random and unknown orientation. Due to the small size of the molecules and the short exposure times, average signal levels of much less than 1 photon/pixel/frame are expected, much too low to be processed using standard methods. One approach to processing the data is to use the statistical methods developed in the EMC algorithm (Loh & Elser, Phys. Rev. E, 2009), which processes the data set as a whole. In this paper we apply this method to a real-space tomographic reconstruction using sparse frames of data (below 10^-2 photons/pixel/frame) obtained by performing X-ray transmission measurements of a low-contrast, randomly oriented object. This extends the work by Philipp et al. (Optics Express, 2012) to three dimensions and is one step closer to the single-molecule reconstruction problem.
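    The core of the EMC approach is expectation-maximization over the unknown orientations: every frame contributes to every candidate orientation in proportion to its posterior probability. A toy 1D analogue with cyclic shifts and Poisson frames, sketched below, runs at a much higher signal level than the paper's sub-10^-2 photons/pixel regime; the pattern and all parameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(3)
P = 16                                     # pixels per 1D "frame"
grid = np.arange(P)
truth = 0.2 + 2.0 * np.exp(-0.5 * ((grid - 5.0) / 1.5) ** 2)

# each frame is the true pattern at an unknown cyclic shift, with Poisson noise
shifts = rng.integers(0, P, size=3000)
frames = rng.poisson(np.stack([np.roll(truth, s) for s in shifts]))

# all back-shifted versions of the frames, precomputed: shape (P, n_frames, P)
back = np.stack([np.roll(frames, -s, axis=1) for s in range(P)])

model = rng.uniform(0.5, 1.5, P)           # random starting model
for _ in range(60):
    # E-step: Poisson log-likelihood of each frame under each candidate shift
    # (the -sum(model) term is the same for every cyclic shift, so it is dropped)
    rotations = np.stack([np.roll(model, s) for s in range(P)])
    log_r = frames @ np.log(np.clip(rotations, 1e-12, None)).T
    log_r -= log_r.max(axis=1, keepdims=True)
    post = np.exp(log_r)
    post /= post.sum(axis=1, keepdims=True)
    # M-step: posterior-weighted average of the back-shifted frames
    model = np.einsum('ns,snp->p', post, back) / frames.shape[0]
```

    The recovered model matches the truth only up to an overall shift, mirroring the global-orientation ambiguity of the real reconstruction problem.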

  9. Deep learning for low-dose CT

    NASA Astrophysics Data System (ADS)

    Chen, Hu; Zhang, Yi; Zhou, Jiliu; Wang, Ge

    2017-09-01

    Given the potential risk of X-ray radiation to the patient, low-dose CT has attracted considerable interest in the medical imaging field. Currently, the mainstream low-dose CT methods include vendor-specific sinogram-domain filtration and iterative reconstruction algorithms, but these need access to raw data whose formats are not transparent to most users. Due to the difficulty of modeling the statistical characteristics in the image domain, the existing methods for directly processing reconstructed images cannot eliminate image noise very well while keeping structural details. Inspired by the idea of deep learning, here we combine the autoencoder, deconvolution network, and shortcut connections into a residual encoder-decoder convolutional neural network (RED-CNN) for low-dose CT imaging. After patch-based training, the proposed RED-CNN achieves competitive performance relative to state-of-the-art methods. In particular, our method has been favorably evaluated in terms of noise suppression and structural preservation.

  10. SCENERY: a web application for (causal) network reconstruction from cytometry data.

    PubMed

    Papoutsoglou, Georgios; Athineou, Giorgos; Lagani, Vincenzo; Xanthopoulos, Iordanis; Schmidt, Angelika; Éliás, Szabolcs; Tegnér, Jesper; Tsamardinos, Ioannis

    2017-07-03

    Flow and mass cytometry technologies can probe proteins as biological markers in thousands of individual cells simultaneously, providing unprecedented opportunities for reconstructing networks of protein interactions through machine learning algorithms. The network reconstruction (NR) problem has been well studied by the machine learning community. However, the potential of available methods remains largely unknown to the cytometry community, mainly due to their intrinsic complexity and the lack of comprehensive, powerful and easy-to-use NR software implementations specific to cytometry data. To bridge this gap, we present the Single CEll NEtwork Reconstruction sYstem (SCENERY), a web server featuring several standard and advanced cytometry data analysis methods coupled with NR algorithms in a user-friendly, online environment. In SCENERY, users may upload their data and set their own study design. The server offers several data analysis options categorized into three classes of methods: data (pre)processing, statistical analysis and NR. The server also provides interactive visualization and download of results as ready-to-publish images or multimedia reports. Its core is modular and based on the widely used and robust R platform, allowing power users to extend its functionalities by submitting their own NR methods. SCENERY is available at scenery.csd.uoc.gr or http://mensxmachina.org/en/software/. © The Author(s) 2017. Published by Oxford University Press on behalf of Nucleic Acids Research.

  11. Accelerating separable footprint (SF) forward and back projection on GPU

    NASA Astrophysics Data System (ADS)

    Xie, Xiaobin; McGaffin, Madison G.; Long, Yong; Fessler, Jeffrey A.; Wen, Minhua; Lin, James

    2017-03-01

    Statistical image reconstruction (SIR) methods for X-ray CT can improve image quality and reduce radiation dosages relative to conventional reconstruction methods, such as filtered back projection (FBP). However, SIR methods require much longer computation time. The separable footprint (SF) forward and back projection technique simplifies the calculation of intersecting volumes of image voxels and finite-size beams in a way that is both accurate and efficient for parallel implementation. We propose a new method to accelerate the SF forward and back projection on GPU with NVIDIA's CUDA environment. For the forward projection, we parallelize over all detector cells. For the back projection, we parallelize over all 3D image voxels. The simulation results show that the proposed method is faster than the acceleration method of the SF projectors proposed by Wu and Fessler [13]. We further accelerate the proposed method using multiple GPUs. The results show that the computation time is reduced approximately in proportion to the number of GPUs.
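    The "separable" in separable footprint means a voxel's 2D footprint on the detector is approximated as the product of two 1D functions. A toy illustration with triangular profiles (the actual SF-TR/SF-TT amplitude and width terms are more involved; coordinates and widths here are made up):

```python
import numpy as np

def tri(t, half_width):
    """1D triangular footprint profile, zero outside +/- half_width."""
    return np.clip(1.0 - np.abs(t) / half_width, 0.0, None)

# detector cell coordinates for one voxel: transaxial s, axial t
s = np.linspace(-2, 2, 9)
t = np.linspace(-2, 2, 7)

f_s = tri(s, 1.2)                    # 1D transaxial footprint
f_t = tri(t, 0.8)                    # 1D axial footprint
footprint_2d = np.outer(f_s, f_t)    # separable 2D footprint
```

    Separability is what makes the method cheap: 9 + 7 one-dimensional evaluations replace 9 × 7 two-dimensional ones, and the two 1D loops map naturally onto the per-detector-cell GPU parallelization described above.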

  12. Evaluation of dynamic row-action maximum likelihood algorithm reconstruction for quantitative 15O brain PET.

    PubMed

    Ibaraki, Masanobu; Sato, Kaoru; Mizuta, Tetsuro; Kitamura, Keishi; Miura, Shuichi; Sugawara, Shigeki; Shinohara, Yuki; Kinoshita, Toshibumi

    2009-09-01

    A modified version of the row-action maximum likelihood algorithm (RAMLA) using a 'subset-dependent' relaxation parameter for noise suppression, called dynamic RAMLA (DRAMA), has been proposed. The aim of this study was to assess the capability of DRAMA reconstruction for quantitative ¹⁵O brain positron emission tomography (PET). Seventeen healthy volunteers were studied using a 3D PET scanner. The PET study included three sequential PET scans for C¹⁵O, ¹⁵O₂ and H₂¹⁵O. First, the number of main iterations (N_it) in DRAMA was optimized in relation to image convergence and statistical image noise. To estimate the statistical variance of reconstructed images on a pixel-by-pixel basis, a sinogram bootstrap method was applied using list-mode PET data. Once the optimal N_it was determined, statistical image noise and quantitative parameters, i.e., cerebral blood flow (CBF), cerebral blood volume (CBV), cerebral metabolic rate of oxygen (CMRO₂) and oxygen extraction fraction (OEF), were compared between DRAMA and conventional FBP. DRAMA images were post-filtered so that their spatial resolution matched that of FBP images with a 6-mm FWHM Gaussian filter. Based on the count recovery data, N_it = 3 was determined to be optimal for ¹⁵O PET data. The sinogram bootstrap analysis revealed that DRAMA reconstruction resulted in less statistical noise, especially in low-activity regions, compared to FBP. Agreement of quantitative values between FBP and DRAMA was excellent. For DRAMA images, average gray matter values of CBF, CBV, CMRO₂ and OEF were 46.1 ± 4.5 mL/100 mL/min, 3.35 ± 0.40 mL/100 mL, 3.42 ± 0.35 mL/100 mL/min and 42.1 ± 3.8%, respectively. These values were comparable to the corresponding values for FBP images: 46.6 ± 4.6 mL/100 mL/min, 3.34 ± 0.39 mL/100 mL, 3.48 ± 0.34 mL/100 mL/min and 42.4 ± 3.8%, respectively. DRAMA reconstruction is applicable to quantitative ¹⁵O PET studies and is superior to conventional FBP in terms of image quality.
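    The sinogram bootstrap used above resamples list-mode events with replacement, re-bins each resample, and takes the spread across replicates as a noise estimate. The sketch below stops at the sinogram stage (in the study each replicate would also be reconstructed); the bin count, event count, and distribution are all illustrative.

```python
import numpy as np

rng = np.random.default_rng(4)
n_bins, n_events = 32, 20000

# toy list-mode data: each event records which sinogram bin it hit
bin_probs = rng.dirichlet(np.full(n_bins, 5.0))
events = rng.choice(n_bins, size=n_events, p=bin_probs)

# bootstrap: resample the event list with replacement and re-bin, many times
n_boot = 200
replicates = np.empty((n_boot, n_bins))
for b in range(n_boot):
    resampled = rng.choice(events, size=n_events, replace=True)
    replicates[b] = np.bincount(resampled, minlength=n_bins)

boot_std = replicates.std(axis=0)   # bootstrap noise estimate per bin
```

    For low-occupancy bins the bootstrap spread approaches the Poisson expectation sqrt(mean), which is why the technique gives usable pixel-wise variance estimates without repeating the scan.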

  13. Low dose dynamic CT myocardial perfusion imaging using a statistical iterative reconstruction method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tao, Yinghua; Chen, Guang-Hong; Hacker, Timothy A.

    Purpose: Dynamic CT myocardial perfusion imaging has the potential to provide both functional and anatomical information regarding coronary artery stenosis. However, radiation dose can be potentially high due to repeated scanning of the same region. The purpose of this study is to investigate the use of statistical iterative reconstruction to improve parametric maps of myocardial perfusion derived from a low tube current dynamic CT acquisition. Methods: Four pigs underwent high (500 mA) and low (25 mA) dose dynamic CT myocardial perfusion scans with and without coronary occlusion. To delineate the affected myocardial territory, an N-13 ammonia PET perfusion scan was performed for each animal in each occlusion state. Filtered backprojection (FBP) reconstruction was first applied to all CT data sets. Then, a statistical iterative reconstruction (SIR) method was applied to data sets acquired at low dose. Image voxel noise was matched between the low dose SIR and high dose FBP reconstructions. CT perfusion maps were compared among the low dose FBP, low dose SIR and high dose FBP reconstructions. Numerical simulations of a dynamic CT scan at high and low dose (20:1 ratio) were performed to quantitatively evaluate SIR and FBP performance in terms of flow map accuracy, precision, dose efficiency, and spatial resolution. Results: For in vivo studies, the 500 mA FBP maps gave −88.4%, −96.0%, −76.7%, and −65.8% flow change in the occluded anterior region compared to the open-coronary scans (four animals). The percent changes in the 25 mA SIR maps were in good agreement, measuring −94.7%, −81.6%, −84.0%, and −72.2%. The 25 mA FBP maps gave unreliable flow measurements due to streaks caused by photon starvation (percent changes of +137.4%, +71.0%, −11.8%, and −3.5%). Agreement between 25 mA SIR and 500 mA FBP global flow was −9.7%, 8.8%, −3.1%, and 26.4%. The average variability of flow measurements in a nonoccluded region was 16.3%, 24.1%, and 937.9% for the 500 mA FBP, 25 mA SIR, and 25 mA FBP, respectively. In numerical simulations, SIR mitigated streak artifacts in the low dose data and yielded flow maps with mean error <7% and standard deviation <9% of mean, for 30×30 pixel ROIs (12.9 × 12.9 mm²). In comparison, low dose FBP flow errors were −38% to +258%, and standard deviation was 6%–93%. Additionally, low dose SIR achieved a 4.6 times improvement in flow map CNR² per unit input dose compared to low dose FBP. Conclusions: SIR reconstruction can reduce image noise and mitigate streaking artifacts caused by photon starvation in dynamic CT myocardial perfusion data sets acquired at low dose (low tube current), and improve perfusion map quality in comparison to FBP reconstruction at the same dose.

  14. What can the annual 10Be solar activity reconstructions tell us about historic space weather?

    NASA Astrophysics Data System (ADS)

    Barnard, Luke; McCracken, Ken G.; Owens, Mat J.; Lockwood, Mike

    2018-04-01

    Context: Cosmogenic isotopes provide useful estimates of past solar magnetic activity, constraining past space climate with reasonable uncertainty. Much less is known about past space weather conditions. Recent advances in the analysis of 10Be by McCracken & Beer (2015, Sol Phys 290: 3051-3069) (MB15) suggest that annually resolved 10Be can be significantly affected by solar energetic particle (SEP) fluxes. This poses a problem, and presents an opportunity, as the accurate quantification of past solar magnetic activity requires the SEP effects to be determined and isolated, whilst doing so might provide a valuable record of past SEP fluxes. Aims: We compare the MB15 reconstruction of the heliospheric magnetic field (HMF) with two independent estimates of the HMF derived from sunspot records and geomagnetic variability. We aim to quantify the differences between the HMF reconstructions, and speculate on the origin of these differences. We test whether the differences between the reconstructions appear to depend on known significant space weather events. Methods: We analyse the distributions of the differences between the HMF reconstructions. We consider how the differences vary as a function of solar cycle phase, and, using a Kolmogorov-Smirnov test, we compare the distributions under the two conditions of whether or not large space weather events were known to have occurred. Results: We find that the MB15 reconstructions are generally marginally smaller in magnitude than the sunspot and geomagnetic HMF reconstructions. This bias varies as a function of solar cycle phase, and is largest in the declining phase of the solar cycle. We find that MB15's excision of the years with very large ground level enhancements (GLEs) improves the agreement of the 10Be HMF estimate with the sunspot and geomagnetic reconstructions. We find no statistical evidence that GLEs in general affect the MB15 reconstruction, but this analysis is limited by having too few samples. We do find evidence that the MB15 reconstructions appear statistically different in years with great geomagnetic storms.
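    The Kolmogorov-Smirnov comparison described above is a two-sample test on the distributions of reconstruction differences with and without an event. A minimal sketch with entirely synthetic numbers (the bias and sample sizes are assumptions for illustration, not the study's values):

```python
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(5)
# hypothetical HMF reconstruction differences (nT): quiet years vs event years
quiet = rng.normal(0.0, 0.5, size=150)
event = rng.normal(1.0, 0.5, size=40)       # assumed bias in event years

stat_same = ks_2samp(quiet, rng.normal(0.0, 0.5, size=40)).statistic
stat_event = ks_2samp(quiet, event).statistic
```

    The KS statistic is the maximum gap between the two empirical CDFs; a systematic offset in event years shows up as a larger statistic (and smaller p-value) than a comparison of two quiet samples. The paper's caveat about "too few samples" is exactly the regime where the test loses power.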

  15. Estimation of Noise Properties for TV-regularized Image Reconstruction in Computed Tomography

    PubMed Central

    Sánchez, Adrian A.

    2016-01-01

    A method for predicting the image covariance resulting from total-variation-penalized iterative image reconstruction (TV-penalized IIR) is presented and demonstrated in a variety of contexts. The method is validated against the sample covariance from statistical noise realizations for a small image using a variety of comparison metrics. Potential applications for the covariance approximation include investigation of image properties such as object- and signal-dependence of noise, and noise stationarity. These applications are demonstrated, along with the construction of image pixel variance maps for two-dimensional 128 × 128 pixel images. Methods for extending the proposed covariance approximation to larger images and improving computational efficiency are discussed. Future work will apply the developed methodology to the construction of task-based image quality metrics such as the Hotelling observer detectability for TV-based IIR. PMID:26308968
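    The validation step described above, comparing a predicted covariance against the sample covariance over statistical noise realizations, can be sketched with a linear surrogate for the reconstruction (TV-penalized IIR itself is nonlinear, which is why the paper needs an approximation; here a circulant smoothing operator stands in, so the predicted covariance A Σ Aᵀ is exact):

```python
import numpy as np

rng = np.random.default_rng(6)
n_pix, n_real, sigma = 16, 5000, 1.0

# toy linear "reconstruction": circulant [0.25, 0.5, 0.25] smoothing operator A
kernel = np.array([0.25, 0.5, 0.25])
A = np.zeros((n_pix, n_pix))
for i in range(n_pix):
    for j, w in zip((i - 1, i, i + 1), kernel):
        A[i, j % n_pix] += w

# noise realizations of the data, pushed through the reconstruction operator
data = sigma * rng.standard_normal((n_real, n_pix))
recon = data @ A.T

sample_cov = np.cov(recon, rowvar=False)
predicted_cov = sigma ** 2 * A @ A.T       # exact for a linear operator
variance_map = np.diag(sample_cov)         # the per-pixel variance "image"
```

    The same comparison metrics (element-wise agreement, variance maps) carry over when the prediction comes from a linearized model of the penalized reconstruction rather than a closed form.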

  16. Estimation of noise properties for TV-regularized image reconstruction in computed tomography.

    PubMed

    Sánchez, Adrian A

    2015-09-21

    A method for predicting the image covariance resulting from total-variation-penalized iterative image reconstruction (TV-penalized IIR) is presented and demonstrated in a variety of contexts. The method is validated against the sample covariance from statistical noise realizations for a small image using a variety of comparison metrics. Potential applications for the covariance approximation include investigation of image properties such as object- and signal-dependence of noise, and noise stationarity. These applications are demonstrated, along with the construction of image pixel variance maps for two-dimensional 128 × 128 pixel images. Methods for extending the proposed covariance approximation to larger images and improving computational efficiency are discussed. Future work will apply the developed methodology to the construction of task-based image quality metrics such as the Hotelling observer detectability for TV-based IIR.

  17. Estimation of noise properties for TV-regularized image reconstruction in computed tomography

    NASA Astrophysics Data System (ADS)

    Sánchez, Adrian A.

    2015-09-01

    A method for predicting the image covariance resulting from total-variation-penalized iterative image reconstruction (TV-penalized IIR) is presented and demonstrated in a variety of contexts. The method is validated against the sample covariance from statistical noise realizations for a small image using a variety of comparison metrics. Potential applications for the covariance approximation include investigation of image properties such as object- and signal-dependence of noise, and noise stationarity. These applications are demonstrated, along with the construction of image pixel variance maps for two-dimensional 128 × 128 pixel images. Methods for extending the proposed covariance approximation to larger images and improving computational efficiency are discussed. Future work will apply the developed methodology to the construction of task-based image quality metrics such as the Hotelling observer detectability for TV-based IIR.

  18. SU-E-I-20: Dead Time Count Loss Compensation in SPECT/CT: Projection Versus Global Correction

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Siman, W; Kappadath, S

    Purpose: To compare projection-based versus global correction to compensate for deadtime count loss in SPECT/CT images. Methods: SPECT/CT images of an IEC phantom (2.3 GBq 99mTc) with ∼10% deadtime loss, containing the 37 mm (uptake 3), 28 and 22 mm (uptake 6) spheres, were acquired using a 2-detector SPECT/CT system with 64 projections/detector and 15 s/projection. The deadtime Ti and the true count rate Ni at each projection i were calculated using the monitor-source method. Deadtime-corrected SPECT images were reconstructed twice: (1) from projections individually corrected for deadtime losses; and (2) from the original projections with losses, followed by correcting the reconstructed SPECT images with a scaling factor equal to the inverse of the average fractional loss over 5 projections/detector. For both cases, the SPECT images were reconstructed using OSEM with attenuation and scatter corrections. The two SPECT datasets were assessed by comparing line profiles in the xy-plane and along the z-axis, evaluating count recoveries, and comparing ROI statistics. Higher deadtime losses (up to 50%) were also simulated by multiplying each individually corrected projection i by exp(−a·Ni·Ti), where a is a scalar. Additionally, deadtime corrections in phantoms with different geometries and deadtime losses were explored. The same two correction methods were carried out for all these data sets. Results: Averaging the deadtime losses over 5 projections/detector suffices to recover >99% of the lost counts in most clinical cases. The line profiles (xy-plane and z-axis) and the statistics in the ROIs drawn in the SPECT images corrected using both methods agreed within the statistical noise. The count-loss recoveries of the two methods also agree to better than 99%. Conclusion: The projection-based and global corrections yield visually indistinguishable SPECT images. The global correction, based on sparse sampling of projection losses, allows accurate SPECT deadtime loss correction while keeping the study duration reasonable.
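    The two strategies can be contrasted in a small deterministic example: per-projection correction inverts the loss model projection by projection, while global correction scales everything by the inverse of an average fractional loss estimated from a few sampled projections. A non-paralyzable deadtime model and illustrative numbers are assumed (the abstract does not specify the model).

```python
import numpy as np

tau = 2e-6                                   # deadtime per event (s), illustrative
true_rate = np.linspace(2e4, 8e4, 64)        # true rate at each of 64 projections
observed = true_rate / (1.0 + true_rate * tau)   # non-paralyzable losses (~4-14%)

# (1) projection-based: invert the loss model at every projection
proj_corrected = observed / (1.0 - observed * tau)

# (2) global: average fractional loss from 5 sampled projections
# (for this model the fractional loss per projection is simply observed * tau)
frac_loss = np.mean(observed[::13] * tau)
global_corrected = observed / (1.0 - frac_loss)
```

    The projection-based correction is exact here; the global one recovers total counts to within a few percent because the loss fraction varies only mildly across projections, which mirrors the abstract's finding that sparse sampling of the losses suffices in most clinical cases.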

  19. An iterative reduced field-of-view reconstruction for periodically rotated overlapping parallel lines with enhanced reconstruction (PROPELLER) MRI.

    PubMed

    Lin, Jyh-Miin; Patterson, Andrew J; Chang, Hing-Chiu; Gillard, Jonathan H; Graves, Martin J

    2015-10-01

    To propose a new reduced field-of-view (rFOV) strategy for iterative reconstructions in a clinical environment. Iterative reconstructions can incorporate regularization terms to improve the image quality of periodically rotated overlapping parallel lines with enhanced reconstruction (PROPELLER) MRI. However, the large amount of calculation required for full-FOV iterative reconstruction has posed a huge computational challenge for clinical usage. By subdividing the entire problem into smaller rFOVs, the iterative reconstruction can be accelerated on a desktop with a single graphics processing unit (GPU). This rFOV strategy divides the iterative reconstruction into blocks, based on the block-diagonal dominant structure. A near real-time reconstruction system was developed for the clinical MR unit, and parallel computing was implemented using the object-oriented model. In addition, the Toeplitz method was implemented on the GPU to reduce the time required for full interpolation. Using the data acquired from the PROPELLER MRI, the reconstructed images were then saved in the Digital Imaging and Communications in Medicine (DICOM) format. The proposed rFOV reconstruction reduced the gridding time by 97%, as the total iteration time was 3 s even with multiple processes running. A phantom study showed that the structural similarity index for rFOV reconstruction was statistically superior to conventional density compensation (p < 0.001). An in vivo study validated the increased signal-to-noise ratio, which was over four times higher than with density compensation. The image sharpness index was improved using the regularized reconstruction implemented. The rFOV strategy permits near real-time iterative reconstruction to improve the image quality of PROPELLER images. Substantial improvements in image quality metrics were validated in the experiments. The concept of rFOV reconstruction may potentially be applied to other kinds of iterative reconstructions to shorten reconstruction duration.
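    The block-diagonal dominant structure exploited by the rFOV strategy means each sub-FOV can be solved nearly independently. A toy example where the system is exactly block diagonal, so the independent small solves reproduce the single full solve (sizes and matrices are made up; in the real problem the cross-block coupling is small but nonzero):

```python
import numpy as np

rng = np.random.default_rng(7)
block, n_blocks = 8, 4
n = block * n_blocks

# exactly block-diagonal SPD system; the rFOV approximation treats
# cross-block coupling as negligible
A = np.zeros((n, n))
for b in range(n_blocks):
    M = rng.standard_normal((block, block))
    sl = slice(b * block, (b + 1) * block)
    A[sl, sl] = M @ M.T + block * np.eye(block)
y = rng.standard_normal(n)

x_full = np.linalg.solve(A, y)               # one big solve
x_blocks = np.concatenate([                  # independent small solves
    np.linalg.solve(A[b * block:(b + 1) * block, b * block:(b + 1) * block],
                    y[b * block:(b + 1) * block])
    for b in range(n_blocks)])
```

    Beyond the cubic saving per solve, the blocks are embarrassingly parallel, which is what lets a single GPU reach near real-time rates.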

  20. Rough surface reconstruction for ultrasonic NDE simulation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Choi, Wonjae; Shi, Fan; Lowe, Michael J. S.

    2014-02-18

    The reflection of ultrasound from rough surfaces is an important topic for the NDE of safety-critical components, such as pressure-containing components in power stations. The specular reflection from a rough surface of a defect is normally lower than it would be from a flat surface, so it is typical to apply a safety factor in order that justification cases for inspection planning are conservative. The study of the statistics of the rough surfaces that might be expected in candidate defects according to materials and loading, and of the reflections from them, can be useful for developing arguments for realistic safety factors. This paper presents a study of real rough crack surfaces that are representative of the potential defects in pressure-containing power plant components. Two-dimensional (area) values of the height of the roughness have been measured and their statistics analysed. Then a means to reconstruct model cases with similar statistics, so as to enable the creation of multiple realistic realizations of the surfaces, has been investigated, using random field theory. Rough surfaces are reconstructed, based on a real surface, and results for these two-dimensional descriptions of the original surface have been compared with those from the conventional model based on a one-dimensional correlation coefficient function. In addition, ultrasonic reflections from them are simulated using a finite element method.
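    A common random-field recipe for generating such surrogate surfaces is spectral filtering: Gaussian white noise is shaped in the Fourier domain so the result has a prescribed RMS height and correlation length. A minimal sketch under a Gaussian-correlation assumption (the filter form and parameter names are illustrative, not the paper's reconstruction method):

```python
import numpy as np

def rough_surface(n=256, dx=0.1, sigma=0.05, lc=0.5, seed=0):
    """Generate an n x n random rough surface with RMS height `sigma`
    and correlation length `lc` (same length units as the grid step dx)
    by low-pass filtering white noise in the Fourier domain."""
    rng = np.random.default_rng(seed)
    noise = rng.standard_normal((n, n))
    fx = np.fft.fftfreq(n, d=dx)
    kx, ky = np.meshgrid(fx, fx, indexing="ij")
    # Gaussian low-pass filter sets the lateral correlation scale
    filt = np.exp(-((np.pi * lc) ** 2) * (kx ** 2 + ky ** 2) / 2.0)
    h = np.real(np.fft.ifft2(np.fft.fft2(noise) * filt))
    h -= h.mean()              # zero-mean height field
    h *= sigma / h.std()       # rescale to the target RMS height
    return h
```

    Multiple statistically equivalent realizations follow by varying the seed; matching a *measured* correlation function, as in the paper, would replace the analytic filter with the square root of the measured power spectrum.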

  1. A Universal Trend among Proteomes Indicates an Oily Last Common Ancestor

    PubMed Central

    Mannige, Ranjan V.; Brooks, Charles L.; Shakhnovich, Eugene I.

    2012-01-01

    Despite progress in ancestral protein sequence reconstruction, much needs to be unraveled about the nature of the putative last common ancestral proteome that served as the prototype of all extant lifeforms. Here, we present data that indicate a steady decline (oil escape) in proteome hydrophobicity over species evolvedness (node number), evident in 272 diverse proteomes, which points to a highly hydrophobic (oily) last common ancestor (LCA). This trend, obtained from simple considerations (free from sequence reconstruction methods), was corroborated by regression studies within homologous and orthologous protein clusters as well as by phylogenetic estimates of the ancestral oil content. While indicating an inherent irreversibility in molecular evolution, oil escape also serves as a rare and universal reaction coordinate for evolution (reinforcing Darwin's principle of Common Descent), and may prove important in matters such as (i) explaining the emergence of intrinsically disordered proteins, (ii) developing composition- and speciation-based “global” molecular clocks, and (iii) improving the statistical methods for ancestral sequence reconstruction. PMID:23300421

  2. An experimental study on the application of radionuclide imaging in repair of the bone defect

    PubMed Central

    Zhu, Weimin; Wang, Daping; Zhang, Xiaojun; Lu, Wei; Liu, Jianquan; Peng, Liangquan; Li, Hao; Han, Yun; Zeng, Yanjun

    2011-01-01

    The aim of our study was to validate the effect of radionuclide imaging in early monitoring of bone reconstruction. An animal model of bone defect was created in rabbits and repaired with hydroxyapatite (HA) artificial bone. The progress of bone defect repair was evaluated using radionuclide bone imaging at 2, 4, 8, and 12 weeks postoperatively. The results indicate that the experimental group stimulated more bone formation than the control group, and the differences in bone reconstruction ability were statistically significant (p<0.05). The nano-HA artificial bone has good bone conduction and can be used for the treatment of bone defects. Radionuclide imaging may be an effective first-choice method for early monitoring of bone reconstruction. PMID:21875418

  3. RESOLVE: A new algorithm for aperture synthesis imaging of extended emission in radio astronomy

    NASA Astrophysics Data System (ADS)

    Junklewitz, H.; Bell, M. R.; Selig, M.; Enßlin, T. A.

    2016-02-01

    We present resolve, a new algorithm for radio aperture synthesis imaging of extended and diffuse emission in total intensity. The algorithm is derived using Bayesian statistical inference techniques, estimating the surface brightness in the sky assuming a priori log-normal statistics. resolve estimates the measured sky brightness in total intensity, and the spatial correlation structure in the sky, which is used to guide the algorithm to an optimal reconstruction of extended and diffuse sources. During this process, the algorithm succeeds in deconvolving the effects of the radio interferometric point spread function. Additionally, resolve provides a map with an uncertainty estimate of the reconstructed surface brightness. Furthermore, with resolve we introduce a new, optimal visibility weighting scheme that can be viewed as an extension to robust weighting. In tests using simulated observations, the algorithm shows improved performance against two standard imaging approaches for extended sources, Multiscale-CLEAN and the Maximum Entropy Method.

  4. Ice Mass Change in Greenland and Antarctica Between 1993 and 2013 from Satellite Gravity Measurements

    NASA Technical Reports Server (NTRS)

    Talpe, Matthieu J.; Nerem, R. Steven; Forootan, Ehsan; Schmidt, Michael; Lemoine, Frank G.; Enderlin, Ellyn M.; Landerer, Felix W.

    2017-01-01

    We construct long-term time series of Greenland and Antarctic ice sheet mass change from satellite gravity measurements. A statistical reconstruction approach is developed based on a principal component analysis (PCA) to combine high-resolution spatial modes from the Gravity Recovery and Climate Experiment (GRACE) mission with the gravity information from conventional satellite tracking data. Uncertainties of this reconstruction are rigorously assessed; they include temporal limitations for short GRACE measurements, spatial limitations for the low-resolution conventional tracking data measurements, and limitations of the estimated statistical relationships between low- and high-degree potential coefficients reflected in the PCA modes. Trends of mass variations in Greenland and Antarctica are assessed against a number of previous studies. The resulting time series for Greenland show a higher rate of mass loss than other methods before 2000, while the Antarctic ice sheet appears heavily influenced by interannual variations.
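    The statistical reconstruction described above amounts to three steps: learn spatial modes (PCA/EOFs) from the high-resolution GRACE record, fit a linear map from the coarse conventional-tracking observations to the mode amplitudes over the overlap period, then apply that map outside the GRACE era. A minimal sketch of this idea (function and variable names are illustrative; the actual method operates on spherical-harmonic coefficients with a rigorous uncertainty budget):

```python
import numpy as np

def pca_reconstruct(high_res, low_res_train, low_res_new, k=3):
    """Sketch of a PCA-based statistical reconstruction.

    high_res      : (t, p) high-resolution fields over the training period
    low_res_train : (t, q) coarse observations over the same period
    low_res_new   : (m, q) coarse observations outside the training period
    Returns an (m, p) array of reconstructed high-resolution fields.
    """
    mean = high_res.mean(axis=0)
    # leading spatial modes (EOFs) and their temporal amplitudes (PCs)
    U, s, Vt = np.linalg.svd(high_res - mean, full_matrices=False)
    pcs = U[:, :k] * s[:k]
    eofs = Vt[:k]
    # least-squares map from coarse observations to the PC time series
    lmean = low_res_train.mean(axis=0)
    B, *_ = np.linalg.lstsq(low_res_train - lmean, pcs, rcond=None)
    return (low_res_new - lmean) @ B @ eofs + mean
```

    With exactly rank-k data and a full-rank coarse observation operator this reconstruction is exact; in practice mode truncation and measurement noise make it an approximation whose uncertainty must be assessed, as the abstract emphasizes.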

  5. Paediatric cardiac CT examinations: impact of the iterative reconstruction method ASIR on image quality--preliminary findings.

    PubMed

    Miéville, Frédéric A; Gudinchet, François; Rizzo, Elena; Ou, Phalla; Brunelle, Francis; Bochud, François O; Verdun, Francis R

    2011-09-01

    Radiation dose exposure is of particular concern in children due to the possible harmful effects of ionizing radiation. The adaptive statistical iterative reconstruction (ASIR) method is a promising new technique that reduces image noise and produces better overall image quality compared with routine-dose contrast-enhanced methods. To assess the benefits of ASIR on the diagnostic image quality in paediatric cardiac CT examinations. Four paediatric radiologists based at two major hospitals evaluated ten low-dose paediatric cardiac examinations (80 kVp, CTDI(vol) 4.8-7.9 mGy, DLP 37.1-178.9 mGy·cm). The average age of the cohort studied was 2.6 years (range 1 day to 7 years). Acquisitions were performed on a 64-MDCT scanner. All images were reconstructed at various ASIR percentages (0-100%). For each examination, radiologists scored 19 anatomical structures using the relative visual grading analysis method. To estimate the potential for dose reduction, acquisitions were also performed on a Catphan phantom and a paediatric phantom. The best image quality for all clinical images was obtained with 20% and 40% ASIR (p < 0.001) whereas with ASIR above 50%, image quality significantly decreased (p < 0.001). With 100% ASIR, a strong noise-free appearance of the structures reduced image conspicuity. A potential for dose reduction of about 36% is predicted for a 2- to 3-year-old child when using 40% ASIR rather than the standard filtered back-projection method. Reconstruction including 20% to 40% ASIR slightly improved the conspicuity of various paediatric cardiac structures in newborns and children with respect to conventional reconstruction (filtered back-projection) alone.

  6. Cup Implant Planning Based on 2-D/3-D Radiographic Pelvis Reconstruction-First Clinical Results.

    PubMed

    Schumann, Steffen; Sato, Yoshinobu; Nakanishi, Yuki; Yokota, Futoshi; Takao, Masaki; Sugano, Nobuhiko; Zheng, Guoyan

    2015-11-01

    In the following, we will present a newly developed X-ray calibration phantom and its integration for 2-D/3-D pelvis reconstruction and subsequent automatic cup planning. Two different planning strategies were applied and evaluated with clinical data. Two different cup planning methods were investigated: The first planning strategy is based on a combined pelvis and cup statistical atlas. Thereby, the pelvis part of the combined atlas is matched to the reconstructed pelvis model, resulting in an optimized cup planning. The second planning strategy analyzes the morphology of the reconstructed pelvis model to determine the best fitting cup implant. The first planning strategy was compared to 3-D CT-based planning. Digitally reconstructed radiographs of THA patients with differently severe pathologies were used to evaluate the accuracy of predicting the cup size and position. Within a discrepancy of one cup size, the size was correctly identified in 100% of the cases for Crowe type I datasets and in 77.8% of the cases for Crowe type II, III, and IV datasets. The second planning strategy was analyzed with respect to the eventually implanted cup size. In seven patients, the estimated cup diameter was correct within one cup size, while the estimation for the remaining five patients differed by two cup sizes. While both planning strategies showed the same prediction rate with a discrepancy of one cup size (87.5%), the prediction of the exact cup size was increased for the statistical atlas-based strategy (56%) in contrast to the anatomically driven approach (37.5%). The proposed approach demonstrated the clinical validity of using 2-D/3-D reconstruction technique for cup planning.

  7. Model-based iterative reconstruction and adaptive statistical iterative reconstruction: dose-reduced CT for detecting pancreatic calcification.

    PubMed

    Yasaka, Koichiro; Katsura, Masaki; Akahane, Masaaki; Sato, Jiro; Matsuda, Izuru; Ohtomo, Kuni

    2016-01-01

    Iterative reconstruction methods have attracted attention for reducing radiation doses in computed tomography (CT). To investigate the detectability of pancreatic calcification using dose-reduced CT reconstructed with model-based iterative reconstruction (MBIR) and adaptive statistical iterative reconstruction (ASIR). This prospective study, approved by the Institutional Review Board, included 85 patients (57 men, 28 women; mean age, 69.9 years; mean body weight, 61.2 kg). Unenhanced CT was performed three times with different radiation doses (reference-dose CT [RDCT], low-dose CT [LDCT], ultralow-dose CT [ULDCT]). From RDCT, LDCT, and ULDCT, images were reconstructed with filtered back projection (R-FBP, used for establishing the reference standard), ASIR (L-ASIR), and MBIR and ASIR (UL-MBIR and UL-ASIR), respectively. A lesion (pancreatic calcification) detection test was performed by two blinded radiologists with a five-point certainty level scale. Dose-length products of RDCT, LDCT, and ULDCT were 410, 97, and 36 mGy·cm, respectively. Nine patients had pancreatic calcification. The sensitivity for detecting pancreatic calcification with UL-MBIR was high (0.67-0.89) compared to L-ASIR or UL-ASIR (0.11-0.44), and a significant difference was seen between UL-MBIR and UL-ASIR for one reader (P = 0.014). The area under the receiver-operating characteristic curve for UL-MBIR (0.818-0.860) was comparable to that for L-ASIR (0.696-0.844). The specificity was lower with UL-MBIR (0.79-0.92) than with L-ASIR or UL-ASIR (0.96-0.99), and a significant difference was seen for one reader (P < 0.01). With UL-MBIR, pancreatic calcification can be detected with high sensitivity; however, attention should be paid to the slightly lower specificity.

  8. Can use of adaptive statistical iterative reconstruction reduce radiation dose in unenhanced head CT? An analysis of qualitative and quantitative image quality

    PubMed Central

    Heggen, Kristin Livelten; Pedersen, Hans Kristian; Andersen, Hilde Kjernlie; Martinsen, Anne Catrine T

    2016-01-01

    Background Iterative reconstruction can reduce image noise and thereby facilitate dose reduction. Purpose To evaluate qualitative and quantitative image quality for full dose and dose reduced head computed tomography (CT) protocols reconstructed using filtered back projection (FBP) and adaptive statistical iterative reconstruction (ASIR). Material and Methods Fourteen patients undergoing follow-up head CT were included. All patients underwent a full dose (FD) exam and a subsequent 15% dose reduced (DR) exam, reconstructed using FBP and 30% ASIR. Qualitative image quality was assessed using visual grading characteristics. Quantitative image quality was assessed using ROI measurements in cerebrospinal fluid (CSF), white matter, and peripheral and central gray matter. Additionally, quantitative image quality was measured in Catphan and vendor’s water phantom. Results There was no significant difference in qualitative image quality between FD FBP and DR ASIR. Comparing same scan FBP versus ASIR, a noise reduction of 28.6% in CSF and between −3.7 and 3.5% in brain parenchyma was observed. Comparing FD FBP versus DR ASIR, a noise reduction of 25.7% in CSF, and −7.5 and 6.3% in brain parenchyma was observed. Image contrast increased in ASIR reconstructions. Contrast-to-noise ratio (CNR) was improved in DR ASIR compared to FD FBP. In phantoms, noise reduction ranged from 3% to 28%, depending on image content. Conclusion There was no significant difference in qualitative image quality between full dose FBP and dose reduced ASIR. CNR improved in DR ASIR compared to FD FBP mostly due to increased contrast, not reduced noise. Therefore, we recommend using caution if reducing dose and applying ASIR to maintain image quality. PMID:27583169

  9. Emerging Techniques for Dose Optimization in Abdominal CT

    PubMed Central

    Platt, Joel F.; Goodsitt, Mitchell M.; Al-Hawary, Mahmoud M.; Maturen, Katherine E.; Wasnik, Ashish P.; Pandya, Amit

    2014-01-01

    Recent advances in computed tomographic (CT) scanning technique such as automated tube current modulation (ATCM), optimized x-ray tube voltage, and better use of iterative image reconstruction have allowed maintenance of good CT image quality with reduced radiation dose. ATCM varies the tube current during scanning to account for differences in patient attenuation, ensuring a more homogeneous image quality, although selection of the appropriate image quality parameter is essential for achieving optimal dose reduction. Reducing the x-ray tube voltage is best suited for evaluating iodinated structures, since the effective energy of the x-ray beam will be closer to the k-edge of iodine, resulting in a higher attenuation for the iodine. The optimal kilovoltage for a CT study should be chosen on the basis of imaging task and patient habitus. The aim of iterative image reconstruction is to identify factors that contribute to noise on CT images with use of statistical models of noise (statistical iterative reconstruction) and selective removal of noise to improve image quality. The degree of noise suppression achieved with statistical iterative reconstruction can be customized to minimize the effect of altered image quality on CT images. Unlike with statistical iterative reconstruction, model-based iterative reconstruction algorithms model both the statistical noise and the physical acquisition process, allowing CT to be performed with further reduction in radiation dose without an increase in image noise or loss of spatial resolution. Understanding these recently developed scanning techniques is essential for optimization of imaging protocols designed to achieve the desired image quality with a reduced dose. © RSNA, 2014 PMID:24428277

  10. Enhanced secondary analysis of survival data: reconstructing the data from published Kaplan-Meier survival curves.

    PubMed

    Guyot, Patricia; Ades, A E; Ouwens, Mario J N M; Welton, Nicky J

    2012-02-01

    The results of randomized controlled trials (RCTs) on time-to-event outcomes are usually reported as the median time to event and the Cox hazard ratio. These do not constitute the sufficient statistics required for meta-analysis or cost-effectiveness analysis, and their use in secondary analyses requires strong assumptions that may not have been adequately tested. In order to enhance the quality of secondary data analyses, we propose a method which derives from the published Kaplan-Meier survival curves a close approximation to the original individual patient time-to-event data from which they were generated. We develop an algorithm that maps from digitised curves back to KM data by finding numerical solutions to the inverted KM equations, using, where available, information on the number of events and the numbers at risk. The reproducibility and accuracy of survival probabilities, median survival times, and hazard ratios based on reconstructed KM data were assessed by comparing published statistics (survival probabilities, medians, and hazard ratios) with statistics based on repeated reconstructions by multiple observers. The validation exercise established that there was no material systematic error and that there was a high degree of reproducibility for all statistics. Accuracy was excellent for survival probabilities and medians; for hazard ratios, reasonable accuracy can only be obtained if at least the numbers at risk or the total number of events are reported. The algorithm is a reliable tool for meta-analysis and cost-effectiveness analyses of RCTs reporting time-to-event data. It is recommended that all RCTs report information on numbers at risk and total number of events alongside KM curves.
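    The core inversion is simple when no censoring falls inside an interval: each downward step of the published curve satisfies S(t) = S(t⁻)(1 − d/n), so the event count d can be recovered from the digitised survival values and the numbers at risk. A minimal sketch of that single step (the full algorithm additionally distributes censoring within intervals and iterates; function and variable names are illustrative):

```python
def km_events(times, surv, n_risk):
    """Recover event counts from a digitised Kaplan-Meier curve,
    assuming no censoring within the steps shown.

    times  : event-time grid read off the published curve
    surv   : survival probability S(t) at each time
    n_risk : number at risk just before each time
    Returns the estimated number of events at each time point.
    """
    events = []
    s_prev = 1.0
    for s, n in zip(surv, n_risk):
        # invert the KM step: S(t) = S(t-) * (1 - d/n)  =>  d = n * (1 - S/S_prev)
        d = round(n * (1.0 - s / s_prev))
        events.append(int(d))
        s_prev = s
    return events
```

    Rounding to integer counts is what makes the reconstruction robust to small digitisation errors; with censoring present, the published numbers at risk are what anchor the within-interval allocation.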

  11. Inverse imaging of the breast with a material classification technique.

    PubMed

    Manry, C W; Broschat, S L

    1998-03-01

    In recent publications [Chew et al., IEEE Trans. Biomed. Eng. BME-9, 218-225 (1990); Borup et al., Ultrason. Imaging 14, 69-85 (1992)] the inverse imaging problem has been solved by means of a two-step iterative method. In this paper, a third step is introduced for ultrasound imaging of the breast. In this step, which is based on statistical pattern recognition, classification of tissue types and a priori knowledge of the anatomy of the breast are integrated into the iterative method. Use of this material classification technique results in more rapid convergence to the inverse solution--approximately 40% fewer iterations are required--as well as greater accuracy. In addition, tumors are detected early in the reconstruction process. Results for reconstructions of a simple two-dimensional model of the human breast are presented. These reconstructions are extremely accurate when system noise and variations in tissue parameters are not too great. However, for the algorithm used, degradation of the reconstructions and divergence from the correct solution occur when system noise and variations in parameters exceed threshold values. Even in this case, however, tumors are still identified within a few iterations.

  12. Statistical analysis of nonlinearly reconstructed near-infrared tomographic images: Part I--Theory and simulations.

    PubMed

    Pogue, Brian W; Song, Xiaomei; Tosteson, Tor D; McBride, Troy O; Jiang, Shudong; Paulsen, Keith D

    2002-07-01

    Near-infrared (NIR) diffuse tomography is an emerging method for imaging the interior of tissues to quantify concentrations of hemoglobin and exogenous chromophores non-invasively in vivo. It often exploits an optical diffusion model-based image reconstruction algorithm to estimate spatial property values from measurements of the light flux at the surface of the tissue. In this study, mean-squared error (MSE) over the image is used to evaluate methods for regularizing the ill-posed inverse image reconstruction problem in NIR tomography. Estimates of image bias and image standard deviation were calculated based upon 100 repeated reconstructions of a test image with randomly distributed noise added to the light flux measurements. It was observed that the bias error dominates at high regularization parameter values while variance dominates as the algorithm is allowed to approach the optimal solution. This optimum does not necessarily correspond to the minimum projection error solution, but typically requires further iteration with a decreasing regularization parameter to reach the lowest image error. Increasing measurement noise causes a need to constrain the minimum regularization parameter to higher values in order to achieve a minimum in the overall image MSE.
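    The repeated-reconstruction experiment described above rests on the standard error decomposition: averaging many reconstructions of the same object separates the squared bias of the mean image from the pixel-wise variance, and the two sum to the image MSE. A minimal sketch of that bookkeeping (array shapes are illustrative):

```python
import numpy as np

def bias_variance_mse(recons, truth):
    """Image-averaged error decomposition over repeated reconstructions.

    recons : (r, ...) stack of r reconstructions of the same object,
             each from an independent noise realization
    truth  : the noise-free test image
    Returns (bias^2, variance, mse), each averaged over the image;
    mse = bias^2 + variance holds identically.
    """
    mean_img = recons.mean(axis=0)
    bias2 = np.mean((mean_img - truth) ** 2)            # squared bias term
    var = np.mean(recons.var(axis=0))                   # noise-driven variance
    mse = np.mean(((recons - truth) ** 2).mean(axis=0)) # total error
    return bias2, var, mse
```

    Sweeping the regularization parameter and plotting these three curves reproduces the trade-off reported in the study: bias dominates at heavy regularization, variance dominates as the algorithm approaches the unregularized solution.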

  13. A GATE evaluation of the sources of error in quantitative {sup 90}Y PET

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Strydhorst, Jared, E-mail: jared.strydhorst@gmail.

    Purpose: Accurate reconstruction of the dose delivered by {sup 90}Y microspheres using a postembolization PET scan would permit the establishment of more accurate dose–response relationships for treatment of hepatocellular carcinoma with {sup 90}Y. However, the quality of the PET data obtained is compromised by several factors, including poor count statistics and a very high random fraction. This work uses Monte Carlo simulations to investigate what impact factors other than low count statistics have on the quantification of {sup 90}Y PET. Methods: PET acquisitions of two phantoms (a NEMA PET phantom and the NEMA IEC PET body phantom) containing either {sup 90}Y or {sup 18}F were simulated using GATE. Simulated projections were created with subsets of the simulation data, allowing the contributions of randoms, scatter, and LSO background to be independently evaluated. The simulated projections were reconstructed using the commercial software for the simulated scanner, and the quantitative accuracy of the reconstruction and the contrast recovery of the reconstructed images were evaluated. Results: The quantitative accuracy of the {sup 90}Y reconstructions was not strongly influenced by the high random fraction present in the projection data, and the activity concentration was recovered to within 5% of the known value. The contrast recovery measured for simulated {sup 90}Y data was slightly poorer than that for simulated {sup 18}F data with similar count statistics. However, the degradation was not strongly linked to any particular factor. Using a more restricted energy range to reduce the random fraction in the projections had no significant effect.
Conclusions: Simulations of {sup 90}Y PET confirm that quantitative {sup 90}Y is achievable with the same approach as that used for {sup 18}F, and that there is likely very little margin for improvement by attempting to model aspects unique to {sup 90}Y, such as the much higher random fraction or the presence of bremsstrahlung in the singles data.

  14. SU-D-207B-04: Morphological Features of MRI as a Correlate of Capsular Contracture in Breast Cancer Patients with Implant-Based Reconstructions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tyagi, N; Sutton, E; Hunt, M

    Purpose: Capsular contracture (CC) is a serious complication in patients receiving implant-based reconstruction for breast cancer. The goal of this study was to identify image-based correlates of CC using MRI imaging in breast cancer patients who received both MRI and clinical evaluation following reconstructive surgery. Methods: We analyzed a retrospective dataset of 50 patients who had both a diagnostic MR and a plastic surgeon’s evaluation of CC score (Baker’s score) within a six month period following mastectomy and reconstructive surgery. T2w sagittal MRIs (TR/TE = 3500/102 ms, slice thickness = 4 mm) were used for morphological shape features (roundness, eccentricity, solidity, extent and ratio-length) and histogram features (median, skewness and kurtosis) of the implant and the pectoralis muscle overlying the implant. Implant and pectoralis muscles were segmented in 3D using the Computational Environment for Radiological Research (CERR), and shape and histogram features were calculated as a function of Baker’s score. Results: Shape features such as roundness and eccentricity were statistically significant in differentiating grade 1 and grade 2 (p = 0.009; p = 0.06) as well as grade 1 and grade 3 CC (p = 0.001; p = 0.006). Solidity and extent were statistically significant in differentiating grade 1 and grade 3 CC (p = 0.04; p = 0.04). Ratio-length was statistically significant in differentiating all grades of CC except grade 2 and grade 3, which showed borderline significance (p = 0.06). The muscle thickness, median intensity and kurtosis were significant in differentiating between grade 1 and grade 3 (p = 0.02), grade 1 and grade 2 (p = 0.03) and grade 1 and grade 3 (p = 0.01) respectively. Conclusion: Morphological shape features described on MR images were associated with the severity of CC. MRI may be important in objectively evaluating outcomes in breast cancer patients who undergo implant reconstruction.

  15. Model-based Iterative Reconstruction: Effect on Patient Radiation Dose and Image Quality in Pediatric Body CT

    PubMed Central

    Dillman, Jonathan R.; Goodsitt, Mitchell M.; Christodoulou, Emmanuel G.; Keshavarzi, Nahid; Strouse, Peter J.

    2014-01-01

    Purpose To retrospectively compare image quality and radiation dose between a reduced-dose computed tomographic (CT) protocol that uses model-based iterative reconstruction (MBIR) and a standard-dose CT protocol that uses 30% adaptive statistical iterative reconstruction (ASIR) with filtered back projection. Materials and Methods Institutional review board approval was obtained. Clinical CT images of the chest, abdomen, and pelvis obtained with a reduced-dose protocol were identified. Images were reconstructed with two algorithms: MBIR and 100% ASIR. All subjects had undergone standard-dose CT within the prior year, and the images were reconstructed with 30% ASIR. Reduced- and standard-dose images were evaluated objectively and subjectively. Reduced-dose images were evaluated for lesion detectability. Spatial resolution was assessed in a phantom. Radiation dose was estimated by using volumetric CT dose index (CTDIvol) and calculated size-specific dose estimates (SSDE). A combination of descriptive statistics, analysis of variance, and t tests was used for statistical analysis. Results In the 25 patients who underwent the reduced-dose protocol, mean decrease in CTDIvol was 46% (range, 19%–65%) and mean decrease in SSDE was 44% (range, 19%–64%). Reduced-dose MBIR images had less noise (P > .004). Spatial resolution was superior for reduced-dose MBIR images. Reduced-dose MBIR images were equivalent to standard-dose images for lungs and soft tissues (P > .05) but were inferior for bones (P = .004). Reduced-dose 100% ASIR images were inferior for soft tissues (P < .002), lungs (P < .001), and bones (P < .001). By using the same reduced-dose acquisition, lesion detectability was better (38% [32 of 84 rated lesions]) or the same (62% [52 of 84 rated lesions]) with MBIR as compared with 100% ASIR. Conclusion CT performed with a reduced-dose protocol and MBIR is feasible in the pediatric population, and it maintains diagnostic quality. 
© RSNA, 2013 Online supplemental material is available for this article. PMID:24091359

  16. Task-based image quality evaluation of iterative reconstruction methods for low dose CT using computer simulations

    NASA Astrophysics Data System (ADS)

    Xu, Jingyan; Fuld, Matthew K.; Fung, George S. K.; Tsui, Benjamin M. W.

    2015-04-01

    Iterative reconstruction (IR) methods for x-ray CT are a promising approach to improve image quality or reduce radiation dose to patients. The goal of this work was to use task-based image quality measures and the channelized Hotelling observer (CHO) to evaluate both analytic and IR methods for clinical x-ray CT applications. We performed realistic computer simulations at five radiation dose levels, from a clinical reference low dose D0 to 25% D0. A lesion of fixed size and contrast was inserted at different locations into the liver of the XCAT phantom to simulate a weak signal. The simulated data were reconstructed on a commercial CT scanner (SOMATOM Definition Flash; Siemens, Forchheim, Germany) using the vendor-provided analytic (WFBP) and IR (SAFIRE) methods. The reconstructed images were analyzed by CHOs with both rotationally symmetric (RS) and rotationally oriented (RO) channels, and with different numbers of lesion locations (5, 10, and 20) in a signal known exactly (SKE), background known exactly but variable (BKEV) detection task. The area under the receiver operating characteristic curve (AUC) was used as a summary measure to compare the IR and analytic methods; the AUC was also used as the equal performance criterion to derive the potential dose reduction factor of IR. In general, there was good agreement in the relative AUC values of the different reconstruction methods using CHOs with RS and RO channels, although the CHO with RO channels achieved higher AUCs than with RS channels. The improvement of IR over analytic methods depends on the dose level. The reference dose level D0 was based on a clinical low dose protocol, lower than the standard dose due to the use of IR methods. At 75% D0, the performance improvement was statistically significant (p < 0.05). The potential dose reduction factor also depended on the detection task. For the SKE/BKEV task involving 10 lesion locations, a dose reduction of at least 25% from D0 was achieved.
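    The CHO pipeline used in such evaluations reduces each image to a handful of channel outputs, builds a Hotelling template from the channel-space statistics, and summarizes detection performance with a nonparametric AUC over the resulting test statistics. A minimal sketch (illustrative, not the authors' implementation; a real channel matrix would encode the RS or RO frequency bands):

```python
import numpy as np

def cho_auc(signal_imgs, noise_imgs, channels):
    """Channelized Hotelling observer with a nonparametric AUC summary.

    signal_imgs : (n, p) flattened signal-present images
    noise_imgs  : (n, p) flattened signal-absent images
    channels    : (p, c) channel matrix (columns are channel templates)
    """
    vs = signal_imgs @ channels               # channel outputs, signal present
    vn = noise_imgs @ channels                # channel outputs, signal absent
    S = 0.5 * (np.cov(vs.T) + np.cov(vn.T))   # pooled channel covariance
    w = np.linalg.solve(np.atleast_2d(S), vs.mean(axis=0) - vn.mean(axis=0))
    ts, tn = vs @ w, vn @ w                   # Hotelling test statistics
    # nonparametric AUC: fraction of (signal, noise) pairs ranked correctly
    return (ts[:, None] > tn[None, :]).mean()
```

    Comparing reconstruction methods then reduces to comparing AUCs at matched dose, exactly as in the dose-reduction argument above.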

  17. Impact of PET/CT system, reconstruction protocol, data analysis method, and repositioning on PET/CT precision: An experimental evaluation using an oncology and brain phantom.

    PubMed

    Mansor, Syahir; Pfaehler, Elisabeth; Heijtel, Dennis; Lodge, Martin A; Boellaard, Ronald; Yaqub, Maqsood

    2017-12-01

    In longitudinal oncological and brain PET/CT studies, it is important to understand the repeatability of quantitative PET metrics in order to assess change in tracer uptake. The present studies were performed in order to assess precision as a function of PET/CT system, reconstruction protocol, analysis method, scan duration (or image noise), and repositioning in the field of view. Multiple (repeated) scans have been performed using a NEMA image quality (IQ) phantom and a 3D Hoffman brain phantom filled with 18F solutions on two systems. Studies were performed with and without randomly (< 2 cm) repositioning the phantom, and all scans (12 replicates for the IQ phantom and 10 replicates for the Hoffman brain phantom) were performed at equal count statistics. For the NEMA IQ phantom, we studied the recovery coefficients (RC) of the maximum (SUV max ), peak (SUV peak ), and mean (SUV mean ) uptake in each sphere as a function of experimental conditions (noise level, reconstruction settings, and phantom repositioning). For the 3D Hoffman phantom, the mean activity concentration was determined within several volumes of interest, and activity recovery and its precision were studied as a function of experimental conditions. The impact of phantom repositioning on RC precision was mainly seen on the Philips Ingenuity PET/CT, especially in the case of smaller spheres (< 17 mm diameter, P < 0.05). This effect was much smaller for the Siemens Biograph system. When exploring SUV max , SUV peak , or SUV mean of the spheres in the NEMA IQ phantom, it was observed that precision depended on phantom repositioning, reconstruction algorithm, and scan duration, with SUV max being most and SUV peak least sensitive to phantom repositioning. For the brain phantom, regional averaged SUVs were only minimally affected by phantom repositioning (< 2 cm).
    The precision of quantitative PET metrics depends on the combination of reconstruction protocol, data analysis method, and scan duration (scan statistics). Moreover, precision was also affected by phantom repositioning, but its impact depended on the data analysis method in combination with the reconstructed voxel size (tissue fraction effect). This study suggests that for oncological PET studies the use of SUVpeak may be preferred over SUVmax because SUVpeak is less sensitive to patient repositioning/tumor sampling. © 2017 The Authors. Medical Physics published by Wiley Periodicals, Inc. on behalf of American Association of Physicists in Medicine.
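
The distinction the abstract draws between SUVmax, SUVpeak, and SUVmean can be made concrete with a small sketch. This is a hypothetical illustration, not the authors' code: SUVpeak is approximated here by a small cubic neighbourhood mean rather than the clinical 1 cm³ sphere, and `peak_width` is an assumed parameter.

```python
import numpy as np

def suv_metrics(volume, mask, peak_width=3):
    """Compute SUVmax, SUVmean, and a simplified SUVpeak for a masked lesion.

    SUVpeak is approximated as the highest mean over a small cubic
    neighbourhood (peak_width voxels per side) centred on voxels inside
    the mask; clinical definitions use a 1 cm^3 sphere instead.
    """
    vals = volume[mask]
    suv_max = float(vals.max())
    suv_mean = float(vals.mean())

    # Slide a cubic window over masked voxels, keep the best neighbourhood mean.
    r = peak_width // 2
    best = -np.inf
    for z, y, x in np.argwhere(mask):
        zl, yl, xl = max(z - r, 0), max(y - r, 0), max(x - r, 0)
        cube = volume[zl:z + r + 1, yl:y + r + 1, xl:x + r + 1]
        best = max(best, float(cube.mean()))
    return suv_max, suv_mean, best

# A toy 'lesion': uniform uptake of 4.0 with one noisy hot voxel of 9.0.
vol = np.full((9, 9, 9), 1.0)
vol[3:6, 3:6, 3:6] = 4.0
vol[4, 4, 4] = 9.0
lesion = vol > 2.0

smax, smean, speak = suv_metrics(vol, lesion)
```

On the toy lesion, the single noisy hot voxel drives SUVmax to 9.0 while SUVpeak averages it away and stays near the true uptake, which is the robustness the study reports for SUVpeak.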

  18. Investigation of statistical iterative reconstruction for dedicated breast CT

    PubMed Central

    Makeev, Andrey; Glick, Stephen J.

    2013-01-01

    Purpose: Dedicated breast CT has great potential for improving the detection and diagnosis of breast cancer. Statistical iterative reconstruction (SIR) in dedicated breast CT is a promising alternative to traditional filtered backprojection (FBP). One of the difficulties in using SIR is the presence of free parameters in the algorithm that control the appearance of the resulting image. These parameters require tuning in order to achieve high quality reconstructions. In this study, the authors investigated the penalized maximum likelihood (PML) method with two commonly used types of roughness penalty functions: the hyperbolic potential and the anisotropic total variation (TV) norm. Reconstructed images were compared with images obtained using standard FBP. Optimal parameters for PML with the hyperbolic prior are reported for the task of detecting microcalcifications embedded in breast tissue. Methods: Computer simulations were used to acquire projections in a half-cone beam geometry. The modeled setup describes a realistic breast CT benchtop system, with an x-ray spectrum produced by a point source and an a-Si, CsI:Tl flat-panel detector. A voxelized anthropomorphic breast phantom with 280 μm microcalcification spheres embedded in it was used to model the attenuation properties of an uncompressed breast in the pendant position. The reconstruction of 3D images was performed using the separable paraboloidal surrogates algorithm with ordered subsets. Task performance was assessed with the ideal observer detectability index to determine optimal PML parameters. Results: The authors' findings suggest that there is a preferred range of values of the roughness penalty weight and the edge preservation threshold in the penalized objective function with the hyperbolic potential, which resulted in low noise images with high contrast microcalcifications preserved.
    In terms of numerical observer detectability index, the PML method with optimal parameters yielded substantially improved performance (by a factor greater than 10) compared to FBP. The hyperbolic prior was also observed to be superior to the TV norm. A few of the best-performing parameter pairs for the PML method also demonstrated superior performance at various radiation doses. In fact, using PML with certain parameter values results in better images acquired with a 2 mGy dose than FBP-reconstructed images acquired with a 6 mGy dose. Conclusions: A range of optimal free parameters for the PML algorithm with hyperbolic and TV norm-based potentials is presented for the microcalcification detection task in dedicated breast CT. The reported values can be used as starting values of the free parameters when SIR techniques are used for image reconstruction. Significant improvement in image quality can be achieved by using PML with an optimal combination of parameters, as compared to FBP. Importantly, these results suggest that improved detection of microcalcifications can be obtained by using PML with a lower radiation dose to the patient than by using FBP with a higher dose. PMID:23927318
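
The roughness penalty weight and edge-preservation threshold discussed above can be illustrated with one common form of the hyperbolic potential; the exact parameterisation used by the authors may differ, so treat this as a sketch with assumed names (`beta` for the weight, `delta` for the threshold).

```python
import numpy as np

def hyperbolic_potential(t, delta):
    """One common edge-preserving hyperbolic potential: approximately
    quadratic (t**2 / 2) for |t| << delta, approximately linear for
    |t| >> delta. The paper's exact form may differ."""
    return delta**2 * (np.sqrt(1.0 + (t / delta)**2) - 1.0)

def roughness_penalty(image, beta, delta):
    """Sum the potential over nearest-neighbour differences (2D sketch)."""
    dx = np.diff(image, axis=0)
    dy = np.diff(image, axis=1)
    return beta * (hyperbolic_potential(dx, delta).sum()
                   + hyperbolic_potential(dy, delta).sum())

img = np.zeros((4, 4))
img[:, 2:] = 1.0          # a sharp edge
smooth = roughness_penalty(img, beta=1.0, delta=10.0)   # near-quadratic regime
edgy   = roughness_penalty(img, beta=1.0, delta=0.01)   # near-linear regime
```

With a small `delta`, the unit edge is penalised roughly linearly instead of quadratically, so sharp transitions such as microcalcification boundaries are cheaper to keep, which is the edge-preservation behaviour the abstract describes.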

  19. Applications of the line-of-response probability density function resolution model in PET list mode reconstruction.

    PubMed

    Jian, Y; Yao, R; Mulnix, T; Jin, X; Carson, R E

    2015-01-07

    Resolution degradation in PET image reconstruction can be caused by inaccurate modeling of the physical factors in the acquisition process. Resolution modeling (RM) is a common technique that takes into account the resolution degrading factors in the system matrix. Our previous work introduced a probability density function (PDF) method of deriving the resolution kernels from Monte Carlo simulation and parameterizing the LORs to reduce the number of kernels needed for image reconstruction. In addition, LOR-PDF allows different PDFs to be applied to LORs from different crystal layer pairs of the HRRT. In this study, a thorough test was performed with this new model (LOR-PDF) applied to two PET scanners, the HRRT and the Focus-220. A more uniform resolution distribution was observed in point source reconstructions by replacing the spatially-invariant kernels with the spatially-variant LOR-PDF. Specifically, from the center to the edge of the radial field of view (FOV) of the HRRT, the measured in-plane FWHMs of point sources in a warm background varied only slightly, from 1.7 mm to 1.9 mm, in LOR-PDF reconstructions. In Minihot and contrast phantom reconstructions, LOR-PDF resulted in up to 9% higher contrast at any given noise level than the image-space resolution model. LOR-PDF also has the advantage of performing crystal-layer-dependent resolution modeling. The contrast improvement from using LOR-PDF was verified statistically by replicate reconstructions. In addition, [(11)C]AFM rats imaged on the HRRT and [(11)C]PHNO rats imaged on the Focus-220 were utilized to demonstrate the advantage of the new model. Higher contrast between the background and high-uptake regions only a few millimeters in diameter was observed in LOR-PDF reconstructions than with other methods.
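
The spatially-variant kernel idea can be sketched in one dimension: a toy blur whose FWHM grows from 1.7 mm at the centre to 1.9 mm at the edge, echoing the HRRT numbers above. The real LOR-PDF kernels are derived from Monte Carlo simulation; this stand-in only illustrates the mechanics of applying position-dependent kernels.

```python
import numpy as np

def spatially_variant_blur(signal, fwhm_centre=1.7, fwhm_edge=1.9):
    """Blur a 1D profile with a Gaussian whose FWHM grows linearly from the
    centre to the edge of the FOV -- a toy stand-in for LOR-dependent
    resolution kernels (voxel pitch assumed to be 1 mm)."""
    n = len(signal)
    centre = (n - 1) / 2.0
    out = np.zeros_like(signal, dtype=float)
    x = np.arange(n, dtype=float)
    for i in range(n):
        frac = abs(i - centre) / centre          # 0 at the centre, 1 at the edge
        fwhm = fwhm_centre + frac * (fwhm_edge - fwhm_centre)
        sigma = fwhm / 2.355                     # FWHM -> Gaussian sigma
        k = np.exp(-0.5 * ((x - i) / sigma)**2)
        out += signal[i] * k / k.sum()           # spread voxel i's counts
    return out

profile = np.zeros(21)
profile[10] = 1.0                                # point source at the centre
blurred = spatially_variant_blur(profile)
```

Because each voxel's kernel is normalised before spreading, total counts are conserved while the point spread varies across the field of view.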

  20. Applications of the line-of-response probability density function resolution model in PET list mode reconstruction

    PubMed Central

    Jian, Y; Yao, R; Mulnix, T; Jin, X; Carson, R E

    2016-01-01

    Resolution degradation in PET image reconstruction can be caused by inaccurate modeling of the physical factors in the acquisition process. Resolution modeling (RM) is a common technique that takes into account the resolution degrading factors in the system matrix. Our previous work introduced a probability density function (PDF) method of deriving the resolution kernels from Monte Carlo simulation and parameterizing the LORs to reduce the number of kernels needed for image reconstruction. In addition, LOR-PDF allows different PDFs to be applied to LORs from different crystal layer pairs of the HRRT. In this study, a thorough test was performed with this new model (LOR-PDF) applied to two PET scanners, the HRRT and the Focus-220. A more uniform resolution distribution was observed in point source reconstructions by replacing the spatially-invariant kernels with the spatially-variant LOR-PDF. Specifically, from the center to the edge of the radial field of view (FOV) of the HRRT, the measured in-plane FWHMs of point sources in a warm background varied only slightly, from 1.7 mm to 1.9 mm, in LOR-PDF reconstructions. In Minihot and contrast phantom reconstructions, LOR-PDF resulted in up to 9% higher contrast at any given noise level than the image-space resolution model. LOR-PDF also has the advantage of performing crystal-layer-dependent resolution modeling. The contrast improvement from using LOR-PDF was verified statistically by replicate reconstructions. In addition, [11C]AFM rats imaged on the HRRT and [11C]PHNO rats imaged on the Focus-220 were utilized to demonstrate the advantage of the new model. Higher contrast between the background and high-uptake regions only a few millimeters in diameter was observed in LOR-PDF reconstructions than with other methods. PMID:25490063

  1. Ultra-Low-Dose Fetal CT With Model-Based Iterative Reconstruction: A Prospective Pilot Study.

    PubMed

    Imai, Rumi; Miyazaki, Osamu; Horiuchi, Tetsuya; Asano, Keisuke; Nishimura, Gen; Sago, Haruhiko; Nosaka, Shunsuke

    2017-06-01

    Prenatal diagnosis of skeletal dysplasia by means of 3D skeletal CT examination is highly accurate. However, it carries a risk of fetal exposure to radiation. Model-based iterative reconstruction (MBIR) technology can reduce radiation exposure; however, to our knowledge, the lower limit of an optimal dose is currently unknown. The objectives of this study are to establish ultra-low-dose fetal CT as a method for prenatal diagnosis of skeletal dysplasia and to evaluate the appropriate radiation dose for ultra-low-dose fetal CT. Relationships between tube current and image noise in adaptive statistical iterative reconstruction and MBIR were examined using a 32-cm CT dose index (CTDI) phantom. On the basis of the results of this examination, the recommended methods for the MBIR option, and the known relationship between noise and tube current for filtered back projection, represented by the expression SD ∝ (mA)^(-0.5), the lower limit of the optimal dose in ultra-low-dose fetal CT with MBIR was set. The diagnostic power of the CT images obtained using the aforementioned scanning conditions was evaluated, and the radiation exposure associated with ultra-low-dose fetal CT was compared with that noted in previous reports. Noise increased in nearly inverse proportion to the square root of the dose in adaptive statistical iterative reconstruction and in inverse proportion to the fourth root of the dose in MBIR. Ultra-low-dose fetal CT was found to have a volume CTDI of 0.5 mGy. Prenatal diagnosis was accurately performed on the basis of ultra-low-dose fetal CT images that were obtained using this protocol. The level of fetal exposure to radiation was 0.7 mSv. The use of ultra-low-dose fetal CT with MBIR led to a substantial reduction in radiation exposure compared with the CT imaging method currently used at our institution, but it still enabled diagnosis of skeletal dysplasia without reducing diagnostic power.
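
The two noise-dose relationships reported above (noise ∝ dose^(-1/2) for adaptive statistical iterative reconstruction and the FBP expression, noise ∝ dose^(-1/4) for MBIR) can be sketched numerically. The reference point of 10 noise units at 4 mGy below is invented purely for illustration.

```python
def noise(dose_mGy, ref_dose, ref_noise, exponent):
    """Noise scaling with dose: exponent 0.5 models the FBP/ASIR-like
    behaviour (noise ~ dose**-0.5); exponent 0.25 models the MBIR
    behaviour reported in the abstract (noise ~ dose**-0.25)."""
    return ref_noise * (ref_dose / dose_mGy) ** exponent

# Hypothetical reference point: noise of 10 units at 4 mGy.
fbp_at_1mGy  = noise(1.0, 4.0, 10.0, 0.5)
mbir_at_1mGy = noise(1.0, 4.0, 10.0, 0.25)
```

Cutting the dose by a factor of 4 doubles the FBP-like noise but raises the MBIR-like noise only by a factor of 4^0.25 ≈ 1.41, which is why MBIR tolerates ultra-low-dose protocols.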

  2. UV-radiation in the past: Reconstruction and long-term changes in Austria

    NASA Astrophysics Data System (ADS)

    Hadzimustafic, J.; Simic, S.; Fitzka, M.

    2013-05-01

    Series of daily erythemal UV doses are reconstructed for the last 30 years of the 20th century in Austria, and their changes during that period are discussed with respect to observed changes in total ozone and cloud cover. The reconstruction method is based on the relationship between long-term global radiation and sunshine duration records and existing measurements of erythemal UV at several locations. Through comparison with different data sources, efforts are made to assure high data quality for all input parameters. The reconstructed daily sums show high correlations (0.95-0.99) with observed values compared on a yearly and seasonal basis throughout the overlapping period 1998-2010. Assessed from the reconstructed data, the long-term variability of erythemal UV daily dose has been quantified for two time periods (1977-1995, 1996-2010). Special emphasis is put on the investigation of changes in UV due to observed trends in clouds and sunshine duration in the Austrian Alpine regions during the last decades. The earlier period shows significant changes between +4.1 %/dec and +6.9 %/dec at six stations in Austria, mainly due to significant decreases in the total ozone column of up to -3.7 %/dec. Significant positive trends of around +2 %/dec are found in cloud and aerosol modification factors at most stations, along with observed positive trends in sunshine duration that are statistically significant at the eastern and southern stations. In spite of ozone layer recovery since the mid-1990s, the latter period does not reveal any statistically significant changes in erythemal UV irradiation.
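
A trend expressed in %/dec, as in the figures above, is typically a least-squares slope normalised by the series mean; the abstract does not state the exact convention used, so the sketch below is an assumption.

```python
import numpy as np

def trend_percent_per_decade(years, values):
    """Least-squares linear trend expressed in percent of the series mean
    per decade (a common convention; the paper's exact normalisation
    is not stated in the abstract)."""
    slope, _ = np.polyfit(years, values, 1)          # units per year
    return 100.0 * slope * 10.0 / np.mean(values)

# Synthetic series rising by 0.5 units per decade around a mean of 100.
years = np.arange(1977, 1996)
vals = 100.0 + 0.05 * (years - years.mean())
trend = trend_percent_per_decade(years, vals)
```

On real annual UV-dose series, the same slope would be accompanied by a significance test (e.g. on the regression slope) before a change such as +4.1 %/dec is called significant.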

  3. Robustness of Reconstructed Ancestral Protein Functions to Statistical Uncertainty.

    PubMed

    Eick, Geeta N; Bridgham, Jamie T; Anderson, Douglas P; Harms, Michael J; Thornton, Joseph W

    2017-02-01

    Hypotheses about the functions of ancient proteins and the effects of historical mutations on them are often tested using ancestral protein reconstruction (APR): phylogenetic inference of ancestral sequences followed by synthesis and experimental characterization. Usually, some sequence sites are ambiguously reconstructed, with two or more statistically plausible states. The extent to which the inferred functions and mutational effects are robust to uncertainty about the ancestral sequence has not been studied systematically. To address this issue, we reconstructed ancestral proteins in three domain families that have different functions, architectures, and degrees of uncertainty; we then experimentally characterized the functional robustness of these proteins when uncertainty was incorporated using several approaches, including sampling amino acid states from the posterior distribution at each site and incorporating the alternative amino acid state at every ambiguous site in the sequence into a single "worst plausible case" protein. In every case, qualitative conclusions about the ancestral proteins' functions and the effects of key historical mutations were robust to sequence uncertainty, with similar functions observed even when scores of alternate amino acids were incorporated. There was some variation in quantitative descriptors of function among plausible sequences, suggesting that experimentally characterizing robustness is particularly important when quantitative estimates of ancient biochemical parameters are desired. The worst plausible case method appears to provide an efficient strategy for characterizing the functional robustness of ancestral proteins to large amounts of sequence uncertainty. Sampling from the posterior distribution sometimes produced artifactually nonfunctional proteins for sequences reconstructed with substantial ambiguity. © The Author 2016. Published by Oxford University Press on behalf of the Society for Molecular Biology and Evolution.
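
The "worst plausible case" construction can be sketched as follows; the 0.2 plausibility cutoff and the data structures are hypothetical, chosen only to illustrate swapping in the best-supported alternative state at each ambiguous site.

```python
def worst_plausible_case(ml_seq, posteriors, cutoff=0.2):
    """Build a 'worst plausible case' sequence: at every site where an
    alternative amino acid has posterior probability >= `cutoff`, swap in
    the best such alternative. The cutoff value is illustrative only.

    `posteriors` maps site index -> dict of amino acid -> posterior prob.
    """
    seq = list(ml_seq)
    n_swapped = 0
    for i, probs in posteriors.items():
        # Alternatives: every state other than the maximum-likelihood one.
        alts = {aa: p for aa, p in probs.items() if aa != ml_seq[i]}
        if alts:
            aa, p = max(alts.items(), key=lambda kv: kv[1])
            if p >= cutoff:
                seq[i] = aa
                n_swapped += 1
    return "".join(seq), n_swapped

ml = "MKTAY"
post = {
    1: {"K": 0.55, "R": 0.45},   # ambiguous site: R is a plausible alternative
    3: {"A": 0.97, "G": 0.03},   # confidently reconstructed site: no swap
}
wpc, n = worst_plausible_case(ml, post)
```

The single variant concentrates all plausible alternative states into one protein, so one experiment bounds the effect of sequence uncertainty instead of characterizing many posterior samples.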

  4. Influence of adaptive statistical iterative reconstruction algorithm on image quality in coronary computed tomography angiography

    PubMed Central

    Thygesen, Jesper; Gerke, Oke; Egstrup, Kenneth; Waaler, Dag; Lambrechtsen, Jess

    2016-01-01

    Background Coronary computed tomography angiography (CCTA) requires high spatial and temporal resolution, increased low contrast resolution for the assessment of coronary artery stenosis, plaque detection, and/or non-coronary pathology. Therefore, new reconstruction algorithms, particularly iterative reconstruction (IR) techniques, have been developed in an attempt to improve image quality with no cost in radiation exposure. Purpose To evaluate whether adaptive statistical iterative reconstruction (ASIR) enhances perceived image quality in CCTA compared to filtered back projection (FBP). Material and Methods Thirty patients underwent CCTA due to suspected coronary artery disease. Images were reconstructed using FBP, 30% ASIR, and 60% ASIR. Ninety image sets were evaluated by five observers using the subjective visual grading analysis (VGA) and assessed by proportional odds modeling. Objective quality assessment (contrast, noise, and the contrast-to-noise ratio [CNR]) was analyzed with linear mixed effects modeling on log-transformed data. The need for ethical approval was waived by the local ethics committee as the study only involved anonymously collected clinical data. Results VGA showed significant improvements in sharpness by comparing FBP with ASIR, resulting in odds ratios of 1.54 for 30% ASIR and 1.89 for 60% ASIR (P = 0.004). The objective measures showed significant differences between FBP and 60% ASIR (P < 0.0001) for noise, with an estimated ratio of 0.82, and for CNR, with an estimated ratio of 1.26. Conclusion ASIR improved the subjective image quality of parameter sharpness and, objectively, reduced noise and increased CNR. PMID:28405477
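
The objective measures above (noise ratio 0.82, CNR ratio 1.26) follow from the standard contrast-to-noise definition; the sketch below uses synthetic data mirroring the reported 0.82 noise ratio, with invented attenuation values.

```python
import numpy as np

def contrast_to_noise(roi_vals, bg_vals):
    """CNR as (ROI mean - background mean) / background SD; one common
    definition, which may differ in detail from the paper's."""
    contrast = np.mean(roi_vals) - np.mean(bg_vals)
    noise = np.std(bg_vals, ddof=1)
    return contrast / noise

rng = np.random.default_rng(0)
bg_fbp  = rng.normal(50.0, 10.0, 500)    # FBP-like background noise
bg_asir = rng.normal(50.0, 8.2, 500)     # ~18% lower noise (ratio 0.82)
roi = np.full(100, 300.0)                # hypothetical contrast-filled vessel
cnr_gain = contrast_to_noise(roi, bg_asir) / contrast_to_noise(roi, bg_fbp)
```

With contrast essentially unchanged, reducing background noise by the reported factor of 0.82 alone raises CNR by roughly 1/0.82 ≈ 1.22, close to the 1.26 ratio the study estimates.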

  5. Remote sensing of environmental particulate pollutants - Optical methods for determinations of size distribution and complex refractive index

    NASA Technical Reports Server (NTRS)

    Fymat, A. L.

    1978-01-01

    A unifying approach, based on a generalization of Pearson's differential equation of statistical theory, is proposed for both the representation of particulate size distribution and the interpretation of radiometric measurements in terms of this parameter. A single-parameter gamma-type distribution is introduced, and it is shown that inversion can only provide the dimensionless parameter, r/ab (where r = particle radius, a = effective radius, b = effective variance), at least when the distribution vanishes at both ends. The basic inversion problem in reconstructing the particle size distribution is analyzed, and the existing methods are reviewed (with emphasis on their capabilities) and classified. A two-step strategy is proposed for simultaneously determining the complex refractive index and reconstructing the size distribution of atmospheric particulates.
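
The gamma-type size distribution with effective radius a and effective variance b mentioned above is commonly written in the Hansen form; the paper's exact parameterisation may differ, so the sketch below simply verifies numerically that a and b in this form really are the effective radius and variance.

```python
import numpy as np

def gamma_size_distribution(r, a, b):
    """Hansen-style gamma size distribution (unnormalised):
        n(r) ∝ r**((1 - 3b)/b) * exp(-r / (a*b)),
    with a the effective radius and b the effective variance. Note the
    dimensionless ratio r/(a*b) in the exponential, echoing the r/ab
    parameter discussed in the abstract."""
    return r ** ((1.0 - 3.0 * b) / b) * np.exp(-r / (a * b))

# Effective radius/variance are cross-section-weighted moments:
#   a_eff = <r>_w,  b_eff = <(r - a_eff)^2>_w / a_eff^2,  with w = r^2 n(r).
r = np.linspace(1e-6, 50.0, 200001)
a, b = 1.0, 0.2
w = r**2 * gamma_size_distribution(r, a, b)      # geometric cross-section weight
a_eff = (r * w).sum() / w.sum()                  # grid spacing cancels in ratios
b_eff = (((r - a_eff) ** 2) * w).sum() / (a_eff**2 * w.sum())
```

For a = 1.0 and b = 0.2 the weighted moments recover a_eff ≈ 1.0 and b_eff ≈ 0.2, confirming the interpretation of the two parameters.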

  6. Internal Climatic Influences From Secular To Multi-decadal Scales: Comparison Of NAO Reconstructions.

    NASA Astrophysics Data System (ADS)

    Nicolle, M.; Debret, M.; Massei, N.; de Vernal, A.

    2017-12-01

    In the Northern Hemisphere, the North Atlantic Oscillation (NAO) is the dominant mode of variability in winter atmospheric circulation, with large impacts on temperature, precipitation, and storm tracks in the North Atlantic sector. To understand the role of this internal climatic oscillation in past climate variability, several proxy-based reconstructions of the NAO were published during the last decades. Two of them cover the past 1,200 years: a first NAO reconstruction published by Trouet et al. (2009) and a second proposed by Ortega et al. (2015). The major discrepancy between the two reconstructions concerns the transition period between the Medieval Climate Anomaly (MCA) and the Little Ice Age. The first NAO reconstruction shows persistent positive phases during the MCA (AD 1000-1300), but this dominant trend is not present in the reconstruction proposed by Ortega et al. (2015), raising the question of how the predictors used to reconstruct the NAO signal influence the result over the last millennium. In this study, we compare the two NAO reconstructions in order to determine the effect of a bi-proxy versus a multi-proxy approach on the reconstructed signal. Using statistical and wavelet analysis methods, we conclude that the number of predictors used does not have an impact on the reconstructed signal. The two reconstructed signals are characterized by similar variability expressed from multi-decadal to multi-secular scales. The major difference in trend seems instead to be linked to the type of predictor, and particularly to the use of Greenland ice cores in the reconstruction proposed in 2015.

  7. COMPARISON OF ADAPTIVE STATISTICAL ITERATIVE RECONSTRUCTION (ASIR™) AND MODEL-BASED ITERATIVE RECONSTRUCTION (VEO™) FOR PAEDIATRIC ABDOMINAL CT EXAMINATIONS: AN OBSERVER PERFORMANCE STUDY OF DIAGNOSTIC IMAGE QUALITY.

    PubMed

    Hultenmo, Maria; Caisander, Håkan; Mack, Karsten; Thilander-Klang, Anne

    2016-06-01

    The diagnostic image quality of 75 paediatric abdominal computed tomography (CT) examinations reconstructed with two different iterative reconstruction (IR) algorithms, adaptive statistical IR (ASiR™) and model-based IR (Veo™), was compared. Axial and coronal images were reconstructed with 70% ASiR with the Soft™ convolution kernel and with the Veo algorithm. The thickness of the reconstructed images was 2.5 or 5 mm depending on the scanning protocol used. Four radiologists graded the delineation of six abdominal structures and the diagnostic usefulness of the image quality. The Veo reconstruction significantly improved the visibility of most of the structures compared with ASiR in all subgroups of images. For coronal images, the Veo reconstruction resulted in significantly improved ratings of the diagnostic use of the image quality compared with the ASiR reconstruction. This was not seen for the axial images. The greatest improvement using Veo reconstruction was observed for the 2.5 mm coronal slices. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.

  8. Shift-Invariant Image Reconstruction of Speckle-Degraded Images Using Bispectrum Estimation

    DTIC Science & Technology

    1990-05-01

    process with the requisite negative exponential pdf. I call this model the Negative Exponential Model (NENI). The NENI flowchart is seen in Figure 6. Figures 13d-g: statistical histograms and phase. Figure 14a: truth object speckled via the NENI (histogram of speckle).

  9. Outcomes of Fat-Augmented Latissimus Dorsi (FALD) Flap Versus Implant-Based Latissimus Dorsi Flap for Delayed Post-radiation Breast Reconstruction.

    PubMed

    Demiri, Efterpi C; Dionyssiou, Dimitrios D; Tsimponis, Antonios; Goula, Christina-Olga; Pavlidis, Leonidas C; Spyropoulou, Georgia-Alexandra

    2018-06-01

    Although free abdominal flaps constitute the gold standard in post-radiation delayed breast reconstruction, latissimus dorsi-based methods offer alternative reconstructive options. This retrospective study aims to compare outcomes of delayed breast reconstruction using the fat-augmented latissimus dorsi (FALD) autologous reconstruction and the latissimus dorsi-plus-implant reconstruction in irradiated women. We reviewed the files of 47 post-mastectomy irradiated patients (aged 29-73 years) who underwent delayed latissimus dorsi-based breast reconstruction between 2010 and 2016. Twenty-three patients (Group A) had an extended FALD flap and twenty-four patients (Group B) an implant-based latissimus dorsi reconstruction. Patients' age, BMI, pregnancies, volume of injected fat, implant size, postoperative complications, and secondary surgical procedures were recorded and analyzed. Age, BMI, pregnancies, and donor-site complications were similar in both groups (p > 0.05). The mean fat volume injected initially was 254 cc (range 130-380 cc/session); the mean implant volume was 323 cc (range 225-420 cc). Breast complications were significantly fewer in Group A (one wound dehiscence, two oily cysts) compared to Group B (three cases of wound dehiscence, two extrusions, thirteen severe capsular contractures). No statistically significant difference between groups was documented for secondary procedures; although the mean number of additional surgeries per patient was higher in Group A, these were secondary lipofilling sessions, whereas in Group B they were revision surgeries for complications. The FALD flap constitutes an alternative method for delayed autologous reconstruction after post-mastectomy irradiation, avoiding implant-related complications. Although additional fat graft sessions might be required, it provides an ideal autogenous reconstructive option for thin nulliparous women with a small opposite breast and adequate fat donor sites.
This journal requires that authors assign a level of evidence to each article. For a full description of these Evidence-Based Medicine ratings, please refer to the Table of Contents or the online Instructions to Authors www.springer.com/00266 .

  10. Experimental Reconstructions of Surface Temperature using the PAGES 2k Network

    NASA Astrophysics Data System (ADS)

    Wang, Jianghao; Emile-Geay, Julien; Vaccaro, Adam; Guillot, Dominique; Rajaratnam, Bala

    2014-05-01

    Climate field reconstructions (CFRs) of the Common Era provide uniquely detailed characterizations of natural, low-frequency climate variability beyond the instrumental era. However, the accuracy and robustness of global-scale CFRs remain an open question. For instance, Wang et al. (2013) showed that CFRs are greatly method-dependent, highlighting the danger of forming dynamical interpretations based on a single reconstruction (e.g. Mann et al., 2009). This study will present a set of new reconstructions of global surface temperature and compare them with existing reconstructions from the IPCC AR5. The reconstructions are derived using the PAGES 2k network, which is composed of 501 high-resolution temperature-sensitive proxies from eight continental-scale regions (PAGES2K Consortium, 2013). Four CFR techniques are used to produce reconstructions: RegEM-TTLS, the Mann et al. (2009) implementation of RegEM-TTLS (hereinafter M09-TTLS), CCA (Smerdon et al., 2010), and GraphEM (Guillot et al., submitted). First, we show that CFRs derived from the PAGES 2k network exhibit greater inter-method similarities than the same methods applied to the proxy network of Mann et al. (2009) (hereinafter the M09 network). For instance, reconstructed NH mean temperature series using the PAGES 2k network are in better agreement over the last millennium than the M09-based reconstructions. Remarkably, for the reconstructed temperature difference between the Medieval Climate Anomaly and the Little Ice Age, the spatial patterns of the M09-based reconstructions diverge greatly among methods. On the other hand, not a single PAGES 2k-based CFR displays the La Niña-like pattern found in Mann et al. (2009); rather, no systematic pattern emerges between the two epochs.
Next, we quantify uncertainties associated with the PAGES 2k-based CFRs via ensemble methods, and show that GraphEM and CCA are less sensitive to random noise than RegEM-TTLS and M09-TTLS, consistent with pseudoproxy studies (Wang et al., 2014). The updated set of reconstructions, with uncertainties, will provide a broader context for the evaluation of the unusual character of the 20th century warming. The reconstructions will also be used to constrain fingerprinting analyses, which is particularly useful in discriminating between externally forced signals and internal variability. Reference: Guillot, D., B. Rajaratnam, and J. Emile-Geay, Statistical paleoclimate reconstructions via markov random fields, Ann. Appl. Stat., submitted. Mann, M. E., Z. Zhang, S. Rutherford, R. S. Bradley, M. K. Hughes, D. Shindell, C. Ammann, G. Faluvegi, and F. Ni, Global signatures and dynamical origins of the little ice age and medieval climate anomaly, Science, 326 (5957), 1256-1260, 2009. PAGES2K Consortium, Continental-scale temperature variability during the past two millennia, Nature Geosci, 6(5), 339-346, 2013. Smerdon, J. E., A. Kaplan, D. Chang, and M. N. Evans, A pseudoproxy evaluation of the CCA and RegEM methods for reconstructing climate fields of the last millennium*, J. Clim., 23(18), 4856-4880, 2010. Wang, J., J. Emile-Geay, A. D. Vaccaro, and D. Guillot, Fragility of estimated spatial temperature patterns in climate field reconstructions of the Common Era, Abstract PP41B-03 presented at Fall Meeting, AGU, San Francisco, Calif., 2013. Wang, J., J. Emile-Geay, D. Guillot, J. Smerdon, and B. Rajaratnam, Evaluating climate field reconstruction techniques using improved emulations of real-world conditions, Clim.Past, 10(1), 1-19, 2014.

  11. Comment on 'Imaging of prompt gamma rays emitted during delivery of clinical proton beams with a Compton camera: feasibility studies for range verification'.

    PubMed

    Sitek, Arkadiusz

    2016-12-21

    The origin ensemble (OE) algorithm is a new method used for image reconstruction from nuclear tomographic data. The main advantage of this algorithm is the ease of implementation for complex tomographic models and the sound statistical theory. In this comment, the author provides the basics of the statistical interpretation of OE and gives suggestions for the improvement of the algorithm in the application to prompt gamma imaging as described in Polf et al (2015 Phys. Med. Biol. 60 7085).
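
The origin ensemble algorithm admits a compact sketch: each detected event is assigned an origin voxel along its line of response, and Metropolis moves resample the origins so that states are visited according to the OE stationary distribution. The 1D toy below assumes uniform sensitivity and idealised LORs; under that simplification the acceptance ratio for moving one event reduces to (c_new + 1)/c_old.

```python
import random

def origin_ensemble(events, n_voxels, iters=20000, seed=1):
    """Toy 1D origin-ensemble reconstruction. Each event carries the list of
    voxels it could have originated from (its 'LOR'); a state is a complete
    origin assignment. Simplifications: uniform sensitivity, no attenuation."""
    rng = random.Random(seed)
    origins = [rng.choice(lor) for lor in events]   # initial random assignment
    counts = [0] * n_voxels
    for o in origins:
        counts[o] += 1
    tally = [0] * n_voxels                          # accumulated occupancy
    for _ in range(iters):
        e = rng.randrange(len(events))
        old, new = origins[e], rng.choice(events[e])
        if new != old:
            # Metropolis acceptance for the OE stationary distribution.
            if rng.random() < (counts[new] + 1) / counts[old]:
                counts[old] -= 1
                counts[new] += 1
                origins[e] = new
        for v, c in enumerate(counts):              # average over the chain
            tally[v] += c
    total = sum(tally)
    return [t / total * len(events) for t in tally]

# Toy data: 40 events whose LORs all cross voxel 2; half also cross voxel 1,
# half also voxel 3. OE mass should concentrate on the shared voxel.
events = [[1, 2]] * 20 + [[2, 3]] * 20
img = origin_ensemble(events, n_voxels=5)
```

The chain-averaged occupancy is the OE image estimate; in the toy case nearly all events settle on the voxel every LOR shares.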

  12. Comment on ‘Imaging of prompt gamma rays emitted during delivery of clinical proton beams with a Compton camera: feasibility studies for range verification’

    NASA Astrophysics Data System (ADS)

    Sitek, Arkadiusz

    2016-12-01

    The origin ensemble (OE) algorithm is a new method used for image reconstruction from nuclear tomographic data. The main advantage of this algorithm is the ease of implementation for complex tomographic models and the sound statistical theory. In this comment, the author provides the basics of the statistical interpretation of OE and gives suggestions for the improvement of the algorithm in the application to prompt gamma imaging as described in Polf et al (2015 Phys. Med. Biol. 60 7085).

  13. Position reconstruction in LUX

    DOE PAGES

    Akerib, D. S.; Alsum, S.; Araújo, H. M.; ...

    2018-02-01

    The (x, y) position reconstruction method used in the analysis of the complete exposure of the Large Underground Xenon (LUX) experiment is presented. The algorithm is based on a statistical test that makes use of an iterative method to recover the photomultiplier tube (PMT) light response directly from the calibration data. The light response functions make use of a two-dimensional functional form to account for the photons reflected on the inner walls of the detector. To increase the resolution for small pulses, a photon counting technique was employed to describe the response of the PMTs. The reconstruction was assessed with calibration data including 83mKr (releasing a total energy of 41.5 keV) and 3H (β⁻ with Q = 18.6 keV) decays, and a deuterium-deuterium (D-D) neutron beam (2.45 MeV). Within the detector's fiducial volume, the reconstruction has achieved an (x, y) position uncertainty of σ = 0.82 cm and σ = 0.17 cm for events of only 200 and 4,000 detected electroluminescence photons, respectively. Such signals are associated with electron recoils of energies ~0.25 keV and ~10 keV, respectively. Lastly, the reconstructed position of the smallest events, with a single electron emitted from the liquid surface (22 detected photons), has a horizontal (x, y) uncertainty of 2.13 cm.
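
The statistical test for (x, y) can be sketched as a Poisson maximum-likelihood fit of observed PMT counts against per-PMT light response functions. Everything below is a toy: a 1/d² light response stands in for the calibration-derived LRFs, and a coarse grid search replaces the real optimiser.

```python
import math

def expected_counts(x, y, pmts, total):
    """Share of `total` photons each PMT expects for a source at (x, y),
    using a toy 1/(d^2 + 1) light response; the real LRFs are recovered
    iteratively from calibration data, as the abstract describes."""
    weights = [1.0 / ((x - px)**2 + (y - py)**2 + 1.0) for px, py in pmts]
    s = sum(weights)
    return [total * w / s for w in weights]

def reconstruct_xy(observed, pmts, grid=50, extent=5.0):
    """Grid-search the Poisson log-likelihood  sum_i [n_i ln mu_i - mu_i]."""
    total = sum(observed)
    best, best_ll = None, -math.inf
    for ix in range(grid + 1):
        for iy in range(grid + 1):
            x = -extent + 2 * extent * ix / grid
            y = -extent + 2 * extent * iy / grid
            mu = expected_counts(x, y, pmts, total)
            ll = sum(n * math.log(m) - m for n, m in zip(observed, mu))
            if ll > best_ll:
                best, best_ll = (x, y), ll
    return best

pmts = [(-4.0, -4.0), (-4.0, 4.0), (4.0, -4.0), (4.0, 4.0)]
truth = (1.0, -2.0)
counts = expected_counts(*truth, pmts, total=4000)   # noise-free 'data'
x_hat, y_hat = reconstruct_xy(counts, pmts)
```

With noise-free counts the likelihood peaks at the true position; with Poisson-fluctuated counts the spread of the estimate plays the role of the σ values quoted above.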

  14. Position reconstruction in LUX

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Akerib, D. S.; Alsum, S.; Araújo, H. M.

    The (x, y) position reconstruction method used in the analysis of the complete exposure of the Large Underground Xenon (LUX) experiment is presented. The algorithm is based on a statistical test that makes use of an iterative method to recover the photomultiplier tube (PMT) light response directly from the calibration data. The light response functions make use of a two-dimensional functional form to account for the photons reflected on the inner walls of the detector. To increase the resolution for small pulses, a photon counting technique was employed to describe the response of the PMTs. The reconstruction was assessed with calibration data including 83mKr (releasing a total energy of 41.5 keV) and 3H (β⁻ with Q = 18.6 keV) decays, and a deuterium-deuterium (D-D) neutron beam (2.45 MeV). Within the detector's fiducial volume, the reconstruction has achieved an (x, y) position uncertainty of σ = 0.82 cm and σ = 0.17 cm for events of only 200 and 4,000 detected electroluminescence photons, respectively. Such signals are associated with electron recoils of energies ~0.25 keV and ~10 keV, respectively. Lastly, the reconstructed position of the smallest events, with a single electron emitted from the liquid surface (22 detected photons), has a horizontal (x, y) uncertainty of 2.13 cm.

  15. Position reconstruction in LUX

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Akerib, D. S.; Alsum, S.; Araújo, H. M.

    © 2018 IOP Publishing Ltd and Sissa Medialab. The (x, y) position reconstruction method used in the analysis of the complete exposure of the Large Underground Xenon (LUX) experiment is presented. The algorithm is based on a statistical test that makes use of an iterative method to recover the photomultiplier tube (PMT) light response directly from the calibration data. The light response functions make use of a two-dimensional functional form to account for the photons reflected on the inner walls of the detector. To increase the resolution for small pulses, a photon counting technique was employed to describe the response of the PMTs. The reconstruction was assessed with calibration data including 83mKr (releasing a total energy of 41.5 keV) and 3H (β⁻ with Q = 18.6 keV) decays, and a deuterium-deuterium (D-D) neutron beam (2.45 MeV). Within the detector's fiducial volume, the reconstruction has achieved an (x, y) position uncertainty of σ = 0.82 cm and σ = 0.17 cm for events of only 200 and 4,000 detected electroluminescence photons, respectively. Such signals are associated with electron recoils of energies ~0.25 keV and ~10 keV, respectively. The reconstructed position of the smallest events, with a single electron emitted from the liquid surface (22 detected photons), has a horizontal (x, y) uncertainty of 2.13 cm.

  16. Position reconstruction in LUX

    DOE PAGES

    Akerib, D. S.; Alsum, S.; Araújo, H. M.; ...

    2018-02-01

    © 2018 IOP Publishing Ltd and Sissa Medialab. The (x, y) position reconstruction method used in the analysis of the complete exposure of the Large Underground Xenon (LUX) experiment is presented. The algorithm is based on a statistical test that makes use of an iterative method to recover the photomultiplier tube (PMT) light response directly from the calibration data. The light response functions make use of a two dimensional functional form to account for the photons reflected on the inner walls of the detector. To increase the resolution for small pulses, a photon counting technique was employed to describe the response of the PMTs. The reconstruction was assessed with calibration data including 83mKr (releasing a total energy of 41.5 keV) and 3H (β− with Q = 18.6 keV) decays, and a deuterium-deuterium (D-D) neutron beam (2.45 MeV). Within the detector's fiducial volume, the reconstruction has achieved an (x, y) position uncertainty of σ = 0.82 cm and σ = 0.17 cm for events of only 200 and 4,000 detected electroluminescence photons, respectively. Such signals are associated with electron recoils of energies ~0.25 keV and ~10 keV, respectively. The reconstructed position of the smallest events with a single electron emitted from the liquid surface (22 detected photons) has a horizontal (x, y) uncertainty of 2.13 cm.

  17. A Fourier-based compressed sensing technique for accelerated CT image reconstruction using first-order methods.

    PubMed

    Choi, Kihwan; Li, Ruijiang; Nam, Haewon; Xing, Lei

    2014-06-21

    As a solution to iterative CT image reconstruction, first-order methods are prominent for their large-scale capability and fast convergence rate [Formula: see text]. In practice, a CT system matrix with a large condition number may lead to slow convergence despite the theoretically promising upper bound. The aim of this study is to develop a Fourier-based scaling technique to enhance the convergence speed of first-order methods applied to CT image reconstruction. Instead of working in the projection domain, we transform the projection data and construct a data fidelity model in Fourier space. Inspired by the filtered backprojection formalism, the data are appropriately weighted in Fourier space. We formulate an optimization problem based on weighted least-squares in Fourier space and total-variation (TV) regularization in image space for parallel-beam, fan-beam and cone-beam CT geometries. To achieve the maximum computational speed, the optimization problem is solved using a fast iterative shrinkage-thresholding algorithm with backtracking line search and a GPU implementation of projection/backprojection. The performance of the proposed algorithm is demonstrated through a series of digital simulation and experimental phantom studies. The results are compared with existing TV-regularized techniques based on statistics-based weighted least-squares as well as the basic algebraic reconstruction technique. The proposed Fourier-based compressed sensing (CS) method significantly improves both the image quality and the convergence rate compared to the existing CS techniques.
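    The fast iterative shrinkage-thresholding algorithm (FISTA) named above can be sketched on a small synthetic problem. For brevity this sketch uses an l1 penalty, whose proximal step is a one-line soft threshold, in place of the paper's TV regularization and Fourier-domain weighting; the matrix and data are made up.

```python
import numpy as np

def fista_l1(A, b, lam, n_iter=500):
    """FISTA for min_x 0.5*||Ax - b||^2 + lam*||x||_1.
    (The paper uses TV regularization; l1 keeps the proximal step to one line.)"""
    L = np.linalg.norm(A, 2) ** 2                 # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    z = x.copy()
    t = 1.0
    for _ in range(n_iter):
        g = A.T @ (A @ z - b)                     # gradient at the extrapolated point
        step = z - g / L
        x_new = np.sign(step) * np.maximum(np.abs(step) - lam / L, 0.0)  # soft threshold
        t_new = (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0
        z = x_new + ((t - 1.0) / t_new) * (x_new - x)   # momentum: the O(1/k^2) trick
        x, t = x_new, t_new
    return x

rng = np.random.default_rng(1)
A = rng.standard_normal((80, 40))
x_true = np.zeros(40)
x_true[[3, 17, 30]] = [2.0, -1.5, 1.0]
b = A @ x_true + 0.01 * rng.standard_normal(80)
x_hat = fista_l1(A, b, lam=0.5)
```

    The momentum step on `z` is what distinguishes FISTA from plain proximal gradient descent and gives the accelerated convergence rate the abstract alludes to.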

  18. Statistical iterative material image reconstruction for spectral CT using a semi-empirical forward model

    NASA Astrophysics Data System (ADS)

    Mechlem, Korbinian; Ehn, Sebastian; Sellerer, Thorsten; Pfeiffer, Franz; Noël, Peter B.

    2017-03-01

    In spectral computed tomography (spectral CT), the additional information about the energy dependence of attenuation coefficients can be exploited to generate material selective images. These images have found applications in various areas such as artifact reduction, quantitative imaging or clinical diagnosis. However, significant noise amplification on material decomposed images remains a fundamental problem of spectral CT. Most spectral CT algorithms separate the process of material decomposition and image reconstruction. Separating these steps is suboptimal because the full statistical information contained in the spectral tomographic measurements cannot be exploited. Statistical iterative reconstruction (SIR) techniques provide an alternative, mathematically elegant approach to obtaining material selective images with improved tradeoffs between noise and resolution. Furthermore, image reconstruction and material decomposition can be performed jointly. This is accomplished by a forward model which directly connects the (expected) spectral projection measurements and the material selective images. To obtain this forward model, detailed knowledge of the different photon energy spectra and the detector response was assumed in previous work. However, accurately determining the spectrum is often difficult in practice. In this work, a new algorithm for statistical iterative material decomposition is presented. It uses a semi-empirical forward model which relies on simple calibration measurements. Furthermore, an efficient optimization algorithm based on separable surrogate functions is employed. This partially negates one of the major shortcomings of SIR, namely high computational cost and long reconstruction times. Numerical simulations and real experiments show strongly improved image quality and reduced statistical bias compared to projection-based material decomposition.
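    A minimal sketch of projection-domain statistical material decomposition, a simpler cousin of the joint reconstruction described above: for one detector pixel with two energy bins, the expected counts follow a Beer-Lambert forward model and the two material path lengths are found by Poisson maximum likelihood. All spectra and attenuation values below are hypothetical, not calibrated quantities.

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical two-bin model for one detector pixel: effective attenuation
# coefficients (1/cm) of two basis materials in each energy bin, and the
# unattenuated counts per bin.
mu = np.array([[0.35, 0.20],    # bin 1: [material A, material B]
               [0.25, 0.08]])   # bin 2
I0 = np.array([1.0e5, 8.0e4])

def expected_counts(thickness):
    """Beer-Lambert forward model for the two energy bins."""
    return I0 * np.exp(-mu @ np.maximum(thickness, 0.0))

def neg_log_lik(thickness, counts):
    lam = expected_counts(thickness)
    return np.sum(lam - counts * np.log(lam))   # Poisson NLL up to a constant

rng = np.random.default_rng(2)
true_thickness = np.array([2.0, 5.0])           # material path lengths (cm)
counts = rng.poisson(expected_counts(true_thickness))
fit = minimize(neg_log_lik, x0=[1.0, 1.0], args=(counts,), method="Nelder-Mead")
```

    The paper's method goes further by coupling this likelihood to the image through the system matrix and optimizing with separable surrogates; the statistical principle is the same.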

  19. The effects of medial ulnar collateral ligament reconstruction on Major League pitching performance.

    PubMed

    Keller, Robert A; Steffes, Matthew J; Zhuo, David; Bey, Michael J; Moutzouros, Vasilios

    2014-11-01

    Medial ulnar collateral ligament (MUCL) reconstruction is commonly performed on Major League Baseball (MLB) pitchers. Previous studies have reported that most pitchers return to presurgical statistical performance levels after MUCL reconstruction. Pitching performance data--specifically, earned run average (ERA), walks and hits per inning pitched (WHIP), winning percentage, and innings pitched--were acquired for 168 MLB pitchers who had undergone MUCL reconstruction. These data were averaged over the 3 years before surgery and the 3 years after surgery and also acquired from 178 age-matched, uninjured MLB pitchers. Of the pitchers who had MUCL reconstruction surgery, 87% returned to MLB pitching. However, compared with presurgical data, pitching performance declined in terms of ERA (P = .001), WHIP (P = .011), and innings pitched (P = .026). Pitching performance also declined in the season before the surgery compared with previous years (ERA, P = .014; WHIP, P = .036; innings pitched, P < .001; winning percentage, P = .004). Compared with age-matched control pitchers, the MUCL reconstruction pitchers had significantly more major league experience at the same age (P < .001). MUCL reconstruction allows most players to return to pitching at the major league level. However, after MUCL reconstruction, there is a statistically significant decline in pitching performance. There appears to be a statistically significant decline in pitching performance the year before reconstructive surgery, and this decline is also a risk factor for requiring surgery. In addition, there is an increased risk of MUCL reconstruction for pitchers who enter the major leagues at a younger age. Copyright © 2014 Journal of Shoulder and Elbow Surgery Board of Trustees. Published by Elsevier Inc. All rights reserved.

  20. The use of perturbed physics ensembles and emulation in palaeoclimate reconstruction (Invited)

    NASA Astrophysics Data System (ADS)

    Edwards, T. L.; Rougier, J.; Collins, M.

    2010-12-01

    Climate is a coherent process, with correlations and dependencies across space, time, and climate variables. However, reconstructions of palaeoclimate traditionally consider individual pieces of information independently, rather than making use of this covariance structure. Such reconstructions are at risk of being unphysical or at least implausible. Climate simulators such as General Circulation Models (GCMs), on the other hand, contain climate system theory in the form of dynamical equations describing physical processes, but are imperfect and computationally expensive. These two datasets - pointwise palaeoclimate reconstructions and climate simulator evaluations - contain complementary information, and a statistical synthesis can produce a palaeoclimate reconstruction that combines them while not ignoring their limitations. We use an ensemble of simulators with perturbed parameterisations, to capture the uncertainty about the simulator variant, and our method also accounts for structural uncertainty. The resulting reconstruction contains a full expression of climate uncertainty, not just pointwise but also jointly over locations. Such joint information is crucial in determining spatially extensive features such as isotherms, or the location of the tree-line. A second outcome of the statistical analysis is a refined distribution for the simulator parameters. In this way, information from palaeoclimate observations can be used directly in quantifying uncertainty in future climate projections. The main challenge is the expense of running a large scale climate simulator: each evaluation of an atmosphere-ocean GCM takes several months of computing time. The solution is to interpret the ensemble of evaluations within an 'emulator', which is a statistical model of the simulator. 
This technique has been used fruitfully in the statistical field of Computer Models for two decades, and has recently been applied in estimating uncertainty in future climate predictions in the UKCP09 (http://ukclimateprojections.defra.gov.uk). But only in the last couple of years has it developed to the point where it can be applied to large-scale spatial fields. We construct an emulator for the mid-Holocene (6000 calendar years BP) temperature anomaly over North America, at the resolution of our simulator (2.5° latitude by 3.75° longitude). This allows us to explore the behaviour of simulator variants that we could not afford to evaluate directly. We introduce the technique of 'co-emulation' of two versions of the climate simulator: the coupled atmosphere-ocean model HadCM3, and an equivalent with a simplified ocean, HadSM3. Running two different versions of a simulator is a powerful tool for increasing the information yield from a fixed budget of computer time, but the results must be combined statistically to account for the reduced fidelity of the quicker version. Emulators provide the appropriate framework.
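    A Gaussian-process emulator of the kind described above can be sketched in a few lines: a handful of "expensive" simulator runs train a statistical model that predicts, with uncertainty, the output at untried parameter settings. The one-parameter simulator, kernel, and length-scale below are all hypothetical stand-ins for a real GCM and its parameter space.

```python
import numpy as np

def simulator(theta):
    """Stand-in for an expensive climate simulator: one scalar output
    as a function of one input parameter (purely hypothetical)."""
    return np.sin(3.0 * theta) + 0.5 * theta

# A small ensemble of simulator evaluations: the expensive part.
theta_train = np.linspace(0.0, 2.0, 8)
y_train = simulator(theta_train)

def rbf(a, b, ell=0.4):
    """Squared-exponential covariance with unit variance and length-scale ell."""
    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ell ** 2)

# Gaussian-process emulator: posterior mean and variance at untried settings.
K = rbf(theta_train, theta_train) + 1e-6 * np.eye(8)
theta_test = np.linspace(0.0, 2.0, 101)
Ks = rbf(theta_test, theta_train)
mean = Ks @ np.linalg.solve(K, y_train)
var = 1.0 - np.sum(Ks * np.linalg.solve(K, Ks.T).T, axis=1)
```

    Eight runs suffice to predict the simulator accurately across the whole parameter range, which is exactly why emulation makes perturbed-physics ensembles affordable.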

  1. A statistical approach for inferring the 3D structure of the genome.

    PubMed

    Varoquaux, Nelle; Ay, Ferhat; Noble, William Stafford; Vert, Jean-Philippe

    2014-06-15

    Recent technological advances allow the measurement, in a single Hi-C experiment, of the frequencies of physical contacts among pairs of genomic loci at a genome-wide scale. The next challenge is to infer, from the resulting DNA-DNA contact maps, accurate 3D models of how chromosomes fold and fit into the nucleus. Many existing inference methods rely on multidimensional scaling (MDS), in which the pairwise distances of the inferred model are optimized to resemble pairwise distances derived directly from the contact counts. These approaches, however, often optimize a heuristic objective function and require strong assumptions about the biophysics of DNA to transform interaction frequencies to spatial distances, and thereby may lead to incorrect structure reconstruction. We propose a novel approach to infer a consensus 3D structure of a genome from Hi-C data. The method incorporates a statistical model of the contact counts, assuming that the counts between two loci follow a Poisson distribution whose intensity decreases with the physical distance between the loci. The method can automatically adjust the transfer function relating the spatial distance to the Poisson intensity and infer a genome structure that best explains the observed data. We compare two variants of our Poisson method, with or without optimization of the transfer function, to four different MDS-based algorithms (two metric MDS methods using different stress functions, a non-metric version of MDS, and ChromSDE, a recently described, advanced MDS method) on a wide range of simulated datasets. We demonstrate that the Poisson models reconstruct better structures than all MDS-based methods, particularly at low coverage and high resolution, and we highlight the importance of optimizing the transfer function. 
On publicly available Hi-C data from mouse embryonic stem cells, we show that the Poisson methods lead to more reproducible structures than MDS-based methods when we use data generated using different restriction enzymes, and when we reconstruct structures at different resolutions. A Python implementation of the proposed method is available at http://cbio.ensmp.fr/pastis. © The Author 2014. Published by Oxford University Press.
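    The Poisson model described above can be sketched on a toy example: pairwise contact counts are drawn with intensity beta*d^alpha, and 2D coordinates are recovered by maximizing the Poisson likelihood. The transfer-function parameters are assumed known here, and the optimizer is initialized near a plausible configuration to keep the sketch short; the real method also optimizes the transfer function and works in 3D at genome scale.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.spatial.distance import pdist

rng = np.random.default_rng(3)
n = 8
true_xy = rng.standard_normal((n, 2))             # toy 2D "genome" configuration
beta, alpha = 50.0, -3.0                          # assumed count-distance transfer function
counts = rng.poisson(beta * pdist(true_xy) ** alpha)

def neg_log_lik(flat):
    """Poisson negative log-likelihood of the contact counts given coordinates."""
    d = np.maximum(pdist(flat.reshape(n, 2)), 1e-3)
    lam = beta * d ** alpha
    return np.sum(lam - counts * np.log(lam))

# Initialised near a plausible configuration to keep the sketch short.
x0 = (true_xy + 0.3 * rng.standard_normal((n, 2))).ravel()
fit = minimize(neg_log_lik, x0, method="L-BFGS-B")
d_hat = pdist(fit.x.reshape(n, 2))
```

    Since the likelihood depends only on pairwise distances, the solution is defined up to a rigid transform; comparing recovered and true distance matrices is the natural check.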

  2. Dynamic PET image reconstruction integrating temporal regularization associated with respiratory motion correction for applications in oncology

    NASA Astrophysics Data System (ADS)

    Merlin, Thibaut; Visvikis, Dimitris; Fernandez, Philippe; Lamare, Frédéric

    2018-02-01

    Respiratory motion reduces both the qualitative and quantitative accuracy of PET images in oncology. This impact is more significant for quantitative applications based on kinetic modeling, where dynamic acquisitions are associated with limited statistics due to the necessity of enhanced temporal resolution. The aim of this study is to address these drawbacks by combining a respiratory motion correction approach with temporal regularization within a single reconstruction algorithm for dynamic PET imaging. Elastic transformation parameters for the motion correction are estimated from the non-attenuation-corrected PET images. The derived displacement matrices are subsequently used in a list-mode based OSEM reconstruction algorithm integrating a temporal regularization between the 3D dynamic PET frames, based on temporal basis functions. These functions are simultaneously estimated at each iteration, along with their relative coefficients for each image voxel. Quantitative evaluation has been performed using dynamic FDG PET/CT acquisitions of lung cancer patients acquired on a GE DRX system. The performance of the proposed method is compared with that of a standard multi-frame OSEM reconstruction algorithm. The proposed method achieved substantial improvements in terms of noise reduction while accounting for loss of contrast due to respiratory motion. Results on simulated data showed that the proposed 4D algorithms led to bias reduction values up to 40% in both tumor and blood regions for similar standard deviation levels, in comparison with a standard 3D reconstruction. Patlak parameter estimations on reconstructed images with the proposed reconstruction methods resulted in 30% and 40% bias reduction in the tumor and lung region respectively for the Patlak slope, and a 30% bias reduction for the intercept in the tumor region (a similar Patlak intercept was achieved in the lung area). 
Incorporation of the respiratory motion correction using an elastic model along with a temporal regularization in the reconstruction process of the PET dynamic series led to substantial quantitative improvements and motion artifact reduction. Future work will include the integration of a linear FDG kinetic model, in order to directly reconstruct parametric images.
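    The temporal-regularization idea, representing each voxel's time-activity curve as a combination of a few shared temporal basis functions, can be illustrated without the list-mode OSEM machinery: a low-rank factorization of noisy dynamic frames recovers basis and coefficients jointly. All kinetics and noise levels below are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(4)
n_frames, n_vox = 20, 500
t = np.linspace(0.1, 60.0, n_frames)           # frame mid-times (min), hypothetical

# Two hypothetical temporal basis functions (blood-like and trapping-like kinetics)
basis_true = np.stack([np.exp(-t / 5.0), 1.0 - np.exp(-t / 20.0)])
coef_true = rng.uniform(0.0, 10.0, (2, n_vox))
clean = basis_true.T @ coef_true               # frames x voxels
noisy = rng.poisson(clean * 50.0) / 50.0       # crude stand-in for frame noise

# Joint estimation of basis functions and voxel coefficients as a rank-2
# factorization (truncated SVD); the list-mode OSEM update is omitted.
U, s, Vt = np.linalg.svd(noisy, full_matrices=False)
denoised = (U[:, :2] * s[:2]) @ Vt[:2]
```

    Constraining the time dimension to a low-dimensional basis suppresses frame noise while preserving the kinetics, which is the quantitative benefit the abstract reports.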

  3. Dynamic PET image reconstruction integrating temporal regularization associated with respiratory motion correction for applications in oncology.

    PubMed

    Merlin, Thibaut; Visvikis, Dimitris; Fernandez, Philippe; Lamare, Frédéric

    2018-02-13

    Respiratory motion reduces both the qualitative and quantitative accuracy of PET images in oncology. This impact is more significant for quantitative applications based on kinetic modeling, where dynamic acquisitions are associated with limited statistics due to the necessity of enhanced temporal resolution. The aim of this study is to address these drawbacks by combining a respiratory motion correction approach with temporal regularization within a single reconstruction algorithm for dynamic PET imaging. Elastic transformation parameters for the motion correction are estimated from the non-attenuation-corrected PET images. The derived displacement matrices are subsequently used in a list-mode based OSEM reconstruction algorithm integrating a temporal regularization between the 3D dynamic PET frames, based on temporal basis functions. These functions are simultaneously estimated at each iteration, along with their relative coefficients for each image voxel. Quantitative evaluation has been performed using dynamic FDG PET/CT acquisitions of lung cancer patients acquired on a GE DRX system. The performance of the proposed method is compared with that of a standard multi-frame OSEM reconstruction algorithm. The proposed method achieved substantial improvements in terms of noise reduction while accounting for loss of contrast due to respiratory motion. Results on simulated data showed that the proposed 4D algorithms led to bias reduction values up to 40% in both tumor and blood regions for similar standard deviation levels, in comparison with a standard 3D reconstruction. Patlak parameter estimations on reconstructed images with the proposed reconstruction methods resulted in 30% and 40% bias reduction in the tumor and lung region respectively for the Patlak slope, and a 30% bias reduction for the intercept in the tumor region (a similar Patlak intercept was achieved in the lung area). 
Incorporation of the respiratory motion correction using an elastic model along with a temporal regularization in the reconstruction process of the PET dynamic series led to substantial quantitative improvements and motion artifact reduction. Future work will include the integration of a linear FDG kinetic model, in order to directly reconstruct parametric images.

  4. A long time ago, where were the galaxies far, far away?

    NASA Astrophysics Data System (ADS)

    Sirko, Edwin

    How did the universe get from then to now? I examine this broad cosmological problem from two perspectives: forward and backward. In the forward perspective, I implement a method of generating initial conditions for N-body simulations that accurately models real-space statistical properties, such as the mass variance in spheres and the correlation function. The method requires running ensembles of simulations because the power in the DC mode is no longer assumed to be zero. For moderately sized boxes, I demonstrate that the new method corrects the previously widely ignored underestimate in the mass variance in spheres and the shape of the correlation function. In the backward perspective, I use reconstruction techniques to transform a simulated or observed cosmological density field back in time to the early universe. A simple reconstruction technique is used to sharpen the baryon acoustic peak in the correlation function in simulations. At z = 0.3, one can reduce the sample variance error bar on the acoustic scale by at least a factor of 2 and in principle by nearly a factor of 4. This has significant implications for future observational surveys aiming to measure the cosmological distance scale. Another reconstruction technique, Monge-Ampere-Kantorovich reconstruction, is used on evolved N-body simulations to calibrate its effectiveness in recovering the linear power spectrum. A new "memory model" parametrizes the evolution of Fourier modes into two parameters that describe the amount of memory a given mode retains and how much the mode has been scrambled by nonlinear evolution. Reconstruction is spectacularly successful in restoring the memory of Fourier modes and reducing the scrambling; however, the success of reconstruction is not so obvious when considering the power spectrum alone. 
    I apply reconstruction to a volume-limited sample of galaxies from the Sloan Digital Sky Survey and conclude that linear bias is not a good model in the range 0.01 h Mpc^-1 ≲ k ≲ 0.5 h Mpc^-1. The most impressive success of reconstruction applied to real data is that the confidence interval on the normalization of the power spectrum is typically halved when using the reconstructed instead of the nonlinear power spectrum.

  5. Exact and approximate Fourier rebinning algorithms for the solution of the data truncation problem in 3-D PET.

    PubMed

    Bouallègue, Fayçal Ben; Crouzet, Jean-François; Comtat, Claude; Fourcade, Marjolaine; Mohammadi, Bijan; Mariano-Goulart, Denis

    2007-07-01

    This paper presents an extended 3-D exact rebinning formula in the Fourier space that leads to an iterative reprojection algorithm (iterative FOREPROJ), which enables the estimation of unmeasured oblique projection data on the basis of the whole set of measured data. To a first approximation, this analytical formula also leads to an extended Fourier rebinning equation that is the basis for an approximate reprojection algorithm (extended FORE). These algorithms were evaluated on numerically simulated 3-D positron emission tomography (PET) data for the solution of the truncation problem, i.e., the estimation of the missing portions in the oblique projection data, before the application of algorithms that require complete projection data such as some rebinning methods (FOREX) or 3-D reconstruction algorithms (3DRP or direct Fourier methods). By taking advantage of the full 3-D data statistics, the iterative FOREPROJ reprojection provides a reliable alternative to the classical FOREPROJ method, which only exploits the low-statistics nonoblique data. It significantly improves the quality of the external reconstructed slices without loss of spatial resolution. As for the approximate extended FORE algorithm, it clearly exhibits limitations due to axial interpolations, but will require clinical studies with more realistic measured data in order to decide on its pertinence.

  6. Development of methods for establishing nutrient criteria in lakes and reservoirs: A review.

    PubMed

    Huo, Shouliang; Ma, Chunzi; Xi, Beidou; Zhang, Yali; Wu, Fengchang; Liu, Hongliang

    2018-05-01

    Nutrient criteria provide a scientific foundation for the comprehensive evaluation, prevention, control and management of water eutrophication. In this review, the literature was examined to systematically evaluate the benefits, drawbacks, and applications of statistical analysis, paleolimnological reconstruction, stressor-response model, and model inference approaches for nutrient criteria determination. The developments and challenges in the determination of nutrient criteria in lakes and reservoirs are presented. Reference lakes can reflect the original states of lakes, but reference sites are often unavailable. Using the paleolimnological reconstruction method, it is often difficult to reconstruct the historical nutrient conditions of shallow lakes in which the sediments are easily disturbed. The model inference approach requires sufficient data to identify the appropriate equations and characterize a waterbody or group of waterbodies, thereby increasing the difficulty of establishing nutrient criteria. The stressor-response model is a potential development direction for nutrient criteria determination, and the mechanisms of stressor-response models should be studied further. Based on studies of the relationships among water ecological criteria, eutrophication, nutrient criteria and plankton, methods for determining nutrient criteria should be closely integrated with water management requirements. Copyright © 2017. Published by Elsevier B.V.

  7. [Near infrared reflectance spectroscopy (NIRS): a novel approach to reconstructing historical changes of primary productivity in Antarctic lake].

    PubMed

    Chen, Qian-Qian; Liu, Xiao-Dong; Liu, Wen-Qi; Jiang, Shan

    2011-10-01

    Compared with traditional chemical analysis methods, reflectance spectroscopy has the advantages of speed, minimal or no sample preparation, non-destructiveness, and low cost. In order to explore the potential application of spectroscopy technology in paleolimnological studies of Antarctic lakes, we took a lake sediment core in Mochou Lake at Zhongshan Station, Antarctica, and analyzed the near infrared reflectance spectroscopy (NIRS) data of the sedimentary samples. The results showed that the factor loadings of principal component analysis (PCA) displayed a depth-profile change pattern very similar to that of the S2 index, a reliable proxy for the change in historical lake primary productivity. The correlation analysis showed that the values of the PCA factor loading and S2 were significantly correlated, suggesting that it is feasible to infer paleoproductivity changes recorded in Antarctic lakes using NIRS technology. Compared to the traditional method of the trough area between 650 and 700 nm, the authors found that the PCA statistical approach was more accurate for reconstructing the change in historical lake primary productivity. The results reported here demonstrate that reflectance spectroscopy can provide a rapid method for the reconstruction of lake palaeoenvironmental change in the remote Antarctic regions.
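    The PCA approach described above can be sketched as follows: synthetic sediment spectra carry a productivity-driven absorption band plus noise, and the first principal-component scores recover the productivity history down-core. The band position, spectral range, and noise level are hypothetical, not calibrated NIRS values.

```python
import numpy as np

rng = np.random.default_rng(5)
n_depths, n_wavelengths = 40, 200
wl = np.linspace(400.0, 2500.0, n_wavelengths)       # nm

# Hypothetical productivity history down-core, and a hypothetical
# pigment-like absorption band imprinted on each spectrum.
productivity = np.sin(np.linspace(0.0, 3.0 * np.pi, n_depths)) + 2.0
signature = np.exp(-((wl - 675.0) / 40.0) ** 2)
spectra = (np.outer(productivity, signature)
           + 0.05 * rng.standard_normal((n_depths, n_wavelengths)))

# PCA via SVD of the mean-centred spectra; the first-component scores
# should track the productivity history down-core.
centred = spectra - spectra.mean(axis=0)
U, s, Vt = np.linalg.svd(centred, full_matrices=False)
pc1_scores = U[:, 0] * s[0]
```

    Because the sign of a principal component is arbitrary, it is the magnitude of the correlation with the productivity proxy that matters.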

  8. Prediction of Knee Joint Contact Forces From External Measures Using Principal Component Prediction and Reconstruction.

    PubMed

    Saliba, Christopher M; Clouthier, Allison L; Brandon, Scott C E; Rainbow, Michael J; Deluzio, Kevin J

    2018-05-29

    Abnormal loading of the knee joint contributes to the pathogenesis of knee osteoarthritis. Gait retraining is a non-invasive intervention that aims to reduce knee loads by providing audible, visual, or haptic feedback of gait parameters. The computational expense of joint contact force prediction has limited real-time feedback to surrogate measures of the contact force, such as the knee adduction moment. We developed a method to predict knee joint contact forces using motion analysis and a statistical regression model that can be implemented in near real-time. Gait waveform variables were deconstructed using principal component analysis and a linear regression was used to predict the principal component scores of the contact force waveforms. Knee joint contact force waveforms were reconstructed using the predicted scores. We tested our method using a heterogeneous population of asymptomatic controls and subjects with knee osteoarthritis. The reconstructed contact force waveforms had mean (SD) RMS differences of 0.17 (0.05) body weight compared with the contact forces predicted by a musculoskeletal model. Our method successfully predicted subject-specific shape features of contact force waveforms and is a potentially powerful tool in biofeedback and clinical gait analysis.
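    The principal component prediction and reconstruction pipeline described above amounts to three linear steps: PCA of the force waveforms, regression from gait parameters to PC scores, and reconstruction from predicted scores. A sketch on synthetic two-peaked waveforms, with all shapes and parameters hypothetical:

```python
import numpy as np

rng = np.random.default_rng(6)
n_sub, n_t = 60, 101                 # subjects, samples over the gait cycle
t = np.linspace(0.0, 1.0, n_t)

# Hypothetical data: two gait parameters drive a two-peaked contact-force waveform.
params = rng.standard_normal((n_sub, 2))
shapes = np.stack([np.exp(-((t - 0.25) / 0.08) ** 2),
                   np.exp(-((t - 0.75) / 0.08) ** 2)])
forces = params @ shapes + 0.05 * rng.standard_normal((n_sub, n_t))

# Step 1: PCA of the contact-force waveforms.
mean_f = forces.mean(axis=0)
U, s, Vt = np.linalg.svd(forces - mean_f, full_matrices=False)
scores = U[:, :2] * s[:2]            # retained PC scores

# Step 2: linear regression from gait parameters to PC scores (first 50 subjects).
X = np.column_stack([np.ones(n_sub), params])
B, *_ = np.linalg.lstsq(X[:50], scores[:50], rcond=None)

# Step 3: reconstruct held-out waveforms from the predicted scores.
pred = X[50:] @ B @ Vt[:2] + mean_f
rms = np.sqrt(np.mean((pred - forces[50:]) ** 2))
```

    All three steps are matrix multiplications at prediction time, which is what makes near real-time feedback feasible.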

  9. Generation of dense statistical connectomes from sparse morphological data

    PubMed Central

    Egger, Robert; Dercksen, Vincent J.; Udvary, Daniel; Hege, Hans-Christian; Oberlaender, Marcel

    2014-01-01

    Sensory-evoked signal flow, at cellular and network levels, is primarily determined by the synaptic wiring of the underlying neuronal circuitry. Measurements of synaptic innervation, connection probabilities and subcellular organization of synaptic inputs are thus among the most active fields of research in contemporary neuroscience. Methods to measure these quantities range from electrophysiological recordings over reconstructions of dendrite-axon overlap at light-microscopic levels to dense circuit reconstructions of small volumes at electron-microscopic resolution. However, quantitative and complete measurements at subcellular resolution and mesoscopic scales to obtain all local and long-range synaptic in/outputs for any neuron within an entire brain region are beyond present methodological limits. Here, we present a novel concept, implemented within an interactive software environment called NeuroNet, which allows (i) integration of sparsely sampled (sub)cellular morphological data into an accurate anatomical reference frame of the brain region(s) of interest, (ii) up-scaling to generate an average dense model of the neuronal circuitry within the respective brain region(s) and (iii) statistical measurements of synaptic innervation between all neurons within the model. We illustrate our approach by generating a dense average model of the entire rat vibrissal cortex, providing the required anatomical data, and illustrate how to measure synaptic innervation statistically. Comparing our results with data from paired recordings in vitro and in vivo, as well as with reconstructions of synaptic contact sites at light- and electron-microscopic levels, we find that our in silico measurements are in line with previous results. PMID:25426033

  10. Variance analysis of x-ray CT sinograms in the presence of electronic noise background

    PubMed Central

    Ma, Jianhua; Liang, Zhengrong; Fan, Yi; Liu, Yan; Huang, Jing; Chen, Wufan; Lu, Hongbing

    2012-01-01

    Purpose: Low-dose x-ray computed tomography (CT) is clinically desired. Accurate noise modeling is a fundamental issue for low-dose CT image reconstruction via statistics-based sinogram restoration or statistical iterative image reconstruction. In this paper, the authors analyzed the statistical moments of low-dose CT data in the presence of electronic noise background. Methods: The authors first studied the statistical moment properties of detected signals in CT transmission domain, where the noise of detected signals is considered as quanta fluctuation upon electronic noise background. Then the authors derived, via the Taylor expansion, a new formula for the mean–variance relationship of the detected signals in CT sinogram domain, wherein the image formation becomes a linear operation between the sinogram data and the unknown image, rather than a nonlinear operation in the CT transmission domain. To get insight into the derived new formula by experiments, an anthropomorphic torso phantom was scanned repeatedly by a commercial CT scanner at five different mAs levels from 100 down to 17. Results: The results demonstrated that the electronic noise background is significant when low-mAs (or low-dose) scan is performed. Conclusions: The influence of the electronic noise background should be considered in low-dose CT imaging. PMID:22830738
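    The transmission-domain moment property described above, Poisson quanta superposed on a zero-mean Gaussian electronic background, can be checked directly by simulation: the variance of the detected signal equals the mean photon count plus the electronic noise variance. The photon numbers and noise level below are hypothetical, not values from the paper's scanner.

```python
import numpy as np

rng = np.random.default_rng(7)
N0 = 5.0e4          # incident photons per ray (hypothetical)
p = 2.0             # line integral of attenuation (hypothetical)
sigma_e = 15.0      # electronic noise std, photon-equivalent units (hypothetical)

# Detected signal: Poisson quanta fluctuation on a zero-mean Gaussian
# electronic noise background, repeated many times for one detector bin.
n_rep = 200_000
quanta = rng.poisson(N0 * np.exp(-p), n_rep).astype(float)
measured = quanta + rng.normal(0.0, sigma_e, n_rep)

# Moment property in the transmission domain:
# Var(measured) = E[quanta] + sigma_e^2
empirical_var = measured.var()
predicted_var = measured.mean() + sigma_e ** 2
```

    At low dose the transmitted mean shrinks while sigma_e^2 stays fixed, which is why the electronic background becomes significant in low-mAs scans.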

  11. Study for online range monitoring with the interaction vertex imaging method.

    PubMed

    Finck, Ch; Karakaya, Y; Reithinger, V; Rescigno, R; Baudot, J; Constanzo, J; Juliani, D; Krimmer, J; Rinaldi, I; Rousseau, M; Testa, E; Vanstalle, M; Ray, C

    2017-11-21

    Ion beam therapy enables a highly accurate dose conformation delivery to the tumor due to the finite range of charged ions in matter (i.e. Bragg peak (BP)). Consequently, the dose profile is very sensitive to patients' anatomical changes as well as minor mispositioning, and so it requires improved dose control techniques. Proton interaction vertex imaging (IVI) could offer an online range control in carbon ion therapy. In this paper, a statistical method was used to study the sensitivity of the IVI technique on experimental data obtained from the Heidelberg Ion-Beam Therapy Center. The vertices of secondary protons were reconstructed with pixelized silicon detectors. The statistical study used the χ2 test of the reconstructed vertex distributions for a given displacement of the BP position as a function of the impinging carbon ions. Different phantom configurations were used with or without bone equivalent tissue and air inserts. The inflection points in the fall-off region of the longitudinal vertex distribution were computed using different methods, while the relation with the BP position was established. In the present setup, the resolution of the BP position was about 4-5 mm in the homogeneous phantom under clinical conditions (10^6 incident carbon ions). Our results show that the IVI method could therefore monitor the BP position with a promising resolution in clinical conditions.
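    The chi-square comparison of reconstructed vertex distributions can be sketched as follows: histograms of a toy longitudinal vertex profile (uniform plateau with a sigmoid fall-off at the Bragg peak; all numbers hypothetical) are compared with and without a 5 mm BP displacement, against a statistically identical reference run.

```python
import numpy as np

rng = np.random.default_rng(8)

def vertex_depths(n, bp=150.0, fall=5.0):
    """Toy longitudinal vertex profile (mm): flat plateau, then a
    sigmoid fall-off around the Bragg peak position bp."""
    z = rng.uniform(0.0, 200.0, 4 * n)
    keep = rng.random(4 * n) < 1.0 / (1.0 + np.exp((z - bp) / fall))
    return z[keep][:n]

bins = np.linspace(0.0, 200.0, 41)
ref, _ = np.histogram(vertex_depths(20000), bins=bins)                 # reference run
shifted, _ = np.histogram(vertex_depths(20000, bp=155.0), bins=bins)   # 5 mm BP shift
same, _ = np.histogram(vertex_depths(20000), bins=bins)                # identical conditions

def chi2(h1, h2):
    """Chi-square statistic for two histograms with equal totals."""
    m = (h1 + h2) > 0
    return np.sum((h1[m] - h2[m]) ** 2 / (h1[m] + h2[m]))
```

    A BP displacement inflates the statistic well above its no-shift baseline, which is the sensitivity the paper quantifies as a function of the number of incident ions.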

  12. Study for online range monitoring with the interaction vertex imaging method

    NASA Astrophysics Data System (ADS)

    Finck, Ch; Karakaya, Y.; Reithinger, V.; Rescigno, R.; Baudot, J.; Constanzo, J.; Juliani, D.; Krimmer, J.; Rinaldi, I.; Rousseau, M.; Testa, E.; Vanstalle, M.; Ray, C.

    2017-12-01

    Ion beam therapy enables a highly accurate dose conformation delivery to the tumor due to the finite range of charged ions in matter (i.e. the Bragg peak (BP)). Consequently, the dose profile is very sensitive to patients' anatomical changes as well as minor mispositioning, and so it requires improved dose control techniques. Proton interaction vertex imaging (IVI) could offer an online range control in carbon ion therapy. In this paper, a statistical method was used to study the sensitivity of the IVI technique on experimental data obtained from the Heidelberg Ion-Beam Therapy Center. The vertices of secondary protons were reconstructed with pixelized silicon detectors. The statistical study used the χ² test of the reconstructed vertex distributions for a given displacement of the BP position as a function of the number of impinging carbon ions. Different phantom configurations were used with or without bone-equivalent tissue and air inserts. The inflection points in the fall-off region of the longitudinal vertex distribution were computed using different methods, and the relation with the BP position was established. In the present setup, the resolution of the BP position was about 4-5 mm in the homogeneous phantom under clinical conditions (10⁶ incident carbon ions). Our results show that the IVI method could therefore monitor the BP position with a promising resolution in clinical conditions.
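
    A χ² comparison of binned vertex distributions of this kind can be sketched as follows (the sigmoid fall-off model, bin width, and ion counts are illustrative assumptions, not the experimental geometry):

```python
import numpy as np

rng = np.random.default_rng(1)

def vertex_depths(n, fall_off_mm, rng):
    """Toy longitudinal vertex distribution: a uniform build-up region
    followed by a sigmoid fall-off around the Bragg-peak position."""
    z = rng.uniform(0.0, 200.0, size=n)
    keep = rng.random(n) < 1.0 / (1.0 + np.exp((z - fall_off_mm) / 3.0))
    return z[keep]

edges = np.linspace(0.0, 200.0, 51)          # 4 mm bins
ref, _ = np.histogram(vertex_depths(100_000, 150.0, rng), bins=edges)

def chi2_vs_ref(shift_mm, n_ions):
    """Pearson chi-square of a test distribution against the scaled reference."""
    obs, _ = np.histogram(vertex_depths(n_ions, 150.0 + shift_mm, rng), bins=edges)
    exp = ref * obs.sum() / ref.sum()        # normalize reference to test counts
    good = exp > 5                           # usual validity cut for chi-square bins
    return np.sum((obs[good] - exp[good]) ** 2 / exp[good])

chi2_0 = chi2_vs_ref(0.0, 100_000)   # no range shift: statistic near its dof
chi2_5 = chi2_vs_ref(5.0, 100_000)   # 5 mm Bragg-peak displacement
print(chi2_0, chi2_5)
```

    The statistic stays near its degrees of freedom when the fall-off is unshifted and grows sharply once the displacement exceeds the bin-level statistical fluctuations, which is what makes it usable as a range-shift detector.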

  13. Regularization design for high-quality cone-beam CT of intracranial hemorrhage using statistical reconstruction

    NASA Astrophysics Data System (ADS)

    Dang, H.; Stayman, J. W.; Xu, J.; Sisniega, A.; Zbijewski, W.; Wang, X.; Foos, D. H.; Aygun, N.; Koliatsos, V. E.; Siewerdsen, J. H.

    2016-03-01

    Intracranial hemorrhage (ICH) is associated with pathologies such as hemorrhagic stroke and traumatic brain injury. Multi-detector CT is the current front-line imaging modality for detecting ICH (fresh blood contrast 40-80 HU, down to 1 mm). Flat-panel detector (FPD) cone-beam CT (CBCT) offers a potential alternative with a smaller scanner footprint, greater portability, and lower cost, potentially well suited to deployment at the point of care outside standard diagnostic radiology and emergency room settings. Previous studies have suggested reliable detection of ICH down to 3 mm in CBCT using high-fidelity artifact correction and penalized weighted least-squares (PWLS) image reconstruction with a post-artifact-correction noise model. However, ICH reconstructed by traditional image regularization exhibits nonuniform spatial resolution and noise due to interaction between the statistical weights and regularization, which potentially degrades the detectability of ICH. In this work, we propose three regularization methods designed to overcome these challenges. The first two compute spatially varying certainty for uniform spatial resolution and noise, respectively. The third computes spatially varying regularization strength to achieve uniform "detectability," combining both spatial resolution and noise in a manner analogous to a delta-function detection task. Experiments were conducted on a CBCT test-bench, and image quality was evaluated for simulated ICH in different regions of an anthropomorphic head. The first two methods improved the uniformity in spatial resolution and noise compared to traditional regularization. The third exhibited the highest uniformity in detectability among all methods and best overall image quality. The proposed regularization provides a valuable means to achieve uniform image quality in CBCT of ICH and is being incorporated in a CBCT prototype for ICH imaging.
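
    The certainty-based idea — scaling the penalty by the local statistical weights so that the penalty-to-data ratio, and hence resolution, stays spatially uniform — can be sketched in a 1D PWLS denoising analogue (the weights, penalty form, and gradient-descent solver are illustrative assumptions, not the authors' CBCT implementation):

```python
import numpy as np

rng = np.random.default_rng(2)

# Piecewise-constant 1D "image" observed with spatially varying noise
x_true = np.concatenate([np.zeros(40), 60.0 * np.ones(40), np.zeros(40)])
sigma = np.where(np.arange(120) < 60, 2.0, 8.0)    # right half is much noisier
y = x_true + rng.normal(0.0, sigma)

w = 1.0 / sigma**2                 # statistical (inverse-variance) weights
kappa = np.sqrt(w)                 # certainty term: penalty scales with sqrt(w)
c = kappa[:-1] * kappa[1:]         # per-edge penalty coefficients
beta0 = 5.0                        # global regularization strength

# Gradient descent on 0.5*sum(w*(x-y)^2) + beta0*sum(c*(x[j+1]-x[j])^2);
# because c tracks w, the penalty-to-data ratio is spatially uniform, which is
# the mechanism behind uniform resolution in certainty-based designs.
x = y.copy()
step = 0.5 / (w.max() + 4.0 * beta0 * c.max())     # conservative step size
for _ in range(2000):
    d = np.diff(x)
    grad = w * (x - y)
    grad[:-1] -= 2.0 * beta0 * c * d
    grad[1:] += 2.0 * beta0 * c * d
    x -= step * grad

print(f"noise std, right flat region: raw {y[90:118].std():.2f} -> PWLS {x[90:118].std():.2f}")
```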

  14. Estimating topological properties of weighted networks from limited information

    NASA Astrophysics Data System (ADS)

    Gabrielli, Andrea; Cimini, Giulio; Garlaschelli, Diego; Squartini, Angelo

    A typical problem met when studying complex systems is the limited information available on their topology, which hinders our understanding of their structural and dynamical properties. A paramount example is provided by financial networks, whose data are privacy protected. Yet, the estimation of systemic risk strongly depends on the detailed structure of the interbank network. The resulting challenge is that of using aggregate information to statistically reconstruct a network and correctly predict its higher-order properties. Standard approaches either generate unrealistically dense networks, or fail to reproduce the observed topology by assigning homogeneous link weights. Here we develop a reconstruction method, based on statistical mechanics concepts, that exploits the empirical link density in a highly non-trivial way. Technically, our approach consists of the preliminary estimation of node degrees from empirical node strengths and link density, followed by a maximum-entropy inference based on a combination of empirical strengths and estimated degrees. Our method is successfully tested on the international trade network and the interbank money market, and represents a valuable tool for gaining insights on privacy-protected or partially accessible systems. Acknowledgement to ``Growthcom'' ICT - EC project (Grant No: 611272) and ``Crisislab'' Italian Project.
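
    The degree-estimation step — calibrating a strength-driven (fitness-like) link probability so that the expected number of links matches the observed density — can be sketched as follows (the strengths and link count are synthetic assumptions, not the trade or interbank data):

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical aggregate information: node strengths plus a total link count
s = rng.lognormal(mean=0.0, sigma=1.5, size=60)    # observed node strengths
L_obs = 240.0                                      # observed number of links

def expected_links(z):
    """Expected link count under the strength-driven model
    p_ij = z*s_i*s_j / (1 + z*s_i*s_j)."""
    ps = z * np.outer(s, s)
    ps = ps / (1.0 + ps)
    return np.triu(ps, k=1).sum()

# Calibrate z by bisection (in log space, since z may span many decades)
# so that the expected density matches the observed one
lo, hi = 1e-12, 1e6
for _ in range(200):
    mid = np.sqrt(lo * hi)
    if expected_links(mid) < L_obs:
        lo = mid
    else:
        hi = mid
z = np.sqrt(lo * hi)

p = z * np.outer(s, s)
p = p / (1.0 + p)
np.fill_diagonal(p, 0.0)
k_est = p.sum(axis=1)              # estimated expected degree of each node
print(f"z = {z:.3e}, expected links = {expected_links(z):.2f}")
```

    The estimated degrees `k_est` would then feed the maximum-entropy inference stage the abstract describes.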

  15. Extension of the SIM Hydrometeorological Reanalysis Over the Entire 20th Century by Combination of Observations and Statistical Downscaling

    NASA Astrophysics Data System (ADS)

    Minvielle, M.; Céron, J.; Page, C.

    2013-12-01

    The SAFRAN-ISBA-MODCOU (SIM) system is a combination of three components: an atmospheric analysis system (SAFRAN) providing the atmospheric forcing for a land surface model (ISBA) that computes surface water and energy budgets, and a hydrological model (MODCOU) that provides river flows and the levels of several aquifers. The variables generated by the SIM chain constitute the SIM reanalysis, whose current version only covers the 1958-2012 period. However, long climate datasets are required for the evaluation and verification of climate hindcasts/forecasts and to isolate the contribution of natural decadal variability from that of anthropogenic forcing to climate variations. The aim of this work is to extend the fine-mesh SIM reanalysis to the entire 20th century, focusing especially on temperature and rainfall over France, but also on soil wetness and river flows. This extension will first allow a detailed investigation of the influence of decadal variability on France at very fine spatial scales and will provide crucial information for climate model evaluation. Before 1958, the density of available observations from Météo-France necessary to force SAFRAN (rainfall, snow, wind, temperature, humidity, cloudiness) is much lower than today, and not sufficient to produce a correct SIM reanalysis. It has therefore been decided to combine the available atmospheric observations over the past decades with a statistical downscaling algorithm to overcome the lack of observations. The DSCLIM software package, implemented by CERFACS and based on a weather-typing statistical methodology, will be used as the statistical downscaling method to reconstruct the atmospheric variables necessary to force the ISBA-MODCOU hydrological component. The first stage of this work was to estimate and compare the biases and strengths of the two approaches in their ability to reconstruct the past decades. To this end, SIM hydro-meteorological experiments were performed for some recent years, with the number of observations artificially reduced to levels similar to those of 1910, 1930 and 1950. Concurrently, the same recent years were downscaled by DSCLIM and used to force ISBA-MODCOU. Afterwards, additional experiments with modified parameters in the DSCLIM algorithm were performed in order to adapt the methodology to the study case and thereby improve its performance. Several configurations of the DSCLIM algorithm were applied to the entire century, using the NOAA20CR reanalysis as the large-scale predictor. The reconstructed atmospheric variables are compared to the available observations over the entire century to estimate the ability of the statistical downscaling method to reproduce correct interannual to multidecadal variability. Finally, a novel method is tested: available observations over past decades are introduced into the DSCLIM algorithm in order to obtain a reconstructed dataset that is as realistic as possible.
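
    The weather-typing step of a downscaling scheme of this kind can be sketched as: cluster the large-scale fields into a few circulation types, then map each type to the statistics of the local variable (all data, sizes, and the tiny k-means classifier here are illustrative assumptions, not the DSCLIM algorithm itself):

```python
import numpy as np

rng = np.random.default_rng(4)

# Synthetic daily large-scale predictors (e.g. circulation PCs) and a local
# variable tied to the circulation regime; names and sizes are illustrative.
n_days, n_pc, K = 3000, 5, 4
regime = rng.integers(0, K, size=n_days)
centers_true = rng.normal(0.0, 3.0, size=(K, n_pc))
X = centers_true[regime] + rng.normal(0.0, 0.5, size=(n_days, n_pc))
local = regime * 2.0 + rng.normal(0.0, 0.3, size=n_days)   # local variable proxy

# Tiny k-means: learn the weather types from the large-scale fields alone
centroids = X[rng.choice(n_days, K, replace=False)]
for _ in range(50):
    labels = np.argmin(((X[:, None, :] - centroids[None]) ** 2).sum(-1), axis=1)
    centroids = np.array([X[labels == k].mean(axis=0) if np.any(labels == k)
                          else centroids[k] for k in range(K)])

# Downscaling table: per-type statistics of the local variable
type_mean = np.array([local[labels == k].mean() if np.any(labels == k)
                      else np.nan for k in range(K)])

# Reconstruct an unobserved day from its large-scale state alone
x_new = centers_true[2] + rng.normal(0.0, 0.5, size=n_pc)
k_new = int(np.argmin(((centroids - x_new) ** 2).sum(-1)))
print(f"weather type {k_new}, reconstructed local value {type_mean[k_new]:.2f}")
```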

  16. Image quality improvement using model-based iterative reconstruction in low dose chest CT for children with necrotizing pneumonia.

    PubMed

    Sun, Jihang; Yu, Tong; Liu, Jinrong; Duan, Xiaomin; Hu, Di; Liu, Yong; Peng, Yun

    2017-03-16

    Model-based iterative reconstruction (MBIR) is a promising reconstruction method which could improve CT image quality at low radiation dose. The purpose of this study was to demonstrate the advantage of using MBIR for noise reduction and image quality improvement in low dose chest CT for children with necrotizing pneumonia, over the adaptive statistical iterative reconstruction (ASIR) and conventional filtered back-projection (FBP) techniques. Twenty-six children with necrotizing pneumonia (aged 2 months to 11 years) who underwent standard-of-care low dose CT scans were included. Thin-slice (0.625 mm) images were retrospectively reconstructed using the MBIR, ASIR and conventional FBP techniques. Image noise and signal-to-noise ratio (SNR) for these thin-slice images were measured and statistically analyzed using ANOVA. Two radiologists independently analyzed the image quality for detecting necrotic lesions, and results were compared using a Friedman's test. The radiation dose for the overall patient population was 0.59 mSv. There was a significant improvement in the high-density and low-contrast resolution of the MBIR reconstruction, resulting in the detection of more necrotic lesions (38 lesions in 0.625 mm MBIR images vs. 29 lesions in 0.625 mm FBP images) and their better identification. The subjective display scores (mean ± standard deviation) for the detection of necrotic lesions were 5.0 ± 0.0, 2.8 ± 0.4 and 2.5 ± 0.5 with MBIR, ASIR and FBP reconstruction, respectively, and the respective objective image noise was 13.9 ± 4.0 HU, 24.9 ± 6.6 HU and 33.8 ± 8.7 HU. The image noise decreased by 58.9% and 26.3% in MBIR images as compared to FBP and ASIR images, respectively. Additionally, the SNR of MBIR images was significantly higher than that of FBP and ASIR images. In conclusion, the quality of chest CT images in children with necrotizing pneumonia was significantly improved by the MBIR technique as compared to ASIR and FBP reconstruction, providing a more confident and accurate diagnosis of necrotizing pneumonia.

  17. Surgeon motivations behind the timing of breast reconstruction in patients requiring postmastectomy radiation therapy.

    PubMed

    Lee, Ming; Reinertsen, Erik; McClure, Evan; Liu, Shuling; Kruper, Laura; Tanna, Neil; Brian Boyd, J; Granzow, Jay W

    2015-11-01

    Although postmastectomy radiation therapy (PMRT) has been shown to reduce breast cancer burden and improve survival, PMRT may negatively influence outcomes after reconstruction. The goal of this study was to compare current opinions of plastic and reconstructive surgeons (PRS) and surgical oncologists (SO) regarding the optimal timing of breast reconstruction for patients requiring PMRT. Members of the American Society of Plastic Surgeons (ASPS), the American Society of Breast Surgeons (ASBS), and the Society of Surgical Oncology (SSO) were asked to participate in an anonymous web-based survey. Responses were solicited in accordance with the Dillman method, and they were analyzed using standard descriptive statistics. A total of 330 members of the ASPS and 348 members of the ASBS and SSO participated in our survey. PRS and SO differed in patient-payor mix (p < 0.01) and practice setting (p < 0.01), but they did not differ by urban versus rural setting (p = 0.65) or geographic location (p = 0.30). Although PRS favored immediate reconstruction more often than SO, overall timing did not significantly differ between the two specialties (p = 0.14). The primary rationale behind delayed breast reconstruction differed significantly between PRS and SO (p < 0.01), with more PRS believing that the reconstructive outcome is significantly and adversely affected by radiation. Both PRS and SO cited "patient-driven desire to have immediate reconstruction" (p = 0.86) as the primary motivation for immediate reconstruction. Although the optimal timing of reconstruction is controversial between PRS and SO, our study suggests that the timing of reconstruction in PMRT patients is ultimately driven by patient preferences and the desire of PRS to optimize aesthetic outcomes. Copyright © 2015 British Association of Plastic, Reconstructive and Aesthetic Surgeons. Published by Elsevier Ltd. All rights reserved.

  18. Variation in the Utilization of Reconstruction Following Mastectomy in Elderly Women

    PubMed Central

    In, Haejin; Jiang, Wei; Lipsitz, Stuart R.; Neville, Bridget A.; Weeks, Jane C.; Greenberg, Caprice C.

    2014-01-01

    Background Regardless of their age, women who choose to undergo postmastectomy reconstruction report improved quality of life as a result. However, actual use of reconstruction decreases with increasing age. Whereas this may reflect patient preference and clinical factors, it may also represent age-based disparity. Methods Women aged 65 years or older who underwent mastectomy for DCIS/stage I/II breast cancer (2000–2005) were identified in the SEER-Medicare database. Overall and institutional rates of reconstruction were calculated. Characteristics of hospitals with higher and lower rates of reconstruction were compared. Pseudo-R² statistics utilizing a patient-level logistic regression model estimated the relative contribution of institution and patient characteristics. Results A total of 19,234 patients at 716 institutions were examined. Overall, 6 % of elderly patients received reconstruction after mastectomy. Institutional rates ranged from zero to >40 %. Whereas 53 % of institutions performed no reconstruction on elderly patients, 5.6 % performed reconstructions on more than 20 %. Although patient characteristics (%ΔR² = 70 %), and especially age (%ΔR² = 34 %), were the primary determinants of reconstruction, institutional characteristics also explained some of the variation (%ΔR² = 16 %). This suggests that in addition to appropriate factors, including clinical characteristics and patient preferences, the use of reconstruction among older women also is influenced by the institution at which they receive care. Conclusions Variation in the likelihood of reconstruction by institution and the association with structural characteristics suggests unequal access to this critical component of breast cancer care. Increased awareness of a potential age disparity is an important first step to improve access for elderly women who are candidates and desire reconstruction. PMID:23263733

  19. Reconstructing the ideal results of a perturbed analog quantum simulator

    NASA Astrophysics Data System (ADS)

    Schwenk, Iris; Reiner, Jan-Michael; Zanker, Sebastian; Tian, Lin; Leppäkangas, Juha; Marthaler, Michael

    2018-04-01

    Well-controlled quantum systems can potentially be used as quantum simulators. However, a quantum simulator is inevitably perturbed by coupling to additional degrees of freedom. This constitutes a major roadblock to useful quantum simulations. So far there are only limited means to understand the effect of perturbation on the results of quantum simulation. Here we present a method which, in certain circumstances, allows for the reconstruction of the ideal result from measurements on a perturbed quantum simulator. We consider extracting the value of the correlator ⟨Ôᵢ(t)Ôⱼ(0)⟩ from the simulated system, where Ôᵢ are the operators which couple the system to its environment. The ideal correlator can be straightforwardly reconstructed by using statistical knowledge of the environment, if any n-time correlator of operators Ôᵢ of the ideal system can be written as products of two-time correlators. We give an approach to verify the validity of this assumption experimentally by additional measurements on the perturbed quantum simulator. The proposed method can allow for reliable quantum simulations with systems subjected to environmental noise without adding an overhead to the quantum system.

  20. 3D reconstruction modeling of bulk heterojunction organic photovoltaic cells: Effect of the complexity of the boundary on the morphology

    NASA Astrophysics Data System (ADS)

    Kim, Sung-Jin; Jeong, Daun; Kim, SeongMin; Choi, Yeong Suk; Ihn, Soo-Ghang; Yun, Sungyoung; Lim, Younhee; Lee, Eunha; Park, Gyeong-Su

    2016-02-01

    Although the morphology of the active layer in bulk heterojunction organic photovoltaic (BHJ-OPV) cells is critical for determining the quantum efficiency (QE), predicting the real QE for a 3-dimensional (3D) morphology has long been difficult because structural information on the composition complexity of donor (D): acceptor (A) blends with small domain size is limited to 2D observations via various image-processing techniques. To overcome this, we reconstruct the 3D morphology by using an isotropic statistical approach based on 2D energy-filtered transmission electron microscopy (EF-TEM) images. This new reconstruction method is validated to obtain the internal QE by using a dynamic Monte Carlo simulation in the BHJ-OPV system with different additives such as 4 vol% 1-chloronaphthalene (CN) and 4 vol% 1,8-diiodooctane (DIO) (compared to the case of no additive); the resulting trend is compared with the experimental QE. Therefore, our developed method can be used to predict the real charge transport performance in the OPV system accurately.
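
    Isotropic statistical reconstructions of this kind typically match a radially averaged two-point correlation measured on the 2D images; computing that statistic can be sketched as follows (the synthetic two-phase image stands in for EF-TEM data, which is an assumption):

```python
import numpy as np

rng = np.random.default_rng(5)

# Toy two-phase (donor/acceptor) morphology: threshold low-pass-filtered noise
field = rng.normal(size=(128, 128))
f_hat = np.fft.fft2(field)
kx = np.fft.fftfreq(128)[:, None]
ky = np.fft.fftfreq(128)[None, :]
f_hat *= np.exp(-(kx**2 + ky**2) / (2 * 0.05**2))    # low-pass -> domains
img = (np.real(np.fft.ifft2(f_hat)) > 0).astype(float)

phi = img.mean()                       # donor volume fraction

# Two-point correlation S2(r) via the Wiener-Khinchin theorem (periodic BCs)
F = np.fft.fft2(img)
corr = np.real(np.fft.ifft2(F * np.conj(F))) / img.size

# Radial average, exploiting the assumed isotropy of the morphology
yy, xx = np.indices(img.shape)
r = np.hypot(np.minimum(xx, 128 - xx), np.minimum(yy, 128 - yy)).astype(int)
S2 = np.bincount(r.ravel(), weights=corr.ravel()) / np.bincount(r.ravel())

# S2(0) equals the volume fraction phi; S2(r) decays toward phi^2 at large r
print(f"phi = {phi:.3f}, S2(0) = {S2[0]:.3f}, S2 tail -> {phi**2:.3f}")
```

    A 3D morphology whose radially averaged S2 matches the 2D measurement is one common target of such isotropic reconstruction schemes.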

  1. Trends in primary and revision anterior cruciate ligament reconstruction among National Basketball Association team physicians.

    PubMed

    Mall, Nathan A; Abrams, Geoffrey D; Azar, Frederick M; Traina, Steve M; Allen, Answorth A; Parker, Richard; Cole, Brian J

    2014-06-01

    Anterior cruciate ligament (ACL) tears are common in athletes. Techniques and methods of treatment for these injuries continue to vary among surgeons. Thirty National Basketball Association (NBA) team physicians were surveyed during the NBA Pre-Draft Combine. Survey questions involved current and previous practice methods of primary and revision ACL reconstruction, including technique, graft choice, rehabilitation, and treatment of combined ACL and medial collateral ligament injuries. Descriptive parametric statistics, Fisher exact test, and logistic regression were used, and significance was set at α = 0.05. All 30 team physicians completed the survey. Eighty-seven percent indicated they use autograft (81% bone-patellar tendon-bone) for primary ACL reconstruction in NBA athletes, and 43% indicated they use autograft for revision cases. Fourteen surgeons (47%) indicated they use an anteromedial portal (AMP) for femoral tunnel drilling, whereas 5 years earlier only 4 (13%) used this technique. There was a significant (P = .009) positive correlation between fewer years in practice and AMP use. NBA team physicians' use of an AMP for femoral tunnel drilling has increased over the past 5 years.

  2. A New Approach for 3D Ocean Reconstruction from Limited Observations

    NASA Astrophysics Data System (ADS)

    Xiao, X.

    2014-12-01

    Satellites can measure ocean surface height and temperature with sufficient spatial and temporal resolution to capture mesoscale features across the globe. Measurements of the ocean's interior, however, remain sparse and irregular, thus the dynamical inference of subsurface flows is necessary to interpret surface measurements. The most common (and accurate) approach is to incorporate surface measurements into a data-assimilating forward ocean model, but this approach is expensive and slow, and thus completely impractical for time-critical needs, such as offering guidance to ship-based observational campaigns. Two recently-developed approaches have made use of the apparent partial consistency of upper ocean dynamics with quasigeostrophic flows that take into account surface buoyancy gradients (i.e. the "surface quasigeostrophic" (SQG) model) to "reconstruct" the interior flow from knowledge of surface height and buoyancy. Here we improve on these methods in three ways: (1) we adopt a modal decomposition that represents the surface and interior dynamics in an efficient way, allowing the separation of surface energy from total energy; (2) we make use of instantaneous vertical profile observations (e.g. from ARGO data) to improve the reconstruction of eddy variables at depth; and (3) we use advanced statistical methods to choose the optimal modes for the reconstruction. The method is tested using a series of quasigeostrophic simulations with high horizontal and vertical resolution, with a wide range of surface buoyancy and interior potential vorticity gradient combinations. In addition, we apply the method to output from a very high resolution primitive equation simulation of a forced and dissipated baroclinic front in a channel. Our new method is systematically compared to the existing methods as well. Its advantages and limitations will be discussed.
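
    The modal-decomposition step can be illustrated by projecting a synthetic interior profile onto baroclinic modes with a least-squares fit (the flat-bottom cosine modes and the profile below are illustrative assumptions, not the paper's stratification or data):

```python
import numpy as np

H = 4000.0                          # ocean depth (m), illustrative
z = np.linspace(0.0, -H, 200)       # vertical grid, surface to bottom

# Flat-bottom, constant-stratification baroclinic modes: cos(n*pi*z/H)
n_modes = 4
modes = np.stack([np.cos(n * np.pi * z / H) for n in range(n_modes)], axis=1)

# Synthetic "observed" streamfunction profile built from the first modes,
# plus measurement noise standing in for sparse interior observations
coef_true = np.array([1.0, 0.6, -0.3, 0.1])
rng = np.random.default_rng(6)
psi_obs = modes @ coef_true + rng.normal(0.0, 0.02, size=z.size)

# Least-squares projection onto the modal basis (the reconstruction step)
coef_fit, *_ = np.linalg.lstsq(modes, psi_obs, rcond=None)
psi_rec = modes @ coef_fit

print("fitted coefficients:", np.round(coef_fit, 3))
```

    Choosing which and how many modes to retain, given noisy profiles, is where the statistical mode-selection step of the abstract would enter.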

  3. Radiation dose reduction in medical x-ray CT via Fourier-based iterative reconstruction

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fahimian, Benjamin P.; Zhao Yunzhe; Huang Zhifeng

    Purpose: A Fourier-based iterative reconstruction technique, termed Equally Sloped Tomography (EST), is developed in conjunction with advanced mathematical regularization to investigate radiation dose reduction in x-ray CT. The method is experimentally implemented on fan-beam CT and evaluated as a function of imaging dose on a series of image quality phantoms and anonymous pediatric patient data sets. Numerical simulation experiments are also performed to explore the extension of EST to helical cone-beam geometry. Methods: EST is a Fourier-based iterative algorithm, which iterates back and forth between real and Fourier space utilizing the algebraically exact pseudopolar fast Fourier transform (PPFFT). In each iteration, physical constraints and mathematical regularization are applied in real space, while the measured data are enforced in Fourier space. The algorithm is automatically terminated when a proposed termination criterion is met. Experimentally, fan-beam projections were acquired by the Siemens z-flying focal spot technology, and subsequently interleaved and rebinned to a pseudopolar grid. Image quality phantoms were scanned at systematically varied mAs settings, reconstructed by EST and conventional reconstruction methods such as filtered back projection (FBP), and quantified using metrics including resolution, signal-to-noise ratios (SNRs), and contrast-to-noise ratios (CNRs). Pediatric data sets were reconstructed at their original acquisition settings and additionally simulated to lower dose settings for comparison and evaluation of the potential for radiation dose reduction. Numerical experiments were conducted to quantify EST and other iterative methods in terms of image quality and computation time. The extension of EST to helical cone-beam CT was implemented by using the advanced single-slice rebinning (ASSR) method. Results: Based on the phantom and pediatric patient fan-beam CT data, it is demonstrated that EST reconstructions with the lowest scanner flux setting of 39 mAs produce comparable image quality, resolution, and contrast relative to FBP with the 140 mAs flux setting. Compared to the algebraic reconstruction technique and the expectation maximization statistical reconstruction algorithm, a significant reduction in computation time is achieved with EST. Finally, numerical experiments on helical cone-beam CT data suggest that the combination of EST and ASSR produces reconstructions with higher image quality and lower noise than the Feldkamp-Davis-Kress (FDK) method and the conventional ASSR approach. Conclusions: A Fourier-based iterative method has been applied to the reconstruction of fan-beam CT data with reduced x-ray fluence. This method incorporates advantageous features of both real and Fourier space iterative schemes: using a fast and algebraically exact method to calculate forward projection, enforcing the measured data in Fourier space, and applying physical constraints and flexible regularization in real space. Our results suggest that EST can be utilized for radiation dose reduction in x-ray CT via the readily implementable technique of lowering mAs settings. Numerical experiments further indicate that EST requires less computation time than several other iterative algorithms and can, in principle, be extended to helical cone-beam geometry in combination with the ASSR method.

  4. Radiation dose reduction in medical x-ray CT via Fourier-based iterative reconstruction

    PubMed Central

    Fahimian, Benjamin P.; Zhao, Yunzhe; Huang, Zhifeng; Fung, Russell; Mao, Yu; Zhu, Chun; Khatonabadi, Maryam; DeMarco, John J.; Osher, Stanley J.; McNitt-Gray, Michael F.; Miao, Jianwei

    2013-01-01

    Purpose: A Fourier-based iterative reconstruction technique, termed Equally Sloped Tomography (EST), is developed in conjunction with advanced mathematical regularization to investigate radiation dose reduction in x-ray CT. The method is experimentally implemented on fan-beam CT and evaluated as a function of imaging dose on a series of image quality phantoms and anonymous pediatric patient data sets. Numerical simulation experiments are also performed to explore the extension of EST to helical cone-beam geometry. Methods: EST is a Fourier-based iterative algorithm, which iterates back and forth between real and Fourier space utilizing the algebraically exact pseudopolar fast Fourier transform (PPFFT). In each iteration, physical constraints and mathematical regularization are applied in real space, while the measured data are enforced in Fourier space. The algorithm is automatically terminated when a proposed termination criterion is met. Experimentally, fan-beam projections were acquired by the Siemens z-flying focal spot technology, and subsequently interleaved and rebinned to a pseudopolar grid. Image quality phantoms were scanned at systematically varied mAs settings, reconstructed by EST and conventional reconstruction methods such as filtered back projection (FBP), and quantified using metrics including resolution, signal-to-noise ratios (SNRs), and contrast-to-noise ratios (CNRs). Pediatric data sets were reconstructed at their original acquisition settings and additionally simulated to lower dose settings for comparison and evaluation of the potential for radiation dose reduction. Numerical experiments were conducted to quantify EST and other iterative methods in terms of image quality and computation time. The extension of EST to helical cone-beam CT was implemented by using the advanced single-slice rebinning (ASSR) method. Results: Based on the phantom and pediatric patient fan-beam CT data, it is demonstrated that EST reconstructions with the lowest scanner flux setting of 39 mAs produce comparable image quality, resolution, and contrast relative to FBP with the 140 mAs flux setting. Compared to the algebraic reconstruction technique and the expectation maximization statistical reconstruction algorithm, a significant reduction in computation time is achieved with EST. Finally, numerical experiments on helical cone-beam CT data suggest that the combination of EST and ASSR produces reconstructions with higher image quality and lower noise than the Feldkamp-Davis-Kress (FDK) method and the conventional ASSR approach. Conclusions: A Fourier-based iterative method has been applied to the reconstruction of fan-beam CT data with reduced x-ray fluence. This method incorporates advantageous features of both real and Fourier space iterative schemes: using a fast and algebraically exact method to calculate forward projection, enforcing the measured data in Fourier space, and applying physical constraints and flexible regularization in real space. Our results suggest that EST can be utilized for radiation dose reduction in x-ray CT via the readily implementable technique of lowering mAs settings. Numerical experiments further indicate that EST requires less computation time than several other iterative algorithms and can, in principle, be extended to helical cone-beam geometry in combination with the ASSR method. PMID:23464329
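
    Stripped of the pseudopolar grid and regularization, the back-and-forth between real and Fourier space can be sketched as a Gerchberg-Papoulis-style iteration: enforce the measured Fourier samples, then apply a physical (nonnegativity) constraint in real space (the random sampling mask and phantom below are illustrative assumptions, not the EST geometry):

```python
import numpy as np

rng = np.random.default_rng(7)

# Nonnegative piecewise-constant phantom
x_true = np.zeros((64, 64))
x_true[20:44, 24:40] = 1.0
x_true[28:36, 28:34] = 2.0
F_true = np.fft.fft2(x_true)

# "Measured" data: only half of the Fourier samples are known (reduced sampling)
mask = rng.random((64, 64)) < 0.5
mask[0, 0] = True                    # always keep the DC term

x = np.zeros_like(x_true)
for _ in range(200):
    F = np.fft.fft2(x)
    F[mask] = F_true[mask]           # Fourier space: enforce the measured data
    x = np.real(np.fft.ifft2(F))
    x = np.clip(x, 0.0, None)        # real space: physical nonnegativity constraint

err = np.abs(x - x_true).mean() / x_true.mean()
print(f"relative reconstruction error: {err:.3f}")
```

    The real-space constraint fills in the unmeasured Fourier coefficients over the iterations, which is the mechanism that lets such schemes tolerate reduced sampling (and hence reduced dose).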

  5. Comparing nonparametric Bayesian tree priors for clonal reconstruction of tumors.

    PubMed

    Deshwar, Amit G; Vembu, Shankar; Morris, Quaid

    2015-01-01

    Statistical machine learning methods, especially nonparametric Bayesian methods, have become increasingly popular to infer clonal population structure of tumors. Here we describe the treeCRP, an extension of the Chinese restaurant process (CRP), a popular construction used in nonparametric mixture models, to infer the phylogeny and genotype of major subclonal lineages represented in the population of cancer cells. We also propose new split-merge updates tailored to the subclonal reconstruction problem that improve the mixing time of Markov chains. In comparisons with the tree-structured stick breaking prior used in PhyloSub, we demonstrate superior mixing and running time using the treeCRP with our new split-merge procedures. We also show that given the same number of samples, TSSB and treeCRP have similar ability to recover the subclonal structure of a tumor…
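
    The treeCRP builds on the basic Chinese restaurant process seating rule, which can be sketched as follows (a generic CRP partition sampler, not the tree-structured prior or its split-merge updates):

```python
import random

def crp_partition(n, alpha, rng):
    """Sample a partition of n items from a Chinese restaurant process.

    Item i joins an existing cluster with probability proportional to the
    cluster size, or opens a new cluster with probability proportional to
    the concentration parameter alpha.
    """
    sizes = []                      # customers seated at each table
    labels = []
    for i in range(n):
        u = rng.random() * (i + alpha)
        acc, table = 0.0, None
        for t, s in enumerate(sizes):
            acc += s
            if u < acc:
                table = t
                break
        if table is None:           # fell into the alpha mass: open a new table
            table = len(sizes)
            sizes.append(0)
        sizes[table] += 1
        labels.append(table)
    return labels, sizes

rng = random.Random(0)
labels, sizes = crp_partition(200, alpha=2.0, rng=rng)
print(f"{len(sizes)} clusters, largest sizes = {sorted(sizes, reverse=True)[:5]}")
```

    In the clonal-reconstruction setting, each "table" would correspond to a candidate subclonal lineage, with the tree structure and genotypes layered on top.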

  6. Biomechanical comparison of graft fixation at 30° and 90° of elbow flexion for ulnar collateral ligament reconstruction by the docking technique.

    PubMed

    Cohen, Steven B; Woods, Daniel P; Siegler, Sorin; Dodson, Christopher C; Namani, Ramya; Ciccotti, Michael G

    2015-02-01

    Ulnar collateral ligament (UCL) injuries have been successfully treated by the docking reconstruction. Although fixation of the graft has been suggested at 30° of elbow flexion, no quantitative biomechanical data exist to provide guidelines for the optimal elbow flexion angle for graft fixation. Testing was conducted on 10 matched pairs of cadaver elbows with use of a loading system and optoelectric tracking device. After biomechanical data on the native UCL were obtained, reconstruction by the docking technique was performed with use of palmaris longus autograft with one elbow fixated at 30° and the contralateral elbow at 90° of elbow flexion. Biomechanical testing was undertaken on these specimens. The load to failure of the native UCL (mean, 20.1 N-m) was significantly higher (P = .004) than that of the reconstructed UCL (mean, 4.6 N-m). There was no statistically significant difference in load to failure of the UCL reconstructions fixated at 30° of elbow flexion (average, 4.86 N-m) compared with those at 90° (average, 4.35 N-m). Elbows reconstructed at 30° and 90° of elbow flexion produced similar kinematic coupling and valgus laxity characteristics compared with each other and with the intact UCL. Although not statistically significant, the reconstructions fixated at 30° more closely resembled the biomechanical characteristics of the intact elbow than did reconstructions fixated at 90°. No statistically significant difference was found in comparing the docking technique of UCL reconstruction with graft fixation at 30° vs. 90° of elbow flexion. Copyright © 2015 Journal of Shoulder and Elbow Surgery Board of Trustees. Published by Elsevier Inc. All rights reserved.

  7. Dose reduction with adaptive statistical iterative reconstruction for paediatric CT: phantom study and clinical experience on chest and abdomen CT.

    PubMed

    Gay, F; Pavia, Y; Pierrat, N; Lasalle, S; Neuenschwander, S; Brisse, H J

    2014-01-01

    To assess the benefit and limits of iterative reconstruction of paediatric chest and abdominal computed tomography (CT). The study compared adaptive statistical iterative reconstruction (ASIR) with filtered back projection (FBP) on 64-channel MDCT. A phantom study was first performed using variable tube potential, tube current and ASIR settings. The assessed image quality indices were the signal-to-noise ratio (SNR), the noise power spectrum, low contrast detectability (LCD) and spatial resolution. A clinical retrospective study of 26 children (M:F = 14/12, mean age: 4 years, range: 1-9 years) was secondarily performed allowing comparison of 18 chest and 14 abdominal CT pairs, one with a routine CT dose and FBP reconstruction, and the other with 30 % lower dose and 40 % ASIR reconstruction. Two radiologists independently compared the images for overall image quality, noise, sharpness and artefacts, and measured image noise. The phantom study demonstrated a significant increase in SNR without impairment of the LCD or spatial resolution, except for tube current values below 30-50 mA. On clinical images, no significant difference was observed between FBP and reduced dose ASIR images. Iterative reconstruction allows at least 30 % dose reduction in paediatric chest and abdominal CT, without impairment of image quality. • Iterative reconstruction helps lower radiation exposure levels in children undergoing CT. • Adaptive statistical iterative reconstruction (ASIR) significantly increases SNR without impairing spatial resolution. • For abdomen and chest CT, ASIR allows at least a 30 % dose reduction.

  8. Image quality of multiplanar reconstruction of pulmonary CT scans using adaptive statistical iterative reconstruction

    PubMed Central

    Honda, O; Yanagawa, M; Inoue, A; Kikuyama, A; Yoshida, S; Sumikawa, H; Tobino, K; Koyama, M; Tomiyama, N

    2011-01-01

    Objective We investigated the image quality of multiplanar reconstruction (MPR) using adaptive statistical iterative reconstruction (ASIR). Methods Inflated and fixed lungs were scanned with a garnet detector CT in high-resolution mode (HR mode) or non-high-resolution (HR) mode, and MPR images were then reconstructed. Observers compared 15 MPR images of ASIR (40%) and ASIR (80%) with those of ASIR (0%), and assessed image quality using a visual five-point scale (1, definitely inferior; 5, definitely superior), with particular emphasis on normal pulmonary structures, artefacts, noise and overall image quality. Results The mean overall image quality scores in HR mode were 3.67 with ASIR (40%) and 4.97 with ASIR (80%). Those in non-HR mode were 3.27 with ASIR (40%) and 3.90 with ASIR (80%). The mean artefact scores in HR mode were 3.13 with ASIR (40%) and 3.63 with ASIR (80%), but those in non-HR mode were 2.87 with ASIR (40%) and 2.53 with ASIR (80%). The mean scores of the other parameters were greater than 3, whereas those in HR mode were higher than those in non-HR mode. There were significant differences between ASIR (40%) and ASIR (80%) in overall image quality (p<0.01). Contrast medium in the injection syringe was scanned to analyse image quality; ASIR did not suppress the severe artefacts of contrast medium. Conclusion In general, MPR image quality with ASIR (80%) was superior to that with ASIR (40%). However, there was an increased incidence of artefacts by ASIR when CT images were obtained in non-HR mode. PMID:21081572

  9. Conjugate-gradient preconditioning methods for shift-variant PET image reconstruction.

    PubMed

    Fessler, J A; Booth, S D

    1999-01-01

    Gradient-based iterative methods often converge slowly for tomographic image reconstruction and image restoration problems, but can be accelerated by suitable preconditioners. Diagonal preconditioners offer some improvement in convergence rate, but do not incorporate the structure of the Hessian matrices in imaging problems. Circulant preconditioners can provide remarkable acceleration for inverse problems that are approximately shift-invariant, i.e., for those with approximately block-Toeplitz or block-circulant Hessians. However, in applications with nonuniform noise variance, such as arises from Poisson statistics in emission tomography and in quantum-limited optical imaging, the Hessian of the weighted least-squares objective function is quite shift-variant, and circulant preconditioners perform poorly. Additional shift-variance is caused by edge-preserving regularization methods based on nonquadratic penalty functions. This paper describes new preconditioners that approximate more accurately the Hessian matrices of shift-variant imaging problems. Compared to diagonal or circulant preconditioning, the new preconditioners lead to significantly faster convergence rates for the unconstrained conjugate-gradient (CG) iteration. We also propose a new efficient method for the line-search step required by CG methods. Applications to positron emission tomography (PET) illustrate the method.
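
    The role of the preconditioner can be seen in a generic preconditioned conjugate-gradient iteration. The sketch below applies a plain diagonal (Jacobi) preconditioner to a toy weighted least-squares Hessian with nonuniform weights; it is not the shift-variant preconditioner proposed in the paper, and all names and sizes are illustrative.

```python
import numpy as np

def pcg(A, b, M_inv, iters=500, tol=1e-10):
    """Preconditioned conjugate gradients for A x = b, A symmetric
    positive definite. M_inv applies the inverse preconditioner; a good
    M approximates A so that M^{-1} A has clustered eigenvalues."""
    x = np.zeros_like(b)
    r = b - A @ x
    z = M_inv(r)
    p = z.copy()
    rz = r @ z
    for _ in range(iters):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol:
            break
        z = M_inv(r)
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x

# Toy "Hessian" of a weighted least-squares objective: the nonuniform
# diagonal weighting W makes it ill-conditioned and shift-variant.
rng = np.random.default_rng(0)
B = rng.standard_normal((40, 40))
W = np.diag(np.linspace(1.0, 100.0, 40))   # nonuniform noise weights
A = B.T @ W @ B + 1e-2 * np.eye(40)
b = rng.standard_normal(40)
d = np.diag(A)
x_pcg = pcg(A, b, lambda r: r / d)         # diagonal (Jacobi) preconditioner
```

    Replacing `lambda r: r / d` with a better approximation of the Hessian inverse is exactly where the paper's shift-variant preconditioners would enter.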

  10. Performance outcomes of anterior cruciate ligament reconstruction in the National Basketball Association.

    PubMed

    Busfield, Benjamin T; Kharrazi, F Daniel; Starkey, Chad; Lombardo, Stephen J; Seegmiller, Jeffrey

    2009-08-01

    The purpose of this study was to determine the rate of return to play and to quantify the effect on the basketball player's performance after surgical reconstruction of the anterior cruciate ligament (ACL). Surgical injuries involving the ACL were queried for a 10-year period (1993-1994 season through 2004-2005 season) from the database maintained by the National Basketball Association (NBA). Standard statistical categories and player efficiency rating (PER), a measure that accounts for positive and negative playing statistics, were calculated to determine the impact of the injury on player performance relative to a matched comparison group. Over the study period, 31 NBA players had 32 ACL reconstructions. Two patients were excluded because of multiple ACL injuries, one was excluded because he never participated in league play, and another because the injury resulted from nonathletic activity. Of the 27 players in the study group, 6 (22%) did not return to NBA competition. Of the 21 players (78%) who did return to play, 4 (15%) had an increase over the preinjury PER, 5 (19%) remained within 1 point of the preinjury PER, and the PER decreased by more than 1 point after return to play in 12 (44%). Although decreases occurred in most of the statistical categories for players returning from ACL surgery, the number of games played, field goal percentage, and number of turnovers per game were the only categories with a statistically significant decrease. Players in the comparison group had a statistically significant increase in the PER over their careers, whereas the study group had a marked, though not statistically significant, decrease in the PER in the season after reconstruction. After ACL reconstruction in 27 basketball players, 22% did not return to a sanctioned NBA game. For those returning to play, performance decreased by more than 1 PER point in 44% of the patients, although the changes were not statistically significant relative to the comparison group. Level IV, therapeutic case series.

  11. Phylogeography Takes a Relaxed Random Walk in Continuous Space and Time

    PubMed Central

    Lemey, Philippe; Rambaut, Andrew; Welch, John J.; Suchard, Marc A.

    2010-01-01

    Research aimed at understanding the geographic context of evolutionary histories is burgeoning across biological disciplines. Recent endeavors attempt to interpret contemporaneous genetic variation in the light of increasingly detailed geographical and environmental observations. Such interest has promoted the development of phylogeographic inference techniques that explicitly aim to integrate such heterogeneous data. One promising development involves reconstructing phylogeographic history on a continuous landscape. Here, we present a Bayesian statistical approach to infer continuous phylogeographic diffusion using random walk models while simultaneously reconstructing the evolutionary history in time from molecular sequence data. Moreover, by accommodating branch-specific variation in dispersal rates, we relax the most restrictive assumption of the standard Brownian diffusion process and demonstrate increased statistical efficiency in spatial reconstructions of overdispersed random walks by analyzing both simulated and real viral genetic data. We further illustrate how drawing inference about summary statistics from a fully specified stochastic process over both sequence evolution and spatial movement reveals important characteristics of a rabies epidemic. Together with recent advances in discrete phylogeographic inference, the continuous model developments furnish a flexible statistical framework for biogeographical reconstructions that is easily expanded upon to accommodate various landscape genetic features. PMID:20203288

  12. Surgical Reconstruction with the Remnant Ligament Improves Joint Position Sense as well as Functional Ankle Instability: A 1-Year Follow-Up Study

    PubMed Central

    Iwao, Kamizato; Masataka, Deie; Kohei, Fukuhara

    2014-01-01

    Introduction. Chronic functional instability, characterized by repeated ankle inversion sprains and a subjective sensation of instability, is one of the most common residual disabilities after an inversion sprain. However, whether surgical reconstruction improves sensorimotor control has not been reported to date. The purpose of this study was to assess functional improvement of chronic ankle instability after surgical reconstruction using the remnant ligament. Materials and Methods. The intervention group comprised 10 surgically treated cases; 20 healthy individuals served as the control group. Before and after surgical reconstruction, we evaluated joint position sense and assessed functional ankle instability by means of a questionnaire. Results and Discussion. Before surgical reconstruction, joint position sense differed significantly between the control and intervention groups. Three months after surgery, joint position sense in the intervention group was significantly different from the preoperative values. Before surgery, the mean functional ankle instability score in the intervention group was roughly half that of the control group; three months after surgery, however, the score had increased significantly. These results show that surgical reconstruction using the remnant ligament was effective not only for mechanical retensioning but also for ameliorating joint position sense and functional ankle instability. PMID:25401146

  13. Inelastic Single Pion Signal Study in T2K νe Appearance using Modified Decay Electron Cut

    NASA Astrophysics Data System (ADS)

    Iwamoto, Konosuke; T2K Collaboration

    2015-04-01

    The T2K long-baseline neutrino experiment uses sophisticated selection criteria to identify neutrino oscillation signals among the events reconstructed in the Super-Kamiokande (SK) detector for νe and νμ appearance and disappearance analyses. Current analyses use charged-current quasi-elastic (CCQE) events as the signal reaction in the SK detector because their energy can be precisely reconstructed. This talk presents an approach to increase the statistics of the oscillation analysis by including non-CCQE events with one Michel electron, reconstructing them as inelastic single-pion production. The gain in statistics, the backgrounds to this new channel, and the implications for energy reconstruction with the enlarged event sample will be presented.

  14. [Reliability of three dimensional resin model by rapid prototyping manufacturing and digital modeling].

    PubMed

    Zeng, Fei-huang; Xu, Yuan-zhi; Fang, Li; Tang, Xiao-shan

    2012-02-01

    To describe a new technique for fabricating a 3D resin model by 3D reconstruction and rapid prototyping, and to analyze the precision of this method. An optical grating scanner was used to acquire the data of a silastic cavity block, and a digital dental cast was reconstructed from the data using the Geomagic Studio image-processing software. The final 3D reconstruction was saved in STL format. The 3D resin model was fabricated by fused deposition modeling and was compared with the digital model and the gypsum model. The data of the three groups were statistically analyzed using the SPSS 16.0 software package. No significant difference was found among the gypsum model, digital dental cast, and 3D resin model (P>0.05). Rapid prototyping manufacturing and digital modeling can be helpful for dental information acquisition, treatment design, and appliance manufacturing, and can improve communication between patients and doctors.

  15. Method for simulating atmospheric turbulence phase effects for multiple time slices and anisoplanatic conditions.

    PubMed

    Roggemann, M C; Welsh, B M; Montera, D; Rhoadarmer, T A

    1995-07-10

    Simulating the effects of atmospheric turbulence on optical imaging systems is an important aspect of understanding the performance of these systems. Simulations are particularly important for understanding the statistics of some adaptive-optics system performance measures, such as the mean and variance of the compensated optical transfer function, and for understanding the statistics of estimators used to reconstruct intensity distributions from turbulence-corrupted image measurements. Current methods of simulating the performance of these systems typically make use of random phase screens placed in the system pupil. Methods exist for making random draws of phase screens that have the correct spatial statistics. However, simulating temporal effects and anisoplanatism requires one or more phase screens at different distances from the aperture, possibly moving with different velocities. We describe and demonstrate a method for creating random draws of phase screens with the correct space-time statistics for arbitrary turbulence and wind-velocity profiles, which can be placed in the telescope pupil in simulations. Results are provided for both the von Kármán and the Kolmogorov turbulence spectra. We also show how to simulate anisoplanatic effects with this technique.
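
    A common baseline for such simulations is the single-screen FFT method: filter complex white noise by the square root of the phase power spectrum and inverse transform. The sketch below uses the Kolmogorov spectrum with an illustrative normalization; the paper's method extends this idea to multi-screen space-time statistics, and the plain FFT approach is known to under-represent low spatial frequencies.

```python
import numpy as np

def kolmogorov_phase_screen(n, delta, r0, seed=0):
    """Random phase screen with approximate Kolmogorov spatial statistics.

    White Gaussian noise is filtered in the Fourier domain by the square
    root of the Kolmogorov phase power spectrum
    Phi(k) = 0.023 * r0**(-5/3) * k**(-11/3), then inverse transformed.
    n: grid size, delta: sample spacing [m], r0: Fried parameter [m].
    """
    fx = np.fft.fftfreq(n, d=delta)
    kx, ky = np.meshgrid(fx, fx)
    k = np.hypot(kx, ky)
    k[0, 0] = np.inf                       # suppress the undefined DC term
    psd = 0.023 * r0 ** (-5.0 / 3.0) * k ** (-11.0 / 3.0)
    rng = np.random.default_rng(seed)
    noise = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
    spectrum = noise * np.sqrt(psd) / (n * delta)   # illustrative scaling
    screen = np.real(np.fft.ifft2(spectrum)) * n * n
    return screen

screen = kolmogorov_phase_screen(n=128, delta=0.02, r0=0.1)
```

    Temporal evolution under frozen-flow turbulence is then often emulated by translating such screens across the pupil at the layer wind velocity, which is the regime the paper's space-time method generalizes.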

  16. Investigating the limits of PET/CT imaging at very low true count rates and high random fractions in ion-beam therapy monitoring

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kurz, Christopher, E-mail: Christopher.Kurz@physik.uni-muenchen.de; Bauer, Julia; Conti, Maurizio

    Purpose: External beam radiotherapy with protons and heavier ions enables a tighter conformation of the applied dose to arbitrarily shaped tumor volumes with respect to photons, but is more sensitive to uncertainties in the radiotherapeutic treatment chain. Consequently, an independent verification of the applied treatment is highly desirable. For this purpose, the irradiation-induced β+-emitter distribution within the patient is detected shortly after irradiation by a commercial full-ring positron emission tomography/x-ray computed tomography (PET/CT) scanner installed next to the treatment rooms at the Heidelberg Ion-Beam Therapy Center (HIT). A major challenge to this approach is posed by the small number of detected coincidences. This contribution aims at characterizing the performance of the used PET/CT device and identifying the best-performing reconstruction algorithm under the particular statistical conditions of PET-based treatment monitoring. Moreover, this study addresses the impact of radiation background from the intrinsically radioactive lutetium-oxyorthosilicate (LSO)-based detectors at low counts. Methods: The authors have acquired 30 subsequent PET scans of a cylindrical phantom emulating a patientlike activity pattern and spanning the entire patient counting regime in terms of true coincidences and random fractions (RFs). Accuracy and precision of activity quantification, image noise, and geometrical fidelity of the scanner have been investigated for various reconstruction algorithms and settings in order to identify a practical, well-suited reconstruction scheme for PET-based treatment verification. Truncated list-mode data have been utilized for separating the effects of small true count numbers and high RFs on the reconstructed images. A corresponding simulation study enabled extending the results to an even wider range of counting statistics and to additionally investigate the impact of scatter coincidences.
Eventually, the recommended reconstruction scheme has been applied to exemplary postirradiation patient data-sets. Results: Among the investigated reconstruction options, the overall best results in terms of image noise, activity quantification, and accurate geometrical recovery were achieved using the ordered subset expectation maximization reconstruction algorithm with time-of-flight (TOF) and point-spread function (PSF) information. For this algorithm, reasonably accurate (better than 5%) and precise (uncertainty of the mean activity below 10%) imaging can be provided down to 80 000 true coincidences at 96% RF. Image noise and geometrical fidelity are generally improved for fewer iterations. The main limitation for PET-based treatment monitoring has been identified in the small number of true coincidences, rather than the high intrinsic random background. Application of the optimized reconstruction scheme to patient data-sets results in a 25%-50% reduced image noise at a comparable activity quantification accuracy and an improved geometrical performance with respect to the formerly used reconstruction scheme at HIT, adopted from nuclear medicine applications. Conclusions: Under the poor statistical conditions in PET-based treatment monitoring, improved results can be achieved by considering PSF and TOF information during image reconstruction and by applying fewer iterations than in conventional nuclear medicine imaging. Geometrical fidelity and image noise are mainly limited by the low number of true coincidences, not the high LSO-related random background. The retrieved results might also impact other emerging PET applications at low counting statistics.
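
    The ordered-subset expectation maximization algorithm discussed above builds on the basic MLEM update for Poisson count data. The following is a minimal single-subset (MLEM) sketch on an invented toy system, purely for illustration; the matrix, counts, and iteration count are made up and have nothing to do with the HIT scanner configuration.

```python
import numpy as np

def mlem(A, y, n_iter=200):
    """Maximum-likelihood EM for emission tomography, y ~ Poisson(A x).

    Multiplicative update: x <- x * A^T(y / (A x)) / (A^T 1).
    OSEM applies the same update cyclically over subsets of the rows of A.
    """
    x = np.ones(A.shape[1])
    sens = A.T @ np.ones(A.shape[0])      # sensitivity image A^T 1
    for _ in range(n_iter):
        proj = np.maximum(A @ x, 1e-12)   # forward projection, guard /0
        x *= (A.T @ (y / proj)) / sens
    return x

# Toy 2-voxel, 3-bin system with noise-free, consistent data:
# MLEM converges to the true emission image.
A = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [0.5, 0.5]])
x_true = np.array([4.0, 2.0])
y = A @ x_true
x_hat = mlem(A, y)
```

    The multiplicative form keeps the image nonnegative at every iteration, which is one reason EM-type updates are standard in emission tomography.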

  17. Statistical detection of patterns in unidimensional distributions by continuous wavelet transforms

    NASA Astrophysics Data System (ADS)

    Baluev, R. V.

    2018-04-01

    Objective detection of specific patterns in statistical distributions, like groupings or gaps or abrupt transitions between different subsets, is a task with a rich range of applications in astronomy: Milky Way stellar population analysis, investigations of the exoplanets diversity, Solar System minor bodies statistics, extragalactic studies, etc. We adapt the powerful technique of the wavelet transforms to this generalized task, making a strong emphasis on the assessment of the patterns detection significance. Among other things, our method also involves optimal minimum-noise wavelets and minimum-noise reconstruction of the distribution density function. Based on this development, we construct a self-closed algorithmic pipeline aimed to process statistical samples. It is currently applicable to single-dimensional distributions only, but it is flexible enough to undergo further generalizations and development.
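
    The core operation, convolving an empirical density with a zero-mean wavelet and looking for strong responses, can be sketched as follows. This toy uses a Ricker (Mexican-hat) wavelet and a simple histogram density estimate, not the optimal minimum-noise wavelets or the significance assessment developed in the paper; all names are illustrative.

```python
import numpy as np

def ricker(t, s):
    """Ricker (Mexican-hat) wavelet at scale s (unnormalized)."""
    x = t / s
    return (1.0 - x * x) * np.exp(-0.5 * x * x)

def cwt_response(samples, n_bins, scale):
    """Wavelet response of a 1-D sample distribution at one scale.

    The empirical density (a fine histogram of the sample) is convolved
    with the wavelet; strong positive responses flag local groupings at
    that scale, negative responses flag gaps.
    """
    counts, edges = np.histogram(samples, bins=n_bins)
    centers = 0.5 * (edges[:-1] + edges[1:])
    dt = centers[1] - centers[0]
    w = ricker(centers - centers.mean(), scale)   # centered kernel
    response = np.convolve(counts, w, mode="same") * dt
    return centers, response

rng = np.random.default_rng(1)
# Uniform background on [0, 10] plus a tight grouping near x = 3.
samples = np.concatenate([rng.uniform(0.0, 10.0, 500),
                          rng.normal(3.0, 0.1, 200)])
centers, resp = cwt_response(samples, n_bins=200, scale=0.2)
peak = centers[np.argmax(resp)]
```

    Scanning `scale` over a grid turns this into a full (discretized) continuous wavelet transform, after which the paper's contribution is assessing which responses are statistically significant rather than noise.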

  18. Adaptive Statistical Iterative Reconstruction-Applied Ultra-Low-Dose CT with Radiography-Comparable Radiation Dose: Usefulness for Lung Nodule Detection

    PubMed Central

    Yoon, Hyun Jung; Hwang, Hye Sun; Moon, Jung Won; Lee, Kyung Soo

    2015-01-01

    Objective To assess the performance of adaptive statistical iterative reconstruction (ASIR)-applied ultra-low-dose CT (ULDCT) in detecting small lung nodules. Materials and Methods Thirty patients underwent both ULDCT and standard dose CT (SCT). After determining the reference standard nodules, five observers, blinded to the reference standard reading results, independently evaluated SCT and both subsets of ASIR- and filtered back projection (FBP)-driven ULDCT images. Data assessed by observers were compared statistically. Results Converted effective doses in SCT and ULDCT were 2.81 ± 0.92 and 0.17 ± 0.02 mSv, respectively. A total of 114 lung nodules were detected on SCT as a standard reference. There was no statistically significant difference in sensitivity between ASIR-driven ULDCT and SCT for three out of the five observers (p = 0.678, 0.735, < 0.01, 0.038, and < 0.868 for observers 1, 2, 3, 4, and 5, respectively). The sensitivity of FBP-driven ULDCT was significantly lower than that of ASIR-driven ULDCT in three out of the five observers (p < 0.01 for three observers, and p = 0.064 and 0.146 for two observers). In jackknife alternative free-response receiver operating characteristic analysis, the mean values of figure-of-merit (FOM) for FBP, ASIR-driven ULDCT, and SCT were 0.682, 0.772, and 0.821, respectively, and there were no significant differences in FOM values between ASIR-driven ULDCT and SCT (p = 0.11), but the FOM value of FBP-driven ULDCT was significantly lower than that of ASIR-driven ULDCT and SCT (p = 0.01 and 0.00). Conclusion Adaptive statistical iterative reconstruction-driven ULDCT delivering a radiation dose of only 0.17 mSv offers acceptable sensitivity in nodule detection compared with SCT and has better performance than FBP-driven ULDCT. PMID:26357505

  19. Low-dose CT image reconstruction using gain intervention-based dictionary learning

    NASA Astrophysics Data System (ADS)

    Pathak, Yadunath; Arya, K. V.; Tiwari, Shailendra

    2018-05-01

    Computed tomography (CT) is extensively utilized in clinical diagnosis. However, the X-ray dose absorbed by the human body may cause somatic damage such as cancer. Owing to this radiation risk, research has focused on reducing the radiation exposure delivered to patients through CT investigations, and low-dose CT has therefore become a significant research area. Many researchers have proposed different low-dose CT reconstruction techniques, but these techniques suffer from various issues such as over-smoothing, artifacts, and noise. In this paper, we therefore propose a novel integrated low-dose CT reconstruction technique. The proposed technique utilizes global dictionary-based statistical iterative reconstruction (GDSIR) and adaptive dictionary-based statistical iterative reconstruction (ADSIR). If the dictionary (D) is predetermined, GDSIR can be used; if D is adaptively defined, ADSIR is the appropriate choice. A gain intervention-based filter is also used as a post-processing step for removing artifacts from the low-dose CT reconstructed images. Experiments comparing the proposed technique with other low-dose CT reconstruction techniques on well-known benchmark CT images show that the proposed technique outperforms the available approaches.

  20. Investigation of statistical iterative reconstruction for dedicated breast CT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Makeev, Andrey; Glick, Stephen J.

    2013-08-15

    Purpose: Dedicated breast CT has great potential for improving the detection and diagnosis of breast cancer. Statistical iterative reconstruction (SIR) in dedicated breast CT is a promising alternative to traditional filtered backprojection (FBP). One of the difficulties in using SIR is the presence of free parameters in the algorithm that control the appearance of the resulting image. These parameters require tuning in order to achieve high quality reconstructions. In this study, the authors investigated the penalized maximum likelihood (PML) method with two commonly used types of roughness penalty functions: hyperbolic potential and anisotropic total variation (TV) norm. Reconstructed images were compared with images obtained using standard FBP. Optimal parameters for PML with the hyperbolic prior are reported for the task of detecting microcalcifications embedded in breast tissue. Methods: Computer simulations were used to acquire projections in a half-cone beam geometry. The modeled setup describes a realistic breast CT benchtop system, with an x-ray spectra produced by a point source and an a-Si, CsI:Tl flat-panel detector. A voxelized anthropomorphic breast phantom with 280 μm microcalcification spheres embedded in it was used to model attenuation properties of the uncompressed woman's breast in a pendant position. The reconstruction of 3D images was performed using the separable paraboloidal surrogates algorithm with ordered subsets. Task performance was assessed with the ideal observer detectability index to determine optimal PML parameters. Results: The authors' findings suggest that there is a preferred range of values of the roughness penalty weight and the edge preservation threshold in the penalized objective function with the hyperbolic potential, which resulted in low noise images with high contrast microcalcifications preserved.
In terms of numerical observer detectability index, the PML method with optimal parameters yielded substantially improved performance (by a factor of greater than 10) compared to FBP. The hyperbolic prior was also observed to be superior to the TV norm. A few of the best-performing parameter pairs for the PML method also demonstrated superior performance for various radiation doses. In fact, using PML with certain parameter values results in better images, acquired using 2 mGy dose, than FBP-reconstructed images acquired using 6 mGy dose. Conclusions: A range of optimal free parameters for the PML algorithm with hyperbolic and TV norm-based potentials is presented for the microcalcification detection task, in dedicated breast CT. The reported values can be used as starting values of the free parameters, when SIR techniques are used for image reconstruction. Significant improvement in image quality can be achieved by using PML with optimal combination of parameters, as compared to FBP. Importantly, these results suggest improved detection of microcalcifications can be obtained by using PML with lower radiation dose to the patient, than using FBP with higher dose.
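
    A hyperbolic edge-preserving potential of the kind referred to above is commonly parameterized as ψ(t) = δ²(√(1 + (t/δ)²) − 1); whether this exact parameterization matches the study's is an assumption. A small sketch showing its quadratic-near-zero, linear-in-the-tails behavior, which is what makes it smooth noise while preserving edges:

```python
import numpy as np

def hyperbolic(t, delta):
    """Hyperbolic edge-preserving potential used in penalized ML.

    Approximately quadratic for |t| << delta (smooths small, noise-like
    differences) and asymptotically linear for |t| >> delta (preserves
    edges); delta plays the role of the edge-preservation threshold.
    """
    return delta ** 2 * (np.sqrt(1.0 + (t / delta) ** 2) - 1.0)

t = np.linspace(-10.0, 10.0, 2001)
psi = hyperbolic(t, delta=1.0)
```

    In a PML objective the penalty term sums ψ over neighboring-voxel differences, weighted by the roughness penalty weight that the study tunes jointly with δ.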

  1. Reconstruction Error and Principal Component Based Anomaly Detection in Hyperspectral Imagery

    DTIC Science & Technology

    2014-03-27

    2003), and (Jackson D. A., 1993). In 1933, Hotelling (Hotelling, 1933), who coined the term 'principal components,' surmised that there was a… goodness of fit and multivariate quality control with the statistic Q_i = (X_i − X̂_i)(X_i − X̂_i)^T, where X_i and X̂_i are the 1×p observation vector and its reconstruction, and where, under the… sparsely targeted scenes through SNR or other methods. 5) Customize sorting and histogram construction methods in Multiple PCA to avoid redundancy
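
    The Q statistic in the quoted fragment is the squared PCA reconstruction error of each observation. A minimal sketch on invented 5-band data follows; the helper name `q_statistic` and the toy scene are illustrative, not taken from the thesis.

```python
import numpy as np

def q_statistic(X, k):
    """Reconstruction-error (Q) anomaly score from a rank-k PCA model.

    Rows of X are mean-centred, projected onto the top-k principal
    components, and reconstructed; Q_i is the squared norm of the
    residual X_i - X_hat_i, which is large when a row leaves the
    background PCA subspace.
    """
    mu = X.mean(axis=0)
    Xc = X - mu
    # Principal components from the SVD of the centred data matrix.
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    P = Vt[:k].T                       # p x k loading matrix
    X_hat = Xc @ P @ P.T               # rank-k reconstruction
    R = Xc - X_hat
    return np.einsum("ij,ij->i", R, R)

rng = np.random.default_rng(0)
# Background "pixels" lie near a 2-D subspace of a 5-band scene.
scores = rng.standard_normal((300, 2))
basis = rng.standard_normal((2, 5))
X = scores @ basis + 0.01 * rng.standard_normal((300, 5))
X[0] += 5.0                            # implant one anomalous pixel
q = q_statistic(X, k=2)
```

    Thresholding `q` (for instance at a high sample quantile) then yields the anomaly map; the thesis's contribution concerns how such thresholds and component counts are chosen.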

  2. Sensor data validation and reconstruction. Phase 1: System architecture study

    NASA Technical Reports Server (NTRS)

    1991-01-01

    The sensor validation and data reconstruction task reviewed relevant literature and selected applicable validation and reconstruction techniques for further study; analyzed the selected techniques and emphasized those which could be used for both validation and reconstruction; analyzed Space Shuttle Main Engine (SSME) hot fire test data to determine statistical and physical relationships between various parameters; developed statistical and empirical correlations between parameters to perform validation and reconstruction tasks, using a computer aided engineering (CAE) package; and conceptually designed an expert system based knowledge fusion tool, which allows the user to relate diverse types of information when validating sensor data. The host hardware for the system is intended to be a Sun SPARCstation, but could be any RISC workstation with a UNIX operating system and a windowing/graphics system such as Motif or Dataviews. The information fusion tool is intended to be developed using the NEXPERT Object expert system shell, and the C programming language.

  3. Radiation dose reduction in medical x-ray CT via Fourier-based iterative reconstruction.

    PubMed

    Fahimian, Benjamin P; Zhao, Yunzhe; Huang, Zhifeng; Fung, Russell; Mao, Yu; Zhu, Chun; Khatonabadi, Maryam; DeMarco, John J; Osher, Stanley J; McNitt-Gray, Michael F; Miao, Jianwei

    2013-03-01

    A Fourier-based iterative reconstruction technique, termed Equally Sloped Tomography (EST), is developed in conjunction with advanced mathematical regularization to investigate radiation dose reduction in x-ray CT. The method is experimentally implemented on fan-beam CT and evaluated as a function of imaging dose on a series of image quality phantoms and anonymous pediatric patient data sets. Numerical simulation experiments are also performed to explore the extension of EST to helical cone-beam geometry. EST is a Fourier based iterative algorithm, which iterates back and forth between real and Fourier space utilizing the algebraically exact pseudopolar fast Fourier transform (PPFFT). In each iteration, physical constraints and mathematical regularization are applied in real space, while the measured data are enforced in Fourier space. The algorithm is automatically terminated when a proposed termination criterion is met. Experimentally, fan-beam projections were acquired by the Siemens z-flying focal spot technology, and subsequently interleaved and rebinned to a pseudopolar grid. Image quality phantoms were scanned at systematically varied mAs settings, reconstructed by EST and conventional reconstruction methods such as filtered back projection (FBP), and quantified using metrics including resolution, signal-to-noise ratios (SNRs), and contrast-to-noise ratios (CNRs). Pediatric data sets were reconstructed at their original acquisition settings and additionally simulated to lower dose settings for comparison and evaluation of the potential for radiation dose reduction. Numerical experiments were conducted to quantify EST and other iterative methods in terms of image quality and computation time. The extension of EST to helical cone-beam CT was implemented by using the advanced single-slice rebinning (ASSR) method. 
Based on the phantom and pediatric patient fan-beam CT data, it is demonstrated that EST reconstructions with the lowest scanner flux setting of 39 mAs produce comparable image quality, resolution, and contrast relative to FBP with the 140 mAs flux setting. Compared to the algebraic reconstruction technique and the expectation maximization statistical reconstruction algorithm, a significant reduction in computation time is achieved with EST. Finally, numerical experiments on helical cone-beam CT data suggest that the combination of EST and ASSR produces reconstructions with higher image quality and lower noise than the Feldkamp-Davis-Kress (FDK) method and the conventional ASSR approach. A Fourier-based iterative method has been applied to the reconstruction of fan-beam CT data with reduced x-ray fluence. This method incorporates advantageous features of both real- and Fourier-space iterative schemes: using a fast and algebraically exact method to calculate the forward projection, enforcing the measured data in Fourier space, and applying physical constraints and flexible regularization in real space. Our results suggest that EST can be utilized for radiation dose reduction in x-ray CT via the readily implementable technique of lowering mAs settings. Numerical experiments further indicate that EST requires less computation time than several other iterative algorithms and can, in principle, be extended to helical cone-beam geometry in combination with the ASSR method.

  4. Mueller-matrix mapping of biological tissues in differential diagnosis of optical anisotropy mechanisms of protein networks

    NASA Astrophysics Data System (ADS)

    Ushenko, V. A.; Sidor, M. I.; Marchuk, Yu F.; Pashkovskaya, N. V.; Andreichuk, D. R.

    2015-03-01

    We report a model of Mueller-matrix description of optical anisotropy of protein networks in biological tissues with allowance for linear birefringence and dichroism. The model is used to construct algorithms for reconstruction of the coordinate distributions of phase shifts and of the linear dichroism coefficient. In the statistical analysis of such distributions, we have established objective criteria for differentiation between benign and malignant tissues of the female reproductive system. From the standpoint of evidence-based medicine, we have determined the operating characteristics (sensitivity, specificity and accuracy) of the Mueller-matrix reconstruction method of optical anisotropy parameters and demonstrated its effectiveness in the differentiation of benign and malignant tumours.
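
As a minimal illustration of how a phase shift and a linear dichroism coefficient can be read back from Mueller matrices, the sketch below uses the textbook matrices of a linear retarder and a linear diattenuator with horizontal axes. This is generic polarization optics, not the authors' tissue model, and sign conventions for the retarder block vary between texts.

```python
import numpy as np

def retarder_mueller(delta):
    # Linear retarder, fast axis horizontal, phase shift delta.
    c, s = np.cos(delta), np.sin(delta)
    return np.array([[1, 0, 0, 0],
                     [0, 1, 0, 0],
                     [0, 0, c, s],
                     [0, 0, -s, c]], dtype=float)

def diattenuator_mueller(p, q):
    # Linear diattenuator; p, q are intensity transmittances along the two axes.
    a, b = (p + q) / 2.0, (p - q) / 2.0
    return np.array([[a, b, 0, 0],
                     [b, a, 0, 0],
                     [0, 0, np.sqrt(p * q), 0],
                     [0, 0, 0, np.sqrt(p * q)]], dtype=float)

def phase_shift(M):
    # delta recovered from the lower-right rotation block.
    return np.arctan2(M[2, 3], M[2, 2])

def dichroism_coefficient(M):
    # (p - q)/(p + q) recovered from the first row.
    return M[0, 1] / M[0, 0]
```

Applying such element-wise inversions pixel by pixel yields the coordinate distributions of phase shift and dichroism that the abstract's statistical analysis operates on.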

  5. Azimuth-invariant mueller-matrix differentiation of the optical anisotropy of biological tissues

    NASA Astrophysics Data System (ADS)

    Ushenko, V. A.; Sidor, M. I.; Marchuk, Yu. F.; Pashkovskaya, N. V.; Andreichuk, D. R.

    2014-07-01

    A Mueller-matrix model is proposed for analysis of the optical anisotropy of protein networks of optically thin nondepolarizing layers of biological tissues with allowance for birefringence and dichroism. The model is used to construct algorithms for reconstruction of coordinate distributions of phase shifts and coefficient of linear dichroism. Objective criteria for differentiation of benign and malignant tissues of female genitals are formulated in the framework of the statistical analysis of such distributions. Approaches of evidence-based medicine are used to determine the working characteristics (sensitivity, specificity, and accuracy) of the Mueller-matrix method for the reconstruction of the parameters of optical anisotropy and show its efficiency in the differentiation of benign and malignant tumors.

  6. Methods of Information Geometry to model complex shapes

    NASA Astrophysics Data System (ADS)

    De Sanctis, A.; Gattone, S. A.

    2016-09-01

    In this paper, a new statistical method to model patterns emerging in complex systems is proposed. A framework for shape analysis of 2-dimensional landmark data is introduced, in which each landmark is represented by a bivariate Gaussian distribution. From Information Geometry we know that the Fisher-Rao metric endows the statistical manifold of parameters of a family of probability distributions with a Riemannian metric. This approach thus allows one to reconstruct the intermediate steps in the evolution between observed shapes by computing the geodesic, with respect to the Fisher-Rao metric, between the corresponding distributions. Furthermore, the geodesic path can be used for shape predictions. As an application, we study the evolution of the rat skull shape. A future application in Ophthalmology is introduced.
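
For intuition, the Fisher-Rao geometry of univariate Gaussians has a closed form: mapping N(mu, sigma) to the point (mu/sqrt(2), sigma) of the hyperbolic half-plane turns the Fisher-Rao distance into sqrt(2) times the hyperbolic distance, and geodesics into half-plane geodesics. This univariate sketch is a simplification of the bivariate setting used in the paper.

```python
import numpy as np

def fisher_rao_distance(mu1, s1, mu2, s2):
    # Fisher metric of N(mu, sigma) is (d mu^2 + 2 d sigma^2) / sigma^2;
    # with x = mu / sqrt(2) this is twice the hyperbolic half-plane metric.
    x1, x2 = mu1 / np.sqrt(2.0), mu2 / np.sqrt(2.0)
    sq = (x1 - x2) ** 2 + (s1 - s2) ** 2
    return np.sqrt(2.0) * np.arccosh(1.0 + sq / (2.0 * s1 * s2))

d = fisher_rao_distance(0.0, 1.0, 1.0, 2.0)
```

Interpolating along the corresponding half-plane geodesic (a vertical line or a semicircle centered on the sigma = 0 axis) gives the "intermediate shapes" idea of the abstract in one dimension.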

  7. Enhanced capital-asset pricing model for the reconstruction of bipartite financial networks.

    PubMed

    Squartini, Tiziano; Almog, Assaf; Caldarelli, Guido; van Lelyveld, Iman; Garlaschelli, Diego; Cimini, Giulio

    2017-09-01

    Reconstructing patterns of interconnections from partial information is one of the most important issues in the statistical physics of complex networks. A paramount example is provided by financial networks. In fact, the spreading and amplification of financial distress in capital markets are strongly affected by the interconnections among financial institutions. Yet, while the aggregate balance sheets of institutions are publicly disclosed, information on single positions is mostly confidential and, as such, unavailable. Standard approaches to reconstruct the network of financial interconnection produce unrealistically dense topologies, leading to a biased estimation of systemic risk. Moreover, reconstruction techniques are generally designed for monopartite networks of bilateral exposures between financial institutions, thus failing in reproducing bipartite networks of security holdings (e.g., investment portfolios). Here we propose a reconstruction method based on constrained entropy maximization, tailored for bipartite financial networks. Such a procedure enhances the traditional capital-asset pricing model (CAPM) and allows us to reproduce the correct topology of the network. We test this enhanced CAPM (ECAPM) method on a dataset, collected by the European Central Bank, of detailed security holdings of European institutional sectors over a period of six years (2009-2015). Our approach outperforms the traditional CAPM and the recently proposed maximum-entropy CAPM both in reproducing the network topology and in estimating systemic risk due to fire sales spillovers. In general, ECAPM can be applied to the whole class of weighted bipartite networks described by the fitness model.
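
A generic fitness-model sketch conveys the core idea (this is the standard fitness ansatz underlying such reconstructions, not the paper's exact ECAPM calibration): link probabilities take the form p_ij = z*x_i*y_j/(1 + z*x_i*y_j), where x_i and y_j are observable node "fitnesses" (e.g., aggregate holdings and capitalizations, both hypothetical stand-ins here) and the single parameter z is tuned so the expected number of links matches a target density.

```python
import numpy as np

def calibrate_z(x, y, L, lo=1e-12, hi=1e12, iters=200):
    # Bisect (on a log scale) the monotone function z -> expected links - L.
    xy = np.outer(x, y)
    def excess(z):
        return np.sum(z * xy / (1.0 + z * xy)) - L
    for _ in range(iters):
        mid = np.sqrt(lo * hi)
        if excess(mid) > 0.0:
            hi = mid
        else:
            lo = mid
    return np.sqrt(lo * hi)

x = np.array([1.0, 2.0, 5.0])        # hypothetical row fitnesses (e.g. holders)
y = np.array([0.5, 1.5, 3.0, 4.0])   # hypothetical column fitnesses (e.g. securities)
L = 5.0                              # target expected number of links
z = calibrate_z(x, y, L)
P = z * np.outer(x, y) / (1.0 + z * np.outer(x, y))
```

Because p_ij saturates below 1, the calibrated network is sparse rather than the unrealistically dense topology produced by naive maximum-entropy weight spreading.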

  8. Enhanced capital-asset pricing model for the reconstruction of bipartite financial networks

    NASA Astrophysics Data System (ADS)

    Squartini, Tiziano; Almog, Assaf; Caldarelli, Guido; van Lelyveld, Iman; Garlaschelli, Diego; Cimini, Giulio

    2017-09-01

    Reconstructing patterns of interconnections from partial information is one of the most important issues in the statistical physics of complex networks. A paramount example is provided by financial networks. In fact, the spreading and amplification of financial distress in capital markets are strongly affected by the interconnections among financial institutions. Yet, while the aggregate balance sheets of institutions are publicly disclosed, information on single positions is mostly confidential and, as such, unavailable. Standard approaches to reconstruct the network of financial interconnection produce unrealistically dense topologies, leading to a biased estimation of systemic risk. Moreover, reconstruction techniques are generally designed for monopartite networks of bilateral exposures between financial institutions, thus failing in reproducing bipartite networks of security holdings (e.g., investment portfolios). Here we propose a reconstruction method based on constrained entropy maximization, tailored for bipartite financial networks. Such a procedure enhances the traditional capital-asset pricing model (CAPM) and allows us to reproduce the correct topology of the network. We test this enhanced CAPM (ECAPM) method on a dataset, collected by the European Central Bank, of detailed security holdings of European institutional sectors over a period of six years (2009-2015). Our approach outperforms the traditional CAPM and the recently proposed maximum-entropy CAPM both in reproducing the network topology and in estimating systemic risk due to fire sales spillovers. In general, ECAPM can be applied to the whole class of weighted bipartite networks described by the fitness model.

  9. Improving automated 3D reconstruction methods via vision metrology

    NASA Astrophysics Data System (ADS)

    Toschi, Isabella; Nocerino, Erica; Hess, Mona; Menna, Fabio; Sargeant, Ben; MacDonald, Lindsay; Remondino, Fabio; Robson, Stuart

    2015-05-01

    This paper aims to provide a procedure for improving automated 3D reconstruction methods via vision metrology. The 3D reconstruction problem is generally addressed using two different approaches. On the one hand, vision metrology (VM) systems try to accurately derive 3D coordinates of few sparse object points for industrial measurement and inspection applications; on the other, recent dense image matching (DIM) algorithms are designed to produce dense point clouds for surface representations and analyses. This paper strives to demonstrate a step towards narrowing the gap between traditional VM and DIM approaches. Efforts are therefore intended to (i) test the metric performance of the automated photogrammetric 3D reconstruction procedure, (ii) enhance the accuracy of the final results and (iii) obtain statistical indicators of the quality achieved in the orientation step. VM tools are exploited to integrate their main functionalities (centroid measurement, photogrammetric network adjustment, precision assessment, etc.) into the pipeline of 3D dense reconstruction. Finally, geometric analyses and accuracy evaluations are performed on the raw output of the matching (i.e. the point clouds) by adopting a metrological approach. The latter is based on the use of known geometric shapes and quality parameters derived from VDI/VDE guidelines. Tests are carried out by imaging the calibrated Portable Metric Test Object, designed and built at University College London (UCL), UK. It allows assessment of the performance of the image orientation and matching procedures within a typical industrial scenario, characterised by poor texture and known 3D/2D shapes.

  10. The Scarless Latissimus Dorsi Flap Provides Effective Lower Pole Prosthetic Coverage in Breast Reconstruction

    PubMed Central

    Miteff, Kirstin G.

    2014-01-01

    Background: The evolution of surgical breast cancer treatment has led to the oncologically safe preservation of greater amounts of native skin, yet we are still often using flaps with large skin paddles, thereby resulting in significant donor-site scars. This explains the increasing appeal of acellular dermal matrix reconstructions. Acellular dermal matrices can, however, have significant problems, particularly if there is any vascular compromise of the mastectomy skin flaps. We have developed a method of raising the latissimus dorsi flap through the anterior mastectomy incisions without requiring special instruments or repositioning. This can provide autologous vascularized cover of the prosthesis. Methods: A clear surgical description of the scarless latissimus dorsi flap harvest is provided, and the results of a retrospective cohort review of 20 consecutive patients with 27 traditional latissimus dorsi breast reconstructions were compared with those of 20 consecutive patients with 30 scarless latissimus dorsi breast reconstructions. Results: Operative time, length of stay, and complication rates were reduced in the scarless group. Patients' Breast-Q scores were equivalent in each group. The aesthetic assessment was good/excellent in 77% of both groups; however, subscale assessment was better in the scarless group. This was statistically significant (P = 0.0). Conclusions: Breast reconstruction using the scarless latissimus dorsi flap is time effective, requires no patient repositioning, and uses standard breast instrumentation. It is safe and versatile while reducing the risk of exposed prosthesis if native skin necrosis occurs. It is a vascularized alternative to acellular dermal matrices. PMID:25289340

  11. Nature of Driving Force for Protein Folding: A Result From Analyzing the Statistical Potential

    NASA Astrophysics Data System (ADS)

    Li, Hao; Tang, Chao; Wingreen, Ned S.

    1997-07-01

    In a statistical approach to protein structure analysis, Miyazawa and Jernigan derived a 20×20 matrix of inter-residue contact energies between different types of amino acids. Using the method of eigenvalue decomposition, we find that the Miyazawa-Jernigan matrix can be accurately reconstructed from its first two principal component vectors as M_ij = C_0 + C_1(q_i + q_j) + C_2 q_i q_j, with constant C's, and 20 q values associated with the 20 amino acids. This regularity is due to hydrophobic interactions and a force of demixing, the latter obeying Hildebrand's solubility theory of simple liquids.
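
The quoted form explains why two principal components suffice: any matrix M_ij = C_0 + C_1(q_i + q_j) + C_2 q_i q_j factors as V B V^T with V = [1, q] (an n×2 matrix) and B the 2×2 matrix of coefficients, so it has rank at most 2. A sketch verifying this with random stand-in q values (not the actual Miyazawa-Jernigan hydrophobicities):

```python
import numpy as np

rng = np.random.default_rng(1)
q = rng.uniform(-1.0, 1.0, 20)        # stand-in q values, one per amino acid
C0, C1, C2 = -0.3, 1.1, 2.4
ones = np.ones_like(q)

M = (C0 * np.outer(ones, ones)
     + C1 * (np.outer(q, ones) + np.outer(ones, q))
     + C2 * np.outer(q, q))

rank = np.linalg.matrix_rank(M)       # rank <= 2 by construction

# rank-2 eigen-reconstruction reproduces M exactly (up to round-off)
w, U = np.linalg.eigh(M)
top2 = np.argsort(np.abs(w))[-2:]
M2 = (U[:, top2] * w[top2]) @ U[:, top2].T
```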

  12. An Analysis Methodology for the Gamma-ray Large Area Space Telescope

    NASA Technical Reports Server (NTRS)

    Morris, Robin D.; Cohen-Tanugi, Johann

    2004-01-01

    The Large Area Telescope (LAT) instrument on the Gamma Ray Large Area Space Telescope (GLAST) has been designed to detect high-energy gamma rays and determine their direction of incidence and energy. We propose a reconstruction algorithm based on recent advances in statistical methodology. This method, an alternative to the standard event analysis inherited from high-energy collider physics experiments, incorporates more accurately the physical processes occurring in the detector and makes full use of the statistical information available. It could thus provide a better estimate of the direction and energy of the primary photon.

  13. Synchronized multiartifact reduction with tomographic reconstruction (SMART-RECON): A statistical model based iterative image reconstruction method to eliminate limited-view artifacts and to mitigate the temporal-average artifacts in time-resolved CT.

    PubMed

    Chen, Guang-Hong; Li, Yinsheng

    2015-08-01

    In x-ray computed tomography (CT), a violation of the Tuy data sufficiency condition leads to limited-view artifacts. In some applications, it is desirable to use data corresponding to a narrow temporal window to reconstruct images with reduced temporal-average artifacts. However, the need to reduce temporal-average artifacts in practice may result in a violation of the Tuy condition and thus undesirable limited-view artifacts. In this paper, the authors present a new iterative reconstruction method, synchronized multiartifact reduction with tomographic reconstruction (SMART-RECON), to eliminate limited-view artifacts using data acquired within an ultranarrow temporal window that severely violates the Tuy condition. In time-resolved contrast-enhanced CT acquisitions, image contrast dynamically changes during data acquisition. Each image reconstructed from data acquired in a given temporal window represents one time frame and can be denoted as an image vector. Conventionally, each individual time frame is reconstructed independently. In this paper, all image frames are grouped into a spatial-temporal image matrix and are reconstructed together. Rather than the spatial and/or temporal smoothing regularizers commonly used in iterative image reconstruction, the nuclear norm of the spatial-temporal image matrix is used in SMART-RECON to regularize the reconstruction of all image time frames. This regularizer exploits the low-dimensional structure of the spatial-temporal image matrix to mitigate limited-view artifacts when an ultranarrow temporal window is desired in some applications to reduce temporal-average artifacts. Both numerical simulations in two-dimensional image slices with known ground truth and in vivo human subject data acquired in a contrast-enhanced cone-beam CT exam have been used to validate the proposed SMART-RECON algorithm and to demonstrate the initial performance of the algorithm.
Reconstruction errors and temporal fidelity of the reconstructed images were quantified using the relative root mean square error (rRMSE) and the universal quality index (UQI) in numerical simulations. The performance of the SMART-RECON algorithm was compared with that of the prior image constrained compressed sensing (PICCS) reconstruction quantitatively in simulations and qualitatively in the human subject exam. In numerical simulations, the 240° short-scan angular span was divided into four consecutive 60° angular subsectors. SMART-RECON enables four high temporal fidelity images without limited-view artifacts. The average rRMSE is 16% and the UQIs are 0.96 and 0.95 for the two local regions of interest, respectively. In contrast, the corresponding average rRMSE and UQIs are 25%, 0.78, and 0.81, respectively, for the PICCS reconstruction. Note that only one filtered backprojection image can be reconstructed from the same data set, with an average rRMSE of 45% and UQIs of 0.71 and 0.79, to benchmark reconstruction accuracies. For in vivo contrast-enhanced cone-beam CT data acquired over a short-scan angular span of 200°, three 66° angular subsectors were used in SMART-RECON. The results demonstrated clear contrast differences in the three SMART-RECON reconstructed image volumes without limited-view artifacts. In contrast, for the same angular sectors, PICCS cannot reconstruct images without limited-view artifacts and with clear contrast differences in the three reconstructed image volumes. In time-resolved CT, the proposed SMART-RECON method provides a new way to eliminate limited-view artifacts using data acquired in an ultranarrow temporal window, which corresponds to approximately 60° angular subsectors.
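
In proximal iterative schemes, a nuclear-norm term like SMART-RECON's is handled by singular value soft-thresholding (SVT) of the spatial-temporal matrix (one column per time frame). The sketch below shows just that generic prox step on random stand-in data, not the full SMART-RECON update with its data-fidelity term.

```python
import numpy as np

def svt(X, tau):
    # Proximal operator of tau * (nuclear norm): soft-threshold singular values.
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return (U * np.maximum(s - tau, 0.0)) @ Vt

rng = np.random.default_rng(2)
frames = rng.normal(size=(256, 4))   # 4 time frames, one 256-pixel image per column
low_rank = svt(frames, tau=5.0)
```

Shrinking the singular values promotes a low-rank spatial-temporal matrix, which is how the regularizer exploits redundancy across time frames.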

  14. Percolation Analysis as a Tool to Describe the Topology of the Large Scale Structure of the Universe

    NASA Astrophysics Data System (ADS)

    Yess, Capp D.

    1997-09-01

    Percolation analysis is the study of the properties of clusters. In cosmology, it is the statistics of the size and number of clusters. This thesis presents a refinement of percolation analysis and its application to astronomical data. An overview of the standard model of the universe and the development of large scale structure is presented in order to place the study in historical and scientific context. Then, using percolation statistics, we for the first time demonstrate the universal character of a network pattern in the real-space mass distributions resulting from nonlinear gravitational instability of initial Gaussian fluctuations. We also find that the maximum of the number-of-clusters statistic in the evolved, nonlinear distributions is determined by the effective slope of the power spectrum. Next, we present percolation analyses of Wiener Reconstructions of the IRAS 1.2 Jy Redshift Survey. There are ten reconstructions of galaxy density fields in real space spanning the range β = 0.1 to 1.0, where β = Ω^0.6/b, Ω is the present dimensionless density, and b is the linear bias factor. Our method uses the growth of the largest cluster statistic to characterize the topology of a density field, where Gaussian randomized versions of the reconstructions are used as standards for analysis. For the reconstruction volume of radius R ≈ 100 h^-1 Mpc, percolation analysis reveals a slight 'meatball' topology for the real-space galaxy distribution of the IRAS survey. Finally, we employ a percolation technique developed for pointwise distributions to analyze two-dimensional projections of the three northern and three southern slices in the Las Campanas Redshift Survey and then give consideration to further study of the methodology, errors and application of percolation.
We track the growth of the largest cluster as a topological indicator to a depth of 400 h^-1 Mpc, and report an unambiguous signal, with high signal-to-noise ratio, indicating a network topology which in two dimensions is indicative of a filamentary distribution. It is hoped that one day percolation analysis can characterize the structure of the universe to a degree that will aid theorists in confidently describing the nature of our world.
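
The largest-cluster statistic is simple to sketch on a 2D field: threshold the density at decreasing levels and track the size of the largest connected cluster of super-threshold cells (4-connectivity here). A Gaussian random field stands in for the reconstructed density; the real analysis compares such curves against Gaussian-randomized standards.

```python
import numpy as np
from collections import deque

def largest_cluster(mask):
    # Breadth-first flood fill over a boolean occupancy grid.
    mask = mask.copy()
    best = 0
    rows, cols = mask.shape
    for i, j in zip(*np.nonzero(mask)):
        if not mask[i, j]:
            continue                      # already absorbed into a cluster
        size, queue = 0, deque([(i, j)])
        mask[i, j] = False
        while queue:
            a, b = queue.popleft()
            size += 1
            for da, db in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                na, nb = a + da, b + db
                if 0 <= na < rows and 0 <= nb < cols and mask[na, nb]:
                    mask[na, nb] = False
                    queue.append((na, nb))
        best = max(best, size)
    return best

rng = np.random.default_rng(3)
field = rng.normal(size=(64, 64))         # stand-in Gaussian density field
thresholds = (2.0, 1.0, 0.0, -1.0)        # decreasing density thresholds
sizes = [largest_cluster(field > t) for t in thresholds]
```

How quickly the largest cluster grows as the threshold is lowered distinguishes "meatball", network/filamentary, and Gaussian topologies.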

  15. SU-E-T-427: Cell Surviving Fractions Derived From Tumor-Volume Variation During Radiotherapy for Non-Small Cell Lung Cancer: Comparison with Predictive Assays

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chvetsov, A; Schwartz, J; Mayr, N

    2014-06-01

    Purpose: To show that a distribution of cell surviving fractions S_2 in a heterogeneous group of patients can be derived from tumor-volume variation curves during radiotherapy for non-small cell lung cancer. Methods: Our analysis was based on two data sets of tumor-volume variation curves for heterogeneous groups of 17 patients treated for non-small cell lung cancer with conventional dose fractionation. The data sets were obtained previously at two independent institutions by using megavoltage (MV) computed tomography (CT). Statistical distributions of cell surviving fractions S_2 and cell clearance half-lives of lethally damaged cells T_1/2 have been reconstructed in each patient group by using a version of the two-level cell population tumor response model and a simulated annealing algorithm. The reconstructed statistical distributions of the cell surviving fractions have been compared to the distributions measured using predictive assays in vitro. Results: Non-small cell lung cancer presents certain difficulties for modeling surviving fractions using tumor-volume variation curves because of relatively large fractional hypoxic volume, low gradient of tumor-volume response, and possible uncertainties due to breathing motion. Despite these difficulties, cell surviving fractions S_2 for non-small cell lung cancer derived from tumor-volume variation measured at different institutions have similar probability density functions (PDFs) with mean values of 0.30 and 0.43 and standard deviations of 0.13 and 0.18, respectively. The PDFs for cell surviving fractions S_2 reconstructed from tumor-volume variation agree with the PDF measured in vitro. Comparison of the reconstructed cell surviving fractions with patient survival data shows that the patient survival time decreases as the cell surviving fraction increases.
Conclusion: The data obtained in this work suggest that the cell surviving fractions S_2 can be reconstructed from the tumor-volume variation curves measured during radiotherapy with conventional fractionation. The proposed method can be used for treatment evaluation and adaptation.
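
The parameter-fitting step can be sketched generically: simulated annealing minimizes the squared misfit between a model volume curve and noisy measurements. The toy model below, V(t) = exp(-(1 - s)t) with a single surviving-fraction-like parameter s, is a hypothetical stand-in for the two-level cell population model, which the abstract does not specify in detail.

```python
import numpy as np

rng = np.random.default_rng(7)
t = np.linspace(0.0, 6.0, 25)                  # time samples (arbitrary units)
s_true = 0.35
data = np.exp(-(1.0 - s_true) * t) + rng.normal(0.0, 0.01, t.size)

def cost(s):
    return np.sum((np.exp(-(1.0 - s) * t) - data) ** 2)

s = 0.9                                        # deliberately poor start
c, temp = cost(s), 1.0
for _ in range(4000):
    s_new = float(np.clip(s + rng.normal(0.0, 0.05), 0.0, 1.0))
    c_new = cost(s_new)
    # Metropolis acceptance: always take improvements, sometimes take worse moves.
    if c_new < c or rng.uniform() < np.exp((c - c_new) / temp):
        s, c = s_new, c_new
    temp *= 0.997                              # geometric cooling schedule
```

Running such a fit per patient and histogramming the fitted parameters yields the kind of population PDF the abstract reports.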

  16. Noise spatial nonuniformity and the impact of statistical image reconstruction in CT myocardial perfusion imaging

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lauzier, Pascal Theriault; Tang Jie; Speidel, Michael A.

    Purpose: To achieve high temporal resolution in CT myocardial perfusion imaging (MPI), images are often reconstructed using filtered backprojection (FBP) algorithms from data acquired within a short-scan angular range. However, the variation in the central angle from one time frame to the next in gated short scans has been shown to create detrimental partial scan artifacts when performing quantitative MPI measurements. This study has two main purposes. (1) To demonstrate the existence of a distinct detrimental effect in short-scan FBP, i.e., the introduction of a nonuniform spatial image noise distribution; this nonuniformity can lead to unexpectedly high image noise and streaking artifacts, which may affect CT MPI quantification. (2) To demonstrate that statistical image reconstruction (SIR) algorithms can be a potential solution to address the nonuniform spatial noise distribution problem and can also lead to radiation dose reduction in the context of CT MPI. Methods: Projection datasets from a numerically simulated perfusion phantom and an in vivo animal myocardial perfusion CT scan were used in this study. In the numerical phantom, multiple realizations of Poisson noise were added to projection data at each time frame to investigate the spatial distribution of noise. Images from all datasets were reconstructed using both FBP and SIR reconstruction algorithms. To quantify the spatial distribution of noise, the mean and standard deviation were measured in several regions of interest (ROIs) and analyzed across time frames. In the in vivo study, two low-dose scans at tube currents of 25 and 50 mA were reconstructed using FBP and SIR. Quantitative perfusion metrics, namely, the normalized upslope (NUS), myocardial blood volume (MBV), and first moment transit time (FMT), were measured for two ROIs and compared to reference values obtained from a high-dose scan performed at 500 mA.
Results: Images reconstructed using FBP showed a highly nonuniform spatial distribution of noise. This spatial nonuniformity led to large fluctuations in the temporal direction. In the numerical phantom study, the level of noise was shown to vary by as much as 87% within a given image, and as much as 110% between different time frames for a ROI far from isocenter. The spatially nonuniform noise pattern was shown to correlate with the source trajectory and the object structure. In contrast, images reconstructed using SIR showed a highly uniform spatial distribution of noise, leading to smaller unexpected noise fluctuations in the temporal direction when a short scan angular range was used. In the numerical phantom study, the noise varied by less than 37% within a given image, and by less than 20% between different time frames. Also, the noise standard deviation in SIR images was on average half of that of FBP images. In the in vivo studies, the deviation observed between quantitative perfusion metrics measured from low-dose scans and high-dose scans was mitigated when SIR was used instead of FBP to reconstruct images. Conclusions: (1) Images reconstructed using FBP suffered from nonuniform spatial noise levels. This nonuniformity is another manifestation of the detrimental effects caused by short-scan reconstruction in CT MPI. (2) Images reconstructed using SIR had a much lower and more uniform noise level and thus can be used as a potential solution to address the FBP nonuniformity. (3) Given the improvement in the accuracy of the perfusion metrics when using SIR, it may be desirable to use a statistical reconstruction framework to perform low-dose dynamic CT MPI.
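
The multiple-realizations noise measurement described in the Methods can be illustrated on a toy image: simulate many Poisson realizations of a field with spatially varying mean counts, form the per-pixel standard-deviation map, and compare ROI noise levels. A real CT study would of course do this on reconstructed frames, not raw counts.

```python
import numpy as np

rng = np.random.default_rng(4)
mean_counts = np.full((32, 32), 100.0)
mean_counts[:, 16:] = 400.0                     # better-sampled half of the image

# 500 independent Poisson realizations of the whole image
stack = rng.poisson(mean_counts, size=(500, 32, 32))
noise_map = stack.std(axis=0)                   # per-pixel noise estimate

roi_low = noise_map[:, :16].mean()              # expected ~ sqrt(100) = 10
roi_high = noise_map[:, 16:].mean()             # expected ~ sqrt(400) = 20
nonuniformity = (roi_high - roi_low) / roi_low  # ~100% spatial variation
```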

  17. Noise spatial nonuniformity and the impact of statistical image reconstruction in CT myocardial perfusion imaging

    PubMed Central

    Lauzier, Pascal Thériault; Tang, Jie; Speidel, Michael A.; Chen, Guang-Hong

    2012-01-01

    Purpose: To achieve high temporal resolution in CT myocardial perfusion imaging (MPI), images are often reconstructed using filtered backprojection (FBP) algorithms from data acquired within a short-scan angular range. However, the variation in the central angle from one time frame to the next in gated short scans has been shown to create detrimental partial scan artifacts when performing quantitative MPI measurements. This study has two main purposes. (1) To demonstrate the existence of a distinct detrimental effect in short-scan FBP, i.e., the introduction of a nonuniform spatial image noise distribution; this nonuniformity can lead to unexpectedly high image noise and streaking artifacts, which may affect CT MPI quantification. (2) To demonstrate that statistical image reconstruction (SIR) algorithms can be a potential solution to address the nonuniform spatial noise distribution problem and can also lead to radiation dose reduction in the context of CT MPI. Methods: Projection datasets from a numerically simulated perfusion phantom and an in vivo animal myocardial perfusion CT scan were used in this study. In the numerical phantom, multiple realizations of Poisson noise were added to projection data at each time frame to investigate the spatial distribution of noise. Images from all datasets were reconstructed using both FBP and SIR reconstruction algorithms. To quantify the spatial distribution of noise, the mean and standard deviation were measured in several regions of interest (ROIs) and analyzed across time frames. In the in vivo study, two low-dose scans at tube currents of 25 and 50 mA were reconstructed using FBP and SIR. Quantitative perfusion metrics, namely, the normalized upslope (NUS), myocardial blood volume (MBV), and first moment transit time (FMT), were measured for two ROIs and compared to reference values obtained from a high-dose scan performed at 500 mA. 
Results: Images reconstructed using FBP showed a highly nonuniform spatial distribution of noise. This spatial nonuniformity led to large fluctuations in the temporal direction. In the numerical phantom study, the level of noise was shown to vary by as much as 87% within a given image, and as much as 110% between different time frames for a ROI far from isocenter. The spatially nonuniform noise pattern was shown to correlate with the source trajectory and the object structure. In contrast, images reconstructed using SIR showed a highly uniform spatial distribution of noise, leading to smaller unexpected noise fluctuations in the temporal direction when a short scan angular range was used. In the numerical phantom study, the noise varied by less than 37% within a given image, and by less than 20% between different time frames. Also, the noise standard deviation in SIR images was on average half of that of FBP images. In the in vivo studies, the deviation observed between quantitative perfusion metrics measured from low-dose scans and high-dose scans was mitigated when SIR was used instead of FBP to reconstruct images. Conclusions: (1) Images reconstructed using FBP suffered from nonuniform spatial noise levels. This nonuniformity is another manifestation of the detrimental effects caused by short-scan reconstruction in CT MPI. (2) Images reconstructed using SIR had a much lower and more uniform noise level and thus can be used as a potential solution to address the FBP nonuniformity. (3) Given the improvement in the accuracy of the perfusion metrics when using SIR, it may be desirable to use a statistical reconstruction framework to perform low-dose dynamic CT MPI. PMID:22830741

  18. Low-dose CT reconstruction via L1 dictionary learning regularization using iteratively reweighted least-squares.

    PubMed

    Zhang, Cheng; Zhang, Tao; Li, Ming; Peng, Chengtao; Liu, Zhaobang; Zheng, Jian

    2016-06-18

    In order to reduce the radiation dose of CT (computed tomography), compressed sensing theory has become a hot topic, since it provides the possibility of high-quality recovery from sparsely sampled data. Recently, algorithms based on DL (dictionary learning) were developed to deal with the sparse CT reconstruction problem. However, the existing DL algorithm focuses on the minimization problem with an L2-norm regularization term, which causes reconstruction quality to deteriorate as the sampling rate declines further. Therefore, it is essential to improve the DL method to meet the demand of further dose reduction. In this paper, we replaced the L2-norm regularization term with an L1-norm one. It is expected that the proposed L1-DL method could alleviate the over-smoothing effect of the L2-minimization and preserve more image detail. The proposed algorithm solves the L1-minimization problem by a weighting strategy, solving the new weighted L2-minimization problem based on IRLS (iteratively reweighted least squares). Through numerical simulations, the proposed algorithm is compared with the existing DL method (adaptive dictionary based statistical iterative reconstruction, ADSIR) and two other typical compressed sensing algorithms. The results reveal that the proposed algorithm is more accurate than the other algorithms, especially when further reducing the sampling rate or increasing the noise. The proposed L1-DL algorithm can utilize more prior information on image sparsity than ADSIR. By replacing the L2-norm regularization term of ADSIR with an L1-norm one and solving the L1-minimization problem by the IRLS strategy, L1-DL can reconstruct the image more accurately.
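
The IRLS idea can be sketched on the basic sparse-recovery problem min ||x||_1 s.t. Ax = b: each step solves a weighted L2 problem min sum_i w_i x_i^2 s.t. Ax = b with weights w_i = 1/(|x_i| + eps), which has the closed-form solution x = D A^T (A D A^T)^{-1} b with D = diag(|x_i| + eps). This is the generic IRLS scheme on a random toy problem, not the paper's full dictionary-learning objective.

```python
import numpy as np

rng = np.random.default_rng(5)
m, n, k = 40, 80, 5
A = rng.normal(size=(m, n))
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.normal(size=k)
b = A @ x_true

# Min-norm least-squares start (pure L2, no sparsity prior).
x = A.T @ np.linalg.solve(A @ A.T, b)
x_l2 = x.copy()

for _ in range(30):
    d = np.abs(x) + 1e-6            # inverse IRLS weights, d_i = 1/w_i
    AD = A * d                      # A @ diag(d), columnwise scaling
    x = d * (A.T @ np.linalg.solve(AD @ A.T, b))

err_irls = np.linalg.norm(x - x_true)
err_l2 = np.linalg.norm(x_l2 - x_true)
```

The reweighting progressively suppresses small coefficients, steering the L2 solver toward the sparse L1 solution.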

  19. A multispecies tree ring reconstruction of Potomac River streamflow (950-2001)

    NASA Astrophysics Data System (ADS)

    Maxwell, R. Stockton; Hessl, Amy E.; Cook, Edward R.; Pederson, Neil

    2011-05-01

    Mean May-September Potomac River streamflow was reconstructed from 950-2001 using a network of tree ring chronologies (n = 27) representing multiple species. We chose a nested principal components reconstruction method to maximize use of available chronologies backward in time. Explained variance during the period of calibration ranged from 20% to 53% depending on the number and species of chronologies available in each 25 year time step. The model was verified by two goodness of fit tests, the coefficient of efficiency (CE) and the reduction of error statistic (RE). The RE and CE never fell below zero, suggesting the model had explanatory power over the entire period of reconstruction. Beta weights indicated a loss of explained variance during the 1550-1700 period that we hypothesize was caused by the reduction in total number of predictor chronologies and loss of important predictor species. Thus, the reconstruction is strongest from 1700-2001. Frequency, intensity, and duration of drought and pluvial events were examined to aid water resource managers. We found that the instrumental period did not adequately represent the full range of annual to multidecadal variability present in the reconstruction. Our reconstruction of mean May-September Potomac River streamflow was a significant improvement over the Cook and Jacoby (1983) reconstruction because it expanded the seasonal window, lengthened the record by 780 years, and better replicated the mean and variance of the instrumental record. By capitalizing on variable phenologies and tree growth responses to climate, multispecies reconstructions may provide significantly more information about past hydroclimate, especially in regions with low aridity and high tree species diversity.
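
The two verification statistics named above have standard definitions: both compare the reconstruction's squared errors in an independent verification period against a null prediction, either the calibration-period mean (reduction of error, RE) or the verification-period mean (coefficient of efficiency, CE). Values above zero indicate skill beyond the null, which is what "never fell below zero" certifies.

```python
import numpy as np

def re_statistic(obs_ver, rec_ver, calib_mean):
    # Reduction of error: skill vs. predicting the calibration-period mean.
    sse = np.sum((obs_ver - rec_ver) ** 2)
    return 1.0 - sse / np.sum((obs_ver - calib_mean) ** 2)

def ce_statistic(obs_ver, rec_ver):
    # Coefficient of efficiency: skill vs. predicting the verification-period mean.
    sse = np.sum((obs_ver - rec_ver) ** 2)
    return 1.0 - sse / np.sum((obs_ver - np.mean(obs_ver)) ** 2)
```

A perfect reconstruction gives RE = CE = 1; predicting the verification mean gives CE = 0, so CE is the stricter of the two tests.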

  20. Accelerated perturbation-resilient block-iterative projection methods with application to image reconstruction

    PubMed Central

    Nikazad, T; Davidi, R; Herman, G. T.

    2013-01-01

    We study the convergence of a class of accelerated perturbation-resilient block-iterative projection methods for solving systems of linear equations. We prove convergence to a fixed point of an operator even in the presence of summable perturbations of the iterates, irrespective of the consistency of the linear system. For a consistent system, the limit point is a solution of the system. In the inconsistent case, the symmetric version of our method converges to a weighted least squares solution. Perturbation resilience is utilized to approximate the minimum of a convex functional subject to the equations. A main contribution, compared with previously published approaches to similar aims, is a speed-up of more than an order of magnitude, as demonstrated by applying the methods to problems of image reconstruction from projections. In addition, the accelerated algorithms are shown to be better, in a strict sense provided by the method of statistical hypothesis testing, than their unaccelerated versions for the task of detecting small tumors in the brain from X-ray CT projection data. PMID:23440911
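
    For orientation, a minimal unaccelerated, perturbation-free block-iterative projection loop of the kind studied here can be sketched as follows. The block partition, relaxation parameter, and use of the block pseudoinverse are illustrative choices, not the authors' accelerated algorithm:

```python
import numpy as np

def block_projection(A, b, n_blocks=4, sweeps=500, relax=1.0):
    """Cycle over row blocks of A x = b, projecting the iterate onto
    each block's solution set via the block pseudoinverse. For a
    consistent system the iterates converge to a solution."""
    m, n = A.shape
    x = np.zeros(n)
    blocks = np.array_split(np.arange(m), n_blocks)
    pinvs = [np.linalg.pinv(A[idx]) for idx in blocks]  # precomputed
    for _ in range(sweeps):
        for idx, P in zip(blocks, pinvs):
            x = x + relax * (P @ (b[idx] - A[idx] @ x))
    return x

# Consistent system: the iterates converge to its (unique) solution
rng = np.random.default_rng(1)
A = rng.standard_normal((16, 8))
x_true = rng.standard_normal(8)
b = A @ x_true
x_hat = block_projection(A, b)
```

    The acceleration and perturbation resilience analyzed in the paper are layered on top of exactly this kind of cyclic block structure.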

  2. Towards extensive spatio-temporal reconstructions of North American land cover: a comparison of state-of-the-art pollen-vegetation models

    NASA Astrophysics Data System (ADS)

    Dawson, A.; Trachsel, M.; Goring, S. J.; Paciorek, C. J.; McLachlan, J. S.; Jackson, S. T.; Williams, J. W.

    2017-12-01

    Pollen records have been extensively used to reconstruct past changes in vegetation and to study the underlying processes. However, developing the statistical techniques needed to accurately represent both data and process uncertainties is a formidable challenge. Recent advances in paleoecoinformatics (e.g. the Neotoma Paleoecology Database and the European Pollen Database), Bayesian age-depth models, process-based pollen-vegetation models, and Bayesian hierarchical modeling have pushed paleovegetation reconstructions forward to a point where multiple sources of uncertainty can be incorporated into reconstructions, which in turn enables new hypotheses to be tested and more rigorous integration of paleovegetation data with earth system models and terrestrial ecosystem models. Several kinds of pollen-vegetation models (PVMs) have been developed, notably LOVE/REVEALS, STEPPS, and classical transfer functions such as the modern analog technique. LOVE/REVEALS has been adopted as the standard method for the LandCover6k effort to develop quantitative reconstructions of land cover for the Holocene, while STEPPS was developed recently as part of the PalEON project and applied to reconstruct, with uncertainty, shifts in forest composition in New England and the upper Midwest during the late Holocene. Each PVM has different assumptions and structure and uses different input data, but few comparisons among approaches yet exist. Here, we present new reconstructions of land cover change in northern North America during the Holocene based on LOVE/REVEALS and data drawn from the Neotoma database, and compare STEPPS-based reconstructions to those from LOVE/REVEALS. These parallel developments provide an opportunity to compare and contrast models, and to begin to generate continental-scale reconstructions, with explicit uncertainties, that can provide a base for interdisciplinary research within the biogeosciences. We show how STEPPS provides an important benchmark for past land-cover reconstruction, and how the LandCover6k effort in North America advances our understanding of the past by allowing cross-continent comparisons using standardized methods and quantifying the impact of humans in the early Anthropocene.

  3. Soft-tissue imaging with C-arm cone-beam CT using statistical reconstruction

    NASA Astrophysics Data System (ADS)

    Wang, Adam S.; Webster Stayman, J.; Otake, Yoshito; Kleinszig, Gerhard; Vogt, Sebastian; Gallia, Gary L.; Khanna, A. Jay; Siewerdsen, Jeffrey H.

    2014-02-01

    The potential for statistical image reconstruction methods such as penalized-likelihood (PL) reconstruction to improve soft-tissue visualization in C-arm cone-beam CT (CBCT) for intraoperative imaging, relative to conventional filtered backprojection (FBP), is assessed in this work through a comparison matched for soft-tissue imaging performance. A prototype mobile C-arm was used to scan anthropomorphic head and abdomen phantoms as well as a cadaveric torso at doses substantially lower than typical values in diagnostic CT, and the effects of dose reduction via tube current reduction and sparse sampling were also compared. Matched spatial resolution between PL and FBP was determined from the edge spread function of low-contrast (˜40-80 HU) spheres in the phantoms, which were representative of soft-tissue imaging tasks. PL using the non-quadratic Huber penalty was found to substantially reduce noise relative to FBP, especially at lower spatial resolution, where PL provided a contrast-to-noise ratio increase of up to 1.4-2.2× over FBP at 50% dose reduction across all objects. Comparison of sampling strategies indicates that soft-tissue imaging benefits from fully sampled acquisitions at doses above ˜1.7 mGy and from 50% sparsity at doses below ˜1.0 mGy. An appropriate sampling strategy, together with the improved low-contrast visualization offered by statistical reconstruction, therefore demonstrates the potential for extending intraoperative C-arm CBCT to soft-tissue interventions in neurosurgery as well as thoracic and abdominal surgery by overcoming conventional tradeoffs among noise, spatial resolution, and dose.
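
    The Huber penalty used by PL here has a simple closed form: quadratic near zero, so small noise-like differences are smoothed, and linear in the tails, so large edge-like differences are not over-penalized. A minimal sketch, with an illustrative threshold delta and a toy 1-D signal:

```python
import numpy as np

def huber(t, delta=1.0):
    """Huber penalty: quadratic for |t| <= delta (noise smoothing),
    linear beyond delta (edge preservation)."""
    t = np.asarray(t, float)
    small = np.abs(t) <= delta
    return np.where(small, 0.5 * t ** 2,
                    delta * (np.abs(t) - 0.5 * delta))

# Neighbor-difference regularizer of a 1-D signal with one sharp edge
x = np.array([0.0, 0.1, 0.05, 3.0, 3.1])
penalty = huber(np.diff(x), delta=0.2).sum()
```

    Compared with a purely quadratic penalty on the same differences, the edge term (2.95) contributes linearly rather than quadratically, which is why the Huber choice preserves edges while still suppressing noise.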

  4. Dual Energy CT (DECT) Monochromatic Imaging: Added Value of Adaptive Statistical Iterative Reconstructions (ASIR) in Portal Venography

    PubMed Central

    Winklhofer, Sebastian; Jiang, Rong; Wang, Xinlian; He, Wen

    2016-01-01

    Objective To investigate the effect of adaptive statistical iterative reconstruction (ASIR) on image quality in portal venography by dual energy CT (DECT) imaging. Materials and Methods DECT scans of 45 cirrhotic patients obtained in the portal venous phase were analyzed. Monochromatic images at 70 keV were reconstructed with the following four ASIR percentages: 0%, 30%, 50%, and 70%. The image noise (IN) (standard deviation, SD) of the portal vein (PV), the contrast-to-noise ratio (CNR), and subjective scores for the sharpness of PV boundaries and for diagnostic acceptability (DA) were obtained. The IN, CNR, and subjective scores were compared among the four ASIR groups. Results The IN (in HU) of the PV (10.05±3.14, 9.23±3.05, 8.44±2.95 and 7.83±2.90) decreased and the CNR of the PV (8.04±3.32, 8.95±3.63, 9.80±4.12 and 10.74±4.73) increased with increasing ASIR percentage (0%, 30%, 50%, and 70%, respectively), and both differed significantly among the four ASIR groups (p<0.05). The subjective scores showed that the sharpness of portal vein boundaries (3.13±0.59, 2.82±0.44, 2.73±0.54 and 2.07±0.54) decreased with higher ASIR percentages (p<0.05). Subjective diagnostic acceptability was highest at 30% ASIR (p<0.05). Conclusions Adding 30% ASIR in DECT portal venography improved the quality of the 70 keV monochromatic images. PMID:27315158
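
    The CNR figure of merit reported above has a standard construction. The sketch below uses one common convention (contrast between ROI means divided by the background standard deviation) with invented HU values; the paper's exact ROI placement and noise definition are not reproduced here.

```python
import numpy as np

def cnr(roi, background):
    """Contrast-to-noise ratio: contrast between the ROI and
    background means, normalized by the background standard
    deviation (one common convention; other noise definitions
    exist)."""
    roi = np.asarray(roi, float)
    bg = np.asarray(background, float)
    return abs(roi.mean() - bg.mean()) / bg.std(ddof=1)

pv = np.array([180.0, 175.0, 182.0, 178.0])           # portal-vein ROI (HU)
liver = np.array([100.0, 108.0, 95.0, 102.0, 99.0])   # background ROI (HU)
```

    With these toy numbers the portal vein contrasts strongly against the liver background; in the study, raising the ASIR percentage lowers the noise term in the denominator, which is exactly why CNR rises with ASIR.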

  5. Bayesian Reconstruction of Disease Outbreaks by Combining Epidemiologic and Genomic Data

    PubMed Central

    Jombart, Thibaut; Cori, Anne; Didelot, Xavier; Cauchemez, Simon; Fraser, Christophe; Ferguson, Neil

    2014-01-01

    Recent years have seen progress in the development of statistically rigorous frameworks to infer outbreak transmission trees (“who infected whom”) from epidemiological and genetic data. Making use of pathogen genome sequences in such analyses remains a challenge, however, with a variety of heuristic approaches having been explored to date. We introduce a statistical method exploiting both pathogen sequences and collection dates to unravel the dynamics of densely sampled outbreaks. Our approach identifies likely transmission events and infers dates of infections, unobserved cases, and separate introductions of the disease. It also proves useful for inferring numbers of secondary infections and for identifying heterogeneous infectivity and super-spreaders. After testing our approach using simulations, we illustrate the method with an analysis of the beginning of the 2003 Singaporean outbreak of Severe Acute Respiratory Syndrome (SARS), providing new insights into the early stage of this epidemic. Our approach is the first tool for disease outbreak reconstruction from genetic data that is widely available as free software, the R package outbreaker. It is applicable to various densely sampled epidemics, and improves on previous approaches by detecting unobserved and imported cases, as well as allowing multiple introductions of the pathogen. Because of its generality, we believe this method will become a tool of choice for the analysis of densely sampled disease outbreaks, and will form a rigorous framework for subsequent methodological developments. PMID:24465202

  6. Semi-Tomographic Gamma Scanning Technique for Non-Destructive Assay of Radioactive Waste Drums

    NASA Astrophysics Data System (ADS)

    Gu, Weiguo; Rao, Kaiyuan; Wang, Dezhong; Xiong, Jiemei

    2016-12-01

    Segmented gamma scanning (SGS) and tomographic gamma scanning (TGS) are two traditional detection techniques for low- and intermediate-level radioactive waste drums. This paper proposes a detection method named semi-tomographic gamma scanning (STGS) to avoid the poor detection accuracy of SGS and to shorten the detection time of TGS. The method and its algorithm synthesize the principles of SGS and TGS: each segment is divided into annular voxels, and tomography is used in the radiation reconstruction. The accuracy of STGS is verified by both experiments and simulations for 208-liter standard waste drums containing three types of nuclides. Cases of a point source or multiple point sources, and of uniform or nonuniform matrix materials, are employed for comparison. The results show that STGS exhibits a large improvement in detection performance; compared with SGS, the reconstruction error and statistical bias are reduced by one quarter to one third or less in most cases.

  7. Forensic Discrimination of Latent Fingerprints Using Laser-Induced Breakdown Spectroscopy (LIBS) and Chemometric Approaches.

    PubMed

    Yang, Jun-Ho; Yoh, Jack J

    2018-01-01

    A novel technique is reported for separating overlapping latent fingerprints using chemometric approaches that combine laser-induced breakdown spectroscopy (LIBS) and multivariate analysis. The LIBS technique provides the capability of real-time analysis and high-frequency scanning, as well as data on the chemical composition of overlapping latent fingerprints. These spectra offer valuable information for the classification and reconstruction of overlapping latent fingerprints when appropriate statistical multivariate analysis is applied. The current study employs principal component analysis and partial least squares methods for the classification of latent fingerprints from the LIBS spectra. The technique was successfully demonstrated in a classification study of four distinct latent fingerprints using soft independent modeling of class analogy (SIMCA) and partial least squares discriminant analysis (PLS-DA). The method yielded an accuracy of more than 85% and proved sufficiently robust. Furthermore, through laser scanning analysis at a spatial interval of 125 µm, the overlapping fingerprints were reconstructed as separate two-dimensional forms.
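
    As a simplified stand-in for the SIMCA/PLS-DA pipeline described above, the sketch below classifies synthetic "spectra" by projecting onto principal components (computed by SVD) and assigning the nearest class centroid in score space. The spectra, the noise level, and the four-channel dimensionality are all invented for illustration; real LIBS spectra have thousands of wavelength channels.

```python
import numpy as np

def pca_scores(X, n_components=2):
    """Project mean-centered spectra onto the top principal
    components, computed via SVD of the centered data matrix."""
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:n_components].T, X.mean(axis=0), Vt[:n_components]

def nearest_centroid(scores, labels, query):
    """Classify a projected spectrum by the closest class centroid."""
    classes = sorted(set(labels))
    cents = {c: scores[np.array(labels) == c].mean(axis=0)
             for c in classes}
    return min(classes, key=lambda c: np.linalg.norm(query - cents[c]))

# Two synthetic 'fingerprint' classes with distinct spectral signatures
rng = np.random.default_rng(2)
base_a = np.array([5.0, 1.0, 0.2, 3.0])
base_b = np.array([1.0, 4.0, 2.5, 0.5])
X = np.vstack([base_a + 0.1 * rng.standard_normal(4) for _ in range(10)]
              + [base_b + 0.1 * rng.standard_normal(4) for _ in range(10)])
labels = ['A'] * 10 + ['B'] * 10
scores, mean, comps = pca_scores(X)
query = (base_a + 0.1 * rng.standard_normal(4) - mean) @ comps.T
```

    PLS-DA differs from this sketch in that it builds its latent components to maximize covariance with the class labels rather than the spectral variance alone, but the project-then-classify structure is the same.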

  8. Statistical reconstruction for cone-beam CT with a post-artifact-correction noise model: application to high-quality head imaging

    NASA Astrophysics Data System (ADS)

    Dang, H.; Stayman, J. W.; Sisniega, A.; Xu, J.; Zbijewski, W.; Wang, X.; Foos, D. H.; Aygun, N.; Koliatsos, V. E.; Siewerdsen, J. H.

    2015-08-01

    Non-contrast CT reliably detects fresh blood in the brain and is the current front-line imaging modality for intracranial hemorrhage such as that occurring in acute traumatic brain injury (contrast ~40-80 HU, size  >  1 mm). We are developing flat-panel detector (FPD) cone-beam CT (CBCT) to facilitate such diagnosis in a low-cost, mobile platform suitable for point-of-care deployment. Such a system may offer benefits in the ICU, urgent care/concussion clinic, ambulance, and sports and military theatres. However, current FPD-CBCT systems face significant challenges that confound low-contrast, soft-tissue imaging. Artifact correction can overcome major sources of bias in FPD-CBCT but imparts noise amplification in filtered backprojection (FBP). Model-based reconstruction improves soft-tissue image quality compared to FBP by leveraging a high-fidelity forward model and image regularization. In this work, we develop a novel penalized weighted least-squares (PWLS) image reconstruction method with a noise model that includes accurate modeling of the noise characteristics associated with the two dominant artifact corrections (scatter and beam-hardening) in CBCT and utilizes modified weights to compensate for the noise amplification imparted by each correction. Experiments included real data acquired on a FPD-CBCT test-bench and an anthropomorphic head phantom emulating intra-parenchymal hemorrhage. The proposed PWLS method demonstrated superior noise-resolution tradeoffs in comparison to FBP and to PWLS with conventional weights (viz. at matched 0.50 mm spatial resolution, CNR = 11.9 compared to CNR = 5.6 and CNR = 9.9, respectively) and substantially reduced image noise, especially in challenging regions such as the skull base. The results support the hypothesis that, with high-fidelity artifact correction and statistical reconstruction using an accurate post-artifact-correction noise model, FPD-CBCT can achieve image quality allowing reliable detection of intracranial hemorrhage.
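
    The PWLS objective named above has the generic form Φ(x) = (y − Ax)ᵀW(y − Ax) + βR(x). The sketch below solves a toy instance with a quadratic first-difference roughness penalty; the paper instead uses an edge-preserving penalty and derives W from the post-artifact-correction noise model, whereas here the weights are simply the inverse measurement variances and all dimensions are invented.

```python
import numpy as np

def pwls_solve(A, y, w, beta=0.05):
    """Penalized weighted least squares with a quadratic smoothness
    penalty: minimize (y - Ax)^T W (y - Ax) + beta * ||D x||^2,
    where D is the first-difference operator. The statistical
    weights w carry the noise model: high-variance measurements
    get low weight."""
    n = A.shape[1]
    D = np.diff(np.eye(n), axis=0)           # first-difference operator
    W = np.diag(np.asarray(w, float))
    H = A.T @ W @ A + beta * (D.T @ D)
    return np.linalg.solve(H, A.T @ W @ y)

# Noisy measurements of a smooth object with two distinct noise levels
rng = np.random.default_rng(3)
x_true = np.sin(np.linspace(0, np.pi, 30))
A = rng.standard_normal((60, 30)) / np.sqrt(30)
sigma = np.where(np.arange(60) < 30, 0.01, 0.3)   # mixed noise levels
y = A @ x_true + sigma * rng.standard_normal(60)
x_w = pwls_solve(A, y, w=1.0 / sigma ** 2)        # statistical weights
x_u = pwls_solve(A, y, w=np.ones(60))             # uniform weights
```

    Down-weighting the noisy measurements is precisely the mechanism by which the paper's modified weights compensate for the noise amplification imparted by the scatter and beam-hardening corrections.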

  9. Synthesizing US Colonial Climate: Available Data and a "Proxy Adjustment" Method

    NASA Astrophysics Data System (ADS)

    Zalzal, K. S.; Munoz-Hernandez, A.; Arrigo, J. S.

    2008-12-01

    Climate and its variability are primary drivers of hydrologic systems. A paucity of instrumental data makes reconstructing seventeenth- and eighteenth-century climatic conditions along the Northeast corridor difficult, yet this information is necessary if we are to understand the conditions, changes, and interactions society had with hydrosystems during this first period of permanent European settlement. For this period (approx. 1600-1800) there are instrumental records for some regions, such as annual temperature and precipitation data for Philadelphia beginning in 1738; Cambridge, Mass., from 1747-1776; and temperature for New Haven, Conn., from 1780 to 1800. There are also paleorecords, including tree-ring analyses and sediment-core examinations of pollen and overwash deposits, and historical accounts of extreme weather events. Our analyses show that correlating even the available data is less than straightforward. To produce a "best track" climate record, we introduce a new method of "paleoadjustment" as a means to characterize climate statistical properties, as opposed to a strict reconstruction. Combining the instrumental record with the paleorecord, we estimated two sets of climate forcings for use in colonial hydrology study. The first utilized a recent instrumental record (1817-1917) from Baltimore, Md., statistically adjusted in 20-year windows to match trends in the paleorecords and anecdotal evidence from the Middle Colonies and Chesapeake Bay region. The second was a regression reconstruction for New England using climate indices developed from journal records and the Cambridge, Mass., instrumental record. The two climate reconstructions were used to compute the annual potential water yield over the 200-year period of interest. A comparison of these results allowed us to make preliminary conclusions regarding the effect of climate on hydrology during the colonial period. We contend that an understanding of historical hydrology will improve our ability to predict and react to changes in global water resources.

  10. Computational synchronization of microarray data with application to Plasmodium falciparum.

    PubMed

    Zhao, Wei; Dauwels, Justin; Niles, Jacquin C; Cao, Jianshu

    2012-06-21

    Microarrays are widely used to investigate the blood stage of Plasmodium falciparum infection. Starting with synchronized cells, gene expression levels are continually measured over the 48-hour intra-erythrocytic developmental cycle (IDC). However, the cell population gradually loses synchrony during the experiment, and as a result the microarray measurements are blurred. In this paper, we propose a generalized deconvolution approach to reconstruct the intrinsic expression pattern, and apply it to P. falciparum IDC microarray data. We develop a statistical model for the decay of synchrony among cells, and reconstruct the expression pattern through statistical inference. The proposed method can handle microarray measurements with noise and missing data. The original gene expression patterns become more apparent in the reconstructed profiles, making the data easier to analyze and interpret. We hypothesize that the reconstructed profiles represent better temporally resolved expression patterns that can be probabilistically modeled to match changes in expression level to IDC transitions. In particular, we identify transcriptionally regulated protein kinases putatively involved in regulating the P. falciparum IDC. By analyzing publicly available microarray data sets for the P. falciparum IDC, protein kinases are ranked in terms of their likelihood to be involved in regulating transitions between the ring, trophozoite, and schizont developmental stages. In our theoretical framework, a few protein kinases have high probability rankings and could potentially be involved in regulating these developmental transitions. This study proposes a new methodology for extracting intrinsic expression patterns from microarray data. By applying this method to P. falciparum microarray data, several protein kinases are predicted to play a significant role in the P. falciparum IDC. Earlier experiments have indeed confirmed that several of these kinases are involved in this process. Overall, these results indicate that further functional analysis of these additional putative protein kinases may reveal new insights into how the P. falciparum IDC is regulated.
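
    The deconvolution idea can be illustrated on a toy periodic cycle. The sketch below is not the paper's statistical-inference model: it assumes the measured profile is a circular convolution of the intrinsic profile with a known synchrony-loss kernel, and inverts it with simple Tikhonov regularization in the Fourier domain. The profile shapes, kernel width, and regularization constant are invented.

```python
import numpy as np

def deconvolve_periodic(measured, kernel, reg=1e-3):
    """Tikhonov-regularized deconvolution over one periodic cycle:
    measured = intrinsic (circularly convolved with) kernel, inverted
    in the Fourier domain. The kernel models the spread of cell ages
    as synchrony is lost."""
    M, K = np.fft.fft(measured), np.fft.fft(kernel)
    X = M * np.conj(K) / (np.abs(K) ** 2 + reg)
    return np.real(np.fft.ifft(X))

# 48 hourly samples; an intrinsic pulse blurred by a spread of cell ages
t = np.arange(48)
intrinsic = np.exp(-0.5 * ((t - 20.0) / 2.0) ** 2)
kernel = np.exp(-0.5 * (np.minimum(t, 48 - t) / 4.0) ** 2)
kernel /= kernel.sum()
measured = np.real(np.fft.ifft(np.fft.fft(intrinsic) * np.fft.fft(kernel)))
recovered = deconvolve_periodic(measured, kernel)
```

    The recovered profile is sharper than the measured one, which mirrors how the paper's reconstructed profiles make the original expression patterns more apparent; the paper's method additionally handles noise and missing data through explicit statistical inference rather than a fixed Fourier filter.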

  11. Effect of the image resolution on the statistical descriptors of heterogeneous media.

    PubMed

    Ledesma-Alonso, René; Barbosa, Romeli; Ortegón, Jaime

    2018-02-01

    The characterization and reconstruction of heterogeneous materials, such as porous media and electrode materials, involve the application of image processing methods to data acquired by scanning electron microscopy or other microscopy techniques. Among them, binarization and decimation are critical in order to compute the correlation functions that characterize the microstructure of the above-mentioned materials. In this study, we present a theoretical analysis of the effects of image-size reduction due to progressive, sequential decimation of the original image. Three decimation procedures (random, bilinear, and bicubic) were implemented, and their consequences for the discrete correlation functions (two-point, line-path, and pore-size distribution) and the coarseness (derived from the local volume fraction) are reported and analyzed. The chosen statistical descriptors (correlation functions and coarseness) are typically employed to characterize and reconstruct heterogeneous materials. A normalization for each of the correlation functions has been performed. When the loss of statistical information in a decimated image is not significant, its normalized correlation function follows the trend of the original image (the reference function). In contrast, when the decimated image no longer holds statistical evidence of the original, its normalized correlation function deviates from the reference function. Moreover, the equally weighted sum of the average squared differences between the discrete correlation functions of the decimated images and the reference functions leads to a definition of an overall error. During the first stages of gradual decimation, the error remains relatively small and independent of the decimation procedure. Above a threshold defined by the correlation length of the reference function, the error becomes a function of the number of decimation steps. At this stage, some statistical information is lost and the error becomes dependent on the decimation procedure. These results may help to restrict the amount of information that one can afford to lose during a decimation process, in order to reduce the computational and memory cost when one aims to shorten a characterization or reconstruction technique while maintaining the statistical quality of the digitized sample.
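
    For concreteness, one of the descriptors named above, the two-point probability function, can be computed along one image axis as follows. This is a minimal sketch for a binary image; the line-path and pore-size functions also used in the study are not shown.

```python
import numpy as np

def s2_horizontal(img, max_r):
    """Two-point probability S2(r): the probability that two pixels a
    horizontal distance r apart both lie in the phase of interest
    (value 1). S2(0) equals the volume fraction phi, and S2(r)
    approaches phi^2 at large r for an uncorrelated medium."""
    img = np.asarray(img, bool)
    return np.array([(img[:, :img.shape[1] - r] & img[:, r:]).mean()
                     for r in range(max_r + 1)])

# Checkerboard test pattern: perfect anti-correlation at odd lags
board = np.indices((8, 8)).sum(axis=0) % 2 == 0
s2 = s2_horizontal(board, 3)
```

    Comparing such curves before and after decimation is exactly the kind of check the overall-error measure in the study formalizes: a decimated image that still reproduces the reference S2(r) has retained the relevant statistical information.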

  13. Return to motor activity after anterior cruciate ligament reconstruction--pilot study.

    PubMed

    Stańczak, Katarzyna; Domżalski, Marcin; Synder, Marek; Sibiński, Marcin

    2014-01-01

    Background. Reconstruction surgery is the most frequent treatment for patients with anterior cruciate ligament (ACL) lesions. The goal of the study was to present patients' subjective evaluation of their return to motor activity after ACL reconstruction and to investigate whether and which demographic or clinical factors determine the recovery of physical function in ACL reconstruction patients. Material and methods. The study involved a group of fifty (50) patients who underwent ACL reconstruction. The mean age of the patients was 32 years. A questionnaire was used to collect data from the patients: the first part concerned personal and clinical data, while the second part was the KOOS form. Results. The incidence of unfavourable post-operative symptoms was lower in older patients, as well as in those with longer periods between injury and reconstruction. The patients in whom the patellar ligament was used for the reconstruction demonstrated better outcomes as regards returning to sports and recreational activity than those in whom flexor tendons were used. The patients who returned to practising a sport reported more pain episodes and problems with daily and sports activities, and their quality of life was inferior to that of those who did not return to unrestricted sports activity. Conclusions. 1. Neither sex nor BMI has any statistically significant effect on the recovery of mobility after ACL reconstruction. 2. ACL reconstruction with a graft harvested from the central band of the patellar ligament appears to be more appropriate for patients willing to return to full sports and recreational activity. 3. It is better to carry out ACL reconstruction once normal knee joint function has been regained and injury-related symptoms have subsided.

  14. The learning curve of robot-assisted laparoscopic aortofemoral bypass grafting for aortoiliac occlusive disease.

    PubMed

    Novotný, Tomáš; Dvorák, Martin; Staffa, Robert

    2011-02-01

    Since the end of the 20th century, robot-assisted surgery has been finding its role among other minimally invasive methods, and vascular surgery seems to be another specialty in which the benefits of this technology can be expected. Our objective was to assess the learning curve of robot-assisted laparoscopic aortofemoral bypass grafting for aortoiliac occlusive disease in a group of 40 patients. Between May 2006 and January 2010, 40 patients (32 men, 8 women) with a median age of 58 years (range, 48-75 years) underwent 40 robot-assisted laparoscopic aortofemoral reconstructions. Learning curve estimations were used for anastomosis, clamping, and operative time assessment. For conversion rate evaluation, the cumulative summation (CUSUM) technique was used. Statistical analyses comparing the first and second halves of our group, and unilateral to bilateral reconstructions, were performed. We created 21 aortofemoral and 19 aortobifemoral bypasses. The median proximal anastomosis time was 23 minutes (range, 18-50 minutes), median clamping time was 60 minutes (range, 40-95 minutes), and median operative time was 295 minutes (range, 180-475 minutes). The 30-day mortality rate was 0%, and no graft or wound infections or cardiopulmonary or hepatorenal complications were observed. During the median 18-month follow-up (range, 2-48 months), three early graft occlusions occurred (7%). After reoperations, the secondary patency of reconstructions was 100%. The data showed a typical short learning curve for robotic proximal anastomosis creation, with reductions in anastomosis and clamping times. The operative time learning curve was flat, confirming the procedure's complexity. There were two conversions to open surgery; CUSUM analysis confirmed that an acceptable conversion rate, set at 5%, was achieved. Comparing the first and second halves of our group, all recorded times showed statistically significant improvements. Differences between unilateral and bilateral reconstructions were not statistically significant. Our results show that the success rate of robot-assisted laparoscopic aortofemoral bypass grafting is high and the complication rate is low. Anastomosis creation, one of the main difficulties of laparoscopic bypass grafting, has been overcome using the robotic operating system, and its learning curve is short. However, the endoscopic dissection of the aortoiliac segment remains the most difficult part of the operation and should be addressed in further development of the method to reduce operative times. Long-term results and potential benefits of this minimally invasive method have to be verified by randomized controlled clinical trials. Copyright © 2011 Society for Vascular Surgery. Published by Mosby, Inc. All rights reserved.
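
    The CUSUM construction used for the conversion-rate assessment can be sketched generically. The variant below accumulates observed-minus-expected failures against the acceptable rate p0 = 5%; the positions of the two conversions within the 40-case series are invented, and the paper's exact chart parameters (decision limits, any risk adjustment) are not reproduced.

```python
def cusum_observed_minus_expected(outcomes, p0=0.05):
    """Cumulative observed-minus-expected failures, a simple CUSUM
    variant for monitoring a procedure against an acceptable failure
    rate p0. A persistently rising curve flags performance worse
    than p0; a flat or falling curve is acceptable."""
    s, curve = 0.0, []
    for failed in outcomes:          # 1 = conversion, 0 = completed
        s += failed - p0
        curve.append(round(s, 10))
    return curve

# 40 procedures with 2 conversions (hypothetical positions: cases 5, 23)
outcomes = [0] * 40
outcomes[4] = outcomes[22] = 1
curve = cusum_observed_minus_expected(outcomes)
```

    With 2 conversions in 40 cases (exactly 5%), the curve returns to zero by the end of the series, which is the graphical sense in which the acceptable conversion rate "was achieved."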

  15. Post traumatic brain perfusion SPECT analysis using reconstructed ROI maps of radioactive microsphere derived cerebral blood flow and statistical parametric mapping

    PubMed Central

    McGoron, Anthony J; Capille, Michael; Georgiou, Michael F; Sanchez, Pablo; Solano, Juan; Gonzalez-Brito, Manuel; Kuluz, John W

    2008-01-01

    Background Assessment of cerebral blood flow (CBF) by SPECT could be important in the management of patients with severe traumatic brain injury (TBI) because changes in regional CBF can affect outcome by promoting edema formation and intracranial pressure elevation (with cerebral hyperemia), or by causing secondary ischemic injury including post-traumatic stroke. The purpose of this study was to establish an improved method for evaluating regional CBF changes after TBI in piglets. Methods The focal effects of moderate traumatic brain injury (TBI) on cerebral blood flow (CBF) by SPECT cerebral blood perfusion (CBP) imaging in an animal model were investigated by parallelized statistical techniques. Regional CBF was measured by radioactive microspheres and by SPECT 2 hours after injury in sham-operated piglets versus those receiving severe TBI by fluid-percussion injury to the left parietal lobe. Qualitative SPECT CBP accuracy was assessed against reference radioactive microsphere regional CBF measurements by map reconstruction, registration and smoothing. Cerebral hypoperfusion in the test group was identified at the voxel level using statistical parametric mapping (SPM). Results A significant area of hypoperfusion (P < 0.01) was found as a response to the TBI. Statistical mapping of the reference microsphere CBF data confirms a focal decrease found with SPECT and SPM. Conclusion The suitability of SPM for application to the experimental model and ability to provide insight into CBF changes in response to traumatic injury was validated by the SPECT SPM result of a decrease in CBP at the left parietal region injury area of the test group. Further study and correlation of this characteristic lesion with long-term outcomes and auxiliary diagnostic modalities is critical to developing more effective critical care treatment guidelines and automated medical imaging processing techniques. PMID:18312639
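
    The voxel-level comparison performed by SPM reduces, at its core, to a massively parallel statistical test at each voxel. A minimal stand-in is a pooled-variance two-sample t statistic on synthetic perfusion values; the group sizes, the injected deficit, and the tiny "voxel" count are invented, and SPM additionally handles spatial smoothing and multiple-comparison correction, which are omitted here.

```python
import numpy as np

def voxelwise_t(group_a, group_b):
    """Pooled-variance two-sample t statistic at every voxel, the
    core computation behind an SPM-style comparison of sham vs.
    injured perfusion maps. Input arrays: (subjects, voxels)."""
    a = np.asarray(group_a, float)
    b = np.asarray(group_b, float)
    na, nb = len(a), len(b)
    sp2 = (((na - 1) * a.var(0, ddof=1) + (nb - 1) * b.var(0, ddof=1))
           / (na + nb - 2))
    return (a.mean(0) - b.mean(0)) / np.sqrt(sp2 * (1 / na + 1 / nb))

# Synthetic maps: voxel 2 is hypoperfused in the 'injured' group
rng = np.random.default_rng(4)
sham = 50 + rng.standard_normal((8, 5))
tbi = 50 + rng.standard_normal((8, 5))
tbi[:, 2] -= 10.0
t = voxelwise_t(sham, tbi)
```

    A large positive t at a voxel marks hypoperfusion in the injured group, which is the voxel-level signal that, after thresholding (P < 0.01 in the study), delineates the lesion.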

  16. Imaging of neural oscillations with embedded inferential and group prevalence statistics.

    PubMed

    Donhauser, Peter W; Florin, Esther; Baillet, Sylvain

    2018-02-01

    Magnetoencephalography and electroencephalography (MEG, EEG) are essential techniques for studying distributed signal dynamics in the human brain. In particular, the functional role of neural oscillations remains to be clarified. For that reason, imaging methods need to identify distinct brain regions that concurrently generate oscillatory activity, with adequate separation in space and time. Yet, spatial smearing and inhomogeneous signal-to-noise are challenging factors for source reconstruction from external sensor data. The detection of weak sources in the presence of stronger regional activity nearby is a typical complication of MEG/EEG source imaging. We propose a novel, hypothesis-driven source reconstruction approach to address these methodological challenges. The imaging with embedded statistics (iES) method is a subspace scanning technique that constrains the mapping problem to the actual experimental design. A major benefit is that, regardless of signal strength, the contributions from all oscillatory sources whose activity is consistent with the tested hypothesis are equalized in the statistical maps produced. We present extensive evaluations of iES on group MEG data, for mapping 1) induced oscillations using experimental contrasts, 2) ongoing narrow-band oscillations in the resting state, 3) co-modulation of brain-wide oscillatory power with a seed region, and 4) co-modulation of oscillatory power with peripheral signals (pupil dilation). Along the way, we demonstrate several advantages of iES over standard source imaging approaches. These include the detection of oscillatory coupling without rejection of zero-phase coupling, and the detection of ongoing oscillations in deeper brain regions, where signal-to-noise conditions are unfavorable. We also show that iES provides a separate evaluation of oscillatory synchronization and desynchronization in experimental contrasts, which has important statistical advantages. The flexibility of iES allows it to be adjusted to many experimental questions in systems neuroscience.

  17. Imaging of neural oscillations with embedded inferential and group prevalence statistics

    PubMed Central

    2018-01-01

    Magnetoencephalography and electroencephalography (MEG, EEG) are essential techniques for studying distributed signal dynamics in the human brain. In particular, the functional role of neural oscillations remains to be clarified. For that reason, imaging methods need to identify distinct brain regions that concurrently generate oscillatory activity, with adequate separation in space and time. Yet, spatial smearing and inhomogeneous signal-to-noise are challenging factors for source reconstruction from external sensor data. The detection of weak sources in the presence of stronger regional activity nearby is a typical complication of MEG/EEG source imaging. We propose a novel, hypothesis-driven source reconstruction approach to address these methodological challenges. The imaging with embedded statistics (iES) method is a subspace scanning technique that constrains the mapping problem to the actual experimental design. A major benefit is that, regardless of signal strength, the contributions from all oscillatory sources whose activity is consistent with the tested hypothesis are equalized in the statistical maps produced. We present extensive evaluations of iES on group MEG data, for mapping 1) induced oscillations using experimental contrasts, 2) ongoing narrow-band oscillations in the resting state, 3) co-modulation of brain-wide oscillatory power with a seed region, and 4) co-modulation of oscillatory power with peripheral signals (pupil dilation). Along the way, we demonstrate several advantages of iES over standard source imaging approaches. These include the detection of oscillatory coupling without rejection of zero-phase coupling, and the detection of ongoing oscillations in deeper brain regions, where signal-to-noise conditions are unfavorable. We also show that iES provides a separate evaluation of oscillatory synchronization and desynchronization in experimental contrasts, which has important statistical advantages. The flexibility of iES allows it to be adjusted to many experimental questions in systems neuroscience. PMID:29408902

  18. Respiratory motion correction in 4D-PET by simultaneous motion estimation and image reconstruction (SMEIR)

    PubMed Central

    Kalantari, Faraz; Li, Tianfang; Jin, Mingwu; Wang, Jing

    2016-01-01

    In conventional 4D positron emission tomography (4D-PET), images from different frames are reconstructed individually and aligned by registration methods. Two issues that arise with this approach are as follows: 1) the reconstruction algorithms do not make full use of projection statistics; and 2) the registration between noisy images can result in poor alignment. In this study, we investigated the use of simultaneous motion estimation and image reconstruction (SMEIR) methods for motion estimation/correction in 4D-PET. A modified ordered-subset expectation maximization algorithm coupled with total variation minimization (OSEM-TV) was used to obtain a primary motion-compensated PET (pmc-PET) from all projection data, using Demons derived deformation vector fields (DVFs) as initial motion vectors. A motion model update was performed to obtain an optimal set of DVFs in the pmc-PET and other phases, by matching the forward projection of the deformed pmc-PET with measured projections from other phases. The OSEM-TV image reconstruction was repeated using updated DVFs, and new DVFs were estimated based on updated images. A 4D-XCAT phantom with typical FDG biodistribution was generated to evaluate the performance of the SMEIR algorithm in lung and liver tumors with different contrasts and different diameters (10 to 40 mm). The image quality of the 4D-PET was greatly improved by the SMEIR algorithm. When all projections were used to reconstruct 3D-PET without motion compensation, motion blurring artifacts were present, leading up to 150% tumor size overestimation and significant quantitative errors, including 50% underestimation of tumor contrast and 59% underestimation of tumor uptake. Errors were reduced to less than 10% in most images by using the SMEIR algorithm, showing its potential in motion estimation/correction in 4D-PET. PMID:27385378
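
    The expectation-maximization kernel underlying the OSEM-TV step can be sketched in a few lines. This is a plain ML-EM update on an invented toy system matrix; the ordered subsets, TV penalty, and deformation-vector-field machinery that define SMEIR are deliberately omitted.

```python
import numpy as np

def mlem(A, y, n_iter=100, eps=1e-12):
    """Basic ML-EM reconstruction: the kernel that OSEM-TV builds on.

    A : (n_bins, n_voxels) system matrix, y : measured counts.
    Each iteration multiplies the image by the backprojected ratio of
    measured to estimated projections, normalized by the sensitivity.
    """
    x = np.ones(A.shape[1])
    sens = A.sum(axis=0)                      # A^T 1, sensitivity image
    for _ in range(n_iter):
        ratio = y / np.maximum(A @ x, eps)    # measured / estimated projections
        x *= (A.T @ ratio) / np.maximum(sens, eps)
    return x

# Tiny toy system: 3 detector bins viewing 2 voxels, noiseless data
A = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [0.5, 0.5]])
x_true = np.array([4.0, 2.0])
y = A @ x_true
x_rec = mlem(A, y)
```

With consistent, noiseless data the iteration converges to the true activity; with Poisson noise one would stop early or regularize (as SMEIR does with TV).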

  19. Avascular necrosis in children with cerebral palsy after reconstructive hip surgery

    PubMed Central

    Phillips, L.; Hesketh, K.; Schaeffer, E. K.; Andrade, J.; Farr, J.; Mulpuri, K.

    2017-01-01

    Abstract Purpose Progressive hip displacement is one of the most common orthopaedic pathologies in children with cerebral palsy (CP). Reconstructive hip surgery has become the standard treatment of care. Reported avascular necrosis (AVN) rates for hip reconstructive surgery in these patients vary widely in the literature. The purpose of this study is to identify the frequency and associated risk factors of AVN for reconstructive hip procedures. Methods A retrospective analysis was performed of 70 cases of reconstructive hip surgery in 47 children with CP, between 2009 and 2013. All 70 cases involved varus derotation osteotomy (VDRO), with 60% having combined VDRO and pelvic osteotomies (PO), and 21% requiring open reductions. Mean age at time of surgery was 8.82 years and 90% of patients were Gross Motor Function Classification System (GMFCS) 4 and 5. Radiographic dysplasia parameters were analysed at selected intervals, to a minimum of one year post-operatively. Severity of AVN was classified by Kruczynski's method. Bivar- iate statistical analysis was conducted using Chi-square test and Student's t-test. Results There were 19 (27%) noted cases of AVN, all radio- graphically identifiable within the first post-operative year. The majority of AVN cases (63%) were mild to moderate in severity. Pre-operative migration percentage (MP) (p = 0.0009) and post-operative change in MP (p = 0.002) were the most significant predictors of AVN. Other risk factors were: GMFCS level (p = 0.031), post-operative change in NSA (p = 0.02) and concomitant adductor tenotomy (0.028). Conclusion AVN was observed in 27% of patients. Severity of displacement correlates directly with AVN risk and we suggest that hip reconstruction, specifically VDRO, be performed early in the 'hip at risk' group to avoid this complication. PMID:29081846

  20. Respiratory motion correction in 4D-PET by simultaneous motion estimation and image reconstruction (SMEIR)

    NASA Astrophysics Data System (ADS)

    Kalantari, Faraz; Li, Tianfang; Jin, Mingwu; Wang, Jing

    2016-08-01

    In conventional 4D positron emission tomography (4D-PET), images from different frames are reconstructed individually and aligned by registration methods. Two issues that arise with this approach are as follows: (1) the reconstruction algorithms do not make full use of projection statistics; and (2) the registration between noisy images can result in poor alignment. In this study, we investigated the use of simultaneous motion estimation and image reconstruction (SMEIR) methods for motion estimation/correction in 4D-PET. A modified ordered-subset expectation maximization algorithm coupled with total variation minimization (OSEM-TV) was used to obtain a primary motion-compensated PET (pmc-PET) from all projection data, using Demons derived deformation vector fields (DVFs) as initial motion vectors. A motion model update was performed to obtain an optimal set of DVFs in the pmc-PET and other phases, by matching the forward projection of the deformed pmc-PET with measured projections from other phases. The OSEM-TV image reconstruction was repeated using updated DVFs, and new DVFs were estimated based on updated images. A 4D-XCAT phantom with typical FDG biodistribution was generated to evaluate the performance of the SMEIR algorithm in lung and liver tumors with different contrasts and different diameters (10-40 mm). The image quality of the 4D-PET was greatly improved by the SMEIR algorithm. When all projections were used to reconstruct 3D-PET without motion compensation, motion blurring artifacts were present, leading up to 150% tumor size overestimation and significant quantitative errors, including 50% underestimation of tumor contrast and 59% underestimation of tumor uptake. Errors were reduced to less than 10% in most images by using the SMEIR algorithm, showing its potential in motion estimation/correction in 4D-PET.

  1. A meteo-hydrological modelling system for the reconstruction of river runoff: the case of the Ofanto river catchment

    NASA Astrophysics Data System (ADS)

    Verri, Giorgia; Pinardi, Nadia; Gochis, David; Tribbia, Joseph; Navarra, Antonio; Coppini, Giovanni; Vukicevic, Tomislava

    2017-10-01

    A meteo-hydrological modelling system has been designed for the reconstruction of long time series of rainfall and river runoff events. The modelling chain consists of the mesoscale meteorological model of the Weather Research and Forecasting (WRF) system, the land surface model NOAH-MP and the hydrology-hydraulics model WRF-Hydro. Two 3-month periods are reconstructed for winter 2011 and autumn 2013, containing heavy rainfall and river flooding events. Several sensitivity tests were performed along with an assessment of which tunable parameters, numerical choices and forcing data most affected the modelling performance. The calibration of the experiments highlighted that the infiltration and aquifer coefficients should be considered seasonally dependent. The WRF precipitation was validated by a comparison with rain gauges in the Ofanto basin. The WRF model was shown to be sensitive to the initialization time, and a spin-up of about 1.5 days was needed before the start of the major rainfall events in order to improve the accuracy of the reconstruction. However, this was not sufficient, and an optimal interpolation method was developed to correct the precipitation simulation. It is based on an objective analysis (OA) and a least-squares (LS) melding scheme, collectively named OA+LS. We demonstrate that the OA+LS method is a powerful tool to reduce precipitation uncertainties and produce a lower-error precipitation reconstruction that in turn generates a better river discharge time series. The validation of the river streamflow showed promising statistical indices. The final set-up of our meteo-hydrological modelling system was able to realistically reconstruct the local rainfall and the Ofanto hydrograph.
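
    The gauge-correction step can be illustrated with a generic optimal-interpolation (objective-analysis) update. This is not the paper's OA+LS scheme: the 1-D grid, Gaussian covariance model, and gauge values below are all invented for the sketch.

```python
import numpy as np

def optimal_interpolation(background, obs, obs_idx, B, obs_var):
    """One OI analysis step: correct a gridded field with point gauges.

    background : (n,) gridded field (e.g. model precipitation)
    obs        : (m,) gauge values located at grid indices obs_idx
    B          : (n, n) background-error covariance
    obs_var    : observation-error variance (scalar)
    """
    H = np.zeros((len(obs), len(background)))
    H[np.arange(len(obs)), obs_idx] = 1.0      # gauges sit on grid points
    innov = obs - H @ background               # innovations
    S = H @ B @ H.T + obs_var * np.eye(len(obs))
    w = np.linalg.solve(S, innov)
    return background + B @ H.T @ w            # analysis field

n = 20
x_grid = np.arange(n, dtype=float)
# Gaussian spatial correlations with a 3-cell length scale (assumed)
B = np.exp(-0.5 * ((x_grid[:, None] - x_grid[None, :]) / 3.0) ** 2)
background = np.full(n, 5.0)                   # model: 5 mm everywhere
obs_idx = np.array([4, 12])
obs = np.array([8.0, 8.0])                     # gauges measured 8 mm
analysis = optimal_interpolation(background, obs, obs_idx, B, obs_var=0.1)
```

The analysis pulls the field toward the gauges at their locations and spreads the correction to neighbouring cells according to the covariance.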

  2. Digital Breast Tomosynthesis guided Near Infrared Spectroscopy: Volumetric estimates of fibroglandular fraction and breast density from tomosynthesis reconstructions

    PubMed Central

    Vedantham, Srinivasan; Shi, Linxi; Michaelsen, Kelly E.; Krishnaswamy, Venkataramanan; Pogue, Brian W.; Poplack, Steven P.; Karellas, Andrew; Paulsen, Keith D.

    2016-01-01

    A multimodality system combining a clinical prototype digital breast tomosynthesis system, with its imaging geometry modified to facilitate near-infrared spectroscopic imaging, has been developed. The accuracy of parameters recovered from near-infrared spectroscopy is dependent on fibroglandular tissue content. Hence, in this study, volumetric estimates of fibroglandular tissue from tomosynthesis reconstructions were determined. A kernel-based fuzzy c-means algorithm was implemented to segment tomosynthesis reconstructed slices in order to estimate fibroglandular content and to provide anatomic priors for near-infrared spectroscopy. This algorithm was used to determine volumetric breast density (VBD), defined as the ratio of fibroglandular tissue volume to total breast volume, expressed as a percentage, from 62 tomosynthesis reconstructions of 34 study participants. For a subset of study participants who subsequently underwent mammography, VBD from mammography matched for subject, breast laterality and mammographic view was quantified using commercial software and statistically analyzed to determine if it differed from tomosynthesis. Summary statistics of the VBD from all study participants were compared with prior independent studies. The fibroglandular volumes from tomosynthesis and mammography were not statistically different (p = 0.211, paired t-test). After accounting for the compressed breast thicknesses, which differed between tomosynthesis and mammography, the VBD from tomosynthesis was correlated with (r = 0.809, p < 0.001), did not statistically differ from (p > 0.99, paired t-test), and was linearly related to, the VBD from mammography. Summary statistics of the VBD from tomosynthesis were not statistically different from prior studies using high-resolution dedicated breast computed tomography. The observation of correlation and linear association in VBD between mammography and tomosynthesis suggests that breast density associated risk measures determined for mammography are translatable to tomosynthesis. Accounting for compressed breast thickness is important when it differs between the two modalities. The fibroglandular volume from tomosynthesis reconstructions is similar to mammography, indicating suitability for use during near-infrared spectroscopy. PMID:26941961
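
    The segmentation step can be sketched with plain fuzzy c-means on voxel intensities. The kernel-based variant the authors use replaces the Euclidean distance with a kernel-induced one; this sketch keeps the standard formulation, and the intensity values below are invented for illustration.

```python
import numpy as np

def fcm_1d(values, n_clusters=2, m=2.0, n_iter=100, seed=0):
    """Standard fuzzy c-means on scalar voxel intensities.

    Alternates the center update (weighted by memberships^m) and the
    membership update u_ik proportional to d_ik^(-2/(m-1)).
    """
    rng = np.random.default_rng(seed)
    u = rng.dirichlet(np.ones(n_clusters), size=len(values))  # memberships
    p = 2.0 / (m - 1.0)
    for _ in range(n_iter):
        w = u ** m
        centers = (w * values[:, None]).sum(axis=0) / w.sum(axis=0)
        d = np.abs(values[:, None] - centers[None, :]) + 1e-12
        u = d ** -p / (d ** -p).sum(axis=1, keepdims=True)
    return u, centers

# Toy slice: 60 adipose-like voxels vs 40 fibroglandular-like voxels
vals = np.concatenate([np.full(60, -100.0), np.full(40, 40.0)])
vals += np.random.default_rng(1).normal(0, 5, size=100)
u, centers = fcm_1d(vals)
labels = u.argmax(axis=1)
fibro_cluster = centers.argmax()
vbd = (labels == fibro_cluster).mean() * 100   # VBD as a percentage
```

On this well-separated toy data the fuzzy memberships harden into the expected 40% fibroglandular fraction.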

  3. Dynamic dual-tracer PET reconstruction.

    PubMed

    Gao, Fei; Liu, Huafeng; Jian, Yiqiang; Shi, Pengcheng

    2009-01-01

    Despite its important medical implications, simultaneous dual-tracer positron emission tomography reconstruction remains a challenging problem, primarily because the photon measurements from the two tracers overlap. In this paper, we propose a simultaneous dynamic dual-tracer reconstruction of tissue activity maps guided by tracer kinetics. The dual-tracer reconstruction problem is formulated in a state-space representation, where parallel compartment models serve as the continuous-time system equation describing the kinetic processes of the two tracers, and the imaging data are expressed as discrete samples of the system states in the measurement equation. The image reconstruction problem thereby becomes a state estimation problem in a continuous-discrete hybrid paradigm, and H-infinity filtering is adopted as the estimation strategy. As H-infinity filtering makes no assumptions about the system and measurement statistics, robust reconstruction results can be obtained for the dual-tracer PET imaging system, where the statistical properties of the measurement data and the system uncertainty are not available a priori, even when there are disturbances in the kinetic parameters. Experiments on digital phantoms, Monte Carlo simulations and physical phantoms demonstrate the superior performance of the proposed approach.
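
    The state-space idea can be illustrated with a toy linear version. For brevity this sketch uses an ordinary Kalman filter in place of the paper's H-infinity estimator, and the two tracers' kinetics are reduced to known mono-exponential decays; all rates and noise levels are invented.

```python
import numpy as np

def kalman_separate(z, F, H, Q, R, x0, P0):
    """Recover two tracers' activities from their summed, noisy signal.

    z : measurements; F : state transition; H : 1x2 observation matrix.
    A plain Kalman filter stands in for the H-infinity estimator here.
    """
    x, P = x0.copy(), P0.copy()
    history = []
    for zk in z:
        x, P = F @ x, F @ P @ F.T + Q                   # predict
        S = (H @ P @ H.T)[0, 0] + R                     # innovation variance
        K = (P @ H.T)[:, 0] / S                         # Kalman gain
        x = x + K * (zk - (H @ x)[0])                   # update state
        P = (np.eye(len(x)) - np.outer(K, H[0])) @ P    # update covariance
        history.append(x.copy())
    return np.array(history)

# Two tracers decaying at different rates; the scanner sees only the sum
k1, k2, dt = 0.05, 0.20, 1.0
F = np.diag([np.exp(-k1 * dt), np.exp(-k2 * dt)])
H = np.array([[1.0, 1.0]])
t = np.arange(60)
true1, true2 = 10.0 * np.exp(-k1 * t), 20.0 * np.exp(-k2 * t)
rng = np.random.default_rng(2)
z = true1 + true2 + rng.normal(0, 0.2, size=t.size)
est = kalman_separate(z, F, H, Q=1e-4 * np.eye(2), R=0.04,
                      x0=np.array([5.0, 5.0]), P0=10.0 * np.eye(2))
```

Because the two decay rates differ, the summed measurement is observable and the filter gradually separates the two activity curves.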

  4. Comparison of adaptive statistical iterative and filtered back projection reconstruction techniques in quantifying coronary calcium.

    PubMed

    Takahashi, Masahiro; Kimura, Fumiko; Umezawa, Tatsuya; Watanabe, Yusuke; Ogawa, Harumi

    2016-01-01

    Adaptive statistical iterative reconstruction (ASIR) has been used to reduce radiation dose in cardiac computed tomography. However, changes in image parameters introduced by ASIR relative to filtered back projection (FBP) may influence the quantification of coronary calcium. The aim was to investigate the influence of ASIR on calcium quantification in comparison to FBP. In 352 patients, CT images were reconstructed from the same raw data using FBP alone and FBP combined with ASIR at 30%, 50%, 70%, and 100%. Image noise, plaque density, Agatston scores and calcium volumes were compared among the techniques. Image noise, Agatston score, and calcium volume decreased significantly with ASIR compared to FBP (each p < 0.001). Use of ASIR reduced Agatston scores by 10.5% to 31.0%. In calcified plaques of both patients and a phantom, ASIR decreased maximum CT values and calcified plaque size. In comparison to FBP, ASIR may therefore significantly decrease Agatston scores and calcium volumes.
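
    The Agatston quantification that both reconstructions feed can be sketched directly from its classic definition. Connected-component analysis and the minimum-lesion-area rule are omitted here, and the toy slices are invented; the point is that the score depends on both peak HU (the weight) and the calcified area, which is why ASIR's changes to both reduce the score.

```python
import numpy as np

def agatston(slices, pixel_area_mm2, thr=130):
    """Simplified Agatston score from axial CT slices (HU values).

    Classic weighting from the peak HU per slice: 130-199 -> 1,
    200-299 -> 2, 300-399 -> 3, >= 400 -> 4; the score is
    weight x calcified area (mm^2), summed over slices.
    """
    score = 0.0
    for sl in slices:
        calcified = sl >= thr
        if not calcified.any():
            continue
        peak = sl[calcified].max()
        weight = min(int(peak // 100), 4)     # 130->1, 250->2, 450->4
        score += weight * calcified.sum() * pixel_area_mm2
    return score

sl1 = np.zeros((4, 4)); sl1[1:3, 1:3] = 250.0   # 4-pixel plaque, peak 250 HU
sl2 = np.zeros((4, 4)); sl2[0, 0] = 450.0       # 1-pixel plaque, peak 450 HU
score = agatston([sl1, sl2], pixel_area_mm2=0.5)
# 2 * 4 * 0.5 + 4 * 1 * 0.5 = 6.0
```

Lowering the peak HU of a plaque (as smoothing reconstructions can) may drop it into a lower weight bracket, so noise behaviour directly alters the score.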

  5. Strategies of statistical windows in PET image reconstruction to improve the user’s real time experience

    NASA Astrophysics Data System (ADS)

    Moliner, L.; Correcher, C.; Gimenez-Alventosa, V.; Ilisie, V.; Alvarez, J.; Sanchez, S.; Rodríguez-Alvarez, M. J.

    2017-11-01

    Nowadays, with the increased computational power of modern computers and state-of-the-art reconstruction algorithms, it is possible to obtain Positron Emission Tomography (PET) images in practically real time. These facts open the door to new applications such as radiopharmaceutical tracking inside the body or the use of PET in image-guided procedures, such as biopsy interventions, among others. This work is a proof of concept that aims to improve the user experience with real-time PET images. Fixed, incremental, overlapping, sliding and hybrid windows are the different statistical combinations of data blocks used to generate intermediate images in order to follow the path of the activity in the Field Of View (FOV). To evaluate these different combinations, a point source was placed in a dedicated breast PET device and moved along the FOV. These acquisitions were reconstructed according to the different statistical windows, resulting in a smoother transition of positions for the image reconstructions that use the sliding and hybrid windows.
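
    The windowing schemes differ only in which acquisition blocks feed each intermediate image. A hypothetical helper makes the distinction concrete; three of the five schemes are shown (the overlapping and hybrid variants are left out, and the block counts are arbitrary).

```python
def window_blocks(n_blocks, scheme, width=4, step=1):
    """Return, for each intermediate image, the list of data-block indices
    it is reconstructed from.

    'fixed'       : disjoint groups of `width` blocks
    'incremental' : all blocks acquired so far
    'sliding'     : the most recent `width` blocks, advancing by `step`
    """
    blocks = list(range(n_blocks))
    if scheme == "fixed":
        return [blocks[i:i + width] for i in range(0, n_blocks, width)]
    if scheme == "incremental":
        return [blocks[:i + 1] for i in range(n_blocks)]
    if scheme == "sliding":
        return [blocks[i:i + width]
                for i in range(0, n_blocks - width + 1, step)]
    raise ValueError(scheme)
```

A sliding window reuses most of the previous window's counts, which is why consecutive images change smoothly as the source moves.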

  6. Synchronized multiartifact reduction with tomographic reconstruction (SMART-RECON): A statistical model based iterative image reconstruction method to eliminate limited-view artifacts and to mitigate the temporal-average artifacts in time-resolved CT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, Guang-Hong, E-mail: gchen7@wisc.edu; Li, Yinsheng

    Purpose: In x-ray computed tomography (CT), a violation of the Tuy data sufficiency condition leads to limited-view artifacts. In some applications, it is desirable to use data corresponding to a narrow temporal window to reconstruct images with reduced temporal-average artifacts. However, the need to reduce temporal-average artifacts in practice may result in a violation of the Tuy condition and thus undesirable limited-view artifacts. In this paper, the authors present a new iterative reconstruction method, synchronized multiartifact reduction with tomographic reconstruction (SMART-RECON), to eliminate limited-view artifacts using data acquired within an ultranarrow temporal window that severely violates the Tuy condition. Methods: In time-resolved contrast-enhanced CT acquisitions, image contrast dynamically changes during data acquisition. Each image reconstructed from data acquired in a given temporal window represents one time frame and can be denoted as an image vector. Conventionally, each individual time frame is reconstructed independently. In this paper, all image frames are grouped into a spatial-temporal image matrix and are reconstructed together. Rather than the spatial and/or temporal smoothing regularizers commonly used in iterative image reconstruction, the nuclear norm of the spatial-temporal image matrix is used in SMART-RECON to regularize the reconstruction of all image time frames. This regularizer exploits the low-dimensional structure of the spatial-temporal image matrix to mitigate limited-view artifacts when an ultranarrow temporal window is desired in some applications to reduce temporal-average artifacts. Both numerical simulations in two-dimensional image slices with known ground truth and in vivo human subject data acquired in a contrast-enhanced cone beam CT exam were used to validate the proposed SMART-RECON algorithm and to demonstrate its initial performance. Reconstruction errors and temporal fidelity of the reconstructed images were quantified using the relative root mean square error (rRMSE) and the universal quality index (UQI) in numerical simulations. The performance of the SMART-RECON algorithm was compared with that of the prior image constrained compressed sensing (PICCS) reconstruction quantitatively in simulations and qualitatively in the human subject exam. Results: In numerical simulations, the 240° short-scan angular span was divided into four consecutive 60° angular subsectors. SMART-RECON enabled four high temporal fidelity images without limited-view artifacts. The average rRMSE was 16% and the UQIs were 0.96 and 0.95 for the two local regions of interest, respectively. In contrast, the corresponding average rRMSE and UQIs were 25%, 0.78, and 0.81, respectively, for the PICCS reconstruction. Only one filtered backprojection image can be reconstructed from the same data set, with an average rRMSE of 45% and UQIs of 0.71 and 0.79, to benchmark reconstruction accuracies. For in vivo contrast-enhanced cone beam CT data acquired over a short-scan angular span of 200°, three 66° angular subsectors were used in SMART-RECON. The results demonstrated clear contrast differences across the three SMART-RECON reconstructed image volumes, without limited-view artifacts. In contrast, for the same angular sectors, PICCS could not reconstruct images without limited-view artifacts and with clear contrast differences across the three reconstructed image volumes. Conclusions: In time-resolved CT, the proposed SMART-RECON method provides a new way to eliminate limited-view artifacts using data acquired in an ultranarrow temporal window, corresponding here to approximately 60° angular subsectors.
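
    The nuclear-norm regularizer has a closed-form proximal operator, singular value thresholding, which a SMART-RECON-style solver would apply between data-fidelity updates. This sketch shows only that building block on an invented nearly-rank-2 spatial-temporal matrix; the full reconstruction algorithm is not reproduced.

```python
import numpy as np

def svt(M, tau):
    """Proximal operator of tau * nuclear norm: soft-threshold the
    singular values of the spatial-temporal image matrix M."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

# Four time frames built from two base images plus small noise,
# so the spatial-temporal matrix is nearly rank 2
rng = np.random.default_rng(3)
base = rng.normal(size=(2, 64))               # two base "images" (flattened)
coeff = np.array([[1.0, 0.0], [0.8, 0.2], [0.2, 0.8], [0.0, 1.0]])
frames = coeff @ base + 0.01 * rng.normal(size=(4, 64))
low_rank = svt(frames.T, tau=0.5)             # columns = time frames
```

Thresholding wipes out the small noise-driven singular values while keeping the two dominant ones, recovering the low-dimensional structure the abstract describes.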

  7. Investigation of Magnetotelluric Source Effect Based on Twenty Years of Telluric and Geomagnetic Observation

    NASA Astrophysics Data System (ADS)

    Kis, A.; Lemperger, I.; Wesztergom, V.; Menvielle, M.; Szalai, S.; Novák, A.; Hada, T.; Matsukiyo, S.; Lethy, A. M.

    2016-12-01

    The magnetotelluric method is widely applied for the investigation of subsurface structures by imaging the spatial distribution of electric conductivity. The method is based on the experimental determination of the surface electromagnetic impedance tensor (Z) from surface geomagnetic and telluric registrations in two perpendicular orientations. In practical exploration, the accurate estimation of Z necessitates the application of robust statistical methods for two reasons: 1) the geomagnetic and telluric time series are contaminated by man-made noise components, and 2) the non-homogeneous behavior of ionospheric current systems in the period range of interest (ELF-ULF and longer periods) results in systematic deviation of the impedance of individual time windows. Robust statistics mitigate both effects on Z for the purpose of subsurface investigation. However, accurate analysis of the long-term temporal variation of the first and second statistical moments of Z may provide valuable information about the characteristics of the ionospheric source current systems. Temporal variation of the extent, spatial variability and orientation of the ionospheric source currents has specific effects on the surface impedance tensor. The twenty-year-long geomagnetic and telluric recordings of the Nagycenk Geophysical Observatory provide a unique opportunity to reconstruct the so-called magnetotelluric source effect and obtain information about the spatial and temporal behavior of ionospheric source currents at mid-latitudes. A detailed investigation of the time series of the surface electromagnetic impedance tensor has been carried out in different frequency classes of the ULF range. The presentation aims to provide a brief review of our results on long-term periodic modulations, up to the solar-cycle scale, and on possible deviations of the electromagnetic impedance and thus of the reconstructed equivalent ionospheric source effects.
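
    The per-frequency impedance estimation can be sketched as an ordinary least-squares fit over time windows. This is only the non-robust baseline, with synthetic Fourier coefficients invented for illustration; real processing would use a robust (e.g. M-estimator) variant to downweight the noisy and source-biased windows the abstract describes.

```python
import numpy as np

def impedance_ls(E, H):
    """Least-squares magnetotelluric impedance tensor at one frequency.

    E, H : complex arrays of shape (n_windows, 2) holding the Fourier
    coefficients (Ex, Ey) and (Hx, Hy) of each time window.
    Solves E = H Z^T in the least-squares sense.
    """
    Zt, *_ = np.linalg.lstsq(H, E, rcond=None)
    return Zt.T                               # 2x2 impedance tensor

rng = np.random.default_rng(4)
Z_true = np.array([[0.1 + 0.2j, 1.5 - 0.3j],
                   [-1.4 + 0.2j, -0.1 + 0.1j]])   # invented tensor
H = rng.normal(size=(200, 2)) + 1j * rng.normal(size=(200, 2))
E = H @ Z_true.T + 0.01 * (rng.normal(size=(200, 2))
                           + 1j * rng.normal(size=(200, 2)))
Z_est = impedance_ls(E, H)
```

Tracking Z_est (and its spread) window-group by window-group over years is what reveals the source-effect modulations discussed above.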

  8. Asymptotic approximation method of force reconstruction: Application and analysis of stationary random forces

    NASA Astrophysics Data System (ADS)

    Sanchez, J.

    2018-06-01

    In this paper, the asymptotic approximation method of force reconstruction is applied to a single degree-of-freedom system and analyzed. The original concepts are summarized, and the necessary probabilistic concepts are developed and applied to single degree-of-freedom systems. These concepts are then united, and the theoretical and computational models are developed. To determine the viability of the proposed method in a probabilistic context, numerical experiments are conducted, consisting of a frequency analysis, an analysis of the effects of measurement noise, and a statistical analysis. In addition, two examples are presented and discussed.

  9. Penalized weighted least-squares approach for low-dose x-ray computed tomography

    NASA Astrophysics Data System (ADS)

    Wang, Jing; Li, Tianfang; Lu, Hongbing; Liang, Zhengrong

    2006-03-01

    The noise of a low-dose computed tomography (CT) sinogram follows approximately a Gaussian distribution with a nonlinear dependence between the sample mean and variance. The noise is statistically uncorrelated among detector bins at any view angle. However, the correlation coefficient matrix of the data signal indicates a strong signal correlation among neighboring views. Based on these observations, the Karhunen-Loeve (KL) transform can be used to de-correlate the signal among the neighboring views. In each KL component, a penalized weighted least-squares (PWLS) objective function can be constructed and an optimal sinogram estimated by minimizing the objective function, followed by filtered backprojection (FBP) for CT image reconstruction. In this work, we compared the KL-PWLS method with an iterative image reconstruction algorithm, which uses Gauss-Seidel iterations to minimize the PWLS objective function in the image domain. We also compared KL-PWLS with an iterative sinogram smoothing algorithm, which uses iterated conditional mode calculations to minimize the PWLS objective function in sinogram space, followed by FBP for image reconstruction. Phantom experiments show comparable performance of these three PWLS methods in suppressing noise-induced artifacts and preserving resolution in the reconstructed images. Computer simulation concurs with the phantom experiments in terms of the noise-resolution tradeoff and detectability in low-contrast environments. KL-PWLS noise reduction may have a computational advantage for low-dose CT imaging, especially for dynamic high-resolution studies.

  10. Reanalysis of the 1893 heat wave in France through offline data assimilation in a downscaled ensemble meteorological reconstruction

    NASA Astrophysics Data System (ADS)

    Devers, Alexandre; Vidal, Jean-Philippe; Lauvernet, Claire; Graff, Benjamin

    2017-04-01

    The knowledge of historical French weather has recently been improved through the development of the SCOPE (Spatially COherent Probabilistic Extended) Climate reconstruction, a probabilistic high-resolution daily reconstruction of precipitation and temperature covering the period 1871-2012 and based on the statistical downscaling of the Twentieth Century Reanalysis (Caillouet et al., 2016). However, historical surface observations - even though rather scarce and sparse - do exist from at least the beginning of the period considered, and this information does not currently feed SCOPE Climate reconstructions. The goal of this study is therefore to assimilate these historical observations into SCOPE Climate reconstructions in order to build a 150-year meteorological reanalysis over France. This study considers "offline" data assimilation methods - Kalman filtering methods like the Ensemble Square Root Filter - that have successfully been used in recent paleoclimate studies, i.e. at much larger temporal and spatial scales (see e.g. Bhend et al., 2012). These methods are here applied for reconstructing the 8-24 August 1893 heat wave in France, using all available daily temperature observations from that period. Temperatures reached that summer were indeed compared at the time to those of Senegal (Garnier, 2012). Results show a spatially coherent view of the heat wave at the national scale as well as a reduced uncertainty compared to initial meteorological reconstructions, thus demonstrating the added value of data assimilation. In order to assess the performance of assimilation methods in a more recent context, these methods are also used to reconstruct the well-known 3-14 August 2003 heat wave by using (1) all available stations, and (2) the same station density as in August 1893, the rest of the observations being saved for validation. 
    This analysis allows comparing two heat waves having occurred 100 years apart in France, with different associated uncertainties, in terms of dynamics and intensity.
    References:
    Bhend, J., Franke, J., Folini, D., Wild, M., and Brönnimann, S.: An ensemble-based approach to climate reconstructions, Clim. Past, 8, 963-976, doi:10.5194/cp-8-963-2012, 2012.
    Caillouet, L., Vidal, J.-P., Sauquet, E., and Graff, B.: Probabilistic precipitation and temperature downscaling of the Twentieth Century Reanalysis over France, Clim. Past, 12, 635-662, doi:10.5194/cp-12-635-2016, 2016.
    Garnier, E.: Sécheresses et canicules avant le Global Warming - 1500-1950. In: Canicules et froids extrêmes. L'Événement climatique et ses représentations (II) Histoire, littérature, peinture (Berchtold, J., Le Roy Ladurie, E., Sermain, J.-P., and Vasak, A., Eds.), 297-325, Hermann, 2012.
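
    The assimilation step can be illustrated with a single-observation Ensemble Square Root Filter update in the serial square-root form: the ensemble mean receives the full Kalman gain, the perturbations a reduced one, so no observation noise needs to be sampled. The grid, ensemble, and numbers below are invented for the sketch.

```python
import numpy as np

def ensrf_update(ensemble, obs, obs_var, H):
    """EnSRF update for one scalar observation.

    ensemble : (n_members, n_state) prior ensemble (e.g. temperatures)
    H        : (n_state,) observation operator (here: picks one station)
    """
    xm = ensemble.mean(axis=0)
    Xp = ensemble - xm                          # perturbations
    hx = Xp @ H                                 # obs-space perturbations
    phht = (Xp.T @ hx) / (len(ensemble) - 1)    # P H^T
    hpht = hx @ hx / (len(ensemble) - 1)        # H P H^T
    K = phht / (hpht + obs_var)                 # Kalman gain for the mean
    alpha = 1.0 / (1.0 + np.sqrt(obs_var / (hpht + obs_var)))
    xm_a = xm + K * (obs - H @ xm)              # mean update
    Xp_a = Xp - alpha * np.outer(hx, K)         # reduced-gain perturbations
    return xm_a + Xp_a

rng = np.random.default_rng(6)
n_members, n_state = 30, 10
prior = 20.0 + rng.normal(0, 2.0, size=(n_members, n_state))
prior += rng.normal(0, 2.0, size=(n_members, 1))   # shared (correlated) part
H = np.zeros(n_state); H[3] = 1.0                  # station at grid point 3
posterior = ensrf_update(prior, obs=26.0, obs_var=0.25, H=H)
```

The update pulls the ensemble mean toward the station reading, shrinks the spread there, and, through the sample covariance, also adjusts unobserved grid points.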

  11. Analysis of pitching velocity in major league baseball players before and after ulnar collateral ligament reconstruction.

    PubMed

    Jiang, Jimmy J; Leland, J Martin

    2014-04-01

    Ulnar collateral ligament (UCL) reconstructions are relatively common among professional pitchers in Major League Baseball (MLB). To the authors' knowledge, there has not been a study specifically analyzing pitching velocity after UCL surgery. These measurements were examined in a cohort of MLB pitchers before and after UCL reconstruction. The hypothesis was that there is no significant loss in pitch velocity after UCL reconstruction in MLB pitchers. Study design: cohort study; level of evidence, 3. Between the years 2008 and 2010, a total of 41 MLB pitchers were identified as players who underwent UCL reconstruction. Inclusion criteria for this study consisted of a minimum of 1 year of preinjury and 2 years of postinjury pitch velocity data. After applying the exclusion criteria, performance data were analyzed from 28 of the 41 pitchers over a minimum of 4 MLB seasons for each player. A pair-matched control group of pitchers who did not have a known UCL injury was analyzed for comparison. Of the initial 41 players, 3 were excluded for revision UCL reconstruction. Eight of the 38 players who underwent primary UCL reconstruction did not return to pitching at the major league level, and 2 players who met the exclusion criteria were omitted, leaving data on 28 players available for the final velocity analysis. The mean percentage change in the velocity of pitches thrown by players who underwent UCL reconstruction was not significantly different from that of players in the control group. The mean innings pitched was statistically different only for the year of injury and the first postinjury year. There were also no statistically significant differences between the 2 groups with regard to commonly used statistical performance measurements, including earned run average, batting average against, walks per 9 innings, strikeouts per 9 innings, and walks plus hits per inning pitched. There were no significant differences in pitch velocity or common performance measurements between players who returned to MLB after UCL reconstruction and pair-matched controls.

  12. Prospective ECG-Triggered Coronary CT Angiography: Clinical Value of Noise-Based Tube Current Reduction Method with Iterative Reconstruction

    PubMed Central

    Shen, Junlin; Du, Xiangying; Guo, Daode; Cao, Lizhen; Gao, Yan; Yang, Qi; Li, Pengyu; Liu, Jiabin; Li, Kuncheng

    2013-01-01

Objectives To evaluate the clinical value of a noise-based tube current reduction method with iterative reconstruction for obtaining consistent image quality with dose optimization in prospective electrocardiogram (ECG)-triggered coronary CT angiography (CCTA). Materials and Methods We performed a prospective randomized study evaluating 338 patients undergoing CCTA with prospective ECG-triggering. Patients were randomly assigned to fixed tube current with filtered back projection (Group 1, n = 113), noise-based tube current with filtered back projection (Group 2, n = 109) or with iterative reconstruction (Group 3, n = 116). Tube voltage was fixed at 120 kV. Qualitative image quality was rated on a 5-point scale (1 = impaired, to 5 = excellent, with 3–5 defined as diagnostic). Image noise and signal intensity were measured; signal-to-noise ratio was calculated; radiation dose parameters were recorded. Statistical analyses included one-way analysis of variance, chi-square test, Kruskal-Wallis test and multivariable linear regression. Results Image noise was maintained at the target value of 35 HU with a small interquartile range for Group 2 (35.00–35.03 HU) and Group 3 (34.99–35.02 HU), whereas it ranged from 28.73 to 37.87 HU for Group 1. All images in the three groups were acceptable for diagnosis. Relative reductions in effective dose of 20% and 51% were achieved for Group 2 (2.9 mSv) and Group 3 (1.8 mSv), respectively, compared with Group 1 (3.7 mSv). After adjustment for scan characteristics, iterative reconstruction was associated with a 26% reduction in effective dose. Conclusion The noise-based tube current reduction method with iterative reconstruction maintains image noise precisely at the desired level and achieves consistent image quality. Meanwhile, effective dose can be reduced by more than 50%. PMID:23741444
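The noise-based tube current principle can be illustrated with the usual quantum-noise scaling, noise ∝ 1/√(mAs). A minimal sketch under that assumption (the study's actual current-modulation formula is not given in the abstract, and the reference values below are illustrative):

```python
def tube_current_for_target_noise(ref_mas, ref_noise_hu, target_noise_hu=35.0):
    """Scale the tube current-time product (mAs) so that expected image
    noise reaches a target, assuming noise ~ 1/sqrt(mAs).
    Quadratic relation and parameter names are illustrative assumptions."""
    return ref_mas * (ref_noise_hu / target_noise_hu) ** 2

# e.g. a hypothetical reference acquisition at 200 mAs measured at 28.73 HU
# noise needs less current to hit the noisier 35 HU target
mas_needed = tube_current_for_target_noise(200.0, 28.73, 35.0)
```

Because the relation is quadratic, relaxing the noise target even slightly yields a sizeable current (and dose) reduction.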

  13. Combined Orbital Fractures: Surgical Strategy of Sequential Repair

    PubMed Central

    Hur, Su Won; Kim, Sung Eun; Chung, Kyu Jin; Lee, Jun Ho; Kim, Tae Gon

    2015-01-01

Background Reconstruction of combined orbital floor and medial wall fractures with a comminuted inferomedial strut (IMS) is challenging and requires careful practice. We present our surgical strategy and postoperative outcomes. Methods We divided 74 patients who underwent reconstruction of the orbital floor and medial wall concomitantly into a comminuted IMS group (41 patients) and a non-comminuted IMS group (33 patients). In the comminuted IMS group, we first reconstructed the floor stably and then the medial wall by using separate implant pieces. In the non-comminuted IMS group, we reconstructed the floor and the medial wall with a single large implant. Results Over a follow-up of 6 to 65 months, most patients with diplopia improved within the first week, except for one who eventually improved at 1 year. All patients with an extraocular muscle (EOM) limitation improved during the first month of follow-up. Enophthalmos (displacement, 2 mm) was observed in two patients. The orbital volume measured on CT scans was restored to a statistically significant degree in both groups. No complications related to the surgery were observed. Conclusions We recommend reconstruction of the orbital walls in the comminuted IMS group by using the following surgical strategy: usage of multiple pieces of rigid implants instead of one large implant, sequential repair first of the floor and then of the medial wall, and a focus on the reconstruction of key areas. Our strategy of step-by-step reconstruction has the benefits of easy repair, less surgical trauma, and minimal stress to the surgeon. PMID:26217562

  14. Development and evaluation of a digital dental modeling method based on grating projection and reverse engineering software.

    PubMed

    Zhou, Qin; Wang, Zhenzhen; Chen, Jun; Song, Jun; Chen, Lu; Lu, Yi

    2016-01-01

    For reasons of convenience and economy, attempts have been made to transform traditional dental gypsum casts into 3-dimensional (3D) digital casts. Different scanning devices have been developed to generate digital casts; however, each has its own limitations and disadvantages. The purpose of this study was to develop an advanced method for the 3D reproduction of dental casts by using a high-speed grating projection system and noncontact reverse engineering (RE) software and to evaluate the accuracy of the method. The methods consisted of 3 main steps: the scanning and acquisition of 3D dental cast data with a high-resolution grating projection system, the reconstruction and measurement of digital casts with RE software, and the evaluation of the accuracy of this method using 20 dental gypsum casts. The common anatomic landmarks were measured directly on the gypsum casts with a Vernier caliper and on the 3D digital casts with the Geomagic software measurement tool. Data were statistically assessed with the t test. The grating projection system had a rapid scanning speed, and smooth 3D dental casts were obtained. The mean differences between the gypsum and 3D measurements were approximately 0.05 mm, and no statistically significant differences were found between the 2 methods (P>.05), except for the measurements of the incisor tooth width and maxillary arch length. A method for the 3D reconstruction of dental casts was developed by using a grating projection system and RE software. The accuracy of the casts generated using the grating projection system was comparable with that of the gypsum casts. Copyright © 2016 Editorial Council for the Journal of Prosthetic Dentistry. Published by Elsevier Inc. All rights reserved.

  15. Computed tomography imaging with the Adaptive Statistical Iterative Reconstruction (ASIR) algorithm: dependence of image quality on the blending level of reconstruction.

    PubMed

    Barca, Patrizio; Giannelli, Marco; Fantacci, Maria Evelina; Caramella, Davide

    2018-06-01

Computed tomography (CT) is a useful and widely employed imaging technique, which represents the largest source of population exposure to ionizing radiation in industrialized countries. Adaptive Statistical Iterative Reconstruction (ASIR) is an iterative reconstruction algorithm with the potential to allow reduction of radiation exposure while preserving diagnostic information. The aim of this phantom study was to assess the performance of ASIR, in terms of a number of image quality indices, when different reconstruction blending levels are employed. CT images of the Catphan-504 phantom were reconstructed using conventional filtered back-projection (FBP) and ASIR with reconstruction blending levels of 20, 40, 60, 80, and 100%. Noise, noise power spectrum (NPS), contrast-to-noise ratio (CNR) and modulation transfer function (MTF) were estimated for different scanning parameters and contrast objects. Noise decreased and CNR increased non-linearly, by up to 50% and 100%, respectively, with increasing blending level of reconstruction. ASIR was also found to modify the shape of the NPS curve. The MTF of ASIR-reconstructed images depended on tube load/contrast and decreased with increasing blending level of reconstruction. In particular, for low radiation exposure and low contrast acquisitions, ASIR showed lower performance than FBP in terms of spatial resolution for all blending levels of reconstruction. CT image quality varies substantially with the blending level of reconstruction. ASIR has the potential to reduce noise whilst maintaining diagnostic information in low radiation exposure CT imaging. Given the opposite variation of CNR and spatial resolution with the blending level of reconstruction, it is recommended to use an optimal value of this parameter for each specific clinical application.
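The blending level can be read as a linear mix of the FBP image and the fully iterative image, which is why 0% behaves like pure FBP and 100% like pure ASIR. A minimal sketch of that interpretation (the vendor's internal blending may differ in detail):

```python
import numpy as np

def blend_asir(fbp_img, asir_img, blending_percent):
    """Linear blend of an FBP image with a fully iterative (ASIR) image:
    0% -> pure FBP, 100% -> pure ASIR. Interpretation sketch only."""
    a = blending_percent / 100.0
    return (1.0 - a) * fbp_img + a * asir_img

fbp = np.array([0.0, 10.0])
asir = np.array([10.0, 0.0])
half = blend_asir(fbp, asir, 50)   # midway between the two images
```

Under this reading, the non-linear noise/CNR behavior reported above comes from the iterative image itself, not from the blending, which is purely linear.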

  16. Multi-level methods and approximating distribution functions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wilson, D., E-mail: daniel.wilson@dtc.ox.ac.uk; Baker, R. E.

    2016-07-15

Biochemical reaction networks are often modelled using discrete-state, continuous-time Markov chains. System statistics of these Markov chains usually cannot be calculated analytically and therefore estimates must be generated via simulation techniques. There is a well-documented class of simulation techniques known as exact stochastic simulation algorithms, an example of which is Gillespie’s direct method. These algorithms often come with high computational costs, therefore approximate stochastic simulation algorithms such as the tau-leap method are used. However, in order to minimise the bias in the estimates generated using them, a relatively small value of tau is needed, rendering the computational costs comparable to Gillespie’s direct method. The multi-level Monte Carlo method (Anderson and Higham, Multiscale Model. Simul. 10:146–179, 2012) provides a reduction in computational costs whilst minimising or even eliminating the bias in the estimates of system statistics. This is achieved by first crudely approximating required statistics with many sample paths of low accuracy. Then correction terms are added until a required level of accuracy is reached. Recent literature has primarily focussed on implementing the multi-level method efficiently to estimate a single system statistic. However, it is clearly also of interest to be able to approximate entire probability distributions of species counts. We present two novel methods that combine known techniques for distribution reconstruction with the multi-level method. We demonstrate the potential of our methods using a number of examples.
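For context, Gillespie's direct method mentioned above can be sketched for the simplest possible network, a single decay reaction X → ∅ (an illustrative toy with hypothetical rate values, not the paper's multi-level estimator):

```python
import random

def gillespie_decay(x0, k, t_end, rng):
    """Gillespie's direct method for the single reaction X -> 0 with
    rate constant k: draw exponential waiting times from the total
    propensity k*x and fire one decay per event."""
    t, x = 0.0, x0
    while x > 0:
        dt = rng.expovariate(k * x)   # time to the next reaction
        if t + dt > t_end:
            break
        t, x = t + dt, x - 1          # one decay event fires
    return x

rng = random.Random(0)
# Monte Carlo estimate of E[X(1)]; the exact mean is 100 * exp(-1) ~ 36.8
est = sum(gillespie_decay(100, 1.0, 1.0, rng) for _ in range(2000)) / 2000
```

Each sample path is exact but expensive for large populations; the multi-level method reduces this cost by combining many cheap approximate paths with a few exact correction paths.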

  17. Estimating topological properties of weighted networks from limited information.

    PubMed

    Cimini, Giulio; Squartini, Tiziano; Gabrielli, Andrea; Garlaschelli, Diego

    2015-10-01

    A problem typically encountered when studying complex systems is the limitedness of the information available on their topology, which hinders our understanding of their structure and of the dynamical processes taking place on them. A paramount example is provided by financial networks, whose data are privacy protected: Banks publicly disclose only their aggregate exposure towards other banks, keeping individual exposures towards each single bank secret. Yet, the estimation of systemic risk strongly depends on the detailed structure of the interbank network. The resulting challenge is that of using aggregate information to statistically reconstruct a network and correctly predict its higher-order properties. Standard approaches either generate unrealistically dense networks, or fail to reproduce the observed topology by assigning homogeneous link weights. Here, we develop a reconstruction method, based on statistical mechanics concepts, that makes use of the empirical link density in a highly nontrivial way. Technically, our approach consists in the preliminary estimation of node degrees from empirical node strengths and link density, followed by a maximum-entropy inference based on a combination of empirical strengths and estimated degrees. Our method is successfully tested on the international trade network and the interbank money market, and represents a valuable tool for gaining insights on privacy-protected or partially accessible systems.
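The preliminary density-calibration step described above can be sketched with the fitness-model link probability p_ij = z·s_i·s_j / (1 + z·s_i·s_j), tuning z so the expected number of links matches the observed one (a simplified, assumed form of that step only; the full method follows with a maximum-entropy assignment of degrees and weights):

```python
import numpy as np

def expected_links(z, s):
    """Expected number of undirected links under
    p_ij = z*s_i*s_j / (1 + z*s_i*s_j)."""
    ps = z * np.outer(s, s) / (1.0 + z * np.outer(s, s))
    np.fill_diagonal(ps, 0.0)     # no self-loops
    return ps.sum() / 2.0         # count each pair once

def calibrate_z(strengths, n_links, z_lo=1e-12, z_hi=1e6):
    """Bisect (in log space) for the z that reproduces the observed
    link count; expected_links is monotonically increasing in z."""
    s = np.asarray(strengths, dtype=float)
    for _ in range(200):
        z_mid = np.sqrt(z_lo * z_hi)
        if expected_links(z_mid, s) > n_links:
            z_hi = z_mid
        else:
            z_lo = z_mid
    return np.sqrt(z_lo * z_hi)

z = calibrate_z([1.0, 2.0, 3.0, 4.0], 3.0)  # 4 nodes, 3 expected links
```

Once z is fixed, sampling each link with probability p_ij produces sparse networks at the empirical density instead of the unrealistically dense ones mentioned above.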

  18. Estimating topological properties of weighted networks from limited information

    NASA Astrophysics Data System (ADS)

    Cimini, Giulio; Squartini, Tiziano; Gabrielli, Andrea; Garlaschelli, Diego

    2015-10-01

    A problem typically encountered when studying complex systems is the limitedness of the information available on their topology, which hinders our understanding of their structure and of the dynamical processes taking place on them. A paramount example is provided by financial networks, whose data are privacy protected: Banks publicly disclose only their aggregate exposure towards other banks, keeping individual exposures towards each single bank secret. Yet, the estimation of systemic risk strongly depends on the detailed structure of the interbank network. The resulting challenge is that of using aggregate information to statistically reconstruct a network and correctly predict its higher-order properties. Standard approaches either generate unrealistically dense networks, or fail to reproduce the observed topology by assigning homogeneous link weights. Here, we develop a reconstruction method, based on statistical mechanics concepts, that makes use of the empirical link density in a highly nontrivial way. Technically, our approach consists in the preliminary estimation of node degrees from empirical node strengths and link density, followed by a maximum-entropy inference based on a combination of empirical strengths and estimated degrees. Our method is successfully tested on the international trade network and the interbank money market, and represents a valuable tool for gaining insights on privacy-protected or partially accessible systems.

  19. Implementation of cross correlation for energy discrimination on the time-of-flight spectrometer CORELLI.

    PubMed

    Ye, Feng; Liu, Yaohua; Whitfield, Ross; Osborn, Ray; Rosenkranz, Stephan

    2018-04-01

The CORELLI instrument at Oak Ridge National Laboratory is a statistical chopper spectrometer designed and optimized to probe complex disorder in crystalline materials through diffuse scattering experiments. On CORELLI, the high efficiency of white-beam Laue diffraction combined with elastic discrimination has enabled an unprecedented data collection rate to obtain both the total and the elastic-only scattering over a large volume of reciprocal space from a single measurement. To achieve this, CORELLI is equipped with a statistical chopper to modulate the incoming neutron beam quasi-randomly, and then the cross-correlation method is applied to reconstruct the elastic component from the scattering data. Details of the implementation of the cross-correlation method on CORELLI are given and its performance is discussed.
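The cross-correlation idea can be demonstrated on a toy model: modulate a signal with one period of a length-7 pseudorandom (m-sequence) chopper pattern and recover it exactly by correlating with the same pattern. This is an idealized, noise-free sketch, not CORELLI's production pipeline:

```python
import numpy as np

def modulate(signal, chopper):
    """Measured counts: circular convolution of the time-of-flight
    signal with one period of the 0/1 chopper transmission sequence."""
    n = len(chopper)
    return np.array([sum(chopper[(t - tau) % n] * signal[tau]
                         for tau in range(n)) for t in range(n)])

def cross_correlate_recover(measured, chopper):
    """Recover the signal by correlating with the chopper sequence;
    exact for an ideal m-sequence of length n = 2^k - 1, whose periodic
    autocorrelation is flat away from zero lag."""
    n = len(chopper)
    total = 2.0 * measured.sum() / (n + 1)          # total signal content
    corr = np.array([sum(chopper[(t - tau) % n] * measured[t]
                         for t in range(n)) for tau in range(n)])
    return 4.0 * corr / (n + 1) - total

chopper = np.array([1, 1, 1, 0, 1, 0, 0])   # length-7 m-sequence
signal = np.array([5.0, 0.0, 2.0, 0.0, 0.0, 1.0, 0.0])
recovered = cross_correlate_recover(modulate(signal, chopper), chopper)
```

The open fraction of the chopper is (n+1)/2n ≈ 50%, which is why a statistical chopper retains far more flux than a conventional single-slit chopper while still allowing the elastic component to be unscrambled.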

  20. Implementation of cross correlation for energy discrimination on the time-of-flight spectrometer CORELLI

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ye, Feng; Liu, Yaohua; Whitfield, Ross

The CORELLI instrument at Oak Ridge National Laboratory is a statistical chopper spectrometer designed and optimized to probe complex disorder in crystalline materials through diffuse scattering experiments. On CORELLI, the high efficiency of white-beam Laue diffraction combined with elastic discrimination has enabled an unprecedented data collection rate to obtain both the total and the elastic-only scattering over a large volume of reciprocal space from a single measurement. To achieve this, CORELLI is equipped with a statistical chopper to modulate the incoming neutron beam quasi-randomly, and then the cross-correlation method is applied to reconstruct the elastic component from the scattering data. Lastly, details of the implementation of the cross-correlation method on CORELLI are given and its performance is discussed.

  1. Implementation of cross correlation for energy discrimination on the time-of-flight spectrometer CORELLI

    DOE PAGES

    Ye, Feng; Liu, Yaohua; Whitfield, Ross; ...

    2018-03-26

The CORELLI instrument at Oak Ridge National Laboratory is a statistical chopper spectrometer designed and optimized to probe complex disorder in crystalline materials through diffuse scattering experiments. On CORELLI, the high efficiency of white-beam Laue diffraction combined with elastic discrimination has enabled an unprecedented data collection rate to obtain both the total and the elastic-only scattering over a large volume of reciprocal space from a single measurement. To achieve this, CORELLI is equipped with a statistical chopper to modulate the incoming neutron beam quasi-randomly, and then the cross-correlation method is applied to reconstruct the elastic component from the scattering data. Lastly, details of the implementation of the cross-correlation method on CORELLI are given and its performance is discussed.

  2. Accurate low-dose iterative CT reconstruction from few projections by Generalized Anisotropic Total Variation minimization for industrial CT.

    PubMed

    Debatin, Maurice; Hesser, Jürgen

    2015-01-01

Reducing the amount of time for data acquisition and reconstruction in industrial CT decreases the operation time of the X-ray machine and therefore increases sales. This can be achieved by reducing the dose and pulse length of the CT system and the number of projections used for reconstruction, respectively. In this paper, a novel generalized Anisotropic Total Variation (GATV) regularization for under-sampled, low-dose iterative CT reconstruction is discussed and compared to the standard methods: Total Variation, Adaptive weighted Total Variation (AwTV) and Filtered Backprojection. The novel regularization function uses a priori information about the Gradient Magnitude Distribution of the scanned object for the reconstruction. We provide a general parameterization scheme and evaluate the efficiency of our new algorithm for different noise levels and different numbers of projection views. When noise is not present, error-free reconstructions are achievable for AwTV and GATV from 40 projections. In cases where noise is simulated, our strategy achieves a Relative Root Mean Square Error that is up to 11 times lower than Total Variation-based and up to 4 times lower than AwTV-based iterative statistical reconstruction (e.g. for a SNR of 223 and 40 projections). To obtain the same reconstruction quality as achieved by Total Variation, the number of projections and the pulse length, and hence the acquisition time and the dose, can be reduced by a factor of approximately 3.5 when AwTV is used and a factor of approximately 6.7 when our proposed algorithm is used.
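The baseline penalty that the adaptive and generalized variants build on, anisotropic Total Variation, is just the sum of absolute finite differences of the image. A minimal sketch (the paper's AwTV and GATV variants reweight these difference terms; this shows only the plain penalty):

```python
import numpy as np

def anisotropic_tv(img):
    """Anisotropic total variation of a 2D image: the sum of absolute
    horizontal and vertical finite differences."""
    dh = np.abs(np.diff(img, axis=1)).sum()   # horizontal differences
    dv = np.abs(np.diff(img, axis=0)).sum()   # vertical differences
    return dh + dv

# A piecewise-constant image is cheap under TV: a single vertical edge
# of height 1 spanning 4 rows contributes exactly 4
edge = np.zeros((4, 4))
edge[:, 2:] = 1.0
```

Minimizing this penalty alongside the data-fidelity term favors piecewise-constant reconstructions, which is what makes TV-type regularization effective for few-projection industrial CT.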

  3. Cervical vertebrae maturation index estimates on cone beam CT: 3D reconstructions vs sagittal sections

    PubMed Central

    Bonfim, Marco A E; Costa, André L F; Ximenez, Michel E L; Cotrim-Ferreira, Flávio A; Ferreira-Santos, Rívea I

    2016-01-01

Objectives: The aim of this study was to evaluate the performance of CBCT three-dimensional (3D) reconstructions and sagittal sections for estimates of cervical vertebrae maturation index (CVMI). Methods: The sample consisted of 72 CBCT examinations from patients aged 8–16 years (45 females and 27 males) selected from the archives of two private clinics. Two calibrated observers (kappa scores: ≥0.901) interpreted the CBCT examinations twice. Intra- and interobserver agreement for both display modes was analyzed by kappa statistics, which was also used to analyze the agreement between 3D reconstructions and sagittal sections. Correlations between cervical vertebrae maturation estimates and chronological age, as well as between the assessments by 3D reconstructions and sagittal sections, were analyzed using gamma Goodman–Kruskal coefficients (α = 0.05). Results: The kappa scores evidenced almost perfect agreement between the first and second assessments of the cervical vertebrae by 3D reconstructions (0.933–0.983) and sagittal sections (0.983–1.000). Similarly, the agreement between 3D reconstructions and sagittal sections was almost perfect (kappa index: 0.983). In most divergent cases, the difference between 3D reconstructions and sagittal sections was one stage of CVMI. Strongly positive correlations (>0.8, p < 0.001) were found not only between chronological age and CVMI but also between the estimates by 3D reconstructions and sagittal sections (p < 0.001). Conclusions: Although CBCT imaging must not be used exclusively for this purpose, it may be suitable for skeletal maturity assessments. PMID:26509559
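The agreement statistic used here, kappa for two raters (Cohen's kappa), can be sketched from scratch as follows (an illustration of the statistic only; the study presumably used standard statistical software):

```python
def cohen_kappa(a, b):
    """Cohen's kappa for two raters' categorical labels:
    (p_observed - p_chance) / (1 - p_chance)."""
    n = len(a)
    labels = set(a) | set(b)
    p_o = sum(x == y for x, y in zip(a, b)) / n           # observed agreement
    p_e = sum((a.count(lab) / n) * (b.count(lab) / n)     # chance agreement
              for lab in labels)
    return (p_o - p_e) / (1 - p_e)

# Perfect agreement on (hypothetical) CVMI stage ratings -> kappa of 1.0
k_perfect = cohen_kappa([3, 4, 5, 3], [3, 4, 5, 3])
```

The chance-agreement correction is what distinguishes kappa from raw percent agreement: two raters who match only as often as chance predicts score 0, not 0.5.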

  4. Spatial and contrast resolution of ultralow dose dentomaxillofacial CT imaging using iterative reconstruction technology

    PubMed Central

    Bischel, Alexander; Stratis, Andreas; Bosmans, Hilde; Jacobs, Reinhilde; Gassner, Eva-Maria; Puelacher, Wolfgang; Pauwels, Ruben

    2017-01-01

Objectives: The objective of this study was to determine how iterative reconstruction technology (IRT) influences contrast and spatial resolution in ultralow-dose dentomaxillofacial CT imaging. Methods: A polymethyl methacrylate phantom with various inserts was scanned using a reference protocol (RP) at CT dose index volume 36.56 mGy, a sinus protocol at 18.28 mGy and ultralow-dose protocols (LD) at 4.17 mGy, 2.36 mGy, 0.99 mGy and 0.53 mGy. All data sets were reconstructed using filtered back projection (FBP) and the following IRTs: adaptive statistical iterative reconstruction (ASIR-50, ASIR-100) and model-based iterative reconstruction (MBIR). Inserts containing line-pair patterns and contrast detail patterns for three different materials were scored by three observers. Observer agreement was analyzed using Cohen's kappa, and differences in performance between the protocols and reconstructions were analyzed with Dunn's test at α = 0.05. Results: Interobserver agreement was acceptable with a mean kappa value of 0.59. Compared with the RP using FBP, similar scores were achieved at 2.36 mGy using MBIR. MBIR reconstructions showed the highest noise suppression as well as good contrast even at the lowest doses. Overall, ASIR reconstructions did not outperform FBP. Conclusions: LD and MBIR at a dose reduction of >90% may show no significant differences in spatial and contrast resolution compared with an RP and FBP. Ultralow-dose CT and IRT should be further explored in clinical studies. PMID:28059562

  5. Sellar Floor Reconstruction with the Medpor Implant Versus Autologous Bone After Transnasal Transsphenoidal Surgery: Outcome in 200 Consecutive Patients.

    PubMed

    Liebelt, Brandon D; Huang, Meng; Baskin, David S

    2015-08-01

The Medpor porous polyethylene implant offers advantages for sellar floor reconstruction when indicated. This material has been used for cranioplasty and reconstruction of skull base defects and facial fractures. We present the most extensive use of this implant for sellar floor reconstruction and document the safety and benefits provided by this unique implant. The medical charts of 200 consecutive patients undergoing endonasal transsphenoidal surgery from April 2008 through December 2011 were reviewed. Material used for sellar floor reconstruction, pathologic diagnosis, immediate inpatient complications, and long-term complications were documented and analyzed. Outpatient follow-up was documented for a minimum of 1 year, extending in some patients up to 5 years. Of the 200 consecutive patients, 136 received sellar floor cranioplasty using the Medpor implant. Postoperative complications included 6 complaints of sinus irritation or drainage, 1 postoperative cerebrospinal fluid leak requiring operative re-exploration, 1 event of tension pneumocephalus requiring operative decompression, 1 case of aseptic meningitis, 1 subdural hematoma, and 1 case of epistaxis. The incidence of these complications did not differ from the autologous nasal bone group in a statistically significant manner. Sellar floor reconstruction remains an important part of transsphenoidal surgery to prevent postoperative complications. Various autologous and synthetic options are available to reconstruct the sellar floor, and the Medpor implant is a safe and effective option. The complication rate after surgery is equivalent to or lower than that of other methods of reconstruction, and the implant is readily incorporated into host tissue after implantation, minimizing infectious risk. Copyright © 2015 Elsevier Inc. All rights reserved.

  6. Approximate message passing with restricted Boltzmann machine priors

    NASA Astrophysics Data System (ADS)

    Tramel, Eric W.; Drémeau, Angélique; Krzakala, Florent

    2016-07-01

    Approximate message passing (AMP) has been shown to be an excellent statistical approach to signal inference and compressed sensing problems. The AMP framework provides modularity in the choice of signal prior; here we propose a hierarchical form of the Gauss-Bernoulli prior which utilizes a restricted Boltzmann machine (RBM) trained on the signal support to push reconstruction performance beyond that of simple i.i.d. priors for signals whose support can be well represented by a trained binary RBM. We present and analyze two methods of RBM factorization and demonstrate how these affect signal reconstruction performance within our proposed algorithm. Finally, using the MNIST handwritten digit dataset, we show experimentally that using an RBM allows AMP to approach oracle-support performance.

  7. Your place or mine? A phylogenetic comparative analysis of marital residence in Indo-European and Austronesian societies

    PubMed Central

    Fortunato, Laura; Jordan, Fiona

    2010-01-01

    Accurate reconstruction of prehistoric social organization is important if we are to put together satisfactory multidisciplinary scenarios about, for example, the dispersal of human groups. Such considerations apply in the case of Indo-European and Austronesian, two large-scale language families that are thought to represent Neolithic expansions. Ancestral kinship patterns have mostly been inferred through reconstruction of kin terminologies in ancestral proto-languages using the linguistic comparative method, and through geographical or distributional arguments based on the comparative patterns of kin terms and ethnographic kinship ‘facts’. While these approaches are detailed and valuable, the processes through which conclusions have been drawn from the data fail to provide explicit criteria for systematic testing of alternative hypotheses. Here, we use language trees derived using phylogenetic tree-building techniques on Indo-European and Austronesian vocabulary data. With these trees, ethnographic data and Bayesian phylogenetic comparative methods, we statistically reconstruct past marital residence and infer rates of cultural change between different residence forms, showing Proto-Indo-European to be virilocal and Proto-Malayo-Polynesian uxorilocal. The instability of uxorilocality and the rare loss of virilocality once gained emerge as common features of both families. PMID:21041215

  8. Three-dimensional ionospheric tomography reconstruction using the model function approach in Tikhonov regularization

    NASA Astrophysics Data System (ADS)

    Wang, Sicheng; Huang, Sixun; Xiang, Jie; Fang, Hanxian; Feng, Jian; Wang, Yu

    2016-12-01

Ionospheric tomography is based on the observed slant total electron content (sTEC) along different satellite-receiver rays to reconstruct three-dimensional electron density distributions. Due to the incomplete measurements provided by the satellite-receiver geometry, it is a typical ill-posed problem, and overcoming this ill-posedness remains a central research question. In this paper, the Tikhonov regularization method is used and the model function approach is applied to determine the optimal regularization parameter. This algorithm not only balances the weights between sTEC observations and the background electron density field but also converges globally and rapidly. The background error covariance is given by the product of the background model variance and a location-dependent spatial correlation, and the correlation model is developed by using sample statistics from an ensemble of International Reference Ionosphere 2012 (IRI2012) model outputs. Global Navigation Satellite System (GNSS) observations in China are used to present the reconstruction results, and measurements from two ionosondes are used to make independent validations. Both the test cases using artificial sTEC observations and actual GNSS sTEC measurements show that the regularization method can effectively improve the background model outputs.
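The Tikhonov step described above, balancing sTEC observations against a background field, can be sketched in its simplest form with a scalar regularization parameter (the paper uses a full background error covariance and selects the parameter via the model function approach; the scalar `lam` here is a simplifying assumption standing in for both):

```python
import numpy as np

def tikhonov_update(A, b, x_b, lam):
    """Minimize ||A x - b||^2 + lam * ||x - x_b||^2 for observation
    operator A, sTEC-like observations b, and background state x_b,
    via the normal equations."""
    n = A.shape[1]
    lhs = A.T @ A + lam * np.eye(n)
    rhs = A.T @ (b - A @ x_b)
    return x_b + np.linalg.solve(lhs, rhs)

# With A = I and lam = 1 the solution sits halfway between b and x_b
x = tikhonov_update(np.eye(2), np.array([1.0, 2.0]), np.zeros(2), 1.0)
```

As lam grows the solution is pulled toward the background field, and as lam shrinks it is pulled toward the (ill-posed) least-squares fit; choosing lam well is exactly what the model function approach automates.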

  9. Reconstructing Information in Large-Scale Structure via Logarithmic Mapping

    NASA Astrophysics Data System (ADS)

    Szapudi, Istvan

We propose to develop a new method to extract information from large-scale structure data combining two-point statistics and non-linear transformations; before, this information was available only with substantially more complex higher-order statistical methods. Initially, most of the cosmological information in large-scale structure lies in two-point statistics. With non-linear evolution, some of that useful information leaks into higher-order statistics. The PI and group have shown in a series of theoretical investigations how that leakage occurs, and explained the Fisher information plateau at smaller scales. This plateau means that even as more modes are added to the measurement of the power spectrum, the total cumulative information (loosely speaking, the inverse error bar) is not increasing. Recently we have shown in Neyrinck et al. (2009, 2010) that a logarithmic (and a related Gaussianization or Box-Cox) transformation on the non-linear Dark Matter or galaxy field reconstructs a surprisingly large fraction of this missing Fisher information of the initial conditions. This was predicted by the earlier wave mechanical formulation of gravitational dynamics by Szapudi & Kaiser (2003). The present proposal is focused on working out the theoretical underpinning of the method to a point that it can be used in practice to analyze data. In particular, one needs to deal with the usual real-life issues of galaxy surveys, such as complex geometry, discrete sampling (Poisson or sub-Poisson noise), bias (linear or non-linear, deterministic or stochastic), redshift distortions, projection effects for 2D samples, and the effects of photometric redshift errors. We will develop methods for weak lensing and Sunyaev-Zeldovich power spectra as well, the latter specifically targeting Planck. In addition, we plan to investigate the question of residual higher-order information after the non-linear mapping, and possible applications for cosmology.
Our aim will be to work out practical methods, with the ultimate goal of cosmological parameter estimation. We will quantify with standard MCMC and Fisher methods (including the DETF Figure of Merit when applicable) the efficiency of our estimators, comparing with the conventional method, which uses the untransformed field. Preliminary results indicate that the increase for NASA's WFIRST in the DETF Figure of Merit would be 1.5-4.2 using a range of pessimistic to optimistic assumptions, respectively.
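The core effect, that a logarithmic mapping Gaussianizes a skewed density field, is easy to demonstrate on a toy lognormal field (illustrative only; real density fields are only approximately lognormal):

```python
import numpy as np

rng = np.random.default_rng(1)
# Toy "density field": 1 + delta is lognormal, hence strongly skewed
delta = rng.lognormal(mean=0.0, sigma=1.0, size=100_000) - 1.0
log_field = np.log1p(delta)          # the logarithmic mapping log(1 + delta)

def sample_skewness(x):
    """Standardized third moment of a sample."""
    x = x - x.mean()
    return (x ** 3).mean() / (x ** 2).mean() ** 1.5

# By construction the mapped field is Gaussian, so its skewness is near 0,
# while the raw field's skewness is large
```

In the Gaussianized field the two-point statistics again carry most of the information, which is the leakage-recovery effect the proposal builds on.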

  10. Multi-resolution statistical image reconstruction for mitigation of truncation effects: application to cone-beam CT of the head

    NASA Astrophysics Data System (ADS)

    Dang, Hao; Webster Stayman, J.; Sisniega, Alejandro; Zbijewski, Wojciech; Xu, Jennifer; Wang, Xiaohui; Foos, David H.; Aygun, Nafi; Koliatsos, Vassilis E.; Siewerdsen, Jeffrey H.

    2017-01-01

A prototype cone-beam CT (CBCT) head scanner featuring model-based iterative reconstruction (MBIR) has recently been developed and has demonstrated the potential for reliable detection of acute intracranial hemorrhage (ICH), which is vital to diagnosis of traumatic brain injury and hemorrhagic stroke. However, data truncation (e.g. due to the head holder) can result in artifacts that reduce image uniformity and challenge ICH detection. We propose a multi-resolution MBIR method with an extended reconstruction field of view (RFOV) to mitigate truncation effects in CBCT of the head. The image volume includes a fine voxel size in the (inner) non-truncated region and a coarse voxel size in the (outer) truncated region. This multi-resolution scheme allows extension of the RFOV to mitigate truncation effects while introducing minimal increase in computational complexity. The multi-resolution method was incorporated in a penalized weighted least-squares (PWLS) reconstruction framework previously developed for CBCT of the head. Experiments with an anthropomorphic head phantom truncated by a carbon-fiber holder showed severe artifacts in conventional single-resolution PWLS, whereas extending the RFOV within the multi-resolution framework strongly reduced truncation artifacts. For the same extended RFOV, the multi-resolution approach reduced computation time compared to the single-resolution approach (viz. time reduced by 40.7%, 83.0%, and over 95% for image volumes of 600³, 800³, and 1000³ voxels, respectively). Algorithm parameters (e.g. regularization strength, the ratio of the fine and coarse voxel sizes, and RFOV size) were investigated to guide reliable parameter selection. The findings provide a promising method for truncation artifact reduction in CBCT and may be useful for other MBIR methods and applications for which truncation is a challenge.

  11. Construct and Compare Gene Coexpression Networks with DAPfinder and DAPview.

    PubMed

    Skinner, Jeff; Kotliarov, Yuri; Varma, Sudhir; Mine, Karina L; Yambartsev, Anatoly; Simon, Richard; Huyen, Yentram; Morgun, Andrey

    2011-07-14

    DAPfinder and DAPview are novel BRB-ArrayTools plug-ins to construct gene coexpression networks and identify significant differences in pairwise gene-gene coexpression between two phenotypes. Each significant difference in gene-gene association represents a Differentially Associated Pair (DAP). Our tools include several choices of filtering methods, gene-gene association metrics, statistical testing methods and multiple-comparison adjustments. Network results are easily displayed in Cytoscape. Analyses of glioma experiments and microarray simulations demonstrate the utility of these tools. DAPfinder is a new user-friendly tool for the reconstruction and comparison of biological networks.
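
    The core DAP test, deciding whether a gene pair's coexpression differs between two phenotypes, can be sketched with a Fisher z-test comparing two Pearson correlations. This is one standard choice among the several association metrics and tests the plug-ins offer; the function names below are hypothetical:

```python
import math
from statistics import mean

def pearson(x, y):
    """Pearson correlation of two equal-length expression profiles."""
    mx, my = mean(x), mean(y)
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = math.sqrt(sum((a - mx) ** 2 for a in x) * sum((b - my) ** 2 for b in y))
    return num / den

def dap_z(x1, y1, x2, y2):
    """z-statistic for a difference in gene-gene correlation between two phenotypes."""
    r1, r2 = pearson(x1, y1), pearson(x2, y2)
    z1 = 0.5 * math.log((1 + r1) / (1 - r1))   # Fisher transform of each correlation
    z2 = 0.5 * math.log((1 + r2) / (1 - r2))
    se = math.sqrt(1 / (len(x1) - 3) + 1 / (len(x2) - 3))
    return (z1 - z2) / se

# Toy expression values: the pair is positively coexpressed in phenotype 1 and
# negatively in phenotype 2, so |z| is large (|z| > 1.96 ~ p < 0.05, pre-adjustment).
g1 = [float(i) for i in range(10)]
g2 = [v + 0.5 * (-1) ** i for i, v in enumerate(g1)]
z = dap_z(g1, g2, g1, [9.0 - v for v in g2])
```

    In practice the resulting p-values would then pass through one of the plug-ins' multiple-comparison adjustments before a pair is declared a DAP.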

  12. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Huang, Tianyu; Xu, Hongyi; Chen, Wei

    Fiber-reinforced polymer composites are strong candidates for structural materials to replace steel and light alloys in lightweight vehicle design because of their low density and relatively high strength. In the integrated computational materials engineering (ICME) development of carbon fiber composites, microstructure reconstruction algorithms are needed to generate material microstructure representative volume elements (RVEs) based on material processing information. The microstructure RVE reconstruction enables material property prediction by finite element analysis (FEA). This paper presents an algorithm to reconstruct the microstructure of a chopped carbon fiber/epoxy laminate material system produced by compression molding, commonly known as sheet molding compound (SMC). The algorithm takes results from the material's manufacturing process as inputs, such as the orientation tensor of the fibers, the chopped fiber sheet geometry, and the fiber volume fraction. The chopped fiber sheets are treated as deformable rectangular chips, and a random packing algorithm is developed to pack these chips into a square plate. The RVE is built in a layer-by-layer fashion until the desired number of laminae is reached; a fine-tuning process is then applied to finalize the reconstruction. Compared to previous methods, this new approach can model bent fibers by allowing a limited amount of overlap between rectangular chips. Furthermore, the method does not require SMC microstructure images as inputs, for which image-based characterization techniques are not yet mature. Case studies are performed, and the results show that the statistics of the reconstructed microstructures generated by the algorithm match the target input parameters from processing well.

  13. Fast iterative image reconstruction using sparse matrix factorization with GPU acceleration

    NASA Astrophysics Data System (ADS)

    Zhou, Jian; Qi, Jinyi

    2011-03-01

    Statistically based iterative approaches for image reconstruction have gained much attention in medical imaging. An accurate system matrix that defines the mapping from the image space to the data space is the key to high-resolution image reconstruction. However, an accurate system matrix is often associated with high computational cost and huge storage requirements. Here we present a method to address this problem by using sparse matrix factorization and parallel computing on a graphics processing unit (GPU). We factor the accurate system matrix into three sparse matrices: a sinogram blurring matrix, a geometric projection matrix, and an image blurring matrix. The sinogram blurring matrix models the detector response. The geometric projection matrix is based on a simple line integral model. The image blurring matrix compensates for the line-of-response (LOR) degradation due to the simplified geometric projection matrix. The geometric projection matrix is precomputed, while the sinogram and image blurring matrices are estimated by minimizing the difference between the factored system matrix and the original system matrix. The resulting factored system matrix has far fewer nonzero elements than the original system matrix and thus substantially reduces the storage and computation cost. The smaller size also allows an efficient implementation of the forward and back projectors on GPUs, which have a limited amount of memory. Our simulation studies show that the proposed method can dramatically reduce the computation cost of high-resolution iterative image reconstruction. The proposed technique is applicable to image reconstruction for different imaging modalities, including X-ray CT, PET, and SPECT.
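
    The factored forward projector can be sketched in miniature: compose a detector-blur matrix, a line-integral-style geometric matrix, and an image-blur matrix, and apply them sequentially instead of storing their (denser) product. All sizes and kernels below are toy assumptions, not the paper's actual PET model:

```python
import numpy as np

rng = np.random.default_rng(0)
n_pix, n_det = 16, 8              # 4x4 image, 8 detector bins (toy sizes)

# Geometric projection G: a simple, sparse line-integral-style model.
G = (rng.random((n_det, n_pix)) < 0.2).astype(float)

def band_blur(n, w=0.25):
    """Tridiagonal blurring matrix (nearest-neighbour kernel)."""
    return (1 - 2 * w) * np.eye(n) + w * np.eye(n, k=1) + w * np.eye(n, k=-1)

S = band_blur(n_det)              # sinogram blur: models detector response
B = band_blur(n_pix)              # image blur: compensates the simplified projector

A = S @ G @ B                     # the "accurate" system matrix (suffers fill-in)
x = rng.random(n_pix)
y = S @ (G @ (B @ x))             # factored forward projection, three sparse products

nnz_factors = sum(np.count_nonzero(M) for M in (S, G, B))  # storage for the factors
```

    Applying S, G, and B in sequence reproduces the full matrix-vector product while storing only the sparse factors; on a GPU this maps onto three sparse matrix-vector kernels.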

  14. Performance analysis of different tuning rules for an isothermal CSTR using integrated EPC and SPC

    NASA Astrophysics Data System (ADS)

    Roslan, A. H.; Karim, S. F. Abd; Hamzah, N.

    2018-03-01

    This paper demonstrates the integration of Engineering Process Control (EPC) and Statistical Process Control (SPC) for the control of product concentration in an isothermal CSTR. The objectives of this study are to evaluate the performance of the Ziegler-Nichols (Z-N), Direct Synthesis (DS) and Internal Model Control (IMC) tuning methods and to determine the most effective method for this process. The simulation model was obtained from past literature and reconstructed in MATLAB Simulink to evaluate the process response. Additionally, process stability, capability and normality were analyzed using Process Capability Sixpack reports in Minitab. Based on the results, DS displays the best response, having the smallest rise time, settling time, overshoot, undershoot, Integral Time Absolute Error (ITAE) and Integral Square Error (ISE). Based on the statistical analysis, DS also emerges as the best tuning method, exhibiting the highest process stability and capability.
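
    The error integrals used to rank the tuning methods can be computed directly from a step response. A minimal sketch, assuming an idealized first-order closed-loop response y(t) = 1 - exp(-t/tau) rather than the paper's actual CSTR model:

```python
import math

def step_metrics(tau, dt=1e-3, t_end=40.0):
    """ISE and ITAE for a unit-step response y(t) = 1 - exp(-t/tau)."""
    ise = itae = 0.0
    t = 0.0
    while t < t_end:
        e = math.exp(-t / tau)      # tracking error: setpoint (1) minus response
        ise += e * e * dt           # Integral of Squared Error
        itae += t * abs(e) * dt     # Integral of Time-weighted Absolute Error
        t += dt
    return ise, itae

# Analytically ISE = tau/2 and ITAE = tau^2, so a faster-settling tuning
# (smaller effective tau) scores lower on both criteria.
ise_fast, itae_fast = step_metrics(1.0)
ise_slow, itae_slow = step_metrics(2.0)
```

    The same integrals, evaluated on the simulated CSTR responses, are what allow Z-N, DS and IMC tunings to be ranked numerically.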

  15. Postmastectomy Chest Wall Radiation to a Temporary Tissue Expander or Permanent Breast Implant-Is There a Difference in Complication Rates?

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Anderson, Penny R.; Freedman, Gary; Nicolaou, Nicos

    Purpose: The purpose of this study was to evaluate the likelihood of complications and cosmetic results among breast cancer patients who underwent modified radical mastectomy (MRM) and breast reconstruction followed by radiation therapy (RT) to either a temporary tissue expander (TTE) or permanent breast implant (PI). Methods and Materials: Records were reviewed of 74 patients with breast cancer who underwent MRM followed by breast reconstruction and RT. Reconstruction consisted of a TTE usually followed by exchange to a PI. RT was delivered to the TTE in 62 patients and to the PI in 12 patients. Dose to the reconstructed chest wall was 50 Gy. Median follow-up was 48 months. The primary end point was the incidence of complications involving the reconstruction. Results: There was no significant difference in the rate of major complications between the PI group (0%) and the TTE group (4.8%). No patients lost the reconstruction in the PI group. Three patients lost the reconstruction in the TTE group. Excellent/good cosmetic scores were achieved in 90% of the TTE group and 80% of the PI group (p = 0.22). In multivariate regression models, the type of reconstruction irradiated had no statistically significant impact on complication rates. Conclusions: Patients treated with breast reconstruction and RT can experience low rates of major complications. We demonstrate no significant difference in the overall rate of major or minor complications between the TTE and PI groups. Postmastectomy RT to either the TTE or the PI should be considered an acceptable treatment option in all eligible patients.

  16. 3D noise power spectrum applied on clinical MDCT scanners: effects of reconstruction algorithms and reconstruction filters

    NASA Astrophysics Data System (ADS)

    Miéville, Frédéric A.; Bolard, Gregory; Benkreira, Mohamed; Ayestaran, Paul; Gudinchet, François; Bochud, François; Verdun, Francis R.

    2011-03-01

    The noise power spectrum (NPS) is the reference metric for understanding the noise content in computed tomography (CT) images. To evaluate the noise properties of clinical multidetector CT (MDCT) scanners, local 2D and 3D NPSs were computed for different acquisition and reconstruction parameters. A 64-slice and a 128-slice MDCT scanner were employed. Measurements were performed on a water phantom in axial and helical acquisition modes. The CT dose index was identical for both installations. The influence of parameters such as the pitch, the reconstruction filter (soft, standard and bone) and the reconstruction algorithm (filtered back projection (FBP), adaptive statistical iterative reconstruction (ASIR)) was investigated. Images were also reconstructed in the coronal plane using a reformat process, and 2D and 3D NPSs were then computed. In axial acquisition mode, the 2D axial NPS measured at the phantom center showed a strong magnitude variation as a function of the z-direction. In helical mode, a directional dependency with a lobular shape was observed while the magnitude of the NPS remained constant. Strong effects of the reconstruction filter, pitch and reconstruction algorithm were observed in the 3D NPS results for both MDCTs. With ASIR, a reduction of the NPS magnitude and a shift of the NPS peak toward the low-frequency range were visible. The 2D coronal NPS obtained from the reformatted images was affected by the interpolation when compared to the 2D coronal NPS obtained from 3D measurements. The noise properties of volumes measured on latest-generation MDCTs were thus studied using a local 3D NPS metric; however, the impact of noise non-stationarity may need further investigation.
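
    The 2D NPS estimation step can be sketched as follows: subtract the ensemble mean from repeated scans of a uniform phantom, Fourier transform each noise realization, and average the normalized squared magnitudes. The white-noise input is a toy assumption (real CT noise is spatially correlated), chosen so that the Parseval check below has a known answer:

```python
import numpy as np

def nps_2d(images, dx=1.0, dy=1.0):
    """Ensemble 2D noise power spectrum from repeated scans of a uniform phantom."""
    stack = np.asarray(images, dtype=float)
    noise = stack - stack.mean(axis=0)           # remove the deterministic background
    n_y, n_x = noise.shape[1:]
    ps = np.abs(np.fft.fft2(noise, axes=(1, 2))) ** 2
    return dx * dy / (n_x * n_y) * ps.mean(axis=0)

rng = np.random.default_rng(1)
imgs = rng.normal(0.0, 2.0, size=(50, 64, 64))   # 50 repeats of white noise, sigma = 2
nps = nps_2d(imgs)

# Parseval check: integrating the NPS over frequency (bin area 1/64 * 1/64 here)
# recovers the pixel variance (~4), up to the small bias from mean subtraction.
variance = nps.sum() / (64 * 64)
```

    For correlated (e.g. ASIR-like) noise the same estimator would show the spectral shape changes described above rather than a flat spectrum.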

  17. Image Reconstruction for Hybrid True-Color Micro-CT

    PubMed Central

    Xu, Qiong; Yu, Hengyong; Bennett, James; He, Peng; Zainon, Rafidah; Doesburg, Robert; Opie, Alex; Walsh, Mike; Shen, Haiou; Butler, Anthony; Butler, Phillip; Mou, Xuanqin; Wang, Ge

    2013-01-01

    X-ray micro-CT is an important imaging tool for biomedical researchers. Our group has recently proposed a hybrid “true-color” micro-CT system to improve contrast resolution with lower system cost and radiation dose. The system incorporates an energy-resolved photon-counting true-color detector into a conventional micro-CT configuration, and can be used for material decomposition. In this paper, we demonstrate an interior color-CT image reconstruction algorithm developed for this hybrid true-color micro-CT system. A compressive sensing-based statistical interior tomography method is employed to reconstruct each channel in the local spectral imaging chain, where the reconstructed global gray-scale image from the conventional imaging chain serves as the initial guess. Principal component analysis was used to map the spectral reconstructions into the color space. The proposed algorithm was evaluated by numerical simulations, physical phantom experiments, and animal studies. The results confirm the merits of the proposed algorithm, and demonstrate the feasibility of the hybrid true-color micro-CT system. Additionally, a “color diffusion” phenomenon was observed whereby high-quality true-color images are produced not only inside the region of interest (ROI), but also in neighboring regions. It appears that harnessing this phenomenon could reduce the size of the color detector needed for a given ROI, further reducing system cost and radiation dose. PMID:22481806

  18. ANATOMICAL RECONSTRUCTION OF ANTERIOR CRUCIATE LIGAMENT OF THE KNEE: DOUBLE BAND OR SINGLE BAND?

    PubMed Central

    Zanella, Luiz Antonio Zanotelli; Junior, Adair Bervig; Badotti, Augusto Alves; Michelin, Alexandre Froes; Algarve, Rodrigo Ilha; de Quadros Martins, Cesar Antonio

    2015-01-01

    Objective: To evaluate the double-band and single-band techniques for anatomical reconstruction of the anterior cruciate ligament of the knee and demonstrate that the double-band technique not only provides greater anterior stability but also causes less pain and a better subjective patient response. Methods: We selected 42 patients who underwent anterior cruciate ligament reconstruction, by means of either the single-band anatomical reconstruction technique, using flexor tendon grafts with two tunnels, or the double-band anatomical reconstruction technique, using four tunnels and grafts from the semitendinosus and gracilis tendons. All fixations were performed using interference screws. There was no variation in the sample. Before the operation, the objective and subjective IKDC scores, Lysholm score and length of time with the injury were evaluated. All these variables were reassessed six months later, and the KT-1000 correlation with the contralateral knee was also evaluated. Results: There was no significant difference between the two groups in subjective evaluations, but the single-band group showed better results in relation to range of motion and objective evaluations including KT-1000 (with statistical significance). Conclusion: Our study demonstrated that there was no difference between the two groups in subjective evaluations, but better results were found using the single-band anatomical technique, in relation to objective evaluations. PMID:27042621

  19. Nature of Driving Force for Protein Folding-- A Result From Analyzing the Statistical Potential

    NASA Astrophysics Data System (ADS)

    Li, Hao; Tang, Chao; Wingreen, Ned S.

    1998-03-01

    In a statistical approach to protein structure analysis, Miyazawa and Jernigan (MJ) derived a 20 × 20 matrix of inter-residue contact energies between different types of amino acids. Using the method of eigenvalue decomposition, we find that the MJ matrix can be accurately reconstructed from its first two principal component vectors as M_ij = C_0 + C_1(q_i + q_j) + C_2 q_i q_j, with constants C_0, C_1, C_2 and 20 values q_i associated with the 20 amino acids. This regularity is due to hydrophobic interactions and a force of demixing, the latter obeying Hildebrand's solubility theory of simple liquids.
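
    The regularity is easy to verify numerically: any matrix of the stated form lies in the span of the vectors 1 and q, so it has at most two nonzero eigenvalues and is recovered exactly from its two leading eigencomponents. A toy check with arbitrary constants (for the real MJ matrix the reconstruction is only approximate):

```python
import numpy as np

rng = np.random.default_rng(2)
q = rng.random(20)                      # one q value per amino-acid type
C0, C1, C2 = -1.0, 0.5, -2.0            # arbitrary constants for this toy check
ones = np.ones(20)

# M_ij = C0 + C1*(q_i + q_j) + C2*q_i*q_j
M = C0 * np.outer(ones, ones) + C1 * (np.outer(q, ones) + np.outer(ones, q)) \
    + C2 * np.outer(q, q)

w, V = np.linalg.eigh(M)                # eigenvalue decomposition (M is symmetric)
top2 = np.argsort(np.abs(w))[::-1][:2]  # two principal components by |eigenvalue|
M2 = (V[:, top2] * w[top2]) @ V[:, top2].T
```

    M2 matches M to machine precision because M can be written as [1 q] B [1 q]^T for a 2 × 2 matrix B, i.e. it has rank at most two.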

  20. A general reconstruction of the recent expansion history of the universe

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vitenti, S.D.P.; Penna-Lima, M., E-mail: dias@iap.fr, E-mail: pennal@apc.in2p3.fr

    Distance measurements are currently the most powerful tool to study the expansion history of the universe without specifying its matter content or any theory of gravitation. Assuming only an isotropic, homogeneous and flat universe, in this work we introduce a model-independent method to reconstruct the deceleration function directly via a piecewise function. Including a penalty factor, we are able to vary continuously the complexity of the deceleration function from a linear case to an arbitrary (n+1)-knot spline interpolation. We carry out a Monte Carlo (MC) analysis to determine the best penalty factor, evaluating the bias-variance trade-off, given the uncertainties of the SDSS-II and SNLS combined supernova sample (JLA) and compilations of baryon acoustic oscillation (BAO) and H(z) data. The bias-variance analysis is done for three fiducial models with different features in the deceleration curve. We perform the MC analysis by generating mock catalogs and computing their best fits. For each fiducial model, we test different reconstructions using, in each case, more than 10⁴ catalogs, about 5 × 10⁵ in total. This investigation proved essential in determining the best reconstruction with which to study these data. We show that, evaluating a single fiducial model, the conclusions about the bias-variance ratio are misleading. We determine the reconstruction method in which the bias represents at most 10% of the total uncertainty. In all statistical analyses, we fit the coefficients of the deceleration function along with four nuisance parameters of the supernova astrophysical model. For the full sample, we also fit H₀ and the sound horizon r_s(z_d) at the drag redshift. The bias-variance trade-off analysis shows that, apart from the deceleration function, all other estimators are unbiased. Finally, we apply the Ensemble Sampler Markov Chain Monte Carlo (ESMCMC) method to explore the posterior of the deceleration function up to redshift 1.3 (using only JLA) and 2.3 (JLA+BAO+H(z)). We find that the standard cosmological model agrees within the 3σ level with the reconstructed results over the whole studied redshift intervals. Since our method is calibrated to minimize the bias, the error bars of the reconstructed functions are a good approximation of the total uncertainty.

  1. Statistical model based iterative reconstruction (MBIR) in clinical CT systems: Experimental assessment of noise performance

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, Ke; Tang, Jie; Chen, Guang-Hong, E-mail: gchen7@wisc.edu

    Purpose: To reduce radiation dose in CT imaging, the statistical model based iterative reconstruction (MBIR) method has been introduced for clinical use. Given the principle of MBIR and its nonlinear nature, the noise performance of MBIR is expected to differ from that of the well-understood filtered backprojection (FBP) reconstruction method. The purpose of this work is to experimentally assess the unique noise characteristics of MBIR using a state-of-the-art clinical CT system. Methods: Three physical phantoms, including a water cylinder and two pediatric head phantoms, were scanned in axial mode using a 64-slice CT scanner (Discovery CT750 HD, GE Healthcare, Waukesha, WI) at seven mAs levels (5, 12.5, 25, 50, 100, 200, 300). At each mAs level, each phantom was scanned 50 times to generate an image ensemble for noise analysis. Both the FBP method with a standard kernel and the MBIR method (Veo®, GE Healthcare, Waukesha, WI) were used for CT image reconstruction. The three-dimensional (3D) noise power spectrum (NPS), two-dimensional (2D) NPS, and zero-dimensional NPS (noise variance) were assessed both globally and locally. Noise magnitude, noise spatial correlation, noise spatial uniformity and their dose dependence were examined for the two reconstruction methods. Results: (1) At each dose level and at each frequency, the magnitude of the NPS of MBIR was smaller than that of FBP. (2) While the shape of the NPS of FBP was dose-independent, the shape of the NPS of MBIR was strongly dose-dependent; lower dose led to a “redder” NPS with a lower mean frequency value. (3) The noise standard deviation (σ) of MBIR and dose were found to be related through a power law σ ∝ (dose)^−β with exponent β ≈ 0.25, which violates the classical σ ∝ (dose)^−0.5 power law of FBP. (4) With MBIR, noise reduction was most prominent for thin image slices. (5) MBIR led to better noise spatial uniformity than FBP. (6) A composite image generated from two MBIR images acquired at two different dose levels (D1 and D2) demonstrated lower noise than an image acquired at a dose level of D1+D2. Conclusions: The noise characteristics of the MBIR method are significantly different from those of the FBP method. The well-known tradeoff between CT image noise and radiation dose is modified by MBIR into a more gradual dependence of noise on dose. Additionally, other CT noise properties that had been well understood based on linear system theory are also altered by MBIR. Clinical CT scan protocols that were optimized based on classical CT noise properties need to be carefully re-evaluated for systems equipped with MBIR in order to maximize the method's potential clinical benefits in dose reduction and/or CT image quality improvement.
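
    The reported power law can be recovered from noise measurements by linear regression in log-log space. A sketch with hypothetical noise values generated to follow σ ∝ dose^(-0.25) exactly (measured data would scatter around the line):

```python
import numpy as np

doses = np.array([5, 12.5, 25, 50, 100, 200, 300])  # the study's mAs levels

# Hypothetical MBIR-like noise obeying sigma = c * dose^(-beta) with beta = 0.25.
sigma = 10.0 * doses ** -0.25

# beta is the negative slope of log(sigma) versus log(dose).
beta = -np.polyfit(np.log(doses), np.log(sigma), 1)[0]
```

    The same fit applied to FBP-like data generated with exponent 0.5 would return β ≈ 0.5, which is how the two dose-noise behaviors can be distinguished quantitatively.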

  2. Objective evaluation of linear and nonlinear tomosynthetic reconstruction algorithms

    NASA Astrophysics Data System (ADS)

    Webber, Richard L.; Hemler, Paul F.; Lavery, John E.

    2000-04-01

    This investigation objectively tests five different tomosynthetic reconstruction methods involving three different digital sensors, each used in a different radiologic application: chest, breast, and pelvis, respectively. The common task was to simulate a specific representative projection for each application by summation of appropriately shifted tomosynthetically generated slices produced using the five algorithms. These algorithms were, respectively, (1) conventional back projection, (2) iteratively deconvoluted back projection, (3) a nonlinear algorithm similar to back projection, except that the minimum value from all of the component projections for each pixel is computed instead of the average value, (4) a similar algorithm wherein the maximum value was computed instead of the minimum value, and (5) the same type of algorithm except that the median value was computed. Using these five algorithms, we obtained data from each sensor-tissue combination, yielding three factorially distributed series of contiguous tomosynthetic slices. The respective slice stacks were then aligned orthogonally and averaged to yield an approximation of a single orthogonal projection radiograph of the complete (unsliced) tissue thickness. Resulting images were histogram-equalized, and actual projection control images were subtracted from their tomosynthetically synthesized counterparts. Standard deviations of the resulting histograms were recorded as inverse figures of merit (FOMs). Five human observers also visually ranked image differences for a subset (breast data only) to determine whether their subjective observations correlated with the corresponding FOMs. Nonparametric statistical analysis of these data demonstrated significant differences (P < 0.05) between reconstruction algorithms. The nonlinear minimization reconstruction method nearly always outperformed the other methods tested. Observer rankings were similar to those measured objectively.

  3. Pollen-based reconstruction of Holocene climate variability in the Eifel region evaluated with stable isotopes

    NASA Astrophysics Data System (ADS)

    Kühl, Norbert; Moschen, Robert; Wagner, Stefanie

    2010-05-01

    Pollen as well as stable isotopes have great potential as climate proxy data. While variability in these proxy data is frequently assumed to reflect climate variability, factors other than climate, including human impact and statistical noise, often cannot be excluded as the primary cause of the observed variability. Multiproxy studies offer the opportunity to test different drivers by providing different lines of evidence for environmental change such as climate variability and human impact. In this multiproxy study we use pollen and peat humification to evaluate to what extent stable oxygen and carbon isotope series from the peat bog "Dürres Maar" reflect human impact rather than climate variability. For times before strong anthropogenic vegetation change, isotope series from Dürres Maar were used to validate quantitative reconstructions based on pollen. Our study site is the kettle-hole peat bog "Dürres Maar" in the Eifel low mountain range, Germany (450 m asl), which accumulated 12 m of peat during the last 10,000 years. Pollen was analysed with a sum of at least 1000 terrestrial pollen grains throughout the profile to minimize statistical effects on the reconstructions. A recently developed probabilistic indicator-taxa method ("pdf-method") was used for the quantitative climate estimates (January and July temperature) based on pollen. For isotope analysis, attention was given to using monospecific Sphagnum leaves whenever possible, reducing the potential for a species effect and for any artefact originating from selective degradation of different morphological parts of Sphagnum plants (Moschen et al., 2009). Pollen at Dürres Maar reflects the variable and partly strong human impact on vegetation during the last 4000 years. Stable isotope time series were apparently not influenced by human impact at this site.
This highlights the potential of stable isotope investigations from peat for climatic interpretation, because stable isotope series from lacustrine sediments might strongly react to anthropogenic deforestation, as carbon isotope time series from the adjacent Lake Holzmaar suggest. Reconstructions based on pollen with the pdf-method are robust to the human impact during the last 4000 years, but do not reproduce the fine scale climate variability that can be derived from the stable isotope series (Kühl et al., in press). In contrast, reconstructions on the basis of pollen data show relatively pronounced climate variability (here: January temperature) during the Mid-Holocene, which is known from many other European records. The oxygen isotope time series as available now indicate that at least some of the observed variability indeed reflects climate variability. However, stable carbon isotopes show little concordance. At this stage our results point in the direction that 1) the isotopic composition might reflect a shift in influencing factors during the Holocene, 2) climate trends can robustly be reconstructed with the pdf method and 3) fine scale climate variability can potentially be reconstructed using the pdf-method, given that climate sensitive taxa at their distribution limit are present. The latter two conclusions are of particular importance for the reconstruction of climatic trends and variability of interglacials older than the Holocene, when sites are rare and pollen is often the only suitable proxy in terrestrial records. Kühl, N., Moschen, R., Wagner, S., Brewer, S., Peyron, O., in press. A multiproxy record of Late Holocene natural and anthropogenic environmental change from the Sphagnum peat bog Dürres Maar, Germany: implications for quantitative climate reconstructions based on pollen. J. Quat. Sci., DOI: 10.1002/jqs.1342. Available online. Moschen, R., Kühl, N., Rehberger, I., Lücke, A., 2009. 
Stable carbon and oxygen isotopes in sub-fossil Sphagnum: Assessment of their applicability for palaeoclimatology. Chemical Geology 259, 262-272.
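
    The spirit of the pdf-method, combining the climate tolerances of several indicator taxa into one estimate, can be sketched by multiplying per-taxon probability densities. The Gaussian pdfs and the example taxa values below are simplifying assumptions (the actual method uses empirically derived distributions of taxon occurrence in climate space):

```python
def combine_taxa(estimates):
    """Multiply Gaussian temperature pdfs (mean, sd) from several indicator taxa.

    The product of Gaussians is Gaussian: precisions add, and the combined mean
    is the precision-weighted average, so each extra taxon narrows the estimate.
    """
    precision = sum(1.0 / sd ** 2 for _, sd in estimates)
    mean = sum(m / sd ** 2 for m, sd in estimates) / precision
    return mean, (1.0 / precision) ** 0.5

# Hypothetical July-temperature tolerances of three pollen taxa (mean degC, sd degC).
taxa = [(16.0, 2.0), (18.0, 1.0), (17.0, 1.5)]
t_july, t_sd = combine_taxa(taxa)
```

    This also illustrates the paper's point about climate-sensitive taxa at their distribution limit: a taxon with a narrow tolerance (small sd) dominates the combined estimate and sharpens it most.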

  4. Reconstruction of the Foot and Ankle Using Pedicled or Free Flaps: Perioperative Flap Survival Analysis

    PubMed Central

    Li, Xiucun; Cui, Jianli; Maharjan, Suraj; Lu, Laijin; Gong, Xu

    2016-01-01

    Objective The purpose of this study is to determine the correlation between non-technical risk factors and the perioperative flap survival rate and to evaluate the choice of skin flap for the reconstruction of foot and ankle. Methods This was a clinical retrospective study. Nine variables were identified. The Kaplan-Meier method coupled with a log-rank test and a Cox regression model was used to predict the risk factors that influence the perioperative flap survival rate. The relationship between postoperative wound infection and risk factors was also analyzed using a logistic regression model. Results The overall flap survival rate was 85.42%. The necrosis rates of free flaps and pedicled flaps were 5.26% and 20.69%, respectively. According to the Cox regression model, flap type (hazard ratio [HR] = 2.592; 95% confidence interval [CI] (1.606, 4.184); P < 0.001) and postoperative wound infection (HR = 0.266; 95% CI (0.134, 0.529); P < 0.001) were found to be statistically significant risk factors associated with flap necrosis. Based on the logistic regression model, preoperative wound bed inflammation (odds ratio [OR] = 11.371,95% CI (3.117, 41.478), P < 0.001) was a statistically significant risk factor for postoperative wound infection. Conclusion Flap type and postoperative wound infection were both independent risk factors influencing the flap survival rate in the foot and ankle. However, postoperative wound infection was a risk factor for the pedicled flap but not for the free flap. Microvascular anastomosis is a major cause of free flap necrosis. To reconstruct complex or wide soft tissue defects of the foot or ankle, free flaps are safer and more reliable than pedicled flaps and should thus be the primary choice. PMID:27930679
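
    The survival-curve step of such an analysis can be sketched with a plain Kaplan-Meier estimator: at each event time, multiply the running survival by one minus the fraction of at-risk flaps that failed. The follow-up data below are hypothetical, not the study's:

```python
from itertools import groupby

def kaplan_meier(times, events):
    """Kaplan-Meier estimate. events[i] = 1 for failure (e.g. flap necrosis), 0 if censored."""
    data = sorted(zip(times, events))
    at_risk = len(data)
    curve, s = [], 1.0
    for t, grp in groupby(data, key=lambda pair: pair[0]):
        grp = list(grp)
        deaths = sum(e for _, e in grp)
        if deaths:
            s *= 1.0 - deaths / at_risk   # survival drops only at observed failures
            curve.append((t, s))
        at_risk -= len(grp)               # failures and censored cases leave the risk set
    return curve

# Hypothetical follow-up days and outcomes for five flaps
# (failure at day 30, censored at 60, failure at 90, censored at 120 and 150).
curve = kaplan_meier([30, 60, 90, 120, 150], [1, 0, 1, 0, 0])
```

    Comparing two such curves (e.g. free vs. pedicled flaps) is what the log-rank test formalizes, and the Cox model then adjusts that comparison for covariates such as wound infection.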

  5. Clinical evaluation of the aberrant left hepatic artery arising from the left gastric artery in esophagectomy.

    PubMed

    Maki, Harufumi; Satodate, Hitoshi; Satou, Shouichi; Nakajima, Kentaro; Nagao, Atsuki; Watanabe, Kazuteru; Nara, Satoshi; Furushima, Kaoru; Harihara, Yasushi

    2018-04-12

    The left gastric artery (LGA) is commonly severed when the gastric tube is made for esophageal reconstruction. Sacrifice of the LGA can cause liver ischemic necrosis in patients with an aberrant left hepatic artery (ALHA) arising from the LGA. We experienced a case of life-threatening hepatic abscess after severing the ALHA. Therefore, the purpose of this study is to evaluate clinical outcomes of severing the ALHA. We retrospectively enrolled 176 consecutive patients who underwent esophagectomy with gastric tube reconstruction. They were classified into the ALHA (N = 16, 9.1%) and non-ALHA (N = 160, 90.9%) groups. Univariate analysis was performed to compare the clinicopathological variables. Long-term survival was analyzed using the Kaplan-Meier method in matched pair case-control analysis. The postoperative morbidities were not statistically different between the two groups, although serum alanine aminotransferase levels on postoperative days 1 and 3 were significantly higher in the ALHA group (36 IU/L, 14-515; 32 IU/L, 13-295) than in the non-ALHA group (24 IU/L, 8-163; 19 IU/L, 6-180), respectively (p = 0.0055; p = 0.0073). Overall survival was not statistically different between the two groups (p = 0.26). Severe hepatic abscess occurred in 6.3% of the patients with the ALHA after esophagectomy, even though the results presented here found no statistical differences in morbidity or mortality with or without the ALHA. Surgeons should probably attempt to preserve the ALHA especially in patients with altered liver function while making a gastric tube for esophageal reconstruction.

  6. Using independent component analysis for electrical impedance tomography

    NASA Astrophysics Data System (ADS)

    Yan, Peimin; Mo, Yulong

    2004-05-01

    Independent component analysis (ICA) is a way to resolve signals into independent components based on the statistical characteristics of the signals. It is a method for factoring probability densities of measured signals into a set of densities that are as statistically independent as possible under the assumptions of a linear model. Electrical impedance tomography (EIT) is used to detect variations of the electric conductivity of the human body. Because there are variations of the conductivity distributions inside the body, EIT presents multi-channel data. In order to obtain all the information contained in different locations of tissue, it is necessary to image the individual conductivity distributions. In this paper we consider applying ICA to EIT on the signal subspace (individual conductivity distribution). Using ICA, the signal subspace is decomposed into statistically independent components. The individual conductivity distribution can then be reconstructed via the sensitivity theorem. Computer simulations show that the full information contained in the multi-conductivity distribution can be obtained by this method.

  7. Local motion-compensated method for high-quality 3D coronary artery reconstruction

    PubMed Central

    Liu, Bo; Bai, Xiangzhi; Zhou, Fugen

    2016-01-01

    The 3D reconstruction of the coronary artery tree from X-ray angiograms rotationally acquired on a C-arm has great clinical value. While cardiac-gated reconstruction has shown promising results, it suffers from the problem of residual motion. This work proposed a new local motion-compensated reconstruction method to handle this issue. An initial image was first reconstructed using a regularized iterative reconstruction method. A 3D/2D registration method was then proposed to estimate the residual vessel motion. Finally, the residual motion was compensated for in the final reconstruction using the extended iterative reconstruction method. Through quantitative evaluation, it was found that a high-quality 3D reconstruction could be obtained, with results comparable to a state-of-the-art method. PMID:28018741

  8. Conventionalism and Methodological Standards in Contending with Skepticism about Uncertainty

    NASA Astrophysics Data System (ADS)

    Brumble, K. C.

    2012-12-01

    What it means to measure and interpret confidence and uncertainty in a result is often particular to a specific scientific community and its methodology of verification. Additionally, methodology in the sciences varies greatly across disciplines and scientific communities. Understanding the accuracy of predictions of a particular science thus depends largely upon having an intimate working knowledge of the methods, standards, and conventions utilized and underpinning discoveries in that scientific field. Thus, valid criticism of scientific predictions and discoveries must be conducted by those who are literate in the field in question: they must have intimate working knowledge of the methods of the particular community and of the particular research under question. The interpretation and acceptance of uncertainty is one such shared, community-based convention. In the philosophy of science, this methodological and community-based way of understanding scientific work is referred to as conventionalism. By applying the conventionalism of historian and philosopher of science Thomas Kuhn to recent attacks upon methods of multi-proxy mean temperature reconstructions, I hope to illuminate how climate skeptics and their adherents fail to appreciate the need for community-based fluency in the methodological standards for understanding uncertainty shared by the wider climate science community. Further, I will flesh out a picture of climate science community standards of evidence and statistical argument following the work of philosopher of science Helen Longino. I will describe how failure to appreciate the conventions of professionalism and standards of evidence accepted in the climate science community results in the application of naïve falsification criteria. 
Appeal to naïve falsification in turn has allowed scientists outside the standards and conventions of the mainstream climate science community to consider themselves and to be judged by climate skeptics as valid critics of particular statistical reconstructions with naïve and misapplied methodological criticism. Examples will include the skeptical responses to multi-proxy mean temperature reconstructions and congressional hearings criticizing the work of Michael Mann et al.'s Hockey Stick.

  9. Statistical model based iterative reconstruction (MBIR) in clinical CT systems: experimental assessment of noise performance.

    PubMed

    Li, Ke; Tang, Jie; Chen, Guang-Hong

    2014-04-01

    To reduce radiation dose in CT imaging, the statistical model based iterative reconstruction (MBIR) method has been introduced for clinical use. Based on the principle of MBIR and its nonlinear nature, the noise performance of MBIR is expected to differ from that of the well-understood filtered backprojection (FBP) reconstruction method. The purpose of this work is to experimentally assess the unique noise characteristics of MBIR using a state-of-the-art clinical CT system. Three physical phantoms, including a water cylinder and two pediatric head phantoms, were scanned in axial scanning mode using a 64-slice CT scanner (Discovery CT750 HD, GE Healthcare, Waukesha, WI) at seven different mAs levels (5, 12.5, 25, 50, 100, 200, 300). At each mAs level, each phantom was repeatedly scanned 50 times to generate an image ensemble for noise analysis. Both the FBP method with a standard kernel and the MBIR method (Veo®, GE Healthcare, Waukesha, WI) were used for CT image reconstruction. Three-dimensional (3D) noise power spectrum (NPS), two-dimensional (2D) NPS, and zero-dimensional NPS (noise variance) were assessed both globally and locally. Noise magnitude, noise spatial correlation, noise spatial uniformity, and their dose dependence were examined for the two reconstruction methods. (1) At each dose level and at each frequency, the magnitude of the NPS of MBIR was smaller than that of FBP. (2) While the shape of the NPS of FBP was dose-independent, the shape of the NPS of MBIR was strongly dose-dependent; lower dose led to a "redder" NPS with a lower mean frequency value. (3) The noise standard deviation (σ) of MBIR and dose were found to be related through a power law of σ ∝ (dose)^(-β) with exponent β ≈ 0.25, which violated the classical σ ∝ (dose)^(-0.5) power law of FBP. (4) With MBIR, noise reduction was most prominent for thin image slices. (5) MBIR led to better noise spatial uniformity when compared with FBP.
(6) A composite image generated from two MBIR images acquired at two different dose levels (D1 and D2) demonstrated lower noise than an image acquired at a dose level of D1+D2. The noise characteristics of the MBIR method are significantly different from those of the FBP method. The well-known tradeoff between CT image noise and radiation dose has been modified by MBIR to establish a more gradual dependence of noise on dose. Additionally, some other CT noise properties that had been well understood based on linear system theory have also been altered by MBIR. Clinical CT scan protocols that had been optimized based on the classical CT noise properties need to be carefully re-evaluated for systems equipped with MBIR in order to maximize the method's potential clinical benefits in dose reduction and/or CT image quality improvement. © 2014 American Association of Physicists in Medicine.
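
    The σ ∝ dose^(-β) relation reported above can be estimated from noise measurements by a linear fit in log-log space. A minimal sketch with synthetic data; the mAs values mirror the study's scan levels, but the noise values are made up:

```python
import numpy as np

# Hypothetical noise measurements following sigma = c * dose^(-beta); the
# mAs values echo the study's levels, but the sigma values are synthetic.
dose = np.array([5.0, 12.5, 25.0, 50.0, 100.0, 200.0, 300.0])   # mAs
c_true, beta_true = 40.0, 0.25
sigma = c_true * dose ** (-beta_true)

# A power law is linear in log-log space:
# log(sigma) = log(c) - beta * log(dose)
slope, intercept = np.polyfit(np.log(dose), np.log(sigma), 1)
beta_hat, c_hat = -slope, np.exp(intercept)

print(f"estimated beta = {beta_hat:.3f}, c = {c_hat:.1f}")
```

    With real measurements the fitted β distinguishes MBIR-like behavior (β ≈ 0.25) from the classical FBP behavior (β = 0.5).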

  10. Feasibility Study of Radiation Dose Reduction in Adult Female Pelvic CT Scan with Low Tube-Voltage and Adaptive Statistical Iterative Reconstruction

    PubMed Central

    Wang, Xinlian; Chen, Jianghong; Hu, Zhihai; Zhao, Liqin

    2015-01-01

    Objective To evaluate the image quality of female pelvic computed tomography (CT) scans reconstructed with the adaptive statistical iterative reconstruction (ASIR) technique combined with low tube-voltage and to explore the feasibility of its clinical application. Materials and Methods Ninety-four patients were divided into two groups. The study group used 100 kVp, and images were reconstructed with 30%, 50%, 70%, and 90% ASIR. The control group used 120 kVp, and images were reconstructed with 30% ASIR. The noise index was 15 for the study group and 11 for the control group. The CT values and noise levels of different tissues were measured. The contrast to noise ratio (CNR) was calculated. A subjective evaluation was carried out by two experienced radiologists. The CT dose index volume (CTDIvol) was recorded. Results A 44.7% reduction in CTDIvol was observed in the study group (8.18 ± 3.58 mGy) compared with that in the control group (14.78 ± 6.15 mGy). No significant differences were observed in the tissue noise levels and CNR values between the 70% ASIR group and the control group (p = 0.068-1.000). The subjective scores indicated that the visibility of small structures, diagnostic confidence, and overall image quality in the 70% ASIR group were the best, and were similar to those in the control group (1.87 vs. 1.79, 1.26 vs. 1.28, and 4.53 vs. 4.57; p = 0.122-0.585). No significant difference in diagnostic accuracy was detected between the study group and the control group (42/47 vs. 43/47, p = 1.000). Conclusion Low tube-voltage combined with automatic tube current modulation and 70% ASIR allowed the CT radiation dose to be reduced by 44.7% without loss of image quality in female pelvic scans. PMID:26357499
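
    One common way to compute a contrast-to-noise ratio of the kind used above is contrast divided by background noise. A minimal sketch with hypothetical ROI statistics, not the study's measurements:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical ROI pixel values in HU (not the study's data).
tissue = rng.normal(60.0, 8.0, size=500)       # soft-tissue ROI
background = rng.normal(40.0, 8.0, size=500)   # background ROI

# One common CNR definition: contrast divided by background noise.
cnr = abs(tissue.mean() - background.mean()) / background.std(ddof=1)
print(f"CNR = {cnr:.2f}")
```

    Other CNR conventions exist (e.g. pooled noise in the denominator); comparisons across reconstruction settings only make sense if one convention is used throughout.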

  11. A brief introduction to computer-intensive methods, with a view towards applications in spatial statistics and stereology.

    PubMed

    Mattfeldt, Torsten

    2011-04-01

    Computer-intensive methods may be defined as data analytical procedures involving a huge number of highly repetitive computations. We mention resampling methods with replacement (bootstrap methods), resampling methods without replacement (randomization tests) and simulation methods. The resampling methods are based on simple and robust principles and are largely free from distributional assumptions. Bootstrap methods may be used to compute confidence intervals for a scalar model parameter and for summary statistics from replicated planar point patterns, and for significance tests. For some simple models of planar point processes, point patterns can be simulated by elementary Monte Carlo methods. The simulation of models with more complex interaction properties usually requires more advanced computing methods. In this context, we mention simulation of Gibbs processes with Markov chain Monte Carlo methods using the Metropolis-Hastings algorithm. An alternative to simulations on the basis of a parametric model consists of stochastic reconstruction methods. The basic ideas behind the methods are briefly reviewed and illustrated by simple worked examples in order to encourage novices in the field to use computer-intensive methods. © 2010 The Authors Journal of Microscopy © 2010 Royal Microscopical Society.
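
    The percentile bootstrap for a scalar statistic mentioned above can be sketched in a few lines; the data and confidence level are illustrative:

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical observed sample; the scalar statistic of interest is its mean.
sample = rng.exponential(scale=2.0, size=200)
theta_hat = sample.mean()

# Percentile bootstrap: resample WITH replacement, recompute the statistic.
B = 5000
boot = np.array([rng.choice(sample, size=sample.size, replace=True).mean()
                 for _ in range(B)])
lo, hi = np.percentile(boot, [2.5, 97.5])   # 95% confidence interval

print(f"mean = {theta_hat:.2f}, 95% CI = ({lo:.2f}, {hi:.2f})")
```

    The same resampling loop works for any scalar summary statistic, which is why the method is largely free from distributional assumptions.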

  12. Revising the lower statistical limit of x-ray grating-based phase-contrast computed tomography.

    PubMed

    Marschner, Mathias; Birnbacher, Lorenz; Willner, Marian; Chabior, Michael; Herzen, Julia; Noël, Peter B; Pfeiffer, Franz

    2017-01-01

    Phase-contrast x-ray computed tomography (PCCT) is currently investigated as an interesting extension of conventional CT, providing high soft-tissue contrast even when examining weakly absorbing specimens. Until now, the potential for dose reduction was thought to be limited compared to attenuation CT, since meaningful phase retrieval fails for scans with very low photon counts when using the conventional phase retrieval method via phase stepping. In this work, we examine the statistical behaviour of the reverse projection method, an alternative phase retrieval approach, and compare the results to the conventional phase retrieval technique. We investigate the noise levels in the projections as well as the image quality and quantitative accuracy of the reconstructed tomographic volumes. The results of our study show that this method performs better in a low-dose scenario than the conventional phase retrieval approach, resulting in lower noise levels, enhanced image quality, and more accurate quantitative values. Overall, we demonstrate that the lower statistical limit of the phase stepping procedure as proposed by recent literature does not apply to this alternative phase retrieval technique. However, further development is necessary to overcome the experimental challenges posed by this method before mainstream or even clinical application of PCCT becomes possible.
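
    For reference, the conventional phase-stepping retrieval mentioned above fits a sinusoid I_k = a + b*cos(2*pi*k/N + phi) to the stepped intensities of each detector pixel; a single-bin discrete Fourier transform recovers the mean, modulation, and phase. The numbers below are assumed for illustration, not the paper's data:

```python
import numpy as np

# One pixel's phase-stepping curve: I_k = a + b*cos(2*pi*k/N + phi).
# The values of a, b, phi are assumptions for illustration.
N = 8                                     # number of phase steps
a_true, b_true, phi_true = 100.0, 40.0, 0.7
k = np.arange(N)
I = a_true + b_true * np.cos(2 * np.pi * k / N + phi_true)

# Single-bin DFT: the first Fourier coefficient equals (b/2) * exp(i*phi).
c1 = (I * np.exp(-2j * np.pi * k / N)).sum() / N
a_hat = I.mean()                          # mean intensity
b_hat = 2 * np.abs(c1)                    # modulation amplitude
phi_hat = np.angle(c1)                    # retrieved differential phase

print(f"a={a_hat:.1f}, b={b_hat:.1f}, phi={phi_hat:.3f}")
```

    At very low photon counts the noise in phi_hat grows nonlinearly, which is the origin of the statistical limit discussed in the abstract.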

  13. The cerebellar development in chinese children-a study by voxel-based volume measurement of reconstructed 3D MRI scan.

    PubMed

    Wu, Kuan-Hsun; Chen, Chia-Yuan; Shen, Ein-Yiao

    2011-01-01

    Cerebellar disorders have frequently been reported to be associated with structural brain volume alterations and/or morphological changes. In dealing with such clinical situations, we need a convenient and noninvasive imaging tool to provide clinicians with a means of tracing developmental changes in the cerebellum. Herein, we present a new daily practice method for cerebellum imaging that uses a workstation and a software program to process reconstructed 3D neuroimages after MRI scanning. Over a 3-y period, 3D neuroimages were reconstructed from MRI scans of 50 children aged 0.2-12.7 y. The resulting images were then statistically analyzed against a growth curve. We observed a remarkable increase in the size of the cerebellum in the first 2 y of life. Furthermore, the unmyelinated cerebellum grew mainly between birth and 2 y of age in the postnatal stage. In contrast, the postnatal development of the brain mainly depended on the growth of the myelinated cerebellum from birth through adolescence. This study presents basic data from a study of ethnic Chinese children's cerebellums using reconstructed 3D brain images. Based on the technique we introduce here, clinicians can evaluate the growth of the brain.

  14. Dose fractionation theorem in 3-D reconstruction (tomography)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Glaeser, R.M.

    It is commonly assumed that the large number of projections required for single-axis tomography precludes its application to most beam-labile specimens. However, Hegerl and Hoppe have pointed out that the total dose required to achieve statistical significance for each voxel of a computed 3-D reconstruction is the same as that required to obtain a single 2-D image of that isolated voxel at the same level of statistical significance. Thus a statistically significant 3-D image can be computed from statistically insignificant projections, as long as the total dose distributed among these projections is high enough that it would have resulted in a statistically significant projection if applied to only one image. We have tested this critical theorem by simulating the tomographic reconstruction of a realistic 3-D model created from an electron micrograph. The simulations verify the basic conclusions of the theorem under conditions of high absorption, signal-dependent noise, varying specimen contrast and missing angular range. Furthermore, the simulations demonstrate that individual projections in the series of fractionated-dose images can be aligned by cross-correlation because they contain significant information derived from the summation of features from different depths in the structure. This latter information is generally not useful for structural interpretation prior to 3-D reconstruction, owing to the complexity of most specimens investigated by single-axis tomography. These results, in combination with dose estimates for imaging single voxels and measurements of radiation damage in the electron microscope, demonstrate that it is feasible to use single-axis tomography with soft X-ray microscopy of frozen-hydrated specimens.
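
    The dose fractionation theorem can be checked numerically for a single voxel with Poisson counting statistics: splitting the same total expected counts over many projections leaves the SNR of the summed signal unchanged. A toy sketch with assumed count levels, not the paper's simulation:

```python
import numpy as np

rng = np.random.default_rng(3)

# Illustrative numbers: expected counts through one voxel, delivered either
# as a single exposure or split evenly across n_proj projections.
total_counts, n_proj, trials = 1000.0, 100, 20000

single = rng.poisson(total_counts, size=trials)
fractionated = rng.poisson(total_counts / n_proj,
                           size=(trials, n_proj)).sum(axis=1)

snr_single = single.mean() / single.std()
snr_frac = fractionated.mean() / fractionated.std()

# Hegerl & Hoppe: the summed fractionated data carry the same per-voxel
# statistical significance as the single full-dose measurement.
print(f"SNR single = {snr_single:.1f}, SNR fractionated = {snr_frac:.1f}")
```

    Both SNRs come out near sqrt(1000) ≈ 32 because a sum of independent Poisson variables is again Poisson with the summed mean.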

  15. Undersampling strategies for compressed sensing accelerated MR spectroscopic imaging

    NASA Astrophysics Data System (ADS)

    Vidya Shankar, Rohini; Hu, Houchun Harry; Bikkamane Jayadev, Nutandev; Chang, John C.; Kodibagkar, Vikram D.

    2017-03-01

    Compressed sensing (CS) can accelerate magnetic resonance spectroscopic imaging (MRSI), facilitating its widespread clinical integration. The objective of this study was to assess the effect of different undersampling strategies on CS-MRSI reconstruction quality. Phantom data were acquired on a Philips 3 T Ingenia scanner. Four types of undersampling masks, corresponding to each strategy, namely, low resolution, variable density, iterative design, and a priori, were simulated in Matlab and retrospectively applied to the test 1X MRSI data to generate undersampled datasets corresponding to 2X-5X and 7X accelerations for each type of mask. Reconstruction parameters were kept the same in each case (all masks and accelerations) to ensure that any resulting differences could be attributed to the type of mask being employed. The reconstructed datasets from each mask were statistically compared with the reference 1X and assessed using metrics such as the root mean square error and metabolite ratios. Simulation results indicate that both the a priori and variable density undersampling masks maintain high fidelity with the 1X up to five-fold acceleration. The low resolution mask based reconstructions showed statistically significant differences from the 1X, with the reconstruction failing at 3X, while the iterative design reconstructions maintained fidelity with the 1X up to 4X acceleration. In summary, a pilot study was conducted to identify an optimal sampling mask in CS-MRSI. Simulation results demonstrate that the a priori and variable density masks can provide statistically similar results to the fully sampled reference. Future work will involve implementing these two masks prospectively on a clinical scanner.
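
    A variable-density mask of the kind compared above can be generated by sampling each k-space location with a probability that decays away from the centre. A 1D sketch; the quadratic density law, grid size, and acceleration factor are assumptions, not the study's masks:

```python
import numpy as np

rng = np.random.default_rng(4)

# 1D variable-density undersampling of a phase-encode axis (a sketch of the
# general idea; the study's actual masks and MRSI geometry are not reproduced).
n, accel = 64, 4                        # phase-encode points, 4X acceleration
k = np.abs(np.arange(n) - n // 2)       # distance from k-space centre
pdf = (1.0 - k / k.max()) ** 2          # sample the centre more densely
pdf *= (n / accel) / pdf.sum()          # expected number of samples = n/accel
pdf = np.clip(pdf, 0.0, 1.0)
mask = rng.random(n) < pdf              # Bernoulli draw per location

print(f"sampled {mask.sum()} of {n} points")
```

    Dense sampling of the low frequencies preserves the bulk signal energy, while the random high-frequency samples make the aliasing incoherent, which is what CS reconstruction needs.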

  16. Teaching ear reconstruction using an alloplastic carving model.

    PubMed

    Murabit, Amera; Anzarut, Alexander; Kasrai, Laila; Fisher, David; Wilkes, Gordon

    2010-11-01

    Ear reconstruction is challenging surgery, often with poor outcomes. Our purpose was to develop a surgical training model for auricular reconstruction. Silicone costal cartilage models were incorporated in a workshop-based instructional program. Trainees were randomly divided into two groups. The workshop group (WG) participated in an interactive session, carving frameworks under supervision. The nonworkshop group (NWG) did not participate. Standard Nagata templates were used. Two further frameworks were created, first with supervision and then without. Groups were combined after the first carving because of frustration in the NWG. Assessment was completed by 3 microtia surgeons from 2 different centers, blinded to framework origin. Frameworks were rated out of 10 using Likert and visual analog scales. Results were examined using SPSS (version 14), with t test, ANOVA, and Bonferroni post hoc analyses. Cartilaginous frameworks from the WG scored better for the first carving (WG 5.5 vs NWG 4.4), the NWG improved for the second carving (WG 6.6 vs NWG 6.5), and both groups scored lower on the third, unsupervised carving (WG 5.9 vs NWG 5.6). Combined scores after 3 frameworks were not statistically significantly different between the original groups. A statistically significant improvement was demonstrated for all carvers between sessions 1 and 2 (P ≤ 0.09) and between sessions 1 and 3 (P ≤ 0.05), but not between sessions 2 and 3, suggesting the necessity of in vitro practice until high scores are achieved and maintained without supervision before embarking on in vivo carvings. Quality of carvings was not related to level of training. An appropriate and applicable surgical training model and training method can aid in attaining the skills necessary for successful auricular reconstruction.

  17. Automatic pedicles detection using convolutional neural network in a 3D spine reconstruction from biplanar radiographs

    NASA Astrophysics Data System (ADS)

    Bakhous, Christine; Aubert, Benjamin; Vazquez, Carlos; Cresson, Thierry; Parent, Stefan; De Guise, Jacques

    2018-02-01

    The 3D analysis of spine deformities (scoliosis) has high potential for clinical diagnosis and treatment. In a biplanar radiographs context, 3D analysis requires a 3D reconstruction from a pair of 2D X-rays. Whether fully-automatic, semi-automatic, or manual, this task is complex because of noise, the superimposition of structures, and partial information due to the limited number of projections. Being involved in the axial vertebral rotation (AVR), which is a fundamental clinical parameter for scoliosis diagnosis, pedicles are important landmarks for 3D spine modeling and pre-operative planning. In this paper, we focus on the extension of a fully-automatic 3D spine reconstruction method where the Vertebral Body Centers (VBCs) are automatically detected using a Convolutional Neural Network (CNN) and then regularized using a Statistical Shape Model (SSM) framework. In this global process, pedicles are inferred statistically during the SSM regularization. Our contribution is to add a CNN-based regression model for pedicle detection, allowing better pedicle localization and improving the estimation of clinical parameters (e.g. AVR, Cobb angle). Having 476 datasets including healthy patients and Adolescent Idiopathic Scoliosis (AIS) cases with different scoliosis grades (Cobb angles up to 116°), we used 380 for training, 48 for testing and 48 for validation. Adding the local CNN-based pedicle detection decreases the mean absolute error of the AVR by 10%. The 3D mean Euclidean distance error between detected pedicles and ground truth decreases by 17% and the maximum error by 19%. Moreover, a general improvement is observed in the 3D spine reconstruction, reflected in lower errors on the Cobb angle estimation.
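
    A common building block in CNN-based landmark regression of this kind is converting a predicted heatmap into a coordinate. A soft-argmax sketch in NumPy; the heatmap and peak location below are synthetic, and the paper's actual network is not reproduced:

```python
import numpy as np

def soft_argmax(heatmap, temperature=10.0):
    """Differentiable conversion of a heatmap into a (y, x) coordinate."""
    h, w = heatmap.shape
    p = np.exp(temperature * (heatmap - heatmap.max()))  # stable softmax
    p /= p.sum()
    ys, xs = np.mgrid[0:h, 0:w]
    return (p * ys).sum(), (p * xs).sum()                # expected location

# A synthetic 32x32 heatmap peaked at an assumed pedicle location (12, 20).
yy, xx = np.mgrid[0:32, 0:32]
heatmap = np.exp(-((yy - 12) ** 2 + (xx - 20) ** 2) / (2 * 2.0 ** 2))

y_hat, x_hat = soft_argmax(heatmap)
print(f"detected landmark at ({y_hat:.2f}, {x_hat:.2f})")
```

    Because the expected location is a weighted average, it yields sub-pixel coordinates and remains differentiable, so it can be trained end-to-end with a coordinate regression loss.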

  18. Does Fibrin Sealant Reduce Seroma after Immediate Breast Reconstruction Utilizing a Latissimus Dorsi Myocutaneous Flap?

    PubMed Central

    Cha, Han Gyu; Shin, Ho Seong; Kang, Moon Seok; Nam, Seung Min

    2012-01-01

    Background The most common complication of latissimus dorsi myocutaneous flap in breast reconstruction is seroma formation in the back. Many clinical studies have shown that fibrin sealant reduces seroma formation. We investigated any statistically significant differences in postoperative drainage and seroma formation when utilizing the fibrin sealant on the site of the latissimus dorsi myocutaneous flap harvested for immediate breast reconstruction after skin-sparing partial mastectomy. Methods A total of 46 patients underwent immediate breast reconstruction utilizing a latissimus dorsi myocutaneous island flap. Of those, 23 patients underwent the procedure without fibrin sealant and the other 23 were administered the fibrin sealant. All flaps were elevated with manual dissection by the same surgeon and were analyzed to evaluate the potential benefits of the fibrin sealant. The correlation analysis and Mann-Whitney U test were used for analyzing the drainage volume according to age, weight of the breast specimen, and body mass index. Results Although not statistically significant, the cumulative drainage fluid volume was higher in the control group until postoperative day 2 (530.1 mL compared to 502.3 mL), but the fibrin sealant group showed more drainage beginning on postoperative day 3. The donor site comparisons showed the fibrin sealant group had more drainage beginning on postoperative day 3 and the drain was removed 1 day earlier in the control group. Conclusions The use of fibrin sealant resulted in no reduction of seroma formation. Because the benefits of the fibrin sealant are not clear, the use of fibrin sealant must be fully discussed with patients before its use as a part of informed consent. PMID:23094246

  19. Cone-beam CT of traumatic brain injury using statistical reconstruction with a post-artifact-correction noise model

    NASA Astrophysics Data System (ADS)

    Dang, H.; Stayman, J. W.; Sisniega, A.; Xu, J.; Zbijewski, W.; Yorkston, J.; Aygun, N.; Koliatsos, V.; Siewerdsen, J. H.

    2015-03-01

    Traumatic brain injury (TBI) is a major cause of death and disability. The current front-line imaging modality for TBI detection is CT, which reliably detects intracranial hemorrhage (fresh blood contrast 30-50 HU, size down to 1 mm) in non-contrast-enhanced exams. Compared to CT, flat-panel detector (FPD) cone-beam CT (CBCT) systems offer lower cost, greater portability, and a smaller footprint suitable for point-of-care deployment. We are developing FPD-CBCT to facilitate TBI detection at the point of care, such as in emergency, ambulance, sports, and military applications. However, current FPD-CBCT systems generally face challenges in low-contrast, soft-tissue imaging. Model-based reconstruction can improve image quality in soft-tissue imaging compared to conventional filtered back-projection (FBP) by leveraging a high-fidelity forward model and sophisticated regularization. In FPD-CBCT TBI imaging, measurement noise characteristics undergo substantial change following artifact correction, resulting in non-negligible noise amplification. In this work, we extend penalized weighted least-squares (PWLS) image reconstruction to include the two dominant artifact corrections (scatter and beam hardening) in FPD-CBCT TBI imaging by correctly modeling the variance change following each correction. Experiments were performed on a CBCT test-bench using an anthropomorphic phantom emulating intra-parenchymal hemorrhage in acute TBI, and the proposed method demonstrated an improvement in blood-brain contrast-to-noise ratio (CNR = 14.2) compared to FBP (CNR = 9.6) and PWLS using conventional weights (CNR = 11.6) at fixed spatial resolution (1 mm edge-spread width at the target contrast). The results support the hypothesis that FPD-CBCT can fulfill the image quality requirements for reliable TBI detection, using high-fidelity artifact correction and statistical reconstruction with accurate post-artifact-correction noise models.
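
    The core PWLS idea, weighting each measurement by its inverse noise variance and adding a roughness penalty, can be sketched on a tiny 1D problem. The toy forward model, noise levels, and regularization strength below are assumptions, not the paper's CBCT system:

```python
import numpy as np

rng = np.random.default_rng(5)

# Tiny 1D analogue of PWLS reconstruction (illustrative only).
n_pix, n_meas = 32, 48
A = rng.random((n_meas, n_pix))                   # toy system matrix
x_true = np.zeros(n_pix)
x_true[10:20] = 1.0                               # piecewise-constant object
noise_var = 0.001 + 0.004 * rng.random(n_meas)    # per-ray noise variances
y = A @ x_true + rng.normal(0.0, np.sqrt(noise_var))

W = 1.0 / noise_var                        # statistical weights
D = np.eye(n_pix) - np.eye(n_pix, k=1)     # finite-difference penalty
beta = 0.5                                 # regularization strength

# PWLS normal equations: (A^T W A + beta * D^T D) x = A^T W y
H = A.T @ (W[:, None] * A) + beta * (D.T @ D)
x_hat = np.linalg.solve(H, A.T @ (W * y))

rmse = np.sqrt(np.mean((x_hat - x_true) ** 2))
print(f"PWLS reconstruction RMSE = {rmse:.3f}")
```

    The paper's contribution concerns choosing W correctly: after scatter and beam-hardening corrections the per-ray variances change, and using the post-correction variances in W is what preserves the noise advantage.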

  20. Effect of reconstruction methods and x-ray tube current–time product on nodule detection in an anthropomorphic thorax phantom: A crossed-modality JAFROC observer study

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Thompson, J. D., E-mail: j.d.thompson@salford.ac.uk; Chakraborty, D. P.; Szczepura, K.

    Purpose: To evaluate nodule detection in an anthropomorphic chest phantom in computed tomography (CT) images reconstructed with adaptive iterative dose reduction 3D (AIDR 3D) and filtered back projection (FBP) over a range of tube current–time product (mAs). Methods: Two phantoms were used in this study: (i) an anthropomorphic chest phantom was loaded with spherical simulated nodules of 5, 8, 10, and 12 mm in diameter and +100, −630, and −800 Hounsfield units electron density; this generated the CT images for the observer study; (ii) a whole-body dosimetry verification phantom was used to estimate effective dose and risk according to the model of the BEIR VII committee. Both phantoms were scanned over a mAs range (10, 20, 30, and 40), while all other acquisition parameters remained constant. Images were reconstructed with both AIDR 3D and FBP. For the observer study, 34 normal cases (no nodules) and 34 abnormal cases (containing 1–3 nodules, mean 1.35 ± 0.54) were chosen. Eleven observers evaluated images from all mAs and reconstruction methods under the free-response paradigm. A crossed-modality jackknife alternative free-response receiver operating characteristic (JAFROC) analysis method was developed for data analysis, averaging data over the two factors influencing nodule detection in this study: mAs and image reconstruction (AIDR 3D or FBP). A Bonferroni correction was applied and the threshold for declaring significance was set at 0.025 to maintain the overall probability of Type I error at α = 0.05. Contrast-to-noise ratio (CNR) was also measured for all nodules and evaluated by a linear least squares analysis. Results: For random-reader fixed-case crossed-modality JAFROC analysis, there was no significant difference in nodule detection between AIDR 3D and FBP when data were averaged over mAs [F(1, 10) = 0.08, p = 0.789].
However, when data were averaged over reconstruction methods, a significant difference was seen between multiple pairs of mAs settings [F(3, 30) = 15.96, p < 0.001]. Measurements of effective dose and effective risk showed the expected linear dependence on mAs. Nodule CNR was significantly higher for simulated nodules in images reconstructed with AIDR 3D (p < 0.001). Conclusions: No significant difference in nodule detection performance was demonstrated between images reconstructed with FBP and AIDR 3D. mAs was found to influence nodule detection, though further work is required for dose optimization.

  1. Tree-ring reconstructions of hydroclimatic variability in the Upper Colorado River Basin

    NASA Astrophysics Data System (ADS)

    Hidalgo-Leon, Hugo

    Three major sources of improvement in tree-ring analysis and reconstruction of hydroclimatic variables are presented for the Upper Colorado River Basin (UCRB) in the southwestern U.S.: (1) Cross-validation statistics are used for identifying optimal reconstruction models based on different alternatives of PCA-based regression. Results showed that a physically-consistent, parsimonious model with low mean square error can be obtained by using strict rules for principal component selection and cross-validation statistics. The improved methods were used to produce a ~500 year high-resolution reconstruction of the UCRB's streamflow, which was compared with the results of a previous reconstruction based on traditional procedures. (2) Tree-species type was found to be a factor in determining chronology selection for dendrohydroclimatic models. The relative sensitivity of six tree species (Pinus edulis, Pseudotsuga menziesii, Pinus ponderosa, Pinus flexilis, Pinus aristata, and Picea engelmanni) to hydroclimatic extremes was determined using contingency table scores of tree-ring growth (at different lags) against hydroclimatic observations. Pinus edulis and Pseudotsuga menziesii were found to be the species most sensitive to low water. Results showed that tree-rings are biased towards greater sensitivity to hot-dry conditions and are less responsive to cool-moist conditions. Results also showed higher response scores for streamflow than for precipitation, implying good integration and representation of persistence in the basin through normal hydrological processes. (3) Previous reconstructions in the basin used data extending only up to 1963. This is an important limitation, since hydroclimatic records from 1963 to the present show significantly different variation than prior to 1963. The changes are caused by variations in the strength of forcing mechanisms from the Pacific Ocean.
A comparative analysis of the influence of North Pacific variation and the El Niño/Southern Oscillation (ENSO) showed that the responses of the UCRB's hydroclimate to Tropical and North Pacific forcing are different for annual precipitation and total streamflow, and that these relationships have changed at decadal time scales. Furthermore, most of the few tree-ring records available up to 1985 present the same shifts as the hydroclimatic variables studied. To capture the full range of variability observed in instrumental data, it is necessary to collect new tree-ring samples.
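
    The PCA-based regression with cross-validated component selection described in (1) can be sketched as follows, using synthetic chronologies driven by two latent climate factors. All data and model choices here are illustrative, not the study's selection rules:

```python
import numpy as np

rng = np.random.default_rng(6)

# Synthetic stand-in: tree-ring 'chronologies' driven by two latent climate
# factors, and a streamflow-like target series.
n_years, n_chron = 80, 12
F = rng.normal(size=(n_years, 2))                      # latent factors
L = rng.normal(size=(2, n_chron))                      # chronology loadings
X = F @ L + 0.3 * rng.normal(size=(n_years, n_chron))  # chronologies
y = F[:, 0] - 0.5 * F[:, 1] + 0.2 * rng.normal(size=n_years)

def press(X, y, n_pc):
    """Leave-one-out prediction error sum for PCA-based regression."""
    err = 0.0
    for i in range(len(y)):
        tr = np.arange(len(y)) != i
        Xt, yt = X[tr], y[tr]
        mu = Xt.mean(axis=0)
        _, _, Vt = np.linalg.svd(Xt - mu, full_matrices=False)
        P = Vt[:n_pc].T                        # leading principal directions
        coef, *_ = np.linalg.lstsq((Xt - mu) @ P, yt - yt.mean(), rcond=None)
        pred = (X[i] - mu) @ P @ coef + yt.mean()
        err += (pred - y[i]) ** 2
    return err

scores = {k: press(X, y, k) for k in range(1, 6)}
best = min(scores, key=scores.get)
print(f"components retained by cross-validation: {best}")
```

    Retaining too few components misses real climate signal, while retaining too many fits noise; the cross-validated prediction error penalizes both, which is the rationale for the strict selection rules in the abstract.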

  2. Statistical analysis of experimental multifragmentation events in 64Zn+112Sn at 40 MeV/nucleon

    NASA Astrophysics Data System (ADS)

    Lin, W.; Zheng, H.; Ren, P.; Liu, X.; Huang, M.; Wada, R.; Chen, Z.; Wang, J.; Xiao, G. Q.; Qu, G.

    2018-04-01

    A statistical multifragmentation model (SMM) is applied to the experimentally observed multifragmentation events in an intermediate-energy heavy-ion reaction. Using the temperature and symmetry energy extracted with the isobaric yield ratio (IYR) method based on the modified Fisher model (MFM), SMM is applied to the reaction 64Zn+112Sn at 40 MeV/nucleon. The experimental isotope distribution and mass distribution of the primary reconstructed fragments are compared with the SMM results without an afterburner, and they are well reproduced. The extracted temperature T and symmetry energy coefficient asym from the SMM simulated events, using the IYR method, are also consistent with those from the experiment. These results strongly suggest that in the multifragmentation process there is a freezeout volume, in which thermal and chemical equilibrium is established before or at the time of intermediate-mass fragment emission.

  3. European temperature records of the past five centuries based on documentary information compared to climate simulations

    NASA Astrophysics Data System (ADS)

    Zorita, E.

    2009-09-01

    Two European temperature records for the past half-millennium, January-to-April air temperature for Stockholm (Sweden) and seasonal temperature for a Central European region, both derived from the analysis of documentary sources combined with long instrumental records, are compared with the output of forced (solar, volcanic, greenhouse gases) climate simulations with the model ECHO-G. The analysis is complemented with the long (early)-instrumental record of Central England Temperature (CET). Both approaches to study past climates (simulations and reconstructions) are burdened with uncertainties. The main objective of this comparative analysis is to identify robust features and weaknesses that may help to improve models and reconstruction methods. The results indicate a general agreement between simulations and the reconstructed Stockholm and CET records regarding the long-term temperature trend over the recent centuries, suggesting a reasonable choice of the amplitude of the solar forcing in the simulations and sensitivity of the model to the external forcing. However, the Stockholm reconstruction and the CET record also show a long and clear multi-decadal warm episode peaking around 1730, which is absent in the simulations. The uncertainties associated with the reconstruction method or with the simulated internal climate variability cannot easily explain this difference. Regarding the interannual variability, the Stockholm series displays in some periods higher amplitudes than the simulations but these differences are within the statistical uncertainty and further decrease if output from a regional model driven by the global model is used. The long-term trends in the simulations and reconstructions of the Central European temperature agree less well. The reconstructed temperature displays, for all seasons, a smaller difference between the present climate and past centuries than the simulations. 
Possible reasons for these differences may lie in a limitation of the traditional technique for converting documentary evidence into temperature values: it struggles to capture long-term climate changes, because the documents often reflect temperatures relative to the contemporary authors' own perception of what constituted 'normal' conditions. By contrast, the simulated and reconstructed inter-annual variability are in rather good agreement.

  4. WE-AB-207A-08: BEST IN PHYSICS (IMAGING): Advanced Scatter Correction and Iterative Reconstruction for Improved Cone-Beam CT Imaging On the TrueBeam Radiotherapy Machine

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, A; Paysan, P; Brehm, M

    2016-06-15

    Purpose: To improve CBCT image quality for image-guided radiotherapy by applying advanced reconstruction algorithms to overcome scatter, noise, and artifact limitations. Methods: CBCT is used extensively for patient setup in radiotherapy. However, image quality generally falls short of diagnostic CT, limiting soft-tissue based positioning and potential applications such as adaptive radiotherapy. The conventional TrueBeam CBCT reconstructor uses a basic scatter correction and FDK reconstruction, resulting in residual scatter artifacts, suboptimal image noise characteristics, and other artifacts like cone-beam artifacts. We have developed an advanced scatter correction that uses a finite-element solver (AcurosCTS) to model the behavior of photons as they pass (and scatter) through the object. Furthermore, iterative reconstruction is applied to the scatter-corrected projections, enforcing data consistency with statistical weighting and applying an edge-preserving image regularizer to reduce image noise. The combined algorithms have been implemented on a GPU. CBCT projections from clinically operating TrueBeam systems have been used to compare image quality between the conventional and improved reconstruction methods. Planning CT images of the same patients have also been compared. Results: The advanced scatter correction removes shading and inhomogeneity artifacts, reducing the scatter artifact from 99.5 HU to 13.7 HU in a typical pelvis case. Iterative reconstruction provides further benefit by reducing image noise and eliminating streak artifacts, thereby improving soft-tissue visualization. In a clinical head and pelvis CBCT, the noise was reduced by 43% and 48%, respectively, with no change in spatial resolution (assessed visually). Additional benefits include reduction of cone-beam artifacts and reduction of metal artifacts due to intrinsic downweighting of corrupted rays.
Conclusion: The combination of an advanced scatter correction with iterative reconstruction substantially improves CBCT image quality. It is anticipated that clinically acceptable reconstruction times will result from a multi-GPU implementation (the algorithms are under active development and not yet commercially available). All authors are employees of and (may) own stock of Varian Medical Systems.

  5. TU-A-12A-07: CT-Based Biomarkers to Characterize Lung Lesion: Effects of CT Dose, Slice Thickness and Reconstruction Algorithm Based Upon a Phantom Study

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhao, B; Tan, Y; Tsai, W

    2014-06-15

    Purpose: Radiogenomics promises the ability to study cancer tumor genotype from the phenotype obtained through radiographic imaging. However, little attention has been paid to the sensitivity of image features, the image-based biomarkers, to imaging acquisition techniques. This study explores the impact of CT dose, slice thickness and reconstruction algorithm on measuring image features using a thorax phantom. Methods: Twenty-four phantom lesions of known volume (1 and 2mm), shape (spherical, elliptical, lobular and spicular) and density (-630, -10 and +100 HU) were scanned on a GE VCT at four doses (25, 50, 100, and 200 mAs). For each scan, six image series were reconstructed at three slice thicknesses of 5, 2.5 and 1.25mm with contiguous intervals, using the lung and standard reconstruction algorithms. The lesions were segmented with an in-house 3D algorithm. Fifty (50) image features representing lesion size, shape, edge, and density distribution/texture were computed. A regression method was employed to analyze the effect of CT dose, slice thickness and reconstruction algorithm on these features, adjusting for three confounding factors (size, density and shape of phantom lesions). Results: The coefficients of CT dose, slice thickness and reconstruction algorithm are presented in Table 1 in the supplementary material. No significant difference was found between the image features calculated on low-dose CT scans (25mAs and 50mAs). About 50% of texture features were found statistically different between low doses and high doses (100 and 200mAs). Significant differences were found for almost all features when calculated on 1.25mm, 2.5mm, and 5mm slice thickness images. Reconstruction algorithms significantly affected all density-based image features, but not morphological features.
Conclusions: There is a great need to standardize CT imaging protocols for radiogenomics studies because CT dose, slice thickness and reconstruction algorithm impact quantitative image features to various degrees, as our study has shown.

  6. Resistance to compression of weakened roots subjected to different root reconstruction protocols

    PubMed Central

    ZOGHEIB, Lucas Villaça; SAAVEDRA, Guilherme de Siqueira Ferreira Anzaloni; CARDOSO, Paula Elaine; VALERA, Márcia Carneiro; de ARAÚJO, Maria Amélia Máximo

    2011-01-01

    Objective This study evaluated, in vitro, the fracture resistance of human non-vital teeth restored with different reconstruction protocols. Material and methods Forty human anterior roots of similar shape and dimensions were assigned to four groups (n=10), according to the root reconstruction protocol: Group I (control): non-weakened roots with glass fiber post; Group II: roots with composite resin by incremental technique and glass fiber post; Group III: roots with accessory glass fiber posts and glass fiber post; and Group IV: roots with anatomic glass fiber post technique. Following post cementation and core reconstruction, the roots were embedded in chemically activated acrylic resin and submitted to fracture resistance testing, with a compressive load at an angle of 45° in relation to the long axis of the root at a speed of 0.5 mm/min until fracture. All data were statistically analyzed with the bilateral Dunnett's test (α=0.05). Results Group I presented higher mean fracture resistance than the three experimental groups, which presented similar fracture resistance to one another. None of the techniques of root reconstruction with intraradicular posts improved root strength; the incremental technique was suggested as the most recommendable, since the type of fracture that occurred allowed the remaining dental structure to be repaired. Conclusion The results of this in vitro study suggest that the healthy remaining radicular dentin is more important for increasing fracture resistance than the root reconstruction protocol. PMID:22231002

  7. 3D widefield light microscope image reconstruction without dyes

    NASA Astrophysics Data System (ADS)

    Larkin, S.; Larson, J.; Holmes, C.; Vaicik, M.; Turturro, M.; Jurkevich, A.; Sinha, S.; Ezashi, T.; Papavasiliou, G.; Brey, E.; Holmes, T.

    2015-03-01

    3D image reconstruction using light microscope modalities without exogenous contrast agents is proposed and investigated as an approach to produce 3D images of biological samples for live imaging applications. Multimodality and multispectral imaging, used in concert with this 3D optical sectioning approach, are also proposed as a way to further produce contrast that could be specific to components in the sample. The methods avoid contrast agents, such as fluorescent or absorbing dyes, which can be toxic to cells or alter cell behavior. Current modes of producing 3D image sets from a light microscope, such as 3D deconvolution algorithms and confocal microscopy, generally require contrast agents. Zernike phase contrast (ZPC), transmitted light brightfield (TLB), darkfield microscopy and others can produce contrast without dyes. Some of these modalities have not previously benefitted from 3D image reconstruction algorithms, however. The 3D image reconstruction algorithm is based on an underlying physical model of scattering potential, expressed as the sample's 3D absorption and phase quantities. The algorithm optimizes an objective function - the I-divergence - while solving for the 3D absorption and phase quantities. Unlike typical deconvolution algorithms, each microscope modality, such as ZPC or TLB, produces two output image sets instead of one. Contrast in the displayed image and 3D renderings is further enabled by treating the multispectral/multimodal data as a feature set in a mathematical formulation that uses the principal component method of statistics.
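
    The I-divergence objective named here is Csiszár's generalization of the Kullback-Leibler divergence to non-normalized, non-negative data. A minimal sketch (function name and values are illustrative, not taken from the paper's implementation):

```python
import numpy as np

def i_divergence(data, model, eps=1e-12):
    """Csiszar I-divergence D(data || model) for non-negative arrays.

    D = sum( d*log(d/m) - d + m ); reduces to the KL divergence when both
    arrays sum to one. It is non-negative and zero iff data == model.
    """
    d = np.asarray(data, dtype=float) + eps
    m = np.asarray(model, dtype=float) + eps
    return float(np.sum(d * np.log(d / m) - d + m))

d = np.array([1.0, 2.0, 3.0])
print(i_divergence(d, d))           # 0.0: an array matches itself exactly
print(i_divergence(d, 1.5 * d) > 0) # True: positive for a mismatched model
```

    In reconstruction, `model` would be the image predicted by the forward optics model and `data` the measured intensities; the algorithm iteratively adjusts the absorption and phase estimates to drive this divergence down.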

  8. On a full Bayesian inference for force reconstruction problems

    NASA Astrophysics Data System (ADS)

    Aucejo, M.; De Smet, O.

    2018-05-01

    In a previous paper, the authors introduced a flexible methodology for reconstructing mechanical sources in the frequency domain from prior local information on both their nature and location over a linear and time-invariant structure. The proposed approach was derived from Bayesian statistics, because of its ability to account mathematically for the experimenter's prior knowledge. However, since only the Maximum a Posteriori estimate was computed, the posterior uncertainty about the regularized solution given the measured vibration field, the mechanical model and the regularization parameter was not assessed. To answer this legitimate question, this paper fully exploits the Bayesian framework to provide, from a Markov Chain Monte Carlo algorithm, credible intervals and other statistical measures (mean, median, mode) for all the parameters of the force reconstruction problem.
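
    The kind of posterior summary described here can be sketched on a toy one-parameter force reconstruction: a random-walk Metropolis sampler under a Gaussian likelihood and prior. All names and numbers below are illustrative assumptions, not the authors' setup:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear model: measured vibration y = H*f + noise, scalar force f.
H = np.array([0.8, 1.2, 0.5])          # frequency response (assumed known)
f_true, sigma = 2.0, 0.1
y = H * f_true + rng.normal(0.0, sigma, H.size)

def log_post(f):
    # Gaussian likelihood plus a weakly informative Gaussian prior N(0, 10^2)
    return -0.5 * np.sum((y - H * f) ** 2) / sigma**2 - 0.5 * f**2 / 100.0

# Random-walk Metropolis: accept a proposal with prob. min(1, post ratio).
samples, f = [], 0.0
for _ in range(20000):
    prop = f + rng.normal(0.0, 0.2)
    if np.log(rng.uniform()) < log_post(prop) - log_post(f):
        f = prop
    samples.append(f)
post = np.array(samples[5000:])        # discard burn-in

lo, hi = np.percentile(post, [2.5, 97.5])
print(f"posterior mean {post.mean():.2f}, 95% credible interval [{lo:.2f}, {hi:.2f}]")
```

    The same machinery, run over all force amplitudes and hyperparameters jointly, yields the credible intervals, means, medians and modes the abstract refers to.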

  9. Three dimensional measurement of minimum joint space width in the knee from stereo radiographs using statistical shape models.

    PubMed

    van IJsseldijk, E A; Valstar, E R; Stoel, B C; Nelissen, R G H H; Baka, N; Van't Klooster, R; Kaptein, B L

    2016-08-01

    An important measure for the diagnosis and monitoring of knee osteoarthritis is the minimum joint space width (mJSW). This requires accurate alignment of the x-ray beam with the tibial plateau, which may not be accomplished in practice. We investigate the feasibility of a new mJSW measurement method from stereo radiographs using 3D statistical shape models (SSM) and evaluate its sensitivity to changes in the mJSW and its robustness to variations in patient positioning and bone geometry. A validation study was performed using five cadaver specimens. The actual mJSW was varied and images were acquired with variation in the cadaver positioning. For comparison purposes, the mJSW was also assessed from plain radiographs. To study the influence of SSM model accuracy, the 3D mJSW measurement was repeated with models from the actual bones, obtained from CT scans. The SSM-based measurement method was more robust (i.e. gave consistent output under varying measurement circumstances) than the conventional 2D method, showing that the 3D reconstruction indeed reduces the influence of patient positioning. However, the SSM-based method showed comparable sensitivity to changes in the mJSW with respect to the conventional method. The CT-based measurement was more accurate than the SSM-based measurement (smallest detectable differences 0.55 mm versus 0.82 mm, respectively). The proposed measurement method is not a substitute for the conventional 2D measurement due to limitations in the SSM model accuracy. However, further improvement of the model accuracy and optimisation technique can be obtained. Combined with the promising options for applications using quantitative information on bone morphology, SSM-based 3D reconstructions of natural knees are attractive for further development. Cite this article: E. A. van IJsseldijk, E. R. Valstar, B. C. Stoel, R. G. H. H. Nelissen, N. Baka, R. van't Klooster, B. L. Kaptein.
Three dimensional measurement of minimum joint space width in the knee from stereo radiographs using statistical shape models. Bone Joint Res 2016;320-327. DOI: 10.1302/2046-3758.58.2000626. © 2016 van IJsseldijk et al.
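
    A statistical shape model of the sort used here is typically a mean shape plus principal components of a training set of shapes. A minimal 2D sketch on synthetic landmark data (sizes, names, and numbers are illustrative, not the bone models of the paper):

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic training set: 20 "shapes", each 5 landmarks in 2D, flattened.
mean_shape = np.array([0., 0., 1., 0., 1., 1., 0., 1., 0.5, 1.5])
modes_true = rng.normal(size=(2, 10))          # two true variation modes
weights = rng.normal(size=(20, 2))
shapes = mean_shape + weights @ modes_true + 0.01 * rng.normal(size=(20, 10))

# Build the statistical shape model: mean + principal components (PCA).
mu = shapes.mean(axis=0)
U, S, Vt = np.linalg.svd(shapes - mu, full_matrices=False)
var = S**2 / (shapes.shape[0] - 1)
keep = np.searchsorted(np.cumsum(var) / var.sum(), 0.95) + 1  # 95% variance

# A new shape is modeled as mu + sum_k b_k * Vt[k]; fitting the model to
# stereo radiographs then optimizes the few mode weights b (plus pose)
# instead of every landmark coordinate independently.
print("modes kept:", keep)   # expect ~2, the dimensionality of the true model
```

    The model accuracy limitation the authors mention corresponds to how much real anatomical variation the retained modes can span.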

  10. A CLASS OF RECONSTRUCTED DISCONTINUOUS GALERKIN METHODS IN COMPUTATIONAL FLUID DYNAMICS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hong Luo; Yidong Xia; Robert Nourgaliev

    2011-05-01

    A class of reconstructed discontinuous Galerkin (DG) methods is presented to solve compressible flow problems on arbitrary grids. The idea is to combine the efficiency of the reconstruction methods in finite volume methods and the accuracy of the DG methods to obtain a better numerical algorithm in computational fluid dynamics. The beauty of the resulting reconstructed discontinuous Galerkin (RDG) methods is that they provide a unified formulation for both finite volume and DG methods, and contain both classical finite volume and standard DG methods as two special cases of the RDG methods, and thus allow for a direct efficiency comparison. Both Green-Gauss and least-squares reconstruction methods and a least-squares recovery method are presented to obtain a quadratic polynomial representation of the underlying linear discontinuous Galerkin solution on each cell via a so-called in-cell reconstruction process. The devised in-cell reconstruction is aimed to augment the accuracy of the discontinuous Galerkin method by increasing the order of the underlying polynomial solution. These three reconstructed discontinuous Galerkin methods are used to compute a variety of compressible flow problems on arbitrary meshes to assess their accuracy. The numerical experiments demonstrate that all three reconstructed discontinuous Galerkin methods can significantly improve the accuracy of the underlying second-order DG method, with the least-squares reconstructed DG method providing the best performance in terms of accuracy, efficiency, and robustness.
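
    The per-cell least-squares reconstruction idea can be sketched in one dimension: fit the slope that best matches neighboring cell values in the least-squares sense. This is a toy mesh and stencil for illustration, not the RDG implementation:

```python
import numpy as np

# Cell centers and values of u(x) = x**2 on a 1D mesh (point values at the
# centers stand in for cell averages in this sketch).
xc = np.array([0.1, 0.3, 0.5, 0.7, 0.9])
u = xc**2

def ls_gradient(i):
    """Least-squares slope at cell i from its neighbors.

    Minimizes sum_j (u_j - u_i - g*(x_j - x_i))^2 over the stencil, the
    same per-cell idea used in finite volume and RDG reconstruction.
    """
    nbrs = [j for j in (i - 1, i + 1) if 0 <= j < len(xc)]
    dx = xc[nbrs] - xc[i]
    du = u[nbrs] - u[i]
    return float(np.sum(dx * du) / np.sum(dx * dx))

print(ls_gradient(2))  # derivative of u = x**2 at x = 0.5 is exactly 1.0
```

    In the RDG setting the same normal-equation solve runs per cell over a multi-dimensional stencil and raises a linear DG solution to a quadratic one.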

  11. Empirical investigation into depth-resolution of Magnetotelluric data

    NASA Astrophysics Data System (ADS)

    Piana Agostinetti, N.; Ogaya, X.

    2017-12-01

    We investigate the depth-resolution of magnetotelluric (MT) data by comparing reconstructed 1D resistivity profiles with measured resistivity and lithostratigraphy from borehole data. Inversion of MT data has been widely used to reconstruct the 1D fine-layered resistivity structure beneath an isolated MT station. Uncorrelated noise is generally assumed to be associated with MT data. However, wrong assumptions about error statistics have been shown to strongly bias the results obtained in geophysical inversions. In particular, the number of resolved layers at depth strongly depends on the error statistics. In this study, we applied a trans-dimensional McMC algorithm for reconstructing the 1D resistivity profile near the location of a 1500 m-deep borehole, using MT data. We solve the MT inverse problem imposing different models for the error statistics of the MT data. Following a Hierarchical Bayes approach, we also invert for the hyper-parameters associated with each error-statistics model. Preliminary results indicate that assuming uncorrelated noise leads to a number of resolved layers larger than expected from the retrieved lithostratigraphy. Moreover, inversion of synthetic resistivity data generated from the "true" resistivity stratification measured along the borehole shows that a consistent number of resistivity layers can be obtained using a Gaussian model for the error statistics with substantial correlation length.

  12. Bayesian hierarchical models for regional climate reconstructions of the last glacial maximum

    NASA Astrophysics Data System (ADS)

    Weitzel, Nils; Hense, Andreas; Ohlwein, Christian

    2017-04-01

    Spatio-temporal reconstructions of past climate are important for the understanding of the long term behavior of the climate system and the sensitivity to forcing changes. Unfortunately, they are subject to large uncertainties, have to deal with a complex proxy-climate structure, and a physically reasonable interpolation between the sparse proxy observations is difficult. Bayesian Hierarchical Models (BHMs) are a class of statistical models that is well suited for spatio-temporal reconstructions of past climate because they permit the inclusion of multiple sources of information (e.g. records from different proxy types, uncertain age information, output from climate simulations) and quantify uncertainties in a statistically rigorous way. BHMs in paleoclimatology typically consist of three stages which are modeled individually and are combined using Bayesian inference techniques. The data stage models the proxy-climate relation (often named transfer function), the process stage models the spatio-temporal distribution of the climate variables of interest, and the prior stage consists of prior distributions of the model parameters. For our BHMs, we translate well-known proxy-climate transfer functions for pollen to a Bayesian framework. In addition, we can include Gaussian distributed local climate information from preprocessed proxy records. The process stage combines physically reasonable spatial structures from prior distributions with proxy records which leads to a multivariate posterior probability distribution for the reconstructed climate variables. The prior distributions that constrain the possible spatial structure of the climate variables are calculated from climate simulation output. We present results from pseudoproxy tests as well as new regional reconstructions of temperatures for the last glacial maximum (LGM, ~21,000 years BP). 
These reconstructions combine proxy data syntheses with information from climate simulations for the LGM that were performed in the PMIP3 project. The proxy data syntheses consist either of raw pollen data or of normally distributed climate data from preprocessed proxy records. Future extensions of our method contain the inclusion of other proxy types (transfer functions), the implementation of other spatial interpolation techniques, the use of age uncertainties, and the extension to spatio-temporal reconstructions of the last deglaciation. Our work is part of the PalMod project funded by the German Federal Ministry of Education and Research (BMBF).
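
    In the simplest conjugate case, combining a simulation-based prior with a proxy-derived local estimate reduces to a precision-weighted Gaussian update. A sketch with made-up numbers (the actual three-stage BHM posterior is far richer and spatially coupled):

```python
import numpy as np

# Prior stage: LGM temperature anomaly at one site, from climate simulations.
mu_prior, sd_prior = -4.0, 2.0       # degrees C (illustrative numbers)

# Data stage: a pollen transfer function yields a noisy local estimate.
proxy_obs, sd_obs = -6.0, 1.5

# Posterior: conjugate Gaussian update (precision-weighted mean).
w_prior, w_obs = 1 / sd_prior**2, 1 / sd_obs**2
mu_post = (w_prior * mu_prior + w_obs * proxy_obs) / (w_prior + w_obs)
sd_post = np.sqrt(1 / (w_prior + w_obs))

print(f"posterior: {mu_post:.2f} +/- {sd_post:.2f} C")  # -5.28 +/- 1.20 C
```

    The posterior mean sits between prior and observation, weighted by their precisions, and the posterior spread is smaller than either input, which is the basic uncertainty-reduction mechanism BHMs exploit at every grid point.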

  13. A full-spectral Bayesian reconstruction approach based on the material decomposition model applied in dual-energy computed tomography.

    PubMed

    Cai, C; Rodet, T; Legoupil, S; Mohammad-Djafari, A

    2013-11-01

    Dual-energy computed tomography (DECT) makes it possible to obtain two fractions of basis materials without segmentation. One is the soft-tissue-equivalent water fraction and the other is the hard-matter-equivalent bone fraction. Practical DECT measurements are usually obtained with polychromatic x-ray beams. Existing reconstruction approaches based on linear forward models that do not account for the beam polychromaticity fail to estimate the correct decomposition fractions and result in beam-hardening artifacts (BHA). The existing BHA correction approaches either need to refer to calibration measurements or suffer from the noise amplification caused by the negative-log preprocessing and the ill-conditioned water and bone separation problem. To overcome these problems, statistical DECT reconstruction approaches based on nonlinear forward models that account for the beam polychromaticity show great potential for giving accurate fraction images. This work proposes a full-spectral Bayesian reconstruction approach which allows the reconstruction of high-quality fraction images from ordinary polychromatic measurements. This approach is based on a Gaussian noise model with unknown variance assigned directly to the projections without taking the negative log. Referring to Bayesian inference, the decomposition fractions and observation variance are estimated by using the joint maximum a posteriori (MAP) estimation method. With an adaptive prior model assigned to the variance, the joint MAP estimation problem simplifies to a minimization problem with a nonquadratic cost function. To solve it, the use of a monotone conjugate gradient algorithm with suboptimal descent steps is proposed. The performance of the proposed approach is analyzed with both simulated and experimental data. The results show that the proposed Bayesian approach is robust to noise and materials.
Accurate spectrum information about the source-detector system is also required. When dealing with experimental data, the spectrum can be predicted by a Monte Carlo simulator. For the materials between water and bone, less than 5% separation error is observed on the estimated decomposition fractions. The proposed approach is a statistical reconstruction approach based on a nonlinear forward model accounting for the full beam polychromaticity and applied directly to the projections without taking the negative log. Compared to the approaches based on linear forward models and the BHA correction approaches, it has advantages in noise robustness and reconstruction accuracy.
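
    The nonlinear polychromatic forward model at the heart of this approach can be sketched for a single ray. The spectrum and attenuation values below are made up for illustration:

```python
import numpy as np

# Illustrative normalized spectrum (3 energy bins) and linear attenuation
# coefficients of the two basis materials at those energies (made-up values).
S = np.array([0.3, 0.5, 0.2])            # source-detector spectrum, sums to 1
mu_water = np.array([0.25, 0.20, 0.18])  # 1/cm
mu_bone = np.array([0.60, 0.45, 0.38])   # 1/cm

def measurement(L_water, L_bone):
    """Nonlinear polychromatic forward model for one ray.

    y = sum_k S_k * exp(-(mu_water_k * L_w + mu_bone_k * L_b)). Taking
    -log(y) does NOT give a linear function of (L_w, L_b): that departure
    from linearity is exactly the beam-hardening effect the full-spectral
    model accounts for.
    """
    return float(np.sum(S * np.exp(-(mu_water * L_water + mu_bone * L_bone))))

# The effective attenuation per unit length falls as the path lengthens
# (the beam "hardens"): compare -log(y)/L for two scalings of one path.
print(-np.log(measurement(5.0, 1.0)) / 5, -np.log(measurement(10.0, 2.0)) / 10)
```

    A Bayesian reconstruction then fits the material path lengths (and the noise variance) directly to these raw measurements rather than to their negative logs.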

  14. A single pre-operative antibiotic dose is as effective as continued antibiotic prophylaxis in implant-based breast reconstruction: A matched cohort study.

    PubMed

    Townley, William A; Baluch, Narges; Bagher, Shaghayegh; Maass, Saskia W M C; O'Neill, Anne; Zhong, Toni; Hofer, Stefan O P

    2015-05-01

    Infections following implant-based breast reconstruction can lead to devastating consequences. There is currently no consensus on the need for post-operative antibiotics in preventing immediate infection. This study compared two different methods of infection prevention in this group of patients. A retrospective matched cohort study was performed on consecutive women undergoing implant-based breast reconstruction at University Health Network, Toronto (November 2008-December 2012). All patients received a single pre-operative intravenous antibiotic dose. Group A received minimal interventions and Group B underwent maximal prophylactic measures. Patient (age, smoking, diabetes, co-morbidities), oncologic and procedural variables (timing and laterality) were collected. Univariate and multivariate logistic regression were performed to compare outcomes between the two groups. Two hundred and eight patients underwent 647 implant procedures. After matching the two treatment groups by BMI, 94 patients in each treatment group, yielding a total of 605 implant procedures, were selected for analysis. The two groups were comparable in terms of patient and disease variables. Post-operative wound infection was similar in Group A (n = 11, 12%) compared with Group B (n = 9, 10%; p = 0.8). Univariate analysis revealed only pre-operative radiotherapy to be associated with the development of infection (p = 0.004). Controlling for the effect of radiotherapy, multivariate analysis demonstrated that there was no statistically significant difference between the two methods of infection prevention. Our findings suggest that a single pre-operative dose of intravenous antibiotics is as effective as continued antibiotic prophylaxis in preventing immediate infection in patients undergoing implant-based breast reconstruction. Copyright © 2015 British Association of Plastic, Reconstructive and Aesthetic Surgeons. Published by Elsevier Ltd. All rights reserved.

  15. Optimization of breast reconstruction results using TMG flap in 30 cases: Evaluation of several refinements addressing flap design, shaping techniques, and reduction of donor site morbidity.

    PubMed

    Nickl, Stefanie; Nedomansky, Jakob; Radtke, Christine; Haslik, Werner; Schroegendorfer, Klaus F

    2018-01-31

    The transverse myocutaneous gracilis (TMG) flap is a widely used alternative to abdominal flaps in autologous breast reconstruction. However, secondary procedures for aesthetic refinement are frequently necessary. Herein, we present our experience with an optimized approach in TMG breast reconstruction to enhance aesthetic outcome and to reduce the need for secondary refinements. We retrospectively analyzed 37 immediate or delayed reconstructions with TMG flaps in 34 women, performed between 2009 and 2015. Four patients (5 flaps) constituted the conventional group (non-optimized approach). Thirty patients (32 flaps; modified group) underwent an optimized procedure consisting of modified flap harvesting and shaping techniques and methods utilized to reduce denting after rib resection and to diminish donor site morbidity. Significantly fewer secondary procedures (0.6 ± 0.9 versus 4.8 ± 2.2; P < .001) and fewer trips to the OR (0.4 ± 0.7 versus 2.3 ± 1.0 times; P = .001) for aesthetic refinement were needed in the modified group as compared to the conventional group. In the modified group, 4 patients (13.3%) required refinement of the reconstructed breast, 7 patients (23.3%) underwent mastopexy/mammoplasty or lipofilling of the contralateral breast, and 4 patients (13.3%) required refinement of the contralateral thigh. Total flap loss did not occur in any patient. Revision surgery was needed once. Compared to the conventional group, enhanced aesthetic results with consecutive reduction of secondary refinements could be achieved when using our modified flap harvesting and shaping techniques, as well as our methods for reducing contour deformities after rib resection and for overcoming donor site morbidities. © 2017 Wiley Periodicals, Inc.

  16. ReTrOS: a MATLAB toolbox for reconstructing transcriptional activity from gene and protein expression data.

    PubMed

    Minas, Giorgos; Momiji, Hiroshi; Jenkins, Dafyd J; Costa, Maria J; Rand, David A; Finkenstädt, Bärbel

    2017-06-26

    Given the development of high-throughput experimental techniques, an increasing number of whole genome transcription profiling time series data sets, with good temporal resolution, are becoming available to researchers. The ReTrOS toolbox (Reconstructing Transcription Open Software) provides MATLAB-based implementations of two related methods, namely ReTrOS-Smooth and ReTrOS-Switch, for reconstructing the temporal transcriptional activity profile of a gene from given mRNA expression time series or protein reporter time series. The methods are based on fitting a differential equation model incorporating the processes of transcription, translation and degradation. The toolbox provides a framework for model fitting along with statistical analyses of the model with a graphical interface and model visualisation. We highlight several applications of the toolbox, including the reconstruction of the temporal cascade of transcriptional activity inferred from mRNA expression data and protein reporter data in the core circadian clock in Arabidopsis thaliana, and how such reconstructed transcription profiles can be used to study the effects of different cell lines and conditions. The ReTrOS toolbox allows users to analyse gene and/or protein expression time series and, given appropriately formulated prior information on a minimal set of kinetic parameters, in particular degradation rates, to infer the timings of changes in transcriptional activity. Data from any organism and obtained from a range of technologies can be used as input due to the flexible and generic nature of the model and implementation. The output from this software provides a useful analysis of time series data and can be incorporated into further modelling approaches or in hypothesis generation.
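
    The transcription-translation-degradation model underlying this kind of reconstruction can be sketched with a forward simulation. Parameters and the switch-like activity profile below are illustrative, not fitted values:

```python
import numpy as np

# Model of the ReTrOS type (illustrative parameters):
#   dM/dt = tau(t) - delta_m * M      (transcription minus mRNA degradation)
#   dP/dt = alpha * M - delta_p * P   (translation minus protein degradation)
alpha, delta_m, delta_p = 1.0, 0.4, 0.1

def tau(t):
    # A transcriptional activity profile that switches on between t=2 and t=6.
    return 1.0 if 2.0 <= t < 6.0 else 0.0

dt, T = 0.01, 12.0
ts = np.arange(0.0, T, dt)
M = np.zeros_like(ts)
P = np.zeros_like(ts)
for k in range(1, ts.size):          # forward-Euler integration
    M[k] = M[k-1] + dt * (tau(ts[k-1]) - delta_m * M[k-1])
    P[k] = P[k-1] + dt * (alpha * M[k-1] - delta_p * P[k-1])

# The protein reporter peaks after the mRNA, which peaks at the
# transcriptional switch-off: this is the lag that must be inverted
# to recover tau(t) from downstream measurements.
print("mRNA peak at t =", ts[M.argmax()], "; protein peak at t =", ts[P.argmax()])
```

    Reconstruction runs this logic in reverse: given noisy M or P time series and prior knowledge of the degradation rates, it infers when tau(t) switched.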

  17. A retrospective study: Multivariate logistic regression analysis of the outcomes after pressure sores reconstruction with fasciocutaneous, myocutaneous, and perforator flaps.

    PubMed

    Chiu, Yu-Jen; Liao, Wen-Chieh; Wang, Tien-Hsiang; Shih, Yu-Chung; Ma, Hsu; Lin, Chih-Hsun; Wu, Szu-Hsien; Perng, Cherng-Kang

    2017-08-01

    Despite significant advances in medical care and surgical techniques, pressure sore reconstruction is still prone to elevated rates of complication and recurrence. We conducted a retrospective study to investigate not only complication and recurrence rates following pressure sore reconstruction but also preoperative risk stratification. This study included 181 ulcers that underwent flap operations between January 2002 and December 2013. We fitted a multivariable logistic regression model, which offers a regression-based method accounting for the within-patient correlation of the success or failure of each flap. The overall complication and recurrence rates for all flaps were 46.4% and 16.0%, respectively, with a mean follow-up period of 55.4 ± 38.0 months. No statistically significant differences in complication and recurrence rates were observed among the three reconstruction methods. In subsequent analysis, albumin ≤3.0 g/dl and paraplegia were significantly associated with higher postoperative complication. The anatomic factor, ischial wound location, significantly trended toward the development of ulcer recurrence. In the fasciocutaneous group, paraplegia had significant correlation with higher complication and recurrence rates. In the musculocutaneous flap group, no variables had significant correlation with complication and recurrence rates. In the free-style perforator group, ischial wound location and malnourished status correlated with significantly higher complication rates; ischial wound location also correlated with a significantly higher recurrence rate. Ultimately, our review of a noteworthy cohort with lengthy follow-up helped identify and confirm certain risk factors that can facilitate a more informed and thoughtful pre- and postoperative decision-making process for patients with pressure ulcers. Copyright © 2017 British Association of Plastic, Reconstructive and Aesthetic Surgeons. Published by Elsevier Ltd. 
All rights reserved.

  18. Optimal structure and parameter learning of Ising models

    DOE PAGES

    Lokhov, Andrey; Vuffray, Marc Denis; Misra, Sidhant; ...

    2018-03-16

    Reconstruction of the structure and parameters of an Ising model from binary samples is a problem of practical importance in a variety of disciplines, ranging from statistical physics and computational biology to image processing and machine learning. The focus of the research community has shifted toward developing universal reconstruction algorithms that are both computationally efficient and require the minimal amount of expensive data. Here, we introduce a new method, interaction screening, which accurately estimates model parameters using local optimization problems. The algorithm provably achieves perfect graph structure recovery with an information-theoretically optimal number of samples, notably in the low-temperature regime, which is known to be the hardest for learning. Here, the efficacy of interaction screening is assessed through extensive numerical tests on synthetic Ising models of various topologies with different types of interactions, as well as on real data produced by a D-Wave quantum computer. Finally, this study shows that the interaction screening method is an exact, tractable, and optimal technique that universally solves the inverse Ising problem.
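
    The interaction screening idea can be sketched for a single spin u: minimize the convex empirical objective S(J) = (1/M) Σ_m exp(−σ_u^(m) Σ_j J_j σ_j^(m)), whose population minimizer is the true coupling vector. The synthetic data, problem sizes, and plain gradient descent below are illustrative choices, not the authors' implementation:

```python
import numpy as np

rng = np.random.default_rng(0)
n, M = 4, 20000

# True couplings of spin 0 to spins 1..3 in a small Ising model.
J_true = np.array([0.8, -0.5, 0.0])

# Sample neighbor spins, then spin 0 from its exact Ising conditional
# P(s0 | rest) proportional to exp(s0 * J.s_rest).
s_rest = rng.choice([-1.0, 1.0], size=(M, n - 1))
h = s_rest @ J_true
p_up = 1.0 / (1.0 + np.exp(-2.0 * h))
s0 = np.where(rng.uniform(size=M) < p_up, 1.0, -1.0)

def iso(J):
    """Interaction screening objective for spin 0 (convex in J)."""
    return float(np.mean(np.exp(-s0 * (s_rest @ J))))

# Plain gradient descent on the ISO.
J = np.zeros(n - 1)
for _ in range(500):
    w = np.exp(-s0 * (s_rest @ J))
    grad = -np.mean((w * s0)[:, None] * s_rest, axis=0)
    J -= 0.5 * grad

print(np.round(J, 2))  # close to the true couplings [0.8, -0.5, 0.0]
```

    Couplings estimated this way for every spin, followed by thresholding of small values, recover the graph structure; the paper's contribution is proving this needs an information-theoretically optimal number of samples.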

  19. Optimal structure and parameter learning of Ising models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lokhov, Andrey; Vuffray, Marc Denis; Misra, Sidhant

    Reconstruction of the structure and parameters of an Ising model from binary samples is a problem of practical importance in a variety of disciplines, ranging from statistical physics and computational biology to image processing and machine learning. The focus of the research community has shifted toward developing universal reconstruction algorithms that are both computationally efficient and require a minimal amount of expensive data. Here, we introduce a new method, interaction screening, which accurately estimates model parameters using local optimization problems. The algorithm provably achieves perfect graph structure recovery with an information-theoretically optimal number of samples, notably in the low-temperature regime, which is known to be the hardest for learning. The efficacy of interaction screening is assessed through extensive numerical tests on synthetic Ising models of various topologies with different types of interactions, as well as on real data produced by a D-Wave quantum computer. Finally, this study shows that the interaction screening method is an exact, tractable, and optimal technique that universally solves the inverse Ising problem.

  20. Accuracy and Precision of Radioactivity Quantification in Nuclear Medicine Images

    PubMed Central

    Frey, Eric C.; Humm, John L.; Ljungberg, Michael

    2012-01-01

    The ability to reliably quantify activity in nuclear medicine has a number of increasingly important applications. Dosimetry for targeted therapy treatment planning or for approval of new imaging agents requires accurate estimation of the activity in organs, tumors, or voxels at several imaging time points. Another important application is the use of quantitative metrics derived from images, such as the standard uptake value commonly used in positron emission tomography (PET), to diagnose and follow treatment of tumors. These measures require quantification of organ or tumor activities in nuclear medicine images. However, a number of physical, patient, and technical factors limit the quantitative reliability of nuclear medicine images. Improvements in instrumentation, including the development of hybrid single-photon emission computed tomography/computed tomography and PET/computed tomography systems, and in reconstruction, including the use of statistical iterative reconstruction methods, have substantially improved the ability to obtain reliable quantitative information from planar, single-photon emission computed tomography, and PET images. PMID:22475429

  1. Facial Soft Tissue Thickness of Midline in an Iranian Sample: MRI Study.

    PubMed

    Johari, Masume; Esmaeili, Farzad; Hamidi, Hadi

    2017-01-01

    To identify human skeletal remains, different methods can be used, and important data can be obtained with these techniques. Facial reconstruction, however, is the method of last resort for identifying unknown human faces, and it requires knowledge of facial soft tissue thickness at different positions on the face. The present study determined facial soft tissue thickness at different landmark points on MRI images of patients referred to the Radiology Department of Shahid Madani Hospital. In this descriptive cross-sectional trial, MRI images of 179 patients (61 males, 118 females) aged 18-76 years who did not show any pathologic lesions were selected. Facial soft tissue was measured at 12 landmark points in the midline area by two radiologist observers using dedicated software. Differences in soft tissue thickness at these landmark points were statistically analyzed with the Mann-Whitney U test (for gender) and the Kruskal-Wallis test (for Body Mass Index [BMI] and age groups). A p value less than 0.05 was considered statistically significant. The data were compared with the results of other studies. The values obtained in the present study were higher than those of Turkish and American studies at most of the landmark points. Facial soft tissue thickness at most landmarks was greater in males than in females. At some landmarks, significant differences were found between emaciated, normal, and overweight patients, and in most cases soft tissue thickness increased with increasing BMI. In some cases, significant differences in soft tissue thickness were noted among the age groups, with the thickness increasing or decreasing with age. The data obtained in the present study can be used for facial reconstruction purposes in the Iranian population; however, the slight differences between the studied population and other subgroup races must be considered for accurate reconstructions.

  2. Statistical image-domain multimaterial decomposition for dual-energy CT.

    PubMed

    Xue, Yi; Ruan, Ruoshui; Hu, Xiuhua; Kuang, Yu; Wang, Jing; Long, Yong; Niu, Tianye

    2017-03-01

    Dual-energy CT (DECT) enhances tissue characterization because of its basis material decomposition capability. In addition to conventional two-material decomposition from DECT measurements, multimaterial decomposition (MMD) is required in many clinical applications. To solve the ill-posed problem of reconstructing multi-material images from dual-energy measurements, additional constraints are incorporated into the formulation, including volume and mass conservation and the assumptions that there are at most three materials in each pixel and various material types among pixels. The recently proposed flexible image-domain MMD method decomposes pixels sequentially into multiple basis materials using a direct inversion scheme, which leads to magnified noise in the material images. In this paper, we propose a statistical image-domain MMD method for DECT to suppress the noise. The proposed method applies penalized weighted least-squares (PWLS) reconstruction, with a negative log-likelihood data term and edge-preserving regularization for each material. The statistical weight is determined by a data-based method accounting for the noise variance of the high- and low-energy CT images. We apply optimization transfer principles to design a series of pixel-wise separable quadratic surrogate (PWSQS) functions that monotonically decrease the cost function. The separability in each pixel enables the simultaneous update of all pixels. The proposed method is evaluated on a digital phantom, the Catphan©600 phantom, and three patients (pelvis, head, and thigh). We also implemented the direct inversion and low-pass filtration methods for comparison. Compared with the direct inversion method, the proposed method reduces the noise standard deviation (STD) in soft tissue by 95.35% in the digital phantom study, 88.01% in the Catphan©600 phantom study, 92.45% in the pelvis patient study, 60.21% in the head patient study, and 81.22% in the thigh patient study. 
    The overall volume fraction accuracy is improved by around 6.85%. Compared with the low-pass filtration method, the root-mean-square percentage error (RMSE(%)) of electron densities in the Catphan©600 phantom is decreased by 20.89%. At 50% of the modulation transfer function (MTF) magnitude, the proposed method increases the spatial resolution by an overall factor of 1.64 on the digital phantom and 2.16 on the Catphan©600 phantom. The overall volume fraction accuracy is increased by 6.15%. In summary, we proposed a statistical image-domain MMD method using DECT measurements. The method successfully suppresses the magnified noise while faithfully retaining the quantification accuracy and anatomical structure in the decomposed material images. The proposed method is practical and promising for advanced clinical applications using DECT imaging. © 2017 American Association of Physicists in Medicine.
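    The PWLS idea in this abstract, a data term weighted by the inverse noise variance plus an edge-preserving penalty, can be sketched in one dimension. Everything below (the signal, weights, Huber penalty, and step size) is an illustrative assumption of ours, not the authors' PWSQS implementation:

```python
import random

# 1-D PWLS denoising sketch: weighted data fidelity + Huber roughness.
random.seed(1)
n = 40
truth = [0.0] * 20 + [1.0] * 20                # piecewise-constant signal
sd = [0.05 if i % 2 == 0 else 0.20 for i in range(n)]   # per-pixel noise
y = [t + random.gauss(0.0, s) for t, s in zip(truth, sd)]
w = [1.0 / s ** 2 for s in sd]                 # statistical weights

def huber_deriv(t, delta=0.3):
    # derivative of the edge-preserving Huber penalty: quadratic for
    # small neighbor differences, linear (edge-tolerant) for large ones
    return t if abs(t) <= delta else delta * (1.0 if t > 0 else -1.0)

beta, lr = 10.0, 0.002
x = list(y)
for _ in range(2000):                          # gradient descent on PWLS
    g = [w[i] * (x[i] - y[i]) for i in range(n)]   # weighted data term
    for i in range(n - 1):                     # neighbor roughness term
        d = huber_deriv(x[i] - x[i + 1])
        g[i] += beta * d
        g[i + 1] -= beta * d
    x = [x[i] - lr * g[i] for i in range(n)]

rmse = lambda a: (sum((u - v) ** 2 for u, v in zip(a, truth)) / n) ** 0.5
print(rmse(y), rmse(x))   # noise is reduced while the step edge survives
```

    The statistical weighting is what distinguishes this from plain low-pass filtering: noisy pixels are pulled more strongly toward their neighbors, while reliable pixels stay close to their measurements, and the Huber penalty stops the true edge from being smoothed away.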

  3. Improving the light quantification of near infrared (NIR) diffused light optical tomography with ultrasound localization

    NASA Astrophysics Data System (ADS)

    Ardeshirpour, Yasaman

    According to statistics published by the American Cancer Society, breast cancer is currently the second most common cancer after skin cancer and the second leading cause of cancer death after lung cancer in the female population. Diffuse optical tomography (DOT) using near-infrared (NIR) light, guided by ultrasound localization, has shown great promise in distinguishing benign from malignant breast tumors and in assessing the response of breast cancer to chemotherapy. Our ultrasound-guided DOT system is based on reflection geometry, with patients scanned in the supine position using a hand-held probe. For patients with the chest wall located at a depth shallower than 1 to 2 cm, as in about 10% of our clinical cases, the semi-infinite imaging medium is not a valid assumption, and the chest-wall effect needs to be considered in the image reconstruction procedure. In this dissertation, co-registered ultrasound images were used to model the breast tissue and chest wall as a two-layer medium. The effect of the chest wall on breast lesion reconstruction was systematically investigated. The performance of the two-layer model-based reconstruction, using the Finite Element Method, was evaluated by simulation, phantom experiments, and clinical studies. The results show that the two-layer model can improve the accuracy of the estimated background optical properties, the reconstructed absorption map, and the total hemoglobin concentration of the lesion. For patient data affected by the chest wall, the perturbation, which is the difference between measurements obtained at lesion and normal reference sites, may include information about the background mismatch between these two sites. Because the image reconstruction is based on the perturbation approach, the effect of this mismatch between the optical properties at the two sites on the reconstructed optical absorption was studied, and a guideline for the imaging procedure was developed to reduce these effects during data acquisition. 
To reduce the artifacts caused by the background mismatch between the lesion and reference sites, two solutions were introduced. The first solution uses a model-based approach and the second method uses an exogenous contrast agent. The results of phantom and animal studies show that both methods can significantly reduce artifacts generated by the background mismatch.

  4. Statistical Analysis of the Uncertainty in Pre-Flight Aerodynamic Database of a Hypersonic Vehicle

    NASA Astrophysics Data System (ADS)

    Huh, Lynn

    The objective of the present research was to develop a new method to derive the aerodynamic coefficients and the associated uncertainties for flight vehicles via post-flight inertial navigation analysis using data from the inertial measurement unit. Statistical estimates of vehicle state and aerodynamic coefficients are derived using Monte Carlo simulation. Trajectory reconstruction using the inertial navigation system (INS) is a simple and widely used method. However, deriving realistic uncertainties in the reconstructed state and any associated parameters is not so straightforward. Extended Kalman filters, batch minimum variance estimation, and other approaches have been used; however, these methods generally depend on assumed physical models, assume statistical distributions (usually Gaussian), or have convergence issues for non-linear problems. The approach here assumes no physical models, is applicable to any statistical distribution, and has no convergence issues. The new approach obtains the statistics directly from a sufficient number of Monte Carlo samples using only the generally well known gyro and accelerometer specifications, and it can be applied to systems of non-linear form and non-Gaussian distribution. When redundant data are available, the set of Monte Carlo simulations is constrained to satisfy the redundant data within the uncertainties specified for those additional data. The proposed method was applied to validate the uncertainty in the pre-flight aerodynamic database of the X-43A Hyper-X research vehicle. In addition to gyro and acceleration data, the actual flight data include redundant measurements of position and velocity from the global positioning system (GPS). Criteria derived from the blended GPS and INS accuracy were used to select valid trajectories for statistical analysis. 
    The aerodynamic coefficients were derived from the selected trajectories either by a direct extraction method based on the equations of dynamics or by querying the pre-flight aerodynamic database. After applying the proposed method to the X-43A Hyper-X research vehicle, it was found that 1) there were consistent differences between the aerodynamic coefficients from the pre-flight aerodynamic database and the post-flight analysis, 2) the pre-flight estimate of the pitching moment coefficients was significantly different from the post-flight analysis, 3) the type of distribution of the states from the Monte Carlo simulation was affected by that of the perturbation parameters, 4) the uncertainties in the pre-flight model were overestimated, 5) the range where the aerodynamic coefficients from the pre-flight aerodynamic database and the post-flight analysis are in closest agreement is between Mach *.* and *.*, and more data points may be needed between Mach * and ** in the pre-flight aerodynamic database, 6) the selection criterion for valid trajectories from the Monte Carlo simulations was mostly driven by the horizontal velocity error, 7) the selection criterion must be based on a reasonable model to ensure the validity of the statistics from the proposed method, and 8) the results of the proposed method applied to two different flights with identical geometry and similar flight profiles were consistent.
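    The core Monte Carlo idea above, perturbing the IMU readings within the sensor specifications, re-integrating each perturbed trajectory, and taking statistics directly from the ensemble, can be sketched in one dimension. The sensor specs and flight profile below are illustrative assumptions, not X-43A values:

```python
import random
import statistics

# Hypothetical 1-D Monte Carlo sketch: ensemble statistics of a
# dead-reckoned velocity under accelerometer bias and noise.
random.seed(2)
dt, n_steps = 0.1, 100
true_accel = [1.0] * n_steps            # constant 1 m/s^2 acceleration
bias_sd, noise_sd = 0.02, 0.05          # assumed accelerometer spec

def final_velocity(accel):
    v = 0.0
    for a in accel:                     # simple dead-reckoning integration
        v += a * dt
    return v

final_v = []
for _ in range(2000):                   # Monte Carlo ensemble
    bias = random.gauss(0.0, bias_sd)   # per-run constant bias
    meas = [a + bias + random.gauss(0.0, noise_sd) for a in true_accel]
    final_v.append(final_velocity(meas))

# Ensemble statistics require no Gaussian assumption: empirical
# percentiles give the uncertainty interval directly.
fv = sorted(final_v)
lo, hi = fv[50], fv[1949]               # empirical 95% interval
print(statistics.mean(final_v), statistics.stdev(final_v), (lo, hi))
```

    Constraining the ensemble with redundant data, as the abstract describes with GPS, would amount to keeping only those Monte Carlo trajectories whose positions and velocities fall within the GPS uncertainties.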

  5. Postmastectomy reconstruction: comparative analysis of the psychosocial, functional, and cosmetic effects of transverse rectus abdominis musculocutaneous flap versus breast implant reconstruction.

    PubMed

    Cederna, P S; Yates, W R; Chang, P; Cram, A E; Ricciardelli, E J

    1995-11-01

    Over 40,000 postmastectomy breast reconstructions are performed annually. In this study, we investigated the psychosocial, functional, and cosmetic effects of transverse rectus abdominis musculocutaneous (TRAM) flap versus breast implant reconstruction. Thirty-three women who had undergone postmastectomy breast reconstruction were contacted by telephone and agreed to participate in the study. Twenty-two women completed the self-assessment questionnaires regarding their quality of life, psychological symptoms, functional status, body image, and global satisfaction. The TRAM and implant groups contained 8 and 14 patients, respectively. The groups were well matched for age, employment status, marital status, race, religion, and severity of medical and surgical illnesses. The average follow-up was 36 months. Statistical analysis of the responses revealed that women who had undergone TRAM flap reconstruction were more satisfied with how their reconstructed breast felt to the touch (p = .01), and there was a trend toward greater satisfaction with the appearance of the reconstructed breast (p = .08). However, these same patients reported more difficulty functioning at work or school, performing vigorous physical activities, participating in community or religious activities, visiting with relatives, and interacting with male friends (p < .04). There were no statistically significant differences in body image or overall satisfaction. In this small cohort study, both the TRAM flap group and the implant group were satisfied with the results of their breast reconstruction, but the TRAM flap group was more satisfied with how their breast felt and tended to be more satisfied with the cosmetic result. The TRAM flap group also reported greater psychological, social, and physical impairments as a result of their reconstruction.

  6. Removing the Impact of Correlated PSF Uncertainties in Weak Lensing

    NASA Astrophysics Data System (ADS)

    Lu, Tianhuan; Zhang, Jun; Dong, Fuyu; Li, Yingke; Liu, Dezi; Fu, Liping; Li, Guoliang; Fan, Zuhui

    2018-05-01

    Accurate reconstruction of the spatial distributions of the point-spread function (PSF) is crucial for high precision cosmic shear measurements. Nevertheless, current methods are not good at recovering the PSF fluctuations of high spatial frequencies. In general, the residual PSF fluctuations are spatially correlated, and therefore can significantly contaminate the correlation functions of the weak lensing signals. We propose a method to correct for this contamination statistically, without any assumptions on the PSF and galaxy morphologies or their spatial distribution. We demonstrate our idea with the data from the W2 field of CFHTLenS.

  7. A new method for imaging nuclear threats using cosmic ray muons

    NASA Astrophysics Data System (ADS)

    Morris, C. L.; Bacon, Jeffrey; Borozdin, Konstantin; Miyadera, Haruo; Perry, John; Rose, Evan; Watson, Scott; White, Tim; Aberle, Derek; Green, J. Andrew; McDuff, George G.; Lukić, Zarija; Milner, Edward C.

    2013-08-01

    Muon tomography is a technique that uses cosmic ray muons to generate three-dimensional images of volumes using information contained in the Coulomb scattering of the muons. Advantages of this technique are the ability of cosmic rays to penetrate significant overburden and the absence of any additional dose delivered to subjects under study above the natural cosmic ray flux. Disadvantages include the relatively long exposure times, poor position resolution, and complex algorithms needed for reconstruction. Here we demonstrate a new method for obtaining improved position resolution and statistical precision for objects with spherical symmetry.

  8. A new method for imaging nuclear threats using cosmic ray muons

    DOE PAGES

    Morris, C. L.; Bacon, Jeffrey; Borozdin, Konstantin; ...

    2013-08-29

    Muon tomography is a technique that uses cosmic ray muons to generate three-dimensional images of volumes using information contained in the Coulomb scattering of the muons. Advantages of this technique are the ability of cosmic rays to penetrate significant overburden and the absence of any additional dose delivered to subjects under study beyond the natural cosmic ray flux. Disadvantages include the relatively long exposure times, poor position resolution, and complex algorithms needed for reconstruction. Here, we demonstrate a new method for obtaining improved position resolution and statistical precision for objects with spherical symmetry.

  9. Model-based iterative reconstruction for reduction of radiation dose in abdominopelvic CT: comparison to adaptive statistical iterative reconstruction.

    PubMed

    Yasaka, Koichiro; Katsura, Masaki; Akahane, Masaaki; Sato, Jiro; Matsuda, Izuru; Ohtomo, Kuni

    2013-12-01

    To evaluate dose reduction and image quality of abdominopelvic computed tomography (CT) reconstructed with model-based iterative reconstruction (MBIR) compared to adaptive statistical iterative reconstruction (ASIR). In this prospective study, 85 patients underwent referential-, low-, and ultralow-dose unenhanced abdominopelvic CT. Images were reconstructed with ASIR for low-dose (L-ASIR) and ultralow-dose CT (UL-ASIR), and with MBIR for ultralow-dose CT (UL-MBIR). Image noise was measured in the abdominal aorta and iliopsoas muscle. Subjective image analyses and a lesion detection study (adrenal nodules) were conducted by two blinded radiologists. A reference standard was established by a consensus panel of two different radiologists using referential-dose CT reconstructed with filtered back projection. Compared to low-dose CT, there was a 63% decrease in dose-length product with ultralow-dose CT. UL-MBIR had significantly lower image noise than L-ASIR and UL-ASIR (all p<0.01). UL-MBIR was significantly better for subjective image noise and streak artifacts than L-ASIR and UL-ASIR (all p<0.01). There were no significant differences between UL-MBIR and L-ASIR in diagnostic acceptability (p>0.65), or diagnostic performance for adrenal nodules (p>0.87). MBIR significantly improves image noise and streak artifacts compared to ASIR, and can achieve radiation dose reduction without severely compromising image quality.

  10. A Data-Model Comparison over Europe using a new 2000-yr Summer Temperature Reconstruction from the PAGES 2k Regional Network and Last-Millennium GCM Simulations

    NASA Astrophysics Data System (ADS)

    Smerdon, Jason; Werner, Johannes; Fernandez-Donado, Laura; Buntgen, Ulf; Charpentier Ljungqvist, Fredrik; Esper, Jan; Fidel Gonzalez-Rouco, J.; Luterbacher, Juerg; McCarroll, Danny; Wagner, Sebastian; Wahl, Eugene; Wanner, Heinz; Zorita, Eduardo

    2013-04-01

    A new reconstruction of European summer (JJA) land temperatures is presented and compared to 37 forced transient simulations of the last millennium from coupled General Circulation Models (CGCMs). The reconstructions are derived from eleven annually resolved tree-ring and documentary records from ten European countries/regions, compiled as part of the Euro_Med working group contribution to the PAGES 2k Regional Network. Records were selected based upon their summer temperature signal, annual resolution, and time-continuous sampling. All tree-ring data were detrended using the Regional Curve Standardization (RCS) method to retain low-frequency variance in the resulting mean chronologies. A nested Composite-Plus-Scale (CPS) mean temperature reconstruction extending from 138 B.C.E. to 2003 C.E. was derived using nine nests reflecting the availability of predictors back in time. Each nest was calculated using a weighted composite based on the correlation of each proxy with the CRUTEM4v mean European JJA land temperature (35°-70°N, 10°W-40°E). The CPS methodology was implemented using a sliding calibration period, initially extending from 1850-1953 C.E. and incrementing by one year until reaching the final period of 1900-2003 C.E. Within each calibration step, the 50 years excluded from calibration were used for validation. Validation statistics across all reconstruction ensemble members within each nest indicate skillful reconstructions (RE: 0.42-0.64; CE: 0.26-0.54) and are all above the maximum validation statistics achieved in an ensemble of red noise benchmarking experiments. A gridded (5°x5°) European summer (JJA) temperature reconstruction back to 750 C.E. was derived using Bayesian inference together with a localized stochastic description of the underlying processes. Instrumental data are JJA means from the 5° European land grid cells in the CRUTEM4v dataset. 
    Predictive experiments using the full proxy data were performed, resulting in a multivariate distribution of temperature reconstructions from 750-2003 C.E. The mean of this distribution is the optimal estimate of the gridded JJA temperature anomalies, and its width provides objective reconstruction uncertainties. The derived reconstruction is compared to withheld instrumental and proxy data to evaluate reconstruction skill on decadal-to-centennial time scales. A comparison between the mean Bayesian and CPS reconstructions indicates remarkable agreement, with a correlation of 0.95 during their period of overlap. In both the Bayesian and CPS reconstructions, warm periods during the 1st, 2nd, and 7th-12th centuries are comparable to the warm summer temperatures of the mid-20th century, although the 2003 summer remains the warmest single summer over the duration of the reconstructions. A relative period of cold summer temperatures is also noted from the 14th-19th centuries, consistent with the expected timing of the Little Ice Age. Comparisons between the reconstructions and the 37-member ensemble of millennium-length forced transient simulations from CGCMs, including eleven simulations from the collection of CMIP5/PMIP3 last-millennium experiments, indicate good regional agreement between reconstructions and models. When the simulations are separated by strong or weak scaling of total solar irradiance (TSI) forcing over the last millennium, there is some evidence of better agreement with the strongly forced ensemble.
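    For a single nest, the CPS procedure described above reduces to a correlation-weighted proxy composite rescaled to the instrumental mean and variance over the calibration window. Here is a minimal sketch with synthetic data; all series, noise levels, and window choices are our illustrative assumptions, not the Euro_Med records:

```python
import random
import statistics

# One-nest composite-plus-scale (CPS) sketch on synthetic data.
random.seed(3)
n = 150                                       # pseudo-years
target = [random.gauss(0.0, 1.0) for _ in range(n)]      # "instrumental"
proxies = [[t + random.gauss(0.0, s) for t in target]    # noisy proxies
           for s in (0.3, 0.6, 1.2)]

cal = list(range(50, n))                      # calibration window

def corr(a, b):
    ma, mb = statistics.mean(a), statistics.mean(b)
    num = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    den = (sum((x - ma) ** 2 for x in a)
           * sum((y - mb) ** 2 for y in b)) ** 0.5
    return num / den

# "composite" step: weight each proxy by its calibration correlation
w = [corr([p[i] for i in cal], [target[i] for i in cal]) for p in proxies]
comp = [sum(wj * p[i] for wj, p in zip(w, proxies)) / sum(w)
        for i in range(n)]

# "scale" step: match the instrumental mean/variance over the calibration
mc = statistics.mean([comp[i] for i in cal])
sc = statistics.stdev([comp[i] for i in cal])
mt = statistics.mean([target[i] for i in cal])
st = statistics.stdev([target[i] for i in cal])
recon = [(c - mc) / sc * st + mt for c in comp]

# validate on the withheld early segment, as in the sliding scheme above
print(corr(recon[:50], target[:50]))
```

    The nesting in the abstract simply repeats this with fewer proxies as records drop out back in time, and the sliding calibration/validation split provides the RE/CE skill statistics.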

  11. Scale dependence of galaxy biasing investigated by weak gravitational lensing: An assessment using semi-analytic galaxies and simulated lensing data

    NASA Astrophysics Data System (ADS)

    Simon, Patrick; Hilbert, Stefan

    2018-05-01

    Galaxies are biased tracers of the matter density on cosmological scales. For future tests of galaxy models, we refine and assess a method to measure galaxy biasing as a function of physical scale k with weak gravitational lensing. This method enables us to reconstruct the galaxy bias factor b(k) as well as the galaxy-matter correlation r(k) on spatial scales between 0.01 h Mpc-1 ≲ k ≲ 10 h Mpc-1 for redshift-binned lens galaxies below redshift z ≲ 0.6. In the refinement, we account for an intrinsic alignment of source ellipticities, and we correct for the magnification bias of the lens galaxies, relevant for the galaxy-galaxy lensing signal, to improve the accuracy of the reconstructed r(k). For simulated data, the reconstructions achieve an accuracy of 3-7% (68% confidence level) over the above k-range for a survey area and a typical depth of contemporary ground-based surveys. Realistically, however, the accuracy is probably reduced to about 10-15%, mainly by systematic uncertainties in the assumed intrinsic source alignment, the fiducial cosmology, and the redshift distributions of lens and source galaxies (in that order). Furthermore, our reconstruction technique employs physical templates for b(k) and r(k) that elucidate the impact of central galaxies and the halo-occupation statistics of satellite galaxies on the scale-dependence of galaxy bias, which we discuss in the paper. In a first demonstration, we apply this method to previous measurements in the Garching-Bonn Deep Survey and give a physical interpretation of the lens population.
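    The two reconstructed quantities are defined through ratios of the galaxy, matter, and galaxy-matter power spectra: b(k) = sqrt(P_gg/P_mm) and r(k) = P_gm/sqrt(P_gg P_mm). A trivial synthetic check of those definitions (toy scale-free spectra of our own making, not survey measurements):

```python
# Toy check of the bias factor and galaxy-matter correlation definitions.
ks = [0.01 * 1.5 ** i for i in range(20)]     # k grid in h/Mpc (toy)
b_true, r_true = 1.3, 0.9                     # assumed constant values
P_mm = [k ** -2.0 for k in ks]                # toy matter power spectrum
P_gg = [b_true ** 2 * p for p in P_mm]        # galaxy auto-spectrum
P_gm = [r_true * b_true * p for p in P_mm]    # galaxy-matter cross-spectrum

b = [(pg / pm) ** 0.5 for pg, pm in zip(P_gg, P_mm)]
r = [pgm / (pg * pm) ** 0.5 for pgm, pg, pm in zip(P_gm, P_gg, P_mm)]
print(b[0], r[0])
```

    The substance of the paper lies in estimating these spectra from lensing observables (galaxy clustering, galaxy-galaxy lensing, and cosmic shear) rather than from the ratios directly, which is where the intrinsic-alignment and magnification-bias corrections enter.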

  12. Characterization of adaptive statistical iterative reconstruction (ASIR) in low contrast helical abdominal imaging via a transfer function based method

    NASA Astrophysics Data System (ADS)

    Zhang, Da; Li, Xinhua; Liu, Bob

    2012-03-01

    Since the introduction of ASiR, its potential for noise reduction has been reported in various clinical applications. However, the influence of different scan and reconstruction parameters on the trade-off between ASiR's blurring effect and its noise reduction in low contrast imaging has not been fully studied. Simple measurements on low contrast images, such as CNR or phantom scores, cannot capture the nuanced nature of this problem. We tackled this topic using a method that compares the performance of ASiR in low contrast helical imaging based on an assumed filter layer on top of the FBP reconstruction. Transfer functions of this filter layer were obtained from the noise power spectra (NPS) of corresponding FBP and ASiR images sharing the same scan and reconstruction parameters. 2D transfer functions were calculated as sqrt[NPS_ASiR(u, v)/NPS_FBP(u, v)]. Synthesized ACR phantom images were generated by filtering the FBP images with the transfer functions of specific (FBP, ASiR) pairs and were compared with the ASiR images. It is shown that the transfer functions could predict the deterministic blurring effect of ASiR on low contrast objects, as well as the degree of noise reduction. Using this method, the influence of dose, scan field of view (SFOV), display field of view (DFOV), ASiR level, and Recon Mode on the behavior of ASiR in low contrast imaging was studied. It was found that ASiR level, dose level, and DFOV play more important roles in determining the behavior of ASiR than the other two parameters.
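    The transfer-function construction is straightforward to illustrate in 1-D: take the square root of the NPS ratio and apply it as a multiplicative filter in frequency space. The sketch below uses toy NPS shapes of our own choosing and a naive DFT for self-containment; it is not the authors' measurement pipeline.

```python
import cmath
import random

def dft(x):
    n = len(x)
    return [sum(x[t] * cmath.exp(-2j * cmath.pi * f * t / n)
                for t in range(n)) for f in range(n)]

def idft(X):
    n = len(X)
    return [sum(X[f] * cmath.exp(2j * cmath.pi * f * t / n)
                for f in range(n)).real / n for t in range(n)]

n = 64
nps_fbp = [1.0] * n                          # toy white FBP noise spectrum
# toy "ASiR-like" NPS: high spatial frequencies suppressed by a factor 4
nps_asir = [1.0 if min(f, n - f) < n // 4 else 0.25 for f in range(n)]

# transfer function of the assumed filter layer on top of FBP:
#   T(f) = sqrt(NPS_asir(f) / NPS_fbp(f))
T = [(a / b) ** 0.5 for a, b in zip(nps_asir, nps_fbp)]

random.seed(4)
noise = [random.gauss(0.0, 1.0) for _ in range(n)]   # stand-in FBP noise
filtered = idft([Tf * Xf for Tf, Xf in zip(T, dft(noise))])

energy = lambda s: sum(v * v for v in s)
print(energy(noise), energy(filtered))   # filtering reduces noise energy
```

    Applying the same T to a noiseless low-contrast object instead of noise is what predicts the deterministic blurring the abstract describes.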

  13. Mandible reconstruction with free fibula flaps: Outcome of a cost-effective individual planning concept compared with virtual surgical planning.

    PubMed

    Rommel, Niklas; Kesting, Marco Rainer; Rohleder, Nils Hagen; Bauer, Florian Martin Josef; Wolff, Klaus-Dietrich; Weitz, Jochen

    2017-08-01

    The free osteomyocutaneous fibular flap has become one of the primary options for mandibular reconstruction, in part because of the introduction and development of virtual surgical planning (VSP). However, VSP is associated with considerable additional pre-operative effort and cost. The purpose of this study was therefore to develop a new, individual, cost-effective pre-operative planning concept for free fibula mandible reconstruction and to compare it with VSP with regard to clinical parameters and post-operative outcome. 31 patients undergoing mandibular reconstruction with a microvascular free fibular flap were divided into two groups and retrospectively reviewed. For the first group, A (18 of 31 patients), an individual method with stereolithographic (STL) models, silicone templates, and hand-made cutting guides was used (about 250 € planning costs per patient). For the second group, B (13 of 31 patients), VSP including pre-fabricated cutting guides was used (about 2500 € planning costs per patient). We found no statistically significant differences with respect to intra-operative time of mandibular reconstruction, duration of hospitalisation, or post-operative complications between the two groups (p ≥ 0.05). The surgical outcomes and operative efficiency of this individual, cost-effective planning concept are comparable with those of the much more expensive complete VSP concept. Copyright © 2017 European Association for Cranio-Maxillo-Facial Surgery. Published by Elsevier Ltd. All rights reserved.

  14. Personality Traits and Decision on Breast Reconstruction in Women after Mastectomy.

    PubMed

    Miśkiewicz, Halina; Antoszewski, Bogusław; Iljin, Aleksandra

    2016-09-01

    The aim of the study was to evaluate the correlation between selected personality traits in women after mastectomy and their decision on breast reconstruction. The study was conducted between 2013 and 2015 in the Department of Plastic, Reconstructive and Aesthetic Surgery, Medical University of Lodz, and the Department of Oncological and Breast Surgery, CZMP. The comparative analysis comprised 40 patients who had undergone mastectomy and breast reconstruction and 40 women after breast amputation who did not undergo reconstructive surgery. Based on a self-constructed questionnaire, five personality features were evaluated in these women: pursuit of success in life, ability to motivate others, openness to other people, impact of belonging to a social group on the sense of security, and the importance of others' opinions of the respondent. In addition to the questionnaire, a psychological tool (the SUPIN S30 and C30 tests) was used in both groups of women to determine the intensity of positive and negative emotions. Women who did not choose the reconstructive option were statistically significantly older at mastectomy than women who underwent breast reconstruction. There were statistically significant differences between the groups in responses to the questions on openness to other people and the value placed on other people's opinions. The differences in responses to the question on the impact of belonging to a social group on the personal sense of safety were of borderline statistical significance. In the psychometric studies there were significant differences in responses to the SUPIN C30 test for negative emotions and the S30 test for positive emotions. In group A, the level of negative emotions fell within the range of high scores in 47.5% of women and within low and low-average scores in another 47.5%. Among women from group B, 57.5% had high scores, while 37.5% had low and average scores. There were also significant differences in the results of the positive emotions evaluation in the S30 test. 
Women who did not undergo breast reconstruction usually had high scores, while those who decided on reconstructive surgery usually had low scores and low-high scores. 1. The decision on breast reconstruction after mastectomy is connected with personality features of patients. Introvert women, who base their self-opinion on opinion of others and their sense of security on belonging to a social group, rarely choose to undergo breast reconstruction. 2. Younger patients after mastectomy more frequently choose the breast reconstructive option. 3. A special algorithm of medical and psychological care in patients after mastectomy should be created to improve their further quality of life.

  15. Line-of-sight effects in strong lensing: putting theory into practice

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Birrer, Simon; Welschen, Cyril; Amara, Adam

    2017-04-01

We present a simple method to accurately infer line-of-sight (LOS) integrated lensing effects for galaxy-scale strong lens systems through image reconstruction. Our approach enables us to separate weak-lensing LOS effects from the main strong-lens deflector. We test our method using mock data and show that strong lens systems can be accurate probes of cosmic shear, with a precision on the shear terms of ± 0.003 (statistical error) for an HST-like dataset. We apply our formalism to reconstruct the lens COSMOS 0038+4133 and its LOS. In addition, we estimate the LOS properties with a halo-rendering estimate based on the COSMOS field galaxies and a galaxy-halo connection. The two approaches are independent and complementary in their information content. We find that, when estimating the convergence at the strong lens system, performing a joint analysis improves the measure by a factor of two compared to a halo-model-only analysis. Furthermore, the constraints of the strong-lens reconstruction lead to tighter constraints on the halo masses of the LOS galaxies. Joint constraints from multiple strong lens systems may add valuable information to the galaxy-halo connection and may allow independent weak-lensing shear measurement calibrations.

  16. Preclinical evaluation of parametric image reconstruction of [18F]FMISO PET: correlation with ex vivo immunohistochemistry

    NASA Astrophysics Data System (ADS)

    Cheng, Xiaoyin; Bayer, Christine; Maftei, Constantin-Alin; Astner, Sabrina T.; Vaupel, Peter; Ziegler, Sibylle I.; Shi, Kuangyu

    2014-01-01

    Compared to indirect methods, direct parametric image reconstruction (PIR) has the advantage of high quality and low statistical errors. However, it is not yet clear whether this improvement in quality is beneficial for physiological quantification. This study aimed to evaluate direct PIR for the quantification of tumor hypoxia, using the hypoxic fraction (HF) assessed from immunohistological data as a physiological reference. Sixteen mice with xenografted human squamous cell carcinomas were scanned with dynamic [18F]FMISO PET. Afterward, tumors were sliced and stained with H&E and the hypoxia marker pimonidazole. The hypoxic signal was segmented using k-means clustering, and HF was specified as the ratio of the hypoxic area to the viable tumor area. Parametric Patlak slope images were obtained by indirect voxel-wise modeling on images reconstructed with filtered back projection and ordered-subset expectation maximization (OSEM), and by direct PIR (e.g., parametric OSEM, POSEM). The mean and maximum Patlak slopes of the tumor area were investigated and compared with HF. POSEM yielded generally higher correlations between slope and HF among the investigated methods. Based on these results, a strategy for delineating the hypoxic tumor volume by thresholding parametric images at half the maximum of the slope is recommended.
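The indirect route mentioned above (voxel-wise Patlak modeling on already-reconstructed images) reduces, for a single voxel, to a linear regression; the sketch below is an illustrative implementation with a synthetic input function, not the study's reconstruction code, and the function name is an assumption:

```python
import numpy as np

def patlak_slope(t, cp, ct):
    """Indirect Patlak analysis for one voxel: regress C_t(t)/C_p(t) against
    (integral_0^t C_p dτ)/C_p(t); the slope estimates the net influx rate Ki
    (the 'Patlak slope' used above for hypoxia quantification)."""
    # cumulative trapezoidal integral of the plasma input function
    icp = np.concatenate(([0.0], np.cumsum(0.5 * (cp[1:] + cp[:-1]) * np.diff(t))))
    x = icp / cp
    y = ct / cp
    slope, _intercept = np.polyfit(x, y, 1)
    return slope
```

In practice the regression is restricted to late time frames, where the Patlak plot becomes linear.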

  17. MODEL-FREE MULTI-PROBE LENSING RECONSTRUCTION OF CLUSTER MASS PROFILES

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Umetsu, Keiichi

    2013-05-20

    Lens magnification by galaxy clusters induces characteristic spatial variations in the number counts of background sources, amplifying their observed fluxes and expanding the area of sky; the net effect, known as magnification bias, depends on the intrinsic faint-end slope of the source luminosity function. The bias is strongly negative for red galaxies, dominated by the geometric area distortion, whereas it is mildly positive for blue galaxies, enhancing the blue counts toward the cluster center. We generalize the Bayesian approach of Umetsu et al. for reconstructing projected cluster mass profiles by incorporating multiple populations of background sources for magnification-bias measurements and combining them with complementary lens-distortion measurements, effectively breaking the mass-sheet degeneracy and improving the statistical precision of cluster mass measurements. The approach can be further extended to include strong-lensing projected mass estimates, thus allowing for non-parametric absolute mass determinations in both the weak and strong regimes. We apply this method to our recent CLASH lensing measurements of MACS J1206.2-0847 and demonstrate how combining multi-probe lensing constraints can improve the reconstruction of cluster mass profiles. This method will also be useful for a stacked lensing analysis, combining all lensing-related effects in the cluster regime, for a definitive determination of the averaged mass profile.
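The sign of the magnification bias described above follows from the standard count-ratio relation; a one-line sketch (the slope values in the usage are hypothetical examples, not measurements from the paper):

```python
def magnification_bias(mu, s):
    """Ratio of lensed to unlensed cumulative source counts behind a
    magnification mu, for a counts slope s = dlog10 N(<m)/dm.
    The bias vanishes at s = 0.4, is positive (count enhancement) for
    steeper counts, and negative (area-distortion dominated) otherwise."""
    return mu ** (2.5 * s - 1.0)
```

For example, at mu = 2 a steep (blue-like) slope s = 0.6 enhances the counts, while a flat (red-like) slope s = 0.2 depletes them.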

  18. Fast alignment-free sequence comparison using spaced-word frequencies.

    PubMed

    Leimeister, Chris-Andre; Boden, Marcus; Horwege, Sebastian; Lindner, Sebastian; Morgenstern, Burkhard

    2014-07-15

    Alignment-free methods for sequence comparison are increasingly used for genome analysis and phylogeny reconstruction; they circumvent various difficulties of traditional alignment-based approaches. In particular, alignment-free methods are much faster than pairwise or multiple alignments. They are, however, less accurate than methods based on sequence alignment. Most alignment-free approaches work by comparing the word composition of sequences. A well-known problem with these methods is that neighbouring word matches are far from independent. To reduce the statistical dependency between adjacent word matches, we propose to use 'spaced words', defined by patterns of 'match' and 'don't care' positions, for alignment-free sequence comparison. We describe a fast implementation of this approach using recursive hashing and bit operations, and we show that further improvements can be achieved by using multiple patterns instead of single patterns. To evaluate our approach, we use spaced-word frequencies as a basis for fast phylogeny reconstruction. Using real-world and simulated sequence data, we demonstrate that our multiple-pattern approach produces better phylogenies than approaches relying on contiguous words. Our program is freely available at http://spaced.gobics.de/. © The Author 2014. Published by Oxford University Press.
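The spaced-word idea above can be illustrated directly: a toy sketch, not the recursive-hashing, bit-level implementation the authors describe, and the pattern and distance measure below are arbitrary choices:

```python
from collections import Counter
import math

def spaced_words(seq, pattern):
    """Count spaced words of `seq`: for each window, characters at the '1'
    (match) positions of `pattern` are kept, '0' (don't care) positions skipped."""
    k = len(pattern)
    words = (
        "".join(c for c, p in zip(seq[i:i + k], pattern) if p == "1")
        for i in range(len(seq) - k + 1)
    )
    return Counter(words)

def count_distance(c1, c2):
    """Euclidean distance between two spaced-word count vectors."""
    return math.sqrt(sum((c1[w] - c2[w]) ** 2 for w in set(c1) | set(c2)))
```

Skipping the don't-care positions is what weakens the statistical dependency between neighbouring word matches relative to contiguous k-mers.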

  19. Assessment of interpatient heterogeneity in tumor radiosensitivity for nonsmall cell lung cancer using tumor-volume variation data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chvetsov, Alexei V., E-mail: chvetsov2@gmail.com; Schwartz, Jeffrey L.; Mayr, Nina

    2014-06-15

    Purpose: In our previous work, the authors showed that a distribution of cell surviving fractions S2 in a heterogeneous group of patients could be derived from tumor-volume variation curves during radiotherapy for head and neck cancer. In this research study, the authors show that this algorithm can be applied to other tumors, specifically nonsmall cell lung cancer. This new application includes larger patient volumes and comparison of data sets obtained at independent institutions. Methods: Our analysis was based on two data sets of tumor-volume variation curves for heterogeneous groups of 17 patients treated for nonsmall cell lung cancer with conventional dose fractionation. The data sets were obtained previously at two independent institutions by using megavoltage computed tomography. Statistical distributions of cell surviving fractions S2 and clearance half-lives of lethally damaged cells T1/2 have been reconstructed in each patient group by using a version of the two-level cell population model of tumor response and a simulated annealing algorithm. The reconstructed statistical distributions of the cell surviving fractions have been compared to the distributions measured using predictive assays in vitro. Results: Nonsmall cell lung cancer presents certain difficulties for modeling surviving fractions using tumor-volume variation curves because of its relatively large fractional hypoxic volume, low gradient of tumor-volume response, and possible uncertainties due to breathing motion. Despite these difficulties, cell surviving fractions S2 for nonsmall cell lung cancer derived from tumor-volume variation measured at different institutions have similar probability density functions (PDFs), with mean values of 0.30 and 0.43 and standard deviations of 0.13 and 0.18, respectively. The PDFs for cell surviving fractions S2 reconstructed from tumor-volume variation agree with the PDF measured in vitro. 
Conclusions: The data obtained in this work, taken together with the data obtained previously for head and neck cancer, suggest that the cell surviving fractions S2 can be reconstructed from the tumor-volume variation curves measured during radiotherapy with conventional fractionation. The proposed method can be used for treatment evaluation and adaptation.
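The reconstruction above relies on a simulated annealing search; the generic optimizer can be sketched as follows (the quadratic cost, Gaussian step proposal, and cooling schedule are placeholders for illustration, not the two-level cell population model fit):

```python
import math
import random

def simulated_annealing(cost, x0, propose, t0=1.0, cooling=0.999, iters=20000, seed=0):
    """Minimize `cost` by simulated annealing: always accept improvements,
    accept worse candidates with probability exp(-delta/T), and cool T
    geometrically so the search becomes greedy over time."""
    rng = random.Random(seed)
    x, fx, temp = x0, cost(x0), t0
    best, fbest = x, fx
    for _ in range(iters):
        cand = propose(x, rng)
        fc = cost(cand)
        if fc < fx or rng.random() < math.exp(-(fc - fx) / temp):
            x, fx = cand, fc
            if fc < fbest:
                best, fbest = cand, fc
        temp *= cooling
    return best, fbest
```

The early high-temperature phase lets the search escape local minima, which is why the technique suits noisy model-fitting problems like the one described above.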

  20. Physical reconstruction of packed beds and their morphological analysis: core-shell packings as an example.

    PubMed

    Bruns, Stefan; Tallarek, Ulrich

    2011-04-08

    We report a fast, nondestructive, and quantitative approach to characterize the morphology of packed beds of fine particles by their three-dimensional reconstruction from confocal laser scanning microscopy images, demonstrated for a 100 μm i.d. fused-silica capillary packed with 2.6 μm core-shell particles. The presented method is generally applicable to silica-based capillary columns, monolithic or particulate, and comprises column pretreatment, image acquisition, image processing, and statistical analysis of the image data. It defines a unique platform for fundamental comparisons of particulate and monolithic supports using the statistical measures derived from their reconstructions. The resulting morphological data are column cross-sectional porosity profiles and chord length distributions from the interparticle macropore space; the latter are a descriptor of local density and can be characterized by a simplified k-gamma distribution. This distribution function provides a parameter of location and a parameter of dispersion, which can be correlated to individual chromatographic band broadening processes (i.e., to transchannel and short-range interchannel contributions to eddy dispersion, respectively). Together with the transcolumn porosity profile, the presented approach allows one to analyze and quantify the packing microstructure from pore to column scale, and therefore holds great promise for comparative studies of packing conditions and particle properties, particularly for characterizing and minimizing the packing-process-specific heterogeneities in the final bed structure. Copyright © 2011 Elsevier B.V. All rights reserved.
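A chord length distribution of the kind used above is collected run-by-run from a segmented image; a minimal 1-D sketch on a single binarized row (illustrative only, not the authors' image-processing pipeline):

```python
import numpy as np

def chord_lengths(void_row):
    """Lengths of contiguous void (True/1) runs along one row of a binarized
    reconstruction; pooling rows over the interparticle macropore space
    yields the chord length distribution."""
    padded = np.concatenate(([0], np.asarray(void_row, dtype=int), [0]))
    edges = np.diff(padded)
    starts = np.flatnonzero(edges == 1)   # solid-to-void transitions
    ends = np.flatnonzero(edges == -1)    # void-to-solid transitions
    return ends - starts
```

The pooled lengths can then be fitted with a k-gamma distribution to obtain the location and dispersion parameters mentioned above.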

  1. Efficacy of double mirrored omega pattern for skin sparing mastectomy to reduce ischemic complications.

    PubMed

    Santanelli di Pompeo, Fabio; Sorotos, Michail; Laporta, Rosaria; Pagnoni, Marco; Longo, Benedetto

    2018-02-01

    Excellent cosmetic results from skin-sparing mastectomy (SSM) are often impaired by skin flap necrosis (SFN), with rates of 8%-25%, or worse in smokers. This study prospectively investigated the efficacy of the Double-Mirrored Omega Pattern (DMOP-SSM), compared to the Wise Pattern (WP-SSM), for immediate reconstruction in moderate/large-breasted smokers. From 2008-2010, DMOP-SSM was performed in 51 consecutive immediate breast reconstructions on 41 smokers (mean age = 49.8 years) with moderate/large and ptotic breasts. This active group (AG) was compared to a similar historical control group (CG) of 37 smokers (mean age = 51.1 years) who underwent WP-SSM and immediate breast reconstruction, with a mean follow-up of 37.6 months. Skin ischaemic complications, number of surgical revisions, time to wound healing, and patient satisfaction were analysed. Descriptive statistics were reported, and performance endpoints were compared using Fisher's exact test and the Mann-Whitney U-test; a p-value <.05 was considered significant. Mean age (p = .316) and BMI (p = .215) did not differ statistically between groups. Ischaemic complications occurred in 11.7% of DMOP-SSMs and in 32.4% of WP-SSMs (p = .017), and revision rates were, respectively, 5.8% and 24.3% (p = .012), both statistically significant. Mean time to wound healing was, respectively, 16.8 days and 18.4 days (p = .205). Mean patient satisfaction scores were, respectively, 18.9 and 21.1, a statistically significant difference (p = .022). Although tobacco use in moderate/large-breasted patients can severely impair the outcomes of breast reconstruction, the DMOP-SSM approach allows smokers to benefit from SSM with statistically significantly fewer skin flap ischaemic complications and revision operations, and better cosmetic outcomes, than WP-SSM.
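The group comparisons above use Fisher's exact test on 2x2 complication tables; a self-contained sketch of the two-sided test (using the common "sum of tables no more probable than the observed one" convention, which is one of several two-sided definitions):

```python
from math import comb

def fisher_exact_two_sided(a, b, c, d):
    """Two-sided Fisher's exact p-value for the 2x2 table [[a, b], [c, d]],
    e.g. [complications, none] in two surgical groups."""
    n = a + b + c + d
    row1, col1 = a + b, a + c

    def p_table(x):  # hypergeometric probability of a table with cell (1,1) = x
        return comb(col1, x) * comb(n - col1, row1 - x) / comb(n, row1)

    p_obs = p_table(a)
    lo, hi = max(0, row1 - (n - col1)), min(row1, col1)
    return sum(p for p in (p_table(x) for x in range(lo, hi + 1))
               if p <= p_obs * (1 + 1e-9))
```

The exact test is preferred over chi-square here because several cells (e.g. 3 revisions out of 51) are small.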

  2. Effect of Predraft Ulnar Collateral Ligament Reconstruction on Future Performance in Professional Baseball: A Matched Cohort Comparison.

    PubMed

    Camp, Christopher L; Conte, Stan; D'Angelo, John; Fealy, Stephen A; Ahmad, Christopher S

    2018-05-01

    In recent years, there has been a dramatic rise in the annual number of ulnar collateral ligament (UCL) reconstructions performed in amateur baseball pitchers. Accordingly, increasing numbers of players are entering professional baseball having already undergone the procedure; however, the effect of prior UCL reconstruction on future success remains unknown. (1) To provide an epidemiologic report on baseball players who undergo UCL reconstruction before being selected in the Major League Baseball (MLB) Draft, (2) to define the outcomes in terms of statistical performance, and (3) to compare these results with those of matched controls (ie, non-UCL reconstruction). Cohort study; Level of evidence, 3. The MLB Amateur Draft Database was queried to identify all drafted pitchers who underwent UCL reconstruction before being drafted. For each pitcher drafted from 2005 to 2014 with prior UCL reconstruction, 3 healthy controls with no history of elbow surgery were randomly identified for matched analysis. A number of demographic and performance comparisons were made between these groups. A total of 345 pitchers met inclusion criteria. The annual number of pitchers undergoing predraft UCL reconstructions rose steadily from 2005 to 2016 (P < .001). For matched control analysis, 252 pitchers with a UCL reconstruction and a minimum 2-year follow-up (drafted between 2005 and 2014) were matched to 756 controls (non-UCL reconstruction). As compared with the non-UCL reconstruction group, pitchers who underwent predraft UCL reconstruction reached the MLB level with greater frequency (20% vs 12%, P = .003), and their MLB statistical performances were similar for all measures. Compared with all other pitchers drafted during that period, players who had a predraft UCL reconstruction demonstrated an increased likelihood of reaching progressive levels of play (Full Season A, AA, and MLB) within a given time frame (P < .05 for all). 
The number of UCL reconstructions performed in amateur baseball players before the draft increased year over year for the entire study period. Professional pitchers who underwent UCL reconstruction as amateurs appear to perform at least as well as, if not better than, matched controls without elbow surgery.

  3. Isokinetic Testing in Evaluation Rehabilitation Outcome After ACL Reconstruction.

    PubMed

    Cvjetkovic, Dragana Dragicevic; Bijeljac, Sinisa; Palija, Stanislav; Talic, Goran; Radulovic, Tatjana Nozica; Kosanovic, Milkica Glogovac; Manojlovic, Slavko

    2015-02-01

    Numerous rehabilitation protocols have been used after ACL reconstruction. Isokinetic testing is an objective way to evaluate the dynamic stability of the knee joint and to estimate the quality of the rehabilitation outcome after ACL reconstruction. Our goal was to show the importance of isokinetic testing in evaluating thigh muscle strength in patients who underwent ACL reconstruction and a rehabilitation protocol. In a prospective study, we evaluated 40 subjects divided into two groups. The experimental group consisted of 20 recreational males who underwent ACL reconstruction with a hamstring tendon graft and a rehabilitation protocol 6 months before isokinetic testing. The control group (20 subjects) consisted of healthy recreational males. In all subjects, knee muscle testing was performed on a Biodex System 4 Pro isokinetic dynamometer at velocities of 60°/s and 180°/s. We followed the average peak torque to body weight ratio (PT/BW) and the classic H/Q ratio. Student's t-test was used for statistical analysis. There were statistically significant differences between the groups in all evaluated parameters except the mean PT/BW of the quadriceps at a velocity of 60°/s (p>0.05). Isokinetic testing of the dynamic stabilizers of the knee is needed in the diagnosis and treatment of thigh muscle imbalance. We believe that isokinetic testing provides an objective criterion for return to sport activities after ACL reconstruction.

  4. CT coronary angiography: impact of adapted statistical iterative reconstruction (ASIR) on coronary stenosis and plaque composition analysis.

    PubMed

    Fuchs, Tobias A; Fiechter, Michael; Gebhard, Cathérine; Stehli, Julia; Ghadri, Jelena R; Kazakauskaite, Egle; Herzog, Bernhard A; Husmann, Lars; Gaemperli, Oliver; Kaufmann, Philipp A

    2013-03-01

    To assess the impact of adaptive statistical iterative reconstruction (ASIR) on coronary plaque volume and composition analysis, as well as on stenosis quantification, in high-definition coronary computed tomography angiography (CCTA). We included 50 plaques in 29 consecutive patients who were referred for the assessment of known or suspected coronary artery disease (CAD) with contrast-enhanced CCTA on a 64-slice high-definition CT scanner (Discovery HD 750, GE Healthcare). CCTA scans were reconstructed with standard filtered back projection (FBP) with no ASIR (0 %) or with increasing contributions of ASIR, i.e. 20, 40, 60, 80 and 100 % (no FBP). Plaque analysis (volume, components, and stenosis degree) was performed using previously validated automated software. Mean values for minimal diameter and minimal area, as well as degree of stenosis, did not change significantly across the different ASIR reconstructions. There was virtually no impact of the reconstruction algorithm on mean plaque volume or plaque composition (e.g. soft, intermediate and calcified components). However, with increasing ASIR contribution, the percentage of plaque volume between 401 and 500 HU decreased significantly (p < 0.05). Modern image reconstruction algorithms such as ASIR, which was developed for noise reduction in the latest high-resolution CCTA scans, can be used reliably without interfering with plaque analysis and stenosis severity assessment.
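Plaque component fractions of the kind analysed above come from binning voxel attenuation into HU ranges; a sketch of that bookkeeping (the window boundaries below are illustrative placeholders, not the validated software's calibrated thresholds):

```python
import numpy as np

# Illustrative HU windows for plaque components (placeholder boundaries,
# not the thresholds used by the automated plaque-analysis software).
COMPONENTS = {"soft": (-100, 50), "intermediate": (51, 130), "calcified": (131, 1000)}

def component_volumes(hu_values, voxel_volume_mm3):
    """Volume per plaque component from the HU values of segmented plaque voxels."""
    hu = np.asarray(hu_values)
    return {name: float(((hu >= lo) & (hu <= hi)).sum() * voxel_volume_mm3)
            for name, (lo, hi) in COMPONENTS.items()}
```

Narrow sub-windows such as the 401-500 HU band discussed above can be added the same way, which is where a reconstruction algorithm's noise behaviour becomes visible.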

  5. Model-Free Reconstruction of Excitatory Neuronal Connectivity from Calcium Imaging Signals

    PubMed Central

    Stetter, Olav; Battaglia, Demian; Soriano, Jordi; Geisel, Theo

    2012-01-01

    A systematic assessment of global neural network connectivity through direct electrophysiological assays has remained technically infeasible, even in simpler systems like dissociated neuronal cultures. We introduce an improved algorithmic approach based on Transfer Entropy to reconstruct structural connectivity from network activity monitored through calcium imaging. In this study we focus on the inference of excitatory synaptic links. Based on information theory, our method requires no prior assumptions on the statistics of neuronal firing and neuronal connections. The performance of our algorithm is benchmarked on surrogate time series of calcium fluorescence generated by the simulated dynamics of a network with known ground-truth topology. We find that the functional network topology revealed by Transfer Entropy depends qualitatively on the time-dependent dynamic state of the network (bursting or non-bursting). Thus, by conditioning with respect to the global mean activity, we improve the performance of our method. This allows us to focus the analysis on specific dynamical regimes of the network in which the inferred functional connectivity is shaped by monosynaptic excitatory connections, rather than by collective synchrony. Our method can discriminate between actual causal influences between neurons and spurious non-causal correlations due to light-scattering artifacts, which inherently affect the quality of fluorescence imaging. Compared to other reconstruction strategies such as cross-correlation or Granger Causality methods, our method based on improved Transfer Entropy is remarkably more accurate. In particular, it provides a good estimation of the excitatory network clustering coefficient, allowing for discrimination between weakly and strongly clustered topologies. 
Finally, we demonstrate the applicability of our method to analyses of real recordings of in vitro disinhibited cortical cultures, where we suggest that excitatory connections are characterized by an elevated (although not extreme) level of clustering compared to a random graph and can be markedly non-local. PMID:22927808
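A minimal plug-in Transfer Entropy estimator for binarized activity (history length 1) illustrates the directed measure this study builds on; the actual method additionally conditions on global mean activity and preprocesses the calcium signal, neither of which is shown here:

```python
import numpy as np
from collections import Counter

def transfer_entropy(x, y):
    """Plug-in estimate of TE(x -> y) in bits for discrete series with
    history length 1, i.e. the conditional mutual information
    I(y_{t+1}; x_t | y_t)."""
    x, y = list(x), list(y)
    triples = Counter(zip(y[1:], y[:-1], x[:-1]))
    pairs_yx = Counter(zip(y[:-1], x[:-1]))
    pairs_yy = Counter(zip(y[1:], y[:-1]))
    singles = Counter(y[:-1])
    n = len(y) - 1
    te = 0.0
    for (y1, y0, x0), c in triples.items():
        p_joint = c / n
        p_full = c / pairs_yx[(y0, x0)]            # p(y1 | y0, x0)
        p_self = pairs_yy[(y1, y0)] / singles[y0]  # p(y1 | y0)
        te += p_joint * np.log2(p_full / p_self)
    return te
```

The asymmetry of the estimate between the two directions is what distinguishes a driver from its target, unlike symmetric cross-correlation.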

  6. Statistical analysis and data mining of digital reconstructions of dendritic morphologies.

    PubMed

    Polavaram, Sridevi; Gillette, Todd A; Parekh, Ruchi; Ascoli, Giorgio A

    2014-01-01

    Neuronal morphology is diverse among animal species, developmental stages, brain regions, and cell types. The geometry of individual neurons also varies substantially even within the same cell class. Moreover, specific histological, imaging, and reconstruction methodologies can differentially affect morphometric measures. The quantitative characterization of neuronal arbors is necessary for an in-depth understanding of the structure-function relationship in nervous systems. The large collection of community-contributed digitally reconstructed neurons available at NeuroMorpho.Org constitutes a "big data" research opportunity for neuroscience discovery beyond the approaches typically pursued in single laboratories. To illustrate this potential and the related challenges, we present a database-wide statistical analysis of dendritic arbors enabling the quantification of major morphological similarities and differences across broadly adopted metadata categories. Furthermore, we adopt a complementary unsupervised approach based on clustering and dimensionality reduction to identify the main morphological parameters leading to the most statistically informative structural classification. We find that specific combinations of measures related to branching density, overall size, tortuosity, bifurcation angles, arbor flatness, and topological asymmetry can capture anatomically and functionally relevant features of dendritic trees. The reported results represent only a small fraction of the relationships available for data exploration and hypothesis testing enabled by the sharing of digital morphological reconstructions.
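The unsupervised pipeline described above (dimensionality reduction followed by clustering) can be illustrated with a plain PCA-plus-k-means toy on a morphometric feature matrix (rows = neurons, columns = measures); a schematic sketch, not NeuroMorpho.Org's actual analysis:

```python
import numpy as np

def pca_project(X, k):
    """Standardize features and project onto the top-k principal axes."""
    Z = (X - X.mean(axis=0)) / X.std(axis=0)
    eigvals, eigvecs = np.linalg.eigh(np.cov(Z, rowvar=False))
    top = np.argsort(eigvals)[::-1][:k]
    return Z @ eigvecs[:, top]

def kmeans(X, k, iters=50):
    """Lloyd's k-means with deterministic farthest-point initialization."""
    centers = [X[0]]
    for _ in range(1, k):
        d = np.min([((X - c) ** 2).sum(axis=1) for c in centers], axis=0)
        centers.append(X[np.argmax(d)])
    centers = np.array(centers)
    for _ in range(iters):
        labels = np.argmin(((X[:, None] - centers[None]) ** 2).sum(-1), axis=1)
        centers = np.array([X[labels == j].mean(axis=0) if np.any(labels == j)
                            else centers[j] for j in range(k)])
    return labels
```

Standardizing before PCA matters here because morphometric measures (e.g. total length vs. bifurcation angle) live on very different scales.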

  7. Temperature changes in Poland from the 16th to the 20th centuries

    NASA Astrophysics Data System (ADS)

    Przybylak, Rajmund; Majorowicz, Jacek; Wójcik, Gabriel; Zielski, Andrzej; Choryczewski, Waldemar; Marciniak, Kazimierz; Nowosad, Wiesław; Oliński, Piotr; Syta, Krzysztof

    2005-05-01

    A standardized tree-ring width chronology of the Scots pine (Pinus sylvestris L.) along with different types of documentary evidence (e.g. annals, chronicles, diaries, private correspondence, records of public administration, early newspapers) have been used to reconstruct air temperature in Poland. The ground surface temperature (GST) history has been reconstructed based on the continuous temperature logs from 13 wells, using a new method developed recently by Harris and Chapman (1998; Journal of Geophysical Research 103: 7371-7383), which is compared with the functional space inversion (FSI) method applied to all available Polish temperature-depth profiles analysed before. Response function calculations conducted for trees growing in Poland (except in mountainous regions) reveal a statistically significant correlation between the annual ring widths of the Scots pine and the monthly mean air temperatures, particularly from February and March, but also from January and April. Therefore, it was only possible to reconstruct the mean January-April air temperature. The following periods featured a warm late winter/early spring: 1530-90, 1656-70 (the warmest period), 1820-50, 1910-40, and after 1985. On the other hand, a cold January-April occurred in the following periods: 1600-50, 1760-75, 1800-15, 1880-1900, and 1950-80. Reconstructions of thermal conditions using documentary evidence were carried out for winter (December-February) and summer (June-August) from 1501 to 1840 and, therefore, their results cannot be directly compared with reconstructions based on tree-ring widths. Winter temperatures in this period were colder than air temperatures in the 20th century. On the other hand, historical summers were generally warmer than those occurring in the 20th century. Such situations dominated in the 16th and 17th centuries, as well as at the turn of the 18th and 19th centuries. 
Throughout almost the entire period from 1501 to 1840, the thermal continentality of the climate in Poland was greater than in the 20th century. GST reconstructions show that the average pre-instrumental level (1500-1778) is about 0.9-1.5 °C lower than the mean air temperature for the period 1951-81. The lower amplitude of GST warming (0.9 +/- 0.1 °C) results from the individual and simultaneous inversions of well temperature data using the FSI method. A very good correspondence has been found between the series of annual mean GSTs from the FSI method and the mean seasonal air temperatures reconstructed using documentary evidence.
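The response function calculation mentioned above reduces, in its simplest form, to correlating the ring-width chronology with each month's mean temperature series; a sketch with synthetic data (the month keys and series below are made up for illustration):

```python
import numpy as np

def response_function(ring_width, monthly_temps):
    """Pearson correlation of a ring-width chronology with each month's
    mean temperature series over the common years."""
    return {month: float(np.corrcoef(ring_width, temps)[0, 1])
            for month, temps in monthly_temps.items()}
```

Only months whose correlation is statistically significant (January-April in the study above) can then serve as reconstruction targets.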

  8. Effect of Medial Patellofemoral Ligament Reconstruction Method on Patellofemoral Contact Pressures and Kinematics.

    PubMed

    Stephen, Joanna M; Kittl, Christoph; Williams, Andy; Zaffagnini, Stefano; Marcheggiani Muccioli, Giulio Maria; Fink, Christian; Amis, Andrew A

    2016-05-01

    There remains a lack of evidence regarding the optimal method when reconstructing the medial patellofemoral ligament (MPFL) and whether some graft constructs can be more forgiving of surgical errors, such as overtensioning or tunnel malpositioning, than others. The null hypothesis was that there would not be a significant difference between reconstruction methods (eg, graft type and fixation) in the adverse biomechanical effects (eg, patellar maltracking or elevated articular contact pressure) resulting from surgical errors such as tunnel malpositioning or graft overtensioning. Controlled laboratory study. Nine fresh-frozen cadaveric knees were placed on a customized testing rig, where the femur was fixed but the tibia could be moved freely from 0° to 90° of flexion. Individual quadriceps heads and the iliotibial tract were separated and loaded to 205 N of tension using a weighted pulley system. Patellofemoral contact pressures and patellar tracking were measured at 0°, 10°, 20°, 30°, 60°, and 90° of flexion using pressure-sensitive film inserted between the patella and trochlea, in conjunction with an optical tracking system. The MPFL was transected and then reconstructed in a randomized order using (1) a double-strand gracilis tendon, (2) a quadriceps tendon, and (3) a tensor fasciae latae allograft. Pressure maps and tracking measurements were recorded for each reconstruction method at 2 N and 10 N of tension and with the graft positioned in the anatomic, proximal, and distal femoral tunnel positions. Statistical analysis was undertaken using repeated-measures analyses of variance, Bonferroni post hoc analyses, and paired t tests. Anatomically placed grafts during MPFL reconstruction tensioned to 2 N resulted in the restoration of intact medial joint contact pressures and patellar tracking for all 3 graft types investigated (P > .050). 
However, femoral tunnels positioned proximal or distal to the anatomic origin resulted in significant increases in the mean medial joint contact pressure, medial patellar tilt, and medial patellar translation during knee flexion or extension, respectively (P < .050), regardless of graft type, as did tensioning to 10 N. The importance of the surgical technique, specifically correct femoral tunnel positioning and graft tensioning, in restoring normal patellofemoral joint (PFJ) kinematics and articular cartilage contact stresses is evident, and the type of MPFL graft appeared less important. The correct femoral tunnel position and graft tension for restoring normal PFJ kinematics and articular cartilage contact stresses appear to be more important than graft selection during MPFL reconstruction. These findings emphasize the importance of the surgical technique when undertaking this procedure. © 2016 The Author(s).

  9. A statistically harmonized alignment-classification in image space enables accurate and robust alignment of noisy images in single particle analysis.

    PubMed

    Kawata, Masaaki; Sato, Chikara

    2007-06-01

    In determining the three-dimensional (3D) structure of macromolecular assemblies in single particle analysis, a large representative dataset of two-dimensional (2D) average images from a huge number of raw images is key to high resolution. Because alignments prior to averaging are computationally intensive, currently available multireference alignment (MRA) software does not survey every possible alignment. This leads to misaligned images, creating blurred averages and reducing the quality of the final 3D reconstruction. We present a new method in which multireference alignment is harmonized with classification (multireference multiple alignment: MRMA). This method enables a statistical comparison of multiple alignment peaks, reflecting the similarities between each raw image and a set of reference images. Among the selected alignment candidates for each raw image, misaligned images are statistically excluded, based on the principle that aligned raw images of similar projections have a dense distribution around the correctly aligned coordinates in image space. The newly developed method was examined for accuracy and speed using model image sets with various signal-to-noise ratios, and with electron microscope images of the Transient Receptor Potential C3 channel and the sodium channel. In every data set, the newly developed method outperformed conventional methods in robustness against noise and in speed, creating 2D average images of higher quality. This statistically harmonized alignment-classification combination should greatly improve the quality of single particle analysis.

  10. Indirect Reconstruction of Pore Morphology for Parametric Computational Characterization of Unidirectional Porous Iron.

    PubMed

    Kovačič, Aljaž; Borovinšek, Matej; Vesenjak, Matej; Ren, Zoran

    2018-01-26

    This paper addresses the problem of reconstructing realistic, irregular pore geometries of lotus-type porous iron for computer models that allow for simple porosity and pore size variation in computational characterization of their mechanical properties. The presented methodology uses image-recognition algorithms for the statistical analysis of pore morphology in real material specimens, from which a unique fingerprint of pore morphology at a certain porosity level is derived. The representative morphology parameter is introduced and used for the indirect reconstruction of realistic and statistically representative pore morphologies, which can be used for the generation of computational models with an arbitrary porosity. Such models were subjected to parametric computer simulations to characterize the dependence of engineering elastic modulus on the porosity of lotus-type porous iron. The computational results are in excellent agreement with experimental observations, which confirms the suitability of the presented methodology of indirect pore geometry reconstruction for computational simulations of similar porous materials.

  11. A 2000-year European Mean Summer Temperature Reconstruction from the PAGES 2k Regional Network and Comparison to Millennium-Length Forced Model Simulations

    NASA Astrophysics Data System (ADS)

    Smerdon, J. E.; Büntgen, U.; Ljungqvist, F. C.; Esper, J.; Fernández-Donado, L.; Gonzalez-Rouco, F. J.; Luterbacher, J.; McCarroll, D.; Wagner, S.; Wahl, E. R.; Wanner, H.; Werner, J.; Zorita, E.

    2012-12-01

    A reconstruction of mean European summer (JJA) land temperatures from 138 B.C.E. to 2003 C.E. is presented and compared to 37 forced transient simulations of the last millennium from coupled General Circulation Models (CGCMs). Eleven annually resolved tree-ring and documentary records from ten European countries/regions were used for the reconstruction and compiled as part of the Euro_Med working group contribution to the PAGES 2k Regional Network. Records were selected based upon their summer temperature signal, annual resolution, and time-continuous sampling. All tree-ring data were detrended using the Regional Curve Standardization (RCS) method to retain low-frequency variance in the resulting mean chronologies. The calibration time series was the area-weighted JJA temperature computed from the CRUTEM4v dataset over a European land domain (35°-70°N, 10°W-40°E). A nested 'Composite-Plus-Scale' reconstruction was derived using nine nests reflecting the availability of predictors back in time. Each nest was calculated by standardizing the available predictor series over the calibration interval, and subsequently calculating a weighted composite in which each proxy was multiplied by its correlation with the target index. The CPS methodology was implemented using a resampling scheme that uses 104 years for calibration. The initial calibration period extended from 1850-1953 C.E. and was incremented by one year until reaching the final period of 1900-2003 C.E., yielding a total of 51 reconstructions for each nest. Within each calibration step, the 50 years excluded from calibration were used for validation. Validation statistics across all reconstruction ensemble members within each nest indicate skillful reconstructions (RE: 0.42-0.64; CE: 0.26-0.54) and are all above the maximum validation statistics achieved in an ensemble of red noise benchmarking experiments. 
Warm periods in the derived reconstruction during the 1st, 2nd, and 7th-12th centuries are comparable to the warm summer temperatures of the mid-20th century, although the 2003 summer remains the warmest single summer over the duration of the reconstruction. A relative period of cold summer temperatures is also noted from the 14th-19th centuries, consistent with the expected timing of the Little Ice Age. The nested CPS reconstruction is also compared to a 37-member ensemble of millennium-length forced transient simulations from CGCMs, including eleven simulations from the collection of CMIP5/PMIP3 last-millennium experiments. The simulations are separated based on their use of strong or weak scaling of total solar irradiance (TSI) forcing over the last millennium. Although both ensembles of simulated mean European temperatures compare well with the nested CPS reconstruction, there is some evidence of better agreement with the ensemble using strong TSI as forcing.
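The core 'Composite-Plus-Scale' step described above can be sketched as follows. This is a minimal illustration only, under our own naming and with the paper's nesting and 104-year resampling scheme omitted: each proxy is standardized over the calibration window, weighted by its correlation with the target index, composited, and the composite rescaled to the target's calibration mean and standard deviation.

```python
import numpy as np

def cps_reconstruct(proxies, target, cal):
    """Toy Composite-Plus-Scale step.
    proxies: (n_years, n_proxies) array; target: (n_years,) array;
    cal: boolean mask selecting calibration years."""
    # standardize each proxy over the calibration interval
    z = (proxies - proxies[cal].mean(axis=0)) / proxies[cal].std(axis=0)
    # weight each proxy by its correlation with the target index
    w = np.array([np.corrcoef(p[cal], target[cal])[0, 1] for p in z.T])
    composite = z @ w / np.abs(w).sum()
    # scale the composite to the target's calibration statistics
    rec = (composite - composite[cal].mean()) / composite[cal].std()
    return rec * target[cal].std() + target[cal].mean()
```

In the paper this step is repeated per nest (as predictor availability shrinks back in time) and per resampled calibration window.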

  12. Clinical evaluation of 4D PET motion compensation strategies for treatment verification in ion beam therapy

    NASA Astrophysics Data System (ADS)

    Gianoli, Chiara; Kurz, Christopher; Riboldi, Marco; Bauer, Julia; Fontana, Giulia; Baroni, Guido; Debus, Jürgen; Parodi, Katia

    2016-06-01

    A clinical trial named PROMETHEUS is currently ongoing for inoperable hepatocellular carcinoma (HCC) at the Heidelberg Ion Beam Therapy Center (HIT, Germany). In this framework, 4D PET-CT datasets are acquired shortly after the therapeutic treatment to compare the irradiation induced PET image with a Monte Carlo PET prediction resulting from the simulation of treatment delivery. The extremely low count statistics of this measured PET image represents a major limitation of this technique, especially in the presence of target motion. The purpose of this study is to investigate two different 4D PET motion compensation strategies towards the recovery of the whole count statistics for improved image quality of the 4D PET-CT datasets for PET-based treatment verification. The well-known 4D-MLEM reconstruction algorithm, embedding the motion compensation in the reconstruction process of 4D PET sinograms, was compared to a recently proposed pre-reconstruction motion compensation strategy, which applies the motion compensation directly in the sinogram domain. With reference to phantom and patient datasets, advantages and drawbacks of the two 4D PET motion compensation strategies were identified. The 4D-MLEM algorithm was strongly affected by inverse inconsistency of the motion model but demonstrated the capability to mitigate the noise-break-up effects. Conversely, the pre-reconstruction warping showed less sensitivity to inverse inconsistency but also more noise in the reconstructed images. The comparison was performed by relying on quantification of PET activity and ion range difference, typically yielding similar results. The study demonstrated that treatment verification of moving targets could be accomplished by relying on the whole count statistics image quality, as obtained from the application of 4D PET motion compensation strategies. 
In particular, the pre-reconstruction warping was shown to represent a promising choice when combined with intra-reconstruction smoothing.

  13. Summary Statistics for Fun Dough Data Acquired at LLNL

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kallman, J S; Morales, K E; Whipple, R E

    Using x-ray computerized tomography (CT), we have characterized the x-ray linear attenuation coefficients (LAC) of a Play Dough™-like product, Fun Dough™, designated as PD. Table 1 gives the first-order statistics for each of four CT measurements, estimated with a Gaussian kernel density estimator (KDE) analysis. The mean values of the LAC range from a high of about 2100 LMHU_D at 100 kVp to a low of about 1100 LMHU_D at 300 kVp. The standard deviation of each measurement is around 1% of the mean. The entropy covers the range from 3.9 to 4.6. Ordinarily, we would model the LAC of the material and compare the modeled values to the measured values. In this case, however, we did not have the composition of the material and therefore did not model the LAC. Using a method recently proposed by Lawrence Livermore National Laboratory (LLNL), we estimate the value of the effective atomic number, Z_eff, to be near 8.5. LLNL prepared about 50 mL of the Fun Dough™ in a polypropylene vial and firmly compressed it immediately prior to the x-ray measurements. Still, layers can plainly be seen in the reconstructed images, indicating that the bulk density of the material in the container is affected by voids and bubbles. We used the computer program IMGREC to reconstruct the CT images. The values of the key parameters used in the data capture and image reconstruction are given in this report. Additional details may be found in the experimental SOP and a separate document. To characterize the statistical distribution of LAC values in each CT image, we first isolated an 80% central-core segment of volume elements ('voxels') lying completely within the specimen, away from the walls of the polypropylene vial. All of the voxels within this central core, including those comprised of voids and inclusions, are included in the statistics. 
We then calculated the mean value, standard deviation and entropy for (a) the four image segments and for (b) their digital gradient images. (A digital gradient image of a given image was obtained by taking the absolute value of the difference between the initial image and that same image offset by one voxel horizontally, parallel to the rows of the x-ray detector array.) The statistics of the initial image of LAC values are called 'first order statistics;' those of the gradient image, 'second order statistics.'
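The first- and second-order statistics defined above can be sketched as follows. Note that the report estimates densities with a Gaussian KDE; this sketch substitutes a plain histogram for the entropy estimate, and all names are ours:

```python
import numpy as np

def image_stats(img, bins=256):
    """Mean, standard deviation, and histogram entropy (bits) of an
    image ('first order') and of its digital gradient image
    ('second order'): |img - img offset by one voxel along the rows|."""
    def stats(a):
        hist, _ = np.histogram(a, bins=bins)
        p = hist[hist > 0] / a.size          # normalized occupied bins
        entropy = -(p * np.log2(p)).sum()
        return a.mean(), a.std(), entropy
    grad = np.abs(img[:, 1:] - img[:, :-1])  # horizontal one-voxel offset
    return {"first": stats(img), "second": stats(grad)}
```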

  14. Nonlinear dynamic analysis of voices before and after surgical excision of vocal polyps

    NASA Astrophysics Data System (ADS)

    Zhang, Yu; McGilligan, Clancy; Zhou, Liang; Vig, Mark; Jiang, Jack J.

    2004-05-01

    Phase space reconstruction, correlation dimension, and second-order entropy, methods from nonlinear dynamics, are used to analyze sustained vowels generated by patients before and after surgical excision of vocal polyps. Two conventional acoustic perturbation parameters, jitter and shimmer, are also employed to analyze voices before and after surgery. Presurgical and postsurgical analyses of jitter, shimmer, correlation dimension, and second-order entropy are statistically compared. Correlation dimension and second-order entropy show a statistically significant decrease after surgery, indicating reduced complexity and higher predictability of postsurgical voice dynamics. There is not a significant postsurgical difference in shimmer, although jitter shows a significant postsurgical decrease. The results suggest that jitter and shimmer should be applied to analyze disordered voices with caution; however, nonlinear dynamic methods may be useful for analyzing abnormal vocal function and quantitatively evaluating the effects of surgical excision of vocal polyps.
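The nonlinear-dynamics measures named above rest on time-delay (Takens) embedding and the correlation sum; the log-log slope of the correlation sum in r estimates the correlation dimension D2. A minimal sketch, not the authors' implementation:

```python
import numpy as np

def delay_embed(x, dim, tau):
    """Phase-space reconstruction by time-delay embedding:
    rows are vectors [x(i), x(i+tau), ..., x(i+(dim-1)*tau)]."""
    n = len(x) - (dim - 1) * tau
    return np.column_stack([x[i * tau : i * tau + n] for i in range(dim)])

def correlation_sum(pts, r):
    """Grassberger-Procaccia correlation sum C(r): the fraction of
    distinct point pairs closer than r."""
    d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
    iu = np.triu_indices(len(pts), k=1)
    return (d[iu] < r).mean()
```

Evaluating C(r) over a range of r and fitting the slope of log C(r) versus log r gives the correlation-dimension estimate used to compare pre- and postsurgical voices.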

  15. Aseptic Freeze-Dried versus Sterile Wet-Packaged Human Cadaveric Acellular Dermal Matrix in Immediate Tissue Expander Breast Reconstruction: A Propensity Score Analysis.

    PubMed

    Hanson, Summer E; Meaike, Jesse D; Selber, Jesse C; Liu, Jun; Li, Liang; Hassid, Victor J; Baumann, Donald P; Butler, Charles E; Garvey, Patrick B

    2018-05-01

    Although multiple sources of acellular dermal matrix exist, it is unclear how processing impacts complication rates. The authors compared complications between two preparations of human cadaveric acellular dermal matrix (freeze dried and ready-to-use) in immediate tissue expander breast reconstruction to analyze the effect of processing on complications. The authors retrospectively reviewed all alloplastic breast reconstructions with freeze-dried or ready-to-use human acellular dermal matrices between 2006 and 2016. The primary outcome measure was surgical-site occurrence defined as seroma, skin dehiscence, surgical-site infection, or reconstruction failure. The two groups were compared before and after propensity score matching. The authors included 988 reconstructions (freeze-dried, 53.8 percent; ready-to-use, 46.2 percent). Analysis of 384 propensity score-matched pairs demonstrated a slightly higher rate of surgical-site occurrence (21.4 percent versus 16.7 percent; p = 0.10) and surgical-site infection (9.6 percent versus 7.8 percent; p = 0.13) in the freeze-dried group than in the ready-to-use group, but the difference was not significant. However, failure was significantly higher for the freeze-dried versus ready-to-use group (7.8 percent versus 4.4 percent; p = 0.050). This is the largest study comparing the outcomes of alloplastic breast reconstruction using human acellular dermal matrix materials prepared by different methods. The authors demonstrated higher early complications with aseptic, freeze-dried matrix than with sterile ready-to-use matrix; reconstructive failure was the only outcome to achieve statistical significance. The authors conclude that acellular dermal matrix preparation has an independent impact on patient outcomes in their comparison of one company's product. Therapeutic, III.

  16. Computer-assisted versus conventional free fibula flap technique for craniofacial reconstruction: an outcomes comparison.

    PubMed

    Seruya, Mitchel; Fisher, Mark; Rodriguez, Eduardo D

    2013-11-01

    There has been rising interest in computer-aided design/computer-aided manufacturing for preoperative planning and execution of osseous free flap reconstruction. The purpose of this study was to compare outcomes between computer-assisted and conventional fibula free flap techniques for craniofacial reconstruction. A two-center, retrospective review was carried out on patients who underwent fibula free flap surgery for craniofacial reconstruction from 2003 to 2012. Patients were categorized by the type of reconstructive technique: conventional (between 2003 and 2009) or computer-aided design/computer-aided manufacturing (from 2010 to 2012). Demographics, surgical factors, and perioperative and long-term outcomes were compared. A total of 68 patients underwent microsurgical craniofacial reconstruction: 58 conventional and 10 computer-aided design and manufacturing fibula free flaps. By demographics, patients undergoing the computer-aided design/computer-aided manufacturing method were significantly older and had a higher rate of radiotherapy exposure compared with conventional patients. Intraoperatively, the median number of osteotomies was significantly higher (2.0 versus 1.0, p=0.002) and the median ischemia time was significantly shorter (120 minutes versus 170 minutes, p=0.004) for the computer-aided design/computer-aided manufacturing technique compared with conventional techniques; operative times were shorter for patients undergoing the computer-aided design/computer-aided manufacturing technique, although this did not reach statistical significance. Perioperative and long-term outcomes were equivalent for the two groups, notably, hospital length of stay, recipient-site infection, partial and total flap loss, and rate of soft-tissue and bony tissue revisions. 
Microsurgical craniofacial reconstruction using a computer-assisted fibula flap technique yielded significantly shorter ischemia times amidst a higher number of osteotomies compared with conventional techniques. Therapeutic, III.

  17. Diagnostic Accuracy of CT Enterography for Active Inflammatory Terminal Ileal Crohn Disease: Comparison of Full-Dose and Half-Dose Images Reconstructed with FBP and Half-Dose Images with SAFIRE.

    PubMed

    Gandhi, Namita S; Baker, Mark E; Goenka, Ajit H; Bullen, Jennifer A; Obuchowski, Nancy A; Remer, Erick M; Coppa, Christopher P; Einstein, David; Feldman, Myra K; Kanmaniraja, Devaraju; Purysko, Andrei S; Vahdat, Noushin; Primak, Andrew N; Karim, Wadih; Herts, Brian R

    2016-08-01

    Purpose To compare the diagnostic accuracy and image quality of computed tomographic (CT) enterographic images obtained at half dose and reconstructed with filtered back projection (FBP) and sinogram-affirmed iterative reconstruction (SAFIRE) with those of full-dose CT enterographic images reconstructed with FBP for active inflammatory terminal or neoterminal ileal Crohn disease. Materials and Methods This retrospective study was compliant with HIPAA and approved by the institutional review board. The requirement to obtain informed consent was waived. Ninety subjects (45 with active terminal ileal Crohn disease and 45 without Crohn disease) underwent CT enterography with a dual-source CT unit. The reference standard for confirmation of active Crohn disease was active terminal ileal Crohn disease based on ileocolonoscopy or established Crohn disease and imaging features of active terminal ileal Crohn disease. Data from both tubes were reconstructed with FBP (100% exposure); data from the primary tube (50% exposure) were reconstructed with FBP and SAFIRE strengths 3 and 4, yielding four datasets per CT enterographic examination. The mean volume CT dose index (CTDIvol) and size-specific dose estimate (SSDE) at full dose were 13.1 mGy (median, 7.36 mGy) and 15.9 mGy (median, 13.06 mGy), respectively, and those at half dose were 6.55 mGy (median, 3.68 mGy) and 7.95 mGy (median, 6.5 mGy). Images were subjectively evaluated by eight radiologists for quality and diagnostic confidence for Crohn disease. Areas under the receiver operating characteristic curves (AUCs) were estimated, and the multireader, multicase analysis of variance method was used to compare reconstruction methods on the basis of a noninferiority margin of 0.05. Results The mean AUCs with half-dose scans (FBP, 0.908; SAFIRE 3, 0.935; SAFIRE 4, 0.924) were noninferior to the mean AUC with full-dose FBP scans (0.908; P < .003). 
The proportion of images with inferior quality was significantly higher with all half-dose reconstructions than with full-dose FBP (mean proportion: 0.117 for half-dose FBP, 0.054 for half-dose SAFIRE 3, 0.054 for half-dose SAFIRE 4, and 0.017 for full-dose FBP; P < .001). Conclusion The diagnostic accuracy of half-dose CT enterography with FBP and SAFIRE is statistically noninferior to that of full-dose CT enterography for active inflammatory terminal ileal Crohn disease, despite an inferior subjective image quality. (©) RSNA, 2016 Online supplemental material is available for this article.

  18. A study of the effects of strong magnetic fields on the image resolution of PET scanners

    NASA Astrophysics Data System (ADS)

    Burdette, Don J.

    Very high-resolution images can be achieved in small animal PET systems utilizing solid state silicon pad detectors. In such systems using detectors with sub-millimeter intrinsic resolutions, the range of the positron is the largest contribution to the image blur. The size of the positron range effect depends on the initial positron energy and hence the radioactive tracer used. For higher energy positron emitters, such as 68Ga and 94mTc, the variation of the annihilation point dominates the spatial resolution. In this study two techniques are investigated to improve the image resolution of PET scanners limited by the range of the positron. First, the positron range can be reduced by embedding the PET field of view in a strong magnetic field. We have developed a silicon pad detector based PET instrument that can operate in strong magnetic fields with an image resolution of 0.7 mm FWHM to study this effect. Second, iterative reconstruction methods can be used to statistically correct for the range of the positron. Both strong magnetic fields and iterative reconstruction algorithms that statistically account for the positron range distribution are investigated in this work.

  19. Revealing physical interaction networks from statistics of collective dynamics

    PubMed Central

    Nitzan, Mor; Casadiego, Jose; Timme, Marc

    2017-01-01

    Revealing physical interactions in complex systems from observed collective dynamics constitutes a fundamental inverse problem in science. Current reconstruction methods require access to a system’s model or dynamical data at a level of detail often not available. We exploit changes in invariant measures, in particular distributions of sampled states of the system in response to driving signals, and use compressed sensing to reveal physical interaction networks. Dynamical observations following driving suffice to infer physical connectivity even if they are temporally disordered, are acquired at large sampling intervals, and stem from different experiments. Testing various nonlinear dynamic processes emerging on artificial and real network topologies indicates high reconstruction quality for existence as well as type of interactions. These results advance our ability to reveal physical interaction networks in complex synthetic and natural systems. PMID:28246630
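Compressed sensing, as invoked above, recovers a sparse vector of interaction strengths from comparatively few measurements by ℓ1-regularized regression. A generic sketch using iterative soft-thresholding (ISTA) for the lasso, not the authors' exact solver, with all names and data illustrative:

```python
import numpy as np

def ista(A, y, lam=0.05, step=None, iters=500):
    """ISTA for the lasso: min_x 0.5*||Ax - y||^2 + lam*||x||_1.
    A collects measurement vectors (e.g. sampled responses to driving),
    x the unknown sparse interaction strengths."""
    if step is None:
        step = 1.0 / np.linalg.norm(A, 2) ** 2   # 1/L, L = ||A||_2^2
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        g = x - step * A.T @ (A @ x - y)                          # gradient step
        x = np.sign(g) * np.maximum(np.abs(g) - step * lam, 0.0)  # shrinkage
    return x
```

With fewer measurements than unknowns, the ℓ1 penalty still identifies the few true interactions, which is what makes reconstruction from sparse, disordered observations feasible.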

  20. Molecular Phylogenetics: Concepts for a Newcomer.

    PubMed

    Ajawatanawong, Pravech

    Molecular phylogenetics is the study of evolutionary relationships among organisms using molecular sequence data. The aim of this review is to introduce the important terminology and general concepts of tree reconstruction to biologists who lack a strong background in the field of molecular evolution. Some modern phylogenetic programs are easy to use because of their user-friendly interfaces, but understanding the phylogenetic algorithms and substitution models, which are based on advanced statistics, is still important for correctly analyzing and interpreting the results. Briefly, there are five general steps in carrying out a phylogenetic analysis: (1) sequence data preparation, (2) sequence alignment, (3) choosing a phylogenetic reconstruction method, (4) identification of the best tree, and (5) evaluating the tree. Concepts in this review enable biologists to grasp the basic ideas behind phylogenetic analysis and also help provide a sound basis for discussions with expert phylogeneticists.
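Distance-based reconstruction methods in step (3) start from pairwise distances under a substitution model. The simplest such model, Jukes-Cantor (JC69), corrects the observed proportion of differing sites p for multiple substitutions at the same site via d = -(3/4) ln(1 - (4/3) p):

```python
import math

def jukes_cantor(seq1, seq2):
    """Jukes-Cantor (JC69) evolutionary distance between two aligned
    sequences of equal length (valid while p < 0.75)."""
    assert len(seq1) == len(seq2)
    p = sum(a != b for a, b in zip(seq1, seq2)) / len(seq1)
    return -0.75 * math.log(1.0 - (4.0 / 3.0) * p)
```

A matrix of such distances then feeds a clustering method such as neighbor joining to produce the tree.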

  1. Joint sparse reconstruction of multi-contrast MRI images with graph based redundant wavelet transform.

    PubMed

    Lai, Zongying; Zhang, Xinlin; Guo, Di; Du, Xiaofeng; Yang, Yonggui; Guo, Gang; Chen, Zhong; Qu, Xiaobo

    2018-05-03

    Multi-contrast images in magnetic resonance imaging (MRI) provide abundant contrast information reflecting the characteristics of the internal tissues of human bodies, and thus have been widely utilized in clinical diagnosis. However, long acquisition time limits the application of multi-contrast MRI. One efficient way to accelerate data acquisition is to under-sample the k-space data and then reconstruct images with a sparsity constraint. However, images are compromised at high acceleration factors if they are reconstructed individually. We aim to improve the images with a jointly sparse reconstruction and a graph-based redundant wavelet transform (GBRWT). First, a sparsifying transform, GBRWT, is trained to reflect the similarity of tissue structures in multi-contrast images. Second, joint multi-contrast image reconstruction is formulated as an ℓ2,1-norm optimization problem under GBRWT representations. Third, the optimization problem is numerically solved using a derived alternating direction method. Experimental results on synthetic and in vivo MRI data demonstrate that the proposed joint reconstruction method can achieve lower reconstruction errors and better preserve image structures than the compared joint reconstruction methods. Moreover, the proposed method outperforms single image reconstruction with a joint sparsity constraint on multi-contrast images. The proposed method explores the joint sparsity of multi-contrast MRI images under the graph-based redundant wavelet transform and realizes joint sparse reconstruction of multi-contrast images. Experiments demonstrate that the proposed method outperforms the compared joint reconstruction methods as well as individual reconstructions. With this high quality image reconstruction method, it is possible to achieve high acceleration factors by exploiting the complementary information provided by multi-contrast MRI.
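The ℓ2,1-norm is what couples the contrasts: its proximal operator shrinks each coefficient group jointly across all contrasts, so a structure either survives in every contrast or is suppressed in all of them. A minimal sketch of that group soft-thresholding step (the transform and data-fidelity parts of the full alternating-direction algorithm are omitted):

```python
import numpy as np

def prox_l21(X, tau):
    """Proximal operator of tau * ||X||_{2,1}, where each row of X holds
    one coefficient position across all contrasts:
    row -> max(0, 1 - tau / ||row||_2) * row."""
    norms = np.linalg.norm(X, axis=1, keepdims=True)
    scale = np.maximum(0.0, 1.0 - tau / np.maximum(norms, 1e-12))
    return scale * X
```

Rows whose joint energy falls below tau are zeroed in every contrast simultaneously, which is the mechanism behind the joint sparsity constraint.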

  2. Limited-angle effect compensation for respiratory binned cardiac SPECT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Qi, Wenyuan; Yang, Yongyi, E-mail: yy@ece.iit.edu; Wernick, Miles N.

    Purpose: In cardiac single photon emission computed tomography (SPECT), respiratory-binned study is used to combat the motion blur associated with respiratory motion. However, owing to the variability in respiratory patterns during data acquisition, the acquired data counts can vary significantly both among respiratory bins and among projection angles within individual bins. If not properly accounted for, such variation could lead to artifacts similar to limited-angle effect in image reconstruction. In this work, the authors aim to investigate several reconstruction strategies for compensating the limited-angle effect in respiratory binned data for the purpose of reducing the image artifacts. Methods: The authors first consider a model based correction approach, in which the variation in acquisition time is directly incorporated into the imaging model, such that the data statistics are accurately described among both the projection angles and respiratory bins. Afterward, the authors consider an approximation approach, in which the acquired data are rescaled to accommodate the variation in acquisition time among different projection angles while the imaging model is kept unchanged. In addition, the authors also consider the use of a smoothing prior in reconstruction for suppressing the artifacts associated with limited-angle effect. In our evaluation study, the authors first used Monte Carlo simulated imaging with 4D NCAT phantom wherein the ground truth is known for quantitative comparison. The authors evaluated the accuracy of the reconstructed myocardium using a number of metrics, including regional and overall accuracy of the myocardium, uniformity and spatial resolution of the left ventricle (LV) wall, and detectability of perfusion defect using a channelized Hotelling observer. As a preliminary demonstration, the authors also tested the different approaches on five sets of clinical acquisitions. 
Results: The quantitative evaluation results show that the three compensation methods could all, but to different extents, reduce the reconstruction artifacts over no compensation. In particular, the model based approach reduced the mean-squared-error of the reconstructed myocardium by as much as 40%. Compared to the approach of data rescaling, the model based approach further improved both the overall and regional accuracy of the myocardium; it also further improved the lesion detectability and the uniformity of the LV wall. When ML reconstruction was used, the model based approach was notably more effective for improving the LV wall; when MAP reconstruction was used, the smoothing prior could reduce the noise level and artifacts with little or no increase in bias, but at the cost of a slight resolution loss of the LV wall. The improvements in image quality by the different compensation methods were also observed in the clinical acquisitions. Conclusions: Compensating for the uneven distribution of acquisition time among both projection angles and respiratory bins can effectively reduce the limited-angle artifacts in respiratory-binned cardiac SPECT reconstruction. Direct incorporation of the time variation into the imaging model together with a smoothing prior in reconstruction can lead to the most improvement in the accuracy of the reconstructed myocardium.
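The data-rescaling approximation amounts to scaling each projection so that every angle within every bin corresponds to the same effective acquisition time. A schematic sketch; the array shapes and the choice of reference time are our illustrative assumptions, not necessarily the authors' exact normalization:

```python
import numpy as np

def rescale_projections(sino, t_acq):
    """Scale binned projection counts to a common effective acquisition
    time, so uneven dwell times no longer mimic limited-angle data.
    sino:  (n_bins, n_angles, n_detectors) acquired counts
    t_acq: (n_bins, n_angles) per-angle acquisition times"""
    t_ref = t_acq.mean()                       # common reference time
    return sino * (t_ref / t_acq)[..., None]   # broadcast over detectors
```

The model-based alternative instead leaves the data untouched and folds t_acq into the system model, which preserves the Poisson statistics of the raw counts.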

  3. Failure Rate and Clinical Outcomes of Anterior Cruciate Ligament Reconstruction Using Autograft Hamstring Versus a Hybrid Graft.

    PubMed

    Leo, Brian M; Krill, Michael; Barksdale, Leticia; Alvarez-Pinzon, Andres M

    2016-11-01

    To compare the revision rate and subjective outcome measures of autograft hamstring versus a soft tissue hybrid graft combining both autograft hamstring and tibialis allograft for isolated anterior cruciate ligament (ACL) reconstruction. A single-center retrospective, nonrandomized, comparative study of isolated ACL reconstruction revision rates for subjects who underwent arthroscopic reconstruction of the ACL using autograft hamstring or a soft tissue hybrid graft using both autograft hamstring and tibialis allograft was performed. Patients with isolated ACL tears were included and underwent anatomic single-bundle reconstruction using an independent tunnel drilling technique and a minimum of 24 months' follow-up. The primary outcome assessed was the presence or absence of ACL rerupture. Secondary clinical outcomes consisted of the International Knee Documentation Committee, University of California at Los Angeles (UCLA) ACL quality of life assessment, and the visual analog pain scale. Between February 2010 and April 2013, 95 patients with isolated ACL tears between ages 18 and 40 met the inclusion criteria and were enrolled. Seventy-one autograft hamstring and 24 soft tissue hybrid graft ACL reconstructions were performed during the course of this study. The follow-up period was 24 to 32 months (mean 26.9 months). There were no statistically significant differences in patient demographics or Outerbridge classification. No statistically significant differences in ACL retears (5.6% auto, 4.2% hybrid; P = .57) were found between groups. Clinical International Knee Documentation Committee and UCLA ACL quality of life assessment improvement scores revealed no statistically significant differences in autograft and hybrid graft reconstructions (41 ± 11, 43 ± 13; P = .65) (38 ± 11, 40 ± 10; P = .23). The mean pain level decreased from 8.1 to 2.8 in the autograft group and 7.9 to 2.5 in the hybrid group (P = .18). 
The use of a hybrid soft tissue graft has a comparable rerupture rate and clinical outcome to ACL reconstruction using autograft hamstring. Level III, retrospective comparative study. Copyright © 2016 Arthroscopy Association of North America. Published by Elsevier Inc. All rights reserved.

  4. Compressing random microstructures via stochastic Wang tilings.

    PubMed

    Novák, Jan; Kučerová, Anna; Zeman, Jan

    2012-10-01

    This Rapid Communication presents a stochastic Wang tiling-based technique to compress or reconstruct disordered microstructures on the basis of given spatial statistics. Unlike the existing approaches based on a single unit cell, it utilizes a finite set of tiles assembled by a stochastic tiling algorithm, thereby allowing long-range orientation orders to be reproduced accurately in a computationally efficient manner. Although the basic features of the method are demonstrated for a two-dimensional particulate suspension, the present framework is fully extensible to generic multidimensional media.

  5. Breast Reinnervation: DIEP Neurotization Using the Third Anterior Intercostal Nerve

    PubMed Central

    Menn, Zachary K.; Eldor, Liron; Kaufman, Yoav; Dellon, A. Lee

    2013-01-01

    Background: The purpose of this article is to evaluate a new method of DIEP flap neurotization using a reliably located recipient nerve. We hypothesize that neurotization by this method (with either nerve conduit or direct nerve coaptation) will have a positive effect on sensory recovery. Methods: Fifty-seven deep inferior epigastric perforator (DIEP) flaps were performed on 35 patients. Neurotizations were performed to the third anterior intercostal nerve by directly coapting the flap donor nerve or coapting with a nerve conduit. Nine nonneurotized DIEP flaps served as controls and received no attempted neurotization. All patients were tested for breast sensibility in 9 areas of the flap skin-island and adjacent postmastectomy skin. Testing occurred at an average of 111 weeks (23–309) postoperatively. Results: At a mean of 111 weeks after breast reconstruction, neurotization of the DIEP flap resulted in recovery of sensibility that was statistically significantly better (lower threshold) in the flap skin (P < 0.01) and statistically significantly better than in the native mastectomy skin into which the DIEP flap was inserted (P < 0.01). Sensibility recovered in DIEP flaps neurotized using the nerve conduit was significantly better (lower threshold) than that in the corresponding areas of the DIEP flaps neurotized by direct coaptation (P < 0.01). Conclusion: DIEP flap neurotization using the third anterior intercostal nerve is an effective technique to provide a significant increase in sensory recovery for breast reconstruction patients, while adding minimal surgical time. Additionally, the use of a nerve conduit produces increased sensory recovery when compared with direct coaptation. PMID:25289267

  6. Polyquant CT: direct electron and mass density reconstruction from a single polyenergetic source

    NASA Astrophysics Data System (ADS)

    Mason, Jonathan H.; Perelli, Alessandro; Nailon, William H.; Davies, Mike E.

    2017-11-01

    Quantifying material mass and electron density from computed tomography (CT) reconstructions can be highly valuable in certain medical practices, such as radiation therapy planning. However, uniquely parameterising the x-ray attenuation in terms of mass or electron density is an ill-posed problem when a single polyenergetic source is used with a spectrally indiscriminate detector. Existing approaches to single source polyenergetic modelling often impose consistency with a physical model, such as water-bone or photoelectric-Compton decompositions, which will either require detailed prior segmentation or restrictive energy dependencies, and may require further calibration to the quantity of interest. In this work, we introduce a data-centric approach that fits the attenuation with piecewise-linear functions directly of mass or electron density, and present a segmentation-free statistical reconstruction algorithm for exploiting it, with the same order of complexity as other iterative methods. We show how this allows higher accuracy in attenuation modelling, demonstrate its superior quantitative imaging with numerical chest and metal-implant data, and validate it with real cone-beam CT measurements.
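The modelling idea of fitting attenuation as a piecewise-linear function of density can be illustrated generically: express the function as a sum of hat (linear B-spline) basis functions with fixed knots and solve a least-squares problem for the knot values. A sketch under our own naming; knot placement here is arbitrary, and the paper's energy-dependent fit is not reproduced:

```python
import numpy as np

def fit_piecewise_linear(x, y, knots):
    """Least-squares piecewise-linear fit y ~ f(x) with fixed knots:
    columns of B are hat functions centred on each knot, and the
    solved coefficients are the fitted values at the knots."""
    B = np.column_stack([
        np.interp(x, knots, np.eye(len(knots))[i]) for i in range(len(knots))
    ])
    vals, *_ = np.linalg.lstsq(B, y, rcond=None)
    return lambda t: np.interp(t, knots, vals)
```

Because the map is piecewise linear, it can be folded into an iterative statistical reconstruction without the prior segmentation that physical decompositions require.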

  7. A monthly global paleo-reanalysis of the atmosphere from 1600 to 2005 for studying past climatic variations

    PubMed Central

    Franke, Jörg; Brönnimann, Stefan; Bhend, Jonas; Brugnara, Yuri

    2017-01-01

    Climatic variations at decadal scales such as phases of accelerated warming or weak monsoons have profound effects on society and economy. Studying these variations requires insights from the past. However, most current reconstructions provide either time series or fields of regional surface climate, which limit our understanding of the underlying dynamics. Here, we present the first monthly paleo-reanalysis covering the period 1600 to 2005. Over land, instrumental temperature and surface pressure observations, temperature indices derived from historical documents and climate sensitive tree-ring measurements were assimilated into an atmospheric general circulation model ensemble using a Kalman filtering technique. This data set combines the advantage of traditional reconstruction methods of being as close as possible to observations with the advantage of climate models of being physically consistent and having 3-dimensional information about the state of the atmosphere for various variables and at all points in time. In contrast to most statistical reconstructions, centennial variability stems from the climate model and its forcings, no stationarity assumptions are made and error estimates are provided. PMID:28585926
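    The assimilation step described here can be illustrated with a toy perturbed-observation ensemble Kalman update. Everything below (a 2-variable state, a 30-member ensemble, a single observation) is invented for the sketch and is vastly simpler than the actual paleo-reanalysis system.

```python
import numpy as np

rng = np.random.default_rng(3)

# Prior ensemble: 30 members of a 2-variable state (e.g. temperature and
# pressure anomalies). All numbers are invented for this sketch.
ens = rng.normal(0.0, 1.0, size=(30, 2))
H = np.array([1.0, 0.0])          # we observe the first state variable
y_obs, r = 1.5, 0.2 ** 2          # observation and its error variance

X = ens - ens.mean(axis=0)        # ensemble anomalies
P = X.T @ X / (len(ens) - 1)      # sample covariance of the prior
K = P @ H / (H @ P @ H + r)       # Kalman gain (a 2-vector here)

# Perturbed-observation update: each member assimilates a jittered copy
# of the observation so the posterior spread stays statistically correct.
y_pert = y_obs + rng.normal(0.0, np.sqrt(r), size=len(ens))
ens_post = ens + np.outer(y_pert - ens @ H, K)

print(ens.mean(axis=0)[0], "->", ens_post.mean(axis=0)[0])
```

    The update pulls the ensemble mean toward the observation and shrinks the spread of the observed variable; covariances in P also spread the correction to unobserved variables.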

  8. Inference of Spatio-Temporal Functions Over Graphs via Multikernel Kriged Kalman Filtering

    NASA Astrophysics Data System (ADS)

    Ioannidis, Vassilis N.; Romero, Daniel; Giannakis, Georgios B.

    2018-06-01

    Inference of space-time varying signals on graphs emerges naturally in a plethora of network science related applications. A frequently encountered challenge pertains to reconstructing such dynamic processes, given their values over a subset of vertices and time instants. The present paper develops a graph-aware kernel-based kriged Kalman filter that accounts for the spatio-temporal variations, and offers efficient online reconstruction, even for dynamically evolving network topologies. The kernel-based learning framework bypasses the need for statistical information by capitalizing on the smoothness that graph signals exhibit with respect to the underlying graph. To address the challenge of selecting the appropriate kernel, the proposed filter is combined with a multi-kernel selection module. Such a data-driven method selects a kernel attuned to the signal dynamics on-the-fly within the linear span of a pre-selected dictionary. The novel multi-kernel learning algorithm exploits the eigenstructure of Laplacian kernel matrices to reduce computational complexity. Numerical tests with synthetic and real data demonstrate the superior reconstruction performance of the novel approach relative to state-of-the-art alternatives.
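    The kernel-based reconstruction idea, stripped of the Kalman and multi-kernel machinery, can be sketched as kernel ridge regression on a graph with a Laplacian-regularized kernel. The graph, signal, and kernel choice below are illustrative assumptions, not the paper's exact estimator.

```python
import numpy as np

# Small illustrative graph: a 6-node path.
A = np.zeros((6, 6))
for i in range(5):
    A[i, i + 1] = A[i + 1, i] = 1.0
L = np.diag(A.sum(axis=1)) - A            # combinatorial graph Laplacian

# Laplacian-regularized kernel K = (L + eps*I)^{-1}: smooth graph signals
# get high prior weight under this kernel.
eps = 1e-2
K = np.linalg.inv(L + eps * np.eye(6))

f_true = np.linspace(0.0, 1.0, 6)         # a smooth signal on the path
S = [0, 2, 5]                              # vertices where we observe it
y = f_true[S]

# Kernel ridge estimate: f_hat = K[:, S] (K[S, S] + lam*I)^{-1} y.
lam = 1e-9
alpha = np.linalg.solve(K[np.ix_(S, S)] + lam * np.eye(len(S)), y)
f_hat = K[:, S] @ alpha

print(np.round(f_hat, 3))
```

    With a tiny ridge parameter the estimate interpolates the observed vertices and fills in the rest according to the smoothness prior encoded by the kernel.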

  9. Minimization of the energy loss of nuclear power plants in case of partial in-core monitoring system failure

    NASA Astrophysics Data System (ADS)

    Zagrebaev, A. M.; Ramazanov, R. N.; Lunegova, E. A.

    2017-01-01

    In this paper we consider the problem of minimizing the energy loss of nuclear power plants in the case of partial in-core monitoring system failure. The options are either continued operation of the reactor at reduced power or complete replacement of the failed neutron-measurement channels, which requires shutting down the reactor and keeping a stock of spare detectors. This article examines the reconstruction of the energy release in the core of a nuclear reactor on the basis of the indications of the height-distributed sensors. The missing measurement information can be reconstructed by mathematical methods, so that replacement of the failed sensors can be avoided. It is suggested that a set of 'natural' functions, determined by means of statistical estimates obtained from archival data, be constructed. The proposed procedure makes it possible to reconstruct the field even with a significant loss of measurement information. Improving the accuracy of the restoration of the neutron flux density under partial loss of measurement information minimizes the stock of necessary components and the associated losses.
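    The 'natural'-function idea can be sketched as follows: extract an empirical basis from archival axial flux profiles (here via SVD, one possible statistical estimate), then least-squares fit that basis to the readings of the sensors that still work. All profiles and sensor positions below are synthetic.

```python
import numpy as np

rng = np.random.default_rng(0)
z = np.linspace(0.0, 1.0, 40)             # axial positions in a channel

# Synthetic archive of axial flux profiles: combinations of a few smooth
# modes plus small measurement noise.
modes = np.stack([np.sin(np.pi * z), np.sin(2 * np.pi * z), np.sin(3 * np.pi * z)])
archive = rng.normal(size=(200, 3)) @ modes + 0.01 * rng.normal(size=(200, 40))

# 'Natural' functions: leading right singular vectors of the archive.
_, _, Vt = np.linalg.svd(archive - archive.mean(axis=0), full_matrices=False)
basis = Vt[:3]                             # (3, 40) empirical basis

# A new profile observed only at the sensors that are still working.
f_true = modes[0] + 0.5 * modes[1]
working = [3, 10, 17, 25, 33]              # surviving sensor indices
coef, *_ = np.linalg.lstsq(basis[:, working].T, f_true[working], rcond=None)
f_rec = coef @ basis                       # reconstructed full profile

print(np.max(np.abs(f_rec - f_true)))
```

    Because the new profile lies close to the span of the empirical basis, five surviving sensors suffice to recover the full 40-point field accurately.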

  10. Tomographic iterative reconstruction of a passive scalar in a 3D turbulent flow

    NASA Astrophysics Data System (ADS)

    Pisso, Ignacio; Kylling, Arve; Cassiani, Massimo; Solveig Dinger, Anne; Stebel, Kerstin; Schmidbauer, Norbert; Stohl, Andreas

    2017-04-01

    Turbulence in stable planetary boundary layers often encountered in high latitudes influences the exchange fluxes of heat, momentum, water vapor and greenhouse gases between the Earth's surface and the atmosphere. In climate and meteorological models, such effects of turbulence need to be parameterized, ultimately based on experimental data. A novel experimental approach is being developed within the COMTESSA project in order to study turbulence statistics at high resolution. Using controlled tracer releases, high-resolution camera images and estimates of the background radiation, different tomographic algorithms can be applied in order to obtain time series of 3D representations of the scalar dispersion. In this preliminary work, using synthetic data, we investigate different reconstruction algorithms with emphasis on algebraic methods. We study the dependence of the reconstruction quality on the discretization resolution and the geometry of the experimental device in both 2D and 3D cases. We assess the computational aspects of the iterative algorithms, focusing on the phenomenon of semi-convergence and applying a variety of stopping rules. We discuss different strategies for error reduction and regularization of the ill-posed problem.
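    A minimal sketch of the algebraic approach with a stopping rule: Kaczmarz (ART) sweeps on a small synthetic linear system, halted by Morozov's discrepancy principle once the residual reaches the noise level. The geometry and noise model are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy algebraic reconstruction: 40 "ray sums" of 20 unknown voxel values,
# with a little measurement noise. Purely illustrative geometry.
A = rng.normal(size=(40, 20))
x_true = rng.normal(size=20)
noise = 0.01 * rng.normal(size=40)
b = A @ x_true + noise

# Kaczmarz (ART) sweeps with a discrepancy-principle stopping rule:
# stop once the residual falls to the (here, known) noise level, before
# semi-convergence lets the iterate start fitting the noise.
x = np.zeros(20)
tau = 1.1 * np.linalg.norm(noise)
for sweep in range(500):
    for i in range(A.shape[0]):           # one relaxation step per ray
        a = A[i]
        x += (b[i] - a @ x) / (a @ a) * a
    if np.linalg.norm(A @ x - b) <= tau:
        break

print(sweep + 1, np.linalg.norm(x - x_true))
```

    In practice the noise norm must itself be estimated, and the choice of the safety factor in tau is exactly the kind of stopping-rule question the abstract refers to.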

  11. Synchrony dynamics underlying effective connectivity reconstruction of neuronal circuits

    NASA Astrophysics Data System (ADS)

    Yu, Haitao; Guo, Xinmeng; Qin, Qing; Deng, Yun; Wang, Jiang; Liu, Jing; Cao, Yibin

    2017-04-01

    Reconstruction of effective connectivity between neurons is essential for neural systems with function-related significance, characterizing directionally causal influences among neurons. In this work, causal interactions between neurons in spinal dorsal root ganglion, activated by manual acupuncture at Zusanli acupoint of experimental rats, are estimated using the Granger causality (GC) method. Different patterns of effective connectivity are obtained for different frequencies and types of acupuncture. Combined with synchrony analysis between neurons, we show a dependence of effective connection on the synchronization dynamics. Based on the experimental findings, a neuronal circuit model with synaptic connections is constructed. The variation of neuronal effective connectivity with respect to its structural connectivity and synchronization dynamics is further explored. Simulation results show that statistically significant reciprocal causal interactions form between well-synchronized neurons. The effective connectivity is not necessarily equivalent to the synaptic connections, but rather depends on the synchrony relationship. Furthermore, transitions of effective interaction between neurons are observed following the synchronization transitions induced by conduction delay and synaptic conductance. These findings are helpful to further investigate the dynamical mechanisms underlying the reconstruction of effective connectivity of neuronal population.
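    A minimal pairwise Granger-causality computation (order 1, invented synthetic signals, far simpler than the spike-train analysis in the paper) can be sketched as comparing the residual variance of an autoregressive model of y with and without the past of x:

```python
import numpy as np

rng = np.random.default_rng(4)

# Two synthetic signals with a directed coupling x -> y.
T = 500
x = np.zeros(T)
y = np.zeros(T)
for t in range(1, T):
    x[t] = 0.6 * x[t - 1] + rng.normal()
    y[t] = 0.5 * y[t - 1] + 0.8 * x[t - 1] + rng.normal()

def sse(Z, Y):
    """Residual sum of squares of an OLS fit of Y on Z (plus intercept)."""
    Zi = np.column_stack([np.ones(len(Y)), Z])
    beta, *_ = np.linalg.lstsq(Zi, Y, rcond=None)
    resid = Y - Zi @ beta
    return resid @ resid

# Pairwise Granger causality, order 1: does x's past help predict y?
Y = y[1:]
Z_rest = y[:-1, None]                         # restricted: y's own past
Z_full = np.stack([y[:-1], x[:-1]], axis=1)   # full: add x's past

gc_xy = np.log(sse(Z_rest, Y) / sse(Z_full, Y))  # > 0 when x's past helps
print(gc_xy)
```

    A significance test (e.g. an F-test on the variance ratio) would be needed before declaring a causal link, which is the "statistically significant" qualification in the abstract.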

  12. Large-angle correlations in the cosmic microwave background

    NASA Astrophysics Data System (ADS)

    Efstathiou, George; Ma, Yin-Zhe; Hanson, Duncan

    2010-10-01

    It has been argued recently by Copi et al. 2009 that the lack of large angular correlations of the CMB temperature field provides strong evidence against the standard, statistically isotropic, inflationary Lambda cold dark matter (ΛCDM) cosmology. We compare various estimators of the temperature correlation function showing how they depend on assumptions of statistical isotropy and how they perform on the Wilkinson Microwave Anisotropy Probe (WMAP) 5-yr Internal Linear Combination (ILC) maps with and without a sky cut. We show that the low multipole harmonics that determine the large-scale features of the temperature correlation function can be reconstructed accurately from the data that lie outside the sky cuts. The reconstructions are only weakly dependent on the assumed statistical properties of the temperature field. The temperature correlation functions computed from these reconstructions are in good agreement with those computed from the ILC map over the whole sky. We conclude that the large-scale angular correlation function for our realization of the sky is well determined. A Bayesian analysis of the large-scale correlations is presented, which shows that the data cannot exclude the standard ΛCDM model. We discuss the differences between our results and those of Copi et al. Either there exists a violation of statistical isotropy as claimed by Copi et al., or these authors have overestimated the significance of the discrepancy because of a posteriori choices of estimator, statistic and sky cut.

  13. SR-XFA as a method of choice in the search of signals of changing palaeoclimates in the sediments of Lake Baikal, compared to INAA and ICP-MS

    NASA Astrophysics Data System (ADS)

    Phedorin, M. A.; Bobrov, V. A.; Goldberg, E. L.; Navez, J.; Zolotaryov, K. V.; Grachev, M. A.

    2000-06-01

    Sediments of Lake Baikal obtained on top of the underwater Akademichesky Ridge for reconstruction of the palaeoclimates of Holocene and Upper Pleistocene were subjected to elemental analysis with three methods: (i) synchrotron radiation X-ray fluorescent analysis (SR-XFA); (ii) instrumental neutron activation analysis (INAA); (iii) inductively coupled plasma mass spectrometry (ICP-MS). Comparison of the results obtained is accompanied by statistical tests and shows that, due to its high sensitivity, simplicity, and non-destructive nature, SR-XFA can be recommended as a method of choice in the search for geochemical signals of changing palaeoclimates.

  14. [Rapid prototyping in planning reconstructive surgery of the head and neck. Review and evaluation of indications in clinical use].

    PubMed

    Bill, J S; Reuther, J F

    2004-05-01

    The aim was to define the indications for use of rapid prototyping models based on data of patients treated with this technique. Since 1987 our department has been developing methods of rapid prototyping in surgery planning. First, the statistical, reproducible anatomical precision of rapid prototyping models was determined by measurements on pig skulls as a function of CT parameters and rapid prototyping method. Measurements on stereolithography models and on selective laser sintered models confirmed an accuracy of ±0.88 mm or 2.7% (maximum deviation: -3.0 mm to +3.2 mm), independently of CT parameters and rapid prototyping method. Since model precision was the same, multislice helical CT with its higher acquisition rate is the preferable method of data acquisition compared with conventional helical CT. From 1990 to 2002, a total of 127 rapid prototyping models were manufactured for 122 patients: stereolithography models in 112 patients, an additional stereolithography model in 2 patients, an additional selective laser sinter model in 2 patients, an additional milled model in 1 patient, and only a selective laser sinter model in 10 patients. Reconstructive surgery, distraction osteogenesis including midface distraction, and dental implantology proved to be the major indications for rapid prototyping, as confirmed in a review of the literature. Surgery planning on rapid prototyping models should only be used in individual cases because of the radiation dose and high costs. Routine use of this technique seems indicated only in skull reconstruction and distraction osteogenesis.

  15. Paleotemperature reconstruction from mammalian phosphate δ18O records - an alternative view on data processing

    NASA Astrophysics Data System (ADS)

    Skrzypek, Grzegorz; Sadler, Rohan; Wiśniewski, Andrzej

    2017-04-01

    The stable oxygen isotope composition of phosphates (δ18O) extracted from mammalian bone and teeth material is commonly used as a proxy for paleotemperature. Historically, several different analytical and statistical procedures for determining air paleotemperatures from the measured δ18O of phosphates have been applied. This inconsistency in both stable isotope data processing and the application of statistical procedures has led to large and unwanted differences between calculated results. This study presents the uncertainty associated with two of the most commonly used regression methods: the least-squares inverted fit and the transposed fit. We assessed the performance of these methods by designing and applying calculation experiments to multiple real-life data sets, back-calculating temperatures, and comparing them with the true recorded values. Our calculations clearly show that the mean absolute errors are always substantially higher for the inverted fit (a causal model), with the transposed fit (a predictive model) returning mean values closer to the measured values (Skrzypek et al. 2015). The predictive models always performed better than causal models, with 12-65% lower mean absolute errors. Moreover, least-squares (LSM) regression is more appropriate than reduced major axis (RMA) regression for calculating the environmental water stable oxygen isotope composition from phosphate signatures, as well as for calculating air temperature from the δ18O value of environmental water. The transposed fit introduces a lower overall error than the inverted fit for both the δ18O of environmental water and Tair calculations; therefore, the predictive models are more statistically efficient than the causal models in this instance. Direct comparison of paleotemperature results from different laboratories and studies may only be achieved if a single method of calculation is applied. Reference: Skrzypek G., Sadler R., Wiśniewski A., 2016. Reassessment of recommendations for processing mammal phosphate δ18O data for paleotemperature reconstruction. Palaeogeography, Palaeoclimatology, Palaeoecology 446, 162-167.
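    The inverted-versus-transposed comparison can be reproduced on synthetic data. In the sketch below (calibration values invented), the transposed fit regresses temperature directly on δ18O, while the inverted fit calibrates δ18O on temperature and then solves for temperature; by construction the transposed fit minimizes the squared prediction error of temperature on the fitted data.

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic calibration data: delta18O responds to air temperature with
# noise (all coefficients invented for the sketch).
T = rng.uniform(-5.0, 25.0, size=80)
d18O = -13.0 + 0.55 * T + rng.normal(0.0, 1.0, size=80)

# Inverted (causal) fit: calibrate d18O = a + b*T, then solve for T.
b, a = np.polyfit(T, d18O, 1)
T_inv = (d18O - a) / b

# Transposed (predictive) fit: regress T directly on d18O.
q, p = np.polyfit(d18O, T, 1)
T_tra = p + q * d18O

sse_inv = np.sum((T_inv - T) ** 2)
sse_tra = np.sum((T_tra - T) ** 2)
print(sse_tra <= sse_inv)  # True: OLS of T on d18O minimizes this SSE
```

    Both predictors are affine functions of δ18O, and the transposed fit is the least-squares optimum among all such functions, which is the statistical reason behind the lower errors reported in the abstract.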

  16. Spectral CT with metal artifacts reduction software for improvement of tumor visibility in the vicinity of gold fiducial markers.

    PubMed

    Brook, Olga R; Gourtsoyianni, Sofia; Brook, Alexander; Mahadevan, Anand; Wilcox, Carol; Raptopoulos, Vassilios

    2012-06-01

    To evaluate spectral computed tomography (CT) with metal artifacts reduction software (MARS) for reduction of metal artifacts associated with gold fiducial seeds. Thirteen consecutive patients with 37 fiducial seeds implanted for radiation therapy of abdominal lesions were included in this HIPAA-compliant, institutional review board-approved prospective study. Six patients were women (46%) and seven were men (54%). The mean age was 61.1 years (median, 58 years; range, 29-78 years). Spectral imaging was used for arterial phase CT. Images were reconstructed with and without MARS in axial, coronal, and sagittal planes. Two radiologists independently reviewed reconstructions and selected the best image, graded the visibility of the tumor, and assessed the amount of artifacts in all planes. A linear-weighted κ statistic and Wilcoxon signed-rank test were used to assess interobserver variability. Histogram analysis with the Kolmogorov-Smirnov test was used for objective evaluation of artifacts reduction. Fiducial seeds were placed in pancreas (n = 5), liver (n = 7), periportal lymph nodes (n = 1), and gallbladder bed (n = 1). MARS-reconstructed images received a better grade than those with standard reconstruction in 60% and 65% of patients by the first and second radiologist, respectively. Tumor visibility was graded higher with standard versus MARS reconstruction (grade, 3.7 ± 1.0 vs 2.8 ± 1.1; P = .001). Reduction of blooming was noted on MARS-reconstructed images (P = .01). Amount of artifacts, for both any and near field, was significantly smaller on sagittal and coronal MARS-reconstructed images than on standard reconstructions (P < .001 for all comparisons). Far-field artifacts were more prominent on axial MARS-reconstructed images than on standard reconstructions (P < .01). Linear-weighted κ statistic showed moderate to perfect agreement between radiologists. 
CT number distribution was narrower with MARS than with standard reconstruction in 35 of 37 patients (P < .001). Spectral CT with use of MARS improved tumor visibility in the vicinity of gold fiducial seeds.

  17. A comparison of manual neuronal reconstruction from biocytin histology or 2-photon imaging: morphometry and computer modeling

    PubMed Central

    Blackman, Arne V.; Grabuschnig, Stefan; Legenstein, Robert; Sjöström, P. Jesper

    2014-01-01

    Accurate 3D reconstruction of neurons is vital for applications linking anatomy and physiology. Reconstructions are typically created using Neurolucida after biocytin histology (BH). An alternative inexpensive and fast method is to use freeware such as Neuromantic to reconstruct from fluorescence imaging (FI) stacks acquired using 2-photon laser-scanning microscopy during physiological recording. We compare these two methods with respect to morphometry, cell classification, and multicompartmental modeling in the NEURON simulation environment. Quantitative morphological analysis of the same cells reconstructed using both methods reveals that whilst biocytin reconstructions facilitate tracing of more distal collaterals, both methods are comparable in representing the overall morphology: automated clustering of reconstructions from both methods successfully separates neocortical basket cells from pyramidal cells but not BH from FI reconstructions. BH reconstructions suffer more from tissue shrinkage and compression artifacts than FI reconstructions do. FI reconstructions, on the other hand, consistently have larger process diameters. Consequently, significant differences in NEURON modeling of excitatory post-synaptic potential (EPSP) forward propagation are seen between the two methods, with FI reconstructions exhibiting smaller depolarizations. Simulated action potential backpropagation (bAP), however, is indistinguishable between reconstructions obtained with the two methods. In our hands, BH reconstructions are necessary for NEURON modeling and detailed morphological tracing, and thus remain state of the art, although they are more labor intensive, more expensive, and suffer from a higher failure rate due to the occasional poor outcome of histological processing. 
However, for a subset of anatomical applications such as cell type identification, FI reconstructions are superior, because of indistinguishable classification performance with greater ease of use, essentially 100% success rate, and lower cost. PMID:25071470

  18. Verifying Three-Dimensional Skull Model Reconstruction Using Cranial Index of Symmetry

    PubMed Central

    Kung, Woon-Man; Chen, Shuo-Tsung; Lin, Chung-Hsiang; Lu, Yu-Mei; Chen, Tzu-Hsuan; Lin, Muh-Shi

    2013-01-01

    Background Difficulty exists in scalp adaptation for cranioplasty with customized computer-assisted design/manufacturing (CAD/CAM) implant in situations of excessive wound tension and sub-cranioplasty dead space. To solve this clinical problem, the CAD/CAM technique should include algorithms to reconstruct a depressed contour to cover the skull defect. Satisfactory CAM-derived alloplastic implants are based on highly accurate three-dimensional (3-D) CAD modeling. Thus, it is quite important to establish a symmetrically regular CAD/CAM reconstruction prior to depressing the contour. The purpose of this study is to verify the aesthetic outcomes of CAD models with regular contours using the cranial index of symmetry (CIS). Materials and methods From January 2011 to June 2012, decompressive craniectomy (DC) was performed for 15 consecutive patients in our institute. 3-D CAD models of skull defects were reconstructed using commercial software. These models were checked in terms of symmetry by CIS scores. Results CIS scores of CAD reconstructions were 99.24±0.004% (range 98.47–99.84). CIS scores of these CAD models were statistically significantly greater than 95%, not significantly different from 99.5%, and significantly lower than 99.6% (p<0.001, p = 0.064, and p = 0.021, respectively; Wilcoxon matched pairs signed rank test). These data demonstrate the highly accurate symmetry of these CAD models with regular contours. Conclusions CIS calculation is beneficial for assessing aesthetic outcomes of CAD-reconstructed skulls in terms of cranial symmetry. This enables further accurate CAD models and CAM cranial implants with depressed contours, which are essential in patients with difficult scalp adaptation. PMID:24204566

  19. Assessing Women’s Preferences and Preference Modeling for Breast Reconstruction Decision Making

    PubMed Central

    Sun, Clement S.; Cantor, Scott B.; Reece, Gregory P.; Crosby, Melissa A.; Fingeret, Michelle C.

    2014-01-01

    Background: Women considering breast reconstruction must make challenging trade-offs among issues that often conflict. It may be useful to quantify possible outcomes using a single summary measure to aid a breast cancer patient in choosing a form of breast reconstruction. Methods: In this study, we used multiattribute utility theory to combine multiple objectives to yield a summary value using 9 different preference models. We elicited the preferences of 36 women, aged 32 or older with no history of breast cancer, for the patient-reported outcome measures of breast satisfaction, psychosocial well-being, chest well-being, abdominal well-being, and sexual well-being as measured by the BREAST-Q in addition to time lost to reconstruction and out-of-pocket cost. Participants ranked hypothetical breast reconstruction outcomes. We examined each multiattribute utility preference model and assessed how often each model agreed with participants’ rankings. Results: The median amount of time required to assess preferences was 34 minutes. Agreement among the 9 preference models with the participants ranged from 75.9% to 78.9%. None of the preference models performed significantly worse than the best-performing risk-averse multiplicative model. We hypothesize an average theoretical agreement of 94.6% for this model if participant error is included. There was a statistically significant positive correlation with more unequal distribution of weight given to the 7 attributes. Conclusions: We recommend the risk-averse multiplicative model for modeling the preferences of patients considering different forms of breast reconstruction because it agreed most often with the participants in this study. PMID:25105083

  20. Likelihood reconstruction method of real-space density and velocity power spectra from a redshift galaxy survey

    NASA Astrophysics Data System (ADS)

    Tang, Jiayu; Kayo, Issha; Takada, Masahiro

    2011-09-01

    We develop a maximum-likelihood-based method of reconstructing the band powers of the density and velocity power spectra at each wavenumber bin from the measured clustering features of galaxies in redshift space, including marginalization over uncertainties inherent in the small-scale, non-linear redshift distortion, the Fingers-of-God (FoG) effect. The reconstruction assumes that the density and velocity power spectra enter the redshift-space power spectrum through terms with different angular modulations μ^2n (n = 0, 1, 2), and that the model FoG effect is given as a multiplicative function in the redshift-space spectrum. By using N-body simulations and the halo catalogues, we test our method by comparing the reconstructed power spectra with the spectra directly measured from the simulations. For the μ^0 spectrum, or equivalently the density power spectrum Pδδ(k), our method recovers the amplitudes to an accuracy of a few per cent up to k ≃ 0.3 h Mpc^-1 for both dark matter and haloes. For the μ^2 power spectrum, which is equivalent to the density-velocity power spectrum Pδθ(k) in the linear regime, our method can recover, within the statistical errors, the input power spectrum for dark matter up to k ≃ 0.2 h Mpc^-1 at both redshifts z = 0 and 1, provided an adequate FoG model is marginalized over. However, for the halo spectrum, which is least affected by the FoG effect, the reconstructed spectrum shows greater amplitudes than the spectrum Pδθ(k) inferred from the simulations over a range of wavenumbers 0.05 ≤ k ≤ 0.3 h Mpc^-1. We argue that the disagreement may be ascribed to a non-linearity effect that arises from the cross-bispectra of density and velocity perturbations. 
    Using the perturbation theory and assuming Einstein gravity as in the simulations, we derive the non-linear correction term to the redshift-space spectrum, and find that the leading-order correction term is proportional to μ^2 and increases the μ^2-power spectrum amplitudes more significantly at larger k, at lower redshifts and for more massive haloes. We find that adding the non-linearity correction term to the simulation Pδθ(k) fairly well reproduces the reconstructed Pδθ(k) for haloes up to k ≃ 0.2 h Mpc^-1.
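    The band-power decomposition at a single wavenumber bin reduces to a linear least-squares fit in the basis (1, μ^2, μ^4). The sketch below uses invented band powers and noiseless samples of P(μ) purely to illustrate the fit, ignoring the FoG factor and the marginalization.

```python
import numpy as np

# One wavenumber bin of a Kaiser-like redshift-space spectrum:
# P(mu) = P0 + P2*mu^2 + P4*mu^4, with invented band powers and no FoG.
P0, P2, P4 = 1000.0, 800.0, 160.0
mu = np.linspace(0.0, 1.0, 25)
P_obs = P0 + P2 * mu**2 + P4 * mu**4

# Recover the band powers by linear least squares in (1, mu^2, mu^4).
M = np.stack([np.ones_like(mu), mu**2, mu**4], axis=1)
coeffs, *_ = np.linalg.lstsq(M, P_obs, rcond=None)
print(coeffs)  # recovers P0, P2, P4 on noiseless data
```

    The actual likelihood analysis additionally weights the μ samples by their covariance and marginalizes over FoG parameters, but the angular-modulation structure being fitted is the same.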

  1. Intra-patient comparison of reduced-dose model-based iterative reconstruction with standard-dose adaptive statistical iterative reconstruction in the CT diagnosis and follow-up of urolithiasis.

    PubMed

    Tenant, Sean; Pang, Chun Lap; Dissanayake, Prageeth; Vardhanabhuti, Varut; Stuckey, Colin; Gutteridge, Catherine; Hyde, Christopher; Roobottom, Carl

    2017-10-01

    To evaluate the accuracy of reduced-dose CT scans reconstructed using a new generation of model-based iterative reconstruction (MBIR) in the imaging of urinary tract stone disease, compared with a standard-dose CT using 30% adaptive statistical iterative reconstruction. This single-institution prospective study recruited 125 patients presenting either with acute renal colic or for follow-up of known urinary tract stones. They underwent two immediately consecutive scans, one at standard dose settings and one at the lowest dose (highest noise index) the scanner would allow. The reduced-dose scans were reconstructed using both ASIR 30% and MBIR algorithms and reviewed independently by two radiologists. Objective and subjective image quality measures as well as diagnostic data were obtained. The reduced-dose MBIR scan was 100% concordant with the reference standard for the assessment of ureteric stones. It was extremely accurate at identifying calculi of 3 mm and above. The algorithm allowed a dose reduction of 58% without any loss of scan quality. A reduced-dose CT scan using MBIR is accurate in acute imaging for renal colic symptoms and for urolithiasis follow-up and allows a significant reduction in dose. • MBIR allows reduced CT dose with similar diagnostic accuracy • MBIR outperforms ASIR when used for the reconstruction of reduced-dose scans • MBIR can be used to accurately assess stones 3 mm and above.

  2. Comparison of Dorsal Intercostal Artery Perforator Propeller Flaps and Bilateral Rotation Flaps in Reconstruction of Myelomeningocele Defects.

    PubMed

    Tenekeci, Goktekin; Basterzi, Yavuz; Unal, Sakir; Sari, Alper; Demir, Yavuz; Bagdatoglu, Celal; Tasdelen, Bahar

    2018-04-09

    Bilateral rotation flaps are considered the workhorse flaps in reconstruction of myelomeningocele defects. Since the introduction of perforator flaps in the field of reconstructive surgery, perforator flaps have been used increasingly in the reconstruction of various soft tissue defects all over the body because of their appreciated advantages. The aim of this study was to compare the complications and surgical outcomes between bilateral rotation flaps and dorsal intercostal artery perforator (DICAP) flaps in the soft tissue reconstruction of myelomeningocele defects. Between January 2005 and February 2017, we studied 47 patients who underwent reconstruction of myelomeningocele defects. Patient demographics, operative data, and postoperative data were reviewed retrospectively and are included in the study. We found no statistically significant differences in patient demographics or surgical complications between these two groups, possibly because of the small sample size. With regard to complications (partial flap necrosis, cerebrospinal fluid (CSF) leakage, necessity for reoperation, and wound infection), DICAP propeller flaps were clinically superior to rotation flaps. Partial flap necrosis was associated with CSF leakage and wound infection, and CSF leakage was associated with wound dehiscence. Although surgical outcomes obtained with DICAP propeller flaps were clinically superior to those obtained with rotation flaps, there was no statistically significant difference between the two patient groups. A well-designed comparative study with adequate sample size is needed. Nonetheless, we suggest using DICAP propeller flaps for reconstruction of large myelomeningocele defects.

  3. Can conclusions drawn from phantom-based image noise assessments be generalized to in vivo studies for the nonlinear model-based iterative reconstruction method?

    PubMed Central

    Gomez-Cardona, Daniel; Li, Ke; Hsieh, Jiang; Lubner, Meghan G.; Pickhardt, Perry J.; Chen, Guang-Hong

    2016-01-01

    Purpose: Phantom-based objective image quality assessment methods are widely used in the medical physics community. For a filtered backprojection (FBP) reconstruction-based linear or quasilinear imaging system, the use of this methodology is well justified. Many key image quality metrics acquired with phantom studies can be directly applied to in vivo human subject studies. Recently, a variety of image quality metrics have been investigated for model-based iterative image reconstruction (MBIR) methods and several novel characteristics have been discovered in phantom studies. However, the following question remains unanswered: can certain results obtained from phantom studies be generalized to in vivo animal studies and human subject studies? The purpose of this paper is to address this question. Methods: One of the most striking results obtained from phantom studies is a novel power-law relationship between the noise variance of MBIR (σ^2) and the tube current-rotation time product (mAs): σ^2 ∝ (mAs)^-0.4 [K. Li et al., "Statistical model based iterative reconstruction (MBIR) in clinical CT systems: Experimental assessment of noise performance," Med. Phys. 41, 041906 (15pp.) (2014)]. To examine whether the same power law holds for in vivo cases, experimental data from two types of in vivo studies were analyzed in this paper. All scans were performed with a 64-slice diagnostic CT scanner (Discovery CT750 HD, GE Healthcare) and reconstructed with both FBP and a MBIR method (Veo, GE Healthcare). An Institutional Animal Care and Use Committee-approved in vivo animal study was performed with an adult swine at six mAs levels (10–290). Additionally, human subject data (a total of 110 subjects) acquired from an IRB-approved clinical trial were analyzed. 
    In this clinical trial, a reduced-mAs scan was performed immediately following the standard-mAs scan; the specific mAs used for the two scans varied across human subjects and was determined based on patient size and clinical indications. The measurements of σ^2 were performed at different mAs by drawing regions-of-interest (ROIs) in the liver and the subcutaneous fat. By applying a linear least-squares regression, the β values in the power-law relationship σ^2 ∝ (mAs)^-β were measured for the in vivo data and compared with the value found in phantom experiments. Results: For the in vivo swine study, an exponent of β = 0.43 was found for MBIR, and the coefficient of determination (R^2) for the corresponding least-squares power-law regression was 0.971. As a reference, the β and R^2 values for FBP were found to be 0.98 and 0.997, respectively, from the same study, which are consistent with the well-known σ^2 ∝ (mAs)^-1.0 relationship for linear CT systems. For the human subject study, the measured β values for the MBIR images were 0.41 ± 0.12 in the liver and 0.37 ± 0.12 in subcutaneous fat. In comparison, the β values for the FBP images were 1.04 ± 0.10 in the liver and 0.97 ± 0.12 in subcutaneous fat. The β values of MBIR and FBP obtained from the in vivo studies were found to be statistically equivalent to the corresponding β values from the phantom study within an equivalency interval of [-0.1, 0.1] (p < 0.05); across MBIR and FBP, the difference in β was statistically significant (p < 0.05). Conclusions: Despite the nonlinear nature of the MBIR method, the power-law relationship σ^2 ∝ (mAs)^-0.4 found from phantom studies can be applied to in vivo animal and human subject studies. PMID:26843232
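    The β measurement described above is a linear least-squares regression of log σ^2 on log mAs. The sketch below applies it to synthetic, noiseless variances generated with β = 0.4; the mAs levels and scale factor are invented.

```python
import numpy as np

# Synthetic noise variances following the phantom-study power law
# sigma^2 = c * (mAs)^-0.4 (scale factor and mAs levels invented).
mAs = np.array([10.0, 25.0, 70.0, 150.0, 290.0])
sigma2 = 5.0 * mAs ** -0.4

# beta is the negative slope of log(sigma^2) against log(mAs).
slope, intercept = np.polyfit(np.log(mAs), np.log(sigma2), 1)
beta = -slope
print(beta)  # 0.4 up to floating-point round-off
```

    With real ROI measurements the points scatter around the line, and the R^2 of this log-log fit is the goodness-of-fit statistic quoted in the results.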

  4. Bayesian Abel Inversion in Quantitative X-Ray Radiography

    DOE PAGES

    Howard, Marylesa; Fowler, Michael; Luttman, Aaron; ...

    2016-05-19

    A common image formation process in high-energy X-ray radiography is to have a pulsed power source that emits X-rays through a scene, a scintillator that absorbs X-rays and fluoresces in the visible spectrum in response to the absorbed photons, and a CCD camera that images the visible light emitted from the scintillator. The intensity image is related to areal density, and, for an object that is radially symmetric about a central axis, the Abel transform then gives the object's volumetric density. Two of the primary drawbacks to classical variational methods for Abel inversion are their sensitivity to the type and scale of regularization chosen and the lack of natural methods for quantifying the uncertainties associated with the reconstructions. In this work we cast the Abel inversion problem within a statistical framework in order to compute volumetric object densities from X-ray radiographs and to quantify uncertainties in the reconstruction. A hierarchical Bayesian model is developed with a likelihood based on a Gaussian noise model and with priors placed on the unknown density profile, the data precision matrix, and two scale parameters. This allows the data to drive the localization of features in the reconstruction and results in a joint posterior distribution for the unknown density profile, the prior parameters, and the spatial structure of the precision matrix. Results of the density reconstructions and pointwise uncertainty estimates are presented for both synthetic signals and real data from a U.S. Department of Energy X-ray imaging facility.
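
    The deterministic forward model being inverted here can be made concrete with a discrete Abel transform, often called onion peeling: for a radially symmetric object binned into concentric shells, the areal density along each line of sight is a chord-length-weighted sum of shell densities. A sketch of that forward matrix (the shell grid is illustrative; the paper's Bayesian inversion machinery is not shown):

```python
import math

def abel_forward_matrix(radii):
    """Discrete Abel transform ("onion peeling"): areal = A @ volumetric,
    where A[i][j] is the chord length of line-of-sight i through shell j.
    Shell j spans [radii[j], radii[j+1]]; line i passes at y = radii[i]."""
    n = len(radii) - 1
    A = [[0.0] * n for _ in range(n)]
    for i in range(n):
        y = radii[i]
        for j in range(i, n):
            outer = math.sqrt(radii[j + 1] ** 2 - y ** 2)
            inner = math.sqrt(max(radii[j] ** 2 - y ** 2, 0.0))
            A[i][j] = 2.0 * (outer - inner)
    return A

# Sanity check: a uniform unit-density ball of radius 1 should project to
# the analytic areal density 2 * sqrt(1 - y^2) at each shell edge.
n = 50
radii = [k / n for k in range(n + 1)]
A = abel_forward_matrix(radii)
density = [1.0] * n
areal = [sum(A[i][j] * density[j] for j in range(n)) for i in range(n)]
err = max(abs(areal[i] - 2.0 * math.sqrt(1.0 - radii[i] ** 2)) for i in range(n))
```

    The chord lengths telescope, so the discretized projection matches the analytic result for this test case, which makes the matrix easy to validate before attempting any inversion.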

  5. Functional results from reconstruction of the anterior cruciate ligament using the central third of the patellar ligament and flexor tendons

    PubMed Central

    de Souza Leao, Marcos George; Pampolha, Abelardo Gautama Moreira; Orlando Junior, Nilton

    2015-01-01

    Objectives To evaluate knee function in patients undergoing reconstruction of the anterior cruciate ligament (ACL) using the central third of the patellar ligament or the medial flexor tendons of the knee, i.e. quadruple ligaments from the semitendinosus and gracilis (ST-G), by means of the Knee Society Score (KSS) and the Lysholm scale. Methods This was a randomized prospective longitudinal study on 40 patients who underwent arthroscopic ACL reconstruction between September 2013 and August 2014. They comprised 37 males and three females, with ages ranging from 16 to 52 years. The patients were numbered randomly from 1 to 40: the even numbers underwent surgical correction using the ST-G tendons and the odd numbers, using the patellar tendon. Functional evaluations were made using the KSS and Lysholm scale, applied in the evening before the surgical procedure and six months after the operation. Results From the statistical analysis, it could be seen that the patients’ functional capacity was significantly greater after the operation than before the operation. There was strong evidence that the two forms of therapy had similar results (p > 0.05) in all the comparisons. Conclusions The results from the ACL reconstructions were similar with regard to functional recovery of the knee and improvement of quality of life, independent of the type of graft. It was not possible to identify the best method of surgical treatment. The surgeon's clinical and technical experience and the patient are the factors that determine the choice of graft type for use in ACL surgery. PMID:27218084

  6. Volumetric quantification of lung nodules in CT with iterative reconstruction (ASiR and MBIR).

    PubMed

    Chen, Baiyu; Barnhart, Huiman; Richard, Samuel; Robins, Marthony; Colsher, James; Samei, Ehsan

    2013-11-01

    Volume quantifications of lung nodules with multidetector computed tomography (CT) images provide useful information for monitoring nodule developments. The accuracy and precision of the volume quantification, however, can be impacted by imaging and reconstruction parameters. This study aimed to investigate the impact of iterative reconstruction algorithms on the accuracy and precision of volume quantification with dose and slice thickness as additional variables. Repeated CT images were acquired from an anthropomorphic chest phantom with synthetic nodules (9.5 and 4.8 mm) at six dose levels, and reconstructed with three reconstruction algorithms [filtered backprojection (FBP), adaptive statistical iterative reconstruction (ASiR), and model based iterative reconstruction (MBIR)] into three slice thicknesses. The nodule volumes were measured with two clinical software packages (A: Lung VCAR, B: iNtuition), and analyzed for accuracy and precision. Precision was found to be generally comparable between FBP and iterative reconstruction, with no statistically significant difference noted for different dose levels, slice thicknesses, and segmentation software. Accuracy was found to be more variable. For large nodules, the accuracy was significantly different between ASiR and FBP for all slice thicknesses with both software packages, and significantly different between MBIR and FBP for 0.625 mm slice thickness with Software A and for all slice thicknesses with Software B. For small nodules, the accuracy was more similar between FBP and iterative reconstruction, with the exception of ASiR vs FBP at 1.25 mm with Software A and MBIR vs FBP at 0.625 mm with Software A. The systematic difference between the accuracy of FBP and iterative reconstructions highlights the importance of extending current segmentation software to accommodate the image characteristics of iterative reconstructions.
In addition, a calibration process may help reduce the dependency of accuracy on reconstruction algorithms, such that volumes quantified from scans with different reconstruction algorithms can be compared. The small difference found between the precision of FBP and iterative reconstructions could be a result both of iterative reconstruction's diminished noise reduction at the edges of the nodules and of its loss of resolution at high noise levels. The findings do not rule out a potential advantage of iterative reconstruction that might be evident in a study that uses a larger number of nodules or repeated scans.

  7. 77 FR 65554 - Subcommittee for Dose Reconstruction Reviews (SDRR), Advisory Board on Radiation and Worker...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-10-29

    ... DEPARTMENT OF HEALTH AND HUMAN SERVICES Centers for Disease Control and Prevention Subcommittee... probability of causation guidelines that have been promulgated by the Department of Health and Human Services... reconstruction tools, presentation of the evolution of peer-review procedures, presentation of statistics...

  8. Darwinian perspectives on the evolution of human languages.

    PubMed

    Pagel, Mark

    2017-02-01

    Human languages evolve by a process of descent with modification in which parent languages give rise to daughter languages over time and in a manner that mimics the evolution of biological species. Descent with modification is just one of many parallels between biological and linguistic evolution that, taken together, offer up a Darwinian perspective on how languages evolve. Combined with statistical methods borrowed from evolutionary biology, this Darwinian perspective has brought new opportunities to the study of the evolution of human languages. These include the statistical inference of phylogenetic trees of languages, the study of how linguistic traits evolve over thousands of years of language change, the reconstruction of ancestral or proto-languages, and using language change to date historical events.

  9. Muscle Activity Map Reconstruction from High Density Surface EMG Signals With Missing Channels Using Image Inpainting and Surface Reconstruction Methods.

    PubMed

    Ghaderi, Parviz; Marateb, Hamid R

    2017-07-01

    The aim of this study was to reconstruct low-quality high-density surface EMG (HDsEMG) signals, recorded with 2-D electrode arrays, using image inpainting and surface reconstruction methods. It is common that some fraction of the electrodes may provide low-quality signals. We used a variety of image inpainting methods, based on partial differential equations (PDEs), and surface reconstruction methods to reconstruct the time-averaged or instantaneous muscle activity maps of those outlier channels. Two novel reconstruction algorithms were also proposed. HDsEMG signals were recorded from the biceps femoris and brachial biceps muscles during low-to-moderate-level isometric contractions, and some of the channels (5-25%) were randomly marked as outliers. The root-mean-square error (RMSE) between the original and reconstructed maps was then calculated. Overall, the proposed Poisson and wave PDEs outperformed the other methods (average RMSE 8.7 μV rms ± 6.1 μV rms and 7.5 μV rms ± 5.9 μV rms ) for the time-averaged single-differential and monopolar map reconstruction, respectively. Biharmonic spline, the discrete cosine transform, and the Poisson PDE outperformed the other methods for the instantaneous map reconstruction. The running time of the proposed Poisson and wave PDE methods, implemented using a Vectorization package, was 4.6 ± 5.7 ms and 0.6 ± 0.5 ms, respectively, for each signal epoch or time sample in each channel. The proposed reconstruction algorithms could be promising new tools for reconstructing muscle activity maps in real-time applications. Proper reconstruction methods could recover the information of low-quality recorded channels in HDsEMG signals.
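
    The simplest PDE-based inpainting in this family solves Laplace's equation over the bad channels, holding good channels fixed: each outlier cell is iteratively replaced by the mean of its neighbours. This is a toy harmonic sketch, not the proposed Poisson or wave PDE methods (the grid, mask, and values are invented for illustration):

```python
def inpaint_laplace(grid, mask, iters=2000):
    """Fill masked cells of a 2-D activity map by solving Laplace's
    equation: each unknown cell is iteratively replaced by the mean of
    its 4-neighbours (Gauss-Seidel), with known cells held fixed."""
    rows, cols = len(grid), len(grid[0])
    g = [row[:] for row in grid]
    for _ in range(iters):
        for i in range(rows):
            for j in range(cols):
                if mask[i][j]:  # True = outlier channel to reconstruct
                    nbrs = [g[i + di][j + dj]
                            for di, dj in ((-1, 0), (1, 0), (0, -1), (0, 1))
                            if 0 <= i + di < rows and 0 <= j + dj < cols]
                    g[i][j] = sum(nbrs) / len(nbrs)
    return g

# Toy "activity map": a linear ramp (harmonic, so exactly recoverable)
rows, cols = 8, 8
truth = [[2.0 * i + 3.0 * j for j in range(cols)] for i in range(rows)]
mask = [[False] * cols for _ in range(rows)]
for i, j in [(2, 2), (2, 3), (5, 4)]:   # pretend these channels are bad
    mask[i][j] = True
observed = [row[:] for row in truth]
for i in range(rows):
    for j in range(cols):
        if mask[i][j]:
            observed[i][j] = 0.0        # corrupted value
restored = inpaint_laplace(observed, mask)
err = max(abs(restored[i][j] - truth[i][j])
          for i in range(rows) for j in range(cols))
```

    A linear ramp is harmonic, so it is recovered essentially exactly; real activity maps are only approximately smooth, which is why the authors compare several PDE and surface-reconstruction variants.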

  10. Hydrodynamic Simulations and Tomographic Reconstructions of the Intergalactic Medium

    NASA Astrophysics Data System (ADS)

    Stark, Casey William

    The Intergalactic Medium (IGM) is the dominant reservoir of matter in the Universe from which the cosmic web and galaxies form. The structure and physical state of the IGM provides insight into the cosmological model of the Universe, the origin and timeline of the reionization of the Universe, as well as being an essential ingredient in our understanding of galaxy formation and evolution. Our primary handle on this information is a signal known as the Lyman-alpha forest (or Ly-alpha forest) -- the collection of absorption features in high-redshift sources due to intervening neutral hydrogen, which scatters HI Ly-alpha photons out of the line of sight. The Ly-alpha forest flux traces density fluctuations at high redshift and at moderate overdensities, making it an excellent tool for mapping large-scale structure and constraining cosmological parameters. Although the computational methodology for simulating the Ly-alpha forest has existed for over a decade, we are just now approaching the scale of computing power required to simultaneously capture large cosmological scales and the scales of the smallest absorption systems. My thesis focuses on using simulations at the edge of modern computing to produce precise predictions of the statistics of the Ly-alpha forest and to better understand the structure of the IGM. In the first part of my thesis, I review the state of hydrodynamic simulations of the IGM, including pitfalls of the existing under-resolved simulations. Our group developed a new cosmological hydrodynamics code to tackle the computational challenge, and I developed a distributed analysis framework to compute flux statistics from our simulations. I present flux statistics derived from a suite of our large hydrodynamic simulations and demonstrate convergence to the per cent level. I also compare flux statistics derived from simulations using different discretizations and hydrodynamic schemes (Eulerian finite volume vs. 
smoothed particle hydrodynamics) and discuss differences in their convergence behavior, their overall agreement, and the implications for cosmological constraints. In the second part of my thesis, I present a tomographic reconstruction method that allows us to make 3D maps of the IGM with Mpc resolution. In order to make reconstructions of large surveys computationally feasible, I developed a new Wiener Filter application with an algorithm specialized to our problem, which significantly reduces the space and time complexity compared to previous implementations. I explore two scientific applications of the maps: finding protoclusters by searching the maps for large, contiguous regions of low flux and finding cosmic voids by searching the maps for regions of high flux. Using a large N-body simulation, I identify and characterize both protoclusters and voids at z = 2.5, in the middle of the redshift range being mapped by ongoing surveys. I provide simple methods for identifying protocluster and void candidates in the tomographic flux maps, and then test them on mock surveys and reconstructions. I present forecasts for sample purity and completeness and other scientific applications of these large, high-redshift objects.
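
    The Wiener filter at the heart of the reconstruction step is easiest to see in the diagonal case: with uncorrelated signal and noise of known per-pixel variances, the posterior-mean map is the data shrunk by S/(S + N). A toy sketch with invented numbers (the thesis's contribution is a specialized algorithm that avoids explicitly forming and inverting these matrices for realistic survey geometries):

```python
def wiener_filter_diagonal(data, signal_var, noise_var):
    """Wiener filter in a basis where the signal covariance S and the
    noise covariance N are both diagonal: m = S (S + N)^{-1} d, elementwise."""
    return [s / (s + n) * d for d, s, n in zip(data, signal_var, noise_var)]

# Toy 1-D map: three "pixels" of noisy flux data
d = [1.0, -0.5, 2.0]
S = [4.0, 4.0, 4.0]   # prior signal variance per pixel (assumed known)
N = [1.0, 1.0, 4.0]   # noise variance per pixel; the last pixel is noisier
m = wiener_filter_diagonal(d, S, N)   # noisier pixels shrink harder toward 0
```

    Noisier pixels are shrunk harder toward the zero prior mean, here giving m = [0.8, -0.4, 1.0]; the full-covariance case couples pixels and is where the computational cost lives.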

  11. Flip-avoiding interpolating surface registration for skull reconstruction.

    PubMed

    Xie, Shudong; Leow, Wee Kheng; Lee, Hanjing; Lim, Thiam Chye

    2018-03-30

    Skull reconstruction is an important and challenging task in craniofacial surgery planning, forensic investigation and anthropological studies. Existing methods typically reconstruct approximating surfaces that regard corresponding points on the target skull as soft constraints, thus incurring non-zero error even for non-defective parts and high overall reconstruction error. This paper proposes a novel geometric reconstruction method that non-rigidly registers an interpolating reference surface that regards corresponding target points as hard constraints, thus achieving low reconstruction error. To overcome the shortcoming of interpolating a surface, a flip-avoiding method is used to detect and exclude conflicting hard constraints that would otherwise cause surface patches to flip and self-intersect. Comprehensive test results show that our method is more accurate and robust than existing skull reconstruction methods. By incorporating symmetry constraints, it can produce more symmetric and normal results than other methods in reconstructing defective skulls with a large number of defects. It is robust against severe outliers such as radiation artifacts in computed tomography due to dental implants. In addition, test results also show that our method outperforms thin-plate spline for model resampling, which enables the active shape model to yield more accurate reconstruction results. As the reconstruction accuracy of defective parts varies with the use of different reference models, we also study the implication of reference model selection for skull reconstruction. Copyright © 2018 John Wiley & Sons, Ltd.

  12. Novel Data Reduction Based on Statistical Similarity

    DOE PAGES

    Lee, Dongeun; Sim, Alex; Choi, Jaesik; ...

    2016-07-18

    Applications such as scientific simulations and power grid monitoring are generating data so quickly that compression is essential to reduce storage requirements or transmission capacity. To achieve better compression, one is often willing to discard some repeated information. These lossy compression methods are primarily designed to minimize the Euclidean distance between the original data and the compressed data. But this measure of distance severely limits either reconstruction quality or compression performance. In this paper, we propose a new class of compression method by redefining the distance measure with a statistical concept known as exchangeability. This approach reduces the storage requirement while capturing essential features of the data. We report our design and implementation of such a compression method named IDEALEM. To demonstrate its effectiveness, we apply it on a set of power grid monitoring data, and show that it can reduce the volume of data much more than the best known compression method while maintaining the quality of the compressed data. Finally, in these tests, IDEALEM captures extraordinary events in the data, while its compression ratios can far exceed 100.
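
    The exchangeability idea can be caricatured in a few lines: treat the stream as windows, and instead of storing a window whose values are statistically indistinguishable from one already stored, emit a cheap reference to it. This toy sketch uses a two-sample Kolmogorov-Smirnov statistic as the similarity test; it illustrates the concept only and is not IDEALEM's actual algorithm (window size and threshold are invented):

```python
def ks_statistic(a, b):
    """Two-sample Kolmogorov-Smirnov statistic: the largest gap between
    the two empirical CDFs."""
    sa, sb = sorted(a), sorted(b)

    def ecdf(s, x):
        # fraction of samples in s that are <= x (binary search)
        lo, hi = 0, len(s)
        while lo < hi:
            mid = (lo + hi) // 2
            if s[mid] <= x:
                lo = mid + 1
            else:
                hi = mid
        return lo / len(s)

    return max(abs(ecdf(sa, x) - ecdf(sb, x)) for x in set(sa + sb))

def compress(windows, threshold=0.3):
    """Toy exchangeability-based compression: store a window only if no
    already-stored window is statistically similar to it (KS statistic
    below threshold); otherwise emit a reference to the stored one."""
    stored, out = [], []
    for w in windows:
        for idx, s in enumerate(stored):
            if ks_statistic(w, s) < threshold:
                out.append(("ref", idx))
                break
        else:
            out.append(("new", len(stored)))
            stored.append(w)
    return stored, out

# Three windows: the second is a permutation of the first (exchangeable),
# the third has a clearly different distribution.
windows = [[0, 1, 2, 3], [3, 2, 1, 0], [10, 11, 12, 13]]
stored, out = compress(windows)
```

    Only two of the three windows are stored; the middle one is replaced by a reference, which is where the compression gain comes from.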

  13. Improving stochastic estimates with inference methods: calculating matrix diagonals.

    PubMed

    Selig, Marco; Oppermann, Niels; Ensslin, Torsten A

    2012-02-01

    Estimating the diagonal entries of a matrix that is not directly accessible but is available only as a linear operator in the form of a computer routine is a common necessity in many computational applications, especially in image reconstruction and statistical inference. Here, methods of statistical inference are used to improve the accuracy or reduce the computational cost of matrix probing methods for estimating matrix diagonals. In particular, the generalized Wiener filter methodology, as developed within information field theory, is shown to significantly improve estimates based on only a few sampling probes, in cases in which some form of continuity of the solution can be assumed. The strength, length scale, and precise functional form of the exploited autocorrelation function of the matrix diagonal are determined from the probes themselves. The developed algorithm is successfully applied to mock and real-world problems. These performance tests show that, in situations where a matrix diagonal has to be calculated from only a small number of computationally expensive probes, a speedup by a factor of 2 to 10 is possible with the proposed method. © 2012 American Physical Society
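
    The baseline that the paper improves on is classical matrix probing: with Rademacher random vectors v, the diagonal satisfies diag(A) = E[v ∘ Av], so averaging a few probes gives an unbiased estimate. A minimal sketch of that baseline (the 3 × 3 operator is illustrative; the paper's Wiener-filter smoothing of the noisy estimate is not shown):

```python
import random

def probe_diagonal(matvec, n, num_probes, seed=0):
    """Estimate diag(A) using only matrix-vector products:
    diag(A) ~= mean_k  v_k * (A v_k)  with Rademacher probes v_k."""
    rng = random.Random(seed)
    est = [0.0] * n
    for _ in range(num_probes):
        v = [rng.choice((-1.0, 1.0)) for _ in range(n)]
        av = matvec(v)
        for i in range(n):
            est[i] += v[i] * av[i]
    return [e / num_probes for e in est]

# An operator accessible only through its action: here A = diag(1, 2, 3)
def matvec(v):
    return [1.0 * v[0], 2.0 * v[1], 3.0 * v[2]]

diag_est = probe_diagonal(matvec, 3, num_probes=10)
```

    For a purely diagonal operator each probe is already exact; off-diagonal entries contribute zero-mean noise that decays only as 1/√(number of probes), which is precisely the regime where the proposed inference-based smoothing pays off.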

  14. Comparison of algebraic and analytical approaches to the formulation of the statistical model-based reconstruction problem for X-ray computed tomography.

    PubMed

    Cierniak, Robert; Lorent, Anna

    2016-09-01

    The main aim of this paper is to investigate the conditioning-related properties of our originally formulated statistical model-based iterative approach to the problem of image reconstruction from projections, and in this manner to demonstrate the superiority of this approach over those recently used by other authors. The reconstruction algorithm based on this conception uses maximum likelihood estimation with an objective adjusted to the probability distribution of measured signals obtained from an X-ray computed tomography system with parallel beam geometry. The analysis and experimental results presented here show that our analytical approach outperforms the referential algebraic methodology, which is explored widely in the literature and exploited in various commercial implementations. Copyright © 2016 Elsevier Ltd. All rights reserved.

  15. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stassi, D.; Ma, H.; Schmidt, T. G., E-mail: taly.gilat-schmidt@marquette.edu

    Purpose: Reconstructing a low-motion cardiac phase is expected to improve coronary artery visualization in coronary computed tomography angiography (CCTA) exams. This study developed an automated algorithm for selecting the optimal cardiac phase for CCTA reconstruction. The algorithm uses prospectively gated, single-beat, multiphase data made possible by wide cone-beam imaging. The proposed algorithm differs from previous approaches in that the optimal phase is identified directly from vessel image quality (IQ), rather than through motion estimation and interphase processing. Because there is no processing of interphase information, the algorithm can be applied to any sampling of image phases, making it suited for prospectively gated studies where only a subset of phases is available. Methods: An automated algorithm was developed to select the optimal phase based on quantitative IQ metrics. For each reconstructed slice at each reconstructed phase, an image quality metric was calculated based on measures of circularity and edge strength of through-plane vessels. The image quality metric was aggregated across slices, while a metric of vessel-location consistency was used to ignore slices that did not contain through-plane vessels. The algorithm performance was evaluated using two observer studies. Fourteen single-beat cardiac CT exams (Revolution CT, GE Healthcare, Chalfont St. Giles, UK) reconstructed at 2% intervals were evaluated for best systolic (1), diastolic (6), or systolic and diastolic phases (7) by three readers and the algorithm. Pairwise inter-reader and reader-algorithm agreement was evaluated using the mean absolute difference (MAD) and concordance correlation coefficient (CCC) between the reader and algorithm-selected phases. A reader-consensus best phase was determined and compared to the algorithm selected phase.
In cases where the algorithm and consensus best phases differed by more than 2%, IQ was scored by three readers using a five-point Likert scale. Results: There was no statistically significant difference between inter-reader and reader-algorithm agreement for either MAD or CCC metrics (p > 0.1). The algorithm phase was within 2% of the consensus phase in 15/21 cases. The average absolute difference between consensus and algorithm best phases was 2.29% ± 2.47%, with a maximum difference of 8%. Average image quality scores for the algorithm chosen best phase were 4.01 ± 0.65 overall, 3.33 ± 1.27 for right coronary artery (RCA), 4.50 ± 0.35 for left anterior descending (LAD) artery, and 4.50 ± 0.35 for left circumflex artery (LCX). Average image quality scores for the consensus best phase were 4.11 ± 0.54 overall, 3.44 ± 1.03 for RCA, 4.39 ± 0.39 for LAD, and 4.50 ± 0.18 for LCX. There was no statistically significant difference (p > 0.1) between the image quality scores of the algorithm phase and the consensus phase. Conclusions: The proposed algorithm was statistically equivalent to a reader in selecting an optimal cardiac phase for CCTA exams. When reader and algorithm phases differed by >2%, image quality as rated by blinded readers was statistically equivalent. By detecting the optimal phase for CCTA reconstruction, the proposed algorithm is expected to improve coronary artery visualization in CCTA exams.
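
    The abstract does not give the exact IQ formula, but a standard circularity measure for a segmented vessel cross-section is the isoperimetric ratio 4πA/P², equal to 1 for a perfect circle, and phase selection then reduces to maximizing the aggregated score. A hypothetical sketch along those lines (the function names, scores, and phases are invented; edge strength and the vessel-location consistency metric are omitted):

```python
import math

def circularity(area, perimeter):
    """Isoperimetric circularity 4*pi*A/P**2: 1.0 for a circle, lower otherwise."""
    return 4.0 * math.pi * area / perimeter ** 2

def best_phase(score_by_phase):
    """Hypothetical phase selection: return the phase with the highest
    aggregated image-quality score."""
    return max(score_by_phase, key=score_by_phase.get)

# A circle of radius 2 (area 4*pi, perimeter 4*pi) scores exactly 1.0;
# a unit square (area 1, perimeter 4) scores pi/4.
c_circle = circularity(4.0 * math.pi, 4.0 * math.pi)
c_square = circularity(1.0, 4.0)

# Invented aggregated scores for three candidate phases (% of R-R interval)
phase = best_phase({40: 3.1, 44: 3.5, 76: 4.2})
```

    Because the metric is evaluated independently per phase, this selection works with any subset of reconstructed phases, matching the motivation given for avoiding interphase processing.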

  16. Fragility of estimated spatial temperature patterns in climate field reconstructions of the Common Era

    NASA Astrophysics Data System (ADS)

    Wang, J.; Emile-Geay, J.; Vaccaro, A.; Guillot, D.; Rajaratnam, B.

    2013-12-01

    Climate field reconstructions (CFRs) of the Common Era can provide insight into dynamical causes of low-frequency climate variability. For instance, the Mann et al. [2009] study found that the reconstructed sea-surface temperature difference between the Medieval Climate Anomaly and the Little Ice Age (hereinafter MCA - LIA) is marked by a La-Niña like pattern over the tropical Pacific, and proposed dynamical explanations for this observation. In this talk, we assess the robustness of such spatial patterns. First we examine the impact of the CFR methodology. Starting with the network of Mann et al. [2008] (hereinafter M08), we perform temperature reconstruction using four different CFR techniques: RegEM-TTLS [Schneider, 2001], the Mann et al. [2009] implementation of RegEM-TTLS (hereinafter M09), Canonical Correlation Analysis [Smerdon et al., 2010, CCA] and GraphEM [Guillot et al., in revision]. We find that results are greatly method-dependent even with identical inputs. While the M09 reconstruction displays a La Niña-like pattern over the tropical Pacific for MCA - LIA, CCA gives a neutral pattern, RegEM-TTLS and GraphEM both display El Niño-like pattern but show different amplitudes. Next we assess a given CFR technique's sensitivity to the selection of inputs. Proxies are selected based on the statistical significance of their correlations with HadCRUT3v annual temperature. A multiple hypothesis test [Ventura et al., 2004] is conducted to preclude spurious correlations. This choice has a large impact on resulting CFRs. In particular, whether the correlation is calculated between local or regional temperature-proxy pairs determines the number of significant records included in the proxy network. This in turn greatly affects the reconstructed spatial patterns and the Northern Hemispheric mean temperature time series with all CFR methods investigated. 
In order to further analyze CFRs' sensitivities to the abovementioned procedural choices, we assemble an updated multi-proxy network and produce a new 2000-year-long global temperature reconstruction. The network expands upon the existing M08 network by screening tree-ring proxies for the 'divergence problem' [D'Arrigo et al., 2008] and adds 58 non tree-ring proxies, of which 28 are located in the tropics and 11 are available within at least the past 1500 years. Overall, considerable differences are still evident among reconstructions using different CFR methods. Yet such differences are smaller using the updated proxy network compared with using the M08 network, consistent with pseudoproxy studies [Wang et al, 2013]. Our results collectively highlight the fragility of reconstructed patterns in the current state of proxy networks and CFR methods. We conclude that dynamical interpretations of such patterns are premature until these technical aspects are resolved. Reference: Wang, J., Emile-Geay, J., Guillot, D., Smerdon, J. E., and Rajaratnam, B.: Evaluating climate field reconstruction techniques using improved emulations of real-world conditions, Clim. Past Discuss., 9, 3015-3060, doi:10.5194/cpd-9-3015-2013, 2013.
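
    Screening proxies by the significance of their temperature correlations while precluding spurious hits is a multiple-testing problem; the cited Ventura et al. [2004] procedure controls the false discovery rate. A minimal Benjamini-Hochberg sketch (the p-values are illustrative; the talk's exact test may differ in detail):

```python
def benjamini_hochberg(p_values, alpha=0.05):
    """Benjamini-Hochberg step-up procedure: return the indices of
    hypotheses (here, proxy-temperature correlations) kept at false
    discovery rate alpha."""
    m = len(p_values)
    order = sorted(range(m), key=lambda i: p_values[i])
    k_max = 0
    for rank, i in enumerate(order, start=1):
        if p_values[i] <= alpha * rank / m:
            k_max = rank       # largest rank passing the step-up criterion
    return sorted(order[:k_max])

# Illustrative p-values for five candidate proxy records
pvals = [0.001, 0.20, 0.013, 0.04, 0.9]
kept = benjamini_hochberg(pvals)   # records 0 and 2 survive screening
```

    Whether the p-values come from local or regional temperature-proxy correlations changes which records survive this screen, which is exactly the sensitivity the abstract highlights.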

  17. Accuracy Assessment of Three-dimensional Surface Reconstructions of In vivo Teeth from Cone-beam Computed Tomography

    PubMed Central

    Sang, Yan-Hui; Hu, Hong-Cheng; Lu, Song-He; Wu, Yu-Wei; Li, Wei-Ran; Tang, Zhi-Hui

    2016-01-01

    Background: The accuracy of three-dimensional (3D) reconstructions from cone-beam computed tomography (CBCT) has been particularly important in dentistry, which will affect the effectiveness of diagnosis, treatment plan, and outcome in clinical practice. The aims of this study were to assess the linear, volumetric, and geometric accuracy of 3D reconstructions from CBCT and to investigate the influence of voxel size and CBCT system on the reconstructions results. Methods: Fifty teeth from 18 orthodontic patients were assigned to three groups as NewTom VG 0.15 mm group (NewTom VG; voxel size: 0.15 mm; n = 17), NewTom VG 0.30 mm group (NewTom VG; voxel size: 0.30 mm; n = 16), and VATECH DCTPRO 0.30 mm group (VATECH DCTPRO; voxel size: 0.30 mm; n = 17). The 3D reconstruction models of the teeth were segmented from CBCT data manually using Mimics 18.0 (Materialise Dental, Leuven, Belgium), and the extracted teeth were scanned by 3Shape optical scanner (3Shape A/S, Denmark). Linear and volumetric deviations were separately assessed by comparing the length and volume of the 3D reconstruction model with physical measurement by paired t-test. Geometric deviations were assessed by the root mean square value of the imposed 3D reconstruction and optical models by one-sample t-test. To assess the influence of voxel size and CBCT system on 3D reconstruction, analysis of variance (ANOVA) was used (α = 0.05). Results: The linear, volumetric, and geometric deviations were −0.03 ± 0.48 mm, −5.4 ± 2.8%, and 0.117 ± 0.018 mm for NewTom VG 0.15 mm group; −0.45 ± 0.42 mm, −4.5 ± 3.4%, and 0.116 ± 0.014 mm for NewTom VG 0.30 mm group; and −0.93 ± 0.40 mm, −4.8 ± 5.1%, and 0.194 ± 0.117 mm for VATECH DCTPRO 0.30 mm group, respectively. There were statistically significant differences between groups in terms of linear measurement (P < 0.001), but no significant difference in terms of volumetric measurement (P = 0.774). 
No statistically significant difference was found in geometric measurement between the NewTom VG 0.15 mm and NewTom VG 0.30 mm groups (P = 0.999), while a significant difference was found between the VATECH DCTPRO 0.30 mm and NewTom VG 0.30 mm groups (P = 0.006). Conclusions: The 3D reconstruction from CBCT data can achieve high linear, volumetric, and geometric accuracy. Increasing voxel resolution from 0.30 to 0.15 mm does not result in increased accuracy of 3D tooth reconstruction, while different systems can affect the accuracy. PMID:27270544
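
    The paired t-test used for the linear and volumetric comparisons has a simple closed form: difference each reconstructed measurement against its physical counterpart and test whether the mean difference is zero. A stdlib sketch (the measurements are invented for illustration):

```python
import math

def paired_t(x, y):
    """Paired t statistic for measurements x and y on the same n objects:
    t = mean(d) / (sd(d) / sqrt(n)) with d = x - y and n - 1 degrees of freedom."""
    d = [a - b for a, b in zip(x, y)]
    n = len(d)
    mean = sum(d) / n
    var = sum((di - mean) ** 2 for di in d) / (n - 1)   # sample variance
    return mean / math.sqrt(var / n)

# Invented example: reconstructed tooth lengths vs. physical measurements (mm)
recon = [20.1, 18.4, 22.0, 19.5]
physical = [20.0, 18.2, 21.6, 19.2]
t_stat = paired_t(recon, physical)   # compared against t with n - 1 = 3 dof
```

    The resulting statistic is referred to a t distribution with n − 1 degrees of freedom; pairing removes per-tooth variability, which is why it is the natural test for this measurement design.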

  18. Global vegetation distribution and terrestrial climate evolution at the Eocene-Oligocene transition

    NASA Astrophysics Data System (ADS)

    Pound, Matthew; Salzmann, Ulrich

    2016-04-01

    The Eocene - Oligocene transition (EOT; ca. 34-33.5 Ma) is widely considered to be the biggest step in Cenozoic climate evolution. Geochemical marine records show both surface and bottom water cooling, associated with the expansion of Antarctic glaciers and a reduction in the atmospheric CO2 concentration. However, the global response of the terrestrial biosphere to the EOT is less well understood and not uniform when comparing different regions. We present new global vegetation and terrestrial climate reconstructions of the Priabonian (late Eocene; 38-33.9 Ma) and Rupelian (early Oligocene; 33.9-28.45 Ma) by synthesising 215 pollen and spore localities. Using presence/absence data of pollen and spores with multivariate statistics has allowed the reconstruction of palaeo-biomes without relying on modern analogues. The reconstructed palaeo-biomes do not show the equator-ward shift at the EOT, which would be expected from a global cooling. Reconstructions of mean annual temperature, cold month mean temperature and warm month mean temperature do not show a global cooling of terrestrial climate across the EOT. Our new reconstructions differ from previous global syntheses by being based on an internally consistent statistically defined classification of palaeo-biomes and our terrestrial based climate reconstructions are in stark contrast to some marine based climate estimates. Our results raise new questions on the nature and extent of terrestrial global climate change at the EOT.

  19. Psychosocial correlates of immediate versus delayed reconstruction of the breast.

    PubMed

    Wellisch, D K; Schain, W S; Noone, R B; Little, J W

    1985-11-01

    Two groups of consecutive patients from two different plastic surgical practice populations were evaluated to determine psychosocial differences between those who underwent immediate (n = 25) versus delayed (n = 38) breast reconstruction. Psychological assessment consisted of a standardized symptom inventory (BSI) and a specially designed self-report questionnaire investigating reactions unique to mastectomy and reconstruction. The two groups were closely matched on sociodemographic data, with the typical subject being a well-educated and employed Caucasian wife. Verbal reports of physical complaints revealed no significant differences between the two groups except for difficulty with arm movement, which was significantly more common in the immediate group (p = 0.006). This difference most likely was due to the axillary dissection being performed simultaneously at the time of reconstruction. The relationship between timing of reconstruction and self-reported distress over the mastectomy experience revealed that only 25 percent of the women who underwent immediate repair reported "high distress" in recalling their mastectomy surgery compared with 60 percent of the delayed reconstruction group (p = 0.02). In reference to the two scales measuring psychological symptoms, a general trend was present, with the delayed group scoring higher (although not statistically significantly) on 9 of our 12 scales. Ninety-six percent of the immediate group and 89 percent of the delayed group reported satisfaction with results. (ABSTRACT TRUNCATED AT 250 WORDS)

  20. Anchoring quartet-based phylogenetic distances and applications to species tree reconstruction.

    PubMed

    Sayyari, Erfan; Mirarab, Siavash

    2016-11-11

    Inferring species trees from gene trees using coalescent-based summary methods has been the subject of much attention, yet new scalable and accurate methods are needed. We introduce DISTIQUE, a new statistically consistent summary method for inferring species trees from gene trees under the coalescent model. We generalize our results to arbitrary phylogenetic inference problems; we show that two arbitrarily chosen leaves, called anchors, can be used to estimate relative distances between all other pairs of leaves by inferring relevant quartet trees. This results in a family of distance-based tree inference methods, with running times ranging from quadratic to quartic in the number of leaves. We show in simulated studies that DISTIQUE has accuracy comparable to leading coalescent-based summary methods and reduced running times.
