Hudson, H M; Ma, J; Green, P
1994-01-01
Many algorithms for medical image reconstruction adopt versions of the expectation-maximization (EM) algorithm. In this approach, parameter estimates that maximize a complete-data likelihood or penalized likelihood are obtained in each iteration. Implicitly (and sometimes explicitly) penalized algorithms require smoothing of the current reconstruction in the image domain as part of their iteration scheme. In this paper, we discuss alternatives to EM which adapt Fisher's method of scoring (FS) and other methods for direct maximization of the incomplete-data likelihood. Jacobi and Gauss-Seidel methods for non-linear optimization provide efficient algorithms applying FS in tomography. One approach uses smoothed projection data in its iterations. We investigate the convergence of Jacobi and Gauss-Seidel algorithms with clinical tomographic projection data.
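For context, a schematic contrast (our notation, not taken from the paper) between the classical ML-EM update for emission tomography and a Fisher-scoring step on the incomplete-data log-likelihood:

\[
\lambda_j^{(k+1)} = \frac{\lambda_j^{(k)}}{\sum_i a_{ij}} \sum_i a_{ij}\,\frac{y_i}{[A\lambda^{(k)}]_i} \quad\text{(ML-EM)}, \qquad
\theta^{(k+1)} = \theta^{(k)} + \mathcal{I}\big(\theta^{(k)}\big)^{-1}\,\nabla\ell\big(\theta^{(k)}\big) \quad\text{(Fisher scoring)},
\]

where \(y\) is the projection data, \(A=(a_{ij})\) the system matrix, \(\ell\) the incomplete-data log-likelihood, and \(\mathcal{I}\) the expected (Fisher) information. Jacobi and Gauss-Seidel variants of the scoring step differ in whether parameter updates within an iteration use simultaneous or sequentially refreshed current values.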
SubspaceEM: A Fast Maximum-a-posteriori Algorithm for Cryo-EM Single Particle Reconstruction
Dvornek, Nicha C.; Sigworth, Fred J.; Tagare, Hemant D.
2015-01-01
Single particle reconstruction methods based on the maximum-likelihood principle and the expectation-maximization (E–M) algorithm are popular because of their ability to produce high resolution structures. However, these algorithms are computationally very expensive, requiring a network of computational servers. To overcome this computational bottleneck, we propose a new mathematical framework for accelerating maximum-likelihood reconstructions. The speedup is by orders of magnitude and the proposed algorithm produces similar quality reconstructions compared to the standard maximum-likelihood formulation. Our approach uses subspace approximations of the cryo-electron microscopy (cryo-EM) data and projection images, greatly reducing the number of image transformations and comparisons that are computed. Experiments using simulated and actual cryo-EM data show that speedup in overall execution time compared to traditional maximum-likelihood reconstruction reaches factors of over 300.
NASA Astrophysics Data System (ADS)
Coakley, Kevin J.; Vecchia, Dominic F.; Hussey, Daniel S.; Jacobson, David L.
2013-10-01
At the NIST Neutron Imaging Facility, we collect neutron projection data for both the dry and wet states of a Proton-Exchange-Membrane (PEM) fuel cell. Transmitted thermal neutrons captured in a scintillator doped with lithium-6 produce scintillation light that is detected by an amorphous silicon detector. Based on joint analysis of the dry and wet state projection data, we reconstruct a residual neutron attenuation image with a Penalized Likelihood method using an edge-preserving Huber penalty function whose two parameters control how well jumps in the reconstruction are preserved and how well noisy fluctuations are smoothed out. The choice of these parameters greatly influences the resulting reconstruction. We present a data-driven cross validation method that objectively selects these parameters, and study its performance for both simulated and experimental data. Before reconstruction, we transform the projection data so that the variance-to-mean ratio is approximately one. For both simulated and measured projection data, the Penalized Likelihood reconstruction is visually sharper than a reconstruction yielded by a standard Filtered Back Projection method. In an idealized simulation experiment, we demonstrate that the cross validation procedure selects regularization parameters that yield a reconstruction that is nearly optimal according to a root-mean-square prediction error criterion.
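For illustration, a minimal sketch of the two ingredients named above. The paper's exact variance-stabilizing transform is not specified in the abstract, so the Anscombe transform below is only a standard stand-in for that preprocessing step:

```python
import numpy as np

def huber_penalty(z, delta):
    """Edge-preserving Huber function: quadratic for small arguments
    (smooths noisy fluctuations), linear for large ones (preserves
    jumps). The threshold delta and an overall penalty weight are the
    two tuning parameters the cross validation procedure selects."""
    z = np.asarray(z, dtype=float)
    return np.where(np.abs(z) <= delta,
                    0.5 * z**2,
                    delta * (np.abs(z) - 0.5 * delta))

def anscombe(y):
    """Standard variance-stabilizing transform for Poisson-like counts;
    an assumed stand-in (not the paper's stated transform) for making
    the variance-to-mean ratio approximately one."""
    return 2.0 * np.sqrt(np.asarray(y, dtype=float) + 3.0 / 8.0)
```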
Patch-based image reconstruction for PET using prior-image derived dictionaries
NASA Astrophysics Data System (ADS)
Tahaei, Marzieh S.; Reader, Andrew J.
2016-09-01
In PET image reconstruction, regularization is often needed to reduce the noise in the resulting images. Patch-based image processing techniques have recently been successfully used for regularization in medical image reconstruction through a penalized likelihood framework. Re-parameterization within reconstruction is another powerful regularization technique in which the object in the scanner is re-parameterized using coefficients for spatially-extensive basis vectors. In this work, a method for extracting patch-based basis vectors from the subject’s MR image is proposed. The coefficients for these basis vectors are then estimated using the conventional MLEM algorithm. Furthermore, using the alternating direction method of multipliers, an algorithm for optimizing the Poisson log-likelihood while imposing sparsity on the parameters is also proposed. This novel method is then utilized to find sparse coefficients for the patch-based basis vectors extracted from the MR image. The results indicate the superiority of the proposed methods over patch-based regularization using the penalized likelihood framework.
Anatomically-Aided PET Reconstruction Using the Kernel Method
Hutchcroft, Will; Wang, Guobao; Chen, Kevin T.; Catana, Ciprian; Qi, Jinyi
2016-01-01
This paper extends the kernel method that was proposed previously for dynamic PET reconstruction, to incorporate anatomical side information into the PET reconstruction model. In contrast to existing methods that incorporate anatomical information using a penalized likelihood framework, the proposed method incorporates this information in the simpler maximum likelihood (ML) formulation and is amenable to ordered subsets. The new method also does not require any segmentation of the anatomical image to obtain edge information. We compare the kernel method with the Bowsher method for anatomically-aided PET image reconstruction through a simulated data set. Computer simulations demonstrate that the kernel method offers advantages over the Bowsher method in region of interest (ROI) quantification. Additionally the kernel method is applied to a 3D patient data set. The kernel method results in reduced noise at a matched contrast level compared with the conventional ML expectation maximization (EM) algorithm.
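A toy dense-matrix sketch of the kernelized EM idea (our simplification; variable names are illustrative): the image is re-parameterized as x = Kα, with K built from anatomical features, and the usual ML-EM update is run on the coefficients α:

```python
import numpy as np

def kernel_em(alpha, A, K, y, n_iter=50, eps=1e-12):
    """Toy kernelized ML-EM with forward model ybar = A @ K @ alpha.
    A is a (dense, for illustration) system matrix, K the kernel matrix
    derived from an anatomical image, y the measured sinogram."""
    sens = K.T @ (A.T @ np.ones(A.shape[0]))   # sensitivity, K'A'1
    for _ in range(n_iter):
        ybar = A @ (K @ alpha) + eps           # expected projections
        alpha *= (K.T @ (A.T @ (y / ybar))) / (sens + eps)
    return K @ alpha                           # reconstructed activity image
```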
Free energy reconstruction from steered dynamics without post-processing
DOE Office of Scientific and Technical Information (OSTI.GOV)
Athenes, Manuel, E-mail: Manuel.Athenes@cea.f; Condensed Matter and Materials Division, Physics and Life Sciences Directorate, LLNL, Livermore, CA 94551; Marinica, Mihai-Cosmin
2010-09-20
Various methods achieving importance sampling in ensembles of nonequilibrium trajectories enable one to estimate free energy differences and, by maximum-likelihood post-processing, to reconstruct free energy landscapes. Here, based on Bayes' theorem, we propose a more direct method in which a posterior likelihood function is used both to construct the steered dynamics and to infer the contribution to equilibrium of all the sampled states. The method is implemented with two steering schedules. First, using non-autonomous steering, we calculate the migration barrier of the vacancy in α-Fe. Second, using an autonomous scheduling related to metadynamics and equivalent to temperature-accelerated molecular dynamics, we accurately reconstruct the two-dimensional free energy landscape of the 38-atom Lennard-Jones cluster as a function of an orientational bond-order parameter and energy, down to the solid-solid structural transition temperature of the cluster and without maximum-likelihood post-processing.
Bayesian image reconstruction - The pixon and optimal image modeling
NASA Technical Reports Server (NTRS)
Pina, R. K.; Puetter, R. C.
1993-01-01
In this paper we describe the optimal image model, maximum residual likelihood method (OptMRL) for image reconstruction. OptMRL is a Bayesian image reconstruction technique for removing point-spread function blurring. OptMRL uses both a goodness-of-fit criterion (GOF) and an 'image prior', i.e., a function which quantifies the a priori probability of the image. Unlike standard maximum entropy methods, which typically reconstruct the image on the data pixel grid, OptMRL varies the image model in order to find the optimal functional basis with which to represent the image. We show how an optimal basis for image representation can be selected and in doing so, develop the concept of the 'pixon' which is a generalized image cell from which this basis is constructed. By allowing both the image and the image representation to be variable, the OptMRL method greatly increases the volume of solution space over which the image is optimized. Hence the likelihood of the final reconstructed image is greatly increased. For the goodness-of-fit criterion, OptMRL uses the maximum residual likelihood probability distribution introduced previously by Pina and Puetter (1992). This GOF probability distribution, which is based on the spatial autocorrelation of the residuals, has the advantage that it ensures spatially uncorrelated image reconstruction residuals.
Joint reconstruction of activity and attenuation in Time-of-Flight PET: A Quantitative Analysis.
Rezaei, Ahmadreza; Deroose, Christophe M; Vahle, Thomas; Boada, Fernando; Nuyts, Johan
2018-03-01
Joint activity and attenuation reconstruction methods from time of flight (TOF) positron emission tomography (PET) data provide an effective solution to attenuation correction when no (or incomplete/inaccurate) information on the attenuation is available. One of the main barriers limiting their use in clinical practice is the lack of validation of these methods on a relatively large patient database. In this contribution, we aim at validating the activity reconstructions of the maximum likelihood activity reconstruction and attenuation registration (MLRR) algorithm on a whole-body patient data set. Furthermore, a partial validation (since the scale problem of the algorithm is avoided for now) of the maximum likelihood activity and attenuation reconstruction (MLAA) algorithm is also provided. We present a quantitative comparison of the joint reconstructions to the current clinical gold-standard maximum likelihood expectation maximization (MLEM) reconstruction with CT-based attenuation correction. Methods: The whole-body TOF-PET emission data of each patient data set is processed as a whole to reconstruct an activity volume covering all the acquired bed positions, which helps to reduce the problem of a scale per bed position in MLAA to a global scale for the entire activity volume. Three reconstruction algorithms are used: MLEM, MLRR and MLAA. A maximum likelihood (ML) scaling of the single scatter simulation (SSS) estimate to the emission data is used for scatter correction. The reconstruction results are then analyzed in different regions of interest. Results: The joint reconstructions of the whole-body patient data set provide better quantification in case of PET and CT misalignments caused by patient and organ motion. Our quantitative analysis shows a difference of -4.2% (±2.3%) and -7.5% (±4.6%) between the joint reconstructions of MLRR and MLAA compared to MLEM, averaged over all regions of interest, respectively. Conclusion: Joint activity and attenuation estimation methods provide a useful means to estimate the tracer distribution in cases where CT-based attenuation images are subject to misalignments or are not available. With an accurate estimate of the scatter contribution in the emission measurements, the joint TOF-PET reconstructions are within clinically acceptable accuracy.
Krishnan, Neeraja M; Seligmann, Hervé; Stewart, Caro-Beth; De Koning, A P Jason; Pollock, David D
2004-10-01
Reconstruction of ancestral DNA and amino acid sequences is an important means of inferring information about past evolutionary events. Such reconstructions suggest changes in molecular function and evolutionary processes over the course of evolution and are used to infer adaptation and convergence. Maximum likelihood (ML) is generally thought to provide relatively accurate reconstructed sequences compared to parsimony, but both methods lead to the inference of multiple directional changes in nucleotide frequencies in primate mitochondrial DNA (mtDNA). To better understand this surprising result, as well as to better understand how parsimony and ML differ, we constructed a series of computationally simple "conditional pathway" methods that differed in the number of substitutions allowed per site along each branch, and we also evaluated the entire Bayesian posterior frequency distribution of reconstructed ancestral states. We analyzed primate mitochondrial cytochrome b (Cyt-b) and cytochrome oxidase subunit I (COI) genes and found that ML reconstructs ancestral frequencies that are often more different from tip sequences than are parsimony reconstructions. In contrast, frequency reconstructions based on the posterior ensemble more closely resemble extant nucleotide frequencies. Simulations indicate that these differences in ancestral sequence inference are probably due to deterministic bias caused by high uncertainty in the optimization-based ancestral reconstruction methods (parsimony, ML, Bayesian maximum a posteriori). In contrast, ancestral nucleotide frequencies based on an average of the Bayesian set of credible ancestral sequences are much less biased. The methods involving simpler conditional pathway calculations have slightly reduced likelihood values compared to full likelihood calculations, but they can provide fairly unbiased nucleotide reconstructions and may be useful in more complex phylogenetic analyses than considered here due to their speed and flexibility. To determine whether biased reconstructions using optimization methods might affect inferences of functional properties, ancestral primate mitochondrial tRNA sequences were inferred and helix-forming propensities for conserved pairs were evaluated in silico. For ambiguously reconstructed nucleotides at sites with high base composition variability, ancestral tRNA sequences from Bayesian analyses were more compatible with canonical base pairing than were those inferred by other methods. Thus, nucleotide bias in reconstructed sequences apparently can lead to serious bias and inaccuracies in functional predictions.
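The frequency comparison described above amounts to averaging base composition over a posterior sample of credible ancestral sequences rather than counting bases in a single optimized reconstruction; a toy sketch (names illustrative):

```python
import numpy as np

def posterior_mean_frequencies(ancestral_samples, weights=None):
    """Average nucleotide frequencies over a set of credible ancestral
    sequences (e.g., drawn from a Bayesian posterior), instead of using
    the single MAP/ML reconstruction, which the study found to be
    more biased."""
    w = np.ones(len(ancestral_samples)) if weights is None else np.asarray(weights, float)
    w /= w.sum()
    freqs = dict.fromkeys("ACGT", 0.0)
    for seq, wi in zip(ancestral_samples, w):
        for b in freqs:
            freqs[b] += wi * seq.count(b) / len(seq)
    return freqs
```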
Kamneva, Olga K; Rosenberg, Noah A
2017-01-01
Hybridization events generate reticulate species relationships, giving rise to species networks rather than species trees. We report a comparative study of consensus, maximum parsimony, and maximum likelihood methods of species network reconstruction using gene trees simulated assuming a known species history. We evaluate the role of the divergence time between species involved in a hybridization event, the relative contributions of the hybridizing species, and the error in gene tree estimation. When gene tree discordance is mostly due to hybridization and not due to incomplete lineage sorting (ILS), most of the methods can detect even highly skewed hybridization events between highly divergent species. For recent divergences between hybridizing species, when the influence of ILS is sufficiently high, likelihood methods outperform parsimony and consensus methods, which erroneously identify extra hybridizations. The more sophisticated likelihood methods, however, are affected by gene tree errors to a greater extent than are consensus and parsimony.
Yang, Defu; Wang, Lin; Chen, Dongmei; Yan, Chenggang; He, Xiaowei; Liang, Jimin; Chen, Xueli
2018-05-17
The reconstruction of bioluminescence tomography (BLT) is severely ill-posed due to the insufficient measurements and the diffuse nature of the light propagation. A predefined permissible source region (PSR) combined with regularization terms is one common strategy to reduce such ill-posedness. However, the PSR is usually hard to determine and can easily be affected by subjective judgment. Hence, we theoretically developed a filtered maximum likelihood expectation maximization (fMLEM) method for BLT. Our method can avoid predefining the PSR and provides a robust and accurate result for global reconstruction. In the method, the simplified spherical harmonics approximation (SPN) was applied to characterize diffuse light propagation in the medium, and the statistical estimation-based MLEM algorithm combined with a filter function was used to solve the inverse problem. We systematically demonstrated the performance of our method by regular geometry- and digital mouse-based simulations and a liver cancer-based in vivo experiment. Graphical abstract: The filtered MLEM-based global reconstruction method for BLT.
Maximum Likelihood Reconstruction for Magnetic Resonance Fingerprinting
Zhao, Bo; Setsompop, Kawin; Ye, Huihui; Cauley, Stephen; Wald, Lawrence L.
2017-01-01
This paper introduces a statistical estimation framework for magnetic resonance (MR) fingerprinting, a recently proposed quantitative imaging paradigm. Within this framework, we present a maximum likelihood (ML) formalism to estimate multiple MR tissue parameter maps directly from highly undersampled, noisy k-space data. A novel algorithm, based on variable splitting, the alternating direction method of multipliers, and the variable projection method, is developed to solve the resulting optimization problem. Representative results from both simulations and in vivo experiments demonstrate that the proposed approach yields significantly improved accuracy in parameter estimation, compared to the conventional MR fingerprinting reconstruction. Moreover, the proposed framework provides new theoretical insights into the conventional approach. We show analytically that the conventional approach is an approximation to the ML reconstruction; more precisely, it is exactly equivalent to the first iteration of the proposed algorithm for the ML reconstruction, provided that a gridding reconstruction is used as an initialization.
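For context, the "conventional MR fingerprinting reconstruction" referred to above is per-voxel dictionary matching applied to a gridding reconstruction; a minimal sketch (our notation, names illustrative):

```python
import numpy as np

def mrf_dictionary_match(x, D, atom_params):
    """Per-voxel MRF matching: x is an (n_voxels, n_frames) array of
    time series from a gridding reconstruction, D an (n_atoms, n_frames)
    array of simulated fingerprints, atom_params an (n_atoms, ...) array
    of the tissue parameters behind each atom. Matching is by maximum
    normalized inner product; the paper shows this step equals the first
    iteration of its ML algorithm when initialized with gridding."""
    Dn = D / np.linalg.norm(D, axis=1, keepdims=True)
    xn = x / (np.linalg.norm(x, axis=1, keepdims=True) + 1e-12)
    best = np.abs(xn @ Dn.conj().T).argmax(axis=1)  # best atom per voxel
    return atom_params[best]
```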
Yang, Li; Wang, Guobao; Qi, Jinyi
2016-04-01
Detecting cancerous lesions is a major clinical application of emission tomography. In a previous work, we studied penalized maximum-likelihood (PML) image reconstruction for lesion detection in static PET. Here we extend our theoretical analysis of static PET reconstruction to dynamic PET. We study both the conventional indirect reconstruction and direct reconstruction for Patlak parametric image estimation. In indirect reconstruction, Patlak parametric images are generated by first reconstructing a sequence of dynamic PET images, and then performing Patlak analysis on the time activity curves (TACs) pixel-by-pixel. In direct reconstruction, Patlak parametric images are estimated directly from raw sinogram data by incorporating the Patlak model into the image reconstruction procedure. PML reconstruction is used in both the indirect and direct reconstruction methods. We use a channelized Hotelling observer (CHO) to assess lesion detectability in Patlak parametric images. Simplified expressions for evaluating the lesion detectability have been derived and applied to the selection of the regularization parameter value to maximize detection performance. The proposed method is validated using computer-based Monte Carlo simulations. Good agreement between the theoretical predictions and the Monte Carlo results is observed. Both theoretical predictions and Monte Carlo simulation results show the benefit of the indirect and direct methods under optimized regularization parameters in dynamic PET reconstruction for lesion detection, when compared with the conventional static PET reconstruction.
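A minimal sketch of the channelized Hotelling observer used as the figure of merit (the standard construction; details such as channel choice are not specified in the abstract):

```python
import numpy as np

def cho_detectability(imgs_present, imgs_absent, U):
    """Channelized Hotelling observer: rows of imgs_* are flattened
    reconstructions with/without the lesion; U is an
    (n_pixels, n_channels) matrix of channel templates.
    Returns the detectability index d'."""
    v1, v0 = imgs_present @ U, imgs_absent @ U       # channel outputs
    dmean = v1.mean(axis=0) - v0.mean(axis=0)
    S = 0.5 * (np.cov(v1, rowvar=False) + np.cov(v0, rowvar=False))
    w = np.linalg.solve(S, dmean)                    # Hotelling template
    return float(np.sqrt(dmean @ w))
```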
Superfast maximum-likelihood reconstruction for quantum tomography
NASA Astrophysics Data System (ADS)
Shang, Jiangwei; Zhang, Zhengyun; Ng, Hui Khoon
2017-06-01
Conventional methods for computing maximum-likelihood estimators (MLE) often converge slowly in practical situations, leading to a search for simplifying methods that rely on additional assumptions for their validity. In this work, we provide a fast and reliable algorithm for maximum-likelihood reconstruction that avoids this slow convergence. Our method utilizes the state-of-the-art convex optimization scheme, an accelerated projected-gradient method, that allows one to accommodate the quantum nature of the problem in a different way than in the standard methods. We demonstrate the power of our approach by comparing its performance with other algorithms for n-qubit state tomography. In particular, an eight-qubit situation that purportedly took weeks of computation time in 2005 can now be completed in under a minute for a single set of data, with far higher accuracy than previously possible. This refutes the common claim that MLE reconstruction is slow and reduces the need for alternative methods that often come with difficult-to-verify assumptions. In fact, recent methods assuming Gaussian statistics or relying on compressed sensing ideas are demonstrably inapplicable for the situation under consideration here. Our algorithm can be applied to general optimization problems over the quantum state space; the philosophy of projected gradients can further be utilized for optimization contexts with general constraints.
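The core of such an approach is projecting a gradient step back onto the quantum state space (unit-trace positive semidefinite matrices). A minimal, non-accelerated sketch follows; the published algorithm adds momentum-style acceleration on top of this basic iteration:

```python
import numpy as np

def project_simplex(v):
    """Euclidean projection of a real vector onto the probability simplex."""
    u = np.sort(v)[::-1]
    css = np.cumsum(u) - 1.0
    k = np.arange(1, v.size + 1)
    rho = np.max(np.nonzero(u - css / k > 0)[0]) + 1
    return np.maximum(v - css[rho - 1] / rho, 0.0)

def project_density(M):
    """Project a Hermitian matrix onto density matrices by projecting
    its eigenvalue spectrum onto the simplex."""
    w, V = np.linalg.eigh(M)
    return (V * project_simplex(w)) @ V.conj().T

def projected_gradient_mle(freqs, povm, dim, step=0.1, n_iter=500):
    """Plain projected-gradient ascent on the log-likelihood
    sum_k f_k * log tr(rho @ P_k); freqs are measured frequencies,
    povm a list of Hermitian effect operators (illustrative toy setup)."""
    rho = np.eye(dim, dtype=complex) / dim
    for _ in range(n_iter):
        probs = [max(np.real(np.trace(rho @ P)), 1e-12) for P in povm]
        grad = sum((f / p) * P for f, p, P in zip(freqs, probs, povm))
        rho = project_density(rho + step * grad)
    return rho
```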
Spatial resolution properties of motion-compensated tomographic image reconstruction methods.
Chun, Se Young; Fessler, Jeffrey A
2012-07-01
Many motion-compensated image reconstruction (MCIR) methods have been proposed to correct for subject motion in medical imaging. MCIR methods incorporate motion models to improve image quality by reducing motion artifacts and noise. This paper analyzes the spatial resolution properties of MCIR methods and shows that nonrigid local motion can lead to nonuniform and anisotropic spatial resolution for conventional quadratic regularizers. This undesirable property is akin to the known effects of interactions between heteroscedastic log-likelihoods (e.g., Poisson likelihood) and quadratic regularizers. This effect may lead to quantification errors in small or narrow structures (such as small lesions or rings) of reconstructed images. This paper proposes novel spatial regularization design methods for three different MCIR methods that account for known nonrigid motion. We develop MCIR regularization designs that provide approximately uniform and isotropic spatial resolution and that match a user-specified target spatial resolution. Two-dimensional PET simulations demonstrate the performance and benefits of the proposed spatial regularization design methods.
Penalized maximum likelihood reconstruction for x-ray differential phase-contrast tomography
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brendel, Bernhard, E-mail: bernhard.brendel@philips.com; Teuffenbach, Maximilian von; Noël, Peter B.
2016-01-15
Purpose: The purpose of this work is to propose a cost function with regularization to iteratively reconstruct attenuation, phase, and scatter images simultaneously from differential phase contrast (DPC) acquisitions, without the need of phase retrieval, and examine its properties. Furthermore, this reconstruction method is applied to an acquisition pattern that is suitable for a DPC tomographic system with continuously rotating gantry (sliding window acquisition), overcoming the severe smearing in noniterative reconstruction. Methods: We derive a penalized maximum likelihood reconstruction algorithm to directly reconstruct attenuation, phase, and scatter image from the measured detector values of a DPC acquisition. The proposed penalty comprises, for each of the three images, an independent smoothing prior. Image quality of the proposed reconstruction is compared to images generated with FBP and iterative reconstruction after phase retrieval. Furthermore, the influence between the priors is analyzed. Finally, the proposed reconstruction algorithm is applied to experimental sliding window data acquired at a synchrotron and results are compared to reconstructions based on phase retrieval. Results: The results show that the proposed algorithm significantly increases image quality in comparison to reconstructions based on phase retrieval. No significant mutual influence between the proposed independent priors could be observed. Further it could be illustrated that the iterative reconstruction of a sliding window acquisition results in images with substantially reduced smearing artifacts. Conclusions: Although the proposed cost function is inherently nonconvex, it can be used to reconstruct images with less aliasing artifacts and less streak artifacts than reconstruction methods based on phase retrieval. Furthermore, the proposed method can be used to reconstruct images of sliding window acquisitions with negligible smearing artifacts.
Bayesian image reconstruction for improving detection performance of muon tomography.
Wang, Guobao; Schultz, Larry J; Qi, Jinyi
2009-05-01
Muon tomography is a novel technology that is being developed for detecting high-Z materials in vehicles or cargo containers. Maximum likelihood methods have been developed for reconstructing the scattering density image from muon measurements. However, the instability of maximum likelihood estimation often results in noisy images and low detectability of high-Z targets. In this paper, we propose using regularization to improve the image quality of muon tomography. We formulate the muon reconstruction problem in a Bayesian framework by introducing a prior distribution on scattering density images. An iterative shrinkage algorithm is derived to maximize the log posterior distribution. At each iteration, the algorithm obtains the maximum a posteriori update by shrinking an unregularized maximum likelihood update. Inverse quadratic shrinkage functions are derived for generalized Laplacian priors, and inverse cubic shrinkage functions are derived for generalized Gaussian priors. Receiver operating characteristic studies using simulated data demonstrate that the Bayesian reconstruction can greatly improve the detection performance of muon tomography.
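The shrink-the-ML-update structure described above has a familiar special case worth keeping in mind. The soft-threshold form below is our illustration for a plain Laplacian prior with a quadratic data surrogate; the paper itself derives different (inverse quadratic and inverse cubic) shrinkage functions for its generalized Laplacian and generalized Gaussian priors:

```python
import numpy as np

def soft_shrink(x_ml, beta):
    """Soft-thresholding: the MAP update obtained by shrinking the
    unregularized ML update under an l1 (Laplacian) prior.
    Illustrative special case only, not the paper's derivation."""
    return np.sign(x_ml) * np.maximum(np.abs(x_ml) - beta, 0.0)
```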
Hyperspectral image reconstruction for x-ray fluorescence tomography
Gürsoy, Doğa; Biçer, Tekin; Lanzirotti, Antonio; ...
2015-01-01
A penalized maximum-likelihood estimation is proposed to perform hyperspectral (spatio-spectral) image reconstruction for X-ray fluorescence tomography. The approach minimizes a Poisson-based negative log-likelihood of the observed photon counts, and uses a penalty term that has the effect of encouraging local continuity of model parameter estimates in both spatial and spectral dimensions simultaneously. The performance of the reconstruction method is demonstrated with experimental data acquired from a seed of Arabidopsis thaliana collected at the 13-ID-E microprobe beamline at the Advanced Photon Source. The resulting element distribution estimates with the proposed approach show significantly better reconstruction quality than the conventional analytical inversion approaches, and allows for a high data compression factor which can reduce data acquisition times remarkably. In particular, this technique provides the capability to tomographically reconstruct full energy dispersive spectra without compromising reconstruction artifacts that impact the interpretation of results.
NASA Astrophysics Data System (ADS)
Dang, H.; Wang, A. S.; Sussman, Marc S.; Siewerdsen, J. H.; Stayman, J. W.
2014-09-01
Sequential imaging studies are conducted in many clinical scenarios. Prior images from previous studies contain a great deal of patient-specific anatomical information and can be used in conjunction with subsequent imaging acquisitions to maintain image quality while enabling radiation dose reduction (e.g., through sparse angular sampling, reduction in fluence, etc.). However, patient motion between images in such sequences results in misregistration between the prior image and current anatomy. Existing prior-image-based approaches often include only a simple rigid registration step that can be insufficient for capturing complex anatomical motion, introducing detrimental effects in subsequent image reconstruction. In this work, we propose a joint framework that estimates the 3D deformation between an unregistered prior image and the current anatomy (based on a subsequent data acquisition) and reconstructs the current anatomical image using a model-based reconstruction approach that includes regularization based on the deformed prior image. This framework is referred to as deformable prior image registration, penalized-likelihood estimation (dPIRPLE). Central to this framework is the inclusion of a 3D B-spline-based free-form-deformation model into the joint registration-reconstruction objective function. The proposed framework is solved using a maximization strategy whereby alternating updates to the registration parameters and image estimates are applied, allowing for improvements in both the registration and reconstruction throughout the optimization process. Cadaver experiments were conducted on a cone-beam CT testbench emulating a lung nodule surveillance scenario. Superior reconstruction accuracy and image quality were demonstrated using the dPIRPLE algorithm as compared to more traditional reconstruction methods including filtered backprojection, penalized-likelihood estimation (PLE), prior image penalized-likelihood estimation (PIPLE) without registration, and prior image penalized-likelihood estimation with rigid registration of a prior image (PIRPLE) over a wide range of sampling sparsity and exposure levels.
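Schematically (our notation, inferred from the description above; the paper's exact penalty may differ in detail), the dPIRPLE objective couples a data log-likelihood with a penalty on the difference between the image and the deformably registered prior:

\[
(\hat{\mu}, \hat{\lambda}) \;=\; \arg\max_{\mu,\,\lambda}\; L(y;\mu)\;-\;\beta\,\big\|\Psi\big(\mu - W(\lambda)\,\mu_{\text{prior}}\big)\big\|_1,
\]

where \(L\) is the log-likelihood of the current measurements \(y\), \(W(\lambda)\) the 3D B-spline free-form deformation applied to the prior image \(\mu_{\text{prior}}\), and \(\Psi\) a sparsifying transform; the maximization alternates between updates of \(\lambda\) (registration) and \(\mu\) (reconstruction).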
SPECT reconstruction using DCT-induced tight framelet regularization
NASA Astrophysics Data System (ADS)
Zhang, Jiahan; Li, Si; Xu, Yuesheng; Schmidtlein, C. R.; Lipson, Edward D.; Feiglin, David H.; Krol, Andrzej
2015-03-01
Wavelet transforms have been successfully applied in many fields of image processing. Yet, to our knowledge, they have never been directly incorporated into the objective function in Emission Computed Tomography (ECT) image reconstruction. Our aim has been to investigate if the ℓ1-norm of non-decimated discrete cosine transform (DCT) coefficients of the estimated radiotracer distribution could be effectively used as the regularization term for the penalized-likelihood (PL) reconstruction, where a regularizer is used to enforce the image smoothness in the reconstruction. In this study, the ℓ1-norm of 2D DCT wavelet decomposition was used as a regularization term. The Preconditioned Alternating Projection Algorithm (PAPA), which we proposed in earlier work to solve penalized likelihood (PL) reconstruction with non-differentiable regularizers, was used to solve this optimization problem. The DCT wavelet decompositions were performed on the transaxial reconstructed images. We reconstructed Monte Carlo simulated SPECT data obtained for a numerical phantom with Gaussian blobs as hot lesions and with a warm random lumpy background. Reconstructed images using the proposed method exhibited better noise suppression and improved lesion conspicuity, compared with images reconstructed using the expectation maximization (EM) algorithm with a Gaussian post filter (GPF). Also, the mean square error (MSE) was smaller, compared with EM-GPF. A critical and challenging aspect of this method was the selection of optimal parameters. In summary, our numerical experiments demonstrated that the ℓ1-norm of the DCT-induced tight framelet regularizer shows promise for SPECT image reconstruction using the PAPA method.
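A minimal sketch of the regularizer being evaluated; for brevity, a plain orthogonal 2D DCT stands in for the paper's non-decimated DCT-induced tight framelet:

```python
import numpy as np
from scipy.fftpack import dct

def dct_l1(img):
    """l1-norm of the 2D DCT coefficients of the current image estimate,
    a simplified stand-in for the non-decimated tight-framelet
    regularizer handled by PAPA (which supports non-differentiable
    penalties like this one)."""
    coeffs = dct(dct(img, axis=0, norm='ortho'), axis=1, norm='ortho')
    return float(np.abs(coeffs).sum())
```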
Pascazio, Vito; Schirinzi, Gilda
2002-01-01
In this paper, a technique that is able to reconstruct highly sloped and discontinuous terrain height profiles, starting from multifrequency wrapped phase acquired by interferometric synthetic aperture radar (SAR) systems, is presented. We propose an innovative unwrapping method, based on a maximum likelihood estimation technique, which uses multifrequency independent phase data, obtained by filtering the interferometric SAR raw data pair through nonoverlapping band-pass filters, and approximating the unknown surface by means of local planes. Since the method does not exploit the phase gradient, it assures the uniqueness of the solution, even in the case of highly sloped or piecewise continuous elevation patterns with strong discontinuities.
Bian, Liheng; Suo, Jinli; Chung, Jaebum; Ou, Xiaoze; Yang, Changhuei; Chen, Feng; Dai, Qionghai
2016-06-10
Fourier ptychographic microscopy (FPM) is a novel computational coherent imaging technique for high space-bandwidth product imaging. Mathematically, Fourier ptychographic (FP) reconstruction can be implemented as a phase retrieval optimization process, in which we only obtain low resolution intensity images corresponding to the sub-bands of the sample's high resolution (HR) spatial spectrum, and aim to retrieve the complex HR spectrum. In real setups, the measurements always suffer from various degradations such as Gaussian noise, Poisson noise, speckle noise and pupil location error, which would largely degrade the reconstruction. To efficiently address these degradations, we propose a novel FP reconstruction method under a gradient descent optimization framework in this paper. The technique utilizes Poisson maximum likelihood for better signal modeling, and truncated Wirtinger gradient for effective error removal. Results on both simulated data and real data captured using our laser-illuminated FPM setup show that the proposed method outperforms other state-of-the-art algorithms. Also, we have released our source code for non-commercial use.
Task Performance with List-Mode Data
NASA Astrophysics Data System (ADS)
Caucci, Luca
This dissertation investigates the application of list-mode data to detection, estimation, and image reconstruction problems, with an emphasis on emission tomography in medical imaging. We begin by introducing a theoretical framework for list-mode data and we use it to define two observers that operate on list-mode data. These observers are applied to the problem of detecting a signal (known in shape and location) buried in a random lumpy background. We then consider maximum-likelihood methods for the estimation of numerical parameters from list-mode data, and we characterize the performance of these estimators via the so-called Fisher information matrix. Reconstruction from PET list-mode data is then considered. In a process we called "double maximum-likelihood" reconstruction, we consider a simple PET imaging system and we use maximum-likelihood methods to first estimate a parameter vector for each pair of gamma-ray photons that is detected by the hardware. The collection of these parameter vectors forms a list, which is then fed to another maximum-likelihood algorithm for volumetric reconstruction over a grid of voxels. Efficient parallel implementation of the algorithms discussed above is then presented. In this work, we take advantage of two low-cost, mass-produced computing platforms that have recently appeared on the market, and we provide some details on implementing our algorithms on these devices. We conclude this dissertation work by elaborating on a possible application of list-mode data to X-ray digital mammography. We argue that today's CMOS detectors and computing platforms have become fast enough to make X-ray digital mammography list-mode data acquisition and processing feasible.
Digital tomosynthesis mammography using a parallel maximum-likelihood reconstruction method
NASA Astrophysics Data System (ADS)
Wu, Tao; Zhang, Juemin; Moore, Richard; Rafferty, Elizabeth; Kopans, Daniel; Meleis, Waleed; Kaeli, David
2004-05-01
A parallel reconstruction method, based on an iterative maximum likelihood (ML) algorithm, is developed to provide fast reconstruction for digital tomosynthesis mammography. Tomosynthesis mammography acquires 11 low-dose projections of a breast by moving an x-ray tube over a 50° angular range. In parallel reconstruction, each projection is divided into multiple segments along the chest-to-nipple direction. Using the 11 projections, segments located at the same distance from the chest wall are combined to compute a partial reconstruction of the total breast volume. The shape of the partial reconstruction forms a thin slab, angled toward the x-ray source at a projection angle of 0°. The reconstruction of the total breast volume is obtained by merging the partial reconstructions. The overlap region between neighboring partial reconstructions and neighboring projection segments is utilized to compensate for the incomplete data at the boundary locations present in the partial reconstructions. A serial execution of the reconstruction is compared to a parallel implementation, using clinical data. The serial code was run on a PC with a single Pentium IV 2.2 GHz CPU. The parallel implementation was developed using MPI and run on a 64-node Linux cluster using 800 MHz Itanium CPUs. The serial reconstruction for a medium-sized breast (5 cm thickness, 11 cm chest-to-nipple distance) takes 115 minutes, while a parallel implementation takes only 3.5 minutes. The reconstruction time for a larger breast using a serial implementation takes 187 minutes, while a parallel implementation takes 6.5 minutes. No significant differences were observed between the reconstructions produced by the serial and parallel implementations.
Recreating a functional ancestral archosaur visual pigment.
Chang, Belinda S W; Jönsson, Karolina; Kazmi, Manija A; Donoghue, Michael J; Sakmar, Thomas P
2002-09-01
The ancestors of the archosaurs, a major branch of the diapsid reptiles, originated more than 240 MYA near the dawn of the Triassic Period. We used maximum likelihood phylogenetic ancestral reconstruction methods and explored different models of evolution for inferring the amino acid sequence of a putative ancestral archosaur visual pigment. Three different types of maximum likelihood models were used: nucleotide-based, amino acid-based, and codon-based models. Where possible, within each type of model, likelihood ratio tests were used to determine which model best fit the data. Ancestral reconstructions of the ancestral archosaur node using the best-fitting models of each type were found to be in agreement, except for three amino acid residues at which one reconstruction differed from the other two. To determine if these ancestral pigments would be functionally active, the corresponding genes were chemically synthesized and then expressed in a mammalian cell line in tissue culture. The expressed artificial genes were all found to bind to 11-cis-retinal to yield stable photoactive pigments with λmax values of about 508 nm, which is slightly redshifted relative to that of extant vertebrate pigments. The ancestral archosaur pigments also activated the retinal G protein transducin, as measured in a fluorescence assay. Our results show that ancestral genes from ancient organisms can be reconstructed de novo and tested for function using a combination of phylogenetic and biochemical methods.
Ziemann, Christian; Stille, Maik; Cremers, Florian; Buzug, Thorsten M; Rades, Dirk
2018-04-17
Metal artifacts caused by high-density implants lead to incorrectly reconstructed Hounsfield units in computed tomography images. This can result in a loss of accuracy in dose calculation in radiation therapy. This study investigates the potential of the metal artifact reduction algorithms, Augmented Likelihood Image Reconstruction and linear interpolation, in improving dose calculation in the presence of metal artifacts. In order to simulate a pelvis with a double-sided total endoprosthesis, a polymethylmethacrylate phantom was equipped with two steel bars. Artifacts were reduced by applying the Augmented Likelihood Image Reconstruction, a linear interpolation, and a manual correction approach. Using the treatment planning system Eclipse™, identical planning target volumes for an idealized prostate as well as structures for bladder and rectum were defined in corrected and noncorrected images. Volumetric modulated arc therapy plans have been created with double arc rotations with and without avoidance sectors that mask out the prosthesis. The irradiation plans were analyzed for variations in the dose distribution and their homogeneity. Dosimetric measurements were performed using isocentric positioned ionization chambers. Irradiation plans based on images containing artifacts lead to a dose error in the isocenter of up to 8.4%. Corrections with the Augmented Likelihood Image Reconstruction reduce this dose error to 2.7%, corrections with linear interpolation to 3.2%, and manual artifact correction to 4.1%. When applying artifact correction, the dose homogeneity was slightly improved for all investigated methods. Furthermore, the calculated mean doses are higher for rectum and bladder if avoidance sectors are applied. Streaking artifacts cause an imprecise dose calculation within irradiation plans. Using a metal artifact correction algorithm, the planning accuracy can be significantly improved. Best results were accomplished using the Augmented Likelihood Image Reconstruction algorithm.
Monte Carlo-based Reconstruction in Water Cherenkov Detectors using Chroma
NASA Astrophysics Data System (ADS)
Seibert, Stanley; Latorre, Anthony
2012-03-01
We demonstrate the feasibility of event reconstruction---including position, direction, energy and particle identification---in water Cherenkov detectors with a purely Monte Carlo-based method. Using a fast optical Monte Carlo package we have written, called Chroma, in combination with several variance reduction techniques, we can estimate the value of a likelihood function for an arbitrary event hypothesis. The likelihood can then be maximized over the parameter space of interest using a form of gradient descent designed for stochastic functions. Although slower than more traditional reconstruction algorithms, this completely Monte Carlo-based technique is universal and can be applied to a detector of any size or shape, which is a major advantage during the design phase of an experiment. As a specific example, we focus on reconstruction results from a simulation of the 200 kiloton water Cherenkov far detector option for LBNE.
Sheehan, Joanne; Sherman, Kerry A; Lam, Thomas; Boyages, John
2008-01-01
This study investigated the influence of psychosocial and surgical factors on decision regret among 123 women diagnosed with breast cancer who had undergone immediate (58%) or delayed (42%) breast reconstruction following mastectomy. The majority of participants (52.8%, n = 65) experienced no decision regret, 27.6% experienced mild regret and 19.5% moderate to strong regret. Bivariate analyses indicated that decision regret was associated with negative body image and psychological distress - intrusion and avoidance. There were no differences in decision regret either with respect to methods or timing patterns of reconstructive surgery. Multinomial logistic regression analysis showed that, when controlling for mood state and time since last reconstructive procedure, increases in negative body image were associated with increased likelihood of experiencing decision regret. These findings highlight the need for optimal input from surgeons and therapists in order to promote realistic expectations regarding the outcome of breast reconstruction and to reduce the likelihood of women experiencing decision regret.
Ellis, Sam; Reader, Andrew J
2018-04-26
Many clinical contexts require the acquisition of multiple positron emission tomography (PET) scans of a single subject, for example, to observe and quantitate changes in functional behaviour in tumors after treatment in oncology. Typically, the datasets from each of these scans are reconstructed individually, without exploiting the similarities between them. We have recently shown that sharing information between longitudinal PET datasets by penalizing voxel-wise differences during image reconstruction can improve reconstructed images by reducing background noise and increasing the contrast-to-noise ratio of high-activity lesions. Here, we present two additional novel longitudinal difference-image priors and evaluate their performance using two-dimensional (2D) simulation studies and a three-dimensional (3D) real dataset case study. We have previously proposed a simultaneous difference-image-based penalized maximum likelihood (PML) longitudinal image reconstruction method that encourages sparse difference images (DS-PML), and in this work we propose two further novel prior terms. The priors are designed to encourage longitudinal images with corresponding differences which have (a) low entropy (DE-PML), and (b) high sparsity in their spatial gradients (DTV-PML). These two new priors and the originally proposed longitudinal prior were applied to 2D-simulated treatment response [18F]fluorodeoxyglucose (FDG) brain tumor datasets and compared to standard maximum likelihood expectation-maximization (MLEM) reconstructions. These 2D simulation studies explored the effects of penalty strengths, tumor behaviour, and interscan coupling on reconstructed images. Finally, a real two-scan longitudinal data series acquired from a head and neck cancer patient was reconstructed with the proposed methods and the results compared to standard reconstruction methods. Using any of the three priors with an appropriate penalty strength produced images with noise levels equivalent to those seen when using standard reconstructions with increased count levels. In tumor regions, each method produces subtly different results in terms of preservation of tumor quantitation and reconstruction root mean-squared error (RMSE). In particular, in the two-scan simulations, the DE-PML method produced tumor means in close agreement with MLEM reconstructions, while the DTV-PML method produced the lowest errors due to noise reduction within the tumor. Across a range of tumor responses and different numbers of scans, similar results were observed, with DTV-PML producing the lowest errors of the three priors and DE-PML producing the lowest bias. Similar improvements were observed in the reconstructions of the real longitudinal datasets, although imperfect alignment of the two PET images resulted in additional changes in the difference image that affected the performance of the proposed methods. Reconstruction of longitudinal datasets by penalizing difference images between pairs of scans from a data series allows for noise reduction in all reconstructed images. An appropriate choice of penalty term and penalty strength allows for this noise reduction to be achieved while maintaining reconstruction performance in regions of change, either in terms of quantitation of mean intensity via DE-PML, or in terms of tumor RMSE via DTV-PML.
Overall, improving the image quality of longitudinal datasets via simultaneous reconstruction has the potential to improve upon currently used methods, allow dose reduction, or reduce scan time while maintaining image quality at current levels.
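Schematically (our notation; exact definitions are in the paper), the three penalties compared above act on the difference image d = x₂ − x₁ between a pair of reconstructions:

\[
R_{\text{DS}}(d) = \|d\|_1, \qquad R_{\text{DTV}}(d) = \|\nabla d\|_1, \qquad R_{\text{DE}}(d) = H(p_d),
\]

where \(H(p_d)\) denotes the entropy of the intensity distribution of \(d\); each term is added to the Poisson log-likelihood with a penalty strength that trades off noise reduction against fidelity to genuine interscan change.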
Harbert, Robert S; Nixon, Kevin C
2015-08-01
Plant distributions have long been understood to be correlated with the environmental conditions to which species are adapted. Climate is one of the major components driving species distributions. Therefore, it is expected that the plants coexisting in a community are reflective of the local environment, particularly climate. Presented here is a method for the estimation of climate from local plant species coexistence data. The method, Climate Reconstruction Analysis using Coexistence Likelihood Estimation (CRACLE), is a likelihood-based method that employs specimen collection data at a global scale for the inference of species climate tolerance. CRACLE calculates the maximum joint likelihood of coexistence given individual species climate tolerance characterization to estimate the expected climate. Plant distribution data for more than 4000 species were used to show that this method accurately infers expected climate profiles for 165 sites with diverse climatic conditions. Estimates differ from the WorldClim global climate model by less than 1.5°C on average for mean annual temperature and less than ∼250 mm for mean annual precipitation. This is a significant improvement upon other plant-based climate-proxy methods. CRACLE validates long hypothesized interactions between climate and local associations of plant species. Furthermore, CRACLE successfully estimates climate that is consistent with the widely used WorldClim model and therefore may be applied to the quantitative estimation of paleoclimate in future studies.
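A toy version of the coexistence-likelihood computation (illustrative only, with one climate variable tabulated on a grid):

```python
import numpy as np

def cracle_point_estimate(species_likelihoods, climate_grid):
    """Joint maximum-likelihood climate estimate from coexistence:
    species_likelihoods is a list of arrays, one per observed species,
    giving that species' climate-tolerance likelihood on climate_grid;
    the estimate maximizes the summed log-likelihoods."""
    joint = np.zeros_like(climate_grid, dtype=float)
    for lik in species_likelihoods:
        joint += np.log(np.asarray(lik, dtype=float) + 1e-300)
    return climate_grid[int(np.argmax(joint))]
```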
Accurate Phylogenetic Tree Reconstruction from Quartets: A Heuristic Approach
Reaz, Rezwana; Bayzid, Md. Shamsuzzoha; Rahman, M. Sohel
2014-01-01
Supertree methods construct trees on a set of taxa (species) by combining many smaller trees on overlapping subsets of the entire set of taxa. A 'quartet' is an unrooted tree over four taxa; hence, quartet-based supertree methods combine many four-taxon unrooted trees into a single coherent tree over the complete set of taxa. Quartet-based phylogeny reconstruction methods have been receiving considerable attention in recent years. An accurate and efficient quartet-based method might be competitive with the current best phylogenetic tree reconstruction methods (such as maximum likelihood or Bayesian MCMC analyses), without being as computationally intensive. In this paper, we present a novel and highly accurate quartet-based phylogenetic tree reconstruction method. We performed an extensive experimental study to evaluate the accuracy and scalability of our approach on both simulated and biological datasets.
Maximum likelihood of phylogenetic networks.
Jin, Guohua; Nakhleh, Luay; Snir, Sagi; Tuller, Tamir
2006-11-01
Horizontal gene transfer (HGT) is believed to be ubiquitous among bacteria, and plays a major role in their genome diversification as well as their ability to develop resistance to antibiotics. In light of its evolutionary significance and implications for human health, developing accurate and efficient methods for detecting and reconstructing HGT is imperative. In this article we provide a new HGT-oriented likelihood framework for many problems that involve phylogeny-based HGT detection and reconstruction. Besides the formulation of various likelihood criteria, we show that most of these problems are NP-hard, and offer heuristics for efficient and accurate reconstruction of HGT under these criteria. We implemented our heuristics and used them to analyze biological as well as synthetic data. In both cases, our criteria and heuristics exhibited very good performance with respect to identifying the correct number of HGT events as well as inferring their correct location on the species tree. Implementation of the criteria as well as heuristics and hardness proofs are available from the authors upon request. Hardness proofs can also be downloaded at http://www.cs.tau.ac.il/~tamirtul/MLNET/Supp-ML.pdf
Benedict, Matthew N.; Mundy, Michael B.; Henry, Christopher S.; Chia, Nicholas; Price, Nathan D.
2014-01-01
Genome-scale metabolic models provide a powerful means to harness information from genomes to deepen biological insights. With exponentially increasing sequencing capacity, there is an enormous need for automated reconstruction techniques that can provide more accurate models in a short time frame. Current methods for automated metabolic network reconstruction rely on gene and reaction annotations to build draft metabolic networks and algorithms to fill gaps in these networks. However, automated reconstruction is hampered by database inconsistencies, incorrect annotations, and gap filling largely without considering genomic information. Here we develop an approach for applying genomic information to predict alternative functions for genes and estimate their likelihoods from sequence homology. We show that computed likelihood values were significantly higher for annotations found in manually curated metabolic networks than those that were not. We then apply these alternative functional predictions to estimate reaction likelihoods, which are used in a new gap filling approach called likelihood-based gap filling to predict more genomically consistent solutions. To validate the likelihood-based gap filling approach, we applied it to models where essential pathways were removed, finding that likelihood-based gap filling identified more biologically relevant solutions than parsimony-based gap filling approaches. We also demonstrate that models gap filled using likelihood-based gap filling provide greater coverage and genomic consistency with metabolic gene functions compared to parsimony-based approaches. Interestingly, despite these findings, we found that likelihoods did not significantly affect consistency of gap filled models with Biolog and knockout lethality data. This indicates that the phenotype data alone cannot necessarily be used to discriminate between alternative solutions for gap filling and therefore, that the use of other information is necessary to obtain a more accurate network. All described workflows are implemented as part of the DOE Systems Biology Knowledgebase (KBase) and are publicly available via API or command-line web interface. PMID:25329157
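The contrast between parsimony-based and likelihood-based gap filling reduces to a change of cost function: parsimony charges every candidate reaction one unit, while the likelihood-based approach charges its negative log-likelihood, so well-supported reactions become cheap. A minimal sketch with entirely hypothetical reaction names, likelihoods, and candidate solutions (not the KBase implementation):

```python
import math

# Hypothetical annotation likelihoods for candidate gap-filling reactions.
reaction_likelihood = {"rxnA": 0.90, "rxnB": 0.15, "rxnC": 0.60, "rxnD": 0.55}

# Hypothetical alternative solutions, each a reaction set restoring growth.
candidate_solutions = [
    {"rxnB"},           # fewest reactions, but weak genomic support
    {"rxnA", "rxnC"},   # two reactions, both well supported by homology
    {"rxnA", "rxnD"},
]

def parsimony_cost(solution):
    return len(solution)  # classic gap filling: minimise reactions added

def likelihood_cost(solution):
    # likelihood-based gap filling: minimise summed -log(likelihood)
    return sum(-math.log(reaction_likelihood[r]) for r in solution)

print(min(candidate_solutions, key=parsimony_cost))   # -> {'rxnB'}
print(min(candidate_solutions, key=likelihood_cost))  # -> {'rxnA', 'rxnC'}
```

The two criteria disagree exactly as the abstract describes: parsimony prefers the single, poorly supported reaction, while the likelihood-weighted cost selects the genomically consistent pair.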
Efficient Exploration of the Space of Reconciled Gene Trees
Szöllősi, Gergely J.; Rosikiewicz, Wojciech; Boussau, Bastien; Tannier, Eric; Daubin, Vincent
2013-01-01
Gene trees record the combination of gene-level events, such as duplication, transfer and loss (DTL), and species-level events, such as speciation and extinction. Gene tree–species tree reconciliation methods model these processes by drawing gene trees into the species tree using a series of gene and species-level events. The reconstruction of gene trees based on sequence alone almost always involves choosing between statistically equivalent or weakly distinguishable relationships that could be much better resolved based on a putative species tree. To exploit this potential for accurate reconstruction of gene trees, the space of reconciled gene trees must be explored according to a joint model of sequence evolution and gene tree–species tree reconciliation. Here we present amalgamated likelihood estimation (ALE), a probabilistic approach to exhaustively explore all reconciled gene trees that can be amalgamated as a combination of clades observed in a sample of gene trees. We implement the ALE approach in the context of a reconciliation model (Szöllősi et al. 2013), which allows for the DTL of genes. We use ALE to efficiently approximate the sum of the joint likelihood over amalgamations and to find the reconciled gene tree that maximizes the joint likelihood among all such trees. We demonstrate using simulations that gene trees reconstructed using the joint likelihood are substantially more accurate than those reconstructed using sequence alone. Using realistic gene tree topologies, branch lengths, and alignment sizes, we demonstrate that ALE produces more accurate gene trees even if the model of sequence evolution is greatly simplified. Finally, examining 1099 gene families from 36 cyanobacterial genomes we find that joint likelihood-based inference results in a striking reduction in apparent phylogenetic discord, with 24%, 59%, and 46% reductions in the mean numbers of duplications, transfers, and losses per gene family, respectively. The open source implementation of ALE is available from https://github.com/ssolo/ALE.git. [amalgamation; gene tree reconciliation; gene tree reconstruction; lateral gene transfer; phylogeny.] PMID:23925510
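The amalgamation machinery rests on clade and split frequencies observed in a sample of gene trees. A minimal sketch of the underlying statistic, the conditional clade probability, for toy rooted trees written as nested tuples (an illustration of the idea, not the ALE code):

```python
from collections import Counter

# Toy sample of rooted gene trees on four taxa.
trees = [((("A", "B"), "C"), "D"),
         ((("A", "B"), "C"), "D"),
         ((("A", "C"), "B"), "D")]

clade_counts, split_counts = Counter(), Counter()

def collect(tree):
    """Record every clade and every clade -> daughter-split occurrence."""
    if isinstance(tree, str):
        return frozenset([tree])
    left, right = collect(tree[0]), collect(tree[1])
    clade = left | right
    clade_counts[clade] += 1
    split_counts[(clade, frozenset([left, right]))] += 1
    return clade

for t in trees:
    collect(t)

# Conditional clade probability: given that a clade occurs, how often it
# resolves into a particular pair of daughter clades across the sample.
for (clade, split), n in split_counts.items():
    print(sorted(clade), "->", [sorted(s) for s in split],
          round(n / clade_counts[clade], 2))
```

Amalgamation then scores any gene tree assembled from observed clades by multiplying such conditional probabilities, which is what lets ALE sum over an enormous set of trees efficiently.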
Nasirudin, Radin A.; Mei, Kai; Panchev, Petar; Fehringer, Andreas; Pfeiffer, Franz; Rummeny, Ernst J.; Fiebich, Martin; Noël, Peter B.
2015-01-01
Purpose The exciting prospect of Spectral CT (SCT) using photon-counting detectors (PCD) will lead to new techniques in computed tomography (CT) that take advantage of the additional spectral information provided. We introduce a method to reduce metal artifacts in X-ray tomography by incorporating knowledge obtained from SCT into a statistical iterative reconstruction scheme. We call our method Spectral-driven Iterative Reconstruction (SPIR). Method The proposed algorithm consists of two main components: material decomposition and penalized maximum likelihood iterative reconstruction. In this study, spectral data acquisitions with an energy-resolving PCD were simulated using a Monte-Carlo simulator based on the EGSnrc C++ class library. A jaw phantom with a dental implant made of gold was used as the object in this study. A total of three dental implant shapes were simulated separately to test the influence of prior knowledge on the overall performance of the algorithm. The generated projection data were first decomposed into three basis functions: photoelectric absorption, Compton scattering and attenuation of gold. A pseudo-monochromatic sinogram was calculated and used as input in the reconstruction, while the spatial information of the gold implant was used as a prior. The results from the algorithm were assessed and benchmarked against state-of-the-art reconstruction methods. Results Decomposition results illustrate that a gold implant of any shape can be distinguished from other components of the phantom. Additionally, the results from the penalized maximum likelihood iterative reconstruction show that artifacts are significantly reduced in SPIR reconstructed slices in comparison to other known techniques, while at the same time details around the implant are preserved. Quantitatively, the SPIR algorithm best reflects the true attenuation value in comparison to other algorithms. Conclusion It is demonstrated that the combination of the additional information from Spectral CT and statistical reconstruction can significantly improve image quality, especially by reducing streaking artifacts caused by the presence of materials with high atomic numbers. PMID:25955019
The historical biogeography of Mammalia
Springer, Mark S.; Meredith, Robert W.; Janecka, Jan E.; Murphy, William J.
2011-01-01
Palaeobiogeographic reconstructions are underpinned by phylogenies, divergence times and ancestral area reconstructions, which together yield ancestral area chronograms that provide a basis for proposing and testing hypotheses of dispersal and vicariance. Methods for area coding include multi-state coding with a single character, binary coding with multiple characters and string coding. Ancestral reconstruction methods are divided into parsimony versus Bayesian/likelihood approaches. We compared nine methods for reconstructing ancestral areas for placental mammals. Ambiguous reconstructions were a problem for all methods. Important differences resulted from coding areas based on the geographical ranges of extant species versus the geographical provenance of the oldest fossil for each lineage. Africa and South America were reconstructed as the ancestral areas for Afrotheria and Xenarthra, respectively. Most methods reconstructed Eurasia as the ancestral area for Boreoeutheria, Euarchontoglires and Laurasiatheria. The coincidence of molecular dates for the separation of Afrotheria and Xenarthra at approximately 100 Ma with the plate tectonic sundering of Africa and South America hints at the importance of vicariance in the early history of Placentalia. Dispersal has also been important including the origins of Madagascar's endemic mammal fauna. Further studies will benefit from increased taxon sampling and the application of new ancestral area reconstruction methods. PMID:21807730
Resch, K J; Walther, P; Zeilinger, A
2005-02-25
We have performed the first experimental tomographic reconstruction of a three-photon polarization state. Quantum state tomography is a powerful tool for fully describing the density matrix of a quantum system. We measured 64 three-photon polarization correlations and used a "maximum-likelihood" reconstruction method to reconstruct the Greenberger-Horne-Zeilinger state. The entanglement class has been characterized using an entanglement witness operator and the maximum predicted values for the Mermin inequality were extracted.
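The "maximum-likelihood" reconstruction mentioned here is commonly implemented with the iterative RρR algorithm, which keeps the estimate a physical (positive, unit-trace) density matrix at every step. A single-qubit sketch under an assumed random-Pauli measurement scheme (the three-photon GHZ case has the same structure on an 8-dimensional space):

```python
import numpy as np

# Pauli eigenprojectors; measuring a uniformly chosen Pauli realises the
# 6-outcome POVM {Pi_k / 3}, which sums to the identity as RrhoR requires.
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.diag([1.0 + 0j, -1.0])
projs = []
for P in (X, Y, Z):
    _, V = np.linalg.eigh(P)
    projs += [np.outer(V[:, k], V[:, k].conj()) for k in range(2)]
povm = [Pi / 3 for Pi in projs]

rng = np.random.default_rng(0)
rho_true = np.array([[0.85, 0.30], [0.30, 0.15]], dtype=complex)
p_true = np.array([np.trace(E @ rho_true).real for E in povm])
counts = rng.multinomial(30000, p_true)          # simulated correlation counts
freqs = counts / counts.sum()

rho = np.eye(2, dtype=complex) / 2               # maximally mixed start
for _ in range(300):
    p = np.array([np.trace(E @ rho).real for E in povm])
    R = sum(f / pk * E for f, pk, E in zip(freqs, p, povm))
    rho = R @ rho @ R                            # RrhoR update
    rho /= np.trace(rho).real
print(np.round(rho, 3))                          # close to rho_true
```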
Method for positron emission mammography image reconstruction
Smith, Mark Frederick
2004-10-12
An image reconstruction method comprising accepting coincidence data from either a data file or in real time from a pair of detector heads, culling event data that is outside a desired energy range, optionally saving the desired data for each detector position or for each pair of detector pixels on the two detector heads, and then reconstructing the image either by backprojection image reconstruction or by iterative image reconstruction. In the backprojection image reconstruction mode, rays are traced between centers of lines of response (LORs), counts are then either allocated by nearest-pixel interpolation or allocated by an overlap method and then corrected for geometric effects and attenuation, and the data file updated. If the iterative image reconstruction option is selected, one implementation is to compute Siddon ray tracing on a grid and to perform maximum likelihood expectation maximization (MLEM) computed by either: a) tracing parallel rays between subpixels on opposite detector heads; or b) tracing rays between randomized endpoint locations on opposite detector heads.
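The MLEM option referred to above has a compact multiplicative form: forward-project the current image, divide the measured counts by the expectation, backproject the ratio, and rescale by the detector sensitivity. A toy sketch with a random matrix standing in for the ray-traced PEM system model:

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.random((60, 16))           # toy system matrix: 60 LORs x 16 pixels
x_true = rng.uniform(1.0, 5.0, 16)
y = rng.poisson(A @ x_true)        # Poisson coincidence counts

x = np.ones(16)                    # uniform initial image
sens = A.sum(axis=0)               # sensitivity image: backprojection of ones
for _ in range(200):
    ratio = y / np.maximum(A @ x, 1e-12)   # measured / expected counts
    x *= (A.T @ ratio) / sens              # multiplicative MLEM update
print(np.round(x, 2))
```

In the patent's setting the rows of A would come from Siddon ray tracing between detector-head pixels rather than from random numbers.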
NASA Astrophysics Data System (ADS)
Riggi, S.; Antonuccio-Delogu, V.; Bandieramonte, M.; Becciani, U.; Costa, A.; La Rocca, P.; Massimino, P.; Petta, C.; Pistagna, C.; Riggi, F.; Sciacca, E.; Vitello, F.
2013-11-01
Muon tomographic visualization techniques try to reconstruct a 3D image as close as possible to the real localization of the objects being probed. Statistical algorithms under test for the reconstruction of muon tomographic images in the Muon Portal Project are discussed here. Autocorrelation analysis and clustering algorithms have been employed within the context of methods based on the Point Of Closest Approach (POCA) reconstruction tool. An iterative method based on the log-likelihood approach was also implemented. Relative merits of all such methods are discussed, with reference to full GEANT4 simulations of different scenarios, incorporating medium and high-Z objects inside a container.
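The POCA tool reduces each muon event to a single scattering point: the point of closest approach between the fitted incoming and outgoing tracks. A minimal sketch of that geometric core (track points and directions are assumed to come from the tracker fits):

```python
import numpy as np

def poca(p1, d1, p2, d2):
    """Midpoint of the shortest segment joining two skew lines:
    the incoming track p1 + t*d1 and the outgoing track p2 + s*d2."""
    d1 = d1 / np.linalg.norm(d1)
    d2 = d2 / np.linalg.norm(d2)
    w0 = p1 - p2
    a, b, c = d1 @ d1, d1 @ d2, d2 @ d2
    d, e = d1 @ w0, d2 @ w0
    denom = a * c - b * b
    if abs(denom) < 1e-12:            # (nearly) parallel: undeflected muon
        return None
    t = (b * e - c * d) / denom
    s = (a * e - b * d) / denom
    return 0.5 * ((p1 + t * d1) + (p2 + s * d2))

# A muon travelling down the z axis and deflected near z = -5:
print(poca(np.array([0.0, 0.0, 10.0]), np.array([0.0, 0.0, -1.0]),
           np.array([0.1, 0.0, -10.0]), np.array([0.02, 0.0, -1.0])))
```

Clustering and autocorrelation analyses, as described above, then operate on the resulting cloud of POCA points.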
Single particle maximum likelihood reconstruction from superresolution microscopy images
Verdier, Timothée; Gunzenhauser, Julia; Manley, Suliana; Castelnovo, Martin
2017-01-01
Point localization superresolution microscopy enables fluorescently tagged molecules to be imaged beyond the optical diffraction limit, reaching single molecule localization precisions down to a few nanometers. For small objects whose sizes are a few times this precision, localization uncertainty prevents the straightforward extraction of a structural model from the reconstructed images. We demonstrate in the present work that this limitation can be overcome at the single particle level, requiring no particle averaging, by using a maximum likelihood reconstruction (MLR) method perfectly suited to the stochastic nature of such superresolution imaging. We validate this method by extracting structural information from both simulated and experimental PALM data of immature virus-like particles of the Human Immunodeficiency Virus (HIV-1). MLR allows us to measure the radii of individual viruses with a precision of a few nanometers and confirms the incomplete closure of the viral protein lattice. The quantitative results of our analysis are consistent with previous cryoelectron microscopy characterizations. Our study establishes the framework for a method that can be broadly applied to PALM data to determine the structural parameters of an existing structural model, and is particularly well suited to heterogeneous features due to its single particle implementation. PMID:28253349
Robust statistical reconstruction for charged particle tomography
Schultz, Larry Joe; Klimenko, Alexei Vasilievich; Fraser, Andrew Mcleod; Morris, Christopher; Orum, John Christopher; Borozdin, Konstantin N; Sossong, Michael James; Hengartner, Nicolas W
2013-10-08
Systems and methods for charged particle detection including statistical reconstruction of object volume scattering density profiles from charged particle tomographic data to determine the probability distribution of charged particle scattering using a statistical multiple scattering model and determine a substantially maximum likelihood estimate of object volume scattering density using expectation maximization (ML/EM) algorithm to reconstruct the object volume scattering density. The presence of and/or type of object occupying the volume of interest can be identified from the reconstructed volume scattering density profile. The charged particle tomographic data can be cosmic ray muon tomographic data from a muon tracker for scanning packages, containers, vehicles or cargo. The method can be implemented using a computer program which is executable on a computer.
Maximum-Likelihood Methods for Processing Signals From Gamma-Ray Detectors
Barrett, Harrison H.; Hunter, William C. J.; Miller, Brian William; Moore, Stephen K.; Chen, Yichun; Furenlid, Lars R.
2009-01-01
In any gamma-ray detector, each event produces electrical signals on one or more circuit elements. From these signals, we may wish to determine the presence of an interaction; whether multiple interactions occurred; the spatial coordinates in two or three dimensions of at least the primary interaction; or the total energy deposited in that interaction. We may also want to compute listmode probabilities for tomographic reconstruction. Maximum-likelihood methods provide a rigorous and in some senses optimal approach to extracting this information, and the associated Fisher information matrix provides a way of quantifying and optimizing the information conveyed by the detector. This paper will review the principles of likelihood methods as applied to gamma-ray detectors and illustrate their power with recent results from the Center for Gamma-ray Imaging. PMID:20107527
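As a concrete illustration of likelihood processing of detector signals, consider estimating a 2D interaction position from photomultiplier outputs: given a mean detector response function (MDRF) for each tube, the ML estimate maximises the Poisson log-likelihood of the observed signals. Everything below (Gaussian light sharing, four PMTs, a grid search) is an assumed toy model, not a calibrated MDRF:

```python
import numpy as np

pmt_pos = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])

def mdrf(xy, total=2000.0, sigma=0.5):
    """Assumed mean PMT signals for an interaction at position xy."""
    d2 = ((pmt_pos - xy) ** 2).sum(axis=1)
    resp = np.exp(-d2 / (2.0 * sigma ** 2))
    return total * resp / resp.sum()

rng = np.random.default_rng(2)
true_xy = np.array([0.3, 0.7])
signals = rng.poisson(mdrf(true_xy))       # one event's PMT signals

# Grid-search ML estimate: maximise the Poisson log-likelihood.
best, best_ll = None, -np.inf
for x in np.linspace(0.0, 1.0, 101):
    for y in np.linspace(0.0, 1.0, 101):
        mu = mdrf(np.array([x, y]))
        ll = np.sum(signals * np.log(mu) - mu)
        if ll > best_ll:
            best, best_ll = (x, y), ll
print(best)                                # near (0.3, 0.7)
```

The curvature of this same log-likelihood around its maximum is what the Fisher information matrix quantifies.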
Simultaneous reconstruction of the activity image and registration of the CT image in TOF-PET
NASA Astrophysics Data System (ADS)
Rezaei, Ahmadreza; Michel, Christian; Casey, Michael E.; Nuyts, Johan
2016-02-01
Previously, maximum-likelihood methods have been proposed to jointly estimate the activity image and the attenuation image or the attenuation sinogram from time-of-flight (TOF) positron emission tomography (PET) data. In this contribution, we propose a method that addresses the possible alignment problem of the TOF-PET emission data and the computed tomography (CT) attenuation data, by combining reconstruction and registration. The method, called MLRR, iteratively reconstructs the activity image while registering the available CT-based attenuation image, so that the pair of activity and attenuation images maximise the likelihood of the TOF emission sinogram. The algorithm is slow to converge, but some acceleration could be achieved by using Nesterov’s momentum method and by applying a multi-resolution scheme for the non-rigid displacement estimation. The latter also helps to avoid local optima, although convergence to the global optimum cannot be guaranteed. The results are evaluated on 2D and 3D simulations as well as a respiratory gated clinical scan. Our experiments indicate that the proposed method is able to correct for possible misalignment of the CT-based attenuation image, and is therefore a very promising approach to suppressing attenuation artefacts in clinical PET/CT. When applied to respiratory gated data of a patient scan, it produced deformations that are compatible with breathing motion and which reduced the well known attenuation artefact near the dome of the liver. Since the method makes use of the energy-converted CT attenuation image, the scale problem of joint reconstruction is automatically solved.
NASA Astrophysics Data System (ADS)
Zhang, Hao; Gang, Grace J.; Lee, Junghoon; Wong, John; Stayman, J. Webster
2017-03-01
Purpose: There are many clinical situations where diagnostic CT is used for an initial diagnosis or treatment planning, followed by one or more CBCT scans that are part of an image-guided intervention. Because the high-quality diagnostic CT scan is a rich source of patient-specific anatomical knowledge, this provides an opportunity to incorporate the prior CT image into subsequent CBCT reconstruction for improved image quality. We propose a penalized-likelihood method called reconstruction of difference (RoD), to directly reconstruct differences between the CBCT scan and the CT prior. In this work, we demonstrate the efficacy of RoD with clinical patient datasets. Methods: We introduce a data processing workflow using the RoD framework to reconstruct anatomical changes between the prior CT and current CBCT. This workflow includes processing steps to account for non-anatomical differences between the two scans including 1) scatter correction for CBCT datasets due to increased scatter fractions in CBCT data; 2) histogram matching for attenuation variations between CT and CBCT; and 3) registration for different patient positioning. CBCT projection data and CT planning volumes for two radiotherapy patients - one abdominal study and one head-and-neck study - were investigated. Results: In comparisons between the proposed RoD framework and more traditional FDK and penalized-likelihood reconstructions, we find a significant improvement in image quality when prior CT information is incorporated into the reconstruction. RoD is able to provide additional low-contrast details while correctly incorporating actual physical changes in patient anatomy. Conclusions: The proposed framework provides an opportunity to either improve image quality or relax data fidelity constraints for CBCT imaging when prior CT studies of the same patient are available. Possible clinical targets include CBCT image-guided radiotherapy and CBCT image-guided surgeries.
Cusimano, Natalie; Sousa, Aretuza; Renner, Susanne S.
2012-01-01
Background and Aims For 84 years, botanists have relied on calculating the highest common factor for series of haploid chromosome numbers to arrive at a so-called basic number, x. This was done without consistent (reproducible) reference to species relationships and frequencies of different numbers in a clade. Likelihood models that treat polyploidy, chromosome fusion and fission as events with particular probabilities now allow reconstruction of ancestral chromosome numbers in an explicit framework. We have used a modelling approach to reconstruct chromosome number change in the large monocot family Araceae and to test earlier hypotheses about basic numbers in the family. Methods Using a maximum likelihood approach and chromosome counts for 26 % of the 3300 species of Araceae and representative numbers for each of the other 13 families of Alismatales, polyploidization events and single chromosome changes were inferred on a genus-level phylogenetic tree for 113 of the 117 genera of Araceae. Key Results The previously inferred basic numbers x = 14 and x = 7 are rejected. Instead, maximum likelihood optimization revealed an ancestral haploid chromosome number of n = 16, Bayesian inference of n = 18. Chromosome fusion (loss) is the predominant inferred event, whereas polyploidization events occurred less frequently and mainly towards the tips of the tree. Conclusions The bias towards low basic numbers (x) introduced by the algebraic approach to inferring chromosome number changes, prevalent among botanists, may have contributed to an unrealistic picture of ancestral chromosome numbers in many plant clades. The availability of robust quantitative methods for reconstructing ancestral chromosome numbers on molecular phylogenetic trees (with or without branch length information), with confidence statistics, makes the calculation of x an obsolete approach, at least when applied to large clades. PMID:22210850
Wang, Qi; Wang, Huaxiang; Cui, Ziqiang; Yang, Chengyi
2012-11-01
Electrical impedance tomography (EIT) calculates the internal conductivity distribution within a body using electrical contact measurements. Image reconstruction for EIT is an inverse problem, which is both non-linear and ill-posed. Traditional regularization methods cannot avoid introducing negative values in the solution, and the negativity of the solution produces artifacts in reconstructed images in the presence of noise. A statistical method, namely the expectation maximization (EM) method, is used to solve the inverse problem for EIT in this paper. The mathematical model of EIT is transformed into a non-negatively constrained likelihood minimization problem, and the solution is obtained by the gradient projection-reduced Newton (GPRN) iteration method. This paper also discusses strategies for choosing parameters. Simulation and experimental results indicate that reconstructed images of higher quality can be obtained by the EM method, compared with the traditional Tikhonov and conjugate gradient (CG) methods, even with non-negative processing. Copyright © 2012 ISA. Published by Elsevier Ltd. All rights reserved.
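The non-negativity constraint at the heart of this formulation can be enforced by gradient projection: take a gradient step on the regularized data-fit term, then clamp negative components to zero. A linearized toy stand-in for the GPRN solver (the reduced-Newton refinement on the active set is omitted):

```python
import numpy as np

rng = np.random.default_rng(3)
J = rng.random((40, 20))                       # toy sensitivity (Jacobian) matrix
x_true = np.clip(rng.normal(0.5, 0.3, 20), 0.0, None)
v = J @ x_true + 0.01 * rng.normal(size=40)    # noisy boundary measurements

alpha = 1e-3                                   # regularization weight
x = np.zeros(20)
step = 1.0 / (np.linalg.norm(J.T @ J, 2) + alpha)  # safe step size
for _ in range(2000):
    grad = J.T @ (J @ x - v) + alpha * x       # grad of 0.5||Jx-v||^2 + 0.5*a*||x||^2
    x = np.maximum(0.0, x - step * grad)       # gradient projection onto x >= 0
print(np.round(x - x_true, 2))
```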
Scanning linear estimation: improvements over region of interest (ROI) methods
NASA Astrophysics Data System (ADS)
Kupinski, Meredith K.; Clarkson, Eric W.; Barrett, Harrison H.
2013-03-01
In tomographic medical imaging, a signal activity is typically estimated by summing voxels from a reconstructed image. We introduce an alternative estimation scheme that operates on the raw projection data and offers a substantial improvement, as measured by the ensemble mean-square error (EMSE), when compared to using voxel values from a maximum-likelihood expectation-maximization (MLEM) reconstruction. The scanning-linear (SL) estimator operates on the raw projection data and is derived as a special case of maximum-likelihood estimation with a series of approximations to make the calculation tractable. The approximated likelihood accounts for background randomness, measurement noise and variability in the parameters to be estimated. When signal size and location are known, the SL estimate of signal activity is unbiased, i.e. the average estimate equals the true value. By contrast, unpredictable bias arising from the null functions of the imaging system affects standard algorithms that operate on reconstructed data. The SL method is demonstrated for two different tasks: (1) simultaneously estimating a signal’s size, location and activity; (2) for a fixed signal size and location, estimating activity. Noisy projection data are realistically simulated using measured calibration data from the multi-module multi-resolution small-animal SPECT imaging system. For both tasks, the same set of images is reconstructed using the MLEM algorithm (80 iterations), and the average and maximum values within the region of interest (ROI) are calculated for comparison. This comparison shows dramatic improvements in EMSE for the SL estimates. To show that the bias in ROI estimates affects not only absolute values but also relative differences, such as those used to monitor the response to therapy, the activity estimation task is repeated for three different signal sizes.
C-arm technique using distance driven method for nephrolithiasis and kidney stones detection
NASA Astrophysics Data System (ADS)
Malalla, Nuhad; Sun, Pengfei; Chen, Ying; Lipkin, Michael E.; Preminger, Glenn M.; Qin, Jun
2016-04-01
Distance-driven projection is a state-of-the-art method used for reconstruction in x-ray imaging techniques. C-arm tomography is an x-ray imaging technique that provides three-dimensional information about the object by moving the C-shaped gantry around the patient. With a limited view angle, the C-arm system was investigated to generate volumetric data of the object with low radiation dosage and examination time. This paper presents a new simulation study of two reconstruction methods based on the distance-driven approach: the simultaneous algebraic reconstruction technique (SART) and maximum likelihood expectation maximization (MLEM). The distance-driven method is efficient, with low computational cost, and is free of the artifacts associated with other methods such as ray-driven and pixel-driven approaches. Projection images of spherical objects were simulated with a virtual C-arm system with a total view angle of 40 degrees. Results show the ability of the limited-angle C-arm technique to generate three-dimensional images with distance-driven reconstruction.
Opti-acoustic stereo imaging: on system calibration and 3-D target reconstruction.
Negahdaripour, Shahriar; Sekkati, Hicham; Pirsiavash, Hamed
2009-06-01
Utilization of an acoustic camera for range measurements is a key advantage for 3-D shape recovery of underwater targets by opti-acoustic stereo imaging, where the associated epipolar geometry of optical and acoustic image correspondences can be described in terms of conic sections. In this paper, we propose methods for system calibration and 3-D scene reconstruction by maximum likelihood estimation from noisy image measurements. The recursive 3-D reconstruction method uses as its initial condition a closed-form solution that integrates the advantages of two other closed-form solutions, referred to as the range and azimuth solutions. Synthetic data tests are given to provide insight into the merits of the new target imaging and 3-D reconstruction paradigm, while experiments with real data confirm the findings of the computer simulations and demonstrate the merits of this novel 3-D reconstruction paradigm.
Al-Atiyat, R M; Aljumaah, R S
2014-08-27
This study aimed to estimate evolutionary distances and to reconstruct phylogeny trees for different Awassi sheep populations. Thirty-two sheep individuals from three different geographical areas of Jordan and the Kingdom of Saudi Arabia (KSA) were randomly sampled. DNA was extracted from the tissue samples and sequenced using the T7 promoter universal primer. Different phylogenetic trees were reconstructed from 0.64-kb DNA sequences using the MEGA software with the best-fitting general time reversible distance model. Three methods of distance estimation were then used. The maximum composite likelihood method was used for reconstructing maximum likelihood, neighbor-joining and UPGMA trees. The maximum likelihood tree indicated three major clusters separated by cytosine (C) and thymine (T). The greatest distance was shown between the South sheep and North sheep. On the other hand, the KSA sheep as an outgroup showed a shorter evolutionary distance to the North sheep population than to the others. The neighbor-joining and UPGMA trees showed quite reliable clusters of evolutionary differentiation of Jordan sheep populations from the Saudi population. The overall results support geographical information and ecological types of the sheep populations studied. In summary, the resulting phylogeny trees may contribute to the limited information about the genetic relatedness and phylogeny of Awassi sheep in nearby Arab countries.
Time-of-flight PET time calibration using data consistency
NASA Astrophysics Data System (ADS)
Defrise, Michel; Rezaei, Ahmadreza; Nuyts, Johan
2018-05-01
This paper presents new data driven methods for the time of flight (TOF) calibration of positron emission tomography (PET) scanners. These methods are derived from the consistency condition for TOF PET; they can be applied to data measured with an arbitrary tracer distribution and are numerically efficient because they do not require a preliminary image reconstruction from the non-TOF data. Two-dimensional simulations are presented for one of the methods, which only involves the first two moments of the data with respect to the TOF variable. The numerical results show that this method estimates the detector timing offsets with errors that are larger than those obtained via an initial non-TOF reconstruction, but remain small compared with the TOF resolution and thereby have a limited impact on the quantitative accuracy of the activity image estimated with standard maximum likelihood reconstruction algorithms.
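A simplified version of the first-moment idea: for a sufficiently symmetric tracer distribution, the mean TOF on each line of response would vanish in a perfectly calibrated scanner, so the measured per-LOR means expose pairwise differences of detector timing offsets, which a least-squares solve untangles. A toy sketch under exactly that symmetry assumption (the paper's consistency conditions are more general):

```python
import numpy as np

rng = np.random.default_rng(4)
n_det = 8
tau_true = rng.normal(0.0, 50.0, n_det)   # per-detector timing offsets (ps)
tau_true -= tau_true.mean()               # offsets defined up to a constant

# Measured mean TOF of LOR (i, j): ideally zero, shifted by tau_i - tau_j.
pairs = [(i, j) for i in range(n_det) for j in range(i + 1, n_det)]
means = np.array([tau_true[i] - tau_true[j] + rng.normal(0.0, 2.0)
                  for i, j in pairs])

# Least-squares recovery of the offsets from the pairwise first moments.
M = np.zeros((len(pairs), n_det))
for r, (i, j) in enumerate(pairs):
    M[r, i], M[r, j] = 1.0, -1.0
tau_est, *_ = np.linalg.lstsq(M, means, rcond=None)
tau_est -= tau_est.mean()
print(np.abs(tau_est - tau_true).max())   # small residual error
```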
Model-based tomographic reconstruction of objects containing known components.
Stayman, J Webster; Otake, Yoshito; Prince, Jerry L; Khanna, A Jay; Siewerdsen, Jeffrey H
2012-10-01
The likelihood of finding manufactured components (surgical tools, implants, etc.) within a tomographic field-of-view has been steadily increasing. One reason is the aging population and proliferation of prosthetic devices, such that more people undergoing diagnostic imaging have existing implants, particularly hip and knee implants. Another reason is that use of intraoperative imaging (e.g., cone-beam CT) for surgical guidance is increasing, wherein surgical tools and devices such as screws and plates are placed within or near the target anatomy. When these components contain metal, the reconstructed volumes are likely to contain severe artifacts that adversely affect the image quality in tissues both near and far from the component. Because physical models of such components exist, there is a unique opportunity to integrate this knowledge into the reconstruction algorithm to reduce these artifacts. We present a model-based penalized-likelihood estimation approach that explicitly incorporates known information about component geometry and composition. The approach uses an alternating maximization method that jointly estimates the anatomy and the position and pose of each of the known components. We demonstrate that the proposed method can produce nearly artifact-free images even near the boundary of a metal implant in simulated vertebral pedicle screw reconstructions and even under conditions of substantial photon starvation. The simultaneous estimation of device pose also provides quantitative information on device placement that could be valuable to quality assurance and verification of treatment delivery.
Statistical reconstruction for cosmic ray muon tomography.
Schultz, Larry J; Blanpied, Gary S; Borozdin, Konstantin N; Fraser, Andrew M; Hengartner, Nicolas W; Klimenko, Alexei V; Morris, Christopher L; Orum, Chris; Sossong, Michael J
2007-08-01
Highly penetrating cosmic ray muons constantly shower the earth at a rate of about 1 muon per cm2 per minute. We have developed a technique which exploits the multiple Coulomb scattering of these particles to perform nondestructive inspection without the use of artificial radiation. In prior work [1]-[3], we have described heuristic methods for processing muon data to create reconstructed images. In this paper, we present a maximum likelihood/expectation maximization tomographic reconstruction algorithm designed for the technique. This algorithm borrows much from techniques used in medical imaging, particularly emission tomography, but the statistics of muon scattering dictates differences. We describe the statistical model for multiple scattering, derive the reconstruction algorithm, and present simulated examples. We also propose methods to improve the robustness of the algorithm to experimental errors and events departing from the statistical model.
Simultaneous maximum a posteriori longitudinal PET image reconstruction
NASA Astrophysics Data System (ADS)
Ellis, Sam; Reader, Andrew J.
2017-09-01
Positron emission tomography (PET) is frequently used to monitor functional changes that occur over extended time scales, for example in longitudinal oncology PET protocols that include routine clinical follow-up scans to assess the efficacy of a course of treatment. In these contexts PET datasets are currently reconstructed into images using single-dataset reconstruction methods. Inspired by recently proposed joint PET-MR reconstruction methods, we propose to reconstruct longitudinal datasets simultaneously by using a joint penalty term in order to exploit the high degree of similarity between longitudinal images. We achieved this by penalising voxel-wise differences between pairs of longitudinal PET images in a one-step-late maximum a posteriori (MAP) fashion, resulting in the MAP simultaneous longitudinal reconstruction (SLR) method. The proposed method reduced reconstruction errors and visually improved images relative to standard maximum likelihood expectation-maximisation (ML-EM) in simulated 2D longitudinal brain tumour scans. In reconstructions of split real 3D data with inserted simulated tumours, noise across images reconstructed with MAP-SLR was reduced to levels equivalent to doubling the number of detected counts when using ML-EM. Furthermore, quantification of tumour activities was largely preserved over a variety of longitudinal tumour changes, including changes in size and activity, with larger changes inducing larger biases relative to standard ML-EM reconstructions. Similar improvements were observed for a range of counts levels, demonstrating the robustness of the method when used with a single penalty strength. The results suggest that longitudinal regularisation is a simple but effective method of improving reconstructed PET images without using resolution degrading priors.
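The one-step-late MAP coupling can be sketched for two toy longitudinal datasets: each image keeps its standard EM numerator, while the gradient of the quadratic inter-image penalty, evaluated at the current estimates, enters the denominator. The quadratic penalty and random system matrix below are minimal stand-ins for the paper's setup:

```python
import numpy as np

rng = np.random.default_rng(5)
A = rng.random((60, 16))                       # shared toy system matrix
x1_true = rng.uniform(1.0, 4.0, 16)
x2_true = x1_true.copy()
x2_true[5] *= 2.0                              # a "longitudinal change"
y1, y2 = rng.poisson(A @ x1_true), rng.poisson(A @ x2_true)

beta = 0.5                                     # penalty strength
x1, x2 = np.ones(16), np.ones(16)
sens = A.sum(axis=0)
for _ in range(300):
    em1 = x1 * (A.T @ (y1 / np.maximum(A @ x1, 1e-12)))
    em2 = x2 * (A.T @ (y2 / np.maximum(A @ x2, 1e-12)))
    # one-step-late MAP-EM: penalty gradient beta*(xk - other) in denominator
    x1 = em1 / np.maximum(sens + beta * (x1 - x2), 1e-6)
    x2 = em2 / np.maximum(sens + beta * (x2 - x1), 1e-6)
print(np.round(x1, 2), np.round(x2, 2))
```

With beta = 0 the two updates decouple into ordinary ML-EM; increasing beta trades independence of the scans for lower noise, exactly the trade-off studied above.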
Application and performance of an ML-EM algorithm in NEXT
NASA Astrophysics Data System (ADS)
Simón, A.; Lerche, C.; Monrabal, F.; Gómez-Cadenas, J. J.; Álvarez, V.; Azevedo, C. D. R.; Benlloch-Rodríguez, J. M.; Borges, F. I. G. M.; Botas, A.; Cárcel, S.; Carrión, J. V.; Cebrián, S.; Conde, C. A. N.; Díaz, J.; Diesburg, M.; Escada, J.; Esteve, R.; Felkai, R.; Fernandes, L. M. P.; Ferrario, P.; Ferreira, A. L.; Freitas, E. D. C.; Goldschmidt, A.; González-Díaz, D.; Gutiérrez, R. M.; Hauptman, J.; Henriques, C. A. O.; Hernandez, A. I.; Hernando Morata, J. A.; Herrero, V.; Jones, B. J. P.; Labarga, L.; Laing, A.; Lebrun, P.; Liubarsky, I.; López-March, N.; Losada, M.; Martín-Albo, J.; Martínez-Lema, G.; Martínez, A.; McDonald, A. D.; Monteiro, C. M. B.; Mora, F. J.; Moutinho, L. M.; Muñoz Vidal, J.; Musti, M.; Nebot-Guinot, M.; Novella, P.; Nygren, D. R.; Palmeiro, B.; Para, A.; Pérez, J.; Querol, M.; Renner, J.; Ripoll, L.; Rodríguez, J.; Rogers, L.; Santos, F. P.; dos Santos, J. M. F.; Sofka, C.; Sorel, M.; Stiegler, T.; Toledo, J. F.; Torrent, J.; Tsamalaidze, Z.; Veloso, J. F. C. A.; Webb, R.; White, J. T.; Yahlali, N.
2017-08-01
The goal of the NEXT experiment is the observation of neutrinoless double beta decay in 136Xe using a gaseous xenon TPC with electroluminescent amplification and specialized photodetector arrays for calorimetry and tracking. The NEXT Collaboration is exploring a number of reconstruction algorithms to exploit the full potential of the detector. This paper describes one of them: the Maximum Likelihood Expectation Maximization (ML-EM) method, a generic iterative algorithm to find maximum-likelihood estimates of parameters that has been applied to solve many different types of complex inverse problems. In particular, we discuss a bi-dimensional version of the method in which the photosensor signals integrated over time are used to reconstruct a transverse projection of the event. First results show that, when applied to detector simulation data, the algorithm achieves nearly optimal energy resolution (better than 0.5% FWHM at the Q value of 136Xe) for events distributed over the full active volume of the TPC.
Quantum-state reconstruction by maximizing likelihood and entropy.
Teo, Yong Siah; Zhu, Huangjun; Englert, Berthold-Georg; Řeháček, Jaroslav; Hradil, Zdeněk
2011-07-08
Quantum-state reconstruction on a finite number of copies of a quantum system with informationally incomplete measurements, as a rule, does not yield a unique result. We derive a reconstruction scheme where both the likelihood and the von Neumann entropy functionals are maximized in order to systematically select the most-likely estimator with the largest entropy, that is, the least-bias estimator, consistent with a given set of measurement data. This is equivalent to the joint consideration of our partial knowledge and ignorance about the ensemble to reconstruct its identity. An interesting structure of such estimators will also be explored.
Inferring the parameters of a Markov process from snapshots of the steady state
NASA Astrophysics Data System (ADS)
Dettmer, Simon L.; Berg, Johannes
2018-02-01
We seek to infer the parameters of an ergodic Markov process from samples taken independently from the steady state. Our focus is on non-equilibrium processes, where the steady state is not described by the Boltzmann measure, but is generally unknown and hard to compute, which prevents the application of established equilibrium inference methods. We propose a quantity we call propagator likelihood, which takes on the role of the likelihood in equilibrium processes. This propagator likelihood is based on fictitious transitions between those configurations of the system which occur in the samples. The propagator likelihood can be derived by minimising the relative entropy between the empirical distribution and a distribution generated by propagating the empirical distribution forward in time. Maximising the propagator likelihood leads to an efficient reconstruction of the parameters of the underlying model in different systems, both with discrete configurations and with continuous configurations. We apply the method to non-equilibrium models from statistical physics and theoretical biology, including the asymmetric simple exclusion process (ASEP), the kinetic Ising model, and replicator dynamics.
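A two-state toy version of the propagator likelihood: propagate the empirical steady-state distribution one step forward with the parameterized transition matrix and score the observed samples against the result. The maximum sits where the empirical distribution is (approximately) stationary, recovering the true parameter; the transition-matrix parameterization below is an arbitrary illustration:

```python
import numpy as np

def T(theta):
    # Toy two-state chain; only the first row's switching rate is unknown.
    return np.array([[1.0 - theta, theta], [0.3, 0.7]])

rng = np.random.default_rng(6)
theta_true = 0.2
evals, evecs = np.linalg.eig(T(theta_true).T)   # stationary distribution
pi = np.real(evecs[:, np.argmax(np.real(evals))])
pi /= pi.sum()
samples = rng.choice(2, size=5000, p=pi)        # independent snapshots

p_emp = np.bincount(samples, minlength=2) / len(samples)

def propagator_likelihood(theta):
    q = p_emp @ T(theta)                        # empirical dist., one step later
    return np.sum(np.log(q[samples]))

thetas = np.linspace(0.01, 0.99, 99)
print(thetas[np.argmax([propagator_likelihood(t) for t in thetas])])  # ~0.2
```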
Subpixel based defocused points removal in photon-limited volumetric dataset
NASA Astrophysics Data System (ADS)
Muniraj, Inbarasan; Guo, Changliang; Malallah, Ra'ed; Maraka, Harsha Vardhan R.; Ryle, James P.; Sheridan, John T.
2017-03-01
The asymptotic property of the maximum likelihood estimator (MLE) has been utilized to reconstruct three-dimensional (3D) sectional images in the photon counting imaging (PCI) regime. At first, multiple 2D intensity images, known as elemental images (EIs), are captured. Then the geometric ray-tracing method is employed to reconstruct the 3D sectional images at various depth cues. We note that a 3D sectional image consists of both focused and defocused regions, depending on the reconstructed depth position. The defocused portion is redundant and should be removed in order to facilitate image analysis, e.g., 3D object tracking, recognition, classification and navigation. In this paper, we present a subpixel-level, three-step technique (adaptive thresholding, boundary detection and entropy-based segmentation) to discard the defocused sparse samples from the reconstructed photon-limited 3D sectional images. Simulation results are presented demonstrating the feasibility and efficiency of the proposed method.
Han, Miaomiao; Guo, Zhirong; Liu, Haifeng; Li, Qinghua
2018-05-01
Tomographic Gamma Scanning (TGS) is a method used for the nondestructive assay of radioactive wastes. In TGS, the actual irregular edge voxels are regarded as regular cubic voxels in the traditional treatment method. In this study, in order to improve the performance of TGS, a novel edge treatment method is proposed that considers the actual shapes of these voxels. The two different edge voxel treatment methods were compared by computing the pixel-level relative errors and normalized mean square errors (NMSEs) between the reconstructed transmission images and the ideal images. Both methods were coupled with two different iterative algorithms: the algebraic reconstruction technique (ART) with a non-negativity constraint and maximum likelihood expectation maximization (MLEM). The results demonstrated that the traditional method for edge voxel treatment can introduce significant error and that the real irregular edge voxel treatment method can improve the performance of TGS by obtaining better transmission reconstruction images. With the real irregular edge voxel treatment method, the MLEM and ART algorithms are comparable when assaying homogeneous matrices, but MLEM is superior to ART when assaying heterogeneous matrices. Copyright © 2018 Elsevier Ltd. All rights reserved.
Evaluation of a 3D point cloud tetrahedral tomographic reconstruction method
Pereira, N F; Sitek, A
2011-01-01
Tomographic reconstruction on an irregular grid may be superior to reconstruction on a regular grid. This is achieved through an appropriate choice of the image space model, the selection of an optimal set of points and the use of any available prior information during the reconstruction process. Accordingly, a number of reconstruction-related parameters must be optimized for best performance. In this work, a 3D point cloud tetrahedral mesh reconstruction method is evaluated for quantitative tasks. A linear image model is employed to obtain the reconstruction system matrix and five point generation strategies are studied. The evaluation is performed using the recovery coefficient, as well as voxel- and template-based estimates of bias and variance measures, computed over specific regions in the reconstructed image. A similar analysis is performed for regular grid reconstructions that use voxel basis functions. The maximum likelihood expectation maximization reconstruction algorithm is used. For the tetrahedral reconstructions, of the five point generation methods that are evaluated, three use image priors. For evaluation purposes, an object consisting of overlapping spheres with varying activity is simulated. The exact parallel projection data of this object are obtained analytically using a parallel projector, and multiple Poisson noise realizations of these exact data are generated and reconstructed using the different point generation strategies. The unconstrained nature of point placement in some of the irregular mesh-based reconstruction strategies has superior activity recovery for small, low-contrast image regions. The results show that, with an appropriately generated set of mesh points, the irregular grid reconstruction methods can out-perform reconstructions on a regular grid for mathematical phantoms, in terms of the performance measures evaluated. PMID:20736496
DOE Office of Scientific and Technical Information (OSTI.GOV)
Craciunescu, Teddy, E-mail: teddy.craciunescu@jet.uk; Tiseanu, Ion; Zoita, Vasile
The Joint European Torus (JET) neutron profile monitor ensures 2D coverage of the gamma and neutron emissive region that enables tomographic reconstruction. Due to the availability of only two projection angles and to the coarse sampling, tomographic inversion is a limited data set problem. Several techniques have been developed for tomographic reconstruction of the 2-D gamma and neutron emissivity on JET, but the problem of evaluating the errors associated with the reconstructed emissivity profile is still open. The reconstruction technique based on the maximum likelihood principle, which has already proved to be a powerful tool for JET tomography, has been used to develop a method for the numerical evaluation of the statistical properties of the uncertainties in gamma and neutron emissivity reconstructions. The image covariance calculation takes into account the additional techniques introduced in the reconstruction process for tackling the limited data set (projection resampling, smoothness regularization depending on the magnetic field). The method has been validated by numerical simulations and applied to JET data. Different sources of artefacts that may significantly influence the quality of reconstructions and the accuracy of the variance calculation have been identified.
Dong, Jian; Hayakawa, Yoshihiko; Kannenberg, Sven; Kober, Cornelia
2013-02-01
The objective of this study was to reduce metal-induced streak artifacts on oral and maxillofacial x-ray computed tomography (CT) images by developing a fast statistical image reconstruction system using iterative reconstruction algorithms. Adjacent CT images often depict similar anatomical structures in thin slices. So, first, images were reconstructed using the same projection data as an artifact-free image. Second, images were processed by the successive iterative restoration method, where projection data were generated from the reconstructed image in sequence. Besides the maximum likelihood-expectation maximization algorithm, the ordered subset-expectation maximization algorithm (OS-EM) was examined. Also, small region of interest (ROI) setting and reverse processing were applied to improve performance. Both algorithms reduced artifacts while slightly decreasing gray levels. The OS-EM algorithm and small ROI setting reduced the processing duration without apparent detriment. Sequential and reverse processing did not show apparent effects. Two alternatives in iterative reconstruction methods were effective for artifact reduction. The OS-EM algorithm and small ROI setting improved the performance. Copyright © 2012 Elsevier Inc. All rights reserved.
Markov chain Monte Carlo estimation of quantum states
NASA Astrophysics Data System (ADS)
Diguglielmo, James; Messenger, Chris; Fiurášek, Jaromír; Hage, Boris; Samblowski, Aiko; Schmidt, Tabea; Schnabel, Roman
2009-03-01
We apply a Bayesian data analysis scheme known as the Markov chain Monte Carlo to the tomographic reconstruction of quantum states. This method yields a vector, known as the Markov chain, which contains the full statistical information concerning all reconstruction parameters including their statistical correlations with no a priori assumptions as to the form of the distribution from which it has been obtained. From this vector we can derive, e.g., the marginal distributions and uncertainties of all model parameters, and also of other quantities such as the purity of the reconstructed state. We demonstrate the utility of this scheme by reconstructing the Wigner function of phase-diffused squeezed states. These states possess non-Gaussian statistics and therefore represent a nontrivial case of tomographic reconstruction. We compare our results to those obtained through pure maximum-likelihood and Fisher information approaches.
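The scheme can be sketched for a single qubit: random-walk Metropolis over the Bloch vector with a flat prior inside the Bloch ball and a binomial likelihood from Pauli measurement counts. Marginals, uncertainties, and derived quantities such as the purity then come directly from the chain; all numbers below are illustrative:

```python
import numpy as np

rng = np.random.default_rng(7)
r_true = np.array([0.6, 0.0, 0.7])       # Bloch vector of the "true" state
N = 500                                  # shots per Pauli basis (X, Y, Z)
counts = rng.binomial(N, 0.5 * (1.0 + r_true))   # +1-outcome counts

def log_like(r):
    p = 0.5 * (1.0 + r)                  # Born-rule +1 probabilities
    return np.sum(counts * np.log(p) + (N - counts) * np.log(1.0 - p))

chain, r = [], np.zeros(3)               # Metropolis walk in the Bloch ball
for _ in range(20000):
    prop = r + 0.05 * rng.normal(size=3)
    if np.linalg.norm(prop) < 1.0 and \
       np.log(rng.random()) < log_like(prop) - log_like(r):
        r = prop
    chain.append(r.copy())
chain = np.array(chain[5000:])           # discard burn-in

print(chain.mean(axis=0), chain.std(axis=0))      # posterior mean and spread
print(((1.0 + (chain ** 2).sum(axis=1)) / 2).mean())  # mean purity Tr(rho^2)
```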
NASA Astrophysics Data System (ADS)
Zhu, Yansong; Jha, Abhinav K.; Dreyer, Jakob K.; Le, Hanh N. D.; Kang, Jin U.; Roland, Per E.; Wong, Dean F.; Rahmim, Arman
2017-02-01
Fluorescence molecular tomography (FMT) is a promising tool for real-time in vivo quantification of neurotransmission (NT), which we pursue in our BRAIN initiative effort. However, the acquired image data are noisy and the reconstruction problem is ill-posed. Further, while spatial sparsity of the NT effects could be exploited, traditional compressive-sensing methods cannot be directly applied as the system matrix in FMT is highly coherent. To overcome these issues, we propose and assess a three-step reconstruction method. First, truncated singular value decomposition is applied on the data to reduce matrix coherence. The resultant image data are input to a homotopy-based reconstruction strategy that exploits sparsity via l1 regularization. The reconstructed image is then input to a maximum-likelihood expectation maximization (MLEM) algorithm that retains the sparseness of the input estimate and improves upon the quantitation by accurate Poisson noise modeling. The proposed reconstruction method was evaluated in a three-dimensional simulated setup with fluorescent sources in a cuboidal scattering medium with optical properties simulating human brain cortex (reduced scattering coefficient: 9.2 cm-1, absorption coefficient: 0.1 cm-1), with tomographic measurements made using pixelated detectors. In different experiments, fluorescent sources of varying size and intensity were simulated. The proposed reconstruction method provided accurate estimates of the fluorescent source intensity, with a 20% lower root mean square error on average compared to the pure-homotopy method for all considered source intensities and sizes. Further, compared with a conventional l2-regularized algorithm, the proposed method reconstructed a substantially more accurate fluorescence distribution overall. The proposed method shows considerable promise and will be tested using more realistic simulations and experimental setups.
NASA Astrophysics Data System (ADS)
Ma, Chuang; Chen, Han-Shuang; Lai, Ying-Cheng; Zhang, Hai-Feng
2018-02-01
Complex networks hosting binary-state dynamics arise in a variety of contexts. In spite of previous works, to fully reconstruct the network structure from observed binary data remains challenging. We articulate a statistical inference based approach to this problem. In particular, exploiting the expectation-maximization (EM) algorithm, we develop a method to ascertain the neighbors of any node in the network based solely on binary data, thereby recovering the full topology of the network. A key ingredient of our method is the maximum-likelihood estimation of the probabilities associated with actual or nonexistent links, and we show that the EM algorithm can distinguish the two kinds of probability values without any ambiguity, insofar as the length of the available binary time series is reasonably long. Our method does not require any a priori knowledge of the detailed dynamical processes, is parameter-free, and is capable of accurate reconstruction even in the presence of noise. We demonstrate the method using combinations of distinct types of binary dynamical processes and network topologies, and provide a physical understanding of the underlying reconstruction mechanism. Our statistical inference based reconstruction method contributes an additional piece to the rapidly expanding "toolbox" of data based reverse engineering of complex networked systems.
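The decisive step, separating the estimated pairwise probabilities into an "actual link" group and a "nonexistent link" group, can be mimicked with a two-component mixture fitted by EM. The scores below are synthetic stand-ins for the per-pair estimates produced by the method described above:

```python
import numpy as np

rng = np.random.default_rng(8)
# Synthetic per-pair scores: nonexistent links cluster low, actual links high.
scores = np.concatenate([rng.normal(0.05, 0.02, 400),    # no link
                         rng.normal(0.30, 0.05, 100)])   # link

# Two-component Gaussian mixture fitted by EM.
w = np.array([0.5, 0.5]); mu = np.array([0.0, 0.5]); sd = np.array([0.1, 0.1])
for _ in range(100):
    # E-step: responsibilities of each component for each score
    dens = w * np.exp(-0.5 * ((scores[:, None] - mu) / sd) ** 2) \
             / (sd * np.sqrt(2.0 * np.pi))
    resp = dens / dens.sum(axis=1, keepdims=True)
    # M-step: re-estimate weights, means and standard deviations
    nk = resp.sum(axis=0)
    w = nk / len(scores)
    mu = (resp * scores[:, None]).sum(axis=0) / nk
    sd = np.sqrt((resp * (scores[:, None] - mu) ** 2).sum(axis=0) / nk)

links = resp[:, 1] > 0.5                 # pairs classified as actual links
print(np.round(mu, 3), links.sum())      # two well-separated modes, ~100 links
```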
NASA Astrophysics Data System (ADS)
David, Sabrina; Burion, Steve; Tepe, Alan; Wilfley, Brian; Menig, Daniel; Funk, Tobias
2012-03-01
Iterative reconstruction methods have emerged as a promising avenue to reduce dose in CT imaging. Another, perhaps less well-known, advance has been the development of inverse geometry CT (IGCT) imaging systems, which can significantly reduce the radiation dose delivered to a patient during a CT scan compared to conventional CT systems. Here we show that IGCT data can be reconstructed using iterative methods, thereby combining two novel methods for CT dose reduction. A prototype IGCT scanner was developed using a scanning beam digital X-ray system - an inverse geometry fluoroscopy system with a 9,000-focal-spot x-ray source and a small photon-counting detector. Ninety fluoroscopic projections or "superviews" spanning an angle of 360 degrees were acquired of an anthropomorphic phantom mimicking a 1-year-old boy. The superviews were reconstructed with a custom iterative reconstruction algorithm based on the maximum-likelihood algorithm for transmission tomography (ML-TR). The normalization term was calculated from flat-field data acquired without a phantom. 15 subsets were used, and a total of 10 complete iterations were performed. Initial reconstructed images showed faithful reconstruction of anatomical details. Good edge resolution and good contrast-to-noise properties were observed. Overall, ML-TR reconstruction of IGCT data collected by a bench-top prototype was shown to be viable, which may be an important milestone in the further development of inverse geometry CT.
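The ML-TR iteration differs from emission MLEM because transmission counts are Poisson around b·exp(−l): each update adds a preconditioned gradient of the transmission log-likelihood. A toy sketch with a random path-length matrix, using a convex-surrogate step of the kind found in the transmission-tomography literature (a generic illustration, not the authors' exact implementation):

```python
import numpy as np

rng = np.random.default_rng(9)
A = rng.random((50, 12)) * 0.1          # toy path-length (system) matrix
mu_true = rng.uniform(0.1, 0.5, 12)     # attenuation coefficients
b = 1.0e4                               # blank-scan (flat-field) counts per ray
y = rng.poisson(b * np.exp(-A @ mu_true))

mu = np.full(12, 0.2)
L = A.sum(axis=1)                       # total intersection length per ray
for _ in range(500):
    yhat = b * np.exp(-A @ mu)          # expected transmission counts
    grad = A.T @ (yhat - y)             # gradient of Poisson log-likelihood
    curv = A.T @ (L * yhat)             # convex-surrogate curvature term
    mu = np.maximum(0.0, mu + grad / curv)
print(np.abs(mu - mu_true).max())
```

Ordered subsets, as used above, would simply cycle this update over subsets of the rays to accelerate convergence.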
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen, Y
2015-06-15
Purpose: To improve the quality of kV X-ray cone beam CT (CBCT) for use in radiotherapy delivery assessment and re-planning by using penalized likelihood (PL) iterative reconstruction and the auto-segmentation accuracy of the resulting CBCTs as an image quality metric. Methods: Present filtered backprojection (FBP) CBCT reconstructions can be improved upon by PL reconstruction with image formation models and appropriate regularization constraints. We use two constraints: 1) image smoothing via an edge preserving filter, and 2) a constraint minimizing the differences between the reconstruction and a registered prior image. Reconstructions of prostate therapy CBCTs were computed with constraint 1 alone and with both constraints. The prior images were planning CTs (pCT) deformably registered to the FBP reconstructions. Anatomy segmentations were done using atlas-based auto-segmentation (Elekta ADMIRE). Results: We observed small but consistent improvements in the Dice similarity coefficients of PL reconstructions over the FBP results, and additional small improvements with the added prior image constraint. For a CBCT with anatomy very similar in appearance to the pCT, we observed these changes in the Dice metric: +2.9% (prostate), +8.6% (rectum), −1.9% (bladder). For a second CBCT with a very different rectum configuration, we observed +0.8% (prostate), +8.9% (rectum), −1.2% (bladder). For a third case with significant lateral truncation of the field of view, we observed: +0.8% (prostate), +8.9% (rectum), −1.2% (bladder). Adding the prior image constraint raised Dice measures by about 1%. Conclusion: Efficient and practical adaptive radiotherapy requires accurate deformable registration and accurate anatomy delineation. We show here small and consistent patterns of improved contour accuracy using PL iterative reconstruction compared with FBP reconstruction. However, the modest extent of these results and the pattern of differences across CBCT cases suggest that significant further development will be required to make CBCT useful to adaptive radiotherapy.
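The image-quality metric used throughout this comparison, the Dice similarity coefficient, is straightforward to compute from two binary segmentation masks (a minimal sketch; the masks here are arbitrary examples):

```python
import numpy as np

def dice(mask_a: np.ndarray, mask_b: np.ndarray) -> float:
    """Dice similarity coefficient: 2|A n B| / (|A| + |B|)."""
    a, b = mask_a.astype(bool), mask_b.astype(bool)
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

# e.g. an auto-segmented organ contour against a reference contour:
ref = np.zeros((32, 32, 32), dtype=bool)
ref[8:20, 8:20, 8:20] = True
auto = np.zeros_like(ref)
auto[9:21, 8:20, 8:20] = True            # shifted by one voxel
print(dice(ref, auto))                   # ~0.92; 1.0 would be perfect overlap
```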
Chen, Shuhang; Liu, Huafeng; Shi, Pengcheng; Chen, Yunmei
2015-01-21
Accurate and robust reconstruction of the radioactivity concentration is of great importance in positron emission tomography (PET) imaging. Given the Poisson nature of photon-counting measurements, we present a reconstruction framework that integrates a sparsity penalty on a dictionary into a maximum likelihood estimator. Patch-sparsity on a dictionary provides the regularization for our effort, and iterative procedures are used to solve the maximum likelihood function formulated on Poisson statistics. Specifically, in our formulation, a dictionary can be trained on CT images, to provide intrinsic anatomical structures for the reconstructed images, or adaptively learned from the noisy PET measurements. The accuracy of the strategy is demonstrated with very promising results on Monte Carlo simulations and real data.
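The building block being regularized is the classical MLEM update; a generic way to impose a patch-dictionary penalty is to alternate that update with a sparse-coding denoising step. The sketch below makes that structure explicit; the `denoise` callable and the alternation scheme are illustrative assumptions, since the paper folds the sparsity penalty directly into its Poisson objective.

```python
import numpy as np

def mlem_step(x, A, y, eps=1e-12):
    """One classical MLEM update for Poisson emission data."""
    sens = A.T @ np.ones(A.shape[0])                # sensitivity image
    ratio = y / np.maximum(A @ x, eps)
    return (x / np.maximum(sens, eps)) * (A.T @ ratio)

def regularized_iteration(x, A, y, denoise):
    """Alternate an MLEM step with a patch-dictionary denoising step --
    one generic way to impose dictionary sparsity (assumption)."""
    return denoise(mlem_step(x, A, y))
```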
PET Image Reconstruction Incorporating 3D Mean-Median Sinogram Filtering
NASA Astrophysics Data System (ADS)
Mokri, S. S.; Saripan, M. I.; Rahni, A. A. Abd; Nordin, A. J.; Hashim, S.; Marhaban, M. H.
2016-02-01
Positron Emission Tomography (PET) projection data, or sinograms, contain poor statistics and randomness that produce noisy PET images. In order to improve the PET image, we propose an implementation of pre-reconstruction sinogram filtering based on a 3D mean-median filter. The proposed filter is designed with three aims: to minimise angular blurring artifacts, to smooth flat regions and to preserve the edges in the reconstructed PET image. The performance of the pre-reconstruction sinogram filter prior to three established reconstruction methods, namely filtered backprojection (FBP), ordered-subset maximum likelihood expectation maximization (OSEM) and OSEM with median root prior (OSEM-MRP), is investigated using simulated NCAT phantom PET sinograms as generated by the PET Analytical Simulator (ASIM). The improvement in the quality of the reconstructed images with and without sinogram filtering is assessed by visual as well as quantitative evaluation based on global signal-to-noise ratio (SNR), local SNR, contrast-to-noise ratio (CNR) and edge preservation capability. Further analysis of the achieved improvement is also carried out specific to the iterative OSEM and OSEM-MRP reconstruction methods with and without pre-reconstruction filtering in terms of contrast recovery curve (CRC) versus noise trade-off, normalised mean square error versus iteration, local CNR versus iteration and lesion detectability. Overall, satisfactory results are obtained from both visual and quantitative evaluations.
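The abstract does not spell out how the mean and median stages are combined, so the sketch below shows one plausible reading (median first to suppress outliers while preserving edges, then mean to smooth flat regions), using standard SciPy filters:

```python
from scipy.ndimage import median_filter, uniform_filter

def mean_median_filter(sino3d, size=3):
    """3D median pass followed by a 3D mean pass over a sinogram stack.

    The ordering and kernel size are assumptions, not the paper's design.
    """
    return uniform_filter(median_filter(sino3d, size=size), size=size)
```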
NASA Astrophysics Data System (ADS)
Makeev, Andrey; Ikejimba, Lynda; Lo, Joseph Y.; Glick, Stephen J.
2016-03-01
Although digital mammography has reduced breast cancer mortality by approximately 30%, sensitivity and specificity are still far from perfect. In particular, the performance of mammography is especially limited for women with dense breast tissue. Two out of every three biopsies performed in the U.S. are unnecessary, thereby resulting in increased patient anxiety, pain, and possible complications. One promising tomographic breast imaging method that has recently been approved by the FDA is dedicated breast computed tomography (BCT). However, visualizing lesions with BCT can still be challenging for women with dense breast tissue due to the minimal contrast for lesions surrounded by fibroglandular tissue. In recent years there has been renewed interest in improving lesion conspicuity in x-ray breast imaging by administration of an iodinated contrast agent. Due to the fully 3-D imaging nature of BCT, as well as sub-optimal contrast enhancement while the breast is under compression with mammography and breast tomosynthesis, dedicated BCT of the uncompressed breast is likely to offer the best solution for injected contrast-enhanced x-ray breast imaging. It is well known that use of statistically based iterative reconstruction in CT results in improved image quality at lower radiation dose. Here we investigate possible improvements in image reconstruction for BCT by optimizing the free regularization parameter of a maximum-likelihood method and comparing its performance with the clinical cone-beam filtered backprojection (FBP) algorithm.
Julien, Clavel; Leandro, Aristide; Hélène, Morlon
2018-06-19
Working with high-dimensional phylogenetic comparative datasets is challenging because likelihood-based multivariate methods suffer from poor statistical performance as the number of traits p approaches the number of species n, and because some computational complications occur when p exceeds n. Alternative phylogenetic comparative methods have recently been proposed to deal with the large-p, small-n scenario, but their use and performance are limited. Here we develop a penalized likelihood framework to deal with high-dimensional comparative datasets. We propose various penalizations and methods for selecting the intensity of the penalties. We apply this general framework to the estimation of parameters (the evolutionary trait covariance matrix and parameters of the evolutionary model) and to model comparison for the high-dimensional multivariate Brownian (BM), Early-burst (EB), Ornstein-Uhlenbeck (OU) and Pagel's lambda models. We show using simulations that our penalized likelihood approach dramatically improves the estimation of evolutionary trait covariance matrices and model parameters when p approaches n, and allows for their accurate estimation when p equals or exceeds n. In addition, we show that penalized likelihood models can be efficiently compared using the Generalized Information Criterion (GIC). We implement these methods, as well as the related estimation of ancestral states and the computation of phylogenetic PCA, in the R packages RPANDA and mvMORPH. Finally, we illustrate the utility of the new proposed framework by evaluating evolutionary model fit, analyzing integration patterns, and reconstructing evolutionary trajectories for a high-dimensional 3-D dataset of brain shape in the New World monkeys. We find clear support for an Early-burst model, suggesting an early diversification of brain morphology during the ecological radiation of the clade. Penalized likelihood offers an efficient way to deal with high-dimensional multivariate comparative data.
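As a concrete illustration of the simplest kind of penalty in this family, a linear ridge shrinkage of the sample trait covariance toward a diagonal target can be written in a few lines; this is a generic stand-in, not the specific penalties implemented in RPANDA/mvMORPH:

```python
import numpy as np

def shrunk_covariance(X, gamma):
    """Linear shrinkage of a sample trait covariance toward its diagonal.

    X     : (n_species, p_traits) trait matrix (phylogenetically transformed)
    gamma : penalty intensity in [0, 1], e.g. chosen by cross-validation or GIC
    """
    S = np.cov(X, rowvar=False)
    return (1.0 - gamma) * S + gamma * np.diag(np.diag(S))
```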
NASA Astrophysics Data System (ADS)
Hong, Inki; Cho, Sanghee; Michel, Christian J.; Casey, Michael E.; Schaefferkoetter, Joshua D.
2014-09-01
A new data handling method is presented for improving the image noise distribution and reducing bias when reconstructing very short frames from low-count dynamic PET acquisitions. The new method, termed 'Complementary Frame Reconstruction' (CFR), involves the indirect formation of a count-limited emission image in a short frame through subtraction of two frames with longer acquisition times, where the short-frame data are excluded from the second long frame before reconstruction. This approach can be regarded as an alternative to the AML algorithm recently proposed by Nuyts et al., as a method to reduce the bias of maximum likelihood expectation maximization (MLEM) reconstruction of count-limited data. CFR uses long-scan emission data to stabilize the reconstruction and avoids modification of algorithms such as MLEM. The subtraction between two long-frame images naturally allows negative voxel values and significantly reduces the bias introduced in the final image. Simulations based on phantom and clinical data were used to evaluate the accuracy of the reconstructed images in representing the true activity distribution. Applicability to determining the arterial input function in human and small-animal studies is also explored. In situations with limited count rates, e.g. pediatric applications, gated abdominal or cardiac studies, etc., or when using limited doses of short-lived isotopes such as 15O-water, the proposed method will likely be preferred over independent frame reconstruction to address bias and noise issues.
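The core of CFR is a subtraction of two long-frame reconstructions, which can be expressed compactly; the function names below are illustrative:

```python
def cfr_short_frame(reconstruct, data_long, data_short):
    """Complementary Frame Reconstruction sketch.

    reconstruct : any stabilizing reconstruction, e.g. unmodified MLEM
    data_long   : counts for the long frame (includes the short frame)
    data_short  : counts for the short frame of interest
    """
    img_long = reconstruct(data_long)
    img_complement = reconstruct(data_long - data_short)  # short frame excluded
    return img_long - img_complement   # may legitimately contain negative voxels
```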
NASA Astrophysics Data System (ADS)
Lim, Hongki; Dewaraja, Yuni K.; Fessler, Jeffrey A.
2018-02-01
Most existing PET image reconstruction methods impose a nonnegativity constraint in the image domain that is natural physically, but can lead to biased reconstructions. This bias is particularly problematic for Y-90 PET because of the low probability of positron production and the high random coincidence fraction. This paper investigates a new PET reconstruction formulation that enforces nonnegativity of the projections instead of the voxel values. This formulation allows some negative voxel values, thereby potentially reducing bias. Unlike the previously reported NEG-ML approach that modifies the Poisson log-likelihood to allow negative values, the new formulation retains the classical Poisson statistical model. To relax the non-negativity constraint embedded in standard methods for PET reconstruction, we used an alternating direction method of multipliers (ADMM). Because the choice of ADMM parameters can greatly influence the convergence rate, we applied an automatic parameter selection method to improve the convergence speed. We investigated the methods using lung-to-liver slices of the XCAT phantom. We simulated low true-coincidence count rates with high random fractions, corresponding to typical values from patient imaging in Y-90 microsphere radioembolization. We compared our new method with standard reconstruction algorithms and with NEG-ML and a regularized version thereof. Both our new method and NEG-ML allow more accurate quantification in all volumes of interest while yielding lower noise than the standard method. The performance of NEG-ML can degrade when its user-defined parameter is tuned poorly, while the proposed algorithm is robust at any count level without requiring parameter tuning.
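To make the variable-splitting idea concrete, here is a toy ADMM for a Gaussian simplification of the problem, min_x 0.5||Ax − y||^2 subject to Ax ≥ 0; the paper's actual formulation keeps the Poisson likelihood and adds automatic parameter selection, so treat this purely as a structural sketch.

```python
import numpy as np

def admm_nonneg_projections(A, y, rho=1.0, n_iter=50):
    """ADMM sketch with the split u = Ax and the constraint u >= 0."""
    m, n = A.shape
    x, u, w = np.zeros(n), np.zeros(m), np.zeros(m)
    lhs = (1.0 + rho) * (A.T @ A) + 1e-9 * np.eye(n)  # small ridge for stability
    for _ in range(n_iter):
        # x-update: minimize 0.5||Ax-y||^2 + (rho/2)||Ax - u + w||^2
        x = np.linalg.solve(lhs, A.T @ (y + rho * (u - w)))
        Ax = A @ x
        u = np.maximum(0.0, Ax + w)                   # project onto u >= 0
        w += Ax - u                                   # dual ascent
    return x
```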
Region of interest processing for iterative reconstruction in x-ray computed tomography
NASA Astrophysics Data System (ADS)
Kopp, Felix K.; Nasirudin, Radin A.; Mei, Kai; Fehringer, Andreas; Pfeiffer, Franz; Rummeny, Ernst J.; Noël, Peter B.
2015-03-01
Recent advancements in graphics card technology have raised the performance of parallel computing and contributed to the introduction of iterative reconstruction methods for x-ray computed tomography in clinical CT scanners. Iterative maximum likelihood (ML) based reconstruction methods are known to reduce image noise and to improve the diagnostic quality of low-dose CT. However, iterative reconstruction of a region of interest (ROI), especially ML based, is challenging, yet for some clinical procedures, like cardiac CT, only a ROI is needed for diagnostics. A high-resolution reconstruction of the full field of view (FOV) consumes unnecessary computational effort and results in a slower reconstruction than is clinically acceptable. In this work, we present an extension and evaluation of an existing ROI processing algorithm; in particular, improvements to the equalization between regions inside and outside a ROI are proposed. The evaluation was done on data collected from a clinical CT scanner. The performance of the different algorithms is qualitatively and quantitatively assessed. Our solution to the ROI problem provides an increase in signal-to-noise ratio and leads to visually less noise in the final reconstruction. The reconstruction speed of our technique was observed to be comparable with other previously proposed techniques. The development of ROI processing algorithms in combination with iterative reconstruction will provide higher diagnostic quality in the near future.
Sinogram restoration in computed tomography with an edge-preserving penalty
Little, Kevin J.; La Rivière, Patrick J.
2015-01-01
Purpose: With the goal of producing a less computationally intensive alternative to fully iterative penalized-likelihood image reconstruction, our group has explored the use of penalized-likelihood sinogram restoration for transmission tomography. Previously, we have exclusively used a quadratic penalty in our restoration objective function. However, a quadratic penalty does not excel at preserving edges while reducing noise. Here, we derive a restoration update equation for nonquadratic penalties. Additionally, we perform a feasibility study to extend our sinogram restoration method to a helical cone-beam geometry and clinical data. Methods: A restoration update equation for nonquadratic penalties is derived using separable parabolic surrogates (SPS). A method for calculating sinogram degradation coefficients for a helical cone-beam geometry is proposed. Using simulated data, sinogram restorations are performed using both a quadratic penalty and the edge-preserving Huber penalty. After sinogram restoration, Fourier-based analytical methods are used to obtain reconstructions, and resolution-noise trade-offs are investigated. For the fan-beam geometry, a comparison is made to image-domain SPS reconstruction using the Huber penalty. The effects of varying object size and contrast are also investigated. For the helical cone-beam geometry, we investigate the effect of helical pitch (axial movement/rotation). Huber-penalty sinogram restoration is performed on 3D clinical data, and the reconstructed images are compared to those generated with no restoration. Results: We find that by applying the edge-preserving Huber penalty to our sinogram restoration methods, the reconstructed image has a better resolution-noise relationship than an image produced using a quadratic penalty in the sinogram restoration. However, we find that this relatively straightforward approach to edge preservation in the sinogram domain is affected by the physical size of imaged objects in addition to the contrast across the edge. This presents some disadvantages of this method relative to image-domain edge-preserving methods, although the computational burden of the sinogram-domain approach is much lower. For a helical cone-beam geometry, we found applying sinogram restoration in 3D was reasonable and that pitch did not make a significant difference in the general effect of sinogram restoration. The application of Huber-penalty sinogram restoration to clinical data resulted in a reconstruction with less noise while retaining resolution. Conclusions: Sinogram restoration with the Huber penalty is able to provide better resolution-noise performance than restoration with a quadratic penalty. Additionally, sinogram restoration with the Huber penalty is feasible for helical cone-beam CT and can be applied to clinical data. PMID:25735286
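For reference, the edge-preserving Huber penalty discussed here has the standard form

```latex
\psi_\delta(t) =
\begin{cases}
  t^2 / 2,                   & |t| \le \delta, \\
  \delta |t| - \delta^2 / 2, & |t| > \delta,
\end{cases}
```

which is quadratic for small differences (smoothing noise) and linear for large ones (preserving edges), with δ setting the crossover.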
Evaluation of two methods for using MR information in PET reconstruction
NASA Astrophysics Data System (ADS)
Caldeira, L.; Scheins, J.; Almeida, P.; Herzog, H.
2013-02-01
Using magnetic resonance (MR) information in maximum a posteriori (MAP) algorithms for positron emission tomography (PET) image reconstruction has been investigated in recent years. Recently, three methods to introduce this information were evaluated and the Bowsher prior was considered the best. Its main advantage is that it does not require image segmentation. Another method that has been widely used for incorporating MR information is using boundaries obtained by segmentation. This method has also shown improvements in image quality. In this paper, two methods for incorporating MR information in PET reconstruction are compared. After a Bayes parameter optimization, the reconstructed images were compared using the mean squared error (MSE) and the coefficient of variation (CV). MSE values are 3% lower with Bowsher than with boundaries. CV values are 10% lower with Bowsher than with boundaries. Both methods performed better, in terms of MSE and CV, than using no prior, that is, maximum likelihood expectation maximization (MLEM) or MAP without anatomic information. Concluding, incorporating MR information using the Bowsher prior gives better results in terms of MSE and CV than using boundaries. MAP algorithms were again shown to be effective in noise reduction and convergence, especially when MR information is incorporated. The robustness of the priors with respect to noise and inhomogeneities in the MR image, however, still has to be evaluated.
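A sketch of the Bowsher idea: for each voxel, the penalty couples it only to the neighbours whose MR values are most similar, which avoids explicit segmentation. The 2D, 8-neighbour version below with wrap-around boundaries is a simplification for illustration.

```python
import numpy as np

def bowsher_weights(mr, n_keep=4):
    """Bowsher-style neighbour selection sketch (2D, 8-neighbourhood).

    Returns (8, H, W) binary weights marking, per voxel, the n_keep
    neighbours with the most similar MR values (ties may keep extras);
    np.roll wrap-around at image borders is a simplification.
    """
    offs = [(-1, -1), (-1, 0), (-1, 1), (0, -1),
            (0, 1), (1, -1), (1, 0), (1, 1)]
    diffs = np.stack([np.abs(mr - np.roll(mr, o, axis=(0, 1))) for o in offs])
    kth = np.partition(diffs, n_keep - 1, axis=0)[n_keep - 1]
    return (diffs <= kth).astype(float)
```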
SU-C-207A-01: A Novel Maximum Likelihood Method for High-Resolution Proton Radiography/proton CT
DOE Office of Scientific and Technical Information (OSTI.GOV)
Collins-Fekete, C; Centre Hospitalier University de Quebec, Quebec, QC; Mass General Hospital
2016-06-15
Purpose: Multiple Coulomb scattering is the largest contributor to blurring in proton imaging. Here we tested a maximum likelihood least squares estimator (MLLSE) to improve the spatial resolution of proton radiography (pRad) and proton computed tomography (pCT). Methods: The object is discretized into voxels and the average relative stopping power through voxel columns defined from the source to the detector pixels is optimized such that it maximizes the likelihood of the proton energy loss. The length spent by individual protons in each column is calculated through an optimized cubic spline estimate. pRad images were first produced using Geant4 simulations. An anthropomorphic head phantom and the Catphan line-pair module for 3-D spatial resolution were studied and the resulting images were analyzed. Both parallel and conical beams were investigated for simulated pRad acquisition. Then, experimental data of a pediatric head phantom (CIRS) were acquired using a recently completed experimental pCT scanner. Specific filters were applied to proton angle and energy-loss data to remove proton histories that underwent nuclear interactions. The MTF10% (lp/mm) was used to evaluate and compare spatial resolution. Results: Numerical simulations showed improvement in the pRad spatial resolution for the parallel (2.75 to 6.71 lp/cm) and conical beam (3.08 to 5.83 lp/cm) reconstructed with the MLLSE compared to averaging detector pixel signals. For full tomographic reconstruction, the improved pRad were used as input into a simultaneous algebraic reconstruction algorithm. The Catphan pCT reconstruction based on the MLLSE-enhanced projections showed spatial resolution improvement for the parallel (2.83 to 5.86 lp/cm) and conical beam (3.03 to 5.15 lp/cm). The anthropomorphic head pCT displayed important contrast gains in high-gradient regions. Experimental results also demonstrated significant improvement in spatial resolution of the pediatric head radiography. Conclusion: The proposed MLLSE shows promising potential to increase the spatial resolution (up to 244%) in proton imaging.
Understanding the factors that influence breast reconstruction decision making in Australian women.
Somogyi, Ron Barry; Webb, Angela; Baghdikian, Nairy; Stephenson, John; Edward, Karen-Leigh; Morrison, Wayne
2015-04-01
Breast reconstruction is safe and improves quality of life. Despite this, many women do not undergo breast reconstruction, and the reasons for this are poorly understood. This study aims to identify the factors that influence a woman's decision whether or not to have breast reconstruction and to better understand their attitudes toward reconstruction. An online survey was distributed to breast cancer patients from Breast Cancer Network Australia. Results were tabulated, described qualitatively and analyzed for significance using a multiple logistic regression model. 501 mastectomy patients completed surveys, of whom 62% had undergone breast reconstruction. Factors that positively influenced the likelihood of reconstruction included lower age, bilateral mastectomy, access to private hospitals, decreased home/work responsibilities, increased level of home support and early discussion of reconstructive options. The most common reasons for avoiding reconstruction included "I don't feel the need" and "I don't want more surgery". The most commonly cited sources of reconstruction information were the breast surgeon, followed by the plastic surgeon and then the breast cancer nurse, and the most influential of these was the plastic surgeon. A model using factors easily obtained on clinical history can be used to understand the likelihood of reconstruction. This knowledge may help identify barriers to reconstruction, ultimately improving the clinicians' ability to appropriately educate mastectomy patients and ensure effective decision making around breast reconstruction.
Quantum State Tomography via Linear Regression Estimation
Qi, Bo; Hou, Zhibo; Li, Li; Dong, Daoyi; Xiang, Guoyong; Guo, Guangcan
2013-01-01
A simple yet efficient state reconstruction algorithm of linear regression estimation (LRE) is presented for quantum state tomography. In this method, quantum state reconstruction is converted into a parameter estimation problem of a linear regression model and the least-squares method is employed to estimate the unknown parameters. An asymptotic mean squared error (MSE) upper bound for all possible states to be estimated is given analytically, which depends explicitly upon the involved measurement bases. This analytical MSE upper bound can guide one to choose optimal measurement sets. The computational complexity of LRE is O(d^4) where d is the dimension of the quantum state. Numerical examples show that LRE is much faster than maximum-likelihood estimation for quantum state tomography. PMID:24336519
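Since Tr(E_i ρ) is linear in the entries of ρ, the LRE idea reduces to ordinary least squares; a bare-bones sketch (without the paper's MSE analysis or a projection onto physical states) looks like this:

```python
import numpy as np

def lre_state(E_ops, freqs):
    """Linear regression estimation sketch for quantum state tomography.

    E_ops : list of (d, d) measurement operators E_i
    freqs : measured frequencies f_i approximating Tr(E_i rho)
    Uses Tr(E rho) = vec(E^T) . vec(rho); Hermitization is included, but
    projection onto positive semidefinite, trace-one states is omitted.
    """
    X = np.stack([E.T.reshape(-1) for E in E_ops])
    beta, *_ = np.linalg.lstsq(X, freqs, rcond=None)
    d = E_ops[0].shape[0]
    rho = beta.reshape(d, d)
    return 0.5 * (rho + rho.conj().T)
```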
DOE Office of Scientific and Technical Information (OSTI.GOV)
McMillan, Kyle; Marleau, Peter; Brubaker, Erik
In coded aperture imaging, one of the most important factors determining the quality of reconstructed images is the choice of mask/aperture pattern. In many applications, uniformly redundant arrays (URAs) are widely accepted as the optimal mask pattern. Under ideal conditions (thin and highly opaque masks), URA patterns are mathematically constructed to provide artifact-free reconstruction; however, the number of URAs for a chosen number of mask elements is limited, and when highly penetrating particles such as fast neutrons and high-energy gamma-rays are being imaged, the optimum is seldom achieved. In this case, more robust mask patterns that provide better reconstructed image quality may exist. Through the use of heuristic optimization methods and maximum likelihood expectation maximization (MLEM) image reconstruction, we show that for both point and extended neutron sources a random mask pattern can be optimized to provide better image quality than that of a URA.
Yu, Anthony; Prentice, Heather A; Burfeind, William E; Funahashi, Tadashi; Maletis, Gregory B
2018-03-01
Allograft tissue is frequently used in anterior cruciate ligament reconstruction (ACLR). It is often irradiated and/or chemically processed to decrease the risk of disease transmission, but some tissue is aseptically harvested without further processing. Irradiated and chemically processed allograft tissue appears to have a higher risk of revision, but whether this processing decreases the risk of infection is not clear. To determine the incidence of deep surgical site infection after ACLR with allograft in a large community-based sample and to evaluate the association of allograft processing and the risk of deep infection. Cohort study; Level of evidence, 3. The authors conducted a cohort study using the Kaiser Permanente Anterior Cruciate Ligament Reconstruction Registry. Primary isolated unilateral ACLR with allograft were identified from February 1, 2005 to September 30, 2015. Ninety-day postoperative deep infections were identified via an electronic screening algorithm and then validated through chart review. Logistic regression was used to evaluate the likelihood of 90-day postoperative deep infection per allograft processing method: processed (graft treated chemically and/or irradiated) or nonprocessed (graft not irradiated or chemically processed). Of 10,190 allograft cases, 8425 (82.7%) received a processed allograft, and 1765 (17.3%) received a nonprocessed allograft. There were 15 (0.15%) deep infections during the study period: 4 (26.7%) coagulase-negative Staphylococcus, 4 (26.7%) methicillin-sensitive Staphylococcus aureus, 1 (6.7%) Peptostreptococcus micros, and 6 (40.0%) with no growth. There was no difference in the likelihood for 90-day deep infection for processed versus nonprocessed allografts (odds ratio = 1.36, 95% CI = 0.31-6.04). The overall incidence of deep infection after ACLR with allograft tissue was very low (0.15%), suggesting that the methods currently employed by tissue banks to minimize the risk of infection are effective. In this cohort, no difference in the likelihood of infection between processed and nonprocessed allografts could be identified.
Assessment of phylogenetic sensitivity for reconstructing HIV-1 epidemiological relationships.
Beloukas, Apostolos; Magiorkinis, Emmanouil; Magiorkinis, Gkikas; Zavitsanou, Asimina; Karamitros, Timokratis; Hatzakis, Angelos; Paraskevis, Dimitrios
2012-06-01
Phylogenetic analysis has been extensively used as a tool for the reconstruction of epidemiological relations for research or for forensic purposes. It was our objective to assess the sensitivity of different phylogenetic methods and various phylogenetic programs to reconstruct epidemiological links among HIV-1 infected patients, that is, the probability of revealing a true transmission relationship. Multiple datasets (90) were prepared consisting of HIV-1 sequences in protease (PR) and partial reverse transcriptase (RT) sampled from patients with a documented epidemiological relationship (target population), and from unrelated individuals (control population) belonging to the same HIV-1 subtype as the target population. Each dataset varied regarding the number, the geographic origin and the transmission risk groups of the sequences among the control population. Phylogenetic trees were inferred by neighbor-joining (NJ), maximum likelihood heuristics (hML) and Bayesian methods. All clusters of sequences belonging to the target population were correctly reconstructed by NJ and Bayesian methods, receiving high bootstrap and posterior probability (PP) support, respectively. On the other hand, TreePuzzle failed to reconstruct or provide significant support for several clusters; high puzzling step support was associated with the inclusion of control sequences from the same geographic area as the target population. In contrast, all clusters were correctly reconstructed by hML as implemented in PhyML 3.0, receiving high bootstrap support. We report that, under the conditions of our study, hML using PhyML, NJ and Bayesian methods were the most sensitive for the reconstruction of epidemiological links, mostly from sexually infected individuals.
Rakvongthai, Yothin; Ouyang, Jinsong; Guerin, Bastien; Li, Quanzheng; Alpert, Nathaniel M.; El Fakhri, Georges
2013-01-01
Purpose: Our research goal is to develop an algorithm to reconstruct cardiac positron emission tomography (PET) kinetic parametric images directly from sinograms and compare its performance with the conventional indirect approach. Methods: Time activity curves of a NCAT phantom were computed according to a one-tissue compartmental kinetic model with realistic kinetic parameters. The sinograms at each time frame were simulated using the activity distribution for the time frame. The authors reconstructed the parametric images directly from the sinograms by optimizing a cost function, which included the Poisson log-likelihood and a spatial regularization term, using the preconditioned conjugate gradient (PCG) algorithm with the proposed preconditioner. The proposed preconditioner is a diagonal matrix whose diagonal entries are the ratio of the parameter and the sensitivity of the radioactivity associated with the parameter. The authors compared the reconstructed parametric images using the direct approach with those reconstructed using the conventional indirect approach. Results: At the same bias, the direct approach yielded significant relative reduction in standard deviation by 12%–29% and 32%–70% for 50 × 106 and 10 × 106 detected coincidence counts, respectively. Also, the PCG method effectively reached a constant value after only 10 iterations (with numerical convergence achieved after 40–50 iterations), while more than 500 iterations were needed for CG. Conclusions: The authors have developed a novel approach based on the PCG algorithm to directly reconstruct cardiac PET parametric images from sinograms, which yields better estimation of kinetic parameters than the conventional indirect approach, i.e., curve fitting of reconstructed images. The PCG method increases the convergence rate of reconstruction significantly as compared to the conventional CG method. PMID:24089922
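The preconditioner itself, as described, is just a diagonal scaling; a literal sketch:

```python
import numpy as np

def diag_preconditioner(theta, sens, eps=1e-12):
    """Diagonal preconditioner sketch: each entry is the ratio of the current
    parameter value to the sensitivity of the radioactivity with respect to
    that parameter (returned as a vector of diagonal entries)."""
    return theta / np.maximum(sens, eps)

# Usage inside PCG: scale the gradient, d = diag_preconditioner(theta, sens) * grad,
# before forming the conjugate direction.
```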
NASA Astrophysics Data System (ADS)
Lim, Hongki; Fessler, Jeffrey A.; Wilderman, Scott J.; Brooks, Allen F.; Dewaraja, Yuni K.
2018-06-01
While the yield of positrons used in Y-90 PET is independent of tissue media, Y-90 SPECT imaging is complicated by the tissue dependence of bremsstrahlung photon generation. The probability of bremsstrahlung production is proportional to the square of the atomic number of the medium. Hence, the same amount of activity in different tissue regions of the body will produce different numbers of bremsstrahlung photons. Existing reconstruction methods disregard this tissue-dependency, potentially impacting both qualitative and quantitative imaging of heterogeneous regions of the body such as bone with marrow cavities. In this proof-of-concept study, we propose a new maximum-likelihood method that incorporates bremsstrahlung generation probabilities into the system matrix, enabling images of the desired Y-90 distribution to be reconstructed instead of the ‘bremsstrahlung distribution’ that is obtained with existing methods. The tissue-dependent probabilities are generated by Monte Carlo simulation while bone volume fractions for each SPECT voxel are obtained from co-registered CT. First, we demonstrate the tissue dependency in a SPECT/CT imaging experiment with Y-90 in bone equivalent solution and water. Visually, the proposed reconstruction approach better matched the true image and the Y-90 PET image than the standard bremsstrahlung reconstruction approach. An XCAT phantom simulation including bone and marrow regions also demonstrated better agreement with the true image using the proposed reconstruction method. Quantitatively, compared with the standard reconstruction, the new method improved estimation of the liquid bone:water activity concentration ratio by 40% in the SPECT measurement and the cortical bone:marrow activity concentration ratio by 58% in the XCAT simulation.
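Reading the abstract literally, the modification amounts to scaling each voxel's column of the system matrix by its tissue-dependent photon-generation probability; a minimal sketch (array shapes are assumptions):

```python
import numpy as np

def tissue_weighted_system_matrix(A, brem_prob):
    """Fold per-voxel bremsstrahlung generation probabilities into the SPECT
    system matrix, so reconstruction targets the Y-90 activity rather than
    the 'bremsstrahlung distribution'.

    A         : (n_proj_bins, n_vox) geometric/physics system matrix
    brem_prob : (n_vox,) relative photon-generation probability per voxel,
                e.g. from Monte Carlo using CT-derived bone volume fractions
    """
    return A * brem_prob[np.newaxis, :]
```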
NASA Astrophysics Data System (ADS)
Tang, Jiayu; Kayo, Issha; Takada, Masahiro
2011-09-01
We develop a maximum likelihood based method of reconstructing the band powers of the density and velocity power spectra at each wavenumber bin from the measured clustering features of galaxies in redshift space, including marginalization over uncertainties inherent in the small-scale, non-linear redshift distortion, the Fingers-of-God (FoG) effect. The reconstruction can be done assuming that the density and velocity power spectra enter the redshift-space power spectrum through different angular modulations in μ, of the form μ^{2n} (n = 0, 1, 2), and that the model FoG effect is given as a multiplicative function in the redshift-space spectrum. By using N-body simulations and the halo catalogues, we test our method by comparing the reconstructed power spectra with the spectra directly measured from the simulations. For the μ^0 spectrum, or equivalently the density power spectrum P_δδ(k), our method recovers the amplitudes to an accuracy of a few per cent up to k ≃ 0.3 h Mpc^-1 for both dark matter and haloes. For the μ^2 power spectrum, which is equivalent to the density-velocity power spectrum P_δθ(k) in the linear regime, our method can recover, within the statistical errors, the input power spectrum for dark matter up to k ≃ 0.2 h Mpc^-1 and at both redshifts z = 0 and 1, if the adequate FoG model being marginalized over is employed. However, for the halo spectrum that is least affected by the FoG effect, the reconstructed spectrum shows greater amplitudes than the spectrum P_δθ(k) inferred from the simulations over a range of wavenumbers 0.05 ≤ k ≤ 0.3 h Mpc^-1. We argue that the disagreement may be ascribed to a non-linearity effect that arises from the cross-bispectra of density and velocity perturbations. Using the perturbation theory and assuming Einstein gravity as in simulations, we derive the non-linear correction term to the redshift-space spectrum, and find that the leading-order correction term is proportional to μ^2 and increases the μ^2 power spectrum amplitudes more significantly at larger k, at lower redshifts and for more massive haloes. We find that adding the non-linearity correction term to the simulation P_δθ(k) can fairly well reproduce the reconstructed P_δθ(k) for haloes up to k ≃ 0.2 h Mpc^-1.
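The μ^{2n} decomposition described above corresponds to the standard form of the redshift-space power spectrum model (written here as a sketch; the paper marginalizes over the FoG term):

```latex
P^{s}(k,\mu) = \left[ P_{\delta\delta}(k) + 2\mu^{2} P_{\delta\theta}(k)
             + \mu^{4} P_{\theta\theta}(k) \right] G_{\mathrm{FoG}}(k \mu \sigma_{v})
```

where G_FoG is the multiplicative Fingers-of-God damping function and P_δδ, P_δθ, P_θθ are the density, density-velocity and velocity band powers being reconstructed.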
Ringing Artefact Reduction By An Efficient Likelihood Improvement Method
NASA Astrophysics Data System (ADS)
Fuderer, Miha
1989-10-01
In MR imaging, the extent of the acquired spatial frequencies of the object is necessarily finite. The resulting image shows artefacts caused by "truncation" of its Fourier components. These are known as Gibbs artefacts or ringing artefacts. These artefacts are particularly visible when the time-saving reduced acquisition method is used, say, when scanning only the lowest 70% of the 256 data lines. Filtering the data results in loss of resolution. A method is described that estimates the high-frequency data from the low-frequency data lines, with the likelihood of the image as criterion. It is a computationally very efficient method, since it requires practically only two extra Fourier transforms in addition to the normal reconstruction. The results of this method on MR images of human subjects are promising. Evaluations on a 70% acquisition image show about a 20% decrease of the error energy after processing. "Error energy" is defined as the total power of the difference to a 256-data-lines reference image. The elimination of ringing artefacts then appears almost complete.
Challenges in Species Tree Estimation Under the Multispecies Coalescent Model
Xu, Bo; Yang, Ziheng
2016-01-01
The multispecies coalescent (MSC) model has emerged as a powerful framework for inferring species phylogenies while accounting for ancestral polymorphism and gene tree-species tree conflict. A number of methods have been developed in the past few years to estimate the species tree under the MSC. The full likelihood methods (including maximum likelihood and Bayesian inference) average over the unknown gene trees and accommodate their uncertainties properly but involve intensive computation. The approximate or summary coalescent methods are computationally fast and are applicable to genomic datasets with thousands of loci, but do not make an efficient use of information in the multilocus data. Most of them take the two-step approach of reconstructing the gene trees for multiple loci by phylogenetic methods and then treating the estimated gene trees as observed data, without accounting for their uncertainties appropriately. In this article we review the statistical nature of the species tree estimation problem under the MSC, and explore the conceptual issues and challenges of species tree estimation by focusing mainly on simple cases of three or four closely related species. We use mathematical analysis and computer simulation to demonstrate that large differences in statistical performance may exist between the two classes of methods. We illustrate that several counterintuitive behaviors may occur with the summary methods but they are due to inefficient use of information in the data by summary methods and vanish when the data are analyzed using full-likelihood methods. These include (i) unidentifiability of parameters in the model, (ii) inconsistency in the so-called anomaly zone, (iii) singularity on the likelihood surface, and (iv) deterioration of performance upon addition of more data. We discuss the challenges and strategies of species tree inference for distantly related species when the molecular clock is violated, and highlight the need for improving the computational efficiency and model realism of the likelihood methods as well as the statistical efficiency of the summary methods. PMID:27927902
Inverting ion images without Abel inversion: maximum entropy reconstruction of velocity maps.
Dick, Bernhard
2014-01-14
A new method for the reconstruction of velocity maps from ion images is presented, which is based on the maximum entropy concept. In contrast to other methods used for Abel inversion the new method never applies an inversion or smoothing to the data. Instead, it iteratively finds the map which is the most likely cause for the observed data, using the correct likelihood criterion for data sampled from a Poissonian distribution. The entropy criterion minimizes the information content in this map, which hence contains no information for which there is no evidence in the data. Two implementations are proposed, and their performance is demonstrated with simulated and experimental data: Maximum Entropy Velocity Image Reconstruction (MEVIR) obtains a two-dimensional slice through the velocity distribution and can be compared directly to Abel inversion. Maximum Entropy Velocity Legendre Reconstruction (MEVELER) finds one-dimensional distribution functions Q_l(v) in an expansion of the velocity distribution in Legendre polynomials P_l(cos θ) for the angular dependence. Both MEVIR and MEVELER can be used for the analysis of ion images with intensities as low as 0.01 counts per pixel, with MEVELER performing significantly better than MEVIR for images with low intensity. Both methods perform better than pBASEX, in particular for images with less than one average count per pixel.
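A generic Poisson maximum-entropy objective captures the two ingredients named above (the correct Poisson likelihood and the entropy criterion); the sketch below is not MEVIR or MEVELER itself, just the objective family they optimize:

```python
import numpy as np

def mem_objective(m, A, d, lam, prior=None, eps=1e-12):
    """Generic Poisson maximum-entropy objective (to be maximized).

    m : candidate velocity map (flattened)
    A : (n_pix, n_map) projection matrix mapping the map to the ion image
    d : observed ion-image counts
    """
    pred = np.maximum(A @ m, eps)
    loglik = np.sum(d * np.log(pred) - pred)        # Poisson log-likelihood (+ const)
    M = np.ones_like(m) if prior is None else prior
    entropy = -np.sum(m * np.log(np.maximum(m, eps) / M))
    return loglik + lam * entropy
```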
Expectation maximization for hard X-ray count modulation profiles
NASA Astrophysics Data System (ADS)
Benvenuto, F.; Schwartz, R.; Piana, M.; Massone, A. M.
2013-07-01
Context. This paper is concerned with the image reconstruction problem when the measured data are solar hard X-ray modulation profiles obtained from the Reuven Ramaty High Energy Solar Spectroscopic Imager (RHESSI) instrument. Aims: Our goal is to demonstrate that a statistical iterative method classically applied to the image deconvolution problem is very effective when utilized to analyze count modulation profiles in solar hard X-ray imaging based on rotating modulation collimators. Methods: The algorithm described in this paper solves the maximum likelihood problem iteratively and encodes a positivity constraint into the iterative optimization scheme. The result is therefore a classical expectation maximization method, this time applied not to an image deconvolution problem but to image reconstruction from count modulation profiles. The technical reason that makes our implementation particularly effective in this application is the use of a very reliable stopping rule which is able to regularize the solution while providing, at the same time, a very satisfactory Cash statistic (C-statistic). Results: The method is applied both to reproduce synthetic flaring configurations and to reconstruct images from experimental data corresponding to three real events. In this second case, the performance of expectation maximization, when compared to Pixon image reconstruction, shows comparable accuracy and a notably reduced computational burden; when compared to CLEAN, it shows better fidelity with respect to the measurements with comparable computational effectiveness. Conclusions: If optimally stopped, expectation maximization represents a very reliable method for image reconstruction in the RHESSI context when count modulation profiles are used as input data.
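The stopping rule mentioned above monitors the Cash statistic; its common goodness-of-fit form is easy to compute per iteration (the paper's exact stopping criterion is not reproduced here):

```python
import numpy as np

def c_statistic(d, pred, eps=1e-12):
    """Cash (C-) statistic in its common goodness-of-fit form,
    C = 2 * sum(pred - d + d*log(d/pred)), with the convention that the
    d*log(d/pred) term is zero for empty bins."""
    pred = np.maximum(pred, eps)
    term = np.where(d > 0, d * np.log(np.maximum(d, eps) / pred), 0.0)
    return 2.0 * np.sum(pred - d + term)
```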
NASA Technical Reports Server (NTRS)
Schmahl, Edward J.; Kundu, Mukul R.
1998-01-01
We have continued our previous efforts in studies of Fourier imaging methods applied to hard X-ray flares. We have performed physical and theoretical analysis of rotating collimator grids submitted to GSFC (Goddard Space Flight Center) for the High Energy Solar Spectroscopic Imager (HESSI). We have produced simulation algorithms which are currently being used to test imaging software and hardware for HESSI. We have developed Maximum-Entropy, Maximum-Likelihood, and "CLEAN" methods for reconstructing HESSI images from count-rate profiles. This work is expected to continue through the launch of HESSI in July 2000. Section 1 shows a poster presentation, "Image Reconstruction from HESSI Photon Lists", from the Solar Physics Division Meeting, June 1998; Section 2 shows the text and viewgraphs prepared for "Imaging Simulations" at HESSI's Preliminary Design Review on July 30, 1998.
Evaluation of Bias and Variance in Low-count OSEM List Mode Reconstruction
Jian, Y; Planeta, B; Carson, R E
2016-01-01
Statistical algorithms have been widely used in PET image reconstruction. The maximum likelihood expectation maximization (MLEM) reconstruction has been shown to produce bias in applications where images are reconstructed from a relatively small number of counts. In this study, image bias and variability in low-count OSEM reconstruction are investigated on images reconstructed with the MOLAR (motion-compensation OSEM list-mode algorithm for resolution-recovery reconstruction) platform. A human brain ([11C]AFM) and a NEMA phantom are used in the simulation and real experiments respectively, for the HRRT and Biograph mCT. Image reconstructions were repeated with different combinations of subsets and iterations. Regions of interest (ROIs) were defined on low-activity and high-activity regions to evaluate the bias and noise at matched effective iteration numbers (iterations × subsets). Minimal negative biases and no positive biases were found at moderate count levels, and less than 5% negative bias was found using extremely low levels of counts (0.2 M NEC). At any given count level, other factors, such as subset numbers and frame-based scatter correction, may introduce small biases (1–5%) in the reconstructed images. The observed bias was substantially lower than that reported in the literature, perhaps due to the use of the point spread function and/or other implementation methods in MOLAR. PMID:25479254
Viscoelastic property identification from waveform reconstruction
NASA Astrophysics Data System (ADS)
Leymarie, N.; Aristégui, C.; Audoin, B.; Baste, S.
2002-05-01
An inverse method is proposed for the determination of the viscoelastic properties of material plates from the plane-wave transmitted acoustic field. The innovations lie in a two-step inversion scheme based on the well-known maximum-likelihood principle with an analytic signal formulation. In addition, by establishing analytical formulations of the plate transmission coefficient, we implement an efficient and only slightly noise-sensitive process suited to both very thin plates and strongly dispersive media.
NASA Astrophysics Data System (ADS)
Craciunescu, Teddy; Peluso, Emmanuele; Murari, Andrea; Gelfusa, Michela; JET Contributors
2018-05-01
The total emission of radiation is a crucial quantity to calculate the power balances and to understand the physics of any Tokamak. Bolometric systems are the main tool to measure this important physical quantity through quite sophisticated tomographic inversion methods. On the Joint European Torus, the coverage of the bolometric diagnostic, due to the availability of basically only two projection angles, is quite limited, rendering the inversion a very ill-posed mathematical problem. A new approach, based on the maximum likelihood, has therefore been developed and implemented to alleviate one of the major weaknesses of traditional tomographic techniques: the difficulty to determine routinely the confidence intervals in the results. The method has been validated by numerical simulations with phantoms to assess the quality of the results and to optimise the configuration of the parameters for the main types of emissivity encountered experimentally. The typical levels of statistical errors, which may significantly influence the quality of the reconstructions, have been identified. The systematic tests with phantoms indicate that the errors in the reconstructions are quite limited and their effect on the total radiated power remains well below 10%. A comparison with other approaches to the inversion and to the regularization has also been performed.
Time-of-flight PET image reconstruction using origin ensembles.
Wülker, Christian; Sitek, Arkadiusz; Prevrhal, Sven
2015-03-07
The origin ensemble (OE) algorithm is a novel statistical method for minimum-mean-square-error (MMSE) reconstruction of emission tomography data. This method allows one to perform reconstruction entirely in the image domain, i.e. without the use of forward and backprojection operations. We have investigated the OE algorithm in the context of list-mode (LM) time-of-flight (TOF) PET reconstruction. In this paper, we provide a general introduction to MMSE reconstruction, and a statistically rigorous derivation of the OE algorithm. We show how to efficiently incorporate TOF information into the reconstruction process, and how to correct for random coincidences and scattered events. To examine the feasibility of LM-TOF MMSE reconstruction with the OE algorithm, we applied MMSE-OE and standard maximum-likelihood expectation-maximization (ML-EM) reconstruction to LM-TOF phantom data with a count number typically registered in clinical PET examinations. We analyzed the convergence behavior of the OE algorithm, and compared reconstruction time and image quality to that of the EM algorithm. In summary, during the reconstruction process, MMSE-OE contrast recovery (CRV) remained approximately the same, while background variability (BV) gradually decreased with an increasing number of OE iterations. The final MMSE-OE images exhibited lower BV and a slightly lower CRV than the corresponding ML-EM images. The reconstruction time of the OE algorithm was approximately 1.3 times longer. At the same time, the OE algorithm can inherently provide a comprehensive statistical characterization of the acquired data. This characterization can be utilized for further data processing, e.g. in kinetic analysis and image registration, making the OE algorithm a promising approach in a variety of applications.
MAP Reconstruction for Fourier Rebinned TOF-PET Data
Bai, Bing; Lin, Yanguang; Zhu, Wentao; Ren, Ran; Li, Quanzheng; Dahlbom, Magnus; DiFilippo, Frank; Leahy, Richard M.
2014-01-01
Time-of-flight (TOF) information improves signal to noise ratio in Positron Emission Tomography (PET). Computation cost in processing TOF-PET sinograms is substantially higher than for nonTOF data because the data in each line of response is divided among multiple time of flight bins. This additional cost has motivated research into methods for rebinning TOF data into lower dimensional representations that exploit redundancies inherent in TOF data. We have previously developed approximate Fourier methods that rebin TOF data into either 3D nonTOF or 2D nonTOF formats. We refer to these methods respectively as FORET-3D and FORET-2D. Here we describe maximum a posteriori (MAP) estimators for use with FORET rebinned data. We first derive approximate expressions for the variance of the rebinned data. We then use these results to rescale the data so that the variance and mean are approximately equal allowing us to use the Poisson likelihood model for MAP reconstruction. MAP reconstruction from these rebinned data uses a system matrix in which the detector response model accounts for the effects of rebinning. Using these methods we compare performance of FORET-2D and 3D with TOF and nonTOF reconstructions using phantom and clinical data. Our phantom results show a small loss in contrast recovery at matched noise levels using FORET compared to reconstruction from the original TOF data. Clinical examples show FORET images that are qualitatively similar to those obtained from the original TOF-PET data but a small increase in variance at matched resolution. Reconstruction time is reduced by a factor of 5 and 30 using FORET3D+MAP and FORET2D+MAP respectively compared to 3D TOF MAP, which makes these methods attractive for clinical applications. PMID:24504374
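The variance-matching rescaling has a one-line derivation: if E[y] = m and Var[y] = v, then αy with α = m/v has mean αm and variance α²v = αm, so mean and variance agree. A sketch, with y itself standing in for m (an assumption):

```python
import numpy as np

def rescale_to_poisson(y, var_y, eps=1e-9):
    """Scale rebinned data so variance ~ mean, enabling a Poisson likelihood.

    y     : rebinned data (used here as the estimate of its own mean)
    var_y : per-bin variance estimate of the rebinned data
    Returns the rescaled data and the per-bin scale factors.
    """
    alpha = np.maximum(y, eps) / np.maximum(var_y, eps)
    return alpha * y, alpha
```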
Elbakri, Idris A; Fessler, Jeffrey A
2003-08-07
This paper describes a statistical image reconstruction method for x-ray CT that is based on a physical model that accounts for the polyenergetic x-ray source spectrum and the measurement nonlinearities caused by energy-dependent attenuation. Unlike our earlier work, the proposed algorithm does not require pre-segmentation of the object into the various tissue classes (e.g., bone and soft tissue) and allows mixed pixels. The attenuation coefficient of each voxel is modelled as the product of its unknown density and a weighted sum of energy-dependent mass attenuation coefficients. We formulate a penalized-likelihood function for this polyenergetic model and develop an iterative algorithm for estimating the unknown density of each voxel. Applying this method to simulated x-ray CT measurements of objects containing both bone and soft tissue yields images with significantly reduced beam hardening artefacts relative to conventional beam hardening correction methods. We also apply the method to real data acquired from a phantom containing various concentrations of potassium phosphate solution. The algorithm reconstructs an image with accurate density values for the different concentrations, demonstrating its potential for quantitative CT applications.
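The polyenergetic forward model described above can be written compactly. A sketch under simplifying assumptions (discrete monoenergetic bins, no scatter or detector effects; all names hypothetical):

```python
import numpy as np

def expected_counts(A, density, w, mass_atten, spectrum):
    """Polyenergetic forward model for x-ray CT (sketch).

    A          : (n_rays, n_voxels) system matrix of intersection lengths
    density    : (n_voxels,) unknown densities to be estimated
    w          : (n_voxels, n_tissues) tissue weights per voxel (mixed pixels)
    mass_atten : (n_tissues, n_energies) mass attenuation coefficients
    spectrum   : (n_energies,) source intensity per energy bin
    """
    # Attenuation of each voxel = density times a weighted sum of
    # energy-dependent mass attenuation coefficients:
    mu = density[:, None] * (w @ mass_atten)         # (n_voxels, n_energies)
    line_integrals = A @ mu                          # (n_rays, n_energies)
    return (spectrum[None, :] * np.exp(-line_integrals)).sum(axis=1)
```

The measurement nonlinearity (beam hardening) is visible in the energy sum: the effective attenuation depends on how much material the ray has already traversed, which is what the penalized-likelihood iterations account for.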
MR-guided dynamic PET reconstruction with the kernel method and spectral temporal basis functions
NASA Astrophysics Data System (ADS)
Novosad, Philip; Reader, Andrew J.
2016-06-01
Recent advances in dynamic positron emission tomography (PET) reconstruction have demonstrated that it is possible to achieve markedly improved end-point kinetic parameter maps by incorporating a temporal model of the radiotracer directly into the reconstruction algorithm. In this work we have developed a highly constrained, fully dynamic PET reconstruction algorithm incorporating both spectral analysis temporal basis functions and spatial basis functions derived from the kernel method applied to a co-registered T1-weighted magnetic resonance (MR) image. The dynamic PET image is modelled as a linear combination of spatial and temporal basis functions, and a maximum likelihood estimate for the coefficients can be found using the expectation-maximization (EM) algorithm. Following reconstruction, kinetic fitting using any temporal model of interest can be applied. Based on a BrainWeb T1-weighted MR phantom, we performed a realistic dynamic [18F]FDG simulation study with two noise levels, and investigated the quantitative performance of the proposed reconstruction algorithm, comparing it with reconstructions incorporating either spectral analysis temporal basis functions alone or kernel spatial basis functions alone, as well as with conventional frame-independent reconstruction. Compared to the other reconstruction algorithms, the proposed algorithm achieved superior performance, offering a decrease in spatially averaged pixel-level root-mean-square-error on post-reconstruction kinetic parametric maps in the grey/white matter, as well as in the tumours when they were present on the co-registered MR image. When the tumours were not visible in the MR image, reconstruction with the proposed algorithm performed similarly to reconstruction with spectral temporal basis functions and was superior to both conventional frame-independent reconstruction and frame-independent reconstruction with kernel spatial basis functions. Furthermore, we demonstrate that a joint spectral/kernel model can also be used for effective post-reconstruction denoising, through the use of an EM-like image-space algorithm. Finally, we applied the proposed algorithm to reconstruction of real high-resolution dynamic [11C]SCH23390 data, showing promising results.
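A minimal image-domain sketch of the factorized model just described, assuming a dense Gaussian kernel built from MR-derived features and an EM-like (KL-divergence) multiplicative update with the kernel and temporal basis held fixed; the published algorithm works in projection space and restricts the kernel to nearest neighbours, so this is illustrative only:

```python
import numpy as np

def kernel_matrix(features, sigma=1.0):
    """Row-normalized Gaussian kernel from MR-derived voxel features.
    Dense O(N^2) sketch; practical versions keep only k nearest neighbours."""
    d2 = ((features[:, None, :] - features[None, :, :]) ** 2).sum(-1)
    K = np.exp(-d2 / (2 * sigma**2))
    return K / K.sum(axis=1, keepdims=True)

def em_update(C, Y, K, B, eps=1e-12):
    """One EM-like multiplicative update for the coefficients C in the
    factorized dynamic model Y ~ Poisson(K @ C @ B.T), with K (spatial
    kernel) and B (temporal basis) held fixed."""
    model = K @ C @ B.T + eps
    num = K.T @ (Y / model) @ B
    den = K.T @ np.ones_like(Y) @ B + eps
    return C * num / den
```

The dynamic image is then `K @ C @ B.T`, a linear combination of spatial and temporal basis functions as stated in the abstract.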
Yu, Shih-Heng; Chang, Dong-Shang
2014-01-01
This study investigates the risk factors in railway reconstruction projects through a complete literature review of construction project risks and by scrutinizing the experiences and challenges of railway reconstructions in Taiwan. Based on the identified risk factors, an assessment framework built on the fuzzy multicriteria decision-making (fuzzy MCDM) approach is proposed to help construction agencies build awareness of the critical risk factors in the execution of a railway reconstruction project and to measure the impact and occurrence likelihood of these risk factors. Subjectivity, uncertainty and vagueness within the assessment process are dealt with using linguistic variables parameterized by trapezoid fuzzy numbers. By multiplying the degree of impact and the occurrence likelihood of each risk factor, estimated severity values for each identified risk factor are determined. Based on the assessment results, construction agencies are informed of which risks should be noted and what they should do to avoid them. That is, the framework enables construction agencies of railway reconstruction to plan appropriate risk responses/strategies to increase the chances of project success and effectiveness. PMID:24772014
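The severity computation reduces to simple fuzzy arithmetic. A small sketch assuming the common endpoint-wise approximation for products of positive trapezoidal fuzzy numbers and a simple average defuzzification; the linguistic scale values below are invented for illustration:

```python
def fuzzy_mul(x, y):
    """Approximate product of two positive trapezoidal fuzzy numbers
    (a, b, c, d), using the common endpoint-wise approximation."""
    return tuple(xi * yi for xi, yi in zip(x, y))

def defuzzify(t):
    """Simple average (centroid-style) defuzzification of (a, b, c, d)."""
    return sum(t) / 4.0

# Hypothetical linguistic scales for impact and occurrence likelihood:
impact = (0.5, 0.7, 0.8, 1.0)       # "high impact"
likelihood = (0.3, 0.4, 0.6, 0.7)   # "moderate likelihood"

severity = defuzzify(fuzzy_mul(impact, likelihood))
print(f"estimated severity: {severity:.3f}")
```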
The phylogenetic relationships of known mosquito (Diptera: Culicidae) mitogenomes.
Chu, Hongliang; Li, Chunxiao; Guo, Xiaoxia; Zhang, Hengduan; Luo, Peng; Wu, Zhonghua; Wang, Gang; Zhao, Tongyan
2018-01-01
The known mosquito mitogenomes, comprising a total of 34 species belonging to five genera, were collected from GenBank, and the practicality and effectiveness of variation in the complete mitochondrial DNA genome and portions of the mitochondrial COI gene were assessed for reconstructing the phylogeny of mosquitoes. Phylogenetic trees were reconstructed on the basis of parsimony, maximum likelihood, and Bayesian (BI) methods. It is concluded that: (1) both mitogenomes and the COI gene support the monophyly of the following taxa: subgenus Nyssorhynchus, subgenus Cellia, the Anopheles albitarsis complex, the Anopheles gambiae complex, and the Anopheles punctulatus group; (2) genus Aedes is not monophyletic relative to Ochlerotatus vigilax; (3) the mitogenome results indicate a close relationship between Anopheles epiroticus and the Anopheles gambiae complex, and between the Anopheles dirus complex and the Anopheles punctulatus group; (4) the Bayesian posterior probability (BPP) within the phylogenetic tree reconstructed from mitogenomes is higher than in the COI tree. The results show that phylogenetic relationships reconstructed using the mitogenomes were more similar to those based on morphological data.
Maffei, E; Martini, C; Rossi, A; Mollet, N; Lario, C; Castiglione Morelli, M; Clemente, A; Gentile, G; Arcadi, T; Seitun, S; Catalano, O; Aldrovandi, A; Cademartiri, F
2012-08-01
The authors evaluated the diagnostic accuracy of second-generation dual-source computed tomography (DSCT) coronary angiography (CTCA) with iterative reconstruction for detecting obstructive coronary artery disease (CAD). Between June 2010 and February 2011, we enrolled 160 patients (85 men; mean age 61.2±11.6 years) with suspected CAD. All patients underwent CTCA and conventional coronary angiography (CCA). For the CTCA scan (Definition Flash, Siemens), we used prospective tube current modulation and 70-100 ml of iodinated contrast material (Iomeprol 400 mgI/ml, Bracco). Data sets were reconstructed with an iterative reconstruction algorithm (IRIS, Siemens). CTCA and CCA reports were used to evaluate accuracy, using thresholds for significant stenosis of ≥50% and ≥70%, respectively. No patient was excluded from the analysis. Heart rate was 64.3±11.9 bpm and radiation dose was 7.2±2.1 mSv. Disease prevalence was 30% (48/160). Sensitivity, specificity and positive and negative predictive values of CTCA in detecting significant stenosis were 90.1%, 93.3%, 53.2% and 99.1% (per segment), 97.5%, 91.2%, 61.4% and 99.6% (per vessel) and 100%, 83%, 71.6% and 100% (per patient), respectively. Positive and negative likelihood ratios at the per-patient level were 5.89 and 0.0, respectively. CTCA with second-generation DSCT in the real clinical world shows diagnostic performance comparable with previously reported validation studies. The excellent negative predictive value and likelihood ratio make CTCA a first-line noninvasive method for diagnosing obstructive CAD.
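The reported accuracy figures follow from standard confusion-matrix arithmetic. A small sketch; the example counts are hypothetical, chosen only to roughly match the per-patient prevalence reported above:

```python
def diagnostic_metrics(tp, fp, tn, fn):
    """Per-level accuracy metrics as used in CTCA-vs-CCA comparisons."""
    sens = tp / (tp + fn)
    spec = tn / (tn + fp)
    ppv = tp / (tp + fp)
    npv = tn / (tn + fn)
    lr_pos = sens / (1 - spec) if spec < 1 else float("inf")
    lr_neg = (1 - sens) / spec  # equals 0.0 when sensitivity is 100%
    return dict(sensitivity=sens, specificity=spec, ppv=ppv, npv=npv,
                lr_pos=lr_pos, lr_neg=lr_neg)

# Hypothetical per-patient counts (48 diseased of 160, specificity ~83%):
print(diagnostic_metrics(tp=48, fp=19, tn=93, fn=0))
```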
Yamaguchi, Shotaro; Wagatsuma, Kei; Miwa, Kenta; Ishii, Kenji; Inoue, Kazumasa; Fukushi, Masahiro
2018-03-01
The Bayesian penalized-likelihood reconstruction algorithm (BPL), Q.Clear, uses relative difference penalty as a regularization function to control image noise and the degree of edge-preservation in PET images. The present study aimed to determine the effects of suppression on edge artifacts due to point-spread-function (PSF) correction using a Q.Clear. Spheres of a cylindrical phantom contained a background of 5.3 kBq/mL of [18F]FDG and sphere-to-background ratios (SBR) of 16, 8, 4 and 2. The background also contained water and spheres containing 21.2 kBq/mL of [18F]FDG as non-background. All data were acquired using a Discovery PET/CT 710 and were reconstructed using three-dimensional ordered-subset expectation maximization with time-of-flight (TOF) and PSF correction (3D-OSEM), and Q.Clear with TOF (BPL). We investigated β-values of 200-800 using BPL. The PET images were analyzed using visual assessment and profile curves, edge variability and contrast recovery coefficients were measured. The 38- and 27-mm spheres were surrounded by higher radioactivity concentration when reconstructed with 3D-OSEM as opposed to BPL, which suppressed edge artifacts. Images of 10-mm spheres had sharper overshoot at high SBR and non-background when reconstructed with BPL. Although contrast recovery coefficients of 10-mm spheres in BPL decreased as a function of increasing β, higher penalty parameter decreased the overshoot. BPL is a feasible method for the suppression of edge artifacts of PSF correction, although this depends on SBR and sphere size. Overshoot associated with BPL caused overestimation in small spheres at high SBR. Higher penalty parameter in BPL can suppress overshoot more effectively. Copyright © 2018 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.
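Q.Clear's regularizer is commonly described as a relative difference penalty. A 1D sketch of that functional form, assuming the usual formulation with an edge-preservation parameter gamma; the function name and neighbourhood are hypothetical:

```python
import numpy as np

def relative_difference_penalty(x, beta, gamma=2.0):
    """Relative difference penalty over 1D nearest neighbours (sketch).

    R(x) = beta * sum_j sum_{k in N(j)} (x_j - x_k)^2 /
           (x_j + x_k + gamma * |x_j - x_k|)

    gamma controls the degree of edge preservation; beta sets the
    overall regularization strength.
    """
    d = np.diff(x)
    s = x[1:] + x[:-1]
    return beta * np.sum(d**2 / (s + gamma * np.abs(d) + 1e-12))
```

Sweeping beta, as the study does over 200-800, trades contrast recovery in small spheres against suppression of the PSF-correction overshoot.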
Deng, Wenping; Zhang, Kui; Liu, Sanzhen; Zhao, Patrick; Xu, Shizhong; Wei, Hairong
2018-04-30
Joint reconstruction of multiple gene regulatory networks (GRNs) using gene expression data from multiple tissues/conditions is very important for understanding common and tissue/condition-specific regulation. However, there are currently no computational models and methods available for directly constructing such multiple GRNs that not only share some common hub genes but also possess tissue/condition-specific regulatory edges. In this paper, we propose a new graphical Gaussian model for joint reconstruction of multiple gene regulatory networks (JRmGRN), which highlights hub genes, using gene expression data from several tissues/conditions. Under the framework of the Gaussian graphical model, the JRmGRN method constructs the GRNs by maximizing a penalized log-likelihood function. We formulated this as a convex optimization problem and solved it with an alternating direction method of multipliers (ADMM) algorithm. The performance of JRmGRN was first evaluated with synthetic data, and the results showed that JRmGRN outperformed several other methods for reconstruction of GRNs. We also applied our method to real Arabidopsis thaliana RNA-seq data from two light regime conditions in comparison with other methods; both common hub genes and some condition-specific hub genes were identified with higher accuracy and precision. JRmGRN is available as an R program from: https://github.com/wenpingd. hairong@mtu.edu. Proof of theorem, derivation of algorithm and supplementary data are available at Bioinformatics online.
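To make the estimation target concrete, here is a sketch of a penalized log-likelihood of the kind JRmGRN maximizes, assuming a generic sparsity-plus-fusion penalty; the paper's actual penalty additionally decomposes each precision matrix to highlight hub genes, and the problem is solved with ADMM rather than evaluated directly:

```python
import numpy as np

def joint_ggm_objective(Thetas, Ss, ns, lam1, lam2):
    """Penalized log-likelihood for jointly estimating several Gaussian
    graphical models (objective only; a solver such as ADMM maximizes it).

    Thetas : list of precision matrices, one per tissue/condition
    Ss     : list of sample covariance matrices
    ns     : list of sample sizes
    """
    loglik = 0.0
    for Theta, S, n in zip(Thetas, Ss, ns):
        sign, logdet = np.linalg.slogdet(Theta)
        assert sign > 0, "precision matrix must be positive definite"
        loglik += n * (logdet - np.trace(S @ Theta))
    sparsity = lam1 * sum(np.abs(T).sum() for T in Thetas)
    # Fused-type penalty encouraging shared structure across conditions:
    fusion = lam2 * sum(np.abs(Thetas[c] - Thetas[0]).sum()
                        for c in range(1, len(Thetas)))
    return loglik - sparsity - fusion
```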
Influence of Iterative Reconstruction Algorithms on PET Image Resolution
NASA Astrophysics Data System (ADS)
Karpetas, G. E.; Michail, C. M.; Fountos, G. P.; Valais, I. G.; Nikolopoulos, D.; Kandarakis, I. S.; Panayiotakis, G. S.
2015-09-01
The aim of the present study was to assess the image quality of PET scanners using a thin-layer chromatography (TLC) plane source. The source was simulated using a previously validated Monte Carlo model, developed with the GATE MC package, and reconstructed images were obtained with the STIR software for tomographic image reconstruction. The simulated PET scanner was the GE DiscoveryST. A plane source, consisting of a TLC plate, was simulated as a layer of silica gel on an aluminum (Al) foil substrate immersed in an 18F-FDG bath solution (1 MBq). Image quality was assessed in terms of the modulation transfer function (MTF). MTF curves were estimated from transverse reconstructed images of the plane source. Images were reconstructed by four algorithms: maximum likelihood estimation (MLE) OSMAPOSL, the ordered-subsets separable paraboloidal surrogate (OSSPS), the median root prior (MRP), and OSMAPOSL with a quadratic prior. OSMAPOSL reconstruction was assessed using fixed subsets and various iterations, as well as various beta (hyper)parameter values. MTF values were found to increase with increasing iterations. MTF also improves with lower beta values. The simulated PET evaluation method, based on the TLC plane source, can be useful for resolution assessment of PET scanners.
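MTF estimation from a thin plane source reduces to Fourier analysis of a line-spread function. A minimal sketch, assuming a 1D profile taken perpendicular to the source in the reconstructed image; names are hypothetical:

```python
import numpy as np

def mtf_from_lsf(profile, pixel_mm):
    """Estimate the MTF from a line-spread function (LSF) taken as a
    1D profile across the reconstructed image of a thin plane source."""
    lsf = profile - profile.min()          # remove the background pedestal
    lsf /= lsf.sum()                       # normalize area to 1
    mtf = np.abs(np.fft.rfft(lsf))
    mtf /= mtf[0]                          # MTF(0) = 1 by convention
    freqs = np.fft.rfftfreq(lsf.size, d=pixel_mm)  # cycles/mm
    return freqs, mtf
```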
WE-G-18A-06: Sinogram Restoration in Helical Cone-Beam CT
DOE Office of Scientific and Technical Information (OSTI.GOV)
Little, K; La Riviere, P
2014-06-15
Purpose: To extend CT sinogram restoration, which has been shown in 2D to reduce noise and to correct for geometric effects and other degradations at a low computational cost, from 2D to a 3D helical cone-beam geometry. Methods: A method for calculating sinogram degradation coefficients for a helical cone-beam geometry was proposed. These values were used to perform penalized-likelihood sinogram restoration on simulated data that were generated from the FORBILD thorax phantom. Sinogram restorations were performed using both a quadratic penalty and the edge-preserving Huber penalty. After sinogram restoration, Fourier-based analytical methods were used to obtain reconstructions. Resolution-variance trade-offs were investigated for several locations within the reconstructions for the purpose of comparing sinogram restoration to no restoration. In order to compare potential differences, reconstructions were performed using different groups of neighbors in the penalty, two analytical reconstruction methods (Katsevich and single-slice rebinning), and differing helical pitches. Results: The resolution-variance properties of reconstructions restored using sinogram restoration with a Huber penalty outperformed those of reconstructions with no restoration. However, the use of a quadratic sinogram restoration penalty did not lead to an improvement over performing no restoration at the outer regions of the phantom. Application of the Huber penalty to neighbors both within a view and across views did not perform as well as only applying the penalty to neighbors within a view. General improvements in resolution-variance properties using sinogram restoration with the Huber penalty were not dependent on the reconstruction method used or the magnitude of the helical pitch. Conclusion: Sinogram restoration for noise and degradation effects for helical cone-beam CT is feasible and should be able to be applied to clinical data. When applied with the edge-preserving Huber penalty, sinogram restoration leads to an improvement in resolution-variance trade-offs.
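The Huber penalty used here is quadratic near zero and linear in the tails. A sketch of its application to within-view sinogram differences (the neighbourhood variant the study found most effective); names are hypothetical:

```python
import numpy as np

def huber(t, delta):
    """Edge-preserving Huber penalty: quadratic for small differences
    (smooths noise), linear for large ones (preserves edges)."""
    a = np.abs(t)
    return np.where(a <= delta, 0.5 * t**2, delta * a - 0.5 * delta**2)

def huber_roughness(sinogram, delta):
    """Sum of Huber penalties over neighbours within a view (axis 1)."""
    return huber(np.diff(sinogram, axis=1), delta).sum()
```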
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang, A; Stayman, J; Otake, Y
Purpose: To address the challenges of image quality, radiation dose, and reconstruction speed in intraoperative cone-beam CT (CBCT) for neurosurgery by combining model-based image reconstruction (MBIR) with accelerated algorithmic and computational methods. Methods: Preclinical studies involved a mobile C-arm for CBCT imaging of two anthropomorphic head phantoms that included simulated imaging targets (ventricles, soft-tissue structures/bleeds) and neurosurgical procedures (deep brain stimulation (DBS) electrode insertion) for assessment of image quality. The penalized likelihood (PL) framework was used for MBIR, incorporating a statistical model with image regularization via an edge-preserving penalty. To accelerate PL reconstruction, the ordered-subset, separable quadratic surrogates (OS-SQS) algorithm was modified to incorporate Nesterov's method and implemented on a multi-GPU system. A fair comparison of image quality between PL and conventional filtered backprojection (FBP) was performed by selecting reconstruction parameters that provided matched low-contrast spatial resolution. Results: CBCT images of the head phantoms demonstrated that PL reconstruction improved image quality (∼28% higher CNR) even at half the radiation dose (3.3 mGy) compared to FBP. A combination of Nesterov's method and fast projectors yielded a PL reconstruction run-time of 251 sec (cf., 5729 sec for OS-SQS, 13 sec for FBP). Insertion of a DBS electrode resulted in severe metal artifact streaks in FBP reconstructions, whereas PL was intrinsically robust against metal artifact. The combination of noise and artifact was reduced from 32.2 HU in FBP to 9.5 HU in PL, thereby providing better assessment of device placement and potential complications. Conclusion: The methods can be applied to intraoperative CBCT for guidance and verification of neurosurgical procedures (DBS electrode insertion, biopsy, tumor resection) and detection of complications (intracranial hemorrhage). Significant improvement in image quality, dose reduction, and reconstruction time of ∼4 min will enable practical deployment of low-dose C-arm CBCT within the operating room. AAPM Research Seed Funding (2013-2014); NIH Fellowship F32EB017571; Siemens Healthcare (XP Division)
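The acceleration combines ordered subsets with Nesterov momentum. A generic sketch of that pattern, not the paper's exact OS-SQS implementation (which replaces the plain gradient step with a separable quadratic surrogate and runs on multiple GPUs); all names are hypothetical:

```python
import numpy as np

def nesterov_os(x0, subset_grads, step, n_iters):
    """Generic Nesterov-accelerated ordered-subset loop (sketch).

    subset_grads : list of callables, each returning the gradient of the
                   (surrogate) objective restricted to one data subset
    """
    x, z, t = x0.copy(), x0.copy(), 1.0
    for _ in range(n_iters):
        for grad in subset_grads:       # one pass = one ordered-subset sweep
            x_new = np.maximum(z - step * grad(z), 0.0)  # non-negativity
            t_new = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t * t))
            z = x_new + ((t - 1.0) / t_new) * (x_new - x)  # momentum step
            x, t = x_new, t_new
    return x
```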
A maximum likelihood method for high resolution proton radiography/proton CT
NASA Astrophysics Data System (ADS)
Collins-Fekete, Charles-Antoine; Brousmiche, Sébastien; Portillo, Stephen K. N.; Beaulieu, Luc; Seco, Joao
2016-12-01
Multiple Coulomb scattering (MCS) is the largest contributor to blurring in proton imaging. In this work, we developed a maximum likelihood least squares estimator that improves proton radiography's spatial resolution. The water equivalent thicknesses (WET) through projections defined from the source to the detector pixels were estimated such that they maximize the likelihood of the energy loss of every proton crossing the volume. The length spent in each projection was calculated through the optimized cubic spline path estimate. The proton radiographs were produced using Geant4 simulations. Three phantoms were studied here: a slanted cube in a tank of water to measure 2D spatial resolution, a voxelized head phantom for clinical performance evaluation, and a parametric Catphan phantom (CTP528) for 3D spatial resolution. Two proton beam configurations were used: a parallel and a conical beam. Proton beams of 200 and 330 MeV were simulated to acquire the radiographs. Spatial resolution is increased from 2.44 lp cm-1 to 4.53 lp cm-1 in the 200 MeV beam and from 3.49 lp cm-1 to 5.76 lp cm-1 in the 330 MeV beam. Beam configuration does not affect the reconstructed spatial resolution, as investigated between a radiograph acquired with the parallel beam (3.49 lp cm-1 to 5.76 lp cm-1) and the conical beam (from 3.49 lp cm-1 to 5.56 lp cm-1). The improved images were then used as input in a photon tomography algorithm. The proton CT reconstruction of the Catphan phantom shows high spatial resolution (from 2.79 to 5.55 lp cm-1 for the parallel beam and from 3.03 to 5.15 lp cm-1 for the conical beam) and the reconstruction of the head phantom, although qualitative, shows high contrast in the gradient region. The proposed formulation of the optimization demonstrates serious potential to increase the spatial resolution (by up to 65%) in proton radiography and greatly accelerate proton computed tomography reconstruction.
McCann, Jamie; Schneeweiss, Gerald M.; Stuessy, Tod F.; Villaseñor, Jose L.; Weiss-Schneeweiss, Hanna
2016-01-01
Chromosome number change (polyploidy and dysploidy) plays an important role in plant diversification and speciation. Investigating chromosome number evolution commonly entails ancestral state reconstruction performed within a phylogenetic framework, which is, however, prone to uncertainty, whose effects on evolutionary inferences are insufficiently understood. Using the chromosomally diverse plant genus Melampodium (Asteraceae) as model group, we assess the impact of reconstruction method (maximum parsimony, maximum likelihood, Bayesian methods), branch length model (phylograms versus chronograms) and phylogenetic uncertainty (topological and branch length uncertainty) on the inference of chromosome number evolution. We also address the suitability of the maximum clade credibility (MCC) tree as single representative topology for chromosome number reconstruction. Each of the listed factors causes considerable incongruence among chromosome number reconstructions. Discrepancies between inferences on the MCC tree from those made by integrating over a set of trees are moderate for ancestral chromosome numbers, but severe for the difference of chromosome gains and losses, a measure of the directionality of dysploidy. Therefore, reliance on single trees, such as the MCC tree, is strongly discouraged and model averaging, taking both phylogenetic and model uncertainty into account, is recommended. For studying chromosome number evolution, dedicated models implemented in the program ChromEvol and ordered maximum parsimony may be most appropriate. Chromosome number evolution in Melampodium follows a pattern of bidirectional dysploidy (starting from x = 11 to x = 9 and x = 14, respectively) with no prevailing direction. PMID:27611687
Dong, J; Hayakawa, Y; Kober, C
2014-01-01
When metallic prosthetic appliances and dental fillings are present in the oral cavity, metal-induced streak artefacts are unavoidable in CT images. The aim of this study was to develop a method for artefact reduction using statistical reconstruction on multidetector-row CT images. Adjacent CT images often depict similar anatomical structures. Therefore, images with weak artefacts were first reconstructed using the projection data of an artefact-free image in a neighbouring thin slice. Images with moderate and strong artefacts were then processed in sequence by successive iterative restoration, where the projection data were generated from the adjacent reconstructed slice. First, the basic maximum likelihood-expectation maximization algorithm was applied. Next, the ordered subset-expectation maximization algorithm was examined. Alternatively, a small region-of-interest setting was designated. Finally, a general-purpose graphics processing unit machine was applied in both situations. The algorithms reduced the metal-induced streak artefacts on multidetector-row CT images when the sequential processing method was applied. The ordered subset-expectation maximization and the small region of interest reduced the processing duration without apparent detriment. A general-purpose graphics processing unit realized high performance. A statistical reconstruction method was thus applied for streak artefact reduction; the alternative algorithms applied were effective, and both software and hardware tools, such as ordered subset-expectation maximization, a small region of interest and a general-purpose graphics processing unit, achieved fast artefact correction.
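The ML-EM update at the heart of this approach is a multiplicative fixed point. A generic sketch with a dense system matrix (real implementations use on-the-fly projectors, and OSEM applies the same update over subsets of the data, which is the speedup examined above):

```python
import numpy as np

def mlem(A, y, n_iters=50, eps=1e-12):
    """Basic ML-EM iteration for Poisson data y ~ Poisson(A x).

    A : (n_rays, n_voxels) system matrix
    y : (n_rays,) measured projection data
    """
    x = np.ones(A.shape[1])
    sens = A.T @ np.ones(A.shape[0]) + eps    # sensitivity image
    for _ in range(n_iters):
        ratio = y / (A @ x + eps)             # measured-to-modeled ratio
        x *= (A.T @ ratio) / sens             # multiplicative EM update
    return x
```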
Maximum likelihood estimation in calibrating a stereo camera setup.
Muijtjens, A M; Roos, J M; Arts, T; Hasman, A
1999-02-01
Motion and deformation of the cardiac wall may be measured by following the positions of implanted radiopaque markers in three dimensions, using two x-ray cameras simultaneously. Typically, calibration of the position measurement system is obtained by registering images of a calibration object containing 10-20 radiopaque markers at known positions. Unfortunately, an accidental change of the position of a camera after calibration requires complete recalibration. Alternatively, redundant information in the measured image positions of stereo pairs can be used for calibration, so that a separate calibration procedure can be avoided. In the current study a model is developed that describes the geometry of the camera setup by five dimensionless parameters. Maximum Likelihood (ML) estimates of these parameters were obtained in an error analysis. It is shown that the ML estimates can be found by application of a nonlinear least squares procedure. Compared to the standard unweighted least squares procedure, the ML method resulted in more accurate estimates without noticeable bias. The accuracy of the ML method was investigated in relation to the object aperture. The reconstruction problem appeared well conditioned as long as the object aperture is larger than 0.1 rad. The angle between the two viewing directions appeared to be the parameter most likely to cause major inaccuracies in the reconstruction of the 3-D positions of the markers. Hence, attempts to improve the robustness of the method should primarily focus on reduction of the error in this parameter.
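Under independent Gaussian errors, the ML estimate coincides with weighted nonlinear least squares, which is plausibly why the ML method outperforms the unweighted procedure here. A sketch using SciPy; `residual_fn` and the weights are placeholders standing in for the five-parameter camera-geometry model:

```python
import numpy as np
from scipy.optimize import least_squares

def ml_calibrate(residual_fn, theta0, sigmas):
    """Maximum-likelihood estimation via weighted nonlinear least squares.

    Under independent Gaussian measurement errors, maximizing the
    likelihood is equivalent to minimizing sum_i (r_i(theta)/sigma_i)^2.

    residual_fn : callable theta -> residual vector r(theta)
    sigmas      : per-measurement error standard deviations (the weights)
    """
    weighted = lambda th: residual_fn(th) / sigmas
    fit = least_squares(weighted, theta0)   # trust-region least-squares solver
    return fit.x
```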
Attenuation correction in emission tomography using the emission data—A review
Berker, Yannick; Li, Yusheng
2016-01-01
The problem of attenuation correction (AC) for quantitative positron emission tomography (PET) had been considered solved to a large extent after the commercial availability of devices combining PET with computed tomography (CT) in 2001; single photon emission computed tomography (SPECT) has seen a similar development. However, stimulated in particular by technical advances toward clinical systems combining PET and magnetic resonance imaging (MRI), research interest in alternative approaches for PET AC has grown substantially in the last years. In this comprehensive literature review, the authors first present theoretical results with relevance to simultaneous reconstruction of attenuation and activity. The authors then look back at the early history of this research area especially in PET; since this history is closely interwoven with that of similar approaches in SPECT, these will also be covered. We then review algorithmic advances in PET, including analytic and iterative algorithms. The analytic approaches are either based on the Helgason–Ludwig data consistency conditions of the Radon transform, or generalizations of John’s partial differential equation; with respect to iterative methods, we discuss maximum likelihood reconstruction of attenuation and activity (MLAA), the maximum likelihood attenuation correction factors (MLACF) algorithm, and their offspring. The description of methods is followed by a structured account of applications for simultaneous reconstruction techniques: this discussion covers organ-specific applications, applications specific to PET/MRI, applications using supplemental transmission information, and motion-aware applications. After briefly summarizing SPECT applications, we consider recent developments using emission data other than unscattered photons. In summary, developments using time-of-flight (TOF) PET emission data for AC have shown promising advances and open a wide range of applications. These techniques may both remedy deficiencies of purely MRI-based AC approaches in PET/MRI and improve standalone PET imaging. PMID:26843243
Zhi-Bin Wen; Ming-Li Zhang; Ge-Lin Zhu; Stewart C. Sanderson
2010-01-01
To reconstruct phylogeny and verify the monophyly of major subgroups, a total of 52 species representing almost all species of Salsoleae s.l. in China were sampled, with analysis based on three molecular markers (nrDNA ITS, cpDNA psbB-psbH and rbcL), using maximum parsimony, maximum likelihood, and Bayesian inference methods. Our molecular evidence provides strong...
Parameter Estimation and Model Selection for Indoor Environments Based on Sparse Observations
NASA Astrophysics Data System (ADS)
Dehbi, Y.; Loch-Dehbi, S.; Plümer, L.
2017-09-01
This paper presents a novel method for the parameter estimation and model selection for the reconstruction of indoor environments based on sparse observations. While most approaches for the reconstruction of indoor models rely on dense observations, we predict scenes of the interior with high accuracy in the absence of indoor measurements. We use a model-based top-down approach and incorporate strong but profound prior knowledge. The latter includes probability density functions for model parameters and sparse observations such as room areas and the building footprint. The floorplan model is characterized by linear and bi-linear relations with discrete and continuous parameters. We focus on the stochastic estimation of model parameters based on a topological model derived by combinatorial reasoning in a first step. A Gauss-Markov model is applied for estimation and simulation of the model parameters. Symmetries are represented and exploited during the estimation process. Background knowledge as well as observations are incorporated in a maximum likelihood estimation and model selection is performed with AIC/BIC. The likelihood is also used for the detection and correction of potential errors in the topological model. Estimation results are presented and discussed.
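Model selection with AIC/BIC as used above needs only the maximized log-likelihood and a complexity penalty. A two-line sketch:

```python
import numpy as np

def aic(loglik, k):
    """Akaike information criterion; k = number of free parameters."""
    return 2 * k - 2 * loglik

def bic(loglik, k, n):
    """Bayesian information criterion; n = number of observations."""
    return k * np.log(n) - 2 * loglik

# Among candidate floorplan models, keep the one with the smallest AIC
# (or BIC, which penalizes complexity more heavily for large n).
```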
The number of operations required for completing breast reconstruction.
Eom, Jin Sup; Kobayashi, Mark Robert; Paydar, Keyianoosh; Wirth, Garrett A; Evans, Gregory R D
2014-10-01
Breast reconstruction often requires multiple surgeries, which demands additional expense and time and is often contrary to the patient's expectation. The aim of this study was to review the number of operations needed to complete breast reconstruction and to determine the patient and clinical factors that influenced this number. We retrospectively reviewed the medical records of 254 cases of breast reconstruction (in 185 patients) performed between February 2005 and August 2009. We investigated the number of operations performed for each case of breast reconstruction and analyzed the influence of variable factors. The purpose of the additional operations was also analyzed. The mean number of operations per breast was 2.37 (range, 1-9). The mean number of operations for mound creation was 2.24. Factors associated with an increased number of operations were the use of an implant, contralateral symmetrization, complications, and nipple reconstruction. Considering the reconstruction method, either primary implant placement or free abdominal tissue transfer required fewer surgeries than the use of an expander implant, and the number of operations using free transverse rectus abdominis musculocutaneous or deep inferior epigastric perforator flaps was lower than the number using pedicled transverse rectus abdominis musculocutaneous flaps. These data will aid in planning breast reconstruction surgery and will enable patients to be more informed regarding the likelihood of multiple surgeries.
Diffusion archeology for diffusion progression history reconstruction.
Sefer, Emre; Kingsford, Carl
2016-11-01
Diffusion through graphs can be used to model many real-world processes, such as the spread of diseases, social network memes, computer viruses, or water contaminants. Often, a real-world diffusion cannot be directly observed while it is occurring - perhaps it is not noticed until some time has passed, continuous monitoring is too costly, or privacy concerns limit data access. This leads to the need to reconstruct how the present state of the diffusion came to be from partial diffusion data. Here, we tackle the problem of reconstructing a diffusion history from one or more snapshots of the diffusion state. This ability can be invaluable to learn when certain computer nodes are infected or which people are the initial disease spreaders to control future diffusions. We formulate this problem over discrete-time SEIRS-type diffusion models in terms of maximum likelihood. We design methods that are based on submodularity and a novel prize-collecting dominating-set vertex cover (PCDSVC) relaxation that can identify likely diffusion steps with some provable performance guarantees. Our methods are the first to be able to reconstruct complete diffusion histories accurately in real and simulated situations. As a special case, they can also identify the initial spreaders better than the existing methods for that problem. Our results for both meme and contaminant diffusion show that the partial diffusion data problem can be overcome with proper modeling and methods, and that hidden temporal characteristics of diffusion can be predicted from limited data.
Comparison of image deconvolution algorithms on simulated and laboratory infrared images
DOE Office of Scientific and Technical Information (OSTI.GOV)
Proctor, D.
1994-11-15
We compare Maximum Likelihood, Maximum Entropy, Accelerated Lucy-Richardson, Weighted Goodness of Fit, and Pixon reconstructions of simple scenes as a function of signal-to-noise ratio for simulated images with randomly generated noise. Reconstruction results of infrared images taken with the TAISIR (Temperature and Imaging System InfraRed) are also discussed.
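Of the algorithms compared, Lucy-Richardson is the easiest to sketch. A minimal unaccelerated 2D version (the study evaluates an accelerated variant); names are hypothetical:

```python
import numpy as np
from scipy.signal import fftconvolve

def richardson_lucy(image, psf, n_iters=30, eps=1e-12):
    """Classic (unaccelerated) Lucy-Richardson deconvolution sketch."""
    estimate = np.full_like(image, image.mean())
    psf_flip = psf[::-1, ::-1]               # adjoint of the blur operator
    for _ in range(n_iters):
        blurred = fftconvolve(estimate, psf, mode="same") + eps
        correction = fftconvolve(image / blurred, psf_flip, mode="same")
        estimate *= correction               # multiplicative ML update
    return estimate
```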
3D image reconstruction algorithms for cryo-electron-microscopy images of virus particles
NASA Astrophysics Data System (ADS)
Doerschuk, Peter C.; Johnson, John E.
2000-11-01
A statistical model for the object and the complete image formation process in cryo electron microscopy of viruses is presented. Using this model, maximum likelihood reconstructions of the 3D structure of viruses are computed using the expectation maximization algorithm and an example based on Cowpea mosaic virus is provided.
NASA Astrophysics Data System (ADS)
Eck, Brendan; Fahmi, Rachid; Brown, Kevin M.; Raihani, Nilgoun; Wilson, David L.
2014-03-01
Model observers were created and compared to human observers for the detection of low contrast targets in computed tomography (CT) images reconstructed with an advanced, knowledge-based, iterative image reconstruction method for low x-ray dose imaging. A 5-channel Laguerre-Gauss Hotelling Observer (CHO) was used with internal noise added to the decision variable (DV) and/or channel outputs (CO). Models were defined by parameters: (k1) DV-noise with standard deviation (std) proportional to DV std; (k2) DV-noise with constant std; (k3) CO-noise with constant std across channels; and (k4) CO-noise in each channel with std proportional to CO variance. Four-alternative forced choice (4AFC) human observer studies were performed on sub-images extracted from phantom images with and without a "pin" target. Model parameters were estimated using maximum likelihood comparison to human probability correct (PC) data. PC in human and all model observers increased with dose, contrast, and size, and was much higher for advanced iterative reconstruction (IMR) as compared to filtered back projection (FBP). Detection in IMR was better than in FBP at 1/3 dose, suggesting significant dose savings. Model(k1,k2,k3,k4) gave the best overall fit to humans across independent variables (dose, size, contrast, and reconstruction) at fixed display window. However, Model(k1) performed better when considering model complexity using the Akaike information criterion. Model(k1) fit the extraordinary detectability difference between IMR and FBP, despite the different noise quality. It is anticipated that the model observer will predict results from iterative reconstruction methods having similar noise characteristics, enabling rapid comparison of methods.
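A CHO of the kind described can be assembled in a few lines: Laguerre-Gauss channels, a Hotelling template learned from training images, and internal noise added to the decision variable (the k1 mechanism). A sketch with hypothetical names and parameter values:

```python
import numpy as np
from scipy.special import eval_laguerre

def lg_channels(size, a=15.0, n_channels=5):
    """Laguerre-Gauss channel templates on a size x size grid."""
    yy, xx = np.mgrid[:size, :size] - (size - 1) / 2.0
    g = 2 * np.pi * (xx**2 + yy**2) / a**2
    return np.stack([np.exp(-g / 2) * eval_laguerre(n, g)
                     for n in range(n_channels)]).reshape(n_channels, -1)

def cho_dv(train_sig, train_noise, test_imgs, U, k1=0.0, rng=None):
    """Channelized Hotelling decision variables with optional internal
    noise on the DV, std proportional to the DV std (the k1 mechanism)."""
    rng = rng or np.random.default_rng(0)
    vs, vn = train_sig @ U.T, train_noise @ U.T      # channel outputs
    S = 0.5 * (np.cov(vs.T) + np.cov(vn.T))          # pooled channel covariance
    w = np.linalg.solve(S, vs.mean(0) - vn.mean(0))  # Hotelling template
    dv = (test_imgs @ U.T) @ w
    return dv + k1 * dv.std() * rng.standard_normal(dv.shape)
```

Here `train_sig`, `train_noise`, and `test_imgs` are flattened sub-images, and `U = lg_channels(size)`; fitting k1 to the 4AFC human data would follow the maximum-likelihood comparison the abstract describes.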
Reyes, Elisabeth; Nadot, Sophie; von Balthazar, Maria; Schönenberger, Jürg; Sauquet, Hervé
2018-06-21
Ancestral state reconstruction is an important tool to study morphological evolution and often involves estimating transition rates among character states. However, various factors, including taxonomic scale and sampling density, may impact transition rate estimation and indirectly also the probability of the state at a given node. Here, we test the influence of rate heterogeneity using maximum likelihood methods on five binary perianth characters, optimized on a phylogenetic tree of angiosperms including 1230 species sampled from all families. We compare the states reconstructed by an equal-rate (Mk1) and a two-rate model (Mk2) fitted either with a single set of rates for the whole tree or as a partitioned model, allowing for different rates on five partitions of the tree. We find strong signal for rate heterogeneity among the five subdivisions for all five characters, but little overall impact of the choice of model on reconstructed ancestral states, which indicates that most of our inferred ancestral states are the same whether heterogeneity is accounted for or not.
NASA Astrophysics Data System (ADS)
Mow, M.; Zbijewski, W.; Sisniega, A.; Xu, J.; Dang, H.; Stayman, J. W.; Wang, X.; Foos, D. H.; Koliatsos, V.; Aygun, N.; Siewerdsen, J. H.
2017-03-01
Purpose: To improve the timely detection and treatment of intracranial hemorrhage or ischemic stroke, recent efforts include the development of cone-beam CT (CBCT) systems for perfusion imaging and new approaches to estimate perfusion parameters despite slow rotation speeds compared to multi-detector CT (MDCT) systems. This work describes development of a brain perfusion CBCT method using a reconstruction of difference (RoD) approach to enable perfusion imaging on a newly developed CBCT head scanner prototype. Methods: A new reconstruction approach using RoD with a penalized-likelihood framework was developed to image the temporal dynamics of vascular enhancement. A digital perfusion simulation was developed to give a realistic representation of brain anatomy, artifacts, noise, scanner characteristics, and hemodynamic properties. This simulation includes a digital brain phantom, time-attenuation curves and noise parameters, a novel forward projection method for improved computational efficiency, and perfusion parameter calculation. Results: Our results show the feasibility of estimating perfusion parameters from a set of images reconstructed from slow scans, sparse data sets, and arc length scans as short as 60 degrees. The RoD framework significantly reduces noise and time-varying artifacts from inconsistent projections. Proper regularization and the use of overlapping reconstructed arcs can potentially further decrease bias and increase temporal resolution, respectively. Conclusions: A digital brain perfusion simulation with RoD imaging approach has been developed and supports the feasibility of using a CBCT head scanner for perfusion imaging. Future work will include testing with data acquired using a 3D-printed perfusion phantom and translation to preclinical and clinical studies.
A novel description of FDG excretion in the renal system: application to metformin-treated models
NASA Astrophysics Data System (ADS)
Garbarino, S.; Caviglia, G.; Sambuceti, G.; Benvenuto, F.; Piana, M.
2014-05-01
This paper introduces a novel compartmental model describing the excretion of 18F-fluoro-deoxyglucose (FDG) in the renal system, together with a numerical method based on maximum likelihood for its reduction. This approach accounts for variations in FDG concentration due to water re-absorption in renal tubules and the increase of the bladder's volume during the FDG excretion process. From the computational viewpoint, the reconstruction of the tracer kinetic parameters is obtained by solving the maximum likelihood problem iteratively, using a non-stationary, steepest descent approach that explicitly accounts for the Poisson nature of nuclear medicine data. The reliability of the method is validated against two sets of synthetic data realized according to realistic conditions. Finally, we applied this model to describe FDG excretion in animal models treated with metformin. In particular, we show that our approach allows the quantitative estimation of the reduction of FDG de-phosphorylation induced by metformin.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schmidtlein, CR; Beattie, B; Humm, J
2014-06-15
Purpose: To investigate the performance of a new penalized-likelihood PET image reconstruction algorithm using the ℓ1-norm total-variation (TV) sum of the 1st- through 4th-order gradients as the penalty. Simulated and brain patient data sets were analyzed. Methods: This work represents an extension of the preconditioned alternating projection algorithm (PAPA) for emission-computed tomography. In this new generalized algorithm (GPAPA), the penalty term is expanded to allow multiple components, in this case the sum of the 1st- to 4th-order gradients, to reduce artificial piecewise-constant regions ("staircase" artifacts, typical for TV) seen in PAPA images penalized with only the 1st-order gradient. Simulated data were used to test for "staircase" artifacts and to optimize the penalty hyper-parameter in the root-mean-squared error (RMSE) sense. Patient FDG brain scans were acquired on a GE D690 PET/CT (370 MBq at 1-hour post-injection for 10 minutes) in time-of-flight mode and in all cases were reconstructed using resolution-recovery projectors. GPAPA images were compared to PAPA and to RMSE-optimally filtered OSEM (fully converged) in simulations, and to clinical OSEM reconstructions (3 iterations, 32 subsets) with 2.6 mm XY Gaussian and standard 3-point axial smoothing post-filters. Results: The results from the simulated data show a significant reduction in the "staircase" artifact for GPAPA compared to PAPA and lower RMSE (up to 35%) compared to optimally filtered OSEM. A simple power-law relationship between the RMSE-optimal hyper-parameters and the noise-equivalent counts (NEC) per voxel is revealed. Qualitatively, the patient images appear much sharper and with less noise than standard clinical images. The convergence rate is similar to OSEM. Conclusions: GPAPA reconstructions using the ℓ1-norm total-variation sum of the 1st- through 4th-order gradients as the penalty show great promise for the improvement of image quality over that currently achieved with clinical OSEM reconstructions.
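The GPAPA penalty is the ℓ1 sum of 1st- through 4th-order gradients. A 1D sketch of that functional form (the paper's version is multidimensional and embedded in the preconditioned alternating projection iterations):

```python
import numpy as np

def multi_order_tv(x, weights=(1.0, 1.0, 1.0, 1.0)):
    """l1-norm total-variation penalty summed over 1st- to 4th-order
    finite differences (1D sketch of the GPAPA penalty's structure).

    Mixing higher-order differences discourages the piecewise-constant
    "staircase" look that pure 1st-order TV produces."""
    return sum(w * np.abs(np.diff(x, n=k + 1)).sum()
               for k, w in enumerate(weights))
```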
Inference of the sparse kinetic Ising model using the decimation method
NASA Astrophysics Data System (ADS)
Decelle, Aurélien; Zhang, Pan
2015-05-01
In this paper we study the inference of the kinetic Ising model on sparse graphs by the decimation method. The decimation method, which was first proposed in Decelle and Ricci-Tersenghi [Phys. Rev. Lett. 112, 070603 (2014), 10.1103/PhysRevLett.112.070603] for the static inverse Ising problem, tries to recover the topology of the inferred system by setting the weakest couplings to zero iteratively. During the decimation process the likelihood function is maximized over the remaining couplings. Unlike the ℓ1-optimization-based methods, the decimation method does not use the Laplace distribution as a heuristic choice of prior to select a sparse solution. In our case, the whole process can be done automatically without fixing any parameters by hand. We show that in the dynamical inference problem, where the task is to reconstruct the couplings of an Ising model given the data, the decimation process can be incorporated naturally into a maximum-likelihood optimization algorithm, as opposed to the static case, where a pseudolikelihood method needs to be adopted. We also use extensive numerical studies to validate the accuracy of our methods in dynamical inference problems. Our results illustrate that, on various topologies and with different distributions of couplings, the decimation method outperforms the widely used ℓ1-optimization-based methods.
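A minimal sketch of decimation for the kinetic Ising model under Glauber-type dynamics: fit the couplings by gradient ascent on the exact dynamical log-likelihood, zero out the weakest couplings, and refit. Step sizes, schedules, and names are illustrative, not the paper's implementation:

```python
import numpy as np

def fit_couplings(S, mask, lr=0.05, n_steps=500):
    """Maximum-likelihood fit of kinetic Ising couplings J, with masked
    (decimated) entries held at zero.

    S : (T, N) array of +/-1 spins; transition probability is
        P(s_i(t+1) | s(t)) = exp(s_i(t+1) h_i(t)) / (2 cosh h_i(t)),
        with fields h_i(t) = sum_j J_ij s_j(t).
    """
    T, N = S.shape
    J = np.zeros((N, N))
    for _ in range(n_steps):
        H = S[:-1] @ J.T                      # fields h_i(t), shape (T-1, N)
        grad = (S[1:] - np.tanh(H)).T @ S[:-1] / (T - 1)
        J += lr * grad * mask                 # ascend only on active couplings
    return J

def decimate(S, n_remove, rounds=10):
    """Iteratively zero out the weakest couplings and refit (sketch)."""
    N = S.shape[1]
    mask = np.ones((N, N))
    for _ in range(rounds):
        J = fit_couplings(S, mask)
        strength = np.abs(np.where(mask > 0, J, np.inf))  # ignore removed ones
        idx = np.unravel_index(np.argsort(strength, axis=None)[:n_remove],
                               J.shape)
        mask[idx] = 0.0
    return fit_couplings(S, mask), mask
```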
Variation in the Utilization of Reconstruction Following Mastectomy in Elderly Women
In, Haejin; Jiang, Wei; Lipsitz, Stuart R.; Neville, Bridget A.; Weeks, Jane C.; Greenberg, Caprice C.
2014-01-01
Background: Regardless of their age, women who choose to undergo postmastectomy reconstruction report improved quality of life as a result. However, actual use of reconstruction decreases with increasing age. Whereas this may reflect patient preference and clinical factors, it may also represent age-based disparity. Methods: Women aged 65 years or older who underwent mastectomy for DCIS/stage I/II breast cancer (2000–2005) were identified in the SEER-Medicare database. Overall and institutional rates of reconstruction were calculated. Characteristics of hospitals with higher and lower rates of reconstruction were compared. Pseudo-R2 statistics utilizing a patient-level logistic regression model estimated the relative contribution of institution and patient characteristics. Results: A total of 19,234 patients at 716 institutions were examined. Overall, 6 % of elderly patients received reconstruction after mastectomy. Institutional rates ranged from zero to >40 %. Whereas 53 % of institutions performed no reconstruction on elderly patients, 5.6 % performed reconstructions on more than 20 %. Although patient characteristics (%ΔR2 = 70 %), and especially age (%ΔR2 = 34 %), were the primary determinants of reconstruction, institutional characteristics also explained some of the variation (%ΔR2 = 16 %). This suggests that in addition to appropriate factors, including clinical characteristics and patient preferences, the use of reconstruction among older women also is influenced by the institution at which they receive care. Conclusions: Variation in the likelihood of reconstruction by institution and the association with structural characteristics suggests unequal access to this critical component of breast cancer care. Increased awareness of a potential age disparity is an important first step to improve access for elderly women who are candidates and desire reconstruction. PMID:23263733
Maximum likelihood pedigree reconstruction using integer linear programming.
Cussens, James; Bartlett, Mark; Jones, Elinor M; Sheehan, Nuala A
2013-01-01
Large population biobanks of unrelated individuals have been highly successful in detecting common genetic variants affecting diseases of public health concern. However, they lack the statistical power to detect more modest gene-gene and gene-environment interaction effects or the effects of rare variants for which related individuals are ideally required. In reality, most large population studies will undoubtedly contain sets of undeclared relatives, or pedigrees. Although a crude measure of relatedness might sometimes suffice, having a good estimate of the true pedigree would be much more informative if this could be obtained efficiently. Relatives are more likely to share longer haplotypes around disease susceptibility loci and are hence biologically more informative for rare variants than unrelated cases and controls. Distant relatives are arguably more useful for detecting variants with small effects because they are less likely to share masking environmental effects. Moreover, the identification of relatives enables appropriate adjustments of statistical analyses that typically assume unrelatedness. We propose to exploit an integer linear programming optimisation approach to pedigree learning, which is adapted to find valid pedigrees by imposing appropriate constraints. Our method is not restricted to small pedigrees and is guaranteed to return a maximum likelihood pedigree. With additional constraints, we can also search for multiple high-probability pedigrees and thus account for the inherent uncertainty in any particular pedigree reconstruction. The true pedigree is found very quickly by comparison with other methods when all individuals are observed. Extensions to more complex problems seem feasible. © 2012 Wiley Periodicals, Inc.
Evolutionary signals of symbiotic persistence in the legume–rhizobia mutualism
Werner, Gijsbert D. A.; Cornwell, William K.; Cornelissen, Johannes H. C.; Kiers, E. Toby
2015-01-01
Understanding the origins and evolutionary trajectories of symbiotic partnerships remains a major challenge. Why are some symbioses lost over evolutionary time whereas others become crucial for survival? Here, we use a quantitative trait reconstruction method to characterize different evolutionary stages in the ancient symbiosis between legumes (Fabaceae) and nitrogen-fixing bacteria, asking how labile symbiosis is across different host clades. We find that more than half of the 1,195 extant nodulating legumes analyzed have a high likelihood (>95%) of being in a state of high symbiotic persistence, meaning that they show a continued capacity to form the symbiosis over evolutionary time, even though the partnership has remained facultative and is not obligate. To explore patterns associated with the likelihood of loss and retention of the N2-fixing symbiosis, we tested for correlations between symbiotic persistence and legume distribution, climate, soil and trait data. We found a strong latitudinal effect and demonstrated that low mean annual temperatures are associated with high symbiotic persistence in legumes. Although no significant correlations between soil variables and symbiotic persistence were found, nitrogen and phosphorus leaf contents were positively correlated with legumes in a state of high symbiotic persistence. This pattern suggests that highly demanding nutrient lifestyles are associated with more stable partnerships, potentially because they “lock” the hosts into symbiotic dependency. Quantitative reconstruction methods are emerging as a powerful comparative tool to study broad patterns of symbiont loss and retention across diverse partnerships. PMID:26041807
A Method of Q-Matrix Validation for the Linear Logistic Test Model
Baghaei, Purya; Hohensinn, Christine
2017-01-01
The linear logistic test model (LLTM) is a well-recognized psychometric model for examining the components of difficulty in cognitive tests and validating construct theories. The plausibility of the construct model, summarized in a matrix of weights, known as the Q-matrix or weight matrix, is tested by (1) comparing the fit of LLTM with the fit of the Rasch model (RM) using the likelihood ratio (LR) test and (2) examining the correlation between the Rasch model item parameters and LLTM-reconstructed item parameters. The problem with the LR test is that it is almost always significant and, consequently, LLTM is rejected. The drawback of examining the correlation coefficient is that there is no cut-off value or lower bound for the magnitude of the correlation coefficient. In this article we suggest a simulation method to set a minimum benchmark for the correlation between item parameters from the Rasch model and those reconstructed by the LLTM. If the cognitive model is valid, then the correlation coefficient between the RM-based item parameters and the LLTM-reconstructed item parameters derived from the theoretical weight matrix should be greater than the coefficients derived from the simulated matrices. PMID:28611721
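The simulation benchmark described above can be sketched in a few lines of Python; the item counts, the least-squares estimation of the LLTM basic parameters, and the synthetic Rasch difficulties below are illustrative assumptions, not the authors' exact procedure.

```python
import numpy as np

rng = np.random.default_rng(42)
n_items, n_ops = 20, 5
Q = rng.integers(0, 2, size=(n_items, n_ops))           # theoretical weight matrix
eta_true = rng.normal(size=n_ops)                       # basic operation difficulties
b = Q @ eta_true + rng.normal(scale=0.3, size=n_items)  # stand-in RM item difficulties

def lltm_correlation(b, Q):
    # LLTM reconstructs item difficulty as Q @ eta; estimate eta by least squares
    eta, *_ = np.linalg.lstsq(Q, b, rcond=None)
    return np.corrcoef(b, Q @ eta)[0, 1]

observed = lltm_correlation(b, Q)
# benchmark: correlations achievable by arbitrary random weight matrices
null = [lltm_correlation(b, rng.integers(0, 2, size=Q.shape)) for _ in range(2000)]
print(observed, np.quantile(null, 0.95))  # a valid Q-matrix should clear the benchmark
```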
Flexible mini gamma camera reconstructions of extended sources using step and shoot and list mode.
Gardiazabal, José; Matthies, Philipp; Vogel, Jakob; Frisch, Benjamin; Navab, Nassir; Ziegler, Sibylle; Lasser, Tobias
2016-12-01
Hand- and robot-guided mini gamma cameras have been introduced for the acquisition of single-photon emission computed tomography (SPECT) images. Less cumbersome than whole-body scanners, they allow for a fast acquisition of the radioactivity distribution, for example, to differentiate cancerous from hormonally hyperactive lesions inside the thyroid. This work compares acquisition protocols and reconstruction algorithms to identify the most suitable approach for fast acquisition and efficient image reconstruction when localizing extended sources, such as lesions inside the thyroid. Our setup consists of a mini gamma camera with precise tracking information provided by a robotic arm, which also provides reproducible positioning for our experiments. Based on a realistic phantom of the thyroid including hot and cold nodules as well as background radioactivity, the authors compare "step and shoot" (SAS) and continuous data (CD) acquisition protocols in combination with two different statistical reconstruction methods: maximum-likelihood expectation-maximization (ML-EM) for time-integrated count values and list-mode expectation-maximization (LM-EM) for individually detected gamma rays. In addition, the authors simulate lower uptake values by statistically subsampling the experimental data in order to study the behavior of their approach without changing other aspects of the acquired data. All compared methods yield suitable results, resolving the hot nodules and the cold nodule from the background. However, the CD acquisition is twice as fast as the SAS acquisition, while yielding better coverage of the thyroid phantom, resulting in qualitatively more accurate reconstructions of the isthmus between the lobes. For CD acquisitions, the LM-EM reconstruction method is preferable, as it yields comparable image quality to ML-EM at significantly higher speeds, on average by an order of magnitude. This work identifies CD acquisition protocols combined with LM-EM reconstruction as a prime candidate for the wider introduction of SPECT imaging with flexible mini gamma cameras in clinical practice.
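For readers unfamiliar with the two estimators being compared, a minimal sketch of ML-EM on binned counts versus list-mode EM on individual events is given below, assuming a toy dense system matrix A; a real camera would use an on-the-fly projector rather than an explicit matrix.

```python
import numpy as np

def mlem(A, y, n_iter=20):
    # ML-EM on binned data: A is (n_bins, n_voxels), y holds counts per bin
    x = np.ones(A.shape[1])
    sens = A.sum(axis=0)                       # sensitivity image, A^T 1
    for _ in range(n_iter):
        x *= (A.T @ (y / (A @ x + 1e-12))) / (sens + 1e-12)
    return x

def lmem(A, events, n_iter=20):
    # list-mode EM: events is an array of detector-bin indices, one per count
    x = np.ones(A.shape[1])
    sens = A.sum(axis=0)                       # full sensitivity still required
    for _ in range(n_iter):
        Ae = A[events]                         # one system-matrix row per event
        x *= (Ae.T @ (1.0 / (Ae @ x + 1e-12))) / (sens + 1e-12)
    return x
```

Binning the events and running mlem reaches the same fixed points; the list-mode form simply avoids building (mostly empty) bins when counts are sparse, which is why it can be much faster for continuous acquisitions.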
González, Edgar J; Martorell, Carlos
2013-01-01
Frequently, vital rates are driven by directional, long-term environmental changes. Many of these are of great importance, such as land degradation, climate change, and succession. Traditional demographic methods assume a constant or stationary environment, and thus are inappropriate to analyze populations subject to these changes. They also require repeat surveys of the individuals as change unfolds. Methods for reconstructing such lengthy processes are needed. We present a model that, based on a time series of population size structures and densities, reconstructs the impact of directional environmental changes on vital rates. The model uses integral projection models and maximum likelihood to identify the rates that best reconstructs the time series. The procedure was validated with artificial and real data. The former involved simulated species with widely different demographic behaviors. The latter used a chronosequence of populations of an endangered cactus subject to increasing anthropogenic disturbance. In our simulations, the vital rates and their change were always reconstructed accurately. Nevertheless, the model frequently produced alternative results. The use of coarse knowledge of the species' biology (whether vital rates increase or decrease with size or their plausible values) allowed the correct rates to be identified with a 90% success rate. With real data, the model correctly reconstructed the effects of disturbance on vital rates. These effects were previously known from two populations for which demographic data were available. Our procedure seems robust, as the data violated several of the model's assumptions. Thus, time series of size structures and densities contain the necessary information to reconstruct changing vital rates. However, additional biological knowledge may be required to provide reliable results. Because time series of size structures and densities are available for many species or can be rapidly generated, our model can contribute to understand populations that face highly pressing environmental problems. PMID:23919169
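A minimal sketch of the reconstruction idea, fitting vital-rate parameters of a discretized integral projection model to a time series of size structures by maximum likelihood, is given below; the logistic survival and Gaussian growth forms, the parameter vector, and the multinomial likelihood are illustrative assumptions, not the authors' exact model.

```python
import numpy as np
from scipy.optimize import minimize

z = np.linspace(0.5, 10.0, 40)                 # size mesh (arbitrary units)

def kernel(theta):
    # survival-growth kernel K[y, x] = s(x) * G(y | x); recruitment omitted
    s = 1.0 / (1.0 + np.exp(-(theta[0] + theta[1] * z)))        # survival vs size
    mu = z + theta[2]                                           # mean growth increment
    G = np.exp(-0.5 * ((z[:, None] - mu[None, :]) / theta[3]) ** 2)
    G /= G.sum(axis=0, keepdims=True)                           # column-normalized growth
    return G * s[None, :]

def negloglik(theta, series):
    # multinomial likelihood of each size structure given the previous census
    if theta[3] <= 0:
        return np.inf
    K = kernel(theta)
    nll = 0.0
    for n_now, n_next in zip(series[:-1], series[1:]):
        p = K @ n_now
        p = p / p.sum()
        nll -= np.sum(n_next * np.log(p + 1e-12))
    return nll

# series: list of observed size-structure count vectors through time (user data)
# fit = minimize(negloglik, x0=[0.0, 0.5, 0.4, 0.6], args=(series,), method="Nelder-Mead")
```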
Representation of photon limited data in emission tomography using origin ensembles
NASA Astrophysics Data System (ADS)
Sitek, A.
2008-06-01
Representation and reconstruction of data obtained by emission tomography scanners are challenging due to high noise levels in the data. Typically, images obtained using tomographic measurements are represented using grids. In this work, we define images as sets of origins of events detected during tomographic measurements; we call these origin ensembles (OEs). A state in the ensemble is characterized by a vector of 3N parameters Y, where the parameters are the coordinates of origins of detected events in a three-dimensional space and N is the number of detected events. The 3N-dimensional probability density function (PDF) for that ensemble is derived, and we present an algorithm for OE image estimation from tomographic measurements. A displayable image (e.g. a grid-based image) is derived from the OE formulation by calculating ensemble expectations based on the PDF using the Markov chain Monte Carlo method. The approach was applied to computer-simulated 3D list-mode positron emission tomography data. The reconstruction errors for a simulated 10 000 000 event acquisition ranged from 0.1% to 34.8%, depending on object size and sampling density. The method was also applied to experimental data, and the results of the OE method were consistent with those obtained by a standard maximum-likelihood approach. The method is a new approach to representation and reconstruction of data obtained by photon-limited emission tomography measurements.
Under-reported data analysis with INAR-hidden Markov chains.
Fernández-Fontelo, Amanda; Cabaña, Alejandra; Puig, Pedro; Moriña, David
2016-11-20
In this work, we deal with correlated under-reported data through INAR(1)-hidden Markov chain models. These models are very flexible and can be identified through their autocorrelation function, which has a very simple form. A naïve method of parameter estimation is proposed, jointly with the maximum likelihood method based on a revised version of the forward algorithm. The most-probable unobserved time series is reconstructed by means of the Viterbi algorithm. Several examples of application in the field of public health are discussed, illustrating the utility of the models. Copyright © 2016 John Wiley & Sons, Ltd.
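The Viterbi step mentioned above is standard dynamic programming; a generic log-domain implementation is sketched below. In the paper's INAR(1)-hidden Markov setting the states would be the unobserved true counts and the emissions the under-reported observations, which this sketch leaves abstract.

```python
import numpy as np

def viterbi(obs_loglik, log_trans, log_init):
    # obs_loglik: (T, S) log p(y_t | state s); log_trans: (S, S), from -> to;
    # returns the most probable hidden-state path
    T, S = obs_loglik.shape
    delta = log_init + obs_loglik[0]
    back = np.zeros((T, S), dtype=int)
    for t in range(1, T):
        scores = delta[:, None] + log_trans
        back[t] = scores.argmax(axis=0)
        delta = scores.max(axis=0) + obs_loglik[t]
    path = np.empty(T, dtype=int)
    path[-1] = delta.argmax()
    for t in range(T - 2, -1, -1):
        path[t] = back[t + 1, path[t + 1]]
    return path
```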
Gang, G J; Siewerdsen, J H; Stayman, J W
2016-02-01
This work applies task-driven optimization to design CT tube current modulation and directional regularization in penalized-likelihood (PL) reconstruction. The relative performance of modulation schemes commonly adopted for filtered-backprojection (FBP) reconstruction were also evaluated for PL in comparison. We adopt a task-driven imaging framework that utilizes a patient-specific anatomical model and information of the imaging task to optimize imaging performance in terms of detectability index ( d' ). This framework leverages a theoretical model based on implicit function theorem and Fourier approximations to predict local spatial resolution and noise characteristics of PL reconstruction as a function of the imaging parameters to be optimized. Tube current modulation was parameterized as a linear combination of Gaussian basis functions, and regularization was based on the design of (directional) pairwise penalty weights for the 8 in-plane neighboring voxels. Detectability was optimized using a covariance matrix adaptation evolutionary strategy algorithm. Task-driven designs were compared to conventional tube current modulation strategies for a Gaussian detection task in an abdomen phantom. The task-driven design yielded the best performance, improving d' by ~20% over an unmodulated acquisition. Contrary to FBP, PL reconstruction using automatic exposure control and modulation based on minimum variance (in FBP) performed worse than the unmodulated case, decreasing d' by 16% and 9%, respectively. This work shows that conventional tube current modulation schemes suitable for FBP can be suboptimal for PL reconstruction. Thus, the proposed task-driven optimization provides additional opportunities for improved imaging performance and dose reduction beyond that achievable with conventional acquisition and reconstruction.
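A skeletal version of the optimization loop, Gaussian-basis tube current weights tuned by a covariance matrix adaptation evolutionary strategy via the cma package, is sketched below; the neg_dprime objective is a crude placeholder for the Fourier-based detectability predictor used in the paper, and the view grid and basis settings are assumptions.

```python
import numpy as np
import cma  # pip install cma

views = np.linspace(0, 2 * np.pi, 180, endpoint=False)   # projection view angles
centers = np.linspace(0, 2 * np.pi, 8, endpoint=False)   # Gaussian basis centers

def modulation(w):
    # tube current profile as a positive combination of Gaussian basis functions
    basis = np.exp(-0.5 * ((views[None, :] - centers[:, None]) / 0.5) ** 2)
    return np.exp(w) @ basis

def neg_dprime(w):
    # placeholder surrogate: reward current where the (hypothetical) task is
    # hardest to see, penalize total dose; the paper instead predicts d' from
    # a Fourier model of PL local resolution and noise
    m = modulation(w)
    task_weight = 1 + 0.8 * np.cos(2 * views)   # hypothetical view importance
    return -np.sum(task_weight * np.log1p(m)) / np.sqrt(m.sum())

xopt, es = cma.fmin2(neg_dprime, np.zeros(len(centers)), 0.3,
                     options={"maxfevals": 2000, "verbose": -9})
```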
Effect of Low-Dose MDCT and Iterative Reconstruction on Trabecular Bone Microstructure Assessment.
Kopp, Felix K; Holzapfel, Konstantin; Baum, Thomas; Nasirudin, Radin A; Mei, Kai; Garcia, Eduardo G; Burgkart, Rainer; Rummeny, Ernst J; Kirschke, Jan S; Noël, Peter B
2016-01-01
We investigated the effects of low-dose multi-detector computed tomography (MDCT) in combination with statistical iterative reconstruction algorithms on trabecular bone microstructure parameters. Twelve donated vertebrae were scanned with the routine radiation exposure used in our department (standard dose) and a low-dose protocol. Reconstructions were performed with filtered backprojection (FBP) and maximum-likelihood-based statistical iterative reconstruction (SIR). Trabecular bone microstructure parameters were assessed and statistically compared for each reconstruction. Moreover, fracture loads of the vertebrae were biomechanically determined and correlated to the assessed microstructure parameters. Trabecular bone microstructure parameters based on low-dose MDCT and SIR significantly correlated with vertebral bone strength. There was no significant difference between microstructure parameters calculated on low-dose SIR and standard-dose FBP images. However, the results revealed a strong dependency on the regularization strength applied during SIR. It was observed that stronger regularization might corrupt the microstructure analysis, because the trabecular structure is a very fine detail that can be lost during the regularization process. As a consequence, the introduction of SIR for trabecular bone microstructure analysis requires a specific optimization of the regularization parameters. Moreover, in comparison to other approaches, superior noise-resolution trade-offs can be found with the proposed methods.
Towards a novel look on low-frequency climate reconstructions
NASA Astrophysics Data System (ADS)
Kamenik, Christian; Goslar, Tomasz; Hicks, Sheila; Barnekow, Lena; Huusko, Antti
2010-05-01
Information on low-frequency (millennial to sub-centennial) climate change is often derived from sedimentary archives, such as peat profiles or lake sediments. Usually, these archives have non-annual and varying time resolution. Their dating is mainly based on radionuclides, which provide probabilistic age-depth relationships with complex error structures. Dating uncertainties impede the interpretation of sediment-based climate reconstructions. They complicate the calculation of time-dependent rates. In most cases, they make any calibration in time impossible. Sediment-based climate proxies are therefore often presented as a single, best-guess time series without proper calibration and error estimation. Errors along time and dating errors that propagate into the calculation of time-dependent rates are neglected. Our objective is to overcome the aforementioned limitations by using a 'swarm' or 'ensemble' of reconstructions instead of a single best-guess. The novelty of our approach is to take into account age-depth uncertainties by permuting through a large number of potential age-depth relationships of the archive of interest. For each individual permutation we can then calculate rates, calibrate proxies in time, and reconstruct the climate-state variable of interest. From the resulting swarm of reconstructions, we can derive realistic estimates of even complex error structures. The likelihood of reconstructions is visualized by a grid of two-dimensional kernels that take into account probabilities along time and the climate-state variable of interest simultaneously. For comparison and regional synthesis, likelihoods can be scored against other independent climate time series.
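The permutation idea can be illustrated with a toy age-depth ensemble: draw many plausible chronologies from the dating errors, map the proxy onto each, and accumulate a two-dimensional likelihood surface. All depths, ages, and the proxy series below are invented, and monotonicity is enforced crudely by sorting.

```python
import numpy as np

rng = np.random.default_rng(0)
# invented radiocarbon-style constraints: depth (cm) with age mean/sd (yr BP)
dated_depth = np.array([10.0, 50.0, 120.0, 200.0])
age_mean = np.array([300.0, 1100.0, 2400.0, 4100.0])
age_sd = np.array([40.0, 60.0, 80.0, 90.0])

proxy_depth = np.linspace(10.0, 200.0, 300)
proxy = np.sin(proxy_depth / 25.0)                 # stand-in proxy series vs depth

pairs = []
for _ in range(1000):
    ages = np.sort(rng.normal(age_mean, age_sd))   # crude monotonicity by sorting
    t = np.interp(proxy_depth, dated_depth, ages)  # one age-depth realization
    pairs.append(np.column_stack([t, proxy]))
pairs = np.vstack(pairs)

# 2-D likelihood surface over (time, proxy value), the analogue of the kernel grid
H, t_edges, p_edges = np.histogram2d(pairs[:, 0], pairs[:, 1], bins=(80, 40))
```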
NASA Astrophysics Data System (ADS)
Lee, Taewoong; Lee, Hyounggun; Lee, Wonho
2015-10-01
This study evaluated the use of Compton imaging technology to monitor prompt gamma rays emitted by 10B in boron neutron capture therapy (BNCT) applied to a computerized human phantom. The Monte Carlo method, including particle-tracking techniques, was used for simulation. The distribution of prompt gamma rays emitted by the phantom during irradiation with neutron beams is closely associated with the distribution of the boron in the phantom. The maximum likelihood expectation maximization (MLEM) method was applied to the information obtained from the detected prompt gamma rays to reconstruct the distribution of the tumor, including the boron uptake regions (BURs). The reconstructed Compton images of the prompt gamma rays were combined with the cross-sectional images of the human phantom. Quantitative analysis of the intensity curves showed that all combined images matched the predetermined conditions of the simulation. The tumors including the BURs were distinguishable if they were more than 2 cm apart.
Hom, Erik F. Y.; Marchis, Franck; Lee, Timothy K.; Haase, Sebastian; Agard, David A.; Sedat, John W.
2011-01-01
We describe an adaptive image deconvolution algorithm (AIDA) for myopic deconvolution of multi-frame and three-dimensional data acquired through astronomical and microscopic imaging. AIDA is a reimplementation and extension of the MISTRAL method developed by Mugnier and co-workers and shown to yield object reconstructions with excellent edge preservation and photometric precision [J. Opt. Soc. Am. A 21, 1841 (2004)]. Written in Numerical Python with calls to a robust constrained conjugate gradient method, AIDA has significantly improved run times over the original MISTRAL implementation. Included in AIDA is a scheme to automatically balance maximum-likelihood estimation and object regularization, which significantly decreases the amount of time and effort needed to generate satisfactory reconstructions. We validated AIDA using synthetic data spanning a broad range of signal-to-noise ratios and image types and demonstrated the algorithm to be effective for experimental data from adaptive optics–equipped telescope systems and wide-field microscopy. PMID:17491626
NASA Astrophysics Data System (ADS)
Winiarek, Victor; Bocquet, Marc; Saunier, Olivier; Mathieu, Anne
2012-03-01
A major difficulty when inverting the source term of an atmospheric tracer dispersion problem is the estimation of the prior errors: those of the atmospheric transport model, those ascribed to the representativity of the measurements, those that are instrumental, and those attached to the prior knowledge on the variables one seeks to retrieve. In the case of an accidental release of pollutant, the reconstructed source is sensitive to these assumptions. This sensitivity makes the quality of the retrieval dependent on the methods used to model and estimate the prior errors of the inverse modeling scheme. We propose to use an estimation method for the errors' amplitude based on the maximum likelihood principle. Under semi-Gaussian assumptions, it takes into account, without approximation, the positivity assumption on the source. We apply the method to the estimation of the Fukushima Daiichi source term using activity concentrations in the air. The results are compared to an L-curve estimation technique and to Desroziers's scheme. The total reconstructed activities significantly depend on the chosen method. Because of the poor observability of the Fukushima Daiichi emissions, these methods provide lower bounds for cesium-137 and iodine-131 reconstructed activities. These lower bound estimates, 1.2 × 10^16 Bq for cesium-137, with an estimated standard deviation range of 15%-20%, and 1.9-3.8 × 10^17 Bq for iodine-131, with an estimated standard deviation range of 5%-10%, are of the same order of magnitude as those provided by the Japanese Nuclear and Industrial Safety Agency and about 5 to 10 times less than the Chernobyl atmospheric releases.
Reconstructing the Initial Density Field of the Local Universe: Methods and Tests with Mock Catalogs
NASA Astrophysics Data System (ADS)
Wang, Huiyuan; Mo, H. J.; Yang, Xiaohu; van den Bosch, Frank C.
2013-07-01
Our research objective in this paper is to reconstruct an initial linear density field, which follows the multivariate Gaussian distribution with variances given by the linear power spectrum of the current cold dark matter model and evolves through gravitational instabilities to the present-day density field in the local universe. For this purpose, we develop a Hamiltonian Markov Chain Monte Carlo method to obtain the linear density field from a posterior probability function that consists of two components: a prior of a Gaussian density field with a given linear spectrum and a likelihood term that is given by the current density field. The present-day density field can be reconstructed from galaxy groups using the method developed in Wang et al. Using a realistic mock Sloan Digital Sky Survey DR7, obtained by populating dark matter halos in the Millennium simulation (MS) with galaxies, we show that our method can effectively and accurately recover both the amplitudes and phases of the initial, linear density field. To examine the accuracy of our method, we use N-body simulations to evolve these reconstructed initial conditions to the present day. The resimulated density field thus obtained accurately matches the original density field of the MS in the density range 0.3 ≲ ρ/ρ̄ ≲ 20 without any significant bias. In particular, the Fourier phases of the resimulated density fields are tightly correlated with those of the original simulation down to a scale corresponding to a wavenumber of ~1 h Mpc^-1, much smaller than the translinear scale, which corresponds to a wavenumber of ~0.15 h Mpc^-1.
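For orientation, a generic leapfrog Hamiltonian Monte Carlo sampler is sketched below on a toy Gaussian target; the paper's actual posterior combines a Gaussian prior on the linear density with a likelihood tied to the present-day field, which this sketch does not attempt to reproduce.

```python
import numpy as np

rng = np.random.default_rng(1)

def hmc(logp, grad, x0, n_samples=1000, eps=0.1, n_leapfrog=20):
    x, out = np.asarray(x0, float), []
    for _ in range(n_samples):
        p = rng.normal(size=x.shape)            # resample momentum
        xn, pn = x.copy(), p.copy()
        pn += 0.5 * eps * grad(xn)              # leapfrog: half momentum step
        for _ in range(n_leapfrog - 1):
            xn += eps * pn
            pn += eps * grad(xn)
        xn += eps * pn
        pn += 0.5 * eps * grad(xn)              # final half momentum step
        log_accept = (logp(xn) - 0.5 * pn @ pn) - (logp(x) - 0.5 * p @ p)
        if np.log(rng.uniform()) < log_accept:  # Metropolis correction
            x = xn
        out.append(x.copy())
    return np.array(out)

logp = lambda x: -0.5 * x @ x                   # toy standard Gaussian target
samples = hmc(logp, lambda x: -x, np.zeros(5))
```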
Automated Reconstruction of Neural Trees Using Front Re-initialization
Mukherjee, Amit; Stepanyants, Armen
2013-01-01
This paper proposes a greedy algorithm for automated reconstruction of neural arbors from light microscopy stacks of images. The algorithm is based on the minimum cost path method. While the minimum cost path, obtained using the Fast Marching Method, results in a trace with the least cumulative cost between the start and the end points, it is not sufficient for the reconstruction of neural trees. This is because sections of the minimum cost path can erroneously travel through the image background with undetectable detriment to the cumulative cost. To circumvent this problem we propose an algorithm that grows a neural tree from a specified root by iteratively re-initializing the Fast Marching fronts. The speed image used in the Fast Marching Method is generated by computing the average outward flux of the gradient vector flow field. Each iteration of the algorithm produces a candidate extension by allowing the front to travel a specified distance and then tracking from the farthest point of the front back to the tree. A robust likelihood ratio test is used to evaluate the quality of the candidate extension by comparing voxel intensities along the extension to those in the foreground and the background. The qualified extensions are appended to the current tree, the front is re-initialized, and Fast Marching is continued until the stopping criterion is met. To evaluate the performance of the algorithm we reconstructed 6 stacks of two-photon microscopy images and compared the results to the ground truth reconstructions by using the DIADEM metric. The average comparison score was 0.82 out of 1.0, which is on par with the performance achieved by expert manual tracers. PMID:24386539
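A toy version of the front-growing step can be put together with the scikit-fmm package: compute travel times from a root voxel over a speed image and backtrack a candidate extension by descending the arrival-time map. The speed image, seed position, and 8-neighbor descent below are simplifying assumptions; the paper derives its speed image from the gradient vector flow field and vets extensions with a likelihood-ratio test.

```python
import numpy as np
import skfmm  # scikit-fmm, a Fast Marching Method solver

# toy speed image: a bright horizontal "neurite" travels fast, background slowly
speed = 0.05 * np.ones((64, 64))
speed[30:34, :] = 1.0

phi = np.ones_like(speed)
phi[32, 2] = -1.0                               # root voxel seeds the front
t = np.asarray(skfmm.travel_time(phi, speed))   # arrival time of the front

def backtrack(t, start):
    # follow decreasing arrival time through 8-neighbors back to the seed
    path = [start]
    while True:
        i, j = path[-1]
        window = [(a, b)
                  for a in range(max(i - 1, 0), min(i + 2, t.shape[0]))
                  for b in range(max(j - 1, 0), min(j + 2, t.shape[1]))]
        nxt = min(window, key=lambda p: t[p])
        if nxt == (i, j):                       # local minimum: seed reached
            return path
        path.append(nxt)

candidate = backtrack(t, (32, 60))              # one candidate extension trace
```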
Simulated maximum likelihood method for estimating kinetic rates in gene expression.
Tian, Tianhai; Xu, Songlin; Gao, Junbin; Burrage, Kevin
2007-01-01
Kinetic rate in gene expression is a key measurement of the stability of gene products and gives important information for the reconstruction of genetic regulatory networks. Recent developments in experimental technologies have made it possible to measure the numbers of transcripts and protein molecules in single cells. Although estimation methods based on deterministic models have been proposed aimed at evaluating kinetic rates from experimental observations, these methods cannot tackle noise in gene expression that may arise from discrete processes of gene expression, small numbers of mRNA transcripts, fluctuations in the activity of transcriptional factors and variability in the experimental environment. In this paper, we develop effective methods for estimating kinetic rates in genetic regulatory networks. The simulated maximum likelihood method is used to evaluate parameters in stochastic models described by either stochastic differential equations or discrete biochemical reactions. Different types of non-parametric density functions are used to measure the transitional probability of experimental observations. For stochastic models described by biochemical reactions, we propose to use the simulated frequency distribution to evaluate the transitional density, based on the discrete nature of stochastic simulations. A genetic optimization algorithm is used as an efficient tool to search for optimal reaction rates. Numerical results indicate that the proposed methods can give robust estimations of kinetic rates with good accuracy.
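A compact sketch of simulated maximum likelihood for an SDE model is given below: simulate many one-step paths by Euler-Maruyama, approximate the transition density with a kernel density estimate, and sum log-densities over the observed transitions. The synthesis-degradation drift model and all rate names are hypothetical stand-ins.

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(0)

def step(theta, x0, dt, n_paths):
    # Euler-Maruyama step for a hypothetical model dX = (k_syn - k_deg X)dt + s dW
    k_syn, k_deg, s = theta
    x = np.full(n_paths, x0, dtype=float)
    return x + (k_syn - k_deg * x) * dt + s * np.sqrt(dt) * rng.normal(size=n_paths)

def sim_loglik(theta, obs, dt, n_paths=2000):
    # transition density of each observed step approximated by a KDE over
    # simulated one-step paths started from the previous observation
    ll = 0.0
    for x0, x1 in zip(obs[:-1], obs[1:]):
        samples = step(theta, x0, dt, n_paths)
        ll += np.log(gaussian_kde(samples)(x1)[0] + 1e-300)
    return ll

# obs: molecule numbers measured at regular intervals dt (user data);
# maximize sim_loglik over theta, e.g. with a genetic/evolutionary optimizer
```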
NASA Astrophysics Data System (ADS)
Krestyannikov, E.; Tohka, J.; Ruotsalainen, U.
2008-06-01
This paper presents a novel statistical approach for joint estimation of regions-of-interest (ROIs) and the corresponding time-activity curves (TACs) from dynamic positron emission tomography (PET) brain projection data. It is based on optimizing the joint objective function that consists of a data log-likelihood term and two penalty terms reflecting the available a priori information about the human brain anatomy. The developed local optimization strategy iteratively updates both the ROI and TAC parameters and is guaranteed to monotonically increase the objective function. The quantitative evaluation of the algorithm is performed with numerically and Monte Carlo-simulated dynamic PET brain data of the 11C-Raclopride and 18F-FDG tracers. The results demonstrate that the method outperforms the existing sequential ROI quantification approaches in terms of accuracy, and can noticeably reduce the errors in TACs arising due to the finite spatial resolution and ROI delineation.
Sheehan, Joanne; Sherman, Kerry A; Lam, Thomas; Boyages, John
2007-04-01
Little is known of the psychosocial factors associated with decision regret in the context of breast reconstruction following mastectomy for breast cancer treatment. Moreover, there is a paucity of theoretically based research in the area of post-decision regret. Adopting the theoretical framework of the Monitoring Process Model (Cancer 1995;76(1):167-177), the current study assessed the role of information satisfaction, current psychological distress and the moderating effect of monitoring coping style in the experience of regret over the decision to undergo reconstructive surgery. Women (N=123) diagnosed with breast cancer who had undergone immediate or delayed breast reconstruction following mastectomy participated in the study. The majority of participants (52.8%, n=65) experienced no decision regret, 27.6% experienced mild regret and 19.5% moderate to strong regret. Bivariate analyses indicated that decision regret was associated with low satisfaction with preparatory information, depression, anxiety and stress. Multinomial logistic regression analysis showed, controlling for mood state and time since last reconstructive procedure, that lower satisfaction with information and increased depression were associated with increased likelihood of experiencing regret. Monitoring coping style moderated the association between anxiety and regret (beta=-0.10, OR=0.91, p=0.01), whereby low monitors who were highly anxious had a greater likelihood of experiencing regret than highly anxious high monitors. Copyright (c) 2006 John Wiley & Sons, Ltd.
On the assessment of spatial resolution of PET systems with iterative image reconstruction
NASA Astrophysics Data System (ADS)
Gong, Kuang; Cherry, Simon R.; Qi, Jinyi
2016-03-01
Spatial resolution is an important metric for performance characterization in PET systems. Measuring spatial resolution is straightforward with a linear reconstruction algorithm, such as filtered backprojection, and can be performed by reconstructing a point source scan and calculating the full-width-at-half-maximum (FWHM) along the principal directions. With the widespread adoption of iterative reconstruction methods, it is desirable to quantify the spatial resolution using an iterative reconstruction algorithm. However, the task can be difficult because the reconstruction algorithms are nonlinear and the non-negativity constraint can artificially enhance the apparent spatial resolution if a point source image is reconstructed without any background. Thus, it was recommended that a background should be added to the point source data before reconstruction for resolution measurement. However, there has been no detailed study on the effect of the point source contrast on the measured spatial resolution. Here we use point source scans from a preclinical PET scanner to investigate the relationship between measured spatial resolution and the point source contrast. We also evaluate whether the reconstruction of an isolated point source is predictive of the ability of the system to resolve two adjacent point sources. Our results indicate that when the point source contrast is below a certain threshold, the measured FWHM remains stable. Once the contrast is above the threshold, the measured FWHM monotonically decreases with increasing point source contrast. In addition, the measured FWHM also monotonically decreases with iteration number for maximum likelihood estimation. Therefore, when measuring system resolution with an iterative reconstruction algorithm, we recommend using a low-contrast point source and a fixed number of iterations.
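The FWHM measurement itself reduces to a Gaussian fit along a profile through the point source; a minimal version is below. Following the abstract's recommendation, the profile should come from a reconstruction with background added; the baseline term b absorbs that background.

```python
import numpy as np
from scipy.optimize import curve_fit

def gauss(x, a, mu, sigma, b):
    return a * np.exp(-0.5 * ((x - mu) / sigma) ** 2) + b

def fwhm(profile, pixel_mm):
    # fit a 1D Gaussian to the point-source profile along one principal axis
    x = np.arange(profile.size) * pixel_mm
    p0 = [profile.max() - profile.min(), x[profile.argmax()], pixel_mm, profile.min()]
    (a, mu, sigma, b), _ = curve_fit(gauss, x, profile, p0=p0)
    return 2.0 * np.sqrt(2.0 * np.log(2.0)) * abs(sigma)   # FWHM = 2.355 sigma
```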
Lu, Xin; Zhou, Haijian; Du, Xiaoli; Liu, Sha; Xu, Jialiang; Cui, Zhigang; Pang, Bo; Kan, Biao
2016-11-01
Vibrio parahaemolyticus is a common seafood-borne pathogenic bacterium which causes gastroenteritis in humans. Continuous surveillance of the molecular characteristics of clinical and environmental V. parahaemolyticus strains needs to be conducted for epidemiological and genetic purposes. To generate a picture of the population distribution of V. parahaemolyticus in eastern China isolated from clinical cases of gastroenteritis and environmental samples, we investigated the genetic and evolutionary relationships of the strains using the commonly used multi-locus sequence typing (MLST, in which seven house-keeping genes are used in the protocol). A high genetic diversity within the V. parahaemolyticus population was observed, but ST3 was still dominant in the clinical strains, and 103 new sequence types (STs) were found in the clinical strains by searching in the global V. parahaemolyticus MLST database. With these genetically diverse strains, we estimated the recombination rates of the loci in MLST analysis. The locus recA was found to be subject to an exceptionally high rate of recombination, and the recombinant single nucleotide polymorphisms (SNPs) were also identified within the seven loci. The phylogenetic tree of the strains was reconstructed using the maximum likelihood method after removing the recombinant SNPs of the seven loci, and the minimum spanning tree was reconstructed with the six loci without recA. Some changes were observed in comparison with the previously used methods, suggesting that homologous recombination has a role in shaping the clonal structure of V. parahaemolyticus. We propose a recombination-free SNP strategy in the clonality analysis of V. parahaemolyticus, especially when using the maximum likelihood method. Copyright © 2016. Published by Elsevier B.V.
Parallelizable 3D statistical reconstruction for C-arm tomosynthesis system
NASA Astrophysics Data System (ADS)
Wang, Beilei; Barner, Kenneth; Lee, Denny
2005-04-01
Clinical diagnosis and security detection tasks increasingly require 3D information which is difficult or impossible to obtain from 2D (two-dimensional) radiographs. As a 3D (three-dimensional) radiographic and non-destructive imaging technique, digital tomosynthesis is especially suited to cases where 3D information is required but complete projection data are not available. Nowadays, FBP (filtered back projection) is extensively used in industry for its speed and simplicity. However, it is hard to deal with situations where only a limited number of projections from constrained directions are available, or the SNR (signal-to-noise ratio) of the projections is low. In order to deal with noise and take into account a priori information of the object, a statistical image reconstruction method is described based on the acquisition model of X-ray projections. We formulate a ML (maximum likelihood) function for this model and develop an ordered-subsets iterative algorithm to estimate the unknown attenuation of the object. Simulations show that satisfactory results can be obtained after 1 to 2 iterations, after which there is no significant improvement of the image quality. An adaptive Wiener filter is also applied to the reconstructed image to remove its noise. Some approximations to speed up the reconstruction computation are also considered. Applying this method to computer-generated projections of a revised Shepp phantom and true projections from diagnostic radiographs of a patient's hand and mammography images yields reconstructions with impressive quality. Parallel programming is also implemented and tested. The quality of the reconstructed object is preserved, while the computation time is reduced by a factor of almost the number of threads used.
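The ordered-subsets iteration described above cycles an EM update over angularly interleaved subsets of the projection data; a dense-matrix sketch is below, with the subset partition and the epsilon guards as implementation assumptions.

```python
import numpy as np

def osem(A, y, n_subsets=8, n_iter=2):
    # A: (n_rays, n_voxels) projection matrix; y: measured counts per ray
    x = np.ones(A.shape[1])
    subsets = [np.arange(s, A.shape[0], n_subsets) for s in range(n_subsets)]
    for _ in range(n_iter):
        for idx in subsets:        # one EM update per interleaved subset
            As = A[idx]
            x *= (As.T @ (y[idx] / (As @ x + 1e-12))) / (As.sum(axis=0) + 1e-12)
    return x
```

Each pass through all subsets costs about one ML-EM iteration but advances the estimate roughly n_subsets times faster, which is why 1 to 2 iterations can suffice.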
DendroBLAST: approximate phylogenetic trees in the absence of multiple sequence alignments.
Kelly, Steven; Maini, Philip K
2013-01-01
The rapidly growing availability of genome information has created considerable demand for both fast and accurate phylogenetic inference algorithms. We present a novel method called DendroBLAST for reconstructing phylogenetic dendrograms/trees from protein sequences using BLAST. This method differs from other methods by incorporating a simple model of sequence evolution to test the effect of introducing sequence changes on the reliability of the bipartitions in the inferred tree. Using realistic simulated sequence data we demonstrate that this method produces phylogenetic trees that are more accurate than other commonly used distance-based methods, though not as accurate as maximum likelihood methods applied to good-quality multiple sequence alignments. In addition to tests on simulated data, we use DendroBLAST to generate input trees for a supertree reconstruction of the phylogeny of the Archaea. This independent analysis produces an approximate phylogeny of the Archaea that has both high precision and recall when compared to previously published analysis of the same dataset using conventional methods. Taken together, these results demonstrate that approximate phylogenetic trees can be produced in the absence of multiple sequence alignments, and we propose that these trees will provide a platform for improving and informing downstream bioinformatic analysis. A web implementation of the DendroBLAST method is freely available for use at http://www.dendroblast.com/.
Ibaraki, Masanobu; Sato, Kaoru; Mizuta, Tetsuro; Kitamura, Keishi; Miura, Shuichi; Sugawara, Shigeki; Shinohara, Yuki; Kinoshita, Toshibumi
2009-09-01
A modified version of row-action maximum likelihood algorithm (RAMLA) using a 'subset-dependent' relaxation parameter for noise suppression, or dynamic RAMLA (DRAMA), has been proposed. The aim of this study was to assess the capability of DRAMA reconstruction for quantitative (15)O brain positron emission tomography (PET). Seventeen healthy volunteers were studied using a 3D PET scanner. The PET study included 3 sequential PET scans for C(15)O, (15)O(2) and H(2)(15)O. First, the number of main iterations (N(it)) in DRAMA was optimized in relation to image convergence and statistical image noise. To estimate the statistical variance of reconstructed images on a pixel-by-pixel basis, a sinogram bootstrap method was applied using list-mode PET data. Once the optimal N(it) was determined, statistical image noise and quantitative parameters, i.e., cerebral blood flow (CBF), cerebral blood volume (CBV), cerebral metabolic rate of oxygen (CMRO(2)) and oxygen extraction fraction (OEF), were compared between DRAMA and conventional FBP. DRAMA images were post-filtered so that their spatial resolutions were matched with FBP images with a 6-mm FWHM Gaussian filter. Based on the count recovery data, N(it) = 3 was determined as an optimal parameter for (15)O PET data. The sinogram bootstrap analysis revealed that DRAMA reconstruction resulted in less statistical noise, especially in a low-activity region compared to FBP. Agreement of quantitative values between FBP and DRAMA was excellent. For DRAMA images, average gray matter values of CBF, CBV, CMRO(2) and OEF were 46.1 +/- 4.5 (mL/100 mL/min), 3.35 +/- 0.40 (mL/100 mL), 3.42 +/- 0.35 (mL/100 mL/min) and 42.1 +/- 3.8 (%), respectively. These values were comparable to corresponding values with FBP images: 46.6 +/- 4.6 (mL/100 mL/min), 3.34 +/- 0.39 (mL/100 mL), 3.48 +/- 0.34 (mL/100 mL/min) and 42.4 +/- 3.8 (%), respectively. DRAMA reconstruction is applicable to quantitative (15)O PET study and is superior to conventional FBP in terms of image quality.
Maximum parsimony, substitution model, and probability phylogenetic trees.
Weng, J F; Thomas, D A; Mareels, I
2011-01-01
The problem of inferring phylogenies (phylogenetic trees) is one of the main problems in computational biology. There are three main methods for inferring phylogenies: Maximum Parsimony (MP), Distance Matrix (DM) and Maximum Likelihood (ML), of which the MP method is the most well-studied and popular. In the MP method the optimization criterion is the number of substitutions of the nucleotides computed from the differences in the investigated nucleotide sequences. However, the MP method is often criticized because it counts only the substitutions observable at the current time, omitting the unobservable substitutions that really occurred in the evolutionary history. In order to take the unobservable substitutions into account, substitution models have been established and are now widely used in the DM and ML methods, but these substitution models cannot be used within the classical MP method. Recently the authors proposed a probability representation model for phylogenetic trees, and the reconstructed trees in this model are called probability phylogenetic trees. One of the advantages of the probability representation model is that it can include a substitution model to infer phylogenetic trees based on the MP principle. In this paper we explain how to use a substitution model in the reconstruction of probability phylogenetic trees and show the advantage of this approach with examples.
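The "observable substitutions" criticism is easiest to see in code: classical MP scoring (Fitch's algorithm, sketched below for a single site on a fixed four-taxon tree) counts only the minimum number of state changes needed, with no substitution model involved.

```python
def fitch(tree, states):
    # tree: nested tuples over leaf names; states: dict leaf -> nucleotide
    # returns (candidate state set, minimum substitution count) for one site
    if isinstance(tree, str):
        return {states[tree]}, 0
    (left, nl), (right, nr) = fitch(tree[0], states), fitch(tree[1], states)
    if left & right:
        return left & right, nl + nr
    return left | right, nl + nr + 1

_, cost = fitch((("t1", "t2"), ("t3", "t4")),
                {"t1": "A", "t2": "A", "t3": "G", "t4": "G"})
print(cost)  # 1: a single observable substitution explains the site
```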
Zhao, Zi-Fang; Li, Xue-Zhu; Wan, You
2017-12-01
The local field potential (LFP) is a signal reflecting the electrical activity of neurons surrounding the electrode tip. Synchronization between LFP signals provides important details about how neural networks are organized. Synchronization between two distant brain regions is hard to detect using linear synchronization algorithms like correlation and coherence. Synchronization likelihood (SL) is a non-linear synchronization-detecting algorithm widely used in studies of neural signals from two distant brain areas. One drawback of non-linear algorithms is the heavy computational burden. In the present study, we proposed a graphics processing unit (GPU)-accelerated implementation of an SL algorithm with optional 2-dimensional time-shifting. We tested the algorithm with both artificial data and raw LFP data. The results showed that this method revealed detailed information from the original data with the synchronization values of two temporal axes, delay time and onset time, and thus can be used to reconstruct the temporal structure of a neural network. Our results suggest that this GPU-accelerated method can be extended to other algorithms for processing time-series signals (like EEG and fMRI) using similar recording techniques.
Multi-ray-based system matrix generation for 3D PET reconstruction
NASA Astrophysics Data System (ADS)
Moehrs, Sascha; Defrise, Michel; Belcari, Nicola; DelGuerra, Alberto; Bartoli, Antonietta; Fabbri, Serena; Zanetti, Gianluigi
2008-12-01
Iterative image reconstruction algorithms for positron emission tomography (PET) require a sophisticated system matrix (model) of the scanner. Our aim is to set up such a model offline for the YAP-(S)PET II small animal imaging tomograph in order to use it subsequently with standard ML-EM (maximum-likelihood expectation maximization) and OSEM (ordered subset expectation maximization) for fully three-dimensional image reconstruction. In general, the system model can be obtained analytically, via measurements or via Monte Carlo simulations. In this paper, we present the multi-ray method, which can be considered as a hybrid method to set up the system model offline. It incorporates accurate analytical (geometric) considerations as well as crystal depth and crystal scatter effects. At the same time, it has the potential to model seamlessly other physical aspects such as the positron range. The proposed method is based on multiple rays which are traced from/to the detector crystals through the image volume. Such a ray-tracing approach itself is not new; however, we derive a novel mathematical formulation of the approach and investigate the positioning of the integration (ray-end) points. First, we study single system matrix entries and show that the positioning and weighting of the ray-end points according to Gaussian integration give better results compared to equally spaced integration points (trapezoidal integration), especially if only a small number of integration points (rays) are used. Additionally, we show that, for a given variance of the single matrix entries, the number of rays (events) required to calculate the whole matrix is a factor of 20 larger when using a pure Monte-Carlo-based method. Finally, we analyse the quality of the model by reconstructing phantom data from the YAP-(S)PET II scanner.
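The Gaussian-versus-trapezoidal comparison can be reproduced on a toy ray integral; with the same five integration points, Gauss-Legendre placement is typically far more accurate than equal spacing. The attenuation profile below is invented purely for illustration.

```python
import numpy as np

f = lambda t: np.exp(-3.0 * t) * (1.0 + 0.5 * np.sin(8.0 * t))  # toy ray profile
t_ref = np.linspace(0.0, 1.0, 100001)
exact = np.trapz(f(t_ref), t_ref)                # dense reference value

n = 5
t_eq = np.linspace(0.0, 1.0, n)                  # equally spaced ray-end points
trap = np.trapz(f(t_eq), t_eq)

nodes, weights = np.polynomial.legendre.leggauss(n)
gauss = 0.5 * np.sum(weights * f(0.5 * (nodes + 1.0)))  # mapped from [-1,1] to [0,1]

print(abs(trap - exact), abs(gauss - exact))     # the Gauss error is far smaller
```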
DOE Office of Scientific and Technical Information (OSTI.GOV)
Anderson, Penny R.; Freedman, Gary; Nicolaou, Nicos
Purpose: The purpose of this study was to evaluate the likelihood of complications and cosmetic results among breast cancer patients who underwent modified radical mastectomy (MRM) and breast reconstruction followed by radiation therapy (RT) to either a temporary tissue expander (TTE) or permanent breast implant (PI). Methods and Materials: Records were reviewed of 74 patients with breast cancer who underwent MRM followed by breast reconstruction and RT. Reconstruction consisted of a TTE usually followed by exchange to a PI. RT was delivered to the TTE in 62 patients and to the PI in 12 patients. Dose to the reconstructed chest wall was 50 Gy. Median follow-up was 48 months. The primary end point was the incidence of complications involving the reconstruction. Results: There was no significant difference in the rate of major complications in the PI group (0%) vs. 4.8% in the TTE group. No patients lost the reconstruction in the PI group. Three patients lost the reconstruction in the TTE group. There were excellent/good cosmetic scores in 90% of the TTE group and 80% of the PI group (p = 0.22). On multivariate regression models, the type of reconstruction irradiated had no statistically significant impact on complication rates. Conclusions: Patients treated with breast reconstruction and RT can experience low rates of major complications. We demonstrate no significant difference in the overall rate of major or minor complications between the TTE and PI groups. Postmastectomy RT to either the TTE or the PI should be considered as acceptable treatment options in all eligible patients.
Direct parametric reconstruction in dynamic PET myocardial perfusion imaging: in vivo studies.
Petibon, Yoann; Rakvongthai, Yothin; El Fakhri, Georges; Ouyang, Jinsong
2017-05-07
Dynamic PET myocardial perfusion imaging (MPI) used in conjunction with tracer kinetic modeling enables the quantification of absolute myocardial blood flow (MBF). However, MBF maps computed using the traditional indirect method (i.e. post-reconstruction voxel-wise fitting of a kinetic model to PET time-activity curves, TACs) suffer from poor signal-to-noise ratio (SNR). Direct reconstruction of kinetic parameters from raw PET projection data has been shown to offer parametric images with higher SNR compared to the indirect method. The aim of this study was to extend and evaluate the performance of a direct parametric reconstruction method using in vivo dynamic PET MPI data for the purpose of quantifying MBF. Dynamic PET MPI studies were performed on two healthy pigs using a Siemens Biograph mMR scanner. List-mode PET data for each animal were acquired following a bolus injection of ~7-8 mCi of 18F-flurpiridaz, a myocardial perfusion agent. Fully-3D dynamic PET sinograms were obtained by sorting the coincidence events into 16 temporal frames covering ~5 min after radiotracer administration. Additionally, eight independent noise realizations of both scans, each containing 1/8th of the total number of events, were generated from the original list-mode data. Dynamic sinograms were then used to compute parametric maps using the conventional indirect method and the proposed direct method. For both methods, a one-tissue compartment model accounting for spillover from the left and right ventricle blood-pools was used to describe the kinetics of 18F-flurpiridaz. An image-derived arterial input function obtained from a TAC taken in the left ventricle cavity was used for tracer kinetic analysis. For the indirect method, frame-by-frame images were estimated using two fully-3D reconstruction techniques: the standard ordered subset expectation maximization (OSEM) reconstruction algorithm on one side, and the one-step late maximum a posteriori (OSL-MAP) algorithm on the other side, which incorporates a quadratic penalty function. The parametric images were then calculated using voxel-wise weighted least-square fitting of the reconstructed myocardial PET TACs. For the direct method, parametric images were estimated directly from the dynamic PET sinograms using a maximum a posteriori (MAP) parametric reconstruction algorithm which optimizes an objective function comprised of the Poisson log-likelihood term, the kinetic model and a quadratic penalty function. Maximization of the objective function with respect to each set of parameters was achieved using a preconditioned conjugate gradient algorithm with a specifically developed pre-conditioner. The performance of the direct method was evaluated by comparing voxel- and segment-wise estimates of K1, the tracer transport rate (ml·min^-1·ml^-1), to those obtained using the indirect method applied to both OSEM and OSL-MAP dynamic reconstructions. The proposed direct reconstruction method produced K1 maps with visibly lower noise than the indirect method based on OSEM and OSL-MAP reconstructions. At normal count levels, the direct method was shown to outperform the indirect method based on OSL-MAP in the sense that at matched level of bias, reduced regional noise levels were obtained. At lower count levels, the direct method produced K1 estimates with significantly lower standard deviation across noise realizations than the indirect method based on OSL-MAP at matched bias level. In all cases, the direct method yielded lower noise and standard deviation than the indirect method based on OSEM. Overall, the proposed direct reconstruction offered a better bias-variance tradeoff than the indirect method applied to either OSEM or OSL-MAP. Direct parametric reconstruction as applied to in vivo dynamic PET MPI data is therefore a promising method for producing MBF maps with lower variance.
Direct parametric reconstruction in dynamic PET myocardial perfusion imaging: in vivo studies
NASA Astrophysics Data System (ADS)
Petibon, Yoann; Rakvongthai, Yothin; El Fakhri, Georges; Ouyang, Jinsong
2017-05-01
Dynamic PET myocardial perfusion imaging (MPI) used in conjunction with tracer kinetic modeling enables the quantification of absolute myocardial blood flow (MBF). However, MBF maps computed using the traditional indirect method (i.e. post-reconstruction voxel-wise fitting of kinetic model to PET time-activity-curves-TACs) suffer from poor signal-to-noise ratio (SNR). Direct reconstruction of kinetic parameters from raw PET projection data has been shown to offer parametric images with higher SNR compared to the indirect method. The aim of this study was to extend and evaluate the performance of a direct parametric reconstruction method using in vivo dynamic PET MPI data for the purpose of quantifying MBF. Dynamic PET MPI studies were performed on two healthy pigs using a Siemens Biograph mMR scanner. List-mode PET data for each animal were acquired following a bolus injection of ~7-8 mCi of 18F-flurpiridaz, a myocardial perfusion agent. Fully-3D dynamic PET sinograms were obtained by sorting the coincidence events into 16 temporal frames covering ~5 min after radiotracer administration. Additionally, eight independent noise realizations of both scans—each containing 1/8th of the total number of events—were generated from the original list-mode data. Dynamic sinograms were then used to compute parametric maps using the conventional indirect method and the proposed direct method. For both methods, a one-tissue compartment model accounting for spillover from the left and right ventricle blood-pools was used to describe the kinetics of 18F-flurpiridaz. An image-derived arterial input function obtained from a TAC taken in the left ventricle cavity was used for tracer kinetic analysis. For the indirect method, frame-by-frame images were estimated using two fully-3D reconstruction techniques: the standard ordered subset expectation maximization (OSEM) reconstruction algorithm on one side, and the one-step late maximum a posteriori (OSL-MAP) algorithm on the other side, which incorporates a quadratic penalty function. The parametric images were then calculated using voxel-wise weighted least-square fitting of the reconstructed myocardial PET TACs. For the direct method, parametric images were estimated directly from the dynamic PET sinograms using a maximum a posteriori (MAP) parametric reconstruction algorithm which optimizes an objective function comprised of the Poisson log-likelihood term, the kinetic model and a quadratic penalty function. Maximization of the objective function with respect to each set of parameters was achieved using a preconditioned conjugate gradient algorithm with a specifically developed pre-conditioner. The performance of the direct method was evaluated by comparing voxel- and segment-wise estimates of {{K}1} , the tracer transport rate (ml · min-1 · ml-1), to those obtained using the indirect method applied to both OSEM and OSL-MAP dynamic reconstructions. The proposed direct reconstruction method produced {{K}1} maps with visibly lower noise than the indirect method based on OSEM and OSL-MAP reconstructions. At normal count levels, the direct method was shown to outperform the indirect method based on OSL-MAP in the sense that at matched level of bias, reduced regional noise levels were obtained. At lower count levels, the direct method produced {{K}1} estimates with significantly lower standard deviation across noise realizations than the indirect method based on OSL-MAP at matched bias level. 
In all cases, the direct method yielded lower noise and standard deviation than the indirect method based on OSEM. Overall, the proposed direct reconstruction offered a better bias-variance tradeoff than the indirect method applied to either OSEM or OSL-MAP. Direct parametric reconstruction applied to in vivo dynamic PET MPI data is therefore a promising method for producing MBF maps with lower variance.
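For readers who want to experiment with the indirect method described above, a minimal sketch of a one-tissue compartment model with a single blood-spillover fraction, fitted voxel-wise by weighted least squares, is given below. The frame times, input function, weights and starting values are illustrative assumptions, not the authors' protocol, and the model is simplified to one spillover term rather than separate left- and right-ventricle terms.

```python
import numpy as np
from scipy.optimize import least_squares

t = np.linspace(0, 5, 16)            # frame mid-times (min), ~5 min protocol
Cp = t * np.exp(-1.5 * t)            # toy image-derived arterial input function

def model_tac(params, t, Cp):
    """C_PET(t) = (1 - fv) * K1 * exp(-k2*t) (convolved with) Cp(t) + fv * Cp(t)."""
    K1, k2, fv = params
    dt = t[1] - t[0]
    tissue = K1 * np.convolve(np.exp(-k2 * t), Cp)[:len(t)] * dt
    return (1.0 - fv) * tissue + fv * Cp

def fit_voxel(tac, t, Cp, w):
    # weighted least-squares fit of (K1, k2, fv) for one voxel TAC
    res = least_squares(
        lambda p: np.sqrt(w) * (model_tac(p, t, Cp) - tac),
        x0=[0.5, 0.2, 0.3], bounds=([0, 0, 0], [5, 5, 1]))
    return res.x

w = np.ones_like(t)                  # frame weights (counts-based in practice)
noisy = model_tac([0.8, 0.3, 0.2], t, Cp) + 0.002 * np.random.randn(len(t))
print(fit_voxel(noisy, t, Cp, w))    # ≈ [0.8, 0.3, 0.2]
```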
NASA Astrophysics Data System (ADS)
Ahn, Sangtae; Ross, Steven G.; Asma, Evren; Miao, Jun; Jin, Xiao; Cheng, Lishui; Wollenweber, Scott D.; Manjeshwar, Ravindra M.
2015-08-01
Ordered subset expectation maximization (OSEM) is the most widely used algorithm for clinical PET image reconstruction. OSEM is usually stopped early and post-filtered to control image noise and does not necessarily achieve optimal quantitation accuracy. As an alternative to OSEM, we have recently implemented a penalized likelihood (PL) image reconstruction algorithm for clinical PET using the relative difference penalty with the aim of improving quantitation accuracy without compromising visual image quality. Preliminary clinical studies have demonstrated that visual image quality, including lesion conspicuity, in images reconstructed by the PL algorithm is better than or at least as good as that in OSEM images. In this paper we evaluate the lesion quantitation accuracy of the PL algorithm with the relative difference penalty compared to OSEM by using various data sets, including phantom data acquired with an anthropomorphic torso phantom, an extended oval phantom and the NEMA image quality phantom; clinical data; and hybrid clinical data generated by adding simulated lesion data to clinical data. We focus on mean standardized uptake values and compare them for PL and OSEM using both time-of-flight (TOF) and non-TOF data. The results demonstrate improvements of PL in lesion quantitation accuracy compared to OSEM, with a particular improvement in cold background regions such as the lungs.
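The relative difference penalty referred to above has a simple closed form (following Nuyts et al.): psi(a, b) = (a - b)^2 / (a + b + gamma*|a - b|). A minimal sketch of evaluating it over nearest-neighbor pairs of an image follows; the wrap-around neighborhood and the value of gamma are illustrative simplifications.

```python
import numpy as np

def rdp_penalty(img, gamma=2.0, eps=1e-12):
    """Sum of (a-b)^2 / (a+b+gamma|a-b|) over right/down neighbor pairs."""
    total = 0.0
    for shifted in (np.roll(img, -1, axis=0), np.roll(img, -1, axis=1)):
        d = img - shifted
        s = img + shifted
        total += np.sum(d**2 / (s + gamma * np.abs(d) + eps))
    return total

x = np.abs(np.random.rand(64, 64))   # nonnegative activity image
print(rdp_penalty(x))
```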
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kambeitz, Manuel
This thesis presents an analysis of excited states of B0, B+ and B0s mesons, decaying to B mesons while emitting a pion or kaon. They are reconstructed from their decay products and a selection is performed to discard wrongly reconstructed B(s) mesons with the multivariate analysis software NeuroBayes, as described in chapter 5. In the training process, the sPlot method and measured and simulated data are used. Chapter 6 describes how the properties of excited B(s) mesons are determined by an unbinned maximum likelihood fit to their mass spectra. The systematic uncertainties determined in this analysis are described in chapter 7. The results of this thesis are presented in chapter 8 and a conclusion is given in chapter 9. The results shown in this thesis have been published before in [1].
Kim, Hyun Suk; Choi, Hong Yeop; Lee, Gyemin; Ye, Sung-Joon; Smith, Martin B; Kim, Geehyun
2018-03-01
The aim of this work is to develop a gamma-ray/neutron dual-particle imager, based on rotational modulation collimators (RMCs) and pulse shape discrimination (PSD)-capable scintillators, for possible applications in radioactivity monitoring as well as nuclear security and safeguards. A Monte Carlo simulation study was performed to design an RMC system for dual-particle imaging, and modulation patterns were obtained for gamma-ray and neutron sources in various configurations. We applied an image reconstruction algorithm utilizing the maximum-likelihood expectation-maximization method, based on analytical modeling of the source-detector configurations, to the Monte Carlo simulation results. Both gamma-ray and neutron source distributions were reconstructed and evaluated in terms of signal-to-noise ratio, showing the viability of developing an RMC-based gamma-ray/neutron dual-particle imager using PSD-capable scintillators.
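A minimal sketch of the maximum-likelihood expectation-maximization update at the heart of such reconstructions is shown below; the system matrix here is a random stand-in, whereas the paper builds it from an analytical model of the source-detector (RMC) configuration.

```python
import numpy as np

def mlem(A, y, n_iter=50):
    """x <- x * (A^T (y / (A x))) / (A^T 1), elementwise."""
    x = np.ones(A.shape[1])
    sens = A.T @ np.ones(A.shape[0])          # sensitivity image
    for _ in range(n_iter):
        ratio = y / np.maximum(A @ x, 1e-12)
        x *= (A.T @ ratio) / np.maximum(sens, 1e-12)
    return x

A = np.random.rand(128, 64)                   # stand-in modulation model
x_true = 50 * np.random.rand(64)              # source distribution
y = np.random.poisson(A @ x_true)             # Poisson-distributed counts
print(mlem(A, y)[:5])
```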
Reconstruction of far-field tsunami amplitude distributions from earthquake sources
Geist, Eric L.; Parsons, Thomas E.
2016-01-01
The probability distribution of far-field tsunami amplitudes is explained in relation to the distribution of seismic moment at subduction zones. Tsunami amplitude distributions at tide gauge stations follow a similar functional form, well described by a tapered Pareto distribution that is parameterized by a power-law exponent and a corner amplitude. Distribution parameters are first established for eight tide gauge stations in the Pacific, using maximum likelihood estimation. A procedure is then developed to reconstruct the tsunami amplitude distribution that consists of four steps: (1) define the distribution of seismic moment at subduction zones; (2) establish a source-station scaling relation from regression analysis; (3) transform the seismic moment distribution to a tsunami amplitude distribution for each subduction zone; and (4) mix the transformed distribution for all subduction zones to an aggregate tsunami amplitude distribution specific to the tide gauge station. The tsunami amplitude distribution is adequately reconstructed for four tide gauge stations using globally constant seismic moment distribution parameters established in previous studies. In comparisons to empirical tsunami amplitude distributions from maximum likelihood estimation, the reconstructed distributions consistently exhibit higher corner amplitude values, implying that in most cases, the empirical catalogs are too short to include the largest amplitudes. Because the reconstructed distribution is based on a catalog of earthquakes that is much larger than the tsunami catalog, it is less susceptible to the effects of record-breaking events and more indicative of the actual distribution of tsunami amplitudes.
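As an illustration of step (1) of such a workflow, the tapered Pareto density has a closed form, f(x) = (beta/x + 1/a_c) (a_0/x)^beta exp((a_0 - x)/a_c) for x >= a_0, so maximum likelihood estimation of the exponent and corner amplitude can be sketched directly; the threshold a_0, the starting values and the synthetic data are assumptions for illustration.

```python
import numpy as np
from scipy.optimize import minimize

def neg_loglik(params, x, a0):
    beta, ac = params
    if beta <= 0 or ac <= 0:
        return np.inf
    # log f(x) for the tapered Pareto with threshold a0
    return -np.sum(np.log(beta / x + 1.0 / ac)
                   + beta * np.log(a0 / x) + (a0 - x) / ac)

a0 = 0.1                                      # threshold amplitude (m)
x = a0 * (1 + np.random.pareto(1.0, 500))     # synthetic amplitudes above a0
fit = minimize(neg_loglik, x0=[1.0, 1.0], args=(x, a0), method="Nelder-Mead")
print("beta, corner amplitude:", fit.x)
```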
Scott, Brandon L; Hoppe, Adam D
2016-01-01
Fluorescence resonance energy transfer (FRET) microscopy is a powerful tool for imaging the interactions between fluorescently tagged proteins in two-dimensions. For FRET microscopy to reach its full potential, it must be able to image more than one pair of interacting molecules and image degradation from out-of-focus light must be reduced. Here we extend our previous work on the application of maximum likelihood methods to the 3-dimensional reconstruction of 3-way FRET interactions within cells. We validated the new method (3D-3Way FRET) by simulation and fluorescent protein test constructs expressed in cells. In addition, we improved the computational methods to create a 2-log reduction in computation time over our previous method (3DFSR). We applied 3D-3Way FRET to image the 3D subcellular distributions of HIV Gag assembly. Gag fused to three different FPs (CFP, YFP, and RFP), assembled into viral-like particles and created punctate FRET signals that become visible on the cell surface when 3D-3Way FRET was applied to the data. Control experiments in which YFP-Gag, RFP-Gag and free CFP were expressed, demonstrated localized FRET between YFP and RFP at sites of viral assembly that were not associated with CFP. 3D-3Way FRET provides the first approach for quantifying multiple FRET interactions while improving the 3D resolution of FRET microscopy data without introducing bias into the reconstructed estimates. This method should allow improvement of widefield, confocal and superresolution FRET microscopy data.
Joint amalgamation of most parsimonious reconciled gene trees
Scornavacca, Celine; Jacox, Edwin; Szöllősi, Gergely J.
2015-01-01
Motivation: Traditionally, gene phylogenies have been reconstructed solely on the basis of molecular sequences; this, however, often does not provide enough information to distinguish between statistically equivalent relationships. To address this problem, several recent methods have incorporated information on the species phylogeny in gene tree reconstruction, leading to dramatic improvements in accuracy. Probabilistic methods are able to estimate all model parameters but are computationally expensive, whereas parsimony methods, though generally more efficient, require a prior estimate of parameters and of the statistical support. Results: Here, we present the Tree Estimation using Reconciliation (TERA) algorithm, a parsimony based, species tree aware method for gene tree reconstruction based on a scoring scheme combining duplication, transfer and loss costs with an estimate of the sequence likelihood. TERA explores all reconciled gene trees that can be amalgamated from a sample of gene trees. Using a large scale simulated dataset, we demonstrate that TERA achieves the same accuracy as the corresponding probabilistic method while being faster, and outperforms other parsimony-based methods in both accuracy and speed. Running TERA on a set of 1099 homologous gene families from complete cyanobacterial genomes, we find that incorporating knowledge of the species tree results in a two thirds reduction in the number of apparent transfer events. Availability and implementation: The algorithm is implemented in our program TERA, which is freely available from http://mbb.univ-montp2.fr/MBB/download_sources/16__TERA. Contact: celine.scornavacca@univ-montp2.fr, ssolo@angel.elte.hu Supplementary information: Supplementary data are available at Bioinformatics online. PMID:25380957
Direct Reconstruction of CT-Based Attenuation Correction Images for PET With Cluster-Based Penalties
NASA Astrophysics Data System (ADS)
Kim, Soo Mee; Alessio, Adam M.; De Man, Bruno; Kinahan, Paul E.
2017-03-01
Extremely low-dose (LD) CT acquisitions used for PET attenuation correction have high levels of noise and potential bias artifacts due to photon starvation. This paper explores the use of a priori knowledge for iterative image reconstruction of the CT-based attenuation map. We investigate a maximum a posteriori framework with a cluster-based multinomial penalty for direct iterative coordinate descent (dICD) reconstruction of the PET attenuation map. The objective function for direct iterative attenuation map reconstruction used a Poisson log-likelihood data fit term and evaluated two image penalty terms of spatial and mixture distributions. The spatial regularization is based on a quadratic penalty. For the mixture penalty, we assumed that the attenuation map may consist of four material clusters: air + background, lung, soft tissue, and bone. Using simulated noisy sinogram data, dICD reconstruction was performed with different strengths of the spatial and mixture penalties. The combined spatial and mixture penalties reduced the root mean squared error (RMSE) by roughly a factor of two compared with weighted least squares and filtered backprojection reconstruction of the CT images. The combined spatial and mixture penalties resulted in only slightly lower RMSE compared with the spatial quadratic penalty alone. For direct PET attenuation map reconstruction from ultra-LD CT acquisitions, the combination of spatial and mixture penalties offers regularization of both variance and bias and is a potential method to reconstruct attenuation maps with negligible patient dose. The presented results, using a best-case histogram, suggest that the mixture penalty does not offer a substantive benefit over conventional quadratic regularization and diminish enthusiasm for exploring future applications of the mixture penalty.
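One plausible concrete form of the mixture penalty described above is the negative log of a four-component Gaussian mixture over attenuation values; the cluster means, widths and weights below are rough illustrative numbers (approximately cm^-1 at diagnostic energies), not the paper's fitted values.

```python
import numpy as np
from scipy.stats import norm

means   = np.array([0.00, 0.05, 0.19, 0.38])   # air, lung, soft tissue, bone
sigmas  = np.array([0.005, 0.02, 0.02, 0.05])
weights = np.array([0.40, 0.10, 0.40, 0.10])

def mixture_penalty(mu_img):
    # negative log of the mixture density, summed over voxels
    dens = weights * norm.pdf(mu_img[:, None], means, sigmas)
    return -np.sum(np.log(dens.sum(axis=1) + 1e-300))

mu = np.random.uniform(0.0, 0.4, 1000)         # toy attenuation map (flattened)
print(mixture_penalty(mu))
```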
Kück, Patrick; Meusemann, Karen; Dambach, Johannes; Thormann, Birthe; von Reumont, Björn M; Wägele, Johann W; Misof, Bernhard
2010-03-31
Methods of alignment masking, which refers to the technique of excluding alignment blocks prior to tree reconstruction, have been successful in improving the signal-to-noise ratio in sequence alignments. However, the lack of formally well defined methods to identify randomness in sequence alignments has prevented a routine application of alignment masking. In this study, we compared the effects on tree reconstructions of the most commonly used profiling method (GBLOCKS), which uses a predefined set of rules in combination with alignment masking, with a new profiling approach (ALISCORE) based on Monte Carlo resampling within a sliding window, using different data sets and alignment methods. While the GBLOCKS approach excludes variable sections above a certain threshold, the choice of which is left arbitrary, the ALISCORE algorithm is free of a priori rating of parameter space and therefore more objective. ALISCORE was successfully extended to amino acids using a proportional model and empirical substitution matrices to score randomness in multiple sequence alignments. A complex bootstrap resampling leads to an even distribution of scores of randomly similar sequences to assess randomness of the observed sequence similarity. Testing performance on real data, both masking methods, GBLOCKS and ALISCORE, helped to improve tree resolution. The sliding window approach was less sensitive to different alignments of identical data sets and performed equally well on all data sets. Concurrently, ALISCORE is capable of dealing with different substitution patterns and heterogeneous base composition. ALISCORE and the most relaxed GBLOCKS gap parameter setting performed best on all data sets. Correspondingly, Neighbor-Net analyses showed the greatest decrease in conflict. Alignment masking improves the signal-to-noise ratio in multiple sequence alignments prior to phylogenetic reconstruction. Given the robust performance of alignment profiling, alignment masking should routinely be used to improve tree reconstructions. Parametric methods of alignment profiling can be easily extended to more complex likelihood-based models of sequence evolution, which opens the possibility of further improvements.
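A toy sketch of the sliding-window, Monte Carlo flavor of profiling described above: each window's observed pairwise identity is compared with a null distribution obtained by shuffling residues. Window size, replicate count, and the single-pair identity score are deliberate simplifications of the published method.

```python
import numpy as np

def window_scores(aln, win=6, n_rand=100, rng=np.random.default_rng(0)):
    """aln: 2-D array of single-byte residues, shape (n_seqs, n_cols)."""
    scores = []
    for start in range(aln.shape[1] - win + 1):
        w = aln[:, start:start + win]
        obs = np.mean(w[0] == w[1])                 # identity of the first pair
        null = [np.mean(rng.permutation(w[0]) == w[1]) for _ in range(n_rand)]
        scores.append(obs - np.quantile(null, 0.95))  # >0: non-random similarity
    return np.array(scores)

aln = np.frombuffer(b"ACGTACGTACGTACGT" b"ACGTACGAACGTTCGT", dtype="S1").reshape(2, 16)
print(window_scores(aln))
```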
Partially incorrect fossil data augment analyses of discrete trait evolution in living species.
Puttick, Mark N
2016-08-01
Ancestral state reconstruction of discrete character traits is often vital when attempting to understand the origins and homology of traits in living species. The addition of fossils has been shown to alter our understanding of trait evolution in extant taxa, but researchers may avoid using fossils alongside extant species if only a few are known, or if the designation of the trait of interest is uncertain. Here, I investigate the impact of fossils and of incorrectly coded fossils on the ancestral state reconstruction of discrete morphological characters under a likelihood model. Under simulated phylogenies and data, likelihood-based models are generally accurate when estimating ancestral node values. Analyses with combined fossil and extant data always outperform analyses with extant species alone, even when around one quarter of the fossil information is incorrect. These results are especially pronounced when model assumptions are violated, such as when there is a trend away from the root value. Fossil data are of particular importance when attempting to estimate the root node character state. Attempts should be made to include fossils in analyses of discrete traits under likelihood, even if there is uncertainty in the fossil trait data. © 2016 The Authors.
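For intuition, the likelihood machinery used in such analyses can be sketched in a few lines for a binary trait on a tiny fixed tree, using Felsenstein's pruning algorithm with a symmetric-rate (Mk-type) model; the tree, branch lengths and rate below are made up for illustration.

```python
import numpy as np
from scipy.linalg import expm

q = 0.3                                     # symmetric transition rate
Q = np.array([[-q, q], [q, -q]])
P = lambda t: expm(Q * t)                   # transition probabilities over time t

# tree ((A:1, B:1):1, C:2); tip states A=0, B=0, C=1 as indicator vectors
tipA, tipB, tipC = np.array([1., 0.]), np.array([1., 0.]), np.array([0., 1.])
L_AB = (P(1.0) @ tipA) * (P(1.0) @ tipB)    # conditional likelihoods above A, B
L_root = (P(1.0) @ L_AB) * (P(2.0) @ tipC)  # join with tip C at the root
root_post = L_root / L_root.sum()           # equal root prior cancels out
print("P(root = 0), P(root = 1):", root_post)
```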
[3D bioprinting of cartilage: challenges concerning the reconstruction of a burned ear].
Visscher, Dafydd O; Bos, Ernst J; van Zuijlen, Paul P M
2015-01-01
Reconstruction of a severely maimed ear is a major challenge. The ear is highly flexible yet tough, and has a very complex three-dimensional shape. Reconstruction of a patient's burned ear is even more complex due to surrounding tissue damage. Not only does this hamper reconstruction options, it also increases the likelihood of issues when using synthetic implant materials. In such cases, rib cartilage is the preferred option, but this tissue has practical limitations too. For these reasons, tissue engineering and 3D bioprinting may have the potential to create personalized cartilage implants for burns patients. However, 3D bioprinting is a tool to facilitate the reconstruction, and not by itself the Holy Grail. The clinical application of this technique is still at a very early stage. Nevertheless, we expect that 3D bioprinting can be utilised for facial reconstruction following burns come 2020.
Bayesian framework for the evaluation of fiber evidence in a double murder--a case report.
Causin, Valerio; Schiavone, Sergio; Marigo, Antonio; Carresi, Pietro
2004-05-10
Fiber evidence found on a suspect vehicle was the only useful trace for reconstructing the dynamics of the transportation of two corpses. Optical microscopy, UV-Vis microspectrophotometry and infrared analysis were employed to compare fibers recovered in the trunk of a car to those of the blankets composing the wrapping in which the victims had been hidden. A "pseudo-1:1" taping permitted reconstruction of the spatial distribution of the traces and further strengthened the support for one of the hypotheses. The Likelihood Ratio (LR) was calculated in order to quantify the support given by the forensic evidence to the explanations proposed. A generalization of the Likelihood Ratio equation to cases analogous to this one has been derived. Fibers were the only traces that helped corroborate the crime scenario, in the absence of any DNA, fingerprint or ballistic evidence.
Sun, Mingzhai; Huang, Jiaqing; Bunyak, Filiz; Gumpper, Kristyn; De, Gejing; Sermersheim, Matthew; Liu, George; Lin, Pei-Hui; Palaniappan, Kannappan; Ma, Jianjie
2014-01-01
One key factor that limits the resolution of single-molecule superresolution microscopy is the localization accuracy of the activated emitters, which is usually degraded by two factors. One originates from the background noise due to out-of-focus signals, sample auto-fluorescence, and camera acquisition noise; the other is due to the low photon count of emitters in a single frame. With a fast acquisition rate, the activated emitters can last multiple frames before they transiently switch off or permanently bleach. Effectively incorporating the temporal information of these emitters is critical to improving the spatial resolution. However, the majority of existing reconstruction algorithms locate the emitters frame by frame, discarding or underusing the temporal information. Here we present a new image reconstruction algorithm based on tracklets, short trajectories of the same objects. We improve the localization accuracy by associating the same emitters from multiple frames to form tracklets and by aggregating signals to enhance the signal-to-noise ratio. We also introduce a weighted mean-shift algorithm (WMS) to automatically detect the number of modes (emitters) in overlapping regions of tracklets, so that not only well-separated single emitters but also individual emitters within multi-emitter groups can be identified and tracked. In combination with a maximum likelihood estimation (MLE) method, we are able to resolve low to medium densities of overlapping emitters with improved localization accuracy. We evaluate the performance of our method with both synthetic and experimental data, and show that the tracklet-based reconstruction is superior in localization accuracy, particularly for weak signals embedded in a strong background. Using this method, for the first time, we resolve the transverse tubule structure of mammalian skeletal muscle. PMID:24921337
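As a small illustration of the weighted mean-shift idea used to find emitter modes among pooled localizations, the sketch below runs one mode search with Gaussian kernel weights (e.g. photon counts); the bandwidth, weights and convergence rule are illustrative assumptions.

```python
import numpy as np

def weighted_mean_shift(points, weights, x0, bandwidth=0.05, n_iter=50):
    x = x0.astype(float)
    for _ in range(n_iter):
        k = weights * np.exp(-np.sum((points - x) ** 2, axis=1) / (2 * bandwidth ** 2))
        x_new = (k[:, None] * points).sum(axis=0) / k.sum()
        if np.linalg.norm(x_new - x) < 1e-9:     # converged to a mode
            break
        x = x_new
    return x

rng = np.random.default_rng(2)
pts = np.vstack([rng.normal(0.0, 0.02, (50, 2)),    # two nearby emitters
                 rng.normal(0.2, 0.02, (30, 2))])
w = rng.uniform(100, 1000, len(pts))                # e.g. photons per localization
print(weighted_mean_shift(pts, w, x0=np.array([0.05, 0.05])))
```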
NASA Astrophysics Data System (ADS)
Bovy Jo; Hogg, David W.; Roweis, Sam T.
2011-06-01
We generalize the well-known mixtures of Gaussians approach to density estimation and the accompanying Expectation-Maximization technique for finding the maximum likelihood parameters of the mixture to the case where each data point carries an individual d-dimensional uncertainty covariance and has unique missing data properties. This algorithm reconstructs the error-deconvolved or "underlying" distribution function common to all samples, even when the individual data points are samples from different distributions, obtained by convolving the underlying distribution with the heteroskedastic uncertainty distribution of the data point and projecting out the missing data directions. We show how this basic algorithm can be extended with conjugate priors on all of the model parameters and a "split-and-merge" procedure designed to avoid local maxima of the likelihood. We demonstrate the full method by applying it to the problem of inferring the three-dimensional velocity distribution of stars near the Sun from noisy two-dimensional, transverse velocity measurements from the Hipparcos satellite.
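The error-deconvolution idea is easiest to see in the single-Gaussian (K=1) special case of the algorithm, where the E-step shrinks each datum by its own noise covariance and the M-step re-estimates the underlying mean and covariance; this sketch is a simplified illustration, not the full mixture implementation.

```python
import numpy as np

def xd_single_gaussian(X, S, n_iter=100):
    """X: (n, d) noisy data; S: (n, d, d) per-point noise covariances."""
    n, d = X.shape
    m, V = X.mean(axis=0), np.cov(X.T)
    for _ in range(n_iter):
        b, B = np.empty_like(X), np.zeros((n, d, d))
        for i in range(n):                            # E-step per datum
            T = V @ np.linalg.inv(V + S[i])
            b[i] = m + T @ (X[i] - m)                 # posterior mean of true value
            B[i] = V - T @ V                          # posterior covariance
        m = b.mean(axis=0)                            # M-step
        diff = b - m
        V = (B + np.einsum('ni,nj->nij', diff, diff)).mean(axis=0)
    return m, V

rng = np.random.default_rng(1)
true = rng.multivariate_normal([0, 0], [[1.0, 0.5], [0.5, 1.0]], size=400)
S = np.array([np.diag(rng.uniform(0.1, 1.0, 2)) for _ in range(400)])
X = np.array([rng.multivariate_normal(v, s) for v, s in zip(true, S)])
print(xd_single_gaussian(X, S)[1])                    # ≈ underlying covariance
```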
Pedigree reconstruction from SNP data: parentage assignment, sibship clustering and beyond.
Huisman, Jisca
2017-09-01
Data on hundreds or thousands of single nucleotide polymorphisms (SNPs) provide detailed information about the relationships between individuals, but currently few tools can turn this information into a multigenerational pedigree. I present the r package sequoia, which assigns parents, clusters half-siblings sharing an unsampled parent and assigns grandparents to half-sibships. Assignments are made after consideration of the likelihoods of all possible first-, second- and third-degree relationships between the focal individuals, as well as the traditional alternative of being unrelated. This careful exploration of the local likelihood surface is implemented in a fast, heuristic hill-climbing algorithm. Distinction between the various categories of second-degree relatives is possible when likelihoods are calculated conditional on at least one parent of each focal individual. Performance was tested on simulated data sets with realistic genotyping error rates and missingness, based on three different large pedigrees (N = 1000-2000). This included a complex pedigree with overlapping generations, occasional close inbreeding and some unknown birth years. Parentage assignment was highly accurate down to about 100 independent SNPs (error rate <0.1%) and fast (<1 min), as most pairs can be excluded from being parent-offspring based on opposite homozygosity. For full pedigree reconstruction, 40% of parents were assumed nongenotyped. Reconstruction resulted in low error rates (<0.3%) and high assignment rates (>99%) within limited computation time (typically <1 h) when at least 200 independent SNPs were used. In three empirical data sets, relatedness estimated from the inferred pedigree was strongly correlated with genomic relatedness. © 2017 The Authors. Molecular Ecology Resources Published by John Wiley & Sons Ltd.
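The opposite-homozygosity exclusion mentioned above is cheap to sketch: a true parent-offspring pair should be opposite homozygotes at essentially no loci apart from genotyping error, so pairs exceeding a small threshold can be discarded before any likelihood is computed. The coding scheme and threshold below are illustrative.

```python
import numpy as np

def could_be_parent_offspring(g1, g2, err_loci=2):
    """g1, g2: genotype vectors coded 0/1/2; negative values = missing."""
    ok = (g1 >= 0) & (g2 >= 0)
    opposing = np.sum((((g1 == 0) & (g2 == 2)) | ((g1 == 2) & (g2 == 0))) & ok)
    return opposing <= err_loci

rng = np.random.default_rng(4)
g_parent = rng.integers(0, 3, 500)
from_parent = np.where(g_parent == 1, rng.integers(0, 2, 500), g_parent // 2)
g_child = from_parent + rng.integers(0, 2, 500)   # inherits one parental allele
g_other = rng.integers(0, 3, 500)                 # unrelated individual
print(could_be_parent_offspring(g_parent, g_child))   # True
print(could_be_parent_offspring(g_parent, g_other))   # almost surely False
```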
Licona-Vera, Yuyini; Ornelas, Juan Francisco
2017-06-05
Geographical and temporal patterns of diversification in bee hummingbirds (Mellisugini) were assessed with respect to the evolution of migration, critical for colonization of North America. We generated a dated multilocus phylogeny of the Mellisugini based on a dense sampling using Bayesian inference, maximum-likelihood and maximum parsimony methods, and reconstructed the ancestral states of distributional areas in a Bayesian framework and migratory behavior using maximum parsimony, maximum-likelihood and re-rooting methods. All phylogenetic analyses confirmed monophyly of the Mellisugini and the inclusion of Atthis, Calothorax, Doricha, Eulidia, Mellisuga, Microstilbon, Myrmia, Tilmatura, and Thaumastura. Mellisugini consists of two clades: (1) South American species (including Tilmatura dupontii), and (2) species distributed in North and Central America and the Caribbean islands. The second clade consists of four subclades: Mexican (Calothorax, Doricha) and Caribbean (Archilochus, Calliphlox, Mellisuga) sheartails, Calypte, and Selasphorus (incl. Atthis). Coalescent-based dating places the origin of the Mellisugini in the mid-to-late Miocene, with crown ages of most subclades in the early Pliocene, and subsequent species splits in the Pleistocene. Bee hummingbirds reached western North America by the end of the Miocene and the ancestral mellisuginid (bee hummingbirds) was reconstructed as sedentary, with four independent gains of migratory behavior during the evolution of the Mellisugini. Early colonization of North America and subsequent evolution of migration best explained biogeographic and diversification patterns within the Mellisugini. The repeated evolution of long-distance migration by different lineages was critical for the colonization of North America, contributing to the radiation of bee hummingbirds. Comparative phylogeography is needed to test whether the repeated evolution of migration resulted from northward expansion of southern sedentary populations.
GPSit: An automated method for evolutionary analysis of nonculturable ciliated microeukaryotes.
Chen, Xiao; Wang, Yurui; Sheng, Yalan; Warren, Alan; Gao, Shan
2018-05-01
Microeukaryotes are among the most important components of the microbial food web in almost all aquatic and terrestrial ecosystems worldwide. In order to gain a better understanding of their roles and functions in ecosystems, sequencing coupled with phylogenomic analyses of entire genomes or transcriptomes is increasingly used to reconstruct the evolutionary history and classification of these microeukaryotes, and thus provides a more robust framework for determining their systematics and diversity. However, phylogenomic research usually requires high levels of hands-on bioinformatics experience. Here, we propose an efficient automated method, "Guided Phylogenomic Search in trees" (GPSit), which starts from predicted protein sequences of newly sequenced species and a well-defined customized orthologous database. Compared with previous protocols, our method streamlines the entire workflow by integrating all essential and other optional operations. In so doing, the manual operation time for reconstructing phylogenetic relationships is reduced from days to several hours. Furthermore, GPSit supports user-defined parameters in most steps and thus allows users to adapt it to their studies. The effectiveness of GPSit is demonstrated by incorporating available online data and new single-cell data of three nonculturable marine ciliates (Anteholosticha monilata, Deviata sp. and Diophrys scutum) at moderate sequencing coverage (~5×). Our results indicate that the former can reconstruct robust "deep" phylogenetic relationships while the latter reveals the presence of intermediate taxa in shallow relationships. Based on empirical phylogenomic data, we also used GPSit to evaluate the impact of different levels of missing data on two commonly used methods of phylogenetic analysis, maximum likelihood (ML) and Bayesian inference (BI). We found that BI is less sensitive to missing data when fast-evolving sites are removed. © 2018 John Wiley & Sons Ltd.
NASA Astrophysics Data System (ADS)
Reinhart, Anna Merle; Spindeldreier, Claudia Katharina; Jakubek, Jan; Martišíková, Mária
2017-06-01
Carbon ion beam radiotherapy enables a very localised dose deposition. However, even small changes in the patient geometry or positioning errors can significantly distort the dose distribution. A live, non-invasive monitoring system of the beam delivery within the patient is therefore highly desirable, and could improve patient treatment. We present a novel three-dimensional method for imaging the beam in the irradiated object, exploiting the measured tracks of single secondary ions emerging under irradiation. The secondary particle tracks are detected with a TimePix stack, a set of parallel pixelated semiconductor detectors. We developed a three-dimensional reconstruction algorithm based on maximum likelihood expectation maximization. We demonstrate the applicability of the new method in the irradiation of a cylindrical PMMA phantom of human head size with a carbon ion pencil beam of 226 MeV u-1. The beam image in the phantom is reconstructed from a set of nine discrete detector positions between -80° and 50° from the beam axis. Furthermore, we demonstrate the potential to visualize inhomogeneities by irradiating a PMMA phantom with an air gap as well as bone and adipose tissue surrogate inserts. We successfully reconstructed a three-dimensional image of the treatment beam in the phantom from single secondary ion tracks. The beam image corresponds well to the beam direction and energy. In addition, cylindrical inhomogeneities with a diameter of 2.85 cm and density differences down to 0.3 g cm-3 relative to the surrounding material are clearly visualized. This novel three-dimensional method to image a therapeutic carbon ion beam in the irradiated object does not interfere with the treatment and requires knowledge only of single secondary ion tracks. Even with detectors with only a small angular coverage, the three-dimensional reconstruction of the fragmentation points presented in this work was found to be feasible.
Image transmission system using adaptive joint source and channel decoding
NASA Astrophysics Data System (ADS)
Liu, Weiliang; Daut, David G.
2005-03-01
In this paper, an adaptive joint source and channel decoding method is designed to accelerate the convergence of the iterative log-domain sum-product decoding procedure of LDPC codes as well as to improve the reconstructed image quality. Error resilience modes are used in the JPEG2000 source codec, which makes it possible to provide useful source-decoded information to the channel decoder. After each iteration, a tentative decoding is made and the channel-decoded bits are then sent to the JPEG2000 decoder. Due to the error resilience modes, some bits are known to be either correct or in error. The positions of these bits are then fed back to the channel decoder. The log-likelihood ratios (LLR) of these bits are then modified by a weighting factor for the next iteration. By observing the statistics of the decoding procedure, the weighting factor is designed as a function of the channel condition. That is, for lower channel SNR, a larger factor is assigned, and vice versa. Results show that the proposed joint decoding method can greatly reduce the number of iterations, and thereby reduce the decoding delay considerably. At the same time, this method always outperforms the non-source-controlled decoding method by up to 5 dB in terms of PSNR for various reconstructed images.
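A hedged sketch of the feedback step: bits flagged by the source decoder's error-resilience checks get their LLRs rescaled before the next LDPC iteration. The SNR-dependent weighting function below is a made-up stand-in for the paper's empirically designed one.

```python
import numpy as np

def reweight_llrs(llr, known_correct, known_error, snr_db):
    w = np.clip(2.0 - 0.2 * snr_db, 1.0, 2.0)   # larger boost at lower SNR
    llr = llr.copy()
    llr[known_correct] *= w                     # reinforce trusted bits
    llr[known_error] /= w                       # attenuate suspect bits
    return llr

llr = 2.0 * np.random.randn(1024)               # channel LLRs for one codeword
corr = np.zeros(1024, bool); corr[:100] = True  # flagged correct by source checks
err = np.zeros(1024, bool);  err[100:120] = True
print(reweight_llrs(llr, corr, err, snr_db=2.0)[:5])
```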
Weak and Dynamic GNSS Signal Tracking Strategies for Flight Missions in the Space Service Volume
Jing, Shuai; Zhan, Xingqun; Liu, Baoyu; Chen, Maolin
2016-01-01
Weak signals and high dynamics are the two primary concerns of space navigation using GNSS (Global Navigation Satellite System) in the space service volume (SSV). The paper first defines a reference third-order phase-locked loop (PLL) as the baseline of an onboard GNSS receiver, and demonstrates the inadequacy of this conventional architecture. Then an adaptive four-state Kalman filter (KF)-based algorithm is introduced to optimize the loop noise bandwidth, which can adaptively regulate its filter gain according to the received signal power and line-of-sight (LOS) dynamics. To overcome the problem of losing lock in weak-signal and high-dynamic environments, an open-loop tracking strategy aided by an inertial navigation system (INS) is recommended, and the traditional maximum likelihood estimation (MLE) method is modified in a non-coherent way by reconstructing the likelihood cost function. Furthermore, a typical mission with combined orbital maneuvering and non-maneuvering arcs is taken as a test object for the two proposed strategies. Finally, an experiment based on computer simulation confirms the effectiveness of the adaptive four-state KF-based strategy under non-maneuvering conditions and the virtue of INS-assisted methods under maneuvering conditions. PMID:27598164
Bayesian Abel Inversion in Quantitative X-Ray Radiography
Howard, Marylesa; Fowler, Michael; Luttman, Aaron; ...
2016-05-19
A common image formation process in high-energy X-ray radiography is to have a pulsed power source that emits X-rays through a scene, a scintillator that absorbs X-rays and fluoresces in the visible spectrum in response to the absorbed photons, and a CCD camera that images the visible light emitted from the scintillator. The intensity image is related to areal density, and, for an object that is radially symmetric about a central axis, the Abel transform then gives the object's volumetric density. Two of the primary drawbacks to classical variational methods for Abel inversion are their sensitivity to the type and scale of regularization chosen and the lack of natural methods for quantifying the uncertainties associated with the reconstructions. In this work we cast the Abel inversion problem within a statistical framework in order to compute volumetric object densities from X-ray radiographs and to quantify uncertainties in the reconstruction. A hierarchical Bayesian model is developed with a likelihood based on a Gaussian noise model and with priors placed on the unknown density profile, the data precision matrix, and two scale parameters. This allows the data to drive the localization of features in the reconstruction and results in a joint posterior distribution for the unknown density profile, the prior parameters, and the spatial structure of the precision matrix. Results of the density reconstructions and pointwise uncertainty estimates are presented for both synthetic signals and real data from a U.S. Department of Energy X-ray imaging facility.
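The deterministic core of the problem is linear: with the object assumed piecewise-constant on annuli, each annulus contributes a chord length to each line of sight, giving a forward matrix that maps radial density to areal density. The discretization below is the standard "onion-peeling" sketch; the Bayesian hierarchy (priors, precision matrix, scale parameters) from the paper is not reproduced.

```python
import numpy as np

def abel_matrix(radii):
    """radii: increasing annulus boundaries r_0 = 0 < r_1 < ... < r_n."""
    n = len(radii) - 1
    y = 0.5 * (radii[:-1] + radii[1:])     # impact parameters of the chords
    A = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            lo, hi = radii[j], radii[j + 1]
            if hi > y[i]:
                a = max(lo, y[i])
                A[i, j] = 2 * (np.sqrt(hi**2 - y[i]**2)
                               - np.sqrt(max(a**2 - y[i]**2, 0.0)))
    return A

r = np.linspace(0, 1, 33)
f = np.exp(-0.5 * (0.5 * (r[:-1] + r[1:]) / 0.3) ** 2)   # radial density profile
print((abel_matrix(r) @ f)[:4])                          # areal density samples
```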
NASA Astrophysics Data System (ADS)
Yu, Baihui; Zhao, Ziran; Wang, Xuewu; Wu, Dufan; Zeng, Zhi; Zeng, Ming; Wang, Yi; Cheng, Jianping
2016-01-01
The Tsinghua University MUon Tomography facilitY (TUMUTY) has been built and is used to reconstruct special objects with complex structures. Since fine images are required, the conventional Maximum Likelihood Scattering and Displacement (MLSD) algorithm is employed. However, due to the statistical characteristics of muon tomography and the incompleteness of the data, the reconstruction is always unstable and accompanied by severe noise. In this paper, we propose a Maximum a Posteriori (MAP) algorithm for muon tomography regularization, in which an edge-preserving prior on the scattering density image is introduced into the objective function. The prior takes the lp norm (p>0) of the image gradient magnitude, where p=1 and p=2 correspond to the well-known total-variation (TV) and Gaussian priors, respectively. The optimization transfer principle is utilized to minimize the objective function in a unified framework. At each iteration the problem is reduced to solving a cubic equation through paraboloidal surrogates. To validate the method, the French Test Object (FTO) is imaged by both numerical simulation and TUMUTY. The proposed algorithm is used for the reconstruction, and different norms are studied in detail, including l2, l1, l0.5, and an l2-0.5 mixture norm. Compared with the MLSD method, MAP achieves better image quality in both structure preservation and noise reduction. Furthermore, compared with previous work in which a one-dimensional image was acquired, we achieve relatively clear three-dimensional images of the FTO, in which the inner air hole and the tungsten shell are visible.
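The prior family is simple to evaluate: the l_p norm of the image gradient magnitude, with p=1 giving TV and p=2 a quadratic (Gaussian) prior. A small sketch with forward differences follows; boundary handling is illustrative.

```python
import numpy as np

def lp_gradient_penalty(img, p):
    gx = np.diff(img, axis=0, append=img[-1:, :])   # forward differences
    gy = np.diff(img, axis=1, append=img[:, -1:])
    mag = np.sqrt(gx**2 + gy**2)                    # gradient magnitude
    return np.sum(mag**p)

x = np.random.rand(32, 32)
for p in (2.0, 1.0, 0.5):                           # norms studied in the paper
    print(p, lp_gradient_penalty(x, p))
```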
Csuros, Miklos; Rogozin, Igor B.; Koonin, Eugene V.
2011-01-01
Protein-coding genes in eukaryotes are interrupted by introns, but intron densities differ widely between eukaryotic lineages. Vertebrates, some invertebrates and green plants have intron-rich genes, with 6-7 introns per kilobase of coding sequence, whereas most of the other eukaryotes have intron-poor genes. We reconstructed the history of intron gain and loss using a probabilistic Markov model (Markov Chain Monte Carlo, MCMC) on 245 orthologous genes from 99 genomes representing three of the five supergroups of eukaryotes for which multiple genome sequences are available. Intron-rich ancestors are confidently reconstructed for each major group, with 53 to 74% of the human intron density inferred with 95% confidence for the Last Eukaryotic Common Ancestor (LECA). The results of the MCMC reconstruction are compared with the reconstructions obtained using Maximum Likelihood (ML) and Dollo parsimony methods. An excellent agreement between the MCMC and ML inferences is demonstrated, whereas Dollo parsimony introduces a noticeable bias in the estimates, typically yielding lower ancestral intron densities than MCMC and ML. Evolution of eukaryotic genes was dominated by intron loss, with substantial gain only at the bases of several major branches including plants and animals. The highest intron density, 120 to 130% of the human value, is inferred for the last common ancestor of animals. The reconstruction shows that the entire line of descent from LECA to mammals was intron-rich, a state conducive to the evolution of alternative splicing. PMID:21935348
A new model to predict weak-lensing peak counts. II. Parameter constraint strategies
NASA Astrophysics Data System (ADS)
Lin, Chieh-An; Kilbinger, Martin
2015-11-01
Context. Peak counts have been shown to be an excellent tool for extracting the non-Gaussian part of the weak lensing signal. Recently, we developed a fast stochastic forward model to predict weak-lensing peak counts. Our model is able to reconstruct the underlying distribution of observables for analysis. Aims: In this work, we explore and compare various strategies for constraining a parameter using our model, focusing on the matter density Ωm and the density fluctuation amplitude σ8. Methods: First, we examine the impact from the cosmological dependency of covariances (CDC). Second, we perform the analysis with the copula likelihood, a technique that makes a weaker assumption than does the Gaussian likelihood. Third, direct, non-analytic parameter estimations are applied using the full information of the distribution. Fourth, we obtain constraints with approximate Bayesian computation (ABC), an efficient, robust, and likelihood-free algorithm based on accept-reject sampling. Results: We find that neglecting the CDC effect enlarges parameter contours by 22% and that the covariance-varying copula likelihood is a very good approximation to the true likelihood. The direct techniques work well in spite of noisier contours. Concerning ABC, the iterative process converges quickly to a posterior distribution that is in excellent agreement with results from our other analyses. The time cost for ABC is reduced by two orders of magnitude. Conclusions: The stochastic nature of our weak-lensing peak count model allows us to use various techniques that approach the true underlying probability distribution of observables, without making simplifying assumptions. Our work can be generalized to other observables where forward simulations provide samples of the underlying distribution.
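For readers unfamiliar with ABC, the accept-reject version referred to above fits in a few lines: draw parameters from the prior, forward-simulate, and keep draws whose summary statistic lands within a tolerance of the observation. The peak-count simulator below is a trivial stand-in for the stochastic forward model.

```python
import numpy as np

rng = np.random.default_rng(7)

def simulate_peaks(omega_m, sigma_8):
    # stand-in forward model: expected peak count rises with Omega_m, sigma_8
    return rng.poisson(100 * omega_m * sigma_8**2)

obs = simulate_peaks(0.3, 0.8)                      # "observed" summary
accepted = []
while len(accepted) < 500:
    om, s8 = rng.uniform(0.1, 0.5), rng.uniform(0.5, 1.1)   # prior draws
    if abs(simulate_peaks(om, s8) - obs) <= 5:              # tolerance test
        accepted.append((om, s8))
print(np.mean(accepted, axis=0))                    # approximate posterior mean
```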
Input-output mapping reconstruction of spike trains at dorsal horn evoked by manual acupuncture
NASA Astrophysics Data System (ADS)
Wei, Xile; Shi, Dingtian; Yu, Haitao; Deng, Bin; Lu, Meili; Han, Chunxiao; Wang, Jiang
2016-12-01
In this study, a generalized linear model (GLM) is used to reconstruct the mapping from acupuncture stimulation to spike trains, driven by action potential data. The electrical signals are recorded in the spinal dorsal horn after manual acupuncture (MA) manipulations with different frequencies applied at the "Zusanli" point of experimental rats. A maximum-likelihood method is adopted to estimate the parameters of the GLM and the quantified value of the assumed model input. By validating the accuracy of firings generated from the established GLM, it is found that the input-output mapping of spike trains evoked by acupuncture can be successfully reconstructed for different frequencies. Furthermore, by comparing the performance of several GLMs based on distinct inputs, we find that an input in the form of a half-sine with noise can well describe the generator potential induced by the mechanical action of acupuncture. In particular, the comparison of reproducing the experimental spikes for five selected inputs is in accordance with the phenomenon found in Hodgkin-Huxley (H-H) model simulations, which indicates that the mapping from the half-sine-with-noise input to the experimental spikes matches the real encoding scheme to some extent. These studies provide new insight into the coding processes and information transfer of acupuncture.
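A minimal sketch of this kind of GLM fit, assuming a half-sine drive standing in for the acupuncture input and statsmodels in place of the authors' own estimator:

```python
import numpy as np
import statsmodels.api as sm

t = np.linspace(0, 1, 200)
drive = np.clip(np.sin(2 * np.pi * 2 * t), 0, None) + 0.1 * np.random.randn(200)
X = sm.add_constant(drive)                  # design matrix: bias + input
rate = np.exp(-1.0 + 2.0 * drive)           # ground-truth firing intensity
spikes = np.random.poisson(rate)            # binned spike counts
fit = sm.GLM(spikes, X, family=sm.families.Poisson()).fit()
print(fit.params)                           # ≈ [-1.0, 2.0]
```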
Gaussianization for fast and accurate inference from cosmological data
NASA Astrophysics Data System (ADS)
Schuhmann, Robert L.; Joachimi, Benjamin; Peiris, Hiranya V.
2016-06-01
We present a method to transform multivariate unimodal non-Gaussian posterior probability densities into approximately Gaussian ones via non-linear mappings, such as Box-Cox transformations and generalizations thereof. This permits an analytical reconstruction of the posterior from a point sample, like a Markov chain, and simplifies the subsequent joint analysis with other experiments. This way, a multivariate posterior density can be reported efficiently, by compressing the information contained in Markov Chain Monte Carlo samples. Further, the model evidence integral (i.e. the marginal likelihood) can be computed analytically. This method is analogous to the search for normal parameters in the cosmic microwave background, but is more general. The search for the optimally Gaussianizing transformation is performed computationally through a maximum-likelihood formalism; its quality can be judged by how well the credible regions of the posterior are reproduced. We demonstrate that our method outperforms kernel density estimates in this objective. Further, we select marginal posterior samples from Planck data with several distinct strongly non-Gaussian features, and verify the reproduction of the marginal contours. To demonstrate evidence computation, we Gaussianize the joint distribution of data from weak lensing and baryon acoustic oscillations, for different cosmological models, and find a preference for flat Λ cold dark matter. Comparing to values computed with the Savage-Dickey density ratio, and Population Monte Carlo, we find good agreement of our method within the spread of the other two.
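The one-dimensional core of the idea can be sketched with scipy, which already chooses the Box-Cox lambda by maximum likelihood; the multivariate rotations and generalized transformations of the paper are omitted.

```python
import numpy as np
from scipy import stats

sample = np.random.gamma(shape=2.0, scale=1.0, size=5000)   # skewed "posterior"
transformed, lam = stats.boxcox(sample)                     # ML estimate of lambda
print(lam)
print(stats.skew(sample), stats.skew(transformed))          # skew ≈ 0 afterwards
```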
Cheng, Xiaoyin; Li, Zhoulei; Liu, Zhen; Navab, Nassir; Huang, Sung-Cheng; Keller, Ulrich; Ziegler, Sibylle; Shi, Kuangyu
2015-02-12
The separation of multiple PET tracers within an overlapping scan based on intrinsic differences in tracer pharmacokinetics is challenging, due to the limited signal-to-noise ratio (SNR) of PET measurements and the high complexity of fitting models. In this study, we developed a direct parametric image reconstruction (DPIR) method for estimating kinetic parameters and recovering single-tracer information from rapid multi-tracer PET measurements. This is achieved by integrating a multi-tracer model in a reduced parameter space (RPS) into dynamic image reconstruction. This new RPS model is reformulated from an existing multi-tracer model and contains fewer parameters for kinetic fitting. Ordered-subsets expectation-maximization (OSEM) was employed to approximate the log-likelihood function with respect to the kinetic parameters. To incorporate the multi-tracer model, an iterative weighted nonlinear least-squares (WNLS) method was employed. The proposed multi-tracer DPIR (MT-DPIR) algorithm was evaluated on dual-tracer PET simulations ([18F]FDG and [11C]MET) as well as on preclinical PET measurements ([18F]FLT and [18F]FDG). The performance of the proposed algorithm was compared to the indirect parameter estimation method with the original dual-tracer model. The respective contributions of the RPS technique and the DPIR method to the performance of the new algorithm were analyzed in detail. For the preclinical evaluation, the tracer separation results were compared with single [18F]FDG scans of the same subjects measured 2 days before the dual-tracer scan. The results of the simulation and preclinical studies demonstrate that the proposed MT-DPIR method can improve the separation of multiple tracers for PET image quantification and kinetic parameter estimation.
Is multiple-sequence alignment required for accurate inference of phylogeny?
Höhl, Michael; Ragan, Mark A
2007-04-01
The process of inferring phylogenetic trees from molecular sequences almost always starts with a multiple alignment of these sequences but can also be based on methods that do not involve multiple sequence alignment. Very little is known about the accuracy with which such alignment-free methods recover the correct phylogeny or about the potential for increasing their accuracy. We conducted a large-scale comparison of ten alignment-free methods, among them one new approach that does not calculate distances and a faster variant of our pattern-based approach; all distance-based alignment-free methods are freely available from http://www.bioinformatics.org.au (as Python package decaf+py). We show that most methods exhibit a higher overall reconstruction accuracy in the presence of high among-site rate variation. Under all conditions that we considered, variants of the pattern-based approach were significantly better than the other alignment-free methods. The new pattern-based variant achieved a speed-up of an order of magnitude in the distance calculation step, accompanied by a small loss of tree reconstruction accuracy. A method of Bayesian inference from k-mers did not improve on classical alignment-free (and distance-based) methods but may still offer other advantages due to its Bayesian nature. We found the optimal word length k of word-based methods to be stable across various data sets, and we provide parameter ranges for two different alphabets. The influence of these alphabets was analyzed to reveal a trade-off in reconstruction accuracy between long and short branches. We have mapped the phylogenetic accuracy for many alignment-free methods, among them several recently introduced ones, and increased our understanding of their behavior in response to biologically important parameters. In all experiments, the pattern-based approach emerged as superior, at the expense of higher resource consumption. Nonetheless, no alignment-free method that we examined recovers the correct phylogeny as accurately as does an approach based on maximum-likelihood distance estimates of multiply aligned sequences.
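A toy version of a word (k-mer) based distance of the kind compared above: normalized k-mer frequency profiles and a Euclidean distance between them. Real methods differ in word length, alphabet reduction and distance correction.

```python
from collections import Counter
import numpy as np

def kmer_profile(seq, k=3):
    counts = Counter(seq[i:i + k] for i in range(len(seq) - k + 1))
    total = sum(counts.values())
    return {w: c / total for w, c in counts.items()}

def kmer_distance(s1, s2, k=3):
    p1, p2 = kmer_profile(s1, k), kmer_profile(s2, k)
    words = set(p1) | set(p2)
    return np.sqrt(sum((p1.get(w, 0) - p2.get(w, 0)) ** 2 for w in words))

print(kmer_distance("ACGTACGTGACC", "ACGTTCGTGACC"))   # similar sequences: small
print(kmer_distance("ACGTACGTGACC", "TTGGCCAATTGG"))   # dissimilar: large
```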
Convergence optimization of parametric MLEM reconstruction for estimation of Patlak plot parameters.
Angelis, Georgios I; Thielemans, Kris; Tziortzi, Andri C; Turkheimer, Federico E; Tsoumpas, Charalampos
2011-07-01
In dynamic positron emission tomography many researchers have attempted to exploit kinetic models within reconstruction such that parametric images are estimated directly from the measurements. This work studies a direct parametric maximum likelihood expectation maximization algorithm applied to [(18)F]DOPA data using a reference-tissue input function. We use a modified version for direct reconstruction with a gradually descending scheme of subsets (i.e. 18-6-1), initialized with the FBP parametric image for faster convergence and higher accuracy. The results, compared with analytic reconstructions, show quantitative robustness (i.e. minimal bias) and clinical reproducibility within six human acquisitions in the region of clinical interest. Bland-Altman plots for all the studies showed sufficient quantitative agreement between the directly reconstructed parametric maps and the indirect FBP (-0.035x + 0.48E-5). Copyright © 2011 Elsevier Ltd. All rights reserved.
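For context, the Patlak model behind such parametric images is linear after an equilibration time t*: C_T(t)/C_p(t) = Ki * (integral of C_p up to t)/C_p(t) + V. The indirect, TAC-level version is sketched below with toy curves; the direct method fits the same parameters from sinograms instead.

```python
import numpy as np

t = np.linspace(0.1, 60, 120)                 # minutes
Cp = 10 * np.exp(-0.1 * t) + 1.0              # toy plasma input function
int_Cp = np.cumsum(Cp) * (t[1] - t[0])        # running integral of Cp
Ki_true, V_true = 0.05, 0.4
Ct = Ki_true * int_Cp + V_true * Cp           # idealized tissue TAC

x = int_Cp / Cp                               # "Patlak time"
y = Ct / Cp
late = t > 20                                 # linear regime, t > t*
Ki, V = np.polyfit(x[late], y[late], 1)
print(Ki, V)                                  # ≈ 0.05, 0.4
```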
Fast simulation of reconstructed phylogenies under global time-dependent birth-death processes.
Höhna, Sebastian
2013-06-01
Diversification rates and patterns may be inferred from reconstructed phylogenies. Both the time-dependent and the diversity-dependent birth-death process can produce the same observed patterns of diversity over time. To develop and test new models describing the macro-evolutionary process of diversification, generic and fast algorithms to simulate under these models are necessary. Simulations are not only important for testing and developing models but play an influential role in the assessment of model fit. In the present article, I consider as the model a global time-dependent birth-death process where each species has the same rates but rates may vary over time. For this model, I derive the likelihood of the speciation times from a reconstructed phylogenetic tree and show that each speciation event is independent and identically distributed. This fact can be used to simulate efficiently reconstructed phylogenetic trees when conditioning on the number of species, the time of the process or both. I show the usability of the simulation by approximating the posterior predictive distribution of a birth-death process with decreasing diversification rates applied on a published bird phylogeny (family Cettiidae). The methods described in this manuscript are implemented in the R package TESS, available from the repository CRAN (http://cran.r-project.org/web/packages/TESS/). Supplementary data are available at Bioinformatics online.
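As a baseline for such simulators, a constant-rate pure-birth (Yule) process over a fixed time span takes only a few lines of forward simulation; the paper's contribution is the far more efficient inverse-CDF sampling of i.i.d. speciation times under general time-dependent rates, which is not reproduced here.

```python
import numpy as np

def yule_speciation_times(birth_rate, t_max, rng=np.random.default_rng(3)):
    t, n, times = 0.0, 1, []
    while True:
        t += rng.exponential(1.0 / (birth_rate * n))   # waiting time to next split
        if t >= t_max:
            return np.array(times)
        times.append(t)
        n += 1

times = yule_speciation_times(birth_rate=0.5, t_max=10.0)
print(len(times) + 1, "species; speciation times:", np.round(times, 2))
```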
A framelet-based iterative maximum-likelihood reconstruction algorithm for spectral CT
NASA Astrophysics Data System (ADS)
Wang, Yingmei; Wang, Ge; Mao, Shuwei; Cong, Wenxiang; Ji, Zhilong; Cai, Jian-Feng; Ye, Yangbo
2016-11-01
Standard computed tomography (CT) cannot reproduce spectral information of an object. Hardware solutions include dual-energy CT, which scans the object twice at different x-ray energy levels, and energy-discriminative detectors, which can separate lower and higher energy levels from a single x-ray scan. In this paper, we propose a software solution and give an iterative algorithm that reconstructs an image with spectral information from just one scan with a standard energy-integrating detector. The spectral information obtained can be used to produce color CT images, spectral curves of the attenuation coefficient μ(r,E) at points inside the object, and photoelectric images, which are all valuable imaging tools in cancer diagnosis. Our software solution requires no change to the hardware of a CT machine. With the Shepp-Logan phantom, we found that although the photoelectric and Compton components were not perfectly reconstructed, their composite effect was very accurately reconstructed as compared to the ground truth and the dual-energy CT counterpart. This means that our proposed method has an intrinsic benefit in beam hardening correction and metal artifact reduction. The algorithm is based on a nonlinear polychromatic acquisition model for x-ray CT. The key technique is a sparse representation of iterations in a framelet system. Convergence of the algorithm is studied. This is believed to be the first application of framelet imaging tools to a nonlinear inverse problem.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gu, Renliang; Dogandžić, Aleksandar
2015-03-31
We develop a sparse image reconstruction method for polychromatic computed tomography (CT) measurements under the blind scenario where the material of the inspected object and the incident energy spectrum are unknown. To obtain a parsimonious measurement model parameterization, we first rewrite the measurement equation using our mass-attenuation parameterization, which has the Laplace integral form. The unknown mass-attenuation spectrum is expanded into basis functions using a B-spline basis of order one. We develop a block coordinate-descent algorithm for constrained minimization of a penalized negative log-likelihood function, where constraints and penalty terms ensure nonnegativity of the spline coefficients and sparsity of the density map image in the wavelet domain. This algorithm alternates between a Nesterov's proximal-gradient step for estimating the density map image and an active-set step for estimating the incident spectrum parameters. Numerical simulations demonstrate the performance of the proposed scheme.
This is SPIRAL-TAP: Sparse Poisson Intensity Reconstruction ALgorithms--theory and practice.
Harmany, Zachary T; Marcia, Roummel F; Willett, Rebecca M
2012-03-01
Observations in many applications consist of counts of discrete events, such as photons hitting a detector, which cannot be effectively modeled using an additive bounded or Gaussian noise model, and instead require a Poisson noise model. As a result, accurate reconstruction of a spatially or temporally distributed phenomenon (f*) from Poisson data (y) cannot be effectively accomplished by minimizing a conventional penalized least-squares objective function. The problem addressed in this paper is the estimation of f* from y in an inverse problem setting, where the number of unknowns may potentially be larger than the number of observations and f* admits sparse approximation. The optimization formulation considered in this paper uses a penalized negative Poisson log-likelihood objective function with nonnegativity constraints (since Poisson intensities are naturally nonnegative). In particular, the proposed approach incorporates key ideas of using separable quadratic approximations to the objective function at each iteration and penalization terms related to l1 norms of coefficient vectors, total variation seminorms, and partition-based multiscale estimation methods.
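A minimal sketch of one SPIRAL-style iteration for the ℓ1-penalized case, assuming hypothetical projector handles A/At and a scalar curvature alpha for the separable quadratic approximation:

```python
import numpy as np

def spiral_l1_step(f, y, A, At, alpha, tau, bg=1e-6):
    # Separable quadratic approximation of the Poisson negative log-likelihood
    # at f (a gradient step with curvature alpha), followed by the l1 proximal
    # map and a nonnegativity projection.
    Af = A(f) + bg
    grad = At(1.0 - y / Af)                  # gradient of sum(A f) - y*log(A f)
    z = f - grad / alpha                     # minimizer of the surrogate
    return np.maximum(z - tau / alpha, 0.0)  # prox of tau*||f||_1 over f >= 0
```

Total-variation or partition-based multiscale penalties would replace the last line with the corresponding proximal operator.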
An improved non-blind image deblurring method based on FoEs
NASA Astrophysics Data System (ADS)
Zhu, Qidan; Sun, Lei
2013-03-01
Traditional non-blind image deblurring algorithms typically use maximum a posteriori (MAP) estimation. MAP estimates involving natural image priors can reduce ripples effectively, in contrast to maximum likelihood (ML) estimates. However, they have been found lacking in terms of restoration performance. To address this issue, we replace the traditional MAP objective with MAP plus a KL penalty. We develop an image reconstruction algorithm that minimizes the KL divergence between the reference distribution and the prior distribution. The approximate KL penalty can restrain the over-smoothing caused by MAP. We use three groups of images and Harris corner detection to evaluate our method. The experimental results show that our non-blind image restoration algorithm can effectively reduce the ringing effect and exhibits state-of-the-art deblurring results.
Barthe, Stéphanie; Binelli, Giorgio; Hérault, Bruno; Scotti-Saintagne, Caroline; Sabatier, Daniel; Scotti, Ivan
2017-02-01
How Quaternary climatic and geological disturbances influenced the composition of Neotropical forests is hotly debated. Rainfall and temperature changes during and/or immediately after the last glacial maximum (LGM) are thought to have strongly affected the geographical distribution and local abundance of tree species. The paucity of the fossil records in Neotropical forests prevents a direct reconstruction of such processes. To describe community-level historical trends in forest composition, we turned therefore to inferential methods based on the reconstruction of past demographic changes. In particular, we modelled the history of rainforests in the eastern Guiana Shield over a timescale of several thousand generations, through the application of approximate Bayesian computation and maximum-likelihood methods to diversity data at nuclear and chloroplast loci in eight species or subspecies of rainforest trees. Depending on the species and on the method applied, we detected population contraction, expansion or stability, with a general trend in favour of stability or expansion, with changes presumably having occurred during or after the LGM. These findings suggest that Guiana Shield rainforests have globally persisted, while expanding, through the Quaternary, but that different species have experienced different demographic events, with a trend towards the increase in frequency of light-demanding, disturbance-associated species. © 2016 John Wiley & Sons Ltd.
Mixture model based joint-MAP reconstruction of attenuation and activity maps in TOF-PET
NASA Astrophysics Data System (ADS)
Hemmati, H.; Kamali-Asl, A.; Ghafarian, P.; Ay, M. R.
2018-06-01
A challenge in producing quantitative positron emission tomography (PET) images is providing an accurate, patient-specific photon attenuation correction. In PET/MR scanners, the nature of MR signals and hardware limitations make extraction of the attenuation map a real challenge. Up to a constant factor, the activity and attenuation maps on a TOF-PET system can be determined from emission data by the maximum-likelihood reconstruction of attenuation and activity (MLAA) approach. The aim of the present study is to constrain the joint estimation of activity and attenuation for PET systems using a mixture-model prior based on the attenuation-map histogram. This novel prior enforces non-negativity, and its hyperparameters can be estimated in a mixture-decomposition step from the current estimate of the attenuation map. The proposed method can also help solve the scaling problem and, similar to segmentation-based attenuation-correction approaches, can assign predefined regional attenuation coefficients to the attenuation map with some degree of confidence. The performance of the algorithm is studied with numerical and Monte Carlo simulations and a phantom experiment, and is compared with the MLAA algorithm with and without the smoothing prior. The results demonstrate that the proposed algorithm is capable of producing cross-talk-free activity and attenuation images from emission data. The proposed approach has the potential to be a practical and competitive method for joint reconstruction of activity and attenuation maps from emission data on PET/MR and can be integrated with other methods.
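A rough sketch of the mixture-decomposition step using scikit-learn, with a relaxation toward component means as a crude stand-in for the paper's MAP penalty; all names and parameter values are illustrative.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def mixture_prior_update(mu_map, n_components=4, beta=0.1):
    # Fit a Gaussian mixture to the current attenuation-map histogram, then
    # nudge each voxel toward the mean of its most responsible component.
    x = mu_map.reshape(-1, 1)
    gmm = GaussianMixture(n_components=n_components).fit(x)
    labels = gmm.predict(x)
    targets = gmm.means_[labels].reshape(mu_map.shape)
    return (1.0 - beta) * mu_map + beta * np.maximum(targets, 0.0)
```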
Online Hierarchical Sparse Representation of Multifeature for Robust Object Tracking
Qu, Shiru
2016-01-01
Object tracking based on sparse representation has given promising tracking results in recent years. However, trackers under the sparse-representation framework tend to overemphasize the sparse representation and ignore the correlation of visual information. In addition, sparse-coding methods encode each local region independently and ignore the spatial neighborhood information of the image. In this paper, we propose a robust tracking algorithm. First, multiple complementary features are used to describe the object appearance; the appearance model of the tracked target is modeled by instantaneous and stable appearance features simultaneously. A two-stage sparse-coding method, which takes into consideration the spatial neighborhood information of the image patch as well as the computational burden, is used to compute the reconstructed object appearance. Then, the reliability of each tracker is measured by the tracking likelihood function of the transient and reconstructed appearance models. Finally, the most reliable tracker is obtained within a well-established particle filter framework; the training set and the template library are incrementally updated based on the current tracking results. Experimental results on different challenging video sequences show that the proposed algorithm performs well, with superior tracking accuracy and robustness. PMID:27630710
NASA Astrophysics Data System (ADS)
Xu, Xiaofei; Xing, Yuxiang; Wang, Sen; Zhang, Li
2018-06-01
X-ray liquid security inspection systems play an important role in homeland security, but a conventional dual-energy CT (DECT) system can show large deviations when extracting the atomic number and the electron density of materials under various conditions. Photon-counting detectors (PCDs) can discriminate incident photons of different energies, and the technology has matured considerably in recent years. In this work, we explore the material-discrimination performance of a multi-energy CT imaging system with a PCD for liquid security inspection. We used a maximum-likelihood (ML) decomposition method with scatter correction based on a cross-energy response model (CERM) for PCDs to improve the accuracy of atomic-number and electron-density imaging. An experimental study was carried out to examine the effectiveness and robustness of the proposed system. Our results show that the concentrations of different solutions in physical phantoms can be reconstructed accurately, which could improve material identification compared to currently available dual-energy liquid security inspection systems. The CERM-based decomposition and reconstruction method can readily be applied to other areas such as medical diagnosis.
Volume-of-Change Cone-Beam CT for Image-Guided Surgery
Lee, Junghoon; Stayman, J. Webster; Otake, Yoshito; Schafer, Sebastian; Zbijewski, Wojciech; Khanna, A. Jay; Prince, Jerry L.; Siewerdsen, Jeffrey H.
2012-01-01
C-arm cone-beam CT (CBCT) can provide intraoperative 3D imaging capability for surgical guidance, but workflow and radiation dose are the significant barriers to broad utilization. One main reason is that each 3D image acquisition requires a complete scan with a full radiation dose to present a completely new 3D image every time. In this paper, we propose to utilize patient-specific CT or CBCT as prior knowledge to accurately reconstruct the aspects of the region that have been changed by the surgical procedure from only a sparse set of x-rays. The proposed methods consist of a 3D-2D registration between the prior volume and a sparse set of intraoperative x-rays, creating digitally reconstructed radiographs (DRRs) from the registered prior volume, computing difference images by subtracting DRRs from the intraoperative x-rays, a penalized likelihood reconstruction of the volume of change (VOC) from the difference images, and finally a fusion of the VOC reconstruction with the prior volume to visualize the entire surgical field. When the surgical changes are local and relatively small, the VOC reconstruction involves only a small volume size and a small number of projections, allowing less computation and lower radiation dose than is needed to reconstruct the entire surgical field. We applied this approach to sacroplasty phantom data obtained from a CBCT test bench and vertebroplasty data with a fresh cadaver acquired from a C-arm CBCT system with a flat-panel detector (FPD). The VOCs were reconstructed from a varying number of images (10-66 images) and compared to the CBCT ground truth using four different metrics (mean squared error, correlation coefficient, structural similarity index, and perceptual difference model). The results show promising reconstruction quality with structural similarity to the ground truth close to 1 even when only 15-20 images were used, allowing dose reduction by a factor of 10-20. PMID:22801026
Maximum Likelihood and Restricted Likelihood Solutions in Multiple-Method Studies
Rukhin, Andrew L.
2011-01-01
A formulation of the problem of combining data from several sources is discussed in terms of random effects models. The unknown measurement precision is assumed not to be the same for all methods. We investigate maximum likelihood solutions in this model. By representing the likelihood equations as simultaneous polynomial equations, the exact form of the Groebner basis for their stationary points is derived when there are two methods. A parametrization of these solutions which allows their comparison is suggested. A numerical method for solving the likelihood equations is outlined, and an alternative to the maximum likelihood method, the restricted maximum likelihood, is studied. In the situation when the method variances are considered known, an upper bound on the between-method variance is obtained. The relationship between likelihood equations and moment-type equations is also discussed. PMID:26989583
NASA Astrophysics Data System (ADS)
Drakopoulou, E.; Cowan, G. A.; Needham, M. D.; Playfer, S.; Taani, M.
2018-04-01
The application of machine learning techniques to the reconstruction of lepton energies in water Cherenkov detectors is discussed and illustrated for TITUS, a proposed intermediate detector for the Hyper-Kamiokande experiment. It is found that applying these techniques leads to an improvement of more than 50% in the energy resolution for all lepton energies compared to an approach based upon lookup tables. Machine learning techniques can be easily applied to different detector configurations and the results are comparable to likelihood-function based techniques that are currently used.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Da Ronco, Saverio
2006-01-01
This thesis reports the reconstruction and lifetime measurement of $B^+$, $B^0_d$ and $B^0_s$ mesons, performed using fully reconstructed hadronic decays collected by a dedicated trigger at the CDF II experiment. This dedicated trigger selects tracks significantly displaced from the primary vertex of $p\bar{p}$ collisions generated at the Tevatron collider, obtaining in this way huge data samples enriched in long-lived particles, and is therefore suitable for the reconstruction of B mesons in hadronic decay modes. Due to the trigger track impact-parameter selections, the proper decay time distributions of the B mesons no longer follow a simple exponential decay law. This complicates the lifetime measurement and requires a correct understanding and treatment of all the involved effects to keep systematic uncertainties under control. This thesis presents a method to extract the lifetime of B mesons in "ct-biased" samples, based on a Monte Carlo approach, to correct for the effects of the trigger and analysis selections. We present the results of this method when applied to fully reconstructed decays of B mesons collected by CDF II in the data-taking runs up to August 2004, corresponding to an integrated luminosity of about 360 pb$^{-1}$. The lifetimes are extracted using the decay modes $B^+ \to \bar{D}^0\pi^+$, $B^0_d \to D^-\pi^+$, $B^0_d \to D^-\pi^+\pi^-\pi^+$, $B^0_s \to D_s^-\pi^+$ and $B^0_s \to D_s^-\pi^+\pi^-\pi^+$ (and c.c.), performing combined mass-lifetime unbinned maximum likelihood fits.
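The Monte Carlo-based correction described above can be sketched as an unbinned maximum-likelihood fit in which a trigger/selection efficiency curve tabulated from simulation multiplies the exponential decay law; the grid-based efficiency and all names here are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def fit_ctau(ct_data, ct_grid, eff_grid):
    # Unbinned ML fit of c*tau with pdf eff(ct)*exp(-ct/ctau)/Z(ctau),
    # where eff() is tabulated from Monte Carlo on ct_grid (same length units).
    def nll(ctau):
        pdf_grid = eff_grid * np.exp(-ct_grid / ctau)
        z = np.sum(0.5 * (pdf_grid[1:] + pdf_grid[:-1]) * np.diff(ct_grid))
        eff_data = np.interp(ct_data, ct_grid, eff_grid)
        return -np.sum(np.log(eff_data * np.exp(-ct_data / ctau) / z))
    return minimize_scalar(nll, bounds=(1e-3, 1.0), method="bounded").x
```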
Nonlocal maximum likelihood estimation method for denoising multiple-coil magnetic resonance images.
Rajan, Jeny; Veraart, Jelle; Van Audekerke, Johan; Verhoye, Marleen; Sijbers, Jan
2012-12-01
Effective denoising is vital for proper analysis and accurate quantitative measurements from magnetic resonance (MR) images. Even though many methods have been proposed to denoise MR images, only a few deal with the estimation of the true signal from MR images acquired with phased-array coils. If the magnitude data from phased-array coils are reconstructed as the root sum of squares, then, in the absence of noise correlations and subsampling, the data are assumed to follow a noncentral-χ distribution. However, when k-space is subsampled to increase the acquisition speed (as in GRAPPA-like methods), the noise becomes spatially varying. In this note, we propose a method to denoise MR images acquired with multiple coils. Both the noncentral-χ distribution and the spatially varying nature of the noise are taken into account in the proposed method. Experiments were conducted on both simulated and real data sets to validate and to demonstrate the effectiveness of the proposed method. Copyright © 2012 Elsevier Inc. All rights reserved.
PRIFIRA: General regularization using prior-conditioning for fast radio interferometric imaging
NASA Astrophysics Data System (ADS)
Naghibzadeh, Shahrzad; van der Veen, Alle-Jan
2018-06-01
Image formation in radio astronomy is a large-scale inverse problem that is inherently ill-posed. We present a general algorithmic framework based on a Bayesian-inspired regularized maximum likelihood formulation of the radio astronomical imaging problem with a focus on diffuse emission recovery from limited noisy correlation data. The algorithm is dubbed PRIor-conditioned Fast Iterative Radio Astronomy (PRIFIRA) and is based on a direct embodiment of the regularization operator into the system by right preconditioning. The resulting system is then solved using an iterative method based on projections onto Krylov subspaces. We motivate the use of a beamformed image (which includes the classical "dirty image") as an efficient prior-conditioner. Iterative reweighting schemes generalize the algorithmic framework and can account for different regularization operators that encourage sparsity of the solution. The performance of the proposed method is evaluated based on simulated one- and two-dimensional array arrangements as well as actual data from the core stations of the Low Frequency Array radio telescope antenna configuration, and compared to state-of-the-art imaging techniques. We show the generality of the proposed method in terms of regularization schemes while maintaining a competitive reconstruction quality with the current reconstruction techniques. Furthermore, we show that exploiting Krylov subspace methods together with the proper noise-based stopping criteria results in a great improvement in imaging efficiency.
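A minimal sketch of the prior-conditioning idea under simplifying assumptions (real-valued data and a matrix-like measurement operator): right-precondition with the beamformed image and regularize by early stopping of a Krylov least-squares solver.

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, lsqr

def prior_conditioned_image(A, y, dirty_image, iter_lim=50):
    # Solve min_w ||y - A D w|| with D = diag(|dirty image|), x = D w;
    # early stopping of LSQR acts as the regularizer.
    d = np.abs(dirty_image).ravel()
    AD = LinearOperator(
        (y.size, d.size),
        matvec=lambda w: A @ (d * w),
        rmatvec=lambda r: d * (A.T @ r),
    )
    w = lsqr(AD, y, iter_lim=iter_lim)[0]
    return (d * w).reshape(dirty_image.shape)
```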
Hrbek, Tomas; Stölting, Kai N; Bardakci, Fevzi; Küçük, Fahrettin; Wildekamp, Rudolf H; Meyer, Axel
2004-07-01
We investigated the phylogenetic relationships of Pseudophoxinus (Cyprinidae: Leuciscinae) species from central Anatolia, Turkey to test the hypothesis of geographic speciation driven by early Pliocene orogenic events. We analyzed 1141 aligned base pairs of the complete cytochrome b mitochondrial gene. Phylogenetic relationships reconstructed by maximum likelihood, Bayesian likelihood, and maximum parsimony methods are identical, and generally well supported. Species and clades are restricted to geologically well-defined units, and are deeply divergent from each other. The basal diversification of central Anatolian Pseudophoxinus is estimated to have occurred approximately 15 million years ago. Our results are in agreement with a previous study of the Anatolian fish genus Aphanius that also shows a diversification pattern driven by the Pliocene orogenic events. The distribution of clades of Aphanius and Pseudophoxinus overlap, and areas of distribution comprise the same geological units. The geological history of Anatolia is likely to have had a major impact on the diversification history of many taxa occupying central Anatolia; many of these taxa are likely to be still unrecognized as distinct. Copyright 2004 Elsevier Inc.
Michailidis, George
2014-01-01
Reconstructing transcriptional regulatory networks is an important task in functional genomics. Data obtained from experiments that perturb genes by knockouts or RNA interference contain useful information for addressing this reconstruction problem. However, such data can be limited in size and/or are expensive to acquire. On the other hand, observational data of the organism in steady state (e.g., wild-type) are more readily available, but their informational content is inadequate for the task at hand. We develop a computational approach to appropriately utilize both data sources for estimating a regulatory network. The proposed approach is based on a three-step algorithm to estimate the underlying directed but cyclic network, that uses as input both perturbation screens and steady state gene expression data. In the first step, the algorithm determines causal orderings of the genes that are consistent with the perturbation data, by combining an exhaustive search method with a fast heuristic that in turn couples a Monte Carlo technique with a fast search algorithm. In the second step, for each obtained causal ordering, a regulatory network is estimated using a penalized likelihood based method, while in the third step a consensus network is constructed from the highest scored ones. Extensive computational experiments show that the algorithm performs well in reconstructing the underlying network and clearly outperforms competing approaches that rely only on a single data source. Further, it is established that the algorithm produces a consistent estimate of the regulatory network. PMID:24586224
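The second (penalized likelihood) step can be sketched, under a linear Gaussian model, as one ℓ1-penalized regression per gene on its predecessors in a given causal ordering; the Lasso stand-in and all names are illustrative.

```python
import numpy as np
from sklearn.linear_model import Lasso

def network_given_ordering(X, ordering, alpha=0.05):
    # X: (n_samples, n_genes) expression matrix; ordering: causal order of genes.
    # Regress each gene on its predecessors; nonzero coefficients are edges.
    p = X.shape[1]
    W = np.zeros((p, p))  # W[j, i] is the weight of edge i -> j
    for rank in range(1, p):
        j = ordering[rank]
        parents = ordering[:rank]
        W[j, parents] = Lasso(alpha=alpha).fit(X[:, parents], X[:, j]).coef_
    return W
```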
Angelis, G I; Reader, A J; Kotasidis, F A; Lionheart, W R; Matthews, J C
2011-07-07
Iterative expectation maximization (EM) techniques have been extensively used to solve maximum likelihood (ML) problems in positron emission tomography (PET) image reconstruction. Although EM methods offer a robust approach to solving ML problems, they usually suffer from slow convergence rates. The ordered subsets EM (OSEM) algorithm provides significant improvements in the convergence rate, but it can cycle between estimates converging towards the ML solution of each subset. In contrast, gradient-based methods, such as the recently proposed non-monotonic maximum likelihood (NMML) and the more established preconditioned conjugate gradient (PCG), offer a globally convergent, yet equally fast, alternative to OSEM. Reported results showed that NMML provides faster convergence compared to OSEM; however, it has never been compared to other fast gradient-based methods, like PCG. Therefore, in this work we evaluate the performance of two gradient-based methods (NMML and PCG) and investigate their potential as an alternative to the fast and widely used OSEM. All algorithms were evaluated using 2D simulations, as well as a single [(11)C]DASB clinical brain dataset. Results on simulated 2D data show that both PCG and NMML achieve orders of magnitude faster convergence to the ML solution compared to MLEM and exhibit comparable performance to OSEM. Equally fast performance is observed between OSEM and PCG for clinical 3D data, but NMML seems to perform poorly. However, with the addition of a preconditioner term to the gradient direction, the convergence behaviour of NMML can be substantially improved. Although PCG is a fast convergent algorithm, the use of a (bent) line search increases the complexity of the implementation, as well as the computational time involved per iteration. Contrary to previous reports, NMML offers no clear advantage over OSEM or PCG, for noisy PET data. Therefore, we conclude that there is little evidence to replace OSEM as the algorithm of choice for many applications, especially given that in practice convergence is often not desired for algorithms seeking ML estimates.
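For reference, here is a minimal dense-matrix sketch of the MLEM update and its ordered-subsets variant discussed above (a toy implementation, not any of the evaluated codes):

```python
import numpy as np

def mlem(A, y, n_iter=50, n_subsets=1):
    # y ~ Poisson(A x); A is a dense (n_bins, n_vox) system matrix.
    # n_subsets=1 is plain MLEM; larger values give OSEM, which converges
    # faster per pass but may cycle between subset solutions.
    x = np.ones(A.shape[1])
    subsets = np.array_split(np.arange(A.shape[0]), n_subsets)
    for _ in range(n_iter):
        for s in subsets:
            ratio = y[s] / np.maximum(A[s] @ x, 1e-12)
            x *= (A[s].T @ ratio) / np.maximum(A[s].T @ np.ones(s.size), 1e-12)
    return x
```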
Likelihood of Tree Topologies with Fossils and Diversification Rate Estimation.
Didier, Gilles; Fau, Marine; Laurin, Michel
2017-11-01
Since the diversification process cannot be directly observed at the human scale, it has to be studied from the information available, namely the extant taxa and the fossil record. In this sense, phylogenetic trees including both extant taxa and fossils are the most complete representations of the diversification process that one can get. Such phylogenetic trees can be reconstructed from molecular and morphological data, to some extent. Among the temporal information of such phylogenetic trees, fossil ages are by far the most precisely known (divergence times are inferences calibrated mostly with fossils). We propose here a method to compute the likelihood of a phylogenetic tree with fossils in which the only considered time information is the fossil ages, and apply it to the estimation of the diversification rates from such data. Since it is required in our computation, we provide a method for determining the probability of a tree topology under the standard diversification model. Testing our approach on simulated data shows that the maximum likelihood rate estimates from the phylogenetic tree topology and the fossil dates are almost as accurate as those obtained by taking into account all the data, including the divergence times. Moreover, they are substantially more accurate than the estimates obtained only from the exact divergence times (without taking into account the fossil record). We also provide an empirical example composed of 50 Permo-Carboniferous eupelycosaur (early synapsid) taxa ranging in age from about 315 Ma (Late Carboniferous) to 270 Ma (shortly after the end of the Early Permian). Our analyses suggest a speciation (cladogenesis, or birth) rate of about 0.1 per lineage and per myr, a marginally lower extinction rate, and a considerable hidden paleobiodiversity of early synapsids. [Extinction rate; fossil ages; maximum likelihood estimation; speciation rate.]. © The Author(s) 2017. Published by Oxford University Press, on behalf of the Society of Systematic Biologists. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
Enhanced data validation strategy of air quality monitoring network.
Harkat, Mohamed-Faouzi; Mansouri, Majdi; Nounou, Mohamed; Nounou, Hazem
2018-01-01
Quick validation and detection of faults in measured air quality data is a crucial step towards achieving the objectives of air quality networks. The objectives of this paper are therefore threefold: (i) to develop a modeling technique that can be used to predict the normal behavior of air quality variables and help provide an accurate reference for monitoring purposes; (ii) to develop a fault detection method that can effectively and quickly detect any anomalies in measured air quality data; for this purpose, a new fault detection method is developed that combines the generalized likelihood ratio test (GLRT) with the exponentially weighted moving average (EWMA). GLRT is a well-known statistical fault detection method that relies on maximizing the detection probability for a given false alarm rate, and the proposed GLRT-based EWMA method is able to detect changes in the values of certain air quality variables; and (iii) to develop a fault isolation and identification method that allows defining the fault source(s) so that appropriate corrective actions can be applied. To this end, a reconstruction approach based on a Midpoint-Radii Principal Component Analysis (MRPCA) model is developed to handle the types of data and models associated with air quality monitoring networks. All air quality modeling, fault detection, fault isolation and reconstruction methods developed in this paper are validated using real air quality data (such as measurements of particulate matter, ozone, and nitrogen and carbon oxides). Copyright © 2017 Elsevier Inc. All rights reserved.
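One plausible reading of the GLRT-EWMA combination, sketched for a mean shift in Gaussian residuals with known variance; the paper's exact statistic may differ, and the threshold would be set from the desired false alarm rate.

```python
import numpy as np

def ewma_glrt(residuals, lam=0.2, window=20, sigma=1.0):
    # EWMA-smooth the model residuals, then compute a sliding-window GLRT
    # statistic for a mean shift; alarm where the statistic exceeds a threshold.
    z = np.zeros(len(residuals))
    for t in range(1, len(residuals)):
        z[t] = lam * residuals[t] + (1.0 - lam) * z[t - 1]
    stats = np.zeros(len(z))
    for t in range(window, len(z)):
        w = z[t - window:t]
        stats[t] = window * w.mean() ** 2 / (2.0 * sigma ** 2)  # mean-shift GLRT
    return stats
```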
DOE Office of Scientific and Technical Information (OSTI.GOV)
West, R. Derek; Gunther, Jacob H.; Moon, Todd K.
2016-12-01
In this study, we derive a comprehensive forward model for the data collected by stripmap synthetic aperture radar (SAR) that is linear in the ground reflectivity parameters. It is also shown that if the noise model is additive, then the forward model fits into the linear statistical model framework, and the ground reflectivity parameters can be estimated by statistical methods. We derive the maximum likelihood (ML) estimates for the ground reflectivity parameters in the case of additive white Gaussian noise. Furthermore, we show that obtaining the ML estimates of the ground reflectivity requires two steps. The first step amounts to a cross-correlation of the data with a model of the data acquisition parameters, and it is shown that this step has essentially the same processing as the so-called convolution back-projection algorithm. The second step is a complete system inversion that is capable of mitigating the sidelobes of the spatially variant impulse responses remaining after the correlation processing. We also state the Cramer-Rao lower bound (CRLB) for the ML ground reflectivity estimates. We show that the CRLB is linked to the SAR system parameters, the flight path of the SAR sensor, and the image reconstruction grid. We demonstrate the ML image formation and the CRLB for synthetically generated data.
Tamura, Koichiro; Peterson, Daniel; Peterson, Nicholas; Stecher, Glen; Nei, Masatoshi; Kumar, Sudhir
2011-01-01
Comparative analysis of molecular sequence data is essential for reconstructing the evolutionary histories of species and inferring the nature and extent of selective forces shaping the evolution of genes and species. Here, we announce the release of Molecular Evolutionary Genetics Analysis version 5 (MEGA5), user-friendly software for mining online databases, building sequence alignments and phylogenetic trees, and using methods of evolutionary bioinformatics in basic biology, biomedicine, and evolution. The newest addition in MEGA5 is a collection of maximum likelihood (ML) analyses for inferring evolutionary trees, selecting best-fit substitution models (nucleotide or amino acid), inferring ancestral states and sequences (along with probabilities), and estimating evolutionary rates site-by-site. In computer simulation analyses, ML tree inference algorithms in MEGA5 compared favorably with other software packages in terms of computational efficiency and the accuracy of the estimates of phylogenetic trees, substitution parameters, and rate variation among sites. The MEGA user interface has now been enhanced to be activity driven, making it easier to use for both beginners and experienced scientists. This version of MEGA is intended for the Windows platform, and it has been configured for effective use on Mac OS X and Linux desktops. It is available free of charge from http://www.megasoftware.net. PMID:21546353
Reconstruction of multiple-pinhole micro-SPECT data using origin ensembles.
Lyon, Morgan C; Sitek, Arkadiusz; Metzler, Scott D; Moore, Stephen C
2016-10-01
The authors are currently developing a dual-resolution multiple-pinhole microSPECT imaging system based on three large NaI(Tl) gamma cameras. Two multiple-pinhole tungsten collimator tubes will be used sequentially for whole-body "scout" imaging of a mouse, followed by high-resolution (hi-res) imaging of an organ of interest, such as the heart or brain. Ideally, the whole-body image will be reconstructed in real time, such that data need only be acquired until the area of interest can be visualized well enough to determine positioning for the hi-res scan. The authors investigated the utility of the origin ensemble (OE) algorithm for online and offline reconstructions of the scout data. This algorithm operates directly in image space and can provide estimates of image uncertainty along with reconstructed images. Techniques for accelerating the OE reconstruction were also introduced and evaluated. System matrices were calculated for our 39-pinhole scout collimator design. SPECT projections were simulated for a range of count levels using the MOBY digital mouse phantom. Simulated data were used for a comparison of OE and maximum-likelihood expectation maximization (MLEM) reconstructions. The OE algorithm convergence was evaluated by calculating the total-image entropy and by measuring the counts in a volume of interest (VOI) containing the heart. Total-image entropy was also calculated for simulated MOBY data reconstructed using OE with various levels of parallelization. For VOI measurements in the heart, liver, bladder, and soft tissue, MLEM and OE reconstructed images agreed within 6%. Image entropy converged after ∼2000 iterations of OE, while the counts in the heart converged earlier, at ∼200 iterations of OE. An accelerated version of OE completed 1000 iterations in <9 min for a 6.8M-count data set, with some loss of image-entropy performance, whereas the same data set required ∼79 min to complete 1000 iterations of conventional OE. A combination of the two methods showed decreased reconstruction time and no loss of performance when compared to conventional OE alone. OE-reconstructed images were found to be quantitatively and qualitatively similar to MLEM, yet OE also provided estimates of image uncertainty. Some acceleration of the reconstruction can be gained through the use of parallel computing. The OE algorithm is useful for reconstructing multiple-pinhole SPECT data and can be easily modified for real-time reconstruction.
NASA Astrophysics Data System (ADS)
Storm, Emma; Weniger, Christoph; Calore, Francesca
2017-08-01
We present SkyFACT (Sky Factorization with Adaptive Constrained Templates), a new approach for studying, modeling and decomposing diffuse gamma-ray emission. Like most previous analyses, the approach relies on predictions from cosmic-ray propagation codes like GALPROP and DRAGON. However, in contrast to previous approaches, we account for the fact that models are not perfect and allow for a very large number (≳10^5) of nuisance parameters to parameterize these imperfections. We combine methods of image reconstruction and adaptive spatio-spectral template regression in one coherent hybrid approach. To this end, we use penalized Poisson likelihood regression, with regularization functions that are motivated by the maximum entropy method. We introduce methods to efficiently handle the high dimensionality of the convex optimization problem as well as the associated semi-sparse covariance matrix, using the L-BFGS-B algorithm and Cholesky factorization. We test the method both on synthetic data as well as on gamma-ray emission from the inner Galaxy, |l| < 90° and |b| < 20°, as observed by the Fermi Large Area Telescope. We finally define a simple reference model that removes most of the residual emission from the inner Galaxy, based on conventional diffuse emission components as well as components for the Fermi bubbles, the Fermi Galactic center excess, and extended sources along the Galactic disk. Variants of this reference model can serve as basis for future studies of diffuse emission in and outside the Galactic disk.
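The computational core (penalized Poisson likelihood regression solved with L-BFGS-B under nonnegativity bounds) can be sketched as follows, with a simple quadratic penalty standing in for the entropy-motivated regularizers and all names illustrative.

```python
import numpy as np
from scipy.optimize import minimize

def fit_templates(counts, templates, lam=1.0):
    # counts: (n_pix,) observed photons; templates: (n_pix, k) model maps.
    # Minimize the Poisson NLL plus a quadratic pull of parameters toward 1.
    def objective(theta):
        mu = np.maximum(templates @ theta, 1e-12)
        nll = np.sum(mu - counts * np.log(mu))
        grad = templates.T @ (1.0 - counts / mu)
        return (nll + 0.5 * lam * np.sum((theta - 1.0) ** 2),
                grad + lam * (theta - 1.0))
    k = templates.shape[1]
    res = minimize(objective, np.ones(k), jac=True, method="L-BFGS-B",
                   bounds=[(0.0, None)] * k)
    return res.x
```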
NASA Astrophysics Data System (ADS)
Faber, T. L.; Raghunath, N.; Tudorascu, D.; Votaw, J. R.
2009-02-01
Image quality is significantly degraded even by small amounts of patient motion in very high-resolution PET scanners. Existing correction methods that use known patient motion obtained from tracking devices require multi-frame acquisitions, detailed knowledge of the scanner, or specialized reconstruction algorithms. A deconvolution algorithm has been developed that alleviates these drawbacks by using the reconstructed image to estimate the original non-blurred image via maximum-likelihood expectation maximization (MLEM) techniques. A high-resolution digital phantom was created by shape-based interpolation of the digital Hoffman brain phantom. Three different sets of 20 movements were applied to the phantom. For each frame of the motion, sinograms with attenuation and three levels of noise were simulated and then reconstructed using filtered backprojection. The average of the 20 frames was considered the motion-blurred image, which was restored with the deconvolution algorithm. After correction, contrast increased from a mean of 2.0, 1.8 and 1.4 in the motion-blurred images, for the three increasing amounts of movement, to a mean of 2.5, 2.4 and 2.2. Mean error was reduced by an average of 55% with motion correction. In conclusion, deconvolution can be used for correction of motion blur when subject motion is known.
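A sketch of the deconvolution idea for purely translational, known motion: the blur operator is the average over the tracked shifts, and a Richardson-Lucy/MLEM update inverts it. This is an illustrative reading, not the authors' implementation.

```python
import numpy as np

def shift_average(img, shifts):
    # Blur produced by averaging frames displaced by known (dy, dx) shifts.
    return np.mean([np.roll(img, s, axis=(0, 1)) for s in shifts], axis=0)

def rl_motion_deblur(blurred, shifts, n_iter=30):
    # Richardson-Lucy/MLEM update x <- x * H^T(y / Hx); the adjoint of the
    # circular shift-average is the average over the opposite shifts, and
    # H^T 1 = 1, so the usual normalization term drops out.
    x = np.full(blurred.shape, float(blurred.mean()))
    adjoint_shifts = [(-s[0], -s[1]) for s in shifts]
    for _ in range(n_iter):
        estimate = np.maximum(shift_average(x, shifts), 1e-12)
        x *= shift_average(blurred / estimate, adjoint_shifts)
    return x
```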
Shape reconstruction of irregular bodies with multiple complementary data sources
NASA Astrophysics Data System (ADS)
Kaasalainen, M.; Viikinkoski, M.
2012-07-01
We discuss inversion methods for shape reconstruction with complementary data sources. The current main sources are photometry, adaptive optics or other images, occultation timings, and interferometry, and the procedure can readily be extended to include range-Doppler radar and thermal infrared data as well. We introduce the octantoid, a generally applicable shape support that can be automatically used for surface types encountered in planetary research, including strongly nonconvex or non-starlike shapes. We present models of Kleopatra and Hermione from multimodal data as examples of this approach. An important concept in this approach is the optimal weighting of the various data modes. We define the maximum compatibility estimate, a multimodal generalization of the maximum likelihood estimate, for this purpose. We also present a specific version of the procedure for asteroid flyby missions, with which one can reconstruct the complete shape of the target by using the flyby-based map of a part of the surface together with other available data. Finally, we show that the relative volume error of a shape solution is usually approximately equal to the relative shape error rather than its multiple. Our algorithms are trivially parallelizable, so running the code on a CUDA-enabled graphics processing unit is some two orders of magnitude faster than the usual single-processor mode.
Respiratory motion correction in emission tomography image reconstruction.
Reyes, Mauricio; Malandain, Grégoire; Koulibaly, Pierre Malick; González Ballester, Miguel A; Darcourt, Jacques
2005-01-01
In emission tomography imaging, respiratory motion causes artifacts in reconstructed lung and cardiac images, which lead to misinterpretations and imprecise diagnosis. Solutions like respiratory gating, correlated dynamic PET techniques, list-mode-data-based techniques and others have been tested, with improvements in the spatial activity distribution in lung lesions, but with the disadvantages of requiring additional instrumentation or discarding part of the projection data used for reconstruction. The objective of this study is to incorporate respiratory motion correction directly into the image reconstruction process, without any additional acquisition protocol considerations. To this end, we propose an extension to the Maximum Likelihood Expectation Maximization (MLEM) algorithm that includes a respiratory motion model, which takes into account the displacements and volume deformations produced by respiratory motion during the data acquisition process. We present results from synthetic simulations incorporating real respiratory motion as well as from phantom and patient data.
Advanced prior modeling for 3D bright field electron tomography
NASA Astrophysics Data System (ADS)
Sreehari, Suhas; Venkatakrishnan, S. V.; Drummy, Lawrence F.; Simmons, Jeffrey P.; Bouman, Charles A.
2015-03-01
Many important imaging problems in material science involve reconstruction of images containing repetitive non-local structures. Model-based iterative reconstruction (MBIR) could in principle exploit such redundancies through the selection of a log prior probability term. However, in practice, determining such a log prior term that accounts for the similarity between distant structures in the image is quite challenging. Much progress has been made in the development of denoising algorithms like non-local means and BM3D, and these are known to successfully capture non-local redundancies in images. But the fact that these denoising operations are not explicitly formulated as cost functions makes it unclear as to how to incorporate them in the MBIR framework. In this paper, we formulate a solution to bright field electron tomography by augmenting the existing bright field MBIR method to incorporate any non-local denoising operator as a prior model. We accomplish this using a framework we call plug-and-play priors that decouples the log likelihood and the log prior probability terms in the MBIR cost function. We specifically use 3D non-local means (NLM) as the prior model in the plug-and-play framework, and showcase high quality tomographic reconstructions of a simulated aluminum spheres dataset, and two real datasets of aluminum spheres and ferritin structures. We observe that streak and smear artifacts are visibly suppressed, and that edges are preserved. Also, we report lower RMSE values compared to the conventional MBIR reconstruction using qGGMRF as the prior model.
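The plug-and-play decoupling can be sketched with ADMM, where the prior step is simply a call to an off-the-shelf non-local means denoiser (scikit-image's 2D NLM here, rather than the 3D NLM used above) and the data term is a generic quadratic; A/At and the step sizes are hypothetical.

```python
import numpy as np
from skimage.restoration import denoise_nl_means

def pnp_admm(y, A, At, shape, rho=1.0, step=0.01, n_iter=20, h=0.05):
    # ADMM with a plugged-in denoiser: x handles the data term, v the prior.
    x = np.zeros(shape)
    v = np.zeros(shape)
    u = np.zeros(shape)
    for _ in range(n_iter):
        for _ in range(5):  # inexact solve of the quadratic x-subproblem
            grad = At(A(x) - y) + rho * (x - v + u)
            x = x - step * grad
        v = denoise_nl_means(x + u, h=h)  # prior step: NLM as proximal map
        u = u + x - v                     # dual update
    return v
```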
Hwang, Donghwi; Kim, Kyeong Yun; Kang, Seung Kwan; Seo, Seongho; Paeng, Jin Chul; Lee, Dong Soo; Lee, Jae Sung
2018-02-15
Simultaneous reconstruction of activity and attenuation using the maximum likelihood reconstruction of activity and attenuation (MLAA) augmented by time-of-flight (TOF) information is a promising method for positron emission tomography (PET) attenuation correction. However, it still suffers from several problems, including crosstalk artifacts, slow convergence speed, and noisy attenuation maps (μ-maps). In this work, we developed deep convolutional neural networks (CNNs) to overcome these MLAA limitations, and we verified their feasibility using a clinical brain PET data set. Methods: We applied the proposed method to one of the most challenging PET cases for simultaneous image reconstruction (18F-FP-CIT PET scans with highly specific binding to the striatum of the brain). Three different CNN architectures (a convolutional autoencoder (CAE), a U-net, and a hybrid of CAE and U-net) were designed and trained to learn the x-ray computed tomography (CT) derived μ-map (μ-CT) from the MLAA-generated activity distribution and μ-map (μ-MLAA). PET/CT data of 40 patients with suspected Parkinson's disease were employed for five-fold cross-validation. For the training of the CNNs, 800,000 transverse PET slices and CTs augmented from 32 patient data sets were used. The similarity to μ-CT of the CNN-generated μ-maps (μ-CAE, μ-Unet, and μ-Hybrid) and μ-MLAA was compared using Dice similarity coefficients. In addition, we compared the activity concentration of specific (striatum) and non-specific binding regions (cerebellum and occipital cortex) and the binding ratios in the striatum in the PET activity images reconstructed using those μ-maps. Results: The CNNs generated less noisy and more uniform μ-maps than the original μ-MLAA. Moreover, the air cavities and bones were better resolved in the proposed CNN outputs. In addition, the proposed deep learning approach was useful for mitigating the crosstalk problem in the MLAA reconstruction. The hybrid network of CAE and U-net yielded the μ-maps most similar to μ-CT (Dice similarity coefficient in the whole head = 0.79 in the bone and 0.72 in the air cavities), resulting in only approximately 5% errors in activity and binding-ratio quantification. Conclusion: The proposed deep learning approach is promising for accurate attenuation correction of activity distribution in TOF PET systems. Copyright © 2018 by the Society of Nuclear Medicine and Molecular Imaging, Inc.
Attenuation correction strategies for multi-energy photon emitters using SPECT
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pretorius, P.H.; King, M.A.; Pan, T.S.
1996-12-31
The aim of this study was to investigate whether the photopeak window projections from different energy photons can be combined into a single window for reconstruction, or if it is better not to combine the projections due to differences in the attenuation maps required for each photon energy. The mathematical cardiac torso (MCAT) phantom was modified to simulate the uptake of Ga-67 in the human body. Four spherical hot tumors were placed in locations which challenged attenuation correction. An analytical 3D projector with attenuation and detector response included was used to generate projection sets. Data were reconstructed using filtered backprojection (FBP) reconstruction with Butterworth filtering in conjunction with one iteration of Chang attenuation correction, and with 5 and 10 iterations of ordered-subset maximum-likelihood expectation-maximization reconstruction. To serve as a standard for comparison, the projection sets obtained from the two energies were first reconstructed separately using their own attenuation maps. The emission data obtained from both energies were added and reconstructed using the following attenuation strategies: (1) the 93 keV attenuation map for attenuation correction, (2) the 185 keV attenuation map for attenuation correction, (3) using a weighted mean obtained from combining the 93 keV and 185 keV maps, and (4) an ordered subset approach which combines both energies. The central count ratio (CCR) and total count ratio (TCR) were used to compare the performance of the different strategies. Compared to the standard method, results indicate an over-estimation with strategy 1, an under-estimation with strategy 2 and comparable results with strategies 3 and 4. In all strategies, the CCRs of sphere 4 were under-estimated, although TCRs were comparable to those of the other locations. The weighted mean and ordered subset strategies for attenuation correction were of comparable accuracy to reconstruction of the windows separately.
Design of a practical model-observer-based image quality assessment method for CT imaging systems
NASA Astrophysics Data System (ADS)
Tseng, Hsin-Wu; Fan, Jiahua; Cao, Guangzhi; Kupinski, Matthew A.; Sainath, Paavana
2014-03-01
The channelized Hotelling observer (CHO) is a powerful method for quantitative image quality evaluations of CT systems and their image reconstruction algorithms. It has recently been used to validate the dose reduction capability of iterative image-reconstruction algorithms implemented on CT imaging systems. The use of the CHO for routine and frequent system evaluations is desirable both for quality assurance evaluations as well as further system optimizations. The use of channels substantially reduces the amount of data required to achieve accurate estimates of observer performance. However, the number of scans required is still large even with the use of channels. This work explores different data reduction schemes and designs a new approach that requires only a few CT scans of a phantom. For this work, the leave-one-out likelihood (LOOL) method developed by Hoffbeck and Landgrebe is studied as an efficient method of estimating the covariance matrices needed to compute CHO performance. Three different kinds of approaches are included in the study: a conventional CHO estimation technique with a large sample size, a conventional technique with fewer samples, and the new LOOL-based approach with fewer samples. The mean value and standard deviation of the area under the ROC curve (AUC) are estimated by a shuffle method. Both simulation and real-data results indicate that an 80% data reduction can be achieved without loss of accuracy. This data reduction makes the proposed approach a practical tool for routine CT system assessment.
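For orientation, a minimal conventional CHO sketch (not the LOOL variant studied above): project two-class images onto channels, pool the channel covariance, and map the Hotelling SNR to AUC under Gaussian assumptions.

```python
import numpy as np
from scipy.stats import norm

def cho_auc(signal_imgs, noise_imgs, channels):
    # signal_imgs/noise_imgs: (n_images, n_pix); channels: (n_pix, n_channels).
    ts = signal_imgs @ channels
    tn = noise_imgs @ channels
    S = 0.5 * (np.cov(ts, rowvar=False) + np.cov(tn, rowvar=False))
    dmean = ts.mean(axis=0) - tn.mean(axis=0)
    snr = np.sqrt(dmean @ np.linalg.solve(S, dmean))  # Hotelling SNR
    return norm.cdf(snr / np.sqrt(2.0))               # AUC = Phi(SNR / sqrt(2))
```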
DOE Office of Scientific and Technical Information (OSTI.GOV)
The purpose of the computer program is to generate system matrices that model the data acquisition process in dynamic single photon emission computed tomography (SPECT). The application is the reconstruction of dynamic data from projection measurements that provide the time evolution of activity uptake and washout in an organ of interest. The measurements of the time activity in the blood and organ tissue provide time-activity curves (TACs) that are used to estimate kinetic parameters. The program provides a correct model of the in vivo spatial and temporal distribution of radioactivity in organs. The model accounts for the attenuation of the internally emitted radioactivity, accounts for the varying point response of the collimators, and correctly models the time variation of the activity in the organs. One important application of the software is measuring the arterial input function (AIF) in a dynamic SPECT study where the data are acquired with a slow camera rotation. Measurement of the AIF is essential to deriving quantitative estimates of regional myocardial blood flow using kinetic models. A study was performed to evaluate whether a slowly rotating SPECT system could provide accurate AIFs for myocardial perfusion imaging (MPI). Methods: Dynamic cardiac SPECT was first performed in human subjects at rest using a Philips Precedence SPECT/CT scanner. Dynamic measurements of Tc-99m-tetrofosmin in the myocardium were obtained using an infusion time of 2 minutes. Blood input, myocardial tissue and liver TACs were estimated using spatiotemporal splines. These were fit to a one-compartment perfusion model to obtain wash-in rate parameters K1. Results: The spatiotemporal 4D ML-EM reconstructions were more accurate than standard frame-by-frame 3D ML-EM reconstructions. From additional computer simulations and phantom studies, it was determined that a 1 minute infusion with a SPECT system rotation speed providing 180 degrees of projection data every 54 s can produce measurements of blood-pool and myocardial TACs. This has important application in the calculation of coronary flow reserve using rest/stress dynamic cardiac SPECT. The system matrices are used in maximum-likelihood and maximum a posteriori formulations in estimation theory, where iterative algorithms (conjugate gradient, expectation maximization, or maximum a posteriori probability algorithms) determine the solution that maximizes a likelihood or a posteriori probability function.
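The kinetic-parameter estimation mentioned above can be sketched as a one-compartment fit: the tissue TAC is the arterial input convolved with K1*exp(-k2*t), fitted here on a toy uniform time grid (all values illustrative).

```python
import numpy as np
from scipy.optimize import curve_fit

def one_compartment_tac(t, K1, k2, blood_tac):
    # Tissue TAC: C_T(t) = K1 * (exp(-k2 t) convolved with C_a)(t),
    # approximated by a discrete convolution on a uniform grid.
    dt = t[1] - t[0]
    return K1 * np.convolve(blood_tac, np.exp(-k2 * t))[: t.size] * dt

# Illustrative use: recover K1, k2 from noisy blood and tissue TACs.
t = np.linspace(0.0, 10.0, 120)              # minutes
blood = t * np.exp(-t)                       # toy arterial input function
tissue = one_compartment_tac(t, 0.8, 0.3, blood)
tissue += 0.002 * np.random.default_rng(0).normal(size=t.size)
(K1, k2), _ = curve_fit(
    lambda tt, K1, k2: one_compartment_tac(tt, K1, k2, blood),
    t, tissue, p0=(0.5, 0.2))
```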
Liu, Xuejin; Persson, Mats; Bornefalk, Hans; Karlsson, Staffan; Xu, Cheng; Danielsson, Mats; Huber, Ben
2015-07-01
Variations among detector channels in computed tomography can lead to ring artifacts in the reconstructed images and biased estimates in projection-based material decomposition. Typically, the ring artifacts are corrected by compensation methods based on flat fielding, where transmission measurements are required for a number of material-thickness combinations. Phantoms used in these methods can be rather complex and require an extensive number of transmission measurements. Moreover, material decomposition needs knowledge of the individual response of each detector channel to account for the detector inhomogeneities. For this purpose, we have developed a spectral response model that binwise predicts the response of a multibin photon-counting detector individually for each detector channel. The spectral response model is performed in two steps. The first step employs a forward model to predict the expected numbers of photon counts, taking into account parameters such as the incident x-ray spectrum, absorption efficiency, and energy response of the detector. The second step utilizes a limited number of transmission measurements with a set of flat slabs of two absorber materials to fine-tune the model predictions, resulting in a good correspondence with the physical measurements. To verify the response model, we apply the model in two cases. First, the model is used in combination with a compensation method which requires an extensive number of transmission measurements to determine the necessary parameters. Our spectral response model successfully replaces these measurements by simulations, saving a significant amount of measurement time. Second, the spectral response model is used as the basis of the maximum likelihood approach for projection-based material decomposition. The reconstructed basis images show a good separation between the calcium-like material and the contrast agents, iodine and gadolinium. The contrast agent concentrations are reconstructed with more than 94% accuracy.
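The first (forward-model) step described above can be sketched as a discretized spectral integral: incident spectrum, Beer-Lambert attenuation through a slab, and a bin-response matrix; all arrays here are illustrative assumptions.

```python
import numpy as np

def expected_bin_counts(spectrum_w, mu_E, thickness, response):
    # spectrum_w: incident photons per energy-grid point; mu_E: linear
    # attenuation on the same grid; response[b, e]: probability that a photon
    # of energy index e is registered in bin b (assumed known).
    transmitted = spectrum_w * np.exp(-mu_E * thickness)  # Beer-Lambert
    return response @ transmitted                         # expected counts per bin
```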
L.U.St: a tool for approximated maximum likelihood supertree reconstruction.
Akanni, Wasiu A; Creevey, Christopher J; Wilkinson, Mark; Pisani, Davide
2014-06-12
Supertrees combine disparate, partially overlapping trees to generate a synthesis that provides a high-level perspective that cannot be attained from the inspection of individual phylogenies. Supertrees can be seen as meta-analytical tools that can be used to make inferences based on the results of previous scientific studies. Their meta-analytical application has increased in popularity since it was realised that the power of statistical tests for the study of evolutionary trends critically depends on the use of taxon-dense phylogenies. Further to that, supertrees have found applications in phylogenomics, where they are used to combine gene trees and recover species phylogenies based on genome-scale data sets. Here, we present the L.U.St package, a Python tool for approximate maximum likelihood supertree inference, and illustrate its application using a genomic data set for the placental mammals. L.U.St allows the calculation of the approximate likelihood of a supertree given a set of input trees, performs heuristic searches for the supertree of highest likelihood, and performs statistical tests of two or more supertrees. To this end, L.U.St implements a winning-sites test allowing ranking of a collection of a priori selected hypotheses, given as a collection of input supertree topologies. It also outputs a file of input-tree-wise likelihood scores that can be used as input to CONSEL for calculation of standard tests of two trees (e.g. the Kishino-Hasegawa, Shimodaira-Hasegawa and Approximately Unbiased tests). This is the first fully parametric implementation of a supertree method; it has clearly understood properties and provides several advantages over currently available supertree approaches. It is easy to implement and works on any platform that has Python installed. BitBucket page - https://afro-juju@bitbucket.org/afro-juju/l.u.st.git. Davide.Pisani@bristol.ac.uk.
Evaluation of MLACF based calculated attenuation brain PET imaging for FDG patient studies
NASA Astrophysics Data System (ADS)
Bal, Harshali; Panin, Vladimir Y.; Platsch, Guenther; Defrise, Michel; Hayden, Charles; Hutton, Chloe; Serrano, Benjamin; Paulmier, Benoit; Casey, Michael E.
2017-04-01
Calculating attenuation correction for brain PET imaging rather than using CT presents opportunities for low radiation dose applications such as pediatric imaging and serial scans to monitor disease progression. Our goal is to evaluate the iterative time-of-flight based maximum-likelihood activity and attenuation correction factors estimation (MLACF) method for clinical FDG brain PET imaging. FDG PET/CT brain studies were performed in 57 patients using the Biograph mCT (Siemens) four-ring scanner. The time-of-flight PET sinograms were acquired using the standard clinical protocol, consisting of a CT scan followed by 10 min of single-bed PET acquisition. Images were reconstructed using CT-based attenuation correction (CTAC) and used as a gold standard for comparison. Two methods were compared with respect to CTAC: a calculated brain attenuation correction (CBAC) and MLACF-based PET reconstruction. Plane-by-plane scaling was performed for MLACF images in order to correct the variable axial scaling observed. The noise structure of the MLACF images differed from that of the CTAC images, and the reconstruction required a higher number of iterations to obtain comparable image quality. To analyze the pooled data, each dataset was registered to a standard template and standard regions of interest were extracted. An SUVr analysis of the brain regions of interest showed that CBAC and MLACF SUVrs were each well correlated with CTAC SUVrs. A plane-by-plane error analysis indicated that there were local differences for both CBAC and MLACF images with respect to CTAC. The mean relative error in the standard regions of interest was less than 5% for both methods, and the mean absolute relative errors were similar (3.4% ± 3.1% for CBAC and 3.5% ± 3.1% for MLACF). However, the MLACF method recovered activity adjoining the frontal sinus regions more accurately than the CBAC method. The use of plane-by-plane scaling of MLACF images was found to be a crucial step in obtaining improved activity estimates. The presence of local errors in both MLACF- and CBAC-based reconstructions would require the use of a normal database for clinical assessment. However, further work is required to assess the clinical advantage of MLACF over the CBAC-based method.
Statistical distributions of ultra-low dose CT sinograms and their fundamental limits
NASA Astrophysics Data System (ADS)
Lee, Tzu-Cheng; Zhang, Ruoqiao; Alessio, Adam M.; Fu, Lin; De Man, Bruno; Kinahan, Paul E.
2017-03-01
Low-dose CT imaging is typically constrained to be diagnostic. However, there are applications for even lower-dose CT imaging, including image registration across multi-frame CT images and attenuation correction for PET/CT imaging. We define this as the ultra-low-dose (ULD) CT regime, where the exposure level is a factor of 10 lower than current low-dose CT technique levels. In the ULD regime it is possible to use statistically principled image reconstruction methods that make full use of the raw data information. Since most statistical iterative reconstruction methods are based on the assumption that the post-log noise distribution is close to Poisson or Gaussian, our goal is to understand the statistical distribution of ULD CT data with different non-positivity correction methods, and to understand when iterative reconstruction methods may be effective in producing images that are useful for image registration or attenuation correction in PET/CT imaging. We first used phantom measurements and calibrated simulation to reveal how the noise distribution deviates from the normal assumption in the ULD CT flux environment. In summary, our results indicate that there are three general regimes: (1) diagnostic CT, where post-log data are well modeled by a normal distribution; (2) low-dose CT, where the normal distribution remains a reasonable approximation and statistically principled (post-log) methods that assume a normal distribution have an advantage; and (3) a ULD regime that is photon-starved, where the quadratic approximation is no longer effective. For instance, a total integral density of 4.8 (ideal pi for 24 cm of water) for a 120 kVp, 0.5 mAs radiation source is the maximum pi value for which a definitive maximum likelihood value could be found. This leads to fundamental limits in the estimation of ULD CT data when using a standard data processing stream.
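The deviation from the normal assumption is easy to reproduce with a toy simulation (illustrative parameters only, not the calibrated flux levels of the paper): draw Poisson counts, apply a naive non-positivity correction, take the log, and watch the skewness grow as the mean count drops.

    import numpy as np

    rng = np.random.default_rng(0)
    I0 = 1e5  # blank-scan intensity (arbitrary placeholder)

    for mean_counts in [1000.0, 10.0, 0.5]:  # diagnostic -> low-dose -> ULD regime
        counts = rng.poisson(mean_counts, size=200000).astype(float)
        counts[counts < 1] = 1.0          # naive non-positivity correction
        post_log = np.log(I0 / counts)    # post-log line-integral estimate
        z = post_log - post_log.mean()
        skew = np.mean(z**3) / post_log.std()**3
        print(f"mean counts = {mean_counts:6.1f}  post-log skewness = {skew:+.2f}")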
WE-AB-BRA-08: Correction of Patient Motion in C-Arm Cone-Beam CT Using 3D-2D Registration
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ouadah, S; Jacobson, M; Stayman, JW
2016-06-15
Purpose: Intraoperative C-arm cone-beam CT (CBCT) is subject to artifacts arising from patient motion during the fairly long (∼5–20 s) scan times. We present a fiducial-free method to mitigate motion artifacts using 3D-2D image registration that simultaneously corrects residual errors in geometric calibration. Methods: A 3D-2D registration process was used to register each projection to DRRs computed from the 3D image by maximizing gradient orientation (GO) using the CMA-ES optimizer. The resulting rigid 6-DOF transforms were applied to the system projection matrices, and a 3D image was reconstructed via model-based image reconstruction (MBIR, which accommodates the resulting noncircular orbit). Experiments were conducted using a Zeego robotic C-arm (20 s, 200°, 496 projections) to image a head phantom undergoing various types of motion: 1) 5° lateral motion; 2) 15° lateral motion; and 3) 5° lateral motion with 10 mm periodic inferior-superior motion. Images were reconstructed using a penalized likelihood (PL) objective function, and structural similarity (SSIM) was measured for axial slices of the reconstructed images. A motion-free image was acquired using the same protocol for comparison. Results: There was significant improvement (p < 0.001) in the SSIM of the motion-corrected (MC) images compared to uncorrected images. The SSIM in MC-PL images was >0.99, indicating near identity to the motion-free reference. The point spread function (PSF) measured from a wire in the phantom was restored to that of the reference in each case. Conclusion: The 3D-2D registration method provides a robust framework for mitigation of motion artifacts and is expected to hold for applications in the head, pelvis, and extremities with reasonably constrained operative setup. Further improvement can be achieved by incorporating multiple rigid components and non-rigid deformation within the framework. The method is highly parallelizable and could in principle be run with every acquisition. Research supported by National Institutes of Health Grant No. R01-EB-017226 and an academic-industry partnership with Siemens Healthcare (AX Division, Forchheim, Germany).
Fowler, Michael J.; Howard, Marylesa; Luttman, Aaron; ...
2015-06-03
One of the primary causes of blur in a high-energy X-ray imaging system is the shape and extent of the radiation source, or 'spot'. It is important to be able to quantify the size of the spot as it provides a lower bound on the recoverable resolution for a radiograph, and penumbral imaging methods – which involve the analysis of blur caused by a structured aperture – can be used to obtain the spot's spatial profile. We present a Bayesian approach for estimating the spot shape that, unlike variational methods, is robust to the initial choice of parameters. The posterior is obtained from a normal likelihood, which was constructed from a weighted least squares approximation to a Poisson noise model, and prior assumptions that enforce both smoothness and non-negativity constraints. A Markov chain Monte Carlo algorithm is used to obtain samples from the target posterior, and the reconstruction and uncertainty estimates are the computed mean and variance of the samples, respectively. Lastly, synthetic data sets are used to demonstrate accurate reconstruction, while real data taken with high-energy X-ray imaging systems are used to demonstrate applicability and feasibility.
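A minimal random-walk Metropolis sketch in the same spirit, with a synthetic 1D blur stand-in for the real penumbral imaging operator: a weighted least squares (normal) likelihood, a squared-difference smoothness prior, and non-negativity enforced by rejecting negative proposals. The operator, noise level, and prior strength are all assumptions for illustration.

    import numpy as np

    rng = np.random.default_rng(1)
    n = 40
    x_true = np.exp(-0.5 * ((np.arange(n) - 20) / 4.0) ** 2)   # synthetic "spot"
    A = np.exp(-0.5 * ((np.arange(n)[:, None] - np.arange(n)[None, :]) / 2.0) ** 2)
    A /= A.sum(axis=1, keepdims=True)                          # toy blur operator
    y = A @ x_true + 0.01 * rng.standard_normal(n)             # noisy data
    w, beta = 1.0 / 0.01 ** 2, 50.0  # WLS weight and smoothness strength (assumed)

    def log_post(x):
        if np.any(x < 0):                        # non-negativity constraint
            return -np.inf
        r = y - A @ x
        return -0.5 * w * (r @ r) - beta * np.sum(np.diff(x) ** 2)

    x = np.full(n, 0.5)
    lp = log_post(x)
    samples = []
    for it in range(30000):
        prop = x + 0.02 * rng.standard_normal(n)   # random-walk proposal
        lp_prop = log_post(prop)
        if np.log(rng.random()) < lp_prop - lp:    # Metropolis accept/reject
            x, lp = prop, lp_prop
        if it >= 15000 and it % 10 == 0:
            samples.append(x.copy())

    post_mean = np.mean(samples, axis=0)   # reconstruction estimate
    post_var = np.var(samples, axis=0)     # pointwise uncertainty estimate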
McGowen, Michael R
2011-09-01
Oceanic dolphins (Delphinidae) are the product of a rapid radiation that yielded ∼36 extant species of small to medium-sized cetaceans that first emerged in the Late Miocene. Although they are a charismatic group of organisms that have become poster children for marine conservation, many phylogenetic relationships within Delphinidae remain elusive due to the slow molecular evolution of the group and the difficulty of resolving short branches from successive cladogenic events. Here I combine existing and newly generated sequences from four mitochondrial (mt) genes and 20 nuclear (nu) genes to reconstruct a well-supported phylogenetic hypothesis for Delphinidae. This study compares maximum-likelihood and Bayesian inference methods on several data sets, including mtDNA, combined nuDNA, gene trees of individual nuDNA loci, and concatenated mtDNA+nuDNA. In addition, I contrast these standard phylogenetic analyses with the species tree reconstruction method of Bayesian concordance analysis (BCA). Despite finding discordance between mtDNA and individual nuDNA loci, the concatenated matrix recovers a completely resolved and robustly supported phylogeny that is also broadly congruent with BCA trees. This study strongly supports groupings such as Delphininae, Lissodelphininae, Globicephalinae, Sotalia+Delphininae, and Steno+Orcaella+Globicephalinae, as well as Leucopleurus acutus, Lagenorhynchus albirostris, and Orcinus orca as basal delphinid taxa. Copyright © 2011 Elsevier Inc. All rights reserved.
Investigation of statistical iterative reconstruction for dedicated breast CT
Makeev, Andrey; Glick, Stephen J.
2013-01-01
Purpose: Dedicated breast CT has great potential for improving the detection and diagnosis of breast cancer. Statistical iterative reconstruction (SIR) in dedicated breast CT is a promising alternative to traditional filtered backprojection (FBP). One of the difficulties in using SIR is the presence of free parameters in the algorithm that control the appearance of the resulting image. These parameters require tuning in order to achieve high quality reconstructions. In this study, the authors investigated the penalized maximum likelihood (PML) method with two commonly used types of roughness penalty functions: the hyperbolic potential and the anisotropic total variation (TV) norm. Reconstructed images were compared with images obtained using standard FBP. Optimal parameters for PML with the hyperbolic prior are reported for the task of detecting microcalcifications embedded in breast tissue. Methods: Computer simulations were used to acquire projections in a half-cone beam geometry. The modeled setup describes a realistic breast CT benchtop system, with an x-ray spectrum produced by a point source and an a-Si, CsI:Tl flat-panel detector. A voxelized anthropomorphic breast phantom with 280 μm microcalcification spheres embedded in it was used to model the attenuation properties of the woman's uncompressed breast in a pendant position. The reconstruction of 3D images was performed using the separable paraboloidal surrogates algorithm with ordered subsets. Task performance was assessed with the ideal observer detectability index to determine optimal PML parameters. Results: The authors' findings suggest that there is a preferred range of values of the roughness penalty weight and the edge preservation threshold in the penalized objective function with the hyperbolic potential, which resulted in low noise images with high contrast microcalcifications preserved. In terms of the numerical observer detectability index, the PML method with optimal parameters yielded substantially improved performance (by a factor of greater than 10) compared to FBP. The hyperbolic prior was also observed to be superior to the TV norm. A few of the best-performing parameter pairs for the PML method also demonstrated superior performance at various radiation doses. In fact, using PML with certain parameter values results in better images, acquired using a 2 mGy dose, than FBP-reconstructed images acquired using a 6 mGy dose. Conclusions: A range of optimal free parameters for the PML algorithm with hyperbolic and TV norm-based potentials is presented for the microcalcification detection task in dedicated breast CT. The reported values can be used as starting values of the free parameters when SIR techniques are used for image reconstruction. Significant improvement in image quality can be achieved by using PML with an optimal combination of parameters, as compared to FBP. Importantly, these results suggest improved detection of microcalcifications can be obtained by using PML with a lower radiation dose to the patient than using FBP with a higher dose. PMID:23927318
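For reference, the hyperbolic potential referred to here is commonly written as ψ_δ(t) = δ²(√(1 + (t/δ)²) − 1) for a neighboring-voxel difference t and edge-preservation threshold δ: approximately quadratic for |t| ≪ δ and linear for |t| ≫ δ. A small sketch (the paper's exact parameterization may differ):

    import numpy as np

    def hyperbolic_potential(t, delta):
        # ~ t^2/2 for |t| << delta (smooths noise); ~ delta*|t| for |t| >> delta (keeps edges)
        return delta**2 * (np.sqrt(1.0 + (t / delta) ** 2) - 1.0)

    def half_quadratic_weight(t, delta):
        # psi'(t)/t, the weight used in half-quadratic / gradient-based optimization
        return 1.0 / np.sqrt(1.0 + (t / delta) ** 2)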
Investigation of practical initial attenuation image estimates in TOF-MLAA reconstruction for PET/MR
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cheng, Ju-Chieh, E-mail: chengjuchieh@gmail.com; Y
Purpose: Time-of-flight joint attenuation and activity positron emission tomography reconstruction requires additional calibration (scale factors) or constraints during or post-reconstruction to produce a quantitative μ-map. In this work, the impact of various initializations of the joint reconstruction was investigated, and the initial average μ-value (IAM) method was introduced such that the forward-projection of the initial μ-map is already very close to that of the reference μ-map, thus reducing/minimizing the offset (scale factor) during the early iterations of the joint reconstruction. Consequently, the accuracy and efficiency of unconstrained joint reconstruction such as time-of-flight maximum likelihood estimation of attenuation and activity (TOF-MLAA) can be improved by the proposed IAM method. Methods: 2D simulations of the brain and chest were used to evaluate TOF-MLAA with various initial estimates, which include the object filled with water uniformly (conventional initial estimate), bone uniformly, the average μ-value uniformly (IAM magnitude initialization method), and the perfect spatial μ-distribution but with a wrong magnitude (initialization in terms of distribution). A 3D GATE simulation was also performed for the chest phantom under a typical clinical scanning condition, and the simulated data were reconstructed with a fully corrected list-mode TOF-MLAA algorithm with various initial estimates. The accuracy of the average μ-values within the brain, chest, and abdomen regions obtained from the MR-derived μ-maps was also evaluated using computed tomography μ-maps as the gold standard. Results: The estimated μ-map with the initialization in terms of magnitude (i.e., average μ-value) was observed to reach the reference more quickly and naturally as compared to all other cases. Both 2D and 3D GATE simulations produced similar results, and it was observed that the proposed IAM approach can produce quantitative μ-map/emission when the corrections for physical effects such as scatter and randoms were included. The average μ-value obtained from the MR-derived μ-map was accurate within 5% with corrections for bone, fat, and uniform lungs. Conclusions: The proposed IAM-TOF-MLAA can produce a quantitative μ-map without any calibration provided that there are sufficient counts in the measured data. For low-count data, noise reduction and additional regularization/rescaling techniques need to be applied and investigated. The average μ-value within the object is prior information which can be extracted from MR and patient databases, and it is feasible to obtain an accurate average μ-value using an MR-derived μ-map with corrections, as demonstrated in this work.
Particle identification algorithms for the PANDA Endcap Disc DIRC
NASA Astrophysics Data System (ADS)
Schmidt, M.; Ali, A.; Belias, A.; Dzhygadlo, R.; Gerhardt, A.; Götzen, K.; Kalicy, G.; Krebs, M.; Lehmann, D.; Nerling, F.; Patsyuk, M.; Peters, K.; Schepers, G.; Schmitt, L.; Schwarz, C.; Schwiening, J.; Traxler, M.; Böhm, M.; Eyrich, W.; Lehmann, A.; Pfaffinger, M.; Uhlig, F.; Düren, M.; Etzelmüller, E.; Föhl, K.; Hayrapetyan, A.; Kreutzfeld, K.; Merle, O.; Rieke, J.; Wasem, T.; Achenbach, P.; Cardinali, M.; Hoek, M.; Lauth, W.; Schlimme, S.; Sfienti, C.; Thiel, M.
2017-12-01
The Endcap Disc DIRC has been developed to provide excellent particle identification for the future PANDA experiment by separating pions and kaons up to a momentum of 4 GeV/c with a separation power of 3 standard deviations in the polar angle region from 5° to 22°. This goal will be achieved using dedicated particle identification algorithms based on likelihood methods, which will be applied in offline analysis and online event filtering. This paper evaluates the resulting PID performance using Monte Carlo simulations to study basic single-track PID as well as the analysis of complex physics channels. The online reconstruction algorithm has been tested with a Virtex-4 FPGA card and optimized with regard to the resulting constraints.
Phylotranscriptomic analysis of the origin and early diversification of land plants
Wickett, Norman J.; Mirarab, Siavash; Nguyen, Nam; Warnow, Tandy; Carpenter, Eric; Matasci, Naim; Ayyampalayam, Saravanaraj; Barker, Michael S.; Burleigh, J. Gordon; Gitzendanner, Matthew A.; Ruhfel, Brad R.; Wafula, Eric; Graham, Sean W.; Mathews, Sarah; Melkonian, Michael; Soltis, Douglas E.; Soltis, Pamela S.; Miles, Nicholas W.; Rothfels, Carl J.; Pokorny, Lisa; Shaw, A. Jonathan; DeGironimo, Lisa; Stevenson, Dennis W.; Surek, Barbara; Villarreal, Juan Carlos; Roure, Béatrice; Philippe, Hervé; dePamphilis, Claude W.; Chen, Tao; Deyholos, Michael K.; Baucom, Regina S.; Kutchan, Toni M.; Augustin, Megan M.; Wang, Jun; Zhang, Yong; Tian, Zhijian; Yan, Zhixiang; Wu, Xiaolei; Sun, Xiao; Wong, Gane Ka-Shu; Leebens-Mack, James
2014-01-01
Reconstructing the origin and evolution of land plants and their algal relatives is a fundamental problem in plant phylogenetics, and is essential for understanding how critical adaptations arose, including the embryo, vascular tissue, seeds, and flowers. Despite advances in molecular systematics, some hypotheses of relationships remain weakly resolved. Inferring deep phylogenies with bouts of rapid diversification can be problematic; however, genome-scale data should significantly increase the number of informative characters for analyses. Recent phylogenomic reconstructions focused on the major divergences of plants have resulted in promising but inconsistent results. One limitation is sparse taxon sampling, likely resulting from the difficulty and cost of data generation. To address this limitation, transcriptome data for 92 streptophyte taxa were generated and analyzed along with 11 published plant genome sequences. Phylogenetic reconstructions were conducted using up to 852 nuclear genes and 1,701,170 aligned sites. Sixty-nine analyses were performed to test the robustness of phylogenetic inferences to permutations of the data matrix or to phylogenetic method, including supermatrix, supertree, and coalescent-based approaches, maximum-likelihood and Bayesian methods, partitioned and unpartitioned analyses, and amino acid versus DNA alignments. Among other results, we find robust support for a sister-group relationship between land plants and one group of streptophyte green algae, the Zygnematophyceae. Strong and robust support for a clade comprising liverworts and mosses is inconsistent with a widely accepted view of early land plant evolution, and suggests that phylogenetic hypotheses used to understand the evolution of fundamental plant traits should be reevaluated. PMID:25355905
Factors associated with surgical management in an underinsured, safety net population.
Winton, Lisa M; Nodora, Jesse N; Martinez, Maria Elena; Hsu, Chiu-Hsieh; Djenic, Brano; Bouton, Marcia E; Aristizabal, Paula; Ferguson, Elizabeth M; Weiss, Barry D; Komenaka, Ian K
2016-02-01
Few studies include significant numbers of racial and ethnic minority patients. The current study was performed to examine factors that affect breast cancer operations in an underinsured population. We performed a retrospective review of all breast cancer patients from January 2010 to May 2012. Patients with American Joint Committee on Cancer clinical stage 0-IIIA breast cancer underwent evaluation for type of operation: breast conservation, mastectomy alone, and reconstruction after mastectomy. The population included 403 patients with a mean age of 53 years. Twelve of the 50 patients (24%) diagnosed at stage IIIB presented with synchronous metastatic disease. Of the remaining patients, only 2 presented with metastatic disease (0.6%). The initial operation was breast conservation in 65%, mastectomy alone in 26%, and reconstruction after mastectomy in 10%. Multivariate analysis revealed that Hispanic ethnicity (odds ratio [OR], 0.38; 95% CI, 0.19-0.73; P = .004), presentation with a palpable mass (OR, 0.34; 95% CI, 0.13-0.90; P = .03), and preoperative chemotherapy (OR, 0.25; 95% CI, 0.10-0.62; P = .003) were associated with a lower likelihood of mastectomy. Multivariate analysis of factors associated with reconstruction after mastectomy showed that operation by a breast surgical oncologist (OR, 18.4; 95% CI, 2.18-155.14; P < .001) and adequate health literacy (OR, 3.13; 95% CI, 0.95-10.30; P = .06) were associated with reconstruction. The majority of safety net patients can undergo breast conservation despite delayed presentation and poor use of screening mammography. Preoperative chemotherapy increased the likelihood of breast conservation. Routine systemic workup in patients with operable breast cancer is not indicated. Copyright © 2016 Elsevier Inc. All rights reserved.
Helyar, Vincent G; Gupta, Yuri; Blakeway, Lyndall; Charles-Edwards, Geoff; Katsanos, Konstantinos; Karunanithy, Narayan
2018-02-01
This study evaluates the use of balanced steady-state free precession MRI (bSSFP-MRI) in the diagnostic work-up of patients undergoing interventional deep venous reconstruction (I-DVR). Intravenous digital subtraction angiography (IVDSA) was used as the gold-standard for comparison to assess disease extent and severity. A retrospective comparison of bSSFP-MRI to IVDSA was performed in all patients undergoing both examinations for treatment planning prior to I-DVR. The severity of disease in each venous segment was graded by two board-certified radiologists working independently, according to a predetermined classification system. In total, 44 patients (225 venous segments) fulfilled the inclusion criteria. A total of 156 abnormal venous segments were diagnosed using bSSFP-MRI compared with 151 using IVDSA. The prevalence of disease was higher in the iliac and femoral segments (range, 79.6-88.6%). Overall sensitivity, specificity, positive likelihood ratio, negative likelihood ratio and the diagnostic ratio for bSSFP-MRI were 99.3%, 91.9%, 12.3, 0.007 and 1700, respectively. This study supports the use of non-contrast balanced SSFP-MRI in the assessment of the deep veins of the lower limb prior to I-DVR. The technique offers an accurate, fast and non-invasive alternative to IVDSA. Advances in Knowledge: Although balanced SSFP-MRI is commonly used in cardiac imaging, its use elsewhere is limited and its use in evaluating the deep veins prior to interventional reconstruction is not described. Our study demonstrates the usefulness of this technique in the work-up of patients awaiting interventional venous reconstruction compared with the current gold standard.
High Regional Variation in Urethroplasty in the United States
Figler, Bradley D.; Gore, John L.; Holt, Sarah K.; Voelzke, Bryan B.; Wessells, Hunter
2015-01-01
Purpose We identified clinical and regional factors associated with the use of urethroplasty vs repeat endoscopic management for urethral stricture disease. Materials and Methods We analyzed claims for patients 18 to 65 years old in the 2007 to 2011 MarketScan® Commercial Claims and Encounters Database with a diagnosis of urethral stricture. The primary outcome was treatment with urethroplasty vs repeat endoscopic management, defined as more than 2 dilations or direct vision internal urethrotomies. The likelihood of urethroplasty vs repeat endoscopic management was determined for each major metropolitan area in the United States. Multivariate logistic regression was done to identify factors associated with urethroplasty. Results We identified 41,056 patients with urethral stricture, yielding a diagnosis rate of 296/100,000 men in MarketScan. Repeat endoscopic management and urethroplasty were performed in 2,700 and 1,444 patients, respectively. Compared to patients treated with repeat endoscopic management, those with urethroplasty were younger (median age 44 vs 54 years) and more likely to have a Charlson comorbidity score of 0 (84% vs 77%), to have traveled out of a metropolitan area for care (34% vs 17%) and to have a reconstructive urologist in the treatment metropolitan area (76% vs 62%; each p < 0.001). When controlling for age and Charlson comorbidity score, travel out of a metropolitan area (OR 2.7, 95% CI 2.2–3.3) and a reconstructive urologist in the treatment metropolitan area (OR 2.0, 95% CI 1.7–2.5) were associated with a greater likelihood of urethroplasty vs repeat endoscopic management. Conclusions Despite the well-established benefits of urethroplasty compared to repeat endoscopic management, a strong bias for repeat endoscopic management exists in many regions of the United States. PMID:25072180
DOE Office of Scientific and Technical Information (OSTI.GOV)
Freeman, John
A measurement of the top quark mass in t$\bar{t}$ → l + jets candidate events, obtained from p$\bar{p}$ collisions at √s = 1.96 TeV at the Fermilab Tevatron using the CDF II detector, is presented. The measurement approach is that of a matrix element method. For each candidate event, a two-dimensional likelihood is calculated in the top pole mass and a constant scale factor, 'JES', where JES multiplies the input particle jet momenta and is designed to account for the systematic uncertainty of the jet momentum reconstruction. As with all matrix element techniques, the method involves an integration using the Standard Model matrix element for t$\bar{t}$ production and decay. However, the technique presented is unique in that the matrix element is modified to compensate for kinematic assumptions which are made to reduce computation time. Background events are dealt with through use of an event observable which distinguishes signal from background, as well as through a cut on the value of an event's maximum likelihood. Results are based on a 955 pb$^{-1}$ data sample, using events with a high-$p_T$ lepton and exactly four high-energy jets, at least one of which is tagged as coming from a b quark; 149 events pass all the selection requirements. They find $M_\mathrm{meas}$ = 169.8 ± 2.3 (stat.) ± 1.4 (syst.) GeV/c$^2$.
Maximum likelihood solution for inclination-only data in paleomagnetism
NASA Astrophysics Data System (ADS)
Arason, P.; Levi, S.
2010-08-01
We have developed a new robust maximum likelihood method for estimating the unbiased mean inclination from inclination-only data. In paleomagnetic analysis, the arithmetic mean of inclination-only data is known to introduce a shallowing bias. Several methods have been introduced to estimate the unbiased mean inclination of inclination-only data together with measures of the dispersion. Some inclination-only methods were designed to maximize the likelihood function of the marginal Fisher distribution. However, the exact analytical form of the maximum likelihood function is fairly complicated, and all the methods require various assumptions and approximations that are often inappropriate. For some steep and dispersed data sets, these methods provide estimates that are significantly displaced from the peak of the likelihood function toward systematically shallower inclinations. The problem of locating the maximum of the likelihood function is partly due to difficulties in accurately evaluating the function for all values of interest, because some elements of the likelihood function increase exponentially as precision parameters increase, leading to numerical instabilities. In this study, we succeeded in analytically cancelling the exponential elements from the log-likelihood function, and we are now able to calculate its value anywhere in the parameter space and for any inclination-only data set. Furthermore, we can now calculate the partial derivatives of the log-likelihood function with the desired accuracy, and locate the maximum likelihood without the assumptions required by previous methods. To assess the reliability and accuracy of our method, we generated large numbers of random Fisher-distributed data sets, for which we calculated mean inclinations and precision parameters. The comparisons show that our new robust Arason-Levi maximum likelihood method is the most reliable, and the mean inclination estimates are the least biased towards shallow values.
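The stabilization described here can be made explicit. The marginal Fisher density of an inclination I, given mean inclination I₀ and precision κ, is commonly written f(I) = (κ/(2 sinh κ)) cos I · exp(κ sin I sin I₀) · I0(κ cos I cos I₀), with I0 the zeroth-order modified Bessel function; writing log(2 sinh κ) = κ + log(1 − e^{−2κ}) and using the exponentially scaled Bessel function cancels the exponentially growing factors. A sketch of a maximizer built this way (an illustration, not the authors' code):

    import numpy as np
    from scipy.optimize import minimize
    from scipy.special import i0e

    def neg_loglik(params, inc):
        # inc: inclinations in radians; params = (mean inclination, log kappa)
        inc0, logk = params
        k = np.exp(logk)
        x = np.abs(k * np.cos(inc) * np.cos(inc0))
        ll = (np.log(k) - (k + np.log1p(-np.exp(-2.0 * k)))  # -log(2 sinh k), stable
              + np.log(np.cos(inc))
              + k * np.sin(inc) * np.sin(inc0)
              + np.log(i0e(x)) + x)                          # log I0(x), stable
        return -np.sum(ll)

    # Synthetic steep, dispersed inclination-only data (degrees -> radians)
    rng = np.random.default_rng(2)
    inc = np.deg2rad(np.clip(rng.normal(75.0, 10.0, 100), -89.0, 89.0))
    res = minimize(neg_loglik, x0=[np.mean(inc), np.log(20.0)], args=(inc,),
                   method="Nelder-Mead")
    inc0_hat, kappa_hat = res.x[0], np.exp(res.x[1])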
NASA Astrophysics Data System (ADS)
Keeler, D. G.; Rupper, S.; Schaefer, J. M.; Finkel, R. C.
2015-12-01
The high sensitivity of mountain glaciers to even small perturbations in climate, combined with a near global distribution, make alpine glaciers an important target for terrestrial paleoclimate reconstructions. The geomorphic remnant of past glaciers can yield important insights into past climate, particularly in regions where other methods of reconstruction are not possible. The quantitative conversion of these changes in geomorphology to a climate signal, however, presents a significant challenge. A particular need exists for a versatile climate reconstruction method applicable to diverse glacierized regions around the globe. Because the glacier equilibrium line altitude (ELA) provides a more explicit comparison of climate than properties such as glacier length or area, ELA methods lend themselves well to such a need, and allow for a more direct investigation of the primary drivers of mountain glaciations during specific events. Here, we present an ELA model for quantifying changes in climate based on changes in glacier extent, while accounting for differences in glacier width, glacier shape, bed topography, ice thickness, and glacier length. The model furthermore provides bounds on the ΔELA using Monte Carlo simulations. These methods are validated using published mass balances and ELA measurements from 4 modern glaciers in the European Alps. We then use this ELA model, combined with a surface mass and energy balance model, to estimate the changes in temperature/precipitation between the Younger Dryas (constrained by ¹⁰Be surface exposure ages) and the present day for three glacier systems in the Graubünden Alps. Our results indicate an ELA depression in this area of 257 ± 45 m during the Younger Dryas (YD) relative to today. This corresponds to a 1.3 ± 0.36 °C decrease in temperature or a 156% ± 30% increase in precipitation relative to today. These results indicate the likelihood of a predominantly temperature-driven change rather than a strong dependence on precipitation. We apply these same methods to additional areas around the globe to obtain preliminary, self-consistent estimates of temperature/precipitation for multiple regions. These methods and results enhance our understanding of the global and regional patterns in the climate system during the YD.
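The headline numbers are mutually consistent under a simple lapse-rate conversion, ΔT ≈ Γ·ΔELA: 257 m × ~5.1 °C km⁻¹ ≈ 1.3 °C. A toy Monte Carlo propagation of the quoted ΔELA uncertainty (the lapse-rate value and error model below are assumptions for illustration; the study itself uses a full surface mass and energy balance model):

    import numpy as np

    rng = np.random.default_rng(3)
    n = 100000
    d_ela = rng.normal(257.0, 45.0, n)     # ELA depression (m), from the abstract
    gamma = rng.normal(5.1e-3, 0.5e-3, n)  # assumed lapse rate (deg C per m) and spread
    d_T = d_ela * gamma                    # temperature depression, precipitation held fixed
    print(f"dT = {d_T.mean():.2f} +/- {d_T.std():.2f} deg C")  # ~1.3 +/- 0.3 deg C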
Paninski, Liam; Haith, Adrian; Szirtes, Gabor
2008-02-01
We recently introduced likelihood-based methods for fitting stochastic integrate-and-fire models to spike train data. The key component of this method involves the likelihood that the model will emit a spike at a given time t. Computing this likelihood is equivalent to computing a Markov first passage time density (the probability that the model voltage crosses threshold for the first time at time t). Here we detail an improved method for computing this likelihood, based on solving a certain integral equation. This integral equation method has several advantages over the techniques discussed in our previous work: in particular, the new method has fewer free parameters and is easily differentiable (for gradient computations). The new method is also easily adaptable for the case in which the model conductance, not just the input current, is time-varying. Finally, we describe how to incorporate large deviations approximations to very small likelihoods.
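The flavor of the integral-equation approach can be shown on the simplest possible case, a driftless Wiener process with constant threshold b, rather than the time-varying integrate-and-fire kernel of the paper. The identity Pr(V(t) > b) = ∫₀ᵗ p(s)·Pr(V(t) > b | V(s) = b) ds, where the conditional probability equals 1/2 by symmetry, is a Volterra equation of the first kind in the first-passage density p; discretizing and solving by forward substitution recovers the known closed form b·t^(−3/2)·φ(b/√t).

    import numpy as np
    from scipy.stats import norm

    b, T, n = 1.0, 4.0, 400
    dt = T / n
    t = (np.arange(n) + 1.0) * dt        # collocation times for the survival side
    s = (np.arange(n) + 0.5) * dt        # midpoints carrying the density p

    def K(ti, sj):
        # Pr(W(ti) > b | W(sj) = b): 1/2 for the driftless Wiener process, by symmetry
        return 0.5 if sj < ti else 0.0

    c = norm.cdf(-b / np.sqrt(t))        # Pr(W(t) > b), left-hand side of the equation
    p = np.zeros(n)
    for i in range(n):
        acc = sum(K(t[i], s[j]) * p[j] * dt for j in range(i))
        p[i] = (c[i] - acc) / (K(t[i], s[i]) * dt)   # forward substitution

    exact = (b / s ** 1.5) * norm.pdf(b / np.sqrt(s))  # known first-passage density
    print(np.max(np.abs(p - exact)))                   # small discretization error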
Direct Parametric Reconstruction With Joint Motion Estimation/Correction for Dynamic Brain PET Data.
Jiao, Jieqing; Bousse, Alexandre; Thielemans, Kris; Burgos, Ninon; Weston, Philip S J; Schott, Jonathan M; Atkinson, David; Arridge, Simon R; Hutton, Brian F; Markiewicz, Pawel; Ourselin, Sebastien
2017-01-01
Direct reconstruction of parametric images from raw photon counts has been shown to improve the quantitative analysis of dynamic positron emission tomography (PET) data. However, it suffers from subject motion, which is inevitable during the typical acquisition time of 1-2 hours. In this work we propose a framework to jointly estimate subject head motion and reconstruct the motion-corrected parametric images directly from raw PET data, so that the effects of distorted tissue-to-voxel mapping due to subject motion can be reduced in reconstructing the parametric images with motion-compensated attenuation correction and spatially aligned temporal PET data. The proposed approach is formulated within the maximum likelihood framework, and efficient solutions are derived for estimating subject motion and kinetic parameters from raw PET photon count data. Results from evaluations on simulated [¹¹C]raclopride data using the Zubal brain phantom and real clinical [¹⁸F]florbetapir data of a patient with Alzheimer's disease show that the proposed joint direct parametric reconstruction motion correction approach can improve the accuracy of quantifying dynamic PET data with large subject motion.
Spatiotemporal reconstruction of list-mode PET data.
Nichols, Thomas E; Qi, Jinyi; Asma, Evren; Leahy, Richard M
2002-04-01
We describe a method for computing a continuous time estimate of tracer density using list-mode positron emission tomography data. The rate function in each voxel is modeled as an inhomogeneous Poisson process whose rate function can be represented using a cubic B-spline basis. The rate functions are estimated by maximizing the likelihood of the arrival times of detected photon pairs over the control vertices of the spline, modified by quadratic spatial and temporal smoothness penalties and a penalty term to enforce nonnegativity. Randoms rate functions are estimated by assuming independence between the spatial and temporal randoms distributions. Similarly, scatter rate functions are estimated by assuming spatiotemporal independence and that the temporal distribution of the scatter is proportional to the temporal distribution of the trues. A quantitative evaluation was performed using simulated data and the method is also demonstrated in a human study using ¹¹C-raclopride.
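For a single voxel, the core computation is the log-likelihood of an inhomogeneous Poisson process with a cubic B-spline rate, log L = Σ_k log λ(t_k) − ∫₀ᵀ λ(t) dt, which would then be maximized over the control vertices subject to the penalties described above. A bare-bones sketch (no penalties, randoms, or scatter; all names and values are illustrative):

    import numpy as np
    from scipy.interpolate import BSpline

    def poisson_spline_loglik(coefs, arrivals, knots, T, ngrid=2000):
        # sum_k log lam(t_k) - integral_0^T lam(t) dt, with lam a cubic B-spline rate
        lam = BSpline(knots, coefs, k=3, extrapolate=False)
        rate_at_events = np.clip(lam(arrivals), 1e-12, None)   # guard against log(0)
        tg = np.linspace(0.0, T, ngrid)
        integral = np.trapz(np.clip(lam(tg), 0.0, None), tg)
        return np.sum(np.log(rate_at_events)) - integral

    T = 60.0
    knots = np.r_[[0.0] * 3, np.linspace(0.0, T, 9), [T] * 3]  # clamped cubic knot vector
    ncoef = len(knots) - 4                                     # number of control vertices
    arrivals = np.sort(np.random.default_rng(4).uniform(0.0, T, 500))
    print(poisson_spline_loglik(np.full(ncoef, 8.0), arrivals, knots, T))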
Reassignment of scattered emission photons in multifocal multiphoton microscopy.
Cha, Jae Won; Singh, Vijay Raj; Kim, Ki Hean; Subramanian, Jaichandar; Peng, Qiwen; Yu, Hanry; Nedivi, Elly; So, Peter T C
2014-06-05
Multifocal multiphoton microscopy (MMM) achieves fast imaging by simultaneously scanning multiple foci across different regions of a specimen. The use of imaging detectors in MMM, such as CCD or CMOS, results in degradation of the image signal-to-noise ratio (SNR) due to the scattering of emitted photons. SNR can be partly recovered using multianode photomultiplier tubes (MAPMT). In this design, however, emission photons scattered to neighboring anodes are encoded by the foci scan location, resulting in ghost images. The crosstalk between different anodes is currently measured a priori, which is cumbersome as it depends on specimen properties. Here, we present a photon reassignment method for MMM, based on maximum likelihood (ML) estimation, for quantification of crosstalk between the anodes of the MAPMT without a priori measurement. The method reassigns the photons generating the ghost images to their original spatial locations, thus increasing the SNR of the final reconstructed image.
Using MOEA with Redistribution and Consensus Branches to Infer Phylogenies.
Min, Xiaoping; Zhang, Mouzhao; Yuan, Sisi; Ge, Shengxiang; Liu, Xiangrong; Zeng, Xiangxiang; Xia, Ningshao
2017-12-26
In recent years, more and more research on inferring phylogenies, which is an NP-hard problem, has focused on metaheuristics. Maximum Parsimony and Maximum Likelihood are two effective ways to conduct inference. Based on these methods, which can also be considered as optimality criteria for phylogenies, various kinds of multi-objective metaheuristics have been used to reconstruct phylogenies. However, combining these two time-consuming methods makes such multi-objective metaheuristics slower than single-objective ones. Therefore, we propose a novel multi-objective optimization algorithm, MOEA-RC, to accelerate the process of rebuilding phylogenies using structural information from elites in current populations. We compare MOEA-RC with two representative multi-objective algorithms, MOEA/D and NSGA-II, and a non-consensus version of MOEA-RC on three real-world datasets. The result is that, within a given number of iterations, MOEA-RC achieves better solutions than the other algorithms.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Storm, Emma; Weniger, Christoph; Calore, Francesca, E-mail: e.m.storm@uva.nl, E-mail: c.weniger@uva.nl, E-mail: francesca.calore@lapth.cnrs.fr
We present SkyFACT (Sky Factorization with Adaptive Constrained Templates), a new approach for studying, modeling and decomposing diffuse gamma-ray emission. Like most previous analyses, the approach relies on predictions from cosmic-ray propagation codes like GALPROP and DRAGON. However, in contrast to previous approaches, we account for the fact that models are not perfect and allow for a very large number (≳10⁵) of nuisance parameters to parameterize these imperfections. We combine methods of image reconstruction and adaptive spatio-spectral template regression in one coherent hybrid approach. To this end, we use penalized Poisson likelihood regression, with regularization functions that are motivated by the maximum entropy method. We introduce methods to efficiently handle the high dimensionality of the convex optimization problem as well as the associated semi-sparse covariance matrix, using the L-BFGS-B algorithm and Cholesky factorization. We test the method both on synthetic data as well as on gamma-ray emission from the inner Galaxy, |ℓ| < 90° and |b| < 20°, as observed by the Fermi Large Area Telescope. We finally define a simple reference model that removes most of the residual emission from the inner Galaxy, based on conventional diffuse emission components as well as components for the Fermi bubbles, the Fermi Galactic center excess, and extended sources along the Galactic disk. Variants of this reference model can serve as a basis for future studies of diffuse emission in and outside the Galactic disk.
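The optimization step has a compact generic analogue: a penalized Poisson likelihood over per-pixel modulation parameters of fixed templates, minimized with L-BFGS-B using box bounds for positivity. The templates, the quadratic-in-log penalty standing in for the MEM-motivated regularizers, and all constants below are placeholders, not SkyFACT itself.

    import numpy as np
    from scipy.optimize import minimize

    rng = np.random.default_rng(5)
    npix, ncomp = 500, 2
    T = rng.gamma(2.0, 1.0, (ncomp, npix))        # fixed model templates (stand-ins)
    data = rng.poisson(1.2 * T[0] + 0.8 * T[1])   # synthetic counts
    lam = 10.0                                    # regularization strength (assumed)

    def objective(theta_flat):
        theta = theta_flat.reshape(ncomp, npix)   # multiplicative nuisance parameters
        mu = np.sum(theta * T, axis=0)
        nll = np.sum(mu - data * np.log(mu))      # Poisson negative log-likelihood
        return nll + lam * np.sum(np.log(theta) ** 2)

    def gradient(theta_flat):
        theta = theta_flat.reshape(ncomp, npix)
        mu = np.sum(theta * T, axis=0)
        g = T * (1.0 - data / mu) + 2.0 * lam * np.log(theta) / theta
        return g.ravel()

    res = minimize(objective, np.ones(ncomp * npix), jac=gradient,
                   method="L-BFGS-B", bounds=[(1e-6, None)] * (ncomp * npix))
    theta_hat = res.x.reshape(ncomp, npix)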
NASA Astrophysics Data System (ADS)
Yu, Haitao; Liu, Jing; Cai, Lihui; Wang, Jiang; Cao, Yibin; Hao, Chongqing
2017-02-01
The electroencephalogram (EEG) signal evoked by acupuncture stimulation at the "Zusanli" acupoint is analyzed to investigate the modulatory effect of manual acupuncture on functional brain activity. The power spectral density of the EEG signal is first calculated based on the autoregressive Burg method. It is shown that the EEG power is significantly increased during and after acupuncture in the delta and theta bands, but decreased in the alpha band. Furthermore, synchronization likelihood is used to estimate the nonlinear correlation between each pair of EEG signals. By applying a threshold to the resulting synchronization matrices, functional networks for each band are reconstructed and further quantitatively analyzed to study the impact of acupuncture on network structure. Graph theoretical analysis demonstrates that the functional connectivity of the brain undergoes obvious changes under different conditions: pre-acupuncture, acupuncture, and post-acupuncture. The minimum path length is largely decreased and the clustering coefficient keeps increasing during and after acupuncture in the delta and theta bands. This indicates that acupuncture can significantly modulate the functional activity of the brain and facilitate information transmission between different brain areas. The obtained results may facilitate our understanding of the long-lasting effect of acupuncture on brain function.
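The network-construction step described here reduces to a few lines: threshold the synchronization-likelihood matrix, build a graph, and read off the clustering coefficient and characteristic path length. A sketch using NetworkX (the threshold choice is an assumption; the study's exact thresholding convention may differ):

    import numpy as np
    import networkx as nx

    def functional_network_metrics(sync, threshold):
        # sync: symmetric synchronization-likelihood matrix (one per frequency band)
        adj = (sync >= threshold).astype(int)
        np.fill_diagonal(adj, 0)
        G = nx.from_numpy_array(adj)
        cc = nx.average_clustering(G)                    # mean clustering coefficient
        giant = G.subgraph(max(nx.connected_components(G), key=len))
        pl = nx.average_shortest_path_length(giant)      # characteristic path length
        return cc, pl

Computing these metrics per condition (pre-acupuncture, acupuncture, post-acupuncture) and per band reproduces the kind of comparison reported above.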
Altazi, Baderaldeen A; Zhang, Geoffrey G; Fernandez, Daniel C; Montejo, Michael E; Hunt, Dylan; Werner, Joan; Biagioli, Matthew C; Moros, Eduardo G
2017-11-01
Site-specific investigations of the role of radiomics in cancer diagnosis and therapy are emerging. We evaluated the reproducibility of radiomic features extracted from fluorine-18 fluorodeoxyglucose (¹⁸F-FDG) PET images for three parameters: manual versus computer-aided segmentation methods, gray-level discretization, and PET image reconstruction algorithms. Our cohort consisted of pretreatment PET/CT scans from 88 cervical cancer patients. Two board-certified radiation oncologists manually segmented the metabolic tumor volume (MTV1 and MTV2) for each patient. For comparison, we used a graphical-based method to generate semiautomated segmented volumes (GBSV). To address any perturbations in radiomic feature values, we down-sampled the tumor volumes into three gray-levels: 32, 64, and 128 from the original gray-level of 256. Finally, we analyzed the effect of four PET 3D-reconstruction algorithms on radiomic features for PET images of eight patients: the maximum likelihood-ordered subset expectation maximization (OSEM) iterative reconstruction (IR) method, Fourier rebinning-ML-OSEM (FOREIR), FORE-filtered back projection (FOREFBP), and the analytical 3D-reprojection (3DRP) method. We extracted 79 features from all segmentation methods, gray-levels of down-sampled volumes, and PET reconstruction algorithms. The features were extracted using gray-level co-occurrence matrices (GLCM), gray-level size zone matrices (GLSZM), gray-level run-length matrices (GLRLM), neighborhood gray-tone difference matrices (NGTDM), shape-based features (SF), and intensity histogram features (IHF). We computed the Dice coefficient between each MTV and GBSV to measure segmentation accuracy. Coefficient values close to one indicate high agreement, and values close to zero indicate low agreement. We evaluated the effect on radiomic features by calculating the mean percentage differences (d̄) between feature values measured from each pair of parameter elements (i.e., segmentation methods: MTV1-MTV2, MTV1-GBSV, MTV2-GBSV; gray-levels: 64-32, 64-128, and 64-256; reconstruction algorithms: OSEM-FOREIR, OSEM-FOREFBP, and OSEM-3DRP). We used |d̄| as a measure of radiomic feature reproducibility level, where any feature that scored |d̄| ± SD ≤ 25% ± 35% was considered reproducible. We used Bland-Altman analysis to evaluate the mean, standard deviation (SD), and upper/lower reproducibility limits (U/LRL) for radiomic features in response to variation in each testing parameter. Furthermore, we proposed U/LRL as a method to classify the level of reproducibility: high: ±1% ≤ U/LRL ≤ ±30%; intermediate: ±30% < U/LRL ≤ ±45%; low: ±45% < U/LRL ≤ ±50%. We considered any feature below the low level as nonreproducible (NR). Finally, we calculated the intraclass correlation coefficient (ICC) to evaluate the reliability of radiomic feature measurements for each parameter. The segmented volumes of 65 patients (81.3%) scored Dice coefficient >0.75 for all three volumes. The results revealed a tendency toward higher radiomic feature reproducibility for the segmentation pair MTV1-GBSV than MTV2-GBSV, for the gray-level pairs 64-32 and 64-128 than 64-256, and for the reconstruction algorithm pairs OSEM-FOREIR and OSEM-FOREFBP than OSEM-3DRP. Although the choice of cervical tumor segmentation method, gray-level value, and reconstruction algorithm may affect radiomic features, some features were characterized by high reproducibility through all testing parameters.
The number of radiomic features that showed insensitivity to variations in segmentation methods, gray-level discretization, and reconstruction algorithms was 10 (13%), 4 (5%), and 1 (1%), respectively. These results suggest that a careful analysis of the effects of these parameters is essential prior to any radiomics clinical application. © 2017 The Authors. Journal of Applied Clinical Medical Physics published by Wiley Periodicals, Inc. on behalf of American Association of Physicists in Medicine.
Event-by-event PET image reconstruction using list-mode origin ensembles algorithm
NASA Astrophysics Data System (ADS)
Andreyev, Andriy
2016-03-01
There is a great demand for real-time or event-by-event (EBE) image reconstruction in emission tomography. Ideally, as soon as an event has been detected by the acquisition electronics, it should be used in the image reconstruction software. This would greatly speed up image reconstruction, since most of the data would be processed and reconstructed while the patient is still undergoing the scan. Unfortunately, the current industry standard is that reconstruction of the image does not start until all the data for the current image frame have been acquired. Implementing EBE reconstruction for the MLEM family of algorithms is possible, but not straightforward, as multiple (computationally expensive) updates to the image estimate are required. In this work, an alternative Origin Ensembles (OE) image reconstruction algorithm for PET imaging is converted to EBE mode and investigated as a possible alternative for real-time image reconstruction. In the OE algorithm, all acquired events are seen as points located somewhere along the corresponding lines of response (LORs), together forming a point cloud. Iteratively, through a multitude of quasi-random shifts that follow the likelihood function, the point cloud converges to a reflection of the actual radiotracer distribution with a degree of accuracy similar to MLEM. New data can be naturally added into the point cloud. Preliminary results with simulated data show little difference between regular reconstruction and EBE mode, demonstrating the feasibility of the proposed approach.
On authenticity: the question of truth in construction and autobiography.
Collins, Sara
2011-12-01
Freud was occupied with the question of truth and its verification throughout his work. He looked to archaeology for an evidence model to support his ideas on reconstruction. He also referred to literature regarding truth in reconstruction, where he saw shifts between historical fact and invention, and detected such swings in his own case histories. In his late work Freud pondered over the impossibility of truth in reconstruction by juxtaposing truth with 'probability'. Developments on the role of fantasy and myth in reconstruction and contemporary debates over objectivity have increasingly highlighted the question of 'truth' in psychoanalysis. I will argue that 'authenticity' is a helpful concept in furthering the discussion over truth in reconstruction. Authenticity denotes that which is genuine, trustworthy and emotionally accurate in a reconstruction, as observed within the immediacy of the analyst/patient interaction. As authenticity signifies genuineness in a contemporary context its origins are verifiable through the analyst's own observations of the analytic process itself. Therefore, authenticity is about the likelihood and approximation of historical truth rather than its certainty. In that respect it links with Freud's musings over 'probability'. Developments on writing 'truths' in autobiography mirror those in reconstruction, and lend corroborative support from another source. Copyright © 2011 Institute of Psychoanalysis.
Stayman, J Webster; Tilley, Steven; Siewerdsen, Jeffrey H
2014-01-01
Previous investigations [1-3] have demonstrated that integrating specific knowledge of the structure and composition of components like surgical implants, devices, and tools into a model-based reconstruction framework can improve image quality and allow for potential exposure reductions in CT. Using device knowledge in practice is complicated by uncertainties in the exact shape of components and their particular material composition. Such unknowns in the morphology and attenuation properties lead to errors in the forward model that limit the utility of component integration. In this work, a methodology is presented to accommodate both uncertainties in shape as well as unknown energy-dependent attenuation properties of the surgical devices. This work leverages the so-called known-component reconstruction (KCR) framework [1] with a generalized deformable registration operator and modifications to accommodate a spectral transfer function in the component model. Moreover, since this framework decomposes the object into separate background anatomy and "known" component factors, a mixed fidelity forward model can be adopted so that measurements associated with projections through the surgical devices can be modeled with much greater accuracy. A deformable KCR (dKCR) approach using the mixed fidelity model is introduced and applied to a flexible wire component with unknown structure and composition. Image quality advantages of dKCR over traditional reconstruction methods are illustrated in cone-beam CT (CBCT) data acquired on a testbench emulating a 3D-guided needle biopsy procedure - i.e., a deformable component (needle) with strong energy-dependent attenuation characteristics (steel) within a complex soft-tissue background.
A flexible, small positron emission tomography prototype for resource-limited laboratories
NASA Astrophysics Data System (ADS)
Miranda-Menchaca, A.; Martínez-Dávalos, A.; Murrieta-Rodríguez, T.; Alva-Sánchez, H.; Rodríguez-Villafuerte, M.
2015-05-01
Modern small-animal PET scanners typically consist of a large number of detectors along with complex electronics to provide tomographic images for research in the preclinical sciences that use animal models. These systems can be expensive, especially for resource-limited educational and academic institutions in developing countries. In this work we show that a small-animal PET scanner can be built on a relatively modest budget while at the same time achieving relatively high performance. The prototype consists of four detector modules, each composed of LYSO pixelated crystal arrays (individual crystal elements of dimensions 1 × 1 × 10 mm³) coupled to position-sensitive photomultiplier tubes. Tomographic images are obtained by rotating the subject to complete enough projections for image reconstruction. Image quality was evaluated for different reconstruction algorithms, including filtered back-projection and iterative reconstruction with maximum likelihood-expectation maximization and maximum a posteriori methods. The system matrix was computed both with geometric considerations and by Monte Carlo simulations. Prior to image reconstruction, Fourier data rebinning was used to increase the number of lines of response used. The system was evaluated for energy resolution at 511 keV (best 18.2%), system sensitivity (0.24%), spatial resolution (best 0.87 mm), scatter fraction (4.8%) and noise equivalent count rate. The system can be scaled up to include up to 8 detector modules, increasing detection efficiency, and its price may be reduced as newer solid-state detectors become available to replace the traditional photomultiplier tubes. Prototypes like this may prove to be very valuable for educational, training, preclinical and other biological research purposes.
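Of the reconstruction options listed, maximum likelihood-expectation maximization is the most compact to write down: with system matrix A and measured sinogram y, the multiplicative update is x ← (x / Aᵀ1) ⊙ Aᵀ(y / Ax). A generic dense-matrix sketch (a real system matrix, whether geometric or Monte Carlo derived, would be sparse and far larger):

    import numpy as np

    def mlem(A, y, n_iter=50):
        # Multiplicative MLEM update: x <- x / (A^T 1) * A^T (y / (A x))
        x = np.ones(A.shape[1])
        sens = A.T @ np.ones(A.shape[0])          # sensitivity image, A^T 1
        for _ in range(n_iter):
            proj = np.clip(A @ x, 1e-12, None)    # forward projection
            x *= (A.T @ (y / proj)) / np.clip(sens, 1e-12, None)
        return x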
Angle Statistics Reconstruction: a robust reconstruction algorithm for Muon Scattering Tomography
NASA Astrophysics Data System (ADS)
Stapleton, M.; Burns, J.; Quillin, S.; Steer, C.
2014-11-01
Muon Scattering Tomography (MST) is a technique for using the scattering of cosmic ray muons to probe the contents of enclosed volumes. As a muon passes through material it undergoes multiple Coulomb scattering, where the amount of scattering is dependent on the density and atomic number of the material as well as the path length. Hence, MST has been proposed as a means of imaging dense materials, for instance to detect special nuclear material in cargo containers. Algorithms are required to generate an accurate reconstruction of the material density inside the volume from the muon scattering information, and some have already been proposed, most notably the Point of Closest Approach (PoCA) and Maximum Likelihood/Expectation Maximisation (MLEM) algorithms. However, whilst PoCA-based algorithms are easy to implement, they perform rather poorly in practice. Conversely, MLEM is a complicated algorithm to implement and computationally intensive, and there is currently no published, fast and easily-implementable algorithm that performs well in practice. In this paper, we first provide a detailed analysis of the source of inaccuracy in PoCA-based algorithms. We then motivate an alternative method, based on ideas first laid out by Morris et al., presenting and fully specifying an algorithm that performs well against simulations of realistic scenarios. We argue that this new algorithm should be adopted by developers of Muon Scattering Tomography as an alternative to PoCA.
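For orientation, the PoCA step itself is elementary geometry: given the incoming track (point p1, direction d1) and the outgoing track (p2, d2), the closest points on the two lines follow from a 2x2 linear solve, and the scattering angle from the dot product of the unit directions. A sketch:

    import numpy as np

    def poca(p1, d1, p2, d2):
        # Point of closest approach between incoming line (p1, d1) and outgoing
        # line (p2, d2); returns (midpoint, scattering angle in radians).
        d1, d2 = d1 / np.linalg.norm(d1), d2 / np.linalg.norm(d2)
        w = p1 - p2
        a, b, c = d1 @ d1, d1 @ d2, d2 @ d2      # a = c = 1 after normalization
        denom = a * c - b * b                    # ~0 for (near-)parallel tracks
        if denom < 1e-12:
            return None, 0.0
        s = (b * (d2 @ w) - c * (d1 @ w)) / denom
        t = (a * (d2 @ w) - b * (d1 @ w)) / denom
        q1, q2 = p1 + s * d1, p2 + t * d2        # closest points on each line
        angle = np.arccos(np.clip(d1 @ d2, -1.0, 1.0))
        return 0.5 * (q1 + q2), angle

A PoCA reconstruction then deposits each muon's scattering angle at the resulting midpoint; this localization step is precisely the source of inaccuracy the paper analyzes.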
Cierniak, Robert; Lorent, Anna
2016-09-01
The main aim of this paper is to investigate the conditioning-related properties of our originally formulated statistical model-based iterative approach to the problem of image reconstruction from projections, and thereby to demonstrate the superiority of this approach over those recently used by other authors. The reconstruction algorithm based on this conception uses maximum likelihood estimation with an objective adjusted to the probability distribution of measured signals obtained from an X-ray computed tomography system with parallel-beam geometry. The analysis and experimental results presented here show that our analytical approach outperforms the referential algebraic methodology which is explored widely in the literature and exploited in various commercial implementations. Copyright © 2016 Elsevier Ltd. All rights reserved.
Treetrimmer: a method for phylogenetic dataset size reduction.
Maruyama, Shinichiro; Eveleigh, Robert J M; Archibald, John M
2013-04-12
With rapid advances in genome sequencing and bioinformatics, it is now possible to generate phylogenetic trees containing thousands of operational taxonomic units (OTUs) from a wide range of organisms. However, applying rigorous tree-building methods to such large datasets is computationally prohibitive, and manual 'pruning' of sequence alignments is time-consuming and raises concerns over reproducibility. There is a need for bioinformatic tools with which to objectively carry out such pruning procedures. Here we present 'TreeTrimmer', a bioinformatics procedure that removes unnecessary redundancy in large phylogenetic datasets, alleviating the size effect on more rigorous downstream analyses. The method identifies and removes user-defined 'redundant' sequences, e.g., orthologous sequences from closely related organisms and 'recently' evolved lineage-specific paralogs. Representative OTUs are retained for more rigorous re-analysis. TreeTrimmer reduces the OTU density of phylogenetic trees without sacrificing taxonomic diversity while retaining the original tree topology, thereby speeding up downstream computer-intensive analyses, e.g., Bayesian and maximum likelihood tree reconstructions, in a reproducible fashion.
Comparing implementations of penalized weighted least-squares sinogram restoration
Forthmann, Peter; Koehler, Thomas; Defrise, Michel; La Riviere, Patrick
2010-01-01
Purpose: A CT scanner measures the energy that is deposited in each channel of a detector array by x rays that have been partially absorbed on their way through the object. The measurement process is complex and quantitative measurements are always and inevitably associated with errors, so CT data must be preprocessed prior to reconstruction. In recent years, the authors have formulated CT sinogram preprocessing as a statistical restoration problem in which the goal is to obtain the best estimate of the line integrals needed for reconstruction from the set of noisy, degraded measurements. The authors have explored both penalized Poisson likelihood (PL) and penalized weighted least-squares (PWLS) objective functions. At low doses, the authors found that the PL approach outperforms PWLS in terms of resolution-noise tradeoffs, but at standard doses they perform similarly. The PWLS objective function, being quadratic, is more amenable to computational acceleration than the PL objective. In this work, the authors develop and compare two different methods for implementing PWLS sinogram restoration with the hope of improving computational performance relative to PL in the standard-dose regime. Sinogram restoration is still significant in the standard-dose regime since it can still outperform standard approaches and it allows for correction of effects that are not usually modeled in standard CT preprocessing. Methods: The authors have explored and compared two implementation strategies for PWLS sinogram restoration: (1) A direct matrix-inversion strategy based on the closed-form solution to the PWLS optimization problem and (2) an iterative approach based on the conjugate-gradient algorithm. Obtaining optimal performance from each strategy required modifying the naive off-the-shelf implementations of the algorithms to exploit the particular symmetry and sparseness of the sinogram-restoration problem. For the closed-form approach, the authors subdivided the large matrix inversion into smaller coupled problems and exploited sparseness to minimize matrix operations. For the conjugate-gradient approach, the authors exploited sparseness and preconditioned the problem to speed up convergence. Results: All methods produced qualitatively and quantitatively similar images as measured by resolution-variance tradeoffs and difference images. Despite the acceleration strategies, the direct matrix-inversion approach was found to be uncompetitive with iterative approaches, with a computational burden higher by an order of magnitude or more. The iterative conjugate-gradient approach, however, does appear promising, with computation times half that of the authors’ previous penalized-likelihood implementation. Conclusions: Iterative conjugate-gradient based PWLS sinogram restoration with careful matrix optimizations has computational advantages over direct matrix PWLS inversion and over penalized-likelihood sinogram restoration and can be considered a good alternative in standard-dose regimes. PMID:21158306
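A minimal sketch of the conjugate-gradient strategy for the PWLS normal equations is shown below; the degradation operator A, weights w, and penalty matrix R are placeholders, and the preconditioning and sparsity exploitation the authors describe are omitted for brevity.

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, cg

def pwls_restore(A, y, w, beta, R, n_unknowns):
    """PWLS sketch: solve (A^T W A + beta R) x = A^T W y by conjugate
    gradients. A and R may be sparse matrices or operators; w holds the
    per-measurement weights (e.g. inverse variances). Names illustrative."""
    def hess_mv(x):
        # matrix-vector product with the PWLS Hessian
        return A.T @ (w * (A @ x)) + beta * (R @ x)
    H = LinearOperator((n_unknowns, n_unknowns), matvec=hess_mv)
    b = A.T @ (w * y)
    x, info = cg(H, b, maxiter=200)
    return x
```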
Computational synchronization of microarray data with application to Plasmodium falciparum.
Zhao, Wei; Dauwels, Justin; Niles, Jacquin C; Cao, Jianshu
2012-06-21
Microarrays are widely used to investigate the blood stage of Plasmodium falciparum infection. Starting with synchronized cells, gene expression levels are continually measured over the 48-hour intra-erythrocytic cycle (IDC). However, the cell population gradually loses synchrony during the experiment. As a result, the microarray measurements are blurred. In this paper, we propose a generalized deconvolution approach to reconstruct the intrinsic expression pattern, and apply it to P. falciparum IDC microarray data. We develop a statistical model for the decay of synchrony among cells, and reconstruct the expression pattern through statistical inference. The proposed method can handle microarray measurements with noise and missing data. The original gene expression patterns become more apparent in the reconstructed profiles, making it easier to analyze and interpret the data. We hypothesize that reconstructed gene expression patterns represent better temporally resolved expression profiles that can be probabilistically modeled to match changes in expression level to IDC transitions. In particular, we identify transcriptionally regulated protein kinases putatively involved in regulating the P. falciparum IDC. By analyzing publicly available microarray data sets for the P. falciparum IDC, protein kinases are ranked in terms of their likelihood to be involved in regulating transitions between the ring, trophozoite and schizont developmental stages of the P. falciparum IDC. In our theoretical framework, a few protein kinases have high probability rankings, and could potentially be involved in regulating these developmental transitions. This study proposes a new methodology for extracting intrinsic expression patterns from microarray data. By applying this method to P. falciparum microarray data, several protein kinases are predicted to play a significant role in the P. falciparum IDC. Earlier experiments have indeed confirmed that several of these kinases are involved in this process. Overall, these results indicate that further functional analysis of these additional putative protein kinases may reveal new insights into how the P. falciparum IDC is regulated.
NASA Astrophysics Data System (ADS)
Zhang, Lijuan; Li, Yang; Wang, Junnan; Liu, Ying
2018-03-01
In this paper, we propose a point spread function (PSF) reconstruction method and a joint maximum a posteriori (JMAP) estimation method for adaptive optics (AO) image restoration. Using the JMAP method as the basic principle, we establish the joint log-likelihood function of multi-frame AO images based on Gaussian noise models for the images. First, combining the observing conditions and AO system characteristics, a predicted PSF model for the wavefront phase effect is developed; then, we derive iterative solution formulas for the AO image under the proposed algorithm and describe the implementation of the multi-frame AO image joint deconvolution method. We conduct a series of experiments on simulated and real degraded AO images to evaluate the proposed algorithm. Compared with the Wiener iterative blind deconvolution (Wiener-IBD) algorithm and the Richardson-Lucy IBD algorithm, our algorithm achieves better restoration, including higher peak signal-to-noise ratio (PSNR) and Laplacian sum (LS) values. The results have practical value for real AO image restoration.
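As a point of reference for the baselines mentioned above, a bare-bones Richardson-Lucy deconvolution loop (one of the comparison methods, not the authors' JMAP algorithm) can be sketched as follows.

```python
import numpy as np
from scipy.signal import fftconvolve

def richardson_lucy(image, psf, n_iter=30, eps=1e-12):
    """Classical Richardson-Lucy deconvolution. image: blurred observation,
    psf: normalized point spread function. Parameters are illustrative."""
    x = np.full_like(image, image.mean())    # flat starting estimate
    psf_flip = psf[::-1, ::-1]               # adjoint of the convolution
    for _ in range(n_iter):
        blurred = fftconvolve(x, psf, mode="same") + eps
        ratio = image / blurred
        x *= fftconvolve(ratio, psf_flip, mode="same")   # multiplicative update
    return x
```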
Kinematic Structural Modelling in Bayesian Networks
NASA Astrophysics Data System (ADS)
Schaaf, Alexander; de la Varga, Miguel; Florian Wellmann, J.
2017-04-01
We commonly capture our knowledge about the spatial distribution of distinct geological lithologies in the form of 3-D geological models. Several methods exist to create these models, each with its own strengths and limitations. We present here an approach to combine the functionalities of two modelling approaches - implicit interpolation and kinematic modelling methods - into one framework, while explicitly considering parameter uncertainties and thus model uncertainty. In recent work, we proposed an approach to implement implicit modelling algorithms into Bayesian networks. This was done to address the issues of input data uncertainty and integration of geological information from varying sources in the form of geological likelihood functions. However, one general shortcoming of implicit methods is that they usually do not take any physical constraints into consideration, which can result in unrealistic model outcomes and artifacts. Kinematic structural modelling, on the other hand, intends to reconstruct the history of a geological system based on physically driven kinematic events. This type of modelling incorporates simplified physical laws into the model, at the cost of a substantial increase in the number of uncertain parameters. In the work presented here, we show an integration of these two different modelling methodologies, taking advantage of the strengths of both. First, we treat the two types of models separately, capturing the information contained in the kinematic models and their specific parameters in the form of likelihood functions, in order to use them in the implicit modelling scheme. We then go further and combine the two modelling approaches into one single Bayesian network. This enables the direct flow of information between the parameters of the kinematic modelling step and the implicit modelling step, and links the exclusive input data and likelihoods of the two different modelling algorithms into one probabilistic inference framework. In addition, we use the capabilities of Noddy to analyze the topology of structural models to demonstrate how topological information, such as the connectivity of two layers across an unconformity, can be used as a likelihood function. In an application to a synthetic case study, we show that our approach leads to a successful combination of the two different modelling concepts. Specifically, we derive ensemble realizations of implicit models that now incorporate knowledge of the kinematic aspects, representing an important step forward in the integration of knowledge and a corresponding estimation of uncertainties in structural geological models.
USDA-ARS?s Scientific Manuscript database
In the mega-diverse insect order Lepidoptera (butterflies and moths; 165,000 species total), 98% of the species fall in the clade Ditrysia, relationships within which are little understood. As the first step in a long-term study of ditrysian phylogeny, we tested the ability of maximum likelihood ana...
An optimal algorithm for reconstructing images from binary measurements
NASA Astrophysics Data System (ADS)
Yang, Feng; Lu, Yue M.; Sbaiz, Luciano; Vetterli, Martin
2010-01-01
We have studied a camera with a very large number of binary pixels, referred to as the gigavision camera [1] or the gigapixel digital film camera [2, 3]. Potential advantages of this new camera design include improved dynamic range, thanks to its logarithmic sensor response curve, and reduced exposure time in low light conditions, due to its highly sensitive photon detection mechanism. We use a maximum likelihood estimator (MLE) to reconstruct a high quality conventional image from the binary sensor measurements of the gigavision camera. We prove that when the threshold T is "1", the negative log-likelihood function is convex; therefore, the optimal solution can be found using convex optimization. Based on filter-bank techniques, fast algorithms are given for computing the gradient of the negative log-likelihood function and its Hessian-vector products. We show that with a minor change, our algorithm also works for estimating conventional images from multiple binary images. Numerical experiments with synthetic 1-D signals and images verify the effectiveness and quality of the proposed algorithm. Experimental results also show that estimation performance can be improved by increasing the oversampling factor or the number of binary images.
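Under the stated threshold-1 model, one plausible form of the per-pixel negative log-likelihood and its gradient is sketched below, assuming Poisson photon counts with mean lam at each binary pixel; the paper's filter-bank accelerations and the linear imaging model relating lam to the latent image are omitted.

```python
import numpy as np

def binary_nll(lam, b, eps=1e-12):
    """Negative log-likelihood for binary measurements b with threshold T=1:
    P(b=1) = 1 - exp(-lam). Convex in lam, so gradient-based convex
    optimization applies. lam: per-pixel light intensity (assumption)."""
    p1 = 1.0 - np.exp(-lam)
    nll = -(b * np.log(p1 + eps) - (1 - b) * lam).sum()
    # gradient of the NLL with respect to lam
    grad = -b * np.exp(-lam) / (p1 + eps) + (1 - b)
    return nll, grad
```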
Evolution at the tips: Asclepias phylogenomics and new perspectives on leaf surfaces.
Fishbein, Mark; Straub, Shannon C K; Boutte, Julien; Hansen, Kimberly; Cronn, Richard C; Liston, Aaron
2018-03-01
Leaf surface traits, such as trichome density and wax production, mediate important ecological processes such as anti-herbivory defense and water-use efficiency. We present a phylogenetic analysis of Asclepias plastomes as a framework for analyzing the evolution of trichome density and the presence of epicuticular waxes. We produced a maximum-likelihood phylogeny using plastomes of 103 species of Asclepias. We reconstructed ancestral states and used model comparisons in a likelihood framework to analyze character evolution across Asclepias. We resolved the backbone of Asclepias, placing the Sonoran Desert clade and Incarnatae clade as successive sisters to the remaining species. We present novel findings about leaf surface evolution of Asclepias: the ancestor is reconstructed as waxless and sparsely hairy, a macroevolutionary optimal trichome density is supported, and the rate of evolution of trichome density has accelerated. Increased sampling and selection of best-fitting models of evolution provide more resolved and robust estimates of phylogeny and character evolution than obtained in previous studies. Evolutionary inferences are more sensitive to character coding than to model selection. © 2018 The Authors. American Journal of Botany is published by Wiley Periodicals, Inc. on behalf of the Botanical Society of America.
Naushad, Sohail; Barkema, Herman W.; Luby, Christopher; Condas, Larissa A. Z.; Nobrega, Diego B.; Carson, Domonique A.; De Buck, Jeroen
2016-01-01
Non-aureus staphylococci (NAS), a heterogeneous group of a large number of species and subspecies, are the most frequently isolated pathogens from intramammary infections in dairy cattle. Phylogenetic relationships among bovine NAS species are controversial and have mostly been determined based on single-gene trees. Herein, we analyzed phylogeny of bovine NAS species using whole-genome sequencing (WGS) of 441 distinct isolates. In addition, evolutionary relationships among bovine NAS were estimated from multilocus data of 16S rRNA, hsp60, rpoB, sodA, and tuf genes and sequences from these and numerous other single genes/proteins. All phylogenies were created with FastTree, Maximum-Likelihood, Maximum-Parsimony, and Neighbor-Joining methods. Regardless of methodology, WGS-trees clearly separated bovine NAS species into five monophyletic coherent clades. Furthermore, there were consistent interspecies relationships within clades in all WGS phylogenetic reconstructions. Except for the Maximum-Parsimony tree, multilocus data analysis similarly produced five clades. There were large variations in determining clades and interspecies relationships in single gene/protein trees, under different methods of tree constructions, highlighting limitations of using single genes for determining bovine NAS phylogeny. However, based on WGS data, we established a robust phylogeny of bovine NAS species, unaffected by method or model of evolutionary reconstructions. Therefore, it is now possible to determine associations between phylogeny and many biological traits, such as virulence, antimicrobial resistance, environmental niche, geographical distribution, and host specificity. PMID:28066335
Attenuation correction strategies for multi-energy photon emitters using SPECT
NASA Astrophysics Data System (ADS)
Pretorius, P. H.; King, M. A.; Pan, T.-S.; Hutton, B. F.
1997-06-01
The aim of this study was to investigate whether the photopeak window projections from photons of different energies can be combined into a single window for reconstruction, or whether it is better not to combine the projections due to differences in the attenuation maps required for each photon energy. The mathematical cardiac torso (MCAT) phantom was modified to simulate the uptake of Ga-67 in the human body. Four spherical hot tumors were placed in locations which challenged attenuation correction. An analytical 3D projector with attenuation and detector response included was used to generate projection sets. Data were reconstructed with filtered backprojection (FBP) using Butterworth filtering in conjunction with one iteration of Chang attenuation correction, and with 5 and 10 iterations of ordered-subset maximum-likelihood expectation maximization (ML-OS) reconstruction. To serve as a standard for comparison, the projection sets obtained from the two energies were first reconstructed separately using their own attenuation maps. The emission data obtained from both energies were then added and reconstructed using the following attenuation strategies: 1) the 93 keV attenuation map for attenuation correction, 2) the 185 keV attenuation map for attenuation correction, 3) a weighted mean obtained by combining the 93 keV and 185 keV maps, and 4) an ordered-subset approach which combines both energies. The central count ratio (CCR) and total count ratio (TCR) were used to compare the performance of the different strategies. Compared to the standard method, results indicate an over-estimation with strategy 1, an under-estimation with strategy 2, and comparable results with strategies 3 and 4. In all strategies, the CCRs of sphere 4 (in proximity to the liver, spleen and backbone) were under-estimated, although TCRs were comparable to those of the other locations. The weighted mean and ordered-subset strategies for attenuation correction were of comparable accuracy to reconstruction of the windows separately. They are recommended for multi-energy photon SPECT imaging quantitation when there is a need to combine the acquisitions of multiple windows.
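Strategy 3 amounts to a voxel-wise weighted mean of the two energy-specific maps; a trivial sketch follows, where the weight w_93 is only a placeholder that would in practice reflect the relative counts contributed by each photopeak.

```python
import numpy as np

def combined_attenuation_map(mu_93, mu_185, w_93=0.5):
    """Sketch of strategy 3: a weighted mean of the 93 keV and 185 keV
    attenuation maps for reconstructing the summed emission data.
    w_93=0.5 is a placeholder weight, not the paper's value."""
    return w_93 * np.asarray(mu_93) + (1.0 - w_93) * np.asarray(mu_185)
```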
Variation in the utilization of reconstruction following mastectomy in elderly women.
In, Haejin; Jiang, Wei; Lipsitz, Stuart R; Neville, Bridget A; Weeks, Jane C; Greenberg, Caprice C
2013-06-01
Regardless of their age, women who choose to undergo postmastectomy reconstruction report improved quality of life as a result. However, actual use of reconstruction decreases with increasing age. Whereas this may reflect patient preference and clinical factors, it may also represent age-based disparity. Women aged 65 years or older who underwent mastectomy for DCIS/stage I/II breast cancer (2000-2005) were identified in the SEER-Medicare database. Overall and institutional rates of reconstruction were calculated. Characteristics of hospitals with higher and lower rates of reconstruction were compared. Pseudo-R² statistics utilizing a patient-level logistic regression model estimated the relative contribution of institution and patient characteristics. A total of 19,234 patients at 716 institutions were examined. Overall, 6 % of elderly patients received reconstruction after mastectomy. Institutional rates ranged from zero to >40 %. Whereas 53 % of institutions performed no reconstruction on elderly patients, 5.6 % performed reconstructions on more than 20 %. Although patient characteristics (%ΔR² = 70 %), and especially age (%ΔR² = 34 %), were the primary determinants of reconstruction, institutional characteristics also explained some of the variation (%ΔR² = 16 %). This suggests that in addition to appropriate factors, including clinical characteristics and patient preferences, the use of reconstruction among older women also is influenced by the institution at which they receive care. Variation in the likelihood of reconstruction by institution and the association with structural characteristics suggests unequal access to this critical component of breast cancer care. Increased awareness of a potential age disparity is an important first step to improve access for elderly women who are candidates and desire reconstruction.
Spectral CT of the extremities with a silicon strip photon counting detector
NASA Astrophysics Data System (ADS)
Sisniega, A.; Zbijewski, W.; Stayman, J. W.; Xu, J.; Taguchi, K.; Siewerdsen, J. H.
2015-03-01
Purpose: Photon counting x-ray detectors (PCXDs) are an important emerging technology for spectral imaging and material differentiation with numerous potential applications in diagnostic imaging. We report development of a Si-strip PCXD system originally developed for mammography with potential application to spectral CT of musculoskeletal extremities, including challenges associated with sparse sampling, spectral calibration, and optimization for higher energy x-ray beams. Methods: A bench-top CT system was developed incorporating a Si-strip PCXD, fixed anode x-ray source, and rotational and translational motions to execute complex acquisition trajectories. Trajectories involving rotation and translation combined with iterative reconstruction were investigated, including single and multiple axial scans and longitudinal helical scans. The system was calibrated to provide accurate spectral separation in dual-energy three-material decomposition of soft-tissue, bone, and iodine. Image quality and decomposition accuracy were assessed in experiments using a phantom with pairs of bone and iodine inserts (3, 5, 15 and 20 mm) and an anthropomorphic wrist. Results: The designed trajectories improved the sampling distribution from 56% minimum sampling of voxels to 75%. Use of iterative reconstruction (viz., penalized likelihood with edge preserving regularization) in combination with such trajectories resulted in a very low level of artifacts in images of the wrist. For large bone or iodine inserts (>5 mm diameter), the error in the estimated material concentration was <16% for (50 mg/mL) bone and <8% for (5 mg/mL) iodine with strong regularization. For smaller inserts, errors of 20-40% were observed and motivate improved methods for spectral calibration and optimization of the edge-preserving regularizer. Conclusion: Use of PCXDs for three-material decomposition in joint imaging proved feasible through a combination of rotation-translation acquisition trajectories and iterative reconstruction with optimized regularization.
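A linearized two-material decomposition of the kind used for the bone/iodine separation can be sketched as a per-ray least-squares solve; the 2x2 coefficient matrix mu stands in for the spectral calibration the paper describes and is an assumption here.

```python
import numpy as np

def two_material_decomposition(log_lo, log_hi, mu):
    """Linearized dual-energy decomposition sketch. log_lo/log_hi are
    -log(I/I0) line integrals in the low/high energy bins; mu is a 2x2
    matrix of effective mass-attenuation coefficients
    [[bone_lo, iodine_lo], [bone_hi, iodine_hi]] from calibration.
    Returns per-ray basis-material thickness estimates (bone, iodine)."""
    rhs = np.stack([log_lo, log_hi])                 # shape (2, n_rays)
    thickness, *_ = np.linalg.lstsq(mu, rhs, rcond=None)
    return thickness                                 # shape (2, n_rays)
```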
Observation of Bs-Bsbar Oscillations Using Partially Reconstructed Hadronic Bs Decays
DOE Office of Scientific and Technical Information (OSTI.GOV)
Miles, Jeffrey Robert
2008-02-01
This thesis describes the contribution of partially reconstructed hadronic decays in the world's first observation of Bs0-Bs0bar oscillations. The analysis is a core member of a suite of closely related studies whose combined time-dependent measurement of the Bs0-Bs0bar oscillation frequency Δms is of historic significance. Using a data sample of 1 fb^-1 of p-pbar collisions at √s = 1.96 TeV collected with the CDF-II detector at the Fermilab Tevatron, they find signals of 3150 partially reconstructed hadronic Bs decays from the combined decay channels Bs0 → Ds*- π+ and Bs0 → Ds- ρ+ with Ds- → φπ-. These events are analyzed in parallel with 2000 fully reconstructed Bs0 → Ds- π+ (Ds- → φπ-) decays. The treatment of the data is developed in stages of progressive complexity, using high-statistics samples of hadronic B0 and B+ decays to study the attributes of partially reconstructed events. The analysis characterizes the data in mass and proper decay time, noting the potential of the partially reconstructed decays for precise measurement of B branching fractions and lifetimes, but consistently focusing on the effectiveness of the model for the oscillation measurement. They efficiently incorporate the measured quantities of each decay into a maximum likelihood fitting framework, from which they extract amplitude scans and a direct measurement of the oscillation frequency. The features of the amplitude scans are consistent with expected behavior, supporting the correctness of the calibrations for proper time uncertainty and flavor tagging dilution. The likelihood allows for the smooth combination of this analysis with results from other data samples, including 3500 fully reconstructed hadronic Bs events and 61,500 partially reconstructed semileptonic Bs events. The individual analyses show compelling evidence for Bs0-Bs0bar oscillations, and the combination yields a clear signal. The probability that random fluctuations could produce a comparable signature is 8 × 10^-8, which exceeds the 5 standard deviation threshold of significance for observation. The discovery threshold would not be achieved without inclusion of the partially reconstructed hadronic decays. They measure Δms = 17.77 ± 0.10(stat) ± 0.07(syst) ps^-1 and extract |Vtd/Vts| = 0.2060 ± 0.0007(exp) +0.0081/-0.0060(theory), consistent with the Standard Model expectation.
Investigation of statistical iterative reconstruction for dedicated breast CT
DOE Office of Scientific and Technical Information (OSTI.GOV)
Makeev, Andrey; Glick, Stephen J.
2013-08-15
Purpose: Dedicated breast CT has great potential for improving the detection and diagnosis of breast cancer. Statistical iterative reconstruction (SIR) in dedicated breast CT is a promising alternative to traditional filtered backprojection (FBP). One of the difficulties in using SIR is the presence of free parameters in the algorithm that control the appearance of the resulting image. These parameters require tuning in order to achieve high quality reconstructions. In this study, the authors investigated the penalized maximum likelihood (PML) method with two commonly used types of roughness penalty functions: hyperbolic potential and anisotropic total variation (TV) norm. Reconstructed images were compared with images obtained using standard FBP. Optimal parameters for PML with the hyperbolic prior are reported for the task of detecting microcalcifications embedded in breast tissue. Methods: Computer simulations were used to acquire projections in a half-cone beam geometry. The modeled setup describes a realistic breast CT benchtop system, with an x-ray spectrum produced by a point source and an a-Si, CsI:Tl flat-panel detector. A voxelized anthropomorphic breast phantom with 280 μm microcalcification spheres embedded in it was used to model the attenuation properties of the uncompressed breast in a pendant position. The reconstruction of 3D images was performed using the separable paraboloidal surrogates algorithm with ordered subsets. Task performance was assessed with the ideal observer detectability index to determine optimal PML parameters. Results: The authors' findings suggest that there is a preferred range of values of the roughness penalty weight and the edge preservation threshold in the penalized objective function with the hyperbolic potential, which resulted in low noise images with high contrast microcalcifications preserved. In terms of numerical observer detectability index, the PML method with optimal parameters yielded substantially improved performance (by a factor of greater than 10) compared to FBP. The hyperbolic prior was also observed to be superior to the TV norm. A few of the best-performing parameter pairs for the PML method also demonstrated superior performance at various radiation doses. In fact, using PML with certain parameter values results in better images, acquired using 2 mGy dose, than FBP-reconstructed images acquired using 6 mGy dose. Conclusions: A range of optimal free parameters for the PML algorithm with hyperbolic and TV norm-based potentials is presented for the microcalcification detection task in dedicated breast CT. The reported values can be used as starting values of the free parameters when SIR techniques are used for image reconstruction. Significant improvement in image quality can be achieved by using PML with an optimal combination of parameters, as compared to FBP. Importantly, these results suggest improved detection of microcalcifications can be obtained by using PML with lower radiation dose to the patient than using FBP with higher dose.
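The hyperbolic potential referred to above is commonly written as ψδ(t) = δ²(√(1 + (t/δ)²) − 1), quadratic for small differences (smoothing noise) and approximately linear for large ones (preserving edges); a small sketch, with delta standing in for the paper's edge-preservation threshold:

```python
import numpy as np

def hyperbolic_penalty(diffs, delta):
    """Edge-preserving hyperbolic potential applied to neighboring-voxel
    differences. delta is the edge-preservation threshold; the roughness
    weight beta would scale the sum of these terms in the PML objective."""
    t = diffs / delta
    return (delta ** 2) * (np.sqrt(1.0 + t ** 2) - 1.0)
```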
NASA Astrophysics Data System (ADS)
Morozov, A.; Defendi, I.; Engels, R.; Fraga, F. A. F.; Fraga, M. M. F. R.; Gongadze, A.; Guerard, B.; Jurkovic, M.; Kemmerling, G.; Manzin, G.; Margato, L. M. S.; Niko, H.; Pereira, L.; Petrillo, C.; Peyaud, A.; Piscitelli, F.; Raspino, D.; Rhodes, N. J.; Sacchetti, F.; Schooneveld, E. M.; Solovov, V.; Van Esch, P.; Zeitelhack, K.
2013-05-01
The software package ANTS (Anger-camera type Neutron detector: Toolkit for Simulations), developed for simulation of Anger-type gaseous detectors for thermal neutron imaging was extended to include a module for experimental data processing. Data recorded with a sensor array containing up to 100 photomultiplier tubes (PMT) or silicon photomultipliers (SiPM) in a custom configuration can be loaded and the positions and energies of the events can be reconstructed using the Center-of-Gravity, Maximum Likelihood or Least Squares algorithm. A particular strength of the new module is the ability to reconstruct the light response functions and relative gains of the photomultipliers from flood field illumination data using adaptive algorithms. The performance of the module is demonstrated with simulated data generated in ANTS and experimental data recorded with a 19 PMT neutron detector. The package executables are publicly available at http://coimbra.lip.pt/~andrei/
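Of the three reconstruction options mentioned, the Center-of-Gravity (Anger) estimate is the simplest; a minimal sketch with illustrative sensor arrays follows.

```python
import numpy as np

def center_of_gravity(signals, pmt_x, pmt_y):
    """Center-of-gravity (Anger) position estimate from one event's PMT
    signals; pmt_x/pmt_y are the sensor coordinates. Illustrative only;
    the package also offers Maximum Likelihood and Least Squares options."""
    q = np.clip(signals, 0, None)      # suppress negative baseline noise
    total = q.sum()
    return (q @ pmt_x) / total, (q @ pmt_y) / total
```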
NASA Astrophysics Data System (ADS)
Lee, Junghoon; Zheng, Yili; Yin, Zhye; Doerschuk, Peter C.; Johnson, John E.
2010-08-01
Cryo electron microscopy is frequently used on biological specimens that contain a mixture of different types of objects. Because the electron beam rapidly destroys the specimen, the beam current is minimized, which leads to noisy images (SNR substantially less than 1), and only one projection image per object (with an unknown projection direction) is collected. For situations where the objects can reasonably be described as coming from a finite set of classes, an approach based on joint maximum likelihood estimation of the reconstruction of each class, followed by use of the reconstructions to label the class of each image, is described and demonstrated on two challenging problems: an assembly mutant of Cowpea Chlorotic Mottle Virus and portals of the bacteriophage P22.
Assessment of parametric uncertainty for groundwater reactive transport modeling
Shi, Xiaoqing; Ye, Ming; Curtis, Gary P.; Miller, Geoffery L.; Meyer, Philip D.; Kohler, Matthias; Yabusaki, Steve; Wu, Jichun
2014-01-01
The validity of using Gaussian assumptions for model residuals in uncertainty quantification of a groundwater reactive transport model was evaluated in this study. Least squares regression methods explicitly assume Gaussian residuals, and the assumption leads to Gaussian likelihood functions, model parameters, and model predictions. While the Bayesian methods do not explicitly require the Gaussian assumption, Gaussian residuals are widely used. This paper shows that the residuals of the reactive transport model are non-Gaussian, heteroscedastic, and correlated in time; characterizing them requires using a generalized likelihood function such as the formal generalized likelihood function developed by Schoups and Vrugt (2010). For the surface complexation model considered in this study for simulating uranium reactive transport in groundwater, parametric uncertainty is quantified using the least squares regression methods and Bayesian methods with both Gaussian and formal generalized likelihood functions. While the least squares methods and Bayesian methods with the Gaussian likelihood function produce similar Gaussian parameter distributions, the parameter distributions of Bayesian uncertainty quantification using the formal generalized likelihood function are non-Gaussian. In addition, the predictive performance of the formal generalized likelihood function is superior to that of the least squares regression and Bayesian methods with the Gaussian likelihood function. The Bayesian uncertainty quantification is conducted using the differential evolution adaptive metropolis (DREAM(ZS)) algorithm; as a Markov chain Monte Carlo (MCMC) method, it is a robust tool for quantifying uncertainty in groundwater reactive transport models. For the surface complexation model, the regression-based local sensitivity analysis and the Morris- and DREAM(ZS)-based global sensitivity analyses yield almost identical rankings of parameter importance. The uncertainty analysis may help select appropriate likelihood functions, improve model calibration, and reduce predictive uncertainty in other groundwater reactive transport and environmental modeling.
Deconvolving the wedge: maximum-likelihood power spectra via spherical-wave visibility modelling
NASA Astrophysics Data System (ADS)
Ghosh, A.; Mertens, F. G.; Koopmans, L. V. E.
2018-03-01
Direct detection of the Epoch of Reionization (EoR) via the red-shifted 21-cm line will have unprecedented implications for the study of structure formation in the infant Universe. To fulfil this promise, current and future 21-cm experiments need to detect this weak EoR signal in the presence of foregrounds that are several orders of magnitude larger. This requires extreme noise control and improved wide-field high dynamic-range imaging techniques. We propose a new imaging method based on a maximum likelihood framework which solves the interferometric equation directly on the sphere, or equivalently in the uvw-domain. The method uses the one-to-one relation between spherical waves and spherical harmonics (SpH). It consistently handles signals from the entire sky, and does not require a w-term correction. The SpH coefficients represent the sky-brightness distribution and the visibilities in the uvw-domain, and provide a direct estimate of the spatial power spectrum. Using these spectrally smooth SpH coefficients, bright foregrounds can be removed from the signal, including their side-lobe noise, which is one of the limiting factors in high dynamic-range wide-field imaging. Chromatic effects causing the so-called 'wedge' are effectively eliminated (i.e. deconvolved) in the cylindrical (k⊥, k∥) power spectrum, compared to a power spectrum computed directly from the images of the foreground visibilities, where the wedge is clearly present. We illustrate our method using simulated Low-Frequency Array observations, finding an excellent reconstruction of the input EoR signal with minimal bias.
Blind beam-hardening correction from Poisson measurements
NASA Astrophysics Data System (ADS)
Gu, Renliang; Dogandžić, Aleksandar
2016-02-01
We develop a sparse image reconstruction method for Poisson-distributed polychromatic X-ray computed tomography (CT) measurements under the blind scenario where the material of the inspected object and the incident energy spectrum are unknown. We employ our mass-attenuation spectrum parameterization of the noiseless measurements and express the mass-attenuation spectrum as a linear combination of B-spline basis functions of order one. A block coordinate-descent algorithm is developed for constrained minimization of a penalized Poisson negative log-likelihood (NLL) cost function, where constraints and penalty terms ensure nonnegativity of the spline coefficients and nonnegativity and sparsity of the density map image; the image sparsity is imposed using a convex total-variation (TV) norm penalty term. This algorithm alternates between a Nesterov's proximal-gradient (NPG) step for estimating the density map image and a limited-memory Broyden-Fletcher-Goldfarb-Shanno with box constraints (L-BFGS-B) step for estimating the incident-spectrum parameters. To accelerate convergence of the density-map NPG steps, we apply function restart and a step-size selection scheme that accounts for varying local Lipschitz constants of the Poisson NLL. Real X-ray CT reconstruction examples demonstrate the performance of the proposed scheme.
Lew, Matthew D; von Diezmann, Alexander R S; Moerner, W E
2013-02-25
Automated processing of double-helix (DH) microscope images of single molecules (SMs) streamlines the protocol required to obtain super-resolved three-dimensional (3D) reconstructions of ultrastructures in biological samples by single-molecule active control microscopy. Here, we present a suite of MATLAB subroutines, bundled with an easy-to-use graphical user interface (GUI), that facilitates 3D localization of single emitters (e.g. SMs, fluorescent beads, or quantum dots) with precisions of tens of nanometers in multi-frame movies acquired using a wide-field DH epifluorescence microscope. The algorithmic approach is based upon template matching for SM recognition and least-squares fitting for 3D position measurement, both of which are computationally expedient and precise. Overlapping images of SMs are ignored, and the precision of least-squares fitting is not as high as maximum likelihood-based methods. However, once calibrated, the algorithm can fit 15-30 molecules per second on a 3 GHz Intel Core 2 Duo workstation, thereby producing a 3D super-resolution reconstruction of 100,000 molecules over a 20×20×2 μm field of view (processing 128×128 pixels × 20000 frames) in 75 min.
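A simplified stand-in for the least-squares fitting step (fitting a single 2D Gaussian rather than the two lobes of the actual double-helix PSF, which the toolbox handles) might look like the following sketch.

```python
import numpy as np
from scipy.optimize import least_squares

def fit_gaussian_spot(roi):
    """Least-squares fit of a 2D Gaussian to a single-emitter image patch.
    Illustrative only: the real DH workflow fits two PSF lobes and maps
    their angle to the emitter's z position."""
    ny, nx = roi.shape
    yy, xx = np.mgrid[0:ny, 0:nx]

    def residuals(p):
        amp, x0, y0, sigma, offset = p
        model = amp * np.exp(-((xx - x0) ** 2 + (yy - y0) ** 2)
                             / (2 * sigma ** 2)) + offset
        return (model - roi).ravel()

    p0 = [roi.max() - roi.min(), nx / 2, ny / 2, 1.5, roi.min()]
    fit = least_squares(residuals, p0)
    return fit.x   # amplitude, x0, y0, sigma, offset
```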
Probabilistic modeling of the evolution of gene synteny within reconciled phylogenies
2015-01-01
Background Most models of genome evolution concern either genetic sequences, gene content or gene order. They sometimes integrate two of the three levels, but rarely the three of them. Probabilistic models of gene order evolution usually have to assume constant gene content or adopt a presence/absence coding of gene neighborhoods which is blind to complex events modifying gene content. Results We propose a probabilistic evolutionary model for gene neighborhoods, allowing genes to be inserted, duplicated or lost. It uses reconciled phylogenies, which integrate sequence and gene content evolution. We are then able to optimize parameters such as phylogeny branch lengths, or probabilistic laws depicting the diversity of susceptibility of syntenic regions to rearrangements. We reconstruct a structure for ancestral genomes by optimizing a likelihood, keeping track of all evolutionary events at the level of gene content and gene synteny. Ancestral syntenies are associated with a probability of presence. We implemented the model with the restriction that at most one gene duplication separates two gene speciations in reconciled gene trees. We reconstruct ancestral syntenies on a set of 12 drosophila genomes, and compare the evolutionary rates along the branches and along the sites. We compare with a parsimony method and find a significant number of results not supported by the posterior probability. The model is implemented in the Bio++ library. It thus benefits from and enriches the classical models and methods for molecular evolution. PMID:26452018
Markov random field based automatic image alignment for electron tomography.
Amat, Fernando; Moussavi, Farshid; Comolli, Luis R; Elidan, Gal; Downing, Kenneth H; Horowitz, Mark
2008-03-01
We present a method for automatic full-precision alignment of the images in a tomographic tilt series. Full-precision automatic alignment of cryo electron microscopy images has remained a difficult challenge to date, due to the limited electron dose and low image contrast. These facts lead to poor signal to noise ratio (SNR) in the images, which causes automatic feature trackers to generate errors, even with high contrast gold particles as fiducial features. To enable fully automatic alignment for full-precision reconstructions, we frame the problem probabilistically as finding the most likely particle tracks given a set of noisy images, using contextual information to make the solution more robust to the noise in each image. To solve this maximum likelihood problem, we use Markov Random Fields (MRF) to establish the correspondence of features in alignment and robust optimization for projection model estimation. The resulting algorithm, called Robust Alignment and Projection Estimation for Tomographic Reconstruction, or RAPTOR, has not needed any manual intervention for the difficult datasets we have tried, and has provided sub-pixel alignment that is as good as the manual approach by an expert user. We are able to automatically map complete and partial marker trajectories and thus obtain highly accurate image alignment. Our method has been applied to challenging cryo electron tomographic datasets with low SNR from intact bacterial cells, as well as several plastic section and X-ray datasets.
An improved image alignment procedure for high-resolution transmission electron microscopy.
Lin, Fang; Liu, Yan; Zhong, Xiaoyan; Chen, Jianghua
2010-06-01
Image alignment is essential for image processing methods such as through-focus exit-wavefunction reconstruction and image averaging in high-resolution transmission electron microscopy. Relative image displacements exist in any experimentally recorded image series due to the specimen drifts and image shifts, hence image alignment for correcting the image displacements has to be done prior to any further image processing. The image displacement between two successive images is determined by the correlation function of the two relatively shifted images. Here it is shown that more accurate image alignment can be achieved by using an appropriate aperture to filter the high-frequency components of the images being aligned, especially for a crystalline specimen with little non-periodic information. For the image series of crystalline specimens with little amorphous, the radius of the filter aperture should be as small as possible, so long as it covers the innermost lattice reflections. Testing with an experimental through-focus series of Si[110] images, the accuracies of image alignment with different correlation functions are compared with respect to the error functions in through-focus exit-wavefunction reconstruction based on the maximum-likelihood method. Testing with image averaging over noisy experimental images from graphene and carbon-nanotube samples, clear and sharp crystal lattice fringes are recovered after applying optimal image alignment. Copyright 2010 Elsevier Ltd. All rights reserved.
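A minimal sketch of displacement estimation by cross-correlation with a low-pass filter aperture, in the spirit of the recommendation above; the aperture radius radius_frac is a placeholder that would be matched to the innermost lattice reflections.

```python
import numpy as np

def aligned_shift(img_a, img_b, radius_frac=0.15):
    """Displacement between two image frames from the peak of their
    cross-correlation, computed in Fourier space with a circular low-pass
    aperture (radius_frac is the cutoff as a fraction of Nyquist)."""
    ny, nx = img_a.shape
    fy = np.fft.fftfreq(ny)[:, None]
    fx = np.fft.fftfreq(nx)[None, :]
    aperture = (fy ** 2 + fx ** 2) <= (radius_frac * 0.5) ** 2
    cross = np.fft.fft2(img_a) * np.conj(np.fft.fft2(img_b)) * aperture
    cc = np.fft.ifft2(cross).real
    py, px = np.unravel_index(np.argmax(cc), cc.shape)
    # map wrapped peak indices to signed shifts
    shift_y = py if py <= ny // 2 else py - ny
    shift_x = px if px <= nx // 2 else px - nx
    return shift_y, shift_x
```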
Real-Time Imaging System for the OpenPET
NASA Astrophysics Data System (ADS)
Tashima, Hideaki; Yoshida, Eiji; Kinouchi, Shoko; Nishikido, Fumihiko; Inadama, Naoko; Murayama, Hideo; Suga, Mikio; Haneishi, Hideaki; Yamaya, Taiga
2012-02-01
The OpenPET and its real-time imaging capability have great potential for real-time tumor tracking in medical procedures such as biopsy and radiation therapy. For the real-time imaging system, we intend to use the one-pass list-mode dynamic row-action maximum likelihood algorithm (DRAMA) and implement it using general-purpose computing on graphics processing units (GPGPU) techniques. However, it is difficult to reconstruct consistently in real-time because the amount of list-mode data acquired in PET scans depends on the level of radioactivity, and the reconstruction speed depends on the amount of list-mode data. In this study, we developed a system to control the amount of data used in the reconstruction step while retaining quantitative performance. In the proposed system, the data transfer control system limits the event counts used in the reconstruction step according to the reconstruction speed, and the reconstructed images are intensified accordingly using the ratio of the used counts to the total counts, as sketched below. We implemented the system on a small OpenPET prototype and evaluated its real-time tracking ability by displaying reconstructed images with compensated intensity. The intensity of the displayed images correlated properly with the original count rate, and a frame rate of 2 frames per second was achieved with an average delay time of 2.1 s.
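The compensation rule is simple enough to state in code; a one-line sketch under the stated count-ratio assumption:

```python
def compensate_intensity(image, used_counts, total_counts):
    """Scale a reconstruction built from a subset of list-mode events so
    the displayed intensity tracks the true count rate (sketch only)."""
    return image * (total_counts / used_counts)
```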
Limited angle C-arm tomosynthesis reconstruction algorithms
NASA Astrophysics Data System (ADS)
Malalla, Nuhad A. Y.; Xu, Shiyu; Chen, Ying
2015-03-01
In this paper, C-arm tomosynthesis with a digital detector was investigated as a novel three dimensional (3D) imaging technique. Digital tomosynthesis is an imaging technique that provides 3D information by reconstructing slices passing through the object from a series of angular projection views. C-arm tomosynthesis provides two dimensional (2D) X-ray projection images with rotation (±20° angular range) of both the X-ray source and the detector. In this paper, four representative reconstruction algorithms, including point-by-point back projection (BP), filtered back projection (FBP), simultaneous algebraic reconstruction technique (SART) and maximum likelihood expectation maximization (MLEM), were investigated. A dataset of 25 projection views of a 3D spherical object located at the center of the C-arm imaging space was simulated from 25 angular locations over a total view angle of 40 degrees. From the reconstructed images, a 3D mesh plot and a 2D line profile of normalized pixel intensities on the in-focus reconstruction plane crossing the center of the object were studied for each reconstruction algorithm. Results demonstrated the capability to generate 3D information from limited angle C-arm tomosynthesis. Since C-arm tomosynthesis is relatively compact, portable and avoids moving the patient, it has been investigated for clinical applications ranging from tumor surgery to interventional radiology, making its careful evaluation important.
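Of the four algorithms compared, SART admits a particularly compact statement; a minimal sketch with a dense system matrix A (updating over all rays at once for brevity, whereas classic SART updates view by view; details of the paper's implementation are not specified here):

```python
import numpy as np

def sart(A, y, n_iter=20, relax=0.5, eps=1e-12):
    """Minimal SART sketch. A: (n_rays x n_voxels) system matrix,
    y: projection data, relax: relaxation factor. Illustrative only."""
    x = np.zeros(A.shape[1])
    row_sums = np.asarray(A.sum(axis=1)).ravel() + eps   # per-ray weight sums
    col_sums = np.asarray(A.sum(axis=0)).ravel() + eps   # per-voxel weight sums
    for _ in range(n_iter):
        residual = (y - A @ x) / row_sums
        x += relax * (A.T @ residual) / col_sums
    return x
```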
NASA Astrophysics Data System (ADS)
Gennaretti, Fabio; Huard, David; Naulier, Maud; Savard, Martine; Bégin, Christian; Arseneault, Dominique; Guiot, Joel
2017-12-01
Northeastern North America has very few millennium-long, high-resolution climate proxy records. However, very recently, a new tree-ring dataset suitable for temperature reconstructions over the last millennium was developed in the northern Quebec taiga. This dataset is composed of one δ18O and six ring width chronologies. Until now, these chronologies have only been used in independent temperature reconstructions (from δ18O or ring width) showing some differences. Here, we added to the dataset a δ13C chronology and developed a significantly improved millennium-long multiproxy reconstruction (997-2006 CE) accounting for uncertainties with a Bayesian approach that evaluates the likelihood of each proxy model. We also undertook a methodological sensitivity analysis to assess the different responses of each proxy to abrupt forcings such as strong volcanic eruptions. Ring width showed a larger response to single eruptions and a larger cumulative impact of multiple eruptions during active volcanic periods, δ18O showed intermediate responses, and δ13C was mostly insensitive to volcanic eruptions. We conclude that all reconstructions based on a single proxy can be misleading because of the possible reduced or amplified responses to specific forcing agents.
Langdon, Jonathan H; Elegbe, Etana; McAleavey, Stephen A
2015-01-01
Single Tracking Location (STL) Shear wave Elasticity Imaging (SWEI) is a method for detecting elastic differences between tissues. It has the advantage of intrinsic speckle bias suppression compared to Multiple Tracking Location (MTL) variants of SWEI. However, the assumption of a linear model leads to an overestimation of the shear modulus in viscoelastic media. A new reconstruction technique denoted Single Tracking Location Viscosity Estimation (STL-VE) is introduced to correct for this overestimation. This technique utilizes the same raw data generated in STL-SWEI imaging. Here, the STL-VE technique is developed by way of a Maximum Likelihood Estimation (MLE) for general viscoelastic materials. The method is then implemented for the particular case of the Kelvin-Voigt Model. Using simulation data, the STL-VE technique is demonstrated and the performance of the estimator is characterized. Finally, the STL-VE method is used to estimate the viscoelastic parameters of ex-vivo bovine liver. We find good agreement between the STL-VE results and the simulation parameters as well as between the liver shear wave data and the modeled data fit. PMID:26168170
On Bayesian Testing of Additive Conjoint Measurement Axioms Using Synthetic Likelihood
ERIC Educational Resources Information Center
Karabatsos, George
2017-01-01
This article introduces a Bayesian method for testing the axioms of additive conjoint measurement. The method is based on an importance sampling algorithm that performs likelihood-free, approximate Bayesian inference using a synthetic likelihood to overcome the analytical intractability of this testing problem. This new method improves upon…
Shi, Cheng-Min; Yang, Ziheng
2018-01-01
Abstract The phylogenetic relationships among extant gibbon species remain unresolved despite numerous efforts using morphological, behavioral, and genetic data and the sequencing of whole genomes. A major challenge in reconstructing the gibbon phylogeny is the radiative speciation process, which resulted in extremely short internal branches in the species phylogeny and extensive incomplete lineage sorting, with extensive gene-tree heterogeneity across the genome. Here, we analyze two genomic-scale data sets, with ∼10,000 putative noncoding and exonic loci, respectively, to estimate the species tree for the major groups of gibbons. We used the Bayesian full-likelihood method bpp under the multispecies coalescent model, which naturally accommodates incomplete lineage sorting and uncertainties in the gene trees. For comparison, we included three heuristic coalescent-based methods (mp-est, SVDQuartets, and astral) as well as concatenation. From both data sets, we infer the phylogeny for the four extant gibbon genera to be (Hylobates, (Nomascus, (Hoolock, Symphalangus))). We used simulation guided by the real data to evaluate the accuracy of the methods used. Astral, while not as efficient as bpp, performed well in estimation of the species tree even in the presence of excessive incomplete lineage sorting. Concatenation, mp-est and SVDQuartets were unreliable when the species tree contains very short internal branches. A likelihood ratio test of gene flow suggests a small amount of migration from Hylobates moloch to H. pileatus, while cross-genera migration is absent or rare. Our results highlight the utility of coalescent-based methods in addressing challenging species tree problems characterized by short internal branches and rampant gene tree-species tree discordance. PMID:29087487
VoPham, Trang; Wilson, John P; Ruddell, Darren; Rashed, Tarek; Brooks, Maria M; Yuan, Jian-Min; Talbott, Evelyn O; Chang, Chung-Chou H; Weissfeld, Joel L
2015-08-01
Accurate pesticide exposure estimation is integral to epidemiologic studies elucidating the role of pesticides in human health. Humans can be exposed to pesticides via residential proximity to agricultural pesticide applications (drift). We present an improved geographic information system (GIS) and remote sensing method, the Landsat method, to estimate agricultural pesticide exposure through matching pesticide applications to crops classified from temporally concurrent Landsat satellite remote sensing images in California. The image classification method utilizes Normalized Difference Vegetation Index (NDVI) values in a combined maximum likelihood classification and per-field (using segments) approach. Pesticide exposure is estimated according to pesticide-treated crop fields intersecting 500 m buffers around geocoded locations (e.g., residences) in a GIS. Study results demonstrate that the Landsat method can improve GIS-based pesticide exposure estimation by matching more pesticide applications to crops (especially temporary crops) classified using temporally concurrent Landsat images compared to the standard method that relies on infrequently updated land use survey (LUS) crop data. The Landsat method can be used in epidemiologic studies to reconstruct past individual-level exposure to specific pesticides according to where individuals are located.
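The NDVI values driving the classification are computed per band pair and then aggregated per field segment for the per-field step; a minimal sketch with illustrative inputs:

```python
import numpy as np

def ndvi(nir, red, eps=1e-12):
    """NDVI from near-infrared and red reflectance bands; per-date NDVI
    feeds the combined maximum likelihood / per-field crop classification."""
    return (nir - red) / (nir + red + eps)

def per_field_mean(ndvi_img, segment_ids):
    """Average NDVI within each field segment (the per-field step);
    segment_ids is an integer label image (illustrative)."""
    return {i: ndvi_img[segment_ids == i].mean()
            for i in np.unique(segment_ids)}
```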
Jeon, Jihyoun; Hsu, Li; Gorfine, Malka
2012-07-01
Frailty models are useful for measuring unobserved heterogeneity in risk of failures across clusters, providing cluster-specific risk prediction. In a frailty model, the latent frailties shared by members within a cluster are assumed to act multiplicatively on the hazard function. In order to obtain parameter and frailty variate estimates, we consider the hierarchical likelihood (H-likelihood) approach (Ha, Lee and Song, 2001. Hierarchical-likelihood approach for frailty models. Biometrika 88, 233-243) in which the latent frailties are treated as "parameters" and estimated jointly with other parameters of interest. We find that the H-likelihood estimators perform well when the censoring rate is low, however, they are substantially biased when the censoring rate is moderate to high. In this paper, we propose a simple and easy-to-implement bias correction method for the H-likelihood estimators under a shared frailty model. We also extend the method to a multivariate frailty model, which incorporates complex dependence structure within clusters. We conduct an extensive simulation study and show that the proposed approach performs very well for censoring rates as high as 80%. We also illustrate the method with a breast cancer data set. Since the H-likelihood is the same as the penalized likelihood function, the proposed bias correction method is also applicable to the penalized likelihood estimators.
Huo, Jinhai; Smith, Benjamin D; Giordano, Sharon H; Reece, Gregory P; Tina Shih, Ya-Chen
2016-12-01
The objectives of this study were to compare, by patient obesity status, the contemporary utilization patterns of different reconstruction surgery types, understand postoperative complication profiles in the community setting, and analyze the financial impact on health care payers and patients. Using data from the MarketScan Health Risk Assessment Database and Commercial Claims and Encounters Database, we identified breast cancer patients who received breast reconstruction surgery following mastectomy between 2009 and 2012. The Cochran-Armitage test was used to evaluate the utilization pattern of breast reconstruction surgery. Multivariable logistic regressions were used to estimate the association between obesity status and infectious, wound, and perfusion complications within one year of surgery. A generalized linear model was used to compare total, complication-related, and out-of-pocket costs. The rate of TE/implant-based reconstruction increased significantly for non-obese patients but not for obese patients during the years analyzed, whereas autologous reconstruction decreased for both patient groups. Obesity was associated with higher odds of infectious, wound, and perfusion complications after TE/implant-based reconstruction, and higher odds of perfusion complications after autologous reconstruction. The adjusted total healthcare costs and out-of-pocket costs were similar for obese and non-obese patients for either type of breast reconstruction surgery. A greater likelihood of one-year complications arose from TE/implant-based vs autologous reconstruction surgery in obese patients. Given that out-of-pocket costs were independent of the type of reconstruction, greater emphasis should be placed on conveying the surgery-related complications to obese patients to aid in patient-based decision making with their plastic surgeons and oncologists. Copyright © 2016 Elsevier Ltd. All rights reserved.
Measurement of charm meson production in Au+Au collisions at √s_NN = 200 GeV
NASA Astrophysics Data System (ADS)
Quintero, Amilkar
The study and characterization of nuclear matter under extreme conditions of temperature and pressure, and a full understanding of deconfined partonic matter, the Quark Gluon Plasma (QGP), are major goals of modern high-energy nuclear physics. Heavy quarks (charm and bottom) are formed mainly in the early stages of the collision. Open heavy flavor measurements, e.g. D0, D+/-, DS, are excellent tools to probe and study the hot and dense medium formed in heavy ion collisions. Details of their interaction with the surrounding medium can be studied through energy loss and elliptic flow measurements, thus providing valuable information about the nature of the medium and its degree of thermalization. Initial indirect reconstruction studies of heavy quark particles, using the electrons from heavy flavor decays, showed a large magnitude of energy loss that was inconsistent with the model predictions and assumptions of the time. Precise measurements of fully reconstructed heavy mesons would provide a better understanding of the energy loss mechanisms and the properties of the formed medium. In relativistic heavy ion collisions, the relatively low abundance of heavy quarks and their short lifetimes make them difficult to distinguish from the event vertex and the combinatorial background; hence the need for a high-precision vertex detector to reconstruct their decay particles. In 2014 a new micro vertex detector was installed in the STAR experiment at Brookhaven National Lab. The Heavy Flavor Tracker (HFT) was designed to perform direct topological reconstruction of the weak decays of heavy flavor particles. The HFT improves STAR track pointing resolution from a few millimeters to ~30 microns for 1 GeV/c pions, allowing direct reconstruction of short lifetime particles. Although the results of the open charm meson reconstruction using the HFT improved dramatically, there is still considerable room for optimization, especially for reconstructed particles with low transverse momentum (< 1 GeV/c). The standard reconstruction algorithm in the STAR experiment is based on helix swimming of the reconstructed tracks. This method consists of finding the distance of closest approach between the two helices and defining the midpoint as the decay particle's vertex position. In this work we use an algorithm based on the Kalman filter to perform full vertex reconstruction. Although the Kalman filter is the most common fitting and filtering method used in tracking, it is not commonly used for particle reconstruction. By using the Kalman filter, the full error matrix for each track is taken into account in the calculations, giving a more complete approach to vertex reconstruction of the charm mesons and providing error estimates on all reconstructed quantities. Also, in traditional analyses, rectangular cuts are applied to the reconstructed parameters of the candidate particle decay in order to improve the signal-to-background ratio and obtain the cleanest signal possible. In this analysis we use multivariate techniques (i.e. machine learning) to maximize the efficiency of the acquired signal. Machine learning techniques are widely used in many data analysis problems, including high-energy physics experiments. Several optimization methods were tested, including likelihood-based selection and neural networks; binary decision trees (BDT) gave the best performance for the reconstruction of D0 mesons.
We have applied these analysis techniques to our Run-14 data sample (~1.2 billion Au+Au events at 200 GeV) and present results for the D0 meson pT spectra and nuclear modification factor (RAA) for different event centralities. We discuss the results and compare them with current theoretical models.
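As an illustration of the multivariate selection step, the sketch below trains a gradient-boosted decision tree (a scikit-learn stand-in for the BDT selection used in the thesis) on toy D0-candidate features; the feature names and distributions are hypothetical, not the actual training variables:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 20000
# Toy stand-ins for topological variables: decay length, DCA between
# daughter tracks, and pointing angle (units and shapes illustrative).
signal = rng.normal([0.05, 0.003, 0.02], [0.02, 0.002, 0.01], size=(n, 3))
background = rng.exponential([0.01, 0.010, 0.10], size=(n, 3))
X = np.vstack([signal, background])
y = np.hstack([np.ones(n), np.zeros(n)])

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
bdt = GradientBoostingClassifier(n_estimators=200, max_depth=3)
bdt.fit(X_tr, y_tr)
print("test accuracy:", round(bdt.score(X_te, y_te), 3))
```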
Enabling Security, Stability, Transition, and Reconstruction Operations through Knowledge Management
2009-03-18
strategy. Overall, the cultural barriers to knowledge sharing center on knowledge creation and capture. The primary barrier to knowledge sharing is lack ... Lacking a shared identity decreases the likelihood of knowledge sharing, which is essential to effective collaboration.84 Related to collaboration...to adapt, develop, and change based on experience-derived knowledge.90 A second cultural barrier to knowledge acquisition is the lack of receptiveness
NASA Astrophysics Data System (ADS)
Verdon-Kidd, Danielle C.; Hancock, Gregory R.; Lowry, John B.
2017-11-01
The Monsoonal North West (MNW) region of Australia faces a number of challenges adapting to anthropogenic climate change. These have the potential to impact on a range of industries, including agricultural, pastoral, mining and tourism. However future changes to rainfall regimes remain uncertain due to the inability of Global Climate Models to adequately capture the tropical weather/climate processes that are known to be important for this region. Compounding this is the brevity of the instrumental rainfall record for the MNW, which is unlikely to represent the full range of climatic variability. One avenue for addressing this issue (the focus of this paper) is to identify sources of paleoclimate information that can be used to reconstruct a plausible pre-instrumental rainfall history for the MNW. Adopting this approach we find that, even in the absence of local sources of paleoclimate data at a suitable temporal resolution, remote paleoclimate records can resolve 25% of the annual variability observed in the instrumental rainfall record. Importantly, the 507-year rainfall reconstruction developed using the remote proxies displays longer and more intense wet and dry periods than observed during the most recent 100 years. For example, the maximum number of consecutive years of below (above) average rainfall is 90% (40%) higher in the rainfall reconstruction than during the instrumental period. Further, implications for flood and drought risk are studied via a simple GR1A rainfall runoff model, which again highlights the likelihood of extremes greater than that observed in the limited instrumental record, consistent with previous paleoclimate studies elsewhere in Australia. Importantly, this research can assist in informing climate related risks to infrastructure, agriculture and mining, and the method can readily be applied to other regions in the MNW and beyond.
Snaebjörnsson, Thorkell; Hamrin Senorski, Eric; Ayeni, Olufemi R; Alentorn-Geli, Eduard; Krupic, Ferid; Norberg, Fredrik; Karlsson, Jón; Samuelsson, Kristian
2017-07-01
Anterior cruciate ligament (ACL) reconstruction (ACLR) using a hamstring tendon (HT) autograft is an effective and widespread method. Recent studies have identified a relationship between the graft diameter and revision ACLR. To evaluate the influence of the graft diameter on revision ACLR and patient-reported outcomes in patients undergoing primary ACLR using HT autografts. Cohort study; Level of evidence, 2. A prospective cohort study was conducted using the Swedish National Knee Ligament Register (SNKLR) involving all patients undergoing primary ACLR using HT autografts. Patients with graft failure who needed revision surgery (cases) were compared with patients not undergoing revision surgery (controls). The control group was matched for sex, age, and graft fixation method in a 3:1 ratio. Conditional logistic regression was performed to produce odds ratios and 95% CIs. Univariate linear regression analyses were performed for patient-related outcomes. The Knee injury and Osteoarthritis Outcome Score (KOOS) and EuroQol 5 dimensions questionnaire (EQ-5D) values were obtained. A total of 2240 patients were included: 560 cases and 1680 controls. No significant differences between the cases and controls were found for sex (52.9% male), mean age (21.7 years), and femoral and tibial fixation. The mean graft diameter for the cases was 8.0 ± 0.74 mm and for the controls was 8.1 ± 0.76 mm. In the present cohort, the odds ratio for revision surgery for every 0.5-mm increase in the HT autograft diameter between 7.0 and 10.0 mm was 0.86 (95% CI, 0.75-0.99; P = .03). Univariate linear regression analysis found no significant regression coefficient for the change in KOOS or EQ-5D values. In a large cohort of patients after primary ACLR with HT autografts, an increase in the graft diameter between 7.0 and 10.0 mm resulted in a 0.86 times lower likelihood of revision surgery with every 0.5-mm increase. This study provides further evidence of the importance of the HT autograft size in intraoperative decision making.
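Because the reported odds ratio is multiplicative per 0.5-mm step, it compounds across the full graft-diameter range; a quick back-of-envelope check (assuming the per-step ratio applies uniformly from 7.0 to 10.0 mm):

```python
# Odds ratio of 0.86 per 0.5-mm increase, compounded over the six
# 0.5-mm steps spanning 7.0-10.0 mm (assumes a uniform per-step effect).
or_per_step = 0.86
steps = int((10.0 - 7.0) / 0.5)
print(round(or_per_step ** steps, 2))   # ~0.4: far lower odds at 10 mm than at 7 mm
```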
Correction of patient motion in cone-beam CT using 3D-2D registration
NASA Astrophysics Data System (ADS)
Ouadah, S.; Jacobson, M.; Stayman, J. W.; Ehtiati, T.; Weiss, C.; Siewerdsen, J. H.
2017-12-01
Cone-beam CT (CBCT) is increasingly common in guidance of interventional procedures, but can be subject to artifacts arising from patient motion during fairly long (~5-60 s) scan times. We present a fiducial-free method to mitigate motion artifacts using 3D-2D image registration that simultaneously corrects residual errors in the intrinsic and extrinsic parameters of geometric calibration. The 3D-2D registration process registers each projection to a prior 3D image by maximizing gradient orientation using the covariance matrix adaptation-evolution strategy optimizer. The resulting rigid transforms are applied to the system projection matrices, and a 3D image is reconstructed via model-based iterative reconstruction. Phantom experiments were conducted using a Zeego robotic C-arm to image a head phantom undergoing 5-15 cm translations and 5-15° rotations. To further test the algorithm, clinical images were acquired with a CBCT head scanner in which long scan times were susceptible to significant patient motion. CBCT images were reconstructed using a penalized likelihood objective function. For phantom studies the structural similarity (SSIM) between motion-free and motion-corrected images was >0.995, with significant improvement (p < 0.001) compared to the SSIM values of uncorrected images. Additionally, motion-corrected images exhibited a point-spread function with full-width at half maximum comparable to that of the motion-free reference image. Qualitative comparison of the motion-corrupted and motion-corrected clinical images demonstrated a significant improvement in image quality after motion correction. This indicates that the 3D-2D registration method could provide a useful approach to motion artifact correction under assumptions of local rigidity, as in the head, pelvis, and extremities. The method is highly parallelizable, and the automatic correction of residual geometric calibration errors provides added benefit that could be valuable in routine use.
Sensitivity recovery for the AX-PET prototype using inter-crystal scattering events
NASA Astrophysics Data System (ADS)
Gillam, John E.; Solevi, Paola; Oliver, Josep F.; Casella, Chiara; Heller, Matthieu; Joram, Christian; Rafecas, Magdalena
2014-08-01
The development of novel detection devices and systems such as the AX-positron emission tomography (PET) demonstrator often introduces or increases the measurement of atypical coincidence events such as inter-crystal scattering (ICS). In more standard systems, ICS events often go undetected and the small measured fraction may be ignored. As the measured quantity of such events in the data increases, so too does the importance of considering them during image reconstruction. Generally, treatment of ICS events will attempt to determine which of the possible candidate lines of response (LoRs) correctly determines the annihilation photon trajectory. However, methods of assessment often have low success rates or are computationally demanding. In this investigation alternative approaches are considered. Experimental data were taken using the AX-PET prototype and a NEMA phantom. Three methods of ICS treatment were assessed, each of which considered all possible candidate LoRs during image reconstruction. Maximum likelihood expectation maximization was used in conjunction with both standard (line-like) and novel (V-like in this investigation) detection responses modeled within the system matrix. The investigation assumed that no information other than interaction locations was available to distinguish between candidates, yet the methods assessed all provided means by which such information could be included. In all cases it was shown that the signal-to-noise ratio is increased using ICS events. However, only one method, which used full modeling of the ICS response in the system matrix—the V-like model—provided enhancement in all figures of merit assessed in this investigation. Finally, the optimal method of ICS incorporation was demonstrated using data from two small animals measured using the AX-PET demonstrator.
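The MLEM update referred to above is the standard emission-tomography iteration; in a toy NumPy sketch like the following, accommodating ICS candidates amounts to adding the extra nonzero entries of a "V-like" response to the rows of the system matrix, after which the same update applies unchanged (the system matrix and counts below are simulated, not AX-PET data):

```python
import numpy as np

def mlem(A, y, n_iter=50):
    """Maximum likelihood expectation maximization for Poisson emission data:
    x <- x * A^T(y / Ax) / A^T 1."""
    x = np.ones(A.shape[1])
    sens = A.sum(axis=0)                # sensitivity image, A^T 1
    for _ in range(n_iter):
        proj = A @ x
        proj[proj == 0] = 1e-12         # guard against division by zero
        x *= (A.T @ (y / proj)) / sens
    return x

rng = np.random.default_rng(1)
A = rng.uniform(size=(40, 10))          # toy system matrix (LoRs x voxels)
x_true = rng.uniform(1, 5, size=10)
y = rng.poisson(A @ x_true).astype(float)
print(np.round(mlem(A, y), 2))
```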
Polarimetric image reconstruction algorithms
NASA Astrophysics Data System (ADS)
Valenzuela, John R.
In the field of imaging polarimetry Stokes parameters are sought and must be inferred from noisy and blurred intensity measurements. Using a penalized-likelihood estimation framework we investigate reconstruction quality when estimating intensity images and then transforming to Stokes parameters (traditional estimator), and when estimating Stokes parameters directly (Stokes estimator). We define our cost function for reconstruction by a weighted least squares data fit term and a regularization penalty. It is shown that under quadratic regularization, the traditional and Stokes estimators can be made equal by appropriate choice of regularization parameters. It is empirically shown that, when using edge preserving regularization, estimating the Stokes parameters directly leads to lower RMS error in reconstruction. Also, the addition of a cross channel regularization term further lowers the RMS error for both methods especially in the case of low SNR. The technique of phase diversity has been used in traditional incoherent imaging systems to jointly estimate an object and optical system aberrations. We extend the technique of phase diversity to polarimetric imaging systems. Specifically, we describe penalized-likelihood methods for jointly estimating Stokes images and optical system aberrations from measurements that contain phase diversity. Jointly estimating Stokes images and optical system aberrations involves a large parameter space. A closed-form expression for the estimate of the Stokes images in terms of the aberration parameters is derived and used in a formulation that reduces the dimensionality of the search space to the number of aberration parameters only. We compare the performance of the joint estimator under both quadratic and edge-preserving regularization. The joint estimator with edge-preserving regularization yields higher fidelity polarization estimates than with quadratic regularization. Under quadratic regularization, using the reduced-parameter search strategy, accurate aberration estimates can be obtained without recourse to regularization "tuning". Phase-diverse wavefront sensing is emerging as a viable candidate wavefront sensor for adaptive-optics systems. In a quadratically penalized weighted least squares estimation framework a closed form expression for the object being imaged in terms of the aberrations in the system is available. This expression offers a dramatic reduction of the dimensionality of the estimation problem and thus is of great interest for practical applications. We have derived an expression for an approximate joint covariance matrix for object and aberrations in the phase diversity context. Our expression for the approximate joint covariance is compared with the "known-object" Cramer-Rao lower bound that is typically used for system parameter optimization. Estimates of the optimal amount of defocus in a phase-diverse wavefront sensor derived from the joint-covariance matrix, the known-object Cramer-Rao bound, and Monte Carlo simulations are compared for an extended scene and a point object. It is found that our variance approximation, that incorporates the uncertainty of the object, leads to an improvement in predicting the optimal amount of defocus to use in a phase-diverse wavefront sensor.
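The quadratically regularized weighted least squares estimator underlying the first comparison has a closed form; a small sketch with a hypothetical four-angle analyzer model for a three-component Stokes vector:

```python
import numpy as np

def penalized_wls(A, W, y, R, beta):
    """Minimize (y - As)^T W (y - As) + beta s^T R s; the closed-form
    solution is s = (A^T W A + beta R)^{-1} A^T W y."""
    return np.linalg.solve(A.T @ W @ A + beta * R, A.T @ W @ y)

# Ideal analyzer at angle t: intensity I = 0.5*(s0 + s1*cos2t + s2*sin2t).
t = np.deg2rad([0.0, 45.0, 90.0, 135.0])
A = 0.5 * np.column_stack([np.ones_like(t), np.cos(2 * t), np.sin(2 * t)])
s_true = np.array([2.0, 0.5, -0.3])
y = A @ s_true + 0.01 * np.random.default_rng(2).normal(size=4)
print(np.round(penalized_wls(A, np.eye(4), y, np.eye(3), beta=1e-3), 3))
```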
Limited angle tomographic breast imaging: A comparison of parallel beam and pinhole collimation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wessell, D.E.; Kadrmas, D.J.; Frey, E.C.
1996-12-31
Results from clinical trials have suggested no improvement in lesion detection with parallel hole SPECT scintimammography (SM) with Tc-99m over parallel hole planar SM. In this initial investigation, we have elucidated some of the unique requirements of SPECT SM. With these requirements in mind, we have begun to develop practical data acquisition and reconstruction strategies that can reduce image artifacts and improve image quality. In this paper we investigate limited angle orbits for both parallel hole and pinhole SPECT SM. Singular Value Decomposition (SVD) is used to analyze the artifacts associated with the limited angle orbits. Maximum likelihood expectation maximization (MLEM) reconstructions are then used to examine the effects of attenuation compensation on the quality of the reconstructed image. All simulations are performed using the 3D-MCAT breast phantom. The results of these simulation studies demonstrate that limited angle SPECT SM is feasible, that attenuation correction is needed for accurate reconstructions, and that pinhole SPECT SM may have an advantage over parallel hole SPECT SM in terms of improved image quality and reduced image artifacts.
NASA Astrophysics Data System (ADS)
Horesh, Lior
Electrical Impedance Tomography (EIT) is a recently developed imaging technique. Small insensible currents are injected into the body using electrodes. Measured voltages are used for reconstruction of images of the internal dielectric properties of the body. This imaging technique is portable, safe, rapid, inexpensive and has the potential to provide a new method for imaging in remote or acute situations, where other large scanners, such as MRI, are either impractical or unavailable. It has been in use in clinical research for about two decades but has not yet been adopted into routine clinical practice. One potentially powerful clinical application lies in its use for imaging acute stroke, where it could be used to distinguish haemorrhage from infarction. Hitherto, image reconstruction has mainly been for the more tractable case of changes in impedance over time. For acute stroke, it is best operated in multiple frequency mode, where data are collected at multiple frequencies and images can be recovered with higher fidelity. While the idea appears promising, there are several important issues which affect the likelihood of its success in producing clinically reliable images. These include limitations in the accuracy of finite element modelling and image reconstruction, and in the accuracy of recorded voltage data due to noise and confounding factors. The purpose of this work was to address these issues so that, ultimately, a clinical study of EIT in acute stroke would have a much greater chance of success. In order to assess the feasibility of this application, a comprehensive literature review of the dielectric properties of human head tissues in normal and pathological states was conducted in this thesis. Novel generic tools were developed in order to enable modelling and non-linear image reconstruction of large-scale problems, such as those arising from the head EIT problem.
On the quirks of maximum parsimony and likelihood on phylogenetic networks.
Bryant, Christopher; Fischer, Mareike; Linz, Simone; Semple, Charles
2017-03-21
Maximum parsimony is one of the most frequently-discussed tree reconstruction methods in phylogenetic estimation. However, in recent years it has become more and more apparent that phylogenetic trees are often not sufficient to describe evolution accurately. For instance, processes like hybridization or lateral gene transfer that are commonplace in many groups of organisms and result in mosaic patterns of relationships cannot be represented by a single phylogenetic tree. This is why phylogenetic networks, which can display such events, are becoming of more and more interest in phylogenetic research. It is therefore necessary to extend concepts like maximum parsimony from phylogenetic trees to networks. Several suggestions for possible extensions can be found in recent literature, for instance the softwired and the hardwired parsimony concepts. In this paper, we analyze the so-called big parsimony problem under these two concepts, i.e. we investigate maximum parsimonious networks and analyze their properties. In particular, we show that finding a softwired maximum parsimony network is possible in polynomial time. We also show that the set of maximum parsimony networks for the hardwired definition always contains at least one phylogenetic tree. Lastly, we investigate some parallels of parsimony to different likelihood concepts on phylogenetic networks. Copyright © 2017 Elsevier Ltd. All rights reserved.
Dimension-independent likelihood-informed MCMC
Cui, Tiangang; Law, Kody J. H.; Marzouk, Youssef M.
2015-10-08
Many Bayesian inference problems require exploring the posterior distribution of high-dimensional parameters that represent the discretization of an underlying function. Our work introduces a family of Markov chain Monte Carlo (MCMC) samplers that can adapt to the particular structure of a posterior distribution over functions. There are two distinct lines of research that intersect in the methods we develop here. First, we introduce a general class of operator-weighted proposal distributions that are well defined on function space, such that the performance of the resulting MCMC samplers is independent of the discretization of the function. Second, by exploiting local Hessian information and any associated low-dimensional structure in the change from prior to posterior distributions, we develop an inhomogeneous discretization scheme for the Langevin stochastic differential equation that yields operator-weighted proposals adapted to the non-Gaussian structure of the posterior. The resulting dimension-independent and likelihood-informed (DILI) MCMC samplers may be useful for a large class of high-dimensional problems where the target probability measure has a density with respect to a Gaussian reference measure. Finally, we use two nonlinear inverse problems in order to demonstrate the efficiency of these DILI samplers: an elliptic PDE coefficient inverse problem and path reconstruction in a conditioned diffusion.
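For readers new to function-space MCMC, the simplest dimension-independent sampler in this family is the preconditioned Crank-Nicolson (pCN) proposal, which DILI generalizes with operator weighting and likelihood-informed subspaces; a minimal pCN sketch under a Gaussian prior N(0, C):

```python
import numpy as np

def pcn(log_likelihood, C_sqrt, n_dim, n_steps=5000, beta=0.2, seed=0):
    """Preconditioned Crank-Nicolson MCMC. The proposal
    x' = sqrt(1 - beta^2) x + beta xi, xi ~ N(0, C), is prior-reversible,
    so only the likelihood enters the accept/reject step and the
    acceptance rate does not collapse as n_dim grows."""
    rng = np.random.default_rng(seed)
    x = C_sqrt @ rng.normal(size=n_dim)
    ll = log_likelihood(x)
    out = np.empty((n_steps, n_dim))
    for i in range(n_steps):
        x_prop = np.sqrt(1 - beta**2) * x + beta * (C_sqrt @ rng.normal(size=n_dim))
        ll_prop = log_likelihood(x_prop)
        if np.log(rng.uniform()) < ll_prop - ll:
            x, ll = x_prop, ll_prop
        out[i] = x
    return out

# Toy Gaussian likelihood centered at 1 in every coordinate, d = 100.
chain = pcn(lambda u: -0.5 * np.sum((u - 1.0) ** 2), np.eye(100), 100)
print(round(chain[1000:].mean(), 2))   # posterior mean ~0.5 per coordinate
```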
ERIC Educational Resources Information Center
Mahmud, Jumailiyah; Sutikno, Muzayanah; Naga, Dali S.
2016-01-01
The aim of this study is to determine the variance difference between maximum likelihood and expected a posteriori (EAP) estimation methods viewed from the number of test items of an aptitude test. The variance represents the accuracy achieved by both the maximum likelihood and Bayes estimation methods. The test consists of three subtests, each with 40 multiple-choice…
New applications of maximum likelihood and Bayesian statistics in macromolecular crystallography.
McCoy, Airlie J
2002-10-01
Maximum likelihood methods are well known to macromolecular crystallographers as the methods of choice for isomorphous phasing and structure refinement. Recently, the use of maximum likelihood and Bayesian statistics has extended to the areas of molecular replacement and density modification, placing these methods on a stronger statistical foundation and making them more accurate and effective.
Li, Si; Zhang, Jiahan; Krol, Andrzej; Schmidtlein, C. Ross; Vogelsang, Levon; Shen, Lixin; Lipson, Edward; Feiglin, David; Xu, Yuesheng
2015-01-01
Purpose: The authors have recently developed a preconditioned alternating projection algorithm (PAPA) with total variation (TV) regularizer for solving the penalized-likelihood optimization model for single-photon emission computed tomography (SPECT) reconstruction. This algorithm belongs to a novel class of fixed-point proximity methods. The goal of this work is to investigate how PAPA performs while dealing with realistic noisy SPECT data, to compare its performance with more conventional methods, and to address issues with TV artifacts by proposing a novel form of the algorithm invoking high-order TV regularization, denoted as HOTV-PAPA, which has been explored and studied extensively in the present work. Methods: Using Monte Carlo methods, the authors simulate noisy SPECT data from two water cylinders; one contains lumpy “warm” background and “hot” lesions of various sizes with Gaussian activity distribution, and the other is a reference cylinder without hot lesions. The authors study the performance of HOTV-PAPA and compare it with PAPA using first-order TV regularization (TV-PAPA), the Panin–Zeng–Gullberg one-step-late method with TV regularization (TV-OSL), and an expectation–maximization algorithm with Gaussian postfilter (GPF-EM). The authors select penalty-weights (hyperparameters) by qualitatively balancing the trade-off between resolution and image noise separately for TV-PAPA and TV-OSL. However, the authors arrived at the same penalty-weight value for both of them. The authors set the first penalty-weight in HOTV-PAPA equal to the optimal penalty-weight found for TV-PAPA. The second penalty-weight needed for HOTV-PAPA is tuned by balancing resolution and the severity of staircase artifacts. The authors adjust the Gaussian postfilter to approximately match the local point spread function of GPF-EM and HOTV-PAPA. The authors examine hot lesion detectability, study local spatial resolution, analyze background noise properties, estimate mean square errors (MSEs), and report the convergence speed and computation time. Results: HOTV-PAPA yields the best signal-to-noise ratio, followed by TV-PAPA and TV-OSL/GPF-EM. The local spatial resolution of HOTV-PAPA is somewhat worse than that of TV-PAPA and TV-OSL. Images reconstructed using HOTV-PAPA have the lowest local noise power spectrum (LNPS) amplitudes, followed by TV-PAPA, TV-OSL, and GPF-EM. The LNPS peak of GPF-EM is shifted toward higher spatial frequencies than those for the three other methods. The PAPA-type methods exhibit much lower ensemble noise, ensemble voxel variance, and image roughness. HOTV-PAPA performs best in these categories. Whereas images reconstructed using both TV-PAPA and TV-OSL are degraded by severe staircase artifacts, HOTV-PAPA substantially reduces such artifacts. It also converges faster than the other three methods and exhibits the lowest overall reconstruction error level, as measured by MSE. Conclusions: For high-noise simulated SPECT data, HOTV-PAPA outperforms TV-PAPA, GPF-EM, and TV-OSL in terms of hot lesion detectability, noise suppression, MSE, and computational efficiency. Unlike TV-PAPA and TV-OSL, HOTV-PAPA does not create sizable staircase artifacts. Moreover, HOTV-PAPA effectively suppresses noise, with only limited loss of local spatial resolution. Of the four methods, HOTV-PAPA shows the best lesion detectability, thanks to its superior noise suppression. HOTV-PAPA shows promise for clinically useful reconstructions of low-dose SPECT data. PMID:26233214
Shi, Chaoyang; Kojima, Masahiro; Tercero, Carlos; Najdovski, Zoran; Ikeda, Seiichi; Fukuda, Toshio; Arai, Fumihito; Negoro, Makoto
2014-12-01
There are several complications associated with Stent-assisted Coil Embolization (SACE) in cerebral aneurysm treatments, due to damaging operations by surgeons and undesirable mechanical properties of stents. Therefore, it is necessary to develop an in vitro simulator that provides both training and research for evaluating the mechanical properties of stents. A new in vitro simulator for three-dimensional digital subtraction angiography was constructed, followed by aneurysm models fabricated with new materials. Next, this platform was used to provide training and to conduct photoelastic stress analysis to evaluate the SACE technique. The average interaction stress increasingly varied for the two different stents. Improvements for the Maximum-Likelihood Expectation-Maximization method were developed to reconstruct cross-sections with both thickness and stress information. The technique presented can improve a surgeon's skills and quantify the performance of stents to improve mechanical design and classification. This method can contribute to three-dimensional stress and volume variation evaluation and assess a surgeon's skills. Copyright © 2013 John Wiley & Sons, Ltd.
Identification of complex stiffness tensor from waveform reconstruction
NASA Astrophysics Data System (ADS)
Leymarie, N.; Aristégui, C.; Audoin, B.; Baste, S.
2002-03-01
An inverse method is proposed in order to determine the viscoelastic properties of composite-material plates from the plane-wave transmitted acoustic field. Analytical formulations of both the plate transmission coefficient and its first and second derivatives are established, and included in a two-step inversion scheme. Two objective functions to be minimized are then designed by considering the well-known maximum-likelihood principle and by using an analytic signal formulation. Through these innovative objective functions, the robustness of the inversion process against high levels of noise in waveforms is improved and the method can be applied to very thin specimens. The suitability of the inversion process for viscoelastic property identification is demonstrated using simulated data for composite materials with different anisotropy and damping degrees. A study of the effect of the rheologic model choice on the elastic property identification emphasizes the relevance of using a phenomenological description that accounts for viscosity. Experimental characterizations then demonstrate the good reliability of the proposed approach. Difficulties arise experimentally for particular anisotropic media.
Back to Normal! Gaussianizing posterior distributions for cosmological probes
NASA Astrophysics Data System (ADS)
Schuhmann, Robert L.; Joachimi, Benjamin; Peiris, Hiranya V.
2014-05-01
We present a method to map multivariate non-Gaussian posterior probability densities into Gaussian ones via nonlinear Box-Cox transformations, and generalizations thereof. This is analogous to the search for normal parameters in the CMB, but can in principle be applied to any probability density that is continuous and unimodal. The search for the optimally Gaussianizing transformation amongst the Box-Cox family is performed via a maximum likelihood formalism. We can judge the quality of the found transformation a posteriori: qualitatively via statistical tests of Gaussianity, and more illustratively by how well it reproduces the credible regions. The method permits an analytical reconstruction of the posterior from a sample, e.g. a Markov chain, and simplifies the subsequent joint analysis with other experiments. Furthermore, it permits the characterization of a non-Gaussian posterior in a compact and efficient way. The expression for the non-Gaussian posterior can be employed to find analytic formulae for the Bayesian evidence, and consequently be used for model comparison.
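SciPy exposes exactly this maximum likelihood search over the one-parameter Box-Cox family, which makes a quick one-dimensional illustration easy (the lognormal draw below is a stand-in for a skewed posterior marginal):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
sample = rng.lognormal(mean=0.0, sigma=0.7, size=5000)  # skewed, positive

# stats.boxcox picks the transformation parameter lambda by maximizing the
# Box-Cox log-likelihood; for lognormal data it should come out near 0
# (i.e. essentially a log transform).
transformed, lam = stats.boxcox(sample)
print("ML lambda:", round(lam, 3))
print("skewness before/after:",
      round(float(stats.skew(sample)), 3), round(float(stats.skew(transformed)), 3))
```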
Estimating linear-nonlinear models using Rényi divergences
Kouh, Minjoon; Sharpee, Tatyana O.
2009-01-01
This paper compares a family of methods for characterizing neural feature selectivity using natural stimuli in the framework of the linear-nonlinear model. In this model, the spike probability depends in a nonlinear way on a small number of stimulus dimensions. The relevant stimulus dimensions can be found by optimizing a Rényi divergence that quantifies a change in the stimulus distribution associated with the arrival of single spikes. Generally, good reconstructions can be obtained based on optimization of Rényi divergence of any order, even in the limit of small numbers of spikes. However, the smallest error is obtained when the Rényi divergence of order 1 is optimized. This type of optimization is equivalent to information maximization, and is shown to saturate the Cramér-Rao bound describing the smallest error allowed for any unbiased method. We also discuss conditions under which information maximization provides a convenient way to perform maximum likelihood estimation of linear-nonlinear models from neural data. PMID:19568981
Lampe, David J; Witherspoon, David J; Soto-Adames, Felipe N; Robertson, Hugh M
2003-04-01
We report the isolation and sequencing of genomic copies of mariner transposons involved in recent horizontal transfers into the genomes of the European earwig, Forficula auricularia; the European honey bee, Apis mellifera; the Mediterranean fruit fly, Ceratitis capitata; and a blister beetle, Epicauta funebris, insects from four different orders. These elements are in the mellifera subfamily and are the second documented example of full-length mariner elements involved in this kind of phenomenon. We applied maximum likelihood methods to the coding sequences and determined that the copies in each genome were evolving neutrally, whereas reconstructed ancestral coding sequences appeared to be under selection, which strengthens our previous hypothesis that the primary selective constraint on mariner sequence evolution is the act of horizontal transfer between genomes.
Evolution of dinosaur epidermal structures.
Barrett, Paul M; Evans, David C; Campione, Nicolás E
2015-06-01
Spectacularly preserved non-avian dinosaurs with integumentary filaments/feathers have revolutionized dinosaur studies and fostered the suggestion that the dinosaur common ancestor possessed complex integumentary structures homologous to feathers. This hypothesis has major implications for interpreting dinosaur biology, but has not been tested rigorously. Using a comprehensive database of dinosaur skin traces, we apply maximum-likelihood methods to reconstruct the phylogenetic distribution of epidermal structures and interpret their evolutionary history. Most of these analyses find no compelling evidence for the appearance of protofeathers in the dinosaur common ancestor and scales are usually recovered as the plesiomorphic state, but results are sensitive to the outgroup condition in pterosaurs. Rare occurrences of ornithischian filamentous integument might represent independent acquisitions of novel epidermal structures that are not homologous with theropod feathers. © 2015 The Author(s) Published by the Royal Society. All rights reserved.
Measuring coherence of computer-assisted likelihood ratio methods.
Haraksim, Rudolf; Ramos, Daniel; Meuwly, Didier; Berger, Charles E H
2015-04-01
Measuring the performance of forensic evaluation methods that compute likelihood ratios (LRs) is relevant for both the development and the validation of such methods. A framework of performance characteristics categorized as primary and secondary is introduced in this study to help achieve such development and validation. Ground-truth labelled fingerprint data is used to assess the performance of an example likelihood ratio method in terms of those performance characteristics. Discrimination, calibration, and especially the coherence of this LR method are assessed as a function of the quantity and quality of the trace fingerprint specimen. Assessment of the coherence revealed a weakness of the comparison algorithm in the computer-assisted likelihood ratio method used. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
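As a toy version of such a method, score-based LR systems commonly fit densities to ground-truth same-source and different-source comparison scores and report their ratio at the observed score; a minimal Gaussian sketch (all parameters hypothetical):

```python
from scipy.stats import norm

def likelihood_ratio(score, same_mu, same_sd, diff_mu, diff_sd):
    """LR = p(score | same source) / p(score | different sources), with both
    score distributions modeled as Gaussians fitted to labelled data."""
    return norm.pdf(score, same_mu, same_sd) / norm.pdf(score, diff_mu, diff_sd)

# Hypothetical score model: same-source comparisons score higher on average.
print(likelihood_ratio(0.8, same_mu=0.7, same_sd=0.1, diff_mu=0.3, diff_sd=0.1))
```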
Paudel, M; MacKenzie, M; Fallone, B; Rathee, S
2012-06-01
To evaluate the performance of a model-based image reconstruction in reducing metal artifacts in MVCT systems, and to compare it with the filtered back projection (FBP) technique. The iterative maximum likelihood polychromatic algorithm for CT (IMPACT) is used with the pair/triplet production process and the energy-dependent response of the detectors. The beam spectra for the in-house bench-top and Tomotherapy™ MVCT systems are modelled for use in IMPACT. The energy-dependent gain of the detectors is calculated using a constrained optimization technique and the measured attenuation produced by 0-24 cm thick solid water slabs. A cylindrical (19 cm diameter) plexiglass phantom containing various central cylindrical inserts (relative electron density of 0.28-1.69) between two steel rods (2 cm diameter) is scanned in the bench-top [the bremsstrahlung radiation from a 6 MeV electron beam passed through 4 cm solid water on the Varian Clinac 2300C] and Tomotherapy™ MVCTs. The FBP reconstructs images from the raw signal normalised to an air scan and corrected for beam hardening using a uniform plexiglass cylinder (20 cm diameter). IMPACT starts with the FBP-reconstructed seed image and reconstructs the final image at 1.25 MeV in 150 iterations. FBP produces a visible dark shading in the image between the two steel rods that becomes darker with higher-density central inserts, causing a 5-8% underestimation of electron density compared to the case without the steel rods. In the IMPACT image the dark shading connecting the steel rods is nearly removed and the uniform background restored. The average attenuation coefficients of the inserts and the background are very close to the corresponding theoretical values at 1.25 MeV. The dark shading metal artifact due to beam hardening can be removed in MVCT using an iterative reconstruction algorithm such as IMPACT. However, accurate modelling of the detectors' energy-dependent response and the physical processes is crucial for successful implementation. Funding support for the research is obtained from "Vanier Canada Graduate Scholarship" and "Canadian Institute of Health Research". © 2012 American Association of Physicists in Medicine.
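The beam-hardening effect behind the dark shading is easy to reproduce numerically: for a polychromatic beam, the effective attenuation coefficient recovered from -log(transmission)/thickness falls as thickness grows, so FBP's monochromatic assumption breaks down. A two-energy toy model (spectral weights and attenuation values illustrative only):

```python
import numpy as np

S = np.array([0.6, 0.4])     # toy spectral weights (sum to 1)
mu = np.array([0.30, 0.15])  # attenuation (cm^-1) at the two energies
for L in (1.0, 5.0, 20.0):
    I = np.sum(S * np.exp(-mu * L))          # polychromatic transmission
    print(f"L = {L:5.1f} cm: effective mu = {-np.log(I) / L:.4f} cm^-1")
# Effective mu drops with thickness as the low-attenuation (harder) component
# increasingly dominates; FBP cannot model this, while IMPACT does.
```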
Zero-inflated Poisson model based likelihood ratio test for drug safety signal detection.
Huang, Lan; Zheng, Dan; Zalkikar, Jyoti; Tiwari, Ram
2017-02-01
In recent decades, numerous methods have been developed for data mining of large drug safety databases, such as the Food and Drug Administration's (FDA's) Adverse Event Reporting System, where data matrices are formed with drugs as columns and adverse events as rows. Often, a large number of cells in these data matrices have zero cell counts and some of them are "true zeros" indicating that the drug-adverse event pairs cannot occur, and these zero counts are distinguished from the other zero counts that are modeled zero counts and simply indicate that the drug-adverse event pairs have not occurred yet or have not been reported yet. In this paper, a zero-inflated Poisson model based likelihood ratio test method is proposed to identify drug-adverse event pairs that have disproportionately high reporting rates, which are also called signals. The maximum likelihood estimates of the model parameters of the zero-inflated Poisson model based likelihood ratio test are obtained using the expectation-maximization algorithm. The zero-inflated Poisson model based likelihood ratio test is also modified to handle stratified analyses for binary and categorical covariates (e.g. gender and age) in the data. The proposed zero-inflated Poisson model based likelihood ratio test method is shown to asymptotically control the type I error and false discovery rate, and its finite sample performance for signal detection is evaluated through a simulation study. The simulation results show that the zero-inflated Poisson model based likelihood ratio test method performs similarly to the Poisson model based likelihood ratio test method when the estimated percentage of true zeros in the database is small. Both the zero-inflated Poisson model based likelihood ratio test and likelihood ratio test methods are applied to six selected drugs, from the 2006 to 2011 Adverse Event Reporting System database, with varying percentages of observed zero-count cells.
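A minimal sketch of the ZIP likelihood and its maximization via numerical optimization (NumPy/SciPy; the mixing proportion and Poisson rate below are simulated, not values from the AERS analysis):

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import poisson

def zip_negloglik(params, x):
    """Negative log-likelihood of the zero-inflated Poisson:
    P(0) = pi + (1 - pi) e^{-lam}; P(k) = (1 - pi) Pois(k; lam) for k > 0."""
    pi, lam = params
    ll_zero = np.log(pi + (1 - pi) * np.exp(-lam))
    ll_pos = np.log(1 - pi) + poisson.logpmf(x[x > 0], lam)
    return -(np.sum(x == 0) * ll_zero + ll_pos.sum())

rng = np.random.default_rng(4)
# Simulated counts: 30% structural zeros mixed with Poisson(2.5) counts.
x = np.where(rng.uniform(size=2000) < 0.3, 0, rng.poisson(2.5, size=2000))

fit = minimize(zip_negloglik, x0=[0.1, 1.0], args=(x,),
               bounds=[(1e-6, 1 - 1e-6), (1e-6, None)])
print("pi_hat = %.3f, lambda_hat = %.3f" % tuple(fit.x))  # near 0.3 and 2.5
```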
Evaluating low pass filters on SPECT reconstructed cardiac orientation estimation
NASA Astrophysics Data System (ADS)
Dwivedi, Shekhar
2009-02-01
Low pass filters can affect the quality of clinical SPECT images by smoothing. Appropriate filter and parameter selection leads to optimum smoothing, which in turn leads to better quantification followed by correct diagnosis and accurate interpretation by the physician. This study aims at evaluating low pass filters on SPECT reconstruction algorithms. The criterion for evaluating the filters is the estimation of the SPECT-reconstructed cardiac azimuth and elevation angles. The low pass filters studied are Butterworth, Gaussian, Hamming, Hanning, and Parzen. Experiments are conducted using three reconstruction algorithms, FBP (filtered back projection), MLEM (maximum likelihood expectation maximization) and OSEM (ordered subsets expectation maximization), on four gated cardiac patient projections (two patients with stress and rest projections). Each filter is applied with varying cutoff and order for each reconstruction algorithm (only Butterworth is used for MLEM and OSEM). The azimuth and elevation angles are calculated from the reconstructed volume and the variation observed in the angles with varying filter parameters is reported. Our results demonstrate that the behavior of the Hamming, Hanning, and Parzen filters (used with FBP) with varying cutoff is similar for all the datasets. The Butterworth filter (cutoff > 0.4) behaves in a similar fashion for all the datasets using all the algorithms, whereas for a cutoff < 0.4 it fails to generate cardiac orientation with OSEM due to oversmoothing and gives an unstable response with FBP and MLEM. This study on evaluating the effect of low pass filter cutoff and order on cardiac orientation using three different reconstruction algorithms provides an interesting insight into the optimal selection of filter parameters.
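For reference, the Butterworth low-pass filter evaluated here is H(f) = 1 / (1 + (f/fc)^(2n)); a small sketch applying it to one noisy projection (cutoff expressed in cycles/pixel, though conventions vary across vendors):

```python
import numpy as np

def butterworth_lowpass(shape, cutoff, order):
    """2-D Butterworth low-pass H(f) = 1 / (1 + (f/fc)^(2n)), with spatial
    frequency in cycles/pixel (Nyquist = 0.5)."""
    fy = np.fft.fftfreq(shape[0])[:, None]
    fx = np.fft.fftfreq(shape[1])[None, :]
    f = np.hypot(fy, fx)
    return 1.0 / (1.0 + (f / cutoff) ** (2 * order))

def filter_projection(proj, cutoff=0.4, order=5):
    """Apply the filter to a projection in frequency space."""
    H = butterworth_lowpass(proj.shape, cutoff, order)
    return np.real(np.fft.ifft2(np.fft.fft2(proj) * H))

proj = np.random.default_rng(5).poisson(50, size=(64, 64)).astype(float)
print(filter_projection(proj).std() < proj.std())   # smoothing reduces noise
```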
NASA Astrophysics Data System (ADS)
Stützer, K.; Bert, C.; Enghardt, W.; Helmbrecht, S.; Parodi, K.; Priegnitz, M.; Saito, N.; Fiedler, F.
2013-08-01
In-beam positron emission tomography (PET) has been proven to be a reliable technique in ion beam radiotherapy for the in situ and non-invasive evaluation of the correct dose deposition in static tumour entities. In the presence of intra-fractional target motion an appropriate time-resolved (four-dimensional, 4D) reconstruction algorithm has to be used to avoid reconstructed activity distributions suffering from motion-related blurring artefacts and to allow for a dedicated dose monitoring. Four-dimensional reconstruction algorithms from diagnostic PET imaging that can properly handle the typically low counting statistics of in-beam PET data have been adapted and optimized for the characteristics of the double-head PET scanner BASTEI installed at GSI Helmholtzzentrum Darmstadt, Germany (GSI). Systematic investigations with moving radioactive sources demonstrate the more effective reduction of motion artefacts by applying a 4D maximum likelihood expectation maximization (MLEM) algorithm instead of the retrospective co-registration of phasewise reconstructed quasi-static activity distributions. Further 4D MLEM results are presented from in-beam PET measurements of irradiated moving phantoms which verify the accessibility of relevant parameters for the dose monitoring of intra-fractionally moving targets. From in-beam PET listmode data sets acquired together with a motion surrogate signal, valuable images can be generated by the 4D MLEM reconstruction for different motion patterns and motion-compensated beam delivery techniques.
Sparsity-constrained PET image reconstruction with learned dictionaries
NASA Astrophysics Data System (ADS)
Tang, Jing; Yang, Bao; Wang, Yanhua; Ying, Leslie
2016-09-01
PET imaging plays an important role in scientific and clinical measurement of biochemical and physiological processes. Model-based PET image reconstruction such as the iterative expectation maximization algorithm seeking the maximum likelihood solution leads to increased noise. The maximum a posteriori (MAP) estimate removes divergence at higher iterations. However, a conventional smoothing prior or a total-variation (TV) prior in a MAP reconstruction algorithm causes over-smoothing or blocky artifacts in the reconstructed images. We propose to use dictionary learning (DL) based sparse signal representation in the formation of the prior for MAP PET image reconstruction. The dictionary to sparsify the PET images in the reconstruction process is learned from various training images, including the corresponding MR structural image and a self-created hollow sphere. Using simulated and patient brain PET data with corresponding MR images, we study the performance of the DL-MAP algorithm and compare it quantitatively with a conventional MAP algorithm, a TV-MAP algorithm, and a patch-based algorithm. The DL-MAP algorithm achieves improved bias and contrast (or regional mean values) at noise levels comparable to those of the other MAP algorithms. The dictionary learned from the hollow sphere leads to results similar to those obtained with the dictionary learned from the corresponding MR image. Achieving robust performance in various noise-level simulation and patient studies, the DL-MAP algorithm with a general dictionary demonstrates its potential in quantitative PET imaging.
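A sketch of the dictionary-learning ingredient using scikit-learn; the random training image stands in for the MR image or hollow-sphere phantom, and the patch size and component count are illustrative:

```python
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning
from sklearn.feature_extraction.image import extract_patches_2d

rng = np.random.default_rng(6)
image = rng.normal(size=(64, 64))       # stand-in for an MR training image
patches = extract_patches_2d(image, (8, 8), max_patches=2000, random_state=0)
X = patches.reshape(len(patches), -1)
X -= X.mean(axis=1, keepdims=True)      # remove per-patch DC component

# Learn an overcomplete dictionary; the sparse codes of image patches under
# this dictionary are what a DL-based prior penalizes during reconstruction.
dico = MiniBatchDictionaryLearning(n_components=100, alpha=1.0, random_state=0)
codes = dico.fit(X).transform(X)
print("dictionary atoms:", dico.components_.shape,
      "mean nonzeros per patch:", (codes != 0).sum(axis=1).mean())
```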
Lambert, Amaury; Stadler, Tanja
2013-12-01
Forward-in-time models of diversification (i.e., speciation and extinction) produce phylogenetic trees that grow "vertically" as time goes by. Pruning the extinct lineages out of such trees leads to natural models for reconstructed trees (i.e., phylogenies of extant species). Alternatively, reconstructed trees can be modelled by coalescent point processes (CPPs), where trees grow "horizontally" by the sequential addition of vertical edges. Each new edge starts at some random speciation time and ends at the present time; speciation times are drawn from the same distribution independently. CPPs lead to extremely fast computation of tree likelihoods and simulation of reconstructed trees. Their topology always follows the uniform distribution on ranked tree shapes (URT). We characterize which forward-in-time models lead to URT reconstructed trees and among these, which lead to CPP reconstructed trees. We show that for any "asymmetric" diversification model in which speciation rates only depend on time and extinction rates only depend on time and on a non-heritable trait (e.g., age), the reconstructed tree is CPP, even if extant species are incompletely sampled. If rates additionally depend on the number of species, the reconstructed tree is (only) URT (but not CPP). We characterize the common distribution of speciation times in the CPP description, and discuss incomplete species sampling as well as three special model cases in detail: (1) the extinction rate does not depend on a trait; (2) rates do not depend on time; (3) mass extinctions may happen additionally at certain points in the past. Copyright © 2013 Elsevier Inc. All rights reserved.
Estimating parameter of Rayleigh distribution by using Maximum Likelihood method and Bayes method
NASA Astrophysics Data System (ADS)
Ardianti, Fitri; Sutarman
2018-01-01
In this paper, we use maximum likelihood estimation and the Bayes method under several loss functions to estimate the parameter of the Rayleigh distribution, in order to identify the better method. The prior used in the Bayes method is Jeffreys' non-informative prior. Maximum likelihood estimation and the Bayes method under the precautionary loss function, the entropy loss function, and the L1 loss function are compared. We compare these methods by bias and MSE values computed in R, and display the results in tables to facilitate comparison.
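The Rayleigh maximum likelihood estimator has a closed form, so the bias/MSE comparison is easy to reproduce; a minimal Monte Carlo sketch (Python rather than the paper's R; sample size and replication count illustrative):

```python
import numpy as np

def rayleigh_mle(x):
    """Closed-form ML estimate of the Rayleigh scale parameter:
    sigma_hat = sqrt(sum(x_i^2) / (2n))."""
    return np.sqrt(np.sum(x**2) / (2 * len(x)))

rng = np.random.default_rng(7)
sigma_true, n, reps = 2.0, 30, 5000
est = np.array([rayleigh_mle(rng.rayleigh(sigma_true, n)) for _ in range(reps)])
print("bias:", round(est.mean() - sigma_true, 4),
      " MSE:", round(np.mean((est - sigma_true) ** 2), 4))
```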
Blind deconvolution of 2-D and 3-D fluorescent micrographs
NASA Astrophysics Data System (ADS)
Krishnamurthi, Vijaykumar; Liu, Yi-Hwa; Holmes, Timothy J.; Roysam, Badrinath; Turner, James N.
1992-06-01
This paper presents recent results of our reconstructions of 3-D data from Drosophila chromosomes as well as our simulations with a refined version of the algorithm used in the former. It is well known that the calibration of the point spread function (PSF) of a fluorescence microscope is a tedious process and involves esoteric techniques in most cases. This problem is further compounded in the case of confocal microscopy, where the measured intensities are usually low. A number of techniques have been developed to solve this problem, all of which are methods in blind deconvolution. These are so called because the measured PSF is not required in the deconvolution of degraded images from any optical system. Our own efforts in this area involved the maximum likelihood (ML) method, the numerical solution to which is obtained by the expectation maximization (EM) algorithm. Based on the promising early results obtained during our simulations with 2-D phantoms, we carried out experiments with real 3-D data. We found that the blind deconvolution method using the ML approach gave reasonable reconstructions. Next we tried to perform the reconstructions using some 2-D data, but we found that the results were not encouraging. We surmised that the poor reconstructions were primarily due to the large values of dark current in the input data. This, coupled with the fact that we are likely to have similar data with considerable dark current from a confocal microscope, prompted us to look into ways of constraining the solution of the PSF. We observed that in the 2-D case, the reconstructed PSF has a tendency to retain values larger than those of the theoretical PSF in regions away from the center (outside of what we considered to be its region of support). This observation motivated us to apply an upper-bound constraint on the PSF in these regions. Furthermore, we constrain the solution of the PSF to be a bandlimited function, as is the case in the true situation. We have derived two separate approaches for implementing the constraint. One approach involves the mathematical rigors of Lagrange multipliers. This approach is discussed in another paper. The second approach involves an adaptation of the Gerchberg-Saxton algorithm, which ensures bandlimitedness and non-negativity of the PSF. Although the latter approach is mathematically less rigorous than the former, we currently favor it because it has a simpler implementation on a computer and smaller memory requirements. The next section briefly describes the theory and derivation of these constraint equations using Lagrange multipliers.
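A compact sketch of the alternating maximum-likelihood (Richardson-Lucy/EM) updates for object and PSF that blind deconvolution of this kind builds on; the support, upper-bound, and band-limit constraints discussed above would be imposed on the PSF after each update (array sizes and the Gaussian test kernel are illustrative):

```python
import numpy as np
from scipy.signal import fftconvolve

def blind_richardson_lucy(image, psf_shape, n_iter=30):
    """Alternate Richardson-Lucy (EM) updates for the object and the PSF;
    the PSF is renormalized to unit sum after each update."""
    image = image.astype(float)
    obj = np.full_like(image, image.mean())
    psf = np.ones(psf_shape)
    psf /= psf.sum()
    # central crop aligning the correlation output with the PSF support
    crop = tuple(slice(a // 2 - b // 2, a // 2 - b // 2 + b)
                 for a, b in zip(image.shape, psf_shape))
    for _ in range(n_iter):
        ratio = image / (fftconvolve(obj, psf, mode="same") + 1e-12)
        obj *= fftconvolve(ratio, psf[::-1, ::-1], mode="same")
        ratio = image / (fftconvolve(obj, psf, mode="same") + 1e-12)
        psf *= fftconvolve(ratio, obj[::-1, ::-1], mode="same")[crop]
        psf /= psf.sum()
    return obj, psf

rng = np.random.default_rng(8)
truth = np.zeros((64, 64)); truth[24:40, 24:40] = 5.0
g = np.exp(-0.5 * ((np.arange(7) - 3) / 1.5) ** 2)
kern = np.outer(g, g); kern /= kern.sum()
blurred = rng.poisson(fftconvolve(truth, kern, mode="same").clip(min=0))
obj_hat, psf_hat = blind_richardson_lucy(blurred, (7, 7))
print(psf_hat.sum().round(3), obj_hat.shape)
```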
NASA Technical Reports Server (NTRS)
Peters, B. C., Jr.; Walker, H. F.
1975-01-01
A general iterative procedure is given for determining the consistent maximum likelihood estimates of normal distributions. In addition, a local maximum of the log-likelihood function, Newton's method, a method of scoring, and modifications of these procedures are discussed.
The Maximum Likelihood Solution for Inclination-only Data
NASA Astrophysics Data System (ADS)
Arason, P.; Levi, S.
2006-12-01
The arithmetic means of inclination-only data are known to introduce a shallowing bias. Several methods have been proposed to estimate unbiased means of the inclination along with measures of the precision. Most of the inclination-only methods were designed to maximize the likelihood function of the marginal Fisher distribution. However, the exact analytical form of the maximum likelihood function is fairly complicated, and all these methods require various assumptions and approximations that are inappropriate for many data sets. For some steep and dispersed data sets, the estimates provided by these methods are significantly displaced from the peak of the likelihood function to systematically shallower inclinations. The problem in locating the maximum of the likelihood function is partly due to difficulties in accurately evaluating the function for all values of interest. This is because some elements of the log-likelihood function increase exponentially as precision parameters increase, leading to numerical instabilities. In this study we succeeded in analytically cancelling exponential elements from the likelihood function, and we are now able to calculate its value for any location in the parameter space and for any inclination-only data set, with full accuracy. Furthermore, we can now calculate the partial derivatives of the likelihood function with the desired accuracy. Locating the maximum likelihood without the assumptions required by previous methods is now straightforward. The information to separate the mean inclination from the precision parameter will be lost for very steep and dispersed data sets. It is worth noting that the likelihood function always has a maximum value. However, for some dispersed and steep data sets with few samples, the likelihood function takes its highest value on the boundary of the parameter space, i.e. at inclinations of +/- 90 degrees, but with relatively well defined dispersion. Our simulations indicate that this occurs quite frequently for certain data sets, and relatively small perturbations in the data will drive the maxima to the boundary. We interpret this to indicate that, for such data sets, the information needed to separate the mean inclination and the precision parameter is permanently lost. To assess the reliability and accuracy of our method we generated a large number of random Fisher-distributed data sets and used seven methods to estimate the mean inclination and precision parameter. These comparisons are described by Levi and Arason at the 2006 AGU Fall meeting. The results of the various methods are very favourable to our new robust maximum likelihood method, which, on average, is the most reliable, and the mean inclination estimates are the least biased toward shallow values. Further information on our inclination-only analysis can be obtained from: http://www.vedur.is/~arason/paleomag
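A minimal sketch of a numerically stable evaluation of the marginal Fisher log-likelihood for inclination-only data. The exponentially growing factors exp(kappa) in the normalization and in the Bessel function I0 cancel analytically; scipy's exponentially scaled Bessel i0e and log1p make the cancellation explicit. This is a stand-in for the cancellation described above, not the authors' code.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import i0e

def loglik(params, inc_rad):
    """Stable inclination-only Fisher log-likelihood (summed).

    params = (mean inclination in radians, log kappa); optimizing over
    log kappa keeps the search unconstrained.
    """
    inc0, logk = params
    k = np.exp(logk)
    z = k * np.cos(inc_rad) * np.cos(inc0)   # Bessel argument, z >= 0
    # log f = log k - log(2 sinh k) + k sin I sin I0 + log I0(z) + log cos I;
    # substituting log(2 sinh k) = k + log1p(-e^{-2k}) and
    # log I0(z) = z + log i0e(z) cancels every e^{k}-sized term:
    return np.sum(
        np.log(k) - np.log1p(-np.exp(-2.0 * k))
        + k * (np.cos(inc_rad - inc0) - 1.0)
        + np.log(i0e(z))
        + np.log(np.cos(inc_rad))            # Jacobian of the marginalization
    )

inc = np.deg2rad([62.0, 68.0, 75.0, 71.0, 80.0, 66.0, 73.0, 77.0])  # toy data
res = minimize(lambda p: -loglik(p, np.asarray(inc)),
               x0=[np.deg2rad(70.0), np.log(10.0)], method="Nelder-Mead")
print("mean inclination:", np.rad2deg(res.x[0]), " kappa:", np.exp(res.x[1]))
```

Every term in the sum stays bounded for any kappa, which is what allows the maximum (or a boundary supremum at +/- 90 degrees) to be located reliably.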
Camp, Christopher L; Conte, Stan; D'Angelo, John; Fealy, Stephen A; Ahmad, Christopher S
2018-05-01
In recent years, there has been a dramatic rise in the annual number of ulnar collateral ligament (UCL) reconstructions performed in amateur baseball pitchers. Accordingly, increasing numbers of players are entering professional baseball having already undergone the procedure; however, the effect of prior UCL reconstruction on future success remains unknown. (1) To provide an epidemiologic report on baseball players who undergo UCL reconstruction before being selected in the Major League Baseball (MLB) Draft, (2) to define the outcomes in terms of statistical performance, and (3) to compare these results with those of matched controls (ie, non-UCL reconstruction). Cohort study; Level of evidence, 3. The MLB Amateur Draft Database was queried to identify all drafted pitchers who underwent UCL reconstruction before being drafted. For each pitcher drafted from 2005 to 2014 with prior UCL reconstruction, 3 healthy controls with no history of elbow surgery were randomly identified for matched analysis. A number of demographic and performance comparisons were made between these groups. A total of 345 pitchers met inclusion criteria. The annual number of pitchers undergoing predraft UCL reconstructions rose steadily from 2005 to 2016 (P < .001). For matched control analysis, 252 pitchers with a UCL reconstruction and a minimum 2-year follow-up (drafted between 2005 and 2014) were matched to 756 controls (non-UCL reconstruction). As compared with the non-UCL reconstruction group, pitchers who underwent predraft UCL reconstruction reached the MLB level with greater frequency (20% vs 12%, P = .003), and their MLB statistical performances were similar for all measures. Compared with all other pitchers drafted during that period, players who had a predraft UCL reconstruction demonstrated an increased likelihood of reaching progressive levels of play (Full Season A, AA, and MLB) within a given time frame (P < .05 for all). The number of UCL reconstructions performed in amateur baseball players before the draft increased year over year for the entire study period. Professional pitchers who underwent UCL reconstruction as amateurs appear to perform at least as well as, if not better than, matched controls without elbow surgery.
Estimating Function Approaches for Spatial Point Processes
NASA Astrophysics Data System (ADS)
Deng, Chong
Spatial point pattern data consist of locations of events that are often of interest in biological and ecological studies. Such data are commonly viewed as a realization from a stochastic process called a spatial point process. To fit a parametric spatial point process model to such data, likelihood-based methods have been widely studied. However, while maximum likelihood estimation is often too computationally intensive for Cox and cluster processes, pairwise likelihood methods such as composite likelihood and Palm likelihood usually suffer from a loss of information because they ignore the correlation among pairs. For many types of correlated data other than spatial point processes, when likelihood-based approaches are not desirable, estimating functions have been widely used for model fitting. In this dissertation, we explore estimating function approaches for fitting spatial point process models. These approaches, which are based on asymptotically optimal estimating function theory, can be used to incorporate the correlation among data and yield more efficient estimators. We conducted a series of studies to demonstrate that these estimating function approaches are good alternatives to balance the trade-off between computational complexity and estimation efficiency. First, we propose a new estimating procedure that improves the efficiency of the pairwise composite likelihood method in estimating clustering parameters. Our approach combines estimating functions derived from pairwise composite likelihood estimation and estimating functions that account for correlations among the pairwise contributions. Our method can be used to fit a variety of parametric spatial point process models and can yield more efficient estimators for the clustering parameters than pairwise composite likelihood estimation. We demonstrate its efficacy through a simulation study and an application to the longleaf pine data. Second, we further explore the quasi-likelihood approach for fitting the second-order intensity function of spatial point processes. The original second-order quasi-likelihood is barely feasible due to the intense computation and high memory requirement needed to solve a large linear system. Motivated by the existence of geometric regular patterns in stationary point processes, we find a lower-dimensional representation of the optimal weight function and propose a reduced second-order quasi-likelihood approach. Through a simulation study, we show that the proposed method not only demonstrates superior performance in fitting the clustering parameter but also relaxes the constraint on the tuning parameter, H. Third, we study the quasi-likelihood-type estimating function that is optimal in a certain class of first-order estimating functions for estimating the regression parameter in spatial point process models. Then, by using a novel spectral representation, we construct an implementation that is computationally much more efficient and can be applied to more general setups than the original quasi-likelihood method.
Szöllősi, Gergely J.; Boussau, Bastien; Abby, Sophie S.; Tannier, Eric; Daubin, Vincent
2012-01-01
The timing of the evolution of microbial life has largely remained elusive due to the scarcity of prokaryotic fossil record and the confounding effects of the exchange of genes among possibly distant species. The history of gene transfer events, however, is not a series of individual oddities; it records which lineages were concurrent and thus provides information on the timing of species diversification. Here, we use a probabilistic model of genome evolution that accounts for differences between gene phylogenies and the species tree as series of duplication, transfer, and loss events to reconstruct chronologically ordered species phylogenies. Using simulations we show that we can robustly recover accurate chronologically ordered species phylogenies in the presence of gene tree reconstruction errors and realistic rates of duplication, transfer, and loss. Using genomic data we demonstrate that we can infer rooted species phylogenies using homologous gene families from complete genomes of 10 bacterial and archaeal groups. Focusing on cyanobacteria, distinguished among prokaryotes by a relative abundance of fossils, we infer the maximum likelihood chronologically ordered species phylogeny based on 36 genomes with 8,332 homologous gene families. We find the order of speciation events to be in full agreement with the fossil record and the inferred phylogeny of cyanobacteria to be consistent with the phylogeny recovered from established phylogenomics methods. Our results demonstrate that lateral gene transfers, detected by probabilistic models of genome evolution, can be used as a source of information on the timing of evolution, providing a valuable complement to the limited prokaryotic fossil record. PMID:23043116
NASA Astrophysics Data System (ADS)
Li, Jianyong; Dodson, John; Yan, Hong; Cheng, Bo; Zhang, Xiaojian; Xu, Qinghai; Ni, Jian; Lu, Fengyan
2017-05-01
Quantitative information regarding the long-term variability of precipitation and vegetation during the period covering both the Late Glacial and the Holocene on the Qinghai-Tibetan Plateau (QTP) is scarce. Herein, we provide new numerical reconstructions of annual mean precipitation (PANN) and vegetation history over the last 18,000 years using high-resolution pollen data from Lakes Dalianhai and Qinghai on the northeastern QTP. Five calibration techniques, including weighted averaging, weighted averaging-partial least squares regression, the modern analogue technique, locally weighted weighted averaging regression, and maximum likelihood, were employed to construct robust inference models and to produce reliable PANN estimates on the QTP. The biomization method was applied for reconstructing the vegetation dynamics. The study area was dominated by steppe and characterized by a highly variable, relatively dry climate at 18,000-11,000 cal years B.P. PANN increased from the early Holocene, reached a maximum at 8000-3000 cal years B.P. with coniferous-temperate mixed forest as the dominant biome, and thereafter declined toward the present. The PANN reconstructions are broadly consistent with other proxy-based paleoclimatic records from the northeastern QTP and the northern region of monsoonal China. The possible mechanisms behind the precipitation changes may be tentatively attributed to the internal feedback processes of higher-latitude (e.g., North Atlantic) and lower-latitude (e.g., subtropical monsoon) competing climatic regimes, which are primarily modulated by solar energy output as the external driving force. These findings may provide important insights into understanding future Asian precipitation dynamics under the projected global warming.
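A minimal sketch of one of the calibration techniques named above, weighted averaging (WA) with classical deshrinking: taxon optima are abundance-weighted means of the training climate values, fossil samples are reconstructed as abundance-weighted means of those optima, and a linear deshrinking regression corrects the shrinkage that two rounds of averaging introduce. All data here are synthetic stand-ins for pollen spectra and PANN values.

```python
import numpy as np

rng = np.random.default_rng(3)

# training set: relative abundances (samples x taxa) and observed PANN (mm)
Y = rng.dirichlet(np.ones(12), size=80)
x = rng.uniform(200.0, 800.0, size=80)

optima = (Y * x[:, None]).sum(axis=0) / Y.sum(axis=0)   # WA taxon optima

def wa_initial(Yk):
    """Abundance-weighted mean of taxon optima per sample."""
    return (Yk * optima).sum(axis=1) / Yk.sum(axis=1)

# classical deshrinking: regress initial estimates on observed x, then invert
init = wa_initial(Y)
b1, b0 = np.polyfit(x, init, 1)                         # init ~ b0 + b1 * x
deshrink = lambda est: (est - b0) / b1

Y_fossil = rng.dirichlet(np.ones(12), size=5)           # fossil pollen spectra
print(deshrink(wa_initial(Y_fossil)))                   # reconstructed PANN
```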
Soft-tissue imaging with C-arm cone-beam CT using statistical reconstruction
NASA Astrophysics Data System (ADS)
Wang, Adam S.; Webster Stayman, J.; Otake, Yoshito; Kleinszig, Gerhard; Vogt, Sebastian; Gallia, Gary L.; Khanna, A. Jay; Siewerdsen, Jeffrey H.
2014-02-01
The potential for statistical image reconstruction methods such as penalized-likelihood (PL) to improve C-arm cone-beam CT (CBCT) soft-tissue visualization for intraoperative imaging over conventional filtered backprojection (FBP) is assessed in this work by making a fair comparison in relation to soft-tissue performance. A prototype mobile C-arm was used to scan anthropomorphic head and abdomen phantoms as well as a cadaveric torso at doses substantially lower than typical values in diagnostic CT, and the effects of dose reduction via tube current reduction and sparse sampling were also compared. Matched spatial resolution between PL and FBP was determined by the edge spread function of low-contrast (~40-80 HU) spheres in the phantoms, which were representative of soft-tissue imaging tasks. PL using the non-quadratic Huber penalty was found to substantially reduce noise relative to FBP, especially at lower spatial resolution where PL provides a contrast-to-noise ratio increase up to 1.4-2.2× over FBP at 50% dose reduction across all objects. Comparison of sampling strategies indicates that soft-tissue imaging benefits from fully sampled acquisitions at dose above ~1.7 mGy and benefits from 50% sparsity at dose below ~1.0 mGy. Therefore, an appropriate sampling strategy along with the improved low-contrast visualization offered by statistical reconstruction demonstrates the potential for extending intraoperative C-arm CBCT to applications in soft-tissue interventions in neurosurgery as well as thoracic and abdominal surgeries by overcoming conventional tradeoffs in noise, spatial resolution, and dose.
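A minimal sketch of the edge-preserving Huber penalty used in the PL reconstruction above: quadratic for small neighbor differences (smoothing noise), linear for large ones (preserving edges). The threshold delta and weight beta are illustrative; the full PL objective would add this penalty, summed over neighbor pairs, to the CBCT log-likelihood.

```python
import numpy as np

def huber(t, delta):
    """Huber potential, applied elementwise to neighbor differences t."""
    a = np.abs(t)
    return np.where(a <= delta, 0.5 * t**2, delta * a - 0.5 * delta**2)

def roughness_penalty(img, beta=1.0, delta=0.01):
    """Sum of Huber potentials over horizontal and vertical neighbors."""
    dx = np.diff(img, axis=1)
    dy = np.diff(img, axis=0)
    return beta * (huber(dx, delta).sum() + huber(dy, delta).sum())

img = np.zeros((8, 8)); img[:, 4:] = 1.0   # a sharp edge
print(roughness_penalty(img))               # edge is penalized linearly, not quadratically
```

The two parameters map onto the trade-off described above: delta controls how large a jump must be before it is treated as an edge, and beta controls how strongly noisy fluctuations are smoothed.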
NASA Astrophysics Data System (ADS)
Lojacono, Xavier; Richard, Marie-Hélène; Ley, Jean-Luc; Testa, Etienne; Ray, Cédric; Freud, Nicolas; Létang, Jean Michel; Dauvergne, Denis; Maxim, Voichiţa; Prost, Rémy
2013-10-01
The Compton camera is a relevant imaging device for the detection of prompt photons produced by nuclear fragmentation in hadrontherapy. It may allow an improvement in detection efficiency compared to a standard gamma-camera but requires more sophisticated image reconstruction techniques. In this work, we simulate low statistics acquisitions from a point source having a broad energy spectrum compatible with hadrontherapy. We then reconstruct the image of the source with a recently developed filtered backprojection algorithm, a line-cone approach and an iterative List Mode Maximum Likelihood Expectation Maximization algorithm. Simulated data come from a Compton camera prototype designed for hadrontherapy online monitoring. Results indicate that the achievable resolution in directions parallel to the detector, that may include the beam direction, is compatible with the quality control requirements. With the prototype under study, the reconstructed image is elongated in the direction orthogonal to the detector. However this direction is of less interest in hadrontherapy where the first requirement is to determine the penetration depth of the beam in the patient. Additionally, the resolution may be recovered using a second camera.
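A minimal sketch of the list-mode MLEM iteration named above. Each list-mode event i contributes a row t_i of the system matrix (for a Compton camera, the discretized cone of possible origins); here the rows are random stand-ins and the sensitivity image is approximated from those same rows, which is a simplification. This illustrates the update rule only, not the prototype's geometry.

```python
import numpy as np

rng = np.random.default_rng(4)

n_events, n_voxels = 500, 100
# sparse random rows as stand-ins for cone projectors
T = rng.random((n_events, n_voxels)) * (rng.random((n_events, n_voxels)) < 0.1)
s = T.sum(axis=0) + 1e-12                 # sensitivity image (stand-in)

lam = np.ones(n_voxels)                   # initial image estimate
for _ in range(50):
    proj = T @ lam                        # expected contribution per event
    back = T.T @ (1.0 / (proj + 1e-12))   # backproject event ratios
    lam = lam / s * back                  # multiplicative list-mode MLEM update

print(lam.max(), lam.sum())
```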
2014-01-01
Background: We propose a mathematical model for multichannel assessment of the trial-to-trial variability of auditory evoked brain responses in magnetoencephalography (MEG). Methods: Following the work of de Munck et al., our approach is based on maximum likelihood estimation and involves an approximation of the spatio-temporal covariance of the contaminating background noise by means of the Kronecker product of its spatial and temporal covariance matrices. Extending the work of de Munck et al., where the trial-to-trial variability of the responses was considered identical for all channels, we evaluate it for each individual channel. Results: Simulations with two equivalent current dipoles (ECDs) with different trial-to-trial variability, one seeded in each of the auditory cortices, were used to study the applicability of the proposed methodology on the sensor level and revealed spatial selectivity of the trial-to-trial estimates. In addition, we simulated a scenario with neighboring ECDs to show limitations of the method. We also present an illustrative example of the application of this methodology to real MEG data taken from an auditory experimental paradigm, where we found hemispheric lateralization of the habituation effect to multiple stimulus presentation. Conclusions: The proposed algorithm is capable of reconstructing lateralization effects of the trial-to-trial variability of evoked responses, i.e. when an ECD of only one hemisphere habituates, whereas the activity of the other hemisphere is not subject to habituation. Hence, it may be a useful tool in paradigms that assume lateralization effects, like, e.g., those involving language processing. PMID:24939398
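A generic sketch of why the Kronecker factorization above helps, not the authors' estimation algorithm: if the spatio-temporal noise covariance factors as Sigma = A (spatial, p x p) kron B (temporal, q x q), the Gaussian log-likelihood of a residual matrix X (q x p) can be evaluated without ever forming the pq x pq matrix Sigma.

```python
import numpy as np

rng = np.random.default_rng(5)

def spd(n):
    """Random symmetric positive-definite matrix."""
    M = rng.standard_normal((n, n))
    return M @ M.T + n * np.eye(n)

p, q = 6, 50                      # channels, time samples
A, B = spd(p), spd(q)
X = rng.standard_normal((q, p))   # one trial's residual (time x channels)

# log det(A kron B) = q log det A + p log det B
logdet = q * np.linalg.slogdet(A)[1] + p * np.linalg.slogdet(B)[1]

# vec(X)' (A kron B)^{-1} vec(X) = trace(X' B^{-1} X A^{-1})
maha = np.trace(X.T @ np.linalg.solve(B, X) @ np.linalg.inv(A))

loglik = -0.5 * (p * q * np.log(2 * np.pi) + logdet + maha)

# check against the direct (memory-hungry) computation
Sigma = np.kron(A, B)
v = X.flatten(order="F")          # vec stacks columns
direct = -0.5 * (p * q * np.log(2 * np.pi)
                 + np.linalg.slogdet(Sigma)[1]
                 + v @ np.linalg.solve(Sigma, v))
print(np.isclose(loglik, direct))
```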
Phylodynamic Inference with Kernel ABC and Its Application to HIV Epidemiology.
Poon, Art F Y
2015-09-01
The shapes of phylogenetic trees relating virus populations are determined by the adaptation of viruses within each host, and by the transmission of viruses among hosts. Phylodynamic inference attempts to reverse this flow of information, estimating parameters of these processes from the shape of a virus phylogeny reconstructed from a sample of genetic sequences from the epidemic. A key challenge to phylodynamic inference is quantifying the similarity between two trees in an efficient and comprehensive way. In this study, I demonstrate that a new distance measure, based on a subset tree kernel function from computational linguistics, confers a significant improvement over previous measures of tree shape for classifying trees generated under different epidemiological scenarios. Next, I incorporate this kernel-based distance measure into an approximate Bayesian computation (ABC) framework for phylodynamic inference. ABC bypasses the need for an analytical solution of model likelihood, as it only requires the ability to simulate data from the model. I validate this "kernel-ABC" method for phylodynamic inference by estimating parameters from data simulated under a simple epidemiological model. Results indicate that kernel-ABC attained greater accuracy for parameters associated with virus transmission than leading software on the same data sets. Finally, I apply the kernel-ABC framework to study a recent outbreak of a recombinant HIV subtype in China. Kernel-ABC provides a versatile framework for phylodynamic inference because it can fit a broader range of models than methods that rely on the computation of exact likelihoods. © The Author 2015. Published by Oxford University Press on behalf of the Society for Molecular Biology and Evolution.
Hernández-Hernández, Tania; Hernández, Héctor M; De-Nova, J Arturo; Puente, Raul; Eguiarte, Luis E; Magallón, Susana
2011-01-01
Cactaceae is one of the most charismatic plant families because of the extreme succulence and outstanding diversity of growth forms of its members. Although cacti are conspicuous elements of arid ecosystems in the New World and are model systems for ecological and anatomical studies, the high morphological convergence and scarcity of phenotypic synapomorphies make the evolutionary relationships and trends among lineages difficult to understand. We performed phylogenetic analyses implementing parsimony ratchet and likelihood methods, using a concatenated matrix with 6148 bp of plastid and nuclear markers (trnK/matK, matK, trnL-trnF, rpl16, and ppc). We included 224 species representing approximately 85% of the family's genera. Likelihood methods were used to perform an ancestral character reconstruction within Cactoideae, the richest subfamily in terms of morphological diversity and species number, to evaluate possible growth form evolutionary trends. Our phylogenetic results support previous studies showing the paraphyly of subfamily Pereskioideae and the monophyly of subfamilies Opuntioideae and Cactoideae. After the early divergence of Blossfeldia, Cactoideae splits into two clades: Cacteae, including North American globose and barrel-shaped members, and core Cactoideae, including the largest diversity of growth forms distributed throughout the American continent. Para- or polyphyly is persistent in different parts of the phylogeny. Main Cactoideae clades were found to have different ancestral growth forms, and convergence toward globose, arborescent, or columnar forms occurred in different lineages. Our study enabled us to provide a detailed hypothesis of relationships among cacti lineages and represents the most complete general phylogenetic framework available to understand evolutionary trends within Cactaceae.
A Distance Measure for Genome Phylogenetic Analysis
NASA Astrophysics Data System (ADS)
Cao, Minh Duc; Allison, Lloyd; Dix, Trevor
Phylogenetic analyses of species based on single genes or parts of the genomes are often inconsistent because of factors such as variable rates of evolution and horizontal gene transfer. The availability of more and more sequenced genomes allows phylogeny construction from complete genomes that is less sensitive to such inconsistency. For such long sequences, construction methods like maximum parsimony and maximum likelihood are often not feasible due to their intensive computational requirements. Another class of tree construction methods, namely distance-based methods, requires a measure of the distance between any two genomes. Some measures, such as the evolutionary edit distance of gene order and gene content, are computationally expensive or do not perform well when the gene content of the organisms is similar. This study presents an information-theoretic measure of genetic distance between genomes based on the biological compression algorithm expert model. We demonstrate that our distance measure can be applied to reconstruct the consensus phylogenetic tree of a number of Plasmodium parasites from their genomes, the statistical bias of which would mislead conventional analysis methods. Our approach is also used to successfully construct a plausible evolutionary tree for the γ-Proteobacteria group, whose genomes are known to contain many horizontally transferred genes.
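A minimal sketch of a compression-based genome distance. The paper uses its own "expert model" compressor; as a stand-in we use zlib in the normalized compression distance (NCD) form, which shares the same information-theoretic idea: related sequences compress better together than apart.

```python
import zlib

def c(s: bytes) -> int:
    """Compressed length as an approximation of Kolmogorov complexity."""
    return len(zlib.compress(s, 9))

def ncd(x: bytes, y: bytes) -> float:
    """Normalized compression distance between two sequences."""
    cx, cy, cxy = c(x), c(y), c(x + y)
    return (cxy - min(cx, cy)) / max(cx, cy)

a = b"ACGTACGTTTGACCA" * 200
b_seq = a.replace(b"TTG", b"TAG")      # a lightly mutated copy
c_seq = bytes(reversed(a))             # unrelated-ish control
print(ncd(a, b_seq), ncd(a, c_seq))    # related pair scores lower
```

A matrix of such pairwise distances can then be handed to any distance-based tree builder (e.g., neighbor joining), which is the workflow the abstract describes.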
New prior sampling methods for nested sampling - Development and testing
NASA Astrophysics Data System (ADS)
Stokes, Barrie; Tuyl, Frank; Hudson, Irene
2017-06-01
Nested Sampling is a powerful algorithm for fitting models to data in the Bayesian setting, introduced by Skilling [1]. The nested sampling algorithm proceeds by carrying out a series of compressive steps, involving successively nested iso-likelihood boundaries, starting with the full prior distribution of the problem parameters. The "central problem" of nested sampling is to draw at each step a sample from the prior distribution whose likelihood is greater than the current likelihood threshold, i.e., a sample falling inside the current likelihood-restricted region. For both flat and informative priors this ultimately requires uniform sampling restricted to the likelihood-restricted region. We present two new methods of carrying out this sampling step, and illustrate their use with the lighthouse problem [2], a bivariate likelihood used by Gregory [3] and a trivariate Gaussian mixture likelihood. All the algorithm development and testing reported here has been done with Mathematica® [4].
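A minimal sketch of the nested sampling compression loop and its "central problem": drawing a prior sample inside the current likelihood-restricted region. Here the constrained draw is done by simple rejection from a uniform prior, which is only viable for easy problems; the two methods developed in the paper replace exactly this step. The toy likelihood and the rough evidence estimate are illustrative assumptions.

```python
import numpy as np
from scipy.special import logsumexp

rng = np.random.default_rng(6)

def loglike(theta):                     # toy bivariate Gaussian likelihood
    return -0.5 * np.sum((theta - 0.5) ** 2) / 0.01

def prior_draw():                       # flat prior on the unit square
    return rng.uniform(size=2)

def constrained_draw(logl_min):
    """Rejection sampling inside the likelihood-restricted region."""
    while True:
        theta = prior_draw()
        if loglike(theta) > logl_min:
            return theta

n_live, n_iter = 100, 800
live = [prior_draw() for _ in range(n_live)]
logl = np.array([loglike(t) for t in live])
logz_terms = []
for i in range(n_iter):
    worst = np.argmin(logl)
    log_x = -(i + 1) / n_live           # expected log prior-volume shrinkage
    logz_terms.append(logl[worst] + log_x)
    live[worst] = constrained_draw(logl[worst])   # the 'central problem'
    logl[worst] = loglike(live[worst])

print("log-evidence (rough):", logsumexp(logz_terms) - np.log(n_live))
```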
Synthesizing Regression Results: A Factored Likelihood Method
ERIC Educational Resources Information Center
Wu, Meng-Jia; Becker, Betsy Jane
2013-01-01
Regression methods are widely used by researchers in many fields, yet methods for synthesizing regression results are scarce. This study proposes using a factored likelihood method, originally developed to handle missing data, to appropriately synthesize regression models involving different predictors. This method uses the correlations reported…
Mugleston, Joseph D; Song, Hojun; Whiting, Michael F
2013-12-01
The phylogenetic relationships of Tettigoniidae (katydids and bush-crickets) were inferred using molecular sequence data. Six genes (18S rDNA, 28S rDNA, Cytochrome Oxidase II, Histone 3, Tubulin Alpha I, and Wingless) were sequenced for 135 ingroup taxa representing 16 of the 19 extant katydid subfamilies. Five subfamilies (Tettigoniinae, Pseudophyllinae, Mecopodinae, Meconematinae, and Listroscelidinae) were found to be paraphyletic under various tree reconstruction methods (Maximum Likelihood, Bayesian Inference and Maximum Parsimony). Seven subfamilies - Conocephalinae, Hetrodinae, Hexacentrinae, Saginae, Phaneropterinae, Phyllophorinae, and Lipotactinae - were each recovered as well-supported monophyletic groups. We mapped the small and exposed thoracic auditory spiracle (a defining character of the subfamily Pseudophyllinae) and found it to be homoplasious. We also found the leaf-like wings of katydids have been derived independently in at least six lineages. Copyright © 2013 Elsevier Inc. All rights reserved.
Asteroid models from photometry and complementary data sources
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kaasalainen, Mikko
I discuss inversion methods for asteroid shape and spin reconstruction with photometry (lightcurves) and complementary data sources such as adaptive optics or other images, occultation timings, interferometry, and range-Doppler radar data. These are essentially different sampling modes (generalized projections) of plane-of-sky images. An important concept in this approach is the optimal weighting of the various data modes. The maximum compatibility estimate, a multi-modal generalization of the maximum likelihood estimate, can be used for this purpose. I discuss the fundamental properties of lightcurve inversion by examining the two-dimensional case that, though not usable in our three-dimensional world, is simple to analyze, and it shares essentially the same uniqueness and stability properties as the 3-D case. After this, I review the main aspects of 3-D shape representations, lightcurve inversion, and the inclusion of complementary data.
Lee, Ming-Min; Stock, S Patricia
2010-09-01
Nematodes of the genus Steinernema Travassos, 1927 (Nematoda: Steinernematidae) and their associated bacteria, Xenorhabdus spp. (gamma-Proteobacteria), are an emergent model of terrestrial animal-microbe symbiosis. Interest in this association initially arose out of their potential as biocontrol agents against insect pests, but, despite advances in their field application and the growing popularity of this model system, relatively little has been published to uncover the evolutionary facets of this beneficial partnership. This study adds to the body of knowledge regarding nematode-bacteria symbiosis by proposing a possible scenario for their historical association in the form of a cophylogenetic hypothesis. Topological and likelihood-based testing methods were employed to reconstruct a history of association between 30 host-symbiont pairs and to gauge the level of similarity between their inferred phylogenetic patterns.
NASA Astrophysics Data System (ADS)
Hutton, Brian F.; Lau, Yiu H.
1998-06-01
Compensation for distance-dependent resolution can be directly incorporated in maximum likelihood reconstruction. Our objective was to examine the effectiveness of this compensation using either the standard expectation maximization (EM) algorithm or an accelerated algorithm based on the use of ordered subsets (OSEM). We also investigated the application of post-reconstruction filtering in combination with resolution compensation. Using the MCAT phantom, projections were simulated for … data, including attenuation and distance-dependent resolution. Projection data were reconstructed using conventional EM and OSEM with subset sizes 2 and 4, with/without 3D compensation for detector response (CDR). Also, post-reconstruction filtering (PRF) was performed using a 3D Butterworth filter of order 5 with various cutoff frequencies (0.2-…). Image quality and reconstruction accuracy were improved when CDR was included. Image noise was lower with CDR for a given iteration number. PRF with cutoff frequency greater than … improved noise with no reduction in the recovery coefficient for myocardium, but the effect was less when CDR was incorporated in the reconstruction. CDR alone provided better results than use of PRF without CDR. Results suggest that using CDR without PRF, and stopping at a small number of iterations, may provide sufficiently good results for myocardial SPECT. Similar behaviour was demonstrated for OSEM.
MITIE: Simultaneous RNA-Seq-based transcript identification and quantification in multiple samples.
Behr, Jonas; Kahles, André; Zhong, Yi; Sreedharan, Vipin T; Drewe, Philipp; Rätsch, Gunnar
2013-10-15
High-throughput sequencing of mRNA (RNA-Seq) has led to tremendous improvements in the detection of expressed genes and reconstruction of RNA transcripts. However, the extensive dynamic range of gene expression, technical limitations and biases, as well as the observed complexity of the transcriptional landscape, pose profound computational challenges for transcriptome reconstruction. We present the novel framework MITIE (Mixed Integer Transcript IdEntification) for simultaneous transcript reconstruction and quantification. We define a likelihood function based on the negative binomial distribution, use a regularization approach to select a few transcripts collectively explaining the observed read data and show how to find the optimal solution using Mixed Integer Programming. MITIE can (i) take advantage of known transcripts, (ii) reconstruct and quantify transcripts simultaneously in multiple samples, and (iii) resolve the location of multi-mapping reads. It is designed for genome- and assembly-based transcriptome reconstruction. We present an extensive study based on realistic simulated RNA-Seq data. When compared with state-of-the-art approaches, MITIE proves to be significantly more sensitive and overall more accurate. Moreover, MITIE yields substantial performance gains when used with multiple samples. We applied our system to 38 Drosophila melanogaster modENCODE RNA-Seq libraries and estimated the sensitivity of reconstructing omitted transcript annotations and the specificity with respect to annotated transcripts. Our results corroborate that a well-motivated objective paired with appropriate optimization techniques leads to significant improvements over the state-of-the-art in transcriptome reconstruction. MITIE is implemented in C++ and is available from http://bioweb.me/mitie under the GPL license.
Logerstedt, David; Grindem, Hege; Lynch, Andrew; Eitzen, Ingrid; Engebretsen, Lars; Risberg, May Arna; Axe, Michael J.; Snyder-Mackler, Lynn
2012-01-01
Background: Single-legged hop tests are commonly used functional performance measures that can capture limb asymmetries in patients after anterior cruciate ligament (ACL) reconstruction. Hop tests hold potential as predictive factors of self-reported knee function in individuals after ACL reconstruction. Hypothesis: Single-legged hop tests conducted preoperatively would not, and 6 months after ACL reconstruction would, predict self-reported knee function (International Knee Documentation Committee [IKDC] 2000) 1 year after ACL reconstruction. Study Design: Cohort study (prognosis); Level of evidence, 2. Methods: One hundred twenty patients who were treated with ACL reconstruction performed 4 single-legged hop tests preoperatively and 6 months after ACL reconstruction. Self-reported knee function within normal ranges was defined as IKDC 2000 scores greater than or equal to the age- and sex-specific normative 15th percentile score 1 year after surgery. Logistic regression analyses were performed to identify predictors of self-reported knee function within normal ranges. The area under the curve (AUC) from receiver operating characteristic curves was used as a measure of discriminative accuracy. Results: Eighty-five patients completed single-legged hop tests 6 months after surgery and the 1-year follow-up, with 68 patients classified as having self-reported knee function within normal ranges 1 year after reconstruction. The crossover hop and 6-m timed hop limb symmetry index (LSI) 6 months after ACL reconstruction were the strongest individual predictors of self-reported knee function (odds ratio, 1.09 and 1.10) and the only 2 tests in which the confidence intervals of the discriminatory accuracy (AUC) were above 0.5 (AUC = 0.68). Patients with knee function below normal ranges were over 5 times more likely to have a 6-m timed hop LSI lower than the 88% cutoff than those with knee function within normal ranges. Patients with knee function within normal ranges were 4 times more likely to have a crossover hop LSI greater than the 95% cutoff than those with knee function below normal ranges. No preoperative single-legged hop test predicted self-reported knee function within normal ranges 1 year after ACL reconstruction (all P > .353). Conclusion: Single-legged hop tests conducted 6 months after ACL reconstruction can predict the likelihood of successful and unsuccessful outcomes 1 year after ACL reconstruction. Patients demonstrating less than the 88% cutoff score on the 6-m timed hop test at 6 months may benefit from targeted training to improve limb symmetry in an attempt to normalize function. Patients with minimal side-to-side differences on the crossover hop test at 6 months possibly will have good knee function at 1 year if they continue with their current training regimen. Preoperative single-legged hop tests are not able to predict postoperative outcomes. PMID:22926749
Local motion-compensated method for high-quality 3D coronary artery reconstruction
Liu, Bo; Bai, Xiangzhi; Zhou, Fugen
2016-01-01
The 3D reconstruction of coronary artery from X-ray angiograms rotationally acquired on C-arm has great clinical value. While cardiac-gated reconstruction has shown promising results, it suffers from the problem of residual motion. This work proposed a new local motion-compensated reconstruction method to handle this issue. An initial image was firstly reconstructed using a regularized iterative reconstruction method. Then a 3D/2D registration method was proposed to estimate the residual vessel motion. Finally, the residual motion was compensated in the final reconstruction using the extended iterative reconstruction method. Through quantitative evaluation, it was found that high-quality 3D reconstruction could be obtained and the result was comparable to state-of-the-art method. PMID:28018741
Vexler, Albert; Tanajian, Hovig; Hutson, Alan D
In practice, parametric likelihood-ratio techniques are powerful statistical tools. In this article, we propose and examine novel and simple distribution-free test statistics that efficiently approximate parametric likelihood ratios to analyze and compare distributions of K groups of observations. Using the density-based empirical likelihood methodology, we develop a Stata package that applies to a test for symmetry of data distributions and comparisons of K-sample distributions. Recognizing that recent statistical software packages do not sufficiently address K-sample nonparametric comparisons of data distributions, we propose a new Stata command, vxdbel, to execute exact density-based empirical likelihood-ratio tests using K samples. To calculate p-values of the proposed tests, we use the following methods: 1) a classical technique based on Monte Carlo p-value evaluations; 2) an interpolation technique based on tabulated critical values; and 3) a new hybrid technique that combines methods 1 and 2. The third, cutting-edge method is shown to be very efficient in the context of exact-test p-value computations. This Bayesian-type method considers tabulated critical values as prior information and Monte Carlo generations of test statistic values as data used to depict the likelihood function. In this case, a nonparametric Bayesian method is proposed to compute critical values of exact tests.
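A minimal sketch of method 1 above, the classical Monte Carlo exact p-value: simulate the null distribution of the test statistic and count exceedances, with the add-one rule keeping the test exact. The statistic here is a generic permutation-style two-sample stand-in, not the density-based empirical likelihood ratio that vxdbel implements.

```python
import numpy as np

rng = np.random.default_rng(7)

def statistic(x, y):                   # stand-in test statistic
    return abs(x.mean() - y.mean())

def mc_pvalue(x, y, n_mc=9999):
    t_obs = statistic(x, y)
    pooled = np.concatenate([x, y])
    exceed = 0
    for _ in range(n_mc):
        rng.shuffle(pooled)            # resample under H0: same distribution
        t = statistic(pooled[:len(x)], pooled[len(x):])
        exceed += t >= t_obs
    return (1 + exceed) / (n_mc + 1)   # add-one rule keeps the test exact

x = rng.normal(0.0, 1.0, 40)
y = rng.normal(0.4, 1.0, 40)
print(mc_pvalue(x, y))
```

The hybrid method described above would replace the raw count with an update of prior information taken from tabulated critical values, spending far fewer Monte Carlo draws for the same accuracy.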
NASA Astrophysics Data System (ADS)
Winiarek, Victor; Bocquet, Marc; Duhanyan, Nora; Roustan, Yelva; Saunier, Olivier; Mathieu, Anne
2013-04-01
A major difficulty when inverting the source term of an atmospheric tracer dispersion problem is the estimation of the prior errors: those of the atmospheric transport model, those ascribed to the representativeness of the measurements, the instrumental errors, and those attached to the prior knowledge of the variables one seeks to retrieve. In the case of an accidental release of pollutant, and especially in a situation of sparse observability, the reconstructed source is sensitive to these assumptions. This sensitivity makes the quality of the retrieval dependent on the methods used to model and estimate the prior errors of the inverse modeling scheme. In Winiarek et al. (2012), we proposed an estimation method for the errors' amplitude based on the maximum likelihood principle. Under semi-Gaussian assumptions, it takes into account, without approximation, the positivity assumption on the source. We applied the method to the estimation of the Fukushima Daiichi cesium-137 and iodine-131 source terms using activity concentrations in the air. The results were compared to an L-curve estimation technique and to Desroziers's scheme. In addition to the estimates of released activities, we provided related uncertainties (12 PBq with a std. of 15-20% for cesium-137 and 190-380 PBq with a std. of 5-10% for iodine-131). We also showed that, because of the low number of available observations (a few hundred), and even if orders of magnitude were consistent, the reconstructed activities significantly depended on the method used to estimate the prior errors. In order to use more data, we propose to extend the methods to the use of several data types, such as activity concentrations in the air and fallout measurements. The idea is to simultaneously estimate the prior errors related to each dataset, in order to fully exploit the information content of each one. Using the activity concentration measurements, but also daily fallout data from prefectures and cumulated deposition data over a region lying approximately 150 km around the nuclear power plant, we can use a few thousand data in our inverse modeling algorithm to reconstruct the cesium-137 source term. To improve the parameterization of removal processes, rainfall fields have also been corrected using outputs from the mesoscale meteorological model WRF and ground station rainfall data. As expected, the different methods yield closer results as the number of data increases. Reference: Winiarek, V., M. Bocquet, O. Saunier, A. Mathieu (2012), Estimation of errors in the inverse modeling of accidental release of atmospheric pollutant: Application to the reconstruction of the cesium-137 and iodine-131 source terms from the Fukushima Daiichi power plant, J. Geophys. Res., 117, D05122, doi:10.1029/2011JD016932.
Chen, Kevin T; Izquierdo-Garcia, David; Poynton, Clare B; Chonde, Daniel B; Catana, Ciprian
2017-03-01
To propose an MR-based method for generating continuous-valued head attenuation maps and to assess its accuracy and reproducibility. Demonstrating that novel MR-based photon attenuation correction methods are both accurate and reproducible is essential prior to using them routinely in research and clinical studies on integrated PET/MR scanners. Continuous-valued linear attenuation coefficient maps ("μ-maps") were generated by combining atlases that provided the prior probability of voxel positions belonging to a certain tissue class (air, soft tissue, or bone) and an MR intensity-based likelihood classifier to produce posterior probability maps of tissue classes. These probabilities were used as weights to generate the μ-maps. The accuracy of this probabilistic atlas-based continuous-valued μ-map ("PAC-map") generation method was assessed by calculating the voxel-wise absolute relative change (RC) between the MR-based and scaled CT-based attenuation-corrected PET images. To assess reproducibility, we performed pair-wise comparisons of the RC values obtained from the PET images reconstructed using the μ-maps generated from the data acquired at three time points. The proposed method produced continuous-valued μ-maps that qualitatively reflected the variable anatomy in patients with brain tumor and agreed well with the scaled CT-based μ-maps. The absolute RC comparing the resulting PET volumes was 1.76 ± 2.33 %, quantitatively demonstrating that the method is accurate. Additionally, we also showed that the method is highly reproducible, the mean RC value for the PET images reconstructed using the μ-maps obtained at the three visits being 0.65 ± 0.95 %. Accurate and highly reproducible continuous-valued head μ-maps can be generated from MR data using a probabilistic atlas-based approach.
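A minimal sketch of the probabilistic combination described above: atlas prior probabilities per tissue class are multiplied by an MR intensity likelihood, normalized into posteriors, and used as weights on class attenuation coefficients. The Gaussian class parameters and the linear attenuation coefficients (1/cm at 511 keV) are illustrative assumptions, not the paper's fitted values.

```python
import numpy as np

classes = ["air", "soft", "bone"]
mu_class = np.array([0.0, 0.096, 0.151])        # assumed LACs, 1/cm

def intensity_likelihood(mr):
    """Gaussian intensity model per class (illustrative parameters)."""
    means = np.array([10.0, 300.0, 120.0])
    sds = np.array([10.0, 80.0, 60.0])
    return np.exp(-0.5 * ((mr[..., None] - means) / sds) ** 2) / sds

def pac_map(mr, atlas_prior):
    """Continuous-valued mu-map from posterior tissue probabilities.

    mr: (...,) MR intensities; atlas_prior: (..., 3) prior class probs.
    """
    post = atlas_prior * intensity_likelihood(mr)
    post /= post.sum(axis=-1, keepdims=True)    # posterior P(class | voxel)
    return post @ mu_class                      # weighted mu per voxel

mr = np.array([5.0, 250.0, 140.0])
prior = np.array([[0.8, 0.1, 0.1], [0.1, 0.7, 0.2], [0.1, 0.3, 0.6]])
print(pac_map(mr, prior))
```

Because the weights vary continuously with both the atlas and the measured intensity, the resulting μ-map avoids the hard tissue-class boundaries of segmentation-based approaches.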
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lemaire, H.; Barat, E.; Carrel, F.
In this work, we tested maximum likelihood expectation-maximization (MLEM) algorithms optimized for gamma imaging applications on two recent coded-mask gamma cameras. We took advantage of the respective characteristics of the GAMPIX and Caliste HD-based gamma cameras: noise reduction thanks to a mask/anti-mask procedure but limited energy resolution for GAMPIX, and high energy resolution for Caliste HD. One of our short-term perspectives is to test MAPEM algorithms that integrate prior information, adapted to gamma imaging, about the data to be reconstructed. (authors)
Bias Correction for the Maximum Likelihood Estimate of Ability. Research Report. ETS RR-05-15
ERIC Educational Resources Information Center
Zhang, Jinming
2005-01-01
Lord's bias function and the weighted likelihood estimation method are effective in reducing the bias of the maximum likelihood estimate of an examinee's ability under the assumption that the true item parameters are known. This paper presents simulation studies to determine the effectiveness of these two methods in reducing the bias when the item…
Han, Jubong; Lee, K B; Lee, Jong-Man; Park, Tae Soon; Oh, J S; Oh, Pil-Jei
2016-03-01
We discuss a new method to incorporate Type B uncertainty into least-squares procedures. The new method is based on an extension of the likelihood function from which a conventional least-squares function is derived. The extended likelihood function is the product of the original likelihood function with additional PDFs (probability density functions) that characterize the Type B uncertainties. The PDFs are considered to describe one's incomplete knowledge of correction factors, called nuisance parameters. We use the extended likelihood function to make point and interval estimates of parameters in essentially the same way as the least-squares function is used in the conventional least-squares method. Since the nuisance parameters are not of interest and should be prevented from appearing in the final result, we eliminate them by using the profile likelihood. As an example, we present a case study for a linear regression analysis with a common component of Type B uncertainty. In this example we compare the analysis results obtained using our procedure with those from conventional methods. Copyright © 2015. Published by Elsevier Ltd.
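A minimal sketch of the extended-likelihood idea above for a linear fit with a common additive Type B offset: the offset delta is a nuisance parameter with its own Gaussian PDF, and it is eliminated by profiling (here available in closed form) before the regression parameters are estimated. All numbers are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import minimize

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([2.1, 3.9, 6.2, 8.0, 9.8])
sig = np.full_like(y, 0.15)        # Type A (statistical) uncertainties
u_b = 0.10                         # Type B: common offset, std 0.10

def neg_profile_loglik(ab):
    """-log of the extended likelihood with the offset profiled out."""
    a, b = ab
    r = y - (a + b * x)            # residuals before the common offset
    w = 1.0 / sig**2
    # closed-form maximizer of the nuisance offset given (a, b)
    delta = (w * r).sum() / (w.sum() + 1.0 / u_b**2)
    return 0.5 * (((r - delta) / sig) ** 2).sum() + 0.5 * (delta / u_b) ** 2

fit = minimize(neg_profile_loglik, x0=[0.0, 1.0], method="Nelder-Mead")
print("a, b =", fit.x)
```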
Generalized Uncertainty Quantification for Linear Inverse Problems in X-ray Imaging
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fowler, Michael James
2014-04-25
In industrial and engineering applications, X-ray radiography has attained wide use as a data collection protocol for the assessment of material properties in cases where direct observation is not possible. The direct measurement of nuclear materials, particularly when they are under explosive or implosive loading, is not feasible, and radiography can serve as a useful tool for obtaining indirect measurements. In such experiments, high energy X-rays are pulsed through a scene containing material of interest, and a detector records a radiograph by measuring the radiation that is not attenuated in the scene. One approach to the analysis of these radiographs is to model the imaging system as an operator that acts upon the object being imaged to produce a radiograph. In this model, the goal is to solve an inverse problem to reconstruct the values of interest in the object, which are typically material properties such as density or areal density. The primary objective in this work is to provide quantitative solutions with uncertainty estimates for three separate applications in X-ray radiography: deconvolution, Abel inversion, and radiation spot shape reconstruction. For each problem, we introduce a new hierarchical Bayesian model for determining a posterior distribution on the unknowns and develop efficient Markov chain Monte Carlo (MCMC) methods for sampling from the posterior. A Poisson likelihood, based on a noise model for photon counts at the detector, is combined with a prior tailored to each application: an edge-localizing prior for deconvolution; a smoothing prior with non-negativity constraints for spot reconstruction; and a full covariance sampling prior based on a Wishart hyperprior for Abel inversion. After developing our methods in a general setting, we demonstrate each model on both synthetically generated datasets, including those from a well known radiation transport code, and real high energy radiographs taken at two U. S. Department of Energy laboratories.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gang, G; Siewerdsen, J; Stayman, J
Purpose: There has been increasing interest in integrating fluence field modulation (FFM) devices with diagnostic CT scanners for dose reduction purposes. Conventional FFM strategies, however, are often either based on heuristics or on the analysis of filtered-backprojection (FBP) performance. This work investigates a prospective task-driven optimization of FFM for model-based iterative reconstruction (MBIR) in order to improve imaging performance at the same total dose as conventional strategies. Methods: The task-driven optimization framework utilizes an ultra-low-dose 3D scout as a patient-specific anatomical model and a mathematical formulation of the imaging task. The MBIR method investigated is quadratically penalized-likelihood reconstruction. The FFM objective function uses detectability index, d’, computed as a function of the predicted spatial resolution and noise in the image. To optimize performance throughout the object, a maxi-min objective was adopted where the minimum d’ over multiple locations is maximized. To reduce the dimensionality of the problem, FFM is parameterized as a linear combination of 2D Gaussian basis functions over horizontal detector pixels and projection angles. The coefficients of these bases are found using the covariance matrix adaptation evolution strategy (CMA-ES) algorithm. The task-driven design was compared with three other strategies proposed for FBP reconstruction for a calcification cluster discrimination task in an abdomen phantom. Results: The task-driven optimization yielded FFM that was significantly different from those designed for FBP. Comparing all four strategies, the task-based design achieved the highest minimum d’ with an 8-48% improvement, consistent with the maxi-min objective. In addition, d’ was improved to a greater extent over a larger area within the entire phantom. Conclusion: Results from this investigation suggest the need to re-evaluate conventional FFM strategies for MBIR. The task-based optimization framework provides a promising approach that maximizes imaging performance under the same total dose constraint.
Chan, Siew Foong; Deeks, Jonathan J; Macaskill, Petra; Irwig, Les
2008-01-01
To compare three predictive models based on logistic regression for estimating adjusted likelihood ratios that allow for interdependency between diagnostic variables (tests). This study was a review of the theoretical basis, assumptions, and limitations of published models, and a statistical extension of methods and application to a case study of the diagnosis of obstructive airways disease based on history and clinical examination. Albert's method includes an offset term to estimate an adjusted likelihood ratio for combinations of tests. The Spiegelhalter and Knill-Jones method uses the unadjusted likelihood ratio for each test as a predictor and computes shrinkage factors to allow for interdependence. Knottnerus' method differs from the other methods because it requires sequencing of tests, which limits its application to situations where there are few tests and substantial data. Although parameter estimates differed between the models, predicted "posttest" probabilities were generally similar. Construction of predictive models using logistic regression is preferred to the independence Bayes' approach when it is important to adjust for dependency of test errors. Methods to estimate adjusted likelihood ratios from predictive models should be considered in preference to a standard logistic regression model to facilitate ease of interpretation and application. Albert's method provides the most straightforward approach.
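A minimal sketch in the flavor of the Spiegelhalter and Knill-Jones method compared above: each test contributes its unadjusted log-likelihood-ratio, a shrinkage factor down-weights that contribution to allow for interdependence among tests, and the shrunken sum updates the pretest log-odds. The shrinkage values here are illustrative, not fitted.

```python
import math

def posttest_prob(pretest_prob, log_lrs, shrinkage):
    """Update pretest odds with shrunken log-likelihood-ratios."""
    logit = math.log(pretest_prob / (1.0 - pretest_prob))
    logit += sum(s * llr for s, llr in zip(shrinkage, log_lrs))
    return 1.0 / (1.0 + math.exp(-logit))

# two positive findings with unadjusted LRs 3.5 and 2.0,
# shrunk because the findings are correlated
log_lrs = [math.log(3.5), math.log(2.0)]
print(posttest_prob(0.25, log_lrs, shrinkage=[0.7, 0.6]))
```

With all shrinkage factors set to 1 this reduces to the independence Bayes' approach; fitting them from data is what corrects for dependent test errors.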
NASA Technical Reports Server (NTRS)
Mareboyana, Manohar; Le Moigne-Stewart, Jacqueline; Bennett, Jerome
2016-01-01
In this paper, we demonstrate a simple algorithm that projects low resolution (LR) images differing in subpixel shifts onto a high resolution (HR), also called super resolution (SR), grid. The algorithm is very effective in both accuracy and time efficiency. A number of spatial interpolation techniques used in the projection, such as nearest neighbor, inverse-distance weighted averages, and Radial Basis Functions (RBF), yield comparable results. For best accuracy, reconstructing an SR image by a factor of two requires four LR images differing in four independent subpixel shifts. The algorithm has two steps: (i) registration of the low resolution images and (ii) shifting the low resolution images to align with the reference image and projecting them onto the high resolution grid, based on the shifts of each low resolution image, using different interpolation techniques. Experiments are conducted by simulating low resolution images through subpixel shifts and subsampling of an original high resolution image and then reconstructing the high resolution image from the simulated low resolution images. Reconstruction accuracy is compared using the mean squared error between the original high resolution image and the reconstructed image. The algorithm was tested on remote sensing images and found to outperform previously proposed techniques such as the Iterative Back Projection (IBP), Maximum Likelihood (ML), and Maximum a Posteriori (MAP) algorithms. The algorithm is robust and is not overly sensitive to registration inaccuracies.
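A minimal sketch of the two-step scheme described above, assuming the subpixel shifts are already known (step i, registration, is skipped): each LR pixel is placed at its shifted location on the HR grid and the HR image is recovered by nearest-neighbor interpolation of the scattered samples. The 2x factor with four half-pixel-shifted frames mirrors the best-accuracy case stated above; the phantom is synthetic.

```python
import numpy as np
from scipy.interpolate import griddata

rng = np.random.default_rng(8)

hr = rng.random((32, 32))                                   # 'truth' for simulation
shifts = [(0.0, 0.0), (0.0, 0.5), (0.5, 0.0), (0.5, 0.5)]   # (dx, dy) in LR pixels

# simulate LR frames: shift, then subsample by 2 in each direction
lr_frames = [hr[int(2 * dy)::2, int(2 * dx)::2] for dx, dy in shifts]

# project: scatter every LR sample onto its true HR-grid coordinate
points, values = [], []
for (dx, dy), frame in zip(shifts, lr_frames):
    ii, jj = np.mgrid[0:16, 0:16]
    points.append(np.column_stack([(2 * ii + 2 * dy).ravel(),
                                   (2 * jj + 2 * dx).ravel()]))
    values.append(frame.ravel())
points, values = np.vstack(points), np.concatenate(values)

gy, gx = np.mgrid[0:32, 0:32]
sr = griddata(points, values, (gy, gx), method="nearest")   # HR reconstruction
print(np.abs(sr - hr).mean())                               # ~0: exact coverage
```

With these four shifts the scattered samples tile the HR grid exactly, so the error is essentially zero; inverse-distance or RBF interpolation would take over when the shifts are less cooperative.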
A CLASS OF RECONSTRUCTED DISCONTINUOUS GALERKIN METHODS IN COMPUTATIONAL FLUID DYNAMICS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hong Luo; Yidong Xia; Robert Nourgaliev
2011-05-01
A class of reconstructed discontinuous Galerkin (DG) methods is presented to solve compressible flow problems on arbitrary grids. The idea is to combine the efficiency of the reconstruction methods in finite volume methods and the accuracy of the DG methods to obtain a better numerical algorithm in computational fluid dynamics. The beauty of the resulting reconstructed discontinuous Galerkin (RDG) methods is that they provide a unified formulation for both finite volume and DG methods, and contain both classical finite volume and standard DG methods as two special cases of the RDG methods, and thus allow for a direct efficiency comparison. Both Green-Gauss and least-squares reconstruction methods and a least-squares recovery method are presented to obtain a quadratic polynomial representation of the underlying linear discontinuous Galerkin solution on each cell via a so-called in-cell reconstruction process. The devised in-cell reconstruction is aimed to augment the accuracy of the discontinuous Galerkin method by increasing the order of the underlying polynomial solution. These three reconstructed discontinuous Galerkin methods are used to compute a variety of compressible flow problems on arbitrary meshes to assess their accuracy. The numerical experiments demonstrate that all three reconstructed discontinuous Galerkin methods can significantly improve the accuracy of the underlying second-order DG method, and the least-squares reconstructed DG method provides the best performance in terms of accuracy, efficiency, and robustness.
NASA Astrophysics Data System (ADS)
Yee, Eugene
2007-04-01
Although a great deal of research effort has been focused on the forward prediction of the dispersion of contaminants (e.g., chemical and biological warfare agents) released into the turbulent atmosphere, much less work has been directed toward the inverse prediction of agent source location and strength from the measured concentration, even though the importance of this problem for a number of practical applications is obvious. In general, the inverse problem of source reconstruction is ill-posed and unsolvable without additional information. It is demonstrated that a Bayesian probabilistic inferential framework provides a natural and logically consistent method for source reconstruction from a limited number of noisy concentration data. In particular, the Bayesian approach permits one to incorporate prior knowledge about the source as well as additional information regarding both model and data errors. The latter enables a rigorous determination of the uncertainty in the inference of the source parameters (e.g., spatial location, emission rate, release time, etc.), hence extending the potential of the methodology as a tool for quantitative source reconstruction. A model (or, source-receptor relationship) that relates the source distribution to the concentration data measured by a number of sensors is formulated, and Bayesian probability theory is used to derive the posterior probability density function of the source parameters. A computationally efficient methodology for determination of the likelihood function for the problem, based on an adjoint representation of the source-receptor relationship, is described. Furthermore, we describe the application of efficient stochastic algorithms based on Markov chain Monte Carlo (MCMC) for sampling from the posterior distribution of the source parameters, the latter of which is required to undertake the Bayesian computation. The Bayesian inferential methodology for source reconstruction is validated against real dispersion data for two cases involving contaminant dispersion in highly disturbed flows over urban and complex environments where the idealizations of horizontal homogeneity and/or temporal stationarity in the flow cannot be applied to simplify the problem. Furthermore, the methodology is applied to the case of reconstruction of multiple sources.
Description and properties of a resistive network applied to emission tomography detector readouts
NASA Astrophysics Data System (ADS)
Boisson, F.; Bekaert, V.; Sahr, J.; Brasse, D.
2017-11-01
Over the last twenty years, PET systems have used discrete crystal detector modules coupled to multi-channel photodetectors, mostly to improve the spatial resolution. Although reading each readout channel individually would be of great interest, the costs associated with the electronics would, in most cases, be prohibitive. It is therefore essential to propose lower-cost solutions that do not degrade the overall system's performance. One possible solution to reduce the development costs of a PET system without degrading performance is the use of a resistive network, which reduces the total number of readout channels. In this study, we present a symmetric charge division resistive network and associated software methods to assess the performance of a PET detector. Our approach consists in keeping the n lines and n columns of information provided by a symmetric charge division circuit (SCD). We provide equations for the output currents of the network, which enable estimation of the charge. We propose a novel approach to reconstruct the charge distribution from the line and column projections using a maximum likelihood expectation maximization (MLEM) approach which takes the non-uniformity of the photodetector channel gains into account. We also introduce a mathematical proof of the relation between the sigma of the reconstructed charge distribution and the ratio between the line of interest (maximum value) and the background signal charges. To the best of our knowledge, this is the first study reporting these equations. Preliminary results obtained with a resistive network used in the readout of a monolithic 50 × 50 × 8 mm³ LYSO crystal coupled to a H9500 PMT validated the effectiveness of the reconstructed charge distribution to optimize both the x and y spatial resolution and the energy resolution. We obtained a mean x and y spatial resolution of 1.10 mm FWHM and a 14.7% energy resolution by calculating the integral of the reconstructed charge distribution. Finally, the relation between the ratio and the sigma of the reconstructed charge distribution may provide new opportunities in terms of depth-of-interaction estimation when using a monolithic crystal coupled to a multi-channel photodetector.
Rodriguez, Juanita; Pitts, James P; Florez, Jaime A; Bond, Jason E; von Dohlen, Carol D
2016-01-01
Pompilinae is one of the largest subfamilies of spider wasps (Pompilidae). Most pompilines are generalist spider predators at the family level, but some taxa exhibit ecological specificity (i.e., to a spider-host guild). Here we present the first molecular phylogenetic analysis of Pompilinae, with the aim of evaluating the monophyly of tribes and genera. We further test whether changes in the rate of diversification are associated with host-guild shifts. Molecular data were collected from five nuclear loci (28S, EF1-F2, LWRh, Wg, Pol2) for 76 taxa in 39 genera. Data were analyzed using maximum likelihood (ML) and Bayesian inference (BI). The phylogenetic results were compared with previous hypotheses of subfamilial and tribal classification, as well as generic relationships in the subfamily. The classification of Pompilus and Agenioideus is also discussed. A Bayesian relaxed molecular clock analysis was used to estimate divergence times. Diversification rate-shift tests accounted for taxon-sampling bias using ML and BI approaches. Ancestral host family and host guild were reconstructed using MP and ML methods. The ancestral host guild was inferred using BI for all Pompilinae, for the ancestor at the node where a diversification rate-shift was detected, and for two further nodes back in time. In the resulting phylogenies, Aporini was the only previously proposed tribe recovered as monophyletic. Several genera (e.g., Pompilus, Microphadnus and Schistonyx) are also not monophyletic. Dating analyses produced a well-supported chronogram consistent with topologies from the BI and ML results. The BI ancestral host-use reconstruction inferred the use of spiders belonging to the guild "other hunters" (frequenting the ground and vegetation) as the ancestral state for Pompilinae. This guild had the highest probability in the ML reconstruction and was equivocal in the MP reconstruction; various switching events to other guilds occurred throughout the evolution of the group. The diversification of Pompilinae shows one main rate-shift coinciding with a shift to ground-hunter spiders, as reconstructed by the BI ancestral character-state analysis. Copyright © 2015 Elsevier Inc. All rights reserved.
Timmermans, Floyd W; Westland, Pèdrou B; Hummelink, Stefan; Schreurs, Joep; Hameeteman, Marijn; Ulrich, Dietmar J O; Slater, Nicholas J
2018-06-01
The deep inferior epigastric artery perforator (DIEP) flap is one of the most common techniques for breast reconstruction. Body mass index (BMI) is considered an important predictor of donor-site healing complications such as wound dehiscence. The use of computed tomography (CT) has proved to be a precise and objective method to assess visceral adipose tissue. It remains unclear whether quantification of visceral fat provides more accurate predictions of abdominal wound healing complications than BMI. A total of 97 patients with DIEP flaps were retrospectively evaluated. Patients' abdominal visceral fat (AVF) was quantified on CT angiography (CTA). The patients were postoperatively assessed for abdominal wound healing complications. We analyzed the correlations between AVF, BMI, and dehiscence and established a logistic regression model to assess candidate predictors among anatomic and patient characteristics such as weight, smoking, and diabetes. Of the 97 included patients, 24 (24.7%) had some degree of abdominal dehiscence. No significant differences were observed between the dehiscence group and the non-dehiscence group, except for smoking (p = 0.002). We found a significant correlation between AVF and BMI (R = 0.282, p = 0.005), but neither was significant in predicting donor-site dehiscence. Smoking greatly increased the likelihood of developing wound dehiscence (OR = 11.4, p < 0.001). AVF and BMI were not significant predictors of abdominal wound healing complications after DIEP flap reconstruction. This study established active smoking (OR = 11.4, p < 0.001) as the significant risk factor contributing to the development of abdominal wound dehiscence in patients with DIEP flaps. Copyright © 2018 British Association of Plastic, Reconstructive and Aesthetic Surgeons. Published by Elsevier Ltd. All rights reserved.
Korall, Petra; Pryer, Kathleen M
2014-01-01
Aim: Scaly tree ferns, Cyatheaceae, are a well-supported group of mostly tree-forming ferns found throughout the tropics, the subtropics and the south-temperate zone. Fossil evidence shows that the lineage originated in the Late Jurassic period. We reconstructed large-scale historical biogeographical patterns of Cyatheaceae and tested the hypothesis that some of the observed distribution patterns are in fact compatible, in time and space, with a vicariance scenario related to the break-up of Gondwana.
Location: Tropics, subtropics and south-temperate areas of the world.
Methods: The historical biogeography of Cyatheaceae was analysed in a maximum likelihood framework using Lagrange. The 78 ingroup taxa are representative of the geographical distribution of the entire family. The phylogenies that served as a basis for the analyses were obtained by Bayesian inference analyses of mainly previously published DNA sequence data using MrBayes. Lineage divergence dates were estimated in a Bayesian Markov chain Monte Carlo framework using BEAST.
Results: Cyatheaceae originated in the Late Jurassic in either South America or Australasia. Following a range expansion, the ancestral distribution of the marginate-scaled clade included both these areas, whereas Sphaeropteris is reconstructed as having its origin only in Australasia. Within the marginate-scaled clade, reconstructions of early divergences are hampered by the unresolved relationships among the Alsophila, Cyathea and Gymnosphaera lineages. Nevertheless, it is clear that the occurrence of the Cyathea and Sphaeropteris lineages in South America may be related to vicariance, whereas transoceanic dispersal needs to be inferred for the range shifts seen in Alsophila and Gymnosphaera.
Main conclusions: The evolutionary history of Cyatheaceae involves both Gondwanan vicariance scenarios and long-distance dispersal events. The number of transoceanic dispersals reconstructed for the family is rather small compared with other fern lineages. We suggest that a causal relationship between reproductive mode (outcrossing) and dispersal limitation is the most plausible explanation for the pattern observed. PMID:25435648
NASA Astrophysics Data System (ADS)
Neuer, Marcus J.
2013-11-01
A technique for the spectral identification of strontium-90 is shown, utilising a maximum-likelihood deconvolution. Different deconvolution approaches are discussed and summarised. Based on the intensity distribution of the beta emission and Geant4 simulations, a combined response matrix is derived, tailored to the β⁻ detection process in sodium iodide detectors. It includes scattering effects and attenuation by applying a base material decomposition extracted from Geant4 simulations with a CAD model of a realistic detector system. Inversion results for measured spectra show the agreement between deconvolution and reconstruction. A detailed investigation with additional masking sources such as ⁴⁰K, ²²⁶Ra and ¹³¹I shows that a strontium contamination can be found in the presence of these nuisance sources. Identification algorithms for strontium are presented based on the derived technique. For the implementation of blind identification, an exemplary masking ratio is calculated.
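The maximum-likelihood deconvolution step can be sketched as a Richardson-Lucy (EM) iteration with a generic response matrix R; the matrix and spectrum below are placeholders, not the Geant4-derived response.

```python
import numpy as np

# Richardson-Lucy / EM deconvolution of a measured spectrum y given a
# response matrix R, where R[i, j] = P(count in measured channel i |
# emission in true channel j).
def ml_deconvolve(y, R, n_iter=500):
    x = np.full(R.shape[1], y.sum() / R.shape[1])   # flat starting spectrum
    sens = R.sum(axis=0)                            # per-bin detection efficiency
    for _ in range(n_iter):
        yhat = R @ x                                # forward-folded estimate
        x *= (R.T @ (y / np.maximum(yhat, 1e-12))) / np.maximum(sens, 1e-12)
    return x
```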
Multiconstrained gene clustering based on generalized projections
2010-01-01
Background: Gene clustering for annotating gene functions is one of the fundamental issues in bioinformatics. The best clustering solution is often regularized by multiple constraints such as gene expressions, Gene Ontology (GO) annotations and gene network structures. How to integrate multiple constraints for an optimal clustering solution still remains an unsolved problem. Results: We propose a novel multiconstrained gene clustering (MGC) method within the generalized projection onto convex sets (POCS) framework widely used in image reconstruction. Each constraint is formulated as a corresponding set. The generalized projector iteratively projects the clustering solution onto these sets in order to find a consistent solution included in the intersection set that satisfies all constraints. Compared with previous MGC methods, POCS can integrate multiple constraints of a different nature without distorting the original constraints. To evaluate the clustering solution, we also propose a new performance measure, referred to as Gene Log Likelihood (GLL), that considers genes having more than one function and hence belonging to more than one cluster. Comparative experimental results show that our POCS-based gene clustering method outperforms current state-of-the-art MGC methods. Conclusions: The POCS-based MGC method can successfully combine multiple constraints of a different nature for gene clustering. The proposed GLL is also an effective performance measure for soft clustering solutions. PMID:20356386
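A minimal POCS sketch under illustrative assumptions: two convex sets, an affine equality set (projected in closed form, assuming full row rank) and a box, stand in for the expression, GO-annotation and network constraint sets.

```python
import numpy as np

# Alternately project onto each convex constraint set until the iterate
# lands in (or near) their intersection.
def project_affine(x, A, b):
    # projection onto {x : A x = b}, assuming A has full row rank
    return x - A.T @ np.linalg.solve(A @ A.T, A @ x - b)

def project_box(x, lo, hi):
    return np.clip(x, lo, hi)

def pocs(x0, A, b, lo, hi, n_iter=100):
    x = x0.copy()
    for _ in range(n_iter):
        x = project_box(project_affine(x, A, b), lo, hi)
    return x

# Toy usage: cluster memberships constrained to sum to 1 and lie in [0, 1].
A = np.array([[1.0, 1.0, 1.0]])
b = np.array([1.0])
x = pocs(np.array([0.9, 0.4, -0.1]), A, b, 0.0, 1.0, n_iter=50)
```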
The orbital PDF: general inference of the gravitational potential from steady-state tracers
NASA Astrophysics Data System (ADS)
Han, Jiaxin; Wang, Wenting; Cole, Shaun; Frenk, Carlos S.
2016-02-01
We develop two general methods to infer the gravitational potential of a system using steady-state tracers, i.e. tracers with a time-independent phase-space distribution. Combined with the phase-space continuity equation, the time independence implies a universal orbital probability density function (oPDF) dP(λ|orbit) ∝ dt, where λ is the coordinate of the particle along the orbit. The oPDF is equivalent to the Jeans theorem, and is the key physical ingredient behind most dynamical modelling of steady-state tracers. In the case of a spherical potential, we develop a likelihood estimator that fits analytical potentials to the system, and a non-parametric method (`phase-mark') that reconstructs the potential profile, both assuming only the oPDF. The methods involve no extra assumptions about the tracer distribution function and can be applied to tracers with any arbitrary distribution of orbits, with possible extension to non-spherical potentials. The methods are tested on Monte Carlo samples of steady-state tracers in dark matter haloes and shown to be unbiased as well as efficient. A fully documented C/PYTHON code implementing our method is freely available at a GitHub repository linked from http://icc.dur.ac.uk/data/#oPDF.
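A sketch of the oPDF itself for a radial orbit: since dP ∝ dt, the radial density is p(r) ∝ 1/|v_r(r)| between pericenter and apocenter. A point-mass potential with unit GM is assumed for illustration.

```python
import numpy as np

# Orbital PDF in radius for one orbit of given energy E and angular
# momentum L in a point-mass potential Phi(r) = -GM/r.
def radial_opdf(r, E, L, GM=1.0):
    v2 = 2.0 * (E + GM / r) - (L / r) ** 2       # v_r^2 from energy conservation
    v2 = np.maximum(v2, 0.0)
    with np.errstate(divide="ignore"):
        p = np.where(v2 > 0, 1.0 / np.sqrt(v2), 0.0)   # dP/dr ∝ 1/|v_r|
    p /= p.sum() * (r[1] - r[0])                  # normalize to a PDF
    return p

r = np.linspace(0.5, 1.6, 400)                    # brackets peri/apocenter here
p = radial_opdf(r, E=-0.5, L=0.9)
```

Evaluating this density for tracer radii as a function of the assumed potential parameters is what turns the oPDF into a likelihood.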
Yamaguchi, M; Miya, M; Okiyama, M; Nishida, M
2000-04-01
Larvae of the deep-sea lanternfish genus Hygophum (Myctophidae) exhibit a remarkable morphological diversity that is quite unexpected, considering their homogeneous adult morphology. In an attempt to elucidate the evolutionary patterns of such larval morphological diversity, nucleotide sequences of a portion of the mitochondrially encoded 16S ribosomal RNA gene were determined for seven Hygophum species and three outgroup taxa. Secondary structure-based alignment resulted in a character matrix consisting of 1172 bp of unambiguously aligned sequences, which were subjected to phylogenetic analyses using maximum-parsimony, maximum-likelihood, and neighbor-joining methods. The resultant tree topologies from the three methods were congruent, with most nodes, including that of the genus Hygophum, being strongly supported by various tree statistics. The most parsimonious reconstruction of the three previously recognized, distinct larval morphs onto the molecular phylogeny revealed that one of the morphs had originated as the common ancestor of the genus, the other two having diversified separately in two subsequent major clades. The patterns of such diversification are discussed in terms of the unusual larval eye morphology and geographic distribution. Copyright 2000 Academic Press.
NASA Astrophysics Data System (ADS)
Khachatryan, V.; Sirunyan, A. M.; Tumasyan, A.; Adam, W.; Bergauer, T.; Dragicevic, M.; Erö, J.; Friedl, M.; Frühwirth, R.; Ghete, V. M.; Hartl, C.; Hörmann, N.; Hrubec, J.; Jeitler, M.; Kiesenhofer, W.; Knünz, V.; Krammer, M.; Krätschmer, I.; Liko, D.; Mikulec, I.; Rabady, D.; Rahbaran, B.; Rohringer, H.; Schöfbeck, R.; Strauss, J.; Treberer-Treberspurg, W.; Waltenberger, W.; Wulz, C.-E.; Mossolov, V.; Shumeiko, N.; Suarez Gonzalez, J.; Alderweireldt, S.; Bansal, S.; Cornelis, T.; De Wolf, E. A.; Janssen, X.; Knutsson, A.; Lauwers, J.; Luyckx, S.; Ochesanu, S.; Rougny, R.; Van De Klundert, M.; Van Haevermaet, H.; Van Mechelen, P.; Van Remortel, N.; Van Spilbeeck, A.; Blekman, F.; Blyweert, S.; D'Hondt, J.; Daci, N.; Heracleous, N.; Keaveney, J.; Lowette, S.; Maes, M.; Olbrechts, A.; Python, Q.; Strom, D.; Tavernier, S.; Van Doninck, W.; Van Mulders, P.; Van Onsem, G. P.; Villella, I.; Caillol, C.; Clerbaux, B.; De Lentdecker, G.; Dobur, D.; Favart, L.; Gay, A. P. R.; Grebenyuk, A.; Léonard, A.; Mohammadi, A.; Perniè, L.; Randle-conde, A.; Reis, T.; Seva, T.; Thomas, L.; Vander Velde, C.; Vanlaer, P.; Wang, J.; Zenoni, F.; Adler, V.; Beernaert, K.; Benucci, L.; Cimmino, A.; Costantini, S.; Crucy, S.; Fagot, A.; Garcia, G.; Mccartin, J.; Ocampo Rios, A. A.; Poyraz, D.; Ryckbosch, D.; Salva Diblen, S.; Sigamani, M.; Strobbe, N.; Thyssen, F.; Tytgat, M.; Yazgan, E.; Zaganidis, N.; Basegmez, S.; Beluffi, C.; Bruno, G.; Castello, R.; Caudron, A.; Ceard, L.; Da Silveira, G. G.; Delaere, C.; du Pree, T.; Favart, D.; Forthomme, L.; Giammanco, A.; Hollar, J.; Jafari, A.; Jez, P.; Komm, M.; Lemaitre, V.; Nuttens, C.; Pagano, D.; Perrini, L.; Pin, A.; Piotrzkowski, K.; Popov, A.; Quertenmont, L.; Selvaggi, M.; Vidal Marono, M.; Vizan Garcia, J. M.; Beliy, N.; Caebergs, T.; Daubie, E.; Hammad, G. H.; Júnior, W. L. Aldá; Alves, G. A.; Brito, L.; Correa Martins Junior, M.; Martins, T. Dos Reis; Molina, J.; Mora Herrera, C.; Pol, M. E.; Rebello Teles, P.; Carvalho, W.; Chinellato, J.; Custódio, A.; Da Costa, E. M.; De Jesus Damiao, D.; De Oliveira Martins, C.; Fonseca De Souza, S.; Malbouisson, H.; Matos Figueiredo, D.; Mundim, L.; Nogima, H.; Prado Da Silva, W. L.; Santaolalla, J.; Santoro, A.; Sznajder, A.; Tonelli Manganote, E. J.; Vilela Pereira, A.; Bernardes, C. A.; Dogra, S.; Fernandez Perez Tomei, T. R.; Gregores, E. M.; Mercadante, P. G.; Novaes, S. F.; Padula, Sandra S.; Aleksandrov, A.; Genchev, V.; Hadjiiska, R.; Iaydjiev, P.; Marinov, A.; Piperov, S.; Rodozov, M.; Stoykova, S.; Sultanov, G.; Vutova, M.; Dimitrov, A.; Glushkov, I.; Litov, L.; Pavlov, B.; Petkov, P.; Bian, J. G.; Chen, G. M.; Chen, H. S.; Chen, M.; Cheng, T.; Du, R.; Jiang, C. H.; Plestina, R.; Romeo, F.; Tao, J.; Wang, Z.; Asawatangtrakuldee, C.; Ban, Y.; Liu, S.; Mao, Y.; Qian, S. J.; Wang, D.; Xu, Z.; Zhang, F.; Zhang, L.; Zou, W.; Avila, C.; Cabrera, A.; Chaparro Sierra, L. F.; Florez, C.; Gomez, J. P.; Gomez Moreno, B.; Sanabria, J. C.; Godinovic, N.; Lelas, D.; Polic, D.; Puljak, I.; Antunovic, Z.; Kovac, M.; Brigljevic, V.; Kadija, K.; Luetic, J.; Mekterovic, D.; Sudic, L.; Attikis, A.; Mavromanolakis, G.; Mousa, J.; Nicolaou, C.; Ptochos, F.; Razis, P. A.; Rykaczewski, H.; Bodlak, M.; Finger, M.; Finger, M.; Assran, Y.; Ellithi Kamel, A.; Mahmoud, M. 
A.; Radi, A.; Kadastik, M.; Murumaa, M.; Raidal, M.; Tiko, A.; Eerola, P.; Voutilainen, M.; Härkönen, J.; Karimäki, V.; Kinnunen, R.; Lampén, T.; Lassila-Perini, K.; Lehti, S.; Lindén, T.; Luukka, P.; Mäenpää, T.; Peltola, T.; Tuominen, E.; Tuominiemi, J.; Tuovinen, E.; Wendland, L.; Talvitie, J.; Tuuva, T.; Besancon, M.; Couderc, F.; Dejardin, M.; Denegri, D.; Fabbro, B.; Faure, J. L.; Favaro, C.; Ferri, F.; Ganjour, S.; Givernaud, A.; Gras, P.; Hamel de Monchenault, G.; Jarry, P.; Locci, E.; Malcles, J.; Rander, J.; Rosowsky, A.; Titov, M.; Baffioni, S.; Beaudette, F.; Busson, P.; Chapon, E.; Charlot, C.; Dahms, T.; Dobrzynski, L.; Filipovic, N.; Florent, A.; Granier de Cassagnac, R.; Mastrolorenzo, L.; Miné, P.; Naranjo, I. N.; Nguyen, M.; Ochando, C.; Ortona, G.; Paganini, P.; Regnard, S.; Salerno, R.; Sauvan, J. B.; Sirois, Y.; Veelken, C.; Yilmaz, Y.; Zabi, A.; Agram, J.-L.; Andrea, J.; Aubin, A.; Bloch, D.; Brom, J.-M.; Chabert, E. C.; Chanon, N.; Collard, C.; Conte, E.; Fontaine, J.-C.; Gelé, D.; Goerlach, U.; Goetzmann, C.; Le Bihan, A.-C.; Skovpen, K.; Van Hove, P.; Gadrat, S.; Beauceron, S.; Beaupere, N.; Bernet, C.; Boudoul, G.; Bouvier, E.; Brochet, S.; Carrillo Montoya, C. A.; Chasserat, J.; Chierici, R.; Contardo, D.; Courbon, B.; Depasse, P.; El Mamouni, H.; Fan, J.; Fay, J.; Gascon, S.; Gouzevitch, M.; Ille, B.; Kurca, T.; Lethuillier, M.; Mirabito, L.; Pequegnot, A. L.; Perries, S.; Ruiz Alvarez, J. D.; Sabes, D.; Sgandurra, L.; Sordini, V.; Vander Donckt, M.; Verdier, P.; Viret, S.; Xiao, H.; Tsamalaidze, Z.; Autermann, C.; Beranek, S.; Bontenackels, M.; Edelhoff, M.; Feld, L.; Heister, A.; Klein, K.; Lipinski, M.; Ostapchuk, A.; Preuten, M.; Raupach, F.; Sammet, J.; Schael, S.; Schulte, J. F.; Weber, H.; Wittmer, B.; Zhukov, V.; Ata, M.; Brodski, M.; Dietz-Laursonn, E.; Duchardt, D.; Erdmann, M.; Fischer, R.; Güth, A.; Hebbeker, T.; Heidemann, C.; Hoepfner, K.; Klingebiel, D.; Knutzen, S.; Kreuzer, P.; Merschmeyer, M.; Meyer, A.; Mittag, G.; Millet, P.; Olschewski, M.; Padeken, K.; Papacz, P.; Reithler, H.; Schmitz, S. A.; Sonnenschein, L.; Teyssier, D.; Thüer, S.; Cherepanov, V.; Erdogan, Y.; Flügge, G.; Geenen, H.; Geisler, M.; Haj Ahmad, W.; Hoehle, F.; Kargoll, B.; Kress, T.; Kuessel, Y.; Künsken, A.; Lingemann, J.; Nowack, A.; Nugent, I. M.; Pistone, C.; Pooth, O.; Stahl, A.; Aldaya Martin, M.; Asin, I.; Bartosik, N.; Behr, J.; Behrens, U.; Bell, A. J.; Bethani, A.; Borras, K.; Burgmeier, A.; Cakir, A.; Calligaris, L.; Campbell, A.; Choudhury, S.; Costanza, F.; Diez Pardos, C.; Dolinska, G.; Dooling, S.; Dorland, T.; Eckerlin, G.; Eckstein, D.; Eichhorn, T.; Flucke, G.; Garcia, J. Garay; Geiser, A.; Gizhko, A.; Gunnellini, P.; Hauk, J.; Hempel, M.; Jung, H.; Kalogeropoulos, A.; Karacheban, O.; Kasemann, M.; Katsas, P.; Kieseler, J.; Kleinwort, C.; Korol, I.; Krücker, D.; Lange, W.; Leonard, J.; Lipka, K.; Lobanov, A.; Lohmann, W.; Lutz, B.; Mankel, R.; Marfin, I.; Melzer-Pellmann, I.-A.; Meyer, A. B.; Mnich, J.; Mussgiller, A.; Naumann-Emme, S.; Nayak, A.; Ntomari, E.; Perrey, H.; Pitzl, D.; Placakyte, R.; Raspereza, A.; Ribeiro Cipriano, P. M.; Roland, B.; Ron, E.; Sahin, M. Ö.; Salfeld-Nebgen, J.; Saxena, P.; Schoerner-Sadenius, T.; Schröder, M.; Seitz, C.; Spannagel, S.; Vargas Trevino, A. D. R.; Walsh, R.; Wissing, C.; Blobel, V.; Centis Vignali, M.; Draeger, A. R.; Erfle, J.; Garutti, E.; Goebel, K.; Görner, M.; Haller, J.; Hoffmann, M.; Höing, R. 
S.; Junkes, A.; Kirschenmann, H.; Klanner, R.; Kogler, R.; Lapsien, T.; Lenz, T.; Marchesini, I.; Marconi, D.; Nowatschin, D.; Ott, J.; Peiffer, T.; Perieanu, A.; Pietsch, N.; Poehlsen, J.; Poehlsen, T.; Rathjens, D.; Sander, C.; Schettler, H.; Schleper, P.; Schlieckau, E.; Schmidt, A.; Seidel, M.; Sola, V.; Stadie, H.; Steinbrück, G.; Troendle, D.; Usai, E.; Vanelderen, L.; Vanhoefer, A.; Akbiyik, M.; Barth, C.; Baus, C.; Berger, J.; Böser, C.; Butz, E.; Chwalek, T.; De Boer, W.; Descroix, A.; Dierlamm, A.; Feindt, M.; Frensch, F.; Giffels, M.; Gilbert, A.; Hartmann, F.; Hauth, T.; Husemann, U.; Katkov, I.; Kornmayer, A.; Lobelle Pardo, P.; Mozer, M. U.; Müller, T.; Müller, Th.; Nürnberg, A.; Quast, G.; Rabbertz, K.; Röcker, S.; Simonis, H. J.; Stober, F. M.; Ulrich, R.; Wagner-Kuhr, J.; Wayand, S.; Weiler, T.; Wöhrmann, C.; Wolf, R.; Anagnostou, G.; Daskalakis, G.; Geralis, T.; Giakoumopoulou, V. A.; Kyriakis, A.; Loukas, D.; Markou, A.; Markou, C.; Psallidas, A.; Topsis-Giotis, I.; Agapitos, A.; Kesisoglou, S.; Panagiotou, A.; Saoulidou, N.; Stiliaris, E.; Tziaferi, E.; Aslanoglou, X.; Evangelou, I.; Flouris, G.; Foudas, C.; Kokkas, P.; Manthos, N.; Papadopoulos, I.; Strologas, J.; Paradas, E.; Bencze, G.; Hajdu, C.; Hidas, P.; Horvath, D.; Sikler, F.; Veszpremi, V.; Vesztergombi, G.; Zsigmond, A. J.; Beni, N.; Czellar, S.; Karancsi, J.; Molnar, J.; Palinkas, J.; Szillasi, Z.; Makovec, A.; Raics, P.; Trocsanyi, Z. L.; Ujvari, B.; Swain, S. K.; Beri, S. B.; Bhatnagar, V.; Gupta, R.; Bhawandeep, U.; Kalsi, A. K.; Kaur, M.; Kumar, R.; Mittal, M.; Nishu, N.; Singh, J. B.; Kumar, Ashok; Kumar, Arun; Ahuja, S.; Bhardwaj, A.; Choudhary, B. C.; Kumar, A.; Malhotra, S.; Naimuddin, M.; Ranjan, K.; Sharma, V.; Banerjee, S.; Bhattacharya, S.; Chatterjee, K.; Dutta, S.; Gomber, B.; Jain, Sa.; Jain, Sh.; Khurana, R.; Modak, A.; Mukherjee, S.; Roy, D.; Sarkar, S.; Sharan, M.; Abdulsalam, A.; Dutta, D.; Kumar, V.; Mohanty, A. K.; Pant, L. M.; Shukla, P.; Topkar, A.; Aziz, T.; Banerjee, S.; Bhowmik, S.; Chatterjee, R. M.; Dewanjee, R. K.; Dugad, S.; Ganguly, S.; Ghosh, S.; Guchait, M.; Gurtu, A.; Kole, G.; Kumar, S.; Maity, M.; Majumder, G.; Mazumdar, K.; Mohanty, G. B.; Parida, B.; Sudhakar, K.; Wickramage, N.; Sharma, S.; Bakhshiansohi, H.; Behnamian, H.; Etesami, S. M.; Fahim, A.; Goldouzian, R.; Khakzad, M.; Mohammadi Najafabadi, M.; Naseri, M.; Paktinat Mehdiabadi, S.; Rezaei Hosseinabadi, F.; Safarzadeh, B.; Zeinali, M.; Felcini, M.; Grunewald, M.; Abbrescia, M.; Calabria, C.; Chhibra, S. S.; Colaleo, A.; Creanza, D.; Cristella, L.; De Filippis, N.; De Palma, M.; Fiore, L.; Iaselli, G.; Maggi, G.; Maggi, M.; My, S.; Nuzzo, S.; Pompili, A.; Pugliese, G.; Radogna, R.; Selvaggi, G.; Sharma, A.; Silvestris, L.; Venditti, R.; Verwilligen, P.; Abbiendi, G.; Benvenuti, A. C.; Bonacorsi, D.; Braibant-Giacomelli, S.; Brigliadori, L.; Campanini, R.; Capiluppi, P.; Castro, A.; Cavallo, F. R.; Codispoti, G.; Cuffiani, M.; Dallavalle, G. M.; Fabbri, F.; Fanfani, A.; Fasanella, D.; Giacomelli, P.; Grandi, C.; Guiducci, L.; Marcellini, S.; Masetti, G.; Montanari, A.; Navarria, F. L.; Perrotta, A.; Rossi, A. M.; Rovelli, T.; Siroli, G. 
P.; Tosi, N.; Travaglini, R.; Albergo, S.; Cappello, G.; Chiorboli, M.; Costa, S.; Giordano, F.; Potenza, R.; Tricomi, A.; Tuve, C.; Barbagli, G.; Ciulli, V.; Civinini, C.; D'Alessandro, R.; Focardi, E.; Gallo, E.; Gonzi, S.; Gori, V.; Lenzi, P.; Meschini, M.; Paoletti, S.; Sguazzoni, G.; Tropiano, A.; Benussi, L.; Bianco, S.; Fabbri, F.; Piccolo, D.; Ferretti, R.; Ferro, F.; Lo Vetere, M.; Robutti, E.; Tosi, S.; Dinardo, M. E.; Fiorendi, S.; Gennai, S.; Gerosa, R.; Ghezzi, A.; Govoni, P.; Lucchini, M. T.; Malvezzi, S.; Manzoni, R. A.; Martelli, A.; Marzocchi, B.; Menasce, D.; Moroni, L.; Paganoni, M.; Pedrini, D.; Ragazzi, S.; Redaelli, N.; Tabarelli de Fatis, T.; Buontempo, S.; Cavallo, N.; Di Guida, S.; Fabozzi, F.; Iorio, A. O. M.; Lista, L.; Meola, S.; Merola, M.; Paolucci, P.; Azzi, P.; Bacchetta, N.; Bisello, D.; Carlin, R.; Checchia, P.; Dall'Osso, M.; Dorigo, T.; Dosselli, U.; Fanzago, F.; Gasparini, F.; Gasparini, U.; Gonella, F.; Gozzelino, A.; Lacaprara, S.; Margoni, M.; Meneguzzo, A. T.; Pazzini, J.; Pozzobon, N.; Ronchese, P.; Simonetto, F.; Torassa, E.; Tosi, M.; Zotto, P.; Zucchetta, A.; Zumerle, G.; Gabusi, M.; Ratti, S. P.; Re, V.; Riccardi, C.; Salvini, P.; Vitulo, P.; Biasini, M.; Bilei, G. M.; Ciangottini, D.; Fanò, L.; Lariccia, P.; Mantovani, G.; Menichelli, M.; Saha, A.; Santocchia, A.; Spiezia, A.; Androsov, K.; Azzurri, P.; Bagliesi, G.; Bernardini, J.; Boccali, T.; Broccolo, G.; Castaldi, R.; Ciocci, M. A.; Dell'Orso, R.; Donato, S.; Fedi, G.; Fiori, F.; Foà, L.; Giassi, A.; Grippo, M. T.; Ligabue, F.; Lomtadze, T.; Martini, L.; Messineo, A.; Moon, C. S.; Palla, F.; Rizzi, A.; Savoy-Navarro, A.; Serban, A. T.; Spagnolo, P.; Squillacioti, P.; Tenchini, R.; Tonelli, G.; Venturi, A.; Verdini, P. G.; Vernieri, C.; Barone, L.; Cavallari, F.; D'imperio, G.; Del Re, D.; Diemoz, M.; Jorda, C.; Longo, E.; Margaroli, F.; Meridiani, P.; Micheli, F.; Organtini, G.; Paramatti, R.; Rahatlou, S.; Rovelli, C.; Santanastasio, F.; Soffi, L.; Traczyk, P.; Amapane, N.; Arcidiacono, R.; Argiro, S.; Arneodo, M.; Bellan, R.; Biino, C.; Cartiglia, N.; Casasso, S.; Costa, M.; Covarelli, R.; Degano, A.; Demaria, N.; Finco, L.; Mariotti, C.; Maselli, S.; Migliore, E.; Monaco, V.; Musich, M.; Obertino, M. M.; Pacher, L.; Pastrone, N.; Pelliccioni, M.; Pinna Angioni, G. L.; Potenza, A.; Romero, A.; Ruspa, M.; Sacchi, R.; Solano, A.; Staiano, A.; Tamponi, U.; Belforte, S.; Candelise, V.; Casarsa, M.; Cossutti, F.; Della Ricca, G.; Gobbo, B.; La Licata, C.; Marone, M.; Schizzi, A.; Umer, T.; Zanetti, A.; Chang, S.; Kropivnitskaya, A.; Nam, S. K.; Kim, D. H.; Kim, G. N.; Kim, M. S.; Kim, M. S.; Kong, D. J.; Lee, S.; Oh, Y. D.; Park, H.; Sakharov, A.; Son, D. C.; Kim, T. J.; Ryu, M. S.; Kim, J. Y.; Moon, D. H.; Song, S.; Choi, S.; Gyun, D.; Hong, B.; Jo, M.; Kim, H.; Kim, Y.; Lee, B.; Lee, K. S.; Park, S. K.; Roh, Y.; Yoo, H. D.; Choi, M.; Kim, J. H.; Park, I. C.; Ryu, G.; Choi, Y.; Choi, Y. K.; Goh, J.; Kim, D.; Kwon, E.; Lee, J.; Yu, I.; Juodagalvis, A.; Komaragiri, J. R.; Md Ali, M. A. B.; Wan Abdullah, W. A. T.; Casimiro Linares, E.; Castilla-Valdez, H.; De La Cruz-Burelo, E.; Heredia-de La Cruz, I.; Hernandez-Almada, A.; Lopez-Fernandez, R.; Sanchez-Hernandez, A.; Carrillo Moreno, S.; Vazquez Valencia, F.; Pedraza, I.; Salazar Ibarguen, H. A.; Morelos Pineda, A.; Krofcheck, D.; Butler, P. H.; Reucroft, S.; Ahmad, A.; Ahmad, M.; Hassan, Q.; Hoorani, H. R.; Khan, W. 
A.; Khurshid, T.; Shoaib, M.; Bialkowska, H.; Bluj, M.; Boimska, B.; Frueboes, T.; Górski, M.; Kazana, M.; Nawrocki, K.; Romanowska-Rybinska, K.; Szleper, M.; Zalewski, P.; Brona, G.; Bunkowski, K.; Cwiok, M.; Dominik, W.; Doroba, K.; Kalinowski, A.; Konecki, M.; Krolikowski, J.; Misiura, M.; Olszewski, M.; Bargassa, P.; Beirão Da Cruz E Silva, C.; Di Francesco, A.; Faccioli, P.; Ferreira Parracho, P. G.; Gallinaro, M.; Lloret Iglesias, L.; Nguyen, F.; Rodrigues Antunes, J.; Seixas, J.; Toldaiev, O.; Vadruccio, D.; Varela, J.; Vischia, P.; Bunin, P.; Gavrilenko, M.; Golutvin, I.; Kamenev, A.; Karjavin, V.; Konoplyanikov, V.; Kozlov, G.; Lanev, A.; Malakhov, A.; Matveev, V.; Moisenz, P.; Palichik, V.; Perelygin, V.; Savina, M.; Shmatov, S.; Shulha, S.; Smirnov, V.; Zarubin, A.; Golovtsov, V.; Ivanov, Y.; Kim, V.; Kuznetsova, E.; Levchenko, P.; Murzin, V.; Oreshkin, V.; Smirnov, I.; Sulimov, V.; Uvarov, L.; Vavilov, S.; Vorobyev, A.; Vorobyev, An.; Andreev, Yu.; Dermenev, A.; Gninenko, S.; Golubev, N.; Kirsanov, M.; Krasnikov, N.; Pashenkov, A.; Tlisov, D.; Toropin, A.; Epshteyn, V.; Gavrilov, V.; Lychkovskaya, N.; Popov, V.; Pozdnyakov, I.; Safronov, G.; Semenov, S.; Spiridonov, A.; Stolin, V.; Vlasov, E.; Zhokin, A.; Andreev, V.; Azarkin, M.; Dremin, I.; Kirakosyan, M.; Leonidov, A.; Mesyats, G.; Rusakov, S. V.; Vinogradov, A.; Belyaev, A.; Boos, E.; Bunichev, V.; Dubinin, M.; Dudko, L.; Ershov, A.; Gribushin, A.; Klyukhin, V.; Kodolova, O.; Lokhtin, I.; Obraztsov, S.; Petrushanko, S.; Savrin, V.; Azhgirey, I.; Bayshev, I.; Bitioukov, S.; Kachanov, V.; Kalinin, A.; Konstantinov, D.; Krychkine, V.; Petrov, V.; Ryutin, R.; Sobol, A.; Tourtchanovitch, L.; Troshin, S.; Tyurin, N.; Uzunian, A.; Volkov, A.; Adzic, P.; Ekmedzic, M.; Milosevic, J.; Rekovic, V.; Alcaraz Maestre, J.; Battilana, C.; Calvo, E.; Cerrada, M.; Chamizo Llatas, M.; Colino, N.; De La Cruz, B.; Delgado Peris, A.; Domínguez Vázquez, D.; Escalante Del Valle, A.; Fernandez Bedoya, C.; Fernández Ramos, J. P.; Flix, J.; Fouz, M. C.; Garcia-Abia, P.; Gonzalez Lopez, O.; Goy Lopez, S.; Hernandez, J. M.; Josa, M. I.; Navarro De Martino, E.; Pérez-Calero Yzquierdo, A.; Puerta Pelayo, J.; Quintario Olmeda, A.; Redondo, I.; Romero, L.; Soares, M. S.; Albajar, C.; de Trocóniz, J. F.; Missiroli, M.; Moran, D.; Brun, H.; Cuevas, J.; Fernandez Menendez, J.; Folgueras, S.; Gonzalez Caballero, I.; Brochero Cifuentes, J. A.; Cabrillo, I. J.; Calderon, A.; Duarte Campderros, J.; Fernandez, M.; Gomez, G.; Graziano, A.; Lopez Virto, A.; Marco, J.; Marco, R.; Martinez Rivero, C.; Matorras, F.; Munoz Sanchez, F. J.; Piedra Gomez, J.; Rodrigo, T.; Rodríguez-Marrero, A. Y.; Ruiz-Jimeno, A.; Scodellaro, L.; Vila, I.; Vilar Cortabitarte, R.; Abbaneo, D.; Auffray, E.; Auzinger, G.; Bachtis, M.; Baillon, P.; Ball, A. H.; Barney, D.; Benaglia, A.; Bendavid, J.; Benhabib, L.; Benitez, J. F.; Bloch, P.; Bocci, A.; Bonato, A.; Bondu, O.; Botta, C.; Breuker, H.; Camporesi, T.; Cerminara, G.; Colafranceschi, S.; D'Alfonso, M.; d'Enterria, D.; Dabrowski, A.; David, A.; De Guio, F.; De Roeck, A.; De Visscher, S.; Di Marco, E.; Dobson, M.; Dordevic, M.; Dorney, B.; Dupont-Sagorin, N.; Elliott-Peisert, A.; Franzoni, G.; Funk, W.; Gigi, D.; Gill, K.; Giordano, D.; Girone, M.; Glege, F.; Guida, R.; Gundacker, S.; Guthoff, M.; Guida, R.; Hammer, J.; Hansen, M.; Harris, P.; Hegeman, J.; Innocente, V.; Janot, P.; Kortelainen, M. 
J.; Kousouris, K.; Krajczar, K.; Lecoq, P.; Lourenço, C.; Magini, N.; Malgeri, L.; Mannelli, M.; Marrouche, J.; Masetti, L.; Meijers, F.; Mersi, S.; Meschi, E.; Moortgat, F.; Morovic, S.; Mulders, M.; Orfanelli, S.; Orsini, L.; Pape, L.; Perez, E.; Petrilli, A.; Petrucciani, G.; Pfeiffer, A.; Pimiä, M.; Piparo, D.; Plagge, M.; Racz, A.; Rolandi, G.; Rovere, M.; Sakulin, H.; Schäfer, C.; Schwick, C.; Sharma, A.; Siegrist, P.; Silva, P.; Simon, M.; Sphicas, P.; Spiga, D.; Steggemann, J.; Stieger, B.; Stoye, M.; Takahashi, Y.; Treille, D.; Tsirou, A.; Veres, G. I.; Wardle, N.; Wöhri, H. K.; Wollny, H.; Zeuner, W. D.; Bertl, W.; Deiters, K.; Erdmann, W.; Horisberger, R.; Ingram, Q.; Kaestli, H. C.; Kotlinski, D.; Langenegger, U.; Renker, D.; Rohe, T.; Bachmair, F.; Bäni, L.; Bianchini, L.; Buchmann, M. A.; Casal, B.; Dissertori, G.; Dittmar, M.; Donegà, M.; Dünser, M.; Eller, P.; Grab, C.; Hits, D.; Hoss, J.; Kasieczka, G.; Lustermann, W.; Mangano, B.; Marini, A. C.; Marionneau, M.; Martinez Ruiz del Arbol, P.; Masciovecchio, M.; Meister, D.; Mohr, N.; Musella, P.; Nägeli, C.; Nessi-Tedaldi, F.; Pandolfi, F.; Pauss, F.; Perrozzi, L.; Peruzzi, M.; Quittnat, M.; Rebane, L.; Rossini, M.; Starodumov, A.; Takahashi, M.; Theofilatos, K.; Wallny, R.; Weber, H. A.; Amsler, C.; Canelli, M. F.; Chiochia, V.; De Cosa, A.; Hinzmann, A.; Hreus, T.; Kilminster, B.; Lange, C.; Ngadiuba, J.; Pinna, D.; Robmann, P.; Ronga, F. J.; Salerno, D.; Taroni, S.; Yang, Y.; Cardaci, M.; Chen, K. H.; Ferro, C.; Kuo, C. M.; Lin, W.; Lu, Y. J.; Volpe, R.; Yu, S. S.; Chang, P.; Chang, Y. H.; Chao, Y.; Chen, K. F.; Chen, P. H.; Dietz, C.; Grundler, U.; Hou, W.-S.; Liu, Y. F.; Lu, R.-S.; Miñano Moya, M.; Petrakou, E.; Tsai, J. f.; Tzeng, Y. M.; Wilken, R.; Asavapibhop, B.; Singh, G.; Srimanobhas, N.; Suwonjandee, N.; Adiguzel, A.; Bakirci, M. N.; Cerci, S.; Dozen, C.; Dumanoglu, I.; Eskut, E.; Girgis, S.; Gokbulut, G.; Guler, Y.; Gurpinar, E.; Hos, I.; Kangal, E. E.; Kayis Topaksu, A.; Onengut, G.; Ozdemir, K.; Ozturk, S.; Polatoz, A.; Sunar Cerci, D.; Tali, B.; Topakli, H.; Vergili, M.; Zorbilmez, C.; Akin, I. V.; Bilin, B.; Bilmis, S.; Gamsizkan, H.; Isildak, B.; Karapinar, G.; Ocalan, K.; Sekmen, S.; Surat, U. E.; Yalvac, M.; Zeyrek, M.; Albayrak, E. A.; Gülmez, E.; Kaya, M.; Kaya, O.; Yetkin, T.; Cankocak, K.; Vardarlı, F. I.; Levchuk, L.; Sorokin, P.; Brooke, J. J.; Clement, E.; Cussans, D.; Flacher, H.; Goldstein, J.; Grimes, M.; Heath, G. P.; Heath, H. F.; Jacob, J.; Kreczko, L.; Lucas, C.; Meng, Z.; Newbold, D. M.; Paramesvaran, S.; Poll, A.; Sakuma, T.; Seif El Nasr-storey, S.; Senkin, S.; Smith, V. J.; Williams, T.; Bell, K. W.; Belyaev, A.; Brew, C.; Brown, R. M.; Cockerill, D. J. A.; Coughlan, J. A.; Harder, K.; Harper, S.; Olaiya, E.; Petyt, D.; Shepherd-Themistocleous, C. H.; Thea, A.; Tomalin, I. R.; Williams, T.; Womersley, W. J.; Worm, S. D.; Baber, M.; Bainbridge, R.; Buchmuller, O.; Burton, D.; Colling, D.; Cripps, N.; Dauncey, P.; Davies, G.; De Wit, A.; Della Negra, M.; Dunne, P.; Elwood, A.; Ferguson, W.; Fulcher, J.; Futyan, D.; Hall, G.; Iles, G.; Jarvis, M.; Karapostoli, G.; Kenzie, M.; Lane, R.; Lucas, R.; Lyons, L.; Magnan, A.-M.; Malik, S.; Mathias, B.; Nash, J.; Nikitenko, A.; Pela, J.; Pesaresi, M.; Petridis, K.; Raymond, D. M.; Rogerson, S.; Rose, A.; Seez, C.; Sharp, P.; Tapper, A.; Vazquez Acosta, M.; Virdee, T.; Zenz, S. C.; Cole, J. E.; Hobson, P. R.; Khan, A.; Kyberd, P.; Leggat, D.; Leslie, D.; Reid, I. 
D.; Symonds, P.; Teodorescu, L.; Turner, M.; Dittmann, J.; Hatakeyama, K.; Kasmi, A.; Liu, H.; Pastika, N.; Scarborough, T.; Wu, Z.; Charaf, O.; Cooper, S. I.; Henderson, C.; Rumerio, P.; Avetisyan, A.; Bose, T.; Fantasia, C.; Lawson, P.; Richardson, C.; Rohlf, J.; St. John, J.; Sulak, L.; Zou, D.; Alimena, J.; Berry, E.; Bhattacharya, S.; Christopher, G.; Cutts, D.; Demiragli, Z.; Dhingra, N.; Ferapontov, A.; Garabedian, A.; Heintz, U.; Laird, E.; Landsberg, G.; Mao, Z.; Narain, M.; Sagir, S.; Sinthuprasith, T.; Speer, T.; Swanson, J.; Breedon, R.; Breto, G.; Calderon De La Barca Sanchez, M.; Chauhan, S.; Chertok, M.; Conway, J.; Conway, R.; Cox, P. T.; Erbacher, R.; Gardner, M.; Ko, W.; Lander, R.; Mulhearn, M.; Pellett, D.; Pilot, J.; Ricci-Tam, F.; Shalhout, S.; Smith, J.; Squires, M.; Stolp, D.; Tripathi, M.; Wilbur, S.; Yohay, R.; Cousins, R.; Everaerts, P.; Farrell, C.; Hauser, J.; Ignatenko, M.; Rakness, G.; Takasugi, E.; Valuev, V.; Weber, M.; Burt, K.; Clare, R.; Ellison, J.; Gary, J. W.; Hanson, G.; Heilman, J.; Ivova Rikova, M.; Jandir, P.; Kennedy, E.; Lacroix, F.; Long, O. R.; Luthra, A.; Malberti, M.; Negrete, M. Olmedo; Shrinivas, A.; Sumowidagdo, S.; Wimpenny, S.; Branson, J. G.; Cerati, G. B.; Cittolin, S.; D'Agnolo, R. T.; Holzner, A.; Kelley, R.; Klein, D.; Letts, J.; Macneill, I.; Olivito, D.; Padhi, S.; Palmer, C.; Pieri, M.; Sani, M.; Sharma, V.; Simon, S.; Tadel, M.; Tu, Y.; Vartak, A.; Welke, C.; Würthwein, F.; Yagil, A.; Zevi Della Porta, G.; Barge, D.; Bradmiller-Feld, J.; Campagnari, C.; Danielson, T.; Dishaw, A.; Dutta, V.; Flowers, K.; Franco Sevilla, M.; Geffert, P.; George, C.; Golf, F.; Gouskos, L.; Incandela, J.; Justus, C.; Mccoll, N.; Mullin, S. D.; Richman, J.; Stuart, D.; To, W.; West, C.; Yoo, J.; Apresyan, A.; Bornheim, A.; Bunn, J.; Chen, Y.; Duarte, J.; Mott, A.; Newman, H. B.; Pena, C.; Pierini, M.; Spiropulu, M.; Vlimant, J. R.; Wilkinson, R.; Xie, S.; Zhu, R. Y.; Azzolini, V.; Calamba, A.; Carlson, B.; Ferguson, T.; Iiyama, Y.; Paulini, M.; Russ, J.; Vogel, H.; Vorobiev, I.; Cumalat, J. P.; Ford, W. T.; Gaz, A.; Krohn, M.; Luiggi Lopez, E.; Nauenberg, U.; Smith, J. G.; Stenson, K.; Wagner, S. R.; Alexander, J.; Chatterjee, A.; Chaves, J.; Chu, J.; Dittmer, S.; Eggert, N.; Mirman, N.; Nicolas Kaufman, G.; Patterson, J. R.; Ryd, A.; Salvati, E.; Skinnari, L.; Sun, W.; Teo, W. D.; Thom, J.; Thompson, J.; Tucker, J.; Weng, Y.; Winstrom, L.; Wittich, P.; Winn, D.; Abdullin, S.; Albrow, M.; Anderson, J.; Apollinari, G.; Bauerdick, L. A. T.; Beretvas, A.; Berryhill, J.; Bhat, P. C.; Bolla, G.; Burkett, K.; Butler, J. N.; Cheung, H. W. K.; Chlebana, F.; Cihangir, S.; Elvira, V. D.; Fisk, I.; Freeman, J.; Gottschalk, E.; Gray, L.; Green, D.; Grünendahl, S.; Gutsche, O.; Hanlon, J.; Hare, D.; Harris, R. M.; Hirschauer, J.; Hooberman, B.; Jindariani, S.; Johnson, M.; Joshi, U.; Klima, B.; Kreis, B.; Kwan, S.; Linacre, J.; Lincoln, D.; Lipton, R.; Liu, T.; Lopes De Sá, R.; Lykken, J.; Maeshima, K.; Marraffino, J. M.; Martinez Outschoorn, V. I.; Maruyama, S.; Mason, D.; McBride, P.; Merkel, P.; Mishra, K.; Mrenna, S.; Nahn, S.; Newman-Holmes, C.; O'Dell, V.; Prokofyev, O.; Sexton-Kennedy, E.; Soha, A.; Spalding, W. J.; Spiegel, L.; Taylor, L.; Tkaczyk, S.; Tran, N. V.; Uplegger, L.; Vaandering, E. W.; Vidal, R.; Whitbeck, A.; Whitmore, J.; Yang, F.; Acosta, D.; Avery, P.; Bortignon, P.; Bourilkov, D.; Carver, M.; Curry, D.; Das, S.; De Gruttola, M.; Di Giovanni, G. P.; Field, R. D.; Fisher, M.; Furic, I. 
K.; Hugon, J.; Konigsberg, J.; Korytov, A.; Kypreos, T.; Low, J. F.; Matchev, K.; Mei, H.; Milenovic, P.; Mitselmakher, G.; Muniz, L.; Rinkevicius, A.; Shchutska, L.; Snowball, M.; Sperka, D.; Yelton, J.; Zakaria, M.; Hewamanage, S.; Linn, S.; Markowitz, P.; Martinez, G.; Rodriguez, J. L.; Adams, J. R.; Adams, T.; Askew, A.; Bochenek, J.; Diamond, B.; Haas, J.; Hagopian, S.; Hagopian, V.; Johnson, K. F.; Prosper, H.; Veeraraghavan, V.; Weinberg, M.; Baarmand, M. M.; Hohlmann, M.; Kalakhety, H.; Yumiceva, F.; Adams, M. R.; Apanasevich, L.; Berry, D.; Betts, R. R.; Bucinskaite, I.; Cavanaugh, R.; Evdokimov, O.; Gauthier, L.; Gerber, C. E.; Hofman, D. J.; Kurt, P.; O'Brien, C.; Sandoval Gonzalez, I. D.; Silkworth, C.; Turner, P.; Varelas, N.; Bilki, B.; Clarida, W.; Dilsiz, K.; Haytmyradov, M.; Khristenko, V.; Merlo, J.-P.; Mermerkaya, H.; Mestvirishvili, A.; Moeller, A.; Nachtman, J.; Ogul, H.; Onel, Y.; Ozok, F.; Penzo, A.; Rahmat, R.; Sen, S.; Tan, P.; Tiras, E.; Wetzel, J.; Yi, K.; Anderson, I.; Barnett, B. A.; Blumenfeld, B.; Bolognesi, S.; Fehling, D.; Gritsan, A. V.; Maksimovic, P.; Martin, C.; Swartz, M.; Xiao, M.; Baringer, P.; Bean, A.; Benelli, G.; Bruner, C.; Gray, J.; Kenny, R. P.; Majumder, D.; Malek, M.; Murray, M.; Noonan, D.; Sanders, S.; Sekaric, J.; Stringer, R.; Wang, Q.; Wood, J. S.; Chakaberia, I.; Ivanov, A.; Kaadze, K.; Khalil, S.; Makouski, M.; Maravin, Y.; Saini, L. K.; Skhirtladze, N.; Svintradze, I.; Gronberg, J.; Lange, D.; Rebassoo, F.; Wright, D.; Anelli, C.; Baden, A.; Belloni, A.; Calvert, B.; Eno, S. C.; Gomez, J. A.; Hadley, N. J.; Jabeen, S.; Kellogg, R. G.; Kolberg, T.; Lu, Y.; Mignerey, A. C.; Pedro, K.; Shin, Y. H.; Skuja, A.; Tonjes, M. B.; Tonwar, S. C.; Apyan, A.; Barbieri, R.; Baty, A.; Bierwagen, K.; Brandt, S.; Busza, W.; Cali, I. A.; Di Matteo, L.; Gomez Ceballos, G.; Goncharov, M.; Gulhan, D.; Klute, M.; Lai, Y. S.; Lee, Y.-J.; Levin, A.; Luckey, P. D.; Paus, C.; Ralph, D.; Roland, C.; Roland, G.; Stephans, G. S. F.; Sumorok, K.; Velicanu, D.; Veverka, J.; Wyslouch, B.; Yang, M.; Yoon, A. S.; Zanetti, M.; Zhukova, V.; Dahmes, B.; De Benedetti, A.; Gude, A.; Kao, S. C.; Klapoetke, K.; Kubota, Y.; Mans, J.; Nourbakhsh, S.; Rusack, R.; Singovsky, A.; Tambe, N.; Turkewitz, J.; Acosta, J. G.; Cremaldi, L. M.; Kroeger, R.; Oliveros, S.; Perera, L.; Sanders, D. A.; Summers, D.; Avdeeva, E.; Bloom, K.; Bose, S.; Claes, D. R.; Dominguez, A.; Gonzalez Suarez, R.; Keller, J.; Knowlton, D.; Kravchenko, I.; Lazo-Flores, J.; Meier, F.; Ratnikov, F.; Snow, G. R.; Zvada, M.; Dolen, J.; Godshalk, A.; Iashvili, I.; Jain, S.; Kharchilava, A.; Kumar, A.; Rappoccio, S.; Alverson, G.; Barberis, E.; Baumgartel, D.; Chasco, M.; Massironi, A.; Nash, D.; Orimoto, T.; Trocino, D.; Wood, D.; Zhang, J.; Anastassov, A.; Hahn, K. A.; Kubik, A.; Lusito, L.; Mucia, N.; Odell, N.; Pollack, B.; Pozdnyakov, A.; Schmitt, M.; Stoynev, S.; Sung, K.; Trovato, M.; Velasco, M.; Won, S.; Brinkerhoff, A.; Chan, K. M.; Drozdetskiy, A.; Hildreth, M.; Jessop, C.; Karmgard, D. J.; Kellams, N.; Lannon, K.; Lynch, S.; Marinelli, N.; Musienko, Y.; Pearson, T.; Planer, M.; Ruchti, R.; Valls, N.; Smith, G.; Wayne, M.; Wolf, M.; Woodard, A.; Antonelli, L.; Brinson, J.; Bylsma, B.; Durkin, L. S.; Flowers, S.; Hart, A.; Hill, C.; Hughes, R.; Kotov, K.; Ling, T. Y.; Luo, W.; Puigh, D.; Rodenburg, M.; Winer, B. L.; Wolfe, H.; Wulsin, H. W.; Driga, O.; Elmer, P.; Hardenbrook, J.; Hebda, P.; Koay, S. 
A.; Lujan, P.; Marlow, D.; Medvedeva, T.; Mooney, M.; Olsen, J.; Piroué, P.; Quan, X.; Saka, H.; Stickland, D.; Tully, C.; Werner, J. S.; Zuranski, A.; Brownson, E.; Malik, S.; Mendez, H.; Ramirez Vargas, J. E.; Barnes, V. E.; Benedetti, D.; Bortoletto, D.; Gutay, L.; Hu, Z.; Jha, M. K.; Jones, M.; Jung, K.; Kress, M.; Leonardo, N.; Miller, D. H.; Neumeister, N.; Primavera, F.; Radburn-Smith, B. C.; Shi, X.; Shipsey, I.; Silvers, D.; Svyatkovskiy, A.; Wang, F.; Xie, W.; Xu, L.; Zablocki, J.; Parashar, N.; Stupak, J.; Adair, A.; Akgun, B.; Ecklund, K. M.; Geurts, F. J. M.; Li, W.; Michlin, B.; Padley, B. P.; Redjimi, R.; Roberts, J.; Zabel, J.; Betchart, B.; Bodek, A.; de Barbaro, P.; Demina, R.; Eshaq, Y.; Ferbel, T.; Galanti, M.; Garcia-Bellido, A.; Goldenzweig, P.; Han, J.; Harel, A.; Hindrichs, O.; Khukhunaishvili, A.; Korjenevski, S.; Petrillo, G.; Verzetti, M.; Vishnevskiy, D.; Ciesielski, R.; Demortier, L.; Goulianos, K.; Mesropian, C.; Arora, S.; Barker, A.; Chou, J. P.; Contreras-Campana, C.; Contreras-Campana, E.; Duggan, D.; Ferencek, D.; Gershtein, Y.; Gray, R.; Halkiadakis, E.; Hidas, D.; Hughes, E.; Kaplan, S.; Kunnawalkam Elayavalli, R.; Lath, A.; Panwalkar, S.; Park, M.; Salur, S.; Schnetzer, S.; Sheffield, D.; Somalwar, S.; Stone, R.; Thomas, S.; Thomassen, P.; Walker, M.; Rose, K.; Spanier, S.; York, A.; Bouhali, O.; Castaneda Hernandez, A.; Dalchenko, M.; De Mattia, M.; Dildick, S.; Eusebi, R.; Flanagan, W.; Gilmore, J.; Kamon, T.; Khotilovich, V.; Krutelyov, V.; Montalvo, R.; Osipenkov, I.; Pakhotin, Y.; Patel, R.; Perloff, A.; Roe, J.; Rose, A.; Safonov, A.; Suarez, I.; Tatarinov, A.; Ulmer, K. A.; Akchurin, N.; Cowden, C.; Damgov, J.; Dragoiu, C.; Dudero, P. R.; Faulkner, J.; Kovitanggoon, K.; Kunori, S.; Lee, S. W.; Libeiro, T.; Volobouev, I.; Appelt, E.; Delannoy, A. G.; Greene, S.; Gurrola, A.; Johns, W.; Maguire, C.; Mao, Y.; Melo, A.; Sharma, M.; Sheldon, P.; Snook, B.; Tuo, S.; Velkovska, J.; Arenton, M. W.; Boutle, S.; Cox, B.; Francis, B.; Goodell, J.; Hirosky, R.; Ledovskoy, A.; Li, H.; Lin, C.; Neu, C.; Wolfe, E.; Wood, J.; Clarke, C.; Harr, R.; Karchin, P. E.; Kottachchi Kankanamge Don, C.; Lamichhane, P.; Sturdy, J.; Belknap, D. A.; Carlsmith, D.; Cepeda, M.; Dasu, S.; Dodd, L.; Duric, S.; Friis, E.; Hall-Wilton, R.; Herndon, M.; Hervé, A.; Klabbers, P.; Lanaro, A.; Lazaridis, C.; Levine, A.; Loveless, R.; Mohapatra, A.; Ojalvo, I.; Perry, T.; Pierro, G. A.; Polese, G.; Ross, I.; Sarangi, T.; Savin, A.; Smith, W. H.; Taylor, D.; Vuosalo, C.; Woods, N.; CMS Collaboration
2015-06-01
A search for a standard model Higgs boson produced in association with a top-quark pair and decaying to bottom quarks is presented. Events with hadronic jets and one or two oppositely charged leptons are selected from a data sample corresponding to an integrated luminosity of 19.5 fb⁻¹ collected by the CMS experiment at the LHC in pp collisions at a centre-of-mass energy of 8 TeV. In order to separate the signal from the larger tt̄ + jets background, this analysis uses a matrix element method that assigns a probability density value to each reconstructed event under signal or background hypotheses. The ratio between the two values is used in a maximum likelihood fit to extract the signal yield. The results are presented in terms of the measured signal strength modifier, μ, relative to the standard model prediction for a Higgs boson mass of 125 GeV. The observed (expected) exclusion limit at a 95% confidence level is 4.2 (3.3) times the standard model prediction, corresponding to a best fit value of μ = 1.2.
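Once events are histogrammed in the likelihood-ratio discriminant, the final fit described above reduces to a binned Poisson maximum-likelihood estimate of the signal strength μ; the sketch below uses invented toy templates, not the CMS analysis inputs.

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Binned negative log-likelihood for n_i ~ Poisson(mu * s_i + b_i).
def nll(mu, n, s, b):
    lam = np.maximum(mu * s + b, 1e-12)
    return np.sum(lam - n * np.log(lam))          # -log L up to a constant

s = np.array([1.0, 3.0, 6.0, 10.0])               # toy signal template per bin
b = np.array([120.0, 60.0, 25.0, 8.0])            # toy background template
n = np.array([123, 66, 30, 19])                    # toy observed counts

fit = minimize_scalar(nll, bounds=(0.0, 10.0), args=(n, s, b), method="bounded")
print("best-fit signal strength mu =", round(fit.x, 2))
```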
Reconstruction-of-difference (RoD) imaging for cone-beam CT neuro-angiography
NASA Astrophysics Data System (ADS)
Wu, P.; Stayman, J. W.; Mow, M.; Zbijewski, W.; Sisniega, A.; Aygun, N.; Stevens, R.; Foos, D.; Wang, X.; Siewerdsen, J. H.
2018-06-01
Timely evaluation of neurovasculature via CT angiography (CTA) is critical to the detection of pathology such as ischemic stroke. Cone-beam CTA (CBCT-A) systems offer potential advantages for timely use at the point of care, although the challenge of a relatively slow gantry rotation speed introduces tradeoffs among image quality, data consistency and data sparsity. This work describes and evaluates a new reconstruction-of-difference (RoD) approach that is robust to such challenges. A fast digital simulation framework was developed to test the performance of RoD against standard reference reconstruction methods such as filtered back-projection (FBP) and penalized likelihood (PL) over a broad range of imaging conditions, grouped into three scenarios to test the trade-offs among data consistency, data sparsity and peak contrast. Two experiments were also conducted using a CBCT prototype and an anthropomorphic neurovascular phantom to test the simulation findings on real data. Performance was evaluated primarily in terms of normalized root-mean-square error (NRMSE) in comparison to truth, with reconstruction parameters chosen to optimize performance in each case to ensure fair comparison. The RoD approach reduced NRMSE in reconstructed images by 50-53% compared to FBP and by 29-31% compared to PL across the three scenarios. Scan protocols well suited to the RoD approach were identified that balance the tradeoffs among data consistency, sparsity and peak contrast: for example, a CBCT-A scan with 128 projections acquired in 8.5 s over a 180° + fan angle half-scan for a time-attenuation curve with ~8.5 s time-to-peak and 600 HU peak contrast. Under imaging conditions matching the fixed-data-sparsity simulation scenario (i.e. varying levels of data consistency and peak contrast), the experiments confirmed the reduction of NRMSE by 34% and 17% compared to FBP and PL, respectively. The RoD approach demonstrated superior performance in 3D angiography compared to FBP and PL in all simulation and physical experiments, suggesting the possibility of CBCT-A on low-cost, mobile imaging platforms suited to the point of care. The algorithm demonstrated accurate reconstruction with a high degree of robustness against data sparsity and inconsistency.
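The NRMSE figure of merit used above, in one-line form; normalization by the dynamic range of the truth image is an assumption about the convention.

```python
import numpy as np

# Normalized root-mean-square error of a reconstruction against truth.
def nrmse(recon, truth):
    rmse = np.sqrt(np.mean((recon - truth) ** 2))
    return rmse / (truth.max() - truth.min())     # normalized by dynamic range
```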
Shih, Weichung Joe; Li, Gang; Wang, Yining
2016-03-01
Sample size plays a crucial role in clinical trials. Flexible sample-size designs, as part of the more general category of adaptive designs that utilize interim data, have been a popular topic in recent years. In this paper, we give a comparative review of four related methods for such a design. The likelihood method uses the likelihood ratio test with an adjusted critical value. The weighted method adjusts the test statistic with given weights rather than the critical value. The dual test method requires both the likelihood ratio statistic and the weighted statistic to be greater than the unadjusted critical value. The promising zone approach uses the likelihood ratio statistic with the unadjusted value and other constraints. All four methods preserve the type-I error rate. In this paper we explore their properties and compare their relationships and merits. We show that the sample size rules for the dual test are in conflict with the rules of the promising zone approach. We delineate what is necessary to specify in the study protocol to ensure the validity of the statistical procedure and what can be kept implicit in the protocol so that more flexibility can be attained for confirmatory phase III trials in meeting regulatory requirements. We also prove that under mild conditions, the likelihood ratio test still preserves the type-I error rate when the actual sample size is larger than the re-calculated one. Copyright © 2015 Elsevier Inc. All rights reserved.
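A sketch contrasting the weighted statistic and the dual test on stage-wise z-statistics, assuming a normal-mean setting with Cui-Hung-Wang-style prespecified weights; the pooled statistic stands in for the likelihood ratio statistic in this setting.

```python
import numpy as np
from scipy.stats import norm

# Weighted combination: fixed weights keep the null distribution N(0,1)
# regardless of the re-estimated second-stage sample size.
def weighted_z(z1, z2, w1):
    w2 = np.sqrt(1.0 - w1 ** 2)
    return w1 * z1 + w2 * z2

# Dual test: require BOTH the naive pooled statistic and the weighted
# statistic to exceed the unadjusted critical value.
def dual_test(z1, z2, n1, n2, w1, alpha=0.025):
    zcrit = norm.ppf(1 - alpha)
    z_pooled = (np.sqrt(n1) * z1 + np.sqrt(n2) * z2) / np.sqrt(n1 + n2)
    return (weighted_z(z1, z2, w1) > zcrit) and (z_pooled > zcrit)
```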
Colen, David L; Carney, Martin J; Shubinets, Valeriy; Lanni, Michael A; Liu, Tiffany; Levin, L Scott; Lee, Gwo-Chin; Kovach, Stephen J
2018-04-01
Total knee arthroplasty is a common orthopedic procedure in the United States and complications can be devastating. Soft-tissue compromise or joint infection may cause failure of prosthesis requiring knee fusion or amputation. The role of a plastic surgeon in total knee arthroplasty is critical for cases requiring optimization of the soft-tissue envelope. The purpose of this study was to elucidate factors associated with total knee arthroplasty salvage following complications and clarify principles of reconstruction to optimize outcomes. A retrospective review of patients requiring soft-tissue reconstruction performed by the senior author after total knee arthroplasty over 8 years was completed. Logistic regression and Fisher's exact tests determined factors associated with the primary outcome, prosthesis salvage versus knee fusion or amputation. Seventy-three knees in 71 patients required soft-tissue reconstruction (mean follow-up, 1.8 years), with a salvage rate of 61.1 percent, mostly using medial gastrocnemius flaps. Patients referred to our institution with complicated periprosthetic wounds were significantly more likely to lose their knee prosthesis than patients treated only within our system. Patients with multiple prior knee operations before definitive soft-tissue reconstruction had significantly decreased rates of prosthesis salvage and an increased risk of amputation. Knee salvage significantly decreased with positive joint cultures (Gram-negative greater than Gram-positive organisms) and particularly at the time of definitive reconstruction, which also trended toward an increased risk of amputation. In revision total knee arthroplasty, prompt soft-tissue reconstruction improves the likelihood of success, and protracted surgical courses and contamination increase failure and amputations. The authors show a benefit to involving plastic surgeons early in the course of total knee arthroplasty complications to optimize genicular soft tissues. Therapeutic, III.
Liu, Peigui; Elshall, Ahmed S.; Ye, Ming; ...
2016-02-05
Evaluating marginal likelihood is the most critical and computationally expensive task when conducting Bayesian model averaging to quantify parametric and model uncertainties. The evaluation is commonly done by using Laplace approximations to evaluate semianalytical expressions of the marginal likelihood, or by using Monte Carlo (MC) methods to evaluate the arithmetic or harmonic mean of a joint likelihood function. This study introduces a new MC method, thermodynamic integration, which has not previously been attempted in environmental modeling. Instead of using samples only from the prior parameter space (as in arithmetic mean evaluation) or the posterior parameter space (as in harmonic mean evaluation), the thermodynamic integration method uses samples generated gradually from the prior to the posterior parameter space. This is done through path sampling that conducts Markov chain Monte Carlo simulation with different power coefficient values applied to the joint likelihood function. The thermodynamic integration method is evaluated using three analytical functions, comparing it with two variants of the Laplace approximation method and three MC methods, including the nested sampling method recently introduced into environmental modeling. The thermodynamic integration method outperforms the other methods in accuracy, convergence, and consistency. It is also applied to a synthetic case of groundwater modeling with four alternative models. The application shows that model probabilities obtained using the thermodynamic integration method improve the predictive performance of Bayesian model averaging. The thermodynamic integration method is thus mathematically rigorous, and its MC implementation is computationally general for a wide range of environmental problems.
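A compact thermodynamic-integration sketch on a toy 1-D Gaussian problem: sample the power posterior p_β ∝ L^β p for a ladder of power coefficients β, average log L at each rung, and integrate over β. The random-walk sampler and β ladder are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(1)
y = rng.normal(0.5, 1.0, size=20)                 # toy data, known sigma = 1

def log_like(th):                                 # constants dropped, so the
    return -0.5 * np.sum((y - th) ** 2)           # estimate is up to a constant

def log_prior(th):
    return -0.5 * th ** 2                          # N(0, 1) prior

def mean_loglike_at(beta, n=4000):
    # random-walk MCMC targeting p_beta ∝ L(theta)^beta * p(theta)
    th, ll = 0.0, log_like(0.0)
    trace = []
    for _ in range(n):
        prop = th + 0.5 * rng.standard_normal()
        llp = log_like(prop)
        if np.log(rng.uniform()) < beta * (llp - ll) + log_prior(prop) - log_prior(th):
            th, ll = prop, llp
        trace.append(ll)
    return np.mean(trace[n // 4:])                 # E_beta[log L] after burn-in

betas = np.linspace(0.0, 1.0, 11) ** 3             # ladder denser near beta = 0
m = np.array([mean_loglike_at(b) for b in betas])
log_ml = np.sum(0.5 * (m[1:] + m[:-1]) * np.diff(betas))   # trapezoid over beta
print("log marginal likelihood ~", round(log_ml, 2))
```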
Algorithmic procedures for Bayesian MEG/EEG source reconstruction in SPM
López, J.D.; Litvak, V.; Espinosa, J.J.; Friston, K.; Barnes, G.R.
2014-01-01
The MEG/EEG inverse problem is ill-posed, giving different source reconstructions depending on the initial assumption sets. Parametric Empirical Bayes allows one to implement most popular MEG/EEG inversion schemes (Minimum Norm, LORETA, etc.) within the same generic Bayesian framework. It also provides a cost-function in terms of the variational Free energy—an approximation to the marginal likelihood or evidence of the solution. In this manuscript, we revisit the algorithm for MEG/EEG source reconstruction with a view to providing a didactic and practical guide. The aim is to promote and help standardise the development and consolidation of other schemes within the same framework. We describe the implementation in the Statistical Parametric Mapping (SPM) software package, carefully explaining each of its stages with the help of a simple simulated data example. We focus on the Multiple Sparse Priors (MSP) model, which we compare with the well-known Minimum Norm and LORETA models, using the negative variational Free energy for model comparison. The manuscript is accompanied by Matlab scripts to allow the reader to test and explore the underlying algorithm. PMID:24041874
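A minimal sketch of the classical Minimum Norm inverse that the framework reproduces as a special case; the lead field, data and regularization value below are invented placeholders.

```python
import numpy as np

# Regularized minimum-norm estimate: J = L^T (L L^T + lambda * I)^{-1} Y.
def minimum_norm(L, Y, lam):
    n_sensors = L.shape[0]
    G = L @ L.T + lam * np.eye(n_sensors)          # regularized Gram matrix
    return L.T @ np.linalg.solve(G, Y)             # source estimates

rng = np.random.default_rng(2)
L = rng.standard_normal((64, 500))                 # 64 sensors, 500 sources
J_true = np.zeros((500, 1)); J_true[42] = 1.0      # one active source
Y = L @ J_true + 0.05 * rng.standard_normal((64, 1))
J_hat = minimum_norm(L, Y, lam=1.0)
```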
Reconstruction of interaction rate in holographic dark energy
NASA Astrophysics Data System (ADS)
Mukherjee, Ankan
2016-11-01
The present work is based on the holographic dark energy model with the Hubble horizon as the infrared cut-off. The interaction rate between dark energy and dark matter has been reconstructed for three different parameterizations of the deceleration parameter. Observational constraints on the model parameters have been obtained by maximum likelihood analysis using the observational Hubble parameter data (OHD), type Ia supernova data (SNe), baryon acoustic oscillation data (BAO) and the distance prior of the cosmic microwave background (CMB), namely the CMB shift parameter data (CMBShift). The interaction rate obtained in the present work remains always positive and increases with expansion. It is very similar to the result obtained by Sen and Pavon [1], where the interaction rate was reconstructed from a parametrization of the dark energy equation of state. Tighter constraints on the interaction rate have been obtained in the present work, as it is based on larger data sets. The nature of the dark energy equation of state parameter has also been studied for the present models. Although the reconstructions start from different parametrizations, the overall nature of the interaction rate is very similar in all the cases. Different information criteria and the Bayesian evidence, which have been invoked in the context of model selection, show that these models are in close proximity to one another.
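The maximum-likelihood step can be sketched as a chi-square against Hubble-parameter data for one deceleration-parameter ansatz, q(z) = q0 + q1 z/(1+z), an assumed form for illustration; the data points below are toy values, not the OHD compilation.

```python
import numpy as np
from scipy.integrate import quad

# H(z) from a deceleration-parameter ansatz:
#   H(z) = H0 * exp( int_0^z [1 + q(z')] / (1 + z') dz' )
def H_model(z, H0, q0, q1):
    integrand = lambda x: (1.0 + q0 + q1 * x / (1.0 + x)) / (1.0 + x)
    return H0 * np.exp(quad(integrand, 0.0, z)[0])

z_d = np.array([0.2, 0.6, 1.0])                    # toy redshifts
H_d = np.array([75.0, 90.0, 120.0])                # toy H(z), km/s/Mpc
sig = np.array([5.0, 6.0, 8.0])                    # toy uncertainties

def chi2(params):
    H0, q0, q1 = params
    Hm = np.array([H_model(z, H0, q0, q1) for z in z_d])
    return np.sum(((H_d - Hm) / sig) ** 2)         # -2 log L up to a constant
```

Passing chi2 to scipy.optimize.minimize with a starting guess such as (70, -0.5, 0.8) would complete the maximum-likelihood step.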
Real-Time 3D Tracking and Reconstruction on Mobile Phones.
Prisacariu, Victor Adrian; Kähler, Olaf; Murray, David W; Reid, Ian D
2015-05-01
We present a novel framework for jointly tracking a camera in 3D and reconstructing the 3D model of an observed object. Owing to the region-based approach, our formulation can handle untextured objects, partial occlusions, motion blur, dynamic backgrounds and imperfect lighting. Our formulation also allows for a very efficient implementation which achieves real-time performance on a mobile phone, by running the pose estimation and the shape optimisation in parallel. We use level set based pose estimation but completely avoid the explicit computation of a global distance, which is typically required. This leads to tracking rates of more than 100 Hz on a desktop PC and 30 Hz on a mobile phone. Further, we incorporate additional orientation information from the phone's inertial sensor, which helps us resolve the tracking ambiguities inherent to region-based formulations. The reconstruction step first probabilistically integrates 2D image statistics from selected keyframes into a 3D volume, and then imposes coherency and compactness using a total variation regularisation term. The global optimum of the overall energy function is found using a continuous max-flow algorithm, and we show that, as in tracking, the integration of per-voxel posteriors instead of likelihoods improves the precision and accuracy of the reconstruction.
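A hedged sketch of the probabilistic integration step: per-voxel foreground log-odds accumulated from the pixels that each voxel projects to in a keyframe. The pinhole projection and the foreground-probability map p_fg are assumptions; the TV regularisation and max-flow steps are omitted.

```python
import numpy as np

# Fuse one keyframe's 2D foreground statistics into a voxel log-odds volume.
# voxels: (N, 3) world points; K: 3x3 intrinsics; pose: (R, t) world->camera.
def integrate_keyframe(log_odds, voxels, K, pose, p_fg):
    R, t = pose
    cam = voxels @ R.T + t                         # world -> camera coordinates
    uvw = cam @ K.T                                # apply intrinsics
    u = (uvw[:, 0] / uvw[:, 2]).astype(int)
    v = (uvw[:, 1] / uvw[:, 2]).astype(int)
    h, w = p_fg.shape
    ok = (uvw[:, 2] > 0) & (u >= 0) & (u < w) & (v >= 0) & (v < h)
    p = np.clip(p_fg[v[ok], u[ok]], 1e-6, 1 - 1e-6)
    log_odds[ok] += np.log(p / (1 - p))            # accumulate posterior evidence
    return log_odds
```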
Jiang, Shanghai
2017-01-01
Sheet-beam X-ray fluorescence computed tomography (XFCT) can save a huge amount of time in obtaining a whole set of projections at a synchrotron; synchrotron sources, however, are clearly impractical for most biomedical research laboratories. In this paper, polychromatic X-ray fluorescence computed tomography with sheet-beam geometry is tested by Monte Carlo simulation. First, two phantoms (A and B) filled with PMMA are used to simulate the imaging process in Geant4. Phantom A contains several GNP-loaded regions of the same size (10 mm in height and diameter) but different Au weight concentrations ranging from 0.3% to 1.8%. Phantom B contains twelve GNP-loaded regions with the same Au weight concentration (1.6%) but different diameters ranging from 1 mm to 9 mm. Second, a discretized representation of the imaging model is established to reconstruct more accurate XFCT images. Third, XFCT images of phantoms A and B are reconstructed by filtered back-projection (FBP) and maximum likelihood expectation maximization (MLEM), with and without correction, respectively. Contrast-to-noise ratio (CNR) is calculated to evaluate all the reconstructed images. Our results show that a sheet-beam XFCT system based on a polychromatic X-ray source is feasible and that the discretized imaging model can be used to reconstruct more accurate images. PMID:28567054
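The CNR evaluation in one-line form, under the usual (assumed) convention with a lesion ROI and a background region:

```python
import numpy as np

# Contrast-to-noise ratio: |mean_roi - mean_bkg| / std_bkg.
def cnr(img, roi_mask, bkg_mask):
    return abs(img[roi_mask].mean() - img[bkg_mask].mean()) / img[bkg_mask].std()
```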
Olsson, Anna; Arlig, Asa; Carlsson, Gudrun Alm; Gustafsson, Agnetha
2007-09-01
The image quality of single photon emission computed tomography (SPECT) depends on the reconstruction algorithm used. The purpose of the present study was to evaluate parameters in ordered subset expectation maximization (OSEM) and to compare it systematically with filtered back-projection (FBP) for reconstruction of regional cerebral blood flow (rCBF) SPECT, incorporating attenuation and scatter correction. The evaluation was based on the trade-off between contrast recovery and statistical noise using different subset sizes, numbers of iterations and filter parameters. Monte Carlo simulated SPECT studies of a digital human brain phantom were used. Contrast recovery was calculated as measured contrast divided by true contrast. Statistical noise in the reconstructed images was calculated as the coefficient of variation in pixel values. A constant contrast level was reached above 195 equivalent maximum likelihood expectation maximization iterations. The choice of subset size was not crucial as long as there were ≥2 projections per subset. The OSEM reconstruction was found to give 5-14% higher contrast recovery than FBP at all clinically relevant noise levels in rCBF SPECT. The Butterworth filter, power 6, achieved the highest stable contrast recovery level at all clinically relevant noise levels. The cut-off frequency should be chosen according to the noise level accepted in the image. Trade-off plots are shown to be a practical way of deciding the number of iterations and subset size for the OSEM reconstruction and can be used for other examination types in nuclear medicine.
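The two figures of merit traded off above, in sketch form (definitions as stated in the abstract; the uniform-region convention for noise is an assumption):

```python
import numpy as np

# Contrast recovery: measured contrast divided by true contrast.
def contrast_recovery(hot_mean, bkg_mean, true_contrast):
    measured = (hot_mean - bkg_mean) / bkg_mean
    return measured / true_contrast

# Statistical noise: coefficient of variation of pixel values in a
# uniform region of the reconstruction.
def noise_cv(uniform_region):
    return uniform_region.std() / uniform_region.mean()
```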
Joint Optimization of Fluence Field Modulation and Regularization in Task-Driven Computed Tomography
Gang, G. J.; Siewerdsen, J. H.; Stayman, J. W.
2017-01-01
Purpose This work presents a task-driven joint optimization of fluence field modulation (FFM) and regularization in quadratic penalized-likelihood (PL) reconstruction. Conventional FFM strategies proposed for filtered-backprojection (FBP) are evaluated in the context of PL reconstruction for comparison. Methods We present a task-driven framework that leverages prior knowledge of the patient anatomy and imaging task to identify FFM and regularization. We adopted a maxi-min objective that ensures a minimum level of detectability index (d′) across sample locations in the image volume. The FFM designs were parameterized by 2D Gaussian basis functions to reduce dimensionality of the optimization and basis function coefficients were estimated using the covariance matrix adaptation evolutionary strategy (CMA-ES) algorithm. The FFM was jointly optimized with both space-invariant and spatially-varying regularization strength (β) - the former via an exhaustive search through discrete values and the latter using an alternating optimization where β was exhaustively optimized locally and interpolated to form a spatially-varying map. Results The optimal FFM inverts as β increases, demonstrating the importance of a joint optimization. For the task and object investigated, the optimal FFM assigns more fluence through less attenuating views, counter to conventional FFM schemes proposed for FBP. The maxi-min objective homogenizes detectability throughout the image and achieves a higher minimum detectability than conventional FFM strategies. Conclusions The task-driven FFM designs found in this work are counter to conventional patterns for FBP and yield better performance in terms of the maxi-min objective, suggesting opportunities for improved image quality and/or dose reduction when model-based reconstructions are applied in conjunction with FFM. PMID:28626290
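The evolutionary optimization step can be illustrated compactly. The sketch below maximizes a maxi-min objective over basis-function coefficients with a simple (1+λ) evolution strategy standing in for CMA-ES; the d' model is a toy placeholder, not the paper's detectability index.

```python
import numpy as np

rng = np.random.default_rng(0)
n_basis, n_loc = 8, 20
locations = rng.random((n_loc, n_basis))        # toy sample locations

def dprime(coeffs):
    """Toy stand-in for the detectability index d' at each location; the
    real model couples the FFM pattern, object, task, and reconstruction."""
    fluence = locations @ np.exp(coeffs)        # positive fluence per location
    return fluence / (1.0 + fluence)            # saturating toy response

def maximin(coeffs):
    return dprime(coeffs).min()                 # objective: worst-case d'

x = np.zeros(n_basis)
best, sigma = maximin(x), 0.5
for gen in range(200):                          # simple (1+lambda) ES
    cand = x + sigma * rng.standard_normal((16, n_basis))
    vals = np.array([maximin(c) for c in cand])
    if vals.max() > best:
        best, x = vals.max(), cand[vals.argmax()]
print("maxi-min d':", round(best, 3))
```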
NASA Astrophysics Data System (ADS)
Schafer, Sebastian; Wang, Adam; Otake, Yoshito; Stayman, J. W.; Zbijewski, Wojciech; Kleinszig, Gerhard; Xia, Xuewei; Gallia, Gary L.; Siewerdsen, Jeffrey H.
2013-03-01
Intraoperative imaging could improve patient safety and quality assurance (QA) via the detection of subtle complications that might otherwise only be found hours after surgery. Such capability could therefore reduce morbidity and the need for additional intervention. Among the severe adverse events that could be more quickly detected by high-quality intraoperative imaging is acute intracranial hemorrhage (ICH), conventionally assessed using post-operative CT. A mobile C-arm capable of high-quality cone-beam CT (CBCT) in combination with advanced image reconstruction techniques is reported as a means of detecting ICH in the operating room. The system employs an isocentric C-arm with a flat-panel detector in dual gain mode, correction of x-ray scatter and beam-hardening, and a penalized likelihood (PL) iterative reconstruction method. Performance in ICH detection was investigated using a quantitative phantom focusing on (non-contrast-enhanced) blood-brain contrast, an anthropomorphic head phantom, and a porcine model with injection of a fresh blood bolus. The visibility of ICH was characterized in terms of contrast-to-noise ratio (CNR) and qualitative evaluation of images by a neurosurgeon. Across a range of size and contrast of the ICH as well as radiation dose from the CBCT scan, the CNR was found to increase from ~2.2-3.7 for conventional filtered backprojection (FBP) to ~3.9-5.4 for PL at equivalent spatial resolution. The porcine model demonstrated superior ICH detectability for PL. The results support the role of high-quality mobile C-arm CBCT employing advanced reconstruction algorithms for detecting subtle complications in the operating room at lower radiation dose and lower cost than intraoperative CT scanners and/or fixed-room C-arms. Such capability could present a potentially valuable aid to patient safety and QA.
Ortiz-Rodriguez, Andrés Ernesto; Ornelas, Juan Francisco; Ruiz-Sanchez, Eduardo
2018-05-01
The predominantly Asian tribe Miliuseae (Annonaceae) includes over 37 Neotropical species that are mainly distributed across Mesoamerica, from southern Mexico to northern Colombia. The tremendous ecological and morphological diversity of this clade, including ramiflory, cauliflory, flagelliflory, and clonality, suggests adaptive radiation. Despite the spectacular phenotypic divergence of this clade, little is known about its phylogenetic and evolutionary history. In this study we used a nuclear DNA marker and seven chloroplast markers, together with maximum parsimony, maximum likelihood and Bayesian inference methods, to reconstruct a comprehensive time-calibrated phylogeny of tribe Miliuseae, focusing especially on the Desmopsis-Stenanona clade. We also performed ancestral area reconstructions to infer the biogeographic history of this group. Finally, we used ecological niche modeling, lineage distribution models, and niche overlap tests to assess whether geographic isolation and ecological specialization influenced the diversification of lineages within this clade. We reconstructed a monophyletic Miliuseae that is divided into two strongly supported clades: (i) a Sapranthus-Tridimeris clade and (ii) a Desmopsis-Stenanona clade. The colonization of the Neotropics and subsequent diversification of Neotropical Miliuseae seem to have been associated with the expansion of the boreotropical forests during the late Eocene and their subsequent fragmentation and southern displacement. Further speciation within Neotropical Miliuseae out of the Maya block seems to have occurred during the last 15 million years. Lastly, the geographic structuring of major lineages of the Desmopsis-Stenanona clade seems to have followed a climatic gradient, supporting the hypothesis that morphological differentiation between closely related species resulted from both long-term isolation between geographic ranges and adaptation to environmental conditions. Copyright © 2018 Elsevier Inc. All rights reserved.
Shiyanbola, Oyewale O.; Sprague, Brian L.; Hampton, John M.; Dittus, Kim; James, Ted A.; Herschorn, Sally; Gangnon, Ronald E.; Weaver, Donald L.; Trentham-Dietz, Amy
2016-01-01
BACKGROUND The use of surgery and radiation therapy in treating ductal carcinoma in situ (DCIS) is directed by treatment guidelines and evidence from research. We sought to investigate recent patterns in DCIS treatment by demographic factors. METHODS Data for women diagnosed with DCIS between 1998 and 2011 (n = 416,232) in the National Cancer Data Base were assessed for trends in treatment patterns by age group, calendar year, ancestral/ethnic group and geographic region. The likelihood of receiving specific treatment modalities was analyzed using multivariable logistic regression. RESULTS DCIS cases were most frequently treated with breast-conserving surgery (BCS) and adjuvant radiation (45.6%). After an initial rise, the use of adjuvant radiation following BCS plateaued at around 70% after 2007, with increasing utilization of mastectomy beyond 2005. Additionally, there was an increasing trend in post-mastectomy reconstruction over time, and women of African ancestry (odds ratio, 0.69; 95% confidence interval, 0.66–0.72) and Hispanic women (odds ratio, 0.83; 95% confidence interval, 0.78–0.89) were less likely to undergo reconstruction compared with women of European ancestry. A similar trend was observed in the utilization of contralateral risk-reducing mastectomy, with women of European ancestry showing the most rapid rise in utilization among all ancestral/ethnic groups. CONCLUSION Recent trends demonstrate a plateau in radiation therapy administration following BCS, with increasing utilization of mastectomy, reconstruction and contralateral risk-reducing mastectomy. There are substantial differences in treatment utilization according to ancestry/ethnicity and geographic region. Further studies examining patient-physician decision making surrounding DCIS treatment are warranted. PMID:27244699
Approximated maximum likelihood estimation in multifractal random walks
NASA Astrophysics Data System (ADS)
Løvsletten, O.; Rypdal, M.
2012-04-01
We present an approximated maximum likelihood method for the multifractal random walk processes of Bacry et al. [Phys. Rev. E 64, 026103 (2001)]. The likelihood is computed using a Laplace approximation and a truncation in the dependency structure for the latent volatility. The procedure is implemented as a package in the R language. Its performance is tested on synthetic data and compared to an inference approach based on the generalized method of moments. The method is applied to estimate parameters for various financial stock indices.
Draborg, Eva; Andersen, Christian Kronborg
2006-01-01
Health technology assessment (HTA) has been used as input to decision making worldwide for more than 25 years. However, no uniform definition of HTA or agreement on assessment methods exists, leaving open the question of what influences the choice of assessment methods in HTAs. The objective of this study is to analyze statistically a possible relationship between the methods of assessment used in practical HTAs, the type of technology assessed, the type of assessors, and the year of publication. A sample of 433 HTAs published by eleven leading institutions or agencies in nine countries was reviewed and analyzed by multiple logistic regression. The study shows that outsourcing of HTA reports to external partners is associated with a higher likelihood of using assessment methods such as meta-analysis, surveys, economic evaluations, and randomized controlled trials, and with a lower likelihood of using literature reviews and "other methods". The year of publication was statistically related to the inclusion of economic evaluations, whose likelihood decreased over the period studied. The type of technology assessed was also related to method choice: when pharmaceuticals were the assessed technology, economic evaluations, surveys, and "other methods" were all less likely to be used. During the period from 1989 to 2002, no major developments in the assessment methods used in practical HTAs were shown statistically in this sample of 433 HTAs worldwide. Outsourcing to external assessors has a statistically significant influence on the choice of assessment methods.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kroll, Florian; Karsch, Leonhard; Pawelke, Jörg
2013-08-15
Purpose: Clinical QA in teletherapy as well as the characterization of experimental radiation sources for future medical applications requires effective methods for measuring three-dimensional (3D) dose distributions generated in a water-equivalent medium. Current dosimeters based on ionization chambers, diodes, thermoluminescence detectors, radiochromic films, or polymer gels exhibit various drawbacks: high-quality 3D dose determination is either very sophisticated and expensive or requires considerable effort and time for preparation or readout. New detectors based on scintillator blocks in combination with optical tomography are studied, since they have the potential to provide the desired cost-effective, transportable, and long-term stable dosimetry system that is able to determine 3D dose distributions with high spatial resolution in a short time. Methods: A portable detector prototype was set up based on a plastic scintillator block and four digital cameras. During irradiation the scintillator emits light, which is detected by the fixed cameras. The light distribution is then reconstructed by optical tomography, using maximum-likelihood expectation maximization. The result of the reconstruction approximates the 3D dose distribution. First performance tests of the prototype using laser light were carried out. Irradiation experiments were performed with ionizing radiation, i.e., bremsstrahlung (6 to 21 MV), electrons (6 to 21 MeV), and protons (68 MeV), provided by clinical and research accelerators. Results: Laser experiments show that the current imaging properties differ from the design specifications: the imaging scale of the optical systems is position dependent, ranging from 0.185 mm/pixel to 0.225 mm/pixel. Nevertheless, the developed dosimetry method is proven to be functional for electron and proton beams. Induced radiation doses of 50 mGy or more made 3D dose reconstructions possible. Taking the imaging properties into account, determined dose profiles are in agreement with reference measurements. An inherent drawback of the scintillator is the nonlinear light output for high stopping-power radiation due to the quenching effect, which affects the depth-dose curves measured with the dosimeter. For single Bragg peak distributions this leads to a peak-to-plateau ratio of 2.8 instead of 4.5 for the reference ionization chamber measurement. Furthermore, the transmission of the clinical bremsstrahlung beams through the scintillator leads to saturation of one camera, making dose reconstruction presently infeasible in that case. Conclusions: It is shown that distributions of scintillation light generated by proton or electron beams can be reconstructed by the dosimetry system within minutes. The quenching apparent under proton irradiation and the not yet precisely determined position dependence of the imaging scale require further investigation and correction. Upgrading the prototype with larger or inorganic scintillators would increase the detectable proton and electron energy range. The presented results show that the determination of 3D dose distributions using scintillator blocks and optical tomography is a promising dosimetry method.
Cummins, Carla A; McInerney, James O
2011-12-01
Current phylogenetic methods attempt to account for evolutionary rate variation across characters in a matrix. This is generally achieved by the use of sophisticated evolutionary models, combined with dense sampling of large numbers of characters. However, systematic biases and superimposed substitutions make this task very difficult. Model adequacy can sometimes be achieved at the cost of adding large numbers of free parameters, with each parameter being optimized according to some criterion, resulting in increased computation times and large variances in the model estimates. In this study, we develop a simple approach that estimates the relative evolutionary rate of each homologous character. The method that we describe uses the similarity between characters as a proxy for evolutionary rate. In this article, we work on the premise that if the character-state distribution of a homologous character is similar to many other characters, then this character is likely to be relatively slowly evolving. If the character-state distribution of a homologous character is not similar to many or any of the rest of the characters in a data set, then it is likely to be the result of rapid evolution. We show that in some test cases, at least, the premise can hold and the inferences are robust. Importantly, the method does not use a "starting tree" to make the inference and therefore is tree independent. We demonstrate that this approach can work as well as a maximum likelihood (ML) approach, though the ML method needs to have a known phylogeny, or at least a very good estimate of that phylogeny. We then demonstrate some uses for this method of analysis, including the improvement in phylogeny reconstruction for both deep-level and recent relationships and overcoming systematic biases such as base composition bias. Furthermore, we compare this approach to two well-established methods for reweighting or removing characters. These other methods are tree-based and we show that they can be systematically biased. We feel this method can be useful for phylogeny reconstruction, understanding evolutionary rate variation, and for understanding selection variation on different characters.
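The premise of similarity as a rate proxy can be made concrete. Below is a small illustrative sketch, not the authors' exact statistic: each character is scored by how often its induced same/different partition of taxon pairs agrees with those of the other characters, so low average agreement flags putatively fast-evolving characters.

```python
import numpy as np
from itertools import combinations

def rate_proxy(alignment):
    """Score each character (column) by how often its same/different
    partition of taxon pairs agrees with the other characters'; low
    average agreement suggests a fast-evolving character."""
    n_taxa, n_chars = alignment.shape
    pairs = list(combinations(range(n_taxa), 2))
    # same[k, c]: does taxon pair k share a state at character c?
    same = np.array([alignment[i] == alignment[j] for i, j in pairs])
    agree = (same[:, :, None] == same[:, None, :]).mean(axis=0)
    np.fill_diagonal(agree, np.nan)
    return np.nanmean(agree, axis=0)    # high = similar to many = slow

# toy alignment: 6 taxa, last character deliberately "noisy" (fast)
aln = np.array([[0, 0, 1, 0],
                [0, 0, 1, 1],
                [0, 0, 1, 0],
                [1, 1, 0, 1],
                [1, 1, 0, 0],
                [1, 1, 0, 1]])
print(rate_proxy(aln).round(2))
```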
NASA Astrophysics Data System (ADS)
Winant, Celeste D.; Aparici, Carina Mari; Zelnik, Yuval R.; Reutter, Bryan W.; Sitek, Arkadiusz; Bacharach, Stephen L.; Gullberg, Grant T.
2012-01-01
Computer simulations, a phantom study and a human study were performed to determine whether a slowly rotating single-photon emission computed tomography (SPECT) system could provide accurate arterial input functions for quantification of myocardial perfusion imaging using kinetic models. The errors induced by data inconsistency associated with imaging with slow camera rotation during tracer injection were evaluated with an approach called SPECT/P (dynamic SPECT from positron emission tomography (PET)) and SPECT/D (dynamic SPECT from a database of SPECT phantom projections). SPECT/P simulated SPECT-like dynamic projections using reprojections of reconstructed dynamic 94Tc-methoxyisobutylisonitrile (94Tc-MIBI) PET images acquired in three human subjects (1 min infusion). This approach was used to evaluate the accuracy of estimating myocardial wash-in rate parameters K1 for rotation speeds providing 180° of projection data every 27 or 54 s. Blood input and myocardium tissue time-activity curves (TACs) were estimated using spatiotemporal splines. These were fit to a one-compartment perfusion model to obtain wash-in rate parameters K1. For the second method (SPECT/D), an anthropomorphic cardiac torso phantom was used to create real SPECT dynamic projection data of a tracer distribution derived from 94Tc-MIBI PET scans in the blood pool, myocardium, liver and background. This method introduced attenuation, collimation and scatter into the modeling of dynamic SPECT projections. Both approaches were used to evaluate the accuracy of estimating myocardial wash-in parameters for rotation speeds providing 180° of projection data every 27 and 54 s. Dynamic cardiac SPECT was also performed in a human subject at rest using a hybrid SPECT/CT scanner. Dynamic measurements of 99mTc-tetrofosmin in the myocardium were obtained using an infusion time of 2 min. Blood input, myocardium tissue and liver TACs were estimated using the same spatiotemporal splines. The spatiotemporal maximum-likelihood expectation-maximization (4D ML-EM) reconstructions gave more accurate results than did standard frame-by-frame static 3D ML-EM reconstructions. The SPECT/P results showed that 4D ML-EM reconstruction gave higher and more accurate estimates of K1 than did 3D ML-EM, yielding anywhere from a 44% underestimation to a 24% overestimation for the three patients. The SPECT/D results showed that 4D ML-EM reconstruction gave an overestimation of 28% and 3D ML-EM gave an underestimation of 1% for K1. For the patient study the 4D ML-EM reconstruction provided continuous images as a function of time of the concentration in both ventricular cavities and the myocardium during the 2 min infusion. It is demonstrated that a 2 min infusion with a two-headed SPECT system rotating 180° every 54 s can produce measurements of blood pool and myocardial TACs, though the SPECT simulation studies showed that one must sample at least every 30 s to capture a 1 min infusion input function.
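The wash-in estimation step amounts to fitting a one-compartment model to the blood and tissue TACs. A minimal sketch follows, with a toy input function standing in for the measured blood curve; all names and values are illustrative.

```python
import numpy as np
from scipy.optimize import curve_fit

t = np.arange(0.0, 10.0, 0.05)                   # minutes
blood = 8.0 * t * np.exp(-t)                     # toy arterial input function

def tissue(t, K1, k2):
    """One-compartment model: C_t = K1 * (C_b convolved with exp(-k2 t))."""
    dt = t[1] - t[0]
    return K1 * np.convolve(blood, np.exp(-k2 * t))[: t.size] * dt

rng = np.random.default_rng(1)
meas = tissue(t, 0.8, 0.3) + 0.02 * rng.standard_normal(t.size)
(K1, k2), _ = curve_fit(tissue, t, meas, p0=(0.5, 0.5), bounds=(0.0, 5.0))
print(f"K1 = {K1:.3f} (true 0.8), k2 = {k2:.3f} (true 0.3)")
```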
Dahabreh, Issa J; Trikalinos, Thomas A; Lau, Joseph; Schmid, Christopher H
2017-03-01
To compare statistical methods for meta-analysis of sensitivity and specificity of medical tests (e.g., diagnostic or screening tests). We constructed a database of PubMed-indexed meta-analyses of test performance from which 2 × 2 tables for each included study could be extracted. We reanalyzed the data using univariate and bivariate random effects models fit with inverse variance and maximum likelihood methods. Analyses were performed using both normal and binomial likelihoods to describe within-study variability. The bivariate model using the binomial likelihood was also fit using a fully Bayesian approach. We use two worked examples-thoracic computerized tomography to detect aortic injury and rapid prescreening of Papanicolaou smears to detect cytological abnormalities-to highlight that different meta-analysis approaches can produce different results. We also present results from reanalysis of 308 meta-analyses of sensitivity and specificity. Models using the normal approximation produced sensitivity and specificity estimates closer to 50% and smaller standard errors compared to models using the binomial likelihood; absolute differences of 5% or greater were observed in 12% and 5% of meta-analyses for sensitivity and specificity, respectively. Results from univariate and bivariate random effects models were similar, regardless of estimation method. Maximum likelihood and Bayesian methods produced almost identical summary estimates under the bivariate model; however, Bayesian analyses indicated greater uncertainty around those estimates. Bivariate models produced imprecise estimates of the between-study correlation of sensitivity and specificity. Differences between methods were larger with increasing proportion of studies that were small or required a continuity correction. The binomial likelihood should be used to model within-study variability. Univariate and bivariate models give similar estimates of the marginal distributions for sensitivity and specificity. Bayesian methods fully quantify uncertainty and their ability to incorporate external evidence may be useful for imprecisely estimated parameters. Copyright © 2017 Elsevier Inc. All rights reserved.
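The practical difference between the normal-approximation and binomial likelihoods is visible even for a single study. The toy sketch below contrasts a logit-scale Wald interval with the exact (Clopper-Pearson) interval derived from the binomial likelihood; the counts are invented for illustration.

```python
import numpy as np
from scipy import stats

tp, n = 45, 50                                   # invented single study
p_hat = tp / n                                   # sensitivity 0.90

# normal approximation on the logit scale (Wald interval)
logit = np.log(tp / (n - tp))
se = np.sqrt(1.0 / tp + 1.0 / (n - tp))
wald = 1.0 / (1.0 + np.exp(-(logit + np.array([-1.96, 1.96]) * se)))

# exact interval from the binomial likelihood (Clopper-Pearson)
exact = (stats.beta.ppf(0.025, tp, n - tp + 1),
         stats.beta.ppf(0.975, tp + 1, n - tp))

print(f"estimate {p_hat:.2f}, logit-normal CI {wald.round(3)},"
      f" binomial-exact CI {np.round(exact, 3)}")
```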
Huang, Chiung-Yu; Qin, Jing
2013-01-01
The Canadian Study of Health and Aging (CSHA) employed a prevalent cohort design to study survival after onset of dementia, where patients with dementia were sampled and the onset time of dementia was determined retrospectively. The prevalent cohort sampling scheme favors individuals who survive longer. Thus, the observed survival times are subject to length bias. In recent years, there has been a rising interest in developing estimation procedures for prevalent cohort survival data that not only account for length bias but also actually exploit the incidence distribution of the disease to improve efficiency. This article considers semiparametric estimation of the Cox model for the time from dementia onset to death under a stationarity assumption with respect to the disease incidence. Under the stationarity condition, the semiparametric maximum likelihood estimation is expected to be fully efficient yet difficult to perform for statistical practitioners, as the likelihood depends on the baseline hazard function in a complicated way. Moreover, the asymptotic properties of the semiparametric maximum likelihood estimator are not well-studied. Motivated by the composite likelihood method (Besag 1974), we develop a composite partial likelihood method that retains the simplicity of the popular partial likelihood estimator and can be easily performed using standard statistical software. When applied to the CSHA data, the proposed method estimates a significant difference in survival between the vascular dementia group and the possible Alzheimer’s disease group, while the partial likelihood method for left-truncated and right-censored data yields a greater standard error and a 95% confidence interval covering 0, thus highlighting the practical value of employing a more efficient methodology. To check the assumption of stable disease for the CSHA data, we also present new graphical and numerical tests in the article. The R code used to obtain the maximum composite partial likelihood estimator for the CSHA data is available in the online Supplementary Material, posted on the journal web site. PMID:24000265
Sethi, Suresh; Linden, Daniel; Wenburg, John; Lewis, Cara; Lemons, Patrick R.; Fuller, Angela K.; Hare, Matthew P.
2016-01-01
Error-tolerant likelihood-based match calling presents a promising technique to accurately identify recapture events in genetic mark–recapture studies by combining probabilities of latent genotypes and probabilities of observed genotypes, which may contain genotyping errors. Combined with clustering algorithms to group samples into sets of recaptures based upon pairwise match calls, these tools can be used to reconstruct accurate capture histories for mark–recapture modelling. Here, we assess the performance of a recently introduced error-tolerant likelihood-based match-calling model and sample clustering algorithm for genetic mark–recapture studies. We assessed both biallelic (i.e. single nucleotide polymorphisms; SNP) and multiallelic (i.e. microsatellite; MSAT) markers using a combination of simulation analyses and case study data on Pacific walrus (Odobenus rosmarus divergens) and fishers (Pekania pennanti). A novel two-stage clustering approach is demonstrated for genetic mark–recapture applications. First, repeat captures within a sampling occasion are identified. Subsequently, recaptures across sampling occasions are identified. The likelihood-based matching protocol performed well in simulation trials, demonstrating utility for use in a wide range of genetic mark–recapture studies. Moderately sized SNP (64+) and MSAT (10–15) panels produced accurate match calls for recaptures and accurate non-match calls for samples from closely related individuals in the face of low to moderate genotyping error. Furthermore, matching performance remained stable or increased as the number of genetic markers increased, genotyping error notwithstanding.
Lai, Zongying; Zhang, Xinlin; Guo, Di; Du, Xiaofeng; Yang, Yonggui; Guo, Gang; Chen, Zhong; Qu, Xiaobo
2018-05-03
Multi-contrast images in magnetic resonance imaging (MRI) provide abundant contrast information reflecting the characteristics of the internal tissues of human bodies, and thus have been widely utilized in clinical diagnosis. However, long acquisition times limit the application of multi-contrast MRI. One efficient way to accelerate data acquisition is to under-sample the k-space data and then reconstruct images with a sparsity constraint. However, images are compromised at high acceleration factors if they are reconstructed individually. We aim to improve image quality with a jointly sparse reconstruction based on a graph-based redundant wavelet transform (GBRWT). First, the sparsifying transform, GBRWT, is trained to reflect the similarity of tissue structures in multi-contrast images. Second, joint multi-contrast image reconstruction is formulated as an ℓ2,1-norm optimization problem under GBRWT representations. Third, the optimization problem is numerically solved using a derived alternating direction method. Experimental results on synthetic and in vivo MRI data demonstrate that the proposed joint reconstruction method can achieve lower reconstruction errors and better preserve image structures than the compared joint reconstruction methods. Moreover, the proposed method outperforms single-image reconstruction with a joint sparsity constraint on multi-contrast images. The proposed method explores the joint sparsity of multi-contrast MRI images under a graph-based redundant wavelet transform and realizes joint sparse reconstruction of multi-contrast images. Experiments demonstrate that the proposed method outperforms the compared joint reconstruction methods as well as individual reconstructions. With this high-quality image reconstruction method, it is possible to achieve high acceleration factors by exploring the complementary information provided by multi-contrast MRI.
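Joint sparsity across contrasts enters through the ℓ2,1 norm, whose proximal operator is row-wise group soft-thresholding; in an alternating-direction scheme it appears as a subproblem like the sketch below, under the assumption that rows index transform coefficients across contrasts.

```python
import numpy as np

def prox_l21(X, tau):
    """Proximal operator of tau * ||X||_{2,1}: row-wise group
    soft-thresholding. Each row holds one coefficient across all
    contrasts, so a whole row survives or shrinks together."""
    norms = np.linalg.norm(X, axis=1, keepdims=True)
    return np.maximum(1.0 - tau / np.maximum(norms, 1e-12), 0.0) * X

# toy usage: 4 coefficients x 3 contrasts
X = np.array([[3.0, 4.0, 0.0],     # strong joint coefficient (kept)
              [0.1, -0.1, 0.05],   # weak everywhere (zeroed)
              [2.0, 0.0, 0.0],
              [0.0, 0.5, 0.5]])
print(prox_l21(X, tau=1.0))
```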
On Bayesian Testing of Additive Conjoint Measurement Axioms Using Synthetic Likelihood.
Karabatsos, George
2018-06-01
This article introduces a Bayesian method for testing the axioms of additive conjoint measurement. The method is based on an importance sampling algorithm that performs likelihood-free, approximate Bayesian inference using a synthetic likelihood to overcome the analytical intractability of this testing problem. This new method improves upon previous methods because it provides an omnibus test of the entire hierarchy of cancellation axioms, beyond double cancellation. It does so while accounting for the posterior uncertainty that is inherent in the empirical orderings that are implied by these axioms, together. The new method is illustrated through a test of the cancellation axioms on a classic survey data set, and through the analysis of simulated data.
Maximum-likelihood estimation of parameterized wavefronts from multifocal data
Sakamoto, Julia A.; Barrett, Harrison H.
2012-01-01
A method for determining the pupil phase distribution of an optical system is demonstrated. Coefficients in a wavefront expansion were estimated using likelihood methods, where the data consisted of multiple irradiance patterns near focus. Proof-of-principle results were obtained in both simulation and experiment. Large-aberration wavefronts were handled in the numerical study. Experimentally, we discuss the handling of nuisance parameters. Fisher information matrices, Cramér-Rao bounds, and likelihood surfaces are examined. ML estimates were obtained by simulated annealing to deal with numerous local extrema in the likelihood function. Rapid processing techniques were employed to reduce the computational time. PMID:22772282
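As an illustration of the estimation strategy, the sketch below anneals wavefront-expansion coefficients against a Gaussian-noise likelihood using a toy Fourier-optics forward model; the basis, noise level, and annealing schedule are assumptions, not the paper's configuration.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 32
yy, xx = np.mgrid[-1:1:N * 1j, -1:1:N * 1j]
pupil = (xx**2 + yy**2) <= 1.0
basis = np.stack([xx, yy, xx * yy, xx**2 - yy**2])   # toy aberration modes

def irradiance(c):
    """Toy forward model: focal-plane pattern from the pupil phase."""
    field = pupil * np.exp(1j * np.tensordot(c, basis, axes=1))
    I = np.abs(np.fft.fftshift(np.fft.fft2(field)))**2
    return I / I.max()

c_true = np.array([0.5, -0.3, 0.8, 0.2])
data = irradiance(c_true) + rng.normal(0.0, 0.01, (N, N))

def nll(c):                                          # Gaussian-noise -log L
    return 0.5 * np.sum((data - irradiance(c))**2)

c, e, T = np.zeros(4), nll(np.zeros(4)), 1.0
for step in range(5000):                             # simulated annealing
    cand = c + rng.normal(0.0, 0.05, 4)
    de = nll(cand) - e
    if de < 0 or rng.random() < np.exp(-de / T):
        c, e = cand, e + de
    T *= 0.999                                       # geometric cooling
print("estimated coefficients:", c.round(2))
```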
Blackman, Arne V.; Grabuschnig, Stefan; Legenstein, Robert; Sjöström, P. Jesper
2014-01-01
Accurate 3D reconstruction of neurons is vital for applications linking anatomy and physiology. Reconstructions are typically created using Neurolucida after biocytin histology (BH). An alternative inexpensive and fast method is to use freeware such as Neuromantic to reconstruct from fluorescence imaging (FI) stacks acquired using 2-photon laser-scanning microscopy during physiological recording. We compare these two methods with respect to morphometry, cell classification, and multicompartmental modeling in the NEURON simulation environment. Quantitative morphological analysis of the same cells reconstructed using both methods reveals that whilst biocytin reconstructions facilitate tracing of more distal collaterals, both methods are comparable in representing the overall morphology: automated clustering of reconstructions from both methods successfully separates neocortical basket cells from pyramidal cells but not BH from FI reconstructions. BH reconstructions suffer more from tissue shrinkage and compression artifacts than FI reconstructions do. FI reconstructions, on the other hand, consistently have larger process diameters. Consequently, significant differences in NEURON modeling of excitatory post-synaptic potential (EPSP) forward propagation are seen between the two methods, with FI reconstructions exhibiting smaller depolarizations. Simulated action potential backpropagation (bAP), however, is indistinguishable between reconstructions obtained with the two methods. In our hands, BH reconstructions are necessary for NEURON modeling and detailed morphological tracing, and thus remain state of the art, although they are more labor intensive, more expensive, and suffer from a higher failure rate due to the occasional poor outcome of histological processing. However, for a subset of anatomical applications such as cell type identification, FI reconstructions are superior, because of indistinguishable classification performance with greater ease of use, essentially 100% success rate, and lower cost. PMID:25071470
Campos-Filho, N; Franco, E L
1989-02-01
A frequent procedure in matched case-control studies is to report results from the multivariate unmatched analyses if they do not differ substantially from the ones obtained after conditioning on the matching variables. Although conceptually simple, this rule requires that an extensive series of logistic regression models be evaluated by both the conditional and unconditional maximum likelihood methods. Most computer programs for logistic regression employ only one maximum likelihood method, which requires that the analyses be performed in separate steps. This paper describes a Pascal microcomputer (IBM PC) program that performs multiple logistic regression by both maximum likelihood estimation methods, which obviates the need for switching between programs to obtain relative risk estimates from both matched and unmatched analyses. The program calculates most standard statistics and allows factoring of categorical or continuous variables by two distinct methods of contrast. A built-in, descriptive statistics option allows the user to inspect the distribution of cases and controls across categories of any given variable.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Abulencia, A.; Acosta, D.; Adelman, Jahred A.
2006-02-01
The authors describe a measurement of the top quark mass from events produced in pp̄ collisions at a center-of-mass energy of 1.96 TeV, using the Collider Detector at Fermilab. They identify tt̄ candidates where both W bosons from the top quarks decay into leptons (eν, μν, or τν) from a data sample of 360 pb⁻¹. The top quark mass is reconstructed in each event separately by three different methods, which draw upon simulated distributions of the neutrino pseudorapidity, tt̄ longitudinal momentum, or neutrino azimuthal angle in order to extract probability distributions for the top quark mass. For each method, representative mass distributions, or templates, are constructed from simulated samples of signal and background events, and parameterized to form continuous probability density functions. A likelihood fit incorporating these parameterized templates is then performed on the data sample masses in order to derive a final top quark mass. Combining the three template methods, taking into account correlations in their statistical and systematic uncertainties, results in a top quark mass measurement of 170.1 ± 6.0 (stat.) ± 4.1 (syst.) GeV/c².
High-Order Model and Dynamic Filtering for Frame Rate Up-Conversion.
Bao, Wenbo; Zhang, Xiaoyun; Chen, Li; Ding, Lianghui; Gao, Zhiyong
2018-08-01
This paper proposes a novel frame rate up-conversion method based on a high-order model and dynamic filtering (HOMDF) for video pixels. Unlike the constant-brightness and linear-motion assumptions in traditional methods, the intensity and position of the video pixels are both modeled with high-order polynomials in terms of time. The key problem of our method is then to estimate the polynomial coefficients that represent the pixel's intensity variation, velocity, and acceleration. We propose to solve it with two energy objectives: one minimizes the auto-regressive prediction error of the intensity variation from its past samples, and the other minimizes the video frame's reconstruction error along the motion trajectory. To efficiently address the optimization problem for these coefficients, we propose a dynamic filtering solution inspired by video's temporal coherence. The optimal estimation of these coefficients is reformulated as a dynamic fusion of the prior estimate from the pixel's temporal predecessor and the maximum likelihood estimate from the current new observation. Finally, frame rate up-conversion is implemented using motion-compensated interpolation with pixel-wise intensity variation and motion trajectory. Benefiting from the advanced model and dynamic filtering, the interpolated frames have much better visual quality. Extensive experiments on natural and synthesized videos demonstrate the superiority of HOMDF over the state-of-the-art methods in both subjective and objective comparisons.
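The dynamic-fusion step has a familiar closed form in the scalar Gaussian case: the prior estimate from the pixel's temporal predecessor and the ML estimate from the new observation are combined by precision weighting, as in the sketch below (variable names are illustrative, not the paper's notation).

```python
def fuse(prior_mean, prior_var, obs_mean, obs_var):
    """Dynamic-filtering fusion: blend the prior estimate from the pixel's
    temporal predecessor with the maximum-likelihood estimate from the new
    observation by precision weighting (scalar Kalman-style update)."""
    gain = prior_var / (prior_var + obs_var)
    mean = prior_mean + gain * (obs_mean - prior_mean)
    return mean, (1.0 - gain) * prior_var

# toy usage: confident prior, noisier new observation
print(fuse(prior_mean=10.0, prior_var=1.0, obs_mean=14.0, obs_var=3.0))
```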
Jha, Abhinav K; Barrett, Harrison H; Frey, Eric C; Clarkson, Eric; Caucci, Luca; Kupinski, Matthew A
2015-09-21
Recent advances in technology are enabling a new class of nuclear imaging systems consisting of detectors that use real-time maximum-likelihood (ML) methods to estimate the interaction position, deposited energy, and other attributes of each photon-interaction event and store these attributes in a list format. This class of systems, which we refer to as photon-processing (PP) nuclear imaging systems, can be described by a fundamentally different mathematical imaging operator that allows processing of the continuous-valued photon attributes on a per-photon basis. Unlike conventional photon-counting (PC) systems that bin the data into images, PP systems do not have any binning-related information loss. Mathematically, while PC systems have an infinite-dimensional null space due to dimensionality considerations, PP systems do not necessarily suffer from this issue. Therefore, PP systems have the potential to provide improved performance in comparison to PC systems. To study these advantages, we propose a framework to perform the singular-value decomposition (SVD) of the PP imaging operator. We use this framework to perform the SVD of operators that describe a general two-dimensional (2D) planar linear shift-invariant (LSIV) PP system and a hypothetical continuously rotating 2D single-photon emission computed tomography (SPECT) PP system. We then discuss two applications of the SVD framework. The first application is to decompose the object being imaged by the PP imaging system into measurement and null components. We compare these components to the measurement and null components obtained with PC systems. In the process, we also present a procedure to compute the null functions for a PC system. The second application is designing analytical reconstruction algorithms for PP systems. The proposed analytical approach exploits the fact that PP systems acquire data in a continuous domain to estimate a continuous object function. The approach is parallelizable and implemented for graphics processing units (GPUs). Further, this approach leverages another important advantage of PP systems, namely the possibility to perform photon-by-photon real-time reconstruction. We demonstrate the application of the approach to perform reconstruction in a simulated 2D SPECT system. The results help to validate and demonstrate the utility of the proposed method and show that PP systems can help overcome the aliasing artifacts that are otherwise intrinsically present in PC systems.
Ghaderi, Parviz; Marateb, Hamid R
2017-07-01
The aim of this study was to reconstruct low-quality high-density surface EMG (HDsEMG) signals, recorded with 2-D electrode arrays, using image inpainting and surface reconstruction methods. It is common for some fraction of the electrodes to provide low-quality signals. We used a variety of image inpainting methods based on partial differential equations (PDEs), along with surface reconstruction methods, to reconstruct the time-averaged or instantaneous muscle activity maps of those outlier channels. Two novel reconstruction algorithms were also proposed. HDsEMG signals were recorded from the biceps femoris and biceps brachii muscles during low-to-moderate-level isometric contractions, and some of the channels (5-25%) were randomly marked as outliers. The root-mean-square error (RMSE) between the original and reconstructed maps was then calculated. Overall, the proposed Poisson and wave PDE methods outperformed the other methods (average RMSE 8.7 ± 6.1 μVrms and 7.5 ± 5.9 μVrms) for time-averaged single-differential and monopolar map reconstruction, respectively. Biharmonic spline, the discrete cosine transform, and the Poisson PDE outperformed the other methods for instantaneous map reconstruction. The running time of the proposed Poisson and wave PDE methods, implemented in vectorized form, was 4.6 ± 5.7 ms and 0.6 ± 0.5 ms, respectively, per signal epoch or time sample in each channel. The proposed reconstruction algorithms could be promising new tools for reconstructing muscle activity maps in real-time applications. Proper reconstruction methods can recover the information of low-quality channels in HDsEMG recordings.
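Among the PDE-based methods, harmonic (Laplace/Poisson-type) inpainting of the activity map is the easiest to sketch: unknown channels are iteratively replaced by the average of their neighbours until the discrete Laplace equation holds over the masked region. The sketch below uses periodic boundaries for brevity and is not the authors' optimized implementation.

```python
import numpy as np

def inpaint_laplace(img, mask, n_iter=2000):
    """Harmonic (Laplace-type) inpainting: iterate until each masked
    (outlier) pixel equals the mean of its 4 neighbours. Periodic
    boundaries via np.roll, for brevity only."""
    out = img.copy()
    out[mask] = img[~mask].mean()               # neutral initialization
    for _ in range(n_iter):
        nb = (np.roll(out, 1, 0) + np.roll(out, -1, 0) +
              np.roll(out, 1, 1) + np.roll(out, -1, 1)) / 4.0
        out[mask] = nb[mask]                    # update unknown channels only
    return out

# toy usage: 8x8 activity map with 10% outlier channels
rng = np.random.default_rng(0)
amap = rng.random((8, 8))
mask = rng.random((8, 8)) < 0.10
print(inpaint_laplace(amap, mask).round(2))
```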
Flip-avoiding interpolating surface registration for skull reconstruction.
Xie, Shudong; Leow, Wee Kheng; Lee, Hanjing; Lim, Thiam Chye
2018-03-30
Skull reconstruction is an important and challenging task in craniofacial surgery planning, forensic investigation and anthropological studies. Existing methods typically reconstruct approximating surfaces that regard corresponding points on the target skull as soft constraints, thus incurring non-zero error even for non-defective parts and high overall reconstruction error. This paper proposes a novel geometric reconstruction method that non-rigidly registers an interpolating reference surface that regards corresponding target points as hard constraints, thus achieving low reconstruction error. To overcome the shortcoming of interpolating a surface, a flip-avoiding method is used to detect and exclude conflicting hard constraints that would otherwise cause surface patches to flip and self-intersect. Comprehensive test results show that our method is more accurate and robust than existing skull reconstruction methods. By incorporating symmetry constraints, it can produce more symmetric and normal results than other methods in reconstructing defective skulls with a large number of defects. It is robust against severe outliers such as radiation artifacts in computed tomography due to dental implants. In addition, test results also show that our method outperforms thin-plate spline for model resampling, which enables the active shape model to yield more accurate reconstruction results. As the reconstruction accuracy of defective parts varies with the use of different reference models, we also study the implication of reference model selection for skull reconstruction. Copyright © 2018 John Wiley & Sons, Ltd.
Multisensor fusion for 3D target tracking using track-before-detect particle filter
NASA Astrophysics Data System (ADS)
Moshtagh, Nima; Romberg, Paul M.; Chan, Moses W.
2015-05-01
This work presents a novel fusion mechanism for estimating the three-dimensional trajectory of a moving target using images collected by multiple imaging sensors. The proposed projective particle filter avoids explicit target detection prior to fusion. In the projective particle filter, particles that represent the posterior density (of the target state in a high-dimensional space) are projected onto the lower-dimensional observation space. Measurements are generated directly in the observation space (image plane) and a marginal (sensor) likelihood is computed. The particle states and their weights are updated using the joint likelihood computed from all the sensors. The 3D state estimate of the target (system track) is then generated from the states of the particles. This approach is similar to track-before-detect particle filters, which are known to perform well in tracking dim and stealthy targets in image collections. Our approach extends the track-before-detect approach to 3D tracking using the projective particle filter. The performance of this measurement-level fusion method is compared with that of a track-level fusion algorithm using the projective particle filter. In the track-level fusion algorithm, the 2D sensor tracks are generated separately and transmitted to a fusion center, where they are treated as measurements for the state estimator. The 2D sensor tracks are then fused to reconstruct the system track. A realistic synthetic scenario with a boosting target was generated and used to study the performance of the fusion mechanisms.
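A compact sketch of the measurement-level update follows: particles in 3D are projected into each sensor's image plane, a per-sensor likelihood is read from the image evidence, and the weights are updated with the joint (product) likelihood. The pinhole camera and image-likelihood lookup are toy placeholders, not the paper's sensor models.

```python
import numpy as np

rng = np.random.default_rng(0)
parts = rng.normal(0.0, 10.0, (500, 3))          # 3D position particles
weights = np.ones(len(parts)) / len(parts)

def project(p, cam):
    """Toy pinhole projection of 3D points into a sensor's image plane."""
    R, t, f = cam
    q = p @ R.T + t
    return f * q[:, :2] / q[:, 2:3]

def sensor_loglik(uv, frame):
    """Marginal per-sensor log-likelihood: image evidence looked up at the
    projected pixel locations (placeholder nearest-pixel lookup)."""
    ij = np.clip(uv.astype(int) + 64, 0, 127)
    return np.log(frame[ij[:, 1], ij[:, 0]] + 1e-9)

def update(parts, weights, cameras, frames):
    loglik = np.zeros(len(parts))
    for cam, frame in zip(cameras, frames):      # joint likelihood over sensors
        loglik += sensor_loglik(project(parts, cam), frame)
    w = weights * np.exp(loglik - loglik.max())
    return w / w.sum()

# toy usage: one camera looking down +z, one random intensity frame
cam = (np.eye(3), np.array([0.0, 0.0, 50.0]), 100.0)
weights = update(parts, weights, [cam], [rng.random((128, 128))])
print((weights @ parts).round(2))                # 3D system-track estimate
```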
NASA Astrophysics Data System (ADS)
Aioanei, Daniel; Samorì, Bruno; Brucale, Marco
2009-12-01
Single molecule force spectroscopy (SMFS) is extensively used to characterize the mechanical unfolding behavior of individual protein domains under applied force by pulling chimeric polyproteins consisting of identical tandem repeats. Constant velocity unfolding SMFS data can be employed to reconstruct the protein unfolding energy landscape and kinetics. The methods applied so far require the specification of a single stretching force increase function, either theoretically derived or experimentally inferred, which must then be assumed to accurately describe the entirety of the experimental data. The very existence of a suitable optimal force model, even in the context of a single experimental data set, is still questioned. Herein, we propose a maximum likelihood (ML) framework for the estimation of protein kinetic parameters which can accommodate all the established theoretical force increase models. Our framework does not presuppose the existence of a single force characteristic function. Rather, it can be used with a heterogeneous set of functions, each describing the protein behavior in the stretching time range leading to one rupture event. We propose a simple way of constructing such a set of functions via piecewise linear approximation of the SMFS force vs time data and we prove the suitability of the approach both with synthetic data and experimentally. Additionally, when the spontaneous unfolding rate is the only unknown parameter, we find a correction factor that eliminates the bias of the ML estimator while also reducing its variance. Finally, we investigate which of several time-constrained experiment designs leads to better estimators.
Non-homogeneous updates for the iterative coordinate descent algorithm
NASA Astrophysics Data System (ADS)
Yu, Zhou; Thibault, Jean-Baptiste; Bouman, Charles A.; Sauer, Ken D.; Hsieh, Jiang
2007-02-01
Statistical reconstruction methods show great promise for improving resolution, and reducing noise and artifacts in helical X-ray CT. In fact, statistical reconstruction seems to be particularly valuable in maintaining reconstructed image quality when the dosage is low and the noise is therefore high. However, high computational cost and long reconstruction times remain as a barrier to the use of statistical reconstruction in practical applications. Among the various iterative methods that have been studied for statistical reconstruction, iterative coordinate descent (ICD) has been found to have relatively low overall computational requirements due to its fast convergence. This paper presents a novel method for further speeding the convergence of the ICD algorithm, and therefore reducing the overall reconstruction time for statistical reconstruction. The method, which we call nonhomogeneous iterative coordinate descent (NH-ICD) uses spatially non-homogeneous updates to speed convergence by focusing computation where it is most needed. Experimental results with real data indicate that the method speeds reconstruction by roughly a factor of two for typical 3D multi-slice geometries.
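The idea of non-homogeneous updates can be sketched for a least-squares surrogate cost: after a full homogeneous sweep, subsequent sweeps revisit only the coordinates with the largest recent changes. This is a simplification of NH-ICD (no Poisson log-likelihood, no regularizer), intended only to show the update-selection mechanism.

```python
import numpy as np

def nh_icd(A, y, x, n_sweeps=50, frac=0.3):
    """Non-homogeneous coordinate descent for the cost ||y - A x||^2.
    Sweep 0 visits every voxel; later sweeps revisit only the fraction
    of voxels with the largest recent updates."""
    r = y - A @ x                                # residual kept consistent
    col_sq = (A ** 2).sum(axis=0)
    change = np.zeros(x.size)
    order = np.arange(x.size)                    # first sweep: homogeneous
    for sweep in range(n_sweeps):
        for j in order:
            step = (A[:, j] @ r) / col_sq[j]     # exact 1D minimizer
            x[j] += step
            r -= step * A[:, j]
            change[j] = abs(step)
        order = np.argsort(change)[::-1][: int(frac * x.size)]
    return x

# toy usage on a random overdetermined system
rng = np.random.default_rng(0)
A, x_true = rng.random((40, 20)), rng.random(20)
x_hat = nh_icd(A, A @ x_true, np.zeros(20))
print("residual norm:", round(float(np.linalg.norm(A @ x_hat - A @ x_true)), 6))
```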
The Equivalence of Two Methods of Parameter Estimation for the Rasch Model.
ERIC Educational Resources Information Center
Blackwood, Larry G.; Bradley, Edwin L.
1989-01-01
Two methods of estimating parameters in the Rasch model are compared. The equivalence of likelihood estimations from the model of G. J. Mellenbergh and P. Vijn (1981) and from usual unconditional maximum likelihood (UML) estimation is demonstrated. Mellenbergh and Vijn's model is a convenient method of calculating UML estimates. (SLD)
Dual-Particle Imaging System with Neutron Spectroscopy for Safeguard Applications
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hamel, Michael C.; Weber, Thomas M.
2017-11-01
A dual-particle imager (DPI) has been designed that is capable of detecting gamma-ray and neutron signatures from shielded SNM. The system combines liquid organic and NaI(Tl) scintillators to form a combined Compton and neutron scatter camera. Effective image reconstruction of detected particles is a crucial component for maximizing the performance of the system; however, a key deficiency exists in the widely used iterative list-mode maximum-likelihood expectation-maximization (MLEM) image reconstruction technique. For MLEM a stopping condition is required to achieve a good-quality solution, but these conditions fail to achieve maximum image quality. Stochastic origin ensembles (SOE) imaging is a good candidate to address this problem, as it uses Markov chain Monte Carlo to reach a stochastic steady-state solution. The application of SOE to the DPI is presented in this work.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Murase, Kenya, E-mail: murase@sahs.med.osaka-u.ac.jp; Song, Ruixiao; Hiratsuka, Samu
We investigated the feasibility of visualizing blood coagulation using a system for magnetic particle imaging (MPI). A magnetic field-free line is generated using two opposing neodymium magnets, and transverse images are reconstructed from the third-harmonic signals received by a gradiometer coil, using the maximum likelihood-expectation maximization algorithm. Our MPI system was used to image the blood coagulation induced by adding CaCl2 to whole sheep blood mixed with magnetic nanoparticles (MNPs). The "MPI value" was defined as the pixel value of the transverse image reconstructed from the third-harmonic signals. MPI values were significantly smaller for coagulated blood samples than for those without coagulation. We confirmed the rationale of these results by calculating the third-harmonic signals for the measured viscosities of samples, under the assumption that the magnetization and particle size distribution of MNPs obey the Langevin equation and a log-normal distribution, respectively. We concluded that MPI can be useful for visualizing blood coagulation.
Lee, Jae H.; Yao, Yushu; Shrestha, Uttam; Gullberg, Grant T.; Seo, Youngho
2014-01-01
The primary goal of this project is to implement the iterative statistical image reconstruction algorithm, in this case maximum-likelihood expectation-maximization (MLEM) as used for dynamic cardiac single photon emission computed tomography, on Spark/GraphX. This involves porting the algorithm to run on large-scale parallel computing systems. Spark is an easy-to-program software platform that can handle large amounts of data in parallel. GraphX is a graph analytic system running on top of Spark to handle graph and sparse linear algebra operations in parallel. The main advantage of implementing the MLEM algorithm in Spark/GraphX is that it allows users to parallelize such computation without any expertise in parallel computing or prior knowledge in computer science. In this paper we demonstrate a successful implementation of MLEM in Spark/GraphX and present the performance gains with the goal of eventually making it usable in a clinical setting. PMID:27081299
Staggemeier, Vanessa Graziele; Diniz-Filho, José Alexandre Felizola; Forest, Félix; Lucas, Eve
2015-01-01
Background and Aims Myrcia section Aulomyrcia includes ∼120 species that are endemic to the Neotropics and disjunctly distributed in the moist Amazon and Atlantic coastal forests of Brazil. This paper presents the first comprehensive phylogenetic study of this group and this phylogeny is used as a basis to evaluate recent classification systems and to test alternative hypotheses associated with the history of this clade. Methods Fifty-three taxa were sampled out of the 120 species currently recognized, plus 40 outgroup taxa, for one nuclear marker (ribosomal internal transcribed spacer) and four plastid markers (psbA-trnH, trnL-trnF, trnQ-rpS16 and ndhF). The relationships were reconstructed based on Bayesian and maximum likelihood analyses. Additionally, a likelihood approach, 'geographic state speciation and extinction', was used to estimate region-dependent rates of speciation, extinction and dispersal, comparing historically climatically stable areas (refugia) and unstable areas. Key Results Maximum likelihood and Bayesian inferences indicate that Myrcia and Marlierea are polyphyletic, and the internal groupings recovered are characterized by combinations of morphological characters. Phylogenetic relationships support a link between Amazonian and north-eastern species and between north-eastern and south-eastern species. Lower extinction rates within glacial refugia suggest that these areas were important in maintaining diversity in the Atlantic forest biodiversity hotspot. Conclusions This study provides a robust phylogenetic framework to address important ecological questions for Myrcia s.l. within an evolutionary context, and supports the need to unite taxonomically the two traditional genera Myrcia and Marlierea in an expanded Myrcia s.l. Furthermore, this study offers valuable insights into the diversification of plant species in the highly impacted Atlantic forest of South America; evidence is presented that the lowest extinction rates are found inside refugia and that range expansion from unstable areas contributes to the highest levels of plant diversity in the Bahian refugium. PMID:25757471
[Application of Fourier transform profilometry in 3D-surface reconstruction].
Shi, Bi'er; Lu, Kuan; Wang, Yingting; Li, Zhen'an; Bai, Jing
2011-08-01
With the improvement of system frames and reconstruction methods in fluorescence molecular tomography (FMT), the FMT technology has been widely used as an important experimental tool in biomedical research. It is necessary to obtain the 3D surface profile of the experimental object as the boundary constraint of FMT reconstruction algorithms. We proposed a new 3D-surface reconstruction method based on the Fourier transform profilometry (FTP) method under blue-purple light conditions. The slice images were reconstructed using proper image processing methods, frequency spectrum analysis and filtering. The experimental results showed that the method properly reconstructs the 3D surface of objects with mm-level accuracy. Compared to other methods, this one is simple and fast. Beyond its reconstruction quality, the proposed method can help monitor the behavior of the object during the experiment to ensure the correspondence of the imaging process. Furthermore, the method uses blue-purple light as its light source to avoid interference with fluorescence imaging.
Reconstruction of fluorescence molecular tomography with a cosinoidal level set method.
Zhang, Xuanxuan; Cao, Xu; Zhu, Shouping
2017-06-27
Implicit shape-based reconstruction methods in fluorescence molecular tomography (FMT) are capable of achieving higher image clarity than image-based reconstruction methods. However, the implicit shape method suffers from a low convergence speed and performs unstably due to the use of gradient-based optimization methods. Moreover, the implicit shape method requires a priori information about the number of targets. A shape-based reconstruction scheme for FMT with a cosinoidal level set method is proposed in this paper. The Heaviside function in the classical implicit shape method is replaced with a cosine function, and the reconstruction can then be accomplished with the Levenberg-Marquardt method rather than gradient-based methods. As a result, a priori information about the number of targets is no longer required and the choice of step length is avoided. Numerical simulations and phantom experiments were carried out to validate the proposed method. Results of the proposed method show higher contrast-to-noise ratios and Pearson correlations than the implicit shape method and the image-based reconstruction method. Moreover, the number of iterations required by the proposed method is much smaller than for the implicit shape method. The proposed method performs more stably, provides a faster convergence speed than the implicit shape method, and achieves higher image clarity than the image-based reconstruction method.
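A minimal sketch of the cosine-based shape representation follows; the exact smooth-step form below is an assumption, not reproduced from the paper. The point is that the image becomes a differentiable function of the level-set field, which is what makes a Levenberg-Marquardt fit possible.

```python
import numpy as np

def smooth_step(phi):
    # Cosine-based smooth step standing in for the Heaviside function;
    # the paper's exact parameterization is assumed, not reproduced.
    t = np.clip(phi, 0.0, 1.0)
    return 0.5 * (1.0 - np.cos(np.pi * t))

def shape_image(phi, c_in, c_out):
    """Map a level-set field phi to an image taking value c_in inside the
    shape and c_out outside, smoothly in between; differentiability in phi
    is what permits Levenberg-Marquardt optimization."""
    h = smooth_step(phi)
    return c_in * h + c_out * (1.0 - h)
```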
An automated multi-scale network-based scheme for detection and location of seismic sources
NASA Astrophysics Data System (ADS)
Poiata, N.; Aden-Antoniow, F.; Satriano, C.; Bernard, P.; Vilotte, J. P.; Obara, K.
2017-12-01
We present a recently developed method - BackTrackBB (Poiata et al. 2016) - for imaging energy radiation from different seismic sources (e.g., earthquakes, LFEs, tremors) in different tectonic environments using continuous seismic records. The method exploits multi-scale frequency-selective coherence in the wave field recorded by regional seismic networks or local arrays. The detection and location scheme is based on space-time reconstruction of the seismic sources through an imaging function built from the sum of station-pair time-delay likelihood functions, projected onto theoretical 3D time-delay grids. This imaging function is interpreted as the location likelihood of the seismic source. A signal pre-processing step constructs a multi-band statistical representation of the non-stationary signal by means of higher-order statistics or energy-envelope characteristic functions. Such signal processing is designed to detect signal transients in time - of different scales and a priori unknown predominant frequency - potentially associated with a variety of sources (e.g., earthquakes, LFEs, tremors), and to improve the performance and robustness of the detection-and-location step. The initial detection-location, based on a single-phase analysis with the P- or S-phase only, can then be improved recursively in a station selection scheme. This scheme - exploiting the 3-component records - makes use of P- and S-phase characteristic functions, extracted after a polarization analysis of the event waveforms, and combines the single-phase imaging functions with the S-P differential imaging functions. The performance of the method is demonstrated here in different tectonic environments: (1) analysis of the one-year-long precursory phase of the 2014 Iquique earthquake in Chile; (2) detection and location of tectonic tremor sources and low-frequency earthquakes during multiple episodes of tectonic tremor activity in southwestern Japan.
NASA Technical Reports Server (NTRS)
1979-01-01
The computer program Linear SCIDNT which evaluates rotorcraft stability and control coefficients from flight or wind tunnel test data is described. It implements the maximum likelihood method to maximize the likelihood function of the parameters based on measured input/output time histories. Linear SCIDNT may be applied to systems modeled by linear constant-coefficient differential equations. This restriction in scope allows the application of several analytical results which simplify the computation and improve its efficiency over the general nonlinear case.
Dai, Qing-Yan; Gao, Qiang; Wu, Chun-Sheng; Chesters, Douglas; Zhu, Chao-Dong; Zhang, Ai-Bing
2012-01-01
Unlike distinct species, closely related species offer a great challenge for phylogeny reconstruction and species identification with DNA barcoding due to their often overlapping genetic variation. We tested a sibling species group of pine moth pests in China with a standard cytochrome c oxidase subunit I (COI) gene and two alternative internal transcribed spacer (ITS) genes (ITS1 and ITS2). Five different phylogenetic/DNA barcoding analysis methods (Maximum likelihood (ML)/Neighbor-joining (NJ), “best close match” (BCM), Minimum distance (MD), and BP-based method (BP)), representing commonly used methodology (tree-based and non-tree based) in the field, were applied to both single-gene and multiple-gene analyses. Our results demonstrated clear reciprocal species monophyly for three relatively distantly related species, Dendrolimus superans, D. houi, D. kikuchii, as recovered by both single and multiple genes, while the phylogenetic relationship of three closely related species, D. punctatus, D. tabulaeformis, D. spectabilis, could not be resolved with the traditional tree-building methods. Additionally, we find the standard COI barcode outperforms the two nuclear ITS genes, regardless of the method used. On average, the COI barcode achieved a success rate of 94.10–97.40%, while ITS1 and ITS2 obtained a success rate of 64.70–81.60%, indicating ITS genes are less suitable for species identification in this case. We propose the use of an overall success rate of species identification that takes both sequencing success and assignation success into account, since species identification success rates with multiple-gene barcoding systems were generally overestimated, especially by tree-based methods, where only successfully sequenced DNA sequences were used to construct a phylogenetic tree. Non-tree based methods, such as the MD, BCM, and BP approaches, presented advantages over tree-based methods by reporting the overall success rates with statistical significance. In addition, our results indicate that the most closely related species, D. punctatus, D. tabulaeformis, and D. spectabilis, may still be in the process of incomplete lineage sorting, with occasional hybridizations occurring among them. PMID:22509245
Level-set-based reconstruction algorithm for EIT lung images: first clinical results.
Rahmati, Peyman; Soleimani, Manuchehr; Pulletz, Sven; Frerichs, Inéz; Adler, Andy
2012-05-01
We show the first clinical results using the level-set-based reconstruction algorithm for electrical impedance tomography (EIT) data. The level-set-based reconstruction method (LSRM) allows the reconstruction of non-smooth interfaces between image regions, which are typically smoothed by traditional voxel-based reconstruction methods (VBRMs). We develop a time difference formulation of the LSRM for 2D images. The proposed reconstruction method is applied to reconstruct clinical EIT data of a slow flow inflation pressure-volume manoeuvre in lung-healthy and adult lung-injury patients. Images from the LSRM and the VBRM are compared. The results show comparable reconstructed images, but with an improved ability to reconstruct sharp conductivity changes in the distribution of lung ventilation using the LSRM.
High resolution x-ray CMT: Reconstruction methods
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brown, J.K.
This paper qualitatively discusses the primary characteristics of methods for reconstructing tomographic images from a set of projections. These reconstruction methods can be categorized as either "analytic" or "iterative" techniques. Analytic algorithms are derived from the formal inversion of equations describing the imaging process, while iterative algorithms incorporate a model of the imaging process and provide a mechanism to iteratively improve image estimates. Analytic reconstruction algorithms are typically computationally more efficient than iterative methods; however, analytic algorithms are available for a relatively limited set of imaging geometries and situations. Thus, the framework of iterative reconstruction methods is better suited for high-accuracy tomographic reconstruction codes.
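As a concrete instance of the "iterative" category, here is a minimal ART (Kaczmarz) sweep in NumPy, written as a generic sketch rather than code from the paper: each measured ray is enforced in turn by a relaxed correction along its row of the system matrix.

```python
import numpy as np

def art(A, y, n_sweeps=10, relax=0.5):
    """Minimal ART (Kaczmarz) iteration: for each ray i, correct the image
    so the modeled projection A[i] @ x moves toward the measurement y[i]."""
    x = np.zeros(A.shape[1])
    row_norms = (A ** 2).sum(axis=1)
    for _ in range(n_sweeps):
        for i in np.flatnonzero(row_norms):
            residual = y[i] - A[i] @ x
            x += relax * (residual / row_norms[i]) * A[i]
    return x
```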
Comparison of reconstruction methods and quantitative accuracy in Siemens Inveon PET scanner
NASA Astrophysics Data System (ADS)
Ram Yu, A.; Kim, Jin Su; Kang, Joo Hyun; Moo Lim, Sang
2015-04-01
PET reconstruction is key to the quantification of PET data. To our knowledge, no comparative study of reconstruction methods had been performed to date. In this study, we compared reconstruction methods with various filters in terms of their spatial resolution, non-uniformities (NU), recovery coefficients (RCs), and spillover ratios (SORs). In addition, the linearity between measured and true radioactivity concentrations was assessed. A Siemens Inveon PET scanner was used in this study. Spatial resolution was measured according to the NEMA standard using a 1 mm³ 18F point source. Image quality was assessed in terms of NU, RC and SOR. To measure the effect of reconstruction algorithms and filters, data were reconstructed using FBP, the 3D reprojection algorithm (3DRP), ordered subset expectation maximization 2D (OSEM 2D), and maximum a posteriori (MAP) with various filters or smoothing factors (β). To assess the linearity of reconstructed radioactivity, an image quality phantom filled with 18F was imaged and reconstructed using FBP, OSEM 2D and MAP (β = 1.5 and 5 × 10⁻⁵). The highest achievable volumetric resolution was 2.31 mm³ and the highest RCs were obtained when OSEM 2D was used. SOR was 4.87% for air and 3.97% for water when OSEM 2D reconstruction was used. The measured radioactivity of the reconstructed image was proportional to the injected radioactivity below 16 MBq/ml when the FBP or OSEM 2D reconstruction methods were used, whereas with the MAP reconstruction method the reconstructed activity increased proportionally regardless of the amount of injected radioactivity. Above this range, when OSEM 2D or FBP were used, the measured radioactivity concentration was reduced by 53% compared with the true injected radioactivity. The OSEM 2D reconstruction method provides the highest achievable volumetric resolution and the highest RCs among the tested methods and yields a linear relation between measured and true concentrations within the linear range. Our data collectively show that the OSEM 2D reconstruction method provides quantitatively accurate reconstructed PET data.
Unified framework to evaluate panmixia and migration direction among multiple sampling locations.
Beerli, Peter; Palczewski, Michal
2010-05-01
For many biological investigations, groups of individuals are genetically sampled from several geographic locations. These sampling locations often do not reflect the genetic population structure. We describe a framework using marginal likelihoods to compare and order structured population models, such as testing whether the sampling locations belong to the same randomly mating population or comparing unidirectional and multidirectional gene flow models. In the context of inferences employing Markov chain Monte Carlo methods, the accuracy of the marginal likelihoods depends heavily on the approximation method used to calculate the marginal likelihood. Two methods, modified thermodynamic integration and a stabilized harmonic mean estimator, are compared. With finite Markov chain Monte Carlo run lengths, the harmonic mean estimator may not be consistent. Thermodynamic integration, in contrast, delivers considerably better estimates of the marginal likelihood. The choice of prior distributions does not influence the order and choice of the better models when the marginal likelihood is estimated using thermodynamic integration, whereas with the harmonic mean estimator the influence of the prior is pronounced and the order of the models changes. The approximation of marginal likelihood using thermodynamic integration in MIGRATE allows the evaluation of complex population genetic models, not only of whether sampling locations belong to a single panmictic population, but also of competing complex structured population models.
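The contrast between the two estimators is easy to reproduce on a toy model where the marginal likelihood is known in closed form. The sketch below (a conjugate normal model, all sizes illustrative and unrelated to MIGRATE) implements thermodynamic integration over power posteriors and a numerically stabilized harmonic mean from posterior samples.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy conjugate model: data ~ N(mu, 1), prior mu ~ N(0, 1); the log-evidence
# has a closed form, so both estimators can be checked against it.
data = rng.normal(0.5, 1.0, size=20)
n = len(data)

def loglik(mus):
    sq = ((data[None, :] - np.atleast_1d(mus)[:, None]) ** 2).sum(axis=1)
    return -0.5 * sq - 0.5 * n * np.log(2.0 * np.pi)

# Thermodynamic integration: log Z = integral over beta of E_beta[log L],
# where E_beta is under the power posterior prior * L^beta (Gaussian here,
# so it can be sampled exactly at each temperature).
betas = np.linspace(0.0, 1.0, 32)
e_ll = []
for b in betas:
    var = 1.0 / (1.0 + b * n)
    mus = rng.normal(var * b * data.sum(), np.sqrt(var), size=20_000)
    e_ll.append(loglik(mus).mean())
e_ll = np.array(e_ll)
log_z_ti = np.sum(0.5 * (e_ll[1:] + e_ll[:-1]) * np.diff(betas))  # trapezoid

# Harmonic mean estimator from posterior (beta = 1) samples; it is
# dominated by rare low-likelihood draws, hence its instability.
var1 = 1.0 / (1.0 + n)
ll = loglik(rng.normal(var1 * data.sum(), np.sqrt(var1), size=20_000))
a = -ll
log_z_hm = -(a.max() + np.log(np.exp(a - a.max()).mean()))

log_z_exact = (-0.5 * n * np.log(2.0 * np.pi) - 0.5 * np.log(1.0 + n)
               - 0.5 * (np.sum(data ** 2) - data.sum() ** 2 / (1.0 + n)))
print(log_z_ti, log_z_hm, log_z_exact)
```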
Beyond filtered backprojection: A reconstruction software package for ion beam microtomography data
NASA Astrophysics Data System (ADS)
Habchi, C.; Gordillo, N.; Bourret, S.; Barberet, Ph.; Jovet, C.; Moretto, Ph.; Seznec, H.
2013-01-01
A new version of the TomoRebuild data reduction software package is presented, for the reconstruction of scanning transmission ion microscopy tomography (STIMT) and particle induced X-ray emission tomography (PIXET) images. First, we present a state of the art of the reconstruction codes available for ion beam microtomography. The algorithm proposed here brings several advantages. It is a portable, multi-platform code, designed in C++ with well-separated classes for easier use and evolution. Data reduction is separated into different steps and the intermediate results may be checked if necessary. Although no additional graphic library or numerical tool is required to run the program as a command line, a user-friendly interface was designed in Java, as an ImageJ plugin. All experimental and reconstruction parameters may be entered either through this plugin or directly in text format files. A simple standard format is proposed for the input of experimental data. Optional graphic applications using the ROOT interface may be used separately to display and fit energy spectra. Regarding the reconstruction process, the filtered backprojection (FBP) algorithm, already present in the previous version of the code, was optimized so that it is about 10 times as fast. In addition, Maximum Likelihood Expectation Maximization (MLEM) and its accelerated version Ordered Subsets Expectation Maximization (OSEM) algorithms were implemented. A detailed user guide in English is available. A reconstruction example of experimental data from a biological sample is given. It shows the capability of the code to reduce noise in the sinograms and to deal with incomplete data, which puts a new perspective on tomography using a low number of projections or a limited angle.
Joint image and motion reconstruction for PET using a B-spline motion model.
Blume, Moritz; Navab, Nassir; Rafecas, Magdalena
2012-12-21
We present a novel joint image and motion reconstruction method for PET. The method is based on gated data and reconstructs an image together with a motion function. The motion function can be used to transform the reconstructed image to any of the input gates. All available events (from all gates) are used in the reconstruction. The presented method uses a B-spline motion model, together with a novel motion regularization procedure that does not need a regularization parameter (which is usually extremely difficult to adjust). Several image and motion grid levels are used in order to reduce the reconstruction time. In a simulation study, the presented method is compared to a recently proposed joint reconstruction method. While the presented method provides comparable reconstruction quality, it is much easier to use since no regularization parameter has to be chosen. Furthermore, since the B-spline discretization of the motion function depends on fewer parameters than a displacement field, the presented method is considerably faster and consumes less memory than its counterpart. The method is also applied to clinical data, for which a novel purely data-driven gating approach is presented.
Direct 2-D reconstructions of conductivity and permittivity from EIT data on a human chest.
Herrera, Claudia N L; Vallejo, Miguel F M; Mueller, Jennifer L; Lima, Raul G
2015-01-01
A novel direct D-bar reconstruction algorithm is presented for reconstructing a complex conductivity distribution from 2-D EIT data. The method is applied to simulated data and archival human chest data. Permittivity reconstructions with the aforementioned method and conductivity reconstructions with the previously existing nonlinear D-bar method for real-valued conductivities depicting ventilation and perfusion in the human chest are presented. This constitutes the first fully nonlinear D-bar reconstructions of human chest data and the first D-bar permittivity reconstructions of experimental data. The results of the human chest data reconstructions are compared on a circular domain versus a chest-shaped domain.
Real-time validation of receiver state information in optical space-time block code systems.
Alamia, John; Kurzweg, Timothy
2014-06-15
Free space optical interconnect (FSOI) systems are a promising solution to interconnect bottlenecks in high-speed systems. To overcome some sources of diminished FSOI performance caused by close proximity of multiple optical channels, multiple-input multiple-output (MIMO) systems implementing encoding schemes such as space-time block coding (STBC) have been developed. These schemes utilize information pertaining to the optical channel to reconstruct transmitted data. The STBC system is dependent on accurate channel state information (CSI) for optimal system performance. As a result of dynamic changes in optical channels, a system in operation will need to have updated CSI. Therefore, validation of the CSI during operation is a necessary tool to ensure FSOI systems operate efficiently. In this Letter, we demonstrate a method of validating CSI, in real time, through the use of moving averages of the maximum likelihood decoder data, and its capacity to predict the bit error rate (BER) of the system.
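As a sketch of the real-time validation idea, the snippet below keeps a moving average of a per-block decoder confidence statistic and flags the stored CSI as stale when the average drifts; the window and threshold are hypothetical tuning values, and the statistic stands in for the maximum-likelihood decoder data used in the Letter.

```python
from collections import deque

class CSIValidator:
    """Moving-average monitor for receiver channel state information (CSI).

    The decoder metric is a stand-in for the ML decoder data used in the
    Letter; window and threshold are hypothetical tuning values."""

    def __init__(self, window=256, threshold=0.1):
        self.buf = deque(maxlen=window)
        self.threshold = threshold

    def update(self, decoder_metric):
        # Push the newest per-block metric and return the smoothed value.
        self.buf.append(decoder_metric)
        return sum(self.buf) / len(self.buf)

    def csi_stale(self, decoder_metric):
        # Flag the CSI as stale (re-estimation needed) when the smoothed
        # metric drifts past the threshold, predicting a rising BER.
        return self.update(decoder_metric) > self.threshold
```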
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fatakia, Sarosh Noshir
2005-01-01
This dissertation describes a measurement of the mass of the top quark using events consistent with the hypothesis $t\bar{t} \rightarrow bW^{+}\bar{b}W^{-} \rightarrow bl^{+}\nu\,\bar{b}l^{-}\bar{\nu}$, where $l = e, \mu$. The events are obtained from nearly 230 pb$^{-1}$ of $p\bar{p}$ collision data collected by the DØ experiment between 2002 and 2004 during Run II. In this decay channel two neutrinos remain undetected. Extraction of the mass of the top quark by kinematic reconstruction is not possible because the event is under-constrained. Therefore, a dynamical likelihood method is developed to obtain the mass of the top quark. The mass of the top quark obtained from the candidate events selected in the di-electron channel and the $e\mu$ channel is: $154.1^{+14.2}_{-12.8}\,\text{(stat.)} \pm 6.6\,\text{(syst.)}$ GeV.
Separating endogenous ancient DNA from modern day contamination in a Siberian Neandertal
Skoglund, Pontus; Northoff, Bernd H.; Shunkov, Michael V.; Derevianko, Anatoli P.; Pääbo, Svante; Krause, Johannes; Jakobsson, Mattias
2014-01-01
One of the main impediments for obtaining DNA sequences from ancient human skeletons is the presence of contaminating modern human DNA molecules in many fossil samples and laboratory reagents. However, DNA fragments isolated from ancient specimens show a characteristic DNA damage pattern caused by miscoding lesions that differs from present day DNA sequences. Here, we develop a framework for evaluating the likelihood of a sequence originating from a model with postmortem degradation—summarized in a postmortem degradation score—which allows the identification of DNA fragments that are unlikely to originate from present day sources. We apply this approach to a contaminated Neandertal specimen from Okladnikov Cave in Siberia to isolate its endogenous DNA from modern human contaminants and show that the reconstructed mitochondrial genome sequence is more closely related to the variation of Western Neandertals than what was discernible from previous analyses. Our method opens up the potential for genomic analysis of contaminated fossil material. PMID:24469802
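The core of such a framework can be sketched as a log-likelihood ratio between a damage model and a modern-contaminant model, accumulated over positions where the reference carries a C; the decay rate and error rates below are illustrative, not the paper's estimates.

```python
import numpy as np

def pmd_score(read, ref, p_damage=0.3, lam=0.3, p_err=0.001):
    """Toy postmortem-degradation score: log-ratio of the read's likelihood
    under an ancient-DNA damage model (C->T rate decaying from the 5' end)
    versus a modern-contaminant model. All rates are illustrative."""
    score = 0.0
    for i, (base, r) in enumerate(zip(read, ref)):
        if r != "C":
            continue                                  # only C sites inform C->T damage
        p_ct = p_damage * np.exp(-lam * i) + p_err    # damage-model mismatch rate
        p_ancient = p_ct if base == "T" else 1.0 - p_ct
        p_modern = p_err if base == "T" else 1.0 - p_err
        score += np.log(p_ancient) - np.log(p_modern)
    return score

# A read with a 5'-terminal C->T mismatch scores as damage-consistent:
print(pmd_score("TTGACCA", "CTGACCA"))  # positive -> likely endogenous
```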
Callejón, Rocío; Robles, María Del Rosario; Panei, Carlos Javier; Cutillas, Cristina
2016-08-01
A molecular phylogenetic hypothesis is presented for the genus Trichuris based on sequence data from mitochondrial cytochrome c oxidase 1 (cox1) and cytochrome b (cob). The taxa consisted of nine populations of whipworm from five species of Sigmodontinae rodents from Argentina. Bayesian Inference, Maximum Parsimony, and Maximum Likelihood methods were used to infer phylogenies for each gene separately, but also for the combined mitochondrial data and the combined mitochondrial and nuclear dataset. Phylogenetic results based on cox1 and cob mitochondrial DNA (mtDNA) revealed three strongly resolved clades corresponding to three different species (Trichuris navonae, Trichuris bainae, and Trichuris pardinasi) showing phylogeographic variation, but relationships among Trichuris species were poorly resolved. Phylogenetic reconstruction based on concatenated sequences had greater phylogenetic resolution for delimiting species and intraspecific populations of Trichuris than reconstructions based on partitioned genes. Thus, populations of T. bainae and T. pardinasi could be affected by geographical factors and parasite-host co-divergence.
Moreno-Letelier, Alejandra; Olmedo, Gabriela; Eguiarte, Luis E.; Martinez-Castilla, Leon; Souza, Valeria
2011-01-01
The high affinity phosphate transport system (pst) is crucial for phosphate uptake in oligotrophic environments. Cuatro Cienegas Basin (CCB) has extremely low P levels and its endemic Bacillus are closely related to oligotrophic marine Firmicutes. Thus, we expected the pst operon of CCB to share the same evolutionary history and protein similarity to marine Firmicutes. Orthologs of the pst operon were searched in 55 genomes of Firmicutes and 13 outgroups. Phylogenetic reconstructions were performed for the pst operon and 14 concatenated housekeeping genes using maximum likelihood methods. Conserved domains and 3D structures of the phosphate-binding protein (PstS) were also analyzed. The pst operon of Firmicutes shows two highly divergent clades with no correlation to the type of habitat nor a phylogenetic congruence, suggesting horizontal gene transfer. Despite sequence divergence, the PstS protein had a similar 3D structure, which could be due to parallel evolution after horizontal gene transfer events. PMID:21461370
An object-oriented simulator for 3D digital breast tomosynthesis imaging system.
Seyyedi, Saeed; Cengiz, Kubra; Kamasak, Mustafa; Yildirim, Isa
2013-01-01
Digital breast tomosynthesis (DBT) is an innovative imaging modality that provides 3D reconstructed images of the breast to detect breast cancer. Projections obtained with an X-ray source moving in a limited angle interval are used to reconstruct the 3D image of the breast. Several reconstruction algorithms are available for DBT imaging. The filtered back projection algorithm has traditionally been used to reconstruct images from projections. Iterative reconstruction algorithms such as the algebraic reconstruction technique (ART) were later developed. Recently, compressed sensing based methods have been proposed for the tomosynthesis imaging problem. We have developed an object-oriented simulator for the 3D digital breast tomosynthesis (DBT) imaging system using the C++ programming language. The simulator is capable of applying different iterative and compressed sensing based reconstruction methods to 3D digital tomosynthesis data sets and phantom models. A user-friendly graphical user interface (GUI) helps users select and run the desired methods on the designed phantom models or real data sets. The simulator has been tested on a phantom study that simulates the breast tomosynthesis imaging problem. Results obtained with various methods, including the algebraic reconstruction technique (ART) and total variation regularized reconstruction technique (ART+TV), are presented. Reconstruction results of the methods are compared both visually and quantitatively by evaluating the performances of the methods using mean structural similarity (MSSIM) values. PMID:24371468
NASA Technical Reports Server (NTRS)
Heinemann, K.
1987-01-01
The detection and size analysis of small metal particles supported on amorphous substrates becomes increasingly difficult when the particle size approaches that of the phase contrast background structures of the support. An approach of digital image analysis, involving Fourier transformation of the original image, filtering, and image reconstruction was studied with respect to the likelihood of unambiguously detecting particles of less than 1 nm diameter on amorphous substrates from a single electron micrograph.
NASA Astrophysics Data System (ADS)
Kadrmas, Dan J.; Frey, Eric C.; Karimi, Seemeen S.; Tsui, Benjamin M. W.
1998-04-01
Accurate scatter compensation in SPECT can be performed by modelling the scatter response function during the reconstruction process. This method is called reconstruction-based scatter compensation (RBSC). It has been shown that RBSC has a number of advantages over other methods of compensating for scatter, but using RBSC for fully 3D compensation has resulted in prohibitively long reconstruction times. In this work we propose two new methods that can be used in conjunction with existing methods to achieve marked reductions in RBSC reconstruction times. The first method, coarse-grid scatter modelling, significantly accelerates the scatter model by exploiting the fact that scatter is dominated by low-frequency information. The second method, intermittent RBSC, further accelerates the reconstruction process by limiting the number of iterations during which scatter is modelled. The fast implementations were evaluated using a Monte Carlo simulated experiment of the 3D MCAT phantom with tracer, and also using experimentally acquired data with tracer. Results indicated that these fast methods can reconstruct, with fully 3D compensation, images very similar to those obtained using standard RBSC methods, and in reconstruction times that are an order of magnitude shorter. Using these methods, fully 3D iterative reconstruction with RBSC can be performed well within the realm of clinically realistic times (under 10 minutes for image reconstruction).
Likelihood-based methods for evaluating principal surrogacy in augmented vaccine trials.
Liu, Wei; Zhang, Bo; Zhang, Hui; Zhang, Zhiwei
2017-04-01
There is growing interest in assessing immune biomarkers, which are quick to measure and potentially predictive of long-term efficacy, as surrogate endpoints in randomized, placebo-controlled vaccine trials. This can be done under a principal stratification approach, with principal strata defined using a subject's potential immune responses to vaccine and placebo (the latter may be assumed to be zero). In this context, principal surrogacy refers to the extent to which vaccine efficacy varies across principal strata. Because a placebo recipient's potential immune response to vaccine is unobserved in a standard vaccine trial, augmented vaccine trials have been proposed to produce the information needed to evaluate principal surrogacy. This article reviews existing methods based on an estimated likelihood and a pseudo-score (PS) and proposes two new methods based on a semiparametric likelihood (SL) and a pseudo-likelihood (PL), for analyzing augmented vaccine trials. Unlike the PS method, the SL method does not require a model for missingness, which can be advantageous when immune response data are missing by happenstance. The SL method is shown to be asymptotically efficient, and it performs similarly to the PS and PL methods in simulation experiments. The PL method appears to have a computational advantage over the PS and SL methods.
Handwriting individualization using distance and rarity
NASA Astrophysics Data System (ADS)
Tang, Yi; Srihari, Sargur; Srinivasan, Harish
2012-01-01
Forensic individualization is the task of associating observed evidence with a specific source. The likelihood ratio (LR) is a quantitative measure that expresses the degree of uncertainty in individualization, where the numerator represents the likelihood that the evidence corresponds to the known and the denominator the likelihood that it does not correspond to the known. Since the number of parameters needed to compute the LR is exponential with the number of feature measurements, a commonly used simplification is the use of likelihoods based on distance (or similarity) given the two alternative hypotheses. This paper proposes an intermediate method which decomposes the LR as the product of two factors, one based on distance and the other on rarity. It was evaluated using a data set of handwriting samples, by determining whether two writing samples were written by the same/different writer(s). The accuracy of the distance and rarity method, as measured by error rates, is significantly better than the distance method.
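A hedged sketch of the decomposition: the LR factors into a distance term p(d | same source)/p(d | different sources) and a rarity term 1/p(known's features). The fitted densities and the rarity value below are placeholders trained on synthetic distances, not the authors' models.

```python
import numpy as np
from scipy.stats import gaussian_kde

def log_lr(d, dist_same, dist_diff, rarity):
    """LR decomposed into a distance factor p(d|same)/p(d|diff) and a
    rarity factor 1/p(features); all inputs are placeholder models."""
    return np.log(dist_same(d)) - np.log(dist_diff(d)) - np.log(rarity)

# Densities fitted to synthetic training distances (illustrative only):
rng = np.random.default_rng(0)
same = gaussian_kde(rng.normal(0.2, 0.10, 500))   # same-writer distances
diff = gaussian_kde(rng.normal(0.6, 0.15, 500))   # different-writer distances
# rarity = assumed population probability of the known's feature vector
print(log_lr(0.25, lambda d: same(d)[0], lambda d: diff(d)[0], rarity=0.02))
```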
Perturbative Gaussianizing transforms for cosmological fields
NASA Astrophysics Data System (ADS)
Hall, Alex; Mead, Alexander
2018-01-01
Constraints on cosmological parameters from large-scale structure have traditionally been obtained from two-point statistics. However, non-linear structure formation renders these statistics insufficient in capturing the full information content available, necessitating the measurement of higher order moments to recover information which would otherwise be lost. We construct quantities based on non-linear and non-local transformations of weakly non-Gaussian fields that Gaussianize the full multivariate distribution at a given order in perturbation theory. Our approach does not require a model of the fields themselves and takes as input only the first few polyspectra, which could be modelled or measured from simulations or data, making our method particularly suited to observables lacking a robust perturbative description such as the weak-lensing shear. We apply our method to simulated density fields, finding a significantly reduced bispectrum and an enhanced correlation with the initial field. We demonstrate that our method reconstructs a large proportion of the linear baryon acoustic oscillations, improving the information content over the raw field by 35 per cent. We apply the transform to toy 21 cm intensity maps, showing that our method still performs well in the presence of complications such as redshift-space distortions, beam smoothing, pixel noise and foreground subtraction. We discuss how this method might provide a route to constructing a perturbative model of the fully non-Gaussian multivariate likelihood function.
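As a much simpler, pointwise illustration of the idea (the paper builds non-local transforms from measured polyspectra), the snippet below generates a weakly non-Gaussian lognormal mock field whose skewness is removed by the matching local transform.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
g = rng.normal(size=100_000)
delta = np.expm1(0.3 * g - 0.045)   # lognormal mock density contrast, mean ~ 0
# The matching local transform log(1 + delta) restores the Gaussian field,
# removing the skewness that two-point statistics cannot capture.
print(stats.skew(delta), stats.skew(np.log1p(delta)))
```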
Reconstruction of interaction rate in holographic dark energy
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mukherjee, Ankan, E-mail: ankan_ju@iiserkol.ac.in
2016-11-01
The present work is based on the holographic dark energy model with the Hubble horizon as the infrared cut-off. The interaction rate between dark energy and dark matter has been reconstructed for three different parameterizations of the deceleration parameter. Observational constraints on the model parameters have been obtained by maximum likelihood analysis using the observational Hubble parameter data (OHD), type Ia supernova data (SNe), baryon acoustic oscillation data (BAO) and the distance prior of the cosmic microwave background (CMB), namely the CMB shift parameter data (CMBShift). The interaction rate obtained in the present work remains always positive and increases with expansion. It is very similar to the result obtained by Sen and Pavon [1], where the interaction rate was reconstructed for a parametrization of the dark energy equation of state. Tighter constraints on the interaction rate have been obtained in the present work as it is based on larger data sets. The nature of the dark energy equation of state parameter has also been studied for the present models. Though the reconstruction is done from different parametrizations, the overall nature of the interaction rate is very similar in all the cases. Different information criteria and the Bayesian evidence, which have been invoked in the context of model selection, show that these models are in close proximity to each other.
An iterative algorithm for soft tissue reconstruction from truncated flat panel projections
NASA Astrophysics Data System (ADS)
Langan, D.; Claus, B.; Edic, P.; Vaillant, R.; De Man, B.; Basu, S.; Iatrou, M.
2006-03-01
The capabilities of flat panel interventional x-ray systems continue to expand, enabling a broader array of medical applications to be performed in a minimally invasive manner. Although CT is providing pre-operative 3D information, there is a need for 3D imaging of low contrast soft tissue during interventions in a number of areas including neurology, cardiac electro-physiology, and oncology. Unlike CT systems, interventional angiographic x-ray systems provide real-time large field of view 2D imaging, patient access, and flexible gantry positioning enabling interventional procedures. However, relative to CT, these C-arm flat panel systems have additional technical challenges in 3D soft tissue imaging including slower rotation speed, gantry vibration, reduced lateral patient field of view (FOV), and increased scatter. The reduced patient FOV often results in significant data truncation. Reconstruction of truncated (incomplete) data is known as an "interior problem", and it is mathematically impossible to obtain an exact reconstruction. Nevertheless, it is an important problem in 3D imaging on a C-arm to address the need to generate a 3D reconstruction representative of the object being imaged with minimal artifacts. In this work we investigate the application of an iterative Maximum Likelihood Transmission (MLTR) algorithm to truncated data. We also consider truncated data with limited views for cardiac imaging, where the views are gated by the electrocardiogram (ECG) to combat motion artifacts.
Gregory, Lilian; Carrillo Gaeta, Natália; Araújo, Jansen; Matsumiya Thomazelli, Luciano; Harakawa, Ricardo; Ikuno, Alice A; Hiromi Okuda, Liria; de Stefano, Eliana; Pituco, Edviges Maristela
2017-12-01
Enzootic bovine leucosis (EBL) is a silent disease caused by a retrovirus [bovine leukaemia virus (BLV)]. BLV is classified into almost 10 genotypes that are distributed in several countries. The present research aimed to describe two BLV gp51 env sequences of strains detected in the state of São Paulo, Brazil, and perform a phylogenetic analysis to compare them to other BLV gp51 env sequences of strains around the world. Two bovines from different herds were admitted to the Bovine and Small Ruminant Hospital, School of Veterinary Medicine and Animal Science, University of São Paulo, Brazil. In both, lymphosarcoma was detected and the presence of BLV was confirmed by nested PCR. The neighbour-joining distance method was used to genotype the BLV sequences, and the maximum likelihood method was used for the phylogenetic reconstruction. The phylogeny estimates were calculated by performing 1000 bootstrap replicates. Analysis of the partial envelope glycoprotein (env) gene sequences from two isolates (25 and 31) revealed two different genotypes of BLV. Isolate 25 clustered with ten genotype 6 isolates from Brazil, Argentina, Thailand and Paraguay. On the other hand, isolate 31 clustered with two genotype 5 isolates (one was also from São Paulo and one was from Costa Rica). The detected genotypes corroborate the results of previous studies conducted in the state of São Paulo, Brazil. The predicted amino acid sequences showed substitutions, particularly between positions 136 and 150, in 11 out of 13 sequences analysed, including sequences from GenBank. BLV is still important in Brazil and this research should be continued.
Mitra, Ayan; Politte, David G; Whiting, Bruce R; Williamson, Jeffrey F; O'Sullivan, Joseph A
2017-01-01
Model-based image reconstruction (MBIR) techniques have the potential to generate high quality images from noisy measurements and a small number of projections, which can reduce the x-ray dose to patients. These MBIR techniques rely on projection and backprojection to refine an image estimate. One of the widely used projectors for these modern MBIR-based techniques is the branchless distance-driven (DD) projection and backprojection. While this method produces superior quality images, the computational cost of iterative updates keeps it from being ubiquitous in clinical applications. In this paper, we provide several new parallelization ideas for concurrent execution of the DD projectors in multi-GPU systems using CUDA programming tools. We have introduced some novel schemes for dividing the projection data and image voxels over multiple GPUs to avoid runtime overhead and inter-device synchronization issues. We have also reduced the complexity of the overlap calculation of the algorithm by eliminating the common projection plane and directly projecting the detector boundaries onto image voxel boundaries. To reduce the time required for calculating the overlap between the detector edges and image voxel boundaries, we have proposed a pre-accumulation technique to accumulate image intensities in perpendicular 2D image slabs (from a 3D image) before projection and after backprojection to ensure our DD kernels run faster in parallel GPU threads. For the implementation of our iterative MBIR technique we use a parallel multi-GPU version of the alternating minimization (AM) algorithm with penalized likelihood update. The time performance using our proposed reconstruction method with Siemens Sensation 16 patient scan data shows an average of 24 times speedup using a single TITAN X GPU and 74 times speedup using 3 TITAN X GPUs in parallel for combined projection and backprojection.
A Population-Structured HIV Epidemic in Israel: Roles of Risk and Ethnicity
Grossman, Zehava; Avidor, Boaz; Mor, Zohar; Chowers, Michal; Levy, Itzchak; Shahar, Eduardo; Riesenberg, Klaris; Sthoeger, Zev; Maayan, Shlomo; Shao, Wei; Lorber, Margalit; Olstein-Pops, Karen; Elbirt, Daniel; Elinav, Hila; Asher, Ilan; Averbuch, Diana; Istomin, Valery; Gottesman, Bat Sheva; Kedem, Eynat; Girshengorn, Shirley; Kra-Oz, Zipi; Shemer Avni, Yonat; Radian Sade, Sara; Turner, Dan; Maldarelli, Frank
2015-01-01
Background HIV in Israel started with a subtype-B epidemic among men who have sex with men, followed in the 1980s and 1990s by introductions of subtype C from Ethiopia (predominantly acquired by heterosexual transmission) and subtype A from the former Soviet Union (FSU, most often acquired by intravenous drug use). The epidemic matured over the last 15 years without additional large influx of exogenous infections. Between 2005 and 2013 the number of infected men who have sex with men (MSM) increased 2.9-fold, compared to 1.6-fold and 1.3-fold for intravenous drug users (IVDU) and Ethiopian-origin residents. Understanding contemporary spread is essential for effective public health planning. Methods We analyzed demographic and virologic data from 1,427 HIV-infected individuals diagnosed with HIV-1 during 1998–2012. HIV phylogenies were reconstructed with maximum-likelihood and Bayesian methods. Results Subtype-B viruses, but not A or C, demonstrated a striking number of large clusters with common ancestors having posterior probability ≥0.95, including some suggesting the presence of transmission networks. Transmitted drug resistance was highest in subtype B (13%). MSM represented a frequent risk factor in cross-ethnic transmission, demonstrated by the presence of Israeli-born individuals with non-B virus infections and FSU immigrants with non-A subtypes. Conclusions Reconstructed phylogenetic trees demonstrated substantial grouping in subtype B, but not in non-MSM subtype A or in subtype C, reflecting differences in transmission dynamics linked to HIV transmission categories. Cross-ethnic spread occurred through multiple independent introductions, with MSM playing a prevalent role in the transmission of the virus. Such data provide a baseline to track epidemic trends and will be useful in informing and quantifying efforts to reduce HIV transmission. PMID:26302493
Maximum-likelihood methods in wavefront sensing: stochastic models and likelihood functions
Barrett, Harrison H.; Dainty, Christopher; Lara, David
2008-01-01
Maximum-likelihood (ML) estimation in wavefront sensing requires careful attention to all noise sources and all factors that influence the sensor data. We present detailed probability density functions for the output of the image detector in a wavefront sensor, conditional not only on wavefront parameters but also on various nuisance parameters. Practical ways of dealing with nuisance parameters are described, and final expressions for likelihoods and Fisher information matrices are derived. The theory is illustrated by discussing Shack–Hartmann sensors, and computational requirements are discussed. Simulation results show that ML estimation can significantly increase the dynamic range of a Shack–Hartmann sensor with four detectors and that it can reduce the residual wavefront error when compared with traditional methods. PMID:17206255
Pelt, Daniël M.; Gürsoy, Doğa; Palenstijn, Willem Jan; Sijbers, Jan; De Carlo, Francesco; Batenburg, Kees Joost
2016-01-01
The processing of tomographic synchrotron data requires advanced and efficient software to be able to produce accurate results in reasonable time. In this paper, the integration of two software toolboxes, TomoPy and the ASTRA toolbox, which, together, provide a powerful framework for processing tomographic data, is presented. The integration combines the advantages of both toolboxes, such as the user-friendliness and CPU-efficient methods of TomoPy and the flexibility and optimized GPU-based reconstruction methods of the ASTRA toolbox. It is shown that both toolboxes can be easily installed and used together, requiring only minor changes to existing TomoPy scripts. Furthermore, it is shown that the efficient GPU-based reconstruction methods of the ASTRA toolbox can significantly decrease the time needed to reconstruct large datasets, and that advanced reconstruction methods can improve reconstruction quality compared with TomoPy’s standard reconstruction method. PMID:27140167
Plane-dependent ML scatter scaling: 3D extension of the 2D simulated single scatter (SSS) estimate.
Rezaei, Ahmadreza; Salvo, Koen; Vahle, Thomas; Panin, Vladimir; Casey, Michael; Boada, Fernando; Defrise, Michel; Nuyts, Johan
2017-07-24
Scatter correction is typically done using a simulation of the single scatter, which is then scaled to account for multiple scatters and other possible model mismatches. This scaling factor is determined by fitting the simulated scatter sinogram to the measured sinogram, using only counts measured along LORs that do not intersect the patient body, i.e. the 'scatter-tails'. Extending previous work, we propose to scale the scatter with a plane-dependent factor, which is determined as an additional unknown in the maximum likelihood (ML) reconstructions, using counts in the entire sinogram rather than only the 'scatter-tails'. The ML-scaled scatter estimates are validated using a Monte-Carlo simulation of a NEMA-like phantom, a phantom scan with typical contrast ratios of a 68Ga-PSMA scan, and 23 whole-body 18F-FDG patient scans. On average, we observe a 12.2% change in the total amount of tracer activity in the MLEM reconstructions of our whole-body patient database when the proposed ML scatter scales are used. Furthermore, reconstructions using the ML-scaled scatter estimates are found to eliminate the typical 'halo' artifacts that are often observed in the vicinity of high focal uptake regions.
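A sketch of how a plane-dependent scale can be estimated as an additional ML unknown inside MLEM is given below, using alternating multiplicative updates for the image and the per-plane scales; this follows the spirit of the proposal but is not the authors' derivation.

```python
import numpy as np

def mlem_with_scatter_scale(A, y, scatter, plane_of, n_iter=30, eps=1e-12):
    """Jointly estimate the image x and plane-dependent scatter scales s
    under the Poisson model y ~ Poisson(A x + s[plane] * scatter).

    A: system matrix; y, scatter: measured and simulated-scatter sinograms;
    plane_of: integer plane index of each sinogram bin (all placeholders).
    """
    x = np.ones(A.shape[1])
    n_planes = int(plane_of.max()) + 1
    s = np.ones(n_planes)                       # per-plane scatter scales
    sens = A.T @ np.ones(A.shape[0])
    for _ in range(n_iter):
        expected = A @ x + s[plane_of] * scatter
        ratio = y / np.maximum(expected, eps)
        x *= (A.T @ ratio) / np.maximum(sens, eps)       # image update
        expected = A @ x + s[plane_of] * scatter
        ratio = y / np.maximum(expected, eps)
        # ML multiplicative update of each plane's scale, using the whole
        # sinogram rather than only the scatter-tails.
        num = np.bincount(plane_of, weights=scatter * ratio, minlength=n_planes)
        den = np.bincount(plane_of, weights=scatter, minlength=n_planes)
        s *= num / np.maximum(den, eps)
    return x, s
```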
Algorithmic procedures for Bayesian MEG/EEG source reconstruction in SPM.
López, J D; Litvak, V; Espinosa, J J; Friston, K; Barnes, G R
2014-01-01
The MEG/EEG inverse problem is ill-posed, giving different source reconstructions depending on the initial assumption sets. Parametric Empirical Bayes allows one to implement most popular MEG/EEG inversion schemes (Minimum Norm, LORETA, etc.) within the same generic Bayesian framework. It also provides a cost function in terms of the variational Free energy, an approximation to the marginal likelihood or evidence of the solution. In this manuscript, we revisit the algorithm for MEG/EEG source reconstruction with a view to providing a didactic and practical guide. The aim is to promote and help standardise the development and consolidation of other schemes within the same framework. We describe the implementation in the Statistical Parametric Mapping (SPM) software package, carefully explaining each of its stages with the help of a simple simulated data example. We focus on the Multiple Sparse Priors (MSP) model, which we compare with the well-known Minimum Norm and LORETA models, using the negative variational Free energy for model comparison. The manuscript is accompanied by Matlab scripts to allow the reader to test and explore the underlying algorithm.
NASA Astrophysics Data System (ADS)
Mahon, D. F.; Clarkson, A.; Hamilton, D. J.; Hoek, M.; Ireland, D. G.; Johnstone, J. R.; Kaiser, R.; Keri, T.; Lumsden, S.; McKinnon, B.; Murray, M.; Nutbeam-Tuffs, S.; Shearer, C.; Staines, C.; Yang, G.; Zimmerman, C.
2013-12-01
Cosmic-ray muons are highly penetrative charged particles observed at sea level with a flux of approximately 1 cm⁻² min⁻¹. They interact with matter primarily through Coulomb scattering, which can be exploited in muon tomography to image objects within industrial nuclear waste containers. A prototype scintillating-fibre detector has been developed for this application, consisting of two tracking modules above and below the volume to be assayed. Each module comprises two orthogonal planes of 2 mm fibres. The modular configuration allows the reconstruction of the initial and scattered muon trajectories, which enable the container content, with respect to atomic number Z, to be determined. Fibre signals are read out by Hamamatsu H8500 MAPMTs with two fibres coupled to each pixel via dedicated pairing schemes developed to avoid space point ambiguities and retain the high spatial resolution of the fibres. A likelihood-based image reconstruction algorithm was developed and tested using a GEANT4 simulation of the prototype system. Images reconstructed from this simulation are presented in comparison with experimental results taken with test objects. These results verify the simulation and show discrimination between the low-, medium- and high-Z materials imaged.
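For orientation, a deliberately simplified scattering-density image can be formed by depositing each muon's squared scattering angle at its point of closest approach (a PoCA-style estimate); the likelihood-based algorithm in the paper is more sophisticated, and the track interface below is hypothetical.

```python
import numpy as np

def poca_image(tracks, shape, voxel_size):
    """Average each muon's squared scattering angle into the voxel holding
    its point of closest approach (PoCA); a crude stand-in for likelihood
    reconstruction. tracks: iterable of (3D PoCA point, angle in rad),
    with points assumed to lie inside the imaged volume."""
    num = np.zeros(shape)
    cnt = np.zeros(shape)
    for point, theta in tracks:
        idx = tuple((np.asarray(point) / voxel_size).astype(int))
        num[idx] += theta ** 2          # scattering strength tracks Z
        cnt[idx] += 1
    return np.divide(num, cnt, out=np.zeros_like(num), where=cnt > 0)
```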
Interior reconstruction method based on rotation-translation scanning model.
Wang, Xianchao; Tang, Ziyue; Yan, Bin; Li, Lei; Bao, Shanglian
2014-01-01
In various applications of computed tomography (CT), it is common that the reconstructed object extends beyond the field of view (FOV), or we may intend to use a FOV which only covers the region of interest (ROI) for the sake of reducing radiation dose. These kinds of imaging situations often lead to interior reconstruction problems, which are difficult cases in CT reconstruction due to the truncated projection data at every view angle. In this paper, an interior reconstruction method is developed based on a rotation-translation (RT) scanning model. The method is implemented by first scanning the reconstructed region, and then scanning a small region outside the support of the reconstructed object after translating the rotation centre. The differentiated backprojection (DBP) images of the reconstruction region and the small region outside the object can be obtained from the two scans without a data rebinning process. Finally, the projection onto convex sets (POCS) algorithm is applied to reconstruct the interior region. Numerical simulations are conducted to validate the proposed reconstruction method.
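POCS itself is just alternating projections onto convex constraint sets; a generic sketch follows, with the DBP data-consistency projector left abstract since it is specific to the RT scanning geometry.

```python
import numpy as np

def pocs(x0, projectors, n_iter=100):
    """Cycle through projections onto convex constraint sets; the
    DBP data-consistency projector is geometry-specific and left abstract."""
    x = x0.copy()
    for _ in range(n_iter):
        for project in projectors:
            x = project(x)
    return x

# Two generic convex projectors for an image x:
nonneg = lambda x: np.clip(x, 0.0, None)                   # non-negativity
support = lambda mask: (lambda x: np.where(mask, x, 0.0))  # known support
```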
Assessing compatibility of direct detection data: halo-independent global likelihood analyses
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gelmini, Graciela B.; Huh, Ji-Haeng; Witte, Samuel J.
2016-10-18
We present two different halo-independent methods to assess the compatibility of several direct dark matter detection data sets for a given dark matter model using a global likelihood consisting of at least one extended likelihood and an arbitrary number of Gaussian or Poisson likelihoods. In the first method we find the global best fit halo function (we prove that it is a unique piecewise constant function with a number of down steps smaller than or equal to a maximum number that we compute) and construct a two-sided pointwise confidence band at any desired confidence level, which can then be compared with those derived from the extended likelihood alone to assess the joint compatibility of the data. In the second method we define a “constrained parameter goodness-of-fit” test statistic, whose p-value we then use to define a “plausibility region” (e.g. where p≥10%). For any halo function not entirely contained within the plausibility region, the level of compatibility of the data is very low (e.g. p<10%). We illustrate these methods by applying them to CDMS-II-Si and SuperCDMS data, assuming dark matter particles with elastic spin-independent isospin-conserving interactions or exothermic spin-independent isospin-violating interactions.
Forensic Facial Reconstruction: The Final Frontier.
Gupta, Sonia; Gupta, Vineeta; Vij, Hitesh; Vij, Ruchieka; Tyagi, Nutan
2015-09-01
Forensic facial reconstruction can be used to identify unknown human remains when other techniques fail. Through this article, we attempt to review the different methods of facial reconstruction reported in the literature. There are several techniques of facial reconstruction, which vary from two-dimensional drawings to three-dimensional clay models. With the advancement in 3D technology, a rapid, efficient and cost-effective computerized 3D forensic facial reconstruction method has been developed which has brought down the degree of error previously encountered. There are several methods of manual facial reconstruction, but the combination Manchester method has been reported to be the best and most accurate method for the positive recognition of an individual. Recognition allows the involved government agencies to make a list of suspected victims. This list can then be narrowed down and a positive identification may be given by the more conventional methods of forensic medicine. Facial reconstruction allows visual identification by the individual's family and associates to become easier and more definite.
Kroll, Florian; Pawelke, Jörg; Karsch, Leonhard
2013-08-01
Clinical QA in teletherapy as well as the characterization of experimental radiation sources for future medical applications requires effective methods for measuring three-dimensional (3D) dose distributions generated in a water-equivalent medium. Current dosimeters based on ionization chambers, diodes, thermoluminescence detectors, radiochromic films, or polymer gels exhibit various drawbacks: high-quality 3D dose determination is either very sophisticated and expensive or requires large amounts of effort and time for preparation or readout. New detectors based on scintillator blocks in combination with optical tomography are studied, since they have the potential to provide the desired cost-effective, transportable, and long-term stable dosimetry system that is able to determine 3D dose distributions with high spatial resolution in a short time. A portable detector prototype was set up based on a plastic scintillator block and four digital cameras. During irradiation the scintillator emits light, which is detected by the fixed cameras. The light distribution is then reconstructed by optical tomography, using maximum-likelihood expectation maximization. The result of the reconstruction approximates the 3D dose distribution. First performance tests of the prototype using laser light were carried out. Irradiation experiments were performed with ionizing radiation, i.e., bremsstrahlung (6 to 21 MV), electrons (6 to 21 MeV), and protons (68 MeV), provided by clinical and research accelerators. Laser experiments show that the current imaging properties differ from the design specifications: the imaging scale of the optical systems is position dependent, ranging from 0.185 mm/pixel to 0.225 mm/pixel. Nevertheless, the developed dosimetry method is proven to be functional for electron and proton beams. Induced radiation doses of 50 mGy or more made 3D dose reconstructions possible. Taking the imaging properties into account, the determined dose profiles are in agreement with reference measurements. An inherent drawback of the scintillator is the nonlinear light output for high stopping-power radiation due to the quenching effect, which impacts the depth dose curves measured with the dosimeter. For single Bragg peak distributions this leads to a peak-to-plateau ratio of 2.8 instead of the 4.5 obtained with the reference ionization chamber measurement. Furthermore, the transmission of the clinical bremsstrahlung beams through the scintillator leads to saturation of one camera, making dose reconstruction in that case presently infeasible. It is shown that distributions of scintillation light generated by proton or electron beams can be reconstructed by the dosimetry system within minutes. The quenching apparent for proton irradiation and the not yet precisely determined position dependence of the imaging scale require further investigation and correction. Upgrading the prototype with larger or inorganic scintillators would increase the detectable proton and electron energy range. The presented results show that the determination of 3D dose distributions using scintillator blocks and optical tomography is a promising dosimetry method.
Liang, Xiaoping; Zhang, Qizhi; Jiang, Huabei
2006-11-10
We show that a two-step reconstruction method can be adapted to improve the quantitative accuracy of the refractive index reconstruction in phase-contrast diffuse optical tomography (PCDOT). We also describe the possibility of imaging tissue glucose concentration with PCDOT. In this two-step method, we first use our existing finite-element reconstruction algorithm to recover the position and shape of a target. We then use the position and size of the target as a priori information to reconstruct a single value of the refractive index within the target and background regions using a region reconstruction method. Due to the extremely low contrast available in the refractive index reconstruction, we incorporate a data normalization scheme into the two-step reconstruction to combat the associated low signal-to-noise ratio. Through a series of phantom experiments we find that this two-step reconstruction method can considerably improve the quantitative accuracy of the refractive index reconstruction. The results show that the relative error of the reconstructed refractive index is reduced from 20% to within 1.5%. We also demonstrate the possibility of PCDOT for recovering glucose concentration using these phantom experiments.
Consistency of Rasch Model Parameter Estimation: A Simulation Study.
ERIC Educational Resources Information Center
van den Wollenberg, Arnold L.; And Others
1988-01-01
The unconditional (simultaneous) maximum likelihood (UML) estimation procedure for the one-parameter logistic model produces biased estimators. The UML method is inconsistent and is not a good alternative to the conditional maximum likelihood method, at least with small numbers of items. The minimum chi-square estimation procedure produces unbiased…
Optical tomography by means of regularized MLEM
NASA Astrophysics Data System (ADS)
Majer, Charles L.; Urbanek, Tina; Peter, Jörg
2015-09-01
To solve the inverse problem involved in fluorescence-mediated tomography, a regularized maximum likelihood expectation maximization (MLEM) reconstruction strategy is proposed. This technique has recently been applied to reconstruct galaxy clusters in astronomy and is adopted here. The MLEM algorithm is implemented as a Richardson-Lucy (RL) scheme and includes entropic regularization and a floating default prior. Hence, the strategy is very robust against measurement noise and avoids converging into noise patterns. Normalized Gaussian filtering with fixed standard deviation is applied for the floating default kernel. The reconstruction strategy is investigated using the XFM-2 homogeneous mouse phantom (Caliper LifeSciences Inc., Hopkinton, MA) with known optical properties. Prior to optical imaging, X-ray CT tomographic data of the phantom were acquired to provide structural context. The phantom inclusions were filled with a fluorochrome (Cy5.5), and for each inclusion optical data at 60 projections over 360 degrees were acquired. Fluorochrome excitation was accomplished by scanning laser point illumination in transmission mode (laser opposite to camera). Following data acquisition, a 3D triangulated mesh is derived from the reconstructed CT data, which is then matched with the optical projection images through 2D linear interpolation, correlation and Fourier transformation in order to assess translational and rotational deviations between the optical and CT imaging systems. Preliminary results indicate that the proposed regularized MLEM algorithm, when driven with a constant initial condition, yields reconstructed images that tend to be smoother than those of classical MLEM without regularization. Once the floating default prior was included, this bias was significantly reduced.
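A rough sketch of a Richardson-Lucy iteration with a floating default prior, where the default is a Gaussian-filtered copy of the current estimate with fixed standard deviation. The geometric blend used here to realize the entropic pull toward the default, and the operator A and data y, are illustrative assumptions rather than the authors' exact scheme:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def regularized_rl(A, y, shape, n_iter=100, beta=0.1, sigma=2.0, eps=1e-12):
    """Richardson-Lucy/MLEM iteration with an entropic pull toward a
    'floating default': a Gaussian-smoothed copy of the current estimate.

    A     : (n_meas, n_vox) forward operator, y : (n_meas,) measured data
    beta  : regularization weight, sigma : fixed kernel standard deviation
    """
    x = np.ones(np.prod(shape))
    sens = A.sum(axis=0) + eps
    for _ in range(n_iter):
        x *= (A.T @ (y / (A @ x + eps))) / sens             # plain RL/MLEM step
        default = gaussian_filter(x.reshape(shape), sigma).ravel() + eps
        # entropic regularization, realized as a geometric blend that damps
        # the estimate toward the floating default (one simple choice)
        x = x ** (1.0 - beta) * default ** beta
    return x.reshape(shape)
```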
Automatic system for 3D reconstruction of the chick eye based on digital photographs.
Wong, Alexander; Genest, Reno; Chandrashekar, Naveen; Choh, Vivian; Irving, Elizabeth L
2012-01-01
The geometry of anatomical specimens is very complex, and accurate 3D reconstruction is important for morphological studies, finite element analysis (FEA) and rapid prototyping. Although magnetic resonance imaging, computed tomography and laser scanners can be used for reconstructing biological structures, the cost of the equipment is fairly high and specialised technicians are required to operate it, which limits the accessibility of such approaches. In this paper, a novel automatic system for 3D surface reconstruction of the chick eye from digital photographs of a serially sectioned specimen is presented as a potential cost-effective and practical alternative. The system is designed to allow for automatic detection of the external surface of the chick eye. Automatic alignment of the photographs is performed using a combination of coloured markers and an algorithm based on complex phase order likelihood that is robust to noise and illumination variations. Automatic segmentation of the external boundaries of the eye from the aligned photographs is performed using a novel level-set segmentation approach based on a complex phase order energy functional. The extracted boundaries are sampled to construct a 3D point cloud, and a combination of Delaunay triangulation and subdivision surfaces is employed to construct the final triangular mesh. Experimental results using digital photographs of the chick eye show that the proposed system is capable of producing accurate 3D reconstructions of the external surface of the eye. The 3D model geometry is similar to a real chick eye and could be used for morphological studies and FEA.
Prompt gamma ray imaging for verification of proton boron fusion therapy: A Monte Carlo study.
Shin, Han-Back; Yoon, Do-Kun; Jung, Joo-Young; Kim, Moo-Sub; Suh, Tae Suk
2016-10-01
The purpose of this study was to verify the feasibility of acquiring a single photon emission computed tomography image using prompt gamma rays for proton boron fusion therapy (PBFT), and to confirm the enhanced therapeutic effect of PBFT by comparison with conventional proton therapy without boron. Monte Carlo simulation was performed to acquire the reconstructed image during PBFT. We acquired the percentage depth dose (PDD) of the proton beams in a water phantom, the energy spectrum of the prompt gamma rays, and tomographic images including the boron uptake regions (BURs; targets). The prompt gamma ray image was reconstructed using maximum likelihood expectation maximisation (MLEM) from 64 projections of raw data. To verify the reconstructed image, both an image profile and a contrast analysis according to the iteration number were conducted. In addition, the physical distance between the two BURs was measured from the region of interest of each BUR. The PDD of the proton beam in the water phantom including the BURs showed a more efficient dose deposition in the tumour region than that of conventional proton therapy. A 719 keV peak was clearly observed in the prompt gamma ray energy spectrum. The prompt gamma ray image was reconstructed successfully using the 64 projections. Image profiles through the two BURs were acquired from the reconstructed image for different iteration numbers. We confirmed the successful acquisition of a prompt gamma ray image during PBFT. In addition, the quantitative image analysis results showed relatively good performance, supporting further study. Copyright © 2016 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.
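The contrast analysis according to iteration number could look roughly like the following sketch; the Michelson-style contrast definition and the ROI masks are assumptions, not details taken from the study:

```python
import numpy as np

def contrast_curve(recon_by_iter, roi_mask, bg_mask):
    """Contrast versus iteration number: for each reconstructed image,
    compare the mean intensity inside a boron-uptake ROI with a background
    region. recon_by_iter is a list of 2D arrays (one per iteration);
    the masks are boolean arrays of the same shape (hypothetical ROIs)."""
    curve = []
    for img in recon_by_iter:
        roi, bg = img[roi_mask].mean(), img[bg_mask].mean()
        curve.append((roi - bg) / (roi + bg))  # Michelson-style contrast
    return np.array(curve)
```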
Muon reconstruction with a geometrical model in JUNO
NASA Astrophysics Data System (ADS)
Genster, C.; Schever, M.; Ludhova, L.; Soiron, M.; Stahl, A.; Wiebusch, C.
2018-03-01
The Jiangmen Underground Neutrino Observatory (JUNO) is a 20 kton liquid scintillator detector currently under construction near Kaiping in China. The physics program focuses on the determination of the neutrino mass hierarchy with reactor anti-neutrinos. For this purpose, JUNO is located 650 m underground, at a distance of 53 km from two nuclear power plants. As a result, it is exposed to a muon flux that requires a precise muon reconstruction to make a veto of cosmogenic backgrounds viable. Established muon tracking algorithms use time residuals with respect to a track hypothesis. We developed an alternative muon tracking algorithm that utilizes the geometrical shape of the fastest light: it models the full shape of the first, direct light produced along the muon track. From its intersection with the spherical PMT array, the track parameters are extracted with a likelihood fit. The algorithm first selects PMTs based on their first hit times and charges, and subsequently fits on timing information only. On a sample of through-going muons with a full simulation of the readout electronics, we report a resolution of 20 cm on the track's distance from the detector's center and an angular resolution of 1.6° over the whole detector. Additionally, a dead time estimation is performed to measure the impact of the muon veto. Including the step of waveform reconstruction on top of the track reconstruction, a loss in exposure of only 4% can be achieved compared to the case of a perfect tracking algorithm. When including only the PMT time resolution, but no further electronics simulation and waveform reconstruction, the exposure loss is only 1%.
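A simplified stand-in for a fastest-light track fit is sketched below. The geometric model (muon at speed c, direct scintillation light at speed c/n) follows the idea described above, but the refractive index value, the least-squares cost in place of the full likelihood, and the Nelder-Mead minimizer are illustrative assumptions:

```python
import numpy as np
from scipy.optimize import minimize

C = 0.299792458   # m/ns, speed of light in vacuum
N = 1.49          # assumed refractive index of the liquid scintillator

def first_light_time(p, r0, d, t0, s_grid=np.linspace(0.0, 40.0, 400)):
    """Earliest photon arrival at PMT position p (m) from a muon entering
    at r0 (m) at time t0 (ns) with unit direction d: the muon travels to
    track point r0 + s*d, and direct scintillation light then goes straight
    to the PMT at speed C/N. Minimized numerically over emission point s."""
    pts = r0[None, :] + s_grid[:, None] * d[None, :]
    t = t0 + s_grid / C + np.linalg.norm(pts - p[None, :], axis=1) * N / C
    return t.min()

def fit_track(pmt_pos, t_hit, x0):
    """Least-squares fit of (r0, theta, phi, t0) to measured first-hit times,
    a simplified stand-in for the full likelihood fit."""
    def cost(par):
        r0, th, ph, t0 = par[:3], par[3], par[4], par[5]
        d = np.array([np.sin(th) * np.cos(ph),
                      np.sin(th) * np.sin(ph),
                      np.cos(th)])
        pred = np.array([first_light_time(p, r0, d, t0) for p in pmt_pos])
        return np.sum((pred - t_hit) ** 2)
    return minimize(cost, x0, method="Nelder-Mead")
```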
A generalized reconstruction framework for unconventional PET systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mathews, Aswin John, E-mail: amathews@wustl.edu; Li, Ke; O’Sullivan, Joseph A.
2015-08-15
Purpose: Quantitative estimation of the radionuclide activity concentration in positron emission tomography (PET) requires precise modeling of PET physics. The authors are focused on designing unconventional PET geometries for specific applications. This work reports the creation of a generalized reconstruction framework, capable of reconstructing tomographic PET data for systems that use right cuboidal detector elements positioned at arbitrary geometry using a regular Cartesian grid of image voxels. Methods: The authors report on a variety of design choices and optimizations in the creation of the generalized framework. The image reconstruction algorithm is maximum likelihood expectation-maximization. The system geometry can be specified using a simple script. Given the geometry, a symmetry-seeking algorithm finds existing symmetry in the geometry with respect to the image grid to improve memory usage and speed. Normalization is approached from a geometry-independent perspective. The system matrix is computed using Siddon's algorithm and a subcrystal approach. The program is parallelized through open multiprocessing and message passing interface libraries. A wide variety of systems can be modeled using the framework. This is made possible by modeling the underlying physics and data correction, while generalizing the geometry-dependent features. Results: Application of the framework to three novel PET systems, each designed for a specific application, is presented to demonstrate the robustness of the framework in modeling PET systems of unconventional geometry. (1) Virtual-pinhole half-ring insert integrated into a Biograph-40: although the insert device improves image quality over the conventional whole-body scanner, the image quality varies depending on the position of the insert and the object. (2) Virtual-pinhole flat-panel insert integrated into a Biograph-40: preliminary results from an investigation into a modular flat-panel insert are presented. (3) Plant PET system: a reconfigurable PET system for imaging plants, with resolution of greater than 3.3 mm, is shown. Using the automated symmetry-seeking algorithm, the authors achieved compression of the storage and memory requirement by a factor of approximately 50 for the half-ring and flat-panel systems. For the plant PET system, the compression ratio is approximately five; the ratio depends on the level of symmetry that exists in different geometries. Conclusions: This work brings the field closer to arbitrary-geometry reconstruction. A generalized reconstruction framework can be used to validate multiple hypotheses, and the effort required to investigate each system is reduced. Memory usage and speed can be improved with certain optimizations.
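For the system-matrix computation, a minimal 2D version of the idea behind Siddon's ray tracing (parametric intersections of the ray with the grid planes) can be sketched as follows; the subcrystal refinement and all PET-specific details are omitted:

```python
import numpy as np

def siddon_2d(p0, p1, nx, ny, dx=1.0, origin=(0.0, 0.0)):
    """Siddon-style ray tracing on a regular 2D grid: returns a list of
    (ix, iy, length) giving the intersection length of the ray p0 -> p1
    with each pixel it crosses. Pixel (0, 0) has its lower-left corner at
    `origin`; all pixels are dx by dx."""
    p0, p1 = np.asarray(p0, float), np.asarray(p1, float)
    ray = p1 - p0
    length = np.linalg.norm(ray)
    alphas = [0.0, 1.0]
    # parametric values where the ray crosses each x-plane and y-plane
    for axis, n in ((0, nx), (1, ny)):
        if ray[axis] != 0.0:
            planes = origin[axis] + dx * np.arange(n + 1)
            a = (planes - p0[axis]) / ray[axis]
            alphas.extend(a[(a > 0.0) & (a < 1.0)])
    alphas = np.unique(np.clip(alphas, 0.0, 1.0))
    out = []
    for a0, a1 in zip(alphas[:-1], alphas[1:]):
        mid = p0 + 0.5 * (a0 + a1) * ray   # midpoint identifies the pixel
        ix = int((mid[0] - origin[0]) // dx)
        iy = int((mid[1] - origin[1]) // dx)
        if 0 <= ix < nx and 0 <= iy < ny:
            out.append((ix, iy, (a1 - a0) * length))
    return out
```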
SU-D-206-03: Segmentation Assisted Fast Iterative Reconstruction Method for Cone-Beam CT
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wu, P; Mao, T; Gong, S
2016-06-15
Purpose: Total Variation (TV) based iterative reconstruction (IR) methods enable accurate CT image reconstruction from low-dose measurements with sparse projection acquisition, because most CT images are sparsifiable under the gradient operator. However, conventional solutions require a large number of iterations to generate a decent reconstructed image. One major reason is that the expected piecewise-constant property is not taken into consideration at the optimization starting point. In this work, we propose an iterative reconstruction method for cone-beam CT (CBCT) that uses image segmentation to guide the optimization path more efficiently with respect to the regularization term at the beginning of the optimization trajectory. Methods: Our method applies the general knowledge that each tissue component in a CT image has a relatively uniform distribution of CT numbers. This knowledge is incorporated into the proposed reconstruction by using an image segmentation technique to generate a piecewise-constant template from the first-pass low-quality CT image reconstructed with an analytical algorithm. The template image is used as the initial value in the optimization process. Results: The proposed method is evaluated on the Shepp-Logan phantom at low and high noise levels, and on a head patient. The number of iterations is reduced by about 40% overall. Moreover, our proposed method tends to generate a smoother reconstructed image with the same TV value. Conclusion: We propose a computationally efficient iterative reconstruction method for CBCT imaging. Our method achieves a better optimization trajectory and faster convergence behavior. It does not rely on prior information and can be readily incorporated into existing iterative reconstruction frameworks. Our method is thus practical and attractive as a general solution to CBCT iterative reconstruction. This work is supported by the Zhejiang Provincial Natural Science Foundation of China (Grant No. LR16F010001) and the National High-tech R&D Program for Young Scientists of the Ministry of Science and Technology of China (Grant No. 2015AA020917).
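A sketch of how the piecewise-constant template might be built from a first-pass image; simple 1D k-means on CT numbers is used here as a stand-in for the unspecified segmentation technique:

```python
import numpy as np

def piecewise_constant_template(fbp_image, n_tissues=3, n_iter=20):
    """Build a piecewise-constant initial value from a first-pass FBP image:
    a simple 1D k-means on CT numbers assigns every pixel to a tissue class
    and replaces it with the class mean. The resulting template seeds the
    TV-regularized iterative reconstruction."""
    vals = fbp_image.ravel()
    # initialize class centers from quantiles of the CT-number histogram
    centers = np.quantile(vals, np.linspace(0.1, 0.9, n_tissues))
    for _ in range(n_iter):
        labels = np.argmin(np.abs(vals[:, None] - centers[None, :]), axis=1)
        for k in range(n_tissues):
            if np.any(labels == k):
                centers[k] = vals[labels == k].mean()
    return centers[labels].reshape(fbp_image.shape)
```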
NASA Astrophysics Data System (ADS)
Li, Tianfang; Wang, Jing; Wen, Junhai; Li, Xiang; Lu, Hongbing; Hsieh, Jiang; Liang, Zhengrong
2004-05-01
To treat the noise in low-dose x-ray CT projection data more accurately, two major problems must be addressed: analysis of the noise properties of the data, and development of a correspondingly efficient noise treatment method. In order to obtain an accurate and realistic model of the x-ray CT system, we acquired thousands of repeated measurements on different phantoms at several fixed scan angles with a GE high-speed multi-slice spiral CT scanner. The collected data were calibrated and log-transformed by the system software, which converts the detected photon energy into sinogram data satisfying the Radon transform. From the analysis of these experimental data, a nonlinear relation between the mean and variance of each sinogram datum was obtained. In this paper, we integrate this nonlinear relation into a penalized likelihood statistical framework for SNR (signal-to-noise ratio) adaptive smoothing of noise in the sinogram. After the proposed preprocessing, the sinograms were reconstructed with the unapodized FBP (filtered backprojection) method. The resulting images were evaluated quantitatively, in terms of noise uniformity and the noise-resolution tradeoff, in comparison with other noise smoothing methods such as Hanning and Butterworth filters at different cutoff frequencies. A significant improvement in the noise-resolution tradeoff and noise properties was demonstrated.
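The SNR-adaptive smoothing can be illustrated by a penalized weighted least-squares fit of one sinogram row, with weights taken from a nonlinear mean-variance relation. The exponential form of that relation and the first-difference roughness penalty below are assumptions made for the sketch, not the relation fitted from the repeated phantom measurements:

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import spsolve

def pwls_smooth(y, a=1.0, gamma=4.0, beta=5.0):
    """Penalized weighted least-squares smoothing of one sinogram row.
    Each datum is weighted by its inverse variance from a nonlinear
    mean-variance relation; sigma^2 = a * exp(y / gamma) is an assumed
    stand-in for the relation fitted from repeated phantom scans."""
    n = y.size
    w = 1.0 / (a * np.exp(y / gamma))   # inverse-variance weights
    # first-difference operator D as the roughness penalty
    D = diags([np.ones(n - 1), -np.ones(n - 1)], [0, 1], shape=(n - 1, n))
    # normal equations of the PWLS cost: (W + beta D^T D) s = W y
    A = diags(w) + beta * (D.T @ D)
    return spsolve(A.tocsc(), w * y)
```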
Anderson, Eric C; Ng, Thomas C
2016-02-01
We develop a computational framework for addressing pedigree inference problems using small numbers (80-400) of single nucleotide polymorphisms (SNPs). Our approach relaxes the assumptions, which are commonly made, that sampling is complete with respect to the pedigree and that there is no genotyping error. It relies on representing the inferred pedigree as a factor graph and invoking the Sum-Product algorithm to compute and store quantities that allow the joint probability of the data to be rapidly computed under a large class of rearrangements of the pedigree structure. This allows efficient MCMC sampling over the space of pedigrees, and, hence, Bayesian inference of pedigree structure. In this paper we restrict ourselves to inference of pedigrees without loops using SNPs assumed to be unlinked. We present the methodology in general for multigenerational inference, and we illustrate the method by applying it to the inference of full sibling groups in a large sample (n=1157) of Chinook salmon typed at 95 SNPs. The results show that our method provides a better point estimate and estimate of uncertainty than the currently best-available maximum-likelihood sibling reconstruction method. Extensions of this work to more complex scenarios are briefly discussed. Published by Elsevier Inc.
Forward model with space-variant of source size for reconstruction on X-ray radiographic image
NASA Astrophysics Data System (ADS)
Liu, Jin; Liu, Jun; Jing, Yue-feng; Xiao, Bo; Wei, Cai-hua; Guan, Yong-hong; Zhang, Xuan
2018-03-01
The Forward Imaging Technique is a method for solving the inverse problem of density reconstruction in radiographic imaging. In this paper, we introduce the forward projection equation (IFP model) for a radiographic system with areal source blur and detector blur. Our forward projection equation, based on X-ray tracing, is combined with the constrained conjugate gradient method to form a new method for density reconstruction. We demonstrate the effectiveness of the new technique by reconstructing density distributions from simulated and experimental images. We show that for radiographic systems with source sizes larger than the pixel size, the effect of blur on the density reconstruction is reduced by our method and can be controlled within one or two pixels. The method is also suitable for the reconstruction of non-homogeneous objects.
Khachatryan, Vardan
2015-06-09
A search for a standard model Higgs boson produced in association with a top-quark pair and decaying to bottom quarks is presented. Events with hadronic jets and one or two oppositely charged leptons are selected from a data sample corresponding to an integrated luminosity of 19.5 fb⁻¹ collected by the CMS experiment at the LHC in pp collisions at a centre-of-mass energy of 8 TeV. In order to separate the signal from the larger tt̄+jets background, this analysis uses a matrix element method that assigns a probability density value to each reconstructed event under signal or background hypotheses. The ratio between the two values is used in a maximum likelihood fit to extract the signal yield. The results are presented in terms of the measured signal strength modifier, μ, relative to the standard model prediction for a Higgs boson mass of 125 GeV. The observed (expected) exclusion limit at a 95% confidence level is μ < 4.2 (3.3), corresponding to a best fit value μ̂ = 1.2 +1.6/−1.5.
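Stripped of systematic uncertainties and the matrix-element machinery, the final step (a binned maximum likelihood fit for the signal strength modifier) reduces to something like this toy sketch, with hypothetical signal and background templates per discriminant bin:

```python
import numpy as np
from scipy.optimize import minimize_scalar

def fit_signal_strength(n_obs, s, b):
    """Binned Poisson maximum-likelihood fit of the signal strength mu,
    with expected counts mu*s_i + b_i per bin of the discriminant.
    A toy stand-in for the full CMS fit: no systematics, one parameter."""
    def nll(mu):
        lam = np.clip(mu * s + b, 1e-12, None)
        return np.sum(lam - n_obs * np.log(lam))  # -log L up to a constant
    res = minimize_scalar(nll, bounds=(-5.0, 20.0), method="bounded")
    return res.x

# usage with made-up templates:
# mu_hat = fit_signal_strength(n_obs=np.array([12, 9, 5]),
#                              s=np.array([1.0, 2.0, 3.0]),
#                              b=np.array([10.0, 6.0, 2.0]))
```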
Hurtado, Luis A; Santamaria, Carlos A; Fitzgerald, Lee A
2014-05-06
The phylogenetic position of the critically endangered Saint Croix ground lizard Ameiva polops is presently unknown, and several hypotheses have been proposed. We investigated the phylogenetic position of this species using molecular phylogenetic methods. We obtained sequences of DNA fragments of the mitochondrial ribosomal genes 12S rDNA and 16S rDNA for this species. We aligned these sequences with published sequences of other Ameiva species, including most of the Ameiva species from the West Indies and three Ameiva species from Central and South America, and with one sequence from the teiid lizard Tupinambis teguixin, which was used as the outgroup. We conducted Maximum Likelihood and Bayesian phylogenetic analyses. The phylogenetic reconstructions from the different methods were very similar, supporting the monophyly of West Indian Ameiva and showing, within this lineage, a basal polytomy of four geographically separated clades. Ameiva polops grouped in a cluster that included the other two Ameiva species found in the Puerto Rican Bank: A. wetmorei and A. exsul. A sister relationship between A. polops and A. wetmorei is suggested by our analyses. We compare our results with a previous study on the molecular systematics of West Indian Ameiva.
Input reconstruction of chaos sensors.
Yu, Dongchuan; Liu, Fang; Lai, Pik-Yin
2008-06-01
Although the sensitivity of sensors can be significantly enhanced using chaotic dynamics, owing to its extremely sensitive dependence on initial conditions and parameters, reconstructing the measured signal from the distorted sensor response becomes challenging. In this paper we suggest an effective method to reconstruct the measured signal from the distorted (chaotic) response of chaos sensors. This measurement signal reconstruction method applies neural network techniques for system structure identification and therefore does not require precise knowledge of the sensor's dynamics. We also discuss how to improve the robustness of the reconstruction. Some examples are presented to illustrate the suggested method.
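As a rough illustration of identification-based input reconstruction, the sketch below fits a linear model on a delay embedding of the sensor response; ridge-regularized least squares stands in for the neural network techniques used in the paper:

```python
import numpy as np

def reconstruct_input(response, target, order=10, ridge=1e-6):
    """Reconstruct the measured signal from the distorted sensor response
    by fitting a linear model on a delay embedding of the response, a
    simple stand-in for neural-network system identification.
    `response` and `target` are training time series of equal length."""
    n = len(response) - order
    # delay-embedding design matrix: each column is a lagged copy
    X = np.column_stack([response[i:i + n] for i in range(order)])
    y = target[order:]
    # ridge regularization improves robustness to measurement noise
    w = np.linalg.solve(X.T @ X + ridge * np.eye(order), X.T @ y)
    return X @ w, w
```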
Estimating the variance for heterogeneity in arm-based network meta-analysis.
Piepho, Hans-Peter; Madden, Laurence V; Roger, James; Payne, Roger; Williams, Emlyn R
2018-04-19
Network meta-analysis can be implemented by using arm-based or contrast-based models. Here we focus on arm-based models and fit them using generalized linear mixed model procedures. Full maximum likelihood (ML) estimation leads to biased trial-by-treatment interaction variance estimates for heterogeneity. Thus, our objective is to investigate alternative approaches to variance estimation that reduce bias compared with full ML. Specifically, we use penalized quasi-likelihood/pseudo-likelihood and hierarchical (h) likelihood approaches. In addition, we consider a novel model modification that yields estimators akin to the residual maximum likelihood estimator for linear mixed models. The proposed methods are compared by simulation, and 2 real datasets are used for illustration. Simulations show that penalized quasi-likelihood/pseudo-likelihood and h-likelihood reduce bias and yield satisfactory coverage rates. Sum-to-zero restriction and baseline contrasts for random trial-by-treatment interaction effects, as well as a residual ML-like adjustment, also reduce bias compared with an unconstrained model when ML is used, but coverage rates are not quite as good. Penalized quasi-likelihood/pseudo-likelihood and h-likelihood are therefore recommended. Copyright © 2018 John Wiley & Sons, Ltd.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liu, J; Gao, H
2016-06-15
Purpose: Unlike conventional computed tomography (CT), spectral CT based on energy-resolved photon-counting detectors is able to provide unprecedented material composition information. However, an important missing piece for accurate spectral CT is to incorporate the detector response function (DRF), which is distorted by factors such as pulse pileup and charge sharing. In this work, we propose material reconstruction methods for spectral CT with the DRF. Methods: The polyenergetic X-ray forward model takes the DRF into account for accurate material reconstruction. Two image reconstruction methods are proposed: a direct method based on the nonlinear data fidelity from the DRF-based forward model, and a linear-data-fidelity based method that relies on spectral rebinning so that the corresponding DRF matrix is invertible. The image reconstruction problem is then regularized with an isotropic TV term and solved by the alternating direction method of multipliers. Results: The simulation results suggest that the proposed methods provide more accurate material compositions than the standard method without the DRF. Moreover, the proposed method with linear data fidelity improved on the reconstruction quality of the method with nonlinear data fidelity. Conclusion: We have proposed material reconstruction methods for spectral CT with the DRF, which provide more accurate material compositions than the standard methods without the DRF; among them, the linear-data-fidelity method gives the better reconstruction quality. Jiulong Liu and Hao Gao were partially supported by the NSFC (#11405105), the 973 Program (#2015CB856000), and the Shanghai Pujiang Talent Program (#14PJ1404500).
On the existence of maximum likelihood estimates for presence-only data
Hefley, Trevor J.; Hooten, Mevin B.
2015-01-01
It is important to identify conditions for which maximum likelihood estimates are unlikely to be identifiable from presence-only data. In data sets where the maximum likelihood estimates do not exist, penalized likelihood and Bayesian methods will produce coefficient estimates, but these are sensitive to the choice of estimation procedure and prior or penalty term. When sample size is small or it is thought that habitat preferences are strong, we propose a suite of estimation procedures researchers can consider using.
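A minimal sketch of the penalized-likelihood route mentioned above: ridge-penalized logistic regression keeps coefficient estimates finite when complete separation makes the unpenalized MLE nonexistent, at the cost of sensitivity to the penalty weight:

```python
import numpy as np
from scipy.optimize import minimize

def penalized_logistic(X, y, penalty=1.0):
    """Ridge-penalized logistic regression for presence (y=1) versus
    background (y=0) data. When the unpenalized MLE does not exist
    (complete separation, common with small samples or strong habitat
    preferences), the penalty keeps the coefficients finite, but the fit
    then depends on the choice of `penalty`."""
    def nll(beta):
        eta = X @ beta
        # np.logaddexp(0, eta) = log(1 + exp(eta)), computed stably
        return (np.sum(np.logaddexp(0.0, eta) - y * eta)
                + 0.5 * penalty * beta @ beta)
    return minimize(nll, np.zeros(X.shape[1]), method="BFGS").x
```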
Likelihood-based modification of experimental crystal structure electron density maps
Terwilliger, Thomas C [Santa Fe, NM]
2005-04-16
A maximum-likelihood method improves an electron density map of an experimental crystal structure. A likelihood of a set of structure factors {F_h} is formed for the experimental crystal structure as (1) the likelihood of having obtained an observed set of structure factors {F_h^OBS} if the structure factor set {F_h} was correct, and (2) the likelihood that an electron density map resulting from {F_h} is consistent with selected prior knowledge about the experimental crystal structure. The set of structure factors {F_h} is then adjusted to maximize the likelihood of {F_h} for the experimental crystal structure. An improved electron density map is constructed with the maximized structure factors.
Population Synthesis of Radio and Gamma-ray Pulsars using the Maximum Likelihood Approach
NASA Astrophysics Data System (ADS)
Billman, Caleb; Gonthier, P. L.; Harding, A. K.
2012-01-01
We present the results of a pulsar population synthesis of normal pulsars from the Galactic disk using a maximum likelihood method. We seek to maximize the likelihood of a set of parameters in a Monte Carlo population statistics code to better understand their uncertainties and the confidence region of the model's parameter space. The maximum likelihood method allows for the use of more applicable Poisson statistics in the comparison of distributions of small numbers of detected gamma-ray and radio pulsars. Our code simulates pulsars at birth using Monte Carlo techniques and evolves them to the present assuming initial spatial, kick velocity, magnetic field, and period distributions. Pulsars are spun down to the present and given radio and gamma-ray emission characteristics. We select measured distributions of radio pulsars from the Parkes Multibeam survey and Fermi gamma-ray pulsars to perform a likelihood analysis of the assumed model parameters such as initial period and magnetic field, and radio luminosity. We present the results of a grid search of the parameter space as well as a search for the maximum likelihood using a Markov Chain Monte Carlo method. We express our gratitude for the generous support of the Michigan Space Grant Consortium, of the National Science Foundation (REU and RUI), the NASA Astrophysics Theory and Fundamental Program and the NASA Fermi Guest Investigator Program.
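A hedged sketch of the comparison step described here, assuming a simple binned Poisson log-likelihood between detected and Monte-Carlo-predicted counts (the bin contents below are made up):

```python
# Sketch of the likelihood the abstract motivates: Poisson statistics are
# appropriate for the small numbers of detected gamma-ray/radio pulsars.
import numpy as np
from scipy.special import gammaln

def binned_poisson_loglike(observed, predicted):
    """log L = sum_i [ n_i*ln(lam_i) - lam_i - ln(n_i!) ] over histogram bins."""
    lam = np.clip(predicted, 1e-12, None)   # guard against empty model bins
    n = np.asarray(observed, dtype=float)
    return float(np.sum(n * np.log(lam) - lam - gammaln(n + 1.0)))

observed  = np.array([0, 3, 7, 12, 5, 1])               # detected, per bin
predicted = np.array([0.5, 2.8, 8.1, 10.9, 6.2, 1.4])   # Monte Carlo prediction
print("log-likelihood:", binned_poisson_loglike(observed, predicted))
```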
Wu, Yufeng
2012-03-01
Incomplete lineage sorting can cause incongruence between the phylogenetic history of genes (the gene tree) and that of the species (the species tree), which can complicate the inference of phylogenies. In this article, I present a new coalescent-based algorithm for species tree inference with maximum likelihood. I first describe an improved method for computing the probability of a gene tree topology given a species tree, which is much faster than an existing algorithm by Degnan and Salter (2005). Based on this method, I develop a practical algorithm that takes a set of gene tree topologies and infers species trees with maximum likelihood. This algorithm searches for the best species tree by starting from initial species trees and performing heuristic search to obtain better trees with higher likelihood. This algorithm, called STELLS (which stands for Species Tree InfErence with Likelihood for Lineage Sorting), has been implemented in a program that is downloadable from the author's web page. The simulation results show that the STELLS algorithm is more accurate than an existing maximum likelihood method for many datasets, especially when there is noise in gene trees. I also show that the STELLS algorithm is efficient and can be applied to real biological datasets.
Hock, Sabrina; Hasenauer, Jan; Theis, Fabian J
2013-01-01
Diffusion is a key component of many biological processes such as chemotaxis, developmental differentiation and tissue morphogenesis. Recently it has become possible to assess the spatial gradients caused by diffusion in vitro and in vivo using microscopy-based imaging techniques. The resulting time series of two-dimensional, high-resolution images in combination with mechanistic models enable the quantitative analysis of the underlying mechanisms. However, such a model-based analysis is still challenging due to measurement noise and sparse observations, which result in uncertainties in the model parameters. We introduce a likelihood function for image-based measurements with log-normally distributed noise. Based upon this likelihood function we formulate the maximum likelihood estimation problem, which is solved using PDE-constrained optimization methods. To assess the uncertainty and practical identifiability of the parameters we introduce profile likelihoods for diffusion processes. As a proof of concept, we model certain aspects of the guidance of dendritic cells towards lymphatic vessels, an example of haptotaxis. Using a realistic set of artificial measurement data, we estimate the five kinetic parameters of this model and compute profile likelihoods. Our novel approach for the estimation of model parameters from image data, as well as the proposed identifiability analysis approach, is widely applicable to diffusion processes. The profile-likelihood-based method provides more rigorous uncertainty bounds than local approximation methods.
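A minimal sketch of the profile-likelihood construction on a toy decay model with log-normally distributed measurement noise (the PDE-constrained estimation itself is beyond a snippet; function and variable names are illustrative):

```python
# Profile likelihood sketch: fix the parameter of interest k and
# re-optimize the nuisance noise scale s at each value. The log-normal
# Jacobian term -log(y) is constant in (k, s) and is dropped.
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.stats import norm

rng = np.random.default_rng(2)
t = np.linspace(0.1, 5.0, 40)
k_true, sigma = 0.8, 0.2
y = np.exp(-k_true * t) * np.exp(rng.normal(0.0, sigma, t.size))

def neg_loglike(k, s):
    # log y ~ Normal(-k*t, s): log-normal measurement noise
    return -np.sum(norm.logpdf(np.log(y), loc=-k * t, scale=s))

def profile(k):
    """Profile out the nuisance scale s at fixed k."""
    res = minimize_scalar(lambda s: neg_loglike(k, s),
                          bounds=(1e-3, 2.0), method="bounded")
    return -res.fun

ks = np.linspace(0.6, 1.0, 21)
prof = np.array([profile(k) for k in ks])
print("profile-likelihood estimate of k ~", ks[np.argmax(prof)])
```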
Profile-Likelihood Approach for Estimating Generalized Linear Mixed Models with Factor Structures
ERIC Educational Resources Information Center
Jeon, Minjeong; Rabe-Hesketh, Sophia
2012-01-01
In this article, the authors suggest a profile-likelihood approach for estimating complex models by maximum likelihood (ML) using standard software and minimal programming. The method works whenever setting some of the parameters of the model to known constants turns the model into a standard model. An important class of models that can be…
NASA Technical Reports Server (NTRS)
Hoffbeck, Joseph P.; Landgrebe, David A.
1994-01-01
Many analysis algorithms for high-dimensional remote sensing data require that the remotely sensed radiance spectra be transformed to approximate reflectance to allow comparison with a library of laboratory reflectance spectra. In maximum likelihood classification, however, the remotely sensed spectra are compared to training samples, thus a transformation to reflectance may or may not be helpful. The effect of several radiance-to-reflectance transformations on maximum likelihood classification accuracy is investigated in this paper. We show that the empirical line approach, LOWTRAN7, flat-field correction, single spectrum method, and internal average reflectance are all non-singular affine transformations, and that non-singular affine transformations have no effect on discriminant analysis feature extraction and maximum likelihood classification accuracy. (An affine transformation is a linear transformation with an optional offset.) Since the Atmosphere Removal Program (ATREM) and the log residue method are not affine transformations, experiments with Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) data were conducted to determine the effect of these transformations on maximum likelihood classification accuracy. The average classification accuracy of the data transformed by ATREM and the log residue method was slightly less than the accuracy of the original radiance data. Since the radiance-to-reflectance transformations allow direct comparison of remotely sensed spectra with laboratory reflectance spectra, they can be quite useful in labeling the training samples required by maximum likelihood classification, but these transformations have only a slight effect or no effect at all on discriminant analysis and maximum likelihood classification accuracy.
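A hedged sketch of the invariance claim, assuming a plain Gaussian (quadratic) ML classifier and an arbitrary non-singular affine transform; labels agree because both class log-densities shift by the same log-Jacobian constant:

```python
# Demo: Gaussian ML classification is unchanged by y = Mx + c with
# non-singular M, since the constant -log|det M| cancels between classes.
import numpy as np
from scipy.stats import multivariate_normal as mvn

rng = np.random.default_rng(3)
X0 = rng.normal([0, 0], 1.0, (50, 2))         # training samples, class 0
X1 = rng.normal([2, 2], 1.5, (50, 2))         # training samples, class 1
Xt = rng.normal([1, 1], 1.2, (20, 2))         # test samples

def ml_labels(X0, X1, Xt):
    """Quadratic ML classifier from per-class sample means/covariances."""
    g0 = mvn(X0.mean(0), np.cov(X0.T)).logpdf(Xt)
    g1 = mvn(X1.mean(0), np.cov(X1.T)).logpdf(Xt)
    return (g1 > g0).astype(int)

M = np.array([[2.0, 0.3], [-0.5, 1.0]])       # non-singular linear part
c = np.array([5.0, -1.0])                     # offset
f = lambda X: X @ M.T + c                     # affine transform

labels_raw = ml_labels(X0, X1, Xt)
labels_aff = ml_labels(f(X0), f(X1), f(Xt))
print("identical labels:", np.array_equal(labels_raw, labels_aff))
```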
2012-01-01
Background Gene duplication and the subsequent divergence in function of the resulting paralogs via subfunctionalization and/or neofunctionalization is hypothesized to have played a major role in the evolution of plant form. The LEAFY HULL STERILE1 (LHS1) SEPALLATA (SEP) genes have been linked with the origin and diversification of the grass spikelet, but it is uncertain 1) when the duplication event that produced the LHS1 clade and its paralogous lineage Oryza sativa MADS5 (OSM5) occurred, and 2) how changes in gene structure and/or expression might have contributed to subfunctionalization and/or neofunctionalization in the two lineages. Methods Phylogenetic relationships among 84 SEP genes were estimated using Bayesian methods. RNA expression patterns were inferred using in situ hybridization. The patterns of protein sequence and RNA expression evolution were reconstructed using maximum parsimony (MP) and maximum likelihood (ML) methods, respectively. Results Phylogenetic analyses mapped the LHS1/OSM5 duplication event to the base of the grass family. MP character reconstructions estimated a change from cytosine to thymine in the first codon position of the first amino acid after the Zea mays MADS3 (ZMM3) domain converted a glutamine to a stop codon in the OSM5 ancestor following the LHS1/OSM5 duplication event. RNA expression analyses of OSM5 co-orthologs in Avena sativa, Chasmanthium latifolium, Hordeum vulgare, Pennisetum glaucum, and Sorghum bicolor followed by ML reconstructions of these data and previously published analyses estimated a complex pattern of gain and loss of LHS1 and OSM5 expression in different floral organs and different flowers within the spikelet or inflorescence. Conclusions Previous authors have reported that rice OSM5 and LHS1 proteins have different interaction partners indicating that the truncation of OSM5 following the LHS1/OSM5 duplication event has resulted in both partitioned and potentially novel gene functions. The complex pattern of OSM5 and LHS1 expression evolution is not consistent with a simple subfunctionalization model following the gene duplication event, but there is evidence of recent partitioning of OSM5 and LHS1 expression within different floral organs of A. sativa, C. latifolium, P. glaucum and S. bicolor, and between the upper and lower florets of the two-flowered maize spikelet. PMID:22340849
Phylogeny and biogeography of Maclura (Moraceae) and the origin of an anachronistic fruit.
Gardner, Elliot M; Sarraf, Paya; Williams, Evelyn W; Zerega, Nyree J C
2017-12-01
Maclura (ca. 12 spp., Moraceae) is a widespread genus of trees and woody climbers found on five continents. Maclura pomifera, the Osage orange, is considered a classic example of an anachronistic fruit. Native to the central USA, the grapefruit-sized Osage oranges are unpalatable and have no known extant native dispersers, leading to speculation that the fruits were adapted to extinct megafauna. Our aim was to reconstruct the phylogeny, estimate divergence dates, and infer ancestral ranges of Maclura in order to test the monophyly of subgeneric classifications and to understand evolution and dispersal patterns in this globally distributed group. Employing Bayesian and maximum-likelihood methods, we reconstructed the Maclura phylogeny using two nuclear and five chloroplast loci from all Maclura species and outgroups representing all Moraceae tribes. We reconstructed ancestral ranges and syncarp sizes using a family level dated tree, and used Ornstein-Uhlenbeck models to test for significant changes in syncarp size in the Osage orange lineage. Our analyses support a monophyletic Maclura with a Paleocene crown. Subgeneric sections were monophyletic except for the geographically disjunct Cardiogyne. There was strong support for current species delineations except in the widespread M. cochinchinensis. South America was reconstructed as the ancestral range for Maclura with subsequent colonization of Africa and the northern hemisphere. The clade containing M. pomifera likely diverged in the Oligocene, closely coinciding with crown divergence dates of the mammoth/mastodon and sloth clades that contain possible extinct dispersers. The best fitting model for syncarp size evolution indicated an increase in both syncarp size and the rate of syncarp size evolution in the Osage orange lineage. We conclude that our findings are consistent with the hypothesis that M. pomifera was adapted to dispersal by extinct megafauna. In addition, we consider dispersal rather than vicariance to be most likely responsible for the present distribution of Maclura, as crown divergence post-dated the separation of Africa and South America. We propose revised sectional delimitations based on the phylogeny. This study represents a complete phylogenetic and biogeographic analysis of this globally distributed genus and provides a basis for future work, including a taxonomic revision.
Dynamic PET Image reconstruction for parametric imaging using the HYPR kernel method
NASA Astrophysics Data System (ADS)
Spencer, Benjamin; Qi, Jinyi; Badawi, Ramsey D.; Wang, Guobao
2017-03-01
Dynamic PET image reconstruction is a challenging problem because of the ill-conditioned nature of PET and the low counting statistics resulting from short time frames in dynamic imaging. The kernel method for image reconstruction has been developed to improve image reconstruction of low-count PET data by incorporating prior information derived from high-count composite data. In contrast to most of the existing regularization-based methods, the kernel method embeds image prior information in the forward projection model and does not require an explicit regularization term in the reconstruction formula. Inspired by the existing highly constrained back-projection (HYPR) algorithm for dynamic PET image denoising, we propose in this work a new type of kernel that is simpler to implement and further improves kernel-based dynamic PET image reconstruction. Our evaluation study, using a physical phantom scan with synthetic FDG tracer kinetics, has demonstrated that the new HYPR kernel-based reconstruction can achieve a better region-of-interest (ROI) bias versus standard deviation trade-off for dynamic PET parametric imaging than the post-reconstruction HYPR denoising method and the previously used nonlocal-means kernel.
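A minimal sketch of the kernel method's reconstruction step, assuming a generic smoothing kernel in place of the HYPR-derived one and toy problem sizes; the image is modeled as x = K @ alpha and MLEM is run on the coefficients:

```python
# Kernelized MLEM sketch: the prior enters through the kernel matrix K in
# the forward model, not through an explicit regularizer. Toy sizes only.
import numpy as np

rng = np.random.default_rng(4)
n_pix, n_bins = 64, 96
A = rng.random((n_bins, n_pix))               # toy system (projection) matrix
x_true = rng.random(n_pix)
y = rng.poisson(A @ x_true * 50) / 50.0       # noisy projection data

# Toy kernel: Gaussian neighborhood smoothing (stand-in for a HYPR-derived K).
idx = np.arange(n_pix)
K = np.exp(-0.5 * ((idx[:, None] - idx[None, :]) / 2.0) ** 2)
K /= K.sum(axis=1, keepdims=True)

alpha = np.ones(n_pix)
sens = K.T @ (A.T @ np.ones(n_bins))          # sensitivity term K^T A^T 1
for _ in range(100):
    ratio = y / np.clip(A @ (K @ alpha), 1e-12, None)
    alpha *= (K.T @ (A.T @ ratio)) / sens     # kernelized MLEM update
x_hat = K @ alpha                             # final image estimate
print("relative error:", np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))
```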
From sea to land and beyond – New insights into the evolution of euthyneuran Gastropoda (Mollusca)
2008-01-01
Background The Euthyneura are considered to be the most successful and diverse group of Gastropoda. Phylogenetically, they are riven with controversy. Previous morphology-based phylogenetic studies have been greatly hampered by rampant parallelism in morphological characters or by incomplete taxon sampling. Based on sequences of nuclear 18S rRNA and 28S rRNA as well as mitochondrial 16S rRNA and COI DNA from 56 taxa, we reconstructed the phylogeny of Euthyneura utilising Maximum Likelihood and Bayesian inference methods. The evolution of colonization of freshwater and terrestrial habitats by pulmonate Euthyneura, considered crucial in the evolution of this group of Gastropoda, is reconstructed with Bayesian approaches. Results We found several well supported clades within Euthyneura; however, we could not confirm the traditional classification, since Pulmonata are paraphyletic and Opisthobranchia are either polyphyletic or paraphyletic, with several clades clearly distinguishable. Sacoglossa appear separately from the rest of the Opisthobranchia as sister taxon to basal Pulmonata. Within Pulmonata, Basommatophora are paraphyletic and Hygrophila and Eupulmonata form monophyletic clades. Pyramidelloidea are placed within Euthyneura, rendering the Euthyneura paraphyletic. Conclusion Based on the current phylogeny, it can be proposed for the first time that the invasion of freshwater by Pulmonata is a unique evolutionary event and has taken place directly from the marine environment via an aquatic pathway. The origin of the colonisation of terrestrial habitats is seeded in marginal zones and has probably occurred via estuaries or semi-terrestrial habitats such as mangroves. PMID:18294406
Zeng, Chan; Newcomer, Sophia R; Glanz, Jason M; Shoup, Jo Ann; Daley, Matthew F; Hambidge, Simon J; Xu, Stanley
2013-12-15
The self-controlled case series (SCCS) method is often used to examine the temporal association between vaccination and adverse events using only data from patients who experienced such events. Conditional Poisson regression models are used to estimate incidence rate ratios, and these models perform well with large or medium-sized case samples. However, in some vaccine safety studies, the adverse events studied are rare and the maximum likelihood estimates may be biased. Several bias correction methods have been examined in case-control studies using conditional logistic regression, but none of these methods have been evaluated in studies using the SCCS design. In this study, we used simulations to evaluate two bias correction approaches, the Firth penalized maximum likelihood method and Cordeiro and McCullagh's bias reduction after maximum likelihood estimation, with small sample sizes in studies using the SCCS design. The simulations showed that the bias under the SCCS design with a small number of cases can be large and is also sensitive to a short risk period. The Firth correction method provides finite and less biased estimates than the maximum likelihood method and Cordeiro and McCullagh's method. However, limitations still exist when the risk period in the SCCS design is short relative to the entire observation period.
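A hedged sketch of the Firth adjustment in the simpler unconditional logistic setting (the paper applies it to conditional Poisson SCCS models); the leverage-corrected score keeps estimates finite even under separation:

```python
# Firth penalized likelihood sketch for logistic regression: Newton
# iterations on the adjusted score U* = X'(y - p + h*(0.5 - p)), where h
# are leverages of the weighted hat matrix. Illustrative implementation.
import numpy as np

def firth_logistic(X, y, n_iter=50, tol=1e-8):
    beta = np.zeros(X.shape[1])
    for _ in range(n_iter):
        p = 1.0 / (1.0 + np.exp(-X @ beta))
        W = p * (1.0 - p)
        XtWX = X.T @ (W[:, None] * X)
        # Leverages h_i of W^{1/2} X (X'WX)^{-1} X' W^{1/2}
        h = np.einsum("ij,jk,ik->i", X, np.linalg.inv(XtWX), X) * W
        U = X.T @ (y - p + h * (0.5 - p))      # Firth-adjusted score
        step = np.linalg.solve(XtWX, U)
        beta += step
        if np.max(np.abs(step)) < tol:
            break
    return beta

# Tiny separated dataset: plain ML diverges, the Firth estimate is finite.
X = np.column_stack([np.ones(8), [-3, -2, -2, -1, 1, 2, 2, 3]])
y = np.array([0, 0, 0, 0, 1, 1, 1, 1])
print("Firth estimates:", firth_logistic(X, y))
```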
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li, B; Southern Medical University, Guangzhou, Guangdong; Tian, Z
Purpose: While compressed sensing-based cone-beam CT (CBCT) iterative reconstruction techniques have demonstrated a tremendous capability of reconstructing high-quality images from undersampled noisy data, their long computation time still hinders wide application in routine clinic. The purpose of this study is to develop a reconstruction framework that employs modern consensus optimization techniques to achieve CBCT reconstruction on a multi-GPU platform for improved computational efficiency. Methods: Total projection data were evenly distributed to multiple GPUs. Each GPU performed reconstruction using its own projection data with a conventional total variation regularization approach to ensure image quality. In addition, the solutions from the GPUs were subject to a consistency constraint that they should be identical. We solved the optimization problem, with all the constraints considered rigorously, using an alternating direction method of multipliers (ADMM) algorithm. The reconstruction framework was implemented using OpenCL on a platform with two Nvidia GTX590 GPU cards, each with two GPUs. We studied the performance of our method and demonstrated its advantages through a simulation case with an NCAT phantom and an experimental case with a Catphan phantom. Results: Compared with the CBCT images reconstructed using the conventional FDK method with full projection datasets, our proposed method achieved comparable image quality with about one third of the projections. The computation time on the multi-GPU platform was ∼55 s and ∼35 s in the two cases, respectively, achieving a speedup factor of ∼3.0 compared with single-GPU reconstruction. Conclusion: We have developed a consensus ADMM-based CBCT reconstruction method that enables reconstruction on a multi-GPU platform. The achieved efficiency makes this method clinically attractive.
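A minimal sketch of consensus ADMM, assuming plain distributed least squares on one CPU in place of TV-regularized CBCT across GPUs (workers, sizes, and penalty are illustrative):

```python
# Consensus ADMM sketch: each "worker" solves a local problem on its data
# slab; the constraint x_k = z ties the copies together, mirroring the
# per-GPU solves plus consistency constraint described in the abstract.
import numpy as np

rng = np.random.default_rng(5)
n, n_workers = 30, 4
x_true = rng.normal(size=n)
blocks = []                                    # per-worker (A_k, b_k) slabs
for _ in range(n_workers):
    A = rng.normal(size=(50, n))
    blocks.append((A, A @ x_true + 0.01 * rng.normal(size=50)))

rho = 1.0
z = np.zeros(n)                                # consensus variable
xs = [np.zeros(n) for _ in blocks]             # per-worker solutions
us = [np.zeros(n) for _ in blocks]             # per-worker scaled duals
for _ in range(100):
    for k, (A, b) in enumerate(blocks):        # local solves (parallel on GPUs)
        M = A.T @ A + rho * np.eye(n)
        xs[k] = np.linalg.solve(M, A.T @ b + rho * (z - us[k]))
    z = np.mean([x + u for x, u in zip(xs, us)], axis=0)  # enforce consistency
    for k in range(n_workers):
        us[k] += xs[k] - z                     # dual ascent
print("relative error:", np.linalg.norm(z - x_true) / np.linalg.norm(x_true))
```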
Approximate likelihood calculation on a phylogeny for Bayesian estimation of divergence times.
dos Reis, Mario; Yang, Ziheng
2011-07-01
The molecular clock provides a powerful way to estimate species divergence times. If information on some species divergence times is available from the fossil or geological record, it can be used to calibrate a phylogeny and estimate divergence times for all nodes in the tree. The Bayesian method provides a natural framework to incorporate different sources of information concerning divergence times, such as information in the fossil and molecular data. Current models of sequence evolution are intractable in a Bayesian setting, and Markov chain Monte Carlo (MCMC) is used to generate the posterior distribution of divergence times and evolutionary rates. This method is computationally expensive, as it involves the repeated calculation of the likelihood function. Here, we explore the use of Taylor expansion to approximate the likelihood during MCMC iteration. The approximation is much faster than conventional likelihood calculation. However, the approximation is expected to be poor when the proposed parameters are far from the likelihood peak. We explore the use of parameter transforms (square root, logarithm, and arcsine) to improve the approximation to the likelihood curve. We found that the new methods, particularly the arcsine-based transform, provided very good approximations under relaxed clock models and also under the global clock model when the global clock is not seriously violated. The approximation is poorer for analysis under the global clock when the global clock is seriously wrong and should thus not be used. The results suggest that the approximate method may be useful for Bayesian dating analysis using large data sets.
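A hedged sketch of the approximation idea on a toy Poisson-rate likelihood: a quadratic Taylor expansion at the MLE, once in the raw parameter and once in a square-root transform of it (the paper's arcsine transform and phylogenetic likelihood are not reproduced):

```python
# Quadratic approximation of a log-likelihood at the MLE, in the raw
# parameter theta and in the transform s = sqrt(theta); the transform can
# track the exact curve better away from the peak, which is the effect
# the abstract exploits. Toy Poisson example only.
import numpy as np

y = np.array([3, 5, 4, 6, 2, 5, 4])              # toy count data
S, n = y.sum(), y.size

def loglike(theta):                               # Poisson log-likelihood
    return S * np.log(theta) - n * theta          # (constants dropped)

theta_hat = S / n                                 # MLE of the rate
l_hat = loglike(theta_hat)
c_theta = -S / theta_hat**2                       # d2l/dtheta2 at the MLE
c_sqrt = 4.0 * theta_hat * c_theta                # chain rule: d2l/ds2 at MLE

def quad_theta(theta):                            # expansion in theta
    return l_hat + 0.5 * c_theta * (theta - theta_hat) ** 2

def quad_sqrt(theta):                             # expansion in sqrt(theta)
    return l_hat + 0.5 * c_sqrt * (np.sqrt(theta) - np.sqrt(theta_hat)) ** 2

for theta in (theta_hat * 0.5, theta_hat * 1.5, theta_hat * 2.5):
    print(f"theta={theta:5.2f}  exact={loglike(theta):8.3f}  "
          f"quad={quad_theta(theta):8.3f}  quad(sqrt)={quad_sqrt(theta):8.3f}")
```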
MO-DE-207A-11: Sparse-View CT Reconstruction Via a Novel Non-Local Means Method
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen, Z; Qi, H; Wu, S
2016-06-15
Purpose: Sparse-view computed tomography (CT) reconstruction is an effective strategy to reduce the radiation dose delivered to patients. Due to the insufficiency of measurements, traditional non-local means (NLM) based reconstruction methods often lead to over-smoothness in image edges. To address this problem, an adaptive NLM reconstruction method based on rotational invariance (RIANLM) is proposed. Methods: The method consists of four steps: 1) initializing parameters; 2) algebraic reconstruction technique (ART) reconstruction using raw projection data; 3) positivity constraint of the image reconstructed by ART; 4) update of the reconstructed image by RIANLM filtering. In RIANLM, a novel similarity metric that is rotationally invariant is proposed and used to calculate the distance between two patches. In this way, any patch with similar structure but different orientation to the reference patch receives a relatively large weight, avoiding an over-smoothed image. Moreover, the parameter h in RIANLM, which controls the decay of the weights, is adaptive to avoid over-smoothness, while in NLM it is not adaptive during the whole reconstruction process. The proposed method is named ART-RIANLM and validated on a Shepp-Logan phantom and clinical projection data. Results: In our experiments, the searching neighborhood size is set to 15 by 15 and the similarity window to 3 by 3. For the simulated case with a 256 by 256 Shepp-Logan phantom, ART-RIANLM produces a reconstructed image with higher SNR (35.38 dB versus 24.00 dB) and lower MAE (0.0006 versus 0.0023) than ART-NLM. Visual inspection demonstrated that the proposed method suppresses artifacts and noise more effectively and preserves image edges better. Similar results were found for the clinical data case. Conclusion: A novel ART-RIANLM method for sparse-view CT reconstruction is presented with superior image quality. Compared with the conventional ART-NLM method, the SNR from ART-RIANLM increases by 47% and the MAE decreases by 74%.
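A hedged sketch of a rotation-invariant patch distance of the kind this abstract describes, assuming the minimum over 90-degree rotations; the paper's exact metric and weighting may differ:

```python
# Rotation-invariant NLM similarity sketch: the distance between two
# patches is the minimum squared difference over rotations of the
# candidate, so similar-but-rotated structures keep large weights.
import numpy as np

def ri_patch_distance(p_ref, p_cand):
    """Min squared distance over the four np.rot90 rotations of p_cand."""
    return min(np.sum((p_ref - np.rot90(p_cand, k)) ** 2) for k in range(4))

def ri_weight(p_ref, p_cand, h):
    """NLM weight with rotation-invariant distance and decay parameter h."""
    return np.exp(-ri_patch_distance(p_ref, p_cand) / (h * h))

edge_h = np.array([[0, 0, 0], [1, 1, 1], [0, 0, 0]], float)  # horizontal edge
edge_v = np.rot90(edge_h)                                    # same edge, rotated
print("plain distance:", np.sum((edge_h - edge_v) ** 2))     # large
print("RI distance   :", ri_patch_distance(edge_h, edge_v))  # zero
print("RI weight     :", ri_weight(edge_h, edge_v, h=1.0))
```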
Computation of nonparametric convex hazard estimators via profile methods.
Jankowski, Hanna K; Wellner, Jon A
2009-05-01
This paper proposes a profile likelihood algorithm to compute the nonparametric maximum likelihood estimator of a convex hazard function. The maximisation is performed in two steps: First the support reduction algorithm is used to maximise the likelihood over all hazard functions with a given point of minimum (or antimode). Then it is shown that the profile (or partially maximised) likelihood is quasi-concave as a function of the antimode, so that a bisection algorithm can be applied to find the maximum of the profile likelihood, and hence also the global maximum. The new algorithm is illustrated using both artificial and real data, including lifetime data for Canadian males and females.
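A minimal sketch of the outer search this abstract describes, assuming a toy unimodal stand-in for the profile likelihood and a golden-section search (a close relative of the paper's bisection scheme) to locate the maximising antimode:

```python
# Outer maximisation sketch: because the profile likelihood is
# quasi-concave in the antimode, a derivative-free interval search finds
# its maximiser. The inner support-reduction solve is replaced by a toy
# unimodal profile function here.
import numpy as np

def golden_max(f, lo, hi, tol=1e-6):
    """Maximise a quasi-concave f on [lo, hi] by golden-section search."""
    inv_phi = (np.sqrt(5.0) - 1.0) / 2.0
    a, b = lo, hi
    c, d = b - inv_phi * (b - a), a + inv_phi * (b - a)
    fc, fd = f(c), f(d)
    while b - a > tol:
        if fc > fd:                       # maximiser lies in [a, d]
            b, d, fd = d, c, fc
            c = b - inv_phi * (b - a)
            fc = f(c)
        else:                             # maximiser lies in [c, b]
            a, c, fc = c, d, fd
            d = a + inv_phi * (b - a)
            fd = f(d)
    return 0.5 * (a + b)

toy_profile = lambda antimode: -(antimode - 2.3) ** 2    # stand-in profile
print("estimated antimode:", golden_max(toy_profile, 0.0, 10.0))
```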
Kamesh Iyer, Srikant; Tasdizen, Tolga; Likhite, Devavrat; DiBella, Edward
2016-01-01
Purpose: Rapid reconstruction of undersampled multicoil MRI data with iterative constrained reconstruction methods is a challenge. The authors sought to develop a new substitution-based variable splitting algorithm for faster reconstruction of multicoil cardiac perfusion MRI data. Methods: The new method, the split Bregman multicoil accelerated reconstruction technique (SMART), uses a combination of split Bregman based variable splitting and iterative reweighting techniques to achieve fast convergence. Total variation constraints are used along the spatial and temporal dimensions. The method is tested on nine ECG-gated dog perfusion datasets, acquired with a 30-ray golden ratio radial sampling pattern, and ten ungated human perfusion datasets, acquired with a 24-ray golden ratio radial sampling pattern. Image quality and reconstruction speed are evaluated and compared to a gradient descent (GD) implementation and to multicoil k-t SLR, a reconstruction technique that uses a combination of sparsity and low-rank constraints. Results: Comparisons based on a blur metric and visual inspection showed that SMART images had lower blur and better texture than the GD implementation. On average, the GD-based images had an ∼18% higher blur metric than SMART images. Reconstruction of dynamic contrast enhanced (DCE) cardiac perfusion images using the SMART method was ∼6 times faster than standard gradient descent methods. k-t SLR and SMART produced images with comparable image quality, though SMART was ∼6.8 times faster than k-t SLR. Conclusions: The SMART method is a promising approach to rapidly reconstruct good-quality multicoil images from undersampled DCE cardiac perfusion data. PMID:27036592
DOE Office of Scientific and Technical Information (OSTI.GOV)
Niu, S; Zhang, Y; Ma, J
Purpose: To investigate iterative reconstruction via prior image constrained total generalized variation (PICTGV) for spectral computed tomography (CT), using fewer projections while achieving better image quality. Methods: The proposed PICTGV method is formulated as an optimization problem that balances the data fidelity and the prior image constrained total generalized variation of reconstructed images in one framework. The PICTGV method is based on structure correlations among images in the energy domain and uses a high-quality image to guide the reconstruction of energy-specific images. In the PICTGV method, the high-quality image is reconstructed from all detector-collected X-ray signals and is referred to as the broad-spectrum image. Distinct from existing reconstruction methods that operate on first-order derivatives of the images, the PICTGV method incorporates higher-order derivatives. An alternating optimization algorithm is used to minimize the PICTGV objective function. We evaluate the performance of PICTGV on noise and artifact suppression using phantom studies and compare the method with the conventional filtered back-projection method as well as a TGV-based method without a prior image. Results: On the digital phantom, the proposed method outperforms the existing TGV method in terms of noise reduction, artifact suppression, and edge detail preservation. Compared with that obtained by the TGV-based method without a prior image, the relative root mean square error in the images reconstructed by the proposed method is reduced by over 20%. Conclusion: The authors propose an iterative reconstruction via prior image constrained total generalized variation for spectral CT, develop an alternating optimization algorithm, and numerically demonstrate the merits of the approach. Results show that the proposed PICTGV method outperforms the TGV method for spectral CT.
ERIC Educational Resources Information Center
Klein, Andreas G.; Muthen, Bengt O.
2007-01-01
In this article, a nonlinear structural equation model is introduced and a quasi-maximum likelihood method for simultaneous estimation and testing of multiple nonlinear effects is developed. The focus of the new methodology lies on efficiency, robustness, and computational practicability. Monte-Carlo studies indicate that the method is highly…
Bias and Efficiency in Structural Equation Modeling: Maximum Likelihood versus Robust Methods
ERIC Educational Resources Information Center
Zhong, Xiaoling; Yuan, Ke-Hai
2011-01-01
In the structural equation modeling literature, the normal-distribution-based maximum likelihood (ML) method is most widely used, partly because the resulting estimator is claimed to be asymptotically unbiased and most efficient. However, this may not hold when data deviate from normal distribution. Outlying cases or nonnormally distributed data,…
Five Methods for Estimating Angoff Cut Scores with IRT
ERIC Educational Resources Information Center
Wyse, Adam E.
2017-01-01
This article illustrates five different methods for estimating Angoff cut scores using item response theory (IRT) models. These include maximum likelihood (ML), expected a priori (EAP), modal a priori (MAP), and weighted maximum likelihood (WML) estimators, as well as the most commonly used approach based on translating ratings through the test…
Expansion method in secondary total ear reconstruction for undesirable reconstructed ear.
Liu, Tun; Hu, Jintian; Zhou, Xu; Zhang, Qingguo
2014-09-01
Ear reconstruction by autologous costal cartilage grafting is the most widely applied technique, with fewer complications. However, an undesirable ear reconstruction brings additional problems to plastic surgeons. Some authors resort to free flaps or to the osseointegration technique with a prosthetic ear. In this article, we introduce a secondary total ear reconstruction with an expanded skin flap method. From July 2010 to April 2012, 7 cases of undesirable ear reconstruction were repaired by the tissue expansion method. Procedures including removal of the previous cartilage framework, soft tissue expander insertion, and second-stage cartilage framework insertion were performed in each case according to its local conditions. The follow-up time ranged from 6 months to 2.5 years. All of the cases recovered well with good 3-dimensional form, a symmetrical auriculocephalic angle, and stable fixation. This evidence shows that the novel expansion method is safe, stable, and less traumatic for secondary total ear reconstruction. With a sufficient expanded skin flap and a refabricated cartilage framework, a lifelike appearance of the reconstructed ear can be achieved without causing additional injury.
[Development and current situation of reconstruction methods following total sacrectomy].
Huang, Siyi; Ji, Tao; Guo, Wei
2018-05-01
To review the development of reconstruction methods following total sacrectomy and to provide a reference for finding a better reconstruction method. Case reports and biomechanical and finite element studies of reconstruction following total sacrectomy, published at home and abroad, were reviewed, and the development and current situation were summarized. After developing for nearly 30 years, great progress has been made in reconstruction concepts and fixation techniques. The fixation methods can be summarized as three strategies: spinopelvic fixation (SPF), posterior pelvic ring fixation (PPRF), and anterior spinal column fixation (ASCF). SPF has undergone technical progress from intrapelvic rod and hook constructs to pedicle and iliac screw-rod systems. PPRF and ASCF can improve the stability of the reconstruction system. Reconstruction following total sacrectomy remains a challenge. Reconstruction combining SPF, PPRF, and ASCF is the developmental direction for achieving mechanical stability. How to obtain biological fixation to improve long-term stability is an urgent problem to be solved.
NASA Astrophysics Data System (ADS)
Chen, Yihang; Xiao, Chijie; Yang, Xiaoyi; Wang, Tianbo; Xu, Tianchao; Yu, Yi; Xu, Min; Wang, Long; Lin, Chen; Wang, Xiaogang
2017-10-01
The laser-driven ion beam trace probe (LITP) is a new diagnostic method for measuring the poloidal magnetic field (Bp) and radial electric field (Er) in tokamaks. LITP injects a laser-driven ion beam into the tokamak, and Bp and Er profiles can be reconstructed using tomography methods. A reconstruction code has been developed to validate the LITP theory, and both 2D reconstruction of Bp and simultaneous reconstruction of Bp and Er have been attained. To reconstruct from experimental data with noise, Maximum Entropy and Gaussian-Bayesian tomography methods were applied and improved according to the characteristics of the LITP problem. With these improved methods, a reconstruction error level below 15% has been attained with a data noise level of 10%. These methods will be further tested and applied in the following LITP experiments. Supported by the ITER-CHINA program 2015GB120001, CHINA MOST under 2012YQ030142 and the National Natural Science Foundation of China under 11575014 and 11375053.
Beyond maximum entropy: Fractal Pixon-based image reconstruction
NASA Technical Reports Server (NTRS)
Puetter, Richard C.; Pina, R. K.
1994-01-01
We have developed a new Bayesian image reconstruction method that has been shown to be superior to the best implementations of other competing methods, including Goodness-of-Fit methods such as Least-Squares fitting and Lucy-Richardson reconstruction, as well as Maximum Entropy (ME) methods such as those embodied in the MEMSYS algorithms. Our new method is based on the concept of the pixon, the fundamental, indivisible unit of picture information. Use of the pixon concept provides an improved image model, resulting in an image prior which is superior to that of standard ME. Our past work has shown how uniform information content pixons can be used to develop a 'Super-ME' method in which entropy is maximized exactly. Recently, however, we have developed a superior pixon basis for the image, the Fractal Pixon Basis (FPB). Unlike the Uniform Pixon Basis (UPB) of our 'Super-ME' method, the FPB basis is selected by employing fractal dimensional concepts to assess the inherent structure in the image. The Fractal Pixon Basis results in the best image reconstructions to date, superior to both UPB and the best ME reconstructions. In this paper, we review the theory of the UPB and FPB pixon and apply our methodology to the reconstruction of far-infrared imaging of the galaxy M51. The results of our reconstruction are compared to published reconstructions of the same data using the Lucy-Richardson algorithm, the Maximum Correlation Method developed at IPAC, and the MEMSYS ME algorithms. The results show that our reconstructed image has a spatial resolution a factor of two better than best previous methods (and a factor of 20 finer than the width of the point response function), and detects sources two orders of magnitude fainter than other methods.
A modified sparse reconstruction method for three-dimensional synthetic aperture radar image
NASA Astrophysics Data System (ADS)
Zhang, Ziqiang; Ji, Kefeng; Song, Haibo; Zou, Huanxin
2018-03-01
There is increasing interest in three-dimensional synthetic aperture radar (3-D SAR) imaging from observed sparse scattering data. However, the existing 3-D sparse imaging methods require long computing times and large storage capacity. In this paper, we propose a modified method for sparse 3-D SAR imaging. The method processes the collection of noisy SAR measurements, usually collected over nonlinear flight paths, and outputs 3-D SAR imagery. First, the 3-D sparse reconstruction problem is transformed by range compression into a series of 2-D slice reconstruction problems. The slices are then reconstructed by a modified SL0 (smoothed l0 norm) reconstruction algorithm. The improved algorithm uses the hyperbolic tangent function instead of the Gaussian function to approximate the l0 norm, and uses the Newton direction instead of the steepest descent direction, which speeds up the convergence of the SL0 algorithm. Finally, numerical simulation results are given to demonstrate the effectiveness of the proposed algorithm. It is shown that our method, compared with the existing 3-D sparse imaging method, performs better in both reconstruction quality and reconstruction time.
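A hedged sketch of an SL0-style solver with a tanh smoothing of the l0 norm, assuming a plain gradient step with projection onto {x : Ax = b} in place of the paper's Newton direction (the exact tanh surrogate used in the paper is a guess):

```python
# SL0-style sketch: minimise a smooth surrogate of ||x||_0, here
# sum tanh(x^2 / 2*sigma^2), over {x : Ax = b}, gradually shrinking sigma
# (graduated non-convexity), with gradient steps plus projection.
import numpy as np

def sl0_tanh(A, b, sigma_decay=0.7, n_sigma=15, n_inner=20, mu=1.0):
    pinv_proj = A.T @ np.linalg.inv(A @ A.T)   # for projecting onto Ax = b
    x = pinv_proj @ b                          # minimum-norm feasible start
    sigma = 2.0 * np.max(np.abs(x))
    for _ in range(n_sigma):
        s2 = sigma * sigma
        for _ in range(n_inner):
            t = np.minimum(x * x / (2.0 * s2), 50.0)   # avoid cosh overflow
            g = (x / s2) / np.cosh(t) ** 2     # grad of the tanh surrogate
            x = x - mu * s2 * g                # shrink entries near zero
            x -= pinv_proj @ (A @ x - b)       # project back onto Ax = b
        sigma *= sigma_decay
    return x

rng = np.random.default_rng(6)
n, m, k = 100, 40, 5
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.normal(0.0, 1.0, k)
A = rng.normal(size=(m, n))
b = A @ x_true
x_hat = sl0_tanh(A, b)
print("relative error:",
      np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))
```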
Reconstruction of Cyber and Physical Software Using Novel Spread Method
NASA Astrophysics Data System (ADS)
Ma, Wubin; Deng, Su; Huang, Hongbin
2018-03-01
Cyber-physical software has received sustained attention since 2010. Many researchers would disagree with deploying the traditional spread method for the reconstruction of cyber-physical software, even though it embodies the key principles of cyber-physical system reconstruction. NSM (novel spread method), our new methodology for the reconstruction of cyber-physical software, is proposed as a solution to these challenges.