Angelis, G I; Reader, A J; Kotasidis, F A; Lionheart, W R; Matthews, J C
2011-07-07
Iterative expectation maximization (EM) techniques have been extensively used to solve maximum likelihood (ML) problems in positron emission tomography (PET) image reconstruction. Although EM methods offer a robust approach to solving ML problems, they usually suffer from slow convergence rates. The ordered subsets EM (OSEM) algorithm provides significant improvements in the convergence rate, but it can cycle between estimates converging towards the ML solution of each subset. In contrast, gradient-based methods, such as the recently proposed non-monotonic maximum likelihood (NMML) and the more established preconditioned conjugate gradient (PCG), offer a globally convergent, yet equally fast, alternative to OSEM. Reported results showed that NMML provides faster convergence compared to OSEM; however, it has never been compared to other fast gradient-based methods, like PCG. Therefore, in this work we evaluate the performance of two gradient-based methods (NMML and PCG) and investigate their potential as an alternative to the fast and widely used OSEM. All algorithms were evaluated using 2D simulations, as well as a single [(11)C]DASB clinical brain dataset. Results on simulated 2D data show that both PCG and NMML achieve orders of magnitude faster convergence to the ML solution compared to MLEM and exhibit comparable performance to OSEM. Equally fast performance is observed between OSEM and PCG for clinical 3D data, but NMML seems to perform poorly. However, with the addition of a preconditioner term to the gradient direction, the convergence behaviour of NMML can be substantially improved. Although PCG is a fast convergent algorithm, the use of a (bent) line search increases the complexity of the implementation, as well as the computational time involved per iteration. Contrary to previous reports, NMML offers no clear advantage over OSEM or PCG, for noisy PET data. Therefore, we conclude that there is little evidence to replace OSEM as the algorithm of choice for many applications, especially given that in practice convergence is often not desired for algorithms seeking ML estimates.
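Since MLEM and OSEM recur throughout the abstracts below, a minimal NumPy sketch of their update equations may help fix ideas. The dense system matrix `A`, the interleaved subset scheme, and all variable names are illustrative assumptions, not the implementation used in any of the cited works.

```python
import numpy as np

def mlem_update(x, A, y, sens=None):
    """One MLEM iteration: x <- x / (A^T 1) * A^T (y / (A x)).

    x    : current image estimate (1D, n_voxels)
    A    : system matrix (n_bins x n_voxels), dense here for clarity
    y    : measured projection data (n_bins)
    sens : optional precomputed sensitivity image A^T 1
    """
    if sens is None:
        sens = A.T @ np.ones(len(y))
    proj = A @ x
    ratio = y / np.maximum(proj, 1e-12)          # avoid division by zero
    return x * (A.T @ ratio) / np.maximum(sens, 1e-12)

def osem_update(x, A, y, n_subsets=8):
    """One full OSEM iteration: cycle MLEM-like updates over projection subsets."""
    n_bins = len(y)
    for s in range(n_subsets):
        idx = np.arange(s, n_bins, n_subsets)    # interleaved subset of projection bins
        x = mlem_update(x, A[idx, :], y[idx])    # subset sensitivity computed inside
    return x
```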
Dong, Jian; Hayakawa, Yoshihiko; Kannenberg, Sven; Kober, Cornelia
2013-02-01
The objective of this study was to reduce metal-induced streak artifacts on oral and maxillofacial x-ray computed tomography (CT) images by developing a fast statistical image reconstruction system using iterative reconstruction algorithms. Adjacent CT images often depict similar anatomical structures in thin slices. So, first, images were reconstructed using the projection data of an artifact-free image. Second, images were processed by a successive iterative restoration method in which projection data were generated from the reconstructed image in sequence. Besides the maximum likelihood-expectation maximization (ML-EM) algorithm, the ordered subset-expectation maximization (OS-EM) algorithm was examined. In addition, a small region of interest (ROI) setting and reverse processing were applied to improve performance. Both algorithms reduced artifacts while only slightly decreasing gray levels. The OS-EM algorithm and the small ROI setting reduced the processing duration without apparent detriment. Sequential and reverse processing did not show apparent effects. The two iterative reconstruction alternatives were effective for artifact reduction, and the OS-EM algorithm and small ROI setting improved the performance. Copyright © 2012 Elsevier Inc. All rights reserved.
Accurate 3D reconstruction by a new PDS-OSEM algorithm for HRRT
NASA Astrophysics Data System (ADS)
Chen, Tai-Been; Horng-Shing Lu, Henry; Kim, Hang-Keun; Son, Young-Don; Cho, Zang-Hee
2014-03-01
State-of-the-art high resolution research tomography (HRRT) provides high resolution PET images with full 3D human brain scanning. However, the short time frames used in dynamic studies cause many problems related to the low counts in the acquired data. The PDS-OSEM algorithm was proposed to reconstruct HRRT images with a high signal-to-noise ratio that provides accurate information for dynamic data. The new algorithm was evaluated using simulated images, empirical phantoms, and real human brain data. Time-activity curves were used to compare the reconstruction performance of the PDS-OSEM and OP-OSEM algorithms on dynamic data. According to the simulated and empirical studies, the PDS-OSEM algorithm reconstructs images with higher quality, higher accuracy, less noise, and a lower average sum of squared errors than OP-OSEM. The presented algorithm is useful for providing quality images under the low count rates encountered in dynamic studies with short scan times.
NASA Astrophysics Data System (ADS)
Ahn, Sangtae; Ross, Steven G.; Asma, Evren; Miao, Jun; Jin, Xiao; Cheng, Lishui; Wollenweber, Scott D.; Manjeshwar, Ravindra M.
2015-08-01
Ordered subset expectation maximization (OSEM) is the most widely used algorithm for clinical PET image reconstruction. OSEM is usually stopped early and post-filtered to control image noise and does not necessarily achieve optimal quantitation accuracy. As an alternative to OSEM, we have recently implemented a penalized likelihood (PL) image reconstruction algorithm for clinical PET using the relative difference penalty with the aim of improving quantitation accuracy without compromising visual image quality. Preliminary clinical studies have demonstrated that visual image quality, including lesion conspicuity, in images reconstructed by the PL algorithm is better than or at least as good as that in OSEM images. In this paper we evaluate lesion quantitation accuracy of the PL algorithm with the relative difference penalty compared to OSEM by using various data sets including phantom data acquired with an anthropomorphic torso phantom, an extended oval phantom and the NEMA image quality phantom; clinical data; and hybrid clinical data generated by adding simulated lesion data to clinical data. We focus on mean standardized uptake values and compare them for PL and OSEM using both time-of-flight (TOF) and non-TOF data. The results demonstrate improvements of PL in lesion quantitation accuracy compared to OSEM, with a particular improvement in cold background regions such as the lungs.
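The relative difference penalty referred to here is usually written as R(x) = Σ (x_j − x_k)² / (x_j + x_k + γ|x_j − x_k|) over neighbouring voxel pairs. Below is a hedged NumPy sketch of that penalty for a 2D image; the nearest-neighbour structure, uniform weights and parameter defaults are assumptions rather than the vendor's implementation.

```python
import numpy as np

def relative_difference_penalty(img, gamma=2.0, beta=1.0):
    """Relative difference penalty over nearest-neighbour pairs of a 2D image.

    R(x) = beta * sum_{j,k} (x_j - x_k)^2 / (x_j + x_k + gamma*|x_j - x_k| + eps)
    Larger gamma preserves edges more strongly.
    """
    eps = 1e-12
    penalty = 0.0
    for axis in (0, 1):                       # horizontal and vertical neighbour pairs
        d = np.diff(img, axis=axis)           # x_j - x_k
        s = (np.take(img, range(1, img.shape[axis]), axis=axis)
             + np.take(img, range(0, img.shape[axis] - 1), axis=axis))
        penalty += np.sum(d**2 / (s + gamma * np.abs(d) + eps))
    return beta * penalty
```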
NASA Astrophysics Data System (ADS)
Hutton, Brian F.; Lau, Yiu H.
1998-06-01
Compensation for distance-dependent resolution can be directly incorporated in maximum likelihood reconstruction. Our objective was to examine the effectiveness of this compensation using either the standard expectation maximization (EM) algorithm or an accelerated algorithm based on the use of ordered subsets (OSEM). We also investigated the application of post-reconstruction filtering in combination with resolution compensation. Using the MCAT phantom, projections were simulated for […] data, including attenuation and distance-dependent resolution. Projection data were reconstructed using conventional EM and OSEM with subset sizes 2 and 4, with/without 3D compensation for detector response (CDR). Post-reconstruction filtering (PRF) was also performed using a 3D Butterworth filter of order 5 with various cutoff frequencies (0.2-[…]). Image quality and reconstruction accuracy were improved when CDR was included. Image noise was lower with CDR for a given iteration number. PRF with a cutoff frequency greater than […] improved noise with no reduction in the recovery coefficient for myocardium, but the effect was smaller when CDR was incorporated in the reconstruction. CDR alone provided better results than use of PRF without CDR. Results suggest that using CDR without PRF, and stopping at a small number of iterations, may provide sufficiently good results for myocardial SPECT. Similar behaviour was demonstrated for OSEM.
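As a rough illustration of the post-reconstruction filtering (PRF) step described above, the following sketch applies a 3D Butterworth low-pass filter of order 5 in the frequency domain. The default cutoff value and the frequency normalisation (cycles per voxel) are assumptions.

```python
import numpy as np

def butterworth_postfilter(volume, cutoff=0.25, order=5):
    """Apply a 3D Butterworth low-pass filter to a reconstructed volume.

    cutoff : cutoff frequency in cycles/voxel (Nyquist = 0.5)
    order  : filter order (the abstract uses order 5)
    """
    freqs = [np.fft.fftfreq(n) for n in volume.shape]
    fz, fy, fx = np.meshgrid(*freqs, indexing="ij")
    f = np.sqrt(fx**2 + fy**2 + fz**2)
    h = 1.0 / (1.0 + (f / cutoff) ** (2 * order))   # Butterworth transfer function
    return np.real(np.fft.ifftn(np.fft.fftn(volume) * h))
```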
PSF reconstruction for Compton-based prompt gamma imaging
NASA Astrophysics Data System (ADS)
Jan, Meei-Ling; Lee, Ming-Wei; Huang, Hsuan-Ming
2018-02-01
Compton-based prompt gamma (PG) imaging has been proposed for in vivo range verification in proton therapy. However, several factors degrade the image quality of PG images, some of which are due to inherent properties of a Compton camera such as spatial resolution and energy resolution. Moreover, Compton-based PG imaging has a spatially variant resolution loss. In this study, we investigate the performance of the list-mode ordered subset expectation maximization algorithm with a shift-variant point spread function (LM-OSEM-SV-PSF) model. We also evaluate how well the PG images reconstructed using an SV-PSF model reproduce the distal falloff of the proton beam. The SV-PSF parameters were estimated from simulation data of point sources at various positions. Simulated PGs were produced in a water phantom irradiated with a proton beam. Compared to the LM-OSEM algorithm, the LM-OSEM-SV-PSF algorithm improved the quality of the reconstructed PG images and the estimation of PG falloff positions. In addition, the 4.44 and 5.25 MeV PG emissions can be accurately reconstructed using the LM-OSEM-SV-PSF algorithm. However, for the 2.31 and 6.13 MeV PG emissions, the LM-OSEM-SV-PSF reconstruction provides limited improvement. We also found that the LM-OSEM algorithm followed by a shift-variant Richardson-Lucy deconvolution could reconstruct images with quality visually similar to the LM-OSEM-SV-PSF-reconstructed images, while requiring shorter computation time.
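The shift-variant Richardson-Lucy post-deconvolution mentioned in the last sentence reduces, in the shift-invariant case, to the classic multiplicative update sketched below. A single Gaussian blur stands in for the camera PSF here, which is a simplification of the paper's spatially varying model.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def richardson_lucy(blurred, psf_sigma=1.5, n_iter=20):
    """Shift-invariant Richardson-Lucy deconvolution with a Gaussian PSF.

    blurred : non-negative float image (or volume)
    The paper applies a shift-variant variant (PSF parameters change with position);
    here a single Gaussian blur stands in for the forward model H.
    """
    H = lambda img: gaussian_filter(img, psf_sigma)     # forward blur
    HT = H                                              # Gaussian PSF is symmetric, so H^T = H
    x = np.full_like(blurred, blurred.mean())           # flat initial estimate
    for _ in range(n_iter):
        ratio = blurred / np.maximum(H(x), 1e-12)
        x = x * HT(ratio)
    return x
```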
Incorporating HYPR de-noising within iterative PET reconstruction (HYPR-OSEM)
NASA Astrophysics Data System (ADS)
Cheng, Ju-Chieh (Kevin); Matthews, Julian; Sossi, Vesna; Anton-Rodriguez, Jose; Salomon, André; Boellaard, Ronald
2017-08-01
HighlY constrained back-PRojection (HYPR) is a post-processing de-noising technique originally developed for time-resolved magnetic resonance imaging. It has recently been applied to dynamic positron emission tomography imaging and has shown promising results. In this work, we have developed an iterative reconstruction algorithm (HYPR-OSEM) which improves the signal-to-noise ratio (SNR) in static imaging (i.e. single frame reconstruction) by incorporating HYPR de-noising directly within the ordered subsets expectation maximization (OSEM) algorithm. The proposed HYPR operator in this work operates on the target image(s) from each subset of OSEM and uses the sum of the preceding subset images as the composite, which is updated every iteration. Three strategies were used to apply the HYPR operator in OSEM: (i) within the image space modeling component of the system matrix in forward-projection only, (ii) within the image space modeling component in both forward-projection and back-projection, and (iii) on the image estimate after the OSEM update for each subset, thus generating three forms: (i) HYPR-F-OSEM, (ii) HYPR-FB-OSEM, and (iii) HYPR-AU-OSEM. Resolution and contrast phantom simulations with various sizes of hot and cold regions, as well as experimental phantom and patient data, were used to evaluate the performance of the three forms of HYPR-OSEM, and the results were compared to OSEM with and without a post-reconstruction filter. It was observed that the convergence in contrast recovery coefficients (CRC) obtained from all forms of HYPR-OSEM was slower than that obtained from OSEM. Nevertheless, HYPR-OSEM improved SNR without degrading accuracy in terms of resolution and contrast. It achieved better accuracy in CRC at an equivalent noise level and better precision than OSEM, and better accuracy than filtered OSEM in general. In addition, HYPR-AU-OSEM was determined to be the most effective form of HYPR-OSEM in terms of accuracy and precision based on the studies conducted in this work.
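For orientation, a plausible form of the HYPR constraint used inside HYPR-OSEM is the HYPR-LR operator: the composite image carries the low-frequency content while the target supplies a low-pass-filtered weighting. The Gaussian kernel width and the way the composite is supplied are assumptions in this sketch.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def hypr_operator(target, composite, kernel_sigma=2.0):
    """HYPR-LR style operator: constrain a noisy target image with a composite image.

    I_hypr = composite * F(target) / F(composite), with F a low-pass (Gaussian) kernel.
    In HYPR-OSEM the composite would be the running sum of the preceding subset images.
    """
    eps = 1e-12
    weight = gaussian_filter(target, kernel_sigma) / (gaussian_filter(composite, kernel_sigma) + eps)
    return composite * weight
```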
Monochromatic-beam-based dynamic X-ray microtomography based on OSEM-TV algorithm.
Xu, Liang; Chen, Rongchang; Yang, Yiming; Deng, Biao; Du, Guohao; Xie, Honglan; Xiao, Tiqiao
2017-01-01
Monochromatic-beam-based dynamic X-ray computed microtomography (CT) was developed to observe the evolution of microstructure inside samples. However, the low flux density results in low efficiency in data collection. To increase efficiency, reducing the number of projections is a practical solution, but it degrades image reconstruction quality when the traditional filtered back projection (FBP) algorithm is used. In this study, an iterative reconstruction method using an ordered subset expectation maximization-total variation (OSEM-TV) algorithm was employed to address this problem. The simulated results demonstrated that the normalized mean square error of the image slices reconstructed by the OSEM-TV algorithm was about 1/4 of that by FBP. Experimental results also demonstrated that the density resolution of OSEM-TV was high enough to resolve different materials with fewer than 100 projections. As a result, with the introduction of OSEM-TV, monochromatic-beam-based dynamic X-ray microtomography is potentially practicable for quantitative and non-destructive analysis of the evolution of microstructure, with acceptable efficiency in data collection and reconstructed image quality.
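One common way to combine OSEM with a TV penalty, consistent with the description above though not necessarily identical to the authors' scheme, is to alternate a full OSEM pass with a few gradient-descent TV smoothing steps. The sketch below assumes a caller-supplied OSEM pass and uses approximate (wrap-around) boundary handling in the TV gradient.

```python
import numpy as np

def tv_gradient(img, eps=1e-8):
    """Gradient of (smoothed) isotropic total variation for a 2D image."""
    gx = np.diff(img, axis=1, append=img[:, -1:])
    gy = np.diff(img, axis=0, append=img[-1:, :])
    norm = np.sqrt(gx**2 + gy**2 + eps)
    px, py = gx / norm, gy / norm
    # negative divergence of the normalized gradient field (wrap-around boundaries)
    div = (px - np.roll(px, 1, axis=1)) + (py - np.roll(py, 1, axis=0))
    return -div

def osem_tv_iteration(x, osem_subset_updates, tv_weight=0.02, tv_steps=5):
    """One OSEM-TV iteration: a full pass of OSEM subset updates, then a few
    steepest-descent TV smoothing steps (a common way to interleave the two)."""
    x = osem_subset_updates(x)                  # caller supplies the OSEM pass
    for _ in range(tv_steps):
        x = np.maximum(x - tv_weight * tv_gradient(x), 0.0)
    return x
```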
NASA Astrophysics Data System (ADS)
Karamat, Muhammad I.; Farncombe, Troy H.
2015-10-01
Simultaneous multi-isotope Single Photon Emission Computed Tomography (SPECT) imaging has a number of applications in cardiac, brain, and cancer imaging. The major concern, however, is the significant crosstalk contamination due to photon scatter between the different isotopes. The current study focuses on a method of crosstalk compensation between two isotopes in simultaneous dual-isotope SPECT acquisition applied to cancer imaging using 99mTc and 111In. We have developed an iterative image reconstruction technique that simulates the photon down-scatter from one isotope into the acquisition window of a second isotope. Our approach uses an accelerated Monte Carlo (MC) technique for the forward projection step in an iterative reconstruction algorithm. The MC-estimated scatter contamination of a radionuclide contained in a given projection view is then used to compensate for the photon contamination in the acquisition window of the other nuclide. We use a modified ordered subset-expectation maximization (OS-EM) algorithm, named simultaneous ordered subset-expectation maximization (Sim-OSEM), to perform this step. We have undertaken a number of simulation tests and phantom studies to verify this approach. The proposed reconstruction technique was also evaluated by reconstruction of experimentally acquired phantom data. Reconstruction using Sim-OSEM showed very promising results in terms of contrast recovery and uniformity of the object background compared to reconstruction methods implementing alternative scatter correction schemes (i.e., triple energy window or separately acquired projection data). In this study the evaluation is based on the quality of reconstructed images and activity estimated using Sim-OSEM. In order to quantitate the possible improvement in spatial resolution and signal-to-noise ratio (SNR) observed in this study, further simulation and experimental studies are required.
Shi, Ximin; Li, Nan; Ding, Haiyan; Dang, Yonghong; Hu, Guilan; Liu, Shuai; Cui, Jie; Zhang, Yue; Li, Fang; Zhang, Hui; Huo, Li
2018-01-01
Kinetic modeling of dynamic 11C-acetate PET imaging provides quantitative information for myocardium assessment. The quality and quantitation of PET images are known to be dependent on the PET reconstruction method. This study aims to investigate the impact of reconstruction algorithms on the quantitative analysis of dynamic 11C-acetate cardiac PET imaging. Suspected alcoholic cardiomyopathy patients (N = 24) underwent 11C-acetate dynamic PET imaging after a low-dose CT scan. PET images were reconstructed using four algorithms: filtered backprojection (FBP), ordered subsets expectation maximization (OSEM), OSEM with time-of-flight (TOF), and OSEM with both time-of-flight and point-spread-function modeling (TPSF). Standardized uptake values (SUVs) at different time points were compared among images reconstructed using the four algorithms. Time-activity curves (TACs) in the myocardium and the blood pools of the ventricles were generated from the dynamic image series. Kinetic parameters K1 and k2 were derived using a 1-tissue-compartment model for kinetic modeling of cardiac flow from the 11C-acetate PET images. Significant image quality improvement was found in the images reconstructed using the iterative OSEM-type algorithms (OSEM, TOF, and TPSF) compared with FBP. However, no statistical differences in SUVs were observed among the four reconstruction methods at the selected time points. Kinetic parameters K1 and k2 also exhibited no statistical difference among the four reconstruction algorithms in terms of mean value and standard deviation. However, in the correlation analysis, OSEM reconstruction presented relatively higher residuals in correlation with FBP reconstruction compared with TOF and TPSF reconstruction, while TOF and TPSF reconstruction were highly correlated with each other. All the tested reconstruction algorithms performed similarly for quantitative analysis of 11C-acetate cardiac PET imaging. TOF and TPSF yielded highly consistent kinetic parameter results with superior image quality compared with FBP, while OSEM was relatively less reliable. Both TOF and TPSF are recommended for cardiac 11C-acetate kinetic analysis.
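The 1-tissue-compartment model mentioned here predicts the tissue TAC as C_T(t) = K1 · (C_p ⊗ e^(−k2·t)), with the blood-pool TAC C_p as the input function. A minimal fitting sketch follows; it assumes a uniform time grid (real dynamic frames are usually non-uniform) and illustrative starting values and bounds.

```python
import numpy as np
from scipy.optimize import curve_fit

def one_tissue_model(t, K1, k2, cp):
    """1-tissue compartment model: C_T(t) = K1 * (Cp conv exp(-k2 t)) on a uniform time grid."""
    dt = t[1] - t[0]
    conv = np.convolve(cp, np.exp(-k2 * t))[: len(t)] * dt
    return K1 * conv

def fit_k1_k2(t, tissue_tac, blood_tac):
    """Fit K1 and k2 to a myocardial TAC using the blood-pool TAC as input function."""
    model = lambda t_, K1, k2: one_tissue_model(t_, K1, k2, blood_tac)
    (K1, k2), _ = curve_fit(model, t, tissue_tac, p0=[0.5, 0.1],
                            bounds=([0.0, 0.0], [5.0, 5.0]))
    return K1, k2
```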
NASA Astrophysics Data System (ADS)
Bai, Chuanyong; Kinahan, P. E.; Brasse, D.; Comtat, C.; Townsend, D. W.
2002-02-01
We have evaluated the penalized ordered-subset transmission reconstruction (OSTR) algorithm for postinjection single photon transmission scanning. The OSTR algorithm of Erdogan and Fessler (1999) uses a more accurate model for transmission tomography than ordered-subsets expectation-maximization (OSEM) when OSEM is applied to the logarithm of the transmission data. The OSTR algorithm is directly applicable to postinjection transmission scanning with a single photon source, as emission contamination from the patient mimics the effect, in the original derivation of OSTR, of random coincidence contamination in a positron source transmission scan. Multiple noise realizations of simulated postinjection transmission data were reconstructed using OSTR, filtered backprojection (FBP), and OSEM algorithms. Due to the nonspecific task performance, or multiple uses, of the transmission image, multiple figures of merit were evaluated, including image noise, contrast, uniformity, and root mean square (rms) error. We show that: 1) the use of a three-dimensional (3-D) regularizing image roughness penalty with OSTR improves the tradeoffs in noise, contrast, and rms error relative to the use of a two-dimensional penalty; 2) OSTR with a 3-D penalty has improved tradeoffs in noise, contrast, and rms error relative to FBP or OSEM; and 3) the use of image standard deviation from a single realization to estimate the true noise can be misleading in the case of OSEM. We conclude that using OSTR with a 3-D penalty potentially allows for shorter postinjection transmission scans in single photon transmission tomography in positron emission tomography (PET) relative to FBP or OSEM reconstructed images with the same noise properties. This combination of singles+OSTR is particularly suitable for whole-body PET oncology imaging.
Mehranian, Abolfazl; Kotasidis, Fotis; Zaidi, Habib
2016-02-07
Time-of-flight (TOF) positron emission tomography (PET) technology has recently regained popularity in clinical PET studies for improving image quality and lesion detectability. Using TOF information, the spatial location of annihilation events is confined to a number of image voxels along each line of response, thereby reducing the cross-dependencies of image voxels, which in turn results in an improved signal-to-noise ratio and convergence rate. In this work, we propose a novel approach to further improve the convergence of the expectation maximization (EM)-based TOF PET image reconstruction algorithm through subsetization of emission data over TOF bins as well as azimuthal bins. Given the prevalence of TOF PET, we describe a practical and efficient implementation of TOF PET image reconstruction through the pre-computation of TOF weighting coefficients while exploiting the same in-plane and axial symmetries used in the pre-computation of the geometric system matrix. In the proposed subsetization approach, TOF PET data were partitioned into a number of interleaved TOF subsets, with the aim of reducing the spatial coupling of TOF bins and thereby improving the convergence of the standard maximum likelihood expectation maximization (MLEM) and ordered subsets EM (OSEM) algorithms. The comparison of on-the-fly and pre-computed TOF projections showed that the pre-computation of the TOF weighting coefficients can considerably reduce the computation time of TOF PET image reconstruction. The convergence rate and bias-variance performance of the proposed TOF subsetization scheme were evaluated using simulated, experimental phantom and clinical studies. Simulations demonstrated that as the number of TOF subsets is increased, the convergence rate of the MLEM and OSEM algorithms is improved. It was also found that for the same computation time, the proposed subsetization achieves further convergence. The bias-variance analysis of the experimental NEMA phantom and a clinical FDG-PET study also revealed that for the same noise level, a higher contrast recovery can be obtained by increasing the number of TOF subsets. It can be concluded that the proposed TOF weighting matrix pre-computation and subsetization approaches make it possible to further accelerate and improve the convergence properties of the OSEM and MLEM algorithms, thus opening new avenues for accelerated TOF PET image reconstruction.
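A simple way to realise the interleaved TOF subsetization described above is to distribute (azimuthal angle, TOF bin) pairs round-robin over the subsets, as in the sketch below. The exact interleaving pattern used by the authors is not spelled out here, so this scheme is an assumption.

```python
import numpy as np

def interleaved_tof_subsets(n_angles, n_tof_bins, n_subsets):
    """Partition (azimuthal angle, TOF bin) pairs into interleaved subsets.

    Returns a list of index arrays; subset s collects the pairs whose combined
    index is congruent to s modulo n_subsets, so each subset samples both the
    angular and the TOF dimension as uniformly as possible.
    """
    angle_idx, tof_idx = np.meshgrid(np.arange(n_angles), np.arange(n_tof_bins),
                                     indexing="ij")
    combined = (angle_idx + tof_idx).ravel() % n_subsets
    pairs = np.stack([angle_idx.ravel(), tof_idx.ravel()], axis=1)
    return [pairs[combined == s] for s in range(n_subsets)]
```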
NASA Astrophysics Data System (ADS)
Mehranian, Abolfazl; Kotasidis, Fotis; Zaidi, Habib
2016-02-01
Time-of-flight (TOF) positron emission tomography (PET) technology has recently regained popularity in clinical PET studies for improving image quality and lesion detectability. Using TOF information, the spatial location of annihilation events is confined to a number of image voxels along each line of response, thereby reducing the cross-dependencies of image voxels, which in turn results in an improved signal-to-noise ratio and convergence rate. In this work, we propose a novel approach to further improve the convergence of the expectation maximization (EM)-based TOF PET image reconstruction algorithm through subsetization of emission data over TOF bins as well as azimuthal bins. Given the prevalence of TOF PET, we describe a practical and efficient implementation of TOF PET image reconstruction through the pre-computation of TOF weighting coefficients while exploiting the same in-plane and axial symmetries used in the pre-computation of the geometric system matrix. In the proposed subsetization approach, TOF PET data were partitioned into a number of interleaved TOF subsets, with the aim of reducing the spatial coupling of TOF bins and thereby improving the convergence of the standard maximum likelihood expectation maximization (MLEM) and ordered subsets EM (OSEM) algorithms. The comparison of on-the-fly and pre-computed TOF projections showed that the pre-computation of the TOF weighting coefficients can considerably reduce the computation time of TOF PET image reconstruction. The convergence rate and bias-variance performance of the proposed TOF subsetization scheme were evaluated using simulated, experimental phantom and clinical studies. Simulations demonstrated that as the number of TOF subsets is increased, the convergence rate of the MLEM and OSEM algorithms is improved. It was also found that for the same computation time, the proposed subsetization achieves further convergence. The bias-variance analysis of the experimental NEMA phantom and a clinical FDG-PET study also revealed that for the same noise level, a higher contrast recovery can be obtained by increasing the number of TOF subsets. It can be concluded that the proposed TOF weighting matrix pre-computation and subsetization approaches make it possible to further accelerate and improve the convergence properties of the OSEM and MLEM algorithms, thus opening new avenues for accelerated TOF PET image reconstruction.
Hattori, T; Terada, T; Hamasuna, S
1995-06-01
Osem, a rice gene homologous to the wheat Em gene, which encodes one of the late-embryogenesis-abundant proteins, was isolated. The gene was characterized with respect to control of transcription by abscisic acid (ABA) and by the transcriptional activator VP1, which is involved in ABA-regulated gene expression during late embryogenesis. A fusion gene (Osem-GUS) consisting of the Osem promoter and the bacterial beta-glucuronidase (GUS) gene was constructed and tested in a transient expression system, using protoplasts derived from a suspension-cultured line of rice cells, for activation by ABA and by co-transfection with an expression vector (35S-Osvp1) for the rice VP1 (OSVP1) cDNA. The expression of Osem-GUS was strongly (40- to 150-fold) activated by externally applied ABA and by over-expression of (OS)VP1. The Osem promoter contains three ACGTG-containing sequences, motif A, motif B and motif A', which resemble the abscisic acid-responsive element (ABRE) previously identified in the wheat Em and the rice Rab16 genes. There is also a CATGCATG sequence, known as the Sph box, which has been shown to be essential for the regulation by VP1 of the maize anthocyanin regulatory gene C1. Focusing on these sequence elements, various mutant derivatives of the Osem promoter were assayed in the transient expression system. The analysis revealed that motif A functions not only as an ABRE but also as a sequence element required for regulation by (OS)VP1.
NASA Astrophysics Data System (ADS)
Lalush, D. S.; Tsui, B. M. W.
1998-06-01
We study the statistical convergence properties of two fast iterative reconstruction algorithms, the rescaled block-iterative (RBI) and ordered subset (OS) EM algorithms, in the context of cardiac SPECT with 3D detector response modeling. The Monte Carlo method was used to generate nearly noise-free projection data modeling the effects of attenuation, detector response, and scatter from the MCAT phantom. One thousand noise realizations were generated with an average count level approximating a typical Tl-201 cardiac study. Each noise realization was reconstructed using the RBI and OS algorithms for cases with and without detector response modeling. For each iteration up to twenty, we generated mean and variance images, as well as covariance images for six specific locations. Both OS and RBI converged in the mean to results that were close to the noise-free ML-EM result using the same projection model. When detector response was not modeled in the reconstruction, RBI exhibited considerably lower noise variance than OS for the same resolution. When 3D detector response was modeled, RBI-EM provided a small improvement in the tradeoff between noise level and resolution recovery, primarily in the axial direction, while OS required about half the number of iterations of RBI to reach the same resolution. We conclude that OS is faster than RBI, but may be sensitive to errors in the projection model. Both OS-EM and RBI-EM are effective alternatives to the ML-EM algorithm, but noise level and speed of convergence depend on the projection model used.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Naseri, M; Rajabi, H; Wang, J
Purpose: Respiration causes lesion smearing, image blurring and quality degradation, affecting lesion contrast and the ability to define the correct lesion size. The spatial resolution of current multi-pinhole SPECT (MPHS) scanners is sub-millimeter; therefore, the effect of motion is more noticeable than in conventional SPECT scanners. Gated imaging aims to reduce motion artifacts, but a major issue in gating is the lack of statistics: individual reconstructed frames are noisy, and the increased noise in each frame deteriorates the quantitative accuracy of the MPHS images. The objective of this work is to enhance image quality in 4D-MPHS imaging by 4D image reconstruction. Methods: The new algorithm requires deformation vector fields (DVFs) that are calculated by non-rigid Demons registration. The algorithm is based on a motion-incorporated version of the ordered subset expectation maximization (OSEM) algorithm. This iterative algorithm is capable of making full use of all projections to reconstruct each individual frame. To evaluate the performance of the proposed algorithm, a simulation study was conducted. A fast ray-tracing method was used to generate MPHS projections of a 4D digital mouse phantom with a small tumor in the liver in eight different respiratory phases. To evaluate the potential of the 4D-OSEM algorithm, the tumor-to-liver activity ratio was compared with that of other image reconstruction methods, including 3D-MPHS and post-reconstruction registration with Demons-derived DVFs. Results: Image quality of 4D-MPHS is greatly improved by the 4D-OSEM algorithm. When all projections are used to reconstruct a 3D-MPHS image, motion blurring artifacts are present, leading to overestimation of the tumor size and a 24% underestimation of tumor contrast. This error was reduced to 16% and 10% for the post-reconstruction registration method and 4D-OSEM, respectively. Conclusion: The 4D-OSEM method can be used for motion correction in 4D-MPHS. The statistics and quantification are improved since all projection data are combined together to update the image.
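A motion-incorporated EM update of the kind described in the Methods can be sketched as follows: the reference-phase image is warped into each gate with its DVF, compared with that gate's projections, and the corrections are pulled back to the reference phase. Warping with the negated DVF as an approximate adjoint, and the generic `project`/`backproject` callables, are assumptions for illustration only.

```python
import numpy as np
from scipy.ndimage import map_coordinates

def warp(image, dvf):
    """Warp a 3D image with a deformation vector field dvf of shape (3, *image.shape)."""
    grid = np.indices(image.shape).astype(float)
    return map_coordinates(image, grid + dvf, order=1, mode="nearest")

def motion_incorporated_em_update(x_ref, gates, project, backproject):
    """One motion-incorporated EM update of the reference-phase image.

    gates       : list of dicts with keys 'y' (projections) and 'dvf' (Demons-derived DVF)
    project     : function image -> projections (system forward model)
    backproject : adjoint of project
    All gated projections contribute to the single reference image via their DVFs.
    """
    numer = np.zeros_like(x_ref)
    denom = np.zeros_like(x_ref)
    for g in gates:
        x_g = warp(x_ref, g["dvf"])                       # deform reference into phase g
        ratio = g["y"] / np.maximum(project(x_g), 1e-12)
        # pull the correction back with the (approximate) inverse DVF
        numer += warp(backproject(ratio), -g["dvf"])
        denom += warp(backproject(np.ones_like(g["y"])), -g["dvf"])
    return x_ref * numer / np.maximum(denom, 1e-12)
```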
Altazi, Baderaldeen A; Zhang, Geoffrey G; Fernandez, Daniel C; Montejo, Michael E; Hunt, Dylan; Werner, Joan; Biagioli, Matthew C; Moros, Eduardo G
2017-11-01
Site-specific investigations of the role of radiomics in cancer diagnosis and therapy are emerging. We evaluated the reproducibility of radiomic features extracted from 18Fluorine-fluorodeoxyglucose (18F-FDG) PET images for three parameters: manual versus computer-aided segmentation methods, gray-level discretization, and PET image reconstruction algorithms. Our cohort consisted of pretreatment PET/CT scans from 88 cervical cancer patients. Two board-certified radiation oncologists manually segmented the metabolic tumor volume (MTV1 and MTV2) for each patient. For comparison, we used a graphical-based method to generate semiautomated segmented volumes (GBSV). To address any perturbations in radiomic feature values, we down-sampled the tumor volumes into three gray-levels: 32, 64, and 128 from the original gray-level of 256. Finally, we analyzed the effect on radiomic features on PET images of eight patients due to four PET 3D-reconstruction algorithms: the maximum likelihood-ordered subset expectation maximization (OSEM) iterative reconstruction (IR) method, Fourier rebinning-ML-OSEM (FOREIR), FORE-filtered back projection (FOREFBP), and the 3D-Reprojection (3DRP) analytical method. We extracted 79 features from all segmentation methods, gray-levels of down-sampled volumes, and PET reconstruction algorithms. The features were extracted using gray-level co-occurrence matrices (GLCM), gray-level size zone matrices (GLSZM), gray-level run-length matrices (GLRLM), neighborhood gray-tone difference matrices (NGTDM), shape-based features (SF), and intensity histogram features (IHF). We computed the Dice coefficient between each MTV and GBSV to measure segmentation accuracy. Coefficient values close to one indicate high agreement, and values close to zero indicate low agreement. We evaluated the effect on radiomic features by calculating the mean percentage differences (d̄) between feature values measured from each pair of parameter elements (i.e. segmentation methods: MTV1-MTV2, MTV1-GBSV, MTV2-GBSV; gray-levels: 64-32, 64-128, and 64-256; reconstruction algorithms: OSEM-FOREIR, OSEM-FOREFBP, and OSEM-3DRP). We used |d̄| as a measure of radiomic feature reproducibility level, where any feature that scored |d̄| ± SD ≤ |25|% ± 35% was considered reproducible. We used Bland-Altman analysis to evaluate the mean, standard deviation (SD), and upper/lower reproducibility limits (U/LRL) for radiomic features in response to variation in each testing parameter. Furthermore, we proposed U/LRL as a method to classify the level of reproducibility: High: ±1% ≤ U/LRL ≤ ±30%; Intermediate: ±30% < U/LRL ≤ ±45%; Low: ±45% < U/LRL ≤ ±50%. We considered any feature below the low level as nonreproducible (NR). Finally, we calculated the interclass correlation coefficient (ICC) to evaluate the reliability of radiomic feature measurements for each parameter. The segmented volumes of 65 patients (81.3%) scored a Dice coefficient >0.75 for all three volumes. The results revealed a tendency toward higher radiomic feature reproducibility for the segmentation pair MTV1-GBSV than MTV2-GBSV, the gray-level pairs 64-32 and 64-128 than 64-256, and the reconstruction algorithm pairs OSEM-FOREIR and OSEM-FOREFBP than OSEM-3DRP. Although the choice of cervical tumor segmentation method, gray-level value, and reconstruction algorithm may affect radiomic features, some features were characterized by high reproducibility through all testing parameters.
The number of radiomic features that showed insensitivity to variations in segmentation methods, gray-level discretization, and reconstruction algorithms was 10 (13%), 4 (5%), and 1 (1%), respectively. These results suggest that a careful analysis of the effects of these parameters is essential prior to any radiomics clinical application. © 2017 The Authors. Journal of Applied Clinical Medical Physics published by Wiley Periodicals, Inc. on behalf of American Association of Physicists in Medicine.
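For concreteness, the reproducibility metric described above (mean percentage difference d̄ with its SD, thresholded at roughly |25|% ± 35%) can be computed as in the sketch below. The symmetric-difference denominator and the way the threshold is applied are one reading of the criterion, not a statement of the authors' exact formula.

```python
import numpy as np

def mean_percentage_difference(values_a, values_b):
    """Mean percentage difference (and its SD) between paired feature measurements."""
    a, b = np.asarray(values_a, float), np.asarray(values_b, float)
    pct = 100.0 * (a - b) / ((a + b) / 2.0)      # symmetric percentage difference per patient
    return pct.mean(), pct.std(ddof=1)

def is_reproducible(d_mean, d_sd, mean_limit=25.0, sd_limit=35.0):
    """Count a feature as reproducible if |d_mean| <= 25% and SD <= 35% (one reading of the criterion)."""
    return abs(d_mean) <= mean_limit and d_sd <= sd_limit
```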
NASA Astrophysics Data System (ADS)
Kolstein, M.; De Lorenzo, G.; Mikhaylova, E.; Chmeissani, M.; Ariño, G.; Calderón, Y.; Ozsahin, I.; Uzun, D.
2013-04-01
The Voxel Imaging PET (VIP) Pathfinder project intends to show the advantages of using pixelated solid-state technology for nuclear medicine applications. It proposes designs for Positron Emission Tomography (PET), Positron Emission Mammography (PEM) and Compton gamma camera detectors with a large number of signal channels (of the order of 10⁶). For PET scanners, conventional algorithms like Filtered Back-Projection (FBP) and Ordered Subset Expectation Maximization (OSEM) are straightforward to use and give good results. However, FBP presents difficulties for detectors with limited angular coverage like PEM and Compton gamma cameras, whereas OSEM has an impractically large time and memory consumption for a Compton gamma camera with a large number of channels. In this article, the Origin Ensemble (OE) algorithm is evaluated as an alternative algorithm for image reconstruction. Monte Carlo simulations of the PET design are used to compare the performance of OE, FBP and OSEM in terms of the bias, variance and average mean squared error (MSE) image quality metrics. For the PEM and Compton camera designs, results obtained with OE are presented.
Tomše, Petra; Jensterle, Luka; Rep, Sebastijan; Grmek, Marko; Zaletel, Katja; Eidelberg, David; Dhawan, Vijay; Ma, Yilong; Trošt, Maja
2017-09-01
To evaluate the reproducibility of the expression of Parkinson's Disease Related Pattern (PDRP) across multiple sets of 18F-FDG-PET brain images reconstructed with different reconstruction algorithms. 18F-FDG-PET brain imaging was performed in two independent cohorts of Parkinson's disease (PD) patients and normal controls (NC). Slovenian cohort (20 PD patients, 20 NC) was scanned with Siemens Biograph mCT camera and reconstructed using FBP, FBP+TOF, OSEM, OSEM+TOF, OSEM+PSF and OSEM+PSF+TOF. American Cohort (20 PD patients, 7 NC) was scanned with GE Advance camera and reconstructed using 3DRP, FORE-FBP and FORE-Iterative. Expressions of two previously-validated PDRP patterns (PDRP-Slovenia and PDRP-USA) were calculated. We compared the ability of PDRP to discriminate PD patients from NC, differences and correlation between the corresponding subject scores and ROC analysis results across the different reconstruction algorithms. The expression of PDRP-Slovenia and PDRP-USA networks was significantly elevated in PD patients compared to NC (p<0.0001), regardless of reconstruction algorithms. PDRP expression strongly correlated between all studied algorithms and the reference algorithm (r⩾0.993, p<0.0001). Average differences in the PDRP expression among different algorithms varied within 0.73 and 0.08 of the reference value for PDRP-Slovenia and PDRP-USA, respectively. ROC analysis confirmed high similarity in sensitivity, specificity and AUC among all studied reconstruction algorithms. These results show that the expression of PDRP is reproducible across a variety of reconstruction algorithms of 18F-FDG-PET brain images. PDRP is capable of providing a robust metabolic biomarker of PD for multicenter 18F-FDG-PET images acquired in the context of differential diagnosis or clinical trials. Copyright © 2017 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.
Bowen, Spencer L.; Byars, Larry G.; Michel, Christian J.; Chonde, Daniel B.; Catana, Ciprian
2014-01-01
Kinetic parameters estimated from dynamic 18F-fluorodeoxyglucose PET acquisitions have been used frequently to assess brain function in humans. Neglecting partial volume correction (PVC) for a dynamic series has been shown to produce significant bias in model estimates. Accurate PVC requires a space-variant model describing the reconstructed image spatial point spread function (PSF) that accounts for resolution limitations, including non-uniformities across the field of view due to the parallax effect. For OSEM, image resolution convergence is local and influenced significantly by the number of iterations, the count density, and background-to-target ratio. As both count density and background-to-target values for a brain structure can change during a dynamic scan, the local image resolution may also concurrently vary. When PVC is applied post-reconstruction the kinetic parameter estimates may be biased when neglecting the frame-dependent resolution. We explored the influence of the PVC method and implementation on kinetic parameters estimated by fitting 18F-fluorodeoxyglucose dynamic data acquired on a dedicated brain PET scanner and reconstructed with and without PSF modelling in the OSEM algorithm. The performance of several PVC algorithms was quantified with a phantom experiment, an anthropomorphic Monte Carlo simulation, and a patient scan. Using the last frame reconstructed image only for regional spread function (RSF) generation, as opposed to computing RSFs for each frame independently, and applying perturbation GTM PVC with PSF based OSEM produced the lowest magnitude bias kinetic parameter estimates in most instances, although at the cost of increased noise compared to the PVC methods utilizing conventional OSEM. Use of the last frame RSFs for PVC with no PSF modelling in the OSEM algorithm produced the lowest bias in CMRGlc estimates, although by less than 5% in most cases compared to the other PVC methods. The results indicate that the PVC implementation and choice of PSF modelling in the reconstruction can significantly impact model parameters. PMID:24052021
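The geometric transfer matrix (GTM) partial volume correction underlying the "perturbation GTM PVC" mentioned above estimates true regional means by inverting a matrix of inter-region spill-over fractions built from the regional spread functions. The sketch below uses a single Gaussian blur as a stand-in for the reconstructed-image PSF/RSF, which is an assumption.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def gtm_pvc(image, region_masks, psf_sigma):
    """Geometric transfer matrix (GTM) partial volume correction.

    region_masks : list of boolean masks, one per region (assumed non-overlapping)
    psf_sigma    : Gaussian sigma standing in for the reconstructed-image PSF / RSF
    Returns PVC-corrected mean values, one per region.
    """
    n = len(region_masks)
    # regional spread functions: each region's indicator blurred by the PSF
    rsfs = [gaussian_filter(mask.astype(float), psf_sigma) for mask in region_masks]
    W = np.zeros((n, n))
    observed = np.zeros(n)
    for i, mask in enumerate(region_masks):
        observed[i] = image[mask].mean()
        for j in range(n):
            W[i, j] = rsfs[j][mask].mean()       # spill-in of region j into ROI i
    return np.linalg.solve(W, observed)           # corrected regional means
```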
Shang, Kun; Cui, Bixiao; Ma, Jie; Shuai, Dongmei; Liang, Zhigang; Jansen, Floris; Zhou, Yun; Lu, Jie; Zhao, Guoguang
2017-08-01
Hybrid positron emission tomography/magnetic resonance (PET/MR) imaging is a new multimodality imaging technology that can provide structural and functional information simultaneously. The aim of this study was to investigate the effects of time-of-flight (TOF) and point-spread function (PSF) modeling on small lesions observed in PET/MR images from clinical patient image sets. This study evaluated 54 small lesions in 14 patients who had undergone 18F-fluorodeoxyglucose (FDG) PET/MR. Lesions up to 30 mm in diameter were included. The PET data were reconstructed with a baseline ordered-subsets expectation-maximization (OSEM) algorithm, OSEM+PSF, OSEM+TOF and OSEM+TOF+PSF. PET image quality and small lesions were visually evaluated and scored on a 3-point scale. A quantitative analysis was then performed using the mean and maximum standardized uptake values (SUVmean and SUVmax) of the small lesions. The lesions were divided into two groups according to long-axis diameter and location, respectively, and evaluated with each reconstruction algorithm. We also evaluated the background signal by analyzing the SUVliver. OSEM+TOF+PSF provided the highest values, and OSEM+TOF or OSEM+PSF showed higher values than OSEM in both the visual assessment and the quantitative analysis. The combination of TOF and PSF increased the SUVmean by 26.6% and the SUVmax by 30.0%. The SUVliver was not influenced by PSF or TOF. For the OSEM+TOF+PSF model, the change in SUVmean and SUVmax was 31.9% and 35.8% for lesions <10 mm in diameter, and 24.5% and 27.6% for lesions 10-30 mm in diameter, respectively. Abdominal lesions showed higher SUVs than chest lesions on the images reconstructed with TOF and/or PSF. Application of TOF and PSF significantly increased the SUV of small lesions in hybrid PET/MR images, potentially improving small lesion detectability. Copyright © 2017 Elsevier B.V. All rights reserved.
Comparison of reconstruction methods and quantitative accuracy in Siemens Inveon PET scanner
NASA Astrophysics Data System (ADS)
Ram Yu, A.; Kim, Jin Su; Kang, Joo Hyun; Moo Lim, Sang
2015-04-01
PET reconstruction is key to the quantification of PET data. To our knowledge, no comparative study of reconstruction methods has been performed to date. In this study, we compared reconstruction methods with various filters in terms of their spatial resolution, non-uniformities (NU), recovery coefficients (RCs), and spillover ratios (SORs). In addition, the linearity between measured and true radioactivity concentrations was assessed. A Siemens Inveon PET scanner was used in this study. Spatial resolution was measured according to the NEMA standard using a 1 mm³ 18F point source. Image quality was assessed in terms of NU, RC and SOR. To measure the effect of reconstruction algorithms and filters, data were reconstructed using FBP, the 3D reprojection algorithm (3DRP), ordered subset expectation maximization 2D (OSEM 2D), and maximum a posteriori (MAP) with various filters or smoothing factors (β). To assess the linearity of reconstructed radioactivity, an image quality phantom filled with 18F was imaged and reconstructed using FBP, OSEM and MAP (β = 1.5 and 5 × 10⁻⁵). The highest achievable volumetric resolution was 2.31 mm³ and the highest RCs were obtained when OSEM 2D was used. SOR was 4.87% for air and 3.97% for water when OSEM 2D reconstruction was used. The measured radioactivity of the reconstructed image was proportional to the injected one for radioactivity below 16 MBq/ml when the FBP or OSEM 2D reconstruction methods were used. By contrast, when the MAP reconstruction method was used, the activity of the reconstructed image increased proportionally, regardless of the amount of injected radioactivity. When OSEM 2D or FBP were used, the measured radioactivity concentration was reduced by 53% compared with the true injected radioactivity for radioactivity <16 MBq/ml. The OSEM 2D reconstruction method provides the highest achievable volumetric resolution and the highest RC among all the tested methods, and yields a linear relation between the measured and true concentrations for radioactivity below 16 MBq/ml. Our data collectively showed that the OSEM 2D reconstruction method provides quantitatively accurate reconstructed PET data.
[High resolution reconstruction of PET images using the iterative OSEM algorithm].
Doll, J; Henze, M; Bublitz, O; Werling, A; Adam, L E; Haberkorn, U; Semmler, W; Brix, G
2004-06-01
Improvement of the spatial resolution in positron emission tomography (PET) by incorporation of the image-forming characteristics of the scanner into the process of iterative image reconstruction. All measurements were performed on the whole-body PET system ECAT EXACT HR(+) in 3D mode. The acquired 3D sinograms were sorted into 2D sinograms by means of the Fourier rebinning (FORE) algorithm, which allows the use of 2D algorithms for image reconstruction. The scanner characteristics were described by a spatially variant line-spread function (LSF), which was determined from activated copper-64 line sources. This information was used to model the physical degradation processes in PET measurements during the course of 2D image reconstruction with the iterative OSEM algorithm. To assess the performance of the high-resolution OSEM algorithm, phantom measurements performed on a cylinder phantom, the hot-spot Jaszczak phantom, and the 3D Hoffman brain phantom as well as different patient examinations were analyzed. Scanner characteristics could be described by a Gaussian-shaped LSF with a full-width at half-maximum increasing from 4.8 mm at the center to 5.5 mm at a radial distance of 10.5 cm. Incorporation of the LSF into the iteration formula resulted in a markedly improved resolution of 3.0 and 3.5 mm, respectively. The evaluation of phantom and patient studies showed that the high-resolution OSEM algorithm not only led to better contrast resolution in the reconstructed activity distributions but also to improved accuracy in the quantification of activity concentrations in small structures, without leading to an amplification of image noise or the occurrence of image artifacts. The spatial and contrast resolution of PET scans can be markedly improved by the presented image restoration algorithm, which is of special interest for the examination of both patients with brain disorders and small animals.
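Resolution modelling of the kind described here is often implemented as image-space blurring inside the OSEM loop: blur, forward project, compare, backproject, blur again. The sketch below assumes generic `project`/`backproject` callables and a single Gaussian standing in for the spatially variant LSF reported in the paper.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def psf_osem_subset_update(x, project, backproject, y_subset, fwhm_mm, voxel_mm):
    """One OSEM subset update with image-space resolution (LSF) modelling.

    The scanner blur is modelled by a Gaussian applied in image space before forward
    projection and after backprojection (a common simplification of the full
    spatially variant LSF described in the abstract).
    """
    sigma = fwhm_mm / (2.355 * voxel_mm)           # FWHM -> Gaussian sigma in voxels
    blur = lambda img: gaussian_filter(img, sigma)
    fp = project(blur(x))                          # forward model: blur then project
    ratio = y_subset / np.maximum(fp, 1e-12)
    correction = blur(backproject(ratio))          # adjoint: backproject then blur
    sens = blur(backproject(np.ones_like(y_subset)))
    return x * correction / np.maximum(sens, 1e-12)
```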
NASA Astrophysics Data System (ADS)
Bowen, Spencer L.; Byars, Larry G.; Michel, Christian J.; Chonde, Daniel B.; Catana, Ciprian
2013-10-01
Kinetic parameters estimated from dynamic 18F-fluorodeoxyglucose (18F-FDG) PET acquisitions have been used frequently to assess brain function in humans. Neglecting partial volume correction (PVC) for a dynamic series has been shown to produce significant bias in model estimates. Accurate PVC requires a space-variant model describing the reconstructed image spatial point spread function (PSF) that accounts for resolution limitations, including non-uniformities across the field of view due to the parallax effect. For ordered subsets expectation maximization (OSEM), image resolution convergence is local and influenced significantly by the number of iterations, the count density, and background-to-target ratio. As both count density and background-to-target values for a brain structure can change during a dynamic scan, the local image resolution may also concurrently vary. When PVC is applied post-reconstruction the kinetic parameter estimates may be biased when neglecting the frame-dependent resolution. We explored the influence of the PVC method and implementation on kinetic parameters estimated by fitting 18F-FDG dynamic data acquired on a dedicated brain PET scanner and reconstructed with and without PSF modelling in the OSEM algorithm. The performance of several PVC algorithms was quantified with a phantom experiment, an anthropomorphic Monte Carlo simulation, and a patient scan. Using the last frame reconstructed image only for regional spread function (RSF) generation, as opposed to computing RSFs for each frame independently, and applying perturbation geometric transfer matrix PVC with PSF based OSEM produced the lowest magnitude bias kinetic parameter estimates in most instances, although at the cost of increased noise compared to the PVC methods utilizing conventional OSEM. Use of the last frame RSFs for PVC with no PSF modelling in the OSEM algorithm produced the lowest bias in cerebral metabolic rate of glucose estimates, although by less than 5% in most cases compared to the other PVC methods. The results indicate that the PVC implementation and choice of PSF modelling in the reconstruction can significantly impact model parameters.
Bowen, Spencer L; Byars, Larry G; Michel, Christian J; Chonde, Daniel B; Catana, Ciprian
2013-10-21
Kinetic parameters estimated from dynamic (18)F-fluorodeoxyglucose ((18)F-FDG) PET acquisitions have been used frequently to assess brain function in humans. Neglecting partial volume correction (PVC) for a dynamic series has been shown to produce significant bias in model estimates. Accurate PVC requires a space-variant model describing the reconstructed image spatial point spread function (PSF) that accounts for resolution limitations, including non-uniformities across the field of view due to the parallax effect. For ordered subsets expectation maximization (OSEM), image resolution convergence is local and influenced significantly by the number of iterations, the count density, and background-to-target ratio. As both count density and background-to-target values for a brain structure can change during a dynamic scan, the local image resolution may also concurrently vary. When PVC is applied post-reconstruction the kinetic parameter estimates may be biased when neglecting the frame-dependent resolution. We explored the influence of the PVC method and implementation on kinetic parameters estimated by fitting (18)F-FDG dynamic data acquired on a dedicated brain PET scanner and reconstructed with and without PSF modelling in the OSEM algorithm. The performance of several PVC algorithms was quantified with a phantom experiment, an anthropomorphic Monte Carlo simulation, and a patient scan. Using the last frame reconstructed image only for regional spread function (RSF) generation, as opposed to computing RSFs for each frame independently, and applying perturbation geometric transfer matrix PVC with PSF based OSEM produced the lowest magnitude bias kinetic parameter estimates in most instances, although at the cost of increased noise compared to the PVC methods utilizing conventional OSEM. Use of the last frame RSFs for PVC with no PSF modelling in the OSEM algorithm produced the lowest bias in cerebral metabolic rate of glucose estimates, although by less than 5% in most cases compared to the other PVC methods. The results indicate that the PVC implementation and choice of PSF modelling in the reconstruction can significantly impact model parameters.
Lee, Young Sub; Kim, Jin Su; Kim, Kyeong Min; Kang, Joo Hyun; Lim, Sang Moo; Kim, Hee-Joung
2014-05-01
The Siemens Biograph TruePoint TrueV (B-TPTV) positron emission tomography (PET) scanner performs 3D PET reconstruction using a system matrix with point spread function (PSF) modeling (called the True X reconstruction). PET resolution was dramatically improved with the True X method. In this study, we assessed the spatial resolution and image quality of a B-TPTV PET scanner. In addition, we assessed the feasibility of animal imaging with the B-TPTV PET scanner and compared it with a microPET R4 scanner. Spatial resolution was measured at the center and at 8 cm offset from the center in the transverse plane with warm background activity. True X, ordered subset expectation maximization (OSEM) without PSF modeling, and filtered back-projection (FBP) reconstruction methods were used. Percent contrast (% contrast) and percent background variability (% BV) were assessed according to NEMA NU2-2007. The recovery coefficient (RC), non-uniformity, spill-over ratio (SOR), and PET imaging of the Micro Deluxe Phantom were assessed to compare the image quality of B-TPTV PET with that of the microPET R4. When True X reconstruction was used, spatial resolution was <3.65 mm with warm background activity. The % contrast and % BV with True X reconstruction were higher than those with the OSEM reconstruction algorithm without PSF modeling. In addition, the RC with True X reconstruction was higher than that with the FBP method and the OSEM without PSF modeling method on the microPET R4. The non-uniformity with True X reconstruction was higher than that with FBP and OSEM without PSF modeling on the microPET R4. SOR with True X reconstruction was better than that with FBP or OSEM without PSF modeling on the microPET R4. This study assessed the performance of the True X reconstruction. Spatial resolution with True X reconstruction was improved by 45%, and its % contrast was significantly improved compared to that with the conventional OSEM reconstruction without PSF modeling. The noise level, however, was higher than that with the other reconstruction algorithms. Therefore, True X reconstruction should be used with caution when quantifying PET data.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schmidtlein, CR; Beattie, B; Humm, J
2014-06-15
Purpose: To investigate the performance of a new penalized-likelihood PET image reconstruction algorithm using the l1-norm total-variation (TV) sum of the 1st- through 4th-order gradients as the penalty. Simulated and brain patient data sets were analyzed. Methods: This work represents an extension of the preconditioned alternating projection algorithm (PAPA) for emission computed tomography. In this new generalized algorithm (GPAPA), the penalty term is expanded to allow multiple components, in this case the sum of the 1st- to 4th-order gradients, to reduce the artificial piece-wise constant regions ("staircase" artifacts typical for TV) seen in PAPA images penalized with only the 1st-order gradient. Simulated data were used to test for "staircase" artifacts and to optimize the penalty hyper-parameter in the root-mean-squared error (RMSE) sense. Patient FDG brain scans were acquired on a GE D690 PET/CT (370 MBq at 1-hour post-injection for 10 minutes) in time-of-flight mode and in all cases were reconstructed using resolution recovery projectors. GPAPA images were compared to PAPA and RMSE-optimally filtered OSEM (fully converged) in simulations and to clinical OSEM reconstructions (3 iterations, 32 subsets) with a 2.6 mm XY Gaussian and standard 3-point axial smoothing post-filters. Results: The results from the simulated data show a significant reduction in the "staircase" artifact for GPAPA compared to PAPA and lower RMSE (up to 35%) compared to optimally filtered OSEM. A simple power-law relationship between the RMSE-optimal hyper-parameters and the noise equivalent counts (NEC) per voxel is revealed. Qualitatively, the patient images appear much sharper and with less noise than standard clinical images. The convergence rate is similar to OSEM. Conclusions: GPAPA reconstructions using the l1-norm total-variation sum of the 1st- through 4th-order gradients as the penalty show great promise for the improvement of image quality over that currently achieved with clinical OSEM reconstructions.
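The GPAPA penalty described above sums l1-norm total-variation terms over 1st- to 4th-order gradients. A hedged sketch of such a penalty for a 2D image follows; the per-order weights and the use of simple finite differences are assumptions.

```python
import numpy as np

def higher_order_tv_penalty(img, max_order=4, weights=None):
    """l1-norm total-variation style penalty summed over 1st- to 4th-order finite
    differences of a 2D image (in the spirit of the GPAPA penalty described in the
    abstract; the per-order weights are illustrative assumptions)."""
    if weights is None:
        weights = [1.0] * max_order
    penalty = 0.0
    for order in range(1, max_order + 1):
        for axis in (0, 1):
            d = np.diff(img, n=order, axis=axis)   # n-th order finite difference
            penalty += weights[order - 1] * np.abs(d).sum()
    return penalty
```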
Mikhaylova, E; Kolstein, M; De Lorenzo, G; Chmeissani, M
2014-07-01
A novel positron emission tomography (PET) scanner design based on a room-temperature pixelated CdTe solid-state detector is being developed within the framework of the Voxel Imaging PET (VIP) Pathfinder project [1]. The simulation results show a great potential of the VIP to produce high-resolution images even in extremely challenging conditions such as the screening of a human head [2]. With an unprecedentedly high channel density (450 channels/cm³), image reconstruction is a challenge. Therefore optimization is needed to find the best algorithm in order to correctly exploit the promising detector potential. The following reconstruction algorithms are evaluated: 2-D Filtered Backprojection (FBP), Ordered Subset Expectation Maximization (OSEM), List-Mode OSEM (LM-OSEM), and the Origin Ensemble (OE) algorithm. The evaluation is based on the comparison of a true image phantom with a set of reconstructed images obtained by each algorithm. This is achieved by calculation of image quality merit parameters such as the bias, the variance and the mean square error (MSE). A systematic optimization of each algorithm is performed by varying the reconstruction parameters, such as the cutoff frequency of the noise filters and the number of iterations. The region of interest (ROI) analysis of the reconstructed phantom is also performed for each algorithm and the results are compared. Additionally, the performance of the image reconstruction methods is compared by calculating the modulation transfer function (MTF). The reconstruction time is also taken into account to choose the optimal algorithm. The analysis is based on GAMOS [3] simulation including the expected CdTe and electronic specifics.
NASA Astrophysics Data System (ADS)
Tong, S.; Alessio, A. M.; Kinahan, P. E.
2010-03-01
The addition of accurate system modeling in PET image reconstruction results in images with distinct noise texture and characteristics. In particular, the incorporation of point spread functions (PSF) into the system model has been shown to visually reduce image noise, but the noise properties have not been thoroughly studied. This work offers a systematic evaluation of noise and signal properties in different combinations of reconstruction methods and parameters. We evaluate two fully 3D PET reconstruction algorithms: (1) OSEM with exact scanner line of response modeled (OSEM+LOR), (2) OSEM with line of response and a measured point spread function incorporated (OSEM+LOR+PSF), in combination with the effects of four post-reconstruction filtering parameters and 1-10 iterations, representing a range of clinically acceptable settings. We used a modified NEMA image quality (IQ) phantom, which was filled with 68Ge and consisted of six hot spheres of different sizes with a target/background ratio of 4:1. The phantom was scanned 50 times in 3D mode on a clinical system to provide independent noise realizations. Data were reconstructed with OSEM+LOR and OSEM+LOR+PSF using different reconstruction parameters, and our implementations of the algorithms match the vendor's product algorithms. With access to multiple realizations, background noise characteristics were quantified with four metrics. Image roughness and the standard deviation image measured the pixel-to-pixel variation; background variability and ensemble noise quantified the region-to-region variation. Image roughness is the image noise perceived when viewing an individual image. At matched iterations, the addition of PSF leads to images with less noise defined as image roughness (reduced by 35% for unfiltered data) and as the standard deviation image, while it has no effect on background variability or ensemble noise. In terms of signal to noise performance, PSF-based reconstruction has a 7% improvement in contrast recovery at matched ensemble noise levels and 20% improvement of quantitation SNR in unfiltered data. In addition, the relations between different metrics are studied. A linear correlation is observed between background variability and ensemble noise for all different combinations of reconstruction methods and parameters, suggesting that background variability is a reasonable surrogate for ensemble noise when multiple realizations of scans are not available.
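The four background noise metrics used in this evaluation can be computed from repeated reconstructions of the same phantom roughly as sketched below; the exact ROI definitions and normalisations are assumptions, not the authors' precise formulas.

```python
import numpy as np

def noise_metrics(realizations, background_mask, rois):
    """Noise metrics from repeated scans of the same object.

    realizations    : array (n_realizations, *image_shape) of reconstructed images
    background_mask : boolean mask of the uniform background
    rois            : list of boolean masks of background ROIs
    Returns image roughness, mean of the SD image, background variability, ensemble noise.
    """
    imgs = np.asarray(realizations, float)
    sd_img = imgs.std(axis=0, ddof=1)
    # pixel-to-pixel variation within one image, averaged over realizations
    roughness = np.mean([img[background_mask].std(ddof=1) / img[background_mask].mean()
                         for img in imgs])
    # region-to-region variation of ROI means within single images
    roi_means = np.array([[img[m].mean() for m in rois] for img in imgs])
    background_variability = np.mean(roi_means.std(axis=1, ddof=1) / roi_means.mean(axis=1))
    # variation of the same ROI mean across realizations
    ensemble_noise = np.mean(roi_means.std(axis=0, ddof=1) / roi_means.mean(axis=0))
    return roughness, sd_img[background_mask].mean(), background_variability, ensemble_noise
```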
Evaluating low pass filters on SPECT reconstructed cardiac orientation estimation
NASA Astrophysics Data System (ADS)
Dwivedi, Shekhar
2009-02-01
Low pass filters can affect the quality of clinical SPECT images by smoothing. Appropriate filter and parameter selection leads to optimum smoothing, which in turn leads to better quantification, correct diagnosis and accurate interpretation by the physician. This study aims to evaluate low pass filters on SPECT reconstruction algorithms. The criterion for evaluating the filters is the estimation of the SPECT reconstructed cardiac azimuth and elevation angles. The low pass filters studied are Butterworth, Gaussian, Hamming, Hanning and Parzen. Experiments are conducted using three reconstruction algorithms, FBP (filtered back projection), MLEM (maximum likelihood expectation maximization) and OSEM (ordered subsets expectation maximization), on four gated cardiac patient projections (two patients with stress and rest projections). Each filter is applied with varying cutoff and order for each reconstruction algorithm (only Butterworth is used for MLEM and OSEM). The azimuth and elevation angles are calculated from the reconstructed volume, and the variation observed in the angles with varying filter parameters is reported. Our results demonstrate that the behavior of the Hamming, Hanning and Parzen filters (used with FBP) with varying cutoff is similar for all the datasets. The Butterworth filter (cutoff > 0.4) behaves in a similar fashion for all the datasets using all the algorithms, whereas with OSEM for a cutoff < 0.4 it fails to generate a cardiac orientation due to oversmoothing, and gives an unstable response with FBP and MLEM. This study on evaluating the effect of low pass filter cutoff and order on cardiac orientation using three different reconstruction algorithms provides an interesting insight into the optimal selection of filter parameters.
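For reference, a frequency-domain Butterworth low-pass filter of the kind evaluated here can be sketched as below; the cutoff is expressed as a fraction of the Nyquist frequency, and the function name and defaults are illustrative.

```python
import numpy as np

def butterworth_lowpass(image, cutoff=0.4, order=5):
    """Apply a 2D Butterworth low-pass filter in the frequency domain.

    cutoff : cut-off frequency as a fraction of the Nyquist frequency
    order  : filter order (controls the roll-off steepness)
    """
    ny, nx = image.shape
    fy = np.fft.fftfreq(ny)[:, None]          # cycles/pixel, Nyquist = 0.5
    fx = np.fft.fftfreq(nx)[None, :]
    f = np.sqrt(fx**2 + fy**2) / 0.5          # normalised so Nyquist = 1
    h = 1.0 / (1.0 + (f / cutoff) ** (2 * order))
    return np.real(np.fft.ifft2(np.fft.fft2(image) * h))
```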
NASA Astrophysics Data System (ADS)
Tang, Jing; Rahmim, Arman; Lautamäki, Riikka; Lodge, Martin A.; Bengel, Frank M.; Tsui, Benjamin M. W.
2009-05-01
The purpose of this study is to optimize the dynamic Rb-82 cardiac PET acquisition and reconstruction protocols for maximum myocardial perfusion defect detection using realistic simulation data and task-based evaluation. Time activity curves (TACs) of different organs under both rest and stress conditions were extracted from dynamic Rb-82 PET images of five normal patients. Combined SimSET-GATE Monte Carlo simulation was used to generate nearly noise-free cardiac PET data from a time series of 3D NCAT phantoms with organ activities modeling different pre-scan delay times (PDTs) and total acquisition times (TATs). Poisson noise was added to the nearly noise-free projections and the OS-EM algorithm was applied to generate noisy reconstructed images. The channelized Hotelling observer (CHO) with 32 × 32 spatial templates corresponding to four octave-wide frequency channels was used to evaluate the images. The area under the ROC curve (AUC) was calculated from the CHO rating data as an index for image quality in terms of myocardial perfusion defect detection. The 0.5 cycle cm⁻¹ Butterworth post-filtering on OS-EM (with 21 subsets) reconstructed images generates the highest AUC values, while those from iteration numbers 1 to 4 do not show different AUC values. The optimized PDTs for both rest and stress conditions are found to be close to the cross points of the left ventricular chamber and myocardium TACs, which may promote an individualized PDT for patient data processing and image reconstruction. Shortening the TATs by <~3 min from the clinically employed acquisition time does not affect the myocardial perfusion defect detection significantly for both rest and stress studies.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Feng, Tao; Tsui, Benjamin M. W.; Li, Xin
Purpose: The radioligand 11C-KR31173 has been introduced for positron emission tomography (PET) imaging of the angiotensin II subtype 1 receptor in the kidney in vivo. To study the biokinetics of 11C-KR31173 with a compartmental model, the input function is needed. Collection and analysis of arterial blood samples are the established approach to obtain the input function, but they are not feasible in patients with renal diseases. The goal of this study was to develop a quantitative technique that can provide an accurate image-derived input function (ID-IF) to replace the conventional invasive arterial sampling and to test the method in pigs with the goal of translation into human studies. Methods: The experimental animals were injected with [11C]KR31173 and scanned up to 90 min with dynamic PET. Arterial blood samples were collected for the artery-derived input function (AD-IF) and used as a gold standard for the ID-IF. Before PET, magnetic resonance angiography of the kidneys was obtained to provide the anatomical information required for derivation of the recovery coefficients in the abdominal aorta, a requirement for partial volume correction of the ID-IF. Different image reconstruction methods, filtered back projection (FBP) and ordered subset expectation maximization (OS-EM), were investigated for the best trade-off between bias and variance of the ID-IF. The effects of kidney uptake on the quantitative accuracy of the ID-IF were also studied. Biological variables such as red blood cell binding and radioligand metabolism were also taken into consideration. A single blood sample was used for calibration in the later phase of the input function. Results: In the first 2 min after injection, the OS-EM based ID-IF was found to be biased, and the bias was found to be induced by the kidney uptake. No such bias was found with the FBP based image reconstruction method. However, the OS-EM based image reconstruction was found to reduce variance in the subsequent phase of the ID-IF. The combined use of FBP and OS-EM resulted in reduced bias and noise. After performing all the necessary corrections, the areas under the curves (AUCs) of the ID-IF were close to those of the AD-IF (average AUC ratio = 1 ± 0.08) during the early phase. When applied in a two-tissue-compartmental kinetic model, the average difference between the estimated model parameters from ID-IF and AD-IF was 10%, which was within the error of the estimation method. Conclusions: The bias of radioligand concentration in the aorta from the OS-EM image reconstruction is significantly affected by radioligand uptake in the adjacent kidney and cannot be neglected for quantitative evaluation. With careful calibrations and corrections, the ID-IF derived from quantitative dynamic PET images can be used as the input function of the compartmental model to quantify the renal kinetics of 11C-KR31173 in experimental animals, and the authors intend to evaluate this method in future human studies.
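A heavily simplified sketch of the kind of corrections described above (partial-volume recovery in the aorta ROI plus calibration against a single late blood sample) is given below; it is not the authors' implementation, and scaling the whole curve by one late sample is a crude stand-in for their late-phase calibration. All names are illustrative.

```python
import numpy as np

def corrected_idif(aorta_tac, times, recovery_coeff, sample_time, sample_value):
    """Correct an image-derived input function (illustrative sketch).

    aorta_tac      : activity concentration measured in the aorta ROI (array)
    times          : frame mid-times corresponding to aorta_tac
    recovery_coeff : partial-volume recovery coefficient from the MR-derived aorta size
    sample_time    : time of a single late blood sample (same units as `times`)
    sample_value   : measured blood activity at that time
    """
    tac = np.asarray(aorta_tac, dtype=float) / recovery_coeff   # partial-volume correction
    measured_at_sample = np.interp(sample_time, times, tac)     # value at the sample time
    tac *= sample_value / measured_at_sample                    # calibrate to the blood sample
    return tac
```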
NASA Astrophysics Data System (ADS)
Karaoglanis, K.; Efthimiou, N.; Tsoumpas, C.
2015-09-01
Low count PET data is a challenge for medical image reconstruction. The count statistics of a dataset are a key factor in the quality of the reconstructed images. Reconstruction algorithms able to compensate for low count datasets could provide the means to reduce injected patient doses and/or scan times. It has been shown that the use of priors improves image quality in low count conditions. In this study we compared regularised versus post-filtered OSEM for their performance on challenging simulated low count datasets. An initial visual comparison demonstrated that both approaches improve the image quality, although the use of regularisation does not introduce the undesired blurring that post-filtering does.
Optimized MLAA for quantitative non-TOF PET/MR of the brain
NASA Astrophysics Data System (ADS)
Benoit, Didier; Ladefoged, Claes N.; Rezaei, Ahmadreza; Keller, Sune H.; Andersen, Flemming L.; Højgaard, Liselotte; Hansen, Adam E.; Holm, Søren; Nuyts, Johan
2016-12-01
For quantitative tracer distribution in positron emission tomography, attenuation correction is essential. In a hybrid PET/CT system the CT images serve as a basis for generation of the attenuation map, but in PET/MR, the MR images do not have a similarly simple relationship with the attenuation map. Hence attenuation correction in PET/MR systems is more challenging. Typically one of two MR sequences is used: the Dixon or the ultra-short echo time (UTE) technique. However, these sequences have some well-known limitations. In this study, a reconstruction technique based on a modified and optimized non-TOF MLAA is proposed for PET/MR brain imaging. The idea is to tune the parameters of the MLTR by applying some information from an attenuation image computed from the UTE sequences and a T1w MR image. In this MLTR algorithm, an {αj} parameter is introduced and optimized in order to drive the algorithm to a final attenuation map most consistent with the emission data. Because the non-TOF MLAA is used, a technique to reduce the cross-talk effect is proposed. In this study, the proposed algorithm is compared to the common reconstruction methods, namely OSEM using a CT attenuation map, considered as the reference, and OSEM using the Dixon and UTE attenuation maps. To show the robustness and the reproducibility of the proposed algorithm, a set of 204 [18F]FDG patients, 35 [11C]PiB patients and 1 [18F]FET patient are used. The results show that by choosing an optimized value of {αj} in MLTR, the proposed algorithm improves the results compared to the standard MR-based attenuation correction methods (i.e. OSEM using the Dixon or the UTE attenuation maps), and the cross-talk and scale problems are limited.
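The full MLAA/MLTR scheme with the {αj} tuning is beyond a short snippet, but the way attenuation factors enter the emission model, and the MLEM activity update for a fixed attenuation map (the step that MLAA alternates with an attenuation update), can be sketched as follows. The dense system matrix and array names are toy placeholders, not the algorithm used in the study.

```python
import numpy as np

def mlem_with_attenuation(y, P, mu, n_iter=20):
    """MLEM activity update with a known attenuation map (simplified sketch).
    MLAA-type methods would alternate this step with an attenuation (MLTR) update,
    which is omitted here.

    y  : measured sinogram counts, shape (n_lors,)
    P  : geometric system matrix, shape (n_lors, n_voxels)
    mu : attenuation image, shape (n_voxels,)
    """
    atten = np.exp(-P @ mu)               # per-LOR attenuation factors
    lam = np.ones(P.shape[1])             # initial activity estimate
    sens = P.T @ atten                    # sensitivity image
    for _ in range(n_iter):
        ybar = atten * (P @ lam) + 1e-12  # expected counts under the current estimate
        lam *= (P.T @ (atten * y / ybar)) / (sens + 1e-12)
    return lam
```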
Evaluation of Bias and Variance in Low-count OSEM List Mode Reconstruction
Jian, Y; Planeta, B; Carson, R E
2016-01-01
Statistical algorithms have been widely used in PET image reconstruction. The maximum likelihood expectation maximization (MLEM) reconstruction has been shown to produce bias in applications where images are reconstructed from a relatively small number of counts. In this study, image bias and variability in low-count OSEM reconstruction are investigated on images reconstructed with the MOLAR (motion-compensation OSEM list-mode algorithm for resolution-recovery reconstruction) platform. A human brain ([11C]AFM) and a NEMA phantom are used in the simulation and real experiments respectively, for the HRRT and Biograph mCT. Image reconstructions were repeated with different combinations of subsets and iterations. Regions of interest (ROIs) were defined on low-activity and high-activity regions to evaluate the bias and noise at matched effective iteration numbers (iterations × subsets). Minimal negative biases and no positive biases were found at moderate count levels, and less than 5% negative bias was found using extremely low levels of counts (0.2 M NEC). At any given count level, other factors, such as subset numbers and frame-based scatter correction, may introduce small biases (1–5%) in the reconstructed images. The observed bias was substantially lower than that reported in the literature, perhaps due to the use of the point spread function and/or other implementation methods in MOLAR. PMID:25479254
Evaluation of bias and variance in low-count OSEM list mode reconstruction
NASA Astrophysics Data System (ADS)
Jian, Y.; Planeta, B.; Carson, R. E.
2015-01-01
Statistical algorithms have been widely used in PET image reconstruction. The maximum likelihood expectation maximization reconstruction has been shown to produce bias in applications where images are reconstructed from a relatively small number of counts. In this study, image bias and variability in low-count OSEM reconstruction are investigated on images reconstructed with MOLAR (motion-compensation OSEM list-mode algorithm for resolution-recovery reconstruction) platform. A human brain ([11C]AFM) and a NEMA phantom are used in the simulation and real experiments respectively, for the HRRT and Biograph mCT. Image reconstructions were repeated with different combinations of subsets and iterations. Regions of interest were defined on low-activity and high-activity regions to evaluate the bias and noise at matched effective iteration numbers (iterations × subsets). Minimal negative biases and no positive biases were found at moderate count levels and less than 5% negative bias was found using extremely low levels of counts (0.2 M NEC). At any given count level, other factors, such as subset numbers and frame-based scatter correction may introduce small biases (1-5%) in the reconstructed images. The observed bias was substantially lower than that reported in the literature, perhaps due to the use of point spread function and/or other implementation methods in MOLAR.
Kidera, Daisuke; Kihara, Ken; Akamatsu, Go; Mikasa, Shohei; Taniguchi, Takafumi; Tsutsui, Yuji; Takeshita, Toshiki; Maebatake, Akira; Miwa, Kenta; Sasaki, Masayuki
2016-02-01
The aim of this study was to quantitatively evaluate the edge artifacts in PET images reconstructed using the point-spread function (PSF) algorithm at different sphere-to-background ratios of radioactivity (SBRs). We used a NEMA IEC body phantom consisting of six spheres with inner diameters of 37, 28, 22, 17, 13 and 10 mm. The background was filled with (18)F solution with a radioactivity concentration of 2.65 kBq/mL. We prepared three sets of phantoms with SBRs of 16, 8, 4 and 2. The PET data were acquired for 20 min using a Biograph mCT scanner. The images were reconstructed with the baseline ordered subsets expectation maximization (OSEM) algorithm, and with the OSEM + PSF correction model (PSF). For the image reconstruction, the number of iterations ranged from one to 10. The phantom PET image analyses were performed by a visual assessment of the PET images and profiles, a contrast recovery coefficient (CRC), which is the ratio of the SBR in the images to the true SBR, and the percent change in the maximum count between the OSEM and PSF images (Δ% counts). In the PSF images, the spheres with a diameter of 17 mm or larger were surrounded by a dense edge in comparison with the OSEM images. In the spheres with a diameter of 22 mm or smaller, an overshoot appeared in the center of the spheres as a sharp peak in the PSF images at low SBR. These edge artifacts became more pronounced as the SBR increased. An overestimation of the CRC was observed in the 13-mm spheres in the PSF images. In the spheres with a diameter of 17 mm or smaller, the Δ% counts increased with increasing SBR, reaching 91% in the 10-mm sphere at an SBR of 16. The edge artifacts in PET images reconstructed using the PSF algorithm increased with increasing SBR. In the small spheres, the edge artifact was observed as a sharp peak at the center of the spheres and could result in overestimation.
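The two phantom analysis quantities used here, the contrast recovery coefficient (measured SBR divided by true SBR) and the percent change in maximum count between OSEM and PSF images, have simple forms; the sketch below assumes boolean sphere and background masks and is purely illustrative.

```python
import numpy as np

def contrast_recovery_coefficient(img, sphere_mask, bg_mask, true_sbr):
    """CRC = measured sphere-to-background ratio / true ratio."""
    measured_sbr = img[sphere_mask].mean() / img[bg_mask].mean()
    return measured_sbr / true_sbr

def delta_percent_counts(osem_img, psf_img, sphere_mask):
    """Percent change in the maximum count between OSEM and PSF images."""
    osem_max = osem_img[sphere_mask].max()
    psf_max = psf_img[sphere_mask].max()
    return 100.0 * (psf_max - osem_max) / osem_max
```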
Effect of filters and reconstruction algorithms on I-124 PET in Siemens Inveon PET scanner
NASA Astrophysics Data System (ADS)
Ram Yu, A.; Kim, Jin Su
2015-10-01
Purpose: To assess the effects of filtering and reconstruction on Siemens I-124 PET data. Methods: A Siemens Inveon PET was used. The spatial resolution of I-124 was measured out to a transverse offset of 50 mm from the center. FBP, 2D ordered subset expectation maximization (OSEM2D), the 3D re-projection algorithm (3DRP), and maximum a posteriori (MAP) methods were tested. Non-uniformity (NU), recovery coefficient (RC), and spillover ratio (SOR) parameterized image quality. Mini deluxe phantom data of I-124 were also assessed. Results: Volumetric resolution was 7.3 mm³ at the transverse FOV center when the FBP reconstruction algorithm with a ramp filter was used. MAP yielded minimal NU with β = 1.5. OSEM2D yielded maximal RC. SOR was below 4% for FBP with ramp, Hamming, Hanning, or Shepp-Logan filters. Based on the mini deluxe phantom results, FBP with Hanning or Parzen filters, or 3DRP with a Hanning filter, yielded feasible I-124 PET data. Conclusions: Reconstruction algorithms and filters were compared. FBP with Hanning or Parzen filters, or 3DRP with a Hanning filter, yielded feasible data for quantifying I-124 PET.
Lasnon, Charline; Majdoub, Mohamed; Lavigne, Brice; Do, Pascal; Madelaine, Jeannick; Visvikis, Dimitris; Hatt, Mathieu; Aide, Nicolas
2016-12-01
Quantification of tumour heterogeneity in PET images has recently gained interest, but has been shown to be dependent on image reconstruction. This study aimed to evaluate the impact of the EANM/EARL accreditation program on selected 18F-FDG heterogeneity metrics. To carry out our study, we prospectively analysed 71 tumours in 60 biopsy-proven lung cancer patient acquisitions reconstructed with unfiltered point spread function (PSF) positron emission tomography (PET) images (optimised for diagnostic purposes), PSF-reconstructed images with a 7-mm Gaussian filter (PSF7) chosen to meet European Association of Nuclear Medicine (EANM) 1.0 harmonising standards, and EANM Research Ltd. (EARL)-compliant ordered subset expectation maximisation (OSEM) images. Delineation was performed with the fuzzy locally adaptive Bayesian (FLAB) algorithm on PSF images and reported on PSF7 and OSEM ones, and with a 50% of maximum standardised uptake value (SUVmax) threshold (SUVmax50%) applied independently to each image. Robust and repeatable heterogeneity metrics including 1st-order [area under the curve of the cumulative histogram (CHAUC)], 2nd-order (entropy, correlation, and dissimilarity), and 3rd-order [high-intensity larger area emphasis (HILAE) and zone percentage (ZP)] textural features (TF) were statistically compared. Volumes obtained with SUVmax50% were significantly smaller than FLAB-derived ones, and were significantly smaller in PSF images compared to OSEM and PSF7 images. PSF-reconstructed images showed significantly higher SUVmax and SUVmean values, as well as heterogeneity for CHAUC, dissimilarity, correlation, and HILAE, and a wider range of heterogeneity values than OSEM images for most of the metrics considered, especially when analysing larger tumours. Histological subtypes had no impact on TF distribution. No significant difference was observed between any of the considered metrics (SUV or heterogeneity features) that we extracted from OSEM and PSF7 reconstructions. Furthermore, the distributions of TF for OSEM and PSF7 reconstructions according to tumour volumes were similar for all ranges of volumes. PSF reconstruction with Gaussian filtering chosen to meet harmonising standards resulted in similar SUV values and heterogeneity information as compared to OSEM images, which validates its use within the harmonisation strategy context. However, unfiltered PSF-reconstructed images also showed higher heterogeneity according to some metrics, as well as a wider range of heterogeneity values than OSEM images for most of the metrics considered, especially when analysing larger tumours. This suggests that, whenever available, unfiltered PSF images should also be exploited to obtain the most discriminative quantitative heterogeneity features.
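Of the listed metrics, the first-order CHAUC has a particularly compact form; a sketch is given below, assuming a 1D array of voxel SUVs inside the delineated tumour. In the usual formulation, lower values of this area indicate greater heterogeneity; the exact convention used in the study may differ.

```python
import numpy as np

def cumulative_histogram_auc(suv_values, n_thresholds=100):
    """Area under the cumulative SUV-volume histogram (CHAUC):
    fraction of tumour volume above x% of SUVmax, integrated over x in [0, 1]."""
    suv = np.asarray(suv_values, dtype=float)
    thresholds = np.linspace(0.0, 1.0, n_thresholds) * suv.max()
    volume_fraction = np.array([(suv >= t).mean() for t in thresholds])
    return np.trapz(volume_fraction, dx=1.0 / (n_thresholds - 1))
```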
Incorporation of a two metre long PET scanner in STIR
NASA Astrophysics Data System (ADS)
Tsoumpas, C.; Brain, C.; Dyke, T.; Gold, D.
2015-09-01
The Explorer project aims to investigate the potential benefits of a total-body 2 metre long PET scanner. The following investigation incorporates this scanner into the STIR library and demonstrates the capabilities and weaknesses of the existing reconstruction (FBP and OSEM) and single scatter simulation algorithms. It was found that sensible images are reconstructed, but at the expense of high memory and processing time demands. FBP requires 4 hours on a single core; OSEM requires 2 hours per iteration if run in parallel on 15 cores of a high-performance computer. The single scatter simulation algorithm shows that on a short scale, up to a fifth of the scanner length, the assumption that the scatter between direct rings is similar to the scatter between the oblique rings is approximately valid. However, for more extreme cases this assumption is no longer valid, which illustrates that consideration of the oblique rings within the single scatter simulation will be necessary if this scatter correction is the method of choice.
Hahn, Andreas; Nics, Lukas; Baldinger, Pia; Wadsak, Wolfgang; Savli, Markus; Kraus, Christoph; Birkfellner, Wolfgang; Ungersboeck, Johanna; Haeusler, Daniela; Mitterhauser, Markus; Karanikas, Georgios; Kasper, Siegfried; Frey, Richard; Lanzenberger, Rupert
2013-04-01
Image-derived input functions (IDIFs) represent a promising non-invasive alternative to arterial blood sampling for quantification in positron emission tomography (PET) studies. However, routine applications in patients and longitudinal designs are largely missing despite widespread attempts in healthy subjects. The aim of this study was to apply a previously validated approach to a clinical sample of patients with major depressive disorder (MDD) before and after electroconvulsive therapy (ECT). Eleven scans from 5 patients with venous blood sampling were obtained with the radioligand [carbonyl-(11)C]WAY-100635 at baseline, before and after 11.0±1.2 ECT sessions. IDIFs were defined by two different image reconstruction algorithms: 1) OSEM with subsequent partial volume correction (OSEM+PVC) and 2) reconstruction-based modelling of the point spread function (TrueX). Serotonin-1A receptor (5-HT1A) binding potentials (BPP, BPND) were quantified with a two-tissue compartment (2TCM) and reference region model (MRTM2). Compared to MRTM2, good agreement in 5-HT1A BPND was found when using input functions from OSEM+PVC (R² = 0.82) but not TrueX (R² = 0.57, p < 0.001), which is further reflected by lower IDIF peaks for TrueX (p < 0.001). Following ECT, decreased 5-HT1A BPND and BPP were found with the 2TCM using OSEM+PVC (23%-35%), except for one patient showing only subtle changes. In contrast, MRTM2 and IDIFs from TrueX gave unstable results for this patient, most probably due to a 2.4-fold underestimation of non-specific binding. Using image-derived and venous input functions defined by OSEM with subsequent PVC we confirm previously reported decreases in 5-HT1A binding in MDD patients after ECT. In contrast to reference region modeling, quantification with image-derived input functions showed consistent results in a clinical setting due to accurate modeling of non-specific binding with OSEM+PVC. Copyright © 2013 Elsevier Inc. All rights reserved.
Slow-rotation dynamic SPECT with a temporal second derivative constraint.
Humphries, T; Celler, A; Trummer, M
2011-08-01
Dynamic tracer behavior in the human body arises as a result of continuous physiological processes. Hence, the change in tracer concentration within a region of interest (ROI) should follow a smooth curve. The authors propose a modification to an existing slow-rotation dynamic SPECT reconstruction algorithm (dSPECT) with the goal of improving the smoothness of time activity curves (TACs) and other properties of the reconstructed image. The new method, denoted d2EM, imposes a constraint on the second derivative (concavity) of the TAC in every voxel of the reconstructed image, allowing it to change sign at most once. Further constraints are enforced to prevent other nonphysical behaviors from arising. The new method is compared with dSPECT using digital phantom simulations and experimental dynamic 99mTc-DTPA renal SPECT data, to assess any improvement in image quality. In both phantom simulations and healthy volunteer experiments, the d2EM method provides smoother TACs than dSPECT, with more consistent shapes in regions with dynamic behavior. Magnitudes of TACs within an ROI still vary noticeably in both dSPECT and d2EM images, but also in images produced using an OSEM approach that reconstructs each time frame individually, based on much more complete projection data. TACs produced by averaging over a region are similar using either method, even for small ROIs. Results for experimental renal data show expected behavior in images produced by both methods, with d2EM providing somewhat smoother mean TACs and more consistent TAC shapes. The d2EM method is successful in improving the smoothness of time activity curves obtained from the reconstruction, as well as improving consistency of TAC shapes within ROIs.
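The d2EM constraint limits the discrete second derivative of each voxel TAC to at most one sign change; a small helper that checks this condition on a sampled TAC is sketched below (illustrative only, not the constrained reconstruction itself).

```python
import numpy as np

def concavity_sign_changes(tac):
    """Number of sign changes of the discrete second derivative of a TAC.
    A d2EM-style constraint would require this to be at most one."""
    d2 = np.diff(np.asarray(tac, dtype=float), n=2)
    signs = np.sign(d2[np.abs(d2) > 1e-12])          # ignore (near-)zero curvature
    return int(np.count_nonzero(np.diff(signs) != 0))

# A smooth uptake-then-washout curve typically changes concavity once:
t = np.linspace(0, 60, 30)
tac = t * np.exp(-t / 15.0)
print(concavity_sign_changes(tac) <= 1)   # True
```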
Implementation of GPU accelerated SPECT reconstruction with Monte Carlo-based scatter correction.
Bexelius, Tobias; Sohlberg, Antti
2018-06-01
Statistical SPECT reconstruction can be very time-consuming especially when compensations for collimator and detector response, attenuation, and scatter are included in the reconstruction. This work proposes an accelerated SPECT reconstruction algorithm based on graphics processing unit (GPU) processing. Ordered subset expectation maximization (OSEM) algorithm with CT-based attenuation modelling, depth-dependent Gaussian convolution-based collimator-detector response modelling, and Monte Carlo-based scatter compensation was implemented using OpenCL. The OpenCL implementation was compared against the existing multi-threaded OSEM implementation running on a central processing unit (CPU) in terms of scatter-to-primary ratios, standardized uptake values (SUVs), and processing speed using mathematical phantoms and clinical multi-bed bone SPECT/CT studies. The difference in scatter-to-primary ratios, visual appearance, and SUVs between GPU and CPU implementations was minor. On the other hand, at its best, the GPU implementation was noticed to be 24 times faster than the multi-threaded CPU version on a normal 128 × 128 matrix size 3 bed bone SPECT/CT data set when compensations for collimator and detector response, attenuation, and scatter were included. GPU SPECT reconstructions show great promise as an every day clinical reconstruction tool.
NASA Astrophysics Data System (ADS)
Baghaei, H.; Wong, Wai-Hoi; Uribe, J.; Li, Hongdi; Wang, Yu; Liu, Yaqiang; Xing, Tao; Ramirez, R.; Xie, Shuping; Kim, Soonseok
2004-10-01
We compared two fully three-dimensional (3-D) image reconstruction algorithms and two 3-D rebinning algorithms followed by reconstruction with a two-dimensional (2-D) filtered-backprojection algorithm for 3-D positron emission tomography (PET) imaging. The two 3-D image reconstruction algorithms were the ordered-subsets expectation-maximization (3D-OSEM) and 3-D reprojection (3DRP) algorithms. The two rebinning algorithms were Fourier rebinning (FORE) and single slice rebinning (SSRB). The 3-D projection data used for this work were acquired with a high-resolution PET scanner (MDAPET) with an intrinsic transaxial resolution of 2.8 mm. The scanner has 14 detector rings covering an axial field-of-view of 38.5 mm. We scanned three phantoms: 1) a uniform cylindrical phantom with an inner diameter of 21.5 cm; 2) a uniform 11.5-cm cylindrical phantom with four embedded small hot lesions with diameters of 3, 4, 5, and 6 mm; and 3) the 3-D Hoffman brain phantom with three embedded small hot lesion phantoms with diameters of 3, 5, and 8.6 mm in a warm background. Lesions were placed at different radial and axial distances. We evaluated the different reconstruction methods for the MDAPET camera by comparing the noise level of images, contrast recovery, and hot lesion detection, and visually compared images. We found that overall the 3D-OSEM algorithm, especially when images were post-filtered with the Metz filter, produced the best results in terms of contrast-noise tradeoff, detection of hot spots, and reproduction of brain phantom structures. Even though the MDAPET camera has a relatively small maximum axial acceptance (±5°), images produced with the 3DRP algorithm had slightly better contrast recovery and reproduced the structures of the brain phantom slightly better than the faster 2-D rebinning methods.
Kalantari, Faraz; Li, Tianfang; Jin, Mingwu; Wang, Jing
2016-01-01
In conventional 4D positron emission tomography (4D-PET), images from different frames are reconstructed individually and aligned by registration methods. Two issues that arise with this approach are as follows: 1) the reconstruction algorithms do not make full use of projection statistics; and 2) the registration between noisy images can result in poor alignment. In this study, we investigated the use of simultaneous motion estimation and image reconstruction (SMEIR) methods for motion estimation/correction in 4D-PET. A modified ordered-subset expectation maximization algorithm coupled with total variation minimization (OSEM-TV) was used to obtain a primary motion-compensated PET (pmc-PET) from all projection data, using Demons derived deformation vector fields (DVFs) as initial motion vectors. A motion model update was performed to obtain an optimal set of DVFs in the pmc-PET and other phases, by matching the forward projection of the deformed pmc-PET with measured projections from other phases. The OSEM-TV image reconstruction was repeated using updated DVFs, and new DVFs were estimated based on updated images. A 4D-XCAT phantom with typical FDG biodistribution was generated to evaluate the performance of the SMEIR algorithm in lung and liver tumors with different contrasts and different diameters (10 to 40 mm). The image quality of the 4D-PET was greatly improved by the SMEIR algorithm. When all projections were used to reconstruct 3D-PET without motion compensation, motion blurring artifacts were present, leading up to 150% tumor size overestimation and significant quantitative errors, including 50% underestimation of tumor contrast and 59% underestimation of tumor uptake. Errors were reduced to less than 10% in most images by using the SMEIR algorithm, showing its potential in motion estimation/correction in 4D-PET. PMID:27385378
NASA Astrophysics Data System (ADS)
Kalantari, Faraz; Li, Tianfang; Jin, Mingwu; Wang, Jing
2016-08-01
In conventional 4D positron emission tomography (4D-PET), images from different frames are reconstructed individually and aligned by registration methods. Two issues that arise with this approach are as follows: (1) the reconstruction algorithms do not make full use of projection statistics; and (2) the registration between noisy images can result in poor alignment. In this study, we investigated the use of simultaneous motion estimation and image reconstruction (SMEIR) methods for motion estimation/correction in 4D-PET. A modified ordered-subset expectation maximization algorithm coupled with total variation minimization (OSEM-TV) was used to obtain a primary motion-compensated PET (pmc-PET) from all projection data, using Demons derived deformation vector fields (DVFs) as initial motion vectors. A motion model update was performed to obtain an optimal set of DVFs in the pmc-PET and other phases, by matching the forward projection of the deformed pmc-PET with measured projections from other phases. The OSEM-TV image reconstruction was repeated using updated DVFs, and new DVFs were estimated based on updated images. A 4D-XCAT phantom with typical FDG biodistribution was generated to evaluate the performance of the SMEIR algorithm in lung and liver tumors with different contrasts and different diameters (10-40 mm). The image quality of the 4D-PET was greatly improved by the SMEIR algorithm. When all projections were used to reconstruct 3D-PET without motion compensation, motion blurring artifacts were present, leading up to 150% tumor size overestimation and significant quantitative errors, including 50% underestimation of tumor contrast and 59% underestimation of tumor uptake. Errors were reduced to less than 10% in most images by using the SMEIR algorithm, showing its potential in motion estimation/correction in 4D-PET.
SPECT reconstruction with nonuniform attenuation from highly under-sampled projection data
NASA Astrophysics Data System (ADS)
Li, Cuifen; Wen, Junhai; Zhang, Kangping; Shi, Donghao; Dong, Haixiang; Li, Wenxiao; Liang, Zhengrong
2012-03-01
Single photon emission computed tomography (SPECT) is an important nuclear medicine imaging technique and has been used in clinical diagnosis. SPECT images can reflect not only organ structure but also the functional activity of the human body, so diseases can be detected much earlier. In SPECT, the reconstruction is based on the measurement of gamma photons emitted by the radiotracer. The number of gamma photons detected is proportional to the dose of radiopharmaceutical, but the dose is limited for patient safety. There is an upper limit on the number of gamma photons that can be detected per unit time, so it takes a long time to acquire SPECT projection data. Sometimes only highly under-sampled projection data can be obtained because of limits on the scanning time or the imaging hardware. How to reconstruct an image from highly under-sampled projection data is an interesting problem. One method is to minimize the total variation (TV) of the reconstructed image during the iterative reconstruction. In this work, we developed an OSEM-TV SPECT reconstruction algorithm, which can reconstruct the image from highly under-sampled projection data with non-uniform attenuation. Simulation results demonstrate that the OSEM-TV algorithm performs well in SPECT reconstruction with non-uniform attenuation.
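OSEM-TV schemes of this kind typically alternate OSEM sub-iterations with a few descent steps on the total variation of the image; the TV step alone can be sketched as follows (2D isotropic TV with a smoothing constant and periodic boundary handling via roll, both simplifications relative to any particular published implementation).

```python
import numpy as np

def tv_descent(image, n_steps=10, step=0.1, eps=1e-8):
    """A few steepest-descent steps on the (smoothed) isotropic TV norm.
    In OSEM-TV schemes this is typically interleaved with OSEM sub-iterations."""
    u = image.astype(float).copy()
    for _ in range(n_steps):
        ux = np.roll(u, -1, axis=1) - u          # forward differences
        uy = np.roll(u, -1, axis=0) - u
        norm = np.sqrt(ux**2 + uy**2 + eps)
        px, py = ux / norm, uy / norm
        div = (px - np.roll(px, 1, axis=1)) + (py - np.roll(py, 1, axis=0))
        u += step * div                          # -grad(TV) = div(grad u / |grad u|)
    return np.clip(u, 0, None)                   # keep non-negativity
```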
LOR-interleaving image reconstruction for PET imaging with fractional-crystal collimation
NASA Astrophysics Data System (ADS)
Li, Yusheng; Matej, Samuel; Karp, Joel S.; Metzler, Scott D.
2015-01-01
Positron emission tomography (PET) has become an important modality in medical and molecular imaging. However, in most PET applications, the resolution is still mainly limited by the physical crystal sizes or the detector’s intrinsic spatial resolution. To achieve images with better spatial resolution in a central region of interest (ROI), we have previously proposed using collimation in PET scanners. The collimator is designed to partially mask detector crystals to detect lines of response (LORs) within fractional crystals. A sequence of collimator-encoded LORs is measured with different collimation configurations. This novel collimated scanner geometry makes the reconstruction problem challenging, as both detector and collimator effects need to be modeled to reconstruct high-resolution images from collimated LORs. In this paper, we present a LOR-interleaving (LORI) algorithm, which incorporates these effects and has the advantage of reusing existing reconstruction software, to reconstruct high-resolution images for PET with fractional-crystal collimation. We also develop a 3D ray-tracing model incorporating both the collimator and crystal penetration for simulations and reconstructions of the collimated PET. By registering the collimator-encoded LORs with the collimator configurations, high-resolution LORs are restored based on the modeled transfer matrices using the non-negative least-squares method and EM algorithm. The resolution-enhanced images are then reconstructed from the high-resolution LORs using the MLEM or OSEM algorithm. For validation, we applied the LORI method to a small-animal PET scanner, A-PET, with a specially designed collimator. We demonstrate through simulated reconstructions with a hot-rod phantom and MOBY phantom that the LORI reconstructions can substantially improve spatial resolution and quantification compared to the uncollimated reconstructions. The LORI algorithm is crucial to improve overall image quality of collimated PET, which can have significant implications in preclinical and clinical ROI imaging applications.
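The restoration of high-resolution LORs from collimator-encoded measurements via a modeled transfer matrix can be illustrated with scipy's non-negative least-squares solver; the transfer matrix and data below are toy placeholders, not the A-PET collimator model.

```python
import numpy as np
from scipy.optimize import nnls

def restore_high_res_lors(transfer_matrix, measured_lors):
    """Solve min ||A x - b||_2 subject to x >= 0, where A maps high-resolution
    LOR intensities to the collimator-encoded measurements."""
    x, residual = nnls(transfer_matrix, measured_lors)
    return x

# Toy example: two fine LORs mixed by two collimator configurations
A = np.array([[0.7, 0.3],
              [0.2, 0.8]])
b = A @ np.array([5.0, 2.0])
print(restore_high_res_lors(A, b))   # ~[5., 2.]
```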
Direct parametric reconstruction in dynamic PET myocardial perfusion imaging: in vivo studies.
Petibon, Yoann; Rakvongthai, Yothin; El Fakhri, Georges; Ouyang, Jinsong
2017-05-07
Dynamic PET myocardial perfusion imaging (MPI) used in conjunction with tracer kinetic modeling enables the quantification of absolute myocardial blood flow (MBF). However, MBF maps computed using the traditional indirect method (i.e. post-reconstruction voxel-wise fitting of kinetic model to PET time-activity-curves-TACs) suffer from poor signal-to-noise ratio (SNR). Direct reconstruction of kinetic parameters from raw PET projection data has been shown to offer parametric images with higher SNR compared to the indirect method. The aim of this study was to extend and evaluate the performance of a direct parametric reconstruction method using in vivo dynamic PET MPI data for the purpose of quantifying MBF. Dynamic PET MPI studies were performed on two healthy pigs using a Siemens Biograph mMR scanner. List-mode PET data for each animal were acquired following a bolus injection of ~7-8 mCi of 18F-flurpiridaz, a myocardial perfusion agent. Fully-3D dynamic PET sinograms were obtained by sorting the coincidence events into 16 temporal frames covering ~5 min after radiotracer administration. Additionally, eight independent noise realizations of both scans-each containing 1/8th of the total number of events-were generated from the original list-mode data. Dynamic sinograms were then used to compute parametric maps using the conventional indirect method and the proposed direct method. For both methods, a one-tissue compartment model accounting for spillover from the left and right ventricle blood-pools was used to describe the kinetics of 18F-flurpiridaz. An image-derived arterial input function obtained from a TAC taken in the left ventricle cavity was used for tracer kinetic analysis. For the indirect method, frame-by-frame images were estimated using two fully-3D reconstruction techniques: the standard ordered subset expectation maximization (OSEM) reconstruction algorithm on one side, and the one-step late maximum a posteriori (OSL-MAP) algorithm on the other side, which incorporates a quadratic penalty function. The parametric images were then calculated using voxel-wise weighted least-square fitting of the reconstructed myocardial PET TACs. For the direct method, parametric images were estimated directly from the dynamic PET sinograms using a maximum a posteriori (MAP) parametric reconstruction algorithm which optimizes an objective function comprised of the Poisson log-likelihood term, the kinetic model and a quadratic penalty function. Maximization of the objective function with respect to each set of parameters was achieved using a preconditioned conjugate gradient algorithm with a specifically developed pre-conditioner. The performance of the direct method was evaluated by comparing voxel- and segment-wise estimates of K1, the tracer transport rate (ml·min⁻¹·ml⁻¹), to those obtained using the indirect method applied to both OSEM and OSL-MAP dynamic reconstructions. The proposed direct reconstruction method produced K1 maps with visibly lower noise than the indirect method based on OSEM and OSL-MAP reconstructions. At normal count levels, the direct method was shown to outperform the indirect method based on OSL-MAP in the sense that at matched level of bias, reduced regional noise levels were obtained. At lower count levels, the direct method produced K1 estimates with significantly lower standard deviation across noise realizations than the indirect method based on OSL-MAP at matched bias level. 
In all cases, the direct method yielded lower noise and standard deviation than the indirect method based on OSEM. Overall, the proposed direct reconstruction offered a better bias-variance tradeoff than the indirect method applied to either OSEM and OSL-MAP. Direct parametric reconstruction as applied to in vivo dynamic PET MPI data is therefore a promising method for producing MBF maps with lower variance.
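For the indirect method described above, the voxel-wise fit of a one-tissue compartment model with left/right-ventricle spillover can be sketched as follows; the discretization (uniform frame times, simple convolution) and starting values are illustrative simplifications rather than the authors' exact implementation.

```python
import numpy as np
from scipy.optimize import least_squares

def one_tissue_tac(params, t, ca, lv, rv):
    """One-tissue compartment model with LV/RV spillover (illustrative form)."""
    K1, k2, f_lv, f_rv = params
    dt = t[1] - t[0]                                      # assumes uniform frame times
    tissue = K1 * dt * np.convolve(np.exp(-k2 * t), ca)[: len(t)]
    return (1.0 - f_lv - f_rv) * tissue + f_lv * lv + f_rv * rv

def fit_voxel_tac(tac, t, ca, lv, rv, weights):
    """Indirect method: weighted least-squares fit of a reconstructed voxel TAC."""
    residual = lambda p: np.sqrt(weights) * (one_tissue_tac(p, t, ca, lv, rv) - tac)
    fit = least_squares(residual, x0=[0.5, 0.1, 0.2, 0.1],
                        bounds=([0, 0, 0, 0], [5, 5, 1, 1]))
    return fit.x   # K1, k2, f_LV, f_RV
```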
Direct parametric reconstruction in dynamic PET myocardial perfusion imaging: in-vivo studies
Petibon, Yoann; Rakvongthai, Yothin; Fakhri, Georges El; Ouyang, Jinsong
2017-01-01
Dynamic PET myocardial perfusion imaging (MPI) used in conjunction with tracer kinetic modeling enables the quantification of absolute myocardial blood flow (MBF). However, MBF maps computed using the traditional indirect method (i.e. post-reconstruction voxel-wise fitting of kinetic model to PET time-activity-curves -TACs) suffer from poor signal-to-noise ratio (SNR). Direct reconstruction of kinetic parameters from raw PET projection data has been shown to offer parametric images with higher SNR compared to the indirect method. The aim of this study was to extend and evaluate the performance of a direct parametric reconstruction method using in-vivo dynamic PET MPI data for the purpose of quantifying MBF. Dynamic PET MPI studies were performed on two healthy pigs using a Siemens Biograph mMR scanner. List-mode PET data for each animal were acquired following a bolus injection of ~7-8 mCi of 18F-flurpiridaz, a myocardial perfusion agent. Fully-3D dynamic PET sinograms were obtained by sorting the coincidence events into 16 temporal frames covering ~5 min after radiotracer administration. Additionally, eight independent noise realizations of both scans - each containing 1/8th of the total number of events - were generated from the original list-mode data. Dynamic sinograms were then used to compute parametric maps using the conventional indirect method and the proposed direct method. For both methods, a one-tissue compartment model accounting for spillover from the left and right ventricle blood-pools was used to describe the kinetics of 18F-flurpiridaz. An image-derived arterial input function obtained from a TAC taken in the left ventricle cavity was used for tracer kinetic analysis. For the indirect method, frame-by-frame images were estimated using two fully-3D reconstruction techniques: the standard Ordered Subset Expectation Maximization (OSEM) reconstruction algorithm on one side, and the One-Step Late Maximum a Posteriori (OSL-MAP) algorithm on the other side, which incorporates a quadratic penalty function. The parametric images were then calculated using voxel-wise weighted least-square fitting of the reconstructed myocardial PET TACs. For the direct method, parametric images were estimated directly from the dynamic PET sinograms using a maximum a posteriori (MAP) parametric reconstruction algorithm which optimizes an objective function comprised of the Poisson log-likelihood term, the kinetic model and a quadratic penalty function. Maximization of the objective function with respect to each set of parameters was achieved using a preconditioned conjugate gradient algorithm with a specifically developed pre-conditioner. The performance of the direct method was evaluated by comparing voxel- and segment-wise estimates of K1, the tracer transport rate (mL.min−1.mL−1), to those obtained using the indirect method applied to both OSEM and OSL-MAP dynamic reconstructions. The proposed direct reconstruction method produced K1 maps with visibly lower noise than the indirect method based on OSEM and OSL-MAP reconstructions. At normal count levels, the direct method was shown to outperform the indirect method based on OSL-MAP in the sense that at matched level of bias, reduced regional noise levels were obtained. At lower count levels, the direct method produced K1 estimates with significantly lower standard deviation across noise realizations than the indirect method based on OSL-MAP at matched bias level. 
In all cases, the direct method yielded lower noise and standard deviation than the indirect method based on OSEM. Overall, the proposed direct reconstruction offered a better bias-variance tradeoff than the indirect method applied to either OSEM and OSL-MAP. Direct parametric reconstruction as applied to in-vivo dynamic PET MPI data is therefore a promising method for producing MBF maps with lower variance. PMID:28379843
Direct parametric reconstruction in dynamic PET myocardial perfusion imaging: in vivo studies
NASA Astrophysics Data System (ADS)
Petibon, Yoann; Rakvongthai, Yothin; El Fakhri, Georges; Ouyang, Jinsong
2017-05-01
Dynamic PET myocardial perfusion imaging (MPI) used in conjunction with tracer kinetic modeling enables the quantification of absolute myocardial blood flow (MBF). However, MBF maps computed using the traditional indirect method (i.e. post-reconstruction voxel-wise fitting of kinetic model to PET time-activity-curves-TACs) suffer from poor signal-to-noise ratio (SNR). Direct reconstruction of kinetic parameters from raw PET projection data has been shown to offer parametric images with higher SNR compared to the indirect method. The aim of this study was to extend and evaluate the performance of a direct parametric reconstruction method using in vivo dynamic PET MPI data for the purpose of quantifying MBF. Dynamic PET MPI studies were performed on two healthy pigs using a Siemens Biograph mMR scanner. List-mode PET data for each animal were acquired following a bolus injection of ~7-8 mCi of 18F-flurpiridaz, a myocardial perfusion agent. Fully-3D dynamic PET sinograms were obtained by sorting the coincidence events into 16 temporal frames covering ~5 min after radiotracer administration. Additionally, eight independent noise realizations of both scans—each containing 1/8th of the total number of events—were generated from the original list-mode data. Dynamic sinograms were then used to compute parametric maps using the conventional indirect method and the proposed direct method. For both methods, a one-tissue compartment model accounting for spillover from the left and right ventricle blood-pools was used to describe the kinetics of 18F-flurpiridaz. An image-derived arterial input function obtained from a TAC taken in the left ventricle cavity was used for tracer kinetic analysis. For the indirect method, frame-by-frame images were estimated using two fully-3D reconstruction techniques: the standard ordered subset expectation maximization (OSEM) reconstruction algorithm on one side, and the one-step late maximum a posteriori (OSL-MAP) algorithm on the other side, which incorporates a quadratic penalty function. The parametric images were then calculated using voxel-wise weighted least-square fitting of the reconstructed myocardial PET TACs. For the direct method, parametric images were estimated directly from the dynamic PET sinograms using a maximum a posteriori (MAP) parametric reconstruction algorithm which optimizes an objective function comprised of the Poisson log-likelihood term, the kinetic model and a quadratic penalty function. Maximization of the objective function with respect to each set of parameters was achieved using a preconditioned conjugate gradient algorithm with a specifically developed pre-conditioner. The performance of the direct method was evaluated by comparing voxel- and segment-wise estimates of K1, the tracer transport rate (ml·min⁻¹·ml⁻¹), to those obtained using the indirect method applied to both OSEM and OSL-MAP dynamic reconstructions. The proposed direct reconstruction method produced K1 maps with visibly lower noise than the indirect method based on OSEM and OSL-MAP reconstructions. At normal count levels, the direct method was shown to outperform the indirect method based on OSL-MAP in the sense that at matched level of bias, reduced regional noise levels were obtained. At lower count levels, the direct method produced K1 estimates with significantly lower standard deviation across noise realizations than the indirect method based on OSL-MAP at matched bias level. 
In all cases, the direct method yielded lower noise and standard deviation than the indirect method based on OSEM. Overall, the proposed direct reconstruction offered a better bias-variance tradeoff than the indirect method applied to either OSEM and OSL-MAP. Direct parametric reconstruction as applied to in vivo dynamic PET MPI data is therefore a promising method for producing MBF maps with lower variance.
Accelerating image reconstruction in dual-head PET system by GPU and symmetry properties.
Chou, Cheng-Ying; Dong, Yun; Hung, Yukai; Kao, Yu-Jiun; Wang, Weichung; Kao, Chien-Min; Chen, Chin-Tu
2012-01-01
Positron emission tomography (PET) is an important imaging modality in both clinical usage and research studies. We have developed a compact high-sensitivity PET system that consists of two large-area panel PET detector heads, which produce more than 224 million lines of response and thus impose heavy computational demands. In this work, we employed a state-of-the-art graphics processing unit (GPU), NVIDIA Tesla C2070, to yield an efficient reconstruction process. Our approach integrates the distinguishing features of the symmetry properties of the imaging system and the GPU architecture, including block/warp/thread assignments and effective memory usage, to accelerate the computations for ordered subset expectation maximization (OSEM) image reconstruction. The OSEM reconstruction algorithm was implemented in both CPU-based and GPU-based code, and the computational performance was quantitatively analyzed and compared. The results showed that the GPU-accelerated scheme can drastically reduce the reconstruction time and thus largely expand the applicability of the dual-head PET system.
Resolution recovery for Compton camera using origin ensemble algorithm.
Andreyev, A; Celler, A; Ozsahin, I; Sitek, A
2016-08-01
Compton cameras (CCs) use electronic collimation to reconstruct the images of activity distribution. Although this approach can greatly improve imaging efficiency, due to complex geometry of the CC principle, image reconstruction with the standard iterative algorithms, such as ordered subset expectation maximization (OSEM), can be very time-consuming, even more so if resolution recovery (RR) is implemented. We have previously shown that the origin ensemble (OE) algorithm can be used for the reconstruction of the CC data. Here we propose a method of extending our OE algorithm to include RR. To validate the proposed algorithm we used Monte Carlo simulations of a CC composed of multiple layers of pixelated CZT detectors and designed for imaging small animals. A series of CC acquisitions of small hot spheres and the Derenzo phantom placed in air were simulated. Images obtained from (a) the exact data, (b) blurred data but reconstructed without resolution recovery, and (c) blurred and reconstructed with resolution recovery were compared. Furthermore, the reconstructed contrast-to-background ratios were investigated using the phantom with nine spheres placed in a hot background. Our simulations demonstrate that the proposed method allows for the recovery of the resolution loss that is due to imperfect accuracy of event detection. Additionally, tests of camera sensitivity corresponding to different detector configurations demonstrate that the proposed CC design has sensitivity comparable to PET. When the same number of events were considered, the computation time per iteration increased only by a factor of 2 when OE reconstruction with the resolution recovery correction was performed relative to the original OE algorithm. We estimate that the addition of resolution recovery to the OSEM would increase reconstruction times by 2-3 orders of magnitude per iteration. The results of our tests demonstrate the improvement of image resolution provided by the OE reconstructions with resolution recovery. The quality of images and their contrast are similar to those obtained from the OE reconstructions from scans simulated with perfect energy and spatial resolutions.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Andreyev, A.
Purpose: Compton cameras (CCs) use electronic collimation to reconstruct the images of activity distribution. Although this approach can greatly improve imaging efficiency, due to complex geometry of the CC principle, image reconstruction with the standard iterative algorithms, such as ordered subset expectation maximization (OSEM), can be very time-consuming, even more so if resolution recovery (RR) is implemented. We have previously shown that the origin ensemble (OE) algorithm can be used for the reconstruction of the CC data. Here we propose a method of extending our OE algorithm to include RR. Methods: To validate the proposed algorithm we used Monte Carlo simulations of a CC composed of multiple layers of pixelated CZT detectors and designed for imaging small animals. A series of CC acquisitions of small hot spheres and the Derenzo phantom placed in air were simulated. Images obtained from (a) the exact data, (b) blurred data but reconstructed without resolution recovery, and (c) blurred and reconstructed with resolution recovery were compared. Furthermore, the reconstructed contrast-to-background ratios were investigated using the phantom with nine spheres placed in a hot background. Results: Our simulations demonstrate that the proposed method allows for the recovery of the resolution loss that is due to imperfect accuracy of event detection. Additionally, tests of camera sensitivity corresponding to different detector configurations demonstrate that the proposed CC design has sensitivity comparable to PET. When the same number of events were considered, the computation time per iteration increased only by a factor of 2 when OE reconstruction with the resolution recovery correction was performed relative to the original OE algorithm. We estimate that the addition of resolution recovery to the OSEM would increase reconstruction times by 2–3 orders of magnitude per iteration. Conclusions: The results of our tests demonstrate the improvement of image resolution provided by the OE reconstructions with resolution recovery. The quality of images and their contrast are similar to those obtained from the OE reconstructions from scans simulated with perfect energy and spatial resolutions.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kalantari, F; Wang, J; Li, T
2015-06-15
Purpose: In conventional 4D-PET, images from different frames are reconstructed individually and aligned by registration methods. Two issues with these approaches are: 1) reconstruction algorithms do not make full use of all projection statistics; and 2) image registration between noisy images can result in poor alignment. In this study we investigated the use of the simultaneous motion estimation and image reconstruction (SMEIR) method, previously proposed for cone beam CT, for motion estimation/correction in 4D-PET. Methods: A modified ordered-subset expectation maximization algorithm coupled with total variation minimization (OSEM-TV) is used to obtain a primary motion-compensated PET (pmc-PET) from all projection data, using Demons derived deformation vector fields (DVFs) as the initial estimate. A motion model update is done to obtain an optimal set of DVFs between the pmc-PET and other phases by matching the forward projection of the deformed pmc-PET and measured projections of other phases. Using updated DVFs, the OSEM-TV image reconstruction is repeated and new DVFs are estimated based on the updated images. A 4D XCAT phantom with typical FDG biodistribution and a 10-mm diameter tumor was used to evaluate the performance of the SMEIR algorithm. Results: The image quality of 4D-PET is greatly improved by the SMEIR algorithm. When all projections are used to reconstruct a 3D-PET, motion blurring artifacts are present, leading to a more than 5 times overestimation of the tumor size and a 54% underestimation of the tumor-to-lung contrast ratio. This error is reduced to 37% and 20% for post-reconstruction registration methods and SMEIR, respectively. Conclusion: The SMEIR method can be used for motion estimation/correction in 4D-PET. The statistics are greatly improved since all projection data are combined together to update the image. The performance of the SMEIR algorithm for 4D-PET is sensitive to the smoothness control parameters in the DVF estimation step.
Olsson, Anna; Arlig, Asa; Carlsson, Gudrun Alm; Gustafsson, Agnetha
2007-09-01
The image quality of single photon emission computed tomography (SPECT) depends on the reconstruction algorithm used. The purpose of the present study was to evaluate parameters in ordered subset expectation maximization (OSEM) and to compare it systematically with filtered back-projection (FBP) for reconstruction of regional cerebral blood flow (rCBF) SPECT, incorporating attenuation and scatter correction. The evaluation was based on the trade-off between contrast recovery and statistical noise using different sizes of subsets, numbers of iterations and filter parameters. Monte Carlo simulated SPECT studies of a digital human brain phantom were used. The contrast recovery was calculated as measured contrast divided by true contrast. Statistical noise in the reconstructed images was calculated as the coefficient of variation in pixel values. A constant contrast level was reached above 195 equivalent maximum likelihood expectation maximization iterations. The choice of subset size was not crucial as long as there were ≥2 projections per subset. The OSEM reconstruction was found to give 5-14% higher contrast recovery than FBP for all clinically relevant noise levels in rCBF SPECT. The Butterworth filter, power 6, achieved the highest stable contrast recovery level at all clinically relevant noise levels. The cut-off frequency should be chosen according to the noise level accepted in the image. Trade-off plots are shown to be a practical way of deciding the number of iterations and subset size for the OSEM reconstruction and can be used for other examination types in nuclear medicine.
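A bare-bones numpy sketch of the OSEM update over projection subsets, showing the subset-size and iteration parameters whose trade-off is evaluated here, is given below; the system matrix is a generic placeholder without the attenuation or scatter modelling used in the study.

```python
import numpy as np

def osem(y, P, n_subsets=8, n_iters=4):
    """Ordered-subsets EM (bare-bones sketch).

    y : measured projection counts, shape (n_bins,)
    P : system matrix, shape (n_bins, n_voxels)
    One pass over all subsets is roughly equivalent to n_subsets MLEM iterations,
    so the effective iteration number is ~ n_iters * n_subsets.
    """
    n_bins, n_voxels = P.shape
    subsets = [np.arange(s, n_bins, n_subsets) for s in range(n_subsets)]
    sens = [P[idx].sum(axis=0) for idx in subsets]   # per-subset sensitivity images
    x = np.ones(n_voxels)
    for _ in range(n_iters):
        for idx, s in zip(subsets, sens):
            Ps = P[idx]
            ratio = y[idx] / (Ps @ x + 1e-12)        # measured / expected projections
            x *= (Ps.T @ ratio) / (s + 1e-12)
    return x
```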
Castro, P; Huerga, C; Chamorro, P; Garayoa, J; Roch, M; Pérez, L
2018-04-17
The goals of the study are to characterize imaging properties in 2D PET images reconstructed with the iterative algorithm ordered-subset expectation maximization (OSEM) and to propose a new method for the generation of synthetic images. The noise is analyzed in terms of its magnitude, spatial correlation, and spectral distribution through standard deviation, autocorrelation function, and noise power spectrum (NPS), respectively. Their variations with position and activity level are also analyzed. This noise analysis is based on phantom images acquired from ¹⁸F uniform distributions. Experimental recovery coefficients of hot spheres in different backgrounds are employed to study the spatial resolution of the system through the point spread function (PSF). The NPS and PSF functions provide the baseline for the proposed simulation method: convolution with the PSF as kernel and noise addition from the NPS. The noise spectral analysis shows that the main contribution is of random nature. It is also proven that attenuation correction does not alter noise texture but modifies its magnitude. Finally, synthetic images of 2 phantoms, one of them an anatomical brain, are quantitatively compared with experimental images showing good agreement in terms of pixel values and pixel correlations. Thus, the contrast-to-noise ratio for the biggest sphere in the NEMA IEC phantom is 10.7 for the synthetic image and 8.8 for the experimental image. The properties of the analyzed OSEM-PET images can be described by NPS and PSF functions. Synthetic images, even anatomical ones, are successfully generated by the proposed method based on the NPS and PSF. Copyright © 2018 Sociedad Española de Medicina Nuclear e Imagen Molecular. Published by Elsevier España, S.L.U. All rights reserved.
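The simulation principle described above (PSF convolution plus NPS-shaped noise) can be sketched in a few lines of Python. Here a Gaussian kernel stands in for the measured PSF and the noise amplitude is set by the supplied NPS up to a normalization constant; this is an illustrative sketch, not the published method.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def synth_pet_image(truth, psf_sigma, nps_2d, rng=None):
    """Blur a noise-free object with the PSF, then add noise whose spectrum follows the NPS.

    truth     : noise-free activity image (2D array)
    psf_sigma : Gaussian PSF width in pixels (stand-in for the measured PSF)
    nps_2d    : noise power spectrum sampled on the image grid (same shape as truth)
    """
    rng = np.random.default_rng() if rng is None else rng
    blurred = gaussian_filter(truth, psf_sigma)            # convolution with the PSF kernel
    white = rng.standard_normal(truth.shape)               # uncorrelated unit-variance noise
    shaped = np.fft.ifft2(np.fft.fft2(white) * np.sqrt(nps_2d)).real
    return blurred + shaped                                # PSF-blurred signal + NPS-shaped noise
```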
Koyama, Kazuya; Mitsumoto, Takuya; Shiraishi, Takahiro; Tsuda, Keisuke; Nishiyama, Atsushi; Inoue, Kazumasa; Yoshikawa, Kyosan; Hatano, Kazuo; Kubota, Kazuo; Fukushi, Masahiro
2017-09-01
We aimed to determine the difference in tumor volume associated with the reconstruction model in positron-emission tomography (PET). To reduce the influence of the reconstruction model, we suggested a method to measure the tumor volume using the relative threshold method with a fixed threshold based on the peak standardized uptake value (SUVpeak). The efficacy of our method was verified using ¹⁸F-2-fluoro-2-deoxy-D-glucose PET/computed tomography images of 20 patients with lung cancer. The tumor volume was determined using the relative threshold method with a fixed threshold based on the SUVpeak. The PET data were reconstructed using the ordered-subset expectation maximization (OSEM) model, the OSEM + time-of-flight (TOF) model, and the OSEM + TOF + point-spread function (PSF) model. The volume differences associated with the reconstruction algorithm (%VD) were compared. For comparison, the tumor volume was measured using the relative threshold method based on the maximum SUV (SUVmax). For the OSEM and TOF models, the mean %VD values were -0.06 ± 8.07 and -2.04 ± 4.23% for the fixed 40% threshold according to the SUVmax and the SUVpeak, respectively. The effect of our method in this case seemed to be minor. For the OSEM and PSF models, the mean %VD values were -20.41 ± 14.47 and -13.87 ± 6.59% for the fixed 40% threshold according to the SUVmax and SUVpeak, respectively. Our new method enabled the measurement of tumor volume with a fixed threshold and reduced the influence of the changes in tumor volume associated with the reconstruction model.
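The fixed-threshold idea can be illustrated with a short Python sketch in which SUVpeak is approximated by the maximum of a local mean filter (a small cube standing in for the 1 cm³ sphere of the clinical definition); the function and its parameters are illustrative assumptions.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def tumor_volume_from_suv_peak(suv, voxel_volume_ml, threshold=0.40, peak_size=3):
    """Segment voxels above a fixed fraction of SUVpeak and return their total volume."""
    suv_peak = uniform_filter(suv, size=peak_size).max()   # approximate SUVpeak
    mask = suv >= threshold * suv_peak                     # fixed 40% relative threshold
    return mask.sum() * voxel_volume_ml, suv_peak
```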
Fast GPU-based Monte Carlo code for SPECT/CT reconstructions generates improved 177Lu images.
Rydén, T; Heydorn Lagerlöf, J; Hemmingsson, J; Marin, I; Svensson, J; Båth, M; Gjertsson, P; Bernhardt, P
2018-01-04
Full Monte Carlo (MC)-based SPECT reconstructions have a strong potential for correcting for image degrading factors, but the reconstruction times are long. The objective of this study was to develop a highly parallel Monte Carlo code for fast, ordered subset expectation maximization (OSEM) reconstructions of SPECT/CT images. The MC code was written in the Compute Unified Device Architecture language for a computer with four graphics processing units (GPUs) (GeForce GTX Titan X, Nvidia, USA). This enabled simulations of parallel photon emissions from the voxel matrix (128³ or 256³). Each computed tomography (CT) number was converted to attenuation coefficients for photo absorption, coherent scattering, and incoherent scattering. For photon scattering, the deflection angle was determined by the differential scattering cross sections. An angular response function was developed and used to model the accepted angles for photon interaction with the crystal, and a detector scattering kernel was used for modeling the photon scattering in the detector. Predefined energy and spatial resolution kernels for the crystal were used. The MC code was implemented in the OSEM reconstruction of clinical and phantom ¹⁷⁷Lu SPECT/CT images. The Jaszczak image quality phantom was used to evaluate the performance of the MC reconstruction in comparison with attenuation-corrected (AC) OSEM reconstructions and attenuation-corrected OSEM reconstructions with resolution recovery corrections (RRC). The performance of the MC code was 3200 million photons/s. The required number of photons emitted per voxel to obtain a sufficiently low noise level in the simulated image was 200 for a 128³ voxel matrix. With this number of emitted photons/voxel, the MC-based OSEM reconstruction with ten subsets was performed within 20 s/iteration. The images converged after around six iterations. Therefore, the reconstruction time was around 3 min. The activity recovery for the spheres in the Jaszczak phantom was clearly improved with MC-based OSEM reconstruction, e.g., the activity recovery was 88% for the largest sphere, while it was 66% for AC-OSEM and 79% for RRC-OSEM. The GPU-based MC code generated an MC-based SPECT/CT reconstruction within a few minutes, and reconstructed patient images of ¹⁷⁷Lu-DOTATATE treatments revealed clearly improved resolution and contrast.
PET Image Reconstruction Incorporating 3D Mean-Median Sinogram Filtering
NASA Astrophysics Data System (ADS)
Mokri, S. S.; Saripan, M. I.; Rahni, A. A. Abd; Nordin, A. J.; Hashim, S.; Marhaban, M. H.
2016-02-01
Positron emission tomography (PET) projection data, or sinograms, contain poor statistics and randomness that produce noisy PET images. In order to improve the PET image, we proposed an implementation of pre-reconstruction sinogram filtering based on a 3D mean-median filter. The proposed filter is designed around three aims: to minimise angular blurring artifacts, to smooth flat regions and to preserve edges in the reconstructed PET image. The performance of the pre-reconstruction sinogram filter prior to three established reconstruction methods, namely filtered backprojection (FBP), ordered-subset expectation maximization (OSEM) and OSEM with median root prior (OSEM-MRP), is investigated using a simulated NCAT phantom PET sinogram as generated by the PET Analytical Simulator (ASIM). The improvement in the quality of the reconstructed images with and without sinogram filtering is assessed according to visual as well as quantitative evaluation based on global signal-to-noise ratio (SNR), local SNR, contrast-to-noise ratio (CNR) and edge preservation capability. Further analysis of the achieved improvement is also carried out specific to the iterative OSEM and OSEM-MRP reconstruction methods with and without pre-reconstruction filtering in terms of contrast recovery curve (CRC) versus noise trade-off, normalised mean square error versus iteration, local CNR versus iteration and lesion detectability. Overall, satisfactory results are obtained from both visual and quantitative evaluations.
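The abstract does not give the filter design in detail; the Python sketch below shows one plausible reading of a mean-median hybrid for a sinogram stack, with the neighbourhood spanning radial bins and slices only (to limit angular blurring) and a local-variance test switching between the mean (flat regions) and the median (edges). It should be read as an assumption-laden illustration, not the authors' filter.

```python
import numpy as np
from scipy.ndimage import median_filter, uniform_filter

def mean_median_sinogram_filter(sino, edge_thresh=0.5):
    """Hybrid mean/median filtering of a sinogram stack (radial bins, angles, slices)."""
    size = (3, 1, 3)                                    # no smoothing along the angular axis
    med = median_filter(sino, size=size)
    mean = uniform_filter(sino, size=size)
    local_var = uniform_filter(sino ** 2, size=size) - mean ** 2
    edges = local_var > edge_thresh * local_var.mean()  # crude edge indicator
    return np.where(edges, med, mean)                   # median on edges, mean elsewhere
```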
Phantom experiments to improve parathyroid lesion detection
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nichols, Kenneth J.; Tronco, Gene G.; Tomas, Maria B.
2007-12-15
This investigation tested the hypothesis that visual analysis of iteratively reconstructed tomograms by ordered subset expectation maximization (OSEM) provides the highest accuracy for localizing parathyroid lesions using 99mTc-sestamibi SPECT data. From an Institutional Review Board approved retrospective review of 531 patients evaluated for parathyroid localization, image characteristics were determined for 85 99mTc-sestamibi SPECT studies originally read as equivocal (EQ). Seventy-two plexiglas phantoms using cylindrical simulated lesions were acquired for a clinically realistic range of counts (mean simulated lesion counts of 75±50 counts/pixel) and target-to-background (T:B) ratios (range = 2.0 to 8.0) to determine an optimal filter for OSEM. Two experienced nuclear physicians graded simulated lesions, blinded to whether chambers contained radioactivity or plain water, and two observers used the same scale to read all phantom and clinical SPECT studies, blinded to pathology findings and clinical information. For phantom data and all clinical data, T:B analyses were not statistically different for OSEM versus FB, but visual readings were significantly more accurate than T:B (88±6% versus 68±6%, p=0.001) for OSEM processing, and OSEM was significantly more accurate than FB for visual readings (88±6% versus 58±6%, p<0.0001). These data suggest that visual analysis of iteratively reconstructed MIBI tomograms should be incorporated into imaging protocols performed to localize parathyroid lesions.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yoon, D; Jung, J; Suh, T
2014-06-01
Purpose: To confirm the feasibility of acquiring a three-dimensional single photon emission computed tomography (SPECT) image from boron neutron capture therapy (BNCT) using Monte Carlo simulation. Methods: The pixelated SPECT detector, collimator and phantom were simulated using the Monte Carlo N-Particle eXtended (MCNPX) simulation tool. A thermal neutron source (<1 eV) was used to react with the boron uptake region (BUR) in the phantom. Each geometry had a spherical pattern, and three different BURs (A, B and C regions, density: 2.08 g/cm³) were located in the middle of the brain phantom. The data from 128 projections for each sorting process were used to achieve image reconstruction. The ordered subset expectation maximization (OSEM) reconstruction algorithm was used to obtain a tomographic image with eight subsets and five iterations. Receiver operating characteristic (ROC) curve analysis was used to evaluate the geometric accuracy of the reconstructed image. Results: The OSEM image was compared with the original phantom pattern image. The area under the curve (AUC) was calculated as the gross area under each ROC curve. The three calculated AUC values were 0.738 (A region), 0.623 (B region), and 0.817 (C region). The differences between the center-to-center distances of the boron regions and the distances between the maximum-count points were 0.3 cm, 1.6 cm and 1.4 cm. Conclusion: The possibility of extracting a 3D BNCT SPECT image was confirmed using the Monte Carlo simulation and the OSEM algorithm. The prospects for obtaining an actual BNCT SPECT image were estimated from the quality of the simulated image and the simulation conditions. When multiple tumor regions are to be treated using BNCT, such a model could help BNCT facilities determine how many useful images can be obtained from SPECT. This research was supported by the Leading Foreign Research Institute Recruitment Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Science, Information and Communication Technologies (ICT) and Future Planning (MSIP) (Grant No. 200900420) and the Radiation Technology Research and Development program (Grant No. 2013043498), Republic of Korea.
Absolute quantification of myocardial blood flow with 13N-ammonia and 3-dimensional PET.
Schepis, Tiziano; Gaemperli, Oliver; Treyer, Valerie; Valenta, Ines; Burger, Cyrill; Koepfli, Pascal; Namdar, Mehdi; Adachi, Itaru; Alkadhi, Hatem; Kaufmann, Philipp A
2007-11-01
The aim of this study was to compare 2-dimensional (2D) and 3-dimensional (3D) dynamic PET for the absolute quantification of myocardial blood flow (MBF) with (13)N-ammonia ((13)N-NH(3)). 2D and 3D MBF measurements were collected from 21 patients undergoing cardiac evaluation at rest (n = 14) and during standard adenosine stress (n = 7). A lutetium yttrium oxyorthosilicate-based PET/CT system with retractable septa, enabling the sequential acquisition of 2D and 3D images within the same patient and study, was used. All 2D studies were performed by injecting 700-900 MBq of (13)N-NH(3). For 14 patients, 3D studies were performed with the same injected (13)N-NH(3) dose as that used in 2D studies. For the remaining 7 patients, 3D images were acquired with a lower dose of (13)N-NH(3), that is, 500 MBq. 2D images reconstructed by use of filtered backprojection (FBP) provided the reference standard for MBF measurements. 3D images were reconstructed by use of Fourier rebinning (FORE) with FBP (FORE-FBP), FORE with ordered-subsets expectation maximization (FORE-OSEM), and a reprojection algorithm (RP). Global MBF measurements derived from 3D PET with FORE-FBP (r = 0.97), FORE-OSEM (r = 0.97), and RP (r = 0.97) were well correlated with those derived from 2D FBP (all Ps < 0.0001). The mean +/- SD differences in global MBF measurements between 3D FORE-FBP and 2D FBP and between 3D FORE-OSEM and 2D FBP were 0.01 +/- 0.14 and 0.01 +/- 0.15 mL/min/g, respectively. The mean +/- SD difference in global MBF measurements between 3D RP and 2D FBP was 0.00 +/- 0.16 mL/min/g. The best correlation between 2D PET and 3D PET performed with the lower injected activity was found for the 3D FORE-FBP reconstruction algorithm (r = 0.95, P < 0.001). For this scanner type, quantitative measurements of MBF with 3D PET and (13)N-NH(3) were in excellent agreement with those obtained with the 2D technique, even when a lower activity was injected.
Rogasch, Julian Mm; Hofheinz, Frank; Lougovski, Alexandr; Furth, Christian; Ruf, Juri; Großer, Oliver S; Mohnike, Konrad; Hass, Peter; Walke, Mathias; Amthauer, Holger; Steffen, Ingo G
2014-12-01
F18-fluorodeoxyglucose positron-emission tomography (FDG-PET) reconstruction algorithms can have substantial influence on quantitative image data used, e.g., for therapy planning or monitoring in oncology. We analyzed radial activity concentration profiles of differently reconstructed FDG-PET images to determine the influence of varying signal-to-background ratios (SBRs) on the respective spatial resolution, activity concentration distribution, and quantification (standardized uptake value [SUV], metabolic tumor volume [MTV]). Measurements were performed on a Siemens Biograph mCT 64 using a cylindrical phantom containing four spheres (diameter, 30 to 70 mm) filled with F18-FDG applying three SBRs (SBR1, 16:1; SBR2, 6:1; SBR3, 2:1). Images were reconstructed employing six algorithms (filtered backprojection [FBP], FBP + time-of-flight analysis [FBP + TOF], 3D-ordered subset expectation maximization [3D-OSEM], 3D-OSEM + TOF, point spread function [PSF], PSF + TOF). Spatial resolution was determined by fitting the convolution of the object geometry with a Gaussian point spread function to radial activity concentration profiles. MTV delineation was performed using fixed thresholds and semiautomatic background-adapted thresholding (ROVER, ABX, Radeberg, Germany). The pairwise Wilcoxon test revealed significantly higher spatial resolutions for PSF + TOF (up to 4.0 mm) compared to PSF, FBP, FBP + TOF, 3D-OSEM, and 3D-OSEM + TOF at all SBRs (each P < 0.05) with the highest differences for SBR1 decreasing to the lowest for SBR3. Edge elevations in radial activity profiles (Gibbs artifacts) were highest for PSF and PSF + TOF declining with decreasing SBR (PSF + TOF largest sphere; SBR1, 6.3%; SBR3, 2.7%). These artifacts induce substantial SUVmax overestimation compared to the reference SUV for PSF algorithms at SBR1 and SBR2 leading to substantial MTV underestimation in threshold-based segmentation. In contrast, both PSF algorithms provided the lowest deviation of SUVmean from reference SUV at SBR1 and SBR2. At high contrast, the PSF algorithms provided the highest spatial resolution and lowest SUVmean deviation from the reference SUV. In contrast, both algorithms showed the highest deviations in SUVmax and threshold-based MTV definition. At low contrast, all investigated reconstruction algorithms performed approximately equally. The use of PSF algorithms for quantitative PET data, e.g., for target volume definition or in serial PET studies, should be performed with caution - especially if comparing SUV of lesions with high and low contrasts.
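The resolution estimate described above (fitting the convolution of the object geometry with a Gaussian point spread function to a radial activity profile) can be sketched in 1D as follows; the sphere is modelled as a step profile and the fit uses SciPy's curve_fit. This is a simplified 1D stand-in for the full procedure, with all names and starting values assumed.

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d
from scipy.optimize import curve_fit

def fit_radial_resolution(r, profile, sphere_radius, dr):
    """Fit a Gaussian-blurred step profile to a measured radial activity profile.

    r       : radial sample positions (mm), evenly spaced with step dr
    profile : measured activity concentration at each radius
    Returns (sigma_mm, activity_inside, activity_background).
    """
    def model(r, sigma, a_in, a_bg):
        ideal = np.where(r <= sphere_radius, a_in, a_bg)      # object geometry
        return gaussian_filter1d(ideal, abs(sigma) / dr)       # convolve with Gaussian PSF
    p0 = (2.0, profile.max(), profile.min())
    popt, _ = curve_fit(model, r, profile, p0=p0)
    return popt
```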
DOE Office of Scientific and Technical Information (OSTI.GOV)
Merlin, Thibaut, E-mail: thibaut.merlin@telecom-bretagne.eu; Visvikis, Dimitris; Fernandez, Philippe
2015-02-15
Purpose: Partial volume effect (PVE) plays an important role in both qualitative and quantitative PET image accuracy, especially for small structures. A previously proposed voxelwise PVE correction method applied on PET reconstructed images involves the use of Lucy–Richardson deconvolution incorporating wavelet-based denoising to limit the associated propagation of noise. The aim of this study is to incorporate the deconvolution, coupled with the denoising step, directly inside the iterative reconstruction process to further improve PVE correction. Methods: The list-mode ordered subset expectation maximization (OSEM) algorithm has been modified accordingly with the application of the Lucy–Richardson deconvolution algorithm to the current estimate of the image at each reconstruction iteration. Acquisitions of the NEMA NU2-2001 IQ phantom were performed on a GE DRX PET/CT system to study the impact of incorporating the deconvolution inside the reconstruction [with and without the point spread function (PSF) model] in comparison to its application postreconstruction and to standard iterative reconstruction incorporating the PSF model. The impact of the denoising step was also evaluated. Images were semiquantitatively assessed by studying the trade-off between the intensity recovery and the noise level in the background, estimated as relative standard deviation. Qualitative assessments of the developed methods were additionally performed on clinical cases. Results: Incorporating the deconvolution without denoising within the reconstruction achieved superior intensity recovery in comparison to both standard OSEM reconstruction integrating a PSF model and application of the deconvolution algorithm in a postreconstruction process. The addition of the denoising step made it possible to limit the SNR degradation while preserving the intensity recovery. Conclusions: This study demonstrates the feasibility of incorporating the Lucy–Richardson deconvolution associated with a wavelet-based denoising in the reconstruction process to better correct for PVE. Future work includes further evaluations of the proposed method on clinical datasets and the use of improved PSF models.
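The structure of the modified algorithm (deconvolution applied to the current estimate at every iteration) is sketched below in Python with a hand-rolled Richardson–Lucy step and a Gaussian stand-in for the PSF; the wavelet-based denoising of the original method is omitted, so this outlines only where the deconvolution sits, not the published implementation.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def richardson_lucy(img, psf_sigma, n_iter=5):
    """Plain Richardson-Lucy deconvolution with a (symmetric) Gaussian PSF stand-in."""
    est = img.copy()
    for _ in range(n_iter):
        blurred = gaussian_filter(est, psf_sigma) + 1e-12
        est *= gaussian_filter(img / blurred, psf_sigma)   # symmetric PSF: transpose = forward
    return est

def osem_with_deconvolution(y, A, subsets, shape, psf_sigma, n_iter=4):
    """OSEM loop with the deconvolution applied to the current image estimate each iteration."""
    x = np.ones(A.shape[1])
    for _ in range(n_iter):
        for s in subsets:
            As = A[s]
            ratio = y[s] / (As @ x + 1e-12)
            x = x * (As.T @ ratio) / (As.sum(axis=0) + 1e-12)    # EM subset update
        x = richardson_lucy(x.reshape(shape), psf_sigma).ravel() # in-loop deconvolution
    return x.reshape(shape)
```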
NASA Astrophysics Data System (ADS)
Merlin, Thibaut; Visvikis, Dimitris; Fernandez, Philippe; Lamare, Frédéric
2018-02-01
Respiratory motion reduces both the qualitative and quantitative accuracy of PET images in oncology. This impact is more significant for quantitative applications based on kinetic modeling, where dynamic acquisitions are associated with limited statistics due to the necessity of enhanced temporal resolution. The aim of this study is to address these drawbacks, by combining a respiratory motion correction approach with temporal regularization in a unique reconstruction algorithm for dynamic PET imaging. Elastic transformation parameters for the motion correction are estimated from the non-attenuation-corrected PET images. The derived displacement matrices are subsequently used in a list-mode based OSEM reconstruction algorithm integrating a temporal regularization between the 3D dynamic PET frames, based on temporal basis functions. These functions are simultaneously estimated at each iteration, along with their relative coefficients for each image voxel. Quantitative evaluation has been performed using dynamic FDG PET/CT acquisitions of lung cancer patients acquired on a GE DRX system. The performance of the proposed method is compared with that of a standard multi-frame OSEM reconstruction algorithm. The proposed method achieved substantial improvements in terms of noise reduction while accounting for loss of contrast due to respiratory motion. Results on simulated data showed that the proposed 4D algorithms led to bias reduction values up to 40% in both tumor and blood regions for similar standard deviation levels, in comparison with a standard 3D reconstruction. Patlak parameter estimations on reconstructed images with the proposed reconstruction methods resulted in 30% and 40% bias reduction in the tumor and lung region respectively for the Patlak slope, and a 30% bias reduction for the intercept in the tumor region (a similar Patlak intercept was achieved in the lung area). Incorporation of the respiratory motion correction using an elastic model along with a temporal regularization in the reconstruction process of the PET dynamic series led to substantial quantitative improvements and motion artifact reduction. Future work will include the integration of a linear FDG kinetic model, in order to directly reconstruct parametric images.
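Since the evaluation relies on Patlak parameters, a compact reminder of how the Patlak slope and intercept are obtained from time-activity curves may help; the sketch below uses a simple trapezoidal integral of the plasma input and a linear fit over the late frames. Variable names and the start frame are illustrative, not part of the paper.

```python
import numpy as np

def patlak_fit(tissue_tac, plasma_tac, times, t_star_idx):
    """Patlak slope (Ki) and intercept from tissue and plasma time-activity curves.

    times      : frame mid-times
    t_star_idx : first frame index of the linear (steady-state) phase
    """
    cum_plasma = np.concatenate(([0.0], np.cumsum(
        0.5 * (plasma_tac[1:] + plasma_tac[:-1]) * np.diff(times))))
    x = cum_plasma / (plasma_tac + 1e-12)        # normalised integrated plasma activity
    y = tissue_tac / (plasma_tac + 1e-12)        # normalised tissue activity
    slope, intercept = np.polyfit(x[t_star_idx:], y[t_star_idx:], 1)
    return slope, intercept
```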
NASA Astrophysics Data System (ADS)
Lartizien, Carole; Kinahan, Paul E.; Comtat, Claude; Lin, Michael; Swensson, Richard G.; Trebossen, Regine; Bendriem, Bernard
2000-04-01
This work presents initial results from observer detection performance studies using the same volume visualization software tools that are used in clinical PET oncology imaging. Research into the FORE+OSEM and FORE+AWOSEM statistical image reconstruction methods tailored to whole-body 3D PET oncology imaging has indicated potential improvements in image SNR compared to currently used analytic reconstruction methods (FBP). To assess the resulting impact of these reconstruction methods on the performance of human observers in detecting and localizing tumors, we use a non-Monte Carlo technique to generate multiple statistically accurate realizations of 3D whole-body PET data, based on an extended MCAT phantom and with clinically realistic levels of statistical noise. For each realization, we add a fixed number of randomly located 1 cm diam. lesions whose contrast is varied among pre-calibrated values so that the range of true positive fractions is well sampled. The observer is told the number of tumors and, similar to the AFROC method, asked to localize all of them. The true positive fraction for the three algorithms (FBP, FORE+OSEM, FORE+AWOSEM) as a function of lesion contrast is calculated, although other protocols could be compared. A confidence level for each tumor is also recorded for incorporation into later AFROC analysis.
Sowa-Staszczak, Anna; Lenda-Tracz, Wioletta; Tomaszuk, Monika; Głowa, Bogusław; Hubalewska-Dydejczyk, Alicja
2013-01-01
Somatostatin receptor scintigraphy (SRS) is a useful tool in the assessment of GEP-NET (gastroenteropancreatic neuroendocrine tumor) patients. The choice of appropriate settings of image reconstruction parameters is crucial in the interpretation of these images. The aim of the study was to investigate how the GEP-NET lesion signal-to-noise ratio (TCS/TCB) depends on different reconstruction settings for Flash 3D software (Siemens). SRS results of 76 randomly selected patients with confirmed GEP-NET were analyzed. For SPECT studies the data were acquired using standard clinical settings 3-4 h after the injection of 740 MBq 99mTc-[EDDA/HYNIC] octreotate. To obtain the final images, the OSEM 3D Flash reconstruction with different settings and FBP reconstruction were used. First, the TCS/TCB ratio in voxels was analyzed for different combinations of the number of subsets and the number of iterations of the OSEM 3D Flash reconstruction. Secondly, the same ratio was analyzed for different parameters of the Gaussian filter (with FWHM 2-4 times the pixel size). The influence of scatter correction on the TCS/TCB ratio was also investigated. With an increasing number of subsets and iterations, an increase of the TCS/TCB ratio was observed. With increasing Gaussian filter FWHM, a decrease of the TCS/TCB ratio was observed. The use of scatter correction slightly decreases the values of this ratio. The OSEM algorithm provides a meaningfully better reconstruction of the SRS SPECT study as compared to the FBP technique. A high number of subsets improves image quality (images are smoother). An increasing number of iterations gives better contrast, and the shapes of lesions and organs are sharper. The choice of reconstruction parameters is a compromise between the qualitative appearance of the image and its quantitative accuracy and should not be modified when comparing multiple studies of the same patient.
Progress in SPECT/CT imaging of prostate cancer.
Seo, Youngho; Franc, Benjamin L; Hawkins, Randall A; Wong, Kenneth H; Hasegawa, Bruce H
2006-08-01
Prostate cancer is the most common type of cancer (other than skin cancer) among men in the United States. Although prostate cancer is one of the few cancers that grow so slowly that it may never threaten the lives of some patients, it can be lethal once metastasized. Indium-111 capromab pendetide (ProstaScint, Cytogen Corporation, Princeton, NJ) imaging is indicated for staging and recurrence detection of the disease, and is particularly useful to determine whether or not the disease has spread to distant metastatic sites. However, the interpretation of 111In-capromab pendetide is challenging without correlated structural information mostly because the radiopharmaceutical demonstrates nonspecific uptake in the normal vasculature, bowel, bone marrow, and the prostate gland. We developed an improved method of imaging and localizing 111In-Capromab pendetide using a SPECT/CT imaging system. The specific goals included: i) development and application of a novel iterative SPECT reconstruction algorithm that utilizes a priori information from coregistered CT; and ii) assessment of clinical impact of adding SPECT/CT for prostate cancer imaging with capromab pendetide utilizing the standard and novel reconstruction techniques. Patient imaging studies with capromab pendetide were performed from 1999 to 2004 using two different SPECT/CT scanners, a prototype SPECT/CT system and a commercial SPECT/CT system (Discovery VH, GE Healthcare, Waukesha, WI). SPECT projection data from both systems were reconstructed using an experimental iterative algorithm that compensates for both photon attenuation and collimator blurring. In addition, the data obtained from the commercial system were reconstructed with attenuation correction using an OSEM reconstruction supplied by the camera manufacturer for routine clinical interpretation. For 12 sets of patient data, SPECT images reconstructed using the experimental algorithm were interpreted separately and compared with interpretation of images obtained using the standard reconstruction technique. The experimental reconstruction algorithm improved spatial resolution, reduced streak artifacts, and yielded a better correlation with anatomic details of CT in comparison to conventional reconstruction methods (e.g., filtered back-projection or OSEM with attenuation correction only). Images produced with the experimental algorithm produced a subjective improvement in the confidence of interpretation for 11 of 12 studies. There were also changes in interpretations for 4 of 12 studies although the changes were not sufficient to alter prognosis or the patient treatment plan.
Regier, Michael D; Moodie, Erica E M
2016-05-01
We propose an extension of the EM algorithm that exploits the common assumption of unique parameterization, corrects for biases due to missing data and measurement error, converges for the specified model when standard implementation of the EM algorithm has a low probability of convergence, and reduces a potentially complex algorithm into a sequence of smaller, simpler, self-contained EM algorithms. We use the theory surrounding the EM algorithm to derive the theoretical results of our proposal, showing that an optimal solution over the parameter space is obtained. A simulation study is used to explore the finite sample properties of the proposed extension when there is missing data and measurement error. We observe that partitioning the EM algorithm into simpler steps may provide better bias reduction in the estimation of model parameters. The ability to break down a complicated problem into a series of simpler, more accessible problems will permit a broader implementation of the EM algorithm, permit the use of software packages that now implement and/or automate the EM algorithm, and make the EM algorithm more accessible to a wider and more general audience.
Using an external gating signal to estimate noise in PET with an emphasis on tracer avid tumors
NASA Astrophysics Data System (ADS)
Schmidtlein, C. R.; Beattie, B. J.; Bailey, D. L.; Akhurst, T. J.; Wang, W.; Gönen, M.; Kirov, A. S.; Humm, J. L.
2010-10-01
The purpose of this study is to establish and validate a methodology for estimating the standard deviation of voxels with large activity concentrations within a PET image using replicate imaging that is immediately available for use in the clinic. To do this, ensembles of voxels in the averaged replicate images were compared to the corresponding ensembles in images derived from summed sinograms. In addition, the replicate imaging noise estimate was compared to a noise estimate based on an ensemble of voxels within a region. To make this comparison two phantoms were used. The first phantom was a seven-chamber phantom constructed of 1 liter plastic bottles. Each chamber of this phantom was filled with a different activity concentration relative to the lowest activity concentration with ratios of 1:1, 1:1, 2:1, 2:1, 4:1, 8:1 and 16:1. The second phantom was a GE Well-Counter phantom. These phantoms were imaged and reconstructed on a GE DSTE PET/CT scanner with 2D and 3D reprojection filtered backprojection (FBP), and with 2D- and 3D-ordered subset expectation maximization (OSEM). A series of tests were applied to the resulting images that showed that the region and replicate imaging methods for estimating standard deviation were equivalent for backprojection reconstructions. Furthermore, the noise properties of the FBP algorithms allowed scaling the replicate estimates of the standard deviation by a factor of 1/√N, where N is the number of replicate images, to obtain the standard deviation of the full data image. This was not the case for OSEM image reconstruction. Due to nonlinearity of the OSEM algorithm, the noise is shown to be both position and activity concentration dependent in such a way that no simple scaling factor can be used to extrapolate noise as a function of counts. The use of the Well-Counter phantom contributed to the development of a heuristic extrapolation of the noise as a function of radius in FBP. In addition, the signal-to-noise ratio for high uptake objects was confirmed to be higher with backprojection image reconstruction methods. These techniques were applied to several patient data sets acquired in either 2D or 3D mode, with ¹⁸F (FLT and FDG). Images of the standard deviation and signal-to-noise ratios were constructed and the standard deviations of the tumors' uptake were determined. Finally, a radial noise extrapolation relationship deduced in this paper was applied to patient data.
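The replicate estimate and the 1/√N extrapolation discussed above amount to a few array operations; the Python sketch below assumes the replicate reconstructions are stacked along the first axis and, as the study cautions, the scaling step is only meaningful for linear reconstructions such as FBP.

```python
import numpy as np

def replicate_noise(replicates):
    """Voxelwise noise from N replicate images stacked on axis 0.

    Returns the single-replicate standard deviation and its 1/sqrt(N) scaling,
    which approximates the full-data noise for linear (FBP) reconstructions only.
    """
    n = replicates.shape[0]
    std_replicate = replicates.std(axis=0, ddof=1)
    std_full_data = std_replicate / np.sqrt(n)
    return std_replicate, std_full_data
```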
NASA Astrophysics Data System (ADS)
Visser, Eric P.; Disselhorst, Jonathan A.; van Lier, Monique G. J. T. B.; Laverman, Peter; de Jong, Gabie M.; Oyen, Wim J. G.; Boerman, Otto C.
2011-02-01
The image reconstruction algorithms provided with the Siemens Inveon small-animal PET scanner are filtered backprojection (FBP), 3-dimensional reprojection (3DRP), ordered subset expectation maximization in 2 or 3 dimensions (OSEM2D/3D) and maximum a posteriori (MAP) reconstruction. This study aimed at optimizing the reconstruction parameter settings with regard to image quality (IQ) as defined by the NEMA NU 4-2008 standards. The NEMA NU 4-2008 image quality phantom was used to determine image noise, expressed as percentage standard deviation in the uniform phantom region (%STDunif), activity recovery coefficients for the FDG-filled rods (RCrod), and spill-over ratios for the non-radioactive water- and air-filled phantom compartments (SORwat and SORair). Although not required by NEMA NU 4, we also determined a contrast-to-noise ratio for each rod (CNRrod), expressing the trade-off between activity recovery and image noise. For FBP and 3DRP the cut-off frequency of the applied filters, and for OSEM2D and OSEM3D, the number of iterations was varied. For MAP, the "smoothing parameter" β and the type of uniformity constraint (variance or resolution) were varied. Results of these analyses were demonstrated in images of an FDG-injected rat showing tumours in the liver, and of a mouse injected with an ¹⁸F-labeled peptide, showing a small subcutaneous tumour and the cortex structure of the kidneys. Optimum IQ in terms of CNRrod for the small-diameter rods was obtained using MAP with uniform variance and β=0.4. This setting led to RCrod,1mm = 0.21, RCrod,2mm = 0.57, %STDunif = 1.38, SORwat = 0.0011, and SORair = 0.00086. However, the highest activity recovery for the smallest rods with still very small %STDunif was obtained using β=0.075, for which these IQ parameters were 0.31, 0.74, 2.67, 0.0041, and 0.0030, respectively. The different settings of reconstruction parameters were clearly reflected in the rat and mouse images as the trade-off between the recovery of small structures (blood vessels, small tumours, kidney cortex structure) and image noise in homogeneous body parts (healthy liver background). Highest IQ for the Inveon PET scanner was obtained using MAP reconstruction with uniform variance. The setting of β depended on the specific imaging goals.
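For reference, the reported figures of merit reduce to simple region statistics once the phantom masks are defined; the Python sketch below computes simplified versions of them (the rod recovery here uses region means rather than the NEMA NU 4 line-profile maxima), with all mask names assumed.

```python
import numpy as np

def image_quality_metrics(img, rod_masks, uniform_mask, water_mask, air_mask):
    """Simplified NEMA NU 4-style figures: %STDunif, SORwat, SORair, RCrod, CNRrod."""
    mean_unif = img[uniform_mask].mean()
    std_unif = img[uniform_mask].std()
    metrics = {"%STD_unif": 100.0 * std_unif / mean_unif,
               "SOR_wat": img[water_mask].mean() / mean_unif,
               "SOR_air": img[air_mask].mean() / mean_unif}
    for name, mask in rod_masks.items():
        rod_mean = img[mask].mean()
        metrics[f"RC_{name}"] = rod_mean / mean_unif                 # simplified recovery coefficient
        metrics[f"CNR_{name}"] = (rod_mean - mean_unif) / std_unif   # one plausible CNR definition
    return metrics
```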
NASA Astrophysics Data System (ADS)
O'Connor, J. Michael; Pretorius, P. Hendrik; Gifford, Howard C.; Licho, Robert; Joffe, Samuel; McGuiness, Matthew; Mehurg, Shannon; Zacharias, Michael; Brankov, Jovan G.
2012-02-01
Our previous Single Photon Emission Computed Tomography (SPECT) myocardial perfusion imaging (MPI) research explored the utility of numerical observers. We recently created two hundred and eighty simulated SPECT cardiac cases using Dynamic MCAT (DMCAT) and SIMIND Monte Carlo tools. All simulated cases were then processed with two reconstruction methods: iterative ordered subset expectation maximization (OSEM) and filtered back-projection (FBP). Observer study sets were assembled for both OSEM and FBP methods. Five physicians performed an observer study on one hundred and seventy-nine images from the simulated cases. The observer task was to indicate detection of any myocardial perfusion defect using the American Society of Nuclear Cardiology (ASNC) 17-segment cardiac model and the ASNC five-scale rating guidelines. Human observer Receiver Operating Characteristic (ROC) studies established the guidelines for the subsequent evaluation of numerical model observer (NO) performance. Several NOs were formulated and their performance was compared with the human observer performance. One type of NO was based on evaluation of a cardiac polar map that had been pre-processed using a gradient-magnitude watershed segmentation algorithm. The second type of NO was also based on analysis of a cardiac polar map but with use of a priori calculated average image derived from an ensemble of normal cases.
NASA Astrophysics Data System (ADS)
Kim, Ji Hye; Ahn, Il Jun; Nam, Woo Hyun; Ra, Jong Beom
2015-02-01
Positron emission tomography (PET) images usually suffer from a noticeable amount of statistical noise. In order to reduce this noise, a post-filtering process is usually adopted. However, the performance of this approach is limited because the denoising process is mostly performed on the basis of the Gaussian random noise. It has been reported that in a PET image reconstructed by the expectation-maximization (EM), the noise variance of each voxel depends on its mean value, unlike in the case of Gaussian noise. In addition, we observe that the variance also varies with the spatial sensitivity distribution in a PET system, which reflects both the solid angle determined by a given scanner geometry and the attenuation information of a scanned object. Thus, if a post-filtering process based on the Gaussian random noise is applied to PET images without consideration of the noise characteristics along with the spatial sensitivity distribution, the spatially variant non-Gaussian noise cannot be reduced effectively. In the proposed framework, to effectively reduce the noise in PET images reconstructed by the 3-D ordinary Poisson ordered subset EM (3-D OP-OSEM), we first denormalize an image according to the sensitivity of each voxel so that the voxel mean value can represent its statistical properties reliably. Based on our observation that each noisy denormalized voxel has a linear relationship between the mean and variance, we try to convert this non-Gaussian noise image to a Gaussian noise image. We then apply a block matching 4-D algorithm that is optimized for noise reduction of the Gaussian noise image, and reconvert and renormalize the result to obtain a final denoised image. Using simulated phantom data and clinical patient data, we demonstrate that the proposed framework can effectively suppress the noise over the whole region of a PET image while minimizing degradation of the image resolution.
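The processing chain described above can be outlined in Python as follows, with the reported linear mean-variance relation handled by a square-root variance-stabilizing transform and plain Gaussian smoothing standing in for the block-matching 4D filter; the slope parameter and the smoothing stage are assumptions for illustration only.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def denoise_op_osem_image(img, sensitivity, slope, smooth_sigma=1.0):
    """Denormalize, variance-stabilize, denoise, invert, and renormalize a PET image."""
    denorm = img / (sensitivity + 1e-12)                         # remove spatial sensitivity weighting
    stabilized = 2.0 * np.sqrt(np.maximum(denorm, 0.0) / slope)  # approx. unit-variance noise
    denoised = gaussian_filter(stabilized, smooth_sigma)         # stand-in for the BM4D step
    restored = slope * (denoised / 2.0) ** 2                     # invert the stabilizing transform
    return restored * sensitivity                                # renormalize
```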
Yamaguchi, Shotaro; Wagatsuma, Kei; Miwa, Kenta; Ishii, Kenji; Inoue, Kazumasa; Fukushi, Masahiro
2018-03-01
The Bayesian penalized-likelihood reconstruction algorithm (BPL), Q.Clear, uses a relative difference penalty as a regularization function to control image noise and the degree of edge-preservation in PET images. The present study aimed to determine the suppression of edge artifacts due to point-spread-function (PSF) correction when using Q.Clear. Spheres of a cylindrical phantom contained a background of 5.3 kBq/mL of [¹⁸F]FDG and sphere-to-background ratios (SBR) of 16, 8, 4 and 2. The background also contained water and spheres containing 21.2 kBq/mL of [¹⁸F]FDG as non-background. All data were acquired using a Discovery PET/CT 710 and were reconstructed using three-dimensional ordered-subset expectation maximization with time-of-flight (TOF) and PSF correction (3D-OSEM), and Q.Clear with TOF (BPL). We investigated β-values of 200-800 using BPL. The PET images were analyzed using visual assessment and profile curves; edge variability and contrast recovery coefficients were measured. The 38- and 27-mm spheres were surrounded by higher radioactivity concentration when reconstructed with 3D-OSEM as opposed to BPL, which suppressed edge artifacts. Images of 10-mm spheres had sharper overshoot at high SBR and non-background when reconstructed with BPL. Although contrast recovery coefficients of 10-mm spheres in BPL decreased as a function of increasing β, a higher penalty parameter decreased the overshoot. BPL is a feasible method for the suppression of edge artifacts of PSF correction, although this depends on SBR and sphere size. Overshoot associated with BPL caused overestimation in small spheres at high SBR. A higher penalty parameter in BPL can suppress overshoot more effectively. Copyright © 2018 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.
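The regularizer behind Q.Clear is the relative difference penalty, whose published form can be evaluated as in the short Python sketch below (4-connected neighbours on a 2D image); the β and γ values shown are placeholders, and the sketch evaluates only the penalty term, not the full BPL reconstruction.

```python
import numpy as np

def relative_difference_penalty(img, beta=350.0, gamma=2.0, eps=1e-9):
    """Relative difference penalty over nearest-neighbour pairs of a 2D image."""
    total = 0.0
    for axis in (0, 1):
        a = np.moveaxis(img, axis, 0)
        diff = a[1:] - a[:-1]
        s = a[1:] + a[:-1]
        total += np.sum(diff ** 2 / (s + gamma * np.abs(diff) + eps))
    return beta * total
```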
NOTE: Acceleration of Monte Carlo-based scatter compensation for cardiac SPECT
NASA Astrophysics Data System (ADS)
Sohlberg, A.; Watabe, H.; Iida, H.
2008-07-01
Single photon emission computed tomography (SPECT) images are degraded by photon scatter, making scatter compensation essential for accurate reconstruction. Reconstruction-based scatter compensation with Monte Carlo (MC) modelling of scatter shows promise for accurate scatter correction, but it is normally hampered by long computation times. The aim of this work was to accelerate the MC-based scatter compensation using coarse grid and intermittent scatter modelling. The acceleration methods were compared to an un-accelerated implementation using MC-simulated projection data of the mathematical cardiac torso (MCAT) phantom modelling 99mTc uptake and clinical myocardial perfusion studies. The results showed that, when combined, the acceleration methods reduced the reconstruction time for 10 ordered subset expectation maximization (OS-EM) iterations from 56 to 11 min without a significant reduction in image quality, indicating that the coarse grid and intermittent scatter modelling are suitable for MC-based scatter compensation in cardiac SPECT.
Huda, Shamsul; Yearwood, John; Togneri, Roberto
2009-02-01
This paper attempts to overcome the tendency of the expectation-maximization (EM) algorithm to locate a local rather than global maximum when applied to estimate the hidden Markov model (HMM) parameters in speech signal modeling. We propose a hybrid algorithm for estimation of the HMM in automatic speech recognition (ASR) using a constraint-based evolutionary algorithm (EA) and EM, the CEL-EM. The novelty of our hybrid algorithm (CEL-EM) is that it is applicable for estimation of the constraint-based models with many constraints and large numbers of parameters (which use EM) like HMM. Two constraint-based versions of the CEL-EM with different fusion strategies have been proposed using a constraint-based EA and the EM for better estimation of HMM in ASR. The first one uses a traditional constraint-handling mechanism of EA. The other version transforms a constrained optimization problem into an unconstrained problem using Lagrange multipliers. Fusion strategies for the CEL-EM use a staged-fusion approach where EM has been plugged with the EA periodically after the execution of EA for a specific period of time to maintain the global sampling capabilities of EA in the hybrid algorithm. A variable initialization approach (VIA) has been proposed using a variable segmentation to provide a better initialization for EA in the CEL-EM. Experimental results on the TIMIT speech corpus show that CEL-EM obtains higher recognition accuracies than the traditional EM algorithm as well as a top-standard EM (VIA-EM, constructed by applying the VIA to EM).
NASA Astrophysics Data System (ADS)
Morrow, Andrew N.; Matthews, Kenneth L., II; Bujenovic, Steven
2008-03-01
Positron emission tomography (PET) and computed tomography (CT) together are a powerful diagnostic tool, but imperfect image quality allows false positive and false negative diagnoses to be made by any observer despite experience and training. This work investigates the effects of PET acquisition mode, reconstruction method and a standard uptake value (SUV) correction scheme on the classification of lesions as benign or malignant in PET/CT images, in an anthropomorphic phantom. The scheme accounts for partial volume effect (PVE) and PET resolution. The observer draws a region of interest (ROI) around the lesion using the CT dataset. A simulated homogeneous PET lesion of the same shape as the drawn ROI is blurred with the point spread function (PSF) of the PET scanner to estimate the PVE, providing a scaling factor to produce a corrected SUV. Computer simulations showed that the accuracy of the corrected PET values depends on variations in the CT-drawn boundary and the position of the lesion with respect to the PET image matrix, especially for smaller lesions. Correction accuracy was affected slightly by mismatch of the simulation PSF and the actual scanner PSF. The receiver operating characteristic (ROC) study resulted in several observations. Using observer drawn ROIs, scaled tumor-background ratios (TBRs) more accurately represented actual TBRs than unscaled TBRs. For the PET images, 3D OSEM outperformed 2D OSEM, 3D OSEM outperformed 3D FBP, and 2D OSEM outperformed 2D FBP. The correction scheme significantly increased sensitivity and slightly increased accuracy for all acquisition and reconstruction modes at the cost of a small decrease in specificity.
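The SUV correction scheme described above can be sketched as follows in Python: the CT-drawn ROI becomes a unit-intensity lesion, blurring it with the scanner PSF gives the fraction of signal retained inside the ROI, and that fraction rescales the measured SUV. The Gaussian PSF and the function name are illustrative assumptions, not the authors' code.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def pve_corrected_suv(measured_suv, roi_mask, psf_sigma_vox):
    """Scale a measured SUV by the recovery factor of a PSF-blurred unit lesion.

    roi_mask      : boolean lesion ROI drawn on the CT dataset
    psf_sigma_vox : scanner PSF (Gaussian sigma) in voxel units
    """
    lesion = roi_mask.astype(float)                    # homogeneous unit-intensity lesion
    blurred = gaussian_filter(lesion, psf_sigma_vox)   # simulate PET partial volume blurring
    recovery = blurred[roi_mask].mean()                # fraction of signal kept inside the ROI
    return measured_suv / recovery
```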
Unsupervised Cryo-EM Data Clustering through Adaptively Constrained K-Means Algorithm
Xu, Yaofang; Wu, Jiayi; Yin, Chang-Cheng; Mao, Youdong
2016-01-01
In single-particle cryo-electron microscopy (cryo-EM), the K-means clustering algorithm is widely used in unsupervised 2D classification of projection images of biological macromolecules. 3D ab initio reconstruction requires accurate unsupervised classification in order to separate molecular projections of distinct orientations. Due to background noise in single-particle images and uncertainty of molecular orientations, the traditional K-means clustering algorithm may classify images into wrong classes and produce classes with a large variation in membership. Overcoming these limitations requires further development of clustering algorithms for cryo-EM data analysis. We propose a novel unsupervised data clustering method building upon the traditional K-means algorithm. By introducing an adaptive constraint term in the objective function, our algorithm not only avoids a large variation in class sizes but also produces more accurate data clustering. Applications of this approach to both simulated and experimental cryo-EM data demonstrate that our algorithm is a significantly improved alternative to the traditional K-means algorithm in single-particle cryo-EM analysis. PMID:27959895
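The idea of penalizing uneven class sizes inside K-means can be illustrated with the small Python sketch below, where the assignment cost is the squared distance plus a term growing with the current class size; this is a generic size-balancing stand-in, not the adaptive constraint of the published algorithm.

```python
import numpy as np

def size_balanced_kmeans(X, k, n_iter=50, alpha=0.1, rng=None):
    """K-means with a soft penalty on oversized clusters (illustrative only)."""
    rng = np.random.default_rng() if rng is None else rng
    n = X.shape[0]
    centers = X[rng.choice(n, k, replace=False)].astype(float)
    labels = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2).argmin(axis=1)
    for _ in range(n_iter):
        counts = np.bincount(labels, minlength=k).astype(float)
        d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
        cost = d2 + alpha * d2.mean() * counts / (n / k)   # penalize already-large clusters
        labels = cost.argmin(axis=1)
        for j in range(k):
            members = X[labels == j]
            if len(members):
                centers[j] = members.mean(axis=0)
    return labels, centers
```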
Application of the EM algorithm to radiographic images.
Brailean, J C; Little, D; Giger, M L; Chen, C T; Sullivan, B J
1992-01-01
The expectation maximization (EM) algorithm has received considerable attention in the area of positron emission tomography (PET) as a restoration and reconstruction technique. In this paper, the restoration capabilities of the EM algorithm when applied to radiographic images are investigated. This application does not involve reconstruction. The performance of the EM algorithm is quantitatively evaluated using a "perceived" signal-to-noise ratio (SNR) as the image quality metric. This perceived SNR is based on statistical decision theory and includes both the observer's visual response function and a noise component internal to the eye-brain system. For a variety of processing parameters, the relative SNR (ratio of the processed SNR to the original SNR) is calculated and used as a metric to compare quantitatively the effects of the EM algorithm with two other image enhancement techniques: global contrast enhancement (windowing) and unsharp mask filtering. The results suggest that the EM algorithm's performance is superior when compared to unsharp mask filtering and global contrast enhancement for radiographic images which contain objects smaller than 4 mm.
Preconditioned Alternating Projection Algorithms for Maximum a Posteriori ECT Reconstruction
Krol, Andrzej; Li, Si; Shen, Lixin; Xu, Yuesheng
2012-01-01
We propose a preconditioned alternating projection algorithm (PAPA) for solving the maximum a posteriori (MAP) emission computed tomography (ECT) reconstruction problem. Specifically, we formulate the reconstruction problem as a constrained convex optimization problem with the total variation (TV) regularization. We then characterize the solution of the constrained convex optimization problem and show that it satisfies a system of fixed-point equations defined in terms of two proximity operators raised from the convex functions that define the TV-norm and the constraint involved in the problem. The characterization (of the solution) via the proximity operators that define two projection operators naturally leads to an alternating projection algorithm for finding the solution. For efficient numerical computation, we introduce to the alternating projection algorithm a preconditioning matrix (the EM-preconditioner) for the dense system matrix involved in the optimization problem. We theoretically prove convergence of the preconditioned alternating projection algorithm. In numerical experiments, performance of our algorithms, with an appropriately selected preconditioning matrix, is compared with performance of the conventional MAP expectation-maximization (MAP-EM) algorithm with TV regularizer (EM-TV) and that of the recently developed nested EM-TV algorithm for ECT reconstruction. Based on the numerical experiments performed in this work, we observe that the alternating projection algorithm with the EM-preconditioner significantly outperforms the EM-TV in all aspects including the convergence speed, the noise in the reconstructed images and the image quality. It also outperforms the nested EM-TV in the convergence speed while providing comparable image quality. PMID:23271835
Phase Diversity and Polarization Augmented Techniques for Active Imaging
2007-03-01
ERIC Educational Resources Information Center
Weissman, Alexander
2013-01-01
Convergence of the expectation-maximization (EM) algorithm to a global optimum of the marginal log likelihood function for unconstrained latent variable models with categorical indicators is presented. The sufficient conditions under which global convergence of the EM algorithm is attainable are provided in an information-theoretic context by…
de Barros, Pietro Paolo; Metello, Luis F.; Camozzato, Tatiane Sabriela Cagol; Vieira, Domingos Manuel da Silva
2015-01-01
Objective: The present study aims to contribute to identifying the most appropriate OSEM parameters to generate myocardial perfusion imaging reconstructions with the best diagnostic quality, correlating them with patients' body mass index. Materials and Methods: The present study included 28 adult patients submitted to myocardial perfusion imaging in a public hospital. The OSEM method was utilized in the image reconstruction with six different combinations of numbers of iterations and subsets. The images were analyzed by nuclear cardiology specialists taking their diagnostic value into consideration and indicating the most appropriate images in terms of diagnostic quality. Results: An overall scoring analysis demonstrated that the combination of four iterations and four subsets generated the most appropriate images in terms of diagnostic quality for all classes of body mass index; however, the combination of six iterations and four subsets stood out for the higher body mass index classes. Conclusion: The use of optimized parameters seems to play a relevant role in the generation of images with better diagnostic quality, ensuring the diagnosis and, consequently, appropriate and effective treatment for the patient. PMID:26543282
Razifar, Pasha; Lubberink, Mark; Schneider, Harald; Långström, Bengt; Bengtsson, Ewert; Bergström, Mats
2005-05-13
BACKGROUND: Positron emission tomography (PET) is a powerful imaging technique with the potential of obtaining functional or biochemical information by measuring distribution and kinetics of radiolabelled molecules in a biological system, both in vitro and in vivo. PET images can be used directly or after kinetic modelling to extract quantitative values of a desired physiological, biochemical or pharmacological entity. Because such images are generally noisy, it is essential to understand how noise affects the derived quantitative values. A pre-requisite for this understanding is that the properties of noise such as variance (magnitude) and texture (correlation) are known. METHODS: In this paper we explored the pattern of noise correlation in experimentally generated PET images, with emphasis on the angular dependence of correlation, using the autocorrelation function (ACF). Experimental PET data were acquired in 2D and 3D acquisition mode and reconstructed by analytical filtered back projection (FBP) and iterative ordered subsets expectation maximisation (OSEM) methods. The 3D data was rebinned to a 2D dataset using FOurier REbinning (FORE) followed by 2D reconstruction using either FBP or OSEM. In synthetic images we compared the ACF results with those from covariance matrix. The results were illustrated as 1D profiles and also visualized as 2D ACF images. RESULTS: We found that the autocorrelation images from PET data obtained after FBP were not fully rotationally symmetric or isotropic if the object deviated from a uniform cylindrical radioactivity distribution. In contrast, similar autocorrelation images obtained after OSEM reconstruction were isotropic even when the phantom was not circular. Simulations indicated that the noise autocorrelation is non-isotropic in images created by FBP when the level of noise in projections is angularly variable. Comparison between 1D cross profiles on autocorrelation images obtained by FBP reconstruction and covariance matrices produced almost identical results in a simulation study. CONCLUSION: With asymmetric radioactivity distribution in PET, reconstruction using FBP, in contrast to OSEM, generates images in which the noise correlation is non-isotropic when the noise magnitude is angular dependent, such as in objects with asymmetric radioactivity distribution. In this respect, iterative reconstruction is superior since it creates isotropic noise correlations in the images.
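The autocorrelation analysis used above boils down to the Wiener–Khinchin relation; the Python sketch below computes a normalized 2D ACF of a zero-mean noise image with FFTs, which is enough to visualize whether the noise correlation is isotropic. The optional mask argument is an illustrative convenience, not part of the published method.

```python
import numpy as np

def noise_autocorrelation(img, mask=None):
    """Normalized 2D autocorrelation of a noise image via the FFT (Wiener-Khinchin)."""
    if mask is None:
        noise = img - img.mean()
    else:
        noise = np.where(mask, img - img[mask].mean(), 0.0)   # zero outside the region
    spectrum = np.abs(np.fft.fft2(noise)) ** 2
    acf = np.fft.fftshift(np.fft.ifft2(spectrum).real)        # zero lag at the image centre
    return acf / acf.max()
```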
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lee, Youngrok
2013-05-15
Heterogeneity exists in a data set when samples from different classes are merged into the data set. Finite mixture models can be used to represent a survival time distribution on a heterogeneous patient group by the proportions of each class as well as by the survival time distribution within each class. The heterogeneous data set cannot be explicitly decomposed into homogeneous subgroups unless all the samples are precisely labeled by their origin classes; this impossibility of decomposition is a barrier to overcome for estimating finite mixture models. The expectation-maximization (EM) algorithm has been used to obtain maximum likelihood estimates of finite mixture models by soft-decomposition of heterogeneous samples without labels for a subset or the entire set of data. In medical surveillance databases we can find partially labeled data, that is, data that are not completely unlabeled but carry only imprecise information about class values. In this study we propose new EM algorithms that take advantage of such partial labels, and thus incorporate more information than traditional EM algorithms. We propose four variants of the EM algorithm, named EM-OCML, EM-PCML, EM-HCML and EM-CPCML, each of which assumes a specific mechanism of missing class values. We conducted a simulation study on exponential survival trees with five classes and showed that the advantages of incorporating a substantial amount of partially labeled data can be highly significant. We also showed that model selection based on AIC values works fairly well for selecting the best proposed algorithm on each specific data set. A case study on a real-world data set of gastric cancer provided by the Surveillance, Epidemiology and End Results (SEER) program showed the superiority of EM-CPCML not only to the other proposed EM algorithms but also to conventional supervised, unsupervised and semi-supervised learning algorithms.
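The mechanism of exploiting partial labels can be illustrated with a two-component exponential survival mixture: unlabeled samples receive soft responsibilities in the E-step, while labeled samples have their responsibilities pinned to the known class. This is a generic sketch of the idea, not the EM-OCML/EM-PCML/EM-HCML/EM-CPCML variants proposed above, and all data are simulated.

```python
import numpy as np

def em_exponential_mixture(t, labels, n_iter=200):
    """Two-component exponential mixture EM; labels: 0/1 if the class is known, -1 if unlabeled."""
    pi = np.array([0.5, 0.5])
    lam = np.array([2.0 / t.mean(), 0.5 / t.mean()])           # crude initial rates
    for _ in range(n_iter):
        # E-step: soft responsibilities under the current parameters
        dens = pi * lam * np.exp(-np.outer(t, lam))            # shape (n, 2)
        r = dens / dens.sum(axis=1, keepdims=True)
        # partially labeled samples: responsibilities pinned to the known class
        for k in (0, 1):
            r[labels == k] = np.eye(2)[k]
        # M-step: closed-form updates of mixing proportions and rates
        pi = r.mean(axis=0)
        lam = r.sum(axis=0) / (r * t[:, None]).sum(axis=0)
    return pi, lam

rng = np.random.default_rng(2)
t = np.concatenate([rng.exponential(1.0, 300), rng.exponential(5.0, 200)])
labels = np.full(t.size, -1)
labels[:50] = 0            # a small fraction of samples carries class information
labels[300:330] = 1
print(em_exponential_mixture(t, labels))
```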
Preconditioned alternating projection algorithms for maximum a posteriori ECT reconstruction
NASA Astrophysics Data System (ADS)
Krol, Andrzej; Li, Si; Shen, Lixin; Xu, Yuesheng
2012-11-01
We propose a preconditioned alternating projection algorithm (PAPA) for solving the maximum a posteriori (MAP) emission computed tomography (ECT) reconstruction problem. Specifically, we formulate the reconstruction problem as a constrained convex optimization problem with total variation (TV) regularization. We then characterize the solution of the constrained convex optimization problem and show that it satisfies a system of fixed-point equations defined in terms of two proximity operators arising from the convex functions that define the TV-norm and the constraint involved in the problem. The characterization of the solution via the proximity operators that define two projection operators naturally leads to an alternating projection algorithm for finding the solution. For efficient numerical computation, we introduce to the alternating projection algorithm a preconditioning matrix (the EM-preconditioner) for the dense system matrix involved in the optimization problem. We theoretically prove convergence of PAPA. In numerical experiments, the performance of our algorithm, with an appropriately selected preconditioning matrix, is compared with that of the conventional MAP expectation-maximization (MAP-EM) algorithm with TV regularizer (EM-TV) and that of the recently developed nested EM-TV algorithm for ECT reconstruction. Based on the numerical experiments performed in this work, we observe that the alternating projection algorithm with the EM-preconditioner significantly outperforms EM-TV in all aspects, including convergence speed, noise in the reconstructed images and image quality. It also outperforms the nested EM-TV in convergence speed while providing comparable image quality.
Multi-ray-based system matrix generation for 3D PET reconstruction
NASA Astrophysics Data System (ADS)
Moehrs, Sascha; Defrise, Michel; Belcari, Nicola; DelGuerra, Alberto; Bartoli, Antonietta; Fabbri, Serena; Zanetti, Gianluigi
2008-12-01
Iterative image reconstruction algorithms for positron emission tomography (PET) require a sophisticated system matrix (model) of the scanner. Our aim is to set up such a model offline for the YAP-(S)PET II small animal imaging tomograph in order to use it subsequently with standard ML-EM (maximum-likelihood expectation maximization) and OSEM (ordered subset expectation maximization) for fully three-dimensional image reconstruction. In general, the system model can be obtained analytically, via measurements or via Monte Carlo simulations. In this paper, we present the multi-ray method, which can be considered as a hybrid method to set up the system model offline. It incorporates accurate analytical (geometric) considerations as well as crystal depth and crystal scatter effects. At the same time, it has the potential to model seamlessly other physical aspects such as the positron range. The proposed method is based on multiple rays which are traced from/to the detector crystals through the image volume. Such a ray-tracing approach itself is not new; however, we derive a novel mathematical formulation of the approach and investigate the positioning of the integration (ray-end) points. First, we study single system matrix entries and show that the positioning and weighting of the ray-end points according to Gaussian integration give better results compared to equally spaced integration points (trapezoidal integration), especially if only a small number of integration points (rays) are used. Additionally, we show that, for a given variance of the single matrix entries, the number of rays (events) required to calculate the whole matrix is a factor of 20 larger when using a pure Monte-Carlo-based method. Finally, we analyse the quality of the model by reconstructing phantom data from the YAP-(S)PET II scanner.
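The advantage of Gaussian placement and weighting of integration points over equally spaced points, as reported above, can be reproduced on a simple one-dimensional integral; the toy integrand below merely stands in for a single system-matrix entry and does not model the YAP-(S)PET II geometry.

```python
import numpy as np

def kernel(u):
    """Toy detection profile along a crystal, standing in for a system-matrix-entry integrand."""
    return np.exp(-4.0 * (u - 0.3) ** 2)

def trapezoid_rule(f, n):
    u = np.linspace(-1.0, 1.0, n)
    y = f(u)
    return (u[1] - u[0]) * (y.sum() - 0.5 * (y[0] + y[-1]))

# reference value from a very high-order Gauss-Legendre rule
nodes, weights = np.polynomial.legendre.leggauss(100)
exact = np.sum(weights * kernel(nodes))

for n in (2, 4, 8):
    nodes, weights = np.polynomial.legendre.leggauss(n)    # n optimally placed/weighted points
    gauss = np.sum(weights * kernel(nodes))
    trap = trapezoid_rule(kernel, n)                        # n equally spaced points
    print(f"n={n}: Gauss error={abs(gauss - exact):.2e}  trapezoid error={abs(trap - exact):.2e}")
```

As in the paper's single-entry analysis, the Gaussian rule reaches a given accuracy with far fewer integration points than the equally spaced rule.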
First Human Brain Imaging by the jPET-D4 Prototype With a Pre-Computed System Matrix
NASA Astrophysics Data System (ADS)
Yamaya, Taiga; Yoshida, Eiji; Obi, Takashi; Ito, Hiroshi; Yoshikawa, Kyosan; Murayama, Hideo
2008-10-01
The jPET-D4 is a novel brain PET scanner which aims to achieve not only high spatial resolution but also high scanner sensitivity by using 4-layer depth-of-interaction (DOI) information. The dimensions of a system matrix for the jPET-D4 are 3.3 billion (lines-of-response) times 5 million (image elements) when a standard field-of-view (FOV) of 25 cm diameter is sampled with (1.5 mm)³ voxels. The size of the system matrix is estimated as 117 petabytes (PB) at an accuracy of 8 bytes per element. An on-the-fly calculation is usually used to deal with such a huge system matrix. However, the calculation time inevitably grows as the accuracy of the system modeling is improved. In this work, we implemented an alternative approach based on pre-calculation of the system matrix. A histogram-based 3D OS-EM algorithm was implemented on a desktop workstation with 32 GB of memory installed. The 117 PB system matrix was compressed to fit within the limited amount of computer memory by (1) eliminating zero elements, (2) applying the DOI compression (DOIC) method and (3) applying rotational symmetry and an axial shift property of the crystal arrangement. Spanning, which degrades axial resolution, was not applied. The system modeling and the DOIC method, which had been validated in 2D image reconstruction, were expanded into a 3D implementation. In particular, a new system model including the DOIC transformation was introduced to suppress the resolution loss caused by the DOIC method. Experimental results showed that the jPET-D4 has almost uniform spatial resolution of better than 3 mm over the FOV. Finally, the first human brain images were obtained with the jPET-D4.
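The first compression step mentioned above, eliminating zero elements, is what a standard sparse matrix format provides. The sketch below is generic (a random toy matrix and scipy.sparse) and does not include the DOIC or symmetry compression stages.

```python
import numpy as np
from scipy import sparse

rng = np.random.default_rng(0)

# toy dense "system matrix": most voxels do not contribute to most lines of response
dense = rng.random((2000, 500))
dense[dense < 0.99] = 0.0                       # roughly 99% zeros, as in geometric models

A = sparse.csr_matrix(dense)                    # keep only the non-zero elements
dense_mb = dense.nbytes / 1e6
sparse_mb = (A.data.nbytes + A.indices.nbytes + A.indptr.nbytes) / 1e6
print(f"dense: {dense_mb:.1f} MB   sparse: {sparse_mb:.1f} MB")

# forward projection works directly on the compressed matrix
x = rng.random(500)
print((A @ x).shape)
```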
ERIC Educational Resources Information Center
Cai, Li; Lee, Taehun
2009-01-01
We apply the Supplemented EM algorithm (Meng & Rubin, 1991) to address a chronic problem with the "two-stage" fitting of covariance structure models in the presence of ignorable missing data: the lack of an asymptotically chi-square distributed goodness-of-fit statistic. We show that the Supplemented EM algorithm provides a…
ERIC Educational Resources Information Center
Adachi, Kohei
2013-01-01
Rubin and Thayer ("Psychometrika," 47:69-76, 1982) proposed the EM algorithm for exploratory and confirmatory maximum likelihood factor analysis. In this paper, we prove the following fact: the EM algorithm always gives a proper solution with positive unique variances and factor correlations with absolute values that do not exceed one,…
2010-01-01
Background The information provided by dense genome-wide markers using high throughput technology is of considerable potential in human disease studies and livestock breeding programs. Genome-wide association studies relate individual single nucleotide polymorphisms (SNP) from dense SNP panels to individual measurements of complex traits, with the underlying assumption being that any association is caused by linkage disequilibrium (LD) between SNP and quantitative trait loci (QTL) affecting the trait. Often SNP are in genomic regions of no trait variation. Whole genome Bayesian models are an effective way of incorporating this and other important prior information into modelling. However a full Bayesian analysis is often not feasible due to the large computational time involved. Results This article proposes an expectation-maximization (EM) algorithm called emBayesB which allows only a proportion of SNP to be in LD with QTL and incorporates prior information about the distribution of SNP effects. The posterior probability of being in LD with at least one QTL is calculated for each SNP along with estimates of the hyperparameters for the mixture prior. A simulated example of genomic selection from an international workshop is used to demonstrate the features of the EM algorithm. The accuracy of prediction is comparable to a full Bayesian analysis but the EM algorithm is considerably faster. The EM algorithm was accurate in locating QTL which explained more than 1% of the total genetic variation. A computational algorithm for very large SNP panels is described. Conclusions emBayesB is a fast and accurate EM algorithm for implementing genomic selection and predicting complex traits by mapping QTL in genome-wide dense SNP marker data. Its accuracy is similar to Bayesian methods but it takes only a fraction of the time. PMID:20969788
Optimized 3D stitching algorithm for whole body SPECT based on transition error minimization (TEM)
NASA Astrophysics Data System (ADS)
Cao, Xinhua; Xu, Xiaoyin; Voss, Stephan
2017-02-01
Standard Single Photon Emission Computed Tomography (SPECT) has a limited field of view (FOV) and cannot provide a 3D image of the entire body in a single acquisition. To produce a 3D whole body SPECT image, two to five overlapping SPECT FOVs from head to foot are acquired and assembled using image stitching. Most commercial software from medical imaging manufacturers applies a direct mid-slice stitching method to avoid blurring or ghosting from 3D image blending. Due to intensity changes across the middle slice of overlapped images, direct mid-slice stitching often produces visible seams in the coronal and sagittal views and in maximum intensity projections (MIP). In this study, we propose an optimized algorithm to reduce the visibility of stitching edges. The new algorithm computes, based on transition error minimization (TEM), a 3D stitching interface between two overlapped 3D SPECT images. To test the suggested algorithm, four studies of 2-FOV whole body SPECT were used, covering two different reconstruction methods (filtered back projection (FBP) and ordered subset expectation maximization (OSEM)) as well as two different radiopharmaceuticals (Tc-99m MDP for bone metastases and I-131 MIBG for neuroblastoma tumors). Relative transition errors of stitched whole body SPECT using mid-slice stitching and the TEM-based algorithm were measured for objective evaluation. Preliminary experiments showed that the new algorithm reduced the visibility of the stitching interface in the coronal, sagittal, and MIP views. Average relative transition errors were reduced from 56.7% with mid-slice stitching to 11.7% with TEM-based stitching. The proposed algorithm also avoids blurring artifacts by preserving the noise properties of the original SPECT images.
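A much simplified 2D version of the idea can be sketched as follows: within the overlap region, choose the transition row column by column where the two images differ least, rather than always cutting at the middle slice. The arrays are illustrative stand-ins; the published algorithm computes a full 3D stitching interface.

```python
import numpy as np

def tem_seam(overlap_a, overlap_b):
    """For each column of the overlap region, pick the row where |A - B| is smallest."""
    err = np.abs(overlap_a - overlap_b)          # transition error at every candidate cut
    return err.argmin(axis=0)                    # one cut row per column

def stitch(overlap_a, overlap_b, seam):
    out = overlap_b.copy()
    for col, row in enumerate(seam):
        out[:row, col] = overlap_a[:row, col]    # image A above the seam, image B below
    return out

rng = np.random.default_rng(3)
a = rng.poisson(100, (20, 30)).astype(float)     # stand-ins for the overlapping parts of two FOVs
b = 0.9 * a + rng.normal(0, 5, a.shape)          # intensity mismatch between bed positions
seam = tem_seam(a, b)
print(seam[:10], stitch(a, b, seam).shape)
```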
ERIC Educational Resources Information Center
Martin-Fernandez, Manuel; Revuelta, Javier
2017-01-01
This study compares the performance of two estimation algorithms of new usage, the Metropolis-Hastings Robins-Monro (MHRM) and the Hamiltonian MCMC (HMC), with two consolidated algorithms in the psychometric literature, the marginal likelihood via EM algorithm (MML-EM) and the Markov chain Monte Carlo (MCMC), in the estimation of multidimensional…
Bernhardt, Paul W.; Zhang, Daowen; Wang, Huixia Judy
2014-01-01
Joint modeling techniques have become a popular strategy for studying the association between a response and one or more longitudinal covariates. Motivated by the GenIMS study, where it is of interest to model the event of survival using censored longitudinal biomarkers, a joint model is proposed for describing the relationship between a binary outcome and multiple longitudinal covariates subject to detection limits. A fast, approximate EM algorithm is developed that reduces the dimension of integration in the E-step of the algorithm to one, regardless of the number of random effects in the joint model. Numerical studies demonstrate that the proposed approximate EM algorithm leads to satisfactory parameter and variance estimates in situations with and without censoring on the longitudinal covariates. The approximate EM algorithm is applied to analyze the GenIMS data set. PMID:25598564
A quantitative reconstruction software suite for SPECT imaging
NASA Astrophysics Data System (ADS)
Namías, Mauro; Jeraj, Robert
2017-11-01
Quantitative Single Photon Emission Tomography (SPECT) imaging allows for measurement of activity concentrations of a given radiotracer in vivo. Although SPECT has usually been perceived as non-quantitative by the medical community, the introduction of accurate CT based attenuation correction and scatter correction from hybrid SPECT/CT scanners has enabled SPECT systems to be as quantitative as Positron Emission Tomography (PET) systems. We implemented a software suite to reconstruct quantitative SPECT images from hybrid or dedicated SPECT systems with a separate CT scanner. Attenuation, scatter and collimator response corrections were included in an Ordered Subset Expectation Maximization (OSEM) algorithm. A novel scatter fraction estimation technique was introduced. The SPECT/CT system was calibrated with a cylindrical phantom and quantitative accuracy was assessed with an anthropomorphic phantom and a NEMA/IEC image quality phantom. Accurate activity measurements were achieved at an organ level. This software suite helps increasing quantitative accuracy of SPECT scanners.
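The calibration step mentioned above essentially converts reconstructed counts into activity concentration through a factor measured on a phantom of known activity. The sketch below assumes a uniform phantom that fills the whole image volume, a simplification relative to the actual cylindrical-phantom workflow, and all numbers are illustrative.

```python
import numpy as np

def calibration_factor(phantom_image, voxel_volume_ml, true_activity_mbq, acq_time_s):
    """Counts per second per voxel divided by the true activity concentration (MBq/ml)."""
    count_rate_per_voxel = phantom_image.sum() / acq_time_s / phantom_image.size
    true_concentration = true_activity_mbq / (phantom_image.size * voxel_volume_ml)
    return count_rate_per_voxel / true_concentration

def to_activity_concentration(image, cal_factor, acq_time_s):
    """Convert a reconstructed count image into MBq/ml per voxel."""
    return image / acq_time_s / cal_factor

rng = np.random.default_rng(4)
phantom = rng.poisson(200, size=(32, 32, 32)).astype(float)   # uniform phantom surrogate
cf = calibration_factor(phantom, voxel_volume_ml=0.1, true_activity_mbq=100.0, acq_time_s=600)
patient = rng.poisson(80, size=(32, 32, 32)).astype(float)
print(to_activity_concentration(patient, cf, acq_time_s=600).mean(), "MBq/ml (mean)")
```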
The PX-EM algorithm for fast stable fitting of Henderson's mixed model
Foulley, Jean-Louis; Van Dyk, David A
2000-01-01
This paper presents procedures for implementing the PX-EM algorithm of Liu, Rubin and Wu to compute REML estimates of variance covariance components in Henderson's linear mixed models. The class of models considered encompasses several correlated random factors having the same vector length e.g., as in random regression models for longitudinal data analysis and in sire-maternal grandsire models for genetic evaluation. Numerical examples are presented to illustrate the procedures. Much better results in terms of convergence characteristics (number of iterations and time required for convergence) are obtained for PX-EM relative to the basic EM algorithm in the random regression. PMID:14736399
SubspaceEM: A Fast Maximum-a-posteriori Algorithm for Cryo-EM Single Particle Reconstruction
Dvornek, Nicha C.; Sigworth, Fred J.; Tagare, Hemant D.
2015-01-01
Single particle reconstruction methods based on the maximum-likelihood principle and the expectation-maximization (E–M) algorithm are popular because of their ability to produce high resolution structures. However, these algorithms are computationally very expensive, requiring a network of computational servers. To overcome this computational bottleneck, we propose a new mathematical framework for accelerating maximum-likelihood reconstructions. The speedup is by orders of magnitude and the proposed algorithm produces similar quality reconstructions compared to the standard maximum-likelihood formulation. Our approach uses subspace approximations of the cryo-electron microscopy (cryo-EM) data and projection images, greatly reducing the number of image transformations and comparisons that are computed. Experiments using simulated and actual cryo-EM data show that speedup in overall execution time compared to traditional maximum-likelihood reconstruction reaches factors of over 300. PMID:25839831
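The core computational idea, comparing images through low-dimensional subspace coefficients instead of full pixel vectors, can be sketched with an SVD basis. This is only an illustration of subspace approximation on random stand-in data; it is not the SubspaceEM algorithm itself, which embeds the approximation inside the E-M iterations.

```python
import numpy as np

rng = np.random.default_rng(5)
n_images, n_pixels, k = 500, 4096, 20

images = rng.normal(size=(n_images, n_pixels))       # stand-in for noisy particle images
mean = images.mean(axis=0)

# k-dimensional basis capturing most of the variability across the image stack
_, _, vt = np.linalg.svd(images - mean, full_matrices=False)
basis = vt[:k]                                       # shape (k, n_pixels)
coeffs = (images - mean) @ basis.T                   # each image reduced to k coefficients

# comparing a reference against all images now costs O(k) per image instead of O(n_pixels)
ref = coeffs[0]
distances = np.linalg.norm(coeffs - ref, axis=1)
print(distances[:5])
```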
Ceriani, Luca; Ruberto, Teresa; Delaloye, Angelika Bischof; Prior, John O; Giovanella, Luca
2010-03-01
The purposes of this study were to characterize the performance of a 3-dimensional (3D) ordered-subset expectation maximization (OSEM) algorithm in the quantification of left ventricular (LV) function with (99m)Tc-labeled agent gated SPECT (G-SPECT), the QGS program, and a beating-heart phantom and to optimize the reconstruction parameters for clinical applications. A G-SPECT image of a dynamic heart phantom simulating the beating left ventricle was acquired. The exact volumes of the phantom were known and were as follows: end-diastolic volume (EDV) of 112 mL, end-systolic volume (ESV) of 37 mL, and stroke volume (SV) of 75 mL; these volumes produced an LV ejection fraction (LVEF) of 67%. Tomographic reconstructions were obtained after 10-20 iterations (I) with 4, 8, and 16 subsets (S) at full width at half maximum (FWHM) gaussian postprocessing filter cutoff values of 8-15 mm. The QGS program was used for quantitative measurements. Measured values ranged from 72 to 92 mL for EDV, from 18 to 32 mL for ESV, and from 54 to 63 mL for SV, and the calculated LVEF ranged from 65% to 76%. Overall, the combination of 10 I, 8 S, and a cutoff filter value of 10 mm produced the most accurate results. The plot of the measures with respect to the expectation maximization-equivalent iterations (I x S product) revealed a bell-shaped curve for the LV volumes and a reverse distribution for the LVEF, with the best results in the intermediate range. In particular, FWHM cutoff values exceeding 10 mm affected the estimation of the LV volumes. The QGS program is able to correctly calculate the LVEF when used in association with an optimized 3D OSEM algorithm (8 S, 10 I, and FWHM of 10 mm) but underestimates the LV volumes. However, various combinations of technical parameters, including a limited range of I and S (80-160 expectation maximization-equivalent iterations) and low cutoff values (< or =10 mm) for the gaussian postprocessing filter, produced results with similar accuracies and without clinically relevant differences in the LV volumes and the estimated LVEF.
NASA Astrophysics Data System (ADS)
David, Sabrina; Burion, Steve; Tepe, Alan; Wilfley, Brian; Menig, Daniel; Funk, Tobias
2012-03-01
Iterative reconstruction methods have emerged as a promising avenue to reduce dose in CT imaging. Another, perhaps less well-known, advance has been the development of inverse geometry CT (IGCT) imaging systems, which can significantly reduce the radiation dose delivered to a patient during a CT scan compared to conventional CT systems. Here we show that IGCT data can be reconstructed using iterative methods, thereby combining two novel methods for CT dose reduction. A prototype IGCT scanner was developed using a scanning beam digital X-ray system - an inverse geometry fluoroscopy system with a 9,000 focal spot x-ray source and small photon counting detector. 90 fluoroscopic projections or "superviews" spanning an angle of 360 degrees were acquired of an anthropomorphic phantom mimicking a 1 year-old boy. The superviews were reconstructed with a custom iterative reconstruction algorithm, based on the maximum-likelihood algorithm for transmission tomography (ML-TR). The normalization term was calculated based on flat-field data acquired without a phantom. 15 subsets were used, and a total of 10 complete iterations were performed. Initial reconstructed images showed faithful reconstruction of anatomical details. Good edge resolution and good contrast-to-noise properties were observed. Overall, ML-TR reconstruction of IGCT data collected by a bench-top prototype was shown to be viable, which may be an important milestone in the further development of inverse geometry CT.
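For reference, a simplified, unregularized ML-TR-style additive update for transmission data is sketched below, following the convex-surrogate form commonly cited in the transmission tomography literature. The prototype's actual implementation (flat-field normalization term, 15 subsets, full scanner geometry) is more involved, and all names and dimensions here are illustrative assumptions.

```python
import numpy as np

def mltr(A, y, blank, n_iter=50, eps=1e-12):
    """Simplified ML-TR-style update for y ~ Poisson(blank * exp(-A @ mu))."""
    mu = np.zeros(A.shape[1])
    row_sum = A.sum(axis=1)                          # total intersection length of each ray
    for _ in range(n_iter):
        y_hat = blank * np.exp(-A @ mu)              # expected transmission counts
        grad = A.T @ (y_hat - y)                     # gradient of the log-likelihood
        curv = A.T @ (row_sum * y_hat) + eps         # separable surrogate curvature
        mu = np.clip(mu + grad / curv, 0.0, None)    # additive update, keep mu non-negative
    return mu

rng = np.random.default_rng(6)
A = 0.05 * rng.random((400, 64))                     # toy projection geometry
mu_true = 0.2 * rng.random(64)
blank = np.full(400, 1e4)                            # flat-field (no object) counts
y = rng.poisson(blank * np.exp(-A @ mu_true)).astype(float)
print(np.round(mltr(A, y, blank)[:6], 3), np.round(mu_true[:6], 3))
```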
Deterministic quantum annealing expectation-maximization algorithm
NASA Astrophysics Data System (ADS)
Miyahara, Hideyuki; Tsumura, Koji; Sughiyama, Yuki
2017-11-01
Maximum likelihood estimation (MLE) is one of the most important methods in machine learning, and the expectation-maximization (EM) algorithm is often used to obtain maximum likelihood estimates. However, EM heavily depends on initial configurations and fails to find the global optimum. On the other hand, in the field of physics, quantum annealing (QA) was proposed as a novel optimization approach. Motivated by QA, we propose a quantum annealing extension of EM, which we call the deterministic quantum annealing expectation-maximization (DQAEM) algorithm. We also discuss its advantage in terms of the path integral formulation. Furthermore, by employing numerical simulations, we illustrate how DQAEM works in MLE and show that DQAEM moderates the problem of local optima in EM.
NASA Astrophysics Data System (ADS)
Khambampati, A. K.; Rashid, A.; Kim, B. S.; Liu, Dong; Kim, S.; Kim, K. Y.
2010-04-01
EIT has been used for the dynamic estimation of organ boundaries. One specific application in this context is the estimation of lung boundaries during pulmonary circulation. This would help track the size and shape of lungs of the patients suffering from diseases like pulmonary edema and acute respiratory failure (ARF). The dynamic boundary estimation of the lungs can also be utilized to set and control the air volume and pressure delivered to the patients during artificial ventilation. In this paper, the expectation-maximization (EM) algorithm is used as an inverse algorithm to estimate the non-stationary lung boundary. The uncertainties caused in Kalman-type filters due to inaccurate selection of model parameters are overcome using EM algorithm. Numerical experiments using chest shaped geometry are carried out with proposed method and the performance is compared with extended Kalman filter (EKF). Results show superior performance of EM in estimation of the lung boundary.
Angelis, G I; Reader, A J; Markiewicz, P J; Kotasidis, F A; Lionheart, W R; Matthews, J C
2013-08-07
Recent studies have demonstrated the benefits of a resolution model within iterative reconstruction algorithms in an attempt to account for effects that degrade the spatial resolution of the reconstructed images. However, these algorithms suffer from slower convergence rates, compared to algorithms where no resolution model is used, due to the additional need to solve an image deconvolution problem. In this paper, a recently proposed algorithm, which decouples the tomographic and image deconvolution problems within an image-based expectation maximization (EM) framework, was evaluated. This separation is convenient, because more computational effort can be placed on the image deconvolution problem and therefore accelerate convergence. Since the computational cost of solving the image deconvolution problem is relatively small, multiple image-based EM iterations do not significantly increase the overall reconstruction time. The proposed algorithm was evaluated using 2D simulations, as well as measured 3D data acquired on the high-resolution research tomograph. Results showed that bias reduction can be accelerated by interleaving multiple iterations of the image-based EM algorithm solving the resolution model problem with a single EM iteration solving the tomographic problem. Significant improvements were observed particularly for voxels located on the boundaries between regions of high contrast within the object being imaged and for small regions of interest, where resolution recovery is usually more challenging. Minor differences were observed using the proposed nested algorithm, compared to the single iteration normally performed, when an optimal number of iterations is performed for each algorithm. However, using the proposed nested approach, convergence is significantly accelerated, enabling reconstruction with far fewer tomographic iterations (up to 70% fewer iterations for small regions). Nevertheless, the optimal number of nested image-based EM iterations is difficult to define and should be selected according to the given application.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kagie, Matthew J.; Lanterman, Aaron D.
2017-12-01
This paper addresses parameter estimation for an optical transient signal when the received data has been right-censored. We develop an expectation-maximization (EM) algorithm to estimate the amplitude of a Poisson intensity with a known shape in the presence of additive background counts, where the measurements are subject to saturation effects. We compare the results of our algorithm with those of an EM algorithm that is unaware of the censoring.
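A minimal sketch of an EM of this flavour is given below: counts at or above the saturation level are replaced in the E-step by their conditional expectation under the current Poisson mean, and the amplitude is re-estimated in closed form in the M-step. Background counts are omitted for simplicity, unlike in the algorithm described above, and the transient shape and thresholds are illustrative.

```python
import numpy as np
from scipy.stats import poisson

def em_censored_amplitude(y, shape, censor_level, n_iter=100):
    """Estimate a in Y_i ~ Poisson(a * shape_i) when values >= censor_level are saturated."""
    censored = y >= censor_level
    a = y.sum() / shape.sum()                        # naive initial estimate
    for _ in range(n_iter):
        z = y.astype(float).copy()
        lam = a * shape[censored]
        # E-step: E[Y | Y >= c, lam] = lam * P(Y >= c-1) / P(Y >= c) for a Poisson variable
        z[censored] = lam * poisson.sf(censor_level - 2, lam) / poisson.sf(censor_level - 1, lam)
        # M-step: closed-form amplitude update (no background term in this sketch)
        a = z.sum() / shape.sum()
    return a

rng = np.random.default_rng(7)
shape = np.exp(-0.5 * ((np.arange(200) - 100) / 20.0) ** 2)   # known transient shape
a_true, c = 30.0, 25
y = np.minimum(rng.poisson(a_true * shape), c)                # detector saturates at c counts
print("censoring-aware EM:", em_censored_amplitude(y, shape, c))
print("censoring-unaware :", y.sum() / shape.sum())
```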
NASA Astrophysics Data System (ADS)
Wang, Xun; Quost, Benjamin; Chazot, Jean-Daniel; Antoni, Jérôme
2016-01-01
This paper considers the problem of identifying multiple sound sources from acoustical measurements obtained by an array of microphones. The problem is solved via maximum likelihood. In particular, an expectation-maximization (EM) approach is used to estimate the sound source locations and strengths, the pressure measured by a microphone being interpreted as a mixture of latent signals emitted by the sources. This work also considers two kinds of uncertainties pervading the sound propagation and measurement process: uncertain microphone locations and an uncertain wavenumber. These uncertainties are transposed to the data in the belief functions framework. Then, the source locations and strengths can be estimated using a variant of the EM algorithm, known as the Evidential EM (E2M) algorithm. Finally, both simulated and real experiments are presented to illustrate the advantage of using the EM in the case without uncertainty and the E2M in the case of uncertain measurements.
Directly Reconstructing Principal Components of Heterogeneous Particles from Cryo-EM Images
Tagare, Hemant D.; Kucukelbir, Alp; Sigworth, Fred J.; Wang, Hongwei; Rao, Murali
2015-01-01
Structural heterogeneity of particles can be investigated by their three-dimensional principal components. This paper addresses the question of whether, and with what algorithm, the three-dimensional principal components can be directly recovered from cryo-EM images. The first part of the paper extends the Fourier slice theorem to covariance functions, showing that the three-dimensional covariance, and hence the principal components, of a heterogeneous particle can indeed be recovered from two-dimensional cryo-EM images. The second part of the paper proposes a practical algorithm for reconstructing the principal components directly from cryo-EM images without the intermediate step of calculating covariances. This algorithm is based on maximizing the (posterior) likelihood using the Expectation-Maximization algorithm. The last part of the paper applies this algorithm to simulated data and to two real cryo-EM data sets: a data set of the 70S ribosome with and without Elongation Factor-G (EF-G), and a data set of the influenza virus RNA dependent RNA Polymerase (RdRP). The first principal component of the 70S ribosome data set reveals the expected conformational changes of the ribosome as the EF-G binds and unbinds. The first principal component of the RdRP data set reveals a conformational change in the two dimers of the RdRP. PMID:26049077
Spectral unmixing of urban land cover using a generic library approach
NASA Astrophysics Data System (ADS)
Degerickx, Jeroen; Lordache, Marian-Daniel; Okujeni, Akpona; Hermy, Martin; van der Linden, Sebastian; Somers, Ben
2016-10-01
Remote sensing based land cover classification in urban areas generally requires the use of subpixel classification algorithms to take into account the high spatial heterogeneity. These spectral unmixing techniques often rely on spectral libraries, i.e. collections of pure material spectra (endmembers, EM), which ideally cover the large EM variability typically present in urban scenes. Despite the advent of several (semi-) automated EM detection algorithms, the collection of such image-specific libraries remains a tedious and time-consuming task. As an alternative, we suggest the use of a generic urban EM library, containing material spectra under varying conditions, acquired from different locations and sensors. This approach requires an efficient EM selection technique, capable of selecting only those spectra relevant for a specific image. In this paper, we evaluate and compare the potential of different existing library pruning algorithms (Iterative Endmember Selection and MUSIC) using simulated hyperspectral (APEX) data of the Brussels metropolitan area. In addition, we develop a new hybrid EM selection method which is shown to be highly efficient in dealing with both image-specific and generic libraries, subsequently yielding more robust land cover classification results compared to existing methods. Future research will include further optimization of the proposed algorithm and additional tests on both simulated and real hyperspectral data.
Generalized PSF modeling for optimized quantitation in PET imaging.
Ashrafinia, Saeed; Mohy-Ud-Din, Hassan; Karakatsanis, Nicolas A; Jha, Abhinav K; Casey, Michael E; Kadrmas, Dan J; Rahmim, Arman
2017-06-21
Point-spread function (PSF) modeling offers the ability to account for resolution degrading phenomena within the PET image generation framework. PSF modeling improves resolution and enhances contrast, but at the same time it significantly alters image noise properties and induces an edge overshoot effect. Thus, studying the effect of PSF modeling on quantitation task performance can be very important. Frameworks explored in the past involved a dichotomy of PSF versus no-PSF modeling. By contrast, the present work focuses on quantitative performance evaluation of standard uptake value (SUV) PET images, while incorporating a wide spectrum of PSF models, including those that under- and over-estimate the true PSF, for the potential of enhanced quantitation of SUVs. The developed framework first analytically models the true PSF, considering a range of resolution degradation phenomena (including photon non-collinearity, inter-crystal penetration and scattering) as present in data acquisitions with modern commercial PET systems. In the context of oncologic liver FDG PET imaging, we generated 200 noisy datasets per image-set (with clinically realistic noise levels) using an XCAT anthropomorphic phantom with liver tumours of varying sizes. These were subsequently reconstructed using the OS-EM algorithm with varying PSF-modelled kernels. We focused on quantitation of both SUVmean and SUVmax, including assessment of contrast recovery coefficients, as well as noise-bias characteristics (including both image roughness and coefficient of variability), for different tumours/iterations/PSF kernels. It was observed that an overestimated PSF yielded more accurate contrast recovery for a range of tumours, and typically improved quantitative performance. For a clinically reasonable number of iterations, edge enhancement due to PSF modeling (especially due to an over-estimated PSF) was in fact seen to lower SUVmean bias in small tumours. Overall, the results indicate that exactly matched PSF modeling does not offer optimized PET quantitation, and that PSF overestimation may provide enhanced SUV quantitation. Furthermore, generalized PSF modeling may provide a valuable approach for quantitative tasks such as treatment-response assessment and prognostication.
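For reference, the figures of merit used above can be computed from a reconstructed image and ROI masks along the following lines; the masks, contrast values and the particular contrast recovery definition are illustrative assumptions rather than the authors' exact framework.

```python
import numpy as np

def quantitation_metrics(image, tumour_mask, background_mask, true_tumour, true_background):
    """ROI mean/max uptake plus a simple contrast recovery coefficient."""
    roi_mean = image[tumour_mask].mean()
    roi_max = image[tumour_mask].max()
    bkg = image[background_mask].mean()
    crc = (roi_mean / bkg - 1.0) / (true_tumour / true_background - 1.0)   # measured / true contrast
    return roi_mean, roi_max, crc

rng = np.random.default_rng(8)
img = rng.normal(1.0, 0.1, (64, 64, 64))                 # background level of ~1.0
img[30:34, 30:34, 30:34] += 2.0                          # small "tumour" with 3:1 contrast
tumour = np.zeros(img.shape, bool)
tumour[30:34, 30:34, 30:34] = True
background = np.zeros(img.shape, bool)
background[5:20, 5:20, 5:20] = True
print(quantitation_metrics(img, tumour, background, true_tumour=3.0, true_background=1.0))
```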
NASA Astrophysics Data System (ADS)
Qi, Yujin; Tsui, B. M. W.; Gilland, K. L.; Frey, E. C.; Gullberg, G. T.
2004-06-01
This study evaluates myocardial SPECT images obtained from parallel-hole (PH) and fan-beam (FB) collimator geometries using both circular-orbit (CO) and noncircular-orbit (NCO) acquisitions. A newly developed 4-D NURBS-based cardiac-torso (NCAT) phantom was used to simulate the (99m)Tc-sestamibi uptake in a human torso with myocardial defects in the left ventricular (LV) wall. Two phantoms were generated to simulate patients with thick and thin body builds. Projection data including the effects of attenuation, collimator-detector response and scatter were generated using SIMSET Monte Carlo simulations. A large number of photon histories were generated such that the projection data were close to noise free. Poisson noise fluctuations were then added to simulate the count densities found in clinical data. Noise-free and noisy projection data were reconstructed using the iterative OS-EM reconstruction algorithm with attenuation compensation. The reconstructed images from noisy projection data show that the noise levels are lower for the FB as compared to the PH collimator due to the increase in detected counts. The NCO acquisition method provides slightly better resolution and a small improvement in defect contrast as compared to the CO acquisition method in noise-free reconstructed images. Despite lower projection counts, the NCO shows the same noise level as the CO in the attenuation corrected reconstructed images. The results from the channelized Hotelling observer (CHO) study show that the FB collimator is superior to the PH collimator in myocardial defect detection, but the NCO shows no statistically significant difference from the CO for either the PH or FB collimator. In conclusion, our results indicate that data acquisition using NCO makes a very small improvement in resolution over CO for myocardial SPECT imaging. This small improvement does not make a significant difference in myocardial defect detection. However, an FB collimator provides better defect detection than a PH collimator with similar spatial resolution for myocardial SPECT imaging.
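The noise-addition step described above, scaling near-noise-free projections to clinical count densities and drawing Poisson variates, can be sketched as follows; the array sizes and count level are illustrative.

```python
import numpy as np

def add_clinical_noise(noise_free_projections, target_total_counts, seed=0):
    """Scale near-noise-free projections to a clinical count level and add Poisson noise."""
    rng = np.random.default_rng(seed)
    scale = target_total_counts / noise_free_projections.sum()
    noisy = rng.poisson(noise_free_projections * scale).astype(float)
    return noisy / scale                      # back to the original intensity scale

rng = np.random.default_rng(9)
clean = 100.0 * rng.random((64, 60))          # stand-in for noise-free SPECT projections
noisy = add_clinical_noise(clean, target_total_counts=2e5)
print(clean.sum(), noisy.sum())
```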
Maximum likelihood estimates, from censored data, for mixed-Weibull distributions
NASA Astrophysics Data System (ADS)
Jiang, Siyuan; Kececioglu, Dimitri
1992-06-01
A new algorithm for estimating the parameters of mixed-Weibull distributions from censored data is presented. The algorithm follows the principle of maximum likelihood estimate (MLE) through the expectation and maximization (EM) algorithm, and it is derived for both postmortem and nonpostmortem time-to-failure data. It is concluded that the concept of the EM algorithm is easy to understand and apply (only elementary statistics and calculus are required). The log-likelihood function cannot decrease after an EM sequence; this important feature was observed in all of the numerical calculations. The MLEs of the nonpostmortem data were obtained successfully for mixed-Weibull distributions with up to 14 parameters in a 5-subpopulation, mixed-Weibull distribution. Numerical examples indicate that some of the log-likelihood functions of the mixed-Weibull distributions have multiple local maxima; therefore, the algorithm should start at several initial guesses of the parameter set.
A general probabilistic model for group independent component analysis and its estimation methods
Guo, Ying
2012-01-01
Independent component analysis (ICA) has become an important tool for analyzing data from functional magnetic resonance imaging (fMRI) studies. ICA has been successfully applied to single-subject fMRI data. The extension of ICA to group inferences in neuroimaging studies, however, is challenging due to the unavailability of a pre-specified group design matrix and the uncertainty in between-subjects variability in fMRI data. We present a general probabilistic ICA (PICA) model that can accommodate varying group structures of multi-subject spatio-temporal processes. An advantage of the proposed model is that it can flexibly model various types of group structures in different underlying neural source signals and under different experimental conditions in fMRI studies. A maximum likelihood method is used for estimating this general group ICA model. We propose two EM algorithms to obtain the ML estimates. The first method is an exact EM algorithm which provides an exact E-step and an explicit noniterative M-step. The second method is a variational approximation EM algorithm which is computationally more efficient than the exact EM. In simulation studies, we first compare the performance of the proposed general group PICA model and the existing probabilistic group ICA approach. We then compare the two proposed EM algorithms and show that the variational approximation EM achieves comparable accuracy to the exact EM with significantly less computation time. An fMRI data example is used to illustrate the application of the proposed methods. PMID:21517789
3D tomographic imaging with the γ-eye planar scintigraphic gamma camera
NASA Astrophysics Data System (ADS)
Tunnicliffe, H.; Georgiou, M.; Loudos, G. K.; Simcox, A.; Tsoumpas, C.
2017-11-01
γ-eye is a desktop planar scintigraphic gamma camera (100 mm × 50 mm field of view) designed by BET Solutions as an affordable tool for dynamic, whole body, small-animal imaging. This investigation tests the viability of using γ-eye for the collection of tomographic data for 3D SPECT reconstruction. Two software packages, QSPECT and STIR (software for tomographic image reconstruction), have been compared. Reconstructions have been performed using QSPECT’s implementation of the OSEM algorithm and STIR’s OSMAPOSL (Ordered Subset Maximum A Posteriori One Step Late) and OSSPS (Ordered Subsets Separable Paraboloidal Surrogate) algorithms. Reconstructed images of phantom and mouse data have been assessed in terms of spatial resolution, sensitivity to varying activity levels and uniformity. The effect of varying the number of iterations, the voxel size (1.25 mm default voxel size reduced to 0.625 mm and 0.3125 mm), the point spread function correction and the weight of prior terms were explored. While QSPECT demonstrated faster reconstructions, STIR outperformed it in terms of resolution (as low as 1 mm versus 3 mm), particularly when smaller voxel sizes were used, and in terms of uniformity, particularly when prior terms were used. Little difference in terms of sensitivity was seen throughout.
Alam, M S; Bognar, J G; Cain, S; Yasuda, B J
1998-03-10
During the process of microscanning a controlled vibrating mirror typically is used to produce subpixel shifts in a sequence of forward-looking infrared (FLIR) images. If the FLIR is mounted on a moving platform, such as an aircraft, uncontrolled random vibrations associated with the platform can be used to generate the shifts. Iterative techniques such as the expectation-maximization (EM) approach by means of the maximum-likelihood algorithm can be used to generate high-resolution images from multiple randomly shifted aliased frames. In the maximum-likelihood approach the data are considered to be Poisson random variables and an EM algorithm is developed that iteratively estimates an unaliased image that is compensated for known imager-system blur while it simultaneously estimates the translational shifts. Although this algorithm yields high-resolution images from a sequence of randomly shifted frames, it requires significant computation time and cannot be implemented for real-time applications that use the currently available high-performance processors. The new image shifts are iteratively calculated by evaluation of a cost function that compares the shifted and interlaced data frames with the corresponding values in the algorithm's latest estimate of the high-resolution image. We present a registration algorithm that estimates the shifts in one step. The shift parameters provided by the new algorithm are accurate enough to eliminate the need for iterative recalculation of translational shifts. Using this shift information, we apply a simplified version of the EM algorithm to estimate a high-resolution image from a given sequence of video frames. The proposed modified EM algorithm has been found to reduce significantly the computational burden when compared with the original EM algorithm, thus making it more attractive for practical implementation. Both simulation and experimental results are presented to verify the effectiveness of the proposed technique.
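A standard non-iterative way to estimate translational shifts in a single step is phase correlation; the sketch below illustrates that general idea and is not the registration algorithm of the paper, which must also cope with aliasing and subpixel shifts. Subpixel accuracy would additionally require interpolating around the correlation peak or Fourier upsampling.

```python
import numpy as np

def phase_correlation_shift(ref, img, eps=1e-12):
    """Estimate the integer-pixel translation of img relative to ref by phase correlation."""
    F_ref, F_img = np.fft.fft2(ref), np.fft.fft2(img)
    cross = np.conj(F_ref) * F_img
    cross /= np.abs(cross) + eps                    # keep only the phase difference
    corr = np.real(np.fft.ifft2(cross))
    peak = np.array(np.unravel_index(np.argmax(corr), corr.shape), dtype=float)
    dims = np.array(corr.shape)
    peak[peak > dims / 2] -= dims[peak > dims / 2]  # wrap to signed shifts
    return peak                                     # (row shift, column shift)

rng = np.random.default_rng(10)
ref = rng.normal(size=(128, 128))
img = np.roll(np.roll(ref, 3, axis=0), -7, axis=1)  # frame shifted by (+3, -7) pixels
print(phase_correlation_shift(ref, img))
```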
Directly reconstructing principal components of heterogeneous particles from cryo-EM images.
Tagare, Hemant D; Kucukelbir, Alp; Sigworth, Fred J; Wang, Hongwei; Rao, Murali
2015-08-01
Structural heterogeneity of particles can be investigated by their three-dimensional principal components. This paper addresses the question of whether, and with what algorithm, the three-dimensional principal components can be directly recovered from cryo-EM images. The first part of the paper extends the Fourier slice theorem to covariance functions showing that the three-dimensional covariance, and hence the principal components, of a heterogeneous particle can indeed be recovered from two-dimensional cryo-EM images. The second part of the paper proposes a practical algorithm for reconstructing the principal components directly from cryo-EM images without the intermediate step of calculating covariances. This algorithm is based on maximizing the posterior likelihood using the Expectation-Maximization algorithm. The last part of the paper applies this algorithm to simulated data and to two real cryo-EM data sets: a data set of the 70S ribosome with and without Elongation Factor-G (EF-G), and a data set of the influenza virus RNA dependent RNA Polymerase (RdRP). The first principal component of the 70S ribosome data set reveals the expected conformational changes of the ribosome as the EF-G binds and unbinds. The first principal component of the RdRP data set reveals a conformational change in the two dimers of the RdRP. Copyright © 2015 Elsevier Inc. All rights reserved.
Lasnon, Charline; Dugue, Audrey Emmanuelle; Briand, Mélanie; Blanc-Fournier, Cécile; Dutoit, Soizic; Louis, Marie-Hélène; Aide, Nicolas
2015-06-01
We compared conventional filtered back-projection (FBP), two-dimensional-ordered subsets expectation maximization (OSEM) and maximum a posteriori (MAP) NEMA NU 4-optimized reconstructions for therapy assessment. Varying reconstruction settings were used to determine the parameters for optimal image quality with two NEMA NU 4 phantom acquisitions. Subsequently, data from two experiments in which nude rats bearing subcutaneous tumors had received a dual PI3K/mTOR inhibitor were reconstructed with the NEMA NU 4-optimized parameters. Mann-Whitney tests were used to compare mean standardized uptake value (SUV(mean)) variations among groups. All NEMA NU 4-optimized reconstructions showed the same 2-deoxy-2-[(18)F]fluoro-D-glucose ([(18)F]FDG) kinetic patterns and detected a significant difference in SUV(mean) relative to day 0 between controls and treated groups for all time points with comparable p values. In the framework of therapy assessment in rats bearing subcutaneous tumors, all algorithms available on the Inveon system performed equally.
An EM Algorithm for Maximum Likelihood Estimation of Process Factor Analysis Models
ERIC Educational Resources Information Center
Lee, Taehun
2010-01-01
In this dissertation, an Expectation-Maximization (EM) algorithm is developed and implemented to obtain maximum likelihood estimates of the parameters and the associated standard error estimates characterizing temporal flows for the latent variable time series following stationary vector ARMA processes, as well as the parameters defining the…
Advanced Fast 3-D Electromagnetic Solver for Microwave Tomography Imaging.
Simonov, Nikolai; Kim, Bo-Ra; Lee, Kwang-Jae; Jeon, Soon-Ik; Son, Seong-Ho
2017-10-01
This paper describes a fast-forward electromagnetic solver (FFS) for the image reconstruction algorithm of our microwave tomography system. Our apparatus is a preclinical prototype of a biomedical imaging system, designed for the purpose of early breast cancer detection. It operates in the 3-6-GHz frequency band using a circular array of probe antennas immersed in a matching liquid; it produces image reconstructions of the permittivity and conductivity profiles of the breast under examination. Our reconstruction algorithm solves the electromagnetic (EM) inverse problem and takes into account the real EM properties of the probe antenna array as well as the influence of the patient's body and that of the upper metal screen sheet. This FFS algorithm is much faster than conventional EM simulation solvers. In comparison, in the same PC, the CST solver takes ~45 min, while the FFS takes ~1 s of effective simulation time for the same EM model of a numerical breast phantom.
Monte Carlo simulation of PET and SPECT imaging of {sup 90}Y
DOE Office of Scientific and Technical Information (OSTI.GOV)
Takahashi, Akihiko, E-mail: takahsr@hs.med.kyushu-u.ac.jp; Sasaki, Masayuki; Himuro, Kazuhiko
2015-04-15
Purpose: Yttrium-90 ((90)Y) is traditionally thought of as a pure beta emitter, and is used in targeted radionuclide therapy, with imaging performed using bremsstrahlung single-photon emission computed tomography (SPECT). However, because (90)Y also emits positrons through internal pair production with a very small branching ratio, positron emission tomography (PET) imaging is also available. Because of the insufficient image quality of (90)Y bremsstrahlung SPECT, PET imaging has been suggested as an alternative. In this paper, the authors present a Monte Carlo-based simulation-reconstruction framework for (90)Y to comprehensively analyze the PET and SPECT imaging techniques and to quantitatively consider the disadvantages associated with them. Methods: Our PET and SPECT simulation modules were developed using Monte Carlo simulation of Electrons and Photons (MCEP), developed by Dr. S. Uehara. The PET code (MCEP-PET) generates a sinogram and reconstructs the tomographic image using a time-of-flight ordered subset expectation maximization (TOF-OSEM) algorithm with attenuation compensation. To evaluate MCEP-PET, simulated results of (18)F PET imaging were compared with experimental results. The results confirmed that MCEP-PET can simulate the experimental results very well. The SPECT code (MCEP-SPECT) models the collimator and NaI detector system, and generates the projection images and projection data. To save computational time, the authors adopt prerecorded (90)Y bremsstrahlung photon data calculated by MCEP. The projection data are also reconstructed using the OSEM algorithm. The authors simulated PET and SPECT images of a water phantom containing six hot spheres filled with different concentrations of (90)Y without background activity. The amount of activity was 163 MBq, with an acquisition time of 40 min. Results: The simulated (90)Y-PET image accurately reproduced the experimental results. The PET image is visually superior to the SPECT image because of the low background noise. The simulation reveals that the number of detected photons in SPECT is comparable to that in PET, but a large fraction (approximately 75%) of scattered and penetrating photons contaminates the SPECT image. The lower limit of (90)Y detection in the SPECT image was approximately 200 kBq/ml, while that in the PET image was approximately 100 kBq/ml. Conclusions: By comparing the background noise level and the image concentration profile of both techniques, PET image quality was determined to be superior to that of bremsstrahlung SPECT. The developed simulation codes will be very useful in future investigations of PET and bremsstrahlung SPECT imaging of (90)Y.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Song, Hyun-Seob; Goldberg, Noam; Mahajan, Ashutosh
Elementary (flux) modes (EMs) have served as a valuable tool for investigating structural and functional properties of metabolic networks. Identification of the full set of EMs in genome-scale networks remains challenging due to combinatorial explosion of EMs in complex networks. It is often, however, that only a small subset of relevant EMs needs to be known, for which optimization-based sequential computation is a useful alternative. Most of the currently available methods along this line are based on the iterative use of mixed integer linear programming (MILP), the effectiveness of which significantly deteriorates as the number of iterations builds up. To alleviate the computational burden associated with the MILP implementation, we here present a novel optimization algorithm termed alternate integer linear programming (AILP). Results: Our algorithm was designed to iteratively solve a pair of integer programming (IP) and linear programming (LP) problems to compute EMs in a sequential manner. In each step, the IP identifies a minimal subset of reactions, the deletion of which disables all previously identified EMs. Thus, a subsequent LP solution subject to this reaction deletion constraint becomes a distinct EM. In cases where no feasible LP solution is available, IP-derived reaction deletion sets represent minimal cut sets (MCSs). Despite the additional computation of MCSs, AILP achieved significant time reduction in computing EMs by orders of magnitude. The proposed AILP algorithm not only offers a computational advantage in the EM analysis of genome-scale networks, but also improves the understanding of the linkage between EMs and MCSs.
EM in high-dimensional spaces.
Draper, Bruce A; Elliott, Daniel L; Hayes, Jeremy; Baek, Kyungim
2005-06-01
This paper considers fitting a mixture of Gaussians model to high-dimensional data in scenarios where there are fewer data samples than feature dimensions. Issues that arise when using principal component analysis (PCA) to represent Gaussian distributions inside Expectation-Maximization (EM) are addressed, and a practical algorithm results. Unlike other algorithms that have been proposed, this algorithm does not try to compress the data to fit low-dimensional models. Instead, it models Gaussian distributions in the (N - 1)-dimensional space spanned by the N data samples. We are able to show that this algorithm converges on data sets where low-dimensional techniques do not.
The E-Step of the MGROUP EM Algorithm. Program Statistics Research Technical Report No. 93-37.
ERIC Educational Resources Information Center
Thomas, Neal
Mislevy (1984, 1985) introduced an EM algorithm for estimating the parameters of a latent distribution model that is used extensively by the National Assessment of Educational Progress. Second order asymptotic corrections are derived and applied along with more common first order asymptotic corrections to approximate the expectations required by…
Automatic CT Brain Image Segmentation Using Two Level Multiresolution Mixture Model of EM
NASA Astrophysics Data System (ADS)
Jiji, G. Wiselin; Dehmeshki, Jamshid
2014-04-01
Tissue classification in computed tomography (CT) brain images is an important issue in the analysis of several brain dementias. A combination of different approaches to the segmentation of brain images is presented in this paper. A multiresolution algorithm is proposed, along with scaled versions using a Gaussian filter and wavelet analysis, that extends the expectation maximization (EM) algorithm. The proposed method is found to be less sensitive to noise and to produce more accurate image segmentation than traditional EM. Moreover, the algorithm has been applied to 20 CT datasets of the human brain and compared with other works. The segmentation results show that the proposed approach achieves more promising results, and the results have been verified by clinicians.
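A hedged, minimal sketch of the general idea (not the paper's exact two-level method): smooth a slice at a coarser scale, fit an intensity Gaussian mixture with EM, then warm-start EM at full resolution from the coarse-scale parameters. The synthetic image and parameter values are illustrative assumptions.

```python
import numpy as np
from scipy.ndimage import gaussian_filter
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(1)
# Toy "CT slice": two intensity classes plus noise.
image = rng.normal(loc=np.repeat([50.0, 120.0], 2048), scale=10.0).reshape(64, 64)

coarse = gaussian_filter(image, sigma=2.0)                  # coarser scale
gmm = GaussianMixture(n_components=2, random_state=0).fit(coarse.reshape(-1, 1))

# Warm-start EM at full resolution from the coarse-scale parameters.
fine = GaussianMixture(n_components=2, means_init=gmm.means_,
                       weights_init=gmm.weights_, random_state=0)
labels = fine.fit_predict(image.reshape(-1, 1)).reshape(image.shape)
print(np.bincount(labels.ravel()))
```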
Optimization of the reconstruction parameters in [123I]FP-CIT SPECT
NASA Astrophysics Data System (ADS)
Niñerola-Baizán, Aida; Gallego, Judith; Cot, Albert; Aguiar, Pablo; Lomeña, Francisco; Pavía, Javier; Ros, Domènec
2018-04-01
The aim of this work was to obtain a set of parameters to be applied in [123I]FP-CIT SPECT reconstruction in order to minimize the error between standardized and true values of the specific uptake ratio (SUR) in dopaminergic neurotransmission SPECT studies. To this end, Monte Carlo simulation was used to generate a database of 1380 projection data-sets from 23 subjects, including normal cases and a variety of pathologies. Studies were reconstructed using filtered back projection (FBP) with attenuation correction and ordered subset expectation maximization (OSEM) with correction for different degradations (attenuation, scatter and PSF). The reconstruction parameters to be optimized were the cut-off frequency of a 2D Butterworth pre-filter in FBP, and the number of iterations and the full width at half maximum of a 3D Gaussian post-filter in OSEM. Reconstructed images were quantified using regions of interest (ROIs) derived from magnetic resonance scans and from the Automated Anatomical Labeling map. Results were standardized by applying a simple linear regression line obtained from the entire patient dataset. Our findings show that we can obtain a set of optimal parameters for each reconstruction strategy. The accuracy of the standardized SUR increases when the reconstruction method includes more corrections. The use of generic ROIs instead of subject-specific ROIs adds significant inaccuracies. Thus, after reconstruction with OSEM and correction for all degradations, subject-specific ROIs led to errors between standardized and true SUR values in the range [-0.5, +0.5] in 87% and 92% of the cases for caudate and putamen, respectively. These percentages dropped to 75% and 88% when the generic ROIs were used.
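A minimal sketch of the quantification and standardization step described above, assuming the SUR is computed from mean ROI counts against a reference region and standardized with a simple linear regression; the array contents are illustrative placeholders, not study data.

```python
import numpy as np

striatum_mean = np.array([4.1, 3.2, 1.8, 2.7])      # mean striatal ROI counts per study
reference_mean = np.array([1.0, 1.1, 0.9, 1.0])     # reference (non-specific) region
true_sur = np.array([3.4, 2.1, 0.9, 1.8])           # known simulated values

sur = (striatum_mean - reference_mean) / reference_mean

# Linear standardization: fit sur = a*true + b over the dataset, then invert.
a, b = np.polyfit(true_sur, sur, deg=1)
standardized_sur = (sur - b) / a
print(np.round(standardized_sur - true_sur, 3))      # residual error per study
```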
3D forward modeling and response analysis for marine CSEMs towed by two ships
NASA Astrophysics Data System (ADS)
Zhang, Bo; Yin, Chang-Chun; Liu, Yun-He; Ren, Xiu-Yan; Qi, Yan-Fu; Cai, Jing
2018-03-01
A dual-ship-towed marine electromagnetic (EM) system is a new marine exploration technology recently being developed in China. Compared with traditional marine EM systems, the new system tows the transmitters and receivers using two ships, rendering it unnecessary to position EM receivers at the seafloor in advance. This makes the system more flexible, allowing for different configurations (e.g., in-line, broadside, and azimuthal and concentric scanning) that can produce more detailed underwater structural information. We develop a three-dimensional goal-oriented adaptive forward modeling method for the new marine EM system and analyze the responses for four survey configurations. Ocean-bottom topography has a strong effect on the marine EM responses; thus, we develop a forward modeling algorithm based on the finite-element method and unstructured grids. To satisfy the requirements for modeling the moving transmitters of a dual-ship-towed EM system, we use a single mesh for each of the transmitter locations. This mitigates the mesh complexity by refining the grids near the transmitters and minimizes the computational cost. To generate a rational mesh while maintaining accuracy for a single transmitter, we develop a goal-oriented adaptive method with separate mesh refinements for areas around the transmitting source and those far away. To test the modeling algorithm and its accuracy, we compare the EM responses calculated by the proposed algorithm with semi-analytical results and with results from published sources. Furthermore, by analyzing the EM responses for the four survey configurations, we confirm that, compared with traditional marine EM systems with only an in-line array, a dual-ship-towed marine system can collect more data.
3D reconstruction of synapses with deep learning based on EM Images
NASA Astrophysics Data System (ADS)
Xiao, Chi; Rao, Qiang; Zhang, Dandan; Chen, Xi; Han, Hua; Xie, Qiwei
2017-03-01
Recently, due to the rapid development of the electron microscope (EM) with its high resolution, image stacks delivered by EM can be used to analyze a variety of components that are critical to understanding brain function. Since synaptic study is essential in neurobiology and synapses can be analyzed in EM stacks, automated routines for the reconstruction of synapses from EM images can become a very useful tool for analyzing large volumes of brain tissue and providing the ability to understand the mechanisms of the brain. In this article, we propose a novel automated method to realize 3D reconstruction of synapses for Automated Tape-collecting Ultramicrotome Scanning Electron Microscopy (ATUM-SEM) with deep learning. Unlike other reconstruction algorithms, which employ a classifier to segment synaptic clefts directly, we utilize a deep learning method together with a segmentation algorithm to obtain synaptic clefts and improve the accuracy of reconstruction. The proposed method contains five parts: (1) using a modified Moving Least Squares (MLS) deformation algorithm and Scale Invariant Feature Transform (SIFT) features to register adjacent sections, (2) adopting the Faster Region Convolutional Neural Network (Faster R-CNN) algorithm to detect synapses, (3) utilizing a screening method that takes context cues of synapses into consideration to reduce the false positive rate, (4) combining a practical morphology algorithm with a suitable fitting function to segment synaptic clefts and optimize their shape, (5) applying a plugin in FIJI to show the final 3D visualization of synapses. Experimental results on ATUM-SEM images demonstrate the effectiveness of our proposed method.
ERIC Educational Resources Information Center
Tian, Wei; Cai, Li; Thissen, David; Xin, Tao
2013-01-01
In item response theory (IRT) modeling, the item parameter error covariance matrix plays a critical role in statistical inference procedures. When item parameters are estimated using the EM algorithm, the parameter error covariance matrix is not an automatic by-product of item calibration. Cai proposed the use of Supplemented EM algorithm for…
NASA Astrophysics Data System (ADS)
Lee, Kyunghoon
To evaluate the maximum likelihood estimates (MLEs) of probabilistic principal component analysis (PPCA) parameters such as a factor-loading, PPCA can invoke an expectation-maximization (EM) algorithm, yielding an EM algorithm for PPCA (EM-PCA). In order to examine the benefits of the EM-PCA for aerospace engineering applications, this thesis attempts to qualitatively and quantitatively scrutinize the EM-PCA alongside both POD and gappy POD using high-dimensional simulation data. In pursuing qualitative investigations, the theoretical relationship between POD and PPCA is transparent such that the factor-loading MLE of PPCA, evaluated by the EM-PCA, pertains to an orthogonal basis obtained by POD. By contrast, the analytical connection between gappy POD and the EM-PCA is nebulous because they distinctively approximate missing data due to their antithetical formulation perspectives: gappy POD solves a least-squares problem whereas the EM-PCA relies on the expectation of the observation probability model. To juxtapose both gappy POD and the EM-PCA, this research proposes a unifying least-squares perspective that embraces the two disparate algorithms within a generalized least-squares framework. As a result, the unifying perspective reveals that both methods address similar least-squares problems; however, their formulations contain dissimilar bases and norms. Furthermore, this research delves into the ramifications of the different bases and norms that will eventually characterize the traits of both methods. To this end, two hybrid algorithms of gappy POD and the EM-PCA are devised and compared to the original algorithms for a qualitative illustration of the different basis and norm effects. After all, a norm reflecting a curve-fitting method is found to more significantly affect estimation error reduction than a basis for two example test data sets: one is absent of data only at a single snapshot and the other misses data across all the snapshots. From a numerical performance aspect, the EM-PCA is computationally less efficient than POD for intact data since it suffers from slow convergence inherited from the EM algorithm. For incomplete data, this thesis quantitatively found that the number of data missing snapshots predetermines whether the EM-PCA or gappy POD outperforms the other because of the computational cost of a coefficient evaluation, resulting from a norm selection. For instance, gappy POD demands laborious computational effort in proportion to the number of data-missing snapshots as a consequence of the gappy norm. In contrast, the computational cost of the EM-PCA is invariant to the number of data-missing snapshots thanks to the L2 norm. In general, the higher the number of data-missing snapshots, the wider the gap between the computational cost of gappy POD and the EM-PCA. Based on the numerical experiments reported in this thesis, the following criterion is recommended regarding the selection between gappy POD and the EM-PCA for computational efficiency: gappy POD for an incomplete data set containing a few data-missing snapshots and the EM-PCA for an incomplete data set involving multiple data-missing snapshots. Last, the EM-PCA is applied to two aerospace applications in comparison to gappy POD as a proof of concept: one with an emphasis on basis extraction and the other with a focus on missing data reconstruction for a given incomplete data set with scattered missing data. 
The first application exploits the EM-PCA to efficiently construct reduced-order models of engine deck responses obtained by the numerical propulsion system simulation (NPSS), some of whose results are absent due to failed analyses caused by numerical instability. Model-prediction tests validate that engine performance metrics estimated by the reduced-order NPSS model exhibit close agreement with those directly obtained by NPSS. Similarly, the second application illustrates that the EM-PCA is significantly more cost effective than gappy POD at repairing spurious PIV measurements obtained from acoustically excited, bluff-body jet flow experiments. The EM-PCA reduces computational cost by factors of 8 to 19 compared to gappy POD while generating the same restoration results as those evaluated by gappy POD. All in all, through comprehensive theoretical and numerical investigation, this research establishes that the EM-PCA is an efficient alternative to gappy POD for an incomplete data set containing missing data over an entire data set. (Abstract shortened by UMI.)
NASA Astrophysics Data System (ADS)
Jin, Xiao; Chan, Chung; Mulnix, Tim; Panin, Vladimir; Casey, Michael E.; Liu, Chi; Carson, Richard E.
2013-08-01
Whole-body PET/CT scanners are important clinical and research tools to study tracer distribution throughout the body. In whole-body studies, respiratory motion results in image artifacts. We have previously demonstrated for brain imaging that, when provided with accurate motion data, event-by-event correction has better accuracy than frame-based methods. Therefore, the goal of this work was to develop a list-mode reconstruction with novel physics modeling for the Siemens Biograph mCT with event-by-event motion correction, based on the MOLAR platform (Motion-compensation OSEM List-mode Algorithm for Resolution-Recovery Reconstruction). Application of MOLAR for the mCT required two algorithmic developments. First, in routine studies, the mCT collects list-mode data in 32 bit packets, where averaging of lines-of-response (LORs) by axial span and angular mashing reduced the number of LORs so that 32 bits are sufficient to address all sinogram bins. This degrades spatial resolution. In this work, we proposed a probabilistic LOR (pLOR) position technique that addresses axial and transaxial LOR grouping in 32 bit data. Second, two simplified approaches for 3D time-of-flight (TOF) scatter estimation were developed to accelerate the computationally intensive calculation without compromising accuracy. The proposed list-mode reconstruction algorithm was compared to the manufacturer's point spread function + TOF (PSF+TOF) algorithm. Phantom, animal, and human studies demonstrated that MOLAR with pLOR gives slightly faster contrast recovery than the PSF+TOF algorithm that uses the average 32 bit LOR sinogram positioning. Moving phantom and a whole-body human study suggested that event-by-event motion correction reduces image blurring caused by respiratory motion. We conclude that list-mode reconstruction with pLOR positioning provides a platform to generate high quality images for the mCT, and to recover fine structures in whole-body PET scans through event-by-event motion correction.
PEG Enhancement for EM1 and EM2+ Missions
NASA Technical Reports Server (NTRS)
Von der Porten, Paul; Ahmad, Naeem; Hawkins, Matt
2018-01-01
NASA is currently building the Space Launch System (SLS) Block-1 launch vehicle for the Exploration Mission 1 (EM-1) test flight. The next evolution of SLS, the Block-1B Exploration Mission 2 (EM-2), is currently being designed. The Block-1 and Block-1B vehicles will use the Powered Explicit Guidance (PEG) algorithm. Due to the relatively low thrust-to-weight ratio of the Exploration Upper Stage (EUS), certain enhancements to the Block-1 PEG algorithm are needed to perform Block-1B missions. In order to accommodate mission design for EM-2 and beyond, PEG has been significantly improved since its use on the Space Shuttle program. The current version of PEG has the ability to switch to different targets during Core Stage (CS) or EUS flight, and can automatically reconfigure for a single Engine Out (EO) scenario, loss of communication with the Launch Abort System (LAS), and Inertial Navigation System (INS) failure. The Thrust Factor (TF) algorithm uses measured state information in addition to a priori parameters, providing PEG with an improved estimate of propulsion information. This provides robustness against unknown or undetected engine failures. A loft parameter input allows LAS jettison while maximizing payload mass. The current PEG algorithm is now able to handle various classes of missions with burn arcs much longer than were seen in the shuttle program. These missions include targeting a circular LEO orbit with a low-thrust, long-burn-duration upper stage, targeting a highly eccentric Trans-Lunar Injection (TLI) orbit, targeting a disposal orbit using the low-thrust Reaction Control System (RCS), and targeting a hyperbolic orbit. This paper will describe the design and implementation of the TF algorithm, the strategy to handle EO in various flight regimes, algorithms to cover off-nominal conditions, and other enhancements to the Block-1 PEG algorithm. This paper illustrates challenges posed by the Block-1B vehicle, and results show that the improved PEG algorithm is capable for use on the SLS Block 1-B vehicle as part of the Guidance, Navigation, and Control System.
Fu, J C; Chen, C C; Chai, J W; Wong, S T C; Li, I C
2010-06-01
We propose an automatic hybrid image segmentation model that integrates the statistical expectation maximization (EM) model and the spatial pulse coupled neural network (PCNN) for brain magnetic resonance imaging (MRI) segmentation. In addition, an adaptive mechanism is developed to fine-tune the PCNN parameters. The EM model serves two functions: evaluation of the PCNN image segmentation and adaptive adjustment of the PCNN parameters for optimal segmentation. To evaluate the performance of the adaptive EM-PCNN, we use it to segment MR brain images into gray matter (GM), white matter (WM) and cerebrospinal fluid (CSF). The performance of the adaptive EM-PCNN is compared with that of the non-adaptive EM-PCNN, EM, and Bias Corrected Fuzzy C-Means (BCFCM) algorithms. The result is four sets of boundaries for the GM and the brain parenchyma (GM+WM), the two regions of most interest in medical research and clinical applications. Each set of boundaries is compared with the gold standard to evaluate the segmentation performance. The adaptive EM-PCNN significantly outperforms the non-adaptive EM-PCNN, EM, and BCFCM algorithms in gray matter segmentation. In brain parenchyma segmentation, the adaptive EM-PCNN significantly outperforms the BCFCM only. However, the adaptive EM-PCNN is better than the non-adaptive EM-PCNN and EM on average. We conclude that of the three approaches, the adaptive EM-PCNN yields the best results for gray matter and brain parenchyma segmentation. Copyright 2009 Elsevier Ltd. All rights reserved.
Kim, Ye-seul; Park, Hye-suk; Lee, Haeng-Hwa; Choi, Young-Wook; Choi, Jae-Gu; Kim, Hak Hee; Kim, Hee-Joung
2016-02-01
Digital breast tomosynthesis (DBT) is a recently developed system for three-dimensional imaging that offers the potential to reduce the false positives of mammography by preventing tissue overlap. Many previous qualitative evaluations of digital breast tomosynthesis used phantoms with unrealistic models and with heterogeneous background and noise, which are not representative of real breasts. The purpose of the present work was to compare reconstruction algorithms for DBT by using various breast phantoms; validation was also performed by using patient images. DBT was performed by using a prototype unit that was optimized for very low exposures and rapid readout. Three algorithms were compared: a back-projection (BP) algorithm, a filtered BP (FBP) algorithm, and an iterative expectation maximization (EM) algorithm. To compare the algorithms, three types of breast phantoms (homogeneous background phantom, heterogeneous background phantom, and anthropomorphic breast phantom) were evaluated, and clinical images were also reconstructed by using the different reconstruction algorithms. The in-plane image quality was evaluated based on the line profile and the contrast-to-noise ratio (CNR), and out-of-plane artifacts were evaluated by means of the artifact spread function (ASF). Parenchymal texture features of contrast and homogeneity were computed based on reconstructed images of an anthropomorphic breast phantom. The clinical images were studied to validate the effect of the reconstruction algorithms. The results showed that the CNRs of masses reconstructed by using the EM algorithm were slightly higher than those obtained by using the BP algorithm, whereas the FBP algorithm yielded a much lower CNR due to its high fluctuations of background noise. The FBP algorithm provides the best conspicuity for larger calcifications by enhancing their contrast and sharpness more than the other algorithms; however, in the case of small and low-contrast microcalcifications, FBP reduced detectability due to its increased noise. The EM algorithm yielded high conspicuity for both microcalcifications and masses and yielded better ASFs in terms of the full width at half maximum. In terms of texture analysis, the FBP algorithm showed higher contrast and lower homogeneity than the other algorithms. The patient images reconstructed using the EM algorithm showed high visibility of low-contrast masses with clear borders. In this study, we compared three reconstruction algorithms by using various kinds of breast phantoms and patient cases. Future work using these algorithms and considering the type of breast and the acquisition techniques used (e.g., angular range, dose distribution) should include the use of actual patients or patient-like phantoms to increase the potential for practical applications.
Hoyng, Lieke L; Frings, Virginie; Hoekstra, Otto S; Kenny, Laura M; Aboagye, Eric O; Boellaard, Ronald
2015-01-01
Positron emission tomography (PET) with (18)F-3'-deoxy-3'-fluorothymidine ([(18)F]FLT) can be used to assess tumour proliferation. A kinetic-filtering (KF) classification algorithm has been suggested for segmentation of tumours in dynamic [(18)F]FLT PET data. The aim of the present study was to evaluate KF segmentation and its test-retest performance in [(18)F]FLT PET in non-small cell lung cancer (NSCLC) patients. Nine NSCLC patients underwent two 60-min dynamic [(18)F]FLT PET scans within 7 days prior to treatment. Dynamic scans were reconstructed with filtered back projection (FBP) as well as with ordered subsets expectation maximisation (OSEM). Twenty-eight lesions were identified by an experienced physician. Segmentation was performed using KF applied to the dynamic data set and a source-to-background corrected 50% threshold (A50%) was applied to the sum image of the last three frames (45- to 60-min p.i.). Furthermore, several adaptations of KF were tested. Both for KF and A50% test-retest (TRT) variability of metabolically active tumour volume and standard uptake value (SUV) were evaluated. KF performed better on OSEM- than on FBP-reconstructed PET images. The original KF implementation segmented 15 out of 28 lesions, whereas A50% segmented each lesion. Adapted KF versions, however, were able to segment 26 out of 28 lesions. In the best performing adapted versions, metabolically active tumour volume and SUV TRT variability was similar to those of A50%. KF misclassified certain tumour areas as vertebrae or liver tissue, which was shown to be related to heterogeneous [(18)F]FLT uptake areas within the tumour. For [(18)F]FLT PET studies in NSCLC patients, KF and A50% show comparable tumour volume segmentation performance. The KF method needs, however, a site-specific optimisation. The A50% is therefore a good alternative for tumour segmentation in NSCLC [(18)F]FLT PET studies in multicentre studies. Yet, it was observed that KF has the potential to subsegment lesions in high and low proliferative areas.
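A minimal sketch of a source-to-background corrected 50% threshold (A50%-style) segmentation on a summed uptake image. The formula below is a common formulation and the values are illustrative assumptions, not the authors' exact implementation.

```python
import numpy as np

uptake = np.array([[1.0, 1.2, 1.1],
                   [1.3, 8.0, 6.5],
                   [1.1, 5.9, 1.2]])      # toy summed image (e.g. 45-60 min frames)
background = 1.1                          # estimated local background uptake

# Background-corrected 50% threshold: halfway between background and the maximum.
threshold = background + 0.5 * (uptake.max() - background)
mask = uptake >= threshold
print(threshold, mask.sum())              # threshold value and segmented voxel count
```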
Zhou, Zhengdong; Guan, Shaolin; Xin, Runchao; Li, Jianbo
2018-06-01
Contrast-enhanced subtracted breast computed tomography (CESBCT) images acquired using an energy-resolving photon counting detector can be helpful to enhance the visibility of breast tumors. In such technology, one challenge is the limited number of photons in each energy bin, thereby possibly leading to high noise in the separate images from each energy bin, the projection-based weighted image, and the subtracted image. In conventional low-dose CT imaging, iterative image reconstruction provides a superior signal-to-noise ratio compared with the filtered back projection (FBP) algorithm. In this paper, maximum a posteriori expectation maximization (MAP-EM) based on projection-based weighting imaging is proposed for the reconstruction of CESBCT images acquired using an energy-resolving photon counting detector, and its performance was investigated in terms of contrast-to-noise ratio (CNR). The simulation study shows that MAP-EM based on projection-based weighting imaging can improve the CNR in CESBCT images by 117.7%-121.2% compared with the FBP-based projection-based weighting imaging method. When compared with energy-integrating imaging that uses the MAP-EM algorithm, projection-based weighting imaging that uses the MAP-EM algorithm can improve the CNR of CESBCT images by 10.5%-13.3%. In conclusion, MAP-EM based on projection-based weighting imaging shows significant improvement in the CNR of the CESBCT image compared with FBP based on projection-based weighting imaging, and MAP-EM based on projection-based weighting imaging outperforms MAP-EM based on energy-integrating imaging for CESBCT imaging.
Leukocyte Recognition Using EM-Algorithm
NASA Astrophysics Data System (ADS)
Colunga, Mario Chirinos; Siordia, Oscar Sánchez; Maybank, Stephen J.
This document describes a method for classifying images of blood cells. Three different classes of cells are used: Band Neutrophils, Eosinophils and Lymphocytes. The image pattern is projected down to a lower dimensional sub space using PCA; the probability density function for each class is modeled with a Gaussian mixture using the EM-Algorithm. A new cell image is classified using the maximum a posteriori decision rule.
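A hedged sketch of the classification pipeline described above: project feature vectors with PCA, fit one Gaussian mixture per class with EM, and classify a new sample by the maximum a posteriori rule. The data here are synthetic placeholders rather than blood-cell images, and the dimensionalities are assumptions.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(2)
X = rng.normal(size=(90, 64))                 # 90 samples, 64 raw image features
y = np.repeat([0, 1, 2], 30)                  # three cell classes

pca = PCA(n_components=5).fit(X)              # lower-dimensional subspace
Z = pca.transform(X)

priors = np.bincount(y) / len(y)
models = [GaussianMixture(n_components=2, random_state=0).fit(Z[y == c])
          for c in range(3)]                  # one EM-fitted mixture per class

def classify(x):
    z = pca.transform(x.reshape(1, -1))
    log_post = [m.score_samples(z)[0] + np.log(p) for m, p in zip(models, priors)]
    return int(np.argmax(log_post))           # maximum a posteriori decision

print(classify(X[0]))
```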
Noise-enhanced convolutional neural networks.
Audhkhasi, Kartik; Osoba, Osonde; Kosko, Bart
2016-06-01
Injecting carefully chosen noise can speed convergence in the backpropagation training of a convolutional neural network (CNN). The Noisy CNN algorithm speeds training on average because the backpropagation algorithm is a special case of the generalized expectation-maximization (EM) algorithm and because such carefully chosen noise always speeds up the EM algorithm on average. The CNN framework gives a practical way to learn and recognize images because backpropagation scales with training data. It has only linear time complexity in the number of training samples. The Noisy CNN algorithm finds a special separating hyperplane in the network's noise space. The hyperplane arises from the likelihood-based positivity condition that noise-boosts the EM algorithm. The hyperplane cuts through a uniform-noise hypercube or Gaussian ball in the noise space depending on the type of noise used. Noise chosen from above the hyperplane speeds training on average. Noise chosen from below slows it on average. The algorithm can inject noise anywhere in the multilayered network. Adding noise to the output neurons reduced the average per-iteration training-set cross entropy by 39% on a standard MNIST image test set of handwritten digits. It also reduced the average per-iteration training-set classification error by 47%. Adding noise to the hidden layers can also reduce these performance measures. The noise benefit is most pronounced for smaller data sets because the largest EM hill-climbing gains tend to occur in the first few iterations. This noise effect can assist random sampling from large data sets because it allows a smaller random sample to give the same or better performance than a noiseless sample gives. Copyright © 2015 Elsevier Ltd. All rights reserved.
2013-01-01
Excerpt (report front matter, partially garbled in extraction): intelligently selecting waveform parameters using adaptive algorithms; the adaptive algorithms optimize the waveform parameters based on (1) the EM ... the environment. Subject terms: cognitive radar, adaptive sensing, spectrum sensing, multi-objective optimization, genetic algorithms, machine ... Listed figures include a detection and classification block diagram and a genetic algorithm block diagram.
Efficient sequential and parallel algorithms for finding edit distance based motifs.
Pal, Soumitra; Xiao, Peng; Rajasekaran, Sanguthevar
2016-08-18
Motif search is an important step in extracting meaningful patterns from biological data. The general problem of motif search is intractable and there is a pressing need to develop efficient, exact and approximation algorithms to solve this problem. In this paper, we present several novel, exact, sequential and parallel algorithms for solving the (l,d) Edit-distance-based Motif Search (EMS) problem: given two integers l, d and n biological strings, find all strings of length l that appear in each input string with at most d errors of types substitution, insertion and deletion. One popular technique to solve the problem is to explore, for each input string, the set of all possible l-mers that belong to the d-neighborhood of any substring of the input string and output those which are common to all input strings. We introduce a novel and provably efficient neighborhood exploration technique. We show that it is enough to consider the candidates in the neighborhood which are at a distance of exactly d. We compactly represent these candidate motifs using wildcard characters and efficiently explore them with very few repetitions. Our sequential algorithm uses a trie-based data structure to efficiently store and sort the candidate motifs. Our parallel algorithm, in a multi-core shared memory setting, uses arrays for storing and a novel modification of radix sort for sorting the candidate motifs. Algorithms for EMS are customarily evaluated on several challenging instances such as (8,1), (12,2), (16,3), (20,4), and so on. The best previously known algorithm, EMS1, is sequential and solves up to instance (16,3) in an estimated 3 days. Our sequential algorithms are more than 20 times faster on (16,3). On other hard instances such as (9,2), (11,3), (13,4), our algorithms are much faster. Our parallel algorithm achieves more than 600% scaling performance while using 16 threads. Our algorithms have pushed up the state of the art of EMS solvers and we believe that the techniques introduced in this paper are also applicable to other motif search problems such as Planted Motif Search (PMS) and Simple Motif Search (SMS).
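As a point of reference only (not the paper's neighborhood-exploration or trie-based algorithms), the sketch below checks whether a single candidate motif occurs in every input string with at most d edits, using a semi-global edit-distance dynamic program; the sequences and candidate are toy assumptions.

```python
def occurs_within(seq, motif, d):
    """True if some substring of seq is within edit distance d of motif."""
    m = len(motif)
    prev = list(range(m + 1))          # cost of aligning a motif prefix to nothing
    best = prev[m]
    for ch in seq:                     # free start position: each row begins at cost 0
        curr = [0] * (m + 1)
        for j in range(1, m + 1):
            cost = 0 if motif[j - 1] == ch else 1
            curr[j] = min(prev[j - 1] + cost,   # match / substitution
                          prev[j] + 1,          # sequence character left unmatched
                          curr[j - 1] + 1)      # motif character left unmatched
        best = min(best, curr[m])      # free end position in the sequence
        prev = curr
    return best <= d

sequences = ["ACGTACGT", "TTACGAAC", "GGACGTTT"]
print(all(occurs_within(s, "ACGT", 1) for s in sequences))
```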
Viewing Angle Classification of Cryo-Electron Microscopy Images Using Eigenvectors
Singer, A.; Zhao, Z.; Shkolnisky, Y.; Hadani, R.
2012-01-01
The cryo-electron microscopy (cryo-EM) reconstruction problem is to find the three-dimensional structure of a macromolecule given noisy versions of its two-dimensional projection images at unknown random directions. We introduce a new algorithm for identifying noisy cryo-EM images of nearby viewing angles. This identification is an important first step in three-dimensional structure determination of macromolecules from cryo-EM, because once identified, these images can be rotationally aligned and averaged to produce “class averages” of better quality. The main advantage of our algorithm is its extreme robustness to noise. The algorithm is also very efficient in terms of running time and memory requirements, because it is based on the computation of the top few eigenvectors of a specially designed sparse Hermitian matrix. These advantages are demonstrated in numerous numerical experiments. PMID:22506089
Approximate, computationally efficient online learning in Bayesian spiking neurons.
Kuhlmann, Levin; Hauser-Raspe, Michael; Manton, Jonathan H; Grayden, David B; Tapson, Jonathan; van Schaik, André
2014-03-01
Bayesian spiking neurons (BSNs) provide a probabilistic interpretation of how neurons perform inference and learning. Online learning in BSNs typically involves parameter estimation based on maximum-likelihood expectation-maximization (ML-EM) which is computationally slow and limits the potential of studying networks of BSNs. An online learning algorithm, fast learning (FL), is presented that is more computationally efficient than the benchmark ML-EM for a fixed number of time steps as the number of inputs to a BSN increases (e.g., 16.5 times faster run times for 20 inputs). Although ML-EM appears to converge 2.0 to 3.6 times faster than FL, the computational cost of ML-EM means that ML-EM takes longer to simulate to convergence than FL. FL also provides reasonable convergence performance that is robust to initialization of parameter estimates that are far from the true parameter values. However, parameter estimation depends on the range of true parameter values. Nevertheless, for a physiologically meaningful range of parameter values, FL gives very good average estimation accuracy, despite its approximate nature. The FL algorithm therefore provides an efficient tool, complementary to ML-EM, for exploring BSN networks in more detail in order to better understand their biological relevance. Moreover, the simplicity of the FL algorithm means it can be easily implemented in neuromorphic VLSI such that one can take advantage of the energy-efficient spike coding of BSNs.
Hudson, H M; Ma, J; Green, P
1994-01-01
Many algorithms for medical image reconstruction adopt versions of the expectation-maximization (EM) algorithm. In this approach, parameter estimates are obtained which maximize a complete data likelihood or penalized likelihood, in each iteration. Implicitly (and sometimes explicitly) penalized algorithms require smoothing of the current reconstruction in the image domain as part of their iteration scheme. In this paper, we discuss alternatives to EM which adapt Fisher's method of scoring (FS) and other methods for direct maximization of the incomplete data likelihood. Jacobi and Gauss-Seidel methods for non-linear optimization provide efficient algorithms applying FS in tomography. One approach uses smoothed projection data in its iterations. We investigate the convergence of Jacobi and Gauss-Seidel algorithms with clinical tomographic projection data.
Song, Hyun-Seob; Goldberg, Noam; Mahajan, Ashutosh; Ramkrishna, Doraiswami
2017-08-01
Elementary (flux) modes (EMs) have served as a valuable tool for investigating structural and functional properties of metabolic networks. Identification of the full set of EMs in genome-scale networks remains challenging due to the combinatorial explosion of EMs in complex networks. Often, however, only a small subset of relevant EMs needs to be known, for which optimization-based sequential computation is a useful alternative. Most of the currently available methods along this line are based on the iterative use of mixed integer linear programming (MILP), the effectiveness of which deteriorates significantly as the number of iterations builds up. To alleviate the computational burden associated with the MILP implementation, we here present a novel optimization algorithm termed alternate integer linear programming (AILP). Our algorithm was designed to iteratively solve a pair of integer programming (IP) and linear programming (LP) problems to compute EMs in a sequential manner. In each step, the IP identifies a minimal subset of reactions, the deletion of which disables all previously identified EMs. Thus, a subsequent LP solution subject to this reaction deletion constraint becomes a distinct EM. In cases where no feasible LP solution is available, IP-derived reaction deletion sets represent minimal cut sets (MCSs). Despite the additional computation of MCSs, AILP achieved significant time reduction in computing EMs, by orders of magnitude. The proposed AILP algorithm not only offers a computational advantage in the EM analysis of genome-scale networks, but also improves the understanding of the linkage between EMs and MCSs. The software is implemented in Matlab and is provided as supplementary information. Contact: hyunseob.song@pnnl.gov. Supplementary data are available at Bioinformatics online. Published by Oxford University Press 2017. This work was written by US Government employees and is in the public domain in the US.
NASA Astrophysics Data System (ADS)
Hui, Z.; Cheng, P.; Ziggah, Y. Y.; Nie, Y.
2018-04-01
Filtering is a key step for most applications of airborne LiDAR point clouds. Although many filtering algorithms have been put forward in recent years, most of them require parameter setting or threshold adjusting, which is time-consuming and reduces the degree of automation of the algorithm. To overcome this problem, this paper proposes a threshold-free filtering algorithm based on expectation-maximization. The proposed algorithm is developed based on the assumption that the point cloud can be seen as a mixture of Gaussian models. The separation of ground points and non-ground points from the point cloud can then be recast as the separation of a mixed Gaussian model. Expectation-maximization (EM) is applied to realize this separation. EM is used to calculate maximum likelihood estimates of the mixture parameters. Using the estimated parameters, the likelihood of each point belonging to ground or object can be computed. After several iterations, points can be labelled as the component with the larger likelihood. Furthermore, intensity information was also utilized to optimize the filtering results acquired using the EM method. The proposed algorithm was tested using two different datasets used in practice. Experimental results showed that the proposed method can filter non-ground points effectively. To quantitatively evaluate the proposed method, this paper adopted the dataset provided by the ISPRS for the test. The proposed algorithm obtains a total error of 4.48%, which is much lower than most of the eight classical filtering algorithms reported by the ISPRS.
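A minimal sketch of the core idea above, without the intensity refinement: treat point elevations as a two-component Gaussian mixture, run EM, and label each point ground or object by the larger responsibility. The synthetic elevations and sklearn usage are assumptions for illustration.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(3)
ground = rng.normal(100.0, 0.3, size=500)         # terrain heights (m)
objects = rng.normal(106.0, 2.0, size=200)        # buildings / vegetation
z = np.concatenate([ground, objects]).reshape(-1, 1)

gmm = GaussianMixture(n_components=2, random_state=0).fit(z)   # EM fit
resp = gmm.predict_proba(z)                        # per-point responsibilities
ground_comp = int(np.argmin(gmm.means_.ravel()))   # lower-mean component = ground
is_ground = resp[:, ground_comp] > 0.5
print(is_ground.sum(), "points labelled ground")
```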
Deterministic annealing for density estimation by multivariate normal mixtures
NASA Astrophysics Data System (ADS)
Kloppenburg, Martin; Tavan, Paul
1997-03-01
An approach to maximum-likelihood density estimation by mixtures of multivariate normal distributions for large high-dimensional data sets is presented. Conventionally that problem is tackled by notoriously unstable expectation-maximization (EM) algorithms. We remove these instabilities by the introduction of soft constraints, enabling deterministic annealing. Our developments are motivated by the proof that algorithmically stable fuzzy clustering methods that are derived from statistical physics analogs are special cases of EM procedures.
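A hedged one-dimensional sketch of deterministic-annealing EM for a two-component Gaussian mixture: responsibilities are computed from tempered likelihoods (exponent 1/T) and the temperature is lowered towards 1 across EM sweeps. The schedule and data are illustrative assumptions, not the paper's soft-constraint formulation.

```python
import numpy as np

rng = np.random.default_rng(4)
x = np.concatenate([rng.normal(-2.0, 1.0, 300), rng.normal(3.0, 1.0, 300)])

mu = np.array([-1.0, 1.0])                  # deliberately poor starting values
var = np.array([4.0, 4.0])
w = np.array([0.5, 0.5])

for T in [3.0, 2.0, 1.5, 1.0]:              # annealing schedule, cooled towards T = 1
    for _ in range(20):
        # E-step with tempered ("softened") responsibilities at temperature T
        logp = (np.log(w) - 0.5 * np.log(2 * np.pi * var)
                - 0.5 * (x[:, None] - mu) ** 2 / var)
        r = np.exp((logp - logp.max(axis=1, keepdims=True)) / T)
        r /= r.sum(axis=1, keepdims=True)
        # M-step: standard weighted Gaussian-mixture updates
        nk = r.sum(axis=0)
        mu = (r * x[:, None]).sum(axis=0) / nk
        var = (r * (x[:, None] - mu) ** 2).sum(axis=0) / nk
        w = nk / len(x)

print(np.round(mu, 2), np.round(var, 2))    # should approach the component parameters
```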
Conformal Electromagnetic Particle in Cell: A Review
Meierbachtol, Collin S.; Greenwood, Andrew D.; Verboncoeur, John P.; ...
2015-10-26
We review conformal (or body-fitted) electromagnetic particle-in-cell (EM-PIC) numerical solution schemes. Included is a chronological history of relevant particle physics algorithms often employed in these conformal simulations. We also provide brief mathematical descriptions of particle-tracking algorithms and current weighting schemes, along with a brief summary of major time-dependent electromagnetic solution methods. Several research areas are also highlighted for recommended future development of new conformal EM-PIC methods.
Optimisation of Combined Cycle Gas Turbine Power Plant in Intraday Market: Riga CHP-2 Example
NASA Astrophysics Data System (ADS)
Ivanova, P.; Grebesh, E.; Linkevics, O.
2018-02-01
In this research, the influence of a combined cycle gas turbine unit, optimised according to the previously developed EM & OM approach for use in the intraday market, on the generation portfolio is evaluated. The portfolio consists of two combined cycle gas turbine units. The introduced evaluation algorithm maintains the power and heat balance before and after the application of the EM & OM approach by making changes in the generation profile of the units. The aim of this algorithm is profit maximisation of the generation portfolio. The evaluation algorithm is implemented in the multi-paradigm numerical computing environment MATLAB on the example of Riga CHP-2. The results show that the use of the EM & OM approach in the intraday market can be profitable or unprofitable, depending on the initial state of the generation units in the intraday market and on the content of the generation portfolio.
Ning, Jing; Chen, Yong; Piao, Jin
2017-07-01
Publication bias occurs when the published research results are systematically unrepresentative of the population of studies that have been conducted, and is a potential threat to meaningful meta-analysis. The Copas selection model provides a flexible framework for correcting estimates and offers considerable insight into the publication bias. However, maximizing the observed likelihood under the Copas selection model is challenging because the observed data contain very little information on the latent variable. In this article, we study a Copas-like selection model and propose an expectation-maximization (EM) algorithm for estimation based on the full likelihood. Empirical simulation studies show that the EM algorithm and its associated inferential procedure performs well and avoids the non-convergence problem when maximizing the observed likelihood. © The Author 2017. Published by Oxford University Press. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
Clustering performance comparison using K-means and expectation maximization algorithms.
Jung, Yong Gyu; Kang, Min Soo; Heo, Jun
2014-11-14
Clustering is an important means of data mining based on separating data categories by similar features. Unlike classification algorithms, clustering belongs to the unsupervised type of algorithms. Two representatives of clustering algorithms are the K-means and the expectation maximization (EM) algorithms. Linear regression analysis was extended to the category-type dependent variable, while logistic regression was achieved using a linear combination of independent variables. To predict the possibility of occurrence of an event, a statistical approach is used. However, the classification of all data by means of logistic regression analysis alone cannot guarantee the accuracy of the results. In this paper, logistic regression analysis is applied to EM clusters and to the K-means clustering method for quality assessment of red wine, and a method is proposed for ensuring the accuracy of the classification results.
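A minimal sketch comparing the two algorithms discussed above on toy data, using sklearn's KMeans and GaussianMixture (its EM implementation). The red-wine workflow of the paper is not reproduced; the adjusted Rand index is used here only to compare the two label sets, since their cluster identifiers are arbitrary.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs
from sklearn.metrics import adjusted_rand_score
from sklearn.mixture import GaussianMixture

X, _ = make_blobs(n_samples=300, centers=3, cluster_std=1.5, random_state=0)

km_labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)
em_labels = GaussianMixture(n_components=3, random_state=0).fit_predict(X)

# Permutation-invariant agreement between the two partitions (1.0 = identical).
print(adjusted_rand_score(km_labels, em_labels))
```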
Cheng, Xiaoyin; Li, Zhoulei; Liu, Zhen; Navab, Nassir; Huang, Sung-Cheng; Keller, Ulrich; Ziegler, Sibylle; Shi, Kuangyu
2015-02-12
The separation of multiple PET tracers within an overlapping scan based on intrinsic differences of tracer pharmacokinetics is challenging, due to limited signal-to-noise ratio (SNR) of PET measurements and high complexity of fitting models. In this study, we developed a direct parametric image reconstruction (DPIR) method for estimating kinetic parameters and recovering single tracer information from rapid multi-tracer PET measurements. This is achieved by integrating a multi-tracer model in a reduced parameter space (RPS) into dynamic image reconstruction. This new RPS model is reformulated from an existing multi-tracer model and contains fewer parameters for kinetic fitting. Ordered-subsets expectation-maximization (OSEM) was employed to approximate log-likelihood function with respect to kinetic parameters. To incorporate the multi-tracer model, an iterative weighted nonlinear least square (WNLS) method was employed. The proposed multi-tracer DPIR (MTDPIR) algorithm was evaluated on dual-tracer PET simulations ([18F]FDG and [11C]MET) as well as on preclinical PET measurements ([18F]FLT and [18F]FDG). The performance of the proposed algorithm was compared to the indirect parameter estimation method with the original dual-tracer model. The respective contributions of the RPS technique and the DPIR method to the performance of the new algorithm were analyzed in detail. For the preclinical evaluation, the tracer separation results were compared with single [18F]FDG scans of the same subjects measured 2 days before the dual-tracer scan. The results of the simulation and preclinical studies demonstrate that the proposed MT-DPIR method can improve the separation of multiple tracers for PET image quantification and kinetic parameter estimations.
Beyond filtered backprojection: A reconstruction software package for ion beam microtomography data
NASA Astrophysics Data System (ADS)
Habchi, C.; Gordillo, N.; Bourret, S.; Barberet, Ph.; Jovet, C.; Moretto, Ph.; Seznec, H.
2013-01-01
A new version of the TomoRebuild data reduction software package is presented, for the reconstruction of scanning transmission ion microscopy tomography (STIMT) and particle induced X-ray emission tomography (PIXET) images. First, we present a state of the art of the reconstruction codes available for ion beam microtomography. The algorithm proposed here brings several advantages. It is a portable, multi-platform code, designed in C++ with well-separated classes for easier use and evolution. Data reduction is separated in different steps and the intermediate results may be checked if necessary. Although no additional graphic library or numerical tool is required to run the program as a command line, a user friendly interface was designed in Java, as an ImageJ plugin. All experimental and reconstruction parameters may be entered either through this plugin or directly in text format files. A simple standard format is proposed for the input of experimental data. Optional graphic applications using the ROOT interface may be used separately to display and fit energy spectra. Regarding the reconstruction process, the filtered backprojection (FBP) algorithm, already present in the previous version of the code, was optimized so that it is about 10 times as fast. In addition, Maximum Likelihood Expectation Maximization (MLEM) and its accelerated version Ordered Subsets Expectation Maximization (OSEM) algorithms were implemented. A detailed user guide in English is available. A reconstruction example of experimental data from a biological sample is given. It shows the capability of the code to reduce noise in the sinograms and to deal with incomplete data, which puts a new perspective on tomography using low number of projections or limited angle.
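A minimal sketch of the MLEM update mentioned above, run on a tiny dense system matrix; TomoRebuild's actual data handling and geometry are not reproduced, and the matrix, counts, and iteration number are assumptions.

```python
import numpy as np

rng = np.random.default_rng(5)
A = rng.random((32, 16))                  # toy projection operator (bins x pixels)
x_true = 10.0 * rng.random(16)            # "true" image
y = rng.poisson(A @ x_true)               # noisy sinogram counts

x = np.ones(16)                           # positive initial image estimate
sens = A.T @ np.ones(len(y))              # sensitivity image (back-projection of ones)
for _ in range(50):                       # MLEM iterations
    ratio = y / np.maximum(A @ x, 1e-12)  # measured / estimated projections
    x *= (A.T @ ratio) / sens             # multiplicative EM update keeps x >= 0
print(np.round(x[:5], 2), np.round(x_true[:5], 2))
```

OSEM applies the same multiplicative update using only a subset of the sinogram rows per sub-iteration and cycles through the subsets, which is where its acceleration over MLEM comes from.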
Making adjustments to event annotations for improved biological event extraction.
Baek, Seung-Cheol; Park, Jong C
2016-09-16
Current state-of-the-art approaches to biological event extraction train statistical models in a supervised manner on corpora annotated with event triggers and event-argument relations. Inspecting such corpora, we observe that there is ambiguity in the span of event triggers (e.g., "transcriptional activity" vs. 'transcriptional'), leading to inconsistencies across event trigger annotations. Such inconsistencies make it quite likely that similar phrases are annotated with different spans of event triggers, suggesting the possibility that a statistical learning algorithm misses an opportunity for generalizing from such event triggers. We anticipate that adjustments to the span of event triggers to reduce these inconsistencies would meaningfully improve the present performance of event extraction systems. In this study, we look into this possibility with the corpora provided by the 2009 BioNLP shared task as a proof of concept. We propose an Informed Expectation-Maximization (EM) algorithm, which trains models using the EM algorithm with a posterior regularization technique, which consults the gold-standard event trigger annotations in a form of constraints. We further propose four constraints on the possible event trigger annotations to be explored by the EM algorithm. The algorithm is shown to outperform the state-of-the-art algorithm on the development corpus in a statistically significant manner and on the test corpus by a narrow margin. The analysis of the annotations generated by the algorithm shows that there are various types of ambiguity in event annotations, even though they could be small in number.
Time-of-flight PET image reconstruction using origin ensembles.
Wülker, Christian; Sitek, Arkadiusz; Prevrhal, Sven
2015-03-07
The origin ensemble (OE) algorithm is a novel statistical method for minimum-mean-square-error (MMSE) reconstruction of emission tomography data. This method allows one to perform reconstruction entirely in the image domain, i.e. without the use of forward and backprojection operations. We have investigated the OE algorithm in the context of list-mode (LM) time-of-flight (TOF) PET reconstruction. In this paper, we provide a general introduction to MMSE reconstruction, and a statistically rigorous derivation of the OE algorithm. We show how to efficiently incorporate TOF information into the reconstruction process, and how to correct for random coincidences and scattered events. To examine the feasibility of LM-TOF MMSE reconstruction with the OE algorithm, we applied MMSE-OE and standard maximum-likelihood expectation-maximization (ML-EM) reconstruction to LM-TOF phantom data with a count number typically registered in clinical PET examinations. We analyzed the convergence behavior of the OE algorithm, and compared reconstruction time and image quality to that of the EM algorithm. In summary, during the reconstruction process, MMSE-OE contrast recovery (CRV) remained approximately the same, while background variability (BV) gradually decreased with an increasing number of OE iterations. The final MMSE-OE images exhibited lower BV and a slightly lower CRV than the corresponding ML-EM images. The reconstruction time of the OE algorithm was approximately 1.3 times longer. At the same time, the OE algorithm can inherently provide a comprehensive statistical characterization of the acquired data. This characterization can be utilized for further data processing, e.g. in kinetic analysis and image registration, making the OE algorithm a promising approach in a variety of applications.
Processing of Cryo-EM Movie Data.
Ripstein, Z A; Rubinstein, J L
2016-01-01
Direct detector device (DDD) cameras dramatically enhance the capabilities of electron cryomicroscopy (cryo-EM) due to their improved detective quantum efficiency (DQE) relative to other detectors. DDDs use semiconductor technology that allows micrographs to be recorded as movies rather than integrated individual exposures. Movies from DDDs improve cryo-EM in another, more surprising, way. DDD movies revealed beam-induced specimen movement as a major source of image degradation and provide a way to partially correct the problem by aligning frames or regions of frames to account for this specimen movement. In this chapter, we use a self-consistent mathematical notation to explain, compare, and contrast several of the most popular existing algorithms for computationally correcting specimen movement in DDD movies. We conclude by discussing future developments in algorithms for processing DDD movies that would extend the capabilities of cryo-EM even further. © 2016 Elsevier Inc. All rights reserved.
Semi-supervised Learning for Phenotyping Tasks.
Dligach, Dmitriy; Miller, Timothy; Savova, Guergana K
2015-01-01
Supervised learning is the dominant approach to automatic electronic health records-based phenotyping, but it is expensive due to the cost of manual chart review. Semi-supervised learning takes advantage of both scarce labeled and plentiful unlabeled data. In this work, we study a family of semi-supervised learning algorithms based on Expectation Maximization (EM) in the context of several phenotyping tasks. We first experiment with the basic EM algorithm. When the modeling assumptions are violated, basic EM leads to inaccurate parameter estimation. Augmented EM attenuates this shortcoming by introducing a weighting factor that downweights the unlabeled data. Cross-validation does not always lead to the best setting of the weighting factor and other heuristic methods may be preferred. We show that accurate phenotyping models can be trained with only a few hundred labeled (and a large number of unlabeled) examples, potentially providing substantial savings in the amount of the required manual chart review.
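A hedged sketch of the weighted ("augmented") semi-supervised EM idea above, using a one-dimensional Gaussian naive Bayes model in which unlabeled examples enter the M-step with weight lam < 1; the data, model, and weighting value are generic illustrations, not the study's phenotyping setup.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(6)
Xl = np.concatenate([rng.normal(0, 1, 20), rng.normal(4, 1, 20)])     # labeled data
yl = np.repeat([0, 1], 20)
Xu = np.concatenate([rng.normal(0, 1, 200), rng.normal(4, 1, 200)])   # unlabeled data
lam = 0.2                                              # down-weighting factor

mu = np.array([Xl[yl == 0].mean(), Xl[yl == 1].mean()])
sd = np.array([1.0, 1.0])
prior = np.array([0.5, 0.5])
xall = np.concatenate([Xl, Xu])

for _ in range(30):
    # E-step: class responsibilities for the unlabeled examples only
    like = prior * norm.pdf(Xu[:, None], mu, sd)
    r = like / like.sum(axis=1, keepdims=True)
    # M-step: labeled examples get weight 1, unlabeled responsibilities get weight lam
    for c in (0, 1):
        w = np.concatenate([(yl == c).astype(float), lam * r[:, c]])
        mu[c] = np.average(xall, weights=w)
        sd[c] = np.sqrt(np.average((xall - mu[c]) ** 2, weights=w))
        prior[c] = w.sum()
    prior /= prior.sum()

print(np.round(mu, 2))   # class means recovered mostly from the labeled data, refined by EM
```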
Zhu, Fei; Liu, Quan; Fu, Yuchen; Shen, Bairong
2014-01-01
The segmentation of structures in electron microscopy (EM) images is very important for neurobiological research. Low-resolution neuronal EM images contain noise, and generally few features are available for segmentation, so applying conventional approaches to identify neuron structures in EM images is not successful. We therefore present a multi-scale fused structure boundary detection algorithm in this study. In the algorithm, we first generate an EM image Gaussian pyramid; then, at each level of the pyramid, we utilize the Laplacian of Gaussian function (LoG) to obtain structure boundaries; finally, we assemble the detected boundaries using a fusion algorithm to attain a combined neuron structure image. Since the obtained neuron structures usually have gaps, we put forward a reinforcement learning-based boundary amendment method to connect the gaps in the detected boundaries. We use a SARSA(λ)-based curve traveling and amendment approach derived from reinforcement learning to repair the incomplete curves. Using this algorithm, a moving point starts from one end of the incomplete curve and walks through the image, where the decisions are supervised by the approximated curve model, with the aim of minimizing the connection cost until the gap is closed. Our approach provided stable and efficient structure segmentation. Test results using 30 EM images from ISBI 2012 indicated that both of our approaches, i.e., with or without boundary amendment, performed better than six conventional boundary detection approaches. In particular, after amendment, the Rand error and warping error, which are the most important performance measurements in structure segmentation, were reduced to very low values. The comparison with the benchmark method of ISBI 2012 and with recently developed methods also indicates that our method performs better for the accurate identification of substructures in EM images and is therefore useful for the identification of imaging features related to brain diseases.
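A hedged sketch of the multi-scale part of the pipeline above: build a Gaussian pyramid, take a Laplacian-of-Gaussian response at each level, and fuse the upsampled responses by a per-pixel maximum. The reinforcement-learning gap amendment step is not reproduced, and the image and threshold are synthetic assumptions.

```python
import numpy as np
from scipy.ndimage import gaussian_laplace, zoom

rng = np.random.default_rng(7)
image = rng.random((128, 128))
image[40:90, 40:90] += 2.0                        # a bright toy "structure"

levels, fused = 3, np.zeros_like(image)
for k in range(levels):
    reduced = zoom(image, 1.0 / (2 ** k), order=1)          # pyramid level k
    response = np.abs(gaussian_laplace(reduced, sigma=1.5))  # LoG boundary response
    fused = np.maximum(fused, zoom(response, 2 ** k, order=1)[:128, :128])

boundary = fused > fused.mean() + 2 * fused.std()            # crude fused boundary mask
print(boundary.sum(), "boundary pixels")
```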
Beam-induced motion correction for sub-megadalton cryo-EM particles.
Scheres, Sjors Hw
2014-08-13
In electron cryo-microscopy (cryo-EM), the electron beam that is used for imaging also causes the sample to move. This motion blurs the images and limits the resolution attainable by single-particle analysis. In a previous Research article (Bai et al., 2013) we showed that correcting for this motion by processing movies from fast direct-electron detectors allowed structure determination to near-atomic resolution from 35,000 ribosome particles. In this Research advance article, we show that an improved movie processing algorithm is applicable to a much wider range of specimens. The new algorithm estimates straight movement tracks by considering multiple particles that are close to each other in the field of view, and models the fall-off of high-resolution information content by radiation damage in a dose-dependent manner. Application of the new algorithm to four data sets illustrates its potential for significantly improving cryo-EM structures, even for particles that are smaller than 200 kDa. Copyright © 2014, Scheres.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Aldridge, David F.
2016-07-06
Program EMRECORD is a utility program designed to facilitate introduction of a 3D electromagnetic (EM) data acquisition configuration (or a “source-receiver recording geometry”) into EM forward modeling algorithms EMHOLE and FDEM. A precise description of the locations (in 3D space), orientations, types, and amplitudes/sensitivities, of all sources and receivers is an essential ingredient for forward modeling of EM wavefields.
Zeng, Nianyin; Wang, Zidong; Li, Yurong; Du, Min; Cao, Jie; Liu, Xiaohui
2013-12-01
In this paper, the expectation maximization (EM) algorithm is applied to the modeling of the nano-gold immunochromatographic assay (nano-GICA) via available time series of the measured signal intensities of the test and control lines. The model for the nano-GICA is developed as the stochastic dynamic model that consists of a first-order autoregressive stochastic dynamic process and a noisy measurement. By using the EM algorithm, the model parameters, the actual signal intensities of the test and control lines, as well as the noise intensity can be identified simultaneously. Three different time series data sets concerning the target concentrations are employed to demonstrate the effectiveness of the introduced algorithm. Several indices are also proposed to evaluate the inferred models. It is shown that the model fits the data very well.
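A minimal EM sketch for the kind of model described above (a scalar first-order autoregressive state observed through additive noise): the E-step is a Kalman filter plus RTS smoother, and the M-step updates the AR coefficient and the two noise variances in closed form. Initialization and variable names are illustrative; this is not the authors' nano-GICA code.

```python
# Hedged EM sketch for x_t = a*x_{t-1} + w_t (w ~ N(0, q)), y_t = x_t + v_t (v ~ N(0, r)).
import numpy as np

def em_ar1_plus_noise(y, n_iter=50, a=0.5, q=1.0, r=1.0):
    n = len(y)
    for _ in range(n_iter):
        # E-step: Kalman filter (simple initialization from the first sample).
        xf, Pf = np.zeros(n), np.zeros(n)
        xp, Pp = np.zeros(n), np.zeros(n)
        xf_prev, Pf_prev = y[0], r
        for t in range(n):
            xp[t] = a * xf_prev if t > 0 else xf_prev
            Pp[t] = a * a * Pf_prev + q if t > 0 else Pf_prev
            K = Pp[t] / (Pp[t] + r)
            xf[t] = xp[t] + K * (y[t] - xp[t])
            Pf[t] = (1 - K) * Pp[t]
            xf_prev, Pf_prev = xf[t], Pf[t]
        # E-step: RTS smoother.
        xs, Ps, J = xf.copy(), Pf.copy(), np.zeros(n)
        for t in range(n - 2, -1, -1):
            J[t] = Pf[t] * a / Pp[t + 1]
            xs[t] = xf[t] + J[t] * (xs[t + 1] - xp[t + 1])
            Ps[t] = Pf[t] + J[t] ** 2 * (Ps[t + 1] - Pp[t + 1])
        # M-step: closed-form updates of (a, q, r) from smoothed moments.
        S11 = np.sum(xs[1:] ** 2 + Ps[1:])
        S00 = np.sum(xs[:-1] ** 2 + Ps[:-1])
        S10 = np.sum(xs[1:] * xs[:-1] + J[:-1] * Ps[1:])   # lag-one cross term
        a = S10 / S00
        q = (S11 - 2 * a * S10 + a * a * S00) / (n - 1)
        r = np.mean((y - xs) ** 2 + Ps)
    return a, q, r, xs
```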
Effect of attenuation correction on image quality in emission tomography
NASA Astrophysics Data System (ADS)
Denisova, N. V.; Ondar, M. M.
2017-10-01
In this paper, mathematical modeling and computer simulations of myocardial perfusion SPECT imaging are performed. The main factors affecting the quality of reconstructed SPECT images are the anatomical structures, the diastolic volume of the myocardium, and the attenuation of gamma rays. The purpose of the present work is to study the effect of attenuation correction on image quality in emission tomography. A basic 2D model describing a Tc-99m distribution in a transaxial slice of the thoracic part of a patient body was designed. This model was used to construct four phantoms simulating various anatomical shapes: two male and two female patients with normal, obese, and slight builds were included in the study. A data acquisition model that includes the effects of non-uniform attenuation, collimator-detector response, and Poisson statistics was developed. The projection data were calculated for 60 views in accordance with the standard myocardial perfusion SPECT imaging protocol. Images were reconstructed using the OSEM algorithm, which is widely used in modern SPECT systems. Two types of patient examination procedures were simulated: SPECT without attenuation correction and SPECT/CT with attenuation correction. The results indicate a significant effect of attenuation correction on SPECT image quality.
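For reference, the OSEM update used in such reconstructions can be sketched with a generic system matrix; attenuation correction enters by folding the attenuation factors into that matrix. The dense-matrix representation and the interleaved subset scheme below are simplifying assumptions.

```python
# Hedged sketch of the OSEM multiplicative update with a generic system matrix A.
import numpy as np

def osem(A, y, n_subsets=6, n_iter=4, eps=1e-12):
    """A: (n_bins, n_voxels) system matrix; y: measured projections (n_bins,)."""
    n_bins, n_vox = A.shape
    x = np.ones(n_vox)                                   # uniform initial image
    subsets = [np.arange(s, n_bins, n_subsets) for s in range(n_subsets)]
    for _ in range(n_iter):
        for idx in subsets:
            A_s, y_s = A[idx], y[idx]
            fp = A_s @ x + eps                           # forward projection
            ratio = y_s / fp                             # measured / estimated
            x *= (A_s.T @ ratio) / (A_s.sum(axis=0) + eps)   # multiplicative update
    return x
```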
Rucher, Guillaume; Cameliere, Lucie; Fendri, Jihene; Abbas, Ahmed; Dupont, Kevin; Kamel, Said; Delcroix, Nicolas; Dupont, Axel; Berger, Ludovic; Manrique, Alain
2018-04-30
The purpose of this study was to assess the impact of positron emission tomography/X-ray computed tomography (PET/CT) acquisition and reconstruction parameters on the assessment of the mineralization process in a mouse model of atherosclerosis. All experiments were performed on a dedicated preclinical PET/CT system. CT was evaluated in five acquisition configurations using both a tungsten wire phantom for in-plane resolution assessment and a bar pattern phantom for cross-plane resolution. Furthermore, the radiation dose of these acquisition configurations was calculated. The PET system was assessed using longitudinal line sources to determine the optimal reconstruction parameters by measuring central resolution and its coefficient of variation. An in vivo PET study was performed using uremic ApoE-/-, non-uremic ApoE-/-, and control mice to evaluate optimal PET reconstruction parameters for the detection of sodium [18F]fluoride (Na[18F]F) aortic uptake and for quantitative measurement of Na[18F]F bone influx (Ki) with a Patlak analysis. For CT, the use of the 1 × 1 and 2 × 2 detector binning modes increased both in-plane and cross-plane resolution. However, the resolution improvement (163 to 62 μm for in-plane resolution) was associated with an important radiation dose increase (1.67 to 32.78 Gy). With PET, the 3D-ordered subset expectation maximization (3D-OSEM) algorithm improved the central resolution compared to filtered back projection (1.42 ± 0.35 mm vs. 1.91 ± 0.08 mm, p < 0.001). The use of 3D-OSEM with eight iterations and a zoom factor of 2 yielded optimal PET resolution for preclinical study (FWHM = 0.98 mm). These PET reconstruction parameters allowed the detection of Na[18F]F aortic uptake in 3/14 ApoE-/- mice and demonstrated a decreased Ki in uremic ApoE-/- compared to non-uremic ApoE-/- and control mice (p < 0.006). Optimizing reconstruction parameters significantly impacted the assessment of the mineralization process in a preclinical model of accelerated atherosclerosis using Na[18F]F PET. In addition, improving the CT resolution was associated with a dramatic radiation dose increase.
ERIC Educational Resources Information Center
Enders, Craig K.; Peugh, James L.
2004-01-01
Two methods, direct maximum likelihood (ML) and the expectation maximization (EM) algorithm, can be used to obtain ML parameter estimates for structural equation models with missing data (MD). Although the 2 methods frequently produce identical parameter estimates, it may be easier to satisfy missing at random assumptions using EM. However, no…
Razifar, Pasha; Sandström, Mattias; Schnieder, Harald; Långström, Bengt; Maripuu, Enn; Bengtsson, Ewert; Bergström, Mats
2005-08-25
Positron Emission Tomography (PET), Computed Tomography (CT), PET/CT and Single Photon Emission Tomography (SPECT) are non-invasive imaging tools used for creating two dimensional (2D) cross section images of three dimensional (3D) objects. PET and SPECT have the potential of providing functional or biochemical information by measuring the distribution and kinetics of radiolabelled molecules, whereas CT visualizes X-ray density in tissues in the body. PET/CT provides fused images representing both functional and anatomical information with better precision in localization than PET alone. Images generated by these types of techniques are generally noisy, thereby impairing the imaging potential and affecting the precision of quantitative values derived from the images. It is crucial to explore and understand the properties of noise in these imaging techniques. Here we used the autocorrelation function (ACF) specifically to describe noise correlation and its non-isotropic behaviour in experimentally generated images of PET, CT, PET/CT and SPECT. Experiments were performed using phantoms with different shapes. In PET and PET/CT studies, data were acquired in 2D acquisition mode and reconstructed by both analytical filtered back projection (FBP) and iterative ordered subsets expectation maximisation (OSEM) methods. In the PET/CT studies, different magnitudes of X-ray dose in the transmission scan were employed by using different mA settings for the X-ray tube. In the CT studies, data were acquired using different slice thicknesses, with and without an applied dose reduction function, and the images were reconstructed by FBP. SPECT studies were performed in 2D, reconstructed using FBP and OSEM with post-3D filtering. ACF images were generated from the primary images, and profiles across the ACF images were used to describe the noise correlation in different directions. The noise variance across the images was visualised both as images and as profiles across these images. The most important finding was that the pattern of noise correlation is rotation symmetric or isotropic, independent of object shape, in PET and PET/CT images reconstructed using the iterative method. This is, however, not the case in FBP images when the shape of the phantom is not circular. Also, CT images reconstructed using FBP show the same non-isotropic pattern, independent of slice thickness and of the use of the dose reduction (care dose) function. SPECT images show an isotropic correlation of the noise independent of object shape or applied reconstruction algorithm. Noise in PET/CT images was identical independent of the applied X-ray dose in the transmission part (CT), indicating that the noise from transmission with the applied doses does not propagate into the PET images, and that the noise from the emission part is dominant. The results indicate that in human studies it is possible to utilize a low dose in the transmission part while maintaining the noise behaviour and the quality of the images. The combined effect of noise correlation for asymmetric objects and a varying noise variance across the image field significantly complicates the interpretation of the images when statistical methods are used, such as with statistical estimates of precision in average values, use of statistical parametric mapping methods and principal component analysis. Hence it is recommended that iterative reconstruction methods be used for such applications. However, it is possible to calculate the noise analytically in images reconstructed by FBP, whereas the same calculation is not possible for images reconstructed by iterative methods. Therefore, for statistical methods of analysis that depend on knowing the noise, FBP would be preferred.
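The ACF computation underlying this analysis can be sketched via the Wiener-Khinchin relation: the autocorrelation image is the inverse FFT of the squared magnitude spectrum of the mean-subtracted noise image, and profiles through its centre describe the correlation along each direction. This is a generic illustration, not the authors' exact processing chain.

```python
# Hedged sketch of a 2D noise autocorrelation function (ACF) image via FFT.
import numpy as np

def noise_acf(image):
    img = image - image.mean()                  # remove the mean (DC) component
    F = np.fft.fft2(img)
    acf = np.fft.ifft2(np.abs(F) ** 2).real     # circular autocorrelation
    acf = np.fft.fftshift(acf)                  # put zero lag at the centre
    return acf / acf.max()                      # normalize so the zero-lag value is 1

# Example: horizontal and vertical profiles through the ACF centre.
# acf = noise_acf(noise_image)
# cy, cx = np.array(acf.shape) // 2
# horiz, vert = acf[cy, :], acf[:, cx]
```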
Cosmic muon induced EM showers in NOvA
Yadav, Nitin; Duyang, Hongyue; Shanahan, Peter; ...
2016-11-15
Here, the NuMI Off-Axis νe Appearance (NOvA) experiment is a νe appearance neutrino oscillation experiment at Fermilab. It identifies the νe signal from the electromagnetic (EM) showers induced by the electrons in the final state of neutrino interactions. Cosmic muon induced EM showers, dominated by bremsstrahlung, are abundant in the NOvA far detector. We use the Cosmic Muon-Removal technique to obtain a pure EM shower sample from bremsstrahlung muons in data. We also use EM showers from cosmic muons decaying in flight, which are highly pure EM showers. The large Cosmic-EM sample can be used, as a data-driven method, to characterize the EM shower signature and provides valuable checks of the simulation, reconstruction, particle identification algorithm, and calibration across the NOvA detector.
Sasaki, Satoshi; Comber, Alexis J; Suzuki, Hiroshi; Brunsdon, Chris
2010-01-28
Ambulance response time is a crucial factor in patient survival. The number of emergency cases (EMS cases) requiring an ambulance is increasing due to changes in population demographics, and this is increasing ambulance response times to the emergency scene. This paper predicts EMS cases for 5-year intervals from 2020 to 2050 by correlating current EMS cases with demographic factors at the level of the census area and predicted population changes. It then applies a modified grouping genetic algorithm to compare current and future optimal locations and numbers of ambulances. Sets of potential locations were evaluated in terms of the (current and predicted) EMS case distances to those locations. Future EMS demand was predicted to increase by 2030 using the model (R2 = 0.71). The optimal locations of ambulances based on future EMS cases were compared with current locations and with optimal locations modelled on current EMS case data. Optimising ambulance station locations reduced the average response time by 57 seconds. Current and predicted future EMS demand at modelled locations were calculated and compared. The reallocation of ambulances to optimal locations improved response times and could contribute to higher survival rates from life-threatening medical events. Modelling EMS case 'demand' over census areas allows the data to be correlated to population characteristics and optimal 'supply' locations to be identified. Comparing current and future optimal scenarios allows more nuanced planning decisions to be made. This is a generic methodology that could be used to provide evidence in support of public health planning and decision making.
Dozois, Adeline; Hampton, Lorrie; Kingston, Carlene W; Lambert, Gwen; Porcelli, Thomas J; Sorenson, Denise; Templin, Megan; VonCannon, Shellie; Asimos, Andrew W
2017-12-01
The recently proposed American Heart Association/American Stroke Association EMS triage algorithm endorses routing patients with suspected large vessel occlusion (LVO) acute ischemic strokes directly to endovascular centers based on a stroke severity score. The predictive value of this algorithm for identifying LVO is dependent on the overall prevalence of LVO acute ischemic stroke in the EMS population screened for stroke, which has not been reported. We performed a cross-sectional study of patients transported by our county's EMS agency who were dispatched as a possible stroke or had a primary impression of stroke by paramedics. We determined the prevalence of LVO by reviewing medical record imaging reports based on a priori specified criteria. We enrolled 2402 patients, of whom 777 (32.3%) had an acute stroke-related diagnosis. Among 485 patients with acute ischemic stroke, 24.1% (n=117) had an LVO, which represented only 4.87% (95% confidence interval, 4.05%-5.81%) of the total EMS population screened for stroke. Overall, the prevalence of LVO acute ischemic stroke in our EMS population screened for stroke was low. This is an important consideration for any EMS stroke severity-based triage protocol and should be considered in predicting the rates of overtriage to endovascular stroke centers. © 2017 American Heart Association, Inc.
Casu, Sebastian; Häske, David
2016-06-01
Delayed antibiotic treatment for patients with severe sepsis and septic shock decreases the probability of survival. In this survey, medical directors of different emergency medical services (EMS) in Germany were asked whether they are prepared for pre-hospital sepsis therapy with antibiotics or special algorithms, in order to evaluate how well the individual rescue districts are prepared for the treatment of patients with this infectious disease. The objective of the survey was to obtain a general picture of the current status of the EMS with respect to rapid antibiotic treatment for sepsis. A total of 166 medical directors were invited to complete a short survey on behalf of the different rescue service districts in Germany via an electronic cover letter. Of the rescue districts, 25.6 % (n = 20) stated that they keep antibiotics on EMS vehicles. In addition, 2.6 % carry blood cultures on the vehicles. The most common antibiotic is ceftriaxone (a third-generation cephalosporin). In total, 8 (10.3 %) rescue districts use an algorithm for patients with sepsis, severe sepsis or septic shock. Although the German EMS is an emergency physician-based rescue system, specific provisions in the form of antibiotics on emergency physician vehicles are lacking. At the same time, only 10.3 % of the rescue districts use a special algorithm for sepsis therapy. Sepsis, severe sepsis and septic shock do not appear to be prioritized as highly as these deadly diseases should be in the pre-hospital setting.
A synergistic method for vibration suppression of an elevator mechatronic system
NASA Astrophysics Data System (ADS)
Knezevic, Bojan Z.; Blanusa, Branko; Marcetic, Darko P.
2017-10-01
Modern elevators are complex mechatronic systems which have to satisfy high performance requirements in precision, safety and ride comfort. Each elevator mechatronic system (EMS) contains a mechanical subsystem which is characterized by its resonant frequency. In order to achieve high performance of the whole system, the control part of the EMS inevitably excites resonant circuits, causing the occurrence of vibration. This paper proposes a synergistic solution, based on jerk control and the upgrade of the speed controller with a band-stop filter, to restore the ride comfort and speed control lost due to vibration. The band-stop filter eliminates the resonant component from the speed controller spectrum, and jerk control enables the speed controller to operate in a linear mode as well as increasing ride comfort. An original method for band-stop filter tuning based on the Goertzel algorithm and the Kiefer search algorithm is proposed in this paper. In order to generate the speed reference trajectory, which can be defined by different shapes and amplitudes of jerk, a unique generalized model is proposed. The proposed algorithm is integrated into the power drive control algorithm and implemented on a digital signal processor. Experimental verification on a scaled-down prototype of the EMS shows that only the synergistic effect of controlling jerk and filtering the reference torque can completely eliminate the vibrations.
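The Goertzel evaluation at the heart of the proposed filter-tuning method can be sketched as below; a one-dimensional search over candidate frequencies (the paper uses a Kiefer-type search) would then locate the resonance at which the band-stop filter is centred. The function name and interface are illustrative.

```python
# Hedged sketch of the Goertzel recursion: power of a signal at one frequency.
import numpy as np

def goertzel_power(x, f, fs):
    """Power of signal x (sampled at fs) at frequency f via the Goertzel recursion."""
    w = 2.0 * np.pi * f / fs
    coeff = 2.0 * np.cos(w)
    s_prev, s_prev2 = 0.0, 0.0
    for sample in x:
        s = sample + coeff * s_prev - s_prev2   # second-order recursion
        s_prev2, s_prev = s_prev, s
    return s_prev ** 2 + s_prev2 ** 2 - coeff * s_prev * s_prev2
```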
Wu, Jiayi; Ma, Yong-Bei; Congdon, Charles; Brett, Bevin; Chen, Shuobing; Xu, Yaofang; Ouyang, Qi; Mao, Youdong
2017-01-01
Structural heterogeneity in single-particle cryo-electron microscopy (cryo-EM) data represents a major challenge for high-resolution structure determination. Unsupervised classification may serve as the first step in the assessment of structural heterogeneity. However, traditional algorithms for unsupervised classification, such as K-means clustering and maximum likelihood optimization, may classify images into wrong classes with decreasing signal-to-noise-ratio (SNR) in the image data, yet demand increased computational costs. Overcoming these limitations requires further development of clustering algorithms for high-performance cryo-EM data processing. Here we introduce an unsupervised single-particle clustering algorithm derived from a statistical manifold learning framework called generative topographic mapping (GTM). We show that unsupervised GTM clustering improves classification accuracy by about 40% in the absence of input references for data with lower SNRs. Applications to several experimental datasets suggest that our algorithm can detect subtle structural differences among classes via a hierarchical clustering strategy. After code optimization over a high-performance computing (HPC) environment, our software implementation was able to generate thousands of reference-free class averages within hours in a massively parallel fashion, which allows a significant improvement on ab initio 3D reconstruction and assists in the computational purification of homogeneous datasets for high-resolution visualization.
NASA Astrophysics Data System (ADS)
Uchida, Y.; Takada, E.; Fujisaki, A.; Kikuchi, T.; Ogawa, K.; Isobe, M.
2017-08-01
A method to stochastically discriminate neutron and γ-ray signals measured with a stilbene organic scintillator is proposed. Each pulse signal was stochastically categorized into two groups: neutron and γ-ray. In previous work, the Expectation Maximization (EM) algorithm was used with the assumption that the measured data followed a Gaussian mixture distribution. It was shown that probabilistic discrimination between these groups is possible. Moreover, by setting the initial parameters for the Gaussian mixture distribution with a k-means algorithm, the possibility of automatic discrimination was demonstrated. In this study, the Student's t-mixture distribution was used as a probabilistic distribution with the EM algorithm to improve the robustness against the effect of outliers caused by pileup of the signals. To validate the proposed method, the figures of merit (FOMs) were compared for the EM algorithm assuming a t-mixture distribution and a Gaussian mixture distribution. The t-mixture distribution resulted in an improvement of the FOMs compared with the Gaussian mixture distribution. The proposed data processing technique is a promising tool not only for neutron and γ-ray discrimination in fusion experiments but also in other fields, for example, homeland security, cancer therapy with high energy particles, nuclear reactor decommissioning, pattern recognition, and so on.
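The mixture-based discrimination idea can be sketched with an off-the-shelf Gaussian mixture fitted by EM; scikit-learn's GaussianMixture initializes with k-means by default, mirroring the automatic initialization mentioned above, while the paper's robust variant replaces the Gaussian components with Student's t components (not shown here). The choice of pulse features is an assumption.

```python
# Hedged sketch of probabilistic neutron/gamma discrimination with a two-component
# Gaussian mixture fitted by EM; features might be, e.g., total charge and
# tail-to-total charge ratio per pulse (illustrative assumption).
import numpy as np
from sklearn.mixture import GaussianMixture

def discriminate_pulses(features, random_state=0):
    """features: (n_pulses, n_features) array; returns hard labels and posteriors."""
    gmm = GaussianMixture(n_components=2, covariance_type="full",
                          random_state=random_state).fit(features)
    post = gmm.predict_proba(features)      # soft (probabilistic) assignment
    labels = post.argmax(axis=1)            # hard neutron/gamma decision
    return labels, post
```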
EM Bias-Correction for Ice Thickness and Surface Roughness Retrievals over Rough Deformed Sea Ice
NASA Astrophysics Data System (ADS)
Li, L.; Gaiser, P. W.; Allard, R.; Posey, P. G.; Hebert, D. A.; Richter-Menge, J.; Polashenski, C. M.
2016-12-01
Very rough, ridged sea ice accounts for a significant percentage of the total ice area and an even larger percentage of the total ice volume. Commonly used radar altimeter surface detection techniques are empirical in nature and work well only over level/smooth sea ice. Rough sea ice surfaces can modify the return waveforms, resulting in significant electromagnetic (EM) bias in the estimated surface elevations, and thus large errors in the ice thickness retrievals. To understand and quantify such sea ice surface roughness effects, a combined EM rough surface and volume scattering model was developed to simulate radar returns from the rough sea ice 'layer cake' structure. A waveform matching technique was also developed to fit observed waveforms to a physically-based waveform model and subsequently correct the roughness-induced EM bias in the estimated freeboard. This new EM Bias Corrected (EMBC) algorithm was able to better retrieve surface elevations and estimate the surface roughness parameter simultaneously. In situ data from multi-instrument airborne and ground campaigns were used to validate the ice thickness and surface roughness retrievals. For the surface roughness retrievals, we applied this EMBC algorithm to coincident LiDAR/radar measurements collected during a CryoSat-2 under-flight by the NASA IceBridge missions. Results show that not only does the waveform model fit very well to the measured radar waveform, but also the roughness parameters derived independently from the LiDAR and radar data agree very well for both level and deformed sea ice. For sea ice thickness retrievals, validation based on in situ data from the coordinated CRREL/NRL field campaign demonstrates that the physically-based EMBC algorithm performs fundamentally better than the empirical algorithm over very rough deformed sea ice, suggesting that sea ice surface roughness effects can be modeled and corrected based solely on the radar return waveforms.
NASA Astrophysics Data System (ADS)
Ackley, Kendall; Eikenberry, Stephen; Klimenko, Sergey; LIGO Team
2017-01-01
We present a false-alarm rate for a joint detection of gravitational wave (GW) events and associated electromagnetic (EM) counterparts for Advanced LIGO and Virgo (LV) observations during the first years of operation. Using simulated GW events and their reconstructed probability skymaps, we tile over the error regions using sets of archival wide-field telescope survey images and recover the number of astrophysical transients to be expected during LV-EM followup. With the known GW event injection coordinates, we inject artificial electromagnetic (EM) sources at that site based on theoretical and observational models on a one-to-one basis. We calculate the EM false-alarm probability using an unsupervised machine learning algorithm based on shapelet analysis, which has been shown to be a strong discriminator between astrophysical transients and image artifacts while reducing the set of transients to be manually vetted by five orders of magnitude. We also show the performance of our method in context with other machine-learned transient classification and reduction algorithms, showing comparability without the need for a large set of training data, opening the possibility for next-generation telescopes to take advantage of this pipeline for LV-EM followup missions.
Recursive Fact-finding: A Streaming Approach to Truth Estimation in Crowdsourcing Applications
2013-07-01
are reported over the course of the campaign, lending themselves better to the abstraction of a data stream arriving from the community of sources. [Figure 4: Recursive EM Algorithm Convergence]
NASA Astrophysics Data System (ADS)
Cheng, Xiaoyin; Bayer, Christine; Maftei, Constantin-Alin; Astner, Sabrina T.; Vaupel, Peter; Ziegler, Sibylle I.; Shi, Kuangyu
2014-01-01
Compared to indirect methods, direct parametric image reconstruction (PIR) has the advantage of high quality and low statistical errors. However, it is not yet clear if this improvement in quality is beneficial for physiological quantification. This study aimed to evaluate direct PIR for the quantification of tumor hypoxia using the hypoxic fraction (HF) assessed from immunohistological data as a physiological reference. Sixteen mice with xenografted human squamous cell carcinomas were scanned with dynamic [18F]FMISO PET. Afterward, tumors were sliced and stained with H&E and the hypoxia marker pimonidazole. The hypoxic signal was segmented using k-means clustering and HF was specified as the ratio of the hypoxic area over the viable tumor area. The parametric Patlak slope images were obtained by indirect voxel-wise modeling on reconstructed images using filtered back projection and ordered-subset expectation maximization (OSEM) and by direct PIR (e.g., parametric-OSEM, POSEM). The mean and maximum Patlak slopes of the tumor area were investigated and compared with HF. POSEM resulted in generally higher correlations between slope and HF among the investigated methods. A strategy for the delineation of the hypoxic tumor volume based on thresholding parametric images at half maximum of the slope is recommended based on the results of this study.
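For context, the indirect (post-reconstruction) Patlak slope that direct PIR estimates inside the reconstruction can be sketched as an ordinary linear fit over the late frames; the frame timing, input-function handling, and start of the linear phase are assumptions.

```python
# Hedged sketch of voxel-wise indirect Patlak analysis: the slope of
# C_tissue/C_plasma against (integral of C_plasma)/C_plasma over late frames.
import numpy as np

def patlak_slope(t, c_tissue, c_plasma, t_start):
    """t: frame mid-times; c_tissue, c_plasma: activity curves; t_start: linear phase onset."""
    # Cumulative trapezoidal integral of the plasma input function.
    cum_plasma = np.concatenate(([0.0], np.cumsum(np.diff(t) * 0.5 *
                                                  (c_plasma[1:] + c_plasma[:-1]))))
    x = cum_plasma / (c_plasma + 1e-12)        # "stretched time"
    y = c_tissue / (c_plasma + 1e-12)
    mask = t >= t_start                        # restrict to the linear phase
    slope, intercept = np.polyfit(x[mask], y[mask], 1)
    return slope, intercept                    # slope ~ Ki, intercept ~ distribution volume
```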
Application and performance of an ML-EM algorithm in NEXT
NASA Astrophysics Data System (ADS)
Simón, A.; Lerche, C.; Monrabal, F.; Gómez-Cadenas, J. J.; Álvarez, V.; Azevedo, C. D. R.; Benlloch-Rodríguez, J. M.; Borges, F. I. G. M.; Botas, A.; Cárcel, S.; Carrión, J. V.; Cebrián, S.; Conde, C. A. N.; Díaz, J.; Diesburg, M.; Escada, J.; Esteve, R.; Felkai, R.; Fernandes, L. M. P.; Ferrario, P.; Ferreira, A. L.; Freitas, E. D. C.; Goldschmidt, A.; González-Díaz, D.; Gutiérrez, R. M.; Hauptman, J.; Henriques, C. A. O.; Hernandez, A. I.; Hernando Morata, J. A.; Herrero, V.; Jones, B. J. P.; Labarga, L.; Laing, A.; Lebrun, P.; Liubarsky, I.; López-March, N.; Losada, M.; Martín-Albo, J.; Martínez-Lema, G.; Martínez, A.; McDonald, A. D.; Monteiro, C. M. B.; Mora, F. J.; Moutinho, L. M.; Muñoz Vidal, J.; Musti, M.; Nebot-Guinot, M.; Novella, P.; Nygren, D. R.; Palmeiro, B.; Para, A.; Pérez, J.; Querol, M.; Renner, J.; Ripoll, L.; Rodríguez, J.; Rogers, L.; Santos, F. P.; dos Santos, J. M. F.; Sofka, C.; Sorel, M.; Stiegler, T.; Toledo, J. F.; Torrent, J.; Tsamalaidze, Z.; Veloso, J. F. C. A.; Webb, R.; White, J. T.; Yahlali, N.
2017-08-01
The goal of the NEXT experiment is the observation of neutrinoless double beta decay in 136Xe using a gaseous xenon TPC with electroluminescent amplification and specialized photodetector arrays for calorimetry and tracking. The NEXT Collaboration is exploring a number of reconstruction algorithms to exploit the full potential of the detector. This paper describes one of them: the Maximum Likelihood Expectation Maximization (ML-EM) method, a generic iterative algorithm to find maximum-likelihood estimates of parameters that has been applied to solve many different types of complex inverse problems. In particular, we discuss a bi-dimensional version of the method in which the photosensor signals integrated over time are used to reconstruct a transverse projection of the event. First results show that, when applied to detector simulation data, the algorithm achieves nearly optimal energy resolution (better than 0.5% FWHM at the Q value of 136Xe) for events distributed over the full active volume of the TPC.
Orthogonalizing EM: A design-based least squares algorithm.
Xiong, Shifeng; Dai, Bin; Huling, Jared; Qian, Peter Z G
We introduce an efficient iterative algorithm, intended for various least squares problems, based on a design of experiments perspective. The algorithm, called orthogonalizing EM (OEM), works for ordinary least squares and can be easily extended to penalized least squares. The main idea of the procedure is to orthogonalize a design matrix by adding new rows and then solve the original problem by embedding the augmented design in a missing data framework. We establish several attractive theoretical properties concerning OEM. For the ordinary least squares with a singular regression matrix, an OEM sequence converges to the Moore-Penrose generalized inverse-based least squares estimator. For ordinary and penalized least squares with various penalties, it converges to a point having grouping coherence for fully aliased regression matrices. Convergence and the convergence rate of the algorithm are examined. Finally, we demonstrate that OEM is highly efficient for large-scale least squares and penalized least squares problems, and is considerably faster than competing methods when n is much larger than p . Supplementary materials for this article are available online.
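A compact sketch of the OEM fixed-point update, following my reading of the abstract rather than the authors' reference implementation: choose d at least as large as the top eigenvalue of X'X and iterate; the optional soft-threshold line shows how the same update handles the lasso penalty.

```python
# Hedged sketch of the OEM iteration for (penalized) least squares.
import numpy as np

def oem(X, y, lam=0.0, n_iter=500):
    """lam = 0 gives ordinary least squares; lam > 0 gives a lasso-type solution."""
    XtX, Xty = X.T @ X, X.T @ y
    d = np.linalg.eigvalsh(XtX).max()          # any d >= largest eigenvalue works
    beta = np.zeros(X.shape[1])
    for _ in range(n_iter):
        u = Xty + (d * beta - XtX @ beta)      # "complete-data" working response
        if lam > 0:
            beta = np.sign(u) * np.maximum(np.abs(u) - lam, 0.0) / d   # soft threshold
        else:
            beta = u / d
    return beta
```

At the fixed point of the unpenalized update, X'Xβ = X'y, so the iteration converges to a least squares solution; started from zero with a singular X'X, it approaches the minimum-norm (Moore-Penrose) solution, consistent with the property stated above.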
Sparse-view proton computed tomography using modulated proton beams.
Lee, Jiseoc; Kim, Changhwan; Min, Byungjun; Kwak, Jungwon; Park, Seyjoon; Lee, Se Byeong; Park, Sungyong; Cho, Seungryong
2015-02-01
Proton imaging that uses a modulated proton beam and an intensity detector allows a relatively fast image acquisition compared to the imaging approach based on a trajectory tracking detector. In addition, it requires a relatively simple implementation in a conventional proton therapy equipment. The model of geometric straight ray assumed in conventional computed tomography (CT) image reconstruction is however challenged by multiple-Coulomb scattering and energy straggling in the proton imaging. Radiation dose to the patient is another important issue that has to be taken care of for practical applications. In this work, the authors have investigated iterative image reconstructions after a deconvolution of the sparsely view-sampled data to address these issues in proton CT. Proton projection images were acquired using the modulated proton beams and the EBT2 film as an intensity detector. Four electron-density cylinders representing normal soft tissues and bone were used as imaged object and scanned at 40 views that are equally separated over 360°. Digitized film images were converted to water-equivalent thickness by use of an empirically derived conversion curve. For improving the image quality, a deconvolution-based image deblurring with an empirically acquired point spread function was employed. They have implemented iterative image reconstruction algorithms such as adaptive steepest descent-projection onto convex sets (ASD-POCS), superiorization method-projection onto convex sets (SM-POCS), superiorization method-expectation maximization (SM-EM), and expectation maximization-total variation minimization (EM-TV). Performance of the four image reconstruction algorithms was analyzed and compared quantitatively via contrast-to-noise ratio (CNR) and root-mean-square-error (RMSE). Objects of higher electron density have been reconstructed more accurately than those of lower density objects. The bone, for example, has been reconstructed within 1% error. EM-based algorithms produced an increased image noise and RMSE as the iteration reaches about 20, while the POCS-based algorithms showed a monotonic convergence with iterations. The ASD-POCS algorithm outperformed the others in terms of CNR, RMSE, and the accuracy of the reconstructed relative stopping power in the region of lung and soft tissues. The four iterative algorithms, i.e., ASD-POCS, SM-POCS, SM-EM, and EM-TV, have been developed and applied for proton CT image reconstruction. Although it still seems that the images need to be improved for practical applications to the treatment planning, proton CT imaging by use of the modulated beams in sparse-view sampling has demonstrated its feasibility.
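The two figures of merit used above are standard and can be computed as below; the ROI definitions are the paper's own, so the masks here are placeholders.

```python
# Hedged sketch of the CNR and RMSE figures of merit used to compare reconstructions.
import numpy as np

def cnr(recon, roi_mask, bg_mask):
    """Contrast-to-noise ratio between an object ROI and a background ROI."""
    return abs(recon[roi_mask].mean() - recon[bg_mask].mean()) / recon[bg_mask].std()

def rmse(recon, truth):
    """Root-mean-square error against the known phantom."""
    return np.sqrt(np.mean((recon - truth) ** 2))
```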
Extracellular space preservation aids the connectomic analysis of neural circuits.
Pallotto, Marta; Watkins, Paul V; Fubara, Boma; Singer, Joshua H; Briggman, Kevin L
2015-12-09
Dense connectomic mapping of neuronal circuits is limited by the time and effort required to analyze 3D electron microscopy (EM) datasets. Algorithms designed to automate image segmentation suffer from substantial error rates and require significant manual error correction. Any improvement in segmentation error rates would therefore directly reduce the time required to analyze 3D EM data. We explored preserving extracellular space (ECS) during chemical tissue fixation to improve the ability to segment neurites and to identify synaptic contacts. ECS preserved tissue is easier to segment using machine learning algorithms, leading to significantly reduced error rates. In addition, we observed that electrical synapses are readily identified in ECS preserved tissue. Finally, we determined that antibodies penetrate deep into ECS preserved tissue with only minimal permeabilization, thereby enabling correlated light microscopy (LM) and EM studies. We conclude that preservation of ECS benefits multiple aspects of the connectomic analysis of neural circuits.
Deep learning and model predictive control for self-tuning mode-locked lasers
NASA Astrophysics Data System (ADS)
Baumeister, Thomas; Brunton, Steven L.; Nathan Kutz, J.
2018-03-01
Self-tuning optical systems are of growing importance in technological applications such as mode-locked fiber lasers. Such self-tuning paradigms require intelligent algorithms capable of inferring approximate models of the underlying physics and discovering appropriate control laws in order to maintain robust performance for a given objective. In this work, we demonstrate the first integration of a deep learning (DL) architecture with model predictive control (MPC) in order to self-tune a mode-locked fiber laser. Not only can our DL-MPC algorithmic architecture approximate the unknown fiber birefringence, it also builds a dynamical model of the laser and appropriate control law for maintaining robust, high-energy pulses despite a stochastically drifting birefringence. We demonstrate the effectiveness of this method on a fiber laser which is mode-locked by nonlinear polarization rotation. The method advocated can be broadly applied to a variety of optical systems that require robust controllers.
A parallel graded-mesh FDTD algorithm for human-antenna interaction problems.
Catarinucci, Luca; Tarricone, Luciano
2009-01-01
The finite difference time domain method (FDTD) is frequently used for the numerical solution of a wide variety of electromagnetic (EM) problems and, among them, those concerning human exposure to EM fields. In many practical cases related to the assessment of occupational EM exposure, large simulation domains are modeled and high space resolution adopted, so that strong memory and central processing unit power requirements have to be satisfied. To better afford the computational effort, the use of parallel computing is a winning approach; alternatively, subgridding techniques are often implemented. However, the simultaneous use of subgridding schemes and parallel algorithms is very new. In this paper, an easy-to-implement and highly-efficient parallel graded-mesh (GM) FDTD scheme is proposed and applied to human-antenna interaction problems, demonstrating its appropriateness in dealing with complex occupational tasks and showing its capability to guarantee the advantages of a traditional subgridding technique without affecting the parallel FDTD performance.
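For orientation, the core FDTD update that the parallel graded-mesh scheme accelerates looks like the 1D, free-space, normalized-units loop below; the graded mesh, absorbing boundaries, and domain decomposition are omitted, and names are illustrative.

```python
# Hedged sketch of a basic 1D FDTD leapfrog update with a soft Gaussian source.
import numpy as np

def fdtd_1d(n_cells=200, n_steps=500, src_pos=100):
    ez = np.zeros(n_cells)          # electric field on the main grid
    hy = np.zeros(n_cells - 1)      # magnetic field on the staggered half-cell grid
    for n in range(n_steps):
        hy += 0.5 * (ez[1:] - ez[:-1])                      # update H from the curl of E
        ez[1:-1] += 0.5 * (hy[1:] - hy[:-1])                # update E from the curl of H
        ez[src_pos] += np.exp(-((n - 30.0) / 10.0) ** 2)    # soft Gaussian source
    return ez
```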
Identification of the focal plane wavefront control system using E-M algorithm
NASA Astrophysics Data System (ADS)
Sun, He; Kasdin, N. Jeremy; Vanderbei, Robert
2017-09-01
In a typical focal plane wavefront control (FPWC) system, such as the adaptive optics system of NASA's WFIRST mission, the efficient controllers and estimators in use are usually model-based. As a result, the modeling accuracy of the system influences the ultimate performance of the control and estimation. Currently, a linear state space model is used and calculated based on lab measurements using Fourier optics. Although the physical model is clearly defined, it is usually biased due to incorrect distance measurements, imperfect diagnoses of the optical aberrations, and our lack of knowledge of the deformable mirrors (actuator gains and influence functions). In this paper, we present a new approach for measuring/estimating the linear state space model of a FPWC system using the expectation-maximization (E-M) algorithm. Simulation and lab results in the Princeton's High Contrast Imaging Lab (HCIL) show that the E-M algorithm can well handle both the amplitude and phase errors and accurately recover the system. Using the recovered state space model, the controller creates dark holes with faster speed. The final accuracy of the model depends on the amount of data used for learning.
ERIC Educational Resources Information Center
von Davier, Matthias
2016-01-01
This report presents results on a parallel implementation of the expectation-maximization (EM) algorithm for multidimensional latent variable models. The developments presented here are based on code that parallelizes both the E step and the M step of the parallel-E parallel-M algorithm. Examples presented in this report include item response…
Maximum Likelihood Estimation of Nonlinear Structural Equation Models.
ERIC Educational Resources Information Center
Lee, Sik-Yum; Zhu, Hong-Tu
2002-01-01
Developed an EM type algorithm for maximum likelihood estimation of a general nonlinear structural equation model in which the E-step is completed by a Metropolis-Hastings algorithm. Illustrated the methodology with results from a simulation study and two real examples using data from previous studies. (SLD)
Wireless, relative-motion computer input device
Holzrichter, John F.; Rosenbury, Erwin T.
2004-05-18
The present invention provides a system for controlling a computer display in a workspace using an input unit/output unit. A train of EM waves is sent out to flood the workspace. EM waves are reflected from the input unit/output unit. A relative distance moved information signal is created using the EM waves that are reflected from the input unit/output unit. Algorithms are used to convert the relative distance moved information signal to a display signal. The computer display is controlled in response to the display signal.
Gaitanis, Anastasios; Kastis, George A; Vlastou, Elena; Bouziotis, Penelope; Verginis, Panayotis; Anagnostopoulos, Constantinos D
2017-08-01
The Tera-Tomo 3D image reconstruction algorithm (a version of OSEM), provided with the Mediso nanoScan® PC (PET8/2) small-animal positron emission tomograph (PET)/x-ray computed tomography (CT) scanner, has various parameter options such as total level of regularization, subsets, and iterations. Also, the acquisition time in PET plays an important role. This study aims to assess the performance of this new small-animal PET/CT scanner for different acquisition times and reconstruction parameters, for 2-deoxy-2-[ 18 F]fluoro-D-glucose ([ 18 F]FDG) and Ga-68, under the NEMA NU 4-2008 standards. Various image quality metrics were calculated for different realizations of [ 18 F]FDG and Ga-68 filled image quality (IQ) phantoms. [ 18 F]FDG imaging produced improved images over Ga-68. The best compromise for the optimization of all image quality factors is achieved for at least 30 min acquisition and image reconstruction with 52 iteration updates combined with a high regularization level. A high regularization level at 52 iteration updates and 30 min acquisition time were found to optimize most of the figures of merit investigated.
Tabe-Bordbar, Shayan; Marashi, Sayed-Amir
2013-12-01
Elementary modes (EMs) are steady-state metabolic flux vectors with minimal set of active reactions. Each EM corresponds to a metabolic pathway. Therefore, studying EMs is helpful for analyzing the production of biotechnologically important metabolites. However, memory requirements for computing EMs may hamper their applicability as, in most genome-scale metabolic models, no EM can be computed due to running out of memory. In this study, we present a method for computing randomly sampled EMs. In this approach, a network reduction algorithm is used for EM computation, which is based on flux balance-based methods. We show that this approach can be used to recover the EMs in the medium- and genome-scale metabolic network models, while the EMs are sampled in an unbiased way. The applicability of such results is shown by computing “estimated” control-effective flux values in Escherichia coli metabolic network.
Numerical simulations of imaging satellites with optical interferometry
NASA Astrophysics Data System (ADS)
Ding, Yuanyuan; Wang, Chaoyan; Chen, Zhendong
2015-08-01
Optical interferometry imaging systems, which are composed of multiple sub-apertures, are a type of sensor that can break through the single-aperture limit and realize high-resolution imaging. This technique can be utilized to precisely measure the shapes, sizes and positions of astronomical objects and satellites, and it can also be applied to space exploration, space debris observation, and satellite monitoring and survey. A Fizeau-type optical aperture synthesis telescope has the advantages of short baselines, a common mount and multiple sub-apertures, so instantaneous direct imaging through focal-plane combination is feasible. Since 2002, researchers at the Shanghai Astronomical Observatory have studied optical interferometry techniques. For array configurations, two optimal array configurations have been proposed instead of the symmetrical circular distribution: the asymmetrical circular distribution and the Y-type distribution. On this basis, two kinds of structure were proposed based on the Fizeau interferometric telescope: one is a Y-type independent sub-aperture telescope, and the other is a segmented-mirror telescope with a common secondary mirror. In this paper, we describe the interferometric telescope and image acquisition, and then mainly consider simulations of image restoration based on the Y-type telescope and the segmented-mirror telescope. The Richardson-Lucy (RL) method, the Wiener method and the Ordered Subsets Expectation Maximization (OS-EM) method are studied in this paper. We also analyze the influence of different stopping rules. At the end of the paper, we present the reconstruction results for images of some satellites.
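The Richardson-Lucy iteration mentioned above can be sketched as follows (OS-EM applies the same multiplicative update over ordered subsets of the data); the PSF handling and stopping criterion are generic assumptions, not the paper's exact settings.

```python
# Hedged sketch of Richardson-Lucy deconvolution with a known point-spread function.
import numpy as np
from scipy.signal import fftconvolve

def richardson_lucy(blurred, psf, n_iter=30, eps=1e-12):
    estimate = np.full_like(blurred, blurred.mean(), dtype=float)   # flat start
    psf_flipped = psf[::-1, ::-1]                                   # correlation kernel
    for _ in range(n_iter):
        reblurred = fftconvolve(estimate, psf, mode="same") + eps
        ratio = blurred / reblurred
        estimate *= fftconvolve(ratio, psf_flipped, mode="same")    # multiplicative update
    return estimate
```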
NASA Astrophysics Data System (ADS)
Fakhri, G. El; Kijewski, M. F.; Moore, S. C.
2001-06-01
Estimates of SPECT activity within certain deep brain structures could be useful for clinical tasks such as early prediction of Alzheimer's disease with Tc-99m or Parkinson's disease with I-123; however, such estimates are biased by poor spatial resolution and inaccurate scatter and attenuation corrections. We compared an analytical approach (AA) to more accurate quantitation with a slower iterative approach (IA). Monte Carlo simulated projections of 12 normal and 12 pathologic Tc-99m perfusion studies, as well as 12 normal and 12 pathologic I-123 neurotransmission studies, were generated using a digital brain phantom and corrected for scatter by a multispectral fitting procedure. The AA included attenuation correction by a modified Metz-Fan algorithm and activity estimation by a technique that incorporated Metz filtering to compensate for variable collimator response (VCR); the IA modeled attenuation and VCR in the projector/backprojector of an ordered subsets-expectation maximization (OSEM) algorithm. Bias and standard deviation over the 12 normal and 12 pathologic patients were calculated with respect to the reference values in the corpus callosum, caudate nucleus, and putamen. The IA and AA yielded similar quantitation results in both Tc-99m and I-123 studies in all brain structures considered, in both normal and pathologic patients. The bias with respect to the reference activity distributions was less than 7% for Tc-99m studies, but greater than 30% for I-123 studies, due to the partial volume effect in the striata. Our results were validated using I-123 physical acquisitions of an anthropomorphic brain phantom. The AA yielded quantitation accuracy comparable to that obtained with the IA, while requiring much less processing time. However, in most conditions, the IA yielded lower noise for the same bias than did the AA.
Furuta, Akihiro; Onishi, Hideo; Amijima, Hizuru
2018-06-01
This study aimed to evaluate the effect of ventricular enlargement on the specific binding ratio (SBR) and to validate the cerebrospinal fluid (CSF)-Mask algorithm for quantitative SBR assessment of 123I-FP-CIT single-photon emission computed tomography (SPECT) images with the use of a 3D-striatum digital brain (SDB) phantom. Ventricular enlargement was simulated by three-dimensional extensions in a 3D-SDB phantom comprising segments representing the striatum, ventricle, brain parenchyma, and skull bone. The Evans Index (EI) was measured in 3D-SDB phantom images of an enlarged ventricle. Projection data sets were generated from the 3D-SDB phantoms with blurring, scatter, and attenuation. Images were reconstructed using the ordered subset expectation maximization (OSEM) algorithm and corrected for attenuation, scatter, and resolution recovery. We bundled DaTView (Southampton method) with the CSF-Mask processing software for SBR assessment, and assessed the SBR with various coefficients (f factors) of the CSF-Mask. SBRs of 1, 2, 3, 4, and 5 were used as true values in the SDB phantom simulations. Measured SBRs were underestimated by more than 50% as the EI increased, relative to the true SBR, and this trend was most pronounced at low SBR. CSF-Mask processing improved these roughly 20% underestimates and brought the measured SBR closer to the true values at an f factor of 1.0, despite the increase in EI. We related the EI and the f factor through a linear regression function (y = -3.53x + 1.95; r = 0.95) obtained using the root-mean-square error. Processing with CSF-Mask generates accurate quantitative SBR from dopamine transporter SPECT images of patients with ventricular enlargement.
Superhuman AI for heads-up no-limit poker: Libratus beats top professionals.
Brown, Noam; Sandholm, Tuomas
2018-01-26
No-limit Texas hold'em is the most popular form of poker. Despite artificial intelligence (AI) successes in perfect-information games, the private information and massive game tree have made no-limit poker difficult to tackle. We present Libratus, an AI that, in a 120,000-hand competition, defeated four top human specialist professionals in heads-up no-limit Texas hold'em, the leading benchmark and long-standing challenge problem in imperfect-information game solving. Our game-theoretic approach features application-independent techniques: an algorithm for computing a blueprint for the overall strategy, an algorithm that fleshes out the details of the strategy for subgames that are reached during play, and a self-improver algorithm that fixes potential weaknesses that opponents have identified in the blueprint strategy. Copyright © 2018, The Authors, some rights reserved; exclusive licensee American Association for the Advancement of Science. No claim to original U.S. Government Works.
NASA Astrophysics Data System (ADS)
Zhou, D. F.; Li, J.; Hansen, C. H.
2011-11-01
Track-induced self-excited vibration is commonly encountered in EMS (electromagnetic suspension) maglev systems, and a solution to this problem is important in enabling the commercial widespread implementation of maglev systems. Here, the coupled model of the steel track and the magnetic levitation system is developed, and its stability is investigated using the Nyquist criterion. The harmonic balance method is employed to investigate the stability and amplitude of the self-excited vibration, which provides an explanation of the phenomenon that track-induced self-excited vibration generally occurs at a specified amplitude and frequency. To eliminate the self-excited vibration, an improved LMS (Least Mean Square) cancellation algorithm with phase correction (C-LMS) is employed. The harmonic balance analysis shows that the C-LMS cancellation algorithm can completely suppress the self-excited vibration. To achieve adaptive cancellation, a frequency estimator similar to the tuner of a TV receiver is employed to provide the C-LMS algorithm with a roughly estimated reference frequency. Numerical simulation and experiments undertaken on the CMS-04 vehicle show that the proposed adaptive C-LMS algorithm can effectively eliminate the self-excited vibration over a wide frequency range, and that the robustness of the algorithm suggests excellent potential for application to EMS maglev systems.
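The cancellation idea can be sketched with a plain LMS canceller driven by quadrature references at the estimated resonance frequency; the paper's phase-corrected variant (C-LMS) and the Goertzel-based frequency estimator are not reproduced here, and the step size and interface are illustrative.

```python
# Hedged sketch of LMS-based cancellation of a narrow-band (self-excited) component:
# quadrature references at the estimated resonance frequency are adaptively weighted
# and subtracted from the measured signal.
import numpy as np

def lms_cancel(signal, f_est, fs, mu=0.01):
    n = len(signal)
    t = np.arange(n) / fs
    ref = np.column_stack((np.sin(2 * np.pi * f_est * t),
                           np.cos(2 * np.pi * f_est * t)))
    w = np.zeros(2)
    cleaned = np.zeros(n)
    for k in range(n):
        y_hat = ref[k] @ w              # estimate of the resonant component
        e = signal[k] - y_hat           # residual after cancellation
        w += 2 * mu * e * ref[k]        # LMS weight update
        cleaned[k] = e
    return cleaned, w
```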
NASA Astrophysics Data System (ADS)
Tornai, Martin P.; Bowsher, James E.; Archer, Caryl N.; Peter, Jörg; Jaszczak, Ronald J.; MacDonald, Lawrence R.; Patt, Bradley E.; Iwanczyk, Jan S.
2003-01-01
A novel tomographic gantry was designed, built and initially evaluated for single photon emission imaging of metabolically active lesions in the pendant breast and near chest wall. Initial emission imaging measurements with breast lesions of various uptake ratios are presented. Methods: A prototype tomograph was constructed utilizing a compact gamma camera having a field-of-view of <13×13 cm² with arrays of 2×2×6 mm³ quantized NaI(Tl) scintillators coupled to position sensitive PMTs. The camera was mounted on a radially oriented support with 6 cm variable radius-of-rotation. This unit is further mounted on a goniometric cradle providing polar motion, and in turn mounted on an azimuthal rotation stage capable of indefinite vertical axis-of-rotation about the central rotation axis (RA). Initial measurements with isotopic Tc-99m (140 keV) to evaluate the system include acquisitions with various polar tilt angles about the RA. Tomographic measurements were made of a frequency and resolution cold-rod phantom filled with aqueous Tc-99m. Tomographic and planar measurements of 0.6 and 1.0 cm diameter fillable spheres in an available ~950 ml hemi-ellipsoidal (uncompressed) breast phantom attached to a life-size anthropomorphic torso phantom with lesion:breast-and-body:cardiac-and-liver activity concentration ratios of 11:1:19 were compared. Various photopeak energy windows from 10-30% widths were obtained, along with a 35% scatter window below a 15% photopeak window from the list mode data. Projections with all photopeak window and camera tilt conditions were reconstructed with an ordered subsets expectation maximization (OSEM) algorithm capable of reconstructing arbitrary tomographic orbits. Results: As iteration number increased for the tomographically measured data at all polar angles, contrasts increased while signal-to-noise ratios (SNRs) decreased in the expected way with OSEM reconstruction. The rollover between contrast improvement and SNR degradation of the lesion occurred at two to three iterations. The reconstructed tomographic data yielded SNRs with or without scatter correction that were >9 times better than the planar scans. There was up to a factor of ~2.5 increase in total primary and scatter contamination in the photopeak window with increasing tilt angle from 15° to 45°, consistent with more direct line-of-sight of myocardial and liver activity with increased camera polar angle. Conclusion: This new, ultra-compact, dedicated tomographic imaging system has the potential of providing valuable, fully 3D functional information about small, otherwise indeterminate breast lesions as an adjunct to diagnostic mammography.
Time series modeling by a regression approach based on a latent process.
Chamroukhi, Faicel; Samé, Allou; Govaert, Gérard; Aknin, Patrice
2009-01-01
Time series are used in many domains including finance, engineering, economics and bioinformatics, generally to represent the change of a measurement over time. Modeling techniques may then be used to give a synthetic representation of such data. A new approach for time series modeling is proposed in this paper. It consists of a regression model incorporating a discrete hidden logistic process that allows switching, smoothly or abruptly, between different polynomial regression models. The model parameters are estimated by the maximum likelihood method performed by a dedicated Expectation Maximization (EM) algorithm. The M-step of the EM algorithm uses a multi-class Iterative Reweighted Least-Squares (IRLS) algorithm to estimate the hidden process parameters. To evaluate the proposed approach, an experimental study on simulated data and real world data was performed using two alternative approaches: a heteroskedastic piecewise regression model using a global optimization algorithm based on dynamic programming, and a Hidden Markov Regression Model whose parameters are estimated by the Baum-Welch algorithm. Finally, in the context of the remote monitoring of components of the French railway infrastructure, and more particularly the switch mechanism, the proposed approach has been applied to modeling and classifying time series representing the condition measurements acquired during switch operations.
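A much-simplified version of this EM fit can be sketched with a mixture of polynomial regressions: constant mixing weights replace the hidden logistic process, so the IRLS step is omitted. The Python sketch below is illustrative only, with hypothetical parameters, and is not the authors' model.

```python
import numpy as np

def em_mix_polyreg(t, y, K=3, degree=2, n_iter=50, seed=0):
    """EM for a mixture of K polynomial regressions on a time axis
    (a simplified stand-in; no hidden logistic process, no IRLS M-step)."""
    rng = np.random.default_rng(seed)
    X = np.vander(t, degree + 1)                     # polynomial design matrix
    n, p = X.shape
    beta = rng.normal(size=(K, p))
    sigma2 = np.full(K, y.var())
    pi = np.full(K, 1.0 / K)
    for _ in range(n_iter):
        # E-step: responsibilities of each component for each observation
        mu = X @ beta.T                              # (n, K) component means
        logp = -0.5 * ((y[:, None] - mu) ** 2 / sigma2 + np.log(2 * np.pi * sigma2))
        logp += np.log(pi)
        logp -= logp.max(axis=1, keepdims=True)
        r = np.exp(logp)
        r /= r.sum(axis=1, keepdims=True)
        # M-step: weighted least squares per component, then variances and weights
        for k in range(K):
            w = r[:, k]
            Xw = X * w[:, None]
            beta[k] = np.linalg.lstsq(Xw.T @ X, Xw.T @ y, rcond=None)[0]
            sigma2[k] = (w * (y - X @ beta[k]) ** 2).sum() / w.sum()
        pi = r.mean(axis=0)
    return beta, sigma2, pi, r

# toy usage: three regimes over time with additive noise
t = np.linspace(0, 1, 300)
y = np.where(t < 0.33, 1.0, np.where(t < 0.66, -1.0 + 2 * t, 0.5 * t ** 2))
y = y + 0.05 * np.random.randn(t.size)
beta, sigma2, pi, resp = em_mix_polyreg(t, y, K=3)
```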
Inverse Ising problem in continuous time: A latent variable approach
NASA Astrophysics Data System (ADS)
Donner, Christian; Opper, Manfred
2017-12-01
We consider the inverse Ising problem: the inference of network couplings from observed spin trajectories for a model with continuous time Glauber dynamics. By introducing two sets of auxiliary latent random variables we render the likelihood into a form which allows for simple iterative inference algorithms with analytical updates. The variables are (1) Poisson variables to linearize an exponential term which is typical for point process likelihoods and (2) Pólya-Gamma variables, which make the likelihood quadratic in the coupling parameters. Using the augmented likelihood, we derive an expectation-maximization (EM) algorithm to obtain the maximum likelihood estimate of network parameters. Using a third set of latent variables we extend the EM algorithm to sparse couplings via L1 regularization. Finally, we develop an efficient approximate Bayesian inference algorithm using a variational approach. We demonstrate the performance of our algorithms on data simulated from an Ising model. For data which are simulated from a more biologically plausible network with spiking neurons, we show that the Ising model captures well the low order statistics of the data and how the Ising couplings are related to the underlying synaptic structure of the simulated network.
Bowen, Jason D; Huang, Qiu; Ellin, Justin R; Lee, Tzu-Cheng; Shrestha, Uttam; Gullberg, Grant T; Seo, Youngho
2013-10-21
Single photon emission computed tomography (SPECT) myocardial perfusion imaging remains a critical tool in the diagnosis of coronary artery disease. However, after more than three decades of use, photon detection efficiency remains poor and unchanged. This is due to the continued reliance on parallel-hole collimators first introduced in 1964. These collimators possess poor geometric efficiency. Here we present the performance evaluation results of a newly designed multipinhole collimator with 20 pinhole apertures (PH20) for commercial SPECT systems. Computer simulations and numerical observer studies were used to assess the noise, bias and diagnostic imaging performance of a PH20 collimator in comparison with those of a low energy high resolution (LEHR) parallel-hole collimator. Ray-driven projector/backprojector pairs were used to model SPECT imaging acquisitions, including simulation of noiseless projection data and performing MLEM/OSEM image reconstructions. Poisson noise was added to noiseless projections for realistic projection data. Noise and bias performance were investigated for five mathematical cardiac and torso (MCAT) phantom anatomies imaged at two gantry orbit positions (19.5 and 25.0 cm). PH20 and LEHR images were reconstructed with 300 MLEM iterations and 30 OSEM iterations (ten subsets), respectively. Diagnostic imaging performance was assessed by a receiver operating characteristic (ROC) analysis performed on a single MCAT phantom; however, in this case PH20 images were reconstructed with 75 pixel-based OSEM iterations (four subsets). Four PH20 projection views from two positions of a dual-head camera acquisition and 60 LEHR projections were simulated for all studies. At uniformly-imposed resolution of 12.5 mm, significant improvements in SNR and diagnostic sensitivity (represented by the area under the ROC curve, or AUC) were realized when PH20 collimators are substituted for LEHR parallel-hole collimators. SNR improves by factors of 1.94-2.34 for the five patient anatomies and two orbital positions studied. For the ROC analysis the PH20 AUC is larger than the LEHR AUC with a p-value of 0.0067. Bias performance, however, decreases with the use of PH20 collimators. Systematic analyses showed PH20 collimators present improved diagnostic imaging performance over LEHR collimators, requiring only collimator exchange on existing SPECT cameras for their use.
HeinzelCluster: accelerated reconstruction for FORE and OSEM3D.
Vollmar, S; Michel, C; Treffert, J T; Newport, D F; Casey, M; Knöss, C; Wienhard, K; Liu, X; Defrise, M; Heiss, W D
2002-08-07
Using iterative three-dimensional (3D) reconstruction techniques for reconstruction of positron emission tomography (PET) is not feasible on most single-processor machines due to the excessive computing time needed, especially so for the large sinogram sizes of our high-resolution research tomograph (HRRT). In our first approach to speed up reconstruction time we transform the 3D scan into the format of a two-dimensional (2D) scan with sinograms that can be reconstructed independently using Fourier rebinning (FORE) and a fast 2D reconstruction method. On our dedicated reconstruction cluster (seven four-processor systems, Intel PIII@700 MHz, switched fast ethernet and Myrinet, Windows NT Server), we process these 2D sinograms in parallel. We have achieved a speedup > 23 using 26 processors and also compared results for different communication methods (RPC, Syngo, Myrinet GM). The other approach is to parallelize OSEM3D (implementation of C Michel), which has produced the best results for HRRT data so far and is more suitable for an adequate treatment of the sinogram gaps that result from the detector geometry of the HRRT. We have implemented two levels of parallelization for our dedicated cluster (a shared-memory fine-grain level on each node utilizing all four processors and a coarse-grain level allowing for 15 nodes), reducing the time for one core iteration from over 7 h to about 35 min.
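The key point, that after FORE rebinning each 2D sinogram can be reconstructed independently, can be sketched in Python with a process pool standing in for the cluster nodes. The recon_2d function below is a hypothetical placeholder for the fast 2D reconstruction; this is not the RPC/Myrinet implementation described in the paper.

```python
import numpy as np
from multiprocessing import Pool

def recon_2d(sinogram):
    """Placeholder for a fast 2D reconstruction (e.g. FBP or 2D OSEM)
    of one rebinned sinogram; returns a dummy image in this sketch."""
    return np.full((128, 128), sinogram.mean())

def reconstruct_all(sinograms, n_workers=4):
    """Distribute the independent 2D slice reconstructions over worker processes."""
    with Pool(processes=n_workers) as pool:
        images = pool.map(recon_2d, sinograms)
    return np.stack(images)

if __name__ == "__main__":
    sinos = [np.random.rand(256, 288) for _ in range(32)]   # toy rebinned data
    volume = reconstruct_all(sinos, n_workers=4)
    print(volume.shape)
```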
Orthogonalizing EM: A design-based least squares algorithm
Xiong, Shifeng; Dai, Bin; Huling, Jared; Qian, Peter Z. G.
2016-01-01
We introduce an efficient iterative algorithm, intended for various least squares problems, based on a design of experiments perspective. The algorithm, called orthogonalizing EM (OEM), works for ordinary least squares and can be easily extended to penalized least squares. The main idea of the procedure is to orthogonalize a design matrix by adding new rows and then solve the original problem by embedding the augmented design in a missing data framework. We establish several attractive theoretical properties concerning OEM. For the ordinary least squares with a singular regression matrix, an OEM sequence converges to the Moore-Penrose generalized inverse-based least squares estimator. For ordinary and penalized least squares with various penalties, it converges to a point having grouping coherence for fully aliased regression matrices. Convergence and the convergence rate of the algorithm are examined. Finally, we demonstrate that OEM is highly efficient for large-scale least squares and penalized least squares problems, and is considerably faster than competing methods when n is much larger than p. Supplementary materials for this article are available online. PMID:27499558
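For ordinary least squares the OEM idea reduces to a simple fixed-point iteration, since the orthogonalizing augmentation enters only through a scalar bound on the eigenvalues of X'X. The Python sketch below is a minimal illustration of that special case under illustrative assumptions (d taken as the largest eigenvalue, zero starting point, fixed iteration count); it does not cover the penalized variants discussed in the paper.

```python
import numpy as np

def oem_ols(X, y, n_iter=5000, beta0=None):
    """OEM-style iteration for ordinary least squares: the augmented
    (orthogonalized) design appears only through the scalar d >= lambda_max(X'X)."""
    XtX, Xty = X.T @ X, X.T @ y
    d = np.linalg.eigvalsh(XtX).max()            # ensures X'X + Delta'Delta = d*I is possible
    beta = np.zeros(X.shape[1]) if beta0 is None else beta0.copy()
    for _ in range(n_iter):
        beta = beta + (Xty - XtX @ beta) / d     # EM update with imputed augmented responses
    return beta

# quick check against lstsq on a small well-posed problem
rng = np.random.default_rng(1)
X = rng.normal(size=(50, 5))
y = X @ np.array([1.0, -2.0, 0.5, 0.0, 3.0]) + 0.1 * rng.normal(size=50)
print(np.allclose(oem_ols(X, y), np.linalg.lstsq(X, y, rcond=None)[0], atol=1e-6))
```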
STEME: A Robust, Accurate Motif Finder for Large Data Sets
Reid, John E.; Wernisch, Lorenz
2014-01-01
Motif finding is a difficult problem that has been studied for over 20 years. Some older popular motif finders are not suitable for analysis of the large data sets generated by next-generation sequencing. We recently published an efficient approximation (STEME) to the EM algorithm that is at the core of many motif finders such as MEME. This approximation allows the EM algorithm to be applied to large data sets. In this work we describe several efficient extensions to STEME that are based on the MEME algorithm. Together with the original STEME EM approximation, these extensions make STEME a fully-fledged motif finder with similar properties to MEME. We discuss the difficulty of objectively comparing motif finders. We show that STEME performs comparably to existing prominent discriminative motif finders, DREME and Trawler, on 13 sets of transcription factor binding data in mouse ES cells. We demonstrate the ability of STEME to find long degenerate motifs which these discriminative motif finders do not find. As part of our method, we extend an earlier method due to Nagarajan et al. for the efficient calculation of motif E-values. STEME's source code is available under an open source license and STEME is available via a web interface. PMID:24625410
Extracellular space preservation aids the connectomic analysis of neural circuits
Pallotto, Marta; Watkins, Paul V; Fubara, Boma; Singer, Joshua H; Briggman, Kevin L
2015-01-01
Dense connectomic mapping of neuronal circuits is limited by the time and effort required to analyze 3D electron microscopy (EM) datasets. Algorithms designed to automate image segmentation suffer from substantial error rates and require significant manual error correction. Any improvement in segmentation error rates would therefore directly reduce the time required to analyze 3D EM data. We explored preserving extracellular space (ECS) during chemical tissue fixation to improve the ability to segment neurites and to identify synaptic contacts. ECS preserved tissue is easier to segment using machine learning algorithms, leading to significantly reduced error rates. In addition, we observed that electrical synapses are readily identified in ECS preserved tissue. Finally, we determined that antibodies penetrate deep into ECS preserved tissue with only minimal permeabilization, thereby enabling correlated light microscopy (LM) and EM studies. We conclude that preservation of ECS benefits multiple aspects of the connectomic analysis of neural circuits. DOI: http://dx.doi.org/10.7554/eLife.08206.001 PMID:26650352
Crowdsourcing the creation of image segmentation algorithms for connectomics.
Arganda-Carreras, Ignacio; Turaga, Srinivas C; Berger, Daniel R; Cireşan, Dan; Giusti, Alessandro; Gambardella, Luca M; Schmidhuber, Jürgen; Laptev, Dmitry; Dwivedi, Sarvesh; Buhmann, Joachim M; Liu, Ting; Seyedhosseini, Mojtaba; Tasdizen, Tolga; Kamentsky, Lee; Burget, Radim; Uher, Vaclav; Tan, Xiao; Sun, Changming; Pham, Tuan D; Bas, Erhan; Uzunbas, Mustafa G; Cardona, Albert; Schindelin, Johannes; Seung, H Sebastian
2015-01-01
To stimulate progress in automating the reconstruction of neural circuits, we organized the first international challenge on 2D segmentation of electron microscopic (EM) images of the brain. Participants submitted boundary maps predicted for a test set of images, and were scored based on their agreement with a consensus of human expert annotations. The winning team had no prior experience with EM images, and employed a convolutional network. This "deep learning" approach has since become accepted as a standard for segmentation of EM images. The challenge has continued to accept submissions, and the best so far has resulted from cooperation between two teams. The challenge has probably saturated, as algorithms cannot progress beyond limits set by ambiguities inherent in 2D scoring and the size of the test dataset. Retrospective evaluation of the challenge scoring system reveals that it was not sufficiently robust to variations in the widths of neurite borders. We propose a solution to this problem, which should be useful for a future 3D segmentation challenge.
Single-particle cryo-EM-Improved ab initio 3D reconstruction with SIMPLE/PRIME.
Reboul, Cyril F; Eager, Michael; Elmlund, Dominika; Elmlund, Hans
2018-01-01
Cryogenic electron microscopy (cryo-EM) and single-particle analysis now enable the determination of high-resolution structures of macromolecular assemblies that have resisted X-ray crystallography and other approaches. We developed the SIMPLE open-source image-processing suite for analysing cryo-EM images of single particles. A core component of SIMPLE is the probabilistic PRIME algorithm for identifying clusters of images in 2D and determining relative orientations of single-particle projections in 3D. Here, we extend our previous work on PRIME and introduce new stochastic optimization algorithms that improve the robustness of the approach. Our refined method for identification of homogeneous subsets of images in accurate register substantially improves the resolution of the cluster centers and of the ab initio 3D reconstructions derived from them. We now obtain maps with a resolution better than 10 Å by exclusively processing cluster centers. Excellent parallel code performance on over-the-counter laptops and CPU workstations is demonstrated. © 2017 The Protein Society.
DiMaio, F; Chiu, W
2016-01-01
Electron cryo-microscopy (cryoEM) has advanced dramatically to become a viable tool for high-resolution structural biology research. The ultimate outcome of a cryoEM study is an atomic model of a macromolecule or its complex with interacting partners. This chapter describes a variety of algorithms and software to build a de novo model based on the cryoEM 3D density map, to optimize the model with the best stereochemistry restraints and finally to validate the model with proper protocols. The full process of atomic structure determination from a cryoEM map is described. The tools outlined in this chapter should prove extremely valuable in revealing atomic interactions guided by cryoEM data. © 2016 Elsevier Inc. All rights reserved.
A Generalized Fast Frequency Sweep Algorithm for Coupled Circuit-EM Simulations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rockway, J D; Champagne, N J; Sharpe, R M
2004-01-14
Frequency domain techniques are popular for analyzing electromagnetics (EM) and coupled circuit-EM problems. These techniques, such as the method of moments (MoM) and the finite element method (FEM), are used to determine the response of the EM portion of the problem at a single frequency. Since only one frequency is solved at a time, it may take a long time to calculate the parameters for wideband devices. In this paper, a fast frequency sweep based on the Asymptotic Wave Expansion (AWE) method is developed and applied to generalized mixed circuit-EM problems. The AWE method, which was originally developed for lumped-load circuit simulations, has recently been shown to be effective at quasi-static and low frequency full-wave simulations. Here it is applied to a full-wave MoM solver, capable of solving for metals, dielectrics, and coupled circuit-EM problems.
Afanasyev, Pavel; Seer-Linnemayr, Charlotte; Ravelli, Raimond B G; Matadeen, Rishi; De Carlo, Sacha; Alewijnse, Bart; Portugal, Rodrigo V; Pannu, Navraj S; Schatz, Michael; van Heel, Marin
2017-09-01
Single-particle cryogenic electron microscopy (cryo-EM) can now yield near-atomic resolution structures of biological complexes. However, the reference-based alignment algorithms commonly used in cryo-EM suffer from reference bias, limiting their applicability (also known as the 'Einstein from random noise' problem). Low-dose cryo-EM therefore requires robust and objective approaches to reveal the structural information contained in the extremely noisy data, especially when dealing with small structures. A reference-free pipeline is presented for obtaining near-atomic resolution three-dimensional reconstructions from heterogeneous ('four-dimensional') cryo-EM data sets. The methodologies integrated in this pipeline include a posteriori camera correction, movie-based full-data-set contrast transfer function determination, movie-alignment algorithms, (Fourier-space) multivariate statistical data compression and unsupervised classification, 'random-startup' three-dimensional reconstructions, four-dimensional structural refinements and Fourier shell correlation criteria for evaluating anisotropic resolution. The procedures exclusively use information emerging from the data set itself, without external 'starting models'. Euler-angle assignments are performed by angular reconstitution rather than by the inherently slower projection-matching approaches. The comprehensive 'ABC-4D' pipeline is based on the two-dimensional reference-free 'alignment by classification' (ABC) approach, where similar images in similar orientations are grouped by unsupervised classification. Some fundamental differences between X-ray crystallography versus single-particle cryo-EM data collection and data processing are discussed. The structure of the giant haemoglobin from Lumbricus terrestris at a global resolution of ∼3.8 Å is presented as an example of the use of the ABC-4D procedure.
Parallel goal-oriented adaptive finite element modeling for 3D electromagnetic exploration
NASA Astrophysics Data System (ADS)
Zhang, Y.; Key, K.; Ovall, J.; Holst, M.
2014-12-01
We present a parallel goal-oriented adaptive finite element method for accurate and efficient electromagnetic (EM) modeling of complex 3D structures. An unstructured tetrahedral mesh allows this approach to accommodate arbitrarily complex 3D conductivity variations and a priori known boundaries. The total electric field is approximated by the lowest order linear curl-conforming shape functions and the discretized finite element equations are solved by a sparse LU factorization. Accuracy of the finite element solution is achieved through adaptive mesh refinement that is performed iteratively until the solution converges to the desired accuracy tolerance. Refinement is guided by a goal-oriented error estimator that uses a dual-weighted residual method to optimize the mesh for accurate EM responses at the locations of the EM receivers. As a result, the mesh refinement is highly efficient since it only targets the elements where the inaccuracy of the solution corrupts the response at the possibly distant locations of the EM receivers. We compare the accuracy and efficiency of two approaches for estimating the primary residual error required at the core of this method: one uses local element and inter-element residuals and the other relies on solving a global residual system using a hierarchical basis. For computational efficiency our method follows the Bank-Holst algorithm for parallelization, where solutions are computed in subdomains of the original model. To resolve the load-balancing problem, this approach applies a spectral bisection method to divide the entire model into subdomains that have approximately equal error and the same number of receivers. The finite element solutions are then computed in parallel with each subdomain carrying out goal-oriented adaptive mesh refinement independently. We validate the newly developed algorithm by comparison with controlled-source EM solutions for 1D layered models and with 2D results from our earlier 2D goal oriented adaptive refinement code named MARE2DEM. We demonstrate the performance and parallel scaling of this algorithm on a medium-scale computing cluster with a marine controlled-source EM example that includes a 3D array of receivers located over a 3D model that includes significant seafloor bathymetry variations and a heterogeneous subsurface.
Counting malaria parasites with a two-stage EM-based algorithm using crowdsourced data.
Cabrera-Bean, Margarita; Pages-Zamora, Alba; Diaz-Vilor, Carles; Postigo-Camps, Maria; Cuadrado-Sanchez, Daniel; Luengo-Oroz, Miguel Angel
2017-07-01
Worldwide malaria eradication is currently one of the WHO's main global goals. In this work, we focus on the use of human-machine interaction strategies for low-cost, fast and reliable malaria diagnosis based on a crowdsourced approach. The technical problem addressed consists in detecting spots in images even under very harsh conditions, when positive objects are very similar to some artifacts. The clicks or tags delivered by several annotators labeling an image are modeled as a robust finite mixture, and techniques based on the Expectation-Maximization (EM) algorithm are proposed for accurately counting malaria parasites on thick blood smears obtained by microscopic Giemsa-stained techniques. This approach outperforms other traditional methods, as shown through experiments with real data.
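A heavily simplified stand-in for the click-aggregation step is to pool all annotator clicks and fit Gaussian mixtures of increasing size, picking the number of components by BIC. The Python sketch below assumes scikit-learn is available and ignores the robust outlier and artifact modelling that the two-stage EM of the paper relies on.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def count_spots(clicks, max_spots=10, seed=0):
    """Fit 1..max_spots Gaussian mixtures to pooled (x, y) clicks and
    return the BIC-selected count and model (simplified illustration only)."""
    best_k, best_bic, best_model = 0, np.inf, None
    for k in range(1, max_spots + 1):
        gm = GaussianMixture(n_components=k, random_state=seed).fit(clicks)
        bic = gm.bic(clicks)
        if bic < best_bic:
            best_k, best_bic, best_model = k, bic, gm
    return best_k, best_model

# toy data: three parasites, each clicked by six annotators with jitter
rng = np.random.default_rng(0)
centres = np.array([[10.0, 12.0], [40.0, 8.0], [25.0, 30.0]])
clicks = np.vstack([c + rng.normal(scale=1.0, size=(6, 2)) for c in centres])
print(count_spots(clicks, max_spots=6)[0])   # typically reports 3 for well-separated spots
```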
Results on the neutron energy distribution measurements at the RECH-1 Chilean nuclear reactor
DOE Office of Scientific and Technical Information (OSTI.GOV)
Aguilera, P.; Romero-Barrientos, J.
2016-07-07
Neutron activation experiments have been performed at the RECH-1 Chilean Nuclear Reactor to measure its neutron flux energy distribution. Samples of pure elements were activated to obtain the saturation activities for each reaction. Using gamma-ray spectroscopy we identified and measured the activity of the reaction product nuclei, obtaining the saturation activities of 20 reactions. GEANT4 and MCNP were used to compute the self-shielding factor to correct the cross section for each element. With the Expectation-Maximization (EM) algorithm we were able to unfold the neutron flux energy distribution at the dry tube position, near the RECH-1 core. In this work, we present the unfolding results using the EM algorithm.
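The unfolding step can be written as a standard multiplicative EM (MLEM-type) update that relates the group-wise flux to the measured saturation activities through a response matrix. The Python sketch below is generic and illustrative; the actual RECH-1 response matrix, group structure and self-shielding corrections are not reproduced.

```python
import numpy as np

def em_unfold(R, a_sat, n_iter=200):
    """Unfold a group-wise flux phi from saturation activities a_sat,
    where a_sat ~= R @ phi and R is the (reactions x energy groups) response matrix."""
    phi = np.ones(R.shape[1])                 # flat starting spectrum
    sens = R.sum(axis=0)                      # total sensitivity of each energy group
    for _ in range(n_iter):
        est = R @ phi                         # predicted saturation activities
        phi *= (R.T @ (a_sat / est)) / sens   # multiplicative EM update
    return phi
```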
Encke-Beta Predictor for Orion Burn Targeting and Guidance
NASA Technical Reports Server (NTRS)
Robinson, Shane; Scarritt, Sara; Goodman, John L.
2016-01-01
The state vector prediction algorithm selected for Orion on-board targeting and guidance is known as the Encke-Beta method. Encke-Beta uses a universal anomaly (beta) as the independent variable, valid for circular, elliptical, parabolic, and hyperbolic orbits. The variable, related to the change in eccentric anomaly, results in integration steps that cover smaller arcs of the trajectory at or near perigee, when velocity is higher. Some burns in the EM-1 and EM-2 mission plans are much longer than burns executed with the Apollo and Space Shuttle vehicles. Burn length, as well as hyperbolic trajectories, has driven the use of the Encke-Beta numerical predictor by the predictor/corrector guidance algorithm in place of legacy analytic thrust and gravity integrals.
NASA Astrophysics Data System (ADS)
Bosch, Carl; Degirmenci, Soysal; Barlow, Jason; Mesika, Assaf; Politte, David G.; O'Sullivan, Joseph A.
2016-05-01
X-ray computed tomography reconstruction for medical, security and industrial applications has evolved through 40 years of experience with rotating gantry scanners using analytic reconstruction techniques such as filtered back projection (FBP). In parallel, research into statistical iterative reconstruction algorithms has evolved to apply to sparse view scanners in nuclear medicine, low data rate scanners in Positron Emission Tomography (PET) [5, 7, 10] and more recently to reduce exposure to ionizing radiation in conventional X-ray CT scanners. Multiple approaches to statistical iterative reconstruction have been developed based primarily on variations of expectation maximization (EM) algorithms. The primary benefit of EM algorithms is the guarantee of convergence that is maintained when iterative corrections are made within the limits of convergent algorithms. The primary disadvantage, however is that strict adherence to correction limits of convergent algorithms extends the number of iterations and ultimate timeline to complete a 3D volumetric reconstruction. Researchers have studied methods to accelerate convergence through more aggressive corrections [1], ordered subsets [1, 3, 4, 9] and spatially variant image updates. In this paper we describe the development of an AM reconstruction algorithm with accelerated convergence for use in a real-time explosive detection application for aviation security. By judiciously applying multiple acceleration techniques and advanced GPU processing architectures, we are able to perform 3D reconstruction of scanned passenger baggage at a rate of 75 slices per second. Analysis of the results on stream of commerce passenger bags demonstrates accelerated convergence by factors of 8 to 15, when comparing images from accelerated and strictly convergent algorithms.
Alignment of cryo-EM movies of individual particles by optimization of image translations.
Rubinstein, John L; Brubaker, Marcus A
2015-11-01
Direct detector device (DDD) cameras have revolutionized single particle electron cryomicroscopy (cryo-EM). In addition to an improved camera detective quantum efficiency, acquisition of DDD movies allows for correction of movement of the specimen, due to both instabilities in the microscope specimen stage and electron beam-induced movement. Unlike specimen stage drift, beam-induced movement is not always homogeneous within an image. Local correlation in the trajectories of nearby particles suggests that beam-induced motion is due to deformation of the ice layer. Algorithms have already been described that can correct movement for large regions of frames and for >1 MDa protein particles. Another algorithm allows individual <1 MDa protein particle trajectories to be estimated, but requires rolling averages to be calculated from frames and fits linear trajectories for particles. Here we describe an algorithm that allows for individual <1 MDa particle images to be aligned without frame averaging or linear trajectories. The algorithm maximizes the overall correlation of the shifted frames with the sum of the shifted frames. The optimum in this single objective function is found efficiently by making use of analytically calculated derivatives of the function. To smooth estimates of particle trajectories, rapid changes in particle positions between frames are penalized in the objective function and weighted averaging of nearby trajectories ensures local correlation in trajectories. This individual particle motion correction, in combination with weighting of Fourier components to account for increasing radiation damage in later frames, can be used to improve 3-D maps from single particle cryo-EM. Copyright © 2015 Elsevier Inc. All rights reserved.
Estimation of mating system parameters in plant populations using marker loci with null alleles.
Ross, H A
1986-06-01
An Expectation-Maximization (EM) algorithm procedure is presented that extends the method of Cheliak et al. (1983) for maximum-likelihood estimation of mating system parameters in mixed mating system models. The extension permits the estimation of the rate of self-fertilization (s) and allele frequencies (Pi) at loci in outcrossing pollen, at marker loci having recessive null alleles. The algorithm makes use of maternal and filial genotypic arrays obtained by the electrophoretic analysis of cohorts of progeny. The genotypes of maternal plants must be known. Explicit equations are given for cases when the genotype of the maternal gamete inherited by a seed can (gymnosperms) or cannot (angiosperms) be determined. The procedure can accommodate any number of codominant alleles, but only one recessive null allele at each locus. An example, using actual data from Pinus banksiana, is presented to illustrate the application of this EM algorithm to the estimation of mating system parameters using marker loci having both codominant and recessive alleles.
Visualizing the global secondary structure of a viral RNA genome with cryo-electron microscopy
Garmann, Rees F.; Gopal, Ajaykumar; Athavale, Shreyas S.; Knobler, Charles M.; Gelbart, William M.; Harvey, Stephen C.
2015-01-01
The lifecycle, and therefore the virulence, of single-stranded (ss)-RNA viruses is regulated not only by their particular protein gene products, but also by the secondary and tertiary structure of their genomes. The secondary structure of the entire genomic RNA of satellite tobacco mosaic virus (STMV) was recently determined by selective 2′-hydroxyl acylation analyzed by primer extension (SHAPE). The SHAPE analysis suggested a single highly extended secondary structure with much less branching than occurs in the ensemble of structures predicted by purely thermodynamic algorithms. Here we examine the solution-equilibrated STMV genome by direct visualization with cryo-electron microscopy (cryo-EM), using an RNA of similar length transcribed from the yeast genome as a control. The cryo-EM data reveal an ensemble of branching patterns that are collectively consistent with the SHAPE-derived secondary structure model. Thus, our results both elucidate the statistical nature of the secondary structure of large ss-RNAs and give visual support for modern RNA structure determination methods. Additionally, this work introduces cryo-EM as a means to distinguish between competing secondary structure models if the models differ significantly in terms of the number and/or length of branches. Furthermore, with the latest advances in cryo-EM technology, we suggest the possibility of developing methods that incorporate restraints from cryo-EM into the next generation of algorithms for the determination of RNA secondary and tertiary structures. PMID:25752599
Estimation for general birth-death processes
Crawford, Forrest W.; Minin, Vladimir N.; Suchard, Marc A.
2013-01-01
Birth-death processes (BDPs) are continuous-time Markov chains that track the number of “particles” in a system over time. While widely used in population biology, genetics and ecology, statistical inference of the instantaneous particle birth and death rates remains largely limited to restrictive linear BDPs in which per-particle birth and death rates are constant. Researchers often observe the number of particles at discrete times, necessitating data augmentation procedures such as expectation-maximization (EM) to find maximum likelihood estimates. For BDPs on finite state-spaces, there are powerful matrix methods for computing the conditional expectations needed for the E-step of the EM algorithm. For BDPs on infinite state-spaces, closed-form solutions for the E-step are available for some linear models, but most previous work has resorted to time-consuming simulation. Remarkably, we show that the E-step conditional expectations can be expressed as convolutions of computable transition probabilities for any general BDP with arbitrary rates. This important observation, along with a convenient continued fraction representation of the Laplace transforms of the transition probabilities, allows for novel and efficient computation of the conditional expectations for all BDPs, eliminating the need for truncation of the state-space or costly simulation. We use this insight to derive EM algorithms that yield maximum likelihood estimation for general BDPs characterized by various rate models, including generalized linear models. We show that our Laplace convolution technique outperforms competing methods when they are available and demonstrate a technique to accelerate EM algorithm convergence. We validate our approach using synthetic data and then apply our methods to cancer cell growth and estimation of mutation parameters in microsatellite evolution. PMID:25328261
NASA Astrophysics Data System (ADS)
Kalantari, Faraz; Sen, Anando; Gifford, Howard C.
2014-03-01
SPECT imaging using In-111 ProstaScint is an FDA-approved method for diagnosing prostate cancer metastases within the pelvis. However, conventional medium-energy parallel-hole (MEPAR) collimators produce poor image quality and we are investigating the use of multipinhole (MPH) imaging as an alternative. This paper presents a method for evaluating MPH designs that makes use of sampling-sensitive (SS) mathematical model observers for tumor detection-localization tasks. Key to our approach is the redefinition of a normal (or background) reference image that is used with scanning model observers. We used this approach to compare different MPH configurations for the task of small-tumor detection in the prostate and surrounding lymph nodes. Four configurations used 10, 20, 30, and 60 pinholes evenly spaced over a complete circular orbit. A fixed-count acquisition protocol was assumed. Spherical tumors were placed within a digital anthropomorphic phantom having a realistic ProstaScint biodistribution. Imaging data sets were generated with an analytical projector and reconstructed volumes were obtained with the OSEM algorithm. The MPH configurations were compared in a localization ROC (LROC) study with 2D pelvic images and both human and model observers. Regular and SS versions of the scanning channelized nonprewhitening (CNPW) and visual-search (VS) model observers were applied. The SS models demonstrated the highest correlations with the average human-observer results.
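A fixed-location (non-scanning) channelized non-prewhitening observer conveys the basic mechanics behind the model observers named above. The Python sketch below is a minimal illustration with random channels and a crude uniform lesion signal; the scanning, sampling-sensitive and visual-search variants used in the study are not reproduced.

```python
import numpy as np

def cnpw_auc(signal_imgs, normal_imgs, channels):
    """Fixed-location channelized non-prewhitening (CNPW) observer.
    signal_imgs / normal_imgs: (n_cases, n_pixels) flattened ROIs,
    channels: (n_pixels, n_channels) channel templates."""
    vs = signal_imgs @ channels                  # channel outputs, lesion present
    vn = normal_imgs @ channels                  # channel outputs, lesion absent
    w = vs.mean(axis=0) - vn.mean(axis=0)        # non-prewhitened template
    ts, tn = vs @ w, vn @ w                      # observer ratings
    return (ts[:, None] > tn[None, :]).mean()    # AUC via the Mann-Whitney statistic

# toy check with random channels and a weak uniform "lesion" signal
rng = np.random.default_rng(0)
n_pix, n_ch = 32 * 32, 8
channels = rng.normal(size=(n_pix, n_ch))
normal = rng.normal(size=(300, n_pix))
signal = rng.normal(size=(300, n_pix)) + 0.5
print(cnpw_auc(signal, normal, channels))
```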
ERIC Educational Resources Information Center
Monroe, Scott; Cai, Li
2013-01-01
In Ramsay curve item response theory (RC-IRT, Woods & Thissen, 2006) modeling, the shape of the latent trait distribution is estimated simultaneously with the item parameters. In its original implementation, RC-IRT is estimated via Bock and Aitkin's (1981) EM algorithm, which yields maximum marginal likelihood estimates. This method, however,…
ERIC Educational Resources Information Center
Monroe, Scott; Cai, Li
2014-01-01
In Ramsay curve item response theory (RC-IRT) modeling, the shape of the latent trait distribution is estimated simultaneously with the item parameters. In its original implementation, RC-IRT is estimated via Bock and Aitkin's EM algorithm, which yields maximum marginal likelihood estimates. This method, however, does not produce the…
When Gravity Fails: Local Search Topology
NASA Technical Reports Server (NTRS)
Frank, Jeremy; Cheeseman, Peter; Stutz, John; Lau, Sonie (Technical Monitor)
1997-01-01
Local search algorithms for combinatorial search problems frequently encounter a sequence of states in which it is impossible to improve the value of the objective function; moves through these regions, called plateau moves, dominate the time spent in local search. We analyze and characterize plateaus for three different classes of randomly generated Boolean Satisfiability problems. We identify several interesting features of plateaus that impact the performance of local search algorithms. We show that local minima tend to be small but occasionally may be very large. We also show that local minima can be escaped without unsatisfying a large number of clauses, but that systematically searching for an escape route may be computationally expensive if the local minimum is large. We show that plateaus with exits, called benches, tend to be much larger than minima, and that some benches have very few exit states which local search can use to escape. We show that the solutions (i.e. global minima) of randomly generated problem instances form clusters, which behave similarly to local minima. We revisit several enhancements of local search algorithms and explain their performance in light of our results. Finally we discuss strategies for creating the next generation of local search algorithms.
On the Latent Variable Interpretation in Sum-Product Networks.
Peharz, Robert; Gens, Robert; Pernkopf, Franz; Domingos, Pedro
2017-10-01
One of the central themes in Sum-Product networks (SPNs) is the interpretation of sum nodes as marginalized latent variables (LVs). This interpretation yields an increased syntactic or semantic structure, allows the application of the EM algorithm and enables efficient MPE inference. In the literature, the LV interpretation was justified by explicitly introducing the indicator variables corresponding to the LVs' states. However, as pointed out in this paper, this approach is in conflict with the completeness condition in SPNs and does not fully specify the probabilistic model. We propose a remedy for this problem by modifying the original approach for introducing the LVs, which we call SPN augmentation. We discuss conditional independencies in augmented SPNs, formally establish the probabilistic interpretation of the sum-weights and give an interpretation of augmented SPNs as Bayesian networks. Based on these results, we find a sound derivation of the EM algorithm for SPNs. Furthermore, the Viterbi-style algorithm for MPE proposed in the literature was never proven to be correct. We show that this is indeed a correct algorithm, when applied to selective SPNs, and in particular when applied to augmented SPNs. Our theoretical results are confirmed in experiments on synthetic data and 103 real-world datasets.
Hobo, T; Asada, M; Kowyama, Y; Hattori, T
1999-09-01
ACGT-containing ABA response elements (ABREs) have been functionally identified in the promoters of various genes. In addition, single copies of ABRE have been found to require a cis-acting, coupling element to achieve ABA induction. A coupling element 3 (CE3) sequence, originally identified as such in the barley HVA1 promoter, is found approximately 30 bp downstream of motif A (ACGT-containing ABRE) in the promoter of the Osem gene. The relationship between these two elements was further defined by linker-scan analyses of a 55 bp fragment of the Osem promoter, which is sufficient for ABA-responsiveness and VP1 activation. The analyses revealed that both motif A and CE3 sequence were required not only for ABA-responsiveness but also for VP1 activation. Since the sequences of motif A and CE3 were found to be similar, motif-exchange experiments were carried out. The experiments demonstrated that motif A and CE3 were interchangeable by each other with respect to both ABA and VP1 regulation. In addition, both sequences were shown to be recognized by a VP1-interacting, ABA-responsive bZIP factor TRAB1. These results indicate that ACGT-containing ABREs and CE3 are functionally equivalent cis-acting elements. Furthermore, TRAB1 was shown to bind two other non-ACGT ABREs. Based on these results, all these ABREs including CE3 are proposed to be categorized into a single class of cis-acting elements.
Generalized expectation-maximization segmentation of brain MR images
NASA Astrophysics Data System (ADS)
Devalkeneer, Arnaud A.; Robe, Pierre A.; Verly, Jacques G.; Phillips, Christophe L. M.
2006-03-01
Manual segmentation of medical images is impractical because it is time-consuming, not reproducible, and prone to human error. It is also very difficult to take into account the 3D nature of the images. Thus, semi- or fully-automatic methods are of great interest. Current segmentation algorithms based on an Expectation-Maximization (EM) procedure present some limitations. The algorithm by Ashburner et al., 2005, does not allow multichannel inputs, e.g. two MR images of different contrast, and does not use spatial constraints between adjacent voxels, e.g. Markov random field (MRF) constraints. The solution of Van Leemput et al., 1999, employs a simplified model (mixture coefficients are not estimated and only one Gaussian is used by tissue class, with three for the image background). We have thus implemented an algorithm that combines the features of these two approaches: multichannel inputs, intensity bias correction, multi-Gaussian histogram model, and Markov random field (MRF) constraints. Our proposed method classifies tissues in three iterative main stages by way of a Generalized-EM (GEM) algorithm: (1) estimation of the Gaussian parameters modeling the histogram of the images, (2) correction of image intensity non-uniformity, and (3) modification of prior classification knowledge by MRF techniques. The goal of the GEM algorithm is to maximize the log-likelihood across the classes and voxels. Our segmentation algorithm was validated on synthetic data (with the Dice metric criterion) and real data (by a neurosurgeon) and compared to the original algorithms by Ashburner et al. and Van Leemput et al. Our combined approach leads to more robust and accurate segmentation.
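The core EM loop for intensity-based tissue classification can be sketched without the multichannel input, bias-field correction or MRF prior described above. The Python code below is a bare-bones illustration under those simplifications, not the GEM algorithm of the paper.

```python
import numpy as np

def gmm_segment(intensities, K=3, n_iter=100, seed=0):
    """EM for a K-class Gaussian mixture over voxel intensities;
    returns hard labels and the fitted class parameters."""
    rng = np.random.default_rng(seed)
    x = np.asarray(intensities, dtype=float).ravel()
    mu = rng.choice(x, K)                                  # initial class means
    var = np.full(K, x.var())
    pi = np.full(K, 1.0 / K)
    for _ in range(n_iter):
        # E-step: class responsibilities per voxel
        logp = -0.5 * ((x[:, None] - mu) ** 2 / var + np.log(2 * np.pi * var)) + np.log(pi)
        logp -= logp.max(axis=1, keepdims=True)
        r = np.exp(logp)
        r /= r.sum(axis=1, keepdims=True)
        # M-step: update class means, variances and mixing weights
        nk = r.sum(axis=0)
        mu = (r * x[:, None]).sum(axis=0) / nk
        var = (r * (x[:, None] - mu) ** 2).sum(axis=0) / nk
        pi = nk / x.size
    return r.argmax(axis=1), mu, var, pi
```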
Implementation of the EM Algorithm in the Estimation of Item Parameters: The BILOG Computer Program.
ERIC Educational Resources Information Center
Mislevy, Robert J.; Bock, R. Darrell
This paper reviews the basic elements of the EM approach to estimating item parameters and illustrates its use with one simulated and one real data set. In order to illustrate the use of the BILOG computer program, runs for 1-, 2-, and 3-parameter models are presented for the two sets of data. First is a set of responses from 1,000 persons to five…
Gaussian-input Gaussian mixture model for representing density maps and atomic models.
Kawabata, Takeshi
2018-07-01
A new Gaussian mixture model (GMM) has been developed for better representations of both atomic models and electron microscopy 3D density maps. The standard GMM algorithm employs an EM algorithm to determine the parameters. It accepts a set of 3D points with weights, corresponding to voxel or atomic centers. Although the standard algorithm worked reasonably well, it had three problems. First, it ignored the size (voxel width or atomic radius) of the input, and thus it could lead to a GMM with a smaller spread than the input. Second, the algorithm had a singularity problem, as it sometimes stopped the iterative procedure due to a Gaussian function with almost zero variance. Third, a map with a large number of voxels required a long computation time for conversion to a GMM. To solve these problems, we have introduced a Gaussian-input GMM algorithm, which considers the input atoms or voxels as a set of Gaussian functions. The standard EM algorithm of GMM was extended to optimize the new GMM. The new GMM has an identical radius of gyration to the input, and does not suddenly stop due to the singularity problem. For fast computation, we have introduced down-sampled Gaussian functions (DSG) by merging neighboring voxels into an anisotropic Gaussian function. It provides a GMM with thousands of Gaussian functions in a short computation time. We also have introduced a DSG-input GMM: the Gaussian-input GMM with the DSG as the input. This new algorithm is much faster than the standard algorithm. Copyright © 2018 The Author(s). Published by Elsevier Inc. All rights reserved.
Cryo-EM visualization of the protein machine that replicates the chromosome
NASA Astrophysics Data System (ADS)
Li, Huilin
Structural knowledge is key to understanding biological functions. Cryo-EM is a physical method that uses transmission electron microscopy to visualize biological molecules that are frozen in vitreous ice. Due to recent advances in direct electron detectors and image processing algorithms, cryo-EM has become a high-resolution technique. The cryo-EM field is undergoing a rapid expansion, and the vast majority of research institutions and research universities around the world are setting up cryo-EM research. Indeed, the method is revolutionizing structural and molecular biology. We have been using cryo-EM to study the structure and mechanism of eukaryotic chromosome replication. Despite an abundance of cartoon drawings found in review articles and biology textbooks, the structure of the eukaryotic helicase that unwinds the double-stranded DNA has been unknown. It has also been unknown how the helicase works with DNA polymerases to accomplish the feat of duplicating the genome. In my presentation, I will show how we have used cryo-EM to arrive at structures of the eukaryotic chromosome replication machinery and describe mechanistic insights we have gleaned from the structures.
NASA Astrophysics Data System (ADS)
He, Xin; Links, Jonathan M.; Frey, Eric C.
2010-09-01
Quantum noise as well as anatomic and uptake variability in patient populations limits observer performance on a defect detection task in myocardial perfusion SPECT (MPS). The goal of this study was to investigate the relative importance of these two effects by varying acquisition time, which determines the count level, and assessing the change in performance on a myocardial perfusion (MP) defect detection task using both mathematical and human observers. We generated ten sets of projections of a simulated patient population with count levels ranging from 1/128 to around 15 times a typical clinical count level to simulate different levels of quantum noise. For the simulated population we modeled variations in patient, heart and defect size, heart orientation and shape, defect location, organ uptake ratio, etc. The projection data were reconstructed using the OS-EM algorithm with no compensation or with attenuation, detector response and scatter compensation (ADS). The images were then post-filtered and reoriented to generate short-axis slices. A channelized Hotelling observer (CHO) was applied to the short-axis images, and the area under the receiver operating characteristics (ROC) curve (AUC) was computed. For each noise level and reconstruction method, we optimized the number of iterations and cutoff frequencies of the Butterworth filter to maximize the AUC. Using the images obtained with the optimal iteration and cutoff frequency and ADS compensation, we performed human observer studies for four count levels to validate the CHO results. Both CHO and human observer studies demonstrated that observer performance was dependent on the relative magnitude of the quantum noise and the patient variation. When the count level was high, the patient variation dominated, and the AUC increased very slowly with changes in the count level for the same level of anatomic variability. When the count level was low, however, quantum noise dominated, and changes in the count level resulted in large changes in the AUC. This behavior agreed with a theoretical expression for the AUC as a function of quantum and anatomical noise levels. The results of this study demonstrate the importance of the tradeoff between anatomical and quantum noise in determining observer performance. For myocardial perfusion imaging, it indicates that, at current clinical count levels, there is some room to reduce acquisition time or injected activity without substantially degrading performance on myocardial perfusion defect detection.
Memetic algorithms for de novo motif-finding in biomedical sequences.
Bi, Chengpeng
2012-09-01
The objectives of this study are to design and implement a new memetic algorithm for de novo motif discovery, which is then applied to detect important signals hidden in various biomedical molecular sequences. In this paper, memetic algorithms are developed and tested on de novo motif-finding problems. Several strategies are employed in the algorithm design, not only to efficiently explore the multiple sequence local alignment space, but also to effectively uncover the molecular signals. As a result, there are a number of key features in the implementation of the memetic motif-finding algorithm (MaMotif), including a chromosome replacement operator, a chromosome alteration-aware local search operator, a truncated local search strategy, and a stochastic operation of local search imposed on individual learning. To test the new algorithm, we compare MaMotif with several other similar algorithms using simulated and experimental data including genomic DNA, primary microRNA sequences (let-7 family), and transmembrane protein sequences. The new memetic motif-finding algorithm is successfully implemented in C++, and exhaustively tested with various simulated and real biological sequences. In the simulations, MaMotif is the most time-efficient algorithm compared with the others: it runs 2 times faster than the expectation maximization (EM) method and 16 times faster than the genetic algorithm-based EM hybrid. In both simulated and experimental testing, results show that the new algorithm compares favorably with, or is superior to, the other algorithms. Notably, MaMotif is able to successfully discover the transcription factors' binding sites in chromatin immunoprecipitation followed by massively parallel sequencing (ChIP-Seq) data, correctly uncover the RNA splicing signals in gene expression, precisely find the highly conserved helix motif in the transmembrane protein sequences, and rightly detect the palindromic segments in the primary microRNA sequences. The memetic motif-finding algorithm is effectively designed and implemented, and its applications demonstrate that it is not only time-efficient but also exhibits excellent performance compared with other popular algorithms. Copyright © 2012 Elsevier B.V. All rights reserved.
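The overall structure of a memetic algorithm, a genetic loop whose offspring may additionally undergo local search, can be shown with a generic bit-string skeleton. The Python sketch below uses hypothetical operators (truncation selection, one-point crossover, bit-flip mutation and hill climbing) on a toy objective; it is not MaMotif's alignment representation or its specialized operators.

```python
import numpy as np

def memetic_maximize(fitness, dim, pop_size=20, gens=50, p_ls=0.3, seed=0):
    """Generic memetic skeleton: genetic search plus stochastic per-offspring
    local search (bit-flip hill climbing applied with probability p_ls)."""
    rng = np.random.default_rng(seed)
    pop = rng.integers(0, 2, size=(pop_size, dim))

    def local_search(ind):
        best = fitness(ind)
        for j in rng.permutation(dim):      # single first-improvement pass
            ind[j] ^= 1
            f = fitness(ind)
            if f > best:
                best = f
            else:
                ind[j] ^= 1                 # undo non-improving flip
        return ind

    for _ in range(gens):
        scores = np.array([fitness(ind) for ind in pop])
        parents = pop[np.argsort(scores)[-pop_size // 2:]]      # truncation selection
        children = []
        for _ in range(pop_size - len(parents)):
            a, b = parents[rng.integers(len(parents), size=2)]
            cut = rng.integers(1, dim)
            child = np.concatenate([a[:cut], b[cut:]])          # one-point crossover
            child = child ^ (rng.random(dim) < 1.0 / dim)       # bit-flip mutation
            if rng.random() < p_ls:
                child = local_search(child)                     # individual learning
            children.append(child)
        pop = np.vstack([parents, children])
    scores = np.array([fitness(ind) for ind in pop])
    return pop[scores.argmax()], scores.max()

# toy usage: recover a hidden target pattern (negative Hamming distance as fitness)
target = np.random.default_rng(1).integers(0, 2, 40)
best, score = memetic_maximize(lambda ind: -np.abs(ind - target).sum(), dim=40)
```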
1981-06-01
Work is ongoing with the MITRE Corporation examining methods of providing ATARS service to aircraft in the traffic pattern. SRDS: "DABS, BCAS and ATARS"; Questions and Answers; Speakers: Andres Zellweger, FAA, OSEM. Separation under either EFR or normal IFR procedures is provided by the DABS/ATARS operating in a traffic-separation rather than collision-avoidance mode.
Performance of 3DOSEM and MAP algorithms for reconstructing low count SPECT acquisitions.
Grootjans, Willem; Meeuwis, Antoi P W; Slump, Cornelis H; de Geus-Oei, Lioe-Fee; Gotthardt, Martin; Visser, Eric P
2016-12-01
Low count single photon emission computed tomography (SPECT) is becoming more important in view of whole body SPECT and reduction of radiation dose. In this study, we investigated the performance of several 3D ordered subset expectation maximization (3DOSEM) and maximum a posteriori (MAP) algorithms for reconstructing low count SPECT images. Phantom experiments were conducted using the National Electrical Manufacturers Association (NEMA) NU2 image quality (IQ) phantom. The background compartment of the phantom was filled with varying concentrations of pertechnetate and indium chloride, simulating various clinical imaging conditions. Images were acquired using a hybrid SPECT/CT scanner and reconstructed with 3DOSEM and MAP reconstruction algorithms implemented in Siemens Syngo MI.SPECT (Flash3D) and Hermes Hybrid Recon Oncology (Hybrid Recon 3DOSEM and MAP). Image analysis was performed by calculating the contrast recovery coefficient (CRC), percentage background variability (N%), and contrast-to-noise ratio (CNR), defined as the ratio between CRC and N%. Furthermore, image distortion was characterized by calculating the aspect ratio (AR) of ellipses fitted to the hot spheres. Additionally, the performance of these algorithms in reconstructing clinical images was investigated. Images reconstructed with 3DOSEM algorithms demonstrated superior image quality in terms of contrast and resolution recovery when compared to images reconstructed with filtered back-projection (FBP), OSEM, and 2DOSEM. However, the occurrence of correlated noise patterns and image distortions significantly deteriorated the quality of 3DOSEM reconstructed images. The mean AR for the 37, 28, 22, and 17 mm spheres was 1.3, 1.3, 1.6, and 1.7, respectively. The mean N% increased in high and low count Flash3D and Hybrid Recon 3DOSEM from 5.9% and 4.0% to 11.1% and 9.0%, respectively. Similarly, the mean CNR decreased in high and low count Flash3D and Hybrid Recon 3DOSEM from 8.7 and 8.8 to 3.6 and 4.2, respectively. Regularization with smoothing priors could suppress these noise patterns at the cost of reduced image contrast. The mean N% was 6.4% and 6.8% for low count QSP and MRP MAP reconstructed images. Alternatively, regularization with an anatomical Bowsher prior resulted in sharp images with high contrast, limited image distortion, and a low N% of 8.3% in low count images, although some image artifacts did occur. Analysis of clinical images suggested that the same effects occur in clinical imaging. Image quality of low count SPECT acquisitions reconstructed with modern 3DOSEM algorithms is deteriorated by the occurrence of correlated noise patterns and image distortions. The artifacts observed in the phantom experiments can also occur in clinical imaging. Copyright © 2015. Published by Elsevier GmbH.
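For readers unfamiliar with the NEMA-style figures of merit used above, the following minimal sketch shows how a contrast recovery coefficient (CRC), percentage background variability (N%) and their ratio (CNR) can be computed from sphere and background ROI statistics. The variable names, the toy numbers and the exact CRC definition (hot-sphere recovery against a known activity ratio) are assumptions, since the abstract does not spell out its formulas.

    import numpy as np

    def nema_iq_metrics(sphere_mean, bkg_roi_means, true_ratio):
        """CRC, N% and CNR from ROI statistics (one possible set of NEMA-style definitions)."""
        bkg_mean = np.mean(bkg_roi_means)
        bkg_sd = np.std(bkg_roi_means, ddof=1)
        crc = (sphere_mean / bkg_mean - 1.0) / (true_ratio - 1.0) * 100.0   # contrast recovery, %
        n_pct = bkg_sd / bkg_mean * 100.0                                   # background variability, %
        cnr = crc / n_pct                                                   # contrast-to-noise ratio as defined above
        return crc, n_pct, cnr

    # toy numbers: one sphere ROI mean, 60 background ROI means, 8:1 sphere-to-background activity ratio
    rng = np.random.default_rng(1)
    print(nema_iq_metrics(5.2, rng.normal(1.0, 0.06, 60), true_ratio=8.0))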
Block clustering based on difference of convex functions (DC) programming and DC algorithms.
Le, Hoai Minh; Le Thi, Hoai An; Dinh, Tao Pham; Huynh, Van Ngai
2013-10-01
We investigate difference of convex functions (DC) programming and the DC algorithm (DCA) to solve the block clustering problem in the continuous framework, which traditionally requires solving a hard combinatorial optimization problem. DC reformulation techniques and exact penalty in DC programming are developed to build an appropriate equivalent DC program of the block clustering problem. They lead to an elegant and explicit DCA scheme for the resulting DC program. Computational experiments show the robustness and efficiency of the proposed algorithm and its superiority over standard algorithms such as two-mode K-means, two-mode fuzzy clustering, and block classification EM.
NASA Astrophysics Data System (ADS)
Zare, Ehsan; Huang, Jingyi; Koganti, Triven; Triantafilis, John
2017-04-01
In order to understand the drivers of topsoil salinization, the distribution and movement of salt in relation to groundwater need to be mapped. In this study, we describe a method to map the distribution of soil salinity, as measured by the electrical conductivity of a saturated soil-paste extract (ECe), in 3 dimensions around a water storage reservoir in an irrigated field near Bourke, New South Wales, Australia. A quasi-3D electromagnetic conductivity image (EMCI), or model of the true electrical conductivity (σ), was developed using 133 apparent electrical conductivity (ECa) measurements collected on a 50 m grid with various coil arrays of DUALEM-421S and EM34 instruments. For the DUALEM-421S we considered ECa in horizontal coplanar (i.e., 1 mPcon, 2 mPcon and 4 mPcon) and vertical coplanar (i.e., 1 mHcon, 2 mHcon and 4 mHcon) arrays. For the EM34, three measurements in the horizontal mode (i.e., EM34-10H, EM34-20H and EM34-40H) were considered. We estimated σ using a quasi-3D joint-inversion algorithm (EM4Soil). The best correlation (R² = 0.92) between σ and measured soil ECe was obtained when a forward modelling approach (FS), inversion algorithm (S2) and damping factor (λ = 0.2) were used with both DUALEM-421S and EM34 data, excluding the 4 m coil arrays of the DUALEM-421S. A linear regression calibration model was used to predict ECe in 3 dimensions beneath the study field. The predicted ECe was consistent with previous studies, revealed the distribution of ECe, and helped to infer a freshwater intrusion from the water storage reservoir at depth and as a function of proximity to near-surface prior stream channels and buried paleochannels. It was concluded that this method can be applied elsewhere to map soil salinity and water movement and to provide guidance for improved land management.
Finite-Difference Algorithm for Simulating 3D Electromagnetic Wavefields in Conductive Media
NASA Astrophysics Data System (ADS)
Aldridge, D. F.; Bartel, L. C.; Knox, H. A.
2013-12-01
Electromagnetic (EM) wavefields are routinely used in geophysical exploration for detection and characterization of subsurface geological formations of economic interest. Recorded EM signals depend strongly on the current conductivity of geologic media. Hence, they are particularly useful for inferring fluid content of saturated porous bodies. In order to enhance understanding of field-recorded data, we are developing a numerical algorithm for simulating three-dimensional (3D) EM wave propagation and diffusion in heterogeneous conductive materials. Maxwell's equations are combined with isotropic constitutive relations to obtain a set of six, coupled, first-order partial differential equations governing the electric and magnetic vectors. An advantage of this system is that it does not contain spatial derivatives of the three medium parameters electric permittivity, magnetic permeability, and current conductivity. Numerical solution methodology consists of explicit, time-domain finite-differencing on a 3D staggered rectangular grid. Temporal and spatial FD operators have order 2 and N, where N is user-selectable. We use an artificially-large electric permittivity to maximize the FD timestep, and thus reduce execution time. For the low frequencies typically used in geophysical exploration, accuracy is not unduly compromised. Grid boundary reflections are mitigated via convolutional perfectly matched layers (C-PMLs) imposed at the six grid flanks. A shared-memory-parallel code implementation via OpenMP directives enables rapid algorithm execution on a multi-thread computational platform. Good agreement is obtained in comparisons of numerically-generated data with reference solutions. EM wavefields are sourced via point current density and magnetic dipole vectors. Spatially-extended inductive sources (current carrying wire loops) are under development. We are particularly interested in accurate representation of high-conductivity sub-grid-scale features that are common in industrial environments (borehole casing, pipes, railroad tracks). Present efforts are oriented toward calculating the EM responses of these objects via a First Born Approximation approach. Sandia National Laboratories is a multi-program laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the US Department of Energy's National Nuclear Security Administration under contract DE-AC04-94AL85000.
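To make the staggered-grid, leapfrog time-stepping idea concrete, here is a minimal 1D analogue in Python of the scheme sketched above (second-order operators only). The grid size, material values, source, and the artificially large permittivity used to enlarge the stable timestep are illustrative assumptions, not the Sandia implementation.

    import numpy as np

    nz, nt = 400, 1200
    dz = 1.0                                   # grid spacing [m]
    eps0, mu0 = 8.854e-12, 4e-7 * np.pi
    eps = np.full(nz, 100.0 * eps0)            # artificially large permittivity enlarges the stable timestep
    sigma = np.zeros(nz); sigma[250:] = 1e-3   # conductive half-space [S/m] (assumed)
    c_max = 1.0 / np.sqrt(eps.min() * mu0)
    dt = 0.5 * dz / c_max                      # CFL-limited timestep

    Ex = np.zeros(nz)                          # E nodes
    Hy = np.zeros(nz - 1)                      # H nodes, staggered half a cell from E

    for n in range(nt):
        # leapfrog update of H from the spatial derivative of E
        Hy += dt / (mu0 * dz) * (Ex[1:] - Ex[:-1])
        # semi-implicit update of E including the conduction current sigma*E
        a = eps[1:-1] / dt - 0.5 * sigma[1:-1]
        b = eps[1:-1] / dt + 0.5 * sigma[1:-1]
        Ex[1:-1] = (a * Ex[1:-1] + (Hy[1:] - Hy[:-1]) / dz) / b
        Ex[nz // 4] += np.exp(-((n - 80) / 25.0) ** 2)   # soft Gaussian source (point current density analogue)

    print(Ex.max(), Ex.min())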
A segmentation/clustering model for the analysis of array CGH data.
Picard, F; Robin, S; Lebarbier, E; Daudin, J-J
2007-09-01
Microarray-CGH (comparative genomic hybridization) experiments are used to detect and map chromosomal imbalances. A CGH profile can be viewed as a succession of segments that represent homogeneous regions in the genome whose representative sequences share the same relative copy number on average. Segmentation methods constitute a natural framework for the analysis, but they do not provide a biological status for the detected segments. We propose a new model for this segmentation/clustering problem, combining a segmentation model with a mixture model. We present a new hybrid algorithm called dynamic programming-expectation maximization (DP-EM) to estimate the parameters of the model by maximum likelihood. This algorithm combines DP and the EM algorithm. We also propose a model selection heuristic to select the number of clusters and the number of segments. An example of our procedure is presented, based on publicly available data sets. We compare our method to segmentation methods and to hidden Markov models, and we show that the new segmentation/clustering model is a promising alternative that can be applied in the more general context of signal processing.
Compression of strings with approximate repeats.
Allison, L; Edgoose, T; Dix, T I
1998-01-01
We describe a model for strings of characters that is loosely based on the Lempel-Ziv model with the addition that a repeated substring can be an approximate match to the original substring; this is close to the situation of DNA, for example. Typically there are many explanations for a given string under the model, some optimal and many suboptimal. Rather than commit to one optimal explanation, we sum the probabilities over all explanations under the model because this gives the probability of the data under the model. The model has a small number of parameters and these can be estimated from the given string by an expectation-maximization (EM) algorithm. Each iteration of the EM algorithm takes O(n²) time and a few iterations are typically sufficient. O(n²) complexity is impractical for strings of more than a few tens of thousands of characters and a faster approximation algorithm is also given. The model is further extended to include approximate reverse-complementary repeats when analyzing DNA strings. Tests include the recovery of parameter estimates from known sources and applications to real DNA strings.
Direct 4D reconstruction of parametric images incorporating anato-functional joint entropy.
Tang, Jing; Kuwabara, Hiroto; Wong, Dean F; Rahmim, Arman
2010-08-07
We developed an anatomy-guided 4D closed-form algorithm to directly reconstruct parametric images from projection data for (nearly) irreversible tracers. Conventional methods consist of individually reconstructing 2D/3D PET data, followed by graphical analysis on the sequence of reconstructed image frames. The proposed direct reconstruction approach maintains the simplicity and accuracy of the expectation-maximization (EM) algorithm by extending the system matrix to include the relation between the parametric images and the measured data. A closed-form solution was achieved using a different hidden complete-data formulation within the EM framework. Furthermore, the proposed method was extended to maximum a posteriori reconstruction via incorporation of MR image information, taking the joint entropy between MR and parametric PET features as the prior. Using realistic simulated noisy [(11)C]-naltrindole PET and MR brain images/data, the quantitative performance of the proposed methods was investigated. Significant improvements in terms of noise versus bias performance were demonstrated when performing direct parametric reconstruction, and additionally upon extending the algorithm to its Bayesian counterpart using the MR-PET joint entropy measure.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen, Wei-Chen; Maitra, Ranjan
2011-01-01
We propose a model-based approach for clustering time series regression data in an unsupervised machine learning framework to identify groups, under the assumption that each mixture component follows a Gaussian autoregressive regression model of order p. Given the number of groups, the traditional maximum likelihood approach of estimating the parameters using the expectation-maximization (EM) algorithm can be employed, although it is computationally demanding. The somewhat fast tune to the EM folk song provided by the Alternating Expectation Conditional Maximization (AECM) algorithm can alleviate the problem to some extent. In this article, we develop an alternative partial expectation conditional maximization algorithm (APECM) that uses an additional data augmentation storage step to efficiently implement AECM for finite mixture models. Results from our simulation experiments show improved performance in terms of both the number of iterations and computation time. The methodology is applied to the problem of clustering mutual funds data on the basis of their average annual per cent returns and in the presence of economic indicators.
Zhang, Zhenzhen; O'Neill, Marie S; Sánchez, Brisa N
2016-04-01
Factor analysis is a commonly used method of modelling correlated multivariate exposure data. Typically, the measurement model is assumed to have constant factor loadings. However, from our preliminary analyses of the Environmental Protection Agency's (EPA's) PM2.5 fine speciation data, we have observed that the factor loadings for four constituents change considerably in stratified analyses. Since invariance of factor loadings is a prerequisite for valid comparison of the underlying latent variables, we propose a factor model with non-constant factor loadings that change over time and space, modelled using P-splines penalized with the generalized cross-validation (GCV) criterion. The model is implemented using the Expectation-Maximization (EM) algorithm, and we select the multiple spline smoothing parameters by minimizing the GCV criterion with Newton's method during each iteration of the EM algorithm. The algorithm is applied to a one-factor model that includes four constituents. Through bootstrap confidence bands, we find that the factor loading for total nitrate changes across seasons and geographic regions.
Missing value imputation: with application to handwriting data
NASA Astrophysics Data System (ADS)
Xu, Zhen; Srihari, Sargur N.
2015-01-01
Missing values make pattern analysis difficult, particularly with limited available data. In longitudinal research, missing values accumulate, thereby aggravating the problem. Here we consider how to deal with temporal data with missing values in handwriting analysis. In the task of studying development of individuality of handwriting, we encountered the fact that feature values are missing for several individuals at several time instances. Six algorithms, i.e., random imputation, mean imputation, most likely independent value imputation, and three methods based on Bayesian network (static Bayesian network, parameter EM, and structural EM), are compared with children's handwriting data. We evaluate the accuracy and robustness of the algorithms under different ratios of missing data and missing values, and useful conclusions are given. Specifically, static Bayesian network is used for our data which contain around 5% missing data to provide adequate accuracy and low computational cost.
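As a small illustration of the difference between naive and model-based imputation mentioned above, the sketch below compares column-mean imputation with an iterative Gaussian conditional-mean scheme, in which each missing entry is repeatedly re-predicted by regression on the observed entries of its row. The data, missingness pattern and iteration count are invented for illustration, and a full EM treatment would additionally carry the conditional covariance of the imputed values, which this sketch omits.

    import numpy as np

    rng = np.random.default_rng(0)
    n, d = 200, 4
    A = rng.normal(size=(d, d))
    X_true = rng.normal(size=(n, d)) @ A            # correlated toy features
    mask = rng.random((n, d)) < 0.1                 # ~10% of entries missing at random
    X = np.where(mask, np.nan, X_true)

    def mean_impute(X):
        col_means = np.nanmean(X, axis=0)
        return np.where(np.isnan(X), col_means, X)

    def gaussian_iterative_impute(X, n_iter=30):
        miss = np.isnan(X)
        Xf = mean_impute(X)                         # start from the mean fill
        for _ in range(n_iter):
            mu = Xf.mean(axis=0)
            S = np.cov(Xf, rowvar=False)
            for i in range(Xf.shape[0]):
                m = miss[i]
                if not m.any() or m.all():
                    continue
                o = ~m
                # conditional mean of missing entries given observed ones under the current Gaussian fit
                Soo = S[np.ix_(o, o)] + 1e-8 * np.eye(o.sum())
                Smo = S[np.ix_(m, o)]
                Xf[i, m] = mu[m] + Smo @ np.linalg.solve(Soo, Xf[i, o] - mu[o])
        return Xf

    for name, Xi in [("mean", mean_impute(X)), ("iterative", gaussian_iterative_impute(X))]:
        rmse = np.sqrt(np.mean((Xi[mask] - X_true[mask]) ** 2))
        print(name, round(rmse, 3))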
NASA Astrophysics Data System (ADS)
Metzger, Andrew; Benavides, Amanda; Nopoulos, Peg; Magnotta, Vincent
2016-03-01
The goal of this project was to develop two age-appropriate atlases (neonatal and one year old) that account for the rapid growth and maturational changes that occur during early development. Tissue maps from this age group were initially created by manually correcting the tissue maps obtained after applying an expectation maximization (EM) algorithm and an adult atlas to pediatric subjects. The EM algorithm classified each voxel into one of ten possible tissue types, including several subcortical structures. This was followed by a novel level set segmentation designed to improve differentiation between distal cortical gray matter and white matter. To minimize the required manual corrections, the adult atlas was registered to the pediatric scans using high-dimensional, symmetric image normalization (SyN) registration. The subject images were then mapped to an age-specific atlas space, again using SyN registration, and the resulting transformation was applied to the manually corrected tissue maps. The individual maps were averaged in the age-specific atlas space and blurred to generate the age-appropriate anatomical priors. The resulting anatomical priors were then used by the EM algorithm to re-segment the initial training set as well as an independent testing set. The results from the adult and age-specific anatomical priors were compared to the manually corrected results. The age-appropriate atlas provided superior results as compared to the adult atlas. The image analysis pipeline used in this work was built using the open source software package BRAINSTools.
Algorithmic detectability threshold of the stochastic block model
NASA Astrophysics Data System (ADS)
Kawamoto, Tatsuro
2018-03-01
The assumption that the values of model parameters are known or correctly learned, i.e., the Nishimori condition, is one of the requirements for the detectability analysis of the stochastic block model in statistical inference. In practice, however, there is no example demonstrating that we can know the model parameters beforehand, and there is no guarantee that the model parameters can be learned accurately. In this study, we consider the expectation-maximization (EM) algorithm with belief propagation (BP) and derive its algorithmic detectability threshold. Our analysis is not restricted to the community structure but includes general modular structures. Because the algorithm cannot always learn the planted model parameters correctly, the algorithmic detectability threshold is qualitatively different from the one with the Nishimori condition.
Network Data: Statistical Theory and New Models
2016-02-17
During this period of review, Bin Yu worked on many thrusts of high-dimensional statistical theory and methodology. Her research covered a wide range of topics in statistics, including analysis and methods for spectral clustering of sparse and structured networks, sparse modeling (e.g., the Lasso), statistical guarantees for the EM algorithm, and statistical analysis of algorithmic leveraging.
Al Nasr, Kamal; Ranjan, Desh; Zubair, Mohammad; Chen, Lin; He, Jing
2014-01-01
Electron cryomicroscopy is becoming a major experimental technique in solving the structures of large molecular assemblies. More and more three-dimensional images have been obtained at medium resolutions, between 5 and 10 Å. At this resolution range, major α-helices can be detected as cylindrical sticks and β-sheets can be detected as plane-like regions. A critical question in de novo modeling from cryo-EM images is to determine the match between the secondary structures detected from the image and those on the protein sequence. We formulate this matching problem as a constrained graph problem and present an O(Δ²N²2^N) algorithm for this NP-hard problem. The algorithm incorporates a dynamic programming approach into a constrained K-shortest path algorithm. Our method, DP-TOSS, has been tested using α-proteins with up to 33 helices and α-β proteins with up to five helices and 12 β-strands. The correct match was ranked within the top 35 for 19 of the 20 α-proteins and for all nine α-β proteins tested. The results demonstrate that DP-TOSS improves accuracy, time and memory requirements in deriving the topologies of the secondary structure elements for proteins with a large number of secondary structures and a complex skeleton.
MR-guided dynamic PET reconstruction with the kernel method and spectral temporal basis functions
NASA Astrophysics Data System (ADS)
Novosad, Philip; Reader, Andrew J.
2016-06-01
Recent advances in dynamic positron emission tomography (PET) reconstruction have demonstrated that it is possible to achieve markedly improved end-point kinetic parameter maps by incorporating a temporal model of the radiotracer directly into the reconstruction algorithm. In this work we have developed a highly constrained, fully dynamic PET reconstruction algorithm incorporating both spectral analysis temporal basis functions and spatial basis functions derived from the kernel method applied to a co-registered T1-weighted magnetic resonance (MR) image. The dynamic PET image is modelled as a linear combination of spatial and temporal basis functions, and a maximum likelihood estimate for the coefficients can be found using the expectation-maximization (EM) algorithm. Following reconstruction, kinetic fitting using any temporal model of interest can be applied. Based on a BrainWeb T1-weighted MR phantom, we performed a realistic dynamic [18F]FDG simulation study with two noise levels, and investigated the quantitative performance of the proposed reconstruction algorithm, comparing it with reconstructions incorporating either spectral analysis temporal basis functions alone or kernel spatial basis functions alone, as well as with conventional frame-independent reconstruction. Compared to the other reconstruction algorithms, the proposed algorithm achieved superior performance, offering a decrease in spatially averaged pixel-level root-mean-square-error on post-reconstruction kinetic parametric maps in the grey/white matter, as well as in the tumours when they were present on the co-registered MR image. When the tumours were not visible in the MR image, reconstruction with the proposed algorithm performed similarly to reconstruction with spectral temporal basis functions and was superior to both conventional frame-independent reconstruction and frame-independent reconstruction with kernel spatial basis functions. Furthermore, we demonstrate that a joint spectral/kernel model can also be used for effective post-reconstruction denoising, through the use of an EM-like image-space algorithm. Finally, we applied the proposed algorithm to reconstruction of real high-resolution dynamic [11C]SCH23390 data, showing promising results.
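The coefficient estimation step described above reduces, for a Poisson likelihood with a nonnegative linear model, to the familiar multiplicative EM (MLEM) update. The toy sketch below applies that update to a generic system matrix A, which in the paper's setting would stand in for the product of the PET system matrix with the kernel spatial and spectral temporal basis matrices; the matrix sizes and data here are invented for illustration.

    import numpy as np

    def mlem(A, y, n_iter=100):
        """Multiplicative EM updates for y ~ Poisson(A @ theta), theta >= 0."""
        theta = np.ones(A.shape[1])
        sens = A.sum(axis=0)                      # sensitivity (column sums of the system matrix)
        for _ in range(n_iter):
            ybar = A @ theta                      # current expectation of the data
            ratio = y / np.maximum(ybar, 1e-12)
            theta *= (A.T @ ratio) / np.maximum(sens, 1e-12)
        return theta

    rng = np.random.default_rng(0)
    A = rng.random((500, 40))                     # stand-in for (system matrix) @ (spatial basis) x (temporal basis)
    theta_true = rng.gamma(2.0, 1.0, 40)
    y = rng.poisson(A @ theta_true)
    print(np.round(mlem(A, y)[:5], 2), np.round(theta_true[:5], 2))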
The effects of center of rotation errors on cardiac SPECT imaging
NASA Astrophysics Data System (ADS)
Bai, Chuanyong; Shao, Ling; Ye, Jinghan; Durbin, M.
2003-10-01
In SPECT imaging, center of rotation (COR) errors lead to the misalignment of projection data and can potentially degrade the quality of the reconstructed images. In this work, we study the effects of COR errors on cardiac SPECT imaging using simulation, point source, cardiac phantom, and patient studies. For simulation studies, we generate projection data using a uniform MCAT phantom first without modeling any physical effects (NPH), then with the modeling of detector response effect (DR) alone. We then corrupt the projection data with simulated sinusoid and step COR errors. For other studies, we introduce sinusoid COR errors to projection data acquired on SPECT systems. An OSEM algorithm is used for image reconstruction without detector response correction, but with nonuniform attenuation correction when needed. The simulation studies show that, when COR errors increase from 0 to 0.96 cm: 1) sinusoid COR errors in axial direction lead to intensity decrease in the inferoapical region; 2) step COR errors in axial direction lead to intensity decrease in the distal anterior region. The intensity decrease is more severe in images reconstructed from projection data with NPH than with DR; and 3) the effects of COR errors in transaxial direction seem to be insignificant. In other studies, COR errors slightly degrade point source resolution; COR errors of 0.64 cm or above introduce visible but insignificant nonuniformity in the images of uniform cardiac phantom; COR errors up to 0.96 cm in transaxial direction affect the lesion-to-background contrast (LBC) insignificantly in the images of cardiac phantom with defects, and COR errors up to 0.64 cm in axial direction only slightly decrease the LBC. For the patient studies with COR errors up to 0.96 cm, images have the same diagnostic/prognostic values as those without COR errors. This work suggests that COR errors of up to 0.64 cm are not likely to change the clinical applications of cardiac SPECT imaging when using iterative reconstruction algorithm without detector response correction.
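A sinusoidal COR error of the kind introduced above can be emulated by shifting each projection view by an angle-dependent offset before reconstruction. The sketch below does this for a stack of 2D projections using a sub-pixel spline shift; the amplitude, pixel size, axis convention and projection array layout are assumptions made for illustration, not the study's simulation setup.

    import numpy as np
    from scipy.ndimage import shift

    def add_sinusoid_cor_error(projections, amplitude_cm, pixel_size_cm, axis=1):
        """Shift each view by amplitude*sin(angle) pixels to mimic a sinusoidal COR error.

        projections: array of shape (n_views, n_rows, n_cols); axis=1 shifts along rows (assumed axial),
        axis=2 along columns (assumed transaxial).
        """
        n_views = projections.shape[0]
        angles = np.linspace(0.0, 2.0 * np.pi, n_views, endpoint=False)
        out = np.empty_like(projections, dtype=float)
        for i, ang in enumerate(angles):
            offset_pix = amplitude_cm * np.sin(ang) / pixel_size_cm
            shift_vec = [0.0, 0.0]
            shift_vec[axis - 1] = offset_pix
            out[i] = shift(projections[i], shift_vec, order=3, mode="nearest")
        return out

    proj = np.random.default_rng(0).poisson(50.0, size=(64, 64, 64)).astype(float)
    corrupted = add_sinusoid_cor_error(proj, amplitude_cm=0.64, pixel_size_cm=0.48)
    print(corrupted.shape)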
What does fault tolerant Deep Learning need from MPI?
DOE Office of Scientific and Technical Information (OSTI.GOV)
Amatya, Vinay C.; Vishnu, Abhinav; Siegel, Charles M.
Deep Learning (DL) algorithms have become the de facto Machine Learning (ML) algorithms for large scale data analysis. DL algorithms are computationally expensive -- even distributed DL implementations which use MPI require days of training (model learning) time on commonly studied datasets. Long running DL applications become susceptible to faults -- requiring development of a fault tolerant system infrastructure, in addition to fault tolerant DL algorithms. This raises an important question: What is needed from MPI for designing fault tolerant DL implementations? In this paper, we address this problem for permanent faults. We motivate the need for a fault tolerant MPI specification by an in-depth consideration of recent innovations in DL algorithms and their properties, which drive the need for specific fault tolerance features. We present an in-depth discussion on the suitability of different parallelism types (model, data and hybrid); the need (or lack thereof) for check-pointing of any critical data structures; and, most importantly, consideration of several fault tolerance proposals (user-level fault mitigation (ULFM), Reinit) in MPI and their applicability to fault tolerant DL implementations. We leverage a distributed memory implementation of Caffe, currently available under the Machine Learning Toolkit for Extreme Scale (MaTEx). We implement our approaches by extending MaTEx-Caffe to use a ULFM-based implementation. Our evaluation using the ImageNet dataset and the AlexNet neural network topology demonstrates the effectiveness of the proposed fault tolerant DL implementation using OpenMPI-based ULFM.
Enumeration of Smallest Intervention Strategies in Genome-Scale Metabolic Networks
von Kamp, Axel; Klamt, Steffen
2014-01-01
One ultimate goal of metabolic network modeling is the rational redesign of biochemical networks to optimize the production of certain compounds by cellular systems. Although several constraint-based optimization techniques have been developed for this purpose, methods for systematic enumeration of intervention strategies in genome-scale metabolic networks are still lacking. In principle, Minimal Cut Sets (MCSs; inclusion-minimal combinations of reaction or gene deletions that lead to the fulfilment of a given intervention goal) provide an exhaustive enumeration approach. However, their disadvantage is the combinatorial explosion in larger networks and the requirement to compute first the elementary modes (EMs) which itself is impractical in genome-scale networks. We present MCSEnumerator, a new method for effective enumeration of the smallest MCSs (with fewest interventions) in genome-scale metabolic network models. For this we combine two approaches, namely (i) the mapping of MCSs to EMs in a dual network, and (ii) a modified algorithm by which shortest EMs can be effectively determined in large networks. In this way, we can identify the smallest MCSs by calculating the shortest EMs in the dual network. Realistic application examples demonstrate that our algorithm is able to list thousands of the most efficient intervention strategies in genome-scale networks for various intervention problems. For instance, for the first time we could enumerate all synthetic lethals in E.coli with combinations of up to 5 reactions. We also applied the new algorithm exemplarily to compute strain designs for growth-coupled synthesis of different products (ethanol, fumarate, serine) by E.coli. We found numerous new engineering strategies partially requiring less knockouts and guaranteeing higher product yields (even without the assumption of optimal growth) than reported previously. The strength of the presented approach is that smallest intervention strategies can be quickly calculated and screened with neither network size nor the number of required interventions posing major challenges. PMID:24391481
Visualizing staggered fields and analyzing electromagnetic data with PerceptEM
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shasharina, Svetlana
This project resulted in VSimSP: software for simulating large photonic devices on high-performance computers. It includes: a GUI for photonics simulations; a high-performance meshing algorithm; a 2nd-order multimaterials algorithm; a mode solver for waveguides; a 2nd-order material dispersion algorithm; S-parameter calculation; a high-performance workflow at NERSC; and simulation setups for large photonic devices. We believe we became the only company in the world which can simulate large photonic devices in 3D on modern supercomputers without the need to split them into subparts or do low-fidelity modeling. We started commercial engagement with a manufacturing company.
Zhu, Yanan; Ouyang, Qi; Mao, Youdong
2017-07-21
Single-particle cryo-electron microscopy (cryo-EM) has become a mainstream tool for the structural determination of biological macromolecular complexes. However, high-resolution cryo-EM reconstruction often requires hundreds of thousands of single-particle images. Particle extraction from experimental micrographs thus can be laborious and presents a major practical bottleneck in cryo-EM structural determination. Existing computational methods for particle picking often use low-resolution templates for particle matching, making them susceptible to reference-dependent bias. It is critical to develop a highly efficient template-free method for the automatic recognition of particle images from cryo-EM micrographs. We developed a deep learning-based algorithmic framework, DeepEM, for single-particle recognition from noisy cryo-EM micrographs, enabling automated particle picking, selection and verification in an integrated fashion. The kernel of DeepEM is built upon a convolutional neural network (CNN) composed of eight layers, which can be recursively trained to be highly "knowledgeable". Our approach exhibits improved performance and accuracy when tested on the standard KLH dataset. Application of DeepEM to several challenging experimental cryo-EM datasets demonstrated its ability to avoid the selection of unwanted particles and non-particles even when true particles contain fewer features. The DeepEM methodology, derived from a deep CNN, allows automated particle extraction from raw cryo-EM micrographs in the absence of a template. It demonstrates improved performance, objectivity and accuracy. Application of this novel method is expected to free up the labor involved in single-particle verification, significantly improving the efficiency of cryo-EM data processing.
Pretorius, P. Hendrik; Johnson, Karen L.; King, Michael A.
2016-01-01
We have recently been successful in the development and testing of rigid-body motion tracking, estimation and compensation for cardiac perfusion SPECT based on a visual tracking system (VTS). The goal of this study was to evaluate in patients the effectiveness of our rigid-body motion compensation strategy. Sixty-four patient volunteers were asked to remain motionless or execute some predefined body motion during an additional second stress perfusion acquisition. Acquisitions were performed using the standard clinical protocol with 64 projections acquired through 180 degrees. All data were reconstructed with an ordered-subsets expectation-maximization (OSEM) algorithm using 4 projections per subset and 5 iterations. All physical degradation factors were addressed (attenuation, scatter, and distance dependent resolution), while a 3-dimensional Gaussian rotator was used during reconstruction to correct for six-degree-of-freedom (6-DOF) rigid-body motion estimated by the VTS. Polar map quantification was employed to evaluate compensation techniques. In 54.7% of the uncorrected second stress studies there was a statistically significant difference in the polar maps, and in 45.3% this made a difference in the interpretation of segmental perfusion. Motion correction reduced the impact of motion such that with it 32.8 % of the polar maps were statistically significantly different, and in 14.1% this difference changed the interpretation of segmental perfusion. The improvement shown in polar map quantitation translated to visually improved uniformity of the SPECT slices. PMID:28042170
Li, Si; Zhang, Jiahan; Krol, Andrzej; Schmidtlein, C. Ross; Vogelsang, Levon; Shen, Lixin; Lipson, Edward; Feiglin, David; Xu, Yuesheng
2015-01-01
Purpose: The authors have recently developed a preconditioned alternating projection algorithm (PAPA) with total variation (TV) regularizer for solving the penalized-likelihood optimization model for single-photon emission computed tomography (SPECT) reconstruction. This algorithm belongs to a novel class of fixed-point proximity methods. The goal of this work is to investigate how PAPA performs while dealing with realistic noisy SPECT data, to compare its performance with more conventional methods, and to address issues with TV artifacts by proposing a novel form of the algorithm invoking high-order TV regularization, denoted as HOTV-PAPA, which has been explored and studied extensively in the present work. Methods: Using Monte Carlo methods, the authors simulate noisy SPECT data from two water cylinders; one contains lumpy “warm” background and “hot” lesions of various sizes with Gaussian activity distribution, and the other is a reference cylinder without hot lesions. The authors study the performance of HOTV-PAPA and compare it with PAPA using first-order TV regularization (TV-PAPA), the Panin–Zeng–Gullberg one-step-late method with TV regularization (TV-OSL), and an expectation–maximization algorithm with Gaussian postfilter (GPF-EM). The authors select penalty-weights (hyperparameters) by qualitatively balancing the trade-off between resolution and image noise separately for TV-PAPA and TV-OSL. However, the authors arrived at the same penalty-weight value for both of them. The authors set the first penalty-weight in HOTV-PAPA equal to the optimal penalty-weight found for TV-PAPA. The second penalty-weight needed for HOTV-PAPA is tuned by balancing resolution and the severity of staircase artifacts. The authors adjust the Gaussian postfilter to approximately match the local point spread function of GPF-EM and HOTV-PAPA. The authors examine hot lesion detectability, study local spatial resolution, analyze background noise properties, estimate mean square errors (MSEs), and report the convergence speed and computation time. Results: HOTV-PAPA yields the best signal-to-noise ratio, followed by TV-PAPA and TV-OSL/GPF-EM. The local spatial resolution of HOTV-PAPA is somewhat worse than that of TV-PAPA and TV-OSL. Images reconstructed using HOTV-PAPA have the lowest local noise power spectrum (LNPS) amplitudes, followed by TV-PAPA, TV-OSL, and GPF-EM. The LNPS peak of GPF-EM is shifted toward higher spatial frequencies than those for the three other methods. The PAPA-type methods exhibit much lower ensemble noise, ensemble voxel variance, and image roughness. HOTV-PAPA performs best in these categories. Whereas images reconstructed using both TV-PAPA and TV-OSL are degraded by severe staircase artifacts; HOTV-PAPA substantially reduces such artifacts. It also converges faster than the other three methods and exhibits the lowest overall reconstruction error level, as measured by MSE. Conclusions: For high-noise simulated SPECT data, HOTV-PAPA outperforms TV-PAPA, GPF-EM, and TV-OSL in terms of hot lesion detectability, noise suppression, MSE, and computational efficiency. Unlike TV-PAPA and TV-OSL, HOTV-PAPA does not create sizable staircase artifacts. Moreover, HOTV-PAPA effectively suppresses noise, with only limited loss of local spatial resolution. Of the four methods, HOTV-PAPA shows the best lesion detectability, thanks to its superior noise suppression. HOTV-PAPA shows promise for clinically useful reconstructions of low-dose SPECT data. PMID:26233214
Image Quality Performance Measurement of the microPET Focus 120
NASA Astrophysics Data System (ADS)
Ballado, Fernando Trejo; López, Nayelli Ortega; Flores, Rafael Ojeda; Ávila-Rodríguez, Miguel A.
2010-12-01
The aim of this work is to evaluate the image reconstruction characteristics of the microPET Focus 120. Two different phantoms were used for this evaluation: a miniature hot-rod Derenzo phantom and a National Electrical Manufacturers Association (NEMA) NU4-2008 image quality (IQ) phantom. The best image quality was obtained when using OSEM3D as the reconstruction method, reaching a spatial resolution of 1.5 mm with the Derenzo phantom filled with 18F. Image quality test results indicate superior image quality for the Focus 120 when compared to previous microPET models.
Robustness of methods for blinded sample size re-estimation with overdispersed count data.
Schneider, Simon; Schmidli, Heinz; Friede, Tim
2013-09-20
Counts of events are increasingly common as primary endpoints in randomized clinical trials. With between-patient heterogeneity leading to variances in excess of the mean (referred to as overdispersion), statistical models reflecting this heterogeneity by mixtures of Poisson distributions are frequently employed. Sample size calculation in the planning of such trials requires knowledge of the nuisance parameters, that is, the control (or overall) event rate and the overdispersion parameter. Usually, there is only little prior knowledge regarding these parameters in the design phase, resulting in considerable uncertainty regarding the sample size. In this situation internal pilot studies have been found very useful, and very recently several blinded procedures for sample size re-estimation have been proposed for overdispersed count data, one of which is based on an EM algorithm. In this paper we investigate the EM-algorithm-based procedure with respect to aspects of its implementation by studying the algorithm's dependence on the choice of convergence criterion, and find that the procedure is sensitive to the choice of the stopping criterion in scenarios relevant to clinical practice. We also compare the EM-based procedure to other competing procedures regarding operating characteristics such as sample size distribution and power. Furthermore, the robustness of these procedures to deviations from the model assumptions is explored. We find that some of the procedures are robust to at least moderate deviations. The results are illustrated using data from the US National Heart, Lung and Blood Institute sponsored Asymptomatic Cardiac Ischemia Pilot study. Copyright © 2013 John Wiley & Sons, Ltd.
Adaptive Neuron Apoptosis for Accelerating Deep Learning on Large Scale Systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Siegel, Charles M.; Daily, Jeffrey A.; Vishnu, Abhinav
Machine Learning and Data Mining (MLDM) algorithms are becoming ubiquitous in model learning from the large volume of data generated using simulations, experiments and handheld devices. Deep Learning algorithms -- a class of MLDM algorithms -- are applied for automatic feature extraction, and for learning non-linear models for unsupervised and supervised algorithms. Naturally, several libraries which support large scale Deep Learning -- such as TensorFlow and Caffe -- have become popular. In this paper, we present novel techniques to accelerate the convergence of Deep Learning algorithms by conducting low overhead removal of redundant neurons -- apoptosis of neurons -- which do not contribute to model learning, during the training phase itself. We provide in-depth theoretical underpinnings of our heuristics (bounding accuracy loss and handling apoptosis of several neuron types), and present the methods to conduct adaptive neuron apoptosis. We implement our proposed heuristics with the recently introduced TensorFlow and its recently proposed MPI extension. Our performance evaluation on two different clusters -- one connected with Intel Haswell multi-core systems, and the other with nVIDIA GPUs -- using InfiniBand, indicates the efficacy of the proposed heuristics and implementations. Specifically, we are able to improve the training time for several datasets by 2-3x, while reducing the number of parameters by 30x (4-5x on average) on datasets such as ImageNet classification. For the Higgs Boson dataset, our implementation improves the accuracy (measured by Area Under Curve (AUC)) for classification from 0.88/1 to 0.94/1, while reducing the number of parameters by 3x in comparison to existing literature, while achieving a 2.44x speedup in comparison to the default (no apoptosis) algorithm.
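As a rough illustration of the neuron-removal idea (not the paper's adaptive heuristics or their MPI implementation), the sketch below prunes hidden units of a small fully connected layer whose outgoing weight vectors have near-zero norm, i.e., units that contribute little to the layer's output. The threshold, network shapes and toy weights are assumptions.

    import numpy as np

    def prune_dead_neurons(W_in, b_in, W_out, rel_threshold=0.05):
        """Remove hidden units whose outgoing weights are negligible ('apoptosis' of neurons).

        W_in: (n_inputs, n_hidden), b_in: (n_hidden,), W_out: (n_hidden, n_outputs)
        """
        importance = np.linalg.norm(W_out, axis=1)            # downstream contribution of each hidden unit
        keep = importance > rel_threshold * importance.max()
        return W_in[:, keep], b_in[keep], W_out[keep, :], keep

    rng = np.random.default_rng(0)
    W_in, b_in = rng.normal(size=(10, 64)), rng.normal(size=64)
    W_out = rng.normal(size=(64, 3))
    W_out[rng.random(64) < 0.3] *= 1e-4                        # make roughly 30% of units nearly useless
    W_in2, b_in2, W_out2, keep = prune_dead_neurons(W_in, b_in, W_out)
    print(W_in.shape[1], "->", W_in2.shape[1], "hidden units")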
A Study of Wind Turbine Comprehensive Operational Assessment Model Based on EM-PCA Algorithm
NASA Astrophysics Data System (ADS)
Zhou, Minqiang; Xu, Bin; Zhan, Yangyan; Ren, Danyuan; Liu, Dexing
2018-01-01
To assess wind turbine performance accurately and provide a theoretical basis for wind farm management, a hybrid assessment model based on the Entropy Method and Principal Component Analysis (EM-PCA) was established, which takes most factors of operational performance into consideration to reach a comprehensive result. To verify the model, six wind turbines were chosen as the research objects; the ranking obtained by the proposed method was 4# > 6# > 1# > 5# > 2# > 3#, in full agreement with the theoretical ranking, which indicates that the EM-PCA method is reliable and effective. The method can guide comparisons of operating state among different units and support wind farm operational assessment.
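A minimal sketch of the entropy-method half of such an EM-PCA scheme is given below: indicator columns are normalized, Shannon entropies are computed, and weights are assigned in proportion to one minus the entropy, so that more discriminating indicators receive larger weights. The indicator matrix is invented, and the PCA combination step is only hinted at with a simple weighted composite, since the abstract does not give the exact formulation.

    import numpy as np

    def entropy_weights(X):
        """Entropy-method weights for an (n_units x n_indicators) decision matrix with larger-is-better columns."""
        X = np.asarray(X, dtype=float)
        P = X / X.sum(axis=0)                                  # column-wise proportions
        n = X.shape[0]
        plogp = np.where(P > 0, P * np.log(P), 0.0)
        e = -plogp.sum(axis=0) / np.log(n)                     # entropy of each indicator, in [0, 1]
        d = 1.0 - e                                            # degree of diversification
        return d / d.sum()

    # toy operational indicators for six turbines (rows): availability, capacity factor, MTBF, power-curve score
    X = np.array([[0.97, 0.31, 820, 0.92],
                  [0.94, 0.27, 610, 0.88],
                  [0.92, 0.25, 540, 0.85],
                  [0.98, 0.33, 900, 0.95],
                  [0.95, 0.29, 700, 0.90],
                  [0.97, 0.32, 860, 0.93]])
    w = entropy_weights(X)
    scores = (X / X.max(axis=0)) @ w                           # simple weighted composite score
    print("weights:", np.round(w, 3), "ranking:", np.argsort(-scores) + 1)

In the full EM-PCA model the entropy weights would typically be combined with principal-component scores rather than with this simple ratio normalization.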
Adaptive control of nonlinear system using online error minimum neural networks.
Jia, Chao; Li, Xiaoli; Wang, Kang; Ding, Dawei
2016-11-01
In this paper, a new learning algorithm named OEM-ELM (Online Error Minimized ELM) is proposed, based on the ELM (Extreme Learning Machine) neural network algorithm and the growth of its main structure. The core ideas of the OEM-ELM algorithm are online learning, evaluation of network performance, and increasing the number of hidden nodes. It combines the advantages of OS-ELM and EM-ELM, which can improve identification capability and avoid network redundancy. An adaptive controller based on the proposed OEM-ELM algorithm is set up, which has a stronger ability to adapt to changes in the environment. The adaptive control of a chemical process, the Continuous Stirred Tank Reactor (CSTR), is also given as an application. The simulation results show that, compared with the traditional ELM algorithm, the proposed algorithm can avoid network redundancy and greatly improve control performance. Copyright © 2016 ISA. Published by Elsevier Ltd. All rights reserved.
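For context, a basic batch extreme learning machine of the kind OEM-ELM builds on can be written in a few lines: hidden-layer weights are drawn at random and only the output weights are solved for by least squares. The network sizes, activation and data below are assumptions, and OEM-ELM's online updating, performance evaluation and hidden-node growth are not reproduced here.

    import numpy as np

    class TinyELM:
        """Minimal batch extreme learning machine: random hidden layer + least-squares output weights."""
        def __init__(self, n_hidden=50, seed=0):
            self.n_hidden = n_hidden
            self.rng = np.random.default_rng(seed)

        def _hidden(self, X):
            return np.tanh(X @ self.W + self.b)

        def fit(self, X, y):
            self.W = self.rng.normal(size=(X.shape[1], self.n_hidden))
            self.b = self.rng.normal(size=self.n_hidden)
            H = self._hidden(X)
            self.beta, *_ = np.linalg.lstsq(H, y, rcond=None)   # only the output weights are trained
            return self

        def predict(self, X):
            return self._hidden(X) @ self.beta

    rng = np.random.default_rng(1)
    X = rng.uniform(-3, 3, size=(400, 1))
    y = np.sin(X[:, 0]) + 0.05 * rng.normal(size=400)
    model = TinyELM(n_hidden=40).fit(X, y)
    print(round(float(np.mean((model.predict(X) - y) ** 2)), 4))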
Statistical modeling, detection, and segmentation of stains in digitized fabric images
NASA Astrophysics Data System (ADS)
Gururajan, Arunkumar; Sari-Sarraf, Hamed; Hequet, Eric F.
2007-02-01
This paper will describe a novel and automated system based on a computer vision approach, for objective evaluation of stain release on cotton fabrics. Digitized color images of the stained fabrics are obtained, and the pixel values in the color and intensity planes of these images are probabilistically modeled as a Gaussian Mixture Model (GMM). Stain detection is posed as a decision theoretic problem, where the null hypothesis corresponds to absence of a stain. The null hypothesis and the alternate hypothesis mathematically translate into a first order GMM and a second order GMM respectively. The parameters of the GMM are estimated using a modified Expectation-Maximization (EM) algorithm. Minimum Description Length (MDL) is then used as the test statistic to decide the verity of the null hypothesis. The stain is then segmented by a decision rule based on the probability map generated by the EM algorithm. The proposed approach was tested on a dataset of 48 fabric images soiled with stains of ketchup, corn oil, mustard, ragu sauce, revlon makeup and grape juice. The decision theoretic part of the algorithm produced a correct detection rate (true positive) of 93% and a false alarm rate of 5% on these set of images.
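The hypothesis test sketched above amounts to asking whether a two-component mixture explains the pixel data markedly better than a one-component one. The snippet below illustrates that comparison with scikit-learn, using BIC in place of the paper's minimum description length statistic (the two criteria are closely related but not identical); the pixel data and stain fraction are simulated stand-ins for real fabric images.

    import numpy as np
    from sklearn.mixture import GaussianMixture

    rng = np.random.default_rng(0)
    background = rng.normal([180.0, 120.0, 90.0], 6.0, size=(5000, 3))    # clean-fabric pixel values (toy)
    stain = rng.normal([150.0, 100.0, 60.0], 8.0, size=(400, 3))          # stained pixels (toy)
    pixels = np.vstack([background, stain])

    bic = {k: GaussianMixture(n_components=k, covariance_type="full", random_state=0).fit(pixels).bic(pixels)
           for k in (1, 2)}
    stain_detected = bic[2] < bic[1]            # a lower criterion value favours the two-component (stain) model
    print(bic, "stain detected:", stain_detected)

A segmentation map could then be obtained by thresholding the posterior probability of the second component at each pixel (predict_proba in scikit-learn), analogous to the probability-map decision rule described above.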
NASA Astrophysics Data System (ADS)
Grayver, Alexander V.; Kuvshinov, Alexey V.
2016-05-01
This paper presents a methodology to sample the equivalence domain (ED) in nonlinear partial differential equation (PDE)-constrained inverse problems. For this purpose, we first applied a state-of-the-art stochastic optimization algorithm called the Covariance Matrix Adaptation Evolution Strategy (CMAES) to identify low-misfit regions of the model space. These regions were then randomly sampled to create an ensemble of equivalent models and quantify uncertainty. CMAES is aimed at exploring model space globally and is robust on very ill-conditioned problems. We show that the number of iterations required to converge grows at a moderate rate with respect to the number of unknowns and that the algorithm is embarrassingly parallel. We formulated the problem using the generalized Gaussian distribution. This enabled us to seamlessly use arbitrary norms for the residual and regularization terms. We show that various regularization norms facilitate studying different classes of equivalent solutions. We further show how the performance of the standard Metropolis-Hastings Markov chain Monte Carlo algorithm can be substantially improved by using the information CMAES provides. This methodology was tested using individual and joint inversions of magnetotelluric, controlled-source electromagnetic (EM) and global EM induction data.
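As a schematic of the sampling stage only (not the CMAES search itself), the sketch below runs a random-walk Metropolis-Hastings chain over a toy two-parameter misfit function and rejects any proposal whose misfit exceeds a chosen equivalence threshold, producing an ensemble of "equivalent" models. The misfit function, threshold, proposal width and starting point (which in the paper would come from the CMAES run) are all assumptions.

    import numpy as np

    def misfit(m):
        """Toy nonlinear misfit with an elongated low-misfit valley (stand-in for a PDE-constrained misfit)."""
        return (m[1] - m[0] ** 2) ** 2 + 0.05 * (1.0 - m[0]) ** 2

    def sample_equivalence_domain(m0, threshold, n_steps=20000, step=0.05, seed=0):
        rng = np.random.default_rng(seed)
        m, phi = np.array(m0, dtype=float), misfit(m0)
        ensemble = []
        for _ in range(n_steps):
            proposal = m + step * rng.normal(size=m.size)
            phi_new = misfit(proposal)
            # Metropolis rule on exp(-misfit); proposals above the equivalence threshold are always rejected
            if phi_new <= threshold and rng.random() < np.exp(phi - phi_new):
                m, phi = proposal, phi_new
            ensemble.append(m.copy())
        return np.array(ensemble)

    ens = sample_equivalence_domain(m0=[1.0, 1.0], threshold=0.02)
    print(ens.mean(axis=0), ens.std(axis=0))     # the spread of the ensemble quantifies model uncertainty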
GASPACHO: a generic automatic solver using proximal algorithms for convex huge optimization problems
NASA Astrophysics Data System (ADS)
Goossens, Bart; Luong, Hiêp; Philips, Wilfried
2017-08-01
Many inverse problems (e.g., demosaicking, deblurring, denoising, image fusion, HDR synthesis) share various similarities: degradation operators are often modeled by a specific data fitting function while image prior knowledge (e.g., sparsity) is incorporated by additional regularization terms. In this paper, we investigate automatic algorithmic techniques for evaluating proximal operators. These algorithmic techniques also enable efficient calculation of adjoints from linear operators in a general matrix-free setting. In particular, we study the simultaneous-direction method of multipliers (SDMM) and the parallel proximal algorithm (PPXA) solvers and show that the automatically derived implementations are well suited for both single-GPU and multi-GPU processing. We demonstrate this approach for an Electron Microscopy (EM) deconvolution problem.
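To make the "data fitting plus regularization" structure concrete, the sketch below solves a small problem of that form, min_x 0.5*||Ax - b||^2 + lam*||x||_1, with a proximal-gradient (ISTA) loop in which the soft-thresholding proximal operator plays the role that automatically derived proximal operators play in the paper's solvers. SDMM/PPXA themselves, the GPU aspects and the matrix-free adjoints are not reproduced, and the problem sizes and data are invented.

    import numpy as np

    def soft_threshold(v, t):
        """Proximal operator of t * ||.||_1 (soft thresholding)."""
        return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

    def ista(A, b, lam, n_iter=500):
        """Proximal gradient for 0.5*||Ax - b||^2 + lam*||x||_1."""
        L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the data-fit gradient
        x = np.zeros(A.shape[1])
        for _ in range(n_iter):
            grad = A.T @ (A @ x - b)
            x = soft_threshold(x - grad / L, lam / L)
        return x

    rng = np.random.default_rng(0)
    A = rng.normal(size=(80, 200))
    x_true = np.zeros(200); x_true[rng.choice(200, 8, replace=False)] = 3.0 * rng.normal(size=8)
    b = A @ x_true + 0.01 * rng.normal(size=80)
    x_hat = ista(A, b, lam=0.1)
    print(int(np.sum(np.abs(x_hat) > 1e-3)), "nonzeros recovered")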
EM Algorithm for Mapping Quantitative Trait Loci in Multivalent Tetraploids
USDA-ARS?s Scientific Manuscript database
Multivalent tetraploids that include many plant species, such as potato, sugarcane and rose, are of paramount importance to agricultural production and biological research. Quantitative trait locus (QTL) mapping in multivalent tetraploids is challenged by their unique cytogenetic properties, such ...
Software for Data Analysis with Graphical Models
NASA Technical Reports Server (NTRS)
Buntine, Wray L.; Roy, H. Scott
1994-01-01
Probabilistic graphical models are being used widely in artificial intelligence and statistics, for instance, in diagnosis and expert systems, as a framework for representing and reasoning with probabilities and independencies. They come with corresponding algorithms for performing statistical inference. This offers a unifying framework for prototyping and/or generating data analysis algorithms from graphical specifications. This paper illustrates the framework with an example and then presents some basic techniques for the task: problem decomposition and the calculation of exact Bayes factors. Other tools already developed, such as automatic differentiation, Gibbs sampling, and use of the EM algorithm, make this a broad basis for the generation of data analysis software.
Efficient Grammar Induction Algorithm with Parse Forests from Real Corpora
NASA Astrophysics Data System (ADS)
Kurihara, Kenichi; Kameya, Yoshitaka; Sato, Taisuke
The task of inducing grammar structures has received a great deal of attention. Researchers have studied it for different reasons: to use grammar induction as the first stage in building large treebanks, or to build better language models. However, grammar induction has inherent computational complexity. To overcome it, some grammar induction algorithms add new production rules incrementally, refining the grammar while keeping its computational complexity low. In this paper, we propose a new efficient grammar induction algorithm. Although our algorithm is similar to algorithms which learn a grammar incrementally, it uses the graphical EM algorithm instead of the Inside-Outside algorithm. We report results of learning experiments in terms of learning speed. The results show that our algorithm learns a grammar in constant time regardless of the size of the grammar. Since our algorithm decreases syntactic ambiguities in each step, it reduces the time required for learning. This constant-time learning considerably affects learning time for larger grammars. We also report results of an evaluation of criteria for choosing nonterminals. Our algorithm refines a grammar based on a nonterminal in each step. Since there can be several criteria to decide which nonterminal is best, we evaluate them through learning experiments.
NASA Astrophysics Data System (ADS)
King, M.; Boening, Guido; Baker, S.; Steinmetz, N.
2004-10-01
In current clinical oncology practice, it often takes weeks or months of cancer therapy until a response to treatment can be identified by evaluating tumor size in images. It is hypothesized that changes in the relative localization of the apoptosis imaging agent Tc-99m Annexin before and after the administration of chemotherapy may be useful as an early indicator of the success of therapy. The objective of this study was to determine the minimum relative change in tumor localization that could be confidently identified as an increased localization. A modified version of the Data Spectrum Anthropomorphic Torso phantom, in which four spheres could be positioned in the lung region, was filled with organ concentrations of Tc-99m representative of those observed in clinical imaging of Tc-99m Annexin. Five acquisitions were made at an initial sphere-to-lung concentration ratio, and at 1.1, 1.2, 1.3, and 1.4 times the initial concentration, all at clinically realistic count levels. The acquisitions were reconstructed by filtered backprojection, ordered subset expectation maximization (OSEM) without attenuation compensation (AC), and OSEM with AC. Permutation methodology was used to create multiple region-of-interest count ratios from the five noise realizations at each concentration and between the elevated and initial concentrations. The resulting distributions were approximated by Gaussians, which were then used to estimate the likelihood of Type 1 and Type 2 errors. It was determined that, for the cases investigated, an increase of roughly 20% to 30% or more was needed to confidently determine that an increase in localization had occurred, depending on sphere size and reconstruction strategy.
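A hedged sketch of the Gaussian error analysis described above: given fitted means and standard deviations for the ROI count-ratio distributions at the initial and an elevated concentration (the values below are illustrative, not the study's), the Type 1 and Type 2 error probabilities follow directly from normal tail areas.

from scipy.stats import norm

mu0, sd0 = 1.00, 0.08     # baseline ratio distribution (assumed values)
mu1, sd1 = 1.25, 0.09     # elevated-uptake ratio distribution (assumed values)

threshold = norm.ppf(0.95, loc=mu0, scale=sd0)    # fix Type 1 error at 5%
type1 = 1.0 - norm.cdf(threshold, loc=mu0, scale=sd0)
type2 = norm.cdf(threshold, loc=mu1, scale=sd1)   # probability of missing a true increase
print(f"threshold={threshold:.3f}  Type1={type1:.3f}  Type2={type2:.3f}")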
Digital PET compliance to EARL accreditation specifications.
Koopman, Daniëlle; Groot Koerkamp, Maureen; Jager, Pieter L; Arkies, Hester; Knollema, Siert; Slump, Cornelis H; Sanches, Pedro G; van Dalen, Jorn A
2017-12-01
Our aim was to evaluate if a recently introduced TOF PET system with digital photon counting technology (Philips Healthcare), potentially providing an improved image quality over analogue systems, can fulfil EANM Research Ltd (EARL) accreditation specifications for tumour imaging with FDG-PET/CT. We have performed a phantom study on a digital TOF PET system using a NEMA NU2-2001 image quality phantom with six fillable spheres. Phantom preparation and PET/CT acquisition were performed according to the European Association of Nuclear Medicine (EANM) guidelines. We made list-mode ordered-subsets expectation maximization (OSEM) TOF PET reconstructions, with default settings, three voxel sizes (4 × 4 × 4 mm³, 2 × 2 × 2 mm³ and 1 × 1 × 1 mm³) and with/without point spread function (PSF) modelling. On each PET dataset, mean and maximum activity concentration recovery coefficients (RC_mean and RC_max) were calculated for all phantom spheres and compared to EARL accreditation specifications. The RCs of the 4 × 4 × 4 mm³ voxel dataset without PSF modelling proved closest to EARL specifications. Next, we added a Gaussian post-smoothing filter with varying kernel widths of 1-7 mm. EARL specifications were fulfilled when using kernel widths of 2 to 4 mm. TOF PET using digital photon counting technology fulfils EARL accreditation specifications for FDG-PET/CT tumour imaging when using an OSEM reconstruction with 4 × 4 × 4 mm³ voxels, no PSF modelling and including a Gaussian post-smoothing filter of 2 to 4 mm.
Wang, Huiya; Feng, Jun; Wang, Hongyu
2017-07-20
Detection of clustered microcalcification (MC) from mammograms plays an essential role in computer-aided diagnosis of early-stage breast cancer. To tackle problems associated with the diversity of data structures of MC lesions and the variability of normal breast tissues, multi-pattern sample space learning is required. In this paper, a novel grouped fuzzy Support Vector Machine (SVM) algorithm with sample space partition based on Expectation-Maximization (EM) (called G-FSVM) is proposed for clustered MC detection. The diversified pattern of training data is partitioned into several groups based on the EM algorithm. Then a series of fuzzy SVMs are integrated for classification, with each group of samples drawn from the MC lesions and normal breast tissues. From the DDSM database, a total of 1,064 suspicious regions were selected from 239 mammograms, and the measured Accuracy, True Positive Rate (TPR), False Positive Rate (FPR) and EVL = TPR × (1 − FPR) were 0.82, 0.78, 0.14 and 0.72, respectively. The proposed method incorporates the merits of fuzzy SVM and multi-pattern sample space learning, decomposing the MC detection problem into a series of simple two-class classifications. Experimental results on synthetic data and the DDSM database demonstrate that our integrated classification framework reduces the false positive rate significantly while maintaining the true positive rate.
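A simplified stand-in for the grouped-classifier idea (not the exact G-FSVM): an EM-fitted Gaussian mixture partitions the training samples into groups and one SVM is trained per group; a test sample is classified by the SVM of its most likely group. Fuzzy membership weighting is omitted, and all data below are synthetic.

import numpy as np
from sklearn.mixture import GaussianMixture
from sklearn.svm import SVC

def fit_grouped_svm(X, y, n_groups=3, seed=0):
    gmm = GaussianMixture(n_components=n_groups, random_state=seed).fit(X)
    groups = gmm.predict(X)
    svms = {g: SVC(kernel="rbf").fit(X[groups == g], y[groups == g])
            for g in range(n_groups) if len(np.unique(y[groups == g])) > 1}
    return gmm, svms

def predict_grouped_svm(gmm, svms, X):
    groups = gmm.predict(X)
    default = next(iter(svms.values()))   # fallback if a group held a single class
    return np.array([svms.get(g, default).predict(x[None, :])[0]
                     for g, x in zip(groups, X)])

rng = np.random.default_rng(0)
X = rng.standard_normal((300, 10))
y = (X[:, 0] + 0.5 * rng.standard_normal(300) > 0).astype(int)
gmm, svms = fit_grouped_svm(X, y)
labels = predict_grouped_svm(gmm, svms, X[:20])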
Wang, Jiexin; Uchibe, Eiji; Doya, Kenji
2017-01-01
EM-based policy search methods estimate a lower bound of the expected return from the histories of episodes and iteratively update the policy parameters using the maximum of that lower bound, which makes gradient calculation and learning-rate tuning unnecessary. Previous algorithms, such as Policy learning by Weighting Exploration with the Returns, Fitness Expectation Maximization, and EM-based Policy Hyperparameter Exploration, implemented mechanisms to discard useless low-return episodes either implicitly or using a fixed baseline determined by the experimenter. In this paper, we propose an adaptive baseline method to discard worse samples from the reward history and examine different baselines, including the mean and multiples of the standard deviation from the mean. Simulation results on the benchmark tasks of pendulum swing-up and cart-pole balancing, and on standing up and balancing of a two-wheeled smartphone robot, showed improved performance. We further implemented the adaptive baseline with the mean in our two-wheeled smartphone robot hardware to test its performance in the standing-up-and-balancing task and a view-based approaching task. Our results showed that, with the adaptive baseline, the method outperformed the previous algorithms and achieved faster and more precise behaviors at a higher success rate. PMID:28167910
PCA based clustering for brain tumor segmentation of T1w MRI images.
Kaya, Irem Ersöz; Pehlivanlı, Ayça Çakmak; Sekizkardeş, Emine Gezmez; Ibrikci, Turgay
2017-03-01
Medical images are huge collections of information that are difficult to store and process, consuming extensive computing time. Therefore, reduction techniques are commonly used as a data pre-processing step to make the image data less complex, so that high-dimensional data can be identified by an appropriate low-dimensional representation. PCA is one of the most popular multivariate methods for data reduction. This paper focuses on clustering T1-weighted MRI images for brain tumor segmentation, with dimension reduction by different common Principal Component Analysis (PCA) algorithms. Our primary aim is to present a comparison between different variations of PCA algorithms on MRIs for two clustering methods. The five most common PCA algorithms, namely conventional PCA, Probabilistic Principal Component Analysis (PPCA), Expectation Maximization Based Principal Component Analysis (EM-PCA), the Generalized Hebbian Algorithm (GHA), and Adaptive Principal Component Extraction (APEX), were applied to reduce dimensionality in advance of two clustering algorithms, K-Means and Fuzzy C-Means. In the study, T1-weighted MRI images of a human brain with a brain tumor were used for clustering. In addition to the original size of 512 lines and 512 pixels per line, three further sizes, 256 × 256, 128 × 128 and 64 × 64, were included in the study to examine their effect on the methods. The obtained results were compared in terms of both the reconstruction errors and the Euclidean distance errors among the clustered images containing the same number of principal components. According to the findings, PPCA obtained the best results among all others. Furthermore, EM-PCA and PPCA assisted the K-Means algorithm to achieve the best clustering performance in the majority of cases, as well as achieving significant results with both clustering algorithms for all sizes of T1w MRI images. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
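The sketch below illustrates the basic reduce-then-cluster pipeline compared in the paper, using conventional PCA and K-Means from scikit-learn; the random array stands in for a T1-weighted slice, and the PPCA/EM-PCA/GHA/APEX variants are not reproduced here.

import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

def pca_then_cluster(image, n_components=20, n_clusters=4, seed=0):
    pca = PCA(n_components=n_components, random_state=seed)
    reduced = pca.fit_transform(image)                 # image rows treated as samples
    reconstructed = pca.inverse_transform(reduced)
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=seed)
    labels = km.fit_predict(reconstructed.reshape(-1, 1))  # cluster pixel intensities
    recon_error = np.linalg.norm(image - reconstructed)
    return labels.reshape(image.shape), recon_error

image = np.random.default_rng(0).random((256, 256))   # placeholder for a T1w slice
labels, err = pca_then_cluster(image)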
Active sensors for health monitoring of aging aerospace structures
NASA Astrophysics Data System (ADS)
Giurgiutiu, Victor; Redmond, James M.; Roach, Dennis P.; Rackow, Kirk
2000-06-01
A project to develop non-intrusive active sensors that can be applied to existing aging aerospace structures for monitoring the onset and progress of structural damage (fatigue cracks and corrosion) is presented. The state of the art in active-sensor structural health monitoring and damage detection is reviewed. Methods based on (a) elastic wave propagation and (b) the electro-mechanical (E/M) impedance technique are cited and briefly discussed. The instrumentation of these specimens with piezoelectric active sensors is illustrated. The main detection strategies (E/M impedance for local-area detection and wave propagation for wide-area interrogation) are discussed. The signal processing and damage interpretation algorithms are tuned to the specific structural interrogation method used. In the high-frequency E/M impedance approach, pattern recognition methods are used to compare impedance signatures taken at various time intervals and to identify damage presence and progression from the change in these signatures. In the wave propagation approach, acousto-ultrasonic methods that identify additional reflections generated at the damage site and changes in transmission velocity and phase are used. Both approaches benefit from the use of artificial-intelligence neural network algorithms that can extract damage features based on a learning process. The design and fabrication of a set of structural specimens representative of aging aerospace structures is presented. Three built-up specimens (pristine, with cracks, and with corrosion damage) are used. The specimen instrumentation with active sensors fabricated at the University of South Carolina is illustrated. Preliminary results obtained with the E/M impedance method on pristine and cracked specimens are presented.
Newgard, Craig D; Kampp, Michael; Nelson, Maria; Holmes, James F; Zive, Dana; Rea, Thomas; Bulger, Eileen M; Liao, Michael; Sherck, John; Hsia, Renee Y; Wang, N Ewen; Fleischman, Ross J; Barton, Erik D; Daya, Mohamud; Heineman, John; Kuppermann, Nathan
2012-05-01
"Emergency medical services (EMS) provider judgment" was recently added as a field triage criterion to the national guidelines, yet its predictive value and real world application remain unclear. We examine the use and independent predictive value of EMS provider judgment in identifying seriously injured persons. We analyzed a population-based retrospective cohort, supplemented by qualitative analysis, of injured children and adults evaluated and transported by 47 EMS agencies to 94 hospitals in five regions across the Western United States from 2006 to 2008. We used logistic regression models to evaluate the independent predictive value of EMS provider judgment for Injury Severity Score ≥ 16. EMS narratives were analyzed using qualitative methods to assess and compare common themes for each step in the triage algorithm, plus EMS provider judgment. 213,869 injured patients were evaluated and transported by EMS over the 3-year period, of whom 41,191 (19.3%) met at least one of the field triage criteria. EMS provider judgment was the most commonly used triage criterion (40.0% of all triage-positive patients; sole criterion in 21.4%). After accounting for other triage criteria and confounders, the adjusted odds ratio of Injury Severity Score ≥ 16 for EMS provider judgment was 1.23 (95% confidence interval, 1.03-1.47), although there was variability in predictive value across sites. Patients meeting EMS provider judgment had concerning clinical presentations qualitatively similar to those meeting mechanistic and other special considerations criteria. Among this multisite cohort of trauma patients, EMS provider judgment was the most commonly used field trauma triage criterion, independently associated with serious injury, and useful in identifying high-risk patients missed by other criteria. However, there was variability in predictive value between sites.
A priori motion models for four-dimensional reconstruction in gated cardiac SPECT
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lalush, D.S.; Tsui, B.M.W.; Cui, Lin
1996-12-31
We investigate the benefit of incorporating a priori assumptions about cardiac motion in a fully four-dimensional (4D) reconstruction algorithm for gated cardiac SPECT. Previous work has shown that non-motion-specific 4D Gibbs priors enforcing smoothing in time and space can control noise while preserving resolution. In this paper, we evaluate methods for incorporating known heart motion in the Gibbs prior model. The new model is derived by assigning motion vectors to each 4D voxel, defining the movement of that volume of activity into the neighboring time frames. Weights for the Gibbs cliques are computed based on these "most likely" motion vectors. To evaluate, we employ the mathematical cardiac-torso (MCAT) phantom with a new dynamic heart model that simulates the beating and twisting motion of the heart. Sixteen realistically-simulated gated datasets were generated, with noise simulated to emulate a real Tl-201 gated SPECT study. Reconstructions were performed using several different reconstruction algorithms, all modeling nonuniform attenuation and three-dimensional detector response. These include ML-EM with 4D filtering, 4D MAP-EM without prior motion assumption, and 4D MAP-EM with prior motion assumptions. The prior motion assumptions included both the correct motion model and incorrect models. Results show that reconstructions using the 4D prior model can smooth noise and preserve time-domain resolution more effectively than 4D linear filters. We conclude that modeling of motion in 4D reconstruction algorithms can be a powerful tool for smoothing noise and preserving temporal resolution in gated cardiac studies.
Mino, H
2007-01-01
The objective is to estimate the parameters, namely the impulse response (IR) functions of the linear time-invariant systems generating the intensity processes, in shot-noise-driven doubly stochastic Poisson processes (SND-DSPPs), in which multivariate presynaptic spike trains and postsynaptic spike trains can be assumed to be modeled by SND-DSPPs. An explicit formula for estimating the IR functions from observations of the multivariate input processes of the linear systems and the corresponding counting process (output process) is derived utilizing the expectation maximization (EM) algorithm. The validity of the estimation formula was verified through Monte Carlo simulations in which two presynaptic spike trains and one postsynaptic spike train were assumed to be observable. The IR functions estimated on the basis of the proposed identification method were close to the true IR functions. The proposed method will play an important role in identifying the input-output relationship of pre- and postsynaptic neural spike trains in practical situations.
NASA Astrophysics Data System (ADS)
Sharma, Navneet; Rawat, Tarun Kumar; Parthasarathy, Harish; Gautam, Kumar
2016-06-01
The aim of this paper is to design a current source, obtained as a representation of p information symbols {I_k}, so that the electromagnetic (EM) field it generates interacts with a quantum atomic system and produces, after a fixed duration T, a unitary gate U(T) that is as close as possible to a given unitary gate U_g. The design procedure involves calculating the EM field produced by {I_k}, hence the perturbing Hamiltonian, and finally the resulting evolution operator up to cubic order based on the Dyson series expansion. The gate error energy is thus obtained as a cubic polynomial in {I_k}, which is minimized using a gravitational search algorithm. The signal-to-noise ratio (SNR) in the designed gate is higher than that obtained using the quadratic Dyson series expansion. The SNR is calculated as the ratio of the squared Frobenius norm of the desired gate to that of the desired gate error.
Electromagnetic gyrokinetic simulation in GTS
NASA Astrophysics Data System (ADS)
Ma, Chenhao; Wang, Weixing; Startsev, Edward; Lee, W. W.; Ethier, Stephane
2017-10-01
We report recent developments in electromagnetic simulations for general toroidal geometry based on the particle-in-cell gyrokinetic code GTS. Because of the cancellation problem, EM gyrokinetic simulation has numerical difficulties in the MHD limit where k⊥ρi → 0 and/or β > me/mi. Recently, several approaches have been developed to circumvent this problem: (1) a p∥ formulation with the analytical skin term iteratively approximated by simulation particles (Yang Chen); (2) a modified p∥ formulation with ∫dt E∥ used in place of A∥ (Mishchenko); (3) a conservative scheme where the electron density perturbation for the Poisson equation is calculated from an electron continuity equation (Bao); (4) a double-split-weight scheme with two weights, one for the Poisson equation and one for the time derivative of Ampère's law, each with different splits designed to remove large terms from the Vlasov equation (Startsev). These algorithms are being implemented into the GTS framework for general toroidal geometry. The performance of these different algorithms will be compared for various EM modes.
Alpert, Abby; Morganti, Kristy G; Margolis, Gregg S; Wasserman, Jeffrey; Kellermann, Arthur L
2013-12-01
Some Medicare beneficiaries who place 911 calls to request an ambulance might safely be cared for in settings other than the emergency department (ED) at lower cost. Using 2005-09 Medicare claims data and a validated algorithm, we estimated that 12.9-16.2 percent of Medicare-covered 911 emergency medical services (EMS) transports involved conditions that were probably nonemergent or primary care treatable. Among beneficiaries not admitted to the hospital, about 34.5 percent had a low-acuity diagnosis that might have been managed outside the ED. Annual Medicare EMS and ED payments for these patients were approximately $1 billion per year. If Medicare had the flexibility to reimburse EMS for managing selected 911 calls in ways other than transport to an ED, we estimate that the federal government could save $283-$560 million or more per year, while improving the continuity of patient care. If private insurance companies followed suit, overall societal savings could be twice as large.
Fusing Continuous-Valued Medical Labels Using a Bayesian Model.
Zhu, Tingting; Dunkley, Nic; Behar, Joachim; Clifton, David A; Clifford, Gari D
2015-12-01
With the rapid increase in the volume of time-series medical data available through wearable devices, there is a need to employ automated algorithms to label data. Examples of labels include interventions, changes in activity (e.g. sleep) and changes in physiology (e.g. arrhythmias). However, automated algorithms tend to be unreliable, resulting in lower-quality care. Expert annotations are scarce, expensive, and prone to significant inter- and intra-observer variance. To address these problems, a Bayesian Continuous-valued Label Aggregator (BCLA) is proposed to provide a reliable estimate of the aggregated label while accurately inferring the precision and bias of each algorithm. The BCLA was applied to QT interval (pro-arrhythmic indicator) estimation from the electrocardiogram using labels from the 2006 PhysioNet/Computing in Cardiology Challenge database. It was compared to the mean, the median, and a previously proposed Expectation Maximization (EM) label aggregation approach. While accurately predicting each labelling algorithm's bias and precision, the root-mean-square error of the BCLA was 11.78 ± 0.63 ms, significantly outperforming the best Challenge entry (15.37 ± 2.13 ms) as well as the EM, mean, and median voting strategies (14.76 ± 0.52, 17.61 ± 0.55, and 14.43 ± 0.57 ms respectively, with p < 0.0001). The BCLA could therefore provide accurate estimates for medical continuous-valued label tasks in an unsupervised manner, even when the ground truth is not available.
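A simple EM-style aggregation of continuous labels is sketched below as context for the comparison above; it alternately re-estimates a precision-weighted consensus and each labeller's bias and precision. This is an illustrative baseline in the spirit of the EM comparator, not the BCLA itself, and the data are synthetic.

import numpy as np

def aggregate_labels(L, n_iter=50):
    # L: (n_annotators, n_samples) array of continuous labels (e.g. QT in ms)
    n_ann, _ = L.shape
    bias = np.zeros(n_ann)
    prec = np.ones(n_ann)                     # 1 / variance per annotator
    for _ in range(n_iter):
        consensus = np.average(L - bias[:, None], axis=0, weights=prec)
        resid = L - consensus[None, :]
        bias = resid.mean(axis=1)
        prec = 1.0 / np.maximum(resid.var(axis=1), 1e-8)
    return consensus, bias, prec

rng = np.random.default_rng(0)
truth = 400 + 20 * rng.standard_normal(100)
L = (truth[None, :] + np.array([[2.0], [-5.0], [0.5]])
     + np.array([[5.0], [15.0], [8.0]]) * rng.standard_normal((3, 100)))
consensus, bias, prec = aggregate_labels(L)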
Random sampling of elementary flux modes in large-scale metabolic networks.
Machado, Daniel; Soons, Zita; Patil, Kiran Raosaheb; Ferreira, Eugénio C; Rocha, Isabel
2012-09-15
The description of a metabolic network in terms of elementary (flux) modes (EMs) provides an important framework for metabolic pathway analysis. However, their application to large networks has been hampered by the combinatorial explosion in the number of modes. In this work, we develop a method for generating random samples of EMs without computing the whole set. Our algorithm is an adaptation of the canonical basis approach, where we add an additional filtering step which, at each iteration, selects a random subset of the new combinations of modes. In order to obtain an unbiased sample, all candidates are assigned the same probability of getting selected. This approach avoids the exponential growth of the number of modes during computation, thus generating a random sample of the complete set of EMs within reasonable time. We generated samples of different sizes for a metabolic network of Escherichia coli, and observed that they preserve several properties of the full EM set. It is also shown that EM sampling can be used for rational strain design. A well-distributed sample that is representative of the complete set of EMs should be suitable for most EM-based methods for analysis and optimization of metabolic networks. Source code for a cross-platform implementation in Python is freely available at http://code.google.com/p/emsampler. dmachado@deb.uminho.pt Supplementary data are available at Bioinformatics online.
Yue, Chen; Chen, Shaojie; Sair, Haris I; Airan, Raag; Caffo, Brian S
2015-09-01
Data reproducibility is a critical issue in all scientific experiments. In this manuscript, the problem of quantifying the reproducibility of graphical measurements is considered. The image intra-class correlation coefficient (I2C2) is generalized and the graphical intra-class correlation coefficient (GICC) is proposed for this purpose. The concept of the GICC is based on multivariate probit-linear mixed effect models. A Markov chain Monte Carlo EM (MCMC-EM) algorithm is used for estimating the GICC. Simulation results with varied settings are presented, and our method is applied to the KIRBY21 test-retest dataset.
Antenna analysis using neural networks
NASA Technical Reports Server (NTRS)
Smith, William T.
1992-01-01
Conventional computing schemes have long been used to analyze problems in electromagnetics (EM). The vast majority of EM applications require computationally intensive algorithms involving numerical integration and solutions to large systems of equations. The feasibility of using neural network computing algorithms for antenna analysis is investigated. The ultimate goal is to use a trained neural network algorithm to reduce the computational demands of existing reflector surface error compensation techniques. Neural networks are computational algorithms based on neurobiological systems. Neural nets consist of massively parallel interconnected nonlinear computational elements. They are often employed in pattern recognition and image processing problems. Recently, neural network analysis has been applied in the electromagnetics area for the design of frequency selective surfaces and beam forming networks. The backpropagation training algorithm was employed to simulate classical antenna array synthesis techniques. The Woodward-Lawson (W-L) and Dolph-Chebyshev (D-C) array pattern synthesis techniques were used to train the neural network. The inputs to the network were samples of the desired synthesis pattern. The outputs are the array element excitations required to synthesize the desired pattern. Once trained, the network is used to simulate the W-L or D-C techniques. Various sector patterns and cosecant-type patterns (27 total) generated using W-L synthesis were used to train the network. Desired pattern samples were then fed to the neural network. The outputs of the network were the simulated W-L excitations. A 20 element linear array was used. There were 41 input pattern samples with 40 output excitations (20 real parts, 20 imaginary). A comparison between the simulated and actual W-L techniques is shown for a triangular-shaped pattern. Dolph-Chebyshev is a different class of synthesis technique in that D-C is used for side lobe control as opposed to pattern shaping. The interesting thing about D-C synthesis is that the side lobes have the same amplitude. Five-element arrays were used. Again, 41 pattern samples were used for the input. Nine actual D-C patterns ranging from -10 dB to -30 dB side lobe levels were used to train the network. A comparison between simulated and actual D-C techniques for a pattern with -22 dB side lobe level is shown. The goal for this research was to evaluate the performance of neural network computing with antennas. Future applications will employ the backpropagation training algorithm to drastically reduce the computational complexity involved in performing EM compensation for surface errors in large space reflector antennas.
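A hedged sketch of the pattern-samples-to-excitations mapping described above: a small backpropagation network (scikit-learn's MLPRegressor stands in for the original implementation) is trained on synthetic pattern/excitation pairs. The toy array-factor model used to generate the training data is an assumption for illustration, not Woodward-Lawson or Dolph-Chebyshev synthesis.

import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
n_samples, n_pattern, n_elements = 200, 41, 20

# Toy training data: random real excitations and the magnitude of the far-field
# pattern they produce for a uniform linear array with half-wavelength spacing.
theta = np.linspace(-np.pi / 2, np.pi / 2, n_pattern)
phase = np.pi * np.sin(theta)                                   # per-element phase step
steering = np.exp(1j * np.outer(phase, np.arange(n_elements)))  # (n_pattern, n_elements)
excitations = rng.standard_normal((n_samples, n_elements))
patterns = np.abs(excitations @ steering.T)                     # pattern samples

net = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000, random_state=0)
net.fit(patterns, excitations)                 # learn pattern samples -> excitations
predicted_excitations = net.predict(patterns[:1])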
Rusu, Mirabela; Birmanns, Stefan
2010-04-01
A structural characterization of multi-component cellular assemblies is essential to explain the mechanisms governing biological function. Macromolecular architectures may be revealed by integrating information collected from various biophysical sources - for instance, by interpreting low-resolution electron cryomicroscopy reconstructions in relation to the crystal structures of the constituent fragments. A simultaneous registration of multiple components is beneficial when building atomic models as it introduces additional spatial constraints to facilitate the native placement inside the map. The high-dimensional nature of such a search problem prevents the exhaustive exploration of all possible solutions. Here we introduce a novel method based on genetic algorithms, for the efficient exploration of the multi-body registration search space. The classic scheme of a genetic algorithm was enhanced with new genetic operations, tabu search and parallel computing strategies and validated on a benchmark of synthetic and experimental cryo-EM datasets. Even at a low level of detail, for example 35-40 Å, the technique successfully registered multiple component biomolecules, measuring accuracies within one order of magnitude of the nominal resolutions of the maps. The algorithm was implemented using the Sculptor molecular modeling framework, which also provides a user-friendly graphical interface and enables an instantaneous, visual exploration of intermediate solutions. (c) 2009 Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Werner, Frank; Wind, Galina; Zhang, Zhibo; Platnick, Steven; Di Girolamo, Larry; Zhao, Guangyu; Amarasinghe, Nandana; Meyer, Kerry
2016-12-01
A research-level retrieval algorithm for cloud optical and microphysical properties is developed for the Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER) aboard the Terra satellite. It is based on the operational MODIS algorithm. This paper documents the technical details of this algorithm and evaluates the retrievals for selected marine boundary layer cloud scenes through comparisons with the operational MODIS Data Collection 6 (C6) cloud product. The newly developed, ASTER-specific cloud masking algorithm is evaluated through comparison with an independent algorithm reported in [Zhao and Di Girolamo (2006)]. To validate and evaluate the cloud optical thickness (τ) and cloud effective radius (r_eff) from ASTER, the high-spatial-resolution ASTER observations are first aggregated to the same 1000 m resolution as MODIS. Subsequently, τ_A and r_eff,
YANA – a software tool for analyzing flux modes, gene-expression and enzyme activities
Schwarz, Roland; Musch, Patrick; von Kamp, Axel; Engels, Bernd; Schirmer, Heiner; Schuster, Stefan; Dandekar, Thomas
2005-01-01
Background A number of algorithms for steady-state analysis of metabolic networks have been developed over the years. Of these, Elementary Mode Analysis (EMA) has proven especially useful. Despite its low user-friendliness, METATOOL, as a reliable high-performance implementation of the algorithm, has been the instrument of choice up to now. As reported here, the analysis of metabolic networks has been improved by an editor and analyzer of metabolic flux modes. Analysis routines for expression levels and for the most central, well-connected metabolites and their metabolic connections are of particular interest. Results YANA features a platform-independent, dedicated toolbox for metabolic networks with a graphical user interface to calculate (integrating METATOOL), edit (including support for the SBML format), visualize, centralize, and compare elementary flux modes. Further, YANA calculates expected flux distributions for a given Elementary Mode (EM) activity pattern and vice versa. Moreover, a dissection algorithm, a centralization algorithm, and an average diameter routine can be used to simplify and analyze complex networks. Proteomics or gene expression data give a rough indication of some individual enzyme activities, whereas the complete flux distribution in the network is often not known. As such data are noisy, YANA features a fast evolutionary algorithm (EA) for the prediction of EM activities with minimum error, including alerts for inconsistent experimental data. We offer the possibility to include further known constraints (e.g. growth constraints) in the EA calculation process. The redox metabolism around glutathione reductase serves as an illustrative example. All software and documentation are available for download at . Conclusion A graphical toolbox and an editor for METATOOL, as well as a series of additional routines for metabolic network analyses, constitute a new user-friendly software package for such efforts. PMID:15929789
Very low-dose adult whole-body tumor imaging with F-18 FDG PET/CT
NASA Astrophysics Data System (ADS)
Krol, Andrzej; Naveed, Muhammad; McGrath, Mary; Lisi, Michele; Lavalley, Cathy; Feiglin, David
2015-03-01
The aim of this study was to evaluate whether the effective radiation dose due to the PET component in adult whole-body tumor imaging with time-of-flight F-18 FDG PET/CT could be significantly reduced. We retrospectively analyzed data for 10 patients with body mass indices ranging from 25 to 50. We simulated F-18 FDG dose reduction to 25% of the ACR-recommended dose via reconstruction of simulated shorter acquisition-time-per-bed-position scans from the acquired list-mode data. F-18 FDG whole-body scans were reconstructed using a time-of-flight OSEM algorithm with advanced system modeling. Two groups of images were obtained: group A with a standard dose of F-18 FDG and standard reconstruction parameters, and group B with a simulated 25% dose and modified reconstruction parameters. Three nuclear medicine physicians blinded to the simulated activity independently reviewed the images and compared their diagnostic quality. Based on the input from the physicians, we selected optimal modified reconstruction parameters for group B. In the images so obtained, all the lesions observed in group A were visible in group B. The tumor SUV values differed between groups A and B. However, no significant differences were reported in the final interpretation of the images from the two groups. In conclusion, for a small number of patients, we have demonstrated that F-18 FDG dose reduction to 25% of the ACR-recommended dose, accompanied by appropriate modification of the reconstruction parameters, provided adequate diagnostic quality of PET images acquired on a time-of-flight PET/CT.
NASA Astrophysics Data System (ADS)
Wells, R. G.; Gifford, H. C.; Pretorius, P. H.; Famcombe, T. H.; Narayanan, M. V.; King, M. A.
2002-06-01
We have demonstrated an improvement due to attenuation correction (AC) at the task of lesion detection in thoracic SPECT images. However, increased noise in the transmission data due to aging sources or very large patients, and misregistration of the emission and transmission maps, can reduce the accuracy of the AC and may result in a loss of lesion detectability. We investigated the impact of noise in and misregistration of transmission data, on the detection of simulated Ga-67 thoracic lesions. Human-observer localization-receiver-operating-characteristic (LROC) methodology was used to assess performance. Both emission and transmission data were simulated using the MCAT computer phantom. Emission data were reconstructed using OSEM incorporating AC and detector resolution compensation. Clinical noise levels were used in the emission data. The transmission-data noise levels ranged from zero (noise-free) to 32 times the measured clinical levels. Transaxial misregistrations of 0.32, 0.63, and 1.27 cm between emission and transmission data were also examined. Three different algorithms were considered for creating the attenuation maps: filtered backprojection (FBP), unbounded maximum-likelihood (ML), and block-iterative transmission AB (BITAB). Results indicate that a 16-fold increase in the noise was required to eliminate the benefit afforded by AC, when using FBP or ML to reconstruct the attenuation maps. When using BITAB, no significant loss in performance was observed for a 32-fold increase in noise. Misregistration errors are also a concern as even small errors here reduce the performance gains of AC.
Study on fluorescence spectra of thiamine, riboflavin and pyridoxine
NASA Astrophysics Data System (ADS)
Yang, Hui; Xiao, Xue; Zhao, Xuesong; Hu, Lan; Lv, Caofang; Yin, Zhangkun
2016-01-01
This paper presents the intrinsic fluorescence characteristics of vitamins B1, B2 and B6 measured with a 3D fluorescence spectrophotometer. Three strong fluorescence areas of vitamin B2 were found at λex/λem = 270/525 nm, 370/525 nm and 450/525 nm, one fluorescence area of vitamin B1 at λex/λem = 370/460 nm, and two fluorescence areas of vitamin B6 at λex/λem = 250/370 nm and 325/370 nm. The influence of solution pH on the fluorescence profile is also discussed. Using the PARAFAC algorithm, 10 mixed solutions of vitamins B1, B2 and B6 were successfully decomposed, and the emission profiles, excitation profiles, central wavelengths and concentrations of the three components were retrieved precisely in about five iterations.
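A hedged sketch of a PARAFAC decomposition of an excitation-emission-sample tensor, assuming the third-party tensorly package; the synthetic rank-3 tensor below stands in for the measured vitamin mixture data.

import numpy as np
import tensorly as tl
from tensorly.decomposition import parafac

rng = np.random.default_rng(0)
n_samples, n_ex, n_em, rank = 10, 30, 40, 3

# Build a synthetic trilinear tensor: samples x excitation x emission.
conc = rng.random((n_samples, rank))          # component concentrations
ex_profiles = rng.random((n_ex, rank))        # excitation spectra
em_profiles = rng.random((n_em, rank))        # emission spectra
tensor = tl.cp_to_tensor((np.ones(rank), [conc, ex_profiles, em_profiles]))

# Decompose; the recovered factors estimate concentrations and spectral profiles.
weights, factors = parafac(tl.tensor(tensor), rank=rank, n_iter_max=500)
est_conc, est_ex, est_em = factors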
Nenna, Vanessa; Herckenrather, Daan; Knight, Rosemary; Odlum, Nick; McPhee, Darcy
2013-01-01
Developing effective resource management strategies to limit or prevent saltwater intrusion as a result of increasing demands on coastal groundwater resources requires reliable information about the geologic structure and hydrologic state of an aquifer system. A common strategy for acquiring such information is to drill sentinel wells near the coast to monitor changes in water salinity with time. However, installation and operation of sentinel wells is costly and provides limited spatial coverage. We studied the use of noninvasive electromagnetic (EM) geophysical methods as an alternative to installation of monitoring wells for characterizing coastal aquifers. We tested the feasibility of using EM methods at a field site in northern California to identify the potential for and/or presence of hydraulic communication between an unconfined saline aquifer and a confined freshwater aquifer. One-dimensional soundings were acquired using the time-domain electromagnetic (TDEM) and audiomagnetotelluric (AMT) methods. We compared inverted resistivity models of TDEM and AMT data obtained from several inversion algorithms. We found that multiple interpretations of inverted models can be supported by the same data set, but that there were consistencies between all data sets and inversion algorithms. Results from all collected data sets suggested that EM methods are capable of reliably identifying a saltwater-saturated zone in the unconfined aquifer. Geophysical data indicated that the impermeable clay between aquifers may be more continuous than is supported by current models.
Fully anisotropic 3-D EM modelling on a Lebedev grid with a multigrid pre-conditioner
NASA Astrophysics Data System (ADS)
Jaysaval, Piyoosh; Shantsev, Daniil V.; de la Kethulle de Ryhove, Sébastien; Bratteland, Tarjei
2016-12-01
We present a numerical algorithm for 3-D electromagnetic (EM) simulations in conducting media with general electric anisotropy. The algorithm is based on the finite-difference discretization of frequency-domain Maxwell's equations on a Lebedev grid, in which all components of the electric field are collocated but half a spatial step staggered with respect to the magnetic field components, which also are collocated. This leads to a system of linear equations that is solved using a stabilized biconjugate gradient method with a multigrid preconditioner. We validate the accuracy of the numerical results for layered and 3-D tilted transverse isotropic (TTI) earth models representing typical scenarios used in the marine controlled-source EM method. It is then demonstrated that not taking into account the full anisotropy of the conductivity tensor can lead to misleading inversion results. For synthetic data corresponding to a 3-D model with a TTI anticlinal structure, a standard vertical transverse isotropic (VTI) inversion is not able to image a resistor, while for a 3-D model with a TTI synclinal structure it produces a false resistive anomaly. However, if the VTI forward solver used in the inversion is replaced by the proposed TTI solver with perfect knowledge of the strike and dip of the dipping structures, the resulting resistivity images become consistent with the true models.
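As a generic illustration of the preconditioned iterative solve described above (not the paper's Lebedev-grid Maxwell system), the sketch below solves a sparse finite-difference system with SciPy's stabilized biconjugate gradient method; an incomplete-LU factorization stands in for the multigrid preconditioner, and a 2-D Laplacian stands in for the discretized Maxwell operator.

import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

n = 50
A1d = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format="csc")
I = sp.identity(n, format="csc")
A = (sp.kron(A1d, I) + sp.kron(I, A1d)).tocsc()      # 2-D Laplacian stand-in
b = np.ones(A.shape[0])

ilu = spla.spilu(A, drop_tol=1e-4, fill_factor=10)   # stand-in preconditioner
M = spla.LinearOperator(A.shape, matvec=ilu.solve)

x, info = spla.bicgstab(A, b, M=M)
print("converged" if info == 0 else f"info={info}")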
Cryo-EM of dynamic protein complexes in eukaryotic DNA replication.
Sun, Jingchuan; Yuan, Zuanning; Bai, Lin; Li, Huilin
2017-01-01
DNA replication in eukaryotes is a highly dynamic process that involves several dozen proteins. Some of these proteins form stable complexes that are amenable to high-resolution structure determination by cryo-EM, thanks to the recent advent of direct electron detectors and powerful image analysis algorithms. But many of these proteins associate only transiently and flexibly, precluding traditional biochemical purification. We found that direct mixing of the component proteins followed by 2D and 3D image sorting can capture some very weakly interacting complexes. Even at the 2D-average level and at low resolution, EM images of these flexible complexes can provide important biological insights. It is often necessary to positively identify the feature of interest in a low-resolution EM structure. We found that systematically fusing or inserting maltose binding protein (MBP) into selected proteins is highly effective in these situations. In this chapter, we describe the EM studies of several protein complexes involved in eukaryotic DNA replication over the past decade or so. We suggest that some of the approaches used in these studies may be applicable to structural analysis of other biological systems. © 2016 The Protein Society.
NASA Astrophysics Data System (ADS)
Hendricks, S.; Hoppmann, M.; Hunkeler, P. A.; Kalscheuer, T.; Gerdes, R.
2015-12-01
In Antarctica, ice crystals (platelets) form and grow in supercooled waters below ice shelves. These platelets rise and accumulate beneath nearby sea ice to form a several-meter-thick sub-ice platelet layer. This special ice type is a unique habitat, influences sea-ice mass and energy balance, and its volume can be interpreted as an indicator of ice-ocean interactions. Although progress has been made in determining and understanding its spatio-temporal variability based on point measurements, investigating this phenomenon on a larger scale remains a challenge due to logistical constraints and a lack of suitable methodology. In the present study, we applied a laterally constrained Marquardt-Levenberg inversion to a unique multi-frequency electromagnetic (EM) induction sounding dataset obtained on the ice-shelf-influenced fast-ice regime of Atka Bay, eastern Weddell Sea. We adapted the inversion algorithm to incorporate a sensor-specific signal bias, and confirmed the reliability of the algorithm by performing a sensitivity study using synthetic data. We inverted the field data for sea-ice and sub-ice platelet-layer thickness and electrical conductivity, and calculated ice-volume fractions from platelet-layer conductivities using Archie's Law. The thickness results agreed well with drill-hole validation datasets within the uncertainty range, and the ice-volume fraction also yielded plausible results. Our findings imply that multi-frequency EM induction sounding is a suitable approach to efficiently map sea-ice and platelet-layer properties. However, we emphasize that the successful application of this technique requires a break with traditional EM sensor calibration strategies, due to the need for absolute calibration with respect to a physical forward model.
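A hedged sketch of the Archie's-Law step mentioned above: the ice-volume fraction of the platelet layer is derived from its inverted bulk conductivity and the conductivity of the interstitial seawater. The functional form and the cementation exponent below are assumed for illustration and are not necessarily those used in the study.

import numpy as np

def ice_volume_fraction(sigma_bulk, sigma_brine, m=1.5):
    # Archie's Law (no surface conduction): sigma_bulk = sigma_brine * porosity**m
    porosity = (np.asarray(sigma_bulk) / sigma_brine) ** (1.0 / m)
    return 1.0 - porosity          # solid-ice fraction of the platelet layer

sigma_layer = np.array([1.2, 1.5, 1.8])   # S/m, inverted platelet-layer values (illustrative)
sigma_seawater = 2.7                      # S/m (illustrative)
print(ice_volume_fraction(sigma_layer, sigma_seawater))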
Maximum Likelihood and Minimum Distance Applied to Univariate Mixture Distributions.
ERIC Educational Resources Information Center
Wang, Yuh-Yin Wu; Schafer, William D.
This Monte-Carlo study compared modified Newton (NW), expectation-maximization algorithm (EM), and minimum Cramer-von Mises distance (MD), used to estimate parameters of univariate mixtures of two components. Data sets were fixed at size 160 and manipulated by mean separation, variance ratio, component proportion, and non-normality. Results…
A Review of Methods for Missing Data.
ERIC Educational Resources Information Center
Pigott, Therese D.
2001-01-01
Reviews methods for handling missing data in a research study. Model-based methods, such as maximum likelihood using the EM algorithm and multiple imputation, hold more promise than ad hoc methods. Although model-based methods require more specialized computer programs and assumptions about the nature of missing data, these methods are appropriate…
Local Influence Analysis of Nonlinear Structural Equation Models
ERIC Educational Resources Information Center
Lee, Sik-Yum; Tang, Nian-Sheng
2004-01-01
By regarding the latent random vectors as hypothetical missing data and based on the conditional expectation of the complete-data log-likelihood function in the EM algorithm, we investigate assessment of local influence of various perturbation schemes in a nonlinear structural equation model. The basic building blocks of local influence analysis…
Stochastic Approximation Methods for Latent Regression Item Response Models
ERIC Educational Resources Information Center
von Davier, Matthias; Sinharay, Sandip
2010-01-01
This article presents an application of a stochastic approximation expectation maximization (EM) algorithm using a Metropolis-Hastings (MH) sampler to estimate the parameters of an item response latent regression model. Latent regression item response models are extensions of item response theory (IRT) to a latent variable model with covariates…
Multilevel Analysis of Structural Equation Models via the EM Algorithm.
ERIC Educational Resources Information Center
Jo, See-Heyon
The question of how to analyze unbalanced hierarchical data generated from structural equation models has been a common problem for researchers and analysts. Among difficulties plaguing statistical modeling are estimation bias due to measurement error and the estimation of the effects of the individual's hierarchical social milieu. This paper…
Robust numerical electromagnetic eigenfunction expansion algorithms
NASA Astrophysics Data System (ADS)
Sainath, Kamalesh
This thesis summarizes developments in rigorous, full-wave, numerical spectral-domain (integral plane wave eigenfunction expansion [PWE]) evaluation algorithms concerning time-harmonic electromagnetic (EM) fields radiated by generally-oriented and positioned sources within planar and tilted-planar layered media exhibiting general anisotropy, thickness, layer number, and loss characteristics. The work is motivated by the need to accurately and rapidly model EM fields radiated by subsurface geophysical exploration sensors probing layered, conductive media, where complex geophysical and man-made processes can lead to micro-laminate and micro-fractured geophysical formations exhibiting, at the lower (sub-2MHz) frequencies typically employed for deep EM wave penetration through conductive geophysical media, bulk-scale anisotropic (i.e., directional) electrical conductivity characteristics. When the planar-layered approximation (layers of piecewise-constant material variation and transversely-infinite spatial extent) is locally, near the sensor region, considered valid, numerical spectral-domain algorithms are suitable due to their strong low-frequency stability characteristic, and ability to numerically predict time-harmonic EM field propagation in media with response characterized by arbitrarily lossy and (diagonalizable) dense, anisotropic tensors. If certain practical limitations are addressed, PWE can robustly model sensors with general position and orientation that probe generally numerous, anisotropic, lossy, and thick layers. The main thesis contributions, leading to a sensor and geophysical environment-robust numerical modeling algorithm, are as follows: (1) Simple, rapid estimator of the region (within the complex plane) containing poles, branch points, and branch cuts (critical points) (Chapter 2), (2) Sensor and material-adaptive azimuthal coordinate rotation, integration contour deformation, integration domain sub-region partition and sub-region-dependent integration order (Chapter 3), (3) Integration partition-extrapolation-based (Chapter 3) and Gauss-Laguerre Quadrature (GLQ)-based (Chapter 4) evaluations of the deformed, semi-infinite-length integration contour tails, (4) Robust in-situ-based (i.e., at the spectral-domain integrand level) direct/homogeneous-medium field contribution subtraction and analytical curbing of the source current spatial spectrum function's ill behavior (Chapter 5), and (5) Analytical re-casting of the direct-field expressions when the source is embedded within a NBAM, short for non-birefringent anisotropic medium (Chapter 6). The benefits of these contributions are, respectively, (1) Avoiding computationally intensive critical-point location and tracking (computation time savings), (2) Sensor and material-robust curbing of the integrand's oscillatory and slow decay behavior, as well as preventing undesirable critical-point migration within the complex plane (computation speed, precision, and instability-avoidance benefits), (3) sensor and material-robust reduction (or, for GLQ, elimination) of integral truncation error, (4) robustly stable modeling of scattered fields and/or fields radiated from current sources modeled as spatially distributed (10 to 1000-fold compute-speed acceleration also realized for distributed-source computations), and (5) numerically stable modeling of fields radiated from sources within NBAM layers. Having addressed these limitations, are PWE algorithms applicable to modeling EM waves in tilted planar-layered geometries too? 
This question is explored in Chapter 7 using a Transformation Optics-based approach that allows one to model wave propagation through layered media which, in the sensor's vicinity, possess tilted planar interfaces. The technique, however, introduces spurious wave scattering, and the resulting degradation of computational accuracy requires analysis. The mathematical exposition, along with an exhaustive simulation-based study and analysis of the limitations of this novel tilted-layer modeling formulation, is Chapter 7's main contribution.
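As a small illustration of the Gauss-Laguerre quadrature (GLQ) tail evaluation named in contribution (3), the sketch below approximates a semi-infinite integral with Laguerre nodes and weights; the exponentially decaying toy integrand is a stand-in for the actual spectral-domain integrand along a deformed contour tail.

import numpy as np
from numpy.polynomial.laguerre import laggauss

def tail_integral_glq(f, n_nodes=30):
    # Approximate the integral of f over [0, inf) with Gauss-Laguerre quadrature:
    # int_0^inf f(x) dx = int_0^inf exp(-x) * [exp(x) f(x)] dx ~ sum_i w_i exp(x_i) f(x_i)
    x, w = laggauss(n_nodes)
    return np.sum(w * np.exp(x) * f(x))

# Toy tail integrand: int_0^inf exp(-2x) cos(x) dx = 2/5.
f = lambda x: np.exp(-2.0 * x) * np.cos(x)
print(tail_integral_glq(f), 2.0 / 5.0)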
X-rays in the Cryo-EM Era: Structural Biology’s Dynamic Future
Shoemaker, Susannah C.; Ando, Nozomi
2018-01-01
Over the past several years, single-particle cryo-electron microscopy (cryo-EM) has emerged as a leading method for elucidating macromolecular structures at near-atomic resolution, rivaling even the established technique of X-ray crystallography. Cryo-EM is now able to probe proteins as small as hemoglobin (64 kDa), while avoiding the crystallization bottleneck entirely. The remarkable success of cryo-EM has called into question the continuing relevance of X-ray methods, particularly crystallography. To say that the future of structural biology is either cryo-EM or crystallography, however, would be misguided. Crystallography remains better suited to yield precise atomic coordinates of macromolecules under a few hundred kDa in size, while the ability to probe larger, potentially more disordered assemblies is a distinct advantage of cryo-EM. Likewise, crystallography is better equipped to provide high-resolution dynamic information as a function of time, temperature, pressure, and other perturbations, whereas cryo-EM offers increasing insight into conformational and energy landscapes, particularly as algorithms to deconvolute conformational heterogeneity become more advanced. Ultimately, the future of both techniques depends on how their individual strengths are utilized to tackle questions on the frontiers of structural biology. Structure determination is just one piece of a much larger puzzle: a central challenge of modern structural biology is to relate structural information to biological function. In this perspective, we share insight from several leaders in the field and examine the unique and complementary ways in which X-ray methods and cryo-EM can shape the future of structural biology. PMID:29227642
NASA Astrophysics Data System (ADS)
Germino, Mary; Gallezot, Jean-Dominque; Yan, Jianhua; Carson, Richard E.
2017-07-01
Parametric images for dynamic positron emission tomography (PET) are typically generated by an indirect method, i.e. reconstructing a time series of emission images and then fitting a kinetic model to each voxel time activity curve. Alternatively, ‘direct reconstruction’ incorporates the kinetic model into the reconstruction algorithm itself, directly producing parametric images from projection data. Direct reconstruction has been shown to achieve parametric images with lower standard error than the indirect method. Here, we present direct reconstruction for brain PET using event-by-event motion correction of list-mode data, applied to two tracers. Event-by-event motion correction was implemented for direct reconstruction in the Parametric Motion-compensation OSEM List-mode Algorithm for Resolution-recovery reconstruction. The direct implementation was tested on simulated and human datasets with tracers [11C]AFM (serotonin transporter) and [11C]UCB-J (synaptic density), which follow the 1-tissue compartment model. Rigid head motion was tracked with the Vicra system. Parametric images of K_1 and distribution volume (V_T = K_1/k_2) were compared to those generated by the indirect method by regional coefficient of variation (CoV). Performance across count levels was assessed using sub-sampled datasets. For simulated and real datasets at high counts, the two methods estimated K_1 and V_T with comparable accuracy. At lower count levels, the direct method was substantially more robust to outliers than the indirect method. Compared to the indirect method, direct reconstruction reduced regional K_1 CoV by 35-48% (simulated dataset), 39-43% ([11C]AFM dataset) and 30-36% ([11C]UCB-J dataset) across count levels (averaged over regions at matched iteration); V_T CoV was reduced by 51-58%, 54-60% and 30-46%, respectively. Motion correction played an important role in the dataset with larger motion: correction increased regional V_T by 51% on average in the [11C]UCB-J dataset. Direct reconstruction of dynamic brain PET with event-by-event motion correction is achievable and dramatically more robust to noise in V_T images than the indirect method.
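As a minimal illustration of the 1-tissue compartment model referred to above (the voxelwise fitting step of the indirect method, not the direct list-mode reconstruction), the sketch below simulates a tissue time-activity curve as C_T(t) = K1 exp(-k2 t) convolved with an assumed plasma input C_p(t), fits K1 and k2 by least squares, and reports V_T = K1/k2; the input function and kinetic values are illustrative assumptions.

import numpy as np
from scipy.optimize import curve_fit

dt = 0.1
t = np.arange(0, 60, dt)                        # minutes
Cp = 100.0 * t * np.exp(-t / 2.0)               # assumed plasma input function

def one_tissue(t, K1, k2):
    irf = K1 * np.exp(-k2 * t)
    return np.convolve(irf, Cp)[: len(t)] * dt  # tissue TAC via discrete convolution

K1_true, k2_true = 0.3, 0.1
rng = np.random.default_rng(0)
tac = one_tissue(t, K1_true, k2_true) + rng.normal(0.0, 2.0, len(t))

(K1_hat, k2_hat), _ = curve_fit(one_tissue, t, tac, p0=[0.1, 0.05])
print(f"K1={K1_hat:.3f}  k2={k2_hat:.3f}  VT={K1_hat / k2_hat:.2f}")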
Quantitative and Qualitative Assessment of Yttrium-90 PET/CT Imaging
Büsing, Karen-Anett; Schönberg, Stefan O.; Bailey, Dale L.; Willowson, Kathy; Glatting, Gerhard
2014-01-01
Yttrium-90 is known to have a low positron emission branch of 32 ppm that may allow for personalized dosimetry of liver cancer therapy with 90Y-labeled microspheres. The aim of this work was to image and quantify 90Y so that accurate predictions of the absorbed dose can be made. The measurements were performed within the QUEST study (University of Sydney, and Sirtex Medical, Australia). A NEMA IEC body phantom containing 6 fillable spheres (10–37 mm ∅) was used to measure the 90Y distribution with a Biograph mCT PET/CT (Siemens, Erlangen, Germany) with time-of-flight (TOF) acquisition. A sphere-to-background ratio of 8:1, with a total 90Y activity of 3 GBq, was used. Measurements were performed over one week (0, 3, 5 and 7 d). The acquisition protocol consisted of 30 min two-bed-position and 120 min single-bed-position scans. Images were reconstructed with 3D ordered subset expectation maximization (OSEM) and point spread function (PSF) modelling for iteration numbers of 1–12, with 21 (TOF) and 24 (non-TOF) subsets, and CT-based attenuation and scatter correction. Convergence of the algorithms and activity recovery were assessed based on regions-of-interest (ROI) analysis of the background (100 voxels), spheres (4 voxels) and the central low-density insert (25 voxels). For the largest sphere, the recovery coefficient (RC) values for the 30 min two-bed-position, 30 min single-bed and 120 min single-bed acquisitions were 1.12±0.20, 1.14±0.13 and 0.97±0.07, respectively. For the smaller-diameter spheres, the PSF algorithm with TOF and single-bed acquisition provided a comparatively better activity recovery. Quantification of Y-90 using the Biograph mCT PET/CT is possible with reasonable accuracy, the limitations being the size of the lesion and the activity concentration present. At this stage, based on our study, it seems advantageous to use different protocols depending on the size of the lesion. PMID:25369020
NASA Astrophysics Data System (ADS)
Chen, Wei; Guo, Li-xin; Li, Jiang-ting
2017-04-01
This study analyzes the scattering characteristics of obliquely incident electromagnetic (EM) waves in a time-varying plasma sheath. The finite-difference time-domain algorithm is applied. According to the empirical formula for the collision frequency in a plasma sheath, the plasma frequency, temperature, and pressure are assumed to vary with time in the form of an exponential rise. Several scattering problems of EM waves are discussed by calculating the radar cross section (RCS) of the time-varying plasma. The variation of the RCS with time is summarized for the L and S wave bands.
Intrinsic fluorescence spectra characteristics of vitamin B1, B2, and B6
NASA Astrophysics Data System (ADS)
Yang, Hui; Xiao, Xue; Zhao, Xuesong; Hu, Lan; Lv, Caofang; Yin, Zhangkun
2015-11-01
This paper presents the intrinsic fluorescence characteristics of vitamins B1, B2 and B6 measured with a 3D fluorescence spectrophotometer. Three strong fluorescence areas of vitamin B2 were found at λex/λem = 270/525 nm, 370/525 nm and 450/525 nm; one fluorescence area of vitamin B1 at λex/λem = 370/460 nm; and two fluorescence areas of vitamin B6 at λex/λem = 250/370 nm and 325/370 nm. The influence of the solution pH on the fluorescence profile was also discussed. Using the PARAFAC algorithm, 10 mixed solutions of vitamins B1, B2 and B6 were successfully decomposed, and the emission profiles, excitation profiles, central wavelengths and concentrations of the three components were retrieved precisely after about 5 iterations.
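For readers unfamiliar with PARAFAC, the sketch below shows a bare-bones alternating-least-squares CP/PARAFAC decomposition of a three-way sample-excitation-emission tensor; the tensor shape, rank and random initialisation are illustrative assumptions, not the authors' implementation (which would typically also impose non-negativity).

```python
import numpy as np

def parafac_als(X, rank, n_iter=100, seed=0):
    """Rank-R CP/PARAFAC decomposition of a 3-way tensor X (I x J x K)
    by alternating least squares: X[i,j,k] ~= sum_r A[i,r] B[j,r] C[k,r]."""
    rng = np.random.default_rng(seed)
    I, J, K = X.shape
    A, B, C = rng.random((I, rank)), rng.random((J, rank)), rng.random((K, rank))
    for _ in range(n_iter):
        A = np.einsum('ijk,jr,kr->ir', X, B, C) @ np.linalg.pinv((B.T @ B) * (C.T @ C))
        B = np.einsum('ijk,ir,kr->jr', X, A, C) @ np.linalg.pinv((A.T @ A) * (C.T @ C))
        C = np.einsum('ijk,ir,jr->kr', X, A, B) @ np.linalg.pinv((A.T @ A) * (B.T @ B))
    return A, B, C

# Toy EEM tensor: 10 mixed samples x 40 excitation x 60 emission channels.
X = np.random.default_rng(1).random((10, 40, 60))
scores, excitation, emission = parafac_als(X, rank=3)
print(scores.shape, excitation.shape, emission.shape)  # (10, 3) (40, 3) (60, 3)
```

In the mixture-analysis setting, the three factor matrices correspond to relative concentrations, excitation profiles and emission profiles of the resolved components.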
Kang, Le; Carter, Randy; Darcy, Kathleen; Kauderer, James; Liao, Shu-Yuan
2013-01-01
In this article we use a latent class model (LCM) with prevalence modeled as a function of covariates to assess diagnostic test accuracy in situations where the true disease status is not observed, but observations on three or more conditionally independent diagnostic tests are available. A fast Monte Carlo EM (MCEM) algorithm with binary (disease) diagnostic data is implemented to estimate the parameters of interest, namely sensitivity, specificity, and prevalence of the disease as a function of covariates. To obtain standard errors for confidence interval construction of estimated parameters, the missing information principle is applied to adjust information matrix estimates. We compare the adjusted-information-matrix-based standard error estimates with the bootstrap standard error estimates, both obtained using the fast MCEM algorithm, through an extensive Monte Carlo study. Simulation demonstrates that the adjusted information matrix approach yields standard error estimates similar to those of the bootstrap methods under certain scenarios. The bootstrap percentile intervals have satisfactory coverage probabilities. We then apply the LCM analysis to a real data set of 122 subjects from a Gynecologic Oncology Group (GOG) study of significant cervical lesion (S-CL) diagnosis in women with atypical glandular cells of undetermined significance (AGC) to compare the diagnostic accuracy of a histology-based evaluation, a CA-IX biomarker-based test and a human papillomavirus (HPV) DNA test. PMID:24163493
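The core of such a latent class analysis is an EM iteration over the unobserved disease status. The sketch below shows a minimal version for two classes and conditionally independent binary tests, without the covariate-dependent prevalence or Monte Carlo E-step used in the article; the simulated data simply stand in for a study like the GOG one.

```python
import numpy as np

def lcm_em(Y, n_iter=200, seed=0):
    """EM for a 2-class latent class model with conditionally independent
    binary tests. Y is (n_subjects, n_tests). Returns prevalence, sensitivity
    and specificity estimates for each test."""
    rng = np.random.default_rng(seed)
    n, T = Y.shape
    prev = 0.5
    p_dis = rng.uniform(0.6, 0.9, T)   # P(test+ | diseased)
    p_non = rng.uniform(0.1, 0.4, T)   # P(test+ | not diseased)
    for _ in range(n_iter):
        # E-step: posterior probability of disease for each subject.
        lik_dis = prev * np.prod(p_dis**Y * (1 - p_dis)**(1 - Y), axis=1)
        lik_non = (1 - prev) * np.prod(p_non**Y * (1 - p_non)**(1 - Y), axis=1)
        w = lik_dis / (lik_dis + lik_non)
        # M-step: update prevalence, sensitivities and false-positive rates.
        prev = w.mean()
        p_dis = (w[:, None] * Y).sum(0) / w.sum()
        p_non = ((1 - w)[:, None] * Y).sum(0) / (1 - w).sum()
    return prev, p_dis, 1 - p_non      # prevalence, sensitivity, specificity

# Simulated data: 122 subjects, 3 binary diagnostic tests.
rng = np.random.default_rng(1)
disease = rng.random(122) < 0.4
sens, spec = np.array([0.9, 0.8, 0.85]), np.array([0.9, 0.85, 0.8])
Y = np.where(disease[:, None], rng.random((122, 3)) < sens,
             rng.random((122, 3)) < 1 - spec).astype(int)
print(lcm_em(Y))
```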
Case-Deletion Diagnostics for Nonlinear Structural Equation Models
ERIC Educational Resources Information Center
Lee, Sik-Yum; Lu, Bin
2003-01-01
In this article, a case-deletion procedure is proposed to detect influential observations in a nonlinear structural equation model. The key idea is to develop the diagnostic measures based on the conditional expectation of the complete-data log-likelihood function in the EM algorithm. A one-step pseudo approximation is proposed to reduce the…
A Generalized Partial Credit Model: Application of an EM Algorithm.
ERIC Educational Resources Information Center
Muraki, Eiji
1992-01-01
The partial credit model with a varying slope parameter is developed and called the generalized partial credit model (GPCM). Analysis results for simulated data by this and other polytomous item-response models demonstrate that the rating formulation of the GPCM is adaptable to the analysis of polytomous item responses. (SLD)
Using Latent Class Analysis to Model Temperament Types
ERIC Educational Resources Information Center
Loken, Eric
2004-01-01
Mixture models are appropriate for data that arise from a set of qualitatively different subpopulations. In this study, latent class analysis was applied to observational data from a laboratory assessment of infant temperament at four months of age. The EM algorithm was used to fit the models, and the Bayesian method of posterior predictive checks…
Generating Multiple Imputations for Matrix Sampling Data Analyzed with Item Response Models.
ERIC Educational Resources Information Center
Thomas, Neal; Gan, Nianci
1997-01-01
Describes and assesses missing data methods currently used to analyze data from matrix sampling designs implemented by the National Assessment of Educational Progress. Several improved methods are developed, and these models are evaluated using an EM algorithm to obtain maximum likelihood estimates followed by multiple imputation of complete data…
Locally Dependent Latent Trait Model and the Dutch Identity Revisited.
ERIC Educational Resources Information Center
Ip, Edward H.
2002-01-01
Proposes a class of locally dependent latent trait models for responses to psychological and educational tests. Focuses on models based on a family of conditional distributions, or kernel, that describes joint multiple item responses as a function of student latent trait, not assuming conditional independence. Also proposes an EM algorithm for…
NASA Astrophysics Data System (ADS)
Wong, Pak-kin; Vong, Chi-man; Wong, Hang-cheong; Li, Ke
2010-05-01
Modern automotive spark-ignition (SI) power performance usually refers to output power and torque, and these are significantly affected by the setup of control parameters in the engine management system (EMS). EMS calibration is done empirically through tests on the dynamometer (dyno) because no exact mathematical engine model is yet available. With the emerging nonlinear function estimation technique of least squares support vector machines (LS-SVM), an approximate power performance model of an SI engine can be determined by training on sample data acquired from the dyno. A novel incremental algorithm based on the typical LS-SVM is also proposed in this paper, so that the power performance models built with the incremental LS-SVM can be updated whenever new training data arrive. By updating the models, the model accuracy can be continuously increased. The predicted results using the models estimated with the incremental LS-SVM are in good agreement with the actual test results, with almost the same average accuracy as retraining the models from scratch, but the incremental algorithm significantly shortens the model construction time when new training data arrive.
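As background, standard (batch) LS-SVM regression reduces to solving one linear system in the dual variables. The sketch below shows this batch formulation with an RBF kernel on toy dyno-like data; the paper's incremental update scheme is not reproduced here, and all data and hyperparameters are illustrative assumptions.

```python
import numpy as np

def lssvm_fit(X, y, gamma=10.0, sigma=1.0):
    """Batch LS-SVM regression: solve [[0, 1^T], [1, K + I/gamma]] [b; alpha] = [0; y]."""
    n = len(y)
    sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    K = np.exp(-sq / (2 * sigma**2))                  # RBF kernel matrix
    A = np.zeros((n + 1, n + 1))
    A[0, 1:] = 1.0
    A[1:, 0] = 1.0
    A[1:, 1:] = K + np.eye(n) / gamma
    sol = np.linalg.solve(A, np.concatenate(([0.0], y)))
    return sol[0], sol[1:]                            # bias b, dual weights alpha

def lssvm_predict(X_train, b, alpha, X_new, sigma=1.0):
    sq = ((X_new[:, None, :] - X_train[None, :, :]) ** 2).sum(-1)
    return np.exp(-sq / (2 * sigma**2)) @ alpha + b

# Toy "engine map": torque as a function of two control parameters.
rng = np.random.default_rng(0)
X = rng.uniform(0, 1, (80, 2))
y = 100 + 40 * np.sin(3 * X[:, 0]) * X[:, 1] + rng.normal(0, 1.0, 80)
b, alpha = lssvm_fit(X, y)
print(lssvm_predict(X, b, alpha, X[:5]))
```

An incremental variant, as proposed in the paper, avoids re-solving this full system from scratch each time a new dyno sample arrives.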
Mismatch removal via coherent spatial relations
NASA Astrophysics Data System (ADS)
Chen, Jun; Ma, Jiayi; Yang, Changcai; Tian, Jinwen
2014-07-01
We propose a method for removing mismatches from the given putative point correspondences in image pairs based on "coherent spatial relations." Under the Bayesian framework, we formulate our approach as a maximum likelihood problem and solve for a coherent spatial relation between the putative point correspondences using an expectation-maximization (EM) algorithm. Our approach associates each point correspondence with a latent variable indicating whether it is an inlier or an outlier, and alternately estimates the inlier set and recovers the coherent spatial relation. It can handle not only the case of image pairs with rigid motions but also the case of image pairs with nonrigid motions. To parameterize the coherent spatial relation, we choose two-view geometry and the thin-plate spline as models for the rigid and nonrigid cases, respectively. The mismatches can be successfully removed via the coherent spatial relations after the EM algorithm converges. The quantitative results on various experimental data demonstrate that our method outperforms many state-of-the-art methods, is not affected by low initial correct-match percentages, and is robust to most geometric transformations, including large viewing angles, image rotation, and affine transformation.
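The latent inlier/outlier idea can be illustrated with a much simpler one-dimensional version: an EM fit of a Gaussian-inlier/uniform-outlier mixture to correspondence residuals. The actual method additionally estimates the two-view geometry or thin-plate-spline transform; in this sketch the transform is assumed fixed, and the residuals and outlier range are toy assumptions.

```python
import numpy as np

def em_inlier_outlier(r, outlier_range=50.0, n_iter=50):
    """EM for residuals r: inliers ~ zero-mean Gaussian (unknown sigma^2),
    outliers ~ uniform over a region of width 'outlier_range'.
    Returns the posterior inlier probability of each correspondence."""
    sigma2, pi = np.var(r) / 4, 0.5            # crude initialisation
    for _ in range(n_iter):
        # E-step: posterior probability that each correspondence is an inlier.
        g = pi * np.exp(-r**2 / (2 * sigma2)) / np.sqrt(2 * np.pi * sigma2)
        u = (1 - pi) / outlier_range
        w = g / (g + u)
        # M-step: update inlier variance and mixing proportion.
        sigma2 = (w * r**2).sum() / w.sum()
        pi = w.mean()
    return w

rng = np.random.default_rng(0)
residuals = np.concatenate([rng.normal(0, 1.0, 80),          # inliers
                            rng.uniform(-25.0, 25.0, 20)])   # mismatches
posterior = em_inlier_outlier(residuals)
print("estimated inliers:", int((posterior > 0.5).sum()))
```

Thresholding the posterior at 0.5 plays the role of the final mismatch-removal step once the EM iterations have converged.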
Robb, Matthew L; Böhning, Dankmar
2011-02-01
Capture–recapture techniques have been used for considerable time to predict population size. Estimators usually rely on frequency counts for numbers of trappings; however, it may be the case that these are not available for a particular problem, for example if the original data set has been lost and only a summary table is available. Here, we investigate techniques for specific examples; the motivating example is an epidemiology study by Mosley et al., which focussed on a cholera outbreak in East Pakistan. To demonstrate the wider range of the technique, we also look at a study for predicting the long-term outlook of the AIDS epidemic using information on number of sexual partners. A new estimator is developed here which uses the EM algorithm to impute unobserved values and then uses these values in a similar way to the existing estimators. The results show that a truncated approach – mimicking the Chao lower bound approach – gives an improved estimate when population homogeneity is violated.
Fully implicit Particle-in-cell algorithms for multiscale plasma simulation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chacon, Luis
The outline of the paper is as follows: particle-in-cell (PIC) methods for fully ionized collisionless plasmas, explicit vs. implicit PIC, 1D ES implicit PIC (charge and energy conservation, moment-based acceleration), and generalization to multi-D EM PIC with the Vlasov-Darwin model (review and motivation for the Darwin model, conservation properties (energy, charge, and canonical momenta), and numerical benchmarks). The author demonstrates a fully implicit, fully nonlinear, multidimensional PIC formulation that features exact local charge conservation (via a novel particle mover strategy), exact global energy conservation (no particle self-heating or self-cooling), an adaptive particle orbit integrator to control errors in momentum conservation, and conservation of canonical momenta (EM-PIC only, reduced dimensionality). The approach is free of numerical instabilities: ω_pe Δt >> 1 and Δx >> λ_D. It requires many fewer degrees of freedom (vs. explicit PIC) for comparable accuracy in challenging problems. Significant CPU gains (vs. explicit PIC) have been demonstrated. The method has much potential for efficiency gains vs. explicit PIC in long-time-scale applications. Moment-based acceleration is effective in minimizing N_FE, leading to an optimal algorithm.
Acceleration of the direct reconstruction of linear parametric images using nested algorithms.
Wang, Guobao; Qi, Jinyi
2010-03-07
Parametric imaging using dynamic positron emission tomography (PET) provides important information for biological research and clinical diagnosis. Indirect and direct methods have been developed for reconstructing linear parametric images from dynamic PET data. Indirect methods are relatively simple and easy to implement because the image reconstruction and kinetic modeling are performed in two separate steps. Direct methods estimate parametric images directly from raw PET data and are statistically more efficient. However, the convergence rate of direct algorithms can be slow due to the coupling between the reconstruction and kinetic modeling. Here we present two fast gradient-type algorithms for direct reconstruction of linear parametric images. The new algorithms decouple the reconstruction and linear parametric modeling at each iteration by employing the principle of optimization transfer. Convergence speed is accelerated by running more sub-iterations of linear parametric estimation because the computation cost of the linear parametric modeling is much less than that of the image reconstruction. Computer simulation studies demonstrated that the new algorithms converge much faster than the traditional expectation maximization (EM) and the preconditioned conjugate gradient algorithms for dynamic PET.
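The decoupling idea can be sketched schematically: each outer iteration performs an EM-style update of the dynamic frames, followed by cheap per-voxel linear sub-fits of the parametric coefficients to those intermediate images. This is an illustration of the principle under toy assumptions (random system matrix, generic temporal basis, plain least-squares sub-step), not the authors' optimization-transfer algorithms.

```python
import numpy as np

rng = np.random.default_rng(0)
n_vox, n_bins, n_frames, n_par = 64, 96, 6, 2

A = rng.random((n_bins, n_vox)) * 0.1          # toy system (projection) matrix
B = np.abs(rng.random((n_frames, n_par)))      # temporal basis (e.g. Patlak-like)
theta_true = rng.random((n_vox, n_par))
y = rng.poisson(A @ (theta_true @ B.T) + 0.1)  # dynamic sinogram, one column per frame

theta = np.ones((n_vox, n_par))
sens = A.sum(0)                                # sensitivity image
for outer in range(20):
    x = theta @ B.T                            # current dynamic images (n_vox x n_frames)
    # EM-style image update for each frame (the expensive reconstruction step).
    x_em = x * (A.T @ (y / (A @ x + 1e-9))) / sens[:, None]
    # Nested step: per-voxel linear fit of the temporal basis to the intermediate
    # EM images (cheap, so in practice it can be iterated more often).
    theta, *_ = np.linalg.lstsq(B, x_em.T, rcond=None)
    theta = np.maximum(theta.T, 0)             # keep parametric images non-negative

print("parametric image RMSE:", np.sqrt(np.mean((theta - theta_true) ** 2)))
```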
Devarajan, Karthik; Cheung, Vincent C.K.
2017-01-01
Non-negative matrix factorization (NMF) by the multiplicative updates algorithm is a powerful machine learning method for decomposing a high-dimensional non-negative matrix V into two non-negative matrices, W and H, where V ≈ WH. It has been successfully applied in the analysis and interpretation of large-scale data arising in neuroscience, computational biology and natural language processing, among other areas. A distinctive feature of NMF is its non-negativity constraints that allow only additive linear combinations of the data, thus enabling it to learn parts that have distinct physical representations in reality. In this paper, we describe an information-theoretic approach to NMF for signal-dependent noise based on the generalized inverse Gaussian model. Specifically, we propose three novel algorithms in this setting, each based on multiplicative updates, and prove monotonicity of the updates using the EM algorithm. In addition, we develop algorithm-specific measures to evaluate their goodness-of-fit on data. Our methods are demonstrated using experimental data from electromyography studies as well as simulated data in the extraction of muscle synergies, and are compared with existing algorithms for signal-dependent noise. PMID:24684448
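For orientation, the sketch below shows the classical multiplicative-update NMF for the Euclidean cost ||V - WH||, in the spirit of Lee and Seung; the paper's algorithms replace this cost with divergences derived from a generalized inverse Gaussian, signal-dependent noise model, which this sketch does not attempt to reproduce. The toy EMG-like data are an assumption for illustration.

```python
import numpy as np

def nmf_multiplicative(V, rank, n_iter=500, eps=1e-9, seed=0):
    """Multiplicative-update NMF minimizing ||V - WH||_F^2 with V, W, H >= 0."""
    rng = np.random.default_rng(seed)
    n, m = V.shape
    W = rng.random((n, rank))
    H = rng.random((rank, m))
    for _ in range(n_iter):
        H *= (W.T @ V) / (W.T @ W @ H + eps)   # update H, keeping non-negativity
        W *= (V @ H.T) / (W @ H @ H.T + eps)   # update W, keeping non-negativity
    return W, H

# Toy EMG-like data: 20 channels x 200 time samples, 3 underlying "synergies".
rng = np.random.default_rng(1)
V = rng.random((20, 3)) @ rng.random((3, 200)) + 0.01 * rng.random((20, 200))
W, H = nmf_multiplicative(V, rank=3)
print("relative error:", np.linalg.norm(V - W @ H) / np.linalg.norm(V))
```

In the muscle-synergy application, the columns of W are interpreted as synergies (spatial muscle weightings) and the rows of H as their activation time courses.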
EM reconstruction of dual isotope PET using staggered injections and prompt gamma positron emitters
Andreyev, Andriy; Sitek, Arkadiusz; Celler, Anna
2014-01-01
Purpose: The aim of dual isotope positron emission tomography (DIPET) is to create two separate images of two coinjected PET radiotracers. DIPET shortens the duration of the study, reduces patient discomfort, and produces perfectly coregistered images compared to the case when two radiotracers would be imaged independently (sequential PET studies). Reconstruction of data from such simultaneous acquisition of two PET radiotracers is difficult because positron decay of any isotope creates only 511 keV photons; therefore, the isotopes cannot be differentiated based on the detected energy. Methods: Recently, the authors have proposed a DIPET technique that uses a combination of radiotracer A which is a pure positron emitter (such as 18F or 11C) and radiotracer B in which positron decay is accompanied by the emission of a high-energy (HE) prompt gamma (such as 38K or 60Cu). Events that are detected as triple coincidences of HE gammas with the corresponding two 511 keV photons allow the authors to identify the lines-of-response (LORs) of isotope B. These LORs are used to separate the two intertwined distributions, using a dedicated image reconstruction algorithm. In this work the authors propose a new version of the DIPET EM-based reconstruction algorithm that allows the authors to include an additional, independent estimate of radiotracer A distribution which may be obtained if radioisotopes are administered using a staggered injections method. In this work the method is tested on simple simulations of static PET acquisitions. Results: The authors’ experiments performed using Monte-Carlo simulations with static acquisitions demonstrate that the combined method provides better results (crosstalk errors decrease by up to 50%) than the positron-gamma DIPET method or staggered injections alone. Conclusions: The authors demonstrate that the authors’ new EM algorithm which combines information from triple coincidences with prompt gammas and staggered injections improves the accuracy of DIPET reconstructions for static acquisitions so they reach almost the benchmark level calculated for perfectly separated tracers. PMID:24506645
Differential correlation for sequencing data.
Siska, Charlotte; Kechris, Katerina
2017-01-19
Several methods have been developed to identify differential correlation (DC) between pairs of molecular features from -omics studies. Most DC methods have only been tested with microarrays and other platforms producing continuous and Gaussian-like data. Sequencing data is in the form of counts, often modeled with a negative binomial distribution making it difficult to apply standard correlation metrics. We have developed an R package for identifying DC called Discordant which uses mixture models for correlations between features and the Expectation Maximization (EM) algorithm for fitting parameters of the mixture model. Several correlation metrics for sequencing data are provided and tested using simulations. Other extensions in the Discordant package include additional modeling for different types of differential correlation, and faster implementation, using a subsampling routine to reduce run-time and address the assumption of independence between molecular feature pairs. With simulations and breast cancer miRNA-Seq and RNA-Seq data, we find that Spearman's correlation has the best performance among the tested correlation methods for identifying differential correlation. Application of Spearman's correlation in the Discordant method demonstrated the most power in ROC curves and sensitivity/specificity plots, and improved ability to identify experimentally validated breast cancer miRNA. We also considered including additional types of differential correlation, which showed a slight reduction in power due to the additional parameters that need to be estimated, but more versatility in applications. Finally, subsampling within the EM algorithm considerably decreased run-time with negligible effect on performance. A new method and R package called Discordant is presented for identifying differential correlation with sequencing data. Based on comparisons with different correlation metrics, this study suggests Spearman's correlation is appropriate for sequencing data, but other correlation metrics are available to the user depending on the application and data type. The Discordant method can also be extended to investigate additional DC types and subsampling with the EM algorithm is now available for reduced run-time. These extensions to the R package make Discordant more robust and versatile for multiple -omics studies.
NASA Astrophysics Data System (ADS)
Zhou, Ya-Tong; Fan, Yu; Chen, Zi-Yi; Sun, Jian-Cheng
2017-05-01
The contribution of this work is twofold: (1) a multimodality prediction method of chaotic time series with the Gaussian process mixture (GPM) model is proposed, which employs a divide and conquer strategy. It automatically divides the chaotic time series into multiple modalities with different extrinsic patterns and intrinsic characteristics, and thus can more precisely fit the chaotic time series. (2) An effective sparse hard-cut expectation maximization (SHC-EM) learning algorithm for the GPM model is proposed to improve the prediction performance. SHC-EM replaces a large learning sample set with fewer pseudo inputs, accelerating model learning based on these pseudo inputs. Experiments on Lorenz and Chua time series demonstrate that the proposed method yields not only accurate multimodality prediction, but also the prediction confidence interval. SHC-EM outperforms the traditional variational learning in terms of both prediction accuracy and speed. In addition, SHC-EM is more robust and insusceptible to noise than variational learning. Supported by the National Natural Science Foundation of China under Grant No 60972106, the China Postdoctoral Science Foundation under Grant No 2014M561053, the Humanity and Social Science Foundation of Ministry of Education of China under Grant No 15YJA630108, and the Hebei Province Natural Science Foundation under Grant No E2016202341.
3D time-domain airborne EM modeling for an arbitrarily anisotropic earth
NASA Astrophysics Data System (ADS)
Yin, Changchun; Qi, Yanfu; Liu, Yunhe
2016-08-01
Time-domain airborne EM data are currently interpreted based on an isotropic model. This can be problematic when working in regions with distinct dipping stratifications. In this paper, we simulate 3D time-domain airborne EM responses over an arbitrarily anisotropic earth with topography using an edge-based finite-element method. Tetrahedral meshes are used to describe anomalous bodies with complicated shapes. We further adopt the backward Euler scheme to discretize the time-domain diffusion equation for the electric field, obtaining an unconditionally stable linear system of equations. We verify the accuracy of our 3D algorithm by comparing with 1D solutions for an anisotropic half-space. We then turn our attention to the effects of anisotropic media on the strengths and the diffusion patterns of time-domain airborne EM responses. For the numerical experiments, we adopt three typical anisotropic models: 1) an anisotropic anomalous body embedded in an isotropic half-space; 2) an isotropic anomalous body embedded in an anisotropic half-space; 3) an anisotropic half-space with topography. The modeling results show that the electric anisotropy of the subsurface media has a strong effect on both the strengths and the distribution patterns of time-domain airborne EM responses; this effect needs to be taken into account when interpreting ATEM data in areas with distinct anisotropy.
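The unconditional stability of the backward Euler discretization can be illustrated with a 1D diffusion analogue; this is a toy finite-difference sketch with made-up diffusivities, not the paper's 3D anisotropic finite-element implementation.

```python
import numpy as np

# 1D analogue of an implicit (backward Euler) step for a diffusion equation
# du/dt = d/dx( a(x) du/dx ): at each step solve (I - dt*L) u_new = u_old,
# which remains stable regardless of the time-step size dt.
nx, dx, dt = 200, 1.0, 50.0
a = np.full(nx, 0.01)                       # toy diffusivity profile
a[nx // 2:] = 0.05                          # a more diffusive lower half-space

# Assemble the finite-difference operator L for variable a(x) (zero Dirichlet ends).
L = np.zeros((nx, nx))
for i in range(1, nx - 1):
    aw, ae = 0.5 * (a[i - 1] + a[i]), 0.5 * (a[i] + a[i + 1])
    L[i, i - 1], L[i, i], L[i, i + 1] = aw / dx**2, -(aw + ae) / dx**2, ae / dx**2

u = np.zeros(nx)
u[nx // 4] = 1.0                            # impulsive source (crude transmitter shut-off)
A = np.eye(nx) - dt * L                     # backward Euler system matrix
for _ in range(100):
    u = np.linalg.solve(A, u)               # one unconditionally stable time step
print("peak response after 100 steps:", u.max())
```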
Newgard, Craig D.; Nelson, Maria J.; Kampp, Michael; Saha, Somnath; Zive, Dana; Schmidt, Terri; Daya, Mohamud; Jui, Jonathan; Wittwer, Lynn; Warden, Craig; Sahni, Ritu; Stevens, Mark; Gorman, Kyle; Koenig, Karl; Gubler, Dean; Rosteck, Pontine; Lee, Jan; Hedges, Jerris R.
2011-01-01
Background The decision-making processes used for out-of-hospital trauma triage and hospital selection in regionalized trauma systems remain poorly understood. The objective of this study was to understand the process of field triage decision-making in an established trauma system. Methods We used a mixed methods approach, including EMS records to quantify triage decisions and reasons for hospital selection in a population-based, injury cohort (2006 - 2008), plus a focused ethnography to understand EMS cognitive reasoning in making triage decisions. The study included 10 EMS agencies providing service to a 4-county regional trauma system with 3 trauma centers and 13 non-trauma hospitals. For qualitative analyses, we conducted field observation and interviews with 35 EMS field providers and a round-table discussion with 40 EMS management personnel to generate an empirical model of out-of-hospital decision making in trauma triage. Results 64,190 injured patients were evaluated by EMS, of whom 56,444 (88.0%) were transported to acute care hospitals and 9,637 (17.1% of transports) were field trauma activations. For non-trauma activations, patient/family preference and proximity accounted for 78% of destination decisions. EMS provider judgment was cited in 36% of field trauma activations and was the sole criterion in 23% of trauma patients. The empirical model demonstrated that trauma triage is driven primarily by EMS provider “gut feeling” (judgment) and relies heavily on provider experience, mechanism of injury, and early visual cues at the scene. Conclusions Provider cognitive reasoning for field trauma triage is more heuristic than algorithmic and driven primarily by provider judgment, rather than specific triage criteria. PMID:21817971
An implementation of the NiftyRec medical imaging library for PIXE-tomography reconstruction
NASA Astrophysics Data System (ADS)
Michelet, C.; Barberet, P.; Desbarats, P.; Giovannelli, J.-F.; Schou, C.; Chebil, I.; Delville, M.-H.; Gordillo, N.; Beasley, D. G.; Devès, G.; Moretto, P.; Seznec, H.
2017-08-01
A new development of the TomoRebuild software package is presented, including "thick sample" correction for non-linear X-ray production (NLXP) and X-ray absorption (XA). As in the previous versions, C++ programming with standard libraries was used for easier portability. Data reduction requires different steps which may be run either from a command line instruction or via a user-friendly interface, developed as a portable Java plugin in ImageJ. All experimental and reconstruction parameters can be easily modified, either directly in the ASCII parameter files or via the ImageJ interface. A detailed user guide in English is provided. Sinograms and final reconstructed images are generated in common binary formats that can be read by most public domain graphics software. New MLEM and OSEM methods are proposed, using optimized methods from the NiftyRec medical imaging library. An overview of the different medical imaging methods that have been used for ion beam microtomography applications is presented. In TomoRebuild, PIXET data reduction is performed for each chemical element independently and separately from STIMT, except for two steps where the fusion of STIMT and PIXET data is required: the calculation of the correction matrix and the normalization of PIXET data to obtain mass fraction distributions. Correction matrices for NLXP and XA are calculated using procedures extracted from the DISRA code, taking into account a large X-ray detection solid angle. For this, the 3D STIMT mass density distribution is used, assuming a homogeneous global composition. A first example of a PIXET experiment using two detectors is presented. Reconstruction results are compared and found to be in good agreement between the different codes: FBP, the NiftyRec MLEM and OSEM of the TomoRebuild software package, the original DISRA, its accelerated version provided in JPIXET, and the accelerated MLEM version of JPIXET, with or without correction.
Machine-learning model observer for detection and localization tasks in clinical SPECT-MPI
NASA Astrophysics Data System (ADS)
Parages, Felipe M.; O'Connor, J. Michael; Pretorius, P. Hendrik; Brankov, Jovan G.
2016-03-01
In this work we propose a machine-learning MO based on Naive-Bayes classification (NB-MO) for the diagnostic tasks of detection, localization and assessment of perfusion defects in clinical SPECT Myocardial Perfusion Imaging (MPI), with the goal of evaluating several image reconstruction methods used in clinical practice. NB-MO uses image features extracted from polar-maps in order to predict lesion detection, localization and severity scores given by human readers in a series of 3D SPECT-MPI. The population used to tune (i.e. train) the NB-MO consisted of simulated SPECT-MPI cases - divided into normals or with lesions in variable sizes and locations - reconstructed using filtered backprojection (FBP) method. An ensemble of five human specialists (physicians) read a subset of simulated reconstructed images, and assigned a perfusion score for each region of the left-ventricle (LV). Polar-maps generated from the simulated volumes along with their corresponding human scores were used to train five NB-MOs (one per human reader), which are subsequently applied (i.e. tested) on three sets of clinical SPECT-MPI polar maps, in order to predict human detection and localization scores. The clinical "testing" population comprises healthy individuals and patients suffering from coronary artery disease (CAD) in three possible regions, namely: LAD, LcX and RCA. Each clinical case was reconstructed using three reconstruction strategies, namely: FBP with no SC (i.e. scatter compensation), OSEM with Triple Energy Window (TEW) SC method, and OSEM with Effective Source Scatter Estimation (ESSE) SC. Alternative Free-Response (AFROC) analysis of perfusion scores shows that NB-MO predicts a higher human performance for scatter-compensated reconstructions, in agreement with what has been reported in published literature. These results suggest that NB-MO has good potential to generalize well to reconstruction methods not used during training, even for reasonably dissimilar datasets (i.e. simulated vs. clinical).
2014-01-01
Background The Amberg-Schwandorf Algorithm for Primary Triage (ASAV) is a novel primary triage concept specifically for physician manned emergency medical services (EMS) systems. In this study, we determined the diagnostic reliability and the time requirements of ASAV triage. Methods Seven hundred eighty triage runs performed by 76 trained EMS providers of varying professional qualification were included into the study. Patients were simulated using human dummies with written vital signs sheets. Triage results were compared to a standard solution, which was developed in a modified Delphi procedure. Test performance parameters (e.g. sensitivity, specificity, likelihood ratios (LR), under-triage, and over-triage) were calculated. Time measurements comprised the complete triage and tagging process and included the time span for walking to the subsequent patient. Results were compared to those published for mSTaRT. Additionally, a subgroup analysis was performed for employment status (career/volunteer), team qualification, and previous triage training. Results For red patients, ASAV sensitivity was 87%, specificity 91%, positive LR 9.7, negative LR 0.139, over-triage 6%, and under-triage 10%. There were no significant differences related to mSTaRT. Per patient, ASAV triage required a mean of 35.4 sec (75th percentile 46 sec, 90th percentile 58 sec). Volunteers needed slightly more time to perform triage than EMS professionals. Previous mSTaRT training of the provider reduced under-triage significantly. There were significant differences in time requirements for triage depending on the expected triage category. Conclusions The ASAV is a specific concept for primary triage in physician governed EMS systems. It may detect red patients reliably. The test performance criteria are comparable to that of mSTaRT, whereas ASAV triage might be accomplished slightly faster. From the data, there was no evidence for a clinically significant reliability difference between typical staffing of mobile intensive care units, patient transport ambulances, or disaster response volunteers. Up to now, there is no clinical validation of either triage concept. Therefore, reality based evaluation studies are needed. PMID:25214310
Community Detection Algorithm Combining Stochastic Block Model and Attribute Data Clustering
NASA Astrophysics Data System (ADS)
Kataoka, Shun; Kobayashi, Takuto; Yasuda, Muneki; Tanaka, Kazuyuki
2016-11-01
We propose a new algorithm to detect the community structure in a network that utilizes both the network structure and vertex attribute data. Suppose we have the network structure together with the vertex attribute data, that is, the information assigned to each vertex associated with the community to which it belongs. The problem addressed in this paper is the detection of the community structure from the information of both the network structure and the vertex attribute data. Our approach is based on the Bayesian approach that models the posterior probability distribution of the community labels. The detection of the community structure in our method is achieved by using belief propagation and an EM algorithm. We numerically verified the performance of our method using computer-generated networks and real-world networks.
SU-F-J-08: Quantitative SPECT Imaging of Ra-223 in a Phantom
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yue, J; Hobbs, R; Sgouros, G
Purpose: Ra-223 therapy of prostate cancer bone metastases is being used to treat patients routinely. However, the absorbed dose distribution at the macroscopic and microscopic scales remains elusive, due to the inability to image the small activities injected. Accurate activity quantification through imaging is essential to calculate the absorbed dose in organs and sub-units in radiopharmaceutical therapy, enabling personalized absorbed dose-based treatment planning methodologies and more effective and optimal treatments. Methods: A 22 cm diameter by 20 cm long cylindrical phantom, containing a 3.52 cm diameter sphere, was used. A total of 2.01 MBq of Ra-223 was placed in the phantom, with 177.6 kBq in the sphere. Images were acquired on a dual-head Siemens Symbia T16 gamma camera using three 20% full-width energy windows centered at 84, 154, and 269 keV (120 projections, 360° rotation, 45 s per view). We have implemented reconstruction of Ra-223 SPECT projections using OS-EM (up to 20 iterations of 10 subsets) with compensation for attenuation using CT-based attenuation maps, collimator-detector response (CDR) (including septal penetration, scatter and Pb x-ray modeling), and scatter in the patient using the effective source scatter estimation (ESSE) method. The CDR functions and scatter kernels required for ESSE were computed using the SIMIND MC simulation code. All Ra-223 photon emissions as well as gamma rays from the daughters Rn-219 and Bi-211 were modeled. Results: The sensitivity of the camera in the three combined windows was 107.3 cps/MBq. The visual quality of the SPECT images was reasonably good and the activity in the sphere was 27% smaller than the true activity. This underestimation is likely due to the partial volume effect. Conclusion: Absolute quantitative Ra-223 SPECT imaging is achievable with careful attention to compensation for image-degrading factors and system calibration.
2D evaluation of spectral LIBS data derived from heterogeneous materials using cluster algorithm
NASA Astrophysics Data System (ADS)
Gottlieb, C.; Millar, S.; Grothe, S.; Wilsch, G.
2017-08-01
Laser-induced breakdown spectroscopy (LIBS) is capable of providing spatially resolved element maps of the chemical composition of a sample. The evaluation of heterogeneous materials is often a challenging task, especially in the case of phase boundaries. In order to determine information about a certain phase of a material, a method that offers an objective evaluation is necessary. This paper introduces a cluster algorithm for heterogeneous building materials (concrete) to separate the spectral information of non-relevant aggregates and the cement matrix. In civil engineering, information about the quantitative ingress of harmful species like Cl-, Na+ and SO42- is of great interest for evaluating the remaining lifetime of structures (Millar et al., 2015; Wilsch et al., 2005). These species trigger different damage processes such as the alkali-silica reaction (ASR) or the chloride-induced corrosion of the reinforcement. Therefore, a discrimination between the different phases, mainly cement matrix and aggregates, is highly important (Weritz et al., 2006). For the 2D evaluation, the expectation-maximization algorithm (EM algorithm; Ester and Sander, 2000) has been tested for the application presented in this work. The method is introduced and different figures of merit are presented according to the recommendations given in Haddad et al. (2014). Advantages of this method are highlighted. After phase separation, non-relevant information can be excluded and only the wanted phase displayed. Using a set of samples with known and unknown composition, the EM clustering method has been validated following Gustavo González and Ángeles Herrador (2007).
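A minimal sketch of such an EM-based phase separation is shown below, using scikit-learn's Gaussian mixture model on per-spot spectral features; the feature choice (intensities of a few element lines) and the two-phase assumption are illustrative, not the authors' exact pipeline.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# Hypothetical LIBS map: measurement spots described by the intensities of a few
# characteristic element lines (e.g. Ca, Si, Na, Cl), in arbitrary units.
rng = np.random.default_rng(0)
cement = rng.normal([5.0, 1.0, 0.8, 0.6], 0.3, size=(1500, 4))     # cement-matrix-like spots
aggregate = rng.normal([1.0, 5.0, 0.2, 0.1], 0.3, size=(1000, 4))  # aggregate-like spots
features = np.vstack([cement, aggregate])

# EM clustering into two phases (cement matrix vs. aggregate).
gmm = GaussianMixture(n_components=2, covariance_type='full', random_state=0)
labels = gmm.fit_predict(features)

# Keep only the phase of interest (e.g. the cement matrix) for further evaluation.
matrix_label = int(np.argmax(gmm.means_[:, 0]))   # phase with the higher Ca-line intensity
matrix_spots = features[labels == matrix_label]
print("spots assigned to cement matrix:", len(matrix_spots))
```

Once each spot is assigned to a phase, chlorine or sodium maps can be evaluated over the cement-matrix spots only, excluding the non-relevant aggregate signal.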
A Fast Multiple-Kernel Method With Applications to Detect Gene-Environment Interaction.
Marceau, Rachel; Lu, Wenbin; Holloway, Shannon; Sale, Michèle M; Worrall, Bradford B; Williams, Stephen R; Hsu, Fang-Chi; Tzeng, Jung-Ying
2015-09-01
Kernel machine (KM) models are a powerful tool for exploring associations between sets of genetic variants and complex traits. Although most KM methods use a single kernel function to assess the marginal effect of a variable set, KM analyses involving multiple kernels have become increasingly popular. Multikernel analysis allows researchers to study more complex problems, such as assessing gene-gene or gene-environment interactions, incorporating variance-component based methods for population substructure into rare-variant association testing, and assessing the conditional effects of a variable set adjusting for other variable sets. The KM framework is robust, powerful, and provides efficient dimension reduction for multifactor analyses, but requires the estimation of high dimensional nuisance parameters. Traditional estimation techniques, including regularization and the "expectation-maximization (EM)" algorithm, have a large computational cost and are not scalable to large sample sizes needed for rare variant analysis. Therefore, under the context of gene-environment interaction, we propose a computationally efficient and statistically rigorous "fastKM" algorithm for multikernel analysis that is based on a low-rank approximation to the nuisance effect kernel matrices. Our algorithm is applicable to various trait types (e.g., continuous, binary, and survival traits) and can be implemented using any existing single-kernel analysis software. Through extensive simulation studies, we show that our algorithm has similar performance to an EM-based KM approach for quantitative traits while running much faster. We also apply our method to the Vitamin Intervention for Stroke Prevention (VISP) clinical trial, examining gene-by-vitamin effects on recurrent stroke risk and gene-by-age effects on change in homocysteine level. © 2015 WILEY PERIODICALS, INC.
Fault Identification by Unsupervised Learning Algorithm
NASA Astrophysics Data System (ADS)
Nandan, S.; Mannu, U.
2012-12-01
Contemporary fault identification techniques predominantly rely on the surface expression of the fault. This biased observation is inadequate to yield detailed fault structures in areas with surface cover such as cities, deserts and vegetation, and cannot capture changes in fault patterns with depth. Furthermore, it is difficult to estimate fault structures which do not generate any surface rupture. Many disastrous events have been attributed to these blind faults. Faults and earthquakes are very closely related, as earthquakes occur on faults and faults grow by accumulation of coseismic rupture. For a better seismic risk evaluation, it is imperative to recognize and map these faults. We implement a novel approach to identify seismically active fault planes from the three-dimensional hypocenter distribution by making use of unsupervised learning algorithms. We employ the K-means clustering algorithm and the expectation maximization (EM) algorithm, modified to identify planar structures in the spatial distribution of hypocenters after filtering out isolated events. We examine the differences between the faults reconstructed by deterministic assignment in K-means and by probabilistic assignment in the EM algorithm. The method is conceptually identical to methodologies developed by Ouillion et al. (2008, 2010) and has been extensively tested on synthetic data. We determined the sensitivity of the methodology to uncertainties in hypocenter location, density of clustering and cross-cutting fault structures. The method has been applied to datasets from two contrasting regions. While Kumaon Himalaya is a convergent plate boundary, Koyna-Warna lies in the middle of the Indian Plate but has a history of triggered seismicity. The reconstructed faults were validated by examining the orientation of mapped faults and the focal mechanisms of these events determined through waveform inversion. The reconstructed faults could be used to resolve the fault plane ambiguity in focal mechanism determination and to constrain fault orientations for finite source inversions. The faults produced by the method exhibited good correlation with the fault planes obtained from focal mechanism solutions and previously mapped faults.
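A bare-bones version of the clustering step is sketched below with scikit-learn's K-means on synthetic hypocenters; fitting a plane to each cluster by PCA stands in for the planar-structure modification described above, and the synthetic fault geometry is purely illustrative.

```python
import numpy as np
from sklearn.cluster import KMeans

# Synthetic hypocenters scattered around two planar fault segments.
rng = np.random.default_rng(0)

def fault_plane(n, origin, strike_vec, dip_vec, noise=0.2):
    u, v = rng.uniform(-5, 5, (2, n))
    pts = origin + u[:, None] * strike_vec + v[:, None] * dip_vec
    return pts + rng.normal(0, noise, pts.shape)

hypocenters = np.vstack([
    fault_plane(300, np.array([0.0, 0.0, 10.0]), np.array([1.0, 0, 0]), np.array([0, 0.5, 1.0])),
    fault_plane(300, np.array([8.0, 3.0, 12.0]), np.array([0, 1.0, 0]), np.array([0.7, 0, 1.0])),
])

# Hard (deterministic) assignment of events to clusters.
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(hypocenters)

# Estimate each cluster's fault-plane normal as the direction of least variance (PCA).
for k in range(2):
    pts = hypocenters[labels == k]
    _, _, vt = np.linalg.svd(pts - pts.mean(0))
    print(f"cluster {k}: {len(pts)} events, plane normal ~ {vt[-1].round(2)}")
```

An EM (Gaussian mixture) variant replaces the hard K-means assignment with posterior membership probabilities, which is the probabilistic assignment contrasted in the abstract.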
Multi-period project portfolio selection under risk considerations and stochastic income
NASA Astrophysics Data System (ADS)
Tofighian, Ali Asghar; Moezzi, Hamid; Khakzar Barfuei, Morteza; Shafiee, Mahmood
2018-02-01
This paper deals with the multi-period project portfolio selection problem. In this problem, the available budget is invested in the best portfolio of projects in each period such that the net profit is maximized. We also consider more realistic assumptions to cover a wider range of applications than those reported in previous studies. A novel mathematical model is presented to solve the problem, considering risks, stochastic incomes, and the possibility of investing extra budget in each time period. Due to the complexity of the problem, an effective meta-heuristic method hybridized with a local search procedure is presented to solve the problem. The algorithm is based on the genetic algorithm (GA), which is a prominent method for solving this type of problem. The GA is enhanced by a new solution representation and well-selected operators. It is also hybridized with a local search mechanism to obtain better solutions in a shorter time. The performance of the proposed algorithm is then compared with well-known algorithms, such as the basic genetic algorithm (GA), particle swarm optimization (PSO), and the electromagnetism-like algorithm (EM-like), by means of some prominent indicators. The computational results show the superiority of the proposed algorithm in terms of accuracy, robustness and computation time. Finally, the proposed algorithm is combined with PSO to improve the computation time considerably.
Control algorithms for dynamic windows for residential buildings
Firlag, Szymon; Yazdanian, Mehrangiz; Curcija, Charlie; ...
2015-09-30
This study analyzes the influence of control algorithms for dynamic windows on energy consumption, the number of hours of retracted shades during daylight, and shade operations. Five different control algorithms - heating/cooling, simple rules, perfect citizen, heat flow and predictive weather - were developed and compared. The performance of a typical residential building was modeled with EnergyPlus. The program Window was used to generate a Bi-Directional Scattering Distribution Function (BSDF) for two window configurations. The BSDF was exported to EnergyPlus using the IDF file format. The EMS feature in EnergyPlus was used to develop custom control algorithms. The calculations were made for four locations with diverse climates. The results showed that: (a) use of automated shading with the proposed control algorithms can reduce site energy by 11.6-13.0% and source (primary) energy by 20.1-21.6%; (b) the differences between algorithms in regard to energy savings are not large; (c) the differences between algorithms in regard to the number of hours of retracted shades are visible; (d) the control algorithms have a strong influence on shade operation, and oscillation of the shades can occur; and (e) the additional energy consumption caused by the motor, sensors and a small microprocessor in the analyzed case is very small.
Estimation of Item Parameters and the GEM Algorithm.
ERIC Educational Resources Information Center
Tsutakawa, Robert K.
The models and procedures discussed in this paper are related to those presented in Bock and Aitkin (1981), where they considered the 2-parameter probit model and approximated a normally distributed prior distribution of abilities by a finite and discrete distribution. One purpose of this paper is to clarify the nature of the general EM (GEM)…
ERIC Educational Resources Information Center
von Davier, Matthias; Sinharay, Sandip
2009-01-01
This paper presents an application of a stochastic approximation EM-algorithm using a Metropolis-Hastings sampler to estimate the parameters of an item response latent regression model. Latent regression models are extensions of item response theory (IRT) to a 2-level latent variable model in which covariates serve as predictors of the…
On the Latent Regression Model of Item Response Theory. Research Report. ETS RR-07-12
ERIC Educational Resources Information Center
Antal, Tamás
2007-01-01
Full account of the latent regression model for the National Assessment of Educational Progress is given. The treatment includes derivation of the EM algorithm, Newton-Raphson method, and the asymptotic standard errors. The paper also features the use of the adaptive Gauss-Hermite numerical integration method as a basic tool to evaluate…
MicroPET/CT Colonoscopy in long-lived Min mouse using NM404
NASA Astrophysics Data System (ADS)
Christensen, Matthew B.; Halberg, Richard B.; Schutten, Melissa M.; Weichert, Jamey P.
2009-02-01
Colon cancer is a leading cause of death in the US, even though many cases are preventable if tumors are detected early. One technique to promote screening is computed tomography colonography (CTC). NM404 is a second-generation phospholipid ether analogue which has demonstrated selective uptake and prolonged retention in 43/43 types of malignant tumors, but not in inflammatory sites or premalignant lesions. The purpose of this experiment was to evaluate (SWR x B6)F1.Min mice as a preclinical model to test microPET/CT dual-modality virtual colonoscopy. Each animal was given an IV injection of 124I-NM404 (100 μCi) 24, 48 and 96 hours prior to scanning on a dedicated microPET/CT system. Forty million counts were histogrammed in 3D and reconstructed using an OSEM 2D algorithm. Immediately after PET acquisition, a 93 μm volumetric CT was acquired at 80 kVp, 800 μA and 350 ms exposures. Following CT, the mouse was sacrificed. The entire intestinal tract was excised, washed, insufflated, and scanned ex vivo. A total of eight tissue samples from the small intestine were harvested: 5 were benign adenomas, 2 were malignant adenocarcinomas, and 1 was a Peyer's patch (lymph tissue). The sites of these samples were positioned on CT and PET images based on morphological cues and the distance from the anus. Only 1/8 samples showed tracer uptake. Several hot spots in the microPET image were not chosen for histology. (SWR x B6)F1.Min mice develop benign and malignant tumors, making this animal model a strong candidate for future dual-modality microPET/CT virtual colonography studies.
Task Equivalence for Model and Human-Observer Comparisons in SPECT Localization Studies
NASA Astrophysics Data System (ADS)
Sen, Anando; Kalantari, Faraz; Gifford, Howard C.
2016-06-01
While mathematical model observers are intended for efficient assessment of medical imaging systems, their findings should be relevant for human observers as the primary clinical end users. We have investigated whether pursuing equivalence between the model and human-observer tasks can help ensure this goal. A localization receiver operating characteristic (LROC) study tested prostate lesion detection in simulated In-111 SPECT imaging with anthropomorphic phantoms. The test images were 2D slices extracted from reconstructed volumes. The iterative ordered subsets expectation-maximization (OSEM) reconstruction algorithm was used with Gaussian postsmoothing. Variations in the number of iterations and the level of postfiltering defined the test strategies in the study. Human-observer performance was compared with that of a visual-search (VS) observer, a scanning channelized Hotelling observer, and a scanning channelized nonprewhitening (CNPW) observer. These model observers were applied with precise information about the target regions of interest (ROIs). ROI knowledge was a study variable for the human observers. In one study format, the humans read the SPECT image alone. With a dual-modality format, the SPECT image was presented alongside an anatomical image slice extracted from the density map of the phantom. Performance was scored by area under the LROC curve. The human observers performed significantly better with the dual-modality format, and correlation with the model observers was also improved. Given the human-observer data from the SPECT study format, the Pearson correlation coefficients for the model observers were 0.58 (VS), -0.12 (CH), and -0.23 (CNPW). The respective coefficients based on the human-observer data from the dual-modality study were 0.72, 0.27, and -0.11. These results point towards the continued development of the VS observer for enhancing task equivalence in model-observer studies.
Optimisation of quantitative lung SPECT applied to mild COPD: a software phantom simulation study.
Norberg, Pernilla; Olsson, Anna; Alm Carlsson, Gudrun; Sandborg, Michael; Gustafsson, Agnetha
2015-01-01
The amount of inhomogeneity in a (99m)Tc Technegas single-photon emission computed tomography (SPECT) lung image, caused by reduced ventilation in lung regions affected by chronic obstructive pulmonary disease (COPD), is correlated with disease advancement. A quantitative analysis method measuring these inhomogeneities, the CVT method, was proposed in earlier work. To detect mild COPD, which is a difficult task, optimised parameter values are needed. In this work, the CVT method was optimised with respect to the parameter values of acquisition, reconstruction and analysis. The ordered subset expectation maximisation (OSEM) algorithm was used for reconstructing the lung SPECT images. As a first step towards clinical application of the CVT method in detecting mild COPD, this study was based on simulated SPECT images of an advanced anthropomorphic lung software phantom including respiratory and cardiac motion, where the mild COPD lung had an overall ventilation reduction of 5%. The best separation between healthy and mild COPD lung images, as determined using the CVT measure of ventilation inhomogeneity and 125 MBq (99m)Tc, was obtained using a low-energy high-resolution (LEHR) collimator and a power 6 Butterworth post-filter with a cutoff frequency of 0.6 to 0.7 cm(-1). Sixty-four reconstruction updates and a small kernel size should be used when the whole lung is analysed, and for the reduced lung a greater number of updates and a larger kernel size are needed. A LEHR collimator and 125 MBq (99m)Tc, together with an optimal combination of cutoff frequency, number of updates and kernel size, gave the best result. Suboptimal selection of the cutoff frequency, number of updates or kernel size will reduce the imaging system's ability to detect mild COPD in the lung phantom.
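As an illustration of a kernel-based inhomogeneity measure of the kind used in such analyses, the sketch below computes a local coefficient of variation (std/mean) over a moving kernel inside a lung mask; the kernel size, images and mask are toy placeholders, and the actual CVT definition may differ from this simplified reading.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def local_cv(image, mask, kernel=5):
    """Local coefficient of variation (std/mean) in a moving cube, summarised over a mask."""
    mean = uniform_filter(image, size=kernel)
    mean_sq = uniform_filter(image**2, size=kernel)
    std = np.sqrt(np.maximum(mean_sq - mean**2, 0.0))
    cv = std / (mean + 1e-9)
    return cv[mask].mean()

# Toy "lung" images: homogeneous ventilation vs. one with a ventilation-reduced region.
rng = np.random.default_rng(0)
healthy = rng.poisson(100, (64, 64, 64)).astype(float)
copd = healthy.copy()
copd[20:30, 20:30, 20:30] *= 0.5
mask = np.ones(healthy.shape, dtype=bool)

print("healthy CV:", local_cv(healthy, mask), " mild-COPD CV:", local_cv(copd, mask))
```

The optimisation described above amounts to choosing the collimator, post-filter, number of OSEM updates and kernel size that maximise the separation between these two summary values in the presence of noise.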
Species Tree Inference Using a Mixture Model.
Ullah, Ikram; Parviainen, Pekka; Lagergren, Jens
2015-09-01
Species tree reconstruction has been a subject of substantial research due to its central role across biology and medicine. A species tree is often reconstructed using a set of gene trees or by directly using sequence data. In either of these cases, one of the main confounding phenomena is the discordance between a species tree and a gene tree due to evolutionary events such as duplications and losses. Probabilistic methods can resolve the discordance by coestimating gene trees and the species tree, but this approach poses a scalability problem for larger data sets. We present MixTreEM-DLRS: a two-phase approach for reconstructing a species tree in the presence of gene duplications and losses. In the first phase, MixTreEM, a novel structural expectation maximization algorithm based on a mixture model, is used to reconstruct a set of candidate species trees, given sequence data for monocopy gene families from the genomes under study. In the second phase, PrIME-DLRS, a method based on the DLRS model (Åkerborg O, Sennblad B, Arvestad L, Lagergren J. 2009. Simultaneous Bayesian gene tree reconstruction and reconciliation analysis. Proc Natl Acad Sci U S A. 106(14):5714-5719), is used for selecting the best species tree. PrIME-DLRS can handle multicopy gene families since DLRS, apart from modeling sequence evolution, models gene duplication and loss using a gene evolution model (Arvestad L, Lagergren J, Sennblad B. 2009. The gene evolution model and computing its associated probabilities. J ACM. 56(2):1-44). We evaluate MixTreEM-DLRS using synthetic and biological data, and compare its performance with a recent genome-scale species tree reconstruction method, PHYLDOG (Boussau B, Szöllősi GJ, Duret L, Gouy M, Tannier E, Daubin V. 2013. Genome-scale coestimation of species and gene trees. Genome Res. 23(2):323-330), as well as with a fast parsimony-based algorithm, Duptree (Wehe A, Bansal MS, Burleigh JG, Eulenstein O. 2008. Duptree: a program for large-scale phylogenetic analyses using gene tree parsimony. Bioinformatics 24(13):1540-1541). Our method is competitive with PHYLDOG in terms of accuracy, runs significantly faster, and outperforms Duptree in accuracy. The analysis constituted by MixTreEM without DLRS may also be used for selecting the target species tree, yielding a fast and yet accurate algorithm for larger data sets. MixTreEM is freely available at http://prime.scilifelab.se/mixtreem/. © The Author 2015. Published by Oxford University Press on behalf of the Society for Molecular Biology and Evolution. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
Bigham, Blair L; Bull, Ellen; Morrison, Merideth; Burgess, Rob; Maher, Janet; Brooks, Steven C; Morrison, Laurie J
2011-01-01
Emergency medical services (EMS) personnel care for patients in challenging and dynamic environments that may contribute to an increased risk for adverse events. However, little is known about the risks to patient safety in the EMS setting. To address this knowledge gap, we conducted a systematic review of the literature, including nonrandomized, noncontrolled studies, conducted qualitative interviews of key informants, and, with the assistance of a pan-Canadian advisory board, hosted a 1-day summit of 52 experts in the field of EMS patient safety. The intent of the summit was to review available research, discuss the issues affecting prehospital patient safety, and discuss interventions that might improve the safety of the EMS industry. The primary objective was to define the strategic goals for improving patient safety in EMS. Participants represented all geographic regions of Canada and included administrators, educators, physicians, researchers, and patient safety experts. Data were collected through electronic voting and qualitative analysis of the discussions. The group reached consensus on nine recommendations to increase awareness, reduce adverse events, and suggest research and educational directions in EMS patient safety: increasing awareness of patient safety principles, improving adverse event reporting through creating nonpunitive reporting systems, supporting paramedic clinical decision making through improved research and education, policy changes, using flexible algorithms, adopting patient safety strategies from other disciplines, increasing funding for research in patient safety, salary support for paramedic researchers, and access to graduate training in prehospital research.
Engblom, Henrik; Tufvesson, Jane; Jablonowski, Robert; Carlsson, Marcus; Aletras, Anthony H; Hoffmann, Pavel; Jacquier, Alexis; Kober, Frank; Metzler, Bernhard; Erlinge, David; Atar, Dan; Arheden, Håkan; Heiberg, Einar
2016-05-04
Late gadolinium enhancement (LGE) cardiovascular magnetic resonance (CMR) using magnitude inversion recovery (IR) or phase sensitive inversion recovery (PSIR) has become the clinical standard for assessment of myocardial infarction (MI). However, there is no clinical standard for quantification of MI, even though multiple methods have been proposed. Simple thresholds have yielded varying results, and advanced algorithms have only been validated in single-center studies. Therefore, the aim of this study was to develop an automatic algorithm for MI quantification in IR and PSIR LGE images, and to validate the new algorithm experimentally and compare it to expert delineations in multi-center, multi-vendor patient data. The new automatic algorithm, EWA (Expectation Maximization, weighted intensity, a priori information), was implemented using an intensity threshold by Expectation Maximization (EM) and a weighted summation to account for partial volume effects. The EWA algorithm was validated in vivo against triphenyltetrazolium-chloride (TTC) staining (n = 7 pigs with paired IR and PSIR images) and against ex-vivo high-resolution T1-weighted images (n = 23 IR and n = 13 PSIR images). The EWA algorithm was also compared to expert delineation in 124 patients from multi-center, multi-vendor clinical trials 2-6 days following first-time ST-elevation myocardial infarction (STEMI) treated with percutaneous coronary intervention (PCI) (n = 124 IR and n = 49 PSIR images). Infarct size by the EWA algorithm in vivo in pigs showed a bias to ex-vivo TTC of -1 ± 4%LVM (R = 0.84) in IR and -2 ± 3%LVM (R = 0.92) in PSIR images, and a bias to ex-vivo T1-weighted images of 0 ± 4%LVM (R = 0.94) in IR and 0 ± 5%LVM (R = 0.79) in PSIR images. In multi-center patient studies, infarct size by the EWA algorithm showed a bias to expert delineation of -2 ± 6%LVM (R = 0.81) in IR images (n = 124) and 0 ± 5%LVM (R = 0.89) in PSIR images (n = 49). The EWA algorithm was validated experimentally and in patient data with a low bias in both IR and PSIR LGE images. Thus, the use of EM and a weighted intensity, as in the EWA algorithm, may serve as a clinical standard for the quantification of myocardial infarction in LGE CMR images. CHILL-MI: NCT01379261. NCT01374321.
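Two of the ingredients named in the algorithm's acronym can be illustrated in a simplified one-dimensional form: an EM fit of a two-component Gaussian mixture to myocardial pixel intensities to separate remote from enhanced tissue, followed by a weighted summation of pixel intensities to account for partial volume. This is a schematic reading under toy assumptions, not the published implementation, which additionally incorporates a priori information.

```python
import numpy as np

def em_two_gaussians(x, n_iter=100):
    """EM for a two-component 1D Gaussian mixture (remote vs. infarcted myocardium)."""
    mu = np.array([np.percentile(x, 25), np.percentile(x, 90)])
    var = np.array([x.var(), x.var()])
    pi = np.array([0.5, 0.5])
    for _ in range(n_iter):
        # E-step: responsibilities of each component for every pixel.
        d = pi * np.exp(-(x[:, None] - mu)**2 / (2 * var)) / np.sqrt(2 * np.pi * var)
        r = d / d.sum(1, keepdims=True)
        # M-step: update means, variances and mixing weights.
        n_k = r.sum(0)
        mu = (r * x[:, None]).sum(0) / n_k
        var = (r * (x[:, None] - mu)**2).sum(0) / n_k
        pi = n_k / len(x)
    return mu, var, pi

# Toy myocardial intensities: mostly remote (dark) tissue with an enhanced infarct.
rng = np.random.default_rng(0)
pixels = np.concatenate([rng.normal(100, 15, 800), rng.normal(400, 40, 200)])
mu, var, pi = em_two_gaussians(pixels)

# Weighted summation: assign each pixel an infarct fraction by linear interpolation
# between the two component means, crudely accounting for partial volume.
weights = np.clip((pixels - mu.min()) / (mu.max() - mu.min()), 0, 1)
print("infarct size (% of myocardial pixels):", 100 * weights.mean())
```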
Start/End Delays of Voiced and Unvoiced Speech Signals
DOE Office of Scientific and Technical Information (OSTI.GOV)
Herrnstein, A
Recent experiments using low-power EM-radar-like sensors (e.g., GEMs) have demonstrated a new method for measuring vocal fold activity and the onset times of voiced speech, as vocal fold contact begins to take place. Similarly, the end time of a voiced speech segment can be measured. Second, it appears that in most normal uses of American English speech, unvoiced-speech segments directly precede or directly follow voiced-speech segments. For many applications, it is useful to know typical duration times of these unvoiced speech segments. A corpus of spoken ''Timit'' words, phrases, and sentences, assembled earlier and recorded using simultaneously measured acoustic and EM-sensor glottal signals from 16 male speakers, was used for this study. By inspecting the onset (or end) of unvoiced speech, using the acoustic signal, and the onset (or end) of voiced speech using the EM sensor signal, the average duration times for unvoiced segments preceding onset of vocalization were found to be 300 ms, and for following segments, 500 ms. An unvoiced speech period is then defined in time, first by using the onset of the EM-sensed glottal signal as the onset-time marker for the voiced speech segment and end marker for the unvoiced segment. Then, by subtracting 300 ms from the onset time mark of voicing, the unvoiced speech segment start time is found. Similarly, the times for a following unvoiced speech segment can be found. While data of this nature have proven to be useful for work in our laboratory, a great deal of additional work remains to validate such data for use with general populations of users. These procedures have been useful for applying optimal processing algorithms over time segments of unvoiced, voiced, and non-speech acoustic signals. For example, these data appear to be of use in speaker validation, in vocoding, and in denoising algorithms.
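A minimal sketch of how the reported average durations translate into segment markers; the 300 ms and 500 ms offsets come from the study above, while the function name, the clamping to the recording bounds, and the example times are assumptions.

```python
def unvoiced_bounds(voiced_onset_s, voiced_end_s, recording_end_s,
                    pre_ms=300.0, post_ms=500.0):
    """Estimate unvoiced-segment boundaries around a voiced interval.

    voiced_onset_s / voiced_end_s come from the EM-sensor glottal signal;
    pre_ms / post_ms are the average unvoiced durations reported in the study.
    """
    pre_start = max(0.0, voiced_onset_s - pre_ms / 1000.0)              # unvoiced segment before voicing
    post_end = min(recording_end_s, voiced_end_s + post_ms / 1000.0)    # unvoiced segment after voicing
    return (pre_start, voiced_onset_s), (voiced_end_s, post_end)

# Example: voicing detected from 1.20 s to 1.85 s in a 3.0 s recording
before, after = unvoiced_bounds(1.20, 1.85, 3.0)
print("unvoiced before voicing:", before)
print("unvoiced after voicing:", after)
```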
DOE Office of Scientific and Technical Information (OSTI.GOV)
Uchida, Y., E-mail: h1312101@mailg.nc-toyama.ac.jp; Takada, E.; Fujisaki, A.
Neutron and γ-ray (n-γ) discrimination with a digital signal processing system has been used to measure the neutron emission profile in magnetic confinement fusion devices. However, the sampling rate must be set low to extend the measurement time because memory storage is limited. Time jitter degrades the discrimination quality at low sampling rates. As described in this paper, a new charge comparison method was developed. Furthermore, an automatic n-γ discrimination method was examined using a probabilistic approach. Analysis results were investigated using the figure of merit. Results show that the discrimination quality was improved. Automatic discrimination was applied using the EM algorithm and the k-means algorithm.
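A hedged sketch of what such a charge-comparison plus unsupervised-discrimination pipeline could look like; the gate lengths, the two-cluster assumption, the synthetic pulse shapes, and the Gaussian-based figure-of-merit estimate are illustrative and not taken from the paper.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.mixture import GaussianMixture

def discriminate(pulses, dt, short_gate=20, long_gate=200):
    """Charge-comparison features followed by unsupervised n/gamma labelling.

    pulses: array of shape (n_pulses, n_samples), baseline-subtracted.
    Gate lengths (in samples) and the 2-cluster assumption are illustrative.
    """
    q_long = pulses[:, :long_gate].sum(axis=1) * dt                 # total charge
    q_tail = pulses[:, short_gate:long_gate].sum(axis=1) * dt       # delayed (tail) charge
    psd = q_tail / np.maximum(q_long, 1e-12)                        # pulse-shape parameter
    feats = np.column_stack([np.log(np.maximum(q_long, 1e-12)), psd])
    labels_em = GaussianMixture(n_components=2, random_state=0).fit_predict(feats)
    labels_km = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(feats)
    return psd, labels_em, labels_km

def figure_of_merit(psd, labels):
    """FOM = peak separation / sum of FWHMs of the two PSD distributions."""
    g0, g1 = psd[labels == 0], psd[labels == 1]
    fwhm = lambda g: 2.355 * g.std()
    return abs(g0.mean() - g1.mean()) / (fwhm(g0) + fwhm(g1))

# Tiny synthetic example: fast (gamma-like) and slow (neutron-like) decays plus noise
rng = np.random.default_rng(0)
t = np.arange(240)
fast = np.exp(-t / 10.0)[None, :] * rng.uniform(0.5, 1.5, (200, 1)) + rng.normal(0, 0.01, (200, 240))
slow = (0.7 * np.exp(-t / 10.0) + 0.3 * np.exp(-t / 80.0))[None, :] * rng.uniform(0.5, 1.5, (200, 1)) \
    + rng.normal(0, 0.01, (200, 240))
psd, lab_em, lab_km = discriminate(np.vstack([fast, slow]), dt=1.0)
print("FOM (EM labels):", round(figure_of_merit(psd, lab_em), 2))
```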
Graphical Models for Ordinal Data
Guo, Jian; Levina, Elizaveta; Michailidis, George; Zhu, Ji
2014-01-01
A graphical model for ordinal variables is considered, where it is assumed that the data are generated by discretizing the marginal distributions of a latent multivariate Gaussian distribution. The relationships between these ordinal variables are then described by the underlying Gaussian graphical model and can be inferred by estimating the corresponding concentration matrix. Direct estimation of the model is computationally expensive, but an approximate EM-like algorithm is developed to provide an accurate estimate of the parameters at a fraction of the computational cost. Numerical evidence based on simulation studies shows the strong performance of the algorithm, which is also illustrated on data sets on movie ratings and an educational survey. PMID:26120267
Change Detection Algorithms for Surveillance in Visual IoT: A Comparative Study
NASA Astrophysics Data System (ADS)
Akram, Beenish Ayesha; Zafar, Amna; Akbar, Ali Hammad; Wajid, Bilal; Chaudhry, Shafique Ahmad
2018-01-01
The VIoT (Visual Internet of Things) connects the virtual information world with real-world objects using sensors and pervasive computing. For video surveillance in VIoT, ChD (Change Detection) is a critical component. ChD algorithms identify regions of change in multiple images of the same scene recorded at different time intervals for video surveillance. This paper presents a performance comparison of histogram thresholding and classification ChD algorithms using quantitative measures for video surveillance in VIoT based on salient features of datasets. The thresholding algorithms (Otsu, Kapur, Rosin) and classification methods (k-means and EM, Expectation Maximization) were simulated in MATLAB using diverse datasets. For performance evaluation, the quantitative measures used include OSR (Overall Success Rate), YC (Yule's Coefficient), JC (Jaccard's Coefficient), execution time, and memory consumption. Experimental results showed that Kapur's algorithm performed better for both indoor and outdoor environments with illumination changes, shadowing and medium to fast moving objects. However, its performance degraded for small object sizes with minor changes. The Otsu algorithm showed better results for indoor environments with slow to medium changes and nomadic object mobility. k-means showed good results in indoor environments with small objects producing slow change, no shadowing and scarce illumination changes.
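For reference, the evaluation measures named above can be computed from a predicted change mask and a ground-truth mask as sketched below; OSR and Jaccard follow their standard definitions, while the Yule term is given as the generic 2x2 association coefficient Q and may differ from the exact variant used in the paper.

```python
import numpy as np

def change_detection_scores(pred_mask, gt_mask):
    """Pixel-wise evaluation of a binary change mask against ground truth."""
    p = pred_mask.astype(bool).ravel()
    g = gt_mask.astype(bool).ravel()
    tp = np.sum(p & g); tn = np.sum(~p & ~g)
    fp = np.sum(p & ~g); fn = np.sum(~p & g)
    osr = (tp + tn) / p.size                                    # overall success rate
    jc = tp / max(tp + fp + fn, 1)                              # Jaccard's coefficient
    yule_q = (tp * tn - fp * fn) / max(tp * tn + fp * fn, 1)    # Yule's Q (assumed form)
    return {"OSR": osr, "JC": jc, "YuleQ": yule_q}

# Example with two hypothetical 8x8 masks
pred = np.zeros((8, 8), bool); pred[2:6, 2:6] = True
truth = np.zeros((8, 8), bool); truth[3:7, 2:6] = True
print(change_detection_scores(pred, truth))
```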
Castillo-Barnes, Diego; Peis, Ignacio; Martínez-Murcia, Francisco J.; Segovia, Fermín; Illán, Ignacio A.; Górriz, Juan M.; Ramírez, Javier; Salas-Gonzalez, Diego
2017-01-01
A wide range of segmentation approaches assumes that intensity histograms extracted from magnetic resonance images (MRI) have a distribution for each brain tissue that can be modeled by a Gaussian distribution or a mixture of them. Nevertheless, intensity histograms of White Matter and Gray Matter are not symmetric and they exhibit heavy tails. In this work, we present a hidden Markov random field model with expectation maximization (EM-HMRF) modeling the components using the α-stable distribution. The proposed model is a generalization of the widely used EM-HMRF algorithm with Gaussian distributions. We test the α-stable EM-HMRF model in synthetic data and brain MRI data. The proposed methodology presents two main advantages: Firstly, it is more robust to outliers. Secondly, we obtain results similar to those of the Gaussian model when the Gaussian assumption holds. This approach is able to model the spatial dependence between neighboring voxels in tomographic brain MRI. PMID:29209194
NASA Astrophysics Data System (ADS)
Grayver, Alexander V.
2015-07-01
This paper presents a distributed magnetotelluric inversion scheme based on the adaptive finite-element method (FEM). The key novel aspect of the introduced algorithm is the use of automatic mesh refinement techniques for both forward and inverse modelling. These techniques alleviate the tedious and subjective procedure of choosing a suitable model parametrization. To avoid overparametrization, meshes for forward and inverse problems were decoupled. For calculation of accurate electromagnetic (EM) responses, an automatic mesh refinement algorithm based on a goal-oriented error estimator has been adopted. For further efficiency gain, EM fields for each frequency were calculated using independent meshes in order to account for substantially different spatial behaviour of the fields over a wide range of frequencies. An automatic approach for efficient initial mesh design in inverse problems based on the linearized model resolution matrix was developed. To make this algorithm suitable for large-scale problems, it was proposed to use a low-rank approximation of the linearized model resolution matrix. In order to fill a gap between initial and true model complexities and resolve emerging 3-D structures better, an algorithm for adaptive inverse mesh refinement was derived. Within this algorithm, spatial variations of the imaged parameter are calculated and the mesh is refined in the neighborhoods of points with the largest variations. A series of numerical tests were performed to demonstrate the utility of the presented algorithms. Adaptive mesh refinement based on the model resolution estimates provides an efficient tool to derive initial meshes which account for arbitrary survey layouts, data types, frequency content and measurement uncertainties. Furthermore, the algorithm is capable of delivering meshes suitable for resolving features on multiple scales while keeping the number of unknowns low. However, such meshes exhibit dependency on an initial model guess. Additionally, it is demonstrated that the adaptive mesh refinement can be particularly efficient in resolving complex shapes. The implemented inversion scheme was able to resolve a hemisphere object with sufficient resolution starting from a coarse discretization and refining the mesh adaptively in a fully automatic process. The code is able to harness the computational power of modern distributed platforms and is shown to work with models consisting of millions of degrees of freedom. Significant computational savings were achieved by using locally refined decoupled meshes.
Virtual reality triage training provides a viable solution for disaster-preparedness.
Andreatta, Pamela B; Maslowski, Eric; Petty, Sean; Shim, Woojin; Marsh, Michael; Hall, Theodore; Stern, Susan; Frankel, Jen
2010-08-01
The objective of this study was to compare the relative impact of two simulation-based methods for training emergency medicine (EM) residents in disaster triage using the Simple Triage and Rapid Treatment (START) algorithm: a full-immersion virtual reality (VR) drill and a standardized patient (SP) drill. Specifically, are there differences between the triage performances and posttest results of the two groups, and do both methods differentiate between learners of variable experience levels? Fifteen Postgraduate Year 1 (PGY1) to PGY4 EM residents were randomly assigned to two groups: VR or SP. In the VR group, the learners were effectively surrounded by a virtual mass disaster environment projected on four walls, the ceiling, and the floor and performed triage by interacting with virtual patients in avatar form. The second group performed likewise in a live disaster drill using SP victims. Setting and patient presentations were identical between the two modalities. Resident performance of triage during the drills and knowledge of the START triage algorithm pre/post drill completion were assessed. Analyses included descriptive statistics and measures of association (effect size). The mean pretest scores were similar between the SP and VR groups. There were no significant differences between the triage performances of the VR and SP groups, but the data showed an effect in favor of the SP group performance on the posttest. Virtual reality can provide a feasible alternative for training EM personnel in mass disaster triage, comparing favorably to SP drills. Virtual reality provides flexible, consistent, on-demand training options, using a stable, repeatable platform essential for the development of assessment protocols and performance standards.
Mechanisms for Diurnal Variability of Global Tropical Rainfall Observed from TRMM
NASA Technical Reports Server (NTRS)
Yang, Song; Smith, Eric A.
2004-01-01
The behavior and various controls of diurnal variability in tropical-subtropical rainfall are investigated using Tropical Rainfall Measuring Mission (TRMM) precipitation measurements retrieved from: (1) TRMM Microwave Imager (TMI), (2) Precipitation Radar (PR), and (3) TMI/PR Combined, standard level 2 algorithms for the 1998 annual cycle. Results show that the diurnal variability characteristics of precipitation are consistent for all three algorithms, providing assurance that TRMM retrievals are providing consistent estimates of rainfall variability. As anticipated, most ocean areas exhibit more rainfall at night, while over most land areas rainfall peaks during daytime; however, various important exceptions are found. The dominant feature of the oceanic diurnal cycle is a rainfall maximum in late-evening/early-morning (LE-EM) hours, while over land the dominant maximum occurs in the mid- to late-afternoon (MLA). In conjunction with these maxima are pronounced seasonal variations of the diurnal amplitudes. Amplitude analysis shows that the diurnal pattern and its seasonal evolution are closely related to the rainfall accumulation pattern and its seasonal evolution. In addition, the horizontal distribution of diurnal variability indicates that for oceanic rainfall there is a secondary MLA maximum, co-existing with the LE-EM maximum, at latitudes dominated by large scale convergence and deep convection. Analogously, there is a preponderance for an LE-EM maximum over land, co-existing with the stronger MLA maximum, although it is not evident that this secondary continental feature is closely associated with the large scale circulation. The ocean results clearly indicate that rainfall diurnal variability associated with large scale convection is an integral part of the atmospheric general circulation.
ERIC Educational Resources Information Center
Lombardi, Allison; Seburn, Mary; Conley, David
2011-01-01
In this study, Don't Know/Not Applicable (DK/NA) responses on a measure of academic behaviors associated with college readiness for high school students were treated with: (a) casewise deletion, (b) scale inclusion at the lowest level, and (c) imputation using E/M algorithm. Significant differences in mean responses according to treatment…
ERIC Educational Resources Information Center
Tsutakawa, Robert K.; Lin, Hsin Ying
Item response curves for a set of binary responses are studied from a Bayesian viewpoint of estimating the item parameters. For the two-parameter logistic model with normally distributed ability, restricted bivariate beta priors are used to illustrate the computation of the posterior mode via the EM algorithm. The procedure is illustrated by data…
ERIC Educational Resources Information Center
Tsutakawa, Robert K.
This paper presents a method for estimating certain characteristics of test items which are designed to measure ability, or knowledge, in a particular area. Under the assumption that ability parameters are sampled from a normal distribution, the EM algorithm is used to derive maximum likelihood estimates to item parameters of the two-parameter…
Naishadham, Krishna; Piou, Jean E; Ren, Lingyun; Fathy, Aly E
2016-12-01
Ultra wideband (UWB) Doppler radar has many biomedical applications, including remote diagnosis of cardiovascular disease, triage and real-time personnel tracking in rescue missions. It uses narrow pulses to probe the human body and detect tiny cardiopulmonary movements by spectral analysis of the backscattered electromagnetic (EM) field. With the help of super-resolution spectral algorithms, UWB radar is capable of increased accuracy for estimating vital signs such as heart and respiration rates in adverse signal-to-noise conditions. A major challenge for biomedical radar systems is detecting the heartbeat of a subject with high accuracy, because of minute thorax motion (less than 0.5 mm) caused by the heartbeat. The problem becomes compounded by EM clutter and noise in the environment. In this paper, we introduce a new algorithm based on the state space method (SSM) for the extraction of cardiac and respiration rates from UWB radar measurements. SSM produces range-dependent system poles that can be classified parametrically with spectral peaks at the cardiac and respiratory frequencies. It is shown that SSM produces accurate estimates of the vital signs without producing harmonics and inter-modulation products that plague signal resolution in widely used FFT spectrograms.
Finite-difference modeling of the electroseismic logging in a fluid-saturated porous formation
NASA Astrophysics Data System (ADS)
Guan, Wei; Hu, Hengshan
2008-05-01
In a fluid-saturated porous medium, an electromagnetic (EM) wavefield induces an acoustic wavefield due to the electrokinetic effect. A potential geophysical application of this effect is electroseismic (ES) logging, in which the converted acoustic wavefield is received in a fluid-filled borehole to evaluate the parameters of the porous formation around the borehole. In this paper, a finite-difference scheme is proposed to model the ES logging responses to a vertical low frequency electric dipole along the borehole axis. The EM field excited by the electric dipole is calculated separately by finite-difference first, and is considered as a distributed exciting source term in a set of extended Biot's equations for the converted acoustic wavefield in the formation. This set of equations is solved by a modified finite-difference time-domain (FDTD) algorithm that allows for the calculation of dynamic permeability so that it is not restricted to low-frequency poroelastic wave problems. The perfectly matched layer (PML) technique without splitting the fields is applied to truncate the computational region. The simulated ES logging waveforms approximately agree with those obtained by the analytical method. The FDTD algorithm applies also to acoustic logging simulation in porous formations.
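The abstract's solver is a 2-D/3-D poroelastic FDTD scheme with PML; as a much-reduced illustration of the staggered leapfrog time stepping such schemes share, a minimal 1-D vacuum Maxwell FDTD sketch (no PML, hypothetical grid sizes) is:

```python
import numpy as np

def fdtd_1d(nz=400, nt=1000, dz=1.0):
    """Minimal 1D Yee-scheme FDTD (Ex, Hy) in vacuum with a soft source.

    This only illustrates the staggered leapfrog time stepping that FDTD-type
    solvers (including the coupled Biot/Maxwell scheme of the paper) are built
    on; it is not the electroseismic solver itself, and no PML is included.
    """
    c0, eps0, mu0 = 299792458.0, 8.854e-12, 4e-7 * np.pi
    dt = 0.99 * dz / c0                                   # Courant-stable time step
    ex = np.zeros(nz)
    hy = np.zeros(nz - 1)
    for n in range(nt):
        hy += dt / (mu0 * dz) * (ex[1:] - ex[:-1])        # update H at half steps
        ex[1:-1] += dt / (eps0 * dz) * (hy[1:] - hy[:-1]) # update E at full steps
        ex[nz // 4] += np.exp(-((n - 40) / 12.0) ** 2)    # Gaussian soft source
    return ex

field = fdtd_1d()
print("peak |Ex| after propagation:", np.max(np.abs(field)))
```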
Joint Prior Learning for Visual Sensor Network Noisy Image Super-Resolution
Yue, Bo; Wang, Shuang; Liang, Xuefeng; Jiao, Licheng; Xu, Caijin
2016-01-01
The visual sensor network (VSN), a new type of wireless sensor network composed of low-cost wireless camera nodes, is being applied for numerous complex visual analyses in wild environments, such as visual surveillance, object recognition, etc. However, the captured images/videos are often low resolution with noise. Such visual data cannot be directly delivered to the advanced visual analysis. In this paper, we propose a joint-prior image super-resolution (JPISR) method using expectation maximization (EM) algorithm to improve VSN image quality. Unlike conventional methods that only focus on upscaling images, JPISR alternatively solves upscaling mapping and denoising in the E-step and M-step. To meet the requirement of the M-step, we introduce a novel non-local group-sparsity image filtering method to learn the explicit prior and induce the geometric duality between images to learn the implicit prior. The EM algorithm inherently combines the explicit prior and implicit prior by joint learning. Moreover, JPISR does not rely on large external datasets for training, which is much more practical in a VSN. Extensive experiments show that JPISR outperforms five state-of-the-art methods in terms of both PSNR, SSIM and visual perception. PMID:26927114
Closed Loop Guidance Trade Study for Space Launch System Block-1B Vehicle
NASA Technical Reports Server (NTRS)
Von der Porten, Paul; Ahmad, Naeem; Hawkins, Matt
2018-01-01
NASA is currently building the Space Launch System (SLS) Block-1 launch vehicle for the Exploration Mission 1 (EM-1) test flight. The design of the next evolution of SLS, Block-1B, is well underway. The Block-1B vehicle is more capable overall than Block-1; however, the relatively low thrust-to-weight ratio of the Exploration Upper Stage (EUS) presents a challenge to the Powered Explicit Guidance (PEG) algorithm used by Block-1. To handle the long burn durations (on the order of 1000 seconds) of EUS missions, two algorithms were examined. An alternative algorithm, OPGUID, was introduced, while modifications were made to PEG. A trade study was conducted to select the guidance algorithm for future SLS vehicles. The chosen algorithm needs to support a wide variety of mission operations: ascent burns to LEO, apogee raise burns, trans-lunar injection burns, hyperbolic Earth departure burns, and contingency disposal burns using the Reaction Control System (RCS). Additionally, the algorithm must be able to respond to a single engine failure scenario. Each algorithm was scored based on pre-selected criteria, including insertion accuracy, algorithmic complexity and robustness, extensibility for potential future missions, and flight heritage. Monte Carlo analysis was used to select the final algorithm. This paper covers the design criteria, approach, and results of this trade study, showing impacts and considerations when adapting launch vehicle guidance algorithms to a broader breadth of in-space operations.
EMHP: an accurate automated hole masking algorithm for single-particle cryo-EM image processing.
Berndsen, Zachary; Bowman, Charles; Jang, Haerin; Ward, Andrew B
2017-12-01
The Electron Microscopy Hole Punch (EMHP) is a streamlined suite of tools for quick assessment, sorting and hole masking of electron micrographs. With recent advances in single-particle electron cryo-microscopy (cryo-EM) data processing allowing for the rapid determination of protein structures using a smaller computational footprint, we saw the need for a fast and simple tool for data pre-processing that could run independent of existing high-performance computing (HPC) infrastructures. EMHP provides a data preprocessing platform in a small package that requires minimal python dependencies to function. https://www.bitbucket.org/chazbot/emhp Apache 2.0 License. bowman@scripps.edu. Supplementary data are available at Bioinformatics online. © The Author(s) 2017. Published by Oxford University Press.
Software Accelerates Computing Time for Complex Math
NASA Technical Reports Server (NTRS)
2014-01-01
Ames Research Center awarded Newark, Delaware-based EM Photonics Inc. SBIR funding to utilize graphics processing unit (GPU) technology, traditionally used for computer video games, to develop high-performance computing software called CULA. The software gives users the ability to run complex algorithms on personal computers with greater speed. As a result of the NASA collaboration, the number of employees at the company has increased 10 percent.
2008-10-01
Report fragment (2008) on the inversion of time-domain electromagnetic data for the detection and discrimination of unexploded ordnance (UXO); only scattered snippets survive, including the caption of Figure 42 (geometry of the EM61HH-MK2 sensor transmitter and receiver) and references to Pasion (2007, Ph.D. Thesis, The University of British Columbia) and Pasion and Oldenburg (2001).
EM reconstruction of dual isotope PET using staggered injections and prompt gamma positron emitters
DOE Office of Scientific and Technical Information (OSTI.GOV)
Andreyev, Andriy, E-mail: andriy.andreyev-1@philips.com; Sitek, Arkadiusz; Celler, Anna
2014-02-15
Purpose: The aim of dual isotope positron emission tomography (DIPET) is to create two separate images of two coinjected PET radiotracers. DIPET shortens the duration of the study, reduces patient discomfort, and produces perfectly coregistered images compared to the case when two radiotracers would be imaged independently (sequential PET studies). Reconstruction of data from such simultaneous acquisition of two PET radiotracers is difficult because positron decay of any isotope creates only 511 keV photons; therefore, the isotopes cannot be differentiated based on the detected energy. Methods: Recently, the authors have proposed a DIPET technique that uses a combination of radiotracer A, which is a pure positron emitter (such as (18)F or (11)C), and radiotracer B, in which positron decay is accompanied by the emission of a high-energy (HE) prompt gamma (such as (38)K or (60)Cu). Events that are detected as triple coincidences of HE gammas with the corresponding two 511 keV photons allow the authors to identify the lines-of-response (LORs) of isotope B. These LORs are used to separate the two intertwined distributions, using a dedicated image reconstruction algorithm. In this work the authors propose a new version of the DIPET EM-based reconstruction algorithm that allows the authors to include an additional, independent estimate of radiotracer A distribution which may be obtained if radioisotopes are administered using a staggered injections method. In this work the method is tested on simple simulations of static PET acquisitions. Results: The authors’ experiments performed using Monte-Carlo simulations with static acquisitions demonstrate that the combined method provides better results (crosstalk errors decrease by up to 50%) than the positron-gamma DIPET method or staggered injections alone. Conclusions: The authors demonstrate that the authors’ new EM algorithm which combines information from triple coincidences with prompt gammas and staggered injections improves the accuracy of DIPET reconstructions for static acquisitions so they reach almost the benchmark level calculated for perfectly separated tracers.
Automatic cortical segmentation in the developing brain.
Xue, Hui; Srinivasan, Latha; Jiang, Shuzhou; Rutherford, Mary; Edwards, A David; Rueckert, Daniel; Hajnal, Jo V
2007-01-01
The segmentation of neonatal cortex from magnetic resonance (MR) images is much more challenging than the segmentation of cortex in adults. The main reason is the inverted contrast between grey matter (GM) and white matter (WM) that occurs when myelination is incomplete. This causes mislabeled partial volume voxels, especially at the interface between GM and cerebrospinal fluid (CSF). We propose a fully automatic cortical segmentation algorithm, detecting these mislabeled voxels using a knowledge-based approach and correcting errors by adjusting local priors to favor the correct classification. Our results show that the proposed algorithm corrects errors in the segmentation of both GM and WM compared to the classic EM scheme. The segmentation algorithm has been tested on 25 neonates with the gestational ages ranging from approximately 27 to 45 weeks. Quantitative comparison to the manual segmentation demonstrates good performance of the method (mean Dice similarity: 0.758 +/- 0.037 for GM and 0.794 +/- 0.078 for WM).
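The Dice similarity used for the quantitative comparison above can be computed as in the following sketch (the example masks are hypothetical):

```python
import numpy as np

def dice(seg_a, seg_b):
    """Dice similarity coefficient between two binary segmentations."""
    a = seg_a.astype(bool)
    b = seg_b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

# Example: compare an automatic GM mask against a manual one
auto = np.zeros((64, 64), bool); auto[10:40, 10:40] = True
manual = np.zeros((64, 64), bool); manual[12:42, 10:40] = True
print(f"Dice = {dice(auto, manual):.3f}")
```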
NASA Technical Reports Server (NTRS)
Kanevsky, Alex
2004-01-01
My goal is to develop and implement efficient, accurate, and robust Implicit-Explicit Runge-Kutta (IMEX RK) methods [9] for overcoming geometry-induced stiffness with applications to computational electromagnetics (CEM), computational fluid dynamics (CFD) and computational aeroacoustics (CAA). IMEX algorithms solve the non-stiff portions of the domain using explicit methods, and isolate and solve the more expensive stiff portions using implicit methods. Current algorithms in CEM can only simulate purely harmonic (up to 10 GHz plane wave) EM scattering by fighter aircraft, which are assumed to be pure metallic shells, and cannot handle the inclusion of coatings, penetration into and radiation out of the aircraft. Efficient IMEX RK methods could potentially increase current CEM capabilities by 1-2 orders of magnitude, allowing scientists and engineers to attack more challenging and realistic problems.
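As a minimal illustration of the implicit-explicit splitting described above, the sketch below applies a first-order IMEX (forward/backward Euler) step to a scalar problem with a stiff linear term; the actual work concerns high-order IMEX Runge-Kutta schemes for PDE discretizations, so this only shows the splitting idea, and the example problem is invented.

```python
def imex_euler(y0, t_end, dt, f_explicit, f_implicit, dfdy_implicit):
    """First-order IMEX step for y' = f_explicit(y) + f_implicit(y):
    explicit Euler on the non-stiff term, linearized backward Euler on the stiff term.
    """
    y, t = float(y0), 0.0
    while t < t_end - 1e-12:
        rhs = y + dt * f_explicit(y)                         # explicit (non-stiff) part
        # solve y_new = rhs + dt * f_implicit(y_new) after linearizing about y
        jac = dfdy_implicit(y)
        y = (rhs + dt * (f_implicit(y) - jac * y)) / (1.0 - dt * jac)
        t += dt
    return y

# Example: mild constant forcing (non-stiff) plus a very stiff linear decay
f_ex = lambda y: 1.0
f_im = lambda y: -1e5 * y
dfdy = lambda y: -1e5
# stable although dt = 1e-3 is far larger than the stiff time scale 1e-5
print(imex_euler(1.0, 0.1, 1e-3, f_ex, f_im, dfdy))
```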
High-dimensional cluster analysis with the Masked EM Algorithm
Kadir, Shabnam N.; Goodman, Dan F. M.; Harris, Kenneth D.
2014-01-01
Cluster analysis faces two problems in high dimensions: first, the “curse of dimensionality” that can lead to overfitting and poor generalization performance; and second, the sheer time taken for conventional algorithms to process large amounts of high-dimensional data. We describe a solution to these problems, designed for the application of “spike sorting” for next-generation high channel-count neural probes. In this problem, only a small subset of features provide information about the cluster membership of any one data vector, but this informative feature subset is not the same for all data points, rendering classical feature selection ineffective. We introduce a “Masked EM” algorithm that allows accurate and time-efficient clustering of up to millions of points in thousands of dimensions. We demonstrate its applicability to synthetic data, and to real-world high-channel-count spike sorting data. PMID:25149694
Negotiating Multicollinearity with Spike-and-Slab Priors.
Ročková, Veronika; George, Edward I
2014-08-01
In multiple regression under the normal linear model, the presence of multicollinearity is well known to lead to unreliable and unstable maximum likelihood estimates. This can be particularly troublesome for the problem of variable selection where it becomes more difficult to distinguish between subset models. Here we show how adding a spike-and-slab prior mitigates this difficulty by filtering the likelihood surface into a posterior distribution that allocates the relevant likelihood information to each of the subset model modes. For identification of promising high posterior models in this setting, we consider three EM algorithms, the fast closed form EMVS version of Rockova and George (2014) and two new versions designed for variants of the spike-and-slab formulation. For a multimodal posterior under multicollinearity, we compare the regions of convergence of these three algorithms. Deterministic annealing versions of the EMVS algorithm are seen to substantially mitigate this multimodality. A single simple running example is used for illustration throughout.
Sun, J
1995-09-01
In this paper we discuss the non-parametric estimation of a distribution function based on incomplete data for which the measurement origin of a survival time or the date of enrollment in a study is known only to belong to an interval. Also, the survival time of interest itself is observed from a truncated distribution and is known only to lie in an interval. To estimate the distribution function, a simple self-consistency algorithm, a generalization of Turnbull's (1976, Journal of the Royal Statistical Society, Series B 38, 290-295) self-consistency algorithm, is proposed. This method is then used to analyze two AIDS cohort studies, for which direct use of the EM algorithm (Dempster, Laird and Rubin, 1977, Journal of the Royal Statistical Society, Series B 39, 1-38), which is computationally complicated, has previously been the usual method of analysis.
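A simplified sketch of the self-consistency (EM-type) iteration for interval-censored data is given below; it places mass on observed interval endpoints rather than constructing Turnbull's equivalence intervals and ignores the truncation handled by the generalized algorithm in the paper, so it illustrates the iteration only.

```python
import numpy as np

def self_consistent_cdf(left, right, n_iter=500, tol=1e-8):
    """Self-consistency estimate of a distribution from interval-censored data:
    each event time is only known to lie in (left_i, right_i]."""
    left = np.asarray(left, float); right = np.asarray(right, float)
    support = np.unique(np.concatenate([left, right]))
    # alpha[i, j] = 1 if support point j lies inside observation interval i
    alpha = (support[None, :] > left[:, None]) & (support[None, :] <= right[:, None])
    p = np.full(support.size, 1.0 / support.size)
    for _ in range(n_iter):
        denom = alpha @ p                                    # P(event in interval i) under current p
        p_new = (alpha * p).T @ (1.0 / denom) / left.size    # expected mass at each support point
        if np.max(np.abs(p_new - p)) < tol:
            p = p_new
            break
        p = p_new
    return support, np.cumsum(p)                             # estimated CDF at the support points

# Example: five interval-censored observations
s, F = self_consistent_cdf([0, 1, 2, 1, 3], [2, 3, 5, 4, 6])
print(np.round(F, 3))
```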
Li, J; Guo, L-X; Zeng, H; Han, X-B
2009-06-01
A message-passing-interface (MPI)-based parallel finite-difference time-domain (FDTD) algorithm for the electromagnetic scattering from a 1-D randomly rough sea surface is presented. The uniaxial perfectly matched layer (UPML) medium is adopted for truncation of FDTD lattices, in which the finite-difference equations can be used for the total computation domain by properly choosing the uniaxial parameters. This makes the parallel FDTD algorithm easier to implement. The parallel performance with different processors is illustrated for one sea surface realization, and the computation time of the parallel FDTD algorithm is dramatically reduced compared to a single-process implementation. Finally, some numerical results are shown, including the backscattering characteristics of sea surface for different polarization and the bistatic scattering from a sea surface with large incident angle and large wind speed.
NASA Astrophysics Data System (ADS)
Adams, J. W.; Ondrejka, A. R.; Medley, H. W.
1987-11-01
A method of measuring the natural resonant frequencies of a structure is described. The measurement involves irradiating this structure, in this case a helicopter, with an impulsive electromagnetic (EM) field and receiving the echo reflected from the helicopter. Resonances are identified by using a mathematical algorithm based on Prony's method to operate on the digitized reflected signal. The measurement system consists of special TEM horns, pulse generators, a time-domain system, and Prony's algorithm. The frequency range covered is 5 megahertz to 250 megahertz. This range is determined by antenna and circuit characteristics. The measurement system is demonstrated, and measured data from several different helicopters are presented in different forms. These different forms are needed to determine which of the resonant frequencies are real and which are false. The false frequencies are byproducts of Prony's algorithm.
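A least-squares Prony sketch along these lines is shown below; the model order, sampling step, and the synthetic two-resonance signal are assumptions, and real radar echoes would need noise handling and order selection.

```python
import numpy as np

def prony_frequencies(x, dt, order):
    """Estimate resonant (complex) frequencies from a sampled transient via
    Prony's method: solve the linear-prediction equations, then take the roots
    of the characteristic polynomial (least-squares variant).
    """
    x = np.asarray(x, float)
    n = x.size
    # Linear prediction: x[k] = -sum_{m=1..p} a_m x[k-m], for k = p..n-1
    A = np.column_stack([x[order - m: n - m] for m in range(1, order + 1)])
    a, *_ = np.linalg.lstsq(A, -x[order:], rcond=None)
    roots = np.roots(np.concatenate(([1.0], a)))
    freqs = np.angle(roots) / (2 * np.pi * dt)     # Hz
    damping = np.log(np.abs(roots)) / dt           # 1/s (negative = decaying)
    keep = freqs > 0                               # one member of each conjugate pair
    return freqs[keep], damping[keep]

# Example: two damped sinusoids at 20 MHz and 55 MHz, sampled at 1 ns
dt = 1e-9
t = np.arange(512) * dt
sig = np.exp(-2e6 * t) * np.cos(2 * np.pi * 20e6 * t) \
    + 0.6 * np.exp(-5e6 * t) * np.cos(2 * np.pi * 55e6 * t)
f, d = prony_frequencies(sig, dt, order=4)
print(np.round(np.sort(f) / 1e6, 1), "MHz")
```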
Efficient Maximum Likelihood Estimation for Pedigree Data with the Sum-Product Algorithm.
Engelhardt, Alexander; Rieger, Anna; Tresch, Achim; Mansmann, Ulrich
2016-01-01
We analyze data sets consisting of pedigrees with age at onset of colorectal cancer (CRC) as phenotype. The occurrence of familial clusters of CRC suggests the existence of a latent, inheritable risk factor. We aimed to compute the probability of a family possessing this risk factor as well as the hazard rate increase for these risk factor carriers. Due to the inheritability of this risk factor, the estimation necessitates a costly marginalization of the likelihood. We propose an improved EM algorithm by applying factor graphs and the sum-product algorithm in the E-step. This reduces the computational complexity from exponential to linear in the number of family members. Our algorithm is as precise as a direct likelihood maximization in a simulation study and a real family study on CRC risk. For 250 simulated families of size 19 and 21, the runtime of our algorithm is faster by a factor of 4 and 29, respectively. On the largest family (23 members) in the real data, our algorithm is 6 times faster. We introduce a flexible and runtime-efficient tool for statistical inference in biomedical event data with latent variables that opens the door for advanced analyses of pedigree data. © 2017 S. Karger AG, Basel.
Sensitivity of PZT Impedance Sensors for Damage Detection of Concrete Structures.
Yang, Yaowen; Hu, Yuhang; Lu, Yong
2008-01-21
The piezoelectric ceramic Lead Zirconate Titanate (PZT) based electro-mechanical impedance (EMI) technique for structural health monitoring (SHM) has been successfully applied to various engineering systems. However, fundamental research work on the sensitivity of the PZT impedance sensors for damage detection is still needed. In the traditional EMI method, the PZT electro-mechanical (EM) admittance (inverse of the impedance) is used as the damage indicator, which makes it difficult to specify the effect of damage on structural properties. This paper uses the structural mechanical impedance (SMI) extracted from the PZT EM admittance signature as the damage indicator. A comparison study on the sensitivity of the EM admittance and the structural mechanical impedance to the damages in a concrete structure is conducted. Results show that the SMI is more sensitive to the damage than the EM admittance and thus a better indicator for damage detection. Furthermore, this paper proposes a dynamic system consisting of a number of single-degree-of-freedom elements with mass, spring and damper components to model the SMI. A genetic algorithm is employed to search for the optimal values of the unknown parameters in the dynamic system. An experiment is carried out on a two-storey concrete frame subjected to base vibrations that simulate earthquake excitation. A number of PZT sensors are regularly arrayed and bonded to the frame structure to acquire PZT EM admittance signatures. The relationship between the damage index and the distance of the PZT sensor from the damage is studied. Consequently, the sensitivity of the PZT sensors is discussed and their sensing region in concrete is derived.
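A sketch of the kind of single-degree-of-freedom impedance model mentioned above; the driving-point impedance formula for one mass-spring-damper element is standard, while the parallel (summed) combination of elements and all parameter values are assumptions rather than the paper's identified model.

```python
import numpy as np

def sdof_impedance(freq_hz, m, c, k):
    """Mechanical (driving-point) impedance Z = F/v of one mass-spring-damper."""
    w = 2 * np.pi * np.asarray(freq_hz, float)
    return c + 1j * (w * m - k / w)

def model_impedance(freq_hz, elements):
    """Model the SMI as several SDOF elements; they are simply summed here, i.e.
    assumed to act in parallel at the sensing point (the paper's exact
    combination rule may differ). 'elements' is a list of (m, c, k) tuples."""
    return sum(sdof_impedance(freq_hz, m, c, k) for (m, c, k) in elements)

freqs = np.linspace(10e3, 100e3, 5)                 # typical EMI band, tens of kHz
params = [(0.02, 50.0, 4e8), (0.05, 120.0, 9e8)]     # hypothetical (m, c, k) values
print(np.abs(model_impedance(freqs, params)))
```

In the identification step described in the abstract, a genetic algorithm (or any global optimizer) would adjust the (m, c, k) values of each element so that this modeled impedance matches the SMI extracted from the measured PZT admittance.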
Sensor Data Quality and Angular Rate Down-Selection Algorithms on SLS EM-1
NASA Technical Reports Server (NTRS)
Park, Thomas; Oliver, Emerson; Smith, Austin
2018-01-01
The NASA Space Launch System Block 1 launch vehicle is equipped with an Inertial Navigation System (INS) and multiple Rate Gyro Assemblies (RGA) that are used in the Guidance, Navigation, and Control (GN&C) algorithms. The INS provides the inertial position, velocity, and attitude of the vehicle along with both angular rate and specific force measurements. Additionally, multiple sets of co-located rate gyros supply angular rate data. The collection of angular rate data, taken along the launch vehicle, is used to separate out vehicle motion from flexible body dynamics. Since the system architecture uses redundant sensors, the capability was developed to evaluate the health (or validity) of the independent measurements. A suite of Sensor Data Quality (SDQ) algorithms is responsible for assessing the angular rate data from the redundant sensors. When failures are detected, SDQ will take the appropriate action and disqualify or remove faulted sensors from forward processing. Additionally, the SDQ algorithms contain logic for down-selecting the angular rate data used by the GN&C software from the set of healthy measurements. This paper provides an overview of the algorithms used for both fault-detection and measurement down selection.
Common lines modeling for reference free Ab-initio reconstruction in cryo-EM.
Greenberg, Ido; Shkolnisky, Yoel
2017-11-01
We consider the problem of estimating an unbiased and reference-free ab initio model for non-symmetric molecules from images generated by single-particle cryo-electron microscopy. The proposed algorithm finds the globally optimal assignment of orientations that simultaneously respects all common lines between all images. The contribution of each common line to the estimated orientations is weighted according to a statistical model for common lines' detection errors. The key property of the proposed algorithm is that it finds the global optimum for the orientations given the common lines. In particular, any local optima in the common lines energy landscape do not affect the proposed algorithm. As a result, it is applicable to thousands of images at once, very robust to noise, completely reference free, and not biased towards any initial model. A byproduct of the algorithm is a set of measures that allow assessing the reliability of the obtained ab initio model. We demonstrate the algorithm using class averages from two experimental data sets, resulting in ab initio models with resolutions of 20 Å or better, even from class averages consisting of as few as three raw images per class. Copyright © 2017 Elsevier Inc. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Han, H; Xing, L; Liang, Z
Purpose: To investigate the feasibility of estimating the tissue mixture perfusions and quantifying cerebral blood flow change in arterial spin labeled (ASL) perfusion MR images. Methods: The proposed perfusion MR image analysis framework consists of 5 steps: (1) Inhomogeneity correction was performed on the T1- and T2-weighted images, which are available for each studied perfusion MR dataset. (2) We used the publicly available FSL toolbox to strip off the non-brain structures from the T1- and T2-weighted MR images. (3) We applied a multi-spectral tissue-mixture segmentation algorithm on both T1- and T2-structural MR images to roughly estimate the fraction of each tissue type (white matter, grey matter and cerebrospinal fluid) inside each image voxel. (4) The distributions of the three tissue types or tissue mixture across the structural image array are down-sampled and mapped onto the ASL voxel array via a co-registration operation. (5) The presented 4-dimensional expectation-maximization (4D-EM) algorithm takes the down-sampled three tissue type distributions on perfusion image data to generate the perfusion mean, variance and percentage images for each tissue type of interest. Results: Experimental results on three volunteer datasets demonstrated that the multi-spectral tissue-mixture segmentation algorithm was effective in initializing tissue mixtures from T1- and T2-weighted MR images. Compared with the conventional ASL image processing toolbox, the proposed 4D-EM algorithm not only generated comparable perfusion mean images, but also produced perfusion variance and percentage images, which the ASL toolbox cannot obtain. It is observed that the perfusion contribution percentages may not be the same as the corresponding tissue mixture volume fractions estimated in the structural images. Conclusion: A specific application to brain ASL images showed that the presented perfusion image analysis method is promising for detecting subtle changes in tissue perfusions, which is valuable for the early diagnosis of certain brain diseases, e.g. multiple sclerosis.
Even Shallower Exploration with Airborne Electromagnetics
NASA Astrophysics Data System (ADS)
Auken, E.; Christiansen, A. V.; Kirkegaard, C.; Nyboe, N. S.; Sørensen, K.
2015-12-01
Airborne electromagnetics (EM) is in many ways undergoing the same type of rapid technological development as seen in the telecommunication industry. These developments are driven by a steadily increasing demand for exploration of minerals, groundwater and geotechnical targets. The latter two areas demand shallow and accurate resolution of the near surface geology in terms of both resistivity and spatial delineation of the sedimentary layers. Airborne EM systems measure the ground's electromagnetic response either when it is subject to a continuous discrete sinusoidal transmitter signal (frequency domain) or by measuring the decay of currents induced in the ground by rapid transmission of transient pulses (time domain). In the last decade almost all new developments of both instrument hardware and data processing techniques have focused on time domain systems. Here we present a concept for measuring the time domain response even before the transient transmitter current has been turned off. Our approach relies on a combination of new instrument hardware and novel modeling algorithms. The newly developed hardware allows for measuring the instrument's complete transfer function, which is convolved with the synthetic earth response in the inversion algorithm. The effect is that earth response data measured while the transmitter current is being turned off can be included in the inversion, significantly increasing the amount of available information. We demonstrate the technique using both synthetic and field data. The synthetic examples provide insight into the physics during the turn-off process and the field examples document the robustness of the method. Geological near surface structures can now be resolved to a degree that is unprecedented to the best of our knowledge, making airborne EM even more attractive and cost-effective for exploration of water and minerals that are crucial for the function of our societies.
NASA Astrophysics Data System (ADS)
Jamie, Majid
2016-11-01
Singh and Mogi (2003) presented a forward modeling (FWD) program, coded in FORTRAN 77 and called "EMLCLLER", which is capable of computing the frequency-domain electromagnetic (EM) response of a large circular loop, in terms of the vertical magnetic component (Hz), over 1D layered earth models; computations in this program can be performed assuming variable transmitter-receiver configurations and incorporating both conduction and displacement currents. Integral equations in this program are computed through digital linear filters based on the Hankel transforms, together with analytic solutions based on hyper-geometric functions. Despite the capabilities of EMLCLLER, there are mistakes in this program that make its FWD results unreliable. The mistakes in EMLCLLER arise from using a wrong algorithm for computing the reflection coefficient of the EM wave in TE mode (rTE) and flawed algorithms for computing the phase and normalized phase values relating to Hz; in this paper, corrected forms of these mistakes are presented. Moreover, in order to illustrate how these mistakes can affect FWD results, EMLCLLER and the corrected version of this program presented in this paper, titled "EMLCLLER_Corr", are run on different two- and three-layered earth models; afterwards, their FWD results in terms of the real and imaginary parts of Hz, its normalized amplitude, and the corresponding normalized phase curves are plotted versus frequency and compared to each other. In addition, in Singh and Mogi (2003) extra derivations for computing the radial component of the magnetic field (Hr) and the angular component of the electric field (Eϕ) are also presented, where the numerical solution presented for Hr is incorrect; in this paper the correct numerical solution for this derivation is also presented.
Processing grounded-wire TEM signal in time-frequency-pseudo-seismic domain: A new paradigm
NASA Astrophysics Data System (ADS)
Khan, M. Y.; Xue, G. Q.; Chen, W.; Huasen, Z.
2017-12-01
Grounded-wire TEM has received great attention in mineral, hydrocarbon and hydrogeological investigations for the last several years. Conventionally, TEM soundings have been presented as apparent resistivity curves as a function of time. With the development of sophisticated computational algorithms, it became possible to extract more realistic geoelectric information by applying inversion programs to 1-D & 3-D problems. Here, we analyze grounded-wire TEM data by carrying out analysis in the time, frequency and pseudo-seismic domains, supported by borehole information. At first, H, K, A & Q type geoelectric models are processed using a proven inversion program (1-D Occam inversion). Second, a time-to-frequency transformation is conducted from TEM ρa(t) curves to magnetotelluric (MT) ρa(f) curves for the same models based on all-time apparent resistivity curves. Third, the 1-D Bostick algorithm was applied to the transformed resistivity. Finally, the EM diffusion field is transformed into a propagating wave field obeying the standard wave equation using a wavelet transformation technique, and a pseudo-seismic section is constructed. The transformed seismic-like wave indicates that some reflection and refraction phenomena appear when the EM wave field interacts with geoelectric interfaces at different depth intervals due to contrasts in resistivity. The resolution of the transformed TEM data is significantly improved in comparison to apparent resistivity plots. A case study illustrates the successful hydrogeophysical application of the proposed approach in recovering a water-filled mined-out area in a coal field located in Ye county, Henan province, China. The results support the introduction of pseudo-seismic imaging technology in the short-offset version of TEM, which can also be a useful aid if integrated with the seismic reflection technique to explore possibilities for high-resolution EM imaging in the future.
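For reference, the Niblett-Bostick step of such a workflow can be sketched as below; the depth and resistivity formulas are the standard approximate ones, and the synthetic apparent-resistivity curve is invented.

```python
import numpy as np

MU0 = 4e-7 * np.pi

def bostick_transform(freq_hz, rho_a):
    """Niblett-Bostick approximate depth inversion of an apparent-resistivity
    curve (e.g. one obtained from the TEM-to-MT transformation described above).
    Returns depth [m] and Bostick resistivity [ohm-m]."""
    T = 1.0 / np.asarray(freq_hz, float)             # period
    rho_a = np.asarray(rho_a, float)
    depth = np.sqrt(rho_a * T / (2 * np.pi * MU0))
    m = np.gradient(np.log(rho_a), np.log(T))        # slope d(log rho_a)/d(log T)
    rho_b = rho_a * (1 + m) / (1 - m)
    return depth, rho_b

# Example: synthetic curve rising toward low frequency (hypothetical values)
f = np.logspace(3, -1, 30)                           # 1 kHz down to 0.1 Hz
rho_app = 50 + 150 / (1 + (f / 1.0) ** 1.5)
z, rb = bostick_transform(f, rho_app)
print(np.round(z[:3]), np.round(rb[:3], 1))
```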
NASA Astrophysics Data System (ADS)
Tandon, K.; Egbert, G.; Siripunvaraporn, W.
2003-12-01
We are developing a modular system for three-dimensional inversion of electromagnetic (EM) induction data, using an object-oriented programming approach. This approach allows us to modify the individual components of the inversion scheme proposed, and also reuse the components for a variety of problems in earth science computing, however diverse they might be. In particular, the modularity allows us to (a) change modeling codes independently of inversion algorithm details; (b) experiment with new inversion algorithms; and (c) modify the way prior information is imposed in the inversion to test competing hypotheses and techniques required to solve an earth science problem. Our initial code development is for EM induction equations on a staggered grid, using iterative solution techniques in 3D. An example illustrated here is an experiment with the sensitivity of 3D magnetotelluric inversion to uncertainties in the boundary conditions required for regional induction problems. These boundary conditions should reflect the large-scale geoelectric structure of the study area, which is usually poorly constrained. In general for inversion of MT data, one fixes boundary conditions at the edge of the model domain, and adjusts the earth's conductivity structure within the modeling domain. Allowing for errors in specification of the open boundary values is simple in principle, but no existing inversion codes that we are aware of have this feature. Adding a feature such as this is straightforward within the context of the modular approach. More generally, a modular approach provides an efficient methodology for setting up earth science computing problems to test various ideas. As a concrete illustration relevant to EM induction problems, we investigate the sensitivity of MT data near the San Andreas Fault at Parkfield (California) to uncertainties in the regional geoelectric structure.
Numerical Computation of Homogeneous Slope Stability
Xiao, Shuangshuang; Li, Kemin; Ding, Xiaohua; Liu, Tong
2015-01-01
To simplify the computational process of homogeneous slope stability, improve computational accuracy, and find multiple potential slip surfaces of a complex geometric slope, this study utilized the limit equilibrium method to derive expression equations of overall and partial factors of safety. This study transformed the search for the minimum factor of safety (FOS) into a constrained nonlinear programming problem and applied an exhaustive method (EM) and a particle swarm optimization (PSO) algorithm to this problem. In simple slope examples, the computational results using the EM and PSO were close to those obtained using other methods. Compared to the EM, the PSO had a small computation error and a significantly shorter computation time. As a result, the PSO could precisely calculate the slope FOS with high efficiency. The example of the multistage slope analysis indicated that this slope had two potential slip surfaces. The factors of safety were 1.1182 and 1.1560, respectively. The differences between these and the minimum FOS (1.0759) were small, but the positions of the slip surfaces were completely different from the critical slip surface (CSS). PMID:25784927
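A generic particle swarm optimization sketch of the kind that could drive such an FOS minimization is given below; the swarm parameters and the toy objective standing in for a limit-equilibrium FOS calculation are assumptions.

```python
import numpy as np

def pso_minimize(f, bounds, n_particles=40, n_iters=200, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Basic particle swarm optimization of f over box bounds.

    In the slope-stability setting, f would evaluate the factor of safety of a
    candidate slip surface via a limit-equilibrium calculation; here f is any
    callable over a low-dimensional parameter vector.
    """
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(bounds, float).T
    x = rng.uniform(lo, hi, size=(n_particles, lo.size))
    v = np.zeros_like(x)
    pbest, pbest_val = x.copy(), np.array([f(p) for p in x])
    g = pbest[np.argmin(pbest_val)].copy()
    for _ in range(n_iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)    # velocity update
        x = np.clip(x + v, lo, hi)                               # keep particles in bounds
        vals = np.array([f(p) for p in x])
        better = vals < pbest_val
        pbest[better], pbest_val[better] = x[better], vals[better]
        g = pbest[np.argmin(pbest_val)].copy()
    return g, pbest_val.min()

# Toy stand-in for an FOS surface with multiple local minima (not a real slope model)
fos = lambda p: 1.1 + 0.3 * np.sin(3 * p[0]) ** 2 + 0.2 * (p[1] - 0.5) ** 2
best, val = pso_minimize(fos, bounds=[(0, 2), (0, 1)])
print("minimum FOS ~", round(val, 4), "at", np.round(best, 3))
```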
Detection of the power lines in UAV remote sensed images using spectral-spatial methods.
Bhola, Rishav; Krishna, Nandigam Hari; Ramesh, K N; Senthilnath, J; Anand, Gautham
2018-01-15
In this paper, detection of the power lines on images acquired by Unmanned Aerial Vehicle (UAV) based remote sensing is carried out using spectral-spatial methods. Spectral clustering was performed using the Kmeans and Expectation Maximization (EM) algorithms to classify the pixels into power lines and non-power lines. The spectral clustering methods used in this study are parametric in nature; to automate the choice of the number of clusters, the Davies-Bouldin index (DBI) is used. The UAV remote sensed image is clustered into the number of clusters determined by the DBI. The k-clustered image is then merged into 2 clusters (power lines and non-power lines). Further, spatial segmentation was performed using morphological and geometric operations to eliminate the non-power-line regions. In this study, UAV images acquired at different altitudes and angles were analyzed to validate the robustness of the proposed method. It was observed that the EM with spatial segmentation (EM-Seg) performed better than the Kmeans with spatial segmentation (Kmeans-Seg) on most of the UAV images. Copyright © 2017 Elsevier Ltd. All rights reserved.
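The DBI-automated choice of the number of clusters can be sketched as follows; the KMeans variant, the candidate k range, and the synthetic pixel features are assumptions.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import davies_bouldin_score

def cluster_with_dbi(pixels, k_range=range(2, 9), seed=0):
    """Cluster pixel feature vectors with KMeans, choosing k by the
    Davies-Bouldin index (lower is better).  'pixels' is (n_pixels, n_features)."""
    best = None
    for k in k_range:
        labels = KMeans(n_clusters=k, n_init=10, random_state=seed).fit_predict(pixels)
        dbi = davies_bouldin_score(pixels, labels)
        if best is None or dbi < best[0]:
            best = (dbi, k, labels)
    return best   # (dbi, chosen_k, labels)

# Example with random RGB-like pixel features drawn around three centres
rng = np.random.default_rng(0)
feats = np.vstack([rng.normal(m, 0.05, (300, 3)) for m in (0.2, 0.5, 0.8)])
dbi, k, labels = cluster_with_dbi(feats)
print("chosen k =", k, "DBI =", round(dbi, 3))
```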
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhao, F; Shandong Cancer Hospital and Institute, Jinan, Shandong; Bowsher, J
2014-06-01
Purpose: PET imaging with F18-FDG is utilized for treatment planning, treatment assessment, and prognosis. A region of interest (ROI) encompassing the tumor may be determined on the PET image, often by a threshold T on the PET standard uptake values (SUVs). Several studies have shown prognostic value for relevant ROI properties including maximum SUV value (SUVmax), metabolic tumor volume (MTV), and total glycolytic activity (TGA). The choice of threshold T may affect mean SUV value (SUVmean), MTV, and TGA. Recently spatial resolution modeling (SRM) has been introduced on many PET systems. SRM may also affect these ROI properties. The purpose of this work is to investigate the relative influence of SRM and threshold choice T on SUVmean, MTV, TGA, and SUVmax. Methods: For 9 anal cancer patients, 18F-FDG PET scans were performed prior to treatment. PET images were reconstructed by 2 iterations of Ordered Subsets Expectation Maximization (OSEM), with and without SRM. ROI contours were generated by 5 different SUV threshold values T: 2.5, 3.0, 30%, 40%, and 50% of SUVmax. Paired-samples t tests were used to compare SUVmean, MTV, and TGA (a) for SRM on versus off and (b) between each pair of threshold values T. SUVmax was also compared for SRM on versus off. Results: For almost all (57/60) comparisons of 2 different threshold values, SUVmean, MTV, and TGA showed statistically significant variation. For comparison of SRM on versus off, there were no statistically significant changes in SUVmax and TGA, but there were statistically significant changes in MTV for T=2.5 and T=3.0 and in SUVmean for all T. Conclusion: The near-universal statistical significance of threshold choice T suggests that, regarding harmonization across sites, threshold choice may be a greater concern than choice of SRM. However, broader study is warranted, e.g. other iterations of OSEM should be considered.
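The threshold-defined ROI quantities compared in the study can be computed as in this sketch; the voxel volume and synthetic SUV values are placeholders.

```python
import numpy as np

def roi_metrics(suv, threshold, voxel_volume_ml):
    """SUVmax, SUVmean, metabolic tumor volume (MTV) and total glycolytic
    activity (TGA = SUVmean * MTV) for a threshold-defined ROI.

    suv: array of SUVs inside a tumor-containing VOI; threshold may be an
    absolute SUV (e.g. 2.5) or a fraction of SUVmax (e.g. 0.4 * suv.max()).
    """
    suv = np.asarray(suv, float)
    suv_max = suv.max()
    roi = suv[suv >= threshold]
    mtv_ml = roi.size * voxel_volume_ml
    suv_mean = roi.mean() if roi.size else 0.0
    tga = suv_mean * mtv_ml
    return {"SUVmax": suv_max, "SUVmean": suv_mean, "MTV_ml": mtv_ml, "TGA": tga}

# Example: the same synthetic voxels evaluated with two of the thresholds from the study
vox = np.concatenate([np.random.default_rng(0).normal(1.0, 0.2, 5000),
                      np.random.default_rng(1).normal(6.0, 1.0, 300)])
for t in (2.5, 0.4 * vox.max()):
    print(round(t, 2), roi_metrics(vox, t, voxel_volume_ml=0.064))
```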
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cheng, L; Duke University Medical Center, Durham, NC; Fudan University Shanghai Cancer Center, Shanghai
Purpose: To investigate prostate imaging onboard radiation therapy machines using a novel robotic, 49-pinhole Single Photon Emission Computed Tomography (SPECT) system. Methods: Computer-simulation studies were performed for region-of-interest (ROI) imaging using a 49-pinhole SPECT collimator and for broad cross-section imaging using a parallel-hole SPECT collimator. A male XCAT phantom was computer-simulated in supine position with one 12mm-diameter tumor added in the prostate. A treatment couch was added to the phantom. Four-minute detector trajectories for imaging a 7cm-diameter-sphere ROI encompassing the tumor were investigated with different parameters, including pinhole focal length, pinhole diameter and trajectory starting angle. Pseudo-random Poisson noise was included in the simulated projection data, and SPECT images were reconstructed by OSEM with 4 subsets and up to 10 iterations. Images were evaluated by visual inspection, profiles, and Root-Mean-Square-Error (RMSE). Results: The tumor was well visualized above background by the 49-pinhole SPECT system with different pinhole parameters while it was not visible with parallel-hole SPECT imaging. Minimum RMSEs were 0.30 for 49-pinhole imaging and 0.41 for parallel-hole imaging. For parallel-hole imaging, the detector trajectory from right-to-left yielded slightly lower RMSEs than that from posterior to anterior. For 49-pinhole imaging, near-minimum RMSEs were maintained over a broader range of OSEM iterations with a 5mm pinhole diameter and 21cm focal length versus a 2mm diameter pinhole and 18cm focal length. The detector with 21cm pinhole focal length had the shortest rotation radius averaged over the trajectory. Conclusion: On-board functional and molecular prostate imaging may be feasible in 4-minute scan times by robotic SPECT. A 49-pinhole SPECT system could improve such imaging as compared to broad cross-section parallel-hole collimated SPECT imaging. Multi-pinhole imaging can be improved by considering pinhole focal length, pinhole diameter, and trajectory starting angle. The project is supported by the NIH grant 5R21-CA156390.
NASA Astrophysics Data System (ADS)
Ma, Chuang; Chen, Han-Shuang; Lai, Ying-Cheng; Zhang, Hai-Feng
2018-02-01
Complex networks hosting binary-state dynamics arise in a variety of contexts. In spite of previous works, to fully reconstruct the network structure from observed binary data remains challenging. We articulate a statistical inference based approach to this problem. In particular, exploiting the expectation-maximization (EM) algorithm, we develop a method to ascertain the neighbors of any node in the network based solely on binary data, thereby recovering the full topology of the network. A key ingredient of our method is the maximum-likelihood estimation of the probabilities associated with actual or nonexistent links, and we show that the EM algorithm can distinguish the two kinds of probability values without any ambiguity, insofar as the length of the available binary time series is reasonably long. Our method does not require any a priori knowledge of the detailed dynamical processes, is parameter-free, and is capable of accurate reconstruction even in the presence of noise. We demonstrate the method using combinations of distinct types of binary dynamical processes and network topologies, and provide a physical understanding of the underlying reconstruction mechanism. Our statistical inference based reconstruction method contributes an additional piece to the rapidly expanding "toolbox" of data based reverse engineering of complex networked systems.
Smart Interpretation - Application of Machine Learning in Geological Interpretation of AEM Data
NASA Astrophysics Data System (ADS)
Bach, T.; Gulbrandsen, M. L.; Jacobsen, R.; Pallesen, T. M.; Jørgensen, F.; Høyer, A. S.; Hansen, T. M.
2015-12-01
When using airborne geophysical measurements in, e.g., groundwater mapping, an overwhelming amount of data is collected. Increasingly large survey areas, denser data collection and limited resources combine into a growing problem of building geological models that use all the available data in a manner that is consistent with the geologist's knowledge about the geology of the survey area. In the ERGO project, funded by The Danish National Advanced Technology Foundation, we address this problem by developing new, usable tools that enable the geologist to utilize her geological knowledge directly in the interpretation of the AEM data, and thereby handle the large amount of data. In the project we have developed the mathematical basis for capturing geological expertise in a statistical model. Based on this, we have implemented new algorithms that have been operationalized and embedded in user-friendly software. In this software, the machine learning algorithm, Smart Interpretation, enables the geologist to use the system as an assistant in the geological modelling process. As the software 'learns' the geology from the geologist, the system suggests new modelling features in the data. In this presentation we demonstrate the application of the results from the ERGO project, including the proposed modelling workflow applied to a variety of data examples.
3D electromagnetic modelling of a TTI medium and TTI effects in inversion
NASA Astrophysics Data System (ADS)
Jaysaval, Piyoosh; Shantsev, Daniil; de la Kethulle de Ryhove, Sébastien
2016-04-01
We present a numerical algorithm for 3D electromagnetic (EM) forward modelling in conducting media with general electric anisotropy. The algorithm is based on the finite-difference discretization of frequency-domain Maxwell's equations on a Lebedev grid, in which all components of the electric field are collocated but half a spatial step staggered with respect to the magnetic field components, which also are collocated. This leads to a system of linear equations that is solved using a stabilized biconjugate gradient method with a multigrid preconditioner. We validate the accuracy of the numerical results for layered and 3D tilted transverse isotropic (TTI) earth models representing typical scenarios used in the marine controlled-source EM method. It is then demonstrated that not taking into account the full anisotropy of the conductivity tensor can lead to misleading inversion results. For simulation data corresponding to a 3D model with a TTI anticlinal structure, a standard vertical transverse isotropic inversion is not able to image a resistor, while for a 3D model with a TTI synclinal structure the inversion produces a false resistive anomaly. If inversion uses the proposed forward solver that can handle TTI anisotropy, it produces resistivity images consistent with the true models.
Examining the effect of initialization strategies on the performance of Gaussian mixture modeling.
Shireman, Emilie; Steinley, Douglas; Brusco, Michael J
2017-02-01
Mixture modeling is a popular technique for identifying unobserved subpopulations (e.g., components) within a data set, with Gaussian (normal) mixture modeling being the form most widely used. Generally, the parameters of these Gaussian mixtures cannot be estimated in closed form, so estimates are typically obtained via an iterative process. The most common estimation procedure is maximum likelihood via the expectation-maximization (EM) algorithm. Like many approaches for identifying subpopulations, finite mixture modeling can suffer from locally optimal solutions, and the final parameter estimates are dependent on the initial starting values of the EM algorithm. Initial values have been shown to significantly impact the quality of the solution, and researchers have proposed several approaches for selecting the set of starting values. Five techniques for obtaining starting values that are implemented in popular software packages are compared. Their performances are assessed in terms of the following four measures: (1) the ability to find the best observed solution, (2) settling on a solution that classifies observations correctly, (3) the number of local solutions found by each technique, and (4) the speed at which the start values are obtained. On the basis of these results, a set of recommendations is provided to the user.
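A small, hedged way to reproduce part of such a comparison with a common software package is shown below; it assumes scikit-learn is available and simply contrasts random and k-means starting values by the fitted average log-likelihood on synthetic two-component data.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# Two overlapping Gaussian components in 2-D
X = np.vstack([rng.normal([0.0, 0.0], 1.0, size=(300, 2)),
               rng.normal([2.5, 2.5], 1.0, size=(200, 2))])

for init in ["random", "kmeans"]:
    gm = GaussianMixture(n_components=2, init_params=init,
                         n_init=10, random_state=0).fit(X)
    print(f"{init:7s}  avg. log-likelihood = {gm.score(X):.4f}  "
          f"converged = {gm.converged_}")
```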
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhao, X.D.; Tsui, B.M.W.; Gregoriou, G.K.
The goal of the investigation was to study the effectiveness of the corrective reconstruction methods in cardiac SPECT using a realistic phantom and to qualitatively and quantitatively evaluate the reconstructed images using bull's-eye plots. A 3D mathematical phantom which realistically models the anatomical structures of the cardiac-torso region of patients was used. The phantom allows simulation of both the attenuation distribution and the uptake of radiopharmaceuticals in different organs. Also, the phantom can be easily modified to simulate different genders and variations in patient anatomy. Two-dimensional projection data were generated from the phantom and included the effects of attenuation and detector response blurring. The reconstruction methods used in the study included the conventional filtered backprojection (FBP) with no attenuation compensation, the first-order Chang algorithm, an iterative filtered backprojection algorithm (IFBP), the weighted least square conjugate gradient algorithm and the ML-EM algorithm with non-uniform attenuation compensation. The transaxial reconstructed images were rearranged into short-axis slices from which bull's-eye plots of the count density distribution in the myocardium were generated.
Cytoprophet: a Cytoscape plug-in for protein and domain interaction networks inference.
Morcos, Faruck; Lamanna, Charles; Sikora, Marcin; Izaguirre, Jesús
2008-10-01
Cytoprophet is a software tool that allows prediction and visualization of protein and domain interaction networks. It is implemented as a plug-in of Cytoscape, an open source software framework for analysis and visualization of molecular networks. Cytoprophet implements three algorithms that predict new potential physical interactions using the domain composition of proteins and experimental assays. The algorithms for protein and domain interaction inference include maximum likelihood estimation (MLE) using expectation maximization (EM); the set cover approach maximum specificity set cover (MSSC) and the sum-product algorithm (SPA). After accepting an input set of proteins with Uniprot ID/Accession numbers and a selected prediction algorithm, Cytoprophet draws a network of potential interactions with probability scores and GO distances as edge attributes. A network of domain interactions between the domains of the initial protein list can also be generated. Cytoprophet was designed to take advantage of the visual capabilities of Cytoscape and be simple to use. An example of inference in a signaling network of myxobacterium Myxococcus xanthus is presented and available at Cytoprophet's website. http://cytoprophet.cse.nd.edu.
Wang, Zhu; Shuangge, Ma; Wang, Ching-Yun
2017-01-01
In health services and outcome research, count outcomes are frequently encountered and often have a large proportion of zeros. The zero-inflated negative binomial (ZINB) regression model has important applications for this type of data. With many possible candidate risk factors, this paper proposes new variable selection methods for the ZINB model. We consider the maximum likelihood function plus a penalty, including the least absolute shrinkage and selection operator (LASSO), smoothly clipped absolute deviation (SCAD) and minimax concave penalty (MCP). An EM (expectation-maximization) algorithm is proposed for estimating the model parameters and conducting variable selection simultaneously. This algorithm consists of estimating penalized weighted negative binomial models and penalized logistic models via the coordinate descent algorithm. Furthermore, statistical properties including the standard error formulae are provided. A simulation study shows that the new algorithm not only gives more accurate, or at least comparable, estimates, but is also more robust than traditional stepwise variable selection. The proposed methods are applied to analyze the health care demand in Germany using the open-source R package mpath. PMID:26059498
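As a toy illustration of the E-step in such a zero-inflated model (not the paper's penalized algorithm, which the mpath package implements), the snippet below computes the posterior probability that each observed zero is a structural zero, given assumed current values of the zero-inflation probability, the NB mean, and the NB size parameter.

```python
import numpy as np

def e_step_structural_zero(y, pi, mu, theta):
    """Posterior probability that each observation is a structural zero
    in a zero-inflated negative binomial model (toy E-step).
    pi: zero-inflation probability, mu: NB mean, theta: NB size parameter."""
    nb_zero = (theta / (theta + mu)) ** theta          # NB probability of a zero count
    return np.where(y == 0, pi / (pi + (1 - pi) * nb_zero), 0.0)

# Illustrative counts and parameter values
y = np.array([0, 0, 1, 3, 0, 7])
print(e_step_structural_zero(y, pi=0.4, mu=2.0, theta=1.5))
```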
NASA Astrophysics Data System (ADS)
Ninos, K.; Georgiadis, P.; Cavouras, D.; Nomicos, C.
2010-05-01
This study presents the design and development of a mobile wireless platform to be used for monitoring and analysis of seismic events and related electromagnetic (EM) signals, employing Personal Digital Assistants (PDAs). A prototype custom-developed application was deployed on a 3G-enabled PDA that could connect to the FTP server of the Institute of Geodynamics of the National Observatory of Athens and receive and display EM signals at 4 receiver frequencies (3 kHz (E-W, N-S), 10 kHz (E-W, N-S), 41 MHz and 46 MHz). Signals may originate from any one of the 16 field-stations located around the Greek territory. Employing continuous recordings of EM signals gathered from January 2003 till December 2007, a Support Vector Machines (SVM)-based classification system was designed to distinguish EM precursor signals within a noisy background. EM-signals corresponding to recordings preceding major seismic events (Ms≥5R) were segmented by an experienced scientist, and five features (mean, variance, skewness, kurtosis, and a wavelet-based feature) derived from the EM-signals were calculated. These features were used to train the SVM-based classification scheme. The performance of the system was evaluated by the exhaustive search and leave-one-out methods, giving 87.2% overall classification accuracy in correctly identifying EM precursor signals within a noisy background when employing all calculated features. Due to the insufficient processing power of the PDAs, this task was performed on a typical desktop computer. The trained SVM classifier was then integrated into the PDA-based application, rendering the platform capable of discriminating between EM precursor signals and noise. The system's efficiency was evaluated by an expert who reviewed (1) multiple EM-signals up to 18 days prior to corresponding past seismic events, and (2) the possible EM-activity of a specific region employing the trained SVM classifier. Additionally, the proposed architecture can form a base platform for a future integrated system that will incorporate services such as notifications for field station power failures, disruption of data flow, occurring SEs, and even other types of measurement and analysis processes such as the integration of a special analysis algorithm based on the ratio of short-term to long-term signal average.
NASA Astrophysics Data System (ADS)
Hoppmann, Mario; Hunkeler, Priska A.; Hendricks, Stefan; Kalscheuer, Thomas; Gerdes, Rüdiger
2016-04-01
In Antarctica, ice crystals (platelets) form and grow in supercooled waters below ice shelves. These platelets rise, accumulate beneath nearby sea ice, and subsequently form a several meter thick, porous sub-ice platelet layer. This special ice type is a unique habitat, influences sea-ice mass and energy balance, and its volume can be interpreted as an indicator of the health of an ice shelf. Although progress has been made in determining and understanding its spatio-temporal variability based on point measurements, an investigation of this phenomenon on a larger scale remains a challenge due to logistical constraints and a lack of suitable methodology. In the present study, we applied a lateral constrained Marquardt-Levenberg inversion to a unique multi-frequency electromagnetic (EM) induction sounding dataset obtained on the ice-shelf influenced fast-ice regime of Atka Bay, eastern Weddell Sea. We adapted the inversion algorithm to incorporate a sensor specific signal bias, and confirmed the reliability of the algorithm by performing a sensitivity study using synthetic data. We inverted the field data for sea-ice and platelet-layer thickness and electrical conductivity, and calculated ice-volume fractions within the platelet layer using Archie's Law. The thickness results agreed well with drillhole validation datasets within the uncertainty range, and the ice-volume fraction yielded results comparable to other studies. Both parameters together enable an estimation of the total ice volume within the platelet layer, which was found to be comparable to the volume of landfast sea ice in this region, and corresponded to more than a quarter of the annual basal melt volume of the nearby Ekström Ice Shelf. Our findings show that multi-frequency EM induction sounding is a suitable approach to efficiently map sea-ice and platelet-layer properties, with important implications for research into ocean/ice-shelf/sea-ice interactions. However, a successful application of this technique requires a break with traditional EM sensor calibration strategies due to the need of absolute calibration with respect to a physical forward model.
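The ice-volume fraction step mentioned above can be illustrated with a minimal Archie's-law sketch; the cementation exponent m used here is an illustrative assumption, not the value adopted in the study.

```python
import numpy as np

def ice_volume_fraction(sigma_bulk, sigma_brine, m=2.0):
    """Archie's law: sigma_bulk = sigma_brine * porosity**m.
    Returns the solid ice-volume fraction (1 - porosity).
    The cementation exponent m is an illustrative assumption."""
    porosity = (sigma_bulk / sigma_brine) ** (1.0 / m)
    return 1.0 - porosity

# Example: platelet-layer bulk conductivities vs. sea-water conductivity, in S/m
print(ice_volume_fraction(sigma_bulk=np.array([1.2, 1.6]), sigma_brine=2.7))
```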
Multiple sclerosis lesion segmentation using an automatic multimodal graph cuts.
García-Lorenzo, Daniel; Lecoeur, Jeremy; Arnold, Douglas L; Collins, D Louis; Barillot, Christian
2009-01-01
Graph Cuts have been shown as a powerful interactive segmentation technique in several medical domains. We propose to automate the Graph Cuts in order to automatically segment Multiple Sclerosis (MS) lesions in MRI. We replace the manual interaction with a robust EM-based approach in order to discriminate between MS lesions and the Normal Appearing Brain Tissues (NABT). Evaluation is performed in synthetic and real images showing good agreement between the automatic segmentation and the target segmentation. We compare our algorithm with the state of the art techniques and with several manual segmentations. An advantage of our algorithm over previously published ones is the possibility to semi-automatically improve the segmentation due to the Graph Cuts interactive feature.
Uchida, Y.; Takada, E.; Fujisaki, A.; Isobe, M.; Shinohara, K.; Tomita, H.; Kawarabayashi, J.; Iguchi, T.
2014-01-01
Neutron and γ-ray (n-γ) discrimination with a digital signal processing system has been used to measure the neutron emission profile in magnetic confinement fusion devices. However, a sampling rate must be set low to extend the measurement time because the memory storage is limited. Time jitter decreases a discrimination quality due to a low sampling rate. As described in this paper, a new charge comparison method was developed. Furthermore, automatic n-γ discrimination method was examined using a probabilistic approach. Analysis results were investigated using the figure of merit. Results show that the discrimination quality was improved. Automatic discrimination was applied using the EM algorithm and k-means algorithm. PMID:25430297
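A minimal sketch of the two ingredients named above, a charge-comparison ratio and an automatic two-cluster split (here plain 1-D k-means rather than the authors' exact procedure), is given below; gate indices and the synthetic ratio populations are illustrative assumptions.

```python
import numpy as np

def charge_ratio(pulse, total_gate, tail_gate):
    """Tail-to-total charge ratio used in charge-comparison pulse-shape discrimination."""
    total = pulse[total_gate[0]:total_gate[1]].sum()
    tail = pulse[tail_gate[0]:tail_gate[1]].sum()
    return tail / total

def kmeans_1d(x, n_iter=50):
    """Two-cluster 1-D k-means used to split neutron-like and gamma-like ratios."""
    c = np.array([x.min(), x.max()])                      # initial cluster centers
    for _ in range(n_iter):
        labels = np.abs(x[:, None] - c).argmin(axis=1)    # assign to nearest center
        c = np.array([x[labels == k].mean() for k in (0, 1)])
    return labels, c

# Synthetic ratio populations standing in for gamma-like and neutron-like events
rng = np.random.default_rng(2)
ratios = np.concatenate([rng.normal(0.10, 0.02, 500),
                         rng.normal(0.25, 0.03, 200)])
labels, centers = kmeans_1d(ratios)
print(centers)
```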
DOE Office of Scientific and Technical Information (OSTI.GOV)
Damato, AL; Bhagwat, MS; Buzurovic, I
Purpose: To investigate the use of a system using EM tracking, post-processing and error-detection algorithms for measuring brachytherapy catheter locations and for detecting errors and resolving uncertainties in treatment-planning catheter digitization. Methods: An EM tracker was used to localize 13 catheters in a clinical surface applicator (A) and 15 catheters inserted into a phantom (B). Two pairs of catheters in (B) crossed paths at a distance <2 mm, producing an undistinguishable catheter artifact in that location. EM data were post-processed for noise reduction and reformatted to provide the dwell location configuration. CT-based digitization was automatically extracted from the brachytherapy plan DICOM files (CT). EM dwell digitization error was characterized in terms of the average and maximum distance between corresponding EM and CT dwells per catheter. The error detection rate (detected errors / all errors) was calculated for 3 types of errors: swap of two catheter numbers; incorrect catheter number identification superior to the closest position between two catheters (mix); and catheter-tip shift. Results: The averages ± 1 standard deviation of the average and maximum registration error per catheter were 1.9±0.7 mm and 3.0±1.1 mm for (A) and 1.6±0.6 mm and 2.7±0.8 mm for (B). The error detection rate was 100% (A and B) for swap errors, mix errors, and shifts >4.5 mm (A) and >5.5 mm (B); errors were detected for shifts on average >2.0 mm (A) and >2.4 mm (B). Both mix errors associated with undistinguishable catheter artifacts were detected and at least one of the involved catheters was identified. Conclusion: We demonstrated the use of an EM tracking system for localization of brachytherapy catheters, detection of digitization errors and resolution of undistinguishable catheter artifacts. Automatic digitization may be possible with a registration between the imaging and the EM frame of reference. Research funded by the Kaye Family Award 2012.
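The dwell-error metric described above (average and maximum EM-to-CT distance per catheter) can be sketched as follows; the toy catheter geometry and the 1.5 mm noise level are illustrative assumptions, not data from the study.

```python
import numpy as np

def catheter_errors(em_dwells, ct_dwells):
    """Average and maximum distance (mm) between corresponding EM and CT
    dwell positions of one catheter; both arrays have shape (n_dwells, 3)."""
    d = np.linalg.norm(em_dwells - ct_dwells, axis=1)
    return d.mean(), d.max()

# Toy example: 5 dwells along a straight catheter with ~1.5 mm registration noise
ct = np.column_stack([np.zeros(5), np.zeros(5), np.arange(5) * 5.0])
em = ct + np.random.default_rng(3).normal(0.0, 1.5, ct.shape)
print(catheter_errors(em, ct))
```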
NASA Astrophysics Data System (ADS)
Ge, J.; Everett, M. E.; Weiss, C. J.
2012-12-01
A 2.5D finite difference (FD) frequency-domain modeling algorithm based on the theory of fractional diffusion of electromagnetic (EM) fields generated by a loop source lying above a fractured geological medium is addressed in this paper. The presence of fractures in the subsurface, usually containing highly conductive pore fluids, gives rise to spatially hierarchical flow paths of induced EM eddy currents. The diffusion of EM eddy currents in such formations is anomalous, generalizing the classical Gaussian process described by the conventional Maxwell equations. Based on the continuous time random walk (CTRW) theory, the diffusion of EM eddy currents in a rough medium is governed by the fractional Maxwell equations. Here, we model the EM response of a 2D subsurface containing fractured zones, with a 3D loop source, which results in the so-called 2.5D model geometry. The governing equation in the frequency domain is converted using a Fourier transform into the k domain along the strike direction (along which the model conductivity does not vary). The resulting equation system is solved by the multifrontal massively parallel solver (MUMPS). The data obtained are then converted back to the spatial domain and the time domain. We find excellent agreement between the FD and analytic solutions for a rough halfspace model. FD solutions are then calculated for a 2D fault zone model with variable conductivity and roughness. We compare the results with responses from several classical models and explore the relationship between the roughness and the spatial density of the fracture distribution.
Sensitivity of PZT Impedance Sensors for Damage Detection of Concrete Structures
Yang, Yaowen; Hu, Yuhang; Lu, Yong
2008-01-01
The piezoelectric ceramic Lead Zirconate Titanate (PZT) based electro-mechanical impedance (EMI) technique for structural health monitoring (SHM) has been successfully applied to various engineering systems. However, fundamental research on the sensitivity of PZT impedance sensors for damage detection is still needed. In the traditional EMI method, the PZT electro-mechanical (EM) admittance (the inverse of the impedance) is used as the damage indicator, which makes it difficult to specify the effect of damage on structural properties. This paper uses the structural mechanical impedance (SMI) extracted from the PZT EM admittance signature as the damage indicator. A comparison study on the sensitivity of the EM admittance and the structural mechanical impedance to damage in a concrete structure is conducted. Results show that the SMI is more sensitive to the damage than the EM admittance and is thus a better indicator for damage detection. Furthermore, this paper proposes a dynamic system consisting of a number of single-degree-of-freedom elements with mass, spring and damper components to model the SMI. A genetic algorithm is employed to search for the optimal values of the unknown parameters in the dynamic system. An experiment is carried out on a two-storey concrete frame subjected to base vibrations that simulate an earthquake. A number of PZT sensors are regularly arrayed and bonded to the frame structure to acquire PZT EM admittance signatures. The relationship between the damage index and the distance of the PZT sensor from the damage is studied. Consequently, the sensitivity of the PZT sensors is discussed and their sensing region in concrete is derived. PMID:27879711
NASA Astrophysics Data System (ADS)
Han, Y.; Misra, S.
2018-04-01
Multi-frequency measurement of a dispersive electromagnetic (EM) property, such as electrical conductivity, dielectric permittivity, or magnetic permeability, is commonly analyzed for purposes of material characterization. Such an analysis requires inversion of the multi-frequency measurement based on a specific relaxation model, such as the Cole-Cole model or Pelton's model. We develop a unified inversion scheme that can be coupled to various types of relaxation models to independently process multi-frequency measurements of varied EM properties for purposes of improved EM-based geomaterial characterization. The proposed inversion scheme is first tested on a few synthetic cases in which different relaxation models are coupled into the inversion scheme and then applied to multi-frequency complex conductivity, complex resistivity, complex permittivity, and complex impedance measurements. The method estimates up to seven relaxation-model parameters, exhibiting convergence and accuracy for random initializations of the relaxation-model parameters within up to 3 orders of magnitude variation around the true parameter values. The proposed inversion method implements a bounded Levenberg algorithm with tuned initial values of the damping parameter and its iterative adjustment factor, which are fixed in all the cases shown in this paper, irrespective of the type of measured EM property and the type of relaxation model. Notably, jump-out and jump-back-in steps are implemented as automated methods in the inversion scheme to prevent the inversion from getting trapped around local minima and to honor physical bounds of the model parameters. The proposed inversion scheme can be easily used to process various types of EM measurements without major changes to the inversion scheme.
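A hedged sketch of the kind of fit described above is shown below: it evaluates Pelton's complex resistivity relaxation model and fits it to noiseless synthetic multi-frequency data with SciPy's bounded trust-region least-squares solver, which stands in for (and is not the same as) the paper's custom bounded Levenberg algorithm with jump-out/jump-back-in steps.

```python
import numpy as np
from scipy.optimize import least_squares

def pelton_resistivity(omega, rho0, m, tau, c):
    """Pelton's complex resistivity relaxation model."""
    return rho0 * (1.0 - m * (1.0 - 1.0 / (1.0 + (1j * omega * tau) ** c)))

def residuals(params, omega, data):
    """Stack real and imaginary misfits so a real-valued solver can be used."""
    model = pelton_resistivity(omega, *params)
    return np.concatenate([(model - data).real, (model - data).imag])

omega = 2 * np.pi * np.logspace(-2, 4, 40)        # illustrative frequency sweep
true_params = (100.0, 0.3, 0.01, 0.5)             # rho0, chargeability, tau, c
data = pelton_resistivity(omega, *true_params)

fit = least_squares(residuals, x0=[50.0, 0.1, 0.1, 0.3],
                    bounds=([1.0, 0.0, 1e-5, 0.1], [1e4, 1.0, 1e2, 1.0]),
                    args=(omega, data))
print(fit.x)
```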
Application of Dynamic Logic Algorithm to Inverse Scattering Problems Related to Plasma Diagnostics
NASA Astrophysics Data System (ADS)
Perlovsky, L.; Deming, R. W.; Sotnikov, V.
2010-11-01
In plasma diagnostics, scattering of electromagnetic waves is widely used for identification of density and wave field perturbations. In the present work we use a powerful mathematical approach, dynamic logic (DL), to identify the spectra of scattered electromagnetic (EM) waves produced by the interaction of the incident EM wave with a Langmuir soliton in the presence of noise. The problem is especially difficult since the spectral amplitudes of the noise pattern are comparable with the amplitudes of the scattered waves. In the past DL has been applied to a number of complex problems in artificial intelligence, pattern recognition, and signal processing, resulting in revolutionary improvements. Here we demonstrate its application to plasma diagnostic problems. Perlovsky, L.I., 2001. Neural Networks and Intellect: using model-based concepts. Oxford University Press, New York, NY.
Sun, Wanjie; Larsen, Michael D; Lachin, John M
2014-04-15
In longitudinal studies, a quantitative outcome (such as blood pressure) may be altered during follow-up by the administration of a non-randomized, non-trial intervention (such as anti-hypertensive medication) that may seriously bias the study results. Current methods mainly address this issue for cross-sectional studies. For longitudinal data, the current methods are either restricted to a specific longitudinal data structure or are valid only under special circumstances. We propose two new methods for estimation of covariate effects on the underlying (untreated) general longitudinal outcomes: a single imputation method employing a modified expectation-maximization (EM)-type algorithm and a multiple imputation (MI) method utilizing a modified Monte Carlo EM-MI algorithm. Each method can be implemented as one-step, two-step, and full-iteration algorithms. They combine the advantages of the current statistical methods while reducing their restrictive assumptions and generalizing them to realistic scenarios. The proposed methods replace intractable numerical integration of a multi-dimensionally censored MVN posterior distribution with a simplified, sufficiently accurate approximation. It is particularly attractive when outcomes reach a plateau after intervention due to various reasons. Methods are studied via simulation and applied to data from the Diabetes Control and Complications Trial/Epidemiology of Diabetes Interventions and Complications study of treatment for type 1 diabetes. Methods proved to be robust to high dimensions, large amounts of censored data, low within-subject correlation, and when subjects receive non-trial intervention to treat the underlying condition only (with high Y), or for treatment in the majority of subjects (with high Y) in combination with prevention for a small fraction of subjects (with normal Y). Copyright © 2013 John Wiley & Sons, Ltd.
EM Propagation & Atmospheric Effects Assessment
2008-09-30
The split-step Fourier parabolic equation (SSPE) algorithm provides the complex amplitude and phase (group delay) of the continuous wave (CW) signal...the APM is based on the SSPE, we are implementing the more efficient Fourier synthesis technique to determine the transfer function. To this end a...needed in order to sample H(f) via the SSPE, and indeed with the proper parameters chosen, the two pulses can be resolved in the time window shown in
Baer, Atar; Elbert, Yevgeniy; Burkom, Howard S; Holtry, Rekha; Lombardo, Joseph S; Duchin, Jeffrey S
2011-03-01
We evaluated emergency department (ED) data, emergency medical services (EMS) data, and public utilities data for describing an outbreak of carbon monoxide (CO) poisoning following a windstorm. Syndromic ED data were matched against previously collected chart abstraction data. We ran detection algorithms on selected time series derived from all 3 data sources to identify health events associated with the CO poisoning outbreak. We used spatial and spatiotemporal scan statistics to identify geographic areas that were most heavily affected by the CO poisoning event. Of the 241 CO cases confirmed by chart review, 190 (78.8%) were identified in the syndromic surveillance data as exact matches. Records from the ED and EMS data detected an increase in CO-consistent syndromes after the storm. The ED data identified significant clusters of CO-consistent syndromes, including zip codes that had widespread power outages. Weak temporal gastrointestinal (GI) signals, possibly resulting from ingestion of food spoiled by lack of refrigeration, were detected in the ED data but not in the EMS data. Spatial clustering of GI-based groupings in the ED data was not detected. Data from this evaluation support the value of ED data for surveillance after natural disasters. Enhanced EMS data may be useful for monitoring a CO poisoning event, if these data are available to the health department promptly. ©2011 American Medical Association. All rights reserved.
DNA motif alignment by evolving a population of Markov chains.
Bi, Chengpeng
2009-01-30
Deciphering cis-regulatory elements or de novo motif-finding in genomes still remains elusive although much algorithmic effort has been expended. The Markov chain Monte Carlo (MCMC) method such as Gibbs motif samplers has been widely employed to solve the de novo motif-finding problem through sequence local alignment. Nonetheless, the MCMC-based motif samplers still suffer from local maxima like EM. Therefore, as a prerequisite for finding good local alignments, these motif algorithms are often independently run a multitude of times, but without information exchange between different chains. Hence a new algorithm design enabling such information exchange would be worthwhile. This paper presents a novel motif-finding algorithm by evolving a population of Markov chains with information exchange (PMC), each of which is initialized as a random alignment and run by the Metropolis-Hastings sampler (MHS). It is progressively updated through a series of local alignments stochastically sampled. Explicitly, the PMC motif algorithm performs stochastic sampling as specified by a population-based proposal distribution rather than individual ones, and adaptively evolves the population as a whole towards a global maximum. The alignment information exchange is accomplished by taking advantage of the pooled motif site distributions. A distinct method for running multiple independent Markov chains (IMC) without information exchange, or dubbed as the IMC motif algorithm, is also devised to compare with its PMC counterpart. Experimental studies demonstrate that the performance could be improved if pooled information were used to run a population of motif samplers. The new PMC algorithm was able to improve the convergence and outperformed other popular algorithms tested using simulated and biological motif sequences.
Large Scale Frequent Pattern Mining using MPI One-Sided Model
DOE Office of Scientific and Technical Information (OSTI.GOV)
Vishnu, Abhinav; Agarwal, Khushbu
In this paper, we propose a work-stealing runtime, the Library for Work Stealing (LibWS), using the MPI one-sided model for designing scalable FP-Growth, the de facto frequent pattern mining algorithm, on large scale systems. LibWS provides locality-efficient and highly scalable work-stealing techniques for load balancing on a variety of data distributions. We also propose a novel communication algorithm for the FP-Growth data exchange phase, which reduces the communication complexity from the state-of-the-art O(p) to O(f + p/f) for p processes and f frequent attribute-ids. FP-Growth is implemented using LibWS and evaluated on several work distributions and support counts. An experimental evaluation of FP-Growth on LibWS using 4096 processes on an InfiniBand cluster demonstrates excellent efficiency for several work distributions (87% efficiency for Power-law and 91% for Poisson). The proposed distributed FP-Tree merging algorithm provides 38x communication speedup on 4096 cores.
Negotiating Multicollinearity with Spike-and-Slab Priors
Ročková, Veronika
2014-01-01
In multiple regression under the normal linear model, the presence of multicollinearity is well known to lead to unreliable and unstable maximum likelihood estimates. This can be particularly troublesome for the problem of variable selection where it becomes more difficult to distinguish between subset models. Here we show how adding a spike-and-slab prior mitigates this difficulty by filtering the likelihood surface into a posterior distribution that allocates the relevant likelihood information to each of the subset model modes. For identification of promising high posterior models in this setting, we consider three EM algorithms, the fast closed form EMVS version of Rockova and George (2014) and two new versions designed for variants of the spike-and-slab formulation. For a multimodal posterior under multicollinearity, we compare the regions of convergence of these three algorithms. Deterministic annealing versions of the EMVS algorithm are seen to substantially mitigate this multimodality. A single simple running example is used for illustration throughout. PMID:25419004
Data Analysis with Graphical Models: Software Tools
NASA Technical Reports Server (NTRS)
Buntine, Wray L.
1994-01-01
Probabilistic graphical models (directed and undirected Markov fields, and combined in chain graphs) are used widely in expert systems, image processing and other areas as a framework for representing and reasoning with probabilities. They come with corresponding algorithms for performing probabilistic inference. This paper discusses an extension to these models by Spiegelhalter and Gilks, plates, used to graphically model the notion of a sample. This offers a graphical specification language for representing data analysis problems. When combined with general methods for statistical inference, this also offers a unifying framework for prototyping and/or generating data analysis algorithms from graphical specifications. This paper outlines the framework and then presents some basic tools for the task: a graphical version of the Pitman-Koopman Theorem for the exponential family, problem decomposition, and the calculation of exact Bayes factors. Other tools already developed, such as automatic differentiation, Gibbs sampling, and use of the EM algorithm, make this a broad basis for the generation of data analysis software.
NASA Astrophysics Data System (ADS)
Cheng, Lishui; Hobbs, Robert F.; Segars, Paul W.; Sgouros, George; Frey, Eric C.
2013-06-01
In radiopharmaceutical therapy, an understanding of the dose distribution in normal and target tissues is important for optimizing treatment. Three-dimensional (3D) dosimetry takes into account patient anatomy and the nonuniform uptake of radiopharmaceuticals in tissues. Dose-volume histograms (DVHs) provide a useful summary representation of the 3D dose distribution and have been widely used for external beam treatment planning. Reliable 3D dosimetry requires an accurate 3D radioactivity distribution as the input. However, activity distribution estimates from SPECT are corrupted by noise and partial volume effects (PVEs). In this work, we systematically investigated OS-EM based quantitative SPECT (QSPECT) image reconstruction in terms of its effect on DVH estimates. A modified 3D NURBS-based Cardiac-Torso (NCAT) phantom that incorporated a non-uniform kidney model and clinically realistic organ activities and biokinetics was used. Projections were generated using a Monte Carlo (MC) simulation; noise effects were studied using 50 noise realizations with clinical count levels. Activity images were reconstructed using QSPECT with compensation for attenuation, scatter and collimator-detector response (CDR). Dose rate distributions were estimated by convolution of the activity image with a voxel S kernel. Cumulative DVHs were calculated from the phantom and QSPECT images and compared both qualitatively and quantitatively. We found that noise, PVEs, and ringing artifacts due to CDR compensation all degraded histogram estimates. Low-pass filtering and early termination of the iterative process were needed to reduce the effects of noise and ringing artifacts on DVHs, but resulted in increased degradations due to PVEs. Large objects with few features, such as the liver, had more accurate histogram estimates and required fewer iterations and more smoothing for optimal results. Smaller objects with fine details, such as the kidneys, required more iterations and less smoothing at early time points post-radiopharmaceutical administration but more smoothing and fewer iterations at later time points when the total organ activity was lower. The results of this study demonstrate the importance of using optimal reconstruction and regularization parameters. Optimal results were obtained with different parameters at each time point, but using a single set of parameters for all time points produced near-optimal dose-volume histograms.
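The cumulative DVH summary used above can be sketched in a few lines; the toy dose map and spherical organ mask below are illustrative assumptions.

```python
import numpy as np

def cumulative_dvh(dose, mask, bins=100):
    """Cumulative dose-volume histogram: fraction of the masked organ volume
    receiving at least each dose level."""
    d = dose[mask > 0]
    levels = np.linspace(0.0, d.max(), bins)
    volume_fraction = np.array([(d >= lv).mean() for lv in levels])
    return levels, volume_fraction

# Toy 3-D dose-rate map and a spherical "organ" mask
rng = np.random.default_rng(4)
dose = rng.gamma(shape=2.0, scale=1.0, size=(32, 32, 32))
z, y, x = np.indices(dose.shape)
mask = (z - 16) ** 2 + (y - 16) ** 2 + (x - 16) ** 2 < 10 ** 2
levels, vf = cumulative_dvh(dose, mask)
print(levels[::25], vf[::25])
```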
NASA Astrophysics Data System (ADS)
Roche-Lima, Abiel; Thulasiram, Ruppa K.
2012-02-01
Finite automata, in which each transition is augmented with an output label in addition to the familiar input label, are considered finite-state transducers. Transducers have been used to analyze some fundamental issues in bioinformatics. Weighted finite-state transducers have been proposed for pairwise alignments of DNA and protein sequences, as well as to develop kernels for computational biology. Machine learning algorithms for conditional transducers have been implemented and used for DNA sequence analysis. Transducer learning algorithms are based on conditional probability computation, calculated using techniques such as pair-database creation, normalization (with Maximum-Likelihood normalization) and parameter optimization (with Expectation-Maximization - EM). These techniques are intrinsically costly to compute, even more so when applied to bioinformatics, because the database sizes are large. In this work, we describe a parallel implementation of an algorithm to learn conditional transducers using these techniques. The algorithm is oriented to bioinformatics applications, such as alignments, phylogenetic trees, and other genome evolution studies. Several experiments were performed with the parallel and sequential algorithms on Westgrid (specifically, on the Breeze cluster). The results show that our parallel algorithm is scalable, because execution times are reduced considerably when the data size parameter is increased. Another experiment varied the precision parameter; in this case, we obtain smaller execution times using the parallel algorithm. Finally, the number of threads used to execute the parallel algorithm on the Breeze cluster was varied. In this last experiment, the speedup increased considerably when more threads were used; however, it converged for numbers of threads equal to or greater than 16.
High-resolution imaging of the large non-human primate brain using microPET: a feasibility study
NASA Astrophysics Data System (ADS)
Naidoo-Variawa, S.; Hey-Cunningham, A. J.; Lehnert, W.; Kench, P. L.; Kassiou, M.; Banati, R.; Meikle, S. R.
2007-11-01
The neuroanatomy and physiology of the baboon brain closely resembles that of the human brain and is well suited for evaluating promising new radioligands in non-human primates by PET and SPECT prior to their use in humans. These studies are commonly performed on clinical scanners with 5 mm spatial resolution at best, resulting in sub-optimal images for quantitative analysis. This study assessed the feasibility of using a microPET animal scanner to image the brains of large non-human primates, i.e. papio hamadryas (baboon) at high resolution. Factors affecting image accuracy, including scatter, attenuation and spatial resolution, were measured under conditions approximating a baboon brain and using different reconstruction strategies. Scatter fraction measured 32% at the centre of a 10 cm diameter phantom. Scatter correction increased image contrast by up to 21% but reduced the signal-to-noise ratio. Volume resolution was superior and more uniform using maximum a posteriori (MAP) reconstructed images (3.2-3.6 mm3 FWHM from centre to 4 cm offset) compared to both 3D ordered subsets expectation maximization (OSEM) (5.6-8.3 mm3) and 3D reprojection (3DRP) (5.9-9.1 mm3). A pilot 18F-2-fluoro-2-deoxy-d-glucose ([18F]FDG) scan was performed on a healthy female adult baboon. The pilot study demonstrated the ability to adequately resolve cortical and sub-cortical grey matter structures in the baboon brain and improved contrast when images were corrected for attenuation and scatter and reconstructed by MAP. We conclude that high resolution imaging of the baboon brain with microPET is feasible with appropriate choices of reconstruction strategy and corrections for degrading physical effects. Further work to develop suitable correction algorithms for high-resolution large primate imaging is warranted.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Maneru, F; Gracia, M; Gallardo, N
2015-06-15
Purpose: To present a simple and feasible method of voxel-S-value (VSV) dosimetry calculation for daily clinical use in radioembolization (RE) with (90)Y microspheres. Dose distributions are obtained and visualized over CT images. Methods: Spatial dose distributions and doses in liver and tumor are calculated for RE patients treated with Sirtex Medical microspheres at our center. Data obtained from the previous simulation of treatment were the basis for calculations: a Tc-99m macroaggregated albumin SPECT-CT study in a gammacamera (Infinia, General Electric Healthcare). Attenuation correction and the ordered-subsets expectation maximization (OSEM) algorithm were applied. For VSV calculations, both SPECT and CT were exported from the gammacamera workstation and registered with the radiotherapy treatment planning system (Eclipse, Varian Medical Systems). Convolution of the activity matrix and the local dose deposition kernel (S values) was implemented with in-house software based on Python code. The kernel was downloaded from www.medphys.it. The final dose distribution was evaluated with the free software Dicompyler. Results: Liver mean dose is consistent with Partition method calculations (accepted as a good standard). Tumor dose has not been evaluated due to the high dependence on its contouring. Small lesion size, hot spots in healthy tissue and blurred limits can strongly affect the dose distribution in tumors. Extra work includes export and import of images and other DICOM files, creation and calculation of a dummy external radiotherapy plan, the convolution calculation, and evaluation of the dose distribution with Dicompyler. Total time spent is less than 2 hours. Conclusion: VSV calculations do not require any extra appointment or any uncomfortable process for the patient. The total process is short enough to carry it out the same day as the simulation and to contribute to prescription decisions prior to treatment. Three-dimensional dose knowledge provides much more information than other methods of dose calculation usually applied in the clinic.
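The central calculation described above, convolution of the activity matrix with a voxel S-value kernel, can be sketched as follows; the Gaussian kernel here is a placeholder, not a real (90)Y S-value kernel.

```python
import numpy as np
from scipy.signal import fftconvolve

def voxel_s_value_dose(activity, s_kernel):
    """Absorbed-dose map as the convolution of the cumulated-activity map
    with a voxel S-value (local dose-deposition) kernel."""
    return fftconvolve(activity, s_kernel, mode="same")

# Toy example: illustrative 3-D activity map and an isotropic stand-in kernel
activity = np.zeros((32, 32, 32))
activity[10:20, 10:20, 10:20] = 1.0           # illustrative uptake region
z, y, x = np.indices((7, 7, 7)) - 3
r2 = x ** 2 + y ** 2 + z ** 2
s_kernel = np.exp(-r2 / 4.0)                  # placeholder, not a real 90Y kernel
s_kernel /= s_kernel.sum()
dose = voxel_s_value_dose(activity, s_kernel)
print(dose.max())
```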
NASA Astrophysics Data System (ADS)
Takenaka, H.; Teruyuki, N.; Nakajima, T. Y.; Higurashi, A.; Hashimoto, M.; Suzuki, K.; Uchida, J.; Nagao, T. M.; Shi, C.; Inoue, T.
2017-12-01
It is important to estimate the Earth's radiation budget accurately for understanding climate. Clouds can cool the Earth by reflecting solar radiation, but they also maintain warmth by absorbing and emitting terrestrial radiation. Similarly, aerosols affect the radiation budget through absorption and scattering of solar radiation. In this study, we developed a fast and accurate algorithm for the shortwave (SW) radiation budget and applied it to geostationary satellite data for rapid analysis. This enabled highly accurate monitoring of solar radiation and photovoltaic (PV) power generation. As a next step, we are updating the algorithm for the retrieval of aerosols and clouds, which provides the accurate atmospheric parameters needed for the estimation of solar radiation. (This research was supported in part by CREST/EMS).
Item-Based Top-N Recommendation Algorithms
2003-01-20
basket of items, utilized by many e-commerce sites, cannot take advantage of pre-computed user-to-user similarities. Finally, even though the...not discriminate between items that are present in frequent itemsets and items that are not, while still maintaining the computational advantages of... [Fragment of a dataset-statistics table; recoverable rows, with columns users, items, transactions, density, and average transactions per user:]
(truncated)   ...     ...     453219   0.02%   7.74
ccard         42629   68793   398619   0.01%   9.35
ecommerce     6667    17491   91222    0.08%   13.68
em            8002    1648    769311   5.83%   96.14
ml            943     1682    100000   6.31%   ...
2008-09-30
developing methods to simultaneously track multiple vocalizing marine mammals, we hope to contribute to the fields of marine mammal bioacoustics, ecology, and anthropogenic impact mitigation. ... N00014-05-1-0074 (OA Graduate Traineeship for E-M Nosal). LONG-TERM GOALS: The long-term goal of our research is to develop algorithms that use widely...
[Imputation methods for missing data in educational diagnostic evaluation].
Fernández-Alonso, Rubén; Suárez-Álvarez, Javier; Muñiz, José
2012-02-01
In the diagnostic evaluation of educational systems, self-reports are commonly used to collect data, both cognitive and orectic. For various reasons, in these self-reports, some of the students' data are frequently missing. The main goal of this research is to compare the performance of different imputation methods for missing data in the context of the evaluation of educational systems. On an empirical database of 5,000 subjects, 72 conditions were simulated: three levels of missing data, three types of loss mechanisms, and eight methods of imputation. The levels of missing data were 5%, 10%, and 20%. The loss mechanisms were set at: Missing completely at random, moderately conditioned, and strongly conditioned. The eight imputation methods used were: listwise deletion, replacement by the mean of the scale, by the item mean, the subject mean, the corrected subject mean, multiple regression, and Expectation-Maximization (EM) algorithm, with and without auxiliary variables. The results indicate that the recovery of the data is more accurate when using an appropriate combination of different methods of recovering lost data. When a case is incomplete, the mean of the subject works very well, whereas for completely lost data, multiple imputation with the EM algorithm is recommended. The use of this combination is especially recommended when data loss is greater and its loss mechanism is more conditioned. Lastly, the results are discussed, and some future lines of research are analyzed.
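Two of the simpler imputation methods compared above (item mean and subject mean) can be sketched as follows; the EM-based imputation with auxiliary variables is not shown, and the toy response matrix is an illustrative assumption.

```python
import numpy as np

def impute_item_mean(X):
    """Replace each missing entry with the mean of its item (column)."""
    X = X.copy()
    col_means = np.nanmean(X, axis=0)
    idx = np.where(np.isnan(X))
    X[idx] = np.take(col_means, idx[1])
    return X

def impute_subject_mean(X):
    """Replace each missing entry with the mean of that subject's (row's) observed items."""
    X = X.copy()
    row_means = np.nanmean(X, axis=1)
    idx = np.where(np.isnan(X))
    X[idx] = np.take(row_means, idx[0])
    return X

# Toy self-report matrix: 3 subjects x 3 items, NaN = missing
X = np.array([[3.0, np.nan, 4.0],
              [2.0, 1.0, np.nan],
              [np.nan, 5.0, 5.0]])
print(impute_item_mean(X))
print(impute_subject_mean(X))
```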
An open source multivariate framework for n-tissue segmentation with evaluation on public data.
Avants, Brian B; Tustison, Nicholas J; Wu, Jue; Cook, Philip A; Gee, James C
2011-12-01
We introduce Atropos, an ITK-based multivariate n-class open source segmentation algorithm distributed with ANTs ( http://www.picsl.upenn.edu/ANTs). The Bayesian formulation of the segmentation problem is solved using the Expectation Maximization (EM) algorithm with the modeling of the class intensities based on either parametric or non-parametric finite mixtures. Atropos is capable of incorporating spatial prior probability maps (sparse), prior label maps and/or Markov Random Field (MRF) modeling. Atropos has also been efficiently implemented to handle large quantities of possible labelings (in the experimental section, we use up to 69 classes) with a minimal memory footprint. This work describes the technical and implementation aspects of Atropos and evaluates its performance on two different ground-truth datasets. First, we use the BrainWeb dataset from Montreal Neurological Institute to evaluate three-tissue segmentation performance via (1) K-means segmentation without use of template data; (2) MRF segmentation with initialization by prior probability maps derived from a group template; (3) Prior-based segmentation with use of spatial prior probability maps derived from a group template. We also evaluate Atropos performance by using spatial priors to drive a 69-class EM segmentation problem derived from the Hammers atlas from University College London. These evaluation studies, combined with illustrative examples that exercise Atropos options, demonstrate both performance and wide applicability of this new platform-independent open source segmentation tool.
An Open Source Multivariate Framework for n-Tissue Segmentation with Evaluation on Public Data
Tustison, Nicholas J.; Wu, Jue; Cook, Philip A.; Gee, James C.
2012-01-01
We introduce Atropos, an ITK-based multivariate n-class open source segmentation algorithm distributed with ANTs (http://www.picsl.upenn.edu/ANTs). The Bayesian formulation of the segmentation problem is solved using the Expectation Maximization (EM) algorithm with the modeling of the class intensities based on either parametric or non-parametric finite mixtures. Atropos is capable of incorporating spatial prior probability maps (sparse), prior label maps and/or Markov Random Field (MRF) modeling. Atropos has also been efficiently implemented to handle large quantities of possible labelings (in the experimental section, we use up to 69 classes) with a minimal memory footprint. This work describes the technical and implementation aspects of Atropos and evaluates its performance on two different ground-truth datasets. First, we use the BrainWeb dataset from Montreal Neurological Institute to evaluate three-tissue segmentation performance via (1) K-means segmentation without use of template data; (2) MRF segmentation with initialization by prior probability maps derived from a group template; (3) Prior-based segmentation with use of spatial prior probability maps derived from a group template. We also evaluate Atropos performance by using spatial priors to drive a 69-class EM segmentation problem derived from the Hammers atlas from University College London. These evaluation studies, combined with illustrative examples that exercise Atropos options, demonstrate both performance and wide applicability of this new platform-independent open source segmentation tool. PMID:21373993
Wavelet-based 3-D inversion for frequency-domain airborne EM data
NASA Astrophysics Data System (ADS)
Liu, Yunhe; Farquharson, Colin G.; Yin, Changchun; Baranwal, Vikas C.
2018-04-01
In this paper, we propose a new wavelet-based 3-D inversion method for frequency-domain airborne electromagnetic (FDAEM) data. Instead of inverting the model in the space domain using a smoothing constraint, this new method recovers the model in the wavelet domain based on a sparsity constraint. In the wavelet domain, the model is represented by two types of coefficients, which contain both large- and fine-scale information about the model, meaning the wavelet-domain inversion is inherently multiresolution. To enforce the sparsity constraint, we minimize an L1-norm measure in the wavelet domain, which generally gives a sparse solution. The final inversion system is solved by an iteratively reweighted least-squares method. We investigate different orders of Daubechies wavelets in our inversion algorithm and test them on a synthetic frequency-domain AEM data set. The results show that higher-order wavelets, with their larger number of vanishing moments and greater regularity, deliver a more stable inversion process and better local resolution, while lower-order wavelets are simpler and less smooth, and thus capable of recovering sharp discontinuities if the model is simple. Finally, we test the new inversion algorithm on a frequency-domain helicopter EM (HEM) field data set acquired in Byneset, Norway. The wavelet-based 3-D inversion of the HEM data is compared to the result of an L2-norm-based 3-D inversion to further investigate the features of the new method.
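A generic, much-simplified sketch of a sparsity-constrained inversion solved by iteratively reweighted least squares is given below; it uses an explicit one-level Haar synthesis matrix and a random linear forward operator, not the paper's Daubechies wavelets or 3-D AEM forward modelling, and the regularization weight is an illustrative assumption.

```python
import numpy as np

def haar_synthesis(n):
    """One-level Haar synthesis matrix mapping wavelet coefficients to the model."""
    W = np.zeros((n, n))
    h = n // 2
    for k in range(h):
        W[2 * k, k] = W[2 * k + 1, k] = 1 / np.sqrt(2)   # scaling-coefficient columns
        W[2 * k, h + k] = 1 / np.sqrt(2)                 # detail-coefficient columns
        W[2 * k + 1, h + k] = -1 / np.sqrt(2)
    return W

def irls_l1(G, d, lam=0.1, n_iter=30, eps=1e-6):
    """Wavelet-domain inversion via IRLS:
    minimize ||G W c - d||^2 + lam * ||c||_1, returning the model m = W c."""
    W = haar_synthesis(G.shape[1])
    A = G @ W
    c = np.linalg.lstsq(A, d, rcond=None)[0]             # least-squares start
    for _ in range(n_iter):
        R = np.diag(1.0 / (np.abs(c) + eps))             # reweighting of the L1 term
        c = np.linalg.solve(A.T @ A + lam * R, A.T @ d)
    return W @ c

# Toy linear "survey": blocky model, random forward operator, noisy data
rng = np.random.default_rng(5)
m_true = np.repeat([0.0, 1.0, 0.0, -0.5], 4)
G = rng.normal(size=(24, m_true.size))
d = G @ m_true + rng.normal(scale=0.05, size=24)
print(np.round(irls_l1(G, d), 2))
```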
Prototype-Incorporated Emotional Neural Network.
Oyedotun, Oyebade K; Khashman, Adnan
2017-08-15
Artificial neural networks (ANNs) aim to simulate biological neural activities. Interestingly, many 'engineering' prospects in ANN research have relied on motivations from cognition and psychology studies. So far, two important learning theories that have been the subject of active research are the prototype and adaptive learning theories. The learning rules employed for ANNs can be related to adaptive learning theory, where several examples of the different classes in a task are supplied to the network for adjusting internal parameters. Conversely, the prototype-learning theory uses prototypes (representative examples); usually, one prototype per class of the different classes contained in the task. These prototypes are supplied for systematic matching with new examples so that class association can be achieved. In this paper, we propose and implement a novel neural network algorithm based on modifying the emotional neural network (EmNN) model to unify the prototype- and adaptive-learning theories. We refer to our new model as the 'prototype-incorporated EmNN'. Furthermore, we apply the proposed model to two real-life challenging tasks, namely, static hand-gesture recognition and face recognition, and compare the results to those obtained using the popular back-propagation neural network (BPNN), emotional BPNN (EmNN), deep networks, an exemplar classification model, and k-nearest neighbor.
Robust EM Continual Reassessment Method in Oncology Dose Finding
Yuan, Ying; Yin, Guosheng
2012-01-01
The continual reassessment method (CRM) is a commonly used dose-finding design for phase I clinical trials. Practical applications of this method have been restricted by two limitations: (1) the requirement that the toxicity outcome needs to be observed shortly after the initiation of the treatment; and (2) the potential sensitivity to the prespecified toxicity probability at each dose. To overcome these limitations, we naturally treat the unobserved toxicity outcomes as missing data, and use the expectation-maximization (EM) algorithm to estimate the dose toxicity probabilities based on the incomplete data to direct dose assignment. To enhance the robustness of the design, we propose prespecifying multiple sets of toxicity probabilities, each set corresponding to an individual CRM model. We carry out these multiple CRMs in parallel, across which model selection and model averaging procedures are used to make more robust inference. We evaluate the operating characteristics of the proposed robust EM-CRM designs through simulation studies and show that the proposed methods satisfactorily resolve both limitations of the CRM. Besides improving the MTD selection percentage, the new designs dramatically shorten the duration of the trial, and are robust to the prespecification of the toxicity probabilities. PMID:22375092
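A toy version of the missing-data idea above, using a single one-parameter power-model CRM with an E-step that replaces pending toxicity outcomes by their current expected values and a grid-search M-step, is sketched below; the published design's multiple skeletons, model selection, and model averaging are not included, and the skeleton, dose assignments, and outcomes are illustrative assumptions.

```python
import numpy as np

def power_model(skeleton, a):
    """One-parameter power (empiric) CRM model: p_d = skeleton_d ** exp(a)."""
    return skeleton ** np.exp(a)

def em_crm_estimate(skeleton, doses, outcomes, n_em=20, grid=np.linspace(-3, 3, 601)):
    """Toy EM-CRM: pending outcomes (None) are replaced in the E-step by their
    expected value under the current fit; the M-step is a grid-search MLE."""
    a = 0.0
    doses = np.asarray(doses)
    for _ in range(n_em):
        p_cur = power_model(skeleton[doses], a)
        w = np.array([p if o is None else float(o)            # E-step
                      for o, p in zip(outcomes, p_cur)])
        p_grid = power_model(skeleton[doses][:, None], grid)   # M-step: expected log-lik
        loglik = (w[:, None] * np.log(p_grid)
                  + (1 - w[:, None]) * np.log(1 - p_grid)).sum(axis=0)
        a = grid[np.argmax(loglik)]
    return a, power_model(skeleton, a)

skeleton = np.array([0.05, 0.12, 0.25, 0.40, 0.55])   # prespecified toxicity skeleton
doses    = [0, 0, 1, 1, 2, 2, 2]                      # assigned dose levels (0-indexed)
outcomes = [0, 0, 0, 1, 0, None, None]                # None = toxicity not yet observed
a_hat, p_hat = em_crm_estimate(skeleton, doses, outcomes)
print(a_hat, np.round(p_hat, 3))
```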
A programmable metasurface with dynamic polarization, scattering and focusing control
NASA Astrophysics Data System (ADS)
Yang, Huanhuan; Cao, Xiangyu; Yang, Fan; Gao, Jun; Xu, Shenheng; Li, Maokun; Chen, Xibi; Zhao, Yi; Zheng, Yuejun; Li, Sijia
2016-10-01
Diverse electromagnetic (EM) responses of a programmable metasurface with a relatively large scale have been investigated, where multiple functionalities are obtained on the same surface. The unit cell in the metasurface is integrated with one PIN diode, and thus a binary coded phase is realized for a single polarization. Exploiting this anisotropic characteristic, reconfigurable polarization conversion is presented first. Then the dynamic scattering performance for two kinds of sources, i.e. a plane wave and a point source, is carefully elaborated. To tailor the scattering properties, a genetic algorithm, normally based on binary coding, is coupled with the scattering pattern analysis to optimize the coding matrix. Besides, the inverse fast Fourier transform (IFFT) technique is also introduced to expedite the optimization process of a large metasurface. Since the coding control of each unit cell allows a local and direct modulation of the EM wave, various EM phenomena including anomalous reflection, diffusion, beam steering and beam forming are successfully demonstrated by both simulations and experiments. It is worthwhile to point out that a real-time switch among these functionalities is also achieved by using a field-programmable gate array (FPGA). All the results suggest that the proposed programmable metasurface has great potential for future applications.
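The scattering-pattern evaluation that a genetic algorithm would call repeatedly when optimizing the coding matrix can be sketched with an FFT-based array factor, as below; element patterns, mutual coupling and the paper's exact IFFT acceleration are ignored, and the coding matrices are illustrative.

```python
import numpy as np

def array_factor(coding, pad=8):
    """Far-field array factor of a 1-bit coded metasurface via a zero-padded 2-D FFT.
    coding: matrix of 0/1 states mapped to 0/pi reflection phases."""
    field = np.exp(1j * np.pi * coding)            # element reflection phases
    n = pad * max(coding.shape)                    # padding for finer angular sampling
    af = np.fft.fftshift(np.fft.fft2(field, s=(n, n)))
    return np.abs(af)

rng = np.random.default_rng(6)
random_code = rng.integers(0, 2, size=(16, 16))    # diffusion-like random coding
stripe_code = np.tile([0, 1], (16, 8))             # periodic coding -> anomalous beams
for code in (random_code, stripe_code):
    af = array_factor(code)
    print(f"peak/mean scattering ratio: {af.max() / af.mean():.1f}")
```

A coding optimizer would minimize the peak ratio for diffusion-like scattering reduction, or maximize energy in chosen directions for beam steering and forming.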
Metasurface Salisbury screen: achieving ultra-wideband microwave absorption.
Zhou, Ziheng; Chen, Ke; Zhao, Junming; Chen, Ping; Jiang, Tian; Zhu, Bo; Feng, Yijun; Li, Yue
2017-11-27
Metasurfaces have recently been demonstrated to provide full control over the phase response of electromagnetic (EM) wave scattering at subwavelength scales, enabling a wide range of practical applications. Here, we propose a comprehensive scheme for the efficient and flexible design of a metasurface Salisbury screen (MSS) capable of absorbing the impinging EM wave over an ultra-wide frequency band. We show that a properly designed reflective metasurface can substitute for the metallic ground of the conventional Salisbury screen, generating diverse resonances in a desirable way and thus providing large controllability over the absorption bandwidth. Based on this concept, we establish an equivalent circuit model to qualitatively analyze the resonances in the MSS and design algorithms to optimize its overall performance. Experiments demonstrate that the proposed design achieves an absorption bandwidth from 6 GHz to 30 GHz with an efficiency higher than 85%, much wider than that of the conventional Salisbury screen (7-17 GHz). The proposed MSS concept offers opportunities for flexibly designing thin electromagnetic absorbers with simultaneously ultra-wide bandwidth, polarization insensitivity, and wide incident-angle coverage, exhibiting promising potential for applications such as EM compatibility and stealth technology.
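For context on why the conventional Salisbury screen being improved upon is inherently narrowband, a minimal transmission-line sketch is shown below: a 377 Ω/sq resistive sheet above a grounded quarter-wave spacer absorbs strongly only near the design frequency. The spacer thickness and permittivity are illustrative assumptions.

```python
import numpy as np

def salisbury_absorption(freq_hz, spacer_mm=3.75, sheet_ohms_sq=377.0, eps_r=1.0):
    """Transmission-line model of a conventional Salisbury screen: a resistive sheet
    in parallel with a grounded (shorted) dielectric spacer. Returns 1 - |Gamma|^2."""
    eta0 = 376.73                                   # free-space wave impedance (ohm)
    c0 = 2.998e8
    beta = 2 * np.pi * freq_hz * np.sqrt(eps_r) / c0
    z_short = 1j * (eta0 / np.sqrt(eps_r)) * np.tan(beta * spacer_mm * 1e-3)
    z_in = 1.0 / (1.0 / sheet_ohms_sq + 1.0 / z_short)   # sheet in parallel with shorted stub
    gamma = (z_in - eta0) / (z_in + eta0)
    return 1.0 - np.abs(gamma) ** 2

f = np.linspace(2e9, 40e9, 500)
A = salisbury_absorption(f)   # peak absorption near c0 / (4 * spacer) ~= 20 GHz
print(round(float(f[np.argmax(A)]) / 1e9, 1), "GHz peak;",
      round(float((A > 0.85).mean()), 2), "fraction of the 2-40 GHz band above 85%")
```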
Integrated thermal and energy management of plug-in hybrid electric vehicles
NASA Astrophysics Data System (ADS)
Shams-Zahraei, Mojtaba; Kouzani, Abbas Z.; Kutter, Steffen; Bäker, Bernard
2012-10-01
In plug-in hybrid electric vehicles (PHEVs), the engine temperature declines due to reduced engine load and extended engine-off periods. Engine efficiency and emissions are known to depend on engine temperature, and temperature also influences the air-conditioner and cabin heater loads. In particular, while the engine is cold, the cabin heater demand must be met by the battery instead of by waste heat from the engine coolant. Existing energy management strategies (EMS) for PHEVs focus on improving fuel efficiency based on hot-engine characteristics, neglecting the effect of temperature on engine performance and on the vehicle power demand. This paper presents a new EMS incorporating an engine thermal management method, which derives the globally optimal battery charge depletion trajectories. A dynamic programming-based algorithm is developed to enforce the charge depletion boundaries while optimizing a fuel consumption cost function by controlling the engine power. The optimal control problem formulates the cost function over two state variables: battery charge and engine internal temperature. Simulation results demonstrate that temperature and the cabin heater/air-conditioner power demand can significantly influence the optimal EMS solution, and accordingly the fuel efficiency and emissions of PHEVs.
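A minimal backward dynamic-programming sketch of the charge-depletion optimization follows, using only one state (battery state of charge); the paper's second state, engine internal temperature, and the temperature-dependent heater load are omitted, and all grids, efficiencies, and power levels are illustrative assumptions.

```python
import numpy as np

def dp_charge_depletion(power_demand_kw, soc_min=0.3, dt_h=0.01, batt_kwh=10.0,
                        n_soc=101, eng_eff=0.30, fuel_kwh_per_l=8.9):
    """Backward DP over a battery state-of-charge grid: at each step the engine
    supplies part of the demand and the battery covers the rest; the policy
    minimizes total fuel volume while the SOC grid enforces the depletion bound."""
    T = len(power_demand_kw)
    soc = np.linspace(soc_min, 1.0, n_soc)
    J = np.zeros(n_soc)                                  # terminal cost-to-go
    policy = np.zeros((T, n_soc))
    engine_kw = np.linspace(0.0, 60.0, 31)               # candidate engine powers
    for t in range(T - 1, -1, -1):
        batt_kw = power_demand_kw[t] - engine_kw         # battery covers the remainder
        fuel_l = engine_kw * dt_h / (eng_eff * fuel_kwh_per_l)
        J_new = np.empty(n_soc)
        for i, s in enumerate(soc):
            s_next = s - batt_kw * dt_h / batt_kwh
            cost = fuel_l + np.interp(s_next, soc, J, left=np.inf, right=J[-1])
            k = int(np.argmin(cost))
            J_new[i], policy[t, i] = cost[k], engine_kw[k]
        J = J_new
    return J, policy

J, policy = dp_charge_depletion(np.full(50, 25.0))       # 0.5 h of constant 25 kW demand
print(round(float(J[-1]), 2), "L of fuel starting from a full battery")
```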
DOE Office of Scientific and Technical Information (OSTI.GOV)
Seyedhosseini, Mojtaba; Kumar, Ritwik; Jurrus, Elizabeth R.
2011-10-01
Automated neural circuit reconstruction through electron microscopy (EM) images is a challenging problem. In this paper, we present a novel method that exploits multi-scale contextual information together with Radon-like features (RLF) to learn a series of discriminative models. The main idea is to build a framework which is capable of extracting information about cell membranes from a large contextual area of an EM image in a computationally efficient way. Toward this goal, we extract RLF that can be computed efficiently from the input image and generate a scale-space representation of the context images that are obtained at the output of each discriminative model in the series. Compared to a single-scale model, the use of a multi-scale representation of the context image gives the subsequent classifiers access to a larger contextual area in an effective way. Our strategy is general and independent of the classifier and has the potential to be used in any context based framework. We demonstrate that our method outperforms the state-of-the-art algorithms in detection of neuron membranes in EM images.
Comparison of turbulence mitigation algorithms
NASA Astrophysics Data System (ADS)
Kozacik, Stephen T.; Paolini, Aaron; Sherman, Ariel; Bonnett, James; Kelmelis, Eric
2017-07-01
When capturing imagery over long distances, atmospheric turbulence often degrades the data, especially when observation paths are close to the ground or in hot environments. These issues manifest as time-varying scintillation and warping effects that decrease the effective resolution of the sensor and reduce actionable intelligence. In recent years, several image processing approaches to turbulence mitigation have shown promise. Each of these algorithms has different computational requirements, usability demands, and degrees of independence from camera sensors. They also produce different degrees of enhancement when applied to turbulent imagery. Additionally, some of these algorithms are applicable to real-time operational scenarios while others may only be suitable for postprocessing workflows. EM Photonics has been developing image-processing-based turbulence mitigation technology since 2005. We will compare techniques from the literature with our commercially available, real-time, GPU-accelerated turbulence mitigation software. These comparisons will be made using real (not synthetic), experimentally obtained data for a variety of conditions, including varying optical hardware, imaging range, subjects, and turbulence conditions. Comparison metrics will include image quality, video latency, computational complexity, and potential for real-time operation. Additionally, we will present a technique for quantitatively comparing turbulence mitigation algorithms using real images of radial resolution targets.
Wang, Zhu; Ma, Shuangge; Wang, Ching-Yun
2015-09-01
In health services and outcome research, count outcomes are frequently encountered and often have a large proportion of zeros. The zero-inflated negative binomial (ZINB) regression model has important applications for this type of data. With many possible candidate risk factors, this paper proposes new variable selection methods for the ZINB model. We consider the maximum likelihood function plus a penalty, including the least absolute shrinkage and selection operator (LASSO), the smoothly clipped absolute deviation (SCAD), and the minimax concave penalty (MCP). An EM (expectation-maximization) algorithm is proposed for estimating the model parameters and conducting variable selection simultaneously. This algorithm consists of estimating penalized weighted negative binomial models and penalized logistic models via the coordinate descent algorithm. Furthermore, statistical properties including the standard error formulae are provided. A simulation study shows that the new algorithm not only provides more accurate, or at least comparable, estimation, but is also more robust than traditional stepwise variable selection. The proposed methods are applied to analyze the health care demand in Germany using the open-source R package mpath. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
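To make the EM structure concrete, here is a stripped-down sketch for an intercept-only zero-inflated Poisson model, a simpler cousin of the penalized ZINB regression above (no covariates, no penalty, Poisson instead of negative binomial): the E-step computes the posterior probability that each observed zero is a structural zero, and the M-step performs weighted updates.

```python
import numpy as np

def zip_em(y, n_iter=200, tol=1e-8):
    """EM for an intercept-only zero-inflated Poisson model.
    Latent z_i = 1 means observation i is a structural zero."""
    y = np.asarray(y, float)
    pi, lam = 0.5, max(y.mean(), 1e-3)
    for _ in range(n_iter):
        # E-step: posterior probability that each observed zero is structural.
        z = np.where(y == 0, pi / (pi + (1 - pi) * np.exp(-lam)), 0.0)
        # M-step: weighted updates of the mixing weight and the Poisson mean.
        pi_new = z.mean()
        lam_new = np.sum((1 - z) * y) / np.sum(1 - z)
        if abs(pi_new - pi) + abs(lam_new - lam) < tol:
            pi, lam = pi_new, lam_new
            break
        pi, lam = pi_new, lam_new
    return pi, lam

rng = np.random.default_rng(0)
y = np.where(rng.random(2000) < 0.3, 0, rng.poisson(2.5, 2000))  # 30% structural zeros
print(zip_em(y))  # roughly (0.3, 2.5), up to sampling noise
```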
SEGMENTATION OF MITOCHONDRIA IN ELECTRON MICROSCOPY IMAGES USING ALGEBRAIC CURVES.
Seyedhosseini, Mojtaba; Ellisman, Mark H; Tasdizen, Tolga
2013-01-01
High-resolution microscopy techniques have been used to generate large volumes of data with enough details for understanding the complex structure of the nervous system. However, automatic techniques are required to segment cells and intracellular structures in these multi-terabyte datasets and make anatomical analysis possible on a large scale. We propose a fully automated method that exploits both shape information and regional statistics to segment irregularly shaped intracellular structures such as mitochondria in electron microscopy (EM) images. The main idea is to use algebraic curves to extract shape features together with texture features from image patches. Then, these powerful features are used to learn a random forest classifier, which can predict mitochondria locations precisely. Finally, the algebraic curves together with regional information are used to segment the mitochondria at the predicted locations. We demonstrate that our method outperforms the state-of-the-art algorithms in segmentation of mitochondria in EM images.
Joint Segmentation and Deformable Registration of Brain Scans Guided by a Tumor Growth Model
Gooya, Ali; Pohl, Kilian M.; Bilello, Michel; Biros, George; Davatzikos, Christos
2011-01-01
This paper presents an approach for joint segmentation and deformable registration of brain scans of glioma patients to a normal atlas. The proposed method is based on the Expectation Maximization (EM) algorithm that incorporates a glioma growth model for atlas seeding, a process which modifies the normal atlas into one with a tumor and edema. The modified atlas is registered into the patient space and utilized for the posterior probability estimation of various tissue labels. EM iteratively refines the estimates of the registration parameters, the posterior probabilities of tissue labels and the tumor growth model parameters. We have applied this approach to 10 glioma scans acquired with four Magnetic Resonance (MR) modalities (T1, T1-CE, T2 and FLAIR) and validated the results by comparing them to manual segmentations by clinical experts. The resulting segmentations look promising and quantitatively match well with the expert-provided ground truth. PMID:21995070
Iterative Stable Alignment and Clustering of 2D Transmission Electron Microscope Images
Yang, Zhengfan; Fang, Jia; Chittuluru, Johnathan; Asturias, Francisco J.; Penczek, Pawel A.
2012-01-01
Identification of homogeneous subsets of images in a macromolecular electron microscopy (EM) image data set is a critical step in single-particle analysis. The task is handled by iterative algorithms, whose performance is compromised by the compounded limitations of image alignment and K-means clustering. Here we describe an approach, iterative stable alignment and clustering (ISAC) that, relying on a new clustering method and on the concepts of stability and reproducibility, can extract validated, homogeneous subsets of images. ISAC requires only a small number of simple parameters and, with minimal human intervention, can eliminate bias from two-dimensional image clustering and maximize the quality of group averages that can be used for ab initio three-dimensional structural determination and analysis of macromolecular conformational variability. Repeated testing of the stability and reproducibility of a solution within ISAC eliminates heterogeneous or incorrect classes and introduces critical validation to the process of EM image clustering. PMID:22325773
Park, Seong C; Finnell, John T
2012-01-01
In 2009, Indianapolis launched an electronic medical record system within its ambulances and started to exchange patient data with the Indiana Network for Patient Care (INPC). This unique system allows EMS personnel to get important information prior to the patient's arrival at the hospital. In this descriptive study, we found EMS personnel requested patient data on 14% of all transports, with a "success" match rate of 46% and a match "failure" rate of 17%. The three major factors causing match "failure" were ZIP code (55%), patient name (22%), and birth date (12%). We conclude that the ZIP code matching process should be improved by limiting matching to 5-digit ZIP codes instead of ZIP+4 codes. Non-ZIP code identifiers may be a better choice, given inaccuracies and changes of the ZIP code in a patient's record.
Quality, Quantity, And Surprise! Trade-Offs In X-Raser ASAT Attrition
NASA Astrophysics Data System (ADS)
Callaham, Michael B.; Scibilia, Frank M.
1984-08-01
In order to characterize the effects of technological superiority, numerical superiority, and pre-emption on space battle outcomes, we have constructed a battle simulation in which "Red" and "Blue" ASATs, each armed with a specified number of x-ray lasers of specified range, move along specified orbits and fire on one another according to a pair of battle management algorithms. The simulated battle proceeds until apparent steady-state force levels are reached. Battle outcomes are characterized by terminal force ratio and by terminal force-exchange ratio as effective weapon range, multiplicity (x-rasers per ASAT), and pre-emptive role are varied parametrically. A major conclusion is that pre-emptive advantage increases with increasing x-raser range and multiplicity (x-rasers per ASAT) and with increasing force size. That is, the "use 'em or lose 'em" dilemma will become more stark as such weapons are refined and proliferated.
Electromagnetic Measurements in an Active Oilfield Environment
NASA Astrophysics Data System (ADS)
Schramm, K. A.; Aldridge, D. F.; Bartel, L. C.; Knox, H. A.; Weiss, C. J.
2015-12-01
An important issue in oilfield development pertains to mapping and monitoring of the fracture distributions (either natural or man-made) controlling subsurface fluid flow. Although microseismic monitoring and analysis have been used for this purpose for several decades, there remain several ambiguities and uncertainties with this approach. We are investigating a novel electromagnetic (EM) technique for detecting and mapping hydraulic fractures in a petroleum reservoir by injecting an electrically conductive contrast agent into an open fracture. The fracture is subsequently illuminated by a strong EM field radiated by a large engineered antenna. Specifically, a grounded electric current source is applied directly to the steel casing of the borehole, either at/near the wellhead or at a deep downhole point. Transient multicomponent EM signals (both electric and magnetic) scattered by the conductivity contrast are then recorded by a surface receiver array. We are presently utilizing advanced 3D numerical modeling algorithms to accurately simulate fracture responses, both before and after insertion of the conductive contrast agent. Model results compare favorably with EM field data recently acquired in a Permian Basin oilfield. However, extraction of the very-low-amplitude fracture signatures from noisy data requires effective noise suppression strategies such as long stacking times, rejection of outliers, and careful treatment of natural magnetotelluric fields. Dealing with the ever-present "episodic EM noise" typical in an active oilfield environment (associated with drilling, pumping, machinery, traffic, etc.) constitutes an ongoing problem. Sandia National Laboratories is a multi-program laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the US Department of Energy's National Nuclear Security Administration under contract DE-AC04-94AL85000.
Bettadapura, Radhakrishna; Rasheed, Muhibur; Vollrath, Antje; Bajaj, Chandrajit
2015-10-01
There continue to be increasing occurrences of both atomistic structure models in the PDB (possibly reconstructed from X-ray diffraction or NMR data), and 3D reconstructed cryo-electron microscopy (3D EM) maps (albeit at coarser resolution) of the same or homologous molecule or molecular assembly, deposited in the EMDB. To obtain the best possible structural model of the molecule at the best achievable resolution, and without any missing gaps, one typically aligns (matches and fits) the atomistic structure model with the 3D EM map. We discuss a new algorithm and generalized framework, named PF(2) fit (Polar Fast Fourier Fitting), for the best possible structural alignment of atomistic structures with 3D EM. While PF(2) fit enables only a rigid, six-dimensional (6D) alignment method, it augments prior work on 6D X-ray structure and 3D EM alignment in multiple ways: Scoring. PF(2) fit includes a new scoring scheme that, in addition to rewarding overlaps between the volumes occupied by the atomistic structure and 3D EM map, rewards overlaps between the volumes complementary to them. We quantitatively demonstrate how this new complementary scoring scheme improves upon existing approaches. PF(2) fit also includes two scoring functions, the non-uniform exterior penalty and the skeleton-secondary structure score, and implements the scattering potential score as an alternative to traditional Gaussian blurring. Search. PF(2) fit utilizes a fast polar Fourier search scheme, whose main advantage is the ability to search over uniformly and adaptively sampled subsets of the space of rigid-body motions. PF(2) fit also implements a new reranking search and scoring methodology that considerably improves alignment metrics in results obtained from the initial search.
Guo, Yan; Huang, Jingyi; Shi, Zhou; Li, Hongyi
2015-01-01
In coastal China, with its rapidly growing population, there is an urgent need to increase land area for agricultural production and urban development. One solution is land reclamation from coastal tidelands, but soil salinization is problematic. As such, it is very important to characterize and map the within-field variability of soil salinity in space and time. Conventional methods are often time-consuming, expensive, labor-intensive, and impractical. Fortunately, proximal sensing has become an important technology for characterizing within-field spatial variability. In this study, we employed the EM38 to study the spatial variability of soil salinity in a coastal paddy field. The significant correlation between ECa and EC1:5 (r > 0.9) allowed us to use EM38 data to characterize the spatial variability of soil salinity. Geostatistical methods were used to determine the horizontal spatio-temporal variability of soil salinity over three consecutive years. The study found that the distribution of salinity was heterogeneous and that the leaching of salts was more significant at the edges of the study field. By inverting the EM38 data using a Quasi-3D inversion algorithm, the vertical spatio-temporal variability of soil salinity was determined and the leaching of salts over time was easily identified. The methodology of this study can serve as guidance for researchers interested in understanding soil salinity development, as well as for land managers aiming for effective soil salinity monitoring and management practices. To better characterize soil salinity variations at greater depth, the deeper mode of the EM38 (EM38v) as well as other EMI instruments (e.g. DUALEM-421) can be incorporated to conduct Quasi-3D inversions for deeper soil profiles. PMID:26020969
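The ECa-to-EC1:5 relationship that underpins the survey can be illustrated with a simple least-squares calibration followed by prediction over a survey grid; all values below are illustrative, not the study's measurements.

```python
import numpy as np

# Toy calibration of EM38 apparent conductivity (ECa, mS/m) against lab-measured
# EC1:5 (dS/m) at a few sample points, then prediction over a survey grid.
eca_samples = np.array([40.0, 65.0, 90.0, 120.0, 150.0, 180.0])
ec15_samples = np.array([0.35, 0.60, 0.85, 1.20, 1.50, 1.85])

slope, intercept = np.polyfit(eca_samples, ec15_samples, 1)   # linear calibration
r = np.corrcoef(eca_samples, ec15_samples)[0, 1]
print(f"EC1:5 = {slope:.4f} * ECa + {intercept:.3f}, r = {r:.3f}")

eca_survey = np.random.default_rng(1).uniform(40, 180, size=(50, 50))  # EM38 survey grid
ec15_map = slope * eca_survey + intercept                              # predicted salinity map
print(round(float(ec15_map.mean()), 3))
```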
jInv: A Modular and Scalable Framework for Electromagnetic Inverse Problems
NASA Astrophysics Data System (ADS)
Belliveau, P. T.; Haber, E.
2016-12-01
Inversion is a key tool in the interpretation of geophysical electromagnetic (EM) data. Three-dimensional (3D) EM inversion is very computationally expensive and practical software for inverting large 3D EM surveys must be able to take advantage of high performance computing (HPC) resources. It has traditionally been difficult to achieve those goals in a high level dynamic programming environment that allows rapid development and testing of new algorithms, which is important in a research setting. With those goals in mind, we have developed jInv, a framework for PDE constrained parameter estimation problems. jInv provides optimization and regularization routines, a framework for user defined forward problems, and interfaces to several direct and iterative solvers for sparse linear systems. The forward modeling framework provides finite volume discretizations of differential operators on rectangular tensor product meshes and tetrahedral unstructured meshes that can be used to easily construct forward modeling and sensitivity routines for forward problems described by partial differential equations. jInv is written in the emerging programming language Julia. Julia is a dynamic language targeted at the computational science community with a focus on high performance and native support for parallel programming. We have developed frequency and time-domain EM forward modeling and sensitivity routines for jInv. We will illustrate its capabilities and performance with two synthetic time-domain EM inversion examples. First, in airborne surveys, which use many sources, we achieve distributed memory parallelism by decoupling the forward and inverse meshes and performing forward modeling for each source on small, locally refined meshes. Secondly, we invert grounded source time-domain data from a gradient array style induced polarization survey using a novel time-stepping technique that allows us to compute data from different time-steps in parallel. These examples both show that it is possible to invert large scale 3D time-domain EM datasets within a modular, extensible framework written in a high-level, easy to use programming language.
Recent progress and future directions in protein-protein docking.
Ritchie, David W
2008-02-01
This article gives an overview of recent progress in protein-protein docking and it identifies several directions for future research. Recent results from the CAPRI blind docking experiments show that docking algorithms are steadily improving in both reliability and accuracy. Current docking algorithms employ a range of efficient search and scoring strategies, including e.g. fast Fourier transform correlations, geometric hashing, and Monte Carlo techniques. These approaches can often produce a relatively small list of up to a few thousand orientations, amongst which a near-native binding mode is often observed. However, despite the use of improved scoring functions which typically include models of desolvation, hydrophobicity, and electrostatics, current algorithms still have difficulty in identifying the correct solution from the list of false positives, or decoys. Nonetheless, significant progress is being made through better use of bioinformatics, biochemical, and biophysical information such as e.g. sequence conservation analysis, protein interaction databases, alanine scanning, and NMR residual dipolar coupling restraints to help identify key binding residues. Promising new approaches to incorporate models of protein flexibility during docking are being developed, including the use of molecular dynamics snapshots, rotameric and off-rotamer searches, internal coordinate mechanics, and principal component analysis based techniques. Some investigators now use explicit solvent models in their docking protocols. Many of these approaches can be computationally intensive, although new silicon chip technologies such as programmable graphics processor units are beginning to offer competitive alternatives to conventional high performance computer systems. As cryo-EM techniques improve apace, docking NMR and X-ray protein structures into low resolution EM density maps is helping to bridge the resolution gap between these complementary techniques. The use of symmetry and fragment assembly constraints are also helping to make possible docking-based predictions of large multimeric protein complexes. In the near future, the closer integration of docking algorithms with protein interface prediction software, structural databases, and sequence analysis techniques should help produce better predictions of protein interaction networks and more accurate structural models of the fundamental molecular interactions within the cell.
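One of the search strategies mentioned above, the FFT correlation scan, can be sketched in a few lines: all relative translations of two rigid occupancy grids are scored at once by one FFT cross-correlation, and a full docking run would repeat this over sampled ligand rotations with richer shape and electrostatic weights. The toy grids below are illustrative.

```python
import numpy as np

def fft_translation_scan(receptor_grid, ligand_grid):
    """Score all relative translations of two 3D occupancy grids at once via FFT
    cross-correlation (the trick behind many FFT-based rigid-body dockers).
    A single ligand rotation is assumed; full docking repeats this over rotations."""
    n = [receptor_grid.shape[i] + ligand_grid.shape[i] - 1 for i in range(3)]
    R = np.fft.rfftn(receptor_grid, n)
    L = np.fft.rfftn(ligand_grid[::-1, ::-1, ::-1], n)   # flipping turns convolution into correlation
    return np.fft.irfftn(R * L, n)

rng = np.random.default_rng(0)
receptor = (rng.random((32, 32, 32)) > 0.7).astype(float)
ligand = (rng.random((12, 12, 12)) > 0.7).astype(float)
scores = fft_translation_scan(receptor, ligand)
best = np.unravel_index(np.argmax(scores), scores.shape)
print("best translation (grid units):", best, "score:", float(scores[best]))
```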
Extraction of tidal channel networks from airborne scanning laser altimetry
NASA Astrophysics Data System (ADS)
Mason, David C.; Scott, Tania R.; Wang, Hai-Jing
Tidal channel networks are important features of the inter-tidal zone, and play a key role in tidal propagation and in the evolution of salt marshes and tidal flats. The study of their morphology is currently an active area of research, and a number of theories related to networks have been developed which require validation using dense and extensive observations of network forms and cross-sections. The conventional method of measuring networks is cumbersome and subjective, involving manual digitisation of aerial photographs in conjunction with field measurement of channel depths and widths for selected parts of the network. This paper describes a semi-automatic technique developed to extract networks from high-resolution LiDAR data of the inter-tidal zone. A multi-level knowledge-based approach has been implemented, whereby low-level algorithms first extract channel fragments based mainly on image properties then a high-level processing stage improves the network using domain knowledge. The approach adopted at low level uses multi-scale edge detection to detect channel edges, then associates adjacent anti-parallel edges together to form channels. The higher level processing includes a channel repair mechanism. The algorithm may be extended to extract networks from aerial photographs as well as LiDAR data. Its performance is illustrated using LiDAR data of two study sites, the River Ems, Germany and the Venice Lagoon. For the River Ems data, the error of omission for the automatic channel extractor is 26%, partly because numerous small channels are lost because they fall below the edge threshold, though these are less than 10 cm deep and unlikely to be hydraulically significant. The error of commission is lower, at 11%. For the Venice Lagoon data, the error of omission is 14%, but the error of commission is 42%, due partly to the difficulty of interpreting channels in these natural scenes. As a benchmark, previous work has shown that this type of algorithm specifically designed for extracting tidal networks from LiDAR data is able to achieve substantially improved results compared with those obtained using standard algorithms for drainage network extraction from Digital Terrain Models.
Baldewsing, Radj A; Schaar, Johannes A; Mastik, Frits; Oomens, Cees W J; van der Steen, Antonius F W
2005-04-01
Intravascular ultrasound (IVUS) elastography visualizes local radial strain of arteries in so-called elastograms to detect rupture-prone plaques. However, due to the unknown arterial stress distribution these elastograms cannot be directly interpreted as a morphology and material composition image. To overcome this limitation we have developed a method that reconstructs a Young's modulus image from an elastogram. This method is especially suited for thin-cap fibroatheromas (TCFAs), i.e., plaques with a media region containing a lipid pool covered by a cap. Reconstruction is done by a minimization algorithm that matches the strain image output, calculated with a parametric finite element model (PFEM) representation of a TCFA, to an elastogram by iteratively updating the PFEM geometry and material parameters. These geometry parameters delineate the TCFA media, lipid pool and cap regions by circles. The material parameter for each region is a Young's modulus, EM, EL, and EC, respectively. The method was successfully tested on computer-simulated TCFAs (n = 2), one defined by circles, the other by tracing TCFA histology, and additionally on a physical phantom (n = 1) having a stiff wall (measured EM = 16.8 kPa) with an eccentric soft region (measured EL = 4.2 kPa). Finally, it was applied on human coronary plaques in vitro (n = 1) and in vivo (n = 1). The corresponding simulated and measured elastograms of these plaques showed radial strain values from 0% up to 2% at a pressure differential of 20, 20, 1, 20, and 1 mmHg respectively. The used/reconstructed Young's moduli [kPa] were for the circular plaque EL = 50/66, EM = 1500/1484, EC = 2000/2047, for the traced plaque EL = 25/1, EM = 1000/1148, EC = 1500/1491, for the phantom EL = 4.2/4 kPa, EM = 16.8/16, for the in vitro plaque EL = n.a./29, EM = n.a./647, EC = n.a./1784 kPa and for the in vivo plaque EL = n.a./2, EM = n.a./188, Ec = n.a./188 kPa.
Efficient Inversion of Mult-frequency and Multi-Source Electromagnetic Data
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gary D. Egbert
2007-03-22
The project covered by this report focused on development of efficient but robust non-linear inversion algorithms for electromagnetic induction data, in particular for data collected with multiple receivers and multiple transmitters, a situation extremely common in geophysical EM subsurface imaging methods. A key observation is that for such multi-transmitter problems each step in commonly used linearized iterative limited memory search schemes such as conjugate gradients (CG) requires solution of forward and adjoint EM problems for each of the N frequencies or sources, essentially generating data sensitivities for an N dimensional data-subspace. These multiple sensitivities allow a good approximation to the full Jacobian of the data mapping to be built up in many fewer search steps than would be required by application of textbook optimization methods, which take no account of the multiplicity of forward problems that must be solved for each search step. We have applied this idea to develop a hybrid inversion scheme that combines features of the iterative limited memory type methods with a Newton-type approach using a partial calculation of the Jacobian. Initial tests on 2D problems show that the new approach produces results essentially identical to a Newton type Occam minimum structure inversion, while running more rapidly than an iterative (fixed regularization parameter) CG style inversion. Memory requirements, while greater than for something like CG, are modest enough that even in 3D the scheme should allow 3D inverse problems to be solved on a common desktop PC, at least for modest (~ 100 sites, 15-20 frequencies) data sets. A secondary focus of the research has been development of a modular system for EM inversion, using an object oriented approach. This system has proven useful for more rapid prototyping of inversion algorithms, in particular allowing initial development and testing to be conducted with two-dimensional example problems, before approaching more computationally cumbersome three-dimensional problems.
2017-01-01
[Figure caption fragment: inverted effective ONVMS for an AN M30 bomb in a test-stand scenario, oriented 45 degrees at a depth of 150 cm (top) and oriented vertically at a depth of 210 cm (bottom); the red lines are the total ONVMS for a library AN M30 bomb.]
Sharpening spots: correcting for bleedover in cDNA array images.
Therneau, Terry; Tschumper, Renee C; Jelinek, Diane
2002-03-01
For cDNA array methods that depend on imaging of a radiolabel, we show that bleedover of one spot onto another, due to the gap between the array and the imaging medium, can be a major problem. The images can be sharpened, however, using a blind deconvolution method based on the EM algorithm. The sharpened images look like a set of donuts, which concurs with our knowledge of the spotting process. Oversharpened images are actually useful as well, in locating the center of each spot.
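The non-blind core of such an EM-based sharpening step is the Richardson-Lucy iteration, i.e. the EM algorithm for a Poisson imaging model. The sketch below assumes a known Gaussian bleedover kernel instead of estimating it blindly as the paper does; the spot grid and kernel width are illustrative.

```python
import numpy as np
from scipy.signal import fftconvolve

def richardson_lucy(observed, psf, n_iter=50):
    """Richardson-Lucy deconvolution (the EM algorithm for Poisson data with a
    known blur kernel): multiplicative updates of the underlying image."""
    estimate = np.full_like(observed, observed.mean())
    psf_flip = psf[::-1, ::-1]
    for _ in range(n_iter):
        blurred = fftconvolve(estimate, psf, mode="same") + 1e-12
        estimate *= fftconvolve(observed / blurred, psf_flip, mode="same")
    return estimate

# Toy example: a grid of "spots" blurred by bleedover, then sharpened.
x, y = np.meshgrid(np.arange(64), np.arange(64))
truth = ((x % 16 == 8) & (y % 16 == 8)).astype(float) * 100.0
g = np.exp(-(np.arange(-7, 8)[:, None] ** 2 + np.arange(-7, 8)[None, :] ** 2) / (2 * 3.0 ** 2))
psf = g / g.sum()
observed = fftconvolve(truth, psf, mode="same")
sharp = richardson_lucy(observed, psf)
print(float(observed.max()), float(sharp.max()))  # sharpened peaks move back toward the true amplitude
```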
Detection of delamination defects in CFRP materials using ultrasonic signal processing.
Benammar, Abdessalem; Drai, Redouane; Guessoum, Abderrezak
2008-12-01
In this paper, signal processing techniques are tested for their ability to resolve echoes associated with delaminations in carbon fiber-reinforced polymer multi-layered composite materials (CFRP) detected by ultrasonic methods. These methods include split spectrum processing (SSP) and the expectation-maximization (EM) algorithm. A simulation study on defect detection was performed, and results were validated experimentally on CFRP with and without delamination defects taken from aircraft. Comparison of the methods for their ability to resolve echoes are made.
A comparison of algorithms for inference and learning in probabilistic graphical models.
Frey, Brendan J; Jojic, Nebojsa
2005-09-01
Research into methods for reasoning under uncertainty is currently one of the most exciting areas of artificial intelligence, largely because it has recently become possible to record, store, and process large amounts of data. While impressive achievements have been made in pattern classification problems such as handwritten character recognition, face detection, speaker identification, and prediction of gene function, it is even more exciting that researchers are on the verge of introducing systems that can perform large-scale combinatorial analyses of data, decomposing the data into interacting components. For example, computational methods for automatic scene analysis are now emerging in the computer vision community. These methods decompose an input image into its constituent objects, lighting conditions, motion patterns, etc. Two of the main challenges are finding effective representations and models in specific applications and finding efficient algorithms for inference and learning in these models. In this paper, we advocate the use of graph-based probability models and their associated inference and learning algorithms. We review exact techniques and various approximate, computationally efficient techniques, including iterated conditional modes, the expectation maximization (EM) algorithm, Gibbs sampling, the mean field method, variational techniques, structured variational techniques and the sum-product algorithm ("loopy" belief propagation). We describe how each technique can be applied in a vision model of multiple, occluding objects and contrast the behaviors and performances of the techniques using a unifying cost function, free energy.
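As a concrete instance of the EM algorithm listed among the reviewed techniques, here is a short EM fit of a one-dimensional, two-component Gaussian mixture; the synthetic data are illustrative.

```python
import numpy as np

def em_gmm_1d(x, k=2, n_iter=100, seed=0):
    """EM for a 1-D Gaussian mixture: the E-step computes responsibilities,
    the M-step re-estimates mixing weights, means, and variances."""
    rng = np.random.default_rng(seed)
    mu = rng.choice(x, k, replace=False)
    var = np.full(k, x.var())
    pi = np.full(k, 1.0 / k)
    for _ in range(n_iter):
        # E-step: responsibility of each component for each point.
        dens = pi * np.exp(-0.5 * (x[:, None] - mu) ** 2 / var) / np.sqrt(2 * np.pi * var)
        resp = dens / dens.sum(axis=1, keepdims=True)
        # M-step: weighted parameter updates.
        nk = resp.sum(axis=0)
        pi = nk / len(x)
        mu = (resp * x[:, None]).sum(axis=0) / nk
        var = (resp * (x[:, None] - mu) ** 2).sum(axis=0) / nk
    return pi, mu, var

rng = np.random.default_rng(1)
data = np.concatenate([rng.normal(-2, 1, 500), rng.normal(3, 0.5, 500)])
print([np.round(v, 2) for v in em_gmm_1d(data)])
```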
Cardiac-gated parametric images from 82Rb PET from dynamic frames and direct 4D reconstruction.
Germino, Mary; Carson, Richard E
2018-02-01
Cardiac perfusion PET data can be reconstructed as a dynamic sequence and kinetic modeling performed to quantify myocardial blood flow, or reconstructed as static gated images to quantify function. Parametric images from dynamic PET are conventionally not gated, to allow use of all events with lower noise. An alternative method for dynamic PET is to incorporate the kinetic model into the reconstruction algorithm itself, bypassing the generation of a time series of emission images and directly producing parametric images. So-called "direct reconstruction" can produce parametric images with lower noise than the conventional method because the noise distribution is more easily modeled in projection space than in image space. In this work, we develop direct reconstruction of cardiac-gated parametric images for 82Rb PET with an extension of the Parametric Motion compensation OSEM List mode Algorithm for Resolution-recovery reconstruction for the one tissue model (PMOLAR-1T). PMOLAR-1T was extended to accommodate model terms to account for spillover from the left and right ventricles into the myocardium. The algorithm was evaluated on a 4D simulated 82Rb dataset, including a perfusion defect, as well as a human 82Rb list mode acquisition. The simulated list mode was subsampled into replicates, each with counts comparable to one gate of a gated acquisition. Parametric images were produced by the indirect (separate reconstructions and modeling) and direct methods for each of eight low-count and eight normal-count replicates of the simulated data, and each of eight cardiac gates for the human data. For the direct method, two initialization schemes were tested: uniform initialization, and initialization with the filtered iteration 1 result of the indirect method. For the human dataset, event-by-event respiratory motion compensation was included. The indirect and direct methods were compared for the simulated dataset in terms of bias and coefficient of variation as a function of iteration. Convergence of direct reconstruction was slow with uniform initialization; lower bias was achieved in fewer iterations by initializing with the filtered indirect iteration 1 images. For most parameters and regions evaluated, the direct method achieved the same or lower absolute bias at matched iteration as the indirect method, with 23%-65% lower noise. Additionally, the direct method gave better contrast between the perfusion defect and surrounding normal tissue than the indirect method. Gated parametric images from the human dataset had comparable relative performance of indirect and direct, in terms of mean parameter values per iteration. Changes in myocardial wall thickness and blood pool size across gates were readily visible in the gated parametric images, with higher contrast between myocardium and left ventricle blood pool in parametric images than gated SUV images. Direct reconstruction can produce parametric images with less noise than the indirect method, opening the potential utility of gated parametric imaging for perfusion PET. © 2017 American Association of Physicists in Medicine.
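For reference, the kinetic model underlying both the indirect and direct approaches is the one-tissue compartment model; the paper's version adds left- and right-ventricular spillover terms, which are omitted here. A hedged sketch of "indirect" fitting of K1 and k2 to a single time-activity curve on a toy uniform time grid with a synthetic input function:

```python
import numpy as np
from scipy.optimize import curve_fit

def one_tissue_tac(t, K1, k2, Cp):
    """Tissue activity for the one-tissue compartment model:
    C_T(t) = K1 * exp(-k2 t) convolved with the arterial input Cp(t)."""
    dt = t[1] - t[0]
    return dt * np.convolve(Cp, K1 * np.exp(-k2 * t))[: len(t)]

# Indirect kinetic modeling on a toy, uniformly sampled time grid.
t = np.arange(0, 300, 1.0)                       # seconds
Cp = 50 * t * np.exp(-t / 30.0)                  # synthetic arterial input function
true_tac = one_tissue_tac(t, 0.8, 0.12, Cp)
noisy_tac = true_tac + np.random.default_rng(0).normal(0, 5, t.size)
(K1, k2), _ = curve_fit(lambda tt, K1, k2: one_tissue_tac(tt, K1, k2, Cp),
                        t, noisy_tac, p0=(0.5, 0.1))
print(round(K1, 3), round(k2, 3))                # roughly recovers K1 = 0.8, k2 = 0.12
```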
Speech Enhancement, Gain, and Noise Spectrum Adaptation Using Approximate Bayesian Estimation
Hao, Jiucang; Attias, Hagai; Nagarajan, Srikantan; Lee, Te-Won; Sejnowski, Terrence J.
2010-01-01
This paper presents a new approximate Bayesian estimator for enhancing a noisy speech signal. The speech model is assumed to be a Gaussian mixture model (GMM) in the log-spectral domain. This is in contrast to most current models, which work in the frequency domain. Exact signal estimation is a computationally intractable problem. We derive three approximations to enhance the efficiency of signal estimation. The Gaussian approximation transforms the log-spectral domain GMM into the frequency domain using a minimum Kullback–Leibler (KL) divergence criterion. The frequency domain Laplace method computes the maximum a posteriori (MAP) estimator for the spectral amplitude. Correspondingly, the log-spectral domain Laplace method computes the MAP estimator for the log-spectral amplitude. Further, the gain and noise spectrum adaptation are implemented using the expectation–maximization (EM) algorithm within the GMM under the Gaussian approximation. The proposed algorithms are evaluated by applying them to enhance speech corrupted by speech-shaped noise (SSN). The experimental results demonstrate that the proposed algorithms offer improved signal-to-noise ratio, lower word recognition error rate, and less spectral distortion. PMID:20428253
Marine Controlled-Source Electromagnetic 2D Inversion for synthetic models.
NASA Astrophysics Data System (ADS)
Liu, Y.; Li, Y.
2016-12-01
We present a 2D inverse algorithm for frequency domain marine controlled-source electromagnetic (CSEM) data, which is based on the regularized Gauss-Newton approach. As a forward solver, our parallel adaptive finite element forward modeling program is employed. It is a self-adaptive, goal-oriented grid refinement algorithm in which a finite element analysis is performed on a sequence of refined meshes. The mesh refinement process is guided by a dual error estimate weighting to bias refinement towards elements that affect the solution at the EM receiver locations. With the use of the direct solver (MUMPS), we can effectively compute the electromagnetic fields for multi-sources and parametric sensitivities. We also implement the parallel data domain decomposition approach of Key and Ovall (2011), with the goal of being able to compute accurate responses in parallel for complicated models and a full suite of data parameters typical of offshore CSEM surveys. All minimizations are carried out by using the Gauss-Newton algorithm and model perturbations at each iteration step are obtained by using the Inexact Conjugate Gradient iteration method. Synthetic test inversions are presented.
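The core model update of a regularized Gauss-Newton inversion such as this one can be written compactly. The sketch below uses a dense linear toy forward operator and a first-difference roughness operator; a real CSEM inversion would instead form Jacobian products with the adaptive finite-element solver and obtain the update with an inexact conjugate gradient solve.

```python
import numpy as np

def gauss_newton_step(J, residual, m, m_ref, L, lam):
    """One regularized Gauss-Newton model update for the objective
    ||d - F(m)||^2 + lam * ||L (m - m_ref)||^2, given the Jacobian J = dF/dm
    and residual = d - F(m). (An iterative CG solve would replace np.linalg.solve at scale.)"""
    A = J.T @ J + lam * (L.T @ L)
    b = J.T @ residual - lam * (L.T @ (L @ (m - m_ref)))
    return np.linalg.solve(A, b)

# Toy linear "forward problem" d = G m with smoothness regularization.
rng = np.random.default_rng(0)
G = rng.normal(size=(40, 20))
m_true = np.sin(np.linspace(0, 3, 20))
d = G @ m_true + rng.normal(0, 0.05, 40)
L = np.eye(20) - np.eye(20, k=1)                      # first-difference roughness operator
m = np.zeros(20)
for _ in range(5):
    m = m + gauss_newton_step(G, d - G @ m, m, np.zeros(20), L, lam=1.0)
print(round(float(np.linalg.norm(m - m_true)), 3))
```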
Rating Movies and Rating the Raters Who Rate Them
Zhou, Hua; Lange, Kenneth
2010-01-01
The movie distribution company Netflix has generated considerable buzz in the statistics community by offering a million dollar prize for improvements to its movie rating system. Among the statisticians and computer scientists who have disclosed their techniques, the emphasis has been on machine learning approaches. This article has the modest goal of discussing a simple model for movie rating and other forms of democratic rating. Because the model involves a large number of parameters, it is nontrivial to carry out maximum likelihood estimation. Here we derive a straightforward EM algorithm from the perspective of the more general MM algorithm. The algorithm is capable of finding the global maximum on a likelihood landscape littered with inferior modes. We apply two variants of the model to a dataset from the MovieLens archive and compare their results. Our model identifies quirky raters, redefines the raw rankings, and permits imputation of missing ratings. The model is intended to stimulate discussion and development of better theory rather than to win the prize. It has the added benefit of introducing readers to some of the issues connected with analyzing high-dimensional data. PMID:20802818
Xilmass: A New Approach toward the Identification of Cross-Linked Peptides.
Yılmaz, Şule; Drepper, Friedel; Hulstaert, Niels; Černič, Maša; Gevaert, Kris; Economou, Anastassios; Warscheid, Bettina; Martens, Lennart; Vandermarliere, Elien
2016-10-18
Chemical cross-linking coupled with mass spectrometry plays an important role in unravelling protein interactions, especially weak and transient ones. Moreover, cross-linking complements several structural determination approaches such as cryo-EM. Although several computational approaches are available for the annotation of spectra obtained from cross-linked peptides, there remains room for improvement. Here, we present Xilmass, a novel algorithm to identify cross-linked peptides that introduces two new concepts: (i) the cross-linked peptides are represented in the search database such that the cross-linking sites are explicitly encoded, and (ii) the scoring function derived from the Andromeda algorithm was adapted to score against a theoretical tandem mass spectrometry (MS/MS) spectrum that contains the peaks from all possible fragment ions of a cross-linked peptide pair. The performance of Xilmass was evaluated against the recently published Kojak and the popular pLink algorithms on a calmodulin-plectin complex data set, as well as three additional, published data sets. The results show that Xilmass typically had the highest number of identified distinct cross-linked sites and also the highest number of predicted cross-linked sites.
Computational Software for Fitting Seismic Data to Epidemic-Type Aftershock Sequence Models
NASA Astrophysics Data System (ADS)
Chu, A.
2014-12-01
Modern earthquake catalogs are often analyzed using spatial-temporal point process models such as the epidemic-type aftershock sequence (ETAS) models of Ogata (1998). My work introduces software implementing two of the ETAS models described in Ogata (1998). To find the maximum likelihood estimates (MLEs), the software estimates the homogeneous background rate parameter and the temporal and spatial parameters that govern triggering effects by applying the expectation-maximization (EM) algorithm introduced in Veen and Schoenberg (2008). Although other computer programs exist for similar data modeling purposes, using the EM algorithm has the benefits of stability and robustness (Veen and Schoenberg, 2008). Spatial shapes that are very long and narrow cause difficulties in optimization convergence, and problems with flat or multi-modal log-likelihood functions encounter similar issues. The program uses a robust method to preset a parameter to overcome this non-convergence issue. In addition to model fitting, the software is equipped with useful tools for examining model fitting results, for example visualization of the estimated conditional intensity and estimation of the expected number of triggered aftershocks. A simulation generator is also provided, with flexible spatial shapes that may be defined by the user. This open-source software has a very simple user interface. The user may execute it on a local computer, and the program also has the potential to be hosted online. Java is used for the software's core computing part, and an optional interface to the statistical package R is provided.
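A purely temporal, hedged sketch of the ETAS conditional intensity and the E-step of the Veen-Schoenberg EM scheme (each event's probability of being a background event) is given below; the parameter values and toy catalog are illustrative, and only the closed-form background-rate M-step update is shown.

```python
import numpy as np

def etas_intensity(t, past_events, mu, K, c, p):
    """Temporal ETAS conditional intensity lambda(t) = mu + sum_i K / (t - t_i + c)^p
    over past events t_i < t (a purely temporal simplification of Ogata's model)."""
    past = past_events[past_events < t]
    return mu + np.sum(K / (t - past + c) ** p)

def em_e_step(events, mu, K, c, p):
    """E-step: probability that each event is a background event rather than
    triggered by an earlier event; the M-step re-fits parameters from these weights
    (only the closed-form background-rate update is shown below)."""
    prob_bg = np.empty(len(events))
    for j, tj in enumerate(events):
        lam = etas_intensity(tj, events[:j], mu, K, c, p)
        prob_bg[j] = mu / lam
    return prob_bg

events = np.sort(np.random.default_rng(0).uniform(0, 1000, 200))   # toy catalog times (days)
prob_bg = em_e_step(events, mu=0.15, K=0.05, c=0.01, p=1.1)
mu_new = prob_bg.sum() / (events[-1] - events[0])                   # M-step update of mu
print(round(float(mu_new), 4))
```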
SPECT reconstruction using DCT-induced tight framelet regularization
NASA Astrophysics Data System (ADS)
Zhang, Jiahan; Li, Si; Xu, Yuesheng; Schmidtlein, C. R.; Lipson, Edward D.; Feiglin, David H.; Krol, Andrzej
2015-03-01
Wavelet transforms have been successfully applied in many fields of image processing. Yet, to our knowledge, they have never been directly incorporated into the objective function in Emission Computed Tomography (ECT) image reconstruction. Our aim has been to investigate whether the ℓ1-norm of non-decimated discrete cosine transform (DCT) coefficients of the estimated radiotracer distribution could be effectively used as the regularization term for penalized-likelihood (PL) reconstruction, where a regularizer is used to enforce image smoothness in the reconstruction. In this study, the ℓ1-norm of the 2D DCT wavelet decomposition was used as the regularization term. The Preconditioned Alternating Projection Algorithm (PAPA), which we proposed in earlier work to solve PL reconstruction with non-differentiable regularizers, was used to solve this optimization problem. The DCT wavelet decompositions were performed on the transaxial reconstructed images. We reconstructed Monte Carlo simulated SPECT data obtained for a numerical phantom with Gaussian blobs as hot lesions and with a warm random lumpy background. Images reconstructed using the proposed method exhibited better noise suppression and improved lesion conspicuity, compared with images reconstructed using the expectation maximization (EM) algorithm with a Gaussian post filter (GPF). Also, the mean square error (MSE) was smaller, compared with EM-GPF. A critical and challenging aspect of this method was the selection of optimal parameters. In summary, our numerical experiments demonstrated that the ℓ1-norm of the non-decimated DCT wavelet frame regularizer shows promise for SPECT image reconstruction with the PAPA method.
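A hedged sketch of the regularizer itself follows: the ℓ1-norm of an image's DCT coefficients and its proximal (soft-thresholding) step, the kind of non-differentiable term that PAPA-type algorithms interleave with likelihood updates. An orthogonal, decimated 2D DCT is used here for brevity instead of the non-decimated frame in the paper; the test image and threshold are illustrative.

```python
import numpy as np
from scipy.fft import dctn, idctn

def dct_l1(image):
    """The l1-norm of the image's 2-D DCT coefficients (the sparsity regularizer)."""
    return np.abs(dctn(image, norm="ortho")).sum()

def dct_l1_prox(image, threshold):
    """Proximal (denoising) step for that regularizer: soft-threshold the DCT
    coefficients and transform back."""
    c = dctn(image, norm="ortho")
    c = np.sign(c) * np.maximum(np.abs(c) - threshold, 0.0)
    return idctn(c, norm="ortho")

img = np.zeros((64, 64))
img[20:28, 30:38] = 1.0                                     # a "hot blob"
noisy = img + np.random.default_rng(0).normal(0, 0.2, img.shape)
print(round(dct_l1(noisy), 1), round(dct_l1(dct_l1_prox(noisy, 0.3)), 1))
```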
A high precision position sensor design and its signal processing algorithm for a maglev train.
Xue, Song; Long, Zhiqiang; He, Ning; Chang, Wensen
2012-01-01
High-precision positioning technology for a high-speed maglev train with an electromagnetic suspension (EMS) system is studied. First, the basic structure and functions of the position sensor are introduced and some key techniques to enhance the positioning precision are designed. Then, in order to further improve the positioning signal quality and the fault-tolerant ability of the sensor, a new kind of discrete-time tracking differentiator (TD) is proposed based on nonlinear optimal control theory. This new TD has good filtering and differentiation performance and a small calculation load, making it suitable for real-time signal processing. The stability, convergence properties and frequency characteristics of the TD are studied and analyzed thoroughly. The delay constant of the TD is derived and an effective time delay compensation algorithm is proposed. Based on the TD technology, a filtering process is introduced to improve the positioning signal waveform when the sensor is under bad working conditions, and a two-sensor switching algorithm is designed to eliminate the positioning errors caused by the joint gaps of the long stator. The effectiveness and stability of the sensor and its signal processing algorithms are demonstrated by experiments on a test train during a long-term test run.
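For orientation, the classical discrete-time tracking differentiator of Han is sketched below as a generic stand-in (the abstract proposes a different TD derived from nonlinear optimal control); the sampling period, speed factor and test signal are illustrative.

```python
import numpy as np

def fhan(x1, x2, r, h):
    """Han's time-optimal synthesis function used by the classical discrete-time TD."""
    d = r * h
    d0 = d * h
    y = x1 + h * x2
    a0 = np.sqrt(d * d + 8.0 * r * abs(y))
    a = x2 + np.sign(y) * (a0 - d) / 2.0 if abs(y) > d0 else x2 + y / h
    return -r * np.sign(a) if abs(a) > d else -r * a / d

def tracking_differentiator(signal, h=1e-3, r=5000.0):
    """Track a noisy signal, returning a filtered estimate x1 and its derivative x2."""
    x1, x2 = signal[0], 0.0
    x1_out, x2_out = [], []
    for v in signal:
        x1, x2 = x1 + h * x2, x2 + h * fhan(x1 - v, x2, r, h)
        x1_out.append(x1)
        x2_out.append(x2)
    return np.array(x1_out), np.array(x2_out)

t = np.arange(0, 1, 1e-3)
noisy = np.sin(2 * np.pi * 5 * t) + np.random.default_rng(0).normal(0, 0.05, t.size)
x1, x2 = tracking_differentiator(noisy)
true_deriv = 2 * np.pi * 5 * np.cos(2 * np.pi * 5 * t)
print(round(float(np.corrcoef(x2, true_deriv)[0, 1]), 3))  # derivative estimate tracks the truth
```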
A High Precision Position Sensor Design and Its Signal Processing Algorithm for a Maglev Train
Xue, Song; Long, Zhiqiang; He, Ning; Chang, Wensen
2012-01-01
High precision positioning technology for a kind of high speed maglev train with an electromagnetic suspension (EMS) system is studied. At first, the basic structure and functions of the position sensor are introduced and some key techniques to enhance the positioning precision are designed. Then, in order to further improve the positioning signal quality and the fault-tolerant ability of the sensor, a new kind of discrete-time tracking differentiator (TD) is proposed based on nonlinear optimal control theory. This new TD has good filtering and differentiating performance and a small calculation load, making it suitable for real-time signal processing. The stability, convergence properties and frequency characteristics of the TD are studied and analyzed thoroughly. The delay constant of the TD is determined and an effective time delay compensation algorithm is proposed. Based on the TD technology, a filtering process is introduced to improve the positioning signal waveform when the sensor operates under bad working conditions, and a two-sensor switching algorithm is designed to eliminate the positioning errors caused by the joint gaps of the long stator. The effectiveness and stability of the sensor and its signal processing algorithms are proved by experiments on a test train during a long-term test run. PMID:22778582
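A tracking differentiator produces a smoothed copy of a noisy signal together with an estimate of its derivative. The sketch below shows a simple linear second-order discrete-time TD for illustration only; the paper's differentiator is a nonlinear design derived from optimal control theory, which is not reproduced here, and the sampling period and speed factor are assumptions.

```python
import numpy as np

def linear_td(v, h=0.001, r=50.0):
    """Simple linear second-order tracking differentiator.
    x1 tracks the input v, x2 estimates its derivative; r sets the speed."""
    x1, x2 = float(v[0]), 0.0
    x1_out, x2_out = [], []
    for vk in v:
        # Simultaneous forward-Euler update of both states.
        x1, x2 = (x1 + h * x2,
                  x2 + h * (-r * r * (x1 - vk) - 2.0 * r * x2))
        x1_out.append(x1)
        x2_out.append(x2)
    return np.array(x1_out), np.array(x2_out)

# Example: differentiate a noisy 5 Hz sine wave sampled at 1 kHz.
t = np.arange(0, 1, 0.001)
noisy = np.sin(2 * np.pi * 5 * t) + 0.01 * np.random.randn(t.size)
tracked, derivative = linear_td(noisy)
```

Larger `r` reduces the tracking delay but passes more noise to the derivative estimate, which is the trade-off the time delay compensation addresses.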
Sensor Data Quality and Angular Rate Down-Selection Algorithms on SLS EM-1
NASA Technical Reports Server (NTRS)
Park, Thomas; Smith, Austin; Oliver, T. Emerson
2018-01-01
The NASA Space Launch System Block 1 launch vehicle is equipped with an Inertial Navigation System (INS) and multiple Rate Gyro Assemblies (RGA) that are used in the Guidance, Navigation, and Control (GN&C) algorithms. The INS provides the inertial position, velocity, and attitude of the vehicle along with both angular rate and specific force measurements. Additionally, multiple sets of co-located rate gyros supply angular rate data. The collection of angular rate data, taken along the launch vehicle, is used to separate vehicle motion from flexible body dynamics. Since the system architecture uses redundant sensors, the capability was developed to evaluate the health (or validity) of the independent measurements. A suite of Sensor Data Quality (SDQ) algorithms is responsible for assessing the angular rate data from the redundant sensors. When failures are detected, SDQ will take the appropriate action and disqualify or remove faulted sensors from forward processing. Additionally, the SDQ algorithms contain logic for down-selecting the angular rate data used by the GN&C software from the set of healthy measurements. This paper explores the trades and analyses that were performed in selecting a set of robust fault-detection algorithms included in the GN&C flight software. These trades included both an assessment of hardware-provided health and status data as well as an evaluation of different algorithms based on time-to-detection, type of failures detected, and probability of false positives. We then provide an overview of the algorithms used for both fault detection and measurement down-selection. We next discuss the role of trajectory design, flexible-body models, and vehicle response to off-nominal conditions in setting the detection thresholds. Lastly, we present lessons learned from software integration and hardware-in-the-loop testing.
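One common pattern for this kind of redundancy management is to disqualify any channel whose reading deviates too far from the consensus of the others and then down-select to the median of the remaining healthy channels. The sketch below illustrates that generic pattern only; it is not the SLS flight-software logic, and the threshold and variable names are assumptions.

```python
import numpy as np

def down_select(rates, threshold=0.5):
    """Flag channels that disagree with the median of all channels by more
    than `threshold`, then return the median of the remaining healthy ones."""
    rates = np.asarray(rates, dtype=float)
    consensus = np.median(rates)
    healthy = np.abs(rates - consensus) <= threshold
    if not healthy.any():          # all channels disqualified: fail safe
        return None, healthy
    return np.median(rates[healthy]), healthy

# Example: four redundant roll-rate measurements (deg/s), one faulted high.
selected, healthy_mask = down_select([1.02, 0.98, 1.01, 5.70])
```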
Load shifting with the use of home energy management system implemented in FPGA
NASA Astrophysics Data System (ADS)
Bazydło, Grzegorz; Wermiński, Szymon
2017-08-01
Increasing power demand in the Electrical Power System (EPS) causes significant peaks in the daily load curve and overloads transmission lines. The large variability in energy consumption in the EPS, combined with unpredictable weather events, can lead to situations in which power limits must be introduced, or industrial customers in a given area even disconnected, to preserve the stability of the EPS; this causes financial losses. Transmission System Operators are therefore looking for additional ways to reduce peak power, because existing approaches (mainly building new intervention power units or introducing tariff programs) are unsatisfactory given their high cost and insufficient power reduction. The paper presents an approach to load shifting using a home Energy Management System (EMS) installed at small end-users' premises. The home energy management algorithm, executed by the EMS controller, is modeled using the Unified Modeling Language (UML). The UML model is then translated into a Verilog description and finally implemented in a Field Programmable Gate Array (FPGA). The advantages of the proposed approach are the relatively low cost of the reduction service, small loss of end-users' comfort, and convenient maintenance of the EMS. A practical example illustrating the proposed approach and a calculation of the potential gains from its implementation are also presented.
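The core of such a controller is a scheduling rule: deferrable appliances are postponed whenever total household demand would push consumption above a peak limit set by the operator. The Python sketch below illustrates that rule only; the original controller is described in UML and implemented in Verilog on an FPGA, and all names, profiles, and limits here are assumptions.

```python
def schedule_loads(base_profile, deferrable, peak_limit):
    """Greedy load shifting: place each deferrable load (power in kW,
    duration in hours) at the earliest hour where it fits under the limit."""
    profile = list(base_profile)                  # hourly base demand, kW
    for power, duration in deferrable:
        for start in range(len(profile) - duration + 1):
            window = profile[start:start + duration]
            if all(p + power <= peak_limit for p in window):
                for h in range(start, start + duration):
                    profile[h] += power
                break                              # load scheduled; next one
    return profile

# Example: a washing machine (2 kW, 2 h) and a dishwasher (1.5 kW, 1 h)
# scheduled against a 24-hour base profile with a 5 kW peak limit.
shifted = schedule_loads([3.0] * 8 + [4.5] * 8 + [3.5] * 8,
                         [(2.0, 2), (1.5, 1)], peak_limit=5.0)
```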
Commowick, Olivier; Warfield, Simon K
2010-01-01
In order to evaluate the quality of segmentations of an image and assess intra- and inter-expert variability in segmentation performance, an Expectation Maximization (EM) algorithm for Simultaneous Truth And Performance Level Estimation (STAPLE) was recently developed. This algorithm, originally presented for segmentation validation, has since been used for many applications, such as atlas construction and decision fusion. However, the manual delineation of structures of interest is a very time-consuming and burdensome task. Further, as the time required and burden of manual delineation increase, the accuracy of the delineation decreases. Therefore, it may be desirable to ask the experts to delineate only a reduced number of structures, or the segmentation of all structures by all experts may simply not be achieved. Fusion from data with some structures not segmented by each expert should be carried out in a manner that accounts for the missing information. In other applications, locally inconsistent segmentations may drive the STAPLE algorithm into an undesirable local optimum, leading to misclassifications or misleading expert performance parameters. We present a new algorithm that allows fusion with partial delineation and which can avoid convergence to undesirable local optima in the presence of strongly inconsistent segmentations. The algorithm extends STAPLE by incorporating prior probabilities for the expert performance parameters. This is achieved through a Maximum A Posteriori formulation, where the prior probabilities for the performance parameters are modeled by a beta distribution. We demonstrate that this new algorithm enables dramatically improved fusion from data with partial delineation by each expert in comparison to fusion with STAPLE. PMID:20879379
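In STAPLE, each expert's performance is summarized by a sensitivity p and specificity q, re-estimated at every M-step from the current voxel-wise reference-standard probabilities W; a MAP formulation with beta priors simply adds prior pseudo-counts to those updates. The sketch below shows a minimal binary, single-expert version of that M-step; the hyperparameters and variable names are illustrative, not the authors' implementation.

```python
import numpy as np

def map_performance_update(W, D, a_p=5.0, b_p=1.5, a_q=5.0, b_q=1.5):
    """MAP M-step for one rater: W is the voxel-wise probability that the
    true label is 1, D is the rater's binary segmentation. Beta(a, b) priors
    regularize sensitivity p and specificity q toward plausible values."""
    p = (np.sum(W * D) + a_p - 1.0) / (np.sum(W) + a_p + b_p - 2.0)
    q = (np.sum((1.0 - W) * (1.0 - D)) + a_q - 1.0) / \
        (np.sum(1.0 - W) + a_q + b_q - 2.0)
    return p, q

# Example: a rater who segments most, but not all, of the likely foreground.
W = np.array([0.9, 0.8, 0.7, 0.1, 0.05])
D = np.array([1, 1, 0, 0, 0])
p_hat, q_hat = map_performance_update(W, D)
```

With a flat Beta(1, 1) prior the update reduces to the ordinary STAPLE maximum likelihood estimate; stronger priors keep the performance parameters from collapsing when an expert delineates only part of the structures.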
Nitzschke, Rainer; Doehn, Christoph; Kersten, Jan F; Blanz, Julian; Kalwa, Tobias J; Scotti, Norman A; Kubitz, Jens C
2017-04-04
The present study evaluates whether the quality of advanced cardiac life support (ALS) is improved with an interactive prototype assist device. This device consists of an automated external defibrillator linked to a ventilator and provides synchronised visual and acoustic instructions for guidance through the ALS algorithm and assistance for face-mask ventilations. We compared the cardiopulmonary resuscitation (CPR) quality of emergency medical system (EMS) staff members using the study device or standard equipment in a mannequin simulation study with a prospective, controlled, randomised cross-over study design. The main outcomes were the effect of the study device compared to the standard equipment and the effect of the number of prior ALS trainings of the EMS staff on the CPR quality. Data were analysed using analyses of covariance (ANCOVA) and binary logistic regression, accounting for the study design. In 106 simulations of 56 two-person rescuer teams, the mean hands-off time was 24.5% with the study equipment and 23.5% with the standard equipment (difference 1.0% (95% CI: -0.4 to 2.5%); p = 0.156). With both types of equipment, the hands-off time decreased with an increasing cumulative number of previous CPR trainings (p = 0.042). The study equipment reduced the mean time until administration of adrenaline (epinephrine) by 23 s (p = 0.003) and that of amiodarone by 17 s (p = 0.016). It also increased the mean number of changes in the person doing chest compressions (0.6 per simulation; p < 0.001) and decreased the mean number of chest compressions (2.8 per minute; p = 0.022) and the mean number of ventilations (1.8 per minute; p < 0.001). The chance of administering amiodarone at the appropriate time was higher, with an odds ratio of 4.15, with the use of the study equipment CPR.com compared to the standard equipment (p = 0.004). With an increasing number of prior CPR trainings, the time intervals in the ALS algorithm until the defibrillations decreased with the standard equipment but increased with the study device. EMS staff with limited training in CPR benefit from guidance through the ALS algorithm by the study device. However, the study device somewhat reduced the ALS quality of well-trained rescuers and can therefore only be recommended for ALS providers with limited experience.
Spatially adapted augmentation of age-specific atlas-based segmentation using patch-based priors
NASA Astrophysics Data System (ADS)
Liu, Mengyuan; Seshamani, Sharmishtaa; Harrylock, Lisa; Kitsch, Averi; Miller, Steven; Chau, Van; Poskitt, Kenneth; Rousseau, Francois; Studholme, Colin
2014-03-01
One of the most common approaches to MRI brain tissue segmentation is to employ an atlas prior to initialize an Expectation-Maximization (EM) image labeling scheme using a statistical model of MRI intensities. This prior is commonly derived from a set of manually segmented training data from the population of interest. However, in cases where subject anatomy varies significantly from the prior anatomical average model (for example when extreme developmental abnormalities or brain injuries occur), the prior tissue map does not provide adequate information about the observed MRI intensities to ensure the EM algorithm converges to an anatomically accurate labeling of the MRI. In this paper, we present a novel approach for automatic segmentation of such cases. This approach augments the atlas-based EM segmentation by exploring methods to build a hybrid tissue segmentation scheme that seeks to learn where an atlas prior fails (due to inadequate representation of anatomical variation in the statistical atlas) and utilizes an alternative prior derived from a patch-driven search of the atlas data. We describe a framework for incorporating this patch-based augmentation of EM (PBAEM) into a 4D age-specific atlas-based segmentation of developing brain anatomy. The proposed approach was evaluated on a set of MRI brain scans of premature neonates with ages ranging from 27.29 to 46.43 gestational weeks (GWs). Results indicated superior performance compared to the conventional atlas-based segmentation method, providing improved segmentation accuracy for gray matter, white matter, ventricles and sulcal CSF regions.
NASA Astrophysics Data System (ADS)
Yozgatligil, Ceylan; Aslan, Sipan; Iyigun, Cem; Batmaz, Inci
2013-04-01
This study aims to compare several imputation methods for completing the missing values of spatio-temporal meteorological time series. To this end, six imputation methods are assessed with respect to various criteria, including accuracy, robustness, precision, and efficiency, for artificially created missing data in monthly total precipitation and mean temperature series obtained from the Turkish State Meteorological Service. Of these methods, simple arithmetic average, normal ratio (NR), and NR weighted with correlations comprise the simple ones, whereas the multilayer perceptron type neural network and the multiple imputation strategy based on expectation-maximization with Markov chain Monte Carlo (EM-MCMC) are the computationally intensive ones. In addition, we propose a modification of the EM-MCMC method. Besides using a conventional accuracy measure based on squared errors, we also suggest the correlation dimension (CD) technique of nonlinear dynamic time series analysis, which takes spatio-temporal dependencies into account, for evaluating imputation performance. Based on the detailed graphical and quantitative analysis, it can be said that although the computational methods, particularly the EM-MCMC method, are computationally inefficient, they seem favorable for imputation of meteorological time series with respect to different missingness periods, considering both measures and both series studied. To conclude, using the EM-MCMC algorithm to impute missing values before conducting any statistical analyses of meteorological data will definitely decrease the amount of uncertainty and give more robust results. Moreover, the CD measure can be recommended for the performance evaluation of missing data imputation, particularly with computational methods, since it gives more precise results for meteorological time series.
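A standard building block behind EM-based imputation is the EM algorithm for a multivariate Gaussian with values missing at random: each E-step fills missing entries with their conditional means (plus a covariance correction), and each M-step re-estimates the mean and covariance. The sketch below shows that generic algorithm, not the authors' EM-MCMC procedure or their modification of it.

```python
import numpy as np

def em_gaussian_impute(X, n_iter=50):
    """EM for the mean and covariance of a multivariate normal with values
    missing at random (NaN); returns imputed data, mean, and covariance."""
    X = np.array(X, dtype=float)
    n, d = X.shape
    miss = np.isnan(X)
    Xf = np.where(miss, np.nanmean(X, axis=0), X)      # initial fill
    mu, cov = Xf.mean(axis=0), np.cov(Xf, rowvar=False)
    for _ in range(n_iter):
        corr = np.zeros((d, d))
        for i in range(n):
            m, o = miss[i], ~miss[i]
            if not m.any():
                continue
            S_oo_inv = np.linalg.inv(cov[np.ix_(o, o)])
            # E-step: conditional mean of the missing block given the observed one.
            Xf[i, m] = mu[m] + cov[np.ix_(m, o)] @ S_oo_inv @ (Xf[i, o] - mu[o])
            # Conditional covariance of the missing block, needed in the M-step.
            corr[np.ix_(m, m)] += (cov[np.ix_(m, m)] -
                                   cov[np.ix_(m, o)] @ S_oo_inv @ cov[np.ix_(o, m)])
        # M-step: update mean and covariance from the completed data.
        mu = Xf.mean(axis=0)
        diff = Xf - mu
        cov = (diff.T @ diff + corr) / n
    return Xf, mu, cov
```

Multiple imputation then draws several plausible completions (e.g., via MCMC) rather than returning the single conditional-mean fill shown here, which is what lets downstream analyses propagate the imputation uncertainty.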
A workflow for the automatic segmentation of organelles in electron microscopy image stacks
Perez, Alex J.; Seyedhosseini, Mojtaba; Deerinck, Thomas J.; Bushong, Eric A.; Panda, Satchidananda; Tasdizen, Tolga; Ellisman, Mark H.
2014-01-01
Electron microscopy (EM) facilitates analysis of the form, distribution, and functional status of key organelle systems in various pathological processes, including those associated with neurodegenerative disease. Such EM data often provide important new insights into the underlying disease mechanisms. The development of more accurate and efficient methods to quantify changes in subcellular microanatomy has already proven key to understanding the pathogenesis of Parkinson's and Alzheimer's diseases, as well as glaucoma. While our ability to acquire large volumes of 3D EM data is progressing rapidly, more advanced analysis tools are needed to assist in measuring precise three-dimensional morphologies of organelles within data sets that can include hundreds to thousands of whole cells. Although new imaging instrument throughputs can exceed teravoxels of data per day, image segmentation and analysis remain significant bottlenecks to achieving quantitative descriptions of whole cell structural organellomes. Here, we present a novel method for the automatic segmentation of organelles in 3D EM image stacks. Segmentations are generated using only 2D image information, making the method suitable for anisotropic imaging techniques such as serial block-face scanning electron microscopy (SBEM). Additionally, no assumptions about 3D organelle morphology are made, ensuring the method can be easily expanded to any number of structurally and functionally diverse organelles. Following the presentation of our algorithm, we validate its performance by assessing the segmentation accuracy of different organelle targets in an example SBEM dataset and demonstrate that it can be efficiently parallelized on supercomputing resources, resulting in a dramatic reduction in runtime. PMID:25426032
Generalized Wishart Mixtures for Unsupervised Classification of PolSAR Data
NASA Astrophysics Data System (ADS)
Li, Lan; Chen, Erxue; Li, Zengyuan
2013-01-01
This paper presents an unsupervised clustering algorithm based on the expectation maximization (EM) algorithm for finite mixture modelling, using the complex Wishart probability density function (PDF) for the class-conditional densities. The mixture model makes it possible to describe heterogeneous thematic classes that are not well fitted by a unimodal Wishart distribution. To make the computation fast and robust, we use the recently proposed generalized gamma distribution (GΓD) for the single-polarization intensity data to form the initial partition. We then use the Wishart probability density function of the corresponding sample covariance matrix to calculate the posterior class probabilities for each pixel. The posterior class probabilities are used as the prior probability estimates of each class and as weights for all class parameter updates. The proposed method is evaluated and compared with the Wishart H-Alpha-A classification. Preliminary results show that the proposed method has better performance.
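The EM machinery here is the standard finite-mixture recipe: an E-step computes each pixel's posterior probability of belonging to each class from the class-conditional densities and mixing weights, and an M-step re-estimates the weights (and class parameters) from those posteriors. The sketch below shows that generic skeleton with a pluggable log-density, since a full complex-Wishart density for covariance matrices would be considerably longer; all names are illustrative.

```python
import numpy as np

def em_mixture(data, log_pdf, params, weights, n_iter=20, update_params=None):
    """Generic EM for a finite mixture. `log_pdf(x, theta)` returns the log
    class-conditional density; `update_params(data, r)` re-fits each class
    from its responsibilities (omitted here to keep the skeleton short)."""
    weights = np.asarray(weights, dtype=float)
    for _ in range(n_iter):
        # E-step: responsibilities r[i, k] proportional to w_k * f_k(x_i).
        log_r = np.array([[np.log(w) + log_pdf(x, th)
                           for w, th in zip(weights, params)] for x in data])
        log_r -= log_r.max(axis=1, keepdims=True)       # numerical stability
        r = np.exp(log_r)
        r /= r.sum(axis=1, keepdims=True)
        # M-step: mixing weights are the average responsibilities.
        weights = r.mean(axis=0)
        if update_params is not None:
            params = update_params(data, r)
    return weights, params, r
```

In the PolSAR setting, `log_pdf` would be the complex Wishart log-density of each pixel's sample covariance matrix, and the initial `params`/`weights` would come from the GΓD-based partition of the intensity data.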
Joint channel estimation and multi-user detection for multipath fading channels in DS-CDMA systems
NASA Astrophysics Data System (ADS)
Wu, Sau-Hsuan; Kuo, C.-C. Jay
2002-11-01
The technique of joint blind channel estimation and multiple access interference (MAI) suppression for an asynchronous code-division multiple-access (CDMA) system is investigated in this research. To identify and track dispersive time-varying fading channels, and to avoid the phase ambiguity that comes with second-order statistics approaches, a sliding-window scheme using the expectation maximization (EM) algorithm is proposed. The complexity of joint channel equalization and symbol detection for all users increases exponentially with system loading and the channel memory, and the situation is exacerbated if strong inter-symbol interference (ISI) exists. To reduce the complexity and the number of samples required for channel estimation, a blind multiuser detector is developed. Together with multi-stage interference cancellation using the soft outputs provided by this detector, our algorithm can track fading channels with no phase ambiguity even when channel gains attenuate close to zero.
Robust Multimodal Dictionary Learning
Cao, Tian; Jojic, Vladimir; Modla, Shannon; Powell, Debbie; Czymmek, Kirk; Niethammer, Marc
2014-01-01
We propose a robust multimodal dictionary learning method for multimodal images. Joint dictionary learning for both modalities may be impaired by a lack of correspondence between image modalities in the training data, for example due to areas of low quality in one of the modalities. Dictionaries learned from such non-corresponding data will induce uncertainty about the image representation. In this paper, we propose a probabilistic model that accounts for image areas that are poorly corresponding between the image modalities. We cast the problem of learning a dictionary in the presence of problematic image patches as a likelihood maximization problem and solve it with a variant of the EM algorithm. Our algorithm alternates between identifying poorly corresponding patches and refining the dictionary. We tested our method on synthetic and real data. We show improvements in image prediction quality and alignment accuracy when using the method for multimodal image registration. PMID:24505674
Implementation of collisions on GPU architecture in the Vorpal code
NASA Astrophysics Data System (ADS)
Leddy, Jarrod; Averkin, Sergey; Cowan, Ben; Sides, Scott; Werner, Greg; Cary, John
2017-10-01
The Vorpal code contains a variety of collision operators allowing for the simulation of plasmas containing multiple charge species interacting with neutrals, background gas, and EM fields. These existing algorithms have been improved and reimplemented to take advantage of the massive parallelization allowed by GPU architecture. The use of GPUs is most effective when algorithms are single-instruction multiple-data, so particle collisions are an ideal candidate for this parallelization technique due to their nature as a series of independent processes with the same underlying operation. This refactoring required data memory reorganization and careful consideration of device/host data allocation to minimize memory access and data communication per operation. Successful implementation has resulted in an order of magnitude increase in simulation speed for a test-case involving multiple binary collisions using the null collision method. Work supported by DARPA under contract W31P4Q-16-C-0009.
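The null collision method mentioned in the test case is well suited to this kind of parallelization: every particle is tested against one constant maximum collision frequency, so the per-particle work is identical regardless of the local collision physics, which maps naturally onto single-instruction multiple-data hardware. Below is a minimal CPU-side sketch of the method for illustration only; the names and the isotropic-scattering model are assumptions, not Vorpal's API.

```python
import numpy as np

def null_collision_step(velocities, nu_of_v, nu_max, dt, rng):
    """One null-collision step: each particle collides with probability
    1 - exp(-nu_max * dt); the collision is then accepted as 'real' with
    probability nu(v) / nu_max, otherwise it is a null event."""
    speeds = np.linalg.norm(velocities, axis=1)
    candidate = rng.random(speeds.size) < 1.0 - np.exp(-nu_max * dt)
    real = candidate & (rng.random(speeds.size) < nu_of_v(speeds) / nu_max)
    # For illustration, 'real' collisions isotropically scatter the particle.
    n_real = int(real.sum())
    if n_real:
        phi = rng.uniform(0, 2 * np.pi, n_real)
        cos_t = rng.uniform(-1, 1, n_real)
        sin_t = np.sqrt(1 - cos_t**2)
        new_dirs = np.stack([sin_t * np.cos(phi), sin_t * np.sin(phi), cos_t], axis=1)
        velocities[real] = speeds[real, None] * new_dirs
    return velocities, real

# Example: constant collision frequency of 5e6 1/s under a 1e7 1/s ceiling.
rng = np.random.default_rng(0)
v = rng.normal(0, 1e5, size=(10_000, 3))
v, hit = null_collision_step(v, lambda s: np.full_like(s, 5e6), 1e7, 1e-9, rng)
```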
Bernhardt, Paul W; Wang, Huixia Judy; Zhang, Daowen
2014-01-01
Models for survival data generally assume that covariates are fully observed. However, in medical studies it is not uncommon for biomarkers to be censored at known detection limits. A computationally-efficient multiple imputation procedure for modeling survival data with covariates subject to detection limits is proposed. This procedure is developed in the context of an accelerated failure time model with a flexible seminonparametric error distribution. The consistency and asymptotic normality of the multiple imputation estimator are established and a consistent variance estimator is provided. An iterative version of the proposed multiple imputation algorithm that approximates the EM algorithm for maximum likelihood is also suggested. Simulation studies demonstrate that the proposed multiple imputation methods work well while alternative methods lead to estimates that are either biased or more variable. The proposed methods are applied to analyze the dataset from a recently-conducted GenIMS study.
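For covariates censored at a known lower detection limit, one ingredient of a multiple imputation scheme is drawing the censored values from the fitted covariate distribution truncated above by the detection limit. The sketch below illustrates that single ingredient with a normal covariate model and SciPy's truncated normal; it is not the authors' seminonparametric accelerated failure time procedure, and fitting the normal only to the observed values is a deliberate simplification.

```python
import numpy as np
from scipy.stats import truncnorm

def impute_below_limit(x, detection_limit, rng=None):
    """Replace values recorded at/below the detection limit with draws from a
    normal fitted to the observed values, truncated above at the limit."""
    rng = np.random.default_rng() if rng is None else rng
    x = np.asarray(x, dtype=float)
    censored = x <= detection_limit
    mu, sigma = x[~censored].mean(), x[~censored].std(ddof=1)
    a = -np.inf                                  # no lower bound
    b = (detection_limit - mu) / sigma           # standardized upper bound
    draws = truncnorm.rvs(a, b, loc=mu, scale=sigma,
                          size=censored.sum(), random_state=rng)
    x_imp = x.copy()
    x_imp[censored] = draws
    return x_imp

# Example: biomarker with a detection limit of 0.5 (values at 0.5 are censored).
biomarker = np.array([0.5, 1.2, 0.8, 0.5, 2.3, 1.7, 0.5, 0.9])
imputed = impute_below_limit(biomarker, detection_limit=0.5)
```

Repeating the draw several times and re-fitting the survival model to each completed dataset, then pooling the estimates, gives the multiple imputation estimator whose properties the paper studies.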
Mixture Model and MDSDCA for Textual Data
NASA Astrophysics Data System (ADS)
Allouti, Faryel; Nadif, Mohamed; Hoai An, Le Thi; Otjacques, Benoît
E-mailing has become an essential component of cooperation in business. Consequently, the large number of messages manually produced or automatically generated can rapidly cause information overflow for users. Many research projects have examined this issue, but surprisingly few have tackled the problem of the files attached to e-mails, which in many cases contain a substantial part of the semantics of the message. This paper considers this specific topic and focuses on the problem of clustering and visualization of attached files. Relying on the multinomial mixture model, we used the Classification EM algorithm (CEM) to cluster the set of files, and MDSDCA to visualize the obtained classes of documents. Like the Multidimensional Scaling method, the aim of the MDSDCA algorithm, which is based on the Difference of Convex functions, is to optimize the stress criterion. As MDSDCA is iterative, we propose an initialization approach to avoid starting with random values. Experiments are carried out using simulations and textual data.
Smith, Justin D.; Borckardt, Jeffrey J.; Nash, Michael R.
2013-01-01
The case-based time-series design is a viable methodology for treatment outcome research. However, the literature has not fully addressed the problem of missing observations with such autocorrelated data streams. Specifically, to what extent do missing observations compromise inference when observations are not independent? Do the available missing data replacement procedures preserve inferential integrity? Does the extent of autocorrelation matter? We use Monte Carlo simulation modeling of a single-subject intervention study to address these questions. We find power sensitivity to be within acceptable limits across four proportions of missing observations (10%, 20%, 30%, and 40%) when missing data are replaced using the Expectation-Maximization Algorithm, more commonly known as the EM Procedure (Dempster, Laird, & Rubin, 1977). This applies to data streams with lag-1 autocorrelation estimates under 0.80. As autocorrelation estimates approach 0.80, the replacement procedure yields an unacceptable power profile. The implications of these findings and directions for future research are discussed. PMID:22697454
Deployment of the OSIRIS EM-PIC code on the Intel Knights Landing architecture
NASA Astrophysics Data System (ADS)
Fonseca, Ricardo
2017-10-01
Electromagnetic particle-in-cell (EM-PIC) codes such as OSIRIS have found widespread use in modelling the highly nonlinear and kinetic processes that occur in several relevant plasma physics scenarios, ranging from astrophysical settings to high-intensity laser plasma interaction. Being computationally intensive, these codes require large scale HPC systems, and a continuous effort in adapting the algorithm to new hardware and computing paradigms. In this work, we report on our efforts on deploying the OSIRIS code on the new Intel Knights Landing (KNL) architecture. Unlike the previous generation (Knights Corner), these boards are standalone systems, and introduce several new features, including the new AVX-512 instructions and on-package MCDRAM. We will focus on the parallelization and vectorization strategies followed, as well as memory management, and present a detailed evaluation of code performance in comparison with the CPU code. This work was partially supported by Fundação para a Ciência e a Tecnologia (FCT), Portugal, through Grant No. PTDC/FIS-PLA/2940/2014.
Maximum likelihood estimation for Cox's regression model under nested case-control sampling.
Scheike, Thomas H; Juul, Anders
2004-04-01
Nested case-control sampling is designed to reduce the costs of large cohort studies. It is important to estimate the parameters of interest as efficiently as possible. We present a new maximum likelihood estimator (MLE) for nested case-control sampling in the context of Cox's proportional hazards model. The MLE is computed by the EM-algorithm, which is easy to implement in the proportional hazards setting. Standard errors are estimated by a numerical profile likelihood approach based on EM aided differentiation. The work was motivated by a nested case-control study that hypothesized that insulin-like growth factor I was associated with ischemic heart disease. The study was based on a population of 3784 Danes and 231 cases of ischemic heart disease where controls were matched on age and gender. We illustrate the use of the MLE for these data and show how the maximum likelihood framework can be used to obtain information additional to the relative risk estimates of covariates.
Nizam-Uddin, N; Elshafiey, Ibrahim
2017-01-01
This paper proposes a hybrid hyperthermia treatment system, utilizing two noninvasive modalities for treating brain tumors. The proposed system depends on focusing electromagnetic (EM) and ultrasound (US) energies. The EM hyperthermia subsystem enhances energy localization by incorporating a multichannel wideband setting and coherent-phased-array technique. A genetic algorithm based optimization tool is developed to enhance the specific absorption rate (SAR) distribution by reducing hotspots and maximizing energy deposition at tumor regions. The treatment performance is also enhanced by augmenting an ultrasonic subsystem to allow focused energy deposition into deep tumors. The therapeutic faculty of ultrasonic energy is assessed by examining the control of mechanical alignment of transducer array elements. A time reversal (TR) approach is then investigated to address challenges in energy focus in both subsystems. Simulation results of the synergetic effect of both modalities assuming a simplified model of human head phantom demonstrate the feasibility of the proposed hybrid technique as a noninvasive tool for thermal treatment of brain tumors.
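Time-reversal focusing works by computing the signal a virtual source at the target would produce at each array element and re-emitting the phase-conjugated version, so that the contributions add coherently at the target. For a narrowband array this reduces to setting each element's phase to cancel its propagation delay to the focus, as in the illustrative sketch below; the array geometry, wave speed, and names are assumptions, and the genetic-algorithm SAR optimization described above is not reproduced here.

```python
import numpy as np

def conjugate_phase_focus(element_positions, focus, frequency, wave_speed):
    """Return per-element phases (radians) that make all contributions arrive
    in phase at `focus`: phi_n = +k * |r_n - r_focus|, the conjugate of the
    propagation phase -k * d_n."""
    k = 2 * np.pi * frequency / wave_speed
    distances = np.linalg.norm(np.asarray(element_positions, dtype=float)
                               - np.asarray(focus, dtype=float), axis=1)
    return np.mod(k * distances, 2 * np.pi)

# Example: an 8-element linear ultrasound array focusing 50 mm in front of it.
elements = [(x, 0.0, 0.0) for x in np.linspace(-0.014, 0.014, 8)]   # metres
phases = conjugate_phase_focus(elements, focus=(0.0, 0.0, 0.05),
                               frequency=1.0e6, wave_speed=1540.0)
```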
Multivariate Longitudinal Analysis with Bivariate Correlation Test
Adjakossa, Eric Houngla; Sadissou, Ibrahim; Hounkonnou, Mahouton Norbert; Nuel, Gregory
2016-01-01
In the context of multivariate multilevel data analysis, this paper focuses on the multivariate linear mixed-effects model, including all the correlations between the random effects when the dimensional residual terms are assumed uncorrelated. Using the EM algorithm, we suggest more general expressions for the model's parameter estimators. These estimators can be used in the framework of multivariate longitudinal data analysis as well as in the more general context of the analysis of multivariate multilevel data. By using a likelihood ratio test, we test the significance of the correlations between the random effects of two dependent variables of the model, in order to investigate whether or not it is useful to model these dependent variables jointly. Simulation studies are done to assess both the parameter recovery performance of the EM estimators and the power of the test. Using two empirical data sets, which are of longitudinal multivariate type and multivariate multilevel type, respectively, the usefulness of the test is illustrated. PMID:27537692
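The bivariate correlation test is a likelihood ratio test: fit the joint model with and without the cross-outcome random-effect correlations and compare twice the log-likelihood difference to a chi-squared reference with degrees of freedom equal to the number of correlations constrained to zero. A minimal sketch of that comparison is shown below, assuming the two fitted log-likelihoods are already available (the EM fitting itself is not reproduced, and the example numbers are invented).

```python
from scipy.stats import chi2

def lr_test(loglik_full, loglik_reduced, n_constraints):
    """Likelihood ratio test of H0: the constrained (reduced) model holds.
    Returns the test statistic and its chi-squared p-value."""
    stat = 2.0 * (loglik_full - loglik_reduced)
    p_value = chi2.sf(stat, df=n_constraints)
    return stat, p_value

# Example: joint bivariate model with 4 cross-outcome random-effect
# correlations set to zero in the reduced model (log-likelihoods assumed).
stat, p = lr_test(loglik_full=-1520.3, loglik_reduced=-1528.9, n_constraints=4)
```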
Test of 3D CT reconstructions by EM + TV algorithm from undersampled data
DOE Office of Scientific and Technical Information (OSTI.GOV)
Evseev, Ivan; Ahmann, Francielle; Silva, Hamilton P. da
2013-05-06
Computerized tomography (CT) plays an important role in medical imaging for diagnosis and therapy. However, CT imaging involves exposing patients to ionizing radiation, so dose reduction is an essential issue in CT. In 2011, the Expectation Maximization and Total Variation Based Model for CT Reconstruction (EM+TV) was proposed. This method can reconstruct a better image using fewer CT projections than the usual filtered back projection (FBP) technique, and could therefore significantly reduce the overall radiation dose in CT. This work reports the results of an independent numerical simulation for cone-beam CT geometry with alternative virtual phantoms. As in the original report, 3D CT images of 128 × 128 × 128 virtual phantoms were reconstructed. It was not possible to use phantoms with larger dimensions because of the slowness of code execution, even on a Core i7 CPU.