Sample records for prior-based iterative

  1. A pseudo-discrete algebraic reconstruction technique (PDART) prior image-based suppression of high density artifacts in computed tomography

    NASA Astrophysics Data System (ADS)

    Pua, Rizza; Park, Miran; Wi, Sunhee; Cho, Seungryong

    2016-12-01

    We propose a hybrid metal artifact reduction (MAR) approach for computed tomography (CT) that is computationally more efficient than a fully iterative reconstruction method, but at the same time achieves superior image quality to the interpolation-based in-painting techniques. Our proposed MAR method, an image-based artifact subtraction approach, utilizes an intermediate prior image reconstructed via PDART to recover the background information underlying the high density objects. For comparison, prior images generated by total-variation minimization (TVM) algorithm, as a realization of fully iterative approach, were also utilized as intermediate images. From the simulation and real experimental results, it has been shown that PDART drastically accelerates the reconstruction to an acceptable quality of prior images. Incorporating PDART-reconstructed prior images in the proposed MAR scheme achieved higher quality images than those by a conventional in-painting method. Furthermore, the results were comparable to the fully iterative MAR that uses high-quality TVM prior images.
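
The artifact-subtraction idea above rests on filling the metal trace in the projection data from a prior image rather than by plain interpolation. A minimal 1-D numpy sketch of the two in-painting options (function and variable names are illustrative, not from the paper):

```python
import numpy as np

def inpaint_metal_trace(sinogram_row, metal_mask, prior_row=None):
    """Fill corrupted (metal-trace) detector bins in one sinogram row.

    With no prior, fall back to linear interpolation across the trace;
    with the forward projection of a prior image, copy its values instead.
    """
    out = sinogram_row.astype(float).copy()
    idx = np.arange(len(out))
    if prior_row is None:
        # conventional in-painting: interpolate across the gap
        out[metal_mask] = np.interp(idx[metal_mask], idx[~metal_mask],
                                    out[~metal_mask])
    else:
        # prior-image-based: take the background estimate from the prior
        out[metal_mask] = prior_row[metal_mask]
    return out
```

The prior-based branch is what lets background structure "under" the metal survive, which simple interpolation cannot recover.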

  2. Self-prior strategy for organ reconstruction in fluorescence molecular tomography

    PubMed Central

    Zhou, Yuan; Chen, Maomao; Su, Han; Luo, Jianwen

    2017-01-01

    The purpose of this study is to propose a strategy for organ reconstruction in fluorescence molecular tomography (FMT) without prior information from other imaging modalities, and to overcome the high cost and ionizing radiation caused by the traditional structural prior strategy. The proposed strategy is designed as an iterative architecture to solve the inverse problem of FMT. In each iteration, a short-time Fourier transform (STFT) based algorithm is used to extract the self-prior information in the space-frequency energy spectrum, under the assumption that regions with higher fluorescence concentration have larger energy intensity. The cost function of the inverse problem is then modified by the self-prior information, and finally an iterative Laplacian regularization algorithm is applied to solve the updated inverse problem and obtain the reconstruction results. Simulations and in vivo experiments on liver reconstruction are carried out to test the performance of the self-prior strategy on organ reconstruction. The organ reconstruction results obtained by the proposed self-prior strategy are closer to the ground truth than those obtained by the iterative Tikhonov regularization (ITKR) method (the traditional non-prior strategy). Significant improvements are shown in the evaluation indices of relative locational error (RLE), relative error (RE) and contrast-to-noise ratio (CNR). The self-prior strategy improves the organ reconstruction results compared with the non-prior strategy and also overcomes the shortcomings of the traditional structural prior strategy. Various applications such as metabolic imaging and pharmacokinetic studies can be aided by this strategy. PMID:29082094
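
The prior-weighted Laplacian regularization step can be illustrated on a toy linear inverse problem. A sketch assuming a 1-D Laplacian and a diagonal weight map derived from the self-prior (the function name and weighting scheme are hypothetical, not the paper's):

```python
import numpy as np

def prior_weighted_laplacian_solve(A, b, lam, weights):
    """Solve min ||A x - b||^2 + lam * ||W L x||^2 in closed form.

    L is a 1-D discrete Laplacian; `weights` (the diagonal of W) would
    be reduced where the self-prior indicates fluorophore, so those
    voxels are smoothed less.
    """
    n = A.shape[1]
    # second-difference (Laplacian) matrix
    L = -2.0 * np.eye(n) + np.eye(n, k=1) + np.eye(n, k=-1)
    WL = weights[:, None] * L
    # normal equations of the regularized least-squares problem
    return np.linalg.solve(A.T @ A + lam * WL.T @ WL, A.T @ b)
```

In the paper the weighting is updated every iteration from the STFT-based self-prior; here it is just a fixed input.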

  3. Self-prior strategy for organ reconstruction in fluorescence molecular tomography.

    PubMed

    Zhou, Yuan; Chen, Maomao; Su, Han; Luo, Jianwen

    2017-10-01

    The purpose of this study is to propose a strategy for organ reconstruction in fluorescence molecular tomography (FMT) without prior information from other imaging modalities, and to overcome the high cost and ionizing radiation caused by the traditional structural prior strategy. The proposed strategy is designed as an iterative architecture to solve the inverse problem of FMT. In each iteration, a short-time Fourier transform (STFT) based algorithm is used to extract the self-prior information in the space-frequency energy spectrum, under the assumption that regions with higher fluorescence concentration have larger energy intensity. The cost function of the inverse problem is then modified by the self-prior information, and finally an iterative Laplacian regularization algorithm is applied to solve the updated inverse problem and obtain the reconstruction results. Simulations and in vivo experiments on liver reconstruction are carried out to test the performance of the self-prior strategy on organ reconstruction. The organ reconstruction results obtained by the proposed self-prior strategy are closer to the ground truth than those obtained by the iterative Tikhonov regularization (ITKR) method (the traditional non-prior strategy). Significant improvements are shown in the evaluation indices of relative locational error (RLE), relative error (RE) and contrast-to-noise ratio (CNR). The self-prior strategy improves the organ reconstruction results compared with the non-prior strategy and also overcomes the shortcomings of the traditional structural prior strategy. Various applications such as metabolic imaging and pharmacokinetic studies can be aided by this strategy.

  4. GWASinlps: Nonlocal prior based iterative SNP selection tool for genome-wide association studies.

    PubMed

    Sanyal, Nilotpal; Lo, Min-Tzu; Kauppi, Karolina; Djurovic, Srdjan; Andreassen, Ole A; Johnson, Valen E; Chen, Chi-Hua

    2018-06-19

    Multiple marker analysis of genome-wide association study (GWAS) data has gained ample attention in recent years. However, because of the ultra-high dimensionality of GWAS data, such analysis is challenging. Frequently used penalized regression methods often lead to a large number of false positives, whereas Bayesian methods are computationally very expensive. Motivated to ameliorate these issues simultaneously, we consider the novel approach of using nonlocal priors in an iterative variable selection framework. We develop a variable selection method, named iterative nonlocal prior based selection for GWAS, or GWASinlps, that combines, in an iterative variable selection framework, the computational efficiency of the screen-and-select approach based on some association learning and the parsimonious uncertainty quantification provided by the use of nonlocal priors. The hallmark of our method is the introduction of a 'structured screen-and-select' strategy, which performs hierarchical screening based not only on response-predictor associations but also on response-response associations, and concatenates variable selection within that hierarchy. Extensive simulation studies with SNPs having realistic linkage disequilibrium structures demonstrate the advantages of our computationally efficient method compared to several frequentist and Bayesian variable selection methods, in terms of true positive rate, false discovery rate, mean squared error, and effect size estimation error. Further, we provide an empirical power analysis useful for study design. Finally, a real GWAS data application is considered with human height as the phenotype. An R package implementing the GWASinlps method is available at https://cran.r-project.org/web/packages/GWASinlps/index.html. Supplementary data are available at Bioinformatics online.
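
The screen-and-select loop can be sketched generically: screen predictors by marginal association with the current residual, select, refit, repeat. This toy version uses correlation screening and least-squares selection; GWASinlps itself performs the selection step with nonlocal-prior Bayesian model selection, which is not reproduced here.

```python
import numpy as np

def iterative_screen_and_select(X, y, screen_size=10, max_iters=5, tol=1e-8):
    """Iteratively screen predictors by |correlation| with the current
    residual, select the strongest, and regress it out."""
    resid = y - y.mean()
    selected = []
    Xc = X - X.mean(axis=0)
    for _ in range(max_iters):
        corr = np.abs(Xc.T @ resid)
        corr[selected] = 0.0                       # never reselect
        candidates = np.argsort(corr)[::-1][:screen_size]
        best = candidates[0]
        if corr[best] < tol:
            break
        selected.append(best)
        # refit on all selected predictors, update the residual
        coef, *_ = np.linalg.lstsq(X[:, selected], y, rcond=None)
        resid = y - X[:, selected] @ coef
    return selected
```

Residual-based re-screening is what lets a second causal SNP surface after the first one's signal is removed, which one-shot marginal screening would miss under correlation.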

  5. WE-FG-207B-05: Iterative Reconstruction Via Prior Image Constrained Total Generalized Variation for Spectral CT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Niu, S; Zhang, Y; Ma, J

    Purpose: To investigate iterative reconstruction via prior image constrained total generalized variation (PICTGV) for spectral computed tomography (CT) using fewer projections while achieving greater image quality. Methods: The proposed PICTGV method is formulated as an optimization problem that balances the data fidelity and the prior image constrained total generalized variation of the reconstructed images in one framework. The PICTGV method exploits structure correlations among images in the energy domain and uses a high-quality image to guide the reconstruction of energy-specific images. In the PICTGV method, the high-quality image is reconstructed from all detector-collected X-ray signals and is referred to as the broad-spectrum image. Distinct from existing reconstruction methods that operate on images with first-order derivatives, the PICTGV method incorporates higher-order derivatives of the images. An alternating optimization algorithm is used to minimize the PICTGV objective function. We evaluate the performance of PICTGV on noise and artifact suppression using phantom studies and compare the method with the conventional filtered back-projection method as well as a TGV-based method without a prior image. Results: On the digital phantom, the proposed method outperforms the existing TGV method in terms of noise reduction, artifact suppression, and edge detail preservation. Compared to that obtained by the TGV-based method without a prior image, the relative root mean square error in the images reconstructed by the proposed method is reduced by over 20%. Conclusion: The authors propose an iterative reconstruction via prior image constrained total generalized variation for spectral CT, develop an alternating optimization algorithm, and numerically demonstrate the merits of the approach. Results show that the proposed PICTGV method outperforms the TGV method for spectral CT.

  6. PRIFIRA: General regularization using prior-conditioning for fast radio interferometric imaging†

    NASA Astrophysics Data System (ADS)

    Naghibzadeh, Shahrzad; van der Veen, Alle-Jan

    2018-06-01

    Image formation in radio astronomy is a large-scale inverse problem that is inherently ill-posed. We present a general algorithmic framework based on a Bayesian-inspired regularized maximum likelihood formulation of the radio astronomical imaging problem with a focus on diffuse emission recovery from limited noisy correlation data. The algorithm is dubbed PRIor-conditioned Fast Iterative Radio Astronomy (PRIFIRA) and is based on a direct embodiment of the regularization operator into the system by right preconditioning. The resulting system is then solved using an iterative method based on projections onto Krylov subspaces. We motivate the use of a beamformed image (which includes the classical "dirty image") as an efficient prior-conditioner. Iterative reweighting schemes generalize the algorithmic framework and can account for different regularization operators that encourage sparsity of the solution. The performance of the proposed method is evaluated based on simulated one- and two-dimensional array arrangements as well as actual data from the core stations of the Low Frequency Array radio telescope antenna configuration, and compared to state-of-the-art imaging techniques. We show the generality of the proposed method in terms of regularization schemes while maintaining a competitive reconstruction quality with the current reconstruction techniques. Furthermore, we show that exploiting Krylov subspace methods together with the proper noise-based stopping criteria results in a great improvement in imaging efficiency.
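
Right prior-conditioning amounts to a change of variables x = R z, so that an iterative solver applied to (A R) z = b resolves strongly weighted components first. A toy numpy sketch with plain Landweber iteration standing in for the Krylov solver (PRIFIRA uses LSQR-type Krylov methods and a beamformed-image prior; the names here are illustrative):

```python
import numpy as np

def prior_conditioned_landweber(A, b, r_diag, n_iter=50, step=None):
    """Right prior-conditioning by the change of variables x = R z,
    with R = diag(r_diag), e.g. derived from a beamformed/dirty image.

    Runs Landweber iteration on the transformed system (A R) z = b and
    maps back; components with large prior weight converge first.
    """
    AR = A * r_diag[None, :]                 # A @ diag(r_diag)
    if step is None:
        step = 1.0 / np.linalg.norm(AR, 2) ** 2
    z = np.zeros(A.shape[1])
    for _ in range(n_iter):
        z = z + step * AR.T @ (b - AR @ z)   # gradient step on ||AR z - b||^2
    return r_diag * z                        # map back: x = R z
```

Early stopping of the inner iteration then acts as the regularizer, which is why the paper pairs this with noise-based stopping criteria.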

  7. A blind deconvolution method based on L1/L2 regularization prior in the gradient space

    NASA Astrophysics Data System (ADS)

    Cai, Ying; Shi, Yu; Hua, Xia

    2018-02-01

    In image restoration, noise makes the restored result differ substantially from the true image. To address this ill-posed problem, a blind deconvolution method based on an L1/L2 regularization prior in the gradient domain is proposed. The method first adds a function to the prior knowledge, namely the ratio of the L1 norm to the L2 norm, and takes this function as the penalty term in the high-frequency domain of the image. The function is then iteratively updated, and the iterative shrinkage-thresholding algorithm is applied to solve for the high-frequency image. Because information in the gradient domain is better suited to estimating the blur kernel, the kernel is estimated in the gradient domain; this subproblem can be implemented quickly via the fast Fourier transform. In addition, a multi-scale iterative optimization scheme is added to improve the effectiveness of the algorithm. The proposed blind deconvolution method based on L1/L2 regularization priors in the gradient space obtains a unique and stable solution in the image restoration process, preserving the edges and details of the image while ensuring the accuracy of the results.
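
The shrinkage-thresholding step is the workhorse here. A generic ISTA sketch for an ℓ1-penalized 1-D deconvolution (the paper applies the shrinkage to the image's high-frequency/gradient components and uses the L1/L2 ratio only within kernel estimation, neither of which is reproduced in this toy):

```python
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of t*||.||_1."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def ista_deconvolve(y, kernel, lam=0.01, n_iter=200):
    """ISTA for min 0.5*||K x - y||^2 + lam*||x||_1, where K applies a
    1-D convolution blur (built explicitly here for clarity)."""
    n = len(y)
    # column i of K is the blurred impulse k * e_i
    K = np.array([np.convolve(np.eye(n)[i], kernel, mode="same")
                  for i in range(n)]).T
    step = 1.0 / np.linalg.norm(K, 2) ** 2     # 1/Lipschitz constant
    x = np.zeros(n)
    for _ in range(n_iter):
        x = soft_threshold(x + step * K.T @ (y - K @ x), step * lam)
    return x
```

Each iteration is one gradient step on the data term followed by the shrinkage that enforces sparsity.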

  8. PET Image Reconstruction Incorporating 3D Mean-Median Sinogram Filtering

    NASA Astrophysics Data System (ADS)

    Mokri, S. S.; Saripan, M. I.; Rahni, A. A. Abd; Nordin, A. J.; Hashim, S.; Marhaban, M. H.

    2016-02-01

    Positron emission tomography (PET) projection data, or sinograms, contain poor statistics and randomness that produce noisy PET images. To improve the PET image, we propose pre-reconstruction sinogram filtering based on a 3D mean-median filter. The proposed filter is designed with three aims: to minimise angular blurring artifacts, to smooth flat regions and to preserve the edges in the reconstructed PET image. The performance of the pre-reconstruction sinogram filter prior to three established reconstruction methods, namely filtered back-projection (FBP), ordered-subset expectation maximization (OSEM) and OSEM with median root prior (OSEM-MRP), is investigated using simulated NCAT phantom PET sinograms generated by the PET Analytical Simulator (ASIM). The improvement in the quality of the reconstructed images with and without sinogram filtering is assessed by visual as well as quantitative evaluation based on global signal-to-noise ratio (SNR), local SNR, contrast-to-noise ratio (CNR) and edge preservation capability. Further analysis of the achieved improvement is also carried out for the iterative OSEM and OSEM-MRP reconstruction methods with and without pre-reconstruction filtering in terms of the contrast recovery curve (CRC) versus noise trade-off, normalised mean square error versus iteration, local CNR versus iteration and lesion detectability. Overall, satisfactory results are obtained from both visual and quantitative evaluations.
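
One plausible realization of a mean-median hybrid is to switch per element between a mean filter (smooths flat regions) and a median filter (preserves edges) based on the local intensity range. This 2-D scipy sketch is an assumption about the design, not the paper's exact 3-D filter construction:

```python
import numpy as np
from scipy import ndimage

def mean_median_filter(sinogram, size=3, edge_thresh=None):
    """Edge-adaptive hybrid filter: median where the local intensity
    range is large (likely an edge), mean in flat regions."""
    img = sinogram.astype(float)
    mean = ndimage.uniform_filter(img, size=size)
    med = ndimage.median_filter(img, size=size)
    # local max-min range as a crude edge indicator
    local_range = (ndimage.maximum_filter(img, size=size)
                   - ndimage.minimum_filter(img, size=size))
    if edge_thresh is None:
        edge_thresh = np.percentile(local_range, 75)
    return np.where(local_range > edge_thresh, med, mean)
```

For a 3-D sinogram stack the same calls apply unchanged, since the scipy filters accept N-dimensional arrays.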

  9. Chicago Classification Criteria of Esophageal Motility Disorders Defined in High Resolution Esophageal Pressure Topography (EPT)†

    PubMed Central

    Bredenoord, Albert J; Fox, Mark; Kahrilas, Peter J; Pandolfino, John E; Schwizer, Werner; Smout, AJPM; Conklin, Jeffrey L; Cook, Ian J; Gyawali, Prakash; Hebbard, Geoffrey; Holloway, Richard H; Ke, Meiyun; Keller, Jutta; Mittal, Ravinder K; Peters, Jeff; Richter, Joel; Roman, Sabine; Rommel, Nathalie; Sifrim, Daniel; Tutuian, Radu; Valdovinos, Miguel; Vela, Marcelo F; Zerbib, Frank

    2011-01-01

    Background The Chicago Classification of esophageal motility was developed to facilitate the interpretation of clinical high resolution esophageal pressure topography (EPT) studies, concurrent with the widespread adoption of this technology into clinical practice. The Chicago Classification has been, and will continue to be, an evolutionary process, molded first by published evidence pertinent to the clinical interpretation of high resolution manometry (HRM) studies and secondarily by group experience when suitable evidence is lacking. Methods This publication summarizes the state of our knowledge as of the most recent meeting of the International High Resolution Manometry Working Group in Ascona, Switzerland in April 2011. The prior iteration of the Chicago Classification was updated through a process of literature analysis and discussion. Key Results The major changes in this document from the prior iteration are largely attributable to research studies published since the prior iteration, in many cases research conducted in response to prior deliberations of the International High Resolution Manometry Working Group. The classification now includes criteria for subtyping achalasia, EGJ outflow obstruction, motility disorders not observed in normal subjects (Distal esophageal spasm, Hypercontractile esophagus, and Absent peristalsis), and statistically defined peristaltic abnormalities (Weak peristalsis, Frequent failed peristalsis, Rapid contractions with normal latency, and Hypertensive peristalsis). Conclusions & Inferences The Chicago Classification is an algorithmic scheme for diagnosis of esophageal motility disorders from clinical EPT studies. Moving forward, we anticipate continuing this process with increased emphasis placed on natural history studies and outcome data based on the classification. PMID:22248109

  10. Chicago classification criteria of esophageal motility disorders defined in high resolution esophageal pressure topography.

    PubMed

    Bredenoord, A J; Fox, M; Kahrilas, P J; Pandolfino, J E; Schwizer, W; Smout, A J P M

    2012-03-01

    The Chicago Classification of esophageal motility was developed to facilitate the interpretation of clinical high resolution esophageal pressure topography (EPT) studies, concurrent with the widespread adoption of this technology into clinical practice. The Chicago Classification has been an evolutionary process, molded first by published evidence pertinent to the clinical interpretation of high resolution manometry (HRM) studies and secondarily by group experience when suitable evidence is lacking. This publication summarizes the state of our knowledge as of the most recent meeting of the International High Resolution Manometry Working Group in Ascona, Switzerland in April 2011. The prior iteration of the Chicago Classification was updated through a process of literature analysis and discussion. The major changes in this document from the prior iteration are largely attributable to research studies published since the prior iteration, in many cases research conducted in response to prior deliberations of the International High Resolution Manometry Working Group. The classification now includes criteria for subtyping achalasia, EGJ outflow obstruction, motility disorders not observed in normal subjects (Distal esophageal spasm, Hypercontractile esophagus, and Absent peristalsis), and statistically defined peristaltic abnormalities (Weak peristalsis, Frequent failed peristalsis, Rapid contractions with normal latency, and Hypertensive peristalsis). The Chicago Classification is an algorithmic scheme for diagnosis of esophageal motility disorders from clinical EPT studies. Moving forward, we anticipate continuing this process with increased emphasis placed on natural history studies and outcome data based on the classification. © 2012 Blackwell Publishing Ltd.

  11. Random walks with shape prior for cochlea segmentation in ex vivo μCT.

    PubMed

    Ruiz Pujadas, Esmeralda; Kjer, Hans Martin; Piella, Gemma; Ceresa, Mario; González Ballester, Miguel Angel

    2016-09-01

    Cochlear implantation is a safe and effective surgical procedure to restore hearing in deaf patients. However, the level of restoration achieved may vary due to differences in anatomy, implant type and surgical access. In order to reduce the variability of the surgical outcomes, we previously proposed the use of a high-resolution model built from μCT images and then adapted to patient-specific clinical CT scans. As the accuracy of the model depends on the precision of the original segmentation, it is extremely important to have accurate μCT segmentation algorithms. We propose a new framework for cochlea segmentation in ex vivo μCT images using random walks, in which a distance-based shape prior is combined with a region term estimated by a Gaussian mixture model. The prior is also weighted by a confidence map to adjust its influence according to the strength of the image contour. The random walks algorithm is performed iteratively, and the prior mask is re-aligned in every iteration. We tested the proposed approach on ten μCT data sets and compared it with other random walks-based segmentation techniques such as guided random walks (Eslami et al. in Med Image Anal 17(2):236-253, 2013) and constrained random walks (Li et al. in Advances in image and video technology. Springer, Berlin, pp 215-226, 2012). Our approach demonstrated higher accuracy due to the probability density model constituted by the region term and the shape prior information weighted by a confidence map. The weighted combination of the distance-based shape prior with a region term in random walks provides accurate segmentations of the cochlea. The experiments suggest that the proposed approach is robust for cochlea segmentation.
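
The core random-walker computation solves a Dirichlet problem on the image graph; the shape prior, region term and confidence map then modify that system. A bare 1-D chain-graph sketch of the unmodified baseline (API invented for illustration):

```python
import numpy as np

def random_walker_1d(intensities, seed_labels, beta=10.0):
    """Random walker segmentation on a 1-D chain graph.

    Edge weights w = exp(-beta * (I_i - I_j)^2); the probability that a
    walker from each unseeded node first reaches a foreground seed
    solves the Dirichlet problem L_u x = -B m.
    `seed_labels` maps node index -> 1.0 (foreground) or 0.0 (background).
    """
    n = len(intensities)
    w = np.exp(-beta * np.diff(intensities.astype(float)) ** 2)
    L = np.zeros((n, n))                       # graph Laplacian
    for i, wi in enumerate(w):
        L[i, i] += wi; L[i + 1, i + 1] += wi
        L[i, i + 1] -= wi; L[i + 1, i] -= wi
    seeded = np.array(sorted(seed_labels))
    unseeded = np.array([i for i in range(n) if i not in seed_labels])
    m = np.array([seed_labels[i] for i in seeded], dtype=float)
    Lu = L[np.ix_(unseeded, unseeded)]
    B = L[np.ix_(unseeded, seeded)]
    x = np.linalg.solve(Lu, -B @ m)            # Dirichlet problem
    prob = np.zeros(n)
    prob[seeded] = m
    prob[unseeded] = x
    return prob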

  12. Penalized maximum likelihood reconstruction for x-ray differential phase-contrast tomography

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brendel, Bernhard, E-mail: bernhard.brendel@philips.com; Teuffenbach, Maximilian von; Noël, Peter B.

    2016-01-15

    Purpose: The purpose of this work is to propose a cost function with regularization to iteratively reconstruct attenuation, phase, and scatter images simultaneously from differential phase contrast (DPC) acquisitions, without the need of phase retrieval, and examine its properties. Furthermore this reconstruction method is applied to an acquisition pattern that is suitable for a DPC tomographic system with continuously rotating gantry (sliding window acquisition), overcoming the severe smearing in noniterative reconstruction. Methods: We derive a penalized maximum likelihood reconstruction algorithm to directly reconstruct attenuation, phase, and scatter images from the measured detector values of a DPC acquisition. The proposed penalty comprises, for each of the three images, an independent smoothing prior. Image quality of the proposed reconstruction is compared to images generated with FBP and iterative reconstruction after phase retrieval. Furthermore, the influence between the priors is analyzed. Finally, the proposed reconstruction algorithm is applied to experimental sliding window data acquired at a synchrotron and results are compared to reconstructions based on phase retrieval. Results: The results show that the proposed algorithm significantly increases image quality in comparison to reconstructions based on phase retrieval. No significant mutual influence between the proposed independent priors could be observed. Further, it could be illustrated that the iterative reconstruction of a sliding window acquisition results in images with substantially reduced smearing artifacts. Conclusions: Although the proposed cost function is inherently nonconvex, it can be used to reconstruct images with less aliasing artifacts and less streak artifacts than reconstruction methods based on phase retrieval. Furthermore, the proposed method can be used to reconstruct images of sliding window acquisitions with negligible smearing artifacts.

  13. Indian Test Facility (INTF) and its updates

    NASA Astrophysics Data System (ADS)

    Bandyopadhyay, M.; Chakraborty, A.; Rotti, C.; Joshi, J.; Patel, H.; Yadav, A.; Shah, S.; Tyagi, H.; Parmar, D.; Sudhir, Dass; Gahlaut, A.; Bansal, G.; Soni, J.; Pandya, K.; Pandey, R.; Yadav, R.; Nagaraju, M. V.; Mahesh, V.; Pillai, S.; Sharma, D.; Singh, D.; Bhuyan, M.; Mistry, H.; Parmar, K.; Patel, M.; Patel, K.; Prajapati, B.; Shishangiya, H.; Vishnudev, M.; Bhagora, J.

    2017-04-01

    To characterize the ITER Diagnostic Neutral Beam (DNB) system to its full specification and to support IPR's negative-ion-beam-based neutral beam injector (NBI) system development program, an R&D facility named INTF is in its commissioning phase. Implementation of a successful DNB at ITER requires that several challenges be overcome. These issues are related to negative ion production, its neutralization and the corresponding neutral beam transport over path lengths of ∼20.67 m to reach the ITER plasma. The DNB is a procurement package for India, as an in-kind contribution to ITER. Since ITER is considered a nuclear facility, only the minimum diagnostic systems linked with safe operation of the machine are planned to be incorporated in it, making it difficult to characterize the DNB after onsite commissioning. The delivery of the DNB to ITER will therefore benefit if the DNB is operated and characterized prior to onsite commissioning. INTF is envisaged to become operational with the large-size ion source activities on a similar timeline to the SPIDER (RFX, Padova) facility. This paper describes some of the development updates of the facility.

  14. Pipeline Processing with an Iterative, Context-based Detection Model

    DTIC Science & Technology

    2014-04-19

    stripping the incoming data stream of repeating and irrelevant signals prior to running primary detectors, adaptive beamforming and matched field processing... Keywords: framework, pattern detectors, correlation detectors, subspace detectors, matched field detectors, nuclear explosion monitoring.

  15. GPU implementation of prior image constrained compressed sensing (PICCS)

    NASA Astrophysics Data System (ADS)

    Nett, Brian E.; Tang, Jie; Chen, Guang-Hong

    2010-04-01

    The Prior Image Constrained Compressed Sensing (PICCS) algorithm (Med. Phys. 35, pg. 660, 2008) has been applied to several computed tomography applications with both standard CT systems and flat-panel based systems designed for guiding interventional procedures and radiation therapy treatment delivery. The PICCS algorithm typically utilizes a prior image which is reconstructed via the standard Filtered Backprojection (FBP) reconstruction algorithm. The algorithm then iteratively solves for the image volume that matches the measured data, while simultaneously assuring the image is similar to the prior image. The PICCS algorithm has demonstrated utility in several applications including: improved temporal resolution reconstruction, 4D respiratory phase specific reconstructions for radiation therapy, and cardiac reconstruction from data acquired on an interventional C-arm. One disadvantage of the PICCS algorithm, just as other iterative algorithms, is the long computation times typically associated with reconstruction. In order for an algorithm to gain clinical acceptance reconstruction must be achievable in minutes rather than hours. In this work the PICCS algorithm has been implemented on the GPU in order to significantly reduce the reconstruction time of the PICCS algorithm. The Compute Unified Device Architecture (CUDA) was used in this implementation.
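
The PICCS objective balances the TV of the image against the TV of its difference from the prior, subject to data consistency. A tiny smoothed 1-D sketch using numerical gradient descent (real implementations, including the GPU one described here, use analytic gradients and far better solvers; the penalty weight `beta` replaces the hard data constraint for simplicity):

```python
import numpy as np

def piccs_objective(x, x_prior, A, y, alpha=0.5, beta=10.0, eps=1e-6):
    """Smoothed 1-D PICCS-style cost: quadratic data-fidelity penalty
    plus a convex combination of TV(x - x_prior) and TV(x)."""
    tv = lambda u: np.sum(np.sqrt(np.diff(u) ** 2 + eps))
    return (beta * np.sum((A @ x - y) ** 2)
            + alpha * tv(x - x_prior) + (1 - alpha) * tv(x))

def piccs_descent(x0, x_prior, A, y, step=1e-3, n_iter=1500, h=1e-5):
    """Minimize piccs_objective by forward-difference gradient descent
    (toy problems only)."""
    x = x0.astype(float).copy()
    for _ in range(n_iter):
        f0 = piccs_objective(x, x_prior, A, y)
        g = np.array([(piccs_objective(x + h * e, x_prior, A, y) - f0) / h
                      for e in np.eye(len(x))])
        x -= step * g
    return x
```

The alpha term is what pulls the reconstruction toward the FBP prior where the undersampled data alone are ambiguous.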

  16. 3D reconstruction of the magnetic vector potential using model based iterative reconstruction

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Prabhat, K. C.; Aditya Mohan, K.; Phatak, Charudatta

    Lorentz transmission electron microscopy (TEM) observations of magnetic nanoparticles contain information on the magnetic and electrostatic potentials. Vector field electron tomography (VFET) can be used to reconstruct electromagnetic potentials of the nanoparticles from their corresponding LTEM images. The VFET approach is based on the conventional filtered back projection approach to tomographic reconstructions, and the availability of only an incomplete set of measurements due to experimental limitations means that the reconstructed vector fields exhibit significant artifacts. In this paper, we outline a model-based iterative reconstruction (MBIR) algorithm to reconstruct the magnetic vector potential of magnetic nanoparticles. We combine a forward model for image formation in TEM experiments with a prior model to formulate the tomographic problem as a maximum a posteriori probability (MAP) estimation problem. The MAP cost function is minimized iteratively to determine the vector potential. Here, a comparative reconstruction study of simulated as well as experimental data sets shows that the MBIR approach yields quantifiably better reconstructions than the VFET approach.

  17. 3D reconstruction of the magnetic vector potential using model based iterative reconstruction.

    PubMed

    Prabhat, K C; Aditya Mohan, K; Phatak, Charudatta; Bouman, Charles; De Graef, Marc

    2017-11-01

    Lorentz transmission electron microscopy (TEM) observations of magnetic nanoparticles contain information on the magnetic and electrostatic potentials. Vector field electron tomography (VFET) can be used to reconstruct electromagnetic potentials of the nanoparticles from their corresponding LTEM images. The VFET approach is based on the conventional filtered back projection approach to tomographic reconstructions, and the availability of only an incomplete set of measurements due to experimental limitations means that the reconstructed vector fields exhibit significant artifacts. In this paper, we outline a model-based iterative reconstruction (MBIR) algorithm to reconstruct the magnetic vector potential of magnetic nanoparticles. We combine a forward model for image formation in TEM experiments with a prior model to formulate the tomographic problem as a maximum a posteriori probability (MAP) estimation problem. The MAP cost function is minimized iteratively to determine the vector potential. A comparative reconstruction study of simulated as well as experimental data sets shows that the MBIR approach yields quantifiably better reconstructions than the VFET approach. Copyright © 2017 Elsevier B.V. All rights reserved.
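
The MAP formulation can be made concrete on a toy 1-D problem: a Gaussian likelihood plus a quadratic smoothness prior, minimized by gradient descent. This is a sketch of the general MBIR/MAP recipe only; the paper's forward model is the Lorentz TEM image-formation physics and its prior is tailored to vector potentials.

```python
import numpy as np

def map_estimate(A, y, sigma=1.0, lam=0.05, n_iter=500):
    """MAP estimate for y = A x + Gaussian noise with a smoothness
    prior p(x) ∝ exp(-lam * ||D x||^2), D a finite-difference operator:
        cost(x) = ||y - A x||^2 / (2 sigma^2) + lam * ||D x||^2,
    minimized by gradient descent with a safe step size."""
    n = A.shape[1]
    D = np.diff(np.eye(n), axis=0)             # first-difference matrix
    # Lipschitz bound on the gradient (||D^T D|| <= 4 for a chain)
    lip = np.linalg.norm(A, 2) ** 2 / sigma ** 2 + 4.0 * lam
    x = np.zeros(n)
    for _ in range(n_iter):
        grad = A.T @ (A @ x - y) / sigma ** 2 + 2.0 * lam * D.T @ (D @ x)
        x -= grad / lip
    return x
```

Swapping in the tomographic forward operator for `A` and a different prior gradient recovers the general MBIR iteration structure.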

  18. 3D reconstruction of the magnetic vector potential using model based iterative reconstruction

    DOE PAGES

    Prabhat, K. C.; Aditya Mohan, K.; Phatak, Charudatta; ...

    2017-07-03

    Lorentz transmission electron microscopy (TEM) observations of magnetic nanoparticles contain information on the magnetic and electrostatic potentials. Vector field electron tomography (VFET) can be used to reconstruct electromagnetic potentials of the nanoparticles from their corresponding LTEM images. The VFET approach is based on the conventional filtered back projection approach to tomographic reconstructions, and the availability of only an incomplete set of measurements due to experimental limitations means that the reconstructed vector fields exhibit significant artifacts. In this paper, we outline a model-based iterative reconstruction (MBIR) algorithm to reconstruct the magnetic vector potential of magnetic nanoparticles. We combine a forward model for image formation in TEM experiments with a prior model to formulate the tomographic problem as a maximum a posteriori probability (MAP) estimation problem. The MAP cost function is minimized iteratively to determine the vector potential. Here, a comparative reconstruction study of simulated as well as experimental data sets shows that the MBIR approach yields quantifiably better reconstructions than the VFET approach.

  19. Wavelet-based edge correlation incorporated iterative reconstruction for undersampled MRI.

    PubMed

    Hu, Changwei; Qu, Xiaobo; Guo, Di; Bao, Lijun; Chen, Zhong

    2011-09-01

    Undersampling k-space is an effective way to decrease acquisition time for MRI. However, aliasing artifacts introduced by undersampling may blur the edges of magnetic resonance images, which often contain important information for clinical diagnosis. Moreover, k-space data are often contaminated by noise of unknown intensity. To better preserve edge features while suppressing aliasing artifacts and noise, we present a new wavelet-based algorithm for undersampled MRI reconstruction. The algorithm casts image reconstruction as a standard optimization problem comprising an ℓ2 data-fidelity term and an ℓ1 sparsity regularization term. Rather than manually setting the regularization parameter for the ℓ1 term, which is directly related to the threshold, an automatically estimated threshold adaptive to the noise intensity is introduced in the proposed algorithm. In addition, a prior matrix based on edge correlation in the wavelet domain is incorporated into the regularization term. Compared with the nonlinear conjugate gradient descent algorithm, the iterative shrinkage/thresholding algorithm, the fast iterative soft-thresholding algorithm and the iterative thresholding algorithm using an exponentially decreasing threshold, the proposed algorithm yields reconstructions with better edge recovery and noise suppression. Copyright © 2011 Elsevier Inc. All rights reserved.
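
A noise-adaptive threshold can be estimated directly from the coefficients themselves. A generic MAD-plus-universal-threshold sketch (the paper goes further by weighting coefficients with an edge-correlation prior matrix, which is omitted here):

```python
import numpy as np

def soft(v, t):
    """Soft-thresholding (proximal operator of the l1 norm)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def adaptive_threshold_denoise(coeffs):
    """Estimate the noise level from the median absolute deviation of
    the wavelet coefficients (sigma ≈ MAD / 0.6745), then apply the
    universal threshold sigma * sqrt(2 log n)."""
    sigma = np.median(np.abs(coeffs)) / 0.6745
    t = sigma * np.sqrt(2.0 * np.log(len(coeffs)))
    return soft(coeffs, t)
```

Because the threshold scales with the estimated noise level, no manual tuning of the ℓ1 regularization parameter is needed.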

  20. Photoacoustic image reconstruction via deep learning

    NASA Astrophysics Data System (ADS)

    Antholzer, Stephan; Haltmeier, Markus; Nuster, Robert; Schwab, Johannes

    2018-02-01

    Applying standard algorithms to sparse data problems in photoacoustic tomography (PAT) yields low-quality images containing severe under-sampling artifacts. To some extent, these artifacts can be reduced by iterative image reconstruction algorithms, which allow prior knowledge such as smoothness, total variation (TV) or sparsity constraints to be included. These algorithms tend to be time consuming because the forward and adjoint problems must be solved repeatedly. Iterative algorithms also have additional drawbacks: for example, the reconstruction quality strongly depends on a priori model assumptions about the objects to be recovered, which are often not strictly satisfied in practical applications. To overcome these issues, in this paper we develop direct and efficient reconstruction algorithms based on deep learning. As opposed to iterative algorithms, we apply a convolutional neural network whose parameters are trained before the reconstruction process on a set of training data. For actual image reconstruction, a single evaluation of the trained network yields the desired result. Our numerical results (using two different network architectures) demonstrate that the proposed deep learning approach reconstructs images with a quality comparable to state-of-the-art iterative reconstruction methods.

  1. Iterative Most-Likely Point Registration (IMLP): A Robust Algorithm for Computing Optimal Shape Alignment

    PubMed Central

    Billings, Seth D.; Boctor, Emad M.; Taylor, Russell H.

    2015-01-01

    We present a probabilistic registration algorithm that robustly solves the problem of rigid-body alignment between two shapes with high accuracy, by aptly modeling measurement noise in each shape, whether isotropic or anisotropic. For point-cloud shapes, the probabilistic framework additionally enables modeling locally-linear surface regions in the vicinity of each point to further improve registration accuracy. The proposed Iterative Most-Likely Point (IMLP) algorithm is formed as a variant of the popular Iterative Closest Point (ICP) algorithm, which iterates between point-correspondence and point-registration steps. IMLP’s probabilistic framework is used to incorporate a generalized noise model into both the correspondence and the registration phases of the algorithm, hence its name as a most-likely point method rather than a closest-point method. To efficiently compute the most-likely correspondences, we devise a novel search strategy based on a principal direction (PD)-tree search. We also propose a new approach to solve the generalized total-least-squares (GTLS) sub-problem of the registration phase, wherein the point correspondences are registered under a generalized noise model. Our GTLS approach has improved accuracy, efficiency, and stability compared to prior methods presented for this problem and offers a straightforward implementation using standard least squares. We evaluate the performance of IMLP relative to a large number of prior algorithms including ICP, a robust variant on ICP, Generalized ICP (GICP), and Coherent Point Drift (CPD), as well as drawing close comparison with the prior anisotropic registration methods of GTLS-ICP and A-ICP. The performance of IMLP is shown to be superior with respect to these algorithms over a wide range of noise conditions, outliers, and misalignments using both mesh and point-cloud representations of various shapes. PMID:25748700
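
    The point-registration step that IMLP generalizes is, in its isotropic special case, the classical least-squares rigid alignment of corresponded points. A minimal sketch (Kabsch/SVD solution; IMLP replaces this with a generalized total-least-squares step under anisotropic noise):

```python
import numpy as np

def register_points(P, Q):
    """Least-squares rigid registration: find R, t minimizing
    sum ||R @ p_i + t - q_i||^2 over corresponded rows of P and Q."""
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cp).T @ (Q - cq)               # cross-covariance of centered sets
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))  # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cq - R @ cp
    return R, t
```

    An ICP-style loop alternates this step with re-computing correspondences (closest points for ICP, most-likely points under the noise model for IMLP).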

  2. Interactive and Collaborative Professional Development for In-Service History Teachers

    ERIC Educational Resources Information Center

    Callahan, Cory; Saye, John; Brush, Thomas

    2016-01-01

    This article advances a continuing line of inquiry into an innovative teacher-support program intended to help in-service history teachers develop professional teaching knowledge for inquiry-based history instruction. Two prior iterations informed our design and use of professional development materials; they also informed the implementation…

  3. An iterative shrinkage approach to total-variation image restoration.

    PubMed

    Michailovich, Oleg V

    2011-05-01

    The problem of restoration of digital images from their degraded measurements plays a central role in a multitude of practically important applications. A particularly challenging instance of this problem occurs when the degradation phenomenon is modeled by an ill-conditioned operator. In such a situation, the presence of noise makes it impossible to recover a valuable approximation of the image of interest without using some a priori information about its properties. Such a priori information, commonly referred to simply as priors, is essential for image restoration, rendering it stable and robust to noise. Moreover, using priors makes the recovered images exhibit some plausible features of their original counterparts. In particular, if the original image is known to be a piecewise smooth function, one of the standard priors used in this case is the Rudin-Osher-Fatemi model, which results in total variation (TV) based image restoration. The current arsenal of algorithms for TV-based image restoration is vast. In the present paper, a different approach to the solution of the problem is proposed, based upon the method of iterative shrinkage (also known as iterated thresholding). In the proposed method, TV-based image restoration is performed through a recursive application of two simple procedures, viz. linear filtering and soft thresholding. The method can therefore be identified as belonging to the group of first-order algorithms, which are efficient in dealing with images of relatively large sizes. Another valuable feature of the proposed method is that it works directly with the TV functional, rather than with its smoothed versions. Moreover, the method provides a single solution for both isotropic and anisotropic definitions of the TV functional, thereby establishing a useful connection between the two formulae.
Finally, a number of standard examples of image deblurring are demonstrated, in which the proposed method can provide restoration results of superior quality as compared to the case of sparse-wavelet deconvolution.
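
    For reference, the Rudin-Osher-Fatemi model cited above poses restoration of an image u from a degraded observation f = Hu + noise (H being the ill-conditioned degradation operator) as

```latex
\hat{u} \;=\; \arg\min_{u} \; \frac{1}{2}\,\| H u - f \|_2^2 \;+\; \lambda \,\mathrm{TV}(u),
\qquad
\mathrm{TV}(u) \;=\; \int_{\Omega} |\nabla u| \, dx ,
```

    where λ trades data fidelity against total variation; the iterative shrinkage scheme alternates linear filtering (a gradient step on the quadratic term) with soft thresholding.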

  4. Sparsity-constrained PET image reconstruction with learned dictionaries

    NASA Astrophysics Data System (ADS)

    Tang, Jing; Yang, Bao; Wang, Yanhua; Ying, Leslie

    2016-09-01

    PET imaging plays an important role in scientific and clinical measurement of biochemical and physiological processes. Model-based PET image reconstruction such as the iterative expectation maximization algorithm seeking the maximum likelihood solution leads to increased noise. The maximum a posteriori (MAP) estimate removes divergence at higher iterations. However, a conventional smoothing prior or a total-variation (TV) prior in a MAP reconstruction algorithm causes over-smoothing or blocky artifacts in the reconstructed images. We propose to use dictionary learning (DL) based sparse signal representation in the formation of the prior for MAP PET image reconstruction. The dictionary to sparsify the PET images in the reconstruction process is learned from various training images, including the corresponding MR structural image and a self-created hollow sphere. Using simulated and patient brain PET data with corresponding MR images, we study the performance of the DL-MAP algorithm and compare it quantitatively with a conventional MAP algorithm, a TV-MAP algorithm, and a patch-based algorithm. The DL-MAP algorithm achieves improved bias and contrast (or regional mean values) at noise levels comparable to those of the other MAP algorithms. The dictionary learned from the hollow sphere leads to similar results as the dictionary learned from the corresponding MR image. Achieving robust performance in simulations at various noise levels and in patient studies, the DL-MAP algorithm with a general dictionary demonstrates its potential in quantitative PET imaging.
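
    In generic form (the notation here is assumed, not taken from the paper), a dictionary-learning MAP objective for Poisson-distributed PET data couples the log-likelihood with a patch-sparsity penalty:

```latex
\hat{x} \;=\; \arg\max_{x \ge 0} \;
\sum_{i} \Bigl( y_i \log [Ax]_i - [Ax]_i \Bigr)
\;-\; \beta \sum_{j} \bigl\| R_j x - D \alpha_j \bigr\|_2^2 ,
\qquad \|\alpha_j\|_0 \le T ,
```

    where A is the system matrix, R_j extracts the j-th image patch, D is the learned dictionary, and the sparse codes α_j are updated alongside the image x.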

  5. Influence of Iterative Reconstruction Algorithms on PET Image Resolution

    NASA Astrophysics Data System (ADS)

    Karpetas, G. E.; Michail, C. M.; Fountos, G. P.; Valais, I. G.; Nikolopoulos, D.; Kandarakis, I. S.; Panayiotakis, G. S.

    2015-09-01

    The aim of the present study was to assess the image quality of PET scanners using a thin layer chromatography (TLC) plane source. The source was simulated using a previously validated Monte Carlo model, developed with the GATE MC package, and reconstructed images were obtained with the STIR software for tomographic image reconstruction. The simulated PET scanner was the GE DiscoveryST. A plane source consisting of a TLC plate was simulated as a layer of silica gel on an aluminum (Al) foil substrate immersed in an 18F-FDG bath solution (1 MBq). Image quality was assessed in terms of the modulation transfer function (MTF). MTF curves were estimated from transverse reconstructed images of the plane source. Images were reconstructed by the maximum likelihood estimation (MLE)-OSMAPOSL, the ordered subsets separable paraboloidal surrogate (OSSPS), the median root prior (MRP), and the OSMAPOSL with quadratic prior algorithms. OSMAPOSL reconstruction was assessed using fixed subsets and various iterations, as well as various beta (hyper)parameter values. MTF values were found to increase with increasing iterations. MTF also improves with lower beta values. The simulated PET evaluation method, based on the TLC plane source, can be useful in the resolution assessment of PET scanners.

  6. Solving the multiple-set split equality common fixed-point problem of firmly quasi-nonexpansive operators.

    PubMed

    Zhao, Jing; Zong, Haili

    2018-01-01

    In this paper, we propose parallel and cyclic iterative algorithms for solving the multiple-set split equality common fixed-point problem of firmly quasi-nonexpansive operators. We also combine the cyclic and parallel iterative processes and propose two mixed iterative algorithms. None of the proposed algorithms requires prior information about the operator norms. Under mild assumptions, we prove weak convergence of the proposed iterative sequences in Hilbert spaces. As applications, we obtain several iterative algorithms to solve the multiple-set split equality problem.
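
    As a toy illustration of such fixed-point iterations (not the paper's algorithms), averaging the projections onto two convex sets gives a firmly nonexpansive operator whose fixed points are exactly the common points when the sets intersect:

```python
import numpy as np

def proj_unit_ball(x):
    """Projection onto the closed unit ball (firmly nonexpansive)."""
    n = np.linalg.norm(x)
    return x if n <= 1.0 else x / n

def proj_halfspace(x, c=0.5):
    """Projection onto the half-space {x : x[0] >= c}."""
    y = x.copy()
    y[0] = max(y[0], c)
    return y

def parallel_iterate(x0, n_iter=500):
    """Parallel (simultaneous) iteration x_{k+1} = (P1(x_k) + P2(x_k)) / 2;
    converges to a point of the intersection when it is nonempty."""
    x = np.asarray(x0, dtype=float)
    for _ in range(n_iter):
        x = 0.5 * (proj_unit_ball(x) + proj_halfspace(x))
    return x
```

    Note that the iteration uses only the projections themselves; no operator norm needs to be known, mirroring the property highlighted in the abstract.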

  7. Feature Based Retention Time Alignment for Improved HDX MS Analysis

    NASA Astrophysics Data System (ADS)

    Venable, John D.; Scuba, William; Brock, Ansgar

    2013-04-01

    An algorithm for retention time alignment of mass shifted hydrogen-deuterium exchange (HDX) data based on an iterative distance minimization procedure is described. The algorithm performs pairwise comparisons in an iterative fashion between a list of features from a reference file and a file to be time aligned to calculate a retention time mapping function. Features are characterized by their charge, retention time and mass of the monoisotopic peak. The algorithm is able to align datasets with mass shifted features, which is a prerequisite for aligning hydrogen-deuterium exchange mass spectrometry datasets. Confidence assignments from the fully automated processing of a commercial HDX software package are shown to benefit significantly from retention time alignment prior to extraction of deuterium incorporation values.
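
    The iterative match-then-fit idea can be sketched as follows; this is a simplified stand-in that matches features on retention time only and fits a linear mapping, whereas the paper's algorithm also uses charge and monoisotopic mass and minimizes a pairwise distance:

```python
import numpy as np

def align_retention_times(ref, query, tol=0.5, n_iter=5):
    """Iteratively match query features to the nearest reference feature
    and refit a linear mapping t_ref ~ a * t_query + b."""
    a, b = 1.0, 0.0
    for _ in range(n_iter):
        mapped = a * query + b
        # nearest reference feature for each mapped query feature
        idx = np.abs(ref[None, :] - mapped[:, None]).argmin(axis=1)
        keep = np.abs(ref[idx] - mapped) < tol   # discard distant matches
        if keep.sum() < 2:
            break
        a, b = np.polyfit(query[keep], ref[idx][keep], 1)
    return a, b
```

    On data related by a small linear drift, the loop converges to the true mapping in one or two iterations.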

  8. Comparison of sorting algorithms to increase the range of Hartmann-Shack aberrometry.

    PubMed

    Bedggood, Phillip; Metha, Andrew

    2010-01-01

    Recently many software-based approaches have been suggested for improving the range and accuracy of Hartmann-Shack aberrometry. We compare the performance of four representative algorithms, with a focus on aberrometry for the human eye. Algorithms vary in complexity from the simplistic traditional approach to iterative spline extrapolation based on prior spot measurements. Range is assessed for a variety of aberration types in isolation using computer modeling, and also for complex wavefront shapes using a real adaptive optics system. The effects of common sources of error for ocular wavefront sensing are explored. The results show that the simplest possible iterative algorithm produces comparable range and robustness compared to the more complicated algorithms, while keeping processing time minimal to afford real-time analysis.

  9. Comparison of sorting algorithms to increase the range of Hartmann-Shack aberrometry

    NASA Astrophysics Data System (ADS)

    Bedggood, Phillip; Metha, Andrew

    2010-11-01

    Recently many software-based approaches have been suggested for improving the range and accuracy of Hartmann-Shack aberrometry. We compare the performance of four representative algorithms, with a focus on aberrometry for the human eye. Algorithms vary in complexity from the simplistic traditional approach to iterative spline extrapolation based on prior spot measurements. Range is assessed for a variety of aberration types in isolation using computer modeling, and also for complex wavefront shapes using a real adaptive optics system. The effects of common sources of error for ocular wavefront sensing are explored. The results show that the simplest possible iterative algorithm produces comparable range and robustness compared to the more complicated algorithms, while keeping processing time minimal to afford real-time analysis.

  10. An Iterative Information-Reduced Quadriphase-Shift-Keyed Carrier Synchronization Scheme Using Decision Feedback for Low Signal-to-Noise Ratio Applications

    NASA Technical Reports Server (NTRS)

    Simon, M.; Tkacenko, A.

    2006-01-01

    In a previous publication [1], an iterative closed-loop carrier synchronization scheme for binary phase-shift keyed (BPSK) modulation was proposed that was based on feeding back data decisions to the input of the loop, the purpose being to remove the modulation prior to carrier synchronization as opposed to the more conventional decision-feedback schemes that incorporate such feedback inside the loop. The idea there was that, with sufficient independence between the received data and the decisions on it that are fed back (as would occur in an error-correction coding environment with sufficient decoding delay), a pure tone in the presence of noise would ultimately be produced (after sufficient iteration and low enough error probability) and thus could be tracked without any squaring loss. This article demonstrates that, with some modification, the same idea of iterative information reduction through decision feedback can be applied to quadrature phase-shift keyed (QPSK) modulation, something that was mentioned in the previous publication but never pursued.

  11. Effect of contrast enhancement prior to iteration procedure on image correction for soft x-ray projection microscopy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jamsranjav, Erdenetogtokh, E-mail: ja.erdenetogtokh@gmail.com; Shiina, Tatsuo, E-mail: shiina@faculity.chiba-u.jp; Kuge, Kenichi

    2016-01-28

    Soft X-ray microscopy is well recognized as a powerful tool for high-resolution imaging of hydrated biological specimens. The projection type offers easy zooming, a simple optical layout, and other advantages. However, the image is blurred by the diffraction of X-rays, degrading the spatial resolution. In this study, the blurred images have been corrected by an iteration procedure, i.e., repeated Fresnel and inverse Fresnel transformations. Earlier studies confirmed this method to be effective; nevertheless, it was insufficient for some images showing very low contrast, especially at high magnification. In the present study, we tried a contrast enhancement method to make the diffraction fringes clearer prior to the iteration procedure. The method was effective in improving images that could not be corrected by the iteration procedure alone.
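
    The abstract does not give the enhancement formula; a common choice for this kind of pre-processing is a percentile contrast stretch, sketched here as an assumed stand-in:

```python
import numpy as np

def stretch_contrast(img, lo=2, hi=98):
    """Linearly rescale intensities so the lo-th/hi-th percentiles map to
    0/1, clipping the tails; applied before the Fresnel iteration."""
    a, b = np.percentile(img, [lo, hi])
    return np.clip((img - a) / (b - a), 0.0, 1.0)
```

    Clipping a small percentage of extreme pixels spreads the remaining dynamic range, which makes faint diffraction fringes easier for the iterative correction to lock onto.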

  12. Spectral Prior Image Constrained Compressed Sensing (Spectral PICCS) for Photon-Counting Computed Tomography

    PubMed Central

    Yu, Zhicong; Leng, Shuai; Li, Zhoubo; McCollough, Cynthia H.

    2016-01-01

    Photon-counting computed tomography (PCCT) is an emerging imaging technique that enables multi-energy imaging with only a single scan acquisition. To enable multi-energy imaging, the detected photons corresponding to the full x-ray spectrum are divided into several subgroups of bin data that correspond to narrower energy windows. Consequently, noise in each energy bin increases compared to the full-spectrum data. This work proposes an iterative reconstruction algorithm for noise suppression in the narrower energy bins used in PCCT imaging. The algorithm is based on the framework of prior image constrained compressed sensing (PICCS) and is called spectral PICCS; it uses the full-spectrum image reconstructed using conventional filtered back-projection as the prior image. The spectral PICCS algorithm is implemented using a constrained optimization scheme with adaptive iterative step sizes such that only two tuning parameters are required in most cases. The algorithm was first evaluated using computer simulations, and then validated by both physical phantoms and in-vivo swine studies using a research PCCT system. Results from both computer-simulation and experimental studies showed substantial image noise reduction in narrow energy bins (43~73%) without sacrificing CT number accuracy or spatial resolution. PMID:27551878

  13. Spectral prior image constrained compressed sensing (spectral PICCS) for photon-counting computed tomography

    NASA Astrophysics Data System (ADS)

    Yu, Zhicong; Leng, Shuai; Li, Zhoubo; McCollough, Cynthia H.

    2016-09-01

    Photon-counting computed tomography (PCCT) is an emerging imaging technique that enables multi-energy imaging with only a single scan acquisition. To enable multi-energy imaging, the detected photons corresponding to the full x-ray spectrum are divided into several subgroups of bin data that correspond to narrower energy windows. Consequently, noise in each energy bin increases compared to the full-spectrum data. This work proposes an iterative reconstruction algorithm for noise suppression in the narrower energy bins used in PCCT imaging. The algorithm is based on the framework of prior image constrained compressed sensing (PICCS) and is called spectral PICCS; it uses the full-spectrum image reconstructed using conventional filtered back-projection as the prior image. The spectral PICCS algorithm is implemented using a constrained optimization scheme with adaptive iterative step sizes such that only two tuning parameters are required in most cases. The algorithm was first evaluated using computer simulations, and then validated by both physical phantoms and in vivo swine studies using a research PCCT system. Results from both computer-simulation and experimental studies showed substantial image noise reduction in narrow energy bins (43-73%) without sacrificing CT number accuracy or spatial resolution.
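
    In its standard form, the PICCS objective balances sparsity of the difference from the prior image x_p against sparsity of the image itself:

```latex
\min_{x} \;\; \alpha \,\| \Psi_1 (x - x_p) \|_1 \;+\; (1-\alpha)\,\| \Psi_2 x \|_1
\qquad \text{subject to } \; A x = y ,
```

    where Ψ1 and Ψ2 are sparsifying transforms (typically spatial gradients), A is the system matrix, and y the measured bin sinogram; in spectral PICCS the full-spectrum FBP reconstruction serves as x_p.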

  14. Toward a dose reduction strategy using model-based reconstruction with limited-angle tomosynthesis

    NASA Astrophysics Data System (ADS)

    Haneda, Eri; Tkaczyk, J. E.; Palma, Giovanni; Iordache, Rǎzvan; Zelakiewicz, Scott; Muller, Serge; De Man, Bruno

    2014-03-01

    Model-based iterative reconstruction (MBIR) is an emerging technique for several imaging modalities and applications including medical CT, security CT, PET, and microscopy. Its success derives from an ability to preserve image resolution and perceived diagnostic quality under impressively reduced signal level. MBIR typically uses a cost optimization framework that models system geometry, photon statistics, and prior knowledge of the reconstructed volume. The challenge of tomosynthetic geometries is that the inverse problem becomes more ill-posed due to the limited angles, meaning the volumetric image solution is not uniquely determined by the incompletely sampled projection data. Furthermore, low signal level conditions introduce additional challenges due to noise. A fundamental strength of MBIR for limited-view and limited-angle geometries is that it provides a framework for constraining the solution consistent with prior knowledge of expected image characteristics. In this study, we analyze through simulation the capability of MBIR with respect to prior modeling components for limited-view, limited-angle digital breast tomosynthesis (DBT) under low dose conditions. A comparison to ground truth phantoms shows that MBIR with regularization achieves a higher level of fidelity and lower level of blurring and streaking artifacts compared to other state of the art iterative reconstructions, especially for high contrast objects. The benefit of contrast preservation along with fewer artifacts may lead to improved detectability of microcalcifications for more accurate cancer diagnosis.
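
    The MBIR cost function described above typically takes the penalized weighted least-squares form (generic notation, not quoted from the paper):

```latex
\hat{x} \;=\; \arg\min_{x \ge 0} \;
\tfrac{1}{2}\,( y - A x )^{T} W \,( y - A x ) \;+\; \beta\, R(x) ,
```

    where A encodes the (limited-angle) system geometry, the diagonal weights W reflect photon statistics, and the regularizer R carries the prior model whose components the study analyzes.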

  15. A combined reconstruction-classification method for diffuse optical tomography.

    PubMed

    Hiltunen, P; Prince, S J D; Arridge, S

    2009-11-07

    We present a combined classification and reconstruction algorithm for diffuse optical tomography (DOT). DOT is a nonlinear ill-posed inverse problem. Therefore, some regularization is needed. We present a mixture of Gaussians prior, which regularizes the DOT reconstruction step. During each iteration, the parameters of a mixture model are estimated. These associate each reconstructed pixel with one of several classes based on the current estimate of the optical parameters. This classification is exploited to form a new prior distribution to regularize the reconstruction step and update the optical parameters. The algorithm can be described as an iteration between an optimization scheme with zeroth-order variable mean and variance Tikhonov regularization and an expectation-maximization scheme for estimation of the model parameters. We describe the algorithm in a general Bayesian framework. Results from simulated test cases and phantom measurements show that the algorithm enhances the contrast of the reconstructed images with good spatial accuracy. The probabilistic classifications of each image contain only a few misclassified pixels.
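
    The classification half of such a scheme is a Gaussian-mixture EM update over pixel values. A minimal one-dimensional sketch (the reconstruction step it would alternate with is omitted):

```python
import numpy as np

def gmm_em_1d(x, mu, sigma, pi, n_iter=50):
    """EM for a 1-D Gaussian mixture: E-step computes per-pixel class
    responsibilities r, M-step updates means, widths, and weights."""
    x = np.asarray(x, dtype=float)
    for _ in range(n_iter):
        # E-step: responsibility of each class for each pixel value
        dens = pi * np.exp(-0.5 * ((x[:, None] - mu) / sigma) ** 2) \
               / (sigma * np.sqrt(2.0 * np.pi))
        r = dens / dens.sum(axis=1, keepdims=True)
        # M-step: update mixture parameters from the responsibilities
        nk = r.sum(axis=0)
        mu = (r * x[:, None]).sum(axis=0) / nk
        sigma = np.sqrt((r * (x[:, None] - mu) ** 2).sum(axis=0) / nk)
        pi = nk / len(x)
    return mu, sigma, pi, r
```

    In the combined algorithm, the fitted class means and responsibilities would then define the spatially varying mean and variance of the Tikhonov prior for the next reconstruction step.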

  16. Community-Engagement Strategies of the Developmental Disabilities Practice-Based Research Network (DD-PBRN)

    PubMed Central

    Tyler, Carl; Werner, James J.

    2016-01-01

    There is often a rich but untold history of events that occurred and relationships that formed prior to the launching of a practice-based research network (PBRN). This is particularly the case in PBRNs that are community-based and composed of partnerships outside of the health care system. In this article we summarize an organizational "prenatal history" prior to the birth of a PBRN devoted to persons with developmental disabilities. Using a case study approach, this article describes the historical events that preceded and fostered the evolution of this PBRN and contrasts how the processes leading to the creation of this multi-stakeholder community-based PBRN differ from those of typical academic-clinical practice PBRNs. We propose potential advantages and complexities inherent to this newest iteration of PBRNs. PMID:25381081

  17. Advanced Software for Analysis of High-Speed Rolling-Element Bearings

    NASA Technical Reports Server (NTRS)

    Poplawski, J. V.; Rumbarger, J. H.; Peters, S. M.; Galatis, H.; Flower, R.

    2003-01-01

    COBRA-AHS is a package of advanced software for analysis of rigid or flexible shaft systems supported by rolling-element bearings operating at high speeds under complex mechanical and thermal loads. These loads can include centrifugal and thermal loads generated by motions of bearing components. COBRA-AHS offers several improvements over prior commercial bearing-analysis programs: It includes innovative probabilistic fatigue-life-estimating software that provides for computation of three-dimensional stress fields and incorporates stress-based (in contradistinction to prior load-based) mathematical models of fatigue life. It interacts automatically with the ANSYS finite-element code to generate finite-element models for estimating distributions of temperature and temperature-induced changes in dimensions in iterative thermal/dimensional analyses: thus, for example, it can be used to predict changes in clearances and thermal lockup. COBRA-AHS provides an improved graphical user interface that facilitates the iterative cycle of analysis and design by providing analysis results quickly in graphical form, enabling the user to control interactive runs without leaving the program environment, and facilitating transfer of plots and printed results for inclusion in design reports. Additional features include roller-edge stress prediction and influence of shaft and housing distortion on bearing performance.

  18. Improved compressed sensing-based cone-beam CT reconstruction using adaptive prior image constraints

    NASA Astrophysics Data System (ADS)

    Lee, Ho; Xing, Lei; Davidi, Ran; Li, Ruijiang; Qian, Jianguo; Lee, Rena

    2012-04-01

    Volumetric cone-beam CT (CBCT) images are acquired repeatedly during a course of radiation therapy and a natural question to ask is whether CBCT images obtained earlier in the process can be utilized as prior knowledge to reduce patient imaging dose in subsequent scans. The purpose of this work is to develop an adaptive prior image constrained compressed sensing (APICCS) method to solve this problem. Reconstructed images using full projections are taken on the first day of radiation therapy treatment and are used as prior images. The subsequent scans are acquired using a protocol of sparse projections. In the proposed APICCS algorithm, the prior images are utilized as an initial guess and are incorporated into the objective function in the compressed sensing (CS)-based iterative reconstruction process. Furthermore, the prior information is employed to detect any possible mismatched regions between the prior and current images for improved reconstruction. For this purpose, the prior images and the reconstructed images are classified into three anatomical regions: air, soft tissue and bone. Mismatched regions are identified by local differences of the corresponding groups in the two classified sets of images. A distance transformation is then introduced to convert the information into an adaptive voxel-dependent relaxation map. In constructing the relaxation map, the matched regions (unchanged anatomy) between the prior and current images are assigned with smaller weight values, which are translated into less influence on the CS iterative reconstruction process. On the other hand, the mismatched regions (changed anatomy) are associated with larger values and the regions are updated more by the new projection data, thus avoiding any possible adverse effects of prior images. The APICCS approach was systematically assessed by using patient data acquired under standard and low-dose protocols for qualitative and quantitative comparisons. 
The APICCS method provides an effective way to enhance image quality in the matched regions between the prior and current images compared to the existing PICCS algorithm. Compared to current CBCT imaging protocols, the APICCS algorithm allows an imaging dose reduction of 10-40 times, owing to the greatly reduced number of projections and the lower x-ray tube current of the low-dose protocol.

  19. A modified non-binary LDPC scheme based on watermark symbols in high speed optical transmission systems

    NASA Astrophysics Data System (ADS)

    Wang, Liming; Qiao, Yaojun; Yu, Qian; Zhang, Wenbo

    2016-04-01

    We introduce a watermark non-binary low-density parity-check code (NB-LDPC) scheme that can estimate the time-varying noise variance by using prior information from watermark symbols, improving the performance of NB-LDPC codes. Compared with the prior-art counterpart, the watermark scheme brings about a 0.25 dB improvement in net coding gain (NCG) at a bit error rate (BER) of 1e-6 and a 36.8-81% reduction in the number of iterations. The proposed scheme thus shows great potential in terms of error correction performance and decoding efficiency.

  20. Iterative approach of dual regression with a sparse prior enhances the performance of independent component analysis for group functional magnetic resonance imaging (fMRI) data.

    PubMed

    Kim, Yong-Hwan; Kim, Junghoe; Lee, Jong-Hwan

    2012-12-01

    This study proposes an iterative dual-regression (DR) approach with sparse prior regularization to better estimate an individual's neuronal activation using the results of an independent component analysis (ICA) method applied to a temporally concatenated group of functional magnetic resonance imaging (fMRI) data (i.e., Tc-GICA method). An ordinary DR approach estimates the spatial patterns (SPs) of neuronal activation and corresponding time courses (TCs) specific to each individual's fMRI data with two steps involving least-squares (LS) solutions. Our proposed approach employs iterative LS solutions to refine both the individual SPs and TCs with an additional a priori assumption of sparseness in the SPs (i.e., minimally overlapping SPs) based on L(1)-norm minimization. To quantitatively evaluate the performance of this approach, semi-artificial fMRI data were created from resting-state fMRI data with the following considerations: (1) an artificially designed spatial layout of neuronal activation patterns with varying overlap sizes across subjects and (2) a BOLD time series (TS) with variable parameters such as onset time, duration, and maximum BOLD levels. To systematically control the spatial layout variability of neuronal activation patterns across the "subjects" (n=12), the degree of spatial overlap across all subjects was varied from a minimum of 1 voxel (i.e., 0.5-voxel cubic radius) to a maximum of 81 voxels (i.e., 2.5-voxel radius) across the task-related SPs with a size of 100 voxels for both the block-based and event-related task paradigms. In addition, several levels of maximum percentage BOLD intensity (i.e., 0.5, 1.0, 2.0, and 3.0%) were used for each degree of spatial overlap size. From the results, the estimated individual SPs of neuronal activation obtained from the proposed iterative DR approach with a sparse prior showed an enhanced true positive rate and reduced false positive rate compared to the ordinary DR approach. 
The estimated TCs of the task-related SPs from our proposed approach showed greater temporal correlation coefficients with a reference hemodynamic response function than those of the ordinary DR approach. Moreover, the efficacy of the proposed DR approach was also successfully demonstrated by the results of real fMRI data acquired from left-/right-hand clenching tasks in both block-based and event-related task paradigms. Copyright © 2012 Elsevier Inc. All rights reserved.
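
    The two-step alternation with a sparsity prior can be sketched as follows; the function and parameter names are illustrative, and soft thresholding is used here as a simple proximal stand-in for the paper's L1-regularized least-squares step:

```python
import numpy as np

def soft_threshold(x, t):
    """Elementwise shrinkage, the proximal operator of the l1 norm."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def dual_regression(data, group_maps, n_iter=5, lam=0.02):
    """Iterative dual regression: alternate least-squares estimates of
    subject time courses (TCs) and spatial maps (SPs), with sparsity
    imposed on the maps by soft thresholding.

    data       : time x voxels fMRI matrix
    group_maps : components x voxels group-ICA maps (initial SPs)
    """
    maps = group_maps.copy()
    for _ in range(n_iter):
        tcs = data @ np.linalg.pinv(maps)    # step 1: subject TCs
        maps = np.linalg.pinv(tcs) @ data    # step 2: LS spatial maps
        maps = soft_threshold(maps, lam)     # sparse prior on the SPs
    return tcs, maps
```

    On synthetic data with disjoint component supports, the thresholding suppresses the small cross-talk that ordinary (single-pass) dual regression leaves in the off-support voxels.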

  1. Reduction of Metal Artifact in Single Photon-Counting Computed Tomography by Spectral-Driven Iterative Reconstruction Technique

    PubMed Central

    Nasirudin, Radin A.; Mei, Kai; Panchev, Petar; Fehringer, Andreas; Pfeiffer, Franz; Rummeny, Ernst J.; Fiebich, Martin; Noël, Peter B.

    2015-01-01

    Purpose The exciting prospect of Spectral CT (SCT) using photon-counting detectors (PCD) will lead to new techniques in computed tomography (CT) that take advantage of the additional spectral information provided. We introduce a method to reduce metal artifact in X-ray tomography by incorporating knowledge obtained from SCT into a statistical iterative reconstruction scheme. We call our method Spectral-driven Iterative Reconstruction (SPIR). Method The proposed algorithm consists of two main components: material decomposition and penalized maximum likelihood iterative reconstruction. In this study, the spectral data acquisitions with an energy-resolving PCD were simulated using a Monte-Carlo simulator based on EGSnrc C++ class library. A jaw phantom with a dental implant made of gold was used as an object in this study. A total of three dental implant shapes were simulated separately to test the influence of prior knowledge on the overall performance of the algorithm. The generated projection data was first decomposed into three basis functions: photoelectric absorption, Compton scattering and attenuation of gold. A pseudo-monochromatic sinogram was calculated and used as input in the reconstruction, while the spatial information of the gold implant was used as a prior. The results from the algorithm were assessed and benchmarked with state-of-the-art reconstruction methods. Results Decomposition results illustrate that gold implant of any shape can be distinguished from other components of the phantom. Additionally, the result from the penalized maximum likelihood iterative reconstruction shows that artifacts are significantly reduced in SPIR reconstructed slices in comparison to other known techniques, while at the same time details around the implant are preserved. Quantitatively, the SPIR algorithm best reflects the true attenuation value in comparison to other algorithms. 
Conclusion It is demonstrated that the combination of the additional information from Spectral CT and statistical reconstruction can significantly improve image quality, especially by reducing streaking artifacts caused by the presence of materials with high atomic numbers. PMID:25955019
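
The material decomposition step can be pictured, in highly simplified form, as solving a per-ray linear system across energy bins. A minimal sketch, assuming a known basis-response matrix (the numbers below are illustrative placeholders, not physical attenuation data, and the paper's decomposition is statistical rather than a plain least-squares fit):

```python
import numpy as np

# Per-ray material decomposition sketch: multi-energy line integrals are
# modeled as a linear mix of basis contributions (photoelectric absorption,
# Compton scattering, gold) and recovered by least squares.
basis = np.array([
    [0.9, 0.50, 2.0],   # energy bin 1 response to (photo, Compton, gold)
    [0.4, 0.45, 1.2],   # energy bin 2
    [0.2, 0.40, 0.8],   # energy bin 3
    [0.1, 0.35, 0.5],   # energy bin 4
])
true_coeffs = np.array([1.0, 2.0, 0.3])     # hypothetical ground truth
measured = basis @ true_coeffs              # noiseless "measurement"
coeffs, *_ = np.linalg.lstsq(basis, measured, rcond=None)
```

The sketch only shows the linear-mixing structure that lets a gold component be separated out and used as the spatial prior.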

  2. Information-reduced Carrier Synchronization of Iterative Decoded BPSK and QPSK using Soft Decision (Extrinsic) Feedback

    NASA Technical Reports Server (NTRS)

    Simon, Marvin; Valles, Esteban; Jones, Christopher

    2008-01-01

    This paper addresses the carrier-phase estimation problem under low SNR conditions as are typical of turbo- and LDPC-coded applications. In previous publications by the first author, closed-loop carrier synchronization schemes for error-correction coded BPSK and QPSK modulation were proposed that were based on feeding back hard data decisions at the input of the loop, the purpose being to remove the modulation prior to attempting to track the carrier phase as opposed to the more conventional decision-feedback schemes that incorporate such feedback inside the loop. In this paper, we consider an alternative approach wherein the extrinsic soft information from the iterative decoder of turbo or LDPC codes is instead used as the feedback.

  3. Dynamic Bayesian wavelet transform: New methodology for extraction of repetitive transients

    NASA Astrophysics Data System (ADS)

    Wang, Dong; Tsui, Kwok-Leung

    2017-05-01

    Building on recent research, the dynamic Bayesian wavelet transform is proposed in this short communication as a new methodology for the extraction of repetitive transients, to reveal fault signatures hidden in rotating machines. The main idea of the dynamic Bayesian wavelet transform is to iteratively estimate posterior parameters of the wavelet transform via artificial observations and dynamic Bayesian inference. First, a prior wavelet parameter distribution can be established by one of many fast detection algorithms, such as the fast kurtogram, the improved kurtogram, the enhanced kurtogram, the sparsogram, the infogram, continuous wavelet transform, discrete wavelet transform, wavelet packets, multiwavelets, empirical wavelet transform, empirical mode decomposition, local mean decomposition, etc. Second, artificial observations can be constructed based on one of many metrics, such as kurtosis, the sparsity measurement, entropy, approximate entropy, the smoothness index, a synthesized criterion, etc., which are able to quantify repetitive transients. Finally, given the artificial observations, the prior wavelet parameter distribution can be posteriorly updated over iterations by using dynamic Bayesian inference. More importantly, the proposed new methodology can be extended to establish the optimal parameters required by many other signal processing methods for the extraction of repetitive transients.
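
Among the metrics listed for building artificial observations, kurtosis is the most widely used; a minimal sketch (synthetic signal, illustrative only) of why it quantifies repetitive transients:

```python
import numpy as np

def kurtosis(x):
    """Sample kurtosis (4th standardized moment): ~3 for Gaussian noise,
    much larger when repetitive impulsive transients are present."""
    xc = np.asarray(x, dtype=float) - np.mean(x)
    return np.mean(xc**4) / np.mean(xc**2) ** 2

rng = np.random.default_rng(0)
noise = rng.standard_normal(4096)
impulsive = noise.copy()
impulsive[::256] += 20.0   # periodic impulses mimicking a bearing fault
```

Here `kurtosis(noise)` stays near 3 while `kurtosis(impulsive)` is far larger, which is exactly the property the artificial-observation construction exploits.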

  4. Iterative tensor voting for perceptual grouping of ill-defined curvilinear structures.

    PubMed

    Loss, Leandro A; Bebis, George; Parvin, Bahram

    2011-08-01

    In this paper, a novel approach is proposed for perceptual grouping and localization of ill-defined curvilinear structures. Our approach builds upon the tensor voting and the iterative voting frameworks. Its efficacy lies in iterative refinements of curvilinear structures by gradually shifting from an exploratory to an exploitative mode. Such a mode shifting is achieved by reducing the aperture of the tensor voting fields, which is shown to improve curve grouping and inference by enhancing the concentration of the votes over promising, salient structures. The proposed technique is validated on delineating adherens junctions that are imaged through fluorescence microscopy. However, the method is also applicable for screening other organisms based on characteristics of their cell wall structures. Adherens junctions maintain tissue structural integrity and cell-cell interactions. Visually, they exhibit fibrous patterns that may be diffused, heterogeneous in fluorescence intensity, or punctate and frequently perceptual. Besides the application to real data, the proposed method is compared to prior methods on synthetic and annotated real data, showing high precision rates.

  5. SPIRiT: Iterative Self-consistent Parallel Imaging Reconstruction from Arbitrary k-Space

    PubMed Central

    Lustig, Michael; Pauly, John M.

    2010-01-01

    A new approach to autocalibrating, coil-by-coil parallel imaging reconstruction is presented. It is a generalized reconstruction framework based on self consistency. The reconstruction problem is formulated as an optimization that yields the most consistent solution with the calibration and acquisition data. The approach is general and can accurately reconstruct images from arbitrary k-space sampling patterns. The formulation can flexibly incorporate additional image priors such as off-resonance correction and regularization terms that appear in compressed sensing. Several iterative strategies to solve the posed reconstruction problem in both image and k-space domain are presented. These are based on projection onto convex sets (POCS) and conjugate gradient (CG) algorithms. Phantom and in-vivo studies demonstrate efficient reconstructions from undersampled Cartesian and spiral trajectories. Reconstructions that include off-resonance correction and nonlinear ℓ1-wavelet regularization are also demonstrated. PMID:20665790
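
The POCS strategy alternates projections onto convex constraint sets until reaching a point consistent with all of them. A generic two-set toy in the plane, with a line and a disk standing in for SPIRiT's calibration- and data-consistency sets (both sets here are hypothetical stand-ins, not the actual k-space operators):

```python
import numpy as np

def project_line(p):
    """Projection onto the line x + y = 2 (an affine stand-in for a
    data-consistency constraint)."""
    n = np.array([1.0, 1.0])
    return p - (p @ n - 2.0) / (n @ n) * n

def project_disk(p, center=np.array([2.0, 2.0]), r=1.5):
    """Projection onto a disk (a convex stand-in for a
    calibration-consistency constraint)."""
    d = p - center
    dist = np.linalg.norm(d)
    return p if dist <= r else center + r * d / dist

# Alternating projections converge to a point in the intersection.
p = np.array([-3.0, 4.0])
for _ in range(500):
    p = project_disk(project_line(p))
```

The limit point satisfies both constraints simultaneously, which is the "most consistent solution" idea in miniature.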

  6. A Block Preconditioned Conjugate Gradient-type Iterative Solver for Linear Systems in Thermal Reservoir Simulation

    NASA Astrophysics Data System (ADS)

    Betté, Srinivas; Diaz, Julio C.; Jines, William R.; Steihaug, Trond

    1986-11-01

    A preconditioned residual-norm-reducing iterative solver is described. Based on a truncated form of the generalized-conjugate-gradient method for nonsymmetric systems of linear equations, the iterative scheme is very effective for linear systems generated in reservoir simulation of thermal oil recovery processes. As a consequence of employing an adaptive implicit finite-difference scheme to solve the model equations, the number of variables per cell-block varies dynamically over the grid. The data structure allows for 5- and 9-point operators in the areal model, 5-point in the cross-sectional model, and 7- and 11-point operators in the three-dimensional model. Block-diagonal-scaling of the linear system, done prior to iteration, is found to have a significant effect on the rate of convergence. Block-incomplete-LU-decomposition (BILU) and block-symmetric-Gauss-Seidel (BSGS) methods, which result in no fill-in, are used as preconditioning procedures. A full factorization is done on the well terms, and the cells are ordered in a manner which minimizes the fill-in in the well-column due to this factorization. The convergence criterion for the linear (inner) iteration is linked to that of the nonlinear (Newton) iteration, thereby enhancing the efficiency of the computation. The algorithm, with both BILU and BSGS preconditioners, is evaluated in the context of a variety of thermal simulation problems. The solver is robust and can be used with little or no user intervention.
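
The solver described is a truncated generalized-conjugate-gradient method for nonsymmetric systems; as a simplified stand-in, here is standard preconditioned CG for a symmetric positive-definite system, with diagonal (Jacobi) scaling playing the role of the block-diagonal scaling and block preconditioners discussed above (all values are hypothetical):

```python
import numpy as np

def pcg(A, b, M_inv, tol=1e-10, max_iter=200):
    """Preconditioned conjugate gradients for SPD A; M_inv applies the
    preconditioner (here a diagonal scaling standing in for the paper's
    block ILU / block SGS preconditioners)."""
    x = np.zeros_like(b)
    r = b - A @ x
    z = M_inv(r)
    p = z.copy()
    rz = r @ z
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol:
            break
        z = M_inv(r)
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x

A = np.array([[4.0, 1.0], [1.0, 3.0]])   # small SPD test matrix
b = np.array([1.0, 2.0])
x = pcg(A, b, lambda r: r / np.diag(A))  # diagonal-scaling analogue
```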

  7. Iterative reconstruction for x-ray computed tomography using prior-image induced nonlocal regularization.

    PubMed

    Zhang, Hua; Huang, Jing; Ma, Jianhua; Bian, Zhaoying; Feng, Qianjin; Lu, Hongbing; Liang, Zhengrong; Chen, Wufan

    2014-09-01

    Repeated X-ray computed tomography (CT) scans are often required in several specific applications, such as perfusion imaging, image-guided needle biopsy, image-guided intervention, and radiotherapy, with noticeable benefits. However, the associated cumulative radiation dose increases significantly in comparison with that of a conventional CT scan, which has raised major concerns for patients. In this study, to realize radiation dose reduction by reducing the X-ray tube current and exposure time (mAs) in repeated CT scans, we propose a prior-image induced nonlocal (PINL) regularization for statistical iterative reconstruction via the penalized weighted least-squares (PWLS) criterion, which we refer to as "PWLS-PINL". Specifically, the PINL regularization utilizes the redundant information in the prior image, and the weighted least-squares term considers a data-dependent variance estimation, aiming to improve current low-dose image quality. Subsequently, a modified iterative successive overrelaxation algorithm is adopted to optimize the associated objective function. Experimental results on both phantom and patient data show that the present PWLS-PINL method can achieve promising gains over other existing methods in terms of noise reduction, low-contrast object detection, and edge detail preservation.
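
The PWLS structure can be sketched with a quadratic prior-image penalty in place of the paper's nonlocal PINL term, and plain gradient descent in place of its modified successive over-relaxation (all names and values below are hypothetical):

```python
import numpy as np

def pwls_prior(A, y, w, x_prior, beta, n_iter=500):
    """Gradient-descent sketch of penalized weighted least squares with a
    quadratic prior-image penalty (a stand-in for the nonlocal PINL term):
        J(x) = (y - A x)^T W (y - A x) + beta * ||x - x_prior||^2"""
    W = np.diag(w)
    step = 1.0 / (2.0 * (np.linalg.norm(A.T @ W @ A, 2) + beta))  # 1/L
    x = x_prior.copy()
    for _ in range(n_iter):
        grad = 2.0 * A.T @ W @ (A @ x - y) + 2.0 * beta * (x - x_prior)
        x = x - step * grad
    return x

# Tiny hypothetical system; the result matches the closed-form minimizer.
A = np.array([[1.0, 0.2], [0.1, 1.0]])
y = np.array([1.0, 2.0])
w = np.array([1.0, 2.0])          # data-dependent statistical weights
x_prior = np.array([0.5, 1.5])    # previous normal-dose image
x_hat = pwls_prior(A, y, w, x_prior, beta=0.5)
```

The weights `w` carry the data-dependent variance estimate; `beta` trades data fidelity against agreement with the prior image.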

  8. Iterative Reconstruction for X-Ray Computed Tomography using Prior-Image Induced Nonlocal Regularization

    PubMed Central

    Ma, Jianhua; Bian, Zhaoying; Feng, Qianjin; Lu, Hongbing; Liang, Zhengrong; Chen, Wufan

    2014-01-01

    Repeated x-ray computed tomography (CT) scans are often required in several specific applications, such as perfusion imaging, image-guided needle biopsy, image-guided intervention, and radiotherapy, with noticeable benefits. However, the associated cumulative radiation dose increases significantly in comparison with that of a conventional CT scan, which has raised major concerns for patients. In this study, to realize radiation dose reduction by reducing the x-ray tube current and exposure time (mAs) in repeated CT scans, we propose a prior-image induced nonlocal (PINL) regularization for statistical iterative reconstruction via the penalized weighted least-squares (PWLS) criterion, which we refer to as “PWLS-PINL”. Specifically, the PINL regularization utilizes the redundant information in the prior image, and the weighted least-squares term considers a data-dependent variance estimation, aiming to improve current low-dose image quality. Subsequently, a modified iterative successive over-relaxation algorithm is adopted to optimize the associated objective function. Experimental results on both phantom and patient data show that the present PWLS-PINL method can achieve promising gains over other existing methods in terms of noise reduction, low-contrast object detection and edge detail preservation. PMID:24235272

  9. Demons deformable registration of CT and cone-beam CT using an iterative intensity matching approach.

    PubMed

    Nithiananthan, Sajendra; Schafer, Sebastian; Uneri, Ali; Mirota, Daniel J; Stayman, J Webster; Zbijewski, Wojciech; Brock, Kristy K; Daly, Michael J; Chan, Harley; Irish, Jonathan C; Siewerdsen, Jeffrey H

    2011-04-01

    A method of intensity-based deformable registration of CT and cone-beam CT (CBCT) images is described, in which intensity correction occurs simultaneously within the iterative registration process. The method preserves the speed and simplicity of the popular Demons algorithm while providing robustness and accuracy in the presence of large mismatch between CT and CBCT voxel values ("intensity"). A variant of the Demons algorithm was developed in which an estimate of the relationship between CT and CBCT intensity values for specific materials in the image is computed at each iteration based on the set of currently overlapping voxels. This tissue-specific intensity correction is then used to estimate the registration output for that iteration and the process is repeated. The robustness of the method was tested in CBCT images of a cadaveric head exhibiting a broad range of simulated intensity variations associated with x-ray scatter, object truncation, and/or errors in the reconstruction algorithm. The accuracy of CT-CBCT registration was also measured in six real cases, exhibiting deformations ranging from simple to complex during surgery or radiotherapy guided by a CBCT-capable C-arm or linear accelerator, respectively. The iterative intensity matching approach was robust against all levels of intensity variation examined, including spatially varying errors in voxel value of a factor of 2 or more, as can be encountered in cases of high x-ray scatter. Registration accuracy without intensity matching degraded severely with increasing magnitude of intensity error and introduced image distortion. A single histogram match performed prior to registration alleviated some of these effects but was also prone to image distortion and was quantifiably less robust and accurate than the iterative approach. 
Within the six case registration accuracy study, iterative intensity matching Demons reduced mean TRE to (2.5 +/- 2.8) mm compared to (3.5 +/- 3.0) mm with rigid registration. A method was developed to iteratively correct CT-CBCT intensity disparity during Demons registration, enabling fast, intensity-based registration in CBCT-guided procedures such as surgery and radiotherapy, in which CBCT voxel values may be inaccurate. Accurate CT-CBCT registration in turn facilitates registration of multimodality preoperative image and planning data to intraoperative CBCT by way of the preoperative CT, thereby linking the intraoperative frame of reference to a wealth of preoperative information that could improve interventional guidance.
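
The per-iteration intensity correction at the heart of the method can be sketched as a per-tissue linear fit over currently overlapping voxels (a simplified stand-in for the paper's tissue-specific estimate; names and data are hypothetical):

```python
import numpy as np

def tissue_intensity_map(ct_vals, cbct_vals, labels):
    """Per-tissue linear intensity correction estimated from the currently
    overlapping voxels: one least-squares line fit (CBCT -> CT) per tissue
    class. In the iterative scheme such a fit is refreshed every Demons
    iteration before the next registration update."""
    corrected = np.empty_like(cbct_vals, dtype=float)
    for lab in np.unique(labels):
        m = labels == lab
        a, b = np.polyfit(cbct_vals[m], ct_vals[m], 1)
        corrected[m] = a * cbct_vals[m] + b
    return corrected

# Hypothetical voxels: two tissue classes with different CT/CBCT mappings.
cbct = np.array([10.0, 20.0, 30.0, 40.0, 50.0, 60.0])
labels = np.array([0, 0, 0, 1, 1, 1])
ct = np.where(labels == 0, 2.0 * cbct + 5.0, 0.5 * cbct - 3.0)
corrected = tissue_intensity_map(ct, cbct, labels)
```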

  10. Static and Dynamic Performance of Newly Developed ITER Relevant Insulation Systems after Neutron Irradiation

    NASA Astrophysics Data System (ADS)

    Prokopec, R.; Humer, K.; Fillunger, H.; Maix, R. K.; Weber, H. W.

    2006-03-01

    Fiber reinforced plastics will be used as insulation systems for the superconducting magnet coils of ITER. The fast neutron and gamma radiation environment present at the magnet location will lead to serious material degradation, particularly of the insulation. For this reason, advanced radiation-hard resin systems are of special interest. In this study various R-glass fiber / Kapton reinforced DGEBA epoxy and cyanate ester composites fabricated by the vacuum pressure impregnation method were investigated. All systems were irradiated at ambient temperature (340 K) in the TRIGA reactor (Vienna) to a fast neutron fluence of 1×1022 m-2 (E>0.1 MeV). Short-beam shear and static tensile tests were carried out at 77 K prior to and after irradiation. In addition, tension-tension fatigue measurements were used in order to assess the mechanical performance of the insulation systems under the pulsed operation conditions of ITER. For the cyanate ester based system the influence of interleaving Kapton layers on the static and dynamic material behavior was investigated as well.

  11. SU-D-206-03: Segmentation Assisted Fast Iterative Reconstruction Method for Cone-Beam CT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wu, P; Mao, T; Gong, S

    2016-06-15

    Purpose: Total Variation (TV) based iterative reconstruction (IR) methods enable accurate CT image reconstruction from low-dose measurements with sparse projection acquisition, due to the sparsifiable feature of most CT images using the gradient operator. However, conventional solutions require a large number of iterations to generate a decent reconstructed image. One major reason is that the expected piecewise constant property is not taken into consideration at the optimization starting point. In this work, we propose an iterative reconstruction method for cone-beam CT (CBCT) using image segmentation to guide the optimization path more efficiently on the regularization term at the beginning of the optimization trajectory. Methods: Our method applies the general knowledge that one tissue component in the CT image contains a relatively uniform distribution of CT number. This general knowledge is incorporated into the proposed reconstruction using an image segmentation technique to generate a piecewise constant template from the first-pass low-quality CT image reconstructed using an analytical algorithm. The template image is applied as an initial value into the optimization process. Results: The proposed method is evaluated on the Shepp-Logan phantom at low and high noise levels, and on a head patient. The number of iterations is reduced by overall 40%. Moreover, our proposed method tends to generate a smoother reconstructed image with the same TV value. Conclusion: We propose a computationally efficient iterative reconstruction method for CBCT imaging. Our method achieves a better optimization trajectory and a faster convergence behavior. It does not rely on prior information and can be readily incorporated into existing iterative reconstruction frameworks. Our method is thus practical and attractive as a general solution to CBCT iterative reconstruction. This work is supported by the Zhejiang Provincial Natural Science Foundation of China (Grant No.
LR16F010001) and the National High-tech R&D Program for Young Scientists by the Ministry of Science and Technology of China (Grant No. 2015AA020917).
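
The segmentation-derived initialization can be sketched as thresholding a first-pass image into tissue classes and replacing each class by its mean, yielding the piecewise constant template (a simplified, hypothetical stand-in for the actual segmentation step):

```python
import numpy as np

def piecewise_template(img, thresholds):
    """Build a piecewise constant template: segment a first-pass image by
    intensity thresholds and replace each segment by its mean value; the
    result serves as the initial value for the iterative reconstruction."""
    labels = np.digitize(img, thresholds)
    out = np.empty_like(img, dtype=float)
    for lab in np.unique(labels):
        m = labels == lab
        out[m] = img[m].mean()
    return out

first_pass = np.array([0.10, 0.20, 0.80, 0.90])   # hypothetical analytic recon
template = piecewise_template(first_pass, [0.5])
```

Starting the TV-regularized optimization from such a template already satisfies the piecewise constant expectation, which is why fewer iterations are needed.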

  12. Single-agent parallel window search

    NASA Technical Reports Server (NTRS)

    Powley, Curt; Korf, Richard E.

    1991-01-01

    Parallel window search is applied to single-agent problems by having different processes simultaneously perform iterations of Iterative-Deepening-A* (IDA*) on the same problem but with different cost thresholds. This approach is limited by the time to perform the goal iteration. To overcome this disadvantage, the authors consider node ordering. They discuss how global node ordering by minimum h among nodes with equal f = g + h values can reduce the time complexity of serial IDA* by reducing the time to perform the iterations prior to the goal iteration. Finally, the two ideas of parallel window search and node ordering are combined to eliminate the weaknesses of each approach while retaining the strengths. The resulting approach, called simply parallel window search, can be used to find a near-optimal solution quickly, improve the solution until it is optimal, and then finally guarantee optimality, depending on the amount of time available.
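
A minimal serial IDA* sketch shows the cost-threshold iterations that parallel window search distributes across processes (the graph and heuristic below are hypothetical):

```python
import math

def ida_star(start, goal, neighbors, h):
    """Minimal serial IDA*: iterative deepening on f = g + h thresholds.
    Parallel window search would run several thresholds concurrently."""
    threshold = h(start)
    while True:
        next_threshold = math.inf

        def dfs(node, g, path):
            nonlocal next_threshold
            f = g + h(node)
            if f > threshold:
                next_threshold = min(next_threshold, f)   # candidate window
                return None
            if node == goal:
                return path
            for nxt, cost in neighbors(node):
                if nxt not in path:                       # avoid cycles
                    found = dfs(nxt, g + cost, path + [nxt])
                    if found:
                        return found
            return None

        result = dfs(start, 0, [start])
        if result is not None:
            return result
        if next_threshold is math.inf:
            return None                                   # no solution
        threshold = next_threshold                        # next iteration

# Toy graph: the optimal path A->B->D (cost 3) under an admissible h.
graph = {"A": [("B", 1), ("C", 4)], "B": [("D", 2)], "C": [("D", 1)], "D": []}
hvals = {"A": 2, "B": 2, "C": 1, "D": 0}
path = ida_star("A", "D", lambda n: graph[n], lambda n: hvals[n])
```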

  13. Interior tomography from differential phase contrast data via Hilbert transform based on spline functions

    NASA Astrophysics Data System (ADS)

    Yang, Qingsong; Cong, Wenxiang; Wang, Ge

    2016-10-01

    X-ray phase contrast imaging is an important mode due to its sensitivity to subtle features of soft biological tissues. Grating-based differential phase contrast (DPC) imaging is one of the most promising phase imaging techniques because it works with a normal x-ray tube of a large focal spot at a high flux rate. However, a main obstacle before this paradigm shift is the fabrication of large-area gratings of a small period and a high aspect ratio. Imaging large objects with a size-limited grating results in data truncation which is a new type of the interior problem. While the interior problem was solved for conventional x-ray CT through analytic extension, compressed sensing and iterative reconstruction, the difficulty for interior reconstruction from DPC data lies in that the implementation of the system matrix requires the differential operation on the detector array, which is often inaccurate and unstable in the case of noisy data. Here, we propose an iterative method based on spline functions. The differential data are first back-projected to the image space. Then, a system matrix is calculated whose components are the Hilbert transforms of the spline bases. The system matrix takes the whole image as an input and outputs the back-projected interior data. Prior information normally assumed for compressed sensing is enforced to iteratively solve this inverse problem. Our results demonstrate that the proposed algorithm can successfully reconstruct an interior region of interest (ROI) from the differential phase data through the ROI.

  14. Demons deformable registration of CT and cone-beam CT using an iterative intensity matching approach

    PubMed Central

    Nithiananthan, Sajendra; Schafer, Sebastian; Uneri, Ali; Mirota, Daniel J.; Stayman, J. Webster; Zbijewski, Wojciech; Brock, Kristy K.; Daly, Michael J.; Chan, Harley; Irish, Jonathan C.; Siewerdsen, Jeffrey H.

    2011-01-01

    Purpose: A method of intensity-based deformable registration of CT and cone-beam CT (CBCT) images is described, in which intensity correction occurs simultaneously within the iterative registration process. The method preserves the speed and simplicity of the popular Demons algorithm while providing robustness and accuracy in the presence of large mismatch between CT and CBCT voxel values (“intensity”). Methods: A variant of the Demons algorithm was developed in which an estimate of the relationship between CT and CBCT intensity values for specific materials in the image is computed at each iteration based on the set of currently overlapping voxels. This tissue-specific intensity correction is then used to estimate the registration output for that iteration and the process is repeated. The robustness of the method was tested in CBCT images of a cadaveric head exhibiting a broad range of simulated intensity variations associated with x-ray scatter, object truncation, and/or errors in the reconstruction algorithm. The accuracy of CT-CBCT registration was also measured in six real cases, exhibiting deformations ranging from simple to complex during surgery or radiotherapy guided by a CBCT-capable C-arm or linear accelerator, respectively. Results: The iterative intensity matching approach was robust against all levels of intensity variation examined, including spatially varying errors in voxel value of a factor of 2 or more, as can be encountered in cases of high x-ray scatter. Registration accuracy without intensity matching degraded severely with increasing magnitude of intensity error and introduced image distortion. A single histogram match performed prior to registration alleviated some of these effects but was also prone to image distortion and was quantifiably less robust and accurate than the iterative approach.
Within the six case registration accuracy study, iterative intensity matching Demons reduced mean TRE to (2.5±2.8) mm compared to (3.5±3.0) mm with rigid registration. Conclusions: A method was developed to iteratively correct CT-CBCT intensity disparity during Demons registration, enabling fast, intensity-based registration in CBCT-guided procedures such as surgery and radiotherapy, in which CBCT voxel values may be inaccurate. Accurate CT-CBCT registration in turn facilitates registration of multimodality preoperative image and planning data to intraoperative CBCT by way of the preoperative CT, thereby linking the intraoperative frame of reference to a wealth of preoperative information that could improve interventional guidance. PMID:21626913

  15. Demons deformable registration of CT and cone-beam CT using an iterative intensity matching approach

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nithiananthan, Sajendra; Schafer, Sebastian; Uneri, Ali

    2011-04-15

    Purpose: A method of intensity-based deformable registration of CT and cone-beam CT (CBCT) images is described, in which intensity correction occurs simultaneously within the iterative registration process. The method preserves the speed and simplicity of the popular Demons algorithm while providing robustness and accuracy in the presence of large mismatch between CT and CBCT voxel values ("intensity"). Methods: A variant of the Demons algorithm was developed in which an estimate of the relationship between CT and CBCT intensity values for specific materials in the image is computed at each iteration based on the set of currently overlapping voxels. This tissue-specific intensity correction is then used to estimate the registration output for that iteration and the process is repeated. The robustness of the method was tested in CBCT images of a cadaveric head exhibiting a broad range of simulated intensity variations associated with x-ray scatter, object truncation, and/or errors in the reconstruction algorithm. The accuracy of CT-CBCT registration was also measured in six real cases, exhibiting deformations ranging from simple to complex during surgery or radiotherapy guided by a CBCT-capable C-arm or linear accelerator, respectively. Results: The iterative intensity matching approach was robust against all levels of intensity variation examined, including spatially varying errors in voxel value of a factor of 2 or more, as can be encountered in cases of high x-ray scatter. Registration accuracy without intensity matching degraded severely with increasing magnitude of intensity error and introduced image distortion. A single histogram match performed prior to registration alleviated some of these effects but was also prone to image distortion and was quantifiably less robust and accurate than the iterative approach.
Within the six case registration accuracy study, iterative intensity matching Demons reduced mean TRE to (2.5±2.8) mm compared to (3.5±3.0) mm with rigid registration. Conclusions: A method was developed to iteratively correct CT-CBCT intensity disparity during Demons registration, enabling fast, intensity-based registration in CBCT-guided procedures such as surgery and radiotherapy, in which CBCT voxel values may be inaccurate. Accurate CT-CBCT registration in turn facilitates registration of multimodality preoperative image and planning data to intraoperative CBCT by way of the preoperative CT, thereby linking the intraoperative frame of reference to a wealth of preoperative information that could improve interventional guidance.

  16. A stopping criterion for the iterative solution of partial differential equations

    NASA Astrophysics Data System (ADS)

    Rao, Kaustubh; Malan, Paul; Perot, J. Blair

    2018-01-01

    A stopping criterion for iterative solution methods is presented that accurately estimates the solution error using low computational overhead. The proposed criterion uses information from prior solution changes to estimate the error. When the solution changes are noisy or stagnating it reverts to a less accurate but more robust, low-cost singular value estimate to approximate the error given the residual. This estimator can also be applied to iterative linear matrix solvers such as Krylov subspace or multigrid methods. Examples of the stopping criterion's ability to accurately estimate the non-linear and linear solution error are provided for a number of different test cases in incompressible fluid dynamics.
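
The idea of estimating the remaining error from prior solution changes can be sketched for a simple linearly converging iteration: if successive updates shrink by an observed factor rho, the distance to the limit is roughly the latest update times rho/(1 - rho). A Jacobi-iteration toy (not the authors' estimator; the robust singular-value fallback is omitted, and all values are hypothetical):

```python
import numpy as np

def jacobi_with_error_estimate(A, b, tol=1e-8, max_iter=500):
    """Jacobi iteration with an error estimate built from prior solution
    changes: for a linearly converging iteration with observed contraction
    factor rho, ||x_k - x*|| is approximately ||dx_k|| * rho / (1 - rho)."""
    D = np.diag(A)
    R = A - np.diag(D)
    x = np.zeros_like(b, dtype=float)
    prev_dx = None
    for _ in range(max_iter):
        x_new = (b - R @ x) / D
        dx = np.linalg.norm(x_new - x)
        x = x_new
        if prev_dx is not None and dx < prev_dx:
            rho = dx / prev_dx                  # observed contraction factor
            err_est = dx * rho / (1.0 - rho)    # estimated distance to x*
            if err_est < tol:
                return x, err_est
        prev_dx = dx
    return x, None

# Diagonally dominant system, so the Jacobi iteration converges linearly.
A = np.array([[4.0, 1.0, 0.0], [1.0, 5.0, 1.0], [0.0, 1.0, 4.0]])
b = np.array([1.0, 2.0, 3.0])
x, err = jacobi_with_error_estimate(A, b)
```

Stopping on the estimated solution error, rather than on the raw residual, is what lets the criterion terminate as soon as the answer (not just the residual) is good enough.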

  17. Bayesian image reconstruction for improving detection performance of muon tomography.

    PubMed

    Wang, Guobao; Schultz, Larry J; Qi, Jinyi

    2009-05-01

    Muon tomography is a novel technology that is being developed for detecting high-Z materials in vehicles or cargo containers. Maximum likelihood methods have been developed for reconstructing the scattering density image from muon measurements. However, the instability of maximum likelihood estimation often results in noisy images and low detectability of high-Z targets. In this paper, we propose using regularization to improve the image quality of muon tomography. We formulate the muon reconstruction problem in a Bayesian framework by introducing a prior distribution on scattering density images. An iterative shrinkage algorithm is derived to maximize the log posterior distribution. At each iteration, the algorithm obtains the maximum a posteriori update by shrinking an unregularized maximum likelihood update. Inverse quadratic shrinkage functions are derived for generalized Laplacian priors and inverse cubic shrinkage functions are derived for generalized Gaussian priors. Receiver operating characteristic studies using simulated data demonstrate that the Bayesian reconstruction can greatly improve the detection performance of muon tomography.
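
The shrink-the-ML-update idea is easiest to see in the scalar Gaussian-likelihood case with a plain Laplacian prior, where the MAP update is the familiar soft-threshold of the unregularized update (the paper derives inverse quadratic and inverse cubic shrinkage functions for its generalized priors; this is only the simplest instance):

```python
import numpy as np

def soft_shrink(u, t):
    """Soft-threshold shrinkage: the MAP solution of
        argmin_x 0.5*(x - u)**2 + t*|x|,
    i.e. a Laplacian prior shrinking the unregularized (ML) update u."""
    return np.sign(u) * np.maximum(np.abs(u) - t, 0.0)

u = np.array([-2.0, -0.3, 0.0, 0.5, 3.0])   # hypothetical ML update
x_map = soft_shrink(u, 1.0)                 # shrunk MAP update
```

Small updates (likely noise) are set to zero while large ones survive slightly reduced, which is how the regularization suppresses noisy scattering-density estimates without erasing high-Z targets.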

  18. Design and evaluation of a web-based decision support tool for district-level disease surveillance in a low-resource setting

    PubMed Central

    Pore, Meenal; Sengeh, David M.; Mugambi, Purity; Purswani, Nuri V.; Sesay, Tom; Arnold, Anna Lena; Tran, Anh-Minh A.; Myers, Ralph

    2017-01-01

    During the 2014 West African Ebola Virus outbreak it became apparent that the initial response to the outbreak was hampered by limitations in the collection, aggregation, analysis and use of data for intervention planning. As part of the post-Ebola recovery phase, IBM Research Africa partnered with the Port Loko District Health Management Team (DHMT) in Sierra Leone and GOAL Global, to design, implement and deploy a web-based decision support tool for district-level disease surveillance. This paper discusses the design process and the functionality of the first version of the system. The paper presents evaluation results prior to a pilot deployment and identifies features for future iterations. A qualitative assessment of the tool prior to pilot deployment indicates that it improves the timeliness and ease of using data for making decisions at the DHMT level. PMID:29854209

  19. Iterative-Transform Phase Retrieval Using Adaptive Diversity

    NASA Technical Reports Server (NTRS)

    Dean, Bruce H.

    2007-01-01

    A phase-diverse iterative-transform phase-retrieval algorithm enables high spatial-frequency, high-dynamic-range, image-based wavefront sensing. [The terms phase-diverse, phase retrieval, image-based, and wavefront sensing are defined in the first of the two immediately preceding articles, Broadband Phase Retrieval for Image-Based Wavefront Sensing (GSC-14899-1).] As described below, no prior phase-retrieval algorithm has offered both high dynamic range and the capability to recover high spatial-frequency components. Each of the previously developed image-based phase-retrieval techniques can be classified into one of two categories: iterative transform or parametric. Among the modifications of the original iterative-transform approach has been the introduction of a defocus diversity function (also defined in the cited companion article). Modifications of the original parametric approach have included minimizing alternative objective functions as well as implementing a variety of nonlinear optimization methods. The iterative-transform approach offers the advantage of the ability to recover low, middle, and high spatial frequencies, but has the disadvantage of a dynamic range limited to one wavelength or less. In contrast, parametric phase retrieval offers the advantage of high dynamic range, but is poorly suited for recovering higher spatial frequency aberrations. The present phase-diverse iterative-transform phase-retrieval algorithm offers both the high-spatial-frequency capability of the iterative-transform approach and the high dynamic range of parametric phase-recovery techniques. In implementation, this is a focus-diverse iterative-transform phase-retrieval algorithm that incorporates an adaptive diversity function, which makes it possible to avoid phase unwrapping while preserving high-spatial-frequency recovery. The algorithm includes an inner and an outer loop (see figure).
An initial estimate of phase is used to start the algorithm on the inner loop, wherein multiple intensity images are processed, each using a different defocus value. The processing is done by an iterative-transform method, yielding individual phase estimates corresponding to each image of the defocus-diversity data set. These individual phase estimates are combined in a weighted average to form a new phase estimate, which serves as the initial phase estimate for either the next iteration of the iterative-transform method or, if the maximum number of iterations has been reached, for the next several steps, which constitute the outer-loop portion of the algorithm. The details of the next several steps must be omitted here for the sake of brevity. The overall effect of these steps is to adaptively update the diversity defocus values according to recovery of global defocus in the phase estimate. Aberration recovery varies by differing amounts as the diversity defocus is updated in each image; thus, feedback is incorporated into the recovery process. This process is iterated until the global defocus error is driven to zero during the recovery process. The amplitude of aberration may far exceed one wavelength after completion of the inner-loop portion of the algorithm, and the classical iterative-transform method does not, by itself, enable recovery of multi-wavelength aberrations. Hence, in the absence of a means of off-loading the multi-wavelength portion of the aberration, the algorithm would produce a wrapped phase map. However, a special aberration-fitting procedure can be applied to the wrapped phase data to transfer at least some portion of the multi-wavelength aberration to the diversity function, wherein the data are treated as known phase values. In this way, a multi-wavelength aberration can be recovered incrementally by successively applying the aberration-fitting procedure to intermediate wrapped phase maps.
During recovery, as more of the aberration is transferred to the diversity function following successive iterations around the outer loop, the estimated phase ceases to wrap in places where the aberration values become incorporated as part of the diversity function. As a result, as the aberration content is transferred to the diversity function, the phase estimate comes to resemble that of a reference flat.
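The inner loop lends itself to a compact numerical sketch. The following Python fragment is illustrative only, not the authors' implementation: it assumes a simple quadratic defocus model for the diversity phase, uses FFT propagation between pupil and focal planes, and combines the per-image estimates with a uniform rather than weighted average. All function names are made up for the sketch, and the naive averaging of wrapped phase values glosses over exactly the unwrapping issues that the adaptive diversity function is designed to avoid.

```python
import numpy as np

def defocus_phase(n, waves):
    """Quadratic defocus diversity phase over a square pupil grid
    (an assumed, simplified diversity model)."""
    y, x = np.mgrid[-1:1:1j * n, -1:1:1j * n]
    return 2.0 * np.pi * waves * (x**2 + y**2)

def inner_loop(pupil_amp, phase0, images, defocus_waves, n_iter=10):
    """One pass of the inner loop: per-image iterative-transform phase
    estimates, combined here by a plain (uniform) average."""
    phase = phase0.copy()
    for _ in range(n_iter):
        estimates = []
        for img, dw in zip(images, defocus_waves):
            div = defocus_phase(img.shape[0], dw)
            field = pupil_amp * np.exp(1j * (phase + div))
            focal = np.fft.fft2(field)
            # enforce the measured intensity, keep the computed phase
            focal = np.sqrt(img) * np.exp(1j * np.angle(focal))
            back = np.fft.ifft2(focal)
            estimates.append(np.angle(back) - div)
        phase = np.mean(estimates, axis=0)  # weighted average (uniform here)
    return phase
```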

  20. Analog Design for Digital Deployment of a Serious Leadership Game

    NASA Technical Reports Server (NTRS)

    Maxwell, Nicholas; Lang, Tristan; Herman, Jeffrey L.; Phares, Richard

    2012-01-01

    This paper presents the design, development, and user testing of a leadership development simulation. The authors share lessons learned from using a design process for a board game to allow for quick and inexpensive revision cycles during the development of a serious leadership development game. The goal of this leadership simulation is to accelerate the development of leadership capacity in high-potential mid-level managers (GS-15 level) in a federal government agency. Simulation design included a mixed-method needs analysis, using both quantitative and qualitative approaches to determine organizational leadership needs. Eight design iterations were conducted, including three user testing phases. Three re-design iterations followed initial development, enabling game testing as part of comprehensive instructional events. Subsequent design, development and testing processes targeted digital application to a computer- and tablet-based environment. Recommendations include pros and cons of development and learner testing of an initial analog simulation prior to full digital simulation development.

  1. A Modularized Efficient Framework for Non-Markov Time Series Estimation

    NASA Astrophysics Data System (ADS)

    Schamberg, Gabriel; Ba, Demba; Coleman, Todd P.

    2018-06-01

    We present a compartmentalized approach to finding the maximum a posteriori (MAP) estimate of a latent time series that obeys a dynamic stochastic model and is observed through noisy measurements. We specifically consider modern signal processing problems with non-Markov signal dynamics (e.g. group sparsity) and/or non-Gaussian measurement models (e.g. point process observation models used in neuroscience). Through the use of auxiliary variables in the MAP estimation problem, we show that a consensus formulation of the alternating direction method of multipliers (ADMM) enables iteratively computing separate estimates based on the likelihood and prior and subsequently "averaging" them in an appropriate sense using a Kalman smoother. The approach applies to a broad class of problem settings and requires only modular adjustments when interchanging various aspects of the statistical model. Under broad log-concavity assumptions, we show that the separate estimation problems are convex optimization problems and that the iterative algorithm converges to the MAP estimate. As such, this framework can capture non-Markov latent time series models and non-Gaussian measurement models. We provide example applications involving (i) group-sparsity priors, within the context of electrophysiologic spectrotemporal estimation, and (ii) non-Gaussian measurement models, within the context of dynamic analyses of learning with neural spiking and behavioral observations.
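The consensus splitting can be illustrated on a toy problem. The sketch below is a deliberate simplification of the paper's framework: the likelihood is a plain Gaussian with identity observation, the prior is an elementwise l1 penalty rather than a group-sparsity or non-Markov prior, and the Kalman-smoother "averaging" step is replaced by a simple mean of the two block estimates.

```python
import numpy as np

def soft(v, t):
    """Soft-thresholding: the proximal operator of the l1 norm."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def consensus_admm(y, lam=0.5, rho=1.0, n_iter=300):
    """Consensus ADMM for min_x 0.5*||y - x||^2 + lam*||x||_1:
    separate likelihood and prior estimates are computed each iteration
    and then averaged (the paper performs this averaging step with a
    Kalman smoother; a plain mean stands in for it here)."""
    z = np.zeros_like(y)
    u1 = np.zeros_like(y)
    u2 = np.zeros_like(y)
    for _ in range(n_iter):
        x1 = (y + rho * (z - u1)) / (1.0 + rho)  # likelihood-block prox
        x2 = soft(z - u2, lam / rho)             # prior-block prox
        z = 0.5 * ((x1 + u1) + (x2 + u2))        # consensus "averaging"
        u1 += x1 - z
        u2 += x2 - z
    return z
```

For this separable toy problem the iterates converge to the closed-form solution, elementwise soft-thresholding of the data.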

  2. Statistical iterative reconstruction to improve image quality for digital breast tomosynthesis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Xu, Shiyu, E-mail: shiyu.xu@gmail.com; Chen, Ying, E-mail: adachen@siu.edu; Lu, Jianping

    2015-09-15

    Purpose: Digital breast tomosynthesis (DBT) is a novel modality with the potential to improve early detection of breast cancer by providing three-dimensional (3D) imaging with a low radiation dose. 3D image reconstruction presents some challenges: cone-beam and flat-panel geometry, and highly incomplete sampling. A promising means to overcome these challenges is statistical iterative reconstruction (IR), since it provides the flexibility of accurate physics modeling and a general description of system geometry. The authors’ goal was to develop techniques for applying statistical IR to tomosynthesis imaging data. Methods: These techniques include the following: a physics model with a local voxel-pair based prior with flexible parameters to fine-tune image quality; a precomputed parameter λ in the prior, to remove data dependence and to achieve a uniform resolution property; an effective ray-driven technique to compute the forward and backprojection; and an oversampled, ray-driven method to perform high resolution reconstruction with a practical region-of-interest technique. To assess the performance of these techniques, the authors acquired phantom data on the stationary DBT prototype system. To solve the estimation problem, the authors proposed an optimization-transfer based algorithm framework that potentially allows fewer iterations to achieve an acceptably converged reconstruction. Results: IR improved the detectability of low-contrast and small microcalcifications, reduced cross-plane artifacts, improved spatial resolution, and lowered noise in reconstructed images. Conclusions: Although the computational load remains a significant challenge for practical development, the superior image quality provided by statistical IR, combined with advancing computational techniques, may bring benefits to screening, diagnostics, and intraoperative imaging in clinical applications.

  3. Jacobian-Based Iterative Method for Magnetic Localization in Robotic Capsule Endoscopy

    PubMed Central

    Di Natali, Christian; Beccani, Marco; Simaan, Nabil; Valdastri, Pietro

    2016-01-01

    The purpose of this study is to validate a Jacobian-based iterative method for real-time localization of magnetically controlled endoscopic capsules. The proposed approach applies finite-element solutions to the magnetic field problem and least-squares interpolations to obtain closed-form and fast estimates of the magnetic field. By defining a closed-form expression for the Jacobian of the magnetic field relative to changes in the capsule pose, we are able to obtain an iterative localization at a faster computational time when compared with prior works, without suffering from the inaccuracies stemming from dipole assumptions. This new algorithm can be used in conjunction with an absolute localization technique that provides initialization values at a slower refresh rate. The proposed approach was assessed via simulation and experimental trials, adopting a wireless capsule equipped with a permanent magnet, six magnetic field sensors, and an inertial measurement unit. The overall refresh rate, including sensor data acquisition and wireless communication, was 7 ms, thus enabling closed-loop control strategies for magnetic manipulation running faster than 100 Hz. The average localization error, expressed in cylindrical coordinates, was below 7 mm in both the radial and axial components and 5° in the azimuthal component. The average error for the capsule orientation angles, obtained by fusing gyroscope and inclinometer measurements, was below 5°. PMID:27087799
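The Jacobian-based update can be sketched generically. In the fragment below, a made-up algebraic forward model stands in for the paper's interpolated finite-element field map; the Gauss-Newton form of the update and the numerical Jacobian are illustrative choices, not details from the paper.

```python
import numpy as np

def jacobian(f, p, eps=1e-6):
    """Numerical Jacobian of the field model f at pose p."""
    f0 = f(p)
    J = np.zeros((f0.size, p.size))
    for k in range(p.size):
        dp = np.zeros_like(p)
        dp[k] = eps
        J[:, k] = (f(p + dp) - f0) / eps
    return J

def iterative_localization(f, b_meas, p0, n_iter=50):
    """Gauss-Newton pose refinement: move the pose along the
    pseudo-inverse of the field Jacobian until the modeled field
    matches the sensor readings."""
    p = np.asarray(p0, dtype=float).copy()
    for _ in range(n_iter):
        r = f(p) - b_meas
        p -= np.linalg.pinv(jacobian(f, p)) @ r
    return p
```

With sensor readings b_meas and a starting pose supplied by the slower absolute localization technique, the loop refines the pose estimate at each sensor update.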

  4. Improving phylogenetic analyses by incorporating additional information from genetic sequence databases.

    PubMed

    Liang, Li-Jung; Weiss, Robert E; Redelings, Benjamin; Suchard, Marc A

    2009-10-01

    Statistical analyses of phylogenetic data culminate in uncertain estimates of underlying model parameters. Lack of additional data hinders the ability to reduce this uncertainty, as the original phylogenetic dataset is often complete, containing the entire gene or genome information available for the given set of taxa. Informative priors in a Bayesian analysis can reduce posterior uncertainty; however, publicly available phylogenetic software specifies vague priors for model parameters by default. We build objective and informative priors using hierarchical random effect models that combine additional datasets whose parameters are not of direct interest but are similar to the analysis of interest. We propose principled statistical methods that permit more precise parameter estimates in phylogenetic analyses by creating informative priors for parameters of interest. Using additional sequence datasets from our lab or public databases, we construct a fully Bayesian semiparametric hierarchical model to combine datasets. A dynamic iteratively reweighted Markov chain Monte Carlo algorithm conveniently recycles posterior samples from the individual analyses. We demonstrate the value of our approach by examining the insertion-deletion (indel) process in the enolase gene across the Tree of Life using the phylogenetic software BALI-PHY; we incorporate prior information about indels from 82 curated alignments downloaded from the BAliBASE database.

  5. Ultra-low-dose computed tomographic angiography with model-based iterative reconstruction compared with standard-dose imaging after endovascular aneurysm repair: a prospective pilot study.

    PubMed

    Naidu, Sailen G; Kriegshauser, J Scott; Paden, Robert G; He, Miao; Wu, Qing; Hara, Amy K

    2014-12-01

    An ultra-low-dose radiation protocol reconstructed with model-based iterative reconstruction was compared with our standard-dose protocol. This prospective study evaluated 20 men undergoing surveillance-enhanced computed tomography after endovascular aneurysm repair. All patients underwent standard-dose and ultra-low-dose venous phase imaging; images were compared after reconstruction with filtered back projection, adaptive statistical iterative reconstruction, and model-based iterative reconstruction. Objective measures of aortic contrast attenuation and image noise were averaged. Images were subjectively assessed (1 = worst, 5 = best) for diagnostic confidence, image noise, and vessel sharpness. Aneurysm sac diameter and endoleak detection were compared. Quantitative image noise was 26% less with ultra-low-dose model-based iterative reconstruction than with standard-dose adaptive statistical iterative reconstruction and 58% less than with ultra-low-dose adaptive statistical iterative reconstruction. Average subjective noise scores were not different between ultra-low-dose model-based iterative reconstruction and standard-dose adaptive statistical iterative reconstruction (3.8 vs. 4.0, P = .25). Subjective scores for diagnostic confidence were better with standard-dose adaptive statistical iterative reconstruction than with ultra-low-dose model-based iterative reconstruction (4.4 vs. 4.0, P = .002). Vessel sharpness was decreased with ultra-low-dose model-based iterative reconstruction compared with standard-dose adaptive statistical iterative reconstruction (3.3 vs. 4.1, P < .0001). Ultra-low-dose model-based iterative reconstruction and standard-dose adaptive statistical iterative reconstruction aneurysm sac diameters were not significantly different (4.9 vs. 4.9 cm); concordance for the presence of endoleak was 100% (P < .001). 
Compared with a standard-dose technique, an ultra-low-dose model-based iterative reconstruction protocol provides comparable image quality and diagnostic assessment at a 73% lower radiation dose.

  6. An improved pulse sequence and inversion algorithm of T2 spectrum

    NASA Astrophysics Data System (ADS)

    Ge, Xinmin; Chen, Hua; Fan, Yiren; Liu, Juntao; Cai, Jianchao; Liu, Jianyu

    2017-03-01

    The nuclear magnetic resonance transversal relaxation time is widely applied in geological prospecting, both in laboratory and downhole environments. However, current methods for data acquisition and inversion should be reformed to characterize geological samples with complicated relaxation components and pore size distributions, such as samples of tight oil, gas shale, and carbonate. We present an improved pulse sequence to collect transversal relaxation signals based on the CPMG (Carr, Purcell, Meiboom, and Gill) pulse sequence. The echo spacing is not constant but varies in different windows, depending on prior knowledge or customer requirements. We use the entropy-based truncated singular value decomposition (TSVD) to compress the ill-posed matrix and discard small singular values that cause inversion instability. A hybrid algorithm combining the iterative TSVD and a simultaneous iterative reconstruction technique is implemented to reach global convergence and stability of the inversion. Numerical simulations indicate that the improved pulse sequence leads to the same result as CPMG, but with fewer echoes and less computational time. The proposed method is a promising technique for geophysical prospecting and other related fields in the future.
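The truncation step can be sketched on a toy relaxation kernel. This is a plain TSVD solve; the entropy-based criterion for choosing the truncation level is not reproduced, and the echo-time grid, T2 grid, and truncation level below are illustrative assumptions.

```python
import numpy as np

def tsvd_invert(K, y, k):
    """Truncated-SVD solve: keep the k largest singular values and
    discard the small ones that drive the inversion instability."""
    U, s, Vt = np.linalg.svd(K, full_matrices=False)
    inv_s = np.where(np.arange(s.size) < k, 1.0 / s, 0.0)
    return Vt.T @ (inv_s * (U.T @ y))

# toy T2 inversion: an echo train is the relaxation kernel applied
# to the (unknown) T2 amplitude distribution
t = np.linspace(0.001, 1.0, 200)        # echo times (s), assumed
T2 = np.logspace(-3, 0, 30)             # candidate T2 values (s), assumed
K = np.exp(-t[:, None] / T2[None, :])
f_true = np.zeros(30)
f_true[20] = 1.0                        # a single relaxation component
y = K @ f_true
f_est = tsvd_invert(K, y, k=10)
```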

  7. Complete elliptical ring geometry provides energy and instrument calibration for synchrotron-based two-dimensional X-ray diffraction

    PubMed Central

    Hart, Michael L.; Drakopoulos, Michael; Reinhard, Christina; Connolley, Thomas

    2013-01-01

    A complete calibration method to characterize a static planar two-dimensional detector for use in X-ray diffraction at an arbitrary wavelength is described. This method is based upon geometry describing the point of intersection between a cone’s axis and its elliptical conic section. This point of intersection is neither the ellipse centre nor one of the ellipse focal points, but some other point which lies in between. The presented solution is closed form, algebraic and non-iterative in its application, and gives values for the X-ray beam energy, the sample-to-detector distance, the location of the beam centre on the detector surface and the detector tilt relative to the incident beam. Previous techniques have tended to require prior knowledge of either the X-ray beam energy or the sample-to-detector distance, whilst other techniques have been iterative. The new calibration procedure is performed by collecting diffraction data, in the form of diffraction rings from a powder standard, at known displacements of the detector along the beam path. PMID:24068840

  8. Application of an improved maximum correlated kurtosis deconvolution method for fault diagnosis of rolling element bearings

    NASA Astrophysics Data System (ADS)

    Miao, Yonghao; Zhao, Ming; Lin, Jing; Lei, Yaguo

    2017-08-01

    The extraction of periodic impulses, which are important indicators of rolling bearing faults, from vibration signals is of considerable significance for fault diagnosis. Maximum correlated kurtosis deconvolution (MCKD), developed from minimum entropy deconvolution (MED), has been proven an efficient tool for enhancing the periodic impulses in the diagnosis of rolling element bearings and gearboxes. However, challenges still exist when MCKD is applied to bearings operating under harsh working conditions. The difficulties mainly come from the rigorous requirements on the multiple input parameters and the complicated resampling process. To overcome these limitations, an improved MCKD (IMCKD) is presented in this paper. The new method estimates the iterative period by calculating the autocorrelation of the envelope signal rather than relying on a provided prior period. Moreover, the iterative period gradually approaches the true fault period by updating the iterative period after every iterative step. Since IMCKD is unaffected by impulse signals with high kurtosis values, the new method selects the maximum-kurtosis filtered signal as the final choice from all candidates within the assigned iteration count. Compared with MCKD, IMCKD has three advantages. First, without requiring a prior period or the choice of the order of shift, IMCKD is more efficient and more robust. Second, the resampling process is not necessary for IMCKD, which greatly simplifies the subsequent frequency-spectrum and envelope-spectrum analysis without resetting the sampling rate. Third, IMCKD has a significant performance advantage in diagnosing bearing compound faults, which expands its application range. Finally, the effectiveness and superiority of IMCKD are validated on a number of simulated bearing fault signals and by application to compound-fault and single-fault diagnosis of a locomotive bearing.
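The core of the period-estimation step, taking the autocorrelation of the envelope signal, might look like the following sketch; the FFT-based Hilbert envelope and the minimum-lag guard against the zero-lag peak are implementation assumptions rather than details from the paper.

```python
import numpy as np

def estimate_period(x, fs, min_lag_s=0.001):
    """Estimate the dominant fault period as the first major peak of the
    autocorrelation of the signal envelope (FFT-based Hilbert envelope)."""
    n = x.size
    X = np.fft.fft(x)
    h = np.zeros(n)
    h[0] = 1.0
    h[1:(n + 1) // 2] = 2.0
    if n % 2 == 0:
        h[n // 2] = 1.0
    envelope = np.abs(np.fft.ifft(X * h))   # |analytic signal|
    e = envelope - envelope.mean()
    ac = np.correlate(e, e, mode='full')[n - 1:]
    lag0 = int(min_lag_s * fs)              # skip the zero-lag peak
    return (lag0 + np.argmax(ac[lag0:])) / fs
```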

  9. Iterative CT shading correction with no prior information

    NASA Astrophysics Data System (ADS)

    Wu, Pengwei; Sun, Xiaonan; Hu, Hongjie; Mao, Tingyu; Zhao, Wei; Sheng, Ke; Cheung, Alice A.; Niu, Tianye

    2015-11-01

    Shading artifacts in CT images are caused by scatter contamination, the beam-hardening effect and other non-ideal imaging conditions. The purpose of this study is to propose a novel and general correction framework to eliminate low-frequency shading artifacts in CT images (e.g. cone-beam CT, low-kVp CT) without relying on prior information. The method is based on the general knowledge of the relatively uniform CT number distribution in one tissue component. The CT image is first segmented to construct a template image where each structure is filled with the same CT number of a specific tissue type. Then, by subtracting the ideal template from the CT image, the residual image from various error sources is generated. Since forward projection is an integration process, non-continuous shading artifacts in the image become continuous signals in a line integral. Thus, the residual image is forward projected and its line integral is low-pass filtered in order to estimate the error that causes shading artifacts. A compensation map is reconstructed from the filtered line integral error using a standard FDK algorithm and added back to the original image for shading correction. As the segmented image does not accurately depict a shaded CT image, the proposed scheme is iterated until the variation of the residual image is minimized. The proposed method is evaluated using cone-beam CT images of a Catphan©600 phantom and a pelvis patient, and low-kVp CT angiography images for carotid artery assessment. Compared with the CT image without correction, the proposed method reduces the overall CT number error from over 200 HU to less than 30 HU and increases the spatial uniformity by a factor of 1.5. Low-contrast objects are faithfully retained after the proposed correction. An effective iterative algorithm for shading correction in CT imaging is proposed that is assisted only by general anatomical information, without relying on prior knowledge. 
The proposed method is thus practical and attractive as a general solution to CT shading correction.
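The structure of the iterative loop can be conveyed with a 1-D toy. For illustration only: the real method low-pass filters the residual in the projection (line-integral) domain and reconstructs the compensation map with FDK, whereas this sketch filters directly in image space, and the threshold-style segmentation and filter cutoff are assumptions.

```python
import numpy as np

def lowpass(x, keep=5):
    """Keep only the lowest `keep` frequency bins (toy low-pass filter)."""
    X = np.fft.rfft(x)
    X[keep:] = 0.0
    return np.fft.irfft(X, n=x.size)

def shading_correct(img, tissue_values, n_iter=5):
    """1-D toy of the iterative scheme: segment to a piecewise-constant
    template, low-pass the residual to estimate the smooth shading
    field, and subtract it."""
    corrected = img.astype(float).copy()
    for _ in range(n_iter):
        # segmentation: snap each pixel to the nearest ideal tissue value
        idx = np.argmin(np.abs(corrected[:, None] - tissue_values[None, :]), axis=1)
        template = tissue_values[idx]
        shading = lowpass(corrected - template)   # residual -> smooth error
        corrected = corrected - shading
    return corrected
```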

  10. Iterative reconstruction for CT perfusion with a prior-image induced hybrid nonlocal means regularization: Phantom studies

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, Bin; Lyu, Qingwen; Ma, Jianhua

    2016-04-15

    Purpose: In computed tomography perfusion (CTP) imaging, an initial phase CT acquired with a high-dose protocol can be used to improve the image quality of later phase CT acquired with a low-dose protocol. For dynamic regions, signals in the later low-dose CT may not be completely recovered if the initial CT heavily regularizes the iterative reconstruction process. The authors propose a hybrid nonlocal means (hNLM) regularization model for iterative reconstruction of low-dose CTP to overcome the limitation of the conventional prior-image induced penalty. Methods: The hybrid penalty was constructed by combining the NLM of the initial phase high-dose CT in the stationary region and later phase low-dose CT in the dynamic region. The stationary and dynamic regions were determined by the similarity between the initial high-dose scan and later low-dose scan. The similarity was defined as a Gaussian kernel-based distance between the patch-window of the same pixel in the two scans, and its measurement was then used to weigh the influence of the initial high-dose CT. For regions with high similarity (e.g., stationary region), initial high-dose CT played a dominant role for regularizing the solution. For regions with low similarity (e.g., dynamic region), the regularization relied on a low-dose scan itself. This new hNLM penalty was incorporated into the penalized weighted least-squares (PWLS) for CTP reconstruction. Digital and physical phantom studies were performed to evaluate the PWLS-hNLM algorithm. Results: Both phantom studies showed that the PWLS-hNLM algorithm is superior to the conventional prior-image induced penalty term without considering the signal changes within the dynamic region. In the dynamic region of the Catphan phantom, the reconstruction error measured by root mean square error was reduced by 42.9% in PWLS-hNLM reconstructed image. 
Conclusions: The PWLS-hNLM algorithm can effectively use the initial high-dose CT to reconstruct low-dose CTP in the stationary region while reducing its influence in the dynamic region.
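The similarity measure, a Gaussian kernel-based distance between the patch windows at the same pixel of the two scans, could be realized along the following lines; the patch half-width and kernel bandwidth h are illustrative assumptions.

```python
import numpy as np

def similarity_weight(img_hd, img_ld, i, j, half=2, h=20.0):
    """Gaussian kernel-based distance between the patch windows around
    pixel (i, j) of the initial high-dose and later low-dose scans:
    values near 1 mark stationary regions, values near 0 dynamic ones."""
    p1 = img_hd[i - half:i + half + 1, j - half:j + half + 1]
    p2 = img_ld[i - half:i + half + 1, j - half:j + half + 1]
    d2 = np.mean((p1 - p2) ** 2)
    return np.exp(-d2 / (2.0 * h ** 2))
```

The weight can then scale how strongly the high-dose prior regularizes each pixel, pushing the penalty toward the low-dose scan itself where the anatomy has changed.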

  11. Scalable splitting algorithms for big-data interferometric imaging in the SKA era

    NASA Astrophysics Data System (ADS)

    Onose, Alexandru; Carrillo, Rafael E.; Repetti, Audrey; McEwen, Jason D.; Thiran, Jean-Philippe; Pesquet, Jean-Christophe; Wiaux, Yves

    2016-11-01

    In the context of next-generation radio telescopes, like the Square Kilometre Array (SKA), the efficient processing of large-scale data sets is extremely important. Convex optimization tasks under the compressive sensing framework have recently emerged and provide both enhanced image reconstruction quality and scalability to increasingly larger data sets. We focus herein mainly on scalability and propose two new convex optimization algorithmic structures able to solve the convex optimization tasks arising in radio-interferometric imaging. They rely on proximal splitting and forward-backward iterations and can be seen, by analogy with the CLEAN major-minor cycle, as running sophisticated CLEAN-like iterations in parallel in multiple data, prior, and image spaces. Both methods support any convex regularization function, in particular, the well-studied ℓ1 priors promoting image sparsity in an adequate domain. Tailored for big data, they employ parallel and distributed computations to achieve scalability, in terms of memory and computational requirements. One of them also exploits randomization, over data blocks at each iteration, offering further flexibility. We present simulation results showing the feasibility of the proposed methods as well as their advantages compared to state-of-the-art algorithmic solvers. Our MATLAB code is available online on GitHub.
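The forward-backward building block with an l1 prior reduces, in its simplest form, to an ISTA-style iteration; the solvers in the paper add proximal splitting across data blocks, parallelism, and randomization on top of this basic step. A minimal sketch with a generic measurement operator Phi:

```python
import numpy as np

def forward_backward(y, Phi, lam, n_iter=100):
    """Forward-backward iterations for min_x 0.5*||y - Phi x||^2 + lam*||x||_1:
    a gradient (forward) step on the data-fidelity term followed by a
    proximal (backward) step on the l1 prior."""
    step = 1.0 / np.linalg.norm(Phi, 2) ** 2      # 1 / Lipschitz constant
    x = np.zeros(Phi.shape[1])
    for _ in range(n_iter):
        v = x - step * (Phi.T @ (Phi @ x - y))    # forward step
        x = np.sign(v) * np.maximum(np.abs(v) - step * lam, 0.0)  # backward step
    return x
```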

  12. Use of Multi-class Empirical Orthogonal Function for Identification of Hydrogeological Parameters and Spatiotemporal Pattern of Multiple Recharges in Groundwater Modeling

    NASA Astrophysics Data System (ADS)

    Huang, C. L.; Hsu, N. S.; Yeh, W. W. G.; Hsieh, I. H.

    2017-12-01

    This study develops an innovative calibration method for regional groundwater modeling by using multi-class empirical orthogonal functions (EOFs). The developed method is an iterative approach. Prior to carrying out the iterative procedures, the groundwater storage hydrographs associated with the observation wells are calculated. The combined multi-class EOF amplitudes and EOF expansion coefficients of the storage hydrographs are then used to compute the initial guess of the temporal and spatial pattern of multiple recharges. The initial guess of the hydrogeological parameters is also assigned according to in-situ pumping experiments. The recharges include net rainfall recharge and boundary recharge, and the hydrogeological parameters are riverbed leakage conductivity, horizontal hydraulic conductivity, vertical hydraulic conductivity, storage coefficient, and specific yield. The first step of the iterative algorithm is to run the numerical model (i.e., MODFLOW) with the initial guess or adjusted values of the recharges and parameters. Second, in order to determine the best EOF combination of the error storage hydrographs for determining the correction vectors, the objective function is devised as minimizing the root mean square error (RMSE) of the simulated storage hydrographs. The error storage hydrographs are the differences between the storage hydrographs computed from observed and simulated groundwater level fluctuations. Third, the values of recharges and parameters are adjusted and the procedure repeated until the stopping criterion is reached. The established methodology was applied to the groundwater system of Ming-Chu Basin, Taiwan. The study period is from January 1st to December 2nd, 2012. Results showed that the optimal EOF combination for the multiple recharges and hydrogeological parameters can decrease the RMSE of the simulated storage hydrographs dramatically within three calibration iterations. This indicates that the iterative approach using EOF techniques can capture the groundwater flow tendency and detect the correction vectors of the simulated error sources. Hence, the established EOF-based methodology can effectively and accurately identify the multiple recharges and hydrogeological parameters.
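The EOF machinery itself is standard and can be sketched as an SVD of the space-time anomaly matrix; this is generic EOF analysis, not the study's multi-class combination or its MODFLOW coupling.

```python
import numpy as np

def eof_decompose(field, n_modes):
    """EOF analysis via SVD of the (time x space) anomaly matrix:
    returns spatial EOF patterns and temporal expansion coefficients."""
    anom = field - field.mean(axis=0)       # remove the time mean
    U, s, Vt = np.linalg.svd(anom, full_matrices=False)
    pcs = U[:, :n_modes] * s[:n_modes]      # expansion coefficients (time)
    eofs = Vt[:n_modes]                     # spatial patterns
    return eofs, pcs
```

The leading modes reconstruct the anomaly field as pcs @ eofs, which is the kind of low-order representation the calibration iterates over.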

  13. SU-E-J-133: Autosegmentation of Linac CBCT: Improved Accuracy Via Penalized Likelihood Reconstruction

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, Y

    2015-06-15

    Purpose: To improve the quality of kV X-ray cone beam CT (CBCT) for use in radiotherapy delivery assessment and re-planning by using penalized likelihood (PL) iterative reconstruction and auto-segmentation accuracy of the resulting CBCTs as an image quality metric. Methods: Present filtered backprojection (FBP) CBCT reconstructions can be improved upon by PL reconstruction with image formation models and appropriate regularization constraints. We use two constraints: 1) image smoothing via an edge preserving filter, and 2) a constraint minimizing the differences between the reconstruction and a registered prior image. Reconstructions of prostate therapy CBCTs were computed with constraint 1 alone and with both constraints. The prior images were planning CTs (pCT) deformable-registered to the FBP reconstructions. Anatomy segmentations were done using atlas-based auto-segmentation (Elekta ADMIRE). Results: We observed small but consistent improvements in the Dice similarity coefficients of PL reconstructions over the FBP results, and additional small improvements with the added prior image constraint. For a CBCT with anatomy very similar in appearance to the pCT, we observed these changes in the Dice metric: +2.9% (prostate), +8.6% (rectum), −1.9% (bladder). For a second CBCT with a very different rectum configuration, we observed +0.8% (prostate), +8.9% (rectum), −1.2% (bladder). For a third case with significant lateral truncation of the field of view, we observed: +0.8% (prostate), +8.9% (rectum), −1.2% (bladder). Adding the prior image constraint raised Dice measures by about 1%. Conclusion: Efficient and practical adaptive radiotherapy requires accurate deformable registration and accurate anatomy delineation. We show here small and consistent patterns of improved contour accuracy using PL iterative reconstruction compared with FBP reconstruction. 
However, the modest extent of these results and the pattern of differences across CBCT cases suggest that significant further development will be required to make CBCT useful to adaptive radiotherapy.

  14. A Predictive Model for Toxicity Effects Assessment of Biotransformed Hepatic Drugs Using Iterative Sampling Method.

    PubMed

    Tharwat, Alaa; Moemen, Yasmine S; Hassanien, Aboul Ella

    2016-12-09

    Measuring toxicity is one of the main steps in drug development. Hence, there is a high demand for computational models to predict the toxicity effects of potential drugs. In this study, we used a dataset which consists of four toxicity effects: mutagenic, tumorigenic, irritant and reproductive effects. The proposed model consists of three phases. In the first phase, rough set-based methods are used to select the most discriminative features for reducing the classification time and improving the classification performance. Due to the imbalanced class distribution, in the second phase, different sampling methods such as Random Under-Sampling, Random Over-Sampling and the Synthetic Minority Oversampling Technique are used to solve the problem of imbalanced datasets. The ITerative Sampling (ITS) method is proposed to avoid the limitations of those methods. The ITS method has two steps. The first step (sampling step) iteratively modifies the prior distribution of the minority and majority classes. In the second step, a data cleaning method is used to remove the overlap produced by the first step. In the third phase, a Bagging classifier is used to classify an unknown drug as toxic or non-toxic. The experimental results proved that the proposed model performed well in classifying unknown samples according to all toxic effects in the imbalanced datasets.

  15. A long-term target detection approach in infrared image sequence

    NASA Astrophysics Data System (ADS)

    Li, Hang; Zhang, Qi; Wang, Xin; Hu, Chao

    2016-10-01

    An automatic target detection method for long-term infrared (IR) image sequences from a moving platform is proposed. Firstly, based on POME (the principle of maximum entropy), target candidates are iteratively segmented. Then the real target is captured via two different selection approaches. At the beginning of the image sequence, the genuine target with little texture is discriminated from other candidates by using a contrast-based confidence measure. On the other hand, when the target becomes larger, we apply an online EM method to estimate and update the distributions of the target's size and position based on prior detection results, and then recognize the genuine one which satisfies both the constraints of size and position. Experimental results demonstrate that the presented method is accurate, robust and efficient.
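A common realization of POME-based segmentation is Kapur's maximum-entropy threshold. The sketch below is one standard variant, offered since the abstract does not spell out the exact formulation used.

```python
import numpy as np

def max_entropy_threshold(img, bins=256):
    """Kapur-style maximum-entropy threshold: pick the split that
    maximizes the summed Shannon entropies of the two histogram halves."""
    hist, edges = np.histogram(img, bins=bins)
    p = hist / hist.sum()
    best_t, best_h = 1, -np.inf
    for t in range(1, bins):
        p0, p1 = p[:t].sum(), p[t:].sum()
        if p0 <= 0.0 or p1 <= 0.0:
            continue
        q0 = p[:t][p[:t] > 0] / p0       # normalized below-threshold bins
        q1 = p[t:][p[t:] > 0] / p1       # normalized above-threshold bins
        h = -(q0 * np.log(q0)).sum() - (q1 * np.log(q1)).sum()
        if h > best_h:
            best_h, best_t = h, t
    return edges[best_t]
```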

  16. Advanced prior modeling for 3D bright field electron tomography

    NASA Astrophysics Data System (ADS)

    Sreehari, Suhas; Venkatakrishnan, S. V.; Drummy, Lawrence F.; Simmons, Jeffrey P.; Bouman, Charles A.

    2015-03-01

    Many important imaging problems in material science involve reconstruction of images containing repetitive non-local structures. Model-based iterative reconstruction (MBIR) could in principle exploit such redundancies through the selection of a log prior probability term. However, in practice, determining such a log prior term that accounts for the similarity between distant structures in the image is quite challenging. Much progress has been made in the development of denoising algorithms like non-local means and BM3D, and these are known to successfully capture non-local redundancies in images. But the fact that these denoising operations are not explicitly formulated as cost functions makes it unclear how to incorporate them in the MBIR framework. In this paper, we formulate a solution to bright field electron tomography by augmenting the existing bright field MBIR method to incorporate any non-local denoising operator as a prior model. We accomplish this using a framework we call plug-and-play priors that decouples the log likelihood and the log prior probability terms in the MBIR cost function. We specifically use 3D non-local means (NLM) as the prior model in the plug-and-play framework, and showcase high quality tomographic reconstructions of a simulated aluminum spheres dataset, and two real datasets of aluminum spheres and ferritin structures. We observe that streak and smear artifacts are visibly suppressed, and that edges are preserved. Also, we report lower RMSE values compared to the conventional MBIR reconstruction using qGGMRF as the prior model.
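The plug-and-play decoupling is easy to sketch: the prior enters only as a denoising operator inside an ADMM loop. In the toy below, a small Gaussian filter stands in for the paper's 3D NLM denoiser, and the likelihood is plain 1-D denoising rather than a tomographic forward model; both substitutions are assumptions for illustration.

```python
import numpy as np

def gaussian_denoise(v, sigma=1.0, half=4):
    """Stand-in denoiser: a small Gaussian filter. The paper plugs in
    3D non-local means here; any denoiser with this signature works."""
    k = np.exp(-0.5 * (np.arange(-half, half + 1) / sigma) ** 2)
    k /= k.sum()
    return np.convolve(v, k, mode='same')

def pnp_admm(y, denoiser=gaussian_denoise, rho=1.0, n_iter=50):
    """Plug-and-play ADMM for the toy likelihood 0.5*||y - x||^2: the
    prior never appears as an explicit cost term, only through the
    plugged-in denoising operator."""
    x = y.copy()
    z = y.copy()
    u = np.zeros_like(y)
    for _ in range(n_iter):
        x = (y + rho * (z - u)) / (1.0 + rho)  # likelihood proximal step
        z = denoiser(x + u)                    # prior step = any denoiser
        u += x - z
    return x
```

Swapping `gaussian_denoise` for a stronger denoiser changes the effective prior without touching the rest of the loop, which is the point of the framework.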

  17. Measurement of two-dimensional thickness of micro-patterned thin film based on image restoration in a spectroscopic imaging reflectometer.

    PubMed

    Kim, Min-Gab; Kim, Jin-Yong

    2018-05-01

    In this paper, we introduce a method to overcome the limitation of thickness measurement of a micro-patterned thin film. A spectroscopic imaging reflectometer system that consists of an acousto-optic tunable filter, a charge-coupled-device camera, and a high-magnification objective lens was proposed, and a stack of multispectral images was generated. To secure improved accuracy and lateral resolution in the reconstruction of a two-dimensional thin film thickness, prior to the analysis of spectral reflectance profiles from each pixel of multispectral images, the image restoration based on an iterative deconvolution algorithm was applied to compensate for image degradation caused by blurring.
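    The abstract leaves the iterative deconvolution algorithm unspecified; Richardson-Lucy iteration is one widely used choice for this kind of blur compensation. A minimal 1-D sketch, assuming a known PSF:

```python
import numpy as np

def richardson_lucy(blurred, psf, iters=30):
    """Richardson-Lucy deconvolution, 1-D sketch: multiplicative updates
    est <- est * (psf' * (blurred / (psf * est))) with 'same' convolutions."""
    est = np.full_like(blurred, blurred.mean())  # flat positive initial guess
    psf_flip = psf[::-1]                         # correlation via flipped PSF
    for _ in range(iters):
        conv = np.convolve(est, psf, mode='same')
        ratio = blurred / np.maximum(conv, 1e-12)
        est = est * np.convolve(ratio, psf_flip, mode='same')
    return est
```

    Applied per image row (or in 2-D with a measured PSF), such an update sharpens the blurred reflectance map before per-pixel spectral analysis.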

  18. A Guided Online and Mobile Self-Help Program for Individuals With Eating Disorders: An Iterative Engagement and Usability Study.

    PubMed

    Nitsch, Martina; Dimopoulos, Christina N; Flaschberger, Edith; Saffran, Kristina; Kruger, Jenna F; Garlock, Lindsay; Wilfley, Denise E; Taylor, Craig B; Jones, Megan

    2016-01-11

    Numerous digital health interventions have been developed for mental health promotion and intervention, including eating disorders. Efficacy of many interventions has been evaluated, yet knowledge about reasons for dropout and poor adherence is scarce. Most digital health intervention studies lack appropriate research design and methods to investigate individual engagement issues. User engagement and program usability are inextricably linked, making usability studies vital in understanding and improving engagement. The aim of this study was to explore engagement and corresponding usability issues of the Healthy Body Image Program-a guided online intervention for individuals with body image concerns or eating disorders. The secondary aim was to demonstrate the value of usability research in order to investigate engagement. We conducted an iterative usability study based on a mixed-methods approach, combining cognitive and semistructured interviews as well as questionnaires, prior to program launch. Two separate rounds of usability studies were completed, testing a total of 9 potential users. Thematic analysis and descriptive statistics were used to analyze the think-aloud tasks, interviews, and questionnaires. Participants were satisfied with the overall usability of the program. The average usability score was 77.5/100 for the first test round and improved to 83.1/100 after applying modifications for the second iteration. The analysis of the qualitative data revealed five central themes: layout, navigation, content, support, and engagement conditions. The first three themes highlight usability aspects of the program, while the latter two highlight engagement issues. An easy-to-use format, clear wording, the nature of guidance, and opportunity for interactivity were important issues related to usability. 
The coach support, time investment, and severity of users' symptoms, the program's features and effectiveness, trust, anonymity, and affordability were relevant to engagement. This study identified salient usability and engagement features associated with participant motivation to use the Healthy Body Image Program and ultimately helped improve the program prior to its implementation. This research demonstrates that improvements in usability and engagement can be achieved by testing and adjusting intervention design and content prior to program launch. The results are consistent with related research and reinforce the need for further research to identify usage patterns and effective means for reducing dropout. Digital health research should include usability studies prior to efficacy trials to help create more user-friendly programs that have a higher likelihood of "real-world" adoption.

  19. Standard and reduced radiation dose liver CT images: adaptive statistical iterative reconstruction versus model-based iterative reconstruction-comparison of findings and image quality.

    PubMed

    Shuman, William P; Chan, Keith T; Busey, Janet M; Mitsumori, Lee M; Choi, Eunice; Koprowicz, Kent M; Kanal, Kalpana M

    2014-12-01

    To investigate whether reduced radiation dose liver computed tomography (CT) images reconstructed with model-based iterative reconstruction (MBIR) might compromise depiction of clinically relevant findings or might have decreased image quality when compared with clinical standard radiation dose CT images reconstructed with adaptive statistical iterative reconstruction (ASIR). With institutional review board approval, informed consent, and HIPAA compliance, 50 patients (39 men, 11 women) were prospectively included who underwent liver CT. After a portal venous pass with ASIR images, a 60% reduced radiation dose pass was added with MBIR images. One reviewer scored ASIR image quality and marked findings. Two additional independent reviewers noted whether marked findings were present on MBIR images and assigned scores for relative conspicuity, spatial resolution, image noise, and image quality. Liver and aorta Hounsfield units and image noise were measured. Volume CT dose index and size-specific dose estimate (SSDE) were recorded. Qualitative reviewer scores were summarized. Formal statistical inference for signal-to-noise ratio (SNR), contrast-to-noise ratio (CNR), volume CT dose index, and SSDE was made (paired t tests), with Bonferroni adjustment. Two independent reviewers identified all 136 ASIR image findings (n = 272) on MBIR images, scoring them as equal or better for conspicuity, spatial resolution, and image noise in 94.1% (256 of 272), 96.7% (263 of 272), and 99.3% (270 of 272), respectively. In 50 image sets, two reviewers (n = 100) scored overall image quality as sufficient or good with MBIR in 99% (99 of 100). Liver SNR was significantly greater for MBIR (10.8 ± 2.5 [standard deviation] vs 7.7 ± 1.4, P < .001); there was no difference for CNR (2.5 ± 1.4 vs 2.4 ± 1.4, P = .45). For ASIR and MBIR, respectively, volume CT dose index was 15.2 mGy ± 7.6 versus 6.2 mGy ± 3.6; SSDE was 16.4 mGy ± 6.6 versus 6.7 mGy ± 3.1 (P < .001). Liver CT images reconstructed with MBIR may allow up to 59% radiation dose reduction compared with the dose with ASIR, without compromising depiction of findings or image quality. © RSNA, 2014.

  20. Iterative Importance Sampling Algorithms for Parameter Estimation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Grout, Ray W; Morzfeld, Matthias; Day, Marcus S.

    In parameter estimation problems one computes a posterior distribution over uncertain parameters defined jointly by a prior distribution, a model, and noisy data. Markov chain Monte Carlo (MCMC) is often used for the numerical solution of such problems. An alternative to MCMC is importance sampling, which can exhibit near perfect scaling with the number of cores on high performance computing systems because samples are drawn independently. However, finding a suitable proposal distribution is a challenging task. Several sampling algorithms have been proposed over the past years that take an iterative approach to constructing a proposal distribution. We investigate the applicability of such algorithms by applying them to two realistic and challenging test problems, one in subsurface flow, and one in combustion modeling. More specifically, we implement importance sampling algorithms that iterate over the mean and covariance matrix of Gaussian or multivariate t-proposal distributions. Our implementation leverages massively parallel computers, and we present strategies to initialize the iterations using 'coarse' MCMC runs or Gaussian mixture models.
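    For the Gaussian-proposal case, the iteration over the proposal's mean and covariance can be sketched generically: draw samples, form self-normalized importance weights against the unnormalized log posterior, and refit both moments each pass. The toy 1-D target used for testing is far simpler than the subsurface-flow and combustion problems of the report.

```python
import numpy as np

def adaptive_is(log_post, dim, iters=15, n=3000, rng=None):
    """Iterative importance sampling: refit the mean and covariance of a
    Gaussian proposal from self-normalized importance weights each pass."""
    rng = rng if rng is not None else np.random.default_rng(0)
    mu, cov = np.zeros(dim), np.eye(dim)
    for _ in range(iters):
        xs = rng.multivariate_normal(mu, cov, size=n)
        diff = xs - mu
        prec = np.linalg.inv(cov)
        # log of the Gaussian proposal density (up to a constant)
        log_q = -0.5 * np.einsum('ij,jk,ik->i', diff, prec, diff) \
                - 0.5 * np.log(np.linalg.det(cov))
        log_w = np.array([log_post(x) for x in xs]) - log_q
        w = np.exp(log_w - log_w.max())
        w /= w.sum()
        mu = w @ xs                                        # weighted mean
        d = xs - mu
        cov = (w[:, None] * d).T @ d + 1e-6 * np.eye(dim)  # weighted covariance
    return mu, cov
```

    Because the samples within a pass are independent, the `log_post` evaluations are trivially parallel, which is the scaling advantage over MCMC noted above.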

  1. An Algebraic Implicitization and Specialization of Minimum KL-Divergence Models

    NASA Astrophysics Data System (ADS)

    Dukkipati, Ambedkar; Manathara, Joel George

    In this paper we study the representation of KL-divergence minimization, in cases where integer sufficient statistics exist, using tools from polynomial algebra. We show that the estimation of parametric statistical models in this case can be transformed into solving a system of polynomial equations. In particular, we also study the case of the Kullback-Csiszár iteration scheme. We present implicit descriptions of these models and show that implicitization preserves specialization of the prior distribution. This result leads us to a Gröbner bases method to compute an implicit representation of minimum KL-divergence models.

  2. Simultaneous deblurring and iterative reconstruction of CBCT for image guided brain radiosurgery.

    PubMed

    Hashemi, SayedMasoud; Song, William Y; Sahgal, Arjun; Lee, Young; Huynh, Christopher; Grouza, Vladimir; Nordström, Håkan; Eriksson, Markus; Dorenlot, Antoine; Régis, Jean Marie; Mainprize, James G; Ruschin, Mark

    2017-04-07

    One of the limiting factors in cone-beam CT (CBCT) image quality is system blur, caused by detector response, x-ray source focal spot size, azimuthal blurring, and reconstruction algorithm. In this work, we develop a novel iterative reconstruction algorithm that improves spatial resolution by explicitly accounting for image unsharpness caused by different factors in the reconstruction formulation. While the model-based iterative reconstruction techniques use prior information about the detector response and x-ray source, our proposed technique uses a simple measurable blurring model. In our reconstruction algorithm, denoted as simultaneous deblurring and iterative reconstruction (SDIR), the blur kernel can be estimated using the modulation transfer function (MTF) slice of the CatPhan phantom or any other MTF phantom, such as wire phantoms. The proposed image reconstruction formulation includes two regularization terms: (1) total variation (TV) and (2) nonlocal regularization, solved with a split Bregman augmented Lagrangian iterative method. The SDIR formulation preserves edges, eases the parameter adjustments to achieve both high spatial resolution and low noise variances, and reduces the staircase effect caused by regular TV-penalized iterative algorithms. The proposed algorithm is optimized for a point-of-care head CBCT unit for image-guided radiosurgery and is tested with CatPhan phantom, an anthropomorphic head phantom, and 6 clinical brain stereotactic radiosurgery cases. Our experiments indicate that SDIR outperforms the conventional filtered back projection and TV penalized simultaneous algebraic reconstruction technique methods (represented by adaptive steepest-descent POCS algorithm, ASD-POCS) in terms of MTF and line pair resolution, and retains the favorable properties of the standard TV-based iterative reconstruction algorithms in improving the contrast and reducing the reconstruction artifacts. 
It improves the visibility of the high contrast details in bony areas and the brain soft-tissue. For example, the results show the ventricles and some brain folds become visible in SDIR reconstructed images and the contrast of the visible lesions is effectively improved. The line-pair resolution was improved from 12 line-pair/cm in FBP to 14 line-pair/cm in SDIR. Adjusting the parameters of the ASD-POCS to achieve 14 line-pair/cm caused the noise variance to be higher than the SDIR. Using these parameters for ASD-POCS, the MTF of FBP and ASD-POCS were very close and equal to 0.7 mm⁻¹, which was increased to 1.2 mm⁻¹ by SDIR, at half maximum.

  3. Simultaneous deblurring and iterative reconstruction of CBCT for image guided brain radiosurgery

    NASA Astrophysics Data System (ADS)

    Hashemi, SayedMasoud; Song, William Y.; Sahgal, Arjun; Lee, Young; Huynh, Christopher; Grouza, Vladimir; Nordström, Håkan; Eriksson, Markus; Dorenlot, Antoine; Régis, Jean Marie; Mainprize, James G.; Ruschin, Mark

    2017-04-01

    One of the limiting factors in cone-beam CT (CBCT) image quality is system blur, caused by detector response, x-ray source focal spot size, azimuthal blurring, and reconstruction algorithm. In this work, we develop a novel iterative reconstruction algorithm that improves spatial resolution by explicitly accounting for image unsharpness caused by different factors in the reconstruction formulation. While the model-based iterative reconstruction techniques use prior information about the detector response and x-ray source, our proposed technique uses a simple measurable blurring model. In our reconstruction algorithm, denoted as simultaneous deblurring and iterative reconstruction (SDIR), the blur kernel can be estimated using the modulation transfer function (MTF) slice of the CatPhan phantom or any other MTF phantom, such as wire phantoms. The proposed image reconstruction formulation includes two regularization terms: (1) total variation (TV) and (2) nonlocal regularization, solved with a split Bregman augmented Lagrangian iterative method. The SDIR formulation preserves edges, eases the parameter adjustments to achieve both high spatial resolution and low noise variances, and reduces the staircase effect caused by regular TV-penalized iterative algorithms. The proposed algorithm is optimized for a point-of-care head CBCT unit for image-guided radiosurgery and is tested with CatPhan phantom, an anthropomorphic head phantom, and 6 clinical brain stereotactic radiosurgery cases. Our experiments indicate that SDIR outperforms the conventional filtered back projection and TV penalized simultaneous algebraic reconstruction technique methods (represented by adaptive steepest-descent POCS algorithm, ASD-POCS) in terms of MTF and line pair resolution, and retains the favorable properties of the standard TV-based iterative reconstruction algorithms in improving the contrast and reducing the reconstruction artifacts. 
It improves the visibility of the high contrast details in bony areas and the brain soft-tissue. For example, the results show the ventricles and some brain folds become visible in SDIR reconstructed images and the contrast of the visible lesions is effectively improved. The line-pair resolution was improved from 12 line-pair/cm in FBP to 14 line-pair/cm in SDIR. Adjusting the parameters of the ASD-POCS to achieve 14 line-pair/cm caused the noise variance to be higher than the SDIR. Using these parameters for ASD-POCS, the MTF of FBP and ASD-POCS were very close and equal to 0.7 mm⁻¹, which was increased to 1.2 mm⁻¹ by SDIR, at half maximum.

  4. Total-variation based velocity inversion with Bregmanized operator splitting algorithm

    NASA Astrophysics Data System (ADS)

    Zand, Toktam; Gholami, Ali

    2018-04-01

    Many problems in applied geophysics can be formulated as linear inverse problems. The associated problems, however, are large-scale and ill-conditioned, so regularization techniques need to be employed to solve them and generate a stable and acceptable solution. We consider numerical methods for solving such problems in this paper. In order to tackle the ill-conditioning of the problem we use blockiness as prior information on the subsurface parameters and formulate the problem as a constrained total variation (TV) regularization. The Bregmanized operator splitting (BOS) algorithm, a combination of the Bregman iteration and the proximal forward-backward operator splitting method, is developed to solve the arranged problem. Two main advantages of this new algorithm are that no matrix inversion is required and that a discrepancy stopping criterion is used to stop the iterations, which allows efficient solution of large-scale problems. The high performance of the proposed TV regularization method is demonstrated using two different experiments: (1) velocity inversion from synthetic seismic data based on the Born approximation, and (2) computing interval velocities from RMS velocities via the Dix formula. Numerical examples are presented to verify the feasibility of the proposed method for high-resolution velocity inversion.
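    The BOS structure, an outer Bregman "add back the residual" loop around an inversion-free forward-backward inner loop with a discrepancy stopping rule, can be sketched compactly. For a 1-D sketch the prox here is an L1 soft-threshold rather than the TV prox of the paper, and `lam` and the iteration counts are illustrative; only products with A and its transpose are needed.

```python
import numpy as np

def soft(z, t):
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def bregman_operator_splitting(A, y, lam=0.1, outer=60, inner=300, tol=1e-6):
    """Bregman iteration wrapped around proximal forward-backward splitting.
    The L1 prox stands in for the paper's TV prox; no matrix inversion
    is required, only matrix-vector products with A and A^T."""
    x = np.zeros(A.shape[1])
    b = y.copy()
    step = 1.0 / np.linalg.norm(A, 2) ** 2   # forward-backward step size
    for _ in range(outer):
        for _ in range(inner):  # solve lam*||x||_1 + 0.5*||Ax - b||^2
            x = soft(x - step * A.T @ (A @ x - b), lam * step)
        r = y - A @ x
        if np.linalg.norm(r) < tol:          # discrepancy stopping criterion
            break
        b = b + r                            # Bregman: add back the residual
    return x
```

    The outer loop progressively removes the shrinkage bias of the inner regularized solves, driving the data residual toward the discrepancy level.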

  5. X-ray computed tomography using curvelet sparse regularization.

    PubMed

    Wieczorek, Matthias; Frikel, Jürgen; Vogel, Jakob; Eggl, Elena; Kopp, Felix; Noël, Peter B; Pfeiffer, Franz; Demaret, Laurent; Lasser, Tobias

    2015-04-01

    Reconstruction of x-ray computed tomography (CT) data remains a mathematically challenging problem in medical imaging. Complementing the standard analytical reconstruction methods, sparse regularization is growing in importance, as it allows inclusion of prior knowledge. The paper presents a method for sparse regularization based on the curvelet frame for application to iterative reconstruction in x-ray computed tomography. In this work, the authors present an iterative reconstruction approach based on the alternating direction method of multipliers using curvelet sparse regularization. Evaluation of the method is performed on a specifically crafted numerical phantom dataset to highlight the method's strengths. Additional evaluation is performed on two real datasets from commercial scanners with different noise characteristics, a clinical bone sample acquired in a micro-CT and a human abdomen scanned in a diagnostic CT. The results clearly illustrate that curvelet sparse regularization has characteristic strengths. In particular, it improves the restoration and resolution of highly directional, high contrast features with smooth contrast variations. The authors also compare this approach to the popular technique of total variation and to traditional filtered backprojection. The authors conclude that curvelet sparse regularization is able to improve reconstruction quality by reducing noise while preserving highly directional features.

  6. A Newton-Raphson Method Approach to Adjusting Multi-Source Solar Simulators

    NASA Technical Reports Server (NTRS)

    Snyder, David B.; Wolford, David S.

    2012-01-01

    NASA Glenn Research Center has been using an in-house designed X25-based multi-source solar simulator since 2003. The simulator is set up for triple-junction solar cells prior to measurements by adjusting the three sources to produce the correct short-circuit current, Isc, in each of three AM0-calibrated sub-cells. The past practice has been to adjust one source on one sub-cell at a time, iterating until all the sub-cells have the calibrated Isc. The new approach is to create a matrix of measured Isc for small source changes on each sub-cell. A matrix, A, is produced. This is normalized to unit changes in the sources so that A·Δs = ΔIsc. This matrix can now be inverted and used with the known Isc differences from the AM0-calibrated values to indicate changes in the source settings, Δs = A⁻¹·ΔIsc. This approach is still an iterative one, but all sources are changed during each iteration step. It typically takes four to six steps to converge on the calibrated Isc values. Even though the source lamps may degrade over time, the initial matrix evaluation is not performed each time, since the measurement matrix needs to be only approximate. Because an iterative approach is used, the method will still continue to be valid. This method may become more important as state-of-the-art solar cell junction responses overlap the sources of the simulator. Also, as the number of cell junctions and sources increases, this method should remain applicable.
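    The matrix-based adjustment can be sketched as follows: build the sensitivity matrix A once by finite differences, then reuse its inverse so that every iteration updates all source settings at once. The `measure` callback and its mild nonlinearity in the test are stand-ins for the actual simulator and sub-cell Isc measurements.

```python
import numpy as np

def calibrate_sources(measure, s0, target_isc, tol=1e-6, max_iter=20, eps=1e-3):
    """Newton-like source adjustment: build the sensitivity matrix A once by
    finite differences, then reuse its inverse so every iteration changes
    all sources together (delta_s = A^-1 * delta_Isc)."""
    s = np.asarray(s0, float).copy()
    n = s.size
    base = measure(s)
    A = np.empty((n, n))
    for j in range(n):                    # column j: Isc response to source j
        sp = s.copy()
        sp[j] += eps
        A[:, j] = (measure(sp) - base) / eps
    A_inv = np.linalg.inv(A)
    for _ in range(max_iter):
        d_isc = measure(s) - target_isc   # deviation from calibrated Isc
        if np.max(np.abs(d_isc)) < tol:
            break
        s -= A_inv @ d_isc                # update every source each step
    return s
```

    Because the loop is iterative, an approximate, stale sensitivity matrix still converges, which matches the report's observation that A need not be re-measured as the lamps degrade.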

  7. Achievements in the development of the Water Cooled Solid Breeder Test Blanket Module of Japan to the milestones for installation in ITER

    NASA Astrophysics Data System (ADS)

    Tsuru, Daigo; Tanigawa, Hisashi; Hirose, Takanori; Mohri, Kensuke; Seki, Yohji; Enoeda, Mikio; Ezato, Koichiro; Suzuki, Satoshi; Nishi, Hiroshi; Akiba, Masato

    2009-06-01

    As the primary candidate ITER Test Blanket Module (TBM) to be tested under the leadership of Japan, a water cooled solid breeder (WCSB) TBM is being developed. This paper shows the recent achievements towards the milestones of ITER TBMs prior to installation, which consist of design integration in ITER, module qualification and safety assessment. With respect to the design integration, targeting the detailed design final report in 2012, structural designs of the WCSB TBM and the interfacing components (common frame and backside shielding) that are placed in a test port of ITER, together with the layout of the cooling system, are presented. As for the module qualification, a real-scale first wall mock-up fabricated by the hot isostatic pressing method from the reduced-activation martensitic ferritic steel F82H, and flow and irradiation tests of the mock-up, are presented. As for the safety milestones, the contents of the preliminary safety report in 2008, consisting of source term identification, failure mode and effect analysis (FMEA), identification of postulated initiating events (PIEs), and safety analyses, are presented.

  8. Low-dose CT reconstruction via L1 dictionary learning regularization using iteratively reweighted least-squares.

    PubMed

    Zhang, Cheng; Zhang, Tao; Li, Ming; Peng, Chengtao; Liu, Zhaobang; Zheng, Jian

    2016-06-18

    In order to reduce the radiation dose of CT (computed tomography), compressed sensing theory has been a hot topic since it provides the possibility of a high quality recovery from sparse sampling data. Recently, an algorithm based on DL (dictionary learning) was developed to deal with the sparse CT reconstruction problem. However, the existing DL algorithm focuses on the minimization problem with the L2-norm regularization term, which leads to reconstruction quality deteriorating as the sampling rate declines further. Therefore, it is essential to improve the DL method to meet the demand for more dose reduction. In this paper, we replaced the L2-norm regularization term with the L1-norm one. It is expected that the proposed L1-DL method could alleviate the over-smoothing effect of the L2-minimization and preserve more image details. The proposed algorithm solves the L1-minimization problem by a weighting strategy, solving the new weighted L2-minimization problem based on IRLS (iteratively reweighted least squares). Through numerical simulation, the proposed algorithm is compared with the existing DL method (adaptive dictionary based statistical iterative reconstruction, ADSIR) and two other typical compressed sensing algorithms. It is revealed that the proposed algorithm is more accurate than the other algorithms, especially when further reducing the sampling rate or increasing the noise. The proposed L1-DL algorithm can utilize more prior information of image sparsity than ADSIR. By transforming the L2-norm regularization term of ADSIR into the L1-norm one and solving the L1-minimization problem by the IRLS strategy, L1-DL can reconstruct the image more exactly.
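    The IRLS idea, replacing the L1 penalty by an adaptively reweighted L2 penalty, can be sketched in a generic least-squares setting; `A`, `lam`, and the synthetic sparse signal are illustrative, not the dictionary-learning CT model of the paper.

```python
import numpy as np

def irls_l1(A, y, lam=0.1, iters=40, eps=1e-8):
    """IRLS for min ||Ax - y||^2 + lam*||x||_1: each pass solves a weighted
    ridge problem whose per-coefficient weights lam/(|x_i| + eps) make the
    quadratic penalty mimic the L1 norm near the current iterate."""
    x = np.linalg.lstsq(A, y, rcond=None)[0]  # minimum-norm starting point
    AtA, Aty = A.T @ A, A.T @ y
    for _ in range(iters):
        W = np.diag(lam / (np.abs(x) + eps))
        x = np.linalg.solve(AtA + W, Aty)
    return x
```

    Coefficients that shrink toward zero acquire huge weights and are driven to zero, while large coefficients are barely penalized, so the solution avoids the uniform over-smoothing of a plain L2 penalty.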

  9. Using an ensemble smoother to evaluate parameter uncertainty of an integrated hydrological model of Yanqi basin

    NASA Astrophysics Data System (ADS)

    Li, Ning; McLaughlin, Dennis; Kinzelbach, Wolfgang; Li, WenPeng; Dong, XinGuang

    2015-10-01

    Model uncertainty needs to be quantified to provide objective assessments of the reliability of model predictions and of the risk associated with management decisions that rely on these predictions. This is particularly true in water resource studies that depend on model-based assessments of alternative management strategies. In recent decades, Bayesian data assimilation methods have been widely used in hydrology to assess uncertain model parameters and predictions. In this case study, a particular data assimilation algorithm, the Ensemble Smoother with Multiple Data Assimilation (ESMDA) (Emerick and Reynolds, 2012), is used to derive posterior samples of uncertain model parameters and forecasts for a distributed hydrological model of Yanqi basin, China. This model is constructed using MIKE SHE/MIKE 11 software, which provides for coupling between surface and subsurface processes (DHI, 2011a-d). The random samples in the posterior parameter ensemble are obtained by using measurements to update 50 prior parameter samples generated with a Latin Hypercube Sampling (LHS) procedure. The posterior forecast samples are obtained from model runs that use the corresponding posterior parameter samples. Two iterative sample update methods are considered: one based on a perturbed-observation Kalman filter update and one based on a square root Kalman filter update. These alternatives give nearly the same results and converge in only two iterations. The uncertain parameters considered include hydraulic conductivities, drainage and river leakage factors, van Genuchten soil property parameters, and dispersion coefficients. The results show that the uncertainty in many of the parameters is reduced during the smoother updating process, reflecting information obtained from the observations. Some of the parameters are insensitive and do not benefit from measurement information.
The correlation coefficients among certain parameters increase in each iteration, although they generally stay below 0.50.
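    The perturbed-observation ESMDA update of Emerick and Reynolds admits a compact generic sketch: each assimilation pass inflates the observation-error covariance by a factor alpha (the reciprocals of the alphas must sum to one) and applies a Kalman-like ensemble update. The identity forward model used for testing is a toy, not the MIKE SHE model of the study.

```python
import numpy as np

def esmda(forward, m_ens, d_obs, C_e, alphas, rng=None):
    """Ensemble Smoother with Multiple Data Assimilation: each pass inflates
    the observation-error covariance C_e by alpha and applies a Kalman-like
    ensemble update; the 1/alpha values must sum to one."""
    rng = rng if rng is not None else np.random.default_rng(0)
    assert abs(sum(1.0 / a for a in alphas) - 1.0) < 1e-9
    m = m_ens.copy()                                # (n_ens, n_param)
    n_e = m.shape[0]
    for a in alphas:
        d = np.array([forward(mj) for mj in m])     # predicted data (n_ens, n_obs)
        dm = m - m.mean(axis=0)
        dd = d - d.mean(axis=0)
        C_md = dm.T @ dd / (n_e - 1)                # parameter-data cross-covariance
        C_dd = dd.T @ dd / (n_e - 1)                # predicted-data covariance
        K = C_md @ np.linalg.inv(C_dd + a * C_e)
        pert = rng.multivariate_normal(np.zeros(len(d_obs)), a * C_e, size=n_e)
        m = m + (d_obs + pert - d) @ K.T            # perturbed-observation update
    return m
```

    Spreading the update over several inflated passes damps the overcorrection a single ensemble-smoother step can produce for nonlinear forward models.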

  10. Pilot-testing an adverse drug event reporting form prior to its implementation in an electronic health record.

    PubMed

    Chruscicki, Adam; Badke, Katherin; Peddie, David; Small, Serena; Balka, Ellen; Hohl, Corinne M

    2016-01-01

    Adverse drug events (ADEs), harmful unintended consequences of medication use, are a leading cause of hospital admissions, yet are rarely documented in a structured format between care providers. We describe pilot-testing structured ADE documentation fields prior to integration into an electronic medical record (EMR). We completed a qualitative study at two Canadian hospitals. Using data derived from a systematic review of the literature, we developed screen mock-ups for an ADE reporting platform, iteratively revised in participatory workshops with diverse end-user groups. We designed a paper-based form reflecting the data elements contained in the mock-ups. We distributed them to a convenience sample of clinical pharmacists, and completed ethnographic workplace observations while the forms were used. We reviewed completed forms, collected feedback from pharmacists using semi-structured interviews, and coded the data in NVivo for themes related to the ADE form. We completed 25 h of clinical observations, and 24 ADEs were documented. Pharmacists perceived the form as simple and clear, with sufficient detail to capture ADEs. They identified fields for omission, and others requiring more detail. Pharmacists encountered barriers to documenting ADEs including uncertainty about what constituted a reportable ADE, inability to complete patient follow-up, the need for inter-professional communication to rule out alternative diagnoses, and concern about creating a permanent record. Paper-based pilot-testing allowed planning for important modifications in an ADE documentation form prior to implementation in an EMR. While paper-based piloting is rarely reported prior to EMR implementations, it can inform design and enhance functionality. Piloting with other groups of care providers and in different healthcare settings will likely lead to further revisions prior to broader implementations.

  11. A Technique for Transient Thermal Testing of Thick Structures

    NASA Technical Reports Server (NTRS)

    Horn, Thomas J.; Richards, W. Lance; Gong, Leslie

    1997-01-01

    A new open-loop heat flux control technique has been developed to conduct transient thermal testing of thick, thermally conductive aerospace structures. This technique uses calibration of the radiant heater system power level as a function of heat flux, predicted aerodynamic heat flux, and the properties of an instrumented test article. An iterative process was used to generate open-loop heater power profiles prior to each transient thermal test. Differences between the measured and predicted surface temperatures were used to refine the heater power level command profiles through the iteration process. This iteration process has reduced the effects of environmental and test system design factors, which are normally compensated for by closed-loop temperature control, to acceptable levels. The final revised heater power profiles resulted in measured temperature time histories which deviated less than 25 °F from the predicted surface temperatures.
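    The profile-refinement loop can be sketched abstractly: run a test, compare measured and predicted surface temperatures at each time step, and nudge the heater power command by the error. `run_test`, the gain (which folds in the power-per-degree calibration), and the tolerance are illustrative, not values from the report.

```python
def refine_power_profile(run_test, q, T_pred, gain=0.5, tol=25.0, max_iter=10):
    """Open-loop profile iteration: after each (simulated) test, nudge the
    heater power command at every time step by the surface-temperature
    error, stopping once all errors are within tolerance."""
    for _ in range(max_iter):
        T_meas = run_test(q)                 # one thermal test with profile q
        errs = [tp - tm for tp, tm in zip(T_pred, T_meas)]
        if max(abs(e) for e in errs) < tol:
            break
        q = [qi + gain * e for qi, e in zip(q, errs)]
    return q
```

    Once converged, the final profile is replayed open loop during the actual test, removing the need for closed-loop temperature control.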

  12. In silico model-based inference: a contemporary approach for hypothesis testing in network biology

    PubMed Central

    Klinke, David J.

    2014-01-01

    Inductive inference plays a central role in the study of biological systems where one aims to increase their understanding of the system by reasoning backwards from uncertain observations to identify causal relationships among components of the system. These causal relationships are postulated from prior knowledge as a hypothesis or simply a model. Experiments are designed to test the model. Inferential statistics are used to establish a level of confidence in how well our postulated model explains the acquired data. This iterative process, commonly referred to as the scientific method, either improves our confidence in a model or suggests that we revisit our prior knowledge to develop a new model. Advances in technology impact how we use prior knowledge and data to formulate models of biological networks and how we observe cellular behavior. However, the approach for model-based inference has remained largely unchanged since Fisher, Neyman and Pearson developed the ideas in the early 1900’s that gave rise to what is now known as classical statistical hypothesis (model) testing. Here, I will summarize conventional methods for model-based inference and suggest a contemporary approach to aid in our quest to discover how cells dynamically interpret and transmit information for therapeutic aims that integrates ideas drawn from high performance computing, Bayesian statistics, and chemical kinetics. PMID:25139179

  13. In silico model-based inference: a contemporary approach for hypothesis testing in network biology.

    PubMed

    Klinke, David J

    2014-01-01

    Inductive inference plays a central role in the study of biological systems where one aims to increase their understanding of the system by reasoning backwards from uncertain observations to identify causal relationships among components of the system. These causal relationships are postulated from prior knowledge as a hypothesis or simply a model. Experiments are designed to test the model. Inferential statistics are used to establish a level of confidence in how well our postulated model explains the acquired data. This iterative process, commonly referred to as the scientific method, either improves our confidence in a model or suggests that we revisit our prior knowledge to develop a new model. Advances in technology impact how we use prior knowledge and data to formulate models of biological networks and how we observe cellular behavior. However, the approach for model-based inference has remained largely unchanged since Fisher, Neyman and Pearson developed the ideas in the early 1900s that gave rise to what is now known as classical statistical hypothesis (model) testing. Here, I will summarize conventional methods for model-based inference and suggest a contemporary approach to aid in our quest to discover how cells dynamically interpret and transmit information for therapeutic aims that integrates ideas drawn from high performance computing, Bayesian statistics, and chemical kinetics. © 2014 American Institute of Chemical Engineers.

  14. An Iterative Learning Algorithm to Map Oil Palm Plantations from Synthetic Aperture Radar and Crowdsourcing

    NASA Astrophysics Data System (ADS)

    Pinto, N.; Zhang, Z.; Perger, C.; Aguilar-Amuchastegui, N.; Almeyda Zambrano, A. M.; Broadbent, E. N.; Simard, M.; Banerjee, S.

    2017-12-01

    The oil palm Elaeis spp. grows exclusively in the tropics and provides 30% of the world's vegetable oil. While oil palm-derived biodiesel can reduce carbon emissions from fossil fuels, plantation establishment may be associated with peat fires and deforestation. The ability to monitor plantation establishment and expansion over carbon-rich tropical forests is critical for quantifying the net impact of oil palm commodities on carbon fluxes. Our objective is to develop a robust methodology to map oil palm plantations in tropical biomes, based on Synthetic Aperture Radar (SAR) from Sentinel-1, ALOS/PALSAR2, and UAVSAR. The C- and L-band signals from these instruments are sensitive to vegetation parameters such as canopy volume, trunk shape, and trunk spatial arrangement that are critical to differentiate crops from forests and native palms. Based on Bayesian statistics, the learning algorithm employed here adapts to growing knowledge as sites and training points are added. We will present an iterative approach wherein a model is initially built at the site with the most training points - in our case, Costa Rica. Model posteriors from Costa Rica, depicting polarimetric signatures of oil palm plantations, are then used as priors in a classification exercise taking place in South Kalimantan. Results are evaluated by local researchers using the LACO Wiki interface. All validation points, including misclassified sites, are used in an additional iteration to improve model results to >90% overall accuracy. We report on the impact of plantation age on polarimetric signatures, and we also compare model performance with and without L-band data.
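    The posterior-to-prior transfer described in this record can be illustrated with a minimal conjugate Beta-Binomial sketch (the counts and the accuracy interpretation are hypothetical; the actual algorithm operates on polarimetric signatures):

```python
def beta_update(alpha, beta, successes, failures):
    """Conjugate Beta-Binomial update: add observed counts to the prior's
    pseudo-counts to obtain the posterior parameters."""
    return alpha + successes, beta + failures

# First site (best sampled, playing the Costa Rica role): flat prior Beta(1, 1)
a1, b1 = beta_update(1.0, 1.0, successes=80, failures=20)

# Second site: reuse the first site's posterior as the prior, then update
# with the (fewer) local training labels
a2, b2 = beta_update(a1, b1, successes=8, failures=2)

posterior_mean = a2 / (a2 + b2)   # pooled estimate of classification accuracy
```

    Because the update is just pseudo-count addition, the second site starts from everything learned at the first site instead of from scratch, which is the essence of the iterative learning loop described above.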

  15. Feasibility of a low-dose orbital CT protocol with a knowledge-based iterative model reconstruction algorithm for evaluating Graves' orbitopathy.

    PubMed

    Lee, Ho-Joon; Kim, Jinna; Kim, Ki Wook; Lee, Seung-Koo; Yoon, Jin Sook

    2018-06-23

    To evaluate the clinical feasibility of low-dose orbital CT with a knowledge-based iterative model reconstruction (IMR) algorithm for evaluating Graves' orbitopathy. Low-dose orbital CT was performed with a CTDIvol of 4.4 mGy. In 12 patients for whom prior or subsequent non-low-dose orbital CT data obtained within 12 months were available, background noise, SNR, and CNR were compared for images generated using filtered back projection (FBP), hybrid iterative reconstruction (iDose4), and IMR and non-low-dose CT images. Comparison of clinically relevant measurements for Graves' orbitopathy, such as rectus muscle thickness and retrobulbar fat area, was performed in a subset of 6 patients who underwent CT for causes other than Graves' orbitopathy, by using the Wilcoxon signed-rank test. The lens dose estimated from skin dosimetry on a phantom was 4.13 mGy, which was on average 59.34% lower than that of the non-low-dose protocols. Image quality in terms of background noise, SNR, and CNR was the best for IMR, followed by non-low-dose CT, iDose4, and FBP, in descending order. A comparison of clinically relevant measurements revealed no significant difference in the retrobulbar fat area and the inferior and medial rectus muscle thicknesses between the low-dose and non-low-dose CT images. Low-dose CT with IMR may be performed without significantly affecting the measurement of prognostic parameters for Graves' orbitopathy while lowering the lens dose and image noise. Copyright © 2018 Elsevier Inc. All rights reserved.
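    Background noise, SNR, and CNR comparisons of this kind are computed from region-of-interest statistics; below is a minimal sketch using one common convention (the paper's exact ROI definitions may differ, and the pixel values here are simulated):

```python
import numpy as np

def snr_cnr(roi, background):
    """SNR and CNR from ROI and background pixel samples, using the
    background standard deviation as the noise estimate."""
    noise = background.std(ddof=1)
    snr = roi.mean() / noise
    cnr = (roi.mean() - background.mean()) / noise
    return snr, cnr

# Simulated pixel samples: ROI mean 100 HU, background mean 20 HU, noise SD 5
rng = np.random.default_rng(0)
roi = rng.normal(100.0, 5.0, 1000)
bg = rng.normal(20.0, 5.0, 1000)
snr, cnr = snr_cnr(roi, bg)
```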

  16. Low-dose 4D cardiac imaging in small animals using dual source micro-CT

    NASA Astrophysics Data System (ADS)

    Holbrook, M.; Clark, D. P.; Badea, C. T.

    2018-01-01

    Micro-CT is widely used in preclinical studies, generating substantial interest in extending its capabilities in functional imaging applications such as blood perfusion and cardiac function. However, imaging cardiac structure and function in mice is challenging due to their small size and rapid heart rate. To overcome these challenges, we propose and compare improvements on two strategies for cardiac gating in dual-source, preclinical micro-CT: fast prospective gating (PG) and uncorrelated retrospective gating (RG). These sampling strategies combined with a sophisticated iterative image reconstruction algorithm provide faster acquisitions and high image quality in low-dose 4D (i.e. 3D + time) cardiac micro-CT. Fast PG is performed under continuous subject rotation which results in interleaved projection angles between cardiac phases. Thus, fast PG provides a well-sampled temporal average image for use as a prior in iterative reconstruction. Uncorrelated RG incorporates random delays during sampling to prevent correlations between heart rate and sampling rate. We have performed both simulations and animal studies to validate these new sampling protocols. Sampling times for 1000 projections using fast PG and RG were 2 and 3 min, respectively, and the total dose was 170 mGy each. Reconstructions were performed using a 4D iterative reconstruction technique based on the split Bregman method. To examine undersampling robustness, subsets of 500 and 250 projections were also used for reconstruction. Both sampling strategies in conjunction with our iterative reconstruction method are capable of resolving cardiac phases and provide high image quality. In general, for equal numbers of projections, fast PG shows fewer errors than RG and is more robust to undersampling. Our results indicate that only 1000-projection based reconstruction with fast PG satisfies a 5% error criterion in left ventricular volume estimation. These methods promise low-dose imaging for a wide range of preclinical cardiac applications.

  17. Intelligent model-based OPC

    NASA Astrophysics Data System (ADS)

    Huang, W. C.; Lai, C. M.; Luo, B.; Tsai, C. K.; Chih, M. H.; Lai, C. W.; Kuo, C. C.; Liu, R. G.; Lin, H. T.

    2006-03-01

    Optical proximity correction is the technique of pre-distorting mask layouts so that the printed patterns are as close to the desired shapes as possible. For model-based optical proximity correction, a lithographic model to predict the edge position (contour) of patterns on the wafer after lithographic processing is needed. Generally, segmentation of edges is performed prior to the correction. Pattern edges are dissected into several small segments with corresponding target points. During the correction, the edges are moved back and forth from the initial drawn position, assisted by the lithographic model, to finally settle on the proper positions. When the correction converges, the intensity predicted by the model at every target point hits the model-specific threshold value. Several iterations are required to achieve convergence, and the computation time increases with the number of required iterations. An artificial neural network is an information-processing paradigm inspired by biological nervous systems, such as how the brain processes information. It is composed of a large number of highly interconnected processing elements (neurons) working in unison to solve specific problems. A neural network can be a powerful data-modeling tool that is able to capture and represent complex input/output relationships. The network can accurately predict the behavior of a system via the learning procedure. A radial basis function network, a variant of artificial neural network, is an efficient function approximator. In this paper, a radial basis function network was used to build a mapping from the segment characteristics to the edge shift from the drawn position. This network can provide a good initial guess for each segment on which OPC is carried out. The good initial guess reduces the required iterations. Consequently, cycle time can be shortened effectively. The optimization of the radial basis function network for this system was performed by a genetic algorithm, an optimization method with a high probability of finding the global optimum. From preliminary results, the required iterations were reduced from 5 to 2 for a simple dumbbell-shaped layout.
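    The segment-feature-to-edge-shift mapping can be sketched as a small radial basis function regression (a toy 1-D problem with invented data, not the paper's OPC model):

```python
import numpy as np

def rbf_fit(X, y, centers, gamma):
    """Fit RBF weights by least squares: y ~ Phi @ w, with Gaussian basis
    Phi[i, j] = exp(-gamma * (x_i - c_j)^2)."""
    Phi = np.exp(-gamma * (X[:, None] - centers[None, :]) ** 2)
    w, *_ = np.linalg.lstsq(Phi, y, rcond=None)
    return w

def rbf_predict(X, centers, gamma, w):
    Phi = np.exp(-gamma * (X[:, None] - centers[None, :]) ** 2)
    return Phi @ w

# Hypothetical data: a scalar segment descriptor vs. a known edge shift;
# a smooth sine stands in for the measured shift values.
X = np.linspace(0.0, 1.0, 20)
y = np.sin(2 * np.pi * X)
centers = np.linspace(0.0, 1.0, 10)
w = rbf_fit(X, y, centers, gamma=50.0)
pred = rbf_predict(X, centers, 50.0, w)
```

    Once trained, such a network evaluates in one pass, which is why it can supply a cheap initial guess for each segment before the iterative correction begins.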

  18. Nonrigid liver registration for image-guided surgery using partial surface data: a novel iterative approach

    NASA Astrophysics Data System (ADS)

    Rucker, D. Caleb; Wu, Yifei; Ondrake, Janet E.; Pheiffer, Thomas S.; Simpson, Amber L.; Miga, Michael I.

    2013-03-01

    In the context of open abdominal image-guided liver surgery, the efficacy of an image-guidance system relies on its ability to (1) accurately depict tool locations with respect to the anatomy, and (2) maintain the work flow of the surgical team. Laser-range scanned (LRS) partial surface measurements can be taken intraoperatively with relatively little impact on the surgical work flow, as opposed to other intraoperative imaging modalities. Previous research has demonstrated that this kind of partial surface data may be (1) used to drive a rigid registration of the preoperative CT image volume to intraoperative patient space, and (2) extrapolated and combined with a tissue-mechanics-based organ model to drive a non-rigid registration, thus compensating for organ deformations. In this paper we present a novel approach for intraoperative nonrigid liver registration which iteratively reconstructs a displacement field on the posterior side of the organ in order to minimize the error between the deformed model and the intraoperative surface data. Experimental results with a phantom liver undergoing large deformations demonstrate that this method achieves target registration errors (TRE) with a mean of 4.0 mm in the prediction of a set of 58 locations inside the phantom, which represents a 50% improvement over rigid registration alone, and a 44% improvement over the prior non-iterative single-solve method of extrapolating boundary conditions via a surface Laplacian.

  19. Low-dose CT reconstruction with patch based sparsity and similarity constraints

    NASA Astrophysics Data System (ADS)

    Xu, Qiong; Mou, Xuanqin

    2014-03-01

    With the rapid growth of CT-based medical applications, low-dose CT reconstruction is becoming increasingly important to human health. Compared with other methods, statistical iterative reconstruction (SIR) usually performs better in the low-dose case. However, the reconstructed image quality of SIR depends strongly on the prior-based regularization due to the insufficiency of low-dose data. The frequently used regularization is developed from pixel-based priors, such as the smoothness between adjacent pixels. This kind of pixel-based constraint cannot distinguish noise and structures effectively. Recently, patch-based methods, such as dictionary learning and non-local means filtering, have outperformed the conventional pixel-based methods. A patch is a small area of an image, which expresses the image's structural information. In this paper, we propose to use patch-based constraints to improve the image quality of low-dose CT reconstruction. In the SIR framework, both patch-based sparsity and similarity are considered in the regularization term. On one hand, patch-based sparsity is addressed by sparse representation and dictionary learning methods; on the other hand, patch-based similarity is addressed by the non-local means filtering method. We conducted a real data experiment to evaluate the proposed method. The experimental results validate that this method can lead to better images, with less noise and more detail, than other methods in low-count and few-view cases.
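    The patch-based similarity idea can be illustrated with a minimal non-local means filter on a 1-D signal (a sketch of the principle only; the paper applies it inside a statistical iterative reconstruction framework):

```python
import numpy as np

def nlm_1d(signal, patch_radius=2, search_radius=5, h=0.3):
    """Non-local means on a 1-D signal: each sample is replaced by a weighted
    average of nearby samples whose surrounding patches look similar."""
    n = len(signal)
    pad = np.pad(signal, patch_radius, mode="reflect")
    out = np.empty(n)
    for i in range(n):
        pi = pad[i:i + 2 * patch_radius + 1]          # patch around sample i
        lo, hi = max(0, i - search_radius), min(n, i + search_radius + 1)
        weights, values = [], []
        for j in range(lo, hi):
            pj = pad[j:j + 2 * patch_radius + 1]
            d2 = np.mean((pi - pj) ** 2)              # patch dissimilarity
            weights.append(np.exp(-d2 / h ** 2))
            values.append(signal[j])
        w = np.array(weights)
        out[i] = np.dot(w, values) / w.sum()
    return out

rng = np.random.default_rng(0)
clean = np.sign(np.sin(np.linspace(0, 4 * np.pi, 200)))  # piecewise-constant signal
noisy = clean + 0.2 * rng.standard_normal(200)
denoised = nlm_1d(noisy)
```

    Because the weights compare whole patches rather than single pixels, the filter averages across flat regions while leaving the step edges largely intact, which is exactly the noise/structure distinction the pixel-based priors miss.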

  20. Iterative Tensor Voting for Perceptual Grouping of Ill-Defined Curvilinear Structures: Application to Adherens Junctions

    PubMed Central

    Loss, Leandro A.; Bebis, George; Parvin, Bahram

    2012-01-01

    In this paper, a novel approach is proposed for perceptual grouping and localization of ill-defined curvilinear structures. Our approach builds upon the tensor voting and the iterative voting frameworks. Its efficacy lies in iterative refinements of curvilinear structures by gradually shifting from an exploratory to an exploitative mode. Such a mode shifting is achieved by reducing the aperture of the tensor voting fields, which is shown to improve curve grouping and inference by enhancing the concentration of the votes over promising, salient structures. The proposed technique is applied to delineation of adherens junctions imaged through fluorescence microscopy. This class of membrane-bound macromolecules maintains tissue structural integrity and cell-cell interactions. Visually, it exhibits fibrous patterns that may be diffused, punctate and frequently perceptual. Besides the application to real data, the proposed method is compared to prior methods on synthetic and annotated real data, showing high precision rates. PMID:21421432

  1. Shape regularized active contour based on dynamic programming for anatomical structure segmentation

    NASA Astrophysics Data System (ADS)

    Yu, Tianli; Luo, Jiebo; Singhal, Amit; Ahuja, Narendra

    2005-04-01

    We present a method to incorporate nonlinear shape prior constraints into segmenting different anatomical structures in medical images. Kernel space density estimation (KSDE) is used to derive the nonlinear shape statistics and enable building a single model for a class of objects with nonlinearly varying shapes. The object contour is coerced by image-based energy into the correct shape sub-distribution (e.g., left or right lung), without the need for model selection. In contrast to an earlier algorithm that uses a local gradient-descent search (susceptible to local minima), we propose an algorithm that iterates between dynamic programming (DP) and shape regularization. DP is capable of finding an optimal contour in the search space that maximizes a cost function related to the difference between the interior and exterior of the object. To enforce the nonlinear shape prior, we propose two shape regularization methods, global and local regularization. Global regularization is applied after each DP search to move the entire shape vector in the shape space in a gradient descent fashion to the position of probable shapes learned from training. The regularized shape is used as the starting shape for the next iteration. Local regularization is accomplished through modifying the search space of the DP. The modified search space only allows a certain amount of deformation of the local shape from the starting shape. Both regularization methods ensure consistency between the resulting shape and the training shapes, while still preserving DP's ability to search over a large range and avoid local minima. Our algorithm was applied to two different segmentation tasks for radiographic images: lung field and clavicle segmentation. Both applications have shown that our method is effective and versatile in segmenting various anatomical structures under prior shape constraints; and it is robust to noise and local minima caused by clutter (e.g., blood vessels) and other similar structures (e.g., ribs). We believe that the proposed algorithm represents a major step in the paradigm shift to object segmentation under nonlinear shape constraints.
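    The dynamic-programming contour search can be illustrated with a minimal min-cost path through a cost image, where each step may move at most one column sideways (a sketch of the DP machinery only, without the shape regularization):

```python
import numpy as np

def dp_min_path(cost):
    """Minimum-cost top-to-bottom path through a cost image; each row's cell
    may connect to the three adjacent cells in the row above."""
    rows, cols = cost.shape
    acc = cost.astype(float).copy()
    for r in range(1, rows):
        for c in range(cols):
            lo, hi = max(0, c - 1), min(cols, c + 2)
            acc[r, c] += acc[r - 1, lo:hi].min()
    # Backtrack from the cheapest cell in the last row
    path = [int(np.argmin(acc[-1]))]
    for r in range(rows - 2, -1, -1):
        c = path[-1]
        lo, hi = max(0, c - 1), min(cols, c + 2)
        path.append(lo + int(np.argmin(acc[r, lo:hi])))
    return path[::-1]

# A cheap corridor (zero cost) the optimal path should follow exactly
cost = np.ones((5, 5))
for r, c in enumerate([0, 1, 2, 2, 3]):
    cost[r, c] = 0.0
path = dp_min_path(cost)
```

    Restricting the per-row deformation in the accumulation step is the same mechanism the local regularization above uses to keep the contour near the starting shape.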

  2. Citizen science in natural resources: Lessons learned from stakeholder engagement in participatory research using collaborative adaptive management

    USDA-ARS?s Scientific Manuscript database

    Under the traditional “loading-dock” model of research, stakeholders are involved in determining priorities prior to research activities and then receive one-way communication about findings after research is completed. This approach lacks iterative engagement of stakeholders during the research pro...

  3. "Healthy People": A 2020 Vision for the Social Determinants Approach

    ERIC Educational Resources Information Center

    Koh, Howard K.; Piotrowski, Julie J.; Kumanyika, Shiriki; Fielding, Jonathan E.

    2011-01-01

    For the past three decades, the "Healthy People" initiative has represented an ambitious yet achievable health promotion and disease prevention agenda for the nation. The recently released fourth version--"Healthy People 2020"--builds on the foundations of prior iterations while newly embracing and elevating a comprehensive "social determinants"…

  4. ITER Central Solenoid Module Fabrication

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Smith, John

    The fabrication of the modules for the ITER Central Solenoid (CS) has started in a dedicated production facility located in Poway, California, USA. The necessary tools have been designed, built, installed, and tested in the facility to enable the start of production. The current schedule has first module fabrication completed in 2017, followed by testing and subsequent shipment to ITER. The Central Solenoid is a key component of the ITER tokamak providing the inductive voltage to initiate and sustain the plasma current and to position and shape the plasma. The design of the CS has been a collaborative effort between the US ITER Project Office (US ITER), the international ITER Organization (IO) and General Atomics (GA). GA’s responsibility includes: completing the fabrication design, developing and qualifying the fabrication processes and tools, and then completing the fabrication of the seven 110 tonne CS modules. The modules will be shipped separately to the ITER site, and then stacked and aligned in the Assembly Hall prior to insertion in the core of the ITER tokamak. A dedicated facility in Poway, California, USA has been established by GA to complete the fabrication of the seven modules. Infrastructure improvements included thick reinforced concrete floors, a diesel generator for backup power, along with cranes for moving the tooling within the facility. The fabrication process for a single module requires approximately 22 months followed by five months of testing, which includes preliminary electrical testing followed by high current (48.5 kA) tests at 4.7 K. The production of the seven modules is completed in a parallel fashion through ten process stations. The process stations have been designed and built with most stations having completed testing and qualification for carrying out the required fabrication processes. The final qualification step for each process station is achieved by the successful production of a prototype coil. Fabrication of the first ITER module is in progress. The seven modules will be individually shipped to Cadarache, France upon their completion. This paper describes the processes and status of the fabrication of the CS Modules for ITER.

  5. A long-term target detection approach in infrared image sequence

    NASA Astrophysics Data System (ADS)

    Li, Hang; Zhang, Qi; Li, Yuanyuan; Wang, Liqiang

    2015-12-01

    An automatic target detection method used in long-term infrared (IR) image sequences from a moving platform is proposed. Firstly, based on non-linear histogram equalization, target candidates are coarse-to-fine segmented by using two self-adaptive thresholds generated in the intensity space. Then the real target is captured via two different selection approaches. At the beginning of the image sequence, the genuine target with little texture is discriminated from other candidates by using a contrast-based confidence measure. On the other hand, when the target becomes larger, we apply an online EM method to iteratively estimate and update the distributions of the target's size and position based on the prior detection results, and then recognize the genuine one which satisfies both the constraints of size and position. Experimental results demonstrate that the presented method is accurate, robust and efficient.
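    The online update of the target's size and position statistics can be sketched with Welford's running mean/variance update (a simplified stand-in for the full online EM procedure; the per-frame measurements below are hypothetical):

```python
class RunningGaussian:
    """Welford's online update of a Gaussian's mean and variance, so the
    estimate can be refreshed frame by frame without storing past detections."""
    def __init__(self):
        self.n, self.mean, self.m2 = 0, 0.0, 0.0

    def update(self, x):
        self.n += 1
        delta = x - self.mean
        self.mean += delta / self.n
        self.m2 += delta * (x - self.mean)

    @property
    def variance(self):
        return self.m2 / self.n if self.n else 0.0

# Hypothetical target sizes measured in successive frames
sizes = [10.2, 10.8, 11.1, 10.5, 10.9]
g = RunningGaussian()
for s in sizes:
    g.update(s)
```

    A new candidate can then be scored against `g.mean` and `g.variance` to decide whether it satisfies the size constraint.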

  6. Progress Implementing a Model-Based Iterative Reconstruction Algorithm for Ultrasound Imaging of Thick Concrete

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Almansouri, Hani; Johnson, Christi R; Clayton, Dwight A

    All commercial nuclear power plants (NPPs) in the United States contain concrete structures. These structures provide important foundation, support, shielding, and containment functions. Identification and management of aging and the degradation of concrete structures is fundamental to the proposed long-term operation of NPPs. Concrete structures in NPPs are often inaccessible and contain large volumes of massively thick concrete. While acoustic imaging using the synthetic aperture focusing technique (SAFT) works adequately well for thin specimens of concrete such as concrete transportation structures, enhancements are needed for heavily reinforced, thick concrete. We argue that image reconstruction quality for acoustic imaging in thick concrete could be improved with Model-Based Iterative Reconstruction (MBIR) techniques. MBIR works by designing a probabilistic model for the measurements (forward model) and a probabilistic model for the object (prior model). Both models are used to formulate an objective function (cost function). The final step in MBIR is to optimize the cost function. Previously, we have demonstrated a first implementation of MBIR for an ultrasonic transducer array system. The original forward model has been upgraded to account for direct arrival signal. Updates to the forward model will be documented and the new algorithm will be assessed with synthetic and empirical samples.

  7. Progress implementing a model-based iterative reconstruction algorithm for ultrasound imaging of thick concrete

    NASA Astrophysics Data System (ADS)

    Almansouri, Hani; Johnson, Christi; Clayton, Dwight; Polsky, Yarom; Bouman, Charles; Santos-Villalobos, Hector

    2017-02-01

    All commercial nuclear power plants (NPPs) in the United States contain concrete structures. These structures provide important foundation, support, shielding, and containment functions. Identification and management of aging and the degradation of concrete structures is fundamental to the proposed long-term operation of NPPs. Concrete structures in NPPs are often inaccessible and contain large volumes of massively thick concrete. While acoustic imaging using the synthetic aperture focusing technique (SAFT) works adequately well for thin specimens of concrete such as concrete transportation structures, enhancements are needed for heavily reinforced, thick concrete. We argue that image reconstruction quality for acoustic imaging in thick concrete could be improved with Model-Based Iterative Reconstruction (MBIR) techniques. MBIR works by designing a probabilistic model for the measurements (forward model) and a probabilistic model for the object (prior model). Both models are used to formulate an objective function (cost function). The final step in MBIR is to optimize the cost function. Previously, we have demonstrated a first implementation of MBIR for an ultrasonic transducer array system. The original forward model has been upgraded to account for direct arrival signal. Updates to the forward model will be documented and the new algorithm will be assessed with synthetic and empirical samples.

  8. Subglacial sedimentary basin characterization of Wilkes Land, East Antarctica via applied aerogeophysical inverse methods

    NASA Astrophysics Data System (ADS)

    Frederick, B. C.; Gooch, B. T.; Richter, T.; Young, D. A.; Blankenship, D. D.; Aitken, A.; Siegert, M. J.

    2013-12-01

    Topography, sediment distribution and heat flux are all key boundary conditions governing the stability of the East Antarctic ice sheet (EAIS). Recent scientific scrutiny has been focused on several large, deep, interior EAIS basins including the submarine basal topography characterizing the Aurora Subglacial Basin (ASB). Numerical ice sheet models require accurate deformable sediment distribution and lithologic character constraints to estimate overall flow velocities and potential instability. To date, such estimates across the ASB have been derived from low-resolution satellite data or historic aerogeophysical surveys conducted prior to the advent of GPS. These rough basal condition estimates have led to poorly-constrained ice sheet stability models for this remote 200,000 sq km expanse of the ASB. Here we present a significantly improved quantitative model characterizing the subglacial lithology and sediment in the ASB region. The product of comprehensive ICECAP (2008-2013) aerogeophysical data processing, this sedimentary basin model details the expanse and thickness of probable Wilkes Land subglacial sedimentary deposits and density contrast boundaries indicative of distinct subglacial lithologic units. As part of the process, BEDMAP2 subglacial topographic results were improved through the additional incorporation of ice-penetrating radar data collected during ICECAP field seasons 2010-2013. Detailed potential field data pre-processing was completed as well as a comprehensive evaluation of crustal density contrasts based on the gravity power spectrum; a high-pass filter was then applied to remove longer crustal wavelengths from the gravity dataset prior to inversion. Gridded BEDMAP2+ ice and bed radar surfaces were then utilized to establish bounding density models for the 3D gravity inversion process to yield probable sedimentary basin anomalies.
Gravity inversion results were iteratively evaluated against radar along-track RMS deviation and gravity and magnetic depth to basement results. This geophysical data processing methodology provides a substantial improvement over prior Wilkes Land sedimentary basin estimates yielding a higher resolution model based upon iteration of several aerogeophysical datasets concurrently. This more detailed subglacial sedimentary basin model for Wilkes Land, East Antarctica will not only contribute to vast improvements on EAIS ice sheet model constraints, but will also provide significant quantifiable controls for subglacial hydrologic and geothermal flux estimates that are also sizable contributors to the cold-based, deep interior basal ice dynamics dominant in the Wilkes Land region.

  9. A Constrained Scheme for High Precision Downward Continuation of Potential Field Data

    NASA Astrophysics Data System (ADS)

    Wang, Jun; Meng, Xiaohong; Zhou, Zhiwen

    2018-04-01

    To further improve the accuracy of the downward continuation of potential field data, we present a novel constrained scheme in this paper combining the ideas of the truncated Taylor series expansion, the principal component analysis, the iterative continuation and the prior constraint. In the scheme, the initial downward continued field on the target plane is obtained from the original measured field using the truncated Taylor series expansion method. If the original field has a particularly low signal-to-noise ratio, principal component analysis is utilized to suppress the noise influence. Then, the downward continued field is upward continued to the plane of the prior information. If the prior information is on the target plane, it should be upward continued over a short distance to get the updated prior information. Next, the difference between the calculated field and the updated prior information is calculated. The cosine attenuation function is adopted to get the scope of constraint and the corresponding modification item. Afterward, a correction is performed on the downward continued field on the target plane by adding the modification item. The correction process is iteratively repeated until the difference meets the convergence condition. The accuracy of the proposed constrained scheme is tested on synthetic data with and without noise. Numerous model tests demonstrate that downward continuation using the constrained strategy can yield more precise results compared to other downward continuation methods without constraints and is relatively insensitive to noise even for downward continuation over a large distance. Finally, the proposed scheme is applied to real magnetic data collected within the Dapai polymetallic deposit from the Fujian province in South China. This practical application also indicates the superiority of the presented scheme.
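    The core correction loop (upward continue the current estimate, compare it with the data, and add back the misfit) can be sketched in 1-D, omitting the Taylor-series initialization, PCA denoising, and cosine-attenuation prior constraint of the full scheme:

```python
import numpy as np

def upward_continue(field, dz):
    """Upward continuation of a 1-D profile: multiply the spectrum by
    exp(-|k| dz), the standard potential-field continuation factor."""
    k = 2 * np.pi * np.abs(np.fft.fftfreq(field.size))
    return np.fft.ifft(np.fft.fft(field) * np.exp(-k * dz)).real

def downward_continue(observed, dz, n_iter=200):
    """Iteratively correct the estimate by the misfit between its upward
    continuation and the observed field."""
    est = observed.copy()
    for _ in range(n_iter):
        est = est + (observed - upward_continue(est, dz))
    return est

# Synthetic test: a smooth anomaly on the lower (target) plane, observed
# on a plane dz units higher, then recovered by the iterative loop.
x = np.arange(256) - 128.0
true_low = np.exp(-(x / 20.0) ** 2)
observed = upward_continue(true_low, dz=2.0)
recovered = downward_continue(observed, dz=2.0)
```

    Because upward continuation damps each wavenumber by exp(-k dz), the additive-misfit iteration converges for every wavenumber, quickly for the smooth components that dominate real anomalies and slowly for the high wavenumbers that carry mostly noise, which is why the full scheme adds denoising and prior constraints.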

  10. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Meng, Yifei; Zuo, Jian -Min

    A diffraction-based technique is developed for the determination of three-dimensional nanostructures. The technique employs high-resolution and low-dose scanning electron nanodiffraction (SEND) to acquire three-dimensional diffraction patterns, with the help of a special sample holder for large-angle rotation. Grains are identified in three-dimensional space based on crystal orientation and on reconstructed dark-field images from the recorded diffraction patterns. Application to a nanocrystalline TiN thin film shows that the three-dimensional morphology of columnar TiN grains of tens of nanometres in diameter can be reconstructed using an algebraic iterative algorithm under specified prior conditions, together with their crystallographic orientations. The principles can be extended to multiphase nanocrystalline materials as well. Furthermore, the tomographic SEND technique provides an effective and adaptive way of determining three-dimensional nanostructures.

  11. On the solution of evolution equations based on multigrid and explicit iterative methods

    NASA Astrophysics Data System (ADS)

    Zhukov, V. T.; Novikova, N. D.; Feodoritova, O. B.

    2015-08-01

    Two schemes for solving initial-boundary value problems for three-dimensional parabolic equations are studied. One is implicit and is solved using the multigrid method, while the other is explicit iterative and is based on optimal properties of the Chebyshev polynomials. In the explicit iterative scheme, the number of iteration steps and the iteration parameters are chosen based on the approximation and stability conditions, rather than on the optimization of iteration convergence to the solution of the implicit scheme. The features of the multigrid scheme include the implementation of the intergrid transfer operators for the case of discontinuous coefficients in the equation and the adaptation of the smoothing procedure to the spectrum of the difference operators. The results produced by these schemes as applied to model problems with anisotropic discontinuous coefficients are compared.
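    A Chebyshev-based explicit iteration of the kind referred to above can be sketched for a model linear system, using the standard three-term recurrence with known spectral bounds (the toy diagonal matrix below stands in for a parabolic difference operator):

```python
import numpy as np

def chebyshev_solve(A, b, lam_min, lam_max, n_iter=60):
    """Chebyshev semi-iterative method for A x = b, with A symmetric positive
    definite and eigenvalues contained in [lam_min, lam_max]."""
    theta = 0.5 * (lam_max + lam_min)   # center of the spectrum interval
    delta = 0.5 * (lam_max - lam_min)   # half-width of the interval
    sigma1 = theta / delta
    rho = 1.0 / sigma1
    x = np.zeros_like(b)
    r = b - A @ x
    d = r / theta
    for _ in range(n_iter):
        x = x + d
        r = r - A @ d
        rho_new = 1.0 / (2.0 * sigma1 - rho)
        d = rho_new * rho * d + (2.0 * rho_new / delta) * r
        rho = rho_new
    return x

A = np.diag([1.0, 2.0, 4.0])          # toy SPD system with spectrum in [1, 4]
b = np.array([1.0, 2.0, 4.0])
x = chebyshev_solve(A, b, lam_min=1.0, lam_max=4.0)   # exact solution: all ones
```

    The attraction for explicit time stepping is that the iteration needs only matrix-vector products and the two spectral bounds; no inner solves or inner products are required.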

  12. Measurement Uncertainty of Dew-Point Temperature in a Two-Pressure Humidity Generator

    NASA Astrophysics Data System (ADS)

    Martins, L. Lages; Ribeiro, A. Silva; Alves e Sousa, J.; Forbes, Alistair B.

    2012-09-01

    This article describes the measurement uncertainty evaluation of the dew-point temperature when using a two-pressure humidity generator as a reference standard. The estimation of the dew-point temperature involves the solution of a non-linear equation for which iterative solution techniques, such as the Newton-Raphson method, are required. Previous studies have already been carried out using the GUM method and the Monte Carlo method but have not discussed the impact of the approximate numerical method used to provide the temperature estimation. One of the aims of this article is to take this approximation into account. Following the guidelines presented in the GUM Supplement 1, two alternative approaches can be developed: the forward measurement uncertainty propagation by the Monte Carlo method when using the Newton-Raphson numerical procedure; and the inverse measurement uncertainty propagation by Bayesian inference, based on prior available information regarding the usual dispersion of values obtained by the calibration process. The measurement uncertainties obtained using these two methods can be compared with previous results. Other relevant issues concerning this research are the broad application to measurements that require hygrometric conditions obtained from two-pressure humidity generators and, also, the ability to provide a solution that can be applied to similar iterative models. The research also studied the factors influencing both the use of the Monte Carlo method (such as the seed value and the convergence parameter) and the inverse uncertainty propagation using Bayesian inference (such as the pre-assigned tolerance, prior estimate, and standard deviation) in terms of their accuracy and adequacy.
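    The forward Monte Carlo propagation through an iterative Newton-Raphson solve can be sketched with a toy measurement model chosen to be analytically invertible, so the result can be checked against exact propagation (the equation below is hypothetical, not the two-pressure generator model):

```python
import numpy as np

def newton(f, fprime, x0, tol=1e-12, max_iter=50):
    """Basic Newton-Raphson root finder; tol is applied to the step size."""
    x = x0
    for _ in range(max_iter):
        step = f(x) / fprime(x)
        x -= step
        if abs(step) < tol:
            break
    return x

# Hypothetical measurement model: the measurand T solves exp(0.06 * T) = y,
# so the exact answer is T = ln(y) / 0.06 and dT/dy = 1 / (0.06 * y).
def solve_T(y):
    return newton(lambda t: np.exp(0.06 * t) - y,
                  lambda t: 0.06 * np.exp(0.06 * t), x0=10.0)

# Forward Monte Carlo: sample the input quantity, solve each sample by Newton,
# and summarize the output distribution.
rng = np.random.default_rng(1)
y_samples = rng.normal(loc=2.0, scale=0.01, size=5000)
T_samples = np.array([solve_T(y) for y in y_samples])
T_mean, T_u = T_samples.mean(), T_samples.std(ddof=1)
```

    Here the Monte Carlo standard uncertainty can be compared directly with the linear GUM propagation u(T) = u(y) / (0.06 y), which is the kind of cross-check the article performs for the dew-point estimation.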

  13. Equivalent charge source model based iterative maximum neighbor weight for sparse EEG source localization.

    PubMed

    Xu, Peng; Tian, Yin; Lei, Xu; Hu, Xiao; Yao, Dezhong

    2008-12-01

    How to localize neural electric activities within the brain effectively and precisely from scalp electroencephalogram (EEG) recordings is a critical issue in clinical neurology and cognitive neuroscience. In this paper, based on the charge source model and the iterative re-weighted strategy, we propose a new maximum-neighbor-weight-based iterative sparse source imaging method, termed CMOSS (Charge source model based Maximum neighbOr weight Sparse Solution). Unlike the weight used in the focal underdetermined system solver (FOCUSS), where the weight for each point in the discrete solution space is updated independently across iterations, the newly designed weight for each point in each iteration is determined by the previous iteration's source solution at both the point and its neighbors. Using such a weight, the next iteration has a better chance of rectifying the local source location bias present in the previous iteration's solution. Simulation studies comparing CMOSS with FOCUSS and LORETA for various source configurations were conducted on a realistic 3-shell head model, and the results confirmed the validity of CMOSS for sparse EEG source localization. Finally, CMOSS was applied to localize sources elicited in a visual stimulus experiment, and the result was consistent with the source areas involved in visual processing reported in previous studies.
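The neighbor-augmented reweighting idea can be sketched on a 1D toy sparse-recovery problem; the max-over-neighbors weight below is a simplified stand-in for the paper's neighbor weight, and the problem sizes, regularizer, and test signal are arbitrary assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

def neighbor_weighted_focuss(A, b, n_iter=20, lam=1e-10):
    """FOCUSS-style iterative reweighted minimum-norm recovery in which
    each weight depends on the previous solution at the point AND its
    1D neighbors (a toy stand-in for the CMOSS neighbor weight)."""
    m, n = A.shape
    x = np.ones(n)
    for _ in range(n_iter):
        a = np.abs(x)
        # neighbor-augmented weight: maximum over {i-1, i, i+1}
        w = np.maximum(a, np.maximum(np.roll(a, 1), np.roll(a, -1)))
        W = np.diag(w)
        AW = A @ W
        # weighted minimum-norm solution x = W^2 A^T (A W^2 A^T + lam I)^-1 b
        x = W @ (AW.T @ np.linalg.solve(AW @ AW.T + lam * np.eye(m), b))
    return x

# toy underdetermined problem with a 2-sparse ground truth
n, m = 40, 12
A = rng.standard_normal((m, n))
x_true = np.zeros(n)
x_true[5], x_true[25] = 1.0, -2.0
b = A @ x_true
x_hat = neighbor_weighted_focuss(A, b)
```

Because the weight at a point stays large while a neighbor is active, a source whose location was slightly biased in one iteration can migrate to the correct position in the next, which is the mechanism the abstract highlights.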

  14. Fully iterative scatter corrected digital breast tomosynthesis using GPU-based fast Monte Carlo simulation and composition ratio update

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kim, Kyungsang; Ye, Jong Chul, E-mail: jong.ye@kaist.ac.kr; Lee, Taewon

    2015-09-15

    Purpose: In digital breast tomosynthesis (DBT), scatter correction is highly desirable, as it improves image quality at low doses. Because the DBT detector panel is typically stationary during the source rotation, antiscatter grids are not generally compatible with DBT; thus, a software-based scatter correction is required. This work proposes a fully iterative scatter correction method that uses a novel fast Monte Carlo simulation (MCS) with a tissue-composition ratio estimation technique for DBT imaging. Methods: To apply MCS to scatter estimation, the material composition in each voxel should be known. To overcome the lack of prior accurate knowledge of tissue composition for DBT, a tissue-composition ratio is estimated based on the observation that the breast tissues are principally composed of adipose and glandular tissues. Using this approximation, the composition ratio can be estimated from the reconstructed attenuation coefficients, and the scatter distribution can then be estimated by MCS using the composition ratio. The scatter estimation and image reconstruction procedures can be performed iteratively until an acceptable accuracy is achieved. For practical use, (i) the authors have implemented a fast MCS using a graphics processing unit (GPU), (ii) the MCS is simplified to transport only x-rays in the energy range of 10–50 keV, modeling Rayleigh and Compton scattering and the photoelectric effect using the tissue-composition ratio of adipose and glandular tissues, and (iii) downsampling is used because the scatter distribution varies rather smoothly. Results: The authors have demonstrated that the proposed method can accurately estimate the scatter distribution, and that the contrast-to-noise ratio of the final reconstructed image is significantly improved. The authors validated the performance of the MCS by changing the tissue thickness, composition ratio, and x-ray energy. The authors confirmed that the tissue-composition ratio estimation was quite accurate under a variety of conditions. Our GPU-based fast MCS implementation took approximately 3 s to generate each angular projection for a 6 cm thick breast, which is believed to make this process acceptable for clinical applications. In addition, the clinical preferences of three radiologists were evaluated; the preference for the proposed method compared to the preference for the convolution-based method was statistically meaningful (p < 0.05, McNemar test). Conclusions: The proposed fully iterative scatter correction method and the GPU-based fast MCS using tissue-composition ratio estimation successfully improved the image quality within a reasonable computational time, which may potentially increase the clinical utility of DBT.

  15. Sentence-Level Rewriting Detection

    ERIC Educational Resources Information Center

    Zhang, Fan; Litman, Diane

    2014-01-01

    Writers usually need iterations of revisions and edits during their writing. To better understand the process of rewriting, we need to know what has changed between the revisions. Prior work mainly focuses on detecting corrections within sentences, at the level of words or phrases. This paper proposes to detect revision changes at the…

  16. Iterative Region-of-Interest Reconstruction from Limited Data Using Prior Information

    NASA Astrophysics Data System (ADS)

    Vogelgesang, Jonas; Schorr, Christian

    2017-12-01

    In practice, computed tomography and computed laminography applications suffer from incomplete data. In particular, when inspecting large objects with extremely different diameters in the longitudinal and transversal directions, or when high-resolution reconstructions are desired, the physical conditions of the scanning system lead to restricted data and truncated projections, also known as the interior or region-of-interest (ROI) problem. To recover the sought density function of the inspected object, we derive a semi-discrete model of the ROI problem that inherently allows the incorporation of geometrical prior information in an abstract Hilbert space setting for bounded linear operators. Assuming that the attenuation inside the object is approximately constant, as for fibre-reinforced plastic parts or homogeneous objects where one is interested in locating defects like cracks or porosities, we apply the semi-discrete Landweber-Kaczmarz method to recover the inner structure of the object inside the ROI from the measured data, resulting in a semi-discrete iteration method. Finally, numerical experiments for three-dimensional tomographic applications with both an inherent restricted source and an ROI problem are provided to verify the proposed method for ROI reconstruction.
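For a single discrete operator, the Landweber-Kaczmarz iteration reduces to the classical Landweber update; a minimal dense sketch follows (the Kaczmarz variant would cycle this update over sub-operators, and the toy system here is an illustrative assumption).

```python
import numpy as np

def landweber(A, b, n_iter=500, omega=None):
    """Classical Landweber iteration x_{k+1} = x_k + omega * A^T (b - A x_k);
    converges for relaxation 0 < omega < 2 / ||A||_2^2."""
    if omega is None:
        omega = 1.0 / np.linalg.norm(A, 2) ** 2
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x = x + omega * (A.T @ (b - A @ x))
    return x

# consistent toy system: the iteration recovers the exact solution
A = np.array([[1.0, 0.0], [0.0, 2.0], [1.0, 1.0]])
x_true = np.array([1.0, 2.0])
x = landweber(A, A @ x_true)   # -> approx [1, 2]
```

In the ROI setting the operator A would be the restricted projection operator and the prior information (e.g. approximately constant attenuation) would constrain the iterates; here only the bare update is shown.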

  17. Iterative Methods to Solve Linear RF Fields in Hot Plasma

    NASA Astrophysics Data System (ADS)

    Spencer, Joseph; Svidzinski, Vladimir; Evstatiev, Evstati; Galkin, Sergei; Kim, Jin-Soo

    2014-10-01

    Most magnetic plasma confinement devices use radio frequency (RF) waves for current drive and/or heating. Numerical modeling of RF fields is an important part of performance analysis of such devices and a predictive tool aiding the design and development of future devices. Prior attempts at this modeling have mostly used direct solvers to solve the formulated linear equations. Full wave modeling of RF fields in hot plasma with 3D nonuniformities is largely prohibitive, as the memory demands of a direct solver place a significant limitation on spatial resolution. Iterative methods can significantly increase spatial resolution. We explore the feasibility of using iterative methods in 3D full wave modeling. The linear wave equation is formulated using two approaches: for cold plasmas the local cold plasma dielectric tensor is used (resolving resonances by particle collisions), while for hot plasmas the conductivity kernel (which includes a nonlocal dielectric response) is calculated by integrating along test particle orbits. The wave equation is discretized using a finite difference approach. The initial guess is important in iterative methods, and we examine different initial guesses including the solution to the cold plasma wave equation. Work is supported by the U.S. DOE SBIR program.

  18. Blind motion image deblurring using nonconvex higher-order total variation model

    NASA Astrophysics Data System (ADS)

    Li, Weihong; Chen, Rui; Xu, Shangwen; Gong, Weiguo

    2016-09-01

    We propose a nonconvex higher-order total variation (TV) method for blind motion image deblurring. First, we introduce a nonconvex higher-order TV differential operator to define a new model of blind motion image deblurring, which can effectively eliminate the staircase effect in the deblurred image; meanwhile, we employ an image sparsity prior to improve the edge recovery quality. Second, to improve the accuracy of the estimated motion blur kernel, we use the L1 norm and H1 norm as the blur kernel regularization terms, accounting for the sparsity and smoothness of the motion blur kernel. Third, because the proposed model is computationally difficult to solve owing to its intrinsic nonconvexity, we propose a binary iterative strategy, which incorporates a reweighted minimization approximation scheme in the outer iteration and a split Bregman algorithm in the inner iteration. We also discuss the convergence of the proposed binary iterative strategy. Finally, we conduct extensive experiments on both synthetic and real-world degraded images. The results demonstrate that the proposed method outperforms previous representative methods in both quality of visual perception and quantitative measurement.

  19. BaTMAn: Bayesian Technique for Multi-image Analysis

    NASA Astrophysics Data System (ADS)

    Casado, J.; Ascasibar, Y.; García-Benito, R.; Guidi, G.; Choudhury, O. S.; Bellocchi, E.; Sánchez, S. F.; Díaz, A. I.

    2016-12-01

    Bayesian Technique for Multi-image Analysis (BaTMAn) characterizes any astronomical dataset containing spatial information and performs a tessellation based on the measurements and errors provided as input. The algorithm iteratively merges spatial elements as long as they are statistically consistent with carrying the same information (i.e. identical signal within the errors). The output segmentations successfully adapt to the underlying spatial structure, regardless of its morphology and/or the statistical properties of the noise. BaTMAn identifies (and keeps) all the statistically-significant information contained in the input multi-image (e.g. an IFS datacube). The main aim of the algorithm is to characterize spatially-resolved data prior to their analysis.
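The core merging criterion, joining neighbouring elements as long as their signals agree within the errors, can be sketched in 1D; the greedy pass below is a toy stand-in for BaTMAn's tessellation logic, with an assumed 3-sigma consistency threshold.

```python
import numpy as np

def merge_segments(values, errors, n_sigma=3.0):
    """Greedily merge adjacent elements whose inverse-variance-weighted
    means are statistically consistent within n_sigma (1D toy version)."""
    # each segment stores [sum of w*v, sum of w] with w = 1/err^2
    segs = [[v / e**2, 1.0 / e**2] for v, e in zip(values, errors)]
    merged = True
    while merged:
        merged = False
        i = 0
        while i < len(segs) - 1:
            (s1, w1), (s2, w2) = segs[i], segs[i + 1]
            diff = s1 / w1 - s2 / w2
            # significance of the difference between the two weighted means
            if abs(diff) / np.sqrt(1.0 / w1 + 1.0 / w2) < n_sigma:
                segs[i] = [s1 + s2, w1 + w2]
                del segs[i + 1]
                merged = True
            else:
                i += 1
    return [s / w for s, w in segs]

# two flat patches with noise-level jitter collapse into two segments
means = merge_segments([1.0, 1.1, 0.9, 5.0, 5.2], [0.2] * 5)   # -> [1.0, 5.1]
```

The real algorithm operates on multi-dimensional spatial elements and full multi-image vectors, but the stopping logic is the same: merging halts wherever the difference is statistically significant, so genuine structure is kept.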

  20. An iterated Laplacian based semi-supervised dimensionality reduction for classification of breast cancer on ultrasound images.

    PubMed

    Liu, Xiao; Shi, Jun; Zhou, Shichong; Lu, Minhua

    2014-01-01

    Dimensionality reduction is an important step in ultrasound-image-based computer-aided diagnosis (CAD) for breast cancer. A newly proposed l2,1 regularized correntropy algorithm for robust feature selection (CRFS) has achieved good performance on noise-corrupted data. Therefore, it has the potential to reduce the dimensionality of ultrasound image features. However, in clinical practice, the collection of labeled instances is usually expensive and time-consuming, while it is relatively easy to acquire unlabeled or undetermined instances. Therefore, semi-supervised learning is well suited to clinical CAD. The iterated Laplacian regularization (Iter-LR) is a new regularization method that has been proven to outperform the traditional graph Laplacian regularization in semi-supervised classification and ranking. In this study, to improve the classification accuracy of texture-feature-based breast ultrasound CAD, we propose an Iter-LR-based semi-supervised CRFS (Iter-LR-CRFS) algorithm, and then apply it to reduce the feature dimensionality of ultrasound images for breast CAD. We compared Iter-LR-CRFS with LR-CRFS, the original supervised CRFS, and principal component analysis. The experimental results indicate that the proposed Iter-LR-CRFS significantly outperforms all the other algorithms.

  1. Segmentation and tracking of lung nodules via graph-cuts incorporating shape prior and motion from 4D CT.

    PubMed

    Cha, Jungwon; Farhangi, Mohammad Mehdi; Dunlap, Neal; Amini, Amir A

    2018-01-01

    We have developed a robust tool for performing volumetric and temporal analysis of nodules from respiratory-gated four-dimensional (4D) CT. The method could prove useful in IMRT of lung cancer. We modified the conventional graph-cuts method by adding an adaptive shape prior as well as motion information within a signed distance function representation to permit more accurate and automated segmentation and tracking of lung nodules in 4D CT data. Active shape models (ASM) with a signed distance function were used to capture the shape prior information, preventing unwanted surrounding tissues from becoming part of the segmented object. The optical flow method was used to estimate the local motion and to extend three-dimensional (3D) segmentation to 4D by warping a prior shape model through time. The algorithm has been applied to segmentation of well-circumscribed, vascularized, and juxtapleural lung nodules from respiratory-gated CT data. In all cases, 4D segmentation and tracking for five phases of high-resolution CT data took approximately 10 min on a PC workstation with an AMD Phenom II and 32 GB of memory. The method was trained on 500 breath-held 3D CT datasets from the LIDC database and was tested on 17 4D lung nodule CT datasets consisting of 85 volumetric frames. The validation tests resulted in an average Dice Similarity Coefficient (DSC) of 0.68 for all test data. An important by-product of the method is quantitative volume measurement from 4D CT from end-inspiration to end-expiration, which will also have important diagnostic value. The algorithm performs robust segmentation of lung nodules from 4D CT data. The signed distance ASM provides the shape prior information, which is adaptively refined within the iterative graph-cuts framework to best fit the input data, preventing unwanted surrounding tissue from merging with the segmented object. © 2017 American Association of Physicists in Medicine.

  2. SU-F-I-08: CT Image Ring Artifact Reduction Based On Prior Image

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yuan, C; Qi, H; Chen, Z

    Purpose: In a computed tomography (CT) system, CT images with ring artifacts are reconstructed when some adjacent detector bins fail. The ring artifacts severely degrade CT image quality. We present a CT ring artifact reduction method based on projection data correction, aiming at estimating the missing projection data accurately and thus removing the ring artifacts from the CT images. Methods: The method consists of ten steps: 1) identification of the abnormal pixel line in the projection sinogram; 2) linear interpolation within the pixel line of the projection sinogram; 3) FBP reconstruction using the interpolated projection data; 4) filtering the FBP image using a mean filter; 5) forward projection of the filtered FBP image; 6) subtraction of the forwarded projection from the original projection; 7) linear interpolation of the abnormal pixel line area in the subtraction projection; 8) adding the interpolated subtraction projection to the forwarded projection; 9) FBP reconstruction using the corrected projection data; 10) return to step 4 until the pre-set iteration number is reached. The method is validated on simulated and real data to restore missing projection data and reconstruct ring artifact-free CT images. Results: We have studied the impact of the number of dead bins of the CT detector on the accuracy of missing data estimation in the projection sinogram. For the simulated case with a 256 by 256 Shepp-Logan phantom, three iterations are sufficient to restore the projection data and reconstruct ring artifact-free images when the dead bin rate is under 30%. The dead-bin-induced artifacts are substantially reduced. More iterations are needed to reconstruct satisfactory images as the rate of dead bins increases. Similar results were found for a real head phantom case. Conclusion: A practical CT image ring artifact correction scheme based on projection data is developed. This method can produce ring artifact-free CT images feasibly and effectively.
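The interpolation step of such a scheme (filling the abnormal pixel line across detector bins, per projection view) can be sketched as follows; the sinogram layout (angles by detector bins) and the dead-bin indices are illustrative assumptions.

```python
import numpy as np

def interpolate_dead_bins(sino, dead_bins):
    """Linearly interpolate dead detector columns of a sinogram
    (rows: projection angles, columns: detector bins), view by view."""
    sino = sino.copy()
    good = np.setdiff1d(np.arange(sino.shape[1]), dead_bins)
    for row in sino:                      # each row is one projection view
        row[dead_bins] = np.interp(dead_bins, good, row[good])
    return sino

# a ramp sinogram with detector bin 2 dead in every view
sino = np.tile(np.arange(5.0), (3, 1))
sino[:, 2] = -1.0                         # corrupted readings
fixed = interpolate_dead_bins(sino, [2])  # bin 2 restored to 2.0 per view
```

In the full scheme this simple interpolation only provides the starting point; the subsequent forward-projection/subtraction loop refines the estimate of the missing data over several iterations.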

  3. Shading correction assisted iterative cone-beam CT reconstruction

    NASA Astrophysics Data System (ADS)

    Yang, Chunlin; Wu, Pengwei; Gong, Shutao; Wang, Jing; Lyu, Qihui; Tang, Xiangyang; Niu, Tianye

    2017-11-01

    Recent advances in total variation (TV) technology enable accurate CT image reconstruction from highly under-sampled and noisy projection data. The standard iterative reconstruction algorithms, which work well in conventional CT imaging, fail to perform as expected in cone beam CT (CBCT) applications, wherein the non-ideal physics issues, including scatter and beam hardening, are more severe. These physics issues result in large areas of shading artifacts and cause deterioration of the piecewise constant property assumed in reconstructed images. To overcome this obstacle, we incorporate a shading correction scheme into low-dose CBCT reconstruction and propose a clinically acceptable and stable three-dimensional iterative reconstruction method that is referred to as shading correction assisted iterative reconstruction. In the proposed method, we modify the TV regularization term by adding a shading compensation image to the reconstructed image to compensate for the shading artifacts while leaving the data fidelity term intact. This compensation image is generated empirically, using image segmentation and low-pass filtering, and updated in the iterative process whenever necessary. When the compensation image is determined, the objective function is minimized using the fast iterative shrinkage-thresholding algorithm accelerated on a graphics processing unit. The proposed method is evaluated using CBCT projection data of the Catphan© 600 phantom and two pelvis patients. Compared with iterative reconstruction without shading correction, the proposed method reduces the overall CT number error from around 200 HU to around 25 HU and improves the spatial uniformity by 20 percent, given the same number of sparsely sampled projections. A clinically acceptable and stable iterative reconstruction algorithm for CBCT is proposed in this paper. Differing from existing algorithms, this algorithm incorporates a shading correction scheme into low-dose CBCT reconstruction and achieves a more stable optimization path and a more clinically acceptable reconstructed image. The proposed method does not rely on prior information and is thus practically attractive for applications of low-dose CBCT imaging in the clinic.

  4. Acceleration of image-based resolution modelling reconstruction using an expectation maximization nested algorithm.

    PubMed

    Angelis, G I; Reader, A J; Markiewicz, P J; Kotasidis, F A; Lionheart, W R; Matthews, J C

    2013-08-07

    Recent studies have demonstrated the benefits of a resolution model within iterative reconstruction algorithms in an attempt to account for effects that degrade the spatial resolution of the reconstructed images. However, these algorithms suffer from slower convergence rates, compared to algorithms where no resolution model is used, due to the additional need to solve an image deconvolution problem. In this paper, a recently proposed algorithm, which decouples the tomographic and image deconvolution problems within an image-based expectation maximization (EM) framework, was evaluated. This separation is convenient, because more computational effort can be placed on the image deconvolution problem and therefore accelerate convergence. Since the computational cost of solving the image deconvolution problem is relatively small, multiple image-based EM iterations do not significantly increase the overall reconstruction time. The proposed algorithm was evaluated using 2D simulations, as well as measured 3D data acquired on the high-resolution research tomograph. Results showed that bias reduction can be accelerated by interleaving multiple iterations of the image-based EM algorithm solving the resolution model problem, with a single EM iteration solving the tomographic problem. Significant improvements were observed particularly for voxels that were located on the boundaries between regions of high contrast within the object being imaged and for small regions of interest, where resolution recovery is usually more challenging. Minor differences were observed using the proposed nested algorithm, compared to the single iteration normally performed, when an optimal number of iterations are performed for each algorithm. However, using the proposed nested approach convergence is significantly accelerated enabling reconstruction using far fewer tomographic iterations (up to 70% fewer iterations for small regions). 
    Nevertheless, the optimal number of nested image-based EM iterations is difficult to define and should be selected according to the given application.
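The image-based EM step at the heart of such resolution modelling is, in essence, a Richardson-Lucy-type deconvolution; below is a 1D circular sketch of that inner update, with an assumed toy point-spread function and signal rather than the authors' PET system model.

```python
import numpy as np

def conv(a, k):
    """Circular convolution via FFT (kernel centered at index 0)."""
    return np.real(np.fft.ifft(np.fft.fft(a) * np.fft.fft(k)))

def image_based_em(blurred, psf, n_iter=200):
    """Richardson-Lucy / image-space EM deconvolution, 1D circular case.
    In the nested scheme, several of these cheap image-space iterations
    are interleaved with each (expensive) tomographic EM iteration."""
    psf_adj = np.roll(psf[::-1], 1)      # adjoint kernel for circular conv
    x = np.ones_like(blurred)
    for _ in range(n_iter):
        ratio = blurred / np.maximum(conv(x, psf), 1e-12)
        x = x * conv(ratio, psf_adj)
    return x

# blur two point sources with a short symmetric kernel, then deconvolve
n = 16
psf = np.zeros(n)
psf[0], psf[1], psf[-1] = 0.6, 0.2, 0.2
x_true = np.zeros(n)
x_true[4], x_true[10] = 1.0, 2.0
blurred = conv(x_true, psf)
x = image_based_em(blurred, psf)
```

Because each inner iteration involves only convolutions on the image grid, running many of them costs far less than one tomographic forward/back-projection, which is the efficiency argument made in the abstract.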

  5. Dual energy CT with one full scan and a second sparse-view scan using structure preserving iterative reconstruction (SPIR)

    NASA Astrophysics Data System (ADS)

    Wang, Tonghe; Zhu, Lei

    2016-09-01

    Conventional dual-energy CT (DECT) reconstruction requires two full-size projection datasets with two different energy spectra. In this study, we propose an iterative algorithm to enable a new data acquisition scheme which requires one full scan and a second sparse-view scan, for potential reduction in the imaging dose and engineering cost of DECT. A bilateral filter is calculated as a similarity matrix from the first full-scan CT image to quantify the similarity between any two pixels, and this similarity is assumed unchanged on the second CT image since the DECT scans are performed on the same object. The second CT image from reduced projections is reconstructed by an iterative algorithm which updates the image by minimizing the total variation of the difference between the image and its filtered version under the similarity matrix, subject to a data fidelity constraint. As the redundant structural information of the two CT images is contained in the similarity matrix, we refer to the algorithm as structure preserving iterative reconstruction (SPIR). The proposed method is evaluated on both digital and physical phantoms, and is compared with the filtered-backprojection (FBP) method, a conventional total-variation-regularization-based algorithm (TVR), and prior-image-constrained compressed sensing (PICCS). SPIR with a second 10-view scan reduces the image noise STD by one order of magnitude while maintaining the same spatial resolution as the full-view FBP image. SPIR substantially improves over TVR on the reconstruction accuracy of a 10-view scan, decreasing the reconstruction error from 6.18% to 1.33%, and outperforms TVR on spatial resolution at 50- and 20-view scans, with the spatial frequency at the 10% modulation transfer function value higher by an average factor of 4. Compared with the 20-view scan PICCS result, the SPIR image has 7 times lower noise STD with similar spatial resolution.
The electron density map obtained from the SPIR-based DECT images with a second 10-view scan has an average error of less than 1%.
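The central idea, a bilateral similarity matrix learned on the first scan and reused to filter the second, can be sketched in 1D; the kernel widths, the piecewise-constant test images, and the noise level are illustrative assumptions, not the paper's setup.

```python
import numpy as np

def bilateral_similarity(img, sigma_d=1.0, sigma_i=0.1):
    """Row-normalized bilateral (spatial x intensity) similarity matrix
    of a 1D image, computed from a first full-scan image."""
    i = np.arange(img.size)
    w = (np.exp(-0.5 * (i[:, None] - i[None, :]) ** 2 / sigma_d ** 2)
         * np.exp(-0.5 * (img[:, None] - img[None, :]) ** 2 / sigma_i ** 2))
    return w / w.sum(axis=1, keepdims=True)

# the similarity learned on a piecewise-constant 'first scan' preserves
# the edge when filtering a noisy 'second scan' of the same object
rng = np.random.default_rng(5)
first = np.repeat([0.0, 1.0], 8)
second = np.repeat([0.2, 0.8], 8) + 0.02 * rng.standard_normal(16)
W = bilateral_similarity(first)
filtered = W @ second
```

Because the intensity term is computed from the first image, pixels on opposite sides of an edge get near-zero mutual weight, so filtering the second image smooths noise without blurring the shared structure, which is exactly the redundancy SPIR exploits.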

  6. ChemMaps: Towards an approach for visualizing the chemical space based on adaptive satellite compounds

    PubMed Central

    Naveja, J. Jesús; Medina-Franco, José L.

    2017-01-01

    We present a novel approach called ChemMaps for visualizing chemical space based on the similarity matrix of compound datasets generated with molecular fingerprints’ similarity. The method uses a ‘satellites’ approach, where satellites are, in principle, molecules whose similarity to the rest of the molecules in the database provides sufficient information for generating a visualization of the chemical space. Such an approach could help make chemical space visualizations more efficient. We hereby describe a proof-of-principle application of the method to various databases that have different diversity measures. Unsurprisingly, we found the method works better with databases that have low 2D diversity. 3D diversity played a secondary role, although it seems to be more relevant as 2D diversity increases. For less diverse datasets, taking as few as 25% satellites seems to be sufficient for a fair depiction of the chemical space. We propose to iteratively increase the satellites number by a factor of 5% relative to the whole database, and stop when the new and the prior chemical space correlate highly. This Research Note represents a first exploratory step, prior to the full application of this method for several datasets. PMID:28794856

  7. ChemMaps: Towards an approach for visualizing the chemical space based on adaptive satellite compounds.

    PubMed

    Naveja, J Jesús; Medina-Franco, José L

    2017-01-01

    We present a novel approach called ChemMaps for visualizing chemical space based on the similarity matrix of compound datasets generated with molecular fingerprints' similarity. The method uses a 'satellites' approach, where satellites are, in principle, molecules whose similarity to the rest of the molecules in the database provides sufficient information for generating a visualization of the chemical space. Such an approach could help make chemical space visualizations more efficient. We hereby describe a proof-of-principle application of the method to various databases that have different diversity measures. Unsurprisingly, we found the method works better with databases that have low 2D diversity. 3D diversity played a secondary role, although it seems to be more relevant as 2D diversity increases. For less diverse datasets, taking as few as 25% satellites seems to be sufficient for a fair depiction of the chemical space. We propose to iteratively increase the satellites number by a factor of 5% relative to the whole database, and stop when the new and the prior chemical space correlate highly. This Research Note represents a first exploratory step, prior to the full application of this method for several datasets.
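The satellite-growth stopping rule described above (add about 5% more satellites per round and stop when the new and prior chemical-space representations correlate highly) might be sketched as below; the random 2D "descriptor space", the correlation measure, and the threshold are all illustrative assumptions rather than the authors' procedure.

```python
import numpy as np

rng = np.random.default_rng(7)

def pairdist_vector(coords):
    """Upper-triangle pairwise distances of an embedding, flattened."""
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    return d[np.triu_indices_from(d, k=1)]

def grow_satellites(dist, step=0.05, corr_stop=0.99):
    """Represent each compound by its distances to the current satellites;
    add 5% of the database per round and stop once the representation of
    the chemical space stabilizes between consecutive rounds."""
    n = dist.shape[0]
    order = rng.permutation(n)                 # random satellite order
    k = max(1, int(step * n))
    prev = pairdist_vector(dist[:, order[:k]])
    while k < n:
        k = min(n, k + max(1, int(step * n)))
        cur = pairdist_vector(dist[:, order[:k]])
        r = np.corrcoef(prev, cur)[0, 1]       # old vs new representation
        prev = cur
        if r > corr_stop:
            break
    return order[:k]

# toy database: 60 compounds in a 2D descriptor space
pts = rng.standard_normal((60, 2))
dist = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
sats = grow_satellites(dist)
```

The practical payoff is that once the representation has stabilized, distances to the remaining compounds add little information, so the visualization can be built from the satellite subset alone.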

  8. MR-Consistent Simultaneous Reconstruction of Attenuation and Activity for Non-TOF PET/MR

    NASA Astrophysics Data System (ADS)

    Heußer, Thorsten; Rank, Christopher M.; Freitag, Martin T.; Dimitrakopoulou-Strauss, Antonia; Schlemmer, Heinz-Peter; Beyer, Thomas; Kachelrieß, Marc

    2016-10-01

    Attenuation correction (AC) is required for accurate quantification of the reconstructed activity distribution in positron emission tomography (PET). For simultaneous PET/magnetic resonance (MR), however, AC is challenging, since the MR images do not provide direct information on the attenuating properties of the underlying tissue. Standard MR-based AC does not account for the presence of bone and thus leads to an underestimation of the activity distribution. To improve quantification for non-time-of-flight PET/MR, we propose an algorithm which simultaneously reconstructs activity and attenuation distribution from the PET emission data using available MR images as anatomical prior information. The MR information is used to derive voxel-dependent expectations on the attenuation coefficients. The expectations are modeled using Gaussian-like probability functions. An iterative reconstruction scheme incorporating the prior information on the attenuation coefficients is used to update attenuation and activity distribution in an alternating manner. We tested and evaluated the proposed algorithm for simulated 3D PET data of the head and the pelvis region. Activity deviations were below 5% in soft tissue and lesions compared to the ground truth whereas standard MR-based AC resulted in activity underestimation values of up to 12%.

  9. New methods of testing nonlinear hypothesis using iterative NLLS estimator

    NASA Astrophysics Data System (ADS)

    Mahaboob, B.; Venkateswarlu, B.; Mokeshrayalu, G.; Balasiddamuni, P.

    2017-11-01

    This research paper discusses a method of testing nonlinear hypotheses using the iterative Nonlinear Least Squares (NLLS) estimator. Takeshi Amemiya [1] explained this method. However, in the present research paper, a modified Wald test statistic due to Engle, Robert [6] is proposed to test nonlinear hypotheses using the iterative NLLS estimator. An alternative method for testing nonlinear hypotheses using the iterative NLLS estimator based on nonlinear studentized residuals has been proposed. In this research article, an innovative method of testing nonlinear hypotheses using the iterative restricted NLLS estimator is derived. Pesaran and Deaton [10] explained methods of testing nonlinear hypotheses. This paper uses the asymptotic properties of the nonlinear least squares estimator proposed by Jenrich [8]. The main purpose of this paper is to provide innovative methods of testing nonlinear hypotheses using the iterative NLLS estimator, the iterative NLLS estimator based on nonlinear studentized residuals, and the iterative restricted NLLS estimator. Eakambaram et al. [12] discussed least absolute deviation estimation versus nonlinear regression models with heteroscedastic errors, and also studied the problem of heteroscedasticity with reference to nonlinear regression models with a suitable illustration. William Greene [13] examined the interaction effect in nonlinear models discussed by Ai and Norton [14] and suggested ways to examine the effects that do not involve statistical testing. Peter [15] provided guidelines for identifying composite hypotheses and addressing the probability of false rejection for multiple hypotheses.
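The two ingredients, an iterative NLLS estimator (here obtained by Gauss-Newton) and a Wald statistic for a nonlinear restriction h(theta) = 0, can be sketched on a toy exponential model; the model, data, and restriction below are illustrative assumptions, not the paper's.

```python
import numpy as np

rng = np.random.default_rng(3)

def gauss_newton(f, jac, y, theta0, n_iter=50):
    """Iterative NLLS estimator via Gauss-Newton updates."""
    theta = np.asarray(theta0, float)
    for _ in range(n_iter):
        r = y - f(theta)
        J = jac(theta)
        theta = theta + np.linalg.solve(J.T @ J, J.T @ r)
    return theta

# toy model y = a * exp(b * x) + noise
x = np.linspace(0.0, 1.0, 80)

def f(th):
    return th[0] * np.exp(th[1] * x)

def jac(th):
    return np.column_stack([np.exp(th[1] * x),
                            th[0] * x * np.exp(th[1] * x)])

theta_true = np.array([2.0, 1.5])
y = f(theta_true) + 0.01 * rng.standard_normal(x.size)
th = gauss_newton(f, jac, y, [1.5, 1.2])

# Wald test of the nonlinear hypothesis h(theta) = a * b - 3 = 0 (true here)
res = y - f(th)
s2 = res @ res / (x.size - 2)                  # residual variance estimate
cov = s2 * np.linalg.inv(jac(th).T @ jac(th))  # asymptotic covariance
h = th[0] * th[1] - 3.0
H = np.array([th[1], th[0]])                   # gradient of h at th
W = h ** 2 / (H @ cov @ H)                     # ~ chi-square(1) under H0
```

The delta-method linearization of h around the NLLS estimate is what makes the Wald statistic computable from the same Jacobian already used by the iteration.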

  10. Maintaining Web-based Bibliographies: A Case Study of Iter, the Bibliography of Renaissance Europe.

    ERIC Educational Resources Information Center

    Castell, Tracy

    1997-01-01

    Introduces Iter, a nonprofit research project developed for the World Wide Web and dedicated to increasing access to all published materials pertaining to the Renaissance and, eventually, the Middle Ages. Discusses information management issues related to building and maintaining Iter's first Web-based bibliography, focusing on printed secondary…

  11. WE-AB-303-09: Rapid Projection Computations for On-Board Digital Tomosynthesis in Radiation Therapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Iliopoulos, AS; Sun, X; Pitsianis, N

    2015-06-15

    Purpose: To facilitate fast and accurate iterative volumetric image reconstruction from limited-angle on-board projections. Methods: Intrafraction motion hinders the clinical applicability of modern radiotherapy techniques, such as lung stereotactic body radiation therapy (SBRT). The LIVE system may impact clinical practice by recovering volumetric information via Digital Tomosynthesis (DTS), thus entailing low time and radiation dose for image acquisition during treatment. The DTS is estimated as a deformation of prior CT via iterative registration with on-board images; this shifts the challenge to the computational domain, owing largely to repeated projection computations across iterations. We address this issue by composing efficient digital projection operators from their constituent parts. This allows us to separate the static (projection geometry) and dynamic (volume/image data) parts of projection operations by means of pre-computations, enabling fast on-board processing, while also relaxing constraints on underlying numerical models (e.g. regridding interpolation kernels). Further decoupling the projectors into simpler ones ensures the incurred memory overhead remains low, within the capacity of a single GPU. These operators depend only on the treatment plan and may be reused across iterations and patients. The dynamic processing load is kept to a minimum and maps well to the GPU computational model. Results: We have integrated efficient, pre-computable modules for volumetric ray-casting and FDK-based back-projection with the LIVE processing pipeline. Our results show a 60x acceleration of the DTS computations, compared to the previous version, using a single GPU; presently, reconstruction is attained within a couple of minutes. The present implementation allows for significant flexibility in terms of the numerical and operational projection model; we are investigating the benefit of further optimizations and accurate digital projection sub-kernels. Conclusion: Composable projection operators constitute a versatile research tool which can greatly accelerate iterative registration algorithms and may be conducive to the clinical applicability of LIVE. National Institutes of Health Grant No. R01-CA184173; GPU donation by NVIDIA Corporation.

  12. Charm: Cosmic history agnostic reconstruction method

    NASA Astrophysics Data System (ADS)

    Porqueres, Natalia; Ensslin, Torsten A.

    2017-03-01

    Charm (cosmic history agnostic reconstruction method) reconstructs the cosmic expansion history in the framework of Information Field Theory. The reconstruction is performed via the iterative Wiener filter from an agnostic or from an informative prior. The charm code allows one to test the compatibility of several different data sets with the LambdaCDM model in a non-parametric way.

  13. The Grades That Clinical Teachers Give Students Modifies the Grades They Receive

    ERIC Educational Resources Information Center

    Paget, Michael; Brar, Gurbir; Veale, Pamela; Busche, Kevin; Coderre, Sylvain; Woloschuk, Wayne; McLaughlin, Kevin

    2018-01-01

    Prior studies have shown a correlation between the grades students receive and how they rate their teacher in the classroom. In this study, the authors probe this association on clinical rotations and explore potential mechanisms. All In-Training Evaluation Reports (ITERs) for students on mandatory clerkship rotations from April 1, 2013 to January…

  14. Three-dimensional nanostructure determination from a large diffraction data set recorded using scanning electron nanodiffraction.

    PubMed

    Meng, Yifei; Zuo, Jian-Min

    2016-09-01

    A diffraction-based technique is developed for the determination of three-dimensional nanostructures. The technique employs high-resolution and low-dose scanning electron nanodiffraction (SEND) to acquire three-dimensional diffraction patterns, with the help of a special sample holder for large-angle rotation. Grains are identified in three-dimensional space based on crystal orientation and on reconstructed dark-field images from the recorded diffraction patterns. Application to a nanocrystalline TiN thin film shows that the three-dimensional morphology of columnar TiN grains of tens of nanometres in diameter can be reconstructed using an algebraic iterative algorithm under specified prior conditions, together with their crystallographic orientations. The principles can be extended to multiphase nanocrystalline materials as well. Thus, the tomographic SEND technique provides an effective and adaptive way of determining three-dimensional nanostructures.
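
    An algebraic iterative reconstruction of the kind mentioned can be sketched with a basic Kaczmarz (ART) loop; this is a generic illustration on a synthetic system matrix, not the authors' tomographic SEND code.

```python
import numpy as np

def art(A, b, n_sweeps=200, relax=1.0):
    """Kaczmarz-style algebraic reconstruction: sweep over the rows of A,
    projecting the current estimate onto each measurement hyperplane."""
    x = np.zeros(A.shape[1])
    for _ in range(n_sweeps):
        for i in range(A.shape[0]):
            a_i = A[i]
            x += relax * (b[i] - a_i @ x) / (a_i @ a_i) * a_i
    return x

rng = np.random.default_rng(2)
A = rng.normal(size=(40, 20))        # synthetic projection matrix
x_true = rng.normal(size=20)         # synthetic object
x_rec = art(A, A @ x_true)           # reconstruct from consistent projections
```

    Prior conditions such as known grain support or orientation constraints would enter as additional projection steps inside the sweep.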

  15. Ligand placement based on prior structures: the guided ligand-replacement method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Klei, Herbert E.; Bristol-Myers Squibb, Princeton, NJ 08543-4000; Moriarty, Nigel W., E-mail: nwmoriarty@lbl.gov

    2014-01-01

    A new module, Guided Ligand Replacement (GLR), has been developed in Phenix to increase the ease and success rate of ligand placement when prior protein-ligand complexes are available. The process of iterative structure-based drug design involves the X-ray crystal structure determination of upwards of 100 ligands with the same general scaffold (i.e. chemotype) complexed with very similar, if not identical, protein targets. In conjunction with insights from computational models and assays, this collection of crystal structures is analyzed to improve potency, to achieve better selectivity and to reduce liabilities such as absorption, distribution, metabolism, excretion and toxicology. Current methods for modeling ligands into electron-density maps typically do not utilize information on how similar ligands bound in related structures. Even if the electron density is of sufficient quality and resolution to allow de novo placement, the process can take considerable time as the size, complexity and torsional degrees of freedom of the ligands increase. At the heart of GLR is an algorithm based on graph theory that associates atoms in the target ligand with analogous atoms in the reference ligand. Based on this correspondence, a set of coordinates is generated for the target ligand. GLR is especially useful in two situations: (i) modeling a series of large, flexible, complicated or macrocyclic ligands in successive structures and (ii) modeling ligands as part of a refinement pipeline that can automatically select a reference structure. Even in those cases for which no reference structure is available, if there are multiple copies of the bound ligand per asymmetric unit GLR offers an efficient way to complete the model after the first ligand has been placed. In all of these applications, GLR leverages prior knowledge from earlier structures to facilitate ligand placement in the current structure.

  16. An iterative method for the Helmholtz equation

    NASA Technical Reports Server (NTRS)

    Bayliss, A.; Goldstein, C. I.; Turkel, E.

    1983-01-01

    An iterative algorithm for the solution of the Helmholtz equation is developed. The algorithm is based on a preconditioned conjugate gradient iteration for the normal equations, with preconditioning based on an SSOR sweep for the discrete Laplacian. Numerical results are presented for a wide variety of problems of physical interest and demonstrate the effectiveness of the algorithm.
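
    The core iteration can be sketched as conjugate gradient applied to the normal equations (CGNR) on a small 1D Helmholtz model problem; for brevity this sketch omits the SSOR preconditioner the paper uses for acceleration.

```python
import numpy as np

def cgnr(A, b, n_iter=200, tol=1e-12):
    """Conjugate gradient on the normal equations A^T A x = A^T b.
    (Unpreconditioned sketch; the paper adds an SSOR sweep for the
    discrete Laplacian as a preconditioner.)"""
    x = np.zeros(A.shape[1])
    r = b - A @ x
    z = A.T @ r
    p = z.copy()
    zz = z @ z
    for _ in range(n_iter):
        Ap = A @ p
        alpha = zz / (Ap @ Ap)
        x += alpha * p
        r -= alpha * Ap
        z = A.T @ r
        zz_new = z @ z
        if np.sqrt(zz_new) < tol:
            break
        p = z + (zz_new / zz) * p
        zz = zz_new
    return x

# 1D Helmholtz model problem u'' + k^2 u = f, Dirichlet ends,
# second-order central differences on n interior points.
n, k = 20, 2.0
h = 1.0 / (n + 1)
A = (np.diag((-2.0 / h**2 + k**2) * np.ones(n))
     + np.diag(np.ones(n - 1) / h**2, 1)
     + np.diag(np.ones(n - 1) / h**2, -1))
x_true = np.sin(np.pi * np.linspace(h, 1 - h, n))
x_sol = cgnr(A, A @ x_true)
```

    Working with the normal equations makes the iteration applicable even though the Helmholtz operator is indefinite, at the cost of squaring the condition number, which is what the preconditioner is there to mitigate.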

  17. Limited angle CT reconstruction by simultaneous spatial and Radon domain regularization based on TV and data-driven tight frame

    NASA Astrophysics Data System (ADS)

    Zhang, Wenkun; Zhang, Hanming; Wang, Linyuan; Cai, Ailong; Li, Lei; Yan, Bin

    2018-02-01

    Limited angle computed tomography (CT) reconstruction is widely performed in medical diagnosis and industrial testing because of the size of objects, engine/armor inspection requirements, and limited scan flexibility. Limited angle reconstruction necessitates the use of optimization-based methods that utilize additional sparse priors. However, most conventional methods solely exploit sparsity priors in the spatial domain. When the CT projection suffers from serious data deficiency or various noises, obtaining reconstructed images of acceptable quality becomes difficult and challenging. To solve this problem, this paper develops an adaptive reconstruction method for the limited angle CT problem. The proposed method uses a simultaneous spatial and Radon domain regularization model based on total variation (TV) and a data-driven tight frame. The data-driven tight frame, derived from wavelet transformation, aims at exploiting sparsity priors of the sinogram in the Radon domain. Unlike existing works that utilize a pre-constructed sparse transformation, the framelets of the data-driven regularization model can be adaptively learned from the latest projection data in the process of iterative reconstruction to provide optimal sparse approximations for a given sinogram. At the same time, an effective alternating direction method is designed to solve the simultaneous spatial and Radon domain regularization model. Experiments on both simulated and real data demonstrate that the proposed algorithm shows better performance in artifact suppression and detail preservation than algorithms using only a spatial domain regularization model. Quantitative evaluations also indicate that the proposed algorithm, with its learning strategy, performs better than dual domain algorithms without a learned regularization model.

  18. SACFIR: SDN-Based Application-Aware Centralized Adaptive Flow Iterative Reconfiguring Routing Protocol for WSNs.

    PubMed

    Aslam, Muhammad; Hu, Xiaopeng; Wang, Fan

    2017-12-13

    Smart reconfiguration of a dynamic networking environment is offered by the central control of Software-Defined Networking (SDN). Centralized SDN-based management architectures are capable of retrieving global topology intelligence and decoupling the forwarding plane from the control plane. Routing protocols developed for conventional Wireless Sensor Networks (WSNs) utilize limited iterative reconfiguration methods to optimize environmental reporting. However, the challenging networking scenarios of WSNs involve a performance overhead due to constant periodic iterative reconfigurations. In this paper, we propose the SDN-based Application-aware Centralized adaptive Flow Iterative Reconfiguring (SACFIR) routing protocol with the centralized SDN iterative solver controller to maintain the load-balancing between flow reconfigurations and flow allocation cost. The proposed SACFIR's routing protocol offers a unique iterative path-selection algorithm, which initially computes suitable clustering based on residual resources at the control layer and then implements application-aware threshold-based multi-hop report transmissions on the forwarding plane. The operation of the SACFIR algorithm is centrally supervised by the SDN controller residing at the Base Station (BS). This paper extends SACFIR to SDN-based Application-aware Main-value Centralized adaptive Flow Iterative Reconfiguring (SAMCFIR) to establish both proactive and reactive reporting. The SAMCFIR transmission phase enables sensor nodes to trigger direct transmissions for main-value reports, while in the case of SACFIR, all reports follow computed routes. Our SDN-enabled proposed models adjust the reconfiguration period according to the traffic burden on sensor nodes, which results in heterogeneity awareness, load-balancing and application-specific reconfigurations of WSNs. 
Extensive experimental simulation-based results show that SACFIR and SAMCFIR yield the maximum scalability, network lifetime and stability period when compared to existing routing protocols.

  19. SACFIR: SDN-Based Application-Aware Centralized Adaptive Flow Iterative Reconfiguring Routing Protocol for WSNs

    PubMed Central

    Hu, Xiaopeng; Wang, Fan

    2017-01-01

    Smart reconfiguration of a dynamic networking environment is offered by the central control of Software-Defined Networking (SDN). Centralized SDN-based management architectures are capable of retrieving global topology intelligence and decoupling the forwarding plane from the control plane. Routing protocols developed for conventional Wireless Sensor Networks (WSNs) utilize limited iterative reconfiguration methods to optimize environmental reporting. However, the challenging networking scenarios of WSNs involve a performance overhead due to constant periodic iterative reconfigurations. In this paper, we propose the SDN-based Application-aware Centralized adaptive Flow Iterative Reconfiguring (SACFIR) routing protocol with the centralized SDN iterative solver controller to maintain the load-balancing between flow reconfigurations and flow allocation cost. The proposed SACFIR’s routing protocol offers a unique iterative path-selection algorithm, which initially computes suitable clustering based on residual resources at the control layer and then implements application-aware threshold-based multi-hop report transmissions on the forwarding plane. The operation of the SACFIR algorithm is centrally supervised by the SDN controller residing at the Base Station (BS). This paper extends SACFIR to SDN-based Application-aware Main-value Centralized adaptive Flow Iterative Reconfiguring (SAMCFIR) to establish both proactive and reactive reporting. The SAMCFIR transmission phase enables sensor nodes to trigger direct transmissions for main-value reports, while in the case of SACFIR, all reports follow computed routes. Our SDN-enabled proposed models adjust the reconfiguration period according to the traffic burden on sensor nodes, which results in heterogeneity awareness, load-balancing and application-specific reconfigurations of WSNs. 
Extensive experimental simulation-based results show that SACFIR and SAMCFIR yield the maximum scalability, network lifetime and stability period when compared to existing routing protocols. PMID:29236031

  20. Personalized reminiscence therapy M-health application for patients living with dementia: Innovating using open source code repository.

    PubMed

    Zhang, Melvyn W B; Ho, Roger C M

    2017-01-01

    Dementia is known to cause marked disability among elderly individuals. Patients living with dementia also experience non-cognitive symptoms, including hallucinations, delusional beliefs, emotional lability, sexualized behaviours and aggression. According to the National Institute of Clinical Excellence (NICE) guidelines, non-pharmacological techniques are typically the first-line option prior to the consideration of adjuvant pharmacological options. Reminiscence and music therapy are thus viable options. Lazar et al. [3] previously performed a systematic review of the use of technology to deliver reminiscence-based therapy to individuals living with dementia and highlighted that technology does have benefits in the delivery of reminiscence therapy. However, to date, there has been a paucity of M-health innovations in this area. In addition, most current innovations are not personalized for each person living with dementia. Prior research has highlighted the utility of open source repositories in bioinformatics studies. The authors explain how they made use of an open source repository in the development of a personalized M-health reminiscence therapy innovation for patients living with dementia. The availability of open source code repositories has changed the way healthcare professionals and developers develop smartphone applications today. Conventionally, a long iterative process is needed in the development of a native application, mainly because of the need for native programming and coding, especially if the application needs interactive features or features that can be personalized. Such repositories enable the rapid and cost-effective development of applications. Moreover, developers are also able to innovate further, as less time is spent in the iterative process.

  1. l0 regularization based on a prior image incorporated non-local means for limited-angle X-ray CT reconstruction.

    PubMed

    Zhang, Lingli; Zeng, Li; Guo, Yumeng

    2018-01-01

    Restricted by the scanning environment in some CT imaging modalities, the acquired projection data are usually incomplete, which may lead to a limited-angle reconstruction problem in which image quality suffers from slope artifacts. The objective of this study is to first investigate the distorted regions of reconstructed images affected by slope artifacts and then present a new iterative reconstruction method to address the limited-angle X-ray CT reconstruction problem. The framework of the new method exploits the structural similarity between the prior image and the reconstructed image to compensate for the distorted edges. Specifically, the new method utilizes l0 regularization and wavelet tight framelets to suppress the slope artifacts and pursue sparsity. The new method comprises four steps: (1) address the data fidelity using SART; (2) compensate for the slope artifacts due to the missing projection data using the prior image and modified nonlocal means (PNLM); (3) utilize l0 regularization to suppress the slope artifacts and pursue the sparsity of the wavelet coefficients of the transformed image by iterative hard thresholding (l0W); and (4) apply an inverse wavelet transform to reconstruct the image. In summary, this method is referred to as "l0W-PNLM". Numerical implementations showed that the presented l0W-PNLM was superior at suppressing the slope artifacts while preserving the edges of features, as compared to commercial and other popular investigative algorithms. When the image to be reconstructed is inconsistent with the prior image, the new method can avoid or minimize distorted edges in the reconstructed images. Quantitative assessments also showed that the new method obtained the highest image quality compared to the existing algorithms. This study demonstrated that the presented l0W-PNLM yields higher image quality due to a number of unique characteristics: (1) it utilizes the structural similarity between the reconstructed image and the prior image to correct the edges distorted by slope artifacts; (2) it adopts wavelet tight frames to obtain the first and higher derivatives in several directions and levels; and (3) it takes advantage of l0 regularization to promote the sparsity of wavelet coefficients, which is effective for inhibiting the slope artifacts. The new method can therefore address the limited-angle CT reconstruction problem effectively and has practical significance.
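
    The l0-promoting iterative hard thresholding used in step (3) can be illustrated in a generic compressed-sensing form (synthetic matrix and keep-top-s thresholding; not the authors' wavelet-domain implementation):

```python
import numpy as np

def iht(A, b, s, n_iter=100):
    """Iterative hard thresholding: a gradient step on ||b - A x||^2 followed
    by keeping only the s largest-magnitude coefficients (an l0 constraint)."""
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        g = x + A.T @ (b - A @ x)
        x = np.zeros_like(g)
        keep = np.argsort(np.abs(g))[-s:]
        x[keep] = g[keep]
    return x

rng = np.random.default_rng(3)
A = rng.normal(size=(40, 100))
A /= np.linalg.norm(A, 2)            # scale so the unit gradient step is stable
x_true = np.zeros(100)
x_true[[7, 33, 80]] = [1.0, -2.0, 1.5]
x_rec = iht(A, A @ x_true, s=3)      # s-sparse estimate from 40 measurements
```

    In the paper, the same hard-thresholding idea is applied to wavelet coefficients of the image rather than to a generic coefficient vector.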

  2. Nested Conjugate Gradient Algorithm with Nested Preconditioning for Non-linear Image Restoration.

    PubMed

    Skariah, Deepak G; Arigovindan, Muthuvel

    2017-06-19

    We develop a novel optimization algorithm, which we call the Nested Non-Linear Conjugate Gradient algorithm (NNCG), for image restoration based on quadratic data fitting and smooth non-quadratic regularization. The algorithm is constructed as a nesting of two conjugate gradient (CG) iterations. The outer iteration is a preconditioned non-linear CG algorithm; the preconditioning is performed by the inner CG iteration, which is linear. The inner CG iteration, which performs preconditioning for the outer CG iteration, is itself accelerated by another FFT-based non-iterative preconditioner. We prove that the method converges to a stationary point for both convex and non-convex regularization functionals. We demonstrate experimentally that the proposed method outperforms the well-known majorization-minimization method used for convex regularization, and a non-convex inertial-proximal method for non-convex regularization functionals.

  3. MO-DE-207A-08: Four-Dimensional Cone-Beam CT Iterative Reconstruction with Time-Ordered Chain Graph Model for Non-Periodic Organ Motion and Deformation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nakano, M; Haga, A; Hanaoka, S

    2016-06-15

    Purpose: The purpose of this study is to propose a new concept of four-dimensional (4D) cone-beam CT (CBCT) reconstruction for non-periodic organ motion using the Time-ordered Chain Graph Model (TCGM), and to compare the reconstructed results with the previously proposed methods, total variation-based compressed sensing (TVCS) and prior-image constrained compressed sensing (PICCS). Methods: The CBCT reconstruction method introduced in this study consists of maximum a posteriori (MAP) iterative reconstruction combined with a regularization term derived from the concept of the TCGM, which includes a constraint coming from the images of neighbouring time-phases. The time-ordered image series were concurrently reconstructed in the MAP iterative reconstruction framework. The angular range of projections for each time-phase was 90 degrees for TCGM and PICCS, and 200 degrees for TVCS. Two kinds of projection data, elliptic-cylindrical digital phantom data and two clinical patients’ data, were used for reconstruction. The digital phantom contained an air sphere moving 3 cm along the longitudinal axis, and the temporal resolution of each method was evaluated by measuring the penumbral width of the reconstructed moving air sphere. The clinical feasibility of non-periodic time-ordered 4D CBCT reconstruction was also examined using projection data of prostate cancer patients. Results: The reconstructed digital phantom shows that TCGM yielded the narrowest penumbral width; PICCS and TCGM were 10.6% and 17.4% narrower than TVCS, respectively. This suggests that TCGM has better temporal resolution than the others. Patients’ CBCT projection data were also reconstructed, and all three reconstructed results showed motion of rectal gas and stool. The result of TCGM provided visually clearer and less blurred images. Conclusion: The present study demonstrates that the new concept for 4D CBCT reconstruction, TCGM, combined with the MAP iterative reconstruction framework, enables time-ordered image reconstruction with a narrower time-window.
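
    The neighbouring-phase constraint at the heart of the chain graph idea can be sketched with a toy quadratic MAP problem (invented 1D "images" and an identity forward model, not the authors' CBCT formulation): each time-phase is fit to its own data while being pulled toward its chain neighbours.

```python
import numpy as np

def chain_map(d, beta, n_iter=300, step=0.15):
    """Minimize sum_t ||x_t - d_t||^2 + beta * sum_t ||x_t - x_{t-1}||^2
    by gradient descent; the second term is the neighbouring time-phase
    constraint of a chain graph prior."""
    x = d.copy()
    for _ in range(n_iter):
        g = 2.0 * (x - d)                       # data-fidelity gradient
        g[1:] += 2.0 * beta * (x[1:] - x[:-1])  # pull toward previous phase
        g[:-1] += 2.0 * beta * (x[:-1] - x[1:]) # pull toward next phase
        x -= step * g
    return x

rng = np.random.default_rng(6)
truth = np.cumsum(rng.normal(scale=0.1, size=(5, 8)), axis=0)  # slowly varying phases
d = truth + rng.normal(scale=0.5, size=truth.shape)            # noisy per-phase data
x_map = chain_map(d, beta=1.0)
```

    Jointly estimating all phases is what lets each 90-degree projection window borrow information from its neighbours.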

  4. Design of the ITER Electron Cyclotron Heating and Current Drive Waveguide Transmission Line

    NASA Astrophysics Data System (ADS)

    Bigelow, T. S.; Rasmussen, D. A.; Shapiro, M. A.; Sirigiri, J. R.; Temkin, R. J.; Grunloh, H.; Koliner, J.

    2007-11-01

    The ITER ECH transmission line system is designed to deliver the power from twenty-four 1 MW 170 GHz gyrotrons and three 1 MW 127.5 GHz gyrotrons to the equatorial and upper launchers. Work on the performance requirements, initial design of components and layout between the gyrotrons and the launchers is underway. Similar 63.5 mm ID corrugated waveguide systems have been built and installed on several fusion experiments; however, none have operated at the high frequency and long pulse lengths required for ITER. Prototype components are being tested at low power to estimate ohmic and mode conversion losses. In order to develop and qualify the ITER components prior to procurement of the full set of 24 transmission lines, a 170 GHz high power test of a complete prototype transmission line is planned. Testing of the transmission line at 1-2 MW can be performed with a modest power (~0.5 MW) tube in a low loss (10-20%) resonant ring configuration. A 140 GHz long pulse, 400 kW gyrotron will be used in the initial tests and a 170 GHz gyrotron will be used when it becomes available. Oak Ridge National Laboratory, managed by UT-Battelle, LLC, for the U.S. Dept. of Energy under contract DE-AC05-00OR22725.

  5. Low-rank Atlas Image Analyses in the Presence of Pathologies

    PubMed Central

    Liu, Xiaoxiao; Niethammer, Marc; Kwitt, Roland; Singh, Nikhil; McCormick, Matt; Aylward, Stephen

    2015-01-01

    We present a common framework, for registering images to an atlas and for forming an unbiased atlas, that tolerates the presence of pathologies such as tumors and traumatic brain injury lesions. This common framework is particularly useful when a sufficient number of protocol-matched scans from healthy subjects cannot be easily acquired for atlas formation and when the pathologies in a patient cause large appearance changes. Our framework combines a low-rank-plus-sparse image decomposition technique with an iterative, diffeomorphic, group-wise image registration method. At each iteration of image registration, the decomposition technique estimates a “healthy” version of each image as its low-rank component and estimates the pathologies in each image as its sparse component. The healthy version of each image is used for the next iteration of image registration. The low-rank and sparse estimates are refined as the image registrations iteratively improve. When that framework is applied to image-to-atlas registration, the low-rank image is registered to a pre-defined atlas, to establish correspondence that is independent of the pathologies in the sparse component of each image. Ultimately, image-to-atlas registrations can be used to define spatial priors for tissue segmentation and to map information across subjects. When that framework is applied to unbiased atlas formation, at each iteration, the average of the low-rank images from the patients is used as the atlas image for the next iteration, until convergence. Since each iteration’s atlas is comprised of low-rank components, it provides a population-consistent, pathology-free appearance. Evaluations of the proposed methodology are presented using synthetic data as well as simulated and clinical tumor MRI images from the brain tumor segmentation (BRATS) challenge from MICCAI 2012. PMID:26111390
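
    The low-rank-plus-sparse decomposition at the core of this framework can be illustrated with a simple alternating heuristic (truncated SVD for the "healthy" low-rank part, hard thresholding for the sparse "pathology" part); this is a simplified sketch, not the exact solver used in the paper.

```python
import numpy as np

def low_rank_plus_sparse(D, rank, lam, n_iter=25):
    """Alternating heuristic for D ~ L + S: L is the best rank-r approximation
    of D - S (truncated SVD); S keeps only large-magnitude residuals of D - L
    (hard threshold at lam)."""
    S = np.zeros_like(D)
    L = np.zeros_like(D)
    for _ in range(n_iter):
        U, sig, Vt = np.linalg.svd(D - S, full_matrices=False)
        L = (U[:, :rank] * sig[:rank]) @ Vt[:rank]
        resid = D - L
        S = np.where(np.abs(resid) > lam, resid, 0.0)
    return L, S

rng = np.random.default_rng(4)
L_true = np.outer(rng.normal(size=30), rng.normal(size=20))  # "healthy" low-rank part
S_true = np.zeros((30, 20))
S_true[rng.random((30, 20)) < 0.05] = 5.0                    # sparse "pathology"
L_hat, S_hat = low_rank_plus_sparse(L_true + S_true, rank=1, lam=1.0)
```

    In the registration framework, the analogue of `L_hat` (the pathology-free estimate) is what gets registered to the atlas at each iteration.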

  6. Sequentially Executed Model Evaluation Framework

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    2015-10-20

    Provides a message passing framework between generic input, model and output drivers, and specifies an API for developing such drivers. Also provides batch and real-time controllers which step the model and I/O through the time domain (or other discrete domain), and sample I/O drivers. This is a library framework and does not, itself, solve any problems or execute any modeling. The SeMe framework aids in the development of models which operate on sequential information, such as time series, where evaluation is based on prior results combined with new data for the current iteration. It has applications in quality monitoring, and was developed as part of the CANARY-EDS software, where real-time water quality data are analyzed for anomalies.
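
    The driver/controller pattern described can be sketched as follows; all class and method names here are invented for illustration, not the actual SeMe/CANARY-EDS interfaces.

```python
# Hypothetical driver API sketch: a batch controller steps an input driver,
# a model, and an output driver through a discrete time domain. The model's
# evaluation combines prior results with the new sample each iteration.

class ListInput:
    def __init__(self, data):
        self.data = data
    def read(self, step):
        return self.data[step]

class RunningMeanModel:
    """Sequential model: the new estimate depends on the prior estimate."""
    def __init__(self):
        self.count, self.mean = 0, 0.0
    def step(self, x):
        self.count += 1
        self.mean += (x - self.mean) / self.count
        return self.mean

class ListOutput:
    def __init__(self):
        self.results = []
    def write(self, y):
        self.results.append(y)

def run_batch(inp, model, out, n_steps):
    """Batch controller: step model and I/O through the time domain."""
    for t in range(n_steps):
        out.write(model.step(inp.read(t)))

inp, model, out = ListInput([2.0, 4.0, 6.0]), RunningMeanModel(), ListOutput()
run_batch(inp, model, out, 3)
# out.results == [2.0, 3.0, 4.0]
```

    A real-time controller would differ only in where `read` gets its samples; the driver API stays the same, which is the point of the framework.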

  7. A low-complexity 2-point step size gradient projection method with selective function evaluations for smoothed total variation based CBCT reconstructions

    NASA Astrophysics Data System (ADS)

    Song, Bongyong; Park, Justin C.; Song, William Y.

    2014-11-01

    The Barzilai-Borwein (BB) 2-point step size gradient method is receiving attention for accelerating Total Variation (TV) based CBCT reconstructions. In order to become truly viable for clinical applications, however, its convergence property needs to be properly addressed. We propose a novel fast converging gradient projection BB method that requires ‘at most one function evaluation’ in each iterative step. This Selective Function Evaluation method, referred to as GPBB-SFE in this paper, exhibits the desired convergence property when it is combined with a ‘smoothed TV’ or any other differentiable prior. This way, the proposed GPBB-SFE algorithm offers fast and guaranteed convergence to the desired 3DCBCT image with minimal computational complexity. We first applied this algorithm to a Shepp-Logan numerical phantom. We then applied it to a CatPhan 600 physical phantom (The Phantom Laboratory, Salem, NY) and a clinically-treated head-and-neck patient, both acquired from the TrueBeam™ system (Varian Medical Systems, Palo Alto, CA). Furthermore, we accelerated the reconstruction by implementing the algorithm on an NVIDIA GTX 480 GPU card. We first compared GPBB-SFE with three recently proposed BB-based CBCT reconstruction methods available in the literature using the Shepp-Logan numerical phantom with 40 projections. It is found that GPBB-SFE shows either faster convergence speed/time or superior convergence property compared to existing BB-based algorithms. With the CatPhan 600 physical phantom, the GPBB-SFE algorithm requires only 3 function evaluations in 30 iterations and reconstructs the standard, 364-projection FDK reconstruction quality image using only 60 projections. We then applied the algorithm to a clinically-treated head-and-neck patient. It was observed that the GPBB-SFE algorithm requires only 18 function evaluations in 30 iterations. Compared with the FDK algorithm with 364 projections, the GPBB-SFE algorithm produces a visibly equivalent quality CBCT image for the head-and-neck patient with only 180 projections, in 131.7 s, further supporting its clinical applicability.
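
    The BB 2-point step size that drives this family of methods is simple to state; the sketch below shows bare BB gradient descent on a toy quadratic, omitting the projection and the selective function-evaluation safeguard that GPBB-SFE adds on top.

```python
import numpy as np

def bb_gradient_descent(grad, x0, n_iter=60):
    """Gradient descent with the Barzilai-Borwein 2-point step size
    alpha_k = (s^T s) / (s^T y), where s = x_k - x_{k-1} and
    y = g_k - g_{k-1}. (Bare BB sketch, no projection or line search.)"""
    x_prev = np.asarray(x0, dtype=float)
    g_prev = grad(x_prev)
    x = x_prev - 1e-4 * g_prev           # small initial step to get s and y
    for _ in range(n_iter):
        g = grad(x)
        s, y = x - x_prev, g - g_prev
        sy = s @ y
        if abs(sy) < 1e-30:              # converged; step would be ill-defined
            break
        alpha = (s @ s) / sy
        x_prev, g_prev = x, g
        x = x - alpha * g
    return x

# Strictly convex quadratic test problem: grad f(x) = A x - b.
A = np.diag([1.0, 5.0, 10.0])
b = np.ones(3)
x_min = bb_gradient_descent(lambda x: A @ x - b, np.zeros(3))
```

    Because the BB step needs only the two most recent iterates and gradients, it adds essentially no cost per iteration, which is why it pairs well with the one-function-evaluation safeguard described in the abstract.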

  8. A low-complexity 2-point step size gradient projection method with selective function evaluations for smoothed total variation based CBCT reconstructions.

    PubMed

    Song, Bongyong; Park, Justin C; Song, William Y

    2014-11-07

    The Barzilai-Borwein (BB) 2-point step size gradient method is receiving attention for accelerating Total Variation (TV) based CBCT reconstructions. In order to become truly viable for clinical applications, however, its convergence property needs to be properly addressed. We propose a novel fast converging gradient projection BB method that requires 'at most one function evaluation' in each iterative step. This Selective Function Evaluation method, referred to as GPBB-SFE in this paper, exhibits the desired convergence property when it is combined with a 'smoothed TV' or any other differentiable prior. This way, the proposed GPBB-SFE algorithm offers fast and guaranteed convergence to the desired 3DCBCT image with minimal computational complexity. We first applied this algorithm to a Shepp-Logan numerical phantom. We then applied it to a CatPhan 600 physical phantom (The Phantom Laboratory, Salem, NY) and a clinically-treated head-and-neck patient, both acquired from the TrueBeam™ system (Varian Medical Systems, Palo Alto, CA). Furthermore, we accelerated the reconstruction by implementing the algorithm on an NVIDIA GTX 480 GPU card. We first compared GPBB-SFE with three recently proposed BB-based CBCT reconstruction methods available in the literature using the Shepp-Logan numerical phantom with 40 projections. It is found that GPBB-SFE shows either faster convergence speed/time or superior convergence property compared to existing BB-based algorithms. With the CatPhan 600 physical phantom, the GPBB-SFE algorithm requires only 3 function evaluations in 30 iterations and reconstructs the standard, 364-projection FDK reconstruction quality image using only 60 projections. We then applied the algorithm to a clinically-treated head-and-neck patient. It was observed that the GPBB-SFE algorithm requires only 18 function evaluations in 30 iterations. Compared with the FDK algorithm with 364 projections, the GPBB-SFE algorithm produces a visibly equivalent quality CBCT image for the head-and-neck patient with only 180 projections, in 131.7 s, further supporting its clinical applicability.

  9. Evidence-based recommendations for bowel cleansing before colonoscopy in children: a report from a national working group.

    PubMed

    Turner, D; Levine, A; Weiss, B; Hirsh, A; Shamir, R; Shaoul, R; Berkowitz, D; Bujanover, Y; Cohen, S; Eshach-Adiv, O; Jamal, Gera; Kori, M; Lerner, A; On, A; Rachman, L; Rosenbach, Y; Shamaly, H; Shteyer, E; Silbermintz, A; Yerushalmi, B

    2010-12-01

    There are no current recommendations for bowel cleansing before colonoscopy in children. The Israeli Society of Pediatric Gastroenterology and Nutrition (ISPGAN) established an iterative working group to formulate evidence-based guidelines for bowel cleansing in children prior to colonoscopy. Data were collected by systematic review of the literature and via a national-based survey of all endoscopy units in Israel. Based on the strength of evidence, the Committee reached consensus on six recommended protocols in children. Guidelines were finalized after an open audit of ISPGAN members. Data on 900 colonoscopies per year were accrued, which represents all annual pediatric colonoscopies performed in Israel. Based on the literature review, the national survey, and the open audit, several age-stratified pediatric cleansing protocols were proposed: two PEG-ELS protocols (polyethylene-glycol with electrolyte solution); Picolax-based protocol (sodium picosulphate with magnesium citrate); sodium phosphate protocol (only in children over the age of 12 years who are at low risk for renal damage); stimulant laxative-based protocol (e. g. bisacodyl); and a PEG 3350-based protocol. A population-based analysis estimated that the acute toxicity rate of oral sodium phosphate is at most 3/7320 colonoscopies (0.041 %). Recommendations on diet and enema use are provided in relation to each proposed protocol. There is no ideal bowel cleansing regimen and, thus, various protocols are in use. We propose several evidence-based protocols to optimize bowel cleansing in children prior to colonoscopy and minimize adverse events. © Georg Thieme Verlag KG Stuttgart · New York.

  10. Automated identification of brain tumors from single MR images based on segmentation with refined patient-specific priors

    PubMed Central

    Sanjuán, Ana; Price, Cathy J.; Mancini, Laura; Josse, Goulven; Grogan, Alice; Yamamoto, Adam K.; Geva, Sharon; Leff, Alex P.; Yousry, Tarek A.; Seghier, Mohamed L.

    2013-01-01

Brain tumors can have different shapes or locations, making their identification very challenging. In functional MRI, it is not unusual that patients have only one anatomical image due to time and financial constraints. Here, we provide a modified automatic lesion identification (ALI) procedure which enables brain tumor identification from single MR images. Our method rests on (A) a modified segmentation-normalization procedure with an explicit “extra prior” for the tumor and (B) an outlier detection procedure for abnormal voxel (i.e., tumor) classification. To minimize tissue misclassification, the segmentation-normalization procedure requires prior information of the tumor location and extent. We therefore propose that ALI is run iteratively so that the output of Step B is used as a patient-specific prior in Step A. We tested this procedure on real T1-weighted images from 18 patients and validated the results against two independent observers' manual tracings. The automated procedure identified the tumors successfully with an excellent agreement with the manual segmentation (area under the ROC curve = 0.97 ± 0.03). The proposed procedure increases the flexibility and robustness of the ALI tool and will be particularly useful for lesion-behavior mapping studies, or when lesion identification and/or spatial normalization are problematic. PMID:24381535
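    The iterate-to-refine idea (Step B's detection fed back as Step A's prior) can be sketched in miniature. This is a hedged 1-D toy: a z-score outlier test stands in for the paper's segmentation-normalization and outlier-detection steps, and the intensities, threshold, and array sizes are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic 1-D "image": normal tissue around 1.0, a tumor around 2.0.
image = rng.normal(1.0, 0.1, 200)
image[80:120] += 1.0                      # tumor region

prior = np.zeros_like(image)              # no patient-specific prior yet

for _ in range(3):
    # Step A (stand-in): tissue model fit on voxels the prior leaves free.
    normal = image[prior < 0.5]
    mu, sigma = normal.mean(), normal.std()
    # Step B: outlier detection -- voxels far from the normal-tissue model.
    tumor_mask = (image - mu) / sigma > 1.5
    # Feed the detection back as the next iteration's patient-specific prior.
    prior = tumor_mask.astype(float)

sensitivity = tumor_mask[80:120].mean()
false_positive_rate = tumor_mask[:80].mean()
```

    With each pass the tissue model is re-fit only on voxels the current prior leaves unflagged, so the detection sharpens, mirroring the Step B → Step A feedback described above.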

  11. Joint image restoration and location in visual navigation system

    NASA Astrophysics Data System (ADS)

    Wu, Yuefeng; Sang, Nong; Lin, Wei; Shao, Yuanjie

    2018-02-01

Image location methods are the key technologies of visual navigation. Most previous image location methods simply assume ideal inputs without taking into account real-world degradations (e.g. low resolution and blur). In view of such degradations, conventional image location methods first perform image restoration and then match the restored image against the reference image. However, when restoration and location are handled separately, a defective restoration output can degrade the localization result. In this paper, we present a joint image restoration and location (JRL) method, which utilizes a sparse representation prior to handle the challenging problem of low-quality image location. The sparse representation prior states that the degraded input image, if correctly restored, will have a good sparse representation in terms of the dictionary constructed from the reference image. By iteratively solving the image restoration in pursuit of the sparsest representation, our method can achieve simultaneous restoration and location. Based on such a sparse representation prior, we demonstrate that the image restoration task and the location task can benefit greatly from each other. Extensive experiments on real scene images with Gaussian blur are carried out and our joint model outperforms the conventional methods of treating the two tasks independently.
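    The sparse representation prior can be illustrated in a toy 1-D setting: a dictionary of reference patches passed through a known blur model, and a degraded observation whose dominant (near 1-sparse) code localizes it. This is a single matching step under a known degradation, not the paper's full iterative JRL scheme; the signal length, patch size, and kernel are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

# Reference "image" (1-D for brevity) and a known blur kernel.
reference = rng.normal(size=256)
patch, blur = 64, np.ones(3) / 3.0

# Dictionary atoms: reference patches passed through the degradation model.
D = np.stack([np.convolve(reference[i:i + patch], blur, mode="same")
              for i in range(len(reference) - patch)]).T
D /= np.linalg.norm(D, axis=0)            # unit-norm atoms

# Observation: the patch at true_pos, blurred and noisy.
true_pos = 100
y = np.convolve(reference[true_pos:true_pos + patch], blur, mode="same")
y += rng.normal(scale=0.05, size=patch)

# Near 1-sparse representation: the dominant atom's index localizes y.
codes = D.T @ y
est_pos = int(np.argmax(np.abs(codes)))
```

    In the full method the degradation is unknown, so restoration and this matching step are alternated until the code is sparsest.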

  12. Computational efficiency improvements for image colorization

    NASA Astrophysics Data System (ADS)

    Yu, Chao; Sharma, Gaurav; Aly, Hussein

    2013-03-01

We propose an efficient algorithm for colorization of greyscale images. As in prior work, colorization is posed as an optimization problem: a user specifies the color for a few scribbles drawn on the greyscale image and the color image is obtained by propagating color information from the scribbles to surrounding regions, while maximizing the local smoothness of colors. In this formulation, colorization is obtained by solving a large sparse linear system, which normally requires substantial computation and memory resources. Our algorithm improves the computational performance through three innovations over prior colorization implementations. First, the linear system is solved iteratively without explicitly constructing the sparse matrix, which significantly reduces the required memory. Second, we formulate each iteration in terms of integral images obtained by dynamic programming, reducing repetitive computation. Third, we use a coarse-to-fine framework, where a lower resolution subsampled image is first colorized and this low resolution color image is upsampled to initialize the colorization process for the fine level. The improvements we develop provide significant speedup and memory savings compared to the conventional approach of solving the linear system directly using off-the-shelf sparse solvers, and allow us to colorize images with typical sizes encountered in realistic applications on typical commodity computing platforms.
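    The first innovation, solving the sparse system iteratively without assembling the matrix, can be sketched with Jacobi-style sweeps on a 1-D scanline. The affinity weights, scribble values, and grey levels below are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

# Greyscale intensities along a 1-D scanline; two colour scribbles.
grey = np.array([0.2, 0.2, 0.25, 0.8, 0.85, 0.8])
color = np.zeros_like(grey)
scribbled = np.array([True, False, False, False, False, True])
color[0], color[5] = 0.1, 0.9             # user-specified chroma values

def weight(a, b):
    # Affinity: pixels with similar grey levels should share colour.
    return np.exp(-((a - b) ** 2) / 0.01)

# Jacobi-style sweeps: each free pixel moves toward the affinity-weighted
# mean of its neighbours; no sparse matrix is ever assembled.
for _ in range(200):
    new = color.copy()
    for i in range(len(grey)):
        if scribbled[i]:
            continue
        idx = [j for j in (i - 1, i + 1) if 0 <= j < len(grey)]
        w = np.array([weight(grey[i], grey[j]) for j in idx])
        new[i] = w @ color[idx] / w.sum()
    color = new
```

    The chroma propagates from each scribble but stops at the grey-level edge between pixels 2 and 3, which is the smoothness behaviour the optimization encodes.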

  13. Iterative Methods for the Non-LTE Transfer of Polarized Radiation: Resonance Line Polarization in One-dimensional Atmospheres

    NASA Astrophysics Data System (ADS)

    Trujillo Bueno, Javier; Manso Sainz, Rafael

    1999-05-01

This paper shows how to generalize to non-LTE polarization transfer some operator splitting methods that were originally developed for solving unpolarized transfer problems. These are the Jacobi-based accelerated Λ-iteration (ALI) method of Olson, Auer, & Buchler and the iterative schemes based on Gauss-Seidel and successive overrelaxation (SOR) iteration of Trujillo Bueno and Fabiani Bendicho. The theoretical framework chosen for the formulation of polarization transfer problems is the quantum electrodynamics (QED) theory of Landi Degl'Innocenti, which specifies the excitation state of the atoms in terms of the irreducible tensor components of the atomic density matrix. This first paper establishes the grounds of our numerical approach to non-LTE polarization transfer by concentrating on the standard case of scattering line polarization in a gas of two-level atoms, including the Hanle effect due to a weak microturbulent and isotropic magnetic field. We begin by demonstrating that the well-known Λ-iteration method leads to the self-consistent solution of this type of problem if one initializes using the "exact" solution corresponding to the unpolarized case. We then show how the above-mentioned splitting methods can be easily derived from this simple Λ-iteration scheme. We show that our SOR method is 10 times faster than the Jacobi-based ALI method, while our implementation of the Gauss-Seidel method is 4 times faster. These iterative schemes lead to the self-consistent solution independently of the chosen initialization. The convergence rate of these iterative methods is very high; they do not require either the construction or the inversion of any matrix, and the computing time per iteration is similar to that of the Λ-iteration method.
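    The relative behaviour of the three splitting schemes can be reproduced on a generic diagonally dominant linear system, a stand-in for the discretized transfer problem rather than the polarized transfer equations themselves; the matrix, tolerance, and overrelaxation factor are illustrative choices.

```python
import numpy as np

# Symmetric, diagonally dominant system A x = b standing in for the
# discretized transfer problem (not the polarized equations themselves).
n = 50
A = 2.05 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
b = np.ones(n)

def solve(method, omega=1.5, tol=1e-8, max_iter=20000):
    x = np.zeros(n)
    D = np.diag(A)
    for k in range(1, max_iter + 1):
        if method == "jacobi":
            x = x + (b - A @ x) / D       # simultaneous update
        else:
            # Gauss-Seidel (factor 1.0) or SOR (factor omega): in-place sweep
            factor = omega if method == "sor" else 1.0
            for i in range(n):
                x[i] += factor * (b[i] - A[i] @ x) / D[i]
        if np.linalg.norm(b - A @ x) < tol:
            return k
    return max_iter

n_jacobi, n_gs, n_sor = solve("jacobi"), solve("gs"), solve("sor")
```

    On this toy system the iteration counts order as SOR < Gauss-Seidel < Jacobi, qualitatively matching the speedups reported above; none of the schemes ever builds or inverts a matrix beyond the diagonal.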

  14. Accelerated iterative beam angle selection in IMRT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bangert, Mark, E-mail: m.bangert@dkfz.de; Unkelbach, Jan

    2016-03-15

Purpose: Iterative methods for beam angle selection (BAS) for intensity-modulated radiation therapy (IMRT) planning sequentially construct a beneficial ensemble of beam directions. In a naïve implementation, the nth beam is selected by adding beam orientations one-by-one from a discrete set of candidates to an existing ensemble of (n − 1) beams. The best beam orientation is identified in a time-consuming process by solving the fluence map optimization (FMO) problem for every candidate beam and selecting the beam that yields the largest improvement to the objective function value. This paper evaluates two alternative methods to accelerate iterative BAS based on surrogates for the FMO objective function value. Methods: We suggest selecting candidate beams not based on the FMO objective function value after convergence but (1) based on the objective function value after five FMO iterations of a gradient based algorithm and (2) based on a projected gradient of the FMO problem in the first iteration. The performance of the objective function surrogates is evaluated based on the resulting objective function values and dose statistics in a treatment planning study comprising three intracranial, three pancreas, and three prostate cases. Furthermore, iterative BAS is evaluated for an application in which a small number of noncoplanar beams complement a set of coplanar beam orientations. This scenario is of practical interest as noncoplanar setups may require additional attention of the treatment personnel for every couch rotation. Results: Iterative BAS relying on objective function surrogates yields similar results compared to naïve BAS with regard to the objective function values and dose statistics. At the same time, early stopping of the FMO and using the projected gradient during the first iteration enable reductions in computation time by approximately one to two orders of magnitude.
With regard to the clinical delivery of noncoplanar IMRT treatments, we were able to show that optimized beam ensembles using only a few noncoplanar beam orientations often approach the plan quality of fully noncoplanar ensembles. Conclusions: We conclude that iterative BAS in combination with objective function surrogates can be a viable option to implement automated BAS at clinically acceptable computation times.
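    The surrogate idea can be sketched on a toy fluence map optimization: greedy beam selection scored either by a converged projected-gradient FMO or by only a few of its iterations. The dose model, problem sizes, and iteration counts below are illustrative assumptions, not the paper's clinical setup.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy FMO: each candidate beam deposits a fixed dose pattern over the
# voxels; nonnegative fluence weights are fit to a target dose.
n_vox, n_cand = 40, 12
D = np.abs(rng.normal(size=(n_cand, n_vox)))     # candidate dose patterns
target = np.abs(rng.normal(size=n_vox))

def fmo(patterns, n_iter):
    # Projected gradient descent on ||w @ P - target||^2 with w >= 0.
    P = np.asarray(patterns)
    w = np.zeros(len(P))
    step = 1.0 / (np.linalg.norm(P, 2) ** 2 + 1e-9)
    for _ in range(n_iter):
        w = np.maximum(w - step * (P @ (w @ P - target)), 0.0)
    return np.sum((w @ P - target) ** 2)

def greedy_bas(n_beams, surrogate_iters):
    chosen = []
    for _ in range(n_beams):
        cands = [c for c in range(n_cand) if c not in chosen]
        scores = [fmo([D[j] for j in chosen] + [D[c]], surrogate_iters)
                  for c in cands]
        chosen.append(cands[int(np.argmin(scores))])
    # Final plan quality: converged FMO on the selected ensemble.
    return chosen, fmo([D[j] for j in chosen], 500)

naive_sel, naive_obj = greedy_bas(4, 500)   # converge FMO per candidate
fast_sel, fast_obj = greedy_bas(4, 5)       # surrogate: five iterations
```

    The surrogate run evaluates each candidate roughly a hundred times more cheaply, which is where the reported one-to-two orders of magnitude come from.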

  15. Accelerated iterative beam angle selection in IMRT.

    PubMed

    Bangert, Mark; Unkelbach, Jan

    2016-03-01

Iterative methods for beam angle selection (BAS) for intensity-modulated radiation therapy (IMRT) planning sequentially construct a beneficial ensemble of beam directions. In a naïve implementation, the nth beam is selected by adding beam orientations one-by-one from a discrete set of candidates to an existing ensemble of (n - 1) beams. The best beam orientation is identified in a time-consuming process by solving the fluence map optimization (FMO) problem for every candidate beam and selecting the beam that yields the largest improvement to the objective function value. This paper evaluates two alternative methods to accelerate iterative BAS based on surrogates for the FMO objective function value. We suggest selecting candidate beams not based on the FMO objective function value after convergence but (1) based on the objective function value after five FMO iterations of a gradient based algorithm and (2) based on a projected gradient of the FMO problem in the first iteration. The performance of the objective function surrogates is evaluated based on the resulting objective function values and dose statistics in a treatment planning study comprising three intracranial, three pancreas, and three prostate cases. Furthermore, iterative BAS is evaluated for an application in which a small number of noncoplanar beams complement a set of coplanar beam orientations. This scenario is of practical interest as noncoplanar setups may require additional attention of the treatment personnel for every couch rotation. Iterative BAS relying on objective function surrogates yields similar results compared to naïve BAS with regard to the objective function values and dose statistics. At the same time, early stopping of the FMO and using the projected gradient during the first iteration enable reductions in computation time by approximately one to two orders of magnitude. 
With regard to the clinical delivery of noncoplanar IMRT treatments, we were able to show that optimized beam ensembles using only a few noncoplanar beam orientations often approach the plan quality of fully noncoplanar ensembles. We conclude that iterative BAS in combination with objective function surrogates can be a viable option to implement automated BAS at clinically acceptable computation times.

  16. SU-F-BRCD-09: Total Variation (TV) Based Fast Convergent Iterative CBCT Reconstruction with GPU Acceleration.

    PubMed

    Xu, Q; Yang, D; Tan, J; Anastasio, M

    2012-06-01

To improve image quality and reduce imaging dose in CBCT for radiation therapy applications and to realize near real-time image reconstruction based on a fast-converging iterative algorithm and acceleration by multi-GPUs. An iterative image reconstruction algorithm that sought to minimize a weighted least-squares cost function with total variation (TV) regularization was employed to mitigate projection data incompleteness and noise. To achieve rapid 3D image reconstruction (< 1 min), a highly optimized multiple-GPU implementation of the algorithm was developed. The convergence rate and reconstruction accuracy were evaluated using a modified 3D Shepp-Logan digital phantom and a Catphan-600 physical phantom. The reconstructed images were compared with the clinical FDK reconstruction results. Digital phantom studies showed that only 15 iterations and 60 iterations are needed to achieve algorithm convergence for 360-view and 60-view cases, respectively. The RMSE was reduced to 10⁻⁴ and 10⁻², respectively, by using 15 iterations for each case. Our algorithm required 5.4 s to complete one iteration for the 60-view case using one Tesla C2075 GPU. The few-view study indicated that our iterative algorithm has great potential to reduce the imaging dose and preserve good image quality. For the physical Catphan studies, the images obtained from the iterative algorithm possessed better spatial resolution and higher SNRs than those obtained by use of a clinical FDK reconstruction algorithm. We have developed a fast-converging iterative algorithm for CBCT image reconstruction. The developed algorithm yielded images with better spatial resolution and higher SNR than those produced by a commercial FDK tool. In addition, from the few-view study, the iterative algorithm has shown great potential for significantly reducing imaging dose. 
We expect that the developed reconstruction approach will facilitate applications including IGART and patient daily CBCT-based treatment localization. © 2012 American Association of Physicists in Medicine.
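    A minimal 1-D sketch of the TV-regularized weighted-least-squares idea: here plain denoising with a smoothed TV penalty stands in for the full CBCT system solved on GPUs, so the signal, regularization weight, and step size are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)

# Piecewise-constant ground truth plus noise stands in for the noisy,
# incomplete projection data that the TV term is meant to handle.
truth = np.concatenate([np.zeros(50), np.ones(50)])
y = truth + rng.normal(scale=0.2, size=100)

lam, eps, step = 0.5, 1e-2, 0.05
x = y.copy()
for _ in range(1000):
    d = np.diff(x)
    flux = d / np.sqrt(d ** 2 + eps)      # gradient of the smoothed TV term
    tv_grad = np.concatenate([[-flux[0]], flux[:-1] - flux[1:], [flux[-1]]])
    x -= step * ((x - y) + lam * tv_grad)  # data fidelity + TV regularization
```

    The TV term suppresses the noise while keeping the step edge sharp, which is the property that lets few-view reconstructions stay artifact-free.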

  17. Three-dimensional nanostructure determination from a large diffraction data set recorded using scanning electron nanodiffraction

    DOE PAGES

Meng, Yifei; Zuo, Jian-Min

    2016-07-04

A diffraction-based technique is developed for the determination of three-dimensional nanostructures. The technique employs high-resolution and low-dose scanning electron nanodiffraction (SEND) to acquire three-dimensional diffraction patterns, with the help of a special sample holder for large-angle rotation. Grains are identified in three-dimensional space based on crystal orientation and on reconstructed dark-field images from the recorded diffraction patterns. Application to a nanocrystalline TiN thin film shows that the three-dimensional morphology of columnar TiN grains of tens of nanometres in diameter can be reconstructed using an algebraic iterative algorithm under specified prior conditions, together with their crystallographic orientations. The principles can be extended to multiphase nanocrystalline materials as well. Furthermore, the tomographic SEND technique provides an effective and adaptive way of determining three-dimensional nanostructures.
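    An algebraic iterative reconstruction of the kind referred to above (the ART/Kaczmarz family) can be sketched on a toy consistent system; the random "ray" matrix is an illustrative stand-in for the SEND projection geometry, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(4)

# Toy tomography: each row of A is a "ray sum" through the unknown x.
n = 30
x_true = rng.normal(size=n)
A = rng.normal(size=(60, n))
b = A @ x_true                             # consistent, noise-free data

# ART / Kaczmarz sweeps: project the estimate onto each ray equation in turn.
x = np.zeros(n)
for _ in range(200):
    for a, bi in zip(A, b):
        x += (bi - a @ x) / (a @ a) * a
```

    Each inner step enforces one measurement exactly; cycling through them converges to the solution, and prior conditions (e.g. nonnegativity or support masks) can be imposed between sweeps.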

  18. Bounded-Angle Iterative Decoding of LDPC Codes

    NASA Technical Reports Server (NTRS)

    Dolinar, Samuel; Andrews, Kenneth; Pollara, Fabrizio; Divsalar, Dariush

    2009-01-01

    Bounded-angle iterative decoding is a modified version of conventional iterative decoding, conceived as a means of reducing undetected-error rates for short low-density parity-check (LDPC) codes. For a given code, bounded-angle iterative decoding can be implemented by means of a simple modification of the decoder algorithm, without redesigning the code. Bounded-angle iterative decoding is based on a representation of received words and code words as vectors in an n-dimensional Euclidean space (where n is an integer).
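    The geometric test at the heart of the scheme is simple to sketch: accept a decoded word only if the angle between the received vector and the codeword vector stays below a bound. The code length, noise level, and threshold below are illustrative assumptions, not the NASA decoder's parameters.

```python
import numpy as np

rng = np.random.default_rng(5)

def angle_between(r, c):
    # Angle (radians) between received vector r and codeword vector c
    # in n-dimensional Euclidean space.
    cos = (r @ c) / (np.linalg.norm(r) * np.linalg.norm(c))
    return np.arccos(np.clip(cos, -1.0, 1.0))

codeword = np.ones(64)                    # all-zeros word as +1 BPSK symbols
received_close = codeword + rng.normal(scale=0.3, size=64)
received_far = -codeword + rng.normal(scale=0.3, size=64)

theta_max = np.pi / 4                     # the decoder's bounding angle
accept_close = angle_between(received_close, codeword) < theta_max
accept_far = angle_between(received_far, codeword) < theta_max
```

    Rejecting convergent decodings whose angle exceeds the bound converts would-be undetected errors into detected ones, which is the stated goal for short LDPC codes.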

  19. Solving ill-posed inverse problems using iterative deep neural networks

    NASA Astrophysics Data System (ADS)

    Adler, Jonas; Öktem, Ozan

    2017-12-01

We propose a partially learned approach for the solution of ill-posed inverse problems with not necessarily linear forward operators. The method builds on ideas from classical regularisation theory and recent advances in deep learning to perform learning while making use of prior information about the inverse problem encoded in the forward operator, noise model and a regularising functional. The method results in a gradient-like iterative scheme, where the ‘gradient’ component is learned using a convolutional network that includes the gradients of the data discrepancy and regulariser as input in each iteration. We present results of such a partially learned gradient scheme on a non-linear tomographic inversion problem with simulated data from both the Shepp-Logan phantom and a head CT. The outcome is compared against filtered backprojection and total variation reconstruction and the proposed method provides a 5.4 dB PSNR improvement over the total variation reconstruction while being significantly faster, giving reconstructions of 512 × 512 pixel images in about 0.4 s using a single graphics processing unit (GPU).
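    The skeleton of such a gradient-like scheme can be sketched as follows; a fixed pair of "pretend-learned" gains stands in for the convolutional network, and the small linear problem is an illustrative assumption (the paper uses a trained network on nonlinear tomography).

```python
import numpy as np

rng = np.random.default_rng(6)

# Toy linear inverse problem y = A x + noise. The "learned" component is a
# stand-in: fixed gains per input channel instead of a trained CNN.
n = 20
A = rng.normal(size=(15, n)) / np.sqrt(15)
x_true = rng.normal(size=n)
y = A @ x_true + rng.normal(scale=0.01, size=15)

theta = np.array([0.2, 0.05])             # pretend-learned step sizes

x = np.zeros(n)
for _ in range(300):
    grad_data = A.T @ (A @ x - y)         # gradient of the data discrepancy
    grad_reg = x                          # gradient of a Tikhonov regulariser
    # Gradient-like update: the network's role is played by a linear combiner.
    x = x - (theta[0] * grad_data + theta[1] * grad_reg)

residual = np.linalg.norm(A @ x - y)
```

    In the actual method the linear combiner is replaced by a CNN that takes both gradients (and the iterate) as input channels, so the update direction itself is learned from data.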

  20. Qualification of a cyanate ester epoxy blend supplied by Japanese industry for the ITER TF coil insulation

    NASA Astrophysics Data System (ADS)

    Prokopec, R.; Humer, K.; Fillunger, H.; Maix, R. K.; Weber, H. W.; Knaster, J.; Savary, F.

    2012-06-01

In recent years, two cyanate ester epoxy blends supplied by European and US industry have been successfully qualified for the ITER TF coil insulation. The results of the qualification of a third CE blend supplied by Industrial Summit Technology (IST, Japan) are presented in this paper. Sets of test samples were fabricated under exactly the same conditions as used before. The reinforcement of the composite consists of wrapped R-glass / polyimide tapes, which are vacuum pressure impregnated with the resin. The mechanical properties of this material were characterized prior to and after reactor irradiation to a fast neutron fluence of 2×10²² m⁻² (E > 0.1 MeV), i.e. twice the ITER design fluence. Static and dynamic tensile as well as static short beam shear tests were carried out at 77 K. In addition, stress-strain relations were recorded to determine the Young's modulus at room temperature and at 77 K. The results are compared in detail with the previously qualified materials from other suppliers.

  1. Regularized iterative integration combined with non-linear diffusion filtering for phase-contrast x-ray computed tomography.

    PubMed

    Burger, Karin; Koehler, Thomas; Chabior, Michael; Allner, Sebastian; Marschner, Mathias; Fehringer, Andreas; Willner, Marian; Pfeiffer, Franz; Noël, Peter

    2014-12-29

Phase-contrast x-ray computed tomography has a high potential to become clinically implemented because of its complementarity to conventional absorption contrast. In this study, we investigate noise-reducing but resolution-preserving analytical reconstruction methods to improve differential phase-contrast imaging. First, we apply the non-linear Perona-Malik filter to phase-contrast data before or after filtered-backprojection reconstruction. Second, the Hilbert kernel is replaced by regularized iterative integration followed by ramp-filtered backprojection, as used for absorption-contrast imaging. Combining the Perona-Malik filter with this integration algorithm successfully reveals relevant sample features, as quantitatively confirmed by significantly increased structural similarity indices and contrast-to-noise ratios. With this concept, phase-contrast imaging can be performed at a considerably lower dose.
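    A 1-D Perona-Malik filter, the edge-preserving diffusion named above, can be sketched with explicit time stepping; the conductivity function, kappa, and step size are illustrative choices rather than the study's tuned parameters.

```python
import numpy as np

def perona_malik(u, n_iter=50, kappa=0.1, dt=0.2):
    # 1-D Perona-Malik diffusion: smooth within regions, preserve edges.
    u = u.astype(float).copy()
    for _ in range(n_iter):
        d = np.diff(u)                       # forward differences
        g = 1.0 / (1.0 + (d / kappa) ** 2)   # edge-stopping conductivity
        flux = g * d
        u[1:-1] += dt * (flux[1:] - flux[:-1])
        u[0] += dt * flux[0]                 # Neumann boundaries
        u[-1] -= dt * flux[-1]
    return u

rng = np.random.default_rng(7)
step_signal = np.concatenate([np.zeros(50), np.ones(50)])
noisy = step_signal + rng.normal(scale=0.05, size=100)
smoothed = perona_malik(noisy)
```

    Small differences (noise) see conductivity near 1 and diffuse away, while the large step edge sees conductivity near 0 and survives, which is why the filter reduces noise without sacrificing resolution.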

  2. Nonlinear Motion Tracking by Deep Learning Architecture

    NASA Astrophysics Data System (ADS)

    Verma, Arnav; Samaiya, Devesh; Gupta, Karunesh K.

    2018-03-01

Object motion tracking is one of the major problems in artificial intelligence, and extensive research is being carried out on tracking people in crowds. This paper presents a unique technique for nonlinear motion tracking in the absence of prior knowledge of the nature of the nonlinear path that the tracked object may follow. We achieve this by first obtaining the centroid of the object and then using the centroid as the current example for a recurrent neural network trained using real-time recurrent learning. We have tweaked the standard algorithm slightly and have accumulated the gradient over a few previous iterations instead of using just the current iteration, as is the norm. We show that for a single object, such a recurrent neural network is highly capable of approximating the nonlinearity of its path.
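    The accumulated-gradient tweak can be sketched with a linear online predictor standing in for the RNN (real-time recurrent learning on an actual recurrent network is more involved); the circular path, window size, and learning rate are illustrative assumptions.

```python
import numpy as np

# Online next-centroid prediction along a nonlinear (circular) path. A
# linear predictor over the last two centroids stands in for the RNN; its
# gradient is averaged over the k most recent steps before each update.
t = np.arange(300) * 0.05
path = np.stack([np.cos(t), np.sin(t)], axis=1)   # object centroids

W = np.zeros((2, 4))                     # maps [c_{t-1}, c_t] -> c_{t+1}
buffer, k, lr = [], 3, 0.05
errors = []
for i in range(2, len(path) - 1):
    feat = np.concatenate([path[i - 1], path[i]])
    err = W @ feat - path[i + 1]
    errors.append(np.linalg.norm(err))
    buffer.append(np.outer(err, feat))   # gradient of 0.5 * ||err||^2
    buffer = buffer[-k:]                 # keep only the last k gradients
    W -= lr * np.mean(buffer, axis=0)    # accumulated-gradient update
```

    Averaging the last k gradients smooths the update against single-frame noise, which is the motivation given for accumulating over a few previous iterations.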

  3. Optimised Iteration in Coupled Monte Carlo - Thermal-Hydraulics Calculations

    NASA Astrophysics Data System (ADS)

    Hoogenboom, J. Eduard; Dufek, Jan

    2014-06-01

This paper describes an optimised iteration scheme for the number of neutron histories and the relaxation factor in successive iterations of coupled Monte Carlo and thermal-hydraulic reactor calculations based on the stochastic iteration method. The scheme results in an increasing number of neutron histories for the Monte Carlo calculation in successive iteration steps and a decreasing relaxation factor for the spatial power distribution to be used as input to the thermal-hydraulics calculation. The theoretical basis is discussed in detail and practical consequences of the scheme are shown, including a nearly linear per-iteration increase in the number of cycles of the Monte Carlo calculation. The scheme is demonstrated for a full PWR-type fuel assembly. Results are shown for the axial power distribution during several iteration steps. A few alternative iteration methods are also tested, and it is concluded that the presented iteration method is near optimal.
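    The stochastic-iteration update itself is compact: relax each new Monte Carlo estimate into the running solution with a decreasing factor while the number of histories grows. The toy four-node "power distribution", growth factor, and 1/k relaxation schedule below are illustrative assumptions, not the paper's tuned scheme.

```python
import numpy as np

rng = np.random.default_rng(8)

true_power = np.array([0.8, 1.2, 1.0, 0.6])   # target power distribution

def monte_carlo_estimate(n_hist):
    # Noisy stand-in for the MC solver: error shrinks like 1/sqrt(n_hist).
    return true_power + rng.normal(scale=1.0 / np.sqrt(n_hist), size=4)

p = np.ones(4)                            # initial power guess
n_hist = 100
for k in range(1, 11):
    alpha = 1.0 / k                       # decreasing relaxation factor
    p = (1 - alpha) * p + alpha * monte_carlo_estimate(n_hist)
    n_hist = int(n_hist * 1.5)            # more histories each iteration
```

    With alpha = 1/k the iterate is the running average of all estimates, so early cheap-but-noisy runs are progressively down-weighted as later, better-converged runs arrive.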

  4. Model-based Iterative Reconstruction: Effect on Patient Radiation Dose and Image Quality in Pediatric Body CT

    PubMed Central

    Dillman, Jonathan R.; Goodsitt, Mitchell M.; Christodoulou, Emmanuel G.; Keshavarzi, Nahid; Strouse, Peter J.

    2014-01-01

    Purpose To retrospectively compare image quality and radiation dose between a reduced-dose computed tomographic (CT) protocol that uses model-based iterative reconstruction (MBIR) and a standard-dose CT protocol that uses 30% adaptive statistical iterative reconstruction (ASIR) with filtered back projection. Materials and Methods Institutional review board approval was obtained. Clinical CT images of the chest, abdomen, and pelvis obtained with a reduced-dose protocol were identified. Images were reconstructed with two algorithms: MBIR and 100% ASIR. All subjects had undergone standard-dose CT within the prior year, and the images were reconstructed with 30% ASIR. Reduced- and standard-dose images were evaluated objectively and subjectively. Reduced-dose images were evaluated for lesion detectability. Spatial resolution was assessed in a phantom. Radiation dose was estimated by using volumetric CT dose index (CTDIvol) and calculated size-specific dose estimates (SSDE). A combination of descriptive statistics, analysis of variance, and t tests was used for statistical analysis. Results In the 25 patients who underwent the reduced-dose protocol, mean decrease in CTDIvol was 46% (range, 19%–65%) and mean decrease in SSDE was 44% (range, 19%–64%). Reduced-dose MBIR images had less noise (P > .004). Spatial resolution was superior for reduced-dose MBIR images. Reduced-dose MBIR images were equivalent to standard-dose images for lungs and soft tissues (P > .05) but were inferior for bones (P = .004). Reduced-dose 100% ASIR images were inferior for soft tissues (P < .002), lungs (P < .001), and bones (P < .001). By using the same reduced-dose acquisition, lesion detectability was better (38% [32 of 84 rated lesions]) or the same (62% [52 of 84 rated lesions]) with MBIR as compared with 100% ASIR. Conclusion CT performed with a reduced-dose protocol and MBIR is feasible in the pediatric population, and it maintains diagnostic quality. 
© RSNA, 2013 Online supplemental material is available for this article. PMID:24091359

  5. GUEST EDITORS' INTRODUCTION: Testing inversion algorithms against experimental data: inhomogeneous targets

    NASA Astrophysics Data System (ADS)

    Belkebir, Kamal; Saillard, Marc

    2005-12-01

This special section deals with the reconstruction of scattering objects from experimental data. A few years ago, inspired by the Ipswich database [1-4], we started to build an experimental database in order to validate and test inversion algorithms against experimental data. In the special section entitled 'Testing inversion algorithms against experimental data' [5], preliminary results were reported through 11 contributions from several research teams. (The experimental data are free for scientific use and can be downloaded from the web site.) The success of this previous section has encouraged us to go further and to design new challenges for the inverse scattering community. Taking into account the remarks formulated by several colleagues, the new data sets deal with inhomogeneous cylindrical targets, and transverse electric (TE) polarized incident fields have also been used. Among the four inhomogeneous targets, three are purely dielectric, while the last one is a 'hybrid' target mixing dielectric and metallic cylinders. Data have been collected in the anechoic chamber of the Centre Commun de Ressources Micro-ondes in Marseille. The experimental setup as well as the layout of the files containing the measurements are presented in the contribution by J-M Geffrin, P Sabouroux and C Eyraud. The antennas did not change from the ones used previously [5], namely wide-band horn antennas. However, improvements have been achieved by refining the mechanical positioning devices. In order to enlarge the scope of applications, measurements in both TE and transverse magnetic (TM) polarizations have been carried out for all targets. Special care has been taken not to move the target under test when switching from TE to TM measurements, ensuring that TE and TM data are available for the same configuration. All data correspond to electric field measurements. In TE polarization the measured component is orthogonal to the axis of invariance. 
Contributions: A Abubakar, P M van den Berg and T M Habashy, Application of the multiplicative regularized contrast source inversion method to TM- and TE-polarized experimental Fresnel data, present results of profile inversions obtained using the contrast source inversion (CSI) method, in which a multiplicative regularization is plugged in. The authors successfully inverted both TM- and TE-polarized fields. Note that this paper is one of only two contributions which address the inversion of TE-polarized data. A Baussard, Inversion of multi-frequency experimental data using an adaptive multiscale approach, reports results of reconstructions using the modified gradient method (MGM). It suggests a coarse-to-fine iterative strategy based on spline pyramids. In this iterative technique, the number of degrees of freedom is reduced, which improves robustness. The introduction, during the iterative process, of finer scales inside areas of interest leads to an accurate representation of the object under test. The efficiency of this technique is shown via comparisons between the results obtained with the standard MGM and those from an adaptive approach. L Crocco, M D'Urso and T Isernia, Testing the contrast source extended Born inversion method against real data: the case of TM data, assume that the main contribution in the domain integral formulation comes from the singularity of Green's function, even though the media involved are lossless. A Fourier Bessel analysis of the incident and scattered measured fields is used to derive a model of the incident field and an estimate of the location and size of the target. The iterative procedure relies on a conjugate gradient method associated with Tikhonov regularization, and the multi-frequency data are dealt with using a frequency-hopping approach. In many cases, it is difficult to reconstruct accurately both real and imaginary parts of the permittivity if no prior information is included. 
M Donelli, D Franceschini, A Massa, M Pastorino and A Zanetti, Multi-resolution iterative inversion of real inhomogeneous targets, adopt a multi-resolution strategy in which, at each step, adaptive discretization of the integral equation is performed over an irregular mesh, with a coarser grid outside the regions of interest and tighter sampling where better resolution is required. Here, this procedure is achieved while keeping the number of unknowns constant. The way such a strategy could be combined with multi-frequency data, edge-preserving regularization, or any technique also devoted to improving resolution, remains to be studied. As done by some other contributors, the model of incident field is chosen to fit the Fourier Bessel expansion of the measured one. A Dubois, K Belkebir and M Saillard, Retrieval of inhomogeneous targets from experimental frequency diversity data, present results of the reconstruction of targets using three different non-regularized techniques. It is suggested to minimize a frequency-weighted cost function rather than a standard one. The different approaches are compared and discussed. C Estatico, G Bozza, A Massa, M Pastorino and A Randazzo, A two-step iterative inexact-Newton method for electromagnetic imaging of dielectric structures from real data, use a scheme of two nested iterative methods, based on the second-order Born approximation, which is nonlinear in terms of contrast but does not involve the total field. At each step of the outer iteration, the problem is linearized and solved iteratively using the Landweber method. Better reconstructions than with the Born approximation are obtained at low numerical cost. 
O Feron, B Duchêne and A Mohammad-Djafari, Microwave imaging of inhomogeneous objects made of a finite number of dielectric and conductive materials from experimental data, adopt a Bayesian framework based on a hidden Markov model, built to take into account, as prior knowledge, that the target is composed of a finite number of homogeneous regions. It has been applied to diffraction tomography and to a rigorous formulation of the inverse problem. The latter can be viewed as a Bayesian adaptation of the contrast source method such that prior information about the contrast can be introduced in the prior law distribution, and it results in estimating the posterior mean instead of minimizing a cost functional. The accuracy of the result is thus closely linked to the prior knowledge of the contrast, making this approach well suited for non-destructive testing. J-M Geffrin, P Sabouroux and C Eyraud, Free space experimental scattering database continuation: experimental set-up and measurement precision, describe the experimental set-up used to acquire the data for the inversions. They report the modifications of the experimental system used previously in order to improve the precision of the measurements. Reliability of the data is demonstrated through comparisons between measurements and computed scattered fields in both fundamental polarizations. In addition, the reader interested in using the database will find the relevant information needed to perform inversions as well as the description of the targets under test. A Litman, Reconstruction by level sets of n-ary scattering obstacles, presents the reconstruction of targets using a level-set representation. It is assumed that the constitutive materials of the obstacles under test are known and the shape is retrieved. Two approaches are reported. In the first one the obstacles of different constitutive materials are represented in a single level set, while in the second approach several level sets are combined. 
The approaches are applied to the experimental data and compared. U Shahid, M Testorf and M A Fiddy, Minimum-phase-based inverse scattering algorithm applied to Institut Fresnel data, suggest a way of extending the use of minimum phase functions to 2D problems. In the kind of inverse problems we are concerned with, it consists of separating the contributions of the field and of the contrast in the so-called contrast source term, through homomorphic filtering. Images of the targets are obtained by combination with diffraction tomography. Both pre-processing and imaging are thus based on the use of Fourier transforms, making the algorithm very fast compared to classical iterative approaches. It is also pointed out that the design of appropriate filters remains an open topic. C Yu, L-P Song and Q H Liu, Inversion of multi-frequency experimental data for imaging complex objects by a DTA CSI method, use the contrast source inversion (CSI) method for the reconstruction of the targets, in which the initial guess is a solution deduced from another iterative technique based on the diagonal tensor approximation (DTA). In so doing, the authors exploit the fast convergence of the DTA method to generate an accurate initial estimate for the CSI method. Note that this paper is one of only two contributions which address the inversion of TE-polarized data. Conclusion In this special section various inverse scattering techniques were used to successfully reconstruct inhomogeneous targets from multi-frequency multi-static measurements. This shows that the database is reliable and can be useful for researchers wanting to test and validate inversion algorithms. From the database, it is also possible to extract subsets to study particular inverse problems, for instance from phaseless data or from `aspect-limited' configurations. 
Our future efforts will be directed towards extending the database in order to explore inversions from transient fields and the full three-dimensional problem. Acknowledgments The authors would like to thank the Inverse Problems board for opening the journal to us, and offer profound thanks to Elaine Longden-Chapman and Kate Hooper for their help in organizing this special section.
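The Landweber inner iterations mentioned for the two-step inexact-Newton scheme amount to a damped gradient descent on the linearized least-squares problem. A minimal sketch on a generic linear system (not the authors' electromagnetic operator; sizes and step size are illustrative):

```python
import numpy as np

def landweber(A, b, n_iter=2000, tau=None):
    """Landweber iteration x_{k+1} = x_k + tau * A^T (b - A x_k).

    Converges for 0 < tau < 2 / ||A||^2; early stopping acts as
    iterative regularization for ill-posed problems."""
    if tau is None:
        tau = 1.0 / np.linalg.norm(A, 2) ** 2  # safe step size
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x = x + tau * A.T @ (b - A @ x)
    return x

# Tiny well-posed example: recover x from noiseless data b = A x.
rng = np.random.default_rng(0)
A = rng.standard_normal((20, 5))
x_true = rng.standard_normal(5)
b = A @ x_true
x_rec = landweber(A, b)
```

In the inexact-Newton setting, `A` would be the Jacobian of the scattering operator at the current contrast estimate, refreshed at each outer iteration.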

  6. Research at ITER towards DEMO: Specific reactor diagnostic studies to be carried out on ITER

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Krasilnikov, A. V.; Kaschuck, Y. A.; Vershkov, V. A.

    2014-08-21

In ITER, diagnostics will operate in the very harsh radiation environment of a fusion reactor. Extensive technology studies are carried out during the development of the ITER diagnostics and of the procedures for their calibration and remote handling. The results of these studies and the practical application of the developed diagnostics on ITER will provide direct input to DEMO diagnostic development. The list of DEMO measurement requirements and diagnostics will be determined during ITER experiments on the basis of ITER plasma physics results and the success of particular diagnostic applications in reactor-like ITER plasma. The majority of ITER diagnostics have already passed the conceptual design phase and represent the state of the art in fusion plasma diagnostic development. A number of DEMO-relevant results of ITER diagnostic studies are discussed, such as the design and prototype manufacture of neutron and γ-ray diagnostics, neutral particle analyzers, optical spectroscopy (including first mirror protection and cleaning techniques), reflectometry, refractometry, and tritium retention measurements.

  7. Research at ITER towards DEMO: Specific reactor diagnostic studies to be carried out on ITER

    NASA Astrophysics Data System (ADS)

    Krasilnikov, A. V.; Kaschuck, Y. A.; Vershkov, V. A.; Petrov, A. A.; Petrov, V. G.; Tugarinov, S. N.

    2014-08-01

In ITER, diagnostics will operate in the very harsh radiation environment of a fusion reactor. Extensive technology studies are carried out during the development of the ITER diagnostics and of the procedures for their calibration and remote handling. The results of these studies and the practical application of the developed diagnostics on ITER will provide direct input to DEMO diagnostic development. The list of DEMO measurement requirements and diagnostics will be determined during ITER experiments on the basis of ITER plasma physics results and the success of particular diagnostic applications in reactor-like ITER plasma. The majority of ITER diagnostics have already passed the conceptual design phase and represent the state of the art in fusion plasma diagnostic development. A number of DEMO-relevant results of ITER diagnostic studies are discussed, such as the design and prototype manufacture of neutron and γ-ray diagnostics, neutral particle analyzers, optical spectroscopy (including first mirror protection and cleaning techniques), reflectometry, refractometry, and tritium retention measurements.

  8. WE-G-207-07: Iterative CT Shading Correction Method with No Prior Information

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wu, P; Mao, T; Niu, T

    2015-06-15

Purpose: Shading artifacts are caused by scatter contamination, beam hardening effects and other non-ideal imaging conditions. Our purpose is to propose a novel and general correction framework to eliminate low-frequency shading artifacts in CT imaging (e.g., cone-beam CT, low-kVp CT) without relying on prior information. Methods: Our method applies the general knowledge that the CT number distribution within one tissue component is relatively uniform. Image segmentation is applied to construct a template image in which each structure is filled with the CT number of that specific tissue. By subtracting the ideal template from the CT image, the residuals from various error sources are generated. Since forward projection is an integration process, the non-continuous low-frequency shading artifacts in the image become continuous low-frequency signals in the line integral. The residual image is thus forward projected and its line integral is filtered using a Savitzky-Golay filter to estimate the error. A compensation map is reconstructed from the error using the standard FDK algorithm and added to the original image to obtain the shading-corrected one. Since the segmentation is not accurate on a shaded CT image, the proposed scheme is iterated until the variation of the residual image is minimized. Results: The proposed method is evaluated on a Catphan600 phantom, a pelvic patient and a CT angiography scan for carotid artery assessment. Compared to the one without correction, our method reduces the overall CT number error from over 200 HU to less than 35 HU and increases the spatial uniformity by a factor of 1.4. Conclusion: We propose an effective iterative algorithm for shading correction in CT imaging. Differently from existing algorithms, our method is assisted only by general anatomical and physical information in CT imaging, without relying on prior knowledge. Our method is thus practical and attractive as a general solution to CT shading correction. 
This work is supported by the National Science Foundation of China (NSFC Grant No. 81201091), National High Technology Research and Development Program of China (863 program, Grant No. 2015AA020917), and Fund Project for Excellent Abroad Scholar Personnel in Science and Technology.
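The segment/template/residual/filter loop described in this record can be illustrated in one dimension. This is a deliberately simplified sketch: the forward-projection and FDK steps of the actual method are collapsed into direct Savitzky-Golay filtering of the image-domain residual, and the tissue values, shading model, and filter settings are all assumed for illustration.

```python
import numpy as np
from scipy.signal import savgol_filter

# Toy 1-D "CT profile": two tissue classes (0 and 1000) plus a smooth
# low-frequency shading artifact.
n = 400
x = np.linspace(0, 1, n)
truth = np.where((x > 0.3) & (x < 0.7), 1000.0, 0.0)
shading = 150.0 * np.sin(2 * np.pi * x)        # low-frequency bias
measured = truth + shading

tissue_values = np.array([0.0, 1000.0])
image = measured.copy()
for _ in range(5):                              # iterate until stable
    # Segmentation: snap each sample to the nearest tissue value.
    template = tissue_values[np.abs(image[:, None] - tissue_values).argmin(axis=1)]
    residual = image - template
    # Keep only the continuous low-frequency part of the residual
    # (stand-in for filtering the forward-projected line integrals).
    error = savgol_filter(residual, window_length=101, polyorder=3)
    image = image - error

rms_before = np.sqrt(np.mean((measured - truth) ** 2))
rms_after = np.sqrt(np.mean((image - truth) ** 2))
```

Each pass removes most of the remaining smooth bias while the piecewise-constant structure survives the segmentation step, which is the essence of the prior-free scheme.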

  9. SU-D-206-04: Iterative CBCT Scatter Shading Correction Without Prior Information

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bai, Y; Wu, P; Mao, T

    2016-06-15

Purpose: To estimate and remove the scatter contamination in the acquired projections of cone-beam CT (CBCT), to suppress the shading artifacts and improve the image quality without prior information. Methods: The uncorrected CBCT images containing shading artifacts are reconstructed by applying the standard FDK algorithm to the CBCT raw projections. The uncorrected image is then segmented to generate an initial template image. To estimate the scatter signal, differences are calculated by subtracting the simulated projections of the template image from the raw projections. Since scatter signals are dominantly continuous and low-frequency in the projection domain, they are estimated by low-pass filtering the difference signals and subtracted from the raw CBCT projections to achieve the scatter correction. Finally, the corrected CBCT image is reconstructed from the corrected projection data. Since an accurate template image is not readily segmented from the uncorrected CBCT image, the proposed scheme is iterated until the produced template no longer changes. Results: The proposed scheme is evaluated on Catphan©600 phantom data and CBCT images acquired from a pelvis patient. The results show that shading artifacts are effectively suppressed by the proposed method. Using multi-detector CT (MDCT) images as reference, quantitative analysis is performed to measure the quality of the corrected images. Compared to images without correction, the proposed method reduces the overall CT number error from over 200 HU to less than 50 HU and increases the spatial uniformity. Conclusion: An iterative strategy that does not rely on prior information is proposed in this work to remove the shading artifacts due to scatter contamination in the projection domain. The method is evaluated in phantom and patient studies and the results show that the image quality is remarkably improved. 
The proposed method is efficient and practical to address the poor image quality issue of CBCT images. This work is supported by the Zhejiang Provincial Natural Science Foundation of China (Grant No. LR16F010001), National High-tech R&D Program for Young Scientists by the Ministry of Science and Technology of China (Grant No. 2015AA020917).
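The core projection-domain step of this scheme, estimating scatter as the low-frequency part of the difference between raw and template projections, can be sketched on a single toy detector row. All signal shapes and the filter width are assumptions, and the template is idealized as exact so the pipeline step stands alone:

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

# Toy 1-D detector row: sharp primary signal plus smooth, slowly
# varying scatter contamination.
u = np.linspace(-1, 1, 512)
primary = np.exp(-8 * u ** 2) + 0.5 * (np.abs(u) < 0.15)  # sharp edges
scatter = 0.3 * np.exp(-2 * u ** 2)                        # low-frequency
raw = primary + scatter

# Simulated projection of the (here: idealized, exact) template image.
template_projection = primary

# The difference contains the scatter; keep only its low-frequency part.
difference = raw - template_projection
scatter_est = gaussian_filter1d(difference, sigma=30)
corrected = raw - scatter_est

err_before = np.max(np.abs(raw - primary))
err_after = np.max(np.abs(corrected - primary))
```

In the full method the template comes from segmenting the current reconstruction, so this filtering step is wrapped in the outer iteration until the template stops changing.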

  10. High-Value, Cost-Conscious Care: Iterative Systems-Based Interventions to Reduce Unnecessary Laboratory Testing.

    PubMed

    Sadowski, Brett W; Lane, Alison B; Wood, Shannon M; Robinson, Sara L; Kim, Chin Hee

    2017-09-01

    Inappropriate testing contributes to soaring healthcare costs within the United States, and teaching hospitals are vulnerable to providing care largely for academic development. Via its "Choosing Wisely" campaign, the American Board of Internal Medicine recommends avoiding repetitive testing for stable inpatients. We designed systems-based interventions to reduce laboratory orders for patients admitted to the wards at an academic facility. We identified the computer-based order entry system as an appropriate target for sustainable intervention. The admission order set had allowed multiple routine tests to be ordered repetitively each day. Our iterative study included interventions on the automated order set and cost displays at order entry. The primary outcome was number of routine tests controlled for inpatient days compared with the preceding year. Secondary outcomes included cost savings, delays in care, and adverse events. Data were collected over a 2-month period following interventions in sequential years and compared with the year prior. The first intervention led to 0.97 fewer laboratory tests per inpatient day (19.4%). The second intervention led to sustained reduction, although by less of a margin than order set modifications alone (15.3%). When extrapolating the results utilizing fees from the Centers for Medicare and Medicaid Services, there was a cost savings of $290,000 over 2 years. Qualitative survey data did not suggest an increase in care delays or near-miss events. This series of interventions targeting unnecessary testing demonstrated a sustained reduction in the number of routine tests ordered, without adverse effects on clinical care. Published by Elsevier Inc.

  11. Thermal release of D2 from new Be-D co-deposits on previously baked co-deposits

    NASA Astrophysics Data System (ADS)

    Baldwin, M. J.; Doerner, R. P.

    2015-12-01

Past experiments and modeling with the TMAP code in [1, 2] indicated that Be-D co-deposited layers are desorbed of retained D less efficiently (time-wise) in a fixed low-temperature bake as the layer grows in thickness. In ITER, beryllium-rich co-deposited layers will grow in thickness over the life of the machine. Compared with the analyses in [1, 2], however, ITER presents a slightly different bake-efficiency problem because of instances of prior tritium recovery/control baking. More relevant to ITER is the thermal release from a new and saturated co-deposit layer in contact with a thickness of previously baked, less saturated co-deposit. Experiments that examine the desorption of saturated co-deposited over-layers in contact with previously baked under-layers are reported, and comparison is made to layers of the same combined thickness. Deposition temperatures of ∼323 K and ∼373 K are explored. It is found that an instance of prior bake leads to a subtle effect on the under-layer. The effect causes the thermal desorption of the new saturated over-layer to deviate from the prediction of the validated TMAP model in [2]. Instead of the D thermal release reflecting the combined thickness and levels of D saturation in the over- and under-layer, the experiment differs in that the desorption is a fractional superposition of (i) desorption from the saturated over-layer with (ii) that of the combined over- and under-layer thickness. The result is not easily modeled by TMAP without the incorporation of a thin BeO inter-layer, which is confirmed experimentally on baked Be-D co-deposits using X-ray micro-analysis.

  12. Using informative priors in facies inversion: The case of C-ISR method

    NASA Astrophysics Data System (ADS)

    Valakas, G.; Modis, K.

    2016-08-01

    Inverse problems involving the characterization of hydraulic properties of groundwater flow systems by conditioning on observations of the state variables are mathematically ill-posed because they have multiple solutions and are sensitive to small changes in the data. In the framework of McMC methods for nonlinear optimization and under an iterative spatial resampling transition kernel, we present an algorithm for narrowing the prior and thus producing improved proposal realizations. To achieve this goal, we cosimulate the facies distribution conditionally to facies observations and normal scores transformed hydrologic response measurements, assuming a linear coregionalization model. The approach works by creating an importance sampling effect that steers the process to selected areas of the prior. The effectiveness of our approach is demonstrated by an example application on a synthetic underdetermined inverse problem in aquifer characterization.

  13. Conceptual design of data acquisition and control system for two Rf driver based negative ion source for fusion R&D

    NASA Astrophysics Data System (ADS)

    Soni, Jigensh; Yadav, R. K.; Patel, A.; Gahlaut, A.; Mistry, H.; Parmar, K. G.; Mahesh, V.; Parmar, D.; Prajapati, B.; Singh, M. J.; Bandyopadhyay, M.; Bansal, G.; Pandya, K.; Chakraborty, A.

    2013-02-01

Twin Source (TS) [1], an inductively coupled, two-RF-driver-based 180 kW, 1 MHz negative ion source experimental setup, has been initiated at IPR, Gandhinagar, under the Indian program, with the objective of understanding the physics and technology of multi-driver coupling. TS also provides an intermediate platform between the operational ROBIN [2] [5] and the eight-RF-driver-based Indian test facility INTF [3]. The Twin Source experiment requires a central system to provide control, data acquisition and communication interfaces, referred to as TS-CODAC, for which a software architecture similar to the ITER CODAC Core System has been chosen for implementation. The Core System is a software suite for ITER plant system manufacturers to use as a template for the development of their interface with CODAC. The ITER approach, in terms of technology, has been adopted for TS-CODAC so as to develop the expertise needed for developing and operating a control system based on the ITER guidelines, as a similar configuration needs to be implemented for the INTF. This cost-effective approach will provide an opportunity to evaluate and learn ITER CODAC technology, documentation, information technology and control system processes on an operational machine. The conceptual design of the TS-CODAC system has been completed. For complete control of the system, approximately 200 control signals and 152 acquisition signals are needed. In TS-CODAC, the required control loop time is in the range of 5 ms to 10 ms; therefore, for the control system, a PLC (Siemens S7-400) has been chosen as suggested in the ITER slow controller catalog. For the data acquisition, the maximum sampling interval required is 100 microseconds, and therefore a National Instruments (NI) PXIe system and NI 6259 digitizer cards have been selected as suggested in the ITER fast controller catalog. 
This paper presents the conceptual design of the TS-CODAC system based on the ITER CODAC Core software and the applicable plant system integration processes.

  14. A Model and Simple Iterative Algorithm for Redundancy Analysis.

    ERIC Educational Resources Information Center

    Fornell, Claes; And Others

    1988-01-01

    This paper shows that redundancy maximization with J. K. Johansson's extension can be accomplished via a simple iterative algorithm based on H. Wold's Partial Least Squares. The model and the iterative algorithm for the least squares approach to redundancy maximization are presented. (TJH)
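A generic two-block Wold-style alternating least-squares update for the first PLS component illustrates the kind of simple iteration referred to here. This is a hedged sketch of standard PLS, not necessarily the exact redundancy-maximizing variant of the paper; all data and names are illustrative.

```python
import numpy as np

def pls_first_component(X, Y, n_iter=100, tol=1e-10):
    """One PLS component via alternating least squares: alternate
    between X-side and Y-side weight estimates until the score
    vectors stabilize."""
    u = Y[:, [0]].copy()                     # start from a Y column
    for _ in range(n_iter):
        w = X.T @ u; w /= np.linalg.norm(w)  # X weights
        t = X @ w                            # X scores
        c = Y.T @ t; c /= np.linalg.norm(c)  # Y weights
        u_new = Y @ c                        # Y scores
        if np.linalg.norm(u_new - u) < tol * np.linalg.norm(u_new):
            break
        u = u_new
    return w, t, c

# Synthetic rank-1 data: X and Y share one latent factor.
rng = np.random.default_rng(1)
t_true = rng.standard_normal((50, 1))
X = t_true @ rng.standard_normal((1, 4)) + 0.01 * rng.standard_normal((50, 4))
Y = t_true @ rng.standard_normal((1, 3)) + 0.01 * rng.standard_normal((50, 3))
w, t, c = pls_first_component(X, Y)
corr = abs(np.corrcoef(t.ravel(), t_true.ravel())[0, 1])
```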

  15. Iterative Nonlocal Total Variation Regularization Method for Image Restoration

    PubMed Central

    Xu, Huanyu; Sun, Quansen; Luo, Nan; Cao, Guo; Xia, Deshen

    2013-01-01

    In this paper, a Bregman iteration based total variation image restoration algorithm is proposed. Based on the Bregman iteration, the algorithm splits the original total variation problem into sub-problems that are easy to solve. Moreover, non-local regularization is introduced into the proposed algorithm, and a method to choose the non-local filter parameter locally and adaptively is proposed. Experiment results show that the proposed algorithms outperform some other regularization methods. PMID:23776560
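The Bregman splitting idea behind this algorithm, introducing an auxiliary variable for the gradient and enforcing the constraint through Bregman updates, can be sketched for plain 1-D TV denoising. This is an illustrative sketch of split Bregman, not the paper's nonlocal 2-D algorithm; `mu` and `lam` are assumed values.

```python
import numpy as np

def shrink(x, gamma):
    # Soft-thresholding: the closed-form solution of the d-subproblem.
    return np.sign(x) * np.maximum(np.abs(x) - gamma, 0.0)

def tv_denoise_1d(f, mu=10.0, lam=5.0, n_iter=100):
    """1-D TV denoising via split Bregman:
    min_u  mu/2 ||u - f||^2 + |D u|,
    with the difference operator split out as d = D u."""
    n = f.size
    D = np.diff(np.eye(n), axis=0)           # forward differences
    A = mu * np.eye(n) + lam * D.T @ D       # u-subproblem matrix
    d = np.zeros(n - 1); b = np.zeros(n - 1)
    u = f.copy()
    for _ in range(n_iter):
        u = np.linalg.solve(A, mu * f + lam * D.T @ (d - b))
        Du = D @ u
        d = shrink(Du + b, 1.0 / lam)        # easy sub-problem 1
        b = b + Du - d                       # Bregman update
    return u

# Noisy step signal: TV denoising should restore the flat pieces.
rng = np.random.default_rng(2)
clean = np.concatenate([np.zeros(50), np.ones(50)])
noisy = clean + 0.1 * rng.standard_normal(100)
denoised = tv_denoise_1d(noisy)
```

The split turns the nonsmooth TV term into a pointwise shrinkage, which is exactly the "sub-problems that are easy to solve" the abstract refers to.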

  16. Iterative channel decoding of FEC-based multiple-description codes.

    PubMed

    Chang, Seok-Ho; Cosman, Pamela C; Milstein, Laurence B

    2012-03-01

    Multiple description coding has been receiving attention as a robust transmission framework for multimedia services. This paper studies the iterative decoding of FEC-based multiple description codes. The proposed decoding algorithms take advantage of the error detection capability of Reed-Solomon (RS) erasure codes. The information of correctly decoded RS codewords is exploited to enhance the error correction capability of the Viterbi algorithm at the next iteration of decoding. In the proposed algorithm, an intradescription interleaver is synergistically combined with the iterative decoder. The interleaver does not affect the performance of noniterative decoding but greatly enhances the performance when the system is iteratively decoded. We also address the optimal allocation of RS parity symbols for unequal error protection. For the optimal allocation in iterative decoding, we derive mathematical equations from which the probability distributions of description erasures can be generated in a simple way. The performance of the algorithm is evaluated over an orthogonal frequency-division multiplexing system. The results show that the performance of the multiple description codes is significantly enhanced.

  17. US NDC Modernization Iteration E2 Prototyping Report: User Interface Framework

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lewis, Jennifer E.; Palmer, Melanie A.; Vickers, James Wallace

    2014-12-01

    During the second iteration of the US NDC Modernization Elaboration phase (E2), the SNL US NDC Modernization project team completed follow-on Rich Client Platform (RCP) exploratory prototyping related to the User Interface Framework (UIF). The team also developed a survey of browser-based User Interface solutions and completed exploratory prototyping for selected solutions. This report presents the results of the browser-based UI survey, summarizes the E2 browser-based UI and RCP prototyping work, and outlines a path forward for the third iteration of the Elaboration phase (E3).

  18. 3D automatic anatomy recognition based on iterative graph-cut-ASM

    NASA Astrophysics Data System (ADS)

    Chen, Xinjian; Udupa, Jayaram K.; Bagci, Ulas; Alavi, Abass; Torigian, Drew A.

    2010-02-01

    We call the computerized assistive process of recognizing, delineating, and quantifying organs and tissue regions in medical imaging, occurring automatically during clinical image interpretation, automatic anatomy recognition (AAR). The AAR system we are developing includes five main parts: model building, object recognition, object delineation, pathology detection, and organ system quantification. In this paper, we focus on the delineation part. For the modeling part, we employ the active shape model (ASM) strategy. For recognition and delineation, we integrate several hybrid strategies of combining purely image based methods with ASM. In this paper, an iterative Graph-Cut ASM (IGCASM) method is proposed for object delineation. An algorithm called GC-ASM was presented at this symposium last year for object delineation in 2D images which attempted to combine synergistically ASM and GC. Here, we extend this method to 3D medical image delineation. The IGCASM method effectively combines the rich statistical shape information embodied in ASM with the globally optimal delineation capability of the GC method. We propose a new GC cost function, which effectively integrates the specific image information with the ASM shape model information. The proposed methods are tested on a clinical abdominal CT data set. The preliminary results show that: (a) it is feasible to explicitly bring prior 3D statistical shape information into the GC framework; (b) the 3D IGCASM delineation method improves on ASM and GC and can provide practical operational time on clinical images.

  19. Progress in extrapolating divertor heat fluxes towards large fusion devices

    NASA Astrophysics Data System (ADS)

    Sieglin, B.; Faitsch, M.; Eich, T.; Herrmann, A.; Suttrop, W.; Collaborators, JET; the MST1 Team; the ASDEX Upgrade Team

    2017-12-01

Heat load to the plasma facing components is one of the major challenges for the development and design of large fusion devices such as ITER. Present-day fusion experiments can operate with heat load mitigation techniques, e.g. sweeping or impurity seeding, but do not generally require them. For large fusion devices, however, heat load mitigation will be essential. This paper presents the current progress in the extrapolation of steady-state and transient heat loads towards large fusion devices. Among transient heat loads, so-called edge localized modes (ELMs) are considered a serious issue for the lifetime of divertor components. In this paper, ITER operation at half field (2.65 T) and half current (7.5 MA) is discussed considering the current material limit for the divertor peak energy fluence of 0.5 MJ/m². Recent studies were successful in describing the observed energy fluence in JET, MAST and ASDEX Upgrade using the pedestal pressure prior to the ELM crash. Extrapolating this towards ITER results in a more benign heat load compared to previous scalings. In the presence of magnetic perturbations, the axisymmetry is broken and a 2D heat flux pattern is induced on the divertor target, leading to a local increase of the heat flux, which is a concern for ITER. It is shown that for a moderate divertor broadening S/λ_q > 0.5 the toroidal peaking of the heat flux disappears.

  20. Region growing using superpixels with learned shape prior

    NASA Astrophysics Data System (ADS)

    Borovec, Jiří; Kybic, Jan; Sugimoto, Akihiro

    2017-11-01

    Region growing is a classical image segmentation method based on hierarchical region aggregation using local similarity rules. Our proposed method differs from classical region growing in three important aspects. First, it works on the level of superpixels instead of pixels, which leads to a substantial speed-up. Second, our method uses learned statistical shape properties that encourage plausible shapes. In particular, we use ray features to describe the object boundary. Third, our method can segment multiple objects and ensure that the segmentations do not overlap. The problem is represented as an energy minimization and is solved either greedily or iteratively using graph cuts. We demonstrate the performance of the proposed method and compare it with alternative approaches on the task of segmenting individual eggs in microscopy images of Drosophila ovaries.
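The greedy variant of superpixel-level region growing, repeatedly absorbing the most similar unassigned neighbor under a local similarity rule, can be sketched on a toy adjacency graph. This bare-bones sketch omits the learned shape prior and multi-object constraints; all values and the tolerance are illustrative.

```python
import numpy as np

def grow_region(values, adjacency, seed, tol=0.2):
    """Greedy region growing on a superpixel graph: add the neighbor
    whose value is closest to the current region mean while it stays
    within `tol` of that mean."""
    region = {seed}
    frontier = set(adjacency[seed])
    mean = values[seed]
    while frontier:
        best = min(frontier, key=lambda s: abs(values[s] - mean))
        if abs(values[best] - mean) > tol:
            break                            # similarity rule fails
        region.add(best)
        mean = np.mean([values[s] for s in region])
        frontier |= set(adjacency[best])
        frontier -= region
    return region

# Six "superpixels" on a line: 0-2 belong to the object (values ~1),
# 3-5 to the background (values ~0).
values = np.array([1.0, 0.95, 1.05, 0.1, 0.0, 0.05])
adjacency = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3, 5], 5: [4]}
region = grow_region(values, adjacency, seed=0)
```

Working on superpixels keeps the graph tiny, which is where the paper's speed-up over pixel-level growing comes from; the energy-minimization formulation replaces this greedy rule with graph cuts.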

  1. Implicit theory manipulations affecting efficacy of a smartphone application aiding speech therapy for Parkinson's patients.

    PubMed

    Nolan, Peter; Hoskins, Sherria; Johnson, Julia; Powell, Vaughan; Chaudhuri, K Ray; Eglin, Roger

    2012-01-01

    A Smartphone speech-therapy application (STA) is being developed, intended for people with Parkinson's disease (PD) with reduced implicit volume cues. The STA offers visual volume feedback, addressing diminished auditory cues. Users are typically older adults, less familiar with new technology. Domain-specific implicit theories (ITs) have been shown to result in mastery or helpless behaviors. Studies manipulating participants' implicit theories of 'technology' (Study One), and 'ability to affect one's voice' (Study Two), were coordinated with iterative STA test-stages, using patients with PD with prior speech-therapist referrals. Across studies, findings suggest it is possible to manipulate patients' ITs related to engaging with a Smartphone STA. This potentially impacts initial application approach and overall effort using a technology-based therapy.

  2. Comparing implementations of penalized weighted least-squares sinogram restoration.

    PubMed

    Forthmann, Peter; Koehler, Thomas; Defrise, Michel; La Riviere, Patrick

    2010-11-01

    A CT scanner measures the energy that is deposited in each channel of a detector array by x rays that have been partially absorbed on their way through the object. The measurement process is complex and quantitative measurements are always and inevitably associated with errors, so CT data must be preprocessed prior to reconstruction. In recent years, the authors have formulated CT sinogram preprocessing as a statistical restoration problem in which the goal is to obtain the best estimate of the line integrals needed for reconstruction from the set of noisy, degraded measurements. The authors have explored both penalized Poisson likelihood (PL) and penalized weighted least-squares (PWLS) objective functions. At low doses, the authors found that the PL approach outperforms PWLS in terms of resolution-noise tradeoffs, but at standard doses they perform similarly. The PWLS objective function, being quadratic, is more amenable to computational acceleration than the PL objective. In this work, the authors develop and compare two different methods for implementing PWLS sinogram restoration with the hope of improving computational performance relative to PL in the standard-dose regime. Sinogram restoration is still significant in the standard-dose regime since it can still outperform standard approaches and it allows for correction of effects that are not usually modeled in standard CT preprocessing. The authors have explored and compared two implementation strategies for PWLS sinogram restoration: (1) A direct matrix-inversion strategy based on the closed-form solution to the PWLS optimization problem and (2) an iterative approach based on the conjugate-gradient algorithm. Obtaining optimal performance from each strategy required modifying the naive off-the-shelf implementations of the algorithms to exploit the particular symmetry and sparseness of the sinogram-restoration problem. 
For the closed-form approach, the authors subdivided the large matrix inversion into smaller coupled problems and exploited sparseness to minimize matrix operations. For the conjugate-gradient approach, the authors exploited sparseness and preconditioned the problem to speed up convergence. All methods produced qualitatively and quantitatively similar images as measured by resolution-variance tradeoffs and difference images. Despite the acceleration strategies, the direct matrix-inversion approach was found to be uncompetitive with iterative approaches, with a computational burden higher by an order of magnitude or more. The iterative conjugate-gradient approach, however, does appear promising, with computation times half that of the authors' previous penalized-likelihood implementation. Iterative conjugate-gradient based PWLS sinogram restoration with careful matrix optimizations has computational advantages over direct matrix PWLS inversion and over penalized-likelihood sinogram restoration and can be considered a good alternative in standard-dose regimes.
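Because the PWLS objective is quadratic, its minimizer solves a symmetric positive-definite linear system, which is where conjugate gradients come in. A minimal sketch on toy normal equations of the PWLS form (the weights, penalty, and `beta` are illustrative, not the authors' operators):

```python
import numpy as np

def conjugate_gradient(A, b, n_iter=200, tol=1e-10):
    """Plain CG for the SPD system A x = b. A only needs to support
    A @ x, so sparse or matrix-free operators work unchanged."""
    x = np.zeros_like(b)
    r = b - A @ x
    p = r.copy()
    rs = r @ r
    for _ in range(n_iter):
        Ap = A @ p
        alpha = rs / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

# Toy PWLS normal equations: (W + beta * R) q = W y, with W a diagonal
# weight matrix and R a first-difference roughness penalty.
n = 64
rng = np.random.default_rng(3)
y = np.sin(np.linspace(0, np.pi, n)) + 0.05 * rng.standard_normal(n)
W = np.diag(rng.uniform(0.5, 2.0, n))          # statistical weights
D = np.diff(np.eye(n), axis=0)
R = D.T @ D                                     # smoothness penalty
beta = 0.1
A = W + beta * R
q = conjugate_gradient(A, W @ y)
residual = np.linalg.norm(A @ q - W @ y)
```

Preconditioning, as the authors describe, amounts to running the same loop on a transformed system with a more clustered spectrum to speed up convergence.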

  3. Dynamic iterative beam hardening correction (DIBHC) in myocardial perfusion imaging using contrast-enhanced computed tomography.

    PubMed

Stenner, Philip; Schmidt, Bernhard; Allmendinger, Thomas; Flohr, Thomas; Kachelrieß, Marc

    2010-06-01

In cardiac perfusion examinations with computed tomography (CT), large concentrations of iodine in the ventricle and in the descending aorta cause beam hardening artifacts that can lead to incorrect perfusion parameters. The aim of this study is to reduce these artifacts by performing an iterative correction that accounts for the three materials soft tissue, bone, and iodine. Beam hardening corrections are either implemented as simple precorrections, which cannot account for higher-order beam hardening effects, or as iterative approaches that are based on segmenting the original image into material distribution images. Conventional segmentation algorithms fail to clearly distinguish between iodine and bone. Our new algorithm, DIBHC, calculates the time-dependent iodine distribution by analyzing the voxel changes of a cardiac perfusion examination (typically N ≈ 15 electrocardiogram-correlated scans distributed over a total scan time of up to T ≈ 30 s). These voxel dynamics are due to changes in contrast agent concentration. This prior information makes it possible to precisely distinguish between bone and iodine and is key to DIBHC, where each iteration consists of a multimaterial (soft tissue, bone, iodine) polychromatic forward projection, a raw data comparison and a filtered backprojection. Simulations with a semi-anthropomorphic dynamic phantom and clinical scans using a dual source CT scanner with 2 x 128 slices, a tube voltage of 100 kV, a tube current-time product of 180 mAs, and a rotation time of 0.28 s have been carried out. The uncorrected images suffer from beam hardening artifacts that appear as dark bands connecting large concentrations of iodine in the ventricle, aorta, and bony structures. The CT values of the affected tissue are usually underestimated by roughly 20 HU, although deviations of up to 61 HU have been observed. For a quantitative evaluation, circular regions of interest have been analyzed. 
After application of DIBHC the mean values obtained deviate by only 1 HU for the simulations and the corrected values show an increase of up to 61 HU for the measurements. One iteration of DIBHC greatly reduces the beam hardening artifacts induced by the contrast agent dynamics (and those due to bone) now allowing for an improved assessment of contrast agent uptake in the myocardium which is essential for determining myocardial perfusion.

  4. Compensation for the phase-type spatial periodic modulation of the near-field beam at 1053 nm

    NASA Astrophysics Data System (ADS)

    Gao, Yaru; Liu, Dean; Yang, Aihua; Tang, Ruyu; Zhu, Jianqiang

    2017-10-01

A phase-only spatial light modulator (SLM) is used to generate and compensate for the spatial periodic modulation (SPM) of the near-field beam in the near infrared at a wavelength of 1053 nm, using an improved iterative weight-based method. The transmission characteristics of the incident beam are modified by the SLM to shape the spatial intensity of the output beam. The propagation and reverse propagation of the light in free space are the two key processes in the iteration. The underlying theory is the beam angular spectrum transfer formula (ASTF) together with the principle of the iterative weight-based method. We have made two improvements to the originally proposed iterative weight-based method: we select the appropriate parameter by choosing the minimum value of the output beam contrast degree, and we use the MATLAB built-in angle function to acquire the corresponding phase of the light wave function. The phase required to compensate for the intensity distribution of the incident SPM beam is obtained iteratively by this algorithm, which decreases the magnitude of the SPM of the intensity on the observation plane. The experimental results show that the phase-type SPM of the near-field beam is subject to a certain restriction. We have also analyzed some factors that make the results imperfect. The experimental results verify the possible applicability of this iterative weight-based method to compensating for the SPM of the near-field beam.
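The forward/backward free-space propagation at the core of such a method can be illustrated with a short sketch. This is not the authors' implementation: it is a minimal angular-spectrum propagator plus a Gerchberg-Saxton-style iteration between the incident and observation planes (the paper's weight-based amplitude update and parameter selection are not reproduced), with grid size, pixel pitch, and propagation distance chosen purely for illustration.

```python
import numpy as np

def angular_spectrum_propagate(field, wavelength, dx, z):
    """Propagate a complex field over distance z with the angular spectrum method."""
    n = field.shape[0]
    fx = np.fft.fftfreq(n, d=dx)                      # spatial frequencies (cycles/m)
    FX, FY = np.meshgrid(fx, fx)
    arg = 1.0 - (wavelength * FX) ** 2 - (wavelength * FY) ** 2
    kz = 2 * np.pi / wavelength * np.sqrt(np.maximum(arg, 0.0))
    H = np.exp(1j * kz * z)                           # unit-modulus transfer function
    return np.fft.ifft2(np.fft.fft2(field) * H)

def iterative_phase_compensation(incident_amp, target_amp, wavelength, dx, z, n_iter=50):
    """Forward/backward iteration that returns a phase map for the SLM."""
    field = incident_amp.astype(complex)
    for _ in range(n_iter):
        out = angular_spectrum_propagate(field, wavelength, dx, z)
        out = target_amp * np.exp(1j * np.angle(out))               # impose target intensity
        back = angular_spectrum_propagate(out, wavelength, dx, -z)  # reverse propagation
        field = incident_amp * np.exp(1j * np.angle(back))          # keep incident amplitude
    return np.angle(field)
```

In practice the returned phase map would be written onto the SLM; the weight-based variant described in the abstract additionally reweights the target amplitude between iterations.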

  5. Overview of International Thermonuclear Experimental Reactor (ITER) engineering design activities*

    NASA Astrophysics Data System (ADS)

    Shimomura, Y.

    1994-05-01

The International Thermonuclear Experimental Reactor (ITER) [International Thermonuclear Experimental Reactor (ITER) (International Atomic Energy Agency, Vienna, 1988), ITER Documentation Series, No. 1] project is a multiphased project, presently proceeding under the auspices of the International Atomic Energy Agency according to the terms of a four-party agreement among the European Atomic Energy Community (EC), the Government of Japan (JA), the Government of the Russian Federation (RF), and the Government of the United States (US), ``the Parties.'' The ITER project is based on the tokamak, a Russian invention that has since been brought to a high level of development in all major fusion programs in the world. The objective of ITER is to demonstrate the scientific and technological feasibility of fusion energy for peaceful purposes. The ITER design is being developed by the Joint Central Team, with support from the Parties' four Home Teams. An overview of ITER design activities is presented.

  6. Optimal tracking control for a class of nonlinear discrete-time systems with time delays based on heuristic dynamic programming.

    PubMed

    Zhang, Huaguang; Song, Ruizhuo; Wei, Qinglai; Zhang, Tieyan

    2011-12-01

In this paper, a novel heuristic dynamic programming (HDP) iteration algorithm is proposed to solve the optimal tracking control problem for a class of nonlinear discrete-time systems with time delays. The algorithm comprises state updating, control policy iteration, and performance index iteration. To obtain the optimal states, the states themselves are also updated, with a "backward iteration" applied to the state update. Two neural networks are used to approximate the performance index function and to compute the optimal control policy, facilitating the implementation of the HDP iteration algorithm. Finally, two examples demonstrate the effectiveness of the proposed algorithm.

  7. Language Evolution by Iterated Learning with Bayesian Agents

    ERIC Educational Resources Information Center

    Griffiths, Thomas L.; Kalish, Michael L.

    2007-01-01

    Languages are transmitted from person to person and generation to generation via a process of iterated learning: people learn a language from other people who once learned that language themselves. We analyze the consequences of iterated learning for learning algorithms based on the principles of Bayesian inference, assuming that learners compute…
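A central result of this line of analysis is that when each learner samples a hypothesis from its Bayesian posterior, the chain of learners converges to the prior distribution over languages. The toy simulation below, which is mine and not the authors', demonstrates this for two competing hypotheses with a simple noisy transmission channel; the noise level and chain length are illustrative.

```python
import random

def iterated_learning(prior, noise=0.05, generations=20000, seed=0):
    """Chain of Bayesian samplers over two hypotheses; each teacher passes on
    one noisy utterance, each learner samples a hypothesis from its posterior."""
    rng = random.Random(seed)
    counts = [0, 0]
    h = 0
    for _ in range(generations):
        d = h if rng.random() > noise else 1 - h             # noisy production
        like = [(1 - noise) if d == k else noise for k in (0, 1)]
        post = [like[k] * prior[k] for k in (0, 1)]          # Bayes' rule (unnormalized)
        z = post[0] + post[1]
        h = 0 if rng.random() < post[0] / z else 1           # posterior sampling
        counts[h] += 1
    return [c / generations for c in counts]
```

Running the chain long enough, the empirical frequency of each hypothesis approaches its prior probability, independent of the starting language.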

  8. Design concept of a cryogenic distillation column cascade for an ITER-scale fusion reactor

    NASA Astrophysics Data System (ADS)

    Yamanishi, Toshihiko; Enoeda, Mikio; Okuno, Kenji

    1994-07-01

A column cascade has been proposed for the fuel cycle of an ITER-scale fusion reactor. The proposed cascade consists of three columns and has several notable features: for each column, either the top or the bottom product takes priority over the other; side streams are not withdrawn as products or as feeds of downstream columns; and there is no recycle stream between the columns. In addition, the product purity of the cascade can be maintained against changes in the flow rates and compositions of the feed streams simply by adjusting the top and bottom flow rates. A control system has been designed for each column in the cascade. A key component in the priority product stream was selected, and an analysis method for this key component was proposed. The designed control system does not introduce instability as long as the concentration of the key component is measured with negligible time lag; the measurement time lag considerably affects the stability of the control system. A significant conclusion from the simulations in this work is that the permissible measurement time is about 0.5 h for stable control. Hence, an analysis system based on gas chromatography is adequate for controlling the columns.

  9. Real-time restoration of white-light confocal microscope optical sections

    PubMed Central

    Balasubramanian, Madhusudhanan; Iyengar, S. Sitharama; Beuerman, Roger W.; Reynaud, Juan; Wolenski, Peter

    2009-01-01

Confocal microscopes (CM) are routinely used for building 3-D images of microscopic structures. Nonideal imaging conditions in a white-light CM introduce additive noise and blur. The optical section images need to be restored prior to quantitative analysis. We present an adaptive noise filtering technique using the Karhunen–Loève expansion (KLE) by the method of snapshots, together with a ringing metric to quantify the ringing artifacts introduced in images restored at various iterations of the iterative Lucy–Richardson deconvolution algorithm. The KLE provides a set of basis functions that comprise the optimal linear basis for an ensemble of empirical observations. We show that most of the noise in the scene can be removed by reconstructing the images using the KLE basis vector with the largest eigenvalue. The prefiltering scheme presented is faster and does not require prior knowledge about the image noise. Optical sections processed using the KLE prefilter can be restored using a simple inverse restoration algorithm; thus, the methodology is suitable for real-time image restoration applications. The KLE image prefilter outperforms the temporal-average prefilter in restoring CM optical sections. The ringing metric developed uses simple binary morphological operations to quantify the ringing artifacts and agrees with visual observation of ringing artifacts in the restored images. PMID:20186290
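The KLE-by-snapshots prefilter can be sketched in a few lines. This is a minimal reading of the idea, assuming an ensemble of repeated noisy acquisitions of the same optical section and keeping only the basis vector with the largest eigenvalue, as the abstract describes; the array shapes and the final reconstruction-by-averaging step are my assumptions, not the authors' code.

```python
import numpy as np

def kle_prefilter(snapshots):
    """Denoise a stack of repeated noisy acquisitions by keeping only the
    KLE mode with the largest eigenvalue (method of snapshots)."""
    m = snapshots.shape[0]
    X = snapshots.reshape(m, -1)          # one flattened snapshot per row
    C = X @ X.T / m                       # m x m snapshot correlation matrix
    w, V = np.linalg.eigh(C)              # eigenpairs, eigenvalues ascending
    phi = V[:, -1] @ X                    # dominant spatial mode
    phi /= np.linalg.norm(phi)
    coeffs = X @ phi                      # projection of each snapshot onto the mode
    recon = coeffs[:, None] * phi[None, :]
    return recon.mean(axis=0).reshape(snapshots.shape[1:])
```

The method of snapshots keeps the eigenproblem at the (small) size of the ensemble rather than the (large) number of pixels, which is what makes the prefilter cheap enough for real-time use.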

  10. Macromolecular Crystallization in Microfluidics for the International Space Station

    NASA Technical Reports Server (NTRS)

    Monaco, Lisa A.; Spearing, Scott

    2003-01-01

At NASA's Marshall Space Flight Center, the Iterative Biological Crystallization (IBC) project has begun development of scientific hardware for macromolecular crystallization on the International Space Station (ISS). Currently, ISS crystallization research is limited to solution recipes that were prepared on the ground prior to launch. The proposed hardware will conduct solution mixing and dispensing on board the ISS, be fully automated, and have imaging functions via remote commanding from the ground. Utilizing microfluidic technology, IBC will allow for on-orbit iterations. The microfluidic LabChip(R) devices, developed together with Caliper Technologies, will greatly benefit researchers by allowing precise fluid handling of nano- to picoliter-sized volumes. IBC will maximize the science return by utilizing the microfluidic approach and will be a valuable tool for structural biologists investigating medically relevant projects.

  11. An algebraic iterative reconstruction technique for differential X-ray phase-contrast computed tomography.

    PubMed

    Fu, Jian; Schleede, Simone; Tan, Renbo; Chen, Liyuan; Bech, Martin; Achterhold, Klaus; Gifford, Martin; Loewen, Rod; Ruth, Ronald; Pfeiffer, Franz

    2013-09-01

    Iterative reconstruction has a wide spectrum of proven advantages in the field of conventional X-ray absorption-based computed tomography (CT). In this paper, we report on an algebraic iterative reconstruction technique for grating-based differential phase-contrast CT (DPC-CT). Due to the differential nature of DPC-CT projections, a differential operator and a smoothing operator are added to the iterative reconstruction, compared to the one commonly used for absorption-based CT data. This work comprises a numerical study of the algorithm and its experimental verification using a dataset measured at a two-grating interferometer setup. Since the algorithm is easy to implement and allows for the extension to various regularization possibilities, we expect a significant impact of the method for improving future medical and industrial DPC-CT applications. Copyright © 2012. Published by Elsevier GmbH.

  12. Integrating clinicians, knowledge and data: expert-based cooperative analysis in healthcare decision support

    PubMed Central

    2010-01-01

Background Decision support in health systems is a highly difficult task, due to the inherent complexity of the processes and structures involved. Method This paper introduces a new hybrid methodology, Expert-based Cooperative Analysis (EbCA), which incorporates explicit prior expert knowledge in data analysis methods and elicits implicit or tacit expert knowledge (IK) to improve decision support in healthcare systems. EbCA has been applied to two different case studies, showing its usability and versatility: 1) benchmarking of small mental health areas based on technical efficiency estimated by EbCA-Data Envelopment Analysis (EbCA-DEA), and 2) case-mix of schizophrenia based on functional dependency using Clustering Based on Rules (ClBR). In both cases, comparisons with classical procedures using qualitative explicit prior knowledge were made. Bayesian predictive validity measures were used for comparison with expert panel results. Overall agreement was tested by the intraclass correlation coefficient in case 1 and by kappa in both cases. Results EbCA is a new methodology composed of six steps: 1) data collection and preparation; 2) acquisition of "Prior Expert Knowledge" (PEK) and design of the "Prior Knowledge Base" (PKB); 3) PKB-guided analysis; 4) support-interpretation tools to evaluate results and detect inconsistencies (here implicit knowledge, IK, may be elicited); 5) incorporation of elicited IK in the PKB, repeating until a satisfactory solution is reached; and 6) post-processing of results for decision support. EbCA has been useful for incorporating PEK in two different analysis methods (DEA and clustering), applied respectively to assess the technical efficiency of small mental health areas and to the case-mix of schizophrenia based on functional dependency. Differences from the results obtained with classical approaches were mainly related to the IK that could be elicited by using EbCA, and they had major implications for decision making in both cases. 
Discussion This paper presents EbCA and shows the value of complementing classical data analysis with PEK as a means to extract relevant knowledge in complex health domains. One of the major benefits of EbCA is the iterative elicitation of IK. Both explicit and tacit or implicit expert knowledge are critical to guide the scientific analysis of very complex decisional problems such as those found in health systems research. PMID:20920289

  13. Integrating clinicians, knowledge and data: expert-based cooperative analysis in healthcare decision support.

    PubMed

    Gibert, Karina; García-Alonso, Carlos; Salvador-Carulla, Luis

    2010-09-30

Decision support in health systems is a highly difficult task, due to the inherent complexity of the processes and structures involved. This paper introduces a new hybrid methodology, Expert-based Cooperative Analysis (EbCA), which incorporates explicit prior expert knowledge in data analysis methods and elicits implicit or tacit expert knowledge (IK) to improve decision support in healthcare systems. EbCA has been applied to two different case studies, showing its usability and versatility: 1) benchmarking of small mental health areas based on technical efficiency estimated by EbCA-Data Envelopment Analysis (EbCA-DEA), and 2) case-mix of schizophrenia based on functional dependency using Clustering Based on Rules (ClBR). In both cases, comparisons with classical procedures using qualitative explicit prior knowledge were made. Bayesian predictive validity measures were used for comparison with expert panel results. Overall agreement was tested by the intraclass correlation coefficient in case 1 and by kappa in both cases. EbCA is a new methodology composed of six steps: 1) data collection and preparation; 2) acquisition of "Prior Expert Knowledge" (PEK) and design of the "Prior Knowledge Base" (PKB); 3) PKB-guided analysis; 4) support-interpretation tools to evaluate results and detect inconsistencies (here implicit knowledge, IK, may be elicited); 5) incorporation of elicited IK in the PKB, repeating until a satisfactory solution is reached; and 6) post-processing of results for decision support. EbCA has been useful for incorporating PEK in two different analysis methods (DEA and clustering), applied respectively to assess the technical efficiency of small mental health areas and to the case-mix of schizophrenia based on functional dependency. Differences from the results obtained with classical approaches were mainly related to the IK that could be elicited by using EbCA, and they had major implications for decision making in both cases. 
This paper presents EbCA and shows the value of complementing classical data analysis with PEK as a means to extract relevant knowledge in complex health domains. One of the major benefits of EbCA is the iterative elicitation of IK. Both explicit and tacit or implicit expert knowledge are critical to guide the scientific analysis of very complex decisional problems such as those found in health systems research.

  14. Improving Access to Care for Warfighters: Virtual Worlds Technology to Enhance Primary Care Training in Post-Traumatic Stress and Motivational Interviewing

    DTIC Science & Technology

    2017-10-01

    chronic mental and physical health problems. Therefore, the project aims to: (1) iteratively design a new web-based PTS and Motivational Interviewing...result in missed opportunities to intervene to prevent chronic mental and physical health problems. The project aims are to: (1) iteratively design a new...intervene to prevent chronic mental and physical health problems. We propose to: (1) Iteratively design a new web-based PTS and Motivational

  15. 3D exemplar-based random walks for tooth segmentation from cone-beam computed tomography images

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pei, Yuru, E-mail: peiyuru@cis.pku.edu.cn; Ai, Xin

Purpose: Tooth segmentation is an essential step in acquiring patient-specific dental geometries from cone-beam computed tomography (CBCT) images. Tooth segmentation from CBCT images is still a challenging task considering the comparatively low image quality caused by the limited radiation dose, as well as structural ambiguities from intercuspation and nearby alveolar bones. The goal of this paper is to present and discuss the latest accomplishments in semisupervised tooth segmentation with adaptive 3D shape constraints. Methods: The authors propose a 3D exemplar-based random walk method of tooth segmentation from CBCT images. The proposed method integrates semisupervised label propagation and regularization by 3D exemplar registration. To begin with, the pure random walk method is used to obtain an initial segmentation of the teeth, which tends to be erroneous because of the structural ambiguity of CBCT images. Then, as an iterative refinement, the authors conduct regularization using 3D exemplar registration, as well as label propagation by random walks with soft constraints, to improve the tooth segmentation. In the first stage of the iteration, 3D exemplars with well-defined topologies are adapted to fit the tooth contours, which are obtained from the random walks based segmentation. The soft constraints on voxel labeling are defined by shape-based foreground dentine probability acquired by the exemplar registration, as well as the appearance-based probability from a support vector machine (SVM) classifier. In the second stage, the labels of the volume-of-interest (VOI) are updated by the random walks with soft constraints. The two stages are optimized iteratively. Instead of the one-shot label propagation in the VOI, an iterative refinement process can achieve a reliable tooth segmentation by virtue of exemplar-based random walks with adaptive soft constraints. 
Results: The proposed method was applied for tooth segmentation of twenty clinically captured CBCT images. Three metrics, including the Dice similarity coefficient (DSC), the Jaccard similarity coefficient (JSC), and the mean surface deviation (MSD), were used to quantitatively analyze the segmentation of anterior teeth including incisors and canines, premolars, and molars. The segmentation of the anterior teeth achieved a DSC up to 98%, a JSC of 97%, and an MSD of 0.11 mm compared with manual segmentation. For the premolars, the average values of DSC, JSC, and MSD were 98%, 96%, and 0.12 mm, respectively. The proposed method yielded a DSC of 95%, a JSC of 89%, and an MSD of 0.26 mm for molars. Aside from the interactive definition of label priors by the user, automatic tooth segmentation can be achieved in an average of 1.18 min. Conclusions: The proposed technique enables an efficient and reliable tooth segmentation from CBCT images. This study makes it clinically practical to segment teeth from CBCT images, thus facilitating pre- and interoperative uses of dental morphologies in maxillofacial and orthodontic treatments.

  16. 3D exemplar-based random walks for tooth segmentation from cone-beam computed tomography images.

    PubMed

    Pei, Yuru; Ai, Xingsheng; Zha, Hongbin; Xu, Tianmin; Ma, Gengyu

    2016-09-01

Tooth segmentation is an essential step in acquiring patient-specific dental geometries from cone-beam computed tomography (CBCT) images. Tooth segmentation from CBCT images is still a challenging task considering the comparatively low image quality caused by the limited radiation dose, as well as structural ambiguities from intercuspation and nearby alveolar bones. The goal of this paper is to present and discuss the latest accomplishments in semisupervised tooth segmentation with adaptive 3D shape constraints. The authors propose a 3D exemplar-based random walk method of tooth segmentation from CBCT images. The proposed method integrates semisupervised label propagation and regularization by 3D exemplar registration. To begin with, the pure random walk method is used to obtain an initial segmentation of the teeth, which tends to be erroneous because of the structural ambiguity of CBCT images. Then, as an iterative refinement, the authors conduct regularization using 3D exemplar registration, as well as label propagation by random walks with soft constraints, to improve the tooth segmentation. In the first stage of the iteration, 3D exemplars with well-defined topologies are adapted to fit the tooth contours, which are obtained from the random walks based segmentation. The soft constraints on voxel labeling are defined by shape-based foreground dentine probability acquired by the exemplar registration, as well as the appearance-based probability from a support vector machine (SVM) classifier. In the second stage, the labels of the volume-of-interest (VOI) are updated by the random walks with soft constraints. The two stages are optimized iteratively. Instead of the one-shot label propagation in the VOI, an iterative refinement process can achieve a reliable tooth segmentation by virtue of exemplar-based random walks with adaptive soft constraints. The proposed method was applied for tooth segmentation of twenty clinically captured CBCT images. 
Three metrics, including the Dice similarity coefficient (DSC), the Jaccard similarity coefficient (JSC), and the mean surface deviation (MSD), were used to quantitatively analyze the segmentation of anterior teeth including incisors and canines, premolars, and molars. The segmentation of the anterior teeth achieved a DSC up to 98%, a JSC of 97%, and an MSD of 0.11 mm compared with manual segmentation. For the premolars, the average values of DSC, JSC, and MSD were 98%, 96%, and 0.12 mm, respectively. The proposed method yielded a DSC of 95%, a JSC of 89%, and an MSD of 0.26 mm for molars. Aside from the interactive definition of label priors by the user, automatic tooth segmentation can be achieved in an average of 1.18 min. The proposed technique enables an efficient and reliable tooth segmentation from CBCT images. This study makes it clinically practical to segment teeth from CBCT images, thus facilitating pre- and interoperative uses of dental morphologies in maxillofacial and orthodontic treatments.

  17. Differential Characteristics Based Iterative Multiuser Detection for Wireless Sensor Networks

    PubMed Central

    Chen, Xiaoguang; Jiang, Xu; Wu, Zhilu; Zhuang, Shufeng

    2017-01-01

High throughput, low latency, and reliable communication has always been a hot topic for wireless sensor networks (WSNs) in various applications. Multiuser detection is widely used to suppress the adverse effect of multiple access interference in WSNs. In this paper, a novel multiuser detection method based on differential characteristics is proposed to suppress multiple access interference. The proposed iterative receive method consists of three stages. First, a differential characteristics function is derived from the optimal multiuser detection decision function; then, on the basis of the differential characteristics, a preliminary threshold detection is used to find potentially erroneous received bits; after that, an error-bit corrector is employed to correct the wrong bits. To further lower the bit error ratio (BER), the differential characteristics calculation, threshold detection, and error-bit correction described above are executed iteratively. Simulation results show that after only a few iterations the proposed multiuser detection method achieves satisfactory BER performance, and that its BER and near-far resistance are much better than those of traditional suboptimal multiuser detection methods. Furthermore, the proposed iterative multiuser detection method also supports a large system capacity. PMID:28212328

  18. Fast non-interferometric iterative phase retrieval for holographic data storage.

    PubMed

    Lin, Xiao; Huang, Yong; Shimura, Tsutomu; Fujimura, Ryushi; Tanaka, Yoshito; Endo, Masao; Nishimoto, Hajimu; Liu, Jinpeng; Li, Yang; Liu, Ying; Tan, Xiaodi

    2017-12-11

Fast non-interferometric phase retrieval is an important technique for phase-encoded holographic data storage and other phase-based applications due to its easy implementation, simple system setup, and robust noise tolerance. Here we present an iterative non-interferometric phase retrieval method for 4-level phase-encoded holographic data storage based on an iterative Fourier transform algorithm and a known portion of the encoded data, which increases the storage code rate to twice that of an amplitude-based method. Only a single image at the Fourier plane of the beam is captured for the iterative reconstruction. Since the beam intensity at the Fourier plane is more concentrated than in the reconstructed beam itself, the required diffraction efficiency of the recording medium is reduced, which significantly improves the usable dynamic range of the recording medium. The phase retrieval requires only 10 iterations to achieve a phase data error rate below 5%, which is demonstrated experimentally by recording and reconstructing a test image. We believe our method will further advance the holographic data storage technique in the era of big data.
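A hedged sketch of the iterative Fourier-transform retrieval loop described here: a single Fourier-plane intensity image is the measurement, the object is constrained to be phase-only on a 4-level alphabet, and a known portion of the encoded data is re-imposed on each pass. The function name, quantization rule, and loop structure are illustrative assumptions, not the authors' code.

```python
import numpy as np

def retrieve_phase(fourier_intensity, known_mask, known_phase, levels=4, n_iter=10):
    """Iterative Fourier-transform retrieval of a phase-only object from a
    single Fourier-plane intensity, re-imposing a known data portion each pass."""
    amp = np.sqrt(fourier_intensity)
    step = 2 * np.pi / levels
    field = np.ones_like(amp, dtype=complex)
    for _ in range(n_iter):
        F = np.fft.fft2(field)
        F = amp * np.exp(1j * np.angle(F))        # enforce measured Fourier magnitude
        phase = np.angle(np.fft.ifft2(F))
        q = np.round(phase / step) * step         # snap to the 4-level phase alphabet
        q[known_mask] = known_phase[known_mask]   # re-impose known data portion
        field = np.exp(1j * q)                    # object is phase-only
    return q
```

The known-data constraint is what anchors the iteration: without it, a phase-only object recovered from a single Fourier magnitude would be ambiguous up to trivial transformations.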

  19. Final Report on ITER Task Agreement 81-10

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brad J. Merrill

An International Thermonuclear Experimental Reactor (ITER) Implementing Task Agreement (ITA) on Magnet Safety was established between the ITER International Organization (IO) and the Idaho National Laboratory (INL) Fusion Safety Program (FSP) during calendar year 2004. The objectives of this ITA were to add new capabilities to the MAGARC code and to use this updated version of MAGARC to analyze unmitigated superconductor quench events for both poloidal field (PF) and toroidal field (TF) coils of the ITER design. This report documents the completion of the work scope for this ITA. Based on the results obtained for this ITA, an unmitigated quench event in a larger ITER PF coil does not appear to be as severe an accident as in an ITER TF coil.

  20. The child's perspective as a guiding principle: Young children as co-designers in the design of an interactive application meant to facilitate participation in healthcare situations.

    PubMed

    Stålberg, Anna; Sandberg, Anette; Söderbäck, Maja; Larsson, Thomas

    2016-06-01

During the last decade, interactive technology has entered mainstream society. Its many users include children, even the youngest ones, who use the technology in different situations for both fun and learning. When designing technology for children, it is crucial to involve children in the process in order to arrive at an age-appropriate end product. In this study we describe the iterative process by which an interactive application was developed. The application is intended to facilitate the participation of young children, three to five years old, in healthcare situations. We also describe the specific contributions of the children, who tested the prototypes in a preschool, a primary healthcare clinic, and an outpatient unit at a hospital during the development process. The iterative phases enabled the children to be involved at different stages of the process and to evaluate modifications and improvements made after each prior iteration. The children contributed their own perspectives (the child's perspective) on the usability, content, and graphic design of the application, substantially improving the software and resulting in an age-appropriate product. Copyright © 2016 Elsevier Inc. All rights reserved.

  1. Shattered Pellet Injection Simulations With NIMROD

    NASA Astrophysics Data System (ADS)

    Kim, Charlson; Parks, Paul; Lao, Lang; Lehnan, Michael; Loarte, Alberto; Izzo, Valerie; Nimrod Team

    2017-10-01

    Shattered Pellet Injection (SPI) will be the Disruption Mitigation System in ITER. SPI propels a cryo-pellet of high-Z and deuterium into a sharp bend of the flight tube, shattering the pellet into a plume of shards. These shards are injected into the plasma to quench it and mitigate forces and heat loads that may damage in-vessel components. We use NIMROD to perform 3-D nonlinear MHD simulations of SPI to study the thermal quench. This work builds upon prior Massive Gas Injection (MGI) studies by Izzo. A Particle-in-Cell (PIC) model is implemented to mimic the shards, providing a discrete moving source. Observations indicate that the quench proceeds in two phases. Initially, the outer plasma is shed via interchange-like instabilities while preserving the core temperature. This results in a steep gradient and triggers the second phase, an external kink-like event that collapses the core. We report on the radiation efficiency and toroidal peaking as well as fueling efficiency and other metrics that assess the efficacy of the SPI system. Work supported by GA ITER Contract ITER/CT/14/4300001108 and US DOE DE-FG02-95ER54309.

  2. Elastic-plastic mixed-iterative finite element analysis: Implementation and performance assessment

    NASA Technical Reports Server (NTRS)

    Sutjahjo, Edhi; Chamis, Christos C.

    1993-01-01

An elastic-plastic algorithm based on the Von Mises yield criterion and associative flow rule is implemented in MHOST, a mixed-iterative finite element analysis computer program developed by NASA Lewis Research Center. The performance of the resulting elastic-plastic mixed-iterative analysis is examined through a set of convergence studies. Membrane and bending behaviors of 4-node quadrilateral shell finite elements are tested for elastic-plastic performance. Generally, the membrane results are excellent, indicating that the implementation of the elastic-plastic mixed-iterative analysis is appropriate.
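The Von Mises/associative-flow update at the heart of such an elastic-plastic analysis reduces, in one dimension, to the classical radial return mapping. The sketch below is a generic textbook version with linear isotropic hardening, not code from MHOST; E (Young's modulus), H (hardening modulus), and sigma_y (initial yield stress) are illustrative material parameters.

```python
def radial_return(strain_inc, state, E, H, sigma_y):
    """One-dimensional return mapping with linear isotropic hardening.
    state holds the stress 'sigma' and accumulated plastic strain 'ep'."""
    sigma_trial = state["sigma"] + E * strain_inc       # elastic predictor
    f = abs(sigma_trial) - (sigma_y + H * state["ep"])  # yield function
    if f <= 0.0:
        state["sigma"] = sigma_trial                    # elastic step, no correction
    else:
        dgamma = f / (E + H)                            # plastic multiplier
        sign = 1.0 if sigma_trial > 0 else -1.0
        state["sigma"] = sigma_trial - E * dgamma * sign  # return to yield surface
        state["ep"] += dgamma
    return state
```

After a plastic step the stress sits exactly on the updated yield surface, sigma = sigma_y + H * ep, which is the consistency condition the return mapping enforces.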

  3. Kernel approach to molecular similarity based on iterative graph similarity.

    PubMed

    Rupp, Matthias; Proschak, Ewgenij; Schneider, Gisbert

    2007-01-01

    Similarity measures for molecules are of basic importance in chemical, biological, and pharmaceutical applications. We introduce a molecular similarity measure defined directly on the annotated molecular graph, based on iterative graph similarity and optimal assignments. We give an iterative algorithm for the computation of the proposed molecular similarity measure, prove its convergence and the uniqueness of the solution, and provide an upper bound on the required number of iterations necessary to achieve a desired precision. Empirical evidence for the positive semidefiniteness of certain parametrizations of our function is presented. We evaluated our molecular similarity measure by using it as a kernel in support vector machine classification and regression applied to several pharmaceutical and toxicological data sets, with encouraging results.
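The flavor of an iterative graph similarity can be conveyed with a small sketch: vertex-pair similarities start from label agreement and are repeatedly blended with the average similarity of neighboring pairs until a fixed point is approached. This is a generic contraction-style iteration in the spirit of the abstract, not the authors' kernel; the update rule and the damping parameter alpha are my assumptions.

```python
import numpy as np

def iterative_graph_similarity(A1, A2, labels1, labels2, n_iter=20, alpha=0.5):
    """Vertex-pair similarities between two labeled graphs: start from label
    agreement, then repeatedly blend in the mean similarity of neighbor pairs."""
    base = (np.array(labels1)[:, None] == np.array(labels2)[None, :]).astype(float)
    S = base.copy()
    d1 = np.maximum(A1.sum(axis=1), 1.0)[:, None]   # degrees (guard against isolates)
    d2 = np.maximum(A2.sum(axis=1), 1.0)[None, :]
    for _ in range(n_iter):
        # neighborhood average, damped toward the label-agreement base
        S = (1 - alpha) * base + alpha * (A1 @ S @ A2.T) / (d1 * d2)
    return S
```

Because the update is a contraction for alpha < 1, the iteration converges to a unique fixed point, mirroring the convergence and uniqueness properties discussed in the abstract.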

  4. Unsupervised iterative detection of land mines in highly cluttered environments.

    PubMed

    Batman, Sinan; Goutsias, John

    2003-01-01

    An unsupervised iterative scheme is proposed for land mine detection in heavily cluttered scenes. This scheme is based on iterating hybrid multispectral filters that consist of a decorrelating linear transform coupled with a nonlinear morphological detector. Detections extracted from the first pass are used to improve results in subsequent iterations. The procedure stops after a predetermined number of iterations. The proposed scheme addresses several weaknesses associated with previous adaptations of morphological approaches to land mine detection. Improvement in detection performance, robustness with respect to clutter inhomogeneities, a completely unsupervised operation, and computational efficiency are the main highlights of the method. Experimental results reveal excellent performance.

  5. A Semi-Discrete Landweber-Kaczmarz Method for Cone Beam Tomography and Laminography Exploiting Geometric Prior Information

    NASA Astrophysics Data System (ADS)

    Vogelgesang, Jonas; Schorr, Christian

    2016-12-01

    We present a semi-discrete Landweber-Kaczmarz method for solving linear ill-posed problems and its application to Cone Beam tomography and laminography. Using a basis function-type discretization in the image domain, we derive a semi-discrete model of the underlying scanning system. Based on this model, the proposed method provides an approximate solution of the reconstruction problem, i.e. reconstructing the density function of a given object from its projections, in suitable subspaces equipped with basis function-dependent weights. This approach intuitively allows the incorporation of additional information about the inspected object leading to a more accurate model of the X-rays through the object. Also, physical conditions of the scanning geometry, like flat detectors in computerized tomography as used in non-destructive testing applications as well as non-regular scanning curves e.g. appearing in computed laminography (CL) applications, are directly taken into account during the modeling process. Finally, numerical experiments of a typical CL application in three dimensions are provided to verify the proposed method. The introduction of geometric prior information leads to a significantly increased image quality and superior reconstructions compared to standard iterative methods.
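The cyclic structure of a Landweber-Kaczmarz iteration, sweeping over blocks of projection equations and taking one Landweber step per block, can be sketched for a fully discrete linear system. This toy version omits the semi-discrete basis-function weights and the geometric prior information that are the paper's contribution; block partitioning and the relaxation parameter omega are illustrative.

```python
import numpy as np

def landweber_kaczmarz(A_blocks, b_blocks, x0, omega=1.0, sweeps=500):
    """Cycle over blocks of projection equations, taking one relaxed
    Landweber step per block (a block of one row is classical Kaczmarz)."""
    x = x0.copy()
    for _ in range(sweeps):
        for A, b in zip(A_blocks, b_blocks):
            step = omega / np.linalg.norm(A, 2) ** 2   # scale by 1 / ||A_k||^2
            x = x + step * (A.T @ (b - A @ x))         # gradient step on this block
    return x
```

In a tomography setting, each block would correspond to one projection angle; cycling through the angles gives the characteristic Kaczmarz sweep, and the basis-function weights of the paper would enter through the discretization of A.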

  6. MO-C-18A-01: Advances in Model-Based 3D Image Reconstruction

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, G; Pan, X; Stayman, J

    2014-06-15

    Recent years have seen the emergence of CT image reconstruction techniques that exploit physical models of the imaging system, photon statistics, and even the patient to achieve improved 3D image quality and/or reduction of radiation dose. With numerous advantages in comparison to conventional 3D filtered backprojection, such techniques bring a variety of challenges as well, including: a demanding computational load associated with sophisticated forward models and iterative optimization methods; nonlinearity and nonstationarity in image quality characteristics; a complex dependency on multiple free parameters; and the need to understand how best to incorporate prior information (including patient-specific prior images) within the reconstruction process. The advantages, however, are even greater – for example: improved image quality; reduced dose; robustness to noise and artifacts; task-specific reconstruction protocols; suitability to novel CT imaging platforms and noncircular orbits; and incorporation of known characteristics of the imager and patient that are conventionally discarded. This symposium features experts in 3D image reconstruction, image quality assessment, and the translation of such methods to emerging clinical applications. Dr. Chen will address novel methods for the incorporation of prior information in 3D and 4D CT reconstruction techniques. Dr. Pan will show recent advances in optimization-based reconstruction that enable potential reduction of dose and sampling requirements. Dr. Stayman will describe a “task-based imaging” approach that leverages models of the imaging system and patient in combination with a specification of the imaging task to optimize both the acquisition and reconstruction process. Dr. Samei will describe the development of methods for image quality assessment in such nonlinear reconstruction techniques and the use of these methods to characterize and optimize image quality and dose in a spectrum of clinical applications. Learning Objectives: Learn the general methodologies associated with model-based 3D image reconstruction. Learn the potential advantages in image quality and dose associated with model-based image reconstruction. Learn the challenges associated with computational load and image quality assessment for such reconstruction methods. Learn how imaging task can be incorporated as a means to drive optimal image acquisition and reconstruction techniques. Learn how model-based reconstruction methods can incorporate prior information to improve image quality, ease sampling requirements, and reduce dose.

  7. On the Short Horizon of Spontaneous Iterative Reasoning in Logical Puzzles and Games

    ERIC Educational Resources Information Center

    Mazzocco, Ketti; Cherubini, Anna Maria; Cherubini, Paolo

    2013-01-01

    A reasoning strategy is iterative when the initial conclusion suggested by a set of premises is integrated into that set of premises in order to yield additional conclusions. Previous experimental studies on game theory-based strategic games (such as the beauty contest game) observed difficulty in reasoning iteratively, which has been partly…

  8. Non-convex optimization for self-calibration of direction-dependent effects in radio interferometric imaging

    NASA Astrophysics Data System (ADS)

    Repetti, Audrey; Birdi, Jasleen; Dabbech, Arwa; Wiaux, Yves

    2017-10-01

    Radio interferometric imaging aims to estimate an unknown sky intensity image from degraded observations, acquired through an antenna array. In the theoretical case of a perfectly calibrated array, it has been shown that solving the corresponding imaging problem by iterative algorithms based on convex optimization and compressive sensing theory can be competitive with classical algorithms such as CLEAN. However, in practice, antenna-based gains are unknown and have to be calibrated. Future radio telescopes, such as the Square Kilometre Array, aim at improving imaging resolution and sensitivity by orders of magnitude. At this precision level, the direction-dependency of the gains must be accounted for, and radio interferometric imaging can be understood as a blind deconvolution problem. In this context, the underlying minimization problem is non-convex, and adapted techniques have to be designed. In this work, leveraging recent developments in non-convex optimization, we propose the first joint calibration and imaging method in radio interferometry, with proven convergence guarantees. Our approach, based on a block-coordinate forward-backward algorithm, jointly accounts for visibilities and suitable priors on both the image and the direction-dependent effects (DDEs). As demonstrated in recent works, sparsity remains the prior of choice for the image, while DDEs are modelled as smooth functions of the sky, i.e. spatially band-limited. Finally, we show through simulations the efficiency of our method, for the reconstruction of both images of point sources and complex extended sources. MATLAB code is available on GitHub.
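A toy block-coordinate forward-backward scheme in the spirit of the paper can be sketched as below. This is not the authors' algorithm: the measurement model (per-measurement scalar gains), step sizes, and priors are simplified stand-ins, with an l1 prior on the image (prox = soft-thresholding) and no prior on the gains.

```python
import numpy as np

def soft(v, t):
    """Soft-thresholding: proximal operator of t * ||.||_1."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def joint_calib_image(A, y, lam=0.1, n_iter=200):
    """Alternate a forward (gradient) step and a backward (prox) step
    on each block of the bilinear model y = diag(d) A x, with unknown
    gains d and sparse image x."""
    m, n = A.shape
    x = np.zeros(n)          # image block (sparsity prior)
    d = np.ones(m)           # gain block (toy stand-in for DDEs)
    for _ in range(n_iter):
        # image block: gradient step on ||diag(d) A x - y||^2 / 2,
        # then the l1 prox with a 1/L step size
        M = d[:, None] * A
        L = np.linalg.norm(M, 2) ** 2 + 1e-12
        x = soft(x - M.T @ (M @ x - y) / L, lam / L)
        # gain block: gradient step on the same data term
        Ax = A @ x
        Ld = np.max(Ax ** 2) + 1e-12
        d = d - Ax * (d * Ax - y) / Ld
    return x, d
```

Each block update is a descent step with a conservative Lipschitz step size, which is what underlies the convergence guarantees of block-coordinate forward-backward methods; the scale ambiguity between d and x is left unresolved here.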

  9. Development of iterative techniques for the solution of unsteady compressible viscous flows

    NASA Technical Reports Server (NTRS)

    Sankar, Lakshmi N.; Hixon, Duane

    1992-01-01

    The development of efficient iterative solution methods for the numerical solution of two- and three-dimensional compressible Navier-Stokes equations is discussed. Iterative time marching methods have several advantages over classical multi-step explicit time marching schemes and non-iterative implicit time marching schemes. Iterative schemes have better stability characteristics than non-iterative explicit and implicit schemes. In this work, another approach based on the classical conjugate gradient method, known as the Generalized Minimum Residual (GMRES) algorithm, is investigated. The GMRES algorithm has been used in the past by a number of researchers for solving steady viscous and inviscid flow problems. Here, we investigate the suitability of this algorithm for solving the system of non-linear equations that arise in unsteady Navier-Stokes solvers at each time step.
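A minimal restarted GMRES sketch illustrates the building block. It is written matrix-free (matvec may be any function returning A @ x), which is how Newton-Krylov solvers embed GMRES inside a nonlinear unsteady solver; the nonlinear outer loop is not shown.

```python
import numpy as np

def gmres(matvec, b, x0=None, m=30, tol=1e-8):
    """GMRES(m): build an Arnoldi basis of the Krylov subspace, then
    solve the small least-squares problem for the residual-minimizing
    correction; restart until the residual norm falls below tol."""
    n = b.size
    x = np.zeros(n) if x0 is None else x0.copy()
    for _ in range(100):                       # restart loop
        r = b - matvec(x)
        beta = np.linalg.norm(r)
        if beta < tol:
            return x
        Q = np.zeros((n, m + 1))
        H = np.zeros((m + 1, m))
        Q[:, 0] = r / beta
        for j in range(m):                     # Arnoldi process
            w = matvec(Q[:, j])
            for i in range(j + 1):
                H[i, j] = Q[:, i] @ w
                w -= H[i, j] * Q[:, i]
            H[j + 1, j] = np.linalg.norm(w)
            if H[j + 1, j] < 1e-14:            # lucky breakdown
                m_eff = j + 1
                break
            Q[:, j + 1] = w / H[j + 1, j]
        else:
            m_eff = m
        # least squares: min_y || beta * e1 - H y ||
        e1 = np.zeros(m_eff + 1)
        e1[0] = beta
        y, *_ = np.linalg.lstsq(H[:m_eff + 1, :m_eff], e1, rcond=None)
        x = x + Q[:, :m_eff] @ y
    return x
```

In a production solver the small least-squares problem is updated with Givens rotations rather than re-solved, and preconditioning is essential; both are omitted for clarity.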

  10. Hyperspectral Image Classification With Markov Random Fields and a Convolutional Neural Network

    NASA Astrophysics Data System (ADS)

    Cao, Xiangyong; Zhou, Feng; Xu, Lin; Meng, Deyu; Xu, Zongben; Paisley, John

    2018-05-01

    This paper presents a new supervised classification algorithm for remotely sensed hyperspectral images (HSI), which integrates spectral and spatial information in a unified Bayesian framework. First, we formulate the HSI classification problem from a Bayesian perspective. Then, we adopt a convolutional neural network (CNN) to learn the posterior class distributions using a patch-wise training strategy to better use the spatial information. Next, spatial information is further considered by placing a spatial smoothness prior on the labels. Finally, we iteratively update the CNN parameters using stochastic gradient descent (SGD) and update the class labels of all pixel vectors using an alpha-expansion min-cut-based algorithm. Compared with other state-of-the-art methods, the proposed classification method achieves better performance on one synthetic dataset and two benchmark HSI datasets in a number of experimental settings.
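The label-update step under a smoothness prior can be illustrated with iterated conditional modes (ICM), a simple local substitute for the paper's alpha-expansion min-cut solver; the CNN posterior is represented here by a given per-pixel class log-probability array.

```python
import numpy as np

def icm_smooth(log_prob, beta=1.0, n_iter=5):
    """Update labels under a Potts smoothness prior by iterated
    conditional modes: each pixel takes the class maximizing its own
    log-posterior plus beta per agreeing 4-neighbour."""
    h, w, k = log_prob.shape
    labels = log_prob.argmax(axis=2)
    for _ in range(n_iter):
        for y in range(h):
            for x in range(w):
                score = log_prob[y, x].copy()
                # reward agreement with each in-bounds neighbour's label
                for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                    ny, nx = y + dy, x + dx
                    if 0 <= ny < h and 0 <= nx < w:
                        score[labels[ny, nx]] += beta
                labels[y, x] = score.argmax()
    return labels
```

ICM only finds a local optimum; the alpha-expansion algorithm used in the paper gives much stronger guarantees for the same Potts energy, at the cost of repeated graph cuts.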

  11. Efficient Solution of Three-Dimensional Problems of Acoustic and Electromagnetic Scattering by Open Surfaces

    NASA Technical Reports Server (NTRS)

    Turc, Catalin; Anand, Akash; Bruno, Oscar; Chaubell, Julian

    2011-01-01

    We present a computational methodology (a novel Nystrom approach based on use of a non-overlapping patch technique and Chebyshev discretizations) for efficient solution of problems of acoustic and electromagnetic scattering by open surfaces. Our integral equation formulations (1) incorporate, as ansatz, the singular nature of open-surface integral-equation solutions, and (2) for the Electric Field Integral Equation (EFIE), use analytical regularizers that effectively reduce the number of iterations required by Krylov-subspace iterative linear-algebra solvers.

  12. Performance Analysis of Iterative Channel Estimation and Multiuser Detection in Multipath DS-CDMA Channels

    NASA Astrophysics Data System (ADS)

    Li, Husheng; Betz, Sharon M.; Poor, H. Vincent

    2007-05-01

    This paper examines the performance of decision feedback based iterative channel estimation and multiuser detection in channel coded aperiodic DS-CDMA systems operating over multipath fading channels. First, explicit expressions describing the performance of channel estimation and parallel interference cancellation based multiuser detection are developed. These results are then combined to characterize the evolution of the performance of a system that iterates among channel estimation, multiuser detection and channel decoding. Sufficient conditions for convergence of this system to a unique fixed point are developed.
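The convergence analysis treats the loop (channel estimation, then multiuser detection, then decoding) as a map acting on a performance measure across iterations. A generic fixed-point sketch, a toy abstraction of the paper's sufficient conditions, which amount to the evolution map being contractive:

```python
def iterate_to_fixed_point(f, x0, tol=1e-10, max_iter=1000):
    """Iterate x <- f(x) until successive values agree to within tol.
    If f is a contraction, the iteration converges to its unique
    fixed point regardless of the starting value x0."""
    x = x0
    for _ in range(max_iter):
        x_new = f(x)
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x
```

In the paper, the role of f is played by the explicit expressions describing how estimation and detection quality in one iteration determine the quality achievable in the next.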

  13. Tissue Probability Map Constrained 4-D Clustering Algorithm for Increased Accuracy and Robustness in Serial MR Brain Image Segmentation

    PubMed Central

    Xue, Zhong; Shen, Dinggang; Li, Hai; Wong, Stephen

    2010-01-01

    The traditional fuzzy clustering algorithm and its extensions have been successfully applied in medical image segmentation. However, because of the variability of tissues and anatomical structures, the clustering results might be biased by the tissue population and intensity differences. For example, clustering-based algorithms tend to over-segment white matter tissues of MR brain images. To solve this problem, we introduce a tissue probability map constrained clustering algorithm and apply it to serial MR brain image segmentation, i.e., a series of 3-D MR brain images of the same subject at different time points. Using the new serial image segmentation algorithm within the CLASSIC framework, which iteratively segments the images and estimates the longitudinal deformations, we improved both accuracy and robustness for serial image computing, and at the same time produced longitudinally consistent segmentation and stable measures. In the algorithm, the tissue probability maps consist of both the population-based and subject-specific segmentation priors. Experimental study using both simulated longitudinal MR brain data and the Alzheimer’s Disease Neuroimaging Initiative (ADNI) data confirmed that more accurate and robust segmentation results can be obtained by using both priors. The proposed algorithm can be applied in longitudinal follow-up studies of MR brain imaging with subtle morphological changes for neurological disorders. PMID:26566399

  14. Methods and Systems for Characterization of an Anomaly Using Infrared Flash Thermography

    NASA Technical Reports Server (NTRS)

    Koshti, Ajay M. (Inventor)

    2013-01-01

    A method for characterizing an anomaly in a material comprises (a) extracting contrast data; (b) measuring a contrast evolution; (c) filtering the contrast evolution; (d) measuring a peak amplitude of the contrast evolution; (e) determining a diameter and a depth of the anomaly; and (f) repeating the step of determining the diameter and the depth of the anomaly until a change in the estimate of the depth is less than a set value. The step of determining the diameter and the depth of the anomaly comprises estimating the depth using a diameter constant C.sub.D equal to one for the first iteration of determining the diameter and the depth; estimating the diameter; and comparing the estimate of the depth of the anomaly after each iteration of estimating to the prior estimate of the depth to calculate the change in the estimate of the depth of the anomaly.
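The alternating estimation loop can be sketched as below. This is illustrative only: the depth and diameter relations and the correction factor are hypothetical stand-ins, not the patent's calibration curves; only the structure (C.sub.D = 1 on the first pass, then iterate until the depth change is below a set value) follows the description above.

```python
def characterize(peak_amp, peak_time, tol=1e-4, max_iter=50):
    """Alternate depth and diameter estimates until the depth estimate
    changes by less than tol. All formulas below are hypothetical."""
    c_d = 1.0                # diameter constant, equal to one on first pass
    depth_prev = None
    diameter = 0.0
    for _ in range(max_iter):
        depth = c_d * peak_time ** 0.5                  # hypothetical depth model
        diameter = peak_amp * depth                     # hypothetical diameter model
        c_d = 1.0 / (1.0 + 0.1 / max(diameter, 1e-9))   # hypothetical correction
        if depth_prev is not None and abs(depth - depth_prev) < tol:
            break
        depth_prev = depth
    return depth, diameter
```

The loop is a scalar fixed-point iteration: as long as the correction factor varies mildly with diameter, successive depth estimates contract onto a single consistent (depth, diameter) pair.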

  15. From rationality to cooperativeness: The totally mixed Nash equilibrium in Markov strategies in the iterated Prisoner's Dilemma.

    PubMed

    Menshikov, Ivan S; Shklover, Alexsandr V; Babkina, Tatiana S; Myagkov, Mikhail G

    2017-01-01

    In this research, the social behavior of the participants in a Prisoner's Dilemma laboratory game is explained on the basis of the quantal response equilibrium concept and the representation of the game in Markov strategies. In previous research, we demonstrated that social interaction during the experiment has a positive influence on cooperation, trust, and gratefulness. This research shows that the quantal response equilibrium concept agrees only with the results of experiments on cooperation in Prisoner's Dilemma prior to social interaction. However, quantal response equilibrium does not explain participants' behavior after social interaction. As an alternative theoretical approach, an examination was conducted of the iterated Prisoner's Dilemma game in Markov strategies. We built a totally mixed Nash equilibrium in this game; the equilibrium agrees with the results of the experiments both before and after social interaction.

  16. Tritium saturation in plasma-facing materials surfaces

    NASA Astrophysics Data System (ADS)

    Longhurst, Glen R.; Anderl, Robert A.; Causey, Rion A.; Federici, Gianfranco; Haasz, Anthony A.; Pawelko, Robert J.

    1998-10-01

    Plasma-facing components in the International Thermonuclear Experimental Reactor (ITER) will experience high heat loads and intense plasma fluxes of order 10²⁰–10²³ particles/m²s. Experiments on Be and W, two of the materials considered for use in ITER, have revealed that a tritium saturation phenomenon can take place under these conditions in which damage to the surface results that enhances the return of implanted tritium to the plasma and inhibits uptake of tritium. This phenomenon is important because it implies that tritium inventories due to implantation in these plasma-facing materials will probably be lower than was previously estimated using classical recombination-limited release at the plasma surface. Similarly, permeation through these components to the coolant streams should be reduced. In this paper we discuss evidence for the existence of this phenomenon, describe techniques for modeling it, and present results of the application of such modeling to prior experiments.

  17. From rationality to cooperativeness: The totally mixed Nash equilibrium in Markov strategies in the iterated Prisoner’s Dilemma

    PubMed Central

    Myagkov, Mikhail G.

    2017-01-01

    In this research, the social behavior of the participants in a Prisoner's Dilemma laboratory game is explained on the basis of the quantal response equilibrium concept and the representation of the game in Markov strategies. In previous research, we demonstrated that social interaction during the experiment has a positive influence on cooperation, trust, and gratefulness. This research shows that the quantal response equilibrium concept agrees only with the results of experiments on cooperation in Prisoner’s Dilemma prior to social interaction. However, quantal response equilibrium does not explain participants’ behavior after social interaction. As an alternative theoretical approach, an examination was conducted of the iterated Prisoner's Dilemma game in Markov strategies. We built a totally mixed Nash equilibrium in this game; the equilibrium agrees with the results of the experiments both before and after social interaction. PMID:29190280

  18. Handling Big Data in Medical Imaging: Iterative Reconstruction with Large-Scale Automated Parallel Computation

    PubMed Central

    Lee, Jae H.; Yao, Yushu; Shrestha, Uttam; Gullberg, Grant T.; Seo, Youngho

    2014-01-01

    The primary goal of this project is to implement the iterative statistical image reconstruction algorithm, in this case maximum likelihood expectation maximization (MLEM) used for dynamic cardiac single photon emission computed tomography, on Spark/GraphX. This involves porting the algorithm to run on large-scale parallel computing systems. Spark is an easy-to-program software platform that can handle large amounts of data in parallel. GraphX is a graph analytic system running on top of Spark to handle graph and sparse linear algebra operations in parallel. The main advantage of implementing the MLEM algorithm in Spark/GraphX is that it allows users to parallelize such computation without any expertise in parallel computing or prior knowledge in computer science. In this paper we demonstrate a successful implementation of MLEM in Spark/GraphX and present the performance gains with the goal to eventually make it usable in a clinical setting. PMID:27081299
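The MLEM update being ported is the standard multiplicative fixed point. A minimal dense NumPy sketch (in the Spark/GraphX implementation, the forward projection A @ x and the backprojection A.T @ ratio are the steps carried out as distributed graph operations):

```python
import numpy as np

def mlem(A, counts, n_iter=50):
    """Standard MLEM update for emission tomography:
    x <- x / (A^T 1) * A^T (counts / (A x)),
    where A maps image voxels to detector bins and counts are the
    measured projection data."""
    x = np.ones(A.shape[1])
    sens = A.T @ np.ones(A.shape[0])            # sensitivity image A^T 1
    for _ in range(n_iter):
        proj = A @ x                            # forward projection
        ratio = counts / np.maximum(proj, 1e-12)
        x = x / np.maximum(sens, 1e-12) * (A.T @ ratio)   # backprojection
    return x
```

The update is embarrassingly data-parallel within each iteration, which is what makes it a natural fit for a platform like Spark; the iterations themselves remain sequential.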

  19. Handling Big Data in Medical Imaging: Iterative Reconstruction with Large-Scale Automated Parallel Computation.

    PubMed

    Lee, Jae H; Yao, Yushu; Shrestha, Uttam; Gullberg, Grant T; Seo, Youngho

    2014-11-01

    The primary goal of this project is to implement the iterative statistical image reconstruction algorithm, in this case maximum likelihood expectation maximization (MLEM) used for dynamic cardiac single photon emission computed tomography, on Spark/GraphX. This involves porting the algorithm to run on large-scale parallel computing systems. Spark is an easy-to-program software platform that can handle large amounts of data in parallel. GraphX is a graph analytic system running on top of Spark to handle graph and sparse linear algebra operations in parallel. The main advantage of implementing the MLEM algorithm in Spark/GraphX is that it allows users to parallelize such computation without any expertise in parallel computing or prior knowledge in computer science. In this paper we demonstrate a successful implementation of MLEM in Spark/GraphX and present the performance gains with the goal to eventually make it usable in a clinical setting.

  20. Breadboard RL10-2B low-thrust operating mode (second iteration) test report

    NASA Technical Reports Server (NTRS)

    Kanic, Paul G.; Kaldor, Raymond B.; Watkins, Pia M.

    1988-01-01

    Cryogenic rocket engines requiring a cooling process to thermally condition the engine to operating temperature can be made more efficient if cooling propellants can be burned. Tank head idle and pumped idle modes can be used to burn propellants employed for cooling, thereby providing useful thrust. Such idle modes require the use of a heat exchanger to vaporize oxygen prior to injection into the combustion chamber. During December 1988, Pratt and Whitney conducted a series of engine hot firings demonstrating the operation of two new, previously untested oxidizer heat exchanger designs. The program was a second iteration of previous low thrust testing conducted in 1984, during which a first-generation heat exchanger design was used. Although operation was demonstrated at tank head idle and pumped idle, the engine experienced instability when propellants could not be supplied to the heat exchanger at design conditions.

  1. STANDARDIZING THE STRUCTURE OF STROKE CLINICAL AND EPIDEMIOLOGIC RESEARCH DATA: THE NINDS STROKE COMMON DATA ELEMENT (CDE) PROJECT

    PubMed Central

    Saver, Jeffrey L.; Warach, Steven; Janis, Scott; Odenkirchen, Joanne; Becker, Kyra; Benavente, Oscar; Broderick, Joseph; Dromerick, Alexander W.; Duncan, Pamela; Elkind, Mitchell S. V.; Johnston, Karen; Kidwell, Chelsea S.; Meschia, James F.; Schwamm, Lee

    2012-01-01

    Background and Purpose The National Institute of Neurological Disorders and Stroke initiated development of stroke-specific Common Data Elements (CDEs) as part of a project to develop data standards for funded clinical research in all fields of neuroscience. Standardizing data elements in translational, clinical and population research in cerebrovascular disease could decrease study start-up time, facilitate data sharing, and promote well-informed clinical practice guidelines. Methods A Working Group of diverse experts in cerebrovascular clinical trials, epidemiology, and biostatistics met regularly to develop a set of Stroke CDEs, selecting among, refining, and adding to existing, field-tested data elements from national registries and funded trials and studies. Candidate elements were revised based on comments from leading national and international neurovascular research organizations and the public. Results The first iteration of the NINDS stroke-specific CDEs comprises 980 data elements spanning nine content areas: 1) Biospecimens and Biomarkers; 2) Hospital Course and Acute Therapies; 3) Imaging; 4) Laboratory Tests and Vital Signs; 5) Long Term Therapies; 6) Medical History and Prior Health Status; 7) Outcomes and Endpoints; 8) Stroke Presentation; 9) Stroke Types and Subtypes. A CDE website provides uniform names and structures for each element, a data dictionary, and template case report forms (CRFs) using the CDEs. Conclusion Stroke-specific CDEs are now available as standardized, scientifically-vetted variable structures to facilitate data collection and data sharing in cerebrovascular patient-oriented research. The CDEs are an evolving resource that will be iteratively improved based on investigator use, new technologies, and emerging concepts and research findings. PMID:22308239

  2. Designing for deeper learning in a blended computer science course for middle school students

    NASA Astrophysics Data System (ADS)

    Grover, Shuchi; Pea, Roy; Cooper, Stephen

    2015-04-01

    The focus of this research was to create and test an introductory computer science course for middle school. Titled "Foundations for Advancing Computational Thinking" (FACT), the course aims to prepare and motivate middle school learners for future engagement with algorithmic problem solving. FACT was also piloted as a seven-week course on Stanford's OpenEdX MOOC platform for blended in-class learning. Unique aspects of FACT include balanced pedagogical designs that address the cognitive, interpersonal, and intrapersonal aspects of "deeper learning"; a focus on pedagogical strategies for mediating and assessing for transfer from block-based to text-based programming; curricular materials for remedying misperceptions of computing; and "systems of assessments" (including formative and summative quizzes and tests, directed as well as open-ended programming assignments, and a transfer test) to get a comprehensive picture of students' deeper computational learning. Empirical investigations, accomplished over two iterations of a design-based research effort with students (aged 11-14 years) in a public school, sought to examine student understanding of algorithmic constructs, and how well students transferred this learning from Scratch to text-based languages. Changes in student perceptions of computing as a discipline were measured. Results and mixed-method analyses revealed that students in both studies (1) achieved substantial learning gains in algorithmic thinking skills, (2) were able to transfer their learning from Scratch to a text-based programming context, and (3) achieved significant growth toward a more mature understanding of computing as a discipline. Factor analyses of prior computing experience, multivariate regression analyses, and qualitative analyses of student projects and artifact-based interviews were conducted to better understand the factors affecting learning outcomes. 
Prior computing experiences (as measured by a pretest) and math ability were found to be strong predictors of learning outcomes.

  3. Virtual patients: practical advice for clinical authors using Labyrinth.

    PubMed

    Begg, Michael

    2010-09-01

    Labyrinth is a tool originally developed in the University of Edinburgh's Learning Technology Section for authoring and delivering branching case scenarios. The scenarios can incorporate game-informed elements such as scoring, randomising, avatars and counters. Labyrinth has grown more popular internationally since a version of the build was made available on the open source network SourceForge. This paper offers help and advice for clinical educators interested in creating cases. Labyrinth is increasingly recognised as a tool offering great potential for delivering cases that promote rich, situated learning opportunities for learners. There are, however, significant challenges to generating such cases, not least of which is the challenge for potential authors in approaching the process of constructing narrative-rich, context-sensitive cases in an unfamiliar authoring environment. This paper offers a brief overview of the principles informing Labyrinth cases (game-informed learning), and offers some practical advice to better prepare educators with little or no prior experience. Labyrinth has continued to grow and develop, from its roots as a research and development environment to one that is optimised for use by non-technical clinical educators. The process becomes increasingly iterative and better informed as the teaching community push the software further. The positive implication of providing practical advice and concept insight to new case authors is that it ideally leads to a broader base of users who will inform future iterations of the software. © Blackwell Publishing Ltd 2010.

  4. A self-adapting heuristic for automatically constructing terrain appreciation exercises

    NASA Astrophysics Data System (ADS)

    Nanda, S.; Lickteig, C. L.; Schaefer, P. S.

    2008-04-01

    Appreciating terrain is a key to success in both symmetric and asymmetric forms of warfare. Training to enable Soldiers to master this vital skill has traditionally required their translocation to a selected number of areas, each affording a desired set of topographical features, albeit with limited breadth of variety. As a result, the use of such methods has proved to be costly and time consuming. To counter this, new computer-aided training applications permit users to rapidly generate and complete training exercises in geo-specific open and urban environments rendered by high-fidelity image generation engines. The latter method is not only cost-efficient, but allows any given exercise and its conditions to be duplicated or systematically varied over time. However, even such computer-aided applications have shortcomings. One of the principal ones is that they usually require all training exercises to be painstakingly constructed by a subject matter expert. Furthermore, exercise difficulty is usually subjectively assessed and frequently ignored thereafter. As a result, such applications lack the ability to grow and adapt to the skill level and learning curve of each trainee. In this paper, we present a heuristic that automatically constructs exercises for identifying key terrain. Each exercise is created and administered in a unique iteration, with its level of difficulty tailored to the trainee's ability based on the correctness of that trainee's responses in prior iterations.

  5. Density control in ITER: an iterative learning control and robust control approach

    NASA Astrophysics Data System (ADS)

    Ravensbergen, T.; de Vries, P. C.; Felici, F.; Blanken, T. C.; Nouailletas, R.; Zabeo, L.

    2018-01-01

    Plasma density control for next generation tokamaks, such as ITER, is challenging because of multiple reasons. The response of the usual gas valve actuators in future, larger fusion devices, might be too slow for feedback control. Both pellet fuelling and the use of feedforward-based control may help to solve this problem. Also, tight density limits arise during ramp-up, due to operational limits related to divertor detachment and radiative collapses. As the number of shots available for controller tuning will be limited in ITER, in this paper, iterative learning control (ILC) is proposed to determine optimal feedforward actuator inputs based on tracking errors, obtained in previous shots. This control method can take the actuator and density limits into account and can deal with large actuator delays. However, a purely feedforward-based density control may not be sufficient due to the presence of disturbances and shot-to-shot differences. Therefore, robust control synthesis is used to construct a robustly stabilizing feedback controller. In simulations, it is shown that this combined controller strategy is able to achieve good tracking performance in the presence of shot-to-shot differences, tight constraints, and model mismatches.
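The shot-to-shot learning update can be sketched on a toy first-order plant with an actuator transport delay. The plant model, gains, limits, and delay below are illustrative stand-ins, not ITER parameters, and the robust feedback controller the paper adds on top of the feedforward is omitted.

```python
import numpy as np

def run_shot(u, delay=3, a=0.9, b=0.1):
    """Toy first-order density response with an actuator transport delay."""
    n = len(u)
    y = np.zeros(n)
    for t in range(1, n):
        u_t = u[t - delay] if t >= delay else 0.0
        y[t] = a * y[t - 1] + b * u_t
    return y

def ilc(ref, n_shots=200, gain=0.5, u_max=5.0, delay=3):
    """Shot-to-shot iterative learning control: after each simulated
    shot, shift the tracking error back by the known actuator delay,
    add it (scaled by the learning gain) to the feedforward input,
    and clip to the actuator limits."""
    n = len(ref)
    u = np.zeros(n)
    for _ in range(n_shots):
        e = ref - run_shot(u, delay=delay)
        # delay-compensated P-type learning update: u <- sat(u + L * e)
        u[:n - delay] += gain * e[delay:]
        u = np.clip(u, 0.0, u_max)
    return u
```

Because the update uses only the recorded tracking error of the previous shot, actuator delay and saturation are handled naturally; the number of shots needed to converge is the quantity an ILC design tries to minimize.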

  6. Cochlea segmentation using iterated random walks with shape prior

    NASA Astrophysics Data System (ADS)

    Ruiz Pujadas, Esmeralda; Kjer, Hans Martin; Vera, Sergio; Ceresa, Mario; González Ballester, Miguel Ángel

    2016-03-01

    Cochlear implants can restore hearing to deaf or partially deaf patients. In order to plan the intervention, a model is to be built from accurate cochlea segmentations of high-resolution µCT images and then adapted to a patient-specific model. Thus, a precise segmentation is required to build such a model. We propose a new framework for segmentation of µCT cochlear images using random walks where a region term is combined with a distance shape prior weighted by a confidence map to adjust its influence according to the strength of the image contour. Then, the region term can take advantage of the high contrast between the background and foreground and the distance prior guides the segmentation to the exterior of the cochlea as well as to less contrasted regions inside the cochlea. Finally, a refinement is performed preserving the topology using a topological method and an error control map to prevent boundary leakage. We tested the proposed approach with 10 datasets and compared it with the latest techniques with random walks and priors. The experiments suggest that this method gives promising results for cochlea segmentation.
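A minimal two-label random walker with a blended shape prior can be sketched as follows. This is a simplified reading of the framework: the spatially varying confidence map is reduced to a constant weight gamma, and the topological refinement step is omitted.

```python
import numpy as np
from scipy.sparse import lil_matrix
from scipy.sparse.linalg import spsolve

def random_walker_prior(img, seeds, prior, gamma=0.5, beta=50.0):
    """Two-label random walker with a shape prior: solve the
    combinatorial Dirichlet problem on the 4-connected pixel graph
    (Gaussian intensity edge weights), then blend the foreground
    probability with a prior probability map weighted by gamma."""
    h, w = img.shape
    idx = np.arange(h * w).reshape(h, w)
    L = lil_matrix((h * w, h * w))
    for y in range(h):
        for x in range(w):
            for ny, nx in ((y + 1, x), (y, x + 1)):
                if ny < h and nx < w:
                    wv = np.exp(-beta * (img[y, x] - img[ny, nx]) ** 2)
                    i, j = idx[y, x], idx[ny, nx]
                    L[i, j] -= wv
                    L[j, i] -= wv
                    L[i, i] += wv
                    L[j, j] += wv
    # Dirichlet boundary conditions: seed potentials fixed (fg=1, bg=0)
    fixed = np.zeros(h * w, dtype=bool)
    vals = np.zeros(h * w)
    for (y, x), lab in seeds.items():
        fixed[idx[y, x]] = True
        vals[idx[y, x]] = float(lab)
    free_idx = np.flatnonzero(~fixed)
    fixed_idx = np.flatnonzero(fixed)
    Lcsr = L.tocsr()
    Lff = Lcsr[free_idx][:, free_idx].tocsc()
    Lfs = Lcsr[free_idx][:, fixed_idx]
    p = np.zeros(h * w)
    p[fixed_idx] = vals[fixed_idx]
    p[free_idx] = spsolve(Lff, -Lfs @ vals[fixed_idx])
    prob = p.reshape(h, w)
    return (1 - gamma) * prob + gamma * prior
```

Thresholding the blended map at 0.5 yields the segmentation; in the iterated variant above, the result of one pass would update the prior for the next.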

  7. Septal penetration correction in I-131 imaging following thyroid cancer treatment

    NASA Astrophysics Data System (ADS)

    Barrack, Fiona; Scuffham, James; McQuaid, Sarah

    2018-04-01

    Whole body gamma camera images acquired after I-131 treatment for thyroid cancer can suffer from collimator septal penetration artefacts because of the high energy of the gamma photons. This results in the appearance of ‘spoke’ artefacts, emanating from regions of high activity concentration, caused by the non-isotropic attenuation of the collimator. Deconvolution has the potential to reduce such artefacts, by taking into account the non-Gaussian point-spread-function (PSF) of the system. A Richardson–Lucy deconvolution algorithm, with and without prior scatter-correction was tested as a method of reducing septal penetration in planar gamma camera images. Phantom images (hot spheres within a warm background) were acquired and deconvolution using a measured PSF was applied. The results were evaluated through region-of-interest and line profile analysis to determine the success of artefact reduction and the optimal number of deconvolution iterations and damping parameter (λ). Without scatter-correction, the optimal results were obtained with 15 iterations and λ  =  0.01, with the counts in the spokes reduced to 20% of the original value, indicating a substantial decrease in their prominence. When a triple-energy-window scatter-correction was applied prior to deconvolution, the optimal results were obtained with six iterations and λ  =  0.02, which reduced the spoke counts to 3% of the original value. The prior application of scatter-correction therefore produced the best results, with a marked change in the appearance of the images. The optimal settings were then applied to six patient datasets, to demonstrate its utility in the clinical setting. In all datasets, spoke artefacts were substantially reduced after the application of scatter-correction and deconvolution, with the mean spoke count being reduced to 10% of the original value. 
This indicates that deconvolution is a promising technique for septal penetration artefact reduction that could potentially improve the diagnostic accuracy of I-131 imaging. Novelty and significance: this work has demonstrated that scatter correction combined with deconvolution can substantially reduce the appearance of septal penetration artefacts in I-131 phantom and patient planar gamma camera images, enabling improved visualisation of the I-131 distribution. Deconvolution with a symmetric PSF has previously been used to reduce artefacts in gamma camera images; however, this work details the novel use of an asymmetric PSF to remove the angularly dependent septal penetration artefacts.
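    The Richardson–Lucy iteration underlying the method above can be sketched as follows. This is a minimal, undamped version for illustration only: the paper's damping parameter (λ) and triple-energy-window scatter correction are omitted, and all names are assumptions rather than the authors' code.

```python
import numpy as np
from scipy.signal import fftconvolve

def richardson_lucy(image, psf, n_iter=15, eps=1e-12):
    """Basic (undamped) Richardson-Lucy deconvolution with a measured PSF.

    The paper additionally applies a damping parameter to limit noise
    amplification; that refinement is omitted in this sketch.
    """
    psf_mirror = psf[::-1, ::-1]                   # flipped PSF for the correction step
    estimate = np.full(image.shape, image.mean())  # flat, positive starting estimate
    for _ in range(n_iter):
        blurred = fftconvolve(estimate, psf, mode="same")
        ratio = image / np.maximum(blurred, eps)   # measured data / current model
        estimate *= fftconvolve(ratio, psf_mirror, mode="same")
    return estimate
```

    Because the update is multiplicative, the estimate stays non-negative, which is one reason Richardson–Lucy is popular for emission imaging.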

  8. An iterative method for near-field Fresnel region polychromatic phase contrast imaging

    NASA Astrophysics Data System (ADS)

    Carroll, Aidan J.; van Riessen, Grant A.; Balaur, Eugeniu; Dolbnya, Igor P.; Tran, Giang N.; Peele, Andrew G.

    2017-07-01

    We present an iterative method for polychromatic phase contrast imaging that is suitable for broadband illumination and which allows for the quantitative determination of the thickness of an object given the refractive index of the sample material. Experimental and simulation results suggest the iterative method provides comparable image quality and quantitative object thickness determination when compared to the analytical polychromatic transport of intensity and contrast transfer function methods. The ability of the iterative method to work over a wider range of experimental conditions means the iterative method is a suitable candidate for use with polychromatic illumination and may deliver more utility for laboratory-based x-ray sources, which typically have a broad spectrum.

  9. LDPC-based iterative joint source-channel decoding for JPEG2000.

    PubMed

    Pu, Lingling; Wu, Zhenyu; Bilgin, Ali; Marcellin, Michael W; Vasic, Bane

    2007-02-01

    A framework is proposed for iterative joint source-channel decoding of JPEG2000 codestreams. At the encoder, JPEG2000 is used to perform source coding with certain error-resilience (ER) modes, and LDPC codes are used to perform channel coding. During decoding, the source decoder uses the ER modes to identify corrupt sections of the codestream and provides this information to the channel decoder. Decoding is carried out jointly in an iterative fashion. Experimental results indicate that the proposed method requires fewer iterations and improves overall system performance.

  10. A general Bayesian image reconstruction algorithm with entropy prior: Preliminary application to HST data

    NASA Astrophysics Data System (ADS)

    Nunez, Jorge; Llacer, Jorge

    1993-10-01

This paper describes a general Bayesian iterative algorithm with entropy prior for image reconstruction. It solves the cases of both pure Poisson data and Poisson data with Gaussian readout noise. The algorithm maintains positivity of the solution; it includes case-specific prior information (default map) and flatfield corrections; it removes background and can be accelerated to be faster than the Richardson-Lucy algorithm. In order to determine the hyperparameter that balances the entropy and likelihood terms in the Bayesian approach, we have used a likelihood cross-validation technique. Cross-validation is more robust than other methods because it is less demanding in terms of the knowledge of exact data characteristics and of the point-spread function. We have used the algorithm to successfully reconstruct images obtained in different space- and ground-based imaging situations. It has been possible to recover most of the original intended capabilities of the Hubble Space Telescope (HST) wide field and planetary camera (WFPC) and faint object camera (FOC) from images obtained in their present state. Semireal simulations for the future wide field planetary camera 2 show that even after the repair of the spherical aberration problem, image reconstruction can play a key role in improving the resolution of the cameras, well beyond the design of the Hubble instruments. We also show that ground-based images can be reconstructed successfully with the algorithm. A technique which consists of dividing the CCD observations into two frames, with one-half the exposure time each, emerges as a recommended procedure for the utilization of the described algorithms. We have compared our technique with two commonly used reconstruction algorithms: the Richardson-Lucy and the Cambridge maximum entropy algorithms.

  11. Evaluating user reputation in online rating systems via an iterative group-based ranking method

    NASA Astrophysics Data System (ADS)

    Gao, Jian; Zhou, Tao

    2017-05-01

Reputation is a valuable asset in online social lives and it has drawn increasing attention. Given the existence of noisy ratings and spamming attacks, how to evaluate user reputation in online rating systems is especially significant. However, most previous ranking-based methods either rest on a debatable assumption or have unsatisfactory robustness. In this paper, we propose an iterative group-based ranking method by introducing an iterative reputation-allocation process into the original group-based ranking method. More specifically, the reputation of users is calculated based on the weighted sizes of the user rating groups after grouping all users by their rating similarities, and the ratings of high-reputation users have larger weights in dominating the corresponding user rating groups. The reputation of users and the user rating group sizes are iteratively updated until they become stable. Results on two real data sets with artificial spammers suggest that the proposed method outperforms state-of-the-art methods and that its robustness is considerably improved compared with the original group-based ranking method. Our work highlights the positive role of considering users' grouping behaviors in better online user reputation evaluation.

  12. SU-E-I-87: Automated Liver Segmentation Method for CBCT Dataset by Combining Sparse Shape Composition and Probabilistic Atlas Construction

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, Dengwang; Liu, Li; Chen, Jinhu

    2014-06-01

Purpose: The aim of this study was to extract liver structures from daily cone beam CT (CBCT) images automatically. Methods: Datasets were collected from 50 intravenous-contrast planning CT images, which served as the training dataset for probabilistic atlas and shape prior model construction. Firstly, the probabilistic atlas and a shape prior model based on sparse shape composition (SSC) were constructed by iterative deformable registration. Secondly, artifacts and noise were removed from the daily CBCT image by edge-preserving filtering using total variation with the L1 norm (TV-L1). Furthermore, the initial liver region was obtained by registering the incoming CBCT image with the atlas utilizing edge-preserving deformable registration with a multi-scale strategy; the initial liver region was then converted to a surface mesh, which was registered with the shape model in which the major variation of the specific patient was modeled by sparse vectors. At the last stage, shape and intensity information were incorporated into a joint probabilistic model, and finally the liver structure was extracted by maximum a posteriori segmentation. Regarding the construction process, the manually segmented contours were first converted into meshes, and then an arbitrary patient dataset was chosen as the reference image to register with the rest of the training datasets by a deformable registration algorithm for constructing the probabilistic atlas and prior shape model. To improve the efficiency of the proposed method, the initial probabilistic atlas was used as the reference image to register with the other patient data in an iterative construction, removing the bias caused by arbitrary selection. Results: The experiment validated the accuracy of the segmentation results quantitatively by comparing them with manual segmentations. The volumetric overlap percentage between the automatically generated liver contours and the ground truth was on average 88%–95% for CBCT images.
Conclusion: The experiment demonstrated that liver structures can be extracted accurately from CBCT images with artifacts, for subsequent adaptive radiation therapy. This work is supported by National Natural Science Foundation of China (No. 61201441), Research Fund for Excellent Young and Middle-aged Scientists of Shandong Province (No. BS2012DX038), Project of Shandong Province Higher Educational Science and Technology Program (No. J12LN23), and Jinan youth science and technology star (No. 20120109).

  13. Mid-Level Planning and Control for Articulated Locomoting Systems

    DTIC Science & Technology

    2017-02-12

accelerometers and gyros into each module of our snake robots. Prior work from our group has already used an extended Kalman filter (EKF) to fuse these distributed...body frame is performed as part of the measurement model at every iteration of the filter, using an SVD to identify the principal components of the...addition to the conventional EKF, although we found that all three methods worked equally well. All three filters used the same process and measurement

  14. Graph Laplacian Regularization for Image Denoising: Analysis in the Continuous Domain.

    PubMed

    Pang, Jiahao; Cheung, Gene

    2017-04-01

Inverse imaging problems are inherently underdetermined, and hence, it is important to employ appropriate image priors for regularization. One recent popular prior, the graph Laplacian regularizer, assumes that the target pixel patch is smooth with respect to an appropriately chosen graph. However, the mechanisms and implications of imposing the graph Laplacian regularizer on the original inverse problem are not well understood. To address this problem, in this paper, we interpret neighborhood graphs of pixel patches as discrete counterparts of Riemannian manifolds and perform analysis in the continuous domain, providing insights into several fundamental aspects of graph Laplacian regularization for image denoising. Specifically, we first show the convergence of the graph Laplacian regularizer to a continuous-domain functional, integrating a norm measured in a locally adaptive metric space. Focusing on image denoising, we derive an optimal metric space assuming non-local self-similarity of pixel patches, leading to an optimal graph Laplacian regularizer for denoising in the discrete domain. We then interpret graph Laplacian regularization as an anisotropic diffusion scheme to explain its behavior during iterations, e.g., its tendency to promote piecewise smooth signals under certain settings. To verify our analysis, an iterative image denoising algorithm is developed. Experimental results show that our algorithm performs competitively with state-of-the-art denoising methods, such as BM3D for natural images, and outperforms them significantly for piecewise smooth images.
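    A toy illustration of graph Laplacian regularized denoising: here a 1-D signal on a chain graph stands in for the patch-based graphs analysed in the paper, and the edge-weight kernel, parameters and names are illustrative assumptions, not the authors' algorithm.

```python
import numpy as np

def graph_laplacian_denoise(y, tau=2.0, sigma=0.3):
    """Denoise a 1-D signal by solving (I + tau*L) x = y, where L is the
    combinatorial Laplacian of a chain graph whose edge weights
    exp(-(y[i]-y[i+1])^2 / (2 sigma^2)) adapt to the signal: large weights
    smooth flat regions, small weights preserve sharp edges.
    """
    n = len(y)
    w = np.exp(-np.diff(y) ** 2 / (2 * sigma ** 2))  # signal-adaptive edge weights
    L = np.zeros((n, n))
    for i, wi in enumerate(w):                       # assemble the graph Laplacian
        L[i, i] += wi; L[i + 1, i + 1] += wi
        L[i, i + 1] -= wi; L[i + 1, i] -= wi
    return np.linalg.solve(np.eye(n) + tau * L, y)
```

    The edge-preserving behaviour mirrors the anisotropic-diffusion interpretation discussed in the abstract: weights shrink where the signal jumps, so smoothing acts mostly within homogeneous regions.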

  15. Suppression of tritium retention in remote areas of ITER by nonperturbative reactive gas injection.

    PubMed

    Tabarés, F L; Ferreira, J A; Ramos, A; van Rooij, G; Westerhout, J; Al, R; Rapp, J; Drenik, A; Mozetic, M

    2010-10-22

A technique based on reactive gas injection in the afterglow region of the divertor plasma is proposed for the suppression of tritium-carbon codeposits in remote areas of ITER when operated with carbon-based divertor targets. Experiments in a divertor simulator plasma device indicate that a 4 nm/min deposition can be suppressed by addition of a 1 Pa·m³ s⁻¹ ammonia flow at 10 cm from the plasma. These results bolster the concept of nonperturbative scavenger injection for tritium inventory control in carbon-based fusion plasma devices, thus paving the way for ITER operation in the active phase under a carbon-dominated, plasma facing component background.

  16. Computed Tomography Image Quality Evaluation of a New Iterative Reconstruction Algorithm in the Abdomen (Adaptive Statistical Iterative Reconstruction-V) a Comparison With Model-Based Iterative Reconstruction, Adaptive Statistical Iterative Reconstruction, and Filtered Back Projection Reconstructions.

    PubMed

    Goodenberger, Martin H; Wagner-Bartak, Nicolaus A; Gupta, Shiva; Liu, Xinming; Yap, Ramon Q; Sun, Jia; Tamm, Eric P; Jensen, Corey T

The purpose of this study was to compare abdominopelvic computed tomography images reconstructed with adaptive statistical iterative reconstruction-V (ASIR-V) with model-based iterative reconstruction (Veo 3.0), ASIR, and filtered back projection (FBP). Abdominopelvic computed tomography scans for 36 patients (26 males and 10 females) were reconstructed using FBP, ASIR (80%), Veo 3.0, and ASIR-V (30%, 60%, 90%). Mean ± SD patient age was 32 ± 10 years with mean ± SD body mass index of 26.9 ± 4.4 kg/m². Images were reviewed by 2 independent readers in a blinded, randomized fashion. Hounsfield unit, noise, and contrast-to-noise ratio (CNR) values were calculated for each reconstruction algorithm for further comparison. Phantom evaluation of low-contrast detectability (LCD) and high-contrast resolution was performed. Adaptive statistical iterative reconstruction-V 30%, ASIR-V 60%, and ASIR 80% were generally superior qualitatively compared with ASIR-V 90%, Veo 3.0, and FBP (P < 0.05). Adaptive statistical iterative reconstruction-V 90% showed superior LCD and had the highest CNR in the liver, aorta, and pancreas, measuring 7.32 ± 3.22, 11.60 ± 4.25, and 4.60 ± 2.31, respectively, compared with the next best series of ASIR-V 60% with respective CNR values of 5.54 ± 2.39, 8.78 ± 3.15, and 3.49 ± 1.77 (P < 0.0001). Veo 3.0 and ASIR 80% had the best and worst spatial resolution, respectively. Adaptive statistical iterative reconstruction-V 30% and ASIR-V 60% provided the best combination of qualitative and quantitative performance. Adaptive statistical iterative reconstruction 80% was equivalent qualitatively, but demonstrated inferior spatial resolution and LCD.

  17. Left ventricle segmentation via graph cut distribution matching.

    PubMed

    Ben Ayed, Ismail; Punithakumar, Kumaradevan; Li, Shuo; Islam, Ali; Chong, Jaron

    2009-01-01

    We present a discrete kernel density matching energy for segmenting the left ventricle cavity in cardiac magnetic resonance sequences. The energy and its graph cut optimization based on an original first-order approximation of the Bhattacharyya measure have not been proposed previously, and yield competitive results in nearly real-time. The algorithm seeks a region within each frame by optimization of two priors, one geometric (distance-based) and the other photometric, each measuring a distribution similarity between the region and a model learned from the first frame. Based on global rather than pixelwise information, the proposed algorithm does not require complex training and optimization with respect to geometric transformations. Unlike related active contour methods, it does not compute iterative updates of computationally expensive kernel densities. Furthermore, the proposed first-order analysis can be used for other intractable energies and, therefore, can lead to segmentation algorithms which share the flexibility of active contours and computational advantages of graph cuts. Quantitative evaluations over 2280 images acquired from 20 subjects demonstrated that the results correlate well with independent manual segmentations by an expert.

  18. Adaptive distance metric learning for diffusion tensor image segmentation.

    PubMed

    Kong, Youyong; Wang, Defeng; Shi, Lin; Hui, Steve C N; Chu, Winnie C W

    2014-01-01

    High quality segmentation of diffusion tensor images (DTI) is of key interest in biomedical research and clinical application. In previous studies, most efforts have been made to construct predefined metrics for different DTI segmentation tasks. These methods require adequate prior knowledge and tuning parameters. To overcome these disadvantages, we proposed to automatically learn an adaptive distance metric by a graph based semi-supervised learning model for DTI segmentation. An original discriminative distance vector was first formulated by combining both geometry and orientation distances derived from diffusion tensors. The kernel metric over the original distance and labels of all voxels were then simultaneously optimized in a graph based semi-supervised learning approach. Finally, the optimization task was efficiently solved with an iterative gradient descent method to achieve the optimal solution. With our approach, an adaptive distance metric could be available for each specific segmentation task. Experiments on synthetic and real brain DTI datasets were performed to demonstrate the effectiveness and robustness of the proposed distance metric learning approach. The performance of our approach was compared with three classical metrics in the graph based semi-supervised learning framework.

  19. Adaptive Distance Metric Learning for Diffusion Tensor Image Segmentation

    PubMed Central

    Kong, Youyong; Wang, Defeng; Shi, Lin; Hui, Steve C. N.; Chu, Winnie C. W.

    2014-01-01

    High quality segmentation of diffusion tensor images (DTI) is of key interest in biomedical research and clinical application. In previous studies, most efforts have been made to construct predefined metrics for different DTI segmentation tasks. These methods require adequate prior knowledge and tuning parameters. To overcome these disadvantages, we proposed to automatically learn an adaptive distance metric by a graph based semi-supervised learning model for DTI segmentation. An original discriminative distance vector was first formulated by combining both geometry and orientation distances derived from diffusion tensors. The kernel metric over the original distance and labels of all voxels were then simultaneously optimized in a graph based semi-supervised learning approach. Finally, the optimization task was efficiently solved with an iterative gradient descent method to achieve the optimal solution. With our approach, an adaptive distance metric could be available for each specific segmentation task. Experiments on synthetic and real brain DTI datasets were performed to demonstrate the effectiveness and robustness of the proposed distance metric learning approach. The performance of our approach was compared with three classical metrics in the graph based semi-supervised learning framework. PMID:24651858

  20. Comparing implementations of penalized weighted least-squares sinogram restoration

    PubMed Central

    Forthmann, Peter; Koehler, Thomas; Defrise, Michel; La Riviere, Patrick

    2010-01-01

    Purpose: A CT scanner measures the energy that is deposited in each channel of a detector array by x rays that have been partially absorbed on their way through the object. The measurement process is complex and quantitative measurements are always and inevitably associated with errors, so CT data must be preprocessed prior to reconstruction. In recent years, the authors have formulated CT sinogram preprocessing as a statistical restoration problem in which the goal is to obtain the best estimate of the line integrals needed for reconstruction from the set of noisy, degraded measurements. The authors have explored both penalized Poisson likelihood (PL) and penalized weighted least-squares (PWLS) objective functions. At low doses, the authors found that the PL approach outperforms PWLS in terms of resolution-noise tradeoffs, but at standard doses they perform similarly. The PWLS objective function, being quadratic, is more amenable to computational acceleration than the PL objective. In this work, the authors develop and compare two different methods for implementing PWLS sinogram restoration with the hope of improving computational performance relative to PL in the standard-dose regime. Sinogram restoration is still significant in the standard-dose regime since it can still outperform standard approaches and it allows for correction of effects that are not usually modeled in standard CT preprocessing. Methods: The authors have explored and compared two implementation strategies for PWLS sinogram restoration: (1) A direct matrix-inversion strategy based on the closed-form solution to the PWLS optimization problem and (2) an iterative approach based on the conjugate-gradient algorithm. Obtaining optimal performance from each strategy required modifying the naive off-the-shelf implementations of the algorithms to exploit the particular symmetry and sparseness of the sinogram-restoration problem. 
For the closed-form approach, the authors subdivided the large matrix inversion into smaller coupled problems and exploited sparseness to minimize matrix operations. For the conjugate-gradient approach, the authors exploited sparseness and preconditioned the problem to speed up convergence. Results: All methods produced qualitatively and quantitatively similar images as measured by resolution-variance tradeoffs and difference images. Despite the acceleration strategies, the direct matrix-inversion approach was found to be uncompetitive with iterative approaches, with a computational burden higher by an order of magnitude or more. The iterative conjugate-gradient approach, however, does appear promising, with computation times half that of the authors’ previous penalized-likelihood implementation. Conclusions: Iterative conjugate-gradient based PWLS sinogram restoration with careful matrix optimizations has computational advantages over direct matrix PWLS inversion and over penalized-likelihood sinogram restoration and can be considered a good alternative in standard-dose regimes. PMID:21158306
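    The iterative conjugate-gradient strategy compared above can be sketched generically as follows. This is a hedged stand-in, not the authors' implementation: the degradation model A, the statistical weights and the first-difference roughness penalty are placeholders, and the sparse-matrix and preconditioning optimizations described in the paper are omitted.

```python
import numpy as np
from scipy.sparse.linalg import cg

def pwls_restore(y, A, weights, beta=1.0):
    """Penalized weighted least-squares restoration via conjugate gradient:
    solves (A^T W A + beta * D^T D) x = A^T W y, with W = diag(weights)
    and D a first-difference roughness penalty.
    """
    n = A.shape[1]
    AtW = A.T * weights                  # A^T W without forming W explicitly
    D = np.diff(np.eye(n), axis=0)       # first-difference operator
    H = AtW @ A + beta * (D.T @ D)       # symmetric positive-definite system
    x, info = cg(H, AtW @ y)             # iterative conjugate-gradient solve
    return x
```

    Because the PWLS objective is quadratic, the normal equations are linear and symmetric positive-definite, which is exactly what makes conjugate gradient (and the closed-form inversion it is compared against) applicable.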

  1. SU-E-I-33: Initial Evaluation of Model-Based Iterative CT Reconstruction Using Standard Image Quality Phantoms

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gingold, E; Dave, J

    2014-06-01

Purpose: The purpose of this study was to compare a new model-based iterative reconstruction with existing reconstruction methods (filtered backprojection and basic iterative reconstruction) using quantitative analysis of standard image quality phantom images. Methods: An ACR accreditation phantom (Gammex 464) and a CATPHAN600 phantom were scanned using 3 routine clinical acquisition protocols (adult axial brain, adult abdomen, and pediatric abdomen) on a Philips iCT system. Each scan was acquired using default conditions and 75%, 50% and 25% dose levels. Images were reconstructed using standard filtered backprojection (FBP), conventional iterative reconstruction (iDose4) and a prototype model-based iterative reconstruction (IMR). Phantom measurements included CT number accuracy, contrast to noise ratio (CNR), modulation transfer function (MTF), low contrast detectability (LCD), and noise power spectrum (NPS). Results: The choice of reconstruction method had no effect on CT number accuracy or MTF (p<0.01). The CNR of a 6 HU contrast target was improved by 1–67% with iDose4 relative to FBP, while IMR improved CNR by 145–367% across all protocols and dose levels. Within each scan protocol, the CNR improvement from IMR vs FBP showed a general trend of greater improvement at lower dose levels. NPS magnitude was greatest for FBP and lowest for IMR. The NPS of the IMR reconstruction showed a pronounced decrease with increasing spatial frequency, consistent with the unusual noise texture seen in IMR images. Conclusion: Iterative Model Reconstruction reduces noise and improves contrast-to-noise ratio without sacrificing spatial resolution in CT phantom images. This offers the possibility of radiation dose reduction and improved low contrast detectability compared with filtered backprojection or conventional iterative reconstruction.

  2. astroABC : An Approximate Bayesian Computation Sequential Monte Carlo sampler for cosmological parameter estimation

    NASA Astrophysics Data System (ADS)

    Jennings, E.; Madigan, M.

    2017-04-01

Given the complexity of modern cosmological parameter inference where we are faced with non-Gaussian data and noise, correlated systematics and multi-probe correlated datasets, the Approximate Bayesian Computation (ABC) method is a promising alternative to traditional Markov Chain Monte Carlo approaches in the case where the Likelihood is intractable or unknown. The ABC method is called "Likelihood free" as it avoids explicit evaluation of the Likelihood by using a forward model simulation of the data which can include systematics. We introduce astroABC, an open source ABC Sequential Monte Carlo (SMC) sampler for parameter estimation. A key challenge in astrophysics is the efficient use of large multi-probe datasets to constrain high dimensional, possibly correlated parameter spaces. With this in mind astroABC allows for massive parallelization using MPI, a framework that handles spawning of processes across multiple nodes. A key new feature of astroABC is the ability to create MPI groups with different communicators, one for the sampler and several others for the forward model simulation, which speeds up sampling time considerably. For smaller jobs the Python multiprocessing option is also available. Other key features of this new sampler include: a Sequential Monte Carlo sampler; a method for iteratively adapting tolerance levels; local covariance estimate using scikit-learn's KDTree; modules for specifying optimal covariance matrix for a component-wise or multivariate normal perturbation kernel and a weighted covariance metric; restart files output frequently so an interrupted sampling run can be resumed at any iteration; output and restart files are backed up at every iteration; user defined distance metric and simulation methods; a module for specifying heterogeneous parameter priors including non-standard prior PDFs; a module for specifying a constant, linear, log or exponential tolerance level; well-documented examples and sample scripts.
This code is hosted online at https://github.com/EliseJ/astroABC.
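    The likelihood-free idea behind astroABC can be illustrated with the simplest variant, a rejection sampler: draw parameters from the prior, forward-simulate, and keep draws whose simulated data land within a tolerance of the observation. This is a minimal sketch of the general ABC principle, not astroABC's API; its SMC sampler additionally shrinks the tolerance over iterations and reweights particles.

```python
import numpy as np

def abc_rejection(observed, simulate, prior_sample, distance, eps,
                  n_draws=5000, seed=None):
    """Minimal ABC rejection sampler: accept parameter draws whose
    forward-simulated data fall within eps of the observed summary,
    never evaluating the likelihood explicitly.
    """
    rng = np.random.default_rng(seed)
    accepted = [t for t in (prior_sample(rng) for _ in range(n_draws))
                if distance(simulate(t, rng), observed) < eps]
    return np.array(accepted)
```

    For example, inferring the mean of a Gaussian from its sample mean: with a uniform prior, the accepted draws approximate the posterior around the observed value.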

  3. A Probabilistic Collocation Based Iterative Kalman Filter for Landfill Data Assimilation

    NASA Astrophysics Data System (ADS)

    Qiang, Z.; Zeng, L.; Wu, L.

    2016-12-01

Due to the strong spatial heterogeneity of landfill, uncertainty is ubiquitous in the gas transport process in landfill. To accurately characterize landfill properties, the ensemble Kalman filter (EnKF) has been employed to assimilate measurements, e.g., the gas pressure. As a Monte Carlo (MC) based method, the EnKF usually requires a large ensemble size, which poses a high computational cost for large-scale problems. In this work, we propose a probabilistic collocation based iterative Kalman filter (PCIKF) to estimate permeability in a liquid-gas coupling model. This method employs polynomial chaos expansion (PCE) to represent and propagate the uncertainties of model parameters and states, and an iterative form of the Kalman filter to assimilate the current gas pressure data. To further reduce the computational cost, functional ANOVA (analysis of variance) decomposition is conducted, and only the first-order ANOVA components are retained in the PCE. Illustrated with numerical case studies, the proposed method shows significant superiority in computational efficiency compared with the traditional MC based iterative EnKF. The developed method has promising potential for reliable prediction and management of landfill gas production.
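    For reference, the textbook stochastic EnKF analysis step that the PCE-based filter is designed to make cheaper looks like this. It is a generic sketch under the assumption of a linear observation operator H; the names and shapes are illustrative, not from the paper.

```python
import numpy as np

def enkf_update(ensemble, obs, H, obs_err_std, seed=None):
    """One stochastic EnKF analysis step (perturbed observations).

    ensemble: (n_ens, n_state) prior samples of states/parameters;
    H: (n_obs, n_state) linear observation operator.  Covariances are
    estimated from the ensemble, which is why a large ensemble is needed.
    """
    rng = np.random.default_rng(seed)
    n_ens = len(ensemble)
    X = ensemble - ensemble.mean(axis=0)                   # state anomalies
    Y = X @ H.T                                            # predicted-observation anomalies
    Pyy = Y.T @ Y / (n_ens - 1) + obs_err_std**2 * np.eye(len(obs))
    Pxy = X.T @ Y / (n_ens - 1)
    K = Pxy @ np.linalg.inv(Pyy)                           # Kalman gain
    perturbed = obs + rng.normal(0.0, obs_err_std, size=(n_ens, len(obs)))
    return ensemble + (perturbed - ensemble @ H.T) @ K.T   # updated ensemble
```

    The sampling error in the ensemble covariances Pxy and Pyy is the cost the paper attacks: the PCE surrogate propagates the same statistics without thousands of forward-model runs.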

  4. The role of simulation in the design of a neural network chip

    NASA Technical Reports Server (NTRS)

    Desai, Utpal; Roppel, Thaddeus A.; Padgett, Mary L.

    1993-01-01

An iterative, simulation-based design procedure for a neural network chip is introduced. For this design procedure, the goal is to produce a chip layout for a neural network in which the weights are determined by transistor gate width-to-length ratios. In a given iteration, the current layout is simulated using the circuit simulator SPICE, and layout adjustments are made based on conventional gradient-descent methods. After the iteration converges, the chip is fabricated. Monte Carlo analysis is used to predict the effect of statistical fabrication process variations on the overall performance of the neural network chip.

  5. Knowledge-based iterative model reconstruction: comparative image quality and radiation dose with a pediatric computed tomography phantom.

    PubMed

    Ryu, Young Jin; Choi, Young Hun; Cheon, Jung-Eun; Ha, Seongmin; Kim, Woo Sun; Kim, In-One

    2016-03-01

CT of pediatric phantoms can provide useful guidance for the optimization of knowledge-based iterative reconstruction CT. To compare radiation dose and image quality of CT images obtained at different radiation doses reconstructed with knowledge-based iterative reconstruction, hybrid iterative reconstruction and filtered back-projection. We scanned a 5-year-old anthropomorphic phantom at seven levels of radiation. We then reconstructed CT data with knowledge-based iterative reconstruction (iterative model reconstruction [IMR] levels 1, 2 and 3; Philips Healthcare, Andover, MA), hybrid iterative reconstruction (iDose(4), levels 3 and 7; Philips Healthcare, Andover, MA) and filtered back-projection. The noise, signal-to-noise ratio and contrast-to-noise ratio were calculated. We evaluated low-contrast resolution and detectability using low-contrast targets, and subjective and objective spatial resolution using line pairs and wire. With radiation at 100 peak kVp and 100 mAs (3.64 mSv), the relative doses ranged from 5% (0.19 mSv) to 150% (5.46 mSv). Lower noise and higher signal-to-noise, contrast-to-noise and objective spatial resolution were generally achieved in ascending order of filtered back-projection, iDose(4) levels 3 and 7, and IMR levels 1, 2 and 3, at all radiation dose levels. Compared with filtered back-projection at 100% dose, similar noise levels were obtained on IMR level 2 images at 24% dose and iDose(4) level 3 images at 50% dose, respectively. Regarding low-contrast resolution, low-contrast detectability and objective spatial resolution, IMR level 2 images at 24% dose showed comparable image quality with filtered back-projection at 100% dose. Subjective spatial resolution was not greatly affected by reconstruction algorithm. Reduced-dose IMR obtained at 0.92 mSv (24%) showed image quality similar to routine-dose filtered back-projection obtained at 3.64 mSv (100%) and to half-dose iDose(4) obtained at 1.81 mSv.

  6. Improved Savitzky-Golay-method-based fluorescence subtraction algorithm for rapid recovery of Raman spectra.

    PubMed

    Chen, Kun; Zhang, Hongyuan; Wei, Haoyun; Li, Yan

    2014-08-20

In this paper, we propose an improved subtraction algorithm for rapid recovery of Raman spectra that can substantially reduce the computation time. This algorithm is based on an improved Savitzky-Golay (SG) iterative smoothing method, which involves two key novel approaches: (a) the use of the Gauss-Seidel method and (b) the introduction of a relaxation factor into the iterative procedure. By applying a novel successive relaxation (SG-SR) iterative method to the relaxation factor, additional improvement in the convergence speed over the standard Savitzky-Golay procedure is realized. The proposed improved algorithm (the RIA-SG-SR algorithm), which uses SG-SR-based iteration instead of Savitzky-Golay iteration, has been optimized and validated with a mathematically simulated Raman spectrum, as well as experimentally measured Raman spectra from non-biological and biological samples. The method results in a significant reduction in computing cost while yielding consistent rejection of fluorescence and noise for spectra with low signal-to-fluorescence ratios and varied baselines. In the simulation, RIA-SG-SR achieved one order of magnitude improvement in iteration number and two orders of magnitude improvement in computation time compared with the range-independent background-subtraction algorithm (RIA). Furthermore, the computation time for processing an experimentally measured raw Raman spectrum from skin tissue decreased from 6.72 to 0.094 s. In general, the processing of the SG-SR method can be completed within dozens of milliseconds, providing a real-time procedure in practical situations.
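    The baseline SG iteration that the paper accelerates can be sketched as follows: repeatedly smooth the current baseline estimate and clip it below the spectrum, so the broad fluorescence background is retained while narrow Raman peaks are excluded. This plain iteration omits the paper's Gauss-Seidel and successive-relaxation speed-ups, and the window and iteration counts are illustrative assumptions.

```python
import numpy as np
from scipy.signal import savgol_filter

def sg_baseline_removal(spectrum, window=51, poly=3, n_iter=30):
    """Iterative Savitzky-Golay fluorescence-baseline estimation.

    Each pass smooths the baseline estimate and takes the pointwise
    minimum with the previous estimate, so sharp peaks erode away and
    the baseline slides under them; the recovered Raman signal is the
    spectrum minus this baseline.
    """
    baseline = np.asarray(spectrum, dtype=float).copy()
    for _ in range(n_iter):
        smoothed = savgol_filter(baseline, window, poly)
        baseline = np.minimum(baseline, smoothed)  # peaks cannot push the baseline up
    return spectrum - baseline
```

    Because the baseline never rises above the spectrum, the recovered signal is non-negative by construction; the relaxation factor in the paper's SG-SR variant reduces how many of these smoothing passes are needed.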

  7. Using discharge data to reduce structural deficits in a hydrological model with a Bayesian inference approach and the implications for the prediction of critical source areas

    NASA Astrophysics Data System (ADS)

    Frey, M. P.; Stamm, C.; Schneider, M. K.; Reichert, P.

    2011-12-01

    A distributed hydrological model was used to simulate the distribution of fast runoff formation as a proxy for critical source areas for herbicide pollution in a small agricultural catchment in Switzerland. We tested to what degree predictions based on prior knowledge without local measurements could be improved by conditioning on observed discharge. This learning process consisted of five steps: For the prior prediction (step 1), knowledge of the model parameters was coarse and predictions were fairly uncertain. In the second step, discharge data were used to update the prior parameter distribution. Effects of uncertainty in input data and model structure were accounted for by an autoregressive error model. This step decreased the width of the marginal distributions of parameters describing the lower boundary (percolation rates) but hardly affected soil hydraulic parameters. Residual analysis (step 3) revealed model structure deficits. We modified the model, and in the subsequent Bayesian updating (step 4) the widths of the posterior marginal distributions were reduced for most parameters compared to those of the prior. This incremental procedure led to a strong reduction in the uncertainty of the spatial prediction. Thus, despite using only spatially integrated data (discharge), the improved model structure can be expected to improve the spatially distributed predictions as well. The fifth step consisted of a test with independent spatial data on herbicide losses and revealed ambiguous results. The comparison depended critically on the ratio of event to pre-event water that was discharged, a ratio that cannot be estimated from hydrological data alone. The results demonstrate that the value of local data depends strongly on a correct model structure. An iterative procedure of Bayesian updating, model testing, and model modification is suggested.

  8. A fast reconstruction algorithm for fluorescence optical diffusion tomography based on preiteration.

    PubMed

    Song, Xiaolei; Xiong, Xiaoyun; Bai, Jing

    2007-01-01

    Fluorescence optical diffusion tomography in the near-infrared (NIR) bandwidth is considered to be one of the most promising ways for noninvasive molecular-based imaging. Many reconstruction approaches utilize iterative methods for data inversion; however, these are time-consuming and far from meeting real-time imaging demands. In this work, a fast preiteration algorithm based on the generalized inverse matrix is proposed. This method needs only one step of matrix-vector multiplication online, by pushing the iteration process offline. In the preiteration process, a second-order iterative format is employed to exponentially accelerate the convergence. Simulations based on an analytical diffusion model show that the distribution of fluorescent yield can be well estimated by this algorithm and that the reconstruction speed is remarkably increased.
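
    The offline/online split described above can be sketched with a classical second-order (hyper-power) iteration for the generalized inverse; this is a hedged stand-in for the paper's exact preiteration scheme, with the matrix size and iteration count chosen arbitrarily.

```python
import numpy as np

def preiterate_pinv(A, n_iter=50):
    """Offline: approximate the generalized inverse of A with the
    second-order iteration X <- X (2I - A X), which converges
    quadratically from the scaled initial guess below."""
    X = A.T / (np.linalg.norm(A, 1) * np.linalg.norm(A, np.inf))
    I = np.eye(A.shape[0])
    for _ in range(n_iter):
        X = X @ (2.0 * I - A @ X)
    return X

rng = np.random.default_rng(0)
A = rng.standard_normal((40, 60))   # toy "weight matrix" of the forward model
X = preiterate_pinv(A)              # all iterations executed offline
b = rng.standard_normal(40)         # measured boundary data (toy)
x = X @ b                           # online: a single matrix-vector product
```

    The online cost is one matrix-vector product, which is what makes the approach attractive for real-time reconstruction.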

  9. Iterative inversion of deformation vector fields with feedback control.

    PubMed

    Dubey, Abhishek; Iliopoulos, Alexandros-Stavros; Sun, Xiaobai; Yin, Fang-Fang; Ren, Lei

    2018-05-14

    Often, the inverse deformation vector field (DVF) is needed together with the corresponding forward DVF in four-dimensional (4D) reconstruction and dose calculation, adaptive radiation therapy, and simultaneous deformable registration. This study aims at improving both accuracy and efficiency of iterative algorithms for DVF inversion, and advancing our understanding of divergence and latency conditions. We introduce a framework of fixed-point iteration algorithms with active feedback control for DVF inversion. Based on rigorous convergence analysis, we design control mechanisms for modulating the inverse consistency (IC) residual of the current iterate, to be used as feedback into the next iterate. The control is designed adaptively to the input DVF with the objective to enlarge the convergence area and expedite convergence. Three particular settings of feedback control are introduced: constant value over the domain throughout the iteration; alternating values between iteration steps; and spatially variant values. We also introduce three spectral measures of the displacement Jacobian for characterizing a DVF. These measures reveal the critical role of what we term the nontranslational displacement component (NTDC) of the DVF. We carry out inversion experiments with an analytical DVF pair, and with DVFs associated with thoracic CT images of six patients at end of expiration and end of inspiration. The NTDC-adaptive iterations are shown to attain a larger convergence region at a faster pace compared to previous nonadaptive DVF inversion iteration algorithms. In our numerical experiments, alternating control yields smaller IC residuals and inversion errors than constant control. Spatially variant control renders residuals and errors smaller by at least an order of magnitude, compared to the other schemes, in no more than 10 steps. Inversion results also show remarkable quantitative agreement with analysis-based predictions. Our analysis captures properties of DVF data associated with clinical CT images, and provides new understanding of iterative DVF inversion algorithms with a simple residual feedback control. Adaptive control is necessary and highly effective in the presence of nonsmall NTDCs. The adaptive iterations or the spectral measures, or both, may potentially be incorporated into deformable image registration methods. © 2018 American Association of Physicists in Medicine.
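
    A one-dimensional sketch of fixed-point DVF inversion with a constant feedback control on the IC residual (the simplest of the three control settings above; the grid, the displacement field and the control value `mu` are illustrative assumptions):

```python
import numpy as np

def invert_dvf_1d(u, x, mu=0.8, n_iter=50):
    """Fixed-point inversion of a 1-D DVF with a constant feedback
    control mu applied to the inverse-consistency (IC) residual
        r_k(x) = v_k(x) + u(x + v_k(x)).
    mu = 1 recovers the classic uncontrolled iteration."""
    v = np.zeros_like(x)
    for _ in range(n_iter):
        r = v + np.interp(x + v, x, u)   # IC residual of the current iterate
        v = v - mu * r                   # controlled feedback update
    return v

x = np.linspace(0.0, 1.0, 201)
u = 0.1 * np.sin(2 * np.pi * x)          # forward displacement field
v = invert_dvf_1d(u, x)
# inverse consistency check: v(x) + u(x + v(x)) should be near zero
ic = v + np.interp(x + v, x, u)
```

    The paper's contribution is choosing the control adaptively from spectral measures of the DVF; the constant-`mu` version here only illustrates the feedback structure.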

  10. An adaptive Gaussian process-based iterative ensemble smoother for data assimilation

    NASA Astrophysics Data System (ADS)

    Ju, Lei; Zhang, Jiangjiang; Meng, Long; Wu, Laosheng; Zeng, Lingzao

    2018-05-01

    Accurate characterization of subsurface hydraulic conductivity is vital for modeling of subsurface flow and transport. The iterative ensemble smoother (IES) has been proposed to estimate the heterogeneous parameter field. As a Monte Carlo-based method, IES requires a relatively large ensemble size to guarantee its performance. To improve the computational efficiency, we propose an adaptive Gaussian process (GP)-based iterative ensemble smoother (GPIES) in this study. At each iteration, the GP surrogate is adaptively refined by adding a few new base points chosen from the updated parameter realizations. The sensitivity information between model parameters and measurements is then calculated from a large number of realizations generated by the GP surrogate at virtually no computational cost. Since the original model evaluations are only required for the base points, whose number is much smaller than the ensemble size, the computational cost is significantly reduced. The applicability of GPIES in estimating heterogeneous conductivity is evaluated on saturated and unsaturated flow problems. Without sacrificing estimation accuracy, GPIES achieves about an order of magnitude speed-up compared with the standard IES. Although subsurface flow problems are considered in this study, the proposed method can be equally applied to other hydrological models.
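
    The surrogate idea can be sketched as follows: run the expensive model only at a few base points, train a GP on them, and compute the smoother's covariances from cheap surrogate predictions over a large ensemble. The toy forward model, RBF kernel length scale, and single scalar observation are illustrative assumptions, and only one (non-iterated) update step is shown.

```python
import numpy as np

def rbf(A, B, ell=1.0):
    # squared-exponential kernel between two point sets
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / ell ** 2)

def gp_mean(Xb, yb, Xq, ell=1.0, jitter=1e-6):
    # GP posterior mean: the cheap surrogate trained on the base points
    K = rbf(Xb, Xb, ell) + jitter * np.eye(len(Xb))
    return rbf(Xq, Xb, ell) @ np.linalg.solve(K, yb)

def forward(theta):                      # "expensive" forward model (toy)
    return np.sin(theta[:, 0]) + 0.5 * theta[:, 1]

rng = np.random.default_rng(4)
d_obs, sd = 0.8, 0.1                     # observation and its noise std
ens = rng.normal(0.0, 1.0, (500, 2))     # parameter ensemble
base = ens[:40]                          # real model runs only here
y_base = forward(base)
y_ens = gp_mean(base, y_base, ens)       # surrogate predictions: nearly free

# one ensemble-smoother update using surrogate-based covariances
dm = ens - ens.mean(0)
dy = y_ens - y_ens.mean()
c_md = dm.T @ dy / (len(ens) - 1)        # parameter-data cross-covariance
c_dd = dy @ dy / (len(ens) - 1)          # data variance
gain = c_md / (c_dd + sd ** 2)
ens_new = ens + np.outer(d_obs + rng.normal(0, sd, len(ens)) - y_ens, gain)
```

    In GPIES this update is iterated, with a few new base points added to the surrogate each round; here a single update with a fixed surrogate illustrates the cost structure.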

  11. Preliminary consideration of CFETR ITER-like case diagnostic system.

    PubMed

    Li, G S; Yang, Y; Wang, Y M; Ming, T F; Han, X; Liu, S C; Wang, E H; Liu, Y K; Yang, W J; Li, G Q; Hu, Q S; Gao, X

    2016-11-01

    Chinese Fusion Engineering Test Reactor (CFETR) is a new superconducting tokamak device being designed in China, which aims at bridging the gap between ITER and DEMO, where DEMO is a tokamak demonstration fusion reactor. Two diagnostic cases, an ITER-like case and a towards-DEMO case, have been considered for the CFETR early and later operating phases, respectively. In this paper, some preliminary considerations of the ITER-like case are presented. Based on the ITER diagnostic system, three versions of increased complexity and coverage of the ITER-like case diagnostic system have been developed, with different goals and functions. Version A aims only at machine protection and basic control. Both version B and version C are mainly for machine protection and basic and advanced control, but version C has an increased level of redundancy necessary for improved measurement capability. The performance of these versions and the needed R&D work are outlined.

  12. Preliminary consideration of CFETR ITER-like case diagnostic system

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, G. S.; Liu, Y. K.; Gao, X.

    2016-11-15

    Chinese Fusion Engineering Test Reactor (CFETR) is a new superconducting tokamak device being designed in China, which aims at bridging the gap between ITER and DEMO, where DEMO is a tokamak demonstration fusion reactor. Two diagnostic cases, an ITER-like case and a towards-DEMO case, have been considered for the CFETR early and later operating phases, respectively. In this paper, some preliminary considerations of the ITER-like case are presented. Based on the ITER diagnostic system, three versions of increased complexity and coverage of the ITER-like case diagnostic system have been developed, with different goals and functions. Version A aims only at machine protection and basic control. Both version B and version C are mainly for machine protection and basic and advanced control, but version C has an increased level of redundancy necessary for improved measurement capability. The performance of these versions and the needed R&D work are outlined.

  13. Iterative CT reconstruction using coordinate descent with ordered subsets of data

    NASA Astrophysics Data System (ADS)

    Noo, F.; Hahn, K.; Schöndube, H.; Stierstorfer, K.

    2016-04-01

    Image reconstruction based on iterative minimization of a penalized weighted least-squares criterion has become an important topic of research in X-ray computed tomography. This topic is motivated by increasing evidence that such a formalism may enable a significant reduction in the dose imparted to the patient while maintaining or improving image quality. One important issue associated with this iterative image reconstruction concept is slow convergence and the associated computational effort. For this reason, there is interest in finding methods that produce approximate versions of the targeted image with a small number of iterations and an acceptable level of discrepancy. We introduce here a novel method to produce such approximations: ordered subsets in combination with iterative coordinate descent. Preliminary results demonstrate that this method can produce, within 10 iterations and using only a constant image as the initial condition, satisfactory reconstructions that retain the noise properties of the targeted image.

  14. Asymptotic (h tending to infinity) absolute stability for BDFs applied to stiff differential equations. [Backward Differentiation Formulas

    NASA Technical Reports Server (NTRS)

    Krogh, F. T.; Stewart, K.

    1984-01-01

    Methods based on backward differentiation formulas (BDFs) for solving stiff differential equations require iterating to approximate the solution of the corrector equation on each step. One hope for reducing the cost of this is to make do with iteration matrices that are known to have errors and to do no more iterations than are necessary to maintain the stability of the method. This paper, following work by Klopfenstein, examines the effect of errors in the iteration matrix on the stability of the method. Application of the results to an algorithm is discussed briefly.

  15. An iterative solver for the 3D Helmholtz equation

    NASA Astrophysics Data System (ADS)

    Belonosov, Mikhail; Dmitriev, Maxim; Kostin, Victor; Neklyudov, Dmitry; Tcheverda, Vladimir

    2017-09-01

    We develop a frequency-domain iterative solver for numerical simulation of acoustic waves in 3D heterogeneous media. It is based on the application of a unique preconditioner to the Helmholtz equation that ensures convergence for Krylov subspace iteration methods. Effective inversion of the preconditioner involves the Fast Fourier Transform (FFT) and numerical solution of a series of boundary value problems for ordinary differential equations. Matrix-by-vector multiplication for iterative inversion of the preconditioned matrix involves inversion of the preconditioner and pointwise multiplication of grid functions. Our solver has been verified by benchmarking against exact solutions and a time-domain solver.

  16. Detection of mouse liver cancer via a parallel iterative shrinkage method in hybrid optical/microcomputed tomography imaging

    NASA Astrophysics Data System (ADS)

    Wu, Ping; Liu, Kai; Zhang, Qian; Xue, Zhenwen; Li, Yongbao; Ning, Nannan; Yang, Xin; Li, Xingde; Tian, Jie

    2012-12-01

    Liver cancer is one of the most common malignant tumors worldwide. In order to enable the noninvasive detection of small liver tumors in mice, we present a parallel iterative shrinkage (PIS) algorithm for dual-modality tomography. It takes advantage of microcomputed tomography and multiview bioluminescence imaging, providing anatomical structure and bioluminescence intensity information to reconstruct the size and location of tumors. By incorporating prior knowledge of signal sparsity, we combine several mathematical strategies, including a specific smooth convex approximation, an iterative shrinkage operator, and an affine subspace, with the PIS method, which guarantees the accuracy, efficiency, and reliability of three-dimensional reconstruction. An in vivo experiment on a bead-implanted mouse was then performed to validate the feasibility of this method. The findings indicate that a tiny lesion less than 3 mm in diameter can be localized with a position bias of no more than 1 mm; the computational efficiency is one to three orders of magnitude better than that of existing algorithms; and the approach is robust to different regularization parameters and lp norms. Finally, we have applied this algorithm to another in vivo experiment on an HCCLM3 orthotopic xenograft mouse model, which suggests the PIS method holds promise for practical applications in whole-body cancer detection.

  17. Property Changes of Cyanate Ester/epoxy Insulation Systems Caused by AN Iter-Like Double Impregnation and by Reactor Irradiation

    NASA Astrophysics Data System (ADS)

    Prokopec, R.; Humer, K.; Fillunger, H.; Maix, R. K.; Weber, H. W.

    2010-04-01

    Because of the double pancake design of the ITER TF coils, the insulation will be applied in several steps. As a consequence, the conductor insulation as well as the pancake insulation will undergo multiple heat cycles in addition to the initial curing cycle. In particular, the properties of the organic resin may be affected, since its heat resistance is limited. Two nominally identical types of sample, consisting of wrapped R-glass/Kapton layers vacuum-impregnated with a cyanate ester/epoxy blend, were prepared. The build-up of the reinforcement was identical for both insulation systems; however, one system was fabricated in two steps. In the first step only one half of the reinforcing layers was impregnated and cured. Afterwards the remaining layers were wrapped onto the already cured system, before the resulting system was impregnated and cured again. The mechanical properties were characterized prior to and after irradiation to fast neutron fluences of 1 and 2×10²² m⁻² (E > 0.1 MeV) in tension and interlaminar shear at 77 K. In order to simulate the pulsed operation of ITER, tension-tension fatigue measurements were performed in load-controlled mode. The results do not show any evidence of reduced mechanical strength caused by the additional heat cycle.

  18. Development of a domain-specific genetic language to design Chlamydomonas reinhardtii expression vectors.

    PubMed

    Wilson, Mandy L; Okumoto, Sakiko; Adam, Laura; Peccoud, Jean

    2014-01-15

    Expression vectors used in different biotechnology applications are designed with domain-specific rules. For instance, promoters, origins of replication or homologous recombination sites are host-specific. Similarly, chromosomal integration or viral delivery of an expression cassette imposes specific structural constraints. As de novo gene synthesis and synthetic biology methods permeate many biotechnology specialties, the design of application-specific expression vectors becomes the new norm. In this context, it is desirable to formalize vector design strategies applicable in different domains. Using the design of constructs to express genes in the chloroplast of Chlamydomonas reinhardtii as an example, we show that a vector design strategy can be formalized as a domain-specific language. We have developed a graphical editor of context-free grammars usable by biologists without prior exposure to language theory. This environment makes it possible for biologists to iteratively improve their design strategies throughout the course of a project. It is also possible to ensure that vectors designed with early iterations of the language are consistent with the latest iteration of the language. The context-free grammar editor is part of the GenoCAD application. A public instance of GenoCAD is available at http://www.genocad.org. GenoCAD source code is available from SourceForge and licensed under the Apache v2.0 open source license.

  19. Compressively sampled MR image reconstruction using generalized thresholding iterative algorithm

    NASA Astrophysics Data System (ADS)

    Elahi, Sana; kaleem, Muhammad; Omer, Hammad

    2018-01-01

    Compressed sensing (CS) is an emerging area of interest in Magnetic Resonance Imaging (MRI). CS is used for the reconstruction of images from a very limited number of samples in k-space. This significantly reduces the MRI data acquisition time. One important requirement for signal recovery in CS is the use of an appropriate non-linear reconstruction algorithm. It is a challenging task to choose a reconstruction algorithm that would accurately reconstruct the MR images from the under-sampled k-space data. Various algorithms have been used to solve the system of non-linear equations for better image quality and reconstruction speed in CS. In the recent past, the iterative soft thresholding algorithm (ISTA) has been introduced in CS-MRI. This algorithm directly cancels the incoherent artifacts produced because of the undersampling in k-space. This paper introduces an improved iterative algorithm based on a p-thresholding technique for CS-MRI image reconstruction. The use of a p-thresholding function promotes sparsity in the image, which is a key factor for CS-based image reconstruction. The p-thresholding based iterative algorithm is a modification of ISTA, and minimizes non-convex functions. It has been shown that the proposed p-thresholding iterative algorithm can be used effectively to recover a fully sampled image from the under-sampled data in MRI. The performance of the proposed method is verified using simulated and actual MRI data taken at St. Mary's Hospital, London. The quality of the reconstructed images is measured in terms of peak signal-to-noise ratio (PSNR), artifact power (AP), and structural similarity index measure (SSIM). The proposed approach shows improved performance when compared to other iterative algorithms based on log thresholding, soft thresholding and hard thresholding techniques at different reduction factors.
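
    A compact sketch of the p-thresholded ISTA idea on a toy compressed-sensing problem. The exact form of the p-thresholding operator varies in the literature; the reweighted form below is one common choice (not necessarily the paper's), and the problem sizes and regularization weight are illustrative.

```python
import numpy as np

def p_threshold(z, t, p):
    """Generalized p-thresholding (a common reweighted form);
    p = 1 reduces exactly to ISTA's soft-thresholding operator."""
    az = np.abs(z)
    w = np.zeros_like(az)
    mask = az > 1e-12
    w[mask] = az[mask] ** (p - 1.0)      # reweighting; w = 1 when p = 1
    return np.sign(z) * np.maximum(az - t * w, 0.0)

def ista_p(A, b, lam, p=1.0, n_iter=2000):
    L = np.linalg.norm(A, 2) ** 2        # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        # gradient step on the data term, then generalized thresholding
        x = p_threshold(x - A.T @ (A @ x - b) / L, lam / L, p)
    return x

rng = np.random.default_rng(5)
A = rng.standard_normal((50, 100)) / np.sqrt(50)   # undersampled operator
x_true = np.zeros(100)
x_true[[5, 23, 47, 61, 90]] = [1.0, -1.0, 1.5, -0.5, 1.0]
b = A @ x_true                           # noise-free undersampled data
x_rec = ista_p(A, b, lam=0.005)          # p = 1: plain ISTA
```

    Setting `p < 1` in `ista_p` gives the non-convex variant the paper studies; with `p = 1` the loop is standard ISTA.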

  20. Adaptive dynamic programming for discrete-time linear quadratic regulation based on multirate generalised policy iteration

    NASA Astrophysics Data System (ADS)

    Chun, Tae Yoon; Lee, Jae Young; Park, Jin Bae; Choi, Yoon Ho

    2018-06-01

    In this paper, we propose two multirate generalised policy iteration (GPI) algorithms applied to discrete-time linear quadratic regulation problems. The proposed algorithms are extensions of the existing GPI algorithm that consists of the approximate policy evaluation and policy improvement steps. The two proposed schemes, named heuristic dynamic programming (HDP) and dual HDP (DHP), based on multirate GPI, use multi-step estimation (M-step Bellman equation) at the approximate policy evaluation step for estimating the value function and its gradient, called the costate, respectively. Then, we show that these two methods with the same update horizon can be considered equivalent in the iteration domain. Furthermore, monotonically increasing and decreasing convergences, the so-called value iteration (VI)-mode and policy iteration (PI)-mode convergences, are proved to hold for the proposed multirate GPIs. Further, general convergence properties in terms of eigenvalues are also studied. Data-driven online implementation methods for the proposed HDP and DHP are demonstrated and, finally, we present the results of numerical simulations performed to verify the effectiveness of the proposed methods.
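
    The single-rate special case that multirate GPI generalizes can be sketched with classical policy iteration for discrete-time LQR (Hewer's algorithm): policy evaluation solves a discrete Lyapunov equation for the cost of the current policy, and policy improvement is greedy with respect to that cost. The system matrices and initial gain below are illustrative.

```python
import numpy as np
from scipy.linalg import solve_discrete_lyapunov, solve_discrete_are

A = np.array([[1.0, 0.1], [0.0, 1.0]])   # toy double-integrator-like plant
B = np.array([[0.0], [0.1]])
Q = np.eye(2)
R = np.array([[1.0]])

K = np.array([[1.0, 1.0]])               # any stabilizing initial policy
for _ in range(20):
    Ac = A - B @ K
    # policy evaluation: P solves Ac' P Ac - P + (Q + K'RK) = 0
    P = solve_discrete_lyapunov(Ac.T, Q + K.T @ R @ K)
    # policy improvement: greedy gain w.r.t. the evaluated cost
    K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)

P_star = solve_discrete_are(A, B, Q, R)  # reference Riccati solution
```

    Policy iteration converges quadratically to the Riccati solution from any stabilizing start; the paper's multirate schemes interpolate between this PI mode and a value-iteration mode via the M-step Bellman equation.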

  1. A heuristic statistical stopping rule for iterative reconstruction in emission tomography.

    PubMed

    Ben Bouallègue, F; Crouzet, J F; Mariano-Goulart, D

    2013-01-01

    We propose a statistical stopping criterion for iterative reconstruction in emission tomography based on a heuristic statistical description of the reconstruction process. The method was assessed for MLEM reconstruction. Based on Monte Carlo numerical simulations and using a perfectly modeled system matrix, our method was compared with classical iterative reconstruction followed by low-pass filtering in terms of Euclidean distance to the exact object, noise, and resolution. The stopping criterion was then evaluated with realistic PET data of a Hoffman brain phantom produced using the GATE platform for different count levels. The numerical experiments showed that compared with the classical method, our technique yielded a significant improvement of the noise-resolution tradeoff for a wide range of counting statistics compatible with routine clinical settings. When working with realistic data, the stopping rule allowed a qualitatively and quantitatively efficient determination of the optimal image. Our method appears to give a reliable estimation of the optimal stopping point for iterative reconstruction. It should thus be of practical interest as it produces images with similar or better quality than classical post-filtered iterative reconstruction with a mastered computation time.
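
    For context, a minimal MLEM loop with a simple relative-change stopping test; this test is only a stand-in for the paper's statistical stopping rule, and the tridiagonal toy system is an illustrative assumption.

```python
import numpy as np

def mlem(A, b, n_max=2000, tol=1e-9):
    """MLEM for nonnegative systems A x = b.  Stops when the relative
    change between iterates falls below tol (a simple surrogate for a
    statistical stopping criterion)."""
    x = np.ones(A.shape[1])
    sens = A.sum(axis=0)                  # sensitivity image A^T 1
    for k in range(n_max):
        x_new = x * (A.T @ (b / (A @ x))) / sens
        if np.linalg.norm(x_new - x) <= tol * np.linalg.norm(x):
            return x_new, k + 1
        x = x_new
    return x, n_max

# toy nonnegative "system matrix": a small tridiagonal blur
n = 10
A = 0.6 * np.eye(n) + 0.2 * np.eye(n, k=1) + 0.2 * np.eye(n, k=-1)
x_true = np.arange(1.0, n + 1.0)          # nonnegative activity
b = A @ x_true                            # noise-free projections
x_rec, n_it = mlem(A, b)
```

    MLEM preserves nonnegativity by construction; the whole point of a statistical stopping rule is that with noisy data the optimal iterate occurs well before numerical convergence.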

  2. Arc detection for the ICRF system on ITER

    NASA Astrophysics Data System (ADS)

    D'Inca, R.

    2011-12-01

    The ICRF system for ITER is designed to respect the high-voltage breakdown limits. However, arcs can still statistically occur and must be quickly detected and suppressed by shutting the RF power down. For the conception of a reliable and efficient detector, an analysis of the mechanism of arcs is necessary to find their unique signature. Numerous systems have been conceived to address the issue of arc detection: VSWR-based detectors, RF noise detectors, sound detectors, optical detectors, and S-matrix-based detectors. Until now, none of them has succeeded in demonstrating the fulfillment of all requirements, and the studies for ITER now follow three directions: improvement of the existing concepts to fix their flaws, development of new theoretically fully compliant detectors (like the GUIDAR), and combination of several detectors to benefit from the advantages of each of them. Together with the physical and engineering challenges, the development of an arc detection system for ITER raises methodological concerns about extrapolating results from basic experiments and present machines to the ITER-scale ICRF system and about conducting a relevant risk analysis.

  3. Solving large mixed linear models using preconditioned conjugate gradient iteration.

    PubMed

    Strandén, I; Lidauer, M

    1999-12-01

    Continuous evaluation of dairy cattle with a random regression test-day model requires a fast solving method and algorithm. A new computing technique feasible in Jacobi and conjugate gradient based iterative methods using iteration on data is presented. In the new computing technique, the calculations in multiplication of a vector by a matrix were reorganized into three steps instead of the commonly used two steps. The three-step method was implemented in a general mixed linear model program that used preconditioned conjugate gradient iteration. Performance of this program in comparison to other general solving programs was assessed via estimation of breeding values using univariate, multivariate, and random regression test-day models. Central processing unit time per iteration with the new three-step technique was, at best, one-third that needed with the old technique. Performance was best with the test-day model, which was the largest and most complex model used. The new program did well in comparison to other general software. Programs keeping the mixed model equations in random access memory required at least 20% and 435% more time to solve the univariate and multivariate animal models, respectively. The second-best iteration-on-data program took approximately three and five times longer for the animal and test-day models, respectively, than did the new program. Good performance was due to fast computing time per iteration and quick convergence to the final solutions. Use of preconditioned conjugate gradient based methods in solving large breeding value problems is supported by our findings.
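
    The solver core can be sketched as standard preconditioned conjugate gradients with a Jacobi (diagonal) preconditioner; the paper's three-step matrix-vector technique and iteration-on-data storage are not reproduced here, and the SPD test matrix is an illustrative assumption.

```python
import numpy as np

def pcg(A, b, M_inv, tol=1e-10, n_max=1000):
    """Preconditioned conjugate gradients for a symmetric positive
    definite system A x = b.  M_inv applies the preconditioner."""
    x = np.zeros_like(b)
    r = b - A @ x
    z = M_inv(r)
    p = z.copy()
    rz = r @ z
    for k in range(n_max):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) <= tol * np.linalg.norm(b):
            return x, k + 1
        z = M_inv(r)
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x, n_max

rng = np.random.default_rng(2)
G = rng.standard_normal((120, 80))
A = G.T @ G + 80.0 * np.eye(80)       # SPD "mixed-model" coefficient matrix
b = rng.standard_normal(80)
d = np.diag(A)
x, iters = pcg(A, b, lambda r: r / d)  # Jacobi preconditioner
```

    For mixed-model equations the diagonal (or block-diagonal) preconditioner is cheap to store, which is what makes iteration on data feasible for very large systems.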

  4. DART: a practical reconstruction algorithm for discrete tomography.

    PubMed

    Batenburg, Kees Joost; Sijbers, Jan

    2011-09-01

    In this paper, we present an iterative reconstruction algorithm for discrete tomography, called discrete algebraic reconstruction technique (DART). DART can be applied if the scanned object is known to consist of only a few different compositions, each corresponding to a constant gray value in the reconstruction. Prior knowledge of the gray values for each of the compositions is exploited to steer the current reconstruction towards a reconstruction that contains only these gray values. Based on experiments with both simulated CT data and experimental μCT data, it is shown that DART is capable of computing more accurate reconstructions from a small number of projection images, or from a small angular range, than alternative methods. It is also shown that DART can deal effectively with noisy projection data and that the algorithm is robust with respect to errors in the estimation of the gray values.

  5. A compressed sensing based approach on Discrete Algebraic Reconstruction Technique.

    PubMed

    Demircan-Tureyen, Ezgi; Kamasak, Mustafa E

    2015-01-01

    Discrete tomography (DT) techniques are capable of computing better results, even when using fewer projections than continuous tomography techniques. The Discrete Algebraic Reconstruction Technique (DART) is an iterative reconstruction method proposed to achieve this goal by exploiting prior knowledge of the gray levels and assuming that the scanned object is composed of a few different densities. In this paper, the DART method is combined with an initial total variation minimization (TvMin) phase to ensure a better initial guess, and extended with a segmentation procedure in which the threshold values are estimated from a finite set of candidates to minimize both the projection error and the total variation (TV) simultaneously. The accuracy and the robustness of the algorithm are compared with the original DART through simulation experiments under (1) a limited number of projections, (2) a limited angular view and (3) noisy projections.

  6. Is probabilistic bias analysis approximately Bayesian?

    PubMed Central

    MacLehose, Richard F.; Gustafson, Paul

    2011-01-01

    Case-control studies are particularly susceptible to differential exposure misclassification when exposure status is determined following incident case status. Probabilistic bias analysis methods have been developed as ways to adjust standard effect estimates based on the sensitivity and specificity of exposure misclassification. The iterative sampling method advocated in probabilistic bias analysis bears a distinct resemblance to a Bayesian adjustment; however, it is not identical. Furthermore, without a formal theoretical framework (Bayesian or frequentist), the results of a probabilistic bias analysis remain somewhat difficult to interpret. We describe, both theoretically and empirically, the extent to which probabilistic bias analysis can be viewed as approximately Bayesian. While the differences between probabilistic bias analysis and Bayesian approaches to misclassification can be substantial, these situations often involve unrealistic prior specifications and are relatively easy to detect. Outside of these special cases, probabilistic bias analysis and Bayesian approaches to exposure misclassification in case-control studies appear to perform equally well. PMID:22157311
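
    The iterative sampling flavor of probabilistic bias analysis can be sketched as follows: draw sensitivity and specificity from priors, back-correct the observed 2x2 table with the classical matrix method, and collect the distribution of adjusted odds ratios. The counts and prior ranges below are hypothetical, and the simple nondifferential correction shown is not the full method discussed in the paper.

```python
import numpy as np

# hypothetical observed 2x2 table (exposure misclassified):
#                exposed  unexposed
a0, b0 = 45, 55            # cases
c0, d0 = 25, 75            # controls
crude_or = (a0 * d0) / (b0 * c0)

rng = np.random.default_rng(6)
n_sim = 10000
se = rng.uniform(0.75, 0.95, n_sim)   # prior on sensitivity
sp = rng.uniform(0.85, 0.99, n_sim)   # prior on specificity (nondifferential)

# matrix-method back-correction of the expected true counts
A = (a0 - (1 - sp) * (a0 + b0)) / (se + sp - 1)
C = (c0 - (1 - sp) * (c0 + d0)) / (se + sp - 1)
B, D = (a0 + b0) - A, (c0 + d0) - C
ok = (A > 0) & (B > 0) & (C > 0) & (D > 0)   # keep admissible draws only
or_adj = (A[ok] * D[ok]) / (B[ok] * C[ok])
```

    For nondifferential misclassification of a binary exposure, the correction moves an odds ratio above 1 further from the null, so the adjusted distribution sits above the crude estimate; a fully Bayesian analysis would instead condition on the data via a likelihood, which is exactly the gap the paper examines.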

  7. Mutual information, neural networks and the renormalization group

    NASA Astrophysics Data System (ADS)

    Koch-Janusz, Maciej; Ringel, Zohar

    2018-06-01

    Physical systems differing in their microscopic details often display strikingly similar behaviour when probed at macroscopic scales. Those universal properties, largely determining their physical characteristics, are revealed by the powerful renormalization group (RG) procedure, which systematically retains `slow' degrees of freedom and integrates out the rest. However, the important degrees of freedom may be difficult to identify. Here we demonstrate a machine-learning algorithm capable of identifying the relevant degrees of freedom and executing RG steps iteratively without any prior knowledge about the system. We introduce an artificial neural network based on a model-independent, information-theoretic characterization of a real-space RG procedure, which performs this task. We apply the algorithm to classical statistical physics problems in one and two dimensions. We demonstrate RG flow and extract the Ising critical exponent. Our results demonstrate that machine-learning techniques can extract abstract physical concepts and consequently become an integral part of theory- and model-building.

  8. Robust design of feedback feed-forward iterative learning control based on 2D system theory for linear uncertain systems

    NASA Astrophysics Data System (ADS)

    Li, Zhifu; Hu, Yueming; Li, Di

    2016-08-01

    For a class of linear discrete-time uncertain systems, a feedback feed-forward iterative learning control (ILC) scheme is proposed, which comprises an iterative learning controller and two current-iteration feedback controllers. The iterative learning controller is used to improve performance along the iteration direction and the feedback controllers are used to improve performance along the time direction. First, the uncertain feedback feed-forward ILC system is represented by an uncertain two-dimensional Roesser model. Second, two robust control schemes are proposed: one ensures that the feedback feed-forward ILC system is bounded-input bounded-output stable along the time direction, and the other ensures that it is asymptotically stable along the time direction. Both schemes guarantee that the system is robustly monotonically convergent along the iteration direction. Third, sufficient conditions for robust convergence are given in the form of a linear matrix inequality (LMI), which can also be used to determine the gain matrix of the feedback feed-forward iterative learning controller. Finally, simulation results are presented to demonstrate the effectiveness of the proposed schemes.
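
    A scalar sketch of the feedback feed-forward ILC structure (the plant, gains, and reference are illustrative assumptions; the 2D Roesser analysis and LMI gain design are not reproduced): a P-type feedback acts along the time axis within each trial, while the learning update improves the feed-forward input across trials.

```python
import numpy as np

a, bg = 0.9, 0.5                          # plant: x(t+1) = a x(t) + bg u(t)
N = 50
r = np.sin(np.linspace(0, 2 * np.pi, N))  # reference trajectory

def run_trial(u_ff, kp=0.4):
    """One trial: current-iteration P feedback (time direction) on top
    of the learned feed-forward input (iteration direction)."""
    x = 0.0
    y = np.zeros(N)
    for t in range(N):
        e_fb = (r[t - 1] - y[t - 1]) if t > 0 else 0.0
        x = a * x + bg * (u_ff[t] + kp * e_fb)
        y[t] = x
    return y

u = np.zeros(N)
errs = []
for _ in range(60):                       # iteration direction
    y = run_trial(u)
    e = r - y
    errs.append(np.linalg.norm(e))
    u = u + 0.9 * e                       # learning update on the feed-forward
```

    The learning gain 0.9 is chosen so the trial-to-trial error map is contractive (in the scalar case, |1 - gamma * g| < 1 for the closed-loop input-output gain g); the LMI condition in the paper plays the analogous role under model uncertainty.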

  9. Changing the Way We Build Games: A Design-Based Research Study Examining the Implementation of Homemade PowerPoint Games in the Classroom

    ERIC Educational Resources Information Center

    Siko, Jason Paul

    2012-01-01

    This design-based research study examined the effects of a game design project on student test performance, with refinements made to the implementation after each of the three iterations of the study. The changes to the implementation over the three iterations were based on the literature for the three justifications for the use of homemade…

  10. Development of an evidence-based review with recommendations using an online iterative process.

    PubMed

    Rudmik, Luke; Smith, Timothy L

    2011-01-01

    The practice of modern medicine is governed by evidence-based principles. Due to the plethora of medical literature, clinicians often rely on systematic reviews and clinical guidelines to summarize the evidence and provide best practices. Implementation of an evidence-based clinical approach can minimize variation in health care delivery and optimize the quality of patient care. This article reports a method for developing an "Evidence-based Review with Recommendations" using an online iterative process. The manuscript describes the following steps involved in this process: Clinical topic selection, Evidence-based review assignment, Literature review and initial manuscript preparation, Iterative review process with author selection, and Manuscript finalization. The goal of this article is to improve efficiency and increase the production of evidence-based reviews while maintaining the high quality and transparency associated with the rigorous methodology utilized for clinical guideline development. With the rise of evidence-based medicine, most medical and surgical specialties have an abundance of clinical topics which would benefit from a formal evidence-based review. Although clinical guideline development is an important methodology, the associated challenges limit development to only the absolute highest priority clinical topics. As outlined in this article, the online iterative approach to the development of an Evidence-based Review with Recommendations may improve productivity without compromising the quality associated with formal guideline development methodology. Copyright © 2011 American Rhinologic Society-American Academy of Otolaryngic Allergy, LLC.

  11. Development of an autonomous treatment planning strategy for radiation therapy with effective use of population-based prior data.

    PubMed

    Wang, Huan; Dong, Peng; Liu, Hongcheng; Xing, Lei

    2017-02-01

    Current treatment planning remains a costly and labor-intensive procedure that requires multiple trial-and-error adjustments of system parameters such as the weighting factors and prescriptions. The purpose of this work is to develop an autonomous treatment planning strategy that makes effective use of prior knowledge within a clinically realistic treatment planning platform to facilitate the radiation therapy workflow. Our technique consists of three major components: (i) a clinical treatment planning system (TPS); (ii) a decision function constructed from an ensemble of prior treatment plans; and (iii) an outer-loop optimization, independent of the clinical TPS, that evaluates the TPS-generated plan and drives the search toward a solution optimizing the decision function. Microsoft (MS) Visual Studio Coded UI is applied to record common planner-TPS interactions as subroutines for querying and interacting with the TPS. These subroutines are called back in the outer-loop optimization program to navigate the plan selection process through the solution space iteratively. The utility of the approach is demonstrated using clinical prostate and head-and-neck cases. An autonomous treatment planning technique making effective use of an ensemble of prior treatment plans is developed to automatically maneuver the clinical treatment planning process in the platform of a commercial TPS. The process mimics the decision-making of a human planner and provides a clinically sensible treatment plan automatically, thus reducing or eliminating the tedious manual trial-and-error of treatment planning. The prostate and head-and-neck treatment plans generated using the approach compare favorably with those used for the patients' actual treatments. The clinical inverse treatment planning process can be automated effectively with the guidance of an ensemble of prior treatment plans.
The approach has the potential to significantly improve the radiation therapy workflow. © 2016 American Association of Physicists in Medicine.

  12. Iterative Frequency Domain Decision Feedback Equalization and Decoding for Underwater Acoustic Communications

    NASA Astrophysics Data System (ADS)

    Zhao, Liang; Ge, Jian-Hua

    2012-12-01

    Single-carrier (SC) transmission with frequency-domain equalization (FDE) is today recognized as an attractive alternative to orthogonal frequency-division multiplexing (OFDM) for communication applications affected by the inter-symbol interference (ISI) caused by multipath propagation, especially in shallow water channels. In this paper, we investigate an iterative receiver based on a minimum mean square error (MMSE) decision feedback equalizer (DFE) with symbol-rate and fractional-rate sampling in the frequency domain (FD) and a serially concatenated trellis coded modulation (SCTCM) decoder. Based on sound speed profiles (SSP) measured in a lake and the finite-element ray tracing (Bellhop) method, a shallow water channel is constructed to evaluate the performance of the proposed iterative receiver. Performance results show that the proposed iterative receiver can significantly improve performance and achieves better data transmission than FD linear and adaptive decision feedback equalizers, especially when fractional-rate sampling is adopted.
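    The linear-FDE building block of such a receiver is a one-tap frequency-domain MMSE equalizer, W(f) = conj(H(f)) / (|H(f)|² + σ²). The sketch below is illustrative (a toy 3-tap ISI channel with a cyclic prefix, not the paper's underwater channel or DFE) and shows the equalizer recovering BPSK symbols.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy single-carrier block over a 3-tap ISI channel (illustrative values).
N = 64
h = np.array([1.0, 0.5, 0.2])                 # multipath impulse response
symbols = rng.choice([-1.0, 1.0], N)          # BPSK block
tx = np.concatenate([symbols[-len(h) + 1:], symbols])   # cyclic prefix
rx = np.convolve(tx, h)[len(h) - 1:len(h) - 1 + N]      # CP makes h circular
sigma2 = 0.01
rx += np.sqrt(sigma2) * rng.standard_normal(N)

# One-tap MMSE equalization in the frequency domain:
#   W(f) = conj(H(f)) / (|H(f)|^2 + sigma^2)
H = np.fft.fft(h, N)
Y = np.fft.fft(rx)
W = np.conj(H) / (np.abs(H) ** 2 + sigma2)
est = np.real(np.fft.ifft(W * Y))
detected = np.sign(est)
ber = np.mean(detected != symbols)
print(f"bit error rate: {ber:.3f}")
```

The DFE of the paper additionally feeds back detected symbols to cancel residual ISI; the linear MMSE weights above are its starting point.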

  13. Aerodynamic optimization by simultaneously updating flow variables and design parameters

    NASA Technical Reports Server (NTRS)

    Rizk, M. H.

    1990-01-01

    The application of conventional optimization schemes to aerodynamic design problems leads to inner-outer iterative procedures that are very costly. An alternative approach is presented based on the idea of updating the flow variable iterative solutions and the design parameter iterative solutions simultaneously. Two schemes based on this idea are applied to problems of correcting wind tunnel wall interference and optimizing advanced propeller designs. The first of these schemes is applicable to a limited class of two-design-parameter problems with an equality constraint. It requires the computation of a single flow solution. The second scheme is suitable for application to general aerodynamic problems. It requires the computation of several flow solutions in parallel. In both schemes, the design parameters are updated as the iterative flow solutions evolve. Computations are performed to test the schemes' efficiency, accuracy, and sensitivity to variations in the computational parameters.

  14. Uniform convergence of multigrid V-cycle iterations for indefinite and nonsymmetric problems

    NASA Technical Reports Server (NTRS)

    Bramble, James H.; Kwak, Do Y.; Pasciak, Joseph E.

    1993-01-01

    In this paper, we present an analysis of a multigrid method for nonsymmetric and/or indefinite elliptic problems. In this multigrid method various types of smoothers may be used. One type of smoother which we consider is defined in terms of an associated symmetric problem and includes point and line, Jacobi, and Gauss-Seidel iterations. We also study smoothers based entirely on the original operator. One is based on the normal form, that is, the product of the operator and its transpose. Other smoothers studied include point and line, Jacobi, and Gauss-Seidel. We show that the uniform estimates for symmetric positive definite problems carry over to these algorithms. More precisely, the multigrid iteration for the nonsymmetric and/or indefinite problem is shown to converge at a uniform rate provided that the coarsest grid in the multilevel iteration is sufficiently fine (but not depending on the number of multigrid levels).
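    A minimal two-grid correction for the 1D Poisson problem with a weighted-Jacobi smoother, the symmetric positive definite prototype whose uniform convergence the paper extends to nonsymmetric and indefinite operators. All sizes and parameters below are illustrative.

```python
import numpy as np

def poisson_matrix(n):
    """1D finite-difference Laplacian with homogeneous Dirichlet BCs."""
    h = 1.0 / (n + 1)
    return (np.diag(2.0 * np.ones(n)) - np.diag(np.ones(n - 1), 1)
            - np.diag(np.ones(n - 1), -1)) / h**2

def jacobi(A, b, x, sweeps=3, omega=2.0 / 3.0):
    """Weighted-Jacobi smoother: damps the oscillatory error components."""
    D = np.diag(A)
    for _ in range(sweeps):
        x = x + omega * (b - A @ x) / D
    return x

def two_grid(A, b, x, Ac, R, P):
    x = jacobi(A, b, x)                       # pre-smooth
    r = b - A @ x
    x = x + P @ np.linalg.solve(Ac, R @ r)    # coarse-grid correction
    return jacobi(A, b, x)                    # post-smooth

n = 63
A = poisson_matrix(n)
nc = (n - 1) // 2
P = np.zeros((n, nc))                         # linear interpolation
for j in range(nc):
    i = 2 * j + 1
    P[i - 1, j], P[i, j], P[i + 1, j] = 0.5, 1.0, 0.5
R = 0.5 * P.T                                 # full-weighting restriction
Ac = R @ A @ P                                # Galerkin coarse operator

rng = np.random.default_rng(1)
b = A @ rng.standard_normal(n)
x = np.zeros(n)
res = [np.linalg.norm(b - A @ x)]
for _ in range(8):
    x = two_grid(A, b, x, Ac, R, P)
    res.append(np.linalg.norm(b - A @ x))
rate = res[-1] / res[-2]
print(f"residual reduction per cycle ~ {rate:.3f}")
```

The mesh-independent per-cycle reduction factor observed here is the "uniform rate" that the paper establishes for the harder nonsymmetric/indefinite setting.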

  15. Scheduling and rescheduling with iterative repair

    NASA Technical Reports Server (NTRS)

    Zweben, Monte; Davis, Eugene; Daun, Brian; Deale, Michael

    1992-01-01

    This paper describes the GERRY scheduling and rescheduling system being applied to coordinate Space Shuttle Ground Processing. The system uses constraint-based iterative repair, a technique that starts with a complete but possibly flawed schedule and iteratively improves it by using constraint knowledge within repair heuristics. In this paper we explore the tradeoff between the informedness and the computational cost of several repair heuristics. We show empirically that some knowledge can greatly improve the convergence speed of a repair-based system, but that too much knowledge, such as the knowledge embodied within the MIN-CONFLICTS lookahead heuristic, can overwhelm a system and result in degraded performance.
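    The classic illustration of constraint-based iterative repair is the min-conflicts heuristic on the N-queens problem: start from a complete but flawed assignment and repeatedly move a conflicted variable to the value minimizing its conflicts. This sketch is a generic stand-in, not the GERRY system or its Space Shuttle scheduling constraints.

```python
import random

def conflicts(board, col, row):
    """Number of queens attacking a queen placed at (col, row)."""
    return sum(1 for c, r in enumerate(board)
               if c != col and (r == row or abs(r - row) == abs(c - col)))

def min_conflicts(n, max_steps=10000, seed=0):
    rng = random.Random(seed)
    board = [rng.randrange(n) for _ in range(n)]   # complete but flawed start
    for _ in range(max_steps):
        conflicted = [c for c in range(n) if conflicts(board, c, board[c]) > 0]
        if not conflicted:
            return board                           # fully repaired "schedule"
        col = rng.choice(conflicted)
        # Repair heuristic: move to a row minimizing this queen's conflicts,
        # breaking ties randomly to escape plateaus.
        counts = [conflicts(board, col, r) for r in range(n)]
        m = min(counts)
        board[col] = rng.choice([r for r in range(n) if counts[r] == m])
    return None

solution = min_conflicts(20)
print("solved:", solution is not None)
```

As the abstract notes, how much constraint knowledge to embed in the repair heuristic is a trade-off: richer heuristics cost more per repair step.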

  16. Cone beam CT imaging with limited angle of projections and prior knowledge for volumetric verification of non-coplanar beam radiation therapy: a proof of concept study

    NASA Astrophysics Data System (ADS)

    Meng, Bowen; Xing, Lei; Han, Bin; Koong, Albert; Chang, Daniel; Cheng, Jason; Li, Ruijiang

    2013-11-01

    Non-coplanar beams are important for treatment of both cranial and noncranial tumors. Treatment verification of such beams with couch rotation/kicks, however, is challenging, particularly for the application of cone beam CT (CBCT). In this situation, only limited and unconventional imaging angles are feasible to avoid collision between the gantry, couch, patient, and on-board imaging system. The purpose of this work is to develop a CBCT verification strategy for patients undergoing non-coplanar radiation therapy. We propose an image reconstruction scheme that integrates a prior image constrained compressed sensing (PICCS) technique with image registration. Planning CT or CBCT acquired at the neutral position is rotated and translated according to the nominal couch rotation/translation to serve as the initial prior image. Here, the nominal couch movement is chosen to have a rotational error of 5° and translational error of 8 mm from the ground truth in one or more axes or directions. The proposed reconstruction scheme alternates between two major steps. First, an image is reconstructed using the PICCS technique implemented with total-variation minimization and simultaneous algebraic reconstruction. Second, the rotational/translational setup errors are corrected and the prior image is updated by applying rigid image registration between the reconstructed image and the previous prior image. The PICCS algorithm and rigid image registration are alternated iteratively until the registration results fall below a predetermined threshold. The proposed reconstruction algorithm is evaluated with an anthropomorphic digital phantom and physical head phantom. The proposed algorithm provides useful volumetric images for patient setup using projections with an angular range as small as 60°. It reduced the translational setup errors from 8 mm to generally <1 mm and the rotational setup errors from 5° to <1°. 
Compared with the PICCS algorithm alone, the integration of rigid registration significantly improved the reconstructed image quality, reducing the root mean square image error by a factor of roughly 2-3 in most cases (and up to 100). The proposed algorithm provides a remedy for the problem of non-coplanar CBCT reconstruction from a limited angular range of projections by combining the PICCS technique and rigid image registration in an iterative framework. In this proof-of-concept study, non-coplanar beams with couch rotations of 45° can be effectively verified with the CBCT technique.

  17. Hydrologic Process Parameterization of Electrical Resistivity Imaging of Solute Plumes Using POD McMC

    NASA Astrophysics Data System (ADS)

    Awatey, M. T.; Irving, J.; Oware, E. K.

    2016-12-01

    Markov chain Monte Carlo (McMC) inversion frameworks are becoming increasingly popular in geophysics due to their ability to recover multiple equally plausible geologic features that honor the limited noisy measurements. Standard McMC methods, however, become computationally intractable with increasing dimensionality of the problem, for example, when working with spatially distributed geophysical parameter fields. We present a McMC approach based on a sparse proper orthogonal decomposition (POD) model parameterization that implicitly incorporates the physics of the underlying process. First, we generate training images (TIs) via Monte Carlo simulations of the target process constrained to a conceptual model. We then apply POD to construct basis vectors from the TIs. A small number of basis vectors can represent most of the variability in the TIs, leading to dimensionality reduction. A projection of the starting model into the reduced basis space generates the starting POD coefficients. At each iteration, only coefficients within a specified sampling window are resimulated assuming a Gaussian prior. The sampling window grows at a specified rate as the iterations progress, starting from the coefficients of the highest-ranked basis vectors and moving toward those of the least informative ones. We found this gradual growth of the sampling window to be more stable than resampling all the coefficients from the first iteration. We demonstrate the performance of the algorithm with both synthetic and lab-scale electrical resistivity imaging of saline tracer experiments, employing the same set of basis vectors for all inversions. We consider two scenarios of unimodal and bimodal plumes. The unimodal plume is consistent with the hypothesis underlying the generation of the TIs, whereas bimodality in plume morphology was not theorized. 
We show that uncertainty quantification using McMC can proceed in the reduced dimensionality space while accounting for the physics of the underlying process.
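    The parameterization step can be sketched as follows: build POD basis vectors from Monte Carlo training images via the SVD, keep the leading modes, and represent any plume image by a few coefficients. The Gaussian-blob TI generator and all sizes below are illustrative stand-ins for the process-based simulations described above.

```python
import numpy as np

rng = np.random.default_rng(0)
n_ti = 200                                   # number of training images

def gaussian_plume(cx, cy, s):
    """Toy 20x20 'plume' image, returned as a flattened 400-vector."""
    x, y = np.meshgrid(np.arange(20), np.arange(20))
    return np.exp(-((x - cx) ** 2 + (y - cy) ** 2) / (2 * s ** 2)).ravel()

# Training images: random plume centers and widths (stand-in for MC runs).
tis = np.column_stack([gaussian_plume(rng.uniform(4, 16), rng.uniform(4, 16),
                                      rng.uniform(1, 3)) for _ in range(n_ti)])
mean = tis.mean(axis=1, keepdims=True)
U, S, _ = np.linalg.svd(tis - mean, full_matrices=False)

# Number of basis vectors needed to capture 95% of the TI variability.
energy = np.cumsum(S ** 2) / np.sum(S ** 2)
k = int(np.searchsorted(energy, 0.95)) + 1
print(f"{k} of {n_ti} basis vectors capture 95% of TI variability")

# Project a new plume into the reduced space and reconstruct it.
target = gaussian_plume(10, 10, 2)
coef = U[:, :k].T @ (target - mean.ravel())
recon = mean.ravel() + U[:, :k] @ coef
rel_err = np.linalg.norm(recon - target) / np.linalg.norm(target)
print(f"relative reconstruction error: {rel_err:.3f}")
```

In the full algorithm the McMC sampler perturbs the coefficient vector `coef` (under a Gaussian prior) rather than the pixel values, which is where the dimensionality reduction pays off.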

  18. Automatic treatment plan re-optimization for adaptive radiotherapy guided with the initial plan DVHs.

    PubMed

    Li, Nan; Zarepisheh, Masoud; Uribe-Sanchez, Andres; Moore, Kevin; Tian, Zhen; Zhen, Xin; Graves, Yan Jiang; Gautier, Quentin; Mell, Loren; Zhou, Linghong; Jia, Xun; Jiang, Steve

    2013-12-21

    Adaptive radiation therapy (ART) can reduce normal tissue toxicity and/or improve tumor control through treatment adaptations based on the current patient anatomy. Developing an efficient and effective re-planning algorithm is an important step toward the clinical realization of ART. For the re-planning process, a manual trial-and-error approach to fine-tuning planning parameters is time-consuming and usually considered impractical, especially for online ART. It is desirable to automate this step to yield a plan of acceptable quality with minimal interventions. In ART, prior information in the original plan is available, such as the dose-volume histogram (DVH), which can be employed to facilitate the automatic re-planning process. The goal of this work is to develop an automatic re-planning algorithm to generate a plan with similar, or possibly better, DVH curves compared with the clinically delivered original plan. Specifically, our algorithm iterates the following two loops. The inner loop is the traditional fluence map optimization, in which we optimize a quadratic objective function penalizing the deviation of the dose received by each voxel from its prescribed or threshold dose with a set of fixed voxel weighting factors. In the outer loop, the voxel weighting factors in the objective function are adjusted according to the deviation of the current DVH curves from those in the original plan. The process is repeated until the DVH curves are acceptable or the maximum number of iterations is reached. The whole algorithm is implemented on a GPU for high efficiency. The feasibility of our algorithm has been demonstrated with three head-and-neck cancer IMRT cases, each having an initial planning CT scan and another treatment CT scan acquired in the middle of the treatment course. Compared with the DVH curves in the original plan, the DVH curves in the resulting plan using our algorithm with 30 iterations are better for almost all structures. 
The re-optimization process takes about 30 s using our in-house optimization engine.
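    The two-loop structure can be sketched on a toy problem: the inner loop is a weighted quadratic "fluence" optimization, and the outer loop raises the weights of voxels whose dose deviates from a reference dose, mimicking the DVH-guided weight adjustment. The dose matrix, voxel counts, and reference doses below are illustrative, and the fluence nonnegativity constraint is omitted for brevity.

```python
import numpy as np

rng = np.random.default_rng(0)
n_vox, n_beam = 50, 20
A = rng.uniform(0, 1, (n_vox, n_beam))        # toy dose-deposition matrix
d_ref = np.concatenate([np.full(25, 1.0),     # "target" voxels: dose 1.0
                        np.full(25, 0.2)])    # "organ" voxels: dose 0.2

w = np.ones(n_vox)                            # voxel weighting factors
for outer in range(10):
    # Inner loop: minimize sum_i w_i * (A x - d_ref)_i^2 over fluence x.
    x, *_ = np.linalg.lstsq(np.sqrt(w)[:, None] * A,
                            np.sqrt(w) * d_ref, rcond=None)
    dose = A @ x
    # Outer loop: increase weights where dose deviates from the reference,
    # standing in for the DVH-deviation-driven adjustment of the paper.
    dev = np.abs(dose - d_ref)
    w *= 1.0 + dev / (dev.max() + 1e-12)

final_dev = np.mean(np.abs(A @ x - d_ref))
print(f"mean |dose - reference| after reweighting: {final_dev:.3f}")
```

The real algorithm compares full DVH curves per structure rather than per-voxel doses, but the alternation between a fixed-weight quadratic solve and a weight update is the same.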

  19. A fast and robust iterative algorithm for prediction of RNA pseudoknotted secondary structures

    PubMed Central

    2014-01-01

    Background Improving accuracy and efficiency of computational methods that predict pseudoknotted RNA secondary structures is an ongoing challenge. Existing methods based on free energy minimization tend to be very slow and are limited in the types of pseudoknots that they can predict. Incorporating known structural information can improve prediction accuracy; however, there are not many methods for prediction of pseudoknotted structures that can incorporate structural information as input. There is even less understanding of the relative robustness of these methods with respect to partial information. Results We present a new method, Iterative HFold, for pseudoknotted RNA secondary structure prediction. Iterative HFold takes as input a pseudoknot-free structure, and produces a possibly pseudoknotted structure whose energy is at least as low as that of any (density-2) pseudoknotted structure containing the input structure. Iterative HFold leverages strengths of earlier methods, namely the fast running time of HFold, a method that is based on the hierarchical folding hypothesis, and the energy parameters of HotKnots V2.0. Our experimental evaluation on a large data set shows that Iterative HFold is robust with respect to partial information, with average accuracy on pseudoknotted structures steadily increasing from roughly 54% to 79% as the user provides up to 40% of the input structure. Iterative HFold is much faster than HotKnots V2.0, while having comparable accuracy. Iterative HFold also has significantly better accuracy than IPknot on our HK-PK and IP-pk168 data sets. Conclusions Iterative HFold is a robust method for prediction of pseudoknotted RNA secondary structures, whose accuracy with more than 5% information about true pseudoknot-free structures is better than that of IPknot, and with about 35% information about true pseudoknot-free structures compares well with that of HotKnots V2.0 while being significantly faster. 
Iterative HFold and all data used in this work are freely available at http://www.cs.ubc.ca/~hjabbari/software.php. PMID:24884954

  20. Picard Trajectory Approximation Iteration for Efficient Orbit Propagation

    DTIC Science & Technology

    2015-07-21

    Eqn (8)) for an iterative approximation of eccentric anomaly, and is transformed back to . The Lambert/ Kepler time – eccentric anomaly relationship...of Kepler motion based on spinor regularization, Journal für die reine und angewandte Mathematik 218, 204-219, 1965. [3] Levi-Civita T., Sur la...transformed back to (x, y, z). The Lambert/ Kepler time – eccentric anomaly relationship is iterated by a Newton/secant method to converge on the

  1. Three-dimensional deformable-model-based localization and recognition of road vehicles.

    PubMed

    Zhang, Zhaoxiang; Tan, Tieniu; Huang, Kaiqi; Wang, Yunhong

    2012-01-01

    We address the problem of model-based object recognition. Our aim is to localize and recognize road vehicles from monocular images or videos in calibrated traffic scenes. A 3-D deformable vehicle model with 12 shape parameters is set up as prior information, and its pose is determined by three parameters, which are its position on the ground plane and its orientation about the vertical axis under ground-plane constraints. An efficient local gradient-based method is proposed to evaluate the fitness between the projection of the vehicle model and image data, which is combined into a novel evolutionary computing framework to estimate the 12 shape parameters and three pose parameters by iterative evolution. The recovery of pose parameters achieves vehicle localization, whereas the shape parameters are used for vehicle recognition. Numerous experiments are conducted in this paper to demonstrate the performance of our approach. It is shown that the local gradient-based method can evaluate accurately and efficiently the fitness between the projection of the vehicle model and the image data. The evolutionary computing framework is effective for vehicles of different types and poses and is robust to various kinds of occlusion.
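    The iterative-evolution idea can be sketched with a simple (mu + lambda) evolutionary strategy estimating a pose vector (x, y, theta) by minimizing a fitness function. The quadratic fitness below is an illustrative stand-in for the paper's gradient-based model-to-image matching score, and all values are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)
true_pose = np.array([3.0, -1.5, 0.7])        # hypothetical (x, y, theta)

def fitness(pose):
    """Toy matching score (lower is better); stand-in for image fitness."""
    return np.sum((pose - true_pose) ** 2)

pop = rng.uniform(-5, 5, (30, 3))             # initial random population
for gen in range(60):
    scores = np.array([fitness(p) for p in pop])
    parents = pop[np.argsort(scores)[:10]]    # select the 10 fittest
    sigma = 0.3 * 0.95 ** gen                 # shrinking mutation scale
    children = parents[rng.integers(0, 10, 20)] + rng.normal(0, sigma, (20, 3))
    pop = np.vstack([parents, children])      # (mu + lambda) survival

best = pop[np.argmin([fitness(p) for p in pop])]
print("estimated pose:", np.round(best, 2))
```

Keeping the parents in the next population (elitism) guarantees the best fitness never degrades across generations.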

  2. Physics Model-Based Scatter Correction in Multi-Source Interior Computed Tomography.

    PubMed

    Gong, Hao; Li, Bin; Jia, Xun; Cao, Guohua

    2018-02-01

    Multi-source interior computed tomography (CT) has a great potential to provide ultra-fast and organ-oriented imaging at low radiation dose. However, X-ray cross scattering from multiple simultaneously activated X-ray imaging chains compromises imaging quality. Previously, we published two hardware-based scatter correction methods for multi-source interior CT. Here, we propose a software-based scatter correction method, with the benefit of no need for hardware modifications. The new method is based on a physics model and an iterative framework. The physics model was derived analytically, and was used to calculate X-ray scattering signals in both forward direction and cross directions in multi-source interior CT. The physics model was integrated to an iterative scatter correction framework to reduce scatter artifacts. The method was applied to phantom data from both Monte Carlo simulations and physical experimentation that were designed to emulate the image acquisition in a multi-source interior CT architecture recently proposed by our team. The proposed scatter correction method reduced scatter artifacts significantly, even with only one iteration. Within a few iterations, the reconstructed images fast converged toward the "scatter-free" reference images. After applying the scatter correction method, the maximum CT number error at the region-of-interests (ROIs) was reduced to 46 HU in numerical phantom dataset and 48 HU in physical phantom dataset respectively, and the contrast-noise-ratio at those ROIs increased by up to 44.3% and up to 19.7%, respectively. The proposed physics model-based iterative scatter correction method could be useful for scatter correction in dual-source or multi-source CT.
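    The iterative structure of such a correction alternates between estimating scatter from the current primary-signal estimate and subtracting it from the measurement: p_{k+1} = m - S(p_k). The scatter operator below is a toy moving-average blur, not the paper's analytic physics model; everything here is illustrative.

```python
import numpy as np

def scatter(p, frac=0.2, width=7):
    """Toy scatter model: a scaled moving-average blur of the primary."""
    kernel = np.ones(width) / width
    return frac * np.convolve(p, kernel, mode="same")

rng = np.random.default_rng(0)
true_primary = rng.uniform(0.5, 1.5, 128)
measured = true_primary + scatter(true_primary)   # scatter-contaminated data

p = measured.copy()                               # initial primary estimate
for k in range(5):
    p = measured - scatter(p)                     # fixed-point update
    err = np.linalg.norm(p - true_primary) / np.linalg.norm(true_primary)
    print(f"iteration {k + 1}: relative error {err:.5f}")
```

Because the scatter operator is a contraction (here with norm about 0.2), the error shrinks geometrically, which matches the abstract's observation that most of the benefit arrives within the first iteration.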

  3. Image quality of iterative reconstruction in cranial CT imaging: comparison of model-based iterative reconstruction (MBIR) and adaptive statistical iterative reconstruction (ASiR).

    PubMed

    Notohamiprodjo, S; Deak, Z; Meurer, F; Maertz, F; Mueck, F G; Geyer, L L; Wirth, S

    2015-01-01

    The purpose of this study was to compare cranial CT (CCT) image quality (IQ) of the MBIR algorithm with standard iterative reconstruction (ASiR). In this institutional review board (IRB)-approved study, raw data sets of 100 unenhanced CCT examinations (120 kV, 50-260 mAs, 20 mm collimation, 0.984 pitch) were reconstructed with both ASiR and MBIR. Signal-to-noise ratio (SNR) and contrast-to-noise ratio (CNR) were calculated from attenuation values measured in the caudate nucleus, frontal white matter, anterior ventricle horn, fourth ventricle, and pons. Two radiologists, who were blinded to the reconstruction algorithms, evaluated anonymized multiplanar reformations of 2.5 mm with respect to depiction of different parenchymal structures and the impact of artefacts on IQ with a five-point scale (0: unacceptable, 1: less than average, 2: average, 3: above average, 4: excellent). MBIR decreased artefacts more effectively than ASiR (p < 0.01). The median depiction score for MBIR was 3, whereas the median value for ASiR was 2 (p < 0.01). SNR and CNR were significantly higher in MBIR than ASiR (p < 0.01). MBIR showed significant improvement of IQ parameters compared to ASiR. As CCT is an examination that is frequently required, the use of MBIR may allow for substantial reduction of radiation exposure caused by medical diagnostics. • Model-Based iterative reconstruction (MBIR) effectively decreased artefacts in cranial CT. • MBIR reconstructed images were rated with significantly higher scores for image quality. • Model-Based iterative reconstruction may allow reduced-dose diagnostic examination protocols.
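    The image-quality metrics used above are simple ROI statistics. A common definition (assumed here; the paper does not spell out its exact formulas) is SNR = mean(ROI)/SD(ROI) and CNR = |mean(ROI1) - mean(ROI2)| / SD(noise ROI). The HU values below are illustrative, not from the study.

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical attenuation samples (HU) from two ROIs of a cranial CT slice.
grey_matter = rng.normal(38.0, 3.0, 500)
white_matter = rng.normal(30.0, 3.0, 500)

snr = grey_matter.mean() / grey_matter.std()                       # SNR of ROI
cnr = abs(grey_matter.mean() - white_matter.mean()) / white_matter.std()
print(f"SNR = {snr:.1f}, CNR = {cnr:.1f}")
```

Lower image noise (smaller ROI standard deviation) raises both metrics, which is why the noise-suppressing MBIR algorithm scores higher on SNR and CNR than ASiR.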

  4. Efficient solution of the simplified P N equations

    DOE PAGES

    Hamilton, Steven P.; Evans, Thomas M.

    2014-12-23

    We show new solver strategies for the multigroup SPN equations for nuclear reactor analysis. By forming the complete matrix over space, moments, and energy, a robust set of solution strategies may be applied. Power iteration, shifted power iteration, Rayleigh quotient iteration, Arnoldi's method, and a generalized Davidson method, each using algebraic and physics-based multigrid preconditioners, were compared on the C5G7 MOX test problem as well as an operational PWR model. These results show that the most efficient approach is the generalized Davidson method, which is 30-40 times faster than traditional power iteration and 6-10 times faster than Arnoldi's method.
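    Two of the compared solvers can be contrasted on a generic symmetric matrix: plain power iteration converges linearly at the eigenvalue gap ratio, while Rayleigh quotient iteration converges cubically near a solution, which is the kind of advantage shifted and Davidson-type methods exploit. The matrix below is a random illustrative stand-in, not a reactor operator.

```python
import numpy as np

rng = np.random.default_rng(0)
M = rng.standard_normal((50, 50))
A = M + M.T                                   # symmetric test matrix

def power_iteration(A, iters=200):
    """Plain power iteration: converges to the dominant eigenpair."""
    x = np.ones(A.shape[0])
    for _ in range(iters):
        x = A @ x
        x /= np.linalg.norm(x)
    return x @ A @ x, x

def rayleigh_quotient_iteration(A, x0, iters=10):
    """RQI: solve the shifted system with the current Rayleigh quotient."""
    x = x0 / np.linalg.norm(x0)
    lam = x @ A @ x
    for _ in range(iters):
        try:
            y = np.linalg.solve(A - lam * np.eye(A.shape[0]), x)
        except np.linalg.LinAlgError:
            break                             # shift hit an exact eigenvalue
        x = y / np.linalg.norm(y)
        lam = x @ A @ x
    return lam, x

lam_pi, x_pi = power_iteration(A)
lam_rqi, x_rqi = rayleigh_quotient_iteration(A, x_pi)
resid = np.linalg.norm(A @ x_rqi - lam_rqi * x_rqi)
print(f"RQI eigenpair residual: {resid:.2e}")
```

In the reactor setting the eigenproblem is a generalized k-eigenvalue problem and each shifted solve is preconditioned with multigrid, but the convergence contrast is the same.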

  5. How Social and Nonsocial Context Affects Stay/Leave Decision-Making: The Influence of Actual and Expected Rewards

    PubMed Central

    Heijne, Amber; Sanfey, Alan G.

    2015-01-01

    This study investigated whether deciding to either stay with or leave a social relationship partner, based on a sequence of collaborative social interactions, is impacted by (1) observed and (2) anticipated gains and losses associated with the collaboration; and, importantly, (3) whether these effects differ between social and nonsocial contexts. In the social context, participants played an iterated collaborative economic game in which they were dependent on the successes and failures of a game partner in order to increase their monetary payoff, and in which they were free to stop collaborating with this partner whenever they chose. In Study 1, we manipulated the actual success rate of partners, and demonstrated that participants decided to stay longer with 'better' partners. In Study 2, we induced prior expectations about specific partners, while keeping the objective performance of all partners equal, and found that participants decided to stay longer with partners whom they expected to be 'better' than others, irrespective of actual performance. Importantly, both Study 1 and 2 included a nonsocial control condition that was probabilistically identical to the social conditions. All findings were replicated in nonsocial context, but results demonstrated that the effect of prior beliefs on stay/leave decision-making was much less pronounced in a social than a nonsocial context. PMID:26251999

  6. High-Dimensional Bayesian Geostatistics.

    PubMed

    Banerjee, Sudipto

    2017-06-01

    With the growing capabilities of Geographic Information Systems (GIS) and user-friendly software, statisticians today routinely encounter geographically referenced data containing observations from a large number of spatial locations and time points. Over the last decade, hierarchical spatiotemporal process models have become widely deployed statistical tools for researchers to better understand the complex nature of spatial and temporal variability. However, fitting hierarchical spatiotemporal models often involves expensive matrix computations with complexity increasing in cubic order for the number of spatial locations and temporal points. This renders such models infeasible for large data sets. This article offers a focused review of two methods for constructing well-defined highly scalable spatiotemporal stochastic processes. Both these processes can be used as "priors" for spatiotemporal random fields. The first approach constructs a low-rank process operating on a lower-dimensional subspace. The second approach constructs a Nearest-Neighbor Gaussian Process (NNGP) that ensures sparse precision matrices for its finite realizations. Both processes can be exploited as a scalable prior embedded within a rich hierarchical modeling framework to deliver full Bayesian inference. These approaches can be described as model-based solutions for big spatiotemporal datasets. The models ensure that the algorithmic complexity is ~n floating point operations (flops) per iteration, where n is the number of spatial locations. We compare these methods and provide some insight into their methodological underpinnings.

  7. A fast 4D cone beam CT reconstruction method based on the OSC-TV algorithm.

    PubMed

    Mascolo-Fortin, Julia; Matenine, Dmitri; Archambault, Louis; Després, Philippe

    2018-01-01

    Four-dimensional cone beam computed tomography allows for temporally resolved imaging with useful applications in radiotherapy, but raises particular challenges in terms of image quality and computation time. The purpose of this work is to develop a fast and accurate 4D algorithm by adapting a GPU-accelerated ordered subsets convex algorithm (OSC), combined with the total variation minimization regularization technique (TV). Different initialization schemes were studied to adapt the OSC-TV algorithm to 4D reconstruction: each respiratory phase was initialized either with a 3D reconstruction or a blank image. Reconstruction algorithms were tested on a dynamic numerical phantom and on a clinical dataset. 4D iterations were implemented for a cluster of 8 GPUs. All developed methods allowed for an adequate visualization of the respiratory movement and compared favorably to the McKinnon-Bates and adaptive steepest descent projection onto convex sets algorithms, while the 4D reconstructions initialized from a prior 3D reconstruction led to better overall image quality. The most suitable adaptation of OSC-TV to 4D CBCT was found to be a combination of a prior FDK reconstruction and a 4D OSC-TV reconstruction with a reconstruction time of 4.5 minutes. This relatively short reconstruction time could facilitate a clinical use.

  8. Model-based virtual VSB mask writer verification for efficient mask error checking and optimization prior to MDP

    NASA Astrophysics Data System (ADS)

    Pack, Robert C.; Standiford, Keith; Lukanc, Todd; Ning, Guo Xiang; Verma, Piyush; Batarseh, Fadi; Chua, Gek Soon; Fujimura, Akira; Pang, Linyong

    2014-10-01

    A methodology is described wherein a calibrated model-based `Virtual' Variable Shaped Beam (VSB) mask writer process simulator is used to accurately verify complex Optical Proximity Correction (OPC) and Inverse Lithography Technology (ILT) mask designs prior to Mask Data Preparation (MDP) and mask fabrication. This type of verification addresses physical effects which occur in mask writing that may impact lithographic printing fidelity and variability. The work described here is motivated by requirements for extreme accuracy and control of variations for today's most demanding IC products. These extreme demands necessitate careful and detailed analysis of all potential sources of uncompensated error or variation and extreme control of these at each stage of the integrated OPC/ MDP/ Mask/ silicon lithography flow. The important potential sources of variation we focus on here originate on the basis of VSB mask writer physics and other errors inherent in the mask writing process. The deposited electron beam dose distribution may be examined in a manner similar to optical lithography aerial image analysis and image edge log-slope analysis. This approach enables one to catch, grade, and mitigate problems early and thus reduce the likelihood for costly long-loop iterations between OPC, MDP, and wafer fabrication flows. It moreover describes how to detect regions of a layout or mask where hotspots may occur or where the robustness to intrinsic variations may be improved by modification to the OPC, choice of mask technology, or by judicious design of VSB shots and dose assignment.

  9. Iterative metal artifact reduction for x-ray computed tomography using unmatched projector/backprojector pairs

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, Hanming; Wang, Linyuan; Li, Lei

    2016-06-15

    Purpose: Metal artifact reduction (MAR) is a major problem and a challenging issue in x-ray computed tomography (CT) examinations. Iterative reconstruction from sinograms unaffected by metals shows promising potential in detail recovery. This reconstruction has been the subject of much research in recent years. However, conventional iterative reconstruction methods easily introduce new artifacts around metal implants because of incomplete data reconstruction and inconsistencies in practical data acquisition. Hence, this work aims at developing a method to suppress newly introduced artifacts and improve the image quality around metal implants for the iterative MAR scheme. Methods: The proposed method consists of two steps based on the general iterative MAR framework. An uncorrected image is initially reconstructed, and the corresponding metal trace is obtained. The iterative reconstruction method is then used to reconstruct images from the unaffected sinogram. In the reconstruction step of this work, an iterative strategy utilizing unmatched projector/backprojector pairs is used. A ramp filter is introduced into the back-projection procedure to restrain the inconsistency components in low frequencies and generate more reliable images of the regions around metals. Furthermore, a constrained total variation (TV) minimization model is also incorporated to enhance efficiency. The proposed strategy is implemented based on an iterative FBP and an alternating direction minimization (ADM) scheme, respectively. The developed algorithms are referred to as “iFBP-TV” and “TV-FADM,” respectively. Two projection-completion-based MAR methods and three iterative MAR methods are performed simultaneously for comparison. Results: The proposed method performs reasonably on both simulation and real CT-scanned datasets. This approach could reduce streak metal artifacts effectively and avoid the mentioned effects in the vicinity of the metals. 
The improvements are evaluated by inspecting regions of interest and by comparing the root-mean-square errors, normalized mean absolute distance, and universal quality index metrics of the images. Both iFBP-TV and TV-FADM methods outperform other counterparts in all cases. Unlike the conventional iterative methods, the proposed strategy utilizing unmatched projector/backprojector pairs shows excellent performance in detail preservation and prevention of the introduction of new artifacts. Conclusions: Qualitative and quantitative evaluations of experimental results indicate that the developed method outperforms classical MAR algorithms in suppressing streak artifacts and preserving the edge structural information of the object. In particular, structures lying close to metals can be gradually recovered because of the reduction of artifacts caused by inconsistency effects.
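
    The convergence logic behind such unmatched-pair schemes can be illustrated on a toy linear problem. Writing the projector as A and the deliberately non-transposed ("filtered") backprojector as B, the reconstruction is the fixed-point iteration x ← x + γ·B(p − Ax), which converges whenever the spectral radius of I − γBA stays below 1. The Python/NumPy sketch below uses small random stand-in matrices rather than CT operators, and the step size γ is an illustrative choice:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 40
# stand-in forward projector A and an unmatched "filtered" backprojector B != A.T
A = np.eye(n) + 0.1 * rng.normal(size=(n, n)) / np.sqrt(n)
B = A.T + 0.05 * rng.normal(size=(n, n)) / np.sqrt(n)

def unmatched_iterate(A, B, p, gamma=0.5, n_iter=500):
    """Fixed-point iteration x <- x + gamma * B (p - A x) with B != A.T."""
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x = x + gamma * (B @ (p - A @ x))
    return x

x_true = rng.normal(size=n)
p = A @ x_true                      # simulated projection data
x_rec = unmatched_iterate(A, B, p)
```

    In this toy picture, the ramp filter of the abstract plays the role of shaping B so that BA remains well conditioned and the iteration contracts.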

  10. Harmonics analysis of the ITER poloidal field converter based on a piecewise method

    NASA Astrophysics Data System (ADS)

    Xudong, WANG; Liuwei, XU; Peng, FU; Ji, LI; Yanan, WU

    2017-12-01

    Poloidal field (PF) converters provide controlled DC voltage and current to PF coils. The many harmonics generated by the PF converter flow into the power grid and seriously affect power systems and electric equipment. Due to the complexity of the system, the traditional integral operation in Fourier analysis is complicated and inaccurate. This paper presents a piecewise method to calculate the harmonics of the ITER PF converter. The relationship between the grid input current and the DC output current of the ITER PF converter is deduced. The grid current is decomposed into the sum of some simple functions. By calculating simple function harmonics based on the piecewise method, the harmonics of the PF converter under different operation modes are obtained. In order to examine the validity of the method, a simulation model is established based on Matlab/Simulink and a relevant experiment is implemented in the ITER PF integration test platform. Comparative results are given. The calculated results are found to be consistent with simulation and experiment. The piecewise method is proved correct and valid for calculating the system harmonics.
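
    The piecewise idea can be sketched in a few lines: a piecewise-constant line current is a sum of rectangular pulses whose Fourier coefficients are known in closed form, so the harmonics of the total waveform follow by summing simple terms instead of evaluating one long integral. The Python sketch below applies this to an idealized six-pulse quasi-square line current with unit DC current, an illustrative stand-in rather than the actual ITER PF converter waveform:

```python
import numpy as np

def pulse_harmonics(n, height, th1, th2):
    """Closed-form Fourier coefficients (a_n, b_n) of one rectangular pulse
    of the given height on the angular interval (th1, th2)."""
    a = height * (np.sin(n * th2) - np.sin(n * th1)) / (n * np.pi)
    b = height * (np.cos(n * th1) - np.cos(n * th2)) / (n * np.pi)
    return a, b

def harmonic_magnitude(n, pieces):
    """n-th harmonic magnitude of a piecewise-constant waveform,
    obtained by summing the contributions of its rectangular pieces."""
    a_tot = sum(pulse_harmonics(n, *p)[0] for p in pieces)
    b_tot = sum(pulse_harmonics(n, *p)[1] for p in pieces)
    return np.hypot(a_tot, b_tot)

# idealized six-pulse converter line current (unit DC current):
# +1 on (30 deg, 150 deg), -1 on (210 deg, 330 deg)
deg = np.pi / 180
pieces = [(1.0, 30 * deg, 150 * deg), (-1.0, 210 * deg, 330 * deg)]
```

    For this waveform the summation reproduces the textbook six-pulse spectrum: characteristic harmonics n = 6k ± 1 with magnitude (2√3/π)/n, and vanishing even and triplen harmonics.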

  11. Iterative pass optimization of sequence data

    NASA Technical Reports Server (NTRS)

    Wheeler, Ward C.

    2003-01-01

    The problem of determining the minimum-cost hypothetical ancestral sequences for a given cladogram is known to be NP-complete. This "tree alignment" problem has motivated the considerable effort placed in multiple sequence alignment procedures. Wheeler in 1996 proposed a heuristic method, direct optimization, to calculate cladogram costs without the intervention of multiple sequence alignment. This method, though more efficient in time and more effective in cladogram length than many alignment-based procedures, greedily optimizes nodes based on descendent information only. In their proposal of an exact multiple alignment solution, Sankoff et al. in 1976 described a heuristic procedure--the iterative improvement method--to create alignments at internal nodes by solving a series of median problems. The combination of a three-sequence direct optimization with iterative improvement and a branch-length-based cladogram cost procedure provides an algorithm that frequently results in superior (i.e., lower) cladogram costs. This iterative pass optimization is both computation and memory intensive, but economies can be made to reduce this burden. An example in arthropod systematics is discussed. © 2003 The Willi Hennig Society. Published by Elsevier Science (USA). All rights reserved.

  12. Tradeoff between noise reduction and inartificial visualization in a model-based iterative reconstruction algorithm on coronary computed tomography angiography.

    PubMed

    Hirata, Kenichiro; Utsunomiya, Daisuke; Kidoh, Masafumi; Funama, Yoshinori; Oda, Seitaro; Yuki, Hideaki; Nagayama, Yasunori; Iyama, Yuji; Nakaura, Takeshi; Sakabe, Daisuke; Tsujita, Kenichi; Yamashita, Yasuyuki

    2018-05-01

    We aimed to evaluate the image quality performance of coronary CT angiography (CTA) under the different settings of forward-projected model-based iterative reconstruction solutions (FIRST). Thirty patients undergoing coronary CTA were included. Each image was reconstructed using filtered back projection (FBP), adaptive iterative dose reduction 3D (AIDR-3D), and 2 model-based iterative reconstructions including FIRST-body and FIRST-cardiac sharp (CS). CT number and noise were measured in the coronary vessels and plaque. Subjective image-quality scores were obtained for noise and structure visibility. In the objective image analysis, FIRST-body produced the significantly highest contrast-to-noise ratio. Regarding subjective image quality, FIRST-CS had the highest score for structure visibility, although the image noise score was inferior to that of FIRST-body. In conclusion, FIRST provides significant improvements in objective and subjective image quality compared with FBP and AIDR-3D. FIRST-body effectively reduces image noise, but the structure visibility with FIRST-CS was superior to FIRST-body.

  13. Iterative learning-based decentralized adaptive tracker for large-scale systems: a digital redesign approach.

    PubMed

    Tsai, Jason Sheng-Hong; Du, Yan-Yi; Huang, Pei-Hsiang; Guo, Shu-Mei; Shieh, Leang-San; Chen, Yuhua

    2011-07-01

    In this paper, a digital redesign methodology of the iterative learning-based decentralized adaptive tracker is proposed to improve the dynamic performance of sampled-data linear large-scale control systems consisting of N interconnected multi-input multi-output subsystems, so that the system output will follow any trajectory which may not be presented by the analytic reference model initially. To overcome the interference of each sub-system and simplify the controller design, the proposed model reference decentralized adaptive control scheme constructs a decoupled well-designed reference model first. Then, according to the well-designed model, this paper develops a digital decentralized adaptive tracker based on the optimal analog control and prediction-based digital redesign technique for the sampled-data large-scale coupling system. In order to enhance the tracking performance of the digital tracker at specified sampling instants, we apply the iterative learning control (ILC) to train the control input via continual learning. As a result, the proposed iterative learning-based decentralized adaptive tracker not only has robust closed-loop decoupled property but also possesses good tracking performance at both transient and steady state. Besides, evolutionary programming is applied to search for a good learning gain to speed up the learning process of ILC. Copyright © 2011 ISA. Published by Elsevier Ltd. All rights reserved.
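
    The trial-to-trial correction at the heart of ILC is compact: after each pass over the finite horizon, the stored input is updated with the shifted tracking error, u_{k+1}(t) = u_k(t) + γ·e_k(t+1). The sketch below (Python, with a hypothetical scalar first-order plant and learning gain, not the paper's decentralized adaptive tracker) shows the error contracting over repeated trials:

```python
import numpy as np

def run_trial(u, a=0.3, b=1.0):
    """One pass of the plant x(t+1) = a x(t) + b u(t), y = x, x(0) = 0."""
    y = np.zeros(len(u) + 1)
    for t in range(len(u)):
        y[t + 1] = a * y[t] + b * u[t]
    return y

N = 50
r = np.sin(np.linspace(0.0, 2 * np.pi, N + 1))   # reference trajectory, r[0] = 0
u = np.zeros(N)                                  # initial guess for the input
gamma = 0.8                                      # learning gain, |1 - gamma*b| < 1
for k in range(40):                              # repeated trials ("continual learning")
    e = r - run_trial(u)                         # tracking error of trial k
    u = u + gamma * e[1:]                        # ILC update with one-step shift
```

    Because the plant returns to the same initial state before every trial, the trial-to-trial error map is a contraction for |1 − γb| < 1, and the learned input converges to one that tracks the reference exactly at the sample instants.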

  14. The hyperbolic problem

    NASA Astrophysics Data System (ADS)

    Gualdesi, Lavinio

    2017-04-01

    Mooring lines in the Ocean might be seen as a pretty simple seamanlike activity. Connecting valuable scientific instrumentation to it transforms this simple activity into a sophisticated engineering support which needs to be accurately designed, developed, deployed, monitored and hopefully recovered with its precious load of scientific data. This work is an historical travel along the efforts carried out by scientists all over the world to successfully predict mooring line behaviour through both mathematical simulation and experimental verifications. It is at first glance unexpected how many factors one must observe to get closer and closer to a real ocean situation. Most models have dual applications for mooring lines and towed bodies lines equations. Numerous references are provided starting from the oldest one due to Isaac Newton. In his "Philosophiae Naturalis Principia Matematica" (1687) the English scientist, while discussing about the law of motion for bodies in resistant medium, is envisaging a hyperbolic fitting to the phenomenon including asymptotic behaviour in non-resistant media. A non-exhaustive set of mathematical simulations of the mooring lines trajectory prediction is listed hereunder to document how the subject has been under scientific focus over almost a century. Pode (1951) Prior personal computers diffusion a tabular form of calculus of cable geometry was used by generations of engineers keeping in mind the following limitations and approximations: tangential drag coefficients were assumed to be negligible. A steady current flow was assumed as in the towed configuration. Cchabra (1982) Finite Element Method that assumes an arbitrary deflection angle for the top first section and calculates equilibrium equations down to the sea floor iterating up to a compliant solution. Gualdesi (1987) ANAMOOR. A Fortran Program based on iterative methods above including experimental data from intensive mooring campaign. 
Database of experimental drag coefficients obtained in wind tunnel for the instrumentation verified in ocean mooring. Dangov (1987) A set of Fortran routines, due to a Canadian scientist, to analyse discrepancies between model and experimental data due to strumming effect on mooring line. Acoustic Doppler Current Profiler's data were adopted for the first time as an input for the model. Skop and O' Hara (1968) Static analysis of a three dimensional multi-leg model Knutson (1987) A model developed at David taylor Model basin based on towed models. Henry Berteaux (1990) SFMOOR Iterative FEM analysis fully fitted with mooring components data base developed by a WHOI scientist. Henry Berteaux (1990) SSMOOR Same model applied to sub-surface moorings. Gobats and Grosenbaugh (1998) Fully developed Method based on Strip Theory developed by WHOI scientists. Experimental validation results are not known.

  15. Solving coupled groundwater flow systems using a Jacobian Free Newton Krylov method

    NASA Astrophysics Data System (ADS)

    Mehl, S.

    2012-12-01

    Jacobian Free Newton Krylov (JFNK) methods can have several advantages for simulating coupled groundwater flow processes versus conventional methods. Conventional methods are defined here as those based on an iterative coupling (rather than a direct coupling) and/or that use Picard iteration rather than Newton iteration. In an iterative coupling, the systems are solved separately, coupling information is updated and exchanged between the systems, and the systems are re-solved, etc., until convergence is achieved. Trusted simulators, such as Modflow, are based on these conventional methods of coupling and work well in many cases. An advantage of the JFNK method is that it only requires calculation of the residual vector of the system of equations and thus can make use of existing simulators regardless of how the equations are formulated. This opens the possibility of coupling different process models via augmentation of a residual vector by each separate process, which often requires substantially fewer changes to the existing source code than if the processes were directly coupled. However, appropriate perturbation sizes need to be determined for accurate approximations of the Frechet derivative, which is not always straightforward. Furthermore, preconditioning is necessary for reasonable convergence of the linear solution required at each Krylov iteration. Existing preconditioners can be used and applied separately to each process, which maximizes use of existing code and robust preconditioners. In this work, iteratively coupled parent-child local grid refinement models of groundwater flow and groundwater flow models with nonlinear exchanges to streams are used to demonstrate the utility of the JFNK approach for Modflow models. Use of incomplete Cholesky preconditioners with various levels of fill is examined on a suite of nonlinear and linear models to analyze the effect of the preconditioner. 
Comparisons of convergence and computer simulation time are made using conventional iteratively coupled methods and those based on Picard iteration to those formulated with JFNK to gain insights on the types of nonlinearities and system features that make one approach advantageous. Results indicate that nonlinearities associated with stream/aquifer exchanges are more problematic than those resulting from unconfined flow.
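
    The core of JFNK is that the Jacobian is never formed: the Krylov solver only needs Jacobian-vector products, and these are approximated by a finite-difference Frechet derivative, J·v ≈ (F(x + h·v) − F(x))/h. A minimal unpreconditioned sketch (Python with NumPy/SciPy, applied to a small nonlinear stand-in system rather than a groundwater model):

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, gmres

def jfnk(residual, x0, tol=1e-8, max_newton=20, eps=1e-7):
    """Jacobian-free Newton-Krylov: only residual evaluations are required."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_newton):
        f = residual(x)
        if np.linalg.norm(f) < tol:
            break
        def jv(v):
            # finite-difference Frechet derivative; the perturbation size h
            # trades truncation error against round-off error
            h = eps * (1.0 + np.linalg.norm(x)) / max(np.linalg.norm(v), 1e-30)
            return (residual(x + h * v) - f) / h
        J = LinearOperator((x.size, x.size), matvec=jv, dtype=float)
        dx, _ = gmres(J, -f)           # inexact Newton step via Krylov solve
        x = x + dx
    return x
```

    The choice of h in `jv` is exactly the perturbation-size issue noted above: too large and truncation error dominates, too small and round-off noise does. A production solver would also pass a preconditioner to `gmres`, mirroring the incomplete Cholesky experiments in the abstract.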

  16. Iterative approach as alternative to S-matrix in modal methods

    NASA Astrophysics Data System (ADS)

    Semenikhin, Igor; Zanuccoli, Mauro

    2014-12-01

    The continuously increasing complexity of opto-electronic devices and the rising demands on simulation accuracy lead to the need to solve very large systems of linear equations, making iterative methods promising and attractive from the computational point of view with respect to direct methods. In particular, an iterative approach potentially enables a reduction of the computational time required to solve Maxwell's equations by eigenmode expansion algorithms. Regardless of the particular eigenmode-finding method used, the expansion coefficients are computed as a rule by the scattering matrix (S-matrix) approach or similar techniques requiring on the order of M^3 operations. In this work we consider alternatives to the S-matrix technique which are based on pure iterative or mixed direct-iterative approaches. The possibility of diminishing the impact of the M^3-order calculations on the overall time, and in some cases even of reducing the number of arithmetic operations to M^2 by applying iterative techniques, is discussed. Numerical results are presented to assess the validity and potential of the proposed approaches.

  17. Cardiac phase-synchronized myocardial thallium-201 single-photon emission tomography using list mode data acquisition and iterative tomographic reconstruction.

    PubMed

    Vemmer, T; Steinbüchel, C; Bertram, J; Eschner, W; Kögler, A; Luig, H

    1997-03-01

    The purpose of this study was to determine whether data acquisition in the list mode and iterative tomographic reconstruction would render feasible cardiac phase-synchronized thallium-201 single-photon emission tomography (SPET) of the myocardium under routine conditions without modifications in tracer dose, acquisition time, or number of steps of the gamma camera. Seventy non-selected patients underwent 201Tl SPET imaging according to a routine protocol (74 MBq/2 mCi 201Tl, 180 degrees rotation of the gamma camera, 32 steps, 30 min). Gamma camera data, ECG, and a time signal were recorded in list mode. The cardiac cycle was divided into eight phases, the end-diastolic phase encompassing the QRS complex, and the end-systolic phase the T wave. Both phase- and non-phase-synchronized tomograms based on the same list mode data were reconstructed iteratively. Phase-synchronized and non-synchronized images were compared. Patients were divided into two groups depending on whether or not coronary artery disease had been definitely diagnosed prior to SPET imaging. The numbers of patients in both groups demonstrating defects visible on the phase-synchronized but not on the non-synchronized images were compared. It was found that both postexercise and redistribution phase tomograms were suited for interpretation. The changes from end-diastolic to end-systolic images allowed a comparative assessment of regional wall motility and tracer uptake. End-diastolic tomograms provided the best definition of defects. Additional defects not apparent on non-synchronized images were visible in 40 patients, six of whom did not show any defect on the non-synchronized images. Of 42 patients in whom coronary artery disease had been definitely diagnosed, 19 had additional defects not visible on the non-synchronized images, in comparison to 21 of 28 in whom coronary artery disease was suspected (P < 0.02; chi-squared). 
It is concluded that cardiac phase-synchronized 201Tl SPET of the myocardium was made feasible by list mode data acquisition and iterative reconstruction. The additional findings on the phase-synchronized tomograms, not visible on the non-synchronized ones, represented genuine defects. Cardiac phase-synchronized 201Tl SPET is advantageous in allowing simultaneous assessment of regional wall motion and tracer uptake, and in visualizing smaller defects.

  18. A new iterative triclass thresholding technique in image segmentation.

    PubMed

    Cai, Hongmin; Yang, Zhong; Cao, Xinhua; Xia, Weiming; Xu, Xiaoyin

    2014-03-01

    We present a new method in image segmentation that is based on Otsu's method but iteratively searches for subregions of the image for segmentation, instead of treating the full image as a whole region for processing. The iterative method starts with Otsu's threshold and computes the mean values of the two classes as separated by the threshold. Based on Otsu's threshold and the two mean values, the method separates the image into three classes instead of two as the standard Otsu's method does. The first two classes are determined as the foreground and background and they will not be processed further. The third class is denoted as a to-be-determined (TBD) region that is processed at the next iteration. At the succeeding iteration, Otsu's method is applied on the TBD region to calculate a new threshold and two class means, and the TBD region is again separated into three classes, namely, foreground, background, and a new TBD region, which by definition is smaller than the previous TBD regions. Then, the new TBD region is processed in a similar manner. The process stops when the difference between the Otsu thresholds calculated in two successive iterations is less than a preset value. Then, all the intermediate foreground and background regions are, respectively, combined to create the final segmentation result. Tests on synthetic and real images showed that the new iterative method can achieve better performance than the standard Otsu's method in many challenging cases, such as identifying weak objects and revealing fine structures of complex objects, while the added computational cost is minimal.
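
    A compact sketch of this iteration (Python/NumPy, with illustrative histogram and stopping parameters rather than the authors' exact implementation):

```python
import numpy as np

def otsu_threshold(values, bins=256):
    """Classic Otsu threshold: maximize the between-class variance.
    Returns the upper edge of the optimal class-0 bin."""
    hist, edges = np.histogram(values, bins=bins)
    centers = (edges[:-1] + edges[1:]) / 2
    w0 = np.cumsum(hist)                      # class-0 pixel counts
    w1 = w0[-1] - w0                          # class-1 pixel counts
    s0 = np.cumsum(hist * centers)
    mu0 = s0 / np.maximum(w0, 1)              # class means
    mu1 = (s0[-1] - s0) / np.maximum(w1, 1)
    return edges[np.argmax(w0 * w1 * (mu0 - mu1) ** 2) + 1]

def triclass_segment(image, eps=1e-2, max_iter=50):
    """Iterative triclass thresholding: split into foreground, background and
    a shrinking to-be-determined (TBD) region until the threshold stabilizes."""
    fg = np.zeros(image.shape, dtype=bool)
    tbd = np.ones(image.shape, dtype=bool)
    t = image.mean()                          # fallback for the final split
    t_prev = None
    for _ in range(max_iter):
        vals = image[tbd]
        if vals.size < 2 or np.ptp(vals) == 0:
            break
        t = otsu_threshold(vals)
        if t_prev is not None and abs(t - t_prev) < eps:
            break                             # thresholds have stabilized
        mu_low = vals[vals <= t].mean()
        mu_high = vals[vals > t].mean()
        fg |= tbd & (image > mu_high)         # confident foreground
        tbd &= (image > mu_low) & (image <= mu_high)   # new, smaller TBD
        t_prev = t
    fg |= tbd & (image > t)                   # binary split of the last TBD
    return fg
```

    The final line performs the standard binary split of the last TBD region once the threshold has stabilized, as described in the abstract.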

  19. Advanced Data Acquisition System Implementation for the ITER Neutron Diagnostic Use Case Using EPICS and FlexRIO Technology on a PXIe Platform

    NASA Astrophysics Data System (ADS)

    Sanz, D.; Ruiz, M.; Castro, R.; Vega, J.; Afif, M.; Monroe, M.; Simrock, S.; Debelle, T.; Marawar, R.; Glass, B.

    2016-04-01

    To aid in assessing the functional performance of ITER, the Fission Chambers (FC) of the neutron diagnostic use case deliver timestamped measurements of neutron source strength and fusion power. To demonstrate the Plant System Instrumentation & Control (I&C) required for such a system, the ITER Organization (IO) has developed a neutron diagnostics use case that fully complies with the guidelines presented in the Plant Control Design Handbook (PCDH). The implementation presented in this paper has been developed on the PXI Express (PXIe) platform using products from the ITER catalog of standard I&C hardware for fast controllers. Using FlexRIO technology, detector signals are acquired at 125 MS/s, while filtering, decimation, and three methods of neutron counting are performed in real-time via the onboard Field Programmable Gate Array (FPGA). Measurement results are reported every 1 ms through Experimental Physics and Industrial Control System (EPICS) Channel Access (CA), with real-time timestamps derived from the ITER Timing Communication Network (TCN) based on IEEE 1588-2008. Furthermore, in accordance with ITER specifications for CODAC Core System (CCS) application development, the software responsible for the management, configuration, and monitoring of system devices has been developed in compliance with a new EPICS module called Nominal Device Support (NDS) and the RIO/FlexRIO design methodology.

  20. Optimizing Clinical Trial Enrollment Methods Through "Goal Programming"

    PubMed Central

    Davis, J.M.; Sandgren, A.J.; Manley, A.R.; Daleo, M.A.; Smith, S.S.

    2014-01-01

    Introduction Clinical trials often fail to reach desired goals due to poor recruitment outcomes, including low participant turnout, high recruitment cost, or poor representation of minorities. At present, there is limited literature available to guide recruitment methodology. This study, conducted by researchers at the University of Wisconsin Center for Tobacco Research and Intervention (UW-CTRI), provides an example of how iterative analysis of recruitment data may be used to optimize recruitment outcomes during ongoing recruitment. Study methodology UW-CTRI’s research team provided a description of methods used to recruit smokers in two randomized trials (n = 196 and n = 175). The trials targeted low socioeconomic status (SES) smokers and involved time-intensive smoking cessation interventions. Primary recruitment goals were to meet required sample size and provide representative diversity while working with limited funds and limited time. Recruitment data was analyzed repeatedly throughout each study to optimize recruitment outcomes. Results Estimates of recruitment outcomes based on prior studies on smoking cessation suggested that researchers would be able to recruit 240 low SES smokers within 30 months at a cost of $72,000. With employment of methods described herein, researchers were able to recruit 374 low SES smokers over 30 months at a cost of $36,260. Discussion Each human subjects study presents unique recruitment challenges with time and cost of recruitment dependent on the sample population and study methodology. Nonetheless, researchers may be able to improve recruitment outcomes through iterative analysis of recruitment data and optimization of recruitment methods throughout the recruitment period. PMID:25642125

  1. Indirect iterative learning control for a discrete visual servo without a camera-robot model.

    PubMed

    Jiang, Ping; Bamforth, Leon C A; Feng, Zuren; Baruch, John E F; Chen, YangQuan

    2007-08-01

    This paper presents a discrete learning controller for vision-guided robot trajectory imitation with no prior knowledge of the camera-robot model. A teacher demonstrates a desired movement in front of a camera, and then, the robot is tasked to replay it by repetitive tracking. The imitation procedure is considered as a discrete tracking control problem in the image plane, with an unknown and time-varying image Jacobian matrix. Instead of updating the control signal directly, as is usually done in iterative learning control (ILC), a series of neural networks are used to approximate the unknown Jacobian matrix around every sample point in the demonstrated trajectory, and the time-varying weights of local neural networks are identified through repetitive tracking, i.e., indirect ILC. This makes repetitive segmented training possible, and a segmented training strategy is presented to retain the training trajectories solely within the effective region for neural network approximation. However, a singularity problem may occur if an unmodified neural-network-based Jacobian estimation is used to calculate the robot end-effector velocity. A new weight modification algorithm is proposed which ensures invertibility of the estimation, thus circumventing the problem. Stability is further discussed, and the relationship between the approximation capability of the neural network and the tracking accuracy is obtained. Simulations and experiments are carried out to illustrate the validity of the proposed controller for trajectory imitation of robot manipulators with unknown time-varying Jacobian matrices.

  2. Assessment of Ice Shape Roughness Using a Self-Organizing Map Approach

    NASA Technical Reports Server (NTRS)

    Mcclain, Stephen T.; Kreeger, Richard E.

    2013-01-01

    Self-organizing maps are neural-network techniques for representing noisy, multidimensional data aligned along a lower-dimensional and nonlinear manifold. For a large set of noisy data, each element of a finite set of codebook vectors is iteratively moved in the direction of the data closest to the winner codebook vector. Through successive iterations, the codebook vectors begin to align with the trends of the higher-dimensional data. Prior investigations of ice shapes have focused on using self-organizing maps to characterize mean ice forms. The Icing Research Branch has recently acquired a high resolution three dimensional scanner system capable of resolving ice shape surface roughness. A method is presented for the evaluation of surface roughness variations using high-resolution surface scans based on a self-organizing map representation of the mean ice shape. The new method is demonstrated for 1) an 18-in. NACA 23012 airfoil at 2° AOA just after the initial ice coverage of the leading 5% of the suction surface of the airfoil, 2) a 21-in. NACA 0012 at 0° AOA following coverage of the leading 10% of the airfoil surface, and 3) a cold-soaked 21-in. NACA 0012 airfoil without ice. The SOM method resulted in descriptions of the statistical coverage limits and a quantitative representation of early stages of ice roughness formation on the airfoils. Limitations of the SOM method are explored, and the uncertainty limits of the method are investigated using the non-iced NACA 0012 airfoil measurements.
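
    The codebook update at the heart of a self-organizing map fits in a few lines. The sketch below (Python/NumPy; a generic 1-D map trained on toy data scattered about a parabola, not the ice-shape data) shows the winner-and-neighborhood update with a shrinking neighborhood and decaying learning rate:

```python
import numpy as np

def train_som(data, n_codes=10, n_iter=2000, seed=0):
    """Minimal 1-D self-organizing map: codebook vectors are pulled toward
    the presented sample, weighted by distance to the winner on the map."""
    rng = np.random.default_rng(seed)
    codes = data[rng.choice(len(data), n_codes, replace=False)].astype(float)
    idx = np.arange(n_codes)
    for t in range(n_iter):
        x = data[rng.integers(len(data))]                 # random sample
        winner = np.argmin(np.linalg.norm(codes - x, axis=1))
        lr = 0.5 * (1 - t / n_iter)                       # decaying learning rate
        sigma = max(n_codes / 2 * (1 - t / n_iter), 0.5)  # shrinking neighborhood
        h = np.exp(-((idx - winner) ** 2) / (2 * sigma ** 2))
        codes += lr * h[:, None] * (x - codes)            # move toward the sample
    return codes
```

    After training, the codebook vectors hug the one-dimensional manifold underlying the noisy two-dimensional samples, which is the property the ice-shape work exploits for representing mean ice forms.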

  3. A Methodology for the Derivation of Unloaded Abdominal Aortic Aneurysm Geometry With Experimental Validation

    PubMed Central

    Chandra, Santanu; Gnanaruban, Vimalatharmaiyah; Riveros, Fabian; Rodriguez, Jose F.; Finol, Ender A.

    2016-01-01

    In this work, we present a novel method for the derivation of the unloaded geometry of an abdominal aortic aneurysm (AAA) from a pressurized geometry in turn obtained by 3D reconstruction of computed tomography (CT) images. The approach was experimentally validated with an aneurysm phantom loaded with gauge pressures of 80, 120, and 140 mm Hg. The unloaded phantom geometries estimated from these pressurized states were compared to the actual unloaded phantom geometry, resulting in mean nodal surface distances of up to 3.9% of the maximum aneurysm diameter. An in-silico verification was also performed using a patient-specific AAA mesh, resulting in maximum nodal surface distances of 8 μm after running the algorithm for eight iterations. The methodology was then applied to 12 patient-specific AAA for which their corresponding unloaded geometries were generated in 5–8 iterations. The wall mechanics resulting from finite element analysis of the pressurized (CT image-based) and unloaded geometries were compared to quantify the relative importance of using an unloaded geometry for AAA biomechanics. The pressurized AAA models underestimate peak wall stress (quantified by the first principal stress component) on average by 15% compared to the unloaded AAA models. The validation and application of the method, readily compatible with any finite element solver, underscores the importance of generating the unloaded AAA volume mesh prior to using wall stress as a biomechanical marker for rupture risk assessment. PMID:27538124
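
    The fixed-point structure of such unloaded-geometry algorithms can be shown with a toy one-dimensional stand-in: given the imaged (pressurized) node positions and a forward model that maps an unloaded mesh to its loaded shape, the unloaded estimate is repeatedly corrected by the predicted displacement, x_u ← x_u − (inflate(x_u) − x_img). The `inflate` map below is a hypothetical smooth displacement field, not a finite-element AAA model:

```python
import numpy as np

def inflate(x):
    """Hypothetical forward model: loaded positions of unloaded nodes x."""
    return x + 0.05 * np.sin(np.pi * x)        # smooth pressure-induced shift

x_true = np.linspace(0.0, 1.0, 11)             # ground-truth unloaded mesh
x_img = inflate(x_true)                        # what the CT image would show

x_u = x_img.copy()                             # start from the imaged geometry
for _ in range(20):
    x_u = x_u - (inflate(x_u) - x_img)         # backward-displacement update
```

    The update contracts because the displacement gradient of this toy field (at most 0.05π ≈ 0.16) is well below one, consistent with the small number of iterations (5-8) reported above for the patient-specific meshes.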

  4. On the performance of joint iterative detection and decoding in coherent optical channels with laser frequency fluctuations

    NASA Astrophysics Data System (ADS)

    Castrillón, Mario A.; Morero, Damián A.; Agazzi, Oscar E.; Hueda, Mario R.

    2015-08-01

    The joint iterative detection and decoding (JIDD) technique has been proposed by Barbieri et al. (2007) with the objective of compensating the time-varying phase noise and constant frequency offset experienced in satellite communication systems. The application of JIDD to optical coherent receivers in the presence of laser frequency fluctuations has not been reported in prior literature. Laser frequency fluctuations are caused by mechanical vibrations, power supply noise, and other mechanisms. They significantly degrade the performance of the carrier phase estimator in high-speed intradyne coherent optical receivers. This work investigates the performance of the JIDD algorithm in multi-gigabit optical coherent receivers. We present simulation results of bit error rate (BER) for non-differential polarization division multiplexing (PDM)-16QAM modulation in a 200 Gb/s coherent optical system that includes an LDPC code with 20% overhead and net coding gain of 11.3 dB at BER = 10^-15. Our study shows that JIDD with a pilot rate ≤ 5% compensates for both laser phase noise and laser frequency fluctuation. Furthermore, since JIDD is used with non-differential modulation formats, we find that gains in excess of 1 dB can be achieved over existing solutions based on an explicit carrier phase estimator with differential modulation. The impact of the fiber nonlinearities in dense wavelength division multiplexing (DWDM) systems is also investigated. Our results demonstrate that JIDD is an excellent candidate for application in next generation high-speed optical coherent receivers.

  5. ROBNCA: robust network component analysis for recovering transcription factor activities.

    PubMed

    Noor, Amina; Ahmad, Aitzaz; Serpedin, Erchin; Nounou, Mohamed; Nounou, Hazem

    2013-10-01

    Network component analysis (NCA) is an efficient method of reconstructing the transcription factor activity (TFA), which makes use of the gene expression data and prior information available about transcription factor (TF)-gene regulations. Most of the contemporary algorithms either exhibit the drawback of inconsistency and poor reliability, or suffer from prohibitive computational complexity. In addition, the existing algorithms do not possess the ability to counteract the presence of outliers in the microarray data. Hence, robust and computationally efficient algorithms are needed to enable practical applications. We propose ROBust Network Component Analysis (ROBNCA), a novel iterative algorithm that explicitly models the possible outliers in the microarray data. An attractive feature of the ROBNCA algorithm is the derivation of a closed form solution for estimating the connectivity matrix, which was not available in prior contributions. The ROBNCA algorithm is compared with FastNCA and the non-iterative NCA (NI-NCA). ROBNCA estimates the TF activity profiles as well as the TF-gene control strength matrix with a much higher degree of accuracy than FastNCA and NI-NCA, irrespective of varying noise, correlation and/or amount of outliers in the case of synthetic data. The ROBNCA algorithm is also tested on Saccharomyces cerevisiae data and Escherichia coli data, and it is observed to outperform the existing algorithms. The run time of the ROBNCA algorithm is comparable with that of FastNCA, and is hundreds of times faster than NI-NCA. The ROBNCA software is available at http://people.tamu.edu/~amina/ROBNCA

  6. Iterative quantization: a Procrustean approach to learning binary codes for large-scale image retrieval.

    PubMed

    Gong, Yunchao; Lazebnik, Svetlana; Gordo, Albert; Perronnin, Florent

    2013-12-01

    This paper addresses the problem of learning similarity-preserving binary codes for efficient similarity search in large-scale image collections. We formulate this problem in terms of finding a rotation of zero-centered data so as to minimize the quantization error of mapping this data to the vertices of a zero-centered binary hypercube, and propose a simple and efficient alternating minimization algorithm to accomplish this task. This algorithm, dubbed iterative quantization (ITQ), has connections to multiclass spectral clustering and to the orthogonal Procrustes problem, and it can be used both with unsupervised data embeddings such as PCA and supervised embeddings such as canonical correlation analysis (CCA). The resulting binary codes significantly outperform several other state-of-the-art methods. We also show that further performance improvements can result from transforming the data with a nonlinear kernel mapping prior to PCA or CCA. Finally, we demonstrate an application of ITQ to learning binary attributes or "classemes" on the ImageNet data set.
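
    The alternating minimization at the heart of ITQ is compact enough to sketch directly. A minimal numpy version (function name and initialization are ours) alternates between thresholding to get the codes B and solving the orthogonal Procrustes problem for the rotation R:

```python
import numpy as np

def itq(V, n_iter=50, seed=0):
    """Iterative quantization: learn an orthogonal R minimizing the
    quantization error ||B - V R||_F^2 over binary codes B in {-1, +1}.

    V : (n, c) zero-centered data, e.g. PCA- or CCA-projected features.
    """
    rng = np.random.default_rng(seed)
    # random orthogonal initialization
    R, _ = np.linalg.qr(rng.standard_normal((V.shape[1], V.shape[1])))
    for _ in range(n_iter):
        B = np.sign(V @ R)                    # fix R, update codes
        B[B == 0] = 1
        # fix B, update R: orthogonal Procrustes via SVD of B^T V
        U, _, Vt = np.linalg.svd(B.T @ V)
        R = Vt.T @ U.T
    B = np.sign(V @ R)
    B[B == 0] = 1
    return B, R
```

In practice V would be the zero-centered data after an unsupervised (PCA) or supervised (CCA) embedding, as the abstract describes.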

  7. Results of high heat flux qualification tests of W monoblock components for WEST

    NASA Astrophysics Data System (ADS)

    Greuner, H.; Böswirth, B.; Lipa, M.; Missirlian, M.; Richou, M.

    2017-12-01

    One goal of the WEST project (W Environment in Steady-state Tokamak) is the manufacturing, quality assessment and operation of ITER-like actively water-cooled divertor plasma facing components made of tungsten. Six W monoblock plasma facing units (PFUs) from different suppliers have been successfully evaluated in the high heat flux test facility GLADIS at IPP. Each PFU is equipped with 35 W monoblocks of an ITER-like geometry. However, the W blocks are made of different tungsten grades and the suppliers applied different bonding techniques between tungsten and the inserted Cu-alloy cooling tubes. The intention of the HHF test campaign was to assess the manufacturing quality of the PFUs on the basis of a statistical analysis of the surface temperature evolution of the individual W monoblocks during thermal loading with 100 cycles at 10 MW m⁻². These tests confirm the non-destructive examinations performed by the manufacturer and CEA prior to the installation of the WEST platform, and no defects of the components were detected.

  8. Assessing Behavioural Manifestations Prior to Clinical Diagnosis of Huntington Disease: "Anger and Irritability" and "Obsessions and Compulsions"

    PubMed Central

    Vaccarino, Anthony L; Anonymous; Anderson, Karen E.; Borowsky, Beth; Coccaro, Emil; Craufurd, David; Endicott, Jean; Giuliano, Joseph; Groves, Mark; Guttman, Mark; Ho, Aileen K; Kupchak, Peter; Paulsen, Jane S.; Stanford, Matthew S.; van Kammen, Daniel P; Watson, David; Wu, Kevin D; Evans, Ken

    2011-01-01

    The Functional Rating Scale Taskforce for pre-Huntington Disease (FuRST-pHD) is a multinational, multidisciplinary initiative with the goal of developing a data-driven, comprehensive, psychometrically sound rating scale for assessing symptoms and functional ability in prodromal and early Huntington disease (HD) gene expansion carriers. The process involves input from numerous sources to identify relevant symptom domains, including HD individuals, caregivers, and experts from a variety of fields, as well as knowledge gained from the analysis of data from ongoing large-scale studies in HD using existing clinical scales. This is an iterative process in which an ongoing series of field tests in prodromal (prHD) and early HD individuals provides the team with data on which to base decisions regarding which questions should undergo further development or testing and which should be excluded. We report here the development and assessment of the first iteration of interview questions aimed at assessing "Anger and Irritability" and "Obsessions and Compulsions" in prHD individuals. PMID:21826116

  9. Application of a dual-resolution voxelization scheme to compressed-sensing (CS)-based iterative reconstruction in digital tomosynthesis (DTS)

    NASA Astrophysics Data System (ADS)

    Park, S. Y.; Kim, G. A.; Cho, H. S.; Park, C. K.; Lee, D. Y.; Lim, H. W.; Lee, H. W.; Kim, K. S.; Kang, S. Y.; Park, J. E.; Kim, W. S.; Jeon, D. H.; Je, U. K.; Woo, T. H.; Oh, J. E.

    2018-02-01

    In recent digital tomosynthesis (DTS), iterative reconstruction methods are often used owing to their potential to provide multiplanar images of superior quality to conventional filtered-backprojection (FBP)-based methods. However, they incur an enormous computational cost in the iterative process, which has remained an obstacle to their practical use. In this work, we propose a new DTS reconstruction method incorporating a dual-resolution voxelization scheme to overcome these difficulties, in which the voxels outside a small region-of-interest (ROI) containing the diagnostic target are binned by 2 × 2 × 2 while the voxels inside the ROI remain unbinned. We considered a compressed-sensing (CS)-based iterative algorithm with a dual-constraint strategy for more accurate DTS reconstruction. We implemented the proposed algorithm and performed a systematic simulation and experiment to demonstrate its viability. Our results indicate that the proposed method is effective at considerably reducing the computational cost of iterative DTS reconstruction while keeping the image quality inside the ROI largely undegraded. A binning size of 2 × 2 × 2 required only about 31.9% of the computational memory and about 2.6% of the reconstruction time of the unbinned case. The reconstruction quality was evaluated in terms of the root-mean-square error (RMSE), the contrast-to-noise ratio (CNR), and the universal-quality index (UQI).
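
    The voxel bookkeeping behind such a scheme is simple to sketch: keep the ROI at full resolution and represent the background on a 2 × 2 × 2-binned grid. This toy helper (ours, with side lengths assumed divisible by two) shows the idea; the binned grid alone holds only 1/8 of the original voxels.

```python
import numpy as np

def dual_resolution(vol, roi):
    """Return (roi_block, binned_vol): the ROI sub-volume at full resolution,
    plus the whole volume averaged over 2x2x2 bins for the background."""
    roi_block = vol[roi].copy()
    nz, ny, nx = vol.shape
    # average each 2x2x2 block into a single background voxel
    binned = vol.reshape(nz // 2, 2, ny // 2, 2, nx // 2, 2).mean(axis=(1, 3, 5))
    return roi_block, binned
```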

  10. Iterated unscented Kalman filter for phase unwrapping of interferometric fringes.

    PubMed

    Xie, Xianming

    2016-08-22

    A novel phase unwrapping algorithm based on an iterated unscented Kalman filter is proposed to estimate the unambiguous unwrapped phase of interferometric fringes. The method combines an iterated unscented Kalman filter with a robust phase gradient estimator based on an amended matrix pencil model, and an efficient quality-guided strategy based on heap sort. The iterated unscented Kalman filter, one of the most robust non-linear estimation methods in the Bayesian framework, is applied for the first time to perform noise suppression and phase unwrapping of interferometric fringes simultaneously, which reduces the complexity and difficulty of the pre-filtering procedure that usually precedes phase unwrapping, and can even remove the pre-filtering procedure entirely. The robust phase gradient estimator efficiently and accurately obtains the phase gradient information from the interferometric fringes that the iterated unscented Kalman filtering phase unwrapping model requires. The efficient quality-guided strategy ensures that the proposed method rapidly unwraps wrapped pixels along a path from the high-quality to the low-quality areas of wrapped phase images, which greatly improves the efficiency of phase unwrapping. Results obtained from synthetic and real data show that the proposed method obtains better solutions with acceptable time consumption compared with some of the most widely used algorithms.
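
    The quality-guided, heap-sorted scan order is the easiest piece to make concrete. A small sketch (ours, not the paper's code) returns the order in which pixels would be unwrapped: at each step the highest-quality pixel adjacent to the already-unwrapped region is taken from a heap.

```python
import heapq
import numpy as np

def quality_guided_order(quality, seed_pixel):
    """Visit pixels from high to low quality, growing from a seed pixel.

    quality    : 2-D array, larger = more reliable.
    seed_pixel : (i, j) starting pixel.
    Returns the list of pixel coordinates in visiting order.
    """
    h, w = quality.shape
    visited = np.zeros((h, w), dtype=bool)
    heap = [(-quality[seed_pixel], seed_pixel)]   # max-heap via negated quality
    order = []
    while heap:
        negq, (i, j) = heapq.heappop(heap)
        if visited[i, j]:
            continue
        visited[i, j] = True
        order.append((i, j))
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ni, nj = i + di, j + dj
            if 0 <= ni < h and 0 <= nj < w and not visited[ni, nj]:
                heapq.heappush(heap, (-quality[ni, nj], (ni, nj)))
    return order
```

In a full unwrapper, each visited pixel would be unwrapped against its already-visited neighbour before moving on.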

  11. Influence of Ultra-Low-Dose and Iterative Reconstructions on the Visualization of Orbital Soft Tissues on Maxillofacial CT.

    PubMed

    Widmann, G; Juranek, D; Waldenberger, F; Schullian, P; Dennhardt, A; Hoermann, R; Steurer, M; Gassner, E-M; Puelacher, W

    2017-08-01

    Dose reduction on CT scans for surgical planning and postoperative evaluation of midface and orbital fractures is an important concern. The purpose of this study was to evaluate the variability of various low-dose and iterative reconstruction techniques on the visualization of orbital soft tissues. Contrast-to-noise ratios of the optic nerve and inferior rectus muscle and subjective scores of a human cadaver were calculated from CT with a reference dose protocol (CT dose index volume = 36.69 mGy) and a subsequent series of low-dose protocols (LDPs I-IV: CT dose index volume = 4.18, 2.64, 0.99, and 0.53 mGy) with filtered back-projection (FBP) and adaptive statistical iterative reconstruction (ASIR)-50, ASIR-100, and model-based iterative reconstruction. The Dunn Multiple Comparison Test was used to compare each combination of protocols (α = .05). Compared with the reference dose protocol with FBP, the following statistically significant differences in contrast-to-noise ratios were shown (all, P ≤ .012) for the following: 1) optic nerve: LDP-I with FBP; LDP-II with FBP and ASIR-50; LDP-III with FBP, ASIR-50, and ASIR-100; and LDP-IV with FBP, ASIR-50, and ASIR-100; and 2) inferior rectus muscle: LDP-II with FBP, LDP-III with FBP and ASIR-50, and LDP-IV with FBP, ASIR-50, and ASIR-100. Model-based iterative reconstruction showed the best contrast-to-noise ratio in all images and provided similar subjective scores for LDP-II. ASIR-50 had no remarkable effect on subjective scores, and ASIR-100 a small effect. Compared with a reference dose protocol with FBP, model-based iterative reconstruction may show similar diagnostic visibility of orbital soft tissues at a CT dose index volume of 2.64 mGy. Low-dose technology and iterative reconstruction technology may redefine current reference dose levels in maxillofacial CT. © 2017 by American Journal of Neuroradiology.

  12. The effects of iterative reconstruction in CT on low-contrast liver lesion volumetry: a phantom study

    NASA Astrophysics Data System (ADS)

    Li, Qin; Berman, Benjamin P.; Schumacher, Justin; Liang, Yongguang; Gavrielides, Marios A.; Yang, Hao; Zhao, Binsheng; Petrick, Nicholas

    2017-03-01

    Tumor volume measured from computed tomography images is considered a biomarker for disease progression or treatment response. The estimation of the tumor volume depends on the imaging system parameters selected, as well as on lesion characteristics. In this study, we examined how different image reconstruction methods affect the measurement of lesions in an anthropomorphic liver phantom with a non-uniform background. Iterative statistics-based and model-based reconstructions, as well as filtered back-projection, were evaluated and compared. Statistics-based reconstruction and filtered back-projection yielded similar estimation performance, while model-based reconstruction yielded higher precision but lower accuracy for small lesions. Iterative reconstructions exhibited a higher signal-to-noise ratio but slightly lower lesion contrast relative to the background. A better understanding of lesion volumetry performance as a function of acquisition parameters and lesion characteristics can lead to its incorporation as a routine sizing tool.

  13. Observer-based distributed adaptive iterative learning control for linear multi-agent systems

    NASA Astrophysics Data System (ADS)

    Li, Jinsha; Liu, Sanyang; Li, Junmin

    2017-10-01

    This paper investigates the consensus problem for linear multi-agent systems from the viewpoint of two-dimensional systems when the state information of each agent is not available. An observer-based, fully distributed adaptive iterative learning protocol is designed. A local observer is designed for each agent, and it is shown that, without using any global information about the communication graph, all agents achieve perfect consensus for any undirected connected communication graph as the number of iterations tends to infinity. A Lyapunov-like energy function is employed to facilitate the learning protocol design and property analysis. Finally, a simulation example is given to illustrate the theoretical analysis.
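
    The iterative learning idea itself, reusing the previous trial's tracking error to update the input for the next repetition of the same task, can be shown on a single scalar plant. This is a standard P-type ILC sketch, not the paper's observer-based multi-agent protocol; the plant and gains are our choices.

```python
import numpy as np

def p_type_ilc(n_trials=30, T=50, a=0.2, b=1.0, gamma=0.9):
    """P-type ILC on x(t+1) = a x(t) + b u(t), y = x, repeating one
    finite-horizon tracking task.  After each trial the input is updated
    from the next-step tracking error: u_{j+1}(t) = u_j(t) + gamma * e_j(t+1).
    Returns the max tracking error of each trial."""
    t = np.arange(T + 1)
    y_ref = np.sin(2 * np.pi * t / T)      # reference trajectory, y_ref(0) = 0
    u = np.zeros(T)
    errors = []
    for _ in range(n_trials):
        x = np.zeros(T + 1)                # same initial state every trial
        for k in range(T):
            x[k + 1] = a * x[k] + b * u[k]
        e = y_ref - x
        errors.append(np.max(np.abs(e[1:])))
        u = u + gamma * e[1:]              # learning update
    return errors
```

With |1 - gamma*b| < 1 the error contracts from trial to trial, mirroring the convergence-as-iterations-tend-to-infinity statement in the abstract.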

  14. Convergence analysis of modulus-based matrix splitting iterative methods for implicit complementarity problems.

    PubMed

    Wang, An; Cao, Yang; Shi, Quan

    2018-01-01

    In this paper, we present a complete version of the convergence theory of the modulus-based matrix splitting iteration methods for solving a class of implicit complementarity problems proposed by Hong and Li (Numer. Linear Algebra Appl. 23:629-641, 2016). New convergence conditions are presented when the system matrix is a positive-definite matrix and an [Formula: see text]-matrix, respectively.
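
    The implicit-complementarity setting with general splittings A = M - N is involved, but the basic modulus trick is easy to illustrate on a standard linear complementarity problem. The sketch below is our simplification, using the trivial splitting M = A, N = 0 and Ω = I by default: substituting z = (|x| + x)/2 and w = Ω(|x| - x)/2 turns the LCP into the fixed-point equation (A + Ω) x = (Ω - A)|x| - 2q.

```python
import numpy as np

def modulus_iteration(A, q, omega=None, tol=1e-10, max_iter=500):
    """Modulus-based iteration for the LCP:
    find z >= 0 with w = A z + q >= 0 and z . w = 0."""
    n = len(q)
    Omega = np.eye(n) if omega is None else omega
    x = np.zeros(n)
    for _ in range(max_iter):
        # fixed-point step of (A + Omega) x = (Omega - A)|x| - 2 q
        x_new = np.linalg.solve(A + Omega, (Omega - A) @ np.abs(x) - 2.0 * q)
        if np.linalg.norm(x_new - x) < tol:
            x = x_new
            break
        x = x_new
    return (np.abs(x) + x) / 2.0           # recover z from the modulus variable
```

For a positive-definite A this map is a contraction and the iterates converge; the cited theory extends such conditions to implicit complementarity problems and general splittings.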

  15. A reduced complexity highly power/bandwidth efficient coded FQPSK system with iterative decoding

    NASA Technical Reports Server (NTRS)

    Simon, M. K.; Divsalar, D.

    2001-01-01

    Based on a representation of FQPSK as a trellis-coded modulation, this paper investigates the potential improvement in power efficiency obtained from the application of simple outer codes to form a concatenated coding arrangement with iterative decoding.

  16. Structure-aware depth super-resolution using Gaussian mixture model

    NASA Astrophysics Data System (ADS)

    Kim, Sunok; Oh, Changjae; Kim, Youngjung; Sohn, Kwanghoon

    2015-03-01

    This paper presents a probabilistic optimization approach to enhance the resolution of a depth map. Conventionally, a high-resolution color image is considered as a cue for depth super-resolution under the assumption that pixels with similar color are likely to have similar depth. This assumption can induce texture transfer from the color image into the depth map and edge-blurring artifacts at the depth boundaries. To alleviate these problems, we propose an efficient depth prior exploiting a Gaussian mixture model, in which an estimated depth map is taken as a feature for computing the affinity between two pixels. Furthermore, a fixed-point iteration scheme is adopted to address the non-linearity of a constraint derived from the proposed prior. The experimental results show that the proposed method outperforms state-of-the-art methods both quantitatively and qualitatively.

  17. An efficient spectral crystal plasticity solver for GPU architectures

    NASA Astrophysics Data System (ADS)

    Malahe, Michael

    2018-03-01

    We present a spectral crystal plasticity (CP) solver for graphics processing unit (GPU) architectures that achieves a tenfold increase in efficiency over prior GPU solvers. The approach makes use of a database containing a spectral decomposition of CP simulations performed using a conventional iterative solver over a parameter space of crystal orientations and applied velocity gradients. The key improvements in efficiency come from reducing global memory transactions, exposing more instruction-level parallelism, reducing integer instructions and performing fast range reductions on trigonometric arguments. The scheme also makes more efficient use of memory than prior work, allowing for larger problems to be solved on a single GPU. We illustrate these improvements with a simulation of 390 million crystal grains on a consumer-grade GPU, which executes at a rate of 2.72 s per strain step.

  18. Image quality of CT angiography in young children with congenital heart disease: a comparison between the sinogram-affirmed iterative reconstruction (SAFIRE) and advanced modelled iterative reconstruction (ADMIRE) algorithms.

    PubMed

    Nam, S B; Jeong, D W; Choo, K S; Nam, K J; Hwang, J-Y; Lee, J W; Kim, J Y; Lim, S J

    2017-12-01

    To compare the image quality of computed tomography angiography (CTA) reconstructed by sinogram-affirmed iterative reconstruction (SAFIRE) with that of advanced modelled iterative reconstruction (ADMIRE) in children with congenital heart disease (CHD). Thirty-one children (8.23±13.92 months) with CHD who underwent CTA were enrolled. Images were reconstructed using SAFIRE (strength 5) and ADMIRE (strength 5). Objective image qualities (attenuation, noise) were measured in the great vessels and heart chambers. Two radiologists independently calculated the contrast-to-noise ratio (CNR) by measuring the intensity and noise of the myocardial walls. Subjective noise, diagnostic confidence, and sharpness at the level prior to the first branch of the main pulmonary artery were also graded by the two radiologists independently. The objective image noise of ADMIRE was significantly lower than that of SAFIRE in the right atrium, right ventricle, and myocardial wall (p<0.05); however, there were no significant differences observed in the attenuations among the four chambers and great vessels, except in the pulmonary arteries (p>0.05). The mean CNR values were 21.56±10.80 for ADMIRE and 18.21±6.98 for SAFIRE, which were significantly different (p<0.05). In addition, the diagnostic confidence of ADMIRE was significantly lower than that of SAFIRE (p<0.05), while the subjective image noise and sharpness of ADMIRE were not significantly different (p>0.05). CTA using ADMIRE was superior to SAFIRE when comparing the objective and subjective image quality in children with CHD. Copyright © 2017 The Royal College of Radiologists. Published by Elsevier Ltd. All rights reserved.

  19. Aerodynamic Optimization of Rocket Control Surface Geometry Using Cartesian Methods and CAD Geometry

    NASA Technical Reports Server (NTRS)

    Nelson, Andrea; Aftosmis, Michael J.; Nemec, Marian; Pulliam, Thomas H.

    2004-01-01

    Aerodynamic design is an iterative process involving geometry manipulation and complex computational analysis subject to physical constraints and aerodynamic objectives. A design cycle consists of first establishing the performance of a baseline design, which is usually created with low-fidelity engineering tools, and then progressively optimizing the design to maximize its performance. Optimization techniques have evolved from relying exclusively on designer intuition and insight in traditional trial and error methods, to sophisticated local and global search methods. Recent attempts at automating the search through a large design space with formal optimization methods include both database-driven and direct evaluation schemes. Databases are being used in conjunction with surrogate and neural network models as a basis on which to run optimization algorithms. Optimization algorithms are also being driven by the direct evaluation of objectives and constraints using high-fidelity simulations. Surrogate methods use data points obtained from simulations, and possibly gradients evaluated at the data points, to create mathematical approximations of a database. Neural network models work in a similar fashion, using a number of high-fidelity database calculations as training iterations to create a database model. Optimal designs are obtained by coupling an optimization algorithm to the database model. Evaluation of the current best design then either gives a new local optimum and/or increases the fidelity of the approximation model for the next iteration. Surrogate methods have also been developed that iterate on the selection of data points to decrease the uncertainty of the approximation model prior to searching for an optimal design. The database approximation models for each of these cases, however, become computationally expensive with an increase in dimensionality. Thus the method of using optimization algorithms to search a database model becomes problematic as the number of design variables is increased.
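
    The database-driven loop described above can be sketched in a few lines: fit a cheap surrogate to the designs evaluated so far, minimize the surrogate, then spend one high-fidelity evaluation at that candidate and refit. The toy 1-D objective and polynomial surrogate below are both our stand-ins for the expensive simulation and approximation model.

```python
import numpy as np

def expensive_eval(x):
    # stand-in for a high-fidelity simulation (hypothetical objective)
    return (x - 0.3) ** 2 + 0.1 * np.sin(8 * x)

def surrogate_search(n_init=5, n_iter=10, seed=0):
    """Surrogate-based optimization: alternate surrogate fit / surrogate
    minimization / direct high-fidelity evaluation of the candidate."""
    rng = np.random.default_rng(seed)
    X = rng.uniform(0.0, 1.0, n_init)          # initial design database
    y = expensive_eval(X)
    grid = np.linspace(0.0, 1.0, 201)
    for _ in range(n_iter):
        # fit a cubic polynomial surrogate to the database of evaluations
        coeffs = np.polyfit(X, y, deg=min(3, len(X) - 1))
        x_next = grid[np.argmin(np.polyval(coeffs, grid))]
        X = np.append(X, x_next)               # evaluate candidate with the
        y = np.append(y, expensive_eval(x_next))  # "expensive" model, refit next loop
    best = X[np.argmin(y)]
    return best, y.min()
```

Each loop adds one data point where the surrogate predicts an optimum, which is exactly the iterate-refit-research cycle the passage describes.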

  20. A new Monte Carlo-based treatment plan optimization approach for intensity modulated radiation therapy.

    PubMed

    Li, Yongbao; Tian, Zhen; Shi, Feng; Song, Ting; Wu, Zhaoxia; Liu, Yaqiang; Jiang, Steve; Jia, Xun

    2015-04-07

    Intensity-modulated radiation treatment (IMRT) plan optimization needs beamlet dose distributions. Pencil-beam or superposition/convolution type algorithms are typically used because of their high computational speed. However, inaccurate beamlet dose distributions may mislead the optimization process and hinder the resulting plan quality. To solve this problem, the Monte Carlo (MC) simulation method has been used to compute all beamlet doses prior to the optimization step. The conventional approach samples the same number of particles from each beamlet. Yet this is not the optimal use of MC in this problem. In fact, there are beamlets that have very small intensities after solving the plan optimization problem. For those beamlets, it may be possible to use fewer particles in dose calculations to increase efficiency. Based on this idea, we have developed a new MC-based IMRT plan optimization framework that iteratively performs MC dose calculation and plan optimization. At each dose calculation step, the particle numbers for beamlets were adjusted based on the beamlet intensities obtained through solving the plan optimization problem in the last iteration step. We modified a GPU-based MC dose engine to allow simultaneous computations of a large number of beamlet doses. To test the accuracy of our modified dose engine, we compared the dose from a broad beam and the summed beamlet doses in this beam in an inhomogeneous phantom. Agreement within 1% for the maximum difference and 0.55% for the average difference was observed. We then validated the proposed MC-based optimization scheme in one lung IMRT case. It was found that the conventional scheme required 10⁶ particles from each beamlet to achieve an optimization result that differed from the ground truth by 3% in the fluence map and 1% in dose. In contrast, the proposed scheme achieved the same level of accuracy with, on average, 1.2 × 10⁵ particles per beamlet. Correspondingly, the computation time, including both MC dose calculations and plan optimizations, was reduced by a factor of 4.4, from 494 to 113 s, using only one GPU card.
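
    The core resource-allocation idea, spending particles where the current fluence map says they matter, can be sketched independently of any dose engine. The proportional rule and floor value below are our invention, not the paper's exact scheme.

```python
import numpy as np

def allocate_particles(intensities, total=1_000_000, floor=1000):
    """Assign MC particle counts per beamlet proportional to the current
    beamlet intensities, with a small floor so low-intensity beamlets are
    still sampled in the next dose-calculation pass."""
    w = np.asarray(intensities, dtype=float)
    w = w / w.sum()                              # normalize intensities to weights
    counts = np.maximum((w * total).astype(int), floor)
    return counts
```

The floor keeps near-zero-intensity beamlets from being entirely unsampled, so the next optimization pass can still raise their weight if needed; the total count is therefore only approximately preserved.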

  1. Performance and capacity analysis of Poisson photon-counting based Iter-PIC OCDMA systems.

    PubMed

    Li, Lingbin; Zhou, Xiaolin; Zhang, Rong; Zhang, Dingchen; Hanzo, Lajos

    2013-11-04

    In this paper, an iterative parallel interference cancellation (Iter-PIC) technique is developed for optical code-division multiple-access (OCDMA) systems relying on shot-noise limited Poisson photon-counting reception. The novel semi-analytical tool of extrinsic information transfer (EXIT) charts is used for analysing both the bit error rate (BER) performance and the channel capacity of these systems, and the results are verified by Monte Carlo simulations. The proposed Iter-PIC OCDMA system is capable of achieving two orders of magnitude BER improvement and a capacity improvement of 0.1 nats over conventional chip-level OCDMA systems at a coding rate of 1/10.

  2. Fast generating Greenberger-Horne-Zeilinger state via iterative interaction pictures

    NASA Astrophysics Data System (ADS)

    Huang, Bi-Hua; Chen, Ye-Hong; Wu, Qi-Cheng; Song, Jie; Xia, Yan

    2016-10-01

    We delve a little deeper into the construction of shortcuts to adiabatic passage for three-level systems by iterative interaction pictures (multiple Schrödinger dynamics). As an application example, we use the deduced iteration-based shortcuts to rapidly generate the Greenberger-Horne-Zeilinger (GHZ) state in a three-atom system with the help of quantum Zeno dynamics. Numerical simulation shows that the dynamics designed by the iterative-picture method are physically feasible and that the shortcut scheme performs much better than the conventional adiabatic passage techniques. The influences of various decoherence processes are also discussed by numerical simulation, and the results prove that the scheme is fast and robust against decoherence and operational imperfection.

  3. Robust iterative method for nonlinear Helmholtz equation

    NASA Astrophysics Data System (ADS)

    Yuan, Lijun; Lu, Ya Yan

    2017-08-01

    A new iterative method is developed for solving the two-dimensional nonlinear Helmholtz equation which governs polarized light in media with the optical Kerr nonlinearity. In the strongly nonlinear regime, the nonlinear Helmholtz equation could have multiple solutions related to phenomena such as optical bistability and symmetry breaking. The new method exhibits a much more robust convergence behavior than existing iterative methods, such as frozen-nonlinearity iteration, Newton's method and damped Newton's method, and it can be used to find solutions when good initial guesses are unavailable. Numerical results are presented for the scattering of light by a nonlinear circular cylinder based on the exact nonlocal boundary condition and a pseudospectral method in the polar coordinate system.
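
    As a point of reference for what the new method improves on, the baseline frozen-nonlinearity iteration is simple to write down for a 1-D Kerr-type model problem. This is our toy analogue of the paper's 2-D scattering problem: at each step the |u|² factor is frozen at the previous iterate and a linear Helmholtz problem is solved.

```python
import numpy as np

def frozen_nonlinearity_helmholtz(n=64, k=2.0, eps=0.1, tol=1e-12, max_iter=200):
    """Frozen-nonlinearity iteration for u'' + k^2 (1 + eps |u|^2) u = f
    on (0, 1) with u(0) = u(1) = 0, discretized by central differences."""
    h = 1.0 / (n + 1)
    xg = np.linspace(h, 1.0 - h, n)
    f = np.sin(np.pi * xg)                         # smooth source term (arbitrary)
    D2 = (np.diag(np.full(n, -2.0))
          + np.diag(np.ones(n - 1), 1)
          + np.diag(np.ones(n - 1), -1)) / h**2    # 1-D Laplacian
    u = np.zeros(n)
    for _ in range(max_iter):
        # freeze |u|^2 from the previous iterate, solve the linear problem
        A = D2 + np.diag(k**2 * (1.0 + eps * np.abs(u) ** 2))
        u_new = np.linalg.solve(A, f)
        if np.linalg.norm(u_new - u) < tol:
            u = u_new
            break
        u = u_new
    residual = D2 @ u + k**2 * (1.0 + eps * np.abs(u) ** 2) * u - f
    return u, np.linalg.norm(residual)
```

For weak nonlinearity this converges quickly; in the strongly nonlinear, multi-solution regime described in the abstract it is exactly this kind of iteration that can stagnate or diverge, motivating the more robust method.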

  4. Iterative image reconstruction for PROPELLER-MRI using the nonuniform fast fourier transform.

    PubMed

    Tamhane, Ashish A; Anastasio, Mark A; Gui, Minzhi; Arfanakis, Konstantinos

    2010-07-01

    To investigate an iterative image reconstruction algorithm using the nonuniform fast Fourier transform (NUFFT) for PROPELLER (Periodically Rotated Overlapping ParallEL Lines with Enhanced Reconstruction) MRI. Numerical simulations, as well as experiments on a phantom and a healthy human subject, were used to evaluate the performance of the iterative image reconstruction algorithm for PROPELLER and compare it with that of conventional gridding. The trade-off between spatial resolution, signal-to-noise ratio, and image artifacts was investigated for different values of the regularization parameter. The performance of the iterative image reconstruction algorithm in the presence of motion was also evaluated. It was demonstrated that, for a certain range of values of the regularization parameter, iterative reconstruction produced images with significantly increased signal-to-noise ratio and reduced artifacts at similar spatial resolution, compared with gridding. Furthermore, the ability to reduce the effects of motion in PROPELLER-MRI was maintained when using the iterative reconstruction approach. An iterative image reconstruction technique based on the NUFFT was investigated for PROPELLER MRI. For a certain range of values of the regularization parameter, the new reconstruction technique may provide PROPELLER images with improved image quality compared with conventional gridding. (c) 2010 Wiley-Liss, Inc.
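
    The regularized iterative reconstruction can be illustrated with a Cartesian stand-in: replace the NUFFT with a masked FFT and solve the Tikhonov normal equations (AᴴA + λI) f = Aᴴd by conjugate gradients. This is a sketch under those simplifications (PROPELLER's non-Cartesian trajectory would need an actual NUFFT); lam plays the role of the regularization parameter studied in the abstract.

```python
import numpy as np

def cg_recon(data, mask, lam=0.01, n_iter=30):
    """Tikhonov-regularized reconstruction via conjugate gradients.

    Forward model (toy): A f = mask * FFT(f); data is the masked k-space.
    Solves (A^H A + lam I) f = A^H data.
    """
    def normal_op(f):
        return np.fft.ifft2(mask * np.fft.fft2(f, norm="ortho"), norm="ortho") + lam * f
    b = np.fft.ifft2(mask * data, norm="ortho")    # A^H data
    f = np.zeros_like(b)
    r = b - normal_op(f)
    p = r.copy()
    rs = np.vdot(r, r)
    for _ in range(n_iter):
        Ap = normal_op(p)
        alpha = rs / np.vdot(p, Ap)
        f += alpha * p
        r -= alpha * Ap
        rs_new = np.vdot(r, r)
        if abs(rs_new) < 1e-24:                    # converged
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return f
```

Increasing lam trades noise and artifact suppression against resolution, the same trade-off the study maps out.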

  5. Iterative Image Reconstruction for PROPELLER-MRI using the NonUniform Fast Fourier Transform

    PubMed Central

    Tamhane, Ashish A.; Anastasio, Mark A.; Gui, Minzhi; Arfanakis, Konstantinos

    2013-01-01

    Purpose To investigate an iterative image reconstruction algorithm using the non-uniform fast Fourier transform (NUFFT) for PROPELLER (Periodically Rotated Overlapping parallEL Lines with Enhanced Reconstruction) MRI. Materials and Methods Numerical simulations, as well as experiments on a phantom and a healthy human subject, were used to evaluate the performance of the iterative image reconstruction algorithm for PROPELLER, and compare it to that of conventional gridding. The trade-off between spatial resolution, signal to noise ratio, and image artifacts was investigated for different values of the regularization parameter. The performance of the iterative image reconstruction algorithm in the presence of motion was also evaluated. Results It was demonstrated that, for a certain range of values of the regularization parameter, iterative reconstruction produced images with significantly increased SNR and reduced artifacts at similar spatial resolution, compared to gridding. Furthermore, the ability to reduce the effects of motion in PROPELLER-MRI was maintained when using the iterative reconstruction approach. Conclusion An iterative image reconstruction technique based on the NUFFT was investigated for PROPELLER MRI. For a certain range of values of the regularization parameter the new reconstruction technique may provide PROPELLER images with improved image quality compared to conventional gridding. PMID:20578028

  6. Analytic approximations of Von Kármán plate under arbitrary uniform pressure—equations in integral form

    NASA Astrophysics Data System (ADS)

    Zhong, XiaoXu; Liao, ShiJun

    2018-01-01

    Analytic approximations of the Von Kármán plate equations in integral form for a circular plate under external uniform pressure of arbitrary magnitude are successfully obtained by means of the homotopy analysis method (HAM), an analytic approximation technique for highly nonlinear problems. Two HAM-based approaches are proposed for either a given external uniform pressure Q or a given central deflection, respectively. Both of them are valid for uniform pressure of arbitrary magnitude by choosing proper values of the so-called convergence-control parameters c₁ and c₂ in the frame of the HAM. Besides, it is found that the HAM-based iteration approaches generally converge much faster than the interpolation iterative method. Furthermore, we prove that the interpolation iterative method is a special case of the first-order HAM iteration approach for a given external uniform pressure Q when c₁ = -θ and c₂ = -1, where θ denotes the interpolation iterative parameter. Therefore, according to the convergence theorem of Zheng and Zhou about the interpolation iterative method, the HAM-based approaches are valid for uniform pressure of arbitrary magnitude at least in the special case c₁ = -θ and c₂ = -1. In addition, we prove that the HAM approach for the Von Kármán plate equations in differential form is just a special case of the HAM for the Von Kármán plate equations in integral form mentioned in this paper. All of these illustrate the validity and great potential of the HAM for highly nonlinear problems, and its superiority over perturbation techniques.

  7. Agent-Based Modeling of China's Rural-Urban Migration and Social Network Structure.

    PubMed

    Fu, Zhaohao; Hao, Lingxin

    2018-01-15

    We analyze China's rural-urban migration and endogenous social network structures using agent-based modeling. The agents from census micro data are located in their rural origin with an empirically estimated prior propensity to move. The population-scale social network is a hybrid one, combining observed family ties and locations of the origin with a parameter space calibrated from census, survey and aggregate data and sampled using a stepwise Latin Hypercube Sampling method. At monthly intervals, some agents migrate and these migratory acts change the social network by turning within-nonmigrant connections into between-migrant-nonmigrant connections, turning local connections into nonlocal connections, and adding among-migrant connections. In turn, the changing social network structure updates the migratory propensities of well-connected nonmigrants, who become more likely to move. These two processes iterate over time. Using a core-periphery method developed from the k-core decomposition method, we identify and quantify the network structural changes and map these changes onto the migration acceleration patterns. We conclude that network structural changes are essential for explaining the migration acceleration observed in China during the 1995-2000 period.
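
    The feedback loop between migration events and network-mediated propensity updates can be caricatured in a few lines. All parameters and the random tie network below are our invention; the paper calibrates these quantities from census and survey data.

```python
import numpy as np

def simulate_migration(n_agents=500, months=60, boost=0.02, seed=0):
    """Toy version of the feedback loop: each migration raises the move
    propensity of the migrant's still-rural connections, which in turn
    accelerates later migration."""
    rng = np.random.default_rng(seed)
    propensity = rng.uniform(0.0, 0.01, n_agents)   # prior monthly move probability
    # random sparse "family tie" network (symmetric, no self-ties)
    ties = rng.random((n_agents, n_agents)) < 0.01
    ties = np.triu(ties, 1)
    ties = ties | ties.T
    migrated = np.zeros(n_agents, dtype=bool)
    monthly_counts = []
    for _ in range(months):
        movers = (~migrated) & (rng.random(n_agents) < propensity)
        migrated |= movers
        monthly_counts.append(int(movers.sum()))
        # new migrants raise the propensity of their non-migrant connections
        exposed = ties[movers].any(axis=0) & ~migrated
        propensity[exposed] += boost
    return migrated, monthly_counts
```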

  8. Agent-based modeling of China's rural-urban migration and social network structure

    NASA Astrophysics Data System (ADS)

    Fu, Zhaohao; Hao, Lingxin

    2018-01-01

    We analyze China's rural-urban migration and endogenous social network structures using agent-based modeling. The agents from census micro data are located in their rural origin with an empirically estimated prior propensity to move. The population-scale social network is a hybrid one, combining observed family ties and locations of the origin with a parameter space calibrated from census, survey and aggregate data and sampled using a stepwise Latin Hypercube Sampling method. At monthly intervals, some agents migrate and these migratory acts change the social network by turning within-nonmigrant connections into between-migrant-nonmigrant connections, turning local connections into nonlocal connections, and adding among-migrant connections. In turn, the changing social network structure updates the migratory propensities of well-connected nonmigrants, who become more likely to move. These two processes iterate over time. Using a core-periphery method developed from the k-core decomposition method, we identify and quantify the network structural changes and map these changes onto the migration acceleration patterns. We conclude that network structural changes are essential for explaining the migration acceleration observed in China during the 1995-2000 period.

  9. Seismic facies analysis based on self-organizing map and empirical mode decomposition

    NASA Astrophysics Data System (ADS)

    Du, Hao-kun; Cao, Jun-xing; Xue, Ya-juan; Wang, Xing-jian

    2015-01-01

    Seismic facies analysis plays an important role in seismic interpretation and reservoir model building by offering an effective way to identify changes in geofacies between wells. The selection of input seismic attributes and their time windows has a marked effect on the validity of the classification and requires iterative experimentation and prior knowledge. In general, clustering is sensitive to noise when the waveform serves as the input data, especially with a narrow window. To overcome this limitation, the empirical mode decomposition (EMD) method is introduced into waveform classification based on the self-organizing map (SOM). We first de-noise the seismic data using EMD and then cluster the data using a 1D grid SOM. The main advantages of this method are resolution enhancement and noise reduction. 3D seismic data from the western Sichuan basin, China, were collected for validation. The application results show that seismic facies analysis can be improved and better support interpretation. Its strong tolerance to noise makes the proposed method a better seismic facies analysis tool than the classical 1D grid SOM method, especially for waveform clustering with a narrow window.
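
As a rough illustration of the clustering stage, a 1D grid SOM can be sketched in a few lines (a generic SOM on made-up scalar features; the paper's EMD de-noising and seismic attribute extraction are not reproduced here):

```python
import math
import random

def train_som_1d(data, n_units=4, epochs=50, lr0=0.5, sigma0=1.5, seed=0):
    """Train a 1-D grid SOM on scalar samples; return sorted unit weights."""
    rng = random.Random(seed)
    weights = [rng.uniform(min(data), max(data)) for _ in range(n_units)]
    for t in range(epochs):
        lr = lr0 * (1 - t / epochs)              # decaying learning rate
        sigma = sigma0 * (1 - t / epochs) + 1e-3  # shrinking neighbourhood
        for x in data:
            # best-matching unit: closest weight to the sample
            bmu = min(range(n_units), key=lambda i: abs(weights[i] - x))
            for i in range(n_units):
                h = math.exp(-((i - bmu) ** 2) / (2 * sigma ** 2))
                weights[i] += lr * h * (x - weights[i])
    return sorted(weights)

# Two well-separated clusters of hypothetical "waveform features".
data = [0.0, 0.1, 0.2, 5.0, 5.1, 5.2]
w = train_som_1d(data)
```

With two separated clusters, the trained grid spreads so that the extreme units settle near the two cluster centres.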

  10. Collective odor source estimation and search in time-variant airflow environments using mobile robots.

    PubMed

    Meng, Qing-Hao; Yang, Wei-Xing; Wang, Yang; Zeng, Ming

    2011-01-01

    This paper addresses the collective odor source localization (OSL) problem in a time-varying airflow environment using mobile robots. A novel OSL methodology which combines odor-source probability estimation and search by multiple robots is proposed. The estimation phase consists of two steps: first, each robot estimates a separate probability-distribution map of the odor source via Bayesian rules and fuzzy inference based on its own detection events; second, the separate maps estimated by different robots at different times are fused into a combined map by distance-based superposition. The multi-robot search behaviors are coordinated via a particle swarm optimization algorithm, where the estimated odor-source probability distribution defines the fitness functions. In the process of OSL, the estimation phase provides prior knowledge for the search, while the search verifies the estimation results, and both phases are implemented iteratively. The results of simulations for large-scale advection-diffusion plume environments and experiments using real robots in an indoor airflow environment validate the feasibility and robustness of the proposed OSL method.
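
The search coordination can be illustrated with a minimal particle swarm optimizer whose fitness plays the role of an estimated odor-source probability map (a generic sketch with a hypothetical fitness surface, not the authors' implementation):

```python
import random

def pso_search(fitness, bounds, n_particles=12, iters=60, seed=1):
    """Minimal particle swarm maximizing fitness(x, y) inside bounds."""
    rng = random.Random(seed)
    (xmin, xmax), (ymin, ymax) = bounds
    pos = [(rng.uniform(xmin, xmax), rng.uniform(ymin, ymax))
           for _ in range(n_particles)]
    vel = [(0.0, 0.0)] * n_particles
    pbest = list(pos)
    gbest = max(pos, key=lambda p: fitness(*p))
    w, c1, c2 = 0.6, 1.5, 1.5  # inertia, cognitive and social weights
    for _ in range(iters):
        for i in range(n_particles):
            (vx, vy), (px, py) = vel[i], pos[i]
            (bx, by), (gx, gy) = pbest[i], gbest
            vx = w * vx + c1 * rng.random() * (bx - px) + c2 * rng.random() * (gx - px)
            vy = w * vy + c1 * rng.random() * (by - py) + c2 * rng.random() * (gy - py)
            px = min(max(px + vx, xmin), xmax)  # clamp to the arena
            py = min(max(py + vy, ymin), ymax)
            pos[i], vel[i] = (px, py), (vx, vy)
            if fitness(px, py) > fitness(*pbest[i]):
                pbest[i] = (px, py)
            if fitness(px, py) > fitness(*gbest):
                gbest = (px, py)
    return gbest

# Hypothetical log-probability surface peaked at the source location (3, 7).
prob = lambda x, y: -((x - 3.0) ** 2 + (y - 7.0) ** 2)
best = pso_search(prob, ((0, 10), (0, 10)))
```

In the paper the fitness would instead be read from the fused probability map, which is itself re-estimated between search steps.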

  11. Collective Odor Source Estimation and Search in Time-Variant Airflow Environments Using Mobile Robots

    PubMed Central

    Meng, Qing-Hao; Yang, Wei-Xing; Wang, Yang; Zeng, Ming

    2011-01-01

    This paper addresses the collective odor source localization (OSL) problem in a time-varying airflow environment using mobile robots. A novel OSL methodology which combines odor-source probability estimation and search by multiple robots is proposed. The estimation phase consists of two steps: first, each robot estimates a separate probability-distribution map of the odor source via Bayesian rules and fuzzy inference based on its own detection events; second, the separate maps estimated by different robots at different times are fused into a combined map by distance-based superposition. The multi-robot search behaviors are coordinated via a particle swarm optimization algorithm, where the estimated odor-source probability distribution defines the fitness functions. In the process of OSL, the estimation phase provides prior knowledge for the search, while the search verifies the estimation results, and both phases are implemented iteratively. The results of simulations for large-scale advection-diffusion plume environments and experiments using real robots in an indoor airflow environment validate the feasibility and robustness of the proposed OSL method. PMID:22346650

  12. Nonlinear Burn Control in Tokamaks using Heating, Non-axisymmetric Magnetic Fields, Isotopic fueling and Impurity injection

    NASA Astrophysics Data System (ADS)

    Pajares, Andres; Schuster, Eugenio

    2016-10-01

    Plasma density and temperature regulation in future tokamaks such as ITER is emerging as one of the main problems in nuclear-fusion control research. The problem, known as burn control, is to regulate the amount of fusion power produced by the burning plasma while avoiding thermal instabilities. Prior work in the area of burn control considered different actuators, such as modulation of the auxiliary power, modulation of the fueling rate, and controlled impurity injection. More recently, the in-vessel coil system was suggested as a feasible actuator since it has the capability of modifying the plasma confinement by generating non-axisymmetric magnetic fields. In this work, a comprehensive, model-based, nonlinear burn control strategy is proposed to integrate all the previously mentioned actuators. A model that accounts for the influence of the in-vessel coils on plasma confinement, based on the plasma collisionality and density, is proposed. A simulation study is carried out to show the capability of the controller to drive the system between different operating points while rejecting perturbations. Supported by the US DOE under DE-SC0010661.

  13. A Minimal Path Searching Approach for Active Shape Model (ASM)-based Segmentation of the Lung.

    PubMed

    Guo, Shengwen; Fei, Baowei

    2009-03-27

    We are developing a minimal path searching method for active shape model (ASM)-based segmentation for detection of lung boundaries on digital radiographs. With the conventional ASM method, the position and shape parameters of the model points are iteratively refined and the target points are updated by the least-Mahalanobis-distance criterion. We propose an improved searching strategy that extends the searching points in a fan-shaped region instead of along the normal direction. A minimal path (MP) deformable model is applied to drive the searching procedure, and a statistical shape prior model is incorporated into the segmentation. In order to keep the shape smooth, a smoothness constraint is applied to the deformable model. To quantitatively assess the ASM-MP segmentation, we compare the automatic segmentation with manual segmentation on 72 digitized lung radiographs. The distance error between the ASM-MP and manual segmentation is 1.75 ± 0.33 pixels, while the error is 1.99 ± 0.45 pixels for the ASM. Our results demonstrate that our ASM-MP method can accurately segment the lung on digital radiographs.
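
The least-Mahalanobis-distance criterion for picking a target point can be sketched as follows (a hypothetical 2-feature appearance model and candidate points, for illustration only):

```python
def mat_inv_2x2(m):
    """Invert a 2x2 matrix given as nested lists."""
    (a, b), (c, d) = m
    det = a * d - b * c
    return [[d / det, -b / det], [-c / det, a / det]]

def mahalanobis2(x, mean, cov_inv):
    """Squared Mahalanobis distance: (x - mean)^T cov_inv (x - mean)."""
    dx = [x[0] - mean[0], x[1] - mean[1]]
    return (dx[0] * (cov_inv[0][0] * dx[0] + cov_inv[0][1] * dx[1])
            + dx[1] * (cov_inv[1][0] * dx[0] + cov_inv[1][1] * dx[1]))

# Local appearance model for one landmark (hypothetical numbers):
# the second feature varies less, so deviations in it cost more.
mean = (1.0, 0.0)
cov_inv = mat_inv_2x2([[1.0, 0.0], [0.0, 4.0]])
candidates = [(0.0, 0.0), (1.2, 0.1), (2.0, 1.0)]
best = min(candidates, key=lambda c: mahalanobis2(c, mean, cov_inv))
```

The ASM update keeps the candidate with the smallest such distance; the paper's contribution is to search candidates over a fan-shaped region driven by a minimal path model rather than only along the profile normal.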

  14. A minimal path searching approach for active shape model (ASM)-based segmentation of the lung

    NASA Astrophysics Data System (ADS)

    Guo, Shengwen; Fei, Baowei

    2009-02-01

    We are developing a minimal path searching method for active shape model (ASM)-based segmentation for detection of lung boundaries on digital radiographs. With the conventional ASM method, the position and shape parameters of the model points are iteratively refined and the target points are updated by the least-Mahalanobis-distance criterion. We propose an improved searching strategy that extends the searching points in a fan-shaped region instead of along the normal direction. A minimal path (MP) deformable model is applied to drive the searching procedure, and a statistical shape prior model is incorporated into the segmentation. In order to keep the shape smooth, a smoothness constraint is applied to the deformable model. To quantitatively assess the ASM-MP segmentation, we compare the automatic segmentation with manual segmentation on 72 digitized lung radiographs. The distance error between the ASM-MP and manual segmentation is 1.75 ± 0.33 pixels, while the error is 1.99 ± 0.45 pixels for the ASM. Our results demonstrate that our ASM-MP method can accurately segment the lung on digital radiographs.

  15. Investigation of KDP crystal surface based on an improved bidimensional empirical mode decomposition method

    NASA Astrophysics Data System (ADS)

    Lu, Lei; Yan, Jihong; Chen, Wanqun; An, Shi

    2018-03-01

    This paper proposes a novel spatial frequency analysis method for the investigation of potassium dihydrogen phosphate (KDP) crystal surfaces based on an improved bidimensional empirical mode decomposition (BEMD) method. To eliminate the end effects of the BEMD method and improve the intrinsic mode functions (IMFs) for efficient identification of texture features, a denoising process is embedded in the sifting iteration of the BEMD method. By removing redundant information in the decomposed sub-components of the KDP crystal surface, the middle spatial frequencies of the cutting and feeding processes were identified. A comparative study with the power spectral density method, the two-dimensional wavelet transform (2D-WT) and the traditional BEMD method demonstrated that the method developed in this paper can efficiently extract texture features and reveal the gradient development of the KDP crystal surface. Furthermore, the proposed method is a self-adaptive, data-driven technique requiring no prior knowledge, which overcomes shortcomings of the 2D-WT model such as parameter selection. The proposed method is thus a promising tool for online monitoring and optimal control of precision machining processes.

  16. A Minimal Path Searching Approach for Active Shape Model (ASM)-based Segmentation of the Lung

    PubMed Central

    Guo, Shengwen; Fei, Baowei

    2013-01-01

    We are developing a minimal path searching method for active shape model (ASM)-based segmentation for detection of lung boundaries on digital radiographs. With the conventional ASM method, the position and shape parameters of the model points are iteratively refined and the target points are updated by the least-Mahalanobis-distance criterion. We propose an improved searching strategy that extends the searching points in a fan-shaped region instead of along the normal direction. A minimal path (MP) deformable model is applied to drive the searching procedure, and a statistical shape prior model is incorporated into the segmentation. In order to keep the shape smooth, a smoothness constraint is applied to the deformable model. To quantitatively assess the ASM-MP segmentation, we compare the automatic segmentation with manual segmentation on 72 digitized lung radiographs. The distance error between the ASM-MP and manual segmentation is 1.75 ± 0.33 pixels, while the error is 1.99 ± 0.45 pixels for the ASM. Our results demonstrate that our ASM-MP method can accurately segment the lung on digital radiographs. PMID:24386531

  17. Determination of an effective scoring function for RNA-RNA interactions with a physics-based double-iterative method.

    PubMed

    Yan, Yumeng; Wen, Zeyu; Zhang, Di; Huang, Sheng-You

    2018-05-18

    RNA-RNA interactions play fundamental roles in gene and cell regulation. Therefore, accurate prediction of RNA-RNA interactions is critical to determine their complex structures and understand the molecular mechanism of the interactions. Here, we have developed a physics-based double-iterative strategy to determine the effective potentials for RNA-RNA interactions based on a training set of 97 diverse RNA-RNA complexes. The double-iterative strategy circumvents the reference state problem in knowledge-based scoring functions by updating the potentials through iteration, and overcomes the decoy-dependent limitation of previous iterative methods by constructing the decoys iteratively. The derived scoring function, referred to as DITScoreRR, was evaluated on an RNA-RNA docking benchmark of 60 test cases and compared with three other scoring functions. For bound docking, DITScoreRR achieved excellent success rates of 90% and 98.3% in binding mode prediction when the top 1 and 10 predictions were considered, compared to 63.3% and 71.7% for van der Waals interactions, 45.0% and 65.0% for ITScorePP, and 11.7% and 26.7% for ZDOCK 2.1, respectively. For unbound docking, DITScoreRR achieved good success rates of 53.3% and 71.7% in binding mode prediction when the top 1 and 10 predictions were considered, compared to 13.3% and 28.3% for van der Waals interactions, 11.7% and 26.7% for ITScorePP, and 3.3% and 6.7% for ZDOCK 2.1, respectively. DITScoreRR also performed significantly better in ranking decoys and obtained significantly higher score-RMSD correlations than the other three scoring functions. DITScoreRR will be of great value for the prediction and design of RNA structures and RNA-RNA complexes.

  18. The ZpiM algorithm: a method for interferometric image reconstruction in SAR/SAS.

    PubMed

    Dias, José M B; Leitao, José M N

    2002-01-01

    This paper presents an effective algorithm for absolute phase (not simply modulo-2π) estimation from incomplete, noisy and modulo-2π observations in interferometric aperture radar and sonar (InSAR/InSAS). The adopted framework is also representative of other applications such as optical interferometry, magnetic resonance imaging and diffraction tomography. The Bayesian viewpoint is adopted: the observation density is 2π-periodic and accounts for the interferometric pair decorrelation and system noise, while the a priori probability of the absolute phase is modeled by a compound Gauss-Markov random field (CGMRF) tailored to piecewise-smooth absolute phase images. We propose an iterative scheme for the computation of the maximum a posteriori probability (MAP) absolute phase estimate. Each iteration embodies a discrete optimization step (Z-step), implemented by network programming techniques, and an iterated conditional modes (ICM) step (π-step). Accordingly, the algorithm is termed ZpiM, where the letter M stands for maximization. An important contribution of the paper is the simultaneous implementation of phase unwrapping (inference of the 2π-multiples) and smoothing (denoising of the observations). This considerably improves the accuracy of the absolute phase estimates compared to methods in which the data are low-pass filtered prior to unwrapping. A set of experimental results, comparing the proposed algorithm with alternative methods, illustrates the effectiveness of our approach.
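
ZpiM tackles the hard 2-D MAP problem; the basic idea of unwrapping is easiest to see in the classic 1-D case, where wrapped phase differences are simply re-integrated (a textbook sketch, far simpler than the paper's method and valid only when true neighbouring differences stay below π):

```python
import math

def wrap(phi):
    """Wrap an angle to the principal interval [-pi, pi]."""
    return phi - 2 * math.pi * round(phi / (2 * math.pi))

def unwrap_1d(wrapped):
    """Classic 1-D unwrapping: integrate wrapped phase differences."""
    out = [wrapped[0]]
    for k in range(1, len(wrapped)):
        out.append(out[-1] + wrap(wrapped[k] - wrapped[k - 1]))
    return out

# A true phase ramp exceeding 2*pi, observed only modulo 2*pi.
true = [0.4 * k for k in range(20)]
observed = [wrap(p) for p in true]
recovered = unwrap_1d(observed)
```

In 2-D, with noise and decorrelation, this local integration becomes path-dependent, which is exactly why the paper poses unwrapping and denoising jointly as a MAP estimation problem.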

  19. Beamforming Based Full-Duplex for Millimeter-Wave Communication

    PubMed Central

    Liu, Xiao; Xiao, Zhenyu; Bai, Lin; Choi, Jinho; Xia, Pengfei; Xia, Xiang-Gen

    2016-01-01

    In this paper, we study beamforming based full-duplex (FD) systems in millimeter-wave (mmWave) communications. A joint transmission and reception (Tx/Rx) beamforming problem is formulated to maximize the achievable rate by mitigating self-interference (SI). Since the optimal solution is difficult to find due to the non-convexity of the objective function, suboptimal schemes are proposed in this paper. A low-complexity algorithm, which iteratively maximizes signal power while suppressing SI, is proposed and its convergence is proven. Moreover, two closed-form solutions, which do not require iterations, are also derived under minimum-mean-square-error (MMSE), zero-forcing (ZF), and maximum-ratio transmission (MRT) criteria. Performance evaluations show that the proposed iterative scheme converges fast (within only two iterations on average) and approaches an upper-bound performance, while the two closed-form solutions also achieve appealing performances, although there are noticeable differences from the upper bound depending on channel conditions. Interestingly, these three schemes show different robustness against the geometry of Tx/Rx antenna arrays and channel estimation errors. PMID:27455256

  20. Error Control Coding Techniques for Space and Satellite Communications

    NASA Technical Reports Server (NTRS)

    Lin, Shu

    2000-01-01

    This paper presents a concatenated turbo coding system in which a Reed-Solomon outer code is concatenated with a binary turbo inner code. In the proposed system, the outer code decoder and the inner turbo code decoder interact to achieve both good bit error and frame error performances. The outer code decoder helps the inner turbo code decoder to terminate its decoding iteration while the inner turbo code decoder provides soft-output information to the outer code decoder to carry out a reliability-based soft-decision decoding. In the case that the outer code decoding fails, the outer code decoder instructs the inner code decoder to continue its decoding iterations until the outer code decoding is successful or a preset maximum number of decoding iterations is reached. This interaction between outer and inner code decoders reduces decoding delay. Also presented in the paper are an effective criterion for stopping the iteration process of the inner code decoder and a new reliability-based decoding algorithm for nonbinary codes.

  1. An Interactive Concatenated Turbo Coding System

    NASA Technical Reports Server (NTRS)

    Liu, Ye; Tang, Heng; Lin, Shu; Fossorier, Marc

    1999-01-01

    This paper presents a concatenated turbo coding system in which a Reed-Solomon outer code is concatenated with a binary turbo inner code. In the proposed system, the outer code decoder and the inner turbo code decoder interact to achieve both good bit error and frame error performances. The outer code decoder helps the inner turbo code decoder to terminate its decoding iteration while the inner turbo code decoder provides soft-output information to the outer code decoder to carry out a reliability-based soft-decision decoding. In the case that the outer code decoding fails, the outer code decoder instructs the inner code decoder to continue its decoding iterations until the outer code decoding is successful or a preset maximum number of decoding iterations is reached. This interaction between outer and inner code decoders reduces decoding delay. Also presented in the paper are an effective criterion for stopping the iteration process of the inner code decoder and a new reliability-based decoding algorithm for nonbinary codes.
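
The outer/inner interaction can be sketched as a stopping-rule loop (the decoders below are toy stand-ins, not real turbo or Reed-Solomon decoders):

```python
def decode_concatenated(received, inner_iterate, outer_decode, max_iters=8):
    """Sketch of the outer/inner decoder interaction.

    inner_iterate(state) -> (state, hard_decisions): one inner iteration.
    outer_decode(hard_decisions) -> corrected word, or None on failure.
    """
    state = received
    for it in range(1, max_iters + 1):
        state, bits = inner_iterate(state)
        word = outer_decode(bits)
        if word is not None:  # outer code succeeded: stop iterating early
            return word, it
    return None, max_iters

# Toy stand-ins: the "inner decoder" fixes one wrong bit per iteration;
# the "outer decoder" succeeds once at most one error remains.
target = [1, 0, 1, 1, 0, 1]

def inner_iterate(bits):
    bits = list(bits)
    for i, (b, t) in enumerate(zip(bits, target)):
        if b != t:
            bits[i] = t
            break
    return bits, bits

def outer_decode(bits):
    errs = sum(b != t for b, t in zip(bits, target))
    return target if errs <= 1 else None

word, iters = decode_concatenated([0, 1, 0, 1, 0, 1], inner_iterate, outer_decode)
```

The early return is the point: the outer decoder's success is the stopping criterion, so the inner decoder runs only as many iterations as the channel actually requires.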

  2. A methodology for finding the optimal iteration number of the SIRT algorithm for quantitative Electron Tomography.

    PubMed

    Okariz, Ana; Guraya, Teresa; Iturrondobeitia, Maider; Ibarretxe, Julen

    2017-02-01

    The SIRT (Simultaneous Iterative Reconstruction Technique) algorithm is commonly used in Electron Tomography to calculate the original volume of the sample from noisy images, but the results provided by this iterative procedure are strongly dependent on the specific implementation of the algorithm, as well as on the number of iterations employed for the reconstruction. In this work, a methodology for selecting the iteration number of the SIRT reconstruction that provides the most accurate segmentation is proposed. The methodology is based on the statistical analysis of the intensity profiles at the edge of the objects in the reconstructed volume. A phantom which resembles a carbon black aggregate has been created to validate the methodology, and the SIRT implementations of two free software packages (TOMOJ and TOMO3D) have been used. Copyright © 2016 Elsevier B.V. All rights reserved.
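
For reference, the SIRT update itself is compact: x ← x + C Aᵀ R (b − A x), with R and C the inverse row and column sums of the system matrix (a generic dense-matrix sketch on a tiny system, not the packages' implementations):

```python
def sirt(A, b, n_iters):
    """Basic SIRT: x <- x + C A^T R (b - A x), where R and C hold the
    inverse row and column sums of A (entries assumed non-negative)."""
    m, n = len(A), len(A[0])
    row = [1.0 / sum(A[i]) for i in range(m)]
    col = [1.0 / sum(A[i][j] for i in range(m)) for j in range(n)]
    x = [0.0] * n
    for _ in range(n_iters):
        # row-normalized residual of the current estimate
        r = [row[i] * (b[i] - sum(A[i][j] * x[j] for j in range(n)))
             for i in range(m)]
        # column-normalized back-projection of the residual
        for j in range(n):
            x[j] += col[j] * sum(A[i][j] * r[i] for i in range(m))
    return x

# Tiny consistent system with exact solution x = (1, 2).
A = [[1.0, 1.0], [1.0, 2.0]]
b = [3.0, 5.0]
x = sirt(A, b, 500)
```

Because convergence is gradual, stopping too early leaves the volume blurred while stopping too late amplifies noise, which is precisely why the paper needs a principled choice of iteration number.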

  3. An Intensive, Simulation-Based Communication Course for Pediatric Critical Care Medicine Fellows.

    PubMed

    Johnson, Erin M; Hamilton, Melinda F; Watson, R Scott; Claxton, Rene; Barnett, Michael; Thompson, Ann E; Arnold, Robert

    2017-08-01

    Effective communication among providers, families, and patients is essential in critical care but is often inadequate in the PICU. To address the lack of communication education pediatric critical care medicine fellows receive, the Children's Hospital of Pittsburgh PICU developed a simulation-based communication course, the Pediatric Critical Care Communication course. The hypothesis was that pediatric critical care medicine trainees, who have limited prior training in communication, would have increased confidence in their communication skills after participating in the course. Pediatric Critical Care Communication is a 3-day course taken once during fellowship, featuring simulation with actors portraying family members; it is held in off-site conference space as part of a pediatric critical care medicine educational curriculum. Participants are pediatric critical care medicine fellows, and the interventions comprise didactic sessions and interactive simulation scenarios. Prior to and after the course, fellows complete an anonymous survey asking about 1) prior instruction in communication, 2) preparedness for difficult conversations, 3) attitudes about end-of-life care, and 4) course satisfaction. We compared pre- and postcourse surveys using the paired Student t test. Most of the 38 fellows who participated over 4 years had no prior communication training in conducting a care conference (70%), providing bad news (57%), or discussing end-of-life options (75%). Across all four iterations of the course, fellows reported increased confidence after the course across many topics of communication, including giving bad news, conducting a family conference, eliciting both a family's emotional reaction to their child's illness and their concerns at the end of a child's life, discussing a child's code status, and discussing religious issues. Specifically, fellows in 2014 reported significant increases in self-perceived preparedness to provide empathic communication to families regarding many aspects of critical care, end-of-life care, and religious issues (p < 0.05). The majority of fellows (90%) recommended that the course be required in pediatric critical care medicine fellowship. The Pediatric Critical Care Communication course increased fellow confidence in having the difficult discussions common in the PICU, and fellows highly recommend it as part of PICU education. Further work should focus on the course's impact on family satisfaction with fellow communication.

  4. Improving cluster-based missing value estimation of DNA microarray data.

    PubMed

    Brás, Lígia P; Menezes, José C

    2007-06-01

    We present a modification of the weighted K-nearest neighbours imputation method (KNNimpute) for missing value (MV) estimation in microarray data based on the reuse of estimated data. The method is called iterative KNN imputation (IKNNimpute) because the estimation is performed iteratively using the recently estimated values. The estimation efficiency of IKNNimpute was assessed under different conditions (data type, fraction and structure of missing data) by the normalized root mean squared error (NRMSE) and the correlation coefficients between estimated and true values, and compared with that of other cluster-based estimation methods (KNNimpute and sequential KNN). We further investigated the influence of imputation on the detection of differentially expressed genes using SAM by examining the differentially expressed genes that are lost after MV estimation. The performance measures give consistent results, indicating that the iterative procedure of IKNNimpute can enhance the prediction ability of cluster-based methods in the presence of high missing rates, in non-time series experiments and in data sets comprising both time series and non-time series data, because the information of the genes having MVs is used more efficiently and the iterative procedure allows refining the MV estimates. More importantly, IKNNimpute has a smaller detrimental effect on the detection of differentially expressed genes.
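
The reuse of estimated values can be sketched as follows (a simplified IKNN-style loop with unweighted Euclidean distances and a hypothetical data matrix; the paper's weighted variant and NRMSE evaluation are omitted):

```python
def iknn_impute(rows, k=2, n_iters=5):
    """Iterative KNN imputation: missing entries (None) are first mean-
    filled, then repeatedly re-estimated from the k nearest rows using
    the current (partly estimated) values."""
    n, m = len(rows), len(rows[0])
    missing = [(i, j) for i in range(n) for j in range(m)
               if rows[i][j] is None]
    # Initial fill: column means over the observed entries.
    X = [list(r) for r in rows]
    for j in range(m):
        obs = [r[j] for r in rows if r[j] is not None]
        mu = sum(obs) / len(obs)
        for i in range(n):
            if X[i][j] is None:
                X[i][j] = mu
    # Iterate: neighbours are found using the current estimates too.
    for _ in range(n_iters):
        for i, j in missing:
            d = lambda a, b: sum((x - y) ** 2 for x, y in zip(a, b))
            nbrs = sorted((r for r in range(n) if r != i),
                          key=lambda r: d(X[r], X[i]))[:k]
            X[i][j] = sum(X[r][j] for r in nbrs) / k
    return X

# Rows 0-1 and 2-3 form two similar pairs; one entry is missing.
rows = [
    [1.0, 2.0, 3.0],
    [1.1, None, 3.1],
    [9.0, 8.0, 7.0],
    [9.1, 8.1, 7.1],
]
X = iknn_impute(rows, k=1, n_iters=3)
```

After the first pass the crude mean fill is replaced by the nearest row's value, and later passes work with those refined estimates, which is the point of the iterative variant.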

  5. Constrained Total Generalized p-Variation Minimization for Few-View X-Ray Computed Tomography Image Reconstruction.

    PubMed

    Zhang, Hanming; Wang, Linyuan; Yan, Bin; Li, Lei; Cai, Ailong; Hu, Guoen

    2016-01-01

    Total generalized variation (TGV)-based computed tomography (CT) image reconstruction, which utilizes high-order image derivatives, is superior to total variation-based methods in terms of the preservation of edge information and the suppression of unfavorable staircase effects. However, conventional TGV regularization employs an l1-based form, which is not the most direct way to promote sparsity. In this study, we propose a total generalized p-variation (TGpV) regularization model to improve the sparsity exploitation of TGV and offer efficient solutions to few-view CT image reconstruction problems. To solve the nonconvex optimization problem of the TGpV minimization model, we present an efficient iterative algorithm based on alternating minimization of the augmented Lagrangian function. All of the resulting subproblems decoupled by variable splitting admit explicit solutions by applying the alternating minimization method and a generalized p-shrinkage mapping. In addition, approximate solutions that can be easily performed and quickly calculated through the fast Fourier transform are derived using the proximal point method to reduce the cost of the inner subproblems. Simulated and real data are qualitatively and quantitatively evaluated to validate the accuracy, efficiency and feasibility of the proposed method. Overall, the proposed method exhibits reasonable performance and outperforms the original TGV-based method when applied to few-view problems.
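
One common form of the generalized p-shrinkage mapping (an assumption for illustration; the paper's exact operator may differ) reduces to ordinary soft thresholding at p = 1:

```python
def p_shrink(x, lam, p):
    """Generalized p-shrinkage mapping (one common form): for p = 1 this
    is ordinary soft thresholding; p < 1 shrinks large values less."""
    ax = abs(x)
    if ax == 0.0:
        return 0.0
    mag = max(ax - lam ** (2.0 - p) * ax ** (p - 1.0), 0.0)
    return mag if x > 0 else -mag

# p = 1 recovers soft thresholding: magnitudes shrink by lam, small
# values are set exactly to zero (this is what promotes sparsity).
vals = [p_shrink(x, 0.5, 1.0) for x in (-2.0, -0.3, 0.0, 0.3, 2.0)]
```

In the splitting algorithm this mapping is applied elementwise as the explicit solution of the nonconvex shrinkage subproblem, one of the decoupled steps of the alternating minimization.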

  6. Sparse magnetic resonance imaging reconstruction using the bregman iteration

    NASA Astrophysics Data System (ADS)

    Lee, Dong-Hoon; Hong, Cheol-Pyo; Lee, Man-Woo

    2013-01-01

    Magnetic resonance imaging (MRI) reconstruction needs many samples that are sequentially acquired using phase encoding gradients in an MRI system. This is directly connected to the scan time of the MRI system, which can be long. Therefore, many researchers have studied ways to reduce the scan time, especially compressed sensing (CS), which enables reconstruction of sparse images from fewer samples when the k-space is not fully sampled. Recently, an iterative denoising technique based on the Bregman method was developed. The Bregman iteration method improves on total variation (TV) regularization by gradually recovering the fine-scale structures that are usually lost in TV regularization. In this study, we investigated sparse sampling image reconstruction using the Bregman iteration for a low-field MRI system to improve its temporal resolution and to validate its usefulness. Images were obtained with a 0.32 T MRI scanner (Magfinder II, SCIMEDIX, Korea) of a phantom and an in-vivo human brain in a head coil. We applied random k-space sampling, and we determined the sampling ratios by using half the fully sampled k-space. The Bregman iteration was used to generate the final images based on the reduced data. We also calculated the root-mean-square error (RMSE) values from error images that were obtained using various numbers of Bregman iterations. Our reconstructed images using the Bregman iteration for sparsely sampled data showed good results compared with the original images. Moreover, the RMSE values showed that the sparsely reconstructed phantom and human images converged to the original images. We confirmed the feasibility of sparse sampling image reconstruction using the Bregman iteration with a low-field MRI system and obtained good results. Although our results used half the sampling ratio, this method will be helpful in increasing the temporal resolution of low-field MRI systems.
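
The "add the residual back" structure of the Bregman iteration can be sketched with any linear smoother standing in for the TV solver (a 1-D toy illustration, not an MRI reconstruction):

```python
def bregman_iterations(f, solve, n_iters):
    """Bregman iteration: repeatedly add the residual back onto the data
    before re-applying the regularized solver, which gradually restores
    fine-scale detail that a single regularized pass smooths away."""
    v = [0.0] * len(f)  # accumulated residual
    u = list(f)
    for _ in range(n_iters):
        u = solve([fi + vi for fi, vi in zip(f, v)])
        v = [vi + fi - ui for vi, fi, ui in zip(v, f, u)]
    return u

def smooth(x):
    """Stand-in 'solver': a [1/4, 1/2, 1/4] moving average (blurs edges)."""
    n = len(x)
    return [0.25 * x[max(i - 1, 0)] + 0.5 * x[i] + 0.25 * x[min(i + 1, n - 1)]
            for i in range(n)]

# A step edge: one smoothing pass blurs it; Bregman iterations sharpen it.
f = [0.0] * 5 + [1.0] * 5
u = bregman_iterations(f, smooth, 30)
```

After 30 iterations the reconstruction is strictly closer to the original step than a single smoothing pass, which is the behaviour the abstract describes for TV regularization.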

  7. Multishot cartesian turbo spin-echo diffusion imaging using iterative POCSMUSE Reconstruction.

    PubMed

    Zhang, Zhe; Zhang, Bing; Li, Ming; Liang, Xue; Chen, Xiaodong; Liu, Renyuan; Zhang, Xin; Guo, Hua

    2017-07-01

    To report a diffusion imaging technique insensitive to off-resonance artifacts and motion-induced ghost artifacts using multishot Cartesian turbo spin-echo (TSE) acquisition and an iterative projection-onto-convex-sets (POCS)-based reconstruction of multiplexed sensitivity-encoded MRI (POCSMUSE) for phase correction. Phase-insensitive diffusion preparation was used to deal with the violation of the Carr-Purcell-Meiboom-Gill (CPMG) conditions in TSE diffusion-weighted imaging (DWI), followed by a multishot Cartesian TSE readout for data acquisition. An iterative diffusion phase correction method, iterative POCSMUSE, was developed and implemented to eliminate the ghost artifacts in multishot TSE DWI. In vivo human brain diffusion images (from one healthy volunteer and 10 patients) using multishot Cartesian TSE were acquired at 3T, reconstructed using iterative POCSMUSE, and compared with single-shot and multishot echo-planar imaging (EPI) results. These images were evaluated by two radiologists using visual scores (considering both image quality and distortion levels) from 1 to 5. The proposed iterative POCSMUSE reconstruction was able to correct the ghost artifacts in multishot DWI. The ghost-to-signal ratio of TSE DWI using iterative POCSMUSE (0.0174 ± 0.0024) was significantly (P < 0.0005) smaller than that using POCSMUSE (0.0253 ± 0.0040). The image scores of multishot TSE DWI were significantly higher than those of single-shot (P = 0.004 and 0.006 from two reviewers) and multishot (P = 0.008 and 0.004 from two reviewers) EPI-based methods. The proposed multishot Cartesian TSE DWI using iterative POCSMUSE reconstruction can provide high-quality diffusion images insensitive to motion-induced ghost artifacts and off-resonance artifacts such as chemical shifts and susceptibility-induced image distortions. Level of Evidence: 1. Technical Efficacy: Stage 1. J. Magn. Reson. Imaging 2017;46:167-174. © 2016 International Society for Magnetic Resonance in Medicine.
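
The POCS idea at the heart of POCSMUSE, cyclically projecting the iterate onto constraint sets, can be sketched in the plane (a generic toy example with two convex sets, not the MRI reconstruction itself):

```python
import math

def pocs(x0, projections, n_iters=50):
    """Projection onto convex sets: cyclically apply each projector; the
    iterate converges to a point in the intersection (if nonempty)."""
    x = x0
    for _ in range(n_iters):
        for proj in projections:
            x = proj(x)
    return x

def proj_line(p):
    """Nearest point on the line y = x."""
    t = (p[0] + p[1]) / 2.0
    return (t, t)

def proj_disc(p):
    """Nearest point in the unit disc centred at (1, 0)."""
    vx, vy = p[0] - 1.0, p[1]
    r = math.hypot(vx, vy)
    if r <= 1.0:
        return p
    return (1.0 + vx / r, vy / r)

# The sets intersect (e.g. at (1, 1)); alternating projections find a
# point of the intersection starting from outside both sets.
x = pocs((3.0, 0.0), [proj_line, proj_disc], n_iters=60)
```

In POCSMUSE the sets instead encode data consistency and the shot-to-shot phase model, but the alternating-projection skeleton is the same.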

  8. Source apportionment for fine particulate matter in a Chinese city using an improved gas-constrained method and comparison with multiple receptor models.

    PubMed

    Shi, Guoliang; Liu, Jiayuan; Wang, Haiting; Tian, Yingze; Wen, Jie; Shi, Xurong; Feng, Yinchang; Ivey, Cesunica E; Russell, Armistead G

    2018-02-01

    PM2.5 is one of the most studied atmospheric pollutants due to its adverse impacts on human health, welfare and the environment. An improved model (the chemical mass balance gas constraint-Iteration: CMBGC-Iteration) is proposed and applied to identify source categories and estimate source contributions of PM2.5. The CMBGC-Iteration model uses the ratios of gases to PM as constraints and considers the uncertainties of source profiles and receptor datasets, which is crucial information for source apportionment. To apply this model, samples of PM2.5 were collected at Tianjin, a megacity in northern China. The ambient PM2.5 dataset, source information, and gas-to-particle ratios (such as the SO2/PM2.5, CO/PM2.5, and NOx/PM2.5 ratios) were introduced into the CMBGC-Iteration model to identify the potential sources and their contributions. Six source categories were identified by this model, ordered by their contributions to PM2.5 as follows: secondary sources (30%), crustal dust (25%), vehicle exhaust (16%), coal combustion (13%), SOC (7.6%), and cement dust (0.40%). In addition, the same dataset was also analyzed with other receptor models (CMB, CMB-Iteration, CMB-GC, PMF, WALSPMF, and NCAPCA), and the results were compared. Ensemble-average source impacts were calculated based on the seven source apportionment results: secondary sources (28%), crustal dust (20%), coal combustion (18%), vehicle exhaust (17%), SOC (11%), and cement dust (1.3%). The similar results of the CMBGC-Iteration and ensemble methods indicate that CMBGC-Iteration can produce relatively appropriate results. Copyright © 2017 Elsevier Ltd. All rights reserved.
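
The core CMB step is a least-squares fit of source contributions to measured species concentrations (a plain OLS sketch with hypothetical two-source profiles; the effective-variance weighting and gas-constrained iteration of CMBGC-Iteration are not reproduced):

```python
def cmb_solve(profiles, receptor):
    """Ordinary-least-squares chemical mass balance for two sources:
    receptor[i] = sum_j profiles[j][i] * contribution[j] + error."""
    # Normal equations G s = h with G = F^T F and h = F^T c.
    f1, f2 = profiles
    g11 = sum(a * a for a in f1)
    g12 = sum(a * b for a, b in zip(f1, f2))
    g22 = sum(b * b for b in f2)
    h1 = sum(a * c for a, c in zip(f1, receptor))
    h2 = sum(b * c for b, c in zip(f2, receptor))
    det = g11 * g22 - g12 * g12
    return ((g22 * h1 - g12 * h2) / det, (g11 * h2 - g12 * h1) / det)

# Hypothetical species-fraction profiles for two sources, and a noise-free
# receptor sample mixed as 10 units of dust plus 5 units of coal.
dust = [0.30, 0.05, 0.01]
coal = [0.05, 0.20, 0.10]
receptor = [0.30 * 10 + 0.05 * 5, 0.05 * 10 + 0.20 * 5, 0.01 * 10 + 0.10 * 5]
s = cmb_solve((dust, coal), receptor)
```

The gas-constrained iteration in the paper adds gas-to-particle ratios as extra constraints and re-solves this fit repeatedly, propagating profile and measurement uncertainties.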

  9. MO-DE-207A-07: Filtered Iterative Reconstruction (FIR) Via Proximal Forward-Backward Splitting: A Synergy of Analytical and Iterative Reconstruction Method for CT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gao, H

    Purpose: This work is to develop a general framework, namely the filtered iterative reconstruction (FIR) method, to incorporate the analytical reconstruction (AR) method into the iterative reconstruction (IR) method for enhanced CT image quality. Methods: FIR is formulated as a combination of filtered data fidelity and sparsity regularization, and then solved by the proximal forward-backward splitting (PFBS) algorithm. As a result, the image reconstruction decouples data fidelity and image regularization with a two-step iterative scheme, during which an AR-projection step updates the filtered data fidelity term, while a denoising solver updates the sparsity regularization term. During the AR-projection step, the image is projected to the data domain to form the data residual, and then reconstructed by a certain AR to a residual image, which is in turn weighted together with the previous image iterate to form the next image iterate. Since the eigenvalues of the AR-projection operator are close to unity, PFBS-based FIR has fast convergence. Results: The proposed FIR method is validated in the setting of circular cone-beam CT with AR being FDK and total-variation sparsity regularization, and has improved image quality over both AR and IR. For example, FIR has improved visual assessment and quantitative measurement in terms of both contrast and resolution, and reduced axial and half-fan artifacts. Conclusion: FIR is proposed to incorporate AR into IR, with an efficient image reconstruction algorithm based on PFBS. The CBCT results suggest that FIR synergizes AR and IR with improved image quality and reduced axial and half-fan artifacts. This work was partially supported by the NSFC (#11405105), the 973 Program (#2015CB856000), and the Shanghai Pujiang Talent Program (#14PJ1404500).

  10. EC assisted start-up experiments reproduction in FTU and AUG for simulations of the ITER case

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Granucci, G.; Ricci, D.; Farina, D.

    The breakdown and plasma start-up in ITER are well-known issues, studied over the last few years in many tokamaks with the aid of calculations based on simplified modeling. The thickness of the ITER metallic wall and the voltage limits of the Central Solenoid Power Supply strongly limit the maximum achievable toroidal electric field (0.3 V/m), well below the level used in the present generation of tokamaks. In order to have a safe and robust breakdown, the use of Electron Cyclotron (EC) power to assist plasma formation and current ramp-up has been foreseen. This has drawn attention to the plasma formation phase in the presence of EC waves, especially in order to predict the power required for a robust breakdown in ITER. Few detailed theoretical studies have been performed to date, due to the complexity of the problem. A simplified approach, extended from that proposed in ref [1], has been developed, including an impurity multi-species distribution and EC wave propagation and absorption based on the GRAY code. This integrated model (BK0D) has been benchmarked against ohmic and EC-assisted experiments on FTU and AUG, identifying the key aspects for a good reproduction of the data. On this basis, the simulations have been devoted to understanding the best configuration for the ITER case. The dependence on impurity content and the neutral gas pressure limits has been considered. As a result of the analysis, a reasonable amount of power (1-2 MW) appears to be enough to significantly extend the breakdown and current start-up capability of ITER. The work reports the FTU data reproduction and the ITER case simulations.

  11. Global Contrast Based Salient Region Detection.

    PubMed

    Cheng, Ming-Ming; Mitra, Niloy J; Huang, Xiaolei; Torr, Philip H S; Hu, Shi-Min

    2015-03-01

    Automatic estimation of salient object regions across images, without any prior assumption or knowledge of the contents of the corresponding scenes, enhances many computer vision and computer graphics applications. We introduce a regional contrast based salient object detection algorithm, which simultaneously evaluates global contrast differences and spatial weighted coherence scores. The proposed algorithm is simple, efficient, naturally multi-scale, and produces full-resolution, high-quality saliency maps. These saliency maps are further used to initialize a novel iterative version of GrabCut, namely SaliencyCut, for high quality unsupervised salient object segmentation. We extensively evaluated our algorithm using traditional salient object detection datasets, as well as a more challenging Internet image dataset. Our experimental results demonstrate that our algorithm consistently outperforms 15 existing salient object detection and segmentation methods, yielding higher precision and better recall rates. We also show that our algorithm can be used to efficiently extract salient object masks from Internet images, enabling effective sketch-based image retrieval (SBIR) via simple shape comparisons. Despite such noisy internet images, where the saliency regions are ambiguous, our saliency guided image retrieval achieves a superior retrieval rate compared with state-of-the-art SBIR methods, and additionally provides important target object region information.
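    The histogram-based global contrast idea underlying this work can be illustrated compactly. The sketch below is a hypothetical grayscale toy version (quantized histogram contrast only), not the authors' full regional-contrast algorithm with spatially weighted coherence or SaliencyCut.

```python
import numpy as np

def histogram_contrast_saliency(img, bins=16):
    """Toy global-contrast saliency: a pixel's saliency is its average
    intensity distance to all other pixels, computed cheaply via a
    quantized histogram instead of all pixel pairs."""
    q = np.clip((img * bins).astype(int), 0, bins - 1)        # quantize to bins
    hist = np.bincount(q.ravel(), minlength=bins) / q.size    # bin frequencies
    centers = (np.arange(bins) + 0.5) / bins
    # saliency of bin b = sum_j hist[j] * |center_b - center_j|
    sal_per_bin = np.abs(centers[:, None] - centers[None, :]) @ hist
    sal = sal_per_bin[q]
    return (sal - sal.min()) / (sal.max() - sal.min() + 1e-12)
```

A small bright patch on a large uniform background receives high saliency because its bin is far, in intensity, from the dominant bin.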

  12. Robust Spacecraft Component Detection in Point Clouds.

    PubMed

    Wei, Quanmao; Jiang, Zhiguo; Zhang, Haopeng

    2018-03-21

    Automatic component detection of spacecraft can assist in on-orbit operation and space situational awareness. Spacecraft are generally composed of solar panels and cuboidal or cylindrical modules. These components can be simply represented by geometric primitives such as planes, cuboids and cylinders. Based on this prior, we propose a robust scheme to automatically detect such basic components of spacecraft in three-dimensional (3D) point clouds. In the proposed scheme, cylinders are first detected by iterating energy-based geometric model fitting and cylinder parameter estimation. Then, planes are detected by Hough transform and further described as bounded patches with their minimum bounding rectangles. Finally, cuboids are detected from pair-wise geometric relations among the detected patches. After successive detection of cylinders, planar patches and cuboids, a mid-level geometric representation of the spacecraft can be delivered. We tested the proposed component detection scheme on spacecraft 3D point clouds synthesized from computer-aided design (CAD) models and recovered by image-based reconstruction, respectively. Experimental results illustrate that the proposed scheme detects the basic geometric components effectively and is robust against noise and variations in point distribution density.

  13. Robust Spacecraft Component Detection in Point Clouds

    PubMed Central

    Wei, Quanmao; Jiang, Zhiguo

    2018-01-01

    Automatic component detection of spacecraft can assist in on-orbit operation and space situational awareness. Spacecraft are generally composed of solar panels and cuboidal or cylindrical modules. These components can be simply represented by geometric primitives such as planes, cuboids and cylinders. Based on this prior, we propose a robust scheme to automatically detect such basic components of spacecraft in three-dimensional (3D) point clouds. In the proposed scheme, cylinders are first detected by iterating energy-based geometric model fitting and cylinder parameter estimation. Then, planes are detected by Hough transform and further described as bounded patches with their minimum bounding rectangles. Finally, cuboids are detected from pair-wise geometric relations among the detected patches. After successive detection of cylinders, planar patches and cuboids, a mid-level geometric representation of the spacecraft can be delivered. We tested the proposed component detection scheme on spacecraft 3D point clouds synthesized from computer-aided design (CAD) models and recovered by image-based reconstruction, respectively. Experimental results illustrate that the proposed scheme detects the basic geometric components effectively and is robust against noise and variations in point distribution density. PMID:29561828

  14. Mixed H2/H∞-Based Fusion Estimation for Energy-Limited Multi-Sensors in Wearable Body Networks

    PubMed Central

    Li, Chao; Zhang, Zhenjiang; Chao, Han-Chieh

    2017-01-01

    In wireless sensor networks, sensor nodes collect large amounts of data in each time period. If all of these data are transmitted to a Fusion Center (FC), the sensor nodes' power is rapidly depleted. On the other hand, the data also need filtering to remove noise. Therefore, an efficient fusion estimation model that saves sensor-node energy while maintaining high accuracy is needed. This paper proposes a novel mixed H2/H∞-based energy-efficient fusion estimation model (MHEEFE) for energy-limited Wearable Body Networks. In the proposed model, the communication cost is first reduced while preserving estimation accuracy. Then, the parameters of the quantization method are discussed and determined by an optimization method using prior knowledge. In addition, calculation methods for key parameters are investigated to make the final estimates more stable. Finally, an iteration-based weight calculation algorithm is presented, which improves the fault tolerance of the final estimate. In simulations, the impacts of several pivotal parameters are examined; compared with related models, the MHEEFE shows better accuracy, energy efficiency and fault tolerance. PMID:29280950

  15. Estimation for the Linear Model With Uncertain Covariance Matrices

    NASA Astrophysics Data System (ADS)

    Zachariah, Dave; Shariati, Nafiseh; Bengtsson, Mats; Jansson, Magnus; Chatterjee, Saikat

    2014-03-01

    We derive a maximum a posteriori estimator for the linear observation model, where the signal and noise covariance matrices are both uncertain. The uncertainties are treated probabilistically by modeling the covariance matrices with prior inverse-Wishart distributions. The nonconvex problem of jointly estimating the signal of interest and the covariance matrices is tackled by a computationally efficient fixed-point iteration as well as an approximate variational Bayes solution. The statistical performance of estimators is compared numerically to state-of-the-art estimators from the literature and shown to perform favorably.

  16. Effect of thick blanket modules on neoclassical tearing mode locking in ITER

    DOE PAGES

    La Haye, R. J.; Paz-Soldan, C.; Liu, Y. Q.

    2016-11-03

    The rotation of m/n = 2/1 tearing modes can be slowed and stopped (i.e. locked) by eddy currents induced in resistive walls in conjunction with residual error fields that provide a final 'notch' point. This is a particular issue in ITER, with its large inertia and low applied torque (m and n are the poloidal and toroidal mode numbers, respectively). Previous estimates of tolerable 2/1 island widths in ITER found that the ITER electron cyclotron current drive (ECCD) system could catch and subdue such islands before they persisted long enough and grew large enough to lock. These estimates were based on a forecast of initial island rotation using the n = 1 resistive penetration time of the inner vacuum vessel wall and benchmarked to DIII-D high-rotation plasmas. However, rotating tearing modes in ITER will also induce eddy currents in the blanket as the effective first wall that can shield the inner vessel. The closer-fitting blanket wall has a much shorter time constant and should allow several times smaller islands to lock several times faster in ITER than previously considered; this challenges the ECCD stabilization. Here, recent DIII-D ITER baseline scenario (IBS) plasmas with low rotation through small applied torque allow better modeling and scaling to ITER with the blanket as the first resistive wall.

  17. Effect of thick blanket modules on neoclassical tearing mode locking in ITER

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    La Haye, R. J.; Paz-Soldan, C.; Liu, Y. Q.

    The rotation of m/n = 2/1 tearing modes can be slowed and stopped (i.e. locked) by eddy currents induced in resistive walls in conjunction with residual error fields that provide a final 'notch' point. This is a particular issue in ITER, with its large inertia and low applied torque (m and n are the poloidal and toroidal mode numbers, respectively). Previous estimates of tolerable 2/1 island widths in ITER found that the ITER electron cyclotron current drive (ECCD) system could catch and subdue such islands before they persisted long enough and grew large enough to lock. These estimates were based on a forecast of initial island rotation using the n = 1 resistive penetration time of the inner vacuum vessel wall and benchmarked to DIII-D high-rotation plasmas. However, rotating tearing modes in ITER will also induce eddy currents in the blanket as the effective first wall that can shield the inner vessel. The closer-fitting blanket wall has a much shorter time constant and should allow several times smaller islands to lock several times faster in ITER than previously considered; this challenges the ECCD stabilization. Here, recent DIII-D ITER baseline scenario (IBS) plasmas with low rotation through small applied torque allow better modeling and scaling to ITER with the blanket as the first resistive wall.

  18. Fast divide-and-conquer algorithm for evaluating polarization in classical force fields

    NASA Astrophysics Data System (ADS)

    Nocito, Dominique; Beran, Gregory J. O.

    2017-03-01

    Evaluation of the self-consistent polarization energy forms a major computational bottleneck in polarizable force fields. In large systems, the linear polarization equations are typically solved iteratively with techniques based on Jacobi iterations (JI) or preconditioned conjugate gradients (PCG). Two new variants of JI are proposed here that exploit domain decomposition to accelerate the convergence of the induced dipoles. The first, divide-and-conquer JI (DC-JI), is a block Jacobi algorithm which solves the polarization equations within non-overlapping sub-clusters of atoms directly via Cholesky decomposition, and iterates to capture interactions between sub-clusters. The second, fuzzy DC-JI, achieves further acceleration by employing overlapping blocks. Fuzzy DC-JI is analogous to an additive Schwarz method, but with distance-based weighting when averaging the fuzzy dipoles from different blocks. Key to the success of these algorithms is the use of K-means clustering to identify natural atomic sub-clusters automatically for both algorithms and to determine the appropriate weights in fuzzy DC-JI. The algorithm employs knowledge of the 3-D spatial interactions to group important elements in the 2-D polarization matrix. When coupled with direct inversion in the iterative subspace (DIIS) extrapolation, fuzzy DC-JI/DIIS in particular converges in a comparable number of iterations as PCG, but with lower computational cost per iteration. In the end, the new algorithms demonstrated here accelerate the evaluation of the polarization energy by 2-3 fold compared to existing implementations of PCG or JI/DIIS.
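    The divide-and-conquer Jacobi scheme summarized above can be sketched in its generic form. The code below is an illustrative block Jacobi solver for a symmetric positive-definite system, not the authors' implementation: a dense solve stands in for the per-block Cholesky factorization, and the K-means clustering, fuzzy overlapping blocks and DIIS extrapolation are omitted.

```python
import numpy as np

def block_jacobi_solve(A, b, blocks, tol=1e-10, max_iter=500):
    """Solve A x = b by block Jacobi iteration: each block of unknowns is
    solved directly against its diagonal sub-matrix, while couplings
    between blocks are picked up iteratively from the previous iterate."""
    x = np.zeros_like(b)
    for _ in range(max_iter):
        x_new = np.empty_like(x)
        for idx in blocks:
            # b_i minus the coupling to all *other* blocks at the old iterate
            r = b[idx] - A[idx, :] @ x + A[np.ix_(idx, idx)] @ x[idx]
            x_new[idx] = np.linalg.solve(A[np.ix_(idx, idx)], r)
        if np.linalg.norm(x_new - x) < tol:
            return x_new
        x = x_new
    return x
```

Convergence requires the inter-block couplings to be weak relative to the diagonal blocks (e.g. block diagonal dominance), which is the regime the clustering step is designed to produce.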

  19. Robust Mean and Covariance Structure Analysis through Iteratively Reweighted Least Squares.

    ERIC Educational Resources Information Center

    Yuan, Ke-Hai; Bentler, Peter M.

    2000-01-01

    Adapts robust schemes to mean and covariance structures, providing an iteratively reweighted least squares approach to robust structural equation modeling. Each case is weighted according to its distance, based on first and second order moments. Test statistics and standard error estimators are given. (SLD)
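    For readers unfamiliar with the technique, here is a minimal IRLS sketch in the simpler setting of robust linear regression with Huber-type case weights; the paper's weighting by first- and second-moment distances for mean and covariance structures is more involved, so treat this only as an illustration of the reweighting idea.

```python
import numpy as np

def irls_huber(X, y, delta=1.0, n_iter=50):
    """Iteratively reweighted least squares: refit weighted LS, with each
    case downweighted when its absolute residual exceeds the Huber
    threshold delta."""
    beta = np.linalg.lstsq(X, y, rcond=None)[0]          # ordinary LS start
    for _ in range(n_iter):
        r = np.abs(y - X @ beta)
        w = np.where(r <= delta, 1.0, delta / np.maximum(r, 1e-12))
        beta = np.linalg.solve(X.T @ (w[:, None] * X), X.T @ (w * y))
    return beta
```

A single gross outlier receives a weight proportional to 1/|residual| and therefore barely influences the final fit, unlike in ordinary least squares.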

  20. Overview of the negative ion based neutral beam injectors for ITER.

    PubMed

    Schunke, B; Boilson, D; Chareyre, J; Choi, C-H; Decamps, H; El-Ouazzani, A; Geli, F; Graceffa, J; Hemsworth, R; Kushwah, M; Roux, K; Shah, D; Singh, M; Svensson, L; Urbani, M

    2016-02-01

    The ITER baseline foresees 2 Heating Neutral Beams (HNBs) based on 1 MeV, 40 A D(-) negative ion accelerators, each capable of delivering 16.7 MW of deuterium atoms to the DT plasma, with an optional third HNB injector foreseen as a possible upgrade. In addition, a dedicated diagnostic neutral beam will inject ≈22 A of H(0) at 100 keV as the probe beam for charge exchange recombination spectroscopy. The integration of the injectors into the ITER plant is nearly finished, necessitating only refinements. A large number of components have passed the final design stage, manufacturing has started, and the essential test beds, for the chosen prototype route, will soon be ready to start.

  1. Computed Tomography Imaging of a Hip Prosthesis Using Iterative Model-Based Reconstruction and Orthopaedic Metal Artefact Reduction: A Quantitative Analysis.

    PubMed

    Wellenberg, Ruud H H; Boomsma, Martijn F; van Osch, Jochen A C; Vlassenbroek, Alain; Milles, Julien; Edens, Mireille A; Streekstra, Geert J; Slump, Cornelis H; Maas, Mario

    To quantify the combined use of iterative model-based reconstruction (IMR) and orthopaedic metal artefact reduction (O-MAR) in reducing metal artefacts and improving image quality in a total hip arthroplasty phantom. Scans acquired at several dose levels and kVps were reconstructed with filtered back-projection (FBP), iterative reconstruction (iDose) and IMR, with and without O-MAR. Computed tomography (CT) numbers, noise levels, signal-to-noise ratios and contrast-to-noise ratios were analysed. Iterative model-based reconstruction results in overall improved image quality compared to iDose and FBP (P < 0.001). Orthopaedic metal artefact reduction is most effective in reducing severe metal artefacts, improving CT number accuracy by 50%, 60%, and 63% (P < 0.05), reducing noise by 1%, 62%, and 85% (P < 0.001), and improving signal-to-noise ratios by 27%, 47%, and 46% (P < 0.001) and contrast-to-noise ratios by 16%, 25%, and 19% (P < 0.001) with FBP, iDose, and IMR, respectively. The combined use of IMR and O-MAR strongly improves overall image quality and strongly reduces metal artefacts in the CT imaging of a total hip arthroplasty phantom.
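    For reference, the image-quality metrics reported in this record are commonly computed from region-of-interest (ROI) statistics as below. These are the standard textbook definitions; the record does not spell out the paper's exact formulas, so treat them as an assumption.

```python
import numpy as np

def snr(roi):
    """Signal-to-noise ratio of a nominally uniform ROI:
    mean signal over standard deviation (noise)."""
    return roi.mean() / roi.std()

def cnr(roi, background):
    """Contrast-to-noise ratio between a target ROI and a background
    ROI: absolute mean difference over background noise."""
    return abs(roi.mean() - background.mean()) / background.std()
```

CT number accuracy is then typically reported as the deviation of an ROI's mean Hounsfield value from a reference scan without metal.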

  2. Sparse Adaptive Iteratively-Weighted Thresholding Algorithm (SAITA) for Lp-Regularization Using the Multiple Sub-Dictionary Representation

    PubMed Central

    Zhang, Jie; Fan, Shangang; Xiong, Jian; Cheng, Xiefeng; Sari, Hikmet; Adachi, Fumiyuki

    2017-01-01

    Both L1/2 and L2/3 are typical non-convex regularizations of Lp (0 < p < 1).

  3. Filtered gradient reconstruction algorithm for compressive spectral imaging

    NASA Astrophysics Data System (ADS)

    Mejia, Yuri; Arguello, Henry

    2017-04-01

    Compressive sensing matrices are traditionally based on random Gaussian and Bernoulli entries. Nevertheless, they are subject to physical constraints, and their structure rarely follows a dense matrix distribution, as is the case for the matrix related to compressive spectral imaging (CSI). The CSI matrix represents the integration of coded and shifted versions of the spectral bands. A spectral image can be recovered from CSI measurements by using iterative algorithms for linear inverse problems that minimize an objective function including a quadratic error term combined with a sparsity regularization term. However, current algorithms are slow because they do not exploit the structure and sparse characteristics of the CSI matrices. A gradient-based CSI reconstruction algorithm, which introduces a filtering step in each iteration of a conventional CSI reconstruction algorithm and thereby yields improved image quality, is proposed. Motivated by the structure of the CSI matrix, Φ, this algorithm modifies the iterative solution such that it is forced to converge to a filtered version of the residual Φ^T y, where y is the compressive measurement vector. We show that the filter-based algorithm converges to better quality performance than the unfiltered version. Simulation results highlight the relative performance gain over existing iterative algorithms.
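    The modified iteration can be sketched as a Landweber-type gradient step followed by a filtering step. The sketch below is generic: `filt` is a pluggable callable standing in for the paper's specific filter, and the sensing matrix is a plain dense array rather than a structured CSI operator.

```python
import numpy as np

def filtered_gradient(Phi, y, filt, mu=None, n_iter=100):
    """Gradient iteration for min ||y - Phi x||^2 with a filtering step
    applied to each iterate, so the solution is steered toward a
    filtered version of the back-projected data Phi^T y."""
    if mu is None:
        mu = 1.0 / np.linalg.norm(Phi, 2) ** 2     # step size from spectral norm
    x = Phi.T @ y                                  # back-projection initial estimate
    for _ in range(n_iter):
        x = x + mu * (Phi.T @ (y - Phi @ x))       # gradient (Landweber) step
        x = filt(x)                                # e.g. smoothing or thresholding
    return x
```

With `filt` set to the identity this reduces to the plain Landweber iteration and converges to the least-squares solution; a nontrivial filter trades data fit for regularity at each step.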

  4. Noise models for low counting rate coherent diffraction imaging.

    PubMed

    Godard, Pierre; Allain, Marc; Chamard, Virginie; Rodenburg, John

    2012-11-05

    Coherent diffraction imaging (CDI) is a lens-less microscopy method that extracts the complex-valued exit field from intensity measurements alone. It is of particular importance for microscopy imaging with diffraction set-ups where high-quality lenses are not available. The inversion scheme allowing the phase retrieval is based on the use of an iterative algorithm. In this work, we address the question of the choice of the iterative process in the case of data corrupted by photon or electron shot noise. Several noise models are presented and further used within two inversion strategies, the ordered subset and the scaled gradient. Based on analytical and numerical analysis together with Monte-Carlo studies, we show that any physical interpretation drawn from a CDI iterative technique requires a detailed understanding of the relationship between the noise model and the inversion method used. We observe that iterative algorithms often implicitly assume a noise model. For low counting rates, each noise model behaves differently. Moreover, the optimization strategy used introduces its own artefacts. Based on this analysis, we develop a hybrid strategy which works efficiently in the absence of an informed initial guess. Our work emphasises issues which should be considered carefully when inverting experimental data.

  5. Sparse Adaptive Iteratively-Weighted Thresholding Algorithm (SAITA) for Lp-Regularization Using the Multiple Sub-Dictionary Representation.

    PubMed

    Li, Yunyi; Zhang, Jie; Fan, Shangang; Yang, Jie; Xiong, Jian; Cheng, Xiefeng; Sari, Hikmet; Adachi, Fumiyuki; Gui, Guan

    2017-12-15

    Both L1/2 and L2/3 are typical non-convex regularizations of Lp (0 < p < 1).
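    Although the abstract is truncated, the family of methods it refers to can be sketched: an ISTA-style iteration in which the soft threshold for each coefficient is reweighted to approximate an Lp (0 < p < 1) penalty. This is a hedged generic sketch using weights of the form (|g_i| + ε)^(p−1), not the specific SAITA operator or its multiple sub-dictionary representation.

```python
import numpy as np

def reweighted_lp_ista(A, y, lam=0.1, p=0.5, eps=1e-6, n_iter=50):
    """ISTA with per-coefficient reweighted soft thresholding that
    approximates an Lp (0 < p < 1) penalty: small coefficients get a
    large threshold, large coefficients are barely penalized."""
    L = np.linalg.norm(A, 2) ** 2                  # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        g = x + A.T @ (y - A @ x) / L              # gradient step on the data term
        w = (np.abs(g) + eps) ** (p - 1.0)         # Lp-style reweighting
        x = np.sign(g) * np.maximum(np.abs(g) - lam * w / L, 0.0)
    return x
```

The reweighting is what distinguishes this from plain L1 soft thresholding: coefficients well above the noise floor are shrunk only slightly, while near-zero coefficients face an effectively huge threshold and are driven exactly to zero.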

  6. A Kronecker product splitting preconditioner for two-dimensional space-fractional diffusion equations

    NASA Astrophysics Data System (ADS)

    Chen, Hao; Lv, Wen; Zhang, Tongtong

    2018-05-01

    We study preconditioned iterative methods for the linear system arising in the numerical discretization of a two-dimensional space-fractional diffusion equation. Our approach is based on a formulation of the discrete problem that is shown to be the sum of two Kronecker products. By making use of an alternating Kronecker product splitting iteration technique we establish a class of fixed-point iteration methods. Theoretical analysis shows that the new method converges to the unique solution of the linear system. Moreover, the optimal choice of the involved iteration parameters and the corresponding asymptotic convergence rate are computed exactly when the eigenvalues of the system matrix are all real. The basic iteration is accelerated by a Krylov subspace method like GMRES. The corresponding preconditioner is in the form of a Kronecker product structure and requires at each iteration the solution of a set of discrete one-dimensional fractional diffusion equations. We use structure preserving approximations to the discrete one-dimensional fractional diffusion operators in the action of the preconditioning matrix. Numerical examples are presented to illustrate the effectiveness of this approach.
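    An alternating Kronecker-product splitting can be sketched as an HSS-like two-half-step fixed-point iteration. The toy below forms I⊗T1 and T2⊗I explicitly and solves them densely; a real implementation would instead solve sets of one-dimensional problems, as the paper describes. It assumes square, symmetric positive-definite factors T1 and T2 of equal size.

```python
import numpy as np

def kron_splitting_solve(T1, T2, b, alpha=1.0, n_iter=200):
    """Solve (I kron T1 + T2 kron I) x = b by alternating two half-steps,
    each shifting one Kronecker term by alpha*I and moving the other to
    the right-hand side. At the fixed point, (M + N) x = b holds."""
    n = T1.shape[0]
    m = n * n
    M = np.kron(np.eye(n), T1)     # first Kronecker term
    N = np.kron(T2, np.eye(n))     # second Kronecker term
    x = np.zeros(m)
    for _ in range(n_iter):
        x_half = np.linalg.solve(alpha * np.eye(m) + M,
                                 (alpha * np.eye(m) - N) @ x + b)
        x = np.linalg.solve(alpha * np.eye(m) + N,
                            (alpha * np.eye(m) - M) @ x_half + b)
    return x
```

For positive-definite factors each half-step is a contraction for any alpha > 0, with the optimal alpha balancing the two spectra, as the paper's analysis makes exact in the real-eigenvalue case.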

  7. A fast method to emulate an iterative POCS image reconstruction algorithm.

    PubMed

    Zeng, Gengsheng L

    2017-10-01

    Iterative image reconstruction algorithms are commonly used to optimize an objective function, especially when the objective function is nonquadratic. Generally speaking, iterative algorithms are computationally inefficient. This paper presents a fast algorithm that has one backprojection and no forward projection. This paper derives a new method to solve an optimization problem. The nonquadratic constraint, for example, an edge-preserving denoising constraint, is implemented as a nonlinear filter. The algorithm is derived based on the POCS (projection onto convex sets) approach. A windowed FBP (filtered backprojection) algorithm enforces the data fidelity. An iterative procedure, divided into segments, enforces edge-enhancement denoising. Each segment performs nonlinear filtering. The derived iterative algorithm is computationally efficient. It contains only one backprojection and no forward projection. Low-dose CT data are used for algorithm feasibility studies. The nonlinearity is implemented as an edge-enhancing noise-smoothing filter. The patient study results demonstrate its effectiveness in processing low-dose x-ray CT data. This fast algorithm can be used to replace many iterative algorithms. © 2017 American Association of Physicists in Medicine.
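    The POCS idea this paper builds on, alternating projections onto convex constraint sets, can be sketched generically. The example below uses an affine data-consistency set and the nonnegative orthant; it is not the paper's windowed-FBP fidelity step or edge-preserving filter, just the underlying projection machinery.

```python
import numpy as np

def pocs(A, y, n_iter=500):
    """Alternate projections onto two convex sets: the nonnegative
    orthant and the affine data-consistency set {x : A x = y}
    (assuming A has full row rank)."""
    A_pinv = np.linalg.pinv(A)         # yields the orthogonal projection step
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x = np.maximum(x, 0.0)         # project onto the constraint set
        x = x + A_pinv @ (y - A @ x)   # project onto {x : A x = y}
    return x
```

When the two sets intersect, the iterates converge to a point in the intersection; ending on the data projection guarantees exact data consistency at every pass.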

  8. Electromagnetic scattering of large structures in layered earths using integral equations

    NASA Astrophysics Data System (ADS)

    Xiong, Zonghou; Tripp, Alan C.

    1995-07-01

    An electromagnetic scattering algorithm for large conductivity structures in stratified media has been developed, based on the method of system iteration and spatial symmetry reduction using volume electric integral equations. The method of system iteration divides a structure into many substructures and solves the resulting matrix equation using a block iterative method. The block submatrices usually need to be stored on disk in order to save computer core memory. However, this requires a large disk for large structures. If the body is discretized into equal-size cells, it is possible to use the spatial symmetry relations of the Green's functions to regenerate the scattering impedance matrix in each iteration, thus avoiding expensive disk storage. Numerical tests show that the system iteration converges much faster than the conventional point-wise Gauss-Seidel iterative method. The number of cells does not significantly affect the rate of convergence. Thus the algorithm effectively reduces the solution of the scattering problem to order O(N^2), instead of O(N^3) as with direct solvers.
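    The spatial-symmetry reduction can be illustrated in one dimension: with equal-size cells, a translation-invariant interaction makes the matrix Toeplitz, so a table indexed by cell offset replaces the full matrix. This is a hypothetical toy with a made-up scalar kernel, not the dyadic Green's functions of the paper.

```python
import numpy as np

def offset_matvec(g, x):
    """Compute y_i = sum_j g[i-j] x_j for n equal cells. g has length
    2n-1 and is indexed by offset (g[k + n - 1] holds the interaction
    at offset k), so the full n-by-n matrix is never stored or
    regenerated: O(n) storage instead of O(n^2)."""
    n = x.size
    y = np.zeros(n)
    for i in range(n):
        for j in range(n):
            y[i] += g[i - j + n - 1] * x[j]
    return y
```

The same idea extends to 3-D grids, where interactions depend only on the offset vector between cells, which is what allows the impedance submatrices to be regenerated on the fly in each block iteration.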

  9. Development of laser-based techniques for in situ characterization of the first wall in ITER and future fusion devices

    NASA Astrophysics Data System (ADS)

    Philipps, V.; Malaquias, A.; Hakola, A.; Karhunen, J.; Maddaluno, G.; Almaviva, S.; Caneve, L.; Colao, F.; Fortuna, E.; Gasior, P.; Kubkowska, M.; Czarnecka, A.; Laan, M.; Lissovski, A.; Paris, P.; van der Meiden, H. J.; Petersson, P.; Rubel, M.; Huber, A.; Zlobinski, M.; Schweer, B.; Gierse, N.; Xiao, Q.; Sergienko, G.

    2013-09-01

    Analysis and understanding of wall erosion, material transport and fuel retention are among the most important tasks for ITER and future devices, since these questions determine largely the lifetime and availability of the fusion reactor. These data are also of extreme value to improve the understanding and validate the models of the in vessel build-up of the T inventory in ITER and future D-T devices. So far, research in these areas is largely supported by post-mortem analysis of wall tiles. However, access to samples will be very much restricted in the next-generation devices (such as ITER, JT-60SA, W7-X, etc) with actively cooled plasma-facing components (PFC) and increasing duty cycle. This has motivated the development of methods to measure the deposition of material and retention of plasma fuel on the walls of fusion devices in situ, without removal of PFC samples. For this purpose, laser-based methods are the most promising candidates. Their feasibility has been assessed in a cooperative undertaking in various European associations under EFDA coordination. Different laser techniques have been explored both under laboratory and tokamak conditions with the emphasis to develop a conceptual design for a laser-based wall diagnostic which is integrated into an ITER port plug, aiming to characterize in situ relevant parts of the inner wall, the upper region of the inner divertor, part of the dome and the upper X-point region.

  10. Negotiating Tensions Between Theory and Design in the Development of Mailings for People Recovering From Acute Coronary Syndrome

    PubMed Central

    Presseau, Justin; Nicholas Angl, Emily; Jokhio, Iffat; Schwalm, JD; Grimshaw, Jeremy M; Bosiak, Beth; Natarajan, Madhu K; Ivers, Noah M

    2017-01-01

    Background Taking all recommended secondary prevention cardiac medications and fully participating in a formal cardiac rehabilitation program significantly reduces mortality and morbidity in the year following a heart attack. However, many people who have had a heart attack stop taking some or all of their recommended medications prematurely and many do not complete a formal cardiac rehabilitation program. Objective The objective of our study was to develop a user-centered, theory-based, scalable intervention of printed educational materials to encourage and support people who have had a heart attack to use recommended secondary prevention cardiac treatments. Methods Prior to the design process, we conducted theory-based interviews and surveys with patients who had had a heart attack to identify key determinants of secondary prevention behaviors. Our interdisciplinary research team then partnered with a patient advisor and design firm to undertake an iterative, theory-informed, user-centered design process to operationalize techniques to address these determinants. User-centered design requires considering users’ needs, goals, strengths, limitations, context, and intuitive processes; designing prototypes adapted to users accordingly; observing how potential users respond to the prototype; and using those data to refine the design. To accomplish these tasks, we conducted user research to develop personas (archetypes of potential users), developed a preliminary prototype using behavior change theory to map behavior change techniques to identified determinants of medication adherence, and conducted 2 design cycles, testing materials via think-aloud and semistructured interviews with a total of 11 users (10 patients who had experienced a heart attack and 1 caregiver). We recruited participants at a single cardiac clinic using purposive sampling informed by our personas. We recorded sessions with users and extracted key themes from transcripts. 
We held interdisciplinary team discussions to interpret findings in the context of relevant theory-based evidence and iteratively adapted the intervention accordingly. Results Through our iterative development and testing, we identified 3 key tensions: (1) evidence from theory-based studies versus users’ feelings, (2) informative versus persuasive communication, and (3) logistical constraints for the intervention versus users’ desires or preferences. We addressed these by (1) identifying root causes for users’ feelings and addressing those to better incorporate theory- and evidence-based features, (2) accepting that our intervention was ethically justified in being persuasive, and (3) making changes to the intervention where possible, such as attempting to match imagery in the materials to patients’ self-images. Conclusions Theory-informed interventions must be operationalized in ways that fit with user needs. Tensions between users’ desires or preferences and health care system goals and constraints must be identified and addressed to the greatest extent possible. A cluster randomized controlled trial of the final intervention is currently underway. PMID:28249831

  11. Examinations for leak tightness of actively cooled components in ITER and fusion devices

    NASA Astrophysics Data System (ADS)

    Hirai, T.; Barabash, V.; Carrat, R.; Chappuis, Ph; Durocher, A.; Escourbiac, F.; Merola, M.; Raffray, R.; Worth, L.; Boscary, J.; Chantant, M.; Chuilon, B.; Guilhem, D.; Hatchressian, J.-C.; Hong, S. H.; Kim, K. M.; Masuzaki, S.; Mogaki, K.; Nicolai, D.; Wilson, D.; Yao, D.

    2017-12-01

Any leak in one of the ITER actively cooled components would have serious consequences for machine operation; the risk of a leak must therefore be minimized as far as possible. In this paper, the examination strategy for ensuring leak tightness of the ITER internal components (i.e. examination of base materials, vacuum boundary joints and final components) and the hydraulic parameters for ITER internal components are summarized. Experience with component tests, especially hot helium leak tests in recent fusion devices, was reviewed and the test parameters discussed. This experience confirmed that the hot He leak test is effective at detecting small leak paths that volumetric examination cannot always detect because of its limited spatial resolution.

  12. Front-end antenna system design for the ITER low-field-side reflectometer system using GENRAY ray tracing.

    PubMed

    Wang, G; Doyle, E J; Peebles, W A

    2016-11-01

    A monostatic antenna array arrangement has been designed for the microwave front-end of the ITER low-field-side reflectometer (LFSR) system. This paper presents details of the antenna coupling coefficient analyses performed using GENRAY, a 3-D ray tracing code, to evaluate the plasma height accommodation capability of such an antenna array design. Utilizing modeled data for the plasma equilibrium and profiles for the ITER baseline and half-field scenarios, a design study was performed for measurement locations varying from the plasma edge to inside the top of the pedestal. A front-end antenna configuration is recommended for the ITER LFSR system based on the results of this coupling analysis.

  13. Improved sensitivity of computed tomography towards iodine and gold nanoparticle contrast agents via iterative reconstruction methods

    PubMed Central

    Bernstein, Ally Leigh; Dhanantwari, Amar; Jurcova, Martina; Cheheltani, Rabee; Naha, Pratap Chandra; Ivanc, Thomas; Shefer, Efrat; Cormode, David Peter

    2016-01-01

Computed tomography is a widely used medical imaging technique that has high spatial and temporal resolution. Its weakness is its low sensitivity towards contrast media. Iterative reconstruction techniques (ITER) have recently become available, which provide reduced image noise compared with traditional filtered back-projection methods (FBP) and may therefore allow the sensitivity of CT to be improved; however, this effect has not been studied in detail. We scanned phantoms containing either an iodine contrast agent or gold nanoparticles, using a range of tube voltages and currents. We performed reconstruction with FBP, ITER and a novel iterative, model-based reconstruction (IMR) algorithm. We found that noise decreased in an algorithm-dependent manner (FBP > ITER > IMR) for every scan and that no differences were observed in the attenuation rates of the agents. The contrast-to-noise ratio (CNR) of iodine was highest at 80 kV, whilst the CNR for gold was highest at 140 kV. The CNR of IMR images was almost tenfold higher than that of FBP images. Similar trends were found in dual-energy images formed using these algorithms. In conclusion, IMR-based reconstruction techniques will allow contrast agents to be detected with greater sensitivity, and may allow lower contrast agent doses to be used. PMID:27185492
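The CNR comparisons above follow the standard definition, which is easy to state in code. A minimal sketch (conventions for the ROI and noise estimate vary between studies, so treat this as illustrative):

```python
import numpy as np

def cnr(roi, background):
    """Contrast-to-noise ratio: |mean(ROI) - mean(background)| divided by
    the background noise (sample standard deviation). This is the usual
    definition behind CNR figures quoted for CT contrast agents."""
    return abs(roi.mean() - background.mean()) / background.std(ddof=1)
```

Because noise sits in the denominator, halving image noise at fixed attenuation doubles the CNR, which is why the noise ordering FBP > ITER > IMR translates directly into the sensitivity gains reported above.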

  14. High-resolution CO2 and CH4 flux inverse modeling combining GOSAT, OCO-2 and ground-based observations

    NASA Astrophysics Data System (ADS)

    Maksyutov, S. S.; Oda, T.; Saito, M.; Ito, A.; Janardanan Achari, R.; Sasakawa, M.; Machida, T.; Kaiser, J. W.; Belikov, D.; Valsala, V.; O'Dell, C.; Yoshida, Y.; Matsunaga, T.

    2017-12-01

We develop a high-resolution CO2 and CH4 flux inversion system based on a coupled Lagrangian-Eulerian tracer transport model, designed to estimate surface fluxes from atmospheric CO2 and CH4 data observed by the GOSAT and OCO-2 satellites and by global in-situ networks, including observations in Siberia. We use the Lagrangian particle dispersion model (LPDM) FLEXPART to estimate the surface flux footprint of each observation at 0.1-degree spatial resolution for three days of transport. The LPDM is coupled to a global atmospheric tracer transport model (NIES-TM). The adjoint of the coupled transport model is used in an iterative optimization procedure based on either a quasi-Newton algorithm or singular value decomposition. Combining surface and satellite data in the inversion requires correcting for biases present in the satellite observations, which is done in a two-step procedure. In the first step, bi-weekly corrections to prior flux fields are estimated for the period 2009 to 2015 from in-situ CO2 and CH4 data from the global observation networks included in the ObsPack-GVP (CO2), WDCGG (CH4) and JR-STATION datasets. High-resolution prior fluxes were prepared for anthropogenic emissions (ODIAC and EDGAR), biomass burning (GFAS), and the terrestrial biosphere. The terrestrial biosphere flux was constructed using a vegetation mosaic map and separate simulations of CO2 fluxes by the VISIT model for each vegetation type present in a grid cell. The prior flux uncertainty for land is scaled proportionally to the monthly mean GPP from the MODIS product for CO2 and to EDGAR emissions for CH4. Use of the high-resolution transport leads to improved representation of the anthropogenic plumes often observed at continental continuous observation sites. OCO-2 observations are aggregated to 1-second averages to match the 0.1-degree resolution of the transport model.
Before including satellite observations in the inversion, the monthly varying latitude-dependent bias is estimated by comparing satellite observations with column abundance simulated with surface fluxes optimized by surface inversion. The bias-corrected GOSAT and OCO-2 data are then used in the inversion together with ground-based observations. Application of the bias correction to satellite data reduces the difference between the flux estimates based on ground-based and satellite observations.

  15. Networking Theories by Iterative Unpacking

    ERIC Educational Resources Information Center

    Koichu, Boris

    2014-01-01

    An iterative unpacking strategy consists of sequencing empirically-based theoretical developments so that at each step of theorizing one theory serves as an overarching conceptual framework, in which another theory, either existing or emerging, is embedded in order to elaborate on the chosen element(s) of the overarching theory. The strategy is…

  16. Iterative Refinement of a Binding Pocket Model: Active Computational Steering of Lead Optimization

    PubMed Central

    2012-01-01

    Computational approaches for binding affinity prediction are most frequently demonstrated through cross-validation within a series of molecules or through performance shown on a blinded test set. Here, we show how such a system performs in an iterative, temporal lead optimization exercise. A series of gyrase inhibitors with known synthetic order formed the set of molecules that could be selected for “synthesis.” Beginning with a small number of molecules, based only on structures and activities, a model was constructed. Compound selection was done computationally, each time making five selections based on confident predictions of high activity and five selections based on a quantitative measure of three-dimensional structural novelty. Compound selection was followed by model refinement using the new data. Iterative computational candidate selection produced rapid improvements in selected compound activity, and incorporation of explicitly novel compounds uncovered much more diverse active inhibitors than strategies lacking active novelty selection. PMID:23046104
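The two-pronged selection rule described above (confident high-activity picks plus explicitly novel picks) can be sketched generically. The descriptor vectors, the max-min distance novelty measure, and the function names below are illustrative assumptions, not the authors' actual three-dimensional structural novelty metric:

```python
import numpy as np

def select_batch(pool, predicted, library, k_active=5, k_novel=5):
    """One round of two-pronged compound selection: k_active picks with the
    highest predicted activity, plus k_novel picks that are maximally
    dissimilar (greedy max-min distance) to everything chosen so far.
    Toy sketch over generic descriptor vectors."""
    pool = list(pool)
    by_activity = sorted(pool, key=lambda i: -predicted[i])[:k_active]
    chosen = list(by_activity)
    rest = [i for i in pool if i not in chosen]
    for _ in range(min(k_novel, len(rest))):
        # greedy max-min: the candidate farthest from the current selection
        best = max(rest, key=lambda i: min(np.linalg.norm(library[i] - library[j])
                                           for j in chosen))
        chosen.append(best)
        rest.remove(best)
    return chosen
```

After each round, the newly measured activities are fed back into the model, mirroring the refine-select-synthesize loop of the abstract.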

  17. Fast polar decomposition of an arbitrary matrix

    NASA Technical Reports Server (NTRS)

    Higham, Nicholas J.; Schreiber, Robert S.

    1988-01-01

    The polar decomposition of an m x n matrix A of full rank, where m is greater than or equal to n, can be computed using a quadratically convergent algorithm. The algorithm is based on a Newton iteration involving a matrix inverse. With the use of a preliminary complete orthogonal decomposition the algorithm can be extended to arbitrary A. How to use the algorithm to compute the positive semi-definite square root of a Hermitian positive semi-definite matrix is described. A hybrid algorithm which adaptively switches from the matrix inversion based iteration to a matrix multiplication based iteration due to Kovarik, and to Bjorck and Bowie is formulated. The decision when to switch is made using a condition estimator. This matrix multiplication rich algorithm is shown to be more efficient on machines for which matrix multiplication can be executed 1.5 times faster than matrix inversion.
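The Newton iteration at the core of this abstract is compact enough to sketch. A minimal NumPy version for square nonsingular A (the paper's full algorithm also handles rank-deficient and rectangular A via a preliminary complete orthogonal decomposition, and switches adaptively to multiplication-rich iterations, both omitted here):

```python
import numpy as np

def polar_newton(a, tol=1e-12, max_iter=100):
    """Unitary polar factor of a square nonsingular matrix via the Newton
    iteration X_{k+1} = (X_k + X_k^{-H}) / 2, which converges quadratically
    to the unitary factor U; the Hermitian factor is then H = U^H A."""
    x = a.astype(complex)
    for _ in range(max_iter):
        x_next = 0.5 * (x + np.linalg.inv(x).conj().T)
        if np.linalg.norm(x_next - x, 'fro') <= tol * np.linalg.norm(x_next, 'fro'):
            x = x_next
            break
        x = x_next
    u = x
    h = u.conj().T @ a          # Hermitian positive semi-definite factor
    h = 0.5 * (h + h.conj().T)  # symmetrize against rounding error
    return u, h
```

Quadratic convergence means a handful of iterations usually suffices, but each one costs a matrix inversion, which motivates the hybrid switch to multiplication-based iterations described above.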

  18. Tailored and Integrated Web-Based Tools for Improving Psychosocial Outcomes of Cancer Patients: The DoTTI Development Framework

    PubMed Central

    Bryant, Jamie; Sanson-Fisher, Rob; Tzelepis, Flora; Henskens, Frans; Paul, Christine; Stevenson, William

    2014-01-01

    Background Effective communication with cancer patients and their families about their disease, treatment options, and possible outcomes may improve psychosocial outcomes. However, traditional approaches to providing information to patients, including verbal information and written booklets, have a number of shortcomings centered on their limited ability to meet patient preferences and literacy levels. New-generation Web-based technologies offer an innovative and pragmatic solution for overcoming these limitations by providing a platform for interactive information seeking, information sharing, and user-centered tailoring. Objective The primary goal of this paper is to discuss the advantages of comprehensive and iterative Web-based technologies for health information provision and propose a four-phase framework for the development of Web-based information tools. Methods The proposed framework draws on our experience of constructing a Web-based information tool for hematological cancer patients and their families. The framework is based on principles for the development and evaluation of complex interventions and draws on the Agile methodology of software programming that emphasizes collaboration and iteration throughout the development process. Results The DoTTI framework provides a model for a comprehensive and iterative approach to the development of Web-based informational tools for patients. The process involves 4 phases of development: (1) Design and development, (2) Testing early iterations, (3) Testing for effectiveness, and (4) Integration and implementation. At each step, stakeholders (including researchers, clinicians, consumers, and programmers) are engaged in consultations to review progress, provide feedback on versions of the Web-based tool, and based on feedback, determine the appropriate next steps in development. 
Conclusions This 4-phase framework is evidence-informed and consumer-centered and could be applied widely to develop Web-based programs for a diverse range of diseases. PMID:24641991

  19. Tailored and integrated Web-based tools for improving psychosocial outcomes of cancer patients: the DoTTI development framework.

    PubMed

    Smits, Rochelle; Bryant, Jamie; Sanson-Fisher, Rob; Tzelepis, Flora; Henskens, Frans; Paul, Christine; Stevenson, William

    2014-03-14

    Effective communication with cancer patients and their families about their disease, treatment options, and possible outcomes may improve psychosocial outcomes. However, traditional approaches to providing information to patients, including verbal information and written booklets, have a number of shortcomings centered on their limited ability to meet patient preferences and literacy levels. New-generation Web-based technologies offer an innovative and pragmatic solution for overcoming these limitations by providing a platform for interactive information seeking, information sharing, and user-centered tailoring. The primary goal of this paper is to discuss the advantages of comprehensive and iterative Web-based technologies for health information provision and propose a four-phase framework for the development of Web-based information tools. The proposed framework draws on our experience of constructing a Web-based information tool for hematological cancer patients and their families. The framework is based on principles for the development and evaluation of complex interventions and draws on the Agile methodology of software programming that emphasizes collaboration and iteration throughout the development process. The DoTTI framework provides a model for a comprehensive and iterative approach to the development of Web-based informational tools for patients. The process involves 4 phases of development: (1) Design and development, (2) Testing early iterations, (3) Testing for effectiveness, and (4) Integration and implementation. At each step, stakeholders (including researchers, clinicians, consumers, and programmers) are engaged in consultations to review progress, provide feedback on versions of the Web-based tool, and based on feedback, determine the appropriate next steps in development. This 4-phase framework is evidence-informed and consumer-centered and could be applied widely to develop Web-based programs for a diverse range of diseases.

  20. The ITER Neutral Beam Test Facility towards SPIDER operation

    NASA Astrophysics Data System (ADS)

    Toigo, V.; Dal Bello, S.; Gaio, E.; Luchetta, A.; Pasqualotto, R.; Zaccaria, P.; Bigi, M.; Chitarin, G.; Marcuzzi, D.; Pomaro, N.; Serianni, G.; Agostinetti, P.; Agostini, M.; Antoni, V.; Aprile, D.; Baltador, C.; Barbisan, M.; Battistella, M.; Boldrin, M.; Brombin, M.; Dalla Palma, M.; De Lorenzi, A.; Delogu, R.; De Muri, M.; Fellin, F.; Ferro, A.; Gambetta, G.; Grando, L.; Jain, P.; Maistrello, A.; Manduchi, G.; Marconato, N.; Pavei, M.; Peruzzo, S.; Pilan, N.; Pimazzoni, A.; Piovan, R.; Recchia, M.; Rizzolo, A.; Sartori, E.; Siragusa, M.; Spada, E.; Spagnolo, S.; Spolaore, M.; Taliercio, C.; Valente, M.; Veltri, P.; Zamengo, A.; Zaniol, B.; Zanotto, L.; Zaupa, M.; Boilson, D.; Graceffa, J.; Svensson, L.; Schunke, B.; Decamps, H.; Urbani, M.; Kushwah, M.; Chareyre, J.; Singh, M.; Bonicelli, T.; Agarici, G.; Garbuglia, A.; Masiello, A.; Paolucci, F.; Simon, M.; Bailly-Maitre, L.; Bragulat, E.; Gomez, G.; Gutierrez, D.; Mico, G.; Moreno, J.-F.; Pilard, V.; Chakraborty, A.; Baruah, U.; Rotti, C.; Patel, H.; Nagaraju, M. V.; Singh, N. P.; Patel, A.; Dhola, H.; Raval, B.; Fantz, U.; Fröschle, M.; Heinemann, B.; Kraus, W.; Nocentini, R.; Riedl, R.; Schiesko, L.; Wimmer, C.; Wünderlich, D.; Cavenago, M.; Croci, G.; Gorini, G.; Rebai, M.; Muraro, A.; Tardocchi, M.; Hemsworth, R.

    2017-08-01

SPIDER is one of two projects of the ITER Neutral Beam Test Facility under construction in Padova, Italy, at the Consorzio RFX premises. It will have a 100 keV beam source with a full-size prototype of the radiofrequency ion source for the ITER neutral beam injector (NBI) and, like the ITER diagnostic neutral beam, is designed to operate with pulse lengths of up to 3600 s. It features an ITER-like magnetic filter field configuration (for high negative-ion extraction) and caesium oven layout (for high negative-ion production), as well as a wide set of diagnostics. These features will allow reproduction of the ion source operation in ITER, which cannot be done in any other existing test facility. The realization of SPIDER is well advanced, and first operation is expected at the beginning of 2018, with the mission of achieving the ITER heating and diagnostic NBI ion source requirements and of improving its performance in terms of reliability and availability. This paper mainly focuses on the preparation of the first SPIDER operations: the integration and testing of SPIDER components, the completion and implementation of diagnostics and control, and the formulation of an operation and research plan based on a staged strategy.

  1. Rescheduling with iterative repair

    NASA Technical Reports Server (NTRS)

    Zweben, Monte; Davis, Eugene; Daun, Brian; Deale, Michael

    1992-01-01

    This paper presents a new approach to rescheduling called constraint-based iterative repair. This approach gives our system the ability to satisfy domain constraints, address optimization concerns, minimize perturbation to the original schedule, and produce modified schedules quickly. The system begins with an initial, flawed schedule and then iteratively repairs constraint violations until a conflict-free schedule is produced. In an empirical demonstration, we vary the importance of minimizing perturbation and report how fast the system is able to resolve conflicts in a given time bound. These experiments were performed within the domain of Space Shuttle ground processing.
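The repair loop is easy to illustrate on a toy single-machine schedule. Everything below (the overlap constraint, the shift-right repair move) is a simplified stand-in for the constraint types and repair heuristics of the actual system:

```python
def conflicts(schedule, durations):
    """Adjacent pairs of tasks that overlap in time (toy single-resource
    constraint); with starts sorted, adjacent checks suffice."""
    out = []
    items = sorted(schedule.items(), key=lambda kv: kv[1])
    for (t1, s1), (t2, s2) in zip(items, items[1:]):
        if s1 + durations[t1] > s2:
            out.append((t1, t2))
    return out

def repair(schedule, durations, max_iter=100):
    """Constraint-based iterative repair: start from a flawed schedule and
    locally fix one violation per iteration. The minimal shift keeps
    perturbation to the original schedule low."""
    sched = dict(schedule)
    for _ in range(max_iter):
        bad = conflicts(sched, durations)
        if not bad:
            return sched
        t1, t2 = bad[0]
        sched[t2] = sched[t1] + durations[t1]  # push the later task just past the earlier one
    return sched
```

A real system would score candidate moves against domain constraints and optimization criteria rather than always shifting right, but the loop structure (detect violations, repair, repeat until conflict-free) is the same.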

  2. Iterative Coupling of Two Different Enones by Nitromethane Using Bifunctional Thiourea Organocatalysts. Stereocontrolled Assembly of Cyclic and Acyclic Structures.

    PubMed

    Varga, Szilárd; Jakab, Gergely; Csámpai, Antal; Soós, Tibor

    2015-09-18

    An organocatalytic iterative assembly line has been developed in which nitromethane was sequentially coupled with two different enones using a combination of pseudoenantiomeric cinchona-based thiourea catalysts. Application of unsaturated aldehydes and ketones in the second step of the iterative sequence allows the construction of cyclic syn-ketols and acyclic compounds with multiple contiguous stereocenters. The combination of the multifunctional substrates and ambident electrophiles rendered some organocatalytic transformations possible that have not yet been realized in bifunctional noncovalent organocatalysis.

  3. Sinogram-based adaptive iterative reconstruction for sparse view x-ray computed tomography

    NASA Astrophysics Data System (ADS)

    Trinca, D.; Zhong, Y.; Wang, Y.-Z.; Mamyrbayev, T.; Libin, E.

    2016-10-01

With the availability of more powerful computing processors, iterative reconstruction algorithms have recently been successfully implemented as an approach to achieving significant dose reduction in X-ray CT. In this paper, we propose an adaptive iterative reconstruction algorithm for X-ray CT that is shown to provide results comparable to those obtained by proprietary algorithms, in terms of both reconstruction accuracy and execution time. The proposed algorithm is provided free to the scientific community, for regular use and for possible further optimization.
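As a concrete point of reference for what such an iterative scheme looks like, here is a basic SIRT update on a toy linear system y = A x. This is a generic textbook iteration, not the adaptive algorithm proposed in the paper:

```python
import numpy as np

def sirt(a, y, n_iter=200):
    """Basic SIRT iteration x <- x + C A^T R (y - A x), where R and C are
    inverse row-sum and column-sum normalizations of the (nonnegative)
    system matrix. A minimal stand-in for iterative CT reconstruction."""
    row = 1.0 / np.maximum(a.sum(axis=1), 1e-12)   # R: inverse row sums
    col = 1.0 / np.maximum(a.sum(axis=0), 1e-12)   # C: inverse column sums
    x = np.zeros(a.shape[1])
    for _ in range(n_iter):
        x = x + col * (a.T @ (row * (y - a @ x)))  # backproject the residual
    return x
```

In CT, `a` would be the forward-projection matrix and `y` the sinogram; sparse-view imaging simply removes rows from `a`, which iterative methods tolerate far better than analytic FBP.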

  4. Improvements to image quality using hybrid and model-based iterative reconstructions: a phantom study.

    PubMed

    Aurumskjöld, Marie-Louise; Ydström, Kristina; Tingberg, Anders; Söderberg, Marcus

    2017-01-01

The number of computed tomography (CT) examinations is increasing, leading to an increase in total patient exposure; it is therefore important to optimize CT imaging conditions in order to reduce the radiation dose. The introduction of iterative reconstruction methods has enabled an improvement in image quality and a reduction in radiation dose. The aim of this study was to investigate how image quality depends on the reconstruction method and to discuss the patient dose reduction made possible by hybrid and model-based iterative reconstruction. An image quality phantom (Catphan® 600) and an anthropomorphic torso phantom were examined on a Philips Brilliance iCT. Image quality was evaluated in terms of CT numbers, noise, noise power spectra (NPS), contrast-to-noise ratio (CNR), low-contrast resolution, and spatial resolution for different scan parameters and dose levels. The images were reconstructed using filtered back projection (FBP) and different settings of the hybrid (iDose4) and model-based (IMR) iterative reconstruction methods. iDose4 decreased the noise by 15-45% compared with FBP, depending on the iDose4 level. IMR reduced the noise even further, by 60-75% compared to FBP. The results are independent of dose. The NPS showed changes in the noise distribution for the different reconstruction methods. The low-contrast resolution and CNR were improved with iDose4, and the improvement was even greater with IMR. There is great potential to reduce noise, and thereby improve image quality, by using hybrid or, in particular, model-based iterative reconstruction methods, or to lower the radiation dose while maintaining image quality. © The Foundation Acta Radiologica 2016.

  5. Optimism as a Prior Belief about the Probability of Future Reward

    PubMed Central

    Kalra, Aditi; Seriès, Peggy

    2014-01-01

Optimists hold positive a priori beliefs about the future. In Bayesian statistical theory, a priori beliefs can be overcome by experience. However, optimistic beliefs can at times appear surprisingly resistant to evidence, suggesting that optimism might also influence how new information is selected and learned. Here, we use a novel Pavlovian conditioning task, embedded in a normative framework, to directly assess how trait optimism, as classically measured using self-report questionnaires, influences choices between visual targets as learning about their association with reward progresses. We find that trait optimism relates to an a priori belief about the likelihood of rewards, but not losses, in our task. Critically, this positive belief behaves like a probabilistic prior, i.e., its influence reduces with increasing experience. Contrary to findings in the literature on unrealistic optimism and self-beliefs, it does not appear to influence the iterative learning process directly. PMID:24853098
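The "probabilistic prior" behaviour described above — a strong initial belief whose influence fades with evidence — is exactly what Bayesian conjugate updating predicts. A minimal Beta-Bernoulli sketch (the specific prior counts are illustrative, not fitted to the study's data):

```python
def posterior_mean(prior_a, prior_b, rewards, trials):
    """Beta-Bernoulli updating of a belief about reward probability.
    An optimistic prior Beta(a, b) with a > b pulls early estimates up,
    but its pull fades as observed trials accumulate."""
    return (prior_a + rewards) / (prior_a + prior_b + trials)
```

With an optimistic prior Beta(8, 2) (prior mean 0.8) and a true reward rate of 0.3, the estimate after 10 trials is still inflated, while after 1000 trials it has essentially converged to the empirical rate.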

  6. A Novel Iterative Scheme for the Very Fast and Accurate Solution of Non-LTE Radiative Transfer Problems

    NASA Astrophysics Data System (ADS)

    Trujillo Bueno, J.; Fabiani Bendicho, P.

    1995-12-01

Iterative schemes based on Gauss-Seidel (G-S) and optimal successive over-relaxation (SOR) iteration are shown to provide a dramatic increase in the speed with which non-LTE radiation transfer (RT) problems can be solved. The convergence rates of these new RT methods are identical to those of upper triangular nonlocal approximate operator splitting techniques, but the computing time per iteration and the memory requirements are similar to those of a local operator splitting method. In addition to these properties, both methods are particularly suitable for multidimensional geometry, since they neither require the actual construction of nonlocal approximate operators nor the application of any matrix inversion procedure. Compared with the currently used Jacobi technique, which is based on the optimal local approximate operator (see Olson, Auer, & Buchler 1986), the G-S method presented here is faster by a factor of 2. It gives excellent smoothing of the high-frequency error components, which makes it the iterative scheme of choice for multigrid radiative transfer. This G-S method can also be suitably combined with standard acceleration techniques to achieve even higher performance. Although the convergence rate of the optimal SOR scheme developed here for solving non-LTE RT problems is much higher than that of G-S, the computing time per iteration is also minimal, i.e., virtually identical to that of a local operator splitting method. While the conventional optimal local operator scheme provides the converged solution after a total CPU time (measured in arbitrary units) approximately equal to the number n of points per decade of optical depth, the time needed by this new method based on the optimal SOR iterations is only √n/(2√2).
Contrary to what happens with the local operator splitting strategy currently in use, these novel methods remain effective even under extreme non-LTE conditions in very fine grids.
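The structural difference from Jacobi that the abstract exploits — using freshly updated values within a sweep — is easiest to see on a plain linear system. A generic Gauss-Seidel/SOR sketch (the radiative-transfer operator itself is not reproduced here):

```python
import numpy as np

def sor_solve(a, b, omega=1.0, tol=1e-10, max_iter=10_000):
    """Successive over-relaxation for A x = b; omega = 1 gives Gauss-Seidel.
    Each sweep uses already-updated components immediately, which is what
    gives G-S/SOR their smoothing and speed advantages over Jacobi."""
    n = len(b)
    x = np.zeros(n)
    for _ in range(max_iter):
        x_old = x.copy()
        for i in range(n):
            # mix of new values x[:i] and old values x_old[i+1:]
            sigma = a[i, :i] @ x[:i] + a[i, i+1:] @ x_old[i+1:]
            x[i] = (1 - omega) * x_old[i] + omega * (b[i] - sigma) / a[i, i]
        if np.linalg.norm(x - x_old, np.inf) < tol:
            break
    return x
```

An "optimal" omega > 1, chosen from the spectral properties of the operator, is what accelerates SOR beyond Gauss-Seidel, mirroring the √n scaling claimed above.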

  7. Evaluating the performance of two neutron spectrum unfolding codes based on iterative procedures and artificial neural networks

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ortiz-Rodriguez, J. M.; Reyes Alfaro, A.; Reyes Haro, A.

In this work the performance of two neutron spectrum unfolding codes based on iterative procedures and artificial neural networks is evaluated. The first code, based on traditional iterative procedures and called Neutron spectrometry and dosimetry from the Universidad Autonoma de Zacatecas (NSDUAZ), uses the SPUNIT iterative algorithm and was designed to unfold the neutron spectrum and calculate 15 dosimetric quantities and the readings of 7 IAEA survey meters. The main feature of this code is the automated selection of the initial guess spectrum through a compendium of neutron spectra compiled by the IAEA. The second code, known as Neutron spectrometry and dosimetry with artificial neural networks (NSDann), was designed using neural network technology. The artificial intelligence approach of a neural net does not solve mathematical equations: using the knowledge stored in the synaptic weights of a properly trained neural net, the code is capable of unfolding the neutron spectrum and simultaneously calculating the 15 dosimetric quantities, requiring as input only the count rates measured with a Bonner sphere system. The two codes are similar in that they follow the same easy, intuitive user philosophy and were designed with a graphical interface in the LabVIEW programming environment; both unfold the neutron spectrum expressed in 60 energy bins, calculate 15 dosimetric quantities and generate a full report in HTML format. They differ in that NSDUAZ was designed using classical iterative approaches and needs an initial guess spectrum to initiate the iterative procedure; in NSDUAZ, a programming routine calculates the 7 IAEA survey-meter readings using fluence-dose conversion coefficients. NSDann instead uses artificial neural networks to solve the ill-conditioned equation system of the neutron spectrometry problem through the synaptic weights of a properly trained network.
Contrary to iterative procedures, with the neural-network approach it is possible to reduce the number of count rates used to unfold the neutron spectrum. To evaluate these codes, a package called the Neutron spectrometry and dosimetry computer tool was designed; the results obtained with this package are shown. The codes mentioned here are freely available upon request to the authors.
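The iterative side of the comparison can be illustrated with a generic multiplicative unfolding update for count rates c = R φ. This MLEM-style sketch shares the ingredients named above (response matrix, initial guess spectrum, iterative correction) but is not the SPUNIT algorithm itself:

```python
import numpy as np

def unfold(response, counts, guess, n_iter=500):
    """MLEM-style multiplicative unfolding of a spectrum phi from count
    rates c = R @ phi. The update rescales each bin by how well the
    current estimate predicts the measured counts; positivity of the
    spectrum is preserved automatically."""
    phi = np.asarray(guess, dtype=float).copy()
    norm = response.sum(axis=0)            # R^T 1, the column sums
    for _ in range(n_iter):
        est = response @ phi               # predicted count rates
        phi *= (response.T @ (counts / np.maximum(est, 1e-30))) / norm
    return phi
```

The role of the initial guess is visible here: with few detectors and many energy bins the system is ill-conditioned, and the iteration refines the guess only in the directions the response matrix constrains.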

  8. Evaluating the performance of two neutron spectrum unfolding codes based on iterative procedures and artificial neural networks

    NASA Astrophysics Data System (ADS)

    Ortiz-Rodríguez, J. M.; Reyes Alfaro, A.; Reyes Haro, A.; Solís Sánches, L. O.; Miranda, R. Castañeda; Cervantes Viramontes, J. M.; Vega-Carrillo, H. R.

    2013-07-01

In this work the performance of two neutron spectrum unfolding codes based on iterative procedures and artificial neural networks is evaluated. The first code, based on traditional iterative procedures and called Neutron spectrometry and dosimetry from the Universidad Autonoma de Zacatecas (NSDUAZ), uses the SPUNIT iterative algorithm and was designed to unfold the neutron spectrum and calculate 15 dosimetric quantities and the readings of 7 IAEA survey meters. The main feature of this code is the automated selection of the initial guess spectrum through a compendium of neutron spectra compiled by the IAEA. The second code, known as Neutron spectrometry and dosimetry with artificial neural networks (NSDann), was designed using neural network technology. The artificial intelligence approach of a neural net does not solve mathematical equations: using the knowledge stored in the synaptic weights of a properly trained neural net, the code is capable of unfolding the neutron spectrum and simultaneously calculating the 15 dosimetric quantities, requiring as input only the count rates measured with a Bonner sphere system. The two codes are similar in that they follow the same easy, intuitive user philosophy and were designed with a graphical interface in the LabVIEW programming environment; both unfold the neutron spectrum expressed in 60 energy bins, calculate 15 dosimetric quantities and generate a full report in HTML format. They differ in that NSDUAZ was designed using classical iterative approaches and needs an initial guess spectrum to initiate the iterative procedure; in NSDUAZ, a programming routine calculates the 7 IAEA survey-meter readings using fluence-dose conversion coefficients. NSDann instead uses artificial neural networks to solve the ill-conditioned equation system of the neutron spectrometry problem through the synaptic weights of a properly trained network.
Contrary to iterative procedures, with the neural-network approach it is possible to reduce the number of count rates used to unfold the neutron spectrum. To evaluate these codes, a package called the Neutron spectrometry and dosimetry computer tool was designed; the results obtained with this package are shown. The codes mentioned here are freely available upon request to the authors.

  9. Radiation dose reduction in medical x-ray CT via Fourier-based iterative reconstruction

    DOE Office of Scientific and Technical Information (OSTI.GOV)

Fahimian, Benjamin P.; Zhao, Yunzhe; Huang, Zhifeng

Purpose: A Fourier-based iterative reconstruction technique, termed Equally Sloped Tomography (EST), is developed in conjunction with advanced mathematical regularization to investigate radiation dose reduction in x-ray CT. The method is experimentally implemented on fan-beam CT and evaluated as a function of imaging dose on a series of image quality phantoms and anonymous pediatric patient data sets. Numerical simulation experiments are also performed to explore the extension of EST to helical cone-beam geometry. Methods: EST is a Fourier-based iterative algorithm, which iterates back and forth between real and Fourier space utilizing the algebraically exact pseudopolar fast Fourier transform (PPFFT). In each iteration, physical constraints and mathematical regularization are applied in real space, while the measured data are enforced in Fourier space. The algorithm is automatically terminated when a proposed termination criterion is met. Experimentally, fan-beam projections were acquired by the Siemens z-flying focal spot technology, and subsequently interleaved and rebinned to a pseudopolar grid. Image quality phantoms were scanned at systematically varied mAs settings, reconstructed by EST and conventional reconstruction methods such as filtered back projection (FBP), and quantified using metrics including resolution, signal-to-noise ratios (SNRs), and contrast-to-noise ratios (CNRs). Pediatric data sets were reconstructed at their original acquisition settings and additionally simulated to lower dose settings for comparison and evaluation of the potential for radiation dose reduction. Numerical experiments were conducted to quantify EST and other iterative methods in terms of image quality and computation time. The extension of EST to helical cone-beam CT was implemented by using the advanced single-slice rebinning (ASSR) method. 
Results: Based on the phantom and pediatric patient fan-beam CT data, it is demonstrated that EST reconstructions with the lowest scanner flux setting of 39 mAs produce comparable image quality, resolution, and contrast relative to FBP with the 140 mAs flux setting. Compared to the algebraic reconstruction technique and the expectation maximization statistical reconstruction algorithm, a significant reduction in computation time is achieved with EST. Finally, numerical experiments on helical cone-beam CT data suggest that the combination of EST and ASSR produces reconstructions with higher image quality and lower noise than the Feldkamp, Davis, and Kress (FDK) method and the conventional ASSR approach. Conclusions: A Fourier-based iterative method has been applied to the reconstruction of fan-beam CT data with reduced x-ray fluence. This method incorporates advantageous features of both real-space and Fourier-space iterative schemes: using a fast and algebraically exact method to calculate the forward projection, enforcing the measured data in Fourier space, and applying physical constraints and flexible regularization in real space. Our results suggest that EST can be utilized for radiation dose reduction in x-ray CT via the readily implementable technique of lowering mAs settings. Numerical experiments further indicate that EST requires less computation time than several other iterative algorithms and can, in principle, be extended to helical cone-beam geometry in combination with the ASSR method.
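The back-and-forth between Fourier-space data enforcement and real-space constraints can be sketched in one dimension with a Gerchberg-Papoulis-style loop. This toy uses the ordinary FFT instead of the pseudopolar transform, and simple support/nonnegativity constraints instead of the paper's regularization, so it is illustrative only:

```python
import numpy as np

def fourier_iterative_recon(f_meas, known, support, n_iter=200):
    """Alternate between enforcing measured Fourier samples (data
    consistency) and real-space physical constraints (support and
    nonnegativity), the same structural loop an EST-like method uses."""
    n = len(f_meas)
    x = np.zeros(n)
    for _ in range(n_iter):
        spec = np.fft.fft(x)
        spec[known] = f_meas[known]          # enforce measured data in Fourier space
        x = np.real(np.fft.ifft(spec))
        x[~support] = 0.0                    # support constraint in real space
        x = np.maximum(x, 0.0)               # physical nonnegativity
    return x
```

When the support constraint is informative, the loop fills in the unmeasured frequencies, which is the mechanism that lets such methods tolerate reduced (lower-fluence or sparser) measurements.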

  10. Radiation dose reduction in medical x-ray CT via Fourier-based iterative reconstruction.

    PubMed

    Fahimian, Benjamin P; Zhao, Yunzhe; Huang, Zhifeng; Fung, Russell; Mao, Yu; Zhu, Chun; Khatonabadi, Maryam; DeMarco, John J; Osher, Stanley J; McNitt-Gray, Michael F; Miao, Jianwei

    2013-03-01

    A Fourier-based iterative reconstruction technique, termed Equally Sloped Tomography (EST), is developed in conjunction with advanced mathematical regularization to investigate radiation dose reduction in x-ray CT. The method is experimentally implemented on fan-beam CT and evaluated as a function of imaging dose on a series of image quality phantoms and anonymous pediatric patient data sets. Numerical simulation experiments are also performed to explore the extension of EST to helical cone-beam geometry. EST is a Fourier based iterative algorithm, which iterates back and forth between real and Fourier space utilizing the algebraically exact pseudopolar fast Fourier transform (PPFFT). In each iteration, physical constraints and mathematical regularization are applied in real space, while the measured data are enforced in Fourier space. The algorithm is automatically terminated when a proposed termination criterion is met. Experimentally, fan-beam projections were acquired by the Siemens z-flying focal spot technology, and subsequently interleaved and rebinned to a pseudopolar grid. Image quality phantoms were scanned at systematically varied mAs settings, reconstructed by EST and conventional reconstruction methods such as filtered back projection (FBP), and quantified using metrics including resolution, signal-to-noise ratios (SNRs), and contrast-to-noise ratios (CNRs). Pediatric data sets were reconstructed at their original acquisition settings and additionally simulated to lower dose settings for comparison and evaluation of the potential for radiation dose reduction. Numerical experiments were conducted to quantify EST and other iterative methods in terms of image quality and computation time. The extension of EST to helical cone-beam CT was implemented by using the advanced single-slice rebinning (ASSR) method. 
Based on the phantom and pediatric patient fan-beam CT data, it is demonstrated that EST reconstructions with the lowest scanner flux setting of 39 mAs produce comparable image quality, resolution, and contrast relative to FBP with the 140 mAs flux setting. Compared to the algebraic reconstruction technique and the expectation maximization statistical reconstruction algorithm, a significant reduction in computation time is achieved with EST. Finally, numerical experiments on helical cone-beam CT data suggest that the combination of EST and ASSR produces reconstructions with higher image quality and lower noise than the Feldkamp-Davis-Kress (FDK) method and the conventional ASSR approach. A Fourier-based iterative method has been applied to the reconstruction of fan-beam CT data with reduced x-ray fluence. This method incorporates advantageous features in both real and Fourier space iterative schemes: using a fast and algebraically exact method to calculate forward projection, enforcing the measured data in Fourier space, and applying physical constraints and flexible regularization in real space. Our results suggest that EST can be utilized for radiation dose reduction in x-ray CT via the readily implementable technique of lowering mAs settings. Numerical experiments further indicate that EST requires less computation time than several other iterative algorithms and can, in principle, be extended to helical cone-beam geometry in combination with the ASSR method.

  11. Radiation dose reduction in medical x-ray CT via Fourier-based iterative reconstruction

    PubMed Central

    Fahimian, Benjamin P.; Zhao, Yunzhe; Huang, Zhifeng; Fung, Russell; Mao, Yu; Zhu, Chun; Khatonabadi, Maryam; DeMarco, John J.; Osher, Stanley J.; McNitt-Gray, Michael F.; Miao, Jianwei

    2013-01-01

    Purpose: A Fourier-based iterative reconstruction technique, termed Equally Sloped Tomography (EST), is developed in conjunction with advanced mathematical regularization to investigate radiation dose reduction in x-ray CT. The method is experimentally implemented on fan-beam CT and evaluated as a function of imaging dose on a series of image quality phantoms and anonymous pediatric patient data sets. Numerical simulation experiments are also performed to explore the extension of EST to helical cone-beam geometry. Methods: EST is a Fourier based iterative algorithm, which iterates back and forth between real and Fourier space utilizing the algebraically exact pseudopolar fast Fourier transform (PPFFT). In each iteration, physical constraints and mathematical regularization are applied in real space, while the measured data are enforced in Fourier space. The algorithm is automatically terminated when a proposed termination criterion is met. Experimentally, fan-beam projections were acquired by the Siemens z-flying focal spot technology, and subsequently interleaved and rebinned to a pseudopolar grid. Image quality phantoms were scanned at systematically varied mAs settings, reconstructed by EST and conventional reconstruction methods such as filtered back projection (FBP), and quantified using metrics including resolution, signal-to-noise ratios (SNRs), and contrast-to-noise ratios (CNRs). Pediatric data sets were reconstructed at their original acquisition settings and additionally simulated to lower dose settings for comparison and evaluation of the potential for radiation dose reduction. Numerical experiments were conducted to quantify EST and other iterative methods in terms of image quality and computation time. The extension of EST to helical cone-beam CT was implemented by using the advanced single-slice rebinning (ASSR) method. 
Results: Based on the phantom and pediatric patient fan-beam CT data, it is demonstrated that EST reconstructions with the lowest scanner flux setting of 39 mAs produce comparable image quality, resolution, and contrast relative to FBP with the 140 mAs flux setting. Compared to the algebraic reconstruction technique and the expectation maximization statistical reconstruction algorithm, a significant reduction in computation time is achieved with EST. Finally, numerical experiments on helical cone-beam CT data suggest that the combination of EST and ASSR produces reconstructions with higher image quality and lower noise than the Feldkamp-Davis-Kress (FDK) method and the conventional ASSR approach. Conclusions: A Fourier-based iterative method has been applied to the reconstruction of fan-beam CT data with reduced x-ray fluence. This method incorporates advantageous features in both real and Fourier space iterative schemes: using a fast and algebraically exact method to calculate forward projection, enforcing the measured data in Fourier space, and applying physical constraints and flexible regularization in real space. Our results suggest that EST can be utilized for radiation dose reduction in x-ray CT via the readily implementable technique of lowering mAs settings. Numerical experiments further indicate that EST requires less computation time than several other iterative algorithms and can, in principle, be extended to helical cone-beam geometry in combination with the ASSR method. PMID:23464329

  12. A photoelastic-modulator-based motional Stark effect polarimeter for ITER that is insensitive to polarized broadband background reflections.

    PubMed

    Thorman, A; Michael, C; De Bock, M; Howard, J

    2016-07-01

    A motional Stark effect polarimeter insensitive to polarized broadband light is proposed. Partially polarized background light is anticipated to be a significant source of systematic error for the ITER polarimeter. The proposed polarimeter is based on the standard dual photoelastic modulator approach, but with the introduction of a birefringent delay plate, it generates a sinusoidal spectral filter instead of the usual narrowband filter. The period of the filter is chosen to match the spacing of the orthogonally polarized Stark effect components, thereby increasing the effective signal level, but resulting in the destructive interference of the broadband polarized light. The theoretical response of the system to an ITER like spectrum is calculated and the broadband polarization tolerance is verified experimentally.

  13. Overview of the negative ion based neutral beam injectors for ITER

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schunke, B., E-mail: email@none.edu; Boilson, D.; Chareyre, J.

    2016-02-15

    The ITER baseline foresees 2 Heating Neutral Beams (HNBs) based on 1 MeV 40 A D{sup −} negative ion accelerators, each capable of delivering 16.7 MW of deuterium atoms to the DT plasma, with an optional 3rd HNB injector foreseen as a possible upgrade. In addition, a dedicated diagnostic neutral beam will be injecting ≈22 A of H{sup 0} at 100 keV as the probe beam for charge exchange recombination spectroscopy. The integration of the injectors into the ITER plant is nearly finished, necessitating only refinements. A large number of components have passed the final design stage, manufacturing has started, and the essential test beds for the prototype route chosen will soon be ready to start.

  14. Nonlinear BCJR equalizer for suppression of intrachannel nonlinearities in 40 Gb/s optical communications systems.

    PubMed

    Djordjevic, Ivan B; Vasic, Bane

    2006-05-29

    A maximum a posteriori probability (MAP) symbol decoding supplemented with iterative decoding is proposed as an effective means for suppression of intrachannel nonlinearities. The MAP detector, based on the Bahl-Cocke-Jelinek-Raviv algorithm, operates on the channel trellis, a dynamical model of intersymbol interference, and provides soft-decision outputs processed further in an iterative decoder. A dramatic performance improvement is demonstrated. The main reason is that the conventional maximum-likelihood sequence detector based on the Viterbi algorithm provides hard-decision outputs only, hence preventing soft iterative decoding. The proposed scheme operates very well in the presence of strong intrachannel intersymbol interference, when other advanced forward error correction schemes fail, and it is also suitable for 40 Gb/s upgrades over existing 10 Gb/s infrastructure.

  15. Recent advances in Lanczos-based iterative methods for nonsymmetric linear systems

    NASA Technical Reports Server (NTRS)

    Freund, Roland W.; Golub, Gene H.; Nachtigal, Noel M.

    1992-01-01

    In recent years, there has been a true revival of the nonsymmetric Lanczos method. On the one hand, the possible breakdowns in the classical algorithm are now better understood, and so-called look-ahead variants of the Lanczos process have been developed, which remedy this problem. On the other hand, various new Lanczos-based iterative schemes for solving nonsymmetric linear systems have been proposed. This paper gives a survey of some of these recent developments.

  16. Iterative reconstruction methods in atmospheric tomography: FEWHA, Kaczmarz and Gradient-based algorithm

    NASA Astrophysics Data System (ADS)

    Ramlau, R.; Saxenhuber, D.; Yudytskiy, M.

    2014-07-01

    The problem of atmospheric tomography arises in ground-based telescope imaging with adaptive optics (AO), where one aims to compensate in real-time for the rapidly changing optical distortions in the atmosphere. Many of these systems depend on a sufficient reconstruction of the turbulence profiles in order to obtain a good correction. Due to steadily growing telescope sizes, there is a strong increase in the computational load for atmospheric reconstruction with current methods, first and foremost the MVM. In this paper we present and compare three novel iterative reconstruction methods. The first iterative approach is the Finite Element-Wavelet Hybrid Algorithm (FEWHA), which combines wavelet-based techniques and conjugate gradient schemes to efficiently and accurately tackle the problem of atmospheric reconstruction. The method is extremely fast, highly flexible and yields superior quality. Another novel iterative reconstruction algorithm is the three-step approach, which decouples the problem into the reconstruction of the incoming wavefronts, the reconstruction of the turbulent layers (atmospheric tomography) and the computation of the best mirror correction (fitting step). For the atmospheric tomography problem within the three-step approach, the Kaczmarz algorithm and the Gradient-based method have been developed. We present a detailed comparison of our reconstructors both in terms of quality and speed performance in the context of a Multi-Object Adaptive Optics (MOAO) system for the E-ELT setting on OCTOPUS, the ESO end-to-end simulation tool.
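Of the methods named above, the Kaczmarz algorithm is the simplest to sketch: it cycles through the rows of the linear system and projects the current iterate onto each row's hyperplane in turn. A minimal sketch on an invented consistent 2x2 system, not the adaptive-optics problem itself:

```python
def kaczmarz(A, b, sweeps=200):
    """Cyclic Kaczmarz: project the iterate onto each row's hyperplane in turn."""
    x = [0.0] * len(A[0])
    for _ in range(sweeps):
        for row, rhs in zip(A, b):
            # move x along the row direction so that row . x == rhs
            residual = rhs - sum(r * v for r, v in zip(row, x))
            scale = residual / sum(r * r for r in row)
            x = [v + scale * r for v, r in zip(x, row)]
    return x

# Invented consistent system with exact solution (1, 2)
A = [[3.0, 1.0], [1.0, 2.0]]
b = [5.0, 5.0]
x = kaczmarz(A, b)
```

For a consistent system the iterates converge to a solution; the convergence rate depends on the angles between the row hyperplanes.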

  17. Regularization Parameter Selection for Nonlinear Iterative Image Restoration and MRI Reconstruction Using GCV and SURE-Based Methods

    PubMed Central

    Ramani, Sathish; Liu, Zhihao; Rosen, Jeffrey; Nielsen, Jon-Fredrik; Fessler, Jeffrey A.

    2012-01-01

    Regularized iterative reconstruction algorithms for imaging inverse problems require selection of appropriate regularization parameter values. We focus on the challenging problem of tuning regularization parameters for nonlinear algorithms for the case of additive (possibly complex) Gaussian noise. Generalized cross-validation (GCV) and (weighted) mean-squared error (MSE) approaches (based on Stein's Unbiased Risk Estimate— SURE) need the Jacobian matrix of the nonlinear reconstruction operator (representative of the iterative algorithm) with respect to the data. We derive the desired Jacobian matrix for two types of nonlinear iterative algorithms: a fast variant of the standard iterative reweighted least-squares method and the contemporary split-Bregman algorithm, both of which can accommodate a wide variety of analysis- and synthesis-type regularizers. The proposed approach iteratively computes two weighted SURE-type measures: Predicted-SURE and Projected-SURE (that require knowledge of noise variance σ2), and GCV (that does not need σ2) for these algorithms. We apply the methods to image restoration and to magnetic resonance image (MRI) reconstruction using total variation (TV) and an analysis-type ℓ1-regularization. We demonstrate through simulations and experiments with real data that minimizing Predicted-SURE and Projected-SURE consistently lead to near-MSE-optimal reconstructions. We also observed that minimizing GCV yields reconstruction results that are near-MSE-optimal for image restoration and slightly sub-optimal for MRI. Theoretical derivations in this work related to Jacobian matrix evaluations can be extended, in principle, to other types of regularizers and reconstruction algorithms. PMID:22531764

  18. Studies on Flat Sandwich-type Self-Powered Detectors for Flux Measurements in ITER Test Blanket Modules

    NASA Astrophysics Data System (ADS)

    Raj, Prasoon; Angelone, Maurizio; Döring, Toralf; Eberhardt, Klaus; Fischer, Ulrich; Klix, Axel; Schwengner, Ronald

    2018-01-01

    Neutron and gamma flux measurements in designated positions in the test blanket modules (TBM) of ITER will be important tasks during ITER's campaigns. As part of the ongoing task on development of nuclear instrumentation for application in European ITER TBMs, experimental investigations on self-powered detectors (SPD) are undertaken. This paper reports the findings of neutron and photon irradiation tests performed with a test SPD in a flat sandwich-like geometry. Whereas both neutrons and gammas can be detected with appropriate optimization of geometries, materials and sizes of the components, the present sandwich-like design is more sensitive to gammas than to 14 MeV neutrons. The range of SPD current signals achievable under TBM conditions is predicted based on the SPD sensitivities measured in this work.

  19. Tomography by iterative convolution - Empirical study and application to interferometry

    NASA Technical Reports Server (NTRS)

    Vest, C. M.; Prikryl, I.

    1984-01-01

    An algorithm for computer tomography has been developed that is applicable to reconstruction from data having incomplete projections because an opaque object blocks some of the probing radiation as it passes through the object field. The algorithm is based on iteration between the object domain and the projection (Radon transform) domain. Reconstructions are computed during each iteration by the well-known convolution method. Although it is demonstrated that this algorithm does not converge, an empirically justified criterion for terminating the iteration when the most accurate estimate has been computed is presented. The algorithm has been studied by using it to reconstruct several different object fields with several different opaque regions. It also has been used to reconstruct aerodynamic density fields from interferometric data recorded in wind tunnel tests.

  20. Natural selection of memory-one strategies for the iterated prisoner's dilemma.

    PubMed

    Kraines, D P; Kraines, V Y

    2000-04-21

    In the iterated Prisoner's Dilemma, mutually cooperative behavior can become established through Darwinian natural selection. In simulated interactions of stochastic memory-one strategies for the Iterated Prisoner's Dilemma, Nowak and Sigmund discovered that cooperative agents using a Pavlov (Win-Stay Lose-Switch) type strategy eventually dominate a random population. This emergence follows more directly from a deterministic dynamical system based on differential reproductive success or natural selection. When restricted to an environment of memory-one agents interacting in iterated Prisoner's Dilemma games with a 1% noise level, the Pavlov agent is the only cooperative strategy and one of very few others that cannot be invaded by a similar strategy. Pavlov agents are trusting but no suckers. They will exploit weakness but repent if punished for cheating. Copyright 2000 Academic Press.
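The Pavlov (Win-Stay Lose-Switch) behavior described above is easy to reproduce. A minimal sketch, assuming the standard payoff values T=5, R=3, P=1, S=0 and noise-free play (the paper's analysis adds a 1% noise level, omitted here):

```python
# Standard Prisoner's Dilemma payoffs: R reward, T temptation, S sucker, P punishment
R, T, S, P = 3, 5, 0, 1
PAYOFF = {('C', 'C'): R, ('C', 'D'): S, ('D', 'C'): T, ('D', 'D'): P}

def pavlov(my_last, their_last):
    """Win-Stay Lose-Switch: cooperate iff both sides played the same move last round."""
    if my_last is None:
        return 'C'
    return 'C' if my_last == their_last else 'D'

def all_defect(my_last, their_last):
    return 'D'

def play(strat_a, strat_b, rounds=100):
    """Average per-round scores of two memory-one strategies (no noise)."""
    a_last = b_last = None
    score_a = score_b = 0
    for _ in range(rounds):
        a, b = strat_a(a_last, b_last), strat_b(b_last, a_last)
        score_a += PAYOFF[(a, b)]
        score_b += PAYOFF[(b, a)]
        a_last, b_last = a, b
    return score_a / rounds, score_b / rounds
```

Against itself Pavlov locks into mutual cooperation; against an unconditional defector it alternates between cooperating and defecting, repenting after each punished exploitation.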

  1. Active spectroscopic measurements using the ITER diagnostic system.

    PubMed

    Thomas, D M; Counsell, G; Johnson, D; Vasu, P; Zvonkov, A

    2010-10-01

    Active (beam-based) spectroscopic measurements are intended to provide a number of crucial parameters for the ITER device being built in Cadarache, France. These measurements include the determination of impurity ion temperatures, absolute densities, and velocity profiles, as well as the determination of the plasma current density profile. Because ITER will be the first experiment to study long timescale (∼1 h) fusion burn plasmas, of particular interest is the ability to study the profile of the thermalized helium ash resulting from the slowing down and confinement of the fusion alphas. These measurements will utilize both the 1 MeV heating neutral beams and a dedicated 100 keV hydrogen diagnostic neutral beam. A number of separate instruments are being designed and built by several of the ITER partners to meet the different spectroscopic measurement needs and to provide the maximum physics information. In this paper, we describe the planned measurements, the intended diagnostic ensemble, and we will discuss specific physics and engineering challenges for these measurements in ITER.

  2. Steady state numerical solutions for determining the location of MEMS on projectile

    NASA Astrophysics Data System (ADS)

    Abiprayu, K.; Abdigusna, M. F. F.; Gunawan, P. H.

    2018-03-01

    This paper compares numerical solutions of the steady and unsteady state heat distribution models on a projectile. Here, the best location for installing MEMS on the projectile, based on the surface temperature, is investigated. Two iterative numerical methods, Jacobi and Gauss-Seidel, are used to solve the steady state heat distribution model on the projectile. The results from Jacobi and Gauss-Seidel are identical, but the iteration costs of the two methods differ. Jacobi's method requires 350 iterations, whereas Gauss-Seidel requires 188 iterations, faster than Jacobi's method. The steady state simulation agrees satisfactorily with an unsteady state reference model. Moreover, the best candidate location for installing MEMS on the projectile is observed at point T(10, 0), which has the lowest temperature of all the points considered. The temperatures obtained using Jacobi and Gauss-Seidel for scenarios 1 and 2 at T(10, 0) are 307 and 309 Kelvin, respectively.
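The Jacobi versus Gauss-Seidel comparison above can be illustrated on a toy 1-D steady-state heat problem, where each interior point is relaxed to the average of its neighbours; the grid size, end temperatures, and tolerance below are invented and unrelated to the projectile geometry:

```python
def solve(n, t_left, t_right, method, tol=1e-6):
    """1-D steady-state heat conduction with fixed-temperature ends."""
    T = [t_left] + [0.0] * (n - 2) + [t_right]
    iters = 0
    while True:
        iters += 1
        old = T[:]
        # Jacobi reads only the previous sweep's values;
        # Gauss-Seidel reads freshly updated values immediately.
        src = old if method == 'jacobi' else T
        for i in range(1, n - 1):
            T[i] = 0.5 * (src[i - 1] + src[i + 1])
        if max(abs(a - b) for a, b in zip(T, old)) < tol:
            return T, iters

T_j, it_j = solve(11, 400.0, 300.0, 'jacobi')
T_g, it_g = solve(11, 400.0, 300.0, 'gauss-seidel')
```

Both methods converge to the same linear temperature profile, but Gauss-Seidel needs roughly half as many sweeps, consistent with the 350 versus 188 iteration counts reported in the abstract.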

  3. Radioactivity measurements of ITER materials using the TFTR D-T neutron field

    NASA Astrophysics Data System (ADS)

    Kumar, A.; Abdou, M. A.; Barnes, C. W.; Kugel, H. W.

    1994-06-01

    The availability of high D-T fusion neutron yields at TFTR has provided a useful opportunity to directly measure D-T neutron-induced radioactivity in a realistic tokamak fusion reactor environment for materials of vital interest to ITER. These measurements are valuable for characterizing radioactivity in various ITER candidate materials, for validating complex neutron transport calculations, and for meeting fusion reactor licensing requirements. The radioactivity measurements at TFTR involve potential ITER materials including stainless steel 316, vanadium, titanium, chromium, silicon, iron, cobalt, nickel, molybdenum, aluminum, copper, zinc, zirconium, niobium, and tungsten. Small samples of these materials were irradiated close to the plasma and just outside the vacuum vessel wall of TFTR, locations of different neutron energy spectra. Saturation activities for both threshold and capture reactions were measured. Data from dosimetric reactions have been used to obtain preliminary neutron energy spectra. Spectra from the first wall were compared to calculations from ITER and to measurements from accelerator-based tests.

  4. Using Minimum-Surface Bodies for Iteration Space Partitioning

    NASA Technical Reports Server (NTRS)

    Frumkin, Michael; VanderWijngaart, Rob F.; Biegel, Bryan (Technical Monitor)

    2001-01-01

    A number of known techniques for improving cache performance in scientific computations involve the reordering of the iteration space. Some of these reorderings can be considered as coverings of the iteration space with sets having a good surface-to-volume ratio. Use of such sets reduces the number of cache misses in computations of local operators having the iteration space as a domain. We study coverings of iteration spaces represented by structured and unstructured grids. For structured grids we introduce a covering based on successive minima tiles of the interference lattice of the grid. We show that the covering has a good surface-to-volume ratio and present a computer experiment showing the actual reduction of cache misses achieved by using these tiles. For unstructured grids no cache-efficient covering can be guaranteed. We present a triangulation of a 3-dimensional cube such that any local operator on the corresponding grid has a significantly larger number of cache misses than a similar operator on a structured grid.

  5. Assessing Quality in Toddler Classrooms Using the CLASS-Toddler and the ITERS-R

    ERIC Educational Resources Information Center

    La Paro, Karen M.; Williamson, Amy C.; Hatfield, Bridget

    2014-01-01

    Many very young children attend early care and education programs, but current information about the quality of center-based care for toddlers is scarce. Using 2 observation instruments, the Infant/Toddler Environment Rating Scale-Revised (ITERS-R) and the Classroom Assessment Scoring System, Toddler Version (CLASS-Toddler), 93 child care…

  6. On the Developmental Education Radar Screen--2013

    ERIC Educational Resources Information Center

    Paulson, Eric J.

    2013-01-01

    This is the second iteration of the Developmental Education Radar Screen project. As with the first iteration, in 2011, the author uses a "radar screen" metaphor to discuss trends in developmental education based on responses to a series of topics and categories provided by a group of leaders in the educational field. The purpose of this…

  7. LiDAR-IMU Time Delay Calibration Based on Iterative Closest Point and Iterated Sigma Point Kalman Filter.

    PubMed

    Liu, Wanli

    2017-03-08

    The time delay calibration between Light Detection and Ranging (LiDAR) and Inertial Measurement Units (IMUs) is an essential prerequisite for their applications. However, the correspondences between LiDAR and IMU measurements are usually unknown, and thus cannot be computed directly for the time delay calibration. In order to solve the problem of LiDAR-IMU time delay calibration, this paper presents a fusion method based on the iterative closest point (ICP) algorithm and the iterated sigma point Kalman filter (ISPKF), which combines the advantages of both. The ICP algorithm can precisely determine the unknown transformation between LiDAR and IMU, and the ISPKF algorithm can optimally estimate the time delay calibration parameters. First, the coordinate transformation from the LiDAR frame to the IMU frame is realized. Second, the measurement model and time delay error model of LiDAR and IMU are established. Third, the methodology of the ICP and ISPKF procedure is presented for LiDAR-IMU time delay calibration. Experimental results are presented that validate the proposed method and demonstrate that the time delay error can be accurately calibrated.
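As a hedged illustration of the underlying idea of time-delay calibration (finding the lag that best aligns two sensor streams), the sketch below uses a simple discrete cross-correlation search; it is not the authors' ICP/ISPKF pipeline, and the pulse-like signals are invented:

```python
def estimate_delay(a, b, max_lag):
    """Return the integer lag that maximizes the cross-correlation of a and b."""
    def corr(lag):
        return sum(a[i] * b[i + lag] for i in range(len(a)) if 0 <= i + lag < len(b))
    return max(range(-max_lag, max_lag + 1), key=corr)

# b is a copy of a delayed by 3 samples (invented signals)
a = [0, 0, 1, 4, 9, 4, 1, 0, 0, 0, 0, 0]
b = [0, 0, 0, 0, 0, 1, 4, 9, 4, 1, 0, 0]
lag = estimate_delay(a, b, max_lag=5)
```

Cross-correlation gives only an integer-sample estimate and assumes the two streams observe the same underlying motion; the paper's filter-based approach refines a continuous delay jointly with the sensor models.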

  8. Gauss-Seidel Iterative Method as a Real-Time Pile-Up Solver of Scintillation Pulses

    NASA Astrophysics Data System (ADS)

    Novak, Roman; Vencelj, Matjaž

    2009-12-01

    The pile-up rejection in nuclear spectroscopy has been confronted recently by several pile-up correction schemes that compensate for distortions of the signal and subsequent energy spectra artifacts as the counting rate increases. We study here a real-time capability of the event-by-event correction method, which at the core translates to solving many sets of linear equations. Tight time limits and constrained front-end electronics resources make well-known direct solvers inappropriate. We propose a novel approach based on the Gauss-Seidel iterative method, which turns out to be a stable and cost-efficient solution to improve spectroscopic resolution in the front-end electronics. We show the method convergence properties for a class of matrices that emerge in calorimetric processing of scintillation detector signals and demonstrate the ability of the method to support the relevant resolutions. The sole iteration-based error component can be brought below the sliding window induced errors in a reasonable number of iteration steps, thus allowing real-time operation. An area-efficient hardware implementation is proposed that fully utilizes the method's inherent parallelism.
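The core operation described above, solving a small set of linear equations by Gauss-Seidel iteration, can be sketched as follows; the matrix is an invented diagonally dominant example, not an actual scintillation-pulse system:

```python
def gauss_seidel(A, b, sweeps=50):
    """Solve A x = b by Gauss-Seidel sweeps; converges when A is
    strictly diagonally dominant."""
    n = len(b)
    x = [0.0] * n
    for _ in range(sweeps):
        for i in range(n):
            # use freshly updated entries of x immediately (Gauss-Seidel)
            s = sum(A[i][j] * x[j] for j in range(n) if j != i)
            x[i] = (b[i] - s) / A[i][i]
    return x

# Invented diagonally dominant 3x3 system with exact solution (1, 2, 3)
A = [[4.0, 1.0, 0.0],
     [1.0, 4.0, 1.0],
     [0.0, 1.0, 4.0]]
b = [6.0, 12.0, 14.0]
x = gauss_seidel(A, b)
```

The per-sweep updates are sequential but cheap, which is why the method maps well onto constrained front-end hardware; the iteration count needed for a given error budget can be fixed in advance, as the abstract notes.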

  9. Iterated reaction graphs: simulating complex Maillard reaction pathways.

    PubMed

    Patel, S; Rabone, J; Russell, S; Tissen, J; Klaffke, W

    2001-01-01

    This study investigates a new method of simulating a complex chemical system including feedback loops and parallel reactions. The practical purpose of this approach is to model the actual reactions that take place in the Maillard process, a set of food browning reactions, in sufficient detail to be able to predict the volatile composition of the Maillard products. The developed framework, called iterated reaction graphs, consists of two main elements: a soup of molecules and a reaction base of Maillard reactions. An iterative process loops through the reaction base, taking reactants from and feeding products back to the soup. This produces a reaction graph, with molecules as nodes and reactions as arcs. The iterated reaction graph is updated and validated by comparing output with the main products found by classical gas-chromatographic/mass spectrometric analysis. To ensure a realistic output and convergence to desired volatiles only, the approach contains a number of novel elements: rate kinetics are treated as reaction probabilities; only a subset of the true chemistry is modeled; and the reactions are blocked into groups.
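The soup/reaction-base loop described above can be sketched as follows. The molecule names, reaction rules, and probabilities below are illustrative placeholders, not actual Maillard chemistry:

```python
import random

# Reaction base: (reactants, products, firing probability) rules.
# Treating rate kinetics as probabilities, as in the abstract.
reactions = [
    (('glucose', 'glycine'), ('amadori',), 0.6),
    (('amadori',), ('deoxyosone', 'water'), 0.4),
    (('deoxyosone',), ('pyrazine',), 0.3),
]

def iterate(soup, n_iters, seed=0):
    """Loop over the reaction base, drawing reactants from the soup and
    feeding products back; record fired reactions as arcs of the graph."""
    rng = random.Random(seed)
    graph = []                                     # (iteration, reaction index) arcs
    for it in range(n_iters):
        for idx, (reactants, products, prob) in enumerate(reactions):
            available = all(soup.count(r) >= reactants.count(r) for r in reactants)
            if available and rng.random() < prob:  # probabilistic firing
                for r in reactants:
                    soup.remove(r)
                soup.extend(products)
                graph.append((it, idx))
    return soup, graph

soup, graph = iterate(['glucose'] * 5 + ['glycine'] * 5, n_iters=30)
```

The recorded arcs, together with the molecules as nodes, form the iterated reaction graph; mass bookkeeping (each firing consumes its reactants and emits its products) holds by construction.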

  10. Installation and Testing of ITER Integrated Modeling and Analysis Suite (IMAS) on DIII-D

    NASA Astrophysics Data System (ADS)

    Lao, L.; Kostuk, M.; Meneghini, O.; Smith, S.; Staebler, G.; Kalling, R.; Pinches, S.

    2017-10-01

    A critical objective of the ITER Integrated Modeling Program is the development of IMAS to support ITER plasma operation and research activities. An IMAS framework has been established based on the earlier work carried out within the EU. It consists of a physics data model and a workflow engine. The data model is capable of representing both simulation and experimental data and is applicable to ITER and other devices. IMAS has been successfully installed on a local DIII-D server using a flexible installer capable of managing the core data access tools (Access Layer and Data Dictionary) and optionally the Kepler workflow engine and coupling tools. A general adaptor for OMFIT (a workflow engine) is being built for adaptation of any analysis code to IMAS using a new IMAS universal access layer (UAL) interface developed from an existing OMFIT EU Integrated Tokamak Modeling UAL. Ongoing work includes development of a general adaptor for EFIT and TGLF based on this new UAL that can be readily extended for other physics codes within OMFIT. Work supported by US DOE under DE-FC02-04ER54698.

  11. Short-Block Protograph-Based LDPC Codes

    NASA Technical Reports Server (NTRS)

    Divsalar, Dariush; Dolinar, Samuel; Jones, Christopher

    2010-01-01

    Short-block low-density parity-check (LDPC) codes of a special type are intended to be especially well suited for potential applications that include transmission of command and control data, cellular telephony, data communications in wireless local area networks, and satellite data communications. [In general, LDPC codes belong to a class of error-correcting codes suitable for use in a variety of wireless data-communication systems that include noisy channels.] The codes of the present special type exhibit low error floors, low bit and frame error rates, and low latency (in comparison with related prior codes). These codes also achieve a low maximum rate of undetected errors over all signal-to-noise ratios, without requiring the use of cyclic redundancy checks, which would significantly increase the overhead for short blocks. These codes have protograph representations; this is advantageous in that, for reasons that exceed the scope of this article, the applicability of protograph representations makes it possible to design high-speed iterative decoders that utilize belief-propagation algorithms.

  12. Sensitivity-Based Guided Model Calibration

    NASA Astrophysics Data System (ADS)

    Semnani, M.; Asadzadeh, M.

    2017-12-01

    A common practice in automatic calibration of hydrologic models is to apply sensitivity analysis prior to the global optimization to reduce the number of decision variables (DVs) by identifying the most sensitive ones. This two-stage process aims to improve the optimization efficiency. However, parameter sensitivity information can also be used to enhance the ability of optimization algorithms to find good quality solutions in fewer solution evaluations. This improvement can be achieved by focusing the optimization on sampling the most sensitive parameters in each iteration. In this study, the selection process of the dynamically dimensioned search (DDS) optimization algorithm is enhanced by utilizing a sensitivity analysis method to put more emphasis on the most sensitive decision variables for perturbation. The performance of DDS with the sensitivity information is compared to the original version of DDS on different mathematical test functions and a model calibration case study. Overall, the results show that DDS with sensitivity information finds nearly the same solutions as the original DDS, but in significantly fewer solution evaluations.
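The original DDS algorithm that the study builds on can be sketched as follows, following Tolson and Shoemaker's formulation in which each decision variable is included in the perturbed set with a probability that decays as 1 - ln(i)/ln(max_iter); the test function, bounds, and parameter values below are arbitrary:

```python
import math
import random

def dds(f, bounds, max_iter=300, r=0.2, seed=1):
    """Dynamically dimensioned search: a greedy stochastic search that
    perturbs fewer decision variables as the iteration count grows."""
    rng = random.Random(seed)
    lo = [b[0] for b in bounds]
    hi = [b[1] for b in bounds]
    best = [rng.uniform(l, h) for l, h in zip(lo, hi)]
    best_val = f(best)
    for i in range(1, max_iter):
        # inclusion probability decays logarithmically with iteration count
        p = 1.0 - math.log(i) / math.log(max_iter)
        dims = [d for d in range(len(bounds)) if rng.random() < p]
        if not dims:
            dims = [rng.randrange(len(bounds))]
        cand = best[:]
        for d in dims:
            cand[d] += r * (hi[d] - lo[d]) * rng.gauss(0.0, 1.0)
            if cand[d] < lo[d]:                  # reflect at the lower bound
                cand[d] = lo[d] + (lo[d] - cand[d])
            if cand[d] > hi[d]:                  # reflect at the upper bound
                cand[d] = hi[d] - (cand[d] - hi[d])
            cand[d] = min(max(cand[d], lo[d]), hi[d])
        val = f(cand)
        if val <= best_val:                      # greedy acceptance
            best, best_val = cand, val
    return best, best_val

# Minimize a 5-dimensional sphere function over [-5, 5]^5
sol, val = dds(lambda x: sum(v * v for v in x), [(-5.0, 5.0)] * 5)
```

The study's enhancement would replace the uniform inclusion probability with one weighted by each parameter's sensitivity, so sensitive DVs are perturbed more often; that weighting is not shown here.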

  13. Application of the region-time-length algorithm to study of earthquake precursors in the Thailand-Laos-Myanmar borders

    NASA Astrophysics Data System (ADS)

    Puangjaktha, P.; Pailoplee, S.

    2018-04-01

    In order to examine the precursory seismic quiescence of upcoming hazardous earthquakes, the seismicity data available in the vicinity of the Thailand-Laos-Myanmar borders were analyzed using the Region-Time-Length (RTL) statistical algorithm. The utilized earthquake data were obtained from the International Seismological Centre, after which the homogeneity and completeness of the catalogue were improved. After iterative tests with different values of the r0 and t0 parameters, the values r0 = 120 km and t0 = 2 yr yielded reasonable estimates of the anomalous RTL scores, in both temporal variation and spatial distribution, a few years prior to five of the eight recognized strong-to-major earthquakes. Statistical evaluation of both the correlation coefficient and the underlying stochastic process revealed that the RTL scores obtained here are unlikely to be artificial or random phenomena. Therefore, the prospective earthquake sources mentioned here should be recognized and effective mitigation plans should be provided.
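
    A minimal sketch of an RTL score computation using the standard Sobolev-style weighting with the r0 and t0 values quoted above; the detrending and normalization used in practice, the catalogue handling, and the sample earthquake values below are simplified or invented for illustration.

```python
import numpy as np

def rtl_score(dist_km, time_yr, rupture_km, t_now,
              r0=120.0, t0=2.0, n_events=50):
    """Minimal RTL (Region-Time-Length) score at one site and epoch.
    Background-trend removal and normalization are omitted for brevity."""
    past = time_yr < t_now
    r, t, l = dist_km[past], time_yr[past], rupture_km[past]
    order = np.argsort(t)[-n_events:]          # most recent prior events
    r, t, l = r[order], t[order], l[order]
    R = np.sum(np.exp(-r / r0))                # epicentral-distance factor
    T = np.sum(np.exp(-(t_now - t) / t0))      # elapsed-time factor
    L = np.sum(l / r)                          # rupture-length factor
    return R * T * L                           # anomalously low => quiescence

# Invented three-event toy catalogue.
dist = np.array([50.0, 80.0, 200.0])
time = np.array([2003.0, 2004.5, 2005.0])
rupl = np.array([5.0, 8.0, 12.0])
score = rtl_score(dist, time, rupl, t_now=2006.0)
```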

  14. Genovo: De Novo Assembly for Metagenomes

    NASA Astrophysics Data System (ADS)

    Laserson, Jonathan; Jojic, Vladimir; Koller, Daphne

    Next-generation sequencing technologies produce a large number of noisy reads from the DNA in a sample. Metagenomics and population sequencing aim to recover the genomic sequences of the species in the sample, which could be of high diversity. Methods geared towards single sequence reconstruction are not sensitive enough when applied in this setting. We introduce a generative probabilistic model of read generation from environmental samples and present Genovo, a novel de novo sequence assembler that discovers likely sequence reconstructions under the model. A Chinese restaurant process prior accounts for the unknown number of genomes in the sample. Inference is made by applying a series of hill-climbing steps iteratively until convergence. We compare the performance of Genovo to three other short read assembly programs across one synthetic dataset and eight metagenomic datasets created using the 454 platform, the largest of which has 311k reads. Genovo's reconstructions cover more bases and recover more genes than the other methods, and yield a higher assembly score.
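
    The Chinese restaurant process prior mentioned above can be sketched as a sequential sampling rule; the concentration parameter and read count below are arbitrary, and real Genovo inference couples this prior with the read-generation likelihood.

```python
import numpy as np

rng = np.random.default_rng(1)

def crp_assignments(n_reads, alpha=1.0):
    """Sample cluster (genome) assignments from a Chinese restaurant
    process prior: read i joins an existing genome with probability
    proportional to its current size, or opens a new genome with
    probability proportional to alpha."""
    counts = []                      # reads currently assigned per genome
    labels = []
    for _ in range(n_reads):
        weights = np.array(counts + [alpha], dtype=float)
        k = rng.choice(len(weights), p=weights / weights.sum())
        if k == len(counts):
            counts.append(1)         # open a new genome
        else:
            counts[k] += 1
        labels.append(int(k))
    return labels, counts

labels, counts = crp_assignments(100, alpha=2.0)
```

    The "rich get richer" behaviour of this prior is what lets the number of genomes stay unbounded yet typically small.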

  15. Hierarchical Nearest-Neighbor Gaussian Process Models for Large Geostatistical Datasets.

    PubMed

    Datta, Abhirup; Banerjee, Sudipto; Finley, Andrew O; Gelfand, Alan E

    2016-01-01

    Spatial process models for analyzing geostatistical data entail computations that become prohibitive as the number of spatial locations becomes large. This article develops a class of highly scalable nearest-neighbor Gaussian process (NNGP) models to provide fully model-based inference for large geostatistical datasets. We establish that the NNGP is a well-defined spatial process providing legitimate finite-dimensional Gaussian densities with sparse precision matrices. We embed the NNGP as a sparsity-inducing prior within a rich hierarchical modeling framework and outline how computationally efficient Markov chain Monte Carlo (MCMC) algorithms can be executed without storing or decomposing large matrices. The number of floating point operations (flops) per iteration of this algorithm is linear in the number of spatial locations, thereby rendering substantial scalability. We illustrate the computational and inferential benefits of the NNGP over competing methods using simulation studies and also analyze forest biomass from a massive U.S. Forest Inventory dataset at a scale that precludes alternative dimension-reducing methods. Supplementary materials for this article are available online.
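
    A sketch of the NNGP factorization under an assumed exponential covariance: with a fixed ordering, each location conditions on at most m earlier locations, giving a sparse lower-triangular factor B and diagonal conditional variances F (precision = (I-B)' F^{-1} (I-B)). The full hierarchical MCMC of the article is not shown, and B is stored densely here only for brevity.

```python
import numpy as np

def nngp_factors(coords, m=10, phi=1.0, sigma2=1.0):
    """Build the NNGP conditioning sets and kriging weights; each step
    solves only an m x m system, so the per-location cost is constant
    and the total cost is linear in n."""
    n = len(coords)

    def cov(a, b):
        d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)
        return sigma2 * np.exp(-phi * d)       # exponential covariance

    B = np.zeros((n, n))
    F = np.zeros(n)
    F[0] = sigma2
    for i in range(1, n):
        d = np.linalg.norm(coords[:i] - coords[i], axis=1)
        nb = np.argsort(d)[:m]                 # neighbor set N(i)
        C_nn = cov(coords[nb], coords[nb])
        C_in = cov(coords[i:i + 1], coords[nb])[0]
        w = np.linalg.solve(C_nn, C_in)        # kriging weights
        B[i, nb] = w
        F[i] = sigma2 - C_in @ w               # conditional variance
    return B, F

rng = np.random.default_rng(2)
coords = rng.uniform(0.0, 1.0, size=(200, 2))
B, F = nngp_factors(coords, m=10)
```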

  16. Hierarchical Nearest-Neighbor Gaussian Process Models for Large Geostatistical Datasets

    PubMed Central

    Datta, Abhirup; Banerjee, Sudipto; Finley, Andrew O.; Gelfand, Alan E.

    2018-01-01

    Spatial process models for analyzing geostatistical data entail computations that become prohibitive as the number of spatial locations becomes large. This article develops a class of highly scalable nearest-neighbor Gaussian process (NNGP) models to provide fully model-based inference for large geostatistical datasets. We establish that the NNGP is a well-defined spatial process providing legitimate finite-dimensional Gaussian densities with sparse precision matrices. We embed the NNGP as a sparsity-inducing prior within a rich hierarchical modeling framework and outline how computationally efficient Markov chain Monte Carlo (MCMC) algorithms can be executed without storing or decomposing large matrices. The number of floating point operations (flops) per iteration of this algorithm is linear in the number of spatial locations, thereby rendering substantial scalability. We illustrate the computational and inferential benefits of the NNGP over competing methods using simulation studies and also analyze forest biomass from a massive U.S. Forest Inventory dataset at a scale that precludes alternative dimension-reducing methods. Supplementary materials for this article are available online. PMID:29720777

  17. Multi-atlas segmentation enables robust multi-contrast MRI spleen segmentation for splenomegaly

    NASA Astrophysics Data System (ADS)

    Huo, Yuankai; Liu, Jiaqi; Xu, Zhoubing; Harrigan, Robert L.; Assad, Albert; Abramson, Richard G.; Landman, Bennett A.

    2017-02-01

    Non-invasive spleen volume estimation is essential in detecting splenomegaly. Magnetic resonance imaging (MRI) has been used to facilitate splenomegaly diagnosis in vivo. However, achieving accurate spleen volume estimation from MR images is challenging given the great inter-subject variance of human abdomens and the wide variety of clinical images/modalities. Multi-atlas segmentation has been shown to be a promising approach to handle heterogeneous data and difficult anatomical scenarios. In this paper, we propose to use multi-atlas segmentation frameworks for MRI spleen segmentation for splenomegaly. To the best of our knowledge, this is the first work that integrates multi-atlas segmentation for splenomegaly as seen on MRI. To address the particular concerns of spleen MRI, automated and novel semi-automated atlas selection approaches are introduced. The automated approach iteratively selects a subset of atlases using the selective and iterative method for performance level estimation (SIMPLE) approach. To further control outliers, semi-automated craniocaudal-length-based SIMPLE atlas selection (L-SIMPLE) is proposed, which introduces a spatial prior to guide the iterative atlas selection. A dataset from a clinical trial containing 55 MRI volumes (28 T1 weighted and 27 T2 weighted) was used to evaluate the different methods. Both automated and semi-automated methods achieved median DSC > 0.9. The outliers were alleviated by L-SIMPLE (≈1 min of manual effort per scan), which achieved a 0.9713 Pearson correlation with the manual segmentation. The results demonstrated that multi-atlas segmentation is able to achieve accurate spleen segmentation from multi-contrast splenomegaly MRI scans.
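
    A hedged sketch of SIMPLE-style iterative atlas selection on synthetic binary masks: the majority-vote fusion, Dice scoring, and mean-minus-std threshold follow the general SIMPLE idea, while the L-SIMPLE spatial prior is omitted and all data below are invented.

```python
import numpy as np

def dice(a, b):
    inter = np.logical_and(a, b).sum()
    return 2.0 * inter / (a.sum() + b.sum() + 1e-9)

def simple_select(atlas_segs, alpha=1.0, max_iter=10):
    """Fuse kept atlases by majority vote, score each against the
    fusion with Dice, and drop atlases scoring below mean - alpha*std;
    repeat until the kept set stabilizes."""
    keep = list(range(len(atlas_segs)))
    fusion = None
    for _ in range(max_iter):
        stack = np.stack([atlas_segs[i] for i in keep]).astype(float)
        fusion = stack.mean(axis=0) >= 0.5          # majority vote
        scores = np.array([dice(atlas_segs[i], fusion) for i in keep])
        thresh = scores.mean() - alpha * scores.std()
        new_keep = [i for i, s in zip(keep, scores) if s >= thresh]
        if len(new_keep) == len(keep) or len(new_keep) < 2:
            break
        keep = new_keep
    return keep, fusion

# Synthetic example: four consistent atlases and one misregistered outlier.
good = np.zeros((10, 10), bool)
good[2:8, 2:8] = True
outlier = np.zeros((10, 10), bool)
outlier[0:4, 0:4] = True
atlases = [good.copy() for _ in range(4)] + [outlier]
keep, fusion = simple_select(atlases)
```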

  18. Development of a video decision aid to inform parents on potential outcomes of extreme prematurity.

    PubMed

    Guillén, Ú; Suh, S; Wang, E; Stickelman, V; Kirpalani, H

    2016-11-01

    The objective of the study is to develop and validate a video-based parental decision aid about the outcomes of extremely premature infants. Thirty-one clinicians and 30 parents of extremely premature infants (<26 weeks gestation) previously underwent semi-structured interviews to assess perceptions of antenatal counseling. Interviewees recommended a video. A video was iteratively developed, with final validation by three groups: clinicians (n=16), parents with a history of extreme prematurity (n=14) and healthy 'naïve' women without prior knowledge of prematurity (n=13). Two iterations of the video were created. Following a simulated counseling session, an eight-question survey and the State-Trait Anxiety Inventory (STAI) were administered to parents and 'naïve' participants to assess usefulness and stress provocation. The final 10-min video shows six children/parent dyads of former 23 to 25 week premature children with a wide range of outcomes. This video was well accepted by clinicians as well as parent and 'naïve' participants, who perceived it as 'balanced' with a 'neutral' message. The video was felt to provide useful information and insight on prematurity. The final version of the video did not induce anxiety: parents STAI-S 36.1±12.1; 'naïve' 30.2±8.9. A short video showing the range of outcomes of extreme prematurity has been produced. It is well accepted and does not increase levels of anxiety as measured by the STAI. This video may be a useful and non-stress-inducing aid at the time of counseling parents facing extreme prematurity.

  19. Multi-atlas Segmentation Enables Robust Multi-contrast MRI Spleen Segmentation for Splenomegaly.

    PubMed

    Huo, Yuankai; Liu, Jiaqi; Xu, Zhoubing; Harrigan, Robert L; Assad, Albert; Abramson, Richard G; Landman, Bennett A

    2017-02-11

    Non-invasive spleen volume estimation is essential in detecting splenomegaly. Magnetic resonance imaging (MRI) has been used to facilitate splenomegaly diagnosis in vivo. However, achieving accurate spleen volume estimation from MR images is challenging given the great inter-subject variance of human abdomens and the wide variety of clinical images/modalities. Multi-atlas segmentation has been shown to be a promising approach to handle heterogeneous data and difficult anatomical scenarios. In this paper, we propose to use multi-atlas segmentation frameworks for MRI spleen segmentation for splenomegaly. To the best of our knowledge, this is the first work that integrates multi-atlas segmentation for splenomegaly as seen on MRI. To address the particular concerns of spleen MRI, automated and novel semi-automated atlas selection approaches are introduced. The automated approach iteratively selects a subset of atlases using the selective and iterative method for performance level estimation (SIMPLE) approach. To further control outliers, semi-automated craniocaudal-length-based SIMPLE atlas selection (L-SIMPLE) is proposed, which introduces a spatial prior to guide the iterative atlas selection. A dataset from a clinical trial containing 55 MRI volumes (28 T1 weighted and 27 T2 weighted) was used to evaluate the different methods. Both automated and semi-automated methods achieved median DSC > 0.9. The outliers were alleviated by L-SIMPLE (≈1 min of manual effort per scan), which achieved a 0.9713 Pearson correlation with the manual segmentation. The results demonstrated that multi-atlas segmentation is able to achieve accurate spleen segmentation from multi-contrast splenomegaly MRI scans.

  20. Developing a Decision Support System for Tobacco Use Counseling Using Primary Care Physicians

    PubMed Central

    Marcy, Theodore W.; Kaplan, Bonnie; Connolly, Scott W.; Michel, George; Shiffman, Richard N.; Flynn, Brian S.

    2009-01-01

    Background Clinical decision support systems (CDSS) have the potential to improve adherence to guidelines, but only if they are designed to work in the complex environment of ambulatory clinics as otherwise physicians may not use them. Objective To gain input from primary care physicians in designing a CDSS for smoking cessation to ensure that the design is appropriate to a clinical environment before attempts to test this CDSS in a clinical trial. This approach is of general interest to those designing similar systems. Design and Approach We employed an iterative ethnographic process that used multiple evaluation methods to understand physician preferences and workflow integration. Using results from our prior survey of physicians and clinic managers, we developed a prototype CDSS, validated content and design with an expert panel, and then subjected it to usability testing by physicians, followed by iterative design changes based on their feedback. We then performed clinical testing with individual patients, and conducted field tests of the CDSS in two primary care clinics during which four physicians used it for routine patient visits. Results The CDSS prototype was substantially modified through these cycles of usability and clinical testing, including removing a potentially fatal design flaw. During field tests in primary care clinics, physicians incorporated the final CDSS prototype into their workflow, and used it to assist in smoking cessation interventions up to eight times daily. Conclusions A multi-method evaluation process utilizing primary care physicians proved useful for developing a CDSS that was acceptable to physicians and patients, and feasible to use in their clinical environment. PMID:18713526

  1. Fusion Power measurement at ITER

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bertalot, L.; Barnsley, R.; Krasilnikov, V.

    2015-07-01

    Nuclear fusion research aims to provide energy for the future in a sustainable way, and the scope of the ITER project is to demonstrate the feasibility of nuclear fusion energy. ITER is an experimental nuclear reactor based on a large-scale fusion plasma (tokamak type) device generating Deuterium-Tritium (DT) fusion reactions with emission of 14 MeV neutrons, producing up to 700 MW of fusion power. The measurement of fusion power, i.e. total neutron emissivity, will play an important role in achieving ITER goals, in particular the fusion gain factor Q related to the reactor performance. Particular attention is given also to the development of the neutron calibration strategy, whose main scope is to achieve the required accuracy of 10% for the measurement of fusion power. Neutron Flux Monitors located in diagnostic ports and inside the vacuum vessel will measure ITER total neutron emissivity, expected to range from 10^14 n/s in Deuterium-Deuterium (DD) plasmas up to almost 10^21 n/s in DT plasmas. The neutron detection systems, as well as all other ITER diagnostics, have to withstand high nuclear radiation and electromagnetic fields as well as ultrahigh vacuum and thermal loads. (authors)
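
    The mapping from total neutron emissivity to fusion power can be checked with textbook constants: each DT reaction releases about 17.6 MeV and emits one neutron. The figures below are a back-of-envelope illustration, not ITER design values.

```python
# Order-of-magnitude check (assumption: one 14 MeV neutron and ~17.6 MeV
# of total energy per DT reaction; constants are textbook values, not
# taken from the article above).
MEV_TO_J = 1.602e-13        # joules per MeV
E_DT_MEV = 17.6             # total energy released per DT fusion reaction

def fusion_power_mw(neutron_rate_per_s):
    # total neutron emissivity (n/s) -> fusion power (MW)
    return neutron_rate_per_s * E_DT_MEV * MEV_TO_J / 1.0e6

p_mw = fusion_power_mw(2.5e20)   # an emissivity near the DT upper range
```

    An emissivity of about 2.5e20 n/s works out to roughly 700 MW, consistent with the power figure quoted above.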

  2. Krylov subspace iterative methods for boundary element method based near-field acoustic holography.

    PubMed

    Valdivia, Nicolas; Williams, Earl G

    2005-02-01

    The reconstruction of the acoustic field for general surfaces is obtained from the solution of a matrix system that results from a boundary integral equation discretized using boundary element methods. The solution to the resultant matrix system is obtained using iterative regularization methods that counteract the effect of noise on the measurements. These methods do not require the calculation of the singular value decomposition, which can be expensive when the matrix system is considerably large. Krylov subspace methods are iterative methods that exhibit the phenomenon known as "semi-convergence," i.e., the optimal regularized solution is obtained after a few iterations. If the iteration is not stopped, the method converges to a solution that is generally totally corrupted by errors in the measurements. For these methods the number of iterations plays the role of the regularization parameter. We focus our attention on the regularizing properties of Krylov subspace methods such as conjugate gradients, least squares QR and the recently proposed hybrid method. A discussion and comparison of the available stopping rules is included. A vibrating plate is considered as an example to validate our results.
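
    Semi-convergence can be demonstrated with a plain CGLS iteration on an invented ill-posed deblurring problem: tracking the error against the known true solution shows why the iteration count itself acts as the regularization parameter. The blur matrix, noise level, and signal below are all illustrative.

```python
import numpy as np

rng = np.random.default_rng(3)

def cgls(A, b, n_iters):
    """CGLS: conjugate gradients applied to the normal equations.
    The iterate after k steps plays the role of a regularized solution,
    with k acting as the regularization parameter."""
    x = np.zeros(A.shape[1])
    r = b.copy()
    s = A.T @ r
    p, gamma = s.copy(), s @ s
    iterates = []
    for _ in range(n_iters):
        q = A @ p
        alpha = gamma / (q @ q)
        x = x + alpha * p
        r = r - alpha * q
        s = A.T @ r
        gamma_new = s @ s
        p = s + (gamma_new / gamma) * p
        gamma = gamma_new
        iterates.append(x.copy())
    return iterates

# Ill-posed toy problem: severe Gaussian blur plus measurement noise.
n = 64
t = np.arange(n)
A = np.exp(-0.5 * ((t[:, None] - t[None, :]) / 3.0) ** 2)
x_true = np.where((t > 20) & (t < 44), 1.0, 0.0)
b = A @ x_true + 0.01 * rng.standard_normal(n)
errors = [np.linalg.norm(x - x_true) for x in cgls(A, b, 40)]
best_k = int(np.argmin(errors)) + 1   # iteration at which to stop
```

    In practice `x_true` is unknown, which is why the stopping rules compared in the paper (discrepancy principle, L-curve, GCV) matter.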

  3. An iterative reduced field-of-view reconstruction for periodically rotated overlapping parallel lines with enhanced reconstruction (PROPELLER) MRI.

    PubMed

    Lin, Jyh-Miin; Patterson, Andrew J; Chang, Hing-Chiu; Gillard, Jonathan H; Graves, Martin J

    2015-10-01

    To propose a new reduced field-of-view (rFOV) strategy for iterative reconstructions in a clinical environment. Iterative reconstructions can incorporate regularization terms to improve the image quality of periodically rotated overlapping parallel lines with enhanced reconstruction (PROPELLER) MRI. However, the large amount of calculations required for full FOV iterative reconstructions has posed a huge computational challenge for clinical usage. By subdividing the entire problem into smaller rFOVs, the iterative reconstruction can be accelerated on a desktop with a single graphic processing unit (GPU). This rFOV strategy divides the iterative reconstruction into blocks, based on the block-diagonal dominant structure. A near real-time reconstruction system was developed for the clinical MR unit, and parallel computing was implemented using the object-oriented model. In addition, the Toeplitz method was implemented on the GPU to reduce the time required for full interpolation. Using the data acquired from the PROPELLER MRI, the reconstructed images were then saved in the Digital Imaging and Communications in Medicine (DICOM) format. The proposed rFOV reconstruction reduced the gridding time by 97%, as the total iteration time was 3 s even with multiple processes running. A phantom study showed that the structural similarity index for rFOV reconstruction was statistically superior to conventional density compensation (p < 0.001). In vivo study validated the increased signal-to-noise ratio, which is over four times higher than with density compensation. Image sharpness index was improved using the regularized reconstruction implemented.
The concept of rFOV reconstruction may potentially be applied to other kinds of iterative reconstructions for shortened reconstruction duration.

  4. Layer-Based Approach for Image Pair Fusion.

    PubMed

    Son, Chang-Hwan; Zhang, Xiao-Ping

    2016-04-20

    Recently, image pairs, such as noisy and blurred images or infrared and noisy images, have been considered as a solution to provide high-quality photographs under low lighting conditions. In this paper, a new method for decomposing the image pairs into two layers, i.e., the base layer and the detail layer, is proposed for image pair fusion. In the case of infrared and noisy images, simple naive fusion leads to unsatisfactory results due to the discrepancies in brightness and image structures between the image pair. To address this problem, a local contrast-preserving conversion method is first proposed to create a new base layer of the infrared image, which can have visual appearance similar to another base layer such as the denoised noisy image. Then, a new way of designing three types of detail layers from the given noisy and infrared images is presented. To estimate the noise-free and unknown detail layer from the three designed detail layers, the optimization framework is modeled with residual-based sparsity and patch redundancy priors. To better suppress the noise, an iterative approach that updates the detail layer of the noisy image is adopted via a feedback loop. This proposed layer-based method can also be applied to fuse another noisy and blurred image pair. The experimental results show that the proposed method is effective for solving the image pair fusion problem.
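
    A strongly simplified sketch of the base/detail split described above: a box filter stands in for the paper's contrast-preserving conversion and residual-sparsity machinery, and the images are random stand-ins.

```python
import numpy as np

def box_blur(img, k=5):
    """Box filter via shifted sums (circular edge handling); a crude
    stand-in for a proper base-layer extraction."""
    r = k // 2
    acc = np.zeros(img.shape, dtype=float)
    for dy in range(-r, r + 1):
        for dx in range(-r, r + 1):
            acc += np.roll(np.roll(img, dy, axis=0), dx, axis=1)
    return acc / k ** 2

def fuse_pair(visible, infrared, k=5):
    """Layer-based fusion sketch: base layer from the (pre-denoised)
    visible photo, detail layer from the infrared image."""
    base = box_blur(visible, k)
    detail = infrared - box_blur(infrared, k)
    return base + detail

rng = np.random.default_rng(4)
visible = rng.uniform(0.0, 1.0, size=(32, 32))
infrared = rng.uniform(0.0, 1.0, size=(32, 32))
fused = fuse_pair(visible, infrared)
```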

  5. Survey on the Performance of Source Localization Algorithms.

    PubMed

    Fresno, José Manuel; Robles, Guillermo; Martínez-Tarifa, Juan Manuel; Stewart, Brian G

    2017-11-18

    The localization of emitters using an array of sensors or antennas is a prevalent problem that arises in several applications. There exist different techniques for source localization, which can be classified into multilateration, received signal strength (RSS) and proximity methods. The performance of multilateration techniques relies on measured time variables: the time of flight (ToF) of the emission from the emitter to the sensor, the time differences of arrival (TDoA) of the emission between sensors and the pseudo-time of flight (pToF) of the emission to the sensors. The multilateration algorithms presented and compared in this paper can be classified as iterative and non-iterative methods. Both standard least squares (SLS) and hyperbolic least squares (HLS) are iterative and based on the Newton-Raphson technique to solve the non-linear equation system. The metaheuristic technique particle swarm optimization (PSO) used for source localization is also studied. This optimization technique estimates the source position as the optimum of an objective function based on HLS and is also iterative in nature. Three non-iterative algorithms, namely the hyperbolic positioning algorithms (HPA), the maximum likelihood estimator (MLE) and Bancroft algorithm, are also presented. A non-iterative combined algorithm, MLE-HLS, based on MLE and HLS, is further proposed in this paper. The performance of all algorithms is analysed and compared in terms of accuracy in the localization of the position of the emitter and in terms of computational time. The analysis is also undertaken with three different sensor layouts since the positions of the sensors affect the localization; several source positions are also evaluated to make the comparison more robust. The analysis is carried out using theoretical time differences, as well as including errors due to the effect of digital sampling of the time variables. 
It is shown that the most balanced algorithm, yielding better results than the other algorithms in terms of accuracy and short computational time, is the combined MLE-HLS algorithm.
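
    The iterative hyperbolic least squares idea can be sketched as a Gauss-Newton iteration on range differences; the sensor layout, source position, and noise-free measurements below are invented for illustration, and the MLE-HLS combination of the paper is not shown.

```python
import numpy as np

def hls_locate(sensors, tdoa_dists, x0, n_iter=50):
    """Gauss-Newton iteration for hyperbolic least squares (HLS):
    fit ||x - s_i|| - ||x - s_0|| to the measured range differences
    (TDoA times the propagation speed), with sensor 0 as reference."""
    x = np.asarray(x0, float)
    for _ in range(n_iter):
        d = np.linalg.norm(sensors - x, axis=1)
        res = (d[1:] - d[0]) - tdoa_dists
        # Jacobian of each range difference w.r.t. the position x
        J = (x - sensors[1:]) / d[1:, None] - (x - sensors[0]) / d[0]
        step, *_ = np.linalg.lstsq(J, res, rcond=None)
        x = x - step
        if np.linalg.norm(step) < 1e-12:
            break
    return x

sensors = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]])
source = np.array([3.0, 4.0])
d_true = np.linalg.norm(sensors - source, axis=1)
x_hat = hls_locate(sensors, d_true[1:] - d_true[0], x0=[5.0, 5.0])
```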

  6. Survey on the Performance of Source Localization Algorithms

    PubMed Central

    2017-01-01

    The localization of emitters using an array of sensors or antennas is a prevalent problem that arises in several applications. There exist different techniques for source localization, which can be classified into multilateration, received signal strength (RSS) and proximity methods. The performance of multilateration techniques relies on measured time variables: the time of flight (ToF) of the emission from the emitter to the sensor, the time differences of arrival (TDoA) of the emission between sensors and the pseudo-time of flight (pToF) of the emission to the sensors. The multilateration algorithms presented and compared in this paper can be classified as iterative and non-iterative methods. Both standard least squares (SLS) and hyperbolic least squares (HLS) are iterative and based on the Newton–Raphson technique to solve the non-linear equation system. The metaheuristic technique particle swarm optimization (PSO) used for source localization is also studied. This optimization technique estimates the source position as the optimum of an objective function based on HLS and is also iterative in nature. Three non-iterative algorithms, namely the hyperbolic positioning algorithms (HPA), the maximum likelihood estimator (MLE) and Bancroft algorithm, are also presented. A non-iterative combined algorithm, MLE-HLS, based on MLE and HLS, is further proposed in this paper. The performance of all algorithms is analysed and compared in terms of accuracy in the localization of the position of the emitter and in terms of computational time. The analysis is also undertaken with three different sensor layouts since the positions of the sensors affect the localization; several source positions are also evaluated to make the comparison more robust. The analysis is carried out using theoretical time differences, as well as including errors due to the effect of digital sampling of the time variables. 
It is shown that the most balanced algorithm, yielding better results than the other algorithms in terms of accuracy and short computational time, is the combined MLE-HLS algorithm. PMID:29156565

  7. Iteration and superposition encryption scheme for image sequences based on multi-dimensional keys

    NASA Astrophysics Data System (ADS)

    Han, Chao; Shen, Yuzhen; Ma, Wenlin

    2017-12-01

    An iteration and superposition encryption scheme for image sequences based on multi-dimensional keys is proposed for high-security, high-capacity and low-noise information transmission. The multiple images to be encrypted are transformed into phase-only images with an iterative algorithm and then encrypted with different random phases, respectively. The encrypted phase-only images are inverse Fourier transformed, generating new object functions. The new functions are located in different blocks and zero-padded for a sparse distribution; they are then propagated over different distances to a common region by angular-spectrum diffraction and superposed to form a single image. The single image is multiplied with a random phase in the frequency domain, after which the phase part of the frequency spectrum is truncated and the amplitude information is retained. The random phases, propagation distances and truncated phase information in the frequency domain are employed as multi-dimensional keys. The iterative processing and sparse distribution greatly reduce the crosstalk among the multiple encrypted images, and the superposition of image sequences greatly improves the capacity of the encrypted information. Several numerical experiments based on a designed optical system demonstrate that the proposed scheme can enhance the encrypted information capacity and enable image transmission at a high security level.

  8. Richardson-Lucy/maximum likelihood image restoration algorithm for fluorescence microscopy: further testing.

    PubMed

    Holmes, T J; Liu, Y H

    1989-11-15

    A maximum-likelihood-based iterative algorithm adapted from nuclear medicine imaging for noncoherent optical imaging was presented in a previous publication with some initial computer-simulation testing. This algorithm is identical in form to that previously derived in a different way by W. H. Richardson ["Bayesian-Based Iterative Method of Image Restoration," J. Opt. Soc. Am. 62, 55-59 (1972)] and L. B. Lucy ["An Iterative Technique for the Rectification of Observed Distributions," Astron. J. 79, 745-765 (1974)]. Foreseen applications include superresolution and 3-D fluorescence microscopy. This paper presents further simulation testing of this algorithm and a preliminary experiment with a defocused camera. The simulations show quantified resolution improvement as a function of iteration number, and they show qualitatively the trend in limitations on restored resolution when noise is present in the data. Also shown are results of a simulation in restoring missing-cone information for 3-D imaging. Conclusions are in support of the feasibility of using these methods with real systems, while computational cost and timing estimates indicate that it should be realistic to implement these methods. It is suggested in the Appendix that future extensions to the maximum-likelihood-based derivation of this algorithm will address some of the limitations that are experienced with the nonextended form of the algorithm presented here.
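
    The Richardson-Lucy / ML-EM update described above takes one line per iteration: u <- u * K'(d / K u), with K the PSF convolution. This 1-D sketch uses an invented PSF and a noise-free two-spike signal, and ignores edge normalization.

```python
import numpy as np

def richardson_lucy(data, psf, n_iter=200):
    """Richardson-Lucy / ML-EM deconvolution. Iterates stay
    non-negative because every update factor is non-negative."""
    u = np.full_like(data, data.mean())        # flat positive start
    psf_flip = psf[::-1]                       # adjoint = correlation
    for _ in range(n_iter):
        blurred = np.convolve(u, psf, mode='same')
        ratio = data / np.maximum(blurred, 1e-12)
        u = u * np.convolve(ratio, psf_flip, mode='same')
    return u

psf = np.array([0.25, 0.5, 0.25])              # invented normalized PSF
x = np.zeros(32)
x[10], x[20] = 4.0, 2.0                        # two point sources
d = np.convolve(x, psf, mode='same')           # noise-free blurred data
restored = richardson_lucy(d, psf)
```

    With noisy data the iteration is usually stopped early, for the same semi-convergence reason discussed for other iterative restorations.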

  9. Global strength assessment in oblique waves of a large gas carrier ship, based on a non-linear iterative method

    NASA Astrophysics Data System (ADS)

    Domnisoru, L.; Modiga, A.; Gasparotti, C.

    2016-08-01

    At the ship design stage, the first step of the hull structural assessment is the longitudinal strength analysis, with head-wave equivalent loads prescribed by the ships' classification societies' rules. This paper presents an enhancement of the longitudinal strength analysis that considers the general case of oblique quasi-static equivalent waves, based on our own non-linear iterative procedure and in-house program. The numerical approach is developed for mono-hull ships, without restrictions on the non-linearities of the 3D-hull offset lines, and involves three interlinked iterative cycles on the floating, pitch and roll trim equilibrium conditions. Besides the ship-wave equilibrium parameters, the wave-induced loads on the ship's girder are obtained. As a numerical case study we have considered a large liquefied petroleum gas (LPG) carrier. The numerical results for the large LPG carrier are compared with the statistical design values from several ships' classification societies' rules. This study makes it possible to obtain the oblique wave conditions that induce the maximum loads in the large LPG ship's girder. The numerical results point out that the non-linear iterative approach is necessary for the computation of the extreme loads induced by oblique waves, ensuring better accuracy of the large LPG ship's longitudinal strength assessment.

  10. Mixed-norm estimates for the M/EEG inverse problem using accelerated gradient methods.

    PubMed

    Gramfort, Alexandre; Kowalski, Matthieu; Hämäläinen, Matti

    2012-04-07

    Magneto- and electroencephalography (M/EEG) measure the electromagnetic fields produced by the neural electrical currents. Given a conductor model for the head, and the distribution of source currents in the brain, Maxwell's equations allow one to compute the ensuing M/EEG signals. Given the actual M/EEG measurements and the solution of this forward problem, one can localize, in space and in time, the brain regions that have produced the recorded data. However, due to the physics of the problem, the limited number of sensors compared to the number of possible source locations, and measurement noise, this inverse problem is ill-posed. Consequently, additional constraints are needed. Classical inverse solvers, often called minimum norm estimates (MNE), promote source estimates with a small ℓ₂ norm. Here, we consider a more general class of priors based on mixed norms. Such norms have the ability to structure the prior in order to incorporate some additional assumptions about the sources. We refer to such solvers as mixed-norm estimates (MxNE). In the context of M/EEG, MxNE can promote spatially focal sources with smooth temporal estimates with a two-level ℓ₁/ℓ₂ mixed-norm, while a three-level mixed-norm can be used to promote spatially non-overlapping sources between different experimental conditions. In order to efficiently solve the optimization problems of MxNE, we introduce fast first-order iterative schemes that for the ℓ₁/ℓ₂ norm give solutions in a few seconds making such a prior as convenient as the simple MNE. Furthermore, thanks to the convexity of the optimization problem, we can provide optimality conditions that guarantee global convergence. The utility of the methods is demonstrated both with simulations and experimental MEG data.
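
    A sketch of the two-level ℓ1/ℓ2 mixed-norm prior described above: the group-soft-threshold proximal operator plus a plain (non-accelerated) ISTA loop. The toy forward operator and sources are invented; the paper's accelerated schemes add a momentum step on top of this iteration.

```python
import numpy as np

def prox_l21(X, alpha):
    """Proximal operator of the l1/l2 ('l21') mixed norm: each row (one
    source's time course) is shrunk as a group; rows with l2 norm below
    alpha are zeroed, giving spatial sparsity with smooth time courses."""
    norms = np.linalg.norm(X, axis=1, keepdims=True)
    scale = np.maximum(1.0 - alpha / np.maximum(norms, 1e-12), 0.0)
    return X * scale

def ista_mxne(G, M, alpha, n_iter=200):
    """Plain ISTA for min_X 0.5*||M - G X||_F^2 + alpha*||X||_21."""
    L = np.linalg.norm(G, 2) ** 2              # Lipschitz constant of grad
    X = np.zeros((G.shape[1], M.shape[1]))
    for _ in range(n_iter):
        grad = G.T @ (G @ X - M)               # gradient of the data fit
        X = prox_l21(X - grad / L, alpha / L)
    return X

rng = np.random.default_rng(0)
G = rng.standard_normal((10, 20))              # toy forward operator
X_true = np.zeros((20, 5))
X_true[3], X_true[7] = 1.0, -1.0               # two active sources
M = G @ X_true                                 # noise-free measurements
X_hat = ista_mxne(G, M, alpha=0.1)
```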

  11. Mixed-norm estimates for the M/EEG inverse problem using accelerated gradient methods

    PubMed Central

    Gramfort, Alexandre; Kowalski, Matthieu; Hämäläinen, Matti

    2012-01-01

    Magneto- and electroencephalography (M/EEG) measure the electromagnetic fields produced by the neural electrical currents. Given a conductor model for the head, and the distribution of source currents in the brain, Maxwell's equations allow one to compute the ensuing M/EEG signals. Given the actual M/EEG measurements and the solution of this forward problem, one can localize, in space and in time, the brain regions that have produced the recorded data. However, due to the physics of the problem, the limited number of sensors compared to the number of possible source locations, and measurement noise, this inverse problem is ill-posed. Consequently, additional constraints are needed. Classical inverse solvers, often called Minimum Norm Estimates (MNE), promote source estimates with a small ℓ2 norm. Here, we consider a more general class of priors based on mixed-norms. Such norms have the ability to structure the prior in order to incorporate some additional assumptions about the sources. We refer to such solvers as Mixed-Norm Estimates (MxNE). In the context of M/EEG, MxNE can promote spatially focal sources with smooth temporal estimates with a two-level ℓ1/ℓ2 mixed-norm, while a three-level mixed-norm can be used to promote spatially non-overlapping sources between different experimental conditions. In order to efficiently solve the optimization problems of MxNE, we introduce fast first-order iterative schemes that for the ℓ1/ℓ2 norm give solutions in a few seconds making such a prior as convenient as the simple MNE. Furthermore, thanks to the convexity of the optimization problem, we can provide optimality conditions that guarantee global convergence. The utility of the methods is demonstrated both with simulations and experimental MEG data. PMID:22421459

  12. Strategies for automatic processing of large aftershock sequences

    NASA Astrophysics Data System (ADS)

    Kvaerna, T.; Gibbons, S. J.

    2017-12-01

    Aftershock sequences following major earthquakes present great challenges to seismic bulletin generation. The analyst resources needed to locate events increase with increased event numbers as the quality of underlying, fully automatic, event lists deteriorates. While current pipelines, designed a generation ago, are usually limited to single passes over the raw data, modern systems also allow multiple passes. Processing the raw data from each station currently generates parametric data streams that are later subject to phase-association algorithms which form event hypotheses. We consider a major earthquake scenario and propose to define a region of likely aftershock activity in which we will detect and accurately locate events using a separate, specially targeted, semi-automatic process. This effort may use either pattern detectors or more general algorithms that cover wider source regions without requiring waveform similarity. An iterative procedure to generate automatic bulletins would incorporate all the aftershock event hypotheses generated by the auxiliary process, and filter all phases from these events from the original detection lists prior to a new iteration of the global phase-association algorithm.
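    The filter step of the proposed iteration, removing phases already explained by aftershock hypotheses before re-running global association, can be sketched in a few lines; the station names, times, and tolerance below are hypothetical:

```python
def filter_claimed_phases(detections, aftershock_events, tol=2.0):
    """Drop every raw detection already explained by an aftershock
    hypothesis (same station, arrival within `tol` seconds); the
    remainder is handed back to the global phase-association step."""
    def claimed(det):
        sta, t = det
        return any(sta == p_sta and abs(t - p_t) <= tol
                   for ev in aftershock_events
                   for p_sta, p_t in ev["phases"])
    return [d for d in detections if not claimed(d)]

# Hypothetical data: three raw picks, one aftershock hypothesis.
dets = [("ARCES", 100.0), ("ARCES", 250.0), ("NORES", 101.5)]
events = [{"phases": [("ARCES", 100.4), ("NORES", 101.0)]}]
leftover = filter_claimed_phases(dets, events)
```

    Only the unexplained pick survives, so the next pass of the association algorithm works on a much cleaner detection list.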

  13. Performance evaluation approach for the supercritical helium cold circulators of ITER

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vaghela, H.; Sarkar, B.; Bhattacharya, R.

    2014-01-29

    The ITER project design foresees Supercritical Helium (SHe) forced flow cooling for the main cryogenic components, namely, the superconducting (SC) magnets and cryopumps (CP). Therefore, cold circulators have been selected to provide the required SHe mass flow rate to cope with specific operating conditions and technical requirements. Considering the availability impacts of such machines, it has been decided to perform evaluation tests of the cold circulators at operating conditions prior to the series production in order to minimize the project technical risks. A proposal has been conceptualized, evaluated and simulated to perform representative tests of the full scale SHe cold circulators. The objectives of the performance tests include the validation of normal operating condition, transient and off-design operating modes as well as the efficiency measurement. A suitable process and instrumentation diagram of the test valve box (TVB) has been developed to implement the tests at the required thermodynamic conditions. The conceptual engineering design of the TVB has been developed along with the required thermal analysis for the normal operating conditions to support the performance evaluation of the SHe cold circulator.

  14. Status of the Negative Ion Based Heating and Diagnostic Neutral Beams for ITER

    NASA Astrophysics Data System (ADS)

    Schunke, B.; Bora, D.; Hemsworth, R.; Tanga, A.

    2009-03-01

    The current baseline of ITER foresees two Heating Neutral Beam (HNB) systems based on negative ion technology, each accelerating 40 A of D- to 1 MeV and capable of delivering 16.5 MW of D0 to the ITER plasma, with a third HNB injector foreseen as an upgrade option [1]. In addition a dedicated Diagnostic Neutral Beam (DNB) accelerating 60 A of H- to 100 keV will inject ≈15 A equivalent of H0 for charge exchange recombination spectroscopy and other diagnostics. Recently the RF driven negative ion source developed by IPP Garching has replaced the filamented ion source as the reference ITER design. The RF source developed at IPP, which is approximately a quarter scale of the source needed for ITER, is expected to have reduced caesium consumption compared to the filamented arc driven ion source. The RF driven source has demonstrated adequate accelerated D- and H- current densities as well as long-pulse operation [2, 3]. It is foreseen that the HNBs and the DNB will use the same negative ion source. Experiments with a half ITER-size ion source are on-going at IPP and the operation of a full-scale ion source will be demonstrated, at full power and pulse length, in the dedicated Ion Source Test Bed (ISTF), which will be part of the Neutral Beam Test Facility (NBTF), in Padua, Italy. This facility will carry out the necessary R&D for the HNBs for ITER and demonstrate operation of the full-scale HNB beamline. An overview of the current status of the neutral beam (NB) systems and the chosen configuration will be given and the ongoing integration effort into the ITER plant will be highlighted. It will be demonstrated how installation and maintenance logistics have influenced the design, notably the top access scheme facilitating access for maintenance and installation. The impact of the ITER Design Review and recent design change requests (DCRs) will be briefly discussed, including start-up and commissioning issues. The low current hydrogen phase now envisaged for start-up imposed specific requirements for operating the HNBs at full beam power. It has been decided to address the shinethrough issue by installing wall armour protection, which increases the operational space in all scenarios. Other NB related issues identified by the Design Review process will be discussed and the possible changes to the ITER baseline indicated.

  15. Status of the Negative Ion Based Heating and Diagnostic Neutral Beams for ITER

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schunke, B.; Bora, D.; Hemsworth, R.

    2009-03-12

    The current baseline of ITER foresees two Heating Neutral Beam (HNB) systems based on negative ion technology, each accelerating 40 A of D- to 1 MeV and capable of delivering 16.5 MW of D0 to the ITER plasma, with a third HNB injector foreseen as an upgrade option. In addition a dedicated Diagnostic Neutral Beam (DNB) accelerating 60 A of H- to 100 keV will inject ≈15 A equivalent of H0 for charge exchange recombination spectroscopy and other diagnostics. Recently the RF driven negative ion source developed by IPP Garching has replaced the filamented ion source as the reference ITER design. The RF source developed at IPP, which is approximately a quarter scale of the source needed for ITER, is expected to have reduced caesium consumption compared to the filamented arc driven ion source. The RF driven source has demonstrated adequate accelerated D- and H- current densities as well as long-pulse operation. It is foreseen that the HNBs and the DNB will use the same negative ion source. Experiments with a half ITER-size ion source are on-going at IPP and the operation of a full-scale ion source will be demonstrated, at full power and pulse length, in the dedicated Ion Source Test Bed (ISTF), which will be part of the Neutral Beam Test Facility (NBTF), in Padua, Italy. This facility will carry out the necessary R&D for the HNBs for ITER and demonstrate operation of the full-scale HNB beamline. An overview of the current status of the neutral beam (NB) systems and the chosen configuration will be given and the ongoing integration effort into the ITER plant will be highlighted. It will be demonstrated how installation and maintenance logistics have influenced the design, notably the top access scheme facilitating access for maintenance and installation. The impact of the ITER Design Review and recent design change requests (DCRs) will be briefly discussed, including start-up and commissioning issues. The low current hydrogen phase now envisaged for start-up imposed specific requirements for operating the HNBs at full beam power. It has been decided to address the shinethrough issue by installing wall armour protection, which increases the operational space in all scenarios. Other NB related issues identified by the Design Review process will be discussed and the possible changes to the ITER baseline indicated.

  16. Optimal application of Morrison's iterative noise removal for deconvolution. Appendices

    NASA Technical Reports Server (NTRS)

    Ioup, George E.; Ioup, Juliette W.

    1987-01-01

    Morrison's iterative method of noise removal, or Morrison's smoothing, is applied in a simulation to noise-added data sets of various noise levels to determine its optimum use. Morrison's smoothing is applied for noise removal alone, and for noise removal prior to deconvolution. For the latter, an accurate method is analyzed to provide confidence in the optimization. The method consists of convolving the data with an inverse filter calculated by taking the inverse discrete Fourier transform of the reciprocal of the transform of the response of the system. Various length filters are calculated for the narrow and wide Gaussian response functions used. Deconvolution of non-noisy data is performed, and the error in each deconvolution is calculated. Plots are produced of error versus filter length, and from these plots the most accurate filter lengths are determined. The statistical methodologies employed in the optimizations of Morrison's method are similar. A typical peak-type input is selected and convolved with the two response functions to produce the data sets to be analyzed. Both constant and ordinate-dependent Gaussian distributed noise are added to the data, where the noise levels of the data are characterized by their signal-to-noise ratios. The error measures employed in the optimizations are the L1 and L2 norms. Results of the optimizations for both Gaussians, both noise types, and both norms include figures of optimum iteration number and error improvement versus signal-to-noise ratio, and tables of results. The statistical variation of all quantities considered is also given.
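    The inverse-filter construction described above (the inverse DFT of the reciprocal of the response transform) can be sketched for noise-free data; the signal and response widths below are arbitrary choices for the sketch, not the study's:

```python
import numpy as np

# A peak-type input and a Gaussian response, as in the study.
n = 64
x = np.arange(n)
signal = np.exp(-0.5 * ((x - 32) / 1.5) ** 2)
# Response centered at index 0 in wrap-around order so circular
# convolution via FFT introduces no shift.
dist = np.minimum(x, n - x)
response = np.exp(-0.5 * (dist / 1.5) ** 2)
response /= response.sum()

H = np.fft.fft(response)
data = np.fft.ifft(np.fft.fft(signal) * H).real      # noise-free "measurement"

# Inverse filter: inverse DFT of the reciprocal of the response transform.
inverse_filter = np.fft.ifft(1.0 / H)
recovered = np.fft.ifft(np.fft.fft(data) * np.fft.fft(inverse_filter)).real
```

    On noisy data this reciprocal amplifies high-frequency noise without bound, which is exactly why the study applies Morrison's smoothing before deconvolution.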

  17. An iterative reconstruction method for high-pitch helical luggage CT

    NASA Astrophysics Data System (ADS)

    Xue, Hui; Zhang, Li; Chen, Zhiqiang; Jin, Xin

    2012-10-01

    X-ray luggage CT is widely used in airports and railway stations for the purpose of detecting contraband and dangerous goods that may pose a threat to public safety, playing an important role in homeland security. An X-ray luggage CT usually scans in a helical trajectory with a high pitch to achieve a high passing speed of the luggage. The disadvantage of high pitch is that conventional filtered back-projection (FBP) requires a very large slice thickness, leading to poor axial resolution and helical artifacts. Especially when severe data inconsistencies are present in the z-direction, as at the ends of a scanned object, the partial volume effect leads to inaccurate values and may cause misidentification. In this paper, an iterative reconstruction method is developed to improve the image quality and accuracy for a large-spacing multi-detector high-pitch helical luggage CT system. In this method, the slice thickness is set to be much smaller than the pitch. Each slice involves projection data collected in a rather small angular range, being an ill-conditioned limited-angle problem. Firstly a low-resolution reconstruction is employed to obtain images, which are used as prior images in the following process. Then iterative reconstruction is performed to obtain high-resolution images. This method enables a high volume coverage speed and a thin reconstruction slice for the helical luggage CT. We validate this method with data collected in a commercial X-ray luggage CT.

  18. Conjugate gradient coupled with multigrid for an indefinite problem

    NASA Technical Reports Server (NTRS)

    Gozani, J.; Nachshon, A.; Turkel, E.

    1984-01-01

    An iterative algorithm for the Helmholtz equation is presented. The scheme is based on the preconditioned conjugate gradient method for the normal equations. The preconditioning is one cycle of a multigrid method for the discrete Laplacian. The smoothing algorithm is red-black Gauss-Seidel and is constructed so it is a symmetric operator. The total number of iterations needed by the algorithm is independent of h. By varying the number of grids, the number of iterations depends only weakly on k when k³h² is constant. Comparisons with an SSOR preconditioner are presented.
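    The structure of such a scheme, conjugate gradient applied to the normal equations with a Laplacian-based preconditioner, can be sketched on a toy 1D Helmholtz-like system. The explicit matrix inverse below stands in for the paper's multigrid cycle purely for illustration:

```python
import numpy as np

def cgnr(A, b, M_inv, tol=1e-10, max_iter=500):
    """Preconditioned conjugate gradient on the normal equations
    A^T A x = A^T b. `M_inv` plays the role of the preconditioner
    (one multigrid cycle on the Laplacian in the paper; here an
    explicit inverse, sensible only at toy scale)."""
    x = np.zeros_like(b)
    r = A.T @ (b - A @ x)
    z = M_inv @ r
    p = z.copy()
    for _ in range(max_iter):
        Ap = A.T @ (A @ p)
        alpha = (r @ z) / (p @ Ap)
        x += alpha * p
        r_new = r - alpha * Ap
        if np.linalg.norm(r_new) < tol:
            break
        z_new = M_inv @ r_new
        beta = (r_new @ z_new) / (r @ z)
        p = z_new + beta * p
        r, z = r_new, z_new
    return x

# Toy indefinite Helmholtz-like operator: 1D Laplacian minus k^2 I.
n, k2 = 20, 0.25
L = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
A = L - k2 * np.eye(n)
b = np.random.default_rng(0).standard_normal(n)
x = cgnr(A, b, np.linalg.inv(L))       # Laplacian as the preconditioner
```

    Squaring the operator via the normal equations is what makes plain CG applicable to an indefinite matrix, at the price of a squared condition number, which the Laplacian preconditioner then reduces.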

  19. Diagnostics of the ITER neutral beam test facility.

    PubMed

    Pasqualotto, R; Serianni, G; Sonato, P; Agostini, M; Brombin, M; Croci, G; Dalla Palma, M; De Muri, M; Gazza, E; Gorini, G; Pomaro, N; Rizzolo, A; Spolaore, M; Zaniol, B

    2012-02-01

    The ITER heating neutral beam (HNB) injector, based on negative ions accelerated at 1 MV, will be tested and optimized in the SPIDER source and MITICA full injector prototypes, using a set of diagnostics not available on the ITER HNB. The RF source, where the H(-)/D(-) production is enhanced by cesium evaporation, will be monitored with thermocouples, electrostatic probes, optical emission spectroscopy, cavity ring down, and laser absorption spectroscopy. The beam is analyzed by cooling water calorimetry, a short pulse instrumented calorimeter, beam emission spectroscopy, visible tomography, and neutron imaging. Design of the diagnostic systems is presented.

  20. Simultaneous and iterative weighted regression analysis of toxicity tests using a microplate reader.

    PubMed

    Galgani, F; Cadiou, Y; Gilbert, F

    1992-04-01

    A system is described for determination of LC50 or IC50 by an iterative process based on data obtained from a plate reader using a marine unicellular alga as a target species. The esterase activity of Tetraselmis suecica on fluorescein diacetate as a substrate was measured using a fluorescence titerplate reader. Simultaneous analysis of results was performed using an iterative process adopting the sigmoid function Y = y/(1 + (dose of toxicant/IC50)^slope) for dose-response relationships. IC50 (+/- SEM) was estimated (P less than 0.05). An application with phosalone as a toxicant is presented.
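    The sigmoid dose-response model can be fitted by a simple iterative least-squares search; the data, starting values, and grid-refinement schedule below are invented for the sketch (the paper uses simultaneous weighted regression on plate-reader data):

```python
import numpy as np

def sigmoid(dose, y0, ic50, slope):
    """Dose-response model from the paper: Y = y0 / (1 + (dose/IC50)^slope)."""
    return y0 / (1.0 + (dose / ic50) ** slope)

def fit_ic50(dose, resp, y0, iters=40):
    """Crude iterative least-squares fit of IC50 and slope: evaluate a
    log-spaced grid around the current estimate, keep the best point,
    shrink the window, and repeat (a stand-in for the paper's
    simultaneous weighted regression, just to show the model)."""
    ic50, slope, width = float(np.median(dose)), 1.0, 4.0
    for _ in range(iters):
        ics = ic50 * np.logspace(-width, width, 25, base=2.0)
        sls = slope * np.logspace(-width, width, 25, base=2.0)
        grid = [(i, s) for i in ics for s in sls]
        sse = [np.sum((resp - sigmoid(dose, y0, i, s)) ** 2) for i, s in grid]
        ic50, slope = grid[int(np.argmin(sse))]
        width *= 0.7                     # shrink the search window
    return ic50, slope

dose = np.logspace(-2, 2, 9)             # hypothetical dilution series
truth = sigmoid(dose, y0=100.0, ic50=1.7, slope=1.3)
ic50, slope = fit_ic50(dose, truth, y0=100.0)
```

    On this noise-free synthetic series the search recovers the generating IC50 and slope; with real plate-reader data the residuals would be weighted by their variance, as in the abstract.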

  1. Iterative method of construction of a bifurcation diagram of autorotation motions for a system with one degree of freedom

    NASA Astrophysics Data System (ADS)

    Klimina, L. A.

    2018-05-01

    A modification of the Picard approach is suggested, targeted at the construction of a bifurcation diagram of 2π-periodic motions of a mechanical system with a cylindrical phase space. Each iterative step is based on principles of averaging and energy balance similar to the Poincaré-Pontryagin approach. If the iterative procedure converges, it provides the periodic trajectory of the system depending on the bifurcation parameter of the model. The method is applied to describe self-sustained rotations in the model of an aerodynamic pendulum.

  2. Couple of the Variational Iteration Method and Fractional-Order Legendre Functions Method for Fractional Differential Equations

    PubMed Central

    Song, Junqiang; Leng, Hongze; Lu, Fengshun

    2014-01-01

    We present a new numerical method to get the approximate solutions of fractional differential equations. A new operational matrix of integration for fractional-order Legendre functions (FLFs) is first derived. Then a modified variational iteration formula which can avoid “noise terms” is constructed. Finally a numerical method based on variational iteration method (VIM) and FLFs is developed for fractional differential equations (FDEs). Block-pulse functions (BPFs) are used to calculate the FLFs coefficient matrices of the nonlinear terms. Five examples are discussed to demonstrate the validity and applicability of the technique. PMID:24511303

  3. 3D near-to-surface conductivity reconstruction by inversion of VETEM data using the distorted Born iterative method

    USGS Publications Warehouse

    Wang, G.L.; Chew, W.C.; Cui, T.J.; Aydiner, A.A.; Wright, D.L.; Smith, D.V.

    2004-01-01

    Three-dimensional (3D) subsurface imaging by inversion of data obtained from the very early time electromagnetic system (VETEM) was discussed. The study was carried out by using the distorted Born iterative method to match the internal nonlinear property of the 3D inversion problem. The forward solver was based on the total-current formulation bi-conjugate gradient-fast Fourier transform (BCCG-FFT). It was found that the selection of the regularization parameter follows a heuristic rule, as used in the Levenberg-Marquardt algorithm, so that the iteration is stable.

  4. Region of interest processing for iterative reconstruction in x-ray computed tomography

    NASA Astrophysics Data System (ADS)

    Kopp, Felix K.; Nasirudin, Radin A.; Mei, Kai; Fehringer, Andreas; Pfeiffer, Franz; Rummeny, Ernst J.; Noël, Peter B.

    2015-03-01

    Recent advancements in graphics card technology have raised the performance of parallel computing and contributed to the introduction of iterative reconstruction methods for x-ray computed tomography in clinical CT scanners. Iterative maximum likelihood (ML) based reconstruction methods are known to reduce image noise and to improve the diagnostic quality of low-dose CT. However, iterative reconstruction of a region of interest (ROI), especially ML-based, is challenging, yet for some clinical procedures, like cardiac CT, only a ROI is needed for diagnosis. A high-resolution reconstruction of the full field of view (FOV) wastes computation effort and results in reconstruction times slower than clinically acceptable. In this work, we present an extension and evaluation of an existing ROI processing algorithm, proposing in particular improvements to the equalization between regions inside and outside of a ROI. The evaluation was done on data collected from a clinical CT scanner. The performance of the different algorithms is qualitatively and quantitatively assessed. Our solution to the ROI problem provides an increase in signal-to-noise ratio and leads to visually less noise in the final reconstruction. The reconstruction speed of our technique was observed to be comparable with other previously proposed techniques. The development of ROI processing algorithms in combination with iterative reconstruction will provide higher diagnostic quality in the near future.

  5. Experimental validation of an OSEM-type iterative reconstruction algorithm for inverse geometry computed tomography

    NASA Astrophysics Data System (ADS)

    David, Sabrina; Burion, Steve; Tepe, Alan; Wilfley, Brian; Menig, Daniel; Funk, Tobias

    2012-03-01

    Iterative reconstruction methods have emerged as a promising avenue to reduce dose in CT imaging. Another, perhaps less well-known, advance has been the development of inverse geometry CT (IGCT) imaging systems, which can significantly reduce the radiation dose delivered to a patient during a CT scan compared to conventional CT systems. Here we show that IGCT data can be reconstructed using iterative methods, thereby combining two novel methods for CT dose reduction. A prototype IGCT scanner was developed using a scanning beam digital X-ray system, an inverse geometry fluoroscopy system with a 9,000-focal-spot x-ray source and a small photon counting detector. 90 fluoroscopic projections or "superviews" spanning an angle of 360 degrees were acquired of an anthropomorphic phantom mimicking a 1 year-old boy. The superviews were reconstructed with a custom iterative reconstruction algorithm, based on the maximum-likelihood algorithm for transmission tomography (ML-TR). The normalization term was calculated based on flat-field data acquired without a phantom. 15 subsets were used, and a total of 10 complete iterations were performed. Initial reconstructed images showed faithful reconstruction of anatomical details. Good edge resolution and good contrast-to-noise properties were observed. Overall, ML-TR reconstruction of IGCT data collected by a bench-top prototype was shown to be viable, which may be an important milestone in the further development of inverse geometry CT.

  6. The motional Stark effect diagnostic for ITER using a line-shift approach.

    PubMed

    Foley, E L; Levinton, F M; Yuh, H Y; Zakharov, L E

    2008-10-01

    The United States has been tasked with the development and implementation of a motional Stark effect (MSE) system on ITER. In the harsh ITER environment, MSE is particularly susceptible to degradation, as it depends on polarimetry, and the polarization reflection properties of surfaces are highly sensitive to thin film effects due to plasma deposition and erosion of a first mirror. Here we present the results of a comprehensive study considering a new MSE-based approach to internal plasma magnetic field measurements for ITER. The proposed method uses the line shifts in the MSE spectrum (MSE-LS) to provide a radial profile of the magnetic field magnitude. To determine the utility of MSE-LS for equilibrium reconstruction, studies were performed using the ESC-ERV code system. A near-term opportunity to test the use of MSE-LS for equilibrium reconstruction is being pursued in the implementation of MSE with laser-induced fluorescence on NSTX. Though the field values and beam energies are very different from ITER, the use of a laser allows precision spectroscopy with a similar ratio of linewidth to line spacing on NSTX as would be achievable with a passive system on ITER. Simulation results for ITER and NSTX are presented, and the relative merits of the traditional line polarization approach and the new line-shift approach are discussed.

  7. Model Based Iterative Reconstruction for Bright Field Electron Tomography (Postprint)

    DTIC Science & Technology

    2013-02-01

    The algorithm, based on iterative coordinate descent (ICD), works by constructing a substitute for the original cost at every point, and minimizing this... Using Beer's law, the projection integral corresponding to the i-th measurement is given by log(λD/λi). There can be cases in which the dosage λD... Inputs: measurements g, initial reconstruction f′, initial dosage d′, fraction of entries to reject R. Outputs: reconstruction f̂ and dosage parameter d̂.

  8. Iterative repair for scheduling and rescheduling

    NASA Technical Reports Server (NTRS)

    Zweben, Monte; Davis, Eugene; Deale, Michael

    1991-01-01

    An iterative repair search method called constraint-based simulated annealing is described. Simulated annealing is a hill climbing search technique capable of escaping local minima. The utility of the constraint-based framework is shown by comparing search performance with and without the constraint framework on a suite of randomly generated problems. Results of applying the technique to the NASA Space Shuttle ground processing problem are also shown. These experiments show that the search method scales to complex, real-world problems and exhibits interesting anytime behavior.
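    The iterative-repair idea can be illustrated on a toy scheduling problem (tasks that must not share a time slot); the cooling schedule and problem instance below are invented for the sketch:

```python
import math
import random

def repair_schedule(tasks, horizon, conflicts, steps=20000, seed=1):
    """Iterative repair by simulated annealing: start from an arbitrary
    schedule, repeatedly move one task, and accept moves that reduce the
    number of violated constraints -- or occasionally worse ones, with a
    probability that shrinks as the temperature cools (the escape from
    local minima described in the abstract)."""
    rng = random.Random(seed)
    slot = {t: rng.randrange(horizon) for t in tasks}

    def violations(s):
        return sum(1 for a, b in conflicts if s[a] == s[b])

    temp, cost = 2.0, violations(slot)
    for _ in range(steps):
        if cost == 0:
            break                        # fully repaired schedule
        t = rng.choice(tasks)
        old, slot[t] = slot[t], rng.randrange(horizon)
        new_cost = violations(slot)
        if new_cost <= cost or rng.random() < math.exp((cost - new_cost) / temp):
            cost = new_cost              # accept the (possibly worse) move
        else:
            slot[t] = old                # undo the move
        temp *= 0.999
    return slot, cost

# Eight tasks in a ring of mutual conflicts, three available slots.
tasks = list(range(8))
conflicts = [(i, (i + 1) % 8) for i in range(8)]
schedule, remaining = repair_schedule(tasks, 3, conflicts)
```

    In the constraint-based framework of the paper, `violations` would be replaced by a richer measure over typed constraints; the anytime behavior comes from the fact that the current schedule is usable at any point in the loop.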

  9. Improved pressure-velocity coupling algorithm based on minimization of global residual norm

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chatwani, A.U.; Turan, A.

    1991-01-01

    In this paper an improved pressure-velocity coupling algorithm is proposed based on the minimization of the global residual norm. The procedure is applied to the SIMPLE and SIMPLEC algorithms to automatically select the pressure underrelaxation factor that minimizes the global residual norm at each iteration level. Test computations for three-dimensional turbulent, isothermal flow in a toroidal vortex combustor indicate that velocity underrelaxation factors as high as 0.7 can be used to obtain a converged solution in 300 iterations.
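    Choosing the relaxation factor to minimize the global residual norm has a closed form for a linear model problem; the toy system below stands in for the SIMPLE-type CFD iteration, purely to show the selection rule:

```python
import numpy as np

def relaxed_iteration(A, b, steps=200):
    """Jacobi-style iteration in which the relaxation factor at each
    step is chosen to minimize the global residual norm, analogous in
    spirit to the paper's automatic underrelaxation selection (the CFD
    equations are replaced by a toy linear system)."""
    D_inv = 1.0 / np.diag(A)
    x = np.zeros_like(b)
    factors = []
    for _ in range(steps):
        r = b - A @ x
        dx = D_inv * r                    # provisional (Jacobi) update
        Adx = A @ dx
        alpha = (r @ Adx) / (Adx @ Adx)   # exact minimizer of ||b - A(x + a*dx)||
        x += alpha * dx
        factors.append(alpha)
    return x, factors

rng = np.random.default_rng(0)
A = np.diag(np.arange(1.0, 9.0)) + 0.1 * rng.standard_normal((8, 8))
b = rng.standard_normal(8)
x, factors = relaxed_iteration(A, b)
```

    Because alpha is the exact 1D minimizer, the residual norm is non-increasing by construction, which is the property the paper exploits to allow unusually aggressive relaxation factors.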

  10. SU-G-BRA-11: Tumor Tracking in An Iterative Volume of Interest Based 4D CBCT Reconstruction

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Martin, R; Pan, T; Ahmad, M

    2016-06-15

    Purpose: 4D CBCT can allow evaluation of tumor motion immediately prior to radiation therapy, but suffers from heavy artifacts that limit its ability to track tumors. Various iterative and compressed sensing reconstructions have been proposed to reduce these artifacts, but are costly time-wise and can degrade the image quality of bony anatomy for alignment with regularization. We have previously proposed an iterative volume of interest (I4D VOI) method which minimizes reconstruction time and maintains image quality of bony anatomy by focusing a 4D reconstruction within a VOI. The purpose of this study is to test the tumor tracking accuracy of this method compared to existing methods. Methods: Long scan (8-10 mins) CBCT data with corresponding RPM data was collected for 12 lung cancer patients. The full data set was sorted into 8 phases and reconstructed using FDK cone beam reconstruction to serve as a gold standard. The data was reduced in a way that maintains a normal breathing pattern and used to reconstruct 4D images using FDK, low and high regularization TV minimization (λ=2,10), and the proposed I4D VOI method with PTVs used for the VOI. Tumor trajectories were found using rigid registration within the VOI for each reconstruction and compared to the gold standard. Results: The root mean square error (RMSE) values were 2.70mm for FDK, 2.50mm for low regularization TV, 1.48mm for high regularization TV, and 2.34mm for I4D VOI. Streak artifacts in I4D VOI were reduced compared to FDK and images were less blurred than TV reconstructed images. Conclusion: I4D VOI performed at least as well as existing methods in tumor tracking, with the exception of high regularization TV minimization. These results along with the reconstruction time and outside VOI image quality advantages suggest I4D VOI to be an improvement over existing methods. Funding support provided by CPRIT grant RP110562-P2-01.

  11. Automatic treatment plan re-optimization for adaptive radiotherapy guided with the initial plan DVHs

    NASA Astrophysics Data System (ADS)

    Li, Nan; Zarepisheh, Masoud; Uribe-Sanchez, Andres; Moore, Kevin; Tian, Zhen; Zhen, Xin; Jiang Graves, Yan; Gautier, Quentin; Mell, Loren; Zhou, Linghong; Jia, Xun; Jiang, Steve

    2013-12-01

    Adaptive radiation therapy (ART) can reduce normal tissue toxicity and/or improve tumor control through treatment adaptations based on the current patient anatomy. Developing an efficient and effective re-planning algorithm is an important step toward the clinical realization of ART. For the re-planning process, a manual trial-and-error approach to fine-tune planning parameters is time-consuming and is usually considered impractical, especially for online ART. It is desirable to automate this step to yield a plan of acceptable quality with minimal interventions. In ART, prior information in the original plan is available, such as the dose-volume histogram (DVH), which can be employed to facilitate the automatic re-planning process. The goal of this work is to develop an automatic re-planning algorithm to generate a plan with similar, or possibly better, DVH curves compared with the clinically delivered original plan. Specifically, our algorithm iterates the following two loops. The inner loop is the traditional fluence map optimization, in which we optimize a quadratic objective function penalizing the deviation of the dose received by each voxel from its prescribed or threshold dose with a set of fixed voxel weighting factors. In the outer loop, the voxel weighting factors in the objective function are adjusted according to the deviation of the current DVH curves from those in the original plan. The process is repeated until the DVH curves are acceptable or the maximum number of iterations is reached. The whole algorithm is implemented on GPU for high efficiency. The feasibility of our algorithm has been demonstrated with three head-and-neck cancer IMRT cases, each having an initial planning CT scan and another treatment CT scan acquired in the middle of the treatment course. Compared with the DVH curves in the original plan, the DVH curves in the resulting plan using our algorithm with 30 iterations are better for almost all structures. The re-optimization process takes about 30 s using our in-house optimization engine. This work was originally presented at the 54th AAPM annual meeting in Charlotte, NC, July 29-August 2, 2012.
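    The two-loop structure can be sketched at toy scale; the dose matrix, reference plan, and weight-update rule below are simplified stand-ins (the actual algorithm compares DVH curves rather than per-voxel doses, and runs on GPU):

```python
import numpy as np

def replan(A, d_ref, outer_iters=30, inner_iters=200):
    """Inner loop: quadratic fluence-map optimization with fixed voxel
    weights (projected gradient, fluence kept nonnegative).
    Outer loop: raise the weight of voxels whose dose still deviates
    from the reference plan, then re-optimize with a warm start."""
    n_vox, n_beam = A.shape
    w = np.ones(n_vox)
    x = np.zeros(n_beam)
    for _ in range(outer_iters):
        # Safe step size for the current weighted quadratic objective.
        step = 1.0 / (np.linalg.norm(A, 2) ** 2 * w.max())
        for _ in range(inner_iters):
            grad = A.T @ (w * (A @ x - d_ref))
            x = np.maximum(x - step * grad, 0.0)    # fluence is nonnegative
        dev = np.abs(A @ x - d_ref)
        w *= 1.0 + dev                # deviating voxels get penalized harder
    return x

rng = np.random.default_rng(0)
A = rng.uniform(0.0, 1.0, size=(6, 4))          # toy dose matrix
d_ref = A @ np.array([1.0, 0.5, 2.0, 0.0])      # an achievable reference plan
x = replan(A, d_ref)
```

    Because the reference doses here are achievable, the weight updates drive the per-voxel deviations toward zero, mirroring how the paper's DVH-based weight adjustment pushes the new plan's curves onto the original ones.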

  12. Development of CPR security using impact analysis.

    PubMed Central

    Salazar-Kish, J.; Tate, D.; Hall, P. D.; Homa, K.

    2000-01-01

    The HIPAA regulations will require that institutions ensure the prevention of unauthorized access to electronically stored or transmitted patient records. This paper discusses a process for analyzing the impact of security mechanisms on users of computerized patient records through "behind the scenes" electronic access audits. In this way, those impacts can be assessed and refined to an acceptable standard prior to implementation. Through an iterative process of design and evaluation, we develop security algorithms that will protect electronic health information from improper access, alteration or loss, while minimally affecting the flow of work of the user population as a whole. PMID:11079984

  13. Infant/Toddler Environment Rating Scale (ITERS-3). Third Edition

    ERIC Educational Resources Information Center

    Harms, Thelma; Cryer, Debby; Clifford, Richard M.; Yazejian, Noreen

    2017-01-01

    Building on extensive feedback from the field as well as vigorous new research on how best to support infant and toddler development and learning, the authors have revised and updated the widely used "Infant/Toddler Environment Rating Scale." ITERS-3 is the next-generation assessment tool for use in center-based child care programs for…

  14. Incorporating Prototyping and Iteration into Intervention Development: A Case Study of a Dining Hall-Based Intervention

    ERIC Educational Resources Information Center

    McClain, Arianna D.; Hekler, Eric B.; Gardner, Christopher D.

    2013-01-01

    Background: Previous research from the fields of computer science and engineering highlight the importance of an iterative design process (IDP) to create more creative and effective solutions. Objective: This study describes IDP as a new method for developing health behavior interventions and evaluates the effectiveness of a dining hall--based…

  15. Item Purification Does Not Always Improve DIF Detection: A Counterexample with Angoff's Delta Plot

    ERIC Educational Resources Information Center

    Magis, David; Facon, Bruno

    2013-01-01

    Item purification is an iterative process that is often advocated as improving the identification of items affected by differential item functioning (DIF). With test-score-based DIF detection methods, item purification iteratively removes the items currently flagged as DIF from the test scores to get purified sets of items, unaffected by DIF. The…

  16. Applicability of Kerker preconditioning scheme to the self-consistent density functional theory calculations of inhomogeneous systems

    NASA Astrophysics Data System (ADS)

    Zhou, Yuzhi; Wang, Han; Liu, Yu; Gao, Xingyu; Song, Haifeng

    2018-03-01

    The Kerker preconditioner, based on the dielectric function of the homogeneous electron gas, is designed to accelerate the self-consistent field (SCF) iteration in density functional theory calculations. However, a question remains regarding its applicability to inhomogeneous systems. We develop a modified Kerker preconditioning scheme which captures the long-range screening behavior of inhomogeneous systems and thus improves the SCF convergence. The effectiveness and efficiency are shown by tests on long-z slabs of metals, insulators, and metal-insulator contacts. For situations without a priori knowledge of the system, we design an a posteriori indicator to monitor whether the preconditioner has suppressed charge sloshing during the iterations. Based on the a posteriori indicator, we demonstrate two schemes of self-adaptive configuration for the SCF iteration.
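    The Kerker preconditioner itself is a one-line filter in reciprocal space; a 1D sketch follows, where the mixing amplitude A and screening wavevector q0 are illustrative values, not from the paper:

```python
import numpy as np

def kerker_mix(rho_in, rho_out, A=0.8, q0=1.5, box=10.0):
    """One SCF mixing step with the Kerker preconditioner: the density
    residual is damped in reciprocal space by q^2 / (q^2 + q0^2), which
    suppresses the long-wavelength (small-q) components responsible for
    charge sloshing. 1D periodic box for illustration."""
    n = rho_in.size
    q = 2.0 * np.pi * np.fft.fftfreq(n, d=box / n)
    resid = np.fft.fft(rho_out - rho_in)
    damp = A * q**2 / (q**2 + q0**2)      # -> 0 as q -> 0, -> A at large q
    return rho_in + np.fft.ifft(damp * resid).real

# Residual with one long-wavelength and one short-wavelength component.
x = np.linspace(0.0, 10.0, 64, endpoint=False)
rho_in = np.zeros(64)
rho_out = np.cos(2 * np.pi * x / 10.0) + np.cos(2 * np.pi * 8.0 * x / 10.0)
mixed = kerker_mix(rho_in, rho_out)       # long-wavelength part strongly damped
```

    The q = 0 component is damped to exactly zero, so total charge is preserved; the modified scheme in the paper replaces this homogeneous-gas filter with one reflecting the inhomogeneous screening.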

  17. Vortex breakdown simulation

    NASA Technical Reports Server (NTRS)

    Hafez, M.; Ahmad, J.; Kuruvila, G.; Salas, M. D.

    1987-01-01

    In this paper, steady, axisymmetric, inviscid and viscous (laminar) swirling flows representing vortex breakdown phenomena are simulated using a stream function-vorticity-circulation formulation and two numerical methods. The first is based on an inverse iteration, where a norm of the solution is prescribed and the swirling parameter is calculated as part of the output. The second is based on direct Newton iterations, where the linearized equations for all the unknowns are solved simultaneously by an efficient banded Gaussian elimination procedure. Several numerical solutions for inviscid and viscous flows are demonstrated, followed by a discussion of the results. Several improvements on previous work have been achieved: first-order upwind differences are replaced by second-order schemes, the line relaxation procedure (with a linear convergence rate) is replaced by Newton's iterations (which converge quadratically), and Reynolds numbers are extended from 200 up to 1000.
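
    The payoff of replacing a linearly convergent relaxation with Newton iterations is easy to see on a scalar toy problem. A minimal sketch (illustrating only the convergence rates, not the paper's stream function-vorticity solver):

```python
import math

def newton_errors(x0, steps):
    """Newton iterations for x^2 - 2 = 0; returns the error after each
    step. Quadratic convergence: each error is roughly the square of
    the previous one, so the correct digits double per iteration."""
    x, errs = x0, []
    for _ in range(steps):
        x = x - (x * x - 2.0) / (2.0 * x)   # Newton update for f(x) = x^2 - 2
        errs.append(abs(x - math.sqrt(2.0)))
    return errs
```

    Starting from x0 = 2, the errors fall roughly as 8.6e-2, 2.5e-3, 2.1e-6: the number of correct digits doubles each step, whereas a linearly convergent relaxation gains only a fixed number of digits per step.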

  18. Variable aperture-based ptychographical iterative engine method

    NASA Astrophysics Data System (ADS)

    Sun, Aihui; Kong, Yan; Meng, Xin; He, Xiaoliang; Du, Ruijun; Jiang, Zhilong; Liu, Fei; Xue, Liang; Wang, Shouyu; Liu, Cheng

    2018-02-01

    A variable aperture-based ptychographical iterative engine (vaPIE) is demonstrated both numerically and experimentally to reconstruct the sample phase and amplitude rapidly. By adjusting the size of a tiny aperture under the illumination of a parallel light beam to change the illumination on the sample step by step and recording the corresponding diffraction patterns sequentially, both the sample phase and amplitude can be faithfully reconstructed with a modified ptychographical iterative engine (PIE) algorithm. Since far fewer diffraction patterns are required than in common PIE, and the shape, size, and position of the aperture need not be known exactly, the proposed vaPIE method remarkably reduces the data acquisition time and makes PIE less dependent on the mechanical accuracy of the translation stage; therefore, the proposed technique can potentially be applied in a wide range of scientific research.

  19. Reduction of asymmetric wall force in ITER disruptions with fast current quench

    NASA Astrophysics Data System (ADS)

    Strauss, H.

    2018-02-01

    One of the problems caused by disruptions in tokamaks is the asymmetric electromechanical force produced in conducting structures surrounding the plasma. The asymmetric wall force in ITER asymmetric vertical displacement event (AVDE) disruptions is calculated in nonlinear 3D MHD simulations. It is found that the wall force can vary by almost an order of magnitude, depending on the ratio of the current quench time to the resistive wall magnetic penetration time. In ITER, this ratio is relatively low, resulting in a low asymmetric wall force. In JET, this ratio is relatively high, resulting in a high asymmetric wall force. Previous extrapolations based on JET measurements have greatly overestimated the ITER wall force. It is shown that there are two limiting regimes of AVDEs, and it is explained why the asymmetric wall force is different in the two limits.

  20. Leveraging Anderson Acceleration for improved convergence of iterative solutions to transport systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Willert, Jeffrey; Taitano, William T.; Knoll, Dana

    In this note we demonstrate that using Anderson Acceleration (AA) in place of a standard Picard iteration can not only increase the convergence rate but also make the iteration more robust for two transport applications. We also compare the convergence acceleration provided by AA to that provided by moment-based acceleration methods. Additionally, we demonstrate that these two acceleration methods can be used together in a nested fashion. We begin by describing the AA algorithm. We then describe two application problems, one from neutronics and one from plasma physics, on which we apply AA. We provide computational results which highlight the benefits of using AA, namely that we can compute solutions using fewer function evaluations and larger time-steps, and achieve a more robust iteration.
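
    For reference, a minimal Type-II Anderson Acceleration loop (in the Walker-Ni formulation) for a generic fixed point x = g(x) can be sketched as follows; this is not the note's transport implementation, and the history depth `m` is an assumed parameter.

```python
import numpy as np

def anderson(g, x0, m=5, tol=1e-10, maxit=100):
    """Type-II Anderson Acceleration for the fixed point x = g(x).

    Keeps the last few residuals f_k = g(x_k) - x_k and combines the
    iterates so the combined residual is minimized in least squares;
    with an empty history this reduces to a plain Picard step.
    """
    x = np.atleast_1d(np.asarray(x0, dtype=float))
    G, F = [], []                    # histories of g-values and residuals
    for k in range(maxit):
        gx = np.atleast_1d(np.asarray(g(x), dtype=float))
        f = gx - x
        if np.linalg.norm(f) < tol:
            return x, k
        G.append(gx); F.append(f)
        G, F = G[-(m + 1):], F[-(m + 1):]   # keep at most m differences
        if len(F) == 1:
            x = gx                          # first step: plain Picard
        else:
            dF = np.column_stack([F[i + 1] - F[i] for i in range(len(F) - 1)])
            dG = np.column_stack([G[i + 1] - G[i] for i in range(len(G) - 1)])
            gamma, *_ = np.linalg.lstsq(dF, f, rcond=None)
            x = gx - dG @ gamma             # accelerated update
    return x, maxit
```

    On the classic test problem x = cos(x), this converges to the fixed point in far fewer iterations than the roughly 57 Picard steps needed for the same tolerance.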

  1. Enhancing sparsity of Hermite polynomial expansions by iterative rotations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yang, Xiu; Lei, Huan; Baker, Nathan A.

    2016-02-01

    Compressive sensing has become a powerful addition to uncertainty quantification in recent years. This paper identifies new bases for random variables through linear mappings such that the representation of the quantity of interest is more sparse with new basis functions associated with the new random variables. This sparsity increases both the efficiency and accuracy of the compressive sensing-based uncertainty quantification method. Specifically, we consider rotation-based linear mappings which are determined iteratively for Hermite polynomial expansions. We demonstrate the effectiveness of the new method with applications in solving stochastic partial differential equations and high-dimensional (O(100)) problems.

  2. Fast and secure encryption-decryption method based on chaotic dynamics

    DOEpatents

    Protopopescu, Vladimir A.; Santoro, Robert T.; Tolliver, Johnny S.

    1995-01-01

    A method and system for the secure encryption of information. The method comprises the steps of dividing a message of length L into its character components; generating m chaotic iterates from m independent chaotic maps; producing an "initial" value based upon the m chaotic iterates; transforming the "initial" value to create a pseudo-random integer; repeating the steps of generating, producing and transforming until a pseudo-random integer sequence of length L is created; and encrypting the message as ciphertext based upon the pseudo-random integer sequence. A system for accomplishing the invention is also provided.
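
    The patented pipeline can be sketched roughly as below. This toy illustration uses logistic maps in place of the unspecified chaotic maps and XOR as the encryption step; it is an assumption-laden sketch of the idea, not the patented construction, and is not cryptographically secure.

```python
def keystream(seeds, length, r=3.99):
    """Pseudo-random byte sequence from m logistic maps (a sketch).

    Each of the m chaotic maps x <- r*x*(1-x) is iterated once per
    output byte, the m iterates are combined into an 'initial' value,
    and that value is transformed into an integer in [0, 255].
    """
    xs = list(seeds)                      # m independent chaotic states
    out = []
    for _ in range(length):
        xs = [r * x * (1.0 - x) for x in xs]
        combined = sum(xs) % 1.0          # combine the m iterates
        out.append(int(combined * 256) % 256)
    return out

def crypt(data: bytes, seeds) -> bytes:
    """XOR the message with the chaotic keystream (symmetric)."""
    ks = keystream(seeds, len(data))
    return bytes(b ^ k for b, k in zip(data, ks))
```

    Because XOR with the same keystream is its own inverse, applying `crypt` twice with the same seeds recovers the plaintext.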

  3. Gaussian mixed model in support of semiglobal matching leveraged by ground control points

    NASA Astrophysics Data System (ADS)

    Ma, Hao; Zheng, Shunyi; Li, Chang; Li, Yingsong; Gui, Li

    2017-04-01

    Semiglobal matching (SGM) has been widely applied to large aerial images because of its good tradeoff between complexity and robustness. The concept of ground control points (GCPs) is adopted to make SGM more robust. We model the effect of GCPs as two data terms for stereo matching between high-resolution aerial epipolar images in an iterative scheme. One term, based on GCPs, is formulated as a Gaussian mixture model, which strengthens the relation between GCPs and the pixels to be estimated and encodes some degree of consistency between them with respect to disparity values. The other term depends on pixel-wise confidence, and we further design a confidence updating equation based on three rules. With this confidence-based term, the assignment of disparity can be heuristically selected among the disparity search ranges during the iteration process. According to our experiments, several iterations are sufficient to produce satisfactory results. Experimental results validate that the proposed method outperforms surface reconstruction, which is a representative variant of SGM and behaves excellently on aerial images.

  4. Nonrigid iterative closest points for registration of 3D biomedical surfaces

    NASA Astrophysics Data System (ADS)

    Liang, Luming; Wei, Mingqiang; Szymczak, Andrzej; Petrella, Anthony; Xie, Haoran; Qin, Jing; Wang, Jun; Wang, Fu Lee

    2018-01-01

    Advanced 3D optical and laser scanners bring new challenges to computer graphics. We present a novel nonrigid surface registration algorithm based on the Iterative Closest Point (ICP) method with multiple correspondences. Our method, called Nonrigid Iterative Closest Points (NICP), can be applied to surfaces of arbitrary topology. It does not impose any restrictions on the deformation, e.g. rigidity or articulation. Finally, it does not require parametrization of the input meshes. Our method is based on an objective function that combines distance and regularization terms. Unlike standard ICP, the distance term is determined from multiple two-way correspondences rather than single one-way correspondences between surfaces. A Laplacian-based regularization term is proposed to take full advantage of the multiple two-way correspondences. This term regularizes the surface movement by enforcing vertices to move coherently with their 1-ring neighbors. The proposed method achieves good performance when no global pose differences or significant bending exist in the models, for example, families of similar shapes such as human femur and vertebrae models.

  5. Constrained Total Generalized p-Variation Minimization for Few-View X-Ray Computed Tomography Image Reconstruction

    PubMed Central

    Zhang, Hanming; Wang, Linyuan; Yan, Bin; Li, Lei; Cai, Ailong; Hu, Guoen

    2016-01-01

    Total generalized variation (TGV)-based computed tomography (CT) image reconstruction, which utilizes high-order image derivatives, is superior to total variation-based methods in terms of the preservation of edge information and the suppression of unfavorable staircase effects. However, conventional TGV regularization employs an l1-based form, which is not the most direct way to enforce the sparsity prior. In this study, we propose a total generalized p-variation (TGpV) regularization model to improve the sparsity exploitation of TGV and offer efficient solutions to few-view CT image reconstruction problems. To solve the nonconvex optimization problem of the TGpV minimization model, we present an efficient iterative algorithm based on alternating minimization of the augmented Lagrangian function. All of the resulting subproblems decoupled by variable splitting admit explicit solutions obtained by applying the alternating minimization method and a generalized p-shrinkage mapping. In addition, approximate solutions that can be easily implemented and quickly calculated through the fast Fourier transform are derived using the proximal point method to reduce the cost of the inner subproblems. Simulated and real data are qualitatively and quantitatively evaluated to validate the accuracy, efficiency, and feasibility of the proposed method. Overall, the proposed method exhibits reasonable performance and outperforms the original TGV-based method when applied to few-view problems. PMID:26901410
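
    The generalized p-shrinkage mapping mentioned above has a simple closed form (shown here in Chartrand's formulation, which reduces to ordinary soft thresholding at p = 1). The sketch covers only this proximal-style step, not the full augmented Lagrangian solver:

```python
import numpy as np

def p_shrinkage(x, lam, p):
    """Generalized p-shrinkage mapping (Chartrand's form, a sketch).

    For p = 1 this is ordinary soft thresholding; for p < 1 it shrinks
    large coefficients less and small ones more, promoting sparsity.
    """
    mag = np.maximum(np.abs(x) - lam**(2.0 - p) * np.abs(x)**(p - 1.0), 0.0)
    return np.sign(x) * mag
```

    Inside the splitting algorithm this mapping would be applied element-wise to each auxiliary derivative variable once per outer iteration.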

  6. Development of a mirror-based endoscope for divertor spectroscopy on JET with the new ITER-like wall (invited).

    PubMed

    Huber, A; Brezinsek, S; Mertens, Ph; Schweer, B; Sergienko, G; Terra, A; Arnoux, G; Balshaw, N; Clever, M; Edlingdon, T; Egner, S; Farthing, J; Hartl, M; Horton, L; Kampf, D; Klammer, J; Lambertz, H T; Matthews, G F; Morlock, C; Murari, A; Reindl, M; Riccardo, V; Samm, U; Sanders, S; Stamp, M; Williams, J; Zastrow, K D; Zauner, C

    2012-10-01

    A new endoscope with optimised divertor view has been developed in order to survey and monitor the emission of specific impurities such as tungsten and the remaining carbon as well as beryllium in the tungsten divertor of JET after the implementation of the ITER-like wall in 2011. The endoscope is a prototype for testing an ITER relevant design concept based on reflective optics only. It may be subject to high neutron fluxes as expected in ITER. The operating wavelength range, from 390 nm to 2500 nm, allows the measurements of the emission of all expected impurities (W I, Be II, C I, C II, C III) with high optical transmittance (≥ 30% in the designed wavelength range) as well as high spatial resolution that is ≤ 2 mm at the object plane and ≤ 3 mm for the full depth of field (± 0.7 m). The new optical design includes options for in situ calibration of the endoscope transmittance during the experimental campaign, which allows the continuous tracing of possible transmittance degradation with time due to impurity deposition and erosion by fast neutral particles. In parallel to the new optical design, a new type of possibly ITER relevant shutter system based on pneumatic techniques has been developed and integrated into the endoscope head. The endoscope is equipped with four digital CCD cameras, each combined with two filter wheels for narrow band interference and neutral density filters. Additionally, two protection cameras in the λ > 0.95 μm range have been integrated in the optical design for the real time wall protection during the plasma operation of JET.

  7. Single-step reinitialization and extending algorithms for level-set based multi-phase flow simulations

    NASA Astrophysics Data System (ADS)

    Fu, Lin; Hu, Xiangyu Y.; Adams, Nikolaus A.

    2017-12-01

    We propose efficient single-step formulations for reinitialization and extending algorithms, which are critical components of level-set based interface-tracking methods. The level-set field is reinitialized with a single-step (non-iterative) "forward tracing" algorithm. A minimum set of cells is defined that describes the interface, and reinitialization employs only data from these cells. Fluid states are extrapolated or extended across the interface by a single-step "backward tracing" algorithm. Both algorithms, which are motivated by analogy to ray-tracing, avoid the multiple block-boundary data exchanges that are inevitable for iterative reinitialization and extending approaches within a parallel-computing environment. The single-step algorithms are combined with a multi-resolution conservative sharp-interface method and validated on a wide range of benchmark test cases. We demonstrate that the proposed reinitialization method achieves second-order accuracy in conserving the volume of each phase. The interface location is invariant to reapplication of the single-step reinitialization. Generally, we observe smaller absolute errors than for standard iterative reinitialization on the same grid. The computational efficiency is higher than for the standard and typical high-order iterative reinitialization methods. We observe a 2- to 6-times efficiency improvement over the standard method for serial execution. The proposed single-step extending algorithm, which is commonly employed for assigning data to ghost cells with ghost-fluid or conservative interface interaction methods, shows about a 10-times efficiency improvement over the standard method while maintaining the same accuracy. Despite their simplicity, the proposed algorithms offer an efficient and robust alternative to iterative reinitialization and extending methods for level-set based multi-phase simulations.

  8. Development of a mirror-based endoscope for divertor spectroscopy on JET with the new ITER-like wall (invited)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Huber, A.; Brezinsek, S.; Mertens, Ph.

    2012-10-15

    A new endoscope with optimised divertor view has been developed in order to survey and monitor the emission of specific impurities such as tungsten and the remaining carbon as well as beryllium in the tungsten divertor of JET after the implementation of the ITER-like wall in 2011. The endoscope is a prototype for testing an ITER relevant design concept based on reflective optics only. It may be subject to high neutron fluxes as expected in ITER. The operating wavelength range, from 390 nm to 2500 nm, allows the measurements of the emission of all expected impurities (W I, Be II, C I, C II, C III) with high optical transmittance (≥30% in the designed wavelength range) as well as high spatial resolution that is ≤2 mm at the object plane and ≤3 mm for the full depth of field (±0.7 m). The new optical design includes options for in situ calibration of the endoscope transmittance during the experimental campaign, which allows the continuous tracing of possible transmittance degradation with time due to impurity deposition and erosion by fast neutral particles. In parallel to the new optical design, a new type of possibly ITER relevant shutter system based on pneumatic techniques has been developed and integrated into the endoscope head. The endoscope is equipped with four digital CCD cameras, each combined with two filter wheels for narrow band interference and neutral density filters. Additionally, two protection cameras in the λ > 0.95 μm range have been integrated in the optical design for the real time wall protection during the plasma operation of JET.

  9. EmbryoMiner: A new framework for interactive knowledge discovery in large-scale cell tracking data of developing embryos.

    PubMed

    Schott, Benjamin; Traub, Manuel; Schlagenhauf, Cornelia; Takamiya, Masanari; Antritter, Thomas; Bartschat, Andreas; Löffler, Katharina; Blessing, Denis; Otte, Jens C; Kobitski, Andrei Y; Nienhaus, G Ulrich; Strähle, Uwe; Mikut, Ralf; Stegmaier, Johannes

    2018-04-01

    State-of-the-art light-sheet and confocal microscopes allow recording of entire embryos in 3D and over time (3D+t) for many hours. Fluorescently labeled structures can be segmented and tracked automatically in these terabyte-scale 3D+t images, resulting in thousands of cell migration trajectories that provide detailed insights into large-scale tissue reorganization at the cellular level. Here we present EmbryoMiner, a new interactive open-source framework suitable for in-depth analyses and comparisons of entire embryos, including an extensive set of trajectory features. Starting at the whole-embryo level, the framework can be used to iteratively focus on a region of interest within the embryo, to investigate and test specific trajectory-based hypotheses, and to extract quantitative features from the isolated trajectories. Thus, the new framework provides a valuable way to quantitatively compare corresponding anatomical regions in different embryos that were manually selected based on prior biological knowledge. As a proof of concept, we analyzed 3D+t light-sheet microscopy images of zebrafish embryos, showcasing potential user applications that can be performed using the new framework.

  10. Using Formal Grammars to Predict I/O Behaviors in HPC: The Omnisc'IO Approach

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dorier, Matthieu; Ibrahim, Shadi; Antoniu, Gabriel

    2016-08-01

    The increasing gap between the computation performance of post-petascale machines and the performance of their I/O subsystem has motivated many I/O optimizations, including prefetching, caching, and scheduling. In order to further improve these techniques, modeling and predicting spatial and temporal I/O patterns of HPC applications as they run has become crucial. In this paper we present Omnisc'IO, an approach that builds a grammar-based model of the I/O behavior of HPC applications and uses it to predict when future I/O operations will occur, and where and how much data will be accessed. To infer grammars, Omnisc'IO is based on StarSequitur, a novel algorithm extending Nevill-Manning's Sequitur algorithm. Omnisc'IO is transparently integrated into the POSIX and MPI I/O stacks and does not require any modification in applications or higher-level I/O libraries. It works without any prior knowledge of the application and converges to accurate predictions of any N future I/O operations within a couple of iterations. Its implementation is efficient in both computation time and memory footprint.

  11. Ultrasound guided electrical impedance tomography for 2D free-interface reconstruction

    NASA Astrophysics Data System (ADS)

    Liang, Guanghui; Ren, Shangjie; Dong, Feng

    2017-07-01

    The free-interface detection problem commonly arises in industrial and biological processes. Electrical impedance tomography (EIT) is a non-invasive technique with the advantages of high speed and low cost, and is a promising solution for free-interface detection problems. However, due to its ill-posed and nonlinear characteristics, the spatial resolution of EIT is low. To address this issue, an ultrasound guided EIT is proposed to directly reconstruct the geometric configuration of the target free-interface. In this method, the position of the central point of the target interface is measured by a pair of ultrasound transducers mounted on the opposite side of the objective domain, and the position measurement is then used as prior information to guide the EIT-based free-interface reconstruction. During the process, a constrained least squares framework is used to fuse the information from the different measurement modalities, and the Lagrange multiplier-based Levenberg-Marquardt method is adopted to provide the iterative solution of the constrained optimization problem. The numerical results show that the proposed ultrasound guided EIT method for free-interface reconstruction is more accurate than the single modality method, especially when the number of valid electrodes is limited.

  12. Statistical Bayesian method for reliability evaluation based on ADT data

    NASA Astrophysics Data System (ADS)

    Lu, Dawei; Wang, Lizhi; Sun, Yusheng; Wang, Xiaohong

    2018-05-01

    Accelerated degradation testing (ADT) is frequently conducted in the laboratory to predict a product's reliability under normal operating conditions. Two kinds of methods, degradation path models and stochastic process models, are used to analyze degradation data, and the latter is the more popular. However, limitations remain, such as an imprecise solution process and inaccurate estimation of the degradation rate, which may affect the accuracy of the acceleration model and the extrapolated values. Moreover, the usual solution to this problem, the Bayesian method, loses key information when unifying the degradation data. In this paper, a new data processing and parameter inference method based on the Bayesian method is proposed to handle degradation data and solve the problems above. First, a Wiener process and an acceleration model are chosen; second, the initial values of the degradation model and the parameters of the prior and posterior distributions at each stress level are calculated, with the estimates updated iteratively; third, the lifetime and reliability values are estimated on the basis of the estimated parameters; finally, a case study is provided to demonstrate the validity of the proposed method. The results illustrate that the proposed method is effective and accurate in estimating the lifetime and reliability of a product.
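
    The Wiener-process ingredient above can be sketched briefly: degradation increments over a step dt are i.i.d. N(mu*dt, sigma^2*dt), the drift MLE is the mean increment divided by dt, and the mean first-passage time to a failure threshold D is D/mu. All numbers below (drift, noise, threshold) are hypothetical, and this is a plain MLE sketch rather than the paper's Bayesian updating scheme.

```python
import numpy as np

rng = np.random.default_rng(7)

def simulate_path(mu, sigma, dt, n):
    """One Wiener degradation path X(t) = mu*t + sigma*B(t), sampled at
    step dt: increments are i.i.d. N(mu*dt, sigma^2*dt)."""
    inc = rng.normal(mu * dt, sigma * np.sqrt(dt), n)
    return np.concatenate([[0.0], np.cumsum(inc)])

def estimate_drift(path, dt):
    """MLE of the drift from the observed increments."""
    return np.diff(path).mean() / dt

# hypothetical accelerated-stress path: true drift 0.5, failure threshold 10
path = simulate_path(mu=0.5, sigma=0.2, dt=0.1, n=2000)
mu_hat = estimate_drift(path, dt=0.1)
mean_lifetime = 10.0 / mu_hat   # mean first-passage time D / mu
```

    The recovered drift is close to the true 0.5, so the extrapolated mean lifetime lands near the true value of 20 time units.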

  13. Novel Approach for the Recognition and Prediction of Multi-Function Radar Behaviours Based on Predictive State Representations.

    PubMed

    Ou, Jian; Chen, Yongguang; Zhao, Feng; Liu, Jin; Xiao, Shunping

    2017-03-19

    The extensive application of multi-function radars (MFRs) has presented a great challenge to the technologies of radar countermeasures (RCMs) and electronic intelligence (ELINT). The recently proposed cognitive electronic warfare (CEW) provides a good solution, whose crux is to perceive present and future MFR behaviours, including the operating modes, waveform parameters, scheduling schemes, etc. Due to the variety and complexity of MFR waveforms, existing approaches suffer from inefficiency and weak practicability in prediction. A novel method for MFR behaviour recognition and prediction is proposed based on predictive state representations (PSRs). With the proposed approach, the operating modes of an MFR are recognized by accumulating the predictive states, instead of using fixed transition probabilities that are unavailable on the battlefield. This helps reduce the dependence on prior information about the MFR. Furthermore, MFR signals can be quickly predicted by iteratively using the predicted observation, avoiding the very large computation caused by the uncertainty of future observations. Simulations with a hypothetical MFR signal sequence in a typical scenario are presented, showing that the proposed methods perform well and efficiently, which attests to their validity.

  14. Novel Approach for the Recognition and Prediction of Multi-Function Radar Behaviours Based on Predictive State Representations

    PubMed Central

    Ou, Jian; Chen, Yongguang; Zhao, Feng; Liu, Jin; Xiao, Shunping

    2017-01-01

    The extensive application of multi-function radars (MFRs) has presented a great challenge to the technologies of radar countermeasures (RCMs) and electronic intelligence (ELINT). The recently proposed cognitive electronic warfare (CEW) provides a good solution, whose crux is to perceive present and future MFR behaviours, including the operating modes, waveform parameters, scheduling schemes, etc. Due to the variety and complexity of MFR waveforms, existing approaches suffer from inefficiency and weak practicability in prediction. A novel method for MFR behaviour recognition and prediction is proposed based on predictive state representations (PSRs). With the proposed approach, the operating modes of an MFR are recognized by accumulating the predictive states, instead of using fixed transition probabilities that are unavailable on the battlefield. This helps reduce the dependence on prior information about the MFR. Furthermore, MFR signals can be quickly predicted by iteratively using the predicted observation, avoiding the very large computation caused by the uncertainty of future observations. Simulations with a hypothetical MFR signal sequence in a typical scenario are presented, showing that the proposed methods perform well and efficiently, which attests to their validity. PMID:28335492

  15. Reduce beam hardening artifacts of polychromatic X-ray computed tomography by an iterative approximation approach.

    PubMed

    Shi, Hongli; Yang, Zhi; Luo, Shuqian

    2017-01-01

    The beam hardening artifact is one of the most important forms of metal artifact in polychromatic X-ray computed tomography (CT), and it can seriously impair image quality. An iterative approach is proposed to reduce the beam hardening artifact caused by metallic components in polychromatic X-ray CT. According to the Lambert-Beer law, the (detected) projections can be expressed as monotonic nonlinear functions of the element geometry projections, which are the theoretical projections produced only by the pixel intensities (image grayscale) of a given element (component). With the help of prior knowledge of the spectrum distribution of the X-ray beam source and the energy-dependent attenuation coefficients, these functions have explicit expressions. The Newton-Raphson algorithm is employed to solve the functions. The solutions are named the synthetical geometry projections, which are nearly linear weighted sums of the element geometry projections with respect to the mean of each attenuation coefficient. In this process, the attenuation coefficients are modified so that the Newton-Raphson iteration satisfies the convergence conditions of fixed point iteration (FPI), ensuring that the solutions approach the true synthetical geometry projections stably. The underlying images are obtained from the projections by standard reconstruction algorithms such as filtered back projection (FBP). The image gray values are adjusted according to the attenuation coefficient means to obtain proper CT numbers. Several examples demonstrate that the proposed approach is efficient in reducing beam hardening artifacts and performs satisfactorily in terms of several general criteria. In a simulation example, the normalized root mean square difference (NRMSD) is reduced by 17.52% compared to a recent algorithm. Since the element geometry projections are free from the effect of beam hardening, their nearly linear weighted sum, the synthetical geometry projections, is almost free from the effect of beam hardening as well. By working out the synthetical geometry projections, the proposed approach becomes quite efficient in reducing beam hardening artifacts.
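
    The per-ray inversion step described above, solving the monotonic polychromatic projection function for the geometry projection, can be sketched with a scalar Newton-Raphson iteration. The two-energy spectrum below is hypothetical, and the sketch shows only the root-finding step, not the paper's FPI-stabilized pipeline:

```python
import math

def poly_projection(t, spectrum, mus):
    """Polychromatic projection p(t) = -ln sum_i s_i exp(-mu_i * t),
    a monotonically increasing, concave function of the path length t."""
    return -math.log(sum(s * math.exp(-mu * t) for s, mu in zip(spectrum, mus)))

def invert_projection(p, spectrum, mus, t0=0.0, tol=1e-12, maxit=50):
    """Newton-Raphson solve of p = poly_projection(t) for t, using the
    analytic derivative dp/dt = sum_i w_i mu_i / sum_i w_i > 0."""
    t = t0
    for _ in range(maxit):
        w = [s * math.exp(-mu * t) for s, mu in zip(spectrum, mus)]
        total = sum(w)
        f = -math.log(total) - p              # residual of the projection equation
        df = sum(wi * mu for wi, mu in zip(w, mus)) / total
        t_new = t - f / df                    # Newton update
        if abs(t_new - t) < tol:
            return t_new
        t = t_new
    return t
```

    Round-tripping a known path length through the forward model and the Newton inversion recovers it to high precision, because the function is strictly monotone.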

  16. Correction of phase velocity bias caused by strong directional noise sources in high-frequency ambient noise tomography: a case study in Karamay, China

    NASA Astrophysics Data System (ADS)

    Wang, K.; Luo, Y.; Yang, Y.

    2016-12-01

    We collect two months of ambient noise data recorded by 35 broadband seismic stations in a 9×11 km area near Karamay, China, and compute cross-correlations of the noise data between all station pairs. Array beamforming analysis of the ambient noise data shows that the noise sources are unevenly distributed and that the most energetic ambient noise comes mainly from azimuths of 40°-70°. As a consequence of the strong directional noise sources, surface wave waveforms of the cross-correlations at 1-5 Hz show clear azimuthal dependence, and direct dispersion measurements from the cross-correlations are strongly biased by the dominant noise energy. This bias means that the dispersion measurements do not accurately reflect the interstation velocities of surface waves propagating directly from one station to the other; that is, the cross-correlation functions do not accurately retrieve the Empirical Green's Functions. To correct the bias caused by unevenly distributed noise sources, we adopt an iterative inversion procedure based on plane-wave modeling, which includes three steps: (1) surface wave tomography, (2) estimation of ambient noise energy, and (3) phase velocity correction. First, we use synthesized data to test the efficiency and stability of the iterative procedure for both homogeneous and heterogeneous media. The tests show that: (1) the phase velocity bias caused by directional noise sources is significant, reaching 2% and 10% for homogeneous and heterogeneous media, respectively; and (2) the bias can be corrected by the iterative inversion procedure, with the convergence of the inversion depending on the starting phase velocity map and the complexity of the medium. By applying the iterative approach to the real data from Karamay, we further show that the phase velocity maps converge after ten iterations and that the map based on corrected interstation dispersion measurements is more consistent with results from geological surveys than the one based on uncorrected measurements. As ambient noise in the high frequency band (>1 Hz) is mostly related to human activities or climate events, both of which have strong directivity, the iterative approach demonstrated here helps improve the accuracy and resolution of ANT in imaging shallow earth structures.

  17. Framing of scientific knowledge as a new category of health care research.

    PubMed

    Salvador-Carulla, Luis; Fernandez, Ana; Madden, Rosamond; Lukersmith, Sue; Colagiuri, Ruth; Torkfar, Ghazal; Sturmberg, Joachim

    2014-12-01

    The new area of health system research requires a revision of the taxonomy of scientific knowledge that may facilitate a better understanding and representation of complex health phenomena in research discovery, corroboration and implementation. A position paper by an expert group following an iterative approach. 'Scientific evidence' should be differentiated from the 'elicited knowledge' of experts and users, and this latter typology should be described beyond the traditional qualitative framework. Within this context, 'framing of scientific knowledge' (FSK) is defined as a group of studies of prior expert knowledge specifically aimed at generating formal scientific frames. To be distinguished from other unstructured frames, FSK must be explicit, standardized, based on the available evidence, agreed by a group of experts and subject to the principles of commensurability, transparency for corroboration and transferability that characterize scientific research. A preliminary typology of scientific framing studies is presented. This typology includes, among others, health declarations, position papers, expert-based clinical guides, conceptual maps, classifications, expert-driven health atlases and expert-driven studies of costs and burden of illness. This grouping of expert-based studies constitutes a different kind of scientific knowledge and should be clearly differentiated from 'evidence' gathered from experimental and observational studies in health system research. © 2014 John Wiley & Sons, Ltd.

  18. Fast global image smoothing based on weighted least squares.

    PubMed

    Min, Dongbo; Choi, Sunghwan; Lu, Jiangbo; Ham, Bumsub; Sohn, Kwanghoon; Do, Minh N

    2014-12-01

    This paper presents an efficient technique for spatially inhomogeneous edge-preserving image smoothing, called the fast global smoother. Focusing on sparse Laplacian matrices consisting of a data term and a prior term (typically defined using four or eight neighbors for a 2D image), our approach efficiently solves such global objective functions. In particular, we approximate the solution of the memory- and computation-intensive large linear system, defined over a d-dimensional spatial domain, by solving a sequence of 1D subsystems. Our separable implementation enables applying a linear-time tridiagonal matrix algorithm to solve d three-point Laplacian matrices iteratively. Our approach combines the best of two paradigms, i.e., efficient edge-preserving filters and optimization-based smoothing. Our method has a runtime comparable to the fast edge-preserving filters, but its global optimization formulation overcomes many limitations of the local filtering approaches. Our method also achieves results of quality comparable to the state-of-the-art optimization-based techniques, but runs ∼10-30 times faster. Furthermore, exploiting the flexibility in defining an objective function, we propose generalized fast algorithms that perform Lγ norm smoothing (0 < γ < 2) and support an aggregated (robust) data term for handling imprecise data constraints. We demonstrate the effectiveness and efficiency of our techniques in a range of image processing and computer graphics applications.
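
    The separable solver described above reduces each pass to a three-point (tridiagonal) system per scanline. The following is a minimal sketch of one such 1D weighted-least-squares pass solved with the linear-time Thomas algorithm; the function name, the input convention (w[i] links samples i and i+1), and the single regularization weight lam are illustrative assumptions, not the authors' implementation.

```python
def smooth_1d_wls(f, w, lam):
    """Solve (I + lam*A) u = f for one 1D scanline, where A is the
    weighted Laplacian built from inter-sample smoothness weights w
    (w[i] couples samples i and i+1; small w preserves an edge).
    The system is tridiagonal, so the Thomas algorithm solves it in O(n)."""
    n = len(f)
    a = [0.0] * n  # sub-diagonal
    b = [0.0] * n  # diagonal
    c = [0.0] * n  # super-diagonal
    for i in range(n):
        wl = w[i - 1] if i > 0 else 0.0
        wr = w[i] if i < n - 1 else 0.0
        a[i] = -lam * wl
        c[i] = -lam * wr
        b[i] = 1.0 + lam * (wl + wr)
    # Forward sweep of the Thomas algorithm...
    cp = [0.0] * n
    dp = [0.0] * n
    cp[0] = c[0] / b[0]
    dp[0] = f[0] / b[0]
    for i in range(1, n):
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m
        dp[i] = (f[i] - a[i] * dp[i - 1]) / m
    # ...then back substitution.
    u = [0.0] * n
    u[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        u[i] = dp[i] - cp[i] * u[i + 1]
    return u
```

    A 2D smoother in this spirit would apply such a pass along every row, then every column, and iterate; a zero weight across an edge decouples the system there, which is what preserves the edge.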

  19. "I need to hear from women who have 'been there'": Developing a woman-focused intervention for drug use and partner violence in the emergency department.

    PubMed

    Choo, Esther; Guthrie, K Morrow; Mello, Michael; Wetle, Terrie F; Ranney, Megan; Tapé, Chantal; Zlotnick, Caron

    2016-04-01

    Addressing violence and linking women to community services in parallel with drug change goals is critical for women with coexisting intimate partner violence (IPV) and substance use disorders (SUD). Our objective was to develop a Web-based intervention to address violence and drug use among women patients in the emergency department (ED). The intervention was developed in a five-step process: 1) initial intervention development based on selected theoretical frameworks; 2) in-depth interviews with the target population; 3) intervention adaptation, with iterative feedback from further interviews; 4) beta testing and review by an advisory committee of domestic violence advocates; and 5) acceptability and feasibility testing in a small open trial. Themes supported the selection of motivational interviewing (MI) and empowerment models but also guided major adaptations to the intervention, including the introduction of videos and a more robust booster phone call. Participants in the open trial reported high scores for satisfaction, usability, and consistency with essential elements of MI. This qualitative work with our target population of women in the ED with SUD experiencing IPV underscored the importance of connection to peers and empathetic human contact. We developed an acceptable and feasible intervention distinct from prior ED-based brief interventions for substance-using populations.

  20. Measuring Blood Glucose Concentrations in Photometric Glucometers Requiring Very Small Sample Volumes.

    PubMed

    Demitri, Nevine; Zoubir, Abdelhak M

    2017-01-01

    Glucometers are an important self-monitoring tool for diabetes patients and must therefore exhibit high accuracy as well as good usability. Building on an invasive photometric measurement principle that drastically reduces the volume of the blood sample needed from the patient, we present a framework that is capable of dealing with small blood samples while maintaining the required accuracy. The framework consists of two major parts: 1) image segmentation; and 2) convergence detection. Step 1 is based on iterative mode-seeking methods that estimate the intensity value of the region of interest. We present several variations of these methods and give theoretical proofs of their convergence. Our approach is able to deal with changes in the number and position of clusters without any prior knowledge. Furthermore, we propose a method based on sparse approximation to decrease the computational load while maintaining accuracy. Step 2 is achieved by employing temporal tracking and prediction, thereby decreasing the measurement time and thus improving usability. Our framework is tested on several real datasets with different characteristics. We show that we are able to estimate the underlying glucose concentration from much smaller blood samples than the current state of the art, with sufficient accuracy according to the most recent ISO standards, and to reduce measurement time significantly compared with state-of-the-art methods.
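
    Step 1's iterative mode seeking can be illustrated with a generic mean-shift-style update, in which an estimate is repeatedly moved to the kernel-weighted mean of nearby intensity samples until it settles on a density mode. This is a simplified sketch (Gaussian kernel, 1D samples, hypothetical function name), not the paper's specific variants or convergence proofs.

```python
from math import exp

def mode_seek(samples, x0, bandwidth=1.0, iters=50, tol=1e-6):
    """Iteratively shift an estimate toward the nearest density mode:
    each step replaces x with the Gaussian-kernel weighted mean of the
    samples, a mean-shift-style fixed-point update."""
    x = x0
    for _ in range(iters):
        w = [exp(-0.5 * ((s - x) / bandwidth) ** 2) for s in samples]
        x_new = sum(wi * si for wi, si in zip(w, samples)) / sum(w)
        if abs(x_new - x) < tol:
            break
        x = x_new
    return x
```

    Because the update only ever uses the samples within reach of the kernel, the same routine adapts automatically when the number or position of intensity clusters changes.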

  1. Automated segmentation of dental CBCT image with prior-guided sequential random forests

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Li; Gao, Yaozong; Shi, Feng

    Purpose: Cone-beam computed tomography (CBCT) is an increasingly utilized imaging modality for the diagnosis and treatment planning of patients with craniomaxillofacial (CMF) deformities. Accurate segmentation of CBCT images is an essential step in generating 3D models for such diagnosis and treatment planning. However, due to the image artifacts caused by beam hardening, imaging noise, inhomogeneity, truncation, and maximal intercuspation, it is difficult to segment CBCT images. Methods: In this paper, the authors present a new automatic segmentation method to address these problems. Specifically, the authors first employ a majority voting method to estimate the initial segmentation probability maps of both mandible and maxilla based on multiple aligned expert-segmented CBCT images. These probability maps provide important prior guidance for CBCT segmentation. The authors then extract both appearance features from the CBCTs and context features from the initial probability maps to train the first layer of random forest classifiers, which can select discriminative features for segmentation. Based on the first layer of trained classifiers, the probability maps are updated and then employed to train the next layer of random forest classifiers. By iteratively training subsequent random forest classifiers using both the original CBCT features and the updated segmentation probability maps, a sequence of classifiers can be derived for accurate segmentation of CBCT images. Results: Segmentation results on CBCTs of 30 subjects were both quantitatively and qualitatively validated against manually labeled ground truth. The average Dice ratios for mandible and maxilla with the authors' method were 0.94 and 0.91, respectively, which are significantly better than the state-of-the-art method based on sparse representation (p-value < 0.001). Conclusions: The authors have developed and validated a novel fully automated method for CBCT segmentation.

  2. Continuous analog of multiplicative algebraic reconstruction technique for computed tomography

    NASA Astrophysics Data System (ADS)

    Tateishi, Kiyoko; Yamaguchi, Yusaku; Abou Al-Ola, Omar M.; Kojima, Takeshi; Yoshinaga, Tetsuya

    2016-03-01

    We propose a hybrid dynamical system as a continuous analog of the block-iterative multiplicative algebraic reconstruction technique (BI-MART), a well-known iterative image reconstruction algorithm for computed tomography. The hybrid system is described by a switched nonlinear system with a piecewise smooth vector field or differential equation and, for consistent inverse problems, the convergence of non-negatively constrained solutions to a globally stable equilibrium is guaranteed by Lyapunov's theorem. Namely, we can prove theoretically that a weighted Kullback-Leibler divergence measure can serve as a common Lyapunov function for the switched system. We show that discretizing the differential equation using the first-order approximation (Euler's method) based on the geometric multiplicative calculus leads to the same iterative formula as the BI-MART, with the scaling parameter acting as the time step of the numerical discretization. The present paper is the first to reveal that this kind of iterative image reconstruction algorithm can be constructed by discretizing a continuous-time dynamical system for solving tomographic inverse problems. Iterative algorithms obtained by discretizing the continuous-time system not only with the Euler method but also with lower-order Runge-Kutta methods can be used for image reconstruction. A numerical example showing the characteristics of the discretized iterative methods is presented.
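
    The discrete counterpart obtained from the geometric (multiplicative) Euler step is the familiar multiplicative ART update. The sketch below shows one such sweep for a small dense system; the function name and the simple row-by-row looping are illustrative assumptions rather than the paper's block-iterative formulation.

```python
def mart_step(x, A, b, gamma=1.0):
    """One multiplicative ART sweep: for each measurement i, scale every
    unknown j by (b_i / (A x)_i) ** (gamma * A[i][j]).  This is exactly a
    geometric (multiplicative) Euler step of the continuous system, with
    gamma playing the role of the time step; x must stay positive."""
    for i, row in enumerate(A):
        est = sum(aij * xj for aij, xj in zip(row, x))
        ratio = b[i] / est
        x = [xj * ratio ** (gamma * aij) for aij, xj in zip(row, x)]
    return x
```

    For a consistent system with a positive solution and exponents gamma*A[i][j] <= 1, repeated sweeps drive the residual toward zero while keeping the iterate non-negative by construction.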

  3. Multiphysics Engineering Analysis for an Integrated Design of ITER Diagnostic First Wall and Diagnostic Shield Module Design

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhai, Y.; Loesser, G.; Smith, M.

    ITER diagnostic first walls (DFWs) and diagnostic shield modules (DSMs) inside the port plugs (PPs) are designed to protect diagnostic instruments and components from a harsh plasma environment and to provide structural support while allowing diagnostic access to the plasma. The design of the DFWs and DSMs is driven by 1) plasma radiation and nuclear heating during normal operation and 2) electromagnetic loads during plasma events and the associated component structural responses. A multi-physics engineering analysis protocol for the design has been established at Princeton Plasma Physics Laboratory and was used for the design of the ITER DFWs and DSMs. The analyses were performed to address challenging design issues based on the resultant stresses and deflections of the DFW-DSM-PP assembly for the main load cases. The ITER Structural Design Criteria for In-Vessel Components (SDC-IC) required for design by analysis, and three major issues driving the mechanical design of the ITER DFWs, are discussed. General guidelines for the DSM design have been established as a result of design parametric studies.

  4. Twostep-by-twostep PIRK-type PC methods with continuous output formulas

    NASA Astrophysics Data System (ADS)

    Cong, Nguyen Huu; Xuan, Le Ngoc

    2008-11-01

    This paper deals with parallel predictor-corrector (PC) iteration methods based on collocation Runge-Kutta (RK) corrector methods with continuous output formulas for solving nonstiff initial-value problems (IVPs) for systems of first-order differential equations. At the nth step, the continuous output formulas are used not only for predicting the stage values in the PC iteration methods but also for calculating the step values at the (n+2)th step. In this way, the integration process can proceed two steps at a time. The resulting twostep-by-twostep (TBT) parallel-iterated RK-type (PIRK-type) methods with continuous output formulas (twostep-by-twostep PIRKC methods, or TBTPIRKC methods) give a faster integration process. Fixed-stepsize applications of these TBTPIRKC methods to a few widely used test problems reveal that the new PC methods are much more efficient than the well-known parallel-iterated RK (PIRK) methods, the parallel-iterated RK-type PC methods with continuous output formulas (PIRKC methods), and the sequential explicit RK codes DOPRI5 and DOP853 available in the literature.
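
    The predictor-corrector pattern underlying such PC iterations can be illustrated with a minimal scalar PECE step: an explicit predictor followed by a few fixed-point corrections of an implicit rule. This sketch uses explicit Euler as the predictor and the trapezoidal rule as the corrector, which is far simpler than the collocation RK correctors of the paper; all names are hypothetical.

```python
def pece_step(f, t, y, h, corrector_iters=2):
    """One predictor-corrector step for y' = f(t, y):
    P  - explicit Euler predicts the new value,
    EC - the implicit trapezoidal rule is then corrected by a fixed
         number of fixed-point (PC) iterations, the pattern that the
         PIRK-type methods parallelize across RK stages."""
    y_next = y + h * f(t, y)                          # P: predictor
    for _ in range(corrector_iters):                  # (EC)^m iterations
        y_next = y + 0.5 * h * (f(t, y) + f(t + h, y_next))
    return y_next
```

    A small, fixed number of corrections usually suffices for nonstiff problems, which is what makes the overall iteration cheap and parallelizable.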

  5. The Research of Multiple Attenuation Based on Feedback Iteration and Independent Component Analysis

    NASA Astrophysics Data System (ADS)

    Xu, X.; Tong, S.; Wang, L.

    2017-12-01

    Multiple suppression is a difficult problem in seismic data processing. The traditional technology for multiple attenuation is based on the principle of minimizing the output energy of the seismic signal; this criterion relies on second-order statistics and cannot attenuate multiples when the primaries and multiples are non-orthogonal. To solve this problem, we combine a feedback iteration method based on the wave equation with an improved independent component analysis (ICA) based on higher-order statistics to suppress the multiples. We first use the iterative feedback method to predict the free-surface multiples of each order. Then, to match the predicted multiples to the real multiples in amplitude and phase, we design an expanded pseudo-multichannel matching filtering method that yields a more accurate matching result. Finally, we apply an improved fast ICA algorithm, based on the maximum non-Gaussianity criterion for the output signal, to the matched multiples and obtain better separation of the primaries and the multiples. The advantage of our method is that it requires no prior information for predicting the multiples and achieves better separation results. The method has been applied to several synthetic datasets generated by the finite-difference modeling technique and to the Sigsbee2B model multiple data; the primaries and multiples are non-orthogonal in these models. The experiments show that after three to four iterations we obtain accurate multiple predictions. Using our matching method and fast ICA adaptive multiple subtraction, we not only effectively preserve the energy of the effective waves in the seismic records but also effectively suppress the free-surface multiples, especially the multiples related to the middle and deep areas.

  6. Determination of optimal imaging settings for urolithiasis CT using filtered back projection (FBP), statistical iterative reconstruction (IR) and knowledge-based iterative model reconstruction (IMR): a physical human phantom study

    PubMed Central

    Choi, Se Y; Ahn, Seung H; Choi, Jae D; Kim, Jung H; Lee, Byoung-Il; Kim, Jeong-In

    2016-01-01

    Objective: The purpose of this study was to compare CT image quality for evaluating urolithiasis using filtered back projection (FBP), statistical iterative reconstruction (IR) and knowledge-based iterative model reconstruction (IMR) across various scan parameters and radiation doses. Methods: A 5 × 5 × 5 mm³ uric acid stone was placed in a physical human phantom at the level of the pelvis. 3 tube voltages (120, 100 and 80 kV) and 4 current–time products (100, 70, 30 and 15 mAs) were implemented in 12 scans. Each scan was reconstructed with FBP, statistical IR (Levels 5–7) and knowledge-based IMR (soft-tissue Levels 1–3). The radiation dose, objective image quality and signal-to-noise ratio (SNR) were evaluated, and subjective assessments were performed. Results: The effective doses ranged from 0.095 to 2.621 mSv. Knowledge-based IMR showed better objective image noise and SNR than did FBP and statistical IR. The subjective image noise of FBP was worse than that of statistical IR and knowledge-based IMR. The subjective assessment scores deteriorated below a break point of 100 kV and 30 mAs. Conclusion: At the setting of 100 kV and 30 mAs, the radiation dose can be decreased by approximately 84% while maintaining the subjective image assessment. Advances in knowledge: Patients with urolithiasis can be evaluated with ultralow-dose non-enhanced CT using a knowledge-based IMR algorithm at a substantially reduced radiation dose with imaging quality preserved, thereby minimizing the risks of radiation exposure while providing clinically relevant diagnostic benefits for patients. PMID:26577542

  7. Effects of Discovery, Iteration, and Collaboration in Laboratory Courses on Undergraduates' Research Career Intentions Fully Mediated by Student Ownership

    ERIC Educational Resources Information Center

    Corwin, Lisa A.; Runyon, Christopher R.; Ghanem, Eman; Sandy, Moriah; Clark, Greg; Palmer, Gregory C.; Reichler, Stuart; Rodenbusch, Stacia E.; Dolan, Erin L.

    2018-01-01

    Course-based undergraduate research experiences (CUREs) provide a promising avenue to attract a larger and more diverse group of students into research careers. CUREs are thought to be distinctive in offering students opportunities to make discoveries, collaborate, engage in iterative work, and develop a sense of ownership of their lab course…

  8. A multigrid LU-SSOR scheme for approximate Newton iteration applied to the Euler equations

    NASA Technical Reports Server (NTRS)

    Yoon, Seokkwan; Jameson, Antony

    1986-01-01

    A new efficient relaxation scheme used in conjunction with a multigrid method is developed for the Euler equations. The LU-SSOR scheme is based on a central difference scheme and does not need flux splitting for the Newton iteration. Application to transonic flow shows that the new method surpasses the performance of the LU implicit scheme.
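
    The SSOR relaxation at the heart of such schemes sweeps the unknowns forward and then backward with a relaxation factor. A minimal sketch for a small dense linear system follows; the function name and the dense-matrix representation are illustrative assumptions, and the actual LU-SSOR scheme operates on the discretized Euler equations rather than a generic Ax = b.

```python
def ssor_sweep(A, b, x, omega=1.2):
    """One symmetric SOR sweep on Ax = b: a forward Gauss-Seidel/SOR
    pass over the unknowns followed by a backward pass, the classic
    relaxation pattern used as a smoother inside multigrid methods."""
    n = len(b)
    for order in (range(n), range(n - 1, -1, -1)):
        for i in order:
            s = sum(A[i][j] * x[j] for j in range(n) if j != i)
            # blend the Gauss-Seidel update with the old value via omega
            x[i] += omega * ((b[i] - s) / A[i][i] - x[i])
    return x
```

    For symmetric positive-definite systems the sweep converges for 0 < omega < 2; inside a multigrid cycle only a few sweeps are used to damp the high-frequency error components.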

  9. LiDAR-IMU Time Delay Calibration Based on Iterative Closest Point and Iterated Sigma Point Kalman Filter

    PubMed Central

    Liu, Wanli

    2017-01-01

    The time delay calibration between Light Detection and Ranging (LiDAR) and Inertial Measurement Units (IMUs) is an essential prerequisite for their combined applications. However, the correspondences between LiDAR and IMU measurements are usually unknown and thus cannot be used directly for the time delay calibration. To solve the LiDAR-IMU time delay calibration problem, this paper presents a fusion method based on the iterative closest point (ICP) algorithm and the iterated sigma point Kalman filter (ISPKF), which combines the advantages of both. The ICP algorithm can precisely determine the unknown transformation between the LiDAR and IMU, and the ISPKF algorithm can optimally estimate the time delay calibration parameters. First, the coordinate transformation from the LiDAR frame to the IMU frame is realized. Second, the measurement model and the time delay error model of the LiDAR and IMU are established. Third, the methodology of the ICP and ISPKF procedure for LiDAR-IMU time delay calibration is presented. Experimental results are presented that validate the proposed method and demonstrate that the time delay error can be accurately calibrated. PMID:28282897
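
    The ICP half of the method alternates nearest-neighbour matching with an alignment update. The toy sketch below estimates a pure 2D translation, which is far simpler than the full LiDAR-IMU transformation, and all names are hypothetical assumptions for illustration.

```python
def icp_translation(src, dst, iters=20):
    """Minimal 2-D ICP estimating a pure translation: match each
    (shifted) source point to its nearest destination point, then update
    the shift with the mean residual; ICP alternates these two stages
    until the correspondences stop changing."""
    tx, ty = 0.0, 0.0
    for _ in range(iters):
        dx_sum = dy_sum = 0.0
        for (sx, sy) in src:
            px, py = sx + tx, sy + ty
            # nearest-neighbour correspondence (the 'closest point' step)
            qx, qy = min(dst, key=lambda q: (q[0] - px) ** 2 + (q[1] - py) ** 2)
            dx_sum += qx - px
            dy_sum += qy - py
        tx += dx_sum / len(src)
        ty += dy_sum / len(src)
    return tx, ty
```

    A full implementation would also estimate rotation (e.g., via an SVD of the matched point pairs) and reject outlier matches; the alternating structure stays the same.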

  10. Strongly Coupled Fluid-Body Dynamics in the Immersed Boundary Projection Method

    NASA Astrophysics Data System (ADS)

    Wang, Chengjie; Eldredge, Jeff D.

    2014-11-01

    A computational algorithm is developed to simulate dynamically coupled interaction between fluid and rigid bodies. The basic computational framework is built upon a multi-domain immersed boundary method library, whirl, developed in previous work. In this library, the Navier-Stokes equations for incompressible flow are solved on a uniform Cartesian grid by the vorticity-based immersed boundary projection method of Colonius and Taira. A solver for the dynamics of rigid-body systems is also included. The fluid and rigid-body solvers are strongly coupled with an iterative approach based on the block Gauss-Seidel method. The interfacial force, with its intimate connection to the Lagrange multipliers used in the fluid solver, is used as the primary iteration variable. Relaxation, derived from a stability analysis of the iterative scheme, is used to achieve convergence in only 2-4 iterations per time step. Several two- and three-dimensional numerical tests are conducted to validate and demonstrate the method, including flapping of flexible wings, self-excited oscillations of a system of linked plates, and three-dimensional propulsion of a flexible fluked tail. This work has been supported by AFOSR, under Award FA9550-11-1-0098.
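
    The strong-coupling iteration can be pictured as a relaxed fixed-point loop on the interfacial force: the body solver consumes the current force estimate, the fluid solver returns an updated force, and under-relaxation stabilizes the exchange. The sketch below uses scalar stand-ins for the two solvers; the function names and the linear model in the test are hypothetical.

```python
def coupled_solve(fluid_force, body_update, x0, relax=0.5,
                  tol=1e-10, max_iter=100):
    """Partitioned strong coupling in the block Gauss-Seidel style:
    within one time step, alternate the body solve (state from force)
    and the fluid solve (force from state), using the interfacial force
    as the iteration variable and under-relaxing its update."""
    x = x0
    f = fluid_force(x)
    for k in range(max_iter):
        x = body_update(f)            # body solver uses current force
        f_new = fluid_force(x)        # fluid solver returns new force
        if abs(f_new - f) < tol:
            return x, k + 1           # converged sub-iteration count
        f = f + relax * (f_new - f)   # relaxation on the interfacial force
    return x, max_iter
```

    The relaxation factor trades robustness for speed: too large and the exchange can diverge for strongly coupled (e.g., light-body) problems, too small and many sub-iterations are needed; the paper derives its choice from a stability analysis.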

  11. Parametric boundary reconstruction algorithm for industrial CT metrology application.

    PubMed

    Yin, Zhye; Khare, Kedar; De Man, Bruno

    2009-01-01

    High-energy X-ray computed tomography (CT) systems have recently been used to produce high-resolution images in various nondestructive testing and evaluation (NDT/NDE) applications. The accuracy of the dimensional information extracted from CT images is rapidly approaching that achieved with a coordinate measuring machine (CMM), the conventional approach to acquiring metrology information directly. CT systems, however, generate a sinogram that is transformed mathematically into pixel-based images, and the dimensional information of the scanned object is then extracted by performing edge detection on the reconstructed CT images. The dimensional accuracy of this approach is limited by the grid size of the pixel-based representation, since the edge detection is performed on the pixel grid. Moreover, reconstructed CT images usually display various artifacts due to the underlying physical process, and the object boundaries resulting from edge detection fail to represent the true boundaries of the scanned object. In this paper, a novel algorithm to reconstruct the boundaries of an object with uniform material composition and uniform density is presented. There are three major benefits to the proposed approach. First, since boundary parameters are reconstructed instead of image pixels, the complexity of the reconstruction algorithm is significantly reduced; the iterative approach, which can be computationally intensive, becomes practical with parametric boundary reconstruction. Second, the object of interest in metrology can be represented more directly and accurately by the boundary parameters than by image pixels. By eliminating the extra edge detection step, the overall dimensional accuracy and processing time can be improved. Third, since the parametric reconstruction approach shares its boundary representation with other conventional metrology modalities such as CMM, boundary information from those modalities can be directly incorporated as prior knowledge to improve the convergence of the iterative approach. In this paper, the feasibility of the parametric boundary reconstruction algorithm is demonstrated with both simple and complex simulated objects. Finally, the proposed algorithm is applied to experimental industrial CT system data.

  12. Novel Fourier-based iterative reconstruction for sparse fan projection using alternating direction total variation minimization

    NASA Astrophysics Data System (ADS)

    Zhao, Jin; Han-Ming, Zhang; Bin, Yan; Lei, Li; Lin-Yuan, Wang; Ai-Long, Cai

    2016-03-01

    Sparse-view x-ray computed tomography (CT) imaging is an interesting topic in the CT field and can efficiently decrease radiation dose. Compared with spatial-domain reconstruction, a Fourier-based algorithm has advantages in reconstruction speed and memory usage. A novel Fourier-based iterative reconstruction technique that utilizes the non-uniform fast Fourier transform (NUFFT) is presented in this work, along with advanced total variation (TV) regularization, for fan-beam sparse-view CT. The proposed selective matrix contributes to improved reconstruction quality. The new method employs the NUFFT and its adjoint to iterate back and forth between the Fourier and image spaces. The performance of the proposed algorithm is demonstrated through a series of digital simulations and experimental phantom studies. Results of the proposed algorithm are compared with those of existing TV-regularized techniques based on the compressed sensing method, as well as with the basic algebraic reconstruction technique. Compared with the existing TV-regularized techniques, the proposed Fourier-based technique significantly improves the convergence rate and reduces memory allocation. Project supported by the National High Technology Research and Development Program of China (Grant No. 2012AA011603) and the National Natural Science Foundation of China (Grant No. 61372172).

  13. Rescheduling with iterative repair

    NASA Technical Reports Server (NTRS)

    Zweben, Monte; Davis, Eugene; Daun, Brian; Deale, Michael

    1992-01-01

    This paper presents a new approach to rescheduling called constraint-based iterative repair. This approach gives our system the ability to satisfy domain constraints, address optimization concerns, minimize perturbation to the original schedule, produce modified schedules quickly, and exhibit 'anytime' behavior. The system begins with an initial, flawed schedule and then iteratively repairs constraint violations until a conflict-free schedule is produced. In an empirical demonstration, we vary the importance of minimizing perturbation and report how fast the system is able to resolve conflicts within a given time bound. We also show the anytime characteristics of the system. These experiments were performed within the domain of Space Shuttle ground processing.
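
    A minimal version of constraint-based iterative repair can be sketched as a loop that detects resource-capacity violations in a schedule and moves one offending task at a time until no conflicts remain. The representation (one slot and one resource unit per task) and all names are simplifying assumptions, not the system described in the paper.

```python
def repair_schedule(tasks, resource_cap, horizon, max_iters=100):
    """Constraint-based iterative repair sketch: start from a possibly
    flawed schedule, repeatedly find a time slot whose resource use
    exceeds capacity and move one offending task to the least-loaded
    slot, until the schedule is conflict-free or the budget runs out."""
    # tasks: {name: start_slot}; each task uses 1 resource unit for 1 slot
    for _ in range(max_iters):
        load = {}
        for name, slot in tasks.items():
            load.setdefault(slot, []).append(name)
        conflicts = {s: ns for s, ns in load.items() if len(ns) > resource_cap}
        if not conflicts:
            return tasks, True              # conflict-free schedule found
        slot, names = next(iter(sorted(conflicts.items())))
        mover = names[-1]
        # greedy repair: move to the least-loaded slot (a perturbation-
        # minimizing variant would prefer slots near the original start)
        best = min(range(horizon), key=lambda s: len(load.get(s, [])))
        tasks[mover] = best
    return tasks, False                     # anytime: partial repair returned
```

    Cutting the loop off early returns the best schedule found so far, which is the 'anytime' behaviour the abstract refers to.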

  14. Iterated learning and the evolution of language.

    PubMed

    Kirby, Simon; Griffiths, Tom; Smith, Kenny

    2014-10-01

    Iterated learning describes the process whereby an individual learns their behaviour by exposure to the behaviour of another individual who learnt it in the same way. It can be seen as a key mechanism of cultural evolution. We review various methods for understanding how behaviour is shaped by the iterated learning process: computational agent-based simulations, mathematical modelling, and laboratory experiments with humans and non-human animals. We show how this framework has been used to explain the origins of structure in language, and argue that cultural evolution must be considered alongside biological evolution in explanations of language origins. Copyright © 2014 Elsevier Ltd. All rights reserved.
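
    A toy agent-based simulation of iterated learning can make the mechanism concrete: each generation learns from a limited sample (the transmission bottleneck) of the previous agent's form-meaning mapping and generalises to unseen meanings, so variation tends to be compressed over generations. Everything here (the representation, bottleneck size, and majority-rule generalisation) is an illustrative assumption.

```python
import random

def iterated_learning(generations=30, n_items=8, exposure=3, seed=1):
    """Toy iterated-learning chain: the 'language' maps each of n_items
    meanings to one of four arbitrary suffixes.  Each learner observes
    only `exposure` meanings from its teacher (the bottleneck), memorises
    those, and fills in unseen meanings with the majority suffix it saw."""
    rng = random.Random(seed)
    # initial holistic language: an arbitrary suffix (0-3) per meaning
    lang = [rng.randrange(4) for _ in range(n_items)]
    for _ in range(generations):
        observed = rng.sample(range(n_items), exposure)    # bottleneck
        seen = {m: lang[m] for m in observed}
        majority = max(set(seen.values()), key=list(seen.values()).count)
        lang = [seen.get(m, majority) for m in range(n_items)]
    return lang
```

    Because each generation can only transmit what fits through the bottleneck, the number of distinct suffixes never increases, and the chain drifts toward a regular (compressible) language, a simplified version of the structure-emergence result the review discusses.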

  15. Prospective ECG-Triggered Coronary CT Angiography: Clinical Value of Noise-Based Tube Current Reduction Method with Iterative Reconstruction

    PubMed Central

    Shen, Junlin; Du, Xiangying; Guo, Daode; Cao, Lizhen; Gao, Yan; Yang, Qi; Li, Pengyu; Liu, Jiabin; Li, Kuncheng

    2013-01-01

    Objectives: To evaluate the clinical value of a noise-based tube current reduction method with iterative reconstruction for obtaining consistent image quality with dose optimization in prospective electrocardiogram (ECG)-triggered coronary CT angiography (CCTA). Materials and Methods: We performed a prospective randomized study evaluating 338 patients undergoing CCTA with prospective ECG-triggering. Patients were randomly assigned to fixed tube current with filtered back projection (Group 1, n = 113), noise-based tube current with filtered back projection (Group 2, n = 109), or noise-based tube current with iterative reconstruction (Group 3, n = 116). Tube voltage was fixed at 120 kV. Qualitative image quality was rated on a 5-point scale (1 = impaired, to 5 = excellent, with 3–5 defined as diagnostic). Image noise and signal intensity were measured; the signal-to-noise ratio was calculated; radiation dose parameters were recorded. Statistical analyses included one-way analysis of variance, the chi-square test, the Kruskal-Wallis test and multivariable linear regression. Results: Image noise was maintained at the target value of 35 HU with a small interquartile range for Group 2 (35.00–35.03 HU) and Group 3 (34.99–35.02 HU), whereas it ranged from 28.73 to 37.87 HU for Group 1. All images in the three groups were acceptable for diagnosis. Relative reductions in effective dose of 20% and 51% were achieved for Group 2 (2.9 mSv) and Group 3 (1.8 mSv), respectively, compared with Group 1 (3.7 mSv). After adjustment for scan characteristics, iterative reconstruction was associated with a 26% reduction in effective dose. Conclusion: The noise-based tube current reduction method with iterative reconstruction maintains image noise precisely at the desired level and achieves consistent image quality, while the effective dose can be reduced by more than 50%. PMID:23741444

  16. A Real-Time Data Acquisition and Processing Framework Based on FlexRIO FPGA and ITER Fast Plant System Controller

    NASA Astrophysics Data System (ADS)

    Yang, C.; Zheng, W.; Zhang, M.; Yuan, T.; Zhuang, G.; Pan, Y.

    2016-06-01

    Measurement and control of the plasma in real time are critical for advanced tokamak operation, requiring high-speed real-time data acquisition and processing. ITER has designed the Fast Plant System Controller (FPSC) for these purposes. At the J-TEXT tokamak, a real-time data acquisition and processing framework has been designed and implemented using standard ITER FPSC technologies. The main hardware components of this framework are an industrial personal computer (IPC) running a real-time system and FlexRIO devices based on FPGAs. With FlexRIO devices, data can be processed by the FPGA in real time before being passed to the CPU. The software elements are based on a real-time framework that runs under Red Hat Enterprise Linux MRG-R and uses the Experimental Physics and Industrial Control System (EPICS) for monitoring and configuration, making the framework conform to standard ITER FPSC technology. With this framework, any kind of data acquisition and processing FlexRIO FPGA program can be configured with an FPSC. An application using the framework has been implemented for the polarimeter-interferometer diagnostic system on J-TEXT. The application is able to extract phase-shift information from the intermediate-frequency signal produced by the polarimeter-interferometer diagnostic and to calculate the plasma density profile in real time. Different implementations of the algorithms on the FlexRIO FPGA are compared in the paper.

  17. Limiting CT radiation dose in children with craniosynostosis: phantom study using model-based iterative reconstruction.

    PubMed

    Kaasalainen, Touko; Palmu, Kirsi; Lampinen, Anniina; Reijonen, Vappu; Leikola, Junnu; Kivisaari, Riku; Kortesniemi, Mika

    2015-09-01

    Medical professionals need to exercise particular caution when developing CT scanning protocols for children who require multiple CT studies, such as those with craniosynostosis. To evaluate the utility of ultra-low-dose CT protocols with model-based iterative reconstruction techniques for craniosynostosis imaging. We scanned two pediatric anthropomorphic phantoms with a 64-slice CT scanner using different low-dose protocols for craniosynostosis. We measured organ doses in the head region with metal-oxide-semiconductor field-effect transistor (MOSFET) dosimeters. Numerical simulations served to estimate organ and effective doses. We objectively and subjectively evaluated the quality of images produced by adaptive statistical iterative reconstruction (ASiR) 30%, ASiR 50% and Veo (all by GE Healthcare, Waukesha, WI). Image noise and contrast were determined for different tissues. Mean organ dose with the newborn phantom was decreased up to 83% compared to the routine protocol when using ultra-low-dose scanning settings. Similarly, for the 5-year phantom the greatest radiation dose reduction was 88%. The numerical simulations supported the findings with MOSFET measurements. The image quality remained adequate with Veo reconstruction, even at the lowest dose level. Craniosynostosis CT with model-based iterative reconstruction could be performed with a 20-μSv effective dose, corresponding to the radiation exposure of plain skull radiography, without compromising required image quality.

  18. Implementation of a Curriculum-Integrated Computer Game for Introducing Scientific Argumentation

    NASA Astrophysics Data System (ADS)

    Wallon, Robert C.; Jasti, Chandana; Lauren, Hillary Z. G.; Hug, Barbara

    2017-11-01

    Argumentation has been emphasized in recent US science education reform efforts (NGSS Lead States 2013; NRC 2012), and while existing studies have investigated approaches to introducing and supporting argumentation (e.g., McNeill and Krajcik in Journal of Research in Science Teaching, 45(1), 53-78, 2008; Kang et al. in Science Education, 98(4), 674-704, 2014), few studies have investigated how game-based approaches may be used to introduce argumentation to students. In this paper, we report findings from a design-based study of a teacher's use of a computer game intended to introduce the claim, evidence, reasoning (CER) framework (McNeill and Krajcik 2012) for scientific argumentation. We studied the implementation of the game over two iterations of development in a high school biology teacher's classes. The results of this study include aspects of enactment of the activities and student argument scores. We found the teacher used the game in aspects of explicit instruction of argumentation during both iterations, although the ways in which the game was used differed. Also, students' scores in the second iteration were significantly higher than the first iteration. These findings support the notion that students can learn argumentation through a game, especially when used in conjunction with explicit instruction and support in student materials. These findings also highlight the importance of analyzing classroom implementation in studies of game-based learning.

  19. Negotiating Tensions Between Theory and Design in the Development of Mailings for People Recovering From Acute Coronary Syndrome.

    PubMed

    Witteman, Holly O; Presseau, Justin; Nicholas Angl, Emily; Jokhio, Iffat; Schwalm, J D; Grimshaw, Jeremy M; Bosiak, Beth; Natarajan, Madhu K; Ivers, Noah M

    2017-03-01

    Taking all recommended secondary prevention cardiac medications and fully participating in a formal cardiac rehabilitation program significantly reduces mortality and morbidity in the year following a heart attack. However, many people who have had a heart attack stop taking some or all of their recommended medications prematurely, and many do not complete a formal cardiac rehabilitation program. The objective of our study was to develop a user-centered, theory-based, scalable intervention of printed educational materials to encourage and support people who have had a heart attack to use recommended secondary prevention cardiac treatments. Prior to the design process, we conducted theory-based interviews and surveys with patients who had had a heart attack to identify key determinants of secondary prevention behaviors. Our interdisciplinary research team then partnered with a patient advisor and design firm to undertake an iterative, theory-informed, user-centered design process to operationalize techniques to address these determinants. User-centered design requires considering users' needs, goals, strengths, limitations, context, and intuitive processes; designing prototypes adapted to users accordingly; observing how potential users respond to the prototype; and using those data to refine the design. To accomplish these tasks, we conducted user research to develop personas (archetypes of potential users), developed a preliminary prototype using behavior change theory to map behavior change techniques to identified determinants of medication adherence, and conducted 2 design cycles, testing materials via think-aloud and semistructured interviews with a total of 11 users (10 patients who had experienced a heart attack and 1 caregiver). We recruited participants at a single cardiac clinic using purposive sampling informed by our personas. We recorded sessions with users and extracted key themes from transcripts. We held interdisciplinary team discussions to interpret findings in the context of relevant theory-based evidence and iteratively adapted the intervention accordingly. Through our iterative development and testing, we identified 3 key tensions: (1) evidence from theory-based studies versus users' feelings, (2) informative versus persuasive communication, and (3) logistical constraints for the intervention versus users' desires or preferences. We addressed these by (1) identifying root causes for users' feelings and addressing those to better incorporate theory- and evidence-based features, (2) accepting that our intervention was ethically justified in being persuasive, and (3) making changes to the intervention where possible, such as attempting to match imagery in the materials to patients' self-images. Theory-informed interventions must be operationalized in ways that fit with user needs. Tensions between users' desires or preferences and health care system goals and constraints must be identified and addressed to the greatest extent possible. A cluster randomized controlled trial of the final intervention is currently underway. ©Holly O Witteman, Justin Presseau, Emily Nicholas Angl, Iffat Jokhio, JD Schwalm, Jeremy M Grimshaw, Beth Bosiak, Madhu K Natarajan, Noah M Ivers. Originally published in JMIR Human Factors (http://humanfactors.jmir.org), 01.03.2017.

  20. Stokes space modulation format classification based on non-iterative clustering algorithm for coherent optical receivers.

    PubMed

    Mai, Xiaofeng; Liu, Jie; Wu, Xiong; Zhang, Qun; Guo, Changjian; Yang, Yanfu; Li, Zhaohui

    2017-02-06

    A Stokes-space modulation format classification (MFC) technique is proposed for coherent optical receivers by using a non-iterative clustering algorithm. In the clustering algorithm, two simple parameters are calculated to help find the density peaks of the data points in Stokes space and no iteration is required. Correct MFC can be realized in numerical simulations among PM-QPSK, PM-8QAM, PM-16QAM, PM-32QAM and PM-64QAM signals within practical optical signal-to-noise ratio (OSNR) ranges. The performance of the proposed MFC algorithm is also compared with those of other schemes based on clustering algorithms. The simulation results show that good classification performance can be achieved using the proposed MFC scheme with moderate time complexity. Finally, proof-of-concept experiments are implemented to demonstrate MFC among PM-QPSK/16QAM/64QAM signals, which confirm the feasibility of our proposed MFC scheme.
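
    The two per-point parameters described in the abstract are consistent with density-peak clustering, where each point gets a local density and a distance to the nearest denser point, and cluster centers are the points scoring high on both. The sketch below assumes that formulation (the abstract does not spell out the exact definitions) and uses generic 2-D points rather than actual Stokes-space samples:

    ```python
    import numpy as np

    def density_peak_params(points, dc):
        """For each point, compute rho (count of neighbors within cutoff dc)
        and delta (distance to the nearest point of higher density, with
        density ties broken by processing points in density order)."""
        n = len(points)
        # Full pairwise Euclidean distance matrix.
        d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
        rho = (d < dc).sum(axis=1) - 1        # exclude the point itself
        order = np.argsort(-rho)              # indices, densest first
        delta = np.empty(n)
        delta[order[0]] = d[order[0]].max()   # densest point: use max distance
        for k in range(1, n):
            i = order[k]
            delta[i] = d[i, order[:k]].min()  # nearest already-processed point
        return rho, delta

    # Cluster centers are points with both large rho and large delta,
    # e.g. the top-k values of gamma = rho * delta; no iteration is needed,
    # which is what distinguishes this from k-means-style clustering.
    ```

    For MFC, the number of density peaks found in Stokes space would then discriminate between formats such as PM-QPSK and PM-16QAM, which produce different numbers of clusters.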
