Sample records for standard iterative methods

  1. Numerical Computation of Subsonic Conical Diffuser Flows with Nonuniform Turbulent Inlet Conditions

    DTIC Science & Technology

    1977-09-01

    Gauss-Seidel Point Iteration Method ... FACTORS AFFECTING THE RATE OF CONVERGENCE OF THE POINT ... can be solved in several ways. For simplicity, a standard Gauss-Seidel iteration method is used to obtain the solution. The method updates the ... FACTORS AFFECTING THE RATE OF CONVERGENCE OF THE POINT ITERATION METHOD. The advantage of using the Gauss-Seidel point iteration method to ...
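    The Gauss-Seidel point iteration named in this record is the textbook scheme; a minimal sketch in Python (illustrative only, not the report's own code):

```python
import numpy as np

def gauss_seidel(A, b, x0=None, tol=1e-10, max_iter=10_000):
    """Solve A x = b by Gauss-Seidel point iteration: each component
    update immediately reuses the newest values of the components
    already updated in the current sweep (unlike Jacobi iteration)."""
    n = len(b)
    x = np.zeros(n) if x0 is None else np.asarray(x0, dtype=float).copy()
    for k in range(max_iter):
        x_old = x.copy()
        for i in range(n):
            s = A[i, :i] @ x[:i] + A[i, i + 1:] @ x[i + 1:]
            x[i] = (b[i] - s) / A[i, i]
        if np.linalg.norm(x - x_old, np.inf) < tol:
            return x, k + 1
    return x, max_iter

# Diagonally dominant test system, for which convergence is guaranteed.
A = np.array([[4.0, -1.0, 0.0],
              [-1.0, 4.0, -1.0],
              [0.0, -1.0, 4.0]])
b = np.array([2.0, 4.0, 10.0])
x, iters = gauss_seidel(A, b)
```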

  2. Global Asymptotic Behavior of Iterative Implicit Schemes

    NASA Technical Reports Server (NTRS)

    Yee, H. C.; Sweby, P. K.

    1994-01-01

    The global asymptotic nonlinear behavior of some standard iterative procedures in solving nonlinear systems of algebraic equations arising from four implicit linear multistep methods (LMMs) in discretizing three models of 2 × 2 systems of first-order autonomous nonlinear ordinary differential equations (ODEs) is analyzed using the theory of dynamical systems. The iterative procedures include simple iteration and full and modified Newton iterations. The results are compared with standard explicit Runge-Kutta methods, a noniterative implicit procedure, and the Newton method of solving the steady part of the ODEs. Studies showed that aside from exhibiting spurious asymptotes, all four implicit LMMs can change the type and stability of the steady states of the differential equations (DEs). They also exhibit a drastic distortion but less shrinkage of the basin of attraction of the true solution than standard non-LMM explicit methods. The simple iteration procedure exhibits behavior similar to standard non-LMM explicit methods except that spurious steady-state numerical solutions cannot occur. The numerical basins of attraction of the noniterative implicit procedure mimic more closely the basins of attraction of the DEs and are more efficient than the three iterative implicit procedures for the four implicit LMMs. Contrary to popular belief, the initial data for the Newton method of solving the steady part of the DEs may not have to be close to the exact steady state for convergence. These results can be used as an explanation for possible causes and cures of slow convergence and nonconvergence of steady-state numerical solutions when using an implicit LMM time-dependent approach in computational fluid dynamics.
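    The abstract's comparison of simple iteration and Newton iteration can be made concrete on the nonlinear equation produced by one backward-Euler step. The sketch below uses a scalar autonomous ODE for brevity (the paper itself treats 2 × 2 systems and four LMMs), so it is a hedged illustration of the two procedures, not the paper's setup:

```python
import numpy as np

def backward_euler_step(f, dfdu, u_n, h, tol=1e-12, max_iter=100):
    """Solve the implicit relation g(u) = u - u_n - h*f(u) = 0 of one
    backward-Euler step by (a) simple fixed-point iteration and
    (b) full Newton iteration, the two procedures the abstract compares."""
    # (a) Simple iteration: u <- u_n + h*f(u); contracts only if h*|f'| < 1.
    u_fp = u_n
    for _ in range(max_iter):
        u_new = u_n + h * f(u_fp)
        done = abs(u_new - u_fp) < tol
        u_fp = u_new
        if done:
            break

    # (b) Newton iteration on g(u) = 0 with g'(u) = 1 - h*f'(u).
    u_nt = u_n
    for _ in range(max_iter):
        step = (u_nt - u_n - h * f(u_nt)) / (1.0 - h * dfdu(u_nt))
        u_nt -= step
        if abs(step) < tol:
            break
    return u_fp, u_nt

# Example: the logistic model ODE u' = u(1 - u).
f = lambda u: u * (1.0 - u)
dfdu = lambda u: 1.0 - 2.0 * u
print(backward_euler_step(f, dfdu, u_n=0.1, h=0.5))
```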

  3. A novel iterative scheme and its application to differential equations.

    PubMed

    Khan, Yasir; Naeem, F; Šmarda, Zdeněk

    2014-01-01

    The purpose of this paper is to employ an alternative approach to reconstruct the standard variational iteration algorithm II proposed by He, including the Lagrange multiplier, and to give a simpler formulation of the Adomian decomposition and modified Adomian decomposition methods in terms of the newly proposed variational iteration method-II (VIM). Through careful investigation of the earlier variational iteration algorithm and the Adomian decomposition method, we find unnecessary calculations of the Lagrange multiplier and repeated calculations in each iteration, respectively. Several examples are given to verify the reliability and efficiency of the method.

  4. Highly efficient and exact method for parallelization of grid-based algorithms and its implementation in DelPhi

    PubMed Central

    Li, Chuan; Li, Lin; Zhang, Jie; Alexov, Emil

    2012-01-01

    The Gauss-Seidel method is a standard iterative numerical method widely used to solve systems of equations and, in general, is more efficient compared to other iterative methods, such as the Jacobi method. However, the standard implementation of the Gauss-Seidel method restricts its utilization in parallel computing due to its requirement of using updated neighboring values (i.e., in the current iteration) as soon as they are available. Here we report an efficient and exact (not requiring assumptions) method to parallelize iterations and to reduce the computational time as a linear/nearly linear function of the number of CPUs. In contrast to other existing solutions, our method does not require any assumptions and is equally applicable for solving linear and nonlinear equations. This approach is implemented in the DelPhi program, a finite difference Poisson-Boltzmann equation solver used to model electrostatics in molecular biology. This development makes the iterative procedure for obtaining the electrostatic potential distribution in the parallelized DelPhi severalfold faster than in the serial code. Further, we demonstrate the advantages of the new parallelized DelPhi by computing the electrostatic potential and the corresponding energies of large supramolecular structures. PMID:22674480
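    The paper's exact parallelization is not reproduced here. For contrast, a common generic way to expose parallelism in grid-based Gauss-Seidel is red-black ordering, sketched below; points of one color depend only on the other color, so each half-sweep can be updated concurrently (this illustrates the general idea only, not the DelPhi algorithm):

```python
import numpy as np

def red_black_gauss_seidel(u, rhs, h, sweeps=100):
    """Red-black ordered Gauss-Seidel for the 2D Poisson problem
    -laplace(u) = rhs, with Dirichlet values held in u's border.
    Each colored half-sweep is a fully vectorizable, parallel update."""
    i, j = np.indices(u.shape)
    interior = (i > 0) & (i < u.shape[0] - 1) & (j > 0) & (j < u.shape[1] - 1)
    for _ in range(sweeps):
        for color in (0, 1):
            mask = interior & ((i + j) % 2 == color)
            # 5-point stencil; np.roll neighbors are exact at interior points.
            nbr = (np.roll(u, 1, 0) + np.roll(u, -1, 0)
                   + np.roll(u, 1, 1) + np.roll(u, -1, 1))
            u[mask] = 0.25 * (nbr[mask] + h * h * rhs[mask])
    return u

# Usage: 65x65 grid, homogeneous boundary, unit source term.
n = 65
u = np.zeros((n, n))
u = red_black_gauss_seidel(u, np.ones((n, n)), h=1.0 / (n - 1), sweeps=200)
```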

  5. Single-step reinitialization and extending algorithms for level-set based multi-phase flow simulations

    NASA Astrophysics Data System (ADS)

    Fu, Lin; Hu, Xiangyu Y.; Adams, Nikolaus A.

    2017-12-01

    We propose efficient single-step formulations for reinitialization and extending algorithms, which are critical components of level-set based interface-tracking methods. The level-set field is reinitialized with a single-step (non-iterative) "forward tracing" algorithm. A minimum set of cells is defined that describes the interface, and reinitialization employs only data from these cells. Fluid states are extrapolated or extended across the interface by a single-step "backward tracing" algorithm. Both algorithms, which are motivated by analogy to ray tracing, avoid the multiple block-boundary data exchanges that are inevitable for iterative reinitialization and extending approaches within a parallel-computing environment. The single-step algorithms are combined with a multi-resolution conservative sharp-interface method and validated by a wide range of benchmark test cases. We demonstrate that the proposed reinitialization method achieves second-order accuracy in conserving the volume of each phase. The interface location is invariant to reapplication of the single-step reinitialization. Generally, we observe smaller absolute errors than for standard iterative reinitialization on the same grid. The computational efficiency is higher than for the standard and typical high-order iterative reinitialization methods. We observe a 2- to 6-times efficiency improvement over the standard method for serial execution. The proposed single-step extending algorithm, which is commonly employed for assigning data to ghost cells with ghost-fluid or conservative interface interaction methods, shows about a 10-times efficiency improvement over the standard method while maintaining the same accuracy. Despite their simplicity, the proposed algorithms offer an efficient and robust alternative to iterative reinitialization and extending methods for level-set based multi-phase simulations.

  6. A new iterative triclass thresholding technique in image segmentation.

    PubMed

    Cai, Hongmin; Yang, Zhong; Cao, Xinhua; Xia, Weiming; Xu, Xiaoyin

    2014-03-01

    We present a new method in image segmentation that is based on Otsu's method but iteratively searches for subregions of the image for segmentation, instead of treating the full image as a whole region for processing. The iterative method starts with Otsu's threshold and computes the mean values of the two classes separated by the threshold. Based on the Otsu threshold and the two mean values, the method separates the image into three classes instead of the two produced by the standard Otsu method. The first two classes are determined as the foreground and background and are not processed further. The third class is denoted as a to-be-determined (TBD) region that is processed at the next iteration. At the succeeding iteration, Otsu's method is applied to the TBD region to calculate a new threshold and two class means, and the TBD region is again separated into three classes: foreground, background, and a new TBD region, which by definition is smaller than the previous TBD regions. The new TBD region is then processed in the same manner. The process stops when the difference between the Otsu thresholds calculated in two successive iterations is less than a preset value. All the intermediate foreground and background regions are then combined to create the final segmentation result. Tests on synthetic and real images showed that the new iterative method can achieve better performance than the standard Otsu method in many challenging cases, such as identifying weak objects and revealing fine structures of complex objects, while the added computational cost is minimal.
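    A minimal Python sketch of the triclass iteration as the abstract describes it (the stopping rule and final split of the residual TBD region are assumptions and may differ from the published method):

```python
import numpy as np

def otsu_threshold(values, bins=256):
    """Otsu's threshold: maximize between-class variance over bin centers."""
    hist, edges = np.histogram(values, bins=bins)
    p = hist.astype(float) / hist.sum()
    centers = 0.5 * (edges[:-1] + edges[1:])
    omega = np.cumsum(p)               # class-0 probability
    mu = np.cumsum(p * centers)        # class-0 cumulative mean
    with np.errstate(divide="ignore", invalid="ignore"):
        sigma_b2 = (mu[-1] * omega - mu) ** 2 / (omega * (1.0 - omega))
    sigma_b2[~np.isfinite(sigma_b2)] = 0.0
    return centers[np.argmax(sigma_b2)]

def iterative_triclass(image, eps=1e-3, max_iter=50):
    """Iterative triclass thresholding per the abstract: each pass splits
    the current TBD region, via Otsu's threshold and the two class means,
    into settled foreground (> upper mean), settled background
    (< lower mean), and a strictly smaller TBD region in between."""
    foreground = np.zeros(image.shape, dtype=bool)
    tbd = np.ones(image.shape, dtype=bool)
    t_prev = None
    for _ in range(max_iter):
        vals = image[tbd]
        t = otsu_threshold(vals)
        if t_prev is not None and abs(t - t_prev) < eps:
            break
        t_prev = t
        mu_low = vals[vals <= t].mean()
        mu_high = vals[vals > t].mean()
        foreground |= tbd & (image > mu_high)
        tbd &= (image >= mu_low) & (image <= mu_high)
        if not tbd.any():
            break
    # Remaining TBD pixels are split by the last threshold (assumption).
    return foreground | (tbd & (image > t))
```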

  7. Group iterative methods for the solution of two-dimensional time-fractional diffusion equation

    NASA Astrophysics Data System (ADS)

    Balasim, Alla Tareq; Ali, Norhashidah Hj. Mohd.

    2016-06-01

    A variety of problems in science and engineering may be described by fractional partial differential equations (FPDEs) involving space and/or time fractional derivatives. The difference between time fractional diffusion equations and standard diffusion equations lies primarily in the time derivative. Over the last few years, iterative schemes derived from the rotated finite difference approximation have been proven to work well in solving standard diffusion equations. However, their application to the time fractional diffusion counterpart has yet to be investigated. In this paper, we present a preliminary study on the formulation and analysis of new explicit group iterative methods for solving a two-dimensional time fractional diffusion equation. These methods were derived from the standard and rotated Crank-Nicolson difference approximation formulas. Several numerical experiments were conducted to show the efficiency of the developed schemes in terms of CPU time and iteration number. At the request of all authors of the paper, an updated version of this article was published on 7 July 2016. The original version supplied to AIP Publishing contained an error in Table 1, and References 15 and 16 were incomplete. These errors have been corrected in the updated and republished article.

  8. SU-E-I-33: Initial Evaluation of Model-Based Iterative CT Reconstruction Using Standard Image Quality Phantoms

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gingold, E; Dave, J

    2014-06-01

    Purpose: The purpose of this study was to compare a new model-based iterative reconstruction with existing reconstruction methods (filtered backprojection and basic iterative reconstruction) using quantitative analysis of standard image quality phantom images. Methods: An ACR accreditation phantom (Gammex 464) and a CATPHAN600 phantom were scanned using 3 routine clinical acquisition protocols (adult axial brain, adult abdomen, and pediatric abdomen) on a Philips iCT system. Each scan was acquired using default conditions and 75%, 50% and 25% dose levels. Images were reconstructed using standard filtered backprojection (FBP), conventional iterative reconstruction (iDose4) and a prototype model-based iterative reconstruction (IMR). Phantom measurements included CT number accuracy, contrast to noise ratio (CNR), modulation transfer function (MTF), low contrast detectability (LCD), and noise power spectrum (NPS). Results: The choice of reconstruction method had no effect on CT number accuracy, or MTF (p<0.01). The CNR of a 6 HU contrast target was improved by 1–67% with iDose4 relative to FBP, while IMR improved CNR by 145–367% across all protocols and dose levels. Within each scan protocol, the CNR improvement from IMR vs FBP showed a general trend of greater improvement at lower dose levels. NPS magnitude was greatest for FBP and lowest for IMR. The NPS of the IMR reconstruction showed a pronounced decrease with increasing spatial frequency, consistent with the unusual noise texture seen in IMR images. Conclusion: Iterative Model Reconstruction reduces noise and improves contrast-to-noise ratio without sacrificing spatial resolution in CT phantom images. This offers the possibility of radiation dose reduction and improved low contrast detectability compared with filtered backprojection or conventional iterative reconstruction.

  9. Efficiency trade-offs of steady-state methods using FEM and FDM. [iterative solutions for nonlinear flow equations

    NASA Technical Reports Server (NTRS)

    Gartling, D. K.; Roache, P. J.

    1978-01-01

    The efficiency characteristics of finite element and finite difference approximations for the steady-state solution of the Navier-Stokes equations are examined. The finite element method discussed is a standard Galerkin formulation of the incompressible, steady-state Navier-Stokes equations. The finite difference formulation uses simple centered differences that are O(Δx²). Operation counts indicate that a rapidly converging Newton-Raphson-Kantorovitch iteration scheme is generally preferable to a Picard method. A split NOS Picard iterative algorithm for the finite difference method was most efficient.

  10. Iterative Methods for the Non-LTE Transfer of Polarized Radiation: Resonance Line Polarization in One-dimensional Atmospheres

    NASA Astrophysics Data System (ADS)

    Trujillo Bueno, Javier; Manso Sainz, Rafael

    1999-05-01

    This paper shows how to generalize to non-LTE polarization transfer some operator splitting methods that were originally developed for solving unpolarized transfer problems. These are the Jacobi-based accelerated Λ-iteration (ALI) method of Olson, Auer, & Buchler and the iterative schemes based on Gauss-Seidel and successive overrelaxation (SOR) iteration of Trujillo Bueno and Fabiani Bendicho. The theoretical framework chosen for the formulation of polarization transfer problems is the quantum electrodynamics (QED) theory of Landi Degl'Innocenti, which specifies the excitation state of the atoms in terms of the irreducible tensor components of the atomic density matrix. This first paper establishes the grounds of our numerical approach to non-LTE polarization transfer by concentrating on the standard case of scattering line polarization in a gas of two-level atoms, including the Hanle effect due to a weak microturbulent and isotropic magnetic field. We begin by demonstrating that the well-known Λ-iteration method leads to the self-consistent solution of this type of problem if one initializes using the "exact" solution corresponding to the unpolarized case. We then show how the above-mentioned splitting methods can be easily derived from this simple Λ-iteration scheme. We show that our SOR method is 10 times faster than the Jacobi-based ALI method, while our implementation of the Gauss-Seidel method is 4 times faster. These iterative schemes lead to the self-consistent solution independently of the chosen initialization. The convergence rate of these iterative methods is very high; they do not require either the construction or the inversion of any matrix, and the computing time per iteration is similar to that of the Λ-iteration method.
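    The Gauss-Seidel/SOR splitting that the paper generalizes has a simple generic linear-algebra form, sketched below (the polarized transfer operators themselves are beyond a short example):

```python
import numpy as np

def sor(A, b, omega=1.5, x0=None, tol=1e-10, max_iter=10_000):
    """Successive overrelaxation: a Gauss-Seidel sweep whose correction
    is extrapolated by omega. omega = 1 recovers Gauss-Seidel; a suitable
    1 < omega < 2 can cut the iteration count substantially, mirroring
    the SOR speed-ups over Jacobi-based ALI reported in the abstract."""
    n = len(b)
    x = np.zeros(n) if x0 is None else np.asarray(x0, dtype=float).copy()
    for k in range(max_iter):
        x_old = x.copy()
        for i in range(n):
            s = A[i, :i] @ x[:i] + A[i, i + 1:] @ x[i + 1:]
            x[i] = (1.0 - omega) * x[i] + omega * (b[i] - s) / A[i, i]
        if np.linalg.norm(x - x_old, np.inf) < tol:
            return x, k + 1
    return x, max_iter
```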

  11. Transport synthetic acceleration with opposing reflecting boundary conditions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zika, M.R.; Adams, M.L.

    2000-02-01

    The transport synthetic acceleration (TSA) scheme is extended to problems with opposing reflecting boundary conditions. This synthetic method employs a simplified transport operator as its low-order approximation. A procedure is developed that allows the use of the conjugate gradient (CG) method to solve the resulting low-order system of equations. Several well-known transport iteration algorithms are cast in a linear algebraic form to show their equivalence to standard iterative techniques. Source iteration in the presence of opposing reflecting boundary conditions is shown to be equivalent to a (poorly) preconditioned stationary Richardson iteration, with the preconditioner defined by the method of iterating on the incident fluxes on the reflecting boundaries. The TSA method (and any synthetic method) amounts to a further preconditioning of the Richardson iteration. The presence of opposing reflecting boundary conditions requires special consideration when developing a procedure to realize the CG method for the proposed system of equations. The CG iteration may be applied only to symmetric positive definite matrices; this condition requires the algebraic elimination of the boundary angular corrections from the low-order equations. As a consequence of this elimination, evaluating the action of the resulting matrix on an arbitrary vector involves two transport sweeps and a transmission iteration. Results of applying the acceleration scheme to a simple test problem are presented.
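    The abstract's linear-algebraic reading of source iteration corresponds to the stationary preconditioned Richardson iteration x ← x + M⁻¹(b − Ax). A generic sketch, with the transport sweeps abstracted into plain matrices:

```python
import numpy as np

def preconditioned_richardson(A, b, M_inv=None, tol=1e-10, max_iter=1000):
    """Stationary Richardson iteration x <- x + M^{-1}(b - A x).

    With M = I this models plain (unaccelerated) source iteration in the
    abstract's framing; a synthetic acceleration such as TSA plays the
    role of a better preconditioner M that shrinks the spectral radius
    of I - M^{-1} A and thus speeds convergence."""
    n = len(b)
    M_inv = np.eye(n) if M_inv is None else M_inv
    x = np.zeros(n)
    for k in range(max_iter):
        r = b - A @ x
        if np.linalg.norm(r) <= tol * np.linalg.norm(b):
            return x, k
        x = x + M_inv @ r
    return x, max_iter
```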

  12. Evaluation of an iterative model-based reconstruction of pediatric abdominal CT with regard to image quality and radiation dose.

    PubMed

    Aurumskjöld, Marie-Louise; Söderberg, Marcus; Stålhammar, Fredrik; von Steyern, Kristina Vult; Tingberg, Anders; Ydström, Kristina

    2018-06-01

    Background In pediatric patients, computed tomography (CT) is important in the medical chain of diagnosing and monitoring various diseases. Because children are more radiosensitive than adults, they require minimal radiation exposure. One way to achieve this goal is to implement new technical solutions, like iterative reconstruction. Purpose To evaluate the potential of a new, iterative, model-based method for reconstructing (IMR) pediatric abdominal CT at a low radiation dose and determine whether it maintains or improves image quality, compared to the current reconstruction method. Material and Methods Forty pediatric patients underwent abdominal CT. Twenty patients were examined with the standard dose settings and 20 patients were examined with a 32% lower radiation dose. Images from the standard examination were reconstructed with a hybrid iterative reconstruction method (iDose4), and images from the low-dose examinations were reconstructed with both iDose4 and IMR. Image quality was evaluated subjectively by three observers, according to modified EU image quality criteria, and evaluated objectively based on the noise observed in liver images. Results Visual grading characteristics analyses showed no difference in image quality between the standard dose examination reconstructed with iDose4 and the low dose examination reconstructed with IMR. IMR showed lower image noise in the liver compared to iDose4 images. Inter- and intra-observer variance was low: the intraclass coefficient was 0.66 (95% confidence interval = 0.60-0.71) for the three observers. Conclusion IMR provided image quality equivalent or superior to the standard iDose4 method for evaluating pediatric abdominal CT, even with a 32% dose reduction.

  13. Ultra-low-dose computed tomographic angiography with model-based iterative reconstruction compared with standard-dose imaging after endovascular aneurysm repair: a prospective pilot study.

    PubMed

    Naidu, Sailen G; Kriegshauser, J Scott; Paden, Robert G; He, Miao; Wu, Qing; Hara, Amy K

    2014-12-01

    An ultra-low-dose radiation protocol reconstructed with model-based iterative reconstruction was compared with our standard-dose protocol. This prospective study evaluated 20 men undergoing surveillance-enhanced computed tomography after endovascular aneurysm repair. All patients underwent standard-dose and ultra-low-dose venous phase imaging; images were compared after reconstruction with filtered back projection, adaptive statistical iterative reconstruction, and model-based iterative reconstruction. Objective measures of aortic contrast attenuation and image noise were averaged. Images were subjectively assessed (1 = worst, 5 = best) for diagnostic confidence, image noise, and vessel sharpness. Aneurysm sac diameter and endoleak detection were compared. Quantitative image noise was 26% less with ultra-low-dose model-based iterative reconstruction than with standard-dose adaptive statistical iterative reconstruction and 58% less than with ultra-low-dose adaptive statistical iterative reconstruction. Average subjective noise scores were not different between ultra-low-dose model-based iterative reconstruction and standard-dose adaptive statistical iterative reconstruction (3.8 vs. 4.0, P = .25). Subjective scores for diagnostic confidence were better with standard-dose adaptive statistical iterative reconstruction than with ultra-low-dose model-based iterative reconstruction (4.4 vs. 4.0, P = .002). Vessel sharpness was decreased with ultra-low-dose model-based iterative reconstruction compared with standard-dose adaptive statistical iterative reconstruction (3.3 vs. 4.1, P < .0001). Ultra-low-dose model-based iterative reconstruction and standard-dose adaptive statistical iterative reconstruction aneurysm sac diameters were not significantly different (4.9 vs. 4.9 cm); concordance for the presence of endoleak was 100% (P < .001). Compared with a standard-dose technique, an ultra-low-dose model-based iterative reconstruction protocol provides comparable image quality and diagnostic assessment at a 73% lower radiation dose.

  14. Quantitative evaluation of ASiR image quality: an adaptive statistical iterative reconstruction technique

    NASA Astrophysics Data System (ADS)

    Van de Casteele, Elke; Parizel, Paul; Sijbers, Jan

    2012-03-01

    Adaptive statistical iterative reconstruction (ASiR) is a new reconstruction algorithm used in the field of medical X-ray imaging. This new reconstruction method combines the idealized system representation, as we know it from the standard Filtered Back Projection (FBP) algorithm, and the strength of iterative reconstruction by including a noise model in the reconstruction scheme. It studies how noise propagates through the reconstruction steps, feeds this model back into the loop and iteratively reduces noise in the reconstructed image without affecting spatial resolution. In this paper the effect of ASiR on the contrast to noise ratio is studied using the low contrast module of the Catphan phantom. The experiments were done on a GE LightSpeed VCT system at different voltages and currents. The results show reduced noise and increased contrast for the ASiR reconstructions compared to the standard FBP method. For the same contrast to noise ratio the images from ASiR can be obtained using 60% less current, leading to a reduction in dose of the same amount.

  15. Methodology for Sensitivity Analysis, Approximate Analysis, and Design Optimization in CFD for Multidisciplinary Applications

    NASA Technical Reports Server (NTRS)

    Taylor, Arthur C., III; Hou, Gene W.

    1996-01-01

    An incremental iterative formulation, together with the well-known spatially split approximate-factorization algorithm, is presented for solving the large, sparse systems of linear equations that are associated with aerodynamic sensitivity analysis. This formulation is also known as the 'delta' or 'correction' form. For the smaller two-dimensional problems, a direct method can be applied to solve these linear equations in either the standard or the incremental form, in which case the two are equivalent. However, iterative methods are needed for larger two-dimensional and three-dimensional applications because direct methods require more computer memory than is currently available. Iterative methods for solving these equations in the standard form are generally unsatisfactory due to an ill-conditioned coefficient matrix; this problem is overcome when these equations are cast in the incremental form. The methodology is successfully implemented and tested using an upwind cell-centered finite-volume formulation applied in two dimensions to the thin-layer Navier-Stokes equations for external flow over an airfoil. In three dimensions this methodology is demonstrated with a marching-solution algorithm for the Euler equations to calculate supersonic flow over the High-Speed Civil Transport configuration (HSCT 24E). The sensitivity derivatives obtained with the incremental iterative method from a marching Euler code are used in a design-improvement study of the HSCT configuration that involves thickness, camber, and planform design variables.

  16. A Kernel-free Boundary Integral Method for Elliptic Boundary Value Problems

    PubMed Central

    Ying, Wenjun; Henriquez, Craig S.

    2013-01-01

    This paper presents a class of kernel-free boundary integral (KFBI) methods for general elliptic boundary value problems (BVPs). The boundary integral equations reformulated from the BVPs are solved iteratively with the GMRES method. During the iteration, the boundary and volume integrals involving Green's functions are approximated by structured grid-based numerical solutions, which avoids the need to know the analytical expressions of Green's functions. The KFBI method assumes that the larger regular domain, which embeds the original complex domain, can be easily partitioned into a hierarchy of structured grids so that fast elliptic solvers such as the fast Fourier transform (FFT) based Poisson/Helmholtz solvers or those based on geometric multigrid iterations are applicable. The structured grid-based solutions are obtained with a standard finite difference method (FDM) or finite element method (FEM), where the right-hand side of the resulting linear system is appropriately modified at irregular grid nodes to recover the formal accuracy of the underlying numerical scheme. Numerical results demonstrating the efficiency and accuracy of the KFBI methods are presented. It is observed that the number of GMRES iterations used by the method for solving isotropic and moderately anisotropic BVPs is independent of the sizes of the grids that are employed to approximate the boundary and volume integrals. With the standard second-order FEMs and FDMs, the KFBI method shows a second-order convergence rate in accuracy for all of the tested Dirichlet/Neumann BVPs when the anisotropy of the diffusion tensor is not too strong. PMID:23519600

  17. Comparison of different eigensolvers for calculating vibrational spectra using low-rank, sum-of-product basis functions

    NASA Astrophysics Data System (ADS)

    Leclerc, Arnaud; Thomas, Phillip S.; Carrington, Tucker

    2017-08-01

    Vibrational spectra and wavefunctions of polyatomic molecules can be calculated at low memory cost using low-rank sum-of-product (SOP) decompositions to represent basis functions generated using an iterative eigensolver. Using a SOP tensor format does not determine the iterative eigensolver. The choice of the iterative eigensolver is limited by the need to restrict the rank of the SOP basis functions at every stage of the calculation. We have adapted, implemented and compared different reduced-rank algorithms based on standard iterative methods (block-Davidson algorithm, Chebyshev iteration) to calculate vibrational energy levels and wavefunctions of the 12-dimensional acetonitrile molecule. The effect of using low-rank SOP basis functions on the different methods is analysed and the numerical results are compared with those obtained with the reduced-rank block power method. Relative merits of the different algorithms are presented, showing that the advantage of using a more sophisticated method, although mitigated by the use of reduced-rank SOP functions, is noticeable in terms of CPU time.

  18. Amesos2 and Belos: Direct and Iterative Solvers for Large Sparse Linear Systems

    DOE PAGES

    Bavier, Eric; Hoemmen, Mark; Rajamanickam, Sivasankaran; ...

    2012-01-01

    Solvers for large sparse linear systems come in two categories: direct and iterative. Amesos2, a package in the Trilinos software project, provides direct methods, and Belos, another Trilinos package, provides iterative methods. Amesos2 offers a common interface to many different sparse matrix factorization codes, and can handle any implementation of sparse matrices and vectors, via an easy-to-extend C++ traits interface. It can also factor matrices whose entries have arbitrary “Scalar” type, enabling extended-precision and mixed-precision algorithms. Belos includes many different iterative methods for solving large sparse linear systems and least-squares problems. Unlike competing iterative solver libraries, Belos completely decouples the algorithms from the implementations of the underlying linear algebra objects. This lets Belos exploit the latest hardware without changes to the code. Belos favors algorithms that solve higher-level problems, such as multiple simultaneous linear systems and sequences of related linear systems, faster than standard algorithms. The package also supports extended-precision and mixed-precision algorithms. Together, Amesos2 and Belos form a complete suite of sparse linear solvers.

  19. Leveraging Anderson Acceleration for improved convergence of iterative solutions to transport systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Willert, Jeffrey; Taitano, William T.; Knoll, Dana

    In this note we demonstrate that using Anderson Acceleration (AA) in place of a standard Picard iteration can not only increase the convergence rate but also make the iteration more robust for two transport applications. We also compare the convergence acceleration provided by AA to that provided by moment-based acceleration methods. Additionally, we demonstrate that these two acceleration methods can be used together in a nested fashion. We begin by describing the AA algorithm and then describe two application problems, one from neutronics and one from plasma physics, to which we apply AA. We provide computational results which highlight the benefits of using AA, namely that we can compute solutions using fewer function evaluations and larger time-steps, and achieve a more robust iteration.
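    A compact sketch of Anderson Acceleration for a generic fixed-point map g, with a finite window of m previous residuals (this is the standard textbook form, not the authors' transport implementation; with m = 0 it would reduce to plain Picard iteration):

```python
import numpy as np

def anderson_accelerate(g, x0, m=5, tol=1e-10, max_iter=200):
    """Anderson Acceleration of the fixed-point iteration x <- g(x).

    Keeps up to m previous residuals f_k = g(x_k) - x_k and combines
    past iterates so as to minimize the linearized residual, solved
    here as an unconstrained least-squares problem on residual
    differences (the common "Type II" formulation, without damping)."""
    x = np.atleast_1d(np.asarray(x0, dtype=float))
    f = g(x) - x
    X, F = [x], [f]
    for k in range(max_iter):
        if np.linalg.norm(f) < tol:
            return x, k
        if len(X) > 1:
            dX = np.column_stack([X[i + 1] - X[i] for i in range(len(X) - 1)])
            dF = np.column_stack([F[i + 1] - F[i] for i in range(len(F) - 1)])
            gamma, *_ = np.linalg.lstsq(dF, f, rcond=None)
            x = x + f - (dX + dF) @ gamma
        else:
            x = x + f                      # first step: plain Picard
        f = g(x) - x
        X.append(x)
        F.append(f)
        if len(X) > m + 1:                 # keep only the last m+1 iterates
            X.pop(0)
            F.pop(0)
    return x, max_iter

# Example: accelerate the slowly converging map g(x) = cos(x).
x_star, iters = anderson_accelerate(lambda x: np.cos(x), x0=[1.0])
```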

  20. A highly parallel multigrid-like method for the solution of the Euler equations

    NASA Technical Reports Server (NTRS)

    Tuminaro, Ray S.

    1989-01-01

    We consider a highly parallel multigrid-like method for the solution of the two-dimensional steady Euler equations. The new method, introduced as filtering multigrid, is similar to a standard multigrid scheme in that convergence on the finest grid is accelerated by iterations on coarser grids. In the filtering method, however, additional fine grid subproblems are processed concurrently with coarse grid computations to further accelerate convergence. These additional problems are obtained by splitting the residual into a smooth and an oscillatory component. The smooth component is then used to form a coarse grid problem (similar to standard multigrid) while the oscillatory component is used for a fine grid subproblem. The primary advantage of the filtering approach is that fewer iterations are required and that most of the additional work per iteration can be performed in parallel with the standard coarse grid computations. We generalize the filtering algorithm to a version suitable for nonlinear problems. We emphasize that this generalization is conceptually straightforward and relatively easy to implement. In particular, no explicit linearization (e.g., formation of Jacobians) needs to be performed (similar to the FAS multigrid approach). We illustrate the nonlinear version by applying it to the Euler equations and presenting numerical results. Finally, a performance evaluation is made based on execution time models and convergence information obtained from numerical experiments.

  1. Hybrid preconditioning for iterative diagonalization of ill-conditioned generalized eigenvalue problems in electronic structure calculations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cai, Yunfeng; Bai, Zhaojun

    2013-12-15

    The iterative diagonalization of a sequence of large ill-conditioned generalized eigenvalue problems is a computational bottleneck in quantum mechanical methods employing a nonorthogonal basis for ab initio electronic structure calculations. We propose a hybrid preconditioning scheme to effectively combine global and locally accelerated preconditioners for rapid iterative diagonalization of such eigenvalue problems. In partition-of-unity finite-element (PUFE) pseudopotential density-functional calculations, employing a nonorthogonal basis, we show that the hybrid preconditioned block steepest descent method is a cost-effective eigensolver, outperforming current state-of-the-art global preconditioning schemes, and comparably efficient for the ill-conditioned generalized eigenvalue problems produced by PUFE as the locally optimal block preconditioned conjugate-gradient method for the well-conditioned standard eigenvalue problems produced by planewave methods.

  2. A method for reducing the largest relative errors in Monte Carlo iterated-fission-source calculations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hunter, J. L.; Sutton, T. M.

    2013-07-01

    In Monte Carlo iterated-fission-source calculations relative uncertainties on local tallies tend to be larger in lower-power regions and smaller in higher-power regions. Reducing the largest uncertainties to an acceptable level simply by running a larger number of neutron histories is often prohibitively expensive. The uniform fission site method has been developed to yield a more spatially-uniform distribution of relative uncertainties. This is accomplished by biasing the density of fission neutron source sites while not biasing the solution. The method is integrated into the source iteration process, and does not require any auxiliary forward or adjoint calculations. For a given amount of computational effort, the use of the method results in a reduction of the largest uncertainties relative to the standard algorithm. Two variants of the method have been implemented and tested. Both have been shown to be effective. (authors)

  3. Methodology for sensitivity analysis, approximate analysis, and design optimization in CFD for multidisciplinary applications

    NASA Technical Reports Server (NTRS)

    Taylor, Arthur C., III; Hou, Gene W.

    1993-01-01

    In this study involving advanced fluid flow codes, an incremental iterative formulation (also known as the delta or correction form) together with the well-known spatially-split approximate factorization algorithm, is presented for solving the very large sparse systems of linear equations which are associated with aerodynamic sensitivity analysis. For smaller 2D problems, a direct method can be applied to solve these linear equations in either the standard or the incremental form, in which case the two are equivalent. Iterative methods are needed for larger 2D and future 3D applications, however, because direct methods require much more computer memory than is currently available. Iterative methods for solving these equations in the standard form are generally unsatisfactory due to an ill-conditioning of the coefficient matrix; this problem can be overcome when these equations are cast in the incremental form. These and other benefits are discussed. The methodology is successfully implemented and tested in 2D using an upwind, cell-centered, finite volume formulation applied to the thin-layer Navier-Stokes equations. Results are presented for two sample airfoil problems: (1) subsonic low Reynolds number laminar flow; and (2) transonic high Reynolds number turbulent flow.

  4. Improved Savitzky-Golay-method-based fluorescence subtraction algorithm for rapid recovery of Raman spectra.

    PubMed

    Chen, Kun; Zhang, Hongyuan; Wei, Haoyun; Li, Yan

    2014-08-20

    In this paper, we propose an improved subtraction algorithm for rapid recovery of Raman spectra that can substantially reduce the computation time. This algorithm is based on an improved Savitzky-Golay (SG) iterative smoothing method, which involves two key novel approaches: (a) the use of the Gauss-Seidel method and (b) the introduction of a relaxation factor into the iterative procedure. By applying the novel successive relaxation (SG-SR) iterative method, additional improvement in the convergence speed over the standard Savitzky-Golay procedure is realized. The proposed improved algorithm (the RIA-SG-SR algorithm), which uses SG-SR-based iteration instead of Savitzky-Golay iteration, has been optimized and validated with a mathematically simulated Raman spectrum, as well as experimentally measured Raman spectra from non-biological and biological samples. The method results in a significant reduction in computing cost while yielding consistent rejection of fluorescence and noise for spectra with low signal-to-fluorescence ratios and varied baselines. In the simulation, RIA-SG-SR achieved 1 order of magnitude improvement in iteration number and 2 orders of magnitude improvement in computation time compared with the range-independent background-subtraction algorithm (RIA). Furthermore, the computation time for processing an experimentally measured raw Raman spectrum from skin tissue decreased from 6.72 to 0.094 s. In general, the processing of the SG-SR method can be conducted within dozens of milliseconds, which enables a real-time procedure in practical situations.
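    The authors' accelerated RIA-SG-SR code is not reproduced here; the sketch below shows only the underlying idea of iterative Savitzky-Golay baseline (fluorescence) estimation, in which the smoothed curve is repeatedly clipped to lie at or below the spectrum so that sharp Raman peaks are excluded while the broad background is retained:

```python
import numpy as np
from scipy.signal import savgol_filter

def iterative_sg_baseline(spectrum, window=101, polyorder=3, n_iter=100):
    """Generic iterative SG baseline estimate (not the paper's SG-SR code).

    Each pass SG-smooths the current baseline estimate and takes the
    pointwise minimum with it, so peaks are progressively flattened out
    of the estimate.  window must be odd and <= len(spectrum)."""
    baseline = np.asarray(spectrum, dtype=float).copy()
    for _ in range(n_iter):
        smoothed = savgol_filter(baseline, window, polyorder)
        baseline = np.minimum(baseline, smoothed)
    return spectrum - baseline, baseline   # (fluorescence-free signal, baseline)
```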

  5. An iterative kernel based method for fourth order nonlinear equation with nonlinear boundary condition

    NASA Astrophysics Data System (ADS)

    Azarnavid, Babak; Parand, Kourosh; Abbasbandy, Saeid

    2018-06-01

    This article discusses an iterative reproducing kernel method with respect to its effectiveness and capability of solving a fourth-order boundary value problem with nonlinear boundary conditions modeling beams on elastic foundations. Since there is no method of obtaining a reproducing kernel which satisfies nonlinear boundary conditions, the standard reproducing kernel methods cannot be used directly to solve boundary value problems with nonlinear boundary conditions, as there is no knowledge about the existence and uniqueness of the solution. The aim of this paper is, therefore, to construct an iterative method by the use of a combination of the reproducing kernel Hilbert space method and a shooting-like technique to solve the mentioned problems. Error estimation for reproducing kernel Hilbert space methods for nonlinear boundary value problems has yet to be discussed in the literature. In this paper, we present error estimation for the reproducing kernel method to solve nonlinear boundary value problems, probably for the first time. Some numerical results are given to demonstrate the applicability of the method.

  6. Computation of nonlinear ultrasound fields using a linearized contrast source method.

    PubMed

    Verweij, Martin D; Demi, Libertario; van Dongen, Koen W A

    2013-08-01

    Nonlinear ultrasound is important in medical diagnostics because imaging of the higher harmonics improves resolution and reduces scattering artifacts. Second harmonic imaging is currently standard, and higher harmonic imaging is under investigation. The efficient development of novel imaging modalities and equipment requires accurate simulations of nonlinear wave fields in large volumes of realistic (lossy, inhomogeneous) media. The Iterative Nonlinear Contrast Source (INCS) method has been developed to deal with spatiotemporal domains measuring hundreds of wavelengths and periods. This full wave method considers the nonlinear term of the Westervelt equation as a nonlinear contrast source, and solves the equivalent integral equation via the Neumann iterative solution. Recently, the method has been extended with a contrast source that accounts for spatially varying attenuation. The current paper addresses the problem that the Neumann iterative solution converges poorly for strong contrast sources. The remedy is linearization of the nonlinear contrast source, combined with the application of more advanced methods for solving the resulting integral equation. Numerical results show that linearization in combination with a Bi-Conjugate Gradient Stabilized method allows the INCS method to deal with fairly strong, inhomogeneous attenuation, while the error due to the linearization can be eliminated by restarting the iterative scheme.

  7. Data-driven Green's function retrieval and application to imaging with multidimensional deconvolution

    NASA Astrophysics Data System (ADS)

    Broggini, Filippo; Wapenaar, Kees; van der Neut, Joost; Snieder, Roel

    2014-01-01

    An iterative method is presented that allows one to retrieve the Green's function originating from a virtual source located inside a medium using reflection data measured only at the acquisition surface. In addition to the reflection response, an estimate of the travel times corresponding to the direct arrivals is required. However, no detailed information about the heterogeneities in the medium is needed. The iterative scheme generalizes the Marchenko equation for inverse scattering to the seismic reflection problem. To give insight into the mechanism of the iterative method, its steps for a simple layered medium are analyzed using physical arguments based on the stationary phase method. The retrieved Green's wavefield is shown to correctly contain the multiples due to the inhomogeneities present in the medium. Additionally, a variant of the iterative scheme enables decomposition of the retrieved wavefield into its downgoing and upgoing components. These wavefields then enable creation of a ghost-free image of the medium with either cross correlation or multidimensional deconvolution, presenting an advantage over standard prestack migration.

  8. Simultaneous Localization and Mapping with Iterative Sparse Extended Information Filter for Autonomous Vehicles.

    PubMed

    He, Bo; Liu, Yang; Dong, Diya; Shen, Yue; Yan, Tianhong; Nian, Rui

    2015-08-13

    In this paper, a novel iterative sparse extended information filter (ISEIF) is proposed to solve the simultaneous localization and mapping (SLAM) problem, which is crucial for autonomous vehicles. The proposed algorithm solves the measurement update equations with iterative methods adaptively to reduce linearization errors. While the scalability advantage is kept, the consistency and accuracy of SEIF are improved. Simulations and practical experiments were carried out with both a land car benchmark and an autonomous underwater vehicle. Comparisons between iterative SEIF (ISEIF), standard EKF and SEIF are presented. All of the results convincingly show that ISEIF yields more consistent and accurate estimates than SEIF and preserves the scalability advantage over EKF as well.

  9. Low-memory iterative density fitting.

    PubMed

    Grajciar, Lukáš

    2015-07-30

    A new low-memory modification of the density fitting approximation based on a combination of a continuous fast multipole method (CFMM) and a preconditioned conjugate gradient solver is presented. The iterative conjugate gradient solver uses preconditioners formed from blocks of the Coulomb metric matrix that decrease the number of iterations needed for convergence by up to one order of magnitude. The matrix-vector products needed within the iterative algorithm are calculated using CFMM, which evaluates them with linear-scaling memory requirements only. Compared with the standard density fitting implementation, up to a 15-fold reduction of the memory requirements is achieved for the most efficient preconditioner, at a cost of only a 25% increase in computational time. The potential of the method is demonstrated by performing density functional theory calculations for a zeolite fragment with 2592 atoms and 121,248 auxiliary basis functions on a single 12-core CPU workstation. © 2015 Wiley Periodicals, Inc.
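    The block preconditioning strategy described above can be illustrated with a generic block-Jacobi preconditioner inside SciPy's conjugate gradient solver (an SPD test matrix stands in for the Coulomb metric; the quantum-chemistry context is not modeled):

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import cg, LinearOperator

def block_jacobi_preconditioner(A, block_size):
    """Preconditioner applying the inverses of A's diagonal blocks,
    in the spirit of the abstract's Coulomb-metric block preconditioning
    (illustrative only)."""
    n = A.shape[0]
    starts = list(range(0, n, block_size))
    inv_blocks = [np.linalg.inv(A[s:min(s + block_size, n),
                                  s:min(s + block_size, n)].toarray())
                  for s in starts]
    def apply(v):
        out = np.empty_like(v)
        for inv, s in zip(inv_blocks, starts):
            e = min(s + block_size, n)
            out[s:e] = inv @ v[s:e]
        return out
    return LinearOperator((n, n), matvec=apply)

# SPD tridiagonal test matrix standing in for the metric matrix.
n = 400
A = sp.diags([-1.0, 4.0, -1.0], offsets=[-1, 0, 1], shape=(n, n), format="csr")
b = np.ones(n)
M = block_jacobi_preconditioner(A, block_size=50)
x, info = cg(A, b, M=M)   # info == 0 signals convergence
```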

  10. Performance issues for iterative solvers in device simulation

    NASA Technical Reports Server (NTRS)

    Fan, Qing; Forsyth, P. A.; Mcmacken, J. R. F.; Tang, Wei-Pai

    1994-01-01

    Due to memory limitations, iterative methods have become the method of choice for large scale semiconductor device simulation. However, it is well known that these methods still suffer from reliability problems. The linear systems which appear in numerical simulation of semiconductor devices are notoriously ill-conditioned. In order to produce robust algorithms for practical problems, careful attention must be given to many implementation issues. This paper concentrates on strategies for developing robust preconditioners. In addition, effective data structures and convergence check issues are also discussed. These algorithms are compared with a standard direct sparse matrix solver on a variety of problems.

  11. Analysis of Anderson Acceleration on a Simplified Neutronics/Thermal Hydraulics System

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Toth, Alex; Kelley, C. T.; Slattery, Stuart R

    A standard method for solving coupled multiphysics problems in light water reactors is Picard iteration, which sequentially alternates between solving single-physics applications. This solution approach is appealing due to its simplicity of implementation and the ability to leverage existing software packages to accurately solve single-physics applications. However, there are several drawbacks in the convergence behavior of this method, namely slow convergence and the necessity of heuristically chosen damping factors to achieve convergence in many cases. Anderson acceleration is a method that has been seen to be more robust and faster converging than Picard iteration for many problems, without significantly higher cost per iteration or complexity of implementation, though its effectiveness in the context of multiphysics coupling is not well explored. In this work, we develop a one-dimensional model simulating the coupling between the neutron distribution and fuel and coolant properties in a single fuel pin. We show that this model generally captures the convergence issues noted in Picard iterations which couple high-fidelity physics codes. We then use this model to gauge potential improvements with regard to rate of convergence and robustness from utilizing Anderson acceleration as an alternative to Picard iteration.
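    The baseline scheme the abstract compares against is Picard iteration with a heuristic damping factor; a minimal sketch:

```python
import numpy as np

def damped_picard(g, x0, damping=0.5, tol=1e-8, max_iter=500):
    """Picard iteration with relaxation: x <- (1 - d)*x + d*g(x).

    d = 1 is the plain alternating single-physics solve; smaller d
    trades speed for robustness -- the heuristic tuning the abstract
    identifies as a drawback of Picard coupling."""
    x = np.asarray(x0, dtype=float)
    for k in range(max_iter):
        x_new = (1.0 - damping) * x + damping * np.asarray(g(x))
        if np.linalg.norm(x_new - x) < tol:
            return x_new, k + 1
        x = x_new
    return x, max_iter

# Example: a contractive coupled update (a stand-in for the physics solves).
g = lambda x: np.array([0.5 * np.cos(x[1]), 0.5 * np.sin(x[0]) + 1.0])
x, iters = damped_picard(g, x0=[0.0, 0.0], damping=0.8)
```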

  12. Tuning without over-tuning: parametric uncertainty quantification for the NEMO ocean model

    NASA Astrophysics Data System (ADS)

    Williamson, Daniel B.; Blaker, Adam T.; Sinha, Bablu

    2017-04-01

    In this paper we discuss climate model tuning and present an iterative automatic tuning method from the statistical science literature. The method, which we refer to here as iterative refocussing (though also known as history matching), avoids many of the common pitfalls of automatic tuning procedures that are based on optimisation of a cost function, principally the over-tuning of a climate model due to using only partial observations. This avoidance comes by seeking to rule out parameter choices that we are confident could not reproduce the observations, rather than seeking the model that is closest to them (a procedure that risks over-tuning). We comment on the state of climate model tuning and illustrate our approach through three waves of iterative refocussing of the NEMO (Nucleus for European Modelling of the Ocean) ORCA2 global ocean model run at 2° resolution. We show how at certain depths the anomalies of global mean temperature and salinity in a standard configuration of the model exceed 10 standard deviations away from observations, and show the extent to which this can be alleviated by iterative refocussing without compromising model performance spatially. We show how model improvements can be achieved by simultaneously perturbing multiple parameters, and illustrate the potential of using low-resolution ensembles to tune NEMO ORCA configurations at higher resolutions.

  13. Calibration and Data Analysis of the MC-130 Air Balance

    NASA Technical Reports Server (NTRS)

    Booth, Dennis; Ulbrich, N.

    2012-01-01

    Design, calibration, calibration analysis, and intended use of the MC-130 air balance are discussed. The MC-130 balance is an 8.0 inch diameter force balance that has two separate internal air flow systems and one external bellows system. The manual calibration of the balance consisted of a total of 1854 data points with both unpressurized and pressurized air flowing through the balance. A subset of 1160 data points was chosen for the calibration data analysis. The regression analysis of the subset was performed using two fundamentally different analysis approaches. First, the data analysis was performed using a recently developed extension of the Iterative Method. This approach fits gage outputs as a function of both applied balance loads and bellows pressures while still allowing the application of the iteration scheme that is used with the Iterative Method. Then, for comparison, the axial force was also analyzed using the Non-Iterative Method. This alternate approach directly fits loads as a function of measured gage outputs and bellows pressures and does not require a load iteration. The regression models used by both the extended Iterative and Non-Iterative Method were constructed such that they met a set of widely accepted statistical quality requirements. These requirements lead to reliable regression models and prevent overfitting of data because they ensure that no hidden near-linear dependencies between regression model terms exist and that only statistically significant terms are included. Finally, a comparison of the axial force residuals was performed. Overall, axial force estimates obtained from both methods show excellent agreement as the differences of the standard deviation of the axial force residuals are on the order of 0.001 % of the axial force capacity.

  14. Simultaneous Localization and Mapping with Iterative Sparse Extended Information Filter for Autonomous Vehicles

    PubMed Central

    He, Bo; Liu, Yang; Dong, Diya; Shen, Yue; Yan, Tianhong; Nian, Rui

    2015-01-01

    In this paper, a novel iterative sparse extended information filter (ISEIF) is proposed to solve the simultaneous localization and mapping (SLAM) problem, which is crucial for autonomous vehicles. The proposed algorithm solves the measurement update equations with iterative methods adaptively to reduce linearization errors. While the scalability advantage is kept, the consistency and accuracy of SEIF are improved. Simulations and practical experiments were carried out with both a land car benchmark and an autonomous underwater vehicle. Comparisons between iterative SEIF (ISEIF), standard EKF and SEIF are presented. All of the results convincingly show that ISEIF yields more consistent and accurate estimates than SEIF and preserves the scalability advantage over EKF as well. PMID:26287194

  15. Improved Image Quality in Head and Neck CT Using a 3D Iterative Approach to Reduce Metal Artifact.

    PubMed

    Wuest, W; May, M S; Brand, M; Bayerl, N; Krauss, A; Uder, M; Lell, M

    2015-10-01

    Metal artifacts from dental fillings and other devices degrade image quality and may compromise the detection and evaluation of lesions in the oral cavity and oropharynx by CT. The aim of this study was to evaluate the effect of iterative metal artifact reduction on CT of the oral cavity and oropharynx. Data from 50 consecutive patients with metal artifacts from dental hardware were reconstructed with standard filtered back-projection, linear interpolation metal artifact reduction (LIMAR), and iterative metal artifact reduction. The image quality of sections that contained metal was analyzed for the severity of artifacts and diagnostic value. A total of 455 sections (mean ± standard deviation, 9.1 ± 4.1 sections per patient) contained metal and were evaluated with each reconstruction method. Sections without metal were not affected by the algorithms and demonstrated identical image quality across methods. Of the sections containing metal, 38% were considered nondiagnostic with filtered back-projection, 31% with LIMAR, and only 7% with iterative metal artifact reduction. Thirty-three percent of the sections had poor image quality with filtered back-projection, 46% with LIMAR, and 10% with iterative metal artifact reduction. Thirteen percent of the sections with filtered back-projection, 17% with LIMAR, and 22% with iterative metal artifact reduction were of moderate image quality; 16% with filtered back-projection, 5% with LIMAR, and 30% with iterative metal artifact reduction were of good image quality; and 1% with LIMAR and 31% with iterative metal artifact reduction were of excellent image quality. Iterative metal artifact reduction yields the highest image quality in comparison with filtered back-projection and linear interpolation metal artifact reduction in patients with metal hardware in the head and neck area. © 2015 by American Journal of Neuroradiology.

  16. Applicability of the iterative technique for cardiac resynchronization therapy optimization: full-disclosure, 50-sequential-patient dataset of transmitral Doppler traces, with implications for future research design and guidelines.

    PubMed

    Jones, Siana; Shun-Shin, Matthew J; Cole, Graham D; Sau, Arunashis; March, Katherine; Williams, Suzanne; Kyriacou, Andreas; Hughes, Alun D; Mayet, Jamil; Frenneaux, Michael; Manisty, Charlotte H; Whinnett, Zachary I; Francis, Darrel P

    2014-04-01

    A full-disclosure study describing Doppler patterns during iterative atrioventricular delay (AVD) optimization of biventricular pacemakers (cardiac resynchronization therapy, CRT). Doppler traces of the first 50 eligible patients undergoing iterative Doppler AVD optimization in the BRAVO trial were examined. Three experienced observers classified conformity to guideline-described patterns. Each observer then selected the optimum AVD on two separate occasions: blinded and unblinded to AVD. Four Doppler E-A patterns occurred: A (always merged, 18% of patients), B (incrementally less fusion at short AVDs, 12%), C (full separation at short AVDs, as described by the guidelines, 28%), and D (always separated, 42%). In Groups A and D (60%), the iterative guidelines therefore cannot specify one single AVD. On the kappa scale (0 = chance alone; 1 = perfect agreement), observer agreement for the ideal AVD in Classes B and C was poor (0.32) and appeared worse in Groups A and D (0.22). Blinding caused the scattering of the AVDs selected as optimal to widen (standard deviation rising from 37 to 49 ms, P < 0.001). With blinding, 28% of the selected optimum AVDs were ≤60 or ≥200 ms. All 50 Doppler datasets are presented to support future methodological testing. In most patients, the iterative method does not clearly specify one AVD. In all patients, agreement on the ideal AVD between skilled observers viewing identical images is poor. The iterative protocol may successfully exclude some extremely unsuitable AVDs, but so might simply accepting the factory default. Irreproducibility of the gold standard also prevents alternative physiological optimization methods from being validated honestly.

  17. Iterative image-domain decomposition for dual-energy CT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Niu, Tianye; Dong, Xue; Petrongolo, Michael

    2014-04-15

    Purpose: Dual energy CT (DECT) imaging plays an important role in advanced imaging applications due to its capability of material decomposition. Direct decomposition via matrix inversion suffers from significant degradation of image signal-to-noise ratios, which reduces clinical values of DECT. Existing denoising algorithms achieve suboptimal performance since they suppress image noise either before or after the decomposition and do not fully explore the noise statistical properties of the decomposition process. In this work, the authors propose an iterative image-domain decomposition method for noise suppression in DECT, using the full variance-covariance matrix of the decomposed images. Methods: The proposed algorithm is formulated in the form of least-square estimation with smoothness regularization. Based on the design principles of a best linear unbiased estimator, the authors include the inverse of the estimated variance-covariance matrix of the decomposed images as the penalty weight in the least-square term. The regularization term enforces the image smoothness by calculating the square sum of neighboring pixel value differences. To retain the boundary sharpness of the decomposed images, the authors detect the edges in the CT images before decomposition. These edge pixels have small weights in the calculation of the regularization term. Distinct from the existing denoising algorithms applied on the images before or after decomposition, the method has an iterative process for noise suppression, with decomposition performed in each iteration. The authors implement the proposed algorithm using a standard conjugate gradient algorithm. The method performance is evaluated using an evaluation phantom (Catphan©600) and an anthropomorphic head phantom. The results are compared with those generated using direct matrix inversion with no noise suppression, a denoising method applied on the decomposed images, and an existing algorithm with similar formulation as the proposed method but with an edge-preserving regularization term. Results: On the Catphan phantom, the method maintains the same spatial resolution on the decomposed images as that of the CT images before decomposition (8 pairs/cm) while significantly reducing their noise standard deviation. Compared to that obtained by the direct matrix inversion, the noise standard deviation in the images decomposed by the proposed algorithm is reduced by over 98%. Without considering the noise correlation properties in the formulation, the denoising scheme degrades the spatial resolution to 6 pairs/cm for the same level of noise suppression. Compared to the edge-preserving algorithm, the method achieves better low-contrast detectability. A quantitative study is performed on the contrast-rod slice of the Catphan phantom. The proposed method achieves lower electron density measurement error as compared to that by the direct matrix inversion, and significantly reduces the error variation by over 97%. On the head phantom, the method reduces the noise standard deviation of decomposed images by over 97% without blurring the sinus structures. Conclusions: The authors propose an iterative image-domain decomposition method for DECT. The method combines noise suppression and material decomposition into an iterative process and achieves both goals simultaneously. By exploring the full variance-covariance properties of the decomposed images and utilizing the edge predetection, the proposed algorithm shows superior performance on noise suppression with high image spatial resolution and low-contrast detectability.
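
    A minimal numerical sketch of this kind of iteration, assuming a known 2x2 decomposition matrix A (with x = A d), uncorrelated per-image noise (the paper uses the full variance-covariance matrix), and plain gradient descent in place of conjugate gradient; all names are illustrative, not the authors' implementation:

    ```python
    import numpy as np

    def decompose_iterative(x_high, x_low, A, sigma_h, sigma_l,
                            beta=0.1, n_iter=200, step=0.05):
        """x_high, x_low: CT images at the two energies, shape (H, W).
        A: 2x2 mixing matrix with x = A @ d; sigma_h, sigma_l: noise SDs."""
        x = np.stack([x_high, x_low])                          # (2, H, W)
        w = np.array([1.0 / sigma_h**2, 1.0 / sigma_l**2]).reshape(2, 1, 1)
        d = np.einsum('ij,jhw->ihw', np.linalg.inv(A), x)      # direct-inversion start
        for _ in range(n_iter):
            resid = np.einsum('ij,jhw->ihw', A, d) - x         # A d - x
            grad = np.einsum('ji,jhw->ihw', A, w * resid)      # A^T W (A d - x)
            # gradient of the quadratic smoothness penalty (periodic 4-neighbour Laplacian)
            lap = (np.roll(d, 1, 1) + np.roll(d, -1, 1)
                   + np.roll(d, 1, 2) + np.roll(d, -1, 2) - 4.0 * d)
            d -= step * (grad - beta * lap)                    # descend the penalized objective
        return d
    ```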

  18. Analytic approximation for random muffin-tin alloys

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mills, R.; Gray, L.J.; Kaplan, T.

    1983-03-15

    The methods introduced in a previous paper under the name of "traveling-cluster approximation" (TCA) are applied, in a multiple-scattering approach, to the case of a random muffin-tin substitutional alloy. This permits the iterative part of a self-consistent calculation to be carried out entirely in terms of on-the-energy-shell scattering amplitudes. Off-shell components of the mean resolvent, needed for the calculation of spectral functions, are obtained by standard methods involving single-site scattering wave functions. The single-site TCA is just the usual coherent-potential approximation, expressed in a form particularly suited for iteration. A fixed-point theorem is proved for the general t-matrix TCA, ensuring convergence upon iteration to a unique self-consistent solution with the physically essential Herglotz properties.

  19. Robust Huber-based iterated divided difference filtering with application to cooperative localization of autonomous underwater vehicles.

    PubMed

    Gao, Wei; Liu, Yalong; Xu, Bo

    2014-12-19

    A new algorithm called Huber-based iterated divided difference filtering (HIDDF) is derived and applied to cooperative localization of autonomous underwater vehicles (AUVs) supported by a single surface leader. The position states are estimated using acoustic range measurements relative to the leader, in which some disadvantages such as weak observability, large initial error and contaminated measurements with outliers are inherent. By integrating the merits of both iterated divided difference filtering (IDDF) and Huber's M-estimation methodology, the new filtering method not only achieves more accurate estimation and faster convergence than standard divided difference filtering (DDF) under conditions of weak observability and large initial error, but also exhibits robustness with respect to outlier measurements, for which the standard IDDF would exhibit severe degradation in estimation accuracy. The correctness as well as validity of the algorithm is demonstrated through experimental results.
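
    A minimal sketch of the Huber weighting at the core of such M-estimation filters: residuals below a threshold are treated quadratically (as in a standard least-squares or Kalman update), while larger residuals are down-weighted, limiting the influence of outliers. The threshold value and names are illustrative, not taken from the paper:

    ```python
    import numpy as np

    def huber_weight(residual, delta=1.345):
        """Down-weighting applied to each normalized residual."""
        r = np.abs(residual)
        return np.where(r <= delta, 1.0, delta / np.maximum(r, 1e-12))

    # a 10-sigma outlier gets weight ~0.13 instead of 1.0
    print(huber_weight(np.array([0.5, 2.0, 10.0])))
    ```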

  20. [Impact to Z-score Mapping of Hyperacute Stroke Images by Computed Tomography in Adaptive Statistical Iterative Reconstruction].

    PubMed

    Watanabe, Shota; Sakaguchi, Kenta; Hosono, Makoto; Ishii, Kazunari; Murakami, Takamichi; Ichikawa, Katsuhiro

    The purpose of this study was to evaluate the effect of a hybrid-type iterative reconstruction method on Z-score mapping of hyperacute stroke in unenhanced computed tomography (CT) images. We used a hybrid-type iterative reconstruction [adaptive statistical iterative reconstruction (ASiR)] implemented in a CT system (Optima CT660 Pro advance, GE Healthcare). For 15 normal brain cases, we reconstructed CT images with filtered back projection (FBP) and with ASiR at a blending factor of 100% (ASiR100%). Two standardized normal brain datasets were created from normal databases of FBP images (FBP-NDB) and ASiR100% images (ASiR-NDB), and the standard deviation (SD) values in the basal ganglia were measured. Z-score mapping was performed for 12 hyperacute stroke cases using FBP-NDB and ASiR-NDB, and the Z-score values in the hyperacute stroke area and the normal area were compared between FBP-NDB and ASiR-NDB. With ASiR-NDB, the SD value of the standardized brain was decreased by 16%. The Z-score value of ASiR-NDB in the hyperacute stroke area was significantly higher than that of FBP-NDB (p<0.05). Therefore, using images reconstructed with ASiR100% for Z-score mapping has the potential to improve its accuracy.
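
    A minimal sketch of the voxel-wise Z-score mapping this study performs against a normal database (FBP-NDB or ASiR-NDB); names are illustrative. A lower database SD, as obtained with ASiR-NDB, inflates |Z| for the same signal change:

    ```python
    import numpy as np

    def z_score_map(patient, normal_db):
        """patient: (H, W) image; normal_db: (N, H, W) stack of normal cases
        reconstructed with the same method (FBP or ASiR100%)."""
        mu = normal_db.mean(axis=0)
        sd = normal_db.std(axis=0, ddof=1)
        return (patient - mu) / sd   # a 16% smaller SD raises |Z| accordingly
    ```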

  1. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dong, X; Petrongolo, M; Wang, T

    Purpose: A general problem of dual-energy CT (DECT) is that the decomposition is sensitive to noise in the two sets of dual-energy projection data, resulting in severely degraded qualities of decomposed images. We have previously proposed an iterative denoising method for DECT. Using a linear decomposition function, the method does not gain the full benefits of DECT on beam-hardening correction. In this work, we expand the framework of our iterative method to include non-linear decomposition models for noise suppression in DECT. Methods: We first obtain decomposed projections, which are free of beam-hardening artifacts, using a lookup table pre-measured on a calibration phantom. First-pass material images with high noise are reconstructed from the decomposed projections using standard filtered backprojection reconstruction. Noise on the decomposed images is then suppressed by an iterative method, which is formulated in the form of least-square estimation with smoothness regularization. Based on the design principles of a best linear unbiased estimator, we include the inverse of the estimated variance-covariance matrix of the decomposed images as the penalty weight in the least-square term. Analytical formulae are derived to compute the variance-covariance matrix from the measured decomposition lookup table. Results: We have evaluated the proposed method via phantom studies. Using non-linear decomposition, our method effectively suppresses the streaking artifacts of beam-hardening and obtains more uniform images than our previous approach based on a linear model. The proposed method reduces the average noise standard deviation of two basis materials by one order of magnitude without sacrificing the spatial resolution. Conclusion: We propose a general framework of iterative denoising for material decomposition of DECT. Preliminary phantom studies have shown the proposed method improves the image uniformity and reduces noise level without resolution loss. In the future, we will perform more phantom studies to further validate the performance of the proposed method. This work is supported by a Varian MRA grant.

  2. Shading correction assisted iterative cone-beam CT reconstruction

    NASA Astrophysics Data System (ADS)

    Yang, Chunlin; Wu, Pengwei; Gong, Shutao; Wang, Jing; Lyu, Qihui; Tang, Xiangyang; Niu, Tianye

    2017-11-01

    Recent advances in total variation (TV) technology enable accurate CT image reconstruction from highly under-sampled and noisy projection data. The standard iterative reconstruction algorithms, which work well in conventional CT imaging, fail to perform as expected in cone beam CT (CBCT) applications, wherein the non-ideal physics issues, including scatter and beam hardening, are more severe. These physics issues result in large areas of shading artifacts and cause deterioration to the piecewise constant property assumed in reconstructed images. To overcome this obstacle, we incorporate a shading correction scheme into low-dose CBCT reconstruction and propose a clinically acceptable and stable three-dimensional iterative reconstruction method that is referred to as the shading correction assisted iterative reconstruction. In the proposed method, we modify the TV regularization term by adding a shading compensation image to the reconstructed image to compensate for the shading artifacts while leaving the data fidelity term intact. This compensation image is generated empirically, using image segmentation and low-pass filtering, and updated in the iterative process whenever necessary. When the compensation image is determined, the objective function is minimized using the fast iterative shrinkage-thresholding algorithm accelerated on a graphics processing unit. The proposed method is evaluated using CBCT projection data of the Catphan© 600 phantom and two pelvis patients. Compared with the iterative reconstruction without shading correction, the proposed method reduces the overall CT number error from around 200 HU to around 25 HU and improves the spatial uniformity by 20 percent, given the same number of sparsely sampled projections. A clinically acceptable and stable iterative reconstruction algorithm for CBCT is proposed in this paper. Differing from the existing algorithms, this algorithm incorporates a shading correction scheme into the low-dose CBCT reconstruction and achieves a more stable optimization path and a more clinically acceptable reconstructed image. The proposed method does not rely on prior information and is thus practically attractive for applications of low-dose CBCT imaging in the clinic.

  3. A novel partial volume effects correction technique integrating deconvolution associated with denoising within an iterative PET image reconstruction

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Merlin, Thibaut, E-mail: thibaut.merlin@telecom-bretagne.eu; Visvikis, Dimitris; Fernandez, Philippe

    2015-02-15

    Purpose: Partial volume effect (PVE) plays an important role in both qualitative and quantitative PET image accuracy, especially for small structures. A previously proposed voxelwise PVE correction method applied on PET reconstructed images involves the use of Lucy–Richardson deconvolution incorporating wavelet-based denoising to limit the associated propagation of noise. The aim of this study is to incorporate the deconvolution, coupled with the denoising step, directly inside the iterative reconstruction process to further improve PVE correction. Methods: The list-mode ordered subset expectation maximization (OSEM) algorithm has been modified accordingly with the application of the Lucy–Richardson deconvolution algorithm to the current estimation of the image, at each reconstruction iteration. Acquisitions of the NEMA NU2-2001 IQ phantom were performed on a GE DRX PET/CT system to study the impact of incorporating the deconvolution inside the reconstruction [with and without the point spread function (PSF) model] in comparison to its application postreconstruction and to standard iterative reconstruction incorporating the PSF model. The impact of the denoising step was also evaluated. Images were semiquantitatively assessed by studying the trade-off between the intensity recovery and the noise level in the background estimated as relative standard deviation. Qualitative assessments of the developed methods were additionally performed on clinical cases. Results: Incorporating the deconvolution without denoising within the reconstruction achieved superior intensity recovery in comparison to both standard OSEM reconstruction integrating a PSF model and application of the deconvolution algorithm in a postreconstruction process. The addition of the denoising step made it possible to limit the SNR degradation while preserving the intensity recovery. Conclusions: This study demonstrates the feasibility of incorporating the Lucy–Richardson deconvolution associated with a wavelet-based denoising in the reconstruction process to better correct for PVE. Future work includes further evaluations of the proposed method on clinical datasets and the use of improved PSF models.
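
    A minimal stand-alone sketch of one Lucy–Richardson multiplicative update, the step the authors embed within each OSEM iteration (the wavelet-based denoising and the list-mode OSEM machinery are not shown). The Gaussian PSF is an illustrative stand-in for the scanner's point spread function:

    ```python
    import numpy as np
    from scipy.ndimage import gaussian_filter

    def psf_conv(img, sigma=2.0):
        # symmetric Gaussian PSF, so the forward blur equals its adjoint
        return gaussian_filter(img, sigma)

    def lucy_richardson_step(estimate, measured, eps=1e-8):
        ratio = measured / (psf_conv(estimate) + eps)
        return estimate * psf_conv(ratio)    # multiplicative LR update
    ```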

  4. Non-standard finite difference and Chebyshev collocation methods for solving fractional diffusion equation

    NASA Astrophysics Data System (ADS)

    Agarwal, P.; El-Sayed, A. A.

    2018-06-01

    In this paper, a new numerical technique for solving the fractional order diffusion equation is introduced. This technique is based on the non-standard finite difference (NSFD) method and the Chebyshev collocation method, where the fractional derivatives are described in the Caputo sense. The Chebyshev collocation method, together with the NSFD method, is used to convert the problem into a system of algebraic equations. These equations are solved numerically using Newton's iteration method. The applicability, reliability, and efficiency of the presented technique are demonstrated through some numerical examples.
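
    A minimal sketch of the Newton iteration used for the resulting algebraic system, x_{k+1} = x_k - J(x_k)^{-1} F(x_k); the toy system here is an illustrative stand-in for the collocation equations:

    ```python
    import numpy as np

    def newton(F, J, x0, tol=1e-10, max_iter=50):
        x = np.asarray(x0, dtype=float)
        for _ in range(max_iter):
            dx = np.linalg.solve(J(x), -F(x))   # solve J dx = -F
            x = x + dx
            if np.linalg.norm(dx) < tol:
                break
        return x

    # toy system: x0^2 + x1 - 1 = 0 and x0 - x1 = 0
    F = lambda x: np.array([x[0]**2 + x[1] - 1.0, x[0] - x[1]])
    J = lambda x: np.array([[2.0 * x[0], 1.0], [1.0, -1.0]])
    print(newton(F, J, [1.0, 0.0]))   # converges to ~(0.618, 0.618)
    ```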

  5. Optimal Variational Asymptotic Method for Nonlinear Fractional Partial Differential Equations.

    PubMed

    Baranwal, Vipul K; Pandey, Ram K; Singh, Om P

    2014-01-01

    We propose an optimal variational asymptotic method to solve time fractional nonlinear partial differential equations. In the proposed method, an arbitrary number of auxiliary parameters γ_0, γ_1, γ_2,… and auxiliary functions H_0(x), H_1(x), H_2(x),… are introduced in the correction functional of the standard variational iteration method. The optimal values of these parameters are obtained by minimizing the square residual error. To test the method, we apply it to solve two important classes of nonlinear partial differential equations: (1) the fractional advection-diffusion equation with nonlinear source term and (2) the fractional Swift-Hohenberg equation. Only a few iterations are required to achieve fairly accurate solutions of both the first and second problems.

  6. Estimating the Minimum Number of Judges Required for Test-Centred Standard Setting on Written Assessments. Do Discussion and Iteration Have an Influence?

    ERIC Educational Resources Information Center

    Fowell, S. L.; Fewtrell, R.; McLaughlin, P. J.

    2008-01-01

    Absolute standard setting procedures are recommended for assessment in medical education. Absolute, test-centred standard setting procedures were introduced for written assessments in the Liverpool MBChB in 2001. The modified Angoff and Ebel methods have been used for short answer question-based and extended matching question-based papers,…

  7. Efficient iterative method for solving the Dirac-Kohn-Sham density functional theory

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lin, Lin; Shao, Sihong; E, Weinan

    2012-11-06

    We present for the first time an efficient iterative method to directly solve the four-component Dirac-Kohn-Sham (DKS) density functional theory. Due to the existence of the negative energy continuum in the DKS operator, the existing iterative techniques for solving the Kohn-Sham systems cannot be efficiently applied to solve the DKS systems. The key component of our method is a novel filtering step (F) which acts as a preconditioner in the framework of the locally optimal block preconditioned conjugate gradient (LOBPCG) method. The resulting method, dubbed the LOBPCG-F method, is able to compute the desired eigenvalues and eigenvectors in the positive energy band without computing any state in the negative energy band. The LOBPCG-F method introduces mild extra cost compared to the standard LOBPCG method and can be easily implemented. We demonstrate our method in the pseudopotential framework with a planewave basis set which naturally satisfies the kinetic balance prescription. Numerical results for Pt2, Au2, TlF, and Bi2Se3 indicate that the LOBPCG-F method is a robust and efficient method for investigating the relativistic effect in systems containing heavy elements.
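
    A minimal sketch of preconditioned LOBPCG with SciPy on a toy Hermitian operator (not the four-component DKS operator itself); the diagonal preconditioner is an illustrative stand-in for the authors' filtering step F:

    ```python
    import numpy as np
    import scipy.sparse as sp
    from scipy.sparse.linalg import lobpcg

    n = 2000
    diag = np.arange(1, n + 1, dtype=float)
    A = sp.diags(diag)                  # toy Hermitian operator
    M = sp.diags(1.0 / diag)            # diagonal preconditioner (stand-in for F)
    X = np.random.default_rng(0).standard_normal((n, 5))   # block of 5 trial vectors
    vals, vecs = lobpcg(A, X, M=M, largest=False, tol=1e-8, maxiter=200)
    print(vals)   # approximates the 5 lowest eigenvalues: 1..5
    ```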

  8. Fast solution of elliptic partial differential equations using linear combinations of plane waves.

    PubMed

    Pérez-Jordá, José M

    2016-02-01

    Given an arbitrary elliptic partial differential equation (PDE), a procedure for obtaining its solution is proposed based on the method of Ritz: the solution is written as a linear combination of plane waves and the coefficients are obtained by variational minimization. The PDE to be solved is cast as a system of linear equations Ax=b, where the matrix A is not sparse, which prevents the straightforward application of standard iterative methods in order to solve it. This sparseness problem can be circumvented by means of a recursive bisection approach based on the fast Fourier transform, which makes it possible to implement fast versions of some stationary iterative methods (such as Gauss-Seidel) consuming O(N log N) memory and executing an iteration in O(N log^2 N) time, N being the number of plane waves used. In a similar way, fast versions of Krylov subspace methods and multigrid methods can also be implemented. These procedures are tested on Poisson's equation expressed in adaptive coordinates. It is found that the best results are obtained with the GMRES method using a multigrid preconditioner with Gauss-Seidel relaxation steps.
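
    A minimal sketch of the winning combination reported above, GMRES with a Gauss-Seidel-flavored preconditioner, using SciPy on a toy sparse 1D Poisson system (the paper's plane-wave matrix is dense and relies on an FFT-based fast matvec, not reproduced here). One forward Gauss-Seidel sweep, i.e. a triangular solve with the D+L part of A, stands in for the multigrid relaxation:

    ```python
    import numpy as np
    import scipy.sparse as sp
    from scipy.sparse.linalg import gmres, LinearOperator, spsolve_triangular

    n = 500
    A = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format='csr')  # 1D Poisson
    b = np.ones(n)

    L = sp.tril(A, format='csr')                    # D + L part of A
    def gs_sweep(r):                                # one forward Gauss-Seidel sweep
        return spsolve_triangular(L, r, lower=True)

    M = LinearOperator((n, n), matvec=gs_sweep)
    x, info = gmres(A, b, M=M)
    print(info, np.linalg.norm(A @ x - b))          # info == 0 on convergence
    ```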

  9. A Novel Iterative Scheme for the Very Fast and Accurate Solution of Non-LTE Radiative Transfer Problems

    NASA Astrophysics Data System (ADS)

    Trujillo Bueno, J.; Fabiani Bendicho, P.

    1995-12-01

    Iterative schemes based on Gauss-Seidel (G-S) and optimal successive over-relaxation (SOR) iteration are shown to provide a dramatic increase in the speed with which non-LTE radiation transfer (RT) problems can be solved. The convergence rates of these new RT methods are identical to those of upper triangular nonlocal approximate operator splitting techniques, but the computing time per iteration and the memory requirements are similar to those of a local operator splitting method. In addition to these properties, both methods are particularly suitable for multidimensional geometry, since they neither require the actual construction of nonlocal approximate operators nor the application of any matrix inversion procedure. Compared with the currently used Jacobi technique, which is based on the optimal local approximate operator (see Olson, Auer, & Buchler 1986), the G-S method presented here is faster by a factor 2. It gives excellent smoothing of the high-frequency error components, which makes it the iterative scheme of choice for multigrid radiative transfer. This G-S method can also be suitably combined with standard acceleration techniques to achieve even higher performance. Although the convergence rate of the optimal SOR scheme developed here for solving non-LTE RT problems is much higher than G-S, the computing time per iteration is also minimal, i.e., virtually identical to that of a local operator splitting method. While the conventional optimal local operator scheme provides the converged solution after a total CPU time (measured in arbitrary units) approximately equal to the number n of points per decade of optical depth, the time needed by this new method based on the optimal SOR iterations is only √n/(2√2). This method is competitive with those that result from combining the above-mentioned Jacobi and G-S schemes with the best acceleration techniques. Contrary to what happens with the local operator splitting strategy currently in use, these novel methods remain effective even under extreme non-LTE conditions in very fine grids.
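
    A minimal generic sketch contrasting the sweeps discussed above on a toy linear system (the radiative-transfer operators are not reproduced): with omega = 1 the update is Gauss-Seidel, reusing components already updated in the current sweep at no extra memory cost; omega > 1 gives SOR:

    ```python
    import numpy as np

    def sor_sweep(A, b, x, omega=1.0):
        """In-place sweep: omega=1 is Gauss-Seidel, omega>1 is over-relaxation."""
        for i in range(len(b)):
            s = A[i] @ x - A[i, i] * x[i]    # uses components updated this sweep
            x[i] = (1.0 - omega) * x[i] + omega * (b[i] - s) / A[i, i]
        return x

    A = np.array([[4.0, -1.0, 0.0], [-1.0, 4.0, -1.0], [0.0, -1.0, 4.0]])
    b = np.array([1.0, 2.0, 1.0])
    x = np.zeros(3)
    for _ in range(25):
        x = sor_sweep(A, b, x, omega=1.2)
    print(x, np.linalg.norm(A @ x - b))
    ```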

  10. Generalized Preconditioned Locally Harmonic Residual Eigensolver (GPLHR) v0.1

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    VECHARYNSKI, EUGENE; YANG, CHAO

    The software contains a MATLAB implementation of the Generalized Preconditioned Locally Harmonic Residual (GPLHR) method for solving standard and generalized non-Hermitian eigenproblems. The method is particularly useful for computing a subset of eigenvalues, and their eigen- or Schur vectors, closest to a given shift. The proposed method is based on block iterations and can take advantage of a preconditioner if it is available. It does not need to perform an exact shift-and-invert transformation. Standard and generalized eigenproblems are handled in a unified framework.

  11. A conjugate gradient method for solving the non-LTE line radiation transfer problem

    NASA Astrophysics Data System (ADS)

    Paletou, F.; Anterrieu, E.

    2009-12-01

    This study concerns the fast and accurate solution of the line radiation transfer problem, under non-LTE conditions. We propose and evaluate an alternative iterative scheme to the classical ALI-Jacobi method, and to the more recently proposed Gauss-Seidel and successive over-relaxation (GS/SOR) schemes. Our study is based on applying a preconditioned bi-conjugate gradient method (BiCG-P). Standard tests, in 1D plane-parallel geometry and in the frame of the two-level atom model with monochromatic scattering, are discussed. Rates of convergence of the previously mentioned iterative schemes are compared, as are their respective timing properties. The smoothing capability of the BiCG-P method is also demonstrated.

  12. Iterative algorithms for a non-linear inverse problem in atmospheric lidar

    NASA Astrophysics Data System (ADS)

    Denevi, Giulia; Garbarino, Sara; Sorrentino, Alberto

    2017-08-01

    We consider the inverse problem of retrieving aerosol extinction coefficients from Raman lidar measurements. In this problem the unknown and the data are related through the exponential of a linear operator, the unknown is non-negative and the data follow the Poisson distribution. Standard methods work on the log-transformed data and solve the resulting linear inverse problem, but neglect to take into account the noise statistics. In this study we show that proper modelling of the noise distribution can substantially improve the quality of the reconstructed extinction profiles. To achieve this goal, we consider the non-linear inverse problem with non-negativity constraint, and propose two iterative algorithms derived using the Karush-Kuhn-Tucker conditions. We validate the algorithms with synthetic and experimental data. As expected, the proposed algorithms outperform standard methods in terms of sensitivity to noise and reliability of the estimated profile.

  13. Achieving a high mode count in the exact electromagnetic simulation of diffractive optical elements.

    PubMed

    Junker, André; Brenner, Karl-Heinz

    2018-03-01

    The application of rigorous optical simulation algorithms, both in the modal as well as in the time domain, is known to be limited to the nano-optical scale due to severe computing time and memory constraints. This is true even for today's high-performance computers. To address this problem, we develop the fast rigorous iterative method (FRIM), an algorithm based on an iterative approach, which, under certain conditions, allows solving also large-size problems approximation free. We achieve this in the case of a modal representation by avoiding the computationally complex eigenmode decomposition. Thereby, the numerical cost is reduced from O(N^3) to O(N log N), enabling a simulation of structures like certain diffractive optical elements with a significantly higher mode count than presently possible. Apart from speed, another major advantage of the iterative FRIM over standard modal methods is the possibility to trade runtime against accuracy.

  14. Non-LTE line-blanketed model atmospheres of hot stars. 1: Hybrid complete linearization/accelerated lambda iteration method

    NASA Technical Reports Server (NTRS)

    Hubeny, I.; Lanz, T.

    1995-01-01

    A new numerical method for computing non-Local Thermodynamic Equilibrium (non-LTE) model stellar atmospheres is presented. The method, called the hybrid complete linearization/accelerated lambda iteration (CL/ALI) method, combines advantages of both its constituents. Its rate of convergence is virtually as high as for the standard CL method, while the computer time per iteration is almost as low as for the standard ALI method. The method is formulated as the standard complete linearization, the only difference being that the radiation intensity at selected frequency points is not explicitly linearized; instead, it is treated by means of the ALI approach. The scheme offers a wide spectrum of options, ranging from the full CL to the full ALI method. We demonstrate that the method works optimally if the majority of frequency points are treated in the ALI mode, while the radiation intensity at a few (typically two to 30) frequency points is explicitly linearized. We show how this method can be applied to calculate metal line-blanketed non-LTE model atmospheres, by using the idea of 'superlevels' and 'superlines' introduced originally by Anderson (1989). We calculate several illustrative models taking into account several tens of thousands of lines of Fe III to Fe IV and show that the hybrid CL/ALI method provides a robust method for calculating non-LTE line-blanketed model atmospheres for a wide range of stellar parameters. The results for individual stellar types will be presented in subsequent papers in this series.

  15. Perl Modules for Constructing Iterators

    NASA Technical Reports Server (NTRS)

    Tilmes, Curt

    2009-01-01

    The Iterator Perl Module provides a general-purpose framework for constructing iterator objects within Perl, and a standard API for interacting with those objects. Iterators are an object-oriented design pattern where a description of a series of values is used in a constructor. Subsequent queries can request values in that series. These Perl modules build on the standard Iterator framework and provide iterators for some other types of values. Iterator::DateTime constructs iterators from DateTime objects or Date::Parse descriptions and iCal/RFC 2445 style recurrence descriptions. It supports a variety of input parameters, including a start to the sequence, an end to the sequence, an iCal/RFC 2445 recurrence describing the frequency of the values in the series, and a format description that can refine the presentation manner of the DateTime. Iterator::String constructs iterators from string representations. This module is useful in contexts where the API consists of supplying a string and getting back an iterator where the specific iteration desired is opaque to the caller. It is of particular value to the Iterator::Hash module which provides nested iterations. Iterator::Hash constructs iterators from Perl hashes that can include multiple iterators. The constructed iterators will return all the permutations of the iterations of the hash by nested iteration of embedded iterators. A hash simply includes a set of keys mapped to values. It is a very common data structure used throughout Perl programming. The Iterator::Hash module allows a hash to include strings defining iterators (parsed and dispatched with Iterator::String) that are used to construct an overall series of hash values.

  16. Regularization Parameter Selection for Nonlinear Iterative Image Restoration and MRI Reconstruction Using GCV and SURE-Based Methods

    PubMed Central

    Ramani, Sathish; Liu, Zhihao; Rosen, Jeffrey; Nielsen, Jon-Fredrik; Fessler, Jeffrey A.

    2012-01-01

    Regularized iterative reconstruction algorithms for imaging inverse problems require selection of appropriate regularization parameter values. We focus on the challenging problem of tuning regularization parameters for nonlinear algorithms for the case of additive (possibly complex) Gaussian noise. Generalized cross-validation (GCV) and (weighted) mean-squared error (MSE) approaches (based on Stein's Unbiased Risk Estimate, SURE) need the Jacobian matrix of the nonlinear reconstruction operator (representative of the iterative algorithm) with respect to the data. We derive the desired Jacobian matrix for two types of nonlinear iterative algorithms: a fast variant of the standard iterative reweighted least-squares method and the contemporary split-Bregman algorithm, both of which can accommodate a wide variety of analysis- and synthesis-type regularizers. The proposed approach iteratively computes two weighted SURE-type measures: Predicted-SURE and Projected-SURE (that require knowledge of noise variance σ2), and GCV (that does not need σ2) for these algorithms. We apply the methods to image restoration and to magnetic resonance image (MRI) reconstruction using total variation (TV) and an analysis-type ℓ1-regularization. We demonstrate through simulations and experiments with real data that minimizing Predicted-SURE and Projected-SURE consistently lead to near-MSE-optimal reconstructions. We also observed that minimizing GCV yields reconstruction results that are near-MSE-optimal for image restoration and slightly sub-optimal for MRI. Theoretical derivations in this work related to Jacobian matrix evaluations can be extended, in principle, to other types of regularizers and reconstruction algorithms. PMID:22531764

  17. Nonrigid iterative closest points for registration of 3D biomedical surfaces

    NASA Astrophysics Data System (ADS)

    Liang, Luming; Wei, Mingqiang; Szymczak, Andrzej; Petrella, Anthony; Xie, Haoran; Qin, Jing; Wang, Jun; Wang, Fu Lee

    2018-01-01

    Advanced 3D optical and laser scanners bring new challenges to computer graphics. We present a novel nonrigid surface registration algorithm based on the Iterative Closest Point (ICP) method with multiple correspondences. Our method, called the Nonrigid Iterative Closest Points (NICPs), can be applied to surfaces of arbitrary topology. It does not impose any restrictions on the deformation, e.g. rigidity or articulation. Finally, it does not require parametrization of input meshes. Our method is based on an objective function that combines distance and regularization terms. Unlike the standard ICP, the distance term is determined based on multiple two-way correspondences rather than single one-way correspondences between surfaces. A Laplacian-based regularization term is proposed to take full advantage of multiple two-way correspondences. This term regularizes the surface movement by enforcing vertices to move coherently with their 1-ring neighbors. The proposed method achieves good performance when no global pose differences or significant amounts of bending exist in the models, for example, families of similar shapes such as human femur and vertebrae models.

  18. Inversion of potential field data using the finite element method on parallel computers

    NASA Astrophysics Data System (ADS)

    Gross, L.; Altinay, C.; Shaw, S.

    2015-11-01

    In this paper we present a formulation of the joint inversion of potential field anomaly data as an optimization problem with partial differential equation (PDE) constraints. The problem is solved using the iterative Broyden-Fletcher-Goldfarb-Shanno (BFGS) method with the Hessian operator of the regularization and cross-gradient component of the cost function as preconditioner. We will show that each iterative step requires the solution of several PDEs namely for the potential fields, for the adjoint defects and for the application of the preconditioner. In extension to the traditional discrete formulation the BFGS method is applied to continuous descriptions of the unknown physical properties in combination with an appropriate integral form of the dot product. The PDEs can easily be solved using standard conforming finite element methods (FEMs) with potentially different resolutions. For two examples we demonstrate that the number of PDE solutions required to reach a given tolerance in the BFGS iteration is controlled by weighting regularization and cross-gradient but is independent of the resolution of PDE discretization and that as a consequence the method is weakly scalable with the number of cells on parallel computers. We also show a comparison with the UBC-GIF GRAV3D code.

  19. A self-adapting system for the automated detection of inter-ictal epileptiform discharges.

    PubMed

    Lodder, Shaun S; van Putten, Michel J A M

    2014-01-01

    Scalp EEG remains the standard clinical procedure for the diagnosis of epilepsy. Manual detection of inter-ictal epileptiform discharges (IEDs) is slow and cumbersome, and few automated methods are used to assist in practice. This is mostly due to low sensitivities, high false positive rates, or a lack of trust in the automated method. In this study we aim to find a solution that will make computer assisted detection more efficient than conventional methods, while preserving the detection certainty of a manual search. Our solution consists of two phases. First, a detection phase finds all events similar to epileptiform activity by using a large database of template waveforms. Individual template detections are combined to form "IED nominations", each with a corresponding certainty value based on the reliability of their contributing templates. The second phase uses the ten nominations with highest certainty and presents them to the reviewer one by one for confirmation. Confirmations are used to update certainty values of the remaining nominations, and another iteration is performed where ten nominations with the highest certainty are presented. This continues until the reviewer is satisfied with what has been seen. Reviewer feedback is also used to update template accuracies globally and improve future detections. Using the described method and fifteen evaluation EEGs (241 IEDs), one third of all inter-ictal events were shown after one iteration, half after two iterations, and 74%, 90%, and 95% after 5, 10 and 15 iterations respectively. Reviewing fifteen iterations for the 20-30 min recordings took approximately 5 min. The proposed method shows a practical approach for combining automated detection with visual searching for inter-ictal epileptiform activity. Further evaluation is needed to verify its clinical feasibility and measure the added value it presents.

  20. Iteration in Early-Elementary Engineering Design

    NASA Astrophysics Data System (ADS)

    McFarland Kendall, Amber Leigh

    K-12 standards and curricula are beginning to include engineering design as a key practice within Science Technology Engineering and Mathematics (STEM) education. However, there is little research on how the youngest students engage in engineering design within the elementary classroom. This dissertation focuses on iteration as an essential aspect of engineering design, and because research at the college and professional level suggests iteration improves the designer's understanding of problems and the quality of design solutions. My research presents qualitative case studies of students in kindergarten and third-grade as they engage in classroom engineering design challenges which integrate with traditional curricula standards in mathematics, science, and literature. I discuss my results through the lens of activity theory, emphasizing practices, goals, and mediating resources. Through three chapters, I provide insight into how early-elementary students iterate upon their designs by characterizing the ways in which lesson design impacts testing and revision, by analyzing the plan-driven and experimentation-driven approaches that student groups use when solving engineering design challenges, and by investigating how students attend to constraints within the challenge. I connect these findings to teacher practices and curriculum design in order to suggest methods of promoting iteration within open-ended, classroom-based engineering design challenges. This dissertation contributes to the field of engineering education by providing evidence of productive engineering practices in young students and support for the value of engineering design challenges in developing students' participation and agency in these practices.

  1. An improved genetic algorithm for designing optimal temporal patterns of neural stimulation

    NASA Astrophysics Data System (ADS)

    Cassar, Isaac R.; Titus, Nathan D.; Grill, Warren M.

    2017-12-01

    Objective. Electrical neuromodulation therapies typically apply constant frequency stimulation, but non-regular temporal patterns of stimulation may be more effective and more efficient. However, the design space for temporal patterns is exceedingly large, and model-based optimization is required for pattern design. We designed and implemented a modified genetic algorithm (GA) intended for designing optimal temporal patterns of electrical neuromodulation. Approach. We tested and modified standard GA methods for application to designing temporal patterns of neural stimulation. We evaluated each modification individually and all modifications collectively by comparing performance to the standard GA across three test functions and two biophysically-based models of neural stimulation. Main results. The proposed modifications of the GA significantly improved performance across the test functions and performed best when all were used collectively. The standard GA found patterns that outperformed fixed-frequency, clinically-standard patterns in biophysically-based models of neural stimulation, but the modified GA, in many fewer iterations, consistently converged to higher-scoring, non-regular patterns of stimulation. Significance. The proposed improvements to standard GA methodology reduced the number of iterations required for convergence and identified superior solutions.
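
    A minimal sketch of a standard generational GA loop over binary stimulation patterns (one bit per time bin), with truncation selection, one-point crossover, bit-flip mutation, and elitism. The fitness function, pattern length, and operator choices are illustrative placeholders, not the authors' model-based objective or their specific modifications:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def evolve(fitness, n_bits=50, pop_size=40, n_gen=100, p_mut=0.02):
        pop = rng.integers(0, 2, (pop_size, n_bits))
        best = pop[0]
        for _ in range(n_gen):
            scores = np.array([fitness(ind) for ind in pop])
            order = np.argsort(scores)[::-1]
            best = pop[order[0]].copy()                      # track the elite
            elite = pop[order[: pop_size // 2]]              # truncation selection
            parents = elite[rng.integers(0, len(elite), (pop_size, 2))]
            cut = rng.integers(1, n_bits, pop_size)
            children = np.array([np.concatenate([p[0][:c], p[1][c:]])
                                 for p, c in zip(parents, cut)])  # one-point crossover
            flip = rng.random(children.shape) < p_mut
            pop = np.where(flip, 1 - children, children)     # bit-flip mutation
            pop[0] = best                                    # elitism
        return best

    # e.g. favour patterns with exactly 10 pulses in 50 bins
    print(evolve(lambda p: -abs(int(p.sum()) - 10)).sum())
    ```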

  2. Iterative outlier removal: A method for identifying outliers in laboratory recalibration studies

    PubMed Central

    Parrinello, Christina M.; Grams, Morgan E.; Sang, Yingying; Couper, David; Wruck, Lisa M.; Li, Danni; Eckfeldt, John H.; Selvin, Elizabeth; Coresh, Josef

    2016-01-01

    Background Extreme values that arise for any reason, including through non-laboratory measurement procedure-related processes (inadequate mixing, evaporation, mislabeling), lead to outliers and inflate errors in recalibration studies. We present an approach termed iterative outlier removal (IOR) for identifying such outliers. Methods We previously identified substantial laboratory drift in uric acid measurements in the Atherosclerosis Risk in Communities (ARIC) Study over time. Serum uric acid was originally measured in 1990–92 on a Coulter DACOS instrument using an uricase-based measurement procedure. To recalibrate previous measured concentrations to a newer enzymatic colorimetric measurement procedure, uric acid was re-measured in 200 participants from stored plasma in 2011–13 on a Beckman Olympus 480 autoanalyzer. To conduct IOR, we excluded data points >3 standard deviations (SDs) from the mean difference. We continued this process using the resulting data until no outliers remained. Results IOR detected more outliers and yielded greater precision in simulation. The original mean difference (SD) in uric acid was 1.25 (0.62) mg/dL. After four iterations, 9 outliers were excluded, and the mean difference (SD) was 1.23 (0.45) mg/dL. Conducting only one round of outlier removal (standard approach) would have excluded 4 outliers (mean difference [SD] = 1.22 [0.51] mg/dL). Applying the recalibration (derived from Deming regression) from each approach to the original measurements, the prevalence of hyperuricemia (>7 mg/dL) was 28.5% before IOR and 8.5% after IOR. Conclusion IOR is a useful method for removal of extreme outliers irrelevant to recalibrating laboratory measurements, and identifies more extraneous outliers than the standard approach. PMID:27197675
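
    A minimal sketch of IOR as described above: repeatedly drop points more than 3 SDs from the mean of the currently retained differences until no such points remain; names and the toy data are illustrative:

    ```python
    import numpy as np

    def iterative_outlier_removal(diff, n_sd=3.0):
        """diff: paired differences; returns a boolean mask of retained points."""
        keep = np.ones(diff.size, dtype=bool)
        while True:
            mu, sd = diff[keep].mean(), diff[keep].std(ddof=1)
            new_keep = keep & (np.abs(diff - mu) <= n_sd * sd)
            if new_keep.sum() == keep.sum():     # no outliers remain
                return keep
            keep = new_keep

    rng = np.random.default_rng(1)
    diff = np.r_[rng.normal(1.25, 0.45, 191), [5.0, -3.0, 4.2]]  # 3 gross outliers
    mask = iterative_outlier_removal(diff)
    print(diff.size - mask.sum(), "points excluded")
    ```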

  3. Application of Conjugate Gradient methods to tidal simulation

    USGS Publications Warehouse

    Barragy, E.; Carey, G.F.; Walters, R.A.

    1993-01-01

    A harmonic decomposition technique is applied to the shallow water equations to yield a complex, nonsymmetric, nonlinear, Helmholtz type problem for the sea surface and an accompanying complex, nonlinear diagonal problem for the velocities. The equation for the sea surface is linearized using successive approximation and then discretized with linear, triangular finite elements. The study focuses on applying iterative methods to solve the resulting complex linear systems. The comparative evaluation includes both standard iterative methods for the real subsystems and complex versions of the well known Bi-Conjugate Gradient and Bi-Conjugate Gradient Squared methods. Several Incomplete LU type preconditioners are discussed, and the effects of node ordering, rejection strategy, domain geometry and Coriolis parameter (affecting asymmetry) are investigated. Implementation details for the complex case are discussed. Performance studies are presented and comparisons made with a frontal solver.
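
    A minimal sketch of a complex preconditioned Bi-CGSTAB solve with SciPy, using an incomplete LU preconditioner of the kind discussed above; the complex nonsymmetric tridiagonal system is a toy stand-in for the discretized Helmholtz-type problem:

    ```python
    import numpy as np
    import scipy.sparse as sp
    from scipy.sparse.linalg import bicgstab, spilu, LinearOperator

    n = 400
    main = 4.0 + 0.5j + 0.1 * np.arange(n)          # complex diagonal
    A = sp.diags([main, -1.0 * np.ones(n - 1), -0.8 * np.ones(n - 1)],
                 [0, -1, 1], format='csc')           # complex, nonsymmetric
    b = np.ones(n, dtype=complex)

    ilu = spilu(A, drop_tol=1e-4)                    # incomplete LU preconditioner
    M = LinearOperator(A.shape, matvec=ilu.solve, dtype=complex)
    x, info = bicgstab(A, b, M=M)
    print(info, np.linalg.norm(A @ x - b))
    ```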

  4. A method for the dynamic and thermal stress analysis of space shuttle surface insulation

    NASA Technical Reports Server (NTRS)

    Ojalvo, I. U.; Levy, A.; Austin, F.

    1975-01-01

    The thermal protection system of the space shuttle consists of thousands of separate insulation tiles bonded to the orbiter's surface through a soft strain-isolation layer. The individual tiles are relatively thick and possess nonuniform properties. Therefore, each is idealized by finite-element assemblages containing up to 2500 degrees of freedom. Since the tiles affixed to a given structural panel will, in general, interact with one another, application of the standard direct-stiffness method would require equation systems involving excessive numbers of unknowns. This paper presents a method which overcomes this problem through an efficient iterative procedure which requires treatment of only a single tile at any given time. Results of associated static, dynamic, and thermal stress analyses and sufficient conditions for convergence of the iterative solution method are given.

  5. String-averaging incremental subgradients for constrained convex optimization with applications to reconstruction of tomographic images

    NASA Astrophysics Data System (ADS)

    Massambone de Oliveira, Rafael; Salomão Helou, Elias; Fontoura Costa, Eduardo

    2016-11-01

    We present a method for non-smooth convex minimization which is based on subgradient directions and string-averaging techniques. In this approach, the set of available data is split into sequences (strings) and a given iterate is processed independently along each string, possibly in parallel, by an incremental subgradient method (ISM). The end-points of all strings are averaged to form the next iterate. The method is useful to solve sparse and large-scale non-smooth convex optimization problems, such as those arising in tomographic imaging. A convergence analysis is provided under realistic, standard conditions. Numerical tests are performed in a tomographic image reconstruction application, showing good performance for the convergence speed when measured as the decrease ratio of the objective function, in comparison to classical ISM.
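
    A minimal sketch of one string-averaging incremental subgradient iteration as described above: each string is processed sequentially from the current iterate (the strings could run in parallel), and the string end-points are averaged. The subgradient callback, step rule, and toy objective are illustrative:

    ```python
    import numpy as np

    def sa_ism_iteration(x, strings, subgrad, step):
        """strings: list of index lists; subgrad(y, i): a subgradient of the
        i-th data term at y."""
        endpoints = []
        for s in strings:
            y = x.copy()
            for i in s:                          # incremental pass along one string
                y = y - step * subgrad(y, i)
            endpoints.append(y)
        return np.mean(endpoints, axis=0)        # average the string end-points

    # toy usage: minimize sum_i |x - t_i| over scalar targets t
    t = np.array([0.0, 1.0, 2.0, 3.0])
    sg = lambda y, i: np.sign(y - t[i])
    x = np.array([10.0])
    for k in range(1, 200):
        x = sa_ism_iteration(x, [[0, 1], [2, 3]], sg, step=1.0 / k)
    print(x)   # approaches the median region [1, 2]
    ```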

  6. Anatomical-based partial volume correction for low-dose dedicated cardiac SPECT/CT

    NASA Astrophysics Data System (ADS)

    Liu, Hui; Chan, Chung; Grobshtein, Yariv; Ma, Tianyu; Liu, Yaqiang; Wang, Shi; Stacy, Mitchel R.; Sinusas, Albert J.; Liu, Chi

    2015-09-01

    Due to the limited spatial resolution, partial volume effect has been a major degrading factor on quantitative accuracy in emission tomography systems. This study aims to investigate the performance of several anatomical-based partial volume correction (PVC) methods for a dedicated cardiac SPECT/CT system (GE Discovery NM/CT 570c) with focused field-of-view over a clinically relevant range of high and low count levels for two different radiotracer distributions. These PVC methods include perturbation geometry transfer matrix (pGTM), pGTM followed by multi-target correction (MTC), pGTM with known concentration in blood pool, the former followed by MTC and our newly proposed methods, which perform the MTC method iteratively, where the mean values in all regions are estimated and updated by the MTC-corrected images each time in the iterative process. The NCAT phantom was simulated for cardiovascular imaging with 99mTc-tetrofosmin, a myocardial perfusion agent, and 99mTc-red blood cell (RBC), a pure intravascular imaging agent. Images were acquired at six different count levels to investigate the performance of PVC methods in both high and low count levels for low-dose applications. We performed two large animal in vivo cardiac imaging experiments following injection of 99mTc-RBC for evaluation of intramyocardial blood volume (IMBV). The simulation results showed our proposed iterative methods provide superior performance than other existing PVC methods in terms of image quality, quantitative accuracy, and reproducibility (standard deviation), particularly for low-count data. The iterative approaches are robust for both 99mTc-tetrofosmin perfusion imaging and 99mTc-RBC imaging of IMBV and blood pool activity even at low count levels. The animal study results indicated the effectiveness of PVC to correct the overestimation of IMBV due to blood pool contamination. In conclusion, the iterative PVC methods can achieve more accurate quantification, particularly for low count cardiac SPECT studies, typically obtained from low-dose protocols, gated studies, and dynamic applications.

  7. An iterative method for hydrodynamic interactions in Brownian dynamics simulations of polymer dynamics

    NASA Astrophysics Data System (ADS)

    Miao, Linling; Young, Charles D.; Sing, Charles E.

    2017-07-01

    Brownian Dynamics (BD) simulations are a standard tool for understanding the dynamics of polymers in and out of equilibrium. Quantitative comparison can be made to rheological measurements of dilute polymer solutions, as well as direct visual observations of fluorescently labeled DNA. The primary computational challenge with BD is the expensive calculation of hydrodynamic interactions (HI), which are necessary to capture physically realistic dynamics. The full HI calculation, performed via a Cholesky decomposition every time step, scales with the length of the polymer as O(N^3). This limits the calculation to a few hundred simulated particles. A number of approximations in the literature can lower this scaling to O(N^2 - N^2.25), and explicit solvent methods scale as O(N); however, both incur a significant constant per-time step computational cost. Despite this progress, there remains a need for new or alternative methods of calculating hydrodynamic interactions; large polymer chains or semidilute polymer solutions remain computationally expensive. In this paper, we introduce an alternative method for calculating approximate hydrodynamic interactions. Our method relies on an iterative scheme to establish self-consistency between a hydrodynamic matrix that is averaged over simulation and the hydrodynamic matrix used to run the simulation. Comparison to standard BD simulation and polymer theory results demonstrates that this method quantitatively captures both equilibrium and steady-state dynamics after only a few iterations. The use of an averaged hydrodynamic matrix allows the computationally expensive Brownian noise calculation to be performed infrequently, so that it is no longer the bottleneck of the simulation calculations. We also investigate limitations of this conformational averaging approach in ring polymers.

  8. SU-D-17A-02: Four-Dimensional CBCT Using Conventional CBCT Dataset and Iterative Subtraction Algorithm of a Lung Patient

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hu, E; Lasio, G; Yi, B

    2014-06-01

    Purpose: The Iterative Subtraction Algorithm (ISA) method retrospectively generates a pre-selected motion-phase cone-beam CT image from the full-motion cone-beam CT acquired at standard rotation speed. This work evaluates the ISA method with real lung patient data. Methods: The goal of the ISA algorithm is to extract motion and no-motion components from the full reconstruction CBCT. The workflow consists of subtracting from the full CBCT all of the undesired motion phases to obtain a motion de-blurred single-phase CBCT image, followed by iteration of this subtraction process. ISA is realized as follows: 1) The projections are sorted into various phases, and from all phases, a full reconstruction is performed to generate an image CTM. 2) Forward projections of CTM are generated at the desired phase projection angles; the subtraction of the projections and the forward projections reconstructs CTSub1, which diminishes the desired phase component. 3) By adding CTSub1 back to CTM, a no-motion CBCT, CTS1, can be computed. 4) CTS1 still contains a residual motion component. 5) This residual motion component can be further reduced by iteration. The ISA 4D-CBCT technique was implemented using the Varian Trilogy accelerator OBI system. To evaluate the method, a lung patient CBCT dataset was used. The reconstruction algorithm is FDK. Results: The single-phase CBCT reconstruction generated via ISA successfully isolates the desired motion phase from the full-motion CBCT, effectively reducing motion blur. It also shows improved image quality, with reduced streak artifacts with respect to the reconstructions from unprocessed phase-sorted projections only. Conclusion: A CBCT motion de-blurring algorithm, ISA, has been developed and evaluated with lung patient data. The algorithm allows improved visualization of a single motion phase extracted from a standard CBCT dataset. This study has been supported by the National Institutes of Health through R01CA133539.

  9. An adaptive Gaussian process-based iterative ensemble smoother for data assimilation

    NASA Astrophysics Data System (ADS)

    Ju, Lei; Zhang, Jiangjiang; Meng, Long; Wu, Laosheng; Zeng, Lingzao

    2018-05-01

    Accurate characterization of subsurface hydraulic conductivity is vital for modeling of subsurface flow and transport. The iterative ensemble smoother (IES) has been proposed to estimate the heterogeneous parameter field. As a Monte Carlo-based method, IES requires a relatively large ensemble size to guarantee its performance. To improve the computational efficiency, we propose an adaptive Gaussian process (GP)-based iterative ensemble smoother (GPIES) in this study. At each iteration, the GP surrogate is adaptively refined by adding a few new base points chosen from the updated parameter realizations. Then the sensitivity information between model parameters and measurements is calculated from a large number of realizations generated by the GP surrogate with virtually no computational cost. Since the original model evaluations are only required for base points, whose number is much smaller than the ensemble size, the computational cost is significantly reduced. The applicability of GPIES in estimating heterogeneous conductivity is evaluated by the saturated and unsaturated flow problems, respectively. Without sacrificing estimation accuracy, GPIES achieves about an order of magnitude of speed-up compared with the standard IES. Although subsurface flow problems are considered in this study, the proposed method can be equally applied to other hydrological models.

  10. Domain Decomposition Algorithms for First-Order System Least Squares Methods

    NASA Technical Reports Server (NTRS)

    Pavarino, Luca F.

    1996-01-01

    Least squares methods based on first-order systems have been recently proposed and analyzed for second-order elliptic equations and systems. They produce symmetric and positive definite discrete systems by using standard finite element spaces, which are not required to satisfy the inf-sup condition. In this paper, several domain decomposition algorithms for these first-order least squares methods are studied. Some representative overlapping and substructuring algorithms are considered in their additive and multiplicative variants. The theoretical and numerical results obtained show that the classical convergence bounds (on the iteration operator) for standard Galerkin discretizations are also valid for least squares methods.

  11. An iterative method for airway segmentation using multiscale leakage detection

    NASA Astrophysics Data System (ADS)

    Nadeem, Syed Ahmed; Jin, Dakai; Hoffman, Eric A.; Saha, Punam K.

    2017-02-01

    There are growing applications of quantitative computed tomography for assessment of pulmonary diseases by characterizing lung parenchyma as well as the bronchial tree. Many large multi-center studies incorporating lung imaging as a study component are interested in phenotypes relating airway branching patterns, wall-thickness, and other morphological measures. To our knowledge, there are no fully automated airway tree segmentation methods free of the need for user review. Even when there are failures in only a small fraction of segmentation results, the airway tree masks must be manually reviewed for all results, which is laborious considering that several thousand image data sets are evaluated in large studies. In this paper, we present a CT-based novel airway tree segmentation algorithm using iterative multi-scale leakage detection, freezing, and active seed detection. The method is fully automated, requiring no manual inputs or post-segmentation editing. It uses simple intensity-based connectivity and a new leakage detection algorithm to iteratively grow an airway tree starting from an initial seed inside the trachea. It begins with a conservative threshold and then iteratively shifts toward more generous values. The method was applied to chest CT scans of ten non-smoking subjects at total lung capacity and ten at functional residual capacity. Airway segmentation results were compared to an expert's manually edited segmentations. Branch-level accuracy of the new segmentation method was examined along five standardized segmental airway paths (RB1, RB4, RB10, LB1, LB10) and two generations beyond these branches. The method successfully detected all branches up to two generations beyond these segmental bronchi with no visual leakages.

  12. Measurement of the complex transmittance of large optical elements with Ptychographical Iterative Engine.

    PubMed

    Wang, Hai-Yan; Liu, Cheng; Veetil, Suhas P; Pan, Xing-Chen; Zhu, Jian-Qiang

    2014-01-27

    Wavefront control is a significant parameter in inertial confinement fusion (ICF). The complex transmittance of large optical elements which are often used in ICF is obtained by computing the phase difference of the illuminating and transmitting fields using the Ptychographical Iterative Engine (PIE). This can accurately and effectively measure the transmittance of large optical elements with irregular surface profiles, which are otherwise not measurable using commonly used interferometric techniques due to the lack of a standard reference plate. Experiments are done with a Continuous Phase Plate (CPP) to illustrate the feasibility of this method.

  13. Estimated spectrum adaptive postfilter and the iterative prepost filtering algorithms

    NASA Technical Reports Server (NTRS)

    Linares, Irving (Inventor)

    2004-01-01

    The invention presents the Estimated Spectrum Adaptive Postfilter (ESAP) and the Iterative Prepost Filter (IPF) algorithms. These algorithms model a number of image-adaptive post-filtering and pre-post filtering methods. They are designed to minimize Discrete Cosine Transform (DCT) blocking distortion caused when images are highly compressed with the Joint Photographic Expert Group (JPEG) standard. The ESAP and the IPF techniques of the present invention minimize the mean square error (MSE) to improve the objective and subjective quality of low-bit-rate JPEG gray-scale images while simultaneously enhancing perceptual visual quality with respect to baseline JPEG images.

  14. IDC Re-Engineering Phase 2 Glossary Version 1.3

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Young, Christopher J.; Harris, James M.

    2017-01-01

    This document contains the glossary of terms used for the IDC Re-Engineering Phase 2 project. This version was created for Iteration E3. The IDC applies automatic processing methods in order to produce, archive, and distribute standard IDC products on behalf of all States Parties.

  15. ITERATIVE SOLUTION OF A COUPLED MIXED AND STANDARD GALERKIN DISCRETIZATION METHOD FOR ELLIPTIC PROBLEMS. (R825207)

    EPA Science Inventory

    The perspectives, information and conclusions conveyed in research project abstracts, progress reports, final reports, journal abstracts and journal publications convey the viewpoints of the principal investigator and may not represent the views and policies of ORD and EPA. Concl...

  16. GPU-accelerated iterative reconstruction from Compton scattered data using a matched pair of conic projector and backprojector.

    PubMed

    Nguyen, Van-Giang; Lee, Soo-Jin

    2016-07-01

    Iterative reconstruction from Compton scattered data is known to be computationally more challenging than that from conventional line-projection based emission data in that the gamma rays that undergo Compton scattering are modeled as conic projections rather than line projections. In conventional tomographic reconstruction, to parallelize the projection and backprojection operations using the graphics processing unit (GPU), approximated methods that use an unmatched pair of ray-tracing forward projector and voxel-driven backprojector have been widely used. In this work, we propose a new GPU-accelerated method for Compton camera reconstruction which is more accurate by using an exactly matched pair of projector and backprojector. To calculate the conic forward projection, we first sample the cone surface into conic rays and accumulate the intersecting chord lengths of the conic rays passing through voxels using a fast ray-tracing method (RTM). For the conic backprojection, to obtain the true adjoint of the conic forward projection while retaining the computational efficiency of the GPU, we use a voxel-driven RTM which is essentially the same as the standard RTM used for the conic forward projector. Our simulation results show that, while the new method is about 3 times slower than the approximated method, it is still about 16 times faster than the CPU-based method without any loss of accuracy. The net conclusion is that our proposed method is guaranteed to retain the reconstruction accuracy regardless of the number of iterations by providing a perfectly matched projector-backprojector pair, which makes iterative reconstruction methods for Compton imaging faster and more accurate. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.

  17. Comparing implementations of penalized weighted least-squares sinogram restoration

    PubMed Central

    Forthmann, Peter; Koehler, Thomas; Defrise, Michel; La Riviere, Patrick

    2010-01-01

    Purpose: A CT scanner measures the energy that is deposited in each channel of a detector array by x rays that have been partially absorbed on their way through the object. The measurement process is complex and quantitative measurements are always and inevitably associated with errors, so CT data must be preprocessed prior to reconstruction. In recent years, the authors have formulated CT sinogram preprocessing as a statistical restoration problem in which the goal is to obtain the best estimate of the line integrals needed for reconstruction from the set of noisy, degraded measurements. The authors have explored both penalized Poisson likelihood (PL) and penalized weighted least-squares (PWLS) objective functions. At low doses, the authors found that the PL approach outperforms PWLS in terms of resolution-noise tradeoffs, but at standard doses they perform similarly. The PWLS objective function, being quadratic, is more amenable to computational acceleration than the PL objective. In this work, the authors develop and compare two different methods for implementing PWLS sinogram restoration with the hope of improving computational performance relative to PL in the standard-dose regime. Sinogram restoration is still significant in the standard-dose regime since it can still outperform standard approaches and it allows for correction of effects that are not usually modeled in standard CT preprocessing. Methods: The authors have explored and compared two implementation strategies for PWLS sinogram restoration: (1) A direct matrix-inversion strategy based on the closed-form solution to the PWLS optimization problem and (2) an iterative approach based on the conjugate-gradient algorithm. Obtaining optimal performance from each strategy required modifying the naive off-the-shelf implementations of the algorithms to exploit the particular symmetry and sparseness of the sinogram-restoration problem. For the closed-form approach, the authors subdivided the large matrix inversion into smaller coupled problems and exploited sparseness to minimize matrix operations. For the conjugate-gradient approach, the authors exploited sparseness and preconditioned the problem to speed up convergence. Results: All methods produced qualitatively and quantitatively similar images as measured by resolution-variance tradeoffs and difference images. Despite the acceleration strategies, the direct matrix-inversion approach was found to be uncompetitive with iterative approaches, with a computational burden higher by an order of magnitude or more. The iterative conjugate-gradient approach, however, does appear promising, with computation times half that of the authors’ previous penalized-likelihood implementation. Conclusions: Iterative conjugate-gradient based PWLS sinogram restoration with careful matrix optimizations has computational advantages over direct matrix PWLS inversion and over penalized-likelihood sinogram restoration and can be considered a good alternative in standard-dose regimes. PMID:21158306
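
    As a rough illustration of the PWLS formulation (not the authors' optimized implementation), the quadratic objective (y - q)^T W (y - q) + beta q^T D^T D q leads to the linear system (W + beta D^T D) q = W y, which a conjugate-gradient solver handles directly. The 1D profile, weights, and regularization strength below are invented for the demonstration.

        import numpy as np
        from scipy import sparse
        from scipy.sparse.linalg import cg

        rng = np.random.default_rng(0)
        n = 512
        clean = np.sin(np.linspace(0, 6 * np.pi, n)) + 2.0       # stand-in line-integral profile
        w = rng.uniform(0.5, 2.0, n)                             # inverse-variance weights
        noisy = clean + rng.standard_normal(n) / np.sqrt(w)

        W = sparse.diags(w)
        D = sparse.diags([1.0, -1.0], [0, 1], shape=(n - 1, n))  # first-difference roughness operator
        beta = 5.0
        H = (W + beta * (D.T @ D)).tocsr()                       # PWLS normal-equations matrix

        q, info = cg(H, W @ noisy)                               # iterative conjugate-gradient solve
        print("converged:", info == 0,
              "rmse:", np.sqrt(np.mean((q - clean) ** 2)))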

  18. A Self-Adapting System for the Automated Detection of Inter-Ictal Epileptiform Discharges

    PubMed Central

    Lodder, Shaun S.; van Putten, Michel J. A. M.

    2014-01-01

    Purpose Scalp EEG remains the standard clinical procedure for the diagnosis of epilepsy. Manual detection of inter-ictal epileptiform discharges (IEDs) is slow and cumbersome, and few automated methods are used to assist in practice. This is mostly due to low sensitivities, high false positive rates, or a lack of trust in the automated method. In this study we aim to find a solution that will make computer assisted detection more efficient than conventional methods, while preserving the detection certainty of a manual search. Methods Our solution consists of two phases. First, a detection phase finds all events similar to epileptiform activity by using a large database of template waveforms. Individual template detections are combined to form “IED nominations”, each with a corresponding certainty value based on the reliability of their contributing templates. The second phase uses the ten nominations with highest certainty and presents them to the reviewer one by one for confirmation. Confirmations are used to update certainty values of the remaining nominations, and another iteration is performed where ten nominations with the highest certainty are presented. This continues until the reviewer is satisfied with what has been seen. Reviewer feedback is also used to update template accuracies globally and improve future detections. Key Findings Using the described method and fifteen evaluation EEGs (241 IEDs), one third of all inter-ictal events were shown after one iteration, half after two iterations, and 74%, 90%, and 95% after 5, 10 and 15 iterations respectively. Reviewing fifteen iterations for the 20–30 min recordings took approximately 5 min. Significance The proposed method shows a practical approach for combining automated detection with visual searching for inter-ictal epileptiform activity. Further evaluation is needed to verify its clinical feasibility and measure the added value it presents. PMID:24454813

  19. Variable-Metric Algorithm For Constrained Optimization

    NASA Technical Reports Server (NTRS)

    Frick, James D.

    1989-01-01

    Variable Metric Algorithm for Constrained Optimization (VMACO) is a nonlinear computer program developed to calculate the least value of a function of n variables subject to general constraints, both equality and inequality. The first set of constraints are equalities and the remaining constraints are inequalities. The program utilizes an iterative method in seeking the optimal solution. Written in ANSI Standard FORTRAN 77.

  20. Solving Rational Expectations Models Using Excel

    ERIC Educational Resources Information Center

    Strulik, Holger

    2004-01-01

    Simple problems of discrete-time optimal control can be solved using standard spreadsheet software. The employed solution method of backward iteration is intuitively understandable, does not require any programming skills, and is easy to implement, so it is suitable for classroom exercises with rational-expectations models. The author…

  1. Multiscale Reconstruction for Magnetic Resonance Fingerprinting

    PubMed Central

    Pierre, Eric Y.; Ma, Dan; Chen, Yong; Badve, Chaitra; Griswold, Mark A.

    2015-01-01

    Purpose To reduce acquisition time needed to obtain reliable parametric maps with Magnetic Resonance Fingerprinting. Methods An iterative-denoising algorithm is initialized by reconstructing the MRF image series at low image resolution. For subsequent iterations, the method enforces pixel-wise fidelity to the best-matching dictionary template then enforces fidelity to the acquired data at slightly higher spatial resolution. After convergence, parametric maps with desirable spatial resolution are obtained through template matching of the final image series. The proposed method was evaluated on phantom and in-vivo data using the highly-undersampled, variable-density spiral trajectory and compared with the original MRF method. The benefits of additional sparsity constraints were also evaluated. When available, gold standard parameter maps were used to quantify the performance of each method. Results The proposed approach allowed convergence to accurate parametric maps with as few as 300 time points of acquisition, as compared to 1000 in the original MRF work. Simultaneous quantification of T1, T2, proton density (PD) and B0 field variations in the brain was achieved in vivo for a 256×256 matrix for a total acquisition time of 10.2s, representing a 3-fold reduction in acquisition time. Conclusions The proposed iterative multiscale reconstruction reliably increases MRF acquisition speed and accuracy. PMID:26132462
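
    The template-matching step at the heart of MRF can be sketched independently of the reconstruction itself: each pixel's time series is compared against a dictionary of simulated signal evolutions and assigned the parameters of the entry with the largest normalized inner product. The random dictionary below is a placeholder for actual Bloch-simulated templates.

        import numpy as np

        rng = np.random.default_rng(0)
        n_entries, n_timepoints = 5000, 300

        # Placeholder dictionary: one complex signal evolution per (T1, T2) candidate.
        D = (rng.standard_normal((n_entries, n_timepoints))
             + 1j * rng.standard_normal((n_entries, n_timepoints)))
        D /= np.linalg.norm(D, axis=1, keepdims=True)        # normalize templates

        def best_match(series):
            """Index of the dictionary entry with maximal |<template, series>|."""
            return int(np.argmax(np.abs(D.conj() @ series)))

        # A pixel signal equal to a scaled, phase-shifted template is matched exactly.
        series = 3.0 * np.exp(1j * 0.7) * D[1234]
        print(best_match(series))                            # -> 1234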

  2. An incremental strategy for calculating consistent discrete CFD sensitivity derivatives

    NASA Technical Reports Server (NTRS)

    Korivi, Vamshi Mohan; Taylor, Arthur C., III; Newman, Perry A.; Hou, Gene W.; Jones, Henry E.

    1992-01-01

    In this preliminary study involving advanced computational fluid dynamic (CFD) codes, an incremental formulation, also known as the 'delta' or 'correction' form, is presented for solving the very large sparse systems of linear equations which are associated with aerodynamic sensitivity analysis. For typical problems in 2D, a direct solution method can be applied to these linear equations in either the standard or the incremental form, in which case the two are equivalent. Iterative methods appear to be needed for future 3D applications, however, because direct solver methods require much more computer memory than is currently available. Iterative methods for solving these equations in the standard form result in certain difficulties, such as ill-conditioning of the coefficient matrix, which can be overcome when these equations are cast in the incremental form; these and other benefits are discussed. The methodology is successfully implemented and tested in 2D using an upwind, cell-centered, finite volume formulation applied to the thin-layer Navier-Stokes equations. Results are presented for two laminar sample problems: (1) transonic flow through a double-throat nozzle; and (2) flow over an isolated airfoil.
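
    The incremental ('delta') form can be illustrated with a simple stationary iteration: rather than updating x directly, each step solves an approximate operator M for a correction dx driven by the current residual b - Ax. The sketch below, with a made-up strictly diagonally dominant test matrix and the lower triangle of A as M (a Gauss-Seidel-type splitting), is generic, not the CFD implementation described above.

        import numpy as np

        def delta_form_solve(A, b, M, tol=1e-10, maxit=500):
            """Incremental iteration: solve M dx = b - A x, then x <- x + dx."""
            x = np.zeros_like(b)
            for k in range(maxit):
                r = b - A @ x                       # residual drives the correction
                if np.linalg.norm(r) < tol * np.linalg.norm(b):
                    return x, k
                x += np.linalg.solve(M, r)          # delta update
            return x, maxit

        rng = np.random.default_rng(0)
        n = 50
        B = rng.standard_normal((n, n))
        A = B + np.diag(np.abs(B).sum(axis=1) + 1.0)   # strictly diagonally dominant
        b = rng.standard_normal(n)
        x, iters = delta_form_solve(A, b, M=np.tril(A))  # Gauss-Seidel-type splitting
        print(iters, np.linalg.norm(A @ x - b))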

  3. Combined iterative reconstruction and image-domain decomposition for dual energy CT using total-variation regularization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dong, Xue; Niu, Tianye; Zhu, Lei, E-mail: leizhu@gatech.edu

    2014-05-15

    Purpose: Dual-energy CT (DECT) is being increasingly used for its capability of material decomposition and energy-selective imaging. A generic problem of DECT, however, is that the decomposition process is unstable in the sense that the relative magnitude of decomposed signals is reduced due to signal cancellation while the image noise is accumulating from the two CT images of independent scans. Direct image decomposition, therefore, leads to severe degradation of signal-to-noise ratio on the resultant images. Existing noise suppression techniques are typically implemented in DECT with the procedures of reconstruction and decomposition performed independently, which do not explore the statistical properties of decomposed images during the reconstruction for noise reduction. In this work, the authors propose an iterative approach that combines the reconstruction and the signal decomposition procedures to minimize the DECT image noise without noticeable loss of resolution. Methods: The proposed algorithm is formulated as an optimization problem, which balances the data fidelity and total variation of decomposed images in one framework, and the decomposition step is carried out iteratively together with reconstruction. The noise in the CT images from the proposed algorithm becomes well correlated even though the noise of the raw projections is independent on the two CT scans. Due to this feature, the proposed algorithm avoids noise accumulation during the decomposition process. The authors evaluate the method performance on noise suppression and spatial resolution using phantom studies and compare the algorithm with conventional denoising approaches as well as combined iterative reconstruction methods with different forms of regularization. Results: On the Catphan 600 phantom, the proposed method outperforms the existing denoising methods on preserving spatial resolution at the same level of noise suppression, i.e., a reduction of noise standard deviation by one order of magnitude. This improvement is mainly attributed to the high noise correlation in the CT images reconstructed by the proposed algorithm. Iterative reconstruction using different regularization, including quadratic or q-generalized Gaussian Markov random field regularization, achieves similar noise suppression from high noise correlation. However, the proposed TV regularization obtains a better edge preserving performance. Studies of electron density measurement also show that our method reduces the average estimation error from 9.5% to 7.1%. On the anthropomorphic head phantom, the proposed method suppresses the noise standard deviation of the decomposed images by a factor of ∼14 without blurring the fine structures in the sinus area. Conclusions: The authors propose a practical method for DECT imaging reconstruction, which combines the image reconstruction and material decomposition into one optimization framework. Compared to the existing approaches, our method achieves a superior performance on DECT imaging with respect to decomposition accuracy, noise reduction, and spatial resolution.
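
    The instability of direct image-domain decomposition that motivates this work is easy to reproduce: inverting a nearly collinear 2x2 mixing matrix amplifies independent measurement noise into large, anticorrelated errors on the material images. The mixing coefficients below are invented for illustration, not taken from the paper.

        import numpy as np

        rng = np.random.default_rng(0)

        # Hypothetical attenuation of (material 1, material 2) at (low, high) energy.
        E = np.array([[1.00, 0.50],
                      [0.60, 0.40]])

        true = np.array([1.0, 1.0])                      # true material signals for one pixel
        noise = 0.01 * rng.standard_normal((100000, 2))  # independent noise on the two scans
        meas = true @ E.T + noise

        decomp = meas @ np.linalg.inv(E).T               # direct decomposition, pixel by pixel
        print("measurement noise std:", noise.std(axis=0))   # ~0.01
        print("decomposed noise std :", decomp.std(axis=0))  # amplified roughly 6-12x
        print("correlation:", np.corrcoef(decomp.T)[0, 1])   # strongly negative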

  4. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Niu, T; Dong, X; Petrongolo, M

    Purpose: Dual energy CT (DECT) imaging plays an important role in advanced imaging applications due to its material decomposition capability. Direct decomposition via matrix inversion suffers from significant degradation of image signal-to-noise ratios, which reduces clinical value. Existing de-noising algorithms achieve suboptimal performance since they suppress image noise either before or after the decomposition and do not fully explore the noise statistical properties of the decomposition process. We propose an iterative image-domain decomposition method for noise suppression in DECT, using the full variance-covariance matrix of the decomposed images. Methods: The proposed algorithm is formulated in the form of least-square estimation with smoothness regularization. It includes the inverse of the estimated variance-covariance matrix of the decomposed images as the penalty weight in the least-square term. Performance is evaluated using an evaluation phantom (Catphan 600) and an anthropomorphic head phantom. Results are compared to those generated using direct matrix inversion with no noise suppression, a de-noising method applied on the decomposed images, and an existing algorithm with similar formulation but with an edge-preserving regularization term. Results: On the Catphan phantom, our method retains the same spatial resolution as the CT images before decomposition while reducing the noise standard deviation of decomposed images by over 98%. The other methods either degrade spatial resolution or achieve less low-contrast detectability. Also, our method yields lower electron density measurement error than direct matrix inversion and reduces error variation by over 97%. On the head phantom, it reduces the noise standard deviation of decomposed images by over 97% without blurring the sinus structures. Conclusion: We propose an iterative image-domain decomposition method for DECT. The method combines noise suppression and material decomposition into an iterative process and achieves both goals simultaneously. The proposed algorithm shows superior performance on noise suppression with high image spatial resolution and low-contrast detectability. This work is supported by a Varian MRA grant.

  5. Comparing implementations of penalized weighted least-squares sinogram restoration.

    PubMed

    Forthmann, Peter; Koehler, Thomas; Defrise, Michel; La Riviere, Patrick

    2010-11-01

    A CT scanner measures the energy that is deposited in each channel of a detector array by x rays that have been partially absorbed on their way through the object. The measurement process is complex and quantitative measurements are always and inevitably associated with errors, so CT data must be preprocessed prior to reconstruction. In recent years, the authors have formulated CT sinogram preprocessing as a statistical restoration problem in which the goal is to obtain the best estimate of the line integrals needed for reconstruction from the set of noisy, degraded measurements. The authors have explored both penalized Poisson likelihood (PL) and penalized weighted least-squares (PWLS) objective functions. At low doses, the authors found that the PL approach outperforms PWLS in terms of resolution-noise tradeoffs, but at standard doses they perform similarly. The PWLS objective function, being quadratic, is more amenable to computational acceleration than the PL objective. In this work, the authors develop and compare two different methods for implementing PWLS sinogram restoration with the hope of improving computational performance relative to PL in the standard-dose regime. Sinogram restoration is still significant in the standard-dose regime since it can still outperform standard approaches and it allows for correction of effects that are not usually modeled in standard CT preprocessing. The authors have explored and compared two implementation strategies for PWLS sinogram restoration: (1) A direct matrix-inversion strategy based on the closed-form solution to the PWLS optimization problem and (2) an iterative approach based on the conjugate-gradient algorithm. Obtaining optimal performance from each strategy required modifying the naive off-the-shelf implementations of the algorithms to exploit the particular symmetry and sparseness of the sinogram-restoration problem. For the closed-form approach, the authors subdivided the large matrix inversion into smaller coupled problems and exploited sparseness to minimize matrix operations. For the conjugate-gradient approach, the authors exploited sparseness and preconditioned the problem to speed up convergence. All methods produced qualitatively and quantitatively similar images as measured by resolution-variance tradeoffs and difference images. Despite the acceleration strategies, the direct matrix-inversion approach was found to be uncompetitive with iterative approaches, with a computational burden higher by an order of magnitude or more. The iterative conjugate-gradient approach, however, does appear promising, with computation times half that of the authors' previous penalized-likelihood implementation. Iterative conjugate-gradient based PWLS sinogram restoration with careful matrix optimizations has computational advantages over direct matrix PWLS inversion and over penalized-likelihood sinogram restoration and can be considered a good alternative in standard-dose regimes.

  6. Layer-oriented multigrid wavefront reconstruction algorithms for multi-conjugate adaptive optics

    NASA Astrophysics Data System (ADS)

    Gilles, Luc; Ellerbroek, Brent L.; Vogel, Curtis R.

    2003-02-01

    Multi-conjugate adaptive optics (MCAO) systems with 10^4-10^5 degrees of freedom have been proposed for future giant telescopes. Using standard matrix methods to compute, optimize, and implement wavefront control algorithms for these systems is impractical, since the number of calculations required to compute and apply the reconstruction matrix scales respectively with the cube and the square of the number of AO degrees of freedom. In this paper, we develop an iterative sparse matrix implementation of minimum variance wavefront reconstruction for telescope diameters up to 32 m with more than 10^4 actuators. The basic approach is the preconditioned conjugate gradient method, using a multigrid preconditioner incorporating a layer-oriented (block) symmetric Gauss-Seidel iterative smoothing operator. We present open-loop numerical simulation results to illustrate algorithm convergence.
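
    The preconditioned conjugate gradient skeleton underlying this approach is compact. Below is a generic version with a simple Jacobi (diagonal) preconditioner standing in for the paper's multigrid preconditioner with layer-oriented block symmetric Gauss-Seidel smoothing, applied to an invented SPD test system.

        import numpy as np

        def pcg(A, b, M_inv, tol=1e-8, maxit=1000):
            """Preconditioned CG; M_inv(r) applies the preconditioner to a residual."""
            x = np.zeros_like(b)
            r = b - A @ x
            z = M_inv(r)
            p = z.copy()
            rz = r @ z
            for k in range(maxit):
                Ap = A @ p
                alpha = rz / (p @ Ap)
                x += alpha * p
                r -= alpha * Ap
                if np.linalg.norm(r) < tol * np.linalg.norm(b):
                    return x, k + 1
                z = M_inv(r)
                rz_new = r @ z
                p = z + (rz_new / rz) * p
                rz = rz_new
            return x, maxit

        rng = np.random.default_rng(0)
        n = 400
        Q = rng.standard_normal((n, n))
        A = Q @ Q.T + n * np.eye(n)                  # SPD test matrix
        b = rng.standard_normal(n)
        d = np.diag(A).copy()
        x, iters = pcg(A, b, M_inv=lambda r: r / d)  # Jacobi preconditioner
        print(iters, np.linalg.norm(A @ x - b))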

  7. Accelerating 4D flow MRI by exploiting vector field divergence regularization.

    PubMed

    Santelli, Claudio; Loecher, Michael; Busch, Julia; Wieben, Oliver; Schaeffter, Tobias; Kozerke, Sebastian

    2016-01-01

    To improve velocity vector field reconstruction from undersampled four-dimensional (4D) flow MRI by penalizing divergence of the measured flow field. Iterative image reconstruction in which magnitude and phase are regularized separately in alternating iterations was implemented. The approach allows incorporating prior knowledge of the flow field being imaged. In the present work, velocity data were regularized to reduce divergence, using either divergence-free wavelets (DFW) or a finite difference (FD) method using the ℓ1-norm of divergence and curl. The reconstruction methods were tested on a numerical phantom and in vivo data. Results of the DFW and FD approaches were compared with data obtained with standard compressed sensing (CS) reconstruction. Relative to standard CS, directional errors of vector fields and divergence were reduced by 55-60% and 38-48% for three- and six-fold undersampled data with the DFW and FD methods. Velocity vector displays of the numerical phantom and in vivo data were found to be improved upon DFW or FD reconstruction. Regularization of vector field divergence in image reconstruction from undersampled 4D flow data is a valuable approach to improve reconstruction accuracy of velocity vector fields. © 2014 Wiley Periodicals, Inc.
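
    The divergence penalty itself reduces to finite differences; a minimal numpy version of the l1 divergence term is sketched below (the FD variant only; the curl term and the full reconstruction are omitted). A linear rotational field is included as a sanity check, since its discrete divergence vanishes.

        import numpy as np

        def divergence(v, h=1.0):
            """Central-difference divergence of a field v with shape (3, nx, ny, nz)."""
            div = np.zeros(v.shape[1:])
            for axis in range(3):
                div += np.gradient(v[axis], h, axis=axis)
            return div

        rng = np.random.default_rng(0)
        noisy = rng.standard_normal((3, 32, 32, 32))          # noisy velocity field
        print("l1 penalty, noisy field:", np.abs(divergence(noisy)).sum())

        # Divergence-free (solenoidal) test field v = (-y, x, 0).
        x, y, z = np.meshgrid(*(np.linspace(0, 1, 32),) * 3, indexing="ij")
        sol = np.stack([-y, x, np.zeros_like(x)])
        print("l1 penalty, solenoidal field:", np.abs(divergence(sol, h=1 / 31)).sum())  # ~0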

  8. SRS modeling in high power CW fiber lasers for component optimization

    NASA Astrophysics Data System (ADS)

    Brochu, G.; Villeneuve, A.; Faucher, M.; Morin, M.; Trépanier, F.; Dionne, R.

    2017-02-01

    A CW kilowatt fiber laser numerical model has been developed taking into account intracavity stimulated Raman scattering (SRS). It uses the split-step Fourier method, which is applied iteratively over several cavity round trips. The gain distribution is re-evaluated after each iteration with a standard CW model using an effective FBG reflectivity that quantifies the non-linear spectral leakage. This model explains why spectrally narrow output couplers produce more SRS than wider FBGs, as recently reported by other authors, and constitutes a powerful tool to design optimized and innovative fiber components to push back the onset of SRS for a given fiber core diameter.

  9. Assessing performance of flaw characterization methods through uncertainty propagation

    NASA Astrophysics Data System (ADS)

    Miorelli, R.; Le Bourdais, F.; Artusi, X.

    2018-04-01

    In this work, we assess the inversion performance in terms of crack characterization and localization based on synthetic signals associated with ultrasonic and eddy current physics. More precisely, two different standard iterative inversion algorithms are used to minimize the discrepancy between measurements (i.e., the tested data) and simulations. Furthermore, in order to reduce the computational time and avoid the computational burden often associated with iterative inversion algorithms, we replace the standard forward solver by a suitable metamodel fit on a database built offline. In a second step, we assess the inversion performance by adding uncertainties on a subset of the database parameters and then, through the metamodel, we propagate these uncertainties within the inversion procedure. The fast propagation of uncertainties enables efficient evaluation of the impact due to the lack of knowledge on some parameters employed to describe the inspection scenarios, which is a situation commonly encountered in the industrial NDE context.

  10. Conjugate gradient type methods for linear systems with complex symmetric coefficient matrices

    NASA Technical Reports Server (NTRS)

    Freund, Roland

    1989-01-01

    We consider conjugate gradient type methods for the solution of large sparse linear systems Ax = b with complex symmetric coefficient matrices A = A^T. Such linear systems arise in important applications, such as the numerical solution of the complex Helmholtz equation. Furthermore, most complex non-Hermitian linear systems which occur in practice are actually complex symmetric. We investigate conjugate gradient type iterations which are based on a variant of the nonsymmetric Lanczos algorithm for complex symmetric matrices. We propose a new approach with iterates defined by a quasi-minimal residual property. The resulting algorithm presents several advantages over the standard biconjugate gradient method. We also include some remarks on the obvious approach to general complex linear systems by solving equivalent real linear systems for the real and imaginary parts of x. Finally, numerical experiments for linear systems arising from the complex Helmholtz equation are reported.
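
    One classical way to exploit complex symmetry (related to, though simpler than, the quasi-minimal residual iteration proposed in the paper) is the COCG method: ordinary CG with the unconjugated bilinear form x^T y in place of the Hermitian inner product. A sketch on a small shifted-Laplacian test matrix chosen for illustration:

        import numpy as np

        def cocg(A, b, tol=1e-10, maxit=1000):
            """COCG for complex symmetric A (A == A.T, not Hermitian)."""
            x = np.zeros_like(b)
            r = b - A @ x
            p = r.copy()
            rho = r @ r                       # unconjugated: sum of r_i * r_i
            for k in range(maxit):
                Ap = A @ p
                alpha = rho / (p @ Ap)
                x += alpha * p
                r -= alpha * Ap
                if np.linalg.norm(r) < tol * np.linalg.norm(b):
                    return x, k + 1
                rho, rho_old = r @ r, rho
                p = r + (rho / rho_old) * p
            return x, maxit

        n = 200
        L = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)  # 1D Laplacian stencil
        A = L + (1.0 + 0.5j) * np.eye(n)                      # complex symmetric shift
        b = np.ones(n, dtype=complex)
        x, iters = cocg(A, b)
        print(iters, np.linalg.norm(A @ x - b))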

  11. Adaptive eigenspace method for inverse scattering problems in the frequency domain

    NASA Astrophysics Data System (ADS)

    Grote, Marcus J.; Kray, Marie; Nahum, Uri

    2017-02-01

    A nonlinear optimization method is proposed for the solution of inverse scattering problems in the frequency domain, when the scattered field is governed by the Helmholtz equation. The time-harmonic inverse medium problem is formulated as a PDE-constrained optimization problem and solved by an inexact truncated Newton-type iteration. Instead of a grid-based discrete representation, the unknown wave speed is projected to a particular finite-dimensional basis of eigenfunctions, which is iteratively adapted during the optimization. Truncating the adaptive eigenspace (AE) basis at a (small and slowly increasing) finite number of eigenfunctions effectively introduces regularization into the inversion and thus avoids the need for standard Tikhonov-type regularization. Both analytical and numerical evidence underpins the accuracy of the AE representation. Numerical experiments demonstrate the efficiency and robustness to missing or noisy data of the resulting adaptive eigenspace inversion method.

  12. Adaptive θ-methods for pricing American options

    NASA Astrophysics Data System (ADS)

    Khaliq, Abdul Q. M.; Voss, David A.; Kazmi, Kamran

    2008-12-01

    We develop adaptive θ-methods for solving the Black-Scholes PDE for American options. By adding a small, continuous term, the Black-Scholes PDE becomes an advection-diffusion-reaction equation on a fixed spatial domain. Standard implementation of θ-methods would require a Newton-type iterative procedure at each time step, thereby increasing the computational complexity of the methods. Our linearly implicit approach avoids such complications. We establish a general framework under which θ-methods satisfy a discrete version of the positivity constraint characteristic of American options, and numerically demonstrate the sensitivity of the constraint. The positivity results are established for the single-asset and independent two-asset models. In addition, we have incorporated and analyzed an adaptive time-step control strategy to increase the computational efficiency. Numerical experiments are presented for one- and two-asset American options, using adaptive exponential splitting for two-asset problems. The approach is compared with an iterative solution of the two-asset problem in terms of computational efficiency.
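
    For a linear(ized) spatial operator L, one θ-step for u_t = Lu solves (I - θΔt L) u^{n+1} = (I + (1 - θ)Δt L) u^n, so only a linear system is needed per step; this is the point of the linearly implicit approach. A generic sketch on a 1D periodic diffusion operator, not the Black-Scholes discretization used in the paper:

        import numpy as np

        def theta_step(u, L, dt, theta):
            """One theta-method step for u' = L u
            (theta = 0: explicit Euler, 0.5: Crank-Nicolson, 1: implicit Euler)."""
            I = np.eye(len(u))
            return np.linalg.solve(I - theta * dt * L,
                                   (I + (1 - theta) * dt * L) @ u)

        n, h, nu = 100, 1.0 / 100, 0.01
        L = (-2 * np.eye(n) + np.eye(n, k=1) + np.eye(n, k=-1)) / h**2
        L[0, -1] = L[-1, 0] = 1.0 / h**2              # periodic wrap-around
        L *= nu                                       # diffusion coefficient

        grid = np.linspace(0, 1, n, endpoint=False)
        u = np.exp(-100 * (grid - 0.5) ** 2)          # Gaussian pulse
        mass0 = u.sum()
        for _ in range(200):
            u = theta_step(u, L, dt=1e-3, theta=0.5)  # Crank-Nicolson
        print("mass drift:", abs(u.sum() - mass0))    # ~1e-13: conservative scheme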

  13. Impact of iterative metal artifact reduction on diagnostic image quality in patients with dental hardware.

    PubMed

    Weiß, Jakob; Schabel, Christoph; Bongers, Malte; Raupach, Rainer; Clasen, Stephan; Notohamiprodjo, Mike; Nikolaou, Konstantin; Bamberg, Fabian

    2017-03-01

    Background Metal artifacts often impair diagnostic accuracy in computed tomography (CT) imaging. Therefore, effective and workflow-integrated metal artifact reduction algorithms are crucial to gain higher diagnostic image quality in patients with metallic hardware. Purpose To assess the clinical performance of a novel iterative metal artifact reduction (iMAR) algorithm for CT in patients with dental fillings. Material and Methods Thirty consecutive patients scheduled for CT imaging and dental fillings were included in the analysis. All patients underwent CT imaging using a second generation dual-source CT scanner (120 kV single-energy; 100/Sn140 kV in dual-energy, 219 mAs, gantry rotation time 0.28-1/s, collimation 0.6 mm) as part of their clinical work-up. Post-processing included standard kernel (B49) and an iterative MAR algorithm. Image quality and diagnostic value were assessed qualitatively (Likert scale) and quantitatively (HU ± SD) by two reviewers independently. Results All 30 patients were included in the analysis, with similar reconstruction times for iMAR and standard reconstruction (17 s ± 0.5 vs. 19 s ± 0.5; P > 0.05). Visual image quality was significantly higher for iMAR as compared with standard reconstruction (3.8 ± 0.5 vs. 2.6 ± 0.5; P < 0.0001, respectively) and showed improved evaluation of adjacent anatomical structures. Similarly, HU-based measurements of degree of artifacts were significantly lower in the iMAR reconstructions as compared with the standard reconstruction (0.9 ± 1.6 vs. -20 ± 47; P < 0.05, respectively). Conclusion The tested iterative, raw-data based reconstruction MAR algorithm allows for a significant reduction of metal artifacts and improved evaluation of adjacent anatomical structures in the head and neck area in patients with dental hardware.

  14. Standard and Robust Methods in Regression Imputation

    ERIC Educational Resources Information Center

    Moraveji, Behjat; Jafarian, Koorosh

    2014-01-01

    The aim of this paper is to provide an introduction of new imputation algorithms for estimating missing values from official statistics in larger data sets of data pre-processing, or outliers. The goal is to propose a new algorithm called IRMI (iterative robust model-based imputation). This algorithm is able to deal with all challenges like…

  15. Efficient numerical method for analyzing optical bistability in photonic crystal microcavities.

    PubMed

    Yuan, Lijun; Lu, Ya Yan

    2013-05-20

    Nonlinear optical effects can be enhanced by photonic crystal microcavities and be used to develop practical ultra-compact optical devices with low power requirements. The finite-difference time-domain method is the standard numerical method for simulating nonlinear optical devices, but it has limitations in terms of accuracy and efficiency. In this paper, a rigorous and efficient frequency-domain numerical method is developed for analyzing nonlinear optical devices where the nonlinear effect is concentrated in the microcavities. The method replaces the linear problem outside the microcavities by a rigorous and numerically computed boundary condition, then solves the nonlinear problem iteratively in a small region around the microcavities. Convergence of the iterative method is much easier to achieve since the size of the problem is significantly reduced. The method is presented for a specific two-dimensional photonic crystal waveguide-cavity system with a Kerr nonlinearity, using numerical methods that can take advantage of the geometric features of the structure. The method is able to calculate multiple solutions exhibiting the optical bistability phenomenon in the strongly nonlinear regime.
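
    The bistability phenomenon referenced above appears already in the textbook scalar model of a Kerr-shifted cavity resonance, P = I [1 + (θ - I)^2], which for suitable detuning θ has three coexisting intensity solutions; finding each branch reliably is what the paper's iterative method provides for the full PDE problem. The toy cubic below is the standard dispersive-bistability model, not the authors' waveguide-cavity system.

        import numpy as np

        theta = 3.0   # detuning; bistability requires theta**2 > 3 in this toy model
        for P in (2.0, 4.0, 6.0):
            # P = I*(1 + (theta - I)**2)  <=>  I**3 - 2*theta*I**2 + (1 + theta**2)*I - P = 0
            roots = np.roots([1.0, -2 * theta, 1 + theta**2, -P])
            real = np.sort(roots[np.abs(roots.imag) < 1e-9].real)
            print(f"P = {P}: intensities {np.round(real, 3)}")
        # For P = 4.0 three real solutions coexist (lower, unstable middle, upper branch).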

  16. Preconditioned MoM Solutions for Complex Planar Arrays

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fasenfest, B J; Jackson, D; Champagne, N

    2004-01-23

    The numerical analysis of large arrays is a complex problem. There are several techniques currently under development in this area. One such technique is the FAIM (Faster Adaptive Integral Method). This method uses a modification of the standard AIM approach which takes into account the reusability properties of matrices that arise from identical array elements. If the array consists of planar conducting bodies, the array elements are meshed using standard subdomain basis functions, such as the RWG basis. These bases are then projected onto a regular grid of interpolating polynomials. This grid can then be used in a 2D or 3D FFT to accelerate the matrix-vector product used in an iterative solver. The method has been proven to greatly reduce solve time by speeding the matrix-vector product computation. The FAIM approach also reduces fill time and memory requirements, since only the near element interactions need to be calculated exactly. The present work extends FAIM by modifying it to allow for layered material Green's Functions and dielectrics. In addition, a preconditioner is implemented to greatly reduce the number of iterations required for a solution. The general scheme of the FAIM method is reported in; this contribution is limited to presenting new results.

  17. Standard and reduced radiation dose liver CT images: adaptive statistical iterative reconstruction versus model-based iterative reconstruction-comparison of findings and image quality.

    PubMed

    Shuman, William P; Chan, Keith T; Busey, Janet M; Mitsumori, Lee M; Choi, Eunice; Koprowicz, Kent M; Kanal, Kalpana M

    2014-12-01

    To investigate whether reduced radiation dose liver computed tomography (CT) images reconstructed with model-based iterative reconstruction (MBIR) might compromise depiction of clinically relevant findings or might have decreased image quality when compared with clinical standard radiation dose CT images reconstructed with adaptive statistical iterative reconstruction (ASIR). With institutional review board approval, informed consent, and HIPAA compliance, 50 patients (39 men, 11 women) were prospectively included who underwent liver CT. After a portal venous pass with ASIR images, a 60% reduced radiation dose pass was added with MBIR images. One reviewer scored ASIR image quality and marked findings. Two additional independent reviewers noted whether marked findings were present on MBIR images and assigned scores for relative conspicuity, spatial resolution, image noise, and image quality. Liver and aorta Hounsfield units and image noise were measured. Volume CT dose index and size-specific dose estimate (SSDE) were recorded. Qualitative reviewer scores were summarized. Formal statistical inference for signal-to-noise ratio (SNR), contrast-to-noise ratio (CNR), volume CT dose index, and SSDE was made (paired t tests), with Bonferroni adjustment. Two independent reviewers identified all 136 ASIR image findings (n = 272) on MBIR images, scoring them as equal or better for conspicuity, spatial resolution, and image noise in 94.1% (256 of 272), 96.7% (263 of 272), and 99.3% (270 of 272), respectively. In 50 image sets, two reviewers (n = 100) scored overall image quality as sufficient or good with MBIR in 99% (99 of 100). Liver SNR was significantly greater for MBIR (10.8 ± 2.5 [standard deviation] vs 7.7 ± 1.4, P < .001); there was no difference for CNR (2.5 ± 1.4 vs 2.4 ± 1.4, P = .45). For ASIR and MBIR, respectively, volume CT dose index was 15.2 mGy ± 7.6 versus 6.2 mGy ± 3.6; SSDE was 16.4 mGy ± 6.6 versus 6.7 mGy ± 3.1 (P < .001). Liver CT images reconstructed with MBIR may allow up to 59% radiation dose reduction compared with the dose with ASIR, without compromising depiction of findings or image quality. © RSNA, 2014.

  18. Photoacoustic image reconstruction via deep learning

    NASA Astrophysics Data System (ADS)

    Antholzer, Stephan; Haltmeier, Markus; Nuster, Robert; Schwab, Johannes

    2018-02-01

    Applying standard algorithms to sparse data problems in photoacoustic tomography (PAT) yields low-quality images containing severe under-sampling artifacts. To some extent, these artifacts can be reduced by iterative image reconstruction algorithms which allow prior knowledge such as smoothness, total variation (TV) or sparsity constraints to be included. These algorithms tend to be time consuming as the forward and adjoint problems have to be solved repeatedly. Further, iterative algorithms have additional drawbacks. For example, the reconstruction quality strongly depends on a-priori model assumptions about the objects to be recovered, which are often not strictly satisfied in practical applications. To overcome these issues, in this paper, we develop direct and efficient reconstruction algorithms based on deep learning. As opposed to iterative algorithms, we apply a convolutional neural network, whose parameters are trained before the reconstruction process based on a set of training data. For actual image reconstruction, a single evaluation of the trained network yields the desired result. Our presented numerical results (using two different network architectures) demonstrate that the proposed deep learning approach reconstructs images with a quality comparable to state-of-the-art iterative reconstruction methods.

  19. Advanced Data Acquisition System Implementation for the ITER Neutron Diagnostic Use Case Using EPICS and FlexRIO Technology on a PXIe Platform

    NASA Astrophysics Data System (ADS)

    Sanz, D.; Ruiz, M.; Castro, R.; Vega, J.; Afif, M.; Monroe, M.; Simrock, S.; Debelle, T.; Marawar, R.; Glass, B.

    2016-04-01

    To aid in assessing the functional performance of ITER, Fission Chambers (FC) based on the neutron diagnostic use case deliver timestamped measurements of neutron source strength and fusion power. To demonstrate the Plant System Instrumentation & Control (I&C) required for such a system, ITER Organization (IO) has developed a neutron diagnostics use case that fully complies with guidelines presented in the Plant Control Design Handbook (PCDH). The implementation presented in this paper has been developed on the PXI Express (PXIe) platform using products from the ITER catalog of standard I&C hardware for fast controllers. Using FlexRIO technology, detector signals are acquired at 125 MS/s, while filtering, decimation, and three methods of neutron counting are performed in real-time via the onboard Field Programmable Gate Array (FPGA). Measurement results are reported every 1 ms through Experimental Physics and Industrial Control System (EPICS) Channel Access (CA), with real-time timestamps derived from the ITER Timing Communication Network (TCN) based on IEEE 1588-2008. Furthermore, in accordance with ITER specifications for CODAC Core System (CCS) application development, the software responsible for the management, configuration, and monitoring of system devices has been developed in compliance with a new EPICS module called Nominal Device Support (NDS) and RIO/FlexRIO design methodology.

  20. Accelerated fast iterative shrinkage thresholding algorithms for sparsity-regularized cone-beam CT image reconstruction

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Xu, Qiaofeng; Sawatzky, Alex; Anastasio, Mark A., E-mail: anastasio@wustl.edu

    Purpose: The development of iterative image reconstruction algorithms for cone-beam computed tomography (CBCT) remains an active and important research area. Even with hardware acceleration, the overwhelming majority of the available 3D iterative algorithms that implement nonsmooth regularizers remain computationally burdensome and have not been translated for routine use in time-sensitive applications such as image-guided radiation therapy (IGRT). In this work, two variants of the fast iterative shrinkage thresholding algorithm (FISTA) are proposed and investigated for accelerated iterative image reconstruction in CBCT. Methods: Algorithm acceleration was achieved by replacing the original gradient-descent step in the FISTAs by a subproblem that is solved by use of the ordered subset simultaneous algebraic reconstruction technique (OS-SART). Due to the preconditioning matrix adopted in the OS-SART method, two new weighted proximal problems were introduced and corresponding fast gradient projection-type algorithms were developed for solving them. We also provided efficient numerical implementations of the proposed algorithms that exploit the massive data parallelism of multiple graphics processing units. Results: The improved rates of convergence of the proposed algorithms were quantified in computer-simulation studies and by use of clinical projection data corresponding to an IGRT study. The accelerated FISTAs were shown to possess dramatically improved convergence properties as compared to the standard FISTAs. For example, the number of iterations to achieve a specified reconstruction error could be reduced by an order of magnitude. Volumetric images reconstructed from clinical data were produced in under 4 min. Conclusions: The FISTA achieves a quadratic convergence rate and can therefore potentially reduce the number of iterations required to produce an image of a specified image quality as compared to first-order methods. We have proposed and investigated accelerated FISTAs for use with two nonsmooth penalty functions that will lead to further reductions in image reconstruction times while preserving image quality. Moreover, with the help of a mixed sparsity-regularization, better preservation of soft-tissue structures can be potentially obtained. The algorithms were systematically evaluated by use of computer-simulated and clinical data sets.

  1. Accelerated fast iterative shrinkage thresholding algorithms for sparsity-regularized cone-beam CT image reconstruction

    PubMed Central

    Xu, Qiaofeng; Yang, Deshan; Tan, Jun; Sawatzky, Alex; Anastasio, Mark A.

    2016-01-01

    Purpose: The development of iterative image reconstruction algorithms for cone-beam computed tomography (CBCT) remains an active and important research area. Even with hardware acceleration, the overwhelming majority of the available 3D iterative algorithms that implement nonsmooth regularizers remain computationally burdensome and have not been translated for routine use in time-sensitive applications such as image-guided radiation therapy (IGRT). In this work, two variants of the fast iterative shrinkage thresholding algorithm (FISTA) are proposed and investigated for accelerated iterative image reconstruction in CBCT. Methods: Algorithm acceleration was achieved by replacing the original gradient-descent step in the FISTAs by a subproblem that is solved by use of the ordered subset simultaneous algebraic reconstruction technique (OS-SART). Due to the preconditioning matrix adopted in the OS-SART method, two new weighted proximal problems were introduced and corresponding fast gradient projection-type algorithms were developed for solving them. We also provided efficient numerical implementations of the proposed algorithms that exploit the massive data parallelism of multiple graphics processing units. Results: The improved rates of convergence of the proposed algorithms were quantified in computer-simulation studies and by use of clinical projection data corresponding to an IGRT study. The accelerated FISTAs were shown to possess dramatically improved convergence properties as compared to the standard FISTAs. For example, the number of iterations to achieve a specified reconstruction error could be reduced by an order of magnitude. Volumetric images reconstructed from clinical data were produced in under 4 min. Conclusions: The FISTA achieves a quadratic convergence rate and can therefore potentially reduce the number of iterations required to produce an image of a specified image quality as compared to first-order methods. We have proposed and investigated accelerated FISTAs for use with two nonsmooth penalty functions that will lead to further reductions in image reconstruction times while preserving image quality. Moreover, with the help of a mixed sparsity-regularization, better preservation of soft-tissue structures can be potentially obtained. The algorithms were systematically evaluated by use of computer-simulated and clinical data sets. PMID:27036582
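
    The FISTA recursion that both of these records build on is short: a gradient step, a proximal (here soft-thresholding) step, and a momentum extrapolation driven by the t_k sequence. Below is the textbook l1-regularized least-squares version on synthetic data, not the OS-SART-accelerated CBCT variant the authors propose.

        import numpy as np

        def soft(v, tau):
            """Soft-thresholding, the proximal operator of tau * ||.||_1."""
            return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

        def fista(A, b, lam, n_iter=300):
            Lip = np.linalg.norm(A, 2) ** 2           # Lipschitz constant of the gradient
            x = np.zeros(A.shape[1])
            y, t = x.copy(), 1.0
            for _ in range(n_iter):
                x_new = soft(y - A.T @ (A @ y - b) / Lip, lam / Lip)  # gradient + prox
                t_new = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t * t))
                y = x_new + ((t - 1.0) / t_new) * (x_new - x)         # momentum step
                x, t = x_new, t_new
            return x

        rng = np.random.default_rng(0)
        A = rng.standard_normal((80, 200))
        x_true = np.zeros(200)
        x_true[rng.choice(200, 8, replace=False)] = rng.standard_normal(8)
        b = A @ x_true + 0.01 * rng.standard_normal(80)
        x_hat = fista(A, b, lam=0.1)
        print("nonzeros found:", np.flatnonzero(np.abs(x_hat) > 0.05))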

  2. An iterative shrinkage approach to total-variation image restoration.

    PubMed

    Michailovich, Oleg V

    2011-05-01

    The problem of restoration of digital images from their degraded measurements plays a central role in a multitude of practically important applications. A particularly challenging instance of this problem occurs in the case when the degradation phenomenon is modeled by an ill-conditioned operator. In such a situation, the presence of noise makes it impossible to recover a valuable approximation of the image of interest without using some a priori information about its properties. Such a priori information--commonly referred to as simply priors--is essential for image restoration, rendering it stable and robust to noise. Moreover, using the priors makes the recovered images exhibit some plausible features of their original counterpart. Particularly, if the original image is known to be a piecewise smooth function, one of the standard priors used in this case is defined by the Rudin-Osher-Fatemi model, which results in total variation (TV) based image restoration. The current arsenal of algorithms for TV-based image restoration is vast. In the present paper, a different approach to the solution of the problem is proposed based upon the method of iterative shrinkage (aka iterated thresholding). In the proposed method, the TV-based image restoration is performed through a recursive application of two simple procedures, viz. linear filtering and soft thresholding. Therefore, the method can be identified as belonging to the group of first-order algorithms which are efficient in dealing with images of relatively large sizes. Another valuable feature of the proposed method consists in its working directly with the TV functional, rather than with its smoothed versions. Moreover, the method provides a single solution for both isotropic and anisotropic definitions of the TV functional, thereby establishing a useful connection between the two formulae. Finally, a number of standard examples of image deblurring are demonstrated, in which the proposed method can provide restoration results of superior quality as compared to the case of sparse-wavelet deconvolution.

  3. Reduced Radiation Dose with Model-based Iterative Reconstruction versus Standard Dose with Adaptive Statistical Iterative Reconstruction in Abdominal CT for Diagnosis of Acute Renal Colic.

    PubMed

    Fontarensky, Mikael; Alfidja, Agaïcha; Perignon, Renan; Schoenig, Arnaud; Perrier, Christophe; Mulliez, Aurélien; Guy, Laurent; Boyer, Louis

    2015-07-01

    To evaluate the accuracy of reduced-dose abdominal computed tomographic (CT) imaging by using a new generation model-based iterative reconstruction (MBIR) to diagnose acute renal colic compared with a standard-dose abdominal CT with 50% adaptive statistical iterative reconstruction (ASIR). This institutional review board-approved prospective study included 118 patients with symptoms of acute renal colic who underwent the following two successive CT examinations: standard-dose ASIR 50% and reduced-dose MBIR. Two radiologists independently reviewed both CT examinations for presence or absence of renal calculi, differential diagnoses, and associated abnormalities. The imaging findings, radiation dose estimates, and image quality of the two CT reconstruction methods were compared. Concordance was evaluated by κ coefficient, and descriptive statistics and t test were used for statistical analysis. Intraobserver correlation was 100% for the diagnosis of renal calculi (κ = 1). Renal calculus (τ = 98.7%; κ = 0.97) and obstructive upper urinary tract disease (τ = 98.16%; κ = 0.95) were detected, and differential or alternative diagnosis was performed (τ = 98.87%; κ = 0.95). MBIR allowed a dose reduction of 84% versus standard-dose ASIR 50% (mean volume CT dose index, 1.7 mGy ± 0.8 [standard deviation] vs 10.9 mGy ± 4.6; mean size-specific dose estimate, 2.2 mGy ± 0.7 vs 13.7 mGy ± 3.9; P < .001) without a conspicuous deterioration in image quality (reduced-dose MBIR vs ASIR 50% mean scores, 3.83 ± 0.49 vs 3.92 ± 0.27, respectively; P = .32) or increase in noise (reduced-dose MBIR vs ASIR 50% mean, respectively, 18.36 HU ± 2.53 vs 17.40 HU ± 3.42). Its main drawback remains the long time required for reconstruction (mean, 40 minutes). A reduced-dose protocol with MBIR allowed a dose reduction of 84% without increasing noise and without a conspicuous deterioration in image quality in patients suspected of having renal colic.

  4. Parallel fast multipole boundary element method applied to computational homogenization

    NASA Astrophysics Data System (ADS)

    Ptaszny, Jacek

    2018-01-01

    In the present work, a fast multipole boundary element method (FMBEM) and a parallel computer code for a 3D elasticity problem are developed and applied to the computational homogenization of a solid containing spherical voids. The system of equations is solved by using the GMRES iterative solver. The boundary of the body is discretized by using quadrilateral serendipity elements with adaptive numerical integration. Operations related to a single GMRES iteration, performed by traversing the corresponding tree structure upwards and downwards, are parallelized by using the OpenMP standard. The assignment of tasks to threads is based on the assumption that the tree nodes at which the moment transformations are initialized can be partitioned into disjoint sets of equal or approximately equal size and assigned to the threads. The achieved speedup as a function of the number of threads is examined.

  5. The impact of initialization procedures on unsupervised unmixing of hyperspectral imagery using the constrained positive matrix factorization

    NASA Astrophysics Data System (ADS)

    Masalmah, Yahya M.; Vélez-Reyes, Miguel

    2007-04-01

    The authors proposed in previous papers the use of the constrained Positive Matrix Factorization (cPMF) to perform unsupervised unmixing of hyperspectral imagery. Two iterative algorithms were proposed to compute the cPMF based on the Gauss-Seidel and penalty approaches to solve optimization problems. Results presented in previous papers have shown the potential of the proposed method to perform unsupervised unmixing in HYPERION and AVIRIS imagery. The performance of iterative methods is highly dependent on the initialization scheme. Good initialization schemes can improve convergence speed, whether or not a global minimum is found, and whether or not spectra with physical relevance are retrieved as endmembers. In this paper, different initializations using random selection, longest norm pixels, and standard endmembers selection routines are studied and compared using simulated and real data.

  6. Methods for Large-Scale Nonlinear Optimization.

    DTIC Science & Technology

    1980-05-01

    STANFORD, CALIFORNIA 94305 METHODS FOR LARGE-SCALE NONLINEAR OPTIMIZATION by Philip E. Gill, Walter Murray, Michael A. Saunders, and Margaret H. Wright...typical iteration can be partitioned so that ... where B is an m × m basis matrix. This partition effectively divides the variables into three classes... attention is given to the standard of the coding or the documentation. A much better way of obtaining mathematical software is from a software library

  7. Comparison between iteration schemes for three-dimensional coordinate-transformed saturated-unsaturated flow model

    NASA Astrophysics Data System (ADS)

    An, Hyunuk; Ichikawa, Yutaka; Tachikawa, Yasuto; Shiiba, Michiharu

    2012-11-01

    Summary: Three different iteration methods for a three-dimensional coordinate-transformed saturated-unsaturated flow model are compared in this study. The Picard and Newton iteration methods are the common approaches for solving Richards' equation. The Picard method is simple to implement and cost-efficient (on an individual iteration basis), but it converges more slowly than the Newton method. On the other hand, although the Newton method converges faster, it is more complex to implement and consumes more CPU resources per iteration than the Picard method. The comparison of the two methods in finite-element models (FEM) for saturated-unsaturated flow has been well evaluated in previous studies. However, the two iteration methods might exhibit different behavior in a coordinate-transformed finite-difference model (FDM). In addition, the Newton-Krylov method could be a suitable alternative for the coordinate-transformed FDM, because the Newton method there requires the evaluation of a 19-point stencil matrix, and the formation of a 19-point stencil is quite a complex and laborious procedure. Instead, the Newton-Krylov method calculates the matrix-vector product, which can be easily approximated by calculating differences of the original nonlinear function. In this respect, the Newton-Krylov method might be the most appropriate iteration method for the coordinate-transformed FDM. However, it involves the additional cost of approximating the matrix-vector product at each Krylov iteration. In this paper, we evaluated the efficiency and robustness of three iteration methods (Picard, Newton, and Newton-Krylov) for simulating saturated-unsaturated flow through porous media using a three-dimensional coordinate-transformed FDM.
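
    The tradeoff described above (cheap, slowly converging Picard steps versus expensive, fast Newton steps) shows up even for a scalar fixed-point problem; counting iterations for x = cos(x) makes the contrast concrete. This is a generic illustration, not the coordinate-transformed Richards' equation solver.

        import math

        def picard(g, x0, tol=1e-10, maxit=500):
            """Fixed-point iteration x <- g(x); linear convergence when |g'| < 1."""
            x = x0
            for k in range(1, maxit + 1):
                x_new = g(x)
                if abs(x_new - x) < tol:
                    return x_new, k
                x = x_new
            return x, maxit

        def newton(f, df, x0, tol=1e-10, maxit=50):
            """Newton iteration; quadratic convergence but needs the derivative."""
            x = x0
            for k in range(1, maxit + 1):
                dx = f(x) / df(x)
                x -= dx
                if abs(dx) < tol:
                    return x, k
            return x, maxit

        xp, kp = picard(math.cos, 1.0)
        xn, kn = newton(lambda x: x - math.cos(x), lambda x: 1.0 + math.sin(x), 1.0)
        print(f"Picard: {kp} iterations, Newton: {kn} iterations")  # roughly 55 vs 5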

  8. WE-EF-207-07: Dual Energy CT with One Full Scan and a Second Sparse-View Scan Using Structure Preserving Iterative Reconstruction (SPIR)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, T; Zhu, L

    Purpose: Conventional dual energy CT (DECT) reconstructs CT and basis material images from two full-size projection datasets with different energy spectra. To relax the data requirement, we propose an iterative DECT reconstruction algorithm using one full scan and a second sparse-view scan by utilizing redundant structural information of the same object acquired at two different energies. Methods: We first reconstruct a full-scan CT image using the filtered-backprojection (FBP) algorithm. The material similarities of each pixel with other pixels are calculated by an exponential function of pixel value differences. We assume that the material similarities of pixels remain in the second CT scan, although pixel values may vary. An iterative method is designed to reconstruct the second CT image from reduced projections. Under the data fidelity constraint, the algorithm minimizes the L2 norm of the difference between pixel value and its estimation, which is the average of other pixel values weighted by their similarities. The proposed algorithm, referred to as structure preserving iterative reconstruction (SPIR), is evaluated on physical phantoms. Results: On the Catphan600 phantom, the SPIR-based DECT method with a second 10-view scan reduces the noise standard deviation of a full-scan FBP CT reconstruction by a factor of 4 with well-maintained spatial resolution, while iterative reconstruction using total-variation regularization (TVR) degrades the spatial resolution at the same noise level. The proposed method achieves less than 1% measurement difference on the electron density map compared with conventional two-full-scan DECT. On an anthropomorphic pediatric phantom, our method successfully reconstructs the complicated vertebra structures and decomposes bone and soft tissue. Conclusion: We develop an effective method to reduce the number of views and therefore data acquisition in DECT. We show that SPIR-based DECT using one full scan and a second 10-view scan can provide high-quality DECT images and accurate electron density maps as conventional two-full-scan DECT.

  9. Variational iteration method — a promising technique for constructing equivalent integral equations of fractional order

    NASA Astrophysics Data System (ADS)

    Wang, Yi-Hong; Wu, Guo-Cheng; Baleanu, Dumitru

    2013-10-01

    The variational iteration method is newly used to construct various integral equations of fractional order. Some iterative schemes are proposed which fully use the method and the predictor-corrector approach. The fractional Bagley-Torvik equation is then illustrated as an example of multi-order and the results show the efficiency of the variational iteration method's new role.
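
    The structure of a VIM correction functional is easy to show symbolically for an integer-order test problem: for u' + u = 0, u(0) = 1, the multiplier λ = -1 gives u_{n+1}(t) = u_n(t) - ∫_0^t (u_n'(s) + u_n(s)) ds, and the iterates rebuild the Taylor series of e^{-t}. The sketch below covers this simple case only; the fractional-order construction in the paper requires fractional integrals.

        import sympy as sp

        t, s = sp.symbols('t s')
        u = sp.Integer(1)                        # u_0(t) = 1 satisfies u(0) = 1
        for n in range(4):
            # Correction functional with Lagrange multiplier lambda = -1.
            integrand = (sp.diff(u, t) + u).subs(t, s)
            u = sp.expand(u - sp.integrate(integrand, (s, 0, t)))
            print(f"u_{n + 1}(t) =", u)
        # u_1 = 1 - t, u_2 = 1 - t + t**2/2, ... -> partial sums of exp(-t)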

  10. Effect of Low-Dose MDCT and Iterative Reconstruction on Trabecular Bone Microstructure Assessment.

    PubMed

    Kopp, Felix K; Holzapfel, Konstantin; Baum, Thomas; Nasirudin, Radin A; Mei, Kai; Garcia, Eduardo G; Burgkart, Rainer; Rummeny, Ernst J; Kirschke, Jan S; Noël, Peter B

    2016-01-01

    We investigated the effects of low-dose multi-detector computed tomography (MDCT) in combination with statistical iterative reconstruction algorithms on trabecular bone microstructure parameters. Twelve donated vertebrae were scanned with the routine radiation exposure used in our department (standard dose) and with a low-dose protocol. Reconstructions were performed with filtered backprojection (FBP) and maximum-likelihood-based statistical iterative reconstruction (SIR). Trabecular bone microstructure parameters were assessed and statistically compared for each reconstruction. Moreover, fracture loads of the vertebrae were determined biomechanically and correlated with the assessed microstructure parameters. Trabecular bone microstructure parameters based on low-dose MDCT and SIR significantly correlated with vertebral bone strength. There was no significant difference between microstructure parameters calculated on low-dose SIR and standard-dose FBP images. However, the results revealed a strong dependency on the regularization strength applied during SIR. It was observed that stronger regularization can corrupt the microstructure analysis, because the trabecular structure is a very fine detail that can be lost during the regularization process. As a consequence, the introduction of SIR for trabecular bone microstructure analysis requires a specific optimization of the regularization parameters. Moreover, in comparison to other approaches, superior noise-resolution trade-offs can be obtained with the proposed methods.

  11. Multiscale reconstruction for MR fingerprinting.

    PubMed

    Pierre, Eric Y; Ma, Dan; Chen, Yong; Badve, Chaitra; Griswold, Mark A

    2016-06-01

    To reduce the acquisition time needed to obtain reliable parametric maps with Magnetic Resonance Fingerprinting (MRF), an iterative-denoising algorithm is initialized by reconstructing the MRF image series at low image resolution. For subsequent iterations, the method enforces pixel-wise fidelity to the best-matching dictionary template and then enforces fidelity to the acquired data at slightly higher spatial resolution. After convergence, parametric maps with the desired spatial resolution are obtained through template matching of the final image series. The proposed method was evaluated on phantom and in vivo data using a highly undersampled, variable-density spiral trajectory and compared with the original MRF method. The benefits of additional sparsity constraints were also evaluated. When available, gold-standard parameter maps were used to quantify the performance of each method. The proposed approach allowed convergence to accurate parametric maps with as few as 300 time points of acquisition, as compared to 1000 in the original MRF work. Simultaneous quantification of T1, T2, proton density (PD), and B0 field variations in the brain was achieved in vivo for a 256 × 256 matrix in a total acquisition time of 10.2 s, representing a three-fold reduction in acquisition time. The proposed iterative multiscale reconstruction reliably increases MRF acquisition speed and accuracy. Magn Reson Med 75:2481-2492, 2016. © 2015 Wiley Periodicals, Inc.

  12. Iterative Most-Likely Point Registration (IMLP): A Robust Algorithm for Computing Optimal Shape Alignment

    PubMed Central

    Billings, Seth D.; Boctor, Emad M.; Taylor, Russell H.

    2015-01-01

    We present a probabilistic registration algorithm that robustly solves the problem of rigid-body alignment between two shapes with high accuracy, by aptly modeling measurement noise in each shape, whether isotropic or anisotropic. For point-cloud shapes, the probabilistic framework additionally enables modeling locally-linear surface regions in the vicinity of each point to further improve registration accuracy. The proposed Iterative Most-Likely Point (IMLP) algorithm is formed as a variant of the popular Iterative Closest Point (ICP) algorithm, which iterates between point-correspondence and point-registration steps. IMLP’s probabilistic framework is used to incorporate a generalized noise model into both the correspondence and the registration phases of the algorithm, hence its name as a most-likely point method rather than a closest-point method. To efficiently compute the most-likely correspondences, we devise a novel search strategy based on a principal direction (PD)-tree search. We also propose a new approach to solve the generalized total-least-squares (GTLS) sub-problem of the registration phase, wherein the point correspondences are registered under a generalized noise model. Our GTLS approach has improved accuracy, efficiency, and stability compared to prior methods presented for this problem and offers a straightforward implementation using standard least squares. We evaluate the performance of IMLP relative to a large number of prior algorithms including ICP, a robust variant on ICP, Generalized ICP (GICP), and Coherent Point Drift (CPD), as well as drawing close comparison with the prior anisotropic registration methods of GTLS-ICP and A-ICP. The performance of IMLP is shown to be superior with respect to these algorithms over a wide range of noise conditions, outliers, and misalignments using both mesh and point-cloud representations of various shapes. PMID:25748700
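
    For orientation, one correspondence-plus-registration step of the classical ICP scheme that IMLP generalizes can be sketched as below (brute-force matching and an SVD-based rigid fit); IMLP itself replaces both phases with most-likely, noise-model-weighted counterparts that are not reproduced here.

        import numpy as np

        def icp_step(src, dst):
            # Correspondence phase: brute-force nearest neighbor in dst for each
            # src point (IMLP uses a PD-tree search under a noise-model metric).
            d2 = ((src[:, None, :] - dst[None, :, :]) ** 2).sum(-1)
            matched = dst[d2.argmin(axis=1)]
            # Registration phase: least-squares rigid fit via SVD (Kabsch);
            # IMLP solves a generalized total-least-squares problem here instead.
            ps, pm = src.mean(0), matched.mean(0)
            H = (src - ps).T @ (matched - pm)
            U, _, Vt = np.linalg.svd(H)
            D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
            R = Vt.T @ D @ U.T
            t = pm - R @ ps
            return src @ R.T + t, R, t

        rng = np.random.default_rng(0)
        dst = rng.normal(size=(30, 3))
        c, sn = np.cos(0.2), np.sin(0.2)
        Rz = np.array([[c, -sn, 0.0], [sn, c, 0.0], [0.0, 0.0, 1.0]])
        src = dst @ Rz.T + np.array([0.1, -0.2, 0.05])   # misaligned copy of dst
        for _ in range(20):
            src, R, t = icp_step(src, dst)
        print(np.linalg.norm(src - dst))                 # ~0 after convergence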

  13. Optimised Iteration in Coupled Monte Carlo - Thermal-Hydraulics Calculations

    NASA Astrophysics Data System (ADS)

    Hoogenboom, J. Eduard; Dufek, Jan

    2014-06-01

    This paper describes an optimised iteration scheme for the number of neutron histories and the relaxation factor in successive iterations of coupled Monte Carlo and thermal-hydraulic reactor calculations based on the stochastic iteration method. The scheme results in an increasing number of neutron histories for the Monte Carlo calculation in successive iteration steps and a decreasing relaxation factor for the spatial power distribution to be used as input to the thermal-hydraulics calculation. The theoretical basis is discussed in detail and practical consequences of the scheme are shown, including a nearly linear increase per iteration in the number of cycles of the Monte Carlo calculation. The scheme is demonstrated for a full PWR-type fuel assembly. Results are shown for the axial power distribution during several iteration steps. A few alternative iteration methods are also tested, and it is concluded that the presented iteration method is near optimal.
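
    Stripped of the neutronics, the scheme is a stochastically damped fixed-point iteration in which the per-iteration sample size grows while the relaxation factor shrinks; a toy sketch with a scalar map and synthetic Monte Carlo noise (all values illustrative):

        import numpy as np

        rng = np.random.default_rng(1)

        def noisy_map(x, n_samples):
            # Stand-in for one Monte Carlo/thermal-hydraulics pass: the exact map
            # is g(x) = cos(x); statistical noise scales as 1/sqrt(n_samples).
            return np.cos(x) + rng.normal(0.0, 1.0 / np.sqrt(n_samples))

        x, n = 0.5, 0
        for k in range(1, 21):
            n += 1000                 # growing number of histories per iteration
            alpha = 1.0 / k           # decreasing relaxation factor
            x = (1 - alpha) * x + alpha * noisy_map(x, n)
        print(x)                      # ~0.739, the fixed point of cos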

  14. Parallel Implementation of 3-D Iterative Reconstruction With Intra-Thread Update for the jPET-D4

    NASA Astrophysics Data System (ADS)

    Lam, Chih Fung; Yamaya, Taiga; Obi, Takashi; Yoshida, Eiji; Inadama, Naoko; Shibuya, Kengo; Nishikido, Fumihiko; Murayama, Hideo

    2009-02-01

    One way to speed up iterative image reconstruction is parallel computing with a computer cluster. However, as the number of computing threads increases, parallel efficiency decreases due to network transfer delay. In this paper, we propose a method to reduce data transfer between computing threads by introducing an intra-thread update. The update factor is collected from each slave thread and a global image is updated as usual in the first K sub-iterations. In the remaining sub-iterations, the global image is only updated at an interval controlled by a parameter L. Between those global updates, the intra-thread update is carried out, whereby an image update is performed locally in each slave thread. We investigated combinations of the K and L parameters based on a parallel implementation of RAMLA for the jPET-D4 scanner. Our evaluation used four workstations with a total of 16 slave threads. Each slave thread calculated a different set of LORs, divided according to ring difference numbers. We assessed the image quality of the proposed method with a hotspot simulation phantom. The figures of merit were the full width at half maximum of the hotspots and the background normalized standard deviation. At the optimum K and L setting, we did not find significant change in the output images. We also applied the proposed method to a Hoffman phantom experiment and found the difference due to the intra-thread update to be negligible. With the intra-thread update, computation time could be reduced by about 23%.

  15. An iterative method for near-field Fresnel region polychromatic phase contrast imaging

    NASA Astrophysics Data System (ADS)

    Carroll, Aidan J.; van Riessen, Grant A.; Balaur, Eugeniu; Dolbnya, Igor P.; Tran, Giang N.; Peele, Andrew G.

    2017-07-01

    We present an iterative method for polychromatic phase contrast imaging that is suitable for broadband illumination and which allows for the quantitative determination of the thickness of an object given the refractive index of the sample material. Experimental and simulation results suggest the iterative method provides comparable image quality and quantitative object thickness determination when compared to the analytical polychromatic transport of intensity and contrast transfer function methods. The ability of the iterative method to work over a wider range of experimental conditions means the iterative method is a suitable candidate for use with polychromatic illumination and may deliver more utility for laboratory-based x-ray sources, which typically have a broad spectrum.

  16. New methods of testing nonlinear hypothesis using iterative NLLS estimator

    NASA Astrophysics Data System (ADS)

    Mahaboob, B.; Venkateswarlu, B.; Mokeshrayalu, G.; Balasiddamuni, P.

    2017-11-01

    This research paper discusses the method of testing nonlinear hypotheses using the iterative Nonlinear Least Squares (NLLS) estimator, as explained by Takeshi Amemiya [1]. In the present paper, a modified Wald test statistic due to Engle, Robert [6] is proposed to test nonlinear hypotheses using the iterative NLLS estimator. An alternative method for testing nonlinear hypotheses using the iterative NLLS estimator based on nonlinear studentized residuals is also proposed. In addition, an innovative method of testing nonlinear hypotheses using the iterative restricted NLLS estimator is derived. Pesaran and Deaton [10] explained methods of testing nonlinear hypotheses. This paper uses the asymptotic properties of the nonlinear least squares estimator proposed by Jenrich [8]. The main purpose of this paper is to provide innovative methods of testing nonlinear hypotheses using the iterative NLLS estimator, the iterative NLLS estimator based on nonlinear studentized residuals, and the iterative restricted NLLS estimator. Eakambaram et al. [12] discussed least absolute deviation estimation versus nonlinear regression models with heteroscedastic errors and also studied the problem of heteroscedasticity with reference to nonlinear regression models with suitable illustration. William Greene [13] examined the interaction effect in nonlinear models discussed by Ai and Norton [14] and suggested ways to examine the effects that do not involve statistical testing. Peter [15] provided guidelines for identifying composite hypotheses and addressing the probability of false rejection for multiple hypotheses.

  17. Shutdown Dose Rate Analysis Using the Multi-Step CADIS Method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ibrahim, Ahmad M.; Peplow, Douglas E.; Peterson, Joshua L.

    2015-01-01

    The Multi-Step Consistent Adjoint Driven Importance Sampling (MS-CADIS) hybrid Monte Carlo (MC)/deterministic radiation transport method was proposed to speed up the shutdown dose rate (SDDR) neutron MC calculation using an importance function that represents the neutron importance to the final SDDR. This work applied the MS-CADIS method to the ITER SDDR benchmark problem. The MS-CADIS method was also used to calculate the SDDR uncertainty resulting from uncertainties in the MC neutron calculation and to determine the degree of undersampling in SDDR calculations because of the limited ability of the MC method to tally detailed spatial and energy distributions. The analysis that used the ITER benchmark problem compared the efficiency of the MS-CADIS method to the traditional approach of using global MC variance reduction techniques for speeding up SDDR neutron MC calculation. Compared to the standard Forward-Weighted-CADIS (FW-CADIS) method, the MS-CADIS method increased the efficiency of the SDDR neutron MC calculation by 69%. The MS-CADIS method also increased the fraction of nonzero scoring mesh tally elements in the space-energy regions of high importance to the final SDDR.

  18. Evaluation of an iterative model-based CT reconstruction algorithm by intra-patient comparison of standard and ultra-low-dose examinations.

    PubMed

    Noël, Peter B; Engels, Stephan; Köhler, Thomas; Muenzel, Daniela; Franz, Daniela; Rasper, Michael; Rummeny, Ernst J; Dobritz, Martin; Fingerle, Alexander A

    2018-01-01

    Background: The explosive growth of computed tomography (CT) has led to a growing public health concern about patient and population radiation dose. A recently introduced technique for dose reduction, which can be combined with tube-current modulation, over-beam reduction, and organ-specific dose reduction, is iterative reconstruction (IR). Purpose: To evaluate the quality, at different radiation dose levels, of three reconstruction algorithms for diagnostics of patients with proven liver metastases under tumor follow-up. Material and Methods: A total of 40 thorax-abdomen-pelvis CT examinations acquired from 20 patients in a tumor follow-up were included. All patients were imaged using the standard-dose and a specific low-dose CT protocol. Reconstructed slices were generated by using three different reconstruction algorithms: a classical filtered back projection (FBP); a first-generation iterative noise-reduction algorithm (iDose4); and a next-generation model-based IR algorithm (IMR). Results: The overall detection of liver lesions tended to be higher with the IMR algorithm than with FBP or iDose4. The IMR dataset at standard dose yielded the highest overall detectability, while the low-dose FBP dataset showed the lowest detectability. For the low-dose protocols, IMR yielded significantly improved detectability of the liver lesions compared to FBP or iDose4 (P = 0.01). The radiation dose decreased by an approximate factor of 5 between the standard-dose and the low-dose protocol. Conclusion: The latest generation of IR algorithms significantly improved the diagnostic image quality and provided virtually noise-free images for ultra-low-dose CT imaging.

  19. Solution of Cubic Equations by Iteration Methods on a Pocket Calculator

    ERIC Educational Resources Information Center

    Bamdad, Farzad

    2004-01-01

    A method is developed to show students how they can write iteration programs on an inexpensive programmable pocket calculator, without requiring a PC or a graphing calculator. Two iteration methods are used: the successive-approximations and bisection methods.
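
    Both techniques are easy to express outside the calculator context; a sketch for the cubic x^3 - x - 2 = 0, rearranged as x = (x + 2)^(1/3) for the successive-approximations form (an illustrative example, not the article's own program):

        def bisection(f, a, b, tol=1e-10):
            # Requires f(a) and f(b) to have opposite signs.
            while b - a > tol:
                m = 0.5 * (a + b)
                if f(a) * f(m) <= 0:
                    b = m
                else:
                    a = m
            return 0.5 * (a + b)

        def successive_approx(g, x, tol=1e-10, max_iter=200):
            # Fixed-point iteration x_{k+1} = g(x_k); converges when |g'| < 1 near the root.
            for _ in range(max_iter):
                x_new = g(x)
                if abs(x_new - x) < tol:
                    return x_new
                x = x_new
            return x

        f = lambda x: x**3 - x - 2
        g = lambda x: (x + 2.0) ** (1.0 / 3.0)
        print(bisection(f, 1.0, 2.0), successive_approx(g, 1.5))   # both ~1.5214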

  20. A 3D finite-difference BiCG iterative solver with the Fourier-Jacobi preconditioner for the anisotropic EIT/EEG forward problem.

    PubMed

    Turovets, Sergei; Volkov, Vasily; Zherdetsky, Aleksej; Prakonina, Alena; Malony, Allen D

    2014-01-01

    The Electrical Impedance Tomography (EIT) and electroencephalography (EEG) forward problems in anisotropic inhomogeneous media, such as the human head, belong to the class of three-dimensional boundary value problems for elliptic equations with mixed derivatives. We introduce and explore the performance of several new promising numerical techniques, which seem to be more suitable for solving these problems. The proposed numerical schemes combine the fictitious domain approach with the finite-difference method and an optimally preconditioned Conjugate Gradient- (CG-) type iterative method for treatment of the discrete model. The numerical scheme involves only the standard operations of summation and multiplication of sparse matrices and vectors, as well as the FFT, making it easy to implement and suitable for effective parallel implementation. Some typical use cases for the EIT/EEG problems are considered, demonstrating the high efficiency of the proposed numerical technique.

  1. High-order noise analysis for low dose iterative image reconstruction methods: ASIR, IRIS, and MBAI

    NASA Astrophysics Data System (ADS)

    Do, Synho; Singh, Sarabjeet; Kalra, Mannudeep K.; Karl, W. Clem; Brady, Thomas J.; Pien, Homer

    2011-03-01

    Iterative reconstruction techniques (IRTs) have been shown to suppress noise significantly in low-dose CT imaging. However, medical doctors hesitate to accept this new technology because the visual impression of IRT images differs from that of full-dose filtered back-projection (FBP) images. Common noise measurements, such as the mean and standard deviation of a homogeneous region in the image, do not provide sufficient characterization of noise statistics when the probability density function becomes non-Gaussian. In this study, we measure L-moments of intensity values of images acquired at 10% of normal dose and reconstructed by the IRT methods of two state-of-the-art clinical scanners (i.e., GE HDCT and Siemens DSCT Flash), keeping the dose level identical for both. The high- and low-dose scans (i.e., 10% of high dose) were acquired from each scanner, and L-moments of noise patches were calculated for the comparison.

  2. Direct determination of one-dimensional interphase structures using normalized crystal truncation rod analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kawaguchi, Tomoya; Liu, Yihua; Reiter, Anthony

    Here, a one-dimensional non-iterative direct method was employed for normalized crystal truncation rod analysis. The non-iterative approach, utilizing the Kramers–Kronig relation, avoids the ambiguities due to an improper initial model or incomplete convergence in the conventional iterative methods. The validity and limitations of the present method are demonstrated through both numerical simulations and experiments with Pt(111) in a 0.1 M CsF aqueous solution. The present method is compared with conventional iterative phase-retrieval methods.

  4. Fourth-order numerical solutions of diffusion equation by using SOR method with Crank-Nicolson approach

    NASA Astrophysics Data System (ADS)

    Muhiddin, F. A.; Sulaiman, J.

    2017-09-01

    The aim of this paper is to investigate the effectiveness of the Successive Over-Relaxation (SOR) iterative method, using the fourth-order Crank-Nicolson (CN) discretization scheme to derive a five-point Crank-Nicolson approximation equation, in order to solve the diffusion equation. From this approximation equation, a corresponding system of five-point approximation equations can be generated and then solved iteratively. In order to assess the performance of the proposed iterative method with the fourth-order CN scheme, another point iterative method, Gauss-Seidel (GS), is also presented as a reference method. Based on the numerical results obtained with the fourth-order CN discretization scheme, it can be concluded that the SOR iterative method is superior in terms of number of iterations, execution time, and maximum absolute error.
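
    The SOR update itself is a one-parameter modification of Gauss-Seidel; a generic dense-matrix sketch on a tridiagonal test system of the kind a Crank-Nicolson discretization produces (not the paper's five-point scheme):

        import numpy as np

        def sor(A, b, omega=1.2, tol=1e-10, max_iter=10_000):
            # Successive Over-Relaxation sweep; omega = 1 recovers Gauss-Seidel.
            x = np.zeros_like(b)
            for _ in range(max_iter):
                x_old = x.copy()
                for i in range(len(b)):
                    sigma = A[i, :i] @ x[:i] + A[i, i + 1:] @ x_old[i + 1:]
                    x[i] = (1 - omega) * x_old[i] + omega * (b[i] - sigma) / A[i, i]
                if np.linalg.norm(x - x_old, np.inf) < tol:
                    break
            return x

        # Tridiagonal test system of the kind a Crank-Nicolson step produces.
        n = 50
        A = 2.5 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
        b = np.ones(n)
        print(np.allclose(A @ sor(A, b), b))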

  5. Iterative methods for mixed finite element equations

    NASA Technical Reports Server (NTRS)

    Nakazawa, S.; Nagtegaal, J. C.; Zienkiewicz, O. C.

    1985-01-01

    Iterative strategies for the solution of the indefinite systems of equations arising from the mixed finite element method are investigated in this paper, with application to linear and nonlinear problems in solid and structural mechanics. The augmented Hu-Washizu form is derived and then utilized to construct a family of iterative algorithms using the displacement method as the preconditioner. Two types of iterative algorithms are implemented: constant metric iterations, which do not involve updating the preconditioner, and variable metric iterations, in which the inverse of the preconditioning matrix is updated. A series of numerical experiments is conducted to evaluate the numerical performance with application to linear and nonlinear model problems.

  6. Leapfrog variants of iterative methods for linear algebra equations

    NASA Technical Reports Server (NTRS)

    Saylor, Paul E.

    1988-01-01

    Two iterative methods are considered, Richardson's method and a general second order method. For both methods, a variant of the method is derived for which only even numbered iterates are computed. The variant is called a leapfrog method. Comparisons between the conventional form of the methods and the leapfrog form are made under the assumption that the number of unknowns is large. In the case of Richardson's method, it is possible to express the final iterate in terms of only the initial approximation, a variant of the iteration called the grand-leap method. In the case of the grand-leap variant, a set of parameters is required. An algorithm is presented to compute these parameters that is related to algorithms to compute the weights and abscissas for Gaussian quadrature. General algorithms to implement the leapfrog and grand-leap methods are presented. Algorithms for the important special case of the Chebyshev method are also given.
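
    For Richardson's method with a constant parameter, the leapfrog form follows by composing two updates: with r_k = b - A x_k, one gets x_{k+2} = x_k + omega (2 r_k - omega A r_k), so only even-numbered iterates are formed. A sketch under that constant-parameter assumption:

        import numpy as np

        def richardson_leapfrog(A, b, omega, n_steps):
            # Each pass advances two Richardson iterations at once, so only
            # even-numbered iterates are formed:
            #   x_{k+2} = x_k + omega*(2*r_k - omega*A@r_k),  r_k = b - A@x_k.
            x = np.zeros_like(b)
            for _ in range(n_steps):
                r = b - A @ x
                x = x + omega * (2.0 * r - omega * (A @ r))
            return x

        n = 20
        A = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
        b = np.ones(n)
        lam = np.linalg.eigvalsh(A)
        omega = 2.0 / (lam[0] + lam[-1])        # classical optimal parameter
        x = richardson_leapfrog(A, b, omega, 5000)
        print(np.linalg.norm(b - A @ x))        # ~0: converged via even iterates only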

  7. A new implementation of the CMRH method for solving dense linear systems

    NASA Astrophysics Data System (ADS)

    Heyouni, M.; Sadok, H.

    2008-04-01

    The CMRH method [H. Sadok, Methodes de projections pour les systemes lineaires et non lineaires, Habilitation thesis, University of Lille1, Lille, France, 1994; H. Sadok, CMRH: A new method for solving nonsymmetric linear systems based on the Hessenberg reduction algorithm, Numer. Algorithms 20 (1999) 303-321] is an algorithm for solving nonsymmetric linear systems in which the Arnoldi component of GMRES is replaced by the Hessenberg process, which generates Krylov basis vectors that are orthogonal to the standard unit basis vectors rather than mutually orthogonal. The iterate is formed from these vectors by solving a small least squares problem involving a Hessenberg matrix. Like GMRES, this method requires one matrix-vector product per iteration. However, it can be implemented to require half as much arithmetic work and less storage. Moreover, numerical experiments show that this method performs accurately and reduces the residual about as fast as GMRES. With this new implementation, we show that the CMRH method is the only method with a long-term recurrence that does not require storing both the entire Krylov vector basis and the original matrix at the same time, as the GMRES algorithm does. A comparison with Gaussian elimination is provided.

  8. Gyrokinetic equations and full f solution method based on Dirac's constrained Hamiltonian and inverse Kruskal iteration

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Heikkinen, J. A.; Nora, M.

    2011-02-15

    Gyrokinetic equations of motion, the Poisson equation, and energy and momentum conservation laws are derived based on the reduced-phase-space Lagrangian and inverse Kruskal iteration introduced by Pfirsch and Correa-Restrepo [J. Plasma Phys. 70, 719 (2004)]. This formalism, together with the choice of the adiabatic invariant J as one of the averaging coordinates in phase space, provides an alternative to the standard gyrokinetics. Within second order in the gyrokinetic parameter, the new equations do not show explicit ponderomotive-like or polarization-like terms. Pullback of particle information, with an iterated gyrophase and a field-dependent gyroradius function, from the gyrocenter position defined by gyroaveraged coordinates allows direct numerical integration of the gyrokinetic equations in particle simulation of the field and particles with a full distribution function. As an example, gyrokinetic systems with the polarization drift either present or absent in the equations of motion are considered.

  9. SU-G-BRA-11: Tumor Tracking in An Iterative Volume of Interest Based 4D CBCT Reconstruction

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Martin, R; Pan, T; Ahmad, M

    2016-06-15

    Purpose: 4D CBCT allows evaluation of tumor motion immediately prior to radiation therapy, but suffers from heavy artifacts that limit its ability to track tumors. Various iterative and compressed sensing reconstructions have been proposed to reduce these artifacts, but they are costly time-wise and their regularization can degrade the image quality of the bony anatomy used for alignment. We have previously proposed an iterative volume-of-interest (I4D VOI) method which minimizes reconstruction time and maintains the image quality of bony anatomy by focusing a 4D reconstruction within a VOI. The purpose of this study is to test the tumor tracking accuracy of this method compared to existing methods. Methods: Long-scan (8-10 min) CBCT data with corresponding RPM data were collected for 12 lung cancer patients. The full data set was sorted into 8 phases and reconstructed using FDK cone beam reconstruction to serve as a gold standard. The data were reduced in a way that maintains a normal breathing pattern and used to reconstruct 4D images using FDK, low- and high-regularization TV minimization (λ=2, 10), and the proposed I4D VOI method with PTVs used for the VOI. Tumor trajectories were found using rigid registration within the VOI for each reconstruction and compared to the gold standard. Results: The root mean square error (RMSE) values were 2.70 mm for FDK, 2.50 mm for low-regularization TV, 1.48 mm for high-regularization TV, and 2.34 mm for I4D VOI. Streak artifacts in I4D VOI were reduced compared to FDK, and images were less blurred than TV-reconstructed images. Conclusion: I4D VOI performed at least as well as existing methods in tumor tracking, with the exception of high-regularization TV minimization. These results, along with the reconstruction time and outside-VOI image quality advantages, suggest I4D VOI to be an improvement over existing methods. Funding support provided by CPRIT grant RP110562-P2-01.

  10. High resolution x-ray CMT: Reconstruction methods

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brown, J.K.

    This paper qualitatively discusses the primary characteristics of methods for reconstructing tomographic images from a set of projections. These reconstruction methods can be categorized as either "analytic" or "iterative" techniques. Analytic algorithms are derived from the formal inversion of equations describing the imaging process, while iterative algorithms incorporate a model of the imaging process and provide a mechanism to iteratively improve image estimates. Analytic reconstruction algorithms are typically computationally more efficient than iterative methods; however, analytic algorithms are available for a relatively limited set of imaging geometries and situations. Thus, the framework of iterative reconstruction methods is better suited for high accuracy, tomographic reconstruction codes.

  11. Systematic review of the application of the plan–do–study–act method to improve quality in healthcare

    PubMed Central

    Taylor, Michael J; McNicholas, Chris; Nicolay, Chris; Darzi, Ara; Bell, Derek; Reed, Julie E

    2014-01-01

    Background Plan–do–study–act (PDSA) cycles provide a structure for iterative testing of changes to improve quality of systems. The method is widely accepted in healthcare improvement; however there is little overarching evaluation of how the method is applied. This paper proposes a theoretical framework for assessing the quality of application of PDSA cycles and explores the consistency with which the method has been applied in peer-reviewed literature against this framework. Methods NHS Evidence and Cochrane databases were searched by three independent reviewers. Empirical studies were included that reported application of the PDSA method in healthcare. Application of PDSA cycles was assessed against key features of the method, including documentation characteristics, use of iterative cycles, prediction-based testing of change, initial small-scale testing and use of data over time. Results 73 of 409 individual articles identified met the inclusion criteria. Of the 73 articles, 47 documented PDSA cycles in sufficient detail for full analysis against the whole framework. Many of these studies reported application of the PDSA method that failed to accord with primary features of the method. Less than 20% (14/73) fully documented the application of a sequence of iterative cycles. Furthermore, a lack of adherence to the notion of small-scale change is apparent and only 15% (7/47) reported the use of quantitative data at monthly or more frequent data intervals to inform progression of cycles. Discussion To progress the development of the science of improvement, a greater understanding of the use of improvement methods, including PDSA, is essential to draw reliable conclusions about their effectiveness. This would be supported by the development of systematic and rigorous standards for the application and reporting of PDSAs. PMID:24025320

  12. Iterative initial condition reconstruction

    NASA Astrophysics Data System (ADS)

    Schmittfull, Marcel; Baldauf, Tobias; Zaldarriaga, Matias

    2017-07-01

    Motivated by recent developments in perturbative calculations of the nonlinear evolution of large-scale structure, we present an iterative algorithm to reconstruct the initial conditions in a given volume starting from the dark matter distribution in real space. In our algorithm, objects are first moved back iteratively along estimated potential gradients, with a progressively reduced smoothing scale, until a nearly uniform catalog is obtained. The linear initial density is then estimated as the divergence of the cumulative displacement, with an optional second-order correction. This algorithm should undo nonlinear effects up to one-loop order, including the higher-order infrared resummation piece. We test the method using dark matter simulations in real space. At redshift z = 0, we find that after eight iterations the reconstructed density is more than 95% correlated with the initial density at k ≤ 0.35 h Mpc^-1. The reconstruction also reduces the power in the difference between reconstructed and initial fields by more than 2 orders of magnitude at k ≤ 0.2 h Mpc^-1, and it extends the range of scales where the full broadband shape of the power spectrum matches linear theory by a factor of 2-3. As a specific application, we consider measurements of the baryonic acoustic oscillation (BAO) scale that can be improved by reducing the degradation effects of large-scale flows. In our idealized dark matter simulations, the method improves the BAO signal-to-noise ratio by a factor of 2.7 at z = 0 and by a factor of 2.5 at z = 0.6, improving standard BAO reconstruction by 70% at z = 0 and 30% at z = 0.6, and matching the optimal BAO signal and signal-to-noise ratio of the linear density in the same volume. For BAO, the iterative nature of the reconstruction is the most important aspect.

  13. Iterative demodulation and decoding of coded non-square QAM

    NASA Technical Reports Server (NTRS)

    Li, L.; Divsalar, D.; Dolinar, S.

    2003-01-01

    Simulation results show that, with iterative demodulation and decoding, coded NS-8QAM performs 0.5 dB better than standard 8QAM and 0.7 dB better than 8PSK at BER = 10^-5, when the FEC code is the (15, 11) Hamming code concatenated with a rate-1 accumulator code, while coded NS-32QAM performs 0.25 dB better than standard 32QAM.

  14. Comparison results on preconditioned SOR-type iterative method for Z-matrices linear systems

    NASA Astrophysics Data System (ADS)

    Wang, Xue-Zhong; Huang, Ting-Zhu; Fu, Ying-Ding

    2007-09-01

    In this paper, we present some comparison theorems on preconditioned iterative methods for solving Z-matrix linear systems. The comparison results show that the rate of convergence of the Gauss-Seidel-type method is faster than that of the SOR-type iterative method.
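
    The convergence rates being compared are governed by the spectral radii of the respective iteration matrices, which can be checked numerically for a small Z-matrix example (illustrative, not the preconditioned splittings analysed in the paper):

        import numpy as np

        def sor_radius(A, omega=1.0):
            # Spectral radius of the SOR iteration matrix; omega = 1 gives Gauss-Seidel.
            D = np.diag(np.diag(A))
            L = -np.tril(A, -1)
            M = (D - omega * L) / omega          # splitting A = M - N
            N = M - A
            return max(abs(np.linalg.eigvals(np.linalg.solve(M, N))))

        # A small Z-matrix (nonpositive off-diagonal entries).
        A = np.array([[4.0, -1.0, -1.0],
                      [-2.0, 5.0, -1.0],
                      [-1.0, -2.0, 4.0]])
        print(sor_radius(A, 1.0), sor_radius(A, 1.1))   # smaller radius = faster convergence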

  15. Survey on the Performance of Source Localization Algorithms.

    PubMed

    Fresno, José Manuel; Robles, Guillermo; Martínez-Tarifa, Juan Manuel; Stewart, Brian G

    2017-11-18

    The localization of emitters using an array of sensors or antennas is a prevalent issue approached in several applications. There exist different techniques for source localization, which can be classified into multilateration, received signal strength (RSS) and proximity methods. The performance of multilateration techniques relies on measured time variables: the time of flight (ToF) of the emission from the emitter to the sensor, the time differences of arrival (TDoA) of the emission between sensors and the pseudo-time of flight (pToF) of the emission to the sensors. The multilateration algorithms presented and compared in this paper can be classified as iterative and non-iterative methods. Both standard least squares (SLS) and hyperbolic least squares (HLS) are iterative and based on the Newton-Raphson technique to solve the non-linear equation system. The metaheuristic technique particle swarm optimization (PSO) used for source localisation is also studied. This optimization technique estimates the source position as the optimum of an objective function based on HLS and is also iterative in nature. Three non-iterative algorithms, namely the hyperbolic positioning algorithms (HPA), the maximum likelihood estimator (MLE) and Bancroft algorithm, are also presented. A non-iterative combined algorithm, MLE-HLS, based on MLE and HLS, is further proposed in this paper. The performance of all algorithms is analysed and compared in terms of accuracy in the localization of the position of the emitter and in terms of computational time. The analysis is also undertaken with three different sensor layouts since the positions of the sensors affect the localization; several source positions are also evaluated to make the comparison more robust. The analysis is carried out using theoretical time differences, as well as including errors due to the effect of digital sampling of the time variables. It is shown that the most balanced algorithm, yielding better results than the other algorithms in terms of accuracy and short computational time, is the combined MLE-HLS algorithm.
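
    A minimal sketch of the iterative (Newton-Raphson/Gauss-Newton) family applied to ToF ranges: the position estimate is updated from linearized range residuals. Synthetic 2-D data; the names and sensor layout are illustrative, not the paper's test configurations.

        import numpy as np

        def multilaterate(sensors, dists, x0, n_iter=20):
            # Gauss-Newton on the range residuals r_i = ||x - s_i|| - d_i
            # (the SLS formulation; the TDoA/HLS variants change only the residual).
            x = x0.astype(float)
            for _ in range(n_iter):
                diff = x - sensors                      # shape (m, 2)
                ranges = np.linalg.norm(diff, axis=1)
                J = diff / ranges[:, None]              # Jacobian of ||x - s_i||
                x = x - np.linalg.lstsq(J, ranges - dists, rcond=None)[0]
            return x

        sensors = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]])
        source = np.array([3.0, 7.0])
        dists = np.linalg.norm(source - sensors, axis=1)   # noise-free ranges
        print(multilaterate(sensors, dists, x0=np.array([5.0, 5.0])))  # -> [3. 7.]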

  17. Orthogonal analytical methods for botanical standardization: Determination of green tea catechins by qNMR and LC-MS/MS

    PubMed Central

    Napolitano, José G.; Gödecke, Tanja; Lankin, David C.; Jaki, Birgit U.; McAlpine, James B.; Chen, Shao-Nong; Pauli, Guido F.

    2013-01-01

    The development of analytical methods for parallel characterization of multiple phytoconstituents is essential to advance the quality control of herbal products. While chemical standardization is commonly carried out by targeted analysis using gas or liquid chromatography-based methods, more universal approaches based on quantitative 1H NMR (qHNMR) measurements are being used increasingly in the multi-targeted assessment of these complex mixtures. The present study describes the development of a 1D qHNMR-based method for simultaneous identification and quantification of green tea constituents. This approach utilizes computer-assisted 1H iterative Full Spin Analysis (HiFSA) and enables rapid profiling of seven catechins in commercial green tea extracts. The qHNMR results were cross-validated against quantitative profiles obtained with an orthogonal LC-MS/MS method. The relative strengths and weaknesses of both approaches are discussed, with special emphasis on the role of identical reference standards in qualitative and quantitative analyses. PMID:23870106

  18. Rest-wavelength fiducials for the ITER core imaging x-ray spectrometer.

    PubMed

    Beiersdorfer, P; Brown, G V; Graf, A T; Bitter, M; Hill, K W; Kelley, R L; Kilbourne, C A; Leutenegger, M A; Porter, F S

    2012-10-01

    Absolute wavelength references are needed to derive the plasma velocities from the Doppler shift of a given line emitted by a moving plasma. We show that such reference standards exist for the strongest x-ray line in neonlike W(64+), which has become the line of choice for the ITER (Latin "the way") core imaging x-ray spectrometer. Close-by standards are the Hf Lβ(3) line and the Ir Lα(2) line, which bracket the W(64+) line by ±30 eV; other standards are given by the Ir Lα(1) and Lα(2) lines and the Hf Lβ(1) and Lβ(2) lines, which bracket the W(64+) line by ±40 and ±160 eV, respectively. The reference standards can be produced by an x-ray tube built into the ITER spectrometer. We present spectra of the reference lines obtained with an x-ray microcalorimeter and compare them to spectra of the W(64+) line obtained both with an x-ray microcalorimeter and a crystal spectrometer.

  20. A Least-Squares Commutator in the Iterative Subspace Method for Accelerating Self-Consistent Field Convergence.

    PubMed

    Li, Haichen; Yaron, David J

    2016-11-08

    A least-squares commutator in the iterative subspace (LCIIS) approach is explored for accelerating self-consistent field (SCF) calculations. LCIIS is similar to direct inversion of the iterative subspace (DIIS) methods in that the next iterate of the density matrix is obtained as a linear combination of past iterates. However, whereas DIIS methods find the linear combination by minimizing the norm of a linear combination of error vectors, LCIIS minimizes the Frobenius norm of the commutator between the density matrix and the Fock matrix. This minimization leads to a quartic problem that can be solved iteratively through a constrained Newton's method. The relationship between LCIIS and DIIS is discussed. Numerical experiments suggest that LCIIS leads to faster convergence than other SCF convergence accelerating methods in a statistically significant sense, and in a number of cases LCIIS leads to stable SCF solutions that are not found by other methods. The computational cost involved in solving the quartic minimization problem is small compared to the typical cost of SCF iterations, and the approach is easily integrated into existing codes. LCIIS can therefore serve as a powerful addition to SCF convergence accelerating methods in computational quantum chemistry packages.
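
    For contrast with LCIIS, the standard DIIS extrapolation step can be sketched: the mixing coefficients minimize the norm of the combined error vectors subject to summing to one, via the usual bordered linear system (generic vectors, illustrative only):

        import numpy as np

        def diis_coefficients(errors):
            # Minimize ||sum_i c_i e_i||^2 subject to sum_i c_i = 1 by solving the
            # bordered system [B -1; -1 0][c; lam] = [0; -1] with B_ij = <e_i, e_j>.
            m = len(errors)
            B = np.zeros((m + 1, m + 1))
            B[:m, :m] = [[ei @ ej for ej in errors] for ei in errors]
            B[m, :m] = B[:m, m] = -1.0
            rhs = np.zeros(m + 1)
            rhs[m] = -1.0
            return np.linalg.solve(B, rhs)[:m]

        errs = [np.array([1.0, 0.5, 0.2]),
                np.array([-0.8, 0.3, 0.1]),
                np.array([0.1, -0.4, 0.6])]
        c = diis_coefficients(errs)
        print(c, c.sum())    # mixing coefficients; they sum to 1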

  1. SU-G-IeP2-12: The Effect of Iterative Reconstruction and CT Tube Voltage On Hounsfield Unit Values of Iodinated Contrast

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ogden, K; Greene-Donnelly, K; Vallabhaneni, D

    Purpose: To investigate the effects of changing iterative reconstruction strength and tube voltage on Hounsfield Unit (HU) values of varying concentrations of Iodinated contrast medium in a phantom. Method: Iodinated contrast (Omnipaque 300, GE Healthcare, Princeton NJ) was diluted with distilled water to concentrations of 0.6, 0.9, 1.8, 3.6, 7.2, and 10.8 mg/mL of Iodine. The solutions were scanned in a patient equivalent water phantom on two MDCT scanners: VCT 64 slice (GE Medical Systems, Waukesha, WI) and an Aquilion One 320 slice scanner (Toshiba America Medical Systems, Tustin CA). The phantom was scanned at 80, 100, 120, and 140 kV using 400, 255, 180, and 130 mAs, respectively, for the VCT scanner, and at 80, 100, 120, and 135 kV using 400, 250, 200, and 150 mAs, respectively, on the Aquilion One. Images were reconstructed at 2.5 mm (VCT) and 0.5 mm (Aquilion One). The VCT images were reconstructed using Advanced Statistical Iterative Reconstruction (ASIR) at 6 different strengths: 0%, 20%, 40%, 60%, 80%, and 100%. Aquilion One images were reconstructed using Adaptive Iterative Dose Reduction (AIDR) at 4 strengths: no AIDR, Weak AIDR, Standard AIDR, and Strong AIDR. Regions of interest (ROIs) were drawn on the images to measure the HU values and standard deviations of the diluted contrast. Second order polynomials were used to fit the HU values as a function of Iodine concentration. Results: For both scanners, there was no significant effect of changing the iterative reconstruction strength. The polynomial fits yielded goodness-of-fit (R2) values averaging 0.997. Conclusion: Changing the strength of the iterative reconstruction has no significant effect on the HU values of Iodinated contrast in a tissue-equivalent phantom. Fit values of HU vs Iodine concentration are useful in quantitative imaging protocols such as the determination of cardiac output from time-density curves in the main pulmonary artery.
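
    The fitting step described, HU as a second-order polynomial in iodine concentration, is straightforward to reproduce; the HU values below are made-up placeholders, not the measured phantom data.

        import numpy as np

        conc = np.array([0.6, 0.9, 1.8, 3.6, 7.2, 10.8])       # mg/mL iodine, as in the protocol
        hu = np.array([18.0, 27.0, 52.0, 99.0, 193.0, 282.0])  # hypothetical ROI means (placeholders)

        coeffs = np.polyfit(conc, hu, deg=2)     # second-order polynomial fit
        fit = np.polyval(coeffs, conc)
        ss_res = np.sum((hu - fit) ** 2)
        ss_tot = np.sum((hu - hu.mean()) ** 2)
        print(coeffs, 1.0 - ss_res / ss_tot)     # coefficients and R^2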

  2. Measurement Uncertainty of Dew-Point Temperature in a Two-Pressure Humidity Generator

    NASA Astrophysics Data System (ADS)

    Martins, L. Lages; Ribeiro, A. Silva; Alves e Sousa, J.; Forbes, Alistair B.

    2012-09-01

    This article describes the measurement uncertainty evaluation of the dew-point temperature when using a two-pressure humidity generator as a reference standard. The estimation of the dew-point temperature involves the solution of a non-linear equation for which iterative solution techniques, such as the Newton-Raphson method, are required. Previous studies have already been carried out using the GUM method and the Monte Carlo method but have not discussed the impact of the approximate numerical method used to provide the temperature estimation. One of the aims of this article is to take this approximation into account. Following the guidelines presented in the GUM Supplement 1, two alternative approaches can be developed: the forward measurement uncertainty propagation by the Monte Carlo method when using the Newton-Raphson numerical procedure; and the inverse measurement uncertainty propagation by Bayesian inference, based on prior available information regarding the usual dispersion of values obtained by the calibration process. The measurement uncertainties obtained using these two methods can be compared with previous results. Other relevant issues concerning this research are the broad application to measurements that require hygrometric conditions obtained from two-pressure humidity generators and, also, the ability to provide a solution that can be applied to similar iterative models. The research also studied the factors influencing both the use of the Monte Carlo method (such as the seed value and the convergence parameter) and the inverse uncertainty propagation using Bayesian inference (such as the pre-assigned tolerance, prior estimate, and standard deviation) in terms of their accuracy and adequacy.

  3. Annual Copper Mountain Conferences on Multigrid and Iterative Methods, Copper Mountain, Colorado

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    McCormick, Stephen F.

    This project supported the Copper Mountain Conference on Multigrid and Iterative Methods, held from 2007 to 2015, at Copper Mountain, Colorado. The subject of the Copper Mountain Conference Series alternated between Multigrid Methods in odd-numbered years and Iterative Methods in even-numbered years. Begun in 1983, the Series represents an important forum for the exchange of ideas in these two closely related fields. This report describes the Copper Mountain Conference on Multigrid and Iterative Methods, 2007-2015. Information on the conference series is available at http://grandmaster.colorado.edu/~copper/.

  4. A sequential quadratic programming algorithm using an incomplete solution of the subproblem

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Murray, W.; Prieto, F.J.

    1993-05-01

    We analyze sequential quadratic programming (SQP) methods to solve nonlinear constrained optimization problems that are more flexible in their definition than standard SQP methods. The type of flexibility introduced is motivated by the necessity to deviate from the standard approach when solving large problems. Specifically, we no longer require a minimizer of the QP subproblem to be determined or particular Lagrange multiplier estimates to be used. Our main focus is on an SQP algorithm that uses a particular augmented Lagrangian merit function. New results are derived for this algorithm under weaker conditions than previously assumed; in particular, it is not assumed that the iterates lie on a compact set.

  5. Accelerated fast iterative shrinkage thresholding algorithms for sparsity-regularized cone-beam CT image reconstruction.

    PubMed

    Xu, Qiaofeng; Yang, Deshan; Tan, Jun; Sawatzky, Alex; Anastasio, Mark A

    2016-04-01

    The development of iterative image reconstruction algorithms for cone-beam computed tomography (CBCT) remains an active and important research area. Even with hardware acceleration, the overwhelming majority of the available 3D iterative algorithms that implement nonsmooth regularizers remain computationally burdensome and have not been translated for routine use in time-sensitive applications such as image-guided radiation therapy (IGRT). In this work, two variants of the fast iterative shrinkage thresholding algorithm (FISTA) are proposed and investigated for accelerated iterative image reconstruction in CBCT. Algorithm acceleration was achieved by replacing the original gradient-descent step in the FISTAs by a subproblem that is solved by use of the ordered subset simultaneous algebraic reconstruction technique (OS-SART). Due to the preconditioning matrix adopted in the OS-SART method, two new weighted proximal problems were introduced and corresponding fast gradient projection-type algorithms were developed for solving them. We also provided efficient numerical implementations of the proposed algorithms that exploit the massive data parallelism of multiple graphics processing units. The improved rates of convergence of the proposed algorithms were quantified in computer-simulation studies and by use of clinical projection data corresponding to an IGRT study. The accelerated FISTAs were shown to possess dramatically improved convergence properties as compared to the standard FISTAs. For example, the number of iterations to achieve a specified reconstruction error could be reduced by an order of magnitude. Volumetric images reconstructed from clinical data were produced in under 4 min. The FISTA achieves a quadratic convergence rate and can therefore potentially reduce the number of iterations required to produce an image of a specified image quality as compared to first-order methods. We have proposed and investigated accelerated FISTAs for use with two nonsmooth penalty functions that will lead to further reductions in image reconstruction times while preserving image quality. Moreover, with the help of a mixed sparsity-regularization, better preservation of soft-tissue structures can be potentially obtained. The algorithms were systematically evaluated by use of computer-simulated and clinical data sets.
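
    The core FISTA recursion that the proposed variants modify is compact; a sketch for the generic L1-regularized least-squares problem (the classical Beck-Teboulle form, without the OS-SART subproblem or GPU parallelism of the paper):

        import numpy as np

        def fista(A, b, lam, n_iter=500):
            # FISTA for min_x 0.5*||Ax - b||^2 + lam*||x||_1.
            L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the smooth part
            x = np.zeros(A.shape[1])
            y, t = x.copy(), 1.0
            for _ in range(n_iter):
                z = y - A.T @ (A @ y - b) / L      # gradient step on the smooth term
                x_new = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft threshold
                t_new = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t * t))
                y = x_new + ((t - 1.0) / t_new) * (x_new - x)              # momentum step
                x, t = x_new, t_new
            return x

        rng = np.random.default_rng(0)
        A = rng.normal(size=(40, 100))
        x_true = np.zeros(100)
        x_true[[5, 37, 80]] = [1.0, -2.0, 1.5]
        xhat = fista(A, A @ x_true, lam=0.1)
        print(np.flatnonzero(np.abs(xhat) > 0.1))  # typically recovers the support {5, 37, 80}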

  6. Krylov subspace iterative methods for boundary element method based near-field acoustic holography.

    PubMed

    Valdivia, Nicolas; Williams, Earl G

    2005-02-01

    The reconstruction of the acoustic field for general surfaces is obtained from the solution of a matrix system that results from a boundary integral equation discretized using boundary element methods. The solution to the resultant matrix system is obtained using iterative regularization methods that counteract the effect of noise on the measurements. These methods do not require the calculation of the singular value decomposition, which can be expensive when the matrix system is considerably large. Krylov subspace methods are iterative methods that exhibit the phenomenon known as "semi-convergence": the optimal regularized solution is obtained after a few iterations, but if the iteration is not stopped, the method converges to a solution that is generally corrupted by the errors on the measurements. For these methods, the number of iterations plays the role of the regularization parameter. We focus our attention on the study of the regularizing properties of Krylov subspace methods such as conjugate gradients, least-squares QR (LSQR), and the recently proposed Hybrid method. A discussion and comparison of the available stopping rules is included. A vibrating plate is considered as an example to validate our results.
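
    Semi-convergence is easy to reproduce on a noisy ill-conditioned system: the error against the true solution first decreases and then grows with the iteration count, which is why the number of iterations acts as the regularization parameter. A sketch using plain Landweber iteration as a stand-in for the CG-type methods discussed (synthetic problem, illustrative only):

        import numpy as np

        rng = np.random.default_rng(0)

        # Ill-conditioned synthetic system plus measurement noise.
        n = 20
        U, _ = np.linalg.qr(rng.normal(size=(n, n)))
        V, _ = np.linalg.qr(rng.normal(size=(n, n)))
        A = U @ np.diag(0.6 ** np.arange(n)) @ V.T
        x_true = V[:, 0] + 0.5 * V[:, 1]
        b = A @ x_true + 1e-3 * rng.normal(size=n)

        # Landweber iteration: the relative error dips, then climbs again as the
        # iteration starts to fit the noise in the poorly resolved directions.
        omega = 1.0 / np.linalg.norm(A, 2) ** 2
        x = np.zeros(n)
        errs = []
        for _ in range(100_000):
            x = x + omega * (A.T @ (b - A @ x))
            errs.append(np.linalg.norm(x - x_true) / np.linalg.norm(x_true))
        k_best = int(np.argmin(errs))
        print(k_best, errs[k_best], errs[-1])   # early minimum, larger final error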

  7. Nested Conjugate Gradient Algorithm with Nested Preconditioning for Non-linear Image Restoration.

    PubMed

    Skariah, Deepak G; Arigovindan, Muthuvel

    2017-06-19

    We develop a novel optimization algorithm, which we call the Nested Non-Linear Conjugate Gradient algorithm (NNCG), for image restoration based on quadratic data fitting and smooth non-quadratic regularization. The algorithm is constructed as a nesting of two conjugate gradient (CG) iterations. The outer iteration is constructed as a preconditioned non-linear CG algorithm; the preconditioning is performed by the inner CG iteration, which is linear. The inner CG iteration, which performs the preconditioning for the outer CG iteration, is itself accelerated by another FFT-based non-iterative preconditioner. We prove that the method converges to a stationary point for both convex and non-convex regularization functionals. We demonstrate experimentally that the proposed method outperforms the well-known majorization-minimization method used for convex regularization, and a non-convex inertial-proximal method for non-convex regularization functionals.

  8. An implicit-iterative solution of the heat conduction equation with a radiation boundary condition

    NASA Technical Reports Server (NTRS)

    Williams, S. D.; Curry, D. M.

    1977-01-01

    For the problem of predicting one-dimensional heat transfer between conducting and radiating media by an implicit finite difference method, four different formulations were used to approximate the surface radiation boundary condition while retaining an implicit formulation for the interior temperature nodes: an explicit boundary condition, a linearized boundary condition, an iterative boundary condition, and a semi-iterative boundary method. The results of these methods in predicting the surface temperature of the space shuttle orbiter thermal protection system model under a variety of heating rates were compared. The iterative technique kept the surface temperature bounded at each step. While the linearized and explicit methods were generally more efficient, the iterative and semi-iterative techniques provided a realistic surface temperature response without requiring step-size control techniques.

  9. Comparing direct and iterative equation solvers in a large structural analysis software system

    NASA Technical Reports Server (NTRS)

    Poole, E. L.

    1991-01-01

    Two direct Choleski equation solvers and two iterative preconditioned conjugate gradient (PCG) equation solvers used in a large structural analysis software system are described. The two direct solvers are implementations of the Choleski method for variable-band matrix storage and sparse matrix storage. The two iterative PCG solvers include the Jacobi conjugate gradient method and an incomplete Choleski conjugate gradient method. The performance of the direct and iterative solvers is compared by solving several representative structural analysis problems. Some key factors affecting the performance of the iterative solvers relative to the direct solvers are identified.

  10. Upwind relaxation methods for the Navier-Stokes equations using inner iterations

    NASA Technical Reports Server (NTRS)

    Taylor, Arthur C., III; Ng, Wing-Fai; Walters, Robert W.

    1992-01-01

    A subsonic and a supersonic problem are treated by an upwind line-relaxation algorithm for the Navier-Stokes equations using inner iterations to accelerate convergence to the steady-state solution and thereby minimize CPU time. The ability of the inner iterative procedure to mimic the quadratic convergence of the direct solver method is demonstrated in both test problems, although some of the nonquadratic inner-iteration settings proved more efficient than the quadratic ones. In the more successful, supersonic test case, inner iteration required only about 65 percent of the CPU time entailed by the line-relaxation method.

  11. Moving force identification based on modified preconditioned conjugate gradient method

    NASA Astrophysics Data System (ADS)

    Chen, Zhen; Chan, Tommy H. T.; Nguyen, Andy

    2018-06-01

    This paper develops a modified preconditioned conjugate gradient (M-PCG) method for moving force identification (MFI) by improving the conjugate gradient (CG) and preconditioned conjugate gradient (PCG) methods with a modified Gram-Schmidt algorithm. The method aims to obtain more accurate and more efficient identification results from the responses of a bridge deck caused by passing vehicles, which are known to be sensitive to the ill-posedness of the inverse problem. A simply supported beam model with biaxial time-varying forces is used to generate numerical simulations with various analysis scenarios to assess the effectiveness of the method. Evaluation results show that the regularization matrix L and the number of iterations j strongly influence the identification accuracy and noise immunity of M-PCG. Compared with the conventional counterpart of SVD embedded in the time domain method (TDM) and the standard form of CG, M-PCG with a proper regularization matrix offers advantages such as better adaptability and greater robustness to ill-posed problems. More importantly, it is shown that the average optimal number of iterations of M-PCG can be reduced by more than 70% compared with PCG, which makes M-PCG a preferred choice for field MFI applications.
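
    For orientation, a minimal sketch of the modified Gram-Schmidt building block named above, which orthogonalizes a set of vectors one projection at a time and is numerically more stable than the classical variant; the paper's specific embedding of this step into M-PCG is not reproduced here.

      import numpy as np

      def modified_gram_schmidt(A):
          """QR factorization of A (m x n, full column rank) by modified Gram-Schmidt."""
          m, n = A.shape
          Q = A.astype(float).copy()
          R = np.zeros((n, n))
          for j in range(n):
              R[j, j] = np.linalg.norm(Q[:, j])
              Q[:, j] /= R[j, j]
              for k in range(j + 1, n):        # immediately remove the new direction
                  R[j, k] = Q[:, j] @ Q[:, k]  # from all remaining columns
                  Q[:, k] -= R[j, k] * Q[:, j]
          return Q, R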

  12. Comparison of computational to human observer detection for evaluation of CT low dose iterative reconstruction

    NASA Astrophysics Data System (ADS)

    Eck, Brendan; Fahmi, Rachid; Brown, Kevin M.; Raihani, Nilgoun; Wilson, David L.

    2014-03-01

    Model observers were created and compared to human observers for the detection of low contrast targets in computed tomography (CT) images reconstructed with an advanced, knowledge-based, iterative image reconstruction method for low x-ray dose imaging. A 5-channel Laguerre-Gauss Hotelling Observer (CHO) was used with internal noise added to the decision variable (DV) and/or channel outputs (CO). Models were defined by parameters: (k1) DV-noise with standard deviation (std) proportional to DV std; (k2) DV-noise with constant std; (k3) CO-noise with constant std across channels; and (k4) CO-noise in each channel with std proportional to CO variance. Four-alternative forced choice (4AFC) human observer studies were performed on sub-images extracted from phantom images with and without a "pin" target. Model parameters were estimated using maximum likelihood comparison to human probability correct (PC) data. PC in human and all model observers increased with dose, contrast, and size, and was much higher for advanced iterative reconstruction (IMR) than for filtered back projection (FBP). Detection in IMR was better than in FBP at 1/3 dose, suggesting significant dose savings. Model(k1,k2,k3,k4) gave the best overall fit to humans across independent variables (dose, size, contrast, and reconstruction) at fixed display window. However, Model(k1) performed better when model complexity was accounted for using the Akaike information criterion. Model(k1) fit the extraordinary detectability difference between IMR and FBP, despite the different noise quality. It is anticipated that the model observer will predict results from iterative reconstruction methods having similar noise characteristics, enabling rapid comparison of methods.

  13. Efficient Geometry and Data Handling for Large-Scale Monte Carlo - Thermal-Hydraulics Coupling

    NASA Astrophysics Data System (ADS)

    Hoogenboom, J. Eduard

    2014-06-01

    Detailed coupling of thermal-hydraulics calculations to Monte Carlo reactor criticality calculations requires each axial layer of each fuel pin to be defined separately in the input to the Monte Carlo code, in order to assign to each volume the temperature according to the result of the TH calculation and, if the volume contains coolant, also the density of the coolant. This leads to huge input files for even small systems. In this paper a methodology for dynamic assignment of temperatures with respect to cross-section data is demonstrated to overcome this problem. The method is implemented in MCNP5. The method is verified for an infinite lattice with 3x3 BWR-type fuel pins with fuel, cladding and moderator/coolant explicitly modeled. For each pin 60 axial zones are considered with different temperatures and coolant densities. The results for the axial power distribution per fuel pin are compared to a standard MCNP5 run in which all 9x60 cells for fuel, cladding and coolant are explicitly defined and their respective temperatures determined from the TH calculation. Full agreement is obtained. For large-scale application the method is demonstrated for an infinite lattice with 17x17 PWR-type fuel assemblies with 25 rods replaced by guide tubes. Again, all geometrical detail is retained. The method was used in a procedure for coupled Monte Carlo and thermal-hydraulics iterations. Using an optimised iteration technique, convergence was obtained in 11 iteration steps.

  14. Iterative discrete ordinates solution of the equation for surface-reflected radiance

    NASA Astrophysics Data System (ADS)

    Radkevich, Alexander

    2017-11-01

    This paper presents a new method of numerical solution of the integral equation for the radiance reflected from an anisotropic surface. The equation relates the radiance at the surface level with the BRDF and solutions of the standard radiative transfer problems for a slab with no reflection on its surfaces. It is also shown that the kernel of the equation satisfies the condition for the existence of a unique solution and for the convergence of successive approximations to that solution. The developed method features two basic steps: discretization on a 2D quadrature, and solving the resulting system of algebraic equations with a successive over-relaxation method based on the Gauss-Seidel iterative process. Presented numerical examples show good agreement between the surface-reflected radiance obtained with DISORT and with the proposed method. An analysis of the contributions of the direct and diffuse (but not yet reflected) parts of the downward radiance to the total solution is performed. Together, they represent a very good initial guess for the iterative process, which ensures fast convergence. Numerical evidence is given that the fastest convergence occurs with a relaxation parameter of 1 (no relaxation). An integral equation for the BRDF is derived as an inversion of the original equation, and its potential for BRDF retrievals is analyzed. The approach is found not to be viable, as the BRDF equation appears to be an ill-posed problem and requires knowledge of the surface-reflected radiance on the entire domain of both Sun and viewing zenith angles.
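
    A minimal sketch of the successive over-relaxation (SOR) scheme described above, written for a generic linear system A x = b with nonzero diagonal; omega = 1 recovers plain Gauss-Seidel, the setting the paper found to converge fastest.

      import numpy as np

      def sor(A, b, omega=1.0, x0=None, tol=1e-10, max_sweeps=500):
          """SOR sweeps for A x = b; omega = 1 is plain Gauss-Seidel."""
          n = len(b)
          x = np.zeros(n) if x0 is None else x0.copy()
          for _ in range(max_sweeps):
              x_prev = x.copy()
              for i in range(n):
                  # already-updated entries below i, previous-sweep entries above i
                  s = A[i, :i] @ x[:i] + A[i, i + 1:] @ x_prev[i + 1:]
                  x[i] = (1 - omega) * x_prev[i] + omega * (b[i] - s) / A[i, i]
              if np.linalg.norm(x - x_prev, np.inf) < tol:
                  break
          return x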

  15. Improving the iterative Linear Interaction Energy approach using automated recognition of configurational transitions.

    PubMed

    Vosmeer, C Ruben; Kooi, Derk P; Capoferri, Luigi; Terpstra, Margreet M; Vermeulen, Nico P E; Geerke, Daan P

    2016-01-01

    Recently an iterative method was proposed to enhance the accuracy and efficiency of ligand-protein binding affinity prediction through linear interaction energy (LIE) theory. For ligand binding to flexible Cytochrome P450s (CYPs), this method was shown to decrease the root-mean-square error and standard deviation of error prediction by combining interaction energies of simulations starting from different conformations. Thereby, different parts of protein-ligand conformational space are sampled in parallel simulations. The iterative LIE framework relies on the assumption that separate simulations explore different local parts of phase space, and do not show transitions to other parts of configurational space that are already covered in parallel simulations. In this work, a method is proposed to (automatically) detect such transitions during the simulations that are performed to construct LIE models and to predict binding affinities. Using noise-canceling techniques and splines to fit time series of the raw data for the interaction energies, transitions during simulation between different parts of phase space are identified. Boolean selection criteria are then applied to determine which parts of the interaction energy trajectories are to be used as input for the LIE calculations. Here we show that this filtering approach benefits the predictive quality of our previous CYP 2D6-aryloxypropanolamine LIE model. In addition, an analysis is performed of the gain in computational efficiency that can be obtained from monitoring simulations using the proposed filtering method and by prematurely terminating simulations accordingly.

  16. A framelet-based iterative maximum-likelihood reconstruction algorithm for spectral CT

    NASA Astrophysics Data System (ADS)

    Wang, Yingmei; Wang, Ge; Mao, Shuwei; Cong, Wenxiang; Ji, Zhilong; Cai, Jian-Feng; Ye, Yangbo

    2016-11-01

    Standard computed tomography (CT) cannot reproduce spectral information of an object. Hardware solutions include dual-energy CT, which scans the object twice at different x-ray energy levels, and energy-discriminative detectors, which can separate lower and higher energy levels from a single x-ray scan. In this paper, we propose a software solution and give an iterative algorithm that reconstructs an image with spectral information from just one scan with a standard energy-integrating detector. The spectral information obtained can be used to produce color CT images, spectral curves of the attenuation coefficient μ(r,E) at points inside the object, and photoelectric images, which are all valuable imaging tools in cancer diagnosis. Our software solution requires no change to the hardware of a CT machine. With the Shepp-Logan phantom, we have found that although the photoelectric and Compton components were not perfectly reconstructed, their composite effect was very accurately reconstructed as compared to the ground truth and the dual-energy CT counterpart. This means that our proposed method has an intrinsic benefit in beam hardening correction and metal artifact reduction. The algorithm is based on a nonlinear polychromatic acquisition model for x-ray CT. The key technique is a sparse representation of iterations in a framelet system. Convergence of the algorithm is studied. This is believed to be the first application of framelet imaging tools to a nonlinear inverse problem.

  17. AIR-MRF: Accelerated iterative reconstruction for magnetic resonance fingerprinting.

    PubMed

    Cline, Christopher C; Chen, Xiao; Mailhe, Boris; Wang, Qiu; Pfeuffer, Josef; Nittka, Mathias; Griswold, Mark A; Speier, Peter; Nadar, Mariappan S

    2017-09-01

    Existing approaches for reconstruction of multiparametric maps with magnetic resonance fingerprinting (MRF) are currently limited by their estimation accuracy and reconstruction time. We aimed to address these issues with a novel combination of iterative reconstruction, fingerprint compression, additional regularization, and accelerated dictionary search methods. The pipeline described here, accelerated iterative reconstruction for magnetic resonance fingerprinting (AIR-MRF), was evaluated with simulations as well as phantom and in vivo scans. We found that the AIR-MRF pipeline provided reduced parameter estimation errors compared to non-iterative and other iterative methods, particularly at shorter sequence lengths. Accelerated dictionary search methods incorporated into the iterative pipeline reduced the reconstruction time with little loss of quality. Copyright © 2017 Elsevier Inc. All rights reserved.

  18. Direct Iterative Nonlinear Inversion by Multi-frequency T-matrix Completion

    NASA Astrophysics Data System (ADS)

    Jakobsen, M.; Wu, R. S.

    2016-12-01

    Researchers in the mathematical physics community have recently proposed a conceptually new method for solving nonlinear inverse scattering problems (like FWI) which is inspired by the theory of nonlocality of physical interactions. The conceptually new method, which may be referred to as the T-matrix completion method, is very interesting since it is not based on linearization at any stage; there are also no gradient vectors or (inverse) Hessian matrices to calculate. However, the convergence radius of this promising T-matrix completion method is seriously restricted by its use of single-frequency scattering data only. In this study, we have developed a modified version of the T-matrix completion method which we believe is more suitable for applications to nonlinear inverse scattering problems in (exploration) seismology, because it makes use of multi-frequency data. Essentially, we have simplified the single-frequency T-matrix completion method of Levinson and Markel and combined it with the standard sequential frequency inversion (multi-scale regularization) method. For each frequency, we first estimate the experimental T-matrix by using the Moore-Penrose pseudoinverse. This experimental T-matrix is then used to initiate an iterative procedure for successive estimation of the scattering potential and the T-matrix, using the Lippmann-Schwinger equation for the nonlinear relation between these two quantities. The main physical requirements in the basic iterative cycle are that the T-matrix be data-compatible and the scattering potential operator be dominantly local, although a non-local scattering potential operator is allowed in the intermediate iterations. In our simplified T-matrix completion strategy, we ensure that the T-matrix updates are always data-compatible simply by adding a suitable correction term in the real-space coordinate representation. Singular-value decomposition representations are not required in our formulation since we have developed an efficient domain decomposition method. The results of several numerical experiments for the SEG/EAGE salt model illustrate the importance of using multi-frequency data when performing frequency-domain full waveform inversion in strongly scattering media via the new concept of T-matrix completion.

  19. A Two-Dimensional Helmholtz Equation Solution for the Multiple Cavity Scattering Problem

    DTIC Science & Technology

    2013-02-01

    obtained by using the block Gauss-Seidel iterative method. To show the convergence of the iterative method, we define the error between two...models to the general multiple cavity setting. Numerical examples indicate that the convergence of the Gauss-Seidel iterative method depends on the...variational approach. A block Gauss-Seidel iterative method is introduced to solve the coupled system of the multiple cavity scattering problem, where

  20. Robust parallel iterative solvers for linear and least-squares problems, Final Technical Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Saad, Yousef

    2014-01-16

    The primary goal of this project is to study and develop robust iterative methods for solving linear systems of equations and least squares systems. The focus of the Minnesota team is on algorithm development, robustness issues, and on tests and validation of the methods on realistic problems. 1. The project began with an investigation of how to practically update a preconditioner obtained from an ILU-type factorization when the coefficient matrix changes. 2. We investigated strategies to improve robustness in parallel preconditioners in a specific case of a PDE with discontinuous coefficients. 3. We explored ways to adapt standard preconditioners for solving linear systems arising from the Helmholtz equation. These are often difficult linear systems to solve by iterative methods. 4. We have also worked on purely theoretical issues related to the analysis of Krylov subspace methods for linear systems. 5. We developed an effective strategy for performing ILU factorizations for the case when the matrix is highly indefinite. The strategy uses shifting in some optimal way. The method was extended to the solution of Helmholtz equations by using complex shifts, yielding very good results in many cases. 6. We addressed the difficult problem of preconditioning sparse systems of equations on GPUs. 7. A by-product of the above work is a software package consisting of an iterative solver library for GPUs based on CUDA. This was made publicly available. It was the first such library that offers complete iterative solvers for GPUs. 8. We considered another form of ILU which blends coarsening techniques from multigrid with algebraic multilevel methods. 9. We have released a new version of our parallel solver, called pARMS [new version is version 3]. As part of this we have tested the code in complex settings, including the solution of Maxwell and Helmholtz equations and a problem of crystal growth. 10. As an application of polynomial preconditioning we considered the problem of evaluating f(A)v, which arises in statistical sampling. 11. As an application of the methods we developed, we tackled the problem of computing the diagonal of the inverse of a matrix. This arises in statistical applications as well as in many applications in physics. We explored probing methods as well as domain-decomposition type methods. 12. A collaboration with researchers from Toulouse, France, considered the important problem of computing the Schur complement in a domain-decomposition approach. 13. We explored new ways of preconditioning linear systems, based on low-rank approximations.

  1. Robust iterative method for nonlinear Helmholtz equation

    NASA Astrophysics Data System (ADS)

    Yuan, Lijun; Lu, Ya Yan

    2017-08-01

    A new iterative method is developed for solving the two-dimensional nonlinear Helmholtz equation which governs polarized light in media with the optical Kerr nonlinearity. In the strongly nonlinear regime, the nonlinear Helmholtz equation could have multiple solutions related to phenomena such as optical bistability and symmetry breaking. The new method exhibits a much more robust convergence behavior than existing iterative methods, such as frozen-nonlinearity iteration, Newton's method and damped Newton's method, and it can be used to find solutions when good initial guesses are unavailable. Numerical results are presented for the scattering of light by a nonlinear circular cylinder based on the exact nonlocal boundary condition and a pseudospectral method in the polar coordinate system.
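
    For orientation, a minimal 1D sketch of the frozen-nonlinearity iteration that the paper improves upon: at each pass the Kerr term |u|^2 is frozen at the previous iterate, leaving a linear Helmholtz solve. The grid, wavenumber, Kerr coefficient, source, and Dirichlet boundary conditions are illustrative assumptions, and this baseline iteration is exactly the kind that can fail to converge in the strongly nonlinear regime.

      import numpy as np

      n, L, k0, kerr = 400, 10.0, 2.0, 0.01
      dx = L / n
      xg = np.linspace(0, L, n)
      f = np.exp(-((xg - L / 2) ** 2)).astype(complex)   # assumed source term

      u = np.zeros(n, dtype=complex)
      for _ in range(100):
          # (d^2/dx^2 + k0^2 (1 + kerr |u|^2)) u_new = f, with |u|^2 frozen
          main = -2.0 / dx**2 + k0**2 * (1 + kerr * np.abs(u) ** 2)
          A = (np.diag(main)
               + np.diag(np.ones(n - 1) / dx**2, 1)
               + np.diag(np.ones(n - 1) / dx**2, -1))
          A[0, :] = 0.0; A[0, 0] = 1.0                   # crude Dirichlet ends (assumed)
          A[-1, :] = 0.0; A[-1, -1] = 1.0
          rhs = f.copy(); rhs[0] = rhs[-1] = 0.0
          u_new = np.linalg.solve(A, rhs)
          if np.max(np.abs(u_new - u)) < 1e-10:          # fixed-point convergence check
              break
          u = u_new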

  2. Multigrid methods in structural mechanics

    NASA Technical Reports Server (NTRS)

    Raju, I. S.; Bigelow, C. A.; Taasan, S.; Hussaini, M. Y.

    1986-01-01

    Although the application of multigrid methods to the equations of elasticity has been suggested, few such applications have been reported in the literature. In the present work, multigrid techniques are applied to the finite element analysis of a simply supported Bernoulli-Euler beam, and various aspects of the multigrid algorithm are studied and explained in detail. In this study, six grid levels were used to model half the beam. With linear prolongation and sequential ordering, the multigrid algorithm yielded results which were of machine accuracy with work equivalent to 200 standard Gauss-Seidel iterations on the fine grid. Also with linear prolongation and sequential ordering, the V(1,n) cycle with n greater than 2 yielded better convergence rates than the V(n,1) cycle. The restriction and prolongation operators were derived based on energy principles. Conserving energy during the inter-grid transfers required that the prolongation operator be the transpose of the restriction operator, and led to improved convergence rates. With energy-conserving prolongation and sequential ordering, the multigrid algorithm yielded results of machine accuracy with a work equivalent to 45 Gauss-Seidel iterations on the fine grid. The red-black ordering of relaxations yielded solutions of machine accuracy in a single V(1,1) cycle, which required work equivalent to about 4 iterations on the finest grid level.
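
    A minimal two-grid sketch of the ingredients discussed above: Gauss-Seidel smoothing, a linear prolongation P, restriction taken as the scaled transpose of P, and a Galerkin coarse operator. The 1D Poisson matrix stands in for the beam stiffness matrix and is an illustrative simplification.

      import numpy as np

      def gauss_seidel(A, b, x, sweeps):
          """A few Gauss-Seidel smoothing sweeps on A x = b."""
          for _ in range(sweeps):
              for i in range(len(b)):
                  x[i] = (b[i] - A[i, :i] @ x[:i] - A[i, i + 1:] @ x[i + 1:]) / A[i, i]
          return x

      def linear_prolongation(n_coarse):
          """1D linear interpolation from n_coarse to 2*n_coarse + 1 interior points."""
          n_fine = 2 * n_coarse + 1
          P = np.zeros((n_fine, n_coarse))
          for j in range(n_coarse):
              i = 2 * j + 1
              P[i, j] = 1.0
              P[i - 1, j] += 0.5
              P[i + 1, j] += 0.5
          return P

      def two_grid(A, b, x, P, nu1=1, nu2=1):
          R = 0.5 * P.T                      # restriction = scaled transpose of prolongation
          x = gauss_seidel(A, b, x, nu1)     # pre-smooth
          r_c = R @ (b - A @ x)              # restrict the residual
          A_c = R @ A @ P                    # Galerkin coarse-grid operator
          e_c = np.linalg.solve(A_c, r_c)    # exact coarse solve
          x = x + P @ e_c                    # prolongate the correction
          return gauss_seidel(A, b, x, nu2)  # post-smooth

      n_c = 15
      P = linear_prolongation(n_c)
      n_f = 2 * n_c + 1
      A = 2 * np.eye(n_f) - np.eye(n_f, k=1) - np.eye(n_f, k=-1)   # 1D Poisson stencil
      b = np.ones(n_f)
      x = two_grid(A, b, np.zeros(n_f), P)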

  3. Evolutionary Software Development (Developpement Evolutionnaire de Logiciels)

    DTIC Science & Technology

    2008-08-01

    development processes. While this may be true, frequently it is not. MIL-STD-498 was explicitly introduced to encourage iterative development; ISO/IEC... 12207 was carefully worded not to prohibit iterative development. Yet both standards were widely interpreted as requiring waterfall development, as

  5. Statistical shape model-based reconstruction of a scaled, patient-specific surface model of the pelvis from a single standard AP x-ray radiograph

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zheng Guoyan

    2010-04-15

    Purpose: The aim of this article is to investigate the feasibility of using a statistical shape model (SSM)-based reconstruction technique to derive a scaled, patient-specific surface model of the pelvis from a single standard anteroposterior (AP) x-ray radiograph and the feasibility of estimating the scale of the reconstructed surface model by performing a surface-based 3D/3D matching. Methods: Data sets of 14 pelvises (one plastic bone, 12 cadavers, and one patient) were used to validate the single-image based reconstruction technique. This reconstruction technique is based on a hybrid 2D/3D deformable registration process combining a landmark-to-ray registration with an SSM-based 2D/3D reconstruction. The landmark-to-ray registration was used to find an initial scale and an initial rigid transformation between the x-ray image and the SSM. The estimated scale and rigid transformation were used to initialize the SSM-based 2D/3D reconstruction. The optimal reconstruction was then achieved in three stages by iteratively matching the projections of the apparent contours extracted from a 3D model derived from the SSM to the image contours extracted from the x-ray radiograph: iterative affine registration, statistical instantiation, and iterative regularized shape deformation. The image contours are first detected by using a semiautomatic segmentation tool based on the Livewire algorithm and then approximated by a set of sparse dominant points that are adaptively sampled from the detected contours. The unknown scales of the reconstructed models were estimated by performing a surface-based 3D/3D matching between the reconstructed models and the associated ground truth models that were derived from a CT-based reconstruction method. Such a matching also allowed for computing the errors between the reconstructed models and the associated ground truth models. Results: The technique could reconstruct the surface models of all 14 pelvises directly from the landmark-based initialization. Depending on the surface-based matching techniques, the reconstruction errors were slightly different. When a surface-based iterative affine registration was used, an average reconstruction error of 1.6 mm was observed. This error increased to 1.9 mm when a surface-based iterative scaled rigid registration was used. Conclusions: It is feasible to reconstruct a scaled, patient-specific surface model of the pelvis from a single standard AP x-ray radiograph using the present approach. The unknown scale of the reconstructed model can be estimated by performing a surface-based 3D/3D matching.

  6. Twostep-by-twostep PIRK-type PC methods with continuous output formulas

    NASA Astrophysics Data System (ADS)

    Cong, Nguyen Huu; Xuan, Le Ngoc

    2008-11-01

    This paper deals with parallel predictor-corrector (PC) iteration methods based on collocation Runge-Kutta (RK) corrector methods with continuous output formulas for solving nonstiff initial-value problems (IVPs) for systems of first-order differential equations. At the nth step, the continuous output formulas are used not only for predicting the stage values in the PC iteration methods but also for calculating the step values at the (n+2)th step. In this case, the integration can proceed twostep-by-twostep. The resulting twostep-by-twostep (TBT) parallel-iterated RK-type (PIRK-type) methods with continuous output formulas (twostep-by-twostep PIRKC methods or TBTPIRKC methods) give a faster integration process. Fixed-stepsize applications of these TBTPIRKC methods to a few widely-used test problems reveal that the new PC methods are much more efficient than the well-known parallel-iterated RK methods (PIRK methods), parallel-iterated RK-type PC methods with continuous output formulas (PIRKC methods), and the sequential explicit RK codes DOPRI5 and DOP853 available from the literature.

  7. Development of iterative techniques for the solution of unsteady compressible viscous flows

    NASA Technical Reports Server (NTRS)

    Sankar, Lakshmi N.; Hixon, Duane

    1992-01-01

    The development of efficient iterative solution methods for the numerical solution of two- and three-dimensional compressible Navier-Stokes equations is discussed. Iterative time marching methods have several advantages over classical multi-step explicit time marching schemes and non-iterative implicit time marching schemes; in particular, they have better stability characteristics. In this work, another approach, based on the classical conjugate gradient method and known as the Generalized Minimum Residual (GMRES) algorithm, is investigated. The GMRES algorithm has been used in the past by a number of researchers for solving steady viscous and inviscid flow problems. Here, we investigate the suitability of this algorithm for solving the system of non-linear equations that arises in unsteady Navier-Stokes solvers at each time step.
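
    For orientation, a minimal matrix-free sketch of the idea: each implicit step solves a non-linear system by Newton iteration, with GMRES handling each linear solve through finite-difference Jacobian-vector products. The residual function F is an assumed stand-in for a discretized Navier-Stokes residual.

      import numpy as np
      from scipy.sparse.linalg import LinearOperator, gmres

      def newton_gmres_step(F, u, eps=1e-7, newton_iters=5):
          """One implicit step: solve F(u) = 0 by Newton, with GMRES for each linear solve."""
          for _ in range(newton_iters):
              Fu = F(u)
              def jv(v):                      # finite-difference Jacobian-vector product
                  return (F(u + eps * v) - Fu) / eps
              J = LinearOperator((u.size, u.size), matvec=jv, dtype=float)
              du, info = gmres(J, -Fu, restart=30, maxiter=100)
              u = u + du
              if np.linalg.norm(F(u)) < 1e-10:
                  break
          return u

      # toy usage with an assumed residual function
      F = lambda u: u**3 - 1.0
      print(newton_gmres_step(F, np.full(4, 2.0)))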

  8. Performance comparison of a new hybrid conjugate gradient method under exact and inexact line searches

    NASA Astrophysics Data System (ADS)

    Ghani, N. H. A.; Mohamed, N. S.; Zull, N.; Shoid, S.; Rivaie, M.; Mamat, M.

    2017-09-01

    The conjugate gradient (CG) method is one of the iterative techniques most prominently used in solving unconstrained optimization problems, due to its simplicity, low memory storage, and good convergence analysis. This paper presents a new hybrid conjugate gradient method, named the NRM1 method. The method is analyzed under exact and inexact line searches in given conditions. Theoretically, proofs show that the NRM1 method satisfies the sufficient descent condition with both line searches. The computational results indicate that the NRM1 method is capable of solving the standard unconstrained optimization problems used. Moreover, the NRM1 method performs better under the inexact line search than under the exact line search.
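
    For orientation, a minimal sketch of a non-linear CG loop under an inexact (Armijo backtracking) line search, the setting in which NRM1 is reported to perform best; the hybrid NRM1 coefficient itself is not reproduced, so the standard Polak-Ribiere+ formula is used as a stand-in.

      import numpy as np

      def backtracking(f, x, d, g, t=1.0, c=1e-4, shrink=0.5):
          """Inexact line search: shrink t until the Armijo condition holds."""
          fx = f(x)
          for _ in range(50):                 # guard against non-descent directions
              if f(x + t * d) <= fx + c * t * (g @ d):
                  break
              t *= shrink
          return t

      def nonlinear_cg(f, grad, x0, max_iter=200, tol=1e-8):
          x = x0.copy()
          g = grad(x)
          d = -g
          for _ in range(max_iter):
              t = backtracking(f, x, d, g)
              x_new = x + t * d
              g_new = grad(x_new)
              if np.linalg.norm(g_new) < tol:
                  return x_new
              beta = max(0.0, g_new @ (g_new - g) / (g @ g))  # Polak-Ribiere+ (stand-in)
              d = -g_new + beta * d
              x, g = x_new, g_new
          return x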

  9. A Novel Particle Swarm Optimization Algorithm for Global Optimization

    PubMed Central

    Wang, Chun-Feng; Liu, Kui

    2016-01-01

    Particle Swarm Optimization (PSO) is a recently developed optimization method which has attracted the interest of researchers in various areas due to its simplicity and effectiveness, and many variants have been proposed. In this paper, a novel Particle Swarm Optimization algorithm is presented, in which the information of the best neighbor of each particle and the best particle of the entire population in the current iteration is considered. Meanwhile, to avoid premature convergence, an abandonment mechanism is used. Furthermore, to improve the global convergence speed of the algorithm, a chaotic search is adopted around the best solution of the current iteration. To verify the performance of the algorithm, standard test functions have been employed. The experimental results show that the algorithm is much more robust and efficient than some existing Particle Swarm Optimization algorithms. PMID:26955387
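
    For orientation, a minimal sketch of a standard global-best PSO loop; the paper's variant additionally uses best-neighbor information, an abandonment mechanism, and a chaotic search around the current best, none of which are reproduced here.

      import numpy as np

      def pso(f, dim, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5, bound=5.0):
          rng = np.random.default_rng(0)
          x = rng.uniform(-bound, bound, (n_particles, dim))
          v = np.zeros_like(x)
          pbest = x.copy()                                  # per-particle best positions
          pbest_val = np.array([f(p) for p in x])
          gbest = pbest[np.argmin(pbest_val)].copy()        # global best position
          for _ in range(iters):
              r1, r2 = rng.random((2, n_particles, dim))
              v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
              x = np.clip(x + v, -bound, bound)
              vals = np.array([f(p) for p in x])
              improved = vals < pbest_val
              pbest[improved], pbest_val[improved] = x[improved], vals[improved]
              gbest = pbest[np.argmin(pbest_val)].copy()
          return gbest, pbest_val.min()

      # example on the sphere function, a standard test problem
      best_x, best_f = pso(lambda p: float(p @ p), dim=5)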

  10. Nested Krylov methods and preserving the orthogonality

    NASA Technical Reports Server (NTRS)

    Desturler, Eric; Fokkema, Diederik R.

    1993-01-01

    Recently the GMRESR inner-outer iteration scheme for the solution of linear systems of equations was proposed by Van der Vorst and Vuik. Similar methods have been proposed by Axelsson and Vassilevski, and by Saad (FGMRES). The outer iteration is GCR, which minimizes the residual over a given set of direction vectors. The inner iteration is GMRES, which at each step computes a new direction vector by approximately solving the residual equation. However, the optimality of the approximation over the space of outer search directions is ignored in the inner GMRES iteration. This leads to suboptimal corrections to the solution in the outer iteration, as components of the outer iteration directions may reenter the inner iteration process. Therefore we propose to preserve the orthogonality relations of GCR in the inner GMRES iteration. This gives optimal corrections; however, it involves working with a singular, non-symmetric operator. We discuss some important properties, and we show by experiments that, in terms of matrix-vector products, this modification (almost) always leads to better convergence. However, because we do more orthogonalizations, it does not always give an improved performance in CPU time. Furthermore, we discuss efficient implementations as well as the truncation possibilities of the outer GCR process. The experimental results indicate that for such methods it is advantageous to preserve the orthogonality in the inner iteration. Of course we can also use iteration schemes other than GMRES as the inner method; methods with short recurrences like BiCGSTAB are of interest.

  11. Learning Efficient Sparse and Low Rank Models.

    PubMed

    Sprechmann, P; Bronstein, A M; Sapiro, G

    2015-09-01

    Parsimony, including sparsity and low rank, has been shown to successfully model data in numerous machine learning and signal processing tasks. Traditionally, such modeling approaches rely on an iterative algorithm that minimizes an objective function with parsimony-promoting terms. The inherently sequential structure and data-dependent complexity and latency of iterative optimization constitute a major limitation in many applications requiring real-time performance or involving large-scale data. Another limitation encountered by these modeling techniques is the difficulty of their inclusion in discriminative learning scenarios. In this work, we propose to move the emphasis from the model to the pursuit algorithm, and develop a process-centric view of parsimonious modeling, in which a learned deterministic fixed-complexity pursuit process is used in lieu of iterative optimization. We show a principled way to construct learnable pursuit process architectures for structured sparse and robust low rank models, derived from the iteration of proximal descent algorithms. These architectures learn to approximate the exact parsimonious representation at a fraction of the complexity of the standard optimization methods. We also show that appropriate training regimes allow parsimonious models to be naturally extended to discriminative settings. State-of-the-art results are demonstrated on several challenging problems in image and audio processing, with several orders of magnitude speed-up compared to the exact optimization algorithms.

  12. Evaluation of MLACF based calculated attenuation brain PET imaging for FDG patient studies

    NASA Astrophysics Data System (ADS)

    Bal, Harshali; Panin, Vladimir Y.; Platsch, Guenther; Defrise, Michel; Hayden, Charles; Hutton, Chloe; Serrano, Benjamin; Paulmier, Benoit; Casey, Michael E.

    2017-04-01

    Calculating attenuation correction for brain PET imaging rather than using CT presents opportunities for low radiation dose applications such as pediatric imaging and serial scans to monitor disease progression. Our goal is to evaluate the iterative time-of-flight based maximum-likelihood activity and attenuation correction factors estimation (MLACF) method for clinical FDG brain PET imaging. FDG PET/CT brain studies were performed in 57 patients using the Biograph mCT (Siemens) four-ring scanner. The time-of-flight PET sinograms were acquired using the standard clinical protocol, consisting of a CT scan followed by 10 min of single-bed PET acquisition. Images were reconstructed using CT-based attenuation correction (CTAC) and used as a gold standard for comparison. Two methods were compared with respect to CTAC: a calculated brain attenuation correction (CBAC) and MLACF based PET reconstruction. Plane-by-plane scaling was performed for MLACF images in order to fix the variable axial scaling observed. The noise structure of the MLACF images differed from that of the CTAC images, and the reconstruction required a higher number of iterations to obtain comparable image quality. To analyze the pooled data, each dataset was registered to a standard template and standard regions of interest were extracted. An SUVr analysis of the brain regions of interest showed that CBAC and MLACF were each well correlated with CTAC SUVrs. A plane-by-plane error analysis indicated that there were local differences for both CBAC and MLACF images with respect to CTAC. The mean relative error in the standard regions of interest was less than 5% for both methods, and the mean absolute relative errors of the two methods were similar (3.4% ± 3.1% for CBAC and 3.5% ± 3.1% for MLACF). However, the MLACF method recovered activity adjoining the frontal sinus regions more accurately than the CBAC method. The use of plane-by-plane scaling of MLACF images was found to be a crucial step in order to obtain improved activity estimates. The presence of local errors in both MLACF and CBAC based reconstructions would require the use of a normal database for clinical assessment. However, further work is required in order to assess the clinical advantage of MLACF over the CBAC-based method.

  13. X-ray computed tomography using curvelet sparse regularization.

    PubMed

    Wieczorek, Matthias; Frikel, Jürgen; Vogel, Jakob; Eggl, Elena; Kopp, Felix; Noël, Peter B; Pfeiffer, Franz; Demaret, Laurent; Lasser, Tobias

    2015-04-01

    Reconstruction of x-ray computed tomography (CT) data remains a mathematically challenging problem in medical imaging. Complementing the standard analytical reconstruction methods, sparse regularization is growing in importance, as it allows inclusion of prior knowledge. The paper presents a method for sparse regularization based on the curvelet frame for the application to iterative reconstruction in x-ray computed tomography. In this work, the authors present an iterative reconstruction approach based on the alternating direction method of multipliers using curvelet sparse regularization. Evaluation of the method is performed on a specifically crafted numerical phantom dataset to highlight the method's strengths. Additional evaluation is performed on two real datasets from commercial scanners with different noise characteristics, a clinical bone sample acquired in a micro-CT and a human abdomen scanned in a diagnostic CT. The results clearly illustrate that curvelet sparse regularization has characteristic strengths. In particular, it improves the restoration and resolution of highly directional, high contrast features with smooth contrast variations. The authors also compare this approach to the popular technique of total variation and to traditional filtered backprojection. The authors conclude that curvelet sparse regularization is able to improve reconstruction quality by reducing noise while preserving highly directional features.

  14. Reverse engineering of integrated circuits

    DOEpatents

    Chisholm, Gregory H.; Eckmann, Steven T.; Lain, Christopher M.; Veroff, Robert L.

    2003-01-01

    Software and a method therein to analyze circuits. The software comprises several tools, each of which perform particular functions in the Reverse Engineering process. The analyst, through a standard interface, directs each tool to the portion of the task to which it is most well suited, rendering previously intractable problems solvable. The tools are generally used iteratively to produce a successively more abstract picture of a circuit, about which incomplete a priori knowledge exists.

  15. Application of the user-centred design process according ISO 9241-210 in air traffic control.

    PubMed

    König, Christina; Hofmann, Thomas; Bruder, Ralph

    2012-01-01

    Designing a usable human machine interface for air traffic control is challenging and should follow approved methods. The ISO 9241-210 standard promises high usability of products by integrating future users and following an iterative process. This contribution describes the proceeding and first results of the analysis and application of ISO 9241-210 to develop a planning tool for air traffic controllers.

  16. Comparing performance of standard and iterative linear unmixing methods for hyperspectral signatures

    NASA Astrophysics Data System (ADS)

    Gault, Travis R.; Jansen, Melissa E.; DeCoster, Mallory E.; Jansing, E. David; Rodriguez, Benjamin M.

    2016-05-01

    Linear unmixing is a method of decomposing a mixed signature to determine the component materials that are present in a sensor's field of view, along with the abundances at which they occur. Linear unmixing assumes that energy from the materials in the field of view mixes linearly across the spectrum of interest. Traditional unmixing methods can take advantage of adjacent pixels in the decomposition algorithm, but this is not possible for point sensors. This paper explores several iterative and non-iterative methods for linear unmixing and examines their effectiveness at identifying the individual signatures that make up simulated single-pixel mixed signatures, along with their corresponding abundances. The major hurdle addressed by the proposed method is that no neighboring-pixel information is available for the spectral signature of interest. Testing is performed using two collections of spectral signatures from the Johns Hopkins University Applied Physics Laboratory's Signatures Database software (SigDB): a hand-selected small dataset of 25 distinct signatures drawn from a larger dataset of approximately 1600 pure visible/near-infrared/short-wave-infrared (VIS/NIR/SWIR) spectra. Simulated spectra are created with three- and four-material mixtures randomly drawn from a dataset originating from SigDB, where the abundance of one material is swept in 10% increments from 10% to 90%, with the abundances of the other materials equally divided amongst the remainder. For the smaller dataset of 25 signatures, all combinations of three or four materials are used to create simulated spectra, from which the accuracy of the materials returned, as well as the correctness of the abundances, is compared to the inputs. The experiment is expanded to include the signatures from the larger dataset of almost 1600 signatures, evaluated using a Monte Carlo scheme with 5000 draws of three or four materials to create the simulated mixed signatures. The spectral similarity of the inputs to the output component signatures is calculated using the spectral angle mapper. Results show that iterative methods significantly outperform the traditional methods under the given test conditions.
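
    A minimal sketch of unmixing a single mixed pixel under the linear model y = E a: an unconstrained least-squares solve (non-iterative) is contrasted with an iterative non-negative solve. Appending a weighted row of ones to impose an approximate sum-to-one constraint is a common trick and an assumption here, not necessarily the paper's exact formulation.

      import numpy as np
      from scipy.optimize import nnls

      def unmix(E, y, sum_to_one_weight=10.0):
          """E: (bands x materials) endmember matrix, y: measured mixed spectrum."""
          # non-iterative estimate: may return negative (non-physical) abundances
          a_ls, *_ = np.linalg.lstsq(E, y, rcond=None)
          # iterative non-negative estimate with a soft sum-to-one constraint
          E_aug = np.vstack([E, sum_to_one_weight * np.ones(E.shape[1])])
          y_aug = np.append(y, sum_to_one_weight)
          a_nn, _ = nnls(E_aug, y_aug)
          return a_ls, a_nn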

  17. Fluence map optimization (FMO) with dose-volume constraints in IMRT using the geometric distance sorting method.

    PubMed

    Lan, Yihua; Li, Cunhua; Ren, Haozheng; Zhang, Yong; Min, Zhifang

    2012-10-21

    A new heuristic algorithm based on the so-called geometric distance sorting technique is proposed for solving the fluence map optimization problem with dose-volume constraints, one of the most essential tasks for inverse planning in IMRT. The framework of the proposed method is an iterative process which begins with a simple linearly constrained quadratic optimization model that ignores the dose-volume constraints; dose constraints for the voxels violating the dose-volume constraints are then gradually added into the quadratic optimization model step by step until all the dose-volume constraints are satisfied. In each iteration step, an interior point method is adopted to solve each new linearly constrained quadratic program. To choose the proper candidate voxels for the current round of constraint adding, a so-called geometric distance, defined in the transformed standard quadratic form of the fluence map optimization model, is used to guide the selection of the voxels. The new geometric distance sorting technique largely avoids the unexpected increase of the objective function value that constraint adding inevitably causes, and can be regarded as an upgrade of the traditional dose sorting technique. A geometric explanation of the proposed method is given and a proposition is proved to support the heuristic idea. In addition, a smart constraint adding/deleting strategy is designed to ensure stable iteration convergence. The new algorithm is tested on four cases, a head-neck, a prostate, a lung, and an oropharyngeal case, and compared with the algorithm based on the traditional dose sorting technique. Experimental results show that the proposed method is more suitable for guiding the selection of new constraints than the traditional dose sorting method, especially for cases whose target regions have non-convex shapes, and is to some extent a more efficient optimization technique for choosing constraints. By integrating a smart constraint adding/deleting scheme within the iteration framework, the new technique yields an improved algorithm for solving the fluence map optimization problem with dose-volume constraints.

  18. Reducing the latency of the Fractal Iterative Method to half an iteration

    NASA Astrophysics Data System (ADS)

    Béchet, Clémentine; Tallon, Michel

    2013-12-01

    The fractal iterative method for atmospheric tomography (FRiM-3D) has been introduced to solve the wavefront reconstruction problem at the dimensions of an ELT with a low computational cost. Previous studies reported that only 3 iterations of the algorithm are required to provide the best adaptive optics (AO) performance. Nevertheless, any iterative method in adaptive optics suffers from the intrinsic latency induced by the fact that one iteration can start only once the previous one is completed; iterations hardly match the low-latency requirement of the AO real-time computer. We present here a new approach that avoids iterations in the computation of the commands with FRiM-3D, thus allowing a low-latency AO response even at the scale of the European ELT (E-ELT). The method highlights the importance of the "warm-start" strategy in adaptive optics; to our knowledge, this particular way of using the "warm-start" has not been reported before. Furthermore, by removing the requirement of iterating to compute the commands, the computation of the reconstruction with FRiM-3D can be simplified and reduced to at most half the computational cost of a classical iteration. Thanks to simulations of both single-conjugate and multi-conjugate AO for the E-ELT, with FRiM-3D on the Octopus ESO simulator, we demonstrate the benefit of this approach. We finally demonstrate the robustness of this new implementation with respect to increasing measurement noise, wind speed, and even modeling errors.

  19. Fine‐resolution conservation planning with limited climate‐change information

    USGS Publications Warehouse

    Shah, Payal; Mallory, Mindy L.; Ando, Amy W.; Guntenspergen, Glenn R.

    2017-01-01

    Climate‐change induced uncertainties in future spatial patterns of conservation‐related outcomes make it difficult to implement standard conservation‐planning paradigms. A recent study translates Markowitz's risk‐diversification strategy from finance to conservation settings, enabling conservation agents to use this diversification strategy for allocating conservation and restoration investments across space to minimize the risk associated with such uncertainty. However, this method is information intensive and requires a large number of forecasts of ecological outcomes associated with possible climate‐change scenarios for carrying out fine‐resolution conservation planning. We developed a technique for iterative, spatial portfolio analysis that can be used to allocate scarce conservation resources across a desired level of subregions in a planning landscape in the absence of a sufficient number of ecological forecasts. We applied our technique to the Prairie Pothole Region in central North America. A lack of sufficient future climate information prevented attainment of the most efficient risk‐return conservation outcomes in the Prairie Pothole Region. The difference in expected conservation returns between conservation planning with limited climate‐change information and full climate‐change information was as large as 30% for the Prairie Pothole Region even when the most efficient iterative approach was used. However, our iterative approach allowed finer resolution portfolio allocation with limited climate‐change forecasts such that the best possible risk‐return combinations were obtained. With our most efficient iterative approach, the expected loss in conservation outcomes owing to limited climate‐change information could be reduced by 17% relative to other iterative approaches.

  20. Iterative deep convolutional encoder-decoder network for medical image segmentation.

    PubMed

    Jung Uk Kim; Hak Gu Kim; Yong Man Ro

    2017-07-01

    In this paper, we propose a novel medical image segmentation method using an iterative deep learning framework. We combine an iterative learning approach with an encoder-decoder network to improve segmentation results, which makes it possible to precisely localize regions of interest (ROIs), including complex shapes or detailed textures, in medical images in an iterative manner. The proposed iterative deep convolutional encoder-decoder network consists of two main paths: a convolutional encoder path and a convolutional decoder path with iterative learning. Experimental results show that the proposed iterative deep learning framework yields excellent segmentation performance on various medical images. The effectiveness of the proposed method is demonstrated by comparison with other state-of-the-art medical image segmentation methods.

  1. Experiments on sparsity assisted phase retrieval of phase objects

    NASA Astrophysics Data System (ADS)

    Gaur, Charu; Lochab, Priyanka; Khare, Kedar

    2017-05-01

    Iterative phase retrieval algorithms such as the Gerchberg-Saxton method and the Fienup hybrid input-output method are known to suffer from the twin-image stagnation problem, particularly when the solution to be recovered is complex valued and has centrosymmetric support. Recently we showed that the twin-image stagnation problem can be addressed using image sparsity ideas (Gaur et al 2015 J. Opt. Soc. Am. A 32 1922). In this work we test this sparsity-assisted phase retrieval method with experimental single-shot Fourier transform intensity data frames corresponding to phase objects displayed on a spatial light modulator. The standard iterative phase retrieval algorithms are combined with an image-sparsity-based penalty in an adaptive manner. Illustrations for both binary and continuous phase objects are provided. It is observed that the image sparsity constraint plays an important role in obtaining meaningful phase recovery without encountering the well-known stagnation problems. The results are valuable for enabling single-shot coherent diffraction imaging of phase objects for applications involving illumination wavelengths over a wide range of the electromagnetic spectrum.
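
    For orientation, a minimal sketch of the Gerchberg-Saxton iteration named above, alternating between the measured Fourier magnitude and an object-domain support constraint; the adaptive sparsity penalty the paper adds on top is omitted.

      import numpy as np

      def gerchberg_saxton(fourier_mag, support, n_iter=200):
          """fourier_mag: measured |FFT| (2D array); support: boolean object-domain mask."""
          rng = np.random.default_rng(0)
          g = rng.random(fourier_mag.shape) * support     # random start inside the support
          for _ in range(n_iter):
              G = np.fft.fft2(g)
              G = fourier_mag * np.exp(1j * np.angle(G))  # impose measured Fourier magnitude
              g = np.fft.ifft2(G)
              g = g * support                             # impose object-domain support
          return g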

  2. Efficient Determination of Free Energy Landscapes in Multiple Dimensions from Biased Umbrella Sampling Simulations Using Linear Regression.

    PubMed

    Meng, Yilin; Roux, Benoît

    2015-08-11

    The weighted histogram analysis method (WHAM) is a standard protocol for postprocessing the information from biased umbrella sampling simulations to construct the potential of mean force with respect to a set of order parameters. By virtue of the WHAM equations, the unbiased density of states is determined by satisfying a self-consistent condition through an iterative procedure. While the method works very effectively when the number of order parameters is small, its computational cost grows rapidly in higher dimensions. Here, we present a simple and efficient alternative strategy, which avoids solving the self-consistent WHAM equations iteratively. An efficient multivariate linear regression framework is utilized to link the biased probability densities of individual umbrella windows and yield an unbiased global free energy landscape in the space of order parameters. It is demonstrated with practical examples that free energy landscapes that are comparable in accuracy to WHAM can be generated at a small fraction of the cost.
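
    For contrast, a minimal sketch of the standard self-consistent WHAM iteration that the regression strategy avoids; the histogram counts n_kb, bias factors c_kb = exp(-beta U_k(x_b)), and per-window sample counts N_k are assumed inputs from K umbrella windows over B bins.

      import numpy as np

      def wham(n_kb, c_kb, N_k, n_iter=5000, tol=1e-10):
          """Self-consistent WHAM: returns unnormalized bin probabilities and window free energies."""
          K, B = n_kb.shape
          f_k = np.zeros(K)                      # window free energies (in kT units)
          for _ in range(n_iter):
              # unbiased probability per bin, given the current window free energies
              denom = (N_k[:, None] * np.exp(f_k)[:, None] * c_kb).sum(axis=0)
              p_b = n_kb.sum(axis=0) / denom
              f_new = -np.log((c_kb * p_b).sum(axis=1))
              if np.max(np.abs(f_new - f_k)) < tol:
                  break
              f_k = f_new
          return p_b, f_k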

  4. Subsampled Hessian Newton Methods for Supervised Learning.

    PubMed

    Wang, Chien-Chih; Huang, Chun-Heng; Lin, Chih-Jen

    2015-08-01

    Newton methods can be applied in many supervised learning approaches. However, for large-scale data, the use of the whole Hessian matrix can be time-consuming. Recently, subsampled Newton methods have been proposed to reduce the computational time by using only a subset of the data to calculate an approximation of the Hessian matrix. Unfortunately, we find that in some situations the running speed is worse than that of the standard Newton method, because cheaper but less accurate search directions are used. In this work, we propose some novel techniques to improve the existing subsampled Hessian Newton method. The main idea is to solve a two-dimensional subproblem per iteration to adjust the search direction to better minimize the second-order approximation of the function value. We prove the theoretical convergence of the proposed method. Experiments on logistic regression, linear SVM, maximum entropy, and deep networks indicate that our techniques significantly reduce the running time of the subsampled Hessian Newton method. The resulting algorithm becomes a compelling alternative to the standard Newton method for large-scale data classification.
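
    For orientation, a minimal sketch of a subsampled Hessian Newton step for L2-regularized logistic regression: the gradient uses all data, the Hessian only a random subsample, and a short matrix-free CG solve gives the direction. The paper's two-dimensional subproblem refinement is not reproduced here.

      import numpy as np

      def sigmoid(z):
          return 1.0 / (1.0 + np.exp(-z))

      def subsampled_newton_step(X, y, w, lam=1e-3, sample=0.1, rng=None):
          """One Newton step for logistic regression (labels y in {0, 1})."""
          if rng is None:
              rng = np.random.default_rng(0)
          n = X.shape[0]
          g = X.T @ (sigmoid(X @ w) - y) / n + lam * w       # full gradient
          idx = rng.choice(n, max(1, int(sample * n)), replace=False)
          Xs = X[idx]                                        # Hessian subsample
          mu = sigmoid(Xs @ w)
          D = mu * (1.0 - mu)
          hess_vec = lambda v: Xs.T @ (D * (Xs @ v)) / len(idx) + lam * v
          # a few matrix-free CG iterations for H p = -g
          p = np.zeros_like(w); r = -g; q = r.copy(); rs = r @ r
          for _ in range(20):
              Hq = hess_vec(q)
              alpha = rs / (q @ Hq)
              p += alpha * q
              r -= alpha * Hq
              rs_new = r @ r
              if rs_new < 1e-12:
                  break
              q = r + (rs_new / rs) * q
              rs = rs_new
          return w + p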

  5. Practice management/role of the medical director.

    PubMed

    Merrill, Douglas G

    2014-06-01

    Although the nature of ambulatory surgery has changed over the years, the ideal role of the medical director mirrors its earliest iterations, focusing on excellent customer service and high quality of care. These efforts are supported by 3 modern methods of quality management borrowed from industry: intentional process improvement, standard care pathways, and monitoring outcomes to determine the efficacy of each. These methods are critical to master in order to lead the facility and providers to the highest quality of care and service. Copyright © 2014 Elsevier Inc. All rights reserved.

  6. Performance analysis of Rogowski coils and the measurement of the total toroidal current in the ITER machine

    NASA Astrophysics Data System (ADS)

    Quercia, A.; Albanese, R.; Fresa, R.; Minucci, S.; Arshad, S.; Vayakis, G.

    2017-12-01

    The paper carries out a comprehensive study of the performance of Rogowski coils. It describes methodologies that were developed in order to assess the capabilities of the Continuous External Rogowski (CER), which measures the total toroidal current in the ITER machine. Even though the paper mainly considers the CER, the contents are general and relevant to any Rogowski sensor. The CER consists of two concentric helical coils which are wound along a complex closed path. Modelling and computational activities were performed to quantify the measurement errors, taking detailed account of the ITER environment. The geometrical complexity of the sensor is accurately accounted for, and the standard model which provides the classical expression for the flux linkage of Rogowski sensors is quantitatively validated. Then, in order to take into account the non-ideality of the winding, a generalized expression, formally analogous to the classical one, is presented. Models to determine the worst-case and the statistical measurement accuracies are hence provided. The following sources of error are considered: the effect of the joints, disturbances due to external sources of field (the currents flowing in the poloidal field coils and the ferromagnetic inserts of ITER), deviations from ideal geometry, toroidal field variations, calibration, noise, and integration drift. The proposed methods are applied to the measurement error of the CER, in particular in its high and low operating ranges, as prescribed by the ITER system design description documents, and during transients, which highlight the large time constant related to the shielding of the vacuum vessel. The analyses presented in the paper show that the design of the CER diagnostic is capable of achieving the requisite performance for the operation of the ITER machine.

  7. Relative Displacement Method for Track-Structure Interaction

    PubMed Central

    Ramos, Óscar Ramón; Pantaleón, Marcos J.

    2014-01-01

    The track-structure interaction effects are usually analysed with conventional FEM programs, where it is difficult to implement the complex track-structure connection behaviour, which is nonlinear, elastic-plastic and depends on the vertical load. The authors developed an alternative analysis method, which they call the relative displacement method. It is based on the calculation of deformation states in single DOF element models that satisfy the boundary conditions. For its solution, an iterative optimisation algorithm is used. This method can be implemented in any programming language or analysis software. A comparison with ABAQUS calculations shows a very good result correlation and compliance with the standard's specifications. PMID:24634610

  8. A fast and robust iterative algorithm for prediction of RNA pseudoknotted secondary structures

    PubMed Central

    2014-01-01

    Background Improving accuracy and efficiency of computational methods that predict pseudoknotted RNA secondary structures is an ongoing challenge. Existing methods based on free energy minimization tend to be very slow and are limited in the types of pseudoknots that they can predict. Incorporating known structural information can improve prediction accuracy; however, there are not many methods for prediction of pseudoknotted structures that can incorporate structural information as input. There is even less understanding of the relative robustness of these methods with respect to partial information. Results We present a new method, Iterative HFold, for pseudoknotted RNA secondary structure prediction. Iterative HFold takes as input a pseudoknot-free structure, and produces a possibly pseudoknotted structure whose energy is at least as low as that of any (density-2) pseudoknotted structure containing the input structure. Iterative HFold leverages strengths of earlier methods, namely the fast running time of HFold, a method that is based on the hierarchical folding hypothesis, and the energy parameters of HotKnots V2.0. Our experimental evaluation on a large data set shows that Iterative HFold is robust with respect to partial information, with average accuracy on pseudoknotted structures steadily increasing from roughly 54% to 79% as the user provides up to 40% of the input structure. Iterative HFold is much faster than HotKnots V2.0, while having comparable accuracy. Iterative HFold also has significantly better accuracy than IPknot on our HK-PK and IP-pk168 data sets. Conclusions Iterative HFold is a robust method for prediction of pseudoknotted RNA secondary structures, whose accuracy with more than 5% information about true pseudoknot-free structures is better than that of IPknot, and with about 35% information about true pseudoknot-free structures compares well with that of HotKnots V2.0 while being significantly faster. Iterative HFold and all data used in this work are freely available at http://www.cs.ubc.ca/~hjabbari/software.php. PMID:24884954

  9. Robust Mean and Covariance Structure Analysis through Iteratively Reweighted Least Squares.

    ERIC Educational Resources Information Center

    Yuan, Ke-Hai; Bentler, Peter M.

    2000-01-01

    Adapts robust schemes to mean and covariance structures, providing an iteratively reweighted least squares approach to robust structural equation modeling. Each case is weighted according to its distance, based on first and second order moments. Test statistics and standard error estimators are given. (SLD)
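
    As a hedged illustration of the general idea (not the authors' exact estimator), the sketch below reweights each case by a Huber-type function of its Mahalanobis distance and re-estimates the mean and covariance until convergence; the cutoff c = 1.345 is a conventional choice assumed here:

        import numpy as np

        def huber_weights(d, c=1.345):
            # cases whose distance d exceeds the cutoff c are downweighted
            w = np.ones_like(d)
            far = d > c
            w[far] = c / d[far]
            return w

        def irls_mean_cov(X, n_iter=100, tol=1e-8):
            mu, S = X.mean(axis=0), np.cov(X, rowvar=False)
            for _ in range(n_iter):
                diff = X - mu
                d = np.sqrt(np.einsum('ij,jk,ik->i', diff, np.linalg.inv(S), diff))
                w = huber_weights(d)
                mu_new = (w[:, None] * X).sum(axis=0) / w.sum()
                S = (w[:, None] * (X - mu_new)).T @ (X - mu_new) / w.sum()
                if np.linalg.norm(mu_new - mu) < tol:
                    return mu_new, S
                mu = mu_new
            return mu, S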

  10. Iterative Strain-Gage Balance Calibration Data Analysis for Extended Independent Variable Sets

    NASA Technical Reports Server (NTRS)

    Ulbrich, Norbert Manfred

    2011-01-01

    A new method was developed that makes it possible to use an extended set of independent calibration variables for an iterative analysis of wind tunnel strain gage balance calibration data. The new method permits the application of the iterative analysis method whenever the total number of balance loads and other independent calibration variables is greater than the total number of measured strain gage outputs. Iteration equations used by the iterative analysis method have the limitation that the number of independent and dependent variables must match. The new method circumvents this limitation. It simply adds a missing dependent variable to the original data set by using an additional independent variable also as an additional dependent variable. Then, the desired solution of the regression analysis problem can be obtained that fits each gage output as a function of both the original and additional independent calibration variables. The final regression coefficients can be converted to data reduction matrix coefficients because the missing dependent variables were added to the data set without changing the regression analysis result for each gage output. Therefore, the new method still supports the application of the two load iteration equation choices that the iterative method traditionally uses for the prediction of balance loads during a wind tunnel test. An example is discussed in the paper that illustrates the application of the new method to a realistic simulation of a temperature-dependent calibration data set of a six-component balance.
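
    The padding trick is easy to demonstrate: because least squares fits each dependent column independently, reusing surplus independent variables as extra dependents squares up the system without altering the gage-output fits. A toy numpy check (all dimensions and data here are made up):

        import numpy as np

        rng = np.random.default_rng(0)
        m, n_vars, n_gages = 200, 8, 6          # more independents than gage outputs
        X = rng.normal(size=(m, n_vars))        # loads plus extra calibration variables
        R = X[:, :n_gages] @ rng.normal(size=(n_gages, n_gages)) \
            + 0.01 * rng.normal(size=(m, n_gages))   # simulated gage outputs

        # pad the dependents with the two surplus independent variables
        D = np.hstack([R, X[:, n_gages:]])      # D now has n_vars columns, matching X

        C_aug, *_ = np.linalg.lstsq(X, D, rcond=None)
        C_orig, *_ = np.linalg.lstsq(X, R, rcond=None)
        assert np.allclose(C_aug[:, :n_gages], C_orig)  # gage-output fits unchanged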

  11. A comparison theorem for the SOR iterative method

    NASA Astrophysics Data System (ADS)

    Sun, Li-Ying

    2005-09-01

    In 1997, Kohno et al. reported numerically that the improving modified Gauss-Seidel method, referred to as the IMGS method, is superior to the SOR iterative method. In this paper, we prove that the spectral radius of the IMGS method is smaller than those of the SOR and Gauss-Seidel methods if the relaxation parameter ω ∈ (0,1]. As a result, we prove theoretically that this method succeeds in improving the convergence of some classical iterative methods. Some recent results are improved.
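
    The quantity compared in the theorem can be checked numerically for any small test matrix: build the SOR iteration matrix (D - ωL)^{-1}[(1 - ω)D + ωU] and take the largest eigenvalue modulus. A sketch (the tridiagonal test matrix is an arbitrary example):

        import numpy as np

        def sor_iteration_matrix(A, omega):
            D = np.diag(np.diag(A))
            L = -np.tril(A, -1)   # splitting A = D - L - U
            U = -np.triu(A, 1)
            return np.linalg.solve(D - omega * L, (1 - omega) * D + omega * U)

        def spectral_radius(M):
            return max(abs(np.linalg.eigvals(M)))

        A = np.array([[4., -1., 0.], [-1., 4., -1.], [0., -1., 4.]])
        for omega in (0.5, 0.8, 1.0):   # omega = 1 recovers Gauss-Seidel
            print(omega, spectral_radius(sor_iteration_matrix(A, omega)))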

  12. Fetoscopic Open Neural Tube Defect Repair: Development and Refinement of a Two-Port, Carbon Dioxide Insufflation Technique.

    PubMed

    Belfort, Michael A; Whitehead, William E; Shamshirsaz, Alireza A; Bateni, Zhoobin H; Olutoye, Oluyinka O; Olutoye, Olutoyin A; Mann, David G; Espinoza, Jimmy; Williams, Erin; Lee, Timothy C; Keswani, Sundeep G; Ayres, Nancy; Cassady, Christopher I; Mehollin-Ray, Amy R; Sanz Cortes, Magdalena; Carreras, Elena; Peiro, Jose L; Ruano, Rodrigo; Cass, Darrell L

    2017-04-01

    To describe development of a two-port fetoscopic technique for spina bifida repair in the exteriorized, carbon dioxide-filled uterus and report early results of two cohorts of patients: the first 15 treated with an iterative technique and the latter 13 with a standardized technique. This was a retrospective cohort study (2014-2016). All patients met Management of Myelomeningocele Study selection criteria. The intraoperative approach was iterative in the first 15 patients and was then standardized. Obstetric, maternal, fetal, and early neonatal outcomes were compared. Standard parametric and nonparametric tests were used as appropriate. Data for 28 patients (22 endoscopic only, four hybrid, two abandoned) are reported, but only those with a complete fetoscopic repair were analyzed (iterative technique [n=10] compared with standardized technique [n=12]). Maternal demographics and gestational age (median [range]) at fetal surgery (25.4 [22.9-25.9] compared with 24.8 [24-25.6] weeks) were similar, but delivery occurred at 35.9 (26-39) weeks of gestation with the iterative technique compared with 39 (35.9-40) weeks of gestation with the standardized technique (P<.01). Duration of surgery (267 [107-434] compared with 246 [206-333] minutes), complication rates, preterm prelabor rupture of membranes rates (4/12 [33%] compared with 1/10 [10%]), and vaginal delivery rates (5/12 [42%] compared with 6/10 [60%]) were not statistically different in the iterative and standardized techniques, respectively. In 6 of 12 (50%) compared with 1 of 10 (10%), respectively (P=.07), there was leakage of cerebrospinal fluid from the repair site at birth. Management of Myelomeningocele Study criteria for hydrocephalus-death at discharge were met in 9 of 12 (75%) and 3 of 10 (30%), respectively, and 7 of 12 (58%) compared with 2 of 10 (20%) have been treated for hydrocephalus to date. These latter differences were not statistically significant. Fetoscopic open neural tube defect repair does not appear to increase maternal-fetal complications as compared with repair by hysterotomy, allows for vaginal delivery, and may reduce long-term maternal risks. ClinicalTrials.gov, https://clinicaltrials.gov, NCT02230072.

  13. Iterative approach as alternative to S-matrix in modal methods

    NASA Astrophysics Data System (ADS)

    Semenikhin, Igor; Zanuccoli, Mauro

    2014-12-01

    The continuously increasing complexity of opto-electronic devices and the rising demands on simulation accuracy lead to the need to solve very large systems of linear equations, making iterative methods promising and attractive from the computational point of view with respect to direct methods. In particular, the iterative approach potentially enables a reduction of the computational time required to solve Maxwell's equations by Eigenmode Expansion algorithms. Regardless of the particular eigenmode-finding method used, the expansion coefficients are as a rule computed by the scattering matrix (S-matrix) approach or similar techniques requiring of the order of M^3 operations. In this work we consider alternatives to the S-matrix technique that are based on pure iterative or mixed direct-iterative approaches. The possibility of diminishing the impact of the M^3-order calculations on the overall time, and in some cases even of reducing the number of arithmetic operations to order M^2 by applying iterative techniques, is discussed. Numerical results are presented to assess the validity and potential of the proposed approaches.

  14. Parallel iterative methods for sparse linear and nonlinear equations

    NASA Technical Reports Server (NTRS)

    Saad, Youcef

    1989-01-01

    As three-dimensional models are gaining importance, iterative methods will become almost mandatory. Among these, preconditioned Krylov subspace methods have been viewed as the most efficient and reliable, when solving linear as well as nonlinear systems of equations. There have been several different approaches taken to adapt iterative methods for supercomputers. Some of these approaches are discussed, and the methods that deal more specifically with general unstructured sparse matrices, such as those arising from finite element methods, are emphasized.
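
    A representative member of the preconditioned Krylov family discussed here is the preconditioned conjugate gradient method; the sketch below uses a simple Jacobi (diagonal) preconditioner on a 1D Laplacian stand-in, an assumption made purely for illustration:

        import numpy as np

        def pcg(A, b, M_inv, tol=1e-10, maxit=500):
            # preconditioned conjugate gradients; M_inv applies the preconditioner
            x = np.zeros_like(b)
            r = b - A @ x
            z = M_inv(r)
            p = z.copy()
            rz = r @ z
            for _ in range(maxit):
                Ap = A @ p
                alpha = rz / (p @ Ap)
                x += alpha * p
                r -= alpha * Ap
                if np.linalg.norm(r) < tol:
                    break
                z = M_inv(r)
                rz_new = r @ z
                p = z + (rz_new / rz) * p
                rz = rz_new
            return x

        n = 100
        A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
        x = pcg(A, np.ones(n), lambda r: r / np.diag(A))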

  15. Model-based Iterative Reconstruction: Effect on Patient Radiation Dose and Image Quality in Pediatric Body CT

    PubMed Central

    Dillman, Jonathan R.; Goodsitt, Mitchell M.; Christodoulou, Emmanuel G.; Keshavarzi, Nahid; Strouse, Peter J.

    2014-01-01

    Purpose To retrospectively compare image quality and radiation dose between a reduced-dose computed tomographic (CT) protocol that uses model-based iterative reconstruction (MBIR) and a standard-dose CT protocol that uses 30% adaptive statistical iterative reconstruction (ASIR) with filtered back projection. Materials and Methods Institutional review board approval was obtained. Clinical CT images of the chest, abdomen, and pelvis obtained with a reduced-dose protocol were identified. Images were reconstructed with two algorithms: MBIR and 100% ASIR. All subjects had undergone standard-dose CT within the prior year, and the images were reconstructed with 30% ASIR. Reduced- and standard-dose images were evaluated objectively and subjectively. Reduced-dose images were evaluated for lesion detectability. Spatial resolution was assessed in a phantom. Radiation dose was estimated by using volumetric CT dose index (CTDIvol) and calculated size-specific dose estimates (SSDE). A combination of descriptive statistics, analysis of variance, and t tests was used for statistical analysis. Results In the 25 patients who underwent the reduced-dose protocol, mean decrease in CTDIvol was 46% (range, 19%–65%) and mean decrease in SSDE was 44% (range, 19%–64%). Reduced-dose MBIR images had less noise (P < .004). Spatial resolution was superior for reduced-dose MBIR images. Reduced-dose MBIR images were equivalent to standard-dose images for lungs and soft tissues (P > .05) but were inferior for bones (P = .004). Reduced-dose 100% ASIR images were inferior for soft tissues (P < .002), lungs (P < .001), and bones (P < .001). By using the same reduced-dose acquisition, lesion detectability was better (38% [32 of 84 rated lesions]) or the same (62% [52 of 84 rated lesions]) with MBIR as compared with 100% ASIR. Conclusion CT performed with a reduced-dose protocol and MBIR is feasible in the pediatric population, and it maintains diagnostic quality. © RSNA, 2013 Online supplemental material is available for this article. PMID:24091359

  16. Adaptive Statistical Iterative Reconstruction-Applied Ultra-Low-Dose CT with Radiography-Comparable Radiation Dose: Usefulness for Lung Nodule Detection

    PubMed Central

    Yoon, Hyun Jung; Hwang, Hye Sun; Moon, Jung Won; Lee, Kyung Soo

    2015-01-01

    Objective To assess the performance of adaptive statistical iterative reconstruction (ASIR)-applied ultra-low-dose CT (ULDCT) in detecting small lung nodules. Materials and Methods Thirty patients underwent both ULDCT and standard dose CT (SCT). After determining the reference standard nodules, five observers, blinded to the reference standard reading results, independently evaluated SCT and both subsets of ASIR- and filtered back projection (FBP)-driven ULDCT images. Data assessed by observers were compared statistically. Results Converted effective doses in SCT and ULDCT were 2.81 ± 0.92 and 0.17 ± 0.02 mSv, respectively. A total of 114 lung nodules were detected on SCT as a standard reference. There was no statistically significant difference in sensitivity between ASIR-driven ULDCT and SCT for three out of the five observers (p = 0.678, 0.735, < 0.01, 0.038, and 0.868 for observers 1, 2, 3, 4, and 5, respectively). The sensitivity of FBP-driven ULDCT was significantly lower than that of ASIR-driven ULDCT in three out of the five observers (p < 0.01 for three observers, and p = 0.064 and 0.146 for two observers). In jackknife alternative free-response receiver operating characteristic analysis, the mean values of figure-of-merit (FOM) for FBP, ASIR-driven ULDCT, and SCT were 0.682, 0.772, and 0.821, respectively, and there were no significant differences in FOM values between ASIR-driven ULDCT and SCT (p = 0.11), but the FOM value of FBP-driven ULDCT was significantly lower than that of ASIR-driven ULDCT and SCT (p = 0.01 and 0.00). Conclusion Adaptive statistical iterative reconstruction-driven ULDCT delivering a radiation dose of only 0.17 mSv offers acceptable sensitivity in nodule detection compared with SCT and has better performance than FBP-driven ULDCT. PMID:26357505

  17. SU-E-I-01: Iterative CBCT Reconstruction with a Feature-Preserving Penalty

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lyu, Q; Li, B; Southern Medical University, Guangzhou

    2015-06-15

    Purpose: Low-dose CBCT is desired in various clinical applications. Iterative image reconstruction algorithms have shown advantages in suppressing noise in low-dose CBCT. However, due to the smoothness constraint enforced during the reconstruction process, edges may be blurred and image features may be lost in the reconstructed image. In this work, we propose a new penalty design to preserve image features in images reconstructed by iterative algorithms. Methods: Low-dose CBCT is reconstructed by minimizing the penalized weighted least-squares (PWLS) objective function. Binary Robust Independent Elementary Features (BRIEF) of the image were integrated into the penalty of PWLS. BRIEF is a general-purpose point descriptor that can be used to identify important features of an image. In this work, the BRIEF distance of two neighboring pixels was used to weight the smoothing parameter in PWLS: for pixels of large BRIEF distance, a weaker smoothness constraint is enforced. Image features are better preserved through such a design. The performance of the PWLS algorithm with the BRIEF penalty was evaluated on a CatPhan 600 phantom. Results: The quality of the image reconstructed by the proposed PWLS-BRIEF algorithm is superior to that of the conventional PWLS method and the standard FDK method. At matched noise level, edges in the PWLS-BRIEF reconstructed image are better preserved. Conclusion: This study demonstrated that the proposed PWLS-BRIEF algorithm has great potential for preserving image features in low-dose CBCT.

  18. Efficient solution of the simplified PN equations

    DOE PAGES

    Hamilton, Steven P.; Evans, Thomas M.

    2014-12-23

    We present new solver strategies for the multigroup SPN equations for nuclear reactor analysis. By forming the complete matrix over space, moments, and energy, a robust set of solution strategies may be applied. Power iteration, shifted power iteration, Rayleigh quotient iteration, Arnoldi's method, and a generalized Davidson method, each using algebraic and physics-based multigrid preconditioners, have been compared on the C5G7 MOX test problem as well as on an operational PWR model. These results show that the most efficient approach is the generalized Davidson method, which is 30-40 times faster than traditional power iteration and 6-10 times faster than Arnoldi's method.
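
    As a baseline for the comparison reported here, plain power iteration for the dominant eigenpair looks as follows; the Arnoldi and generalized Davidson solvers that outperform it address the same eigenvalue problem (a generic symmetric matrix is assumed, not the SPN operator):

        import numpy as np

        def power_iteration(A, tol=1e-10, maxit=10000):
            x = np.random.default_rng(1).normal(size=A.shape[0])
            lam = 0.0
            for _ in range(maxit):
                y = A @ x
                lam_new = np.linalg.norm(y)
                x = y / lam_new
                if abs(lam_new - lam) < tol * lam_new:
                    break
                lam = lam_new
            return lam, x   # dominant eigenvalue estimate and eigenvector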

  19. Determining partial differential cross sections for low-energy electron photodetachment involving conical intersections using the solution of a Lippmann-Schwinger equation constructed with standard electronic structure techniques.

    PubMed

    Han, Seungsuk; Yarkony, David R

    2011-05-07

    A method for obtaining partial differential cross sections for low-energy electron photodetachment in which the electronic states of the residual molecule are strongly coupled by conical intersections is reported. The method is based on the iterative solution to a Lippmann-Schwinger equation, using a zeroth-order Hamiltonian consisting of the bound nonadiabatically coupled residual molecule and a free electron. The solution to the Lippmann-Schwinger equation involves only standard electronic structure techniques and a standard three-dimensional free particle Green's function quadrature for which fast techniques exist. The transition dipole moment for electron photodetachment is a sum of matrix elements, each involving one nonorthogonal orbital obtained from the solution to the Lippmann-Schwinger equation. An expression for the electron photodetachment transition dipole matrix element in terms of Dyson orbitals, which does not make the usual orthogonality assumptions, is derived.
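
    Stripped of all physics, the iterative solution referred to here is a fixed-point (Born-type) iteration for psi = psi0 + G0 V psi on a quadrature grid. A schematic sketch with generic matrices G0 and V (any resemblance to the actual scattering operators is illustrative only):

        import numpy as np

        def lippmann_schwinger_iter(psi0, G0, V, tol=1e-10, maxit=200):
            K = G0 @ V
            psi = psi0.copy()
            for _ in range(maxit):
                psi_new = psi0 + K @ psi
                if np.linalg.norm(psi_new - psi) < tol:
                    return psi_new
                psi = psi_new
            return psi

        # if the iteration does not contract, solve (I - G0 V) psi = psi0 directly:
        # psi = np.linalg.solve(np.eye(len(psi0)) - G0 @ V, psi0)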

  20. An Improved Newton's Method.

    ERIC Educational Resources Information Center

    Mathews, John H.

    1989-01-01

    Describes Newton's method to locate roots of an equation using the Newton-Raphson iteration formula. Develops an adaptive method overcoming limitations of the iteration method. Provides the algorithm and computer program of the adaptive Newton-Raphson method. (YP)
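
    The underlying iteration is the familiar x_{n+1} = x_n - f(x_n)/f'(x_n); a minimal sketch (the stopping rule and example root are arbitrary choices, not the article's adaptive safeguard):

        def newton_raphson(f, fprime, x0, tol=1e-12, maxit=50):
            x = x0
            for _ in range(maxit):
                step = f(x) / fprime(x)   # assumes fprime(x) != 0 along the way
                x -= step
                if abs(step) < tol:
                    break
            return x

        root = newton_raphson(lambda x: x * x - 2.0, lambda x: 2.0 * x, 1.0)  # sqrt(2)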

  1. Quantitative Analysis of the Effect of Iterative Reconstruction Using a Phantom: Determining the Appropriate Blending Percentage

    PubMed Central

    Kim, Hyun Gi; Lee, Young Han; Choi, Jin-Young; Park, Mi-Suk; Kim, Myeong-Jin; Kim, Ki Whang

    2015-01-01

    Purpose To investigate the optimal blending percentage of adaptive statistical iterative reconstruction (ASIR) at a reduced radiation dose while preserving a degree of image quality and texture similar to that of standard-dose computed tomography (CT). Materials and Methods A CT performance phantom was scanned with standard and dose-reduction protocols, including reduced mAs or kVp. Image quality parameters, including noise, spatial resolution, and low-contrast resolution, as well as image texture, were quantitatively evaluated after applying various blending percentages of ASIR. The optimal blending percentage of ASIR that preserved image quality and texture relative to standard-dose CT was investigated for each radiation dose-reduction protocol. Results As the percentage of ASIR increased, noise and spatial resolution decreased, whereas low-contrast resolution increased. In the texture analysis, an increasing percentage of ASIR resulted in an increase of angular second moment, inverse difference moment, and correlation, and in a decrease of contrast and entropy. The 20% and 40% dose-reduction protocols with 20% and 40% ASIR blending, respectively, resulted in images of optimal quality with preservation of image texture. Conclusion Blending 40% ASIR into the 40% reduced-tube-current protocol can maximize radiation dose reduction while preserving adequate image quality and texture. PMID:25510772

  2. An accelerated subspace iteration for eigenvector derivatives

    NASA Technical Reports Server (NTRS)

    Ting, Tienko

    1991-01-01

    An accelerated subspace iteration method for calculating eigenvector derivatives has been developed. Factors affecting the effectiveness and the reliability of the subspace iteration are identified, and effective strategies concerning these factors are presented. The method has been implemented, and the results of a demonstration problem are presented.
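
    The basic (unaccelerated) scheme this work builds on is orthogonal subspace iteration with a Rayleigh-Ritz step; a sketch for a symmetric matrix (symmetry and the fixed iteration count are simplifying assumptions, and the paper's acceleration and derivative computation are not shown):

        import numpy as np

        def subspace_iteration(A, k, n_iter=200, seed=0):
            rng = np.random.default_rng(seed)
            Q, _ = np.linalg.qr(rng.normal(size=(A.shape[0], k)))
            for _ in range(n_iter):
                Q, _ = np.linalg.qr(A @ Q)      # multiply and re-orthonormalize
            T = Q.T @ A @ Q                     # Rayleigh-Ritz projection
            evals, S = np.linalg.eigh(T)
            return evals, Q @ S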

  3. Iteration in Early-Elementary Engineering Design

    ERIC Educational Resources Information Center

    McFarland Kendall, Amber Leigh

    2017-01-01

    K-12 standards and curricula are beginning to include engineering design as a key practice within Science Technology Engineering and Mathematics (STEM) education. However, there is little research on how the youngest students engage in engineering design within the elementary classroom. This dissertation focuses on iteration as an essential aspect…

  4. Fokker-Planck-Based Acceleration for SN Equations with Highly Forward Peaked Scattering in Slab Geometry

    NASA Astrophysics Data System (ADS)

    Patel, Japan

    Short mean free paths are characteristic of charged particles. High-energy charged particles often have highly forward-peaked scattering cross sections, and transport problems involving them are also highly optically thick. When problems simultaneously exhibit forward-peaked scattering and high optical thickness, their solution using standard iterative methods becomes very inefficient. In this dissertation, we explore Fokker-Planck-based acceleration for solving such problems.

  5. Iterative CT reconstruction using coordinate descent with ordered subsets of data

    NASA Astrophysics Data System (ADS)

    Noo, F.; Hahn, K.; Schöndube, H.; Stierstorfer, K.

    2016-04-01

    Image reconstruction based on iterative minimization of a penalized weighted least-squares criterion has become an important topic of research in X-ray computed tomography. This topic is motivated by increasing evidence that such a formalism may enable a significant reduction in the dose imparted to the patient while maintaining or improving image quality. One important issue associated with this iterative image reconstruction concept is slow convergence and the associated computational effort. For this reason, there is interest in finding methods that produce approximate versions of the targeted image with a small number of iterations and an acceptable level of discrepancy. We introduce here a novel method to produce such approximations: ordered subsets in combination with iterative coordinate descent. Preliminary results demonstrate that this method can produce, within 10 iterations and using only a constant image as the initial condition, satisfactory reconstructions that retain the noise properties of the targeted image.
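
    In spirit, the method sweeps the image one pixel at a time while approximating the data-fit gradient from a rotating subset of measurements scaled by the number of subsets. A toy version for a generic quadratically penalized least-squares problem (the CT system model, the penalty, and the initialization are all stand-ins, not the authors' formulation):

        import numpy as np

        def os_icd(A, b, lam=0.1, n_subsets=4, n_outer=12):
            # minimize 0.5*||Ax - b||^2 + 0.5*lam*||x||^2 by coordinate descent,
            # using one ordered subset of rows per outer iteration
            m, n = A.shape
            x = np.zeros(n)                      # constant (zero) starting image
            subsets = [np.arange(s, m, n_subsets) for s in range(n_subsets)]
            for it in range(n_outer):
                S = subsets[it % n_subsets]
                As = A[S]
                rs = b[S] - As @ x               # subset residual
                for j in range(n):
                    aj = As[:, j]
                    num = n_subsets * (aj @ rs) - lam * x[j]
                    den = n_subsets * (aj @ aj) + lam
                    dj = num / den               # exact minimizer of the coordinate model
                    x[j] += dj
                    rs -= dj * aj
            return x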

  6. Intra-patient comparison of reduced-dose model-based iterative reconstruction with standard-dose adaptive statistical iterative reconstruction in the CT diagnosis and follow-up of urolithiasis.

    PubMed

    Tenant, Sean; Pang, Chun Lap; Dissanayake, Prageeth; Vardhanabhuti, Varut; Stuckey, Colin; Gutteridge, Catherine; Hyde, Christopher; Roobottom, Carl

    2017-10-01

    To evaluate the accuracy of reduced-dose CT scans reconstructed using a new generation of model-based iterative reconstruction (MBIR) in the imaging of urinary tract stone disease, compared with a standard-dose CT using 30% adaptive statistical iterative reconstruction. This single-institution prospective study recruited 125 patients presenting either with acute renal colic or for follow-up of known urinary tract stones. They underwent two immediately consecutive scans, one at standard dose settings and one at the lowest dose (highest noise index) the scanner would allow. The reduced-dose scans were reconstructed using both ASIR 30% and MBIR algorithms and reviewed independently by two radiologists. Objective and subjective image quality measures as well as diagnostic data were obtained. The reduced-dose MBIR scan was 100% concordant with the reference standard for the assessment of ureteric stones. It was extremely accurate at identifying calculi of 3 mm and above. The algorithm allowed a dose reduction of 58% without any loss of scan quality. A reduced-dose CT scan using MBIR is accurate in acute imaging for renal colic symptoms and for urolithiasis follow-up and allows a significant reduction in dose. • MBIR allows reduced CT dose with similar diagnostic accuracy • MBIR outperforms ASIR when used for the reconstruction of reduced-dose scans • MBIR can be used to accurately assess stones 3 mm and above.

  7. Parallelized implicit propagators for the finite-difference Schrödinger equation

    NASA Astrophysics Data System (ADS)

    Parker, Jonathan; Taylor, K. T.

    1995-08-01

    We describe the application of block Gauss-Seidel and block Jacobi iterative methods to the design of implicit propagators for finite-difference models of the time-dependent Schrödinger equation. The block-wise iterative methods discussed here are mixed direct-iterative methods for solving simultaneous equations, in the sense that direct methods (e.g. LU decomposition) are used to invert certain block sub-matrices, and iterative methods are used to complete the solution. We describe parallel variants of the basic algorithm that are well suited to the medium- to coarse-grained parallelism of work-station clusters, and MIMD supercomputers, and we show that under a wide range of conditions, fine-grained parallelism of the computation can be achieved. Numerical tests are conducted on a typical one-electron atom Hamiltonian. The methods converge robustly to machine precision (15 significant figures), in some cases in as few as 6 or 7 iterations. The rate of convergence is nearly independent of the finite-difference grid-point separations.
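
    The block-wise scheme described here is easy to prototype: LU-factor each diagonal block once (the direct part), then sweep the blocks Gauss-Seidel fashion (the iterative part). A dense-block sketch, with the matrix supplied as a nested list of blocks (the partitioning and convergence test are generic placeholders, not the paper's finite-difference setup):

        import numpy as np
        from scipy.linalg import lu_factor, lu_solve

        def block_gauss_seidel(blocks, b, n_iter=100, tol=1e-12):
            nb = len(blocks)
            sizes = [blocks[i][i].shape[0] for i in range(nb)]
            offs = np.cumsum([0] + sizes)
            lus = [lu_factor(blocks[i][i]) for i in range(nb)]   # direct part
            x = [np.zeros(s) for s in sizes]
            for _ in range(n_iter):
                delta = 0.0
                for i in range(nb):                              # iterative sweep
                    r = b[offs[i]:offs[i + 1]].copy()
                    for j in range(nb):
                        if j != i:
                            r -= blocks[i][j] @ x[j]
                    xi = lu_solve(lus[i], r)
                    delta = max(delta, np.max(np.abs(xi - x[i])))
                    x[i] = xi
                if delta < tol:
                    break
            return np.concatenate(x)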

  8. SU-D-206-03: Segmentation Assisted Fast Iterative Reconstruction Method for Cone-Beam CT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wu, P; Mao, T; Gong, S

    2016-06-15

    Purpose: Total Variation (TV) based iterative reconstruction (IR) methods enable accurate CT image reconstruction from low-dose measurements with sparse projection acquisition, because most CT images are sparsifiable under the gradient operator. However, conventional solutions require a large number of iterations to generate a decent reconstructed image. One major reason is that the expected piecewise-constant property is not taken into consideration at the optimization starting point. In this work, we propose an iterative reconstruction method for cone-beam CT (CBCT) that uses image segmentation to guide the optimization path more efficiently through the regularization term at the beginning of the optimization trajectory. Methods: Our method applies the general knowledge that one tissue component in a CT image contains a relatively uniform distribution of CT numbers. This knowledge is incorporated into the proposed reconstruction by using an image segmentation technique to generate a piecewise-constant template from the first-pass, low-quality CT image reconstructed with an analytical algorithm. The template image is applied as the initial value of the optimization process. Results: The proposed method is evaluated on the Shepp-Logan phantom at low and high noise levels, and on a head patient. The number of iterations is reduced by about 40% overall. Moreover, the proposed method tends to generate a smoother reconstructed image at the same TV value. Conclusion: We propose a computationally efficient iterative reconstruction method for CBCT imaging. Our method achieves a better optimization trajectory and faster convergence behavior. It does not rely on prior information and can be readily incorporated into existing iterative reconstruction frameworks. Our method is thus practical and attractive as a general solution to CBCT iterative reconstruction. This work is supported by the Zhejiang Provincial Natural Science Foundation of China (Grant No. LR16F010001) and the National High-tech R&D Program for Young Scientists by the Ministry of Science and Technology of China (Grant No. 2015AA020917).

  9. Iteration, Not Induction

    ERIC Educational Resources Information Center

    Dobbs, David E.

    2009-01-01

    The main purpose of this note is to present and justify proof via iteration as an intuitive, creative and empowering method that is often available and preferable as an alternative to proofs via either mathematical induction or the well-ordering principle. The method of iteration depends only on the fact that any strictly decreasing sequence of…

  10. Model-based iterative reconstruction and adaptive statistical iterative reconstruction: dose-reduced CT for detecting pancreatic calcification

    PubMed Central

    Katsura, Masaki; Akahane, Masaaki; Sato, Jiro; Matsuda, Izuru; Ohtomo, Kuni

    2016-01-01

    Background Iterative reconstruction methods have attracted attention for reducing radiation doses in computed tomography (CT). Purpose To investigate the detectability of pancreatic calcification using dose-reduced CT reconstructed with model-based iterative reconstruction (MBIR) and adaptive statistical iterative reconstruction (ASIR). Material and Methods This prospective study, approved by the Institutional Review Board, included 85 patients (57 men, 28 women; mean age, 69.9 years; mean body weight, 61.2 kg). Unenhanced CT was performed three times with different radiation doses (reference-dose CT [RDCT], low-dose CT [LDCT], ultralow-dose CT [ULDCT]). From RDCT, LDCT, and ULDCT, images were reconstructed with filtered-back projection (R-FBP, used for establishing the reference standard), ASIR (L-ASIR), and MBIR and ASIR (UL-MBIR and UL-ASIR), respectively. A lesion (pancreatic calcification) detection test was performed by two blinded radiologists with a five-point certainty level scale. Results Dose-length products of RDCT, LDCT, and ULDCT were 410, 97, and 36 mGy·cm, respectively. Nine patients had pancreatic calcification. The sensitivity for detecting pancreatic calcification with UL-MBIR was high (0.67–0.89) compared to L-ASIR or UL-ASIR (0.11–0.44), and a significant difference was seen between UL-MBIR and UL-ASIR for one reader (P = 0.014). The area under the receiver-operating characteristic curve for UL-MBIR (0.818–0.860) was comparable to that for L-ASIR (0.696–0.844). The specificity was lower with UL-MBIR (0.79–0.92) than with L-ASIR or UL-ASIR (0.96–0.99), and a significant difference was seen for one reader (P < 0.01). Conclusion With UL-MBIR, pancreatic calcification can be detected with high sensitivity; however, attention should be paid to the slightly lower specificity. PMID:27110389

  11. Systems of Inhomogeneous Linear Equations

    NASA Astrophysics Data System (ADS)

    Scherer, Philipp O. J.

    Many problems in physics and especially computational physics involve systems of linear equations which arise, e.g., from linearization of a general nonlinear problem or from discretization of differential equations. If the dimension of the system is not too large, standard methods like Gaussian elimination or QR decomposition are sufficient. Systems with a tridiagonal matrix are important for cubic spline interpolation and numerical second derivatives. They can be solved very efficiently with a specialized Gaussian elimination method. Practical applications often involve very large dimensions and require iterative methods. Convergence of Jacobi and Gauss-Seidel methods is slow and can be improved by relaxation or over-relaxation. An alternative for large systems is the method of conjugate gradients.
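
    The specialized elimination mentioned for tridiagonal systems is the Thomas algorithm, which runs in O(n) instead of O(n^3); a standard sketch:

        import numpy as np

        def thomas(a, b, c, d):
            # a: sub-diagonal (n-1), b: diagonal (n), c: super-diagonal (n-1), d: rhs (n)
            n = len(b)
            cp, dp = np.empty(n - 1), np.empty(n)
            cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
            for i in range(1, n):                       # forward elimination
                m = b[i] - a[i - 1] * cp[i - 1]
                if i < n - 1:
                    cp[i] = c[i] / m
                dp[i] = (d[i] - a[i - 1] * dp[i - 1]) / m
            x = np.empty(n)
            x[-1] = dp[-1]
            for i in range(n - 2, -1, -1):              # back substitution
                x[i] = dp[i] - cp[i] * x[i + 1]
            return x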

  12. Determination of structure parameters in strong-field tunneling ionization theory of molecules

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhao Songfeng; Jin Cheng; College of Physics and Electronic Engineering, Northwest Normal University, Lanzhou, Gansu 730070

    2010-03-15

    In the strong field molecular tunneling ionization theory of Tong et al. [Phys. Rev. A 66, 033402 (2002)], the ionization rate depends on the asymptotic wave function of the molecular orbital from which the electron is removed. The orbital wave functions obtained from standard quantum chemistry packages in general are not good enough in the asymptotic region. Here we construct a one-electron model potential for several linear molecules using density functional theory. We show that the asymptotic wave function can be improved with an iteration method and after one iteration accurate asymptotic wave functions and structure parameters are determined. With the new parameters we examine the alignment-dependent tunneling ionization probabilities for several molecules and compare with other calculations and with recent measurements, including ionization from inner molecular orbitals.

  13. On improving linear solver performance: a block variant of GMRES

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Baker, A H; Dennis, J M; Jessup, E R

    2004-05-10

    The increasing gap between processor performance and memory access time warrants the re-examination of data movement in iterative linear solver algorithms. For this reason, we explore and establish the feasibility of modifying a standard iterative linear solver algorithm in a manner that reduces the movement of data through memory. In particular, we present an alternative to the restarted GMRES algorithm for solving a single right-hand side linear system Ax = b based on solving the block linear system AX = B. Algorithm performance, i.e. time to solution, is improved by using the matrix A in operations on groups of vectors. Experimental results demonstrate the importance of implementation choices on data movement as well as the effectiveness of the new method on a variety of problems from different application areas.

  14. Partial volume correction of PET-imaged tumor heterogeneity using expectation maximization with a spatially varying point spread function

    PubMed Central

    Barbee, David L; Flynn, Ryan T; Holden, James E; Nickles, Robert J; Jeraj, Robert

    2010-01-01

    Tumor heterogeneities observed in positron emission tomography (PET) imaging are frequently compromised by partial volume effects which may affect treatment prognosis, assessment, or future implementations such as biologically optimized treatment planning (dose painting). This paper presents a method for partial volume correction of PET-imaged heterogeneous tumors. A point source was scanned on a GE Discovery LS at positions of increasing radii from the scanner’s center to obtain the spatially varying point spread function (PSF). PSF images were fit in three dimensions to Gaussian distributions using least squares optimization. Continuous expressions were devised for each Gaussian width as a function of radial distance, allowing for generation of the system PSF at any position in space. A spatially varying partial volume correction (SV-PVC) technique was developed using expectation maximization (EM) and a stopping criterion based on the method’s correction matrix generated for each iteration. The SV-PVC was validated using a standard tumor phantom and a tumor heterogeneity phantom, and was applied to a heterogeneous patient tumor. SV-PVC results were compared to results obtained from spatially invariant partial volume correction (SINV-PVC), which used directionally uniform three-dimensional kernels. SV-PVC of the standard tumor phantom increased the maximum observed sphere activity by 55 and 40% for 10 and 13 mm diameter spheres, respectively. Tumor heterogeneity phantom results demonstrated that as net changes in the EM correction matrix decreased below 35%, further iterations improved overall quantitative accuracy by less than 1%. SV-PVC of clinically observed tumors frequently exhibited changes of ±30% in regions of heterogeneity. The SV-PVC method implemented spatially varying kernel widths and automatically determined the number of iterations for optimal restoration, parameters which are arbitrarily chosen in SINV-PVC. Comparing SV-PVC to SINV-PVC demonstrated that similar results could be reached using both methods, but large differences can result from the arbitrary selection of SINV-PVC parameters. The presented SV-PVC method was performed without user intervention, requiring only a tumor mask as input. Research involving PET-imaged tumor heterogeneity should include correcting for partial volume effects to improve the quantitative accuracy of results. PMID:20009194
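
    The EM update at the heart of the method is the Richardson-Lucy multiplicative correction; below is a simplified, spatially invariant stand-in (the paper's spatially varying Gaussian PSF and correction-matrix stopping rule are replaced by a fixed sigma and a fixed iteration count):

        import numpy as np
        from scipy.ndimage import gaussian_filter

        def em_deconvolve(observed, sigma, n_iter=50):
            x = np.full_like(observed, observed.mean())
            for _ in range(n_iter):
                blurred = gaussian_filter(x, sigma)              # H x
                ratio = observed / np.maximum(blurred, 1e-12)
                x *= gaussian_filter(ratio, sigma)               # H^T ratio (symmetric PSF)
            return x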

  15. Periodic Pulay method for robust and efficient convergence acceleration of self-consistent field iterations

    DOE PAGES

    Banerjee, Amartya S.; Suryanarayana, Phanish; Pask, John E.

    2016-01-21

    Pulay's Direct Inversion in the Iterative Subspace (DIIS) method is one of the most widely used mixing schemes for accelerating the self-consistent solution of electronic structure problems. In this work, we propose a simple generalization of DIIS in which Pulay extrapolation is performed at periodic intervals rather than on every self-consistent field iteration, and linear mixing is performed on all other iterations. Lastly, we demonstrate through numerical tests on a wide variety of materials systems in the framework of density functional theory that the proposed generalization of Pulay's method significantly improves its robustness and efficiency.
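
    A compact rendition of the scheme: keep a short history of iterates and residuals, apply plain linear mixing on most steps, and perform the Pulay (Anderson-type) extrapolation every period-th step. The mixing parameter, history depth, and period below are illustrative defaults, not the paper's tuned values:

        import numpy as np

        def periodic_pulay(g, x0, beta=0.1, period=3, depth=5, tol=1e-10, maxit=500):
            # solves the fixed point x = g(x)
            x = x0.copy()
            X, R = [], []                      # iterate and residual history
            for k in range(1, maxit + 1):
                r = g(x) - x
                if np.linalg.norm(r) < tol:
                    break
                X.append(x.copy()); R.append(r.copy())
                if len(X) > depth:
                    X.pop(0); R.pop(0)
                if k % period == 0 and len(X) > 1:    # Pulay extrapolation step
                    dR = np.diff(np.array(R), axis=0).T
                    dX = np.diff(np.array(X), axis=0).T
                    gamma, *_ = np.linalg.lstsq(dR, r, rcond=None)
                    x = x + beta * r - (dX + beta * dR) @ gamma
                else:                                 # plain linear mixing step
                    x = x + beta * r
            return x

        x_star = periodic_pulay(np.cos, np.zeros(4))  # toy fixed point x = cos(x)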

  16. Asymptotic (h tending to infinity) absolute stability for BDFs applied to stiff differential equations. [Backward Differentiation Formulas]

    NASA Technical Reports Server (NTRS)

    Krogh, F. T.; Stewart, K.

    1984-01-01

    Methods based on backward differentiation formulas (BDFs) for solving stiff differential equations require iterating to approximate the solution of the corrector equation on each step. One hope for reducing the cost of this is to make do with iteration matrices that are known to have errors and to do no more iterations than are necessary to maintain the stability of the method. This paper, following work by Klopfenstein, examines the effect of errors in the iteration matrix on the stability of the method. Application of the results to an algorithm is discussed briefly.

  17. A Fast MoM Solver (GIFFT) for Large Arrays of Microstrip and Cavity-Backed Antennas

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fasenfest, B J; Capolino, F; Wilton, D

    2005-02-02

    A straightforward numerical analysis of large arrays of arbitrary contour (and possibly missing elements) requires large memory storage and long computation times. Several techniques are currently under development to reduce this cost. One such technique is the GIFFT (Green's function interpolation and FFT) method discussed here that belongs to the class of fast solvers for large structures. This method uses a modification of the standard AIM approach [1] that takes into account the reusability properties of matrices that arise from identical array elements. If the array consists of planar conducting bodies, the array elements are meshed using standard subdomain basis functions, such as the RWG basis. The Green's function is then projected onto a sparse regular grid of separable interpolating polynomials. This grid can then be used in a 2D or 3D FFT to accelerate the matrix-vector product used in an iterative solver [2]. The method has been proven to greatly reduce solve time by speeding up the matrix-vector product computation. The GIFFT approach also reduces fill time and memory requirements, since only the near element interactions need to be calculated exactly. The present work extends GIFFT to layered material Green's functions and multiregion interactions via slots in ground planes. In addition, a preconditioner is implemented to greatly reduce the number of iterations required for a solution. The general scheme of the GIFFT method is reported in [2]; this contribution is limited to presenting new results for array antennas made of slot-excited patches and cavity-backed patch antennas.
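
    The FFT acceleration rests on a classical fact: a matrix of translation-invariant interactions sampled on a regular grid is Toeplitz, and a Toeplitz matrix embeds in a circulant that the FFT diagonalizes. A one-dimensional sketch of that kernel trick (illustrative only; the layered-media details of GIFFT itself are not reproduced):

        import numpy as np
        from scipy.linalg import toeplitz

        def toeplitz_matvec_fft(c, r, x):
            # multiply the Toeplitz matrix with first column c and first row r
            # by x in O(N log N) via circulant embedding
            n = len(x)
            col = np.concatenate([c, [0.0], r[:0:-1]])          # circulant, size 2n
            y = np.fft.ifft(np.fft.fft(col) * np.fft.fft(np.concatenate([x, np.zeros(n)])))
            return y[:n].real

        rng = np.random.default_rng(0)
        c = rng.normal(size=8); r = rng.normal(size=8); r[0] = c[0]
        x = rng.normal(size=8)
        assert np.allclose(toeplitz_matvec_fft(c, r, x), toeplitz(c, r) @ x)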

  18. A Build-Up Interior Method for Linear Programming: Affine Scaling Form

    DTIC Science & Technology

    1990-02-01

    initiating a major iteration imply convergence in a finite number of iterations. Each iteration t of the Dikin algorithm starts with an interior dual...this variant with the affine scaling method of Dikin [5] (in dual form). We have also looked into the analogous variant for the related Karmarkar’s...[4] G. B. Dantzig, Linear Programming and Extensions (Princeton University Press, Princeton, NJ, 1963). [5] I. I. Dikin, "Iterative solution of

  19. Solution of nonlinear time-dependent PDEs through componentwise approximation of matrix functions

    NASA Astrophysics Data System (ADS)

    Cibotarica, Alexandru; Lambers, James V.; Palchak, Elisabeth M.

    2016-09-01

    Exponential propagation iterative (EPI) methods provide an efficient approach to the solution of large stiff systems of ODEs, compared to standard integrators. However, the bulk of the computational effort in these methods is due to products of matrix functions and vectors, which can become very costly at high resolution due to an increase in the number of Krylov projection steps needed to maintain accuracy. In this paper, it is proposed to modify EPI methods by using Krylov subspace spectral (KSS) methods, instead of standard Krylov projection methods, to compute products of matrix functions and vectors. Numerical experiments demonstrate that this modification causes the number of Krylov projection steps to become bounded independently of the grid size, thus dramatically improving efficiency and scalability. As a result, for each test problem featured, as the total number of grid points increases, the growth in computation time is just below linear, while other methods achieved this only on selected test problems or not at all.
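
    The costly kernel being bounded here is the Krylov approximation of a matrix function acting on a vector; in its plain (non-KSS) form, for f = exp, it reads f(A)v ≈ ||v|| V_m f(H_m) e_1. A standard Arnoldi sketch (EPI integrators actually need the related phi-functions; exp is used here for brevity):

        import numpy as np
        from scipy.linalg import expm

        def arnoldi_expmv(A, v, m=30):
            n = len(v)
            V = np.zeros((n, m + 1))
            H = np.zeros((m + 1, m))
            beta = np.linalg.norm(v)
            V[:, 0] = v / beta
            for j in range(m):
                w = A @ V[:, j]
                for i in range(j + 1):            # modified Gram-Schmidt
                    H[i, j] = V[:, i] @ w
                    w -= H[i, j] * V[:, i]
                H[j + 1, j] = np.linalg.norm(w)
                if H[j + 1, j] < 1e-14:           # happy breakdown
                    m = j + 1
                    break
                V[:, j + 1] = w / H[j + 1, j]
            e1 = np.zeros(m); e1[0] = 1.0
            return beta * (V[:, :m] @ (expm(H[:m, :m]) @ e1))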

  20. Hydrodynamics of suspensions of passive and active rigid particles: a rigid multiblob approach

    DOE PAGES

    Usabiaga, Florencio Balboa; Kallemov, Bakytzhan; Delmotte, Blaise; ...

    2016-01-12

    We develop a rigid multiblob method for numerically solving the mobility problem for suspensions of passive and active rigid particles of complex shape in Stokes flow in unconfined, partially confined, and fully confined geometries. As in a number of existing methods, we discretize rigid bodies using a collection of minimally resolved spherical blobs constrained to move as a rigid body, to arrive at a potentially large linear system of equations for the unknown Lagrange multipliers and rigid-body motions. Here we develop a block-diagonal preconditioner for this linear system and show that a standard Krylov solver converges in a modest number of iterations that is essentially independent of the number of particles. Key to the efficiency of the method is a technique for fast computation of the product of the blob-blob mobility matrix and a vector. For unbounded suspensions, we rely on existing analytical expressions for the Rotne-Prager-Yamakawa tensor combined with a fast multipole method (FMM) to obtain linear scaling in the number of particles. For suspensions sedimented against a single no-slip boundary, we use a direct summation on a graphical processing unit (GPU), which gives quadratic asymptotic scaling with the number of particles. For fully confined domains, such as periodic suspensions or suspensions confined in slit and square channels, we extend a recently developed rigid-body immersed boundary method by B. Kallemov, A. P. S. Bhalla, B. E. Griffith, and A. Donev (Commun. Appl. Math. Comput. Sci. 11 (2016), no. 1, 79-141) to suspensions of freely moving passive or active rigid particles at zero Reynolds number. We demonstrate that the iterative solver for the coupled fluid and rigid-body equations converges in a bounded number of iterations regardless of the system size. In our approach, each iteration only requires a few cycles of a geometric multigrid solver for the Poisson equation, and an application of the block-diagonal preconditioner, leading to linear scaling with the number of particles. We optimize a number of parameters in the iterative solvers and apply our method to a variety of benchmark problems to carefully assess the accuracy of the rigid multiblob approach as a function of the resolution. We also model the dynamics of colloidal particles studied in recent experiments, such as passive boomerangs in a slit channel, as well as a pair of non-Brownian active nanorods sedimented against a wall.

  1. Iterative metal artifact reduction for x-ray computed tomography using unmatched projector/backprojector pairs

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, Hanming; Wang, Linyuan; Li, Lei

    2016-06-15

    Purpose: Metal artifact reduction (MAR) is a major problem and a challenging issue in x-ray computed tomography (CT) examinations. Iterative reconstruction from sinograms unaffected by metals shows promising potential in detail recovery. This reconstruction has been the subject of much research in recent years. However, conventional iterative reconstruction methods easily introduce new artifacts around metal implants because of incomplete data reconstruction and inconsistencies in practical data acquisition. Hence, this work aims at developing a method to suppress newly introduced artifacts and improve the image quality around metal implants for the iterative MAR scheme. Methods: The proposed method consists of two steps based on the general iterative MAR framework. An uncorrected image is initially reconstructed, and the corresponding metal trace is obtained. The iterative reconstruction method is then used to reconstruct images from the unaffected sinogram. In the reconstruction step of this work, an iterative strategy utilizing unmatched projector/backprojector pairs is used. A ramp filter is introduced into the back-projection procedure to restrain the inconsistency components in low frequencies and generate more reliable images of the regions around metals. Furthermore, a constrained total variation (TV) minimization model is also incorporated to enhance efficiency. The proposed strategy is implemented based on an iterative FBP and an alternating direction minimization (ADM) scheme, respectively. The developed algorithms are referred to as “iFBP-TV” and “TV-FADM,” respectively. Two projection-completion-based MAR methods and three iterative MAR methods are performed simultaneously for comparison. Results: The proposed method performs reasonably on both simulation and real CT-scanned datasets. This approach could reduce streak metal artifacts effectively and avoid the mentioned effects in the vicinity of the metals. The improvements are evaluated by inspecting regions of interest and by comparing the root-mean-square errors, normalized mean absolute distance, and universal quality index metrics of the images. Both iFBP-TV and TV-FADM methods outperform other counterparts in all cases. Unlike the conventional iterative methods, the proposed strategy utilizing unmatched projector/backprojector pairs shows excellent performance in detail preservation and prevention of the introduction of new artifacts. Conclusions: Qualitative and quantitative evaluations of experimental results indicate that the developed method outperforms classical MAR algorithms in suppressing streak artifacts and preserving the edge structural information of the object. In particular, structures lying close to metals can be gradually recovered because of the reduction of artifacts caused by inconsistency effects.

  2. Fine-resolution conservation planning with limited climate-change information.

    PubMed

    Shah, Payal; Mallory, Mindy L; Ando, Amy W; Guntenspergen, Glenn R

    2017-04-01

    Climate-change-induced uncertainties in future spatial patterns of conservation-related outcomes make it difficult to implement standard conservation-planning paradigms. A recent study translates Markowitz's risk-diversification strategy from finance to conservation settings, enabling conservation agents to use this diversification strategy for allocating conservation and restoration investments across space to minimize the risk associated with such uncertainty. However, this method is information-intensive and requires a large number of forecasts of ecological outcomes associated with possible climate-change scenarios for carrying out fine-resolution conservation planning. We developed a technique for iterative, spatial portfolio analysis that can be used to allocate scarce conservation resources across a desired level of subregions in a planning landscape in the absence of a sufficient number of ecological forecasts. We applied our technique to the Prairie Pothole Region in central North America. A lack of sufficient future climate information prevented attainment of the most efficient risk-return conservation outcomes in the Prairie Pothole Region. The difference in expected conservation returns between conservation planning with limited climate-change information and full climate-change information was as large as 30% for the Prairie Pothole Region even when the most efficient iterative approach was used. However, our iterative approach allowed finer resolution portfolio allocation with limited climate-change forecasts such that the best possible risk-return combinations were obtained. With our most efficient iterative approach, the expected loss in conservation outcomes owing to limited climate-change information could be reduced by 17% relative to other iterative approaches. © 2016 Society for Conservation Biology.

  3. The Iterative Reweighted Mixed-Norm Estimate for Spatio-Temporal MEG/EEG Source Reconstruction.

    PubMed

    Strohmeier, Daniel; Bekhti, Yousra; Haueisen, Jens; Gramfort, Alexandre

    2016-10-01

    Source imaging based on magnetoencephalography (MEG) and electroencephalography (EEG) allows for the non-invasive analysis of brain activity with high temporal and good spatial resolution. As the bioelectromagnetic inverse problem is ill-posed, constraints are required. For the analysis of evoked brain activity, spatial sparsity of the neuronal activation is a common assumption. It is often taken into account using convex constraints based on the ℓ1-norm. The resulting source estimates are however biased in amplitude and often suboptimal in terms of source selection due to high correlations in the forward model. In this work, we demonstrate that an inverse solver based on a block-separable penalty with a Frobenius norm per block and an ℓ0.5-quasinorm over blocks addresses both of these issues. For solving the resulting non-convex optimization problem, we propose the iterative reweighted Mixed Norm Estimate (irMxNE), an optimization scheme based on iterative reweighted convex surrogate optimization problems, which are solved efficiently using a block coordinate descent scheme and an active set strategy. We compare the proposed sparse imaging method to the dSPM and the RAP-MUSIC approach based on two MEG data sets. We provide empirical evidence based on simulations and analysis of MEG data that the proposed method improves on the standard Mixed Norm Estimate (MxNE) in terms of amplitude bias, support recovery, and stability.

  4. A Kronecker product splitting preconditioner for two-dimensional space-fractional diffusion equations

    NASA Astrophysics Data System (ADS)

    Chen, Hao; Lv, Wen; Zhang, Tongtong

    2018-05-01

    We study preconditioned iterative methods for the linear system arising in the numerical discretization of a two-dimensional space-fractional diffusion equation. Our approach is based on a formulation of the discrete problem that is shown to be the sum of two Kronecker products. By making use of an alternating Kronecker product splitting iteration technique we establish a class of fixed-point iteration methods. Theoretical analysis shows that the new method converges to the unique solution of the linear system. Moreover, the optimal choice of the involved iteration parameters and the corresponding asymptotic convergence rate are computed exactly when the eigenvalues of the system matrix are all real. The basic iteration is accelerated by a Krylov subspace method like GMRES. The corresponding preconditioner is in a form of a Kronecker product structure and requires at each iteration the solution of a set of discrete one-dimensional fractional diffusion equations. We use structure preserving approximations to the discrete one-dimensional fractional diffusion operators in the action of the preconditioning matrix. Numerical examples are presented to illustrate the effectiveness of this approach.
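
    The structure can be mimicked in a few lines: write the matrix as A = I⊗B + C⊗I, never form it explicitly, and apply a Kronecker-product-splitting preconditioner M = (αI + I⊗B)(αI + C⊗I)/(2α) inside GMRES, which reduces to one-dimensional solves. In this sketch B and C are ordinary second-difference matrices standing in for the paper's fractional operators, and α = 1 is an arbitrary, untuned splitting parameter:

        import numpy as np
        from scipy.sparse.linalg import LinearOperator, gmres

        p = q = 32
        B = 2 * np.eye(q) - np.eye(q, k=1) - np.eye(q, k=-1)   # 1D operator stand-in
        C = B.copy()
        alpha = 1.0

        def matvec(z):                       # A z with A = kron(I, B) + kron(C, I)
            Z = z.reshape(p, q)
            return (Z @ B.T + C @ Z).ravel()

        M1 = alpha * np.eye(q) + B.T         # factors of the splitting preconditioner
        M2 = alpha * np.eye(p) + C

        def apply_prec(r):                   # z = M^{-1} r via two sets of 1D solves
            W = 2 * alpha * r.reshape(p, q)
            Y = np.linalg.solve(M1.T, W.T).T     # solve (alpha I + I kron B) y = 2*alpha*r
            Z = np.linalg.solve(M2, Y)           # solve (alpha I + C kron I) z = y
            return Z.ravel()

        n = p * q
        x, info = gmres(LinearOperator((n, n), matvec=matvec, dtype=float),
                        np.ones(n),
                        M=LinearOperator((n, n), matvec=apply_prec, dtype=float))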

  5. Electromagnetic scattering of large structures in layered earths using integral equations

    NASA Astrophysics Data System (ADS)

    Xiong, Zonghou; Tripp, Alan C.

    1995-07-01

    An electromagnetic scattering algorithm for large conductivity structures in stratified media has been developed, based on the method of system iteration and spatial symmetry reduction using volume electric integral equations. The method of system iteration divides a structure into many substructures and solves the resulting matrix equation using a block iterative method. The block submatrices usually need to be stored on disk in order to save computer core memory. However, this requires a large disk for large structures. If the body is discretized into equal-size cells it is possible to use the spatial symmetry relations of the Green's functions to regenerate the scattering impedance matrix in each iteration, thus avoiding expensive disk storage. Numerical tests show that the system iteration converges much faster than the conventional point-wise Gauss-Seidel iterative method. The number of cells does not significantly affect the rate of convergence. Thus the algorithm effectively reduces the solution of the scattering problem to order O(N^2), instead of O(N^3) as with direct solvers.

  6. WE-G-204-06: Grid-Line Artifact Minimization for High Resolution Detectors Using Iterative Residual Scatter Correction

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rana, R; Bednarek, D; Rudin, S

    2015-06-15

    Purpose: Anti-scatter grid-line artifacts are more prominent for high-resolution x-ray detectors since the fraction of a pixel blocked by the grid septa is large. Direct logarithmic subtraction of the artifact pattern is limited by residual scattered radiation and we investigate an iterative method for scatter correction. Methods: A stationary Smit-Röntgen anti-scatter grid was used with a high resolution Dexela 1207 CMOS X-ray detector (75 µm pixel size) to image an artery block (Nuclear Associates, Model 76-705) placed within a uniform head equivalent phantom as the scattering source. The image of the phantom was divided by a flat-field image obtained without scatter but with the grid to eliminate grid-line artifacts. Constant scatter values were subtracted from the phantom image before dividing by the averaged flat-field-with-grid image. The standard deviation of pixel values for a fixed region of the resultant images with different subtracted scatter values provided a measure of the remaining grid-line artifacts. Results: A plot of the standard deviation of image pixel values versus the subtracted scatter value shows that the image structure noise reaches a minimum before going up again as the scatter value is increased. This minimum corresponds to a minimization of the grid-line artifacts as demonstrated in line profile plots obtained through each of the images perpendicular to the grid lines. Artifact-free images of the artery block were obtained with the optimal scatter value obtained by this iterative approach. Conclusion: Residual scatter subtraction can provide improved grid-line artifact elimination when using the flat-field with grid “subtraction” technique. The standard deviation of image pixel values can be used to determine the optimal scatter value to subtract to obtain a minimization of grid line artifacts with high resolution x-ray imaging detectors. This study was supported by NIH Grant R01EB002873 and an equipment grant from Toshiba Medical Systems Corp.
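
    The iterative search itself is a one-dimensional minimization and fits in a few lines: subtract each trial scatter constant, divide by the flat-field-with-grid image, and keep the value that minimizes the pixel standard deviation in a fixed region. A sketch with hypothetical array inputs (the candidate range is a placeholder):

        import numpy as np

        def best_scatter_value(phantom_roi, flat_with_grid_roi, candidates):
            # returns the trial scatter value whose corrected image is flattest
            stds = [((phantom_roi - s) / flat_with_grid_roi).std() for s in candidates]
            i = int(np.argmin(stds))
            return candidates[i], np.array(stds)

        # s_opt, curve = best_scatter_value(img[roi], flat[roi], np.arange(0, 500, 10))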

  7. Iteration with Spreadsheets.

    ERIC Educational Resources Information Center

    Smith, Michael

    1990-01-01

    Presents several examples of the iteration method using computer spreadsheets. Examples included are simple iterative sequences and the solution of equations using the Newton-Raphson formula, linear interpolation, and interval bisection. (YP)

  8. New Standards Put the Spotlight on Professional Learning

    ERIC Educational Resources Information Center

    Mizell, Hayes; Hord, Shirley; Killion, Joellen; Hirsh, Stephanie

    2011-01-01

    Learning Forward introduces new Standards for Professional Learning. This is the third iteration of standards outlining the characteristics of professional learning that lead to effective teaching practices, supportive leadership, and improved student results. The standards are not a prescription for how education leaders and public officials…

  9. Comparison of Estimation Techniques for the Four Parameter Beta Distribution.

    DTIC Science & Technology

    1981-12-01

    Gnanadesikan, Pinkham, and Hughes in 1967 (Ref 11). It dealt with the standard beta, and performed the estimation using smallest order statistics. The...convergence of the iterative scheme" (Ref 11:611). Beckman and Tietjen picked up on Gnanadesikan, et al., and developed a solution method which is "fast...zia (Second Edition). Reading, Massachusetts: Addison-Wesley Publishing Company, 1978. 11. Gnanadesikan, R., R. S. Pinkham and L. P. Hughes. "Maximum

  10. Iterative methods for tomography problems: implementation to a cross-well tomography problem

    NASA Astrophysics Data System (ADS)

    Karadeniz, M. F.; Weber, G. W.

    2018-01-01

    The velocity distribution between two boreholes is reconstructed by cross-well tomography, which is commonly used in geology. In this paper, the iterative methods Kaczmarz’s algorithm, the algebraic reconstruction technique (ART), and the simultaneous iterative reconstruction technique (SIRT) are applied to a specific cross-well tomography problem. The convergence of these methods to the solution and their CPU times for the cross-well tomography problem are compared. Furthermore, the three methods are compared for different tolerance values.
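
    Of the three, Kaczmarz’s algorithm is the simplest to state: cyclically project the current iterate onto the hyperplane of each ray equation. A sketch (it assumes no zero rows in the system matrix; ART and SIRT differ in their relaxation and in updating from all rows simultaneously):

        import numpy as np

        def kaczmarz(A, b, n_sweeps=100):
            m, n = A.shape
            x = np.zeros(n)
            row_norm2 = np.einsum('ij,ij->i', A, A)
            for _ in range(n_sweeps):
                for i in range(m):
                    x += (b[i] - A[i] @ x) / row_norm2[i] * A[i]
            return x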

  11. Extending the applicability of the Tkatchenko-Scheffler dispersion correction via iterative Hirshfeld partitioning

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bučko, Tomáš, E-mail: bucko@fns.uniba.sk; Department of Computational Materials Physics, Fakultät für Physik and Center for Computational Materials Science, Universität Wien, Sensengasse, Wien 1090; Lebègue, Sébastien, E-mail: sebastien.lebegue@univ-lorraine.fr

    2014-07-21

    Recently we have demonstrated that the applicability of the Tkatchenko-Scheffler (TS) method for calculating dispersion corrections to density-functional theory can be extended to ionic systems if the Hirshfeld method for estimating effective volumes and charges of atoms in molecules or solids (AIMs) is replaced by its iterative variant [T. Bučko, S. Lebègue, J. Hafner, and J. Ángyán, J. Chem. Theory Comput. 9, 4293 (2013)]. The standard Hirshfeld method uses neutral atoms as a reference, whereas in the iterative Hirshfeld (HI) scheme the fractionally charged atomic reference states are determined self-consistently. We show that the HI method predicts more realistic AIM charges and that the TS/HI approach leads to polarizabilities and C6 dispersion coefficients in ionic or partially ionic systems which are, as expected, larger for anions than for cations (in contrast to the conventional TS method). For crystalline materials, the new algorithm predicts polarizabilities per unit cell in better agreement with the values derived from the Clausius-Mossotti equation. The applicability of the TS/HI method has been tested for a wide variety of molecular and solid-state systems. It is demonstrated that for systems dominated by covalent interactions and/or dispersion forces the TS/HI method leads to the same results as the conventional TS approach. The difference between the TS/HI and TS approaches increases with increasing ionicity. A detailed comparison is presented for isoelectronic series of octet compounds, layered crystals, complex intermetallic compounds, and hydrides, and for crystals built of molecules or containing molecular anions. It is demonstrated that only the TS/HI method leads to accurate results for systems where both electrostatic and dispersion interactions are important, as illustrated for Li-intercalated graphite and for molecular adsorption on the surfaces of ionic solids and in the cavities of zeolites.

  12. Pseudo-time methods for constrained optimization problems governed by PDE

    NASA Technical Reports Server (NTRS)

    Taasan, Shlomo

    1995-01-01

    In this paper we present a novel method for solving optimization problems governed by partial differential equations. Existing methods use gradient information in marching toward the minimum, where the constrained PDE is solved once (sometimes only approximately) per optimization step. Such methods can be viewed as marching techniques on the intersection of the state and costate hypersurfaces while improving the residuals of the design equations at each iteration. In contrast, the method presented here marches on the design hypersurface and at each iteration improves the residuals of the state and costate equations. The new method is usually much less expensive per iteration step since, in most problems of practical interest, the design equation involves far fewer unknowns than either the state or costate equations. Convergence is shown using energy estimates for the evolution equations governing the iterative process. Numerical tests show that the new method allows the solution of the optimization problem at the cost of solving the analysis problem just a few times, independent of the number of design parameters. The method can be applied using single grid iterations as well as with multigrid solvers.

  13. Krylov subspace methods on supercomputers

    NASA Technical Reports Server (NTRS)

    Saad, Youcef

    1988-01-01

    A short survey of recent research on Krylov subspace methods with emphasis on implementation on vector and parallel computers is presented. Conjugate gradient methods have proven very useful on traditional scalar computers, and their popularity is likely to increase as three-dimensional models gain importance. A conservative approach to derive effective iterative techniques for supercomputers has been to find efficient parallel/vector implementations of the standard algorithms. The main source of difficulty in the incomplete factorization preconditionings is in the solution of the triangular systems at each step. A few approaches consisting of implementing efficient forward and backward triangular solutions are described in detail. Polynomial preconditioning as an alternative to standard incomplete factorization techniques is also discussed. Another efficient approach is to reorder the equations so as to improve the structure of the matrix to achieve better parallelism or vectorization. An overview of these and other ideas and their effectiveness or potential for different types of architectures is given.

  14. Deep Learning Methods for Improved Decoding of Linear Codes

    NASA Astrophysics Data System (ADS)

    Nachmani, Eliya; Marciano, Elad; Lugosch, Loren; Gross, Warren J.; Burshtein, David; Be'ery, Yair

    2018-02-01

    The problem of low complexity, close to optimal, channel decoding of linear codes with short to moderate block length is considered. It is shown that deep learning methods can be used to improve a standard belief propagation decoder, despite the large example space. Similar improvements are obtained for the min-sum algorithm. It is also shown that tying the parameters of the decoders across iterations, so as to form a recurrent neural network architecture, can be implemented with comparable results. The advantage is that significantly fewer parameters are required. We also introduce a recurrent neural decoder architecture based on the method of successive relaxation. Improvements over standard belief propagation are also observed on sparser Tanner graph representations of the codes. Furthermore, we demonstrate that the neural belief propagation decoder can be used to improve the performance, or alternatively reduce the computational complexity, of a close to optimal decoder of short BCH codes.

  15. Clinical application and validation of an iterative forward projection matching algorithm for permanent brachytherapy seed localization from conebeam-CT x-ray projections

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pokhrel, Damodar; Murphy, Martin J.; Todor, Dorin A.

    2010-09-15

    Purpose: To experimentally validate a new algorithm for reconstructing the 3D positions of implanted brachytherapy seeds from postoperatively acquired 2D conebeam-CT (CBCT) projection images. Methods: The iterative forward projection matching (IFPM) algorithm finds the 3D seed geometry that minimizes the sum of the squared intensity differences between computed projections of an initial estimate of the seed configuration and radiographic projections of the implant. In-house machined phantoms, containing arrays of 12 and 72 seeds, respectively, are used to validate this method. Also, four 103Pd postimplant patients are scanned using an ACUITY digital simulator. Three to ten x-ray images are selected from the CBCT projection set and processed to create binary seed-only images. To quantify IFPM accuracy, the reconstructed seed positions are forward projected and overlaid on the measured seed images to find the nearest-neighbor distance between measured and computed seed positions for each image pair. Also, the estimated 3D seed coordinates are compared to known seed positions in the phantom and clinically obtained VariSeed planning coordinates for the patient data. Results: For the phantom study, seed localization error is (0.58 ± 0.33) mm. For all four patient cases, the mean registration error is better than 1 mm when compared against the measured seed projections. IFPM converges in 20-28 iterations, with a computation time of about 1.9-2.8 min/iteration on a 1 GHz processor. Conclusions: The IFPM algorithm avoids the need to match corresponding seeds in each projection as required by standard back-projection methods. The authors’ results demonstrate ≈1 mm accuracy in reconstructing the 3D positions of brachytherapy seeds from the measured 2D projections. This algorithm also successfully localizes overlapping clustered and highly migrated seeds in the implant.

  16. A New Newton-Like Iterative Method for Roots of Analytic Functions

    ERIC Educational Resources Information Center

    Otolorin, Olayiwola

    2005-01-01

    A new Newton-like iterative formula for the solution of non-linear equations is proposed. To derive the formula, the convergence criteria of the one-parameter iteration formula, and also the quasilinearization in the derivation of Newton's formula are reviewed. The result is a new formula which eliminates the limitations of other methods. There is…

  17. Iterative Nonlocal Total Variation Regularization Method for Image Restoration

    PubMed Central

    Xu, Huanyu; Sun, Quansen; Luo, Nan; Cao, Guo; Xia, Deshen

    2013-01-01

    In this paper, a Bregman iteration based total variation image restoration algorithm is proposed. Based on the Bregman iteration, the algorithm splits the original total variation problem into sub-problems that are easy to solve. Moreover, non-local regularization is introduced into the proposed algorithm, and a method to choose the non-local filter parameter locally and adaptively is proposed. Experiment results show that the proposed algorithms outperform some other regularization methods. PMID:23776560
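
    For readers who want to experiment, scikit-image ships a split-Bregman total variation denoiser that conveys the Bregman-iteration flavor of such methods; the nonlocal regularization and adaptive parameter choice proposed in this paper are not part of it.

```python
import numpy as np
from skimage import data, restoration, util

# Standard split-Bregman TV restoration; `weight` balances data fidelity
# against total variation (larger weight = less smoothing).
noisy = util.random_noise(util.img_as_float(data.camera()), var=0.01)
denoised = restoration.denoise_tv_bregman(noisy, weight=10.0)
```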

  18. Iterative integral parameter identification of a respiratory mechanics model.

    PubMed

    Schranz, Christoph; Docherty, Paul D; Chiew, Yeong Shiong; Möller, Knut; Chase, J Geoffrey

    2012-07-18

    Patient-specific respiratory mechanics models can support the evaluation of optimal lung protective ventilator settings during ventilation therapy. Clinical application requires that the individual's model parameter values must be identified with information available at the bedside. Multiple linear regression or gradient-based parameter identification methods are highly sensitive to noise and initial parameter estimates. Thus, they are difficult to apply at the bedside to support therapeutic decisions. An iterative integral parameter identification method is applied to a second order respiratory mechanics model. The method is compared to the commonly used regression methods and error-mapping approaches using simulated and clinical data. The clinical potential of the method was evaluated on data from 13 Acute Respiratory Distress Syndrome (ARDS) patients. The iterative integral method converged to error minima 350 times faster than the Simplex Search Method using simulation data sets and 50 times faster using clinical data sets. Established regression methods reported erroneous results due to sensitivity to noise. In contrast, the iterative integral method was effective independent of initial parameter estimations, and converged successfully in each case tested. These investigations reveal that the iterative integral method is beneficial with respect to computing time, operator independence and robustness, and thus applicable at the bedside for this clinical application.

  19. A Universal Tare Load Prediction Algorithm for Strain-Gage Balance Calibration Data Analysis

    NASA Technical Reports Server (NTRS)

    Ulbrich, N.

    2011-01-01

    An algorithm is discussed that may be used to estimate tare loads of wind tunnel strain-gage balance calibration data. The algorithm was originally developed by R. Galway of IAR/NRC Canada and has been described in the literature for the iterative analysis technique. Basic ideas of Galway's algorithm, however, are universally applicable and work for both the iterative and the non-iterative analysis technique. A recent modification of Galway's algorithm is presented that improves the convergence behavior of the tare load prediction process if it is used in combination with the non-iterative analysis technique. The modified algorithm allows an analyst to use an alternate method for the calculation of intermediate non-linear tare load estimates whenever Galway's original approach does not lead to a convergence of the tare load iterations. It is also shown in detail how Galway's algorithm may be applied to the non-iterative analysis technique. Hand load data from the calibration of a six-component force balance is used to illustrate the application of the original and modified tare load prediction method. During the analysis of the data both the iterative and the non-iterative analysis technique were applied. Overall, predicted tare loads for combinations of the two tare load prediction methods and the two balance data analysis techniques showed excellent agreement as long as the tare load iterations converged. The modified algorithm, however, appears to have an advantage over the original algorithm when absolute voltage measurements of gage outputs are processed using the non-iterative analysis technique. In these situations only the modified algorithm converged because it uses an exact solution of the intermediate non-linear tare load estimate for the tare load iteration.

  20. Development of iterative techniques for the solution of unsteady compressible viscous flows

    NASA Technical Reports Server (NTRS)

    Sankar, Lakshmi N.; Hixon, Duane

    1991-01-01

    Efficient iterative solution methods are being developed for the numerical solution of two- and three-dimensional compressible Navier-Stokes equations. Iterative time marching methods have several advantages over classical multi-step explicit time marching schemes and non-iterative implicit time marching schemes. Iterative schemes have better stability characteristics than non-iterative explicit and implicit schemes, and the extra work they require can be designed to perform efficiently on current and future generation scalable, massively parallel machines. An obvious candidate for iteratively solving the system of coupled nonlinear algebraic equations arising in CFD applications is the Newton method. Newton's method was implemented in existing finite difference and finite volume methods. Depending on the complexity of the problem, the number of Newton iterations needed per step to solve the discretized system of equations can, however, vary dramatically from a few to several hundred. Another popular approach based on the classical conjugate gradient method, known as the GMRES (Generalized Minimum Residual) algorithm, is investigated. The GMRES algorithm was used in the past by a number of researchers for solving steady viscous and inviscid flow problems with considerable success. Here, the suitability of this algorithm is investigated for solving the system of nonlinear equations that arise in unsteady Navier-Stokes solvers at each time step. Unlike the Newton method, which attempts to drive the error in the solution at each and every node down to zero, the GMRES algorithm only seeks to minimize the L2 norm of the error. In the GMRES algorithm the changes in the flow properties from one time step to the next are assumed to be the sum of a set of orthogonal vectors. By choosing the number of vectors N to be reasonably small (between 5 and 20), the work required for advancing the solution from one time step to the next may be kept to (N+1) times that of a noniterative scheme. Many of the operations required by the GMRES algorithm, such as matrix-vector multiplies, matrix additions and subtractions, can be vectorized and parallelized efficiently.
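
    A minimal sketch of the building block in question, using SciPy's GMRES on a linearized implicit step J dq = -R(q); the tridiagonal matrix and random residual are placeholders for a Navier-Stokes Jacobian and residual vector.

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import gmres

# One implicit time step linearized as J @ dq = -R(q); GMRES only needs
# matrix-vector products, so J could equally be a LinearOperator.
n = 200
J = diags([-1.0, 2.4, -1.0], [-1, 0, 1], shape=(n, n), format="csr")
R = np.random.default_rng(0).standard_normal(n)

dq, info = gmres(J, -R, restart=20, maxiter=500, atol=1e-8)
assert info == 0  # 0 means the solver converged
```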

  1. Direct reconstruction of cardiac PET kinetic parametric images using a preconditioned conjugate gradient approach

    PubMed Central

    Rakvongthai, Yothin; Ouyang, Jinsong; Guerin, Bastien; Li, Quanzheng; Alpert, Nathaniel M.; El Fakhri, Georges

    2013-01-01

    Purpose: Our research goal is to develop an algorithm to reconstruct cardiac positron emission tomography (PET) kinetic parametric images directly from sinograms and compare its performance with the conventional indirect approach. Methods: Time activity curves of a NCAT phantom were computed according to a one-tissue compartmental kinetic model with realistic kinetic parameters. The sinograms at each time frame were simulated using the activity distribution for the time frame. The authors reconstructed the parametric images directly from the sinograms by optimizing a cost function, which included the Poisson log-likelihood and a spatial regularization term, using the preconditioned conjugate gradient (PCG) algorithm with the proposed preconditioner. The proposed preconditioner is a diagonal matrix whose diagonal entries are the ratio of the parameter and the sensitivity of the radioactivity associated with the parameter. The authors compared the reconstructed parametric images using the direct approach with those reconstructed using the conventional indirect approach. Results: At the same bias, the direct approach yielded significant relative reduction in standard deviation by 12%–29% and 32%–70% for 50 × 10^6 and 10 × 10^6 detected coincidence counts, respectively. Also, the PCG method effectively reached a constant value after only 10 iterations (with numerical convergence achieved after 40–50 iterations), while more than 500 iterations were needed for CG. Conclusions: The authors have developed a novel approach based on the PCG algorithm to directly reconstruct cardiac PET parametric images from sinograms, which yields better estimation of kinetic parameters than the conventional indirect approach, i.e., curve fitting of reconstructed images. The PCG method increases the convergence rate of reconstruction significantly as compared to the conventional CG method. PMID:24089922
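
    The effect of a diagonal preconditioner is easy to demonstrate with SciPy's conjugate gradient solver. Here a plain Jacobi (inverse-diagonal) preconditioner stands in for the paper's parameter-to-sensitivity ratio, and a generic SPD matrix replaces the PET system model.

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import cg, LinearOperator

# Generic SPD test system; the diagonal preconditioner mimics the idea of
# rescaling each unknown individually (here simply by 1/diag(A)).
n = 500
A = diags([-1.0, 4.0, -1.0], [-1, 0, 1], shape=(n, n), format="csr")
b = np.ones(n)

d_inv = 1.0 / A.diagonal()
M = LinearOperator((n, n), matvec=lambda r: d_inv * r)

x, info = cg(A, b, M=M, atol=1e-10)   # info == 0 signals convergence
```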

  2. Automatic extraction of the mid-sagittal plane using an ICP variant

    NASA Astrophysics Data System (ADS)

    Fieten, Lorenz; Eschweiler, Jörg; de la Fuente, Matías; Gravius, Sascha; Radermacher, Klaus

    2008-03-01

    Precise knowledge of the mid-sagittal plane is important for the assessment and correction of several deformities. Furthermore, the mid-sagittal plane can be used for the definition of standardized coordinate systems such as pelvis or skull coordinate systems. A popular approach for mid-sagittal plane computation is based on the selection of anatomical landmarks located either directly on the plane or symmetrically to it. However, the manual selection of landmarks is a tedious, time-consuming and error-prone task, which requires great care. To overcome this drawback, it was previously suggested to use the iterative closest point (ICP) algorithm: after an initial mirroring of the data points on a default mirror plane, the mirrored data points are registered iteratively to the model points using rigid transforms, and finally a reflection transform approximating the cumulative transform can be extracted. In this work, we present an ICP variant for the iterative optimization of the reflection parameters. It is based on a closed-form solution to the least-squares problem of matching data points to model points using a reflection. In experiments on CT pelvis and skull datasets, our method showed a better ability to match homologous areas.
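
    The closed-form ingredient, a least-squares orthogonal transform constrained to det(R) = -1, can be obtained with a Kabsch-style SVD solve in which the weakest singular direction has its sign flipped; the plane normal is then the eigenvector of R with eigenvalue -1. The sketch below is a generic version of that idea, not the authors' exact formulation.

```python
import numpy as np

def best_reflection(data, model):
    # Center both clouds, build the cross-covariance, and solve the
    # orthogonal Procrustes problem constrained to det(R) = -1.
    dc, mc = data.mean(axis=0), model.mean(axis=0)
    H = (data - dc).T @ (model - mc)
    U, _, Vt = np.linalg.svd(H)
    V = Vt.T
    d = -np.sign(np.linalg.det(V @ U.T))   # flip the weakest direction
    R = V @ np.diag([1.0, 1.0, d]) @ U.T   # det(R) = -1: a reflection
    t = mc - R @ dc                        # translation part
    return R, t

# Synthetic check: mirror points across the y-z plane and recover it
rng = np.random.default_rng(0)
pts = rng.standard_normal((100, 3))
mirrored = pts * np.array([-1.0, 1.0, 1.0])
R, t = best_reflection(pts, mirrored)
```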

  3. Seismic waveform tomography with shot-encoding using a restarted L-BFGS algorithm.

    PubMed

    Rao, Ying; Wang, Yanghua

    2017-08-17

    In seismic waveform tomography, or full-waveform inversion (FWI), one effective strategy used to reduce the computational cost is shot-encoding, which encodes all shots randomly and sums them into one super shot to significantly reduce the number of wavefield simulations in the inversion. However, this process induces instability in the iterative inversion, even when a robust limited-memory BFGS (L-BFGS) algorithm is used. The restarted L-BFGS algorithm proposed here is both stable and efficient. This breakthrough ensures, for the first time, the applicability of advanced FWI methods to three-dimensional seismic field data. In a standard L-BFGS algorithm, if the shot-encoding remains unchanged, it will generate a crosstalk effect between different shots. This crosstalk effect can only be suppressed by employing sufficient randomness in the shot-encoding. Therefore, the implementation of the L-BFGS algorithm is restarted at every segment. Each segment consists of a number of iterations; the first few iterations use an invariant encoding, while the remainder use random re-coding. This restarted L-BFGS algorithm balances the computational efficiency of shot-encoding, the convergence stability of the L-BFGS algorithm, and the inversion quality characteristic of random encoding in FWI.
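
    The restart pattern can be mimicked with SciPy's L-BFGS-B by capping the iterations per segment and redrawing the random encoding between segments. A toy quadratic stands in for the encoded wavefield misfit; all names are illustrative.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)

def encoded_misfit(m, codes):
    """Stand-in for the FWI objective with one 'super shot': codes are the
    random shot-encoding weights; a toy quadratic replaces the wavefield
    simulation."""
    G = np.outer(codes, codes) + np.eye(codes.size)
    return 0.5 * m @ G @ m - m.sum()

m = np.zeros(30)
for segment in range(5):                          # restart each segment
    codes = rng.choice([-1.0, 1.0], size=m.size)  # fresh random encoding
    res = minimize(encoded_misfit, m, args=(codes,),
                   method="L-BFGS-B", options={"maxiter": 10})
    m = res.x                                     # memory pairs are discarded
```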

  4. Iterative dip-steering median filter

    NASA Astrophysics Data System (ADS)

    Huo, Shoudong; Zhu, Weihong; Shi, Taikun

    2017-09-01

    Seismic data are always contaminated with high noise components, which present processing challenges, especially for signal preservation and true amplitude response. This paper deals with an extension of the conventional median filter, which is widely used in random noise attenuation. It is known that the standard median filter works well with laterally aligned coherent events but cannot handle steep events, especially events with conflicting dips. In this paper, an iterative dip-steering median filter is proposed for the attenuation of random noise in the presence of multiple dips. The filter first identifies the dominant dips inside an optimized processing window by a Fourier-radial transform in the frequency-wavenumber domain. The optimum size of the processing window depends on the intensity of the random noise that needs to be attenuated and the amount of signal to be preserved. It then applies the median filter along the dominant dip and retains the signals. Iterations are adopted to process the residual signals along the remaining dominant dips in a descending sequence, until all signals have been retained. The method is tested on both synthetic and field data gathers and also compared with the commonly used f-k least squares de-noising and f-x deconvolution.

  5. Implementation of an improved adaptive-implicit method in a thermal compositional simulator

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tan, T.B.

    1988-11-01

    A multicomponent thermal simulator with an adaptive-implicit-method (AIM) formulation/inexact-adaptive-Newton (IAN) method is presented. The final coefficient matrix retains the original banded structure so that conventional iterative methods can be used. Various methods for selection of the eliminated unknowns are tested. The AIM/IAN method has a lower work count per Newtonian iteration than fully implicit methods, but a wrong choice of unknowns will result in excessive Newtonian iterations. For the problems tested, the residual-error method described in the paper for selecting implicit unknowns, together with the IAN method, had an improvement of up to 28% of the CPU time over the fully implicit method.

  6. GPU-accelerated iterative reconstruction for limited-data tomography in CBCT systems.

    PubMed

    de Molina, Claudia; Serrano, Estefania; Garcia-Blas, Javier; Carretero, Jesus; Desco, Manuel; Abella, Monica

    2018-05-15

    Standard cone-beam computed tomography (CBCT) involves the acquisition of at least 360 projections rotating through 360 degrees. Nevertheless, there are cases in which only a few projections can be taken in a limited angular span, such as during surgery, where rotation of the source-detector pair is limited to less than 180 degrees. Reconstruction of limited data with the conventional method proposed by Feldkamp, Davis and Kress (FDK) results in severe artifacts. Iterative methods may compensate for the lack of data by including additional prior information, although they imply a high computational burden and memory consumption. We present an accelerated implementation of an iterative method for CBCT following the Split Bregman formulation, which reduces computational time through GPU-accelerated kernels. The implementation enables the reconstruction of large volumes (>1024^3 pixels) using partitioning strategies in forward- and back-projection operations. We evaluated the algorithm on small-animal data for different scenarios with different numbers of projections, angular span, and projection size. Reconstruction time varied linearly with the number of projections and quadratically with projection size but remained almost unchanged with angular span. Forward- and back-projection operations represent 60% of the total computational burden. Efficient implementation using parallel processing and large-memory management strategies together with GPU kernels enables the use of advanced reconstruction approaches which are needed in limited-data scenarios. Our GPU implementation showed a significant time reduction (up to 48×) compared to a CPU-only implementation, reducing the total reconstruction time from several hours to a few minutes.

  7. Total variation iterative constraint algorithm for limited-angle tomographic reconstruction of non-piecewise-constant structures

    NASA Astrophysics Data System (ADS)

    Krauze, W.; Makowski, P.; Kujawińska, M.

    2015-06-01

    Standard tomographic algorithms applied to optical limited-angle tomography result in reconstructions that have highly anisotropic resolution, and thus special algorithms are developed. State-of-the-art approaches utilize the Total Variation (TV) minimization technique. These methods give very good results but are applicable to piecewise-constant structures only. In this paper, we propose a novel algorithm for 3D limited-angle tomography - the Total Variation Iterative Constraint method (TVIC) - which extends the applicability of TV regularization to non-piecewise-constant samples, like biological cells. This approach consists of two parts. First, TV minimization is used as a strong regularizer to create a sharp-edged image converted to a 3D binary mask, which is then iteratively applied in the tomographic reconstruction as a constraint in the object domain. In the present work we test the method on a synthetic object designed to mimic basic structures of a living cell. For simplicity, the test reconstructions were performed within the straight-line propagation model (SIRT3D solver from the ASTRA Tomography Toolbox), but the strategy is general enough to supplement any algorithm for tomographic reconstruction that supports arbitrary geometries of plane-wave projection acquisition. This includes optical diffraction tomography solvers. The obtained reconstructions present the resolution uniformity and general shape accuracy expected from TV-regularization-based solvers, while keeping the smooth internal structures of the object at the same time. Comparison between three different patterns of object illumination arrangement shows very small impact of the projection acquisition geometry on the image quality.

  8. An assessment of coupling algorithms for nuclear reactor core physics simulations

    DOE PAGES

    Hamilton, Steven; Berrill, Mark; Clarno, Kevin; ...

    2016-04-01

    This paper evaluates the performance of multiphysics coupling algorithms applied to a light water nuclear reactor core simulation. The simulation couples the k-eigenvalue form of the neutron transport equation with heat conduction and subchannel flow equations. We compare Picard iteration (block Gauss–Seidel) to Anderson acceleration and multiple variants of preconditioned Jacobian-free Newton–Krylov (JFNK). The performance of the methods is evaluated over a range of energy group structures and core power levels. A novel physics-based approximation to a Jacobian-vector product has been developed to mitigate the impact of expensive on-line cross section processing steps. Furthermore, numerical simulations demonstrating the efficiency of JFNK and Anderson acceleration relative to standard Picard iteration are performed on a 3D model of a nuclear fuel assembly. Both criticality (k-eigenvalue) and critical boron search problems are considered.
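
    The contrast between plain Picard iteration and Anderson acceleration can be reproduced on any contractive fixed point; the toy map below stands in for the coupled transport/thermal update, and SciPy's anderson solver is applied to the residual form g(x) - x.

```python
import numpy as np
from scipy.optimize import anderson

# Toy coupled "physics": a contractive fixed point x = g(x).
def g(x):
    return 0.9 * np.cos(x) + 0.1 * x.mean()

x = np.zeros(4)
for k in range(200):            # plain Picard sweeps
    x_new = g(x)
    if np.linalg.norm(x_new - x) < 1e-10:
        break
    x = x_new
print("Picard iterations:", k + 1)

# Anderson acceleration of the same fixed point, via the residual g(x) - x
x_aa = anderson(lambda x: g(x) - x, np.zeros(4), f_tol=1e-10)
print("solutions agree:", np.allclose(x_aa, x, atol=1e-8))
```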

  9. Accurate quantitative CF-LIBS analysis of both major and minor elements in alloys via iterative correction of plasma temperature and spectral intensity

    NASA Astrophysics Data System (ADS)

    Shuxia, ZHAO; Lei, ZHANG; Jiajia, HOU; Yang, ZHAO; Wangbao, YIN; Weiguang, MA; Lei, DONG; Liantuan, XIAO; Suotang, JIA

    2018-03-01

    The chemical composition of alloys directly determines their mechanical behaviors and application fields. Accurate and rapid analysis of both major and minor elements in alloys plays a key role in metallurgy quality control and material classification processes. A quantitative calibration-free laser-induced breakdown spectroscopy (CF-LIBS) analysis method, which carries out combined correction of plasma temperature and spectral intensity by using a second-order iterative algorithm and two boundary standard samples, is proposed to realize accurate composition measurements. Experimental results show that, compared to conventional CF-LIBS analysis, the relative errors for the major elements Cu and Zn and the minor element Pb in copper-lead alloys have been reduced from 12%, 26% and 32% to 1.8%, 2.7% and 13.4%, respectively. The measurement accuracy for all elements has been improved substantially.

  10. TU-F-18A-06: Dual Energy CT Using One Full Scan and a Second Scan with Very Few Projections

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, T; Zhu, L

    Purpose: The conventional dual energy CT (DECT) requires two full CT scans at different energy levels, resulting in dose increase as well as imaging errors from patient motion between the two scans. To shorten the scan time of DECT and thus overcome these drawbacks, we propose a new DECT algorithm using one full scan and a second scan with very few projections by preserving structural information. Methods: We first reconstruct a CT image on the full scan using a standard filtered-backprojection (FBP) algorithm. We then use a compressed sensing (CS) based iterative algorithm on the second scan for reconstruction from very few projections. The edges extracted from the first scan are used as weights in the objective function of the CS-based reconstruction to substantially improve the image quality of CT reconstruction. The basis material images are then obtained by an iterative image-domain decomposition method and an electron density map is finally calculated. The proposed method is evaluated on phantoms. Results: On the Catphan 600 phantom, the CT reconstruction mean errors using the proposed method on 20 and 5 projections are 4.76% and 5.02%, respectively. Compared with conventional iterative reconstruction, the proposed edge weighting preserves object structures and achieves a better spatial resolution. With basis materials of iodine and Teflon, our method on 20 projections obtains similar quality of decomposed material images compared with FBP on a full scan, and the mean error of electron density in the selected regions of interest is 0.29%. Conclusion: We propose an effective method for reducing projections and therefore scan time in DECT. We show that a full scan plus a 20-projection scan are sufficient to provide DECT images and electron density with similar quality compared with two full scans. Our future work includes more phantom studies to validate the performance of our method.

  11. Inner Space Perturbation Theory in Matrix Product States: Replacing Expensive Iterative Diagonalization.

    PubMed

    Ren, Jiajun; Yi, Yuanping; Shuai, Zhigang

    2016-10-11

    We propose an inner space perturbation theory (isPT) to replace the expensive iterative diagonalization in the standard density matrix renormalization group theory (DMRG). The retained reduced density matrix eigenstates are partitioned into an active and a secondary space. The first-order wave function and the second- and third-order energies are easily computed using one Davidson iteration step. Our formulation has several advantages, including (i) keeping a balance between efficiency and accuracy, (ii) capturing more entanglement with the same amount of computational time, and (iii) recovery of the standard DMRG when all the basis states belong to the active space. Numerical examples for the polyacenes and periacene show that the efficiency gain is considerable and the accuracy loss due to the perturbation treatment is very small when half of the total basis states belong to the active space. Moreover, the perturbation calculations converge in all our numerical examples.

  12. Calibration and compensation method of three-axis geomagnetic sensor based on pre-processing total least square iteration

    NASA Astrophysics Data System (ADS)

    Zhou, Y.; Zhang, X.; Xiao, W.

    2018-04-01

    As the geomagnetic sensor is susceptible to interference, a pre-processing total least squares iteration method is proposed for calibration compensation. Firstly, the error model of the geomagnetic sensor is analyzed and a correction model is proposed; the characteristics of the model are then analyzed and converted into nine parameters. The geomagnetic data are processed by the Hilbert-Huang transform (HHT) to improve the signal-to-noise ratio, and the nine parameters are calculated using a combination of the Newton iteration method and least squares estimation. A sifting algorithm is used to filter the initial value of the iteration to ensure that the initial error is as small as possible. The experimental results show that this method needs no additional equipment or devices, can continuously update the calibration parameters, and compensates geomagnetic sensor error better than the two-step estimation method.

  13. Perturbation-iteration theory for analyzing microwave striplines

    NASA Technical Reports Server (NTRS)

    Kretch, B. E.

    1985-01-01

    A perturbation-iteration technique is presented for determining the propagation constant and characteristic impedance of an unshielded microstrip transmission line. The method converges to the correct solution with a few iterations at each frequency and is equivalent to a full wave analysis. The perturbation-iteration method gives a direct solution for the propagation constant without having to find the roots of a transcendental dispersion equation. The theory is presented in detail along with numerical results for the effective dielectric constant and characteristic impedance for a wide range of substrate dielectric constants, stripline dimensions, and frequencies.

  14. Steady state numerical solutions for determining the location of MEMS on projectile

    NASA Astrophysics Data System (ADS)

    Abiprayu, K.; Abdigusna, M. F. F.; Gunawan, P. H.

    2018-03-01

    This paper compares numerical solutions of the steady and unsteady state heat distribution models on a projectile. The best location for installing MEMS on the projectile is investigated based on the surface temperature. The Jacobi and Gauss-Seidel iterative methods are used to solve the steady state heat distribution model. The two methods produce identical results, but their iteration costs differ: Jacobi's method requires 350 iterations, while Gauss-Seidel needs only 188 and is therefore faster. The steady state simulation agrees well with an unsteady state model from the literature. Moreover, the best candidate location for installing MEMS on the projectile is observed at point T(10, 0), which has the lowest temperature of the points considered. The temperatures at T(10, 0) for scenarios 1 and 2 are 307 and 309 Kelvin, respectively.
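
    The gap in iteration counts is easy to reproduce on a generic steady-state (Laplace) problem. The sketch below uses a heated square plate rather than the projectile geometry; grid size and tolerance are chosen arbitrarily.

```python
import numpy as np

def solve_laplace(n=40, tol=1e-4, method="jacobi"):
    """Iterate the 5-point Laplace stencil on an n x n grid with one hot
    boundary; returns the field and the number of sweeps to converge."""
    T = np.zeros((n, n))
    T[0, :] = 1.0                        # fixed boundary condition
    for it in range(1, 100_000):
        if method == "jacobi":           # update from the previous sweep only
            Tn = T.copy()
            Tn[1:-1, 1:-1] = 0.25 * (T[:-2, 1:-1] + T[2:, 1:-1] +
                                     T[1:-1, :-2] + T[1:-1, 2:])
            diff = np.abs(Tn - T).max()
            T = Tn
        else:                            # Gauss-Seidel: update in place
            diff = 0.0
            for i in range(1, n - 1):
                for j in range(1, n - 1):
                    new = 0.25 * (T[i-1, j] + T[i+1, j] + T[i, j-1] + T[i, j+1])
                    diff = max(diff, abs(new - T[i, j]))
                    T[i, j] = new
        if diff < tol:
            return T, it

_, k_jacobi = solve_laplace(method="jacobi")
_, k_gs = solve_laplace(method="gs")
print(k_jacobi, k_gs)   # Gauss-Seidel typically needs about half the sweeps
```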

  15. Material nonlinear analysis via mixed-iterative finite element method

    NASA Technical Reports Server (NTRS)

    Sutjahjo, Edhi; Chamis, Christos C.

    1992-01-01

    The performance of elastic-plastic mixed-iterative analysis is examined through a set of convergence studies. Membrane and bending behaviors are tested using 4-node quadrilateral finite elements. The membrane result is excellent, which indicates the implementation of elastic-plastic mixed-iterative analysis is appropriate. On the other hand, further research to improve bending performance of the method seems to be warranted.

  16. On the solution of evolution equations based on multigrid and explicit iterative methods

    NASA Astrophysics Data System (ADS)

    Zhukov, V. T.; Novikova, N. D.; Feodoritova, O. B.

    2015-08-01

    Two schemes for solving initial-boundary value problems for three-dimensional parabolic equations are studied. One is implicit and is solved using the multigrid method, while the other is explicit iterative and is based on optimal properties of the Chebyshev polynomials. In the explicit iterative scheme, the number of iteration steps and the iteration parameters are chosen as based on the approximation and stability conditions, rather than on the optimization of iteration convergence to the solution of the implicit scheme. The features of the multigrid scheme include the implementation of the intergrid transfer operators for the case of discontinuous coefficients in the equation and the adaptation of the smoothing procedure to the spectrum of the difference operators. The results produced by these schemes as applied to model problems with anisotropic discontinuous coefficients are compared.
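
    For reference, a classical explicit Chebyshev iteration for a symmetric positive definite system with known spectral bounds looks as follows (a textbook form in the spirit of Saad's Algorithm 12.1, not the authors' scheme for parabolic equations); the diagonal test matrix is a placeholder.

```python
import numpy as np

def chebyshev_iteration(A, b, lam_min, lam_max, n_iter=60):
    """Explicit Chebyshev iteration for SPD A with spectrum inside
    [lam_min, lam_max]; no inner products are needed, which is what
    makes the scheme attractive as an explicit solver/smoother."""
    theta = 0.5 * (lam_max + lam_min)
    delta = 0.5 * (lam_max - lam_min)
    sigma1 = theta / delta
    rho = 1.0 / sigma1
    x = np.zeros_like(b)
    r = b - A @ x
    d = r / theta
    for _ in range(n_iter):
        x = x + d
        r = r - A @ d                       # keep the residual current
        rho_new = 1.0 / (2.0 * sigma1 - rho)
        d = rho_new * rho * d + (2.0 * rho_new / delta) * r
        rho = rho_new
    return x

A = np.diag(np.linspace(1.0, 10.0, 50))     # spectrum inside [1, 10]
b = np.ones(50)
x = chebyshev_iteration(A, b, 1.0, 10.0)
print(np.linalg.norm(A @ x - b))            # small residual norm
```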

  17. Differential Characteristics Based Iterative Multiuser Detection for Wireless Sensor Networks

    PubMed Central

    Chen, Xiaoguang; Jiang, Xu; Wu, Zhilu; Zhuang, Shufeng

    2017-01-01

    High throughput, low latency and reliable communication have always been hot topics for wireless sensor networks (WSNs) in various applications. Multiuser detection is widely used to suppress the bad effect of multiple access interference in WSNs. In this paper, a novel multiuser detection method based on differential characteristics is proposed to suppress multiple access interference. The proposed iterative receive method consists of three stages. Firstly, a differential characteristics function is presented based on the optimal multiuser detection decision function; then, on the basis of differential characteristics, a preliminary threshold detection is utilized to find the potentially wrongly received bits; after that, an error bit corrector is employed to correct the wrong bits. In order to further lower the bit error ratio (BER), the differential characteristics calculation, threshold detection and error bit correction process described above are iteratively executed. Simulation results show that after only a few iterations the proposed multiuser detection method can achieve satisfactory BER performance. Besides, the BER and near-far resistance performance are much better than those of traditional suboptimal multiuser detection methods. Furthermore, the proposed iterative multiuser detection method also has a large system capacity. PMID:28212328

  18. Diffractive elements for generating microscale laser beam patterns: a Y2K problem

    NASA Astrophysics Data System (ADS)

    Teiwes, Stephan; Krueger, Sven; Wernicke, Guenther K.; Ferstl, Margit

    2000-03-01

    Lasers are widely used in industrial fabrication for engraving, cutting and many other purposes. However, material processing at very small scales is still a matter of concern. Advances in diffractive optics could enable laser systems for engraving or cutting micro-scale patterns at high speeds. In our paper we focus on the design of diffractive elements which can be used for this special application. It is a common desire in material processing to apply 'discrete' as well as 'continuous' beam patterns. The latter case, especially, is difficult to handle, as typical micro-scale patterns have poor band-limitation properties and speckles can easily occur in the beam patterns. It is shown in this paper that a standard iterative design method usually fails to obtain diffractive elements that generate diffraction patterns of acceptable quality. Insights gained from an analysis of the design problems are used to optimize the iterative design method. We demonstrate the applicability and success of our approach by the design of diffractive phase elements that generate a discrete and a continuous 'Y2K' pattern.
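
    The standard iterative design method alluded to is typically a Gerchberg-Saxton-type iterative Fourier-transform algorithm (IFTA). A generic version is sketched below with a toy rectangular target; it is not the optimized variant developed in the paper.

```python
import numpy as np

def ifta_phase_element(target, n_iter=100, seed=0):
    """Gerchberg-Saxton-style IFTA: find a phase-only element whose far
    field approximates |target|. Generic textbook version."""
    rng = np.random.default_rng(seed)
    amp = np.sqrt(target / target.sum())          # desired far-field amplitude
    field = np.exp(2j * np.pi * rng.random(target.shape))
    for _ in range(n_iter):
        far = np.fft.fft2(field)                  # propagate to far field
        far = amp * np.exp(1j * np.angle(far))    # impose target amplitude
        near = np.fft.ifft2(far)                  # propagate back
        field = np.exp(1j * np.angle(near))       # phase-only constraint
    return np.angle(field)

# Toy target: a bright rectangle on a dark background
target = np.zeros((128, 128))
target[40:60, 30:100] = 1.0
doe_phase = ifta_phase_element(target)
```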

  1. Complete elliptical ring geometry provides energy and instrument calibration for synchrotron-based two-dimensional X-ray diffraction

    PubMed Central

    Hart, Michael L.; Drakopoulos, Michael; Reinhard, Christina; Connolley, Thomas

    2013-01-01

    A complete calibration method to characterize a static planar two-dimensional detector for use in X-ray diffraction at an arbitrary wavelength is described. This method is based upon geometry describing the point of intersection between a cone’s axis and its elliptical conic section. This point of intersection is neither the ellipse centre nor one of the ellipse focal points, but some other point which lies in between. The presented solution is closed form, algebraic and non-iterative in its application, and gives values for the X-ray beam energy, the sample-to-detector distance, the location of the beam centre on the detector surface and the detector tilt relative to the incident beam. Previous techniques have tended to require prior knowledge of either the X-ray beam energy or the sample-to-detector distance, whilst other techniques have been iterative. The new calibration procedure is performed by collecting diffraction data, in the form of diffraction rings from a powder standard, at known displacements of the detector along the beam path. PMID:24068840

  2. Identification of multivariable nonlinear systems in the presence of colored noises using iterative hierarchical least squares algorithm.

    PubMed

    Jafari, Masoumeh; Salimifard, Maryam; Dehghani, Maryam

    2014-07-01

    This paper presents an efficient method for identification of nonlinear Multi-Input Multi-Output (MIMO) systems in the presence of colored noises. The method studies the multivariable nonlinear Hammerstein and Wiener models, in which the nonlinear memory-less block is approximated based on arbitrary vector-based basis functions. The linear time-invariant (LTI) block is modeled by an autoregressive moving average with exogenous (ARMAX) model which can effectively describe the moving average noises as well as the autoregressive and the exogenous dynamics. According to the multivariable nature of the system, a pseudo-linear-in-the-parameter model is obtained which includes two different kinds of unknown parameters, a vector and a matrix. Therefore, the standard least squares algorithm cannot be applied directly. To overcome this problem, a Hierarchical Least Squares Iterative (HLSI) algorithm is used to simultaneously estimate the vector and the matrix of unknown parameters as well as the noises. The efficiency of the proposed identification approach is investigated through three nonlinear MIMO case studies.
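
    The hierarchical idea, alternating least-squares solves between the vector and matrix parameter blocks, can be illustrated on a toy bilinear model. This is a plain alternating least squares sketch, without the noise estimation of the actual HLSI algorithm; all names and dimensions are made up.

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy bilinear model y_t = W @ (a * f(u_t)): unknown matrix W and vector a,
# estimated by alternating least squares (illustrative only).
T, p, q = 400, 3, 4
W_true = rng.standard_normal((p, q))
a_true = rng.standard_normal(q)
U = rng.standard_normal((T, q))
F = np.tanh(U)                                    # nonlinear block output
Y = (F * a_true) @ W_true.T + 0.01 * rng.standard_normal((T, p))

a = np.ones(q)
for _ in range(50):
    W = np.linalg.lstsq(F * a, Y, rcond=None)[0].T          # fix a, solve W
    G = (F[:, None, :] * W[None, :, :]).reshape(T * p, q)   # design for a
    a = np.linalg.lstsq(G, Y.reshape(T * p), rcond=None)[0] # fix W, solve a
# Note: only the product W @ diag(a) is identifiable; a scale moves freely
# between the two blocks, which is fine for this sketch.
```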

  3. Efficient mixing scheme for self-consistent all-electron charge density

    NASA Astrophysics Data System (ADS)

    Shishidou, Tatsuya; Weinert, Michael

    2015-03-01

    In standard ab initio density-functional theory calculations, the charge density ρ is gradually updated using the "input" and "output" densities of the current and previous iteration steps. To accelerate the convergence, Pulay mixing has been widely used with great success. It expresses an "optimal" input density ρopt and its "residual" Ropt by a linear combination of the densities of the iteration sequences. In large-scale metallic systems, however, the long range nature of the Coulomb interaction often causes the "charge sloshing" phenomenon and significantly impacts the convergence. Two treatments, represented in reciprocal space, are known to suppress the sloshing: (i) the inverse Kerker metric for Pulay optimization and (ii) Kerker-type preconditioning in mixing Ropt. In all-electron methods, where the charge density does not have a converging Fourier representation, treatments equivalent or similar to (i) and (ii) have not been described so far. In this work, we show that, by going through the calculation of the Hartree potential, one can accomplish procedures (i) and (ii) without entering reciprocal space. Test calculations are done with a FLAPW method.
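
    Pulay mixing itself is compact enough to sketch. The function below solves the standard DIIS least-squares problem for the mixing weights and takes a simple linear step on the optimal residual; the point of the paper is that the Kerker-type treatments (i) and (ii) can replace that plain step via a real-space Hartree-potential calculation.

```python
import numpy as np

def pulay_mix(rho_hist, res_hist, beta=0.3):
    """Pulay/DIIS mixing: combine stored densities so the extrapolated
    residual is minimal in the least-squares sense. rho_hist/res_hist are
    lists of flattened densities and residuals (rho_out - rho_in)."""
    m = len(res_hist)
    B = np.empty((m + 1, m + 1))
    B[:m, :m] = [[ri @ rj for rj in res_hist] for ri in res_hist]
    B[m, :], B[:, m], B[m, m] = -1.0, -1.0, 0.0   # Lagrange-multiplier border
    rhs = np.zeros(m + 1)
    rhs[m] = -1.0
    c = np.linalg.solve(B, rhs)[:m]               # weights sum to 1
    rho_opt = sum(ci * ri for ci, ri in zip(c, rho_hist))
    res_opt = sum(ci * ri for ci, ri in zip(c, res_hist))
    return rho_opt + beta * res_opt               # plain linear step on R_opt
```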

  4. The application of contraction theory to an iterative formulation of electromagnetic scattering

    NASA Technical Reports Server (NTRS)

    Brand, J. C.; Kauffman, J. F.

    1985-01-01

    Contraction theory is applied to an iterative formulation of electromagnetic scattering from periodic structures and a computational method for insuring convergence is developed. A short history of spectral (or k-space) formulation is presented with an emphasis on application to periodic surfaces. To insure a convergent solution of the iterative equation, a process called the contraction corrector method is developed. Convergence properties of previously presented iterative solutions to one-dimensional problems are examined utilizing contraction theory and the general conditions for achieving a convergent solution are explored. The contraction corrector method is then applied to several scattering problems including an infinite grating of thin wires with the solution data compared to previous works.
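
    The underlying principle, iterating only while the map remains a contraction, can be illustrated with a generic fixed-point loop that estimates the contraction factor from the ratio of successive update norms. This is a sketch of the Banach fixed-point setting, not the paper's contraction corrector.

```python
import numpy as np

def contracting_solve(G, x0, tol=1e-10, max_iter=1000):
    """Fixed-point iteration x <- G(x) with an empirical contraction check:
    the ratio of successive update norms estimates the contraction factor."""
    x_prev, x = x0, G(x0)
    step_prev = np.linalg.norm(x - x_prev)
    for _ in range(max_iter):
        x_next = G(x)
        step = np.linalg.norm(x_next - x)
        if step_prev > 0 and step / step_prev >= 1.0:
            raise RuntimeError("iteration is not contracting")
        if step < tol:
            return x_next
        x_prev, x, step_prev = x, x_next, step
    return x

# Example: solving x = cos(x) componentwise (a contraction near x = 0.739)
sol = contracting_solve(np.cos, np.full(3, 0.5))
```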

  5. Analysis of Monte Carlo accelerated iterative methods for sparse linear systems

    DOE PAGES

    Benzi, Michele; Evans, Thomas M.; Hamilton, Steven P.; ...

    2017-03-05

    Here, we consider hybrid deterministic-stochastic iterative algorithms for the solution of large, sparse linear systems. Starting from a convergent splitting of the coefficient matrix, we analyze various types of Monte Carlo acceleration schemes applied to the original preconditioned Richardson (stationary) iteration. We expect that these methods will have considerable potential for resiliency to faults when implemented on massively parallel machines. We also establish sufficient conditions for the convergence of the hybrid schemes, and we investigate different types of preconditioners including sparse approximate inverses. Numerical experiments on linear systems arising from the discretization of partial differential equations are presented.

  6. A non-iterative extension of the multivariate random effects meta-analysis.

    PubMed

    Makambi, Kepher H; Seung, Hyunuk

    2015-01-01

    Multivariate methods in meta-analysis are becoming popular and more accepted in biomedical research despite computational issues with some of the techniques. A number of approaches, both iterative and non-iterative, have been proposed, including the multivariate DerSimonian and Laird method by Jackson et al. (2010), which is non-iterative. In this study, we propose an extension of the method by Hartung and Makambi (2002) and Makambi (2001) to multivariate situations. A comparison of the bias and mean square error from a simulation study indicates that, in some circumstances, the proposed approach performs better than the multivariate DerSimonian-Laird approach. An example is presented to demonstrate the application of the proposed approach.

  7. Couple of the Variational Iteration Method and Fractional-Order Legendre Functions Method for Fractional Differential Equations

    PubMed Central

    Song, Junqiang; Leng, Hongze; Lu, Fengshun

    2014-01-01

    We present a new numerical method to obtain approximate solutions of fractional differential equations. A new operational matrix of integration for fractional-order Legendre functions (FLFs) is first derived. Then a modified variational iteration formula which can avoid “noise terms” is constructed. Finally, a numerical method based on the variational iteration method (VIM) and FLFs is developed for fractional differential equations (FDEs). Block-pulse functions (BPFs) are used to calculate the FLFs coefficient matrices of the nonlinear terms. Five examples are discussed to demonstrate the validity and applicability of the technique. PMID:24511303

  8. Fisher's method of scoring in statistical image reconstruction: comparison of Jacobi and Gauss-Seidel iterative schemes.

    PubMed

    Hudson, H M; Ma, J; Green, P

    1994-01-01

    Many algorithms for medical image reconstruction adopt versions of the expectation-maximization (EM) algorithm. In this approach, parameter estimates are obtained which maximize a complete data likelihood or penalized likelihood, in each iteration. Implicitly (and sometimes explicitly) penalized algorithms require smoothing of the current reconstruction in the image domain as part of their iteration scheme. In this paper, we discuss alternatives to EM which adapt Fisher's method of scoring (FS) and other methods for direct maximization of the incomplete data likelihood. Jacobi and Gauss-Seidel methods for non-linear optimization provide efficient algorithms applying FS in tomography. One approach uses smoothed projection data in its iterations. We investigate the convergence of Jacobi and Gauss-Seidel algorithms with clinical tomographic projection data.

  9. Validation and application of auxiliary density perturbation theory and non-iterative approximation to coupled-perturbed Kohn-Sham approach for calculation of dipole-quadrupole polarizability

    NASA Astrophysics Data System (ADS)

    Shedge, Sapana V.; Pal, Sourav; Köster, Andreas M.

    2011-07-01

    Recently, two non-iterative approaches have been proposed to calculate response properties within density functional theory (DFT). These approaches are auxiliary density perturbation theory (ADPT) and the non-iterative approach to the coupled-perturbed Kohn-Sham (NIA-CPKS) method. Though both methods are non-iterative, they use different techniques to obtain the perturbed Kohn-Sham matrix. In this Letter, for the first time, both of these independent methods have been used for the calculation of dipole-quadrupole polarizabilities. To validate these methods, three tetrahedral molecules, viz., P4, CH4 and adamantane (C10H16), have been used as examples. The comparison with MP2 and CCSD proves the reliability of the methodology.

  10. On the convergence of an iterative formulation of the electromagnetic scattering from an infinite grating of thin wires

    NASA Technical Reports Server (NTRS)

    Brand, J. C.

    1985-01-01

    Contraction theory is applied to an iterative formulation of electromagnetic scattering from periodic structures and a computational method for insuring convergence is developed. A short history of spectral (or k-space) formulation is presented with an emphasis on application to periodic surfaces. The mathematical background for formulating an iterative equation is covered using straightforward single variable examples including an extension to vector spaces. To insure a convergent solution of the iterative equation, a process called the contraction corrector method is developed. Convergence properties of previously presented iterative solutions to one-dimensional problems are examined utilizing contraction theory and the general conditions for achieving a convergent solution are explored. The contraction corrector method is then applied to several scattering problems including an infinite grating of thin wires with the solution data compared to previous works.

  11. Iterative methods for elliptic finite element equations on general meshes

    NASA Technical Reports Server (NTRS)

    Nicolaides, R. A.; Choudhury, Shenaz

    1986-01-01

    Iterative methods for arbitrary mesh discretizations of elliptic partial differential equations are surveyed. The methods discussed are preconditioned conjugate gradients, algebraic multigrid, deflated conjugate gradients, element-by-element techniques, and domain decomposition. Computational results are included.

  12. SU-E-T-446: Group-Sparsity Based Angle Generation Method for Beam Angle Optimization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gao, H

    2015-06-15

    Purpose: This work is to develop an effective algorithm for beam angle optimization (BAO), with the emphasis on enabling further improvement from existing treatment-dependent templates based on clinical knowledge and experience. Methods: The proposed BAO algorithm utilizes a priori beam angle templates as the initial guess, and iteratively generates angular updates for this initial set (the angle generation method) with improved dose conformality, as quantitatively measured by the objective function. That is, during each iteration, we select “the test angle” in the initial set, and use group-sparsity based fluence map optimization to identify “the candidate angle” for updating “the test angle”: all the angles in the initial set except “the test angle”, namely “the fixed set”, are set free, i.e., with no group-sparsity penalty, and the rest of the angles, including “the test angle”, are in “the working set” during this iteration. Then “the candidate angle” is selected as the angle with the smallest objective function value among the angles in “the working set” with locally maximal group sparsity, and it replaces “the test angle” if “the fixed set” with “the candidate angle” has a smaller objective function value when solving the standard fluence map optimization (with no group-sparsity regularization). Similarly, the other angles in the initial set are in turn selected as “the test angle” for angular updates, and this chain of updates is iterated until no further new angular update is identified for a full loop. Results: Tests using the MGH public prostate dataset demonstrated the effectiveness of the proposed BAO algorithm. For example, the optimized angular set from the proposed BAO algorithm was better than the MGH template. Conclusion: A new BAO algorithm is proposed based on the angle generation method via group sparsity, with improved dose conformality over the given template. Hao Gao was partially supported by the NSFC (#11405105), the 973 Program (#2015CB856000) and the Shanghai Pujiang Talent Program (#14PJ1404500).

  13. Agile Task Tracking Tool

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Duke, Roger T.; Crump, Thomas Vu

    The work was created to provide a tool for improving the management of tasks associated with Agile projects. Agile projects are typically completed in an iterative manner, with many short-duration tasks performed as part of iterations, generally referred to as sprints. The objective of this work is to create a single tool that enables sprint teams to manage all of their tasks in multiple sprints and automatically produce all standard sprint performance charts with minimum effort. The format of the printed work is designed to mimic a standard Kanban board. The work is developed as a single Excel file with worksheets capable of managing up to five concurrent sprints and up to one hundred tasks. It also includes a summary worksheet providing performance information from all active sprints. There are many commercial project management systems, typically designed with features desired by larger organizations with many resources managing multiple programs and projects. The audience for this work is the small organizations and Agile project teams desiring an inexpensive, simple, user-friendly task management tool. This work uses standard, readily available software (Excel), requiring minimum data entry and automatically creating summary charts and performance data. It is formatted to print out and resemble standard flip charts and provide the visuals associated with this type of work.

  14. Potential benefit of the CT adaptive statistical iterative reconstruction method for pediatric cardiac diagnosis

    NASA Astrophysics Data System (ADS)

    Miéville, Frédéric A.; Ayestaran, Paul; Argaud, Christophe; Rizzo, Elena; Ou, Phalla; Brunelle, Francis; Gudinchet, François; Bochud, François; Verdun, Francis R.

    2010-04-01

    Adaptive Statistical Iterative Reconstruction (ASIR) is a new imaging reconstruction technique recently introduced by General Electric (GE). This technique, when combined with a conventional filtered back-projection (FBP) approach, improves image noise reduction. To quantify the benefits of the ASIR method for image quality and dose reduction with respect to the pure FBP approach, the standard deviation (SD), the modulation transfer function (MTF), the noise power spectrum (NPS), the image uniformity and the noise homogeneity were examined. Measurements were performed on a quality control phantom while varying the CT dose index (CTDIvol) and the reconstruction kernels. A 64-MDCT was employed and raw data were reconstructed with different percentages of ASIR on a CT console dedicated for ASIR reconstruction. Three radiologists also assessed a cardiac pediatric exam reconstructed with different ASIR percentages using the visual grading analysis (VGA) method. For the standard, soft and bone reconstruction kernels, the SD is reduced when the ASIR percentage increases up to 100%, with a higher benefit for low CTDIvol. MTF medium frequencies were slightly enhanced and modifications of the NPS shape curve were observed. However, for the pediatric cardiac CT exam, VGA scores indicate an upper limit of the ASIR benefit. 40% of ASIR was observed as the best trade-off between noise reduction and clinical realism of organ images. Using phantom results, 40% of ASIR corresponded to an estimated dose reduction of 30% under pediatric cardiac protocol conditions. In spite of this discrepancy between phantom and clinical results, the ASIR method is an important option when considering the reduction of radiation dose, especially for pediatric patients.
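
    The “percentage of ASIR” discussed above is, in effect, a weighted blend of the FBP image and the iteratively reconstructed image. A minimal sketch of that blending step follows; the array names are hypothetical and vendor implementations differ in detail.

        import numpy as np

        def blend_asir(fbp_img, ir_img, percent):
            """Blend an FBP image with an iterative reconstruction: 0% = pure FBP, 100% = pure IR."""
            w = percent / 100.0
            return (1.0 - w) * fbp_img + w * ir_img

        # e.g. the 40% setting reported above as the best noise/realism trade-off
        blended = blend_asir(np.zeros((64, 64)), np.ones((64, 64)), 40.0)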

  15. Model-based iterative reconstruction and adaptive statistical iterative reconstruction: dose-reduced CT for detecting pancreatic calcification.

    PubMed

    Yasaka, Koichiro; Katsura, Masaki; Akahane, Masaaki; Sato, Jiro; Matsuda, Izuru; Ohtomo, Kuni

    2016-01-01

    Iterative reconstruction methods have attracted attention for reducing radiation doses in computed tomography (CT). To investigate the detectability of pancreatic calcification using dose-reduced CT reconstructed with model-based iterative construction (MBIR) and adaptive statistical iterative reconstruction (ASIR). This prospective study approved by Institutional Review Board included 85 patients (57 men, 28 women; mean age, 69.9 years; mean body weight, 61.2 kg). Unenhanced CT was performed three times with different radiation doses (reference-dose CT [RDCT], low-dose CT [LDCT], ultralow-dose CT [ULDCT]). From RDCT, LDCT, and ULDCT, images were reconstructed with filtered-back projection (R-FBP, used for establishing reference standard), ASIR (L-ASIR), and MBIR and ASIR (UL-MBIR and UL-ASIR), respectively. A lesion (pancreatic calcification) detection test was performed by two blinded radiologists with a five-point certainty level scale. Dose-length products of RDCT, LDCT, and ULDCT were 410, 97, and 36 mGy-cm, respectively. Nine patients had pancreatic calcification. The sensitivity for detecting pancreatic calcification with UL-MBIR was high (0.67-0.89) compared to L-ASIR or UL-ASIR (0.11-0.44), and a significant difference was seen between UL-MBIR and UL-ASIR for one reader (P = 0.014). The area under the receiver-operating characteristic curve for UL-MBIR (0.818-0.860) was comparable to that for L-ASIR (0.696-0.844). The specificity was lower with UL-MBIR (0.79-0.92) than with L-ASIR or UL-ASIR (0.96-0.99), and a significant difference was seen for one reader (P < 0.01). In UL-MBIR, pancreatic calcification can be detected with high sensitivity, however, we should pay attention to the slightly lower specificity.

  16. Whole-body direct 4D parametric PET imaging employing nested generalized Patlak expectation-maximization reconstruction

    PubMed Central

    Karakatsanis, Nicolas A.; Casey, Michael E.; Lodge, Martin A.; Rahmim, Arman; Zaidi, Habib

    2016-01-01

    Whole-body (WB) dynamic PET has recently demonstrated its potential in translating the quantitative benefits of parametric imaging to the clinic. Post-reconstruction standard Patlak (sPatlak) WB graphical analysis utilizes multi-bed multi-pass PET acquisition to produce quantitative WB images of the tracer influx rate Ki as a complementary metric to the semi-quantitative standardized uptake value (SUV). The resulting Ki images may suffer from high noise due to the need for short acquisition frames. Meanwhile, a generalized Patlak (gPatlak) WB post-reconstruction method had been suggested to limit Ki bias of sPatlak analysis at regions with non-negligible 18F-FDG uptake reversibility; however, gPatlak analysis is non-linear and thus can further amplify noise. In the present study, we implemented, within the open-source Software for Tomographic Image Reconstruction (STIR) platform, a clinically adoptable 4D WB reconstruction framework enabling efficient estimation of sPatlak and gPatlak images directly from dynamic multi-bed PET raw data with substantial noise reduction. Furthermore, we employed the optimization transfer methodology to accelerate 4D expectation-maximization (EM) convergence by nesting the fast image-based estimation of Patlak parameters within each iteration cycle of the slower projection-based estimation of dynamic PET images. The novel gPatlak 4D method was initialized from an optimized set of sPatlak ML-EM iterations to facilitate EM convergence. Initially, realistic simulations were conducted utilizing published 18F-FDG kinetic parameters coupled with the XCAT phantom. Quantitative analyses illustrated enhanced Ki target-to-background ratio (TBR) and especially contrast-to-noise ratio (CNR) performance for the 4D vs. the indirect methods and static SUV. Furthermore, considerable convergence acceleration was observed for the nested algorithms involving 10–20 sub-iterations. Moreover, systematic reduction in Ki % bias and improved TBR were observed for gPatlak vs. sPatlak. Finally, validation on clinical WB dynamic data demonstrated the clinical feasibility and superior Ki CNR performance for the proposed 4D framework compared to indirect Patlak and SUV imaging. PMID:27383991
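
    For orientation, standard Patlak analysis is a linear fit after a change of variables: plotting C_T(t)/C_p(t) against the normalized integral of the input function gives slope Ki and intercept V. The sketch below shows a post-reconstruction sPatlak fit under those standard definitions; it does not reproduce the paper's direct 4D reconstruction, and all arrays are hypothetical.

        import numpy as np

        def spatlak_fit(t, cp, ct):
            """Standard Patlak fit: returns influx rate Ki and intercept V.

            t  : frame mid-times; cp : plasma input function (nonzero); ct : tissue activity.
            The fit assumes the late, linear portion of the Patlak plot.
            """
            # cumulative trapezoidal integral of the input function
            integral = np.concatenate(([0.0], np.cumsum(0.5 * (cp[1:] + cp[:-1]) * np.diff(t))))
            x = integral / cp
            y = ct / cp
            Ki, V = np.polyfit(x, y, 1)   # slope = Ki, intercept = V
            return Ki, V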

  17. Whole-body direct 4D parametric PET imaging employing nested generalized Patlak expectation-maximization reconstruction

    NASA Astrophysics Data System (ADS)

    Karakatsanis, Nicolas A.; Casey, Michael E.; Lodge, Martin A.; Rahmim, Arman; Zaidi, Habib

    2016-08-01

    Whole-body (WB) dynamic PET has recently demonstrated its potential in translating the quantitative benefits of parametric imaging to the clinic. Post-reconstruction standard Patlak (sPatlak) WB graphical analysis utilizes multi-bed multi-pass PET acquisition to produce quantitative WB images of the tracer influx rate Ki as a complementary metric to the semi-quantitative standardized uptake value (SUV). The resulting Ki images may suffer from high noise due to the need for short acquisition frames. Meanwhile, a generalized Patlak (gPatlak) WB post-reconstruction method had been suggested to limit Ki bias of sPatlak analysis at regions with non-negligible 18F-FDG uptake reversibility; however, gPatlak analysis is non-linear and thus can further amplify noise. In the present study, we implemented, within the open-source Software for Tomographic Image Reconstruction (STIR) platform, a clinically adoptable 4D WB reconstruction framework enabling efficient estimation of sPatlak and gPatlak images directly from dynamic multi-bed PET raw data with substantial noise reduction. Furthermore, we employed the optimization transfer methodology to accelerate 4D expectation-maximization (EM) convergence by nesting the fast image-based estimation of Patlak parameters within each iteration cycle of the slower projection-based estimation of dynamic PET images. The novel gPatlak 4D method was initialized from an optimized set of sPatlak ML-EM iterations to facilitate EM convergence. Initially, realistic simulations were conducted utilizing published 18F-FDG kinetic parameters coupled with the XCAT phantom. Quantitative analyses illustrated enhanced Ki target-to-background ratio (TBR) and especially contrast-to-noise ratio (CNR) performance for the 4D versus the indirect methods and static SUV. Furthermore, considerable convergence acceleration was observed for the nested algorithms involving 10-20 sub-iterations. Moreover, systematic reduction in Ki % bias and improved TBR were observed for gPatlak versus sPatlak. Finally, validation on clinical WB dynamic data demonstrated the clinical feasibility and superior Ki CNR performance for the proposed 4D framework compared to indirect Patlak and SUV imaging.

  18. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jing Yanfei, E-mail: yanfeijing@uestc.edu.c; Huang Tingzhu, E-mail: tzhuang@uestc.edu.c; Duan Yong, E-mail: duanyong@yahoo.c

    This study is mainly focused on iterative solutions with simple diagonal preconditioning to two complex-valued nonsymmetric systems of linear equations arising from a computational chemistry model problem proposed by Sherry Li of NERSC. Numerical experiments show the feasibility of iterative methods to some extent when applied to the problems and reveal the competitiveness of our recently proposed Lanczos biconjugate A-orthonormalization methods with other classic and popular iterative methods. Moreover, the experimental results also indicate that application-specific preconditioners may be mandatory for accelerating convergence.
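
    The “simple diagonal preconditioning” of this record is the Jacobi preconditioner M = diag(A). A minimal illustration with SciPy's GMRES on a small complex nonsymmetric stand-in system (not the chemistry problem itself):

        import numpy as np
        from scipy.sparse.linalg import LinearOperator, gmres

        A = np.array([[4.0 + 1.0j, 1.0], [0.5, 3.0 - 2.0j]])
        b = np.array([1.0 + 0.0j, 2.0])

        d = np.diag(A)
        M = LinearOperator(A.shape, matvec=lambda v: v / d, dtype=complex)  # diagonal preconditioner

        x, info = gmres(A, b, M=M)
        assert info == 0  # 0 means the solver reports convergence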

  19. Simultaneous deblurring and iterative reconstruction of CBCT for image guided brain radiosurgery.

    PubMed

    Hashemi, SayedMasoud; Song, William Y; Sahgal, Arjun; Lee, Young; Huynh, Christopher; Grouza, Vladimir; Nordström, Håkan; Eriksson, Markus; Dorenlot, Antoine; Régis, Jean Marie; Mainprize, James G; Ruschin, Mark

    2017-04-07

    One of the limiting factors in cone-beam CT (CBCT) image quality is system blur, caused by detector response, x-ray source focal spot size, azimuthal blurring, and reconstruction algorithm. In this work, we develop a novel iterative reconstruction algorithm that improves spatial resolution by explicitly accounting for image unsharpness caused by different factors in the reconstruction formulation. While the model-based iterative reconstruction techniques use prior information about the detector response and x-ray source, our proposed technique uses a simple measurable blurring model. In our reconstruction algorithm, denoted as simultaneous deblurring and iterative reconstruction (SDIR), the blur kernel can be estimated using the modulation transfer function (MTF) slice of the CatPhan phantom or any other MTF phantom, such as wire phantoms. The proposed image reconstruction formulation includes two regularization terms: (1) total variation (TV) and (2) nonlocal regularization, solved with a split Bregman augmented Lagrangian iterative method. The SDIR formulation preserves edges, eases the parameter adjustments to achieve both high spatial resolution and low noise variances, and reduces the staircase effect caused by regular TV-penalized iterative algorithms. The proposed algorithm is optimized for a point-of-care head CBCT unit for image-guided radiosurgery and is tested with CatPhan phantom, an anthropomorphic head phantom, and 6 clinical brain stereotactic radiosurgery cases. Our experiments indicate that SDIR outperforms the conventional filtered back projection and TV penalized simultaneous algebraic reconstruction technique methods (represented by adaptive steepest-descent POCS algorithm, ASD-POCS) in terms of MTF and line pair resolution, and retains the favorable properties of the standard TV-based iterative reconstruction algorithms in improving the contrast and reducing the reconstruction artifacts. It improves the visibility of the high contrast details in bony areas and the brain soft-tissue. For example, the results show the ventricles and some brain folds become visible in SDIR reconstructed images and the contrast of the visible lesions is effectively improved. The line-pair resolution was improved from 12 line-pair/cm in FBP to 14 line-pair/cm in SDIR. Adjusting the parameters of the ASD-POCS to achieve 14 line-pair/cm caused the noise variance to be higher than the SDIR. Using these parameters for ASD-POCS, the MTF of FBP and ASD-POCS were very close and equal to 0.7 mm^-1, which was increased to 1.2 mm^-1 by SDIR, at half maximum.

  20. Simultaneous deblurring and iterative reconstruction of CBCT for image guided brain radiosurgery

    NASA Astrophysics Data System (ADS)

    Hashemi, SayedMasoud; Song, William Y.; Sahgal, Arjun; Lee, Young; Huynh, Christopher; Grouza, Vladimir; Nordström, Håkan; Eriksson, Markus; Dorenlot, Antoine; Régis, Jean Marie; Mainprize, James G.; Ruschin, Mark

    2017-04-01

    One of the limiting factors in cone-beam CT (CBCT) image quality is system blur, caused by detector response, x-ray source focal spot size, azimuthal blurring, and reconstruction algorithm. In this work, we develop a novel iterative reconstruction algorithm that improves spatial resolution by explicitly accounting for image unsharpness caused by different factors in the reconstruction formulation. While the model-based iterative reconstruction techniques use prior information about the detector response and x-ray source, our proposed technique uses a simple measurable blurring model. In our reconstruction algorithm, denoted as simultaneous deblurring and iterative reconstruction (SDIR), the blur kernel can be estimated using the modulation transfer function (MTF) slice of the CatPhan phantom or any other MTF phantom, such as wire phantoms. The proposed image reconstruction formulation includes two regularization terms: (1) total variation (TV) and (2) nonlocal regularization, solved with a split Bregman augmented Lagrangian iterative method. The SDIR formulation preserves edges, eases the parameter adjustments to achieve both high spatial resolution and low noise variances, and reduces the staircase effect caused by regular TV-penalized iterative algorithms. The proposed algorithm is optimized for a point-of-care head CBCT unit for image-guided radiosurgery and is tested with CatPhan phantom, an anthropomorphic head phantom, and 6 clinical brain stereotactic radiosurgery cases. Our experiments indicate that SDIR outperforms the conventional filtered back projection and TV penalized simultaneous algebraic reconstruction technique methods (represented by adaptive steepest-descent POCS algorithm, ASD-POCS) in terms of MTF and line pair resolution, and retains the favorable properties of the standard TV-based iterative reconstruction algorithms in improving the contrast and reducing the reconstruction artifacts. It improves the visibility of the high contrast details in bony areas and the brain soft-tissue. For example, the results show the ventricles and some brain folds become visible in SDIR reconstructed images and the contrast of the visible lesions is effectively improved. The line-pair resolution was improved from 12 line-pair/cm in FBP to 14 line-pair/cm in SDIR. Adjusting the parameters of the ASD-POCS to achieve 14 line-pair/cm caused the noise variance to be higher than the SDIR. Using these parameters for ASD-POCS, the MTF of FBP and ASD-POCS were very close and equal to 0.7 mm^-1, which was increased to 1.2 mm^-1 by SDIR, at half maximum.

  1. Multiple Revolution Solutions for the Perturbed Lambert Problem using the Method of Particular Solutions and Picard Iteration

    NASA Astrophysics Data System (ADS)

    Woollands, Robyn M.; Read, Julie L.; Probe, Austin B.; Junkins, John L.

    2017-12-01

    We present a new method for solving the multiple revolution perturbed Lambert problem using the method of particular solutions and modified Chebyshev-Picard iteration. The method of particular solutions differs from the well-known Newton-shooting method in that integration of the state transition matrix (36 additional differential equations) is not required, and instead it makes use of a reference trajectory and a set of n particular solutions. Any numerical integrator can be used for solving two-point boundary-value problems with the method of particular solutions; however, we show that using modified Chebyshev-Picard iteration affords an avenue for increased efficiency that is not available with other step-by-step integrators. We take advantage of the path approximation nature of modified Chebyshev-Picard iteration (nodes iteratively converge to fixed points in space) and utilize a variable fidelity force model for propagating the reference trajectory. Remarkably, we demonstrate that computing the particular solutions with only low fidelity function evaluations greatly increases the efficiency of the algorithm while maintaining machine precision accuracy. Our study reveals that solving the perturbed Lambert problem using the method of particular solutions with modified Chebyshev-Picard iteration is about an order of magnitude faster compared with the classical shooting method and a tenth-twelfth order Runge-Kutta integrator. It is well known that the solution to Lambert's problem over multiple revolutions is not unique, and to ensure that all possible solutions are considered, we make use of a reliable preexisting Keplerian Lambert solver to warm start our perturbed algorithm.
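
    At its core, Picard iteration replaces the ODE x' = f(t, x) by repeated evaluation of the integral form x(t) = x0 + integral of f(tau, x(tau)) over the whole path. The fixed-grid sketch below shows plain Picard iteration with trapezoidal quadrature; the modified Chebyshev-Picard method of the paper accelerates this with Chebyshev nodes and basis functions, which the sketch omits.

        import numpy as np

        def picard(f, x0, t, iters=30):
            """Plain Picard iteration for x' = f(t, x), x(t[0]) = x0, on the grid t."""
            x = np.full_like(t, x0, dtype=float)
            for _ in range(iters):
                fx = f(t, x)
                # cumulative trapezoidal quadrature along the current path estimate
                integral = np.concatenate(([0.0], np.cumsum(0.5 * (fx[1:] + fx[:-1]) * np.diff(t))))
                x = x0 + integral
            return x

        t = np.linspace(0.0, 1.0, 101)
        x = picard(lambda t, x: -2.0 * x, 1.0, t)  # iterates converge toward exp(-2 t)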

  2. Fast projection/backprojection and incremental methods applied to synchrotron light tomographic reconstruction.

    PubMed

    de Lima, Camila; Salomão Helou, Elias

    2018-01-01

    Iterative methods for tomographic image reconstruction have the computational cost of each iteration dominated by the computation of the (back)projection operator, which takes roughly O(N^3) floating point operations (flops) for N × N pixel images. Furthermore, classical iterative algorithms may take too many iterations to achieve acceptable images, making these techniques impractical for high-resolution images. Techniques have been developed in the literature to reduce the computational cost of the (back)projection operator to O(N^2 log N) flops. Also, incremental algorithms have been devised that reduce by an order of magnitude the number of iterations required to achieve acceptable images. The present paper introduces an incremental algorithm with a cost of O(N^2 log N) flops per iteration and applies it to the reconstruction of very large tomographic images obtained from synchrotron light illuminated data.

  3. MO-DE-207A-07: Filtered Iterative Reconstruction (FIR) Via Proximal Forward-Backward Splitting: A Synergy of Analytical and Iterative Reconstruction Method for CT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gao, H

    Purpose: This work develops a general framework, namely the filtered iterative reconstruction (FIR) method, to incorporate analytical reconstruction (AR) into iterative reconstruction (IR) for enhanced CT image quality. Methods: FIR is formulated as a combination of filtered data fidelity and sparsity regularization, and then solved by the proximal forward-backward splitting (PFBS) algorithm. As a result, the image reconstruction decouples data fidelity and image regularization with a two-step iterative scheme, during which an AR-projection step updates the filtered data fidelity term, while a denoising solver updates the sparsity regularization term. During the AR-projection step, the image is projected to the data domain to form the data residual, which is then reconstructed by a certain AR into a residual image that is in turn weighted together with the previous image iterate to form the next image iterate. Since the eigenvalues of the AR-projection operator are close to unity, PFBS-based FIR has fast convergence. Results: The proposed FIR method is validated in the setting of circular cone-beam CT with AR being FDK and total-variation sparsity regularization, and improves image quality over both AR and IR. For example, FIR has improved visual assessment and quantitative measurement in terms of both contrast and resolution, and reduced axial and half-fan artifacts. Conclusion: FIR is proposed to incorporate AR into IR, with an efficient image reconstruction algorithm based on PFBS. The CBCT results suggest that FIR synergizes AR and IR with improved image quality and reduced axial and half-fan artifacts. The author was partially supported by the NSFC (#11405105), the 973 Program (#2015CB856000), and the Shanghai Pujiang Talent Program (#14PJ1404500).
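
    The PFBS backbone referenced here alternates a gradient step on the data-fidelity term with a proximal (denoising) step on the regularizer, x <- prox_{t g}(x - t grad f(x)). Below is a generic sketch with quadratic fidelity and an L1 proximal operator (soft-thresholding) standing in for the TV solver; it shows the splitting structure only, not the paper's filtered formulation.

        import numpy as np

        def soft_threshold(v, tau):
            return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

        def pfbs(A, b, lam=0.1, iters=200):
            """Proximal forward-backward splitting for min 0.5*||Ax - b||^2 + lam*||x||_1."""
            step = 1.0 / np.linalg.norm(A, 2) ** 2  # 1/L, L = Lipschitz constant of the gradient
            x = np.zeros(A.shape[1])
            for _ in range(iters):
                grad = A.T @ (A @ x - b)                          # forward (gradient) step
                x = soft_threshold(x - step * grad, step * lam)   # backward (proximal) step
            return x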

  4. Application of a spectrum standardization method for carbon analysis in coal using laser-induced breakdown spectroscopy (LIBS).

    PubMed

    Li, Xiongwei; Wang, Zhe; Fu, Yangting; Li, Zheng; Liu, Jianmin; Ni, Weidou

    2014-01-01

    Measurement of coal carbon content using laser-induced breakdown spectroscopy (LIBS) is limited by its low precision and accuracy. A modified spectrum standardization method was proposed to achieve both reproducible and accurate results for the quantitative analysis of carbon content in coal using LIBS. The proposed method used the molecular emissions of diatomic carbon (C2) and cyanide (CN) to compensate for the diminution of atomic carbon emissions in high volatile content coal samples caused by matrix effect. The compensated carbon line intensities were further converted into an assumed standard state with standard plasma temperature, electron number density, and total number density of carbon, under which the carbon line intensity is proportional to its concentration in the coal samples. To obtain better compensation for fluctuations of total carbon number density, the segmental spectral area was used and an iterative algorithm was applied that is different from our previous spectrum standardization calculations. The modified spectrum standardization model was applied to the measurement of carbon content in 24 bituminous coal samples. The results demonstrate that the proposed method has superior performance over the generally applied normalization methods. The average relative standard deviation was 3.21%, the coefficient of determination was 0.90, the root mean square error of prediction was 2.24%, and the average maximum relative error for the modified model was 12.18%, showing an overall improvement over the corresponding values for the normalization with segmental spectrum area, 6.00%, 0.75, 3.77%, and 15.40%, respectively.

  5. Development of a practice-based research program.

    PubMed

    Hawk, C; Long, C R; Boulanger, K

    1998-01-01

    To establish an infrastructure to collect accurate data from ambulatory settings. The program was developed through an iterative model governed by a process of formative evaluation. The three iterations were a needs assessment, feasibility study and pilot project. Necessary program components were identified as infrastructure, practitioner-researcher partnership, centralized data management and standardized quality assurance measures. Volunteer chiropractors and their staff collected data on patients in their practices in ambulatory settings in the U.S. and Canada. Evaluative measures were counts of participants, patients and completed forms. Standardized, validated and reliable measures collected by patient self-report were used to assess treatment outcomes. These included the SF-36 or SF-12 Health Survey, the Pain Disability Index, and the Global Well-Being Scale. For characteristics for which appropriate standardized instruments were not available, questionnaires were designed and pilot-tested before use. Information was gathered on practice and patient characteristics and treatment outcomes, but for this report, only those data concerning process evaluation are reported. Through the three program iterations, 65 DCs collected data on 1360 patients, 663 of whom were new patients. Follow-up data recorded by doctors were obtained for more than 70% of patients; a maximum of 50% of patient-completed follow-up forms were collected in the three iterations. This program is capable of providing data for descriptive epidemiology of ambulatory patients, and, with continued effort to maximize follow-up, may have utility in providing insight into utilization patterns and patient outcomes.

  6. A surrogate accelerated multicanonical Monte Carlo method for uncertainty quantification

    NASA Astrophysics Data System (ADS)

    Wu, Keyi; Li, Jinglai

    2016-09-01

    In this work we consider a class of uncertainty quantification problems where the system performance or reliability is characterized by a scalar parameter y. The performance parameter y is random due to the presence of various sources of uncertainty in the system, and our goal is to estimate the probability density function (PDF) of y. We propose to use the multicanonical Monte Carlo (MMC) method, a special type of adaptive importance sampling algorithm, to compute the PDF of interest. Moreover, we develop an adaptive algorithm to construct local Gaussian process surrogates to further accelerate the MMC iterations. With numerical examples we demonstrate that the proposed method can achieve several orders of magnitude of speedup over the standard Monte Carlo methods.

  7. Accelerating the weighted histogram analysis method by direct inversion in the iterative subspace.

    PubMed

    Zhang, Cheng; Lai, Chun-Liang; Pettitt, B Montgomery

    The weighted histogram analysis method (WHAM) for free energy calculations is a valuable tool to produce free energy differences with minimal errors. Given multiple simulations, WHAM obtains from the distribution overlaps the optimal statistical estimator of the density of states, from which the free energy differences can be computed. The WHAM equations are often solved by an iterative procedure. In this work, we use a well-known linear algebra algorithm that allows for more rapid convergence to the solution. We find that the computational complexity of the iterative solution to WHAM and the closely related multiple Bennett acceptance ratio (MBAR) method can be improved by using the method of direct inversion in the iterative subspace. We give examples from a lattice model, a simple liquid and an aqueous protein solution.
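
    For reference, a DIIS step works from a short history of iterates x_i and residuals r_i: solve for coefficients c_i summing to one that minimize the norm of the combined residual, then extrapolate x = sum(c_i x_i). A minimal sketch under those standard (Pulay) definitions, not the authors' WHAM-specific code:

        import numpy as np

        def diis_extrapolate(xs, rs):
            """One DIIS extrapolation from histories xs (iterates) and rs (residuals)."""
            m = len(rs)
            B = np.empty((m + 1, m + 1))
            B[:m, :m] = [[ri @ rj for rj in rs] for ri in rs]  # residual overlap matrix
            B[m, :m] = -1.0
            B[:m, m] = -1.0
            B[m, m] = 0.0
            rhs = np.zeros(m + 1)
            rhs[m] = -1.0                        # Lagrange multiplier enforces sum(c) = 1
            c = np.linalg.solve(B, rhs)[:m]
            return sum(ci * xi for ci, xi in zip(c, xs))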

  8. Multiscale optical simulation settings: challenging applications handled with an iterative ray-tracing FDTD interface method.

    PubMed

    Leiner, Claude; Nemitz, Wolfgang; Schweitzer, Susanne; Kuna, Ladislav; Wenzl, Franz P; Hartmann, Paul; Satzinger, Valentin; Sommer, Christian

    2016-03-20

    We show that, with an appropriate combination of two optical simulation techniques (classical ray-tracing and the finite-difference time-domain method), an optical device containing multiple diffractive and refractive optical elements can be accurately simulated in an iterative simulation approach. We compare the simulation results with experimental measurements of the device to discuss the applicability and accuracy of our iterative simulation procedure.

  9. Iterative solution of the inverse Cauchy problem for an elliptic equation by the conjugate gradient method

    NASA Astrophysics Data System (ADS)

    Vasil'ev, V. I.; Kardashevsky, A. M.; Popov, V. V.; Prokopev, G. A.

    2017-10-01

    This article presents the results of a computational experiment carried out using a finite-difference method for solving the inverse Cauchy problem for a two-dimensional elliptic equation. The computational algorithm involves an iterative determination of the missing boundary condition from the override condition using the conjugate gradient method. Calculations are presented for examples with exact solutions, as well as for cases where the additional condition is specified with random errors. The results showed a high efficiency of the conjugate gradient iterative method for the numerical solution of this inverse problem.

  10. Theory and implementation of H-matrix based iterative and direct solvers for Helmholtz and elastodynamic oscillatory kernels

    NASA Astrophysics Data System (ADS)

    Chaillat, Stéphanie; Desiderio, Luca; Ciarlet, Patrick

    2017-12-01

    In this work, we study the accuracy and efficiency of hierarchical matrix (H-matrix) based fast methods for solving dense linear systems arising from the discretization of the 3D elastodynamic Green's tensors. It is well known in the literature that standard H-matrix based methods, although very efficient tools for asymptotically smooth kernels, are not optimal for oscillatory kernels. H2-matrix and directional approaches have been proposed to overcome this problem. However the implementation of such methods is much more involved than the standard H-matrix representation. The central questions we address are twofold. (i) What is the frequency-range in which the H-matrix format is an efficient representation for 3D elastodynamic problems? (ii) What can be expected of such an approach to model problems in mechanical engineering? We show that even though the method is not optimal (in the sense that more involved representations can lead to faster algorithms) an efficient solver can be easily developed. The capabilities of the method are illustrated on numerical examples using the Boundary Element Method.

  11. Continuous analog of multiplicative algebraic reconstruction technique for computed tomography

    NASA Astrophysics Data System (ADS)

    Tateishi, Kiyoko; Yamaguchi, Yusaku; Abou Al-Ola, Omar M.; Kojima, Takeshi; Yoshinaga, Tetsuya

    2016-03-01

    We propose a hybrid dynamical system as a continuous analog to the block-iterative multiplicative algebraic reconstruction technique (BI-MART), which is a well-known iterative image reconstruction algorithm for computed tomography. The hybrid system is described by a switched nonlinear system with a piecewise smooth vector field or differential equation and, for consistent inverse problems, the convergence of non-negatively constrained solutions to a globally stable equilibrium is guaranteed by the Lyapunov theorem. Namely, we can prove theoretically that a weighted Kullback-Leibler divergence measure can be a common Lyapunov function for the switched system. We show that discretizing the differential equation by using the first-order approximation (Euler's method) based on the geometric multiplicative calculus leads to the same iterative formula of the BI-MART with the scaling parameter as a time-step of numerical discretization. The present paper is the first to reveal that a kind of iterative image reconstruction algorithm is constructed by the discretization of a continuous-time dynamical system for solving tomographic inverse problems. Iterative algorithms with not only the Euler method but also the Runge-Kutta methods of lower-orders applied for discretizing the continuous-time system can be used for image reconstruction. A numerical example showing the characteristics of the discretized iterative methods is presented.
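
    The discrete update that the continuous system reduces to is multiplicative: each pixel is scaled by a measured-to-predicted projection ratio raised to a relaxation power. A minimal row-action MART sketch follows, with gamma playing the role of the scaling parameter (time step) discussed above; the block-iterative grouping of the paper is omitted.

        import numpy as np

        def mart(A, b, iters=50, gamma=1.0):
            """Multiplicative ART for Ax = b with A >= 0, b > 0; iterates stay positive."""
            x = np.ones(A.shape[1])
            for _ in range(iters):
                for i in range(A.shape[0]):
                    pred = A[i] @ x
                    if pred > 0.0:
                        x *= (b[i] / pred) ** (gamma * A[i])  # multiplicative row update
            return x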

  12. Solving large mixed linear models using preconditioned conjugate gradient iteration.

    PubMed

    Strandén, I; Lidauer, M

    1999-12-01

    Continuous evaluation of dairy cattle with a random regression test-day model requires a fast solving method and algorithm. A new computing technique feasible in Jacobi and conjugate gradient based iterative methods using iteration on data is presented. In the new computing technique, the calculations in multiplication of a vector by a matrix were reordered into three steps instead of the commonly used two steps. The three-step method was implemented in a general mixed linear model program that used preconditioned conjugate gradient iteration. Performance of this program in comparison to other general solving programs was assessed via estimation of breeding values using univariate, multivariate, and random regression test-day models. Central processing unit time per iteration with the new three-step technique was, at best, one-third that needed with the old technique. Performance was best with the test-day model, which was the largest and most complex model used. The new program did well in comparison to other general software. Programs keeping the mixed model equations in random access memory required at least 20 and 435% more time to solve the univariate and multivariate animal models, respectively. Computations of the second best iteration on data took approximately three and five times longer for the animal and test-day models, respectively, than did the new program. Good performance was due to fast computing time per iteration and quick convergence to the final solutions. Use of preconditioned conjugate gradient based methods in solving large breeding value problems is supported by our findings.

  13. Supporting Mathematics Instruction through Community

    ERIC Educational Resources Information Center

    Amidon, Joel C.; Trevathan, Morgan L.

    2016-01-01

    Raising expectations is nothing new. Every iteration of standards elevates the expectations for what students should know and be able to do. The Common Core State Standards for Mathematics (CCSSM) is no exception, with standards for content and practice that move beyond memorization of traditional algorithms to "make sense of problems and…

  14. A heuristic statistical stopping rule for iterative reconstruction in emission tomography.

    PubMed

    Ben Bouallègue, F; Crouzet, J F; Mariano-Goulart, D

    2013-01-01

    We propose a statistical stopping criterion for iterative reconstruction in emission tomography based on a heuristic statistical description of the reconstruction process. The method was assessed for MLEM reconstruction. Based on Monte-Carlo numerical simulations and using a perfectly modeled system matrix, our method was compared with classical iterative reconstruction followed by low-pass filtering in terms of Euclidean distance to the exact object, noise, and resolution. The stopping criterion was then evaluated with realistic PET data of a Hoffman brain phantom produced using the GATE platform for different count levels. The numerical experiments showed that compared with the classical method, our technique yielded significant improvement of the noise-resolution tradeoff for a wide range of counting statistics compatible with routine clinical settings. When working with realistic data, the stopping rule allowed a qualitatively and quantitatively efficient determination of the optimal image. Our method appears to give a reliable estimation of the optimal stopping point for iterative reconstruction. It should thus be of practical interest as it produces images with similar or better quality than classical post-filtered iterative reconstruction with a mastered computation time.
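
    The MLEM iteration whose stopping point is estimated here has the standard multiplicative form x <- (x / A^T 1) * A^T(y / Ax). A minimal sketch, with the system matrix and data as hypothetical stand-ins:

        import numpy as np

        def mlem(A, y, iters=50, eps=1e-12):
            """Maximum-likelihood EM for emission tomography, assuming y ~ Poisson(Ax)."""
            x = np.ones(A.shape[1])
            sens = A.T @ np.ones(A.shape[0])            # sensitivity image A^T 1
            for _ in range(iters):
                ratio = y / np.maximum(A @ x, eps)      # measured / predicted projections
                x *= (A.T @ ratio) / np.maximum(sens, eps)
            return x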

  15. A Method to Solve Interior and Exterior Camera Calibration Parameters for Image Resection

    NASA Technical Reports Server (NTRS)

    Samtaney, Ravi

    1999-01-01

    An iterative method is presented to solve for the internal and external camera calibration parameters, given model target points and their images from one or more camera locations. The direct linear transform formulation was used to obtain a guess for the iterative method, and herein lies one of the strengths of the present method. In all test cases, the method converged to the correct solution. In general, an overdetermined system of nonlinear equations is solved in the least-squares sense. The iterative method presented is based on Newton-Raphson for solving systems of nonlinear algebraic equations. The Jacobian is analytically derived and the pseudo-inverse of the Jacobian is obtained by singular value decomposition.
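
    The numerical core described here, Newton-Raphson on an overdetermined nonlinear system with the step computed through the SVD-based pseudo-inverse of the Jacobian, looks as follows in generic form; the camera model itself (residual and Jacobian) is left abstract.

        import numpy as np

        def gauss_newton(residual, jacobian, x0, iters=20, tol=1e-10):
            """Least-squares Newton iteration x <- x - pinv(J) r, with pinv computed via SVD."""
            x = np.asarray(x0, dtype=float)
            for _ in range(iters):
                step = np.linalg.pinv(jacobian(x)) @ residual(x)  # SVD pseudo-inverse solve
                x = x - step
                if np.linalg.norm(step) < tol:
                    break
            return x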

  16. Application of Four-Point Newton-EGSOR iteration for the numerical solution of 2D Porous Medium Equations

    NASA Astrophysics Data System (ADS)

    Chew, J. V. L.; Sulaiman, J.

    2017-09-01

    Partial differential equations used to describe nonlinear heat and mass transfer phenomena are difficult to solve. For cases where the exact solution is difficult to obtain, it is necessary to use a numerical procedure such as the finite difference method to solve a particular partial differential equation. In terms of numerical procedure, a particular method can be considered efficient if it gives an approximate solution within the specified error with the least computational complexity. Throughout this paper, the two-dimensional Porous Medium Equation (2D PME) is discretized using the implicit finite difference scheme to construct the corresponding approximation equation. This approximation equation yields a large, sparse nonlinear system. By using the Newton method to linearize the nonlinear system, this paper deals with the application of the Four-Point Newton-EGSOR (4NEGSOR) iterative method for solving 2D PMEs. In addition, the efficiency of the 4NEGSOR iterative method is studied by solving three example problems. For comparative analysis, the Newton-Gauss-Seidel (NGS) and the Newton-SOR (NSOR) iterative methods are also considered. The numerical findings show that the 4NEGSOR method is superior to the NGS and NSOR methods in terms of the number of iterations to reach converged solutions, computation time, and the maximum absolute errors produced.

  17. Solving coupled groundwater flow systems using a Jacobian Free Newton Krylov method

    NASA Astrophysics Data System (ADS)

    Mehl, S.

    2012-12-01

    Jacobian-Free Newton-Krylov (JFNK) methods can have several advantages for simulating coupled groundwater flow processes versus conventional methods. Conventional methods are defined here as those based on an iterative coupling (rather than a direct coupling) and/or that use Picard iteration rather than Newton iteration. In an iterative coupling, the systems are solved separately, coupling information is updated and exchanged between the systems, and the systems are re-solved, etc., until convergence is achieved. Trusted simulators, such as Modflow, are based on these conventional methods of coupling and work well in many cases. An advantage of the JFNK method is that it only requires calculation of the residual vector of the system of equations and thus can make use of existing simulators regardless of how the equations are formulated. This opens the possibility of coupling different process models via augmentation of a residual vector by each separate process, which often requires substantially fewer changes to the existing source code than if the processes were directly coupled. However, appropriate perturbation sizes need to be determined for accurate approximations of the Frechet derivative, which is not always straightforward. Furthermore, preconditioning is necessary for reasonable convergence of the linear solution required at each Krylov iteration. Existing preconditioners can be used and applied separately to each process, which maximizes use of existing code and robust preconditioners. In this work, iteratively coupled parent-child local grid refinement models of groundwater flow and groundwater flow models with nonlinear exchanges to streams are used to demonstrate the utility of the JFNK approach for Modflow models. Use of incomplete Cholesky preconditioners with various levels of fill are examined on a suite of nonlinear and linear models to analyze the effect of the preconditioner. Comparisons of convergence and computer simulation time are made using conventional iteratively coupled methods and those based on Picard iteration to those formulated with JFNK to gain insights on the types of nonlinearities and system features that make one approach advantageous. Results indicate that nonlinearities associated with stream/aquifer exchanges are more problematic than those resulting from unconfined flow.
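
    The defining trick of JFNK is that the Jacobian is never formed: Krylov solvers need only matrix-vector products, which are approximated by a finite difference of the residual, Jv ~ (F(u + eps*v) - F(u)) / eps. A minimal sketch with SciPy's GMRES follows; the fixed perturbation eps is simplistic, and as the abstract notes, production codes must choose it more carefully.

        import numpy as np
        from scipy.sparse.linalg import LinearOperator, gmres

        def jfnk(F, u0, newton_iters=20, tol=1e-8, eps=1e-7):
            """Jacobian-free Newton-Krylov: Newton outer loop, matrix-free GMRES inner solves."""
            u = np.asarray(u0, dtype=float)
            for _ in range(newton_iters):
                Fu = F(u)
                if np.linalg.norm(Fu) < tol:
                    break
                # matrix-free Jacobian action via a finite-difference Frechet derivative
                J = LinearOperator((u.size, u.size), matvec=lambda v: (F(u + eps * v) - Fu) / eps)
                du, _ = gmres(J, -Fu)
                u = u + du
            return u

        # toy residual F(u) = u**2 - 4 componentwise; the root is u = 2
        u = jfnk(lambda u: u**2 - 4.0, np.array([1.0, 3.0]))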

  18. State-of-the-art Anonymization of Medical Records Using an Iterative Machine Learning Framework

    PubMed Central

    Szarvas, György; Farkas, Richárd; Busa-Fekete, Róbert

    2007-01-01

    Objective The anonymization of medical records is of great importance in the human life sciences because a de-identified text can be made publicly available for non-hospital researchers as well, to facilitate research on human diseases. Here the authors have developed a de-identification model that can successfully remove personal health information (PHI) from discharge records to make them conform to the guidelines of the Health Insurance Portability and Accountability Act. Design We introduce here a novel, machine learning-based iterative Named Entity Recognition approach intended for use on semi-structured documents like discharge records. Our method identifies PHI in several steps. First, it labels all entities whose tags can be inferred from the structure of the text and it then utilizes this information to find further PHI phrases in the flow text parts of the document. Measurements Following the standard evaluation method of the first Workshop on Challenges in Natural Language Processing for Clinical Data, we used token-level Precision, Recall and Fβ=1 measure metrics for evaluation. Results Our system achieved outstanding accuracy on the standard evaluation dataset of the de-identification challenge, with an F measure of 99.7534% for the best submitted model. Conclusion We can say that our system is competitive with the current state-of-the-art solutions, while we describe here several techniques that can be beneficial in other tasks that need to handle structured documents such as clinical records. PMID:17823086

  19. Cylinder Expansion Experiments and Measured Product Isentropes for XTX-8004 Explosive

    NASA Astrophysics Data System (ADS)

    Jackson, Scott

    2015-06-01

    We present cylinder expansion data from full-scale (25.4-mm inner diameter) and half-scale (12.7-mm inner diameter) experiments with XTX-8004 explosive, composed of 80% RDX explosive and 20% Sylgard 182 silicone elastomer. An analytic method is reviewed and used to recover detonation product isentropes from the experimental data, which are presented in the standard JWL form. The cylinder expansion data was found to scale well, indicating ideal detonation behavior across the test scales. The analytically determined product JWLs were found to agree well with those produced via iterative hydrocode methods, but required significantly less computational effort.
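
    The JWL form mentioned here writes the product isentrope as p_s(V) = A*exp(-R1*V) + B*exp(-R2*V) + C*V^-(1+omega), with V the relative volume. A sketch of its evaluation follows; the coefficient values below are placeholders for illustration, not the fitted XTX-8004 parameters.

        import numpy as np

        def jwl_isentrope(V, A, B, C, R1, R2, omega):
            """Pressure on the JWL principal isentrope as a function of relative volume V."""
            return A * np.exp(-R1 * V) + B * np.exp(-R2 * V) + C * V ** (-(1.0 + omega))

        # placeholder coefficients, for illustration only
        p = jwl_isentrope(np.linspace(1.0, 7.0, 50), A=500.0, B=10.0, C=1.0, R1=4.5, R2=1.2, omega=0.3)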

  20. STANDARDIZING THE STRUCTURE OF STROKE CLINICAL AND EPIDEMIOLOGIC RESEARCH DATA: THE NINDS STROKE COMMON DATA ELEMENT (CDE) PROJECT

    PubMed Central

    Saver, Jeffrey L.; Warach, Steven; Janis, Scott; Odenkirchen, Joanne; Becker, Kyra; Benavente, Oscar; Broderick, Joseph; Dromerick, Alexander W.; Duncan, Pamela; Elkind, Mitchell S. V.; Johnston, Karen; Kidwell, Chelsea S.; Meschia, James F.; Schwamm, Lee

    2012-01-01

    Background and Purpose The National Institute of Neurological Disorders and Stroke initiated development of stroke-specific Common Data Elements (CDEs) as part of a project to develop data standards for funded clinical research in all fields of neuroscience. Standardizing data elements in translational, clinical and population research in cerebrovascular disease could decrease study start-up time, facilitate data sharing, and promote well-informed clinical practice guidelines. Methods A Working Group of diverse experts in cerebrovascular clinical trials, epidemiology, and biostatistics met regularly to develop a set of Stroke CDEs, selecting among, refining, and adding to existing, field-tested data elements from national registries and funded trials and studies. Candidate elements were revised based on comments from leading national and international neurovascular research organizations and the public. Results The first iteration of the NINDS stroke-specific CDEs comprises 980 data elements spanning nine content areas: 1) Biospecimens and Biomarkers; 2) Hospital Course and Acute Therapies; 3) Imaging; 4) Laboratory Tests and Vital Signs; 5) Long Term Therapies; 6) Medical History and Prior Health Status; 7) Outcomes and Endpoints; 8) Stroke Presentation; 9) Stroke Types and Subtypes. A CDE website provides uniform names and structures for each element, a data dictionary, and template case report forms (CRFs) using the CDEs. Conclusion Stroke-specific CDEs are now available as standardized, scientifically-vetted variable structures to facilitate data collection and data sharing in cerebrovascular patient-oriented research. The CDEs are an evolving resource that will be iteratively improved based on investigator use, new technologies, and emerging concepts and research findings. PMID:22308239

  1. Modules and methods for all photonic computing

    DOEpatents

    Schultz, David R.; Ma, Chao Hung

    2001-01-01

    A method for all photonic computing, comprising the steps of: encoding a first optical/electro-optical element with a two dimensional mathematical function representing input data; illuminating the first optical/electro-optical element with a collimated beam of light; illuminating a second optical/electro-optical element with light from the first optical/electro-optical element, the second optical/electro-optical element having a characteristic response corresponding to an iterative algorithm useful for solving a partial differential equation; iteratively recirculating the light through the second optical/electro-optical element for a predetermined number of iterations; and, after the predetermined number of iterations, optically and/or electro-optically collecting output data representing an iterative optical solution from the second optical/electro-optical element.

  2. Intelligent Distributed Systems

    DTIC Science & Technology

    2015-10-23

    ...periodic gossiping algorithms by using convex combination rules rather than standard averaging rules. On a ring graph, we have discovered how to sequence the gossips within a period to achieve the best possible convergence rate and we have related this optimal value to the classic edge coloring problem. ...consensus. There are three different approaches to distributed averaging: linear iterations, gossiping, and double linear iterations, which are also known as...

  3. Development of the Nuclear-Electronic Orbital Approach and Applications to Ionic Liquids and Tunneling Processes

    DTIC Science & Technology

    2010-02-24

    ...electronic Schrödinger equation. In previous grant cycles, we implemented the NEO approach at the Hartree-Fock (NEO-HF), configuration interaction ...electronic and nuclear molecular orbitals. The resulting electronic and nuclear Hartree-Fock-Roothaan equations are solved iteratively until self... directly into the standard Hartree-Fock-Roothaan equations, which are solved iteratively to self-consistency. The density matrix representation...

  4. Convergence Results on Iteration Algorithms to Linear Systems

    PubMed Central

    Wang, Zhuande; Yang, Chuansheng; Yuan, Yubo

    2014-01-01

    In order to solve large-scale linear systems, backward and Jacobi iteration algorithms are employed. Convergence is the most important issue. In this paper, a unified backward iterative matrix is proposed, and it is shown that some well-known iterative algorithms can be deduced from it. The most important result is that the convergence results have been proved. Firstly, the spectral radius of the Jacobi iterative matrix is positive and that of the backward iterative matrix is strongly positive (larger than a positive constant). Secondly, the two iterations have the same convergence behavior (convergence or divergence simultaneously). Finally, some numerical experiments show that the proposed algorithms are correct and have the merit of backward methods. PMID:24991640
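
    To make the convergence condition concrete: Jacobi splits A = D + R and iterates x <- D^{-1}(b - Rx), which converges exactly when the spectral radius of the iteration matrix -D^{-1}R is below one. A minimal sketch with an explicit spectral-radius check (toy system illustrative only):

        import numpy as np

        def jacobi(A, b, iters=100):
            """Jacobi iteration with an explicit spectral-radius convergence check."""
            D = np.diag(A)
            R = A - np.diag(D)
            G = -R / D[:, None]                       # iteration matrix -D^{-1} R
            rho = max(abs(np.linalg.eigvals(G)))
            if rho >= 1.0:
                raise ValueError(f"Jacobi diverges here: spectral radius {rho:.3f} >= 1")
            x = np.zeros_like(b, dtype=float)
            for _ in range(iters):
                x = (b - R @ x) / D
            return x

        x = jacobi(np.array([[4.0, 1.0], [2.0, 5.0]]), np.array([1.0, 2.0]))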

  5. Value Iteration Adaptive Dynamic Programming for Optimal Control of Discrete-Time Nonlinear Systems.

    PubMed

    Wei, Qinglai; Liu, Derong; Lin, Hanquan

    2016-03-01

    In this paper, a value iteration adaptive dynamic programming (ADP) algorithm is developed to solve infinite horizon undiscounted optimal control problems for discrete-time nonlinear systems. The present value iteration ADP algorithm permits an arbitrary positive semi-definite function to initialize the algorithm. A novel convergence analysis is developed to guarantee that the iterative value function converges to the optimal performance index function. Initialized by different initial functions, it is proven that the iterative value function will be monotonically nonincreasing, monotonically nondecreasing, or nonmonotonic and will converge to the optimum. In this paper, for the first time, the admissibility properties of the iterative control laws are developed for value iteration algorithms. It is emphasized that new termination criteria are established to guarantee the effectiveness of the iterative control laws. Neural networks are used to approximate the iterative value function and compute the iterative control law, respectively, for facilitating the implementation of the iterative ADP algorithm. Finally, two simulation examples are given to illustrate the performance of the present method.
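
    Below is a minimal value-iteration sketch for a small tabular MDP, showing the Bellman backup that the paper's ADP algorithm generalizes; the continuous-state, neural-network-approximated setting of the paper is beyond this toy, and the two-state example is hypothetical.

        import numpy as np

        def value_iteration(P, R, gamma=0.95, tol=1e-8):
            """P: (actions, states, states) transition tensor; R: (states, actions) rewards."""
            V = np.zeros(R.shape[0])                  # arbitrary (here zero) initialization
            while True:
                Q = R + gamma * np.einsum("asn,n->sa", P, V)  # Bellman backup over all actions
                V_new = Q.max(axis=1)
                if np.max(np.abs(V_new - V)) < tol:
                    return V_new, Q.argmax(axis=1)    # values and greedy policy
                V = V_new

        # two states, two actions: action 1 moves to the rewarding state 1 and stays there
        P = np.array([[[1.0, 0.0], [0.0, 1.0]],
                      [[0.0, 1.0], [0.0, 1.0]]])
        R = np.array([[0.0, -0.1], [1.0, 1.0]])
        V, policy = value_iteration(P, R)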

  6. The two-phase method for finding a great number of eigenpairs of the symmetric or weakly non-symmetric large eigenvalue problems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dul, F.A.; Arczewski, K.

    1994-03-01

    Although it has been stated that "an attempt to solve (very large problems) by subspace iterations seems futile," we will show that the statement is not true, especially for extremely large eigenproblems. In this paper a new two-phase subspace iteration/Rayleigh quotient/conjugate gradient method for generalized, large, symmetric eigenproblems Ax = λBx is presented. It has the ability of solving extremely large eigenproblems, N = 216,000, for example, and finding a large number of leftmost or rightmost eigenpairs, up to 1000 or more. Multiple eigenpairs, even those with multiplicity 100, can be easily found. The use of the proposed method for solving big full eigenproblems (N ≈ 10^3), as well as for large weakly non-symmetric eigenproblems, has also been considered. The proposed method is fully iterative; thus the factorization of matrices is avoided. The key idea consists in joining two methods: subspace and Rayleigh quotient iterations. The systems of indefinite and almost singular linear equations (A − σB)x = By are solved by various iterative methods; the conjugate gradient method can be used without danger of breaking down due to its property that may be called "self-correction towards the eigenvector," discovered recently by us. The use of various preconditioners (SSOR and IC) has also been considered. The main features of the proposed method have been analyzed in detail. Comparisons with other methods, such as accelerated subspace iteration, Lanczos, Davidson, TLIME, TRACMN, and SRQMCG, are presented. The results of numerical tests for various physical problems (acoustics, vibrations of structures, quantum chemistry) are presented as well. 40 refs., 12 figs., 2 tabs.
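
    Of the two joined methods, Rayleigh quotient iteration is the easiest to show in isolation: shift by the current Rayleigh quotient, solve the nearly singular shifted system, and renormalize. A minimal standard-eigenproblem sketch (B = I, direct solve); the paper instead solves (A − σB)x = By with iterative conjugate gradient methods.

        import numpy as np

        def rayleigh_quotient_iteration(A, x0, iters=10):
            """Rayleigh quotient iteration for a symmetric matrix A (standard problem, B = I)."""
            x = x0 / np.linalg.norm(x0)
            for _ in range(iters):
                sigma = x @ A @ x                                   # Rayleigh quotient
                y = np.linalg.solve(A - sigma * np.eye(len(x)), x)  # shifted (near-singular) solve
                x = y / np.linalg.norm(y)
            return x @ A @ x, x

        lam, v = rayleigh_quotient_iteration(np.array([[2.0, 1.0], [1.0, 3.0]]), np.array([1.0, 0.0]))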

  7. Use of direct and iterative solvers for estimation of SNP effects in genome-wide selection

    PubMed Central

    2010-01-01

    The aim of this study was to compare iterative and direct solvers for estimation of marker effects in genomic selection. One iterative and two direct methods were used: Gauss-Seidel with Residual Update, Cholesky Decomposition and Gentleman-Givens rotations. To represent different scenarios with respect to the number of markers and of genotyped animals, a simulated data set divided into 25 subsets was used. Number of markers ranged from 1,200 to 5,925 and number of animals ranged from 1,200 to 5,865. Methods were also applied to real data comprising 3081 individuals genotyped for 45181 SNPs. Results from simulated data showed that the iterative solver was substantially faster than direct methods for larger numbers of markers. Use of a direct solver may allow for computing (co)variances of SNP effects. When applied to real data, performance of the iterative method varied substantially, depending on the level of ill-conditioning of the coefficient matrix. From results with real data, Gentleman-Givens rotations would be the method of choice in this particular application as it provided an exact solution within a fairly reasonable time frame (less than two hours). It would indeed be the preferred method whenever computer resources allow its use. PMID:21637627

  8. The finite element model for the propagation of light in scattering media: a direct method for domains with nonscattering regions.

    PubMed

    Arridge, S R; Dehghani, H; Schweiger, M; Okada, E

    2000-01-01

    We present a method for handling nonscattering regions within diffusing domains. The method develops from an iterative radiosity-diffusion approach using Green's functions that was computationally slow. Here we present an improved implementation using a finite element method (FEM) that is direct. The fundamental idea is to introduce extra equations into the standard diffusion FEM to represent nondiffusive light propagation across a nonscattering region. By appropriate mesh node ordering the computational time is not much greater than for diffusion alone. We compare results from this method with those from a discrete ordinate transport code, and with Monte Carlo calculations. The agreement is very good, and, in addition, our scheme allows us to easily model time-dependent and frequency domain problems.

  9. Development of an iterative reconstruction method to overcome 2D detector low resolution limitations in MLC leaf position error detection for 3D dose verification in IMRT.

    PubMed

    Visser, R; Godart, J; Wauben, D J L; Langendijk, J A; Van't Veld, A A; Korevaar, E W

    2016-05-21

    The objective of this study was to introduce a new iterative method to reconstruct multi leaf collimator (MLC) positions based on low resolution ionization detector array measurements and to evaluate its error detection performance. The iterative reconstruction method consists of a fluence model, a detector model and an optimizer. Expected detector response was calculated using a radiotherapy treatment plan in combination with the fluence model and detector model. MLC leaf positions were reconstructed by minimizing differences between expected and measured detector response. The iterative reconstruction method was evaluated for an Elekta SLi with 10.0 mm MLC leaves in combination with the COMPASS system and the MatriXX Evolution (IBA Dosimetry) detector with a spacing of 7.62 mm. The detector was positioned in such a way that each leaf pair of the MLC was aligned with one row of ionization chambers. Known leaf displacements were introduced in various field geometries ranging from -10.0 mm to 10.0 mm. Error detection performance was tested for MLC leaf position dependency relative to the detector position, gantry angle dependency, monitor unit dependency, and for ten clinical intensity modulated radiotherapy (IMRT) treatment beams. For one clinical head and neck IMRT treatment beam, the influence of the iterative reconstruction method on existing 3D dose reconstruction artifacts was evaluated. The described iterative reconstruction method was capable of individual MLC leaf position reconstruction with millimeter accuracy, independent of the relative detector position, within the range of clinically applied MUs for IMRT. Dose reconstruction artifacts in a clinical IMRT treatment beam were considerably reduced as compared to the current dose verification procedure. The iterative reconstruction method allows high accuracy 3D dose verification by including actual MLC leaf positions reconstructed from low resolution 2D measurements.

  10. LROC assessment of non-linear filtering methods in Ga-67 SPECT imaging

    NASA Astrophysics Data System (ADS)

    De Clercq, Stijn; Staelens, Steven; De Beenhouwer, Jan; D'Asseler, Yves; Lemahieu, Ignace

    2006-03-01

    In emission tomography, iterative reconstruction is usually followed by a linear smoothing filter to make such images more appropriate for visual inspection and diagnosis by a physician. This results in a global blurring of the images, smoothing across edges and possibly discarding valuable image information for detection tasks. The purpose of this study is to investigate what advantages a non-linear, edge-preserving postfilter could offer for lesion detection in Ga-67 SPECT imaging. Image quality can be defined based on the task that has to be performed on the image. This study used LROC observer studies based on a dataset created by CPU-intensive GATE Monte Carlo simulations of a voxelized digital phantom. The filters considered in this study were a linear Gaussian filter, a bilateral filter, the Perona-Malik anisotropic diffusion filter and the Catté filtering scheme. The 3D MCAT software phantom was used to simulate the distribution of Ga-67 citrate in the abdomen. Tumor-present cases had a 1-cm diameter tumor randomly placed near the edges of the anatomical boundaries of the kidneys, bone, liver and spleen. Our data set was generated out of a single noisy background simulation using the bootstrap method, to significantly reduce the simulation time and to allow for a larger observer data set. Lesions were simulated separately and added to the background afterwards. These were then reconstructed with an iterative approach, using a sufficiently large number of MLEM iterations to establish convergence. The output of a numerical observer was used in a simplex optimization method to estimate an optimal set of parameters for each postfilter. No significant improvement was found when using edge-preserving filtering techniques instead of standard linear Gaussian filtering.
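
    For reference, the Perona-Malik scheme evaluated above can be written in a few lines. The following is a minimal numpy sketch of the standard four-neighbor discretization; the periodic boundaries via np.roll and the conductance parameter kappa are simplifying assumptions, not the study's implementation.

        import numpy as np

        def perona_malik(img, n_iter=20, kappa=0.1, dt=0.2):
            # Four-neighbor Perona-Malik diffusion: the conductance
            # g = exp(-(|grad|/kappa)^2) shuts diffusion down across
            # strong edges while smoothing inside regions.
            u = img.astype(float).copy()
            g = lambda d: np.exp(-(d / kappa) ** 2)
            for _ in range(n_iter):
                dn = np.roll(u, -1, axis=0) - u   # periodic boundaries
                ds = np.roll(u, 1, axis=0) - u    # (a simplification)
                de = np.roll(u, -1, axis=1) - u
                dw = np.roll(u, 1, axis=1) - u
                u += dt * (g(dn)*dn + g(ds)*ds + g(de)*de + g(dw)*dw)
            return u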

  11. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liu, L; Han, Y; Jin, M

    Purpose: To develop an iterative reconstruction method for X-ray CT in which the reconstruction can quickly converge to the desired solution with many fewer projection views. Methods: The reconstruction is formulated as a convex feasibility problem, i.e. the solution is an intersection of three convex sets: 1) the data fidelity (DF) set – the L2 norm of the difference between observed projections and those from the reconstructed image is no greater than an error bound; 2) the non-negativity of image voxels (NN) set; and 3) the piecewise constant (PC) set – the total variation (TV) of the reconstructed image is no greater than an upper bound. The solution can be found by applying projection onto convex sets (POCS) sequentially for these three convex sets. Specifically, the algebraic reconstruction technique and setting negative voxels to zero are used for projection onto the DF and NN sets, respectively, while the projection onto the PC set is achieved by solving a standard Rudin, Osher, and Fatemi (ROF) model. The proposed method, named full sequential POCS (FS-POCS), is tested using the Shepp-Logan phantom and the Catphan600 phantom and compared with two similar algorithms, TV-POCS and CP-TV. Results: Using the Shepp-Logan phantom, the root mean square error (RMSE) of the reconstructed images as a function of the number of iterations is used as the convergence measure. In general, FS-POCS converges faster than TV-POCS and CP-TV, especially with fewer projection views. FS-POCS can also achieve accurate cone-beam CT reconstruction of the Catphan600 phantom using only 54 views, comparable to that of FDK using 364 views. Conclusion: We developed an efficient iterative reconstruction method for sparse-view CT using full sequential POCS. The simulation and physical phantom data demonstrate the computational efficiency and effectiveness of FS-POCS.
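
    A toy version of the sequential projection loop is sketched below. Under the stated assumptions (a small dense system matrix A with nonzero rows, a flattened square image, and scikit-image's ROF solver standing in for the exact projection onto the TV ball), it shows the DF/NN/PC ordering; it is not the authors' FS-POCS code.

        import numpy as np
        from skimage.restoration import denoise_tv_chambolle

        def fs_pocs_sketch(A, b, x0, n_iter=50, tv_weight=0.1):
            # Toy sequential-POCS loop: (1) one Kaczmarz/ART sweep toward
            # the data-fidelity set, (2) clipping onto the non-negativity
            # set, (3) an ROF/TV step standing in for the projection onto
            # the piecewise-constant set.
            x = x0.copy()
            side = int(np.sqrt(x.size))          # assumes a square image
            row_norms = (A * A).sum(axis=1)
            for _ in range(n_iter):
                for i in range(A.shape[0]):      # ART sweep over all rays
                    x += A[i] * (b[i] - A[i] @ x) / row_norms[i]
                x = np.clip(x, 0.0, None)        # non-negativity
                x = denoise_tv_chambolle(x.reshape(side, side),
                                         weight=tv_weight).ravel()
            return x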

  12. Direct reconstruction of cardiac PET kinetic parametric images using a preconditioned conjugate gradient approach.

    PubMed

    Rakvongthai, Yothin; Ouyang, Jinsong; Guerin, Bastien; Li, Quanzheng; Alpert, Nathaniel M; El Fakhri, Georges

    2013-10-01

    Our research goal is to develop an algorithm to reconstruct cardiac positron emission tomography (PET) kinetic parametric images directly from sinograms and compare its performance with the conventional indirect approach. Time activity curves of an NCAT phantom were computed according to a one-tissue compartmental kinetic model with realistic kinetic parameters. The sinograms at each time frame were simulated using the activity distribution for the time frame. The authors reconstructed the parametric images directly from the sinograms by optimizing a cost function, which included the Poisson log-likelihood and a spatial regularization term, using the preconditioned conjugate gradient (PCG) algorithm with the proposed preconditioner. The proposed preconditioner is a diagonal matrix whose diagonal entries are the ratio of the parameter and the sensitivity of the radioactivity associated with that parameter. The authors compared the parametric images reconstructed using the direct approach with those reconstructed using the conventional indirect approach. At the same bias, the direct approach yielded a significant relative reduction in standard deviation of 12%-29% and 32%-70% for 50 × 10^6 and 10 × 10^6 detected coincidence counts, respectively. Also, the PCG method effectively reached a constant value after only 10 iterations (with numerical convergence achieved after 40-50 iterations), while more than 500 iterations were needed for CG. The authors have developed a novel approach based on the PCG algorithm to directly reconstruct cardiac PET parametric images from sinograms; it yields better estimates of kinetic parameters than the conventional indirect approach, i.e., curve fitting of reconstructed images. The PCG method increases the convergence rate of reconstruction significantly as compared to the conventional CG method.
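
    The preconditioned conjugate gradient step itself is standard; a textbook sketch with a diagonal preconditioner, in the spirit of the ratio-based preconditioner described above, is given below. The operator apply_A and the vector of inverse diagonal entries m_inv_diag are placeholders, not the paper's system model.

        import numpy as np

        def pcg(apply_A, b, m_inv_diag, n_iter=50, tol=1e-10):
            # Textbook preconditioned conjugate gradient; the diagonal
            # preconditioner is supplied as its inverse entries so each
            # application is a cheap elementwise product.
            x = np.zeros_like(b)
            r = b - apply_A(x)
            z = m_inv_diag * r
            p = z.copy()
            rz = r @ z
            for _ in range(n_iter):
                Ap = apply_A(p)
                alpha = rz / (p @ Ap)
                x += alpha * p
                r -= alpha * Ap
                if np.linalg.norm(r) < tol:
                    break
                z = m_inv_diag * r
                rz_new = r @ z
                p = z + (rz_new / rz) * p
                rz = rz_new
            return x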

  13. Performance comparison between total variation (TV)-based compressed sensing and statistical iterative reconstruction algorithms.

    PubMed

    Tang, Jie; Nett, Brian E; Chen, Guang-Hong

    2009-10-07

    Of all available reconstruction methods, statistical iterative reconstruction algorithms appear particularly promising since they enable accurate physical noise modeling. The newly developed compressive sampling/compressed sensing (CS) algorithm has shown the potential to accurately reconstruct images from highly undersampled data. The CS algorithm can be implemented in the statistical reconstruction framework as well. In this study, we compared the performance of two standard statistical reconstruction algorithms (penalized weighted least squares and q-GGMRF) to the CS algorithm. In assessing the image quality using these iterative reconstructions, it is critical to utilize realistic background anatomy as the reconstruction results are object dependent. A cadaver head was scanned on a Varian Trilogy system at different dose levels. Several figures of merit, including the relative root mean square error and a quality factor that accounts for the noise performance and the spatial resolution, were introduced to objectively evaluate reconstruction performance. A comparison between the three algorithms is presented for a constant undersampling factor at several dose levels. To facilitate this comparison, the original CS method was formulated in the framework of the statistical image reconstruction algorithms. Important conclusions from our studies are that (1) for realistic neuro-anatomy, over 100 projections are required to avoid streak artifacts in the reconstructed images even with CS reconstruction, (2) regardless of the algorithm employed, it is beneficial to distribute the total dose to more views as long as each view remains quantum noise limited and (3) the total variation-based CS method is not appropriate for very low dose levels because, while it can mitigate streaking artifacts, the images exhibit patchy behavior, which is potentially harmful for medical diagnosis.

  14. Evaluation of integration methods for hybrid simulation of complex structural systems through collapse

    NASA Astrophysics Data System (ADS)

    Del Carpio R., Maikol; Hashemi, M. Javad; Mosqueda, Gilberto

    2017-10-01

    This study examines the performance of integration methods for hybrid simulation of large and complex structural systems in the context of structural collapse due to seismic excitations. The target application is not necessarily real-time testing, but rather models that involve large-scale physical sub-structures and highly nonlinear numerical models. Four case studies are presented and discussed. In the first case study, the accuracy of integration schemes, including two widely used methods, namely a modified version of the implicit Newmark method with a fixed number of iterations (iterative) and the operator-splitting method (non-iterative), is examined through pure numerical simulations. The second case study presents the results of 10 hybrid simulations repeated with the two aforementioned integration methods, considering various time steps and fixed numbers of iterations for the iterative integration method. The physical sub-structure in these tests consists of a single-degree-of-freedom (SDOF) cantilever column with replaceable steel coupons that provides repeatable, highly nonlinear behavior including fracture-type strength and stiffness degradation. In case study three, the implicit Newmark method with a fixed number of iterations is applied to hybrid simulations of a 1:2 scale steel moment frame that includes a relatively complex nonlinear numerical substructure. Lastly, a more complex numerical substructure is considered by constructing a nonlinear computational model of a moment frame coupled to a hybrid model of a 1:2 scale steel gravity frame. The last two case studies are conducted on the same prototype structure, and the selection of time steps and fixed numbers of iterations is closely examined in pre-test simulations. The generated unbalanced forces are used as an index to track the equilibrium error and to predict the accuracy and stability of the simulations.

  15. Application of the perturbation iteration method to boundary layer type problems.

    PubMed

    Pakdemirli, Mehmet

    2016-01-01

    The recently developed perturbation iteration method is applied to boundary layer type singular problems for the first time. As a preliminary work on the topic, the simplest algorithm of PIA(1,1) is employed in the calculations. Linear and nonlinear problems are solved to outline the basic ideas of the new solution technique. The inner and outer solutions are determined with the iteration algorithm and matched to construct a composite expansion valid within all parts of the domain. The solutions are contrasted with the available exact or numerical solutions. It is shown that the perturbation-iteration algorithm can be effectively used for solving boundary layer type problems.

  16. Spotting the difference in molecular dynamics simulations of biomolecules

    NASA Astrophysics Data System (ADS)

    Sakuraba, Shun; Kono, Hidetoshi

    2016-08-01

    Comparing two trajectories from molecular simulations conducted under different conditions is not a trivial task. In this study, we apply a method called Linear Discriminant Analysis with ITERative procedure (LDA-ITER) to compare two molecular simulation results by finding the appropriate projection vectors. Because LDA-ITER attempts to determine a projection such that the projections of the two trajectories do not overlap, the comparison does not suffer from a strong anisotropy, which is an issue in protein dynamics. LDA-ITER is applied to two test cases: the T4 lysozyme protein simulation with or without a point mutation and the allosteric protein PDZ2 domain of hPTP1E with or without a ligand. The projection determined by the method agrees with the experimental data and previous simulations. The proposed procedure, which complements existing methods, is a versatile analytical method that is specialized to find the "difference" between two trajectories.

  17. Improved Convergence and Robustness of USM3D Solutions on Mixed-Element Grids

    NASA Technical Reports Server (NTRS)

    Pandya, Mohagna J.; Diskin, Boris; Thomas, James L.; Frink, Neal T.

    2016-01-01

    Several improvements to the mixed-element USM3D discretization and defect-correction schemes have been made. A new methodology for nonlinear iterations, called the Hierarchical Adaptive Nonlinear Iteration Method, has been developed and implemented. The Hierarchical Adaptive Nonlinear Iteration Method provides two additional hierarchies around a simple and approximate preconditioner of USM3D. The hierarchies are a matrix-free linear solver for the exact linearization of the Reynolds-averaged Navier-Stokes equations and a nonlinear control of the solution update. Two variants of the Hierarchical Adaptive Nonlinear Iteration Method are assessed on four benchmark cases, namely, a zero-pressure-gradient flat plate, a bump-in-channel configuration, the NACA 0012 airfoil, and a NASA Common Research Model configuration. The new methodology provides a convergence acceleration factor of 1.4 to 13 over the preconditioner-alone method, which represents the baseline solver technology.

  19. Investigation of iterative image reconstruction in three-dimensional optoacoustic tomography

    PubMed Central

    Wang, Kun; Su, Richard; Oraevsky, Alexander A; Anastasio, Mark A

    2012-01-01

    Iterative image reconstruction algorithms for optoacoustic tomography (OAT), also known as photoacoustic tomography, have the ability to improve image quality over analytic algorithms due to their ability to incorporate accurate models of the imaging physics, instrument response, and measurement noise. However, to date, there have been few reported attempts to employ advanced iterative image reconstruction algorithms for improving image quality in three-dimensional (3D) OAT. In this work, we implement and investigate two iterative image reconstruction methods for use with a 3D OAT small animal imager: namely, a penalized least-squares (PLS) method employing a quadratic smoothness penalty and a PLS method employing a total variation norm penalty. The reconstruction algorithms employ accurate models of the ultrasonic transducer impulse responses. Experimental data sets are employed to compare the performances of the iterative reconstruction algorithms to that of a 3D filtered backprojection (FBP) algorithm. By use of quantitative measures of image quality, we demonstrate that the iterative reconstruction algorithms can mitigate image artifacts and preserve spatial resolution more effectively than FBP algorithms. These features suggest that the use of advanced image reconstruction algorithms can improve the effectiveness of 3D OAT while reducing the amount of data required for biomedical applications. PMID:22864062

  20. Efficient Iterative Methods Applied to the Solution of Transonic Flows

    NASA Astrophysics Data System (ADS)

    Wissink, Andrew M.; Lyrintzis, Anastasios S.; Chronopoulos, Anthony T.

    1996-02-01

    We investigate the use of an inexact Newton's method to solve the potential equations in the transonic regime. As a test case, we solve the two-dimensional steady transonic small disturbance equation. Approximate factorization/ADI techniques have traditionally been employed for implicit solutions of this nonlinear equation. Instead, we apply Newton's method using an exact analytical determination of the Jacobian, with preconditioned conjugate-gradient-like iterative solvers for the solution of the linear systems in each Newton iteration. Two iterative solvers are tested: a block s-step version of the classical Orthomin(k) algorithm, called orthogonal s-step Orthomin (OSOmin), and the well-known GMRES method. The preconditioner is a vectorizable and parallelizable version of incomplete LU (ILU) factorization. The efficiency of the Newton-iterative method on vector and parallel computer architectures is the main issue addressed. In vectorized tests on a single processor of the Cray C-90, the performance of Newton-OSOmin is superior to that of Newton-GMRES and of a more traditional monotone AF/ADI method (MAF) for a variety of transonic Mach numbers and mesh sizes. Newton-GMRES is superior to MAF for some cases. The parallel performance of the Newton method is also found to be very good on multiple processors of the Cray C-90 and on the massively parallel Thinking Machines CM-5, where very fast execution rates (up to 9 Gflops) are found for large problems.
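
    The inexact Newton idea can be condensed into a short sketch: each outer Newton step solves the linear system only approximately with GMRES, using finite-difference Jacobian-vector products so the Jacobian is never formed. This is a generic Newton-GMRES loop under illustrative tolerances, not the paper's OSOmin/ILU-preconditioned solver.

        import numpy as np
        from scipy.sparse.linalg import LinearOperator, gmres

        def newton_gmres(F, x0, n_newton=20, tol=1e-8, eps=1e-7):
            # Inexact Newton: solve J(x) s = -F(x) loosely at each step.
            x = x0.copy()
            for _ in range(n_newton):
                fx = F(x)
                if np.linalg.norm(fx) < tol:
                    break
                # matrix-free Jacobian-vector product by finite differences
                jv = lambda v: (F(x + eps * v) - fx) / eps
                J = LinearOperator((x.size, x.size), matvec=jv, dtype=float)
                s, _ = gmres(J, -fx, maxiter=50)   # loose inner solve
                x = x + s
            return x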

  1. In vitro evaluation of a new iterative reconstruction algorithm for dose reduction in coronary artery calcium scoring

    PubMed Central

    Allmendinger, Thomas; Kunz, Andreas S; Veyhl-Wichmann, Maike; Ergün, Süleyman; Bley, Thorsten A; Petritsch, Bernhard

    2017-01-01

    Background: Coronary artery calcium (CAC) scoring is a widespread tool for cardiac risk assessment in asymptomatic patients, and its accompanying adverse effects, i.e. radiation exposure, should be as low as reasonably achievable. Purpose: To evaluate a new iterative reconstruction (IR) algorithm for dose reduction of in vitro coronary artery calcium scoring at different tube currents. Material and Methods: An anthropomorphic calcium scoring phantom was scanned in different configurations simulating slim, average-sized, and large patients. A standard calcium scoring protocol was performed on a third-generation dual-source CT at 120 kVp tube voltage. The reference tube current was 80 mAs as standard and was reduced stepwise to 60, 40, 20, and 10 mAs. Images were reconstructed with weighted filtered back projection (wFBP) and a new version of an established IR kernel at different strength levels. Calcifications were quantified by calculating Agatston and volume scores. Subjective image quality was visualized with scans of an ex vivo human heart. Results: In general, Agatston and volume scores remained relatively stable between 80 and 40 mAs and increased at lower tube currents, particularly in the medium and large phantom. IR reduced this effect, as both Agatston and volume scores decreased with increasing levels of IR compared to wFBP (P < 0.001). Depending on the selected parameters, radiation dose could be lowered by up to 86% in the large phantom when selecting a reference tube current of 10 mAs, with resulting Agatston levels close to the reference settings. Conclusion: New iterative reconstruction kernels may allow for a reduction in tube current for established Agatston scoring protocols and consequently for a substantial reduction in radiation exposure. PMID:28607763

  2. On a new iterative method for solving linear systems and comparison results

    NASA Astrophysics Data System (ADS)

    Jing, Yan-Fei; Huang, Ting-Zhu

    2008-10-01

    In Ujevic [A new iterative method for solving linear systems, Appl. Math. Comput. 179 (2006) 725-730], the author obtained a new iterative method for solving linear systems, which can be considered as a modification of the Gauss-Seidel method. In this paper, we show that this method is a special case from the point of view of projection techniques. A different approach is then established, which is both theoretically and numerically proven to be at least as good as, and generally better than, Ujevic's. As the presented numerical examples show, in most cases the convergence rate is more than one and a half times that of Ujevic's method.

  3. Non-homogeneous updates for the iterative coordinate descent algorithm

    NASA Astrophysics Data System (ADS)

    Yu, Zhou; Thibault, Jean-Baptiste; Bouman, Charles A.; Sauer, Ken D.; Hsieh, Jiang

    2007-02-01

    Statistical reconstruction methods show great promise for improving resolution and reducing noise and artifacts in helical X-ray CT. In fact, statistical reconstruction seems to be particularly valuable in maintaining reconstructed image quality when the dosage is low and the noise is therefore high. However, high computational cost and long reconstruction times remain a barrier to the use of statistical reconstruction in practical applications. Among the various iterative methods that have been studied for statistical reconstruction, iterative coordinate descent (ICD) has been found to have relatively low overall computational requirements due to its fast convergence. This paper presents a novel method for further speeding the convergence of the ICD algorithm, and therefore reducing the overall reconstruction time for statistical reconstruction. The method, which we call non-homogeneous iterative coordinate descent (NH-ICD), uses spatially non-homogeneous updates to speed convergence by focusing computation where it is most needed. Experimental results with real data indicate that the method speeds reconstruction by roughly a factor of two for typical 3D multi-slice geometries.
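
    The core ICD update is compact enough to sketch. Below is a plain least-squares coordinate descent sweep in Python that keeps the error sinogram current, with a comment marking where NH-ICD's non-homogeneous visit order would differ; the dense matrix A is purely illustrative, not the CT system model.

        import numpy as np

        def icd_sweeps(A, y, x, n_sweeps=10):
            # Pixel-at-a-time descent on ||y - A x||^2, keeping the error
            # sinogram e = y - A x current so each update costs only one
            # column operation.
            e = y - A @ x
            col_norms = (A * A).sum(axis=0)
            for _ in range(n_sweeps):
                for j in range(x.size):
                    aj = A[:, j]
                    step = (aj @ e) / col_norms[j]  # exact 1D minimizer
                    x[j] += step
                    e -= aj * step
                # NH-ICD would replace the fixed visit order above with a
                # non-uniform order concentrating updates where the image
                # is still changing.
            return x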

  4. Matrix completion-based reconstruction for undersampled magnetic resonance fingerprinting data.

    PubMed

    Doneva, Mariya; Amthor, Thomas; Koken, Peter; Sommer, Karsten; Börnert, Peter

    2017-09-01

    An iterative reconstruction method for undersampled magnetic resonance fingerprinting data is presented. The method performs the reconstruction entirely in k-space and is related to low rank matrix completion methods. A low dimensional data subspace is estimated from a small number of k-space locations fully sampled in the temporal direction and used to reconstruct the missing k-space samples before MRF dictionary matching. Performing the iterations in k-space eliminates the need to apply a forward and an inverse Fourier transform in each iteration, as required in previously proposed iterative reconstruction methods for undersampled MRF data. A projection onto the low dimensional data subspace is performed as a matrix multiplication instead of the singular value thresholding typically used in low rank matrix completion, further reducing the computational complexity of the reconstruction. The method is theoretically described and validated in phantom and in vivo experiments. The quality of the parameter maps can be significantly improved compared to direct matching on undersampled data.
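
    The key operation, projection onto the temporal subspace by a matrix product, is a one-liner; below is a hedged sketch of a simple projection/data-consistency alternation. The array shapes, the mask convention, and the iteration count are illustrative assumptions, not the published reconstruction.

        import numpy as np

        def mc_reconstruct(kdata, mask, U, n_iter=30):
            # kdata: (n_frames, n_samples) complex k-space array with
            # unmeasured entries zero-filled; mask marks measured entries.
            # U: (n_frames, r) orthonormal temporal subspace basis
            # estimated from the fully sampled k-space locations.
            x = kdata.copy()
            for _ in range(n_iter):
                x = U @ (U.conj().T @ x)   # subspace projection, no FFTs
                x[mask] = kdata[mask]      # data consistency on samples
            return x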

  5. Development of New Methods for the Solution of Nonlinear Differential Equations by the Method of Lie Series and Extension to New Fields.

    DTIC Science & Technology

    perturbation formulas of Groebner (1960) and Alexseev (1961) for the solution of ordinary differential equations. These formulas are generalized and...iteration methods are given, which include the methods of Picard, Groebner-Knapp, Poincare, Chen, as special cases. Chapter 3 generalizes an iterated

  6. Convergence characteristics of nonlinear vortex-lattice methods for configuration aerodynamics

    NASA Technical Reports Server (NTRS)

    Seginer, A.; Rusak, Z.; Wasserstrom, E.

    1983-01-01

    For nonlinear panel methods there is no proof of the existence and uniqueness of solutions. The convergence characteristics of an iterative, nonlinear vortex-lattice method are, therefore, carefully investigated. The effects of several parameters on the computed aerodynamic coefficients and on the flow-field details are presented, including (1) the surface-paneling method, (2) the integration method for the trajectories of the wake vortices, (3) vortex-grid refinement, and (4) the initial conditions for the first iteration. The convergence of the iterative-solution procedure is usually rapid. The solution converges with grid refinement to a constant value, but the final value is not unique and varies with the wing surface-paneling and wake-discretization methods within some range in the vicinity of the experimental result.

  7. Efficient Coupling of Fluid-Plasma and Monte-Carlo-Neutrals Models for Edge Plasma Transport

    NASA Astrophysics Data System (ADS)

    Dimits, A. M.; Cohen, B. I.; Friedman, A.; Joseph, I.; Lodestro, L. L.; Rensink, M. E.; Rognlien, T. D.; Sjogreen, B.; Stotler, D. P.; Umansky, M. V.

    2017-10-01

    UEDGE has been valuable for modeling transport in the tokamak edge and scrape-off layer due in part to its efficient fully implicit solution of coupled fluid neutrals and plasma models. We are developing an implicit coupling of the kinetic Monte-Carlo (MC) code DEGAS-2, as the neutrals model component, to the UEDGE plasma component, based on an extension of the Jacobian-free Newton-Krylov (JFNK) method to MC residuals. The coupling components build on the methods and coding already present in UEDGE. For the linear Krylov iterations, a procedure has been developed to "extract" a good preconditioner from that of UEDGE. This preconditioner may also be used to greatly accelerate the convergence rate of a relaxed fixed-point iteration, which may provide a useful "intermediate" algorithm. The JFNK method also requires calculation of Jacobian-vector products, for which any finite-difference procedure is inaccurate when a MC component is present. A semi-analytical procedure that retains the standard MC accuracy and fully kinetic neutrals physics is therefore being developed. Prepared for US DOE by LLNL under Contract DE-AC52-07NA27344 and LDRD project 15-ERD-059, by PPPL under Contract DE-AC02-09CH11466, and supported in part by the U.S. DOE, OFES.

  8. The Impact of Different Levels of Adaptive Iterative Dose Reduction 3D on Image Quality of 320-Row Coronary CT Angiography: A Clinical Trial

    PubMed Central

    Feger, Sarah; Rief, Matthias; Zimmermann, Elke; Martus, Peter; Schuijf, Joanne Désirée; Blobel, Jörg; Richter, Felicitas; Dewey, Marc

    2015-01-01

    Purpose: The aim of this study was the systematic image quality evaluation of coronary CT angiography (CTA), reconstructed with 3 different levels of adaptive iterative dose reduction (AIDR 3D) and compared to filtered back projection (FBP) with quantum denoising software (QDS). Methods: Standard-dose CTA raw data of 30 patients with a mean radiation dose of 3.2 ± 2.6 mSv were reconstructed using AIDR 3D mild, standard, strong and compared to FBP/QDS. Objective image quality comparison (signal, noise, signal-to-noise ratio (SNR), contrast-to-noise ratio (CNR), contour sharpness) was performed using 21 measurement points per patient, including measurements in each coronary artery from proximal to distal. Results: Objective image quality parameters improved with increasing levels of AIDR 3D. Noise was lowest in AIDR 3D strong (p≤0.001 at 20/21 measurement points, compared with FBP/QDS). Signal and contour sharpness analysis showed no significant difference between the reconstruction algorithms for most measurement points. The best coronary SNR and CNR were achieved with AIDR 3D strong. No loss of SNR or CNR in distal segments was seen with AIDR 3D as compared to FBP. Conclusions: On standard-dose coronary CTA images, AIDR 3D strong showed higher objective image quality than FBP/QDS without reducing contour sharpness. Trial Registration: Clinicaltrials.gov NCT00967876 PMID:25945924

  9. Effect of low-dose CT and iterative reconstruction on trabecular bone microstructure assessment

    NASA Astrophysics Data System (ADS)

    Kopp, Felix K.; Baum, Thomas; Nasirudin, Radin A.; Mei, Kai; Garcia, Eduardo G.; Burgkart, Rainer; Rummeny, Ernst J.; Bauer, Jan S.; Noël, Peter B.

    2016-03-01

    The trabecular bone microstructure is an important factor in the development of osteoporosis. It is well known that its deterioration is one effect when osteoporosis occurs. Previous research showed that the analysis of trabecular bone microstructure enables more precise diagnosis of osteoporosis compared to a sole measurement of the mineral density. Microstructure parameters are assessed on volumetric images of the bone acquired either with high-resolution magnetic resonance imaging, high-resolution peripheral quantitative computed tomography or high-resolution computed tomography (CT), with only CT being applicable to the spine, which is one of the most clinically relevant fracture sites. However, due to the high radiation exposure involved in imaging the whole spine, these measurements are not applicable in current clinical routine. In this work, twelve vertebrae from three different donors were scanned with standard and low radiation dose. Trabecular bone microstructure parameters were assessed for CT images reconstructed with statistical iterative reconstruction (SIR) and analytical filtered backprojection (FBP). The resulting structure parameters were correlated to the biomechanically determined fracture load of each vertebra. Microstructure parameters assessed for low-dose data reconstructed with SIR correlated significantly with fracture loads, as did parameters assessed for standard-dose data reconstructed with FBP. Ideal results were achieved with low to zero regularization strength, yielding microstructure parameters not significantly different from those assessed for standard-dose FBP data. Moreover, in comparison to other approaches, superior noise-resolution trade-offs can be found with the proposed methods.

  10. Iterative load-balancing method with multigrid level relaxation for particle simulation with short-range interactions

    NASA Astrophysics Data System (ADS)

    Furuichi, Mikito; Nishiura, Daisuke

    2017-10-01

    We developed dynamic load-balancing algorithms for Particle Simulation Methods (PSM) involving short-range interactions, such as Smoothed Particle Hydrodynamics (SPH), the Moving Particle Semi-implicit method (MPS), and the Discrete Element Method (DEM). These are needed to handle billions of particles modeled in large distributed-memory computer systems. Our method utilizes flexible orthogonal domain decomposition, allowing the sub-domain boundaries in the column to be different for each row. The imbalances in execution time between parallel logical processes are treated as a nonlinear residual. Load balancing is achieved by minimizing the residual within the framework of an iterative nonlinear solver, combined with a multigrid technique in the local smoother. Our iterative method is suitable for adjusting the sub-domains frequently by monitoring the performance of each computational process, because it is computationally cheaper in terms of communication and memory costs than non-iterative methods. Numerical tests demonstrated the ability of our approach to handle workload imbalances arising from a non-uniform particle distribution, differences in particle types, or heterogeneous computer architecture, which was difficult with previously proposed methods. We analyzed the parallel efficiency and scalability of our method using the Earth Simulator and K computer supercomputer systems.

  11. Solving the multiple-set split equality common fixed-point problem of firmly quasi-nonexpansive operators.

    PubMed

    Zhao, Jing; Zong, Haili

    2018-01-01

    In this paper, we propose parallel and cyclic iterative algorithms for solving the multiple-set split equality common fixed-point problem of firmly quasi-nonexpansive operators. We also combine the cyclic and parallel iterative processes and propose two mixed iterative algorithms. None of the proposed algorithms requires any prior information about the operator norms. Under mild assumptions, we prove weak convergence of the proposed iterative sequences in Hilbert spaces. As applications, we obtain several iterative algorithms for solving the multiple-set split equality problem.

  12. Dynamic compression of copper to over 450 GPa: A high-pressure standard

    DOE PAGES

    Kraus, R. G.; Davis, J. -P.; Seagle, C. T.; ...

    2016-04-12

    We obtained an absolute stress-density path for shocklessly compressed copper to over 450 GPa. A magnetic pressure drive is temporally tailored to generate shockless compression waves through over 2.5-mm-thick copper samples. Furthermore, the free-surface velocity data is analyzed for Lagrangian sound velocity using the iterative Lagrangian analysis (ILA) technique, which relies upon the method of characteristics. We correct for the effects of strength and plastic work heating to determine an isentropic compression path. By assuming a Debye model for the heat capacity, we can further correct the isentrope to an isotherm. Finally, our determination of the isentrope and isotherm of copper represents a highly accurate pressure standard for copper to over 450 GPa.

  13. Joint recognition and discrimination in nonlinear feature space

    NASA Astrophysics Data System (ADS)

    Talukder, Ashit; Casasent, David P.

    1997-09-01

    A new general method for linear and nonlinear feature extraction is presented. It is novel since it provides both representation and discrimination while most other methods are concerned with only one of these issues. We call this approach the maximum representation and discrimination feature (MRDF) method and show that the Bayes classifier and the Karhunen-Loeve transform are special cases of it. We refer to our nonlinear feature extraction technique as nonlinear eigen-feature extraction. It is new since it has a closed-form solution and produces nonlinear decision surfaces with higher rank than do iterative methods. Results on synthetic databases are shown and compared with results from standard Fukunaga-Koontz transform and Fisher discriminant function methods. The method is also applied to an automated product inspection problem (discrimination) and to the classification and pose estimation of two similar objects (representation and discrimination).

  14. The preconditioned Gauss-Seidel method faster than the SOR method

    NASA Astrophysics Data System (ADS)

    Niki, Hiroshi; Kohno, Toshiyuki; Morimoto, Munenori

    2008-09-01

    In recent years, a number of preconditioners have been applied to linear systems [A.D. Gunawardena, S.K. Jain, L. Snyder, Modified iterative methods for consistent linear systems, Linear Algebra Appl. 154-156 (1991) 123-143; T. Kohno, H. Kotakemori, H. Niki, M. Usui, Improving modified Gauss-Seidel method for Z-matrices, Linear Algebra Appl. 267 (1997) 113-123; H. Kotakemori, K. Harada, M. Morimoto, H. Niki, A comparison theorem for the iterative method with the preconditioner (I+Smax), J. Comput. Appl. Math. 145 (2002) 373-378; H. Kotakemori, H. Niki, N. Okamoto, Accelerated iteration method for Z-matrices, J. Comput. Appl. Math. 75 (1996) 87-97; M. Usui, H. Niki, T. Kohno, Adaptive Gauss-Seidel method for linear systems, Internat. J. Comput. Math. 51 (1994) 119-125 [10

  15. Quantitative phase retrieval with arbitrary pupil and illumination

    DOE PAGES

    Claus, Rene A.; Naulleau, Patrick P.; Neureuther, Andrew R.; ...

    2015-10-02

    We present a general algorithm for combining measurements taken under various illumination and imaging conditions to quantitatively extract the amplitude and phase of an object wave. The algorithm uses the weak object transfer function, which incorporates arbitrary pupil functions and partially coherent illumination. The approach is extended beyond the weak object regime using an iterative algorithm. Finally, we demonstrate the method on measurements of Extreme Ultraviolet Lithography (EUV) multilayer mask defects taken in an EUV zone plate microscope with both a standard zone plate lens and a zone plate implementing Zernike phase contrast.

  16. Clinical application and validation of an iterative forward projection matching algorithm for permanent brachytherapy seed localization from conebeam-CT x-ray projections.

    PubMed

    Pokhrel, Damodar; Murphy, Martin J; Todor, Dorin A; Weiss, Elisabeth; Williamson, Jeffrey F

    2010-09-01

    To experimentally validate a new algorithm for reconstructing the 3D positions of implanted brachytherapy seeds from postoperatively acquired 2D conebeam-CT (CBCT) projection images. The iterative forward projection matching (IFPM) algorithm finds the 3D seed geometry that minimizes the sum of the squared intensity differences between computed projections of an initial estimate of the seed configuration and radiographic projections of the implant. In-house machined phantoms, containing arrays of 12 and 72 seeds, respectively, are used to validate this method. Also, four 103Pd postimplant patients are scanned using an ACUITY digital simulator. Three to ten x-ray images are selected from the CBCT projection set and processed to create binary seed-only images. To quantify IFPM accuracy, the reconstructed seed positions are forward projected and overlaid on the measured seed images to find the nearest-neighbor distance between measured and computed seed positions for each image pair. The estimated 3D seed coordinates are also compared to known seed positions in the phantom and to clinically obtained VariSeed planning coordinates for the patient data. For the phantom study, the seed localization error is (0.58 ± 0.33) mm. For all four patient cases, the mean registration error is better than 1 mm when compared against the measured seed projections. IFPM converges in 20-28 iterations, with a computation time of about 1.9-2.8 min/iteration on a 1 GHz processor. The IFPM algorithm avoids the need to match corresponding seeds in each projection, as required by standard back-projection methods. The authors' results demonstrate approximately 1 mm accuracy in reconstructing the 3D positions of brachytherapy seeds from the measured 2D projections. This algorithm also successfully localizes overlapping, clustered, and highly migrated seeds in the implant.

  17. SU-E-I-04: Improving CT Quality for Radiation Therapy of Patients with High Body Mass Index Using Iterative Reconstruction Algorithms

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Noid, G; Tai, A; Li, X

    2015-06-15

    Purpose: Iterative reconstruction (IR) algorithms are developed to improve CT image quality (IQ) by reducing noise without diminishing spatial resolution or contrast. The CT IQ for patients with a high Body Mass Index (BMI) can suffer from increased noise due to photon starvation. The purpose of this study is to investigate and to quantify the IQ enhancement for high BMI patients through the application of IR algorithms. Methods: CT raw data collected for 6 radiotherapy (RT) patients with BMI greater than or equal to 30 were retrospectively analyzed. All CT data were acquired using a CT scanner (Somatom Definition AS Open, Siemens) installed in a linac room (CT-on-rails) using standard imaging protocols. The CT data were reconstructed using the Sinogram Affirmed Iterative Reconstruction (SAFIRE) and Filtered Back Projection (FBP) methods. IQ metrics of the obtained CTs were compared and correlated with patient depth and BMI. The patient depth was defined as the largest distance from anterior to posterior along the bilateral symmetry axis. Results: IR techniques are demonstrated to preserve contrast and reduce noise in comparison to traditional FBP. Driven by the reduction in noise, the contrast-to-noise ratio is roughly doubled by adopting the highest SAFIRE strength. A significant correlation was observed between patient depth and IR noise reduction through Pearson's correlation test (R = 0.9429, P = 0.0167). The mean patient depth was 30.4 cm and the average relative noise reduction for the strongest iterative reconstruction was 55%. Conclusion: The IR techniques produce a measurable enhancement to CT IQ by reducing the noise. Dramatic noise reduction is evident for the high BMI patients. The improved CT IQ enables more accurate delineation of tumors and organs at risk and more accurate dose calculations for RT planning and delivery guidance. Supported by Siemens.

  18. A numerical algorithm for optimal feedback gains in high dimensional linear quadratic regulator problems

    NASA Technical Reports Server (NTRS)

    Banks, H. T.; Ito, K.

    1991-01-01

    A hybrid method for computing the feedback gains in the linear quadratic regulator problem is proposed. The method, which combines the use of a Chandrasekhar type system with an iteration of the Newton-Kleinman form with variable acceleration parameter Smith schemes, is formulated to efficiently compute the feedback gains directly, rather than solutions of an associated Riccati equation. The hybrid method is particularly appropriate when used with large dimensional systems such as those arising in approximating infinite-dimensional (distributed parameter) control systems (e.g., those governed by delay-differential and partial differential equations). Computational advantages of the proposed algorithm over the standard eigenvector (Potter, Laub-Schur) based techniques are discussed, and numerical evidence of the efficacy of these ideas is presented.
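
    To make the Newton-Kleinman component concrete, the sketch below iterates the classical gain refinement, solving one Lyapunov equation per step with scipy. It is a minimal sketch of the plain Newton-Kleinman iteration, assuming a stabilizing initial gain K0; it omits the Chandrasekhar system and the variable-acceleration Smith schemes of the hybrid method.

        import numpy as np
        from scipy.linalg import solve_continuous_lyapunov

        def newton_kleinman(A, B, Q, R, K0, n_iter=20, tol=1e-9):
            # Classical Newton-Kleinman refinement of an LQR gain: each
            # step solves one Lyapunov equation for the current closed
            # loop. K0 must stabilize A - B @ K0.
            K = K0.copy()
            Rinv = np.linalg.inv(R)
            for _ in range(n_iter):
                Ac = A - B @ K
                # solves Ac' P + P Ac = -(Q + K' R K)
                P = solve_continuous_lyapunov(Ac.T, -(Q + K.T @ R @ K))
                K_new = Rinv @ B.T @ P
                if np.linalg.norm(K_new - K) < tol:
                    return K_new
                K = K_new
            return K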

  19. A numerical algorithm for optimal feedback gains in high dimensional LQR problems

    NASA Technical Reports Server (NTRS)

    Banks, H. T.; Ito, K.

    1986-01-01

    A hybrid method for computing the feedback gains in linear quadratic regulator problems is proposed. The method, which combines the use of a Chandrasekhar type system with an iteration of the Newton-Kleinman form with variable acceleration parameter Smith schemes, is formulated so as to efficiently compute the feedback gains directly, rather than solutions of an associated Riccati equation. The hybrid method is particularly appropriate when used with large dimensional systems such as those arising in approximating infinite dimensional (distributed parameter) control systems (e.g., those governed by delay-differential and partial differential equations). Computational advantages of the proposed algorithm over the standard eigenvector (Potter, Laub-Schur) based techniques are discussed, and numerical evidence of the efficacy of these ideas is presented.

  20. Fast inverse scattering solutions using the distorted Born iterative method and the multilevel fast multipole algorithm

    PubMed Central

    Hesford, Andrew J.; Chew, Weng C.

    2010-01-01

    The distorted Born iterative method (DBIM) computes iterative solutions to nonlinear inverse scattering problems through successive linear approximations. By decomposing the scattered field into a superposition of scattering by an inhomogeneous background and by a material perturbation, large or high-contrast variations in medium properties can be imaged through iterations that are each subject to the distorted Born approximation. However, the need to repeatedly compute forward solutions still imposes a very heavy computational burden. To ameliorate this problem, the multilevel fast multipole algorithm (MLFMA) has been applied as a forward solver within the DBIM. The MLFMA computes forward solutions in linear time for volumetric scatterers. The typically regular distribution and shape of scattering elements in the inverse scattering problem allow the method to take advantage of data redundancy and reduce the computational demands of the normally expensive MLFMA setup. Additional benefits are gained by employing Kaczmarz-like iterations, where partial measurements are used to accelerate convergence. Numerical results demonstrate both the efficiency of the forward solver and the successful application of the inverse method to imaging problems with dimensions in the neighborhood of ten wavelengths. PMID:20707438

  1. MR Image Reconstruction Using Block Matching and Adaptive Kernel Methods.

    PubMed

    Schmidt, Johannes F M; Santelli, Claudio; Kozerke, Sebastian

    2016-01-01

    An approach to Magnetic Resonance (MR) image reconstruction from undersampled data is proposed. Undersampling artifacts are removed using an iterative thresholding algorithm applied to nonlinearly transformed image block arrays. Each block array is transformed using kernel principal component analysis, where the contribution of each image block to the transform depends in a nonlinear fashion on the distance to other image blocks. Elimination of undersampling artifacts is achieved by conventional principal component analysis in the nonlinear transform domain, projection onto the main components and back-mapping into the image domain. Iterative image reconstruction is performed by interleaving the proposed undersampling artifact removal step with gradient updates enforcing consistency with the acquired k-space data. The algorithm is evaluated using retrospectively undersampled MR cardiac cine data and compared to k-t SPARSE-SENSE, block matching with spatial Fourier filtering and k-t ℓ1-SPIRiT reconstruction. Evaluation of image quality and root-mean-squared error (RMSE) reveals improved image reconstruction for up to 8-fold undersampled data with the proposed approach relative to k-t SPARSE-SENSE, block matching with spatial Fourier filtering and k-t ℓ1-SPIRiT. In conclusion, block matching and kernel methods can be used for effective removal of undersampling artifacts in MR image reconstruction and outperform methods using standard compressed sensing and ℓ1-regularized parallel imaging.

  2. Sometimes "Newton's Method" Always "Cycles"

    ERIC Educational Resources Information Center

    Latulippe, Joe; Switkes, Jennifer

    2012-01-01

    Are there functions for which Newton's method cycles for all non-trivial initial guesses? We construct and solve a differential equation whose solution is a real-valued function that two-cycles under Newton iteration. Higher-order cycles of Newton's method iterates are explored in the complex plane using complex powers of "x." We find a class of…
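
    A concrete instance of such cycling: for f(x) = x^3 - 2x + 2, the Newton iterates started at x0 = 0 alternate between 0 and 1 forever. This is a standard textbook example, not necessarily the function constructed in the article; the snippet below just demonstrates the 2-cycle.

        # Newton's method two-cycles on f(x) = x^3 - 2x + 2 from x0 = 0:
        # f(0) = 2, f'(0) = -2  ->  x1 = 1
        # f(1) = 1, f'(1) =  1  ->  x2 = 0, and the pattern repeats.
        f  = lambda x: x**3 - 2*x + 2
        df = lambda x: 3*x**2 - 2

        x = 0.0
        for i in range(6):
            x = x - f(x) / df(x)   # one Newton step
            print(i, x)            # prints 1.0, 0.0, 1.0, 0.0, ...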

  3. Anderson acceleration of the Jacobi iterative method: An efficient alternative to Krylov methods for large, sparse linear systems

    DOE PAGES

    Pratapa, Phanisri P.; Suryanarayana, Phanish; Pask, John E.

    2015-12-01

    We employ Anderson extrapolation to accelerate the classical Jacobi iterative method for large, sparse linear systems. Specifically, we utilize extrapolation at periodic intervals within the Jacobi iteration to develop the Alternating Anderson–Jacobi (AAJ) method. We verify the accuracy and efficacy of AAJ in a range of test cases, including nonsymmetric systems of equations. We demonstrate that AAJ possesses a favorable scaling with system size that is accompanied by a small prefactor, even in the absence of a preconditioner. In particular, we show that AAJ is able to accelerate the classical Jacobi iteration by over four orders of magnitude, with speed-ups that increase as the system gets larger. Moreover, we find that AAJ significantly outperforms the Generalized Minimal Residual (GMRES) method in the range of problems considered here, with the relative performance again improving with size of the system. As a result, the proposed method represents a simple yet efficient technique that is particularly attractive for large-scale parallel solutions of linear systems of equations.
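
    A compact sketch of the alternating scheme is given below: weighted-Jacobi sweeps, interrupted every p-th iteration by an Anderson extrapolation over the recent residual history. The parameter values (omega, m, p) are illustrative assumptions, not the tuned values from the paper.

        import numpy as np

        def aaj(A, b, x0, omega=0.5, m=5, p=4, n_iter=500, tol=1e-8):
            # Alternating Anderson-Jacobi on A x = b.
            Dinv = 1.0 / np.diag(A)
            x = x0.copy()
            X, F = [], []                      # iterate/residual history
            for k in range(1, n_iter + 1):
                f = Dinv * (b - A @ x)         # preconditioned residual
                X.append(x.copy()); F.append(f.copy())
                X, F = X[-(m + 1):], F[-(m + 1):]
                if k % p == 0 and len(F) > 1:
                    # Anderson step: least-squares mix of recent residuals
                    dX = np.array([X[i+1]-X[i] for i in range(len(X)-1)]).T
                    dF = np.array([F[i+1]-F[i] for i in range(len(F)-1)]).T
                    gamma, *_ = np.linalg.lstsq(dF, f, rcond=None)
                    x = x + f - (dX + dF) @ gamma
                else:
                    x = x + omega * f          # weighted Jacobi sweep
                if np.linalg.norm(b - A @ x) <= tol * np.linalg.norm(b):
                    break
            return x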

  5. Gauss-Seidel Iterative Method as a Real-Time Pile-Up Solver of Scintillation Pulses

    NASA Astrophysics Data System (ADS)

    Novak, Roman; Vencelj, Matjaž

    2009-12-01

    Pile-up rejection in nuclear spectroscopy has recently been confronted by several pile-up correction schemes that compensate for distortions of the signal and the resulting energy-spectrum artifacts as the counting rate increases. We study here the real-time capability of the event-by-event correction method, which at its core translates to solving many sets of linear equations. Tight time limits and constrained front-end electronics resources make well-known direct solvers inappropriate. We propose a novel approach based on the Gauss-Seidel iterative method, which turns out to be a stable and cost-efficient solution to improve spectroscopic resolution in the front-end electronics. We show the method's convergence properties for a class of matrices that emerge in calorimetric processing of scintillation detector signals and demonstrate the ability of the method to support the relevant resolutions. The sole iteration-based error component can be brought below the sliding-window-induced errors in a reasonable number of iteration steps, thus allowing real-time operation. An area-efficient hardware implementation is proposed that fully utilizes the method's inherent parallelism.
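
    For reference, the basic Gauss-Seidel sweep at the heart of such a solver is shown below in a plain numpy sketch (not the hardware implementation): each component update immediately uses the newest available values, which is what keeps the iteration count, and hence the real-time budget, small for well-conditioned systems.

        import numpy as np

        def gauss_seidel(A, b, x0, n_iter=100, tol=1e-10):
            # One Gauss-Seidel iteration sweeps i = 0..n-1, using the
            # already-updated entries x[:i] and the previous-iteration
            # entries x_old[i+1:].
            x = x0.copy()
            for _ in range(n_iter):
                x_old = x.copy()
                for i in range(len(b)):
                    s = A[i, :i] @ x[:i] + A[i, i+1:] @ x_old[i+1:]
                    x[i] = (b[i] - s) / A[i, i]
                if np.linalg.norm(x - x_old) < tol:
                    break
            return x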

  6. Iterative optimization method for design of quantitative magnetization transfer imaging experiments.

    PubMed

    Levesque, Ives R; Sled, John G; Pike, G Bruce

    2011-09-01

    Quantitative magnetization transfer imaging (QMTI) using spoiled gradient echo sequences with pulsed off-resonance saturation can be a time-consuming technique. A method is presented for selection of an optimum experimental design for quantitative magnetization transfer imaging based on the iterative reduction of a discrete sampling of the Z-spectrum. The applicability of the technique is demonstrated for human brain white matter imaging at 1.5 T and 3 T, and optimal designs are produced to target specific model parameters. The optimal number of measurements and the signal-to-noise ratio required for stable parameter estimation are also investigated. In vivo imaging results demonstrate that this optimal design approach substantially improves parameter map quality. The iterative method presented here provides an advantage over free form optimal design methods, in that pragmatic design constraints are readily incorporated. In particular, the presented method avoids clustering and repeated measures in the final experimental design, an attractive feature for the purpose of magnetization transfer model validation. The iterative optimal design technique is general and can be applied to any method of quantitative magnetization transfer imaging.

  7. Multiple zeros of polynomials

    NASA Technical Reports Server (NTRS)

    Wood, C. A.

    1974-01-01

    For polynomials of higher degree, iterative numerical methods must be used. Four iterative methods for approximating the zeros of a polynomial on a digital computer are presented. Newton's method and Muller's method are two well-known iterative methods presented here. They extract the zeros of a polynomial by generating a sequence of approximations converging to each zero. However, both of these methods are very unstable when used on a polynomial that has multiple zeros: either they fail to converge to some or all of the zeros, or they converge to very poor approximations of them. This material introduces two new methods, the greatest common divisor (G.C.D.) method and the repeated greatest common divisor (repeated G.C.D.) method, which are superior for numerically approximating the zeros of a polynomial having multiple zeros. These methods were programmed in FORTRAN IV, and comparisons in time and accuracy are given.
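
    The idea behind the G.C.D. method can be illustrated with a computer algebra system: gcd(p, p') collects exactly the factors of p with multiplicity greater than one, so dividing it out leaves a square-free polynomial whose simple zeros are well conditioned for Newton-type iteration. A sympy sketch of this idea (not the report's FORTRAN code):

        import sympy as sp

        x = sp.symbols('x')
        p = (x - 1)**3 * (x + 2)**2 * (x - 5)   # known multiple zeros

        # gcd(p, p') carries every repeated factor; the quotient is the
        # square-free part of p, with each zero appearing exactly once.
        g = sp.gcd(p, sp.diff(p, x))
        q = sp.quo(sp.expand(p), sp.expand(g))
        print(sp.solve(q, x))                   # [-2, 1, 5]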

  8. Full dose reduction potential of statistical iterative reconstruction for head CT protocols in a predominantly pediatric population

    PubMed Central

    Mirro, Amy E.; Brady, Samuel L.; Kaufman, Robert A.

    2016-01-01

    Purpose: To implement the maximum level of statistical iterative reconstruction that can be used to establish dose-reduced head CT protocols in a primarily pediatric population. Methods: Select head examinations (brain, orbits, sinus, maxilla and temporal bones) were investigated. Dose-reduced head protocols using adaptive statistical iterative reconstruction (ASiR) were compared for image quality with the original filtered back projection (FBP) reconstructed protocols in phantom using the following metrics: image noise frequency (change in perceived appearance of noise texture), image noise magnitude, contrast-to-noise ratio (CNR), and spatial resolution. Dose reduction estimates were based on computed tomography dose index (CTDIvol) values. Patient CTDIvol and image noise magnitude were assessed in 737 pre- and post-dose-reduction examinations. Results: Image noise texture was acceptable up to 60% ASiR for the Soft reconstruction kernel (at both 100 and 120 kVp), and up to 40% ASiR for the Standard reconstruction kernel. Implementation of 40% and 60% ASiR led to an average reduction in CTDIvol of 43% for brain, 41% for orbits, 30% for maxilla, 43% for sinus, and 42% for temporal bone protocols for patients between 1 month and 26 years, while maintaining an average noise magnitude difference of 0.1% (range: −3% to 5%), improving the CNR of low contrast soft tissue targets, and improving the spatial resolution of high contrast bony anatomy, as compared to FBP. Conclusion: This study demonstrates a methodology for maximizing patient dose reduction while maintaining image quality using statistical iterative reconstruction for a primarily pediatric population undergoing head CT examination. PMID:27056425

  9. Finding the Optimal Guidance for Enhancing Anchored Instruction

    ERIC Educational Resources Information Center

    Zydney, Janet Mannheimer; Bathke, Arne; Hasselbring, Ted S.

    2014-01-01

    This study investigated the effect of different methods of guidance with anchored instruction on students' mathematical problem-solving performance. The purpose of this research was to iteratively design a learning environment to find the optimal level of guidance. Two iterations of the software were compared. The first iteration used explicit…

  10. Numerical Grid Generation and Potential Airfoil Analysis and Design

    DTIC Science & Technology

    1988-01-01

    Gauss-Seidel, SOR and ADI iterative methods. JACOBI METHOD: In the Jacobi method each new value of a function is computed entirely from old values...preceding iteration and adding the inhomogeneous (boundary condition) term. GAUSS-SEIDEL METHOD: When we compute in a Jacobi method, we have already...Gauss-Seidel method. A sufficient condition for convergence of the Gauss-Seidel method is diagonal dominance of [A]. SUCCESSIVE OVER-RELAXATION (SOR
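
    The three point-iteration schemes named in this fragment differ only in how aggressively they use the newly computed values; successive over-relaxation extrapolates the Gauss-Seidel update by a factor omega. A minimal numpy sketch with an illustrative omega follows; it is a generic solver, not the report's grid-generation code.

        import numpy as np

        def sor(A, b, x0, omega=1.5, n_iter=200, tol=1e-10):
            # Successive over-relaxation: blend the Gauss-Seidel update
            # with the previous value; omega = 1 recovers Gauss-Seidel,
            # and a well-chosen omega in (1, 2) converges much faster.
            x = x0.copy()
            for _ in range(n_iter):
                x_old = x.copy()
                for i in range(len(b)):
                    s = A[i, :i] @ x[:i] + A[i, i+1:] @ x_old[i+1:]
                    gs = (b[i] - s) / A[i, i]
                    x[i] = (1 - omega) * x_old[i] + omega * gs
                if np.linalg.norm(x - x_old) < tol:
                    break
            return x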

  11. A dynamic programming-based particle swarm optimization algorithm for an inventory management problem under uncertainty

    NASA Astrophysics Data System (ADS)

    Xu, Jiuping; Zeng, Ziqiang; Han, Bernard; Lei, Xiao

    2013-07-01

    This article presents a dynamic programming-based particle swarm optimization (DP-based PSO) algorithm for solving an inventory management problem for large-scale construction projects under a fuzzy random environment. By taking into account purchasing behaviour and strategy under the rules of international bidding, a multi-objective fuzzy random dynamic programming model is constructed. To deal with the uncertainties, a hybrid crisp approach is used to transform fuzzy random parameters into fuzzy variables that are subsequently defuzzified by using an expected value operator with an optimistic-pessimistic index. The iterative nature of the authors' model motivates them to develop a DP-based PSO algorithm. More specifically, their approach treats the state variables as hidden parameters. This in turn eliminates many redundant feasibility checks during initialization and particle updates at each iteration. Results and a sensitivity analysis are presented to highlight the performance of the authors' optimization method, which is highly effective compared to the standard PSO algorithm.
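
    The PSO half of the hybrid is the standard velocity/position update: each particle is pulled toward its own best point and toward the swarm's best point. A generic sketch of that baseline follows (the paper's DP state recursion and fuzzy random machinery are not reproduced; all parameter values are illustrative defaults):

        import numpy as np

        def pso_minimize(f, bounds, n_particles=30, iters=200,
                         w=0.7, c1=1.5, c2=1.5, seed=0):
            rng = np.random.default_rng(seed)
            lo, hi = np.array(bounds, float).T
            x = rng.uniform(lo, hi, (n_particles, len(lo)))
            v = np.zeros_like(x)
            pbest = x.copy()
            pbest_val = np.apply_along_axis(f, 1, x)
            g = pbest[pbest_val.argmin()]               # swarm-best position
            for _ in range(iters):
                r1, r2 = rng.random(x.shape), rng.random(x.shape)
                v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
                x = np.clip(x + v, lo, hi)              # keep particles feasible
                val = np.apply_along_axis(f, 1, x)
                better = val < pbest_val
                pbest[better], pbest_val[better] = x[better], val[better]
                g = pbest[pbest_val.argmin()]
            return g, pbest_val.min()

        rosen = lambda p: (1 - p[0])**2 + 100 * (p[1] - p[0]**2)**2
        print(pso_minimize(rosen, [(-2, 2), (-2, 2)]))  # approaches (1, 1)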

  12. 3D and 4D magnetic susceptibility tomography based on complex MR images

    DOEpatents

    Chen, Zikuan; Calhoun, Vince D

    2014-11-11

    Magnetic susceptibility is the physical property for T2*-weighted magnetic resonance imaging (T2*MRI). The invention relates to methods for reconstructing an internal distribution (3D map) of magnetic susceptibility values, χ(x,y,z), of an object, from 3D T2*MRI phase images, by using Computed Inverse Magnetic Resonance Imaging (CIMRI) tomography. The CIMRI technique solves the inverse problem of the 3D convolution by executing a 3D Total Variation (TV) regularized iterative convolution scheme, using a split Bregman iteration algorithm. The reconstruction of χ(x,y,z) can be designed for low-pass, band-pass, and high-pass features by using a convolution kernel that is modified from the standard dipole kernel. Multiple reconstructions can be implemented in parallel, and averaging the reconstructions can suppress noise. 4D dynamic magnetic susceptibility tomography can be implemented by reconstructing a 3D susceptibility volume from a 3D phase volume by performing 3D CIMRI magnetic susceptibility tomography at each snapshot time.

  13. Influence of Primary Gage Sensitivities on the Convergence of Balance Load Iterations

    NASA Technical Reports Server (NTRS)

    Ulbrich, Norbert Manfred

    2012-01-01

    The connection between the convergence of wind tunnel balance load iterations and the existence of the primary gage sensitivities of a balance is discussed. First, basic elements of two load iteration equations that the iterative method uses, in combination with results of a calibration data analysis, for the prediction of balance loads are reviewed. Then, the connection between the primary gage sensitivities, the load format, the gage output format, and the convergence characteristics of the load iteration equation choices is investigated. A new criterion is also introduced that may be used to objectively determine whether the primary gage sensitivity of a balance gage exists. It is then shown that both load iteration equations will converge as long as a suitable regression model is used for the analysis of the balance calibration data, the combined influence of nonlinear terms of the regression model is very small, and the primary gage sensitivities of all balance gages exist. The last requirement is fulfilled, e.g., if force balance calibration data is analyzed in force balance format. Finally, it is demonstrated that only one of the two load iteration equation choices, i.e., the iteration equation used by the primary load iteration method, converges if one or more primary gage sensitivities are missing. This situation may occur, e.g., if force balance calibration data is analyzed in direct-read format using the original gage outputs. Data from the calibration of a six-component force balance is used to illustrate the connection between the convergence of the load iteration equation choices and the existence of the primary gage sensitivities.

  14. Block Gauss elimination followed by a classical iterative method for the solution of linear systems

    NASA Astrophysics Data System (ADS)

    Alanelli, Maria; Hadjidimos, Apostolos

    2004-02-01

    In the last two decades many papers have appeared in which the application of an iterative method for the solution of a linear system is preceded by a step of the Gauss elimination process, in the hope that this will increase the rate of convergence of the iterative method. This combination of methods has proven successful, especially when the matrix A of the system is an M-matrix. The purpose of this paper is to extend the idea from one to several Gauss elimination steps, to consider other classes of matrices A, e.g., p-cyclic consistently ordered ones, and to generalize and improve the asymptotic convergence rates of some of the methods known so far.

  15. Application of an improved minimum entropy deconvolution method for railway rolling element bearing fault diagnosis

    NASA Astrophysics Data System (ADS)

    Cheng, Yao; Zhou, Ning; Zhang, Weihua; Wang, Zhiwei

    2018-07-01

    Minimum entropy deconvolution is a widely used tool in machinery fault diagnosis because it enhances the impulse component of the signal. The filter coefficients that largely determine the performance of minimum entropy deconvolution are calculated by an iterative procedure. This paper proposes an improved deconvolution method for the fault detection of rolling element bearings. The proposed method solves for the filter coefficients with the standard particle swarm optimization algorithm, assisted by a generalized spherical coordinate transformation. When optimizing the filter's performance for enhancing the impulses in fault diagnosis (namely, faulty rolling element bearings), the proposed method outperformed the classical minimum entropy deconvolution method. The proposed method was validated on simulated and experimental signals from railway bearings. In both the simulation and experimental studies, the proposed method delivered better deconvolution performance than the classical minimum entropy deconvolution method, especially in the case of low signal-to-noise ratio.

  16. [Study of building quantitative analysis model for chlorophyll in winter wheat with reflective spectrum using MSC-ANN algorithm].

    PubMed

    Liang, Xue; Ji, Hai-yan; Wang, Peng-xin; Rao, Zhen-hong; Shen, Bing-hui

    2010-01-01

    The preprocessing method of multiplicative scatter correction (MSC) was used to effectively reject noise in the original spectra produced by environmental physical factors; the principal components of the near-infrared spectra were then calculated by nonlinear iterative partial least squares (NIPALS) before building the back propagation artificial neural network (BP-ANN) model, with the number of principal components chosen by cross validation. The calculated principal components were used as the inputs of the artificial neural network model, and the model was used to relate the chlorophyll content of winter wheat to the reflective spectrum, enabling prediction of the chlorophyll content. The correlation coefficient (r) of the calibration set was 0.9604, while the standard deviation (SD) and relative standard deviation (RSD) were 0.187 and 5.18%, respectively. The correlation coefficient (r) of the prediction set was 0.9600, and the standard deviation (SD) and relative standard deviation (RSD) were 0.145 and 4.21%, respectively. This means that the MSC-ANN algorithm can effectively reject noise in the original spectra produced by environmental physical factors and can establish an accurate model to predict the chlorophyll content of living leaves, replacing the classical method and meeting the needs of fast analysis of agricultural products.
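
    NIPALS computes principal components one at a time by alternating score and loading updates and then deflating the data matrix, which is why it suits spectra with many wavelength channels. A minimal sketch under the usual assumptions (mean-centered data, components extracted in order of variance; function name and defaults are hypothetical):

        import numpy as np

        def nipals(X, n_components, tol=1e-10, max_iter=500):
            X = X - X.mean(axis=0)                 # center each wavelength channel
            scores, loadings = [], []
            for _ in range(n_components):
                t = X[:, 0].copy()                 # any nonzero column seeds the iteration
                for _ in range(max_iter):
                    p = X.T @ t / (t @ t)          # loading for the current scores
                    p /= np.linalg.norm(p)
                    t_new = X @ p                  # scores for the current loading
                    if np.linalg.norm(t_new - t) < tol:
                        t = t_new
                        break
                    t = t_new
                X = X - np.outer(t, p)             # deflate: remove the found component
                scores.append(t)
                loadings.append(p)
            return np.array(scores).T, np.array(loadings).T

    The returned score matrix would be the input to the BP-ANN, with the number of retained components chosen by cross validation as in the record.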

  17. Modified conjugate gradient method for diagonalizing large matrices.

    PubMed

    Jie, Quanlin; Liu, Dunhuan

    2003-11-01

    We present an iterative method to diagonalize large matrices. The basic idea is the same as the conjugate gradient (CG) method, i.e., minimizing the Rayleigh quotient via its gradient and avoiding reintroducing errors along the directions of previous gradients. Each iteration step finds the lowest eigenvector of the matrix in a subspace spanned by the current trial vector, the corresponding gradient of the Rayleigh quotient, and some previous trial vectors. The gradient, together with the previous trial vectors, plays a role similar to that of the conjugate gradient in the original CG algorithm. Our numerical tests indicate that this method converges significantly faster than the original CG method, while the computational cost of one iteration step is about the same. It is suitable for first-principles calculations.
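
    A dense-matrix sketch of that subspace iteration follows. The point of the real method is that A is only ever applied to vectors, never stored, but the structure is the same: minimize the Rayleigh quotient over span{current vector, its gradient, a few previous trial vectors}. Names and defaults here are assumptions for illustration:

        import numpy as np

        def lowest_eigenpair(A, n_prev=2, tol=1e-10, max_iter=500, seed=0):
            n = A.shape[0]
            x = np.random.default_rng(seed).standard_normal(n)
            x /= np.linalg.norm(x)
            history = []
            rho = x @ A @ x                            # Rayleigh quotient (x is unit length)
            for _ in range(max_iter):
                g = 2 * (A @ x - rho * x)              # gradient of the Rayleigh quotient
                if np.linalg.norm(g) < tol:
                    break
                Q, _ = np.linalg.qr(np.column_stack([x, g] + history))
                H = Q.T @ A @ Q                        # A projected into the small subspace
                w, V = np.linalg.eigh(H)
                history = ([x] + history)[:n_prev]     # retain a few previous trial vectors
                x = Q @ V[:, 0]                        # lowest eigenvector of the projection
                x /= np.linalg.norm(x)
                rho = x @ A @ x
            return rho, x

        A = np.diag(np.arange(1.0, 101.0))             # toy spectrum 1..100
        print(lowest_eigenpair(A)[0])                  # -> 1.0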

  18. WE-G-207-07: Iterative CT Shading Correction Method with No Prior Information

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wu, P; Mao, T; Niu, T

    2015-06-15

    Purpose: Shading artifacts are caused by scatter contamination, beam hardening effects and other non-ideal imaging conditions. Our purpose is to propose a novel and general correction framework to eliminate low-frequency shading artifacts in CT imaging (e.g., cone-beam CT, low-kVp CT) without relying on prior information. Methods: Our method applies the general knowledge that the CT number distribution within one tissue component is relatively uniform. Image segmentation is applied to construct a template image in which each structure is filled with the uniform CT number of that specific tissue. By subtracting the ideal template from the CT image, the residuals from various error sources are generated. Since forward projection is an integration process, the non-continuous low-frequency shading artifacts in the image become continuous low-frequency signals in the line integral. The residual image is thus forward projected and its line integral is filtered using a Savitzky-Golay filter to estimate the error. A compensation map is reconstructed from the error using the standard FDK algorithm and added to the original image to obtain the shading-corrected one. Since segmentation is not accurate on a shaded CT image, the proposed scheme is iterated until the variation of the residual image is minimized. Results: The proposed method is evaluated on a Catphan600 phantom, a pelvic patient and a CT angiography scan for carotid artery assessment. Compared to the uncorrected images, our method reduces the overall CT number error from >200 HU to <35 HU and increases the spatial uniformity by a factor of 1.4. Conclusion: We propose an effective iterative algorithm for shading correction in CT imaging. Unlike existing algorithms, our method is assisted only by general anatomical and physical information in CT imaging, without relying on prior knowledge. Our method is thus practical and attractive as a general solution to CT shading correction. This work is supported by the National Science Foundation of China (NSFC Grant No. 81201091), National High Technology Research and Development Program of China (863 program, Grant No. 2015AA020917), and the Fund Project for Excellent Abroad Scholar Personnel in Science and Technology.

  19. Adaptive dynamic programming for discrete-time linear quadratic regulation based on multirate generalised policy iteration

    NASA Astrophysics Data System (ADS)

    Chun, Tae Yoon; Lee, Jae Young; Park, Jin Bae; Choi, Yoon Ho

    2018-06-01

    In this paper, we propose two multirate generalised policy iteration (GPI) algorithms applied to discrete-time linear quadratic regulation problems. The proposed algorithms are extensions of the existing GPI algorithm, which consists of approximate policy evaluation and policy improvement steps. The two proposed schemes, named heuristic dynamic programming (HDP) and dual HDP (DHP), based on multirate GPI, use multi-step estimation (an M-step Bellman equation) at the approximate policy evaluation step to estimate the value function and its gradient, called the costate, respectively. We then show that these two methods with the same update horizon can be considered equivalent in the iteration domain. Furthermore, monotonically increasing and decreasing convergences, the so-called value iteration (VI)-mode and policy iteration (PI)-mode convergences, are proved to hold for the proposed multirate GPIs. General convergence properties in terms of eigenvalues are also studied. The data-driven online implementation methods for the proposed HDP and DHP are demonstrated, and finally we present the results of numerical simulations performed to verify the effectiveness of the proposed methods.
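
    When the model is known, the GPI family collapses to classical policy iteration for LQR: evaluate the current gain by solving a Lyapunov equation, then improve it greedily. A minimal model-based sketch of that baseline (the paper's HDP/DHP variants are data-driven and use multi-step evaluation instead of the exact solve; the function name is hypothetical):

        import numpy as np
        from scipy.linalg import solve_discrete_lyapunov

        def policy_iteration_lqr(A, B, Q, R, K0, iters=30):
            # K0 must stabilize (A - B K0); each pass does one full policy
            # evaluation followed by one policy improvement
            K = K0
            for _ in range(iters):
                Ac = A - B @ K
                # evaluation: P solves P = Ac' P Ac + (Q + K' R K)
                P = solve_discrete_lyapunov(Ac.T, Q + K.T @ R @ K)
                # improvement: greedy gain for the value function P
                K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
            return K, P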

  20. Efficient iterative methods applied to the solution of transonic flows

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wissink, A.M.; Lyrintzis, A.S.; Chronopoulos, A.T.

    1996-02-01

    We investigate the use of an inexact Newton's method to solve the potential equations in the transonic regime. As a test case, we solve the two-dimensional steady transonic small disturbance equation. Approximate factorization/ADI techniques have traditionally been employed for implicit solutions of this nonlinear equation. Instead, we apply Newton's method using an exact analytical determination of the Jacobian, with preconditioned conjugate gradient-like iterative solvers for the solution of the linear systems in each Newton iteration. Two iterative solvers are tested: a block s-step version of the classical Orthomin(k) algorithm called orthogonal s-step Orthomin (OSOmin), and the well-known GMRES method. The preconditioner is a vectorizable and parallelizable version of incomplete LU (ILU) factorization. The efficiency of the Newton-iterative method on vector and parallel computer architectures is the main issue addressed. In vectorized tests on a single processor of the Cray C-90, the performance of Newton-OSOmin is superior to Newton-GMRES and to a more traditional monotone AF/ADI method (MAF) for a variety of transonic Mach numbers and mesh sizes. Newton-GMRES is superior to MAF for some cases. The parallel performance of the Newton method is also found to be very good on multiple processors of the Cray C-90 and on the massively parallel Thinking Machines CM-5, where very fast execution rates (up to 9 Gflops) are found for large problems. 38 refs., 14 figs., 7 tabs.

  1. Sparse magnetic resonance imaging reconstruction using the bregman iteration

    NASA Astrophysics Data System (ADS)

    Lee, Dong-Hoon; Hong, Cheol-Pyo; Lee, Man-Woo

    2013-01-01

    Magnetic resonance imaging (MRI) reconstruction needs many samples that are sequentially acquired using phase encoding gradients in an MRI system. This is directly connected to the scan time of the MRI system, which can be long. Therefore, many researchers have studied ways to reduce the scan time, especially compressed sensing (CS), which is used for sparse images and enables reconstruction from fewer samples when the k-space is not fully sampled. Recently, an iterative technique based on the Bregman method was developed for denoising. The Bregman iteration method improves on total variation (TV) regularization by gradually recovering the fine-scale structures that are usually lost in TV regularization. In this study, we studied sparse sampling image reconstruction using the Bregman iteration for a low-field MRI system to improve its temporal resolution and to validate its usefulness. Images were obtained with a 0.32 T MRI scanner (Magfinder II, SCIMEDIX, Korea) of a phantom and an in-vivo human brain in a head coil. We applied random k-space sampling, and we determined the sampling ratios by using half the fully sampled k-space. The Bregman iteration was used to generate the final images based on the reduced data. We also calculated root-mean-square error (RMSE) values from error images obtained using various numbers of Bregman iterations. Our reconstructed images using the Bregman iteration for sparse sampling showed good results compared with the original images. Moreover, the RMSE values showed that the sparsely reconstructed phantom and human images converged to the original images. We confirmed the feasibility of sparse sampling image reconstruction using the Bregman iteration with a low-field MRI system and obtained good results. Although our results used half the sampling ratio, this method will be helpful in increasing the temporal resolution of low-field MRI systems.

  2. Fourier analysis of the SOR iteration

    NASA Technical Reports Server (NTRS)

    Leveque, R. J.; Trefethen, L. N.

    1986-01-01

    The SOR iteration for solving linear systems of equations depends upon an overrelaxation factor omega. It is shown that for the standard model problem of Poisson's equation on a rectangle, the optimal omega and corresponding convergence rate can be rigorously obtained by Fourier analysis. The trick is to tilt the space-time grid so that the SOR stencil becomes symmetrical. The tilted grid also gives insight into the relation between convergence rates of several variants.
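
    For this model problem the optimal factor is known in closed form, omega* = 2 / (1 + sin(pi h)) on an n x n interior grid with spacing h = 1/(n+1). A small sketch of SOR sweeps with that choice (illustrative only, not the paper's Fourier analysis):

        import numpy as np

        def sor_poisson(f, omega, iters):
            # SOR sweeps for the 5-point Poisson stencil, -Lap(u) = f, with
            # homogeneous Dirichlet boundary values on the unit square
            n = f.shape[0]
            h = 1.0 / (n + 1)
            u = np.zeros((n + 2, n + 2))          # interior plus boundary ring
            for _ in range(iters):
                for i in range(1, n + 1):
                    for j in range(1, n + 1):
                        gs = 0.25 * (u[i-1, j] + u[i+1, j] + u[i, j-1] + u[i, j+1]
                                     + h * h * f[i-1, j-1])
                        u[i, j] += omega * (gs - u[i, j])   # over-relax the GS update
            return u[1:-1, 1:-1]

        n = 31
        omega_opt = 2.0 / (1.0 + np.sin(np.pi / (n + 1)))   # ~1.82 for this grid
        u = sor_poisson(np.ones((n, n)), omega_opt, iters=100)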

  3. A tutorial description of an interior point method and its applications to security-constrained economic dispatch

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vargas, L.S.; Quintana, V.H.; Vannelli, A.

    This paper deals with the use of Successive Linear Programming (SLP) for the solution of the Security-Constrained Economic Dispatch (SCED) problem. The authors give a tutorial description of an Interior Point Method (IPM) for the solution of Linear Programming (LP) problems, discussing important implementation issues that make this method far superior to the simplex method. A study of the convergence of the SLP technique and a practical criterion to avoid oscillatory behavior in the iteration process are also proposed. A comparison of the proposed method with an efficient simplex code (MINOS) is carried out by solving SCED problems on two standard IEEE systems. The results show that the interior point technique is reliable, accurate and more than two times faster than the simplex algorithm.

  4. Eigenmode computation of cavities with perturbed geometry using matrix perturbation methods applied on generalized eigenvalue problems

    NASA Astrophysics Data System (ADS)

    Gorgizadeh, Shahnam; Flisgen, Thomas; van Rienen, Ursula

    2018-07-01

    Generalized eigenvalue problems are standard problems in computational science. In electromagnetics they may arise from the discretization of the Helmholtz equation by, for example, the finite element method (FEM). Geometrical perturbations of the structure under concern lead to new generalized eigenvalue problems with different system matrices. Such perturbations may arise from manufacturing tolerances, harsh operating conditions or shape optimization. Directly solving the eigenvalue problem for each perturbation is computationally costly. The perturbed eigenpairs can instead be approximated using eigenpair derivatives. Two common approaches for the calculation of eigenpair derivatives, namely the modal superposition method and direct algebraic methods, are discussed in this paper. Based on the direct algebraic methods, an iterative algorithm is developed for efficiently calculating the eigenvalues and eigenvectors of the perturbed geometry from those of the unperturbed geometry.

  5. astroABC : An Approximate Bayesian Computation Sequential Monte Carlo sampler for cosmological parameter estimation

    NASA Astrophysics Data System (ADS)

    Jennings, E.; Madigan, M.

    2017-04-01

    Given the complexity of modern cosmological parameter inference, where we are faced with non-Gaussian data and noise, correlated systematics and multi-probe correlated datasets, the Approximate Bayesian Computation (ABC) method is a promising alternative to traditional Markov Chain Monte Carlo approaches in the case where the likelihood is intractable or unknown. The ABC method is called "likelihood free" as it avoids explicit evaluation of the likelihood by using a forward model simulation of the data which can include systematics. We introduce astroABC, an open source ABC Sequential Monte Carlo (SMC) sampler for parameter estimation. A key challenge in astrophysics is the efficient use of large multi-probe datasets to constrain high-dimensional, possibly correlated parameter spaces. With this in mind astroABC allows for massive parallelization using MPI, a framework that handles the spawning of processes across multiple nodes. A key new feature of astroABC is the ability to create MPI groups with different communicators, one for the sampler and several others for the forward model simulation, which speeds up sampling time considerably. For smaller jobs the Python multiprocessing option is also available. Other key features of this new sampler include: a Sequential Monte Carlo sampler; a method for iteratively adapting tolerance levels; local covariance estimation using scikit-learn's KDTree; modules for specifying an optimal covariance matrix for a component-wise or multivariate normal perturbation kernel and a weighted covariance metric; restart files output frequently so an interrupted sampling run can be resumed at any iteration; output and restart files backed up at every iteration; user-defined distance metrics and simulation methods; a module for specifying heterogeneous parameter priors including non-standard prior PDFs; a module for specifying a constant, linear, log or exponential tolerance level; and well-documented examples and sample scripts. This code is hosted online at https://github.com/EliseJ/astroABC.
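
    The core ABC loop is compact; astroABC wraps an SMC population, adaptive tolerances, and MPI around it. The simplest rejection form, as a toy sketch (everything below is illustrative and is not astroABC's API):

        import numpy as np

        def abc_rejection(observed, simulate, prior_sample, distance, eps, n_accept):
            # keep prior draws whose simulated summary lands within eps of the data
            accepted = []
            while len(accepted) < n_accept:
                theta = prior_sample()
                if distance(simulate(theta), observed) < eps:
                    accepted.append(theta)
            return np.array(accepted)

        rng = np.random.default_rng(1)
        obs = rng.normal(3.0, 1.0, size=200).mean()        # observed summary statistic
        post = abc_rejection(
            observed=obs,
            simulate=lambda th: rng.normal(th, 1.0, size=200).mean(),
            prior_sample=lambda: rng.uniform(-10, 10),
            distance=lambda a, b: abs(a - b),
            eps=0.05, n_accept=300)
        print(post.mean(), post.std())                     # posterior mean near 3

    An SMC sampler replaces the fixed eps with a decreasing tolerance schedule and reweights the surviving particles at each level, which is what makes the approach affordable for expensive forward models.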

  6. Computational aspects of helicopter trim analysis and damping levels from Floquet theory

    NASA Technical Reports Server (NTRS)

    Gaonkar, Gopal H.; Achar, N. S.

    1992-01-01

    Helicopter trim settings of periodic initial state and control inputs are investigated for convergence of Newton iteration in computing the settings sequentially and in parallel. The trim analysis uses a shooting method and a weak version of two temporal finite element methods with displacement formulation and with mixed formulation of displacements and momenta. These three methods broadly represent two main approaches of trim analysis: adaptation of initial-value and finite element boundary-value codes to periodic boundary conditions, particularly for unstable and marginally stable systems. In each method, both the sequential and in-parallel schemes are used and the resulting nonlinear algebraic equations are solved by damped Newton iteration with an optimally selected damping parameter. The impact of damped Newton iteration, including earlier-observed divergence problems in trim analysis, is demonstrated by the maximum condition number of the Jacobian matrices of the iterative scheme and by virtual elimination of divergence. The advantages of the in-parallel scheme over the conventional sequential scheme are also demonstrated.

  7. Multilevel acceleration of scattering-source iterations with application to electron transport

    DOE PAGES

    Drumm, Clif; Fan, Wesley

    2017-08-18

    Acceleration/preconditioning strategies available in the SCEPTRE radiation transport code are described. A flexible transport synthetic acceleration (TSA) algorithm that uses a low-order discrete-ordinates (SN) or spherical-harmonics (PN) solve to accelerate convergence of a high-order SN source-iteration (SI) solve is described. Convergence of the low-order solves can be further accelerated by applying off-the-shelf incomplete-factorization or algebraic-multigrid methods. Also available is an algorithm that uses a generalized minimum residual (GMRES) iterative method rather than SI for convergence, using a parallel sweep-based solver to build up a Krylov subspace. TSA has been applied as a preconditioner to accelerate the convergence of the GMRES iterations. The methods are applied to several problems involving electron transport and problems with artificial cross sections with large scattering ratios. These methods were compared and evaluated by considering material discontinuities and scattering anisotropy. The observed accelerations are highly problem dependent, but speedup factors around 10 have been observed in typical applications.

  8. Evaluating user reputation in online rating systems via an iterative group-based ranking method

    NASA Astrophysics Data System (ADS)

    Gao, Jian; Zhou, Tao

    2017-05-01

    Reputation is a valuable asset in online social life and has drawn increasing attention. Because of noisy ratings and spamming attacks, how to evaluate user reputation in online rating systems is especially significant. However, most previous ranking-based methods either rely on a debatable assumption or have unsatisfactory robustness. In this paper, we propose an iterative group-based ranking method by introducing an iterative reputation-allocation process into the original group-based ranking method. More specifically, the reputation of users is calculated based on the weighted sizes of the user rating groups after grouping all users by their rating similarities, and high-reputation users' ratings carry larger weights in dominating the corresponding user rating groups. The reputation of users and the user rating group sizes are iteratively updated until they become stable. Results on two real data sets with artificial spammers suggest that the proposed method outperforms the state-of-the-art methods and that its robustness is considerably improved compared with the original group-based ranking method. Our work highlights the positive role of considering users' grouping behaviors for better online user reputation evaluation.
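
    Stripped of the grouping specifics, the fixed-point structure is: estimate item quality from reputation-weighted ratings, re-score users by how far their ratings sit from those estimates, and repeat until stable. A simplified sketch of that skeleton only (deliberately not the paper's group-based weighting; it assumes every object has at least one rater):

        import numpy as np

        def iterative_reputation(R, iters=50):
            # R: users x objects rating matrix with NaN where unrated
            rated = ~np.isnan(R)
            Rz = np.where(rated, R, 0.0)
            rep = np.ones(R.shape[0])
            for _ in range(iters):
                w = rated * rep[:, None]
                quality = (Rz * rep[:, None]).sum(axis=0) / w.sum(axis=0)
                err = (rated * (Rz - quality) ** 2).sum(axis=1) / rated.sum(axis=1)
                rep = 1.0 / (err + 1e-6)       # small deviation -> high reputation
                rep /= rep.max()
            return rep, quality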

  9. Compensation for the phase-type spatial periodic modulation of the near-field beam at 1053 nm

    NASA Astrophysics Data System (ADS)

    Gao, Yaru; Liu, Dean; Yang, Aihua; Tang, Ruyu; Zhu, Jianqiang

    2017-10-01

    A phase-only spatial light modulator (SLM) is used to provide and compensate for the spatial periodic modulation (SPM) of a near-field beam in the near infrared, at the 1053 nm wavelength, with an improved iterative weight-based method. The SLM changes the transmission characteristics of the incident beam to shape the spatial intensity of the output beam. The propagation and reverse propagation of the light in free space are the two key processes in the iteration. The underlying theory comprises the beam angular spectrum transfer formula (ASTF) and the principle of the iterative weight-based method. We made two improvements to the originally proposed iterative weight-based method: we select the appropriate parameter by choosing the minimum value of the output beam contrast degree, and we use the MATLAB built-in angle function to acquire the corresponding phase of the light wave function. The phase required to compensate for the intensity distribution of the incident SPM beam is obtained by this iterative algorithm, which decreases the magnitude of the SPM of the intensity on the observation plane. The experimental results show that the phase-type SPM of the near-field beam is subject to a certain restriction. We also analyzed some factors that make the results imperfect. The experimental results verify the possible applicability of this iterative weight-based method to compensating for the SPM of the near-field beam.

  10. An iterative method for the localization of a neutron source in a large box (container)

    NASA Astrophysics Data System (ADS)

    Dubinski, S.; Presler, O.; Alfassi, Z. B.

    2007-12-01

    The localization of an unknown neutron source inside a bulky box was studied. This can be used for the inspection of cargo, to prevent the smuggling of neutron and α emitters. For safety reasons it is important to localize the source from the outside, and localization is necessary in order to determine the source activity. A previous study showed that, by using six detectors, three on each of two parallel faces of the box (460×420×200 mm³), the source could be located with an average distance of 4.73 cm between the real and calculated source positions and a maximal distance of about 9 cm. Accuracy was improved in this work by applying an iteration method based on four fixed detectors and successive repositioning of an external calibrating source. The initial position of the calibrating source is in the plane of detectors 1 and 2. This method finds the unknown source location with an average distance of 0.78 cm between the real and calculated source positions and a maximum distance of 3.66 cm for the same box. For larger boxes, localization without iterations requires an increase in the number of detectors, while localization with iterations requires only an increase in the number of iteration steps. In addition to source localization, two methods for determining the activity of the unknown source were also studied.

  11. Status of the ITER Cryodistribution

    NASA Astrophysics Data System (ADS)

    Chang, H.-S.; Vaghela, H.; Patel, P.; Rizzato, A.; Cursan, M.; Henry, D.; Forgeas, A.; Grillot, D.; Sarkar, B.; Muralidhara, S.; Das, J.; Shukla, V.; Adler, E.

    2017-12-01

    Since the conceptual design of the ITER Cryodistribution, many modifications have been applied owing to both system optimization and improved knowledge of the clients' requirements. Process optimizations in the Cryoplant resulted in component simplifications, whereas increased heat loads in some of the superconducting magnet systems required a more complicated process configuration; the standardization of component arrangements also made it possible to remove one cold box. Another cold box, planned for redundancy, has been removed following modification of the Tokamak in-cryostat piping layout. In this proceeding we summarize the present design status and component configuration of the ITER Cryodistribution, with all implemented changes aiming at process optimization and simplification as well as operational reliability, stability and flexibility.

  12. Polycapillary lenses for soft x-ray transmission in ITER: Model, comparison with experiments, and potential application

    NASA Astrophysics Data System (ADS)

    Mazon, D.; Liegeard, C.; Jardin, A.; Barnsley, R.; Walsh, M.; O'Mullane, M.; Sirinelli, A.; Dorchies, F.

    2016-11-01

    Measuring Soft X-Ray (SXR) radiation [0.1 keV; 15 keV] in tokamaks is a standard way of extracting valuable information on the particle transport and magnetohydrodynamic activity. Generally, the analysis is performed with detectors positioned close to the plasma for a direct line of sight. A burning plasma, like the ITER deuterium-tritium phase, is too harsh an environment to permit the use of such detectors in close vicinity of the machine. We have thus investigated in this article the possibility of using polycapillary lenses in ITER to transport the SXR information several meters away from the plasma in the complex port-plug geometry.

  14. A comparison between progressive extension method (PEM) and iterative method (IM) for magnetic field extrapolations in the solar atmosphere

    NASA Technical Reports Server (NTRS)

    Wu, S. T.; Sun, M. T.; Sakurai, Takashi

    1990-01-01

    This paper presents a comparison between two numerical methods for the extrapolation of nonlinear force-free magnetic fields, viz. the Iterative Method (IM) and the Progressive Extension Method (PEM). The advantages and disadvantages of these two methods are summarized, and their accuracy and numerical instability are discussed. On the basis of this investigation, it is claimed that the two methods do resemble each other qualitatively.

  15. Computational trigonometry

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gustafson, K.

    1994-12-31

    By means of the author's earlier theory of antieigenvalues and antieigenvectors, a new computational approach to iterative methods is presented. This enables an explicit trigonometric understanding of iterative convergence and provides new insights into the sharpness of error bounds. Direct applications to gradient descent, conjugate gradient, GCR(k), Orthomin, CGN, GMRES, CGS, and other matrix iterative schemes will be given.

  16. Effect of contrast enhancement prior to iteration procedure on image correction for soft x-ray projection microscopy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jamsranjav, Erdenetogtokh, E-mail: ja.erdenetogtokh@gmail.com; Shiina, Tatsuo, E-mail: shiina@faculity.chiba-u.jp; Kuge, Kenichi

    2016-01-28

    Soft X-ray microscopy is well recognized as a powerful tool for high-resolution imaging of hydrated biological specimens. The projection type has the advantages of an easy zooming function, a simple optical layout, and so on. However, the image is blurred by the diffraction of X-rays, degrading the spatial resolution. In this study, the blurred images have been corrected by an iteration procedure, i.e., repeated Fresnel and inverse Fresnel transformations. This method was confirmed by earlier studies to be effective. Nevertheless, it was not sufficient for some images showing too low contrast, especially at high magnification. In the present study, we tried a contrast enhancement method to make the diffraction fringes clearer prior to the iteration procedure. The method was effective in improving images that could not be corrected successfully by the iteration procedure alone.

  17. A fast method to emulate an iterative POCS image reconstruction algorithm.

    PubMed

    Zeng, Gengsheng L

    2017-10-01

    Iterative image reconstruction algorithms are commonly used to optimize an objective function, especially when the objective function is nonquadratic. Generally speaking, iterative algorithms are computationally inefficient. This paper presents a fast algorithm that has one backprojection and no forward projection, and derives a new method to solve an optimization problem. The nonquadratic constraint, for example an edge-preserving denoising constraint, is implemented as a nonlinear filter. The algorithm is derived based on the POCS (projections onto convex sets) approach. A windowed FBP (filtered backprojection) algorithm enforces the data fidelity. An iterative procedure, divided into segments, enforces edge-enhancement denoising; each segment performs nonlinear filtering. The derived iterative algorithm is computationally efficient, containing only one backprojection and no forward projection. Low-dose CT data are used for algorithm feasibility studies. The nonlinearity is implemented as an edge-enhancing noise-smoothing filter. The patient study results demonstrate its effectiveness in processing low-dose x-ray CT data. This fast algorithm can be used to replace many iterative algorithms. © 2017 American Association of Physicists in Medicine.

  18. Uniscale multi-view registration using double dog-leg method

    NASA Astrophysics Data System (ADS)

    Chen, Chao-I.; Sargent, Dusty; Tsai, Chang-Ming; Wang, Yuan-Fang; Koppel, Dan

    2009-02-01

    3D computer models of body anatomy can have many uses in medical research and clinical practice. This paper describes a robust method that uses videos of body anatomy to construct multiple, partial 3D structures and then fuse them to form a larger, more complete computer model using the structure-from-motion framework. We employ the Double Dog-Leg (DDL) method, a trust-region based nonlinear optimization method, to jointly optimize the camera motion parameters (rotation and translation) and determine a global scale that all partial 3D structures should agree upon. These optimized motion parameters are used for constructing local structures, and the global scale is essential for multi-view registration after all these partial structures are built. In order to provide a good initial guess of the camera movement parameters and outlier-free 2D point correspondences for DDL, we also propose a two-stage scheme in which multi-RANSAC with a normalized eight-point algorithm is first performed, and a few iterations of an over-determined five-point algorithm are then used to polish the results. Our experimental results using colonoscopy video show that the proposed scheme always produces more accurate outputs than the standard RANSAC scheme. Furthermore, since we have obtained many reliable point correspondences, time-consuming and error-prone registration methods such as iterative closest point (ICP) based algorithms can be replaced by a simple rigid-body transformation solver when merging partial structures into a larger model.

  19. Blind retrospective motion correction of MR images.

    PubMed

    Loktyushin, Alexander; Nickisch, Hannes; Pohmann, Rolf; Schölkopf, Bernhard

    2013-12-01

    Subject motion can severely degrade MR images. A retrospective motion correction algorithm, gradient-based motion correction, is proposed that significantly reduces the ghosting and blurring artifacts caused by subject motion. The technique uses the raw data of standard imaging sequences; no sequence modifications or additional equipment such as tracking devices are required. Rigid motion is assumed. The approach iteratively searches for the motion trajectory yielding the sharpest image as measured by the entropy of spatial gradients. The vast space of motion parameters is efficiently explored by gradient-based optimization with a convergence guarantee. The method has been evaluated on both synthetic and real data in two and three dimensions using standard imaging techniques. MR images are consistently improved over different kinds of motion trajectories. Using a graphics processing unit implementation, computation times are on the order of a few minutes for a full three-dimensional volume. The presented technique can be an alternative or a complement to prospective motion correction methods and is able to improve images with strong motion artifacts from standard imaging sequences without requiring additional data. Copyright © 2013 Wiley Periodicals, Inc., a Wiley company.
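
    The objective being optimized is easy to state: sharp images have concentrated gradient distributions, so the entropy of the normalized gradient magnitude drops as motion is correctly removed. A sketch of that metric alone (the search over rigid-motion trajectories is the substantial part of the method and is not reproduced here):

        import numpy as np

        def gradient_entropy(img, eps=1e-12):
            # entropy of the normalized spatial-gradient magnitude;
            # lower values indicate a sharper image
            gx, gy = np.gradient(img.astype(float))
            g = np.sqrt(gx**2 + gy**2).ravel()
            p = g / (g.sum() + eps)
            p = p[p > eps]                      # drop empty bins; 0 log 0 := 0
            return float(-(p * np.log(p)).sum())

    A correction algorithm of this family re-evaluates the metric after resampling k-space with candidate motion parameters and feeds its gradient to the optimizer.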

  20. Digital evaluation of absolute marginal discrepancy: A comparison of ceramic crowns fabricated with conventional and digital techniques.

    PubMed

    Liang, Shanshan; Yuan, Fusong; Luo, Xu; Yu, Zhuoren; Tang, Zhihui

    2018-04-05

    Marginal discrepancy is key to evaluating the accuracy of fixed dental prostheses, and an improved method of evaluating it is needed. The purpose of this in vitro study was to evaluate the absolute marginal discrepancy of ceramic crowns fabricated using conventional and digital methods, with a digital method for the quantitative evaluation of absolute marginal discrepancy. The novel method was based on 3-dimensional scanning, iterative closest point registration techniques, and reverse engineering theory. Six standard tooth preparations for the right maxillary central incisor, right maxillary second premolar, right maxillary second molar, left mandibular lateral incisor, left mandibular first premolar, and left mandibular first molar were selected. Ten conventional ceramic crowns and 10 CEREC crowns were fabricated for each tooth preparation. A dental cast scanner was used to obtain 3-dimensional data of the preparations and ceramic crowns, and the data were compared using the "virtual seating" iterative closest point technique. Reverse engineering software used edge sharpening and other functional modules to extract the margins of the preparations and crowns. Finally, quantitative evaluation of the absolute marginal discrepancy of the ceramic crowns was obtained from the 2-dimensional cross-sectional straight-line distance between points on the margin of the ceramic crowns and the standard preparations, based on the circumferential function module along the long axis. The absolute marginal discrepancy of the ceramic crowns fabricated using conventional methods was 115 ±15.2 μm, and that of the crowns fabricated using the digital technique was 110 ±14.3 μm. ANOVA showed no statistical difference between the 2 methods or among ceramic crowns for different teeth (P>.05). A digital quantitative evaluation method for the absolute marginal discrepancy of ceramic crowns was established. The evaluations determined that the absolute marginal discrepancies were within a clinically acceptable range. This method is acceptable for the digital evaluation of the accuracy of complete crowns. Copyright © 2017 Editorial Council for the Journal of Prosthetic Dentistry. Published by Elsevier Inc. All rights reserved.

  1. Generalized Pattern Search methods for a class of nonsmooth optimization problems with structure

    NASA Astrophysics Data System (ADS)

    Bogani, C.; Gasparo, M. G.; Papini, A.

    2009-07-01

    We propose a Generalized Pattern Search (GPS) method to solve a class of nonsmooth minimization problems, where the set of nondifferentiability is included in the union of known hyperplanes and, therefore, is highly structured. Both unconstrained and linearly constrained problems are considered. At each iteration the set of poll directions is enforced to conform to the geometry of both the nondifferentiability set and the boundary of the feasible region, near the current iterate. This is the key issue to guarantee the convergence of certain subsequences of iterates to points which satisfy first-order optimality conditions. Numerical experiments on some classical problems validate the method.
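
    The simplest member of the GPS family is compass search: poll a fixed set of directions, accept any improving point, and shrink the step when the poll fails. The paper's contribution is choosing the poll directions to conform to the known nondifferentiability hyperplanes; the sketch below uses only the plain coordinate directions and is not the authors' algorithm:

        import numpy as np

        def compass_search(f, x0, step=1.0, shrink=0.5, tol=1e-8, max_iter=100000):
            x = np.asarray(x0, float)
            fx = f(x)
            D = np.vstack([np.eye(len(x)), -np.eye(len(x))])   # +/- coordinate poll set
            for _ in range(max_iter):
                if step <= tol:
                    break
                for d in D:
                    trial = x + step * d
                    ft = f(trial)
                    if ft < fx:                    # successful poll: move
                        x, fx = trial, ft
                        break
                else:
                    step *= shrink                 # unsuccessful poll: refine the mesh
            return x, fx

        # a nonsmooth objective whose kinks lie on known hyperplanes
        print(compass_search(lambda p: abs(p[0] - 1) + abs(p[1] + 2), [0.0, 0.0]))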

  2. Harmonics analysis of the ITER poloidal field converter based on a piecewise method

    NASA Astrophysics Data System (ADS)

    Xudong, WANG; Liuwei, XU; Peng, FU; Ji, LI; Yanan, WU

    2017-12-01

    Poloidal field (PF) converters provide controlled DC voltage and current to PF coils. The many harmonics generated by the PF converter flow into the power grid and seriously affect power systems and electric equipment. Due to the complexity of the system, the traditional integral operation in Fourier analysis is complicated and inaccurate. This paper presents a piecewise method to calculate the harmonics of the ITER PF converter. The relationship between the grid input current and the DC output current of the ITER PF converter is deduced. The grid current is decomposed into the sum of some simple functions. By calculating simple function harmonics based on the piecewise method, the harmonics of the PF converter under different operation modes are obtained. In order to examine the validity of the method, a simulation model is established based on Matlab/Simulink and a relevant experiment is implemented in the ITER PF integration test platform. Comparative results are given. The calculated results are found to be consistent with simulation and experiment. The piecewise method is proved correct and valid for calculating the system harmonics.

  3. RF Pulse Design using Nonlinear Gradient Magnetic Fields

    PubMed Central

    Kopanoglu, Emre; Constable, R. Todd

    2014-01-01

    Purpose An iterative k-space trajectory and radio-frequency (RF) pulse design method is proposed for Excitation using Nonlinear Gradient Magnetic fields (ENiGMa). Theory and Methods The spatial encoding functions (SEFs) generated by nonlinear gradient fields (NLGFs) are linearly dependent in Cartesian coordinates. Left uncorrected, this may lead to flip-angle variations in excitation profiles. In the proposed method, SEFs (k-space samples) are selected using a Matching-Pursuit algorithm, and the RF pulse is designed using a Conjugate-Gradient algorithm. Three variants of the proposed approach are given: the full algorithm, a computationally cheaper version, and a third version for designing spoke-based trajectories. The method is demonstrated for various target excitation profiles using simulations and phantom experiments. Results The method is compared to other iterative (Matching-Pursuit and Conjugate-Gradient) and non-iterative (coordinate-transformation and Jacobian-based) pulse design methods, as well as to uniform-density spiral and EPI trajectories. The results show that the proposed method can increase excitation fidelity significantly. Conclusion An iterative method for designing k-space trajectories and RF pulses using nonlinear gradient fields is proposed. The method can either be used for selecting the SEFs individually to guide trajectory design, or can be adapted to design and optimize specific trajectories of interest. PMID:25203286

  4. A comparison of several methods of solving nonlinear regression groundwater flow problems

    USGS Publications Warehouse

    Cooley, Richard L.

    1985-01-01

    Computational efficiency and computer memory requirements for four methods of minimizing functions were compared on four test nonlinear-regression steady-state groundwater flow problems. The fastest methods were the Marquardt and quasi-linearization methods, which required almost identical computer times and numbers of iterations; the next fastest was the quasi-Newton method, and last was the Fletcher-Reeves method, which did not converge in 100 iterations for two of the problems. The fastest method per iteration was the Fletcher-Reeves method, followed closely by the quasi-Newton method. The Marquardt and quasi-linearization methods were slower. For all four methods the speed per iteration was directly related to the number of parameters in the model; however, this effect was much more pronounced for the Marquardt and quasi-linearization methods than for the other two. Hence the quasi-Newton (and perhaps Fletcher-Reeves) method might be more efficient than either the Marquardt or quasi-linearization method if the number of parameters in a particular model were large, although this remains to be proven. The Marquardt method required somewhat less central memory than the quasi-linearization method for three of the four problems. For all four problems the quasi-Newton method required roughly two-thirds to three-quarters of the memory required by the Marquardt method, and the Fletcher-Reeves method required slightly less memory than the quasi-Newton method. Memory requirements were not excessive for any of the four methods.
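
    The Marquardt method that came out fastest here is the standard workhorse in modern libraries. A toy illustration with SciPy's MINPACK-backed implementation (an arbitrary exponential-decay fit for illustration, not Cooley's groundwater models):

        import numpy as np
        from scipy.optimize import least_squares

        x = np.linspace(0, 10, 40)
        rng = np.random.default_rng(0)
        y = 2.5 * np.exp(-x / 3.0) + 0.02 * rng.standard_normal(x.size)

        def residuals(theta):
            k, s = theta
            return k * np.exp(-x / s) - y

        fit = least_squares(residuals, x0=[1.0, 1.0], method='lm')   # Levenberg-Marquardt
        print(fit.x)   # close to (2.5, 3.0)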

  5. Strategies for efficient resolution analysis in full-waveform inversion

    NASA Astrophysics Data System (ADS)

    Fichtner, A.; van Leeuwen, T.; Trampert, J.

    2016-12-01

    Full-waveform inversion is developing into a standard method in the seismological toolbox. It combines numerical wave propagation for heterogeneous media with adjoint techniques in order to improve tomographic resolution. However, resolution becomes increasingly difficult to quantify because of the enormous computational requirements. Here we present two families of methods that can be used for efficient resolution analysis in full-waveform inversion. They are based on the targeted extraction of resolution proxies from the Hessian matrix, which is too large to store and to compute explicitly. Fourier methods rest on the application of the Hessian to Earth models with harmonic oscillations. This yields the Fourier spectrum of the Hessian for few selected wave numbers, from which we can extract properties of the tomographic point-spread function for any point in space. Random probing methods use uncorrelated, random test models instead of harmonic oscillations. Auto-correlating the Hessian-model applications for sufficiently many test models also characterises the point-spread function. Both Fourier and random probing methods provide a rich collection of resolution proxies. These include position- and direction-dependent resolution lengths, and the volume of point-spread functions as indicator of amplitude recovery and inter-parameter trade-offs. The computational requirements of these methods are equivalent to approximately 7 conjugate-gradient iterations in full-waveform inversion. This is significantly less than the optimisation itself, which may require tens to hundreds of iterations to reach convergence. In addition to the theoretical foundations of the Fourier and random probing methods, we show various illustrative examples from real-data full-waveform inversion for crustal and mantle structure.

  6. Split Bregman multicoil accelerated reconstruction technique: A new framework for rapid reconstruction of cardiac perfusion MRI

    PubMed Central

    Kamesh Iyer, Srikant; Tasdizen, Tolga; Likhite, Devavrat; DiBella, Edward

    2016-01-01

    Purpose: Rapid reconstruction of undersampled multicoil MRI data with iterative constrained reconstruction method is a challenge. The authors sought to develop a new substitution based variable splitting algorithm for faster reconstruction of multicoil cardiac perfusion MRI data. Methods: The new method, split Bregman multicoil accelerated reconstruction technique (SMART), uses a combination of split Bregman based variable splitting and iterative reweighting techniques to achieve fast convergence. Total variation constraints are used along the spatial and temporal dimensions. The method is tested on nine ECG-gated dog perfusion datasets, acquired with a 30-ray golden ratio radial sampling pattern and ten ungated human perfusion datasets, acquired with a 24-ray golden ratio radial sampling pattern. Image quality and reconstruction speed are evaluated and compared to a gradient descent (GD) implementation and to multicoil k-t SLR, a reconstruction technique that uses a combination of sparsity and low rank constraints. Results: Comparisons based on blur metric and visual inspection showed that SMART images had lower blur and better texture as compared to the GD implementation. On average, the GD based images had an ∼18% higher blur metric as compared to SMART images. Reconstruction of dynamic contrast enhanced (DCE) cardiac perfusion images using the SMART method was ∼6 times faster than standard gradient descent methods. k-t SLR and SMART produced images with comparable image quality, though SMART was ∼6.8 times faster than k-t SLR. Conclusions: The SMART method is a promising approach to reconstruct good quality multicoil images from undersampled DCE cardiac perfusion data rapidly. PMID:27036592

  7. X-Divertors on ITER - with no hardware changes

    NASA Astrophysics Data System (ADS)

    Valanju, Prashant; Covele, Brent; Kotschenreuther, Mike; Mahajan, Swadesh; Kessel, Charles

    2014-10-01

    Using CORSICA, we have discovered that X-Divertor (XD) equilibria are possible on ITER - without any extra PF coils inside the TF coils, and with no changes to ITER's poloidal field (PF) coil set, divertor cassette, strike points, or first wall. Starting from the Standard Divertor (SD), a sequence of XD configurations (with increasing flux expansions at the divertor plate) can be made by reprogramming ITER PF coil currents while keeping them all under their design limits (Lackner and Zohm have shown this to be impossible for Snowflakes). The strike point is held fixed, so no changes in the divertor or pumping hardware will be needed. The main plasma shape is kept very close to the SD case, so no hardware changes to the main chamber will be needed. Time-dependent ITER-XD operational scenarios are being checked using TSC. This opens the possibility that many XDs could be tested and used to assist in high-power operation on ITER. Because of the toroidally segmented ITER divertor plates, strongly detached operation may be critical for making use of the largest XD flux expansion possible. The flux flaring in XDs is expected to increase the stability of detachment, so that H-mode confinement is not affected. Detachment stability is being examined with SOLPS. This work supported by US DOE Grants DE-FG02-04ER54742 and DE-FG02-04ER54754 and by TACC at UT Austin.

  8. Adaptively Tuned Iterative Low Dose CT Image Denoising

    PubMed Central

    Hashemi, SayedMasoud; Paul, Narinder S.; Beheshti, Soosan; Cobbold, Richard S. C.

    2015-01-01

    Improving image quality is a critical objective in low dose computed tomography (CT) imaging and is the primary focus of CT image denoising. State-of-the-art CT denoising algorithms are mainly based on iterative minimization of an objective function, in which the performance is controlled by regularization parameters. To achieve the best results, these should be chosen carefully. However, the parameter selection is typically performed in an ad hoc manner, which can cause the algorithms to converge slowly or become trapped in a local minimum. To overcome these issues a noise confidence region evaluation (NCRE) method is used, which evaluates the denoising residuals iteratively and compares their statistics with those produced by additive noise. It then updates the parameters at the end of each iteration to achieve a better match to the noise statistics. By combining NCRE with the fundamentals of block matching and 3D filtering (BM3D) approach, a new iterative CT image denoising method is proposed. It is shown that this new denoising method improves the BM3D performance in terms of both the mean square error and a structural similarity index. Moreover, simulations and patient results show that this method preserves the clinically important details of low dose CT images together with a substantial noise reduction. PMID:26089972

  9. Anderson acceleration and application to the three-temperature energy equations

    NASA Astrophysics Data System (ADS)

    An, Hengbin; Jia, Xiaowei; Walker, Homer F.

    2017-10-01

    The Anderson acceleration method is an algorithm for accelerating the convergence of fixed-point iterations, including the Picard method. Anderson acceleration was first proposed in 1965 and, for some years, has been used successfully to accelerate the convergence of self-consistent field iterations in electronic-structure computations. Recently, the method has attracted growing attention in other application areas and among numerical analysts. Compared with a Newton-like method, an advantage of Anderson acceleration is that there is no need to form the Jacobian matrix. Thus the method is easy to implement. In this paper, an Anderson-accelerated Picard method is employed to solve the three-temperature energy equations, which are a type of strong nonlinear radiation-diffusion equations. Two strategies are used to improve the robustness of the Anderson acceleration method. One strategy is to adjust the iterates when necessary to satisfy the physical constraint. Another strategy is to monitor and, if necessary, reduce the matrix condition number of the least-squares problem in the Anderson-acceleration implementation so that numerical stability can be guaranteed. Numerical results show that the Anderson-accelerated Picard method can solve the three-temperature energy equations efficiently. Compared with the Picard method without acceleration, Anderson acceleration can reduce the number of iterations by at least half. A comparison between a Jacobian-free Newton-Krylov method, the Picard method, and the Anderson-accelerated Picard method is conducted in this paper.
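
    The appeal of Anderson acceleration is that each step needs only a small least-squares solve over recent residuals, with no Jacobian. A compact sketch of an accelerated Picard iteration (the paper's safeguards on the physical constraint and on the least-squares condition number are omitted; the constrained least-squares formulation below is one common variant):

        import numpy as np

        def anderson(g, x0, m=5, tol=1e-10, max_iter=200):
            # accelerate the fixed-point iteration x = g(x)
            x = np.atleast_1d(np.asarray(x0, float))
            X, G = [], []                          # recent iterates and their images
            for _ in range(max_iter):
                gx = g(x)
                if np.linalg.norm(gx - x) < tol:
                    return gx
                X.append(x); G.append(gx)
                X, G = X[-(m + 1):], G[-(m + 1):]
                Gm, Xm = np.array(G), np.array(X)
                F = Gm - Xm                        # residual history, one row per iterate
                # minimize ||sum_i a_i f_i|| subject to sum_i a_i = 1; the constraint
                # is appended as a heavily weighted extra equation
                k = len(X)
                Amat = np.vstack([F.T, 1e8 * np.ones((1, k))])
                rhs = np.concatenate([np.zeros(x.size), [1e8]])
                a, *_ = np.linalg.lstsq(Amat, rhs, rcond=None)
                x = a @ Gm                         # mix the images with those weights
            return x

        print(anderson(np.cos, 1.0))               # fixed point of cos: ~0.739085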

  10. Iterative and multigrid methods in the finite element solution of incompressible and turbulent fluid flow

    NASA Astrophysics Data System (ADS)

    Lavery, N.; Taylor, C.

    1999-07-01

    Multigrid and iterative methods are used to reduce the solution time of the matrix equations which arise from the finite element (FE) discretisation of the time-independent equations of motion of the incompressible fluid in turbulent motion. Incompressible flow is solved by using the method of reduced interpolation for the pressure to satisfy the Brezzi-Babuška condition. The k-l model is used to complete the turbulence closure problem. The non-symmetric iterative matrix methods examined are the methods of least squares conjugate gradient (LSCG), biconjugate gradient (BCG), conjugate gradient squared (CGS), and the stabilised biconjugate gradient (BCGSTAB). The multigrid algorithm applied is based on the FAS algorithm of Brandt, and uses two and three levels of grids with a V-cycling schedule. These methods are all compared to the non-symmetric frontal solver.
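
    For readers who want to reproduce the flavour of such solver comparisons, SciPy exposes BCG, CGS and BiCGSTAB directly; the sketch below counts iterations on a small nonsymmetric tridiagonal test matrix. The test matrix is an assumption for illustration and is unrelated to the paper's finite element systems.

      import numpy as np
      import scipy.sparse as sp
      from scipy.sparse.linalg import bicg, cgs, bicgstab

      # simple nonsymmetric (convection-diffusion-like) test matrix
      n = 200
      A = sp.diags([-1.2, 2.0, -0.8], [-1, 0, 1], shape=(n, n), format='csr')
      b = np.ones(n)

      for name, solver in [('BCG', bicg), ('CGS', cgs), ('BiCGSTAB', bicgstab)]:
          its = []                                  # one callback per iteration
          x, info = solver(A, b, callback=lambda xk: its.append(1))
          print(f'{name}: info={info}, iterations={len(its)}')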

  11. Engineering Design of ITER Prototype Fast Plant System Controller

    NASA Astrophysics Data System (ADS)

    Goncalves, B.; Sousa, J.; Carvalho, B.; Rodrigues, A. P.; Correia, M.; Batista, A.; Vega, J.; Ruiz, M.; Lopez, J. M.; Rojo, R. Castro; Wallander, A.; Utzel, N.; Neto, A.; Alves, D.; Valcarcel, D.

    2011-08-01

    The ITER control, data access and communication (CODAC) design team identified the need for two types of plant systems. A slow control plant system is based on industrial automation technology with maximum sampling rates below 100 Hz, and a fast control plant system is based on embedded technology with higher sampling rates and more stringent real-time requirements than those of slow controllers. The latter is applicable to diagnostics and plant systems in closed-control loops whose cycle times are below 1 ms. Fast controllers will be dedicated industrial controllers with the ability to supervise other fast and/or slow controllers, and to interface to actuators, sensors and, if necessary, high-performance networks. Two prototypes of a fast plant system controller specialized for data acquisition and constrained by ITER technological choices are being built using two different form factors. This prototyping activity contributes to the Plant Control Design Handbook effort of standardization, specifically regarding fast controller characteristics. Envisaging a general purpose fast controller design, diagnostic use cases with specific requirements were analyzed and will be presented along with the interface with CODAC and sensors. The requirements and constraints that real-time plasma control imposes on the design were also taken into consideration. Functional specifications and a technology-neutral architecture, together with their implications for the engineering design, were considered. The detailed engineering design compliant with ITER standards was performed and will be discussed in detail. Emphasis will be given to the integration of the controller in the standard CODAC environment. Requirements for the EPICS IOC providing the interface to the outside world, the prototype decisions on form factor, real-time operating system, and high-performance networks will also be discussed, as well as the requirements for data streaming to CODAC for visualization and archiving.

  12. Improved evaluation of optical depth components from Langley plot data

    NASA Technical Reports Server (NTRS)

    Biggar, S. F.; Gellman, D. I.; Slater, P. N.

    1990-01-01

    A simple, iterative procedure to determine the optical depth components of the extinction optical depth measured by a solar radiometer is presented. Simulated data show that the iterative procedure improves the determination of the exponent of a Junge law particle size distribution. The determination of the optical depth due to aerosol scattering is improved as compared to a method which uses only two points from the extinction data. The iterative method was used to determine spectral optical depth components for June 11-13, 1988 during the MAC III experiment.

  13. Newton-Krylov-Schwarz: An implicit solver for CFD

    NASA Technical Reports Server (NTRS)

    Cai, Xiao-Chuan; Keyes, David E.; Venkatakrishnan, V.

    1995-01-01

    Newton-Krylov methods and Krylov-Schwarz (domain decomposition) methods have begun to become established in computational fluid dynamics (CFD) over the past decade. The former employ a Krylov method inside of Newton's method in a Jacobian-free manner, through directional differencing. The latter employ an overlapping Schwarz domain decomposition to derive a preconditioner for the Krylov accelerator that relies primarily on local information, for data-parallel concurrency. They may be composed as Newton-Krylov-Schwarz (NKS) methods, which seem particularly well suited for solving nonlinear elliptic systems in high-latency, distributed-memory environments. We give a brief description of this family of algorithms, with an emphasis on domain decomposition iterative aspects. We then describe numerical simulations with Newton-Krylov-Schwarz methods on aerodynamics applications emphasizing comparisons with a standard defect-correction approach, subdomain preconditioner consistency, subdomain preconditioner quality, and the effect of a coarse grid.
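
    A Jacobian-free Newton-Krylov iteration of the kind described here is available off the shelf in SciPy. The sketch below applies it to a small nonlinear elliptic model problem; the residual function and grid size are illustrative assumptions, and no Schwarz preconditioner is used.

      import numpy as np
      from scipy.optimize import newton_krylov

      n = 32
      h = 1.0 / (n + 1)

      def residual(u):
          """-u'' + u**3 = 1 on (0, 1), with homogeneous Dirichlet ends."""
          d2u = np.zeros_like(u)
          d2u[1:-1] = (u[2:] - 2*u[1:-1] + u[:-2]) / h**2
          d2u[0] = (u[1] - 2*u[0]) / h**2
          d2u[-1] = (u[-2] - 2*u[-1]) / h**2
          return -d2u + u**3 - 1.0

      # Krylov (LGMRES) solves inside Newton, via directional differencing
      u = newton_krylov(residual, np.zeros(n), method='lgmres')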

  14. Solution of a few nonlinear problems in aerodynamics by the finite elements and functional least squares methods. Ph.D. Thesis - Paris Univ.; [mathematical models of transonic flow using nonlinear equations

    NASA Technical Reports Server (NTRS)

    Periaux, J.

    1979-01-01

    The numerical simulation of the transonic flows of idealized fluids and of incompressible viscous fluids by nonlinear least squares methods is presented. The nonlinear equations, the boundary conditions, and the various constraints controlling the two types of flow are described. The standard iterative methods for solving a quasi-elliptic nonlinear partial differential equation are reviewed, with emphasis placed on two examples: the fixed point method applied to the Gelder functional in the case of compressible subsonic flows, and the Newton method used in the technique of decomposition of the lifting potential. The new abstract least squares method is discussed. It consists of replacing the nonlinear equation by a minimization problem in an H^{-1}-type Sobolev space.

  15. Reinforcement learning produces dominant strategies for the Iterated Prisoner's Dilemma.

    PubMed

    Harper, Marc; Knight, Vincent; Jones, Martin; Koutsovoulos, Georgios; Glynatsi, Nikoleta E; Campbell, Owen

    2017-01-01

    We present tournament results and several powerful strategies for the Iterated Prisoner's Dilemma created using reinforcement learning techniques (evolutionary and particle swarm algorithms). These strategies are trained to perform well against a corpus of over 170 distinct opponents, including many well-known and classic strategies. All the trained strategies win standard tournaments against the total collection of other opponents. The trained strategies, together with one particular human-designed strategy, are also the top performers in noisy tournaments.

  16. A Real-Time Data Acquisition and Processing Framework Based on FlexRIO FPGA and ITER Fast Plant System Controller

    NASA Astrophysics Data System (ADS)

    Yang, C.; Zheng, W.; Zhang, M.; Yuan, T.; Zhuang, G.; Pan, Y.

    2016-06-01

    Measurement and control of the plasma in real-time are critical for advanced Tokamak operation. It requires high speed real-time data acquisition and processing. ITER has designed the Fast Plant System Controllers (FPSC) for these purposes. At J-TEXT Tokamak, a real-time data acquisition and processing framework has been designed and implemented using standard ITER FPSC technologies. The main hardware components of this framework are an Industrial Personal Computer (IPC) with a real-time system and FlexRIO devices based on FPGA. With FlexRIO devices, data can be processed by FPGA in real-time before they are passed to the CPU. The software elements are based on a real-time framework which runs under Red Hat Enterprise Linux MRG-R and uses Experimental Physics and Industrial Control System (EPICS) for monitoring and configuring. That makes the framework accord with ITER FPSC standard technology. With this framework, any kind of data acquisition and processing FlexRIO FPGA program can be configured with a FPSC. An application using the framework has been implemented for the polarimeter-interferometer diagnostic system on J-TEXT. The application is able to extract phase-shift information from the intermediate frequency signal produced by the polarimeter-interferometer diagnostic system and calculate plasma density profile in real-time. Different algorithms implementations on the FlexRIO FPGA are compared in the paper.

  17. JRTF: A Flexible Software Framework for Real-Time Control in Magnetic Confinement Nuclear Fusion Experiments

    NASA Astrophysics Data System (ADS)

    Zhang, M.; Zheng, G. Z.; Zheng, W.; Chen, Z.; Yuan, T.; Yang, C.

    2016-04-01

    Magnetic confinement nuclear fusion experiments require various real-time control applications, such as plasma control. ITER has designed the Fast Plant System Controller (FPSC) for this job and has provided hardware and software standards and guidelines for building one. In order to develop various real-time FPSC applications efficiently, a flexible real-time software framework called the J-TEXT real-time framework (JRTF) has been developed by the J-TEXT tokamak team. JRTF allows developers to implement different functions as independent and reusable modules called Application Blocks (ABs). AB developers only need to focus on implementing the control tasks or the algorithms; the timing, scheduling, data sharing and eventing are handled by the JRTF pipelines. JRTF provides great flexibility in developing ABs: unit tests for ABs can be developed easily, and ABs can even be used in non-JRTF applications. JRTF also provides interfaces allowing JRTF applications to be configured and monitored at runtime. JRTF is compatible with ITER standard FPSC hardware and the ITER CODAC (Control, Data Access and Communication) Core software. It can be configured and monitored using EPICS (Experimental Physics and Industrial Control System). Moreover, JRTF can be ported to different platforms and integrated with supervisory control software other than EPICS. The paper presents the design and implementation of JRTF as well as brief test results.

  18. An iterative procedure for obtaining maximum-likelihood estimates of the parameters for a mixture of normal distributions

    NASA Technical Reports Server (NTRS)

    Peters, B. C., Jr.; Walker, H. F.

    1975-01-01

    A general iterative procedure is given for determining consistent maximum-likelihood estimates of the parameters of a mixture of normal distributions, corresponding to a local maximum of the log-likelihood function. In addition, Newton's method, a method of scoring, and modifications of these procedures are discussed.
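
    The procedure in question is an early relative of what is now called the EM algorithm. Below is a generic EM-style fixed-point iteration for a two-component univariate normal mixture; it is a sketch under standard assumptions, not a transcription of the authors' procedure.

      import numpy as np

      def em_two_normals(x, n_iter=200):
          """EM-style iteration for a two-component normal mixture."""
          w, mu1, mu2 = 0.5, x.min(), x.max()
          s1 = s2 = x.std()
          for _ in range(n_iter):
              # E-step: posterior probability that each point belongs to component 1
              p1 = w * np.exp(-0.5 * ((x - mu1) / s1)**2) / s1
              p2 = (1 - w) * np.exp(-0.5 * ((x - mu2) / s2)**2) / s2
              r = p1 / (p1 + p2)
              # M-step: responsibility-weighted maximum-likelihood updates
              w = r.mean()
              mu1 = (r * x).sum() / r.sum()
              mu2 = ((1 - r) * x).sum() / (1 - r).sum()
              s1 = np.sqrt((r * (x - mu1)**2).sum() / r.sum())
              s2 = np.sqrt(((1 - r) * (x - mu2)**2).sum() / (1 - r).sum())
          return w, (mu1, s1), (mu2, s2)

      rng = np.random.default_rng(1)
      x = np.concatenate([rng.normal(0, 1, 300), rng.normal(4, 0.5, 200)])
      print(em_two_normals(x))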

  19. A study on Marangoni convection by the variational iteration method

    NASA Astrophysics Data System (ADS)

    Karaoǧlu, Onur; Oturanç, Galip

    2012-09-01

    In this paper, we consider the use of the variational iteration method and Padé approximants for finding approximate solutions for a Marangoni convection induced flow over a free surface due to an imposed temperature gradient. The solutions are compared with numerical (fourth-order Runge-Kutta) solutions.

  20. Iteration of ultrasound aberration correction methods

    NASA Astrophysics Data System (ADS)

    Maasoey, Svein-Erik; Angelsen, Bjoern; Varslot, Trond

    2004-05-01

    Aberration in ultrasound medical imaging is usually modeled by time-delay and amplitude variations concentrated on the transmitting/receiving array. This filter process is here denoted a TDA filter. The TDA filter is an approximation to the physical aberration process, which occurs over an extended part of the human body wall. Estimation of the TDA filter, and performing correction on transmit and receive, has proven difficult. It has yet to be shown that this method works adequately for severe aberration. Estimation of the TDA filter can be iterated by retransmitting a corrected signal and re-estimating until a convergence criterion is fulfilled (adaptive imaging). Two methods for estimating time-delay and amplitude variations in receive signals from random scatterers have been developed. One method correlates each element signal with a reference signal. The other method uses eigenvalue decomposition of the receive cross-spectrum matrix, based upon a receive energy-maximizing criterion. Simulations of iterated aberration correction with a TDA filter have been investigated to study its convergence properties. A weak and a strong human-body-wall model were used to generate aberration; both emulated the human abdominal wall. Results after iteration improve aberration correction substantially, and both estimation methods converge, even for the case of strong aberration.
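
    The first estimation method, correlating each element signal with a reference and iterating the correction, can be sketched for integer-sample delays as follows. Amplitude correction, the eigenvalue-decomposition estimator, and any realistic body-wall model are omitted; all names and parameters are illustrative.

      import numpy as np

      def estimate_delays(signals, ref, max_lag=20):
          """Integer-sample delay of each element signal relative to a
          reference, from the peak of the circular cross-correlation."""
          lags = np.arange(-max_lag, max_lag + 1)
          return np.array([lags[int(np.argmax([np.dot(np.roll(s, -l), ref)
                                               for l in lags]))]
                           for s in signals])

      def iterate_correction(signals, ref, n_iter=5):
          """Re-estimate and apply delay corrections until they settle."""
          total = np.zeros(len(signals), dtype=int)
          sig = [s.copy() for s in signals]
          for _ in range(n_iter):
              d = estimate_delays(sig, ref)
              if np.all(d == 0):
                  break                       # convergence criterion fulfilled
              sig = [np.roll(s, -di) for s, di in zip(sig, d)]
              total += d
          return total

      rng = np.random.default_rng(0)
      ref = rng.normal(size=256)
      signals = [np.roll(ref, d) for d in (3, -5, 0, 7)]   # synthetic delays
      print(iterate_correction(signals, ref))              # -> [ 3 -5  0  7]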

  1. Performance Analysis of Entropy Methods on K Means in Clustering Process

    NASA Astrophysics Data System (ADS)

    Dicky Syahputra Lubis, Mhd.; Mawengkang, Herman; Suwilo, Saib

    2017-12-01

    K Means is a non-hierarchical data clustering method that partitions data into one or more clusters/groups, so that data with the same characteristics fall into the same cluster and data with different characteristics fall into other clusters. The purpose of this clustering is to minimize an objective function set in the clustering process, which generally attempts to minimize the variation within a cluster and maximize the variation between clusters. However, the method has two main disadvantages: the number k is often not known beforehand, and randomly chosen starting points may place two initial centroids too close to each other. Therefore, the entropy method is used to determine the starting points in K Means; it is a method that can assign weights and take a decision from a set of alternatives. Entropy is able to measure the discrimination among a multitude of data sets: criteria with the highest variation in values receive the highest weight. The entropy method thus assists the K Means process in determining the starting points, which are usually chosen at random, and the resulting iteration converges faster than the standard K Means process. On the postoperative-patient dataset from the UCI Machine Learning Repository, using only 12 records as a worked example, the entropy-based method reaches the desired final result in only 2 iterations.
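
    The entropy-based seeding is only loosely specified in the abstract; one plausible reading is sketched below, where candidate seeds are scored by a Shannon-entropy criterion and spread apart before standard Lloyd iterations are run. The scoring rule and the greedy spreading step are assumptions made for illustration.

      import numpy as np

      def entropy_seeds(X, k):
          """Pick k starting centroids by an entropy score instead of pure chance."""
          P = (X - X.min(0)) / (X.max(0) - X.min(0) + 1e-12)  # scale to [0, 1]
          P = P / (P.sum(1, keepdims=True) + 1e-12)           # rows as distributions
          H = -(P * np.log(P + 1e-12)).sum(1)                 # Shannon entropy per point
          seeds = [int(np.argmax(H))]
          for _ in range(k - 1):      # greedy: high entropy, far from chosen seeds
              d = np.min([np.linalg.norm(X - X[s], axis=1) for s in seeds], axis=0)
              seeds.append(int(np.argmax(H * d)))
          return X[seeds].copy()

      def kmeans(X, k, n_iter=100):
          C = entropy_seeds(X, k)
          for _ in range(n_iter):
              labels = np.argmin(((X[:, None, :] - C[None])**2).sum(-1), axis=1)
              newC = np.array([X[labels == j].mean(0) if np.any(labels == j)
                               else C[j] for j in range(k)])
              if np.allclose(newC, C):
                  break
              C = newC
          return labels, C

      rng = np.random.default_rng(0)
      X = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal(5, 1, (50, 2))])
      labels, C = kmeans(X, 2)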

  2. Building Better Courses: Examining the Content Validity of the iNACOL National Standards for Quality Online Courses

    ERIC Educational Resources Information Center

    Adelstein, David; Barbour, Michael

    2016-01-01

    In 2011, the International Association for K-12 Online Learning released the second iteration of the "National Standards for Quality Online Courses." These standards have been used by numerous institutions and states around the country to help design and create K-12 online courses. However, there has been no reported research on the…

  3. GoldenBraid: An Iterative Cloning System for Standardized Assembly of Reusable Genetic Modules

    PubMed Central

    Sarrion-Perdigones, Alejandro; Falconi, Erica Elvira; Zandalinas, Sara I.; Juárez, Paloma; Fernández-del-Carmen, Asun; Granell, Antonio; Orzaez, Diego

    2011-01-01

    Synthetic Biology requires efficient and versatile DNA assembly systems to facilitate the building of new genetic modules/pathways from basic DNA parts in a standardized way. Here we present GoldenBraid (GB), a standardized assembly system based on type IIS restriction enzymes that allows the indefinite growth of reusable gene modules made of standardized DNA pieces. The GB system consists of a set of four destination plasmids (pDGBs) designed to incorporate multipartite assemblies made of standard DNA parts and to combine them binarily to build increasingly complex multigene constructs. The relative position of type IIS restriction sites inside pDGB vectors introduces a double loop (“braid”) topology in the cloning strategy that allows the indefinite growth of composite parts through the succession of iterative assembling steps, while the overall simplicity of the system is maintained. We propose the use of GoldenBraid as an assembly standard for Plant Synthetic Biology. For this purpose we have GB-adapted a set of binary plasmids for A. tumefaciens-mediated plant transformation. Fast GB-engineering of several multigene T-DNAs, including two alternative modules made of five reusable devices each, and comprising a total of 19 basic parts are also described. PMID:21750718

  5. Flexible Method for Developing Tactics, Techniques, and Procedures for Future Capabilities

    DTIC Science & Technology

    2009-02-01

    levels of ability, military experience, and motivation, (b) number and type of significant events, and (c) other sources of natural variability...research has developed a number of specific instruments designed to aid in this process. Second, the iterative, feed-forward nature of the method allows...FLEX method), but still lack the structured KE approach and iterative, feed-forward nature of the FLEX method. To facilitate decision making

  6. An iterative method for the Helmholtz equation

    NASA Technical Reports Server (NTRS)

    Bayliss, A.; Goldstein, C. I.; Turkel, E.

    1983-01-01

    An iterative algorithm for the solution of the Helmholtz equation is developed. The algorithm is based on a preconditioned conjugate gradient iteration for the normal equations. The preconditioning is based on an SSOR sweep for the discrete Laplacian. Numerical results are presented for a wide variety of problems of physical interest and demonstrate the effectiveness of the algorithm.

  7. An iterative method for obtaining the optimum lightning location on a spherical surface

    NASA Technical Reports Server (NTRS)

    Chao, Gao; Qiming, MA

    1991-01-01

    A brief introduction to the basic principles of an eigen method used to obtain the optimum source location of lightning is presented. The location of the optimum source is obtained by using multiple direction finders (DFs) on a spherical surface. An improvement of this method, which takes the source-DF distance as a constant, is presented. It is pointed out that using signal strength as a weight factor is not ideal, because the inverse relation between signal strength and distance is inexact and the measured signal amplitude is inaccurate. An iterative calculation method is therefore presented that uses the distance from the source to the DF as the weight factor. This improved method has higher accuracy and needs only a little more calculation time. Some computer simulations for a 4-DF system are presented to show the improvement of location through use of the iterative method.

  8. On iterative algorithms for quantitative photoacoustic tomography in the radiative transport regime

    NASA Astrophysics Data System (ADS)

    Wang, Chao; Zhou, Tie

    2017-11-01

    In this paper, we present a numerical reconstruction method for quantitative photoacoustic tomography (QPAT), based on the radiative transfer equation (RTE), which models light propagation more accurately than diffusion approximation (DA). We investigate the reconstruction of absorption coefficient and scattering coefficient of biological tissues. An improved fixed-point iterative method to retrieve the absorption coefficient, given the scattering coefficient, is proposed for its cheap computational cost; the convergence of this method is also proved. The Barzilai-Borwein (BB) method is applied to retrieve two coefficients simultaneously. Since the reconstruction of optical coefficients involves the solutions of original and adjoint RTEs in the framework of optimization, an efficient solver with high accuracy is developed from Gao and Zhao (2009 Transp. Theory Stat. Phys. 38 149-92). Simulation experiments illustrate that the improved fixed-point iterative method and the BB method are competitive methods for QPAT in the relevant cases.

  9. Global transport in a nonautonomous periodic standard map

    DOE PAGES

    Calleja, Renato C.; del-Castillo-Negrete, D.; Martinez-del-Rio, D.; ...

    2017-04-14

    A non-autonomous version of the standard map with a periodic variation of the perturbation parameter is introduced and studied via an autonomous map obtained from the iteration of the nonautonomous map over a period. Symmetry properties in the variables and parameters of the map are found and used to find relations between rotation numbers of invariant sets. The role of the nonautonomous dynamics on period-one orbits, stability and bifurcation is studied. The critical boundaries for the global transport and for the destruction of invariant circles with fixed rotation number are studied in detail using direct computation and a continuation method. In the case of global transport, the critical boundary has a particular symmetrical horn shape. Here, the results are contrasted with similar calculations found in the literature.

  11. An analytically iterative method for solving problems of cosmic-ray modulation

    NASA Astrophysics Data System (ADS)

    Kolesnyk, Yuriy L.; Bobik, Pavol; Shakhov, Boris A.; Putis, Marian

    2017-09-01

    The development of an analytically iterative method for solving steady-state as well as unsteady-state problems of cosmic-ray (CR) modulation is proposed. Iterations for obtaining the solutions are constructed for the spherically symmetric form of the CR propagation equation. The main solution of the considered problem consists of the zero-order solution, obtained in the initial iteration, and amendments that may be obtained by subsequent iterations. The finding of the zero-order solution is based on the CR isotropy during propagation in space, whereas the anisotropy is taken into account when finding the subsequent amendments. To begin with, the method is applied to solve the problem of CR modulation in which the diffusion coefficient κ and the solar wind speed u are constants, with a local interstellar spectrum (LIS). The solution obtained with two iterations was compared with an analytical solution and with numerical solutions. Finally, solutions with only one iteration for two problems of CR modulation with u = constant and the same form of LIS were obtained and tested against numerical solutions. For the first problem, κ is proportional to the momentum of the particle p, so it has the form κ = k0 η, where η = p/(m_0 c). For the second problem, the diffusion coefficient is given in the form κ = k0 β η, where β = v/c is the particle speed relative to the speed of light. The obtained solutions matched well with the numerical solutions, as well as with the analytical solution for the problem with κ = constant.

  12. Nonnegative least-squares image deblurring: improved gradient projection approaches

    NASA Astrophysics Data System (ADS)

    Benvenuto, F.; Zanella, R.; Zanni, L.; Bertero, M.

    2010-02-01

    The least-squares approach to image deblurring leads to an ill-posed problem. The addition of the nonnegativity constraint, when appropriate, does not provide regularization, even if, as far as we know, a thorough investigation of the ill-posedness of the resulting constrained least-squares problem has still to be done. Iterative methods, converging to nonnegative least-squares solutions, have been proposed. Some of them have the 'semi-convergence' property, i.e. early stopping of the iteration provides 'regularized' solutions. In this paper we consider two of these methods: the projected Landweber (PL) method and the iterative image space reconstruction algorithm (ISRA). Although they work well in many instances, they are not frequently used in practice because, in general, they require a large number of iterations before providing a sensible solution. Therefore, the main purpose of this paper is to refresh these methods by increasing their efficiency. Starting from the remark that PL and ISRA require only the computation of the gradient of the functional, we propose the application to these algorithms of special acceleration techniques that have been recently developed in the area of the gradient methods. In particular, we propose the application of efficient step-length selection rules and line-search strategies. Moreover, remarking that ISRA is a scaled gradient algorithm, we evaluate its behaviour in comparison with a recent scaled gradient projection (SGP) method for image deblurring. Numerical experiments demonstrate that the accelerated methods still exhibit the semi-convergence property, with a considerable gain both in the number of iterations and in the computational time; in particular, SGP appears to be clearly the most efficient one.
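
    The projected Landweber update itself is a one-liner; a minimal sketch with a fixed step length on a synthetic problem is shown below. The accelerated variants that the paper proposes (step-length selection rules, line searches, scaling) are not reproduced here.

      import numpy as np

      def projected_landweber(A, b, n_iter=500):
          """Nonnegative least squares by projected Landweber:
          x <- max(x + tau * A^T (b - A x), 0), with tau < 2 / ||A||_2**2."""
          tau = 1.0 / np.linalg.norm(A, 2)**2
          x = np.zeros(A.shape[1])
          for _ in range(n_iter):
              x = np.maximum(x + tau * A.T @ (b - A @ x), 0.0)
          return x

      rng = np.random.default_rng(0)
      A = rng.random((50, 30))
      x_true = np.maximum(rng.normal(size=30), 0)
      x_rec = projected_landweber(A, A @ x_true + 0.01 * rng.normal(size=50))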

  13. Physics Model-Based Scatter Correction in Multi-Source Interior Computed Tomography.

    PubMed

    Gong, Hao; Li, Bin; Jia, Xun; Cao, Guohua

    2018-02-01

    Multi-source interior computed tomography (CT) has a great potential to provide ultra-fast and organ-oriented imaging at low radiation dose. However, X-ray cross scattering from multiple simultaneously activated X-ray imaging chains compromises imaging quality. Previously, we published two hardware-based scatter correction methods for multi-source interior CT. Here, we propose a software-based scatter correction method, with the benefit that no hardware modifications are needed. The new method is based on a physics model and an iterative framework. The physics model was derived analytically, and was used to calculate X-ray scattering signals in both the forward and cross directions in multi-source interior CT. The physics model was integrated into an iterative scatter correction framework to reduce scatter artifacts. The method was applied to phantom data from both Monte Carlo simulations and physical experimentation that were designed to emulate the image acquisition in a multi-source interior CT architecture recently proposed by our team. The proposed scatter correction method reduced scatter artifacts significantly, even with only one iteration. Within a few iterations, the reconstructed images converged quickly toward the "scatter-free" reference images. After applying the scatter correction method, the maximum CT number error at the regions of interest (ROIs) was reduced to 46 HU in the numerical phantom dataset and 48 HU in the physical phantom dataset, and the contrast-to-noise ratio at those ROIs increased by up to 44.3% and 19.7%, respectively. The proposed physics model-based iterative scatter correction method could be useful for scatter correction in dual-source or multi-source CT.

  14. Unsupervised iterative detection of land mines in highly cluttered environments.

    PubMed

    Batman, Sinan; Goutsias, John

    2003-01-01

    An unsupervised iterative scheme is proposed for land mine detection in heavily cluttered scenes. This scheme is based on iterating hybrid multispectral filters that consist of a decorrelating linear transform coupled with a nonlinear morphological detector. Detections extracted from the first pass are used to improve results in subsequent iterations. The procedure stops after a predetermined number of iterations. The proposed scheme addresses several weaknesses associated with previous adaptations of morphological approaches to land mine detection. Improvement in detection performance, robustness with respect to clutter inhomogeneities, a completely unsupervised operation, and computational efficiency are the main highlights of the method. Experimental results reveal excellent performance.

  15. A third-generation density-functional-theory-based method for calculating canonical molecular orbitals of large molecules.

    PubMed

    Hirano, Toshiyuki; Sato, Fumitoshi

    2014-07-28

    We used grid-free modified Cholesky decomposition (CD) to develop a density-functional-theory (DFT)-based method for calculating the canonical molecular orbitals (CMOs) of large molecules. Our method can be used to calculate standard CMOs, analytically compute exchange-correlation terms, and maximise the capacity of next-generation supercomputers. Cholesky vectors were first analytically downscaled using low-rank pivoted CD and CD with adaptive metric (CDAM). The obtained Cholesky vectors were distributed and stored on each computer node in a parallel computer, and the Coulomb, Fock exchange, and pure exchange-correlation terms were calculated by multiplying the Cholesky vectors without evaluating molecular integrals in self-consistent field iterations. Our method enables DFT and massively distributed memory parallel computers to be used in order to very efficiently calculate the CMOs of large molecules.
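
    Low-rank pivoted Cholesky decomposition, the downscaling step mentioned above, has a short textbook form. Below is a generic dense-matrix sketch for a symmetric positive semidefinite matrix; it is not the grid-free or adaptive-metric (CDAM) variant used by the authors.

      import numpy as np

      def pivoted_cholesky(A, tol=1e-8, max_rank=None):
          """Low-rank pivoted Cholesky: L (n x r) with A ~= L @ L.T,
          for symmetric positive semidefinite A."""
          n = A.shape[0]
          max_rank = max_rank or n
          d = np.diag(A).astype(float).copy()      # Schur-complement diagonal
          L = np.zeros((n, max_rank))
          for r in range(max_rank):
              i = int(np.argmax(d))                # diagonal pivoting
              if d[i] <= tol:
                  return L[:, :r]                  # remaining error below tol
              L[:, r] = (A[:, i] - L[:, :r] @ L[i, :r]) / np.sqrt(d[i])
              d -= L[:, r]**2
          return L

      rng = np.random.default_rng(0)
      B = rng.random((6, 3)); A = B @ B.T          # rank-3 PSD matrix
      L = pivoted_cholesky(A)
      print(L.shape, np.allclose(A, L @ L.T, atol=1e-6))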

  16. Non-iterative double-frame 2D/3D particle tracking velocimetry

    NASA Astrophysics Data System (ADS)

    Fuchs, Thomas; Hain, Rainer; Kähler, Christian J.

    2017-09-01

    In recent years, the detection of individual particle images and their tracking over time to determine the local flow velocity has become quite popular for planar and volumetric measurements. Particle tracking velocimetry has strong advantages compared to the statistical analysis of an ensemble of particle images by means of cross-correlation approaches, such as particle image velocimetry. Tracking individual particles does not suffer from spatial averaging and therefore bias errors can be avoided. Furthermore, the spatial resolution can be increased up to the sub-pixel level for mean fields. A maximization of the spatial resolution for instantaneous measurements requires high seeding concentrations. However, it is still challenging to track particles at high seeding concentrations, if no time series is available. Tracking methods used under these conditions are typically very complex iterative algorithms, which require expert knowledge due to the large number of adjustable parameters. To overcome these drawbacks, a new non-iterative tracking approach is introduced in this letter, which automatically analyzes the motion of the neighboring particles without requiring any parameters to be specified, except for the displacement limits. This makes the algorithm very user-friendly and also enables inexperienced users to apply and implement particle tracking. In addition, the algorithm enables measurements of high-speed flows using standard double-pulse equipment and estimates the flow velocity reliably even at large particle image densities.

  17. Development of an efficient multigrid method for the NEM form of the multigroup neutron diffusion equation

    NASA Astrophysics Data System (ADS)

    Al-Chalabi, Rifat M. Khalil

    1997-09-01

    Development of an improvement to the computational efficiency of the existing nested iterative solution strategy of the Nodal Expansion Method (NEM) nodal-based neutron diffusion code NESTLE is presented. The improvement in the solution strategy is the result of developing a multilevel acceleration scheme that does not suffer from the numerical stalling associated with a number of iterative solution methods. The acceleration scheme is based on the multigrid method, which is specifically adapted for incorporation into the NEM nonlinear iterative strategy. This scheme optimizes the computational interplay between the spatial discretization and the NEM nonlinear iterative solution process through the use of the multigrid method. The combination of the NEM nodal method, calculation of the homogenized, neutron nodal balance coefficients (i.e. restriction operator), efficient underlying smoothing algorithm (power method of NESTLE), and the finer mesh reconstruction algorithm (i.e. prolongation operator), all operating on a sequence of coarser spatial nodes, constitutes the multilevel acceleration scheme employed in this research. Two implementations of the multigrid method into the NESTLE code were examined; the Imbedded NEM Strategy and the Imbedded CMFD Strategy. The main difference in implementation between the two methods is that in the Imbedded NEM Strategy, the NEM solution is required at every MG level. Numerical tests have shown that the Imbedded NEM Strategy suffers from divergence at coarse-grid levels, hence all the results for the different benchmarks presented here were obtained using the Imbedded CMFD Strategy. The novelties in the developed MG method are as follows: the formulation of the restriction and prolongation operators, and the selection of the relaxation method. The restriction operator utilizes a variation of the reactor-physics-consistent homogenization technique. The prolongation operator is based upon a variant of the pin power reconstruction methodology. The relaxation method, which is the power method, utilizes a constant coefficient matrix within the NEM non-linear iterative strategy. The choice of the MG nesting within the nested iterative strategy enables the incorporation of other non-linear effects with no additional coding effort. In addition, if an eigenvalue problem is being solved, it remains an eigenvalue problem at all grid levels, simplifying coding implementation. The merit of the developed MG method was tested by incorporating it into the NESTLE iterative solver, and employing it to solve four different benchmark problems. In addition to the base cases, three different sensitivity studies are performed, examining the effects of number of MG levels, homogenized coupling coefficients correction (i.e. restriction operator), and fine-mesh reconstruction algorithm (i.e. prolongation operator). The multilevel acceleration scheme developed in this research provides the foundation for developing adaptive multilevel acceleration methods for steady-state and transient NEM nodal neutron diffusion equations. (Abstract shortened by UMI.)

  18. Anderson Acceleration for Fixed-Point Iterations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Walker, Homer F.

    The purpose of this grant was to support research on acceleration methods for fixed-point iterations, with applications to computational frameworks and simulation problems that are of interest to DOE.

  19. Fast implementations of reconstruction-based scatter compensation in fully 3D SPECT image reconstruction

    NASA Astrophysics Data System (ADS)

    Kadrmas, Dan J.; Frey, Eric C.; Karimi, Seemeen S.; Tsui, Benjamin M. W.

    1998-04-01

    Accurate scatter compensation in SPECT can be performed by modelling the scatter response function during the reconstruction process. This method is called reconstruction-based scatter compensation (RBSC). It has been shown that RBSC has a number of advantages over other methods of compensating for scatter, but using RBSC for fully 3D compensation has resulted in prohibitively long reconstruction times. In this work we propose two new methods that can be used in conjunction with existing methods to achieve marked reductions in RBSC reconstruction times. The first method, coarse-grid scatter modelling, significantly accelerates the scatter model by exploiting the fact that scatter is dominated by low-frequency information. The second method, intermittent RBSC, further accelerates the reconstruction process by limiting the number of iterations during which scatter is modelled. The fast implementations were evaluated using a Monte Carlo simulated experiment of the 3D MCAT phantom with tracer, and also using experimentally acquired data with tracer. Results indicated that these fast methods can reconstruct, with fully 3D compensation, images very similar to those obtained using standard RBSC methods, and in reconstruction times that are an order of magnitude shorter. Using these methods, fully 3D iterative reconstruction with RBSC can be performed well within the realm of clinically realistic times (under 10 minutes for image reconstruction).
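
    The second idea, intermittent RBSC, can be shown in toy form with a generic MLEM loop in which the expensive scatter estimate is refreshed only every few iterations. The system matrix, the smooth-blur "scatter model" and all parameter values below are assumptions for illustration, not the authors' SPECT model.

      import numpy as np

      def mlem_intermittent(P, y, scatter_model, n_iter=30, scatter_every=5):
          """MLEM with the costly scatter estimate refreshed intermittently."""
          x = np.ones(P.shape[1])
          sens = P.T @ np.ones(P.shape[0])
          s = np.zeros(P.shape[0])
          for it in range(n_iter):
              if it % scatter_every == 0:
                  s = scatter_model(x)       # slow physics model, called rarely
              proj = P @ x + s + 1e-12
              x *= (P.T @ (y / proj)) / sens
          return x

      rng = np.random.default_rng(0)
      P = rng.random((80, 40))
      kernel = np.ones(9) / 9.0
      scatter = lambda x: 0.2 * np.convolve(P @ x, kernel, mode='same')
      x_true = rng.random(40)
      y = P @ x_true + scatter(x_true)
      x_rec = mlem_intermittent(P, y, scatter)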

  1. Enthalpy-based multiple-relaxation-time lattice Boltzmann method for solid-liquid phase-change heat transfer in metal foams

    NASA Astrophysics Data System (ADS)

    Liu, Qing; He, Ya-Ling; Li, Qing

    2017-08-01

    In this paper, an enthalpy-based multiple-relaxation-time (MRT) lattice Boltzmann (LB) method is developed for solid-liquid phase-change heat transfer in metal foams under the local thermal nonequilibrium (LTNE) condition. The enthalpy-based MRT-LB method consists of three different MRT-LB models: one for flow field based on the generalized non-Darcy model, and the other two for phase-change material (PCM) and metal-foam temperature fields described by the LTNE model. The moving solid-liquid phase interface is implicitly tracked through the liquid fraction, which is simultaneously obtained when the energy equations of PCM and metal foam are solved. The present method has several distinctive features. First, as compared with previous studies, the present method avoids the iteration procedure; thus it retains the inherent merits of the standard LB method and is superior to the iteration method in terms of accuracy and computational efficiency. Second, a volumetric LB scheme instead of the bounce-back scheme is employed to realize the no-slip velocity condition in the interface and solid phase regions, which is consistent with the actual situation. Last but not least, the MRT collision model is employed, and with additional degrees of freedom, it has the ability to reduce the numerical diffusion across the phase interface induced by solid-liquid phase change. Numerical tests demonstrate that the present method can serve as an accurate and efficient numerical tool for studying metal-foam enhanced solid-liquid phase-change heat transfer in latent heat storage. Finally, comparisons and discussions are made to offer useful information for practical applications of the present method.
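
    The implicit interface tracking via the liquid fraction can be illustrated outside the lattice Boltzmann framework: from the nodal enthalpy, the liquid fraction and temperature follow algebraically, with no iteration. The 1D explicit-conduction toy below (periodic boundaries, lumped units, a linear mushy zone of width 2*dT) is a sketch under these stated assumptions, not the MRT-LB scheme.

      import numpy as np

      def liquid_fraction(H, H_s, H_l):
          """H < H_s solid, H > H_l liquid, linear in the mushy zone between."""
          return np.clip((H - H_s) / (H_l - H_s), 0.0, 1.0)

      def step(T, H, k, rho, cp, L, dx, dt, T_m, dT=0.5):
          lap = (np.roll(T, 1) - 2*T + np.roll(T, -1)) / dx**2
          H = H + dt * k * lap / rho                   # energy equation
          H_s, H_l = cp*(T_m - dT), cp*(T_m + dT) + L  # mushy-zone enthalpy bounds
          fl = liquid_fraction(H, H_s, H_l)            # no iteration needed
          T = (H - fl*L) / cp                          # back out temperature
          return T, H, fl

      n, dx, dt = 64, 1.0, 0.1
      T = np.full(n, -1.0); T[:8] = 2.0                # hot end, cold bulk
      cp, L, k, rho, T_m = 1.0, 5.0, 1.0, 1.0, 0.0
      H = cp*T + (T > T_m) * L
      for _ in range(200):
          T, H, fl = step(T, H, k, rho, cp, L, dx, dt, T_m)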

  2. Modeling design iteration in product design and development and its solution by a novel artificial bee colony algorithm.

    PubMed

    Chen, Tinggui; Xiao, Renbin

    2014-01-01

    Due to fierce market competition, the ability to improve product quality and reduce development cost determines the core competitiveness of enterprises. Design iteration generally increases product cost and delays development time, so identifying and modeling couplings among tasks in product design and development has become an important issue for enterprises to settle. In this paper, the shortcomings of the classic work transformation matrix (WTM) model are discussed, and the tearing approach together with an inner-iteration method is used to complement it. In addition, the artificial bee colony (ABC) algorithm is introduced to find the optimal decoupling schemes. Firstly, the tearing approach and the inner-iteration method are analyzed for solving coupled task sets. Secondly, a hybrid iteration model combining these two techniques is set up. Thirdly, a high-performance swarm intelligence algorithm, artificial bee colony, is adopted for problem-solving. Finally, an engineering design of a chemical processing system is given in order to verify the model's reasonability and effectiveness.
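
    A generic artificial bee colony loop (employed, onlooker and scout phases) is sketched below on a simple continuous test function. The mapping from decoupling schemes in the WTM model to solution vectors is problem-specific and omitted; all parameter values are illustrative.

      import numpy as np

      def abc_minimize(f, bounds, n_food=20, n_iter=200, limit=20, seed=0):
          """Minimal artificial bee colony for continuous minimization."""
          rng = np.random.default_rng(seed)
          lo, hi = bounds
          dim = len(lo)
          X = rng.uniform(lo, hi, (n_food, dim))
          fit = np.array([f(x) for x in X])
          trials = np.zeros(n_food, dtype=int)

          def try_neighbour(i):
              k = rng.integers(n_food - 1); k += (k >= i)     # partner != i
              j = rng.integers(dim)
              v = X[i].copy()
              v[j] += rng.uniform(-1, 1) * (X[i, j] - X[k, j])
              v = np.clip(v, lo, hi)
              fv = f(v)
              if fv < fit[i]:
                  X[i], fit[i], trials[i] = v, fv, 0
              else:
                  trials[i] += 1

          for _ in range(n_iter):
              for i in range(n_food):                    # employed bees
                  try_neighbour(i)
              p = fit.max() - fit + 1e-12; p /= p.sum()
              for i in rng.choice(n_food, n_food, p=p):  # onlookers favour good food
                  try_neighbour(i)
              for i in np.where(trials > limit)[0]:      # scouts abandon stale sources
                  X[i] = rng.uniform(lo, hi, dim)
                  fit[i], trials[i] = f(X[i]), 0
          return X[np.argmin(fit)], fit.min()

      best, val = abc_minimize(lambda x: ((x - 1)**2).sum(),
                               (np.full(4, -5.0), np.full(4, 5.0)))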

  3. Hyperspectral chemical plume detection algorithms based on multidimensional iterative filtering decomposition.

    PubMed

    Cicone, A; Liu, J; Zhou, H

    2016-04-13

    Chemicals released in the air can be extremely dangerous for human beings and the environment. Hyperspectral images can be used to identify chemical plumes, however the task can be extremely challenging. Assuming we know a priori that some chemical plume, with a known frequency spectrum, has been photographed using a hyperspectral sensor, we can use standard techniques such as the so-called matched filter or adaptive cosine estimator, plus a properly chosen threshold value, to identify the position of the chemical plume. However, due to noise and inadequate sensing, the accurate identification of chemical pixels is not easy even in this apparently simple situation. In this paper, we present a post-processing tool that, in a completely adaptive and data-driven fashion, allows us to improve the performance of any classification methods in identifying the boundaries of a plume. This is done using the multidimensional iterative filtering (MIF) algorithm (Cicone et al. 2014 (http://arxiv.org/abs/1411.6051); Cicone & Zhou 2015 (http://arxiv.org/abs/1507.07173)), which is a non-stationary signal decomposition method like the pioneering empirical mode decomposition method (Huang et al. 1998 Proc. R. Soc. Lond. A 454, 903. (doi:10.1098/rspa.1998.0193)). Moreover, based on the MIF technique, we propose also a pre-processing method that allows us to decorrelate and mean-centre a hyperspectral dataset. The cosine similarity measure, which often fails in practice, appears to become a successful and outperforming classifier when equipped with such a pre-processing method. We show some examples of the proposed methods when applied to real-life problems.

  4. C–IBI: Targeting cumulative coordination within an iterative protocol to derive coarse-grained models of (multi-component) complex fluids

    DOE PAGES

    de Oliveira, Tiago E.; Netz, Paulo A.; Kremer, Kurt; ...

    2016-05-03

    We present a coarse-graining strategy that we test for aqueous mixtures. The method uses pair-wise cumulative coordination as a target function within an iterative Boltzmann inversion (IBI) like protocol. We name this method coordination iterative Boltzmann inversion (C–IBI). While the underlying coarse-grained model is still structure based and, thus, preserves pair-wise solution structure, our method also reproduces solvation thermodynamics of binary and/or ternary mixtures. In addition, we observe much faster convergence with C–IBI than with IBI. To validate the robustness, we apply C–IBI to test cases of the solvation thermodynamics of aqueous urea and of triglycine in aqueous urea.

  5. Multiple solution of linear algebraic systems by an iterative method with recomputed preconditioner in the analysis of microstrip structures

    NASA Astrophysics Data System (ADS)

    Ahunov, Roman R.; Kuksenko, Sergey P.; Gazizov, Talgat R.

    2016-06-01

    The multiple solution of linear algebraic systems with a dense matrix by iterative methods is considered. To accelerate the process, recomputing of the preconditioning matrix is used. An a priori condition for the recomputing, based on the change of the arithmetic mean of the solution times observed during the multiple solution, is proposed. To confirm the effectiveness of the proposed approach, numerical experiments using the iterative methods BiCGStab and CGS for four different sets of matrices on two examples of microstrip structures are carried out. For the solution of 100 linear systems an acceleration of up to 1.6 times, compared to the approach without recomputing, is obtained.
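
    A minimal version of the recompute-on-degradation idea, using SciPy's incomplete LU factorization as the preconditioner and the running mean of solve times as the a priori condition, might look as follows; the threshold and the sequence of shifted test matrices are illustrative assumptions.

      import time
      import numpy as np
      import scipy.sparse as sp
      from scipy.sparse.linalg import LinearOperator, bicgstab, spilu

      def solve_sequence(matrices, b, thresh=1.5):
          """Solve a sequence of related systems, recomputing the ILU
          preconditioner when the solve time degrades past the mean."""
          M, times = None, []
          for A in matrices:
              if M is None:
                  ilu = spilu(A.tocsc())
                  M = LinearOperator(A.shape, ilu.solve)
              t0 = time.perf_counter()
              x, info = bicgstab(A, b, M=M)
              dt = time.perf_counter() - t0
              if times and dt > thresh * np.mean(times):
                  M = None                # stale preconditioner: rebuild next time
              times.append(dt)
          return x

      n = 400
      base = sp.diags([-1, 4, -1], [-1, 0, 1], shape=(n, n), format='csc')
      mats = [(base + 0.01 * k * sp.eye(n, format='csc')).tocsc() for k in range(10)]
      x = solve_sequence(mats, np.ones(n))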

  6. Iterative-method performance evaluation for multiple vectors associated with a large-scale sparse matrix

    NASA Astrophysics Data System (ADS)

    Imamura, Seigo; Ono, Kenji; Yokokawa, Mitsuo

    2016-07-01

    Ensemble computing, which is an instance of capacity computing, is an effective computing scenario for exascale parallel supercomputers. In ensemble computing, there are multiple linear systems associated with a common coefficient matrix. We improve the performance of iterative solvers for multiple vectors by solving them at the same time, that is, by solving for the product of the matrices. We implemented several iterative methods and compared their performance. The maximum performance on Sparc VIIIfx was 7.6 times higher than that of a naïve implementation. Finally, to deal with the different convergence processes of linear systems, we introduced a control method to eliminate the calculation of already converged vectors.
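
    Solving several right-hand sides at once amounts to replacing matrix-vector by matrix-matrix products. The sketch below runs conjugate gradient columnwise on an SPD test matrix and freezes columns that have already converged, in the spirit of the control method mentioned above; the paper's solvers and hardware specifics are not reproduced.

      import numpy as np

      def cg_multi(A, B, tol=1e-8, max_iter=1000):
          """CG on many right-hand sides at once; each column keeps its own
          alpha/beta, and converged columns are frozen (alpha = beta = 0)."""
          X = np.zeros_like(B)
          R = B - A @ X
          P = R.copy()
          rs = (R * R).sum(0)                     # squared residual per column
          active = rs > tol**2
          for _ in range(max_iter):
              if not active.any():
                  break
              AP = A @ P
              pAp = (P * AP).sum(0)
              alpha = np.zeros_like(rs); alpha[active] = rs[active] / pAp[active]
              X += alpha * P
              R -= alpha * AP
              rs_new = (R * R).sum(0)
              beta = np.zeros_like(rs); beta[active] = rs_new[active] / rs[active]
              P = R + beta * P
              rs = rs_new
              active = rs > tol**2
          return X

      rng = np.random.default_rng(0)
      A = rng.random((50, 50)); A = A @ A.T + 50 * np.eye(50)   # SPD test matrix
      B = rng.random((50, 8))
      X = cg_multi(A, B)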

  7. 3D unstructured-mesh radiation transport codes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Morel, J.

    1997-12-31

    Three unstructured-mesh radiation transport codes are currently being developed at Los Alamos National Laboratory. The first code is ATTILA, which uses an unstructured tetrahedral mesh in conjunction with standard Sn (discrete-ordinates) angular discretization, standard multigroup energy discretization, and linear-discontinuous spatial differencing. ATTILA solves the standard first-order form of the transport equation using source iteration in conjunction with diffusion-synthetic acceleration of the within-group source iterations, and is designed to run primarily on workstations. The second code is DANTE, which uses a hybrid finite-element mesh consisting of arbitrary combinations of hexahedra, wedges, pyramids, and tetrahedra. DANTE solves several second-order self-adjoint forms of the transport equation including the even-parity equation, the odd-parity equation, and a new equation called the self-adjoint angular flux equation. DANTE also offers three angular discretization options: Sn (discrete-ordinates), Pn (spherical harmonics), and SPn (simplified spherical harmonics). DANTE is designed to run primarily on massively parallel message-passing machines, such as the ASCI-Blue machines at LANL and LLNL. The third code is PERICLES, which uses the same hybrid finite-element mesh as DANTE, but solves the standard first-order form of the transport equation rather than a second-order self-adjoint form. PERICLES uses a standard Sn discretization in angle in conjunction with trilinear-discontinuous spatial differencing, and diffusion-synthetic acceleration of the within-group source iterations. PERICLES was initially designed to run on workstations, but a version for massively parallel message-passing machines will be built. The three codes will be described in detail and computational results will be presented.

  9. Radiation dose reduction in medical x-ray CT via Fourier-based iterative reconstruction

    PubMed Central

    Fahimian, Benjamin P.; Zhao, Yunzhe; Huang, Zhifeng; Fung, Russell; Mao, Yu; Zhu, Chun; Khatonabadi, Maryam; DeMarco, John J.; Osher, Stanley J.; McNitt-Gray, Michael F.; Miao, Jianwei

    2013-01-01

    Purpose: A Fourier-based iterative reconstruction technique, termed Equally Sloped Tomography (EST), is developed in conjunction with advanced mathematical regularization to investigate radiation dose reduction in x-ray CT. The method is experimentally implemented on fan-beam CT and evaluated as a function of imaging dose on a series of image quality phantoms and anonymous pediatric patient data sets. Numerical simulation experiments are also performed to explore the extension of EST to helical cone-beam geometry. Methods: EST is a Fourier based iterative algorithm, which iterates back and forth between real and Fourier space utilizing the algebraically exact pseudopolar fast Fourier transform (PPFFT). In each iteration, physical constraints and mathematical regularization are applied in real space, while the measured data are enforced in Fourier space. The algorithm is automatically terminated when a proposed termination criterion is met. Experimentally, fan-beam projections were acquired by the Siemens z-flying focal spot technology, and subsequently interleaved and rebinned to a pseudopolar grid. Image quality phantoms were scanned at systematically varied mAs settings, reconstructed by EST and conventional reconstruction methods such as filtered back projection (FBP), and quantified using metrics including resolution, signal-to-noise ratios (SNRs), and contrast-to-noise ratios (CNRs). Pediatric data sets were reconstructed at their original acquisition settings and additionally simulated to lower dose settings for comparison and evaluation of the potential for radiation dose reduction. Numerical experiments were conducted to quantify EST and other iterative methods in terms of image quality and computation time. The extension of EST to helical cone-beam CT was implemented by using the advanced single-slice rebinning (ASSR) method. Results: Based on the phantom and pediatric patient fan-beam CT data, it is demonstrated that EST reconstructions with the lowest scanner flux setting of 39 mAs produce comparable image quality, resolution, and contrast relative to FBP with the 140 mAs flux setting. Compared to the algebraic reconstruction technique and the expectation maximization statistical reconstruction algorithm, a significant reduction in computation time is achieved with EST. Finally, numerical experiments on helical cone-beam CT data suggest that the combination of EST and ASSR produces reconstructions with higher image quality and lower noise than the Feldkamp Davis and Kress (FDK) method and the conventional ASSR approach. Conclusions: A Fourier-based iterative method has been applied to the reconstruction of fan-beam CT data with reduced x-ray fluence. This method incorporates advantageous features in both real and Fourier space iterative schemes: using a fast and algebraically exact method to calculate forward projection, enforcing the measured data in Fourier space, and applying physical constraints and flexible regularization in real space. Our results suggest that EST can be utilized for radiation dose reduction in x-ray CT via the readily implementable technique of lowering mAs settings. Numerical experiments further indicate that EST requires less computation time than several other iterative algorithms and can, in principle, be extended to helical cone-beam geometry in combination with the ASSR method. PMID:23464329

  10. Aztec user's guide. Version 1

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hutchinson, S.A.; Shadid, J.N.; Tuminaro, R.S.

    1995-10-01

    Aztec is an iterative solver library that greatly simplifies the parallelization process when solving the linear system of equations Ax = b where A is a user-supplied n x n sparse matrix, b is a user-supplied vector of length n and x is a vector of length n to be computed. Aztec is intended as a software tool for users who want to avoid cumbersome parallel programming details but who have large sparse linear systems which require an efficiently utilized parallel processing system. A collection of data transformation tools is provided that allows for easy creation of distributed sparse unstructured matrices for parallel solution. Once the distributed matrix is created, computation can be performed on any of the parallel machines running Aztec: nCUBE 2, IBM SP2 and Intel Paragon, MPI platforms as well as standard serial and vector platforms. Aztec includes a number of Krylov iterative methods such as conjugate gradient (CG), generalized minimum residual (GMRES) and stabilized biconjugate gradient (BICGSTAB) to solve systems of equations. These Krylov methods are used in conjunction with various preconditioners such as polynomial or domain decomposition methods using LU or incomplete LU factorizations within subdomains. Although the matrix A can be general, the package has been designed for matrices arising from the approximation of partial differential equations (PDEs). In particular, the Aztec package is oriented toward systems arising from PDE applications.
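
    Aztec itself is a parallel C library, but the solver/preconditioner pairing it describes can be illustrated on a single process. The sketch below uses SciPy rather than Aztec's own API, pairing GMRES with an incomplete-LU preconditioner on a made-up tridiagonal system standing in for a PDE discretization.

        import numpy as np
        import scipy.sparse as sp
        import scipy.sparse.linalg as spla

        # Toy 1-D Laplacian standing in for a PDE discretization.
        n = 100
        A = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format="csc")
        b = np.ones(n)

        # Incomplete LU factorization wrapped as a preconditioner, analogous
        # to Aztec's ILU-within-subdomains option.
        ilu = spla.spilu(A)
        M = spla.LinearOperator((n, n), matvec=ilu.solve)

        # GMRES with the ILU preconditioner; CG or BiCGSTAB plug in the same way.
        x, info = spla.gmres(A, b, M=M)
        print(info)  # 0 indicates convergence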

  11. Analyzing the Effect of Multi-fuel and Practical Constraints on Realistic Economic Load Dispatch using Novel Two-stage PSO

    NASA Astrophysics Data System (ADS)

    Chintalapudi, V. S.; Sirigiri, Sivanagaraju

    2017-04-01

    In power system restructuring, pricing the electrical power plays a vital role in cost allocation between suppliers and consumers. In the optimal power dispatch problem, not only the cost of active power generation but also the cost of reactive power generated by the generators should be considered to increase the effectiveness of the formulation. As the characteristics of the reactive power cost curve are similar to those of the active power cost curve, a nonconvex reactive power cost function is formulated. In this paper, a more realistic multi-fuel total cost objective is formulated by considering the active and reactive power costs of generators. The formulated cost function is optimized subject to equality, inequality and practical constraints using the proposed uniformly distributed two-stage particle swarm optimization. The proposed algorithm combines a uniform distribution of control variables (to start the iterative process with a good initial value) with a two-stage initialization process (to obtain the best final value in fewer iterations), which enhances the convergence characteristics. Results obtained for the considered standard test functions and electrical systems indicate the effectiveness of the proposed algorithm, which obtains more efficient solutions than existing methods. Hence, the proposed method is promising and can easily be applied to optimize power system objectives.
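
    For orientation, the core PSO update that the proposed two-stage variant builds on is sketched below. This is a generic bound-constrained minimizer with an invented quadratic cost standing in for the multi-fuel cost function; the paper's uniform distribution of initial control variables, two-stage initialization, and power-flow constraint handling are not reproduced.

        import numpy as np

        def pso(f, lo, hi, n_particles=30, n_iter=200, w=0.7, c1=1.5, c2=1.5):
            # Generic particle swarm minimization of f over the box [lo, hi].
            rng = np.random.default_rng(0)
            x = rng.uniform(lo, hi, (n_particles, lo.size))   # positions
            v = np.zeros_like(x)                              # velocities
            pbest, pbest_f = x.copy(), np.array([f(p) for p in x])
            g = pbest[pbest_f.argmin()].copy()                # global best
            for _ in range(n_iter):
                r1, r2 = rng.random(x.shape), rng.random(x.shape)
                v = w*v + c1*r1*(pbest - x) + c2*r2*(g - x)
                x = np.clip(x + v, lo, hi)                    # enforce bounds
                fx = np.array([f(p) for p in x])
                better = fx < pbest_f
                pbest[better], pbest_f[better] = x[better], fx[better]
                g = pbest[pbest_f.argmin()].copy()
            return g, pbest_f.min()

        # Hypothetical two-generator quadratic fuel cost, for illustration only.
        cost = lambda p: 0.01*p[0]**2 + 2.0*p[0] + 0.02*p[1]**2 + 1.5*p[1]
        print(pso(cost, np.zeros(2), np.full(2, 100.0)))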

  12. Empirical OPC rule inference for rapid RET application

    NASA Astrophysics Data System (ADS)

    Kulkarni, Anand P.

    2006-10-01

    A given technological node (45 nm, 65 nm) can be expected to process thousands of individual designs. Iterative methods applied at the node consume valuable days in determining proper placement of OPC features, and in manufacturing and testing mask correspondence to wafer patterns in a trial-and-error fashion for each design. Repeating this fabrication process for each individual design is a time-consuming and expensive process. We present a novel technique which sidesteps the requirement to iterate through the model-based OPC analysis and pattern verification cycle on subsequent designs at the same node. Our approach relies on the inference of rules from a correct pattern at the wafer surface as it relates to the OPC and pre-OPC pattern layout files. We begin with an offline phase where we obtain a "gold standard" design file that has been fab-tested at the node with a prepared, post-OPC layout file that corresponds to the intended on-wafer pattern. We then run an offline analysis to infer rules to be used in this method. During the analysis, our method implicitly identifies contextual OPC strategies for optimal placement of RET features on any design at that node. Using these strategies, we can apply OPC to subsequent designs at the same node with accuracy comparable to the original design file but with significantly smaller expected runtimes. The technique promises to offer a rapid and accurate complement to existing RET application strategies.

  13. Ultralow dose dentomaxillofacial CT imaging and iterative reconstruction techniques: variability of Hounsfield units and contrast-to-noise ratio

    PubMed Central

    Bischel, Alexander; Stratis, Andreas; Kakar, Apoorv; Bosmans, Hilde; Jacobs, Reinhilde; Gassner, Eva-Maria; Puelacher, Wolfgang; Pauwels, Ruben

    2016-01-01

    Objective: The aim of this study was to evaluate whether application of ultralow dose protocols and iterative reconstruction technology (IRT) influences quantitative Hounsfield units (HUs) and contrast-to-noise ratio (CNR) in dentomaxillofacial CT imaging. Methods: A phantom with inserts of five types of materials was scanned using protocols for (a) a clinical reference for navigated surgery (CT dose index volume 36.58 mGy), (b) low-dose sinus imaging (18.28 mGy) and (c) four ultralow dose settings (4.14, 2.63, 0.99 and 0.53 mGy). All images were reconstructed using: (i) filtered back projection (FBP); (ii) IRT: adaptive statistical iterative reconstruction-50 (ASIR-50), ASIR-100 and model-based iterative reconstruction (MBIR); and (iii) standard (std) and bone kernels. Mean HU, CNR and average HU error after recalibration were determined. Each combination of protocols was compared using Friedman analysis of variance, followed by Dunn's multiple comparison test. Results: Pearson's sample correlation coefficients were all >0.99. Ultralow dose protocols using FBP showed errors of up to 273 HU. Std kernels had less HU variability than bone kernels. MBIR reduced the error value for the lowest dose protocol to 138 HU and retained the highest relative CNR. ASIR could not demonstrate significant advantages over FBP. Conclusions: Considering a potential dose reduction to as low as 1.5% of a std protocol, ultralow dose protocols and IRT should be further tested for clinical dentomaxillofacial CT imaging. Advances in knowledge: HU as a surrogate for bone density may vary significantly in CT ultralow dose imaging. However, use of std kernels and MBIR technology reduces HU error values and may retain the highest CNR. PMID:26859336
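
    The CNR figure of merit used throughout this study has a standard definition; a minimal helper is sketched below. Exact definitions vary slightly between papers (some normalize by a pooled noise estimate), so treat this as the common form rather than the authors' exact computation.

        import numpy as np

        def cnr(insert_roi, background_roi):
            # Contrast-to-noise ratio: HU contrast between an insert and the
            # background, normalized by the background noise (std of HU).
            contrast = abs(insert_roi.mean() - background_roi.mean())
            return contrast / background_roi.std()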

  14. Bulk ion heating with ICRF waves in tokamaks

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mantsinen, M. J., E-mail: mervi.mantsinen@bsc.es; Barcelona Supercomputing Center, Barcelona; Bilato, R.

    2015-12-10

    Heating with ICRF waves is a well-established method on present-day tokamaks and one of the heating systems foreseen for ITER. However, further work is still needed to test and optimize its performance in fusion devices with metallic high-Z plasma facing components (PFCs) in preparation of ITER and DEMO operation. This is of particular importance for the bulk ion heating capabilities of ICRF waves. Efficient bulk ion heating with the standard ITER ICRF scheme, i.e. second harmonic heating of tritium with or without a ³He minority, was demonstrated in experiments carried out in deuterium-tritium plasmas on JET and TFTR and is confirmed by ICRF modelling. This paper focuses on recent experiments with ³He minority heating for bulk ion heating on the ASDEX Upgrade (AUG) tokamak with ITER-relevant all-tungsten PFCs. An increase of 80% in the central ion temperature T_i, from 3 to 5.5 keV, was achieved when 3 MW of ICRF power tuned to the central ³He ion cyclotron resonance was added to 4.5 MW of deuterium NBI. The radial gradient of the T_i profile locally reached values up to about 50 keV/m, and the normalized logarithmic ion temperature gradient R/L_Ti reached about 20, which is unusually large for AUG plasmas. The large changes in the T_i profiles were accompanied by significant changes in measured plasma toroidal rotation, plasma impurity profiles and MHD activity, which indicate concomitant changes in plasma properties with the application of ICRF waves. When the ³He concentration was increased above the optimum range for bulk ion heating, a weaker peaking of the ion temperature profile was observed, in line with theoretical expectations.

  15. Optimization and validation of accelerated golden-angle radial sparse MRI reconstruction with self-calibrating GRAPPA operator gridding.

    PubMed

    Benkert, Thomas; Tian, Ye; Huang, Chenchan; DiBella, Edward V R; Chandarana, Hersh; Feng, Li

    2018-07-01

    Golden-angle radial sparse parallel (GRASP) MRI reconstruction requires gridding and regridding to transform data between radial and Cartesian k-space. These operations are repeatedly performed in each iteration, which makes the reconstruction computationally demanding. This work aimed to accelerate GRASP reconstruction using self-calibrating GRAPPA operator gridding (GROG) and to validate its performance in clinical imaging. GROG is an alternative gridding approach based on parallel imaging, in which k-space data acquired on a non-Cartesian grid are shifted onto a Cartesian k-space grid using information from multicoil arrays. For iterative non-Cartesian image reconstruction, GROG is performed only once as a preprocessing step. Therefore, the subsequent iterative reconstruction can be performed directly in Cartesian space, which significantly reduces computational burden. Here, a framework combining GROG with GRASP (GROG-GRASP) is first optimized and then compared with standard GRASP reconstruction in 22 prostate patients. GROG-GRASP achieved approximately 4.2-fold reduction in reconstruction time compared with GRASP (∼333 min versus ∼78 min) while maintaining image quality (structural similarity index ≈ 0.97 and root mean square error ≈ 0.007). Visual image quality assessment by two experienced radiologists did not show significant differences between the two reconstruction schemes. With a graphics processing unit implementation, image reconstruction time can be further reduced to approximately 14 min. The GRASP reconstruction can be substantially accelerated using GROG. This framework is promising toward broader clinical application of GRASP and other iterative non-Cartesian reconstruction methods. Magn Reson Med 80:286-293, 2018. © 2017 International Society for Magnetic Resonance in Medicine.

  16. Polycapillary lenses for soft x-ray transmission in ITER: Model, comparison with experiments, and potential application

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mazon, D., E-mail: Didier.Mazon@cea.fr; Jardin, A.; Liegeard, C.

    2016-11-15

    Measuring Soft X-Ray (SXR) radiation [0.1 keV; 15 keV] in tokamaks is a standard way of extracting valuable information on the particle transport and magnetohydrodynamic activity. Generally, the analysis is performed with detectors positioned close to the plasma for a direct line of sight. A burning plasma, like the ITER deuterium-tritium phase, is too harsh an environment to permit the use of such detectors in close vicinity of the machine. We have thus investigated in this article the possibility of using polycapillary lenses in ITER to transport the SXR information several meters away from the plasma in the complex port-plug geometry.

  17. Analysis of the iteratively regularized Gauss-Newton method under a heuristic rule

    NASA Astrophysics Data System (ADS)

    Jin, Qinian; Wang, Wei

    2018-03-01

    The iteratively regularized Gauss-Newton method is one of the most prominent regularization methods for solving nonlinear ill-posed inverse problems when the data is corrupted by noise. In order to produce a useful approximate solution, this iterative method should be terminated properly. The existing a priori and a posteriori stopping rules require accurate information on the noise level, which may not be available or reliable in practical applications. In this paper we propose a heuristic selection rule for this regularization method, which requires no information on the noise level. By imposing certain conditions on the noise, we derive a posteriori error estimates on the approximate solutions under various source conditions. Furthermore, we establish a convergence result without using any source condition. Numerical results are presented to illustrate the performance of our heuristic selection rule.
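
    For concreteness, the update underlying the iteratively regularized Gauss-Newton method can be written down in a few lines of dense linear algebra. The sketch below assumes user-supplied F and Jacobian J and a geometric regularization schedule; the paper's actual contribution, the heuristic stopping rule, is not reproduced here.

        import numpy as np

        def irgn(F, J, y, x0, alpha0=1.0, q=0.5, n_iter=10):
            # Iteratively regularized Gauss-Newton:
            #   x_{k+1} = x_k + (J'J + a_k I)^{-1} (J'(y - F(x_k)) + a_k (x0 - x_k))
            # with a_k = alpha0 * q**k. Choosing when to stop is the hard part:
            # a priori/a posteriori rules need the noise level, which the
            # paper's heuristic rule avoids.
            x, alpha = x0.astype(float).copy(), alpha0
            for _ in range(n_iter):
                Jk = J(x)
                lhs = Jk.T @ Jk + alpha * np.eye(x.size)
                rhs = Jk.T @ (y - F(x)) + alpha * (x0 - x)
                x = x + np.linalg.solve(lhs, rhs)
                alpha *= q
            return x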

  18. Discrete fourier transform (DFT) analysis for applications using iterative transform methods

    NASA Technical Reports Server (NTRS)

    Dean, Bruce H. (Inventor)

    2012-01-01

    According to various embodiments, a method is provided for determining aberration data for an optical system. The method comprises collecting a data signal, and generating a pre-transformation algorithm. The data is pre-transformed by multiplying the data with the pre-transformation algorithm. A discrete Fourier transform of the pre-transformed data is performed in an iterative loop. The method further comprises back-transforming the data to generate aberration data.

  19. Iterative Methods for Solving Nonlinear Parabolic Problem in Pension Saving Management

    NASA Astrophysics Data System (ADS)

    Koleva, M. N.

    2011-11-01

    In this work we consider a nonlinear parabolic equation, obtained from a Riccati-like transformation of the Hamilton-Jacobi-Bellman equation, arising in pension saving management. We discuss two numerical iterative methods for solving the model problem: a fully implicit Picard method and a mixed Picard-Newton method, both of which preserve the parabolic characteristics of the differential problem. Numerical experiments comparing the accuracy and effectiveness of the algorithms are discussed. Finally, observations are given.
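
    The fully implicit Picard idea is simple to state: freeze the nonlinearity at the current iterate so that each pass costs one linear solve. The sketch below shows that generic pattern for a system A(u)u = b(u); the paper's actual HJB-derived operator and the Picard-Newton hybrid are not reproduced.

        import numpy as np

        def picard_solve(assemble, u0, tol=1e-8, max_iter=100):
            # Picard iteration for a nonlinear system A(u) u = b(u):
            # `assemble(u)` returns (A, b) with the nonlinearity frozen at u,
            # so each sweep reduces to a linear solve.
            u = u0.copy()
            for _ in range(max_iter):
                A, b = assemble(u)
                u_new = np.linalg.solve(A, b)
                if np.linalg.norm(u_new - u) <= tol * np.linalg.norm(u_new):
                    return u_new
                u = u_new
            raise RuntimeError("Picard iteration did not converge")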

  20. Domain decomposition preconditioners for the spectral collocation method

    NASA Technical Reports Server (NTRS)

    Quarteroni, Alfio; Sacchilandriani, Giovanni

    1988-01-01

    Several block iteration preconditioners are proposed and analyzed for the solution of elliptic problems by spectral collocation methods in a region partitioned into several rectangles. It is shown that convergence is achieved with a rate which does not depend on the polynomial degree of the spectral solution. The iterative methods presented here can be effectively implemented on multiprocessor systems due to their high degree of parallelism.

  1. Single image super-resolution based on approximated Heaviside functions and iterative refinement

    PubMed Central

    Wang, Xin-Yu; Huang, Ting-Zhu; Deng, Liang-Jian

    2018-01-01

    One method of solving the single-image super-resolution problem is to use Heaviside functions. This has been done previously by making a binary classification of image components as "smooth" and "non-smooth", describing these with approximated Heaviside functions (AHFs), and iterating with l1 regularization. We now introduce a new method in which the binary classification of image components is extended to different degrees of smoothness and non-smoothness, these components being represented by various classes of AHFs. Taking into account the sparsity of the non-smooth components, their coefficients are l1 regularized. In addition, to pick up more image details, the new method uses an iterative refinement for the residuals between the original low-resolution input and the downsampled resulting image. Experimental results showed that the new method is superior to the original AHF method and to four other published methods. PMID:29329298

  2. Construction, classification and parametrization of complex Hadamard matrices

    NASA Astrophysics Data System (ADS)

    Szöllősi, Ferenc

    To improve the design of nuclear systems, high-fidelity neutron fluxes are required. Leadership-class machines provide platforms on which very large problems can be solved. Computing such fluxes efficiently requires numerical methods with good convergence properties and algorithms that can scale to hundreds of thousands of cores. Many 3-D deterministic transport codes are decomposable in space and angle only, limiting them to tens of thousands of cores. Most codes rely on methods such as Gauss-Seidel for fixed source problems and power iteration for eigenvalue problems, which can be slow to converge for challenging problems like those with highly scattering materials or high dominance ratios. Three methods have been added to the 3-D SN transport code Denovo that are designed to improve convergence and enable the full use of cutting-edge computers. The first is a multigroup Krylov solver that converges more quickly than Gauss-Seidel and parallelizes the code in energy such that Denovo can use hundreds of thousands of cores effectively. The second is Rayleigh quotient iteration (RQI), an old method applied in a new context. This eigenvalue solver finds the dominant eigenvalue in a mathematically optimal way and should converge in fewer iterations than power iteration. RQI creates energy-block-dense equations that the new Krylov solver treats efficiently. However, RQI can have convergence problems because it creates poorly conditioned systems. This can be overcome with preconditioning. The third method is a multigrid-in-energy preconditioner. The preconditioner takes advantage of the new energy decomposition because the grids are in energy rather than space or angle. The preconditioner greatly reduces iteration count for many problem types and scales well in energy. It also allows RQI to be successful for problems it could not solve otherwise. The methods added to Denovo accomplish the goals of this work. They converge in fewer iterations than traditional methods and enable the use of hundreds of thousands of cores. Each method can be used individually, with the multigroup Krylov solver and multigrid-in-energy preconditioner being particularly successful on their own. The largest benefit, though, comes from using these methods in concert.
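
    The Rayleigh quotient iteration named above is, in its textbook dense form, only a few lines; a sketch for a small symmetric matrix follows. In a transport code the shifted solve is itself a large preconditioned Krylov solve, which is exactly where the ill-conditioning discussed above bites.

        import numpy as np

        def rayleigh_quotient_iteration(A, v0, n_iter=20):
            # Textbook RQI for a symmetric matrix A: shift by the current
            # Rayleigh quotient and solve the (nearly singular) shifted
            # system each step. Converges cubically near an eigenpair, but
            # the shifted systems become ill-conditioned -- the motivation
            # for the preconditioning described in the abstract.
            v = v0 / np.linalg.norm(v0)
            for _ in range(n_iter):
                mu = v @ A @ v                      # Rayleigh quotient
                w = np.linalg.solve(A - mu * np.eye(A.shape[0]), v)
                v = w / np.linalg.norm(w)
            return v @ A @ v, v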

  3. A non-iterative twin image elimination method with two in-line digital holograms

    NASA Astrophysics Data System (ADS)

    Kim, Jongwu; Lee, Heejung; Jeon, Philjun; Kim, Dug Young

    2018-02-01

    We propose a simple non-iterative in-line holographic measurement method which can effectively eliminate a twin image in digital holographic 3D imaging. It is shown that a twin image can be effectively eliminated with only two measured holograms by using a simple numerical propagation algorithm and arithmetic calculations.

  4. A Probabilistic Collocation Based Iterative Kalman Filter for Landfill Data Assimilation

    NASA Astrophysics Data System (ADS)

    Qiang, Z.; Zeng, L.; Wu, L.

    2016-12-01

    Due to the strong spatial heterogeneity of landfills, uncertainty is ubiquitous in the landfill gas transport process. To accurately characterize the landfill properties, the ensemble Kalman filter (EnKF) has been employed to assimilate the measurements, e.g., the gas pressure. As a Monte Carlo (MC) based method, the EnKF usually requires a large ensemble size, which poses a high computational cost for large-scale problems. In this work, we propose a probabilistic collocation based iterative Kalman filter (PCIKF) to estimate permeability in a liquid-gas coupling model. This method employs polynomial chaos expansion (PCE) to represent and propagate the uncertainties of model parameters and states, and an iterative form of the Kalman filter to assimilate the current gas pressure data. To further reduce the computational cost, a functional ANOVA (analysis of variance) decomposition is conducted, and only the first-order ANOVA components are retained in the PCE. Illustrated with numerical case studies, the proposed method shows significantly better computational efficiency than the traditional MC based iterative EnKF. The developed method has promising potential for reliable prediction and management of landfill gas production.

  5. Parallel iterative solution for h and p approximations of the shallow water equations

    USGS Publications Warehouse

    Barragy, E.J.; Walters, R.A.

    1998-01-01

    A p finite element scheme and parallel iterative solver are introduced for a modified form of the shallow water equations. The governing equations are the three-dimensional shallow water equations. After a harmonic decomposition in time and rearrangement, the resulting equations are a complex Helmholtz problem for surface elevation, and a complex momentum equation for the horizontal velocity. Both equations are nonlinear and the resulting system is solved using Picard iteration combined with a preconditioned biconjugate gradient (PBCG) method for the linearized subproblems. A subdomain-based parallel preconditioner is developed which uses incomplete LU factorization with thresholding (ILUT) methods within subdomains, overlapping ILUT factorizations for subdomain boundaries and under-relaxed iteration for the resulting block system. The method builds on techniques successfully applied to linear elements by introducing ordering and condensation techniques to handle uniform p refinement. The combined methods show good performance for a range of p (element order), h (element size), and N (number of processors). Performance and scalability results are presented for a field scale problem where up to 512 processors are used. © 1998 Elsevier Science Ltd. All rights reserved.

  6. Reinforcement learning produces dominant strategies for the Iterated Prisoner’s Dilemma

    PubMed Central

    Jones, Martin; Koutsovoulos, Georgios; Glynatsi, Nikoleta E.; Campbell, Owen

    2017-01-01

    We present tournament results and several powerful strategies for the Iterated Prisoner's Dilemma created using reinforcement learning techniques (evolutionary and particle swarm algorithms). These strategies are trained to perform well against a corpus of over 170 distinct opponents, including many well-known and classic strategies. All the trained strategies win standard tournaments against the total collection of other opponents. The trained strategies and one particular human-designed strategy are also the top performers in noisy tournaments. PMID:29228001

  7. Error analysis applied to several inversion techniques used for the retrieval of middle atmospheric constituents from limb-scanning MM-wave spectroscopic measurements

    NASA Technical Reports Server (NTRS)

    Puliafito, E.; Bevilacqua, R.; Olivero, J.; Degenhardt, W.

    1992-01-01

    The formal retrieval error analysis of Rodgers (1990) allows the quantitative determination of such retrieval properties as measurement error sensitivity, resolution, and inversion bias. This technique was applied to five numerical inversion techniques and two nonlinear iterative techniques used for the retrieval of middle atmospheric constituent concentrations from limb-scanning millimeter-wave spectroscopic measurements. It is found that the iterative methods have better vertical resolution, but are slightly more sensitive to measurement error than constrained matrix methods. The iterative methods converge to the exact solution, whereas two of the matrix methods under consideration have an explicit constraint: the sensitivity of the solution to the a priori profile. Tradeoffs of these retrieval characteristics are presented.

  8. A morphologically preserved multi-resolution TIN surface modeling and visualization method for virtual globes

    NASA Astrophysics Data System (ADS)

    Zheng, Xianwei; Xiong, Hanjiang; Gong, Jianya; Yue, Linwei

    2017-07-01

    Virtual globes play an important role in representing three-dimensional models of the Earth. To extend the functioning of a virtual globe beyond that of a "geobrowser", the accuracy of the geospatial data in the processing and representation should be of special concern for the scientific analysis and evaluation. In this study, we propose a method for the processing of large-scale terrain data for virtual globe visualization and analysis. The proposed method aims to construct a morphologically preserved multi-resolution triangulated irregular network (TIN) pyramid for virtual globes to accurately represent the landscape surface and simultaneously satisfy the demands of applications at different scales. By introducing cartographic principles, the TIN model in each layer is controlled with a data quality standard to formulize its level of detail generation. A point-additive algorithm is used to iteratively construct the multi-resolution TIN pyramid. The extracted landscape features are also incorporated to constrain the TIN structure, thus preserving the basic morphological shapes of the terrain surface at different levels. During the iterative construction process, the TIN in each layer is seamlessly partitioned based on a virtual node structure, and tiled with a global quadtree structure. Finally, an adaptive tessellation approach is adopted to eliminate terrain cracks in the real-time out-of-core spherical terrain rendering. The experiments undertaken in this study confirmed that the proposed method performs well in multi-resolution terrain representation, and produces high-quality underlying data that satisfy the demands of scientific analysis and evaluation.

  9. Automated IMRT planning with regional optimization using planning scripts

    PubMed Central

    Wong, Eugene; Bzdusek, Karl; Lock, Michael; Chen, Jeff Z.

    2013-01-01

    Intensity-modulated radiation therapy (IMRT) has become a standard technique in radiation therapy for treating different types of cancers. Various class solutions have been developed for simple cases (e.g., localized prostate, whole breast) to generate IMRT plans efficiently. However, for more complex cases (e.g., head and neck, pelvic nodes), it can be time-consuming for a planner to generate optimized IMRT plans. To generate optimal plans in these more complex cases, which generally have multiple target volumes and organs at risk, it is often necessary to add IMRT optimization structures such as dose-limiting ring structures, adjust beam geometry, select inverse planning objectives and associated weights, and add IMRT objectives to reduce cold and hot spots in the dose distribution. These parameters are generally adjusted manually with a repeated trial-and-error approach during the optimization process. To improve IMRT planning efficiency in these more complex cases, an iterative method that incorporates some of these adjustment processes automatically in a planning script is designed, implemented, and validated. In particular, regional optimization has been implemented iteratively to reduce hot and cold spots during the optimization process: hot and cold spots are defined and automatically segmented, new objectives and their relative weights are introduced into the inverse planning, and the process is repeated until termination criteria are met. The method has been applied to three clinical sites: prostate with pelvic nodes, head and neck, and anal canal cancers, and has been shown to reduce IMRT planning time significantly for clinical applications with improved plan quality. The IMRT planning scripts have been used for more than 500 clinical cases. PACS numbers: 87.55.D, 87.55.de PMID:23318393

  10. Agile parallel bioinformatics workflow management using Pwrake.

    PubMed

    Mishima, Hiroyuki; Sasaki, Kensaku; Tanaka, Masahiro; Tatebe, Osamu; Yoshiura, Koh-Ichiro

    2011-09-08

    In bioinformatics projects, scientific workflow systems are widely used to manage computational procedures. Full-featured workflow systems have been proposed to fulfil the demand for workflow management. However, such systems tend to be over-weighted for actual bioinformatics practices. We realize that quick deployment of cutting-edge software implementing advanced algorithms and data formats, and continuous adaptation to changes in computational resources and the environment are often prioritized in scientific workflow management. These features have a greater affinity with the agile software development method through iterative development phases after trial and error. Here, we show the application of a scientific workflow system Pwrake to bioinformatics workflows. Pwrake is a parallel workflow extension of Ruby's standard build tool Rake, the flexibility of which has been demonstrated in the astronomy domain. Therefore, we hypothesize that Pwrake also has advantages in actual bioinformatics workflows. We implemented the Pwrake workflows to process next generation sequencing data using the Genomic Analysis Toolkit (GATK) and Dindel. GATK and Dindel workflows are typical examples of sequential and parallel workflows, respectively. We found that in practice, actual scientific workflow development iterates over two phases, the workflow definition phase and the parameter adjustment phase. We introduced separate workflow definitions to help focus on each of the two developmental phases, as well as helper methods to simplify the descriptions. This approach increased iterative development efficiency. Moreover, we implemented combined workflows to demonstrate modularity of the GATK and Dindel workflows. Pwrake enables agile management of scientific workflows in the bioinformatics domain. The internal domain specific language design built on Ruby gives the flexibility of rakefiles for writing scientific workflows. Furthermore, readability and maintainability of rakefiles may facilitate sharing workflows among the scientific community. Workflows for GATK and Dindel are available at http://github.com/misshie/Workflows.

  11. Agile parallel bioinformatics workflow management using Pwrake

    PubMed Central

    2011-01-01

    Background In bioinformatics projects, scientific workflow systems are widely used to manage computational procedures. Full-featured workflow systems have been proposed to fulfil the demand for workflow management. However, such systems tend to be over-weighted for actual bioinformatics practices. We realize that quick deployment of cutting-edge software implementing advanced algorithms and data formats, and continuous adaptation to changes in computational resources and the environment are often prioritized in scientific workflow management. These features have a greater affinity with the agile software development method through iterative development phases after trial and error. Here, we show the application of a scientific workflow system Pwrake to bioinformatics workflows. Pwrake is a parallel workflow extension of Ruby's standard build tool Rake, the flexibility of which has been demonstrated in the astronomy domain. Therefore, we hypothesize that Pwrake also has advantages in actual bioinformatics workflows. Findings We implemented the Pwrake workflows to process next generation sequencing data using the Genomic Analysis Toolkit (GATK) and Dindel. GATK and Dindel workflows are typical examples of sequential and parallel workflows, respectively. We found that in practice, actual scientific workflow development iterates over two phases, the workflow definition phase and the parameter adjustment phase. We introduced separate workflow definitions to help focus on each of the two developmental phases, as well as helper methods to simplify the descriptions. This approach increased iterative development efficiency. Moreover, we implemented combined workflows to demonstrate modularity of the GATK and Dindel workflows. Conclusions Pwrake enables agile management of scientific workflows in the bioinformatics domain. The internal domain specific language design built on Ruby gives the flexibility of rakefiles for writing scientific workflows. Furthermore, readability and maintainability of rakefiles may facilitate sharing workflows among the scientific community. Workflows for GATK and Dindel are available at http://github.com/misshie/Workflows. PMID:21899774

  12. Pulmonary airways tree segmentation from CT examinations using adaptive volume of interest

    NASA Astrophysics Data System (ADS)

    Park, Sang Cheol; Kim, Won Pil; Zheng, Bin; Leader, Joseph K.; Pu, Jiantao; Tan, Jun; Gur, David

    2009-02-01

    Airways tree segmentation is an important step in quantitatively assessing the severity of and changes in several lung diseases such as chronic obstructive pulmonary disease (COPD), asthma, and cystic fibrosis. It can also be used in guiding bronchoscopy. The purpose of this study is to develop an automated scheme for segmenting the airways tree structure depicted on chest CT examinations. After lung volume segmentation, the scheme defines the first cylinder-like volume of interest (VOI) using a series of images depicting the trachea. The scheme then iteratively defines and adds subsequent VOIs using a region growing algorithm combined with adaptively determined thresholds in order to trace possible sections of airways located inside the combined VOI in question. The airway tree segmentation process is automatically terminated after the scheme assesses all defined VOIs in the iteratively assembled VOI list. In this preliminary study, ten CT examinations with 1.25 mm section thickness and two different CT image reconstruction kernels ("bone" and "standard") were selected and used to test the proposed airways tree segmentation scheme. The experimental results showed that (1) adopting this approach effectively prevented the scheme from infiltrating into the parenchyma, (2) the proposed method segmented the airways trees reasonably accurately, with a lower false-positive identification rate compared with other previously reported schemes based on 2-D image segmentation and data analyses, and (3) the proposed adaptive, iterative threshold selection method for the region growing step in each identified VOI enabled the scheme to segment the airways trees reliably to the 4th generation in this limited dataset, with successful segmentation up to the 5th generation in a fraction of the airways tree branches.
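
    The region growing step at the heart of the scheme can be sketched compactly. The version below is a plain 6-connected flood fill with a fixed HU threshold on a 3-D volume; the paper's scheme additionally adapts the threshold per VOI and terminates growth when it leaks into parenchyma, details omitted here.

        import numpy as np
        from collections import deque

        def region_grow(vol, seed, threshold):
            # Grow from `seed` (a (z, y, x) tuple), accepting 6-connected
            # neighbors whose HU value lies below `threshold` (airway lumen
            # is air-filled and therefore strongly negative in HU).
            grown = np.zeros(vol.shape, dtype=bool)
            grown[seed] = True
            queue = deque([seed])
            steps = [(1,0,0), (-1,0,0), (0,1,0), (0,-1,0), (0,0,1), (0,0,-1)]
            while queue:
                z, y, x = queue.popleft()
                for dz, dy, dx in steps:
                    n = (z + dz, y + dy, x + dx)
                    if all(0 <= n[i] < vol.shape[i] for i in range(3)) \
                            and not grown[n] and vol[n] < threshold:
                        grown[n] = True
                        queue.append(n)
            return grown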

  13. An improved 3D MoF method based on analytical partial derivatives

    NASA Astrophysics Data System (ADS)

    Chen, Xiang; Zhang, Xiong

    2016-12-01

    The MoF (Moment of Fluid) method is one of the most accurate approaches among various surface reconstruction algorithms. Like other second-order methods, the MoF method needs to solve an implicit optimization problem to obtain the optimal approximate surface. Therefore, the partial derivatives of the objective function have to be involved during the iteration for efficiency and accuracy. However, to the best of our knowledge, the derivatives are currently estimated numerically by finite difference approximation, because it is very difficult to obtain the analytical derivatives of the objective function for an implicit optimization problem. Employing numerical derivatives in an iteration not only increases the computational cost, but also deteriorates the convergence rate and robustness of the iteration due to their numerical error. In this paper, the analytical first-order partial derivatives of the objective function are derived for 3D problems. The analytical derivatives can be calculated accurately, so they are incorporated into the MoF method to improve its accuracy, efficiency and robustness. Numerical studies show that by using the analytical derivatives the iterations converge in all mixed cells, with an efficiency improvement of 3 to 4 times.

  14. Strongly Coupled Fluid-Body Dynamics in the Immersed Boundary Projection Method

    NASA Astrophysics Data System (ADS)

    Wang, Chengjie; Eldredge, Jeff D.

    2014-11-01

    A computational algorithm is developed to simulate dynamically coupled interaction between fluid and rigid bodies. The basic computational framework is built upon a multi-domain immersed boundary method library, whirl, developed in previous work. In this library, the Navier-Stokes equations for incompressible flow are solved on a uniform Cartesian grid by the vorticity-based immersed boundary projection method of Colonius and Taira. A solver for the dynamics of rigid-body systems is also included. The fluid and rigid-body solvers are strongly coupled with an iterative approach based on the block Gauss-Seidel method. The interfacial force, with its intimate connection with the Lagrange multipliers used in the fluid solver, is used as the primary iteration variable. Relaxation, developed from a stability analysis of the iterative scheme, is used to achieve convergence in only 2-4 iterations per time step. Several two- and three-dimensional numerical tests are conducted to validate and demonstrate the method, including flapping of flexible wings, self-excited oscillations of a system of linked plates, and three-dimensional propulsion of a flexible fluked tail. This work has been supported by AFOSR, under Award FA9550-11-1-0098.
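
    The strongly coupled partitioned pattern described above, block Gauss-Seidel with under-relaxation on the interface force, is sketched below with hypothetical fluid and body solver callables. The paper derives its relaxation factor from a stability analysis; here it is simply a fixed parameter.

        import numpy as np

        def coupled_step(fluid_solve, body_solve, f0, omega=0.5,
                         tol=1e-8, max_iter=50):
            # One time step of a partitioned fluid-body scheme: iterate
            # body solve -> fluid solve until the interface force f is
            # self-consistent. `body_solve(f)` returns the body motion and
            # `fluid_solve(motion)` returns the resulting interface force.
            f = np.asarray(f0, dtype=float)
            for _ in range(max_iter):
                motion = body_solve(f)
                f_new = fluid_solve(motion)
                if np.linalg.norm(f_new - f) <= tol * (1.0 + np.linalg.norm(f)):
                    return f_new, motion
                f = f + omega * (f_new - f)   # under-relaxation stabilizes the loop
            raise RuntimeError("fluid-body subiterations did not converge")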

  15. Analytic Evolution of Singular Distribution Amplitudes in QCD

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tandogan Kunkel, Asli

    2014-08-01

    Distribution amplitudes (DAs) are the basic functions that contain information about the quark momentum. DAs are necessary to describe hard exclusive processes in quantum chromodynamics. We describe a method of analytic evolution of DAs that have singularities such as nonzero values at the end points of the support region, jumps at some points inside the support region and cusps. We illustrate the method by applying it to the evolution of a flat (constant) DA and an antisymmetric flat DA, and then using the method for evolution of the two-photon generalized distribution amplitude. Our approach to DA evolution has advantages over the standard method of expansion in Gegenbauer polynomials [1, 2] and over a straightforward iteration of an initial distribution with the evolution kernel. Expansion in Gegenbauer polynomials requires an infinite number of terms in order to accurately reproduce functions in the vicinity of singular points. Straightforward iteration of an initial distribution produces logarithmically divergent terms at each iteration. In our method the logarithmic singularities are summed from the start, which immediately produces a continuous curve. Afterwards, in order to get precise results, only one or two iterations are needed.

  16. Nonlinear relaxation algorithms for circuit simulation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Saleh, R.A.

    Circuit simulation is an important Computer-Aided Design (CAD) tool in the design of Integrated Circuits (IC). However, the standard techniques used in programs such as SPICE result in very long computer run times when applied to large problems. In order to reduce the overall run time, a number of new approaches to circuit simulation were developed and are described. These methods are based on nonlinear relaxation techniques and exploit the relative inactivity of large circuits. Simple waveform-processing techniques are described to determine the maximum possible speed improvement that can be obtained by exploiting this property of large circuits. Three simulation algorithms are described, two of which are based on the Iterated Timing Analysis (ITA) method and a third based on the Waveform-Relaxation Newton (WRN) method. New programs that incorporate these techniques were developed and used to simulate a variety of industrial circuits. The results from these simulations are provided. The techniques are shown to be much faster than the standard approach. In addition, a number of parallel aspects of these algorithms are described, and a general space-time model of parallel-task scheduling is developed.

  17. Conjugate gradient coupled with multigrid for an indefinite problem

    NASA Technical Reports Server (NTRS)

    Gozani, J.; Nachshon, A.; Turkel, E.

    1984-01-01

    An iterative algorithm for the Helmholtz equation is presented. This scheme was based on the preconditioned conjugate gradient method for the normal equations. The preconditioning is one cycle of a multigrid method for the discrete Laplacian. The smoothing algorithm is red-black Gauss-Seidel and is constructed so it is a symmetric operator. The total number of iterations needed by the algorithm is independent of h. By varying the number of grids, the number of iterations depends only weakly on k when k³h² is constant. Comparisons with an SSOR preconditioner are presented.

  18. Iterative method of construction of a bifurcation diagram of autorotation motions for a system with one degree of freedom

    NASA Astrophysics Data System (ADS)

    Klimina, L. A.

    2018-05-01

    A modification of the Picard approach is suggested, targeted at the construction of a bifurcation diagram of 2π-periodic motions of a mechanical system with a cylindrical phase space. Each iterative step is based on principles of averaging and energy balance similar to the Poincare-Pontryagin approach. If the iterative procedure converges, it provides the periodic trajectory of the system as a function of the bifurcation parameter of the model. The method is applied to describe self-sustained rotations in the model of an aerodynamic pendulum.

  19. Optimized iterative decoding method for TPC coded CPM

    NASA Astrophysics Data System (ADS)

    Ma, Yanmin; Lai, Penghui; Wang, Shilian; Xie, Shunqin; Zhang, Wei

    2018-05-01

    Turbo Product Code (TPC) coded Continuous Phase Modulation (CPM) systems (TPC-CPM) have been widely used in aeronautical telemetry and satellite communication. This paper investigates the improvement and optimization of the TPC-CPM system. We first add an interleaver and deinterleaver to the TPC-CPM system, and then establish an iterative decoding scheme. However, the improved system has poor convergence. To overcome this issue, we use Extrinsic Information Transfer (EXIT) analysis to find the optimal factors for the system. Experiments show that our method effectively improves the convergence performance.

  20. 3D near-to-surface conductivity reconstruction by inversion of VETEM data using the distorted Born iterative method

    USGS Publications Warehouse

    Wang, G.L.; Chew, W.C.; Cui, T.J.; Aydiner, A.A.; Wright, D.L.; Smith, D.V.

    2004-01-01

    Three-dimensional (3D) subsurface imaging by inversion of data obtained from the very early time electromagnetic system (VETEM) was discussed. The study was carried out by using the distorted Born iterative method to match the internal nonlinear property of the 3D inversion problem. The forward solver was based on the total-current formulation bi-conjugate gradient-fast Fourier transform (BCCG-FFT) method. It was found that the selection of the regularization parameter should follow a heuristic rule, as used in the Levenberg-Marquardt algorithm, so that the iteration is stable.

  1. An Efficient Algorithm for Perturbed Orbit Integration Combining Analytical Continuation and Modified Chebyshev Picard Iteration

    NASA Astrophysics Data System (ADS)

    Elgohary, T.; Kim, D.; Turner, J.; Junkins, J.

    2014-09-01

    Several methods exist for integrating the motion in high-order gravity fields. Some recent methods use an approximate starting orbit, and an efficient method is needed for generating warm starts that account for specific low-order gravity approximations. By introducing two scalar Lagrange-like invariants and employing the Leibniz product rule, the perturbed motion is integrated by a novel recursive formulation. The Lagrange-like invariants allow exact arbitrary-order time derivatives. Restricting attention to the perturbations due to the zonal harmonics J2 through J6, we illustrate the approach. The recursively generated vector-valued time derivatives for the trajectory are used to develop a continuation series-based solution for propagating position and velocity. Numerical comparisons indicate performance improvements of ~70X over existing explicit Runge-Kutta methods while maintaining mm accuracy for the orbit predictions. The Modified Chebyshev Picard Iteration (MCPI) is an iterative path approximation method for solving nonlinear ordinary differential equations. The MCPI utilizes Picard iteration with orthogonal Chebyshev polynomial basis functions to recursively update the states. The key advantages of the MCPI are as follows: 1) Large segments of a trajectory can be approximated by evaluating the forcing function at multiple nodes along the current approximation during each iteration. 2) It can readily handle general gravity perturbations as well as non-conservative forces. 3) Parallel applications are possible. The Picard sequence converges to the solution over large time intervals when the forces are continuous and differentiable. Depending on the accuracy of the starting solution, however, the MCPI may require a significant number of iterations and function evaluations compared to other integrators. In this work, we provide an efficient methodology to establish good starting solutions from the continuation series method; this warm start improves the performance of the MCPI significantly and will likely be useful for other applications where efficiently computed approximate orbit solutions are needed.
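
    The Picard update at the core of MCPI is worth seeing in its plainest form. The sketch below iterates x_{k+1}(t) = x(0) + ∫ f(s, x_k(s)) ds on a fixed grid with trapezoidal quadrature; MCPI's efficiency comes from replacing the grid and quadrature with Chebyshev nodes and orthogonal-polynomial updates, which this toy version does not attempt.

        import numpy as np

        def picard_iterate(f, t, x0, n_iter=30):
            # Schematic Picard iteration for x' = f(t, x), x(t[0]) = x0,
            # using cumulative trapezoidal quadrature on a fixed grid.
            x = np.full(t.size, float(x0))
            for _ in range(n_iter):
                fx = f(t, x)
                increments = 0.5 * (fx[1:] + fx[:-1]) * np.diff(t)
                x = x0 + np.concatenate(([0.0], np.cumsum(increments)))
            return x

        # Example: x' = -x, x(0) = 1 on [0, 2]; the iterates approach exp(-t).
        t = np.linspace(0.0, 2.0, 201)
        x = picard_iterate(lambda t, x: -x, t, 1.0)
        print(abs(x[-1] - np.exp(-2.0)))   # small residual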

  2. Novel approach in k0-NAA for highly concentrated REE Samples.

    PubMed

    Abdollahi Neisiani, M; Latifi, M; Chaouki, J; Chilian, C

    2018-04-01

    This paper presents a new approach to k0-NAA for accurate quantification with short turnaround analysis times for rare earth elements (REEs) in high-content mineral matrices. REE k0 and Q0 values, spectral interferences and nuclear interferences were experimentally evaluated and improved with Alfa Aesar Specpure Plasma Standard 1000 mg kg⁻¹ mono-rare-earth solutions. The new iterative gamma-ray self-attenuation and neutron self-shielding methods were investigated with powder standards prepared from 100 mg of 99.9% Alfa Aesar mono rare earth oxide diluted with silica oxide. The overall performance of the new k0-NAA method for REEs was validated using a certified reference material (CRM) from the Canadian Certified Reference Materials Project (REE-2) with REE content ranging from 7.2 mg kg⁻¹ for Yb to 9610 mg kg⁻¹ for Ce. The REE concentrations were determined with uncertainty below 7% (at 95% confidence level) and proved to be in good agreement with the CRM certified concentrations. Copyright © 2017 Elsevier B.V. All rights reserved.

  3. CT of the chest with model-based, fully iterative reconstruction: comparison with adaptive statistical iterative reconstruction.

    PubMed

    Ichikawa, Yasutaka; Kitagawa, Kakuya; Nagasawa, Naoki; Murashima, Shuichi; Sakuma, Hajime

    2013-08-09

    The recently developed model-based iterative reconstruction (MBIR) enables significant reduction of image noise and artifacts, compared with adaptive statistical iterative reconstruction (ASIR) and filtered back projection (FBP). The purpose of this study was to evaluate lesion detectability of low-dose chest computed tomography (CT) with MBIR in comparison with ASIR and FBP. Chest CT was acquired with 64-slice CT (Discovery CT750HD) under standard-dose (5.7 ± 2.3 mSv) and low-dose (1.6 ± 0.8 mSv) conditions in 55 patients (aged 72 ± 7 years) with suspected lung disease on chest radiographs. Low-dose CT images were reconstructed with MBIR, ASIR 50% and FBP, and standard-dose CT images were reconstructed with FBP, using a reconstructed slice thickness of 0.625 mm. Two observers evaluated the image quality of abnormal lung and mediastinal structures on a 5-point scale (score 5 = excellent and score 1 = non-diagnostic). The objective image noise was also measured as the standard deviation of CT intensity in the descending aorta. The image quality score of enlarged mediastinal lymph nodes on low-dose MBIR CT (4.7 ± 0.5) was significantly improved in comparison with low-dose FBP and ASIR CT (3.0 ± 0.5, p = 0.004; 4.0 ± 0.5, p = 0.02, respectively), and was nearly identical to the score of the standard-dose FBP image (4.8 ± 0.4, p = 0.66). Concerning decreased lung attenuation (bulla, emphysema, or cyst), the image quality score on low-dose MBIR CT (4.9 ± 0.2) was slightly better compared to low-dose FBP and ASIR CT (4.5 ± 0.6, p = 0.01; 4.6 ± 0.5, p = 0.01, respectively). There were no significant differences in image quality scores of visualization of consolidation or mass, ground-glass attenuation, or reticular opacity among low- and standard-dose CT series. Image noise with low-dose MBIR CT (11.6 ± 1.0 Hounsfield units (HU)) was significantly lower than with low-dose ASIR (21.1 ± 2.6 HU, p < 0.0005), low-dose FBP CT (30.9 ± 3.9 HU, p < 0.0005), and standard-dose FBP CT (16.6 ± 2.3 HU, p < 0.0005). MBIR shows greater potential than ASIR for providing diagnostically acceptable low-dose CT without compromising image quality. With radiation dose reduction of >70%, MBIR can provide lesion detectability equivalent to that of standard-dose FBP CT.

  4. LDPC-based iterative joint source-channel decoding for JPEG2000.

    PubMed

    Pu, Lingling; Wu, Zhenyu; Bilgin, Ali; Marcellin, Michael W; Vasic, Bane

    2007-02-01

    A framework is proposed for iterative joint source-channel decoding of JPEG2000 codestreams. At the encoder, JPEG2000 is used to perform source coding with certain error-resilience (ER) modes, and LDPC codes are used to perform channel coding. During decoding, the source decoder uses the ER modes to identify corrupt sections of the codestream and provides this information to the channel decoder. Decoding is carried out jointly in an iterative fashion. Experimental results indicate that the proposed method requires fewer iterations and improves overall system performance.

  5. Application of a dual-resolution voxelization scheme to compressed-sensing (CS)-based iterative reconstruction in digital tomosynthesis (DTS)

    NASA Astrophysics Data System (ADS)

    Park, S. Y.; Kim, G. A.; Cho, H. S.; Park, C. K.; Lee, D. Y.; Lim, H. W.; Lee, H. W.; Kim, K. S.; Kang, S. Y.; Park, J. E.; Kim, W. S.; Jeon, D. H.; Je, U. K.; Woo, T. H.; Oh, J. E.

    2018-02-01

    In recent digital tomosynthesis (DTS), iterative reconstruction methods are often used owing to their potential to provide multiplanar images of superior quality to conventional filtered-backprojection (FBP) methods. However, the enormous computational cost of the iterative process remains an obstacle to their practical use. In this work, we propose a new DTS reconstruction method incorporating a dual-resolution voxelization scheme to overcome these difficulties, in which the voxels outside a small region-of-interest (ROI) containing the diagnostic target are binned by 2 × 2 × 2 while the voxels inside the ROI remain unbinned. We considered a compressed-sensing (CS)-based iterative algorithm with a dual-constraint strategy for more accurate DTS reconstruction. We implemented the proposed algorithm and performed systematic simulations and experiments to demonstrate its viability. Our results indicate that the proposed method is effective for considerably reducing the computational cost of iterative DTS reconstruction while leaving the image quality inside the ROI largely undegraded. A binning size of 2 × 2 × 2 required only about 31.9% of the computational memory and about 2.6% of the reconstruction time of the unbinned case. The reconstruction quality was evaluated in terms of the root-mean-square error (RMSE), the contrast-to-noise ratio (CNR), and the universal-quality index (UQI).

  6. Iterated unscented Kalman filter for phase unwrapping of interferometric fringes.

    PubMed

    Xie, Xianming

    2016-08-22

    A new phase unwrapping algorithm based on the iterated unscented Kalman filter is proposed to estimate the unambiguous unwrapped phase of interferometric fringes. The method combines an iterated unscented Kalman filter with a robust phase gradient estimator based on an amended matrix pencil model and an efficient quality-guided strategy based on heap sort. The iterated unscented Kalman filter, one of the most robust Bayesian methods in nonlinear signal processing, is applied for the first time to perform noise suppression and phase unwrapping of interferometric fringes simultaneously, which simplifies, and can even eliminate, the pre-filtering procedure that normally precedes phase unwrapping. The robust phase gradient estimator efficiently and accurately obtains the phase gradient information needed by the iterated unscented Kalman filtering phase unwrapping model. The efficient quality-guided strategy ensures that the proposed method quickly unwraps pixels along a path from the high-quality to the low-quality areas of the wrapped phase image, which greatly improves the efficiency of phase unwrapping. Results obtained from synthetic and real data show that the proposed method obtains better solutions with acceptable run time compared with some of the most widely used algorithms.

  7. Using the surface panel method to predict the steady performance of ducted propellers

    NASA Astrophysics Data System (ADS)

    Cai, Hao-Peng; Su, Yu-Min; Li, Xin; Shen, Hai-Long

    2009-12-01

    A new numerical method was developed for predicting the steady hydrodynamic performance of ducted propellers. A potential-based surface panel method was applied to both the duct and the propeller, and the interaction between them was solved by an induced velocity potential iterative method. Compared with the induced velocity iterative method, the presented method saves programming and computation time. Numerical results for a JD simplified ducted propeller series showed that the presented method is effective for predicting the steady hydrodynamic performance of ducted propellers.

  8. A low-complexity 2-point step size gradient projection method with selective function evaluations for smoothed total variation based CBCT reconstructions

    NASA Astrophysics Data System (ADS)

    Song, Bongyong; Park, Justin C.; Song, William Y.

    2014-11-01

    The Barzilai-Borwein (BB) 2-point step size gradient method is receiving attention for accelerating Total Variation (TV) based CBCT reconstructions. In order to become truly viable for clinical applications, however, its convergence property needs to be properly addressed. We propose a novel fast converging gradient projection BB method that requires ‘at most one function evaluation’ in each iterative step. This Selective Function Evaluation method, referred to as GPBB-SFE in this paper, exhibits the desired convergence property when it is combined with a ‘smoothed TV’ or any other differentiable prior. This way, the proposed GPBB-SFE algorithm offers fast and guaranteed convergence to the desired 3DCBCT image with minimal computational complexity. We first applied this algorithm to a Shepp-Logan numerical phantom. We then applied it to a CatPhan 600 physical phantom (The Phantom Laboratory, Salem, NY) and a clinically-treated head-and-neck patient, both acquired from the TrueBeam™ system (Varian Medical Systems, Palo Alto, CA). Furthermore, we accelerated the reconstruction by implementing the algorithm on an NVIDIA GTX 480 GPU card. We first compared GPBB-SFE with three recently proposed BB-based CBCT reconstruction methods available in the literature using the Shepp-Logan numerical phantom with 40 projections. It is found that GPBB-SFE shows either faster convergence speed/time or superior convergence properties compared to existing BB-based algorithms. With the CatPhan 600 physical phantom, the GPBB-SFE algorithm requires only 3 function evaluations in 30 iterations and reconstructs an image matching the quality of the standard 364-projection FDK reconstruction using only 60 projections. We then applied the algorithm to a clinically-treated head-and-neck patient. It was observed that the GPBB-SFE algorithm requires only 18 function evaluations in 30 iterations. Compared with the FDK algorithm with 364 projections, the GPBB-SFE algorithm produces a visibly equivalent quality CBCT image for the head-and-neck patient with only 180 projections, in 131.7 s, further supporting its clinical applicability.
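
    The kernel this method builds on, gradient projection with Barzilai-Borwein step sizes, is compact enough to sketch. The version below uses the standard BB1 step and a user-supplied projection; the smoothed-TV objective and the 'at most one function evaluation' safeguard that define GPBB-SFE are deliberately left out.

        import numpy as np

        def gpbb(grad, project, x0, alpha0=1e-2, n_iter=100):
            # Gradient projection with the BB 2-point step:
            #   alpha_k = (s's)/(s'y), s = x_k - x_{k-1}, y = g_k - g_{k-1}.
            x = project(np.asarray(x0, dtype=float))
            g = grad(x)
            alpha = alpha0
            for _ in range(n_iter):
                x_new = project(x - alpha * g)
                g_new = grad(x_new)
                s, y = x_new - x, g_new - g
                sy = s @ y
                alpha = (s @ s) / sy if sy > 0 else alpha0   # guarded BB1 step
                x, g = x_new, g_new
            return x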

  10. SU-D-206-04: Iterative CBCT Scatter Shading Correction Without Prior Information

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bai, Y; Wu, P; Mao, T

    2016-06-15

    Purpose: To estimate and remove the scatter contamination in the acquired projections of cone-beam CT (CBCT), suppressing the shading artifacts and improving image quality without prior information. Methods: The uncorrected CBCT images containing shading artifacts are reconstructed by applying the standard FDK algorithm to the CBCT raw projections. The uncorrected image is then segmented to generate an initial template image. To estimate the scatter signal, differences are calculated by subtracting the simulated projections of the template image from the raw projections. Since scatter signals are dominantly continuous and low-frequency in the projection domain, they are estimated by low-pass filtering the difference signals and subtracted from the raw CBCT projections to achieve scatter correction. Finally, the corrected CBCT image is reconstructed from the corrected projection data. Since an accurate template image is not readily segmented from the uncorrected CBCT image, the proposed scheme is iterated until the produced template no longer changes. Results: The proposed scheme is evaluated on Catphan©600 phantom data and CBCT images acquired from a pelvis patient. The results show that shading artifacts are effectively suppressed by the proposed method. Using multi-detector CT (MDCT) images as reference, quantitative analysis is performed to measure the quality of the corrected images. Compared with images without correction, the proposed method reduces the overall CT number error from over 200 HU to less than 50 HU and increases spatial uniformity. Conclusion: An iterative strategy that does not rely on prior information is proposed in this work to remove the shading artifacts due to scatter contamination in the projection domain. The method is evaluated in phantom and patient studies, and the results show that image quality is remarkably improved. The proposed method is efficient and practical for addressing the poor image quality of CBCT images. This work is supported by the Zhejiang Provincial Natural Science Foundation of China (Grant No. LR16F010001), National High-tech R&D Program for Young Scientists by the Ministry of Science and Technology of China (Grant No. 2015AA020917).
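
    The correction loop itself is simple enough to sketch. In the hedged version below, `reconstruct`, `forward_project`, and `segment` are assumed placeholder callables standing in for the FDK reconstruction, the projector of the template image, and the segmentation step; only the low-pass extraction of the scatter estimate is made concrete, with a Gaussian filter as one plausible choice.

    ```python
    import numpy as np
    from scipy.ndimage import gaussian_filter

    def iterative_scatter_correction(raw_proj, reconstruct, forward_project,
                                     segment, n_iter=5, sigma=20.0):
        """Prior-free iterative scatter-correction loop (schematic)."""
        corrected = raw_proj.copy()
        for _ in range(n_iter):
            image = reconstruct(corrected)        # FDK on current projections
            template = segment(image)             # scatter-free template image
            primary = forward_project(template)   # simulated scatter-free data
            # Scatter is low-frequency in the projection domain:
            # smooth the residual to isolate it (nonnegativity assumed).
            scatter = gaussian_filter(raw_proj - primary, sigma=sigma)
            corrected = raw_proj - np.clip(scatter, 0.0, None)
        return corrected
    ```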

  11. Some issues related to the novel spectral acceleration method for the fast computation of radiation/scattering from one-dimensional extremely large scale quasi-planar structures

    NASA Astrophysics Data System (ADS)

    Torrungrueng, Danai; Johnson, Joel T.; Chou, Hsi-Tseng

    2002-03-01

    The novel spectral acceleration (NSA) algorithm has been shown to produce an O(Ntot)-efficient iterative method of moments for the computation of radiation/scattering from both one-dimensional (1-D) and two-dimensional large-scale quasi-planar structures, where Ntot is the total number of unknowns to be solved. This method accelerates the matrix-vector multiplication in an iterative method of moments solution and divides contributions between points into "strong" (exact matrix elements) and "weak" (NSA algorithm) regions. The NSA method is based on a spectral representation of the electromagnetic Green's function and appropriate contour deformation, resulting in a fast multipole-like formulation in which contributions from large numbers of points to a single point are evaluated simultaneously. In the standard NSA algorithm the NSA parameters are derived on the basis of the assumption that the outermost possible saddle point, φs,max, along the real axis in the complex angular domain is small. For given height variations of quasi-planar structures, this assumption can be satisfied by adjusting the size of the strong region Ls. However, for quasi-planar structures with large height variations, the adjusted size of the strong region is typically large, resulting in significant increases in computational time for the computation of the strong-region contribution and degrading the overall efficiency of the NSA algorithm. In addition, for the case of extremely large scale structures, studies based on the physical optics approximation and a flat surface assumption show that the given NSA parameters in the standard NSA algorithm may yield inaccurate results. In this paper, analytical formulas associated with the NSA parameters for an arbitrary value of φs,max are presented, resulting in more flexibility in selecting Ls to compromise between the computation of the contributions of the strong and weak regions. In addition, a "multilevel" algorithm, decomposing 1-D extremely large scale quasi-planar structures into more than one weak region and appropriately choosing the NSA parameters for each weak region, is incorporated into the original NSA method to improve its accuracy.

  12. Methods for Calibration of Prout-Tompkins Kinetics Parameters Using EZM Iteration and GLO

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wemhoff, A P; Burnham, A K; de Supinski, B

    2006-11-07

    This document contains information regarding the standard procedures used to calibrate chemical kinetics parameters for the extended Prout-Tompkins model to match experimental data. Two methods for calibration are discussed: EZM calibration and GLO calibration. EZM calibration matches kinetics parameters to three data points, while GLO calibration slightly adjusts kinetic parameters to match multiple points. Information is provided regarding the theoretical approach and application procedure for both of these calibration algorithms. It is recommended that the user begin the calibration process with EZM calibration to obtain a good estimate and then fine-tune the parameters using GLO. Two examples are provided to guide the reader through a general calibration process.

  13. Adaptive relaxation for the steady-state analysis of Markov chains

    NASA Technical Reports Server (NTRS)

    Horton, Graham

    1994-01-01

    We consider a variant of the well-known Gauss-Seidel method for the solution of Markov chains in steady state. Whereas the standard algorithm visits each state exactly once per iteration in a predetermined order, the alternative approach uses a dynamic strategy. A set of states to be visited is maintained which can grow and shrink as the computation progresses. In this manner, we hope to concentrate the computational work in those areas of the chain in which maximum improvement in the solution can be achieved. We consider the adaptive approach both as a solver in its own right and as a relaxation method within the multi-level algorithm. Experimental results show significant computational savings in both cases.
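
    A minimal sketch of the dynamic strategy follows, assuming a row-stochastic matrix `P` of an irreducible chain (so P[i, i] < 1 for every state). Rather than sweeping all states per iteration, a worklist holds the states whose balance equations are currently out of tolerance, and an update to state i re-activates exactly the states whose equations involve pi_i.

    ```python
    import numpy as np

    def adaptive_gauss_seidel(P, tol=1e-10, max_updates=1_000_000):
        """Adaptive Gauss-Seidel for the stationary vector of a DTMC, pi = pi P."""
        n = P.shape[0]
        pi = np.full(n, 1.0 / n)
        active = set(range(n))            # worklist of states to (re)visit
        updates = 0
        while active and updates < max_updates:
            i = active.pop()
            updates += 1
            # Gauss-Seidel update of the i-th balance equation, using the
            # freshest values of the other components:
            #   pi_i = (sum_{j != i} pi_j P[j, i]) / (1 - P[i, i]).
            new = (pi @ P[:, i] - pi[i] * P[i, i]) / (1.0 - P[i, i])
            if abs(new - pi[i]) > tol:
                pi[i] = new
                # The change perturbs every equation j with P[i, j] > 0.
                active.update(np.flatnonzero(P[i]).tolist())
        return pi / pi.sum()              # normalize to a distribution
    ```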

  14. Determination of the mechanical indices of composite materials by testing multilayered samples

    NASA Astrophysics Data System (ADS)

    Teregulov, I. G.; Kayumov, R. A.; Butenko, Yu. I.; Safiullin, D. Kh.

    1996-09-01

    The mechanical indices of materials such as composites reinforced by braids and ribbons are difficult to determine by standard methods. Moreover, the indices of such materials may be greatly altered when they are converted into various structures. It has been suggested that these indices be determined by analyzing the structure testing data. The determination of linear and nonlinear elastic, plastic, and viscoelastic parameters of reinforced materials is discussed. A very simple structure, namely, cylindrical shells made by symmetric winding, is studied as an example. Equilibrium conditions are used to obtain resolving equations relative to the mechanical indices. Convergence of iteration methods is examined. The effect of the scatter of experimental data on the calculated results is analyzed.

  15. Radiation dose reduction in medical x-ray CT via Fourier-based iterative reconstruction.

    PubMed

    Fahimian, Benjamin P; Zhao, Yunzhe; Huang, Zhifeng; Fung, Russell; Mao, Yu; Zhu, Chun; Khatonabadi, Maryam; DeMarco, John J; Osher, Stanley J; McNitt-Gray, Michael F; Miao, Jianwei

    2013-03-01

    A Fourier-based iterative reconstruction technique, termed Equally Sloped Tomography (EST), is developed in conjunction with advanced mathematical regularization to investigate radiation dose reduction in x-ray CT. The method is experimentally implemented on fan-beam CT and evaluated as a function of imaging dose on a series of image quality phantoms and anonymized pediatric patient data sets. Numerical simulation experiments are also performed to explore the extension of EST to helical cone-beam geometry. EST is a Fourier-based iterative algorithm, which iterates back and forth between real and Fourier space utilizing the algebraically exact pseudopolar fast Fourier transform (PPFFT). In each iteration, physical constraints and mathematical regularization are applied in real space, while the measured data are enforced in Fourier space. The algorithm is automatically terminated when a proposed termination criterion is met. Experimentally, fan-beam projections were acquired using the Siemens z-flying focal spot technology and subsequently interleaved and rebinned to a pseudopolar grid. Image quality phantoms were scanned at systematically varied mAs settings, reconstructed by EST and conventional reconstruction methods such as filtered back projection (FBP), and quantified using metrics including resolution, signal-to-noise ratios (SNRs), and contrast-to-noise ratios (CNRs). Pediatric data sets were reconstructed at their original acquisition settings and additionally simulated at lower dose settings for comparison and evaluation of the potential for radiation dose reduction. Numerical experiments were conducted to quantify EST and other iterative methods in terms of image quality and computation time. The extension of EST to helical cone-beam CT was implemented using the advanced single-slice rebinning (ASSR) method. Based on the phantom and pediatric patient fan-beam CT data, it is demonstrated that EST reconstructions at the lowest scanner flux setting of 39 mAs produce image quality, resolution, and contrast comparable to FBP at the 140 mAs flux setting. Compared to the algebraic reconstruction technique and the expectation maximization statistical reconstruction algorithm, a significant reduction in computation time is achieved with EST. Finally, numerical experiments on helical cone-beam CT data suggest that the combination of EST and ASSR produces reconstructions with higher image quality and lower noise than the Feldkamp, Davis, and Kress (FDK) method and the conventional ASSR approach. A Fourier-based iterative method has thus been applied to the reconstruction of fan-beam CT data with reduced x-ray fluence. This method incorporates advantageous features of both real- and Fourier-space iterative schemes: using a fast and algebraically exact method to calculate forward projections, enforcing the measured data in Fourier space, and applying physical constraints and flexible regularization in real space. Our results suggest that EST can be utilized for radiation dose reduction in x-ray CT via the readily implementable technique of lowering mAs settings. Numerical experiments further indicate that EST requires less computation time than several other iterative algorithms and can, in principle, be extended to helical cone-beam geometry in combination with the ASSR method.
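
    The alternating real/Fourier structure of EST can be sketched generically. The toy loop below is a schematic only: it uses the ordinary Cartesian FFT rather than the algebraically exact pseudopolar FFT of the paper, `measured_F` holds measured Fourier samples on a boolean `mask`, nonnegativity serves as the real-space constraint, and the regularization term is omitted.

    ```python
    import numpy as np

    def fourier_iterative_recon(measured_F, mask, n_iter=100):
        """EST-like alternation between real-space constraints and
        Fourier-space data enforcement (Cartesian-FFT schematic)."""
        F = np.where(mask, measured_F, 0.0)     # unknown frequencies start at 0
        for _ in range(n_iter):
            img = np.fft.ifft2(F).real          # back to real space
            img = np.maximum(img, 0.0)          # physical constraint
            F = np.fft.fft2(img)
            F[mask] = measured_F[mask]          # enforce the measured data
        return np.maximum(np.fft.ifft2(F).real, 0.0)
    ```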

  16. The solution of linear systems of equations with a structural analysis code on the NAS CRAY-2

    NASA Technical Reports Server (NTRS)

    Poole, Eugene L.; Overman, Andrea L.

    1988-01-01

    Two methods for solving linear systems of equations on the NAS Cray-2 are described. One is a direct method; the other is an iterative method. Both methods exploit the architecture of the Cray-2, particularly the vectorization, and are aimed at structural analysis applications. To demonstrate and evaluate the methods, they were installed in a finite element structural analysis code denoted the Computational Structural Mechanics (CSM) Testbed. A description of the techniques used to integrate the two solvers into the Testbed is given. Storage schemes, memory requirements, operation counts, and reformatting procedures are discussed. Finally, results from the new methods are compared with results from the initial Testbed sparse Choleski equation solver for three structural analysis problems. The new direct solvers described achieve the highest computational rates of the methods compared. The new iterative methods are not able to achieve as high computation rates as the vectorized direct solvers but are best for well conditioned problems which require fewer iterations to converge to the solution.

  17. An Automated Baseline Correction Method Based on Iterative Morphological Operations.

    PubMed

    Chen, Yunliang; Dai, Liankui

    2018-05-01

    Raman spectra usually suffer from baseline drift caused by fluorescence or other effects. Therefore, baseline correction is a necessary and crucial step that must be performed before subsequent processing and analysis of Raman spectra. An automated baseline correction method based on iterative morphological operations is proposed in this work. The method first adaptively determines the structuring element and then gradually removes the spectral peaks during iteration to obtain an estimated baseline. Experiments on simulated data and real-world Raman data show that the proposed method is accurate, fast, and flexible enough to handle different kinds of baselines in various practical situations. Comparison of the proposed method with several state-of-the-art baseline correction methods demonstrates its advantages in terms of accuracy, adaptability, and flexibility. Although only Raman spectra are investigated in this paper, the proposed method can hopefully be applied to baseline correction of other analytical instrumental signals, such as IR spectra and chromatograms.
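
    A simplified version of the idea is easy to sketch with standard morphology routines. In the sketch below the paper's adaptive structuring-element selection is replaced by a fixed growth schedule (an assumption for illustration, not the authors' rule): the opening window grows until the running baseline estimate stops changing, progressively suppressing peaks while tracking the slow drift.

    ```python
    import numpy as np
    from scipy.ndimage import grey_opening

    def morphological_baseline(spectrum, max_width=201, step=10):
        """Estimate a baseline by iterated morphological opening (sketch)."""
        baseline = spectrum.astype(float)
        prev = None
        for width in range(3, max_width, step):
            # Opening removes peaks narrower than the current window.
            baseline = grey_opening(baseline, size=width)
            if prev is not None and np.max(np.abs(baseline - prev)) < 1e-6:
                break                      # converged: peaks fully suppressed
            prev = baseline.copy()
        return baseline

    # Usage: corrected = spectrum - morphological_baseline(spectrum)
    ```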

  18. Three dimensional iterative beam propagation method for optical waveguide devices

    NASA Astrophysics Data System (ADS)

    Ma, Changbao; Van Keuren, Edward

    2006-10-01

    The finite difference beam propagation method (FD-BPM) is an effective model for simulating a wide range of optical waveguide structures. The classical FD-BPMs are based on the Crank-Nicolson scheme, and in tridiagonal form can be solved using the Thomas method. We present a different type of algorithm for 3-D structures. In this algorithm, the wave equation is formulated into a large sparse matrix equation which can be solved using iterative methods. The simulation-window shifting scheme and threshold technique introduced in our earlier work are utilized to overcome the convergence problem of iterative methods for large sparse matrix equations and wide-angle simulations. This method enables us to develop higher-order 3-D wide-angle (WA-) BPMs based on Padé approximant operators and the multistep method, which are commonly used in WA-BPMs for 2-D structures. Simulations using the new method are compared with analytical results to confirm its effectiveness and applicability.

  19. Living systematic reviews: 3. Statistical methods for updating meta-analyses.

    PubMed

    Simmonds, Mark; Salanti, Georgia; McKenzie, Joanne; Elliott, Julian

    2017-11-01

    A living systematic review (LSR) should keep the review current as new research evidence emerges. Any meta-analyses included in the review will also need updating as new material is identified. If the aim of the review is solely to present the best current evidence, standard meta-analysis may be sufficient, provided reviewers are aware that results may change at later updates. If the review is used in a decision-making context, more caution may be needed. When using standard meta-analysis methods, the chance of incorrectly concluding that any updated meta-analysis is statistically significant when there is no effect (the type I error) increases rapidly as more updates are performed. Inaccurate estimation of any heterogeneity across studies may also lead to inappropriate conclusions. This paper considers four methods to avoid some of these statistical problems when updating meta-analyses: two methods, the law of the iterated logarithm and the Shuster method, control primarily for inflation of type I error, while two others, trial sequential analysis and sequential meta-analysis, control for both type I and type II errors (failing to detect a genuine effect) and take account of heterogeneity. This paper compares the methods and considers how they could be applied to LSRs. Copyright © 2017 Elsevier Inc. All rights reserved.

  20. Fragment-based quantitative structure-activity relationship (FB-QSAR) for fragment-based drug design.

    PubMed

    Du, Qi-Shi; Huang, Ri-Bo; Wei, Yu-Tuo; Pang, Zong-Wen; Du, Li-Qin; Chou, Kuo-Chen

    2009-01-30

    In cooperation with fragment-based design, a new drug design method, the so-called "fragment-based quantitative structure-activity relationship" (FB-QSAR), is proposed. The essence of the new method is that the molecular framework in a family of drug candidates is divided into several fragments according to the substituents being investigated. The bioactivities of the molecules are correlated with the physicochemical properties of the molecular fragments through two sets of coefficients in the linear free energy equations: one set for the physicochemical properties and the other for the weight factors of the molecular fragments. Meanwhile, an iterative double least square (IDLS) technique is developed to solve the two sets of coefficients in a training data set alternately and iteratively. The IDLS technique is a feedback procedure with machine learning ability. The standard two-dimensional quantitative structure-activity relationship (2D-QSAR) is a special case of FB-QSAR in which the whole molecule is treated as one entity. The FB-QSAR approach can remarkably enhance the predictive power and provide more structural insights into rational drug design. As an example, FB-QSAR is applied to build a predictive model of neuraminidase inhibitors for drug development against the H5N1 influenza virus. (c) 2008 Wiley Periodicals, Inc.
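
    The IDLS technique is, at heart, an alternating least-squares loop. The sketch below shows it under a simplified bilinear model (no intercepts or regularization, which the original formulation may include): with `X[m, k, p]` the p-th physicochemical property of fragment k in molecule m, fragment weights `w` and property coefficients `c` are solved alternately, each step an ordinary least-squares fit.

    ```python
    import numpy as np

    def idls(X, y, n_iter=50):
        """Iterative double least squares for a bilinear FB-QSAR-style model:
        activity ~ sum_k w_k * (sum_p c_p * X[:, k, p])."""
        n_mol, n_frag, n_prop = X.shape
        c = np.ones(n_prop)                          # initial property coeffs
        for _ in range(n_iter):
            # Fix c, solve for fragment weights w.
            A_w = X @ c                              # shape (n_mol, n_frag)
            w, *_ = np.linalg.lstsq(A_w, y, rcond=None)
            # Fix w, solve for property coefficients c.
            A_c = np.einsum('mfp,f->mp', X, w)       # shape (n_mol, n_prop)
            c, *_ = np.linalg.lstsq(A_c, y, rcond=None)
        # Note: w and c share an overall scale factor; normalize c if needed.
        return w, c
    ```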

  1. Plenoptic Image Motion Deblurring.

    PubMed

    Chandramouli, Paramanand; Jin, Meiguang; Perrone, Daniele; Favaro, Paolo

    2018-04-01

    We propose a method to remove motion blur in a single light field captured with a moving plenoptic camera. Since the motion is unknown, we resort to a blind deconvolution formulation, where one aims to identify both the blur point spread function and the latent sharp image. Even in the absence of motion, light field images captured by a plenoptic camera are affected by a non-trivial combination of both aliasing and defocus, which depends on the 3D geometry of the scene. Therefore, motion deblurring algorithms designed for standard cameras are not directly applicable. Moreover, many state-of-the-art blind deconvolution algorithms are based on iterative schemes, where blurry images are synthesized through the imaging model. However, current imaging models for plenoptic images are impractical due to their high dimensionality. We observe that plenoptic cameras introduce periodic patterns that can be exploited to obtain highly parallelizable numerical schemes to synthesize images. These schemes allow extremely efficient GPU implementations that enable the use of iterative methods. We can then cast blind deconvolution of a blurry light field image as a regularized energy minimization to recover a sharp high-resolution scene texture and the camera motion. Furthermore, the proposed formulation can handle non-uniform motion blur due to camera shake as demonstrated on both synthetic and real light field data.

  2. Calculating vibrational spectra with sum of product basis functions without storing full-dimensional vectors or matrices.

    PubMed

    Leclerc, Arnaud; Carrington, Tucker

    2014-05-07

    We propose an iterative method for computing vibrational spectra that significantly reduces the memory cost of calculations. It uses a direct product primitive basis, but does not require storing vectors with as many components as there are product basis functions. Wavefunctions are represented in a basis each of whose functions is a sum of products (SOP) and the factorizable structure of the Hamiltonian is exploited. If the factors of the SOP basis functions are properly chosen, wavefunctions are linear combinations of a small number of SOP basis functions. The SOP basis functions are generated using a shifted block power method. The factors are refined with a rank reduction algorithm to cap the number of terms in a SOP basis function. The ideas are tested on a 20-D model Hamiltonian and a realistic CH3CN (12-dimensional) potential. For the 20-D problem, to use a standard direct product iterative approach one would need to store vectors with about 10^20 components and would hence require about 8 × 10^11 GB. With the approach of this paper only 1 GB of memory is necessary. Results for CH3CN agree well with those of a previous calculation on the same potential.

  3. A Semi-Lagrangian Two-Level Preconditioned Newton-Krylov Solver for Constrained Diffeomorphic Image Registration.

    PubMed

    Mang, Andreas; Biros, George

    2017-01-01

    We propose an efficient numerical algorithm for the solution of diffeomorphic image registration problems. We use a variational formulation constrained by a partial differential equation (PDE), where the constraints are a scalar transport equation. We use a pseudospectral discretization in space and a second-order accurate semi-Lagrangian time-stepping scheme for the transport equations. We solve for a stationary velocity field using a preconditioned, globalized, matrix-free Newton-Krylov scheme. We propose and test a two-level Hessian preconditioner. We consider two strategies for inverting the preconditioner on the coarse grid: a nested preconditioned conjugate gradient method (exact solve) and a nested Chebyshev iterative method (inexact solve) with a fixed number of iterations. We test the performance of our solver in different synthetic and real-world two-dimensional application scenarios. We study grid convergence and computational efficiency of our new scheme. We compare the performance of our solver against our initial implementation that uses the same spatial discretization but a standard, explicit, second-order Runge-Kutta scheme for the numerical time integration of the transport equations and a single-level preconditioner. Our improved scheme delivers significant speedups over our original implementation. As a highlight, we observe a 20× speedup for a two-dimensional, real-world multi-subject medical image registration problem.

  4. Designing gradient coils with reduced hot spot temperatures.

    PubMed

    While, Peter T; Forbes, Larry K; Crozier, Stuart

    2010-03-01

    Gradient coil temperature is an important concern in the design and construction of MRI scanners. Closely spaced gradient coil windings cause temperature hot spots within the system as a result of Ohmic heating associated with large current being driven through resistive material, and can strongly affect the performance of the coils. In this paper, a model is presented for predicting the spatial temperature distribution of a gradient coil, including the location and extent of temperature hot spots. Subsequently, a method is described for designing gradient coils with improved temperature distributions and reduced hot spot temperatures. Maximum temperature represents a non-linear constraint and a relaxed fixed point iteration routine is proposed to adjust coil windings iteratively to minimise this coil feature. Several examples are considered that assume different thermal material properties and cooling mechanisms for the gradient system. Coil winding solutions are obtained for all cases considered that display a considerable drop in hot spot temperature (>20%) when compared to standard minimum power gradient coils with equivalent gradient homogeneity, efficiency and inductance. The method is semi-analytical in nature and can be adapted easily to consider other non-linear constraints in the design of gradient coils or similar systems. Crown Copyright (c) 2009. Published by Elsevier Inc. All rights reserved.
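
    The relaxation routine referred to is the standard damped fixed-point update. A generic sketch follows; the coil-specific constraint map is abstracted into a callable `g` (an assumption for illustration), and under-relaxation (omega < 1) trades convergence speed for stability of the iteration.

    ```python
    def relaxed_fixed_point(g, x0, omega=0.5, tol=1e-8, max_iter=500):
        """Relaxed fixed-point iteration: x <- (1 - omega) * x + omega * g(x)."""
        x = x0
        for _ in range(max_iter):
            x_new = (1.0 - omega) * x + omega * g(x)
            if abs(x_new - x) < tol:   # scalar case; use a norm for vectors
                return x_new
            x = x_new
        return x

    # Toy usage: the Babylonian map for sqrt(2), damped.
    root2 = relaxed_fixed_point(lambda t: 0.5 * (t + 2.0 / t), 1.0)
    ```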

  5. Evolution of Quantitative Measures in NMR: Quantum Mechanical qHNMR Advances Chemical Standardization of a Red Clover (Trifolium pratense) Extract

    PubMed Central

    2017-01-01

    Chemical standardization, along with morphological and DNA analysis ensures the authenticity and advances the integrity evaluation of botanical preparations. Achievement of a more comprehensive, metabolomic standardization requires simultaneous quantitation of multiple marker compounds. Employing quantitative 1H NMR (qHNMR), this study determined the total isoflavone content (TIfCo; 34.5–36.5% w/w) via multimarker standardization and assessed the stability of a 10-year-old isoflavone-enriched red clover extract (RCE). Eleven markers (nine isoflavones, two flavonols) were targeted simultaneously, and outcomes were compared with LC-based standardization. Two advanced quantitative measures in qHNMR were applied to derive quantities from complex and/or overlapping resonances: a quantum mechanical (QM) method (QM-qHNMR) that employs 1H iterative full spin analysis, and a non-QM method that uses linear peak fitting algorithms (PF-qHNMR). A 10 min UHPLC-UV method provided auxiliary orthogonal quantitation. This is the first systematic evaluation of QM and non-QM deconvolution as qHNMR quantitation measures. It demonstrates that QM-qHNMR can account successfully for the complexity of 1H NMR spectra of individual analytes and how QM-qHNMR can be built for mixtures such as botanical extracts. The contents of the main bioactive markers were in good agreement with earlier HPLC-UV results, demonstrating the chemical stability of the RCE. QM-qHNMR advances chemical standardization by its inherent QM accuracy and the use of universal calibrants, avoiding the impractical need for identical reference materials. PMID:28067513

  6. Uzawa-type iterative method with parareal preconditioner for a parabolic optimal control problem

    NASA Astrophysics Data System (ADS)

    Lapin, A.; Romanenko, A.

    2016-11-01

    The article deals with an optimal control problem whose state problem is a parabolic equation. There are pointwise constraints on the state and control functions. The objective functional involves observations given throughout the domain at each time instant. Conditions for the convergence of the Uzawa-type iterative method are given, and a parareal method is used to invert the preconditioner. The results of calculations are presented.
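
    For reference, the classical Uzawa iteration for a saddle-point system can be sketched densely as below. The parareal preconditioning of the parabolic state solves, the key contribution of the paper, is not reproduced; convergence requires the step `tau` to be small relative to the spectrum of B A^{-1} B^T.

    ```python
    import numpy as np

    def uzawa(A, B, f, g, tau=1.0, n_iter=200):
        """Classical Uzawa iteration for the saddle-point system
            A u + B.T lam = f,   B u = g.
        Each step solves the state equation for u, then takes a gradient
        ascent step on the Lagrange multiplier lam."""
        n, m = A.shape[0], B.shape[0]
        u, lam = np.zeros(n), np.zeros(m)
        for _ in range(n_iter):
            u = np.linalg.solve(A, f - B.T @ lam)   # state solve
            lam = lam + tau * (B @ u - g)           # multiplier update
        return u, lam
    ```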

  7. Professional Teaching Standards: A Comparative Analysis of Their History, Implementation and Efficacy

    ERIC Educational Resources Information Center

    Call, Kairen

    2018-01-01

    Since the publication of results from the first iteration of testing within the Programme for International Student Assessment (PISA) and the ensuing media consternation and political rhetoric about teacher quality in education systems around the world, professional standards for teachers have been considered, developed and implemented globally in…

  8. Parallel solution of the symmetric tridiagonal eigenproblem. Research report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jessup, E.R.

    1989-10-01

    This thesis discusses methods for computing all eigenvalues and eigenvectors of a symmetric tridiagonal matrix on a distributed-memory Multiple Instruction, Multiple Data multiprocessor. Only those techniques having the potential for both high numerical accuracy and significant large-grained parallelism are investigated. These include the QL method or Cuppen's divide and conquer method based on rank-one updating to compute both eigenvalues and eigenvectors, bisection to determine eigenvalues, and inverse iteration to compute eigenvectors. To begin, the methods are compared with respect to computation time, communication time, parallel speedup, and accuracy. Experiments on an iPSC hypercube multiprocessor reveal that Cuppen's method is the most accurate approach, but bisection with inverse iteration is the fastest and most parallel. Because the accuracy of the latter combination is determined by the quality of the computed eigenvectors, the factors influencing the accuracy of inverse iteration are examined. This includes, in part, statistical analysis of the effect of a starting vector with random components. These results are used to develop an implementation of inverse iteration producing eigenvectors with lower residual error and better orthogonality than those generated by the EISPACK routine TINVIT. This thesis concludes with adaptations of methods for the symmetric tridiagonal eigenproblem to the related problem of computing the singular value decomposition (SVD) of a bidiagonal matrix.
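
    The bisection-plus-inverse-iteration pairing singled out as fastest is straightforward to sketch. The version below locates the k-th eigenvalue of a symmetric tridiagonal matrix by bisection on the Sturm-sequence count and then recovers its eigenvector with a few inverse-iteration steps; for clarity it uses a dense solve where a production code would exploit the banded structure.

    ```python
    import numpy as np

    def sturm_count(d, e, x):
        """Number of eigenvalues of tridiag(e, d, e) strictly below x."""
        count, q = 0, 1.0
        for i in range(len(d)):
            off = e[i - 1] ** 2 / q if i > 0 else 0.0
            q = d[i] - x - off
            if q == 0.0:
                q = 1e-300            # avoid division by zero in the recurrence
            if q < 0:
                count += 1
        return count

    def eig_bisect_invit(d, e, k, lo, hi, tol=1e-12):
        """k-th (0-based, ascending) eigenpair via bisection + inverse iteration.
        [lo, hi] must bracket the spectrum, e.g. Gershgorin bounds
        min(d) - 2*max|e| and max(d) + 2*max|e|."""
        while hi - lo > tol:
            mid = 0.5 * (lo + hi)
            if sturm_count(d, e, mid) > k:
                hi = mid
            else:
                lo = mid
        lam = 0.5 * (lo + hi)
        T = np.diag(d) + np.diag(e, 1) + np.diag(e, -1)
        shifted = T - (lam + 1e-10) * np.eye(len(d))  # tiny shift avoids singularity
        v = np.random.default_rng(0).normal(size=len(d))
        for _ in range(4):                            # a few steps suffice
            v = np.linalg.solve(shifted, v)
            v /= np.linalg.norm(v)
        return lam, v
    ```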

  10. Wind profiling for a coherent wind Doppler lidar by an auto-adaptive background subtraction approach.

    PubMed

    Wu, Yanwei; Guo, Pan; Chen, Siying; Chen, He; Zhang, Yinchao

    2017-04-01

    Auto-adaptive background subtraction (AABS) is proposed as a denoising method for data processing of coherent Doppler lidar (CDL). The method is designed specifically for the low-signal-to-noise-ratio regime, in which the power spectral density of CDL data drifts. Unlike the periodogram maximum (PM) and adaptive iteratively reweighted penalized least squares (airPLS) methods, the proposed method presents reliable peaks and is thus advantageous for identifying peak locations. According to analysis of simulated and measured data, the proposed method outperforms the airPLS method and the PM algorithm in the furthest detectable range, improving the detection range by approximately 16.7% and 40%, respectively. It also yields smaller mean wind velocity and standard error values than the airPLS and PM methods. The AABS approach improves the quality of Doppler shift estimates and can be applied to obtain whole wind profiles with CDL.

  11. Stability of the iterative solutions of integral equations as one phase freezing criterion.

    PubMed

    Fantoni, R; Pastore, G

    2003-10-01

    A recently proposed connection between the threshold for the stability of the iterative solution of integral equations for the pair correlation functions of a classical fluid and the structural instability of the corresponding real fluid is carefully analyzed. Direct calculation of the Lyapunov exponent of the standard iterative solution of the hypernetted chain and Percus-Yevick integral equations for the one-dimensional (1D) hard-rod fluid shows the same behavior observed in 3D systems. Since no phase transition is allowed in such a 1D system, our analysis shows that the proposed one-phase criterion, at least in this case, fails. We argue that the observed proximity between the numerical and the structural instability in 3D originates from the enhanced structure present in the fluid but, in view of the arbitrary dependence on the iteration scheme, it seems difficult to relate the numerical stability analysis to a robust one-phase criterion for predicting a thermodynamic phase transition.

  12. An iterative method for systems of nonlinear hyperbolic equations

    NASA Technical Reports Server (NTRS)

    Scroggs, Jeffrey S.

    1989-01-01

    An iterative algorithm for the efficient solution of systems of nonlinear hyperbolic equations is presented. Parallelism is evident at several levels. In the formation of the iteration, the equations are decoupled, thereby providing large-grain parallelism. Parallelism may also be exploited within the solves for each equation. Convergence of the iteration is established via a bounding function argument. Experimental results in two dimensions are presented.

  13. Accelerated iterative beam angle selection in IMRT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bangert, Mark, E-mail: m.bangert@dkfz.de; Unkelbach, Jan

    2016-03-15

    Purpose: Iterative methods for beam angle selection (BAS) for intensity-modulated radiation therapy (IMRT) planning sequentially construct a beneficial ensemble of beam directions. In a naïve implementation, the nth beam is selected by adding beam orientations one-by-one from a discrete set of candidates to an existing ensemble of (n − 1) beams. The best beam orientation is identified in a time-consuming process by solving the fluence map optimization (FMO) problem for every candidate beam and selecting the beam that yields the largest improvement to the objective function value. This paper evaluates two alternative methods to accelerate iterative BAS based on surrogates for the FMO objective function value. Methods: We suggest selecting candidate beams not on the basis of the FMO objective function value after convergence but (1) on the objective function value after five FMO iterations of a gradient-based algorithm and (2) on a projected gradient of the FMO problem in the first iteration. The performance of the objective function surrogates is evaluated based on the resulting objective function values and dose statistics in a treatment planning study comprising three intracranial, three pancreas, and three prostate cases. Furthermore, iterative BAS is evaluated for an application in which a small number of noncoplanar beams complement a set of coplanar beam orientations. This scenario is of practical interest as noncoplanar setups may require additional attention of the treatment personnel for every couch rotation. Results: Iterative BAS relying on objective function surrogates yields similar results compared to naïve BAS with regard to the objective function values and dose statistics. At the same time, early stopping of the FMO and using the projected gradient during the first iteration enable reductions in computation time by approximately one to two orders of magnitude. With regard to the clinical delivery of noncoplanar IMRT treatments, we could show that optimized beam ensembles using only a few noncoplanar beam orientations often approach the plan quality of fully noncoplanar ensembles. Conclusions: We conclude that iterative BAS in combination with objective function surrogates can be a viable option to implement automated BAS at clinically acceptable computation times.
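
    The surrogate idea can be sketched as a greedy loop in which only the candidate scoring function is swapped out. Below, `fmo_surrogate` is an assumed callable returning either the FMO objective after a handful of gradient iterations or a projected-gradient measure (lower is better); everything else is the naïve one-beam-at-a-time construction.

    ```python
    import numpy as np

    def greedy_bas(candidates, fmo_surrogate, n_beams):
        """Greedy iterative beam-angle selection with a cheap surrogate score.

        `candidates` is a list of hashable beam orientations (e.g. gantry
        angles); `fmo_surrogate(ensemble)` scores a trial ensemble without
        solving the FMO problem to convergence."""
        ensemble = []
        for _ in range(n_beams):
            remaining = [b for b in candidates if b not in ensemble]
            # Score each candidate added to the current ensemble; keep the best.
            scores = [fmo_surrogate(ensemble + [b]) for b in remaining]
            ensemble.append(remaining[int(np.argmin(scores))])
        return ensemble
    ```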

  14. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Andreyev, A.

    Purpose: Compton cameras (CCs) use electronic collimation to reconstruct images of activity distribution. Although this approach can greatly improve imaging efficiency, due to the complex geometry of the CC principle, image reconstruction with the standard iterative algorithms, such as ordered subset expectation maximization (OSEM), can be very time-consuming, even more so if resolution recovery (RR) is implemented. We have previously shown that the origin ensemble (OE) algorithm can be used for the reconstruction of CC data. Here we propose a method of extending our OE algorithm to include RR. Methods: To validate the proposed algorithm we used Monte Carlo simulations of a CC composed of multiple layers of pixelated CZT detectors and designed for imaging small animals. A series of CC acquisitions of small hot spheres and the Derenzo phantom placed in air were simulated. Images obtained from (a) the exact data, (b) blurred data reconstructed without resolution recovery, and (c) blurred data reconstructed with resolution recovery were compared. Furthermore, the reconstructed contrast-to-background ratios were investigated using the phantom with nine spheres placed in a hot background. Results: Our simulations demonstrate that the proposed method allows for the recovery of the resolution loss that is due to imperfect accuracy of event detection. Additionally, tests of camera sensitivity corresponding to different detector configurations demonstrate that the proposed CC design has sensitivity comparable to PET. When the same number of events was considered, the computation time per iteration increased only by a factor of 2 when OE reconstruction with the resolution recovery correction was performed relative to the original OE algorithm. We estimate that the addition of resolution recovery to OSEM would increase reconstruction times by 2-3 orders of magnitude per iteration. Conclusions: The results of our tests demonstrate the improvement of image resolution provided by the OE reconstructions with resolution recovery. The quality of images and their contrast are similar to those obtained from the OE reconstructions from scans simulated with perfect energy and spatial resolutions.

  15. Using an Iterative Mixed-Methods Research Design to Investigate Schools Facing Exceptionally Challenging Circumstances within Trinidad and Tobago

    ERIC Educational Resources Information Center

    De Lisle, Jerome; Seunarinesingh, Krishna; Mohammed, Rhoda; Lee-Piggott, Rinnelle

    2017-01-01

    In this study, methodology and theory were linked to explicate the nature of education practice within schools facing exceptionally challenging circumstances (SFECC) in Trinidad and Tobago. The research design was an iterative quan>QUAL-quan>qual multi-method research programme, consisting of 3 independent projects linked together by overall…

  16. A Study of Morrison's Iterative Noise Removal Method. Final Report M. S. Thesis

    NASA Technical Reports Server (NTRS)

    Ioup, G. E.; Wright, K. A. R.

    1985-01-01

    Morrison's iterative noise removal method is studied by characterizing its effect upon systems of differing noise level and response function. The nature of data acquired from a linear shift invariant instrument is discussed so as to define the relationship between the input signal, the instrument response function, and the output signal. Fourier analysis is introduced, along with several pertinent theorems, as a tool to more thorough understanding of the nature of and difficulties with deconvolution. In relation to such difficulties the necessity of a noise removal process is discussed. Morrison's iterative noise removal method and the restrictions upon its application are developed. The nature of permissible response functions is discussed, as is the choice of the response functions used.

  17. Truncated Conjugate Gradient: An Optimal Strategy for the Analytical Evaluation of the Many-Body Polarization Energy and Forces in Molecular Simulations.

    PubMed

    Aviat, Félix; Levitt, Antoine; Stamm, Benjamin; Maday, Yvon; Ren, Pengyu; Ponder, Jay W; Lagardère, Louis; Piquemal, Jean-Philip

    2017-01-10

    We introduce a new class of methods, denoted Truncated Conjugate Gradient (TCG), to solve the many-body polarization energy and its associated forces in molecular simulations (i.e. molecular dynamics (MD) and Monte Carlo). The method consists of a fixed number of Conjugate Gradient (CG) iterations. TCG approaches provide a scalable solution to the polarization problem at a user-chosen cost and a corresponding optimal accuracy. The optimality of the CG method guarantees that the number of required matrix-vector products is reduced to a minimum compared to other iterative methods. This family of methods is non-empirical, fully adaptive, and provides analytical gradients, thereby avoiding any energy drift in MD as compared to popular iterative solvers. Besides speed, one great advantage of this class of approximate methods is that their accuracy is systematically improvable. Indeed, as the CG method is a Krylov subspace method, the associated error is monotonically reduced at each iteration. On top of that, two improvements can be proposed at virtually no cost: (i) preconditioners can be employed, leading to the Truncated Preconditioned Conjugate Gradient (TPCG); (ii) since the residual of the final step of the CG method is available, one additional Picard fixed point iteration ("peek"), equivalent to one step of Jacobi Over Relaxation (JOR) with relaxation parameter ω, can be made at almost no cost. This method is denoted TCG-n(ω). Black-box adaptive methods to find good choices of ω are provided and discussed. Results show that TPCG-3(ω) is converged to high accuracy (a few kcal/mol) for various types of systems including proteins and highly charged systems at the fixed cost of four matrix-vector products: three CG iterations plus the initial CG descent direction. Alternatively, T(P)CG-2(ω) provides robust results at a reduced cost (three matrix-vector products) and offers new perspectives for long polarizable MD as a production algorithm. The T(P)CG-1(ω) level provides less accurate solutions for inhomogeneous systems, but its applicability to well-conditioned problems such as water is remarkable, with only two matrix-vector product evaluations.
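
    Since TCG is CG stopped after a fixed number of steps, plus an optional JOR "peek", a dense toy version is short. The sketch below assumes a symmetric positive-definite polarization matrix `T` handed in explicitly; a polarizable MD code would instead supply matrix-vector products.

    ```python
    import numpy as np

    def tcg(T, E, n_iter=3, omega=None):
        """Truncated Conjugate Gradient for T mu = E: exactly n_iter CG steps
        from a zero guess (no convergence test, so forces stay analytical),
        optionally followed by the JOR "peek" mu <- mu + omega * D^{-1} r."""
        mu = np.zeros_like(E)
        r = E - T @ mu                # initial residual (= E for zero guess)
        p = r.copy()
        rs = r @ r
        for _ in range(n_iter):
            Tp = T @ p
            alpha = rs / (p @ Tp)     # step length along search direction
            mu += alpha * p
            r -= alpha * Tp
            rs_new = r @ r
            p = r + (rs_new / rs) * p # conjugate update of the direction
            rs = rs_new
        if omega is not None:         # Picard/JOR peek at nearly no cost
            mu += omega * r / np.diag(T)
        return mu
    ```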

  19. Subsonic panel method for designing wing surfaces from pressure distribution

    NASA Technical Reports Server (NTRS)

    Bristow, D. R.; Hawk, J. D.

    1983-01-01

    An iterative method has been developed for designing wing section contours corresponding to a prescribed subcritical distribution of pressure. The calculations are initialized by using a surface panel method to analyze a baseline wing or wing-fuselage configuration. A first-order expansion to the baseline panel method equations is then used to calculate a matrix containing the partial derivative of potential at each control point with respect to each unknown geometry parameter. In every iteration cycle, the matrix is used both to calculate the geometry perturbation and to analyze the perturbed geometry. The distribution of potential on the perturbed geometry is established by simple linear extrapolation from the baseline solution. The extrapolated potential is converted to pressure by Bernoulli's equation. Not only is the accuracy of the approach good for very large perturbations, but the computing cost of each complete iteration cycle is substantially less than one analysis solution by a conventional panel method.

  20. Low-Cost 3-D Flow Estimation of Blood With Clutter.

    PubMed

    Wei, Siyuan; Yang, Ming; Zhou, Jian; Sampson, Richard; Kripfgans, Oliver D; Fowlkes, J Brian; Wenisch, Thomas F; Chakrabarti, Chaitali

    2017-05-01

    Volumetric flow rate estimation is an important ultrasound medical imaging modality that is used for diagnosing cardiovascular diseases. Flow rates are obtained by integrating velocity estimates over a cross-sectional plane. Speckle tracking is a promising approach that overcomes the angle dependency of traditional Doppler methods, but suffers from poor lateral resolution. Recent work improves lateral velocity estimation accuracy by reconstructing a synthetic lateral phase (SLP) signal. However, the estimation accuracy of such approaches is compromised by the presence of clutter. Eigen-based clutter filtering has been shown to be effective in removing the clutter signal, but it is computationally expensive, precluding its use at high volume rates. In this paper, we propose low-complexity schemes for both velocity estimation and clutter filtering. We use a two-tiered motion estimation scheme to combine the low-complexity sum-of-absolute-difference and SLP methods to achieve subpixel lateral accuracy. We reduce the complexity of eigen-based clutter filtering by processing in subgroups and replacing singular value decomposition with the less compute-intensive power iteration and subspace iteration methods. Finally, to improve flow rate estimation accuracy, we use kernel power weighting when integrating the velocity estimates. We evaluate our method for fast- and slow-moving clutter at beam-to-flow angles of 90° and 60° using Field II simulations, demonstrating high estimation accuracy across scenarios. For instance, for a beam-to-flow angle of 90° and fast-moving clutter, our estimation method provides a bias of -8.8% and standard deviation of 3.1% relative to the actual flow rate.
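
    The power/subspace-iteration substitution for the full SVD can be sketched compactly. Below, `X` is an assumed (ensemble, samples) slow-time data matrix: a few orthogonalized power iterations on the small ensemble covariance estimate the dominant clutter subspace (slow-moving tissue), which is then projected out, a cheap stand-in for eigen-based clutter filtering.

    ```python
    import numpy as np

    def remove_clutter_power_iteration(X, n_clutter=1, n_power=10):
        """Clutter rejection via subspace (orthogonal) iteration instead of SVD."""
        C = X @ X.conj().T                       # small ensemble covariance
        Q = np.random.default_rng(0).normal(size=(C.shape[0], n_clutter))
        for _ in range(n_power):
            Q, _ = np.linalg.qr(C @ Q)           # orthogonalized power step
        # Project the data onto the complement of the clutter subspace.
        return X - Q @ (Q.conj().T @ X)
    ```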
