Sample records for iterative methods

  1. Comparison between iteration schemes for three-dimensional coordinate-transformed saturated-unsaturated flow model

    NASA Astrophysics Data System (ADS)

    An, Hyunuk; Ichikawa, Yutaka; Tachikawa, Yasuto; Shiiba, Michiharu

    2012-11-01

    Summary: Three different iteration methods for a three-dimensional coordinate-transformed saturated-unsaturated flow model are compared in this study. The Picard and Newton iteration methods are the common approaches for solving Richards' equation. The Picard method is simple to implement and cost-efficient on an individual iteration basis; however, it converges more slowly than the Newton method. On the other hand, although the Newton method converges faster, it is more complex to implement and consumes more CPU resources per iteration than the Picard method. The comparison of the two methods in finite-element models (FEM) for saturated-unsaturated flow has been well evaluated in previous studies. However, the two iteration methods might exhibit different behavior in the coordinate-transformed finite-difference model (FDM). In addition, the Newton-Krylov method could be a suitable alternative for the coordinate-transformed FDM, because the Newton method applied to this model requires the evaluation of a 19-point stencil matrix, and the formation of a 19-point stencil is quite a complex and laborious procedure. Instead, the Newton-Krylov method calculates the matrix-vector product, which can be easily approximated by taking differences of the original nonlinear function. In this respect, the Newton-Krylov method might be the most appropriate iteration method for the coordinate-transformed FDM. However, it involves the additional cost of evaluating such an approximation at each Krylov iteration. In this paper, we evaluate the efficiency and robustness of three iteration methods (the Picard, Newton, and Newton-Krylov methods) for simulating saturated-unsaturated flow through porous media using a three-dimensional coordinate-transformed FDM.
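
    A minimal sketch (not the authors' implementation) of the matrix-free product that underlies the Newton-Krylov approach described above: the action of the Jacobian on a vector is approximated by a finite difference of the nonlinear residual, so the 19-point stencil matrix never has to be assembled. The residual function F and the perturbation scaling h are illustrative assumptions.

    ```python
    import numpy as np

    def jfnk_matvec(F, u, v, eps=1e-7):
        """Approximate J(u) @ v, where J is the Jacobian of the nonlinear
        residual F, by a forward difference of F. This is the matrix-vector
        product needed inside each Krylov iteration of a Newton-Krylov solver;
        no Jacobian (stencil) matrix is ever formed."""
        norm_v = np.linalg.norm(v)
        if norm_v == 0.0:
            return np.zeros_like(v)
        h = eps * max(1.0, np.linalg.norm(u)) / norm_v   # heuristic step size
        return (F(u + h * v) - F(u)) / h
    ```

    Each Krylov iteration thus costs one extra residual evaluation, which is the additional per-iteration cost the abstract refers to.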

  2. Numerical Computation of Subsonic Conical Diffuser Flows with Nonuniform Turbulent Inlet Conditions

    DTIC Science & Technology

    1977-09-01

    Gauss-Seidel Point Iteration Method ... FACTORS AFFECTING THE RATE OF CONVERGENCE OF THE POINT ITERATION METHOD ... can be solved in several ways. For simplicity, a standard Gauss-Seidel iteration method is used to obtain the solution. The method updates the ... The advantage of using the Gauss-Seidel point iteration method to ...
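
    A minimal sketch of the standard Gauss-Seidel point iteration referred to in these excerpts, written for a generic dense system A x = b (the report's discretized diffuser-flow equations are not reproduced here):

    ```python
    import numpy as np

    def gauss_seidel(A, b, x0=None, tol=1e-10, max_iter=10_000):
        """Standard Gauss-Seidel point iteration for A x = b: sweep through the
        unknowns, updating each one in place using the latest available values."""
        n = len(b)
        x = np.zeros(n) if x0 is None else x0.astype(float).copy()
        for _ in range(max_iter):
            x_old = x.copy()
            for i in range(n):
                s = A[i, :i] @ x[:i] + A[i, i + 1:] @ x[i + 1:]
                x[i] = (b[i] - s) / A[i, i]
            if np.linalg.norm(x - x_old, np.inf) < tol:
                break
        return x
    ```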

  3. Variational iteration method — a promising technique for constructing equivalent integral equations of fractional order

    NASA Astrophysics Data System (ADS)

    Wang, Yi-Hong; Wu, Guo-Cheng; Baleanu, Dumitru

    2013-10-01

    The variational iteration method is newly applied to construct various integral equations of fractional order. Some iterative schemes are proposed which fully use the method and the predictor-corrector approach. The fractional Bagley-Torvik equation is then treated as a multi-order example, and the results show the efficiency of the variational iteration method in this new role.

  4. A novel iterative scheme and its application to differential equations.

    PubMed

    Khan, Yasir; Naeem, F; Šmarda, Zdeněk

    2014-01-01

    The purpose of this paper is to employ an alternative approach to reconstruct the standard variational iteration algorithm II proposed by He, including the Lagrange multiplier, and to give a simpler formulation of the Adomian decomposition and modified Adomian decomposition methods in terms of the newly proposed variational iteration method-II (VIM). Through careful investigation of the earlier variational iteration algorithm and the Adomian decomposition method, we find unnecessary calculations of the Lagrange multiplier in the former and repeated calculations in each iteration of the latter. Several examples are given to verify the reliability and efficiency of the method.

  5. Iterative Methods for the Non-LTE Transfer of Polarized Radiation: Resonance Line Polarization in One-dimensional Atmospheres

    NASA Astrophysics Data System (ADS)

    Trujillo Bueno, Javier; Manso Sainz, Rafael

    1999-05-01

    This paper shows how to generalize to non-LTE polarization transfer some operator splitting methods that were originally developed for solving unpolarized transfer problems. These are the Jacobi-based accelerated Λ-iteration (ALI) method of Olson, Auer, & Buchler and the iterative schemes based on Gauss-Seidel and successive overrelaxation (SOR) iteration of Trujillo Bueno and Fabiani Bendicho. The theoretical framework chosen for the formulation of polarization transfer problems is the quantum electrodynamics (QED) theory of Landi Degl'Innocenti, which specifies the excitation state of the atoms in terms of the irreducible tensor components of the atomic density matrix. This first paper establishes the grounds of our numerical approach to non-LTE polarization transfer by concentrating on the standard case of scattering line polarization in a gas of two-level atoms, including the Hanle effect due to a weak microturbulent and isotropic magnetic field. We begin by demonstrating that the well-known Λ-iteration method leads to the self-consistent solution of this type of problem if one initializes using the "exact" solution corresponding to the unpolarized case. We show then how the above-mentioned splitting methods can be easily derived from this simple Λ-iteration scheme. We show that our SOR method is 10 times faster than the Jacobi-based ALI method, while our implementation of the Gauss-Seidel method is 4 times faster. These iterative schemes lead to the self-consistent solution independently of the chosen initialization. The convergence rate of these iterative methods is very high; they do not require either the construction or the inversion of any matrix, and the computing time per iteration is similar to that of the Λ-iteration method.

  6. Optimised Iteration in Coupled Monte Carlo - Thermal-Hydraulics Calculations

    NASA Astrophysics Data System (ADS)

    Hoogenboom, J. Eduard; Dufek, Jan

    2014-06-01

    This paper describes an optimised iteration scheme for the number of neutron histories and the relaxation factor in successive iterations of coupled Monte Carlo and thermal-hydraulic reactor calculations based on the stochastic iteration method. The scheme results in an increasing number of neutron histories for the Monte Carlo calculation in successive iteration steps and a decreasing relaxation factor for the spatial power distribution to be used as input to the thermal-hydraulics calculation. The theoretical basis is discussed in detail and practical consequences of the scheme are shown, among which is a nearly linear increase per iteration in the number of cycles of the Monte Carlo calculation. The scheme is demonstrated for a full PWR type fuel assembly. Results are shown for the axial power distribution during several iteration steps. A few alternative iteration methods are also tested and it is concluded that the presented iteration method is near optimal.
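
    A schematic sketch of the coupled-iteration loop described above, with a growing number of Monte Carlo histories and a shrinking relaxation factor. The two solver callables, the linear history schedule, and the 1/i relaxation factor are illustrative placeholders, not the paper's optimised scheme.

    ```python
    import numpy as np

    def stochastic_iteration(run_monte_carlo, run_thermal_hydraulics, p0,
                             n0=10_000, n_steps=10):
        """Relaxed Monte Carlo / thermal-hydraulics coupling loop: the relaxation
        factor shrinks and the neutron-history count grows with the iteration
        index (hypothetical 1/i and linear schedules)."""
        p = np.asarray(p0, dtype=float)      # relaxed power distribution
        for i in range(1, n_steps + 1):
            histories = n0 * i               # more histories each step
            alpha = 1.0 / i                  # decreasing relaxation factor
            th_feedback = run_thermal_hydraulics(p)
            p_mc = run_monte_carlo(th_feedback, histories)
            p = (1.0 - alpha) * p + alpha * p_mc
        return p
    ```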

  7. An iterative method for near-field Fresnel region polychromatic phase contrast imaging

    NASA Astrophysics Data System (ADS)

    Carroll, Aidan J.; van Riessen, Grant A.; Balaur, Eugeniu; Dolbnya, Igor P.; Tran, Giang N.; Peele, Andrew G.

    2017-07-01

    We present an iterative method for polychromatic phase contrast imaging that is suitable for broadband illumination and which allows for the quantitative determination of the thickness of an object given the refractive index of the sample material. Experimental and simulation results suggest the iterative method provides comparable image quality and quantitative object thickness determination when compared to the analytical polychromatic transport of intensity and contrast transfer function methods. The ability of the iterative method to work over a wider range of experimental conditions means the iterative method is a suitable candidate for use with polychromatic illumination and may deliver more utility for laboratory-based x-ray sources, which typically have a broad spectrum.

  8. New methods of testing nonlinear hypothesis using iterative NLLS estimator

    NASA Astrophysics Data System (ADS)

    Mahaboob, B.; Venkateswarlu, B.; Mokeshrayalu, G.; Balasiddamuni, P.

    2017-11-01

    This research paper discusses the method of testing a nonlinear hypothesis using the iterative Nonlinear Least Squares (NLLS) estimator. Takeshi Amemiya [1] explained this method. However, in the present research paper, a modified Wald test statistic due to Engle, Robert [6] is proposed to test the nonlinear hypothesis using the iterative NLLS estimator. An alternative method for testing the nonlinear hypothesis using the iterative NLLS estimator based on nonlinear studentized residuals has also been proposed. In this research article an innovative method of testing the nonlinear hypothesis using the iterative restricted NLLS estimator is derived. Pesaran and Deaton [10] explained the methods of testing nonlinear hypotheses. This paper uses asymptotic properties of the nonlinear least squares estimator proposed by Jenrich [8]. The main purpose of this paper is to provide very innovative methods of testing the nonlinear hypothesis using the iterative NLLS estimator, the iterative NLLS estimator based on nonlinear studentized residuals, and the iterative restricted NLLS estimator. Eakambaram et al. [12] discussed least absolute deviation estimation versus nonlinear regression models with heteroscedastic errors and also studied the problem of heteroscedasticity with reference to nonlinear regression models with suitable illustration. William Grene [13] examined the interaction effect in nonlinear models discussed by Ai and Norton [14] and suggested ways to examine the effects that do not involve statistical testing. Peter [15] provided guidelines for identifying composite hypotheses and addressing the probability of false rejection for multiple hypotheses.

  9. Solution of Cubic Equations by Iteration Methods on a Pocket Calculator

    ERIC Educational Resources Information Center

    Bamdad, Farzad

    2004-01-01

    A method is developed to show students how they can write iteration programs on an inexpensive programmable pocket calculator, without requiring a PC or a graphing calculator. Two iteration methods are used: the successive-approximations and bisection methods.
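
    The two schemes named above, transcribed as a minimal Python sketch rather than calculator keystrokes; the sample cubic x^3 - 2x - 5 = 0 and its fixed-point rearrangement are illustrative choices, not taken from the article.

    ```python
    def solve_cubic_bisection(f, a, b, tol=1e-9):
        """Interval bisection: halve [a, b] while keeping a sign change.
        Assumes f(a) and f(b) have opposite signs."""
        fa = f(a)
        while b - a > tol:
            m = 0.5 * (a + b)
            if fa * f(m) <= 0:
                b = m
            else:
                a, fa = m, f(m)
        return 0.5 * (a + b)

    def solve_cubic_fixed_point(g, x0, tol=1e-9, max_iter=1000):
        """Successive approximations: iterate x <- g(x) until it settles."""
        x = x0
        for _ in range(max_iter):
            x_new = g(x)
            if abs(x_new - x) < tol:
                return x_new
            x = x_new
        return x

    # Example cubic x^3 - 2x - 5 = 0, rearranged as x = (2x + 5)**(1/3)
    root1 = solve_cubic_bisection(lambda x: x**3 - 2 * x - 5, 2.0, 3.0)
    root2 = solve_cubic_fixed_point(lambda x: (2 * x + 5) ** (1 / 3), x0=2.0)
    ```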

  10. Direct determination of one-dimensional interphase structures using normalized crystal truncation rod analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kawaguchi, Tomoya; Liu, Yihua; Reiter, Anthony

    Here, a one-dimensional non-iterative direct method was employed for normalized crystal truncation rod analysis. The non-iterative approach, utilizing the Kramers–Kronig relation, avoids the ambiguities due to an improper initial model or incomplete convergence in the conventional iterative methods. The validity and limitations of the present method are demonstrated through both numerical simulations and experiments with Pt(111) in a 0.1 M CsF aqueous solution. The present method is compared with conventional iterative phase-retrieval methods.

  11. Direct determination of one-dimensional interphase structures using normalized crystal truncation rod analysis

    DOE PAGES

    Kawaguchi, Tomoya; Liu, Yihua; Reiter, Anthony; ...

    2018-04-20

    Here, a one-dimensional non-iterative direct method was employed for normalized crystal truncation rod analysis. The non-iterative approach, utilizing the Kramers–Kronig relation, avoids the ambiguities due to an improper initial model or incomplete convergence in the conventional iterative methods. The validity and limitations of the present method are demonstrated through both numerical simulations and experiments with Pt(111) in a 0.1 M CsF aqueous solution. The present method is compared with conventional iterative phase-retrieval methods.

  12. Fourth-order numerical solutions of diffusion equation by using SOR method with Crank-Nicolson approach

    NASA Astrophysics Data System (ADS)

    Muhiddin, F. A.; Sulaiman, J.

    2017-09-01

    The aim of this paper is to investigate the effectiveness of the Successive Over-Relaxation (SOR) iterative method by using the fourth-order Crank-Nicolson (CN) discretization scheme to derive a five-point Crank-Nicolson approximation equation for solving the diffusion equation. From this approximation equation, the corresponding system of five-point approximation equations can be generated and then solved iteratively. In order to assess the performance of the proposed iterative method with the fourth-order CN scheme, another point iterative method, Gauss-Seidel (GS), is also presented as a reference method. Finally, from the numerical results obtained with the fourth-order CN discretization scheme, it can be pointed out that the SOR iterative method is superior in terms of the number of iterations, execution time, and maximum absolute error.
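
    A minimal sketch of the SOR point iteration evaluated in the paper, written for a generic system A x = b (the five-point Crank-Nicolson coefficients themselves are not reproduced); setting omega = 1 recovers the Gauss-Seidel reference method.

    ```python
    import numpy as np

    def sor(A, b, omega=1.5, x0=None, tol=1e-10, max_iter=10_000):
        """Successive Over-Relaxation: a Gauss-Seidel sweep blended with the
        previous iterate through the relaxation parameter omega (values between
        1 and 2 typically accelerate convergence for suitable systems)."""
        n = len(b)
        x = np.zeros(n) if x0 is None else x0.astype(float).copy()
        for _ in range(max_iter):
            x_old = x.copy()
            for i in range(n):
                s = A[i, :i] @ x[:i] + A[i, i + 1:] @ x_old[i + 1:]
                x_gs = (b[i] - s) / A[i, i]
                x[i] = (1.0 - omega) * x_old[i] + omega * x_gs
            if np.linalg.norm(x - x_old, np.inf) < tol:
                break
        return x
    ```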

  13. Iterative methods for mixed finite element equations

    NASA Technical Reports Server (NTRS)

    Nakazawa, S.; Nagtegaal, J. C.; Zienkiewicz, O. C.

    1985-01-01

    Iterative strategies for the solution of the indefinite systems of equations arising from the mixed finite element method are investigated in this paper, with application to linear and nonlinear problems in solid and structural mechanics. The augmented Hu-Washizu form is derived, which is then utilized to construct a family of iterative algorithms using the displacement method as the preconditioner. Two types of iterative algorithms are implemented: constant metric iterations, which do not involve updating the preconditioner, and variable metric iterations, in which the inverse of the preconditioning matrix is updated. A series of numerical experiments is conducted to evaluate the numerical performance with application to linear and nonlinear model problems.

  14. Leapfrog variants of iterative methods for linear algebra equations

    NASA Technical Reports Server (NTRS)

    Saylor, Paul E.

    1988-01-01

    Two iterative methods are considered, Richardson's method and a general second order method. For both methods, a variant of the method is derived for which only even numbered iterates are computed. The variant is called a leapfrog method. Comparisons between the conventional form of the methods and the leapfrog form are made under the assumption that the number of unknowns is large. In the case of Richardson's method, it is possible to express the final iterate in terms of only the initial approximation, a variant of the iteration called the grand-leap method. In the case of the grand-leap variant, a set of parameters is required. An algorithm is presented to compute these parameters that is related to algorithms to compute the weights and abscissas for Gaussian quadrature. General algorithms to implement the leapfrog and grand-leap methods are presented. Algorithms for the important special case of the Chebyshev method are also given.
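
    A sketch of the leapfrog idea for Richardson's method: combining two consecutive updates x_{k+1} = x_k + a_k r_k gives an update that forms only the even-numbered iterates. The parameter list alphas is assumed to be supplied (for example, computed by the Gaussian-quadrature-like algorithm the report describes).

    ```python
    import numpy as np

    def richardson_leapfrog(A, b, alphas, x0=None):
        """Advance Richardson's method two steps at a time so that only the
        even-numbered iterates are computed (a 'leapfrog' variant). Combining
        x_{k+1} = x_k + a_k r_k and x_{k+2} = x_{k+1} + a_{k+1} r_{k+1} gives
        x_{k+2} = x_k + (a_k + a_{k+1}) r_k - a_k a_{k+1} A r_k."""
        x = np.zeros_like(b, dtype=float) if x0 is None else x0.astype(float)
        for k in range(0, len(alphas) - 1, 2):
            r = b - A @ x
            Ar = A @ r
            x = x + (alphas[k] + alphas[k + 1]) * r - alphas[k] * alphas[k + 1] * Ar
        return x
    ```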

  15. High resolution x-ray CMT: Reconstruction methods

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brown, J.K.

    This paper qualitatively discusses the primary characteristics of methods for reconstructing tomographic images from a set of projections. These reconstruction methods can be categorized as either "analytic" or "iterative" techniques. Analytic algorithms are derived from the formal inversion of equations describing the imaging process, while iterative algorithms incorporate a model of the imaging process and provide a mechanism to iteratively improve image estimates. Analytic reconstruction algorithms are typically computationally more efficient than iterative methods; however, analytic algorithms are available for a relatively limited set of imaging geometries and situations. Thus, the framework of iterative reconstruction methods is better suited for high-accuracy tomographic reconstruction codes.

  16. Comparison results on preconditioned SOR-type iterative method for Z-matrices linear systems

    NASA Astrophysics Data System (ADS)

    Wang, Xue-Zhong; Huang, Ting-Zhu; Fu, Ying-Ding

    2007-09-01

    In this paper, we present some comparison theorems on the preconditioned iterative method for solving Z-matrix linear systems. The comparison results show that the rate of convergence of the Gauss-Seidel-type method is faster than that of the SOR-type iterative method.

  17. A Least-Squares Commutator in the Iterative Subspace Method for Accelerating Self-Consistent Field Convergence.

    PubMed

    Li, Haichen; Yaron, David J

    2016-11-08

    A least-squares commutator in the iterative subspace (LCIIS) approach is explored for accelerating self-consistent field (SCF) calculations. LCIIS is similar to direct inversion of the iterative subspace (DIIS) methods in that the next iterate of the density matrix is obtained as a linear combination of past iterates. However, whereas DIIS methods find the linear combination by minimizing the norm of the corresponding linear combination of error vectors, LCIIS minimizes the Frobenius norm of the commutator between the density matrix and the Fock matrix. This minimization leads to a quartic problem that can be solved iteratively through a constrained Newton's method. The relationship between LCIIS and DIIS is discussed. Numerical experiments suggest that LCIIS leads to faster convergence than other SCF convergence accelerating methods in a statistically significant sense, and in a number of cases LCIIS leads to stable SCF solutions that are not found by other methods. The computational cost involved in solving the quartic minimization problem is small compared to the typical cost of SCF iterations and the approach is easily integrated into existing codes. LCIIS can therefore serve as a powerful addition to SCF convergence accelerating methods in computational quantum chemistry packages.
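
    For context, a minimal sketch of the standard DIIS (Pulay) extrapolation that LCIIS is compared against; the LCIIS quartic commutator minimization itself is not reproduced here, and the commutator-style error vectors mentioned in the docstring are the usual SCF choice, stated as an assumption.

    ```python
    import numpy as np

    def diis_extrapolate(fock_list, error_list):
        """Standard DIIS (Pulay) extrapolation: find coefficients c that minimize
        || sum_i c_i e_i || subject to sum_i c_i = 1, then mix the stored Fock
        matrices with those weights. Error vectors are typically commutators
        e_i = F_i D_i S - S D_i F_i (an assumption for illustration)."""
        m = len(error_list)
        B = np.empty((m + 1, m + 1))
        B[-1, :] = B[:, -1] = -1.0      # Lagrange-multiplier border
        B[-1, -1] = 0.0
        for i in range(m):
            for j in range(m):
                B[i, j] = np.vdot(error_list[i], error_list[j])
        rhs = np.zeros(m + 1)
        rhs[-1] = -1.0
        coeffs = np.linalg.solve(B, rhs)[:m]
        return sum(c * F for c, F in zip(coeffs, fock_list))
    ```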

  18. Annual Copper Mountain Conferences on Multigrid and Iterative Methods, Copper Mountain, Colorado

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    McCormick, Stephen F.

    This project supported the Copper Mountain Conference on Multigrid and Iterative Methods, held from 2007 to 2015, at Copper Mountain, Colorado. The subject of the Copper Mountain Conference Series alternated between Multigrid Methods in odd-numbered years and Iterative Methods in even-numbered years. Begun in 1983, the Series represents an important forum for the exchange of ideas in these two closely related fields. This report describes the Copper Mountain Conference on Multigrid and Iterative Methods, 2007-2015. Information on the conference series is available at http://grandmaster.colorado.edu/~copper/.

  19. Krylov subspace iterative methods for boundary element method based near-field acoustic holography.

    PubMed

    Valdivia, Nicolas; Williams, Earl G

    2005-02-01

    The reconstruction of the acoustic field for general surfaces is obtained from the solution of a matrix system that results from a boundary integral equation discretized using boundary element methods. The solution to the resultant matrix system is obtained using iterative regularization methods that counteract the effect of noise on the measurements. These methods will not require the calculation of the singular value decomposition, which can be expensive when the matrix system is considerably large. Krylov subspace methods are iterative methods that have the phenomenon known as "semi-convergence," i.e., the optimal regularization solution is obtained after a few iterations. If the iteration is not stopped, the method converges to a solution that generally is totally corrupted by errors on the measurements. For these methods the number of iterations plays the role of the regularization parameter. We will focus our attention on the study of the regularizing properties of Krylov subspace methods like conjugate gradients, least squares QR, and the recently proposed hybrid method. A discussion and comparison of the available stopping rules will be included. A vibrating plate is considered as an example to validate our results.
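
    A sketch of the semi-convergence idea with a simple stopping rule, using CG on the normal equations (CGLS) for a real-valued system. The actual holography matrices are complex and the paper discusses several stopping rules; the noise_level-based stop below is only an illustrative discrepancy-type criterion.

    ```python
    import numpy as np

    def cgls_discrepancy(A, b, noise_level, max_iter=200):
        """CG applied to the normal equations (CGLS) with an early-stopping rule:
        because of semi-convergence, iterate only until the residual norm drops
        to the estimated noise level, so the iteration count plays the role of
        the regularization parameter."""
        x = np.zeros(A.shape[1])
        r = b - A @ x
        s = A.T @ r
        p = s.copy()
        gamma = s @ s
        for _ in range(max_iter):
            q = A @ p
            alpha = gamma / (q @ q)
            x += alpha * p
            r -= alpha * q
            if np.linalg.norm(r) <= noise_level:   # stopping rule
                break
            s = A.T @ r
            gamma_new = s @ s
            p = s + (gamma_new / gamma) * p
            gamma = gamma_new
        return x
    ```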

  20. Nested Conjugate Gradient Algorithm with Nested Preconditioning for Non-linear Image Restoration.

    PubMed

    Skariah, Deepak G; Arigovindan, Muthuvel

    2017-06-19

    We develop a novel optimization algorithm, which we call the Nested Non-Linear Conjugate Gradient algorithm (NNCG), for image restoration based on quadratic data fitting and smooth non-quadratic regularization. The algorithm is constructed as a nesting of two conjugate gradient (CG) iterations. The outer iteration is constructed as a preconditioned non-linear CG algorithm; the preconditioning is performed by the inner CG iteration, which is linear. The inner CG iteration, which performs preconditioning for the outer CG iteration, is itself accelerated by another FFT-based non-iterative preconditioner. We prove that the method converges to a stationary point for both convex and non-convex regularization functionals. We demonstrate experimentally that the proposed method outperforms the well-known majorization-minimization method used for convex regularization, and a non-convex inertial-proximal method for non-convex regularization functionals.

  1. An implicit-iterative solution of the heat conduction equation with a radiation boundary condition

    NASA Technical Reports Server (NTRS)

    Williams, S. D.; Curry, D. M.

    1977-01-01

    For the problem of predicting one-dimensional heat transfer between conducting and radiating media by an implicit finite difference method, four different formulations were used to approximate the surface radiation boundary condition while retaining an implicit formulation for the interior temperature nodes. These formulations are an explicit boundary condition, a linearized boundary condition, an iterative boundary condition, and a semi-iterative boundary method. The results of these methods in predicting surface temperature on the space shuttle orbiter thermal protection system model under a variety of heating rates were compared. The iterative technique caused the surface temperature to be bounded at each step. While the linearized and explicit methods were generally more efficient, the iterative and semi-iterative techniques provided a realistic surface temperature response without requiring step size control techniques.

  2. Comparing direct and iterative equation solvers in a large structural analysis software system

    NASA Technical Reports Server (NTRS)

    Poole, E. L.

    1991-01-01

    Two direct Choleski equation solvers and two iterative preconditioned conjugate gradient (PCG) equation solvers used in a large structural analysis software system are described. The two direct solvers are implementations of the Choleski method for variable-band matrix storage and sparse matrix storage. The two iterative PCG solvers include the Jacobi conjugate gradient method and an incomplete Choleski conjugate gradient method. The performance of the direct and iterative solvers is compared by solving several representative structural analysis problems. Some key factors affecting the performance of the iterative solvers relative to the direct solvers are identified.

  3. Upwind relaxation methods for the Navier-Stokes equations using inner iterations

    NASA Technical Reports Server (NTRS)

    Taylor, Arthur C., III; Ng, Wing-Fai; Walters, Robert W.

    1992-01-01

    A subsonic and a supersonic problem are treated, respectively, by an upwind line-relaxation algorithm for the Navier-Stokes equations using inner iterations to accelerate steady-state convergence and thereby minimize CPU time. While the ability of the inner iterative procedure to mimic the quadratic convergence of the direct solver method is demonstrated in both test problems, some of the nonquadratic inner iterative results are noted to have been more efficient than the quadratic ones. In the more successful, supersonic test case, the inner iteration required only about 65 percent of the CPU time entailed by the line-relaxation method.

  4. AIR-MRF: Accelerated iterative reconstruction for magnetic resonance fingerprinting.

    PubMed

    Cline, Christopher C; Chen, Xiao; Mailhe, Boris; Wang, Qiu; Pfeuffer, Josef; Nittka, Mathias; Griswold, Mark A; Speier, Peter; Nadar, Mariappan S

    2017-09-01

    Existing approaches for reconstruction of multiparametric maps with magnetic resonance fingerprinting (MRF) are currently limited by their estimation accuracy and reconstruction time. We aimed to address these issues with a novel combination of iterative reconstruction, fingerprint compression, additional regularization, and accelerated dictionary search methods. The pipeline described here, accelerated iterative reconstruction for magnetic resonance fingerprinting (AIR-MRF), was evaluated with simulations as well as phantom and in vivo scans. We found that the AIR-MRF pipeline provided reduced parameter estimation errors compared to non-iterative and other iterative methods, particularly at shorter sequence lengths. Accelerated dictionary search methods incorporated into the iterative pipeline reduced the reconstruction time at little cost in quality. Copyright © 2017 Elsevier Inc. All rights reserved.

  5. A Two-Dimensional Helmholtz Equation Solution for the Multiple Cavity Scattering Problem

    DTIC Science & Technology

    2013-02-01

    ... obtained by using the block Gauss-Seidel iterative method. To show the convergence of the iterative method, we define the error between two ... models to the general multiple cavity setting. Numerical examples indicate that the convergence of the Gauss-Seidel iterative method depends on the ... variational approach. A block Gauss-Seidel iterative method is introduced to solve the coupled system of the multiple cavity scattering problem, where ...

  6. Robust iterative method for nonlinear Helmholtz equation

    NASA Astrophysics Data System (ADS)

    Yuan, Lijun; Lu, Ya Yan

    2017-08-01

    A new iterative method is developed for solving the two-dimensional nonlinear Helmholtz equation which governs polarized light in media with the optical Kerr nonlinearity. In the strongly nonlinear regime, the nonlinear Helmholtz equation could have multiple solutions related to phenomena such as optical bistability and symmetry breaking. The new method exhibits a much more robust convergence behavior than existing iterative methods, such as frozen-nonlinearity iteration, Newton's method and damped Newton's method, and it can be used to find solutions when good initial guesses are unavailable. Numerical results are presented for the scattering of light by a nonlinear circular cylinder based on the exact nonlocal boundary condition and a pseudospectral method in the polar coordinate system.

  7. Twostep-by-twostep PIRK-type PC methods with continuous output formulas

    NASA Astrophysics Data System (ADS)

    Cong, Nguyen Huu; Xuan, Le Ngoc

    2008-11-01

    This paper deals with parallel predictor-corrector (PC) iteration methods based on collocation Runge-Kutta (RK) corrector methods with continuous output formulas for solving nonstiff initial-value problems (IVPs) for systems of first-order differential equations. At the nth step, the continuous output formulas are used not only for predicting the stage values in the PC iteration methods but also for calculating the step values at the (n+2)th step. In this case, the integration process can proceed twostep-by-twostep. The resulting twostep-by-twostep (TBT) parallel-iterated RK-type (PIRK-type) methods with continuous output formulas (twostep-by-twostep PIRKC methods or TBTPIRKC methods) give us a faster integration process. Fixed stepsize applications of these TBTPIRKC methods to a few widely-used test problems reveal that the new PC methods are much more efficient when compared with the well-known parallel-iterated RK methods (PIRK methods), parallel-iterated RK-type PC methods with continuous output formulas (PIRKC methods) and the sequential explicit RK codes DOPRI5 and DOP853 available from the literature.

  8. Development of iterative techniques for the solution of unsteady compressible viscous flows

    NASA Technical Reports Server (NTRS)

    Sankar, Lakshmi N.; Hixon, Duane

    1992-01-01

    The development of efficient iterative solution methods for the numerical solution of two- and three-dimensional compressible Navier-Stokes equations is discussed. Iterative time marching methods have several advantages over classical multi-step explicit time marching schemes and non-iterative implicit time marching schemes. Iterative schemes have better stability characteristics than non-iterative explicit and implicit schemes. In this work, another approach based on the classical conjugate gradient method, known as the Generalized Minimum Residual (GMRES) algorithm, is investigated. The GMRES algorithm has been used in the past by a number of researchers for solving steady viscous and inviscid flow problems. Here, we investigate the suitability of this algorithm for solving the system of non-linear equations that arises in unsteady Navier-Stokes solvers at each time step.

  9. Nested Krylov methods and preserving the orthogonality

    NASA Technical Reports Server (NTRS)

    Desturler, Eric; Fokkema, Diederik R.

    1993-01-01

    Recently the GMRESR inner-outer iteration scheme for the solution of linear systems of equations was proposed by Van der Vorst and Vuik. Similar methods have been proposed by Axelsson and Vassilevski and Saad (FGMRES). The outer iteration is GCR, which minimizes the residual over a given set of direction vectors. The inner iteration is GMRES, which at each step computes a new direction vector by approximately solving the residual equation. However, the optimality of the approximation over the space of outer search directions is ignored in the inner GMRES iteration. This leads to suboptimal corrections to the solution in the outer iteration, as components of the outer iteration directions may reenter in the inner iteration process. Therefore we propose to preserve the orthogonality relations of GCR in the inner GMRES iteration. This gives optimal corrections; however, it involves working with a singular, non-symmetric operator. We will discuss some important properties, and we will show by experiments that, in terms of matrix-vector products, this modification (almost) always leads to better convergence. However, because we do more orthogonalizations, it does not always give an improved performance in CPU time. Furthermore, we will discuss efficient implementations as well as the truncation possibilities of the outer GCR process. The experimental results indicate that for such methods it is advantageous to preserve the orthogonality in the inner iteration. Of course we can also use iteration schemes other than GMRES as the inner method; methods with short recurrences like Bi-CGSTAB are of interest.

  10. Transport synthetic acceleration with opposing reflecting boundary conditions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zika, M.R.; Adams, M.L.

    2000-02-01

    The transport synthetic acceleration (TSA) scheme is extended to problems with opposing reflecting boundary conditions. This synthetic method employs a simplified transport operator as its low-order approximation. A procedure is developed that allows the use of the conjugate gradient (CG) method to solve the resulting low-order system of equations. Several well-known transport iteration algorithms are cast in a linear algebraic form to show their equivalence to standard iterative techniques. Source iteration in the presence of opposing reflecting boundary conditions is shown to be equivalent to a (poorly) preconditioned stationary Richardson iteration, with the preconditioner defined by the method of iterating on the incident fluxes on the reflecting boundaries. The TSA method (and any synthetic method) amounts to a further preconditioning of the Richardson iteration. The presence of opposing reflecting boundary conditions requires special consideration when developing a procedure to realize the CG method for the proposed system of equations. The CG iteration may be applied only to symmetric positive definite matrices; this condition requires the algebraic elimination of the boundary angular corrections from the low-order equations. As a consequence of this elimination, evaluating the action of the resulting matrix on an arbitrary vector involves two transport sweeps and a transmission iteration. Results of applying the acceleration scheme to a simple test problem are presented.

  11. Reducing the latency of the Fractal Iterative Method to half an iteration

    NASA Astrophysics Data System (ADS)

    Béchet, Clémentine; Tallon, Michel

    2013-12-01

    The fractal iterative method for atmospheric tomography (FRiM-3D) has been introduced to solve the wavefront reconstruction at the dimensions of an ELT with a low computational cost. Previous studies reported the requirement of only 3 iterations of the algorithm in order to provide the best adaptive optics (AO) performance. Nevertheless, any iterative method in adaptive optics suffers from the intrinsic latency induced by the fact that one iteration can start only once the previous one is completed. Iterations hardly match the low-latency requirement of the AO real-time computer. We present here a new approach to avoid iterations in the computation of the commands with FRiM-3D, thus allowing low-latency AO response even at the scale of the European ELT (E-ELT). The method highlights the importance of the "warm-start" strategy in adaptive optics. To our knowledge, this particular way to use the "warm-start" has not been reported before. Furthermore, by removing the requirement of iterating to compute the commands, the computational cost of the reconstruction with FRiM-3D can be simplified and at least reduced to half the computational cost of a classical iteration. Thanks to simulations of both single-conjugate and multi-conjugate AO for the E-ELT, with FRiM-3D on the Octopus ESO simulator, we demonstrate the benefit of this approach. We finally enhance the robustness of this new implementation with respect to increasing measurement noise, wind speed and even modeling errors.

  12. Iterative deep convolutional encoder-decoder network for medical image segmentation.

    PubMed

    Jung Uk Kim; Hak Gu Kim; Yong Man Ro

    2017-07-01

    In this paper, we propose a novel medical image segmentation using an iterative deep learning framework. We have combined an iterative learning approach and an encoder-decoder network to improve segmentation results, which enables precise localization of regions of interest (ROIs), including complex shapes or detailed textures of medical images, in an iterative manner. The proposed iterative deep convolutional encoder-decoder network consists of two main paths: a convolutional encoder path and a convolutional decoder path with iterative learning. Experimental results show that the proposed iterative deep learning framework is able to yield excellent medical image segmentation performance for various medical images. The effectiveness of the proposed method has been demonstrated by comparison with other state-of-the-art medical image segmentation methods.

  13. A fast and robust iterative algorithm for prediction of RNA pseudoknotted secondary structures

    PubMed Central

    2014-01-01

    Background: Improving accuracy and efficiency of computational methods that predict pseudoknotted RNA secondary structures is an ongoing challenge. Existing methods based on free energy minimization tend to be very slow and are limited in the types of pseudoknots that they can predict. Incorporating known structural information can improve prediction accuracy; however, there are not many methods for prediction of pseudoknotted structures that can incorporate structural information as input. There is even less understanding of the relative robustness of these methods with respect to partial information. Results: We present a new method, Iterative HFold, for pseudoknotted RNA secondary structure prediction. Iterative HFold takes as input a pseudoknot-free structure, and produces a possibly pseudoknotted structure whose energy is at least as low as that of any (density-2) pseudoknotted structure containing the input structure. Iterative HFold leverages strengths of earlier methods, namely the fast running time of HFold, a method that is based on the hierarchical folding hypothesis, and the energy parameters of HotKnots V2.0. Our experimental evaluation on a large data set shows that Iterative HFold is robust with respect to partial information, with average accuracy on pseudoknotted structures steadily increasing from roughly 54% to 79% as the user provides up to 40% of the input structure. Iterative HFold is much faster than HotKnots V2.0, while having comparable accuracy. Iterative HFold also has significantly better accuracy than IPknot on our HK-PK and IP-pk168 data sets. Conclusions: Iterative HFold is a robust method for prediction of pseudoknotted RNA secondary structures, whose accuracy with more than 5% information about true pseudoknot-free structures is better than that of IPknot, and with about 35% information about true pseudoknot-free structures compares well with that of HotKnots V2.0 while being significantly faster. Iterative HFold and all data used in this work are freely available at http://www.cs.ubc.ca/~hjabbari/software.php. PMID:24884954

  14. Iterative Strain-Gage Balance Calibration Data Analysis for Extended Independent Variable Sets

    NASA Technical Reports Server (NTRS)

    Ulbrich, Norbert Manfred

    2011-01-01

    A new method was developed that makes it possible to use an extended set of independent calibration variables for an iterative analysis of wind tunnel strain gage balance calibration data. The new method permits the application of the iterative analysis method whenever the total number of balance loads and other independent calibration variables is greater than the total number of measured strain gage outputs. Iteration equations used by the iterative analysis method have the limitation that the number of independent and dependent variables must match. The new method circumvents this limitation. It simply adds a missing dependent variable to the original data set by using an additional independent variable also as an additional dependent variable. Then, the desired solution of the regression analysis problem can be obtained that fits each gage output as a function of both the original and additional independent calibration variables. The final regression coefficients can be converted to data reduction matrix coefficients because the missing dependent variables were added to the data set without changing the regression analysis result for each gage output. Therefore, the new method still supports the application of the two load iteration equation choices that the iterative method traditionally uses for the prediction of balance loads during a wind tunnel test. An example is discussed in the paper that illustrates the application of the new method to a realistic simulation of temperature dependent calibration data set of a six component balance.

  15. A comparison theorem for the SOR iterative method

    NASA Astrophysics Data System (ADS)

    Sun, Li-Ying

    2005-09-01

    In 1997, Kohno et al. reported numerically that the improving modified Gauss-Seidel method, referred to as the IMGS method, is superior to the SOR iterative method. In this paper, we prove that the spectral radius of the IMGS method is smaller than that of the SOR method and the Gauss-Seidel method if the relaxation parameter ω ∈ (0, 1]. As a result, we prove theoretically that this method succeeds in improving the convergence of some classical iterative methods. Some recent results are improved.
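
    A small numerical check of the kind of comparison made in the paper: the spectral radius of the SOR (or, with omega = 1, Gauss-Seidel) iteration matrix for a given system matrix A; a smaller radius means faster asymptotic convergence. The splitting below is the textbook one, not the IMGS-preconditioned matrix from the paper.

    ```python
    import numpy as np

    def iteration_spectral_radius(A, omega=1.0):
        """Spectral radius of the SOR iteration matrix
        M = (D + omega*L)^{-1} ((1 - omega)*D - omega*U); omega = 1 gives
        Gauss-Seidel. The iteration converges iff this radius is < 1."""
        D = np.diag(np.diag(A))
        L = np.tril(A, -1)
        U = np.triu(A, 1)
        M = np.linalg.solve(D + omega * L, (1.0 - omega) * D - omega * U)
        return max(abs(np.linalg.eigvals(M)))
    ```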

  16. Iterative approach as alternative to S-matrix in modal methods

    NASA Astrophysics Data System (ADS)

    Semenikhin, Igor; Zanuccoli, Mauro

    2014-12-01

    The continuously increasing complexity of opto-electronic devices and the rising demands on simulation accuracy lead to the need to solve very large systems of linear equations, making iterative methods promising and attractive from the computational point of view with respect to direct methods. In particular, the iterative approach potentially enables the reduction of the computational time required to solve Maxwell's equations by Eigenmode Expansion algorithms. Regardless of the particular eigenmode-finding method used, the expansion coefficients are computed as a rule by the scattering matrix (S-matrix) approach or similar techniques requiring on the order of M^3 operations. In this work we consider alternatives to the S-matrix technique which are based on pure iterative or mixed direct-iterative approaches. The possibility of diminishing the impact of the M^3-order calculations on the overall time, and in some cases even reducing the number of arithmetic operations to M^2 by applying iterative techniques, is discussed. Numerical results are presented to illustrate the validity and potential of the proposed approaches.

  17. Parallel iterative methods for sparse linear and nonlinear equations

    NASA Technical Reports Server (NTRS)

    Saad, Youcef

    1989-01-01

    As three-dimensional models are gaining importance, iterative methods will become almost mandatory. Among these, preconditioned Krylov subspace methods have been viewed as the most efficient and reliable, when solving linear as well as nonlinear systems of equations. There have been several different approaches taken to adapt iterative methods for supercomputers. Some of these approaches are discussed, and the methods that deal more specifically with general unstructured sparse matrices, such as those arising from finite element methods, are emphasized.

  18. Efficient solution of the simplified PN equations

    DOE PAGES

    Hamilton, Steven P.; Evans, Thomas M.

    2014-12-23

    We show new solver strategies for the multigroup SPN equations for nuclear reactor analysis. By forming the complete matrix over space, moments, and energy, a robust set of solution strategies may be applied. Moreover, power iteration, shifted power iteration, Rayleigh quotient iteration, Arnoldi's method, and a generalized Davidson method, each using algebraic and physics-based multigrid preconditioners, have been compared on the C5G7 MOX test problem as well as an operational PWR model. These results show that the most efficient approach is the generalized Davidson method, which is 30-40 times faster than traditional power iteration and 6-10 times faster than Arnoldi's method.
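
    For illustration, minimal sketches of two of the eigenvalue iterations compared above, written for a generic symmetric matrix rather than the multigroup SPN operators:

    ```python
    import numpy as np

    def power_iteration(A, x0, n_iter=100):
        """Plain power iteration: repeatedly apply A and normalize."""
        x = x0 / np.linalg.norm(x0)
        for _ in range(n_iter):
            x = A @ x
            x /= np.linalg.norm(x)
        return x @ A @ x, x          # Rayleigh quotient, eigenvector

    def rayleigh_quotient_iteration(A, x0, n_iter=10):
        """Rayleigh quotient iteration: shift by the current Rayleigh quotient
        and solve a linear system at each step; converges in far fewer
        iterations than plain power iteration near a simple eigenpair."""
        x = x0 / np.linalg.norm(x0)
        for _ in range(n_iter):
            mu = x @ A @ x
            y = np.linalg.solve(A - mu * np.eye(A.shape[0]), x)
            x = y / np.linalg.norm(y)
        return x @ A @ x, x
    ```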

  19. An Improved Newton's Method.

    ERIC Educational Resources Information Center

    Mathews, John H.

    1989-01-01

    Describes Newton's method to locate roots of an equation using the Newton-Raphson iteration formula. Develops an adaptive method overcoming limitations of the iteration method. Provides the algorithm and computer program of the adaptive Newton-Raphson method. (YP)
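
    A minimal sketch of the Newton-Raphson iteration the article builds on; the step-halving safeguard shown here is one simple "adaptive" modification and is not claimed to be the article's algorithm.

    ```python
    import math

    def newton_raphson(f, dfdx, x0, tol=1e-12, max_iter=50):
        """Classic Newton-Raphson iteration x <- x - f(x)/f'(x), with a simple
        safeguard that halves the step whenever the residual fails to decrease."""
        x = x0
        fx = f(x)
        for _ in range(max_iter):
            if abs(fx) < tol:
                break
            step = fx / dfdx(x)
            x_new = x - step
            while abs(f(x_new)) > abs(fx) and abs(step) > tol:
                step *= 0.5                  # damp the step if it overshoots
                x_new = x - step
            x, fx = x_new, f(x_new)
        return x

    # Example: the root of cos(x) - x near x0 = 1
    root = newton_raphson(lambda x: math.cos(x) - x,
                          lambda x: -math.sin(x) - 1.0, x0=1.0)
    ```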

  20. An accelerated subspace iteration for eigenvector derivatives

    NASA Technical Reports Server (NTRS)

    Ting, Tienko

    1991-01-01

    An accelerated subspace iteration method for calculating eigenvector derivatives has been developed. Factors affecting the effectiveness and the reliability of the subspace iteration are identified, and effective strategies concerning these factors are presented. The method has been implemented, and the results of a demonstration problem are presented.

  1. Iterative CT reconstruction using coordinate descent with ordered subsets of data

    NASA Astrophysics Data System (ADS)

    Noo, F.; Hahn, K.; Schöndube, H.; Stierstorfer, K.

    2016-04-01

    Image reconstruction based on iterative minimization of a penalized weighted least-squares criterion has become an important topic of research in X-ray computed tomography. This topic is motivated by increasing evidence that such a formalism may enable a significant reduction in dose imparted to the patient while maintaining or improving image quality. One important issue associated with this iterative image reconstruction concept is slow convergence and the associated computational effort. For this reason, there is interest in finding methods that produce approximate versions of the targeted image with a small number of iterations and an acceptable level of discrepancy. We introduce here a novel method to produce such approximations: ordered subsets in combination with iterative coordinate descent. Preliminary results demonstrate that this method can produce, within 10 iterations and using only a constant image as initial condition, satisfactory reconstructions that retain the noise properties of the targeted image.

  2. Parallelized implicit propagators for the finite-difference Schrödinger equation

    NASA Astrophysics Data System (ADS)

    Parker, Jonathan; Taylor, K. T.

    1995-08-01

    We describe the application of block Gauss-Seidel and block Jacobi iterative methods to the design of implicit propagators for finite-difference models of the time-dependent Schrödinger equation. The block-wise iterative methods discussed here are mixed direct-iterative methods for solving simultaneous equations, in the sense that direct methods (e.g. LU decomposition) are used to invert certain block sub-matrices, and iterative methods are used to complete the solution. We describe parallel variants of the basic algorithm that are well suited to the medium- to coarse-grained parallelism of work-station clusters, and MIMD supercomputers, and we show that under a wide range of conditions, fine-grained parallelism of the computation can be achieved. Numerical tests are conducted on a typical one-electron atom Hamiltonian. The methods converge robustly to machine precision (15 significant figures), in some cases in as few as 6 or 7 iterations. The rate of convergence is nearly independent of the finite-difference grid-point separations.

  3. SU-D-206-03: Segmentation Assisted Fast Iterative Reconstruction Method for Cone-Beam CT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wu, P; Mao, T; Gong, S

    2016-06-15

    Purpose: Total Variation (TV) based iterative reconstruction (IR) methods enable accurate CT image reconstruction from low-dose measurements with sparse projection acquisition, owing to the sparsifiability of most CT images under the gradient operator. However, conventional solutions require a large number of iterations to generate a decent reconstructed image. One major reason is that the expected piecewise constant property is not taken into consideration at the optimization starting point. In this work, we propose an iterative reconstruction method for cone-beam CT (CBCT) that uses image segmentation to guide the optimization path more efficiently on the regularization term at the beginning of the optimization trajectory. Methods: Our method applies the general knowledge that one tissue component in the CT image contains a relatively uniform distribution of CT numbers. This general knowledge is incorporated into the proposed reconstruction using an image segmentation technique to generate a piecewise constant template on the first-pass low-quality CT image reconstructed using an analytical algorithm. The template image is applied as an initial value in the optimization process. Results: The proposed method is evaluated on the Shepp-Logan phantom at low and high noise levels, and on a head patient. The number of iterations is reduced by overall 40%. Moreover, our proposed method tends to generate a smoother reconstructed image with the same TV value. Conclusion: We propose a computationally efficient iterative reconstruction method for CBCT imaging. Our method achieves a better optimization trajectory and a faster convergence behavior. It does not rely on prior information and can be readily incorporated into existing iterative reconstruction frameworks. Our method is thus practical and attractive as a general solution to CBCT iterative reconstruction. This work is supported by the Zhejiang Provincial Natural Science Foundation of China (Grant No. LR16F010001), National High-tech R&D Program for Young Scientists by the Ministry of Science and Technology of China (Grant No. 2015AA020917).

  4. Iteration, Not Induction

    ERIC Educational Resources Information Center

    Dobbs, David E.

    2009-01-01

    The main purpose of this note is to present and justify proof via iteration as an intuitive, creative and empowering method that is often available and preferable as an alternative to proofs via either mathematical induction or the well-ordering principle. The method of iteration depends only on the fact that any strictly decreasing sequence of…

  5. A new iterative triclass thresholding technique in image segmentation.

    PubMed

    Cai, Hongmin; Yang, Zhong; Cao, Xinhua; Xia, Weiming; Xu, Xiaoyin

    2014-03-01

    We present a new method in image segmentation that is based on Otsu's method but iteratively searches for subregions of the image for segmentation, instead of treating the full image as a whole region for processing. The iterative method starts with Otsu's threshold and computes the mean values of the two classes as separated by the threshold. Based on Otsu's threshold and the two mean values, the method separates the image into three classes instead of the two produced by the standard Otsu's method. The first two classes are determined as the foreground and background and they will not be processed further. The third class is denoted as a to-be-determined (TBD) region that is processed at the next iteration. At the succeeding iteration, Otsu's method is applied to the TBD region to calculate a new threshold and two class means, and the TBD region is again separated into three classes, namely, foreground, background, and a new TBD region, which by definition is smaller than the previous TBD regions. Then, the new TBD region is processed in a similar manner. The process stops when the difference between the Otsu thresholds calculated in two successive iterations is less than a preset value. Then, all the intermediate foreground and background regions are, respectively, combined to create the final segmentation result. Tests on synthetic and real images showed that the new iterative method can achieve better performance than the standard Otsu's method in many challenging cases, such as identifying weak objects and revealing fine structures of complex objects while the added computational cost is minimal.
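
    A sketch of the iterative triclass procedure described above, assuming scikit-image's threshold_otsu is available; the handling of any leftover TBD pixels after the loop is an assumption added for completeness.

    ```python
    import numpy as np
    from skimage.filters import threshold_otsu

    def iterative_triclass_otsu(image, eps=1e-3, max_iter=50):
        """Apply Otsu to the current to-be-determined (TBD) region, label pixels
        above the upper class mean as foreground and below the lower class mean
        as background, keep the band in between as a new, smaller TBD region,
        and stop once the Otsu threshold barely changes between iterations."""
        fg = np.zeros(image.shape, dtype=bool)
        bg = np.zeros(image.shape, dtype=bool)
        tbd = np.ones(image.shape, dtype=bool)
        t = None
        for _ in range(max_iter):
            vals = image[tbd]
            if vals.min() == vals.max():        # nothing left to separate
                break
            t_new = threshold_otsu(vals)
            mu_low, mu_high = vals[vals <= t_new].mean(), vals[vals > t_new].mean()
            fg |= tbd & (image >= mu_high)
            bg |= tbd & (image <= mu_low)
            tbd &= (image > mu_low) & (image < mu_high)
            converged = t is not None and abs(t_new - t) < eps
            t = t_new
            if not tbd.any() or converged:
                break
        if t is not None:
            fg |= tbd & (image > t)   # assign leftover TBD pixels (an assumption)
            bg |= tbd & (image <= t)
        return fg, bg
    ```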

  6. Periodic Pulay method for robust and efficient convergence acceleration of self-consistent field iterations

    DOE PAGES

    Banerjee, Amartya S.; Suryanarayana, Phanish; Pask, John E.

    2016-01-21

    Pulay's Direct Inversion in the Iterative Subspace (DIIS) method is one of the most widely used mixing schemes for accelerating the self-consistent solution of electronic structure problems. In this work, we propose a simple generalization of DIIS in which Pulay extrapolation is performed at periodic intervals rather than on every self-consistent field iteration, and linear mixing is performed on all other iterations. Lastly, we demonstrate through numerical tests on a wide variety of materials systems in the framework of density functional theory that the proposed generalization of Pulay's method significantly improves its robustness and efficiency.
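
    A sketch of the periodic interleaving described above, in an Anderson-type formulation of Pulay mixing; the stored histories, the mixing parameter beta and the period are placeholders rather than the paper's recommended settings.

    ```python
    import numpy as np

    def periodic_pulay_step(x_hist, f_hist, beta=0.2, period=3):
        """One mixing step given the stored iterates x_i and residuals
        f_i = g(x_i) - x_i: plain linear mixing on most iterations, and a
        Pulay/DIIS-type extrapolation only every `period`-th iteration."""
        k = len(f_hist)
        if k < 2 or k % period != 0:
            return x_hist[-1] + beta * f_hist[-1]            # linear mixing
        X = np.diff(np.asarray(x_hist), axis=0).T            # iterate differences
        F = np.diff(np.asarray(f_hist), axis=0).T            # residual differences
        gamma = np.linalg.lstsq(F, f_hist[-1], rcond=None)[0]
        return x_hist[-1] + beta * f_hist[-1] - (X + beta * F) @ gamma
    ```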

  7. Asymptotic (h tending to infinity) absolute stability for BDFs applied to stiff differential equations. [Backward Differentiation Formulas

    NASA Technical Reports Server (NTRS)

    Krogh, F. T.; Stewart, K.

    1984-01-01

    Methods based on backward differentiation formulas (BDFs) for solving stiff differential equations require iterating to approximate the solution of the corrector equation on each step. One hope for reducing the cost of this is to make do with iteration matrices that are known to have errors and to do no more iterations than are necessary to maintain the stability of the method. This paper, following work by Klopfenstein, examines the effect of errors in the iteration matrix on the stability of the method. Application of the results to an algorithm is discussed briefly.

  8. A Build-Up Interior Method for Linear Programming: Affine Scaling Form

    DTIC Science & Technology

    1990-02-01

    ... initiating a major iteration imply convergence in a finite number of iterations. Each iteration t of the Dikin algorithm starts with an interior dual ... this variant with the affine scaling method of Dikin [5] (in dual form). We have also looked into the analogous variant for the related Karmarkar's ... [4] G. B. Dantzig, Linear Programming and Extensions (Princeton University Press, Princeton, NJ, 1963). [5] I. I. Dikin, "Iterative solution of ...

  9. Iterative metal artifact reduction for x-ray computed tomography using unmatched projector/backprojector pairs

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, Hanming; Wang, Linyuan; Li, Lei

    2016-06-15

    Purpose: Metal artifact reduction (MAR) is a major problem and a challenging issue in x-ray computed tomography (CT) examinations. Iterative reconstruction from sinograms unaffected by metals shows promising potential in detail recovery. This reconstruction has been the subject of much research in recent years. However, conventional iterative reconstruction methods easily introduce new artifacts around metal implants because of incomplete data reconstruction and inconsistencies in practical data acquisition. Hence, this work aims at developing a method to suppress newly introduced artifacts and improve the image quality around metal implants for the iterative MAR scheme. Methods: The proposed method consists of two steps based on the general iterative MAR framework. An uncorrected image is initially reconstructed, and the corresponding metal trace is obtained. The iterative reconstruction method is then used to reconstruct images from the unaffected sinogram. In the reconstruction step of this work, an iterative strategy utilizing unmatched projector/backprojector pairs is used. A ramp filter is introduced into the back-projection procedure to restrain the inconsistency components in low frequencies and generate more reliable images of the regions around metals. Furthermore, a constrained total variation (TV) minimization model is also incorporated to enhance efficiency. The proposed strategy is implemented based on an iterative FBP and an alternating direction minimization (ADM) scheme, respectively. The developed algorithms are referred to as “iFBP-TV” and “TV-FADM,” respectively. Two projection-completion-based MAR methods and three iterative MAR methods are performed simultaneously for comparison. Results: The proposed method performs reasonably on both simulation and real CT-scanned datasets. This approach could reduce streak metal artifacts effectively and avoid the mentioned effects in the vicinity of the metals. The improvements are evaluated by inspecting regions of interest and by comparing the root-mean-square errors, normalized mean absolute distance, and universal quality index metrics of the images. Both iFBP-TV and TV-FADM methods outperform other counterparts in all cases. Unlike the conventional iterative methods, the proposed strategy utilizing unmatched projector/backprojector pairs shows excellent performance in detail preservation and prevention of the introduction of new artifacts. Conclusions: Qualitative and quantitative evaluations of experimental results indicate that the developed method outperforms classical MAR algorithms in suppressing streak artifacts and preserving the edge structural information of the object. In particular, structures lying close to metals can be gradually recovered because of the reduction of artifacts caused by inconsistency effects.

  10. A Kronecker product splitting preconditioner for two-dimensional space-fractional diffusion equations

    NASA Astrophysics Data System (ADS)

    Chen, Hao; Lv, Wen; Zhang, Tongtong

    2018-05-01

    We study preconditioned iterative methods for the linear system arising in the numerical discretization of a two-dimensional space-fractional diffusion equation. Our approach is based on a formulation of the discrete problem that is shown to be the sum of two Kronecker products. By making use of an alternating Kronecker product splitting iteration technique we establish a class of fixed-point iteration methods. Theoretical analysis shows that the new method converges to the unique solution of the linear system. Moreover, the optimal choice of the involved iteration parameters and the corresponding asymptotic convergence rate are computed exactly when the eigenvalues of the system matrix are all real. The basic iteration is accelerated by a Krylov subspace method like GMRES. The corresponding preconditioner has a Kronecker product structure and requires at each iteration the solution of a set of discrete one-dimensional fractional diffusion equations. We use structure preserving approximations to the discrete one-dimensional fractional diffusion operators in the action of the preconditioning matrix. Numerical examples are presented to illustrate the effectiveness of this approach.
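
    The alternating Kronecker splitting idea can be made concrete with a small, self-contained sketch. The code below is a generic fixed-point iteration for a system whose matrix is the sum of two Kronecker products, with dense one-dimensional operators small enough to factor directly; the splitting parameter, test operators, and stopping rule are illustrative assumptions, not the paper's fractional-diffusion discretization or its optimal parameter choice.

```python
import numpy as np

def kron_splitting_iteration(T1, T2, b, alpha=1.0, tol=1e-10, max_iter=500):
    """Solve (kron(I, T1) + kron(T2, I)) x = b by alternating between the
    two Kronecker factors, in the spirit of an ADI-type splitting.
    T1 (n x n) and T2 (m x m) are 1D operators; alpha > 0 is the splitting
    parameter (chosen heuristically here)."""
    n, m = T1.shape[0], T2.shape[0]
    A1 = np.kron(np.eye(m), T1)      # action of T1 along the first dimension
    A2 = np.kron(T2, np.eye(n))      # action of T2 along the second dimension
    I = np.eye(n * m)
    x = np.zeros(n * m)
    for _ in range(max_iter):
        x_half = np.linalg.solve(alpha * I + A1, (alpha * I - A2) @ x + b)
        x = np.linalg.solve(alpha * I + A2, (alpha * I - A1) @ x_half + b)
        if np.linalg.norm((A1 + A2) @ x - b) <= tol * np.linalg.norm(b):
            break
    return x

# Example with symmetric positive definite tridiagonal 1D operators.
n = 8
T = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
rhs = np.ones(n * n)
x = kron_splitting_iteration(T, T, rhs, alpha=1.0)
```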

  11. Electromagnetic scattering of large structures in layered earths using integral equations

    NASA Astrophysics Data System (ADS)

    Xiong, Zonghou; Tripp, Alan C.

    1995-07-01

    An electromagnetic scattering algorithm for large conductivity structures in stratified media has been developed and is based on the method of system iteration and spatial symmetry reduction using volume electric integral equations. The method of system iteration divides a structure into many substructures and solves the resulting matrix equation using a block iterative method. The block submatrices usually need to be stored on disk in order to save computer core memory. However, this requires a large disk for large structures. If the body is discretized into equal-size cells, it is possible to use the spatial symmetry relations of the Green's functions to regenerate the scattering impedance matrix in each iteration, thus avoiding expensive disk storage. Numerical tests show that the system iteration converges much faster than the conventional point-wise Gauss-Seidel iterative method. The number of cells does not significantly affect the rate of convergence. Thus the algorithm effectively reduces the solution of the scattering problem to order O(N^2), instead of O(N^3) as with direct solvers.
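
    The block (system) iteration at the heart of this approach is a block Gauss-Seidel sweep. The sketch below shows such a sweep for a generic dense system; the equal-sized contiguous blocks, tolerance, and test matrix are illustrative assumptions, and the symmetry-based regeneration of the impedance matrix described in the abstract is not reproduced.

```python
import numpy as np

def block_gauss_seidel(A, b, block_size, tol=1e-8, max_iter=200):
    """Solve A x = b by sweeping over contiguous blocks of unknowns,
    solving each diagonal block exactly with the latest available values
    of the other blocks (a block analogue of Gauss-Seidel)."""
    n = len(b)
    blocks = [slice(i, min(i + block_size, n)) for i in range(0, n, block_size)]
    x = np.zeros(n)
    for _ in range(max_iter):
        for blk in blocks:
            # right-hand side of this block using the current values elsewhere
            r = b[blk] - A[blk, :] @ x + A[blk, blk] @ x[blk]
            x[blk] = np.linalg.solve(A[blk, blk], r)
        if np.linalg.norm(A @ x - b) <= tol * np.linalg.norm(b):
            break
    return x

# Diagonally dominant symmetric test system (guarantees convergence of the sweep).
n = 12
A = np.random.rand(n, n)
A = 0.5 * (A + A.T) + n * np.eye(n)
b = np.ones(n)
x = block_gauss_seidel(A, b, block_size=4)
```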

  12. Iteration with Spreadsheets.

    ERIC Educational Resources Information Center

    Smith, Michael

    1990-01-01

    Presents several examples of the iteration method using computer spreadsheets. Examples included are simple iterative sequences and the solution of equations using the Newton-Raphson formula, linear interpolation, and interval bisection. (YP)
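
    The same iterations are easy to reproduce outside a spreadsheet. Below is a minimal sketch of two of the schemes mentioned, Newton-Raphson and interval bisection, applied to an example equation (x^3 - 2x - 5 = 0, chosen here purely for illustration).

```python
def newton_raphson(f, df, x0, tol=1e-12, max_iter=50):
    """Iterate x <- x - f(x)/f'(x) until the update is below tol."""
    x = x0
    for _ in range(max_iter):
        step = f(x) / df(x)
        x -= step
        if abs(step) < tol:
            break
    return x

def bisection(f, a, b, tol=1e-12):
    """Halve the bracketing interval [a, b] (f(a) and f(b) of opposite sign)."""
    while b - a > tol:
        mid = 0.5 * (a + b)
        if f(a) * f(mid) <= 0:
            b = mid
        else:
            a = mid
    return 0.5 * (a + b)

f = lambda x: x**3 - 2*x - 5
df = lambda x: 3*x**2 - 2
root_newton = newton_raphson(f, df, x0=2.0)
root_bisect = bisection(f, 2.0, 3.0)
```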

  13. Iterative methods for tomography problems: implementation to a cross-well tomography problem

    NASA Astrophysics Data System (ADS)

    Karadeniz, M. F.; Weber, G. W.

    2018-01-01

    The velocity distribution between two boreholes is reconstructed by cross-well tomography, which is commonly used in geology. In this paper, the iterative methods Kaczmarz’s algorithm, the algebraic reconstruction technique (ART), and the simultaneous iterative reconstruction technique (SIRT) are applied to a specific cross-well tomography problem. The convergence of these methods to the solution and their CPU times for the cross-well tomography problem are compared. Furthermore, these three methods are compared on this problem for different tolerance values.
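
    For orientation, the sketch below implements a basic Kaczmarz sweep and a basic SIRT-type simultaneous update for a generic system Ax = b; the relaxation factors, iteration counts, and the small dense test system are illustrative assumptions rather than the authors' cross-well geometry.

```python
import numpy as np

def kaczmarz(A, b, n_sweeps=50, relax=1.0):
    """Cyclically project the iterate onto the hyperplane of each row."""
    x = np.zeros(A.shape[1])
    for _ in range(n_sweeps):
        for i in range(A.shape[0]):
            ai = A[i]
            x += relax * (b[i] - ai @ x) / (ai @ ai) * ai
    return x

def sirt(A, b, n_iter=200, relax=1.0):
    """Simultaneous update: average the back-projected row corrections."""
    x = np.zeros(A.shape[1])
    row_norms = np.sum(A * A, axis=1)
    for _ in range(n_iter):
        residual = b - A @ x
        x += relax * (A.T @ (residual / row_norms)) / A.shape[0]
    return x

# Small consistent overdetermined test system.
A = np.array([[2.0, 1.0], [1.0, 3.0], [1.0, -1.0]])
x_true = np.array([1.0, 2.0])
b = A @ x_true
x_kaczmarz = kaczmarz(A, b)
x_sirt = sirt(A, b)
```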

  14. Highly efficient and exact method for parallelization of grid-based algorithms and its implementation in DelPhi

    PubMed Central

    Li, Chuan; Li, Lin; Zhang, Jie; Alexov, Emil

    2012-01-01

    The Gauss-Seidel method is a standard iterative numerical method widely used to solve a system of equations and, in general, is more efficient compared to other iterative methods, such as the Jacobi method. However, the standard implementation of the Gauss-Seidel method restricts its utilization in parallel computing because it requires updated neighboring values (i.e., from the current iteration) as soon as they are available. Here we report an efficient and exact (not requiring assumptions) method to parallelize iterations and to reduce the computational time as a linear/nearly linear function of the number of CPUs. In contrast to other existing solutions, our method does not require any assumptions and is equally applicable for solving linear and nonlinear equations. This approach is implemented in the DelPhi program, which is a finite difference Poisson-Boltzmann equation solver to model electrostatics in molecular biology. This development makes the iterative procedure for obtaining the electrostatic potential distribution in the parallelized DelPhi severalfold faster than that in the serial code. Further, we demonstrate the advantages of the new parallelized DelPhi by computing the electrostatic potential and the corresponding energies of large supramolecular structures. PMID:22674480

  15. Pseudo-time methods for constrained optimization problems governed by PDE

    NASA Technical Reports Server (NTRS)

    Taasan, Shlomo

    1995-01-01

    In this paper we present a novel method for solving optimization problems governed by partial differential equations. Existing methods use gradient information in marching toward the minimum, where the constrained PDE is solved once (sometimes only approximately) per optimization step. Such methods can be viewed as marching techniques on the intersection of the state and costate hypersurfaces while improving the residuals of the design equations at each iteration. In contrast, the method presented here marches on the design hypersurface and at each iteration improves the residuals of the state and costate equations. The new method is usually much less expensive per iteration step since, in most problems of practical interest, the design equation involves far fewer unknowns than either the state or costate equations. Convergence is shown using energy estimates for the evolution equations governing the iterative process. Numerical tests show that the new method allows the solution of the optimization problem at the cost of solving the analysis problems just a few times, independent of the number of design parameters. The method can be applied using single grid iterations as well as with multigrid solvers.

  16. A New Newton-Like Iterative Method for Roots of Analytic Functions

    ERIC Educational Resources Information Center

    Otolorin, Olayiwola

    2005-01-01

    A new Newton-like iterative formula for the solution of non-linear equations is proposed. To derive the formula, the convergence criteria of the one-parameter iteration formula, and also the quasilinearization in the derivation of Newton's formula are reviewed. The result is a new formula which eliminates the limitations of other methods. There is…

  17. Iterative Nonlocal Total Variation Regularization Method for Image Restoration

    PubMed Central

    Xu, Huanyu; Sun, Quansen; Luo, Nan; Cao, Guo; Xia, Deshen

    2013-01-01

    In this paper, a Bregman iteration based total variation image restoration algorithm is proposed. Based on the Bregman iteration, the algorithm splits the original total variation problem into sub-problems that are easy to solve. Moreover, non-local regularization is introduced into the proposed algorithm, and a method to choose the non-local filter parameter locally and adaptively is proposed. Experiment results show that the proposed algorithms outperform some other regularization methods. PMID:23776560

  18. Iterative integral parameter identification of a respiratory mechanics model.

    PubMed

    Schranz, Christoph; Docherty, Paul D; Chiew, Yeong Shiong; Möller, Knut; Chase, J Geoffrey

    2012-07-18

    Patient-specific respiratory mechanics models can support the evaluation of optimal lung protective ventilator settings during ventilation therapy. Clinical application requires that the individual's model parameter values must be identified with information available at the bedside. Multiple linear regression or gradient-based parameter identification methods are highly sensitive to noise and initial parameter estimates. Thus, they are difficult to apply at the bedside to support therapeutic decisions. An iterative integral parameter identification method is applied to a second order respiratory mechanics model. The method is compared to the commonly used regression methods and error-mapping approaches using simulated and clinical data. The clinical potential of the method was evaluated on data from 13 Acute Respiratory Distress Syndrome (ARDS) patients. The iterative integral method converged to error minima 350 times faster than the Simplex Search Method using simulation data sets and 50 times faster using clinical data sets. Established regression methods reported erroneous results due to sensitivity to noise. In contrast, the iterative integral method was effective independent of initial parameter estimations, and converged successfully in each case tested. These investigations reveal that the iterative integral method is beneficial with respect to computing time, operator independence and robustness, and thus applicable at the bedside for this clinical application.

  19. A Universal Tare Load Prediction Algorithm for Strain-Gage Balance Calibration Data Analysis

    NASA Technical Reports Server (NTRS)

    Ulbrich, N.

    2011-01-01

    An algorithm is discussed that may be used to estimate tare loads of wind tunnel strain-gage balance calibration data. The algorithm was originally developed by R. Galway of IAR/NRC Canada and has been described in the literature for the iterative analysis technique. Basic ideas of Galway's algorithm, however, are universally applicable and work for both the iterative and the non-iterative analysis technique. A recent modification of Galway's algorithm is presented that improves the convergence behavior of the tare load prediction process if it is used in combination with the non-iterative analysis technique. The modified algorithm allows an analyst to use an alternate method for the calculation of intermediate non-linear tare load estimates whenever Galway's original approach does not lead to a convergence of the tare load iterations. It is also shown in detail how Galway's algorithm may be applied to the non-iterative analysis technique. Hand load data from the calibration of a six-component force balance is used to illustrate the application of the original and modified tare load prediction method. During the analysis of the data both the iterative and the non-iterative analysis technique were applied. Overall, predicted tare loads for combinations of the two tare load prediction methods and the two balance data analysis techniques showed excellent agreement as long as the tare load iterations converged. The modified algorithm, however, appears to have an advantage over the original algorithm when absolute voltage measurements of gage outputs are processed using the non-iterative analysis technique. In these situations only the modified algorithm converged because it uses an exact solution of the intermediate non-linear tare load estimate for the tare load iteration.

  20. Development of iterative techniques for the solution of unsteady compressible viscous flows

    NASA Technical Reports Server (NTRS)

    Sankar, Lakshmi N.; Hixon, Duane

    1991-01-01

    Efficient iterative solution methods are being developed for the numerical solution of two- and three-dimensional compressible Navier-Stokes equations. Iterative time marching methods have several advantages over classical multi-step explicit time marching schemes and non-iterative implicit time marching schemes. Iterative schemes have better stability characteristics than non-iterative explicit and implicit schemes. Moreover, iterative schemes can be designed so that the extra work they require performs efficiently on current and future generation scalable, massively parallel machines. An obvious candidate for iteratively solving the system of coupled nonlinear algebraic equations arising in CFD applications is the Newton method. Newton's method was implemented in existing finite difference and finite volume methods. Depending on the complexity of the problem, the number of Newton iterations needed per step to solve the discretized system of equations can, however, vary dramatically from a few to several hundred. Another popular approach, based on the classical conjugate gradient method and known as the GMRES (Generalized Minimum Residual) algorithm, is also investigated. The GMRES algorithm was used in the past by a number of researchers for solving steady viscous and inviscid flow problems with considerable success. Here, the suitability of this algorithm is investigated for solving the system of nonlinear equations that arise in unsteady Navier-Stokes solvers at each time step. Unlike the Newton method, which attempts to drive the error in the solution at each and every node down to zero, the GMRES algorithm only seeks to minimize the L2 norm of the error. In the GMRES algorithm the changes in the flow properties from one time step to the next are assumed to be the sum of a set of orthogonal vectors. By restricting the number of vectors to a reasonably small value N (between 5 and 20), the work required for advancing the solution from one time step to the next may be kept to (N+1) times that of a noniterative scheme. Many of the operations required by the GMRES algorithm such as matrix-vector multiplies, matrix additions and subtractions can all be vectorized and parallelized efficiently.
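
    The role GMRES plays inside an implicit time step can be sketched generically: one Newton-type correction per step, with the Jacobian-vector product approximated by a finite difference of the nonlinear residual so that no Jacobian matrix is ever formed. The residual function, perturbation size, and toy problem below are illustrative assumptions, not the flow solver described in the abstract.

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, gmres

def newton_gmres_step(residual, u, eps=1e-7):
    """One matrix-free Newton correction for residual(u) = 0.
    The Jacobian-vector product J v is approximated by
    (residual(u + eps*v) - residual(u)) / eps."""
    r0 = residual(u)
    n = u.size

    def jv(v):
        return (residual(u + eps * v) - r0) / eps

    J = LinearOperator((n, n), matvec=jv)
    du, info = gmres(J, -r0)          # Krylov solve for the Newton correction
    return u + du, info

# Toy nonlinear residual (illustrative only): F(u) = A u + u**3 - f
n = 20
A = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
f = np.ones(n)
F = lambda u: A @ u + u**3 - f
u = np.zeros(n)
for _ in range(10):
    u, _ = newton_gmres_step(F, u)
```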

  1. Global Asymptotic Behavior of Iterative Implicit Schemes

    NASA Technical Reports Server (NTRS)

    Yee, H. C.; Sweby, P. K.

    1994-01-01

    The global asymptotic nonlinear behavior of some standard iterative procedures in solving nonlinear systems of algebraic equations arising from four implicit linear multistep methods (LMMs) in discretizing three models of 2 x 2 systems of first-order autonomous nonlinear ordinary differential equations (ODEs) is analyzed using the theory of dynamical systems. The iterative procedures include simple iteration and full and modified Newton iterations. The results are compared with standard Runge-Kutta explicit methods, a noniterative implicit procedure, and the Newton method of solving the steady part of the ODEs. Studies showed that aside from exhibiting spurious asymptotes, all of the four implicit LMMs can change the type and stability of the steady states of the differential equations (DEs). They also exhibit a drastic distortion but less shrinkage of the basin of attraction of the true solution than standard nonLMM explicit methods. The simple iteration procedure exhibits behavior which is similar to standard nonLMM explicit methods except that spurious steady-state numerical solutions cannot occur. The numerical basins of attraction of the noniterative implicit procedure mimic more closely the basins of attraction of the DEs and are more efficient than the three iterative implicit procedures for the four implicit LMMs. Contrary to popular belief, the initial data using the Newton method of solving the steady part of the DEs may not have to be close to the exact steady state for convergence. These results can be used as an explanation for possible causes and cures of slow convergence and nonconvergence of steady-state numerical solutions when using an implicit LMM time-dependent approach in computational fluid dynamics.

  2. Implementation of an improved adaptive-implicit method in a thermal compositional simulator

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tan, T.B.

    1988-11-01

    A multicomponent thermal simulator with an adaptive-implicit-method (AIM) formulation/inexact-adaptive-Newton (IAN) method is presented. The final coefficient matrix retains the original banded structure so that conventional iterative methods can be used. Various methods for selection of the eliminated unknowns are tested. The AIM/IAN method has a lower work count per Newtonian iteration than fully implicit methods, but a wrong choice of unknowns will result in excessive Newtonian iterations. For the problems tested, the residual-error method described in the paper for selecting implicit unknowns, together with the IAN method, reduced the CPU time by up to 28% relative to the fully implicit method.

  3. Calibration and compensation method of three-axis geomagnetic sensor based on pre-processing total least square iteration

    NASA Astrophysics Data System (ADS)

    Zhou, Y.; Zhang, X.; Xiao, W.

    2018-04-01

    As the geomagnetic sensor is susceptible to interference, a pre-processing total least squares iteration method is proposed for calibration compensation. Firstly, the error model of the geomagnetic sensor is analyzed and a correction model is proposed; the characteristics of the model are then analyzed and converted into nine parameters. The geomagnetic data are processed by the Hilbert-Huang transform (HHT) to improve the signal-to-noise ratio, and the nine parameters are calculated using a combination of the Newton iteration method and least squares estimation. The sifter algorithm is used to filter the initial value of the iteration to ensure that the initial error is as small as possible. The experimental results show that this method requires no additional equipment or devices, can continuously update the calibration parameters, and compensates the geomagnetic sensor error better than the two-step estimation method.

  4. Perturbation-iteration theory for analyzing microwave striplines

    NASA Technical Reports Server (NTRS)

    Kretch, B. E.

    1985-01-01

    A perturbation-iteration technique is presented for determining the propagation constant and characteristic impedance of an unshielded microstrip transmission line. The method converges to the correct solution with a few iterations at each frequency and is equivalent to a full wave analysis. The perturbation-iteration method gives a direct solution for the propagation constant without having to find the roots of a transcendental dispersion equation. The theory is presented in detail along with numerical results for the effective dielectric constant and characteristic impedance for a wide range of substrate dielectric constants, stripline dimensions, and frequencies.

  5. Steady state numerical solutions for determining the location of MEMS on projectile

    NASA Astrophysics Data System (ADS)

    Abiprayu, K.; Abdigusna, M. F. F.; Gunawan, P. H.

    2018-03-01

    This paper compares numerical solutions of the steady and unsteady state heat distribution models on a projectile. The best location for installing the MEMS on the projectile, based on the surface temperature, is investigated. The numerical iteration methods of Jacobi and Gauss-Seidel are used to solve the steady state heat distribution model on the projectile. The results using Jacobi and Gauss-Seidel are identical, but their iteration costs differ: Jacobi’s method requires 350 iterations, whereas Gauss-Seidel requires 188 iterations and is therefore faster. The comparison of the steady state simulation with an unsteady state simulation from a reference is satisfactory. Moreover, the best candidate location for installing the MEMS on the projectile is observed at point T(10, 0), which has the lowest temperature of the examined points. The temperatures computed with Jacobi and Gauss-Seidel for scenarios 1 and 2 at T(10, 0) are 307 and 309 Kelvin, respectively.
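
    To make the Jacobi/Gauss-Seidel comparison concrete, here is a minimal sketch of both sweeps for a steady-state Laplace problem on a rectangular grid with fixed boundary temperatures; the grid size, boundary values, and tolerance are illustrative assumptions, not the projectile geometry or the scenarios of the paper.

```python
import numpy as np

def solve_laplace(nx, ny, t_boundary, method="gauss-seidel", tol=1e-6):
    """Five-point Laplace iteration on an (ny x nx) grid.
    Jacobi reads only the previous iterate; Gauss-Seidel reuses values
    already updated in the current sweep."""
    T = np.full((ny, nx), t_boundary["init"], dtype=float)
    T[0, :], T[-1, :] = t_boundary["top"], t_boundary["bottom"]
    T[:, 0], T[:, -1] = t_boundary["left"], t_boundary["right"]
    iterations = 0
    while True:
        T_old = T.copy()
        src = T_old if method == "jacobi" else T
        for j in range(1, ny - 1):
            for i in range(1, nx - 1):
                T[j, i] = 0.25 * (src[j + 1, i] + src[j - 1, i]
                                  + src[j, i + 1] + src[j, i - 1])
        iterations += 1
        if np.max(np.abs(T - T_old)) < tol:
            return T, iterations

bc = {"init": 300.0, "top": 310.0, "bottom": 305.0, "left": 307.0, "right": 303.0}
T_gs, n_gs = solve_laplace(20, 20, bc, method="gauss-seidel")
T_j, n_j = solve_laplace(20, 20, bc, method="jacobi")   # typically needs more sweeps
```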

  6. Material nonlinear analysis via mixed-iterative finite element method

    NASA Technical Reports Server (NTRS)

    Sutjahjo, Edhi; Chamis, Christos C.

    1992-01-01

    The performance of elastic-plastic mixed-iterative analysis is examined through a set of convergence studies. Membrane and bending behaviors are tested using 4-node quadrilateral finite elements. The membrane result is excellent, which indicates the implementation of elastic-plastic mixed-iterative analysis is appropriate. On the other hand, further research to improve bending performance of the method seems to be warranted.

  7. On the solution of evolution equations based on multigrid and explicit iterative methods

    NASA Astrophysics Data System (ADS)

    Zhukov, V. T.; Novikova, N. D.; Feodoritova, O. B.

    2015-08-01

    Two schemes for solving initial-boundary value problems for three-dimensional parabolic equations are studied. One is implicit and is solved using the multigrid method, while the other is explicit iterative and is based on optimal properties of the Chebyshev polynomials. In the explicit iterative scheme, the number of iteration steps and the iteration parameters are chosen based on the approximation and stability conditions, rather than on the optimization of iteration convergence to the solution of the implicit scheme. The features of the multigrid scheme include the implementation of the intergrid transfer operators for the case of discontinuous coefficients in the equation and the adaptation of the smoothing procedure to the spectrum of the difference operators. The results produced by these schemes as applied to model problems with anisotropic discontinuous coefficients are compared.

  8. Differential Characteristics Based Iterative Multiuser Detection for Wireless Sensor Networks

    PubMed Central

    Chen, Xiaoguang; Jiang, Xu; Wu, Zhilu; Zhuang, Shufeng

    2017-01-01

    High throughput, low latency and reliable communication has always been a hot topic for wireless sensor networks (WSNs) in various applications. Multiuser detection is widely used to suppress the bad effect of multiple access interference in WSNs. In this paper, a novel multiuser detection method based on differential characteristics is proposed to suppress multiple access interference. The proposed iterative receive method consists of three stages. Firstly, a differential characteristics function is presented based on the optimal multiuser detection decision function; then on the basis of differential characteristics, a preliminary threshold detection is utilized to find the potential wrongly received bits; after that an error bit corrector is employed to correct the wrong bits. In order to further lower the bit error ratio (BER), the differential characteristics calculation, threshold detection and error bit correction process described above are iteratively executed. Simulation results show that after only a few iterations the proposed multiuser detection method can achieve satisfactory BER performance. Besides, BER and near far resistance performance are much better than traditional suboptimal multiuser detection methods. Furthermore, the proposed iterative multiuser detection method also has a large system capacity. PMID:28212328

  9. The application of contraction theory to an iterative formulation of electromagnetic scattering

    NASA Technical Reports Server (NTRS)

    Brand, J. C.; Kauffman, J. F.

    1985-01-01

    Contraction theory is applied to an iterative formulation of electromagnetic scattering from periodic structures and a computational method for insuring convergence is developed. A short history of spectral (or k-space) formulation is presented with an emphasis on application to periodic surfaces. To insure a convergent solution of the iterative equation, a process called the contraction corrector method is developed. Convergence properties of previously presented iterative solutions to one-dimensional problems are examined utilizing contraction theory and the general conditions for achieving a convergent solution are explored. The contraction corrector method is then applied to several scattering problems including an infinite grating of thin wires with the solution data compared to previous works.

  10. Analysis of Monte Carlo accelerated iterative methods for sparse linear systems

    DOE PAGES

    Benzi, Michele; Evans, Thomas M.; Hamilton, Steven P.; ...

    2017-03-05

    Here, we consider hybrid deterministic-stochastic iterative algorithms for the solution of large, sparse linear systems. Starting from a convergent splitting of the coefficient matrix, we analyze various types of Monte Carlo acceleration schemes applied to the original preconditioned Richardson (stationary) iteration. We expect that these methods will have considerable potential for resiliency to faults when implemented on massively parallel machines. We also establish sufficient conditions for the convergence of the hybrid schemes, and we investigate different types of preconditioners including sparse approximate inverses. Numerical experiments on linear systems arising from the discretization of partial differential equations are presented.

  11. A non-iterative extension of the multivariate random effects meta-analysis.

    PubMed

    Makambi, Kepher H; Seung, Hyunuk

    2015-01-01

    Multivariate methods in meta-analysis are becoming popular and more accepted in biomedical research despite computational issues in some of the techniques. A number of approaches, both iterative and non-iterative, have been proposed, including the multivariate DerSimonian and Laird method by Jackson et al. (2010), which is non-iterative. In this study, we propose an extension of the method by Hartung and Makambi (2002) and Makambi (2001) to multivariate situations. A comparison of the bias and mean square error from a simulation study indicates that, in some circumstances, the proposed approach performs better than the multivariate DerSimonian-Laird approach. An example is presented to demonstrate the application of the proposed approach.

  12. Couple of the Variational Iteration Method and Fractional-Order Legendre Functions Method for Fractional Differential Equations

    PubMed Central

    Song, Junqiang; Leng, Hongze; Lu, Fengshun

    2014-01-01

    We present a new numerical method to get the approximate solutions of fractional differential equations. A new operational matrix of integration for fractional-order Legendre functions (FLFs) is first derived. Then a modified variational iteration formula which can avoid “noise terms” is constructed. Finally a numerical method based on variational iteration method (VIM) and FLFs is developed for fractional differential equations (FDEs). Block-pulse functions (BPFs) are used to calculate the FLFs coefficient matrices of the nonlinear terms. Five examples are discussed to demonstrate the validity and applicability of the technique. PMID:24511303

  13. Fisher's method of scoring in statistical image reconstruction: comparison of Jacobi and Gauss-Seidel iterative schemes.

    PubMed

    Hudson, H M; Ma, J; Green, P

    1994-01-01

    Many algorithms for medical image reconstruction adopt versions of the expectation-maximization (EM) algorithm. In this approach, parameter estimates are obtained which maximize a complete data likelihood or penalized likelihood, in each iteration. Implicitly (and sometimes explicitly) penalized algorithms require smoothing of the current reconstruction in the image domain as part of their iteration scheme. In this paper, we discuss alternatives to EM which adapt Fisher's method of scoring (FS) and other methods for direct maximization of the incomplete data likelihood. Jacobi and Gauss-Seidel methods for non-linear optimization provide efficient algorithms applying FS in tomography. One approach uses smoothed projection data in its iterations. We investigate the convergence of Jacobi and Gauss-Seidel algorithms with clinical tomographic projection data.

  14. Validation and application of auxiliary density perturbation theory and non-iterative approximation to coupled-perturbed Kohn-Sham approach for calculation of dipole-quadrupole polarizability

    NASA Astrophysics Data System (ADS)

    Shedge, Sapana V.; Pal, Sourav; Köster, Andreas M.

    2011-07-01

    Recently, two non-iterative approaches have been proposed to calculate response properties within density functional theory (DFT). These approaches are auxiliary density perturbation theory (ADPT) and the non-iterative approach to the coupled-perturbed Kohn-Sham (NIA-CPKS) method. Though both methods are non-iterative, they use different techniques to obtain the perturbed Kohn-Sham matrix. In this Letter, for the first time, both of these two independent methods have been used for the calculation of dipole-quadrupole polarizabilities. To validate these methods, three tetrahedral molecules, viz., P4, CH4, and adamantane (C10H16), have been used as examples. The comparison with MP2 and CCSD proves the reliability of the methodology.

  15. On the convergence of an iterative formulation of the electromagnetic scattering from an infinite grating of thin wires

    NASA Technical Reports Server (NTRS)

    Brand, J. C.

    1985-01-01

    Contraction theory is applied to an iterative formulation of electromagnetic scattering from periodic structures and a computational method for insuring convergence is developed. A short history of spectral (or k-space) formulation is presented with an emphasis on application to periodic surfaces. The mathematical background for formulating an iterative equation is covered using straightforward single variable examples including an extension to vector spaces. To insure a convergent solution of the iterative equation, a process called the contraction corrector method is developed. Convergence properties of previously presented iterative solutions to one-dimensional problems are examined utilizing contraction theory and the general conditions for achieving a convergent solution are explored. The contraction corrector method is then applied to several scattering problems including an infinite grating of thin wires with the solution data compared to previous works.

  16. Iterative methods for elliptic finite element equations on general meshes

    NASA Technical Reports Server (NTRS)

    Nicolaides, R. A.; Choudhury, Shenaz

    1986-01-01

    Iterative methods for arbitrary mesh discretizations of elliptic partial differential equations are surveyed. The methods discussed are preconditioned conjugate gradients, algebraic multigrid, deflated conjugate gradients, element-by-element techniques, and domain decomposition. Computational results are included.

  17. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jing Yanfei, E-mail: yanfeijing@uestc.edu.c; Huang Tingzhu, E-mail: tzhuang@uestc.edu.c; Duan Yong, E-mail: duanyong@yahoo.c

    This study is mainly focused on iterative solutions with simple diagonal preconditioning to two complex-valued nonsymmetric systems of linear equations arising from a computational chemistry model problem proposed by Sherry Li of NERSC. Numerical experiments show the feasibility of iterative methods to some extent when applied to these problems and reveal the competitiveness of our recently proposed Lanczos biconjugate A-orthonormalization methods relative to other classic and popular iterative methods. The experimental results also indicate that application-specific preconditioners may be required to accelerate convergence.

  18. Multiple Revolution Solutions for the Perturbed Lambert Problem using the Method of Particular Solutions and Picard Iteration

    NASA Astrophysics Data System (ADS)

    Woollands, Robyn M.; Read, Julie L.; Probe, Austin B.; Junkins, John L.

    2017-12-01

    We present a new method for solving the multiple revolution perturbed Lambert problem using the method of particular solutions and modified Chebyshev-Picard iteration. The method of particular solutions differs from the well-known Newton-shooting method in that integration of the state transition matrix (36 additional differential equations) is not required, and instead it makes use of a reference trajectory and a set of n particular solutions. Any numerical integrator can be used for solving two-point boundary problems with the method of particular solutions, however we show that using modified Chebyshev-Picard iteration affords an avenue for increased efficiency that is not available with other step-by-step integrators. We take advantage of the path approximation nature of modified Chebyshev-Picard iteration (nodes iteratively converge to fixed points in space) and utilize a variable fidelity force model for propagating the reference trajectory. Remarkably, we demonstrate that computing the particular solutions with only low fidelity function evaluations greatly increases the efficiency of the algorithm while maintaining machine precision accuracy. Our study reveals that solving the perturbed Lambert's problem using the method of particular solutions with modified Chebyshev-Picard iteration is about an order of magnitude faster compared with the classical shooting method and a tenth-twelfth order Runge-Kutta integrator. It is well known that the solution to Lambert's problem over multiple revolutions is not unique and to ensure that all possible solutions are considered we make use of a reliable preexisting Keplerian Lambert solver to warm start our perturbed algorithm.

  19. Fast projection/backprojection and incremental methods applied to synchrotron light tomographic reconstruction.

    PubMed

    de Lima, Camila; Salomão Helou, Elias

    2018-01-01

    In iterative methods for tomographic image reconstruction, the computational cost of each iteration is dominated by the computation of the (back)projection operator, which takes roughly O(N^3) floating point operations (flops) for N × N pixel images. Furthermore, classical iterative algorithms may take too many iterations to achieve acceptable images, making these techniques impractical for high-resolution images. Techniques have been developed in the literature to reduce the computational cost of the (back)projection operator to O(N^2 log N) flops. Also, incremental algorithms have been devised that reduce by an order of magnitude the number of iterations required to achieve acceptable images. The present paper introduces an incremental algorithm with a cost of O(N^2 log N) flops per iteration and applies it to the reconstruction of very large tomographic images obtained from synchrotron light illuminated data.

  20. MO-DE-207A-07: Filtered Iterative Reconstruction (FIR) Via Proximal Forward-Backward Splitting: A Synergy of Analytical and Iterative Reconstruction Method for CT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gao, H

    Purpose: This work is to develop a general framework, namely the filtered iterative reconstruction (FIR) method, to incorporate the analytical reconstruction (AR) method into the iterative reconstruction (IR) method for enhanced CT image quality. Methods: FIR is formulated as a combination of filtered data fidelity and sparsity regularization, and then solved by a proximal forward-backward splitting (PFBS) algorithm. As a result, the image reconstruction decouples data fidelity and image regularization with a two-step iterative scheme, during which an AR-projection step updates the filtered data fidelity term, while a denoising solver updates the sparsity regularization term. During the AR-projection step, the image is projected to the data domain to form the data residual, and then reconstructed by a certain AR to a residual image, which is in turn weighted together with the previous image iterate to form the next image iterate. Since the eigenvalues of the AR-projection operator are close to unity, PFBS-based FIR has fast convergence. Results: The proposed FIR method is validated in the setting of circular cone-beam CT with AR being FDK and total-variation sparsity regularization, and has improved image quality from both AR and IR. For example, FIR has improved visual assessment and quantitative measurement in terms of both contrast and resolution, and reduced axial and half-fan artifacts. Conclusion: FIR is proposed to incorporate AR into IR, with an efficient image reconstruction algorithm based on PFBS. The CBCT results suggest that FIR synergizes AR and IR with improved image quality and reduced axial and half-fan artifacts. The authors were partially supported by the NSFC (#11405105), the 973 Program (#2015CB856000), and the Shanghai Pujiang Talent Program (#14PJ1404500).

  1. Accelerating the weighted histogram analysis method by direct inversion in the iterative subspace.

    PubMed

    Zhang, Cheng; Lai, Chun-Liang; Pettitt, B Montgomery

    The weighted histogram analysis method (WHAM) for free energy calculations is a valuable tool to produce free energy differences with minimal errors. Given multiple simulations, WHAM obtains from the distribution overlaps the optimal statistical estimator of the density of states, from which the free energy differences can be computed. The WHAM equations are often solved by an iterative procedure. In this work, we use a well-known linear algebra algorithm that allows for more rapid convergence to the solution. We find that the computational complexity of the iterative solution to WHAM and the closely related multiple Bennett acceptance ratio (MBAR) method can be improved by using the method of direct inversion in the iterative subspace. We give examples from a lattice model, a simple liquid, and an aqueous protein solution.

  2. Improved Savitzky-Golay-method-based fluorescence subtraction algorithm for rapid recovery of Raman spectra.

    PubMed

    Chen, Kun; Zhang, Hongyuan; Wei, Haoyun; Li, Yan

    2014-08-20

    In this paper, we propose an improved subtraction algorithm for rapid recovery of Raman spectra that can substantially reduce the computation time. This algorithm is based on an improved Savitzky-Golay (SG) iterative smoothing method, which involves two key novel approaches: (a) the use of the Gauss-Seidel method and (b) the introduction of a relaxation factor into the iterative procedure. By applying a novel successive relaxation (SG-SR) iterative method to the relaxation factor, additional improvement in the convergence speed over the standard Savitzky-Golay procedure is realized. The proposed improved algorithm (the RIA-SG-SR algorithm), which uses SG-SR-based iteration instead of Savitzky-Golay iteration, has been optimized and validated with a mathematically simulated Raman spectrum, as well as experimentally measured Raman spectra from non-biological and biological samples. The method results in a significant reduction in computing cost while yielding consistent rejection of fluorescence and noise for spectra with low signal-to-fluorescence ratios and varied baselines. In the simulation, RIA-SG-SR achieved 1 order of magnitude improvement in iteration number and 2 orders of magnitude improvement in computation time compared with the range-independent background-subtraction algorithm (RIA). Furthermore the computation time of the experimentally measured raw Raman spectrum processing from skin tissue decreased from 6.72 to 0.094 s. In general, the processing of the SG-SR method can be conducted within dozens of milliseconds, which can provide a real-time procedure in practical situations.

  3. Multiscale optical simulation settings: challenging applications handled with an iterative ray-tracing FDTD interface method.

    PubMed

    Leiner, Claude; Nemitz, Wolfgang; Schweitzer, Susanne; Kuna, Ladislav; Wenzl, Franz P; Hartmann, Paul; Satzinger, Valentin; Sommer, Christian

    2016-03-20

    We show that with an appropriate combination of two optical simulation techniques-classical ray-tracing and the finite difference time domain method-an optical device containing multiple diffractive and refractive optical elements can be accurately simulated in an iterative simulation approach. We compare the simulation results with experimental measurements of the device to discuss the applicability and accuracy of our iterative simulation procedure.

  4. Iterative solution of the inverse Cauchy problem for an elliptic equation by the conjugate gradient method

    NASA Astrophysics Data System (ADS)

    Vasil'ev, V. I.; Kardashevsky, A. M.; Popov, V. V.; Prokopev, G. A.

    2017-10-01

    This article presents the results of a computational experiment carried out using a finite-difference method for solving the inverse Cauchy problem for a two-dimensional elliptic equation. The computational algorithm involves an iterative determination of the missing boundary condition from the overdetermination condition using the conjugate gradient method. Results are presented for examples with exact solutions, as well as for cases in which the additional condition is specified with random errors. The results show the high efficiency of the iterative conjugate gradient method for the numerical solution of this inverse problem.

  5. Efficiency trade-offs of steady-state methods using FEM and FDM. [iterative solutions for nonlinear flow equations

    NASA Technical Reports Server (NTRS)

    Gartling, D. K.; Roache, P. J.

    1978-01-01

    The efficiency characteristics of finite element and finite difference approximations for the steady-state solution of the Navier-Stokes equations are examined. The finite element method discussed is a standard Galerkin formulation of the incompressible, steady-state Navier-Stokes equations. The finite difference formulation uses simple centered differences that are O(delta x-squared). Operation counts indicate that a rapidly converging Newton-Raphson-Kantorovitch iteration scheme is generally preferable over a Picard method. A split NOS Picard iterative algorithm for the finite difference method was most efficient.

  6. Continuous analog of multiplicative algebraic reconstruction technique for computed tomography

    NASA Astrophysics Data System (ADS)

    Tateishi, Kiyoko; Yamaguchi, Yusaku; Abou Al-Ola, Omar M.; Kojima, Takeshi; Yoshinaga, Tetsuya

    2016-03-01

    We propose a hybrid dynamical system as a continuous analog to the block-iterative multiplicative algebraic reconstruction technique (BI-MART), which is a well-known iterative image reconstruction algorithm for computed tomography. The hybrid system is described by a switched nonlinear system with a piecewise smooth vector field or differential equation and, for consistent inverse problems, the convergence of non-negatively constrained solutions to a globally stable equilibrium is guaranteed by the Lyapunov theorem. Namely, we can prove theoretically that a weighted Kullback-Leibler divergence measure can be a common Lyapunov function for the switched system. We show that discretizing the differential equation by using the first-order approximation (Euler's method) based on the geometric multiplicative calculus leads to the same iterative formula as the BI-MART, with the scaling parameter playing the role of the time step of the numerical discretization. The present paper is the first to reveal that a kind of iterative image reconstruction algorithm is constructed by the discretization of a continuous-time dynamical system for solving tomographic inverse problems. Iterative algorithms obtained by discretizing the continuous-time system not only with the Euler method but also with lower-order Runge-Kutta methods can be used for image reconstruction. A numerical example showing the characteristics of the discretized iterative methods is presented.
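
    For reference, the discrete multiplicative update that such schemes reduce to has the familiar MART form. The sketch below is a plain (non-block) MART sweep for a generic non-negative consistent system; the relaxation parameter and test data are illustrative assumptions, and the hybrid dynamical system and its Runge-Kutta discretizations are not reproduced.

```python
import numpy as np

def mart(A, b, n_iter=100, relax=1.0, eps=1e-12):
    """Multiplicative ART: each row i rescales the current image x by
    (b_i / (A x)_i) ** (relax * a_ij), keeping the iterate positive."""
    x = np.ones(A.shape[1])
    for _ in range(n_iter):
        for i in range(A.shape[0]):
            ratio = b[i] / max(A[i] @ x, eps)
            x *= ratio ** (relax * A[i])
    return x

# Small consistent test problem with non-negative entries.
A = np.array([[1.0, 1.0, 0.0],
              [0.0, 1.0, 1.0],
              [1.0, 0.0, 1.0]])
x_true = np.array([1.0, 2.0, 3.0])
x_rec = mart(A, A @ x_true)
```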

  7. Solving large mixed linear models using preconditioned conjugate gradient iteration.

    PubMed

    Strandén, I; Lidauer, M

    1999-12-01

    Continuous evaluation of dairy cattle with a random regression test-day model requires a fast solving method and algorithm. A new computing technique feasible in Jacobi and conjugate gradient based iterative methods using iteration on data is presented. In the new computing technique, the calculations in the multiplication of a vector by a matrix were reorganized into three steps instead of the commonly used two steps. The three-step method was implemented in a general mixed linear model program that used preconditioned conjugate gradient iteration. Performance of this program in comparison to other general solving programs was assessed via estimation of breeding values using univariate, multivariate, and random regression test-day models. Central processing unit time per iteration with the new three-step technique was, at best, one-third that needed with the old technique. Performance was best with the test-day model, which was the largest and most complex model used. The new program did well in comparison to other general software. Programs keeping the mixed model equations in random access memory required at least 20% and 435% more time to solve the univariate and multivariate animal models, respectively. Computations with the second-best iteration-on-data program took approximately three and five times longer for the animal and test-day models, respectively, than did the new program. Good performance was due to fast computing time per iteration and quick convergence to the final solutions. Use of preconditioned conjugate gradient based methods in solving large breeding value problems is supported by our findings.
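
    The preconditioned conjugate gradient iteration itself is compact. The sketch below uses a simple diagonal (Jacobi) preconditioner on a dense symmetric positive definite test matrix; the three-step reorganization of the matrix-vector product and the iteration-on-data implementation described in the abstract are specific to the breeding-value software and are not reproduced here.

```python
import numpy as np

def pcg(A, b, M_inv, tol=1e-10, max_iter=1000):
    """Preconditioned conjugate gradients for symmetric positive definite A.
    M_inv maps a residual to the preconditioned residual (here a function)."""
    x = np.zeros_like(b)
    r = b - A @ x
    z = M_inv(r)
    p = z.copy()
    rz = r @ z
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) <= tol * np.linalg.norm(b):
            break
        z = M_inv(r)
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x

# SPD test system with a Jacobi (diagonal) preconditioner.
n = 50
A = np.random.rand(n, n)
A = A @ A.T + n * np.eye(n)
b = np.ones(n)
diag = np.diag(A)
x = pcg(A, b, M_inv=lambda r: r / diag)
```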

  8. A heuristic statistical stopping rule for iterative reconstruction in emission tomography.

    PubMed

    Ben Bouallègue, F; Crouzet, J F; Mariano-Goulart, D

    2013-01-01

    We propose a statistical stopping criterion for iterative reconstruction in emission tomography based on a heuristic statistical description of the reconstruction process. The method was assessed for MLEM reconstruction. Based on Monte-Carlo numerical simulations and using a perfectly modeled system matrix, our method was compared with classical iterative reconstruction followed by low-pass filtering in terms of Euclidian distance to the exact object, noise, and resolution. The stopping criterion was then evaluated with realistic PET data of a Hoffman brain phantom produced using the GATE platform for different count levels. The numerical experiments showed that compared with the classical method, our technique yielded significant improvement of the noise-resolution tradeoff for a wide range of counting statistics compatible with routine clinical settings. When working with realistic data, the stopping rule allowed a qualitatively and quantitatively efficient determination of the optimal image. Our method appears to give a reliable estimation of the optimal stopping point for iterative reconstruction. It should thus be of practical interest as it produces images with similar or better quality than classical post-filtered iterative reconstruction with a mastered computation time.

  9. A Method to Solve Interior and Exterior Camera Calibration Parameters for Image Resection

    NASA Technical Reports Server (NTRS)

    Samtaney, Ravi

    1999-01-01

    An iterative method is presented to solve the internal and external camera calibration parameters, given model target points and their images from one or more camera locations. The direct linear transform formulation was used to obtain a guess for the iterative method, and herein lies one of the strengths of the present method. In all test cases, the method converged to the correct solution. In general, an overdetermined system of nonlinear equations is solved in the least-squares sense. The iterative method presented is based on Newton-Raphson for solving systems of nonlinear algebraic equations. The Jacobian is analytically derived and the pseudo-inverse of the Jacobian is obtained by singular value decomposition.
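
    The core of such an iteration, a Gauss-Newton correction of an overdetermined nonlinear system using an SVD-based pseudo-inverse of the Jacobian, can be sketched generically. The residual function, numerical Jacobian, and curve-fitting test problem below are illustrative stand-ins, not the camera model of the paper.

```python
import numpy as np

def gauss_newton_pinv(residual, x0, tol=1e-10, max_iter=50, h=1e-6):
    """Least-squares solution of residual(x) = 0 (more equations than
    unknowns) using steps dx = -pinv(J) @ r, where the pseudo-inverse is
    computed via singular value decomposition (numpy's default)."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        r = residual(x)
        # forward-difference Jacobian of the residual vector
        J = np.empty((r.size, x.size))
        for j in range(x.size):
            xp = x.copy()
            xp[j] += h
            J[:, j] = (residual(xp) - r) / h
        dx = -np.linalg.pinv(J) @ r
        x += dx
        if np.linalg.norm(dx) < tol:
            break
    return x

# Toy overdetermined problem: fit (a, b) in y = a*exp(b*t) from noise-free samples.
t = np.linspace(0.0, 1.0, 10)
y = 2.0 * np.exp(0.5 * t)
res = lambda p: p[0] * np.exp(p[1] * t) - y
p_hat = gauss_newton_pinv(res, x0=[1.0, 0.0])
```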

  10. Application of Four-Point Newton-EGSOR iteration for the numerical solution of 2D Porous Medium Equations

    NASA Astrophysics Data System (ADS)

    Chew, J. V. L.; Sulaiman, J.

    2017-09-01

    Partial differential equations used to describe nonlinear heat and mass transfer phenomena are difficult to solve. When the exact solution is difficult to obtain, it is necessary to use a numerical procedure such as the finite difference method to solve a particular partial differential equation. In terms of numerical procedure, a method can be considered efficient if it gives an approximate solution within the specified error with the least computational complexity. Throughout this paper, the two-dimensional Porous Medium Equation (2D PME) is discretized using an implicit finite difference scheme to construct the corresponding approximation equation. This approximation equation yields a large and sparse nonlinear system. Using the Newton method to linearize the nonlinear system, this paper deals with the application of the Four-Point Newton-EGSOR (4NEGSOR) iterative method for solving 2D PMEs. In addition, the efficiency of the 4NEGSOR iterative method is studied by solving three example problems. For comparative analysis, the Newton-Gauss-Seidel (NGS) and Newton-SOR (NSOR) iterative methods are also considered. The numerical findings show that the 4NEGSOR method is superior to the NGS and NSOR methods in terms of the number of iterations needed for convergence, the computation time, and the maximum absolute errors produced.

  11. Solving coupled groundwater flow systems using a Jacobian Free Newton Krylov method

    NASA Astrophysics Data System (ADS)

    Mehl, S.

    2012-12-01

    Jacobian-Free Newton-Krylov (JFNK) methods can have several advantages for simulating coupled groundwater flow processes versus conventional methods. Conventional methods are defined here as those based on an iterative coupling (rather than a direct coupling) and/or that use Picard iteration rather than Newton iteration. In an iterative coupling, the systems are solved separately, coupling information is updated and exchanged between the systems, and the systems are re-solved, etc., until convergence is achieved. Trusted simulators, such as Modflow, are based on these conventional methods of coupling and work well in many cases. An advantage of the JFNK method is that it only requires calculation of the residual vector of the system of equations and thus can make use of existing simulators regardless of how the equations are formulated. This opens the possibility of coupling different process models via augmentation of a residual vector by each separate process, which often requires substantially fewer changes to the existing source code than if the processes were directly coupled. However, appropriate perturbation sizes need to be determined for accurate approximations of the Fréchet derivative, which is not always straightforward. Furthermore, preconditioning is necessary for reasonable convergence of the linear solution required at each Krylov iteration. Existing preconditioners can be used and applied separately to each process, which maximizes use of existing code and robust preconditioners. In this work, iteratively coupled parent-child local grid refinement models of groundwater flow and groundwater flow models with nonlinear exchanges to streams are used to demonstrate the utility of the JFNK approach for Modflow models. Use of incomplete Cholesky preconditioners with various levels of fill is examined on a suite of nonlinear and linear models to analyze the effect of the preconditioner. Comparisons of convergence and computer simulation time are made between conventional iteratively coupled methods based on Picard iteration and those formulated with JFNK to gain insights on the types of nonlinearities and system features that make one approach advantageous. Results indicate that nonlinearities associated with stream/aquifer exchanges are more problematic than those resulting from unconfined flow.
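
    The essence of a JFNK iteration, Newton's method in which each linear solve is a Krylov iteration driven only by residual evaluations, is available off the shelf. The sketch below applies SciPy's newton_krylov to a toy residual mimicking two diffusion-type systems coupled through an exchange term; the operators and coupling coefficient are illustrative assumptions, not a Modflow configuration.

```python
import numpy as np
from scipy.optimize import newton_krylov

n = 30
L = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)   # 1D diffusion operator
f1, f2 = np.ones(n), 0.5 * np.ones(n)
c = 0.8                                                    # head-dependent exchange coefficient

def residual(u):
    """Augmented residual of two coupled systems; JFNK only needs this function."""
    h1, h2 = u[:n], u[n:]
    exchange = c * (h1 - h2)            # flux from system 1 to system 2
    r1 = L @ h1 + exchange - f1
    r2 = L @ h2 - exchange - f2
    return np.concatenate([r1, r2])

u0 = np.zeros(2 * n)
u = newton_krylov(residual, u0, method="gmres", f_tol=1e-10)
```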

  12. Modules and methods for all photonic computing

    DOEpatents

    Schultz, David R.; Ma, Chao Hung

    2001-01-01

    A method for all photonic computing, comprising the steps of: encoding a first optical/electro-optical element with a two dimensional mathematical function representing input data; illuminating the first optical/electro-optical element with a collimated beam of light; illuminating a second optical/electro-optical element with light from the first optical/electro-optical element, the second optical/electro-optical element having a characteristic response corresponding to an iterative algorithm useful for solving a partial differential equation; iteratively recirculating the signal through the second optical/electro-optical element with light from the second optical/electro-optical element for a predetermined number of iterations; and, after the predetermined number of iterations, optically and/or electro-optically collecting output data representing an iterative optical solution from the second optical/electro-optical element.

  13. Convergence Results on Iteration Algorithms to Linear Systems

    PubMed Central

    Wang, Zhuande; Yang, Chuansheng; Yuan, Yubo

    2014-01-01

    In order to solve large scale linear systems, backward and Jacobi iteration algorithms are employed. Convergence is the most important issue. In this paper, a unified backward iterative matrix is proposed, and it is shown that some well-known iterative algorithms can be deduced from it. The most important result is that the convergence results are proved. Firstly, the spectral radius of the Jacobi iterative matrix is positive and that of the backward iterative matrix is strongly positive (larger than a positive constant). Secondly, the two iterations have the same convergence behavior (convergence or divergence simultaneously). Finally, numerical experiments show that the proposed algorithms are correct and have the merit of backward methods. PMID:24991640
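
    The convergence criterion discussed here (spectral radius of the iteration matrix below one) can be checked numerically for a given system. The sketch below forms the classical Jacobi iteration matrix and a backward (lower-triangular sweep) iteration matrix for a small test matrix and compares their spectral radii; the test matrix is an illustrative assumption, and the triangular splitting used here may differ from the paper's unified backward iterative matrix.

```python
import numpy as np

def iteration_matrices(A):
    """Return the Jacobi and backward (lower-triangular sweep) iteration
    matrices for the splitting A = D - L - U, i.e. inv(D)(L+U) and inv(D-L)U."""
    D = np.diag(np.diag(A))
    L = -np.tril(A, k=-1)
    U = -np.triu(A, k=1)
    B_jacobi = np.linalg.solve(D, L + U)
    B_backward = np.linalg.solve(D - L, U)
    return B_jacobi, B_backward

def spectral_radius(B):
    return np.max(np.abs(np.linalg.eigvals(B)))

# Diagonally dominant test matrix: both iterations converge.
A = np.array([[4.0, -1.0, 0.0],
              [-1.0, 4.0, -1.0],
              [0.0, -1.0, 4.0]])
B_j, B_b = iteration_matrices(A)
print(spectral_radius(B_j), spectral_radius(B_b))   # both < 1 here
```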

  14. Value Iteration Adaptive Dynamic Programming for Optimal Control of Discrete-Time Nonlinear Systems.

    PubMed

    Wei, Qinglai; Liu, Derong; Lin, Hanquan

    2016-03-01

    In this paper, a value iteration adaptive dynamic programming (ADP) algorithm is developed to solve infinite horizon undiscounted optimal control problems for discrete-time nonlinear systems. The present value iteration ADP algorithm permits an arbitrary positive semi-definite function to initialize the algorithm. A novel convergence analysis is developed to guarantee that the iterative value function converges to the optimal performance index function. Initialized by different initial functions, it is proven that the iterative value function will be monotonically nonincreasing, monotonically nondecreasing, or nonmonotonic and will converge to the optimum. In this paper, for the first time, the admissibility properties of the iterative control laws are developed for value iteration algorithms. It is emphasized that new termination criteria are established to guarantee the effectiveness of the iterative control laws. Neural networks are used to approximate the iterative value function and compute the iterative control law, respectively, for facilitating the implementation of the iterative ADP algorithm. Finally, two simulation examples are given to illustrate the performance of the present method.

  15. The two-phase method for finding a great number of eigenpairs of the symmetric or weakly non-symmetric large eigenvalue problems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dul, F.A.; Arczewski, K.

    1994-03-01

    Although it has been stated that "an attempt to solve (very large problems) by subspace iterations seems futile," we will show that the statement is not true, especially for extremely large eigenproblems. In this paper a new two-phase subspace iteration/Rayleigh quotient/conjugate gradient method for generalized, large, symmetric eigenproblems Ax = λBx is presented. It has the ability of solving extremely large eigenproblems, N = 216,000, for example, and finding a large number of leftmost or rightmost eigenpairs, up to 1000 or more. Multiple eigenpairs, even those with multiplicity 100, can be easily found. The use of the proposed method for solving big full eigenproblems (N ≈ 10^3), as well as large weakly non-symmetric eigenproblems, has also been considered. The proposed method is fully iterative; thus the factorization of matrices is avoided. The key idea consists in joining two methods: subspace and Rayleigh quotient iterations. The systems of indefinite and almost singular linear equations (A - σB)x = By are solved by various iterative conjugate gradient methods, which can be used without danger of breaking down due to a property that may be called "self-correction towards the eigenvector," discovered recently by us. The use of various preconditioners (SSOR and IC) has also been considered. The main features of the proposed method have been analyzed in detail. Comparisons with other methods, such as accelerated subspace iteration, Lanczos, Davidson, TLIME, TRACMN, and SRQMCG, are presented. The results of numerical tests for various physical problems (acoustics, vibrations of structures, quantum chemistry) are presented as well. 40 refs., 12 figs., 2 tabs.
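
    For orientation, the basic (unaccelerated) subspace iteration with a Rayleigh-Ritz projection for a standard symmetric eigenproblem is sketched below; the shift-and-invert solves, Rayleigh quotient iteration, and conjugate gradient inner solver that make the two-phase method scale to extremely large problems are not reproduced, and the test matrix is an illustrative assumption.

```python
import numpy as np

def subspace_iteration(A, k, n_iter=200, tol=1e-8):
    """Approximate the k largest-magnitude eigenpairs of symmetric A by
    repeated multiplication, orthonormalization, and Rayleigh-Ritz projection."""
    n = A.shape[0]
    rng = np.random.default_rng(0)
    Q, _ = np.linalg.qr(rng.standard_normal((n, k)))
    theta = np.zeros(k)
    for _ in range(n_iter):
        Z = A @ Q
        Q, _ = np.linalg.qr(Z)                  # orthonormal basis of the new subspace
        H = Q.T @ A @ Q                         # Rayleigh-Ritz projection
        theta, S = np.linalg.eigh(H)
        Q = Q @ S                               # rotate basis to the Ritz vectors
        if np.linalg.norm(A @ Q - Q @ np.diag(theta)) < tol:
            break
    return theta, Q

# Symmetric test matrix; after n_iter sweeps the Ritz pairs approximate
# the largest eigenpairs (accuracy depends on the eigenvalue separation).
A = np.diag(np.arange(1.0, 101.0)) + 0.01 * np.ones((100, 100))
eigvals, eigvecs = subspace_iteration(A, k=5)
```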

  16. Use of direct and iterative solvers for estimation of SNP effects in genome-wide selection

    PubMed Central

    2010-01-01

    The aim of this study was to compare iterative and direct solvers for estimation of marker effects in genomic selection. One iterative and two direct methods were used: Gauss-Seidel with Residual Update, Cholesky Decomposition and Gentleman-Givens rotations. To represent different scenarios with respect to the number of markers and of genotyped animals, a simulated data set divided into 25 subsets was used. The number of markers ranged from 1,200 to 5,925 and the number of animals ranged from 1,200 to 5,865. The methods were also applied to real data comprising 3,081 individuals genotyped for 45,181 SNPs. Results from simulated data showed that the iterative solver was substantially faster than the direct methods for larger numbers of markers. Use of a direct solver may allow for computing (co)variances of SNP effects. When applied to real data, performance of the iterative method varied substantially, depending on the level of ill-conditioning of the coefficient matrix. From the results with real data, Gentleman-Givens rotations would be the method of choice in this particular application, as it provided an exact solution within a fairly reasonable time frame (less than two hours). It would indeed be the preferred method whenever computer resources allow its use. PMID:21637627

  17. Shading correction assisted iterative cone-beam CT reconstruction

    NASA Astrophysics Data System (ADS)

    Yang, Chunlin; Wu, Pengwei; Gong, Shutao; Wang, Jing; Lyu, Qihui; Tang, Xiangyang; Niu, Tianye

    2017-11-01

    Recent advances in total variation (TV) technology enable accurate CT image reconstruction from highly under-sampled and noisy projection data. The standard iterative reconstruction algorithms, which work well in conventional CT imaging, fail to perform as expected in cone beam CT (CBCT) applications, wherein the non-ideal physics issues, including scatter and beam hardening, are more severe. These physics issues result in large areas of shading artifacts and degrade the piecewise constant property assumed of reconstructed images. To overcome this obstacle, we incorporate a shading correction scheme into low-dose CBCT reconstruction and propose a clinically acceptable and stable three-dimensional iterative reconstruction method referred to as shading correction assisted iterative reconstruction. In the proposed method, we modify the TV regularization term by adding a shading compensation image to the reconstructed image to compensate for the shading artifacts while leaving the data fidelity term intact. This compensation image is generated empirically, using image segmentation and low-pass filtering, and is updated in the iterative process whenever necessary. When the compensation image is determined, the objective function is minimized using the fast iterative shrinkage-thresholding algorithm accelerated on a graphics processing unit. The proposed method is evaluated using CBCT projection data of the Catphan© 600 phantom and two pelvis patients. Compared with iterative reconstruction without shading correction, the proposed method reduces the overall CT number error from around 200 HU to around 25 HU and improves the spatial uniformity by 20 percent, given the same number of sparsely sampled projections. A clinically acceptable and stable iterative reconstruction algorithm for CBCT is proposed in this paper. Differing from existing algorithms, this algorithm incorporates a shading correction scheme into low-dose CBCT reconstruction and achieves a more stable optimization path and a more clinically acceptable reconstructed image. The proposed method does not rely on prior information and is thus practically attractive for low-dose CBCT imaging applications in the clinic.

  18. Development of an iterative reconstruction method to overcome 2D detector low resolution limitations in MLC leaf position error detection for 3D dose verification in IMRT.

    PubMed

    Visser, R; Godart, J; Wauben, D J L; Langendijk, J A; Van't Veld, A A; Korevaar, E W

    2016-05-21

    The objective of this study was to introduce a new iterative method to reconstruct multi-leaf collimator (MLC) positions based on low resolution ionization detector array measurements and to evaluate its error detection performance. The iterative reconstruction method consists of a fluence model, a detector model and an optimizer. Expected detector response was calculated using a radiotherapy treatment plan in combination with the fluence model and detector model. MLC leaf positions were reconstructed by minimizing differences between expected and measured detector response. The iterative reconstruction method was evaluated for an Elekta SLi with 10.0 mm MLC leaves in combination with the COMPASS system and the MatriXX Evolution (IBA Dosimetry) detector with a spacing of 7.62 mm. The detector was positioned in such a way that each leaf pair of the MLC was aligned with one row of ionization chambers. Known leaf displacements were introduced in various field geometries ranging from -10.0 mm to 10.0 mm. Error detection performance was tested for MLC leaf position dependency relative to the detector position, gantry angle dependency, monitor unit dependency, and for ten clinical intensity modulated radiotherapy (IMRT) treatment beams. For one clinical head and neck IMRT treatment beam, the influence of the iterative reconstruction method on existing 3D dose reconstruction artifacts was evaluated. The described iterative reconstruction method was capable of individual MLC leaf position reconstruction with millimeter accuracy, independent of the relative detector position, within the range of clinically applied MUs for IMRT. Dose reconstruction artifacts in a clinical IMRT treatment beam were considerably reduced compared to the current dose verification procedure. The iterative reconstruction method allows high accuracy 3D dose verification by including actual MLC leaf positions reconstructed from low resolution 2D measurements.

  19. Evaluation of integration methods for hybrid simulation of complex structural systems through collapse

    NASA Astrophysics Data System (ADS)

    Del Carpio R., Maikol; Hashemi, M. Javad; Mosqueda, Gilberto

    2017-10-01

    This study examines the performance of integration methods for hybrid simulation of large and complex structural systems in the context of structural collapse due to seismic excitations. The target application is not necessarily real-time testing, but rather models that involve large-scale physical sub-structures and highly nonlinear numerical models. Four case studies are presented and discussed. In the first case study, the accuracy of integration schemes, including two widely used methods, namely a modified version of the implicit Newmark method with a fixed number of iterations (iterative) and the operator-splitting method (non-iterative), is examined through pure numerical simulations. The second case study presents the results of 10 hybrid simulations repeated with the two aforementioned integration methods considering various time steps and fixed numbers of iterations for the iterative integration method. The physical sub-structure in these tests consists of a single-degree-of-freedom (SDOF) cantilever column with replaceable steel coupons that provides repeatable highly nonlinear behavior including fracture-type strength and stiffness degradations. In case study three, the implicit Newmark method with a fixed number of iterations is applied for hybrid simulations of a 1:2 scale steel moment frame that includes a relatively complex nonlinear numerical substructure. Lastly, a more complex numerical substructure is considered by constructing a nonlinear computational model of a moment frame coupled to a hybrid model of a 1:2 scale steel gravity frame. The last two case studies are conducted on the same prototype structure, and the selection of time steps and fixed numbers of iterations is closely examined in pre-test simulations. The generated unbalanced forces are used as an index to track the equilibrium error and predict the accuracy and stability of the simulations.

  20. Calibration and Data Analysis of the MC-130 Air Balance

    NASA Technical Reports Server (NTRS)

    Booth, Dennis; Ulbrich, N.

    2012-01-01

    Design, calibration, calibration analysis, and intended use of the MC-130 air balance are discussed. The MC-130 balance is an 8.0 inch diameter force balance that has two separate internal air flow systems and one external bellows system. The manual calibration of the balance consisted of a total of 1854 data points with both unpressurized and pressurized air flowing through the balance. A subset of 1160 data points was chosen for the calibration data analysis. The regression analysis of the subset was performed using two fundamentally different analysis approaches. First, the data analysis was performed using a recently developed extension of the Iterative Method. This approach fits gage outputs as a function of both applied balance loads and bellows pressures while still allowing the application of the iteration scheme that is used with the Iterative Method. Then, for comparison, the axial force was also analyzed using the Non-Iterative Method. This alternate approach directly fits loads as a function of measured gage outputs and bellows pressures and does not require a load iteration. The regression models used by both the extended Iterative and Non-Iterative Method were constructed such that they met a set of widely accepted statistical quality requirements. These requirements lead to reliable regression models and prevent overfitting of data because they ensure that no hidden near-linear dependencies between regression model terms exist and that only statistically significant terms are included. Finally, a comparison of the axial force residuals was performed. Overall, axial force estimates obtained from both methods show excellent agreement as the differences of the standard deviation of the axial force residuals are on the order of 0.001 % of the axial force capacity.

  1. Application of the perturbation iteration method to boundary layer type problems.

    PubMed

    Pakdemirli, Mehmet

    2016-01-01

    The recently developed perturbation iteration method is applied to boundary layer type singular problems for the first time. As a preliminary work on the topic, the simplest algorithm of PIA(1,1) is employed in the calculations. Linear and nonlinear problems are solved to outline the basic ideas of the new solution technique. The inner and outer solutions are determined with the iteration algorithm and matched to construct a composite expansion valid within all parts of the domain. The solutions are contrasted with the available exact or numerical solutions. It is shown that the perturbation-iteration algorithm can be effectively used for solving boundary layer type problems.

  2. Spotting the difference in molecular dynamics simulations of biomolecules

    NASA Astrophysics Data System (ADS)

    Sakuraba, Shun; Kono, Hidetoshi

    2016-08-01

    Comparing two trajectories from molecular simulations conducted under different conditions is not a trivial task. In this study, we apply a method called Linear Discriminant Analysis with ITERative procedure (LDA-ITER) to compare two molecular simulation results by finding the appropriate projection vectors. Because LDA-ITER attempts to determine a projection such that the projections of the two trajectories do not overlap, the comparison does not suffer from a strong anisotropy, which is an issue in protein dynamics. LDA-ITER is applied to two test cases: the T4 lysozyme protein simulation with or without a point mutation and the allosteric protein PDZ2 domain of hPTP1E with or without a ligand. The projection determined by the method agrees with the experimental data and previous simulations. The proposed procedure, which complements existing methods, is a versatile analytical method that is specialized to find the "difference" between two trajectories.

  3. Improved Convergence and Robustness of USM3D Solutions on Mixed-Element Grids

    NASA Technical Reports Server (NTRS)

    Pandya, Mohagna J.; Diskin, Boris; Thomas, James L.; Frink, Neal T.

    2016-01-01

    Several improvements to the mixed-element USM3D discretization and defect-correction schemes have been made. A new methodology for nonlinear iterations, called the Hierarchical Adaptive Nonlinear Iteration Method, has been developed and implemented. The Hierarchical Adaptive Nonlinear Iteration Method provides two additional hierarchies around a simple and approximate preconditioner of USM3D. The hierarchies are a matrix-free linear solver for the exact linearization of Reynolds-averaged Navier-Stokes equations and a nonlinear control of the solution update. Two variants of the Hierarchical Adaptive Nonlinear Iteration Method are assessed on four benchmark cases, namely, a zero-pressure-gradient flat plate, a bump-in-channel configuration, the NACA 0012 airfoil, and a NASA Common Research Model configuration. The new methodology provides a convergence acceleration factor of 1.4 to 13 over the preconditioner-alone method representing the baseline solver technology.

  5. Investigation of iterative image reconstruction in three-dimensional optoacoustic tomography

    PubMed Central

    Wang, Kun; Su, Richard; Oraevsky, Alexander A; Anastasio, Mark A

    2012-01-01

    Iterative image reconstruction algorithms for optoacoustic tomography (OAT), also known as photoacoustic tomography, have the ability to improve image quality over analytic algorithms due to their ability to incorporate accurate models of the imaging physics, instrument response, and measurement noise. However, to date, there have been few reported attempts to employ advanced iterative image reconstruction algorithms for improving image quality in three-dimensional (3D) OAT. In this work, we implement and investigate two iterative image reconstruction methods for use with a 3D OAT small animal imager: namely, a penalized least-squares (PLS) method employing a quadratic smoothness penalty and a PLS method employing a total variation norm penalty. The reconstruction algorithms employ accurate models of the ultrasonic transducer impulse responses. Experimental data sets are employed to compare the performances of the iterative reconstruction algorithms to that of a 3D filtered backprojection (FBP) algorithm. By use of quantitative measures of image quality, we demonstrate that the iterative reconstruction algorithms can mitigate image artifacts and preserve spatial resolution more effectively than FBP algorithms. These features suggest that the use of advanced image reconstruction algorithms can improve the effectiveness of 3D OAT while reducing the amount of data required for biomedical applications. PMID:22864062

  6. Efficient Iterative Methods Applied to the Solution of Transonic Flows

    NASA Astrophysics Data System (ADS)

    Wissink, Andrew M.; Lyrintzis, Anastasios S.; Chronopoulos, Anthony T.

    1996-02-01

    We investigate the use of an inexact Newton's method to solve the potential equations in the transonic regime. As a test case, we solve the two-dimensional steady transonic small disturbance equation. Approximate factorization/ADI techniques have traditionally been employed for implicit solutions of this nonlinear equation. Instead, we apply Newton's method using an exact analytical determination of the Jacobian with preconditioned conjugate gradient-like iterative solvers for solution of the linear systems in each Newton iteration. Two iterative solvers are tested: a block s-step version of the classical Orthomin(k) algorithm called orthogonal s-step Orthomin (OSOmin) and the well-known GMRES method. The preconditioner is a vectorizable and parallelizable version of incomplete LU (ILU) factorization. Efficiency of the Newton-iterative method on vector and parallel computer architectures is the main issue addressed. In vectorized tests on a single processor of the Cray C-90, the performance of Newton-OSOmin is superior to Newton-GMRES and a more traditional monotone AF/ADI method (MAF) for a variety of transonic Mach numbers and mesh sizes. Newton-GMRES is superior to MAF for some cases. The parallel performance of the Newton method is also found to be very good on multiple processors of the Cray C-90 and on the massively parallel Thinking Machines CM-5, where very fast execution rates (up to 9 Gflops) are found for large problems.
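
    A hedged sketch of the Newton-iterative idea is given below. It solves a generic one-dimensional nonlinear diffusion model problem (an assumption made for illustration; it is not the transonic small disturbance equation), with each Newton correction computed by SciPy's GMRES preconditioned by an incomplete LU factorization of the Jacobian.

        import numpy as np
        import scipy.sparse as sp
        import scipy.sparse.linalg as spla

        n = 200
        h = 1.0 / (n + 1)

        def residual(u):
            # F(u) = -Laplacian(u) + u**3 - 1 with homogeneous Dirichlet boundaries
            lap = (np.roll(u, -1) - 2.0 * u + np.roll(u, 1)) / h**2
            lap[0] = (u[1] - 2.0 * u[0]) / h**2
            lap[-1] = (u[-2] - 2.0 * u[-1]) / h**2
            return -lap + u**3 - 1.0

        def jacobian(u):
            main = 2.0 / h**2 + 3.0 * u**2
            off = -np.ones(n - 1) / h**2
            return sp.diags([off, main, off], [-1, 0, 1], format="csc")

        u = np.zeros(n)
        for k in range(20):
            F = residual(u)
            if np.linalg.norm(F) < 1e-10:
                break
            J = jacobian(u)
            ilu = spla.spilu(J)                                   # incomplete LU preconditioner
            M = spla.LinearOperator((n, n), matvec=ilu.solve, dtype=float)
            du, info = spla.gmres(J, -F, M=M)                     # inner Krylov solve
            u += du
        print("Newton steps:", k, "final residual norm:", np.linalg.norm(residual(u)))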

  7. On a new iterative method for solving linear systems and comparison results

    NASA Astrophysics Data System (ADS)

    Jing, Yan-Fei; Huang, Ting-Zhu

    2008-10-01

    In Ujevic [A new iterative method for solving linear systems, Appl. Math. Comput. 179 (2006) 725-730], the author obtained a new iterative method for solving linear systems, which can be considered as a modification of the Gauss-Seidel method. In this paper, we show that this is a special case from the point of view of projection techniques, and a different approach is established which is shown, both theoretically and numerically, to be at least as good as, and in most cases better than, Ujevic's. As the presented numerical examples show, in most cases the convergence rate is more than one and a half times that of Ujevic's method.

  8. A Novel Iterative Scheme for the Very Fast and Accurate Solution of Non-LTE Radiative Transfer Problems

    NASA Astrophysics Data System (ADS)

    Trujillo Bueno, J.; Fabiani Bendicho, P.

    1995-12-01

    Iterative schemes based on Gauss-Seidel (G-S) and optimal successive over-relaxation (SOR) iteration are shown to provide a dramatic increase in the speed with which non-LTE radiation transfer (RT) problems can be solved. The convergence rates of these new RT methods are identical to those of upper triangular nonlocal approximate operator splitting techniques, but the computing time per iteration and the memory requirements are similar to those of a local operator splitting method. In addition to these properties, both methods are particularly suitable for multidimensional geometry, since they neither require the actual construction of nonlocal approximate operators nor the application of any matrix inversion procedure. Compared with the currently used Jacobi technique, which is based on the optimal local approximate operator (see Olson, Auer, & Buchler 1986), the G-S method presented here is faster by a factor of 2. It gives excellent smoothing of the high-frequency error components, which makes it the iterative scheme of choice for multigrid radiative transfer. This G-S method can also be suitably combined with standard acceleration techniques to achieve even higher performance. Although the convergence rate of the optimal SOR scheme developed here for solving non-LTE RT problems is much higher than G-S, the computing time per iteration is also minimal, i.e., virtually identical to that of a local operator splitting method. While the conventional optimal local operator scheme provides the converged solution after a total CPU time (measured in arbitrary units) approximately equal to the number n of points per decade of optical depth, the time needed by this new method based on the optimal SOR iterations is only √n/2√2. This method is competitive with those that result from combining the above-mentioned Jacobi and G-S schemes with the best acceleration techniques. Contrary to what happens with the local operator splitting strategy currently in use, these novel methods remain effective even under extreme non-LTE conditions in very fine grids.

  9. Non-homogeneous updates for the iterative coordinate descent algorithm

    NASA Astrophysics Data System (ADS)

    Yu, Zhou; Thibault, Jean-Baptiste; Bouman, Charles A.; Sauer, Ken D.; Hsieh, Jiang

    2007-02-01

    Statistical reconstruction methods show great promise for improving resolution, and reducing noise and artifacts in helical X-ray CT. In fact, statistical reconstruction seems to be particularly valuable in maintaining reconstructed image quality when the dosage is low and the noise is therefore high. However, high computational cost and long reconstruction times remain as a barrier to the use of statistical reconstruction in practical applications. Among the various iterative methods that have been studied for statistical reconstruction, iterative coordinate descent (ICD) has been found to have relatively low overall computational requirements due to its fast convergence. This paper presents a novel method for further speeding the convergence of the ICD algorithm, and therefore reducing the overall reconstruction time for statistical reconstruction. The method, which we call nonhomogeneous iterative coordinate descent (NH-ICD) uses spatially non-homogeneous updates to speed convergence by focusing computation where it is most needed. Experimental results with real data indicate that the method speeds reconstruction by roughly a factor of two for typical 3D multi-slice geometries.

  10. Matrix completion-based reconstruction for undersampled magnetic resonance fingerprinting data.

    PubMed

    Doneva, Mariya; Amthor, Thomas; Koken, Peter; Sommer, Karsten; Börnert, Peter

    2017-09-01

    An iterative reconstruction method for undersampled magnetic resonance fingerprinting data is presented. The method performs the reconstruction entirely in k-space and is related to low rank matrix completion methods. A low dimensional data subspace is estimated from a small number of k-space locations fully sampled in the temporal direction and used to reconstruct the missing k-space samples before MRF dictionary matching. Performing the iterations in k-space eliminates the need for applying a forward and an inverse Fourier transform in each iteration required in previously proposed iterative reconstruction methods for undersampled MRF data. A projection onto the low dimensional data subspace is performed as a matrix multiplication instead of a singular value thresholding typically used in low rank matrix completion, further reducing the computational complexity of the reconstruction. The method is theoretically described and validated in phantom and in-vivo experiments. The quality of the parameter maps can be significantly improved compared to direct matching on undersampled data. Copyright © 2017 Elsevier Inc. All rights reserved.
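
    The k-space subspace projection described above can be mimicked on synthetic data. The sketch below is purely illustrative (a random, exactly low-rank matrix stands in for the MRF k-space data, and the sampling pattern is made up): it estimates a temporal subspace from a few fully sampled rows and then alternates a data-consistency step with a projection onto that subspace, applied as a plain matrix multiplication rather than a singular value thresholding.

        import numpy as np

        rng = np.random.default_rng(1)
        n_loc, n_t, rank = 300, 120, 5
        truth = rng.standard_normal((n_loc, rank)) @ rng.standard_normal((rank, n_t))

        mask = rng.random((n_loc, n_t)) < 0.3       # undersampling pattern (hypothetical)
        calib = np.arange(20)                       # rows fully sampled along the time axis
        mask[calib, :] = True
        measured = np.where(mask, truth, 0.0)

        # Estimate the low-dimensional temporal subspace from the calibration rows.
        _, _, Vt = np.linalg.svd(measured[calib], full_matrices=False)
        V_r = Vt[:rank].T                           # (n_t, rank) temporal basis
        P = V_r @ V_r.T                             # subspace projection as a matrix product

        X = measured.copy()
        for _ in range(200):
            X = X @ P                               # project every time series onto the subspace
            X[mask] = measured[mask]                # re-impose the measured samples
        err = np.linalg.norm(X @ P - truth) / np.linalg.norm(truth)
        print("relative reconstruction error:", err)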

  11. Development of New Methods for the Solution of Nonlinear Differential Equations by the Method of Lie Series and Extension to New Fields.

    DTIC Science & Technology

    perturbation formulas of Groebner (1960) and Alexseev (1961) for the solution of ordinary differential equations. These formulas are generalized and...iteration methods are given, which include the methods of Picard, Groebner-Knapp, Poincare, Chen, as special cases. Chapter 3 generalizes an iterated

  12. Convergence characteristics of nonlinear vortex-lattice methods for configuration aerodynamics

    NASA Technical Reports Server (NTRS)

    Seginer, A.; Rusak, Z.; Wasserstrom, E.

    1983-01-01

    Nonlinear panel methods have no proof for the existence and uniqueness of their solutions. The convergence characteristics of an iterative, nonlinear vortex-lattice method are, therefore, carefully investigated. The effects of several parameters, including (1) the surface-paneling method, (2) an integration method of the trajectories of the wake vortices, (3) vortex-grid refinement, and (4) the initial conditions for the first iteration on the computed aerodynamic coefficients and on the flow-field details are presented. The convergence of the iterative-solution procedure is usually rapid. The solution converges with grid refinement to a constant value, but the final value is not unique and varies with the wing surface-paneling and wake-discretization methods within some range in the vicinity of the experimental result.

  13. Iterative load-balancing method with multigrid level relaxation for particle simulation with short-range interactions

    NASA Astrophysics Data System (ADS)

    Furuichi, Mikito; Nishiura, Daisuke

    2017-10-01

    We developed dynamic load-balancing algorithms for Particle Simulation Methods (PSM) involving short-range interactions, such as Smoothed Particle Hydrodynamics (SPH), Moving Particle Semi-implicit method (MPS), and Discrete Element method (DEM). These are needed to handle billions of particles modeled in large distributed-memory computer systems. Our method utilizes flexible orthogonal domain decomposition, allowing the sub-domain boundaries in the column to be different for each row. The imbalances in the execution time between parallel logical processes are treated as a nonlinear residual. Load-balancing is achieved by minimizing the residual within the framework of an iterative nonlinear solver, combined with a multigrid technique in the local smoother. Our iterative method is suitable for adjusting the sub-domain frequently by monitoring the performance of each computational process because it is computationally cheaper in terms of communication and memory costs than non-iterative methods. Numerical tests demonstrated the ability of our approach to handle workload imbalances arising from a non-uniform particle distribution, differences in particle types, or heterogeneous computer architecture which was difficult with previously proposed methods. We analyzed the parallel efficiency and scalability of our method using Earth simulator and K-computer supercomputer systems.

  14. Solving the multiple-set split equality common fixed-point problem of firmly quasi-nonexpansive operators.

    PubMed

    Zhao, Jing; Zong, Haili

    2018-01-01

    In this paper, we propose parallel and cyclic iterative algorithms for solving the multiple-set split equality common fixed-point problem of firmly quasi-nonexpansive operators. We also combine the process of cyclic and parallel iterative methods and propose two mixed iterative algorithms. Our several algorithms do not need any prior information about the operator norms. Under mild assumptions, we prove weak convergence of the proposed iterative sequences in Hilbert spaces. As applications, we obtain several iterative algorithms to solve the multiple-set split equality problem.
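
    A toy numerical illustration of the cyclic versus parallel idea is sketched below, under strong simplifying assumptions: the firmly (quasi-)nonexpansive operators are taken to be two metric projections in the plane (onto a ball and a half-space), and the common fixed-point set is simply their intersection. The cyclic scheme applies the operators in turn, while the parallel scheme averages them; neither needs any operator norm.

        import numpy as np

        def proj_ball(x, center, radius):            # projection onto a closed ball
            d = x - center
            dist = np.linalg.norm(d)
            return x if dist <= radius else center + radius * d / dist

        def proj_halfspace(x, a, b):                 # projection onto {y : a.y <= b}
            viol = a @ x - b
            return x if viol <= 0.0 else x - viol * a / (a @ a)

        ops = [lambda x: proj_ball(x, np.array([0.0, 0.0]), 2.0),
               lambda x: proj_halfspace(x, np.array([1.0, 1.0]), 1.0)]

        def cyclic(x, iters=200):
            for k in range(iters):
                x = ops[k % len(ops)](x)             # apply the operators one after another
            return x

        def parallel(x, iters=200):
            for _ in range(iters):
                x = np.mean([T(x) for T in ops], axis=0)   # averaged (parallel) step
            return x

        x0 = np.array([5.0, 4.0])
        print("cyclic:  ", cyclic(x0))               # both land in the intersection
        print("parallel:", parallel(x0))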

  15. The preconditioned Gauss-Seidel method faster than the SOR method

    NASA Astrophysics Data System (ADS)

    Niki, Hiroshi; Kohno, Toshiyuki; Morimoto, Munenori

    2008-09-01

    In recent years, a number of preconditioners have been applied to linear systems [A.D. Gunawardena, S.K. Jain, L. Snyder, Modified iterative methods for consistent linear systems, Linear Algebra Appl. 154-156 (1991) 123-143; T. Kohno, H. Kotakemori, H. Niki, M. Usui, Improving modified Gauss-Seidel method for Z-matrices, Linear Algebra Appl. 267 (1997) 113-123; H. Kotakemori, K. Harada, M. Morimoto, H. Niki, A comparison theorem for the iterative method with the preconditioner (I+Smax), J. Comput. Appl. Math. 145 (2002) 373-378; H. Kotakemori, H. Niki, N. Okamoto, Accelerated iteration method for Z-matrices, J. Comput. Appl. Math. 75 (1996) 87-97; M. Usui, H. Niki, T. Kohno, Adaptive Gauss-Seidel method for linear systems, Internat. J. Comput. Math. 51 (1994) 119-125 [10

  16. Fast inverse scattering solutions using the distorted Born iterative method and the multilevel fast multipole algorithm

    PubMed Central

    Hesford, Andrew J.; Chew, Weng C.

    2010-01-01

    The distorted Born iterative method (DBIM) computes iterative solutions to nonlinear inverse scattering problems through successive linear approximations. By decomposing the scattered field into a superposition of scattering by an inhomogeneous background and by a material perturbation, large or high-contrast variations in medium properties can be imaged through iterations that are each subject to the distorted Born approximation. However, the need to repeatedly compute forward solutions still imposes a very heavy computational burden. To ameliorate this problem, the multilevel fast multipole algorithm (MLFMA) has been applied as a forward solver within the DBIM. The MLFMA computes forward solutions in linear time for volumetric scatterers. The typically regular distribution and shape of scattering elements in the inverse scattering problem allow the method to take advantage of data redundancy and reduce the computational demands of the normally expensive MLFMA setup. Additional benefits are gained by employing Kaczmarz-like iterations, where partial measurements are used to accelerate convergence. Numerical results demonstrate both the efficiency of the forward solver and the successful application of the inverse method to imaging problems with dimensions in the neighborhood of ten wavelengths. PMID:20707438

  17. Sometimes "Newton's Method" Always "Cycles"

    ERIC Educational Resources Information Center

    Latulippe, Joe; Switkes, Jennifer

    2012-01-01

    Are there functions for which Newton's method cycles for all non-trivial initial guesses? We construct and solve a differential equation whose solution is a real-valued function that two-cycles under Newton iteration. Higher-order cycles of Newton's method iterates are explored in the complex plane using complex powers of "x." We find a class of…

  18. Anderson acceleration of the Jacobi iterative method: An efficient alternative to Krylov methods for large, sparse linear systems

    DOE PAGES

    Pratapa, Phanisri P.; Suryanarayana, Phanish; Pask, John E.

    2015-12-01

    We employ Anderson extrapolation to accelerate the classical Jacobi iterative method for large, sparse linear systems. Specifically, we utilize extrapolation at periodic intervals within the Jacobi iteration to develop the Alternating Anderson–Jacobi (AAJ) method. We verify the accuracy and efficacy of AAJ in a range of test cases, including nonsymmetric systems of equations. We demonstrate that AAJ possesses a favorable scaling with system size that is accompanied by a small prefactor, even in the absence of a preconditioner. In particular, we show that AAJ is able to accelerate the classical Jacobi iteration by over four orders of magnitude, with speed-ups that increase as the system gets larger. Moreover, we find that AAJ significantly outperforms the Generalized Minimal Residual (GMRES) method in the range of problems considered here, with the relative performance again improving with size of the system. As a result, the proposed method represents a simple yet efficient technique that is particularly attractive for large-scale parallel solutions of linear systems of equations.
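
    A simplified sketch of the alternating Anderson-Jacobi idea follows (a dense, strictly diagonally dominant random test matrix and unoptimized parameters are assumed for illustration; this is not the authors' implementation): plain weighted-Jacobi sweeps are interleaved with an Anderson extrapolation over the recent residual history every few iterations.

        import numpy as np

        def alternating_anderson_jacobi(A, b, omega=0.8, m=5, period=6,
                                        tol=1e-10, max_iter=2000):
            Dinv = 1.0 / np.diag(A)
            x = np.zeros_like(b)
            hist_x, hist_f = [], []
            for k in range(max_iter):
                f = Dinv * (b - A @ x)               # weighted-Jacobi step direction
                if np.linalg.norm(b - A @ x) <= tol * np.linalg.norm(b):
                    return x, k
                hist_x.append(x.copy()); hist_f.append(f.copy())
                if len(hist_x) > m + 1:
                    hist_x.pop(0); hist_f.pop(0)
                if (k + 1) % period == 0 and len(hist_x) > 1:
                    dX = np.diff(np.array(hist_x), axis=0).T
                    dF = np.diff(np.array(hist_f), axis=0).T
                    gamma, *_ = np.linalg.lstsq(dF, f, rcond=None)
                    x = x + omega * f - (dX + omega * dF) @ gamma   # Anderson extrapolation
                else:
                    x = x + omega * f                               # plain Jacobi sweep
            return x, max_iter

        rng = np.random.default_rng(2)
        n = 400
        A = 4.0 * np.eye(n) + rng.uniform(0.0, 3.0 / n, (n, n))   # strictly diagonally dominant
        np.fill_diagonal(A, 4.0)
        b = rng.standard_normal(n)
        x, iters = alternating_anderson_jacobi(A, b)
        print("iterations:", iters, "relative residual:",
              np.linalg.norm(b - A @ x) / np.linalg.norm(b))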

  20. Gauss-Seidel Iterative Method as a Real-Time Pile-Up Solver of Scintillation Pulses

    NASA Astrophysics Data System (ADS)

    Novak, Roman; Vencelj, Matjaž

    2009-12-01

    The pile-up rejection in nuclear spectroscopy has been confronted recently by several pile-up correction schemes that compensate for distortions of the signal and subsequent energy spectra artifacts as the counting rate increases. We study here a real-time capability of the event-by-event correction method, which at the core translates to solving many sets of linear equations. Tight time limits and constrained front-end electronics resources make well-known direct solvers inappropriate. We propose a novel approach based on the Gauss-Seidel iterative method, which turns out to be a stable and cost-efficient solution to improve spectroscopic resolution in the front-end electronics. We show the method convergence properties for a class of matrices that emerge in calorimetric processing of scintillation detector signals and demonstrate the ability of the method to support the relevant resolutions. The sole iteration-based error component can be brought below the sliding window induced errors in a reasonable number of iteration steps, thus allowing real-time operation. An area-efficient hardware implementation is proposed that fully utilizes the method's inherent parallelism.

  1. Iterative optimization method for design of quantitative magnetization transfer imaging experiments.

    PubMed

    Levesque, Ives R; Sled, John G; Pike, G Bruce

    2011-09-01

    Quantitative magnetization transfer imaging (QMTI) using spoiled gradient echo sequences with pulsed off-resonance saturation can be a time-consuming technique. A method is presented for selection of an optimum experimental design for quantitative magnetization transfer imaging based on the iterative reduction of a discrete sampling of the Z-spectrum. The applicability of the technique is demonstrated for human brain white matter imaging at 1.5 T and 3 T, and optimal designs are produced to target specific model parameters. The optimal number of measurements and the signal-to-noise ratio required for stable parameter estimation are also investigated. In vivo imaging results demonstrate that this optimal design approach substantially improves parameter map quality. The iterative method presented here provides an advantage over free form optimal design methods, in that pragmatic design constraints are readily incorporated. In particular, the presented method avoids clustering and repeated measures in the final experimental design, an attractive feature for the purpose of magnetization transfer model validation. The iterative optimal design technique is general and can be applied to any method of quantitative magnetization transfer imaging. Copyright © 2011 Wiley-Liss, Inc.

  2. Multiple zeros of polynomials

    NASA Technical Reports Server (NTRS)

    Wood, C. A.

    1974-01-01

    For polynomials of higher degree, iterative numerical methods must be used. Four iterative methods are presented for approximating the zeros of a polynomial using a digital computer. Newton's method and Muller's method are two well known iterative methods which are presented. They extract the zeros of a polynomial by generating a sequence of approximations converging to each zero. However, both of these methods are very unstable when used on a polynomial which has multiple zeros. That is, either they fail to converge to some or all of the zeros, or they converge to very bad approximations of the polynomial's zeros. This material introduces two new methods, the greatest common divisor (G.C.D.) method and the repeated greatest common divisor (repeated G.C.D.) method, which are superior methods for numerically approximating the zeros of a polynomial having multiple zeros. These methods were programmed in FORTRAN 4 and comparisons in time and accuracy are given.
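
    The difficulty with multiple zeros, and the G.C.D. idea for removing them, can be illustrated with a small symbolic sketch (written here with SymPy rather than the report's FORTRAN programs, and not reproducing the repeated G.C.D. variant): Newton's iteration degrades to slow linear convergence at a triple zero, while dividing the polynomial by gcd(p, p') leaves a square-free polynomial with the same roots, on which Newton converges quickly again.

        import sympy as sp

        x = sp.symbols('x')
        p = (x - 1)**3 * (x + 2)              # triple zero at 1, simple zero at -2

        def newton_steps(poly, x0, tol=1e-12, max_iter=100):
            f = sp.lambdify(x, poly)
            df = sp.lambdify(x, sp.diff(poly, x))
            xk = float(x0)
            for k in range(max_iter):
                step = f(xk) / df(xk)
                xk -= step
                if abs(step) < tol:
                    return xk, k + 1
            return xk, max_iter

        g = sp.gcd(p, sp.diff(p, x))          # captures the repeated factor (x - 1)**2
        q = sp.quo(p, g, x)                   # square-free part: same roots, all simple

        print(newton_steps(p, 2.0))           # slow, roughly linear convergence at the triple zero
        print(newton_steps(q, 2.0))           # fast, quadratic convergence to the same zero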

  3. Finding the Optimal Guidance for Enhancing Anchored Instruction

    ERIC Educational Resources Information Center

    Zydney, Janet Mannheimer; Bathke, Arne; Hasselbring, Ted S.

    2014-01-01

    This study investigated the effect of different methods of guidance with anchored instruction on students' mathematical problem-solving performance. The purpose of this research was to iteratively design a learning environment to find the optimal level of guidance. Two iterations of the software were compared. The first iteration used explicit…

  4. Numerical Grid Generation and Potential Airfoil Analysis and Design

    DTIC Science & Technology

    1988-01-01

    Gauss-Seidel, SOR and ADI iterative methods. JACOBI METHOD: In the Jacobi method each new value of a function is computed entirely from old values...preceding iteration and adding the inhomogeneous (boundary condition) term. GAUSS-SEIDEL METHOD: When we compute a new value in the Jacobi method, we have already...Gauss-Seidel method. A sufficient condition for convergence of the Gauss-Seidel method is diagonal dominance of [A]. SUCCESSIVE OVER-RELAXATION (SOR
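
    A minimal numerical companion to this summary (a generic tridiagonal, diagonally dominant test system, not the report's grid-generation equations) compares the three sweeps: Jacobi uses only old values, Gauss-Seidel uses newly computed values as soon as they are available, and SOR over-relaxes the Gauss-Seidel update.

        import numpy as np

        def sweep(A, b, x, method="jacobi", omega=1.5):
            n = len(b)
            x_new = x.copy()
            for i in range(n):
                lower = x_new if method in ("gauss-seidel", "sor") else x   # new vs. old values
                s = A[i, :i] @ lower[:i] + A[i, i + 1:] @ x[i + 1:]
                gs = (b[i] - s) / A[i, i]
                x_new[i] = gs if method != "sor" else (1.0 - omega) * x[i] + omega * gs
            return x_new

        def iterations_to_converge(A, b, method, tol=1e-8, max_iter=10000):
            x = np.zeros_like(b)
            for k in range(max_iter):
                x = sweep(A, b, x, method)
                if np.linalg.norm(b - A @ x) < tol:
                    return k + 1
            return max_iter

        n = 20                                      # small 1-D Poisson-type system
        A = (np.diag(2.0 * np.ones(n)) + np.diag(-np.ones(n - 1), 1)
             + np.diag(-np.ones(n - 1), -1))
        b = np.ones(n)
        for m in ("jacobi", "gauss-seidel", "sor"):
            print(m, iterations_to_converge(A, b, m), "iterations")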

  5. Influence of Primary Gage Sensitivities on the Convergence of Balance Load Iterations

    NASA Technical Reports Server (NTRS)

    Ulbrich, Norbert Manfred

    2012-01-01

    The connection between the convergence of wind tunnel balance load iterations and the existence of the primary gage sensitivities of a balance is discussed. First, basic elements of two load iteration equations that the iterative method uses in combination with results of a calibration data analysis for the prediction of balance loads are reviewed. Then, the connection between the primary gage sensitivities, the load format, the gage output format, and the convergence characteristics of the load iteration equation choices is investigated. A new criterion is also introduced that may be used to objectively determine if the primary gage sensitivity of a balance gage exists. Then, it is shown that both load iteration equations will converge as long as a suitable regression model is used for the analysis of the balance calibration data, the combined influence of nonlinear terms of the regression model is very small, and the primary gage sensitivities of all balance gages exist. The last requirement is fulfilled, e.g., if force balance calibration data is analyzed in force balance format. Finally, it is demonstrated that only one of the two load iteration equation choices, i.e., the iteration equation used by the primary load iteration method, converges if one or more primary gage sensitivities are missing. This situation may occur, e.g., if force balance calibration data is analyzed in direct read format using the original gage outputs. Data from the calibration of a six-component force balance is used to illustrate the connection between the convergence of the load iteration equation choices and the existence of the primary gage sensitivities.

  6. Block Gauss elimination followed by a classical iterative method for the solution of linear systems

    NASA Astrophysics Data System (ADS)

    Alanelli, Maria; Hadjidimos, Apostolos

    2004-02-01

    In the last two decades many papers have appeared in which the application of an iterative method for the solution of a linear system is preceded by a step of the Gauss elimination process, in the hope that this will increase the rate of convergence of the iterative method. This combination of methods has proven successful especially when the matrix A of the system is an M-matrix. The purpose of this paper is to extend the idea from one to several Gauss elimination steps, to consider other classes of matrices A, e.g., p-cyclic consistently ordered ones, and to generalize and improve the asymptotic convergence rates of some of the methods known so far.

  7. Modified conjugate gradient method for diagonalizing large matrices.

    PubMed

    Jie, Quanlin; Liu, Dunhuan

    2003-11-01

    We present an iterative method to diagonalize large matrices. The basic idea is the same as in the conjugate gradient (CG) method, i.e., minimizing the Rayleigh quotient via its gradient and avoiding the reintroduction of errors along the directions of previous gradients. Each iteration step finds the lowest eigenvector of the matrix in a subspace spanned by the current trial vector and the corresponding gradient of the Rayleigh quotient, as well as some previous trial vectors. The gradient, together with the previous trial vectors, plays a role similar to that of the conjugate gradient in the original CG algorithm. Our numerical tests indicate that this method converges significantly faster than the original CG method, while the computational cost of one iteration step is about the same. It is suitable for first-principles calculations.
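
    A compact sketch of this subspace idea is given below (a basic Rayleigh-Ritz step over the span of the current vector, the Rayleigh-quotient gradient direction, and one previous vector, applied to a small synthetic symmetric matrix; it is not the authors' exact algorithm, and it keeps only a single previous trial vector).

        import numpy as np

        def lowest_eigenpair(A, tol=1e-10, max_iter=500, seed=0):
            n = A.shape[0]
            rng = np.random.default_rng(seed)
            x = rng.standard_normal(n)
            x /= np.linalg.norm(x)
            x_prev = None
            for k in range(max_iter):
                rho = x @ A @ x                       # Rayleigh quotient (x has unit norm)
                r = A @ x - rho * x                   # residual, parallel to the gradient
                if np.linalg.norm(r) < tol:
                    return rho, x, k
                cols = [x, r] if x_prev is None else [x, r, x_prev]
                Q, _ = np.linalg.qr(np.column_stack(cols))   # orthonormal basis of the subspace
                w, V = np.linalg.eigh(Q.T @ A @ Q)           # Rayleigh-Ritz in the small subspace
                x_prev = x
                x = Q @ V[:, 0]                              # lowest Ritz vector
                x /= np.linalg.norm(x)
            return rho, x, max_iter

        A = np.diag(np.arange(1.0, 101.0))                   # synthetic symmetric test matrix
        A[0, 1] = A[1, 0] = 0.5
        rho, x, iters = lowest_eigenpair(A)
        print("lowest eigenvalue ~", rho, "found in", iters, "iterations")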

  8. Adaptive dynamic programming for discrete-time linear quadratic regulation based on multirate generalised policy iteration

    NASA Astrophysics Data System (ADS)

    Chun, Tae Yoon; Lee, Jae Young; Park, Jin Bae; Choi, Yoon Ho

    2018-06-01

    In this paper, we propose two multirate generalised policy iteration (GPI) algorithms applied to discrete-time linear quadratic regulation problems. The proposed algorithms are extensions of the existing GPI algorithm that consists of the approximate policy evaluation and policy improvement steps. The two proposed schemes, named heuristic dynamic programming (HDP) and dual HDP (DHP), based on multirate GPI, use multi-step estimation (M-step Bellman equation) at the approximate policy evaluation step for estimating the value function and its gradient called costate, respectively. Then, we show that these two methods with the same update horizon can be considered equivalent in the iteration domain. Furthermore, monotonically increasing and decreasing convergences, so called value iteration (VI)-mode and policy iteration (PI)-mode convergences, are proved to hold for the proposed multirate GPIs. Further, general convergence properties in terms of eigenvalues are also studied. The data-driven online implementation methods for the proposed HDP and DHP are demonstrated and finally, we present the results of numerical simulations performed to verify the effectiveness of the proposed methods.

  9. Amesos2 and Belos: Direct and Iterative Solvers for Large Sparse Linear Systems

    DOE PAGES

    Bavier, Eric; Hoemmen, Mark; Rajamanickam, Sivasankaran; ...

    2012-01-01

    Solvers for large sparse linear systems come in two categories: direct and iterative. Amesos2, a package in the Trilinos software project, provides direct methods, and Belos, another Trilinos package, provides iterative methods. Amesos2 offers a common interface to many different sparse matrix factorization codes, and can handle any implementation of sparse matrices and vectors, via an easy-to-extend C++ traits interface. It can also factor matrices whose entries have arbitrary “Scalar” type, enabling extended-precision and mixed-precision algorithms. Belos includes many different iterative methods for solving large sparse linear systems and least-squares problems. Unlike competing iterative solver libraries, Belos completely decouples the algorithms from the implementations of the underlying linear algebra objects. This lets Belos exploit the latest hardware without changes to the code. Belos favors algorithms that solve higher-level problems, such as multiple simultaneous linear systems and sequences of related linear systems, faster than standard algorithms. The package also supports extended-precision and mixed-precision algorithms. Together, Amesos2 and Belos form a complete suite of sparse linear solvers.

  10. Efficient iterative methods applied to the solution of transonic flows

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wissink, A.M.; Lyrintzis, A.S.; Chronopoulos, A.T.

    1996-02-01

    We investigate the use of an inexact Newton's method to solve the potential equations in the transonic regime. As a test case, we solve the two-dimensional steady transonic small disturbance equation. Approximate factorization/ADI techniques have traditionally been employed for implicit solutions of this nonlinear equation. Instead, we apply Newton's method using an exact analytical determination of the Jacobian with preconditioned conjugate gradient-like iterative solvers for solution of the linear systems in each Newton iteration. Two iterative solvers are tested: a block s-step version of the classical Orthomin(k) algorithm called orthogonal s-step Orthomin (OSOmin) and the well-known GMRES method. The preconditioner is a vectorizable and parallelizable version of incomplete LU (ILU) factorization. Efficiency of the Newton-iterative method on vector and parallel computer architectures is the main issue addressed. In vectorized tests on a single processor of the Cray C-90, the performance of Newton-OSOmin is superior to Newton-GMRES and a more traditional monotone AF/ADI method (MAF) for a variety of transonic Mach numbers and mesh sizes. Newton-GMRES is superior to MAF for some cases. The parallel performance of the Newton method is also found to be very good on multiple processors of the Cray C-90 and on the massively parallel Thinking Machines CM-5, where very fast execution rates (up to 9 Gflops) are found for large problems. 38 refs., 14 figs., 7 tabs.

  11. Sparse magnetic resonance imaging reconstruction using the bregman iteration

    NASA Astrophysics Data System (ADS)

    Lee, Dong-Hoon; Hong, Cheol-Pyo; Lee, Man-Woo

    2013-01-01

    Magnetic resonance imaging (MRI) reconstruction needs many samples that are sequentially sampled by using phase encoding gradients in an MRI system. This is directly connected to the scan time of the MRI system, which can be long. Therefore, many researchers have studied ways to reduce the scan time, especially compressed sensing (CS), which is used for sparse images and reconstruction from fewer sampled datasets when the k-space is not fully sampled. Recently, an iterative technique based on the Bregman method was developed for denoising. The Bregman iteration method improves on total variation (TV) regularization by gradually recovering the fine-scale structures that are usually lost in TV regularization. In this study, we studied sparse sampling image reconstruction using the Bregman iteration for a low-field MRI system to improve its temporal resolution and to validate its usefulness. The image was obtained with a 0.32 T MRI scanner (Magfinder II, SCIMEDIX, Korea) with a phantom and an in-vivo human brain in a head coil. We applied random k-space sampling, and we determined the sampling ratios by using half the fully sampled k-space. The Bregman iteration was used to generate the final images based on the reduced data. We also calculated the root-mean-square error (RMSE) values from error images that were obtained using various numbers of Bregman iterations. Our reconstructed images using the Bregman iteration for sparse sampling showed good results compared with the original images. Moreover, the RMSE values showed that the sparsely reconstructed phantom and human images converged to the original images. We confirmed the feasibility of sparse sampling image reconstruction using the Bregman iteration with a low-field MRI system and obtained good results. Although our results used half the sampling ratio, this method will be helpful in increasing the temporal resolution of low-field MRI systems.

  12. Comparison of different eigensolvers for calculating vibrational spectra using low-rank, sum-of-product basis functions

    NASA Astrophysics Data System (ADS)

    Leclerc, Arnaud; Thomas, Phillip S.; Carrington, Tucker

    2017-08-01

    Vibrational spectra and wavefunctions of polyatomic molecules can be calculated at low memory cost using low-rank sum-of-product (SOP) decompositions to represent basis functions generated using an iterative eigensolver. Using an SOP tensor format does not determine the iterative eigensolver. The choice of the iterative eigensolver is limited by the need to restrict the rank of the SOP basis functions at every stage of the calculation. We have adapted, implemented and compared different reduced-rank algorithms based on standard iterative methods (block-Davidson algorithm, Chebyshev iteration) to calculate vibrational energy levels and wavefunctions of the 12-dimensional acetonitrile molecule. The effect of using low-rank SOP basis functions on the different methods is analysed and the numerical results are compared with those obtained with the reduced rank block power method. Relative merits of the different algorithms are presented, showing that the advantage of using a more sophisticated method, although mitigated by the use of reduced-rank SOP functions, is noticeable in terms of CPU time.

  13. Computational aspects of helicopter trim analysis and damping levels from Floquet theory

    NASA Technical Reports Server (NTRS)

    Gaonkar, Gopal H.; Achar, N. S.

    1992-01-01

    Helicopter trim settings of periodic initial state and control inputs are investigated for convergence of Newton iteration in computing the settings sequentially and in parallel. The trim analysis uses a shooting method and a weak version of two temporal finite element methods with displacement formulation and with mixed formulation of displacements and momenta. These three methods broadly represent two main approaches of trim analysis: adaptation of initial-value and finite element boundary-value codes to periodic boundary conditions, particularly for unstable and marginally stable systems. In each method, both the sequential and in-parallel schemes are used and the resulting nonlinear algebraic equations are solved by damped Newton iteration with an optimally selected damping parameter. The impact of damped Newton iteration, including earlier-observed divergence problems in trim analysis, is demonstrated by the maximum condition number of the Jacobian matrices of the iterative scheme and by virtual elimination of divergence. The advantages of the in-parallel scheme over the conventional sequential scheme are also demonstrated.
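
    The damped Newton idea mentioned above can be sketched generically (a toy two-equation nonlinear system and a simple backtracking rule stand in for the helicopter trim equations and the optimally selected damping parameter): the damping factor is reduced until the step actually decreases the residual norm.

        import numpy as np

        def F(z):                                  # toy nonlinear system, F(z) = 0
            x, y = z
            return np.array([x**2 + y**2 - 4.0, np.exp(x) + y - 1.0])

        def J(z):                                  # its Jacobian
            x, y = z
            return np.array([[2.0 * x, 2.0 * y],
                             [np.exp(x), 1.0]])

        z = np.array([1.0, 1.0])
        for k in range(50):
            f = F(z)
            if np.linalg.norm(f) < 1e-12:
                break
            dz = np.linalg.solve(J(z), -f)         # full Newton step
            lam = 1.0
            while np.linalg.norm(F(z + lam * dz)) >= np.linalg.norm(f) and lam > 1e-4:
                lam *= 0.5                         # damp the step until the residual decreases
            z = z + lam * dz
        print("solution:", z, "residual norm:", np.linalg.norm(F(z)), "Newton steps:", k)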

  14. Multilevel acceleration of scattering-source iterations with application to electron transport

    DOE PAGES

    Drumm, Clif; Fan, Wesley

    2017-08-18

    Acceleration/preconditioning strategies available in the SCEPTRE radiation transport code are described. A flexible transport synthetic acceleration (TSA) algorithm that uses a low-order discrete-ordinates (S N) or spherical-harmonics (P N) solve to accelerate convergence of a high-order S N source-iteration (SI) solve is described. Convergence of the low-order solves can be further accelerated by applying off-the-shelf incomplete-factorization or algebraic-multigrid methods. Also available is an algorithm that uses a generalized minimum residual (GMRES) iterative method rather than SI for convergence, using a parallel sweep-based solver to build up a Krylov subspace. TSA has been applied as a preconditioner to accelerate the convergence of the GMRES iterations. The methods are applied to several problems involving electron transport and problems with artificial cross sections with large scattering ratios. These methods were compared and evaluated by considering material discontinuities and scattering anisotropy. Observed accelerations obtained are highly problem dependent, but speedup factors around 10 have been observed in typical applications.

  15. Evaluating user reputation in online rating systems via an iterative group-based ranking method

    NASA Astrophysics Data System (ADS)

    Gao, Jian; Zhou, Tao

    2017-05-01

    Reputation is a valuable asset in online social lives and it has drawn increased attention. Due to the existence of noisy ratings and spamming attacks, how to evaluate user reputation in online rating systems is especially significant. However, most of the previous ranking-based methods either follow a debatable assumption or have unsatisfied robustness. In this paper, we propose an iterative group-based ranking method by introducing an iterative reputation-allocation process into the original group-based ranking method. More specifically, the reputation of users is calculated based on the weighted sizes of the user rating groups after grouping all users by their rating similarities, and the high reputation users' ratings have larger weights in dominating the corresponding user rating groups. The reputation of users and the user rating group sizes are iteratively updated until they become stable. Results on two real data sets with artificial spammers suggest that the proposed method has better performance than the state-of-the-art methods and its robustness is considerably improved comparing with the original group-based ranking method. Our work highlights the positive role of considering users' grouping behaviors towards a better online user reputation evaluation.

  16. Computation of nonlinear ultrasound fields using a linearized contrast source method.

    PubMed

    Verweij, Martin D; Demi, Libertario; van Dongen, Koen W A

    2013-08-01

    Nonlinear ultrasound is important in medical diagnostics because imaging of the higher harmonics improves resolution and reduces scattering artifacts. Second harmonic imaging is currently standard, and higher harmonic imaging is under investigation. The efficient development of novel imaging modalities and equipment requires accurate simulations of nonlinear wave fields in large volumes of realistic (lossy, inhomogeneous) media. The Iterative Nonlinear Contrast Source (INCS) method has been developed to deal with spatiotemporal domains measuring hundreds of wavelengths and periods. This full wave method considers the nonlinear term of the Westervelt equation as a nonlinear contrast source, and solves the equivalent integral equation via the Neumann iterative solution. Recently, the method has been extended with a contrast source that accounts for spatially varying attenuation. The current paper addresses the problem that the Neumann iterative solution converges badly for strong contrast sources. The remedy is linearization of the nonlinear contrast source, combined with application of more advanced methods for solving the resulting integral equation. Numerical results show that linearization in combination with a Bi-Conjugate Gradient Stabilized method allows the INCS method to deal with fairly strong, inhomogeneous attenuation, while the error due to the linearization can be eliminated by restarting the iterative scheme.
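
    The convergence issue and its Krylov remedy can be imitated on a generic linear fixed-point problem (a random symmetric operator with spectral radius close to one stands in for the actual contrast-source operator, and SciPy's Bi-CGSTAB replaces the paper's full scheme): the Neumann iteration u <- b + K u stalls, whereas Bi-CGSTAB applied to (I - K)u = b reaches its default relative tolerance of about 1e-5.

        import numpy as np
        from scipy.sparse.linalg import LinearOperator, bicgstab

        rng = np.random.default_rng(3)
        n = 500
        B = rng.standard_normal((n, n))
        B = (B + B.T) / 2.0
        K = 0.995 * B / np.linalg.norm(B, 2)        # symmetric operator, spectral radius 0.995
        b = rng.standard_normal(n)
        rel_res = lambda u: np.linalg.norm(b + K @ u - u) / np.linalg.norm(b)

        u = np.zeros(n)                             # Neumann (fixed-point) iteration
        for _ in range(400):
            u = b + K @ u
        print("Neumann, 400 operator applications, relative residual:", rel_res(u))

        count = {"mv": 0}
        def matvec(v):                              # action of (I - K)
            count["mv"] += 1
            return v - K @ v

        u2, info = bicgstab(LinearOperator((n, n), matvec=matvec, dtype=float), b)
        print("Bi-CGSTAB relative residual:", rel_res(u2),
              "operator applications:", count["mv"])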

  17. Compensation for the phase-type spatial periodic modulation of the near-field beam at 1053 nm

    NASA Astrophysics Data System (ADS)

    Gao, Yaru; Liu, Dean; Yang, Aihua; Tang, Ruyu; Zhu, Jianqiang

    2017-10-01

    A phase-only spatial light modulator is used to provide and compensate for the spatial periodic modulation (SPM) of the near-field beam in the near infrared at the 1053 nm wavelength with an improved iterative weight-based method. The transmission characteristics of the incident beam are changed by a spatial light modulator (SLM) to shape the spatial intensity of the output beam. The propagation and reverse propagation of the light in free space are two important processes in the iterative procedure. The underlying theory is the beam angular spectrum transmission formula (ASTF) and the principle of the iterative weight-based method. We have made two improvements to the originally proposed iterative weight-based method: we select the appropriate parameter by choosing the minimum value of the output beam contrast degree, and we use the MATLAB built-in angle function to acquire the corresponding phase of the light wave function. The required phase that compensates for the intensity distribution of the incident SPM beam is iterated by this algorithm, which can decrease the magnitude of the SPM of the intensity on the observation plane. The experimental results show that the phase-type SPM of the near-field beam is subject to a certain restriction. We have also analyzed some factors that make the results imperfect. The experimental results verify the possible applicability of this iterative weight-based method to compensate for the SPM of the near-field beam.

  18. An iterative method for the localization of a neutron source in a large box (container)

    NASA Astrophysics Data System (ADS)

    Dubinski, S.; Presler, O.; Alfassi, Z. B.

    2007-12-01

    The localization of an unknown neutron source in a bulky box was studied. This can be used for the inspection of cargo, to prevent the smuggling of neutron and α emitters. It is important to localize the source from the outside for safety reasons. Source localization is necessary in order to determine its activity. A previous study showed that, by using six detectors, three on each of two parallel faces of the box (460×420×200 mm³), the location of the source can be found with an average distance of 4.73 cm between the real source position and the calculated one and a maximal distance of about 9 cm. Accuracy was improved in this work by applying an iteration method based on four fixed detectors and successive repositioning of an external calibrating source. The initial position of the calibrating source is in the plane of detectors 1 and 2. This method finds the unknown source location with an average distance of 0.78 cm between the real source position and the calculated one and a maximum distance of 3.66 cm for the same box. For larger boxes, localization without iterations requires an increase in the number of detectors, while localization with iterations requires only an increase in the number of iteration steps. In addition to source localization, two methods for determining the activity of the unknown source were also studied.

  19. A comparison between progressive extension method (PEM) and iterative method (IM) for magnetic field extrapolations in the solar atmosphere

    NASA Technical Reports Server (NTRS)

    Wu, S. T.; Sun, M. T.; Sakurai, Takashi

    1990-01-01

    This paper presents a comparison between two numerical methods for the extrapolation of nonlinear force-free magnetic fields, viz the Iterative Method (IM) and the Progressive Extension Method (PEM). The advantages and disadvantages of these two methods are summarized, and the accuracy and numerical instability are discussed. On the basis of this investigation, it is claimed that the two methods do resemble each other qualitatively.

  20. Computational trigonometry

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gustafson, K.

    1994-12-31

    By means of the author's earlier theory of antieigenvalues and antieigenvectors, a new computational approach to iterative methods is presented. This enables an explicit trigonometric understanding of iterative convergence and provides new insights into the sharpness of error bounds. Direct applications to gradient descent, conjugate gradient, GCR(k), Orthomin, CGN, GMRES, CGS, and other matrix iterative schemes will be given.

  1. Effect of contrast enhancement prior to iteration procedure on image correction for soft x-ray projection microscopy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jamsranjav, Erdenetogtokh, E-mail: ja.erdenetogtokh@gmail.com; Shiina, Tatsuo, E-mail: shiina@faculity.chiba-u.jp; Kuge, Kenichi

    2016-01-28

    Soft X-ray microscopy is well recognized as a powerful tool for high-resolution imaging of hydrated biological specimens. The projection type offers easy zooming, a simple optical layout, and other practical advantages. However, the image is blurred by the diffraction of X-rays, which degrades the spatial resolution. In this study, the blurred images have been corrected by an iteration procedure, i.e., repeated Fresnel and inverse Fresnel transformations. This method was confirmed by earlier studies to be effective. Nevertheless, it was not sufficient for some images with very low contrast, especially at high magnification. In the present study, we tried a contrast enhancement method to make the diffraction fringes clearer prior to the iteration procedure. The method was effective in improving the images that could not be corrected by the iteration procedure alone.

  2. A fast method to emulate an iterative POCS image reconstruction algorithm.

    PubMed

    Zeng, Gengsheng L

    2017-10-01

    Iterative image reconstruction algorithms are commonly used to optimize an objective function, especially when the objective function is nonquadratic. Generally speaking, iterative algorithms are computationally inefficient. This paper presents a fast algorithm that has one backprojection and no forward projection. This paper derives a new method to solve an optimization problem. The nonquadratic constraint, for example, an edge-preserving denoising constraint, is implemented as a nonlinear filter. The algorithm is derived based on the POCS (projections onto convex sets) approach. A windowed FBP (filtered backprojection) algorithm enforces the data fidelity. An iterative procedure, divided into segments, enforces edge-enhancing denoising. Each segment performs nonlinear filtering. The derived iterative algorithm is computationally efficient. It contains only one backprojection and no forward projection. Low-dose CT data are used for algorithm feasibility studies. The nonlinearity is implemented as an edge-enhancing noise-smoothing filter. The patient study results demonstrate its effectiveness in processing low-dose x-ray CT data. This fast algorithm can be used to replace many iterative algorithms. © 2017 American Association of Physicists in Medicine.
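
    For readers unfamiliar with POCS, the toy sketch below alternates projections onto two convex sets (a box and a hyperplane); it only illustrates the general idea of successive projections and is not the paper's windowed-FBP construction:

```python
import numpy as np

# POCS toy example: find a point in the intersection of the box [0, 1]^n
# and the hyperplane {x : a.x = b} by alternating the two projections.
rng = np.random.default_rng(1)
n = 10
a = rng.uniform(0.5, 1.5, n)                 # chosen so the intersection is non-empty
b = 1.0
x = rng.standard_normal(n)

for _ in range(100):
    x = np.clip(x, 0.0, 1.0)                 # projection onto the box
    x = x + (b - a @ x) / (a @ a) * a        # projection onto the hyperplane

print("box violation:", max(0.0, -x.min(), (x - 1.0).max()))
print("hyperplane residual:", abs(a @ x - b))
```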

  3. Generalized Pattern Search methods for a class of nonsmooth optimization problems with structure

    NASA Astrophysics Data System (ADS)

    Bogani, C.; Gasparo, M. G.; Papini, A.

    2009-07-01

    We propose a Generalized Pattern Search (GPS) method to solve a class of nonsmooth minimization problems, where the set of nondifferentiability is included in the union of known hyperplanes and, therefore, is highly structured. Both unconstrained and linearly constrained problems are considered. At each iteration the set of poll directions is enforced to conform to the geometry of both the nondifferentiability set and the boundary of the feasible region, near the current iterate. This is the key issue to guarantee the convergence of certain subsequences of iterates to points which satisfy first-order optimality conditions. Numerical experiments on some classical problems validate the method.

  4. Harmonics analysis of the ITER poloidal field converter based on a piecewise method

    NASA Astrophysics Data System (ADS)

    Xudong, WANG; Liuwei, XU; Peng, FU; Ji, LI; Yanan, WU

    2017-12-01

    Poloidal field (PF) converters provide controlled DC voltage and current to PF coils. The many harmonics generated by the PF converter flow into the power grid and seriously affect power systems and electric equipment. Due to the complexity of the system, the traditional integral operation in Fourier analysis is complicated and inaccurate. This paper presents a piecewise method to calculate the harmonics of the ITER PF converter. The relationship between the grid input current and the DC output current of the ITER PF converter is deduced. The grid current is decomposed into the sum of some simple functions. By calculating simple function harmonics based on the piecewise method, the harmonics of the PF converter under different operation modes are obtained. In order to examine the validity of the method, a simulation model is established based on Matlab/Simulink and a relevant experiment is implemented in the ITER PF integration test platform. Comparative results are given. The calculated results are found to be consistent with simulation and experiment. The piecewise method is proved correct and valid for calculating the system harmonics.

  5. RF Pulse Design using Nonlinear Gradient Magnetic Fields

    PubMed Central

    Kopanoglu, Emre; Constable, R. Todd

    2014-01-01

    Purpose An iterative k-space trajectory and radio-frequency (RF) pulse design method is proposed for Excitation using Nonlinear Gradient Magnetic fields (ENiGMa). Theory and Methods The spatial encoding functions (SEFs) generated by nonlinear gradient fields (NLGFs) are linearly dependent in Cartesian-coordinates. Left uncorrected, this may lead to flip-angle variations in excitation profiles. In the proposed method, SEFs (k-space samples) are selected using a Matching-Pursuit algorithm, and the RF pulse is designed using a Conjugate-Gradient algorithm. Three variants of the proposed approach are given: the full-algorithm, a computationally-cheaper version, and a third version for designing spoke-based trajectories. The method is demonstrated for various target excitation profiles using simulations and phantom experiments. Results The method is compared to other iterative (Matching-Pursuit and Conjugate Gradient) and non-iterative (coordinate-transformation and Jacobian-based) pulse design methods as well as uniform density spiral and EPI trajectories. The results show that the proposed method can increase excitation fidelity significantly. Conclusion An iterative method for designing k-space trajectories and RF pulses using nonlinear gradient fields is proposed. The method can either be used for selecting the SEFs individually to guide trajectory design, or can be adapted to design and optimize specific trajectories of interest. PMID:25203286

  6. A comparison of several methods of solving nonlinear regression groundwater flow problems

    USGS Publications Warehouse

    Cooley, Richard L.

    1985-01-01

    Computational efficiency and computer memory requirements for four methods of minimizing functions were compared for four test nonlinear-regression steady state groundwater flow problems. The fastest methods were the Marquardt and quasi-linearization methods, which required almost identical computer times and numbers of iterations; the next fastest was the quasi-Newton method, and last was the Fletcher-Reeves method, which did not converge in 100 iterations for two of the problems. The fastest method per iteration was the Fletcher-Reeves method, and this was followed closely by the quasi-Newton method. The Marquardt and quasi-linearization methods were slower. For all four methods the speed per iteration was directly related to the number of parameters in the model. However, this effect was much more pronounced for the Marquardt and quasi-linearization methods than for the other two. Hence the quasi-Newton (and perhaps Fletcher-Reeves) method might be more efficient than either the Marquardt or quasi-linearization methods if the number of parameters in a particular model were large, although this remains to be proven. The Marquardt method required somewhat less central memory than the quasi-linearization method for three of the four problems. For all four problems the quasi-Newton method required roughly two thirds to three quarters of the memory required by the Marquardt method, and the Fletcher-Reeves method required slightly less memory than the quasi-Newton method. Memory requirements were not excessive for any of the four methods.

  7. Adaptively Tuned Iterative Low Dose CT Image Denoising

    PubMed Central

    Hashemi, SayedMasoud; Paul, Narinder S.; Beheshti, Soosan; Cobbold, Richard S. C.

    2015-01-01

    Improving image quality is a critical objective in low dose computed tomography (CT) imaging and is the primary focus of CT image denoising. State-of-the-art CT denoising algorithms are mainly based on iterative minimization of an objective function, in which the performance is controlled by regularization parameters. To achieve the best results, these should be chosen carefully. However, the parameter selection is typically performed in an ad hoc manner, which can cause the algorithms to converge slowly or become trapped in a local minimum. To overcome these issues a noise confidence region evaluation (NCRE) method is used, which evaluates the denoising residuals iteratively and compares their statistics with those produced by additive noise. It then updates the parameters at the end of each iteration to achieve a better match to the noise statistics. By combining NCRE with the fundamentals of block matching and 3D filtering (BM3D) approach, a new iterative CT image denoising method is proposed. It is shown that this new denoising method improves the BM3D performance in terms of both the mean square error and a structural similarity index. Moreover, simulations and patient results show that this method preserves the clinically important details of low dose CT images together with a substantial noise reduction. PMID:26089972

  8. Anderson acceleration and application to the three-temperature energy equations

    NASA Astrophysics Data System (ADS)

    An, Hengbin; Jia, Xiaowei; Walker, Homer F.

    2017-10-01

    The Anderson acceleration method is an algorithm for accelerating the convergence of fixed-point iterations, including the Picard method. Anderson acceleration was first proposed in 1965 and, for some years, has been used successfully to accelerate the convergence of self-consistent field iterations in electronic-structure computations. Recently, the method has attracted growing attention in other application areas and among numerical analysts. Compared with a Newton-like method, an advantage of Anderson acceleration is that there is no need to form the Jacobian matrix. Thus the method is easy to implement. In this paper, an Anderson-accelerated Picard method is employed to solve the three-temperature energy equations, which are a type of strong nonlinear radiation-diffusion equations. Two strategies are used to improve the robustness of the Anderson acceleration method. One strategy is to adjust the iterates when necessary to satisfy the physical constraint. Another strategy is to monitor and, if necessary, reduce the matrix condition number of the least-squares problem in the Anderson-acceleration implementation so that numerical stability can be guaranteed. Numerical results show that the Anderson-accelerated Picard method can solve the three-temperature energy equations efficiently. Compared with the Picard method without acceleration, Anderson acceleration can reduce the number of iterations by at least half. A comparison between a Jacobian-free Newton-Krylov method, the Picard method, and the Anderson-accelerated Picard method is conducted in this paper.
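
    A compact sketch of Anderson acceleration wrapped around a generic fixed-point (Picard) map g is given below; the map used in the example is a toy contraction, not the three-temperature system:

```python
import numpy as np

def anderson_accelerate(g, x0, m=5, tol=1e-10, max_iter=200):
    """Anderson(m) acceleration of the fixed-point iteration x = g(x)."""
    xs = [np.asarray(x0, dtype=float)]
    fs = [g(xs[0]) - xs[0]]                          # residuals f_k = g(x_k) - x_k
    for k in range(max_iter):
        mk = len(fs) - 1                             # history depth currently available
        if mk == 0:
            x_new = xs[-1] + fs[-1]                  # plain Picard step to start
        else:
            dF = np.column_stack([fs[-i] - fs[-i - 1] for i in range(1, mk + 1)])
            dX = np.column_stack([xs[-i] - xs[-i - 1] for i in range(1, mk + 1)])
            gamma, *_ = np.linalg.lstsq(dF, fs[-1], rcond=None)
            x_new = xs[-1] + fs[-1] - (dX + dF) @ gamma
        f_new = g(x_new) - x_new
        if np.linalg.norm(f_new) < tol:
            return x_new, k + 1
        xs.append(x_new); fs.append(f_new)
        xs, fs = xs[-(m + 1):], fs[-(m + 1):]        # keep at most m difference pairs
    return xs[-1], max_iter

# Toy Picard map: a contraction whose fixed point solves x = cos(x) component-wise
g = lambda x: np.cos(x)
x_star, iters = anderson_accelerate(g, np.full(3, 0.5), m=3)
print(iters, x_star)                                 # far fewer steps than plain Picard
```

    The least-squares problem over the stored residual differences is the place where, as the abstract notes, the matrix condition number has to be monitored in a robust implementation.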

  9. Iterative and multigrid methods in the finite element solution of incompressible and turbulent fluid flow

    NASA Astrophysics Data System (ADS)

    Lavery, N.; Taylor, C.

    1999-07-01

    Multigrid and iterative methods are used to reduce the solution time of the matrix equations which arise from the finite element (FE) discretisation of the time-independent equations of motion of the incompressible fluid in turbulent motion. Incompressible flow is solved by using the method of reduced interpolation for the pressure to satisfy the Brezzi-Babuska condition. The k-l model is used to complete the turbulence closure problem. The non-symmetric iterative matrix methods examined are the methods of least squares conjugate gradient (LSCG), biconjugate gradient (BCG), conjugate gradient squared (CGS), and the biconjugate gradient stabilised (BCGSTAB). The multigrid algorithm applied is based on the FAS algorithm of Brandt, and uses two and three levels of grids with a V-cycling schedule. These methods are all compared to the non-symmetric frontal solver.

  10. Improved evaluation of optical depth components from Langley plot data

    NASA Technical Reports Server (NTRS)

    Biggar, S. F.; Gellman, D. I.; Slater, P. N.

    1990-01-01

    A simple, iterative procedure to determine the optical depth components of the extinction optical depth measured by a solar radiometer is presented. Simulated data show that the iterative procedure improves the determination of the exponent of a Junge law particle size distribution. The determination of the optical depth due to aerosol scattering is improved as compared to a method which uses only two points from the extinction data. The iterative method was used to determine spectral optical depth components for June 11-13, 1988 during the MAC III experiment.

  11. An iterative procedure for obtaining maximum-likelihood estimates of the parameters for a mixture of normal distributions

    NASA Technical Reports Server (NTRS)

    Peters, B. C., Jr.; Walker, H. F.

    1975-01-01

    A general iterative procedure is given for determining the consistent maximum-likelihood estimates of the parameters of a mixture of normal distributions. In addition, methods for locating a local maximum of the log-likelihood function, including Newton's method, a method of scoring, and modifications of these procedures, are discussed.
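
    For context, the familiar fixed-point updates for a two-component univariate normal mixture take the form sketched below; this is the textbook EM-type iteration and is not claimed to be identical to the report's procedure:

```python
import numpy as np
from scipy.stats import norm

def em_two_normals(x, n_iter=200):
    """EM-type fixed-point iteration for a 2-component univariate normal mixture."""
    # crude initialization, for illustration only
    w = np.array([0.5, 0.5])
    mu = np.array([x.min(), x.max()])
    sigma = np.array([x.std(), x.std()])
    for _ in range(n_iter):
        # E-step: posterior responsibilities of each component for each sample
        dens = np.stack([w[j] * norm.pdf(x, mu[j], sigma[j]) for j in range(2)])
        resp = dens / dens.sum(axis=0)
        # M-step: weighted maximum-likelihood updates of the parameters
        nj = resp.sum(axis=1)
        w = nj / x.size
        mu = (resp * x).sum(axis=1) / nj
        sigma = np.sqrt((resp * (x - mu[:, None]) ** 2).sum(axis=1) / nj)
    return w, mu, sigma

rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(-2, 1, 400), rng.normal(3, 0.5, 600)])
print(em_two_normals(x))
```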

  12. A study on Marangoni convection by the variational iteration method

    NASA Astrophysics Data System (ADS)

    Karaoǧlu, Onur; Oturanç, Galip

    2012-09-01

    In this paper, we will consider the use of the variational iteration method and Padé approximant for finding approximate solutions for a Marangoni convection induced flow over a free surface due to an imposed temperature gradient. The solutions are compared with the numerical (fourth-order Runge Kutta) solutions.

  13. Iteration of ultrasound aberration correction methods

    NASA Astrophysics Data System (ADS)

    Maasoey, Svein-Erik; Angelsen, Bjoern; Varslot, Trond

    2004-05-01

    Aberration in ultrasound medical imaging is usually modeled by time-delay and amplitude variations concentrated on the transmitting/receiving array. This filter process is here denoted a TDA filter. The TDA filter is an approximation to the physical aberration process, which occurs over an extended part of the human body wall. Estimation of the TDA filter, and performing correction on transmit and receive, has proven difficult. It has yet to be shown that this method works adequately for severe aberration. Estimation of the TDA filter can be iterated by retransmitting a corrected signal and re-estimating until a convergence criterion is fulfilled (adaptive imaging). Two methods for estimating time-delay and amplitude variations in receive signals from random scatterers have been developed. One method correlates each element signal with a reference signal. The other method uses eigenvalue decomposition of the receive cross-spectrum matrix, based upon a receive energy-maximizing criterion. Simulations of iterating aberration correction with a TDA filter have been investigated to study its convergence properties. Weak and strong human-body wall models generated the aberration; both emulated the human abdominal wall. Results after iteration improve aberration correction substantially, and both estimation methods converge, even for the case of strong aberration.

  14. Flexible Method for Developing Tactics, Techniques, and Procedures for Future Capabilities

    DTIC Science & Technology

    2009-02-01

    levels of ability, military experience, and motivation, (b) number and type of significant events, and (c) other sources of natural variability...research has developed a number of specific instruments designed to aid in this process. Second, the iterative, feed-forward nature of the method allows...FLEX method), but still lack the structured KE approach and iterative, feed-forward nature of the FLEX method. To facilitate decision making

  15. An iterative method for the Helmholtz equation

    NASA Technical Reports Server (NTRS)

    Bayliss, A.; Goldstein, C. I.; Turkel, E.

    1983-01-01

    An iterative algorithm for the solution of the Helmholtz equation is developed. The algorithm is based on a preconditioned conjugate gradient iteration for the normal equations. The preconditioning is based on an SSOR sweep for the discrete Laplacian. Numerical results are presented for a wide variety of problems of physical interest and demonstrate the effectiveness of the algorithm.

  16. An iterative method for obtaining the optimum lightning location on a spherical surface

    NASA Technical Reports Server (NTRS)

    Chao, Gao; Qiming, MA

    1991-01-01

    A brief introduction to the basic principles of an eigen method used to obtain the optimum source location of lightning is presented. The location of the optimum source is obtained by using multiple direction finders (DF's) on a spherical surface. An improvement of this method, which takes the distance of source-DF's as a constant, is presented. It is pointed out that using a weight factor of signal strength is not the most ideal method because of the inexact inverse signal strength-distance relation and the inaccurate signal amplitude. An iterative calculation method is presented using the distance from the source to the DF as a weight factor. This improved method has higher accuracy and needs only a little more calculation time. Some computer simulations for a 4DF system are presented to show the improvement of location through use of the iterative method.

  17. On iterative algorithms for quantitative photoacoustic tomography in the radiative transport regime

    NASA Astrophysics Data System (ADS)

    Wang, Chao; Zhou, Tie

    2017-11-01

    In this paper, we present a numerical reconstruction method for quantitative photoacoustic tomography (QPAT), based on the radiative transfer equation (RTE), which models light propagation more accurately than the diffusion approximation (DA). We investigate the reconstruction of the absorption and scattering coefficients of biological tissues. An improved fixed-point iterative method to retrieve the absorption coefficient, given the scattering coefficient, is proposed for its low computational cost; the convergence of this method is also proved. The Barzilai-Borwein (BB) method is applied to retrieve the two coefficients simultaneously. Since the reconstruction of optical coefficients involves the solutions of original and adjoint RTEs in the framework of optimization, an efficient solver with high accuracy is developed from Gao and Zhao (2009 Transp. Theory Stat. Phys. 38 149-92). Simulation experiments illustrate that the improved fixed-point iterative method and the BB method are competitive methods for QPAT in the relevant cases.
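
    The Barzilai-Borwein step referred to above is, in its generic form, an ordinary gradient iteration with a special step length; the toy sketch below applies it to a quadratic objective rather than the QPAT functional:

```python
import numpy as np

# Barzilai-Borwein gradient iteration on a toy quadratic f(x) = 0.5 x^T A x - b^T x
rng = np.random.default_rng(0)
M = rng.standard_normal((50, 50))
A = M @ M.T + 50 * np.eye(50)                    # symmetric positive definite
b = rng.standard_normal(50)
grad = lambda x: A @ x - b

x = np.zeros(50)
g = grad(x)
alpha = 1e-3                                     # first step: small fixed step length
for k in range(500):
    x_new = x - alpha * g
    g_new = grad(x_new)
    s, y = x_new - x, g_new - g
    alpha = (s @ s) / (s @ y)                    # BB1 step length
    x, g = x_new, g_new
    if np.linalg.norm(g) < 1e-8:
        break
print("iterations:", k + 1, "residual:", np.linalg.norm(A @ x - b))
```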

  18. An analytically iterative method for solving problems of cosmic-ray modulation

    NASA Astrophysics Data System (ADS)

    Kolesnyk, Yuriy L.; Bobik, Pavol; Shakhov, Boris A.; Putis, Marian

    2017-09-01

    The development of an analytically iterative method for solving steady-state as well as unsteady-state problems of cosmic-ray (CR) modulation is proposed. Iterations for obtaining the solutions are constructed for the spherically symmetric form of the CR propagation equation. The main solution of the considered problem consists of the zero-order solution that is obtained during the initial iteration and amendments that may be obtained by subsequent iterations. The finding of the zero-order solution is based on the CR isotropy during propagation in space, whereas the anisotropy is taken into account when finding the next amendments. To begin with, the method is applied to solve the problem of CR modulation where the diffusion coefficient κ and the solar wind speed u are constants with a local interstellar spectrum (LIS). The solution obtained with two iterations was compared with an analytical solution and with numerical solutions. Finally, solutions with only one iteration for two problems of CR modulation with u = constant and the same form of LIS were obtained and tested against numerical solutions. For the first problem, κ is proportional to the momentum of the particle p, so it has the form κ = k_0 η, where η = p/(m_0 c). For the second problem, the diffusion coefficient is given in the form κ = k_0 β η, where β = v/c is the particle speed relative to the speed of light. The obtained solutions matched the numerical solutions well, as well as the analytical solution for the problem where κ = constant.

  19. Nonnegative least-squares image deblurring: improved gradient projection approaches

    NASA Astrophysics Data System (ADS)

    Benvenuto, F.; Zanella, R.; Zanni, L.; Bertero, M.

    2010-02-01

    The least-squares approach to image deblurring leads to an ill-posed problem. The addition of the nonnegativity constraint, when appropriate, does not provide regularization, even if, as far as we know, a thorough investigation of the ill-posedness of the resulting constrained least-squares problem has still to be done. Iterative methods, converging to nonnegative least-squares solutions, have been proposed. Some of them have the 'semi-convergence' property, i.e. early stopping of the iteration provides 'regularized' solutions. In this paper we consider two of these methods: the projected Landweber (PL) method and the iterative image space reconstruction algorithm (ISRA). Even if they work well in many instances, they are not frequently used in practice because, in general, they require a large number of iterations before providing a sensible solution. Therefore, the main purpose of this paper is to refresh these methods by increasing their efficiency. Starting from the remark that PL and ISRA require only the computation of the gradient of the functional, we propose the application to these algorithms of special acceleration techniques that have been recently developed in the area of the gradient methods. In particular, we propose the application of efficient step-length selection rules and line-search strategies. Moreover, remarking that ISRA is a scaled gradient algorithm, we evaluate its behaviour in comparison with a recent scaled gradient projection (SGP) method for image deblurring. Numerical experiments demonstrate that the accelerated methods still exhibit the semi-convergence property, with a considerable gain both in the number of iterations and in the computational time; in particular, SGP appears definitely the most efficient one.
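
    A bare-bones projected Landweber (PL) iteration for nonnegative least squares looks as follows; the operator and data are random stand-ins, and none of the step-length selection or line-search accelerations discussed in the paper are included:

```python
import numpy as np

def projected_landweber(A, b, n_iter=500, tau=None):
    """Projected Landweber iteration for min ||Ax - b||^2 subject to x >= 0.
    Early stopping acts as regularization (the semi-convergence property)."""
    if tau is None:
        tau = 1.0 / np.linalg.norm(A, 2) ** 2    # step length below 2 / ||A||^2
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x = np.maximum(0.0, x - tau * (A.T @ (A @ x - b)))
    return x

# Toy example with a random ill-conditioned operator and a nonnegative signal
rng = np.random.default_rng(0)
A = rng.random((80, 60))
x_true = np.maximum(0.0, rng.standard_normal(60))
b = A @ x_true + 1e-3 * rng.standard_normal(80)
x_rec = projected_landweber(A, b)
print("relative error:", np.linalg.norm(x_rec - x_true) / np.linalg.norm(x_true))
```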

  20. Physics Model-Based Scatter Correction in Multi-Source Interior Computed Tomography.

    PubMed

    Gong, Hao; Li, Bin; Jia, Xun; Cao, Guohua

    2018-02-01

    Multi-source interior computed tomography (CT) has a great potential to provide ultra-fast and organ-oriented imaging at low radiation dose. However, X-ray cross scattering from multiple simultaneously activated X-ray imaging chains compromises imaging quality. Previously, we published two hardware-based scatter correction methods for multi-source interior CT. Here, we propose a software-based scatter correction method, with the benefit of no need for hardware modifications. The new method is based on a physics model and an iterative framework. The physics model was derived analytically, and was used to calculate X-ray scattering signals in both forward direction and cross directions in multi-source interior CT. The physics model was integrated to an iterative scatter correction framework to reduce scatter artifacts. The method was applied to phantom data from both Monte Carlo simulations and physical experimentation that were designed to emulate the image acquisition in a multi-source interior CT architecture recently proposed by our team. The proposed scatter correction method reduced scatter artifacts significantly, even with only one iteration. Within a few iterations, the reconstructed images fast converged toward the "scatter-free" reference images. After applying the scatter correction method, the maximum CT number error at the region-of-interests (ROIs) was reduced to 46 HU in numerical phantom dataset and 48 HU in physical phantom dataset respectively, and the contrast-noise-ratio at those ROIs increased by up to 44.3% and up to 19.7%, respectively. The proposed physics model-based iterative scatter correction method could be useful for scatter correction in dual-source or multi-source CT.

  1. Unsupervised iterative detection of land mines in highly cluttered environments.

    PubMed

    Batman, Sinan; Goutsias, John

    2003-01-01

    An unsupervised iterative scheme is proposed for land mine detection in heavily cluttered scenes. This scheme is based on iterating hybrid multispectral filters that consist of a decorrelating linear transform coupled with a nonlinear morphological detector. Detections extracted from the first pass are used to improve results in subsequent iterations. The procedure stops after a predetermined number of iterations. The proposed scheme addresses several weaknesses associated with previous adaptations of morphological approaches to land mine detection. Improvement in detection performance, robustness with respect to clutter inhomogeneities, a completely unsupervised operation, and computational efficiency are the main highlights of the method. Experimental results reveal excellent performance.

  2. Analysis of Anderson Acceleration on a Simplified Neutronics/Thermal Hydraulics System

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Toth, Alex; Kelley, C. T.; Slattery, Stuart R

    A standard method for solving coupled multiphysics problems in light water reactors is Picard iteration, which sequentially alternates between solving single physics applications. This solution approach is appealing due to simplicity of implementation and the ability to leverage existing software packages to accurately solve single physics applications. However, there are several drawbacks in the convergence behavior of this method; namely slow convergence and the necessity of heuristically chosen damping factors to achieve convergence in many cases. Anderson acceleration is a method that has been seen to be more robust and fast converging than Picard iteration for many problems, without significantly higher cost per iteration or complexity of implementation, though its effectiveness in the context of multiphysics coupling is not well explored. In this work, we develop a one-dimensional model simulating the coupling between the neutron distribution and fuel and coolant properties in a single fuel pin. We show that this model generally captures the convergence issues noted in Picard iterations which couple high-fidelity physics codes. We then use this model to gauge potential improvements with regard to rate of convergence and robustness from utilizing Anderson acceleration as an alternative to Picard iteration.

  3. Development of an efficient multigrid method for the NEM form of the multigroup neutron diffusion equation

    NASA Astrophysics Data System (ADS)

    Al-Chalabi, Rifat M. Khalil

    1997-09-01

    Development of an improvement to the computational efficiency of the existing nested iterative solution strategy of the Nodal Expansion Method (NEM) nodal-based neutron diffusion code NESTLE is presented. The improvement in the solution strategy is the result of developing a multilevel acceleration scheme that does not suffer from the numerical stalling associated with a number of iterative solution methods. The acceleration scheme is based on the multigrid method, which is specifically adapted for incorporation into the NEM nonlinear iterative strategy. This scheme optimizes the computational interplay between the spatial discretization and the NEM nonlinear iterative solution process through the use of the multigrid method. The combination of the NEM nodal method, calculation of the homogenized neutron nodal balance coefficients (i.e. restriction operator), efficient underlying smoothing algorithm (power method of NESTLE), and the finer mesh reconstruction algorithm (i.e. prolongation operator), all operating on a sequence of coarser spatial nodes, constitutes the multilevel acceleration scheme employed in this research. Two implementations of the multigrid method into the NESTLE code were examined: the Imbedded NEM Strategy and the Imbedded CMFD Strategy. The main difference in implementation between the two methods is that in the Imbedded NEM Strategy, the NEM solution is required at every MG level. Numerical tests have shown that the Imbedded NEM Strategy suffers from divergence at coarse-grid levels, hence all the results for the different benchmarks presented here were obtained using the Imbedded CMFD Strategy. The novelties in the developed MG method are as follows: the formulation of the restriction and prolongation operators, and the selection of the relaxation method. The restriction operator utilizes a variation of the reactor physics consistent homogenization technique. The prolongation operator is based upon a variant of the pin power reconstruction methodology. The relaxation method, which is the power method, utilizes a constant coefficient matrix within the NEM non-linear iterative strategy. The choice of the MG nesting within the nested iterative strategy enables the incorporation of other non-linear effects with no additional coding effort. In addition, if an eigenvalue problem is being solved, it remains an eigenvalue problem at all grid levels, simplifying coding implementation. The merit of the developed MG method was tested by incorporating it into the NESTLE iterative solver, and employing it to solve four different benchmark problems. In addition to the base cases, three different sensitivity studies are performed, examining the effects of number of MG levels, homogenized coupling coefficients correction (i.e. restriction operator), and fine-mesh reconstruction algorithm (i.e. prolongation operator). The multilevel acceleration scheme developed in this research provides the foundation for developing adaptive multilevel acceleration methods for steady-state and transient NEM nodal neutron diffusion equations. (Abstract shortened by UMI.)

  4. Anderson Acceleration for Fixed-Point Iterations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Walker, Homer F.

    The purpose of this grant was to support research on acceleration methods for fixed-point iterations, with applications to computational frameworks and simulation problems that are of interest to DOE.

  5. Modeling design iteration in product design and development and its solution by a novel artificial bee colony algorithm.

    PubMed

    Chen, Tinggui; Xiao, Renbin

    2014-01-01

    Due to fierce market competition, the ability to improve product quality and reduce development cost determines the core competitiveness of an enterprise. However, design iteration generally increases product cost and delays development time, so identifying and modeling couplings among tasks in product design and development has become an important issue for enterprises to settle. In this paper, the shortcomings of the WTM model are discussed, and the tearing approach as well as the inner iteration method are used to complement the classic WTM model. In addition, the ABC algorithm is introduced to find the optimal decoupling schemes. In this paper, firstly, the tearing approach and the inner iteration method are analyzed for solving coupled sets. Secondly, a hybrid iteration model combining these two technologies is set up. Thirdly, a high-performance swarm intelligence algorithm, artificial bee colony, is adopted to realize problem-solving. Finally, an engineering design of a chemical processing system is given in order to verify its reasonability and effectiveness.

  6. C–IBI: Targeting cumulative coordination within an iterative protocol to derive coarse-grained models of (multi-component) complex fluids

    DOE PAGES

    de Oliveira, Tiago E.; Netz, Paulo A.; Kremer, Kurt; ...

    2016-05-03

    We present a coarse-graining strategy that we test for aqueous mixtures. The method uses pair-wise cumulative coordination as a target function within an iterative Boltzmann inversion (IBI) like protocol. We name this method coordination iterative Boltzmann inversion (C–IBI). While the underlying coarse-grained model is still structure based and, thus, preserves pair-wise solution structure, our method also reproduces solvation thermodynamics of binary and/or ternary mixtures. In addition, we observe much faster convergence within C–IBI compared to IBI. To validate the robustness, we apply C–IBI to study test cases of solvation thermodynamics of aqueous urea and a triglycine solvation in aqueous urea.
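
    The flavor of an IBI-type update with a cumulative-coordination target can be sketched as below; the kT value, array layout, and function names are illustrative, and the real C-IBI protocol runs inside a full coarse-grained simulation loop:

```python
import numpy as np

kB_T = 2.494  # kJ/mol at 300 K, illustrative

def cumulative_coordination(r, g, rho):
    """C(r) = 4*pi*rho * integral_0^r g(r') r'^2 dr', by the trapezoidal rule."""
    integrand = g * r**2
    return 4.0 * np.pi * rho * np.concatenate(
        ([0.0], np.cumsum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(r))))

def cibi_update(V, r, g_current, g_target, rho, eps=1e-12):
    """One C-IBI-style correction: V_{i+1}(r) = V_i(r) + kT ln(C_i(r)/C_target(r))."""
    C_cur = cumulative_coordination(r, g_current, rho)
    C_tgt = cumulative_coordination(r, g_target, rho)
    return V + kB_T * np.log((C_cur + eps) / (C_tgt + eps))

# Minimal usage with made-up radial distribution functions
r = np.linspace(0.01, 1.5, 300)                      # nm grid, illustrative
g_tgt = 1.0 + np.exp(-((r - 0.30) / 0.05) ** 2)
g_cur = 1.0 + 0.8 * np.exp(-((r - 0.32) / 0.06) ** 2)
V_next = cibi_update(np.zeros_like(r), r, g_cur, g_tgt, rho=33.0)
```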

  7. Multiple solution of linear algebraic systems by an iterative method with recomputed preconditioner in the analysis of microstrip structures

    NASA Astrophysics Data System (ADS)

    Ahunov, Roman R.; Kuksenko, Sergey P.; Gazizov, Talgat R.

    2016-06-01

    The multiple solution of linear algebraic systems with dense matrices by iterative methods is considered. To accelerate the process, recomputing of the preconditioning matrix is used. An a priori criterion for recomputing, based on the change in the arithmetic mean of the solution time during the multiple solution, is proposed. To confirm the effectiveness of the proposed approach, numerical experiments using the iterative methods BiCGStab and CGS for four different sets of matrices on two examples of microstrip structures are carried out. For the solution of 100 linear systems, an acceleration of up to 1.6 times compared to the approach without recomputing is obtained.

  8. Iterative-method performance evaluation for multiple vectors associated with a large-scale sparse matrix

    NASA Astrophysics Data System (ADS)

    Imamura, Seigo; Ono, Kenji; Yokokawa, Mitsuo

    2016-07-01

    Ensemble computing, which is an instance of capacity computing, is an effective computing scenario for exascale parallel supercomputers. In ensemble computing, there are multiple linear systems associated with a common coefficient matrix. We improve the performance of iterative solvers for multiple vectors by solving them at the same time, that is, by solving for the product of the matrices. We implemented several iterative methods and compared their performance. The maximum performance on Sparc VIIIfx was 7.6 times higher than that of a naïve implementation. Finally, to deal with the different convergence processes of linear systems, we introduced a control method to eliminate the calculation of already converged vectors.

  9. Radiation dose reduction in medical x-ray CT via Fourier-based iterative reconstruction

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fahimian, Benjamin P.; Zhao Yunzhe; Huang Zhifeng

    Purpose: A Fourier-based iterative reconstruction technique, termed Equally Sloped Tomography (EST), is developed in conjunction with advanced mathematical regularization to investigate radiation dose reduction in x-ray CT. The method is experimentally implemented on fan-beam CT and evaluated as a function of imaging dose on a series of image quality phantoms and anonymous pediatric patient data sets. Numerical simulation experiments are also performed to explore the extension of EST to helical cone-beam geometry. Methods: EST is a Fourier based iterative algorithm, which iterates back and forth between real and Fourier space utilizing the algebraically exact pseudopolar fast Fourier transform (PPFFT). In each iteration, physical constraints and mathematical regularization are applied in real space, while the measured data are enforced in Fourier space. The algorithm is automatically terminated when a proposed termination criterion is met. Experimentally, fan-beam projections were acquired by the Siemens z-flying focal spot technology, and subsequently interleaved and rebinned to a pseudopolar grid. Image quality phantoms were scanned at systematically varied mAs settings, reconstructed by EST and conventional reconstruction methods such as filtered back projection (FBP), and quantified using metrics including resolution, signal-to-noise ratios (SNRs), and contrast-to-noise ratios (CNRs). Pediatric data sets were reconstructed at their original acquisition settings and additionally simulated to lower dose settings for comparison and evaluation of the potential for radiation dose reduction. Numerical experiments were conducted to quantify EST and other iterative methods in terms of image quality and computation time. The extension of EST to helical cone-beam CT was implemented by using the advanced single-slice rebinning (ASSR) method. Results: Based on the phantom and pediatric patient fan-beam CT data, it is demonstrated that EST reconstructions with the lowest scanner flux setting of 39 mAs produce comparable image quality, resolution, and contrast relative to FBP with the 140 mAs flux setting. Compared to the algebraic reconstruction technique and the expectation maximization statistical reconstruction algorithm, a significant reduction in computation time is achieved with EST. Finally, numerical experiments on helical cone-beam CT data suggest that the combination of EST and ASSR produces reconstructions with higher image quality and lower noise than the Feldkamp Davis and Kress (FDK) method and the conventional ASSR approach. Conclusions: A Fourier-based iterative method has been applied to the reconstruction of fan-beam CT data with reduced x-ray fluence. This method incorporates advantageous features in both real and Fourier space iterative schemes: using a fast and algebraically exact method to calculate forward projection, enforcing the measured data in Fourier space, and applying physical constraints and flexible regularization in real space. Our results suggest that EST can be utilized for radiation dose reduction in x-ray CT via the readily implementable technique of lowering mAs settings. Numerical experiments further indicate that EST requires less computation time than several other iterative algorithms and can, in principle, be extended to helical cone-beam geometry in combination with the ASSR method.

  10. Radiation dose reduction in medical x-ray CT via Fourier-based iterative reconstruction

    PubMed Central

    Fahimian, Benjamin P.; Zhao, Yunzhe; Huang, Zhifeng; Fung, Russell; Mao, Yu; Zhu, Chun; Khatonabadi, Maryam; DeMarco, John J.; Osher, Stanley J.; McNitt-Gray, Michael F.; Miao, Jianwei

    2013-01-01

    Purpose: A Fourier-based iterative reconstruction technique, termed Equally Sloped Tomography (EST), is developed in conjunction with advanced mathematical regularization to investigate radiation dose reduction in x-ray CT. The method is experimentally implemented on fan-beam CT and evaluated as a function of imaging dose on a series of image quality phantoms and anonymous pediatric patient data sets. Numerical simulation experiments are also performed to explore the extension of EST to helical cone-beam geometry. Methods: EST is a Fourier based iterative algorithm, which iterates back and forth between real and Fourier space utilizing the algebraically exact pseudopolar fast Fourier transform (PPFFT). In each iteration, physical constraints and mathematical regularization are applied in real space, while the measured data are enforced in Fourier space. The algorithm is automatically terminated when a proposed termination criterion is met. Experimentally, fan-beam projections were acquired by the Siemens z-flying focal spot technology, and subsequently interleaved and rebinned to a pseudopolar grid. Image quality phantoms were scanned at systematically varied mAs settings, reconstructed by EST and conventional reconstruction methods such as filtered back projection (FBP), and quantified using metrics including resolution, signal-to-noise ratios (SNRs), and contrast-to-noise ratios (CNRs). Pediatric data sets were reconstructed at their original acquisition settings and additionally simulated to lower dose settings for comparison and evaluation of the potential for radiation dose reduction. Numerical experiments were conducted to quantify EST and other iterative methods in terms of image quality and computation time. The extension of EST to helical cone-beam CT was implemented by using the advanced single-slice rebinning (ASSR) method. Results: Based on the phantom and pediatric patient fan-beam CT data, it is demonstrated that EST reconstructions with the lowest scanner flux setting of 39 mAs produce comparable image quality, resolution, and contrast relative to FBP with the 140 mAs flux setting. Compared to the algebraic reconstruction technique and the expectation maximization statistical reconstruction algorithm, a significant reduction in computation time is achieved with EST. Finally, numerical experiments on helical cone-beam CT data suggest that the combination of EST and ASSR produces reconstructions with higher image quality and lower noise than the Feldkamp Davis and Kress (FDK) method and the conventional ASSR approach. Conclusions: A Fourier-based iterative method has been applied to the reconstruction of fan-beam CT data with reduced x-ray fluence. This method incorporates advantageous features in both real and Fourier space iterative schemes: using a fast and algebraically exact method to calculate forward projection, enforcing the measured data in Fourier space, and applying physical constraints and flexible regularization in real space. Our results suggest that EST can be utilized for radiation dose reduction in x-ray CT via the readily implementable technique of lowering mAs settings. Numerical experiments further indicate that EST requires less computation time than several other iterative algorithms and can, in principle, be extended to helical cone-beam geometry in combination with the ASSR method. PMID:23464329

  11. Analysis of the iteratively regularized Gauss-Newton method under a heuristic rule

    NASA Astrophysics Data System (ADS)

    Jin, Qinian; Wang, Wei

    2018-03-01

    The iteratively regularized Gauss-Newton method is one of the most prominent regularization methods for solving nonlinear ill-posed inverse problems when the data is corrupted by noise. In order to produce a useful approximate solution, this iterative method should be terminated properly. The existing a priori and a posteriori stopping rules require accurate information on the noise level, which may not be available or reliable in practical applications. In this paper we propose a heuristic selection rule for this regularization method, which requires no information on the noise level. By imposing certain conditions on the noise, we derive a posteriori error estimates on the approximate solutions under various source conditions. Furthermore, we establish a convergence result without using any source condition. Numerical results are presented to illustrate the performance of our heuristic selection rule.
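
    The iteratively regularized Gauss-Newton update itself is compact: x_{k+1} = x_0 + (J_k^T J_k + α_k I)^{-1} J_k^T (y^δ - F(x_k) + J_k (x_k - x_0)). The sketch below applies it to an invented two-parameter exponential model with a simple geometric decay of α_k, which is exactly the kind of a priori choice the paper's heuristic rule is meant to replace:

```python
import numpy as np

def F(x, t):                      # toy forward model: y = x0 * exp(-x1 * t)
    return x[0] * np.exp(-x[1] * t)

def jac(x, t):                    # Jacobian of F with respect to x
    return np.column_stack([np.exp(-x[1] * t),
                            -x[0] * t * np.exp(-x[1] * t)])

t = np.linspace(0.0, 1.0, 50)
x_true = np.array([2.0, 3.0])
rng = np.random.default_rng(0)
y_delta = F(x_true, t) + 1e-3 * rng.standard_normal(t.size)   # noisy data

x0 = np.array([1.0, 1.0])         # a priori guess, also the regularization center
x = x0.copy()
alpha = 1.0                       # initial regularization parameter
for k in range(20):
    J = jac(x, t)
    rhs = J.T @ (y_delta - F(x, t) + J @ (x - x0))
    x = x0 + np.linalg.solve(J.T @ J + alpha * np.eye(2), rhs)
    alpha *= 0.5                  # geometric decay alpha_k = alpha_0 * q^k
print("IRGNM estimate:", x)
```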

  12. Single-step reinitialization and extending algorithms for level-set based multi-phase flow simulations

    NASA Astrophysics Data System (ADS)

    Fu, Lin; Hu, Xiangyu Y.; Adams, Nikolaus A.

    2017-12-01

    We propose efficient single-step formulations for reinitialization and extending algorithms, which are critical components of level-set based interface-tracking methods. The level-set field is reinitialized with a single-step (non iterative) "forward tracing" algorithm. A minimum set of cells is defined that describes the interface, and reinitialization employs only data from these cells. Fluid states are extrapolated or extended across the interface by a single-step "backward tracing" algorithm. Both algorithms, which are motivated by analogy to ray-tracing, avoid multiple block-boundary data exchanges that are inevitable for iterative reinitialization and extending approaches within a parallel-computing environment. The single-step algorithms are combined with a multi-resolution conservative sharp-interface method and validated by a wide range of benchmark test cases. We demonstrate that the proposed reinitialization method achieves second-order accuracy in conserving the volume of each phase. The interface location is invariant to reapplication of the single-step reinitialization. Generally, we observe smaller absolute errors than for standard iterative reinitialization on the same grid. The computational efficiency is higher than for the standard and typical high-order iterative reinitialization methods. We observe a 2- to 6-times efficiency improvement over the standard method for serial execution. The proposed single-step extending algorithm, which is commonly employed for assigning data to ghost cells with ghost-fluid or conservative interface interaction methods, shows about 10-times efficiency improvement over the standard method while maintaining same accuracy. Despite their simplicity, the proposed algorithms offer an efficient and robust alternative to iterative reinitialization and extending methods for level-set based multi-phase simulations.

  13. Discrete fourier transform (DFT) analysis for applications using iterative transform methods

    NASA Technical Reports Server (NTRS)

    Dean, Bruce H. (Inventor)

    2012-01-01

    According to various embodiments, a method is provided for determining aberration data for an optical system. The method comprises collecting a data signal, and generating a pre-transformation algorithm. The data is pre-transformed by multiplying the data with the pre-transformation algorithm. A discrete Fourier transform of the pre-transformed data is performed in an iterative loop. The method further comprises back-transforming the data to generate aberration data.

  14. Iterative Methods for Solving Nonlinear Parabolic Problem in Pension Saving Management

    NASA Astrophysics Data System (ADS)

    Koleva, M. N.

    2011-11-01

    In this work we consider a nonlinear parabolic equation, obtained from a Riccati-like transformation of the Hamilton-Jacobi-Bellman equation, arising in pension saving management. We discuss two numerical iterative methods for solving the model problem: a fully implicit Picard method and a mixed Picard-Newton method, which preserves the parabolic characteristics of the differential problem. Numerical experiments comparing the accuracy and effectiveness of the algorithms are discussed. Finally, observations are given.

  15. Domain decomposition preconditioners for the spectral collocation method

    NASA Technical Reports Server (NTRS)

    Quarteroni, Alfio; Sacchilandriani, Giovanni

    1988-01-01

    Several block iteration preconditioners are proposed and analyzed for the solution of elliptic problems by spectral collocation methods in a region partitioned into several rectangles. It is shown that convergence is achieved with a rate which does not depend on the polynomial degree of the spectral solution. The iterative methods here presented can be effectively implemented on multiprocessor systems due to their high degree of parallelism.

  16. Single image super-resolution based on approximated Heaviside functions and iterative refinement

    PubMed Central

    Wang, Xin-Yu; Huang, Ting-Zhu; Deng, Liang-Jian

    2018-01-01

    One method of solving the single-image super-resolution problem is to use Heaviside functions. This has been done previously by making a binary classification of image components as “smooth” and “non-smooth”, describing these with approximated Heaviside functions (AHFs), and iteration including l1 regularization. We now introduce a new method in which the binary classification of image components is extended to different degrees of smoothness and non-smoothness, these components being represented by various classes of AHFs. Taking into account the sparsity of the non-smooth components, their coefficients are l1 regularized. In addition, to pick up more image details, the new method uses an iterative refinement for the residuals between the original low-resolution input and the downsampled resulting image. Experimental results showed that the new method is superior to the original AHF method and to four other published methods. PMID:29329298

  17. Construction, classification and parametrization of complex Hadamard matrices

    NASA Astrophysics Data System (ADS)

    Szöllősi, Ferenc

    To improve the design of nuclear systems, high-fidelity neutron fluxes are required. Leadership-class machines provide platforms on which very large problems can be solved. Computing such fluxes efficiently requires numerical methods with good convergence properties and algorithms that can scale to hundreds of thousands of cores. Many 3-D deterministic transport codes are decomposable in space and angle only, limiting them to tens of thousands of cores. Most codes rely on methods such as Gauss Seidel for fixed source problems and power iteration for eigenvalue problems, which can be slow to converge for challenging problems like those with highly scattering materials or high dominance ratios. Three methods have been added to the 3-D SN transport code Denovo that are designed to improve convergence and enable the full use of cutting-edge computers. The first is a multigroup Krylov solver that converges more quickly than Gauss Seidel and parallelizes the code in energy such that Denovo can use hundreds of thousand of cores effectively. The second is Rayleigh quotient iteration (RQI), an old method applied in a new context. This eigenvalue solver finds the dominant eigenvalue in a mathematically optimal way and should converge in fewer iterations than power iteration. RQI creates energy-block-dense equations that the new Krylov solver treats efficiently. However, RQI can have convergence problems because it creates poorly conditioned systems. This can be overcome with preconditioning. The third method is a multigrid-in-energy preconditioner. The preconditioner takes advantage of the new energy decomposition because the grids are in energy rather than space or angle. The preconditioner greatly reduces iteration count for many problem types and scales well in energy. It also allows RQI to be successful for problems it could not solve otherwise. The methods added to Denovo accomplish the goals of this work. They converge in fewer iterations than traditional methods and enable the use of hundreds of thousands of cores. Each method can be used individually, with the multigroup Krylov solver and multigrid-in-energy preconditioner being particularly successful on their own. The largest benefit, though, comes from using these methods in concert.
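
    Rayleigh quotient iteration, mentioned above as the eigenvalue solver, can be sketched for a small symmetric matrix as below; in the transport setting the direct solve of the shifted system is replaced by preconditioned Krylov solves:

```python
import numpy as np

def rayleigh_quotient_iteration(A, x0, tol=1e-12, max_iter=50):
    """Rayleigh quotient iteration for a symmetric matrix A (locally cubic convergence)."""
    x = x0 / np.linalg.norm(x0)
    for k in range(max_iter):
        rho = x @ A @ x                               # Rayleigh quotient
        try:
            y = np.linalg.solve(A - rho * np.eye(A.shape[0]), x)
        except np.linalg.LinAlgError:                 # shift hit an exact eigenvalue
            return rho, x, k
        x = y / np.linalg.norm(y)
        if np.linalg.norm(A @ x - rho * x) < tol:
            break
    return x @ A @ x, x, k + 1

rng = np.random.default_rng(0)
B = rng.standard_normal((6, 6))
A = 0.5 * (B + B.T)                                   # small symmetric test matrix
lam, v, iters = rayleigh_quotient_iteration(A, rng.standard_normal(6))
print(iters, lam, np.min(np.abs(np.linalg.eigvalsh(A) - lam)))
```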

  18. A non-iterative twin image elimination method with two in-line digital holograms

    NASA Astrophysics Data System (ADS)

    Kim, Jongwu; Lee, Heejung; Jeon, Philjun; Kim, Dug Young

    2018-02-01

    We propose a simple non-iterative in-line holographic measurement method which can effectively eliminate a twin image in digital holographic 3D imaging. It is shown that a twin image can be effectively eliminated with only two measured holograms by using a simple numerical propagation algorithm and arithmetic calculations.

  19. A Probabilistic Collocation Based Iterative Kalman Filter for Landfill Data Assimilation

    NASA Astrophysics Data System (ADS)

    Qiang, Z.; Zeng, L.; Wu, L.

    2016-12-01

    Due to the strong spatial heterogeneity of landfill, uncertainty is ubiquitous in gas transport process in landfill. To accurately characterize the landfill properties, the ensemble Kalman filter (EnKF) has been employed to assimilate the measurements, e.g., the gas pressure. As a Monte Carlo (MC) based method, the EnKF usually requires a large ensemble size, which poses a high computational cost for large scale problems. In this work, we propose a probabilistic collocation based iterative Kalman filter (PCIKF) to estimate permeability in a liquid-gas coupling model. This method employs polynomial chaos expansion (PCE) to represent and propagate the uncertainties of model parameters and states, and an iterative form of Kalman filter to assimilate the current gas pressure data. To further reduce the computation cost, the functional ANOVA (analysis of variance) decomposition is conducted, and only the first order ANOVA components are remained for PCE. Illustrated with numerical case studies, this proposed method shows significant superiority in computation efficiency compared with the traditional MC based iterative EnKF. The developed method has promising potential in reliable prediction and management of landfill gas production.

  20. Parallel iterative solution for h and p approximations of the shallow water equations

    USGS Publications Warehouse

    Barragy, E.J.; Walters, R.A.

    1998-01-01

    A p finite element scheme and parallel iterative solver are introduced for a modified form of the shallow water equations. The governing equations are the three-dimensional shallow water equations. After a harmonic decomposition in time and rearrangement, the resulting equations are a complex Helmholtz problem for surface elevation, and a complex momentum equation for the horizontal velocity. Both equations are nonlinear and the resulting system is solved using the Picard iteration combined with a preconditioned biconjugate gradient (PBCG) method for the linearized subproblems. A subdomain-based parallel preconditioner is developed which uses incomplete LU factorization with thresholding (ILUT) methods within subdomains, overlapping ILUT factorizations for subdomain boundaries and under-relaxed iteration for the resulting block system. The method builds on techniques successfully applied to linear elements by introducing ordering and condensation techniques to handle uniform p refinement. The combined methods show good performance for a range of p (element order), h (element size), and N (number of processors). Performance and scalability results are presented for a field scale problem where up to 512 processors are used. © 1998 Elsevier Science Ltd. All rights reserved.

  1. An iterative kernel based method for fourth order nonlinear equation with nonlinear boundary condition

    NASA Astrophysics Data System (ADS)

    Azarnavid, Babak; Parand, Kourosh; Abbasbandy, Saeid

    2018-06-01

    This article discusses an iterative reproducing kernel method with respect to its effectiveness and capability of solving a fourth-order boundary value problem with nonlinear boundary conditions modeling beams on elastic foundations. Since there is no method of obtaining a reproducing kernel which satisfies nonlinear boundary conditions, the standard reproducing kernel methods cannot be used directly to solve boundary value problems with nonlinear boundary conditions, as there is no knowledge about the existence and uniqueness of the solution. The aim of this paper is, therefore, to construct an iterative method by combining the reproducing kernel Hilbert space method with a shooting-like technique to solve the mentioned problems. Error estimation for reproducing kernel Hilbert space methods for nonlinear boundary value problems has yet to be discussed in the literature. In this paper, we present error estimation for the reproducing kernel method for nonlinear boundary value problems, probably for the first time. Some numerical results are given to demonstrate the applicability of the method.

  2. Error analysis applied to several inversion techniques used for the retrieval of middle atmospheric constituents from limb-scanning MM-wave spectroscopic measurements

    NASA Technical Reports Server (NTRS)

    Puliafito, E.; Bevilacqua, R.; Olivero, J.; Degenhardt, W.

    1992-01-01

    The formal retrieval error analysis of Rodgers (1990) allows the quantitative determination of such retrieval properties as measurement error sensitivity, resolution, and inversion bias. This technique was applied to five numerical inversion techniques and two nonlinear iterative techniques used for the retrieval of middle atmospheric constituent concentrations from limb-scanning millimeter-wave spectroscopic measurements. It is found that the iterative methods have better vertical resolution, but are slightly more sensitive to measurement error than the constrained matrix methods. The iterative methods converge to the exact solution, whereas two of the matrix methods under consideration have an explicit constraint, which introduces sensitivity of the solution to the a priori profile. Tradeoffs of these retrieval characteristics are presented.

  3. An improved 3D MoF method based on analytical partial derivatives

    NASA Astrophysics Data System (ADS)

    Chen, Xiang; Zhang, Xiong

    2016-12-01

    The MoF (Moment of Fluid) method is one of the most accurate approaches among various surface reconstruction algorithms. Like other second-order methods, the MoF method needs to solve an implicit optimization problem to obtain the optimal approximate surface. Therefore, the partial derivatives of the objective function have to be involved during the iteration for efficiency and accuracy. However, to the best of our knowledge, the derivatives are currently estimated numerically by finite difference approximation because it is very difficult to obtain the analytical derivatives of the objective function for an implicit optimization problem. Employing numerical derivatives in an iteration not only increases the computational cost, but also deteriorates the convergence rate and robustness of the iteration due to their numerical error. In this paper, the analytical first-order partial derivatives of the objective function are deduced for 3D problems. The analytical derivatives can be calculated accurately, so they are incorporated into the MoF method to improve its accuracy, efficiency and robustness. Numerical studies show that by using the analytical derivatives the iterations converge in all mixed cells, with an efficiency improvement of 3 to 4 times.

  4. Strongly Coupled Fluid-Body Dynamics in the Immersed Boundary Projection Method

    NASA Astrophysics Data System (ADS)

    Wang, Chengjie; Eldredge, Jeff D.

    2014-11-01

    A computational algorithm is developed to simulate dynamically coupled interaction between fluid and rigid bodies. The basic computational framework is built upon a multi-domain immersed boundary method library, whirl, developed in previous work. In this library, the Navier-Stokes equations for incompressible flow are solved on a uniform Cartesian grid by the vorticity-based immersed boundary projection method of Colonius and Taira. A solver for the dynamics of rigid-body systems is also included. The fluid and rigid-body solvers are strongly coupled with an iterative approach based on the block Gauss-Seidel method. Interfacial force, with its intimate connection with the Lagrange multipliers used in the fluid solver, is used as the primary iteration variable. Relaxation, developed from a stability analysis of the iterative scheme, is used to achieve convergence in only 2-4 iterations per time step. Several two- and three-dimensional numerical tests are conducted to validate and demonstrate the method, including flapping of flexible wings, self-excited oscillations of a system of linked plates, and three-dimensional propulsion of a flexible fluked tail. This work has been supported by AFOSR, under Award FA9550-11-1-0098.

  5. Analytic Evolution of Singular Distribution Amplitudes in QCD

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tandogan Kunkel, Asli

    2014-08-01

    Distribution amplitudes (DAs) are the basic functions that contain information about the quark momentum. DAs are necessary to describe hard exclusive processes in quantum chromodynamics. We describe a method of analytic evolution of DAs that have singularities such as nonzero values at the end points of the support region, jumps at some points inside the support region, and cusps. We illustrate the method by applying it to the evolution of a flat (constant) DA and an antisymmetric flat DA, and then use the method for evolution of the two-photon generalized distribution amplitude. Our approach to DA evolution has advantages over the standard method of expansion in Gegenbauer polynomials [1, 2] and over a straightforward iteration of an initial distribution with the evolution kernel. Expansion in Gegenbauer polynomials requires an infinite number of terms in order to accurately reproduce functions in the vicinity of singular points. Straightforward iteration of an initial distribution produces logarithmically divergent terms at each iteration. In our method the logarithmic singularities are summed from the start, which immediately produces a continuous curve. Afterwards, in order to get precise results, only one or two iterations are needed.

  6. Conjugate gradient coupled with multigrid for an indefinite problem

    NASA Technical Reports Server (NTRS)

    Gozani, J.; Nachshon, A.; Turkel, E.

    1984-01-01

    An iterative algorithm for the Helmholtz equation is presented. This scheme is based on the preconditioned conjugate gradient method for the normal equations. The preconditioning is one cycle of a multigrid method for the discrete Laplacian. The smoothing algorithm is red-black Gauss-Seidel and is constructed so it is a symmetric operator. The total number of iterations needed by the algorithm is independent of h. By varying the number of grids, the number of iterations depends only weakly on k when k³h² is constant. Comparisons with an SSOR preconditioner are presented.

  7. Iterative method of construction of a bifurcation diagram of autorotation motions for a system with one degree of freedom

    NASA Astrophysics Data System (ADS)

    Klimina, L. A.

    2018-05-01

    A modification of the Picard approach is suggested that targets the construction of a bifurcation diagram of 2π-periodic motions of a mechanical system with a cylindrical phase space. Each iterative step is based on principles of averaging and energy balance similar to the Poincaré-Pontryagin approach. If the iterative procedure converges, it provides the periodic trajectory of the system as a function of the bifurcation parameter of the model. The method is applied to describe self-sustained rotations in a model of an aerodynamic pendulum.

  8. Optimized iterative decoding method for TPC coded CPM

    NASA Astrophysics Data System (ADS)

    Ma, Yanmin; Lai, Penghui; Wang, Shilian; Xie, Shunqin; Zhang, Wei

    2018-05-01

    The Turbo Product Code (TPC) coded Continuous Phase Modulation (CPM) system (TPC-CPM) has been widely used in aeronautical telemetry and satellite communication. This paper investigates the improvement and optimization of the TPC-CPM system. We first add an interleaver and deinterleaver to the TPC-CPM system, and then establish an iterative decoding scheme. However, the improved system has poor convergence. To overcome this issue, we use Extrinsic Information Transfer (EXIT) analysis to find the optimal factors for the system. Experiments show that our method effectively improves the convergence performance.

  9. 3D near-to-surface conductivity reconstruction by inversion of VETEM data using the distorted Born iterative method

    USGS Publications Warehouse

    Wang, G.L.; Chew, W.C.; Cui, T.J.; Aydiner, A.A.; Wright, D.L.; Smith, D.V.

    2004-01-01

    Three-dimensional (3D) subsurface imaging by inversion of data obtained from the very early time electromagnetic system (VETEM) was discussed. The study was carried out using the distorted Born iterative method to handle the inherently nonlinear character of the 3D inversion problem. The forward solver was based on the total-current formulation bi-conjugate gradient-fast Fourier transform (BCCG-FFT) method. It was found that the selection of the regularization parameter follows a heuristic rule, as used in the Levenberg-Marquardt algorithm, so that the iteration is stable.

  10. An Efficient Algorithm for Perturbed Orbit Integration Combining Analytical Continuation and Modified Chebyshev Picard Iteration

    NASA Astrophysics Data System (ADS)

    Elgohary, T.; Kim, D.; Turner, J.; Junkins, J.

    2014-09-01

    Several methods exist for integrating the motion in high-order gravity fields. Some recent methods use an approximate starting orbit, and an efficient method is needed for generating warm starts that account for specific low-order gravity approximations. By introducing two scalar Lagrange-like invariants and employing the Leibniz product rule, the perturbed motion is integrated by a novel recursive formulation. The Lagrange-like invariants allow exact arbitrary-order time derivatives. Restricting attention to the perturbations due to the zonal harmonics J2 through J6, we illustrate the approach. The recursively generated vector-valued time derivatives for the trajectory are used to develop a continuation series-based solution for propagating position and velocity. Numerical comparisons indicate performance improvements of ~70X over existing explicit Runge-Kutta methods while maintaining mm accuracy for the orbit predictions. The Modified Chebyshev Picard Iteration (MCPI) is an iterative path approximation method to solve nonlinear ordinary differential equations. The MCPI utilizes Picard iteration with orthogonal Chebyshev polynomial basis functions to recursively update the states. The key advantages of the MCPI are as follows: 1) Large segments of a trajectory can be approximated by evaluating the forcing function at multiple nodes along the current approximation during each iteration. 2) It can readily handle general gravity perturbations as well as non-conservative forces. 3) Parallel applications are possible. The Picard sequence converges to the solution over large time intervals when the forces are continuous and differentiable. Depending on the accuracy of the starting solution, however, the MCPI may require a significant number of iterations and function evaluations compared to other integrators. In this work, we provide an efficient methodology to establish good starting solutions from the continuation series method; this warm start improves the performance of the MCPI significantly and will likely be useful for other applications where efficiently computed approximate orbit solutions are needed.
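
    The fixed-point idea underlying MCPI, repeatedly substituting the current trajectory approximation into the integral form of the ODE over a whole segment, can be illustrated with a plain Picard sketch. This uses a uniform grid and trapezoid quadrature on a toy scalar ODE rather than Chebyshev nodes and orbital dynamics, so it shows only the skeleton of the method.

        import numpy as np

        def picard_ode(f, t, x0, iters=30):
            # Picard iteration x_{k+1}(t) = x0 + integral_0^t f(s, x_k(s)) ds, with the
            # integral evaluated by a cumulative trapezoid rule on the grid t. MCPI
            # replaces this by Chebyshev nodes and orthogonal-polynomial quadrature;
            # only the fixed-point skeleton is shown here.
            x = np.full_like(t, x0)
            for _ in range(iters):
                integrand = f(t, x)
                increments = 0.5 * (integrand[1:] + integrand[:-1]) * np.diff(t)
                x = x0 + np.concatenate(([0.0], np.cumsum(increments)))
            return x

        t = np.linspace(0.0, 2.0, 201)
        x = picard_ode(lambda t, x: -x, t, 1.0)     # dx/dt = -x, x(0) = 1
        print(np.max(np.abs(x - np.exp(-t))))       # small: dominated by quadrature error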

  11. Data-driven Green's function retrieval and application to imaging with multidimensional deconvolution

    NASA Astrophysics Data System (ADS)

    Broggini, Filippo; Wapenaar, Kees; van der Neut, Joost; Snieder, Roel

    2014-01-01

    An iterative method is presented that allows one to retrieve the Green's function originating from a virtual source located inside a medium using reflection data measured only at the acquisition surface. In addition to the reflection response, an estimate of the travel times corresponding to the direct arrivals is required. However, no detailed information about the heterogeneities in the medium is needed. The iterative scheme generalizes the Marchenko equation for inverse scattering to the seismic reflection problem. To give insight into the mechanism of the iterative method, its steps for a simple layered medium are analyzed using physical arguments based on the stationary phase method. The retrieved Green's wavefield is shown to correctly contain the multiples due to the inhomogeneities present in the medium. Additionally, a variant of the iterative scheme enables decomposition of the retrieved wavefield into its downgoing and upgoing components. These wavefields then enable creation of a ghost-free image of the medium with either cross correlation or multidimensional deconvolution, presenting an advantage over standard prestack migration.

  12. LDPC-based iterative joint source-channel decoding for JPEG2000.

    PubMed

    Pu, Lingling; Wu, Zhenyu; Bilgin, Ali; Marcellin, Michael W; Vasic, Bane

    2007-02-01

    A framework is proposed for iterative joint source-channel decoding of JPEG2000 codestreams. At the encoder, JPEG2000 is used to perform source coding with certain error-resilience (ER) modes, and LDPC codes are used to perform channel coding. During decoding, the source decoder uses the ER modes to identify corrupt sections of the codestream and provides this information to the channel decoder. Decoding is carried out jointly in an iterative fashion. Experimental results indicate that the proposed method requires fewer iterations and improves overall system performance.

  13. Application of a dual-resolution voxelization scheme to compressed-sensing (CS)-based iterative reconstruction in digital tomosynthesis (DTS)

    NASA Astrophysics Data System (ADS)

    Park, S. Y.; Kim, G. A.; Cho, H. S.; Park, C. K.; Lee, D. Y.; Lim, H. W.; Lee, H. W.; Kim, K. S.; Kang, S. Y.; Park, J. E.; Kim, W. S.; Jeon, D. H.; Je, U. K.; Woo, T. H.; Oh, J. E.

    2018-02-01

    In recent digital tomosynthesis (DTS), iterative reconstruction methods are often used owing to their potential to provide multiplanar images of superior quality to conventional filtered-backprojection (FBP)-based methods. However, they require enormous computational cost in the iterative process, which has been an obstacle to putting them to practical use. In this work, we propose a new DTS reconstruction method incorporating a dual-resolution voxelization scheme in an attempt to overcome these difficulties, in which the voxels outside a small region-of-interest (ROI) containing the diagnostic target are binned by 2 × 2 × 2 while the voxels inside the ROI remain unbinned. We considered a compressed-sensing (CS)-based iterative algorithm with a dual-constraint strategy for more accurate DTS reconstruction. We implemented the proposed algorithm and performed a systematic simulation and experiment to demonstrate its viability. Our results indicate that the proposed method seems to be effective for considerably reducing computational cost in iterative DTS reconstruction, while keeping the image quality inside the ROI largely undegraded. A binning size of 2 × 2 × 2 required only about 31.9% of the computational memory and about 2.6% of the reconstruction time, compared to the no-binning case. The reconstruction quality was evaluated in terms of the root-mean-square error (RMSE), the contrast-to-noise ratio (CNR), and the universal-quality index (UQI).

  14. Iterated unscented Kalman filter for phase unwrapping of interferometric fringes.

    PubMed

    Xie, Xianming

    2016-08-22

    A new phase unwrapping algorithm based on an iterated unscented Kalman filter is proposed to estimate the unambiguous unwrapped phase of interferometric fringes. The method combines an iterated unscented Kalman filter with a robust phase gradient estimator based on an amended matrix pencil model, and an efficient quality-guided strategy based on heap sort. The iterated unscented Kalman filter, which is among the most robust methods in nonlinear signal processing under the Bayesian framework, is applied for the first time to perform noise suppression and phase unwrapping of interferometric fringes simultaneously; this simplifies, and can even eliminate, the pre-filtering procedure that normally precedes phase unwrapping. The robust phase gradient estimator is used to obtain phase gradient information from interferometric fringes efficiently and accurately, as needed by the iterated unscented Kalman filtering phase unwrapping model. The efficient quality-guided strategy ensures that the proposed method quickly unwraps wrapped pixels along the path from the high-quality area to the low-quality area of wrapped phase images, which greatly improves the efficiency of phase unwrapping. Results obtained from synthetic data and real data show that the proposed method can obtain better solutions with an acceptable time consumption compared with some of the most widely used algorithms.

  15. Using the surface panel method to predict the steady performance of ducted propellers

    NASA Astrophysics Data System (ADS)

    Cai, Hao-Peng; Su, Yu-Min; Li, Xin; Shen, Hai-Long

    2009-12-01

    A new numerical method was developed for predicting the steady hydrodynamic performance of ducted propellers. A potential-based surface panel method was applied to both the duct and the propeller, and the interaction between them was solved by an induced velocity potential iterative method. Compared with the induced velocity iterative method, the presented method saves programming and computation time. Numerical results for a JD simplified ducted propeller series showed that the presented method is effective for predicting the steady hydrodynamic performance of ducted propellers.

  16. Radiation dose reduction in medical x-ray CT via Fourier-based iterative reconstruction.

    PubMed

    Fahimian, Benjamin P; Zhao, Yunzhe; Huang, Zhifeng; Fung, Russell; Mao, Yu; Zhu, Chun; Khatonabadi, Maryam; DeMarco, John J; Osher, Stanley J; McNitt-Gray, Michael F; Miao, Jianwei

    2013-03-01

    A Fourier-based iterative reconstruction technique, termed Equally Sloped Tomography (EST), is developed in conjunction with advanced mathematical regularization to investigate radiation dose reduction in x-ray CT. The method is experimentally implemented on fan-beam CT and evaluated as a function of imaging dose on a series of image quality phantoms and anonymous pediatric patient data sets. Numerical simulation experiments are also performed to explore the extension of EST to helical cone-beam geometry. EST is a Fourier-based iterative algorithm, which iterates back and forth between real and Fourier space utilizing the algebraically exact pseudopolar fast Fourier transform (PPFFT). In each iteration, physical constraints and mathematical regularization are applied in real space, while the measured data are enforced in Fourier space. The algorithm is automatically terminated when a proposed termination criterion is met. Experimentally, fan-beam projections were acquired by the Siemens z-flying focal spot technology, and subsequently interleaved and rebinned to a pseudopolar grid. Image quality phantoms were scanned at systematically varied mAs settings, reconstructed by EST and conventional reconstruction methods such as filtered back projection (FBP), and quantified using metrics including resolution, signal-to-noise ratios (SNRs), and contrast-to-noise ratios (CNRs). Pediatric data sets were reconstructed at their original acquisition settings and additionally simulated to lower dose settings for comparison and evaluation of the potential for radiation dose reduction. Numerical experiments were conducted to quantify EST and other iterative methods in terms of image quality and computation time. The extension of EST to helical cone-beam CT was implemented by using the advanced single-slice rebinning (ASSR) method. Based on the phantom and pediatric patient fan-beam CT data, it is demonstrated that EST reconstructions with the lowest scanner flux setting of 39 mAs produce comparable image quality, resolution, and contrast relative to FBP with the 140 mAs flux setting. Compared to the algebraic reconstruction technique and the expectation maximization statistical reconstruction algorithm, a significant reduction in computation time is achieved with EST. Finally, numerical experiments on helical cone-beam CT data suggest that the combination of EST and ASSR produces reconstructions with higher image quality and lower noise than the Feldkamp-Davis-Kress (FDK) method and the conventional ASSR approach. A Fourier-based iterative method has been applied to the reconstruction of fan-beam CT data with reduced x-ray fluence. This method incorporates advantageous features in both real and Fourier space iterative schemes: using a fast and algebraically exact method to calculate forward projection, enforcing the measured data in Fourier space, and applying physical constraints and flexible regularization in real space. Our results suggest that EST can be utilized for radiation dose reduction in x-ray CT via the readily implementable technique of lowering mAs settings. Numerical experiments further indicate that EST requires less computation time than several other iterative algorithms and can, in principle, be extended to helical cone-beam geometry in combination with the ASSR method.

  17. The solution of linear systems of equations with a structural analysis code on the NAS CRAY-2

    NASA Technical Reports Server (NTRS)

    Poole, Eugene L.; Overman, Andrea L.

    1988-01-01

    Two methods for solving linear systems of equations on the NAS Cray-2 are described. One is a direct method; the other is an iterative method. Both methods exploit the architecture of the Cray-2, particularly the vectorization, and are aimed at structural analysis applications. To demonstrate and evaluate the methods, they were installed in a finite element structural analysis code denoted the Computational Structural Mechanics (CSM) Testbed. A description of the techniques used to integrate the two solvers into the Testbed is given. Storage schemes, memory requirements, operation counts, and reformatting procedures are discussed. Finally, results from the new methods are compared with results from the initial Testbed sparse Choleski equation solver for three structural analysis problems. The new direct solvers described achieve the highest computational rates of the methods compared. The new iterative methods are not able to achieve as high computation rates as the vectorized direct solvers but are best for well conditioned problems which require fewer iterations to converge to the solution.

  18. An Automated Baseline Correction Method Based on Iterative Morphological Operations.

    PubMed

    Chen, Yunliang; Dai, Liankui

    2018-05-01

    Raman spectra usually suffer from baseline drift caused by fluorescence or other reasons. Therefore, baseline correction is a necessary and crucial step that must be performed before subsequent processing and analysis of Raman spectra. An automated baseline correction method based on iterative morphological operations is proposed in this work. The method can adaptively determine the structuring element first and then gradually remove the spectral peaks during iteration to get an estimated baseline. Experiments on simulated data and real-world Raman data show that the proposed method is accurate, fast, and flexible for handling different kinds of baselines in various practical situations. The comparison of the proposed method with some state-of-the-art baseline correction methods demonstrates its advantages over the existing methods in terms of accuracy, adaptability, and flexibility. Although only Raman spectra are investigated in this paper, the proposed method can hopefully be used for the baseline correction of other analytical instrumental signals, such as IR spectra and chromatograms.
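
    The details of the adaptive structuring element are specific to the paper, but the underlying idea, estimating the baseline by morphological openings applied iteratively so that peaks are progressively removed, can be sketched as follows using SciPy's grey-scale morphology on a synthetic spectrum. The window sizes and the synthetic signal are illustrative assumptions, not the authors' algorithm.

        import numpy as np
        from scipy.ndimage import grey_opening

        def morphological_baseline(y, max_window=101, step=10):
            # Apply grey-scale openings with progressively larger structuring elements:
            # small windows suppress narrow peaks first, larger ones remove broader
            # features, leaving the smooth drift as the baseline estimate. The adaptive
            # element selection and stopping rule of the paper are not reproduced.
            baseline = y.astype(float)
            for window in range(3, max_window + 1, step):
                baseline = grey_opening(baseline, size=window)
            return baseline

        # Synthetic Raman-like spectrum: smooth drift + two sharp peaks + noise
        x = np.linspace(0.0, 1.0, 1000)
        drift = 0.5 + 0.3 * x + 0.2 * x**2
        peaks = np.exp(-((x - 0.3) / 0.01) ** 2) + 0.7 * np.exp(-((x - 0.7) / 0.01) ** 2)
        y = drift + peaks + 0.01 * np.random.default_rng(0).standard_normal(x.size)
        corrected = y - morphological_baseline(y)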

  19. Three dimensional iterative beam propagation method for optical waveguide devices

    NASA Astrophysics Data System (ADS)

    Ma, Changbao; Van Keuren, Edward

    2006-10-01

    The finite difference beam propagation method (FD-BPM) is an effective model for simulating a wide range of optical waveguide structures. Classical FD-BPMs are based on the Crank-Nicholson scheme and, in tridiagonal form, can be solved using the Thomas method. We present a different type of algorithm for 3-D structures. In this algorithm, the wave equation is formulated into a large sparse matrix equation which can be solved using iterative methods. The simulation window shifting scheme and threshold technique introduced in our earlier work are utilized to overcome the convergence problem of iterative methods for large sparse matrix equations and wide-angle simulations. This method enables us to develop higher-order 3-D wide-angle (WA-) BPMs based on Padé approximant operators and the multistep method, which are commonly used in WA-BPMs for 2-D structures. Simulations using the new methods are compared to analytical results to confirm their effectiveness and applicability.

  20. SU-E-I-33: Initial Evaluation of Model-Based Iterative CT Reconstruction Using Standard Image Quality Phantoms

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gingold, E; Dave, J

    2014-06-01

    Purpose: The purpose of this study was to compare a new model-based iterative reconstruction with existing reconstruction methods (filtered backprojection and basic iterative reconstruction) using quantitative analysis of standard image quality phantom images. Methods: An ACR accreditation phantom (Gammex 464) and a CATPHAN600 phantom were scanned using 3 routine clinical acquisition protocols (adult axial brain, adult abdomen, and pediatric abdomen) on a Philips iCT system. Each scan was acquired using default conditions and 75%, 50% and 25% dose levels. Images were reconstructed using standard filtered backprojection (FBP), conventional iterative reconstruction (iDose4) and a prototype model-based iterative reconstruction (IMR). Phantom measurements included CT number accuracy, contrast to noise ratio (CNR), modulation transfer function (MTF), low contrast detectability (LCD), and noise power spectrum (NPS). Results: The choice of reconstruction method had no effect on CT number accuracy, or MTF (p<0.01). The CNR of a 6 HU contrast target was improved by 1–67% with iDose4 relative to FBP, while IMR improved CNR by 145–367% across all protocols and dose levels. Within each scan protocol, the CNR improvement from IMR vs FBP showed a general trend of greater improvement at lower dose levels. NPS magnitude was greatest for FBP and lowest for IMR. The NPS of the IMR reconstruction showed a pronounced decrease with increasing spatial frequency, consistent with the unusual noise texture seen in IMR images. Conclusion: Iterative Model Reconstruction reduces noise and improves contrast-to-noise ratio without sacrificing spatial resolution in CT phantom images. This offers the possibility of radiation dose reduction and improved low contrast detectability compared with filtered backprojection or conventional iterative reconstruction.

  1. Udzawa-type iterative method with parareal preconditioner for a parabolic optimal control problem

    NASA Astrophysics Data System (ADS)

    Lapin, A.; Romanenko, A.

    2016-11-01

    The article deals with an optimal control problem in which the state is governed by a parabolic equation, with pointwise constraints on the state and control functions. The objective functional involves observations given in the domain at each time instant. Conditions for the convergence of the Udzawa-type iterative method are given, and a parareal-based preconditioner is described. Results of the calculations are presented.

  2. Leveraging Anderson Acceleration for improved convergence of iterative solutions to transport systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Willert, Jeffrey; Taitano, William T.; Knoll, Dana

    In this note we demonstrate that using Anderson Acceleration (AA) in place of a standard Picard iteration can not only increase the convergence rate but also make the iteration more robust for two transport applications. We also compare the convergence acceleration provided by AA to that provided by moment-based acceleration methods. Additionally, we demonstrate that those two acceleration methods can be used together in a nested fashion. We begin by describing the AA algorithm. We then describe two application problems, one from neutronics and one from plasma physics, to which we apply AA. We provide computational results which highlight the benefits of using AA, namely that we can compute solutions using fewer function evaluations, larger time-steps, and achieve a more robust iteration.
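
    For readers unfamiliar with AA, the sketch below shows a generic windowed Anderson Acceleration wrapped around an arbitrary fixed-point map g(x); it is not tied to the neutronics or plasma applications of the note, and the toy fixed-point problem is an assumption for illustration.

        import numpy as np

        def anderson_accelerate(g, x0, m=5, tol=1e-10, max_iter=100):
            # Anderson Acceleration of the fixed-point iteration x <- g(x): keep the
            # last m residual differences, find the least-squares combination of recent
            # iterates that minimizes the combined residual, and step there. Generic
            # sketch; no damping/mixing parameter, unlike most production codes.
            x = np.asarray(x0, dtype=float)
            G, F = [], []                    # histories of g(x_k) and residuals f_k
            for k in range(max_iter):
                gx = g(x)
                f = gx - x
                if np.linalg.norm(f) < tol:
                    return x, k
                G.append(gx)
                F.append(f)
                G, F = G[-(m + 1):], F[-(m + 1):]
                if len(F) == 1:
                    x = gx                   # plain Picard step to start the history
                else:
                    dF = np.column_stack([F[i + 1] - F[i] for i in range(len(F) - 1)])
                    dG = np.column_stack([G[i + 1] - G[i] for i in range(len(G) - 1)])
                    gamma, *_ = np.linalg.lstsq(dF, f, rcond=None)
                    x = gx - dG @ gamma
            return x, max_iter

        # Accelerate the Picard map for x = cos(x), applied component-wise
        g = lambda x: np.cos(x)
        x, iters = anderson_accelerate(g, np.zeros(4))
        print(iters, np.max(np.abs(x - np.cos(x))))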

  3. Parallel solution of the symmetric tridiagonal eigenproblem. Research report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jessup, E.R.

    1989-10-01

    This thesis discusses methods for computing all eigenvalues and eigenvectors of a symmetric tridiagonal matrix on a distributed-memory Multiple Instruction, Multiple Data multiprocessor. Only those techniques having the potential for both high numerical accuracy and significant large-grained parallelism are investigated. These include the QL method or Cuppen's divide and conquer method based on rank-one updating to compute both eigenvalues and eigenvectors, bisection to determine eigenvalues and inverse iteration to compute eigenvectors. To begin, the methods are compared with respect to computation time, communication time, parallel speed up, and accuracy. Experiments on an iPSC hypercube multiprocessor reveal that Cuppen's method is the most accurate approach, but bisection with inverse iteration is the fastest and most parallel. Because the accuracy of the latter combination is determined by the quality of the computed eigenvectors, the factors influencing the accuracy of inverse iteration are examined. This includes, in part, statistical analysis of the effect of a starting vector with random components. These results are used to develop an implementation of inverse iteration producing eigenvectors with lower residual error and better orthogonality than those generated by the EISPACK routine TINVIT. This thesis concludes with adaptations of methods for the symmetric tridiagonal eigenproblem to the related problem of computing the singular value decomposition (SVD) of a bidiagonal matrix.
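
    A bare-bones version of the inverse iteration step, with a shift taken near a target eigenvalue (as bisection would supply), looks like the following; dense solves replace the tridiagonal-specialized, distributed routines of the thesis, and the starting-vector and orthogonalization refinements discussed above are omitted.

        import numpy as np

        def inverse_iteration(A, shift, iters=5, seed=0):
            # With a shift close to a target eigenvalue (e.g. from bisection), repeatedly
            # solve (A - shift*I) y = x and normalize; x converges to the eigenvector of
            # the eigenvalue nearest the shift. Dense solves are used for brevity instead
            # of a tridiagonal solver, and reorthogonalization is omitted.
            rng = np.random.default_rng(seed)
            x = rng.standard_normal(A.shape[0])       # random starting vector
            B = A - shift * np.eye(A.shape[0])
            for _ in range(iters):
                y = np.linalg.solve(B, x)
                x = y / np.linalg.norm(y)
            return x, x @ A @ x                       # eigenvector, Rayleigh quotient

        # Symmetric tridiagonal test matrix
        n = 50
        A = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
        target = np.linalg.eigvalsh(A)[3]             # pretend bisection located this one
        v, lam = inverse_iteration(A, shift=target + 1e-6)
        print(abs(lam - target))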

  4. Parallel solution of the symmetric tridiagonal eigenproblem

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jessup, E.R.

    1989-01-01

    This thesis discusses methods for computing all eigenvalues and eigenvectors of a symmetric tridiagonal matrix on a distributed memory MIMD multiprocessor. Only those techniques having the potential for both high numerical accuracy and significant large-grained parallelism are investigated. These include the QL method or Cuppen's divide and conquer method based on rank-one updating to compute both eigenvalues and eigenvectors, bisection to determine eigenvalues, and inverse iteration to compute eigenvectors. To begin, the methods are compared with respect to computation time, communication time, parallel speedup, and accuracy. Experiments on an iPSC hypercube multiprocessor reveal that Cuppen's method is the most accurate approach, but bisection with inverse iteration is the fastest and most parallel. Because the accuracy of the latter combination is determined by the quality of the computed eigenvectors, the factors influencing the accuracy of inverse iteration are examined. This includes, in part, statistical analysis of the effects of a starting vector with random components. These results are used to develop an implementation of inverse iteration producing eigenvectors with lower residual error and better orthogonality than those generated by the EISPACK routine TINVIT. This thesis concludes with adaptations of methods for the symmetric tridiagonal eigenproblem to the related problem of computing the singular value decomposition (SVD) of a bidiagonal matrix.

  5. An iterative method for systems of nonlinear hyperbolic equations

    NASA Technical Reports Server (NTRS)

    Scroggs, Jeffrey S.

    1989-01-01

    An iterative algorithm for the efficient solution of systems of nonlinear hyperbolic equations is presented. Parallelism is evident at several levels. In the formation of the iteration, the equations are decoupled, thereby providing large-grain parallelism. Parallelism may also be exploited within the solves for each equation. Convergence of the iteration is established via a bounding function argument. Experimental results in two dimensions are presented.

  6. Accelerated iterative beam angle selection in IMRT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bangert, Mark, E-mail: m.bangert@dkfz.de; Unkelbach, Jan

    2016-03-15

    Purpose: Iterative methods for beam angle selection (BAS) for intensity-modulated radiation therapy (IMRT) planning sequentially construct a beneficial ensemble of beam directions. In a naïve implementation, the nth beam is selected by adding beam orientations one-by-one from a discrete set of candidates to an existing ensemble of (n − 1) beams. The best beam orientation is identified in a time-consuming process by solving the fluence map optimization (FMO) problem for every candidate beam and selecting the beam that yields the largest improvement to the objective function value. This paper evaluates two alternative methods to accelerate iterative BAS based on surrogates for the FMO objective function value. Methods: We suggest selecting candidate beams not based on the FMO objective function value after convergence but (1) based on the objective function value after five FMO iterations of a gradient based algorithm and (2) based on a projected gradient of the FMO problem in the first iteration. The performance of the objective function surrogates is evaluated based on the resulting objective function values and dose statistics in a treatment planning study comprising three intracranial, three pancreas, and three prostate cases. Furthermore, iterative BAS is evaluated for an application in which a small number of noncoplanar beams complement a set of coplanar beam orientations. This scenario is of practical interest as noncoplanar setups may require additional attention of the treatment personnel for every couch rotation. Results: Iterative BAS relying on objective function surrogates yields similar results compared to naïve BAS with regard to the objective function values and dose statistics. At the same time, early stopping of the FMO and using the projected gradient during the first iteration enable reductions in computation time by approximately one to two orders of magnitude. With regard to the clinical delivery of noncoplanar IMRT treatments, we could show that optimized beam ensembles using only a few noncoplanar beam orientations often approach the plan quality of fully noncoplanar ensembles. Conclusions: We conclude that iterative BAS in combination with objective function surrogates can be a viable option to implement automated BAS at clinically acceptable computation times.

  7. Using an Iterative Mixed-Methods Research Design to Investigate Schools Facing Exceptionally Challenging Circumstances within Trinidad and Tobago

    ERIC Educational Resources Information Center

    De Lisle, Jerome; Seunarinesingh, Krishna; Mohammed, Rhoda; Lee-Piggott, Rinnelle

    2017-01-01

    In this study, methodology and theory were linked to explicate the nature of education practice within schools facing exceptionally challenging circumstances (SFECC) in Trinidad and Tobago. The research design was an iterative quan>QUAL-quan>qual multi-method research programme, consisting of 3 independent projects linked together by overall…

  8. A Study of Morrison's Iterative Noise Removal Method. Final Report M. S. Thesis

    NASA Technical Reports Server (NTRS)

    Ioup, G. E.; Wright, K. A. R.

    1985-01-01

    Morrison's iterative noise removal method is studied by characterizing its effect upon systems of differing noise level and response function. The nature of data acquired from a linear shift invariant instrument is discussed so as to define the relationship between the input signal, the instrument response function, and the output signal. Fourier analysis is introduced, along with several pertinent theorems, as a tool to more thorough understanding of the nature of and difficulties with deconvolution. In relation to such difficulties the necessity of a noise removal process is discussed. Morrison's iterative noise removal method and the restrictions upon its application are developed. The nature of permissible response functions is discussed, as is the choice of the response functions used.

  9. Truncated Conjugate Gradient: An Optimal Strategy for the Analytical Evaluation of the Many-Body Polarization Energy and Forces in Molecular Simulations.

    PubMed

    Aviat, Félix; Levitt, Antoine; Stamm, Benjamin; Maday, Yvon; Ren, Pengyu; Ponder, Jay W; Lagardère, Louis; Piquemal, Jean-Philip

    2017-01-10

    We introduce a new class of methods, denoted as Truncated Conjugate Gradient (TCG), to solve the many-body polarization energy and its associated forces in molecular simulations (i.e. molecular dynamics (MD) and Monte Carlo). The method consists of a fixed number of Conjugate Gradient (CG) iterations. TCG approaches provide a scalable solution to the polarization problem at a user-chosen cost and a corresponding optimal accuracy. The optimality of the CG method guarantees that the number of required matrix-vector products is reduced to a minimum compared to other iterative methods. This family of methods is non-empirical, fully adaptive, and provides analytical gradients, therefore avoiding any energy drift in MD as compared to popular iterative solvers. Besides speed, one great advantage of this class of approximate methods is that their accuracy is systematically improvable. Indeed, as the CG method is a Krylov subspace method, the associated error is monotonically reduced at each iteration. On top of that, two improvements can be proposed at virtually no cost: (i) preconditioners can be employed, which leads to the Truncated Preconditioned Conjugate Gradient (TPCG); (ii) since the residual of the final step of the CG method is available, one additional Picard fixed point iteration ("peek"), equivalent to one step of Jacobi Over Relaxation (JOR) with relaxation parameter ω, can be made at almost no cost. This method is denoted by TCG-n(ω). Black-box adaptive methods to find good choices of ω are provided and discussed. Results show that TPCG-3(ω) is converged to high accuracy (a few kcal/mol) for various types of systems including proteins and highly charged systems at the fixed cost of four matrix-vector products: three CG iterations plus the initial CG descent direction. Alternatively, T(P)CG-2(ω) provides robust results at a reduced cost (three matrix-vector products) and offers new perspectives for long polarizable MD as a production algorithm. The T(P)CG-1(ω) level provides less accurate solutions for inhomogeneous systems, but its applicability to well-conditioned problems such as water is remarkable, with only two matrix-vector product evaluations.
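
    Stripped of the polarization-specific matrices, preconditioning and analytical force expressions, the TCG idea reduces to running a fixed number of CG iterations and optionally one ω-weighted "peek" step on the final residual. A generic linear-algebra sketch, with an invented SPD test matrix, is given below.

        import numpy as np

        def tcg(A, b, n_iter=3, omega=None):
            # Truncated Conjugate Gradient: run exactly n_iter CG iterations on the SPD
            # system A x = b, then optionally take one Jacobi-over-relaxation "peek"
            # step with parameter omega on the final residual. Generic sketch; the
            # polarization matrices, preconditioning (TPCG) and analytical gradients of
            # the paper are omitted.
            x = np.zeros_like(b)
            r = b.copy()                     # residual of the zero initial guess
            p = r.copy()
            for _ in range(n_iter):
                Ap = A @ p
                alpha = (r @ r) / (p @ Ap)
                x = x + alpha * p
                r_new = r - alpha * Ap
                beta = (r_new @ r_new) / (r @ r)
                p = r_new + beta * p
                r = r_new
            if omega is not None:
                x = x + omega * r / np.diag(A)   # the JOR "peek" step
            return x

        rng = np.random.default_rng(1)
        M = rng.standard_normal((30, 30))
        A = M @ M.T + 30.0 * np.eye(30)          # invented SPD test matrix
        b = rng.standard_normal(30)
        for n in (1, 2, 3):
            print(n, np.linalg.norm(A @ tcg(A, b, n_iter=n, omega=1.0) - b))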

  11. A method for reducing the largest relative errors in Monte Carlo iterated-fission-source calculations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hunter, J. L.; Sutton, T. M.

    2013-07-01

    In Monte Carlo iterated-fission-source calculations relative uncertainties on local tallies tend to be larger in lower-power regions and smaller in higher-power regions. Reducing the largest uncertainties to an acceptable level simply by running a larger number of neutron histories is often prohibitively expensive. The uniform fission site method has been developed to yield a more spatially-uniform distribution of relative uncertainties. This is accomplished by biasing the density of fission neutron source sites while not biasing the solution. The method is integrated into the source iteration process, and does not require any auxiliary forward or adjoint calculations. For a given amount of computational effort, the use of the method results in a reduction of the largest uncertainties relative to the standard algorithm. Two variants of the method have been implemented and tested. Both have been shown to be effective. (authors)

  12. Subsonic panel method for designing wing surfaces from pressure distribution

    NASA Technical Reports Server (NTRS)

    Bristow, D. R.; Hawk, J. D.

    1983-01-01

    An iterative method has been developed for designing wing section contours corresponding to a prescribed subcritical distribution of pressure. The calculations are initialized by using a surface panel method to analyze a baseline wing or wing-fuselage configuration. A first-order expansion to the baseline panel method equations is then used to calculate a matrix containing the partial derivative of potential at each control point with respect to each unknown geometry parameter. In every iteration cycle, the matrix is used both to calculate the geometry perturbation and to analyze the perturbed geometry. The distribution of potential on the perturbed geometry is established by simple linear extrapolation from the baseline solution. The extrapolated potential is converted to pressure by Bernoulli's equation. Not only is the accuracy of the approach good for very large perturbations, but the computing cost of each complete iteration cycle is substantially less than one analysis solution by a conventional panel method.

  13. Improved Diffuse Foreground Subtraction with the ILC Method: CMB Map and Angular Power Spectrum Using Planck and WMAP Observations

    NASA Astrophysics Data System (ADS)

    Sudevan, Vipin; Aluri, Pavan K.; Yadav, Sarvesh Kumar; Saha, Rajib; Souradeep, Tarun

    2017-06-01

    We report an improved technique for diffuse foreground minimization from Cosmic Microwave Background (CMB) maps using a new multiphase iterative harmonic space internal-linear-combination (HILC) approach. Our method nullifies a foreground leakage that was present in the old and usual iterative HILC method. In phase 1 of the multiphase technique, we obtain an initial cleaned map using the single iteration HILC approach over the desired portion of the sky. In phase 2, we obtain a final CMB map using the iterative HILC approach; however, now, to nullify the leakage, during each iteration, some of the regions of the sky that are not being cleaned in the current iteration are replaced by the corresponding cleaned portions of the phase 1 map. We bring all input frequency maps to a common and maximum possible beam and pixel resolution at the beginning of the analysis, which significantly reduces data redundancy, memory usage, and computational cost, and avoids, during the HILC weight calculation, the deconvolution of partial sky harmonic coefficients by the azimuthally symmetric beam and pixel window functions, which in a strict mathematical sense, are not well defined. Using WMAP 9 year and Planck 2015 frequency maps, we obtain foreground-cleaned CMB maps and a CMB angular power spectrum for the multipole range 2 ≤ ℓ ≤ 2500. Our power spectrum matches the published Planck results with some differences at different multipole ranges. We validate our method by performing Monte Carlo simulations. Finally, we show that the weights for HILC foreground minimization have the intrinsic characteristic that they also tend to produce a statistically isotropic CMB map.
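
    The weights at the heart of any ILC-type method minimize the variance of the combined map subject to the weights summing to one. The sketch below computes these standard single-step ILC weights for a toy set of "frequency maps"; the multiphase, iterative harmonic-space machinery and leakage correction of the paper are not reproduced, and the toy data model is an assumption.

        import numpy as np

        def ilc_weights(maps):
            # Standard internal-linear-combination weights: minimize the variance of the
            # weighted sum of channel maps subject to the weights summing to one, so the
            # CMB (identical in all channels in thermodynamic units) passes through
            # unchanged: w = C^{-1} e / (e^T C^{-1} e).
            C = np.cov(maps)                            # channel-channel covariance
            Cinv_e = np.linalg.solve(C, np.ones(maps.shape[0]))
            return Cinv_e / Cinv_e.sum()

        # Toy model: 4 "frequency maps" = common CMB + scaled foreground + noise
        rng = np.random.default_rng(2)
        npix = 10000
        cmb = rng.standard_normal(npix)
        foreground = np.outer([3.0, 2.0, 1.0, 0.5], rng.standard_normal(npix))
        maps = cmb + foreground + 0.1 * rng.standard_normal((4, npix))
        w = ilc_weights(maps)
        cleaned = w @ maps
        print(w.sum(), np.corrcoef(cleaned, cmb)[0, 1])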

  14. Analytic approximations of Von Kármán plate under arbitrary uniform pressure—equations in integral form

    NASA Astrophysics Data System (ADS)

    Zhong, XiaoXu; Liao, ShiJun

    2018-01-01

    Analytic approximations of the Von Kármán's plate equations in integral form for a circular plate under external uniform pressure to arbitrary magnitude are successfully obtained by means of the homotopy analysis method (HAM), an analytic approximation technique for highly nonlinear problems. Two HAM-based approaches are proposed for either a given external uniform pressure Q or a given central deflection, respectively. Both of them are valid for uniform pressure to arbitrary magnitude by choosing proper values of the so-called convergence-control parameters c1 and c2 in the frame of the HAM. Besides, it is found that the HAM-based iteration approaches generally converge much faster than the interpolation iterative method. Furthermore, we prove that the interpolation iterative method is a special case of the first-order HAM iteration approach for a given external uniform pressure Q when c1 = -θ and c2 = -1, where θ denotes the interpolation iterative parameter. Therefore, according to the convergence theorem of Zheng and Zhou about the interpolation iterative method, the HAM-based approaches are valid for uniform pressure to arbitrary magnitude at least in the special case c1 = -θ and c2 = -1. In addition, we prove that the HAM approach for the Von Kármán's plate equations in differential form is just a special case of the HAM for the Von Kármán's plate equations in integral form mentioned in this paper. All of these illustrate the validity and great potential of the HAM for highly nonlinear problems, and its superiority over perturbation techniques.

  15. Quickprop method to speed up learning process of Artificial Neural Network in money's nominal value recognition case

    NASA Astrophysics Data System (ADS)

    Swastika, Windra

    2017-03-01

    A recognition system for the nominal value of money has been developed using an Artificial Neural Network (ANN). ANN with Back Propagation has one disadvantage: the learning process is very slow (or never reaches the target) when the number of iterations, weights and samples is large. One way to speed up the learning process is the Quickprop method. The Quickprop method is based on Newton's method and speeds up learning by assuming that the error E is a parabolic function of each weight; the goal is to minimize the error by driving its gradient E' to zero. In our system, we use 5 nominal values, i.e. 1,000 IDR, 2,000 IDR, 5,000 IDR, 10,000 IDR and 50,000 IDR. One surface of each nominal value was scanned and digitally processed. There are 40 patterns to be used as the training set in the ANN system. The effectiveness of the Quickprop method in the ANN system was validated by two criteria: (1) the number of iterations required to reach an error below 0.1; and (2) the accuracy of predicting nominal values from the input. Our results show that the Quickprop method successfully shortens the learning process compared to the Back Propagation method. For 40 input patterns, the Quickprop method reached an error below 0.1 in only 20 iterations, while the Back Propagation method required 2000 iterations. The prediction accuracy of both methods is higher than 90%.
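
    The Quickprop update itself is a per-weight rule: fit a parabola through the previous and current gradient along each weight axis and jump toward its minimum, with safeguards limiting the step growth. The sketch below implements that rule on a toy quadratic error; the network, training data and exact safeguard heuristics of the paper (and of Fahlman's original formulation) are not reproduced, so the details here are assumptions.

        import numpy as np

        def quickprop_step(grad, grad_prev, dw_prev, lr=0.1, mu=1.75):
            # One Quickprop update per weight: using the previous and current gradients,
            # jump toward the minimum of the fitted parabola,
            #   dw = grad / (grad_prev - grad) * dw_prev,
            # clipped by a maximum growth factor mu, and falling back to a plain gradient
            # step when no usable previous step exists. Safeguards here are simplified.
            fallback = -lr * grad
            have_prev = np.abs(dw_prev) > 1e-12
            denom = grad_prev - grad
            safe = have_prev & (np.abs(denom) > 1e-12)
            qp = np.where(safe, grad / np.where(safe, denom, 1.0) * dw_prev, fallback)
            bound = mu * np.abs(dw_prev) + lr * np.abs(grad)
            qp = np.clip(qp, -bound, bound)
            return np.where(have_prev, qp, fallback)

        # Toy "training": minimize a separable quadratic error E(w) = 0.5 * sum(a * w^2)
        a = np.array([1.0, 5.0, 20.0])
        w = np.array([1.0, 1.0, 1.0])
        grad_prev = np.zeros_like(w)
        dw_prev = np.zeros_like(w)
        for _ in range(20):
            grad = a * w
            dw = quickprop_step(grad, grad_prev, dw_prev)
            w, grad_prev, dw_prev = w + dw, grad, dw
        print(w)    # approaches the minimum at zero within a few iterations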

  16. A superlinear iteration method for calculation of finite length journal bearing's static equilibrium position.

    PubMed

    Zhou, Wenjie; Wei, Xuesong; Wang, Leqin; Wu, Guangkuan

    2017-05-01

    Solving for the static equilibrium position is one of the most important parts of dynamic coefficient calculation and of the further coupled calculation of a rotor system. The main contribution of this study is testing a superlinearly convergent iteration method, the twofold secant method, for the determination of the static equilibrium position of a journal bearing of finite length. Essentially, the Reynolds equation for stable motion is solved by the finite difference method, and the inner pressure is obtained by the successive over-relaxation iterative method reinforced by the compound Simpson quadrature formula. The accuracy and efficiency of the twofold secant method are higher in comparison with the secant method and dichotomy. The total number of iterative steps required by the twofold secant method is about one-third of that of the secant method and less than one-eighth of that of dichotomy for the same equilibrium position. Calculations of the equilibrium position and pressure distribution were performed for different bearing lengths, clearances and rotating speeds. The results show that the eccentricity exhibits an approximately linear inverse relationship with the attitude angle. The influence of the bearing length, clearance and bearing radius on the load-carrying capacity was also investigated. The results illustrate that a larger bearing length, a larger radius and a smaller clearance are beneficial for the load-carrying capacity of a journal bearing. The application of the twofold secant method can greatly reduce the computational time for the calculation of the dynamic coefficients and dynamic characteristics of a rotor-bearing system with a journal bearing of finite length.

  17. Newton's method: A link between continuous and discrete solutions of nonlinear problems

    NASA Technical Reports Server (NTRS)

    Thurston, G. A.

    1980-01-01

    Newton's method for nonlinear mechanics problems replaces the governing nonlinear equations by an iterative sequence of linear equations. When the linear equations are linear differential equations, the equations are usually solved by numerical methods. The iterative sequence in Newton's method can exhibit poor convergence properties when the nonlinear problem has multiple solutions for a fixed set of parameters, unless the iterative sequences are aimed at solving for each solution separately. The theory of the linear differential operators is often a better guide for solution strategies in applying Newton's method than the theory of linear algebra associated with the numerical analogs of the differential operators. In fact, the theory for the differential operators can suggest the choice of numerical linear operators. In this paper the method of variation of parameters from the theory of linear ordinary differential equations is examined in detail in the context of Newton's method to demonstrate how it might be used as a guide for numerical solutions.
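
    In the finite-dimensional (algebraic) setting, the iterative sequence of linear problems that the abstract describes takes the familiar form below; the differential-operator setting of the paper, where each linear problem is a linear differential equation treated, e.g., by variation of parameters, is not reproduced.

        import numpy as np

        def newton(F, J, x0, tol=1e-12, max_iter=20):
            # Newton's method: each step solves the linearized system J(x_k) dx = -F(x_k)
            # and updates x_{k+1} = x_k + dx, i.e. the nonlinear problem is replaced by
            # an iterative sequence of linear problems.
            x = np.asarray(x0, dtype=float)
            for k in range(max_iter):
                Fx = F(x)
                if np.linalg.norm(Fx) < tol:
                    return x, k
                x = x + np.linalg.solve(J(x), -Fx)
            return x, max_iter

        # Example: intersect the circle x^2 + y^2 = 4 with the curve y = exp(x) - 1
        F = lambda v: np.array([v[0]**2 + v[1]**2 - 4.0, v[1] - np.exp(v[0]) + 1.0])
        J = lambda v: np.array([[2.0 * v[0], 2.0 * v[1]], [-np.exp(v[0]), 1.0]])
        x, iters = newton(F, J, [1.0, 1.0])
        print(x, iters, F(x))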

  18. Gauss Seidel-type methods for energy states of a multi-component Bose Einstein condensate

    NASA Astrophysics Data System (ADS)

    Chang, Shu-Ming; Lin, Wen-Wei; Shieh, Shih-Feng

    2005-01-01

    In this paper, we propose two iterative methods, a Jacobi-type iteration (JI) and a Gauss-Seidel-type iteration (GSI), for the computation of energy states of the time-independent vector Gross-Pitaevskii equation (VGPE) which describes a multi-component Bose-Einstein condensate (BEC). A discretization of the VGPE leads to a nonlinear algebraic eigenvalue problem (NAEP). We prove that the GSI method converges locally and linearly to a solution of the NAEP if and only if the associated minimized energy functional problem has a strictly local minimum. The GSI method can thus be used to compute ground states and positive bound states, as well as the corresponding energies of a multi-component BEC. Numerical experience shows that the GSI converges much faster than JI and converges globally within 10-20 steps.
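
    The structural difference the paper exploits, Jacobi-type updates using only the previous iterate versus Gauss-Seidel-type updates that immediately reuse freshly computed components, is easiest to see on a plain linear system. The sketch below is that linear analogue only; the JI/GSI of the paper act component-wise on a nonlinear algebraic eigenvalue problem.

        import numpy as np

        def jacobi(A, b, x, iters):
            # Jacobi: every component is updated from the previous iterate only
            D = np.diag(A)
            for _ in range(iters):
                x = (b - (A @ x - D * x)) / D
            return x

        def gauss_seidel(A, b, x, iters):
            # Gauss-Seidel: each component update immediately reuses fresh values
            n = len(b)
            for _ in range(iters):
                for i in range(n):
                    x[i] = (b[i] - A[i, :i] @ x[:i] - A[i, i + 1:] @ x[i + 1:]) / A[i, i]
            return x

        A = np.array([[4.0, -1.0, 0.0], [-1.0, 4.0, -1.0], [0.0, -1.0, 4.0]])
        b = np.array([1.0, 2.0, 3.0])
        x_exact = np.linalg.solve(A, b)
        for iters in (5, 10):
            ej = np.linalg.norm(jacobi(A, b, np.zeros(3), iters) - x_exact)
            eg = np.linalg.norm(gauss_seidel(A, b, np.zeros(3), iters) - x_exact)
            print(iters, ej, eg)    # Gauss-Seidel error decays roughly twice as fast here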

  19. Hybrid preconditioning for iterative diagonalization of ill-conditioned generalized eigenvalue problems in electronic structure calculations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cai, Yunfeng, E-mail: yfcai@math.pku.edu.cn; Department of Computer Science, University of California, Davis 95616; Bai, Zhaojun, E-mail: bai@cs.ucdavis.edu

    2013-12-15

    The iterative diagonalization of a sequence of large ill-conditioned generalized eigenvalue problems is a computational bottleneck in quantum mechanical methods employing a nonorthogonal basis for ab initio electronic structure calculations. We propose a hybrid preconditioning scheme to effectively combine global and locally accelerated preconditioners for rapid iterative diagonalization of such eigenvalue problems. In partition-of-unity finite-element (PUFE) pseudopotential density-functional calculations, employing a nonorthogonal basis, we show that the hybrid preconditioned block steepest descent method is a cost-effective eigensolver, outperforming current state-of-the-art global preconditioning schemes, and comparably efficient for the ill-conditioned generalized eigenvalue problems produced by PUFE as the locally optimal block preconditioned conjugate-gradient method for the well-conditioned standard eigenvalue problems produced by planewave methods.

  20. Quality measures in applications of image restoration.

    PubMed

    Kriete, A; Naim, M; Schafer, L

    2001-01-01

    We describe a new method for the estimation of image quality in image restoration applications. We demonstrate this technique on a simulated data set of fluorescent beads, in comparison with restoration by three different deconvolution methods. Both the number of iterations and a regularisation factor are varied to enforce changes in the resulting image quality. First, the data sets are directly compared by an accuracy measure. These values serve to validate the image quality descriptor, which is developed on the basis of optical information theory. This most general measure takes into account the spectral energies and the noise, weighted in a logarithmic fashion. It is demonstrated that this method is particularly helpful as a user-oriented method to control the output of iterative image restorations and to eliminate the guesswork in choosing a suitable number of iterations.

  1. Progress in Development of the ITER Plasma Control System Simulation Platform

    NASA Astrophysics Data System (ADS)

    Walker, Michael; Humphreys, David; Sammuli, Brian; Ambrosino, Giuseppe; de Tommasi, Gianmaria; Mattei, Massimiliano; Raupp, Gerhard; Treutterer, Wolfgang; Winter, Axel

    2017-10-01

    We report on progress made and expected uses of the Plasma Control System Simulation Platform (PCSSP), the primary test environment for development of the ITER Plasma Control System (PCS). PCSSP will be used for verification and validation of the ITER PCS Final Design for First Plasma, to be completed in 2020. We discuss the objectives of PCSSP, its overall structure, selected features, application to existing devices, and expected evolution over the lifetime of the ITER PCS. We describe an archiving solution for simulation results, methods for incorporating physics models of the plasma and physical plant (tokamak, actuator, and diagnostic systems) into PCSSP, and defining characteristics of models suitable for a plasma control development environment such as PCSSP. Applications of PCSSP simulation models including resistive plasma equilibrium evolution are demonstrated. PCSSP development supported by ITER Organization under ITER/CTS/6000000037. Resistive evolution code developed under General Atomics' Internal funding. The views and opinions expressed herein do not necessarily reflect those of the ITER Organization.

  2. A new Newton-like method for solving nonlinear equations.

    PubMed

    Saheya, B; Chen, Guo-Qing; Sui, Yun-Kang; Wu, Cai-Ying

    2016-01-01

    This paper presents an iterative scheme for solving nonlinear equations. We establish a new rational approximation model with linear numerator and denominator which generalizes the local linear model. We then employ the new approximation for nonlinear equations and propose an improved Newton's method to solve them. The new method revises the Jacobian matrix by a rank-one matrix at each iteration and achieves the quadratic convergence property. The numerical performance and comparisons show that the proposed method is efficient.
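
    The rank-one revision of the Jacobian described above is reminiscent of Broyden-type quasi-Newton updates. The sketch below shows a plain Broyden iteration on a hypothetical two-equation system; it illustrates the idea of a rank-one Jacobian correction per step and is not the authors' rational-approximation method.

        import numpy as np

        def fd_jacobian(F, x, h=1e-7):
            # forward-difference Jacobian, used only to build the starting approximation B0
            n = len(x)
            J = np.empty((n, n))
            f0 = F(x)
            for j in range(n):
                xp = x.copy(); xp[j] += h
                J[:, j] = (F(xp) - f0) / h
            return J

        def broyden(F, x, tol=1e-10, max_iter=50):
            # quasi-Newton iteration with Broyden's rank-one update of the Jacobian approximation
            B = fd_jacobian(F, x)
            fx = F(x)
            for _ in range(max_iter):
                if np.linalg.norm(fx) < tol:
                    break
                s = np.linalg.solve(B, -fx)                  # Newton-like step with approximate Jacobian
                x = x + s
                f_new = F(x)
                y = f_new - fx
                B = B + np.outer(y - B @ s, s) / (s @ s)     # rank-one correction (Broyden's "good" update)
                fx = f_new
            return x

        # hypothetical 2x2 system: x0^2 + x1^2 = 4 and exp(x0) + x1 = 1
        F = lambda v: np.array([v[0]**2 + v[1]**2 - 4.0, np.exp(v[0]) + v[1] - 1.0])
        root = broyden(F, np.array([1.0, -1.5]))
        print(root, F(root))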

  3. Application of linear multifrequency-grey acceleration to preconditioned Krylov iterations for thermal radiation transport

    DOE PAGES

    Till, Andrew T.; Warsa, James S.; Morel, Jim E.

    2018-06-15

    The thermal radiative transfer (TRT) equations comprise a radiation equation coupled to the material internal energy equation. Linearization of these equations produces effective, thermally-redistributed scattering through absorption-reemission. In this paper, we investigate the effectiveness and efficiency of Linear-Multi-Frequency-Grey (LMFG) acceleration that has been reformulated for use as a preconditioner to Krylov iterative solution methods. We introduce two general frameworks, the scalar flux formulation (SFF) and the absorption rate formulation (ARF), and investigate their iterative properties in the absence and presence of true scattering. SFF has a group-dependent state size but may be formulated without inner iterations in the presence of scattering, while ARF has a group-independent state size but requires inner iterations when scattering is present. We compare and evaluate the computational cost and efficiency of LMFG applied to these two formulations using a direct solver for the preconditioners. Finally, this work is novel because the use of LMFG for the radiation transport equation, in conjunction with Krylov methods, involves special considerations not required for radiation diffusion.

  4. Modeling Design Iteration in Product Design and Development and Its Solution by a Novel Artificial Bee Colony Algorithm

    PubMed Central

    2014-01-01

    Due to fierce market competition, how to improve product quality and reduce development cost determines the core competitiveness of enterprises. However, design iteration generally causes increases of product cost and delays of development time as well, so how to identify and model couplings among tasks in product design and development has become an important issue for enterprises to settle. In this paper, the shortcomings existing in WTM model are discussed and tearing approach as well as inner iteration method is used to complement the classic WTM model. In addition, the ABC algorithm is also introduced to find out the optimal decoupling schemes. In this paper, firstly, tearing approach and inner iteration method are analyzed for solving coupled sets. Secondly, a hybrid iteration model combining these two technologies is set up. Thirdly, a high-performance swarm intelligence algorithm, artificial bee colony, is adopted to realize problem-solving. Finally, an engineering design of a chemical processing system is given in order to verify its reasonability and effectiveness. PMID:25431584

  5. Fast non-interferometric iterative phase retrieval for holographic data storage.

    PubMed

    Lin, Xiao; Huang, Yong; Shimura, Tsutomu; Fujimura, Ryushi; Tanaka, Yoshito; Endo, Masao; Nishimoto, Hajimu; Liu, Jinpeng; Li, Yang; Liu, Ying; Tan, Xiaodi

    2017-12-11

    Fast non-interferometric phase retrieval is a very important technique for phase-encoded holographic data storage and other phase-based applications due to its advantages of easy implementation, simple system setup, and robust noise tolerance. Here we present an iterative non-interferometric phase retrieval for 4-level phase-encoded holographic data storage based on an iterative Fourier transform algorithm and a known portion of the encoded data, which increases the storage code rate to twice that of an amplitude-based method. Only a single image at the Fourier plane of the beam is captured for the iterative reconstruction. Since the beam intensity at the Fourier plane of the reconstructed beam is more concentrated than the reconstructed beam itself, the requirement on the diffraction efficiency of the recording media is reduced, which will improve the dynamic range of the recording media significantly. The phase retrieval requires only 10 iterations to achieve a phase data error rate below 5%, which is successfully demonstrated by recording and reconstructing test image data experimentally. We believe our method will further advance the holographic data storage technique in the era of big data.
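
    To make the structure of such an iterative Fourier-transform loop concrete, here is a minimal Gerchberg-Saxton-style sketch: the measured Fourier-plane magnitude is enforced in one domain, and a 4-level phase alphabet plus a known data block is enforced in the other. The array size, phase alphabet, and known-block layout are invented for illustration; the actual optical system and algorithm details of the paper differ, and no particular error rate is claimed for this toy setup.

        import numpy as np

        rng = np.random.default_rng(0)
        N = 64
        levels = np.exp(1j * np.array([0.0, 0.5, 1.0, 1.5]) * np.pi)   # 4-level phase alphabet
        truth = rng.choice(levels, size=(N, N))                        # unknown phase data page
        known = np.zeros((N, N), dtype=bool); known[:8, :] = True      # embedded known data block
        meas_mag = np.abs(np.fft.fft2(truth))                          # single Fourier-plane magnitude image

        est = np.where(known, truth, levels[0])                        # start from the known portion
        for _ in range(50):
            Fest = np.fft.fft2(est)
            Fest = meas_mag * np.exp(1j * np.angle(Fest))              # impose the measured magnitude
            est = np.fft.ifft2(Fest)
            # object-domain constraints: snap to the nearest alphabet symbol, reimpose known data
            idx = np.argmax(np.real(est[..., None] * np.conj(levels)), axis=-1)
            est = levels[idx]
            est[known] = truth[known]

        print("symbol error rate:", np.mean(est != truth))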

  6. Validation of the thermal transport model used for ITER startup scenario predictions with DIII-D experimental data

    DOE PAGES

    Casper, T. A.; Meyer, W. H.; Jackson, G. L.; ...

    2010-12-08

    We are exploring characteristics of ITER startup scenarios in similarity experiments conducted on the DIII-D Tokamak. In these experiments, we have validated scenarios for the ITER current ramp up to full current and developed methods to control the plasma parameters to achieve stability. Predictive simulations of ITER startup using 2D free-boundary equilibrium and 1D transport codes rely on accurate estimates of the electron and ion temperature profiles that determine the electrical conductivity and pressure profiles during the current rise. Here we present results of validation studies that apply the transport model used by the ITER team to DIII-D discharge evolution and comparisons with data from our similarity experiments.

  7. A superlinear iteration method for calculation of finite length journal bearing's static equilibrium position

    PubMed Central

    Zhou, Wenjie; Wei, Xuesong; Wang, Leqin

    2017-01-01

    Solving for the static equilibrium position is one of the most important parts of the dynamic coefficient calculation and of further coupled calculations of the rotor system. The main contribution of this study is testing a superlinearly convergent iteration method, the twofold secant method, for the determination of the static equilibrium position of a journal bearing with finite length. Essentially, the Reynolds equation for stable motion is solved by the finite difference method and the inner pressure is obtained by the successive over-relaxation iterative method reinforced by the compound Simpson quadrature formula. The accuracy and efficiency of the twofold secant method are higher in comparison with the secant method and dichotomy. The total number of iterative steps required by the twofold secant method is about one-third of that of the secant method and less than one-eighth of that of dichotomy for the same equilibrium position. Calculations of the equilibrium position and pressure distribution for different bearing lengths, clearances and rotating speeds were performed. In the results, the eccentricity shows a linear, inversely proportional relationship to the attitude angle. The influence of the bearing length, clearance and bearing radius on the load-carrying capacity was also investigated. The results illustrate that a larger bearing length, larger radius and smaller clearance are good for the load-carrying capacity of the journal bearing. The application of the twofold secant method can greatly reduce the computational time for calculation of the dynamic coefficients and dynamic characteristics of a rotor-bearing system with a journal bearing of finite length. PMID:28572997
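
    The "twofold" refinement is not spelled out in the abstract, but the underlying building block is the classical secant iteration. A minimal sketch follows, applied to a stand-in scalar residual rather than the Reynolds-equation load balance of the paper.

        def secant(f, x0, x1, tol=1e-10, max_iter=100):
            # classical secant iteration; the paper's twofold variant refines this basic idea
            f0, f1 = f(x0), f(x1)
            for _ in range(max_iter):
                if f1 - f0 == 0.0:
                    break
                x2 = x1 - f1 * (x1 - x0) / (f1 - f0)
                if abs(x2 - x1) < tol:
                    return x2
                x0, f0, x1, f1 = x1, f1, x2, f(x2)
            return x1

        # stand-in cubic residual with a root between the two starting guesses
        residual = lambda e: e**3 + 2.0 * e - 1.0
        print(secant(residual, 0.3, 0.6))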

  8. An iterative solver for the 3D Helmholtz equation

    NASA Astrophysics Data System (ADS)

    Belonosov, Mikhail; Dmitriev, Maxim; Kostin, Victor; Neklyudov, Dmitry; Tcheverda, Vladimir

    2017-09-01

    We develop a frequency-domain iterative solver for numerical simulation of acoustic waves in 3D heterogeneous media. It is based on the application of a unique preconditioner to the Helmholtz equation that ensures convergence for Krylov subspace iteration methods. Effective inversion of the preconditioner involves the Fast Fourier Transform (FFT) and numerical solution of a series of boundary value problems for ordinary differential equations. Matrix-by-vector multiplication for iterative inversion of the preconditioned matrix involves inversion of the preconditioner and pointwise multiplication of grid functions. Our solver has been verified by benchmarking against exact solutions and a time-domain solver.

  9. A stopping criterion for the iterative solution of partial differential equations

    NASA Astrophysics Data System (ADS)

    Rao, Kaustubh; Malan, Paul; Perot, J. Blair

    2018-01-01

    A stopping criterion for iterative solution methods is presented that accurately estimates the solution error using low computational overhead. The proposed criterion uses information from prior solution changes to estimate the error. When the solution changes are noisy or stagnating it reverts to a less accurate but more robust, low-cost singular value estimate to approximate the error given the residual. This estimator can also be applied to iterative linear matrix solvers such as Krylov subspace or multigrid methods. Examples of the stopping criterion's ability to accurately estimate the non-linear and linear solution error are provided for a number of different test cases in incompressible fluid dynamics.
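
    One simple way to turn iterate changes into an error estimate, in the spirit described above, is to assume the changes decay roughly geometrically and sum the remaining tail, falling back to the last change itself when the changes stagnate. The sketch below applies this idea to a Jacobi iteration on a toy system; the fallback rule and thresholds are illustrative assumptions, not the paper's exact estimator.

        import numpy as np

        def estimated_error(history):
            # With d_k = ||x_k - x_{k-1}|| and r = d_k / d_{k-1} < 1 (geometric decay),
            # the remaining error is approximately d_k * r / (1 - r).
            d1 = np.linalg.norm(history[-1] - history[-2])
            d0 = np.linalg.norm(history[-2] - history[-3])
            if d0 == 0.0 or d1 >= d0:
                return d1                      # noisy or stagnating changes: fall back to the last change
            r = d1 / d0
            return d1 * r / (1.0 - r)

        # demo on a Jacobi iteration for a small diagonally dominant system
        A = np.array([[5.0, 1.0], [2.0, 7.0]])
        b = np.array([6.0, 9.0])
        x_exact = np.linalg.solve(A, b)
        x, hist = np.zeros(2), [np.zeros(2)]
        for k in range(50):
            x = (b - (A @ x - np.diag(A) * x)) / np.diag(A)
            hist.append(x.copy())
            if len(hist) >= 3 and estimated_error(hist) < 1e-10:
                break
        print(k, estimated_error(hist), np.linalg.norm(x - x_exact))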

  10. Iterative computation of generalized inverses, with an application to CMG steering laws

    NASA Technical Reports Server (NTRS)

    Steincamp, J. W.

    1971-01-01

    A cubically convergent iterative method for computing the generalized inverse of an arbitrary M X N matrix A is developed and a FORTRAN subroutine by which the method was implemented for real matrices on a CDC 3200 is given, with a numerical example to illustrate accuracy. Application to a redundant single-gimbal CMG assembly steering law is discussed.
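
    The report's exact scheme is not reproduced here; as a stand-in with the same order of convergence, the sketch below uses a standard third-order hyperpower iteration for the Moore-Penrose inverse, X_{k+1} = X_k (I + R_k + R_k^2) with R_k = I - A X_k, started from the commonly assumed scaling X_0 = A^T / (||A||_1 ||A||_inf).

        import numpy as np

        def pinv_hyperpower(A, tol=1e-12, max_iter=100):
            # Third-order hyperpower iteration: R_{k+1} = R_k^3, hence cubic convergence.
            X = A.T / (np.linalg.norm(A, 1) * np.linalg.norm(A, np.inf))   # safe starting guess
            I = np.eye(A.shape[0])
            for _ in range(max_iter):
                if np.linalg.norm(A @ X @ A - A) < tol:    # Penrose condition as a stopping test
                    break
                R = I - A @ X
                X = X @ (I + R + R @ R)
            return X

        A = np.array([[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]])   # 3x2, full column rank
        X = pinv_hyperpower(A)
        print(np.allclose(X, np.linalg.pinv(A)))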

  11. A multigrid LU-SSOR scheme for approximate Newton iteration applied to the Euler equations

    NASA Technical Reports Server (NTRS)

    Yoon, Seokkwan; Jameson, Antony

    1986-01-01

    A new efficient relaxation scheme in conjunction with a multigrid method is developed for the Euler equations. The LU SSOR scheme is based on a central difference scheme and does not need flux splitting for Newton iteration. Application to transonic flow shows that the new method surpasses the performance of the LU implicit scheme.

  12. Iterative methods used in overlap astrometric reduction techniques do not always converge

    NASA Astrophysics Data System (ADS)

    Rapaport, M.; Ducourant, C.; Colin, J.; Le Campion, J. F.

    1993-04-01

    In this paper we prove that the classical Gauss-Seidel-type iterative methods used for the solution of the reduced normal equations occurring in overlapping reduction methods of astrometry do not always converge. We exhibit examples of divergence. We then analyze an alternative algorithm proposed by Wang (1985). We prove the consistency of this algorithm and verify that it can be convergent while the Gauss-Seidel method is divergent. We conjecture the convergence of Wang's method for the solution of astrometric problems using overlap techniques.
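
    A two-by-two example is enough to see how Gauss-Seidel can diverge when the system lacks diagonal dominance; the matrix below is an illustrative toy, not an astrometric normal-equations matrix from the paper.

        import numpy as np

        # For a 2x2 system the Gauss-Seidel error is multiplied by a12*a21/(a11*a22) each sweep;
        # choosing off-diagonals larger than the diagonal (factor 6 here) makes the iteration diverge.
        A = np.array([[1.0, 2.0],
                      [3.0, 1.0]])
        b = np.array([1.0, 1.0])

        x = np.zeros(2)
        for k in range(10):
            x[0] = (b[0] - A[0, 1] * x[1]) / A[0, 0]
            x[1] = (b[1] - A[1, 0] * x[0]) / A[1, 1]
            print(k, np.linalg.norm(A @ x - b))   # residual grows without bound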

  13. Iterative Methods to Solve Linear RF Fields in Hot Plasma

    NASA Astrophysics Data System (ADS)

    Spencer, Joseph; Svidzinski, Vladimir; Evstatiev, Evstati; Galkin, Sergei; Kim, Jin-Soo

    2014-10-01

    Most magnetic plasma confinement devices use radio frequency (RF) waves for current drive and/or heating. Numerical modeling of RF fields is an important part of performance analysis of such devices and a predictive tool aiding design and development of future devices. Prior attempts at this modeling have mostly used direct solvers to solve the formulated linear equations. Full wave modeling of RF fields in hot plasma with 3D nonuniformities is mostly prohibited, with memory demands of a direct solver placing a significant limitation on spatial resolution. Iterative methods can significantly increase spatial resolution. We explore the feasibility of using iterative methods in 3D full wave modeling. The linear wave equation is formulated using two approaches: for cold plasmas the local cold plasma dielectric tensor is used (resolving resonances by particle collisions), while for hot plasmas the conductivity kernel (which includes a nonlocal dielectric response) is calculated by integrating along test particle orbits. The wave equation is discretized using a finite difference approach. The initial guess is important in iterative methods, and we examine different initial guesses including the solution to the cold plasma wave equation. Work is supported by the U.S. DOE SBIR program.

  14. Uniform convergence of multigrid V-cycle iterations for indefinite and nonsymmetric problems

    NASA Technical Reports Server (NTRS)

    Bramble, James H.; Kwak, Do Y.; Pasciak, Joseph E.

    1993-01-01

    In this paper, we present an analysis of a multigrid method for nonsymmetric and/or indefinite elliptic problems. In this multigrid method various types of smoothers may be used. One type of smoother which we consider is defined in terms of an associated symmetric problem and includes point and line, Jacobi, and Gauss-Seidel iterations. We also study smoothers based entirely on the original operator. One is based on the normal form, that is, the product of the operator and its transpose. Other smoothers studied include point and line, Jacobi, and Gauss-Seidel. We show that the uniform estimates for symmetric positive definite problems carry over to these algorithms. More precisely, the multigrid iteration for the nonsymmetric and/or indefinite problem is shown to converge at a uniform rate provided that the coarsest grid in the multilevel iteration is sufficiently fine (but not depending on the number of multigrid levels).

  15. Low-memory iterative density fitting.

    PubMed

    Grajciar, Lukáš

    2015-07-30

    A new low-memory modification of the density fitting approximation based on a combination of a continuous fast multipole method (CFMM) and a preconditioned conjugate gradient solver is presented. Iterative conjugate gradient solver uses preconditioners formed from blocks of the Coulomb metric matrix that decrease the number of iterations needed for convergence by up to one order of magnitude. The matrix-vector products needed within the iterative algorithm are calculated using CFMM, which evaluates them with the linear scaling memory requirements only. Compared with the standard density fitting implementation, up to 15-fold reduction of the memory requirements is achieved for the most efficient preconditioner at a cost of only 25% increase in computational time. The potential of the method is demonstrated by performing density functional theory calculations for zeolite fragment with 2592 atoms and 121,248 auxiliary basis functions on a single 12-core CPU workstation. © 2015 Wiley Periodicals, Inc.
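
    For readers unfamiliar with the ingredients, the sketch below shows the mechanics of a preconditioned conjugate gradient solve with a user-supplied preconditioner application. A simple diagonal (Jacobi) preconditioner and a random symmetric positive definite matrix stand in for the Coulomb-metric blocks and CFMM matrix-vector products of the paper.

        import numpy as np

        def pcg(A, b, apply_M_inv, tol=1e-10, max_iter=200):
            # preconditioned conjugate gradients for SPD A; apply_M_inv applies the preconditioner
            x = np.zeros_like(b)
            r = b - A @ x
            z = apply_M_inv(r)
            p = z.copy()
            rz = r @ z
            for k in range(max_iter):
                Ap = A @ p
                alpha = rz / (p @ Ap)
                x += alpha * p
                r -= alpha * Ap
                if np.linalg.norm(r) < tol:
                    return x, k + 1
                z = apply_M_inv(r)
                rz_new = r @ z
                p = z + (rz_new / rz) * p
                rz = rz_new
            return x, max_iter

        n = 200
        rng = np.random.default_rng(1)
        Q = rng.standard_normal((n, n))
        A = Q @ Q.T + n * np.eye(n)              # SPD stand-in matrix
        b = rng.standard_normal(n)
        jacobi = 1.0 / np.diag(A)
        _, iters_prec = pcg(A, b, lambda r: jacobi * r)   # Jacobi-preconditioned solve
        _, iters_plain = pcg(A, b, lambda r: r)           # unpreconditioned solve
        print(iters_prec, iters_plain)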

  16. 2D and 3D registration methods for dual-energy contrast-enhanced digital breast tomosynthesis

    NASA Astrophysics Data System (ADS)

    Lau, Kristen C.; Roth, Susan; Maidment, Andrew D. A.

    2014-03-01

    Contrast-enhanced digital breast tomosynthesis (CE-DBT) uses an iodinated contrast agent to image the three-dimensional breast vasculature. The University of Pennsylvania is conducting a CE-DBT clinical study in patients with known breast cancers. The breast is compressed continuously and imaged at four time points (1 pre-contrast; 3 post-contrast). A hybrid subtraction scheme is proposed. First, dual-energy (DE) images are obtained by a weighted logarithmic subtraction of the high-energy and low-energy image pairs. Then, post-contrast DE images are subtracted from the pre-contrast DE image. This hybrid temporal subtraction of DE images is performed to analyze iodine uptake, but suffers from motion artifacts. Employing image registration further helps to correct for motion, enhancing the evaluation of vascular kinetics. Registration using ANTS (Advanced Normalization Tools) is performed in an iterative manner. Mutual information optimization first corrects large-scale motions. Normalized cross-correlation optimization then iteratively corrects fine-scale misalignment. Two methods have been evaluated: a 2D method using a slice-by-slice approach, and a 3D method using a volumetric approach to account for out-of-plane breast motion. Our results demonstrate that iterative registration qualitatively improves with each iteration (five iterations total). Motion artifacts near the edge of the breast are corrected effectively and structures within the breast (e.g. blood vessels, surgical clip) are better visualized. Statistical and clinical evaluations of registration accuracy in the CE-DBT images are ongoing.

  17. On the inversion of geodetic integrals defined over the sphere using 1-D FFT

    NASA Astrophysics Data System (ADS)

    García, R. V.; Alejo, C. A.

    2005-08-01

    An iterative method is presented which performs inversion of integrals defined over the sphere. The method is based on one-dimensional fast Fourier transform (1-D FFT) inversion and is implemented with the projected Landweber technique, which is used to solve constrained least-squares problems reducing the associated 1-D cyclic-convolution error. The results obtained are as precise as the direct matrix inversion approach, but with better computational efficiency. A case study uses the inversion of Hotine’s integral to obtain gravity disturbances from geoid undulations. Numerical convergence is also analyzed and comparisons with respect to the direct matrix inversion method using conjugate gradient (CG) iteration are presented. Like the CG method, the number of iterations needed to get the optimum (i.e., small) error decreases as the measurement noise increases. Nevertheless, for discrete data given over a whole parallel band, the method can be applied directly without implementing the projected Landweber method, since no cyclic convolution error exists.
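
    The projected Landweber technique referred to above alternates a gradient step on the least-squares objective with a projection onto the constraint set. A generic dense-matrix sketch with a nonnegativity constraint is shown below; the operator, noise level, and constraint are illustrative assumptions rather than the Hotine-integral setting of the paper.

        import numpy as np

        def projected_landweber(A, b, project, step=None, n_iter=200):
            # Landweber iteration x <- P(x + tau * A^T (b - A x)) for constrained least squares
            if step is None:
                step = 1.0 / np.linalg.norm(A, 2) ** 2      # tau < 2 / ||A||^2 guarantees convergence
            x = np.zeros(A.shape[1])
            for _ in range(n_iter):
                x = project(x + step * (A.T @ (b - A @ x)))
            return x

        rng = np.random.default_rng(2)
        A = rng.standard_normal((80, 40))
        x_true = np.clip(rng.standard_normal(40), 0, None)   # nonnegative ground truth
        b = A @ x_true + 0.01 * rng.standard_normal(80)
        x = projected_landweber(A, b, project=lambda v: np.clip(v, 0.0, None))
        print(np.linalg.norm(x - x_true) / np.linalg.norm(x_true))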

  18. Cosmic Microwave Background Mapmaking with a Messenger Field

    NASA Astrophysics Data System (ADS)

    Huffenberger, Kevin M.; Næss, Sigurd K.

    2018-01-01

    We apply a messenger field method to solve the linear minimum-variance mapmaking equation in the context of Cosmic Microwave Background (CMB) observations. In simulations, the method produces sky maps that converge significantly faster than those from a conjugate gradient descent algorithm with a diagonal preconditioner, even though the computational cost per iteration is similar. The messenger method recovers large scales in the map better than conjugate gradient descent, and yields a lower overall χ2. In the single, pencil beam approximation, each iteration of the messenger mapmaking procedure produces an unbiased map, and the iterations become more optimal as they proceed. A variant of the method can handle differential data or perform deconvolution mapmaking. The messenger method requires no preconditioner, but a high-quality solution needs a cooling parameter to control the convergence. We study the convergence properties of this new method and discuss how the algorithm is feasible for the large data sets of current and future CMB experiments.

  19. Run-time parallelization and scheduling of loops

    NASA Technical Reports Server (NTRS)

    Saltz, Joel H.; Mirchandaney, Ravi; Crowley, Kay

    1991-01-01

    Run-time methods are studied to automatically parallelize and schedule iterations of a do loop in certain cases where compile-time information is inadequate. The methods presented involve execution time preprocessing of the loop. At compile-time, these methods set up the framework for performing a loop dependency analysis. At run-time, wavefronts of concurrently executable loop iterations are identified. Using this wavefront information, loop iterations are reordered for increased parallelism. Symbolic transformation rules are used to produce: inspector procedures that perform execution time preprocessing, and executors or transformed versions of source code loop structures. These transformed loop structures carry out the calculations planned in the inspector procedures. Performance results are presented from experiments conducted on the Encore Multimax. These results illustrate that run-time reordering of loop indexes can have a significant impact on performance.

  20. Improving the efficiency of molecular replacement by utilizing a new iterative transform phasing algorithm

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    He, Hongxing; Fang, Hengrui; Miller, Mitchell D.

    2016-07-15

    An iterative transform algorithm is proposed to improve the conventional molecular-replacement method for solving the phase problem in X-ray crystallography. Several examples of successful trial calculations carried out with real diffraction data are presented. An iterative transform method proposed previously for direct phasing of high-solvent-content protein crystals is employed for enhancing the molecular-replacement (MR) algorithm in protein crystallography. Target structures that are resistant to conventional MR due to insufficient similarity between the template and target structures might be tractable with this modified phasing method. Trial calculations involving three different structures are described to test and illustrate the methodology. The relationship of the approach to PHENIX Phaser-MR and MR-Rosetta is discussed.

  1. Regularization Parameter Selection for Nonlinear Iterative Image Restoration and MRI Reconstruction Using GCV and SURE-Based Methods

    PubMed Central

    Ramani, Sathish; Liu, Zhihao; Rosen, Jeffrey; Nielsen, Jon-Fredrik; Fessler, Jeffrey A.

    2012-01-01

    Regularized iterative reconstruction algorithms for imaging inverse problems require selection of appropriate regularization parameter values. We focus on the challenging problem of tuning regularization parameters for nonlinear algorithms for the case of additive (possibly complex) Gaussian noise. Generalized cross-validation (GCV) and (weighted) mean-squared error (MSE) approaches (based on Stein's Unbiased Risk Estimate— SURE) need the Jacobian matrix of the nonlinear reconstruction operator (representative of the iterative algorithm) with respect to the data. We derive the desired Jacobian matrix for two types of nonlinear iterative algorithms: a fast variant of the standard iterative reweighted least-squares method and the contemporary split-Bregman algorithm, both of which can accommodate a wide variety of analysis- and synthesis-type regularizers. The proposed approach iteratively computes two weighted SURE-type measures: Predicted-SURE and Projected-SURE (that require knowledge of noise variance σ2), and GCV (that does not need σ2) for these algorithms. We apply the methods to image restoration and to magnetic resonance image (MRI) reconstruction using total variation (TV) and an analysis-type ℓ1-regularization. We demonstrate through simulations and experiments with real data that minimizing Predicted-SURE and Projected-SURE consistently lead to near-MSE-optimal reconstructions. We also observed that minimizing GCV yields reconstruction results that are near-MSE-optimal for image restoration and slightly sub-optimal for MRI. Theoretical derivations in this work related to Jacobian matrix evaluations can be extended, in principle, to other types of regularizers and reconstruction algorithms. PMID:22531764

  2. The fractal geometry of Hartree-Fock

    NASA Astrophysics Data System (ADS)

    Theel, Friethjof; Karamatskou, Antonia; Santra, Robin

    2017-12-01

    The Hartree-Fock method is an important approximation for the ground-state electronic wave function of atoms and molecules so that its usage is widespread in computational chemistry and physics. The Hartree-Fock method is an iterative procedure in which the electronic wave functions of the occupied orbitals are determined. The set of functions found in one step builds the basis for the next iteration step. In this work, we interpret the Hartree-Fock method as a dynamical system since dynamical systems are iterations where iteration steps represent the time development of the system, as encountered in the theory of fractals. The focus is put on the convergence behavior of the dynamical system as a function of a suitable control parameter. In our case, a complex parameter λ controls the strength of the electron-electron interaction. An investigation of the convergence behavior depending on the parameter λ is performed for helium, neon, and argon. We observe fractal structures in the complex λ-plane, which resemble the well-known Mandelbrot set, determine their fractal dimension, and find that with increasing nuclear charge, the fragmentation increases as well.

  3. Iterative pass optimization of sequence data

    NASA Technical Reports Server (NTRS)

    Wheeler, Ward C.

    2003-01-01

    The problem of determining the minimum-cost hypothetical ancestral sequences for a given cladogram is known to be NP-complete. This "tree alignment" problem has motivated the considerable effort placed in multiple sequence alignment procedures. Wheeler in 1996 proposed a heuristic method, direct optimization, to calculate cladogram costs without the intervention of multiple sequence alignment. This method, though more efficient in time and more effective in cladogram length than many alignment-based procedures, greedily optimizes nodes based on descendent information only. In their proposal of an exact multiple alignment solution, Sankoff et al. in 1976 described a heuristic procedure--the iterative improvement method--to create alignments at internal nodes by solving a series of median problems. The combination of a three-sequence direct optimization with iterative improvement and a branch-length-based cladogram cost procedure, provides an algorithm that frequently results in superior (i.e., lower) cladogram costs. This iterative pass optimization is both computation and memory intensive, but economies can be made to reduce this burden. An example in arthropod systematics is discussed. c2003 The Willi Hennig Society. Published by Elsevier Science (USA). All rights reserved.

  4. FBILI method for multi-level line transfer

    NASA Astrophysics Data System (ADS)

    Kuzmanovska, O.; Atanacković, O.; Faurobert, M.

    2017-07-01

    Efficient non-LTE multilevel radiative transfer calculations are needed for a proper interpretation of astrophysical spectra. In particular, realistic simulations of time-dependent processes or multi-dimensional phenomena require that the iterative method used to solve such non-linear and non-local problem is as fast as possible. There are several multilevel codes based on efficient iterative schemes that provide a very high convergence rate, especially when combined with mathematical acceleration techniques. The Forth-and-Back Implicit Lambda Iteration (FBILI) developed by Atanacković-Vukmanović et al. [1] is a Gauss-Seidel-type iterative scheme that is characterized by a very high convergence rate without the need of complementing it with additional acceleration techniques. In this paper we make the implementation of the FBILI method to the multilevel atom line transfer in 1D more explicit. We also consider some of its variants and investigate their convergence properties by solving the benchmark problem of CaII line formation in the solar atmosphere. Finally, we compare our solutions with results obtained with the well known code MULTI.

  5. Accelerating nuclear configuration interaction calculations through a preconditioned block iterative eigensolver

    NASA Astrophysics Data System (ADS)

    Shao, Meiyue; Aktulga, H. Metin; Yang, Chao; Ng, Esmond G.; Maris, Pieter; Vary, James P.

    2018-01-01

    We describe a number of recently developed techniques for improving the performance of large-scale nuclear configuration interaction calculations on high performance parallel computers. We show the benefit of using a preconditioned block iterative method to replace the Lanczos algorithm that has traditionally been used to perform this type of computation. The rapid convergence of the block iterative method is achieved by a proper choice of starting guesses of the eigenvectors and the construction of an effective preconditioner. These acceleration techniques take advantage of special structure of the nuclear configuration interaction problem which we discuss in detail. The use of a block method also allows us to improve the concurrency of the computation, and take advantage of the memory hierarchy of modern microprocessors to increase the arithmetic intensity of the computation relative to data movement. We also discuss the implementation details that are critical to achieving high performance on massively parallel multi-core supercomputers, and demonstrate that the new block iterative solver is two to three times faster than the Lanczos based algorithm for problems of moderate sizes on a Cray XC30 system.
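
    A readily available example of a preconditioned block iterative eigensolver is SciPy's LOBPCG; the sketch below applies it with a diagonal preconditioner to a sparse stand-in matrix. The matrix, block size, and preconditioner are illustrative and are not related to the nuclear configuration interaction code discussed in the paper.

        import numpy as np
        import scipy.sparse as sp
        from scipy.sparse.linalg import lobpcg, LinearOperator

        n = 2000
        rng = np.random.default_rng(3)
        # sparse stand-in Hamiltonian: 1D Laplacian stencil plus a random diagonal "potential"
        main = 2.0 + rng.random(n)
        A = sp.diags([main, -np.ones(n - 1), -np.ones(n - 1)], [0, -1, 1], format="csr")

        k = 8                                          # number of lowest states wanted
        X = rng.standard_normal((n, k))                # block of starting vectors
        d_inv = 1.0 / A.diagonal()
        M = LinearOperator((n, n), matvec=lambda v: d_inv * v)   # simple diagonal preconditioner

        vals, vecs = lobpcg(A, X, M=M, largest=False, tol=1e-8, maxiter=500)
        print(np.sort(vals))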

  6. High-performance equation solvers and their impact on finite element analysis

    NASA Technical Reports Server (NTRS)

    Poole, Eugene L.; Knight, Norman F., Jr.; Davis, D. Dale, Jr.

    1990-01-01

    The role of equation solvers in modern structural analysis software is described. Direct and iterative equation solvers which exploit vectorization on modern high-performance computer systems are described and compared. The direct solvers are two Cholesky factorization methods. The first method utilizes a novel variable-band data storage format to achieve very high computation rates and the second method uses a sparse data storage format designed to reduce the number of operations. The iterative solvers are preconditioned conjugate gradient methods. Two different preconditioners are included; the first uses a diagonal matrix storage scheme to achieve high computation rates and the second requires a sparse data storage scheme and converges to the solution in fewer iterations than the first. The impact of using all of the equation solvers in a common structural analysis software system is demonstrated by solving several representative structural analysis problems.

  7. High-performance equation solvers and their impact on finite element analysis

    NASA Technical Reports Server (NTRS)

    Poole, Eugene L.; Knight, Norman F., Jr.; Davis, D. D., Jr.

    1992-01-01

    The role of equation solvers in modern structural analysis software is described. Direct and iterative equation solvers which exploit vectorization on modern high-performance computer systems are described and compared. The direct solvers are two Cholesky factorization methods. The first method utilizes a novel variable-band data storage format to achieve very high computation rates and the second method uses a sparse data storage format designed to reduce the number of operations. The iterative solvers are preconditioned conjugate gradient methods. Two different preconditioners are included; the first uses a diagonal matrix storage scheme to achieve high computation rates and the second requires a sparse data storage scheme and converges to the solution in fewer iterations than the first. The impact of using all of the equation solvers in a common structural analysis software system is demonstrated by solving several representative structural analysis problems.

  8. Analytical and quasi-Bayesian methods as development of the iterative approach for mixed radiation biodosimetry.

    PubMed

    Słonecka, Iwona; Łukasik, Krzysztof; Fornalski, Krzysztof W

    2018-06-04

    The present paper proposes two methods of calculating components of the dose absorbed by the human body after exposure to a mixed neutron and gamma radiation field. The article presents a novel approach to replace the common iterative method in its analytical form, thus reducing the calculation time. It also shows a possibility of estimating the neutron and gamma doses when their ratio in a mixed beam is not precisely known.

  9. Demons deformable registration of CT and cone-beam CT using an iterative intensity matching approach.

    PubMed

    Nithiananthan, Sajendra; Schafer, Sebastian; Uneri, Ali; Mirota, Daniel J; Stayman, J Webster; Zbijewski, Wojciech; Brock, Kristy K; Daly, Michael J; Chan, Harley; Irish, Jonathan C; Siewerdsen, Jeffrey H

    2011-04-01

    A method of intensity-based deformable registration of CT and cone-beam CT (CBCT) images is described, in which intensity correction occurs simultaneously within the iterative registration process. The method preserves the speed and simplicity of the popular Demons algorithm while providing robustness and accuracy in the presence of large mismatch between CT and CBCT voxel values ("intensity"). A variant of the Demons algorithm was developed in which an estimate of the relationship between CT and CBCT intensity values for specific materials in the image is computed at each iteration based on the set of currently overlapping voxels. This tissue-specific intensity correction is then used to estimate the registration output for that iteration and the process is repeated. The robustness of the method was tested in CBCT images of a cadaveric head exhibiting a broad range of simulated intensity variations associated with x-ray scatter, object truncation, and/or errors in the reconstruction algorithm. The accuracy of CT-CBCT registration was also measured in six real cases, exhibiting deformations ranging from simple to complex during surgery or radiotherapy guided by a CBCT-capable C-arm or linear accelerator, respectively. The iterative intensity matching approach was robust against all levels of intensity variation examined, including spatially varying errors in voxel value of a factor of 2 or more, as can be encountered in cases of high x-ray scatter. Registration accuracy without intensity matching degraded severely with increasing magnitude of intensity error and introduced image distortion. A single histogram match performed prior to registration alleviated some of these effects but was also prone to image distortion and was quantifiably less robust and accurate than the iterative approach. Within the six case registration accuracy study, iterative intensity matching Demons reduced mean TRE to (2.5 +/- 2.8) mm compared to (3.5 +/- 3.0) mm with rigid registration. A method was developed to iteratively correct CT-CBCT intensity disparity during Demons registration, enabling fast, intensity-based registration in CBCT-guided procedures such as surgery and radiotherapy, in which CBCT voxel values may be inaccurate. Accurate CT-CBCT registration in turn facilitates registration of multimodality preoperative image and planning data to intraoperative CBCT by way of the preoperative CT, thereby linking the intraoperative frame of reference to a wealth of preoperative information that could improve interventional guidance.

  10. Demons deformable registration of CT and cone-beam CT using an iterative intensity matching approach

    PubMed Central

    Nithiananthan, Sajendra; Schafer, Sebastian; Uneri, Ali; Mirota, Daniel J.; Stayman, J. Webster; Zbijewski, Wojciech; Brock, Kristy K.; Daly, Michael J.; Chan, Harley; Irish, Jonathan C.; Siewerdsen, Jeffrey H.

    2011-01-01

    Purpose: A method of intensity-based deformable registration of CT and cone-beam CT (CBCT) images is described, in which intensity correction occurs simultaneously within the iterative registration process. The method preserves the speed and simplicity of the popular Demons algorithm while providing robustness and accuracy in the presence of large mismatch between CT and CBCT voxel values (“intensity”). Methods: A variant of the Demons algorithm was developed in which an estimate of the relationship between CT and CBCT intensity values for specific materials in the image is computed at each iteration based on the set of currently overlapping voxels. This tissue-specific intensity correction is then used to estimate the registration output for that iteration and the process is repeated. The robustness of the method was tested in CBCT images of a cadaveric head exhibiting a broad range of simulated intensity variations associated with x-ray scatter, object truncation, and∕or errors in the reconstruction algorithm. The accuracy of CT-CBCT registration was also measured in six real cases, exhibiting deformations ranging from simple to complex during surgery or radiotherapy guided by a CBCT-capable C-arm or linear accelerator, respectively. Results: The iterative intensity matching approach was robust against all levels of intensity variation examined, including spatially varying errors in voxel value of a factor of 2 or more, as can be encountered in cases of high x-ray scatter. Registration accuracy without intensity matching degraded severely with increasing magnitude of intensity error and introduced image distortion. A single histogram match performed prior to registration alleviated some of these effects but was also prone to image distortion and was quantifiably less robust and accurate than the iterative approach. Within the six case registration accuracy study, iterative intensity matching Demons reduced mean TRE to (2.5±2.8) mm compared to (3.5±3.0) mm with rigid registration. Conclusions: A method was developed to iteratively correct CT-CBCT intensity disparity during Demons registration, enabling fast, intensity-based registration in CBCT-guided procedures such as surgery and radiotherapy, in which CBCT voxel values may be inaccurate. Accurate CT-CBCT registration in turn facilitates registration of multimodality preoperative image and planning data to intraoperative CBCT by way of the preoperative CT, thereby linking the intraoperative frame of reference to a wealth of preoperative information that could improve interventional guidance. PMID:21626913

  11. Demons deformable registration of CT and cone-beam CT using an iterative intensity matching approach

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nithiananthan, Sajendra; Schafer, Sebastian; Uneri, Ali

    2011-04-15

    Purpose: A method of intensity-based deformable registration of CT and cone-beam CT (CBCT) images is described, in which intensity correction occurs simultaneously within the iterative registration process. The method preserves the speed and simplicity of the popular Demons algorithm while providing robustness and accuracy in the presence of large mismatch between CT and CBCT voxel values ("intensity"). Methods: A variant of the Demons algorithm was developed in which an estimate of the relationship between CT and CBCT intensity values for specific materials in the image is computed at each iteration based on the set of currently overlapping voxels. This tissue-specific intensity correction is then used to estimate the registration output for that iteration and the process is repeated. The robustness of the method was tested in CBCT images of a cadaveric head exhibiting a broad range of simulated intensity variations associated with x-ray scatter, object truncation, and/or errors in the reconstruction algorithm. The accuracy of CT-CBCT registration was also measured in six real cases, exhibiting deformations ranging from simple to complex during surgery or radiotherapy guided by a CBCT-capable C-arm or linear accelerator, respectively. Results: The iterative intensity matching approach was robust against all levels of intensity variation examined, including spatially varying errors in voxel value of a factor of 2 or more, as can be encountered in cases of high x-ray scatter. Registration accuracy without intensity matching degraded severely with increasing magnitude of intensity error and introduced image distortion. A single histogram match performed prior to registration alleviated some of these effects but was also prone to image distortion and was quantifiably less robust and accurate than the iterative approach. Within the six case registration accuracy study, iterative intensity matching Demons reduced mean TRE to (2.5±2.8) mm compared to (3.5±3.0) mm with rigid registration. Conclusions: A method was developed to iteratively correct CT-CBCT intensity disparity during Demons registration, enabling fast, intensity-based registration in CBCT-guided procedures such as surgery and radiotherapy, in which CBCT voxel values may be inaccurate. Accurate CT-CBCT registration in turn facilitates registration of multimodality preoperative image and planning data to intraoperative CBCT by way of the preoperative CT, thereby linking the intraoperative frame of reference to a wealth of preoperative information that could improve interventional guidance.

  12. Picard Trajectory Approximation Iteration for Efficient Orbit Propagation

    DTIC Science & Technology

    2015-07-21

    The report uses an iterative approximation of the eccentric anomaly (its Eqn (8)), transformed back to (x, y, z) coordinates; the Lambert/Kepler time-eccentric anomaly relationship is iterated by a Newton/secant method to convergence. Cited references include a work on Kepler motion based on spinor regularization (Journal fur die Reine und Angewandte Mathematik 218, 204-219, 1965) and one by Levi-Civita.

  13. Iterative-Transform Phase Retrieval Using Adaptive Diversity

    NASA Technical Reports Server (NTRS)

    Dean, Bruce H.

    2007-01-01

    A phase-diverse iterative-transform phase-retrieval algorithm enables high spatial-frequency, high-dynamic-range, image-based wavefront sensing. [The terms phase-diverse, phase retrieval, image-based, and wavefront sensing are defined in the first of the two immediately preceding articles, Broadband Phase Retrieval for Image-Based Wavefront Sensing (GSC-14899-1).] As described below, no prior phase-retrieval algorithm has offered both high dynamic range and the capability to recover high spatial-frequency components. Each of the previously developed image-based phase-retrieval techniques can be classified into one of two categories: iterative transform or parametric. Among the modifications of the original iterative-transform approach has been the introduction of a defocus diversity function (also defined in the cited companion article). Modifications of the original parametric approach have included minimizing alternative objective functions as well as implementing a variety of nonlinear optimization methods. The iterative-transform approach offers the advantage of ability to recover low, middle, and high spatial frequencies, but has disadvantage of having a limited dynamic range to one wavelength or less. In contrast, parametric phase retrieval offers the advantage of high dynamic range, but is poorly suited for recovering higher spatial frequency aberrations. The present phase-diverse iterative transform phase-retrieval algorithm offers both the high-spatial-frequency capability of the iterative-transform approach and the high dynamic range of parametric phase-recovery techniques. In implementation, this is a focus-diverse iterative-transform phaseretrieval algorithm that incorporates an adaptive diversity function, which makes it possible to avoid phase unwrapping while preserving high-spatial-frequency recovery. The algorithm includes an inner and an outer loop (see figure). An initial estimate of phase is used to start the algorithm on the inner loop, wherein multiple intensity images are processed, each using a different defocus value. The processing is done by an iterative-transform method, yielding individual phase estimates corresponding to each image of the defocus-diversity data set. These individual phase estimates are combined in a weighted average to form a new phase estimate, which serves as the initial phase estimate for either the next iteration of the iterative-transform method or, if the maximum number of iterations has been reached, for the next several steps, which constitute the outerloop portion of the algorithm. The details of the next several steps must be omitted here for the sake of brevity. The overall effect of these steps is to adaptively update the diversity defocus values according to recovery of global defocus in the phase estimate. Aberration recovery varies with differing amounts as the amount of diversity defocus is updated in each image; thus, feedback is incorporated into the recovery process. This process is iterated until the global defocus error is driven to zero during the recovery process. The amplitude of aberration may far exceed one wavelength after completion of the inner-loop portion of the algorithm, and the classical iterative transform method does not, by itself, enable recovery of multi-wavelength aberrations. Hence, in the absence of a means of off-loading the multi-wavelength portion of the aberration, the algorithm would produce a wrapped phase map. 
However, a special aberration-fitting procedure can be applied to the wrapped phase data to transfer at least some portion of the multi-wavelength aberration to the diversity function, wherein the data are treated as known phase values. In this way, a multi-wavelength aberration can be recovered incrementally by successively applying the aberration-fitting procedure to intermediate wrapped phase maps. During recovery, as more of the aberration is transferred to the diversity function following successive iterations around the outer loop, the estimated phase ceases to wrap in places where the aberration values become incorporated as part of the diversity function. As a result, as the aberration content is transferred to the diversity function, the phase estimate resembles that of a reference flat.

  14. Anatomical-based partial volume correction for low-dose dedicated cardiac SPECT/CT

    NASA Astrophysics Data System (ADS)

    Liu, Hui; Chan, Chung; Grobshtein, Yariv; Ma, Tianyu; Liu, Yaqiang; Wang, Shi; Stacy, Mitchel R.; Sinusas, Albert J.; Liu, Chi

    2015-09-01

    Due to the limited spatial resolution, partial volume effect has been a major degrading factor on quantitative accuracy in emission tomography systems. This study aims to investigate the performance of several anatomical-based partial volume correction (PVC) methods for a dedicated cardiac SPECT/CT system (GE Discovery NM/CT 570c) with focused field-of-view over a clinically relevant range of high and low count levels for two different radiotracer distributions. These PVC methods include perturbation geometry transfer matrix (pGTM), pGTM followed by multi-target correction (MTC), pGTM with known concentration in blood pool, the former followed by MTC and our newly proposed methods, which perform the MTC method iteratively, where the mean values in all regions are estimated and updated by the MTC-corrected images each time in the iterative process. The NCAT phantom was simulated for cardiovascular imaging with 99mTc-tetrofosmin, a myocardial perfusion agent, and 99mTc-red blood cell (RBC), a pure intravascular imaging agent. Images were acquired at six different count levels to investigate the performance of PVC methods in both high and low count levels for low-dose applications. We performed two large animal in vivo cardiac imaging experiments following injection of 99mTc-RBC for evaluation of intramyocardial blood volume (IMBV). The simulation results showed our proposed iterative methods provide superior performance than other existing PVC methods in terms of image quality, quantitative accuracy, and reproducibility (standard deviation), particularly for low-count data. The iterative approaches are robust for both 99mTc-tetrofosmin perfusion imaging and 99mTc-RBC imaging of IMBV and blood pool activity even at low count levels. The animal study results indicated the effectiveness of PVC to correct the overestimation of IMBV due to blood pool contamination. In conclusion, the iterative PVC methods can achieve more accurate quantification, particularly for low count cardiac SPECT studies, typically obtained from low-dose protocols, gated studies, and dynamic applications.

  15. Varying face occlusion detection and iterative recovery for face recognition

    NASA Astrophysics Data System (ADS)

    Wang, Meng; Hu, Zhengping; Sun, Zhe; Zhao, Shuhuan; Sun, Mei

    2017-05-01

    In most sparse representation methods for face recognition (FR), occlusion problems were usually solved via removing the occlusion part of both query samples and training samples to perform the recognition process. This practice ignores the global feature of facial image and may lead to unsatisfactory results due to the limitation of local features. Considering the aforementioned drawback, we propose a method called varying occlusion detection and iterative recovery for FR. The main contributions of our method are as follows: (1) to detect an accurate occlusion area of facial images, an image processing and intersection-based clustering combination method is used for occlusion FR; (2) according to an accurate occlusion map, the new integrated facial images are recovered iteratively and put into a recognition process; and (3) the effectiveness on recognition accuracy of our method is verified by comparing it with three typical occlusion map detection methods. Experiments show that the proposed method has a highly accurate detection and recovery performance and that it outperforms several similar state-of-the-art methods against partial contiguous occlusion.

  16. Composition of web services using Markov decision processes and dynamic programming.

    PubMed

    Uc-Cetina, Víctor; Moo-Mena, Francisco; Hernandez-Ucan, Rafael

    2015-01-01

    We propose a Markov decision process model for solving the Web service composition (WSC) problem. Iterative policy evaluation, value iteration, and policy iteration algorithms are used to experimentally validate our approach, with artificial and real data. The experimental results show the reliability of the model and the methods employed, with policy iteration being the best one in terms of the minimum number of iterations needed to estimate an optimal policy, with the highest Quality of Service attributes. Our experimental work shows how the solution of a WSC problem involving a set of 100,000 individual Web services and where a valid composition requiring the selection of 1,000 services from the available set can be computed in the worst case in less than 200 seconds, using an Intel Core i5 computer with 6 GB RAM. Moreover, a real WSC problem involving only 7 individual Web services requires less than 0.08 seconds, using the same computational power. Finally, a comparison with two popular reinforcement learning algorithms, sarsa and Q-learning, shows that these algorithms require one or two orders of magnitude and more time than policy iteration, iterative policy evaluation, and value iteration to handle WSC problems of the same complexity.
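
    For orientation, the sketch below implements textbook value iteration and policy iteration on a small randomly generated MDP; the state and action counts, transition model, and discount factor are invented for illustration and do not represent the Web service composition model of the paper.

        import numpy as np

        rng = np.random.default_rng(4)
        n_s, n_a, gamma = 6, 3, 0.95
        P = rng.random((n_a, n_s, n_s)); P /= P.sum(axis=2, keepdims=True)   # P[a, s, s'] transition probs
        R = rng.random((n_a, n_s))                                           # expected reward R[a, s]

        def value_iteration(tol=1e-8):
            V = np.zeros(n_s)
            while True:
                Q = R + gamma * P @ V              # Q[a, s] = R[a, s] + gamma * sum_s' P[a, s, s'] V[s']
                V_new = Q.max(axis=0)
                if np.max(np.abs(V_new - V)) < tol:
                    return V_new, Q.argmax(axis=0)
                V = V_new

        def policy_iteration():
            policy = np.zeros(n_s, dtype=int)
            while True:
                # policy evaluation: solve (I - gamma * P_pi) V = R_pi exactly
                P_pi = P[policy, np.arange(n_s), :]
                R_pi = R[policy, np.arange(n_s)]
                V = np.linalg.solve(np.eye(n_s) - gamma * P_pi, R_pi)
                # policy improvement
                new_policy = (R + gamma * P @ V).argmax(axis=0)
                if np.array_equal(new_policy, policy):
                    return V, policy
                policy = new_policy

        print(value_iteration()[1], policy_iteration()[1])   # both recover the same optimal policy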

  17. Convergence analysis of modulus-based matrix splitting iterative methods for implicit complementarity problems.

    PubMed

    Wang, An; Cao, Yang; Shi, Quan

    2018-01-01

    In this paper, we demonstrate a complete version of the convergence theory of the modulus-based matrix splitting iteration methods for solving a class of implicit complementarity problems proposed by Hong and Li (Numer. Linear Algebra Appl. 23:629-641, 2016). New convergence conditions are presented when the system matrix is a positive-definite matrix and an [Formula: see text]-matrix, respectively.
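
    The paper concerns implicit complementarity problems; as a simpler illustration of the modulus idea, the sketch below runs the basic modulus iteration for a standard linear complementarity problem LCP(q, A) with a symmetric positive definite A. This is a special case of the modulus-based matrix splitting family (identity parameter matrix and the trivial splitting), chosen for brevity rather than generality.

        import numpy as np

        def modulus_lcp(A, q, tol=1e-10, max_iter=500):
            # Basic modulus iteration for LCP(q, A): find z >= 0 with w = A z + q >= 0 and z^T w = 0.
            # Iterate x <- (I + A)^{-1} ((I - A)|x| - q); then z = |x| + x and w = |x| - x.
            n = len(q)
            x = np.zeros(n)
            M = np.linalg.inv(np.eye(n) + A)     # small dense example; use a linear solve in real codes
            for _ in range(max_iter):
                x_new = M @ ((np.eye(n) - A) @ np.abs(x) - q)
                if np.linalg.norm(x_new - x) < tol:
                    x = x_new
                    break
                x = x_new
            z = np.abs(x) + x
            return z, A @ z + q

        rng = np.random.default_rng(5)
        B = rng.standard_normal((5, 5))
        A = B @ B.T + 5 * np.eye(5)              # symmetric positive definite test matrix
        q = rng.standard_normal(5)
        z, w = modulus_lcp(A, q)
        print(z.round(6), w.round(6), z @ w)     # z >= 0, w >= 0, and z.w is (numerically) zero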

  18. TH-AB-BRA-09: Stability Analysis of a Novel Dose Calculation Algorithm for MRI Guided Radiotherapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zelyak, O; Fallone, B; Cross Cancer Institute, Edmonton, AB

    2016-06-15

    Purpose: To determine the iterative deterministic solution stability of the Linear Boltzmann Transport Equation (LBTE) in the presence of magnetic fields. Methods: The LBTE with magnetic fields under investigation is derived using a discrete ordinates approach. The stability analysis is performed using analytical and numerical methods. Analytically, the spectral Fourier analysis is used to obtain the convergence rate of the source iteration procedures based on finding the largest eigenvalue of the iterative operator. This eigenvalue is a function of relevant physical parameters, such as magnetic field strength and material properties, and provides essential information about the domain of applicability required for clinically optimal parameter selection and maximum speed of convergence. The analytical results are reinforced by numerical simulations performed using the same discrete ordinates method in angle, and a discontinuous finite element spatial approach. Results: The spectral radius for the source iteration technique of the time independent transport equation with isotropic and anisotropic scattering centers inside infinite 3D medium is equal to the ratio of differential and total cross sections. The result is confirmed numerically by solving LBTE and is in full agreement with previously published results. The addition of magnetic field reveals that the convergence becomes dependent on the strength of magnetic field, the energy group discretization, and the order of anisotropic expansion. Conclusion: The source iteration technique for solving the LBTE with magnetic fields with the discrete ordinates method leads to divergent solutions in the limiting cases of small energy discretizations and high magnetic field strengths. Future investigations into non-stationary Krylov subspace techniques as an iterative solver will be performed as this has been shown to produce greater stability than source iteration. Furthermore, a stability analysis of a discontinuous finite element space-angle approach (which has been shown to provide the greatest stability) will also be investigated. Dr. B Gino Fallone is a co-founder and CEO of MagnetTx Oncology Solutions (under discussions to license Alberta bi-planar linac MR for commercialization)

  19. Experimental validation of an OSEM-type iterative reconstruction algorithm for inverse geometry computed tomography

    NASA Astrophysics Data System (ADS)

    David, Sabrina; Burion, Steve; Tepe, Alan; Wilfley, Brian; Menig, Daniel; Funk, Tobias

    2012-03-01

    Iterative reconstruction methods have emerged as a promising avenue to reduce dose in CT imaging. Another, perhaps less well-known, advance has been the development of inverse geometry CT (IGCT) imaging systems, which can significantly reduce the radiation dose delivered to a patient during a CT scan compared to conventional CT systems. Here we show that IGCT data can be reconstructed using iterative methods, thereby combining two novel methods for CT dose reduction. A prototype IGCT scanner was developed using a scanning beam digital X-ray system - an inverse geometry fluoroscopy system with a 9,000 focal spot x-ray source and small photon counting detector. 90 fluoroscopic projections or "superviews" spanning an angle of 360 degrees were acquired of an anthropomorphic phantom mimicking a 1 year-old boy. The superviews were reconstructed with a custom iterative reconstruction algorithm, based on the maximum-likelihood algorithm for transmission tomography (ML-TR). The normalization term was calculated based on flat-field data acquired without a phantom. 15 subsets were used, and a total of 10 complete iterations were performed. Initial reconstructed images showed faithful reconstruction of anatomical details. Good edge resolution and good contrast-to-noise properties were observed. Overall, ML-TR reconstruction of IGCT data collected by a bench-top prototype was shown to be viable, which may be an important milestone in the further development of inverse geometry CT.

  20. How to Compute Labile Metal-Ligand Equilibria

    ERIC Educational Resources Information Center

    de Levie, Robert

    2007-01-01

    The different methods used for computing labile metal-ligand complexes, which are suitable for an iterative computer solution, are illustrated. The ligand function has allowed students to relegate otherwise tedious iterations to a computer, while retaining complete control over what is calculated.

  1. Efficient iterative method for solving the Dirac-Kohn-Sham density functional theory

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lin, Lin; Shao, Sihong; E, Weinan

    2012-11-06

    We present for the first time an efficient iterative method to directly solve the four-component Dirac-Kohn-Sham (DKS) density functional theory. Due to the existence of the negative energy continuum in the DKS operator, the existing iterative techniques for solving the Kohn-Sham systems cannot be efficiently applied to solve the DKS systems. The key component of our method is a novel filtering step (F) which acts as a preconditioner in the framework of the locally optimal block preconditioned conjugate gradient (LOBPCG) method. The resulting method, dubbed the LOBPCG-F method, is able to compute the desired eigenvalues and eigenvectors in the positive energy band without computing any state in the negative energy band. The LOBPCG-F method introduces mild extra cost compared to the standard LOBPCG method and can be easily implemented. We demonstrate our method in the pseudopotential framework with a planewave basis set which naturally satisfies the kinetic balance prescription. Numerical results for Pt$_2$, Au$_2$, TlF, and Bi$_2$Se$_3$ indicate that the LOBPCG-F method is a robust and efficient method for investigating the relativistic effect in systems containing heavy elements.
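
    As a hedged illustration of preconditioned LOBPCG iteration of the kind described above (a generic sketch using SciPy's lobpcg with a simple diagonal preconditioner, not the paper's filtering preconditioner F or a Dirac-Kohn-Sham operator; matrix, block size, and tolerances are assumed values):

      import numpy as np
      import scipy.sparse as sp
      from scipy.sparse.linalg import LinearOperator, lobpcg

      n, k = 400, 4
      A = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format="csr")

      inv_diag = 1.0 / A.diagonal()
      M = LinearOperator((n, n), matvec=lambda x: inv_diag * x)  # Jacobi preconditioner ~ A^{-1}

      rng = np.random.default_rng(0)
      X = rng.standard_normal((n, k))       # initial block of k guess vectors

      # Smallest k eigenpairs of a sparse SPD test matrix (discrete 1D Laplacian).
      eigvals, eigvecs = lobpcg(A, X, M=M, largest=False, tol=1e-8, maxiter=500)
      print(np.sort(eigvals))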

  2. More on analyzing the reflection of a laser beam by a deformed highly reflective volume Bragg grating using iteration of the beam propagation method.

    PubMed

    Shu, Hong; Mokhov, Sergiy; Zeldovich, Boris Ya; Bass, Michael

    2009-01-01

    A further extension of the iteration method for beam propagation calculation is presented that can be applied for volume Bragg gratings (VBGs) with extremely large grating strength. A reformulation of the beam propagation formulation is presented for analyzing the reflection of a laser beam by a deformed VBG. These methods will be shown to be very accurate and efficient. A VBG with generic z-dependent distortion has been analyzed using these methods.

  3. Numerical solution of Euler's equation by perturbed functionals

    NASA Technical Reports Server (NTRS)

    Dey, S. K.

    1985-01-01

    A perturbed functional iteration has been developed to solve nonlinear systems. At each iteration level it adds unique perturbation parameters to the nonlinear Gauss-Seidel iterates, which enhances its convergence properties. As convergence is approached, these parameters are damped out. Local linearization along the diagonal has been used to compute these parameters. The method requires no computation of the Jacobian or factorization of matrices. Analysis of convergence depends on properties of certain contraction-type mappings, known as D-mappings. In this article, application of this method to solve an implicit finite difference approximation of Euler's equation is studied. Some representative results for the well-known shock tube problem and compressible flows in a nozzle are given.

  4. Migration without migraines

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lines, L.; Burton, A.; Lu, H.X.

    Accurate velocity models are a necessity for reliable migration results. Velocity analysis generally involves the use of methods such as normal moveout analysis (NMO), seismic traveltime tomography, or iterative prestack migration. These techniques can be effective, and each has its own advantage or disadvantage. Conventional NMO methods are relatively inexpensive but basically require simplifying assumptions about geology. Tomography is a more general method but requires traveltime interpretation of prestack data. Iterative prestack depth migration is very general but is computationally expensive. In some cases, there is the opportunity to estimate vertical velocities by use of well information. The well information can be used to optimize poststack migrations, thereby eliminating some of the time and expense of iterative prestack migration. The optimized poststack migration procedure defined here computes the velocity model which minimizes the depth differences between seismic images and formation depths at the well by using a least squares inversion method. The optimization methods described in this paper will hopefully produce "migrations without migraines."

  5. An overview of NSPCG: A nonsymmetric preconditioned conjugate gradient package

    NASA Astrophysics Data System (ADS)

    Oppe, Thomas C.; Joubert, Wayne D.; Kincaid, David R.

    1989-05-01

    The most recent research-oriented software package developed as part of the ITPACK Project is called "NSPCG" since it contains many nonsymmetric preconditioned conjugate gradient procedures. It is designed to solve large sparse systems of linear algebraic equations by a variety of different iterative methods. One of the main purposes for the development of the package is to provide a common modular structure for research on iterative methods for nonsymmetric matrices. Another purpose for the development of the package is to investigate the suitability of several iterative methods for vector computers. Since the vectorizability of an iterative method depends greatly on the matrix structure, NSPCG allows great flexibility in the operator representation. The coefficient matrix can be passed in one of several different matrix data storage schemes. These sparse data formats allow matrices with a wide range of structures from highly structured ones such as those with all nonzeros along a relatively small number of diagonals to completely unstructured sparse matrices. Alternatively, the package allows the user to call the accelerators directly with user-supplied routines for performing certain matrix operations. In this case, one can use the data format from an application program and not be required to copy the matrix into one of the package formats. This is particularly advantageous when memory space is limited. Some of the basic preconditioners that are available are point methods such as Jacobi, Incomplete LU Decomposition and Symmetric Successive Overrelaxation as well as block and multicolor preconditioners. The user can select from a large collection of accelerators such as Conjugate Gradient (CG), Chebyshev (SI, for semi-iterative), Generalized Minimal Residual (GMRES), Biconjugate Gradient Squared (BCGS) and many others. The package is modular so that almost any accelerator can be used with almost any preconditioner.

  6. Improvements to image quality using hybrid and model-based iterative reconstructions: a phantom study.

    PubMed

    Aurumskjöld, Marie-Louise; Ydström, Kristina; Tingberg, Anders; Söderberg, Marcus

    2017-01-01

    The number of computed tomography (CT) examinations is increasing, leading to an increase in total patient exposure. It is therefore important to optimize CT scan imaging conditions in order to reduce the radiation dose. The introduction of iterative reconstruction methods has enabled an improvement in image quality and a reduction in radiation dose. The purpose of this study was to investigate how image quality depends on the reconstruction method and to discuss patient dose reduction resulting from the use of hybrid and model-based iterative reconstruction. An image quality phantom (Catphan® 600) and an anthropomorphic torso phantom were examined on a Philips Brilliance iCT. The image quality was evaluated in terms of CT numbers, noise, noise power spectra (NPS), contrast-to-noise ratio (CNR), low-contrast resolution, and spatial resolution for different scan parameters and dose levels. The images were reconstructed using filtered back projection (FBP) and different settings of hybrid (iDose4) and model-based (IMR) iterative reconstruction methods. iDose4 decreased the noise by 15-45% compared with FBP, depending on the level of iDose4. IMR reduced the noise even further, by 60-75% compared to FBP. The results are independent of dose. The NPS showed changes in the noise distribution for different reconstruction methods. The low-contrast resolution and CNR were improved with iDose4, and the improvement was even greater with IMR. There is great potential to reduce noise and thereby improve image quality by using hybrid or, in particular, model-based iterative reconstruction methods, or to lower the radiation dose and maintain image quality. © The Foundation Acta Radiologica 2016.

  7. A self-adapting system for the automated detection of inter-ictal epileptiform discharges.

    PubMed

    Lodder, Shaun S; van Putten, Michel J A M

    2014-01-01

    Scalp EEG remains the standard clinical procedure for the diagnosis of epilepsy. Manual detection of inter-ictal epileptiform discharges (IEDs) is slow and cumbersome, and few automated methods are used to assist in practice. This is mostly due to low sensitivities, high false positive rates, or a lack of trust in the automated method. In this study we aim to find a solution that will make computer-assisted detection more efficient than conventional methods, while preserving the detection certainty of a manual search. Our solution consists of two phases. First, a detection phase finds all events similar to epileptiform activity by using a large database of template waveforms. Individual template detections are combined to form "IED nominations", each with a corresponding certainty value based on the reliability of their contributing templates. The second phase uses the ten nominations with highest certainty and presents them to the reviewer one by one for confirmation. Confirmations are used to update certainty values of the remaining nominations, and another iteration is performed where ten nominations with the highest certainty are presented. This continues until the reviewer is satisfied with what has been seen. Reviewer feedback is also used to update template accuracies globally and improve future detections. Using the described method and fifteen evaluation EEGs (241 IEDs), one third of all inter-ictal events were shown after one iteration, half after two iterations, and 74%, 90%, and 95% after 5, 10 and 15 iterations respectively. Reviewing fifteen iterations for the 20-30 min recordings took approximately 5 min. The proposed method shows a practical approach for combining automated detection with visual searching for inter-ictal epileptiform activity. Further evaluation is needed to verify its clinical feasibility and measure the added value it presents.

  8. A Kernel-free Boundary Integral Method for Elliptic Boundary Value Problems ⋆

    PubMed Central

    Ying, Wenjun; Henriquez, Craig S.

    2013-01-01

    This paper presents a class of kernel-free boundary integral (KFBI) methods for general elliptic boundary value problems (BVPs). The boundary integral equations reformulated from the BVPs are solved iteratively with the GMRES method. During the iteration, the boundary and volume integrals involving Green's functions are approximated by structured grid-based numerical solutions, which avoids the need to know the analytical expressions of Green's functions. The KFBI method assumes that the larger regular domain, which embeds the original complex domain, can be easily partitioned into a hierarchy of structured grids so that fast elliptic solvers such as the fast Fourier transform (FFT) based Poisson/Helmholtz solvers or those based on geometric multigrid iterations are applicable. The structured grid-based solutions are obtained with standard finite difference method (FDM) or finite element method (FEM), where the right hand side of the resulting linear system is appropriately modified at irregular grid nodes to recover the formal accuracy of the underlying numerical scheme. Numerical results demonstrating the efficiency and accuracy of the KFBI methods are presented. It is observed that the number of GMRES iterations used by the method for solving isotropic and moderately anisotropic BVPs is independent of the sizes of the grids that are employed to approximate the boundary and volume integrals. With the standard second-order FEMs and FDMs, the KFBI method shows a second-order convergence rate in accuracy for all of the tested Dirichlet/Neumann BVPs when the anisotropy of the diffusion tensor is not too strong. PMID:23519600
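
    A small matrix-free GMRES sketch in the spirit described above: the operator is available only through its action on a vector, here a stand-in one-dimensional operator rather than the paper's boundary integral operator; grid size and tolerance are assumed values.

      import numpy as np
      from scipy.sparse.linalg import LinearOperator, gmres

      n = 64
      h = 1.0 / (n + 1)

      def apply_operator(u):
          # Action of (I - d^2/dx^2) on a grid function with zero Dirichlet BCs;
          # an assumed toy operator standing in for the boundary integral operator.
          up = np.concatenate(([0.0], u, [0.0]))
          return u - (up[2:] - 2.0 * up[1:-1] + up[:-2]) / h**2

      A = LinearOperator((n, n), matvec=apply_operator)
      b = np.ones(n)

      # Full (non-restarted) GMRES; only matvecs are needed, never the matrix itself.
      x, info = gmres(A, b, atol=1e-10, restart=n, maxiter=200)
      print("gmres info:", info, " residual:", np.linalg.norm(apply_operator(x) - b))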

  9. A Huygens immersed-finite-element particle-in-cell method for modeling plasma-surface interactions with moving interface

    NASA Astrophysics Data System (ADS)

    Cao, Huijun; Cao, Yong; Chu, Yuchuan; He, Xiaoming; Lin, Tao

    2018-06-01

    Surface evolution is an unavoidable issue in engineering plasma applications. In this article an iterative method for modeling plasma-surface interactions with moving interface is proposed and validated. In this method, the plasma dynamics is simulated by an immersed finite element particle-in-cell (IFE-PIC) method, and the surface evolution is modeled by the Huygens wavelet method which is coupled with the iteration of the IFE-PIC method. Numerical experiments, including prototypical engineering applications, such as the erosion of Hall thruster channel wall, are presented to demonstrate features of this Huygens IFE-PIC method for simulating the dynamic plasma-surface interactions.

  10. Dynamics of a new family of iterative processes for quadratic polynomials

    NASA Astrophysics Data System (ADS)

    Gutiérrez, J. M.; Hernández, M. A.; Romero, N.

    2010-03-01

    In this work we show the presence of the well-known Catalan numbers in the study of the convergence and the dynamical behavior of a family of iterative methods for solving nonlinear equations. In fact, we introduce a family of methods, depending on a parameter m. These methods reach the order of convergence m+2 when they are applied to quadratic polynomials with different roots. Newton's and Chebyshev's methods appear as particular choices of the family for m=0 and m=1, respectively. We make both analytical and graphical studies of these methods, which give rise to rational functions defined in the extended complex plane. Firstly, we prove that the coefficients of the aforementioned family of iterative processes can be written in terms of the Catalan numbers. Secondly, we make an incursion into its dynamical behavior. In fact, we show that the rational maps related to these methods can be written in terms of the entries of the Catalan triangle. Next we analyze its general convergence, by including some computer plots showing the intricate structure of the Universal Julia sets associated with the methods.
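
    A tiny sketch of the two classical members named above, applied to an assumed quadratic p(z) = z^2 - 1 with roots +1 and -1 (Newton corresponds to m=0, Chebyshev to m=1; this is not the general m-parameter family itself):

      # Newton's and Chebyshev's iterations on a quadratic polynomial with two
      # distinct roots; the starting point is an assumed complex value.
      def p(z):   return z**2 - 1.0
      def dp(z):  return 2.0 * z
      def d2p(z): return 2.0

      def newton_step(z):
          return z - p(z) / dp(z)

      def chebyshev_step(z):
          L = p(z) * d2p(z) / dp(z) ** 2          # degree of logarithmic convexity
          return z - (1.0 + 0.5 * L) * p(z) / dp(z)

      z_newton = z_cheb = 0.7 + 0.4j
      for _ in range(6):
          z_newton = newton_step(z_newton)
          z_cheb = chebyshev_step(z_cheb)

      print("Newton   :", z_newton)   # converges to the root +1 (order 2 here)
      print("Chebyshev:", z_cheb)     # converges to the root +1 (order 3 here)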

  11. Propagation-based x-ray phase contrast imaging using an iterative phase diversity technique

    NASA Astrophysics Data System (ADS)

    Carroll, Aidan J.; van Riessen, Grant A.; Balaur, Eugeniu; Dolbnya, Igor P.; Tran, Giang N.; Peele, Andrew G.

    2018-03-01

    Through the use of a phase diversity technique, we demonstrate a near-field in-line x-ray phase contrast algorithm that provides improved object reconstruction when compared to our previous iterative methods for a homogeneous sample. Like our previous methods, the new technique uses the sample refractive index distribution during the reconstruction process. The technique complements existing monochromatic and polychromatic methods and is useful in situations where experimental phase contrast data is affected by noise.

  12. An Extension of the Krieger-Li-Iafrate Approximation to the Optimized-Effective-Potential Method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wilson, B.G.

    1999-11-11

    The Krieger-Li-Iafrate approximation can be expressed as the zeroth-order result of an unstable iterative method for solving the integral equation form of the optimized-effective-potential method. By preconditioning the iterate, a first-order correction can be obtained which recovers the bulk of the quantal oscillations missing in the zeroth-order approximation. A comparison of calculated total energies is given with Krieger-Li-Iafrate, Local Density Functional, and Hyper-Hartree-Fock results for non-relativistic atoms and ions.

  13. Method for protein structure alignment

    DOEpatents

    Blankenbecler, Richard; Ohlsson, Mattias; Peterson, Carsten; Ringner, Markus

    2005-02-22

    This invention provides a method for protein structure alignment. More particularly, the present invention provides a method for identification, classification and prediction of protein structures. The present invention involves two key ingredients. First, an energy or cost function formulation of the problem simultaneously in terms of binary (Potts) assignment variables and real-valued atomic coordinates. Second, a minimization of the energy or cost function by an iterative method, where in each iteration (1) a mean field method is employed for the assignment variables and (2) exact rotation and/or translation of atomic coordinates is performed, weighted with the corresponding assignment variables.

  14. A minimization method on the basis of embedding the feasible set and the epigraph

    NASA Astrophysics Data System (ADS)

    Zabotin, I. Ya; Shulgina, O. N.; Yarullin, R. S.

    2016-11-01

    We propose a conditional minimization method for a convex nonsmooth function which belongs to the class of cutting-plane methods. During the construction of iteration points, the feasible set and the epigraph of the objective function are approximated by polyhedral sets, so the auxiliary problems of constructing iteration points are linear programming problems. During the optimization process there is an opportunity to update the sets which approximate the epigraph. These updates are performed by periodically dropping the cutting planes which form the embedding sets. Convergence of the proposed method is proved, and some realizations of the method are discussed.
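
    A generic Kelley-type cutting-plane sketch (a simplified relative of the method described above, without the epigraph-update and cut-dropping machinery): each iteration adds the linearization f(x_k) + <g_k, x - x_k> <= t as a cut and solves a linear program in the variables (x, t). The objective, box bounds, and tolerance are assumed values.

      import numpy as np
      from scipy.optimize import linprog

      f  = lambda x: np.sum(x**2) + abs(x[0] - 1.0)                   # convex, nonsmooth
      df = lambda x: 2.0 * x + np.array([np.sign(x[0] - 1.0), 0.0])   # a subgradient

      lo, hi, n = -5.0, 5.0, 2
      x = np.array([4.0, -3.0])
      cuts_A, cuts_b = [], []

      for _ in range(30):
          g = df(x)
          cuts_A.append(np.append(g, -1.0))          # cut: g.x - t <= g.x_k - f(x_k)
          cuts_b.append(g @ x - f(x))
          c = np.append(np.zeros(n), 1.0)            # LP objective: minimize t
          res = linprog(c, A_ub=np.array(cuts_A), b_ub=np.array(cuts_b),
                        bounds=[(lo, hi)] * n + [(None, None)])
          x, t = res.x[:n], res.x[n]
          if f(x) - t < 1e-6:                        # model value matches function value
              break

      print("approximate minimizer:", np.round(x, 4), " f =", round(f(x), 6))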

  15. Convergence of Newton's method for a single real equation

    NASA Technical Reports Server (NTRS)

    Campbell, C. W.

    1985-01-01

    Newton's method for finding the zeroes of a single real function is investigated in some detail. Convergence is generally checked using the Contraction Mapping Theorem, which yields sufficient but not necessary conditions for convergence of the general single-point iteration method. The resulting convergence intervals are frequently considerably smaller than the actual convergence zones. For a specific single-point iteration method, such as Newton's method, better estimates of the regions of convergence should be possible. A technique is described which, under certain conditions (frequently satisfied by well-behaved functions), gives much larger zones where convergence is guaranteed.
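
    A short sketch of the viewpoint described above, with an assumed example f(x) = cos(x) and trial interval [1, 2]: Newton's method is the single-point iteration x_{k+1} = g(x_k) with g(x) = x - f(x)/f'(x), and the Contraction Mapping Theorem's sufficient condition is max |g'(x)| < 1 on the interval.

      import numpy as np

      f, df, d2f = np.cos, lambda x: -np.sin(x), lambda x: -np.cos(x)

      def g(x):               # Newton iteration function
          return x - f(x) / df(x)

      def dg(x):              # g'(x) = f(x) f''(x) / f'(x)^2
          return f(x) * d2f(x) / df(x) ** 2

      # Sufficient-condition check on a trial interval around the root pi/2.
      xs = np.linspace(1.0, 2.0, 201)
      print("max |g'| on [1, 2]:", np.max(np.abs(dg(xs))))   # < 1 => contraction

      x = 1.2
      for _ in range(6):
          x = g(x)
      print("root estimate:", x, " (pi/2 =", np.pi / 2, ")")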

  16. Virtual fringe projection system with nonparallel illumination based on iteration

    NASA Astrophysics Data System (ADS)

    Zhou, Duo; Wang, Zhangying; Gao, Nan; Zhang, Zonghua; Jiang, Xiangqian

    2017-06-01

    Fringe projection profilometry has been widely applied in many fields. To set up an ideal measuring system, a virtual fringe projection technique has been studied to assist in the design of hardware configurations. However, existing virtual fringe projection systems use parallel illumination and have a fixed optical framework. This paper presents a virtual fringe projection system with nonparallel illumination. Using an iterative method to calculate intersection points between rays and reference planes or object surfaces, the proposed system can simulate projected fringe patterns and captured images. A new explicit calibration method has been presented to validate the precision of the system. Simulated results indicate that the proposed iterative method outperforms previous systems. Our virtual system can be applied to error analysis, algorithm optimization, and help operators to find ideal system parameter settings for actual measurements.

  17. Improved cryoEM-Guided Iterative Molecular Dynamics–Rosetta Protein Structure Refinement Protocol for High Precision Protein Structure Prediction

    PubMed Central

    2016-01-01

    Many excellent methods exist that incorporate cryo-electron microscopy (cryoEM) data to constrain computational protein structure prediction and refinement. Previously, it was shown that iteration of two such orthogonal sampling and scoring methods – Rosetta and molecular dynamics (MD) simulations – facilitated exploration of conformational space in principle. Here, we go beyond a proof-of-concept study and address significant remaining limitations of the iterative MD–Rosetta protein structure refinement protocol. Specifically, all parts of the iterative refinement protocol are now guided by medium-resolution cryoEM density maps, and previous knowledge about the native structure of the protein is no longer necessary. Models are identified solely based on score or simulation time. All four benchmark proteins showed substantial improvement through three rounds of the iterative refinement protocol. The best-scoring final models of two proteins had sub-Ångstrom RMSD to the native structure over residues in secondary structure elements. Molecular dynamics was most efficient in refining secondary structure elements and was thus highly complementary to the Rosetta refinement which is most powerful in refining side chains and loop regions. PMID:25883538

  18. Eigenvalue Solvers for Modeling Nuclear Reactors on Leadership Class Machines

    DOE PAGES

    Slaybaugh, R. N.; Ramirez-Zweiger, M.; Pandya, Tara; ...

    2018-02-20

    In this paper, three complementary methods have been implemented in the code Denovo that accelerate neutral particle transport calculations with methods that use leadership-class computers fully and effectively: a multigroup block (MG) Krylov solver, a Rayleigh quotient iteration (RQI) eigenvalue solver, and a multigrid in energy (MGE) preconditioner. The MG Krylov solver converges more quickly than Gauss-Seidel and enables energy decomposition such that Denovo can scale to hundreds of thousands of cores. RQI should converge in fewer iterations than power iteration (PI) for large and challenging problems. RQI creates shifted systems that would not be tractable without the MG Krylov solver. It also creates ill-conditioned matrices. The MGE preconditioner reduces iteration count significantly when used with RQI and takes advantage of the new energy decomposition such that it can scale efficiently. Each individual method has been described before, but this is the first time they have been demonstrated to work together effectively. The combination of solvers enables the RQI eigenvalue solver to work better than the other available solvers for large reactor problems on leadership-class machines. Using these methods together, RQI converged in fewer iterations and in less time than PI for a full pressurized water reactor core. These solvers also performed better than an Arnoldi eigenvalue solver for a reactor benchmark problem when energy decomposition is needed. The MG Krylov, MGE preconditioner, and RQI solver combination also scales well in energy. Finally, this solver set is a strong choice for very large and challenging problems.
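
    A toy Rayleigh quotient iteration sketch on a small dense symmetric matrix (assumed values throughout; the production solver works on huge shifted transport systems via the MG Krylov solver, which is not reproduced here):

      import numpy as np

      rng = np.random.default_rng(1)
      A = rng.standard_normal((50, 50))
      A = (A + A.T) / 2.0                         # symmetric test matrix

      x = rng.standard_normal(50)
      x /= np.linalg.norm(x)
      mu = x @ A @ x                              # initial Rayleigh quotient

      for _ in range(15):
          if np.linalg.norm(A @ x - mu * x) < 1e-10:
              break
          y = np.linalg.solve(A - mu * np.eye(50), x)   # shifted (nearly singular) solve
          x = y / np.linalg.norm(y)
          mu = x @ A @ x                          # updated eigenvalue estimate

      print("RQI eigenvalue estimate:", mu)
      print("residual norm          :", np.linalg.norm(A @ x - mu * x))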

  19. Eigenvalue Solvers for Modeling Nuclear Reactors on Leadership Class Machines

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Slaybaugh, R. N.; Ramirez-Zweiger, M.; Pandya, Tara

    In this paper, three complementary methods have been implemented in the code Denovo that accelerate neutral particle transport calculations with methods that use leadership-class computers fully and effectively: a multigroup block (MG) Krylov solver, a Rayleigh quotient iteration (RQI) eigenvalue solver, and a multigrid in energy (MGE) preconditioner. The MG Krylov solver converges more quickly than Gauss-Seidel and enables energy decomposition such that Denovo can scale to hundreds of thousands of cores. RQI should converge in fewer iterations than power iteration (PI) for large and challenging problems. RQI creates shifted systems that would not be tractable without the MG Krylov solver. It also creates ill-conditioned matrices. The MGE preconditioner reduces iteration count significantly when used with RQI and takes advantage of the new energy decomposition such that it can scale efficiently. Each individual method has been described before, but this is the first time they have been demonstrated to work together effectively. The combination of solvers enables the RQI eigenvalue solver to work better than the other available solvers for large reactor problems on leadership-class machines. Using these methods together, RQI converged in fewer iterations and in less time than PI for a full pressurized water reactor core. These solvers also performed better than an Arnoldi eigenvalue solver for a reactor benchmark problem when energy decomposition is needed. The MG Krylov, MGE preconditioner, and RQI solver combination also scales well in energy. Finally, this solver set is a strong choice for very large and challenging problems.

  20. From the Rendering Equation to Stratified Light Transport Inversion

    DTIC Science & Technology

    2010-12-09

    iteratively. These approaches relate closely to the radiosity method for diffuse global illumination in forward rendering (Hanrahan et al., 1991; Gortler et ... currently simply use sparse matrices to represent T, we are also interested in exploring connections with hierarchical and wavelet radiosity as in ... Seidel iterative methods used in radiosity. 2.4 Inverse Light Transport. Previous work on inverse rendering has considered inversion of the direct

  1. Inspection Methods in Programming.

    DTIC Science & Technology

    1981-06-01

    Counting is a specialization of Iterative-generation in which the generating function is Oneplus. Waters' second category of plan building method ... is Oneplus and the initial input is 1. ... TemporalPlan counting: specialization iterative-generation; roles .action (a function), .tail (counting); constraints .action.op = oneplus and .action.input = 1. The Iterative-application

  2. Numerical calculation of the internal flow field in a centrifugal compressor impeller

    NASA Technical Reports Server (NTRS)

    Walitt, L.; Harp, J. L., Jr.; Liu, C. Y.

    1975-01-01

    An iterative numerical method has been developed for the calculation of steady, three-dimensional, viscous, compressible flow fields in centrifugal compressor impellers. The computer code, which embodies the method, solves the steady three dimensional, compressible Navier-Stokes equations in rotating, curvilinear coordinates. The solution takes place on blade-to-blade surfaces of revolution which move from the hub to the shroud during each iteration.

  3. Maxwell iteration for the lattice Boltzmann method with diffusive scaling

    NASA Astrophysics Data System (ADS)

    Zhao, Weifeng; Yong, Wen-An

    2017-03-01

    In this work, we present an alternative derivation of the Navier-Stokes equations from Bhatnagar-Gross-Krook models of the lattice Boltzmann method with diffusive scaling. This derivation is based on the Maxwell iteration and can expose certain important features of the lattice Boltzmann solutions. Moreover, it will be seen to be much more straightforward and logically clearer than the existing approaches including the Chapman-Enskog expansion.

  4. Autonomous Method and System for Minimizing the Magnitude of Plasma Discharge Current Oscillations in a Hall Effect Plasma Device

    NASA Technical Reports Server (NTRS)

    Hruby, Vladimir (Inventor); Demmons, Nathaniel (Inventor); Ehrbar, Eric (Inventor); Pote, Bruce (Inventor); Rosenblad, Nathan (Inventor)

    2014-01-01

    An autonomous method for minimizing the magnitude of plasma discharge current oscillations in a Hall effect plasma device includes iteratively measuring plasma discharge current oscillations of the plasma device and iteratively adjusting the magnet current delivered to the plasma device in response to measured plasma discharge current oscillations to reduce the magnitude of the plasma discharge current oscillations.

  5. The truncated conjugate gradient (TCG), a non-iterative/fixed-cost strategy for computing polarization in molecular dynamics: Fast evaluation of analytical forces

    NASA Astrophysics Data System (ADS)

    Aviat, Félix; Lagardère, Louis; Piquemal, Jean-Philip

    2017-10-01

    In a recent paper [F. Aviat et al., J. Chem. Theory Comput. 13, 180-190 (2017)], we proposed the Truncated Conjugate Gradient (TCG) approach to compute the polarization energy and forces in polarizable molecular simulations. The method consists in truncating the conjugate gradient algorithm at a fixed predetermined order leading to a fixed computational cost and can thus be considered "non-iterative." This gives the possibility to derive analytical forces avoiding the usual energy conservation (i.e., drifts) issues occurring with iterative approaches. A key point concerns the evaluation of the analytical gradients, which is more complex than that with a usual solver. In this paper, after reviewing the present state of the art of polarization solvers, we detail a viable strategy for the efficient implementation of the TCG calculation. The complete cost of the approach is then measured as it is tested using a multi-time step scheme and compared to timings using usual iterative approaches. We show that the TCG methods are more efficient than traditional techniques, making it a method of choice for future long molecular dynamics simulations using polarizable force fields where energy conservation matters. We detail the various steps required for the implementation of the complete method by software developers.
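
    A hedged sketch of the truncation idea (plain conjugate gradient stopped after a fixed, predetermined number of steps; the matrix below is an assumed SPD stand-in, not a polarization matrix, and the paper's analytical-force machinery is not reproduced):

      import numpy as np

      def truncated_cg(A, b, order=3):
          # CG truncated at a fixed order: fixed cost per call, no tolerance test.
          x = np.zeros_like(b)
          r = b.copy()                     # residual for x0 = 0
          p = r.copy()
          for _ in range(order):
              Ap = A @ p
              alpha = (r @ r) / (p @ Ap)
              x += alpha * p
              r_new = r - alpha * Ap
              beta = (r_new @ r_new) / (r @ r)
              p = r_new + beta * p
              r = r_new
          return x

      rng = np.random.default_rng(0)
      B = rng.standard_normal((30, 30))
      A = B @ B.T + 30 * np.eye(30)        # well-conditioned SPD test matrix
      b = rng.standard_normal(30)

      for k in (1, 2, 3, 5):
          err = np.linalg.norm(truncated_cg(A, b, order=k) - np.linalg.solve(A, b))
          print(f"TCG order {k}: error = {err:.2e}")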

  6. The truncated conjugate gradient (TCG), a non-iterative/fixed-cost strategy for computing polarization in molecular dynamics: Fast evaluation of analytical forces.

    PubMed

    Aviat, Félix; Lagardère, Louis; Piquemal, Jean-Philip

    2017-10-28

    In a recent paper [F. Aviat et al., J. Chem. Theory Comput. 13, 180-190 (2017)], we proposed the Truncated Conjugate Gradient (TCG) approach to compute the polarization energy and forces in polarizable molecular simulations. The method consists in truncating the conjugate gradient algorithm at a fixed predetermined order leading to a fixed computational cost and can thus be considered "non-iterative." This gives the possibility to derive analytical forces avoiding the usual energy conservation (i.e., drifts) issues occurring with iterative approaches. A key point concerns the evaluation of the analytical gradients, which is more complex than that with a usual solver. In this paper, after reviewing the present state of the art of polarization solvers, we detail a viable strategy for the efficient implementation of the TCG calculation. The complete cost of the approach is then measured as it is tested using a multi-time step scheme and compared to timings using usual iterative approaches. We show that the TCG methods are more efficient than traditional techniques, making it a method of choice for future long molecular dynamics simulations using polarizable force fields where energy conservation matters. We detail the various steps required for the implementation of the complete method by software developers.

  7. Improving cluster-based missing value estimation of DNA microarray data.

    PubMed

    Brás, Lígia P; Menezes, José C

    2007-06-01

    We present a modification of the weighted K-nearest neighbours imputation method (KNNimpute) for missing values (MVs) estimation in microarray data based on the reuse of estimated data. The method was called iterative KNN imputation (IKNNimpute) as the estimation is performed iteratively using the recently estimated values. The estimation efficiency of IKNNimpute was assessed under different conditions (data type, fraction and structure of missing data) by the normalized root mean squared error (NRMSE) and the correlation coefficients between estimated and true values, and compared with that of other cluster-based estimation methods (KNNimpute and sequential KNN). We further investigated the influence of imputation on the detection of differentially expressed genes using SAM by examining the differentially expressed genes that are lost after MV estimation. The performance measures give consistent results, indicating that the iterative procedure of IKNNimpute can enhance the prediction ability of cluster-based methods in the presence of high missing rates, in non-time series experiments and in data sets comprising both time series and non-time series data, because the information of the genes having MVs is used more efficiently and the iterative procedure allows refining the MV estimates. More importantly, IKNN has a smaller detrimental effect on the detection of differentially expressed genes.
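
    A simplified sketch of the iterative reuse idea (not the authors' exact IKNNimpute): start from a column-mean fill, then repeatedly re-estimate each missing entry from its K nearest rows, so each sweep reuses the values estimated in the previous one. K, the number of sweeps, and the toy data are assumed values.

      import numpy as np

      def iknn_impute(X, k=3, sweeps=5):
          X = X.astype(float)
          missing = np.isnan(X)
          col_means = np.nanmean(X, axis=0)
          filled = np.where(missing, col_means, X)       # initial guess

          for _ in range(sweeps):
              for i, j in zip(*np.nonzero(missing)):
                  diffs = filled - filled[i]             # distances to row i
                  dists = np.sqrt(np.sum(diffs**2, axis=1))
                  dists[i] = np.inf                      # exclude the row itself
                  neighbors = np.argsort(dists)[:k]
                  filled[i, j] = filled[neighbors, j].mean()
          return filled

      rng = np.random.default_rng(0)
      data = rng.standard_normal((20, 5))
      data[rng.random(data.shape) < 0.1] = np.nan        # ~10% missing at random
      print(np.round(iknn_impute(data), 3))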

  8. Image transmission system using adaptive joint source and channel decoding

    NASA Astrophysics Data System (ADS)

    Liu, Weiliang; Daut, David G.

    2005-03-01

    In this paper, an adaptive joint source and channel decoding method is designed to accelerate the convergence of the iterative log-domain sum-product decoding procedure of LDPC codes as well as to improve the reconstructed image quality. Error resilience modes are used in the JPEG2000 source codec, which makes it possible to provide useful source decoded information to the channel decoder. After each iteration, a tentative decoding is made and the channel decoded bits are then sent to the JPEG2000 decoder. Due to the error resilience modes, some bits are known to be either correct or in error. The positions of these bits are then fed back to the channel decoder. The log-likelihood ratios (LLR) of these bits are then modified by a weighting factor for the next iteration. By observing the statistics of the decoding procedure, the weighting factor is designed as a function of the channel condition. That is, for lower channel SNR, a larger factor is assigned, and vice versa. Results show that the proposed joint decoding methods can greatly reduce the number of iterations, and thereby reduce the decoding delay considerably. At the same time, this method always outperforms the non-source controlled decoding method by up to 5 dB in terms of PSNR for various reconstructed images.

  9. Solution of Dirac equation for Eckart potential and trigonometric Manning Rosen potential using asymptotic iteration method

    NASA Astrophysics Data System (ADS)

    Resita Arum, Sari; A, Suparmi; C, Cari

    2016-01-01

    The Dirac equation for the Eckart potential and trigonometric Manning Rosen potential with exact spin symmetry is obtained using an asymptotic iteration method. The combination of the two potentials is substituted into the Dirac equation, then the variables are separated into radial and angular parts. The Dirac equation is solved by using an asymptotic iteration method that can reduce the second order differential equation into a differential equation with substitution variables of hypergeometry type. The relativistic energy is calculated using Matlab 2011. This study is limited to the case of spin symmetry. With the asymptotic iteration method, the energy spectra of the relativistic equations and the equations for the orbital quantum number l can be obtained; the two are interrelated through the quantum numbers. The energy spectrum is also numerically solved using the Matlab software, where an increase in the radial quantum number nr causes the energy to decrease. The radial part and the angular part of the wave function are defined as hypergeometry functions and visualized with Matlab 2011. The results show that the disturbance of a combination of the Eckart potential and trigonometric Manning Rosen potential can change the radial part and the angular part of the wave function. Project supported by the Higher Education Project (Grant No. 698/UN27.11/PN/2015).

  10. Iterative raw measurements restoration method with penalized weighted least squares approach for low-dose CT

    NASA Astrophysics Data System (ADS)

    Takahashi, Hisashi; Goto, Taiga; Hirokawa, Koichi; Miyazaki, Osamu

    2014-03-01

    Statistical iterative reconstruction and post-log data restoration algorithms for CT noise reduction have been widely studied and these techniques have enabled us to reduce irradiation doses while maintaining image qualities. In low dose scanning, electronic noise becomes obvious and it results in some non-positive signals in raw measurements. The non-positive signal should be converted to a positive signal so that it can be log-transformed. Since conventional conversion methods do not consider local variance on the sinogram, they have difficulty controlling the strength of the filtering. Thus, in this work, we propose a method to convert the non-positive signal to a positive signal mainly by controlling the local variance. The method is implemented in two separate steps. First, an iterative restoration algorithm based on penalized weighted least squares is used to mitigate the effect of electronic noise. The algorithm preserves the local mean and reduces the local variance induced by the electronic noise. Second, raw measurements smoothed by the iterative algorithm are converted to a positive signal according to a function which replaces the non-positive signal with its local mean. In phantom studies, we confirm that the proposed method properly preserves the local mean and reduces the variance induced by the electronic noise. Our technique results in dramatically reduced shading artifacts and can also successfully cooperate with the post-log data filter to reduce streak artifacts.
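
    A small sketch of the second, conversion step described above (replacing non-positive measurements with a local mean so the data can be log-transformed); the window size and toy sinogram are assumed values, and the penalized weighted least squares restoration step is not reproduced.

      import numpy as np
      from scipy.ndimage import uniform_filter

      def replace_nonpositive_with_local_mean(raw, window=5, eps=1e-6):
          # Local mean over positive neighbours only, then substituted where raw <= 0.
          positive = np.where(raw > 0, raw, 0.0)
          counts = uniform_filter((raw > 0).astype(float), size=window)
          local_mean = uniform_filter(positive, size=window) / np.maximum(counts, eps)
          return np.where(raw > 0, raw, np.maximum(local_mean, eps))

      rng = np.random.default_rng(0)
      sinogram = rng.poisson(5.0, size=(64, 128)).astype(float)
      sinogram += rng.normal(0.0, 3.0, size=sinogram.shape)   # electronic noise
      fixed = replace_nonpositive_with_local_mean(sinogram)

      print("non-positive before:", int(np.sum(sinogram <= 0)))
      print("non-positive after :", int(np.sum(fixed <= 0)))
      log_data = np.log(fixed)                                # now safe to log-transform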

  11. Iterative reconstruction methods in atmospheric tomography: FEWHA, Kaczmarz and Gradient-based algorithm

    NASA Astrophysics Data System (ADS)

    Ramlau, R.; Saxenhuber, D.; Yudytskiy, M.

    2014-07-01

    The problem of atmospheric tomography arises in ground-based telescope imaging with adaptive optics (AO), where one aims to compensate in real-time for the rapidly changing optical distortions in the atmosphere. Many of these systems depend on a sufficient reconstruction of the turbulence profiles in order to obtain a good correction. Due to steadily growing telescope sizes, there is a strong increase in the computational load for atmospheric reconstruction with current methods, first and foremost the MVM. In this paper we present and compare three novel iterative reconstruction methods. The first iterative approach is the Finite Element-Wavelet Hybrid Algorithm (FEWHA), which combines wavelet-based techniques and conjugate gradient schemes to efficiently and accurately tackle the problem of atmospheric reconstruction. The method is extremely fast, highly flexible and yields superior quality. Another novel iterative reconstruction algorithm is the three-step approach, which decouples the problem into the reconstruction of the incoming wavefronts, the reconstruction of the turbulent layers (atmospheric tomography), and the computation of the best mirror correction (fitting step). For the atmospheric tomography problem within the three-step approach, the Kaczmarz algorithm and the Gradient-based method have been developed. We present a detailed comparison of our reconstructors both in terms of quality and speed performance in the context of a Multi-Object Adaptive Optics (MOAO) system for the E-ELT setting on OCTOPUS, the ESO end-to-end simulation tool.
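
    A minimal Kaczmarz sketch on a small consistent linear system (the basic row-projection form of one of the reconstructors compared above; sizes and sweep count are assumed values, and the real tomography operator is far larger and sparse):

      import numpy as np

      def kaczmarz(A, b, sweeps=50):
          # Cycle through the rows, projecting onto each row's hyperplane {x : a.x = b_i}.
          x = np.zeros(A.shape[1])
          for _ in range(sweeps):
              for i in range(A.shape[0]):
                  a = A[i]
                  x += (b[i] - a @ x) / (a @ a) * a
          return x

      rng = np.random.default_rng(0)
      A = rng.standard_normal((80, 40))                  # overdetermined consistent system
      x_true = rng.standard_normal(40)
      b = A @ x_true

      x = kaczmarz(A, b)
      print("relative error:", np.linalg.norm(x - x_true) / np.linalg.norm(x_true))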

  12. Methodology for Sensitivity Analysis, Approximate Analysis, and Design Optimization in CFD for Multidisciplinary Applications

    NASA Technical Reports Server (NTRS)

    Taylor, Arthur C., III; Hou, Gene W.

    1996-01-01

    An incremental iterative formulation, together with the well-known spatially split approximate-factorization algorithm, is presented for solving the large, sparse systems of linear equations that are associated with aerodynamic sensitivity analysis. This formulation is also known as the 'delta' or 'correction' form. For the smaller two-dimensional problems, a direct method can be applied to solve these linear equations in either the standard or the incremental form, in which case the two are equivalent. However, iterative methods are needed for larger two-dimensional and three-dimensional applications because direct methods require more computer memory than is currently available. Iterative methods for solving these equations in the standard form are generally unsatisfactory due to an ill-conditioned coefficient matrix; this problem is overcome when these equations are cast in the incremental form. The methodology is successfully implemented and tested using an upwind cell-centered finite-volume formulation applied in two dimensions to the thin-layer Navier-Stokes equations for external flow over an airfoil. In three dimensions this methodology is demonstrated with a marching-solution algorithm for the Euler equations to calculate supersonic flow over the High-Speed Civil Transport configuration (HSCT 24E). The sensitivity derivatives obtained with the incremental iterative method from a marching Euler code are used in a design-improvement study of the HSCT configuration that involves thickness, camber, and planform design variables.
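
    A small sketch of the incremental ("delta" or "correction") form described above: rather than attacking A x = b directly, each step solves an approximate, better-conditioned operator M for a correction driven by the current residual, M dx = b - A x, and updates x <- x + dx. The matrix, its tridiagonal approximation, and the tolerance are assumed values.

      import numpy as np

      rng = np.random.default_rng(0)
      n = 60
      A = np.diag(4.0 + rng.random(n))
      A += np.diag(rng.random(n - 1), 1) + np.diag(rng.random(n - 1), -1)
      A += 0.05 * rng.standard_normal((n, n))            # weak full coupling
      b = rng.standard_normal(n)

      M = np.triu(np.tril(A, 1), -1)                     # tridiagonal approximation of A

      x = np.zeros(n)
      for k in range(50):
          residual = b - A @ x
          if np.linalg.norm(residual) < 1e-10:
              break
          dx = np.linalg.solve(M, residual)              # incremental ("delta") correction
          x += dx

      print("iterations:", k, " final residual:", np.linalg.norm(b - A @ x))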

  13. Stability analysis of a deterministic dose calculation for MRI-guided radiotherapy.

    PubMed

    Zelyak, O; Fallone, B G; St-Aubin, J

    2017-12-14

    Modern effort in radiotherapy to address the challenges of tumor localization and motion has led to the development of MRI guided radiotherapy technologies. Accurate dose calculations must properly account for the effects of the MRI magnetic fields. Previous work has investigated the accuracy of a deterministic linear Boltzmann transport equation (LBTE) solver that includes magnetic field, but not the stability of the iterative solution method. In this work, we perform a stability analysis of this deterministic algorithm including an investigation of the convergence rate dependencies on the magnetic field, material density, energy, and anisotropy expansion. The iterative convergence rate of the continuous and discretized LBTE including magnetic fields is determined by analyzing the spectral radius using Fourier analysis for the stationary source iteration (SI) scheme. The spectral radius is calculated when the magnetic field is included (1) as a part of the iteration source, and (2) inside the streaming-collision operator. The non-stationary Krylov subspace solver GMRES is also investigated as a potential method to accelerate the iterative convergence, and an angular parallel computing methodology is investigated as a method to enhance the efficiency of the calculation. SI is found to be unstable when the magnetic field is part of the iteration source, but unconditionally stable when the magnetic field is included in the streaming-collision operator. The discretized LBTE with magnetic fields using a space-angle upwind stabilized discontinuous finite element method (DFEM) was also found to be unconditionally stable, but the spectral radius rapidly reaches unity for very low-density media and increasing magnetic field strengths indicating arbitrarily slow convergence rates. However, GMRES is shown to significantly accelerate the DFEM convergence rate showing only a weak dependence on the magnetic field. In addition, the use of an angular parallel computing strategy is shown to potentially increase the efficiency of the dose calculation.

  14. Stability analysis of a deterministic dose calculation for MRI-guided radiotherapy

    NASA Astrophysics Data System (ADS)

    Zelyak, O.; Fallone, B. G.; St-Aubin, J.

    2018-01-01

    Modern effort in radiotherapy to address the challenges of tumor localization and motion has led to the development of MRI guided radiotherapy technologies. Accurate dose calculations must properly account for the effects of the MRI magnetic fields. Previous work has investigated the accuracy of a deterministic linear Boltzmann transport equation (LBTE) solver that includes magnetic field, but not the stability of the iterative solution method. In this work, we perform a stability analysis of this deterministic algorithm including an investigation of the convergence rate dependencies on the magnetic field, material density, energy, and anisotropy expansion. The iterative convergence rate of the continuous and discretized LBTE including magnetic fields is determined by analyzing the spectral radius using Fourier analysis for the stationary source iteration (SI) scheme. The spectral radius is calculated when the magnetic field is included (1) as a part of the iteration source, and (2) inside the streaming-collision operator. The non-stationary Krylov subspace solver GMRES is also investigated as a potential method to accelerate the iterative convergence, and an angular parallel computing methodology is investigated as a method to enhance the efficiency of the calculation. SI is found to be unstable when the magnetic field is part of the iteration source, but unconditionally stable when the magnetic field is included in the streaming-collision operator. The discretized LBTE with magnetic fields using a space-angle upwind stabilized discontinuous finite element method (DFEM) was also found to be unconditionally stable, but the spectral radius rapidly reaches unity for very low-density media and increasing magnetic field strengths indicating arbitrarily slow convergence rates. However, GMRES is shown to significantly accelerate the DFEM convergence rate showing only a weak dependence on the magnetic field. In addition, the use of an angular parallel computing strategy is shown to potentially increase the efficiency of the dose calculation.

  15. Corrigendum to "Stability analysis of a deterministic dose calculation for MRI-guided radiotherapy".

    PubMed

    Zelyak, Oleksandr; Fallone, B Gino; St-Aubin, Joel

    2018-03-12

    Modern effort in radiotherapy to address the challenges of tumor localization and motion has led to the development of MRI guided radiotherapy technologies. Accurate dose calculations must properly account for the effects of the MRI magnetic fields. Previous work has investigated the accuracy of a deterministic linear Boltzmann transport equation (LBTE) solver that includes magnetic field, but not the stability of the iterative solution method. In this work, we perform a stability analysis of this deterministic algorithm including an investigation of the convergence rate dependencies on the magnetic field, material density, energy, and anisotropy expansion. The iterative convergence rate of the continuous and discretized LBTE including magnetic fields is determined by analyzing the spectral radius using Fourier analysis for the stationary source iteration (SI) scheme. The spectral radius is calculated when the magnetic field is included (1) as a part of the iteration source, and (2) inside the streaming-collision operator. The non-stationary Krylov subspace solver GMRES is also investigated as a potential method to accelerate the iterative convergence, and an angular parallel computing methodology is investigated as a method to enhance the efficiency of the calculation. SI is found to be unstable when the magnetic field is part of the iteration source, but unconditionally stable when the magnetic field is included in the streaming-collision operator. The discretized LBTE with magnetic fields using a space-angle upwind stabilized discontinuous finite element method (DFEM) was also found to be unconditionally stable, but the spectral radius rapidly reaches unity for very low density media and increasing magnetic field strengths indicating arbitrarily slow convergence rates. However, GMRES is shown to significantly accelerate the DFEM convergence rate showing only a weak dependence on the magnetic field. In addition, the use of an angular parallel computing strategy is shown to potentially increase the efficiency of the dose calculation. © 2018 Institute of Physics and Engineering in Medicine.

  16. Modified Chebyshev Picard Iteration for Efficient Numerical Integration of Ordinary Differential Equations

    NASA Astrophysics Data System (ADS)

    Macomber, B.; Woollands, R. M.; Probe, A.; Younes, A.; Bai, X.; Junkins, J.

    2013-09-01

    Modified Chebyshev Picard Iteration (MCPI) is an iterative numerical method for approximating solutions of linear or non-linear Ordinary Differential Equations (ODEs) to obtain time histories of system state trajectories. Unlike other step-by-step differential equation solvers, the Runge-Kutta family of numerical integrators for example, MCPI approximates long arcs of the state trajectory with an iterative path approximation approach, and is ideally suited to parallel computation. Orthogonal Chebyshev Polynomials are used as basis functions during each path iteration; the integrations of the Picard iteration are then done analytically. Due to the orthogonality of the Chebyshev basis functions, the least square approximations are computed without matrix inversion; the coefficients are computed robustly from discrete inner products. As a consequence of discrete sampling and weighting adopted for the inner product definition, Runge phenomena errors are minimized near the ends of the approximation intervals. The MCPI algorithm utilizes a vector-matrix framework for computational efficiency. Additionally, all Chebyshev coefficients and integrand function evaluations are independent, meaning they can be simultaneously computed in parallel for further decreased computational cost. Over an order of magnitude speedup from traditional methods is achieved in serial processing, and an additional order of magnitude is achievable in parallel architectures. This paper presents a new MCPI library, a modular toolset designed to allow MCPI to be easily applied to a wide variety of ODE systems. Library users will not have to concern themselves with the underlying mathematics behind the MCPI method. Inputs are the boundary conditions of the dynamical system, the integrand function governing system behavior, and the desired time interval of integration, and the output is a time history of the system states over the interval of interest. Examples from the field of astrodynamics are presented to compare the output from the MCPI library to current state-of-practice numerical integration methods. It is shown that MCPI is capable of out-performing the state-of-practice in terms of computational cost and accuracy.
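
    A bare-bones Chebyshev-Picard sketch for a scalar ODE, illustrating the path-approximation idea described above (not the MCPI library's vector-matrix implementation): each Picard sweep samples f(t, y) along the current trajectory at Chebyshev nodes, fits a Chebyshev series, and integrates it analytically. The test problem, degree, and sweep count are assumed values.

      import numpy as np
      from numpy.polynomial import chebyshev as C

      f = lambda t, y: -y + np.sin(t)          # assumed test ODE, y' = f(t, y)
      t0, tf, y0 = 0.0, 5.0, 1.0
      deg = 40

      tau = np.cos(np.pi * np.arange(deg + 1) / deg)      # Chebyshev-Lobatto nodes in [-1, 1]
      t = t0 + (tau + 1.0) * (tf - t0) / 2.0              # mapped to [t0, tf]

      y = np.full_like(t, y0)                             # initial guess: constant trajectory
      for _ in range(60):                                 # Picard sweeps
          coeffs = C.chebfit(tau, f(t, y), deg)           # fit f(t, y(t)) over tau
          integ = C.chebint(coeffs, lbnd=-1.0, scl=(tf - t0) / 2.0)
          y_new = y0 + C.chebval(tau, integ)              # y(t) = y0 + integral of f from t0 to t
          if np.max(np.abs(y_new - y)) < 1e-12:
              break
          y = y_new

      exact = (y0 + 0.5) * np.exp(-t) + 0.5 * (np.sin(t) - np.cos(t))
      print("max error vs exact solution:", np.max(np.abs(y - exact)))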

  17. ITER-FEAT operation

    NASA Astrophysics Data System (ADS)

    Shimomura, Y.; Aymar, R.; Chuyanov, V. A.; Huguet, M.; Matsumoto, H.; Mizoguchi, T.; Murakami, Y.; Polevoi, A. R.; Shimada, M.; ITER Joint Central Team; ITER Home Teams

    2001-03-01

    ITER is planned to be the first fusion experimental reactor in the world operating for research in physics and engineering. The first ten years of operation will be devoted primarily to physics issues at low neutron fluence and the following ten years of operation to engineering testing at higher fluence. ITER can accommodate various plasma configurations and plasma operation modes, such as inductive high Q modes, long pulse hybrid modes and non-inductive steady state modes, with large ranges of plasma current, density, beta and fusion power, and with various heating and current drive methods. This flexibility will provide an advantage for coping with uncertainties in the physics database, in studying burning plasmas, in introducing advanced features and in optimizing the plasma performance for the different programme objectives. Remote sites will be able to participate in the ITER experiment. This concept will provide an advantage not only in operating ITER for 24 hours a day but also in involving the worldwide fusion community and in promoting scientific competition among the ITER Parties.

  18. A response to Yu et al. "A forward-backward fragment assembling algorithm for the identification of genomic amplification and deletion breakpoints using high-density single nucleotide polymorphism (SNP) array", BMC Bioinformatics 2007, 8: 145.

    PubMed

    Rueda, Oscar M; Diaz-Uriarte, Ramon

    2007-10-16

    Yu et al. (BMC Bioinformatics 2007,8: 145+) have recently compared the performance of several methods for the detection of genomic amplification and deletion breakpoints using data from high-density single nucleotide polymorphism arrays. One of the methods compared is our non-homogenous Hidden Markov Model approach. Our approach uses Markov Chain Monte Carlo for inference, but Yu et al. ran the sampler for a severely insufficient number of iterations for a Markov Chain Monte Carlo-based method. Moreover, they did not use the appropriate reference level for the non-altered state. We rerun the analysis in Yu et al. using appropriate settings for both the Markov Chain Monte Carlo iterations and the reference level. Additionally, to show how easy it is to obtain answers to additional specific questions, we have added a new analysis targeted specifically to the detection of breakpoints. The reanalysis shows that the performance of our method is comparable to that of the other methods analyzed. In addition, we can provide probabilities of a given spot being a breakpoint, something unique among the methods examined. Markov Chain Monte Carlo methods require using a sufficient number of iterations before they can be assumed to yield samples from the distribution of interest. Running our method with too small a number of iterations cannot be representative of its performance. Moreover, our analysis shows how our original approach can be easily adapted to answer specific additional questions (e.g., identify edges).

  19. Development and Evaluation of an Intuitive Operations Planning Process

    DTIC Science & Technology

    2006-03-01

    designed to be iterative and also prescribes the way in which iterations should occur. On the other hand, participants’ perceived level of trust and ... 4. Design and Method of the Experimental Evaluation of the Intuitive Planning Process ... 4.1.3 Design

  20. The second iteration of the Systems Prioritization Method: A systems prioritization and decision-aiding tool for the Waste Isolation Pilot Plant: Volume 2, Summary of technical input and model implementation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Prindle, N.H.; Mendenhall, F.T.; Trauth, K.

    1996-05-01

    The Systems Prioritization Method (SPM) is a decision-aiding tool developed by Sandia National Laboratories (SNL). SPM provides an analytical basis for supporting programmatic decisions for the Waste Isolation Pilot Plant (WIPP) to meet selected portions of the applicable US EPA long-term performance regulations. The first iteration of SPM (SPM-1), the prototype for SPM, was completed in 1994. It served as a benchmark and a test bed for developing the tools needed for the second iteration of SPM (SPM-2). SPM-2, completed in 1995, is intended for programmatic decision making. This is Volume II of the three-volume final report of the second iteration of the SPM. It describes the technical input and model implementation for SPM-2, and presents the SPM-2 technical baseline and the activities, activity outcomes, outcome probabilities, and the input parameters for SPM-2 analysis.

  1. Finite-beta equilibria for Wendelstein 7-X configurations using the Princeton Iterative Equilibrium Solver code

    NASA Astrophysics Data System (ADS)

    Arndt, S.; Merkel, P.; Monticello, D. A.; Reiman, A. H.

    1999-04-01

    Fixed- and free-boundary equilibria for Wendelstein 7-X (W7-X) [W. Lotz et al., Plasma Physics and Controlled Nuclear Fusion Research 1990 (Proc. 13th Int. Conf. Washington, DC, 1990), (International Atomic Energy Agency, Vienna, 1991), Vol. 2, p. 603] configurations are calculated using the Princeton Iterative Equilibrium Solver (PIES) [A. H. Reiman et al., Comput. Phys. Commun., 43, 157 (1986)] to deal with magnetic islands and stochastic regions. Usually, these W7-X configurations require a large number of iterations for PIES convergence. Here, two methods have been successfully tested in an attempt to decrease the number of iterations needed for convergence. First, periodic sequences of different blending parameters are used. Second, the initial guess is vastly improved by using results of the Variational Moments Equilibrium Code (VMEC) [S. P. Hirshmann et al., Phys. Fluids 26, 3553 (1983)]. Use of these two methods has allowed verification of the Hamada condition, and a tendency toward "self-healing" of islands has been observed.

  2. Fast solution of elliptic partial differential equations using linear combinations of plane waves.

    PubMed

    Pérez-Jordá, José M

    2016-02-01

    Given an arbitrary elliptic partial differential equation (PDE), a procedure for obtaining its solution is proposed based on the method of Ritz: the solution is written as a linear combination of plane waves and the coefficients are obtained by variational minimization. The PDE to be solved is cast as a system of linear equations Ax = b, where the matrix A is not sparse, which prevents the straightforward application of standard iterative methods to solve it. This lack of sparsity can be circumvented by means of a recursive bisection approach based on the fast Fourier transform, which makes it possible to implement fast versions of some stationary iterative methods (such as Gauss-Seidel) consuming O(N log N) memory and executing an iteration in O(N log² N) time, N being the number of plane waves used. In a similar way, fast versions of Krylov subspace methods and multigrid methods can also be implemented. These procedures are tested on Poisson's equation expressed in adaptive coordinates. It is found that the best results are obtained with the GMRES method using a multigrid preconditioner with Gauss-Seidel relaxation steps.
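
    As a point of reference for the stationary iterations mentioned above, the sketch below shows a plain dense Gauss-Seidel sweep. It is a minimal illustration of the basic scheme only; the paper's FFT-based acceleration for non-sparse plane-wave matrices is not reproduced, and the test matrix is a made-up diagonally dominant example.

```python
import numpy as np

def gauss_seidel(A, b, tol=1e-10, max_iter=500):
    """Plain Gauss-Seidel sweep for A x = b (A square, nonzero diagonal)."""
    n = len(b)
    x = np.zeros(n)
    for _ in range(max_iter):
        x_old = x.copy()
        for i in range(n):
            # Use already-updated entries x[:i] and old entries x[i+1:].
            sigma = A[i, :i] @ x[:i] + A[i, i + 1:] @ x[i + 1:]
            x[i] = (b[i] - sigma) / A[i, i]
        if np.linalg.norm(x - x_old) < tol * np.linalg.norm(x):
            return x
    return x

# Small diagonally dominant test system.
A = np.array([[4.0, -1.0, 0.0], [-1.0, 4.0, -1.0], [0.0, -1.0, 4.0]])
b = np.array([1.0, 2.0, 3.0])
print(gauss_seidel(A, b))
```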

  3. Programmable Iterative Optical Image And Data Processing

    NASA Technical Reports Server (NTRS)

    Jackson, Deborah J.

    1995-01-01

    Proposed method of iterative optical image and data processing overcomes limitations imposed by loss of optical power after repeated passes through many optical elements - especially, beam splitters. Involves selective, timed combination of optical wavefront phase conjugation and amplification to regenerate images in real time to compensate for losses in optical iteration loops; timing such that amplification turned on to regenerate desired image, then turned off so as not to regenerate other, undesired images or spurious light propagating through loops from unwanted reflections.

  4. Large Deviations and Quasipotential for Finite State Mean Field Interacting Particle Systems

    DTIC Science & Technology

    2014-05-01

    The conclusion then follows by applying Lemma 4.4.2. ... 4.4.1 Iterative solver: the widest neighborhood structure. We employ Gauss-Seidel ... nearest neighborhood structure described in Section 4.4.2. We use the Gauss-Seidel iterative method for our numerical experiments. The Gauss-Seidel ... x ∈ Bh, M x ∈ Sh\Bh, where M ∈ (V, ∞) is a very large number, so that the iteration (4.5.1) converges quickly. For simplicity, we restrict our ...

  5. The Battlefield Environment Division Modeling Framework (BMF). Part 1: Optimizing the Atmospheric Boundary Layer Environment Model for Cluster Computing

    DTIC Science & Technology

    2014-02-01

    idle waiting for the wavefront to reach it. To overcome this, Reeve et al. (2001) developed a scheme in analogy to the red-black Gauss-Seidel iterative ... understandable procedure calls. Parallelization of the SIMPLE iterative scheme with SIP used a red-black scheme similar to the red-black Gauss-Seidel ... scheme, the SIMPLE method, for pressure-velocity coupling. The result is a slowing convergence of the outer iterations. The red-black scheme excites a 2 ...

  6. [Comparison of different methods in dealing with HIV viral load data with diversified missing value mechanism on HIV positive MSM].

    PubMed

    Jiang, Z; Dou, Z; Song, W L; Xu, J; Wu, Z Y

    2017-11-10

    Objective: To compare the results of different methods for handling HIV viral load (VL) data with different missing value mechanisms. Methods: We used SPSS 17.0 to simulate complete and missing data with different missing value mechanisms from HIV viral load data collected from MSM in 16 cities in China in 2013. Maximum likelihood estimation using the expectation-maximization algorithm (EM), the regression method, mean imputation, the deletion method, and Markov Chain Monte Carlo (MCMC) were used to impute the missing data, respectively. The results of the different methods were compared according to distribution characteristics, accuracy and precision. Results: The HIV VL data could not be transformed into a normal distribution. All the methods performed well for data that were Missing Completely at Random (MCAR). For the other types of missing data, the regression and MCMC methods preserved the main characteristics of the original data. The means of the imputed datasets from the different methods were all close to the original one. The EM, regression method, mean imputation, and deletion method under-estimated VL, while MCMC overestimated it. Conclusion: MCMC can be used as the main imputation method for missing HIV viral load data. The imputed data can be used as a reference for mean HIV VL estimation among the investigated population.

  7. Composition of Web Services Using Markov Decision Processes and Dynamic Programming

    PubMed Central

    Uc-Cetina, Víctor; Moo-Mena, Francisco; Hernandez-Ucan, Rafael

    2015-01-01

    We propose a Markov decision process model for solving the Web service composition (WSC) problem. Iterative policy evaluation, value iteration, and policy iteration algorithms are used to experimentally validate our approach, with artificial and real data. The experimental results show the reliability of the model and the methods employed, with policy iteration being the best one in terms of the minimum number of iterations needed to estimate an optimal policy, with the highest Quality of Service attributes. Our experimental work shows that the solution of a WSC problem involving a set of 100,000 individual Web services, where a valid composition requires the selection of 1,000 services from the available set, can be computed in the worst case in less than 200 seconds, using an Intel Core i5 computer with 6 GB RAM. Moreover, a real WSC problem involving only 7 individual Web services requires less than 0.08 seconds, using the same computational power. Finally, a comparison with two popular reinforcement learning algorithms, Sarsa and Q-learning, shows that these algorithms require one to two orders of magnitude more time than policy iteration, iterative policy evaluation, and value iteration to handle WSC problems of the same complexity. PMID:25874247
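
    To make the dynamic-programming iterations above concrete, the following is a minimal sketch of value iteration on a toy two-state, two-action MDP. The transition and reward arrays are invented for illustration and bear no relation to the Web service composition instances studied in the paper.

```python
import numpy as np

def value_iteration(P, R, gamma=0.95, tol=1e-8):
    """P[a, s, s'] = transition probabilities, R[s, a] = expected rewards."""
    n_actions, n_states, _ = P.shape
    V = np.zeros(n_states)
    while True:
        # Q[s, a] = R[s, a] + gamma * sum_s' P[a, s, s'] * V[s']
        Q = R + gamma * np.einsum('ast,t->sa', P, V)
        V_new = Q.max(axis=1)
        if np.max(np.abs(V_new - V)) < tol:
            return V_new, Q.argmax(axis=1)   # optimal values and greedy policy
        V = V_new

# Toy 2-state, 2-action MDP (numbers are illustrative only).
P = np.array([[[0.9, 0.1], [0.2, 0.8]],
              [[0.5, 0.5], [0.1, 0.9]]])
R = np.array([[1.0, 0.0], [0.0, 2.0]])
V, policy = value_iteration(P, R)
print(V, policy)
```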

  8. Equivalent charge source model based iterative maximum neighbor weight for sparse EEG source localization.

    PubMed

    Xu, Peng; Tian, Yin; Lei, Xu; Hu, Xiao; Yao, Dezhong

    2008-12-01

    How to localize the neural electric activities within the brain effectively and precisely from scalp electroencephalogram (EEG) recordings is a critical issue for current studies in clinical neurology and cognitive neuroscience. In this paper, based on the charge source model and the iterative re-weighted strategy, we propose a new maximum-neighbor-weight-based iterative sparse source imaging method, termed CMOSS (Charge source model based Maximum neighbOr weight Sparse Solution). Different from the weight used in the focal underdetermined system solver (FOCUSS), where the weight for each point in the discrete solution space is independently updated in iterations, the newly designed weight for each point in each iteration is determined by the source solution of the last iteration at both the point and its neighbors. Using such a new weight, the next iteration has a bigger chance to rectify the local source location bias that existed in the previous iteration's solution. Simulation studies with comparison to FOCUSS and LORETA for various source configurations were conducted on a realistic 3-shell head model, and the results confirmed the validity of CMOSS for sparse EEG source localization. Finally, CMOSS was applied to localize sources elicited in a visual stimuli experiment, and the result was consistent with the source areas involved in visual processing reported in previous studies.
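
    For readers unfamiliar with the re-weighting strategy that CMOSS builds on, the sketch below implements a classic FOCUSS-style iteratively re-weighted minimum-norm solution of an underdetermined system. It is only the baseline scheme: the neighbor-weight modification that defines CMOSS is not reproduced, and the random test problem, the regularization constant lam, and the damping eps are illustrative assumptions.

```python
import numpy as np

def focuss(A, b, n_iter=20, eps=1e-12, lam=1e-6):
    """Iteratively re-weighted minimum-norm (FOCUSS-type) sparse solution of A x = b."""
    m, n = A.shape
    x = np.ones(n)                       # flat initial weight
    for _ in range(n_iter):
        W = np.diag(np.abs(x) + eps)     # weights taken from the previous iterate
        AW = A @ W
        # Weighted minimum-norm solution; lam regularizes the inversion.
        y = np.linalg.solve(AW @ AW.T + lam * np.eye(m), b)
        x = W @ AW.T @ y
    return x

# Underdetermined toy problem with a sparse ground truth.
rng = np.random.default_rng(0)
A = rng.standard_normal((10, 40))
x_true = np.zeros(40); x_true[[3, 17]] = [1.0, -2.0]
b = A @ x_true
print(np.round(focuss(A, b), 2))
```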

  9. Fast generating Greenberger-Horne-Zeilinger state via iterative interaction pictures

    NASA Astrophysics Data System (ADS)

    Huang, Bi-Hua; Chen, Ye-Hong; Wu, Qi-Cheng; Song, Jie; Xia, Yan

    2016-10-01

    We delve a little deeper into the construction of shortcuts to adiabatic passage for three-level systems by means of the iterative interaction picture (multiple Schrödinger dynamics). As an application example, we use the deduced iteration-based shortcuts to rapidly generate the Greenberger-Horne-Zeilinger (GHZ) state in a three-atom system with the help of quantum Zeno dynamics. Numerical simulation shows that the dynamics designed by the iterative picture method is physically feasible and that the shortcut scheme performs much better than one using conventional adiabatic passage techniques. Also, the influences of various decoherence processes are discussed by numerical simulation, and the results prove that the scheme is fast and robust against decoherence and operational imperfection.

  10. Simultaneous Localization and Mapping with Iterative Sparse Extended Information Filter for Autonomous Vehicles.

    PubMed

    He, Bo; Liu, Yang; Dong, Diya; Shen, Yue; Yan, Tianhong; Nian, Rui

    2015-08-13

    In this paper, a novel iterative sparse extended information filter (ISEIF) is proposed to solve the simultaneous localization and mapping (SLAM) problem, which is crucial for autonomous vehicles. The proposed algorithm solves the measurement update equations with iterative methods adaptively to reduce linearization errors. While keeping the scalability advantage, the consistency and accuracy of SEIF are improved. Simulations and practical experiments were carried out with both a land car benchmark and an autonomous underwater vehicle. Comparisons between iterative SEIF (ISEIF), standard EKF and SEIF are presented. All of the results convincingly show that ISEIF yields more consistent and accurate estimates compared to SEIF and preserves the scalability advantage over EKF.

  11. Group iterative methods for the solution of two-dimensional time-fractional diffusion equation

    NASA Astrophysics Data System (ADS)

    Balasim, Alla Tareq; Ali, Norhashidah Hj. Mohd.

    2016-06-01

    A variety of problems in science and engineering may be described by fractional partial differential equations (FPDEs) involving space and/or time fractional derivatives. The difference between time fractional diffusion equations and standard diffusion equations lies primarily in the time derivative. Over the last few years, iterative schemes derived from the rotated finite difference approximation have been proven to work well in solving standard diffusion equations. However, their application to the time fractional diffusion counterpart has yet to be investigated. In this paper, we present a preliminary study on the formulation and analysis of new explicit group iterative methods for solving a two-dimensional time fractional diffusion equation. These methods were derived from the standard and rotated Crank-Nicolson difference approximation formulas. Several numerical experiments were conducted to show the efficiency of the developed schemes in terms of CPU time and iteration count. At the request of all authors of the paper, an updated version of this article was published on 7 July 2016. The original version supplied to AIP Publishing contained an error in Table 1, and References 15 and 16 were incomplete. These errors have been corrected in the updated and republished article.

  12. Iterative methods for 3D implicit finite-difference migration using the complex Padé approximation

    NASA Astrophysics Data System (ADS)

    Costa, Carlos A. N.; Campos, Itamara S.; Costa, Jessé C.; Neto, Francisco A.; Schleicher, Jörg; Novais, Amélia

    2013-08-01

    Conventional implementations of 3D finite-difference (FD) migration use splitting techniques to accelerate performance and save computational cost. However, such techniques are plagued with numerical anisotropy that jeopardises the correct positioning of dipping reflectors in the directions not used for the operator splitting. We implement 3D downward continuation FD migration without splitting using a complex Padé approximation. In this way, the numerical anisotropy is eliminated at the expense of a computationally more intensive solution of a large-band linear system. We compare the performance of the iterative stabilized biconjugate gradient (BICGSTAB) and that of the multifrontal massively parallel direct solver (MUMPS). It turns out that the use of the complex Padé approximation not only stabilizes the solution, but also acts as an effective preconditioner for the BICGSTAB algorithm, reducing the number of iterations as compared to the implementation using the real Padé expansion. As a consequence, the iterative BICGSTAB method is more efficient than the direct MUMPS method when solving a single term in the Padé expansion. The results of both algorithms, here evaluated by computing the migration impulse response in the SEG/EAGE salt model, are of comparable quality.
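
    The comparison above hinges on pairing a Krylov solver with an effective preconditioner. As a rough, hedged illustration of that workflow (using SciPy, not the authors' migration code), the snippet below solves a banded stand-in system with BiCGSTAB preconditioned by an incomplete LU factorization; the matrix, its size, and the ILU choice are assumptions made purely for demonstration and do not represent the complex Padé operator.

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import bicgstab, spilu, LinearOperator

# Banded (tridiagonal) test system standing in for one term of a Pade-type expansion.
n = 2000
A = diags([-1.0, 2.5, -1.0], offsets=[-1, 0, 1], shape=(n, n), format='csc')
b = np.ones(n)

# Incomplete-LU preconditioner, playing the stabilizing role attributed to the
# complex Pade formulation in the paper.
ilu = spilu(A)
precond = LinearOperator((n, n), matvec=ilu.solve)

x, info = bicgstab(A, b, M=precond)
print(info, np.linalg.norm(A @ x - b))   # info == 0 signals convergence
```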

  13. Accelerating nuclear configuration interaction calculations through a preconditioned block iterative eigensolver

    DOE PAGES

    Shao, Meiyue; Aktulga, H.  Metin; Yang, Chao; ...

    2017-09-14

    In this paper, we describe a number of recently developed techniques for improving the performance of large-scale nuclear configuration interaction calculations on high performance parallel computers. We show the benefit of using a preconditioned block iterative method to replace the Lanczos algorithm that has traditionally been used to perform this type of computation. The rapid convergence of the block iterative method is achieved by a proper choice of starting guesses of the eigenvectors and the construction of an effective preconditioner. These acceleration techniques take advantage of special structure of the nuclear configuration interaction problem which we discuss in detail. The use of a block method also allows us to improve the concurrency of the computation, and take advantage of the memory hierarchy of modern microprocessors to increase the arithmetic intensity of the computation relative to data movement. Finally, we also discuss the implementation details that are critical to achieving high performance on massively parallel multi-core supercomputers, and demonstrate that the new block iterative solver is two to three times faster than the Lanczos based algorithm for problems of moderate sizes on a Cray XC30 system.
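
    A small, hedged sketch of the same idea using off-the-shelf SciPy routines rather than the authors' solver: LOBPCG is a preconditioned block iterative eigensolver, started here from a block of random guesses with a simple Jacobi (diagonal) preconditioner, and its lowest eigenvalues are checked against a Lanczos-type routine (eigsh). The tridiagonal test matrix is a toy stand-in, not a nuclear configuration interaction Hamiltonian.

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import lobpcg, eigsh

# Sparse symmetric stand-in matrix.
n = 3000
H = diags([np.arange(1.0, n + 1.0), -0.5 * np.ones(n - 1), -0.5 * np.ones(n - 1)],
          offsets=[0, -1, 1], format='csr')

rng = np.random.default_rng(0)
X = rng.standard_normal((n, 5))          # block of starting guesses
D_inv = diags(1.0 / H.diagonal())        # Jacobi preconditioner

vals, vecs = lobpcg(H, X, M=D_inv, tol=1e-8, maxiter=500, largest=False)
print(np.sort(vals))
# Lanczos-type (ARPACK) reference for the five smallest eigenvalues.
print(np.sort(eigsh(H, k=5, sigma=0, return_eigenvectors=False)))
```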

  14. An Iterative Decambering Approach for Post-Stall Prediction of Wing Characteristics using known Section Data

    NASA Technical Reports Server (NTRS)

    Mukherjee, Rinku; Gopalarathnam, Ashok; Kim, Sung Wan

    2003-01-01

    An iterative decambering approach for the post-stall prediction of wings using known section data as inputs is presented. The method can currently be used for incompressible flow and can be extended to compressible subsonic flow using Mach number correction schemes. A detailed discussion of past work on this topic is presented first. Next, an overview of the decambering approach is presented and is illustrated by applying the approach to the prediction of the two-dimensional C(sub l) and C(sub m) curves for an airfoil. The implementation of the approach for iterative decambering of wing sections is then discussed. A novel feature of the current effort is the use of a multidimensional Newton iteration for taking into consideration the coupling between the different sections of the wing. The approach lends itself to implementation in a variety of finite-wing analysis methods such as lifting-line theory, discrete-vortex Weissinger's method, and vortex lattice codes. Results are presented for a rectangular wing for angles of attack from 0 to 25 deg. The results are compared for both increasing and decreasing directions of the angle of attack, and they show that a hysteresis loop can be predicted for post-stall angles of attack.

  15. Linear-scaling implementation of molecular response theory in self-consistent field electronic-structure theory.

    PubMed

    Coriani, Sonia; Høst, Stinne; Jansík, Branislav; Thøgersen, Lea; Olsen, Jeppe; Jørgensen, Poul; Reine, Simen; Pawłowski, Filip; Helgaker, Trygve; Sałek, Paweł

    2007-04-21

    A linear-scaling implementation of Hartree-Fock and Kohn-Sham self-consistent field theories for the calculation of frequency-dependent molecular response properties and excitation energies is presented, based on a nonredundant exponential parametrization of the one-electron density matrix in the atomic-orbital basis, avoiding the use of canonical orbitals. The response equations are solved iteratively, by an atomic-orbital subspace method equivalent to that of molecular-orbital theory. Important features of the subspace method are the use of paired trial vectors (to preserve the algebraic structure of the response equations), a nondiagonal preconditioner (for rapid convergence), and the generation of good initial guesses (for robust solution). As a result, the performance of the iterative method is the same as in canonical molecular-orbital theory, with five to ten iterations needed for convergence. As in traditional direct Hartree-Fock and Kohn-Sham theories, the calculations are dominated by the construction of the effective Fock/Kohn-Sham matrix, once in each iteration. Linear complexity is achieved by using sparse-matrix algebra, as illustrated in calculations of excitation energies and frequency-dependent polarizabilities of polyalanine peptides containing up to 1400 atoms.

  16. Blind motion image deblurring using nonconvex higher-order total variation model

    NASA Astrophysics Data System (ADS)

    Li, Weihong; Chen, Rui; Xu, Shangwen; Gong, Weiguo

    2016-09-01

    We propose a nonconvex higher-order total variation (TV) method for blind motion image deblurring. First, we introduce a nonconvex higher-order TV differential operator to define a new model of blind motion image deblurring, which can effectively eliminate the staircase effect in the deblurred image; meanwhile, we employ an image sparsity prior to improve the edge recovery quality. Second, to improve the accuracy of the estimated motion blur kernel, we use the L1 norm and H1 norm as the blur kernel regularization terms, accounting for the sparsity and smoothness of the motion blur kernel. Third, because the proposed model is computationally difficult to solve owing to its intrinsic nonconvexity, we propose a binary iterative strategy, which incorporates a reweighted minimization approximation scheme in the outer iteration and a split Bregman algorithm in the inner iteration. We also discuss the convergence of the proposed binary iterative strategy. Last, we conduct extensive experiments on both synthetic and real-world degraded images. The results demonstrate that the proposed method outperforms previous representative methods in both visual perceptual quality and quantitative measurement.

  17. Accelerating nuclear configuration interaction calculations through a preconditioned block iterative eigensolver

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shao, Meiyue; Aktulga, H.  Metin; Yang, Chao

    In this paper, we describe a number of recently developed techniques for improving the performance of large-scale nuclear configuration interaction calculations on high performance parallel computers. We show the benefit of using a preconditioned block iterative method to replace the Lanczos algorithm that has traditionally been used to perform this type of computation. The rapid convergence of the block iterative method is achieved by a proper choice of starting guesses of the eigenvectors and the construction of an effective preconditioner. These acceleration techniques take advantage of special structure of the nuclear configuration interaction problem which we discuss in detail. The use of a block method also allows us to improve the concurrency of the computation, and take advantage of the memory hierarchy of modern microprocessors to increase the arithmetic intensity of the computation relative to data movement. Finally, we also discuss the implementation details that are critical to achieving high performance on massively parallel multi-core supercomputers, and demonstrate that the new block iterative solver is two to three times faster than the Lanczos based algorithm for problems of moderate sizes on a Cray XC30 system.

  18. Improvements in surface singularity analysis and design methods. [applicable to airfoils

    NASA Technical Reports Server (NTRS)

    Bristow, D. R.

    1979-01-01

    The coupling of the combined source vortex distribution of Green's potential flow function with contemporary numerical techniques is shown to provide accurate, efficient, and stable solutions to subsonic inviscid analysis and design problems for multi-element airfoils. The analysis problem is solved by direct calculation of the surface singularity distribution required to satisfy the flow tangency boundary condition. The design or inverse problem is solved by an iteration process. In this process, the geometry and the associated pressure distribution are iterated until the pressure distribution most nearly corresponding to the prescribed design distribution is obtained. Typically, five iteration cycles are required for convergence. A description of the analysis and design method is presented, along with supporting examples.

  19. Radiofrequency pulse design using nonlinear gradient magnetic fields.

    PubMed

    Kopanoglu, Emre; Constable, R Todd

    2015-09-01

    An iterative k-space trajectory and radiofrequency (RF) pulse design method is proposed for excitation using nonlinear gradient magnetic fields. The spatial encoding functions (SEFs) generated by nonlinear gradient fields are linearly dependent in Cartesian coordinates. Left uncorrected, this may lead to flip angle variations in excitation profiles. In the proposed method, SEFs (k-space samples) are selected using a matching pursuit algorithm, and the RF pulse is designed using a conjugate gradient algorithm. Three variants of the proposed approach are given: the full algorithm, a computationally cheaper version, and a third version for designing spoke-based trajectories. The method is demonstrated for various target excitation profiles using simulations and phantom experiments. The method is compared with other iterative (matching pursuit and conjugate gradient) and noniterative (coordinate-transformation and Jacobian-based) pulse design methods as well as uniform density spiral and EPI trajectories. The results show that the proposed method can increase excitation fidelity. An iterative method for designing k-space trajectories and RF pulses using nonlinear gradient fields is proposed. The method can either be used for selecting the SEFs individually to guide trajectory design, or can be adapted to design and optimize specific trajectories of interest.

  20. A Self-Adapting System for the Automated Detection of Inter-Ictal Epileptiform Discharges

    PubMed Central

    Lodder, Shaun S.; van Putten, Michel J. A. M.

    2014-01-01

    Purpose Scalp EEG remains the standard clinical procedure for the diagnosis of epilepsy. Manual detection of inter-ictal epileptiform discharges (IEDs) is slow and cumbersome, and few automated methods are used to assist in practice. This is mostly due to low sensitivities, high false positive rates, or a lack of trust in the automated method. In this study we aim to find a solution that will make computer assisted detection more efficient than conventional methods, while preserving the detection certainty of a manual search. Methods Our solution consists of two phases. First, a detection phase finds all events similar to epileptiform activity by using a large database of template waveforms. Individual template detections are combined to form “IED nominations”, each with a corresponding certainty value based on the reliability of their contributing templates. The second phase uses the ten nominations with highest certainty and presents them to the reviewer one by one for confirmation. Confirmations are used to update certainty values of the remaining nominations, and another iteration is performed where ten nominations with the highest certainty are presented. This continues until the reviewer is satisfied with what has been seen. Reviewer feedback is also used to update template accuracies globally and improve future detections. Key Findings Using the described method and fifteen evaluation EEGs (241 IEDs), one third of all inter-ictal events were shown after one iteration, half after two iterations, and 74%, 90%, and 95% after 5, 10 and 15 iterations respectively. Reviewing fifteen iterations for the 20–30 min recordings took approximately 5 min. Significance The proposed method shows a practical approach for combining automated detection with visual searching for inter-ictal epileptiform activity. Further evaluation is needed to verify its clinical feasibility and measure the added value it presents. PMID:24454813

  1. An Iterative Method for Problems with Multiscale Conductivity

    PubMed Central

    Kim, Hyea Hyun; Minhas, Atul S.; Woo, Eung Je

    2012-01-01

    A model with its conductivity varying highly across a very thin layer will be considered. It is related to a stable phantom model, which is invented to generate a certain apparent conductivity inside a region surrounded by a thin cylinder with holes. The thin cylinder is an insulator and both inside and outside the thin cylinder are filled with the same saline. The injected current can enter only through the holes adopted to the thin cylinder. The model has a high contrast of conductivity discontinuity across the thin cylinder and the thickness of the layer and the size of holes are very small compared to the domain of the model problem. Numerical methods for such a model require a very fine mesh near the thin layer to resolve the conductivity discontinuity. In this work, an efficient numerical method for such a model problem is proposed by employing a uniform mesh, which need not resolve the conductivity discontinuity. The discrete problem is then solved by an iterative method, where the solution is improved by solving a simple discrete problem with a uniform conductivity. At each iteration, the right-hand side is updated by integrating the previous iterate over the thin cylinder. This process results in a certain smoothing effect on microscopic structures and our discrete model can provide a more practical tool for simulating the apparent conductivity. The convergence of the iterative method is analyzed regarding the contrast in the conductivity and the relative thickness of the layer. In numerical experiments, solutions of our method are compared to reference solutions obtained from COMSOL, where very fine meshes are used to resolve the conductivity discontinuity in the model. Errors of the voltage in L2 norm follow O(h) asymptotically and the current density matches quite well those from the reference solution for a sufficiently small mesh size h. The experimental results present a promising feature of our approach for simulating the apparent conductivity related to changes in microscopic cellular structures. PMID:23304238

  2. Solution of the symmetric eigenproblem AX=lambda BX by delayed division

    NASA Technical Reports Server (NTRS)

    Thurston, G. A.; Bains, N. J. C.

    1986-01-01

    Delayed division is an iterative method for solving the linear eigenvalue problem AX = lambda BX for a limited number of small eigenvalues and their corresponding eigenvectors. The distinctive feature of the method is the reduction of the problem to an approximate triangular form by systematically dropping quadratic terms in the eigenvalue lambda. The report describes the pivoting strategy in the reduction and the method for preserving symmetry in submatrices at each reduction step. Along with the approximate triangular reduction, the report extends some techniques used in the method of inverse subspace iteration. Examples are included for problems of varying complexity.

  3. A multi-frequency iterative imaging method for discontinuous inverse medium problem

    NASA Astrophysics Data System (ADS)

    Zhang, Lei; Feng, Lixin

    2018-06-01

    The inverse medium problem with a discontinuous refractive index is a challenging inverse problem. We employ primal-dual theory and a fast solution of integral equations, and propose a new iterative imaging method. The selection criterion for the regularization parameter is given by the method of generalized cross-validation. Based on multi-frequency measurements of the scattered field, a recursive linearization algorithm is presented with respect to the frequency, from low to high. We also discuss the initial guess selection strategy based on semi-analytical approaches. Numerical experiments are presented to show the effectiveness of the proposed method.

  4. A unified convergence theory of a numerical method, and applications to the replenishment policies.

    PubMed

    Mi, Xiang-jiang; Wang, Xing-hua

    2004-01-01

    In determining the replenishment policy for an inventory system, some researchers have advocated that Newton's iterative method could be applied to the derivative of the total cost function in order to get the optimal solution. But this approach requires calculation of the second derivative of the function. To avoid this complex computation, we use another iterative method presented by the second author. One of the goals of this paper is to present a unified convergence theory of this method. We then give a numerical example to show the application of our theory.
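
    The contrast drawn above can be illustrated with a short, hedged sketch: applying Newton's iteration to the derivative of a cost function requires the second derivative, whereas a derivative-free update such as the secant iteration does not. The secant scheme below is only a generic stand-in for "another iterative method" (the paper's actual scheme is not reproduced), and the EOQ-style cost function and its constants are invented for illustration.

```python
import numpy as np

def newton(g, dg, x0, tol=1e-10, max_iter=100):
    """Newton iteration for g(x) = 0; requires the derivative dg."""
    x = x0
    for _ in range(max_iter):
        step = g(x) / dg(x)
        x -= step
        if abs(step) < tol:
            break
    return x

def secant(g, x0, x1, tol=1e-10, max_iter=100):
    """Secant iteration: same target, no derivative required."""
    for _ in range(max_iter):
        g0, g1 = g(x0), g(x1)
        x0, x1 = x1, x1 - g1 * (x1 - x0) / (g1 - g0)
        if abs(x1 - x0) < tol:
            break
    return x1

# Illustrative total-cost function C(Q) = K*D/Q + h*Q/2 (EOQ-style, made-up numbers).
K, D, h = 100.0, 1200.0, 5.0
dC = lambda Q: -K * D / Q**2 + h / 2    # first derivative (what we set to zero)
d2C = lambda Q: 2 * K * D / Q**3        # second derivative, needed only by Newton

print(newton(dC, d2C, x0=50.0))         # both approach sqrt(2*K*D/h)
print(secant(dC, x0=50.0, x1=60.0))
print(np.sqrt(2 * K * D / h))
```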

  5. Iterative algorithm for reconstructing rotationally asymmetric surface deviation with pixel-level spatial resolution

    NASA Astrophysics Data System (ADS)

    Quan, Haiyang; Wu, Fan; Hou, Xi

    2015-10-01

    A new method for reconstructing rotationally asymmetric surface deviation with pixel-level spatial resolution is proposed. It is based on a basic iterative scheme and accelerates the Gauss-Seidel method by introducing an acceleration parameter. This modified successive over-relaxation (SOR) method is effective for solving the rotationally asymmetric components with pixel-level spatial resolution, without the use of a fitting procedure. Compared to the Jacobi and Gauss-Seidel methods, the modified SOR method with an optimal relaxation factor converges much faster and saves more computational cost and memory space without reducing accuracy. This has been confirmed by real experimental results.
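
    The relaxation-parameter idea is easy to see in a minimal sketch: each Gauss-Seidel update is blended with the previous value through a factor omega, and a well-chosen omega between 1 and 2 cuts the iteration count sharply. The 1-D Poisson test matrix and the omega values below are illustrative only and are unrelated to the interferometric reconstruction in the paper.

```python
import numpy as np

def sor(A, b, omega=1.5, tol=1e-10, max_iter=5000):
    """Successive over-relaxation: Gauss-Seidel updates blended with the old iterate."""
    n = len(b)
    x = np.zeros(n)
    for k in range(max_iter):
        x_old = x.copy()
        for i in range(n):
            sigma = A[i, :i] @ x[:i] + A[i, i + 1:] @ x[i + 1:]
            gs = (b[i] - sigma) / A[i, i]          # plain Gauss-Seidel value
            x[i] = (1.0 - omega) * x[i] + omega * gs
        if np.linalg.norm(x - x_old) < tol:
            return x, k + 1
    return x, max_iter

# 1-D Poisson test matrix; omega between 1 and 2 accelerates convergence.
n = 30
A = np.diag(2.0 * np.ones(n)) + np.diag(-np.ones(n - 1), 1) + np.diag(-np.ones(n - 1), -1)
b = np.ones(n)
for omega in (1.0, 1.5, 1.8):
    _, iters = sor(A, b, omega=omega)
    print(omega, iters)
```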

  6. AZTEC: A parallel iterative package for solving linear systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hutchinson, S.A.; Shadid, J.N.; Tuminaro, R.S.

    1996-12-31

    We describe a parallel linear system package, AZTEC. The package incorporates a number of parallel iterative methods (e.g. GMRES, biCGSTAB, CGS, TFQMR) and preconditioners (e.g. Jacobi, Gauss-Seidel, polynomial, domain decomposition with LU or ILU within subdomains). Additionally, AZTEC allows for the reuse of previous preconditioning factorizations within Newton schemes for nonlinear methods. Currently, a number of different users are using this package to solve a variety of PDE applications.
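
    As a hedged sketch of the kind of workflow such a package supports (using SciPy rather than AZTEC itself), the snippet below factors an incomplete LU preconditioner once and reuses it across several GMRES solves, mimicking the reuse of preconditioning factorizations across Newton steps mentioned above. The sparse test matrix, its density, and the diagonal shift are arbitrary choices for demonstration.

```python
import numpy as np
from scipy.sparse import identity, random as sparse_random
from scipy.sparse.linalg import gmres, spilu, LinearOperator

n = 1500
rng = np.random.default_rng(1)
# Sparse nonsymmetric test matrix with a diagonal shift to keep it well conditioned.
A = (sparse_random(n, n, density=0.002, random_state=rng) + 5.0 * identity(n)).tocsc()

ilu = spilu(A)                                    # factor the preconditioner once
M = LinearOperator((n, n), matvec=ilu.solve)

for step in range(3):                             # stand-in for successive Newton steps
    b = rng.standard_normal(n)
    x, info = gmres(A, b, M=M)                    # reuse the same factorization
    print(step, info, np.linalg.norm(A @ x - b))
```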

  7. Approximate Solution of Time-Fractional Advection-Dispersion Equation via Fractional Variational Iteration Method

    PubMed Central

    İbiş, Birol

    2014-01-01

    This paper aims to obtain the approximate solution of time-fractional advection-dispersion equation (FADE) involving Jumarie's modification of Riemann-Liouville derivative by the fractional variational iteration method (FVIM). FVIM provides an analytical approximate solution in the form of a convergent series. Some examples are given and the results indicate that the FVIM is of high accuracy, more efficient, and more convenient for solving time FADEs. PMID:24578662

  8. First-Order Hyperbolic System Method for Time-Dependent Advection-Diffusion Problems

    DTIC Science & Technology

    2014-03-01

    accuracy, with rapid convergence over each physical time step, typically less than five Newton iterations. ... However, we employ the Gauss-Seidel (GS) relaxation, which is also an O(N) method, for the discretization arising from the hyperbolic advection-diffusion system ... advection-diffusion scheme. The linear dependency of the iterations on ... Table 1: Boundary layer problem (convergence criteria: residuals < 10−8).

  9. Heat Transfer Effects on a Fully Premixed Methane Impinging Flame

    DTIC Science & Technology

    2014-10-30

    Houzeaux et al., 2009). The GMRES solver is also employed to solve for the enthalpy and species mass fractions. The Gauss-Seidel iterative method is ... the system is therefore split to solve the momentum and continuity equations independently. This is achieved by applying an iterative strategy ... the momentum equation twice and the continuity equation once. The momentum equation is solved using the GMRES or BICGSTAB method (diagonal and Gauss ...

  10. Method for beam hardening correction in quantitative computed X-ray tomography

    NASA Technical Reports Server (NTRS)

    Yan, Chye Hwang (Inventor); Whalen, Robert T. (Inventor); Napel, Sandy (Inventor)

    2001-01-01

    Each voxel is assumed to contain exactly two distinct materials, with the volume fraction of each material being iteratively calculated. According to the method, the spectrum of the X-ray beam must be known, and the attenuation spectra of the materials in the object must be known, and be monotonically decreasing with increasing X-ray photon energy. Then, a volume fraction is estimated for the voxel, and the spectrum is iteratively calculated.

  11. Solving Differential Equations Using Modified Picard Iteration

    ERIC Educational Resources Information Center

    Robin, W. A.

    2010-01-01

    Many classes of differential equations are shown to be open to solution through a method involving a combination of a direct integration approach with suitably modified Picard iterative procedures. The classes of differential equations considered include typical initial value, boundary value and eigenvalue problems arising in physics and…
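
    A minimal sketch of the basic Picard idea (not the article's modified procedure): the iterates y_{k+1}(t) = y0 + ∫_0^t f(s, y_k(s)) ds are generated by repeated numerical quadrature, here with the trapezoidal rule on a fixed grid. The grid size and iteration count are arbitrary, and the example y' = y is chosen because its iterates reproduce the partial sums of exp(t).

```python
import numpy as np

def picard(f, y0, t, n_iter=8, n_grid=201):
    """Picard iteration for y' = f(t, y), y(0) = y0, via repeated quadrature."""
    s = np.linspace(0.0, t, n_grid)
    y = np.full_like(s, y0, dtype=float)          # starting iterate y_0(s) = y0
    for _ in range(n_iter):
        integrand = f(s, y)
        # Cumulative trapezoidal integral from 0 to each grid point.
        y = y0 + np.concatenate(([0.0], np.cumsum(
            0.5 * (integrand[1:] + integrand[:-1]) * np.diff(s))))
    return s, y

# y' = y, y(0) = 1  ->  y(t) = exp(t); successive iterates are Taylor partial sums.
s, y = picard(lambda t, y: y, y0=1.0, t=1.0)
print(y[-1], np.exp(1.0))
```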

  12. Iterative near-term ecological forecasting: Needs, opportunities, and challenges

    USGS Publications Warehouse

    Dietze, Michael C.; Fox, Andrew; Beck-Johnson, Lindsay; Betancourt, Julio L.; Hooten, Mevin B.; Jarnevich, Catherine S.; Keitt, Timothy H.; Kenney, Melissa A.; Laney, Christine M.; Larsen, Laurel G.; Loescher, Henry W.; Lunch, Claire K.; Pijanowski, Bryan; Randerson, James T.; Read, Emily; Tredennick, Andrew T.; Vargas, Rodrigo; Weathers, Kathleen C.; White, Ethan P.

    2018-01-01

    Two foundational questions about sustainability are “How are ecosystems and the services they provide going to change in the future?” and “How do human decisions affect these trajectories?” Answering these questions requires an ability to forecast ecological processes. Unfortunately, most ecological forecasts focus on centennial-scale climate responses, therefore neither meeting the needs of near-term (daily to decadal) environmental decision-making nor allowing comparison of specific, quantitative predictions to new observational data, one of the strongest tests of scientific theory. Near-term forecasts provide the opportunity to iteratively cycle between performing analyses and updating predictions in light of new evidence. This iterative process of gaining feedback, building experience, and correcting models and methods is critical for improving forecasts. Iterative, near-term forecasting will accelerate ecological research, make it more relevant to society, and inform sustainable decision-making under high uncertainty and adaptive management. Here, we identify the immediate scientific and societal needs, opportunities, and challenges for iterative near-term ecological forecasting. Over the past decade, data volume, variety, and accessibility have greatly increased, but challenges remain in interoperability, latency, and uncertainty quantification. Similarly, ecologists have made considerable advances in applying computational, informatic, and statistical methods, but opportunities exist for improving forecast-specific theory, methods, and cyberinfrastructure. Effective forecasting will also require changes in scientific training, culture, and institutions. The need to start forecasting is now; the time for making ecology more predictive is here, and learning by doing is the fastest route to drive the science forward.

  13. Iterative near-term ecological forecasting: Needs, opportunities, and challenges.

    PubMed

    Dietze, Michael C; Fox, Andrew; Beck-Johnson, Lindsay M; Betancourt, Julio L; Hooten, Mevin B; Jarnevich, Catherine S; Keitt, Timothy H; Kenney, Melissa A; Laney, Christine M; Larsen, Laurel G; Loescher, Henry W; Lunch, Claire K; Pijanowski, Bryan C; Randerson, James T; Read, Emily K; Tredennick, Andrew T; Vargas, Rodrigo; Weathers, Kathleen C; White, Ethan P

    2018-02-13

    Two foundational questions about sustainability are "How are ecosystems and the services they provide going to change in the future?" and "How do human decisions affect these trajectories?" Answering these questions requires an ability to forecast ecological processes. Unfortunately, most ecological forecasts focus on centennial-scale climate responses, therefore neither meeting the needs of near-term (daily to decadal) environmental decision-making nor allowing comparison of specific, quantitative predictions to new observational data, one of the strongest tests of scientific theory. Near-term forecasts provide the opportunity to iteratively cycle between performing analyses and updating predictions in light of new evidence. This iterative process of gaining feedback, building experience, and correcting models and methods is critical for improving forecasts. Iterative, near-term forecasting will accelerate ecological research, make it more relevant to society, and inform sustainable decision-making under high uncertainty and adaptive management. Here, we identify the immediate scientific and societal needs, opportunities, and challenges for iterative near-term ecological forecasting. Over the past decade, data volume, variety, and accessibility have greatly increased, but challenges remain in interoperability, latency, and uncertainty quantification. Similarly, ecologists have made considerable advances in applying computational, informatic, and statistical methods, but opportunities exist for improving forecast-specific theory, methods, and cyberinfrastructure. Effective forecasting will also require changes in scientific training, culture, and institutions. The need to start forecasting is now; the time for making ecology more predictive is here, and learning by doing is the fastest route to drive the science forward.

  14. ITER-relevant calibration technique for soft x-ray spectrometer.

    PubMed

    Rzadkiewicz, J; Książek, I; Zastrow, K-D; Coffey, I H; Jakubowska, K; Lawson, K D

    2010-10-01

    The ITER-oriented JET research program brings new requirements for low-Z impurity monitoring, in particular for Be, the future main wall component of JET and ITER. Monitoring based on Bragg spectroscopy requires an absolute sensitivity calibration, which is challenging for large tokamaks. This paper describes both the “component-by-component” and “continua” calibration methods used for the Be IV channel (75.9 Å) of the Bragg rotor spectrometer deployed on JET. The calibration techniques presented here rely on multiorder reflectivity calculations and measurements of continuum radiation emitted from helium plasmas. These offer excellent conditions for absolute photon flux calibration due to their low level of impurities. It was found that the component-by-component method gives results that are four times higher than those obtained by means of the continua method. A better understanding of this discrepancy requires further investigation.

  15. A frequency dependent preconditioned wavelet method for atmospheric tomography

    NASA Astrophysics Data System (ADS)

    Yudytskiy, Mykhaylo; Helin, Tapio; Ramlau, Ronny

    2013-12-01

    Atmospheric tomography, i.e. the reconstruction of the turbulence in the atmosphere, is a main task for the adaptive optics systems of the next generation telescopes. For extremely large telescopes, such as the European Extremely Large Telescope, this problem becomes overly complex and an efficient algorithm is needed to reduce numerical costs. Recently, a conjugate gradient method based on wavelet parametrization of turbulence layers was introduced [5]. An iterative algorithm can only be numerically efficient when the number of iterations required for a sufficient reconstruction is low. A way to achieve this is to design an efficient preconditioner. In this paper we propose a new frequency-dependent preconditioner for the wavelet method. In the context of a multi conjugate adaptive optics (MCAO) system simulated on the official end-to-end simulation tool OCTOPUS of the European Southern Observatory we demonstrate robustness and speed of the preconditioned algorithm. We show that three iterations are sufficient for a good reconstruction.

  16. On iterative processes in the Krylov-Sonneveld subspaces

    NASA Astrophysics Data System (ADS)

    Ilin, Valery P.

    2016-10-01

    The iterative Induced Dimension Reduction (IDR) methods are considered for solving large systems of linear algebraic equations (SLAEs) with nonsingular nonsymmetric matrices. These approaches have been investigated by many authors and are sometimes characterized as an alternative to the classical processes of Krylov type. The key ingredients of the IDR algorithms are the construction of the embedded Sonneveld subspaces, which have decreasing dimensions, and the use of orthogonalization to some fixed subspace. Other independent approaches for the analysis and optimization of the iterations are based on augmented and modified Krylov subspaces, using aggregation and deflation procedures with various low-rank approximations of the original matrices. The goal of this paper is to show that the IDR method in Sonneveld subspaces presents an original interpretation of the modified algorithms in Krylov subspaces. In particular, such a description is given for the multi-preconditioned semi-conjugate direction methods, which are relevant for parallel algebraic domain decomposition approaches.

  17. A preconditioner for the finite element computation of incompressible, nonlinear elastic deformations

    NASA Astrophysics Data System (ADS)

    Whiteley, J. P.

    2017-10-01

    Large, incompressible elastic deformations are governed by a system of nonlinear partial differential equations. The finite element discretisation of these partial differential equations yields a system of nonlinear algebraic equations that are usually solved using Newton's method. On each iteration of Newton's method, a linear system must be solved. We exploit the structure of the Jacobian matrix to propose a preconditioner, comprising two steps. The first step is the solution of a relatively small, symmetric, positive definite linear system using the preconditioned conjugate gradient method. This is followed by a small number of multigrid V-cycles for a larger linear system. Through the use of exemplar elastic deformations, the preconditioner is demonstrated to facilitate the iterative solution of the linear systems arising. The number of GMRES iterations required has only a very weak dependence on the number of degrees of freedom of the linear systems.

  18. Run-time parallelization and scheduling of loops

    NASA Technical Reports Server (NTRS)

    Saltz, Joel H.; Mirchandaney, Ravi; Crowley, Kay

    1990-01-01

    Run time methods are studied to automatically parallelize and schedule iterations of a do loop in certain cases, where compile-time information is inadequate. The methods presented involve execution time preprocessing of the loop. At compile-time, these methods set up the framework for performing a loop dependency analysis. At run time, wave fronts of concurrently executable loop iterations are identified. Using this wavefront information, loop iterations are reordered for increased parallelism. Symbolic transformation rules are used to produce: inspector procedures that perform execution time preprocessing and executors or transformed versions of source code loop structures. These transformed loop structures carry out the calculations planned in the inspector procedures. Performance results are presented from experiments conducted on the Encore Multimax. These results illustrate that run time reordering of loop indices can have a significant impact on performance. Furthermore, the overheads associated with this type of reordering are amortized when the loop is executed several times with the same dependency structure.

  19. Numerical modeling of the radiative transfer in a turbid medium using the synthetic iteration.

    PubMed

    Budak, Vladimir P; Kaloshin, Gennady A; Shagalov, Oleg V; Zheltov, Victor S

    2015-07-27

    In this paper we propose a fast, yet accurate, algorithm for numerical modeling of light fields in a turbid medium slab. The numerical solution of the radiative transfer equation (RTE) requires its discretization, based on the elimination of the anisotropic part of the solution and the replacement of the scattering integral by a finite sum. The regular part of the solution is determined numerically. A good choice of the method for eliminating the anisotropic part of the solution ensures fast convergence of the algorithm in the mean-square metric. The method of synthetic iterations can be used to improve the convergence in the uniform metric. The significant increase in solution accuracy obtained with synthetic iterations allows the two-stream approximation to be applied for determining the regular part. This approach permits the proposed method to be generalized to the case of an arbitrary 3D geometry of the medium.

  20. Comparing the basins of attraction for several methods in the circular Sitnikov problem with spheroid primaries

    NASA Astrophysics Data System (ADS)

    Zotos, Euaggelos E.

    2018-06-01

    The circular Sitnikov problem, where the two primary bodies are prolate or oblate spheroids, is numerically investigated. In particular, the basins of convergence on the complex plane are revealed by using a large collection of numerical methods of several orders. We consider four cases, regarding the value of the oblateness coefficient which determines the nature of the roots (attractors) of the system. For all cases we use the iterative schemes to perform a thorough and systematic classification of the nodes on the complex plane. The distributions of the required iterations, as well as the corresponding probabilities and their correlations with the basins of convergence, are also discussed. Our numerical computations indicate that most of the iterative schemes provide relatively similar convergence structures on the complex plane. However, there are some numerical methods for which the corresponding basins of attraction are extremely complicated, with highly fractal basin boundaries. Moreover, it is shown that the efficiency varies strongly between the numerical methods.
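
    The kind of classification described above can be sketched in a few lines for the simplest case, Newton's method applied to p(z) = z³ − 1 over a grid of complex starting points: each node is assigned to the root it converges to and the number of iterations is recorded. This toy polynomial and grid are illustrative assumptions and have nothing to do with the actual root structure of the Sitnikov problem.

```python
import numpy as np

def newton_basins(n=400, max_iter=60, tol=1e-10):
    """Classify complex starting points by the root of z**3 - 1 they converge to."""
    roots = np.exp(2j * np.pi * np.arange(3) / 3)      # cube roots of unity
    x = np.linspace(-2, 2, n)
    z = x[None, :] + 1j * x[:, None]
    basin = np.full(z.shape, -1)                       # -1 means not yet converged
    iters = np.zeros(z.shape, dtype=int)
    for k in range(max_iter):
        unconverged = basin < 0
        # One Newton step on the unconverged points only.
        z[unconverged] -= (z[unconverged]**3 - 1) / (3 * z[unconverged]**2)
        for r, root in enumerate(roots):
            hit = unconverged & (np.abs(z - root) < tol)
            basin[hit] = r
            iters[hit] = k + 1
    return basin, iters

basin, iters = newton_basins()
print(np.bincount(basin.ravel()[basin.ravel() >= 0]), iters.max())
```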

  1. Solution of an eigenvalue problem for the Laplace operator on a spherical surface. M.S. Thesis - Maryland Univ.

    NASA Technical Reports Server (NTRS)

    Walden, H.

    1974-01-01

    Methods for obtaining approximate solutions for the fundamental eigenvalue of the Laplace-Beltrami operator (also referred to as the membrane eigenvalue problem for the vibration equation) on the unit spherical surface are developed. Two specific types of spherical surface domains are considered: (1) the interior of a spherical triangle, i.e., the region bounded by arcs of three great circles, and (2) the exterior of a great circle arc extending for less than pi radians on the sphere (a spherical surface with a slit). In both cases, zero boundary conditions are imposed. In order to solve the resulting second-order elliptic partial differential equations in two independent variables, a finite difference approximation is derived. The symmetric (generally five-point) finite difference equations that develop are written in matrix form and then solved by the iterative method of point successive overrelaxation. Upon convergence of this iterative method, the fundamental eigenvalue is approximated by iteration utilizing the power method as applied to the finite Rayleigh quotient.
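
    To illustrate the last step in miniature (and only as a hedged stand-in: the thesis works on spherical-surface domains with point successive overrelaxation, whereas this sketch uses a plane square and a direct solve), the snippet below discretizes the Laplacian with the five-point finite-difference stencil, runs a power-method-style inverse iteration, and reads off the fundamental eigenvalue from the Rayleigh quotient.

```python
import numpy as np

def fundamental_eigenvalue(m=20, n_iter=50):
    """Smallest eigenvalue of the five-point Laplacian on the unit square (Dirichlet BCs)."""
    h = 1.0 / (m + 1)
    n = m * m
    A = np.zeros((n, n))
    for i in range(m):
        for j in range(m):
            k = i * m + j
            A[k, k] = 4.0
            if i > 0:
                A[k, k - m] = -1.0
            if i < m - 1:
                A[k, k + m] = -1.0
            if j > 0:
                A[k, k - 1] = -1.0
            if j < m - 1:
                A[k, k + 1] = -1.0
    A /= h * h
    v = np.ones(n)
    for _ in range(n_iter):        # power iteration on A^-1 converges to the smallest eigenpair
        v = np.linalg.solve(A, v)
        v /= np.linalg.norm(v)
    return v @ A @ v               # Rayleigh quotient

print(fundamental_eigenvalue(), 2 * np.pi**2)   # continuum value is 2*pi^2 ~ 19.74
```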

  2. Iterative method for in situ measurement of lens aberrations in lithographic tools using CTC-based quadratic aberration model.

    PubMed

    Liu, Shiyuan; Xu, Shuang; Wu, Xiaofei; Liu, Wei

    2012-06-18

    This paper proposes an iterative method for in situ lens aberration measurement in lithographic tools based on a quadratic aberration model (QAM) that is a natural extension of the linear model formed by taking into account interactions among individual Zernike coefficients. By introducing a generalized operator named cross triple correlation (CTC), the quadratic model can be calculated very quickly and accurately with the help of fast Fourier transform (FFT). The Zernike coefficients up to the 37th order or even higher are determined by solving an inverse problem through an iterative procedure from several through-focus aerial images of a specially designed mask pattern. The simulation work has validated the theoretical derivation and confirms that such a method is simple to implement and yields a superior quality of wavefront estimate, particularly for the case when the aberrations are relatively large. It is fully expected that this method will provide a useful practical means for the in-line monitoring of the imaging quality of lithographic tools.

  3. Analysis and Design of ITER 1 MV Core Snubber

    NASA Astrophysics Data System (ADS)

    Wang, Haitian; Li, Ge

    2012-11-01

    The core snubber, as a passive protection device, can suppress the arc current and absorb the energy stored in stray capacitance during electrical breakdown in the accelerating electrodes of the ITER NBI. In order to design the core snubber of ITER, the control parameters of the arc peak current were first analyzed by the Fink-Baker-Owren (FBO) method, which was used for designing the DIIID 100 kV snubber. The B-H curve can be derived from the measured voltage and current waveforms, and the hysteresis loss of the core snubber can be derived using the revised parallelogram method. The core snubber can be represented in simplified form as an equivalent parallel resistance and inductance, which are neglected by the FBO method. A simulation code including the parallel equivalent resistance and inductance has been set up. The simulations and experiments result in dramatically large arc shorting currents due to the parallel inductance effect. The case shows that a core snubber designed with the FBO method gives a more compact design.

  4. GWASinlps: Nonlocal prior based iterative SNP selection tool for genome-wide association studies.

    PubMed

    Sanyal, Nilotpal; Lo, Min-Tzu; Kauppi, Karolina; Djurovic, Srdjan; Andreassen, Ole A; Johnson, Valen E; Chen, Chi-Hua

    2018-06-19

    Multiple marker analysis of the genome-wide association study (GWAS) data has gained ample attention in recent years. However, because of the ultra high-dimensionality of GWAS data, such analysis is challenging. Frequently used penalized regression methods often lead to large number of false positives, whereas Bayesian methods are computationally very expensive. Motivated to ameliorate these issues simultaneously, we consider the novel approach of using nonlocal priors in an iterative variable selection framework. We develop a variable selection method, named, iterative nonlocal prior based selection for GWAS, or GWASinlps, that combines, in an iterative variable selection framework, the computational efficiency of the screen-and-select approach based on some association learning and the parsimonious uncertainty quantification provided by the use of nonlocal priors. The hallmark of our method is the introduction of 'structured screen-and-select' strategy, that considers hierarchical screening, which is not only based on response-predictor associations, but also based on response-response associations, and concatenates variable selection within that hierarchy. Extensive simulation studies with SNPs having realistic linkage disequilibrium structures demonstrate the advantages of our computationally efficient method compared to several frequentist and Bayesian variable selection methods, in terms of true positive rate, false discovery rate, mean squared error, and effect size estimation error. Further, we provide empirical power analysis useful for study design. Finally, a real GWAS data application was considered with human height as phenotype. An R-package for implementing the GWASinlps method is available at https://cran.r-project.org/web/packages/GWASinlps/index.html. Supplementary data are available at Bioinformatics online.

  5. Estimation of brood and nest survival: Comparative methods in the presence of heterogeneity

    USGS Publications Warehouse

    Manly, Bryan F.J.; Schmutz, Joel A.

    2001-01-01

    The Mayfield method has been widely used for estimating survival of nests and young animals, especially when data are collected at irregular observation intervals. However, this method assumes survival is constant throughout the study period, which often ignores biologically relevant variation and may lead to biased survival estimates. We examined the bias and accuracy of 1 modification to the Mayfield method that allows for temporal variation in survival, and we developed and similarly tested 2 additional methods. One of these 2 new methods is simply an iterative extension of Klett and Johnson's method, which we refer to as the Iterative Mayfield method and bears similarity to Kaplan-Meier methods. The other method uses maximum likelihood techniques for estimation and is best applied to survival of animals in groups or families, rather than as independent individuals. We also examined how robust these estimators are to heterogeneity in the data, which can arise from such sources as dependent survival probabilities among siblings, inherent differences among families, and adoption. Testing of estimator performance with respect to bias, accuracy, and heterogeneity was done using simulations that mimicked a study of survival of emperor goose (Chen canagica) goslings. Assuming constant survival for inappropriately long periods of time or use of Klett and Johnson's methods resulted in large bias or poor accuracy (often >5% bias or root mean square error) compared to our Iterative Mayfield or maximum likelihood methods. Overall, estimator performance was slightly better with our Iterative Mayfield than our maximum likelihood method, but the maximum likelihood method provides a more rigorous framework for testing covariates and explicitly models a heterogeneity factor. We demonstrated use of all estimators with data from emperor goose goslings. We advocate that future studies use the new methods outlined here rather than the traditional Mayfield method or its previous modifications.

  6. Iterative Image Reconstruction for PROPELLER-MRI using the NonUniform Fast Fourier Transform

    PubMed Central

    Tamhane, Ashish A.; Anastasio, Mark A.; Gui, Minzhi; Arfanakis, Konstantinos

    2013-01-01

    Purpose: To investigate an iterative image reconstruction algorithm using the non-uniform fast Fourier transform (NUFFT) for PROPELLER (Periodically Rotated Overlapping parallEL Lines with Enhanced Reconstruction) MRI. Materials and Methods: Numerical simulations, as well as experiments on a phantom and a healthy human subject, were used to evaluate the performance of the iterative image reconstruction algorithm for PROPELLER and compare it to that of conventional gridding. The trade-off between spatial resolution, signal-to-noise ratio, and image artifacts was investigated for different values of the regularization parameter. The performance of the iterative image reconstruction algorithm in the presence of motion was also evaluated. Results: It was demonstrated that, for a certain range of values of the regularization parameter, iterative reconstruction produced images with significantly increased SNR and reduced artifacts, for similar spatial resolution, compared to gridding. Furthermore, the ability to reduce the effects of motion in PROPELLER-MRI was maintained when using the iterative reconstruction approach. Conclusion: An iterative image reconstruction technique based on the NUFFT was investigated for PROPELLER MRI. For a certain range of values of the regularization parameter, the new reconstruction technique may provide PROPELLER images with improved image quality compared to conventional gridding. PMID:20578028

  7. An iterated Laplacian based semi-supervised dimensionality reduction for classification of breast cancer on ultrasound images.

    PubMed

    Liu, Xiao; Shi, Jun; Zhou, Shichong; Lu, Minhua

    2014-01-01

    The dimensionality reduction is an important step in ultrasound image based computer-aided diagnosis (CAD) for breast cancer. A newly proposed l2,1 regularized correntropy algorithm for robust feature selection (CRFS) has achieved good performance for noise corrupted data. Therefore, it has the potential to reduce the dimensions of ultrasound image features. However, in clinical practice, the collection of labeled instances is usually expensive and time-consuming, while it is relatively easy to acquire the unlabeled or undetermined instances. Therefore, semi-supervised learning is very suitable for clinical CAD. The iterated Laplacian regularization (Iter-LR) is a new regularization method, which has been proved to outperform the traditional graph Laplacian regularization in semi-supervised classification and ranking. In this study, to augment the classification accuracy of the breast ultrasound CAD based on texture features, we propose an Iter-LR-based semi-supervised CRFS (Iter-LR-CRFS) algorithm, and then apply it to reduce the feature dimensions of ultrasound images for breast CAD. We compared the Iter-LR-CRFS with LR-CRFS, the original supervised CRFS, and principal component analysis. The experimental results indicate that the proposed Iter-LR-CRFS significantly outperforms all other algorithms.

  8. Source apportionment for fine particulate matter in a Chinese city using an improved gas-constrained method and comparison with multiple receptor models.

    PubMed

    Shi, Guoliang; Liu, Jiayuan; Wang, Haiting; Tian, Yingze; Wen, Jie; Shi, Xurong; Feng, Yinchang; Ivey, Cesunica E; Russell, Armistead G

    2018-02-01

    PM2.5 is one of the most studied atmospheric pollutants due to its adverse impacts on human health and welfare and the environment. An improved model (the chemical mass balance gas constraint-Iteration: CMBGC-Iteration) is proposed and applied to identify source categories and estimate source contributions of PM2.5. The CMBGC-Iteration model uses the ratio of gases to PM as constraints and considers the uncertainties of source profiles and receptor datasets, which is crucial information for source apportionment. To apply this model, samples of PM2.5 were collected at Tianjin, a megacity in northern China. The ambient PM2.5 dataset, source information, and gas-to-particle ratios (such as SO2/PM2.5, CO/PM2.5, and NOx/PM2.5 ratios) were introduced into the CMBGC-Iteration to identify the potential sources and their contributions. Six source categories were identified by this model and the order based on their contributions to PM2.5 was as follows: secondary sources (30%), crustal dust (25%), vehicle exhaust (16%), coal combustion (13%), SOC (7.6%), and cement dust (0.40%). In addition, the same dataset was also calculated by other receptor models (CMB, CMB-Iteration, CMB-GC, PMF, WALSPMF, and NCAPCA), and the results obtained were compared. Ensemble-average source impacts were calculated based on the seven source apportionment results: contributions of secondary sources (28%), crustal dust (20%), coal combustion (18%), vehicle exhaust (17%), SOC (11%), and cement dust (1.3%). The similar results of CMBGC-Iteration and ensemble method indicated that CMBGC-Iteration can produce relatively appropriate results. Copyright © 2017 Elsevier Ltd. All rights reserved.

  9. A Block Preconditioned Conjugate Gradient-type Iterative Solver for Linear Systems in Thermal Reservoir Simulation

    NASA Astrophysics Data System (ADS)

    Betté, Srinivas; Diaz, Julio C.; Jines, William R.; Steihaug, Trond

    1986-11-01

    A preconditioned residual-norm-reducing iterative solver is described. Based on a truncated form of the generalized-conjugate-gradient method for nonsymmetric systems of linear equations, the iterative scheme is very effective for linear systems generated in reservoir simulation of thermal oil recovery processes. As a consequence of employing an adaptive implicit finite-difference scheme to solve the model equations, the number of variables per cell-block varies dynamically over the grid. The data structure allows for 5- and 9-point operators in the areal model, 5-point in the cross-sectional model, and 7- and 11-point operators in the three-dimensional model. Block-diagonal-scaling of the linear system, done prior to iteration, is found to have a significant effect on the rate of convergence. Block-incomplete-LU-decomposition (BILU) and block-symmetric-Gauss-Seidel (BSGS) methods, which result in no fill-in, are used as preconditioning procedures. A full factorization is done on the well terms, and the cells are ordered in a manner which minimizes the fill-in in the well-column due to this factorization. The convergence criterion for the linear (inner) iteration is linked to that of the nonlinear (Newton) iteration, thereby enhancing the efficiency of the computation. The algorithm, with both BILU and BSGS preconditioners, is evaluated in the context of a variety of thermal simulation problems. The solver is robust and can be used with little or no user intervention.
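    The general pattern described here, a residual-norm-reducing Krylov iteration applied to a preconditioned nonsymmetric system, can be illustrated on a toy problem. In the sketch below, a small tridiagonal matrix and a simple diagonal (Jacobi) preconditioner stand in for the reservoir-simulation Jacobians and the paper's block-diagonal scaling and BILU/BSGS preconditioners; these substitutions are assumptions made for brevity.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

# toy nonsymmetric sparse system standing in for a thermal-simulation Jacobian
n = 200
A = sp.diags([-1.0, 4.0, -1.2], [-1, 0, 1], shape=(n, n), format="csr")
b = np.ones(n)

# diagonal scaling used as a (1x1-"block") preconditioner for the Krylov solver
M = spla.LinearOperator((n, n), matvec=lambda r: r / A.diagonal())

x, info = spla.gmres(A, b, M=M)
print(info, np.linalg.norm(A @ x - b))
```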

  10. Analysis of Eigenvalue and Eigenfunction of Klein Gordon Equation Using Asymptotic Iteration Method for Separable Non-central Cylindrical Potential

    NASA Astrophysics Data System (ADS)

    Suparmi, A.; Cari, C.; Lilis Elviyanti, Isnaini

    2018-04-01

    The relativistic energy and wave function of zero-spin particles were analyzed using the Klein-Gordon equation with a separable non-central cylindrical potential, solved by the asymptotic iteration method (AIM). Using cylindrical coordinates, the Klein-Gordon equation for the spin-symmetric case was reduced to three one-dimensional Schrodinger-like equations that were solvable using the variable separation method. The relativistic energy was calculated numerically with Matlab software, and the general unnormalized wave function was expressed in terms of hypergeometric functions.

  11. Restoration of multichannel microwave radiometric images

    NASA Technical Reports Server (NTRS)

    Chin, R. T.; Yeh, C. L.; Olson, W. S.

    1983-01-01

    A constrained iterative image restoration method is applied to multichannel diffraction-limited imagery. This method is based on the Gerchberg-Papoulis algorithm utilizing incomplete information and partial constraints. The procedure is described using the orthogonal projection operators which project onto two prescribed subspaces iteratively. Some of its properties and limitations are also presented. The selection of appropriate constraints was emphasized in a practical application. Multichannel microwave images, each having different spatial resolution, were restored to a common highest resolution to demonstrate the effectiveness of the method. Both noise-free and noisy images were used in this investigation.

  12. Iterative algorithms for computing the feedback Nash equilibrium point for positive systems

    NASA Astrophysics Data System (ADS)

    Ivanov, I.; Imsland, Lars; Bogdanova, B.

    2017-03-01

    The paper studies N-player linear quadratic differential games on an infinite time horizon with deterministic feedback information structure. It introduces two iterative methods (the Newton method as well as its accelerated modification) in order to compute the stabilising solution of a set of generalised algebraic Riccati equations. The latter is related to the Nash equilibrium point of the considered game model. Moreover, we derive the sufficient conditions for convergence of the proposed methods. Finally, we discuss two numerical examples so as to illustrate the performance of both of the algorithms.
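    For a single algebraic Riccati equation, the Newton iteration mentioned above reduces to the classical Kleinman scheme, in which every step solves one Lyapunov equation. The sketch below shows only that single-equation case under the assumption of a stable open-loop matrix; the coupled game-theoretic set of Riccati equations and the accelerated modification are not reproduced here.

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

def newton_care(A, B, Q, R, n_iter=20):
    # Kleinman-style Newton iteration for A'X + XA - XBR^{-1}B'X + Q = 0;
    # each step solves one Lyapunov equation for the closed-loop matrix
    K = np.zeros((B.shape[1], A.shape[0]))     # valid start if A is stable
    for _ in range(n_iter):
        Acl = A - B @ K
        X = solve_continuous_lyapunov(Acl.T, -(Q + K.T @ R @ K))
        K = np.linalg.solve(R, B.T @ X)
    return X

A = np.array([[-1.0, 1.0], [0.0, -2.0]])
B = np.array([[0.0], [1.0]])
Q = np.eye(2)
R = np.array([[1.0]])
X = newton_care(A, B, Q, R)
# residual of the Riccati equation should be near zero
print(np.linalg.norm(A.T @ X + X @ A - X @ B @ np.linalg.solve(R, B.T) @ X + Q))
```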

  13. Iterative Magnetometer Calibration

    NASA Technical Reports Server (NTRS)

    Sedlak, Joseph

    2006-01-01

    This paper presents an iterative method for three-axis magnetometer (TAM) calibration that makes use of three existing utilities recently incorporated into the attitude ground support system used at NASA's Goddard Space Flight Center. The method combines attitude-independent and attitude-dependent calibration algorithms with a new spinning spacecraft Kalman filter to solve for biases, scale factors, nonorthogonal corrections to the alignment, and the orthogonal sensor alignment. The method is particularly well-suited to spin-stabilized spacecraft, but may also be useful for three-axis stabilized missions given sufficient data to provide observability.

  14. Exact exchange potential evaluated from occupied Kohn-Sham and Hartree-Fock solutions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cinal, M.; Holas, A.

    2011-06-15

    The reported algorithm determines the exact exchange potential v_x in an iterative way using energy shifts (ESs) and orbital shifts (OSs) obtained with finite-difference formulas from the solutions (occupied orbitals and their energies) of the Hartree-Fock-like equation and the Kohn-Sham-like equation, the former used for the initial approximation to v_x and the latter for increments of ES and OS due to subsequent changes of v_x. Thus, the need for solution of the differential equations for OSs, used by Kuemmel and Perdew [Phys. Rev. Lett. 90, 043004 (2003)], is bypassed. The iterated exchange potential, expressed in terms of ESs and OSs, is improved by modifying ESs at odd iteration steps and OSs at even steps. The modification formulas are related to the optimized-effective-potential equation (satisfied at convergence) written as the condition of vanishing density shift (DS). They are obtained, respectively, by enforcing its satisfaction through corrections to approximate OSs and by determining the optimal ESs that minimize the DS norm. The proposed method, successfully tested for several closed-(sub)shell atoms, from Be to Kr, within the density functional theory exchange-only approximation, proves highly efficient. The calculations using the pseudospectral method for representing orbitals give iterative sequences of approximate exchange potentials (starting with the Krieger-Li-Iafrate approximation) that rapidly approach the exact v_x so that, for Ne, Ar, and Zn, the corresponding DS norm becomes less than 10^-6 after 13, 13, and 9 iteration steps for a given electron density. In self-consistent density calculations, orbital energies of 10^-4 hartree accuracy are obtained for these atoms after, respectively, 9, 12, and 12 density iteration steps, each involving just two steps of v_x iteration, while the accuracy limit of 10^-6 to 10^-7 hartree is reached after 20 density iterations.

  15. Exact exchange potential evaluated from occupied Kohn-Sham and Hartree-Fock solutions

    NASA Astrophysics Data System (ADS)

    Cinal, M.; Holas, A.

    2011-06-01

    The reported algorithm determines the exact exchange potential v_x in an iterative way using energy shifts (ESs) and orbital shifts (OSs) obtained with finite-difference formulas from the solutions (occupied orbitals and their energies) of the Hartree-Fock-like equation and the Kohn-Sham-like equation, the former used for the initial approximation to v_x and the latter for increments of ES and OS due to subsequent changes of v_x. Thus, the need for solution of the differential equations for OSs, used by Kümmel and Perdew [Phys. Rev. Lett. 90, 043004 (2003)], is bypassed. The iterated exchange potential, expressed in terms of ESs and OSs, is improved by modifying ESs at odd iteration steps and OSs at even steps. The modification formulas are related to the optimized-effective-potential equation (satisfied at convergence) written as the condition of vanishing density shift (DS). They are obtained, respectively, by enforcing its satisfaction through corrections to approximate OSs and by determining the optimal ESs that minimize the DS norm. The proposed method, successfully tested for several closed-(sub)shell atoms, from Be to Kr, within the density functional theory exchange-only approximation, proves highly efficient. The calculations using the pseudospectral method for representing orbitals give iterative sequences of approximate exchange potentials (starting with the Krieger-Li-Iafrate approximation) that rapidly approach the exact v_x so that, for Ne, Ar, and Zn, the corresponding DS norm becomes less than 10^-6 after 13, 13, and 9 iteration steps for a given electron density. In self-consistent density calculations, orbital energies of 10^-4 hartree accuracy are obtained for these atoms after, respectively, 9, 12, and 12 density iteration steps, each involving just two steps of v_x iteration, while the accuracy limit of 10^-6 to 10^-7 hartree is reached after 20 density iterations.

  16. Multishot cartesian turbo spin-echo diffusion imaging using iterative POCSMUSE Reconstruction.

    PubMed

    Zhang, Zhe; Zhang, Bing; Li, Ming; Liang, Xue; Chen, Xiaodong; Liu, Renyuan; Zhang, Xin; Guo, Hua

    2017-07-01

    To report a diffusion imaging technique insensitive to off-resonance artifacts and motion-induced ghost artifacts using multishot Cartesian turbo spin-echo (TSE) acquisition and iterative POCS-based reconstruction of multiplexed sensitivity encoded magnetic resonance imaging (MRI) (POCSMUSE) for phase correction. Phase insensitive diffusion preparation was used to deal with the violation of the Carr-Purcell-Meiboom-Gill (CPMG) conditions of TSE diffusion-weighted imaging (DWI), followed by a multishot Cartesian TSE readout for data acquisition. An iterative diffusion phase correction method, iterative POCSMUSE, was developed and implemented to eliminate the ghost artifacts in multishot TSE DWI. The in vivo human brain diffusion images (from one healthy volunteer and 10 patients) using multishot Cartesian TSE were acquired at 3T and reconstructed using iterative POCSMUSE, and compared with single-shot and multishot echo-planar imaging (EPI) results. These images were evaluated by two radiologists using visual scores (considering both image quality and distortion levels) from 1 to 5. The proposed iterative POCSMUSE reconstruction was able to correct the ghost artifacts in multishot DWI. The ghost-to-signal ratio of TSE DWI using iterative POCSMUSE (0.0174 ± 0.0024) was significantly (P < 0.0005) smaller than using POCSMUSE (0.0253 ± 0.0040). The image scores of multishot TSE DWI were significantly higher than single-shot (P = 0.004 and 0.006 from two reviewers) and multishot (P = 0.008 and 0.004 from two reviewers) EPI-based methods. The proposed multishot Cartesian TSE DWI using iterative POCSMUSE reconstruction can provide high-quality diffusion images insensitive to motion-induced ghost artifacts and off-resonance related artifacts such as chemical shifts and susceptibility-induced image distortions. Level of Evidence: 1. Technical Efficacy: Stage 1. J. Magn. Reson. Imaging 2017;46:167-174. © 2016 International Society for Magnetic Resonance in Medicine.

  17. Solution of the within-group multidimensional discrete ordinates transport equations on massively parallel architectures

    NASA Astrophysics Data System (ADS)

    Zerr, Robert Joseph

    2011-12-01

    The integral transport matrix method (ITMM) has been used as the kernel of new parallel solution methods for the discrete ordinates approximation of the within-group neutron transport equation. The ITMM abandons the repetitive mesh sweeps of the traditional source iterations (SI) scheme in favor of constructing stored operators that account for the direct coupling factors among all the cells and between the cells and boundary surfaces. The main goals of this work were to develop the algorithms that construct these operators and employ them in the solution process, determine the most suitable way to parallelize the entire procedure, and evaluate the behavior and performance of the developed methods for an increasing number of processes. This project compares the effectiveness of the ITMM with the SI scheme parallelized with the Koch-Baker-Alcouffe (KBA) method. The primary parallel solution method involves a decomposition of the domain into smaller spatial sub-domains, each with their own transport matrices, coupled together via interface boundary angular fluxes. Each sub-domain has its own set of ITMM operators and represents an independent transport problem. Multiple iterative parallel solution methods have been investigated, including parallel block Jacobi (PBJ), parallel red/black Gauss-Seidel (PGS), and parallel GMRES (PGMRES). The fastest observed parallel solution method, PGS, was used in a weak scaling comparison with the PARTISN code. Compared to the state-of-the-art SI-KBA with diffusion synthetic acceleration (DSA), this new method without acceleration/preconditioning is not competitive for any problem parameters considered. The best comparisons occur for problems that are difficult for SI DSA, namely highly scattering and optically thick. SI DSA execution time curves are generally steeper than the PGS ones. However, until further testing is performed it cannot be concluded that SI DSA does not outperform the ITMM with PGS even on several thousand or tens of thousands of processors. The PGS method does outperform SI DSA for the periodic heterogeneous layers (PHL) configuration problems. Although this demonstrates a relative strength/weakness between the two methods, the practicality of these problems is much less, further limiting instances where it would be beneficial to select ITMM over SI DSA. The results strongly indicate a need for a robust, stable, and efficient acceleration method (or preconditioner for PGMRES). The spatial multigrid (SMG) method is currently incomplete in that it does not work for all cases considered and does not effectively improve the convergence rate for all values of scattering ratio c or cell dimension h. Nevertheless, it does display the desired trend for highly scattering, optically thin problems. That is, it tends to lower the rate of growth of number of iterations with increasing number of processes, P, while not increasing the number of additional operations per iteration to the extent that the total execution time of the rapidly converging accelerated iterations exceeds that of the slower unaccelerated iterations. A predictive parallel performance model has been developed for the PBJ method. Timing tests were performed such that trend lines could be fitted to the data for the different components and used to estimate the execution times.
Applied to the weak scaling results, the model notably underestimates construction time, but combined with a slight overestimation in iterative solution time, the model predicts total execution time very well for large P. It also does a decent job with the strong scaling results, closely predicting the construction time and time per iteration, especially as P increases. Although not shown to be competitive up to 1,024 processing elements with the current state of the art, the parallelized ITMM exhibits promising scaling trends. Ultimately, compared to the KBA method, the parallelized ITMM may be found to be a very attractive option for transport calculations spatially decomposed over several tens of thousands of processes. Acceleration/preconditioning of the parallelized ITMM once developed will improve the convergence rate and improve its competitiveness. (Abstract shortened by UMI.)
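    The parallel block Jacobi (PBJ) idea, in which each sub-domain is solved independently while interface data lag by one iteration, can be illustrated on a small linear system. The two-block splitting of a 1D diffusion-like matrix below is only a schematic stand-in for the sub-domain ITMM operators and their angular-flux coupling.

```python
import numpy as np

# toy 1D diffusion system split into two sub-domains coupled at an interface
n = 40
A = (np.diag(2.0 * np.ones(n))
     + np.diag(-1.0 * np.ones(n - 1), 1)
     + np.diag(-1.0 * np.ones(n - 1), -1))
b = np.ones(n)

half = n // 2
blocks = [np.arange(0, half), np.arange(half, n)]
x = np.zeros(n)
for _ in range(500):
    x_new = x.copy()
    for I in blocks:                              # each sub-domain solve is independent
        J = np.setdiff1d(np.arange(n), I)
        rhs = b[I] - A[np.ix_(I, J)] @ x[J]       # interface coupling lags one iteration
        x_new[I] = np.linalg.solve(A[np.ix_(I, I)], rhs)
    x = x_new

print(np.linalg.norm(A @ x - b))
```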

  18. Two-Level Chebyshev Filter Based Complementary Subspace Method: Pushing the Envelope of Large-Scale Electronic Structure Calculations.

    PubMed

    Banerjee, Amartya S; Lin, Lin; Suryanarayana, Phanish; Yang, Chao; Pask, John E

    2018-06-12

    We describe a novel iterative strategy for Kohn-Sham density functional theory calculations aimed at large systems (>1,000 electrons), applicable to metals and insulators alike. In lieu of explicit diagonalization of the Kohn-Sham Hamiltonian on every self-consistent field (SCF) iteration, we employ a two-level Chebyshev polynomial filter based complementary subspace strategy to (1) compute a set of vectors that span the occupied subspace of the Hamiltonian; (2) reduce subspace diagonalization to just partially occupied states; and (3) obtain those states in an efficient, scalable manner via an inner Chebyshev filter iteration. By reducing the necessary computation to just partially occupied states and obtaining these through an inner Chebyshev iteration, our approach reduces the cost of large metallic calculations significantly, while eliminating subspace diagonalization for insulating systems altogether. We describe the implementation of the method within the framework of the discontinuous Galerkin (DG) electronic structure method and show that this results in a computational scheme that can effectively tackle bulk and nano systems containing tens of thousands of electrons, with chemical accuracy, within a few minutes or less of wall clock time per SCF iteration on large-scale computing platforms. We anticipate that our method will be instrumental in pushing the envelope of large-scale ab initio molecular dynamics. As a demonstration of this, we simulate a bulk silicon system containing 8,000 atoms at finite temperature, and obtain an average SCF step wall time of 51 s on 34,560 processors; thus allowing us to carry out 1.0 ps of ab initio molecular dynamics in approximately 28 h (of wall time).

  19. Vectorized and multitasked solution of the few-group neutron diffusion equations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zee, S.K.; Turinsky, P.J.; Shayer, Z.

    1989-03-01

    A numerical algorithm with parallelism was used to solve the two-group, multidimensional neutron diffusion equations on computers characterized by shared memory, vector pipeline, and multi-CPU architecture features. Specifically, solutions were obtained on the Cray X/MP-48, the IBM-3090 with vector facilities, and the FPS-164. The material-centered mesh finite difference method approximation and outer-inner iteration method were employed. Parallelism was introduced in the inner iterations using the cyclic line successive overrelaxation iterative method and solving in parallel across lines. The outer iterations were completed using the Chebyshev semi-iterative method that allows parallelism to be introduced in both space and energy groups. For the three-dimensional model, power, soluble boron, and transient fission product feedbacks were included. Concentrating on the pressurized water reactor (PWR), the thermal-hydraulic calculation of moderator density assumed single-phase flow and a closed flow channel, allowing parallelism to be introduced in the solution across the radial plane. Using a pinwise detail, quarter-core model of a typical PWR in cycle 1, for the two-dimensional model without feedback the measured million floating point operations per second (MFLOPS)/vector speedups were 83/11.7, 18/2.2, and 2.4/5.6 on the Cray, IBM, and FPS without multitasking, respectively. Lower performance was observed with a coarser mesh, i.e., shorter vector length, due to vector pipeline start-up. For an 18 x 18 x 30 (x-y-z) three-dimensional model with feedback of the same core, MFLOPS/vector speedups of approximately 61/6.7 and an execution time of 0.8 CPU seconds on the Cray without multitasking were measured. Finally, using two CPUs and the vector pipelines of the Cray, a multitasking efficiency of 81% was noted for the three-dimensional model.

  20. Distorted Born iterative T-matrix method for inversion of CSEM data in anisotropic media

    NASA Astrophysics Data System (ADS)

    Jakobsen, Morten; Tveit, Svenn

    2018-05-01

    We present a direct iterative solution to the nonlinear controlled-source electromagnetic (CSEM) inversion problem in the frequency domain, based on a volume integral equation formulation of the forward modelling problem in anisotropic conductive media. Our vectorial nonlinear inverse scattering approach effectively replaces an ill-posed nonlinear inverse problem with a series of linear ill-posed inverse problems, for which efficient (regularized) solution methods already exist. The solution updates the dyadic Green's functions from the source to the scattering volume and from the scattering volume to the receivers after each iteration. The T-matrix approach of multiple scattering theory is used for efficient updating of all dyadic Green's functions after each linearized inversion step. This means that we have developed a T-matrix variant of the Distorted Born Iterative (DBI) method, which is often used in the acoustic and electromagnetic (medical) imaging communities as an alternative to contrast-source inversion. The main advantage of using the T-matrix approach in this context is that it eliminates the need to perform a full forward simulation at each iteration of the DBI method, which is known to be consistent with the Gauss-Newton method. The T-matrix allows for a natural domain decomposition, in the sense that a large model can be decomposed into an arbitrary number of domains that can be treated independently and in parallel. The T-matrix we use for efficient model updating is also independent of the source-receiver configuration, which could be an advantage when performing fast-repeat modelling and time-lapse inversion. The T-matrix is also compatible with the use of modern renormalization methods that can potentially help reduce the sensitivity of the CSEM inversion results to the starting model. To illustrate the performance and potential of our T-matrix variant of the DBI method for CSEM inversion, we performed numerical experiments based on synthetic CSEM data associated with 2D VTI and 3D orthorhombic model inversions. The results of our numerical experiments suggest that the DBIT method for inversion of CSEM data in anisotropic media is both accurate and efficient.

  1. Accelerated iterative beam angle selection in IMRT.

    PubMed

    Bangert, Mark; Unkelbach, Jan

    2016-03-01

    Iterative methods for beam angle selection (BAS) for intensity-modulated radiation therapy (IMRT) planning sequentially construct a beneficial ensemble of beam directions. In a naïve implementation, the nth beam is selected by adding beam orientations one-by-one from a discrete set of candidates to an existing ensemble of (n - 1) beams. The best beam orientation is identified in a time-consuming process by solving the fluence map optimization (FMO) problem for every candidate beam and selecting the beam that yields the largest improvement to the objective function value. This paper evaluates two alternative methods to accelerate iterative BAS based on surrogates for the FMO objective function value. We suggest to select candidate beams not based on the FMO objective function value after convergence but (1) based on the objective function value after five FMO iterations of a gradient based algorithm and (2) based on a projected gradient of the FMO problem in the first iteration. The performance of the objective function surrogates is evaluated based on the resulting objective function values and dose statistics in a treatment planning study comprising three intracranial, three pancreas, and three prostate cases. Furthermore, iterative BAS is evaluated for an application in which a small number of noncoplanar beams complement a set of coplanar beam orientations. This scenario is of practical interest as noncoplanar setups may require additional attention of the treatment personnel for every couch rotation. Iterative BAS relying on objective function surrogates yields similar results compared to naïve BAS with regard to the objective function values and dose statistics. At the same time, early stopping of the FMO and using the projected gradient during the first iteration enable reductions in computation time by approximately one to two orders of magnitude. With regard to the clinical delivery of noncoplanar IMRT treatments, we could show that optimized beam ensembles using only a few noncoplanar beam orientations often approach the plan quality of fully noncoplanar ensembles. We conclude that iterative BAS in combination with objective function surrogates can be a viable option to implement automated BAS at clinically acceptable computation times.
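    The first surrogate described above, scoring a candidate beam with a few gradient iterations of the fluence problem rather than a converged solve, can be sketched generically. All matrices, the non-negativity projection, and the number of surrogate steps below are illustrative assumptions, not the authors' treatment-planning formulation.

```python
import numpy as np

def surrogate_score(A_cols, y, idx, k_steps=5):
    # score a candidate ensemble by a few projected-gradient steps of the
    # least-squares "fluence" problem instead of a converged solve
    A = A_cols[:, idx]
    x = np.zeros(A.shape[1])
    L = np.linalg.norm(A, 2) ** 2
    for _ in range(k_steps):
        x -= (A.T @ (A @ x - y)) / L
        x = np.maximum(x, 0.0)               # fluence must stay non-negative
    return np.sum((A @ x - y) ** 2)

rng = np.random.default_rng(1)
A_cols = np.abs(rng.normal(size=(50, 12)))   # 12 hypothetical candidate "beams"
y = np.abs(rng.normal(size=50))

selected = []
for _ in range(4):                           # greedily add 4 beams
    remaining = [j for j in range(12) if j not in selected]
    best = min(remaining, key=lambda j: surrogate_score(A_cols, y, selected + [j]))
    selected.append(best)
print(selected)
```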

  2. An approximate block Newton method for coupled iterations of nonlinear solvers: Theory and conjugate heat transfer applications

    NASA Astrophysics Data System (ADS)

    Yeckel, Andrew; Lun, Lisa; Derby, Jeffrey J.

    2009-12-01

    A new, approximate block Newton (ABN) method is derived and tested for the coupled solution of nonlinear models, each of which is treated as a modular, black box. Such an approach is motivated by a desire to maintain software flexibility without sacrificing solution efficiency or robustness. Though block Newton methods of similar type have been proposed and studied, we present a unique derivation and use it to sort out some of the more confusing points in the literature. In particular, we show that our ABN method behaves like a Newton iteration preconditioned by an inexact Newton solver derived from subproblem Jacobians. The method is demonstrated on several conjugate heat transfer problems modeled after melt crystal growth processes. These problems are represented by partitioned spatial regions, each modeled by independent heat transfer codes and linked by temperature and flux matching conditions at the boundaries common to the partitions. Whereas a typical block Gauss-Seidel iteration fails about half the time for the model problem, quadratic convergence is achieved by the ABN method under all conditions studied here. Additional performance advantages over existing methods are demonstrated and discussed.

  3. A time-domain decomposition iterative method for the solution of distributed linear quadratic optimal control problems

    NASA Astrophysics Data System (ADS)

    Heinkenschloss, Matthias

    2005-01-01

    We study a class of time-domain decomposition-based methods for the numerical solution of large-scale linear quadratic optimal control problems. Our methods are based on a multiple shooting reformulation of the linear quadratic optimal control problem as a discrete-time optimal control (DTOC) problem. The optimality conditions for this DTOC problem lead to a linear block tridiagonal system. The diagonal blocks are invertible and are related to the original linear quadratic optimal control problem restricted to smaller time-subintervals. This motivates the application of block Gauss-Seidel (GS)-type methods for the solution of the block tridiagonal systems. Numerical experiments show that the spectral radii of the block GS iteration matrices are larger than one for typical applications, but that the eigenvalues of the iteration matrices decay to zero fast. Hence, while the GS method is not expected to converge for typical applications, it can be effective as a preconditioner for Krylov-subspace methods. This is confirmed by our numerical tests. A byproduct of this research is the insight that certain instantaneous control techniques can be viewed as the application of one step of the forward block GS method applied to the DTOC optimality system.
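    The closing observation, that a Gauss-Seidel sweep which need not converge on its own can still serve as a preconditioner for a Krylov method, is easy to demonstrate on a toy nonsymmetric system. The sketch below uses one forward Gauss-Seidel sweep (a lower-triangular solve) to precondition GMRES; the matrix is illustrative, not the DTOC optimality system.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

# toy nonsymmetric system; one forward Gauss-Seidel sweep, i.e. a solve with
# the lower triangle D + L of the splitting, is used as the preconditioner
n = 300
A = sp.diags([-1.0, 3.0, -1.5], [-1, 0, 1], shape=(n, n), format="csr")
b = np.ones(n)

DL = sp.tril(A, k=0, format="csr")           # D + L part of the splitting
M = spla.LinearOperator(
    (n, n), matvec=lambda r: spla.spsolve_triangular(DL, r, lower=True))

x, info = spla.gmres(A, b, M=M)
print(info, np.linalg.norm(A @ x - b))
```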

  4. SU-F-I-49: Vendor-Independent, Model-Based Iterative Reconstruction On a Rotating Grid with Coordinate-Descent Optimization for CT Imaging Investigations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Young, S; Hoffman, J; McNitt-Gray, M

    Purpose: Iterative reconstruction methods show promise for improving image quality and lowering the dose in helical CT. We aim to develop a novel model-based reconstruction method that offers potential for dose reduction with reasonable computation speed and storage requirements for vendor-independent reconstruction from clinical data on a normal desktop computer. Methods: In 2012, Xu proposed reconstructing on rotating slices to exploit helical symmetry and reduce the storage requirements for the CT system matrix. Inspired by this concept, we have developed a novel reconstruction method incorporating the stored-system-matrix approach together with iterative coordinate-descent (ICD) optimization. A penalized-least-squares objective function with a quadratic penalty term is solved analytically voxel-by-voxel, sequentially iterating along the axial direction first, followed by the transaxial direction. 8 in-plane (transaxial) neighbors are used for the ICD algorithm. The forward problem is modeled via a unique approach that combines the principle of Joseph’s method with trilinear B-spline interpolation to enable accurate reconstruction with low storage requirements. Iterations are accelerated with multi-CPU OpenMP libraries. For preliminary evaluations, we reconstructed (1) a simulated 3D ellipse phantom and (2) an ACR accreditation phantom dataset exported from a clinical scanner (Definition AS, Siemens Healthcare). Image quality was evaluated in the resolution module. Results: Image quality was excellent for the ellipse phantom. For the ACR phantom, image quality was comparable to clinical reconstructions and reconstructions using open-source FreeCT-wFBP software. Also, we did not observe any deleterious impact associated with the utilization of rotating slices. The system matrix storage requirement was only 4.5GB, and reconstruction time was 50 seconds per iteration. Conclusion: Our reconstruction method shows potential for furthering research in low-dose helical CT, in particular as part of our ongoing development of an acquisition/reconstruction pipeline for generating images under a wide range of conditions. Our algorithm will be made available open-source as “FreeCT-ICD”. NIH U01 CA181156; Disclosures (McNitt-Gray): Institutional research agreement, Siemens Healthcare; Past recipient, research grant support, Siemens Healthcare; Consultant, Toshiba America Medical Systems; Consultant, Samsung Electronics.

  5. LETTER TO THE EDITOR: Iteratively-coupled propagating exterior complex scaling method for electron hydrogen collisions

    NASA Astrophysics Data System (ADS)

    Bartlett, Philip L.; Stelbovics, Andris T.; Bray, Igor

    2004-02-01

    A newly-derived iterative coupling procedure for the propagating exterior complex scaling (PECS) method is used to efficiently calculate the electron-impact wavefunctions for atomic hydrogen. An overview of this method is given along with methods for extracting scattering cross sections. Differential scattering cross sections at 30 eV are presented for the electron-impact excitation to the n = 1, 2, 3 and 4 final states, for both PECS and convergent close coupling (CCC), which are in excellent agreement with each other and with experiment. PECS results are presented at 27.2 eV and 30 eV for symmetric and asymmetric energy-sharing triple differential cross sections, which are in excellent agreement with CCC and exterior complex scaling calculations, and with experimental data. At these intermediate energies, the efficiency of the PECS method with iterative coupling has allowed highly accurate partial-wave solutions of the full Schrödinger equation, for L ≤ 50 and a large number of coupled angular momentum states, to be obtained with minimal computing resources.

  6. Domain decomposition methods for the parallel computation of reacting flows

    NASA Technical Reports Server (NTRS)

    Keyes, David E.

    1988-01-01

    Domain decomposition is a natural route to parallel computing for partial differential equation solvers. Subdomains of which the original domain of definition is comprised are assigned to independent processors at the price of periodic coordination between processors to compute global parameters and maintain the requisite degree of continuity of the solution at the subdomain interfaces. In the domain-decomposed solution of steady multidimensional systems of PDEs by finite difference methods using a pseudo-transient version of Newton iteration, the only portion of the computation which generally stands in the way of efficient parallelization is the solution of the large, sparse linear systems arising at each Newton step. For some Jacobian matrices drawn from an actual two-dimensional reacting flow problem, comparisons are made between relaxation-based linear solvers and also preconditioned iterative methods of Conjugate Gradient and Chebyshev type, focusing attention on both iteration count and global inner product count. The generalized minimum residual method with block-ILU preconditioning is judged the best serial method among those considered, and parallel numerical experiments on the Encore Multimax demonstrate for it approximately 10-fold speedup on 16 processors.

  7. An image hiding method based on cascaded iterative Fourier transform and public-key encryption algorithm

    NASA Astrophysics Data System (ADS)

    Zhang, B.; Sang, Jun; Alam, Mohammad S.

    2013-03-01

    An image hiding method based on cascaded iterative Fourier transform and public-key encryption algorithm was proposed. Firstly, the original secret image was encrypted into two phase-only masks M1 and M2 via cascaded iterative Fourier transform (CIFT) algorithm. Then, the public-key encryption algorithm RSA was adopted to encrypt M2 into M2' . Finally, a host image was enlarged by extending one pixel into 2×2 pixels and each element in M1 and M2' was multiplied with a superimposition coefficient and added to or subtracted from two different elements in the 2×2 pixels of the enlarged host image. To recover the secret image from the stego-image, the two masks were extracted from the stego-image without the original host image. By applying public-key encryption algorithm, the key distribution was facilitated, and also compared with the image hiding method based on optical interference, the proposed method may reach higher robustness by employing the characteristics of the CIFT algorithm. Computer simulations show that this method has good robustness against image processing.

  8. Accelerated perturbation-resilient block-iterative projection methods with application to image reconstruction

    PubMed Central

    Nikazad, T; Davidi, R; Herman, G. T.

    2013-01-01

    We study the convergence of a class of accelerated perturbation-resilient block-iterative projection methods for solving systems of linear equations. We prove convergence to a fixed point of an operator even in the presence of summable perturbations of the iterates, irrespective of the consistency of the linear system. For a consistent system, the limit point is a solution of the system. In the inconsistent case, the symmetric version of our method converges to a weighted least squares solution. Perturbation resilience is utilized to approximate the minimum of a convex functional subject to the equations. A main contribution, as compared to previously published approaches to achieving similar aims, is a more than an order of magnitude speed-up, as demonstrated by applying the methods to problems of image reconstruction from projections. In addition, the accelerated algorithms are illustrated to be better, in a strict sense provided by the method of statistical hypothesis testing, than their unaccelerated versions for the task of detecting small tumors in the brain from X-ray CT projection data. PMID:23440911
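    A bare-bones example of a block-iterative projection method, without the perturbation-resilience analysis or the acceleration that is the paper's contribution, is a Cimmino-style sweep in which row projections are averaged within each block. Everything below (block count, relaxation, test data) is an illustrative assumption.

```python
import numpy as np

def block_projection(A, b, n_blocks=4, n_iter=2000, relax=1.0):
    # Cimmino-style block-iterative projections for A x = b: within a block,
    # the row projections are averaged; blocks are then visited sequentially
    m, n = A.shape
    x = np.zeros(n)
    blocks = np.array_split(np.arange(m), n_blocks)
    row_norms = np.sum(A ** 2, axis=1)
    for _ in range(n_iter):
        for I in blocks:
            resid = (b[I] - A[I] @ x) / row_norms[I]
            x += relax * (A[I].T @ resid) / len(I)
    return x

rng = np.random.default_rng(2)
A = rng.normal(size=(60, 30))
x_true = rng.normal(size=30)
b = A @ x_true                                   # consistent system
x = block_projection(A, b)
print(np.linalg.norm(x - x_true) / np.linalg.norm(x_true))
```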

  9. Accelerated perturbation-resilient block-iterative projection methods with application to image reconstruction.

    PubMed

    Nikazad, T; Davidi, R; Herman, G T

    2012-03-01

    We study the convergence of a class of accelerated perturbation-resilient block-iterative projection methods for solving systems of linear equations. We prove convergence to a fixed point of an operator even in the presence of summable perturbations of the iterates, irrespective of the consistency of the linear system. For a consistent system, the limit point is a solution of the system. In the inconsistent case, the symmetric version of our method converges to a weighted least squares solution. Perturbation resilience is utilized to approximate the minimum of a convex functional subject to the equations. A main contribution, as compared to previously published approaches to achieving similar aims, is a more than an order of magnitude speed-up, as demonstrated by applying the methods to problems of image reconstruction from projections. In addition, the accelerated algorithms are illustrated to be better, in a strict sense provided by the method of statistical hypothesis testing, than their unaccelerated versions for the task of detecting small tumors in the brain from X-ray CT projection data.

  10. Varying-energy CT imaging method based on EM-TV

    NASA Astrophysics Data System (ADS)

    Chen, Ping; Han, Yan

    2016-11-01

    For complicated structural components with wide x-ray attenuation ranges, conventional fixed-energy computed tomography (CT) imaging cannot obtain all the structural information. This limitation results in a shortage of CT information because the effective thickness of the components along the direction of x-ray penetration exceeds the limit of the dynamic range of the x-ray imaging system. To address this problem, a varying-energy x-ray CT imaging method is proposed. In this new method, the tube voltage is adjusted several times in fixed small increments. Next, the fusion of grey consistency and logarithm demodulation is applied to obtain a complete, lower-noise projection with a high dynamic range (HDR). In addition, to address the noise suppression problem of the analytical method, EM-TV (expectation maximization-total variation) iterative reconstruction is used. In the process of iteration, the reconstruction result obtained at one x-ray energy is used as the initial condition of the next iteration. An accompanying experiment demonstrates that this EM-TV reconstruction can also extend the dynamic range of x-ray imaging systems and provide a higher reconstruction quality relative to the fusion reconstruction method.
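    The EM part of EM-TV can be sketched as plain MLEM iterations for Poisson data. The toy system matrix below is an assumption, and the TV penalty and the chaining of reconstructions across tube voltages described in the abstract are omitted.

```python
import numpy as np

def mlem(A, y, n_iter=100):
    # plain EM (MLEM) multiplicative updates for y ~ Poisson(A x);
    # the TV regularization used in EM-TV is intentionally left out here
    x = np.ones(A.shape[1])
    sens = A.sum(axis=0)                        # column sums (sensitivity image)
    for _ in range(n_iter):
        proj = A @ x
        ratio = y / np.maximum(proj, 1e-12)
        x *= (A.T @ ratio) / np.maximum(sens, 1e-12)
    return x

rng = np.random.default_rng(3)
A = np.abs(rng.normal(size=(80, 40)))           # toy non-negative system matrix
x_true = np.abs(rng.normal(size=40))
y = rng.poisson(A @ x_true).astype(float)
print(np.linalg.norm(mlem(A, y) - x_true) / np.linalg.norm(x_true))
```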

  11. Improved sensitivity of computed tomography towards iodine and gold nanoparticle contrast agents via iterative reconstruction methods

    PubMed Central

    Bernstein, Ally Leigh; Dhanantwari, Amar; Jurcova, Martina; Cheheltani, Rabee; Naha, Pratap Chandra; Ivanc, Thomas; Shefer, Efrat; Cormode, David Peter

    2016-01-01

    Computed tomography is a widely used medical imaging technique that has high spatial and temporal resolution. Its weakness is its low sensitivity towards contrast media. Iterative reconstruction techniques (ITER) have recently become available, which provide reduced image noise compared with traditional filtered back-projection methods (FBP). This may allow the sensitivity of CT to be improved; however, this effect has not been studied in detail. We scanned phantoms containing either an iodine contrast agent or gold nanoparticles. We used a range of tube voltages and currents. We performed reconstruction with FBP, ITER and a novel iterative model-based reconstruction (IMR) algorithm. We found that noise decreased in an algorithm dependent manner (FBP > ITER > IMR) for every scan and that no differences were observed in attenuation rates of the agents. The contrast to noise ratio (CNR) of iodine was highest at 80 kV, whilst the CNR for gold was highest at 140 kV. The CNR of IMR images was almost tenfold higher than that of FBP images. Similar trends were found in dual energy images formed using these algorithms. In conclusion, IMR-based reconstruction techniques will allow contrast agents to be detected with greater sensitivity, and may allow lower contrast agent doses to be used. PMID:27185492

  12. Finite Volume Element (FVE) discretization and multilevel solution of the axisymmetric heat equation

    NASA Astrophysics Data System (ADS)

    Litaker, Eric T.

    1994-12-01

    The axisymmetric heat equation, resulting from a point-source of heat applied to a metal block, is solved numerically; both iterative and multilevel solutions are computed in order to compare the two processes. The continuum problem is discretized in two stages: finite differences are used to discretize the time derivatives, resulting in a fully implicit backward time-stepping scheme, and the Finite Volume Element (FVE) method is used to discretize the spatial derivatives. The application of the FVE method to a problem in cylindrical coordinates is new, and results in stencils which are analyzed extensively. Several iteration schemes are considered, including both Jacobi and Gauss-Seidel; a thorough analysis of these schemes is done, using both the spectral radii of the iteration matrices and local mode analysis. Using this discretization, a Gauss-Seidel relaxation scheme is used to solve the heat equation iteratively. A multilevel solution process is then constructed, including the development of intergrid transfer and coarse grid operators. Local mode analysis is performed on the components of the amplification matrix, resulting in the two-level convergence factors for various combinations of the operators. A multilevel solution process is implemented by using multigrid V-cycles; the iterative and multilevel results are compared and discussed in detail. The computational savings resulting from the multilevel process are then discussed.
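    The convergence analysis described above rests on the spectral radii of the Jacobi and Gauss-Seidel iteration matrices. A small sketch of that computation on a 1D Laplacian stencil, used here only as a stand-in for the FVE stencils analyzed in the thesis, follows.

```python
import numpy as np

def iteration_spectral_radii(A):
    # spectral radii of the Jacobi and Gauss-Seidel iteration matrices for the
    # splitting A = D - L - U; values below 1 mean the scheme converges
    D = np.diag(np.diag(A))
    L = -np.tril(A, -1)
    U = -np.triu(A, 1)
    G_jac = np.linalg.solve(D, L + U)
    G_gs = np.linalg.solve(D - L, U)
    rho = lambda M: np.max(np.abs(np.linalg.eigvals(M)))
    return rho(G_jac), rho(G_gs)

# 1D Laplacian stencil as a toy stand-in for the FVE heat-equation stencils
n = 20
A = (np.diag(2.0 * np.ones(n))
     - np.diag(np.ones(n - 1), 1)
     - np.diag(np.ones(n - 1), -1))
print(iteration_spectral_radii(A))   # for this matrix, rho_GS ≈ rho_Jacobi ** 2
```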

  13. An iterative sinogram gap-filling method with object- and scanner-dedicated discrete cosine transform (DCT)-domain filters for high resolution PET scanners.

    PubMed

    Kim, Kwangdon; Lee, Kisung; Lee, Hakjae; Joo, Sungkwan; Kang, Jungwon

    2018-01-01

    We aimed to develop a gap-filling algorithm, in particular the filter mask design method of the algorithm, which optimizes the filter to the imaging object by an adaptive and iterative process, rather than by manual means. Two numerical phantoms (Shepp-Logan and Jaszczak) were used for sinogram generation. The algorithm works iteratively, not only on the gap-filling iteration but also on the mask generation, to identify the object-dedicated low frequency area in the DCT-domain that is to be preserved. We redefine the low frequency preserving region of the filter mask at every gap-filling iteration, and the region verges on the property of the original image in the DCT domain. The previous DCT2 mask for each phantom case had been manually well optimized, and the results show little difference from the reference image and sinogram. We observed little or no difference between the results of the manually optimized DCT2 algorithm and those of the proposed algorithm. The proposed algorithm works well for various types of scanning object and shows results that compare to those of the manually optimized DCT2 algorithm without perfect or full information of the imaging object.
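    The underlying mechanism, alternating between re-imposing measured sinogram samples and low-pass filtering in the DCT domain, can be sketched as below. The fixed rectangular low-frequency mask is a simplification; the paper's contribution is precisely the adaptive, object-dedicated mask design that this sketch does not implement.

```python
import numpy as np
from scipy.fft import dctn, idctn

def dct_gap_fill(sino, gap_mask, keep_frac=0.15, n_iter=50):
    # iterative gap filling: keep measured samples, zero high DCT frequencies
    # outside a (here fixed, not object-adaptive) low-frequency mask, repeat
    filled = sino.copy()
    h, w = sino.shape
    lowpass = np.zeros((h, w))
    lowpass[: int(keep_frac * h), : int(keep_frac * w)] = 1.0
    for _ in range(n_iter):
        coeffs = dctn(filled, norm="ortho") * lowpass
        estimate = idctn(coeffs, norm="ortho")
        filled = np.where(gap_mask, estimate, sino)   # re-impose measured data
    return filled

rng = np.random.default_rng(4)
sino = rng.normal(size=(64, 64)).cumsum(axis=1)       # smooth-ish toy "sinogram"
gap_mask = np.zeros_like(sino, dtype=bool)
gap_mask[:, 30:34] = True                             # simulated detector gap
print(np.abs(dct_gap_fill(sino, gap_mask)[:, 30:34]).mean())
```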

  14. Two-Level Hierarchical FEM Method for Modeling Passive Microwave Devices

    NASA Astrophysics Data System (ADS)

    Polstyanko, Sergey V.; Lee, Jin-Fa

    1998-03-01

    In recent years multigrid methods have been proven to be very efficient for solving large systems of linear equations resulting from the discretization of positive definite differential equations by either the finite difference method or the h-version of the finite element method. In this paper an iterative method of the multiple level type is proposed for solving systems of algebraic equations which arise from the p-version of the finite element analysis applied to indefinite problems. A two-level V-cycle algorithm has been implemented and studied with a Gauss-Seidel iterative scheme used as a smoother. The convergence of the method has been investigated, and numerical results for a number of numerical examples are presented.

  15. A diffusion-based truncated projection artifact reduction method for iterative digital breast tomosynthesis reconstruction

    PubMed Central

    Lu, Yao; Chan, Heang-Ping; Wei, Jun; Hadjiiski, Lubomir M

    2014-01-01

    Digital breast tomosynthesis (DBT) has strong promise to improve sensitivity for detecting breast cancer. DBT reconstruction estimates the breast tissue attenuation using projection views (PVs) acquired in a limited angular range. Because of the limited field of view (FOV) of the detector, the PVs may not completely cover the breast in the x-ray source motion direction at large projection angles. The voxels in the imaged volume cannot be updated when they are outside the FOV, thus causing a discontinuity in intensity across the FOV boundaries in the reconstructed slices, which we refer to as the truncated projection artifact (TPA). Most existing TPA reduction methods were developed for the filtered backprojection method in the context of computed tomography. In this study, we developed a new diffusion-based method to reduce TPAs during DBT reconstruction using the simultaneous algebraic reconstruction technique (SART). Our TPA reduction method compensates for the discontinuity in background intensity outside the FOV of the current PV after each PV updating in SART. The difference in voxel values across the FOV boundary is smoothly diffused to the region beyond the FOV of the current PV. Diffusion-based background intensity estimation is performed iteratively to avoid structured artifacts. The method is applicable to TPA in both the forward and backward directions of the PVs and for any number of iterations during reconstruction. The effectiveness of the new method was evaluated by comparing the visual quality of the reconstructed slices and the measured discontinuities across the TPA with and without artifact correction at various iterations. The results demonstrated that the diffusion-based intensity compensation method reduced the TPA while preserving the detailed tissue structures. The visibility of breast lesions obscured by the TPA was improved after artifact reduction. PMID:23318346

  16. GPU-accelerated iterative reconstruction from Compton scattered data using a matched pair of conic projector and backprojector.

    PubMed

    Nguyen, Van-Giang; Lee, Soo-Jin

    2016-07-01

    Iterative reconstruction from Compton scattered data is known to be computationally more challenging than that from conventional line-projection based emission data in that the gamma rays that undergo Compton scattering are modeled as conic projections rather than line projections. In conventional tomographic reconstruction, to parallelize the projection and backprojection operations using the graphics processing unit (GPU), approximated methods that use an unmatched pair of ray-tracing forward projector and voxel-driven backprojector have been widely used. In this work, we propose a new GPU-accelerated method for Compton camera reconstruction which is more accurate by using exactly matched pair of projector and backprojector. To calculate conic forward projection, we first sample the cone surface into conic rays and accumulate the intersecting chord lengths of the conic rays passing through voxels using a fast ray-tracing method (RTM). For conic backprojection, to obtain the true adjoint of the conic forward projection, while retaining the computational efficiency of the GPU, we use a voxel-driven RTM which is essentially the same as the standard RTM used for the conic forward projector. Our simulation results show that, while the new method is about 3 times slower than the approximated method, it is still about 16 times faster than the CPU-based method without any loss of accuracy. The net conclusion is that our proposed method is guaranteed to retain the reconstruction accuracy regardless of the number of iterations by providing a perfectly matched projector-backprojector pair, which makes iterative reconstruction methods for Compton imaging faster and more accurate. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.

  17. Self-consistent field for fragmented quantum mechanical model of large molecular systems.

    PubMed

    Jin, Yingdi; Su, Neil Qiang; Xu, Xin; Hu, Hao

    2016-01-30

    Fragment-based linear scaling quantum chemistry methods are a promising tool for the accurate simulation of chemical and biomolecular systems. Because of the coupled inter-fragment electrostatic interactions, a dual-layer iterative scheme is often employed to compute the fragment electronic structure and the total energy. In the dual-layer scheme, the self-consistent field (SCF) of the electronic structure of a fragment must be solved first, then followed by the updating of the inter-fragment electrostatic interactions. The two steps are sequentially carried out and repeated; as such a significant total number of fragment SCF iterations is required to converge the total energy and becomes the computational bottleneck in many fragment quantum chemistry methods. To reduce the number of fragment SCF iterations and speed up the convergence of the total energy, we develop here a new SCF scheme in which the inter-fragment interactions can be updated concurrently without converging the fragment electronic structure. By constructing the global, block-wise Fock matrix and density matrix, we prove that the commutation between the two global matrices guarantees the commutation of the corresponding matrices in each fragment. Therefore, many highly efficient numerical techniques such as the direct inversion of the iterative subspace method can be employed to converge simultaneously the electronic structure of all fragments, reducing significantly the computational cost. Numerical examples for water clusters of different sizes suggest that the method shall be very useful in improving the scalability of fragment quantum chemistry methods. © 2015 Wiley Periodicals, Inc.
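    The direct inversion of the iterative subspace (DIIS) acceleration mentioned at the end can be sketched for a generic fixed-point iteration; in the fragment SCF context the same extrapolation would act on the global block-wise Fock and density matrices. The toy contraction below is purely illustrative and is not the authors' method.

```python
import numpy as np

def diis_fixed_point(g, x0, max_hist=5, n_iter=50, tol=1e-10):
    # generic DIIS (Pulay mixing) acceleration of a fixed-point iteration x = g(x)
    xs, errs = [], []
    x = x0
    for _ in range(n_iter):
        fx = g(x)
        e = fx - x                                # residual used as DIIS error vector
        xs.append(fx); errs.append(e)
        if len(xs) > max_hist:
            xs.pop(0); errs.pop(0)
        m = len(errs)
        B = np.empty((m + 1, m + 1))              # bordered overlap matrix
        B[:m, :m] = np.array([[ei @ ej for ej in errs] for ei in errs])
        B[m, :], B[:, m], B[m, m] = -1.0, -1.0, 0.0
        rhs = np.zeros(m + 1); rhs[m] = -1.0
        c = np.linalg.lstsq(B, rhs, rcond=None)[0][:m]
        x = sum(ci * xi for ci, xi in zip(c, xs))  # extrapolated iterate
        if np.linalg.norm(e) < tol:
            break
    return x

# toy contraction: solve x = 0.5*cos(x) componentwise
x = diis_fixed_point(lambda x: 0.5 * np.cos(x), np.zeros(3))
print(x, np.linalg.norm(x - 0.5 * np.cos(x)))
```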

  18. Implicit methods for the Navier-Stokes equations

    NASA Technical Reports Server (NTRS)

    Yoon, S.; Kwak, D.

    1990-01-01

    Numerical solutions of the Navier-Stokes equations using explicit schemes can be obtained at the expense of efficiency. Conventional implicit methods which often achieve fast convergence rates suffer high cost per iteration. A new implicit scheme based on lower-upper factorization and symmetric Gauss-Seidel relaxation offers very low cost per iteration as well as fast convergence. High efficiency is achieved by accomplishing the complete vectorizability of the algorithm on oblique planes of sweep in three dimensions.

  19. A new least-squares transport equation compatible with voids

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hansen, J. B.; Morel, J. E.

    2013-07-01

    We define a new least-squares transport equation that is applicable in voids, can be solved using source iteration with diffusion-synthetic acceleration, and requires only the solution of an independent set of second-order self-adjoint equations for each direction during each source iteration. We derive the equation, discretize it using the S_n method in conjunction with a linear-continuous finite-element method in space, and computationally demonstrate several of its properties. (authors)

  20. Iterative nonlinear joint transform correlation for the detection of objects in cluttered scenes

    NASA Astrophysics Data System (ADS)

    Haist, Tobias; Tiziani, Hans J.

    1999-03-01

    An iterative correlation technique with digital image processing in the feedback loop for the detection of small objects in cluttered scenes is proposed. A scanning aperture is combined with the method in order to improve the immunity against noise and clutter. Multiple reference objects or different views of one object are processed in parallel. We demonstrate the method by detecting a noisy and distorted face in a crowd with a nonlinear joint transform correlator.

  1. Recent advances in Lanczos-based iterative methods for nonsymmetric linear systems

    NASA Technical Reports Server (NTRS)

    Freund, Roland W.; Golub, Gene H.; Nachtigal, Noel M.

    1992-01-01

    In recent years, there has been a true revival of the nonsymmetric Lanczos method. On the one hand, the possible breakdowns in the classical algorithm are now better understood, and so-called look-ahead variants of the Lanczos process have been developed, which remedy this problem. On the other hand, various new Lanczos-based iterative schemes for solving nonsymmetric linear systems have been proposed. This paper gives a survey of some of these recent developments.

  2. Simultaneous Localization and Mapping with Iterative Sparse Extended Information Filter for Autonomous Vehicles

    PubMed Central

    He, Bo; Liu, Yang; Dong, Diya; Shen, Yue; Yan, Tianhong; Nian, Rui

    2015-01-01

    In this paper, a novel iterative sparse extended information filter (ISEIF) was proposed to solve the simultaneous localization and mapping (SLAM) problem, which is crucial for autonomous vehicles. The proposed algorithm solves the measurement update equations with iterative methods adaptively to reduce linearization errors. While keeping the scalability advantage, the consistency and accuracy of SEIF are improved. Simulations and practical experiments were carried out with both a land car benchmark and an autonomous underwater vehicle. Comparisons between iterative SEIF (ISEIF), standard EKF and SEIF are presented. All of the results convincingly show that ISEIF yields more consistent and accurate estimates compared to SEIF and preserves the scalability advantage over EKF, as well. PMID:26287194

  3. An Assessment of Iterative Reconstruction Methods for Sparse Ultrasound Imaging

    PubMed Central

    Valente, Solivan A.; Zibetti, Marcelo V. W.; Pipa, Daniel R.; Maia, Joaquim M.; Schneider, Fabio K.

    2017-01-01

    Ultrasonic image reconstruction using inverse problems has recently appeared as an alternative to enhance ultrasound imaging over beamforming methods. This approach depends on the accuracy of the acquisition model used to represent transducers, reflectivity, and medium physics. Iterative methods, well known in general sparse signal reconstruction, are also suited for imaging. In this paper, a discrete acquisition model is assessed by solving a linear system of equations by an ℓ1-regularized least-squares minimization, where the solution sparsity may be adjusted as desired. The paper surveys 11 variants of four well-known algorithms for sparse reconstruction, and assesses their optimization parameters with the goal of finding the best approach for iterative ultrasound imaging. The strategy for the model evaluation consists of using two distinct datasets. We first generate data from a synthetic phantom that mimics real targets inside a professional ultrasound phantom device. This dataset is contaminated with Gaussian noise with an estimated SNR, and all methods are assessed by their resulting images and performances. The model and methods are then assessed with real data collected by a research ultrasound platform when scanning the same phantom device, and results are compared with beamforming. A distinct real dataset is finally used to further validate the proposed modeling. Although high computational effort is required by iterative methods, results show that the discrete model may lead to images closer to ground-truth than traditional beamforming. However, computing capabilities of current platforms need to evolve before frame rates currently delivered by ultrasound equipment are achievable. PMID:28282862
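
    As one concrete instance of the iterative sparse solvers surveyed here, a minimal ISTA (iterative shrinkage-thresholding) sketch for the ℓ1-regularized least-squares problem could look like the following in Python with NumPy; the matrix A stands for a generic discrete acquisition model and is not the specific model assessed in the paper.

```python
import numpy as np

def ista(A, y, lam, n_iter=200):
    """Iterative shrinkage-thresholding for min_x 0.5*||Ax - y||^2 + lam*||x||_1."""
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - y)           # gradient of the smooth data-fit term
        z = x - grad / L                   # gradient step
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft threshold
    return x
```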

  4. An iterative method for airway segmentation using multiscale leakage detection

    NASA Astrophysics Data System (ADS)

    Nadeem, Syed Ahmed; Jin, Dakai; Hoffman, Eric A.; Saha, Punam K.

    2017-02-01

    There are growing applications of quantitative computed tomography for assessment of pulmonary diseases by characterizing lung parenchyma as well as the bronchial tree. Many large multi-center studies incorporating lung imaging as a study component are interested in phenotypes relating airway branching patterns, wall-thickness, and other morphological measures. To our knowledge, there are no fully automated airway tree segmentation methods, free of the need for user review. Even when there are failures in a small fraction of segmentation results, the airway tree masks must be manually reviewed for all results which is laborious considering that several thousands of image data sets are evaluated in large studies. In this paper, we present a CT-based novel airway tree segmentation algorithm using iterative multi-scale leakage detection, freezing, and active seed detection. The method is fully automated requiring no manual inputs or post-segmentation editing. It uses simple intensity based connectivity and a new leakage detection algorithm to iteratively grow an airway tree starting from an initial seed inside the trachea. It begins with a conservative threshold and then, iteratively shifts toward generous values. The method was applied on chest CT scans of ten non-smoking subjects at total lung capacity and ten at functional residual capacity. Airway segmentation results were compared to an expert's manually edited segmentations. Branch level accuracy of the new segmentation method was examined along five standardized segmental airway paths (RB1, RB4, RB10, LB1, LB10) and two generations beyond these branches. The method successfully detected all branches up to two generations beyond these segmental bronchi with no visual leakages.

  5. Outlier detection for particle image velocimetry data using a locally estimated noise variance

    NASA Astrophysics Data System (ADS)

    Lee, Yong; Yang, Hua; Yin, ZhouPing

    2017-03-01

    This work describes an adaptive spatial variable threshold outlier detection algorithm for raw gridded particle image velocimetry data using a locally estimated noise variance. This method is an iterative procedure, and each iteration is composed of a reference vector field reconstruction step and an outlier detection step. We construct the reference vector field using a weighted adaptive smoothing method (Garcia 2010 Comput. Stat. Data Anal. 54 1167-78), and the weights are determined in the outlier detection step using a modified outlier detector (Ma et al 2014 IEEE Trans. Image Process. 23 1706-21). A hard decision on the final weights of the iteration can produce outlier labels of the field. The technical contribution is that the spatial variable threshold motivation is embedded in the modified outlier detector with a locally estimated noise variance in an iterative framework for the first time. It turns out that a spatial variable threshold is preferable to a single spatial constant threshold in complicated flows such as vortex flows or turbulent flows. Synthetic cellular vortical flows with simulated scattered or clustered outliers are adopted to evaluate the performance of our proposed method in comparison with popular validation approaches. The method also turns out to be beneficial in a real PIV measurement of turbulent flow. The experimental results demonstrate that the proposed method yields competitive performance in terms of outlier under-detection and over-detection counts. In addition, the outlier detection method is computationally efficient and adaptive, requires no user-defined parameters, and corresponding implementations are provided in the supplementary materials.

  6. A fast reconstruction algorithm for fluorescence optical diffusion tomography based on preiteration.

    PubMed

    Song, Xiaolei; Xiong, Xiaoyun; Bai, Jing

    2007-01-01

    Fluorescence optical diffusion tomography in the near-infrared (NIR) bandwidth is considered to be one of the most promising ways for noninvasive molecular-based imaging. Many reconstructive approaches to it utilize iterative methods for data inversion. However, they are time-consuming and far from meeting real-time imaging demands. In this work, a fast preiteration algorithm based on the generalized inverse matrix is proposed. This method needs only one step of matrix-vector multiplication online, by pushing the iteration process to be executed offline. In the preiteration process, the second-order iterative format is employed to exponentially accelerate the convergence. Simulations based on an analytical diffusion model show that the distribution of fluorescent yield can be well estimated by this algorithm and the reconstruction speed is remarkably increased.

  7. Analytic approximation for random muffin-tin alloys

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mills, R.; Gray, L.J.; Kaplan, T.

    1983-03-15

    The methods introduced in a previous paper under the name of "traveling-cluster approximation" (TCA) are applied, in a multiple-scattering approach, to the case of a random muffin-tin substitutional alloy. This permits the iterative part of a self-consistent calculation to be carried out entirely in terms of on-the-energy-shell scattering amplitudes. Off-shell components of the mean resolvent, needed for the calculation of spectral functions, are obtained by standard methods involving single-site scattering wave functions. The single-site TCA is just the usual coherent-potential approximation, expressed in a form particularly suited for iteration. A fixed-point theorem is proved for the general t-matrix TCA, ensuring convergence upon iteration to a unique self-consistent solution with the physically essential Herglotz properties.

  8. Noniterative Multireference Coupled Cluster Methods on Heterogeneous CPU-GPU Systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bhaskaran-Nair, Kiran; Ma, Wenjing; Krishnamoorthy, Sriram

    2013-04-09

    A novel parallel algorithm for non-iterative multireference coupled cluster (MRCC) theories, which merges recently introduced reference-level parallelism (RLP) [K. Bhaskaran-Nair, J. Brabec, E. Aprà, H.J.J. van Dam, J. Pittner, K. Kowalski, J. Chem. Phys. 137, 094112 (2012)] with the possibility of accelerating numerical calculations using graphics processing units (GPUs), is presented. We discuss the performance of this algorithm on the example of the MRCCSD(T) method (iterative singles and doubles and perturbative triples), where the corrections due to triples are added to the diagonal elements of the MRCCSD (iterative singles and doubles) effective Hamiltonian matrix. The performance of the combined RLP/GPU algorithm is illustrated on the example of the Brillouin-Wigner (BW) and Mukherjee (Mk) state-specific MRCCSD(T) formulations.

  9. Real-Time Nonlocal Means-Based Despeckling.

    PubMed

    Breivik, Lars Hofsoy; Snare, Sten Roar; Steen, Erik Normann; Solberg, Anne H Schistad

    2017-06-01

    In this paper, we propose a multiscale nonlocal means-based despeckling method for medical ultrasound. The multiscale approach leads to large computational savings and improves despeckling results over single-scale iterative approaches. We present two variants of the method. The first, denoted multiscale nonlocal means (MNLM), yields uniform robust filtering of speckle both in structured and homogeneous regions. The second, denoted unnormalized MNLM (UMNLM), is more conservative in regions of structure assuring minimal disruption of salient image details. Due to the popularity of anisotropic diffusion-based methods in the despeckling literature, we review the connection between anisotropic diffusion and iterative variants of NLM. These iterative variants in turn relate to our multiscale variant. As part of our evaluation, we conduct a simulation study making use of ground truth phantoms generated from clinical B-mode ultrasound images. We evaluate our method against a set of popular methods from the despeckling literature on both fine and coarse speckle noise. In terms of computational efficiency, our method outperforms the other considered methods. Quantitatively on simulations and on a tissue-mimicking phantom, our method is found to be competitive with the state-of-the-art. On clinical B-mode images, our method is found to effectively smooth speckle while preserving low-contrast and highly localized salient image detail.

  10. Accelerated fast iterative shrinkage thresholding algorithms for sparsity-regularized cone-beam CT image reconstruction

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Xu, Qiaofeng; Sawatzky, Alex; Anastasio, Mark A., E-mail: anastasio@wustl.edu

    Purpose: The development of iterative image reconstruction algorithms for cone-beam computed tomography (CBCT) remains an active and important research area. Even with hardware acceleration, the overwhelming majority of the available 3D iterative algorithms that implement nonsmooth regularizers remain computationally burdensome and have not been translated for routine use in time-sensitive applications such as image-guided radiation therapy (IGRT). In this work, two variants of the fast iterative shrinkage thresholding algorithm (FISTA) are proposed and investigated for accelerated iterative image reconstruction in CBCT. Methods: Algorithm acceleration was achieved by replacing the original gradient-descent step in the FISTAs by a subproblem that is solved by use of the ordered subset simultaneous algebraic reconstruction technique (OS-SART). Due to the preconditioning matrix adopted in the OS-SART method, two new weighted proximal problems were introduced and corresponding fast gradient projection-type algorithms were developed for solving them. We also provided efficient numerical implementations of the proposed algorithms that exploit the massive data parallelism of multiple graphics processing units. Results: The improved rates of convergence of the proposed algorithms were quantified in computer-simulation studies and by use of clinical projection data corresponding to an IGRT study. The accelerated FISTAs were shown to possess dramatically improved convergence properties as compared to the standard FISTAs. For example, the number of iterations to achieve a specified reconstruction error could be reduced by an order of magnitude. Volumetric images reconstructed from clinical data were produced in under 4 min. Conclusions: The FISTA achieves a quadratic convergence rate and can therefore potentially reduce the number of iterations required to produce an image of a specified image quality as compared to first-order methods. We have proposed and investigated accelerated FISTAs for use with two nonsmooth penalty functions that will lead to further reductions in image reconstruction times while preserving image quality. Moreover, with the help of a mixed sparsity-regularization, better preservation of soft-tissue structures can be potentially obtained. The algorithms were systematically evaluated by use of computer-simulated and clinical data sets.

  11. Accelerated fast iterative shrinkage thresholding algorithms for sparsity-regularized cone-beam CT image reconstruction

    PubMed Central

    Xu, Qiaofeng; Yang, Deshan; Tan, Jun; Sawatzky, Alex; Anastasio, Mark A.

    2016-01-01

    Purpose: The development of iterative image reconstruction algorithms for cone-beam computed tomography (CBCT) remains an active and important research area. Even with hardware acceleration, the overwhelming majority of the available 3D iterative algorithms that implement nonsmooth regularizers remain computationally burdensome and have not been translated for routine use in time-sensitive applications such as image-guided radiation therapy (IGRT). In this work, two variants of the fast iterative shrinkage thresholding algorithm (FISTA) are proposed and investigated for accelerated iterative image reconstruction in CBCT. Methods: Algorithm acceleration was achieved by replacing the original gradient-descent step in the FISTAs by a subproblem that is solved by use of the ordered subset simultaneous algebraic reconstruction technique (OS-SART). Due to the preconditioning matrix adopted in the OS-SART method, two new weighted proximal problems were introduced and corresponding fast gradient projection-type algorithms were developed for solving them. We also provided efficient numerical implementations of the proposed algorithms that exploit the massive data parallelism of multiple graphics processing units. Results: The improved rates of convergence of the proposed algorithms were quantified in computer-simulation studies and by use of clinical projection data corresponding to an IGRT study. The accelerated FISTAs were shown to possess dramatically improved convergence properties as compared to the standard FISTAs. For example, the number of iterations to achieve a specified reconstruction error could be reduced by an order of magnitude. Volumetric images reconstructed from clinical data were produced in under 4 min. Conclusions: The FISTA achieves a quadratic convergence rate and can therefore potentially reduce the number of iterations required to produce an image of a specified image quality as compared to first-order methods. We have proposed and investigated accelerated FISTAs for use with two nonsmooth penalty functions that will lead to further reductions in image reconstruction times while preserving image quality. Moreover, with the help of a mixed sparsity-regularization, better preservation of soft-tissue structures can be potentially obtained. The algorithms were systematically evaluated by use of computer-simulated and clinical data sets. PMID:27036582
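
    For orientation, the textbook FISTA iteration with a plain gradient step (rather than the OS-SART subproblem and GPU implementation described in the record) can be sketched as follows in Python with NumPy; all names and parameters are illustrative.

```python
import numpy as np

def fista(A, y, lam, n_iter=200):
    """FISTA for min_x 0.5*||Ax - y||^2 + lam*||x||_1 (Nesterov-accelerated ISTA)."""
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    z = x.copy()
    t = 1.0
    for _ in range(n_iter):
        grad = A.T @ (A @ z - y)
        x_new = z - grad / L               # gradient step at the extrapolated point
        x_new = np.sign(x_new) * np.maximum(np.abs(x_new) - lam / L, 0.0)  # shrinkage
        t_new = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t * t))
        z = x_new + ((t - 1.0) / t_new) * (x_new - x)   # momentum (acceleration) step
        x, t = x_new, t_new
    return x
```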

  12. Iterative methods for dose reduction and image enhancement in tomography

    DOEpatents

    Miao, Jianwei; Fahimian, Benjamin Pooya

    2012-09-18

    A system and method for creating a three dimensional cross sectional image of an object by the reconstruction of its projections that have been iteratively refined through modification in object space and Fourier space is disclosed. The invention provides systems and methods for use with any tomographic imaging system that reconstructs an object from its projections. In one embodiment, the invention presents a method to eliminate interpolations present in conventional tomography. The method has been experimentally shown to provide higher resolution and improved image quality parameters over existing approaches. A primary benefit of the method is radiation dose reduction, since the invention can produce an image of a desired quality with fewer projections than conventional methods require.

  13. Principal components and iterative regression analysis of geophysical series: Application to Sunspot number (1750-2004)

    NASA Astrophysics Data System (ADS)

    Nordemann, D. J. R.; Rigozo, N. R.; de Souza Echer, M. P.; Echer, E.

    2008-11-01

    We present here an implementation of a least squares iterative regression method applied to the sine functions embedded in the principal components extracted from geophysical time series. This method appears to be a useful improvement for the quantitative analysis of periodicities in non-stationary time series. The principal component determination followed by the least squares iterative regression method was implemented in an algorithm written in the Scilab (2006) language. The main result of the method is to obtain the set of sine functions embedded in the analyzed series in decreasing order of significance, from the most important ones, likely to represent the physical processes involved in the generation of the series, to the less important ones that represent noise components. Taking into account the need for a deeper knowledge of the Sun's past history and its implication for global climate change, the method was applied to the Sunspot Number series (1750-2004). With the threshold and parameter values used here, the application of the method leads to a total of 441 explicit sine functions, among which 65 were considered significant and were used for a reconstruction that gave a normalized mean squared error of 0.146.
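
    A much-simplified stand-in for this kind of iterative sine extraction, which seeds each fit from the periodogram and then removes the fitted component before the next pass, might be sketched as follows in Python with NumPy/SciPy; it omits the principal-component step and the specific thresholds used in the paper, and all names are illustrative.

```python
import numpy as np
from scipy.optimize import curve_fit

def sine(t, amp, period, phase):
    return amp * np.sin(2 * np.pi * t / period + phase)

def extract_sines(t, y, n_components=5):
    """Iteratively fit and subtract the dominant sine components of a uniformly sampled series."""
    residual = y - y.mean()
    components = []
    freqs = np.fft.rfftfreq(t.size, d=t[1] - t[0])
    for _ in range(n_components):
        spectrum = np.abs(np.fft.rfft(residual))
        k = spectrum[1:].argmax() + 1                 # dominant nonzero frequency bin
        p0 = [residual.std() * np.sqrt(2), 1.0 / freqs[k], 0.0]
        params, _ = curve_fit(sine, t, residual, p0=p0, maxfev=10000)
        components.append(params)
        residual = residual - sine(t, *params)        # remove the fitted component
    return components, residual
```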

  14. An Iterative Method for Restoring Noisy Blurred Images.

    DTIC Science & Technology

    1984-01-01

    [OCR-garbled abstract; the only clearly legible fragment states that a required convergence condition is never satisfied in practice, so reblurring is always necessary.]

  15. Aerodynamic Optimization of Rocket Control Surface Geometry Using Cartesian Methods and CAD Geometry

    NASA Technical Reports Server (NTRS)

    Nelson, Andrea; Aftosmis, Michael J.; Nemec, Marian; Pulliam, Thomas H.

    2004-01-01

    Aerodynamic design is an iterative process involving geometry manipulation and complex computational analysis subject to physical constraints and aerodynamic objectives. A design cycle consists of first establishing the performance of a baseline design, which is usually created with low-fidelity engineering tools, and then progressively optimizing the design to maximize its performance. Optimization techniques have evolved from relying exclusively on designer intuition and insight in traditional trial and error methods, to sophisticated local and global search methods. Recent attempts at automating the search through a large design space with formal optimization methods include both database driven and direct evaluation schemes. Databases are being used in conjunction with surrogate and neural network models as a basis on which to run optimization algorithms. Optimization algorithms are also being driven by the direct evaluation of objectives and constraints using high-fidelity simulations. Surrogate methods use data points obtained from simulations, and possibly gradients evaluated at the data points, to create mathematical approximations of a database. Neural network models work in a similar fashion, using a number of high-fidelity database calculations as training iterations to create a database model. Optimal designs are obtained by coupling an optimization algorithm to the database model. Evaluation of the current best design then either yields a new local optimum or increases the fidelity of the approximation model for the next iteration. Surrogate methods have also been developed that iterate on the selection of data points to decrease the uncertainty of the approximation model prior to searching for an optimal design. The database approximation models for each of these cases, however, become computationally expensive with increasing dimensionality. Thus the method of using optimization algorithms to search a database model becomes problematic as the number of design variables is increased.

  16. A novel Iterative algorithm to text segmentation for web born-digital images

    NASA Astrophysics Data System (ADS)

    Xu, Zhigang; Zhu, Yuesheng; Sun, Ziqiang; Liu, Zhen

    2015-07-01

    Since web born-digital images have low resolution and dense text atoms, text region over-merging and missed detection are still two open issues to be addressed. In this paper a novel iterative algorithm is proposed to locate and segment text regions. In each iteration, the candidate text regions are generated by detecting Maximally Stable Extremal Regions (MSERs) with diminishing thresholds and categorized into different groups based on a new similarity graph, and the text region groups are identified by applying several features and rules. With our proposed overlap checking method the final well-segmented text regions are selected from these groups over all iterations. Experiments have been carried out on the web born-digital image datasets used for the robust reading competitions in ICDAR 2011 and 2013, and the results demonstrate that our proposed scheme can significantly reduce both the number of over-merged regions and the loss rate of target atoms, and the overall performance outperforms the best of the methods reported in the two competitions in terms of recall rate and f-score, at the cost of slightly higher computational complexity.

  17. LiDAR-IMU Time Delay Calibration Based on Iterative Closest Point and Iterated Sigma Point Kalman Filter.

    PubMed

    Liu, Wanli

    2017-03-08

    The time delay calibration between Light Detection and Ranging (LiDAR) and Inertial Measurement Units (IMUs) is an essential prerequisite for its applications. However, the correspondences between LiDAR and IMU measurements are usually unknown, and thus cannot be computed directly for the time delay calibration. In order to solve the problem of LiDAR-IMU time delay calibration, this paper presents a fusion method based on iterative closest point (ICP) and iterated sigma point Kalman filter (ISPKF), which combines the advantages of ICP and ISPKF. The ICP algorithm can precisely determine the unknown transformation between LiDAR-IMU; and the ISPKF algorithm can optimally estimate the time delay calibration parameters. First of all, the coordinate transformation from the LiDAR frame to the IMU frame is realized. Second, the measurement model and time delay error model of LiDAR and IMU are established. Third, the methodology of the ICP and ISPKF procedure is presented for LiDAR-IMU time delay calibration. Experimental results are presented that validate the proposed method and demonstrate the time delay error can be accurately calibrated.

  18. Reliable recovery of the optical properties of multi-layer turbid media by iteratively using a layered diffusion model at multiple source-detector separations

    PubMed Central

    Liao, Yu-Kai; Tseng, Sheng-Hao

    2014-01-01

    Accurately determining the optical properties of multi-layer turbid media using a layered diffusion model is often a difficult task and could be an ill-posed problem. In this study, an iterative algorithm was proposed for solving such problems. This algorithm employed a layered diffusion model to calculate the optical properties of a layered sample at several source-detector separations (SDSs). The optical properties determined at various SDSs were mutually referenced to complete one round of iteration and the optical properties were gradually revised in further iterations until a set of stable optical properties was obtained. We evaluated the performance of the proposed method using frequency domain Monte Carlo simulations and found that the method could robustly recover the layered sample properties with various layer thickness and optical property settings. It is expected that this algorithm can work with photon transport models in frequency and time domain for various applications, such as determination of subcutaneous fat or muscle optical properties and monitoring the hemodynamics of muscle. PMID:24688828

  19. Implementation of the pyramid wavefront sensor as a direct phase detector for large amplitude aberrations

    NASA Astrophysics Data System (ADS)

    Kupke, Renate; Gavel, Don; Johnson, Jess; Reinig, Marc

    2008-07-01

    We investigate the non-modulating pyramid wave-front sensor's (P-WFS) implementation in the context of Lick Observatory's Villages visible light AO system on the Nickel 1-meter telescope. A complete adaptive optics correction, using a non-modulated P-WFS in slope sensing mode as a bootstrap to a regime in which the P-WFS can act as a direct phase sensor, is explored. An iterative approach to reconstructing the wave-front phase, given the pyramid wave-front sensor's non-linear signal, is developed. Using Monte Carlo simulations, the iterative reconstruction method's photon noise propagation behavior is compared to both the pyramid sensor used in slope-sensing mode and the traditional Shack-Hartmann sensor's theoretical performance limits. We determine that bootstrapping using the P-WFS as a slope sensor does not offer enough correction to bring the phase residuals into a regime in which the iterative algorithm can provide much improvement in phase measurement. It is found that both the iterative phase reconstructor and the slope reconstruction methods offer an advantage in noise propagation over Shack-Hartmann sensors.

  20. Quantitative evaluation of ASiR image quality: an adaptive statistical iterative reconstruction technique

    NASA Astrophysics Data System (ADS)

    Van de Casteele, Elke; Parizel, Paul; Sijbers, Jan

    2012-03-01

    Adaptive statistical iterative reconstruction (ASiR) is a new reconstruction algorithm used in the field of medical X-ray imaging. This new reconstruction method combines the idealized system representation, as we know it from the standard Filtered Back Projection (FBP) algorithm, and the strength of iterative reconstruction by including a noise model in the reconstruction scheme. It studies how noise propagates through the reconstruction steps, feeds this model back into the loop and iteratively reduces noise in the reconstructed image without affecting spatial resolution. In this paper the effect of ASiR on the contrast to noise ratio is studied using the low contrast module of the Catphan phantom. The experiments were done on a GE LightSpeed VCT system at different voltages and currents. The results show reduced noise and increased contrast for the ASiR reconstructions compared to the standard FBP method. For the same contrast to noise ratio the images from ASiR can be obtained using 60% less current, leading to a reduction in dose of the same amount.

  1. Metal-induced streak artifact reduction using iterative reconstruction algorithms in x-ray computed tomography image of the dentoalveolar region.

    PubMed

    Dong, Jian; Hayakawa, Yoshihiko; Kannenberg, Sven; Kober, Cornelia

    2013-02-01

    The objective of this study was to reduce metal-induced streak artifact on oral and maxillofacial x-ray computed tomography (CT) images by developing a fast statistical image reconstruction system using iterative reconstruction algorithms. Adjacent CT images often depict similar anatomical structures in thin slices. So, first, images were reconstructed using the same projection data of an artifact-free image. Second, images were processed by the successive iterative restoration method, where projection data were generated from the reconstructed image in sequence. Besides the maximum likelihood-expectation maximization algorithm, the ordered subset-expectation maximization algorithm (OS-EM) was examined. Also, small region of interest (ROI) setting and reverse processing were applied to improve performance. Both algorithms reduced artifacts while only slightly decreasing gray levels. The OS-EM and small ROI reduced the processing duration without apparent detriment. Sequential and reverse processing did not show apparent effects. Two alternatives in iterative reconstruction methods were effective for artifact reduction. The OS-EM algorithm and small ROI setting improved the performance. Copyright © 2012 Elsevier Inc. All rights reserved.

  2. Richardson-Lucy/maximum likelihood image restoration algorithm for fluorescence microscopy: further testing.

    PubMed

    Holmes, T J; Liu, Y H

    1989-11-15

    A maximum likelihood based iterative algorithm adapted from nuclear medicine imaging for noncoherent optical imaging was presented in a previous publication with some initial computer-simulation testing. This algorithm is identical in form to that previously derived in a different way by W. H. Richardson "Bayesian-Based Iterative Method of Image Restoration," J. Opt. Soc. Am. 62, 55-59 (1972) and L. B. Lucy "An Iterative Technique for the Rectification of Observed Distributions," Astron. J. 79, 745-765 (1974). Foreseen applications include superresolution and 3-D fluorescence microscopy. This paper presents further simulation testing of this algorithm and a preliminary experiment with a defocused camera. The simulations show quantified resolution improvement as a function of iteration number, and they show qualitatively the trend in limitations on restored resolution when noise is present in the data. Also shown are results of a simulation in restoring missing-cone information for 3-D imaging. Conclusions are in support of the feasibility of using these methods with real systems, while computational cost and timing estimates indicate that it should be realistic to implement these methods. It is suggested in the Appendix that future extensions to the maximum likelihood based derivation of this algorithm will address some of the limitations that are experienced with the nonextended form of the algorithm presented here.
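
    The Richardson-Lucy iteration itself is compact; a minimal 2D sketch in Python (NumPy/SciPy), assuming a known point-spread function and omitting regularization or acceleration, might look as follows.

```python
import numpy as np
from scipy.signal import fftconvolve

def richardson_lucy(observed, psf, n_iter=50, eps=1e-12):
    """Richardson-Lucy iteration for noncoherent (intensity) image restoration."""
    observed = np.asarray(observed, dtype=float)
    estimate = np.full_like(observed, observed.mean())   # flat starting estimate
    psf_flipped = psf[::-1, ::-1]                        # adjoint of convolution with the PSF
    for _ in range(n_iter):
        blurred = fftconvolve(estimate, psf, mode='same')
        ratio = observed / (blurred + eps)               # avoid division by zero
        estimate *= fftconvolve(ratio, psf_flipped, mode='same')
    return estimate
```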

  3. Iterative repair for scheduling and rescheduling

    NASA Technical Reports Server (NTRS)

    Zweben, Monte; Davis, Eugene; Deale, Michael

    1991-01-01

    An iterative repair search method called constraint-based simulated annealing is described. Simulated annealing is a hill-climbing search technique capable of escaping local minima. The utility of the constraint-based framework is shown by comparing search performance with and without the constraint framework on a suite of randomly generated problems. Results are also shown of applying the technique to the NASA Space Shuttle ground processing problem. These experiments show that the search method scales to complex, real-world problems and exhibits interesting anytime behavior.

  4. Radiative transfer in molecular lines

    NASA Astrophysics Data System (ADS)

    Asensio Ramos, A.; Trujillo Bueno, J.; Cernicharo, J.

    2001-07-01

    The highly convergent iterative methods developed by Trujillo Bueno and Fabiani Bendicho (1995) for radiative transfer (RT) applications are generalized to spherical symmetry with velocity fields. These RT methods are based on Jacobi, Gauss-Seidel (GS), and SOR iteration and they form the basis of a new NLTE multilevel transfer code for atomic and molecular lines. The benchmark tests carried out so far are presented and discussed. The main aim is to develop a number of powerful RT tools for the theoretical interpretation of molecular spectra.

  5. Absolutely and uniformly convergent iterative approach to inverse scattering with an infinite radius of convergence

    DOEpatents

    Kouri, Donald J [Houston, TX; Vijay, Amrendra [Houston, TX; Zhang, Haiyan [Houston, TX; Zhang, Jingfeng [Houston, TX; Hoffman, David K [Ames, IA

    2007-05-01

    A method and system for solving the inverse acoustic scattering problem using an iterative approach that takes into account half-off-shell transition matrix element (near-field) information. The Volterra inverse series correctly predicts the first two moments of the interaction, whereas the Fredholm inverse series is correct only for the first moment; moreover, the Volterra approach provides a method for exactly obtaining interactions that can be written as a sum of delta functions.

  6. Translation position determination in ptychographic coherent diffraction imaging.

    PubMed

    Zhang, Fucai; Peterson, Isaac; Vila-Comamala, Joan; Diaz, Ana; Berenguer, Felisa; Bean, Richard; Chen, Bo; Menzel, Andreas; Robinson, Ian K; Rodenburg, John M

    2013-06-03

    Accurate knowledge of translation positions is essential in ptychography to achieve good image quality and the diffraction-limited resolution. We propose a method to retrieve and correct position errors during the image reconstruction iterations. Sub-pixel position accuracy after refinement is shown to be achievable within several tens of iterations. Simulation and experimental results for both optical and X-ray wavelengths are given. The method both improves the quality of the retrieved object image and relaxes the position accuracy requirement when acquiring the diffraction patterns.

  7. Pediatric faculty and residents’ perspectives on In-Training Evaluation Reports (ITERs)

    PubMed Central

    Patel, Rikin; Drover, Anne; Chafe, Roger

    2015-01-01

    Background In-training evaluation reports (ITERs) are used by over 90% of postgraduate medical training programs in Canada for resident assessment. Our study examined the perspectives of faculty and residents in one pediatric program as a means to improve the ITER as an evaluation tool. Method Two separate focus groups were conducted, one with eight pediatric residents and one with nine clinical faculty within the pediatrics program of Memorial University’s Faculty of Medicine, to discuss their perceptions of, and suggestions for improving, the use of ITERs. Results Residents and faculty shared many similar suggestions for improving the ITER as an evaluation tool. Both the faculty and residents emphasized the importance of written feedback, contextualizing the evaluation, and timely follow-up. The biggest challenge appears to be the discrepancy between the quality of feedback sought by the residents and the faculty members’ ability to provide it in a time-effective manner. Other concerns related to the need for better engagement in setting rotation objectives and more direct observation by the faculty member completing the ITER. Conclusions The ITER is a useful tool in resident evaluations, but addressing a number of issues relating to its actual use could improve the quality of feedback which residents receive. PMID:27004076

  8. Derivative free Davidon-Fletcher-Powell (DFP) for solving symmetric systems of nonlinear equations

    NASA Astrophysics Data System (ADS)

    Mamat, M.; Dauda, M. K.; Mohamed, M. A. bin; Waziri, M. Y.; Mohamad, F. S.; Abdullah, H.

    2018-03-01

    Problems arising from the work of engineers, economists, modellers, industry, computing, and scientists are mostly nonlinear in nature, and the numerical solution of such systems is widely applied in those areas. Over the years there has been significant theoretical study to develop methods for solving such systems; despite these efforts, the methods developed still have deficiencies. As a contribution to solving systems of the form F(x) = 0, x ∈ R^n, a derivative-free method based on the classical Davidon-Fletcher-Powell (DFP) update is presented. This is achieved by simply approximating the inverse Hessian matrix Q_{k+1}^{-1} by θ_k I. The modified method satisfies the descent condition and possesses local superlinear convergence properties. Interestingly, without computing any derivative, the proposed method never failed to converge throughout the numerical experiments. Performance is reported in terms of the number of iterations and CPU time, with different initial starting points, on 40 benchmark test problems. With the aid of the squared-norm merit function and a derivative-free line search technique, the approach yields a method for solving symmetric systems of nonlinear equations that significantly reduces the CPU time and the number of iterations compared to its counterparts. A comparison between the proposed method and the classical DFP update shows that the proposed method is the top performer, outperforming the existing method in almost all cases. In terms of the number of iterations, the proposed method solved 38 of the 40 problems successfully (95%), while the classical DFP solved 2 (5%). In terms of CPU time, the proposed method solved 29 of the 40 problems successfully (72.5%), whereas the classical DFP solved 11 (27.5%). The method is thus valid in its derivation, reliable in the number of iterations, and efficient in CPU time, achieving the stated objective.
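
    The structure of such a derivative-free iteration, with the inverse Hessian modelled by a scalar multiple of the identity θ_k I and a backtracking line search on the squared-norm merit function, can be sketched as follows in Python with NumPy. The particular θ_k update (a Barzilai-Borwein-type quotient) and the line-search rule below are illustrative stand-ins, not the exact scheme of the paper.

```python
import numpy as np

def df_dfp_like_solve(F, x0, tol=1e-8, max_iter=500):
    """Derivative-free iteration for F(x) = 0 with a scalar inverse-Hessian model."""
    x = np.asarray(x0, dtype=float)
    Fx = F(x)
    theta = 1.0
    for _ in range(max_iter):
        if np.linalg.norm(Fx) < tol:
            break
        d = -theta * Fx                          # search direction, no derivatives used
        merit = 0.5 * Fx @ Fx                    # squared-norm merit function
        alpha = 1.0
        while alpha > 1e-10:
            F_trial = F(x + alpha * d)
            if 0.5 * F_trial @ F_trial <= (1.0 - 1e-4 * alpha) * merit:
                break
            alpha *= 0.5                         # backtrack on the merit function
        x_new = x + alpha * d
        F_new = F(x_new)
        s, y = x_new - x, F_new - Fx
        theta = abs(s @ y) / (y @ y + 1e-16)     # scalar model of the inverse Hessian
        x, Fx = x_new, F_new
    return x
```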

  9. Performance issues for iterative solvers in device simulation

    NASA Technical Reports Server (NTRS)

    Fan, Qing; Forsyth, P. A.; Mcmacken, J. R. F.; Tang, Wei-Pai

    1994-01-01

    Due to memory limitations, iterative methods have become the method of choice for large scale semiconductor device simulation. However, it is well known that these methods still suffer from reliability problems. The linear systems which appear in numerical simulation of semiconductor devices are notoriously ill-conditioned. In order to produce robust algorithms for practical problems, careful attention must be given to many implementation issues. This paper concentrates on strategies for developing robust preconditioners. In addition, effective data structures and convergence check issues are also discussed. These algorithms are compared with a standard direct sparse matrix solver on a variety of problems.

  10. On the Convergence of an Implicitly Restarted Arnoldi Method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lehoucq, Richard B.

    We show that Sorensen's [35] implicitly restarted Arnoldi method (including its block extension) is simultaneous iteration with an implicit projection step to accelerate convergence to the invariant subspace of interest. By using the geometric convergence theory for simultaneous iteration due to Watkins and Elsner [43], we prove that an implicitly restarted Arnoldi method can achieve a super-linear rate of convergence to the dominant invariant subspace of a matrix. Moreover, we show how an IRAM computes a nested sequence of approximations for the partial Schur decomposition associated with the dominant invariant subspace of a matrix.
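
    For context, the basic (non-restarted) Arnoldi factorization that the implicitly restarted method builds on can be sketched as follows in Python with NumPy; the restart and implicit-shift machinery analyzed in the record is not shown, and the breakdown handling is the usual textbook convention.

```python
import numpy as np

def arnoldi(A, v0, m):
    """Build an m-step Arnoldi decomposition A V_m = V_{m+1} H_m (H upper Hessenberg)."""
    v0 = np.asarray(v0, dtype=float)
    n = v0.size
    V = np.zeros((n, m + 1))
    H = np.zeros((m + 1, m))
    V[:, 0] = v0 / np.linalg.norm(v0)
    for j in range(m):
        w = A @ V[:, j]
        for i in range(j + 1):                   # modified Gram-Schmidt orthogonalization
            H[i, j] = V[:, i] @ w
            w -= H[i, j] * V[:, i]
        H[j + 1, j] = np.linalg.norm(w)
        if H[j + 1, j] < 1e-14:                  # breakdown: invariant subspace found
            return V[:, :j + 1], H[:j + 1, :j + 1]
        V[:, j + 1] = w / H[j + 1, j]
    return V, H
```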

  11. Augmenting the one-shot framework by additional constraints

    DOE PAGES

    Bosse, Torsten

    2016-05-12

    The (multistep) one-shot method for design optimization problems has been successfully implemented for various applications. To this end, a slowly convergent primal fixed-point iteration of the state equation is augmented by an adjoint iteration and a corresponding preconditioned design update. In this paper we present a modification of the method that allows for additional equality constraints besides the usual state equation. Finally, a retardation analysis and the local convergence of the method in terms of necessary and sufficient conditions are given, which depend on key characteristics of the underlying problem and the quality of the utilized preconditioner.

  12. Augmenting the one-shot framework by additional constraints

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bosse, Torsten

    The (multistep) one-shot method for design optimization problems has been successfully implemented for various applications. To this end, a slowly convergent primal fixed-point iteration of the state equation is augmented by an adjoint iteration and a corresponding preconditioned design update. In this paper we present a modification of the method that allows for additional equality constraints besides the usual state equation. Finally, a retardation analysis and the local convergence of the method in terms of necessary and sufficient conditions are given, which depend on key characteristics of the underlying problem and the quality of the utilized preconditioner.

  13. Fast and secure encryption-decryption method based on chaotic dynamics

    DOEpatents

    Protopopescu, Vladimir A.; Santoro, Robert T.; Tolliver, Johnny S.

    1995-01-01

    A method and system for the secure encryption of information. The method comprises the steps of dividing a message of length L into its character components; generating m chaotic iterates from m independent chaotic maps; producing an "initial" value based upon the m chaotic iterates; transforming the "initial" value to create a pseudo-random integer; repeating the steps of generating, producing and transforming until a pseudo-random integer sequence of length L is created; and encrypting the message as ciphertext based upon the pseudo random integer sequence. A system for accomplishing the invention is also provided.
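
    A minimal sketch of this kind of chaos-based stream cipher, using logistic maps as stand-ins for the unspecified chaotic maps and an arbitrary way of combining the iterates into pseudo-random bytes, might look like the following in Python; it illustrates the structure only and is not the patented scheme.

```python
import numpy as np

def chaotic_keystream(length, seeds, r=3.99):
    """Generate a pseudo-random byte sequence from m independent logistic maps."""
    states = np.array(seeds, dtype=float)            # m independent initial conditions
    keystream = []
    for _ in range(length):
        states = r * states * (1.0 - states)         # advance every map one step
        combined = states.sum() % 1.0                # "initial" value from the m iterates
        keystream.append(int(combined * 256) % 256)  # transform to a pseudo-random byte
    return keystream

def encrypt(message: bytes, seeds):
    """XOR each message byte with one keystream byte (decryption is identical)."""
    ks = chaotic_keystream(len(message), seeds)
    return bytes(b ^ k for b, k in zip(message, ks))

# Example (hypothetical seeds): ciphertext = encrypt(b"hello", [0.123, 0.456, 0.789])
```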

  14. Fast multilevel radiative transfer

    NASA Astrophysics Data System (ADS)

    Paletou, Frédéric; Léger, Ludovick

    2007-01-01

    The vast majority of recent advances in the field of numerical radiative transfer relies on approximate operator methods better known in astrophysics as Accelerated Lambda-Iteration (ALI). A superior class of iterative schemes, in terms of rates of convergence, such as the Gauss-Seidel and Successive Overrelaxation methods, was therefore quite naturally introduced in the field of radiative transfer by Trujillo Bueno & Fabiani Bendicho (1995); it was thoroughly described for the non-LTE two-level atom case. We describe hereafter in detail how such methods can be generalized when dealing with non-LTE unpolarised radiation transfer with multilevel atomic models in monodimensional geometry.
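
    Although the record applies these schemes to the radiative transfer operator, the underlying Gauss-Seidel/SOR sweep is easiest to show on a generic linear system; a minimal Python/NumPy sketch follows (omega = 1 recovers Gauss-Seidel).

```python
import numpy as np

def sor(A, b, omega=1.5, tol=1e-10, max_sweeps=10_000):
    """Successive over-relaxation for Ax = b; omega = 1 is plain Gauss-Seidel."""
    n = len(b)
    x = np.zeros(n)
    for _ in range(max_sweeps):
        x_old = x.copy()
        for i in range(n):
            # Use already-updated entries x[:i] and old entries x[i+1:].
            sigma = A[i, :i] @ x[:i] + A[i, i + 1:] @ x[i + 1:]
            x[i] = (1.0 - omega) * x[i] + omega * (b[i] - sigma) / A[i, i]
        if np.linalg.norm(x - x_old, np.inf) < tol:
            break
    return x
```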

  15. Non-oscillatory and non-diffusive solution of convection problems by the iteratively reweighted least-squares finite element method

    NASA Technical Reports Server (NTRS)

    Jiang, Bo-Nan

    1993-01-01

    A comparative description is presented for the least-squares FEM (LSFEM) for 2D steady-state pure convection problems. In addition to exhibiting better control of the streamline derivative than the streamline upwinding Petrov-Galerkin method, numerical convergence rates are obtained which show the LSFEM to be virtually optimal. The LSFEM is used as a framework for an iteratively reweighted LSFEM yielding nonoscillatory and nondiffusive solutions for problems with contact discontinuities; this method is shown to convect contact discontinuities without error when using triangular and bilinear elements.

  16. Simultaneous imaging/reflectivity measurements to assess diagnostic mirror cleaning.

    PubMed

    Skinner, C H; Gentile, C A; Doerner, R

    2012-10-01

    Practical methods to clean ITER's diagnostic mirrors and restore reflectivity will be critical to ITER's plasma operations. We describe a technique to assess the efficacy of mirror cleaning techniques and detect any damage to the mirror surface. The method combines microscopic imaging and reflectivity measurements in the red, green, and blue spectral regions and at selected wavelengths. The method has been applied to laser cleaning of single crystal molybdenum mirrors coated with either carbon or beryllium films 150-420 nm thick. It is suitable for hazardous materials such as beryllium as the mirrors remain sealed in a vacuum chamber.

  17. Novel aspects of plasma control in ITER

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Humphreys, D.; Jackson, G.; Walker, M.

    2015-02-15

    ITER plasma control design solutions and performance requirements are strongly driven by its nuclear mission, aggressive commissioning constraints, and limited number of operational discharges. In addition, high plasma energy content, heat fluxes, neutron fluxes, and very long pulse operation place novel demands on control performance in many areas ranging from plasma boundary and divertor regulation to plasma kinetics and stability control. Both commissioning and experimental operations schedules provide limited time for tuning of control algorithms relative to operating devices. Although many aspects of the control solutions required by ITER have been well-demonstrated in present devices and even designed satisfactorily for ITER application, many elements unique to ITER including various crucial integration issues are presently under development. We describe selected novel aspects of plasma control in ITER, identifying unique parts of the control problem and highlighting some key areas of research remaining. Novel control areas described include control physics understanding (e.g., current profile regulation, tearing mode (TM) suppression), control mathematics (e.g., algorithmic and simulation approaches to high confidence robust performance), and integration solutions (e.g., methods for management of highly subscribed control resources). We identify unique aspects of the ITER TM suppression scheme, which will pulse gyrotrons to drive current within a magnetic island, and turn the drive off following suppression in order to minimize use of auxiliary power and maximize fusion gain. The potential role of active current profile control and approaches to design in ITER are discussed. Issues and approaches to fault handling algorithms are described, along with novel aspects of actuator sharing in ITER.

  18. Novel aspects of plasma control in ITER

    DOE PAGES

    Humphreys, David; Ambrosino, G.; de Vries, Peter; ...

    2015-02-12

    ITER plasma control design solutions and performance requirements are strongly driven by its nuclear mission, aggressive commissioning constraints, and limited number of operational discharges. In addition, high plasma energy content, heat fluxes, neutron fluxes, and very long pulse operation place novel demands on control performance in many areas ranging from plasma boundary and divertor regulation to plasma kinetics and stability control. Both commissioning and experimental operations schedules provide limited time for tuning of control algorithms relative to operating devices. Although many aspects of the control solutions required by ITER have been well-demonstrated in present devices and even designed satisfactorily for ITER application, many elements unique to ITER including various crucial integration issues are presently under development. We describe selected novel aspects of plasma control in ITER, identifying unique parts of the control problem and highlighting some key areas of research remaining. Novel control areas described include control physics understanding (e.g. current profile regulation, tearing mode (TM) suppression), control mathematics (e.g. algorithmic and simulation approaches to high confidence robust performance), and integration solutions (e.g. methods for management of highly-subscribed control resources). We identify unique aspects of the ITER TM suppression scheme, which will pulse gyrotrons to drive current within a magnetic island, and turn the drive off following suppression in order to minimize use of auxiliary power and maximize fusion gain. The potential role of active current profile control and approaches to design in ITER are discussed. Finally, issues and approaches to fault handling algorithms are described, along with novel aspects of actuator sharing in ITER.

  19. Local motion-compensated method for high-quality 3D coronary artery reconstruction

    PubMed Central

    Liu, Bo; Bai, Xiangzhi; Zhou, Fugen

    2016-01-01

    The 3D reconstruction of coronary artery from X-ray angiograms rotationally acquired on C-arm has great clinical value. While cardiac-gated reconstruction has shown promising results, it suffers from the problem of residual motion. This work proposed a new local motion-compensated reconstruction method to handle this issue. An initial image was firstly reconstructed using a regularized iterative reconstruction method. Then a 3D/2D registration method was proposed to estimate the residual vessel motion. Finally, the residual motion was compensated in the final reconstruction using the extended iterative reconstruction method. Through quantitative evaluation, it was found that high-quality 3D reconstruction could be obtained and the result was comparable to state-of-the-art method. PMID:28018741

  20. Computationally efficient finite-difference modal method for the solution of Maxwell's equations.

    PubMed

    Semenikhin, Igor; Zanuccoli, Mauro

    2013-12-01

    In this work, a new implementation of the finite-difference (FD) modal method (FDMM) based on an iterative approach to calculate the eigenvalues and corresponding eigenfunctions of the Helmholtz equation is presented. Two relevant enhancements that significantly increase the speed and accuracy of the method are introduced. First of all, the solution of the complete eigenvalue problem is avoided in favor of finding only the meaningful part of eigenmodes by using iterative methods. Second, a multigrid algorithm and Richardson extrapolation are implemented. Simultaneous use of these techniques leads to an enhancement in terms of accuracy, which allows a simple method such as the FDMM with a typical three-point difference scheme to be significantly competitive with an analytical modal method.

  1. [The research on separating and extracting overlapping spectral feature lines in LIBS using damped least squares method].

    PubMed

    Wang, Yin; Zhao, Nan-jing; Liu, Wen-qing; Yu, Yang; Fang, Li; Meng, De-shuo; Hu, Li; Zhang, Da-hai; Ma, Min-jun; Xiao, Xue; Wang, Yu; Liu, Jian-guo

    2015-02-01

    In recent years, laser-induced breakdown spectroscopy has developed rapidly. As a new material composition detection technology, it can detect multiple elements simultaneously, quickly and simply, without complex sample preparation, and enables field, in-situ composition detection of the sample to be tested, making it very promising in many fields. Separating, fitting and extracting spectral feature lines is essential in laser-induced breakdown spectroscopy, as it is the cornerstone of spectral feature recognition and of subsequent element concentration inversion. To achieve effective separation, fitting and extraction of spectral feature lines, the initial parameters for spectral line fitting were analyzed and determined before iteration. The spectral feature line of chromium (Cr I: 427.480 nm) in fly ash gathered from a coal-fired power station, which overlapped with another line (Fe I: 427.176 nm), was separated and extracted using the damped least squares method. Based on Gauss-Newton iteration, the damped least squares method adds a damping factor and adjusts the step length dynamically according to the feedback from each iteration, preventing the iteration from diverging and ensuring fast convergence. The damped least squares method yields better separation, fitting and extraction of spectral feature lines and more accurate line intensity values. The spectral feature lines of chromium in samples containing different concentrations of chromium were separated and extracted, and the corresponding line intensity values were obtained using the damped least squares method and the ordinary least squares method separately. Calibration curves relating spectral line intensity values to chromium concentrations in the different samples were plotted, and their linear correlations were compared. The experimental results showed that the linear correlation between line intensity values and chromium concentrations obtained with the damped least squares method was better than that obtained with the ordinary least squares method. The damped least squares method is therefore stable, reliable, and suitable for separating, fitting and extracting spectral feature lines in laser-induced breakdown spectroscopy.
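
    The damping logic described here is essentially the Levenberg-Marquardt update; a minimal sketch fitting two overlapping Gaussian lines (centers seeded, for instance, near the 427.480 nm and 427.176 nm lines mentioned above) in Python with NumPy follows. The model, the finite-difference Jacobian, and the damping schedule are illustrative choices, not the exact implementation of the paper.

```python
import numpy as np

def two_gaussians(x, p):
    a1, c1, w1, a2, c2, w2 = p
    return (a1 * np.exp(-0.5 * ((x - c1) / w1) ** 2)
            + a2 * np.exp(-0.5 * ((x - c2) / w2) ** 2))

def damped_least_squares(x, y, p0, n_iter=100, lam=1e-3):
    """Levenberg-Marquardt (damped least squares) fit of two overlapping lines."""
    p = np.asarray(p0, dtype=float)
    for _ in range(n_iter):
        r = y - two_gaussians(x, p)
        # Finite-difference Jacobian of the model w.r.t. the parameters.
        J = np.empty((x.size, p.size))
        for j in range(p.size):
            dp = np.zeros_like(p)
            dp[j] = 1e-6 * max(abs(p[j]), 1.0)
            J[:, j] = (two_gaussians(x, p + dp) - two_gaussians(x, p)) / dp[j]
        # Damped normal equations: (J^T J + lam * diag(J^T J)) delta = J^T r
        JTJ = J.T @ J
        delta = np.linalg.solve(JTJ + lam * np.diag(np.diag(JTJ)), J.T @ r)
        if np.sum((y - two_gaussians(x, p + delta)) ** 2) < np.sum(r ** 2):
            p, lam = p + delta, lam * 0.5    # accept the step, relax the damping
        else:
            lam *= 10.0                      # reject the step, increase the damping
    return p
```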

  2. Experiment of low resistance joints for the ITER correction coil.

    PubMed

    Liu, Huajun; Wu, Yu; Wu, Weiyue; Liu, Bo; Shi, Yi; Guo, Shuai

    2013-01-01

    A test method was designed and performed to measure the joint resistance of the ITER correction coil (CC) at liquid helium (LHe) temperature. A 10 kA superconducting transformer was manufactured to provide the joint current. The transformer consisted of two concentric layer-wound superconducting solenoids. NbTi superconducting wire was wound in the primary coil and the ITER CC conductor was wound in the secondary coil. The primary and the secondary coils were both immersed in liquid helium in a cryostat with a 300 mm useful bore diameter. Two ITER CC joints were assembled in the secondary loop and tested. The current of the secondary loop was ramped to 9 kA in several steps. The two joint resistances were measured to be 1.2 nΩ and 1.65 nΩ, respectively.

  3. Iterative algorithm for joint zero diagonalization with application in blind source separation.

    PubMed

    Zhang, Wei-Tao; Lou, Shun-Tian

    2011-07-01

    A new iterative algorithm for the nonunitary joint zero diagonalization of a set of matrices is proposed for blind source separation applications. On one hand, since the zero diagonalizer of the proposed algorithm is constructed iteratively by successive multiplications of an invertible matrix, the singular solutions that occur in the existing nonunitary iterative algorithms are naturally avoided. On the other hand, compared to the algebraic method for joint zero diagonalization, the proposed algorithm requires fewer matrices to be zero diagonalized to yield even better performance. The extension of the algorithm to the complex and nonsquare mixing cases is also addressed. Numerical simulations on both synthetic data and blind source separation using time-frequency distributions illustrate the performance of the algorithm and provide a comparison to the leading joint zero diagonalization schemes.

  4. Tomography by iterative convolution - Empirical study and application to interferometry

    NASA Technical Reports Server (NTRS)

    Vest, C. M.; Prikryl, I.

    1984-01-01

    An algorithm for computer tomography has been developed that is applicable to reconstruction from data having incomplete projections because an opaque object blocks some of the probing radiation as it passes through the object field. The algorithm is based on iteration between the object domain and the projection (Radon transform) domain. Reconstructions are computed during each iteration by the well-known convolution method. Although it is demonstrated that this algorithm does not converge, an empirically justified criterion for terminating the iteration when the most accurate estimate has been computed is presented. The algorithm has been studied by using it to reconstruct several different object fields with several different opaque regions. It also has been used to reconstruct aerodynamic density fields from interferometric data recorded in wind tunnel tests.

  5. Assessment of Preconditioner for a USM3D Hierarchical Adaptive Nonlinear Method (HANIM) (Invited)

    NASA Technical Reports Server (NTRS)

    Pandya, Mohagna J.; Diskin, Boris; Thomas, James L.; Frink, Neal T.

    2016-01-01

    Enhancements to the previously reported mixed-element USM3D Hierarchical Adaptive Nonlinear Iteration Method (HANIM) framework have been made to further improve robustness, efficiency, and accuracy of computational fluid dynamic simulations. The key enhancements include a multi-color line-implicit preconditioner, a discretely consistent symmetry boundary condition, and a line-mapping method for the turbulence source term discretization. The USM3D iterative convergence for the turbulent flows is assessed on four configurations. The configurations include a two-dimensional (2D) bump-in-channel, the 2D NACA 0012 airfoil, a three-dimensional (3D) bump-in-channel, and a 3D hemisphere cylinder. The Reynolds Averaged Navier Stokes (RANS) solutions have been obtained using a Spalart-Allmaras turbulence model and families of uniformly refined nested grids. Two types of HANIM solutions using line- and point-implicit preconditioners have been computed. Additional solutions using the point-implicit preconditioner alone (PA) method that broadly represents the baseline solver technology have also been computed. The line-implicit HANIM shows superior iterative convergence in most cases with progressively increasing benefits on finer grids.

  6. Nonrigid iterative closest points for registration of 3D biomedical surfaces

    NASA Astrophysics Data System (ADS)

    Liang, Luming; Wei, Mingqiang; Szymczak, Andrzej; Petrella, Anthony; Xie, Haoran; Qin, Jing; Wang, Jun; Wang, Fu Lee

    2018-01-01

    Advanced 3D optical and laser scanners bring new challenges to computer graphics. We present a novel nonrigid surface registration algorithm based on the Iterative Closest Point (ICP) method with multiple correspondences. Our method, called the Nonrigid Iterative Closest Points (NICPs), can be applied to surfaces of arbitrary topology. It does not impose any restrictions on the deformation, e.g. rigidity or articulation. Finally, it does not require parametrization of the input meshes. Our method is based on an objective function that combines distance and regularization terms. Unlike the standard ICP, the distance term is determined from multiple two-way correspondences rather than single one-way correspondences between surfaces. A Laplacian-based regularization term is proposed to take full advantage of the multiple two-way correspondences. This term regularizes the surface movement by enforcing vertices to move coherently with their 1-ring neighbors. The proposed method achieves good performance when there are no global pose differences or significant bending in the models, for example, for families of similar shapes such as human femur and vertebra models.
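
    For reference, a minimal sketch of the classical rigid ICP loop (single one-way closest-point correspondences plus a least-squares rigid alignment) is given below; it is the building block the NICPs method generalizes, not the authors' nonrigid algorithm, and the synthetic point clouds are illustrative.

```python
import numpy as np
from scipy.spatial import cKDTree

def best_rigid_transform(src, dst):
    """Least-squares rotation R and translation t mapping src onto dst (Kabsch/SVD)."""
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    return R, cd - R @ cs

def icp(source, target, n_iter=50, tol=1e-8):
    """Classical rigid ICP: alternate nearest-neighbor matching and rigid alignment."""
    tree = cKDTree(target)
    src = source.copy()
    prev_err = np.inf
    for _ in range(n_iter):
        dist, idx = tree.query(src)        # one-way closest-point correspondences
        R, t = best_rigid_transform(src, target[idx])
        src = src @ R.T + t
        err = dist.mean()
        if abs(prev_err - err) < tol:
            break
        prev_err = err
    return src, err

# Illustrative usage with a random cloud and a rotated/translated copy of it.
rng = np.random.default_rng(0)
target = rng.normal(size=(500, 3))
theta = 0.2
Rz = np.array([[np.cos(theta), -np.sin(theta), 0.0],
               [np.sin(theta),  np.cos(theta), 0.0],
               [0.0, 0.0, 1.0]])
source = target @ Rz.T + np.array([0.1, -0.05, 0.02])
aligned, residual = icp(source, target)
print("mean residual:", residual)
```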

  7. Development of a pressure based multigrid solution method for complex fluid flows

    NASA Technical Reports Server (NTRS)

    Shyy, Wei

    1991-01-01

    In order to reduce the computational difficulty associated with a single grid (SG) solution procedure, the multigrid (MG) technique was identified as a useful means for improving the convergence rate of iterative methods. A full MG full approximation storage (FMG/FAS) algorithm is used to solve the incompressible recirculating flow problems in complex geometries. The algorithm is implemented in conjunction with a pressure correction staggered grid type of technique using the curvilinear coordinates. In order to show the performance of the method, two flow configurations, one a square cavity and the other a channel, are used as test problems. Comparisons are made between the iterations, equivalent work units, and CPU time. Besides showing that the MG method can yield substantial speed-up with wide variations in Reynolds number, grid distributions, and geometry, issues such as the convergence characteristics of different grid levels, the choice of convection schemes, and the effectiveness of the basic iteration smoothers are studied. An adaptive grid scheme is also combined with the MG procedure to explore the effects of grid resolution on the MG convergence rate as well as the numerical accuracy.

  8. Moving mesh finite element simulation for phase-field modeling of brittle fracture and convergence of Newton's iteration

    NASA Astrophysics Data System (ADS)

    Zhang, Fei; Huang, Weizhang; Li, Xianping; Zhang, Shicheng

    2018-03-01

    A moving mesh finite element method is studied for the numerical solution of a phase-field model for brittle fracture. The moving mesh partial differential equation approach is employed to dynamically track crack propagation. Meanwhile, the decomposition of the strain tensor into tensile and compressive components is essential for the success of the phase-field modeling of brittle fracture but results in a non-smooth elastic energy and stronger nonlinearity in the governing equation. This makes the governing equation much more difficult to solve and, in particular, Newton's iteration often fails to converge. Three regularization methods are proposed to smooth out the decomposition of the strain tensor. Numerical examples of fracture propagation under quasi-static load demonstrate that all of the methods can effectively improve the convergence of Newton's iteration for relatively small values of the regularization parameter but without compromising the accuracy of the numerical solution. They also show that the moving mesh finite element method is able to adaptively concentrate the mesh elements around propagating cracks and handle multiple and complex crack systems.

  9. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dong, X; Petrongolo, M; Wang, T

    Purpose: A general problem of dual-energy CT (DECT) is that the decomposition is sensitive to noise in the two sets of dual-energy projection data, resulting in severely degraded quality of the decomposed images. We have previously proposed an iterative denoising method for DECT. Using a linear decomposition function, the method does not gain the full benefits of DECT on beam-hardening correction. In this work, we expand the framework of our iterative method to include non-linear decomposition models for noise suppression in DECT. Methods: We first obtain decomposed projections, which are free of beam-hardening artifacts, using a lookup table pre-measured on a calibration phantom. First-pass material images with high noise are reconstructed from the decomposed projections using standard filtered-backprojection reconstruction. Noise on the decomposed images is then suppressed by an iterative method, which is formulated in the form of least-square estimation with smoothness regularization. Based on the design principles of a best linear unbiased estimator, we include the inverse of the estimated variance-covariance matrix of the decomposed images as the penalty weight in the least-square term. Analytical formulae are derived to compute the variance-covariance matrix from the measured decomposition lookup table. Results: We have evaluated the proposed method via phantom studies. Using non-linear decomposition, our method effectively suppresses the streaking artifacts of beam-hardening and obtains more uniform images than our previous approach based on a linear model. The proposed method reduces the average noise standard deviation of two basis materials by one order of magnitude without sacrificing the spatial resolution. Conclusion: We propose a general framework of iterative denoising for material decomposition of DECT. Preliminary phantom studies have shown the proposed method improves the image uniformity and reduces noise level without resolution loss. In the future, we will perform more phantom studies to further validate the performance of the proposed method. This work is supported by a Varian MRA grant.

  10. A fast iterative recursive least squares algorithm for Wiener model identification of highly nonlinear systems.

    PubMed

    Kazemi, Mahdi; Arefi, Mohammad Mehdi

    2017-03-01

    In this paper, an online identification algorithm is presented for nonlinear systems in the presence of output colored noise. The proposed method is based on the extended recursive least squares (ERLS) algorithm, where the identified system is in polynomial Wiener form. To this end, an unknown intermediate signal is estimated by using an inner iterative algorithm. The iterative recursive algorithm adaptively modifies the vector of parameters of the presented Wiener model when the system parameters vary. In addition, to increase the robustness of the proposed method against variations, a robust RLS algorithm is applied to the model. Simulation results are provided to show the effectiveness of the proposed approach. Results confirm that the proposed method has a fast convergence rate with robust characteristics, which increases the efficiency of the proposed model and identification approach. For instance, the FIT criterion reaches 92% in a CSTR process where about 400 data points are used. Copyright © 2016 ISA. Published by Elsevier Ltd. All rights reserved.
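
    As background to the recursive least squares machinery used above, a minimal sketch of the standard RLS update with a forgetting factor is shown below on a toy ARX-style model; it does not reproduce the authors' extended (ERLS) scheme or the inner iteration for the intermediate Wiener signal, and all variable names are illustrative.

```python
import numpy as np

def rls_update(theta, P, phi, y, lam=0.99):
    """One recursive least squares step with forgetting factor lam.
    theta: parameter estimate, P: covariance, phi: regressor, y: new output sample."""
    phi = phi.reshape(-1, 1)
    k = (P @ phi) / (lam + (phi.T @ P @ phi).item())   # gain vector
    err = y - (phi.ravel() @ theta)                    # prediction error (innovation)
    theta = theta + k.ravel() * err
    P = (P - k @ phi.T @ P) / lam
    return theta, P

# Illustrative usage: identify a 2-parameter model y_t = a*y_{t-1} + b*u_{t-1} + noise.
rng = np.random.default_rng(1)
a_true, b_true = 0.8, 0.5
theta = np.zeros(2)
P = 1e3 * np.eye(2)
y_prev, u_prev = 0.0, 0.0
for t in range(500):
    u = rng.normal()
    y = a_true * y_prev + b_true * u_prev + 0.01 * rng.normal()
    theta, P = rls_update(theta, P, np.array([y_prev, u_prev]), y)
    y_prev, u_prev = y, u
print("estimated [a, b]:", theta)
```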

  11. Optimal control of a coupled partial and ordinary differential equations system for the assimilation of polarimetry Stokes vector measurements in tokamak free-boundary equilibrium reconstruction with application to ITER

    NASA Astrophysics Data System (ADS)

    Faugeras, Blaise; Blum, Jacques; Heumann, Holger; Boulbe, Cédric

    2017-08-01

    The modelization of polarimetry Faraday rotation measurements commonly used in tokamak plasma equilibrium reconstruction codes is an approximation to the Stokes model. This approximation is not valid for the foreseen ITER scenarios where high current and electron density plasma regimes are expected. In this work a method enabling the consistent resolution of the inverse equilibrium reconstruction problem in the framework of non-linear free-boundary equilibrium coupled to the Stokes model equation for polarimetry is provided. Using optimal control theory we derive the optimality system for this inverse problem. A sequential quadratic programming (SQP) method is proposed for its numerical resolution. Numerical experiments with noisy synthetic measurements in the ITER tokamak configuration for two test cases, the second of which is an H-mode plasma, show that the method is efficient and that the accuracy of the identification of the unknown profile functions is improved compared to the use of classical Faraday measurements.

  12. The MHOST finite element program: 3-D inelastic analysis methods for hot section components. Volume 1: Theoretical manual

    NASA Technical Reports Server (NTRS)

    Nakazawa, Shohei

    1991-01-01

    Formulations and algorithms implemented in the MHOST finite element program are discussed. The code uses a novel concept of the mixed iterative solution technique for efficient 3-D computations of turbine engine hot section components. The general framework of the variational formulation and solution algorithms, derived from the mixed three-field Hu-Washizu principle, is discussed. This formulation enables the use of nodal interpolation for coordinates, displacements, strains, and stresses. The algorithmic description of the mixed iterative method includes variants for quasi-static, transient dynamic and buckling analyses. The global-local analysis procedure, referred to as subelement refinement, is developed in the framework of the mixed iterative solution and is presented in detail. The numerically integrated isoparametric elements implemented in the framework are discussed. Methods to filter certain parts of the strain and to project the element discontinuous quantities to the nodes are developed for a family of linear elements. Integration algorithms are described for the linear and nonlinear equations included in the MHOST program.

  13. Iterative tensor voting for perceptual grouping of ill-defined curvilinear structures.

    PubMed

    Loss, Leandro A; Bebis, George; Parvin, Bahram

    2011-08-01

    In this paper, a novel approach is proposed for perceptual grouping and localization of ill-defined curvilinear structures. Our approach builds upon the tensor voting and the iterative voting frameworks. Its efficacy lies in iterative refinement of curvilinear structures by gradually shifting from an exploratory to an exploitative mode. Such a mode shifting is achieved by reducing the aperture of the tensor voting fields, which is shown to improve curve grouping and inference by enhancing the concentration of the votes over promising, salient structures. The proposed technique is validated on delineating adherens junctions that are imaged through fluorescence microscopy. However, the method is also applicable to screening other organisms based on characteristics of their cell wall structures. Adherens junctions maintain tissue structural integrity and cell-cell interactions. Visually, they exhibit fibrous patterns that may be diffused, heterogeneous in fluorescence intensity, or punctate and frequently perceptual. Besides the application to real data, the proposed method is compared to prior methods on synthetic and annotated real data, showing high precision rates.

  14. New perspectives in face correlation: discrimination enhancement in face recognition based on iterative algorithm

    NASA Astrophysics Data System (ADS)

    Wang, Q.; Alfalou, A.; Brosseau, C.

    2016-04-01

    Here, we report a brief review of the recent developments of correlation algorithms. Several implementation schemes and specific applications proposed in recent years are also given to illustrate powerful applications of these methods. Following a discussion and comparison of the implementation of these schemes, we believe that all-numerical implementation is the most practical choice for application of the correlation method, because the advantages of optical processing cannot compensate for the technical and/or financial cost needed for an optical implementation platform. We also present a simple iterative algorithm to optimize the training images of composite correlation filters. By making use of three or four iterations, the peak-to-correlation energy (PCE) value of the correlation plane can be significantly enhanced. A simulation test using the Pointing Head Pose Image Database (PHPID) illustrates the effectiveness of this statement. Our method can be applied as an optimization tool in many composite filters based on a linear composition of training images.

  15. Fast sweeping method for the factored eikonal equation

    NASA Astrophysics Data System (ADS)

    Fomel, Sergey; Luo, Songting; Zhao, Hongkai

    2009-09-01

    We develop a fast sweeping method for the factored eikonal equation. The solution of a general eikonal equation is decomposed into the product of two factors: the first factor is the solution of a simple eikonal equation (such as distance) or a previously computed solution of an approximate eikonal equation; the second factor is the necessary modification/correction. Appropriate discretization and a fast sweeping strategy are designed for the equation of the correction part. The key idea is to enforce the causality of the original eikonal equation during the Gauss-Seidel iterations. Using extensive numerical examples we demonstrate that (1) the convergence behavior of the fast sweeping method for the factored eikonal equation is the same as for the original eikonal equation, i.e., the number of Gauss-Seidel iterations is independent of the mesh size, and (2) the numerical solution from the factored eikonal equation is more accurate than the numerical solution directly computed from the original eikonal equation, especially for point sources.
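
    To make the sweeping idea concrete, a minimal sketch of the standard (unfactored) fast sweeping update for |∇T| = s on a uniform 2D grid is shown below; it illustrates the four alternating Gauss-Seidel sweep orderings and the causality-preserving update, not the factored scheme of the paper, and the grid and source are illustrative.

```python
import numpy as np

def fast_sweeping_2d(slowness, h, src, n_sweeps=8):
    """Fast sweeping for |grad T| = slowness on a 2D grid with spacing h.
    Gauss-Seidel updates are performed in four alternating sweep orderings."""
    ny, nx = slowness.shape
    T = np.full((ny, nx), 1e10)
    T[src] = 0.0
    orders = [(range(ny), range(nx)),
              (range(ny), range(nx - 1, -1, -1)),
              (range(ny - 1, -1, -1), range(nx)),
              (range(ny - 1, -1, -1), range(nx - 1, -1, -1))]
    for sweep in range(n_sweeps):
        ii, jj = orders[sweep % 4]
        for i in ii:
            for j in jj:
                a = min(T[i - 1, j] if i > 0 else 1e10, T[i + 1, j] if i < ny - 1 else 1e10)
                b = min(T[i, j - 1] if j > 0 else 1e10, T[i, j + 1] if j < nx - 1 else 1e10)
                f = slowness[i, j] * h
                if abs(a - b) >= f:
                    t_new = min(a, b) + f
                else:
                    t_new = 0.5 * (a + b + np.sqrt(2.0 * f * f - (a - b) ** 2))
                T[i, j] = min(T[i, j], t_new)   # causality-preserving update
    return T

# Illustrative usage: constant slowness, point source at the grid center.
T = fast_sweeping_2d(np.ones((101, 101)), h=0.01, src=(50, 50))
print(T[50, 90], "~ distance", 0.4)
```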

  16. Generalized conjugate-gradient methods for the Navier-Stokes equations

    NASA Technical Reports Server (NTRS)

    Ajmani, Kumud; Ng, Wing-Fai; Liou, Meng-Sing

    1991-01-01

    A generalized conjugate-gradient method is used to solve the two-dimensional, compressible Navier-Stokes equations of fluid flow. The equations are discretized with an implicit, upwind finite-volume formulation. Preconditioning techniques are incorporated into the new solver to accelerate convergence of the overall iterative method. The superiority of the new solver is demonstrated by comparisons with a conventional line Gauss-Seidel relaxation solver. Computational test results for transonic flow (trailing edge flow in a transonic turbine cascade) and hypersonic flow (M = 6.0 shock-on-shock phenomena on a cylindrical leading edge) are presented. When applied to the transonic cascade case, the new solver is 4.4 times faster in terms of number of iterations and 3.1 times faster in terms of CPU time than the relaxation solver. For the hypersonic shock case, the new solver is 3.0 times faster in terms of number of iterations and 2.2 times faster in terms of CPU time than the relaxation solver.
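
    Although the solver above targets a nonsymmetric discretized system, the underlying idea can be illustrated with the classical preconditioned conjugate-gradient iteration for a symmetric positive-definite system; the sketch below uses a simple Jacobi (diagonal) preconditioner on a 1D Laplacian and is an illustrative stand-in, not the paper's generalized solver.

```python
import numpy as np

def preconditioned_cg(A, b, M_inv, x0=None, tol=1e-10, max_iter=500):
    """Preconditioned conjugate gradients for SPD A; M_inv applies the preconditioner."""
    x = np.zeros_like(b) if x0 is None else x0.copy()
    r = b - A @ x
    z = M_inv(r)
    p = z.copy()
    rz = r @ z
    for k in range(max_iter):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol * np.linalg.norm(b):
            return x, k + 1
        z = M_inv(r)
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x, max_iter

# Illustrative usage: 1D Laplacian with a Jacobi (diagonal) preconditioner.
n = 200
A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
b = np.ones(n)
diag = np.diag(A)
x, iters = preconditioned_cg(A, b, lambda r: r / diag)
print("converged in", iters, "iterations, residual", np.linalg.norm(b - A @ x))
```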

  17. An efficient algorithm using matrix methods to solve wind tunnel force-balance equations

    NASA Technical Reports Server (NTRS)

    Smith, D. L.

    1972-01-01

    An iterative procedure applying matrix methods has been developed to provide an efficient algorithm for automatic computer reduction of wind-tunnel force-balance data. The balance equations are expressed in a matrix form that is convenient for storing balance sensitivities and interaction coefficient values for online or offline batch data reduction. The convergence of the iterative values to a unique solution of this system of equations is investigated, and it is shown that for balances which satisfy the criteria discussed, this type of solution does occur. Methods for making sensitivity adjustments and for treating initial load effects in wind-tunnel applications are also discussed, and the logic for determining the convergence accuracy limits for the iterative solution is given. This more efficient data reduction program is compared with the technique presently in use at the NASA Langley Research Center, and computational times on the order of one-third or less are demonstrated by use of this new program.
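
    Force-balance equations typically relate instrument readings to loads through primary sensitivities plus smaller interaction terms, which invites a fixed-point iteration. The sketch below shows that idea for a hypothetical three-component balance with second-order interactions; the matrix names, coefficient values and the interaction form are illustrative assumptions, not the report's actual formulation.

```python
import numpy as np

def solve_balance(readings, S, C2, n_iter=50, tol=1e-12):
    """Iteratively solve R = S @ F + C2 @ quad(F) for the loads F, where quad(F)
    stacks the products F_i * F_j. S: primary sensitivities, C2: interaction terms."""
    S_inv = np.linalg.inv(S)
    F = S_inv @ readings                       # first guess: ignore interactions
    for _ in range(n_iter):
        quad = np.outer(F, F).ravel()          # quadratic interaction terms F_i * F_j
        F_new = S_inv @ (readings - C2 @ quad)
        if np.linalg.norm(F_new - F) < tol:
            return F_new
        F = F_new
    return F

# Illustrative 3-component example with small interaction coefficients.
rng = np.random.default_rng(2)
S = np.diag([2.0, 1.5, 1.0]) + 0.05 * rng.normal(size=(3, 3))
C2 = 0.01 * rng.normal(size=(3, 9))
F_true = np.array([1.0, -0.5, 0.25])
R = S @ F_true + C2 @ np.outer(F_true, F_true).ravel()
print(solve_balance(R, S, C2), "vs true", F_true)
```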

  18. A pseudo-discrete algebraic reconstruction technique (PDART) prior image-based suppression of high density artifacts in computed tomography

    NASA Astrophysics Data System (ADS)

    Pua, Rizza; Park, Miran; Wi, Sunhee; Cho, Seungryong

    2016-12-01

    We propose a hybrid metal artifact reduction (MAR) approach for computed tomography (CT) that is computationally more efficient than a fully iterative reconstruction method, but at the same time achieves superior image quality to the interpolation-based in-painting techniques. Our proposed MAR method, an image-based artifact subtraction approach, utilizes an intermediate prior image reconstructed via PDART to recover the background information underlying the high density objects. For comparison, prior images generated by total-variation minimization (TVM) algorithm, as a realization of fully iterative approach, were also utilized as intermediate images. From the simulation and real experimental results, it has been shown that PDART drastically accelerates the reconstruction to an acceptable quality of prior images. Incorporating PDART-reconstructed prior images in the proposed MAR scheme achieved higher quality images than those by a conventional in-painting method. Furthermore, the results were comparable to the fully iterative MAR that uses high-quality TVM prior images.

  19. Adaptive Dynamic Programming for Discrete-Time Zero-Sum Games.

    PubMed

    Wei, Qinglai; Liu, Derong; Lin, Qiao; Song, Ruizhuo

    2018-04-01

    In this paper, a novel adaptive dynamic programming (ADP) algorithm, called "iterative zero-sum ADP algorithm," is developed to solve infinite-horizon discrete-time two-player zero-sum games of nonlinear systems. The present iterative zero-sum ADP algorithm permits arbitrary positive semidefinite functions to initialize the upper and lower iterations. A novel convergence analysis is developed to guarantee the upper and lower iterative value functions to converge to the upper and lower optimums, respectively. When the saddle-point equilibrium exists, it is emphasized that both the upper and lower iterative value functions are proved to converge to the optimal solution of the zero-sum game, where the existence criteria of the saddle-point equilibrium are not required. If the saddle-point equilibrium does not exist, the upper and lower optimal performance index functions are obtained, respectively, where the upper and lower performance index functions are proved to be not equivalent. Finally, simulation results and comparisons are shown to illustrate the performance of the present method.

  20. A block iterative finite element algorithm for numerical solution of the steady-state, compressible Navier-Stokes equations

    NASA Technical Reports Server (NTRS)

    Cooke, C. H.

    1976-01-01

    An iterative method for numerically solving the time independent Navier-Stokes equations for viscous compressible flows is presented. The method is based upon partial application of the Gauss-Seidel principle in block form to the systems of nonlinear algebraic equations which arise in construction of finite element (Galerkin) models approximating solutions of fluid dynamic problems. The C0 cubic element on triangles is employed for function approximation. Computational results for a free shear flow at Re = 1,000 indicate significant achievement of economy in iterative convergence rate over finite element and finite difference models which employ the customary time dependent equations and asymptotic time marching procedure to steady solution. Numerical results are in excellent agreement with those obtained for the same test problem employing time marching finite element and finite difference solution techniques.

  1. Variable aperture-based ptychographical iterative engine method

    NASA Astrophysics Data System (ADS)

    Sun, Aihui; Kong, Yan; Meng, Xin; He, Xiaoliang; Du, Ruijun; Jiang, Zhilong; Liu, Fei; Xue, Liang; Wang, Shouyu; Liu, Cheng

    2018-02-01

    A variable aperture-based ptychographical iterative engine (vaPIE) is demonstrated both numerically and experimentally to reconstruct the sample phase and amplitude rapidly. By adjusting the size of a tiny aperture under the illumination of a parallel light beam to change the illumination on the sample step by step and recording the corresponding diffraction patterns sequentially, both the sample phase and amplitude can be faithfully reconstructed with a modified ptychographical iterative engine (PIE) algorithm. Since many fewer diffraction patterns are required than in common PIE, and the shape, size, and position of the aperture need not be known exactly, the proposed vaPIE method remarkably reduces the data acquisition time and makes PIE less dependent on the mechanical accuracy of the translation stage; therefore, the proposed technique can potentially be applied in various fields of scientific research.

  2. Region of interest processing for iterative reconstruction in x-ray computed tomography

    NASA Astrophysics Data System (ADS)

    Kopp, Felix K.; Nasirudin, Radin A.; Mei, Kai; Fehringer, Andreas; Pfeiffer, Franz; Rummeny, Ernst J.; Noël, Peter B.

    2015-03-01

    Recent advancements in graphics card technology have raised the performance of parallel computing and contributed to the introduction of iterative reconstruction methods for x-ray computed tomography in clinical CT scanners. Iterative maximum likelihood (ML) based reconstruction methods are known to reduce image noise and to improve the diagnostic quality of low-dose CT. However, iterative reconstruction of a region of interest (ROI), especially ML-based, is challenging, yet for some clinical procedures, like cardiac CT, only a ROI is needed for diagnostics. A high-resolution reconstruction of the full field of view (FOV) consumes unnecessary computational effort and results in a reconstruction that is slower than clinically acceptable. In this work, we present an extension and evaluation of an existing ROI processing algorithm. In particular, improvements for the equalization between regions inside and outside of a ROI are proposed. The evaluation was done on data collected from a clinical CT scanner. The performance of the different algorithms is qualitatively and quantitatively assessed. Our solution to the ROI problem provides an increase in signal-to-noise ratio and leads to visually less noise in the final reconstruction. The reconstruction speed of our technique was observed to be comparable with other previously proposed techniques. The development of ROI processing algorithms in combination with iterative reconstruction will provide higher diagnostic quality in the near future.

  3. Effect of time-of-flight and point spread function modeling on detectability of myocardial defects in PET

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schaefferkoetter, Joshua, E-mail: dnrjds@nus.edu.sg; Ouyang, Jinsong; Rakvongthai, Yothin

    2014-06-15

    Purpose: A study was designed to investigate the impact of time-of-flight (TOF) and point spread function (PSF) modeling on the detectability of myocardial defects. Methods: Clinical FDG-PET data were used to generate populations of defect-present and defect-absent images. Defects were incorporated at three contrast levels, and images were reconstructed by ordered subset expectation maximization (OSEM) iterative methods including ordinary Poisson, alone and with PSF, TOF, and PSF+TOF. Channelized Hotelling observer signal-to-noise ratio (SNR) was the surrogate for human observer performance. Results: For three iterations, 12 subsets, and no postreconstruction smoothing, TOF improved overall defect detection SNR by 8.6% as compared to its non-TOF counterpart for all the defect contrasts. Due to the slow convergence of PSF reconstruction, PSF yielded 4.4% less SNR than non-PSF. For reconstruction parameters (iteration number and postreconstruction smoothing kernel size) optimizing observer SNR, PSF showed larger improvement for faint defects. The combination of TOF and PSF improved mean detection SNR as compared to non-TOF and non-PSF counterparts by 3.0% and 3.2%, respectively. Conclusions: For typical reconstruction protocol used in clinical practice, i.e., less than five iterations, TOF improved defect detectability. In contrast, PSF generally yielded less detectability. For large number of iterations, TOF+PSF yields the best observer performance.

  4. Ensemble Kalman Filter versus Ensemble Smoother for Data Assimilation in Groundwater Modeling

    NASA Astrophysics Data System (ADS)

    Li, L.; Cao, Z.; Zhou, H.

    2017-12-01

    Groundwater modeling calls for an effective and robust integrating method to fill the gap between the model and the data. The Ensemble Kalman Filter (EnKF), a real-time data assimilation method, has been increasingly applied in multiple disciplines such as petroleum engineering and hydrogeology. In this approach, the groundwater models are sequentially updated using measured data such as hydraulic head and concentration data. As an alternative to the EnKF, the Ensemble Smoother (ES) was proposed, which updates the models using all the data together and therefore has a much lower computational cost. To further improve the performance of the ES, an iterative ES was proposed that continuously updates the models by assimilating the measurements together. In this work, we compare the performance of the EnKF, the ES and the iterative ES using a synthetic example in groundwater modeling. The hydraulic head data modeled on the basis of the reference conductivity field are utilized to inversely estimate conductivities at un-sampled locations. Results are evaluated in terms of the characterization of conductivity and of groundwater flow and solute transport predictions. It is concluded that: (1) the iterative ES achieves results comparable to the EnKF at a lower computational cost; (2) the iterative ES performs better than the ES owing to its continuous updating. These findings suggest that the iterative ES deserves much more attention for data assimilation in groundwater modeling.
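
    A minimal sketch of the stochastic EnKF analysis step (with perturbed observations) is given below; it is the basic update that both the filter and the smoother variants build on, and the state dimension, observation operator and noise levels are illustrative assumptions.

```python
import numpy as np

def enkf_analysis(X, y, H, R, rng):
    """Stochastic EnKF analysis step.
    X: (n_state, n_ens) forecast ensemble, y: observations, H: obs operator, R: obs covariance."""
    n_state, n_ens = X.shape
    A = X - X.mean(axis=1, keepdims=True)       # ensemble anomalies
    HX = H @ X
    HA = HX - HX.mean(axis=1, keepdims=True)
    # Sample covariances P H^T and H P H^T estimated from the ensemble.
    PHt = A @ HA.T / (n_ens - 1)
    HPHt = HA @ HA.T / (n_ens - 1)
    K = PHt @ np.linalg.inv(HPHt + R)           # Kalman gain
    # Perturbed observations, one realization per ensemble member.
    Y = y[:, None] + rng.multivariate_normal(np.zeros(len(y)), R, size=n_ens).T
    return X + K @ (Y - HX)

# Illustrative usage: 10-dimensional state, 3 observed components, 50 members.
rng = np.random.default_rng(3)
X = rng.normal(size=(10, 50))
H = np.zeros((3, 10))
H[0, 0] = H[1, 4] = H[2, 9] = 1.0
R = 0.1 * np.eye(3)
y = np.array([1.0, -0.5, 0.2])
Xa = enkf_analysis(X, y, H, R, rng)
print("analysis ensemble mean (observed components):", (H @ Xa).mean(axis=1))
```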

  5. Accelerating NLTE radiative transfer by means of the Forth-and-Back Implicit Lambda Iteration: A two-level atom line formation in 2D Cartesian coordinates

    NASA Astrophysics Data System (ADS)

    Milić, Ivan; Atanacković, Olga

    2014-10-01

    State-of-the-art methods in multidimensional NLTE radiative transfer are based on the use of a local approximate lambda operator within either Jacobi or Gauss-Seidel iterative schemes. Here we propose another approach to the solution of 2D NLTE RT problems, Forth-and-Back Implicit Lambda Iteration (FBILI), developed earlier for 1D geometry. In order to present the method and examine its convergence properties we use the well-known instance of the two-level atom line formation with complete frequency redistribution. In the formal solution of the RT equation we employ short characteristics with a two-point algorithm. Using an implicit representation of the source function in the computation of the specific intensities, we compute and store the coefficients of the linear relations J=a+bS between the mean intensity J and the corresponding source function S. The use of iteration factors in the ‘local’ coefficients of these implicit relations in two ‘inward’ sweeps of the 2D grid, along with the update of the source function in the other two ‘outward’ sweeps, leads to a solution four times faster than the Jacobi one. Moreover, the update made in all four consecutive sweeps of the grid leads to an acceleration by a factor of 6-7 compared to the Jacobi iterative scheme.

  6. Parallel computation of multigroup reactivity coefficient using iterative method

    NASA Astrophysics Data System (ADS)

    Susmikanti, Mike; Dewayatna, Winter

    2013-09-01

    One of the research activities supporting the commercial radioisotope production program is safety research on the irradiation of FPM (Fission Product Molybdenum) targets. FPM targets are stainless-steel tubes on which high-enriched uranium is deposited; the tubes are irradiated to obtain fission products, which are widely used in the form of kits in nuclear medicine. Irradiating FPM tubes in the reactor core can disturb core performance, one such disturbance being a change in flux or reactivity. A method is therefore needed for calculating the safety margin for the configuration changes that occur over the life of the reactor, and making the code faster becomes an absolute necessity. The neutron safety margin of the research reactor can be reassessed without a full recalculation of the reactor reactivity, which is the advantage of using a perturbation method. The criticality and flux in a multigroup diffusion model were calculated at various irradiation positions for several uranium contents. This model is computationally complex. Several parallel algorithms based on iterative methods have been developed for solving large sparse matrix systems. The Black-Red Gauss-Seidel iteration and a parallel power iteration method can be used to solve the multigroup diffusion equation system and to calculate the criticality and the reactivity coefficient. In this research, a code for reactivity calculation, as part of the safety analysis, was developed with parallel processing. The calculation can be done more quickly and efficiently by utilizing parallel processing on a multicore computer. The code was applied to the safety limit calculation of irradiated FPM targets with increasing uranium content.
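
    Since the power iteration is the workhorse named above, a minimal serial sketch of it is shown below for the dominant eigenvalue of a generic matrix; the random nonnegative matrix stands in for the multigroup diffusion operator and is purely illustrative.

```python
import numpy as np

def power_iteration(A, tol=1e-10, max_iter=1000):
    """Power iteration for the dominant eigenvalue/eigenvector of A."""
    x = np.ones(A.shape[0])
    lam = 0.0
    for k in range(max_iter):
        y = A @ x
        lam_new = np.linalg.norm(y)
        x = y / lam_new
        if abs(lam_new - lam) < tol * abs(lam_new):
            return lam_new, x, k + 1
        lam = lam_new
    return lam, x, max_iter

# Illustrative usage on a random nonnegative matrix (dominant eigenvalue is real and positive).
rng = np.random.default_rng(4)
A = rng.random((6, 6))
lam, x, iters = power_iteration(A)
print("dominant eigenvalue:", lam, "in", iters, "iterations")
print("check against numpy:", np.max(np.abs(np.linalg.eigvals(A))))
```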

  7. Groupwise Image Registration Guided by a Dynamic Digraph of Images.

    PubMed

    Tang, Zhenyu; Fan, Yong

    2016-04-01

    For groupwise image registration, graph theoretic methods have been adopted for discovering the manifold of images to be registered so that accurate registration of images to a group center image can be achieved by aligning similar images that are linked by the shortest graph paths. However, the image similarity measures adopted to build a graph of images in the extant methods are essentially pairwise measures, not effective for capturing the groupwise similarity among multiple images. To overcome this problem, we present a groupwise image similarity measure that is built on sparse coding for characterizing image similarity among all input images and build a directed graph (digraph) of images so that similar images are connected by the shortest paths of the digraph. Following the shortest paths determined according to the digraph, images are registered to a group center image in an iterative manner by decomposing a large anatomical deformation field required to register an image to the group center image into a series of small ones between similar images. During the iterative image registration, the digraph of images evolves dynamically at each iteration step to pursue an accurate estimation of the image manifold. Moreover, an adaptive dictionary strategy is adopted in the groupwise image similarity measure to ensure fast convergence of the iterative registration procedure. The proposed method has been validated based on both simulated and real brain images, and experiment results have demonstrated that our method was more effective for learning the manifold of input images and achieved higher registration accuracy than state-of-the-art groupwise image registration methods.

  8. A Predictive Model for Toxicity Effects Assessment of Biotransformed Hepatic Drugs Using Iterative Sampling Method.

    PubMed

    Tharwat, Alaa; Moemen, Yasmine S; Hassanien, Aboul Ella

    2016-12-09

    Measuring toxicity is one of the main steps in drug development. Hence, there is a high demand for computational models to predict the toxicity effects of potential drugs. In this study, we used a dataset which consists of four toxicity effects: mutagenic, tumorigenic, irritant and reproductive effects. The proposed model consists of three phases. In the first phase, rough set-based methods are used to select the most discriminative features for reducing the classification time and improving the classification performance. Due to the imbalanced class distribution, in the second phase, different sampling methods such as Random Under-Sampling, Random Over-Sampling and the Synthetic Minority Oversampling Technique are used to solve the problem of imbalanced datasets. An ITerative Sampling (ITS) method is proposed to avoid the limitations of those methods. The ITS method has two steps. The first step (sampling step) iteratively modifies the prior distribution of the minority and majority classes. In the second step, a data cleaning method is used to remove the overlapping that is produced by the first step. In the third phase, a Bagging classifier is used to classify an unknown drug into toxic or non-toxic. The experimental results proved that the proposed model performed well in classifying the unknown samples according to all toxic effects in the imbalanced datasets.

  10. A New Method for Analyzing Near-Field Faraday Probe Data in Hall Thrusters

    NASA Technical Reports Server (NTRS)

    Huang, Wensheng; Shastry, Rohit; Herman, Daniel A.; Soulas, George C.; Kamhawi, Hani

    2013-01-01

    This paper presents a new method for analyzing near-field Faraday probe data obtained from Hall thrusters. Traditional methods spawned from far-field Faraday probe analysis rely on assumptions that are not applicable to near-field Faraday probe data. In particular, arbitrary choices for the point of origin and limits of integration have made interpretation of the results difficult. The new method, called iterative pathfinding, uses the evolution of the near-field plume with distance to provide feedback for determining the location of the point of origin. Although still susceptible to the choice of integration limits, this method presents a systematic approach to determining the origin point for calculating the divergence angle. The iterative pathfinding method is applied to near-field Faraday probe data taken in a previous study from the NASA-300M and NASA-457Mv2 Hall thrusters. Since these two thrusters use centrally mounted cathodes, the current density associated with the cathode plume is removed before applying iterative pathfinding. A procedure is presented for removing the cathode plume. The results of the analysis are compared to far-field probe analysis results. This paper ends with checks on the validity of the new method and discussions on the implications of the results.

  11. Cardiac-gated parametric images from 82 Rb PET from dynamic frames and direct 4D reconstruction.

    PubMed

    Germino, Mary; Carson, Richard E

    2018-02-01

    Cardiac perfusion PET data can be reconstructed as a dynamic sequence and kinetic modeling performed to quantify myocardial blood flow, or reconstructed as static gated images to quantify function. Parametric images from dynamic PET are conventionally not gated, to allow use of all events with lower noise. An alternative method for dynamic PET is to incorporate the kinetic model into the reconstruction algorithm itself, bypassing the generation of a time series of emission images and directly producing parametric images. So-called "direct reconstruction" can produce parametric images with lower noise than the conventional method because the noise distribution is more easily modeled in projection space than in image space. In this work, we develop direct reconstruction of cardiac-gated parametric images for 82 Rb PET with an extension of the Parametric Motion compensation OSEM List mode Algorithm for Resolution-recovery reconstruction for the one tissue model (PMOLAR-1T). PMOLAR-1T was extended to accommodate model terms to account for spillover from the left and right ventricles into the myocardium. The algorithm was evaluated on a 4D simulated 82 Rb dataset, including a perfusion defect, as well as a human 82 Rb list mode acquisition. The simulated list mode was subsampled into replicates, each with counts comparable to one gate of a gated acquisition. Parametric images were produced by the indirect (separate reconstructions and modeling) and direct methods for each of eight low-count and eight normal-count replicates of the simulated data, and each of eight cardiac gates for the human data. For the direct method, two initialization schemes were tested: uniform initialization, and initialization with the filtered iteration 1 result of the indirect method. For the human dataset, event-by-event respiratory motion compensation was included. The indirect and direct methods were compared for the simulated dataset in terms of bias and coefficient of variation as a function of iteration. Convergence of direct reconstruction was slow with uniform initialization; lower bias was achieved in fewer iterations by initializing with the filtered indirect iteration 1 images. For most parameters and regions evaluated, the direct method achieved the same or lower absolute bias at matched iteration as the indirect method, with 23%-65% lower noise. Additionally, the direct method gave better contrast between the perfusion defect and surrounding normal tissue than the indirect method. Gated parametric images from the human dataset had comparable relative performance of indirect and direct, in terms of mean parameter values per iteration. Changes in myocardial wall thickness and blood pool size across gates were readily visible in the gated parametric images, with higher contrast between myocardium and left ventricle blood pool in parametric images than gated SUV images. Direct reconstruction can produce parametric images with less noise than the indirect method, opening the potential utility of gated parametric imaging for perfusion PET. © 2017 American Association of Physicists in Medicine.

  12. The role of simulation in the design of a neural network chip

    NASA Technical Reports Server (NTRS)

    Desai, Utpal; Roppel, Thaddeus A.; Padgett, Mary L.

    1993-01-01

    An iterative, simulation-based design procedure for a neural network chip is introduced. For this design procedure, the goal is to produce a chip layout for a neural network in which the weights are determined by transistor gate width-to-length ratios. In a given iteration, the current layout is simulated using the circuit simulator SPICE, and layout adjustments are made based on conventional gradient-descent methods. After the iteration converges, the chip is fabricated. Monte Carlo analysis is used to predict the effect of statistical fabrication process variations on the overall performance of the neural network chip.

  13. Incorporating Prototyping and Iteration into Intervention Development: A Case Study of a Dining Hall-Based Intervention

    ERIC Educational Resources Information Center

    McClain, Arianna D.; Hekler, Eric B.; Gardner, Christopher D.

    2013-01-01

    Background: Previous research from the fields of computer science and engineering highlight the importance of an iterative design process (IDP) to create more creative and effective solutions. Objective: This study describes IDP as a new method for developing health behavior interventions and evaluates the effectiveness of a dining hall--based…

  14. Item Purification Does Not Always Improve DIF Detection: A Counterexample with Angoff's Delta Plot

    ERIC Educational Resources Information Center

    Magis, David; Facon, Bruno

    2013-01-01

    Item purification is an iterative process that is often advocated as improving the identification of items affected by differential item functioning (DIF). With test-score-based DIF detection methods, item purification iteratively removes the items currently flagged as DIF from the test scores to get purified sets of items, unaffected by DIF. The…

  15. Iterative Ellipsoidal Trimming.

    DTIC Science & Technology

    1980-02-11

    Iterative ellipsoidal trimming has been investigated before by other statisticians, most notably by Gnanadesikan and his coworkers.

  16. WE-EF-207-07: Dual Energy CT with One Full Scan and a Second Sparse-View Scan Using Structure Preserving Iterative Reconstruction (SPIR)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, T; Zhu, L

    Purpose: Conventional dual energy CT (DECT) reconstructs CT and basis material images from two full-size projection datasets with different energy spectra. To relax the data requirement, we propose an iterative DECT reconstruction algorithm using one full scan and a second sparse-view scan by utilizing redundant structural information of the same object acquired at two different energies. Methods: We first reconstruct a full-scan CT image using the filtered-backprojection (FBP) algorithm. The material similarities of each pixel with other pixels are calculated by an exponential function of pixel value differences. We assume that the material similarities of pixels remain in the second CT scan, although pixel values may vary. An iterative method is designed to reconstruct the second CT image from reduced projections. Under the data fidelity constraint, the algorithm minimizes the L2 norm of the difference between each pixel value and its estimation, which is the average of other pixel values weighted by their similarities. The proposed algorithm, referred to as structure preserving iterative reconstruction (SPIR), is evaluated on physical phantoms. Results: On the Catphan600 phantom, the SPIR-based DECT method with a second 10-view scan reduces the noise standard deviation of a full-scan FBP CT reconstruction by a factor of 4 with well-maintained spatial resolution, while iterative reconstruction using total-variation regularization (TVR) degrades the spatial resolution at the same noise level. The proposed method achieves less than 1% measurement difference on the electron density map compared with conventional two-full-scan DECT. On an anthropomorphic pediatric phantom, our method successfully reconstructs the complicated vertebra structures and decomposes bone and soft tissue. Conclusion: We develop an effective method to reduce the number of views and therefore the data acquisition in DECT. We show that SPIR-based DECT using one full scan and a second 10-view scan can provide high-quality DECT images and electron density maps as accurate as those of conventional two-full-scan DECT.

  17. CGHnormaliter: an iterative strategy to enhance normalization of array CGH data with imbalanced aberrations

    PubMed Central

    van Houte, Bart PP; Binsl, Thomas W; Hettling, Hannes; Pirovano, Walter; Heringa, Jaap

    2009-01-01

    Background Array comparative genomic hybridization (aCGH) is a popular technique for detection of genomic copy number imbalances. These play a critical role in the onset of various types of cancer. In the analysis of aCGH data, normalization is deemed a critical pre-processing step. In general, aCGH normalization approaches are similar to those used for gene expression data, albeit both data-types differ inherently. A particular problem with aCGH data is that imbalanced copy numbers lead to improper normalization using conventional methods. Results In this study we present a novel method, called CGHnormaliter, which addresses this issue by means of an iterative normalization procedure. First, provisory balanced copy numbers are identified and subsequently used for normalization. These two steps are then iterated to refine the normalization. We tested our method on three well-studied tumor-related aCGH datasets with experimentally confirmed copy numbers. Results were compared to a conventional normalization approach and two more recent state-of-the-art aCGH normalization strategies. Our findings show that, compared to these three methods, CGHnormaliter yields a higher specificity and precision in terms of identifying the 'true' copy numbers. Conclusion We demonstrate that the normalization of aCGH data can be significantly enhanced using an iterative procedure that effectively eliminates the effect of imbalanced copy numbers. This also leads to a more reliable assessment of aberrations. An R-package containing the implementation of CGHnormaliter is available at . PMID:19709427

  18. An improved 2D MoF method by using high order derivatives

    NASA Astrophysics Data System (ADS)

    Chen, Xiang; Zhang, Xiong

    2017-11-01

    The MoF (Moment of Fluid) method is one of the most accurate approaches among various interface reconstruction algorithms. Like other second-order methods, the MoF method needs to solve an implicit optimization problem to obtain the optimal approximate interface, so an iteration process is inevitable under most circumstances. In order to solve the optimization efficiently, the properties of the objective function are worth studying. In 2D problems, the first-order derivative has been derived and applied in previous research. In this paper, the high-order derivatives of the objective function are derived on the convex polygon. We show that the nth (n ≥ 2) order derivatives are discontinuous, and that the number of discontinuous points is twice the number of polygon edges. A rotation algorithm is proposed to successively calculate these discontinuous points, so that the target interval in which the optimal solution is located can be determined. Since the high-order derivatives of the objective function are continuous in the target interval, iteration schemes based on high-order derivatives can be used to improve the convergence rate. Moreover, when iterating in the target interval, the value of the objective function and its derivatives can be updated directly without explicitly solving the volume conservation equation. The direct update further improves efficiency, especially as the number of polygon edges increases. Halley's method, which is based on the first three derivatives of the objective function, is applied as the iteration scheme in this paper, and the numerical results indicate that the CPU time is about half that of the previous method on quadrilateral cells and about one sixth on decagon cells.
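
    For illustration, a minimal sketch of Halley's iteration for a scalar root-finding problem is given below; when the root of the objective's first derivative is sought, the update uses the first three derivatives of the objective, as in the paper. The test function is an illustrative stand-in, not the MoF objective.

```python
import numpy as np

def halley(f, df, d2f, x0, tol=1e-12, max_iter=50):
    """Halley's method for f(x) = 0, using f and its first two derivatives."""
    x = x0
    for k in range(max_iter):
        fx, dfx, d2fx = f(x), df(x), d2f(x)
        step = 2.0 * fx * dfx / (2.0 * dfx ** 2 - fx * d2fx)
        x -= step
        if abs(step) < tol:
            return x, k + 1
    return x, max_iter

# Illustrative usage: cube root of 2, i.e. the root of x**3 - 2.
root, iters = halley(lambda x: x ** 3 - 2.0,
                     lambda x: 3.0 * x ** 2,
                     lambda x: 6.0 * x,
                     x0=1.0)
print(root, "in", iters, "iterations; check:", 2.0 ** (1.0 / 3.0))
```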

  19. The Research of Multiple Attenuation Based on Feedback Iteration and Independent Component Analysis

    NASA Astrophysics Data System (ADS)

    Xu, X.; Tong, S.; Wang, L.

    2017-12-01

    Multiple suppression is a difficult problem in seismic data processing. The traditional technology for multiple attenuation is based on minimizing the output energy of the seismic signal; this criterion relies on second-order statistics and cannot attenuate multiples when the primaries and multiples are non-orthogonal. In order to solve these problems, we combine the feedback iteration method based on the wave equation and an improved independent component analysis (ICA) based on higher-order statistics to suppress the multiples. We first use the feedback iteration method to predict the free-surface multiples of each order. Then, in order to match the predicted multiples to the true multiples in amplitude and phase, we design an expanded pseudo-multichannel matching filtering method to obtain a more accurate matched multiple result. Finally, we apply an improved FastICA algorithm, based on maximizing the non-Gaussianity of the output signal, to the matched multiples and obtain better separation of the primaries and the multiples. The advantage of our method is that no a priori information is needed for the prediction of the multiples, and a better separation result can be obtained. The method has been applied to several synthetic datasets generated by the finite-difference modeling technique and to the Sigsbee2B model multiple data; the primaries and multiples are non-orthogonal in these models. The experiments show that after three to four iterations we obtain accurate multiple prediction results. Using our matching method and FastICA adaptive multiple subtraction, we not only preserve the primary energy in the seismic records but also effectively suppress the free-surface multiples, especially the multiples related to the middle and deep areas.
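
    As background, a minimal one-unit FastICA sketch on whitened data is shown below; it maximizes the non-Gaussianity of the output with the tanh nonlinearity and illustrates only the ICA building block, not the authors' matching-filter or adaptive multiple-subtraction scheme. The mixing matrix and toy signals are illustrative.

```python
import numpy as np

def whiten(X):
    """Center and whiten data X with shape (n_signals, n_samples)."""
    Xc = X - X.mean(axis=1, keepdims=True)
    cov = Xc @ Xc.T / Xc.shape[1]
    d, E = np.linalg.eigh(cov)
    W = E @ np.diag(1.0 / np.sqrt(d)) @ E.T
    return W @ Xc

def fastica_one_unit(Z, n_iter=200, tol=1e-10, seed=0):
    """One-unit FastICA with g = tanh on whitened data Z; returns one unmixing vector."""
    rng = np.random.default_rng(seed)
    w = rng.normal(size=Z.shape[0])
    w /= np.linalg.norm(w)
    for _ in range(n_iter):
        wz = w @ Z
        g, g_prime = np.tanh(wz), 1.0 - np.tanh(wz) ** 2
        w_new = (Z * g).mean(axis=1) - g_prime.mean() * w   # fixed-point update
        w_new /= np.linalg.norm(w_new)
        if 1.0 - abs(w_new @ w) < tol:
            return w_new
        w = w_new
    return w

# Illustrative usage: recover one source direction from two linear mixtures.
t = np.linspace(0, 8, 4000)
S = np.vstack([np.sign(np.sin(3 * t)), np.sin(7 * t)])   # square + sine sources
X = np.array([[1.0, 0.6], [0.5, 1.0]]) @ S               # mixed observations
Z = whiten(X)
w = fastica_one_unit(Z)
recovered = w @ Z
print("correlation with sources:", [abs(np.corrcoef(recovered, s)[0, 1]) for s in S])
```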

  20. Inversion of potential field data using the finite element method on parallel computers

    NASA Astrophysics Data System (ADS)

    Gross, L.; Altinay, C.; Shaw, S.

    2015-11-01

    In this paper we present a formulation of the joint inversion of potential field anomaly data as an optimization problem with partial differential equation (PDE) constraints. The problem is solved using the iterative Broyden-Fletcher-Goldfarb-Shanno (BFGS) method with the Hessian operator of the regularization and cross-gradient component of the cost function as preconditioner. We will show that each iterative step requires the solution of several PDEs namely for the potential fields, for the adjoint defects and for the application of the preconditioner. In extension to the traditional discrete formulation the BFGS method is applied to continuous descriptions of the unknown physical properties in combination with an appropriate integral form of the dot product. The PDEs can easily be solved using standard conforming finite element methods (FEMs) with potentially different resolutions. For two examples we demonstrate that the number of PDE solutions required to reach a given tolerance in the BFGS iteration is controlled by weighting regularization and cross-gradient but is independent of the resolution of PDE discretization and that as a consequence the method is weakly scalable with the number of cells on parallel computers. We also show a comparison with the UBC-GIF GRAV3D code.
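
    A minimal dense BFGS sketch with a backtracking (Armijo) line search on a small test function is shown below; it illustrates the quasi-Newton update only, not the PDE-constrained, Hessian-preconditioned formulation or the continuous dot products described above. The test function and parameters are illustrative.

```python
import numpy as np

def bfgs(f, grad, x0, tol=1e-8, max_iter=200):
    """Dense BFGS with a backtracking (Armijo) line search."""
    x = x0.astype(float)
    n = x.size
    H = np.eye(n)                               # inverse Hessian approximation
    g = grad(x)
    for k in range(max_iter):
        if np.linalg.norm(g) < tol:
            return x, k
        p = -H @ g                              # quasi-Newton search direction
        t, fx = 1.0, f(x)
        while f(x + t * p) > fx + 1e-4 * t * (g @ p) and t > 1e-12:
            t *= 0.5
        s = t * p
        x_new = x + s
        g_new = grad(x_new)
        y = g_new - g
        rho = 1.0 / (y @ s)
        if np.isfinite(rho) and rho > 0:        # curvature condition; skip update otherwise
            I = np.eye(n)
            H = (I - rho * np.outer(s, y)) @ H @ (I - rho * np.outer(y, s)) + rho * np.outer(s, s)
        x, g = x_new, g_new
    return x, max_iter

# Illustrative usage: minimize the 2D Rosenbrock function.
f = lambda x: (1 - x[0]) ** 2 + 100 * (x[1] - x[0] ** 2) ** 2
grad = lambda x: np.array([-2 * (1 - x[0]) - 400 * x[0] * (x[1] - x[0] ** 2),
                           200 * (x[1] - x[0] ** 2)])
x_opt, iters = bfgs(f, grad, np.array([-1.2, 1.0]))
print("minimum near", x_opt, "after", iters, "iterations")
```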

  1. Algorithms for the optimization of RBE-weighted dose in particle therapy.

    PubMed

    Horcicka, M; Meyer, C; Buschbacher, A; Durante, M; Krämer, M

    2013-01-21

    We report on various algorithms used for the nonlinear optimization of RBE-weighted dose in particle therapy. Concerning the dose calculation carbon ions are considered and biological effects are calculated by the Local Effect Model. Taking biological effects fully into account requires iterative methods to solve the optimization problem. We implemented several additional algorithms into GSI's treatment planning system TRiP98, like the BFGS-algorithm and the method of conjugated gradients, in order to investigate their computational performance. We modified textbook iteration procedures to improve the convergence speed. The performance of the algorithms is presented by convergence in terms of iterations and computation time. We found that the Fletcher-Reeves variant of the method of conjugated gradients is the algorithm with the best computational performance. With this algorithm we could speed up computation times by a factor of 4 compared to the method of steepest descent, which was used before. With our new methods it is possible to optimize complex treatment plans in a few minutes leading to good dose distributions. At the end we discuss future goals concerning dose optimization issues in particle therapy which might benefit from fast optimization solvers.
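
    To illustrate the Fletcher-Reeves variant of the method of conjugate gradients mentioned above, a minimal nonlinear CG sketch with an Armijo backtracking line search is given below on an ill-conditioned quadratic; the dose-optimization objective and the TRiP98 implementation are not reproduced, and all names and values are illustrative.

```python
import numpy as np

def fletcher_reeves_cg(f, grad, x0, tol=1e-6, max_iter=1000):
    """Nonlinear conjugate gradients with the Fletcher-Reeves beta and Armijo backtracking."""
    x = x0.astype(float)
    g = grad(x)
    d = -g
    for k in range(max_iter):
        if np.linalg.norm(g) < tol:
            return x, k
        t, fx = 1.0, f(x)
        while f(x + t * d) > fx + 1e-4 * t * (g @ d) and t > 1e-14:
            t *= 0.5
        x = x + t * d
        g_new = grad(x)
        beta = (g_new @ g_new) / (g @ g)        # Fletcher-Reeves formula
        d = -g_new + beta * d
        if g_new @ d >= 0:                      # restart if d is not a descent direction
            d = -g_new
        g = g_new
    return x, max_iter

# Illustrative usage: a smooth convex test objective (ill-conditioned quadratic).
A = np.diag([1.0, 10.0, 100.0])
b = np.array([1.0, -2.0, 3.0])
f = lambda x: 0.5 * x @ A @ x - b @ x
grad = lambda x: A @ x - b
x_opt, iters = fletcher_reeves_cg(f, grad, np.zeros(3))
print("solution:", x_opt, "expected:", np.linalg.solve(A, b), "iterations:", iters)
```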

  2. Algorithms for the optimization of RBE-weighted dose in particle therapy

    NASA Astrophysics Data System (ADS)

    Horcicka, M.; Meyer, C.; Buschbacher, A.; Durante, M.; Krämer, M.

    2013-01-01

    We report on various algorithms used for the nonlinear optimization of RBE-weighted dose in particle therapy. Concerning the dose calculation carbon ions are considered and biological effects are calculated by the Local Effect Model. Taking biological effects fully into account requires iterative methods to solve the optimization problem. We implemented several additional algorithms into GSI's treatment planning system TRiP98, like the BFGS-algorithm and the method of conjugated gradients, in order to investigate their computational performance. We modified textbook iteration procedures to improve the convergence speed. The performance of the algorithms is presented by convergence in terms of iterations and computation time. We found that the Fletcher-Reeves variant of the method of conjugated gradients is the algorithm with the best computational performance. With this algorithm we could speed up computation times by a factor of 4 compared to the method of steepest descent, which was used before. With our new methods it is possible to optimize complex treatment plans in a few minutes leading to good dose distributions. At the end we discuss future goals concerning dose optimization issues in particle therapy which might benefit from fast optimization solvers.

  3. Computing eigenfunctions and eigenvalues of boundary-value problems with the orthogonal spectral renormalization method

    NASA Astrophysics Data System (ADS)

    Cartarius, Holger; Musslimani, Ziad H.; Schwarz, Lukas; Wunner, Günter

    2018-03-01

    The spectral renormalization method was introduced in 2005 as an effective way to compute ground states of nonlinear Schrödinger and Gross-Pitaevskii type equations. In this paper, we introduce an orthogonal spectral renormalization (OSR) method to compute ground and excited states (and their respective eigenvalues) of linear and nonlinear eigenvalue problems. The implementation of the algorithm follows four simple steps: (i) reformulate the underlying eigenvalue problem as a fixed-point equation, (ii) introduce a renormalization factor that controls the convergence properties of the iteration, (iii) perform a Gram-Schmidt orthogonalization process in order to prevent the iteration from converging to an unwanted mode, and (iv) compute the solution sought using a fixed-point iteration. The advantages of the OSR scheme over other known methods (such as Newton's and self-consistency) are (i) it allows the flexibility to choose large varieties of initial guesses without diverging, (ii) it is easy to implement especially at higher dimensions, and (iii) it can easily handle problems with complex and random potentials. The OSR method is implemented on benchmark Hermitian linear and nonlinear eigenvalue problems as well as linear and nonlinear non-Hermitian PT-symmetric models.
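
    The core idea behind step (iii), keeping the iterates of a fixed-point map mutually orthogonal so that none of them collapses onto an already-found mode, is illustrated below with a plain orthogonal (subspace) iteration on a symmetric test matrix. This is not the OSR scheme itself (no renormalization factor and no nonlinearity); the matrix, tolerance, and random start are illustrative.

```python
import numpy as np

def orthogonal_iteration(A, k=3, tol=1e-10, max_iter=5000):
    """Fixed-point iteration Q <- orth(A @ Q). Re-orthogonalizing (here via
    QR, i.e. Gram-Schmidt) at every step prevents all k columns from
    converging to the same dominant eigenvector."""
    rng = np.random.default_rng(1)
    Q = np.linalg.qr(rng.standard_normal((A.shape[0], k)))[0]
    for _ in range(max_iter):
        Q_new, _ = np.linalg.qr(A @ Q)          # orthogonalization step
        # convergence measured on the projector, which is sign-invariant
        if np.linalg.norm(Q_new @ Q_new.T - Q @ Q.T) < tol:
            Q = Q_new
            break
        Q = Q_new
    return np.diag(Q.T @ A @ Q), Q              # Rayleigh quotients, modes

# symmetric test matrix standing in for a discretized linear operator
A = np.diag(np.arange(1.0, 11.0)) + 0.01 * np.ones((10, 10))
vals, modes = orthogonal_iteration(A, k=3)
```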

  4. A new method for computation of eigenvector derivatives with distinct and repeated eigenvalues in structural dynamic analysis

    NASA Astrophysics Data System (ADS)

    Li, Zhengguang; Lai, Siu-Kai; Wu, Baisheng

    2018-07-01

    Determining eigenvector derivatives is a challenging task due to the singularity of the coefficient matrices of the governing equations, especially for those structural dynamic systems with repeated eigenvalues. An effective strategy is proposed to construct a non-singular coefficient matrix, which can be directly used to obtain the eigenvector derivatives with distinct and repeated eigenvalues. This approach also has the advantage that it requires only the eigenvalues and eigenvectors of interest, without solving for the particular solutions of the eigenvector derivatives. The Symmetric Quasi-Minimal Residual (SQMR) method is then adopted to solve the governing equations; only the existing factored (shifted) stiffness matrix from an iterative eigensolution, such as the subspace iteration method or the Lanczos algorithm, is utilized. The present method can deal with both cases of simple and repeated eigenvalues in a unified manner. Three numerical examples are given to illustrate the accuracy and validity of the proposed algorithm. Highly accurate approximations to the eigenvector derivatives are obtained within a few iteration steps, giving a significant reduction in computational effort. This method can be incorporated into a coupled eigensolver/derivative software module. In particular, it is applicable to finite element models with large sparse matrices.

  5. Development of parallel algorithms for electrical power management in space applications

    NASA Technical Reports Server (NTRS)

    Berry, Frederick C.

    1989-01-01

    The application of parallel techniques for electrical power system analysis is discussed. The Newton-Raphson method of load flow analysis was used along with the decomposition-coordination technique to perform load flow analysis. The decomposition-coordination technique enables tasks to be performed in parallel by partitioning the electrical power system into independent local problems. Each independent local problem represents a portion of the total electrical power system on which a load flow analysis can be performed. The load flow analysis is performed on these partitioned elements by using the Newton-Raphson load flow method. These independent local problems will produce results for voltage and power which can then be passed to the coordinator portion of the solution procedure. The coordinator problem uses the results of the local problems to determine if any correction is needed on the local problems. The coordinator problem is also solved by an iterative method much like the local problem. The iterative method for the coordination problem will also be the Newton-Raphson method. Therefore, each iteration at the coordination level will result in new values for the local problems. The local problems will have to be solved again along with the coordinator problem until some convergence conditions are met.
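
    The local solves described above are ordinary Newton-Raphson iterations on a small nonlinear mismatch system. A generic sketch follows; the 2x2 system here is a made-up stand-in, not actual power-flow equations.

```python
import numpy as np

def newton_raphson(F, J, x0, tol=1e-10, max_iter=50):
    """Generic Newton-Raphson: solve J(x) dx = -F(x) and update x."""
    x = x0.astype(float).copy()
    for _ in range(max_iter):
        Fx = F(x)
        if np.linalg.norm(Fx) < tol:
            break
        dx = np.linalg.solve(J(x), -Fx)
        x = x + dx
    return x

# toy 2x2 mismatch equations standing in for one partitioned subproblem
def F(x):
    return np.array([x[0]**2 + x[1]**2 - 4.0,
                     x[0] - x[1] - 1.0])

def J(x):
    return np.array([[2.0 * x[0], 2.0 * x[1]],
                     [1.0, -1.0]])

x = newton_raphson(F, J, np.array([2.0, 1.0]))
```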

  6. A holistic calibration method with iterative distortion compensation for stereo deflectometry

    NASA Astrophysics Data System (ADS)

    Xu, Yongjia; Gao, Feng; Zhang, Zonghua; Jiang, Xiangqian

    2018-07-01

    This paper presents a novel holistic calibration method for stereo deflectometry systems to improve the system measurement accuracy. The reconstruction result of stereo deflectometry is integrated with the calculated normal data of the measured surface. The calculation accuracy of the normal data is strongly influenced by the calibration accuracy of the geometrical relationship of the stereo deflectometry system. Conventional calibration approaches introduce form error into the system due to an inaccurate imaging model and distortion elimination. The proposed calibration method compensates system distortion based on an iterative algorithm instead of the conventional distortion mathematical model. The initial values of the system parameters are calculated from the fringe patterns displayed on the system LCD screen through a reflection off a markless flat mirror. An iterative algorithm is proposed to compensate system distortion and optimize camera imaging parameters and system geometrical relation parameters based on a cost function. Both simulation work and experimental results show the proposed calibration method can significantly improve the calibration and measurement accuracy of a stereo deflectometry system. The PV (peak value) of the measurement error of a flat mirror can be reduced from 282 nm obtained with the conventional calibration approach to 69.7 nm with the proposed method.

  7. An Analytical Investigation of Three General Methods of Calculating Chemical-Equilibrium Compositions

    NASA Technical Reports Server (NTRS)

    Zeleznik, Frank J.; Gordon, Sanford

    1960-01-01

    The Brinkley, Huff, and White methods for chemical-equilibrium calculations were modified and extended in order to permit an analytical comparison. The extended forms of these methods permit condensed species as reaction products, include temperature as a variable in the iteration, and permit arbitrary estimates for the variables. It is analytically shown that the three extended methods can be placed in a form that is independent of components. In this form the Brinkley iteration is identical computationally to the White method, while the modified Huff method differs only slightly from these two. The convergence rates of the modified Brinkley and White methods are identical; and, further, all three methods are guaranteed to converge and will ultimately converge quadratically. It is concluded that no one of the three methods offers any significant computational advantages over the other two.

  8. Achieving algorithmic resilience for temporal integration through spectral deferred corrections

    DOE PAGES

    Grout, Ray; Kolla, Hemanth; Minion, Michael; ...

    2017-05-08

    Spectral deferred corrections (SDC) is an iterative approach for constructing higher-order-accurate numerical approximations of ordinary differential equations. SDC starts with an initial approximation of the solution defined at a set of Gaussian or spectral collocation nodes over a time interval and uses an iterative application of lower-order time discretizations applied to a correction equation to improve the solution at these nodes. Each deferred correction sweep increases the formal order of accuracy of the method up to the limit inherent in the accuracy defined by the collocation points. In this paper, we demonstrate that SDC is well suited to recovering from soft (transient) hardware faults in the data. A strategy where extra correction iterations are used to recover from soft errors and provide algorithmic resilience is proposed. Specifically, in this approach the iteration is continued until the residual (a measure of the error in the approximation) is small relative to the residual of the first correction iteration and changes slowly between successive iterations. Here, we demonstrate the effectiveness of this strategy for both canonical test problems and a comprehensive situation involving a mature scientific application code that solves the reacting Navier-Stokes equations for combustion research.
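
    The recovery strategy described above is essentially a stopping rule on the correction sweeps. A schematic version is sketched below, wrapped around a toy damped-Jacobi "sweep" rather than a real SDC sweep; the threshold values and the linear system are illustrative.

```python
import numpy as np

def iterate_until_resilient(sweep, residual, u0, rel_tol=1e-6,
                            stall_tol=1e-5, max_sweeps=100):
    """Continue sweeping until the residual is small relative to the
    residual after the first sweep AND is changing slowly between
    successive sweeps, so a transient fault simply triggers a few extra
    sweeps instead of corrupting the result."""
    u = sweep(u0)
    r_first = residual(u)
    r_prev = r_first
    for _ in range(1, max_sweeps):
        u = sweep(u)
        r = residual(u)
        if r < rel_tol * r_first and abs(r_prev - r) < stall_tol * r_first:
            break
        r_prev = r
    return u

# toy "correction sweep": one damped Jacobi step for A u = b
A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
u = iterate_until_resilient(lambda u: u + 0.2 * (b - A @ u),
                            lambda u: np.linalg.norm(b - A @ u),
                            np.zeros(2))
```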

  9. Achieving algorithmic resilience for temporal integration through spectral deferred corrections

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Grout, Ray; Kolla, Hemanth; Minion, Michael

    2017-05-08

    Spectral deferred corrections (SDC) is an iterative approach for constructing higher-order-accurate numerical approximations of ordinary differential equations. SDC starts with an initial approximation of the solution defined at a set of Gaussian or spectral collocation nodes over a time interval and uses an iterative application of lower-order time discretizations applied to a correction equation to improve the solution at these nodes. Each deferred correction sweep increases the formal order of accuracy of the method up to the limit inherent in the accuracy defined by the collocation points. In this paper, we demonstrate that SDC is well suited to recovering from soft (transient) hardware faults in the data. A strategy where extra correction iterations are used to recover from soft errors and provide algorithmic resilience is proposed. Specifically, in this approach the iteration is continued until the residual (a measure of the error in the approximation) is small relative to the residual on the first correction iteration and changes slowly between successive iterations. We demonstrate the effectiveness of this strategy for both canonical test problems and a comprehensive situation involving a mature scientific application code that solves the reacting Navier-Stokes equations for combustion research.

  10. Achieving algorithmic resilience for temporal integration through spectral deferred corrections

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Grout, Ray; Kolla, Hemanth; Minion, Michael

    2017-05-08

    Spectral deferred corrections (SDC) is an iterative approach for constructing higher-order-accurate numerical approximations of ordinary differential equations. SDC starts with an initial approximation of the solution defined at a set of Gaussian or spectral collocation nodes over a time interval and uses an iterative application of lower-order time discretizations applied to a correction equation to improve the solution at these nodes. Each deferred correction sweep increases the formal order of accuracy of the method up to the limit inherent in the accuracy defined by the collocation points. In this paper, we demonstrate that SDC is well suited to recovering from soft (transient) hardware faults in the data. A strategy where extra correction iterations are used to recover from soft errors and provide algorithmic resilience is proposed. Specifically, in this approach the iteration is continued until the residual (a measure of the error in the approximation) is small relative to the residual of the first correction iteration and changes slowly between successive iterations. We demonstrate the effectiveness of this strategy for both canonical test problems and a comprehensive situation involving a mature scientific application code that solves the reacting Navier-Stokes equations for combustion research.

  11. Optimal Variational Asymptotic Method for Nonlinear Fractional Partial Differential Equations.

    PubMed

    Baranwal, Vipul K; Pandey, Ram K; Singh, Om P

    2014-01-01

    We propose an optimal variational asymptotic method to solve time fractional nonlinear partial differential equations. In the proposed method, an arbitrary number of auxiliary parameters γ0, γ1, γ2,… and auxiliary functions H0(x), H1(x), H2(x),… are introduced in the correction functional of the standard variational iteration method. The optimal values of these parameters are obtained by minimizing the square residual error. To test the method, we apply it to solve two important classes of nonlinear partial differential equations: (1) the fractional advection-diffusion equation with a nonlinear source term and (2) the fractional Swift-Hohenberg equation. Only a few iterations are required to achieve fairly accurate solutions of both the first and second problems.

  12. Fast algorithm for spectral mixture analysis of imaging spectrometer data

    NASA Astrophysics Data System (ADS)

    Schouten, Theo E.; Klein Gebbinck, Maurice S.; Liu, Z. K.; Chen, Shaowei

    1996-12-01

    Imaging spectrometers acquire images in many narrow spectral bands but have limited spatial resolution. Spectral mixture analysis (SMA) is used to determine the fractions of the ground cover categories (the end-members) present in each pixel. In this paper a new iterative SMA method is presented and tested using a 30 band MAIS image. The time needed for each iteration is independent of the number of bands, thus the method can be used for spectrometers with a large number of bands. Further a new method, based on K-means clustering, for obtaining endmembers from image data is described and compared with existing methods. Using the developed methods the available MAIS image was analyzed using 2 to 6 endmembers.

  13. Application of an improved maximum correlated kurtosis deconvolution method for fault diagnosis of rolling element bearings

    NASA Astrophysics Data System (ADS)

    Miao, Yonghao; Zhao, Ming; Lin, Jing; Lei, Yaguo

    2017-08-01

    The extraction of periodic impulses, which are important indicators of rolling bearing faults, from vibration signals is of considerable significance for fault diagnosis. Maximum correlated kurtosis deconvolution (MCKD), developed from minimum entropy deconvolution (MED), has been proven an efficient tool for enhancing the periodic impulses in the diagnosis of rolling element bearings and gearboxes. However, challenges still exist when MCKD is applied to bearings operating under harsh working conditions. The difficulties mainly come from the rigorous requirements on the multiple input parameters and the complicated resampling process. To overcome these limitations, an improved MCKD (IMCKD) is presented in this paper. The new method estimates the iterative period by calculating the autocorrelation of the envelope signal rather than relying on a user-provided prior period. Moreover, the iterative period gradually approaches the true fault period as it is updated after every iterative step. Since IMCKD is unaffected by impulse signals with high kurtosis values, the new method selects the maximum-kurtosis filtered signal as the final choice from all candidates within the assigned iteration count. Compared with MCKD, IMCKD has three advantages. First, without requiring a prior period or the choice of the order of shift, IMCKD is more efficient and more robust. Second, the resampling process is not necessary for IMCKD, which greatly simplifies the subsequent frequency spectrum analysis and envelope spectrum analysis because the sampling rate does not need to be reset. Third, IMCKD has a significant performance advantage in diagnosing bearing compound faults, which expands its application range. Finally, the effectiveness and superiority of IMCKD are validated on a number of simulated bearing fault signals and by application to compound-fault and single-fault diagnosis of a locomotive bearing.
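
    The key change, estimating the iterative period from the autocorrelation of the envelope signal instead of requiring a user-supplied prior period, can be sketched as follows. The synthetic signal and the lag search window are illustrative, and IMCKD's deconvolution filtering and kurtosis-selection steps are not shown.

```python
import numpy as np
from scipy.signal import hilbert

def estimate_fault_period(x, min_lag, max_lag):
    """Period estimate: envelope via the analytic signal, then the lag of
    the largest autocorrelation value inside [min_lag, max_lag)."""
    env = np.abs(hilbert(x))
    env = env - env.mean()
    ac = np.correlate(env, env, mode='full')[env.size - 1:]   # lags >= 0
    return min_lag + int(np.argmax(ac[min_lag:max_lag]))

# synthetic impulsive signal: an impulse every 100 samples plus noise
rng = np.random.default_rng(0)
x = 0.2 * rng.standard_normal(4000)
x[::100] += 5.0
print(estimate_fault_period(x, min_lag=10, max_lag=1000))   # ~100
```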

  14. Improved Image Quality in Head and Neck CT Using a 3D Iterative Approach to Reduce Metal Artifact.

    PubMed

    Wuest, W; May, M S; Brand, M; Bayerl, N; Krauss, A; Uder, M; Lell, M

    2015-10-01

    Metal artifacts from dental fillings and other devices degrade image quality and may compromise the detection and evaluation of lesions in the oral cavity and oropharynx by CT. The aim of this study was to evaluate the effect of iterative metal artifact reduction on CT of the oral cavity and oropharynx. Data from 50 consecutive patients with metal artifacts from dental hardware were reconstructed with standard filtered back-projection, linear interpolation metal artifact reduction (LIMAR), and iterative metal artifact reduction. The image quality of sections that contained metal was analyzed for the severity of artifacts and diagnostic value. A total of 455 sections (mean ± standard deviation, 9.1 ± 4.1 sections per patient) contained metal and were evaluated with each reconstruction method. Sections without metal were not affected by the algorithms and demonstrated image quality identical to each other. Of these sections, 38% were considered nondiagnostic with filtered back-projection, 31% with LIMAR, and only 7% with iterative metal artifact reduction. Thirty-three percent of the sections had poor image quality with filtered back-projection, 46% with LIMAR, and 10% with iterative metal artifact reduction. Thirteen percent of the sections with filtered back-projection, 17% with LIMAR, and 22% with iterative metal artifact reduction were of moderate image quality, 16% of the sections with filtered back-projection, 5% with LIMAR, and 30% with iterative metal artifact reduction were of good image quality, and 1% of the sections with LIMAR and 31% with iterative metal artifact reduction were of excellent image quality. Iterative metal artifact reduction yields the highest image quality in comparison with filtered back-projection and linear interpolation metal artifact reduction in patients with metal hardware in the head and neck area. © 2015 by American Journal of Neuroradiology.

  15. An iterative reduced field-of-view reconstruction for periodically rotated overlapping parallel lines with enhanced reconstruction (PROPELLER) MRI.

    PubMed

    Lin, Jyh-Miin; Patterson, Andrew J; Chang, Hing-Chiu; Gillard, Jonathan H; Graves, Martin J

    2015-10-01

    To propose a new reduced field-of-view (rFOV) strategy for iterative reconstructions in a clinical environment. Iterative reconstructions can incorporate regularization terms to improve the image quality of periodically rotated overlapping parallel lines with enhanced reconstruction (PROPELLER) MRI. However, the large amount of calculations required for full FOV iterative reconstructions has posed a huge computational challenge for clinical usage. By subdividing the entire problem into smaller rFOVs, the iterative reconstruction can be accelerated on a desktop with a single graphic processing unit (GPU). This rFOV strategy divides the iterative reconstruction into blocks, based on the block-diagonal dominant structure. A near real-time reconstruction system was developed for the clinical MR unit, and parallel computing was implemented using the object-oriented model. In addition, the Toeplitz method was implemented on the GPU to reduce the time required for full interpolation. Using the data acquired from the PROPELLER MRI, the reconstructed images were then saved in the digital imaging and communications in medicine format. The proposed rFOV reconstruction reduced the gridding time by 97%, as the total iteration time was 3 s even with multiple processes running. A phantom study showed that the structure similarity index for rFOV reconstruction was statistically superior to conventional density compensation (p < 0.001). In vivo study validated the increased signal-to-noise ratio, which is over four times higher than with density compensation. Image sharpness index was improved using the regularized reconstruction implemented. The rFOV strategy permits near real-time iterative reconstruction to improve the image quality of PROPELLER images. Substantial improvements in image quality metrics were validated in the experiments. The concept of rFOV reconstruction may potentially be applied to other kinds of iterative reconstructions for shortened reconstruction duration.

  16. On the electromagnetic scattering from infinite rectangular grids with finite conductivity

    NASA Technical Reports Server (NTRS)

    Christodoulou, C. G.; Kauffman, J. F.

    1986-01-01

    A variety of methods can be used in constructing solutions to the problem of mesh scattering. However, each of these methods has certain drawbacks. The present paper is concerned with a new technique which is valid for all spacings. The new method involved, called the fast Fourier transform-conjugate gradient method (FFT-CGM), represents an iterative technique which employs the conjugate gradient method to improve upon each iterate, utilizing the fast Fourier transform. The FFT-CGM method provides a new accurate model which can be extended and applied to the more difficult problems of woven mesh surfaces. The formulation of the FFT-conjugate gradient method for aperture fields and current densities for a planar periodic structure is considered along with singular operators, the formulation of the FFT-CG method for thin wires with finite conductivity, and reflection coefficients.

  17. The motional Stark effect diagnostic for ITER using a line-shift approach.

    PubMed

    Foley, E L; Levinton, F M; Yuh, H Y; Zakharov, L E

    2008-10-01

    The United States has been tasked with the development and implementation of a motional Stark effect (MSE) system on ITER. In the harsh ITER environment, MSE is particularly susceptible to degradation, as it depends on polarimetry, and the polarization reflection properties of surfaces are highly sensitive to thin film effects due to plasma deposition and erosion of a first mirror. Here we present the results of a comprehensive study considering a new MSE-based approach to internal plasma magnetic field measurements for ITER. The proposed method uses the line shifts in the MSE spectrum (MSE-LS) to provide a radial profile of the magnetic field magnitude. To determine the utility of MSE-LS for equilibrium reconstruction, studies were performed using the ESC-ERV code system. A near-term opportunity to test the use of MSE-LS for equilibrium reconstruction is being pursued in the implementation of MSE with laser-induced fluorescence on NSTX. Though the field values and beam energies are very different from ITER, the use of a laser allows precision spectroscopy with a similar ratio of linewidth to line spacing on NSTX as would be achievable with a passive system on ITER. Simulation results for ITER and NSTX are presented, and the relative merits of the traditional line polarization approach and the new line-shift approach are discussed.

  18. Applicability of the iterative technique for cardiac resynchronization therapy optimization: full-disclosure, 50-sequential-patient dataset of transmitral Doppler traces, with implications for future research design and guidelines.

    PubMed

    Jones, Siana; Shun-Shin, Matthew J; Cole, Graham D; Sau, Arunashis; March, Katherine; Williams, Suzanne; Kyriacou, Andreas; Hughes, Alun D; Mayet, Jamil; Frenneaux, Michael; Manisty, Charlotte H; Whinnett, Zachary I; Francis, Darrel P

    2014-04-01

    Full-disclosure study describing Doppler patterns during iterative atrioventricular delay (AVD) optimization of biventricular pacemakers (cardiac resynchronization therapy, CRT). Doppler traces of the first 50 eligible patients undergoing iterative Doppler AVD optimization in the BRAVO trial were examined. Three experienced observers classified conformity to guideline-described patterns. Each observer then selected the optimum AVD on two separate occasions: blinded and unblinded to AVD. Four Doppler E-A patterns occurred: A (always merged, 18% of patients), B (incrementally less fusion at short AVDs, 12%), C (full separation at short AVDs, as described by the guidelines, 28%), and D (always separated, 42%). In Groups A and D (60%), the iterative guidelines therefore cannot specify one single AVD. On the kappa scale (0 = chance alone; 1 = perfect agreement), observer agreement for the ideal AVD in Classes B and C was poor (0.32) and appeared worse in Groups A and D (0.22). Blinding caused the scatter of the AVDs selected as optimal to widen (standard deviation rising from 37 to 49 ms, P < 0.001). With blinding, 28% of the selected optimum AVDs were ≤60 or ≥200 ms. All 50 Doppler datasets are presented, to support future methodological testing. In most patients, the iterative method does not clearly specify one AVD. In all the patients, agreement on the ideal AVD between skilled observers viewing identical images is poor. The iterative protocol may successfully exclude some extremely unsuitable AVDs, but so might simply accepting the factory default. Irreproducibility of the gold standard also prevents alternative physiological optimization methods from being validated honestly.

  19. Iterative combining rules for the van der Waals potentials of mixed rare gas systems

    NASA Astrophysics Data System (ADS)

    Wei, L. M.; Li, P.; Tang, K. T.

    2017-05-01

    An iterative procedure is introduced to make the results of some simple combining rules compatible with the Tang-Toennies potential model. The method is used to calculate the well locations Re and the well depths De of the van der Waals potentials of the mixed rare gas systems from the corresponding values of the homo-nuclear dimers. When the "sizes" of the two interacting atoms are very different, several rounds of iteration are required for the results to converge. The converged results can be substantially different from the starting values obtained from the combining rules. However, if the sizes of the interacting atoms are close, only one or even no iteration is necessary for the results to converge. In either case, the converged results are accurate descriptions of the interaction potentials of the hetero-nuclear dimers.

  20. An algebraic iterative reconstruction technique for differential X-ray phase-contrast computed tomography.

    PubMed

    Fu, Jian; Schleede, Simone; Tan, Renbo; Chen, Liyuan; Bech, Martin; Achterhold, Klaus; Gifford, Martin; Loewen, Rod; Ruth, Ronald; Pfeiffer, Franz

    2013-09-01

    Iterative reconstruction has a wide spectrum of proven advantages in the field of conventional X-ray absorption-based computed tomography (CT). In this paper, we report on an algebraic iterative reconstruction technique for grating-based differential phase-contrast CT (DPC-CT). Due to the differential nature of DPC-CT projections, a differential operator and a smoothing operator are added to the iterative reconstruction, compared to the one commonly used for absorption-based CT data. This work comprises a numerical study of the algorithm and its experimental verification using a dataset measured at a two-grating interferometer setup. Since the algorithm is easy to implement and allows for the extension to various regularization possibilities, we expect a significant impact of the method for improving future medical and industrial DPC-CT applications. Copyright © 2012. Published by Elsevier GmbH.
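
    For orientation, the underlying algebraic reconstruction idea is a row-action (Kaczmarz-type) sweep over the system of projection equations. The sketch below solves a plain A x = b and omits the differential and smoothing operators that the DPC-CT version adds; the small system is a made-up example.

```python
import numpy as np

def kaczmarz(A, b, n_sweeps=50, relax=1.0):
    """Algebraic reconstruction technique: cycle through the rows of
    A x = b and project the current estimate onto each row's hyperplane
    (relaxed by `relax`)."""
    x = np.zeros(A.shape[1])
    row_norms = np.einsum('ij,ij->i', A, A)
    for _ in range(n_sweeps):
        for i in range(A.shape[0]):
            if row_norms[i] == 0.0:
                continue
            x += relax * (b[i] - A[i] @ x) / row_norms[i] * A[i]
    return x

# small consistent toy system standing in for the projection model
A = np.array([[1.0, 1.0, 0.0],
              [0.0, 1.0, 1.0],
              [1.0, 0.0, 1.0]])
x_true = np.array([1.0, 2.0, 3.0])
x_rec = kaczmarz(A, A @ x_true)
```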

  1. A multiresolution approach to iterative reconstruction algorithms in X-ray computed tomography.

    PubMed

    De Witte, Yoni; Vlassenbroeck, Jelle; Van Hoorebeke, Luc

    2010-09-01

    In computed tomography, the application of iterative reconstruction methods in practical situations is impeded by their high computational demands. Especially in high resolution X-ray computed tomography, where reconstruction volumes contain a high number of volume elements (several giga voxels), this computational burden prevents their actual breakthrough. Besides the large amount of calculations, iterative algorithms require the entire volume to be kept in memory during reconstruction, which quickly becomes cumbersome for large data sets. To overcome this obstacle, we present a novel multiresolution reconstruction, which greatly reduces the required amount of memory without significantly affecting the reconstructed image quality. It is shown that, combined with an efficient implementation on a graphical processing unit, the multiresolution approach enables the application of iterative algorithms in the reconstruction of large volumes at an acceptable speed using only limited resources.

  2. Prospective ECG-Triggered Coronary CT Angiography: Clinical Value of Noise-Based Tube Current Reduction Method with Iterative Reconstruction

    PubMed Central

    Shen, Junlin; Du, Xiangying; Guo, Daode; Cao, Lizhen; Gao, Yan; Yang, Qi; Li, Pengyu; Liu, Jiabin; Li, Kuncheng

    2013-01-01

    Objectives To evaluate the clinical value of a noise-based tube current reduction method with iterative reconstruction for obtaining consistent image quality with dose optimization in prospective electrocardiogram (ECG)-triggered coronary CT angiography (CCTA). Materials and Methods We performed a prospective randomized study evaluating 338 patients undergoing CCTA with prospective ECG-triggering. Patients were randomly assigned to fixed tube current with filtered back projection (Group 1, n = 113), noise-based tube current with filtered back projection (Group 2, n = 109) or with iterative reconstruction (Group 3, n = 116). Tube voltage was fixed at 120 kV. Qualitative image quality was rated on a 5-point scale (1 = impaired, to 5 = excellent, with 3–5 defined as diagnostic). Image noise and signal intensity were measured; signal-to-noise ratio was calculated; radiation dose parameters were recorded. Statistical analyses included one-way analysis of variance, chi-square test, Kruskal-Wallis test and multivariable linear regression. Results Image noise was maintained at the target value of 35HU with a small interquartile range for Group 2 (35.00–35.03HU) and Group 3 (34.99–35.02HU), whereas it ranged from 28.73 to 37.87HU for Group 1. All images in the three groups were acceptable for diagnosis. Relative reductions in effective dose of 20% and 51% were achieved for Group 2 (2.9 mSv) and Group 3 (1.8 mSv), respectively, compared with Group 1 (3.7 mSv). After adjustment for scan characteristics, iterative reconstruction was associated with a 26% reduction in effective dose. Conclusion The noise-based tube current reduction method with iterative reconstruction maintains image noise precisely at the desired level and achieves consistent image quality. Meanwhile, effective dose can be reduced by more than 50%. PMID:23741444

  3. Reduction of Metal Artifact in Single Photon-Counting Computed Tomography by Spectral-Driven Iterative Reconstruction Technique

    PubMed Central

    Nasirudin, Radin A.; Mei, Kai; Panchev, Petar; Fehringer, Andreas; Pfeiffer, Franz; Rummeny, Ernst J.; Fiebich, Martin; Noël, Peter B.

    2015-01-01

    Purpose The exciting prospect of Spectral CT (SCT) using photon-counting detectors (PCD) will lead to new techniques in computed tomography (CT) that take advantage of the additional spectral information provided. We introduce a method to reduce metal artifacts in X-ray tomography by incorporating knowledge obtained from SCT into a statistical iterative reconstruction scheme. We call our method Spectral-driven Iterative Reconstruction (SPIR). Method The proposed algorithm consists of two main components: material decomposition and penalized maximum likelihood iterative reconstruction. In this study, the spectral data acquisitions with an energy-resolving PCD were simulated using a Monte-Carlo simulator based on the EGSnrc C++ class library. A jaw phantom with a dental implant made of gold was used as the test object. A total of three dental implant shapes were simulated separately to test the influence of prior knowledge on the overall performance of the algorithm. The generated projection data was first decomposed into three basis functions: photoelectric absorption, Compton scattering and attenuation of gold. A pseudo-monochromatic sinogram was calculated and used as input in the reconstruction, while the spatial information of the gold implant was used as a prior. The results from the algorithm were assessed and benchmarked against state-of-the-art reconstruction methods. Results The decomposition results illustrate that a gold implant of any shape can be distinguished from other components of the phantom. Additionally, the result from the penalized maximum likelihood iterative reconstruction shows that artifacts are significantly reduced in SPIR reconstructed slices in comparison to other known techniques, while at the same time details around the implant are preserved. Quantitatively, the SPIR algorithm best reflects the true attenuation value in comparison to other algorithms. Conclusion It is demonstrated that the combination of the additional information from Spectral CT and statistical reconstruction can significantly improve image quality, in particular by reducing the streaking artifacts caused by the presence of materials with high atomic numbers. PMID:25955019

  4. Improving space debris detection in GEO ring using image deconvolution

    NASA Astrophysics Data System (ADS)

    Núñez, Jorge; Núñez, Anna; Montojo, Francisco Javier; Condominas, Marta

    2015-07-01

    In this paper we present a method based on image deconvolution to improve the detection of space debris, mainly in the geostationary ring. Among the deconvolution methods we chose the iterative Richardson-Lucy (R-L) algorithm, as the method that achieves good results with a reasonable amount of computation. For this work, we used two sets of real 4096 × 4096 pixel test images obtained with the Telescope Fabra-ROA at Montsec (TFRM). Using the first set of data, we establish the optimal number of iterations at 7, and applying the R-L method with 7 iterations to the images, we show that the astrometric accuracy does not vary significantly while the limiting magnitude of the deconvolved images increases significantly compared to the original ones. The increase is on average about 1.0 magnitude, which means that objects up to 2.5 times fainter can be detected after deconvolution. The application of the method to the second set of test images, which includes several faint objects, shows that, after deconvolution, up to four previously undetected faint objects are detected in a single frame. Finally, we carried out a study of some economic aspects of applying the deconvolution method, showing that an important economic impact can be envisaged.
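
    A one-dimensional Richardson-Lucy iteration is shown below for reference; the point sources and Gaussian PSF are synthetic, and the 7-iteration count is the value the study above found optimal for its images rather than a general recommendation.

```python
import numpy as np

def richardson_lucy(observed, psf, n_iter=7):
    """Richardson-Lucy iteration for non-negative 1-D data."""
    estimate = np.full_like(observed, observed.mean())
    psf_mirror = psf[::-1]
    for _ in range(n_iter):
        blurred = np.convolve(estimate, psf, mode='same')
        ratio = observed / np.maximum(blurred, 1e-12)
        estimate *= np.convolve(ratio, psf_mirror, mode='same')
    return estimate

# toy example: two point sources blurred by a Gaussian PSF
x = np.zeros(200); x[60] = 1.0; x[65] = 0.6
t = np.arange(-10, 11)
psf = np.exp(-t**2 / 8.0); psf /= psf.sum()
blurred = np.convolve(x, psf, mode='same')
restored = richardson_lucy(blurred, psf, n_iter=7)
```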

  5. Iterative wave-front reconstruction in the Fourier domain.

    PubMed

    Bond, Charlotte Z; Correia, Carlos M; Sauvage, Jean-François; Neichel, Benoit; Fusco, Thierry

    2017-05-15

    The use of Fourier methods in wave-front reconstruction can significantly reduce the computation time for large telescopes with a high number of degrees of freedom. However, Fourier algorithms for discrete data require a rectangular data set that conforms to specific boundary requirements, whereas wave-front sensor data is typically defined over a circular domain (the telescope pupil). Here we present an iterative Gerchberg routine modified for the purposes of discrete wave-front reconstruction which adapts the measurement data (wave-front sensor slopes) for Fourier analysis, fulfilling the requirements of the fast Fourier transform (FFT) and providing accurate reconstruction. The routine is used in the adaptation step only and can be coupled to any other Wiener-like or least-squares method. We compare simulations using this method with previous Fourier methods and show an increase in performance in terms of Strehl ratio and a reduction in noise propagation for a 40×40 SPHERE-like adaptive optics system. For closed-loop operation with minimal iterations the Gerchberg method provides an improvement in Strehl ratio from 95.4% to 96.9% in K-band. This corresponds to an ~40 nm improvement in rms, and avoids the high spatial frequency errors present in other methods, providing an increase in contrast towards the edge of the correctable band.
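
    The adaptation step can be pictured as a Gerchberg-Papoulis style alternation between a band-limit constraint in the Fourier domain and the measured values inside the pupil. The sketch below works on a scalar 2-D field rather than wave-front sensor slopes, and the pupil, band limit, and iteration count are illustrative.

```python
import numpy as np

def gerchberg_extend(data, mask, band, n_iter=20):
    """Fill the samples outside the measurement mask so the full rectangular
    array is consistent with a band limit, re-imposing the measured samples
    inside the mask at every pass. `band` is a boolean mask of retained
    Fourier modes."""
    u = np.where(mask, data, 0.0)
    for _ in range(n_iter):
        U = np.fft.fft2(u)
        U[~band] = 0.0                    # enforce the band limit
        u = np.fft.ifft2(U).real
        u[mask] = data[mask]              # re-impose measured values
    return u

# toy setup: a band-limited field measured only inside a circular "pupil"
n = 64
yy, xx = np.mgrid[:n, :n]
mask = (xx - n / 2)**2 + (yy - n / 2)**2 < (0.4 * n)**2
k = np.fft.fftfreq(n)
band = (np.abs(k)[:, None] < 0.1) & (np.abs(k)[None, :] < 0.1)
rng = np.random.default_rng(0)
truth = np.fft.ifft2(np.fft.fft2(rng.standard_normal((n, n))) * band).real
extended = gerchberg_extend(truth, mask, band)
```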

  6. A multi-level solution algorithm for steady-state Markov chains

    NASA Technical Reports Server (NTRS)

    Horton, Graham; Leutenegger, Scott T.

    1993-01-01

    A new iterative algorithm, the multi-level algorithm, for the numerical solution of steady state Markov chains is presented. The method utilizes a set of recursively coarsened representations of the original system to achieve accelerated convergence. It is motivated by multigrid methods, which are widely used for fast solution of partial differential equations. Initial results of numerical experiments are reported, showing significant reductions in computation time, often an order of magnitude or more, relative to the Gauss-Seidel and optimal SOR algorithms for a variety of test problems. The multi-level method is compared and contrasted with the iterative aggregation-disaggregation algorithm of Takahashi.
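
    For context, the single-level Gauss-Seidel baseline that the multi-level algorithm is compared against can be written directly on the balance equations of a small chain; the three-state chain below is a made-up example, not one of the paper's test problems.

```python
import numpy as np

def gauss_seidel_stationary(P, tol=1e-12, max_sweeps=10000):
    """Gauss-Seidel sweeps for the stationary distribution of a discrete-time
    Markov chain with transition matrix P (rows sum to 1): update each
    pi_i from the balance equation pi_i = sum_j pi_j P[j, i] using the most
    recent values, then renormalize."""
    n = P.shape[0]
    pi = np.full(n, 1.0 / n)
    for _ in range(max_sweeps):
        old = pi.copy()
        for i in range(n):
            pi[i] = (pi @ P[:, i] - pi[i] * P[i, i]) / (1.0 - P[i, i])
        pi /= pi.sum()
        if np.max(np.abs(pi - old)) < tol:
            break
    return pi

# small birth-death chain as a test case
P = np.array([[0.50, 0.50, 0.00],
              [0.25, 0.50, 0.25],
              [0.00, 0.50, 0.50]])
print(gauss_seidel_stationary(P))   # ~ [0.25, 0.5, 0.25]
```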

  7. A polynomial primal-dual Dikin-type algorithm for linear programming

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jansen, B.; Roos, R.; Terlaky, T.

    1994-12-31

    We present a new primal-dual affine scaling method for linear programming. The search direction is obtained by using Dikin's original idea: minimize the objective function (which is the duality gap in a primal-dual algorithm) over a suitable ellipsoid. The search direction has no obvious relationship with the directions proposed in the literature so far. It guarantees a significant decrease in the duality gap in each iteration, and at the same time drives the iterates to the central path. The method admits a polynomial complexity bound that is better than the one for Monteiro et al.'s original primal-dual affine scaling method.

  8. Impact of view reduction in CT on radiation dose for patients

    NASA Astrophysics Data System (ADS)

    Parcero, E.; Flores, L.; Sánchez, M. G.; Vidal, V.; Verdú, G.

    2017-08-01

    Iterative methods have become a hot topic of research in computed tomography (CT) imaging because of their capacity to resolve the reconstruction problem from a limited number of projections. This allows the reduction of radiation exposure on patients during the data acquisition. The reconstruction time and the high radiation dose imposed on patients are the two major drawbacks in CT. To solve them effectively we adapted the method for sparse linear equations and sparse least squares (LSQR) with soft threshold filtering (STF) and the fast iterative shrinkage-thresholding algorithm (FISTA) to computed tomography reconstruction. The feasibility of the proposed methods is demonstrated numerically.
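
    A generic FISTA loop with soft thresholding, of the kind adapted in the work above, is shown below for an l1-regularized least-squares problem; the random system and the regularization weight are illustrative, and the CT-specific system matrix handling is not included.

```python
import numpy as np

def soft_threshold(x, lam):
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

def fista(A, b, lam, n_iter=200):
    """FISTA for min_x 0.5*||A x - b||^2 + lam*||x||_1."""
    L = np.linalg.norm(A, 2) ** 2      # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    y = x.copy()
    t = 1.0
    for _ in range(n_iter):
        grad = A.T @ (A @ y - b)
        x_new = soft_threshold(y - grad / L, lam / L)
        t_new = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t * t))
        y = x_new + (t - 1.0) / t_new * (x_new - x)
        x, t = x_new, t_new
    return x

# toy underdetermined system with a sparse solution
rng = np.random.default_rng(0)
A = rng.standard_normal((30, 100))
x_true = np.zeros(100); x_true[[5, 40, 77]] = [2.0, -1.5, 1.0]
x_rec = fista(A, A @ x_true, lam=0.05)
```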

  9. 3D algebraic iterative reconstruction for cone-beam x-ray differential phase-contrast computed tomography.

    PubMed

    Fu, Jian; Hu, Xinhua; Velroyen, Astrid; Bech, Martin; Jiang, Ming; Pfeiffer, Franz

    2015-01-01

    Due to the potential of compact imaging systems with magnified spatial resolution and contrast, cone-beam x-ray differential phase-contrast computed tomography (DPC-CT) has attracted significant interest. The currently proposed FDK reconstruction algorithm with the Hilbert imaginary filter induces severe cone-beam artifacts when the cone-beam angle becomes large. In this paper, we propose an algebraic iterative reconstruction (AIR) method for cone-beam DPC-CT and report its experimental results. This approach considers the reconstruction process as the optimization of a discrete representation of the object function to satisfy a system of equations that describes the cone-beam DPC-CT imaging modality. Unlike the conventional iterative algorithms for absorption-based CT, it applies the derivative operation to the forward projections of the reconstructed intermediate image to take into account the differential nature of the DPC projections. This method is based on the algebraic reconstruction technique, reconstructs the image ray by ray, and is expected to provide better derivative estimates during the iterations. This work comprises a numerical study of the algorithm and its experimental verification using a dataset measured with a three-grating interferometer and a mini-focus x-ray tube source. It is shown that the proposed method can reduce the cone-beam artifacts and performs better than FDK under large cone-beam angles. This algorithm is of interest for future cone-beam DPC-CT applications.

  10. A biological phantom for evaluation of CT image reconstruction algorithms

    NASA Astrophysics Data System (ADS)

    Cammin, J.; Fung, G. S. K.; Fishman, E. K.; Siewerdsen, J. H.; Stayman, J. W.; Taguchi, K.

    2014-03-01

    In recent years, iterative algorithms have become popular in diagnostic CT imaging to reduce noise or radiation dose to the patient. The non-linear nature of these algorithms leads to non-linearities in the imaging chain. However, the methods to assess the performance of CT imaging systems were developed assuming the linear process of filtered backprojection (FBP). Those methods may not be suitable any longer when applied to non-linear systems. In order to evaluate the imaging performance, a phantom is typically scanned and the image quality is measured using various indices. For reasons of practicality, cost, and durability, those phantoms often consist of simple water containers with uniform cylinder inserts. However, these phantoms do not represent the rich structure and patterns of real tissue accurately. As a result, the measured image quality or detectability performance for lesions may not reflect the performance on clinical images. The discrepancy between estimated and real performance may be even larger for iterative methods which sometimes produce "plastic-like", patchy images with homogeneous patterns. Consequently, more realistic phantoms should be used to assess the performance of iterative algorithms. We designed and constructed a biological phantom consisting of porcine organs and tissue that models a human abdomen, including liver lesions. We scanned the phantom on a clinical CT scanner and compared basic image quality indices between filtered backprojection and an iterative reconstruction algorithm.

  11. Orthobiologics in the Foot and Ankle.

    PubMed

    Temple, H Thomas; Malinin, Theodore I

    2016-12-01

    Many allogeneic biologic materials, by themselves or in combination with cells or cell products, may be transformative in healing or regeneration of musculoskeletal bone and soft tissues. By reconfiguring the size, shape, and methods of tissue preparation to improve deliverability and storage, unique iterations of traditional tissue scaffolds have emerged. These new iterations, combined with new cell technologies, have shaped an exciting platform of regenerative products that are effective and provide a bridge to newer and better methods of providing care for orthopedic foot and ankle patients. Copyright © 2016 Elsevier Inc. All rights reserved.

  12. Accuracy Improvement for Light-Emitting-Diode-Based Colorimeter by Iterative Algorithm

    NASA Astrophysics Data System (ADS)

    Yang, Pao-Keng

    2011-09-01

    We present a simple algorithm, combining an interpolating method with an iterative calculation, to enhance the resolution of spectral reflectance by removing the spectral broadening effect due to the finite bandwidth of the light-emitting diode (LED) from it. The proposed algorithm can be used to improve the accuracy of a reflective colorimeter using multicolor LEDs as probing light sources and is also applicable to the case when the probing LEDs have different bandwidths in different spectral ranges, to which the powerful deconvolution method cannot be applied.
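
    One standard way to undo a known broadening kernel iteratively is a Van Cittert-type correction loop, sketched below. This is not necessarily the authors' algorithm (their method also interpolates between LED bands), and the Gaussian LED profile and toy reflectance spectrum are illustrative.

```python
import numpy as np

def van_cittert(measured, kernel, n_iter=30, relax=1.0):
    """Iteratively add back the difference between the measurement and
    the re-broadened current estimate. Converges when the (normalized)
    kernel's Fourier spectrum stays within (0, 2/relax)."""
    estimate = measured.copy()
    for _ in range(n_iter):
        rebroadened = np.convolve(estimate, kernel, mode='same')
        estimate = estimate + relax * (measured - rebroadened)
    return estimate

# toy reflectance spectrum broadened by a Gaussian "LED bandwidth"
wl = np.arange(400.0, 700.0)                 # wavelength grid, nm
true_r = 0.5 + 0.3 * np.sin(wl / 20.0)
t = np.arange(-25, 26)
led = np.exp(-t**2 / 100.0); led /= led.sum()
measured = np.convolve(true_r, led, mode='same')
sharpened = van_cittert(measured, led, n_iter=30)
```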

  13. Gaussian-Beam/Physical-Optics Design Of Beam Waveguide

    NASA Technical Reports Server (NTRS)

    Veruttipong, Watt; Chen, Jacqueline C.; Bathker, Dan A.

    1993-01-01

    In iterative method of designing wideband beam-waveguide feed for paraboloidal-reflector antenna, Gaussian-beam approximation alternated with more nearly exact physical-optics analysis of diffraction. Includes curved and straight reflectors guiding radiation from feed horn to subreflector. For iterative design calculations, curved mirrors mathematically modeled as thin lenses. Each distance Li is combined length of two straight-line segments intersecting at one of flat mirrors. Method useful for designing beam-waveguide reflectors or mirrors required to have diameters approximately less than 30 wavelengths at one or more intended operating frequencies.

  14. Rate-Compatible Protograph LDPC Codes

    NASA Technical Reports Server (NTRS)

    Nguyen, Thuy V. (Inventor); Nosratinia, Aria (Inventor); Divsalar, Dariush (Inventor)

    2014-01-01

    Digital communication coding methods resulting in rate-compatible low density parity-check (LDPC) codes built from protographs. Described digital coding methods start with a desired code rate and a selection of the numbers of variable nodes and check nodes to be used in the protograph. Constraints are set to satisfy a linear minimum distance growth property for the protograph. All possible edges in the graph are searched for the minimum iterative decoding threshold and the protograph with the lowest iterative decoding threshold is selected. Protographs designed in this manner are used in decode and forward relay channels.

  15. Multidimensional radiative transfer with multilevel atoms. II. The non-linear multigrid method.

    NASA Astrophysics Data System (ADS)

    Fabiani Bendicho, P.; Trujillo Bueno, J.; Auer, L.

    1997-08-01

    A new iterative method for solving non-LTE multilevel radiative transfer (RT) problems in 1D, 2D or 3D geometries is presented. The scheme obtains the self-consistent solution of the kinetic and RT equations at the cost of only a few (<10) formal solutions of the RT equation. It combines, for the first time, non-linear multigrid iteration (Brandt, 1977, Math. Comp. 31, 333; Hackbusch, 1985, Multi-Grid Methods and Applications, Springer-Verlag, Berlin), an efficient multilevel RT scheme based on Gauss-Seidel iterations (cf. Trujillo Bueno & Fabiani Bendicho, 1995ApJ...455..646T), and accurate short-characteristics formal solution techniques. By combining a valid stopping criterion with a nested-grid strategy, a converged solution with the desired true error is automatically guaranteed. Contrary to current operator splitting methods, the very high convergence speed of the new RT method does not deteriorate when the grid spatial resolution is increased. With this non-linear multigrid method non-LTE problems discretized on N grid points are solved in O(N) operations. The nested multigrid RT method presented here is, thus, particularly attractive in complicated multilevel transfer problems where small grid-sizes are required. The properties of the method are analyzed both analytically and with illustrative multilevel calculations for Ca II in 1D and 2D schematic model atmospheres.

  16. A Mixed Methods Bounded Case Study: Data-Driven Decision Making within Professional Learning Communities for Response to Intervention

    ERIC Educational Resources Information Center

    Rodriguez, Gabriel R.

    2017-01-01

    A growing number of schools are implementing PLCs to address school improvement; staff engage with data to identify student needs and determine instructional interventions. This is a starting point for engaging in the iterative process of learning for the teacher in order to increase student learning (Hord & Sommers, 2008). The iterative process…

  17. A new iterative scheme for solving the discrete Smoluchowski equation

    NASA Astrophysics Data System (ADS)

    Smith, Alastair J.; Wells, Clive G.; Kraft, Markus

    2018-01-01

    This paper introduces a new iterative scheme for solving the discrete Smoluchowski equation and explores the numerical convergence properties of the method for a range of kernels admitting analytical solutions, in addition to some more physically realistic kernels typically used in kinetics applications. The solver is extended to spatially dependent problems with non-uniform velocities and its performance investigated in detail.

  18. An iterative transformation procedure for numerical solution of flutter and similar characteristic-value problems

    NASA Technical Reports Server (NTRS)

    Gossard, Myron L

    1952-01-01

    An iterative transformation procedure suggested by H. Wielandt for numerical solution of flutter and similar characteristic-value problems is presented. Application of this procedure to ordinary natural-vibration problems and to flutter problems is shown by numerical examples. Comparisons of computed results with experimental values and with results obtained by other methods of analysis are made.

  19. Hidden Connections between Regression Models of Strain-Gage Balance Calibration Data

    NASA Technical Reports Server (NTRS)

    Ulbrich, Norbert

    2013-01-01

    Hidden connections between regression models of wind tunnel strain-gage balance calibration data are investigated. These connections become visible whenever balance calibration data is supplied in its design format and both the Iterative and Non-Iterative Method are used to process the data. First, it is shown how the regression coefficients of the fitted balance loads of a force balance can be approximated by using the corresponding regression coefficients of the fitted strain-gage outputs. Then, data from the manual calibration of the Ames MK40 six-component force balance is chosen to illustrate how estimates of the regression coefficients of the fitted balance loads can be obtained from the regression coefficients of the fitted strain-gage outputs. The study illustrates that load predictions obtained by applying the Iterative or the Non-Iterative Method originate from two related regression solutions of the balance calibration data as long as balance loads are given in the design format of the balance, gage outputs behave highly linear, strict statistical quality metrics are used to assess regression models of the data, and regression model term combinations of the fitted loads and gage outputs can be obtained by a simple variable exchange.

  20. Inverse imaging of the breast with a material classification technique.

    PubMed

    Manry, C W; Broschat, S L

    1998-03-01

    In recent publications [Chew et al., IEEE Trans. Biomed. Eng. BME-9, 218-225 (1990); Borup et al., Ultrason. Imaging 14, 69-85 (1992)] the inverse imaging problem has been solved by means of a two-step iterative method. In this paper, a third step is introduced for ultrasound imaging of the breast. In this step, which is based on statistical pattern recognition, classification of tissue types and a priori knowledge of the anatomy of the breast are integrated into the iterative method. Use of this material classification technique results in more rapid convergence to the inverse solution--approximately 40% fewer iterations are required--as well as greater accuracy. In addition, tumors are detected early in the reconstruction process. Results for reconstructions of a simple two-dimensional model of the human breast are presented. These reconstructions are extremely accurate when system noise and variations in tissue parameters are not too great. However, for the algorithm used, degradation of the reconstructions and divergence from the correct solution occur when system noise and variations in parameters exceed threshold values. Even in this case, however, tumors are still identified within a few iterations.
