A technique for accelerating the convergence of restarted GMRES
DOE Office of Scientific and Technical Information (OSTI.GOV)
Baker, A H; Jessup, E R; Manteuffel, T
2004-03-09
We have observed that the residual vectors at the end of each restart cycle of restarted GMRES often alternate direction in a cyclic fashion, thereby slowing convergence. We present a new technique for accelerating the convergence of restarted GMRES by disrupting this alternating pattern. The new algorithm resembles a full conjugate gradient method with polynomial preconditioning, and its implementation requires minimal changes to the standard restarted GMRES algorithm.
Spectrum transformation for divergent iterations
NASA Technical Reports Server (NTRS)
Gupta, Murli M.
1991-01-01
Certain spectrum transformation techniques are described that can be used to transform a diverging iteration into a converging one. Two techniques, called spectrum scaling and spectrum enveloping, are considered, and methods for obtaining the optimum values of the transformation parameters are discussed. Numerical examples are given to show how these techniques transform diverging iterations into converging ones; they can also be used to accelerate the convergence of otherwise convergent iterations.
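The spectrum-scaling idea can be sketched in a few lines. The code below is my illustration, not the paper's algorithm: a fixed-point iteration x ← Gx + c whose iteration matrix has an eigenvalue outside the unit disk is damped with a scaling parameter w, which maps each eigenvalue λ of G to (1 − w) + wλ. The matrix G (taken diagonal for simplicity), the vector c, and the value w = 0.5 are invented for the example.

```python
def iterate(g, c, w, steps):
    """Scaled fixed-point iteration x <- (1-w)*x + w*(G*x + c) for diagonal G."""
    x = [0.0] * len(g)
    for _ in range(steps):
        x = [(1 - w) * xi + w * (gi * xi + ci)
             for xi, gi, ci in zip(x, g, c)]
    return x

g = [-1.5, 0.5]                                   # eigenvalues of diagonal G
c = [1.0, 1.0]
exact = [ci / (1 - gi) for ci, gi in zip(c, g)]   # fixed point of x = Gx + c

bad = iterate(g, c, w=1.0, steps=30)    # unscaled: |-1.5| > 1, so it diverges
good = iterate(g, c, w=0.5, steps=60)   # scaled eigenvalues: -0.25 and 0.75

print(abs(bad[0]))                                    # huge: divergence
print(max(abs(a - b) for a, b in zip(good, exact)))   # tiny: convergence
```

With w = 0.5 the eigenvalues −1.5 and 0.5 are mapped to −0.25 and 0.75, both inside the unit disk, so the scaled iteration converges to the same fixed point the plain iteration fails to reach.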
Convergence acceleration of molecular dynamics methods for shocked materials using velocity scaling
NASA Astrophysics Data System (ADS)
Taylor, DeCarlos E.
2017-03-01
In this work, a convergence acceleration method applicable to extended system molecular dynamics techniques for shock simulations of materials is presented. The method uses velocity scaling to reduce the instantaneous value of the Rankine-Hugoniot conservation of energy constraint used in extended system molecular dynamics methods to more rapidly drive the system towards a converged Hugoniot state. When used in conjunction with the constant stress Hugoniostat method, the velocity scaled trajectories show faster convergence to the final Hugoniot state with little difference observed in the converged Hugoniot energy, pressure, volume and temperature. A derivation of the scale factor is presented and the performance of the technique is demonstrated using the boron carbide armour ceramic as a test material. It is shown that simulations of boron carbide Hugoniot states, from 5 to 20 GPa, using both a classical Tersoff potential and an ab initio density functional, converge more rapidly when the velocity scaling algorithm is applied. The accelerated convergence afforded by the current algorithm enables more rapid determination of Hugoniot states, thus reducing the computational demand of such studies when using expensive ab initio or classical potentials.
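The basic mechanics of velocity scaling can be shown with a generic rescaling step. This is a hedged sketch of the idea only, not the paper's Hugoniostat scheme: multiplying every velocity by s = sqrt(ke_target / ke_current) changes the kinetic energy sum(0.5·m·v²) by exactly s². The masses, velocities, and target are invented.

```python
import math

def kinetic_energy(masses, velocities):
    """Total kinetic energy sum(0.5 * m * v^2) of a set of 1-D particles."""
    return sum(0.5 * m * v * v for m, v in zip(masses, velocities))

def rescale_velocities(masses, velocities, ke_target):
    """Uniformly scale velocities so the kinetic energy equals ke_target."""
    s = math.sqrt(ke_target / kinetic_energy(masses, velocities))
    return [s * v for v in velocities]

m = [1.0, 2.0, 4.0]
v = [1.0, -0.5, 0.25]
v_new = rescale_velocities(m, v, ke_target=2.0)
print(kinetic_energy(m, v_new))   # 2.0 by construction
```

In the paper's setting the target would come from the Rankine-Hugoniot energy constraint rather than being chosen by hand.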
Convergence acceleration of viscous flow computations
NASA Technical Reports Server (NTRS)
Johnson, G. M.
1982-01-01
A multiple-grid convergence acceleration technique introduced for application to the solution of the Euler equations by means of Lax-Wendroff algorithms is extended to treat compressible viscous flow. Computational results are presented for the solution of the thin-layer version of the Navier-Stokes equations using the explicit MacCormack algorithm, accelerated by a convective coarse-grid scheme. Extensions and generalizations are mentioned.
Huang, Hsuan-Ming; Hsiao, Ing-Tsung
2017-01-01
Over the past decade, image quality in low-dose computed tomography has been greatly improved by various compressive sensing- (CS-) based reconstruction methods. However, these methods have some disadvantages including high computational cost and slow convergence rate. Many different speed-up techniques for CS-based reconstruction algorithms have been developed. The purpose of this paper is to propose a fast reconstruction framework that combines a CS-based reconstruction algorithm with several speed-up techniques. First, total difference minimization (TDM) was implemented using soft-threshold filtering (STF). Second, we combined TDM-STF with the ordered subsets transmission (OSTR) algorithm for accelerating the convergence. To further speed up the convergence of the proposed method, we applied the power factor and the fast iterative shrinkage thresholding algorithm to OSTR and TDM-STF, respectively. Results obtained from simulation and phantom studies showed that many speed-up techniques could be combined to greatly improve the convergence speed of a CS-based reconstruction algorithm. More importantly, the increased computation time (≤10%) was minor compared to the acceleration provided by the proposed method. In this paper, we have presented a CS-based reconstruction framework that combines several acceleration techniques. Both simulation and phantom studies provide evidence that the proposed method has the potential to satisfy the requirement of fast image reconstruction in practical CT.
NASA Astrophysics Data System (ADS)
Bosch, Carl; Degirmenci, Soysal; Barlow, Jason; Mesika, Assaf; Politte, David G.; O'Sullivan, Joseph A.
2016-05-01
X-ray computed tomography reconstruction for medical, security and industrial applications has evolved through 40 years of experience with rotating gantry scanners using analytic reconstruction techniques such as filtered back projection (FBP). In parallel, research into statistical iterative reconstruction algorithms has evolved to apply to sparse view scanners in nuclear medicine, low data rate scanners in Positron Emission Tomography (PET) [5, 7, 10] and more recently to reduce exposure to ionizing radiation in conventional X-ray CT scanners. Multiple approaches to statistical iterative reconstruction have been developed based primarily on variations of expectation maximization (EM) algorithms. The primary benefit of EM algorithms is the guarantee of convergence that is maintained when iterative corrections are made within the limits of convergent algorithms. The primary disadvantage, however, is that strict adherence to the correction limits of convergent algorithms extends the number of iterations and the ultimate timeline to complete a 3D volumetric reconstruction. Researchers have studied methods to accelerate convergence through more aggressive corrections [1], ordered subsets [1, 3, 4, 9] and spatially variant image updates. In this paper we describe the development of an AM reconstruction algorithm with accelerated convergence for use in a real-time explosive detection application for aviation security. By judiciously applying multiple acceleration techniques and advanced GPU processing architectures, we are able to perform 3D reconstruction of scanned passenger baggage at a rate of 75 slices per second. Analysis of the results on stream-of-commerce passenger bags demonstrates accelerated convergence by factors of 8 to 15, when comparing images from accelerated and strictly convergent algorithms.
On Convergence Acceleration Techniques for Unstructured Meshes
NASA Technical Reports Server (NTRS)
Mavriplis, Dimitri J.
1998-01-01
A discussion of convergence acceleration techniques as they relate to computational fluid dynamics problems on unstructured meshes is given. Rather than providing a detailed description of particular methods, the various different building blocks of current solution techniques are discussed and examples of solution strategies using one or several of these ideas are given. Issues relating to unstructured grid CFD problems are given additional consideration, including suitability of algorithms to current hardware trends, memory and CPU tradeoffs, treatment of non-linearities, and the development of efficient strategies for handling anisotropy-induced stiffness. The outlook for future potential improvements is also discussed.
NASA Astrophysics Data System (ADS)
Eftekhar, Roya; Hu, Hao; Zheng, Yingcai
2018-06-01
The iterative solution process is fundamental in seismic inversion, for example in full-waveform inversion and some inverse scattering methods. However, the iteration can converge slowly or even diverge depending on the initial model. We propose to apply the Shanks transformation (ST for short) to accelerate the convergence of the iterative solution. ST is a local nonlinear transformation, which transforms a series locally into another series with an improved convergence property. ST works by separating the series into a smooth background trend, called the secular term, and an oscillatory transient term; it then accelerates the convergence of the secular term. Since the transformation is local, we do not need to know all the terms in the original series, which is very important in the numerical implementation. The ST performance was tested numerically for both the forward Born series and the inverse scattering series (ISS). The ST has been shown to accelerate the convergence in several examples, including three examples of forward modeling using the Born series and two examples of velocity inversion based on a particular type of the ISS. We observe that ST is effective in accelerating the convergence and that it can achieve convergence even for a weakly divergent scattering series. As such, it provides a useful technique to invert for a large-contrast medium perturbation in seismic inversion.
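The Shanks transformation itself is compact enough to sketch. The function below is the standard textbook form S(s_n) = s_{n+1} − (s_{n+1} − s_n)² / (s_{n+1} − 2s_n + s_{n−1}), not the authors' code, and it is applied here to the partial sums of the alternating harmonic series (limit ln 2) rather than to a Born series.

```python
import math

def shanks(s):
    """One Shanks transform of a sequence of partial sums s."""
    out = []
    for n in range(1, len(s) - 1):
        denom = s[n + 1] - 2.0 * s[n] + s[n - 1]
        out.append(s[n + 1] - (s[n + 1] - s[n]) ** 2 / denom)
    return out

# Partial sums of the alternating harmonic series, which converges to ln 2.
s, total = [], 0.0
for n in range(1, 12):
    total += (-1) ** (n + 1) / n
    s.append(total)

once = shanks(s)
twice = shanks(once)
print(abs(s[-1] - math.log(2)))      # ~4e-2: the raw sums converge slowly
print(abs(twice[-1] - math.log(2)))  # far smaller after two transforms
```

Because the transform can be applied to any three consecutive terms, it never needs the whole series, which mirrors the locality property the abstract emphasizes.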
Convergence acceleration of the Proteus computer code with multigrid methods
NASA Technical Reports Server (NTRS)
Demuren, A. O.; Ibraheem, S. O.
1992-01-01
Presented here is the first part of a study to implement convergence acceleration techniques based on the multigrid concept in the Proteus computer code. A review is given of previous studies on the implementation of multigrid methods in computer codes for compressible flow analysis. Also presented is a detailed stability analysis of upwind and central-difference based numerical schemes for solving the Euler and Navier-Stokes equations. Results are given of a convergence study of the Proteus code on computational grids of different sizes. The results presented here form the foundation for the implementation of multigrid methods in the Proteus code.
Reliability enhancement of Navier-Stokes codes through convergence acceleration
NASA Technical Reports Server (NTRS)
Merkle, Charles L.; Dulikravich, George S.
1995-01-01
Methods for enhancing the reliability of Navier-Stokes computer codes through improving convergence characteristics are presented. Improving these characteristics decreases the likelihood of code unreliability and user interventions in a design environment. The problem referred to as 'stiffness' in the governing equations for propulsion-related flowfields is investigated, particularly in regard to common sources of equation stiffness that lead to convergence degradation of CFD algorithms. Von Neumann stability theory is employed as a tool to study the convergence difficulties involved. Based on the stability results, improved algorithms are devised to ensure efficient convergence in different situations. A number of test cases are considered to confirm a correlation between stability theory and numerical convergence. Examples of turbulent and reacting flow are presented, and a generalized form of the preconditioning matrix is derived to handle these problems, i.e., the problems involving additional differential equations for describing the transport of turbulent kinetic energy, dissipation rate and chemical species. Algorithms for unsteady computations are considered. The extension of the preconditioning techniques and algorithms derived for Navier-Stokes computations to three-dimensional flow problems is discussed. New methods to accelerate the convergence of iterative schemes for the numerical integration of systems of partial differential equations are developed, with a special emphasis on the acceleration of convergence on highly clustered grids.
Parameter investigation with line-implicit lower-upper symmetric Gauss-Seidel on 3D stretched grids
NASA Astrophysics Data System (ADS)
Otero, Evelyn; Eliasson, Peter
2015-03-01
An implicit lower-upper symmetric Gauss-Seidel (LU-SGS) solver has been implemented as a multigrid smoother combined with a line-implicit method as an acceleration technique for Reynolds-averaged Navier-Stokes (RANS) simulation on stretched meshes. The computational fluid dynamics code concerned is Edge, an edge-based finite volume Navier-Stokes flow solver for structured and unstructured grids. The paper focuses on the investigation of the parameters related to our novel line-implicit LU-SGS solver for convergence acceleration on 3D RANS meshes. The LU-SGS parameters are the Courant-Friedrichs-Lewy number, the left-hand side dissipation, and the convergence of the iterative solution of the linear problem arising from the linearisation of the implicit scheme. The influence of these parameters on the overall convergence is presented and default values are defined for maximum convergence acceleration. The optimised settings are applied to 3D RANS computations for comparison with explicit and line-implicit Runge-Kutta smoothing. For most of the cases, a computing-time acceleration of the order of 2 is found, depending on the mesh type, namely the boundary layer, and the magnitude of residual reduction.
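A scalar analogue of the lower-upper symmetric sweep is the classical symmetric Gauss-Seidel iteration: one forward (lower-triangular) sweep followed by one backward (upper-triangular) sweep per cycle. The sketch below is purely illustrative, not the Edge solver; the small diagonally dominant system is invented.

```python
def sgs_solve(a, b, sweeps):
    """Symmetric Gauss-Seidel: a forward then a backward sweep per cycle."""
    n = len(b)
    x = [0.0] * n
    for _ in range(sweeps):
        for i in range(n):                      # forward (lower-triangular) sweep
            s = sum(a[i][j] * x[j] for j in range(n) if j != i)
            x[i] = (b[i] - s) / a[i][i]
        for i in reversed(range(n)):            # backward (upper-triangular) sweep
            s = sum(a[i][j] * x[j] for j in range(n) if j != i)
            x[i] = (b[i] - s) / a[i][i]
    return x

a = [[4.0, -1.0, 0.0],
     [-1.0, 4.0, -1.0],
     [0.0, -1.0, 4.0]]
b = [3.0, 2.0, 3.0]
x = sgs_solve(a, b, sweeps=25)
print(x)   # close to the exact solution [1.0, 1.0, 1.0]
```

In the CFD setting the "unknowns" are cell states and the sweeps follow a mesh ordering, but the lower/upper splitting is the same idea.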
Parallel performance investigations of an unstructured mesh Navier-Stokes solver
NASA Technical Reports Server (NTRS)
Mavriplis, Dimitri J.
2000-01-01
A Reynolds-averaged Navier-Stokes solver based on unstructured mesh techniques for analysis of high-lift configurations is described. The method makes use of an agglomeration multigrid solver for convergence acceleration. Implicit line-smoothing is employed to relieve the stiffness associated with highly stretched meshes. A GMRES technique is also implemented to speed convergence at the expense of additional memory usage. The solver is cache efficient and fully vectorizable, and is parallelized using a two-level hybrid MPI-OpenMP implementation suitable for shared and/or distributed memory architectures, as well as clusters of shared memory machines. Convergence and scalability results are illustrated for various high-lift cases.
Scaled Heavy-Ball Acceleration of the Richardson-Lucy Algorithm for 3D Microscopy Image Restoration.
Wang, Hongbin; Miller, Paul C
2014-02-01
The Richardson-Lucy algorithm is one of the most important algorithms in image deconvolution. However, a drawback is its slow convergence. A significant acceleration was obtained using the technique proposed by Biggs and Andrews (BA), which is implemented in the deconvlucy function of the MATLAB image processing toolbox. The BA method was developed heuristically with no proof of convergence. In this paper, we introduce the heavy-ball (H-B) method for Poisson data optimization and extend it to a scaled H-B method, which includes the BA method as a special case. The method has a proof of a convergence rate of O(k^-2), where k is the number of iterations. We demonstrate the superior convergence performance of the scaled H-B method, with a speedup factor of five, on both synthetic and real 3D images.
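The heavy-ball update itself is one line: the current gradient step plus a momentum term proportional to the previous displacement. The sketch below shows Polyak's heavy-ball method on an invented ill-conditioned quadratic; it is not the paper's scaled H-B deconvolution algorithm, which works with Poisson data, and the step and momentum values are the standard textbook choices for this quadratic.

```python
def heavy_ball(grad, x0, alpha, beta, steps):
    """Polyak heavy-ball: x+ = x - alpha*grad(x) + beta*(x - x_prev)."""
    x_prev, x = list(x0), list(x0)
    for _ in range(steps):
        g = grad(x)
        x_next = [xi - alpha * gi + beta * (xi - pi)
                  for xi, gi, pi in zip(x, g, x_prev)]
        x_prev, x = x, x_next
    return x

# Ill-conditioned quadratic f(x) = 0.5*(x1^2 + 100*x2^2); the minimiser is 0.
grad = lambda x: [x[0], 100.0 * x[1]]

# beta = 0 reduces to plain gradient descent with its optimal fixed step.
plain = heavy_ball(grad, [1.0, 1.0], alpha=2 / 101, beta=0.0, steps=100)
polyak = heavy_ball(grad, [1.0, 1.0], alpha=4 / 121, beta=(9 / 11) ** 2, steps=100)

print(max(abs(v) for v in plain))    # ~0.1: gradient descent crawls
print(max(abs(v) for v in polyak))   # orders of magnitude smaller
```

On a quadratic with condition number 100 the momentum term improves the per-step contraction from roughly 1 − 1/κ to roughly 1 − 1/√κ, which is the effect the printout illustrates.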
FBILI method for multi-level line transfer
NASA Astrophysics Data System (ADS)
Kuzmanovska, O.; Atanacković, O.; Faurobert, M.
2017-07-01
Efficient non-LTE multilevel radiative transfer calculations are needed for a proper interpretation of astrophysical spectra. In particular, realistic simulations of time-dependent processes or multi-dimensional phenomena require that the iterative method used to solve such a non-linear and non-local problem be as fast as possible. There are several multilevel codes based on efficient iterative schemes that provide a very high convergence rate, especially when combined with mathematical acceleration techniques. The Forth-and-Back Implicit Lambda Iteration (FBILI) developed by Atanacković-Vukmanović et al. [1] is a Gauss-Seidel-type iterative scheme characterized by a very high convergence rate without the need to complement it with additional acceleration techniques. In this paper we describe more explicitly the implementation of the FBILI method for multilevel atom line transfer in 1D. We also consider some of its variants and investigate their convergence properties by solving the benchmark problem of CaII line formation in the solar atmosphere. Finally, we compare our solutions with results obtained with the well-known code MULTI.
Algorithms for the Euler and Navier-Stokes equations for supercomputers
NASA Technical Reports Server (NTRS)
Turkel, E.
1985-01-01
The steady state Euler and Navier-Stokes equations are considered for both compressible and incompressible flow. Methods are found for accelerating the convergence to a steady state. This acceleration is based on preconditioning the system so that it is no longer time consistent. In order that the acceleration technique be scheme-independent, this preconditioning is done at the differential equation level. Applications are presented for very slow flows and also for the incompressible equations.
Coarse mesh and one-cell block inversion based diffusion synthetic acceleration
NASA Astrophysics Data System (ADS)
Kim, Kang-Seog
DSA (Diffusion Synthetic Acceleration) has been developed to accelerate the SN transport iteration. We have developed solution techniques for the diffusion equations of FLBLD (Fully Lumped Bilinear Discontinuous), SCB (Simple Corner Balance) and UCB (Upstream Corner Balance) modified 4-step DSA in x-y geometry. Our first multi-level method includes a block Gauss-Seidel iteration for the discontinuous diffusion equation, uses the continuous diffusion equation derived from the asymptotic analysis, and avoids void cell calculation. We implemented this multi-level procedure and performed model problem calculations. The results showed that the FLBLD, SCB and UCB modified 4-step DSA schemes with this multi-level technique are unconditionally stable and rapidly convergent. We suggested a simplified multi-level technique for FLBLD, SCB and UCB modified 4-step DSA. This new procedure does not include iterations on the diffusion calculation or the residual calculation. Fourier analysis results showed that this new procedure was as rapidly convergent as conventional modified 4-step DSA. We developed new DSA procedures coupled with 1-CI (one-cell block inversion) transport, which can be easily parallelized. We showed that 1-CI based DSA schemes preceded by SI (Source Iteration) are efficient and rapidly convergent for LD (Linear Discontinuous) and LLD (Lumped Linear Discontinuous) in slab geometry and for BLD (Bilinear Discontinuous) and FLBLD in x-y geometry. For 1-CI based DSA without SI in slab geometry, the results showed that this procedure is very efficient and effective for all cases. We also showed that 1-CI based DSA in x-y geometry was not effective for thin mesh spacings, but is effective and rapidly convergent for intermediate and thick mesh spacings. We demonstrated that the diffusion equation discretized on a coarse mesh could be employed to accelerate the transport equation.
Our results showed that coarse mesh DSA is unconditionally stable and is as rapidly convergent as fine mesh DSA in slab geometry. For x-y geometry our coarse mesh DSA is very effective for thin and intermediate mesh spacings independent of the scattering ratio, but is not effective for purely scattering problems and high aspect ratio zoning. However, if the scattering ratio is less than about 0.95, this procedure is very effective for all mesh spacings.
Multigrid Strategies for Viscous Flow Solvers on Anisotropic Unstructured Meshes
NASA Technical Reports Server (NTRS)
Mavriplis, Dimitri J.
1998-01-01
Unstructured multigrid techniques for relieving the stiffness associated with high-Reynolds number viscous flow simulations on extremely stretched grids are investigated. One approach consists of employing a semi-coarsening or directional-coarsening technique, based on the directions of strong coupling within the mesh, in order to construct more optimal coarse grid levels. An alternate approach is developed which employs directional implicit smoothing with regular fully coarsened multigrid levels. The directional implicit smoothing is obtained by constructing implicit lines in the unstructured mesh based on the directions of strong coupling. Both approaches yield large increases in convergence rates over the traditional explicit full-coarsening multigrid algorithm. However, maximum benefits are achieved by combining the two approaches in a coupled manner into a single algorithm. An order of magnitude increase in convergence rate over the traditional explicit full-coarsening algorithm is demonstrated, and convergence rates for high-Reynolds number viscous flows which are independent of the grid aspect ratio are obtained. Further acceleration is provided by incorporating low-Mach-number preconditioning techniques, and a Newton-GMRES strategy which employs the multigrid scheme as a preconditioner. The compounding effects of these various techniques on speed of convergence are documented through several example test cases.
Reliability enhancement of Navier-Stokes codes through convergence enhancement
NASA Technical Reports Server (NTRS)
Choi, K.-Y.; Dulikravich, G. S.
1993-01-01
Reduction of the total computing time required by an iterative algorithm for solving Navier-Stokes equations is an important aspect of making the existing and future analysis codes more cost effective. Several attempts have been made to accelerate the convergence of an explicit Runge-Kutta time-stepping algorithm. These acceleration methods are based on local time stepping, implicit residual smoothing, enthalpy damping, and multigrid techniques. Also, an extrapolation procedure based on the power method and the Minimal Residual Method (MRM) were applied to Jameson's multigrid algorithm. The MRM uses the same values of optimal weights for the corrections to every equation in a system and has not been shown to accelerate the scheme without multigriding. Our Distributed Minimal Residual (DMR) method, based on our General Nonlinear Minimal Residual (GNLMR) method, allows each component of the solution vector in a system of equations to have its own convergence speed. The DMR method was found capable of reducing the computation time by 10-75 percent depending on the test case and grid used. Recently, we have developed and tested a new method, termed Sensitivity Based DMR or SBMR method, that is easier to implement in different codes and is even more robust and computationally efficient than our DMR method.
Distributed support vector machine in master-slave mode.
Chen, Qingguo; Cao, Feilong
2018-05-01
It is well known that the support vector machine (SVM) is an effective learning algorithm. The alternating direction method of multipliers (ADMM) has emerged as a powerful technique for solving distributed optimisation models. This paper proposes a distributed SVM algorithm in a master-slave mode (MS-DSVM), which integrates a distributed SVM with ADMM in a master-slave configuration in which the master node is connected to the slave nodes, so that results can be broadcast. The distributed SVM is regarded as a regularised optimisation problem and modelled as a series of convex optimisation sub-problems that are solved by ADMM. Additionally, the over-relaxation technique is utilised to accelerate the convergence rate of the proposed MS-DSVM. Our theoretical analysis demonstrates that the proposed MS-DSVM has linear convergence, the fastest convergence rate among existing standard distributed ADMM algorithms. Numerical examples demonstrate that the convergence and accuracy of the proposed MS-DSVM are superior to those of existing methods under the ADMM framework. Copyright © 2018 Elsevier Ltd. All rights reserved.
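Over-relaxation in ADMM can be illustrated on a scalar toy problem rather than the paper's distributed SVM. In this hedged sketch the composite objective, the penalty ρ = 1, and the relaxation factor α = 1.8 are all invented: the x-update is replaced by x̂ = αx + (1 − α)z before the z- and dual updates, and α in (1, 2) typically speeds up convergence.

```python
def soft(v, t):
    """Soft-threshold operator, the proximal map of t*|.|."""
    return (abs(v) - t) * (1 if v > 0 else -1) if abs(v) > t else 0.0

def admm(alpha, iters, rho=1.0):
    """ADMM for min 0.5*(x-3)^2 + |z| s.t. x = z, with over-relaxation alpha."""
    z = u = 0.0
    for _ in range(iters):
        x = (3.0 + rho * (z - u)) / (1.0 + rho)   # prox of 0.5*(x-3)^2
        x_hat = alpha * x + (1.0 - alpha) * z      # over-relaxation step
        z = soft(x_hat + u, 1.0 / rho)             # prox of |z|
        u = u + x_hat - z                          # scaled dual update
    return z

plain = admm(alpha=1.0, iters=10)
relaxed = admm(alpha=1.8, iters=10)
# True minimiser: d/dx [0.5*(x-3)^2 + x] = 0 for x > 0  ->  x* = 2.
print(abs(plain - 2.0), abs(relaxed - 2.0))
```

On this problem the plain iteration contracts by about 0.5 per step while the over-relaxed one contracts by about 0.1, so after ten iterations the over-relaxed error is several orders of magnitude smaller.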
NASA Technical Reports Server (NTRS)
Cheng, D. Y.
1971-01-01
Converging, coaxial accelerator electrode configuration operates in vacuum as plasma gun. Plasma forms by periodic injections of high pressure gas that is ionized by electrical discharges. Deflagration mode of discharge provides acceleration, and converging contours of plasma gun provide focusing.
Choi, Kihwan; Li, Ruijiang; Nam, Haewon; Xing, Lei
2014-06-21
As a solution to iterative CT image reconstruction, first-order methods are prominent for their large-scale capability and fast convergence rate. In practice, a CT system matrix with a large condition number may lead to slow convergence despite the theoretically promising upper bound. The aim of this study is to develop a Fourier-based scaling technique to enhance the convergence speed of first-order methods applied to CT image reconstruction. Instead of working in the projection domain, we transform the projection data and construct a data fidelity model in Fourier space. Inspired by the filtered backprojection formalism, the data are appropriately weighted in Fourier space. We formulate an optimization problem based on weighted least-squares in Fourier space and total-variation (TV) regularization in image space for parallel-beam, fan-beam and cone-beam CT geometry. To achieve the maximum computational speed, the optimization problem is solved using a fast iterative shrinkage-thresholding algorithm with backtracking line search and GPU implementation of projection/backprojection. The performance of the proposed algorithm is demonstrated through a series of digital simulation and experimental phantom studies. The results are compared with the existing TV regularized techniques based on statistics-based weighted least-squares as well as the basic algebraic reconstruction technique. The proposed Fourier-based compressed sensing (CS) method significantly improves both the image quality and the convergence rate compared to the existing CS techniques.
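The fast iterative shrinkage-thresholding algorithm (FISTA) mentioned above is easy to state in full. The sketch below runs it on an invented separable quadratic-plus-ℓ1 toy problem, not the paper's Fourier-weighted CT model; the Lipschitz constant L, the ℓ1 weight, and the closed-form minimiser are all specific to this toy.

```python
import math

# Toy problem: minimise 0.5*(x1-1)^2 + 50*(x2-1)^2 + LAM*||x||_1.
LAM, L = 0.1, 100.0          # l1 weight and gradient Lipschitz constant
grad = lambda x: [x[0] - 1.0, 100.0 * (x[1] - 1.0)]
soft = lambda v, t: math.copysign(max(abs(v) - t, 0.0), v)

def fista(x, steps):
    """FISTA: proximal gradient step at the extrapolated point y, then momentum."""
    y, t = list(x), 1.0
    for _ in range(steps):
        g = grad(y)
        x_next = [soft(yi - gi / L, LAM / L) for yi, gi in zip(y, g)]
        t_next = (1.0 + math.sqrt(1.0 + 4.0 * t * t)) / 2.0
        y = [xn + ((t - 1.0) / t_next) * (xn - xo)
             for xn, xo in zip(x_next, x)]
        x, t = x_next, t_next
    return x

star = [0.9, 0.999]          # closed-form minimiser of the toy problem
x = fista([0.0, 0.0], steps=2000)
err = max(abs(a - b) for a, b in zip(x, star))
print(err)                   # small: FISTA converges to the minimiser
```

The extrapolation sequence t_k is what distinguishes FISTA from the plain proximal gradient (ISTA) step and yields the O(1/k²) objective-error guarantee.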
Exploring high dimensional free energy landscapes: Temperature accelerated sliced sampling
NASA Astrophysics Data System (ADS)
Awasthi, Shalini; Nair, Nisanth N.
2017-03-01
Biased sampling of collective variables is widely used to accelerate rare events in molecular simulations and to explore free energy surfaces. However, computational efficiency of these methods decreases with increasing number of collective variables, which severely limits the predictive power of the enhanced sampling approaches. Here we propose a method called Temperature Accelerated Sliced Sampling (TASS) that combines temperature accelerated molecular dynamics with umbrella sampling and metadynamics to sample the collective variable space in an efficient manner. The presented method can sample a large number of collective variables and is advantageous for controlled exploration of broad and unbound free energy basins. TASS is also shown to achieve quick free energy convergence and is practically usable with ab initio molecular dynamics techniques.
Efficient Multi-Stage Time Marching for Viscous Flows via Local Preconditioning
NASA Technical Reports Server (NTRS)
Kleb, William L.; Wood, William A.; vanLeer, Bram
1999-01-01
A new method has been developed to accelerate the convergence of explicit time-marching, laminar, Navier-Stokes codes through the combination of local preconditioning and multi-stage time marching optimization. Local preconditioning is a technique to modify the time-dependent equations so that all information moves or decays at nearly the same rate, thus relieving the stiffness for a system of equations. Multi-stage time marching can be optimized by modifying its coefficients to account for the presence of viscous terms, allowing larger time steps. We show it is possible to optimize the time marching scheme for a wide range of cell Reynolds numbers for the scalar advection-diffusion equation, and local preconditioning allows this optimization to be applied to the Navier-Stokes equations. Convergence acceleration of the new method is demonstrated through numerical experiments with circular advection and laminar boundary-layer flow over a flat plate.
Acceleration of convergence of vector sequences
NASA Technical Reports Server (NTRS)
Sidi, A.; Ford, W. F.; Smith, D. A.
1983-01-01
A general approach to the construction of convergence acceleration methods for vector sequences is proposed. Using this approach, one can generate some known methods, such as the minimal polynomial extrapolation, the reduced rank extrapolation, and the topological epsilon algorithm, and also some new ones. Some of the new methods are easier to implement than the known methods and are observed to have similar numerical properties. The convergence analysis of these new methods is carried out, and it is shown that they are especially suitable for accelerating the convergence of vector sequences that are obtained when one solves linear systems of equations iteratively. A stability analysis is also given, and numerical examples are provided. The convergence and stability properties of the topological epsilon algorithm are likewise given.
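Minimal polynomial extrapolation, one of the methods named above, can be sketched for a 2-dimensional linear iteration. In this illustration (mine, not the paper's formulation) the iteration matrix has minimal polynomial degree 2, so order-2 MPE recovers the fixed point exactly from four iterates, even though the iteration itself diverges; the diagonal matrix and right-hand side are invented.

```python
def mpe2(xs):
    """Order-2 minimal polynomial extrapolation from four 2-D iterates."""
    d = [[b - a for a, b in zip(xs[i], xs[i + 1])] for i in range(3)]
    # Solve c0*d[0] + c1*d[1] = -d[2] for the polynomial coefficients (Cramer).
    det = d[0][0] * d[1][1] - d[1][0] * d[0][1]
    c0 = (-d[2][0] * d[1][1] + d[1][0] * d[2][1]) / det
    c1 = (-d[0][0] * d[2][1] + d[2][0] * d[0][1]) / det
    c = [c0, c1, 1.0]
    s = sum(c)
    # The extrapolated limit is the c-weighted average of the iterates.
    return [sum(ci * xs[i][k] for i, ci in enumerate(c)) / s for k in range(2)]

# Fixed-point iteration x <- G x + c with G = diag(1.2, -0.5): the eigenvalue
# 1.2 makes the iteration diverge, yet MPE recovers the fixed point.
x, xs = [0.0, 0.0], [[0.0, 0.0]]
for _ in range(3):
    x = [1.2 * x[0] + 1.0, -0.5 * x[1] + 1.0]
    xs.append(x)

limit = mpe2(xs)
print(limit)   # close to the fixed point (-5, 2/3)
```

For larger problems the same coefficients are found by a least-squares solve over more difference vectors, which is where MPE and reduced rank extrapolation differ in detail.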
High-order solution methods for grey discrete ordinates thermal radiative transfer
DOE Office of Scientific and Technical Information (OSTI.GOV)
Maginot, Peter G., E-mail: maginot1@llnl.gov; Ragusa, Jean C., E-mail: jean.ragusa@tamu.edu; Morel, Jim E., E-mail: morel@tamu.edu
This work presents a solution methodology for solving the grey radiative transfer equations that is both spatially and temporally more accurate than the canonical radiative transfer solution technique of linear discontinuous finite element discretization in space with implicit Euler integration in time. We solve the grey radiative transfer equations by fully converging the nonlinear temperature dependence of the material specific heat, material opacities, and Planck function. The grey radiative transfer equations are discretized in space using arbitrary-order self-lumping discontinuous finite elements and integrated in time with arbitrary-order diagonally implicit Runge–Kutta time integration techniques. Iterative convergence of the radiation equation is accelerated using a modified interior penalty diffusion operator to precondition the full discrete ordinates transport operator.
High-order solution methods for grey discrete ordinates thermal radiative transfer
Maginot, Peter G.; Ragusa, Jean C.; Morel, Jim E.
2016-09-29
This paper presents a solution methodology for solving the grey radiative transfer equations that is both spatially and temporally more accurate than the canonical radiative transfer solution technique of linear discontinuous finite element discretization in space with implicit Euler integration in time. We solve the grey radiative transfer equations by fully converging the nonlinear temperature dependence of the material specific heat, material opacities, and Planck function. The grey radiative transfer equations are discretized in space using arbitrary-order self-lumping discontinuous finite elements and integrated in time with arbitrary-order diagonally implicit Runge–Kutta time integration techniques. Iterative convergence of the radiation equation ismore » accelerated using a modified interior penalty diffusion operator to precondition the full discrete ordinates transport operator.« less
Theory, simulation and experiments for precise deflection control of radiotherapy electron beams.
Figueroa, R; Leiva, J; Moncada, R; Rojas, L; Santibáñez, M; Valente, M; Velásquez, J; Young, H; Zelada, G; Yáñez, R; Guillen, Y
2018-03-08
Conventional radiotherapy is mainly delivered by linear accelerators. Although linear accelerators provide dual (electron/photon) radiation beam modalities, both are intrinsically produced by a megavoltage electron current. Modern radiotherapy treatment techniques are based on suitable devices inserted into or attached to conventional linear accelerators; thus, precise control of the delivered beam becomes a key issue. This work presents an integral description of electron beam deflection control as required for a novel radiotherapy technique based on convergent photon beam production. Theoretical and Monte Carlo approaches were initially used for designing and optimizing the device's components. Then, dedicated instrumentation was developed for experimental verification of the electron beam deflection due to the designed magnets. Both Monte Carlo simulations and experimental results support the reliability of the electrodynamic models used to predict megavoltage electron beam control. Copyright © 2018 Elsevier Ltd. All rights reserved.
Convergence acceleration of the Proteus computer code with multigrid methods
NASA Technical Reports Server (NTRS)
Demuren, A. O.; Ibraheem, S. O.
1995-01-01
This report presents the results of a study to implement convergence acceleration techniques based on the multigrid concept in the two-dimensional and three-dimensional versions of the Proteus computer code. The first section reviews the relevant literature on the implementation of multigrid methods in computer codes for compressible flow analysis. The next two sections present detailed stability analyses of numerical schemes for solving the Euler and Navier-Stokes equations, based on conventional von Neumann analysis and bi-grid analysis, respectively. The next section presents details of the computational method used in the Proteus computer code. Finally, the multigrid implementation and applications to several two-dimensional and three-dimensional test problems are presented. The results of the present study show that the multigrid method always leads to a reduction in the number of iterations (or time steps) required for convergence. However, there is an overhead associated with the use of multigrid acceleration. The overhead is higher in 2-D problems than in 3-D problems, so the overall CPU-time savings from multigrid are generally greater in the latter: savings of about 40-50 percent are typical in 3-D problems, versus about 20-30 percent in large 2-D problems. The present multigrid method is applicable to steady-state problems and is therefore ineffective in problems with inherently unstable solutions.
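The multigrid idea evaluated in this report can be sketched with a minimal two-grid correction scheme for the 1D Poisson problem: damped-Jacobi smoothing on the fine grid plus an exact coarse-grid solve. This is a generic textbook illustration under assumed discretization choices, not the Proteus implementation.

```python
import numpy as np

def apply_A(u, h):
    """Matrix-free 1D Laplacian: -u'' with zero Dirichlet BCs, central differences."""
    Au = 2.0 * u.copy()
    Au[:-1] -= u[1:]
    Au[1:] -= u[:-1]
    return Au / h**2

def jacobi(u, f, h, sweeps, omega=2.0 / 3.0):
    """Damped Jacobi smoothing; diag(A) = 2/h^2."""
    for _ in range(sweeps):
        u = u + omega * (h**2 / 2.0) * (f - apply_A(u, h))
    return u

def two_grid(f, n_cycles=20, n_sweeps=2):
    """Two-grid cycles for -u'' = f on (0,1); f.size must be odd so grids nest."""
    n = f.size
    h = 1.0 / (n + 1)
    nc = (n - 1) // 2
    H = 2.0 * h
    # dense coarse-grid operator, solved exactly in place of a recursive V-cycle
    Ac = (np.diag(2.0 * np.ones(nc)) - np.diag(np.ones(nc - 1), 1)
          - np.diag(np.ones(nc - 1), -1)) / H**2
    u = np.zeros(n)
    for _ in range(n_cycles):
        u = jacobi(u, f, h, n_sweeps)                             # pre-smooth
        r = f - apply_A(u, h)
        rc = 0.25 * (r[0:-2:2] + 2.0 * r[1:-1:2] + r[2::2])       # full weighting
        ec = np.linalg.solve(Ac, rc)                              # coarse correction
        e = np.zeros(n)
        e[1::2] = ec                                              # linear interpolation
        e[0::2] = 0.5 * (np.concatenate(([0.0], ec)) + np.concatenate((ec, [0.0])))
        u = jacobi(u + e, f, h, n_sweeps)                         # correct, post-smooth
    return u
```

The smoother removes high-frequency error components while the coarse solve removes the smooth ones, which is why the combination converges at a rate independent of the grid size.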
NASA Technical Reports Server (NTRS)
Macfarlane, J. J.
1992-01-01
We investigate the convergence properties of Lambda-acceleration methods for non-LTE radiative transfer problems in planar and spherical geometry. Matrix elements of the 'exact' Lambda-operator are used to accelerate convergence to a solution in which both the radiative transfer and atomic rate equations are simultaneously satisfied. Convergence properties of two-level and multilevel atomic systems are investigated for methods using: (1) the complete Lambda-operator, and (2) the diagonal of the Lambda-operator. We find that the convergence properties of the method utilizing the complete Lambda-operator are significantly better than those of the diagonal Lambda-operator method, often reducing the number of iterations needed for convergence by a factor of between two and seven. However, the overall computational time required for large-scale calculations - that is, those with many atomic levels and spatial zones - is typically a factor of a few larger for the complete Lambda-operator method, suggesting that the approach is best applied to problems in which convergence is especially difficult.
Acceleration of Convergence to Equilibrium in Markov Chains by Breaking Detailed Balance
NASA Astrophysics Data System (ADS)
Kaiser, Marcus; Jack, Robert L.; Zimmer, Johannes
2017-07-01
We analyse and interpret the effects of breaking detailed balance on the convergence to equilibrium of conservative interacting particle systems and their hydrodynamic scaling limits. For finite systems of interacting particles, we review existing results showing that irreversible processes converge faster to their steady state than reversible ones. We show how this behaviour appears in the hydrodynamic limit of such processes, as described by macroscopic fluctuation theory, and we provide a quantitative expression for the acceleration of convergence in this setting. We give a geometrical interpretation of this acceleration, in terms of currents that are antisymmetric under time-reversal and orthogonal to the free energy gradient, which act to drive the system away from states where (reversible) gradient-descent dynamics result in slow convergence to equilibrium.
Roe, Daniel R; Bergonzo, Christina; Cheatham, Thomas E
2014-04-03
Many problems studied via molecular dynamics require accurate estimates of various thermodynamic properties, such as the free energies of different states of a system, which in turn requires well-converged sampling of the ensemble of possible structures. Enhanced sampling techniques are often applied to provide faster convergence than is possible with traditional molecular dynamics simulations. Hamiltonian replica exchange molecular dynamics (H-REMD) is a particularly attractive method, as it allows the incorporation of a variety of enhanced sampling techniques through modifications to the various Hamiltonians. In this work, we study the enhanced sampling of the RNA tetranucleotide r(GACC) provided by H-REMD combined with accelerated molecular dynamics (aMD), where a boosting potential is applied to torsions, and compare this to the enhanced sampling provided by H-REMD in which torsion potential barrier heights are scaled down to lower force constants. We show that H-REMD and multidimensional REMD (M-REMD) combined with aMD does indeed enhance sampling for r(GACC), and that the addition of the temperature dimension in the M-REMD simulations is necessary to efficiently sample rare conformations. Interestingly, we find that the rate of convergence can be improved in a single H-REMD dimension by simply increasing the number of replicas from 8 to 24 without increasing the maximum level of bias. The results also indicate that factors beyond replica spacing, such as round trip times and time spent at each replica, must be considered in order to achieve optimal sampling efficiency.
Evaluation of new techniques for the calculation of internal recirculating flows
NASA Technical Reports Server (NTRS)
Van Doormaal, J. P.; Turan, A.; Raithby, G. D.
1987-01-01
The performance of discrete methods for the prediction of fluid flows can be enhanced by improving the convergence rate of solvers and by increasing the accuracy of the discrete representation of the equations of motion. This paper evaluates the gains in solver performance that are available when various acceleration methods are applied. Various discretizations are also examined, and two are recommended because of their accuracy and robustness. Insertion of the improved discretization and solver accelerator into a TEACH code, which has been widely applied to combustor flows, illustrates the substantial gains that can be achieved.
Convex Accelerated Maximum Entropy Reconstruction
Worley, Bradley
2016-01-01
Maximum entropy (MaxEnt) spectral reconstruction methods provide a powerful framework for spectral estimation of nonuniformly sampled datasets. Many methods exist within this framework, usually defined based on the magnitude of a Lagrange multiplier in the MaxEnt objective function. An algorithm is presented here that utilizes accelerated first-order convex optimization techniques to rapidly and reliably reconstruct nonuniformly sampled NMR datasets using the principle of maximum entropy. This algorithm – called CAMERA for Convex Accelerated Maximum Entropy Reconstruction Algorithm – is a new approach to spectral reconstruction that exhibits fast, tunable convergence in both constant-aim and constant-lambda modes. A high-performance, open source NMR data processing tool is described that implements CAMERA, and brief comparisons to existing reconstruction methods are made on several example spectra. PMID:26894476
Multilevel acceleration of scattering-source iterations with application to electron transport
Drumm, Clif; Fan, Wesley
2017-08-18
Acceleration/preconditioning strategies available in the SCEPTRE radiation transport code are described. A flexible transport synthetic acceleration (TSA) algorithm that uses a low-order discrete-ordinates (SN) or spherical-harmonics (PN) solve to accelerate convergence of a high-order SN source-iteration (SI) solve is described. Convergence of the low-order solves can be further accelerated by applying off-the-shelf incomplete-factorization or algebraic-multigrid methods. Also available is an algorithm that uses a generalized minimum residual (GMRES) iterative method rather than SI for convergence, using a parallel sweep-based solver to build up a Krylov subspace. TSA has been applied as a preconditioner to accelerate the convergence of the GMRES iterations. The methods are applied to several problems involving electron transport and problems with artificial cross sections with large scattering ratios. These methods were compared and evaluated by considering material discontinuities and scattering anisotropy. Observed accelerations are highly problem dependent, but speedup factors around 10 have been observed in typical applications.
Convolutional Dictionary Learning: Acceleration and Convergence
NASA Astrophysics Data System (ADS)
Chun, Il Yong; Fessler, Jeffrey A.
2018-04-01
Convolutional dictionary learning (CDL or sparsifying CDL) has many applications in image processing and computer vision. There has been growing interest in developing efficient algorithms for CDL, mostly relying on the augmented Lagrangian (AL) method or the variant alternating direction method of multipliers (ADMM). When their parameters are properly tuned, AL methods have shown fast convergence in CDL. However, the parameter tuning process is not trivial due to its data dependence and, in practice, the convergence of AL methods depends on the AL parameters for nonconvex CDL problems. To moderate these problems, this paper proposes a new practically feasible and convergent Block Proximal Gradient method using a Majorizer (BPG-M) for CDL. The BPG-M-based CDL is investigated with different block updating schemes and majorization matrix designs, and further accelerated by incorporating some momentum coefficient formulas and restarting techniques. All of the methods investigated incorporate a boundary artifacts removal (or, more generally, sampling) operator in the learning model. Numerical experiments show that, without needing any parameter tuning process, the proposed BPG-M approach converges more stably to desirable solutions of lower objective values than the existing state-of-the-art ADMM algorithm and its memory-efficient variant do. Compared to the ADMM approaches, the BPG-M method using a multi-block updating scheme is particularly useful in single-threaded CDL algorithm handling large datasets, due to its lower memory requirement and no polynomial computational complexity. Image denoising experiments show that, for relatively strong additive white Gaussian noise, the filters learned by BPG-M-based CDL outperform those trained by the ADMM approach.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Willert, Jeffrey; Taitano, William T.; Knoll, Dana
In this note we demonstrate that using Anderson Acceleration (AA) in place of a standard Picard iteration can not only increase the convergence rate but also make the iteration more robust for two transport applications. We also compare the convergence acceleration provided by AA to that provided by moment-based acceleration methods. Additionally, we demonstrate that the two acceleration methods can be used together in a nested fashion. We begin by describing the AA algorithm, and then describe two application problems, one from neutronics and one from plasma physics, on which we apply AA. We provide computational results which highlight the benefits of using AA, namely that we can compute solutions using fewer function evaluations and larger time-steps, and achieve a more robust iteration.
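The AA algorithm referred to above can be sketched for a generic fixed-point problem x = g(x). This is a minimal depth-m (Type II) variant with an illustrative scalar test map, not the transport solver of the note.

```python
import numpy as np

def anderson(g, x0, m=5, tol=1e-10, maxit=100):
    """Depth-m Anderson Acceleration (Type II) for the fixed-point problem x = g(x).

    Keeps short histories of iterates and residuals, then combines them via a
    small least-squares problem to extrapolate the next iterate.
    """
    x = np.atleast_1d(np.asarray(x0, dtype=float))
    G, F = [], []                      # histories of g(x) values and residuals
    for _ in range(maxit):
        gx = g(x)
        f = gx - x                     # fixed-point residual
        if np.linalg.norm(f) < tol:
            return gx
        G.append(gx); F.append(f)
        G, F = G[-(m + 1):], F[-(m + 1):]
        if len(F) > 1:
            dF = np.column_stack([F[i + 1] - F[i] for i in range(len(F) - 1)])
            dG = np.column_stack([G[i + 1] - G[i] for i in range(len(G) - 1)])
            gamma, *_ = np.linalg.lstsq(dF, f, rcond=None)   # least-squares weights
            x = gx - dG @ gamma        # accelerated update
        else:
            x = gx                     # plain Picard step to build history
    return x
```

On the scalar contraction g(x) = cos(x), this behaves like a secant-type method and reaches the fixed point in far fewer evaluations than plain Picard iteration.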
NASA Technical Reports Server (NTRS)
Jothiprasad, Giridhar; Mavriplis, Dimitri J.; Caughey, David A.; Bushnell, Dennis M. (Technical Monitor)
2002-01-01
The efficiency gains obtained using higher-order implicit Runge-Kutta schemes, as compared with second-order accurate backward difference schemes, for the unsteady Navier-Stokes equations are investigated. Three different algorithms for solving the nonlinear system of equations arising at each timestep are presented. The first algorithm (NMG) is a pseudo-time-stepping scheme which employs a nonlinear full approximation storage (FAS) agglomeration multigrid method to accelerate convergence. The other two algorithms are based on inexact Newton methods. The linear system arising at each Newton step is solved using iterative/Krylov techniques, and left preconditioning is used to accelerate convergence of the linear solvers. One of the methods (LMG) uses Richardson's iterative scheme for solving the linear system at each Newton step, while the other (PGMRES) uses the Generalized Minimal Residual method. Results demonstrating the relative superiority of these Newton-based schemes are presented. Efficiency gains as high as 10 are obtained by combining the higher-order time integration schemes with the more efficient nonlinear solvers.
NASA Technical Reports Server (NTRS)
Banks, H. T.; Rosen, I. G.
1984-01-01
Approximation ideas are discussed that can be used in parameter estimation and feedback control for Euler-Bernoulli models of elastic systems. Focusing on parameter estimation problems, ways by which one can obtain convergence results for cubic spline based schemes for hybrid models involving an elastic cantilevered beam with tip mass and base acceleration are outlined. Sample numerical findings are also presented.
Accelerated path integral methods for atomistic simulations at ultra-low temperatures
DOE Office of Scientific and Technical Information (OSTI.GOV)
Uhl, Felix, E-mail: felix.uhl@rub.de; Marx, Dominik; Ceriotti, Michele
2016-08-07
Path integral methods provide a rigorous and systematically convergent framework to include the quantum mechanical nature of atomic nuclei in the evaluation of the equilibrium properties of molecules, liquids, or solids at finite temperature. Such nuclear quantum effects are often significant for light nuclei already at room temperature, but become crucial at cryogenic temperatures such as those provided by superfluid helium as a solvent. Unfortunately, the cost of converged path integral simulations increases significantly upon lowering the temperature, so that the computational burden of simulating matter at typical superfluid helium temperatures becomes prohibitive. Here we investigate how accelerated path integral techniques based on colored noise generalized Langevin equations, in particular the so-called path integral generalized Langevin equation thermostat (PIGLET) variant, perform in this extreme quantum regime, using as examples the quasi-rigid methane molecule and its highly fluxional protonated cousin, CH₅⁺. We show that the PIGLET technique gives a speedup of two orders of magnitude in the evaluation of structural observables and quantum kinetic energy at ultralow temperatures. Moreover, we computed the spatial spread of the quantum nuclei in CH₄ to illustrate the limits of using such colored noise thermostats close to the many-body quantum ground state.
Convergence analysis of surrogate-based methods for Bayesian inverse problems
NASA Astrophysics Data System (ADS)
Yan, Liang; Zhang, Yuan-Xiang
2017-12-01
The major challenges in Bayesian inverse problems arise from the need for repeated evaluations of the forward model, as required by Markov chain Monte Carlo (MCMC) methods for posterior sampling. Many attempts at accelerating Bayesian inference have relied on surrogates for the forward model, typically constructed through repeated forward simulations performed in an offline phase. Although such approaches can be quite effective at reducing computation cost, there has been little analysis of the effect of the approximation on posterior inference. In this work, we prove error bounds on the Kullback-Leibler (KL) distance between the true posterior distribution and the approximation based on surrogate models. Our rigorous error analysis shows that if the forward model approximation converges at a certain rate in the prior-weighted L² norm, then the posterior distribution generated by the approximation converges to the true posterior at least two times faster in the KL sense. An error bound on the Hellinger distance is also provided. To provide concrete examples focusing on the use of surrogate-model-based methods, we present an efficient technique for constructing stochastic surrogate models to accelerate the Bayesian inference approach. Christoffel least squares algorithms, based on generalized polynomial chaos, are used to construct a polynomial approximation of the forward solution over the support of the prior distribution. The numerical strategy and the predicted convergence rates are then demonstrated on nonlinear inverse problems involving the inference of parameters appearing in partial differential equations.
Convergence acceleration of computer methods for grounding analysis in stratified soils
NASA Astrophysics Data System (ADS)
Colominas, I.; París, J.; Navarrina, F.; Casteleiro, M.
2010-06-01
The design of safe grounding systems in electrical installations is essential to assure the protection of the equipment, the continuity of the power supply and the safety of people. In order to achieve these goals, it is necessary to compute the equivalent electrical resistance of the system and the potential distribution on the earth surface when a fault condition occurs. In recent years the authors have developed a numerical formulation based on the BEM for the analysis of grounding systems embedded in uniform and layered soils. As is known, in practical cases the underlying series have a poor rate of convergence, and the use of multilayer soil models entails a prohibitive computational cost. In this paper we present an efficient technique based on the Aitken δ²-process to improve the rate of convergence of the involved series expansions.
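The Aitken δ² extrapolation invoked above can be sketched for a generic linearly convergent scalar sequence. The fixed-point example is illustrative, not the BEM series of the paper.

```python
import numpy as np

def aitken_delta2(s):
    """Apply Aitken's delta-squared process to a 1D array of sequence terms.

    For a linearly convergent sequence s_k -> s*, returns the accelerated
    sequence s_k - (Δs_k)^2 / (Δ²s_k), which converges faster to s*.
    """
    s = np.asarray(s, dtype=float)
    d1 = s[1:-1] - s[:-2]                    # forward differences Δs_k
    d2 = s[2:] - 2.0 * s[1:-1] + s[:-2]      # second differences Δ²s_k
    return s[:-2] - d1**2 / d2

# Example: fixed-point iterates of x = cos(x), which converge only linearly
x, seq = 0.5, []
for _ in range(10):
    x = np.cos(x)
    seq.append(x)
accel = aitken_delta2(seq)
```

The extrapolated terms eliminate the dominant geometric error component, so the accelerated sequence is typically several digits closer to the limit than the raw partial results.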
DOE Office of Scientific and Technical Information (OSTI.GOV)
Xu, Qiaofeng; Sawatzky, Alex; Anastasio, Mark A., E-mail: anastasio@wustl.edu
Purpose: The development of iterative image reconstruction algorithms for cone-beam computed tomography (CBCT) remains an active and important research area. Even with hardware acceleration, the overwhelming majority of the available 3D iterative algorithms that implement nonsmooth regularizers remain computationally burdensome and have not been translated for routine use in time-sensitive applications such as image-guided radiation therapy (IGRT). In this work, two variants of the fast iterative shrinkage thresholding algorithm (FISTA) are proposed and investigated for accelerated iterative image reconstruction in CBCT. Methods: Algorithm acceleration was achieved by replacing the original gradient-descent step in the FISTAs by a subproblem that is solved by use of the ordered subset simultaneous algebraic reconstruction technique (OS-SART). Due to the preconditioning matrix adopted in the OS-SART method, two new weighted proximal problems were introduced and corresponding fast gradient projection-type algorithms were developed for solving them. We also provide efficient numerical implementations of the proposed algorithms that exploit the massive data parallelism of multiple graphics processing units. Results: The improved rates of convergence of the proposed algorithms were quantified in computer-simulation studies and by use of clinical projection data corresponding to an IGRT study. The accelerated FISTAs were shown to possess dramatically improved convergence properties as compared to the standard FISTAs. For example, the number of iterations to achieve a specified reconstruction error could be reduced by an order of magnitude. Volumetric images reconstructed from clinical data were produced in under 4 min. Conclusions: The FISTA achieves an O(1/k²) convergence rate in objective value and can therefore potentially reduce the number of iterations required to produce an image of a specified image quality as compared to standard first-order methods.
We have proposed and investigated accelerated FISTAs for use with two nonsmooth penalty functions that will lead to further reductions in image reconstruction times while preserving image quality. Moreover, with the help of a mixed sparsity regularization, better preservation of soft-tissue structures can potentially be obtained. The algorithms were systematically evaluated by use of computer-simulated and clinical data sets.
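The FISTA scheme at the core of this record can be sketched for the generic l1-regularized least-squares prototype. This is the plain textbook algorithm on an assumed toy problem, not the OS-SART-preconditioned CBCT variant described above.

```python
import numpy as np

def fista_lasso(A, b, lam, n_iter=200):
    """FISTA for min_x 0.5*||Ax - b||^2 + lam*||x||_1."""
    L = np.linalg.norm(A, 2) ** 2            # Lipschitz constant of the smooth part
    x = np.zeros(A.shape[1])
    y, t = x.copy(), 1.0
    for _ in range(n_iter):
        grad = A.T @ (A @ y - b)             # gradient step at the momentum point
        z = y - grad / L
        x_new = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft threshold
        t_new = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t * t))           # momentum update
        y = x_new + ((t - 1.0) / t_new) * (x_new - x)
        x, t = x_new, t_new
    return x
```

The momentum sequence t is what upgrades the O(1/k) rate of plain proximal gradient (ISTA) to O(1/k²) in objective value.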
Analysis of Anderson Acceleration on a Simplified Neutronics/Thermal Hydraulics System
DOE Office of Scientific and Technical Information (OSTI.GOV)
Toth, Alex; Kelley, C. T.; Slattery, Stuart R
A standard method for solving coupled multiphysics problems in light water reactors is Picard iteration, which sequentially alternates between solving single-physics applications. This solution approach is appealing due to its simplicity of implementation and the ability to leverage existing software packages to accurately solve single-physics applications. However, there are several drawbacks in the convergence behavior of this method, namely slow convergence and the need for heuristically chosen damping factors to achieve convergence in many cases. Anderson acceleration is a method that has been seen to be more robust and faster converging than Picard iteration for many problems, without significantly higher cost per iteration or complexity of implementation, though its effectiveness in the context of multiphysics coupling is not well explored. In this work, we develop a one-dimensional model simulating the coupling between the neutron distribution and fuel and coolant properties in a single fuel pin. We show that this model generally captures the convergence issues noted in Picard iterations that couple high-fidelity physics codes. We then use this model to gauge potential improvements with regard to rate of convergence and robustness from utilizing Anderson acceleration as an alternative to Picard iteration.
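The damped Picard iteration that serves as the baseline above can be sketched generically. The two single-physics solves are illustrative stand-ins (e.g. a neutronics update and a thermal-hydraulics update), not the paper's fuel-pin model.

```python
import numpy as np

def picard(solve_a, solve_b, x0, damping=0.5, tol=1e-10, maxit=200):
    """Damped Picard iteration alternating two single-physics solves.

    Each sweep feeds the output of one solve into the other, then
    under-relaxes the coupled update with the heuristic damping factor.
    """
    x = np.atleast_1d(np.asarray(x0, dtype=float))
    for k in range(maxit):
        y = solve_a(x)                       # first single-physics solve
        x_new = solve_b(y)                   # second single-physics solve
        if np.linalg.norm(x_new - x) < tol * (1.0 + np.linalg.norm(x)):
            return x_new, k
        x = damping * x_new + (1.0 - damping) * x   # damped (under-relaxed) update
    return x, maxit

# Toy contraction: the composite map g(x) = 0.25*x + 1.5 has fixed point x* = 2
sol, iters = picard(lambda x: 0.5 * x + 1.0, lambda y: 0.5 * y + 1.0,
                    np.array([0.0]))
```

Replacing the damped update line with an Anderson-accelerated combination of past iterates is exactly the drop-in change the abstract evaluates.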
On the primary variable switching technique for simulating unsaturated-saturated flows
NASA Astrophysics Data System (ADS)
Diersch, H.-J. G.; Perrochet, P.
Primary variable switching appears as a promising numerical technique for variably saturated flows. While the standard pressure-based form of the Richards equation can suffer from poor mass balance accuracy, the mixed form with its improved conservative properties can possess convergence difficulties for dry initial conditions. On the other hand, variable switching can overcome most of the stated numerical problems. The paper deals with variable switching for finite elements in two and three dimensions. The technique is incorporated in both an adaptive error-controlled predictor-corrector one-step Newton (PCOSN) iteration strategy and a target-based full Newton (TBFN) iteration scheme. Both schemes provide different behaviors with respect to accuracy and solution effort. Additionally, a simplified upstream weighting technique is used. Compared with conventional approaches the primary variable switching technique represents a fast and robust strategy for unsaturated problems with dry initial conditions. The impact of the primary variable switching technique is studied over a wide range of mostly 2D and partly difficult-to-solve problems (infiltration, drainage, perched water table, capillary barrier), where comparable results are available. It is shown that the TBFN iteration is an effective but error-prone procedure. TBFN sacrifices temporal accuracy in favor of accelerated convergence if aggressive time step sizes are chosen.
Sawall, Mathias; Kubis, Christoph; Börner, Armin; Selent, Detlef; Neymeyr, Klaus
2015-09-03
Modern computerized spectroscopic instrumentation can result in high volumes of spectroscopic data. Such accurate measurements pose special computational challenges for multivariate curve resolution techniques since pure component factorizations are often solved via constrained minimization problems. The computational costs for these calculations rapidly grow with an increased time or frequency resolution of the spectral measurements. The key idea of this paper is to define for the given high-dimensional spectroscopic data a sequence of coarsened subproblems with reduced resolutions. The multiresolution algorithm first computes a pure component factorization for the coarsest problem with the lowest resolution. Then the factorization results are used as initial values for the next problem with a higher resolution. Good initial values result in a fast solution on the next refined level. This procedure is repeated and finally a factorization is determined for the highest level of resolution. The described multiresolution approach allows a considerable convergence acceleration. The computational procedure is analyzed and is tested for experimental spectroscopic data from the rhodium-catalyzed hydroformylation together with various soft and hard models. Copyright © 2015 Elsevier B.V. All rights reserved.
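The solve-coarse-then-warm-start pattern described above can be illustrated on a simpler model problem than spectral factorization. The sketch below (an assumed 1-D Poisson problem with Gauss-Seidel relaxation, not the paper's curve-resolution setting) solves a coarse grid directly, prolongs the result by linear interpolation, and counts how many fine-grid sweeps the warm start saves:

```python
import numpy as np

def gauss_seidel_poisson(f, u, tol=1e-8, maxit=50000):
    """Gauss-Seidel sweeps for -u'' = f on (0, 1) with zero boundary values;
    f and u hold interior grid values. Returns (u, sweep count)."""
    n = len(u)
    h2 = 1.0 / (n + 1) ** 2
    for sweep in range(1, maxit + 1):
        for i in range(n):
            left = u[i - 1] if i > 0 else 0.0
            right = u[i + 1] if i < n - 1 else 0.0
            u[i] = 0.5 * (left + right + h2 * f[i])
        lo = np.concatenate(([0.0], u[:-1]))
        hi = np.concatenate((u[1:], [0.0]))
        if np.linalg.norm(h2 * f - (2 * u - lo - hi)) < tol:
            return u, sweep
    return u, maxit

# model problem: -u'' = pi^2 sin(pi x), exact solution sin(pi x)
nc, nf = 15, 31                                  # nested coarse/fine interior grids
xf = np.linspace(0.0, 1.0, nf + 2)[1:-1]
ff = np.pi ** 2 * np.sin(np.pi * xf)
fc = ff[1::2]                                    # coarse nodes = every other fine node

# solve the coarse problem directly ...
Ac = 2 * np.eye(nc) - np.eye(nc, k=1) - np.eye(nc, k=-1)
uc = np.linalg.solve(Ac, fc / (nc + 1) ** 2)

# ... prolong by linear interpolation, then refine on the fine grid
ext = np.concatenate(([0.0], uc, [0.0]))
u0 = np.zeros(nf)
u0[1::2] = uc
u0[0::2] = 0.5 * (ext[:-1] + ext[1:])
u_warm, it_warm = gauss_seidel_poisson(ff, u0.copy())
u_cold, it_cold = gauss_seidel_poisson(ff, np.zeros(nf))
```

The coarse solution is cheap, and initializing the fine-level iteration with it removes most of the slowly decaying error, which is exactly the benefit the multiresolution factorization exploits.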
NASA Technical Reports Server (NTRS)
Chang, S. C.
1986-01-01
A two-step semidirect procedure is developed to accelerate the one-step procedure described in NASA TP-2529. For a set of constant coefficient model problems, the acceleration factor increases from 1 to 2 as the one-step procedure convergence rate decreases from +infinity to 0. It is also shown numerically that the two-step procedure can substantially accelerate the convergence of the numerical solution of many partial differential equations (PDEs) with variable coefficients.
Partha, Raghavendran; Chauhan, Bharesh K; Ferreira, Zelia; Robinson, Joseph D; Lathrop, Kira; Nischal, Ken K
2017-01-01
The underground environment imposes unique demands on life that have led subterranean species to evolve specialized traits, many of which evolved convergently. We studied convergence in evolutionary rate in subterranean mammals in order to associate phenotypic evolution with specific genetic regions. We identified a strong excess of vision- and skin-related genes that changed at accelerated rates in the subterranean environment due to relaxed constraint and adaptive evolution. We also demonstrate that ocular-specific transcriptional enhancers were convergently accelerated, whereas enhancers active outside the eye were not. Furthermore, several uncharacterized genes and regulatory sequences demonstrated convergence and thus constitute novel candidate sequences for congenital ocular disorders. The strong evidence of convergence in these species indicates that evolution in this environment is recurrent and predictable and can be used to gain insights into phenotype–genotype relationships. PMID:29035697
NASA Astrophysics Data System (ADS)
Trujillo Bueno, J.; Fabiani Bendicho, P.
1995-12-01
Iterative schemes based on Gauss-Seidel (G-S) and optimal successive over-relaxation (SOR) iteration are shown to provide a dramatic increase in the speed with which non-LTE radiation transfer (RT) problems can be solved. The convergence rates of these new RT methods are identical to those of upper triangular nonlocal approximate operator splitting techniques, but the computing time per iteration and the memory requirements are similar to those of a local operator splitting method. In addition to these properties, both methods are particularly suitable for multidimensional geometry, since they neither require the actual construction of nonlocal approximate operators nor the application of any matrix inversion procedure. Compared with the currently used Jacobi technique, which is based on the optimal local approximate operator (see Olson, Auer, & Buchler 1986), the G-S method presented here is faster by a factor of 2. It gives excellent smoothing of the high-frequency error components, which makes it the iterative scheme of choice for multigrid radiative transfer. This G-S method can also be suitably combined with standard acceleration techniques to achieve even higher performance. Although the convergence rate of the optimal SOR scheme developed here for solving non-LTE RT problems is much higher than that of G-S, the computing time per iteration is also minimal, i.e., virtually identical to that of a local operator splitting method. While the conventional optimal local operator scheme provides the converged solution after a total CPU time (measured in arbitrary units) approximately equal to the number n of points per decade of optical depth, the time needed by this new method based on the optimal SOR iterations is only √n/(2√2). This method is competitive with those that result from combining the above-mentioned Jacobi and G-S schemes with the best acceleration techniques.
Contrary to what happens with the local operator splitting strategy currently in use, these novel methods remain effective even under extreme non-LTE conditions in very fine grids.
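The Jacobi / Gauss-Seidel / SOR hierarchy that underlies the abstract above can be demonstrated on an ordinary linear system. The sketch below (an assumed 1-D diffusion-type matrix, not a radiative transfer operator) counts iterations for the three schemes, using the classical optimal SOR parameter for this model matrix:

```python
import numpy as np

def stationary_solve(A, b, method="jacobi", omega=1.0, tol=1e-10, maxit=100000):
    """Jacobi, Gauss-Seidel (omega = 1), or SOR iteration for A x = b."""
    n = len(b)
    x = np.zeros(n)
    D = np.diag(A)
    for it in range(1, maxit + 1):
        if method == "jacobi":
            x = x + (b - A @ x) / D        # simultaneous update
        else:                              # "sor": sweep with immediate updates
            for i in range(n):
                x[i] += omega * (b[i] - A[i] @ x) / D[i]
        if np.linalg.norm(b - A @ x) < tol:
            return x, it
    return x, maxit

# 1-D diffusion-type model problem
n = 30
A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
b = np.ones(n)
omega_opt = 2.0 / (1.0 + np.sin(np.pi / (n + 1)))  # optimal SOR parameter for this matrix
x_j, it_j = stationary_solve(A, b, "jacobi")
x_g, it_g = stationary_solve(A, b, "sor", omega=1.0)
x_s, it_s = stationary_solve(A, b, "sor", omega=omega_opt)
```

For this matrix Gauss-Seidel needs roughly half as many iterations as Jacobi, while optimally relaxed SOR is an order of magnitude faster still, mirroring the speed-ups reported for the radiative transfer schemes.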
Solution algorithms for the two-dimensional Euler equations on unstructured meshes
NASA Technical Reports Server (NTRS)
Whitaker, D. L.; Slack, David C.; Walters, Robert W.
1990-01-01
The objective of the study was to analyze implicit techniques employed in structured grid algorithms for solving two-dimensional Euler equations and extend them to unstructured solvers in order to accelerate convergence rates. A comparison is made between nine different algorithms for both first-order and second-order accurate solutions. Higher-order accuracy is achieved by using multidimensional monotone linear reconstruction procedures. The discussion is illustrated by results for flow over a transonic circular arc.
Converging the capabilities of EAP artificial muscles and the requirements of bio-inspired robotics
NASA Astrophysics Data System (ADS)
Hanson, David F.; White, Victor
2004-07-01
The characteristics of electroactive polymers (EAP) are typically considered inadequate for applications in robotics. But in recent years, there have been both dramatic increases in EAP technological capabilities and reductions in the power required to actuate bio-inspired robots. As the two trends continue to converge, one may anticipate that dramatic breakthroughs in biologically inspired robotic actuation will result from the marriage of these technologies. This talk will provide a snapshot of how EAP actuator scientists and roboticists may work together on a common platform to accelerate the growth of both technologies. To demonstrate this concept of a platform to accelerate this convergence, the authors will discuss their work in the niche application of robotic facial expression. In particular, expressive robots appear to be within the range of EAP actuation, thanks to their low force requirements. Several robots will be shown that demonstrate realistic expressions with dramatically decreased force requirements. Also, detailed descriptions will be given of the engineering innovations that have enabled these robotics advancements, most notably Structured-Porosity Elastomer Materials (SPEMs). SPEM manufacturing techniques create delicate cell structures in a variety of elastomers that maintain the high elongation characteristics of the mother material but, because of the porosity, behave as sponge materials, thus lowering the force required to emulate facial expressions to levels output by several extant EAP actuators.
Li, Haichen; Yaron, David J
2016-11-08
A least-squares commutator in the iterative subspace (LCIIS) approach is explored for accelerating self-consistent field (SCF) calculations. LCIIS is similar to direct inversion of the iterative subspace (DIIS) methods in that the next iterate of the density matrix is obtained as a linear combination of past iterates. However, whereas DIIS methods find the linear combination by minimizing the norm of a linear combination of error vectors, LCIIS minimizes the Frobenius norm of the commutator between the density matrix and the Fock matrix. This minimization leads to a quartic problem that can be solved iteratively through a constrained Newton's method. The relationship between LCIIS and DIIS is discussed. Numerical experiments suggest that LCIIS leads to faster convergence than other SCF convergence accelerating methods in a statistically significant sense, and in a number of cases LCIIS leads to stable SCF solutions that are not found by other methods. The computational cost involved in solving the quartic minimization problem is small compared to the typical cost of SCF iterations and the approach is easily integrated into existing codes. LCIIS can therefore serve as a powerful addition to SCF convergence accelerating methods in computational quantum chemistry packages.
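For readers unfamiliar with the DIIS baseline that LCIIS extends, here is a minimal sketch of DIIS extrapolation applied to a generic fixed-point iteration. This is the classic bordered Lagrange system for coefficients that sum to one, not the LCIIS quartic minimization, and the toy linear map `g` is an illustrative assumption rather than an SCF Fock-matrix update:

```python
import numpy as np

def diis_mix(xs, es):
    """Coefficients c with sum(c) = 1 minimizing ||sum_i c_i e_i||, found from
    the bordered Lagrange system; the result is applied to the iterates xs."""
    m = len(es)
    B = np.zeros((m + 1, m + 1))
    for i in range(m):
        for j in range(m):
            B[i, j] = np.dot(es[i], es[j])     # Gram matrix of error vectors
    B[m, :m] = B[:m, m] = -1.0                 # constraint border
    rhs = np.zeros(m + 1); rhs[m] = -1.0
    c = np.linalg.lstsq(B, rhs, rcond=None)[0][:m]
    return sum(ci * xi for ci, xi in zip(c, xs))

def fixed_point_diis(g, x0, m=4, tol=1e-10, maxit=500):
    """Fixed-point iteration x <- g(x) accelerated by DIIS extrapolation."""
    x = x0
    xs, es = [], []
    for k in range(1, maxit + 1):
        gx = g(x)
        e = gx - x                             # error (residual) vector of this iterate
        if np.linalg.norm(e) < tol:
            return gx, k
        xs.append(gx); es.append(e)
        xs, es = xs[-m:], es[-m:]              # keep a sliding window of size m
        x = diis_mix(xs, es) if len(xs) > 1 else gx
    return x, maxit

A = np.diag([0.9, 0.5, -0.8]); b = np.ones(3)
g = lambda x: A @ x + b
x_star = np.linalg.solve(np.eye(3) - A, b)
x_d, it_d = fixed_point_diis(g, np.zeros(3))
```

Once the stored error vectors span the error space, the extrapolated iterate is essentially exact, which is why DIIS-family methods converge in far fewer iterations than the underlying fixed-point map.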
Unstructured mesh algorithms for aerodynamic calculations
NASA Technical Reports Server (NTRS)
Mavriplis, D. J.
1992-01-01
The use of unstructured mesh techniques for solving complex aerodynamic flows is discussed. The principal advantages of unstructured mesh strategies, as they relate to complex geometries, adaptive meshing capabilities, and parallel processing are emphasized. The various aspects required for the efficient and accurate solution of aerodynamic flows are addressed. These include mesh generation, mesh adaptivity, solution algorithms, convergence acceleration, and turbulence modeling. Computations of viscous turbulent two-dimensional flows and inviscid three-dimensional flows about complex configurations are demonstrated. Remaining obstacles and directions for future research are also outlined.
Multigrid solution of internal flows using unstructured solution adaptive meshes
NASA Technical Reports Server (NTRS)
Smith, Wayne A.; Blake, Kenneth R.
1992-01-01
This is the final report of the NASA Lewis SBIR Phase 2 Contract Number NAS3-25785, Multigrid Solution of Internal Flows Using Unstructured Solution Adaptive Meshes. The objective of this project, as described in the Statement of Work, is to develop and deliver to NASA a general three-dimensional Navier-Stokes code using unstructured solution-adaptive meshes for accuracy and multigrid techniques for convergence acceleration. The code will primarily be applied, but not necessarily limited, to high speed internal flows in turbomachinery.
Interactive real time flow simulations
NASA Technical Reports Server (NTRS)
Sadrehaghighi, I.; Tiwari, S. N.
1990-01-01
An interactive real time flow simulation technique is developed for an unsteady channel flow. A finite-volume algorithm in conjunction with a Runge-Kutta time stepping scheme was developed for two-dimensional Euler equations. A global time step was used to accelerate convergence of steady-state calculations. A raster image generation routine was developed for high speed image transmission which allows the user to have direct interaction with the solution development. In addition to theory and results, the hardware and software requirements are discussed.
Berker, Yannick; Karp, Joel S; Schulz, Volkmar
2017-09-01
The use of scattered coincidences for attenuation correction of positron emission tomography (PET) data has recently been proposed. For practical applications, convergence speeds require further improvement, yet there exists a trade-off between convergence speed and the risk of non-convergence. In this respect, a maximum-likelihood gradient-ascent (MLGA) algorithm and a two-branch back-projection (2BP), which was previously proposed, were evaluated. MLGA was combined with the Armijo step size rule and accelerated using conjugate gradients, Nesterov's momentum method, and data subsets of different sizes. In 2BP, we varied the subset size, an important determinant of convergence speed and computational burden. We used three sets of simulation data to evaluate the impact of a spatial scale factor. The Armijo step size allowed 10-fold increased step sizes compared to native MLGA. Conjugate gradients and Nesterov momentum led to slightly faster, yet non-uniform convergence; improvements were mostly confined to later iterations, possibly due to the non-linearity of the problem. MLGA with data subsets achieved faster, uniform, and predictable convergence, with a speed-up factor equivalent to the number of subsets and no increase in computational burden. By contrast, 2BP computational burden increased linearly with the number of subsets due to repeated evaluation of the objective function, and convergence was limited to the case of many (and therefore small) subsets, which resulted in high computational burden. Possibilities of improving 2BP appear limited. While general-purpose acceleration methods appear insufficient for MLGA, results suggest that data subsets are a promising way of improving MLGA performance.
Block-accelerated aggregation multigrid for Markov chains with application to PageRank problems
NASA Astrophysics Data System (ADS)
Shen, Zhao-Li; Huang, Ting-Zhu; Carpentieri, Bruno; Wen, Chun; Gu, Xian-Ming
2018-06-01
Recently, the adaptive algebraic aggregation multigrid method has been proposed for computing stationary distributions of Markov chains. This method updates aggregates on every iterative cycle to keep high accuracies of coarse-level corrections. Accordingly, its fast convergence rate is well guaranteed, but often a large proportion of the time is consumed by the aggregation processes. In this paper, we show that the aggregates on each level in this method can be utilized to transform the probability equation of that level into a block linear system. Then we propose a Block-Jacobi relaxation that deals with the block system on each level to smooth the error. Some theoretical analysis of this technique is presented, and it is also adapted to solve PageRank problems. The purpose of this technique is to accelerate the adaptive aggregation multigrid method and its variants for solving Markov chains and PageRank problems. It also attempts to shed some light on new solutions for making aggregation processes more cost-effective for aggregation multigrid methods. Numerical experiments are presented to illustrate the effectiveness of this technique.
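The baseline that such multigrid and block-relaxation schemes accelerate is plain power iteration on the PageRank fixed point. A minimal sketch on an assumed 3-page toy graph (the link matrix and damping factor are illustrative, not from the paper):

```python
import numpy as np

def pagerank(P, alpha=0.85, tol=1e-12, maxit=1000):
    """Power iteration for the PageRank vector x = alpha * P x + (1 - alpha)/n,
    with P a column-stochastic link matrix."""
    n = P.shape[0]
    x = np.full(n, 1.0 / n)
    for it in range(1, maxit + 1):
        x_new = alpha * (P @ x) + (1.0 - alpha) / n
        if np.abs(x_new - x).sum() < tol:      # L1 change between sweeps
            return x_new, it
        x = x_new
    return x, maxit

# tiny 3-page web: column j holds page j's outgoing-link probabilities
P = np.array([[0.0, 0.0, 1.0],
              [0.5, 0.0, 0.0],
              [0.5, 1.0, 0.0]])
x, iters = pagerank(P)
```

Power iteration contracts the L1 error by the damping factor per sweep; aggregation multigrid methods aim to beat this rate on large, slowly mixing chains.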
A general solution strategy of modified power method for higher mode solutions
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang, Peng; Lee, Hyunsuk; Lee, Deokjung, E-mail: deokjung@unist.ac.kr
2016-01-15
A general solution strategy of the modified power iteration method for calculating higher eigenmodes has been developed and applied in continuous energy Monte Carlo simulation. The new approach adopts four features: 1) the eigen decomposition of the transfer matrix, 2) weight cancellation for higher modes, 3) population control with higher mode weights, and 4) a stabilization technique for statistical fluctuations using multi-cycle accumulations. The numerical tests of neutron transport eigenvalue problems successfully demonstrate that the new strategy can significantly accelerate the fission source convergence with stable convergence behavior while obtaining multiple higher eigenmodes at the same time. The advantages of the new strategy can be summarized as 1) the replacement of the cumbersome solution step of high order polynomial equations required by Booth's original method with a simple matrix eigen decomposition, 2) faster fission source convergence in inactive cycles, 3) more stable behavior in both inactive and active cycles, and 4) smaller variances in active cycles. Advantages 3 and 4 can be attributed to the lower sensitivity of the new strategy to statistical fluctuations due to the multi-cycle accumulations. The application of the modified power method to continuous energy Monte Carlo simulation and the higher eigenmodes up to 4th order are reported for the first time in this paper. Highlights: • Modified power method is applied to continuous energy Monte Carlo simulation. • Transfer matrix is introduced to generalize the modified power method. • All-mode-based population control is applied to get the higher eigenmodes. • Statistical fluctuations can be greatly reduced using accumulated tally results. • Fission source convergence is accelerated with higher mode solutions.
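The deterministic idea behind extracting higher eigenmodes can be sketched without any Monte Carlo machinery: run power iteration for the dominant mode, deflate it, and repeat. The code below (an assumed small symmetric test matrix and simple Hotelling deflation, far simpler than the paper's transfer-matrix strategy) recovers the two leading modes:

```python
import numpy as np

def power_modes(A, k=2, tol=1e-12, maxit=10000):
    """Leading k eigenpairs of a symmetric matrix via power iteration plus
    Hotelling deflation (assumes positive, well-separated leading eigenvalues)."""
    n = A.shape[0]
    B = A.astype(float).copy()
    vals, vecs = [], []
    for _ in range(k):
        v = np.ones(n) / np.sqrt(n)
        for _ in range(maxit):
            w = B @ v
            w /= np.linalg.norm(w)
            done = np.linalg.norm(w - v) < tol
            v = w
            if done:
                break
        lam = v @ (B @ v)              # Rayleigh quotient of the converged vector
        vals.append(lam); vecs.append(v)
        B = B - lam * np.outer(v, v)   # deflate the converged mode
    return np.array(vals), np.column_stack(vecs)

# small symmetric test matrix with distinct positive eigenvalues
A = np.array([[5.0, 1.0, 0.0, 0.0],
              [1.0, 4.0, 1.0, 0.0],
              [0.0, 1.0, 3.0, 1.0],
              [0.0, 0.0, 1.0, 2.0]])
vals, vecs = power_modes(A, k=2)
```

In the Monte Carlo setting the same deflation idea must be realized statistically, which is what the weight cancellation and all-mode population control in the abstract accomplish.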
Numerical Investigation of Dual-Mode Scramjet Combustor with Large Upstream Interaction
NASA Technical Reports Server (NTRS)
Mohieldin, T. O.; Tiwari, S. N.; Reubush, David E. (Technical Monitor)
2004-01-01
Dual-mode scramjet combustor configuration with significant upstream interaction is investigated numerically. The possibility of scaling the domain to accelerate the convergence and reduce the computational time is explored. The supersonic combustor configuration was selected to provide an understanding of key features of upstream interaction and to identify physical and numerical issues relating to modeling of dual-mode configurations. The numerical analysis was performed with vitiated air at a freestream Mach number of 2.5 using hydrogen as the sonic injectant. Results are presented for two-dimensional models and a three-dimensional jet-to-jet symmetric geometry. Comparisons are made with experimental results. Two-dimensional and three-dimensional results show substantial oblique shock train reaching upstream of the fuel injectors. Flow characteristics slow numerical convergence, while the upstream interaction slowly increases with further iterations. As the flow field develops, the symmetric assumption breaks down. A large separation zone develops and extends further upstream of the step. This asymmetric flow structure is not seen in the experimental data. Results obtained using a sub-scale domain (both two-dimensional and three-dimensional) qualitatively recover the flow physics obtained from full-scale simulations. All results show that numerical modeling using a scaled geometry provides good agreement with full-scale numerical results and experimental results for this configuration. This study supports the argument that numerical scaling is useful in simulating dual-mode scramjet combustor flowfields and could provide an excellent convergence acceleration technique for dual-mode simulations.
An efficient higher order family of root finders
NASA Astrophysics Data System (ADS)
Petkovic, Ljiljana D.; Rancic, Lidija; Petkovic, Miodrag S.
2008-06-01
A one parameter family of iterative methods for the simultaneous approximation of simple complex zeros of a polynomial, based on a cubically convergent Hansen-Patrick's family, is studied. We show that the convergence of the basic family of the fourth order can be increased to five and six using Newton's and Halley's corrections, respectively. Since these corrections use the already calculated values, the computational efficiency of the accelerated methods is significantly increased. Further acceleration is achieved by applying the Gauss-Seidel approach (single-step mode). One of the most important problems in solving nonlinear equations, the construction of initial conditions which provide both the guaranteed and fast convergence, is considered for the proposed accelerated family. These conditions are computationally verifiable; they depend only on the polynomial coefficients, its degree and initial approximations, which is of practical importance. Some modifications of the considered family, providing the computation of multiple zeros of polynomials and simple zeros of a wide class of analytic functions, are also studied. Numerical examples demonstrate the convergence properties of the presented family of root-finding methods.
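The setting of the abstract, iterating on all zeros of a polynomial simultaneously, is easiest to see in the classic Weierstrass (Durand-Kerner) method, a simpler relative of the Hansen-Patrick family studied above. The sketch below is the total-step variant; the Gauss-Seidel (single-step) mode mentioned in the abstract would reuse each updated root immediately within the sweep. The cubic test polynomial and the standard starting points are illustrative assumptions:

```python
import numpy as np

def durand_kerner(coeffs, tol=1e-12, maxit=500):
    """Weierstrass (Durand-Kerner) total-step iteration approximating all roots
    of a monic polynomial simultaneously (coeffs listed highest degree first)."""
    n = len(coeffs) - 1
    z = np.array([(0.4 + 0.9j) ** k for k in range(n)])  # standard distinct starts
    for it in range(1, maxit + 1):
        w = np.empty_like(z)
        for i in range(n):
            p = np.polyval(coeffs, z[i])
            q = np.prod([z[i] - z[j] for j in range(n) if j != i])
            w[i] = z[i] - p / q          # Weierstrass correction for root i
        if np.max(np.abs(w - z)) < tol:
            return w, it
        z = w
    return z, maxit

roots, iters = durand_kerner([1.0, -6.0, 11.0, -6.0])    # (x-1)(x-2)(x-3)
```

Higher-order families like Hansen-Patrick with Newton or Halley corrections trade a costlier update step for fewer iterations; the simultaneous structure is the same.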
NASA Astrophysics Data System (ADS)
Ebrahimi, Mehdi; Jahangirian, Alireza
2017-12-01
An efficient strategy is presented for global shape optimization of wing sections with a parallel genetic algorithm. Several computational techniques are applied to increase the convergence rate and the efficiency of the method. A variable fidelity computational evaluation method is applied in which the expensive Navier-Stokes flow solver is complemented by an inexpensive multi-layer perceptron neural network for the objective function evaluations. A population dispersion method that consists of two phases, of exploration and refinement, is developed to improve the convergence rate and the robustness of the genetic algorithm. Owing to the nature of the optimization problem, a parallel framework based on the master/slave approach is used. The outcomes indicate that the method is able to find the global optimum with significantly lower computational time in comparison to the conventional genetic algorithm.
Hundreds of Genes Experienced Convergent Shifts in Selective Pressure in Marine Mammals
Chikina, Maria; Robinson, Joseph D.; Clark, Nathan L.
2016-01-01
Abstract Mammal species have made the transition to the marine environment several times, and their lineages represent one of the classical examples of convergent evolution in morphological and physiological traits. Nevertheless, the genetic mechanisms of their phenotypic transition are poorly understood, and investigations into convergence at the molecular level have been inconclusive. While past studies have searched for convergent changes at specific amino acid sites, we propose an alternative strategy to identify those genes that experienced convergent changes in their selective pressures, visible as changes in evolutionary rate specifically in the marine lineages. We present evidence of widespread convergence at the gene level by identifying parallel shifts in evolutionary rate during three independent episodes of mammalian adaptation to the marine environment. Hundreds of genes accelerated their evolutionary rates in all three marine mammal lineages during their transition to aquatic life. These marine-accelerated genes are highly enriched for pathways that control recognized functional adaptations in marine mammals, including muscle physiology, lipid metabolism, sensory systems, and skin and connective tissue. The accelerations resulted from both adaptive evolution as seen in skin and lung genes, and loss of function as in gustatory and olfactory genes. In regard to sensory systems, this finding provides further evidence that reduced senses of taste and smell are ubiquitous in marine mammals. Our analysis demonstrates the feasibility of identifying genes underlying convergent organism-level characteristics on a genome-wide scale and without prior knowledge of adaptations, and provides a powerful approach for investigating the physiological functions of mammalian genes. PMID:27329977
Multi-chain Markov chain Monte Carlo methods for computationally expensive models
NASA Astrophysics Data System (ADS)
Huang, M.; Ray, J.; Ren, H.; Hou, Z.; Bao, J.
2017-12-01
Markov chain Monte Carlo (MCMC) methods are used to infer model parameters from observational data. The parameters are inferred as probability densities, thus capturing estimation error due to sparsity of the data, and the shortcomings of the model. Multiple communicating chains executing the MCMC method have the potential to explore the parameter space better, and conceivably accelerate the convergence to the final distribution. We present results from tests conducted with the multi-chain method to show when the acceleration occurs; for loose convergence tolerances, the multiple chains do not make much of a difference. The ensemble of chains also seems to have the ability to accelerate the convergence of a few chains that might start from suboptimal starting points. Finally, we show the performance of the chains in the estimation of O(10) parameters using computationally expensive forward models such as the Community Land Model, where the sampling burden is distributed over multiple chains.
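A common way to judge whether parallel chains have converged to a common distribution is the Gelman-Rubin potential scale reduction factor. The sketch below (a random-walk Metropolis sampler targeting an assumed standard normal, not the Community Land Model posterior) runs four chains from dispersed starting points and checks their agreement:

```python
import numpy as np

def metropolis_chain(logp, x0, steps, step_size, rng):
    """Random-walk Metropolis chain targeting the density exp(logp)."""
    x = x0
    out = np.empty(steps)
    for i in range(steps):
        prop = x + step_size * rng.standard_normal()
        if rng.random() < np.exp(min(0.0, logp(prop) - logp(x))):
            x = prop                     # accept the proposal
        out[i] = x
    return out

def gelman_rubin(chains):
    """Potential scale reduction factor (R-hat) across parallel chains."""
    m, n = chains.shape
    means = chains.mean(axis=1)
    W = chains.var(axis=1, ddof=1).mean()   # within-chain variance
    B = n * means.var(ddof=1)               # between-chain variance
    var_hat = (n - 1) / n * W + B / n
    return np.sqrt(var_hat / W)

# four chains targeting a standard normal, started from dispersed points
rng = np.random.default_rng(1)
logp = lambda x: -0.5 * x * x
chains = np.stack([metropolis_chain(logp, x0, 5000, 1.0, rng)
                   for x0 in (-4.0, -1.0, 1.0, 4.0)])
```

R-hat values near 1 indicate the dispersed chains have mixed into the same distribution; values well above 1 flag the suboptimal-start situation the abstract describes.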
DOE Office of Scientific and Technical Information (OSTI.GOV)
Spanner, Michael; Batista, Victor S.; Brumer, Paul
2005-02-22
The utility of the Filinov integral conditioning technique, as implemented in semiclassical initial value representation (SC-IVR) methods, is analyzed for a number of regular and chaotic systems. For nonchaotic systems of low dimensionality, the Filinov technique is found to be quite ineffective at accelerating convergence of semiclassical calculations since, contrary to the conventional wisdom, the semiclassical integrands usually do not exhibit significant phase oscillations in regions of large integrand amplitude. In the case of chaotic dynamics, it is found that the regular component is accurately represented by the SC-IVR, even when using the Filinov integral conditioning technique, but that quantum manifestations of chaotic behavior were easily overdamped by the filtering technique. Finally, it is shown that the level of approximation introduced by the Filinov filter is, in general, comparable to the simpler ad hoc truncation procedure introduced by Kay [J. Chem. Phys. 101, 2250 (1994)].
An Approach to Speed up Single-Frequency PPP Convergence with Quad-Constellation GNSS and GIM.
Cai, Changsheng; Gong, Yangzhao; Gao, Yang; Kuang, Cuilin
2017-06-06
The single-frequency precise point positioning (PPP) technique has attracted increasing attention due to its high accuracy and low cost. However, a very long convergence time, normally a few hours, is required in order to achieve a positioning accuracy level of a few centimeters. In this study, an approach is proposed to accelerate the single-frequency PPP convergence by combining quad-constellation global navigation satellite system (GNSS) and global ionospheric map (GIM) data. In this proposed approach, the GPS, GLONASS, BeiDou, and Galileo observations are directly used in an uncombined observation model and as a result the ionospheric and hardware delay (IHD) can be estimated together as a single unknown parameter. The IHD values acquired from the GIM product and the multi-GNSS differential code bias (DCB) product are then utilized as pseudo-observables of the IHD parameter in the observation model. A time-varying weight scheme has also been proposed for the pseudo-observables to gradually decrease their contribution to the position solutions during the convergence period. To evaluate the proposed approach, datasets from twelve Multi-GNSS Experiment (MGEX) stations on seven consecutive days are processed and analyzed. The numerical results indicate that the single-frequency PPP with quad-constellation GNSS and GIM data is able to reduce the convergence time by 56%, 47%, and 41% in the east, north, and up directions compared to the GPS-only single-frequency PPP.
An Initial Multi-Domain Modeling of an Actively Cooled Structure
NASA Technical Reports Server (NTRS)
Steinthorsson, Erlendur
1997-01-01
A methodology for the simulation of turbine cooling flows is being developed. The methodology seeks to combine numerical techniques that optimize both accuracy and computational efficiency. Key components of the methodology include the use of multiblock grid systems for modeling complex geometries, and multigrid convergence acceleration for enhancing computational efficiency in highly resolved fluid flow simulations. The use of the methodology has been demonstrated in several turbomachinery flow and heat transfer studies. Ongoing and future work involves implementing additional turbulence models, improving computational efficiency, and adding adaptive mesh refinement (AMR).
Application of an unstructured grid flow solver to planes, trains and automobiles
NASA Technical Reports Server (NTRS)
Spragle, Gregory S.; Smith, Wayne A.; Yadlin, Yoram
1993-01-01
Rampant, an unstructured flow solver developed at Fluent Inc., is used to compute three-dimensional, viscous, turbulent, compressible flow fields within complex solution domains. Rampant is an explicit, finite-volume flow solver capable of computing flow fields using either triangular (2d) or tetrahedral (3d) unstructured grids. Local time stepping, implicit residual smoothing, and multigrid techniques are used to accelerate the convergence of the explicit scheme. The paper describes the Rampant flow solver and presents flow field solutions about a plane, train, and automobile.
Preconditioned upwind methods to solve 3-D incompressible Navier-Stokes equations for viscous flows
NASA Technical Reports Server (NTRS)
Hsu, C.-H.; Chen, Y.-M.; Liu, C. H.
1990-01-01
A computational method for calculating low-speed viscous flowfields is developed. The method uses the implicit upwind-relaxation finite-difference algorithm with a nonsingular eigensystem to solve the preconditioned, three-dimensional, incompressible Navier-Stokes equations in curvilinear coordinates. The technique of local time stepping is incorporated to accelerate the rate of convergence to a steady-state solution. An extensive study of optimizing the preconditioned system is carried out for two viscous flow problems. Computed results are compared with analytical solutions and experimental data.
A multistage time-stepping scheme for the Navier-Stokes equations
NASA Technical Reports Server (NTRS)
Swanson, R. C.; Turkel, E.
1985-01-01
A class of explicit multistage time-stepping schemes is used to construct an algorithm for solving the compressible Navier-Stokes equations. Flexibility in treating arbitrary geometries is obtained with a finite-volume formulation. Numerical efficiency is achieved by employing techniques for accelerating convergence to steady state. Computer processing is enhanced through vectorization of the algorithm. The scheme is evaluated by solving laminar and turbulent flows over a flat plate and an NACA 0012 airfoil. Numerical results are compared with theoretical solutions or other numerical solutions and/or experimental data.
Fast divide-and-conquer algorithm for evaluating polarization in classical force fields
NASA Astrophysics Data System (ADS)
Nocito, Dominique; Beran, Gregory J. O.
2017-03-01
Evaluation of the self-consistent polarization energy forms a major computational bottleneck in polarizable force fields. In large systems, the linear polarization equations are typically solved iteratively with techniques based on Jacobi iterations (JI) or preconditioned conjugate gradients (PCG). Two new variants of JI are proposed here that exploit domain decomposition to accelerate the convergence of the induced dipoles. The first, divide-and-conquer JI (DC-JI), is a block Jacobi algorithm which solves the polarization equations within non-overlapping sub-clusters of atoms directly via Cholesky decomposition, and iterates to capture interactions between sub-clusters. The second, fuzzy DC-JI, achieves further acceleration by employing overlapping blocks. Fuzzy DC-JI is analogous to an additive Schwarz method, but with distance-based weighting when averaging the fuzzy dipoles from different blocks. Key to the success of these algorithms is the use of K-means clustering to identify natural atomic sub-clusters automatically for both algorithms and to determine the appropriate weights in fuzzy DC-JI. The algorithm employs knowledge of the 3-D spatial interactions to group important elements in the 2-D polarization matrix. When coupled with direct inversion in the iterative subspace (DIIS) extrapolation, fuzzy DC-JI/DIIS in particular converges in a comparable number of iterations as PCG, but with lower computational cost per iteration. In the end, the new algorithms demonstrated here accelerate the evaluation of the polarization energy by 2-3 fold compared to existing implementations of PCG or JI/DIIS.
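The core of the DC-JI idea, correcting blocks of unknowns by direct solves with their diagonal blocks rather than one unknown at a time, can be sketched on a generic SPD linear system. The test matrix and block partition below are illustrative assumptions (no K-means clustering, fuzzy overlap, or DIIS extrapolation as in the paper), and `np.linalg.solve` stands in for the per-block Cholesky factorization:

```python
import numpy as np

def block_jacobi(A, b, blocks, tol=1e-10, maxit=50000):
    """Block Jacobi relaxation: every index block I is corrected by a direct
    solve with its diagonal block A[I, I]."""
    x = np.zeros_like(b)
    for it in range(1, maxit + 1):
        r = b - A @ x                   # residual from the previous sweep
        if np.linalg.norm(r) < tol:
            return x, it
        for I in blocks:
            x[I] += np.linalg.solve(A[np.ix_(I, I)], r[I])
    return x, maxit

# SPD test system; compare point blocks vs. size-4 blocks
n = 16
A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
b = np.ones(n)
point_blocks = [np.array([i]) for i in range(n)]
four_blocks = [np.arange(i, i + 4) for i in range(0, n, 4)]
x_pt, it_pt = block_jacobi(A, b, point_blocks)
x_bl, it_bl = block_jacobi(A, b, four_blocks)
```

Because each block solve captures the strong couplings inside a cluster exactly, the block variant needs far fewer sweeps than the point Jacobi it generalizes, which is the effect DC-JI exploits for the polarization equations.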
Particle Acceleration in Two Converging Shocks
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang, Xin; Wang, Na; Shan, Hao
2017-06-20
Observations by spacecraft such as ACE, STEREO, and others show that there are proton spectral “breaks” with energy E_br at 1–10 MeV in some large CME-driven shocks. Generally, a single shock with the diffusive acceleration mechanism would not predict the “broken” energy spectrum. The present paper focuses on two converging shocks to identify this energy spectral feature. In this case, the converging shocks comprise one forward CME-driven shock on 2006 December 13 and another backward Earth bow shock. We simulate the detailed particle acceleration processes in the region of the converging shocks using the Monte Carlo method. As a result, we not only obtain an extended energy spectrum with an energy “tail” up to a few tens of MeV, higher than that in the previous single shock model, but we also find an energy spectral “break” occurring at ∼5.5 MeV. The predicted energy spectral shape is consistent with observations from multiple spacecraft. The spectral “break,” then, in this case is caused by the interaction between the CME shock and Earth's bow shock, and otherwise would not be present if Earth were not in the path of the CME.
The solution of the point kinetics equations via converged accelerated Taylor series (CATS)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ganapol, B.; Picca, P.; Previti, A.
This paper deals with finding accurate solutions of the point kinetics equations including non-linear feedback, in a fast, efficient and straightforward way. A truncated Taylor series is coupled to continuous analytical continuation to provide the recurrence relations to solve the ordinary differential equations of point kinetics. Non-linear (Wynn-epsilon) and linear (Romberg) convergence accelerations are employed to provide highly accurate results for the evaluation of Taylor series expansions and extrapolated values of neutron and precursor densities at desired edits. The proposed Converged Accelerated Taylor Series, or CATS, algorithm automatically performs successive mesh refinements until the desired accuracy is obtained, making use of the intermediate results for converged initial values at each interval. Numerical performance is evaluated using case studies available from the literature. Nearly perfect agreement is found with the literature results generally considered most accurate. Benchmark quality results are reported for several cases of interest including step, ramp, zigzag and sinusoidal prescribed insertions and insertions with adiabatic Doppler feedback. A larger than usual (9) number of digits is included to encourage honest benchmarking. The benchmark is then applied to the enhanced piecewise constant algorithm (EPCA) currently being developed by the second author. (authors)
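The non-linear (Wynn-epsilon) acceleration mentioned above can be illustrated on a generic slowly converging series. The sketch below is a textbook implementation of the epsilon algorithm, not the CATS code; the alternating harmonic series is just a convenient test sequence.

```python
import math

def wynn_epsilon(s):
    """Accelerate a sequence of partial sums with Wynn's epsilon algorithm.
    Returns the last even-column entry of the epsilon table."""
    n = len(s)
    prev = [0.0] * n          # epsilon column k-1 (starts as the k = -1 zeros)
    curr = list(s)            # column k = 0: the partial sums themselves
    for _ in range(n - 1):
        nxt = [prev[m + 1] + 1.0 / (curr[m + 1] - curr[m])
               for m in range(len(curr) - 1)]
        prev, curr = curr, nxt
    # Even-numbered columns approximate the limit; odd ones are auxiliary.
    return curr[0] if n % 2 == 1 else prev[0]

# Slowly converging alternating series: sum (-1)^(k+1)/k -> ln 2.
partials, s = [], 0.0
for k in range(1, 10):
    s += (-1) ** (k + 1) / k
    partials.append(s)

estimate = wynn_epsilon(partials)   # far more accurate than the raw 9-term sum
```

With only nine terms the accelerated estimate is accurate to many digits, while the raw partial sum still errs in the second decimal place.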
A comparison of acceleration methods for solving the neutron transport k-eigenvalue problem
NASA Astrophysics Data System (ADS)
Willert, Jeffrey; Park, H.; Knoll, D. A.
2014-10-01
Over the past several years a number of papers have been written describing modern techniques for numerically computing the dominant eigenvalue of the neutron transport criticality problem. These methods fall into two distinct categories. The first category of methods rewrites the multi-group k-eigenvalue problem as a nonlinear system of equations and solves the resulting system using either a Jacobian-Free Newton-Krylov (JFNK) method or Nonlinear Krylov Acceleration (NKA), a variant of Anderson Acceleration. These methods are generally successful in significantly reducing the number of transport sweeps required to compute the dominant eigenvalue. The second category of methods utilizes Moment-Based Acceleration (or High-Order/Low-Order (HOLO) Acceleration). These methods solve a sequence of modified diffusion eigenvalue problems whose solutions converge to the solution of the original transport eigenvalue problem. This second class of methods is, in our experience, always superior to the first, as most of the computational work is eliminated by the acceleration from the LO diffusion system. In this paper, we review each of these methods. Our computational results support our claim that the choice of which nonlinear solver to use, JFNK or NKA, should be secondary. The primary computational savings result from the implementation of a HOLO algorithm. We display computational results for a series of challenging multi-dimensional test problems.
Photonic band structures solved by a plane-wave-based transfer-matrix method.
Li, Zhi-Yuan; Lin, Lan-Lan
2003-04-01
Transfer-matrix methods adopting a plane-wave basis have been routinely used to calculate the scattering of electromagnetic waves by general multilayer gratings and photonic crystal slabs. In this paper we show that this technique, when combined with Bloch's theorem, can be extended to solve the photonic band structure for 2D and 3D photonic crystal structures. Three different eigensolution schemes to solve the traditional band diagrams along high-symmetry lines in the first Brillouin zone of the crystal are discussed. Optimal rules for the Fourier expansion over the dielectric function and electromagnetic fields with discontinuities occurring at the boundary of different material domains have been employed to accelerate the convergence of numerical computation. Application of this method to an important class of 3D layer-by-layer photonic crystals reveals the superior convergence of this approach over the conventional plane-wave expansion method.
A nonlinear relaxation/quasi-Newton algorithm for the compressible Navier-Stokes equations
NASA Technical Reports Server (NTRS)
Edwards, Jack R.; Mcrae, D. S.
1992-01-01
A highly efficient implicit method for the computation of steady, two-dimensional compressible Navier-Stokes flowfields is presented. The discretization of the governing equations is hybrid in nature, with flux-vector splitting utilized in the streamwise direction and central differences with flux-limited artificial dissipation used for the transverse fluxes. Line Jacobi relaxation is used to provide a suitable initial guess for a new nonlinear iteration strategy based on line Gauss-Seidel sweeps. The applicability of quasi-Newton methods as convergence accelerators for this and other line relaxation algorithms is discussed, and efficient implementations of such techniques are presented. Convergence histories and comparisons with experimental data are presented for supersonic flow over a flat plate and for several high-speed compression corner interactions. Results indicate a marked improvement in computational efficiency over more conventional upwind relaxation strategies, particularly for flowfields containing large pockets of streamwise subsonic flow.
Nonnegative least-squares image deblurring: improved gradient projection approaches
NASA Astrophysics Data System (ADS)
Benvenuto, F.; Zanella, R.; Zanni, L.; Bertero, M.
2010-02-01
The least-squares approach to image deblurring leads to an ill-posed problem. The addition of the nonnegativity constraint, when appropriate, does not provide regularization, although, as far as we know, a thorough investigation of the ill-posedness of the resulting constrained least-squares problem has yet to be carried out. Iterative methods, converging to nonnegative least-squares solutions, have been proposed. Some of them have the 'semi-convergence' property, i.e. early stopping of the iteration provides 'regularized' solutions. In this paper we consider two of these methods: the projected Landweber (PL) method and the iterative image space reconstruction algorithm (ISRA). Although they work well in many instances, they are not frequently used in practice because, in general, they require a large number of iterations before providing a sensible solution. Therefore, the main purpose of this paper is to refresh these methods by increasing their efficiency. Starting from the remark that PL and ISRA require only the computation of the gradient of the functional, we propose the application to these algorithms of special acceleration techniques that have been recently developed in the area of the gradient methods. In particular, we propose the application of efficient step-length selection rules and line-search strategies. Moreover, remarking that ISRA is a scaled gradient algorithm, we evaluate its behaviour in comparison with a recent scaled gradient projection (SGP) method for image deblurring. Numerical experiments demonstrate that the accelerated methods still exhibit the semi-convergence property, with a considerable gain both in the number of iterations and in the computational time; in particular, SGP clearly appears to be the most efficient method.
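The projected Landweber method referred to above is simply a gradient step on the least-squares functional followed by projection onto the nonnegative orthant. Below is a minimal sketch with a fixed step length; the paper's accelerated variants instead use adaptive step-length selection rules and line searches, which are not shown here.

```python
import numpy as np

def projected_landweber(A, b, n_iter=500):
    """Projected Landweber for min ||Ax - b||^2 subject to x >= 0:
    a gradient step with fixed step length, then projection onto the
    nonnegative orthant (componentwise max with zero)."""
    tau = 1.0 / np.linalg.norm(A, 2) ** 2    # safe step: < 2 / ||A||_2^2
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x = np.maximum(0.0, x - tau * (A.T @ (A @ x - b)))
    return x
```

For an ill-posed blurring operator one would stop this iteration early (semi-convergence); on a well-conditioned test matrix it simply converges to the nonnegative least-squares solution.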
Benzi, Michele; Evans, Thomas M.; Hamilton, Steven P.; ...
2017-03-05
Here, we consider hybrid deterministic-stochastic iterative algorithms for the solution of large, sparse linear systems. Starting from a convergent splitting of the coefficient matrix, we analyze various types of Monte Carlo acceleration schemes applied to the original preconditioned Richardson (stationary) iteration. We expect that these methods will have considerable potential for resiliency to faults when implemented on massively parallel machines. We also establish sufficient conditions for the convergence of the hybrid schemes, and we investigate different types of preconditioners including sparse approximate inverses. Numerical experiments on linear systems arising from the discretization of partial differential equations are presented.
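The preconditioned Richardson iteration that serves as the starting point for these hybrid schemes follows directly from a splitting A = P - Q. The sketch below is purely deterministic (the Monte Carlo acceleration layer is not shown), and the Jacobi splitting used in the example is an illustrative choice, not the paper's preconditioner.

```python
import numpy as np

def richardson(A, b, P_inv, tol=1e-12, max_iter=1000):
    """Preconditioned (stationary) Richardson iteration for Ax = b,
    built from a splitting A = P - Q:  x <- x + P^{-1}(b - A x).
    Converges when the iteration matrix I - P^{-1}A has spectral
    radius less than one, i.e. when the splitting is convergent."""
    x = np.zeros_like(b, dtype=float)
    for _ in range(max_iter):
        r = b - A @ x
        if np.linalg.norm(r) <= tol * np.linalg.norm(b):
            break
        x = x + P_inv @ r
    return x
```

Taking P as the diagonal of a strictly diagonally dominant A gives a convergent splitting, which is the setting the convergence conditions in the abstract generalize.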
Convergence Rates of Finite Difference Stochastic Approximation Algorithms
2016-06-01
This report studies the convergence rates of the Kiefer-Wolfowitz algorithm and the mirror descent algorithm under various updating schemes using finite differences as gradient approximations. It is shown that the convergence of these algorithms can be accelerated by controlling the implementation of the finite-difference gradient approximations.
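The kind of finite-difference gradient approximation studied in such algorithms can be illustrated with a central-difference scheme driving plain gradient descent. This is a generic sketch, not the report's Kiefer-Wolfowitz or mirror descent analysis; the quadratic test function, step size, and difference interval are arbitrary choices.

```python
import numpy as np

def fd_gradient(f, x, h=1e-6):
    """Central-difference approximation of grad f at x (O(h^2) accurate)."""
    g = np.zeros_like(x)
    for i in range(x.size):
        e = np.zeros_like(x)
        e[i] = h
        g[i] = (f(x + e) - f(x - e)) / (2.0 * h)
    return g

def fd_descent(f, x0, lr=0.1, n_iter=300, h=1e-6):
    """Gradient descent driven entirely by finite-difference gradients."""
    x = np.array(x0, dtype=float)
    for _ in range(n_iter):
        x = x - lr * fd_gradient(f, x, h)
    return x
```

In the stochastic-approximation setting, f would be a noisy observation and the difference interval h would shrink with the iteration count; here it is held fixed for simplicity.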
Anderson acceleration and application to the three-temperature energy equations
NASA Astrophysics Data System (ADS)
An, Hengbin; Jia, Xiaowei; Walker, Homer F.
2017-10-01
The Anderson acceleration method is an algorithm for accelerating the convergence of fixed-point iterations, including the Picard method. Anderson acceleration was first proposed in 1965 and has for some years been used successfully to accelerate the convergence of self-consistent field iterations in electronic-structure computations. Recently, the method has attracted growing attention in other application areas and among numerical analysts. Compared with a Newton-like method, an advantage of Anderson acceleration is that there is no need to form the Jacobian matrix. Thus the method is easy to implement. In this paper, an Anderson-accelerated Picard method is employed to solve the three-temperature energy equations, which are a type of strongly nonlinear radiation-diffusion equations. Two strategies are used to improve the robustness of the Anderson acceleration method. One strategy is to adjust the iterates when necessary to satisfy the physical constraint. Another strategy is to monitor and, if necessary, reduce the matrix condition number of the least-squares problem in the Anderson-acceleration implementation so that numerical stability can be guaranteed. Numerical results show that the Anderson-accelerated Picard method can solve the three-temperature energy equations efficiently. Compared with the Picard method without acceleration, Anderson acceleration can reduce the number of iterations by at least half. A comparison between a Jacobian-free Newton-Krylov method, the Picard method, and the Anderson-accelerated Picard method is conducted in this paper.
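A compact implementation of Anderson acceleration for a generic fixed-point map x <- g(x) shows why no Jacobian is needed: only a small least-squares problem over recent residual differences is solved each step. This is a standard textbook form, without the paper's constraint-adjustment and condition-number safeguards; the window depth m and the test map are illustrative.

```python
import numpy as np

def anderson(g, x0, m=5, tol=1e-10, max_iter=100):
    """Anderson acceleration of the fixed-point iteration x <- g(x).
    Only residual differences enter a small least-squares problem;
    no Jacobian matrix is ever formed."""
    x = np.asarray(x0, dtype=float)
    X, G = [], []                      # histories of iterates and images g(x)
    for _ in range(max_iter):
        gx = g(x)
        f = gx - x                     # fixed-point residual
        if np.linalg.norm(f) < tol:
            return gx
        X.append(x)
        G.append(gx)
        if len(X) > m + 1:             # keep a sliding window of m + 1 pairs
            X.pop(0)
            G.pop(0)
        if len(X) == 1:
            x = gx                     # first step: plain Picard
        else:
            k = len(X) - 1
            # Differences of residuals and of images over the window.
            dF = np.array([(G[i+1] - X[i+1]) - (G[i] - X[i]) for i in range(k)]).T
            dG = np.array([G[i+1] - G[i] for i in range(k)]).T
            gamma, *_ = np.linalg.lstsq(dF, f, rcond=None)
            x = gx - dG @ gamma        # mixed (accelerated) update
    return x
```

The least-squares matrix dF can become ill-conditioned as the iterates converge, which is exactly why the paper monitors and reduces its condition number.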
An Approach to Speed up Single-Frequency PPP Convergence with Quad-Constellation GNSS and GIM
Cai, Changsheng; Gong, Yangzhao; Gao, Yang; Kuang, Cuilin
2017-01-01
The single-frequency precise point positioning (PPP) technique has attracted increasing attention due to its high accuracy and low cost. However, a very long convergence time, normally a few hours, is required in order to achieve a positioning accuracy level of a few centimeters. In this study, an approach is proposed to accelerate the single-frequency PPP convergence by combining quad-constellation global navigation satellite system (GNSS) and global ionospheric map (GIM) data. In this proposed approach, the GPS, GLONASS, BeiDou, and Galileo observations are directly used in an uncombined observation model and as a result the ionospheric and hardware delay (IHD) can be estimated together as a single unknown parameter. The IHD values acquired from the GIM product and the multi-GNSS differential code bias (DCB) product are then utilized as pseudo-observables of the IHD parameter in the observation model. A time-varying weight scheme has also been proposed for the pseudo-observables to gradually decrease their contribution to the position solutions during the convergence period. To evaluate the proposed approach, datasets from twelve Multi-GNSS Experiment (MGEX) stations on seven consecutive days are processed and analyzed. The numerical results indicate that the single-frequency PPP with quad-constellation GNSS and GIM data is able to reduce the convergence time by 56%, 47%, and 41% in the east, north, and up directions compared to the GPS-only single-frequency PPP. PMID:28587305
NASA Astrophysics Data System (ADS)
Pickworth, L. A.; Hammel, B. A.; Smalyuk, V. A.; MacPhee, A. G.; Scott, H. A.; Robey, H. F.; Landen, O. L.; Barrios, M. A.; Regan, S. P.; Schneider, M. B.; Hoppe, M.; Kohut, T.; Holunga, D.; Walters, C.; Haid, B.; Dayton, M.
2016-07-01
First measurements of hydrodynamic growth near peak implosion velocity in an inertial confinement fusion (ICF) implosion at the National Ignition Facility were obtained using a self-radiographing technique and a preimposed Legendre mode 40, λ = 140 μm, sinusoidal perturbation. These are the first measurements of the total growth at the most unstable mode from acceleration Rayleigh-Taylor achieved in any ICF experiment to date, showing growth of the areal density perturbation of ∼7000×. Measurements were made at convergences of ∼5× to ∼10× at both the waist and pole of the capsule, demonstrating simultaneous measurements of the growth factors from both lines of sight. The areal density growth factors are an order of magnitude larger than prior experimental measurements and differed by ∼2× between the waist and the pole, showing asymmetry in the measured growth factors. These new measurements significantly advance our ability to diagnose perturbations detrimental to ICF implosions, uniquely intersecting the change from an accelerating to decelerating shell, with multiple simultaneous angular views.
NASA Astrophysics Data System (ADS)
Shao, Meiyue; Aktulga, H. Metin; Yang, Chao; Ng, Esmond G.; Maris, Pieter; Vary, James P.
2018-01-01
We describe a number of recently developed techniques for improving the performance of large-scale nuclear configuration interaction calculations on high performance parallel computers. We show the benefit of using a preconditioned block iterative method to replace the Lanczos algorithm that has traditionally been used to perform this type of computation. The rapid convergence of the block iterative method is achieved by a proper choice of starting guesses of the eigenvectors and the construction of an effective preconditioner. These acceleration techniques take advantage of special structure of the nuclear configuration interaction problem which we discuss in detail. The use of a block method also allows us to improve the concurrency of the computation, and take advantage of the memory hierarchy of modern microprocessors to increase the arithmetic intensity of the computation relative to data movement. We also discuss the implementation details that are critical to achieving high performance on massively parallel multi-core supercomputers, and demonstrate that the new block iterative solver is two to three times faster than the Lanczos based algorithm for problems of moderate sizes on a Cray XC30 system.
A structure preserving Lanczos algorithm for computing the optical absorption spectrum
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shao, Meiyue; Jornada, Felipe H. da; Lin, Lin
2016-11-16
We present a new structure preserving Lanczos algorithm for approximating the optical absorption spectrum in the context of solving the full Bethe-Salpeter equation without the Tamm-Dancoff approximation. The new algorithm is based on a structure preserving Lanczos procedure, which exploits the special block structure of Bethe-Salpeter Hamiltonian matrices. A recently developed technique of generalized averaged Gauss quadrature is incorporated to accelerate the convergence. We also establish the connection between our structure preserving Lanczos procedure and several existing Lanczos procedures developed in different contexts. Numerical examples are presented to demonstrate the effectiveness of our Lanczos algorithm.
NASA Technical Reports Server (NTRS)
Elmiligui, Alaa; Cannizzaro, Frank; Melson, N. D.
1991-01-01
A general multiblock method for the solution of the three-dimensional, unsteady, compressible, thin-layer Navier-Stokes equations has been developed. The convective and pressure terms are spatially discretized using Roe's flux differencing technique while the viscous terms are centrally differenced. An explicit Runge-Kutta method is used to advance the solution in time. Local time stepping, adaptive implicit residual smoothing, and the Full Approximation Storage (FAS) multigrid scheme are added to the explicit time stepping scheme to accelerate convergence to steady state. Results for three-dimensional test cases are presented and discussed.
Holstein, Gay R; Rabbitt, Richard D; Martinelli, Giorgio P; Friedrich, Victor L; Boyle, Richard D; Highstein, Stephen M
2004-11-02
The vestibular semicircular canals respond to angular acceleration that is integrated to angular velocity by the biofluid mechanics of the canals and is the primary origin of afferent responses encoding velocity. Surprisingly, some afferents actually report angular acceleration. Our data indicate that hair-cell/afferent synapses introduce a mathematical derivative in these afferents that partially cancels the biomechanical integration and results in discharge rates encoding angular acceleration. We examined the role of convergent synaptic inputs from hair cells to this mathematical differentiation. A significant reduction in the order of the differentiation was observed for low-frequency stimuli after gamma-aminobutyric acid type B receptor antagonist administration. Results demonstrate that gamma-aminobutyric acid participates in shaping the temporal dynamics of afferent responses.
Accelerated convergence for synchronous approximate agreement
NASA Technical Reports Server (NTRS)
Kearns, J. P.; Park, S. K.; Sjogren, J. A.
1988-01-01
The protocol for synchronous approximate agreement presented by Dolev et al. exhibits the undesirable property that a faulty processor, by the dissemination of a value arbitrarily far removed from the values held by good processors, may delay the termination of the protocol by an arbitrary amount of time. Such behavior is clearly undesirable in a fault tolerant dynamic system subject to hard real-time constraints. A mechanism is presented by which editing data suspected of being from Byzantine-failed processors can lead to quicker, predictable convergence to an agreement value. Under specific assumptions about the nature of values transmitted by failed processors relative to those transmitted by good processors, a Monte Carlo simulation is presented whose qualitative results illustrate the trade-off between accelerated convergence and the accuracy of the value agreed upon.
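The effect of editing suspect extreme values can be seen in a toy synchronous model: each good processor discards the largest and smallest value it receives and averages the rest, so a single faulty processor sending arbitrarily large values cannot stall convergence. This is an illustrative simulation only, not the protocol analyzed in the paper; the processor count, fault model, and trimming rule are simplified assumptions.

```python
def agreement_round(good_vals, byz_vals):
    """One synchronous round: processor i receives every good value plus a
    (possibly different) value byz_vals[i] from the single faulty processor,
    discards the largest and smallest value received (editing suspected
    Byzantine data), and adopts the mean of the remainder."""
    new = []
    for i in range(len(good_vals)):
        recv = sorted(good_vals + [byz_vals[i]])
        core = recv[1:-1]          # edit one suspected extreme on each side
        new.append(sum(core) / len(core))
    return new
```

Because any value outside the range of the good processors is always among the discarded extremes, the good values contract toward agreement at a predictable rate regardless of how wild the faulty values are.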
Multiple-grid convergence acceleration of viscous and inviscid flow computations
NASA Technical Reports Server (NTRS)
Johnson, G. M.
1983-01-01
A multiple-grid algorithm for use in efficiently obtaining steady solutions to the Euler and Navier-Stokes equations is presented. The convergence of a simple, explicit fine-grid solution procedure is accelerated on a sequence of successively coarser grids by a coarse-grid information propagation method which rapidly eliminates transients from the computational domain. This use of multiple-gridding to increase the convergence rate results in substantially reduced work requirements for the numerical solution of a wide range of flow problems. Computational results are presented for subsonic and transonic inviscid flows and for laminar and turbulent, attached and separated, subsonic viscous flows. Work reduction factors as large as eight, in comparison to the basic fine-grid algorithm, were obtained. Possibilities for further performance improvement are discussed.
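The coarse-grid acceleration idea can be illustrated in the simplest possible setting: a two-grid cycle for the 1D Poisson equation, with weighted Jacobi smoothing on the fine grid and an exact solve of the residual equation on a grid with half the points. This is a generic model-problem sketch, not the paper's Euler/Navier-Stokes multiple-grid scheme; grid sizes and smoothing counts are illustrative.

```python
import numpy as np

def poisson_matrix(n):
    """Standard 3-point finite-difference matrix for -u'' on n interior points."""
    h = 1.0 / (n + 1)
    return (2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)) / h**2

def smooth(A, u, f, nu=3, omega=2.0 / 3.0):
    """nu sweeps of weighted Jacobi -- damps high-frequency error."""
    d_inv = 1.0 / np.diag(A)
    for _ in range(nu):
        u = u + omega * d_inv * (f - A @ u)
    return u

def two_grid(f, n_cycles=20):
    """Two-grid cycles for -u'' = f on (0,1), homogeneous Dirichlet BCs:
    smooth on the fine grid, solve the residual equation exactly on the
    coarse grid, interpolate the correction back, and smooth again."""
    n = len(f)                   # fine interior points; n odd
    nc = (n - 1) // 2            # coarse interior points
    A, Ac = poisson_matrix(n), poisson_matrix(nc)
    R = np.zeros((nc, n))        # full-weighting restriction
    for j in range(nc):
        R[j, 2 * j:2 * j + 3] = [0.25, 0.5, 0.25]
    P = 2.0 * R.T                # linear interpolation (scaled transpose)
    u = np.zeros(n)
    for _ in range(n_cycles):
        u = smooth(A, u, f)
        e = np.linalg.solve(Ac, R @ (f - A @ u))   # coarse-grid correction
        u = u + P @ e
        u = smooth(A, u, f)
    return u
```

The smoother kills the oscillatory error components the coarse grid cannot represent, while the coarse-grid correction removes the smooth transients in one shot, which is exactly the division of labor exploited by the full multigrid hierarchy.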
The use of the virtual source technique in computing scattering from periodic ocean surfaces.
Abawi, Ahmad T
2011-08-01
In this paper the virtual source technique is used to compute scattering of a plane wave from a periodic ocean surface. The virtual source technique is a method of imposing boundary conditions using virtual sources, with initially unknown complex amplitudes. These amplitudes are then determined by applying the boundary conditions. The fields due to these virtual sources are given by the environment Green's function. In principle, satisfying boundary conditions on an infinite surface requires an infinite number of sources. In this paper, the periodic nature of the surface is employed to populate a single period of the surface with virtual sources and m surface periods are added to obtain scattering from the entire surface. The use of an accelerated sum formula makes it possible to obtain a convergent sum with a relatively small number of terms (∼40). The accuracy of the technique is verified by comparing its results with those obtained using the integral equation technique.
Recent advances in computational-analytical integral transforms for convection-diffusion problems
NASA Astrophysics Data System (ADS)
Cotta, R. M.; Naveira-Cotta, C. P.; Knupp, D. C.; Zotin, J. L. Z.; Pontes, P. C.; Almeida, A. P.
2017-10-01
A unifying overview of the Generalized Integral Transform Technique (GITT) as a computational-analytical approach for solving convection-diffusion problems is presented. This work is aimed at bringing together some of the most recent developments on both accuracy and convergence improvements to this well-established hybrid numerical-analytical methodology for partial differential equations. Special emphasis is given to novel algorithm implementations, all directly connected to enhancing the eigenfunction expansion basis, such as a single domain reformulation strategy for handling complex geometries, an integral balance scheme in dealing with multiscale problems, the adoption of convective eigenvalue problems in formulations with significant convection effects, and the direct integral transformation of nonlinear convection-diffusion problems based on nonlinear eigenvalue problems. Then, selected examples are presented that illustrate the improvement achieved in each class of extension, in terms of convergence acceleration and accuracy gain, which are related to conjugated heat transfer in complex or multiscale microchannel-substrate geometries, multidimensional Burgers equation model, and diffusive metal extraction through polymeric hollow fiber membranes. Numerical results are reported for each application and, where appropriate, critically compared against the traditional GITT scheme without convergence enhancement schemes and commercial or dedicated purely numerical approaches.
NASA Technical Reports Server (NTRS)
Diner, Daniel B. (Inventor)
1991-01-01
Methods for providing stereoscopic image presentation and stereoscopic configurations using stereoscopic viewing systems having converged or parallel cameras may be set up to reduce or eliminate erroneously perceived accelerations and decelerations by proper selection of parameters, such as an image magnification factor, q, and intercamera distance, 2w. For converged cameras, q is selected so that Ve - qwl = 0, where V is the camera distance, e is half the interocular distance of an observer, w is half the intercamera distance, and l is the actual distance from the first nodal point of each camera to the convergence point, and for parallel cameras, q is selected to be equal to e/w. While converged cameras cannot be set up to provide fully undistorted three-dimensional views, they can be set up to provide a linear relationship between real and apparent depth and thus minimize erroneously perceived accelerations and decelerations for three sagittal planes, x = -w, x = 0, and x = +w, which are indicated to the observer. Parallel cameras can be set up to provide fully undistorted three-dimensional views by controlling the location of the observer and by magnification and shifting of left and right images. In addition, the teachings of this disclosure can be used to provide methods of stereoscopic image presentation and stereoscopic camera configurations to produce a nonlinear relation between perceived and real depth, and erroneously produce or enhance perceived accelerations and decelerations in order to provide special effects for entertainment, training, or educational purposes.
NASA Astrophysics Data System (ADS)
Reem, Daniel; De Pierro, Alvaro
2017-04-01
Many problems in science and engineering involve, as part of their solution process, the consideration of a separable function which is the sum of two convex functions, one of them possibly non-smooth. Recently a few works have discussed inexact versions of several accelerated proximal methods aiming at solving this minimization problem. This paper shows that inexact versions of a method of Beck and Teboulle (the fast iterative shrinkage-thresholding algorithm, FISTA) preserve, in a Hilbert space setting, the same (non-asymptotic) rate of convergence under some assumptions on the decay rate of the error terms. The notion of inexactness discussed here seems to be rather simple, but, interestingly, when compared to related works, closely related decay rates of the error terms yield closely related convergence rates. The derivation sheds some light on the somewhat mysterious origin of some parameters which appear in various accelerated methods. A consequence of the analysis is that the accelerated method is perturbation resilient, making it suitable, in principle, for the superiorization methodology. By taking this into account, we re-examine the superiorization methodology and significantly extend its scope. This work was supported by FAPESP 2013/19504-9. The second author was supported also by CNPq grant 306030/2014-4.
NASA Astrophysics Data System (ADS)
Rana, B. M. Jewel; Ahmed, Rubel; Ahmmed, S. F.
2017-06-01
An analysis is carried out to investigate the effects of variable viscosity, thermal radiation, absorption of radiation and cross diffusion past an inclined exponentially accelerated plate under the influence of variable heat and mass transfer. A set of suitable transformations has been used to obtain the non-dimensional coupled governing equations. An explicit finite difference technique has been used to obtain numerical solutions of the present problem. A stability and convergence analysis of the finite difference scheme has been carried out for this problem. Compaq Visual Fortran 6.6a has been used to calculate the numerical results. The effects of various physical parameters on the fluid velocity, temperature, concentration, coefficient of skin friction, rate of heat transfer, rate of mass transfer, streamlines and isotherms on the flow field have been presented graphically and discussed in detail.
NASA Astrophysics Data System (ADS)
Zheng, Lianqing; Yang, Wei
2008-07-01
Recently, the accelerated molecular dynamics (AMD) technique was generalized to realize essential energy space random walks so that further sampling enhancement and effective localized enhanced sampling could be achieved. This method is especially meaningful when essential coordinates of the target events are not known a priori; moreover, the energy space metadynamics method was also introduced so that biasing free energy functions can be robustly generated. Despite the promising features of this method, due to the nonequilibrium nature of the metadynamics recursion, it is challenging to rigorously use the data obtained at the recursion stage to perform equilibrium analysis, such as free energy surface mapping; therefore, a large amount of data would otherwise have to be discarded. To resolve this problem and further improve simulation convergence, as promised in our original paper, we are reporting an alternate approach: the adaptive-length self-healing (ALSH) strategy for AMD simulations; this development is based on a recent self-healing umbrella sampling method. Here, the unit simulation length for each self-healing recursion is increasingly updated based on the Wang-Landau flattening judgment. When the unit simulation length for each update is long enough, all the following unit simulations naturally run into the equilibrium regime. Thereafter, these unit simulations can serve the dual purposes of recursion and equilibrium analysis. As demonstrated in our model studies, applying ALSH strikes a compromise between fast recursion and little waste of nonequilibrium data. As a result, combining all the data obtained from all the unit simulations that are in the equilibrium regime via the weighted histogram analysis method, efficient convergence can be robustly ensured, especially for the purpose of free energy surface mapping.
NASA Technical Reports Server (NTRS)
Kutepov, A. A.; Kunze, D.; Hummer, D. G.; Rybicki, G. B.
1991-01-01
An iterative method based on the use of approximate transfer operators, which was designed initially to solve multilevel NLTE line formation problems in stellar atmospheres, is adapted and applied to the solution of the NLTE molecular band radiative transfer in planetary atmospheres. The matrices to be constructed and inverted are much smaller than those used in the traditional Curtis matrix technique, which makes possible the treatment of more realistic problems using relatively small computers. This technique converges much more rapidly than straightforward iteration between the transfer equation and the equations of statistical equilibrium. A test application of this new technique to the solution of NLTE radiative transfer problems for optically thick and thin bands (the 4.3 micron CO2 band in the Venusian atmosphere and the 4.7 and 2.3 micron CO bands in the earth's atmosphere) is described.
Nikazad, T; Davidi, R; Herman, G. T.
2013-01-01
We study the convergence of a class of accelerated perturbation-resilient block-iterative projection methods for solving systems of linear equations. We prove convergence to a fixed point of an operator even in the presence of summable perturbations of the iterates, irrespective of the consistency of the linear system. For a consistent system, the limit point is a solution of the system. In the inconsistent case, the symmetric version of our method converges to a weighted least squares solution. Perturbation resilience is utilized to approximate the minimum of a convex functional subject to the equations. A main contribution, as compared to previously published approaches to achieving similar aims, is a speed-up of more than an order of magnitude, as demonstrated by applying the methods to problems of image reconstruction from projections. In addition, the accelerated algorithms are illustrated to be better, in a strict sense provided by the method of statistical hypothesis testing, than their unaccelerated versions for the task of detecting small tumors in the brain from X-ray CT projection data. PMID:23440911
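The block-iterative projection methods studied here generalize Kaczmarz-type schemes: the iterate is successively projected onto the solution set of each row block of the linear system. Below is a minimal unaccelerated, perturbation-free sketch for a consistent system; the paper's accelerated, perturbation-resilient variants add substantially more machinery, and the block partition here is an illustrative choice.

```python
import numpy as np

def block_projection(A, b, blocks, n_sweeps=300):
    """Cyclic block-iterative projection (block Kaczmarz) for Ax = b:
    each step applies the minimum-norm correction that makes the current
    iterate satisfy one row block exactly."""
    pinvs = [np.linalg.pinv(A[rows]) for rows in blocks]   # factor once
    x = np.zeros(A.shape[1])
    for _ in range(n_sweeps):
        for rows, Ap in zip(blocks, pinvs):
            x = x + Ap @ (b[rows] - A[rows] @ x)
    return x
```

For a consistent system with full column rank, the cyclic projections converge to the unique solution; the summable-perturbation analysis in the paper guarantees that this limit survives small errors injected at every iterate.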
ERIC Educational Resources Information Center
Kolodzy, Janet; Grant, August E.; DeMars, Tony R.; Wilkinson, Jeffrey S.
2014-01-01
The emergence of the Internet, social media, and digital technologies in the twenty-first century accelerated an evolution in journalism and communication that fit under the broad term of convergence. That evolution changed the relationship between news producers and consumers. It broke down the geographical boundaries in defining our communities,…
Experimental study on a heavy-gas cylinder accelerated by cylindrical converging shock waves
NASA Astrophysics Data System (ADS)
Si, T.; Zhai, Z.; Luo, X.; Yang, J.
2014-01-01
The Richtmyer-Meshkov instability behavior of a heavy-gas cylinder accelerated by a cylindrical converging shock wave is studied experimentally. A curved wall profile is designed based on shock dynamics theory [Phys. Fluids, 22: 041701 (2010)] with an incident planar shock Mach number of 1.2 and a converging angle of in a mm square cross-section shock tube. The cylinder, seeded with glycol droplets, flows vertically through the test section and is illuminated horizontally by a laser sheet. Images obtained one per run by an ICCD (intensified charge-coupled device) camera combined with a pulsed Nd:YAG laser are presented first; the complete evolution of the cylinder is then captured in a single test shot by a high-speed video camera combined with a high-power continuous laser. In this way, the development of both the first counter-rotating vortex pair and a second counter-rotating vortex pair rotating in the opposite direction is observed. The experimental results indicate that the phenomena induced by the converging shock wave and by the reflected shock formed at the center of convergence are distinct from those found in the planar shock case.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhao Xiaozhou; Gan, Weiqun; Xia, Chun
2017-06-01
In this paper, we study numerically how a flux rope (FR) is formed and evolves into the corresponding structure of a coronal mass ejection (CME) driven by photospheric converging motion. A two-and-a-half-dimensional magnetohydrodynamics simulation is conducted in a chromosphere-transition-corona setup. The initial arcade-like linear force-free magnetic field is driven by an imposed slow motion converging toward the magnetic inversion line at the bottom boundary. The convergence brings opposite-polarity magnetic flux to the polarity inversion line, giving rise to the formation of an FR by magnetic reconnection and eventually to the eruption of a CME. During the FR formation, an embedded prominence forms by the levitation of chromospheric material. We confirm that the converging flow is a potential mechanism for the formation of FRs and a possible triggering mechanism for CMEs. We investigate the thermal, dynamical, and magnetic properties of the FR and its embedded prominence by tracking their thermal evolution, analyzing their force balance, and measuring their kinematic quantities. The phase transition from the initiation phase to the acceleration phase of the kinematic evolution of the FR is observed in our simulation. The FR undergoes a series of quasi-static equilibrium states in the initiation phase, while in the acceleration phase the FR is driven by the Lorentz force and impulsive acceleration occurs. The underlying physical reason for the phase transition is the change of the reconnection mechanism from the Sweet-Parker regime to the unsteady bursty regime in the evolving current sheet underneath the FR.
Sheu, R J; Sheu, R D; Jiang, S H; Kao, C H
2005-01-01
Full-scale Monte Carlo simulations of the cyclotron room of the Buddhist Tzu Chi General Hospital were carried out to improve the original inadequate maze design. Variance reduction techniques are indispensable in this study to facilitate the simulations for testing a variety of configurations of shielding modification. The TORT/MCNP manual coupling approach based on the Consistent Adjoint Driven Importance Sampling (CADIS) methodology has been used throughout this study. The CADIS utilises the source and transport biasing in a consistent manner. With this method, the computational efficiency was increased significantly by more than two orders of magnitude and the statistical convergence was also improved compared to the unbiased Monte Carlo run. This paper describes the shielding problem encountered, the procedure for coupling the TORT and MCNP codes to accelerate the calculations and the calculation results for the original and improved shielding designs. In order to verify the calculation results and seek additional accelerations, sensitivity studies on the space-dependent and energy-dependent parameters were also conducted.
Adaptive grid embedding for the two-dimensional flux-split Euler equations. M.S. Thesis
NASA Technical Reports Server (NTRS)
Warren, Gary Patrick
1990-01-01
A numerical algorithm is presented for solving the 2-D flux-split Euler equations using a multigrid method with adaptive grid embedding. The method uses an unstructured data set along with a system of pointers for communication on the irregularly shaped grid topologies. An explicit two-stage time advancement scheme is implemented. A multigrid algorithm is used to provide grid-level communication and to accelerate the convergence of the solution to steady state. Results are presented for a subcritical airfoil and a transonic airfoil with 3 levels of adaptation. Comparisons are made with a structured upwind Euler code that uses the same flux integration techniques as the present algorithm. Good agreement is obtained with converged surface pressure coefficients. The lift coefficients of the adaptive code are within 2.5 percent of the structured code for the subcritical case and within 4.5 percent for the transonic case, using approximately one-third the number of grid points.
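The multigrid acceleration mechanism can be sketched on a model problem: a V-cycle for the 1-D Poisson equation with weighted-Jacobi smoothing, full-weighting restriction, and linear-interpolation prolongation. The Euler solver above is far more involved; this is only an illustration of how coarse-grid correction accelerates convergence.

```python
def smooth(u, f, h, iters=3, w=2.0 / 3.0):
    # weighted-Jacobi smoothing for -u'' = f with zero boundary values
    n = len(u)
    for _ in range(iters):
        new = []
        for i in range(n):
            left = u[i - 1] if i > 0 else 0.0
            right = u[i + 1] if i < n - 1 else 0.0
            new.append((1 - w) * u[i] + w * 0.5 * (left + right + h * h * f[i]))
        u = new
    return u

def residual(u, f, h):
    n = len(u)
    r = []
    for i in range(n):
        left = u[i - 1] if i > 0 else 0.0
        right = u[i + 1] if i < n - 1 else 0.0
        r.append(f[i] - (2.0 * u[i] - left - right) / (h * h))
    return r

def restrict(r):
    # full weighting onto the coarse grid (every other interior point)
    return [0.25 * r[2 * j] + 0.5 * r[2 * j + 1] + 0.25 * r[2 * j + 2]
            for j in range((len(r) - 1) // 2)]

def prolong(ec, n):
    # linear interpolation of the coarse correction back to the fine grid
    e = [0.0] * n
    for j, c in enumerate(ec):
        e[2 * j + 1] = c
    for i in range(0, n, 2):
        left = ec[i // 2 - 1] if i >= 2 else 0.0
        right = ec[i // 2] if i // 2 < len(ec) else 0.0
        e[i] = 0.5 * (left + right)
    return e

def vcycle(u, f, h):
    if len(u) == 1:
        return [f[0] * h * h / 2.0]  # exact solve on the coarsest grid
    u = smooth(u, f, h)
    rc = restrict(residual(u, f, h))
    ec = vcycle([0.0] * len(rc), rc, 2.0 * h)
    u = [ui + ei for ui, ei in zip(u, prolong(ec, len(u)))]
    return smooth(u, f, h)

n, h = 15, 1.0 / 16.0
u, f = [0.0] * n, [1.0] * n
for _ in range(10):
    u = vcycle(u, f, h)
# the exact discrete solution of -u'' = 1 is u(x) = x(1 - x)/2 at the nodes
err = max(abs(u[i] - (i + 1) * h * (1.0 - (i + 1) * h) / 2.0) for i in range(n))
```

A handful of V-cycles reduces the error to near machine precision, whereas plain Jacobi sweeps on the fine grid alone would need thousands of iterations.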
Towards an optimal flow: Density-of-states-informed replica-exchange simulations
Vogel, Thomas; Perez, Danny
2015-11-05
Replica exchange (RE) is one of the most popular enhanced-sampling simulation techniques in use today. Despite widespread successes, RE simulations can sometimes fail to converge in practical amounts of time, e.g., when sampling around phase transitions, or when a few hard-to-find configurations dominate the statistical averages. We introduce a generalized RE scheme, density-of-states-informed RE, that addresses some of these challenges. The key feature of our approach is to inform the simulation with readily available, but commonly unused, information on the density of states of the system as the RE simulation proceeds. This enables two improvements, namely, the introduction of resampling moves that actively move the system towards equilibrium and the continual adaptation of the optimal temperature set. As a consequence of these two innovations, we show that the configuration flow in temperature space is optimized and that the overall convergence of RE simulations can be dramatically accelerated.
CELES: CUDA-accelerated simulation of electromagnetic scattering by large ensembles of spheres
NASA Astrophysics Data System (ADS)
Egel, Amos; Pattelli, Lorenzo; Mazzamuto, Giacomo; Wiersma, Diederik S.; Lemmer, Uli
2017-09-01
CELES is a freely available MATLAB toolbox to simulate light scattering by many spherical particles. Aiming at high computational performance, CELES leverages block-diagonal preconditioning, a lookup-table approach for evaluating costly functions, and massively parallel execution on NVIDIA graphics processing units using the CUDA computing platform. The combination of these techniques makes it possible to efficiently address large electrodynamic problems (>10^4 scatterers) on inexpensive consumer hardware. In this paper, we validate near- and far-field distributions against the well-established multi-sphere T-matrix (MSTM) code and discuss the convergence behavior for ensembles of different sizes, including an exemplary system comprising 10^5 particles.
On the Convergence of an Implicitly Restarted Arnoldi Method
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lehoucq, Richard B.
We show that Sorensen's [35] implicitly restarted Arnoldi method (including its block extension) is simultaneous iteration with an implicit projection step to accelerate convergence to the invariant subspace of interest. By using the geometric convergence theory for simultaneous iteration due to Watkins and Elsner [43], we prove that an implicitly restarted Arnoldi method can achieve a super-linear rate of convergence to the dominant invariant subspace of a matrix. Moreover, we show how an IRAM computes a nested sequence of approximations for the partial Schur decomposition associated with the dominant invariant subspace of a matrix.
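The underlying mechanism, simultaneous (orthogonal) iteration converging geometrically to a dominant invariant subspace, can be sketched as follows. This is a generic illustration on a small symmetric matrix, not an IRAM implementation; the matrix and starting vectors are made up for the example.

```python
def matvec(A, v):
    return [sum(a * x for a, x in zip(row, v)) for row in A]

def orthonormalize(vs):
    # modified Gram-Schmidt on a list of vectors
    out = []
    for v in vs:
        w = v[:]
        for q in out:
            c = sum(a * b for a, b in zip(q, w))
            w = [wi - c * qi for wi, qi in zip(w, q)]
        norm = sum(x * x for x in w) ** 0.5
        out.append([x / norm for x in w])
    return out

def simultaneous_iteration(A, vs, steps=200):
    # repeated multiply-then-orthonormalize; the span converges to the
    # dominant invariant subspace at a rate set by the eigenvalue gap
    vs = orthonormalize(vs)
    for _ in range(steps):
        vs = orthonormalize([matvec(A, v) for v in vs])
    return vs

# the dominant 2-D invariant subspace of this matrix is span{e1, e2}
A = [[4.0, 1.0, 0.0],
     [1.0, 3.0, 0.0],
     [0.0, 0.0, 1.0]]
basis = simultaneous_iteration(A, [[1.0, 0.0, 1.0], [0.0, 1.0, 1.0]])
```

After iterating, the component of the basis outside the dominant subspace (the third coordinate here) decays to zero; the implicit restarts analyzed in the paper accelerate exactly this decay.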
Huang, Hsuan-Ming; Hsiao, Ing-Tsung
2016-01-01
In recent years, there has been increased interest in low-dose X-ray cone beam computed tomography (CBCT) in many fields, including dentistry, guided radiotherapy and small animal imaging. Despite reducing the radiation dose, low-dose CBCT has not gained widespread acceptance in routine clinical practice. In addition to performing more evaluation studies, a fast, high-quality reconstruction algorithm is also required. In this work, we propose an iterative reconstruction method that accelerates ordered-subsets (OS) reconstruction using a power factor, and we combine it with total-variation (TV) minimization. Both simulation and phantom studies were conducted to evaluate the performance of the proposed method. Results show that the proposed method can accelerate conventional OS methods, greatly increasing the convergence speed in early iterations. Moreover, applying TV minimization to the power acceleration scheme can further improve the image quality while preserving the fast convergence rate.
NASA Astrophysics Data System (ADS)
Niki, Hiroshi; Harada, Kyouji; Morimoto, Munenori; Sakakihara, Michio
2004-03-01
Several preconditioned iterative methods reported in the literature have been used to improve the convergence rate of the Gauss-Seidel method. In this article, comparisons between some splittings for such preconditioned matrices are derived on the basis of nonnegative matrix theory. Simple numerical examples are also given.
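One classical preconditioner studied in this literature applies P = I + S to the system before the Gauss-Seidel sweep, where S carries the negated first superdiagonal of A. The sketch below is illustrative only; the matrix is made up, and the paper's contribution is the comparison theorems between such splittings, not this particular code.

```python
def gauss_seidel(A, b, sweeps=50):
    n = len(b)
    x = [0.0] * n
    for _ in range(sweeps):
        for i in range(n):
            s = sum(A[i][j] * x[j] for j in range(n) if j != i)
            x[i] = (b[i] - s) / A[i][i]
    return x

def precondition(A, b):
    # left-multiply Ax = b by P = I + S, where S holds the negated
    # first superdiagonal of A (a classical Gauss-Seidel preconditioner)
    n = len(b)
    PA = [row[:] for row in A]
    Pb = b[:]
    for i in range(n - 1):
        s = -A[i][i + 1]
        for j in range(n):
            PA[i][j] += s * A[i + 1][j]
        Pb[i] += s * b[i + 1]
    return PA, Pb

A = [[1.0, -0.3, -0.2],
     [-0.4, 1.0, -0.3],
     [-0.2, -0.1, 1.0]]
b = [1.0, 1.0, 1.0]
PA, Pb = precondition(A, b)   # equivalent system with zeroed superdiagonal
x = gauss_seidel(PA, Pb)      # Gauss-Seidel on the preconditioned system
```

Since P is unit upper triangular it is nonsingular, so the preconditioned system has the same solution as the original one; the comparison theorems concern when the preconditioned iteration matrix has smaller spectral radius.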
Naming game with biased assimilation over adaptive networks
NASA Astrophysics Data System (ADS)
Fu, Guiyuan; Zhang, Weidong
2018-01-01
The dynamics of a two-word naming game incorporating biased assimilation over an adaptive network are investigated in this paper. First, an extended naming game with biased assimilation (NGBA) is proposed. The hearer in NGBA accepts received information in a biased manner: he may refuse to accept the conveyed word from the speaker with a predefined probability if the conveyed word differs from his current memory. Second, the adaptive network is formulated by rewiring links. Theoretical analysis shows that the population in NGBA eventually reaches global consensus on either A or B. Numerical simulation results show that the stronger the biased assimilation on both words, the slower the convergence, while stronger bias on only one word can slightly accelerate convergence; increasing the population from a relatively small size slows convergence considerably, although this effect becomes minor once the population is large; and adaptively reconnecting existing links can greatly accelerate convergence, especially on a sparsely connected network.
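A minimal sketch of the two-word naming game with biased assimilation follows, using complete mixing rather than the paper's adaptive network; the rejection probability `beta` stands in for the bias strength, and all parameter values are made up for the example.

```python
import random

def ngba_step(agents, beta):
    s, h = random.sample(range(len(agents)), 2)   # speaker, hearer
    word = random.choice(agents[s])
    if word in agents[h]:
        agents[s] = [word]                        # success: both collapse
        agents[h] = [word]
    elif random.random() > beta:                  # biased assimilation:
        agents[h] = agents[h] + [word]            # accept with prob 1 - beta

def run_ngba(n=30, beta=0.2, max_steps=200000, seed=1):
    random.seed(seed)
    agents = [['A'] if i < n // 2 else ['B'] for i in range(n)]
    for step in range(max_steps):
        ngba_step(agents, beta)
        if all(m == agents[0] and len(m) == 1 for m in agents):
            return step                           # consensus reached
    return -1                                     # no consensus in time

steps_to_consensus = run_ngba()
```

Raising `beta` slows the spread of the competing word and hence the convergence, which is the qualitative effect reported above.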
Convergence of the Ponderomotive Guiding Center approximation in the LWFA
NASA Astrophysics Data System (ADS)
Silva, Thales; Vieira, Jorge; Helm, Anton; Fonseca, Ricardo; Silva, Luis
2017-10-01
Plasma accelerators arose as potential candidates for future accelerator technology in the last few decades because of their predicted compactness and low cost. One of the proposed designs for plasma accelerators is based on Laser Wakefield Acceleration (LWFA). However, simulations performed for such systems have to resolve the laser wavelength, which is orders of magnitude shorter than the plasma wavelength. In this context, the Ponderomotive Guiding Center (PGC) algorithm for particle-in-cell (PIC) simulations is a potent tool. The laser is approximated by its envelope, which leads to a speed-up of around 100 times because the laser wavelength need not be resolved. The plasma response is well understood, and comparisons with the full PIC code show excellent agreement. However, for LWFA, the convergence of the self-injected beam parameters, such as energy and charge, was not studied before and has vital importance for the use of the algorithm in predicting the beam parameters. Our goal is a thorough investigation of the stability and convergence of the algorithm in situations of experimental relevance for LWFA. To this end, we perform simulations using the PGC algorithm implemented in the PIC code OSIRIS. To verify the PGC predictions, we compare the results with full PIC simulations. This project has received funding from the European Union's Horizon 2020 research and innovation programme under Grant agreement No 653782.
Li, Bo; Li, Shuang; Wu, Junfeng; Qi, Hongsheng
2018-02-09
This paper establishes a framework of quantum clique gossiping by introducing local clique operations to networks of interconnected qubits. Cliques are local structures in complex networks being complete subgraphs, which can be used to accelerate classical gossip algorithms. Based on cyclic permutations, clique gossiping leads to collective multi-party qubit interactions. We show that at reduced states, these cliques have the same acceleration effects as their roles in accelerating classical gossip algorithms. For randomized selection of cliques, such improved rate of convergence is precisely characterized. On the other hand, the rate of convergence at the coherent states of the overall quantum network is proven to be decided by the spectrum of a mean-square error evolution matrix. Remarkably, the use of larger quantum cliques does not necessarily increase the speed of the network density aggregation, suggesting quantum network dynamics is not entirely decided by its classical topology.
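On the classical side, the clique-acceleration effect can be sketched with a gossip-averaging toy: randomized selection of overlapping cliques, each of which replaces its members' values with the clique average. The graph and values are made up; the quantum protocol replaces this simple averaging with cyclic-permutation multi-qubit interactions, which this sketch does not model.

```python
import random

def clique_gossip(values, cliques, rounds=2000, seed=0):
    random.seed(seed)
    v = values[:]
    for _ in range(rounds):
        c = random.choice(cliques)              # randomized clique selection
        avg = sum(v[i] for i in c) / len(c)
        for i in c:                             # every clique member adopts
            v[i] = avg                          # the local clique average
    return v

vals = [0.0, 1.0, 2.0, 3.0, 4.0, 5.0]
cliques = [[0, 1, 2], [2, 3, 4], [4, 5, 0], [1, 3, 5]]  # overlapping triangles
out = clique_gossip(vals, cliques)
```

Because each clique update is doubly stochastic, the global mean (2.5 here) is preserved while the spread contracts, and all nodes converge to it; larger cliques mix more values per update, which is the acceleration effect the paper quantifies at reduced states.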
Shao, Meiyue; Aktulga, H. Metin; Yang, Chao; ...
2017-09-14
In this paper, we describe a number of recently developed techniques for improving the performance of large-scale nuclear configuration interaction calculations on high performance parallel computers. We show the benefit of using a preconditioned block iterative method to replace the Lanczos algorithm that has traditionally been used to perform this type of computation. The rapid convergence of the block iterative method is achieved by a proper choice of starting guesses of the eigenvectors and the construction of an effective preconditioner. These acceleration techniques take advantage of special structure of the nuclear configuration interaction problem which we discuss in detail. The use of a block method also allows us to improve the concurrency of the computation, and take advantage of the memory hierarchy of modern microprocessors to increase the arithmetic intensity of the computation relative to data movement. Finally, we also discuss the implementation details that are critical to achieving high performance on massively parallel multi-core supercomputers, and demonstrate that the new block iterative solver is two to three times faster than the Lanczos based algorithm for problems of moderate sizes on a Cray XC30 system.
NASA Technical Reports Server (NTRS)
Frady, Gregory P.; Duvall, Lowery D.; Fulcher, Clay W. G.; Laverde, Bruce T.; Hunt, Ronald A.
2011-01-01
A rich body of vibroacoustic test data was recently generated at Marshall Space Flight Center for component-loaded curved orthogrid panels typical of launch vehicle skin structures. The test data were used to anchor computational predictions of a variety of spatially distributed responses including acceleration, strain and component interface force. Transfer functions relating the responses to the input pressure field were generated from finite element based modal solutions and test-derived damping estimates. A diffuse acoustic field model was applied to correlate the measured input sound pressures across the energized panel. This application quantifies the ability to quickly and accurately predict a variety of responses to acoustically energized skin panels with mounted components. Favorable comparisons between the measured and predicted responses were established. The validated models were used to examine vibration response sensitivities to relevant modeling parameters such as pressure patch density, mesh density, weight of the mounted component and model form. Convergence metrics include spectral densities and cumulative root-mean squared (RMS) functions for acceleration, velocity, displacement, strain and interface force. Minimum frequencies for response convergence were established as well as recommendations for modeling techniques, particularly in the early stages of a component design when accurate structural vibration requirements are needed relatively quickly. The results were compared with long-established guidelines for modeling accuracy of component-loaded panels. A theoretical basis for the Response/Pressure Transfer Function (RPTF) approach provides insight into trends observed in the response predictions and confirmed in the test data. The software developed for the RPTF method allows easy replacement of the diffuse acoustic field with other pressure fields such as a turbulent boundary layer (TBL) model suitable for vehicle ascent.
Structural responses using a TBL model were demonstrated, and wind tunnel tests have been proposed to anchor the predictions and provide new insight into modeling approaches for this environment. Finally, design load factors were developed from the measured and predicted responses and compared with those derived from traditional techniques such as historical Mass Acceleration Curves and Barrett scaling methods for acreage and component-loaded panels.
NASA Technical Reports Server (NTRS)
Reichelt, Mark
1993-01-01
In this paper we describe a novel generalized SOR (successive overrelaxation) algorithm for accelerating the convergence of the dynamic iteration method known as waveform relaxation. A new convolution SOR algorithm is presented, along with a theorem for determining the optimal convolution SOR parameter. Both analytic and experimental results are given to demonstrate that the convergence of the convolution SOR algorithm is substantially faster than that of the more obvious frequency-independent waveform SOR algorithm. Finally, to demonstrate the general applicability of this new method, it is used to solve the differential-algebraic system generated by spatial discretization of the time-dependent semiconductor device equations.
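For reference, the standard frequency-independent SOR update that convolution SOR generalizes can be sketched as below; in the waveform setting, the scalar parameter `omega` is replaced by a convolution kernel acting on the entire waveform, which this small linear-system example does not attempt to show. The test matrix is made up.

```python
def sor(A, b, omega, sweeps=100):
    # successive overrelaxation: over-correct each Gauss-Seidel update by omega
    n = len(b)
    x = [0.0] * n
    for _ in range(sweeps):
        for i in range(n):
            sigma = sum(A[i][j] * x[j] for j in range(n) if j != i)
            gs = (b[i] - sigma) / A[i][i]       # plain Gauss-Seidel value
            x[i] += omega * (gs - x[i])         # relaxed update
    return x

# symmetric positive definite tridiagonal test system; exact x = [1.5, 2, 1.5]
A = [[2.0, -1.0, 0.0],
     [-1.0, 2.0, -1.0],
     [0.0, -1.0, 2.0]]
b = [1.0, 1.0, 1.0]
x = sor(A, b, omega=1.2)
```

For symmetric positive definite systems, any `omega` in (0, 2) converges, and a well-chosen `omega` beats Gauss-Seidel (`omega = 1`); choosing the optimal convolution kernel is the waveform analogue of choosing the optimal scalar parameter.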
Fast sparse recovery and coherence factor weighting in optoacoustic tomography
NASA Astrophysics Data System (ADS)
He, Hailong; Prakash, Jaya; Buehler, Andreas; Ntziachristos, Vasilis
2017-03-01
Sparse recovery algorithms have shown great potential for reconstructing images from limited-view datasets in optoacoustic tomography, but they are computationally expensive. In this paper, we improve the fast-converging Split Augmented Lagrangian Shrinkage Algorithm (SALSA) using a least-squares QR (LSQR) formulation to perform accelerated reconstructions. Further, a coherence factor is calculated to weight the final reconstruction, which can further reduce artifacts arising in limited-view scenarios and acoustically heterogeneous media. Several phantom and biological experiments indicate that the accelerated SALSA method with coherence factor (ASALSA-CF) provides improved reconstructions and much faster convergence than existing sparse recovery methods.
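The shrinkage (soft-thresholding) step at the heart of SALSA-type sparse solvers can be sketched as follows, shown inside a plain iterative-shrinkage (ISTA) loop on a trivial identity operator so the answer is predictable. SALSA itself reaches the same kind of fixed point faster through an augmented-Lagrangian variable splitting, which this sketch does not implement.

```python
def soft_threshold(v, t):
    # proximal operator of t * ||.||_1: shrink each entry toward zero by t
    return [max(abs(x) - t, 0.0) * (1.0 if x >= 0 else -1.0) for x in v]

def ista(A, y, lam, step, iters=100):
    # iterative shrinkage: gradient step on 0.5 * ||Ax - y||^2, then shrink
    m, n = len(A), len(A[0])
    x = [0.0] * n
    for _ in range(iters):
        r = [sum(A[i][j] * x[j] for j in range(n)) - y[i] for i in range(m)]
        g = [sum(A[i][j] * r[i] for i in range(m)) for j in range(n)]
        x = soft_threshold([x[j] - step * g[j] for j in range(n)], step * lam)
    return x

# identity operator: the minimizer is simply soft_threshold(y, lam)
A = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]
x = ista(A, [1.0, 0.05, 0.0], lam=0.1, step=1.0)
```

Entries smaller than the threshold are driven exactly to zero, which is how these solvers promote sparse reconstructions in limited-view settings.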
New expansion rate measurements of the Crab nebula in radio and optical
NASA Astrophysics Data System (ADS)
Bietenholz, M. F.; Nugent, R. L.
2015-12-01
We present new radio measurements of the expansion rate of the Crab nebula's synchrotron nebula over a ˜30-yr period. We find a convergence date for the radio synchrotron nebula of CE 1255 ± 27. We also re-evaluated the expansion rate of the optical-line-emitting filaments, and we show that the traditional estimates of their convergence date are slightly biased. Using an unbiased Bayesian analysis, we find a convergence date for the filaments of CE 1091 ± 34 (˜40 yr earlier than previous estimates). Our results show that both the synchrotron nebula and the optical-line-emitting filaments have been accelerated since the explosion in CE 1054, but that the synchrotron nebula has been relatively strongly accelerated, while the optical filaments have been only slightly accelerated. The finding that the synchrotron emission expands more rapidly than the filaments supports the picture that the latter are the result of the Rayleigh-Taylor instability at the interface between the pulsar-wind nebula and the surrounding freely expanding supernova ejecta, and rules out models where the pulsar-wind bubble is interacting directly with the pre-supernova wind of the Crab's progenitor.
NASA Technical Reports Server (NTRS)
Jameson, A.
1975-01-01
The use of a fast elliptic solver in combination with relaxation is presented as an effective way to accelerate the convergence of transonic flow calculations, particularly when a marching scheme can be used to treat the supersonic zone in the relaxation process.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Larsen, E.W.
A class of Projected Discrete-Ordinates (PDO) methods is described for obtaining iterative solutions of discrete-ordinates problems with convergence rates comparable to those observed using Diffusion Synthetic Acceleration (DSA). The spatially discretized PDO solutions are generally not equal to the DSA solutions, but unlike DSA, which requires great care in the use of spatial discretizations to preserve stability, the PDO solutions remain stable and rapidly convergent with essentially arbitrary spatial discretizations. Numerical results are presented which illustrate the rapid convergence and the accuracy of solutions obtained using PDO methods with commonplace differencing methods.
MIBPB: a software package for electrostatic analysis.
Chen, Duan; Chen, Zhan; Chen, Changjun; Geng, Weihua; Wei, Guo-Wei
2011-03-01
The Poisson-Boltzmann equation (PBE) is an established model for the electrostatic analysis of biomolecules. The development of advanced computational techniques for the solution of the PBE has been an important topic in the past two decades. This article presents a matched interface and boundary (MIB)-based PBE software package, the MIBPB solver, for electrostatic analysis. The MIBPB has a unique feature: it is the first interface technique-based PBE solver that rigorously enforces the solution and flux continuity conditions at the dielectric interface between the biomolecule and the solvent. For protein molecular surfaces, which may possess troublesome geometrical singularities, the MIB scheme makes the MIBPB by far the only existing PBE solver able to deliver second-order convergence, that is, the accuracy increases four times when the mesh size is halved. The MIBPB method is also equipped with a Dirichlet-to-Neumann mapping technique that builds a Green's function approach to analytically resolve the singular charge distribution in biomolecules in order to obtain reliable solutions at meshes as coarse as 1 Å, whereas it usually takes other traditional PB solvers 0.25 Å to reach a similar level of reliability. This work further accelerates the convergence of the linear equation systems resulting from the MIBPB by using Krylov subspace (KS) techniques. Condition numbers of the MIBPB matrices are significantly reduced by using appropriate KS solver and preconditioner combinations. Both linear and nonlinear PBE solvers in the MIBPB package are tested by protein-solvent solvation energy calculations and by analysis of salt effects on protein-protein binding energies, respectively.
Unified gas-kinetic scheme with multigrid convergence for rarefied flow study
NASA Astrophysics Data System (ADS)
Zhu, Yajun; Zhong, Chengwen; Xu, Kun
2017-09-01
The unified gas kinetic scheme (UGKS) is based on direct modeling of gas dynamics on the mesh size and time step scales. With the modeling of particle transport and collision in a time-dependent flux function in a finite volume framework, the UGKS can connect the flow physics smoothly from the kinetic particle transport to the hydrodynamic wave propagation. In comparison with the direct simulation Monte Carlo (DSMC) method, the current equation-based UGKS can implement implicit techniques in the updates of macroscopic conservative variables and microscopic distribution functions. The implicit UGKS significantly increases the convergence speed for steady flow computations, especially in the highly rarefied and near continuum regimes. In order to further improve the computational efficiency, for the first time, a geometric multigrid technique is introduced into the implicit UGKS, where the prediction step for the equilibrium state and the evolution step for the distribution function are both treated with multigrid acceleration. More specifically, a full approximate nonlinear system is employed in the prediction step for fast evaluation of the equilibrium state, and a correction linear equation is solved in the evolution step for the update of the gas distribution function. As a result, the convergence speed has been greatly improved in all flow regimes, from rarefied to continuum. The multigrid implicit UGKS (MIUGKS) is used to study non-equilibrium flows, including microflows, such as lid-driven cavity flow and the flow passing through a finite-length flat plate, and high-speed flows, such as supersonic flow over a square cylinder. The MIUGKS shows a 5-9 times efficiency increase over the previous implicit scheme. For low-speed microflow, the efficiency of the MIUGKS is several orders of magnitude higher than that of the DSMC.
Even for the hypersonic flow at Mach number 5 and Knudsen number 0.1, the MIUGKS is still more than 100 times faster than the DSMC method for obtaining a convergent steady state solution.
Logan, Heather; Wolfaardt, Johan; Boulanger, Pierre; Hodgetts, Bill; Seikaly, Hadi
2013-06-19
It is important to understand the perceived value of surgical design and simulation (SDS) amongst surgeons, as this will influence its implementation in clinical settings. The purpose of the present study was to examine the application of the convergent interview technique in the field of surgical design and simulation and evaluate whether the technique would uncover new perceptions of virtual surgical planning (VSP) and medical models not discovered by other qualitative case-based techniques. Five surgeons were asked to participate in the study. Each participant was interviewed following the convergent interview technique. After each interview, the interviewer interpreted the information by seeking agreements and disagreements among the interviewees in order to understand the key concepts in the field of SDS. Fifteen important issues were extracted from the convergent interviews. In general, the convergent interview was an effective technique in collecting information about the perception of clinicians. The study identified three areas where the technique could be improved upon for future studies in the SDS field.
Two-dimensional spatiotemporal coding of linear acceleration in vestibular nuclei neurons
NASA Technical Reports Server (NTRS)
Angelaki, D. E.; Bush, G. A.; Perachio, A. A.
1993-01-01
Response properties of vertical (VC) and horizontal (HC) canal/otolith-convergent vestibular nuclei neurons were studied in decerebrate rats during stimulation with sinusoidal linear accelerations (0.2-1.4 Hz) along different directions in the head horizontal plane. A novel characteristic of the majority of tested neurons was the nonzero response often elicited during stimulation along the "null" direction (i.e., the direction perpendicular to the maximum sensitivity vector, Smax). The tuning ratio (Smin gain/Smax gain), a measure of the two-dimensional spatial sensitivity, depended on stimulus frequency. For most vestibular nuclei neurons, the tuning ratio was small at the lowest stimulus frequencies and progressively increased with frequency. Specifically, HC neurons were characterized by a flat Smax gain and an approximately 10-fold increase of Smin gain per frequency decade. Thus, these neurons encode linear acceleration when stimulated along their maximum sensitivity direction, and the rate of change of linear acceleration (jerk) when stimulated along their minimum sensitivity direction. While the Smax vectors were distributed throughout the horizontal plane, the Smin vectors were concentrated mainly ipsilaterally with respect to head acceleration and clustered around the naso-occipital head axis. The properties of VC neurons were distinctly different from those of HC cells. The majority of VC cells showed decreasing Smax gains and small, relatively flat, Smin gains as a function of frequency. The Smax vectors were distributed ipsilaterally relative to the induced (apparent) head tilt. In type I anterior or posterior VC neurons, Smax vectors were clustered around the projection of the respective ipsilateral canal plane onto the horizontal head plane. 
These distinct spatial and temporal properties of HC and VC neurons during linear acceleration are compatible with the spatiotemporal organization of the horizontal and the vertical/torsional ocular responses, respectively, elicited in the rat during linear translation in the horizontal head plane. In addition, the data suggest a spatially and temporally specific and selective otolith/canal convergence. We propose that the central otolith system is organized in canal coordinates such that there is a close alignment between the plane of angular acceleration (canal) sensitivity and the plane of linear acceleration (otolith) sensitivity in otolith/canal-convergent vestibular nuclei neurons.
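As a minimal numerical sketch of the two-dimensional spatial tuning described above (a nonzero "null"-direction response and a frequency-dependent tuning ratio), the response gain along a stimulus direction can be modeled with orthogonal maximum- and minimum-sensitivity components acting in temporal quadrature. All gain values here are illustrative, not values taken from the study:

```python
import numpy as np

def response_gain(theta, s_max, s_min):
    """Gain of a neuron with two-dimensional (elliptical) spatial tuning:
    the maximum- and minimum-sensitivity components are assumed to act in
    temporal quadrature, so their contributions add in quadrature."""
    return np.hypot(s_max * np.cos(theta), s_min * np.sin(theta))

s_max, s_min = 2.0, 0.4                       # illustrative gains
tuning_ratio = s_min / s_max                  # frequency dependent in the study
g0 = response_gain(0.0, s_max, s_min)         # along Smax
g90 = response_gain(np.pi / 2, s_max, s_min)  # along the "null" direction: nonzero
```

A purely one-dimensional neuron (tuning ratio 0) would be silent along the null direction; the nonzero Smin gain reported in the abstract makes the null response finite.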
Constrained multi-objective optimization of storage ring lattices
NASA Astrophysics Data System (ADS)
Husain, Riyasat; Ghodke, A. D.
2018-03-01
The storage ring lattice optimization is a class of constrained multi-objective optimization problem, in which, in addition to low beam emittance, a large dynamic aperture for good injection efficiency and improved beam lifetime are also desirable. Convergence and computation time are of great concern for the optimization algorithms, as various objectives must be optimized and a number of accelerator parameters varied over a large span with several constraints. In this paper, a study of storage ring lattice optimization using differential evolution is presented. The optimization results are compared with those of the two most widely used optimization techniques in accelerators: the genetic algorithm and particle swarm optimization. It is found that differential evolution produces a better Pareto optimal front, in reasonable computation time, between two conflicting objectives: beam emittance and the dispersion function in the straight section. Differential evolution was also used extensively for the optimization of the linear and nonlinear lattices of Indus-2 to explore various operational modes within the magnet power supply capabilities.
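For readers unfamiliar with the optimizer compared here, a minimal single-objective differential evolution (DE/rand/1/bin) sketch follows; the lattice study itself uses a multi-objective Pareto-based variant, and the objective function, bounds, and control parameters below are purely illustrative:

```python
import numpy as np

def differential_evolution(f, bounds, pop_size=20, F=0.7, CR=0.9,
                           generations=200, seed=0):
    """Minimal DE/rand/1/bin minimizer (single-objective sketch; the lattice
    study uses a multi-objective Pareto variant of this mutation scheme)."""
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(bounds, dtype=float).T
    dim = len(lo)
    pop = rng.uniform(lo, hi, (pop_size, dim))
    fit = np.array([f(x) for x in pop])
    for _ in range(generations):
        for i in range(pop_size):
            a, b, c = pop[rng.choice([j for j in range(pop_size) if j != i],
                                     3, replace=False)]
            mutant = np.clip(a + F * (b - c), lo, hi)   # DE/rand/1 mutation
            cross = rng.random(dim) < CR                # binomial crossover
            cross[rng.integers(dim)] = True             # keep one mutant gene
            trial = np.where(cross, mutant, pop[i])
            ft = f(trial)
            if ft <= fit[i]:                            # greedy selection
                pop[i], fit[i] = trial, ft
    return pop[fit.argmin()], fit.min()

# toy "objective": distance from a hypothetical target working point
x_best, f_best = differential_evolution(lambda x: np.sum((x - 1.5) ** 2),
                                        bounds=[(-5, 5)] * 3)
```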
Real-Time Compressive Sensing MRI Reconstruction Using GPU Computing and Split Bregman Methods
Smith, David S.; Gore, John C.; Yankeelov, Thomas E.; Welch, E. Brian
2012-01-01
Compressive sensing (CS) has been shown to enable dramatic acceleration of MRI acquisition in some applications. Being an iterative reconstruction technique, CS MRI reconstructions can be more time-consuming than traditional inverse Fourier reconstruction. We have accelerated our CS MRI reconstruction by factors of up to 27 by using a split Bregman solver combined with a graphics processing unit (GPU) computing platform. The increases in speed we find are similar to those we measure for matrix multiplication on this platform, suggesting that the split Bregman methods parallelize efficiently. We demonstrate that the combination of the rapid convergence of the split Bregman algorithm and the massively parallel strategy of GPU computing can enable real-time CS reconstruction of even acquisition data matrices of dimension 4096² or more, depending on available GPU VRAM. Reconstruction of two-dimensional data matrices of dimension 1024² and smaller took ~0.3 s or less, showing that this platform also provides very fast iterative reconstruction for small-to-moderate size images. PMID:22481908
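The split Bregman iteration the authors accelerate on the GPU alternates a linear solve with a closed-form shrinkage. A minimal CPU sketch for 1D anisotropic total-variation denoising is given below; the parameters are illustrative, and the paper's reconstructions solve a CS MRI model rather than this toy problem:

```python
import numpy as np

def tv_denoise_split_bregman(f, mu=5.0, lam=1.0, n_iter=100):
    """1D anisotropic TV denoising, min_u mu/2 ||u - f||^2 + |Du|_1, via the
    split Bregman iteration (CPU sketch of the solver class the paper runs
    on a GPU)."""
    n = len(f)
    D = np.diff(np.eye(n), axis=0)            # (n-1) x n forward differences
    A = mu * np.eye(n) + lam * D.T @ D        # constant system matrix
    d = b = np.zeros(n - 1)
    u = f.copy()
    shrink = lambda x, t: np.sign(x) * np.maximum(np.abs(x) - t, 0.0)
    for _ in range(n_iter):
        u = np.linalg.solve(A, mu * f + lam * D.T @ (d - b))  # u-subproblem
        Du = D @ u
        d = shrink(Du + b, 1.0 / lam)         # closed-form shrinkage step
        b = b + Du - d                        # Bregman variable update
    return u

# piecewise-constant signal with a deterministic oscillatory perturbation
f = np.r_[np.zeros(32), np.ones(32)] + 0.2 * np.cos(3.0 * np.arange(64))
u = tv_denoise_split_bregman(f)
```

In the GPU setting, the dense solve above is replaced by structured (e.g. Fourier-diagonalizable) operations, which is what makes the method parallelize so well.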
Stakeholder requirements for commercially successful wave energy converter farms
DOE Office of Scientific and Technical Information (OSTI.GOV)
Babarit, Aurélien; Bull, Diana; Dykes, Katherine
2017-12-01
In this study, systems engineering techniques are applied to wave energy to identify and specify stakeholders' requirements for a commercially successful wave energy farm. The focus is on the continental scale utility market. Lifecycle stages and stakeholders are identified. Stakeholders' needs across the whole lifecycle of the wave energy farm are analyzed. A list of 33 stakeholder requirements is identified and specified. This list of requirements should serve as components of a technology performance level metric that could be used by investors and funding agencies to make informed decisions when allocating resources. It is hoped that the technology performance level metric will accelerate wave energy conversion technology convergence.
Accurate solutions for transonic viscous flow over finite wings
NASA Technical Reports Server (NTRS)
Vatsa, V. N.
1986-01-01
An explicit multistage Runge-Kutta type time-stepping scheme is used for solving the three-dimensional, compressible, thin-layer Navier-Stokes equations. A finite-volume formulation is employed to facilitate treatment of the complex grid topologies encountered in three-dimensional calculations. Convergence to steady state is expedited through the use of acceleration techniques. Further numerical efficiency is achieved through vectorization of the computer code. The accuracy of the overall scheme is evaluated by comparing the computed solutions with experimental data for a finite wing under different test conditions in the transonic regime. A grid refinement study is conducted to estimate the grid requirements for adequate resolution of salient features of such flows.
Use of reciprocal lattice layer spacing in electron backscatter diffraction pattern analysis
Michael; Eades
2000-03-01
In the scanning electron microscope using electron backscattered diffraction, it is possible to measure the spacing of the layers in the reciprocal lattice. These values are of great use in confirming the identification of phases. The technique derives the layer spacing from the higher-order Laue zone rings which appear in patterns from many materials. The method adapts results from convergent-beam electron diffraction in the transmission electron microscope. For many materials the measured layer spacing compares well with the calculated layer spacing. A noted exception is for higher atomic number materials. In these cases an extrapolation procedure is described that requires layer spacing measurements at a range of accelerating voltages. This procedure is shown to improve the accuracy of the technique significantly. The application of layer spacing measurements in EBSD is shown to be of use for the analysis of two polytypes of SiC.
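The underlying relations can be sketched as follows, assuming the standard projection approximation used in convergent-beam work: a first-order Laue zone (FOLZ) ring radius G in reciprocal space relates to the reciprocal-lattice layer spacing H via G ≈ sqrt(2H/λ), i.e. H = λG²/2, and the relativistically corrected electron wavelength brings in the accelerating voltage. This is a simplified sketch; the paper refines it with an extrapolation over several voltages:

```python
import math

# physical constants (SI): Planck, electron rest mass, elementary charge, c
h, m0, e, c = 6.62607015e-34, 9.1093837015e-31, 1.602176634e-19, 2.99792458e8

def electron_wavelength(V):
    """Relativistically corrected electron wavelength (m) at voltage V (volts)."""
    return h / math.sqrt(2 * m0 * e * V * (1 + e * V / (2 * m0 * c ** 2)))

def layer_spacing_from_folz(G, V):
    """Reciprocal-lattice layer spacing H (1/m) from a measured FOLZ ring
    radius G (1/m), using the projection approximation H = lambda * G^2 / 2."""
    return electron_wavelength(V) * G ** 2 / 2

lam_20kV = electron_wavelength(20e3)   # EBSD-range voltage
```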
NASA Technical Reports Server (NTRS)
Cannizzaro, Frank E.; Ash, Robert L.
1992-01-01
A state-of-the-art computer code has been developed that incorporates a modified Runge-Kutta time integration scheme, upwind numerical techniques, multigrid acceleration, and multi-block capabilities (RUMM). A three-dimensional thin-layer formulation of the Navier-Stokes equations is employed. For turbulent flow cases, the Baldwin-Lomax algebraic turbulence model is used. Two different upwind techniques are available: van Leer's flux-vector splitting and Roe's flux-difference splitting. Full approximation multi-grid plus implicit residual and corrector smoothing were implemented to enhance the rate of convergence. Multi-block capabilities were developed to provide geometric flexibility. This feature allows the developed computer code to accommodate any grid topology or grid configuration with multiple topologies. The results shown in this dissertation were chosen to validate the computer code and display its geometric flexibility, which is provided by the multi-block structure.
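Of the two upwind options mentioned, van Leer's flux-vector splitting has a compact closed form for subsonic states. A minimal sketch for a single 1D state (not the RUMM implementation) can be checked against the exact Euler flux, since the split parts must sum to it:

```python
import numpy as np

GAMMA = 1.4  # ratio of specific heats for air

def euler_flux(rho, u, p):
    """Exact 1D Euler flux vector (mass, momentum, energy)."""
    E = p / (GAMMA - 1) + 0.5 * rho * u ** 2
    return np.array([rho * u, rho * u ** 2 + p, u * (E + p)])

def van_leer_split(rho, u, p):
    """van Leer flux-vector splitting for a subsonic 1D state (|M| < 1):
    returns (F_plus, F_minus) with F_plus + F_minus equal to the exact flux."""
    a = np.sqrt(GAMMA * p / rho)              # speed of sound
    M = u / a                                 # Mach number
    fp1 = 0.25 * rho * a * (M + 1) ** 2       # forward mass-flux component
    fm1 = -0.25 * rho * a * (M - 1) ** 2      # backward mass-flux component
    fp = np.array([fp1,
                   fp1 * ((GAMMA - 1) * u + 2 * a) / GAMMA,
                   fp1 * ((GAMMA - 1) * u + 2 * a) ** 2 / (2 * (GAMMA ** 2 - 1))])
    fm = np.array([fm1,
                   fm1 * ((GAMMA - 1) * u - 2 * a) / GAMMA,
                   fm1 * ((GAMMA - 1) * u - 2 * a) ** 2 / (2 * (GAMMA ** 2 - 1))])
    return fp, fm

rho, u, p = 1.2, 100.0, 1.0e5   # illustrative subsonic state (M ~ 0.29)
fp, fm = van_leer_split(rho, u, p)
```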
NASA Astrophysics Data System (ADS)
Lee, Eun Seok
2000-10-01
An improved aerodynamic performance of a turbine cascade shape can be achieved through an understanding of the flow field associated with the stator-rotor interaction. In this research, an axial gas turbine airfoil cascade shape is optimized for improved aerodynamic performance by using an unsteady Navier-Stokes solver and a parallel genetic algorithm. The objective of the research is twofold: (1) to develop a computational fluid dynamics code with a faster convergence rate and unsteady flow simulation capabilities, and (2) to optimize a turbine airfoil cascade shape with unsteady passing wakes for improved aerodynamic performance. The computer code solves the Reynolds-averaged Navier-Stokes equations. It is based on the explicit, finite difference, Runge-Kutta time marching scheme and the Diagonalized Alternating Direction Implicit (DADI) scheme, with the Baldwin-Lomax algebraic and k-epsilon turbulence models. Improvements in the code focused on the cascade shape design capability, convergence acceleration, and unsteady formulation. First, the inverse shape design method was implemented in the code to provide the design capability, where a surface transpiration concept was employed as an inverse technique to modify the geometry to satisfy the user-specified pressure distribution on the airfoil surface. Second, an approximation storage multigrid method was implemented as an acceleration technique. Third, the preconditioning method was adopted to speed up the convergence rate in solving low Mach number flows. Finally, the implicit dual time stepping method was incorporated in order to simulate unsteady flow fields. For the unsteady code validation, Stokes' second problem and Poiseuille flow were chosen, and the computed results were compared with analytic solutions.
To test the code's ability to capture natural unsteady flow phenomena, vortex shedding past a cylinder and shock oscillation over a bicircular airfoil were simulated and compared with experiments and other published results. The rotor cascade shape optimization with unsteady passing wakes was performed to obtain improved aerodynamic performance using the unsteady Navier-Stokes solver. Two objective functions were defined: minimization of total pressure loss and maximization of lift, while the mass flow rate was fixed. A parallel genetic algorithm was used as the optimizer, and the penalty method was introduced. The objective functions of all individuals were computed simultaneously on a 32-processor distributed-memory computer. One optimization took about four days.
An overlapped grid method for multigrid, finite volume/difference flow solvers: MaGGiE
NASA Technical Reports Server (NTRS)
Baysal, Oktay; Lessard, Victor R.
1990-01-01
The objective is to develop a domain decomposition method via overlapping/embedding the component grids, to be used by upwind, multi-grid, finite volume solution algorithms. A computer code, given the name MaGGiE (Multi-Geometry Grid Embedder), is developed to meet this objective. MaGGiE takes independently generated component grids as input, and automatically constructs the composite mesh and interpolation data, which can be used by finite volume solution methods with or without multigrid convergence acceleration. Six demonstrative examples showing various aspects of the overlap technique are presented and discussed. These cases are used for developing the procedure for overlapping grids of different topologies, and to evaluate the grid connection and interpolation data for finite volume calculations on a composite mesh. Fluxes are transferred between mesh interfaces using a trilinear interpolation procedure. Conservation losses are minimal at the interfaces with this method. The multi-grid solution algorithm, using the coarser grid connections, improves the convergence time history compared to the solution on the composite mesh without multi-gridding.
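The interface transfer step can be illustrated with the interpolation kernel alone. The helper below is a hypothetical unit-cell routine, not MaGGiE's actual code; it shows why trilinear weights reproduce linear fields exactly, which is what keeps conservation losses small across grid interfaces:

```python
import numpy as np

def trilinear(corner, x, y, z):
    """Trilinear interpolation inside a unit cell.  corner[i, j, k] holds the
    value at vertex (i, j, k) with i, j, k in {0, 1}; (x, y, z) in [0, 1]^3."""
    c = np.asarray(corner, dtype=float)
    c = c[0] * (1 - x) + c[1] * x      # collapse the x axis
    c = c[0] * (1 - y) + c[1] * y      # collapse the y axis
    return c[0] * (1 - z) + c[1] * z   # collapse the z axis

# trilinear interpolation reproduces any linear field exactly
g = lambda x, y, z: 1 + 2 * x + 3 * y + 4 * z
corners = np.array([[[g(i, j, k) for k in (0, 1)] for j in (0, 1)]
                    for i in (0, 1)], dtype=float)
val = trilinear(corners, 0.3, 0.6, 0.2)
```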
NASA Astrophysics Data System (ADS)
Chen, Hao; Lv, Wen; Zhang, Tongtong
2018-05-01
We study preconditioned iterative methods for the linear system arising in the numerical discretization of a two-dimensional space-fractional diffusion equation. Our approach is based on a formulation of the discrete problem that is shown to be the sum of two Kronecker products. By making use of an alternating Kronecker product splitting iteration technique we establish a class of fixed-point iteration methods. Theoretical analysis shows that the new method converges to the unique solution of the linear system. Moreover, the optimal choice of the involved iteration parameters and the corresponding asymptotic convergence rate are computed exactly when the eigenvalues of the system matrix are all real. The basic iteration is accelerated by a Krylov subspace method such as GMRES. The corresponding preconditioner has a Kronecker product structure and requires at each iteration the solution of a set of discrete one-dimensional fractional diffusion equations. We use structure-preserving approximations to the discrete one-dimensional fractional diffusion operators in the action of the preconditioning matrix. Numerical examples are presented to illustrate the effectiveness of this approach.
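A toy version of the splitting can be written down by taking the standard (non-fractional) 1D diffusion stencil as a stand-in for the fractional operator, so that the Kronecker-sum structure A = I⊗T + T⊗I is explicit; the alternating two half-step iteration then converges to the exact solution. The dense solves below stand in for the structured one-dimensional solves the paper uses:

```python
import numpy as np

n = 4
T = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)  # 1D diffusion stencil
I = np.eye(n)
A1, A2 = np.kron(I, T), np.kron(T, I)  # A = A1 + A2: sum of Kronecker products
A = A1 + A2
b = np.ones(n * n)

def kronecker_splitting_solve(A1, A2, b, alpha=1.0, n_iter=100):
    """Alternating splitting iteration for A = A1 + A2 (ADI/HSS-style sketch
    of the fixed-point scheme; each half-step shifts one Kronecker factor)."""
    N = len(b)
    x = np.zeros(N)
    Id = np.eye(N)
    for _ in range(n_iter):
        x = np.linalg.solve(alpha * Id + A1, (alpha * Id - A2) @ x + b)
        x = np.linalg.solve(alpha * Id + A2, (alpha * Id - A1) @ x + b)
    return x

x = kronecker_splitting_solve(A1, A2, b)
```

At a fixed point, each half-step reduces to (A1 + A2)x = b, so the iteration's limit is the exact solution; with both split parts symmetric positive definite the iteration contracts for any alpha > 0.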
The implementation of an aeronautical CFD flow code onto distributed memory parallel systems
NASA Astrophysics Data System (ADS)
Ierotheou, C. S.; Forsey, C. R.; Leatham, M.
2000-04-01
The parallelization of an industrially important in-house computational fluid dynamics (CFD) code for calculating the airflow over complex aircraft configurations using the Euler or Navier-Stokes equations is presented. The code discussed is the flow solver module of the SAUNA CFD suite. This suite uses a novel grid system that may include block-structured hexahedral or pyramidal grids, unstructured tetrahedral grids or a hybrid combination of both. To assist in the rapid convergence to a solution, a number of convergence acceleration techniques are employed including implicit residual smoothing and a multigrid full approximation storage scheme (FAS). Key features of the parallelization approach are the use of domain decomposition and encapsulated message passing to enable the execution in parallel using a single programme multiple data (SPMD) paradigm. In the case where a hybrid grid is used, a unified grid partitioning scheme is employed to define the decomposition of the mesh. The parallel code has been tested using both structured and hybrid grids on a number of different distributed memory parallel systems and is now routinely used to perform industrial scale aeronautical simulations.
NASA Astrophysics Data System (ADS)
Liu, Ligang; Fukumoto, Masahiro; Saiki, Sachio; Zhang, Shiyong
2009-12-01
Proportionate adaptive algorithms have recently been proposed to accelerate convergence in the identification of sparse impulse responses. When the excitation signal is colored, speech in particular, proportionate NLMS algorithms converge slowly. The proportionate affine projection algorithm (PAPA) is expected to solve this problem by using more information in the input signals. However, its steady-state performance is limited by the constant step-size parameter. In this article we propose a variable step-size PAPA based on canceling the a posteriori estimation error. This yields high convergence speed, using a large step size when the identification error is large, and then considerably decreases the steady-state misalignment, using a small step size after the adaptive filter has converged. Simulation results show that the proposed approach greatly improves the steady-state misalignment without sacrificing the fast convergence of PAPA.
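The proportionate idea the article builds on can be sketched in its simpler PNLMS form, where per-coefficient gains speed up adaptation of the large (active) taps of a sparse impulse response. The proposed method is a variable step-size affine projection generalization of this; all parameters and signals below are illustrative:

```python
import numpy as np

def pnlms(x, d, L=32, mu=0.5, rho=0.01, delta=1e-2):
    """Proportionate NLMS sketch for sparse system identification.  The
    per-tap gains g make large coefficients adapt faster than small ones."""
    w = np.zeros(L)
    for n in range(L - 1, len(x)):
        xv = x[n - L + 1:n + 1][::-1]            # regressor, newest first
        e = d[n] - w @ xv                        # a priori estimation error
        gamma = np.maximum(rho * max(delta, np.abs(w).max()), np.abs(w))
        g = gamma / gamma.mean()                 # normalized proportionate gains
        w = w + mu * e * g * xv / (xv @ (g * xv) + delta)
    return w

rng = np.random.default_rng(1)
h = np.zeros(32)
h[3], h[20] = 1.0, -0.5                          # sparse "echo path" (invented)
x = rng.standard_normal(4000)                    # white excitation
d = np.convolve(x, h)[:len(x)]                   # noiseless desired signal
w = pnlms(x, d)
```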
Blind One-Bit Compressive Sampling
2013-01-17
Mehranian, Abolfazl; Kotasidis, Fotis; Zaidi, Habib
2016-02-07
Time-of-flight (TOF) positron emission tomography (PET) technology has recently regained popularity in clinical PET studies for improving image quality and lesion detectability. Using TOF information, the spatial location of annihilation events is confined to a number of image voxels along each line of response, so that the cross-dependencies of image voxels are reduced, which in turn results in improved signal-to-noise ratio and convergence rate. In this work, we propose a novel approach to further improve the convergence of the expectation maximization (EM)-based TOF PET image reconstruction algorithm through subsetization of emission data over TOF bins as well as azimuthal bins. Given the prevalence of TOF PET, we elaborated the practical and efficient implementation of TOF PET image reconstruction through the pre-computation of TOF weighting coefficients while exploiting the same in-plane and axial symmetries used in pre-computation of the geometric system matrix. In the proposed subsetization approach, TOF PET data were partitioned into a number of interleaved TOF subsets, with the aim of reducing the spatial coupling of TOF bins and thereby improving the convergence of the standard maximum likelihood expectation maximization (MLEM) and ordered subsets EM (OSEM) algorithms. The comparison of on-the-fly and pre-computed TOF projections showed that the pre-computation of the TOF weighting coefficients can considerably reduce the computation time of TOF PET image reconstruction. The convergence rate and bias-variance performance of the proposed TOF subsetization scheme were evaluated using simulated, experimental phantom, and clinical studies. Simulations demonstrated that as the number of TOF subsets is increased, the convergence rate of the MLEM and OSEM algorithms improves. It was also found that, for the same computation time, the proposed subsetization achieves further convergence. 
The bias-variance analysis of the experimental NEMA phantom and a clinical FDG-PET study also revealed that, for the same noise level, a higher contrast recovery can be obtained by increasing the number of TOF subsets. It can be concluded that the proposed TOF weighting matrix pre-computation and subsetization approaches make it possible to further accelerate and improve the convergence properties of the OSEM and MLEM algorithms, thus opening new avenues for accelerated TOF PET image reconstruction.
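The EM machinery being subsetized can be sketched generically. In the toy below, ordered subsets are interleaved blocks of projection rows, whereas the paper interleaves TOF and azimuthal bins; the system matrix and data are invented for illustration:

```python
import numpy as np

def osem(A, y, n_subsets=4, n_iter=20):
    """Ordered-subsets EM for y ~ Poisson(A x).  Each sub-iteration applies
    the multiplicative EM update using only one interleaved block of rows."""
    m, n = A.shape
    x = np.ones(n)                                   # uniform initial image
    subsets = [np.arange(s, m, n_subsets) for s in range(n_subsets)]
    for _ in range(n_iter):
        for rows in subsets:
            As = A[rows]
            ratio = y[rows] / np.maximum(As @ x, 1e-12)
            x *= (As.T @ ratio) / np.maximum(As.sum(axis=0), 1e-12)
    return x

rng = np.random.default_rng(0)
A = rng.random((80, 16))             # toy nonnegative system matrix
x_true = rng.random(16) + 0.1        # strictly positive "activity"
y = A @ x_true                       # noiseless, consistent data
x = osem(A, y)
err = np.linalg.norm(x - x_true) / np.linalg.norm(x_true)
```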
Parallel and Portable Monte Carlo Particle Transport
NASA Astrophysics Data System (ADS)
Lee, S. R.; Cummings, J. C.; Nolen, S. D.; Keen, N. D.
1997-08-01
We have developed a multi-group, Monte Carlo neutron transport code in C++ using object-oriented methods and the Parallel Object-Oriented Methods and Applications (POOMA) class library. This transport code, called MC++, currently computes k and α eigenvalues of the neutron transport equation on a rectilinear computational mesh. It is portable to and runs in parallel on a wide variety of platforms, including MPPs, clustered SMPs, and individual workstations. It contains appropriate classes and abstractions for particle transport and, through the use of POOMA, for portable parallelism. Current capabilities are discussed, along with physics and performance results for several test problems on a variety of hardware, including all three Accelerated Strategic Computing Initiative (ASCI) platforms. Current parallel performance indicates the ability to compute α-eigenvalues in seconds or minutes rather than days or weeks. Current and future work on the implementation of a general transport physics framework (TPF) is also described. This TPF employs modern C++ programming techniques to provide simplified user interfaces, generic STL-style programming, and compile-time performance optimization. Physics capabilities of the TPF will be extended to include continuous energy treatments, implicit Monte Carlo algorithms, and a variety of convergence acceleration techniques such as importance combing.
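The k-eigenvalue computation such a code performs can be illustrated deterministically with a power iteration on a small fission matrix; a Monte Carlo code estimates the action of this matrix stochastically, generation by generation. The matrix below is an invented stand-in, not data from MC++:

```python
import numpy as np

def power_iteration_k(F, n_iter=200):
    """Power iteration for the dominant eigenvalue k of a fission matrix F
    (F[i, j] ~ expected fission neutrons born in region i per fission-source
    neutron started in region j)."""
    s = np.ones(F.shape[0])
    s /= s.sum()                     # normalized fission source
    k = 1.0
    for _ in range(n_iter):
        s_new = F @ s                # transport one fission generation
        k = s_new.sum() / s.sum()    # eigenvalue estimate for this generation
        s = s_new / s_new.sum()      # renormalize the source
    return k, s

F = np.array([[0.6, 0.3, 0.1],
              [0.3, 0.5, 0.2],
              [0.1, 0.2, 0.4]])      # toy 3-region fission matrix
k_eff, source = power_iteration_k(F)
```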
Implementation and Optimization of miniGMG - a Compact Geometric Multigrid Benchmark
DOE Office of Scientific and Technical Information (OSTI.GOV)
Williams, Samuel; Kalamkar, Dhiraj; Singh, Amik
2012-12-01
Multigrid methods are widely used to accelerate the convergence of iterative solvers for linear systems used in a number of different application areas. In this report, we describe miniGMG, our compact geometric multigrid benchmark designed to proxy the multigrid solves found in AMR applications. We explore optimization techniques for geometric multigrid on existing and emerging multicore systems including the Opteron-based Cray XE6, Intel Sandy Bridge and Nehalem-based Infiniband clusters, as well as manycore-based architectures including NVIDIA's Fermi and Kepler GPUs and Intel's Knights Corner (KNC) co-processor. This report examines a variety of novel techniques including communication aggregation, threaded wavefront-based DRAM communication-avoiding, dynamic threading decisions, SIMDization, and fusion of operators. We quantify performance through each phase of the V-cycle for both single-node and distributed-memory experiments and provide detailed analysis for each class of optimization. Results show our optimizations yield significant speedups across a variety of subdomain sizes while simultaneously demonstrating the potential of multi- and manycore processors to dramatically accelerate single-node performance. However, our analysis also indicates that improvements in networks and communication will be essential to reap the potential of manycore processors in large-scale multigrid calculations.
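The V-cycle whose phases the report instruments can be reduced to a 1D toy. The sketch below (weighted-Jacobi smoothing, full-weighting restriction, linear interpolation) shows the structure being optimized, not miniGMG's 3D kernels:

```python
import numpy as np

def smooth(u, f, h, n_sweeps=2, omega=2.0 / 3.0):
    """Weighted-Jacobi smoother for the 1D Poisson problem -u'' = f."""
    for _ in range(n_sweeps):
        u[1:-1] += omega * 0.5 * (h * h * f[1:-1] + u[:-2] + u[2:] - 2 * u[1:-1])
    return u

def residual(u, f, h):
    r = np.zeros_like(u)
    r[1:-1] = f[1:-1] - (2 * u[1:-1] - u[:-2] - u[2:]) / (h * h)
    return r

def v_cycle(u, f, h):
    """One geometric multigrid V-cycle: smooth, restrict, recurse, correct."""
    if len(u) <= 3:                            # coarsest grid: direct solve
        u[1] = 0.5 * (h * h * f[1] + u[0] + u[2])
        return u
    u = smooth(u, f, h)                        # pre-smoothing
    r = residual(u, f, h)
    rc = np.zeros((len(u) + 1) // 2)           # full-weighting restriction
    rc[1:-1] = 0.25 * (r[1:-2:2] + r[3::2]) + 0.5 * r[2:-1:2]
    ec = v_cycle(np.zeros_like(rc), rc, 2 * h) # coarse-grid correction
    e = np.zeros_like(u)
    e[::2] = ec                                # linear prolongation
    e[1::2] = 0.5 * (ec[:-1] + ec[1:])
    u = u + e
    return smooth(u, f, h)                     # post-smoothing

n = 65
h = 1.0 / (n - 1)
xs = np.linspace(0.0, 1.0, n)
f = np.pi ** 2 * np.sin(np.pi * xs)            # exact solution: sin(pi x)
u = np.zeros(n)
r0 = np.linalg.norm(residual(u, f, h))
for _ in range(10):
    u = v_cycle(u, f, h)
r_final = np.linalg.norm(residual(u, f, h))
```

Each V-cycle reduces the residual norm by a roughly constant factor independent of the grid size, which is the property that makes multigrid so attractive as a convergence accelerator.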
Precision PEP-II optics measurement with an SVD-enhanced Least-Square fitting
NASA Astrophysics Data System (ADS)
Yan, Y. T.; Cai, Y.
2006-03-01
A singular value decomposition (SVD)-enhanced Least-Square fitting technique is discussed. By automatically identifying, ordering, and selecting the dominant SVD modes of the derivative matrix that responds to the variations of the variables, the convergence of the Least-Square fitting is significantly enhanced. Thus the fitting can be fast enough for a fairly large system. This technique has been successfully applied to precision PEP-II optics measurement, in which we determine all quadrupole strengths (both normal and skew components) and sextupole feed-downs as well as all BPM gains and BPM cross-plane couplings through Least-Square fitting of the phase advances and the local Green's functions as well as the coupling ellipses among BPMs. The local Green's functions are specified by 4 local transfer matrix components R12, R34, R32, R14. These measurable quantities (the Green's functions, the phase advances, and the coupling ellipse tilt angles and axis ratios) are obtained by analyzing turn-by-turn Beam Position Monitor (BPM) data with a high-resolution model-independent analysis (MIA). Once all of the quadrupoles and sextupole feed-downs are determined, we obtain a computer virtual accelerator which matches the real accelerator in linear optics. Thus, beta functions, linear coupling parameters, and interaction point (IP) optics characteristics can be measured and displayed.
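The core numerical step, truncating small SVD modes of the derivative (response) matrix before the least-squares solve, can be sketched as follows. The response matrix here is a random stand-in for the measured one, and the automatic mode selection rule is illustrative:

```python
import numpy as np

def svd_least_squares(J, y, n_modes=None, tol=1e-10):
    """Least-squares solve of J x ~ y keeping only the dominant SVD modes of
    the derivative matrix J; small singular values are truncated so noisy,
    near-degenerate directions do not destabilize the fit."""
    U, s, Vt = np.linalg.svd(J, full_matrices=False)
    if n_modes is None:
        n_modes = int(np.sum(s > tol * s[0]))    # automatic mode selection
    Ut, st, Vt = U[:, :n_modes], s[:n_modes], Vt[:n_modes]
    return Vt.T @ ((Ut.T @ y) / st)              # truncated pseudo-inverse

rng = np.random.default_rng(2)
J = rng.standard_normal((40, 6))     # stand-in for a measured response matrix
x_true = np.array([0.5, -1.0, 2.0, 0.0, 1.5, -0.3])
x_fit = svd_least_squares(J, J @ x_true)
```

With noisy data one would lower `n_modes` (or raise `tol`) to discard weakly determined directions, trading a small bias for greatly improved stability of the fit.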
Reconstruction of multiple-pinhole micro-SPECT data using origin ensembles.
Lyon, Morgan C; Sitek, Arkadiusz; Metzler, Scott D; Moore, Stephen C
2016-10-01
The authors are currently developing a dual-resolution multiple-pinhole micro-SPECT imaging system based on three large NaI(Tl) gamma cameras. Two multiple-pinhole tungsten collimator tubes will be used sequentially for whole-body "scout" imaging of a mouse, followed by high-resolution (hi-res) imaging of an organ of interest, such as the heart or brain. Ideally, the whole-body image will be reconstructed in real time such that data need only be acquired until the area of interest can be visualized well enough to determine positioning for the hi-res scan. The authors investigated the utility of the origin ensemble (OE) algorithm for online and offline reconstructions of the scout data. This algorithm operates directly in image space, and can provide estimates of image uncertainty along with reconstructed images. Techniques for accelerating the OE reconstruction were also introduced and evaluated. System matrices were calculated for our 39-pinhole scout collimator design. SPECT projections were simulated for a range of count levels using the MOBY digital mouse phantom. Simulated data were used for a comparison of OE and maximum-likelihood expectation maximization (MLEM) reconstructions. The OE algorithm convergence was evaluated by calculating the total-image entropy and by measuring the counts in a volume-of-interest (VOI) containing the heart. Total-image entropy was also calculated for simulated MOBY data reconstructed using OE with various levels of parallelization. For VOI measurements in the heart, liver, bladder, and soft tissue, MLEM and OE reconstructed images agreed within 6%. Image entropy converged after ∼2000 iterations of OE, while the counts in the heart converged earlier, at ∼200 iterations of OE. An accelerated version of OE completed 1000 iterations in <9 min for a 6.8M count data set, with some loss of image entropy performance, whereas the same dataset required ∼79 min to complete 1000 iterations of conventional OE.
A combination of the two methods showed decreased reconstruction time and no loss of performance when compared to conventional OE alone. OE-reconstructed images were found to be quantitatively and qualitatively similar to MLEM, yet OE also provided estimates of image uncertainty. Some acceleration of the reconstruction can be gained through the use of parallel computing. The OE algorithm is useful for reconstructing multiple-pinhole SPECT data and can be easily modified for real-time reconstruction.
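The record above benchmarks OE against MLEM. For reference, the MLEM baseline it compares against has the following basic multiplicative form; this is a minimal NumPy sketch with a toy system matrix, not the authors' 39-pinhole model:

```python
import numpy as np

def mlem(A, y, n_iters=50):
    """Maximum-likelihood expectation maximization for emission tomography.

    A is the (n_bins, n_voxels) system matrix, y the measured counts.
    Multiplicative update: x <- x * (A.T @ (y / (A @ x))) / (A.T @ 1).
    """
    x = np.ones(A.shape[1])            # uniform non-negative start
    sens = A.sum(axis=0)               # sensitivity image, A^T 1
    for _ in range(n_iters):
        proj = A @ x                   # forward projection
        ratio = np.divide(y, proj, out=np.zeros_like(proj), where=proj > 0)
        x *= (A.T @ ratio) / np.maximum(sens, 1e-12)
    return x
```

The update preserves non-negativity by construction, one reason MLEM is the standard comparator for stochastic image-space methods such as OE.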
Bhardwaj, Anuja; Srivastava, Mousami; Pal, Mamta; Sharma, Yogesh Kumar; Bhattacharya, Saikat; Tulsawani, Rajkumar; Sugadev, Ragumani; Misra, Kshipra
2016-01-01
Oriental medicinal mushroom Ganoderma lucidum has been widely used for the promotion of health and longevity owing to its various bioactive constituents. Therefore, comprehending metabolomics of different G. lucidum parts could be of paramount importance for investigating their pharmacological properties. Ultra-performance convergence chromatography (UPC2) along with mass spectrometry (MS) is an emerging technique that has not yet been applied for metabolite profiling of G. lucidum. This study has been undertaken to establish metabolomics of the aqueous extracts of mycelium (GLM), fruiting body (GLF), and their mixture (GLMF) using ultra-performance convergence chromatography single quadrupole mass spectrometry (UPC2-SQD-MS). Aqueous extracts of G. lucidum prepared using an accelerated solvent extraction technique have been characterized for their mycochemical activities in terms of total flavonoid content, 1,1-diphenyl-2-picryl-hydrazyl scavenging activity, and ferric ion reducing antioxidant power. The UPC2-SQD-MS technique has been used for the first time for metabolite profiling of G. lucidum on a Princeton Diol column (4.6 × 250 mm; 5 µm) using supercritical CO2 (solvent) and 20 mM ammonium acetate in methanol (co-solvent). In the present study, UPC2-SQD-MS was found to be a rapid, efficient, and high-throughput analytical technique, whose coupling to principal component analysis (PCA) and phytochemical evaluation could be used as a powerful tool for elucidating metabolite diversity between mycelium and fruiting body of G. lucidum. PCA showed a clear distinction in the metabolite compositions of the samples. Mycochemical studies revealed that overall GLF possessed better antioxidant properties among the aqueous extracts of G. lucidum.
Kamesh Iyer, Srikant; Tasdizen, Tolga; Likhite, Devavrat; DiBella, Edward
2016-01-01
Purpose: Rapid reconstruction of undersampled multicoil MRI data with iterative constrained reconstruction methods is a challenge. The authors sought to develop a new substitution-based variable splitting algorithm for faster reconstruction of multicoil cardiac perfusion MRI data. Methods: The new method, split Bregman multicoil accelerated reconstruction technique (SMART), uses a combination of split Bregman based variable splitting and iterative reweighting techniques to achieve fast convergence. Total variation constraints are used along the spatial and temporal dimensions. The method is tested on nine ECG-gated dog perfusion datasets, acquired with a 30-ray golden ratio radial sampling pattern, and ten ungated human perfusion datasets, acquired with a 24-ray golden ratio radial sampling pattern. Image quality and reconstruction speed are evaluated and compared to a gradient descent (GD) implementation and to multicoil k-t SLR, a reconstruction technique that uses a combination of sparsity and low rank constraints. Results: Comparisons based on blur metric and visual inspection showed that SMART images had lower blur and better texture as compared to the GD implementation. On average, the GD based images had an ∼18% higher blur metric as compared to SMART images. Reconstruction of dynamic contrast enhanced (DCE) cardiac perfusion images using the SMART method was ∼6 times faster than standard gradient descent methods. k-t SLR and SMART produced images with comparable image quality, though SMART was ∼6.8 times faster than k-t SLR. Conclusions: The SMART method is a promising approach to rapidly reconstruct good quality multicoil images from undersampled DCE cardiac perfusion data. PMID:27036592
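SMART builds on split Bregman variable splitting with total variation (TV) constraints. The core splitting can be illustrated on the much simpler problem of 1D TV denoising; this is a toy sketch of the Goldstein-Osher iteration, not the multicoil reconstruction itself, and mu and lam are illustrative weights:

```python
import numpy as np

def shrink(x, t):
    """Soft-threshold operator used for the d-subproblem."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def tv_denoise_split_bregman(f, mu=10.0, lam=1.0, n_iters=100):
    """Split Bregman for min_u  mu/2 ||u - f||^2 + ||D u||_1 (1D TV).

    The TV term is decoupled via an auxiliary variable d = D u and a
    Bregman variable b, leaving a cheap linear solve and a shrinkage step.
    """
    n = len(f)
    D = np.diff(np.eye(n), axis=0)          # forward-difference operator
    A = mu * np.eye(n) + lam * D.T @ D      # u-subproblem normal matrix
    u = f.copy()
    d = np.zeros(n - 1)
    b = np.zeros(n - 1)
    for _ in range(n_iters):
        u = np.linalg.solve(A, mu * f + lam * D.T @ (d - b))
        Du = D @ u
        d = shrink(Du + b, 1.0 / lam)       # closed-form d-update
        b = b + Du - d                      # Bregman update
    return u
```

In SMART the same decoupling is applied per spatial and temporal dimension of the dynamic image series, with the data term replaced by the multicoil acquisition model.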
Nonlinear convergence active vibration absorber for single and multiple frequency vibration control
NASA Astrophysics Data System (ADS)
Wang, Xi; Yang, Bintang; Guo, Shufeng; Zhao, Wenqiang
2017-12-01
This paper presents a nonlinear convergence algorithm for an active dynamic undamped vibration absorber (ADUVA). The damping of the absorber is ignored in this algorithm, which both strengthens the vibration-suppressing effect and simplifies the algorithm. The simulation and experimental results indicate that the nonlinear convergence ADUVA can significantly suppress vibration caused by both single- and multiple-frequency excitation. The proposed nonlinear algorithm is composed of equivalent dynamic modeling equations and a frequency estimator. Both the single- and multiple-frequency ADUVAs are mathematically represented by the same mechanical structure, with a mass body and a voice coil motor (VCM). The nonlinear convergence estimator is applied to simultaneously satisfy the requirements of fast convergence rate and small steady-state frequency error, which are incompatible for a linear convergence estimator. The convergence of the nonlinear algorithm is mathematically proved, and its non-divergent characteristic is theoretically guaranteed. The vibration suppression experiments demonstrate that the nonlinear ADUVA converges faster and achieves greater oscillation attenuation than the linear ADUVA.
Calculation of structural dynamic forces and stresses using mode acceleration
NASA Technical Reports Server (NTRS)
Blelloch, Paul
1989-01-01
The standard mode acceleration formulation in structural dynamics has often been interpreted to suggest that its improved convergence arises because the dynamic correction term is divided by the squares of the modal frequencies. An alternative formulation is presented which clearly shows that the only difference between mode acceleration and mode displacement data recovery is the addition of a static correction term. The advantages of this alternative for numerical implementation are discussed, and an illustrative example is given.
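The point that mode acceleration equals mode displacement plus a static correction can be checked numerically on a hypothetical 4-DOF spring-mass chain (all numbers below are illustrative); the correction recovers the quasi-static content of the truncated modes:

```python
import numpy as np

# hypothetical 4-DOF spring-mass chain; M = I so eigh gives mass-normalized modes
n, kc = 4, 100.0
K = kc * (2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1))
w2, Phi = np.linalg.eigh(K)                  # natural frequencies squared, ascending

F = np.array([0.0, 0.0, 0.0, 1.0])           # harmonic tip-load amplitude
w = 0.5 * np.sqrt(w2[0])                     # excitation well below the first mode
r = 2                                        # retain only the 2 lowest modes

u_exact = np.linalg.solve(K - w**2 * np.eye(n), F)

p = Phi[:, :r].T @ F                         # modal force participation, kept modes
# mode displacement: truncated modal sum only
u_md = Phi[:, :r] @ (p / (w2[:r] - w**2))
# mode acceleration: full static solution plus a dynamic correction of the same modes
u_ma = np.linalg.solve(K, F) + Phi[:, :r] @ (p * (1.0 / (w2[:r] - w**2) - 1.0 / w2[:r]))

err_md = np.linalg.norm(u_md - u_exact)
err_ma = np.linalg.norm(u_ma - u_exact)
```

Expanding u_ma in the modal basis shows it equals the mode displacement sum plus the deleted modes' static response, which is exactly the static correction interpretation.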
Adaptive control of turbulence intensity is accelerated by frugal flow sampling.
Quinn, Daniel B; van Halder, Yous; Lentink, David
2017-11-01
The aerodynamic performance of vehicles and animals, as well as the productivity of turbines and energy harvesters, depends on the turbulence intensity of the incoming flow. Previous studies have pointed to the potential benefits of active closed-loop turbulence control. However, it is unclear what the minimal sensory and algorithmic requirements are for realizing this control. Here we show that very low-bandwidth anemometers record sufficient information for an adaptive control algorithm to converge quickly. Our online Newton-Raphson algorithm tunes the turbulence in a recirculating wind tunnel by taking readings from an anemometer in the test section. After starting at 9% turbulence intensity, the algorithm converges on values ranging from 10% to 45% in fewer than 12 iterations to within 1% accuracy. By down-sampling our measurements, we show that very-low-bandwidth anemometers record sufficient information for convergence. Furthermore, down-sampling accelerates convergence by smoothing gradients in turbulence intensity. Our results explain why low-bandwidth anemometers in engineering and mechanoreceptors in biology may be sufficient for adaptive control of turbulence intensity. Finally, our analysis suggests that, if certain turbulent eddy sizes are more important to control than others, frugal adaptive control schemes can be particularly computationally effective for improving performance. © 2017 The Author(s).
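The control loop described above, Newton-Raphson on sensor readings, can be sketched generically. The quadratic intensity(u) response and all constants below are illustrative stand-ins, not the wind tunnel's actual response:

```python
def tune_to_target(measure, target, u0, du=1e-3, tol=1e-3, max_iters=20):
    """Online Newton-Raphson: adjust a setting u until measure(u) hits target.

    measure() plays the role of reading the anemometer; the local gradient
    is estimated from two consecutive readings (finite difference).
    """
    u = u0
    for i in range(max_iters):
        f = measure(u) - target
        if abs(f) < tol:
            return u, i                          # converged after i corrections
        grad = (measure(u + du) - measure(u)) / du
        u -= f / grad                            # Newton step
    return u, max_iters

# toy monotonic stand-in for turbulence intensity vs. actuator setting (assumption)
intensity = lambda u: 0.09 + 0.004 * u * u
u_star, n_iters = tune_to_target(intensity, 0.30, u0=2.0)
```

Down-sampling the readings, as in the paper, effectively smooths measure() and hence the finite-difference gradient the step relies on.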
NASA Technical Reports Server (NTRS)
Desideri, J. A.; Steger, J. L.; Tannehill, J. C.
1978-01-01
The iterative convergence properties of an approximate-factorization implicit finite-difference algorithm are analyzed both theoretically and numerically. Modifications to the base algorithm were made to remove the inconsistency in the original implementation of artificial dissipation. In this way, the steady-state solution became independent of the time step, and much larger time steps could be used stably. To accelerate the iterative convergence, large time steps and a cyclic sequence of time steps were used. For a model transonic flow problem governed by the Euler equations, convergence was achieved with 10 times fewer time steps using the modified differencing scheme. A particular form of instability due to variable coefficients is also analyzed.
NASA Technical Reports Server (NTRS)
Aston, Graeme (Inventor)
1984-01-01
A system is described that combines geometrical and electrostatic focusing to provide high ion extraction efficiency and good focusing of an accelerated ion beam. The apparatus includes a pair of curved extraction grids (16, 18) with multiple pairs of aligned holes positioned to direct a group of beamlets (20) along converging paths. The extraction grids are closely spaced and maintained at a moderate potential to efficiently extract beamlets of ions and allow them to combine into a single beam (14). An accelerator electrode device (22) downstream from the extraction grids, is at a much lower potential than the grids to accelerate the combined beam.
Numerical Simulation of Dual-Mode Scramjet Combustors
NASA Technical Reports Server (NTRS)
Rodriguez, C. G.; Riggins, D. W.; Bittner, R. D.
2000-01-01
Results of a numerical investigation of a three-dimensional dual-mode scramjet isolator-combustor flowfield are presented. Specifically, the effect of wall cooling on upstream interaction and flow structure is examined for a case assuming jet-to-jet symmetry within the combustor. Comparisons are made with available experimental wall pressures. The full half-duct for the isolator-combustor is then modeled in order to study the influence of side-walls. Large-scale three-dimensionality is observed in the flow, with massive separation forward on the side-walls of the duct. A brief review of convergence-acceleration techniques useful in dual-mode simulations is presented, followed by recommendations regarding the development of a reliable and unambiguous experimental database for guiding CFD code assessments in this area.
Regularization iteration imaging algorithm for electrical capacitance tomography
NASA Astrophysics Data System (ADS)
Tong, Guowei; Liu, Shi; Chen, Hongyan; Wang, Xueyao
2018-03-01
The image reconstruction method plays a crucial role in real-world applications of the electrical capacitance tomography technique. In this study, a new cost function that simultaneously considers the sparsity and low-rank properties of the imaging targets is proposed to improve the quality of the reconstructed images, in which the image reconstruction task is converted into an optimization problem. Within the framework of the split Bregman algorithm, an iterative scheme that splits a complicated optimization problem into several simpler sub-tasks is developed to solve the proposed cost function efficiently, in which the fast iterative shrinkage-thresholding algorithm (FISTA) is introduced to accelerate the convergence. Numerical experiment results verify the effectiveness of the proposed algorithm in improving the reconstruction precision and robustness.
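The accelerating ingredient named in the abstract, FISTA, adds a Nesterov-style momentum step to plain iterative shrinkage-thresholding. Below is a generic l1 sparse-reconstruction sketch, a stand-in rather than the authors' sparsity-plus-low-rank ECT cost:

```python
import numpy as np

def fista(A, y, lam, n_iters=200):
    """FISTA for min_x 0.5*||A x - y||^2 + lam*||x||_1.

    Identical to ISTA except for the extrapolation point z, which carries
    momentum and improves the worst-case rate from O(1/k) to O(1/k^2).
    """
    L = np.linalg.norm(A, 2) ** 2              # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    z, t = x.copy(), 1.0
    for _ in range(n_iters):
        g = z - (A.T @ (A @ z - y)) / L        # gradient step at the momentum point
        x_new = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0.0)  # shrinkage
        t_new = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t * t))
        z = x_new + ((t - 1.0) / t_new) * (x_new - x)              # extrapolation
        x, t = x_new, t_new
    return x
```

Dropping the z-update (setting z = x_new) recovers plain ISTA, which needs many more iterations for the same accuracy.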
Hybrid glowworm swarm optimization for task scheduling in the cloud environment
NASA Astrophysics Data System (ADS)
Zhou, Jing; Dong, Shoubin
2018-06-01
In recent years many heuristic algorithms have been proposed to solve task scheduling problems in the cloud environment owing to their optimization capability. This article proposes a hybrid glowworm swarm optimization (HGSO) algorithm that augments glowworm swarm optimization (GSO) with techniques from evolutionary computation: a quantum-behaviour strategy based on the principle of neighbourhood, offspring production, and random walk, to achieve more efficient scheduling at reasonable scheduling cost. The proposed HGSO reduces the redundant computation and the dependence on the initialization of GSO, accelerates the convergence and escapes more easily from local optima. The conducted experiments and statistical analysis showed that in most cases the proposed HGSO algorithm outperformed previous heuristic algorithms in dealing with independent tasks.
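Plain GSO, the base algorithm being hybridized, follows a luciferin-update / move-toward-brighter-neighbour / radius-update cycle. A minimal sketch for continuous minimization follows; all constants are typical illustrative values and none of the quantum/offspring extensions are included:

```python
import numpy as np

def gso_minimize(f, bounds, n=30, iters=100, rho=0.4, gamma=0.6,
                 beta=0.08, n_t=5, s=0.03, seed=0):
    """Basic glowworm swarm optimization; minimizes f via -f as payoff."""
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(bounds[0], float), np.asarray(bounds[1], float)
    X = rng.uniform(lo, hi, size=(n, len(lo)))   # glowworm positions
    luc = np.full(n, 5.0)                        # initial luciferin
    rs = np.linalg.norm(hi - lo)                 # sensor range
    r = np.full(n, rs)                           # adaptive decision radii
    for _ in range(iters):
        vals = np.array([f(x) for x in X])
        luc = (1 - rho) * luc + gamma * (-vals)  # luciferin decay + payoff
        X_new = X.copy()
        for i in range(n):
            d = np.linalg.norm(X - X[i], axis=1)
            nbrs = np.where((d < r[i]) & (luc > luc[i]))[0]   # brighter neighbours
            if len(nbrs):
                p = luc[nbrs] - luc[i]
                j = rng.choice(nbrs, p=p / p.sum())           # probabilistic pick
                dist = np.linalg.norm(X[j] - X[i])
                if dist > 0:
                    X_new[i] = X[i] + s * (X[j] - X[i]) / dist  # fixed-length step
            r[i] = min(rs, max(0.0, r[i] + beta * (n_t - len(nbrs))))
        X = np.clip(X_new, lo, hi)
    vals = np.array([f(x) for x in X])
    return X[np.argmin(vals)]
```

The fixed-length steps and history-lagged luciferin are exactly the features the HGSO hybridization targets when it adds evolutionary operators.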
Evolving an Accelerated School Model through Student Perceptions and Student Outcome Data
ERIC Educational Resources Information Center
Braun, Donna L.; Gable, Robert K.; Billups, Felice D.; Vieira, Mary; Blasczak, Danielle
2016-01-01
A mixed methods convergent evaluation informed the redesign of an innovative public school that uses an accelerated model to serve grades 7-9 students who have been retained in grade level and are at risk for dropping out of school. After over 25 years in operation, a shift of practices/policies away from grade retention and toward social…
Otolith-Canal Convergence In Vestibular Nuclei Neurons
NASA Technical Reports Server (NTRS)
Dickman, J. David; Si, Xiao-Hong
2002-01-01
The current final report covers the period from June 1, 1999 to May 31, 2002. The primary objective of the investigation was to determine how information regarding head movements and head position relative to gravity is received and processed by central vestibular nuclei neurons in the brainstem. Specialized receptors in the vestibular labyrinths of the inner ear function to detect angular and linear accelerations of the head, with receptors located in the semicircular canals transducing rotational head movements and receptors located in the otolith organs transducing changes in head position relative to gravity or linear accelerations of the head. The information from these different receptors is then transmitted to central vestibular nuclei neurons which process the input signals, then project the appropriate output information to the eye, head, and body musculature motor neurons to control compensatory reflexes. Although a number of studies have reported on the responsiveness of vestibular nuclei neurons, it has not yet been possible to determine precisely how these cells combine the information from the different angular and linear acceleration receptors into a correct neural output signal. In the present project, rotational and linear motion stimuli were separately delivered while recording responses from vestibular nuclei neurons that were characterized according to direct input from the labyrinth and eye movement sensitivity. Responses from neurons receiving convergent input from the semicircular canals and otolith organs were quantified and compared to non-convergent neurons.
Accelerating evaluation of converged lattice thermal conductivity
NASA Astrophysics Data System (ADS)
Qin, Guangzhao; Hu, Ming
2018-01-01
High-throughput computational materials design is an emerging area in materials science, which is based on the fast evaluation of physics-related properties. The lattice thermal conductivity (κ) is a key property of materials with enormous practical implications. However, the high-throughput evaluation of κ remains a challenge due to the large resource costs and time-consuming procedures. In this paper, we propose a concise strategy to efficiently accelerate the evaluation process of obtaining accurate and converged κ. The strategy is in the framework of the phonon Boltzmann transport equation (BTE) coupled with first-principles calculations. Based on the analysis of harmonic interatomic force constants (IFCs), a sufficiently large cutoff radius (rcutoff), a critical parameter involved in calculating the anharmonic IFCs, can be directly determined to get satisfactory results. Moreover, we find a simple way to largely (∼10 times) accelerate the computations by quickly reconstructing the anharmonic IFCs in the convergence test of κ with respect to rcutoff, which finally confirms that the chosen rcutoff is appropriate. Two-dimensional graphene and phosphorene along with bulk SnSe are presented to validate our approach, and the long-debated divergence problem of thermal conductivity in low-dimensional systems is studied. The quantitative strategy proposed herein can be a good candidate for fast evaluation of reliable κ and thus provides a useful tool for high-throughput materials screening and design with targeted thermal transport properties.
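The convergence test over rcutoff that this strategy accelerates has the following generic shape; this is a schematic harness in which evaluate() stands in for the expensive BTE calculation of κ:

```python
def converge_over_parameter(evaluate, values, rtol=0.01):
    """Generic convergence test: step a parameter (e.g. the cutoff radius
    for the anharmonic IFCs) until successive evaluations agree within rtol.

    Returns the first parameter value at which the relative change from the
    previous evaluation falls below rtol, together with that evaluation.
    """
    prev = None
    for v in values:
        cur = evaluate(v)
        if prev is not None and abs(cur - prev) <= rtol * abs(prev):
            return v, cur
        prev = cur
    raise RuntimeError("not converged over the given parameter values")
```

The paper's speed-up comes from making each evaluate() call cheap, by reconstructing the anharmonic IFCs instead of recomputing them from scratch, while this outer loop stays unchanged.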
Distinct developmental genetic mechanisms underlie convergently evolved tooth gain in sticklebacks
Ellis, Nicholas A.; Glazer, Andrew M.; Donde, Nikunj N.; Cleves, Phillip A.; Agoglia, Rachel M.; Miller, Craig T.
2015-01-01
Teeth are a classic model system of organogenesis, as repeated and reciprocal epithelial and mesenchymal interactions pattern placode formation and outgrowth. Less is known about the developmental and genetic bases of tooth formation and replacement in polyphyodonts, which are vertebrates with continual tooth replacement. Here, we leverage natural variation in the threespine stickleback fish Gasterosteus aculeatus to investigate the genetic basis of tooth development and replacement. We find that two derived freshwater stickleback populations have both convergently evolved more ventral pharyngeal teeth through heritable genetic changes. In both populations, evolved tooth gain manifests late in development. Using pulse-chase vital dye labeling to mark newly forming teeth in adult fish, we find that both high-toothed freshwater populations have accelerated tooth replacement rates relative to low-toothed ancestral marine fish. Despite the similar evolved phenotype of more teeth and an accelerated adult replacement rate, the timing of tooth number divergence and the spatial patterns of newly formed adult teeth are different in the two populations, suggesting distinct developmental mechanisms. Using genome-wide linkage mapping in marine-freshwater F2 genetic crosses, we find that the genetic basis of evolved tooth gain in the two freshwater populations is largely distinct. Together, our results support a model whereby increased tooth number and an accelerated tooth replacement rate have evolved convergently in two independently derived freshwater stickleback populations using largely distinct developmental and genetic mechanisms. PMID:26062935
On the Optimization of Aerospace Plane Ascent Trajectory
NASA Astrophysics Data System (ADS)
Al-Garni, Ahmed; Kassem, Ayman Hamdy
A hybrid heuristic optimization technique based on genetic algorithms and particle swarm optimization has been developed and tested for trajectory optimization problems with multiple constraints and a multi-objective cost function. The technique is used to calculate control settings for two types of ascent trajectories (constant dynamic pressure and minimum-fuel-minimum-heat) for a two-dimensional model of an aerospace plane. A thorough statistical analysis is done on the hybrid technique to make comparisons with both basic genetic algorithms and particle swarm optimization with respect to convergence and execution time. Genetic algorithm optimization showed better execution time performance, while particle swarm optimization showed better convergence performance. The hybrid optimization technique, benefiting from both, showed robust performance that balances convergence behaviour and execution time.
Conjugate-gradient preconditioning methods for shift-variant PET image reconstruction.
Fessler, J A; Booth, S D
1999-01-01
Gradient-based iterative methods often converge slowly for tomographic image reconstruction and image restoration problems, but can be accelerated by suitable preconditioners. Diagonal preconditioners offer some improvement in convergence rate, but do not incorporate the structure of the Hessian matrices in imaging problems. Circulant preconditioners can provide remarkable acceleration for inverse problems that are approximately shift-invariant, i.e., for those with approximately block-Toeplitz or block-circulant Hessians. However, in applications with nonuniform noise variance, such as arises from Poisson statistics in emission tomography and in quantum-limited optical imaging, the Hessian of the weighted least-squares objective function is quite shift-variant, and circulant preconditioners perform poorly. Additional shift-variance is caused by edge-preserving regularization methods based on nonquadratic penalty functions. This paper describes new preconditioners that approximate more accurately the Hessian matrices of shift-variant imaging problems. Compared to diagonal or circulant preconditioning, the new preconditioners lead to significantly faster convergence rates for the unconstrained conjugate-gradient (CG) iteration. We also propose a new efficient method for the line-search step required by CG methods. Applications to positron emission tomography (PET) illustrate the method.
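The baseline the paper improves upon, diagonally preconditioned conjugate gradients, looks like this; a generic SPD-system sketch, not the shift-variant PET preconditioner itself:

```python
import numpy as np

def pcg(A, b, M_inv_diag, tol=1e-8, max_iters=500):
    """Preconditioned CG with a diagonal preconditioner M^-1 = diag(M_inv_diag).

    A must be symmetric positive definite; a better preconditioner (closer
    to A^-1) yields faster convergence, which is the paper's central theme.
    """
    x = np.zeros_like(b)
    r = b - A @ x
    z = M_inv_diag * r                 # preconditioned residual
    p = z.copy()
    rz = r @ z
    for i in range(max_iters):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol:
            return x, i + 1            # converged in i+1 iterations
        z = M_inv_diag * r
        rz_new = r @ z
        p = z + (rz_new / rz) * p      # conjugate direction update
        rz = rz_new
    return x, max_iters
```

Replacing the elementwise multiply by a circulant (FFT-based) or, as in this paper, a shift-variant approximation of the Hessian changes only the two `M_inv_diag * r` lines.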
Acceleration of the direct reconstruction of linear parametric images using nested algorithms.
Wang, Guobao; Qi, Jinyi
2010-03-07
Parametric imaging using dynamic positron emission tomography (PET) provides important information for biological research and clinical diagnosis. Indirect and direct methods have been developed for reconstructing linear parametric images from dynamic PET data. Indirect methods are relatively simple and easy to implement because the image reconstruction and kinetic modeling are performed in two separate steps. Direct methods estimate parametric images directly from raw PET data and are statistically more efficient. However, the convergence rate of direct algorithms can be slow due to the coupling between the reconstruction and kinetic modeling. Here we present two fast gradient-type algorithms for direct reconstruction of linear parametric images. The new algorithms decouple the reconstruction and linear parametric modeling at each iteration by employing the principle of optimization transfer. Convergence speed is accelerated by running more sub-iterations of linear parametric estimation because the computation cost of the linear parametric modeling is much less than that of the image reconstruction. Computer simulation studies demonstrated that the new algorithms converge much faster than the traditional expectation maximization (EM) and the preconditioned conjugate gradient algorithms for dynamic PET.
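The decoupling idea above, one expensive tomographic update per outer iteration followed by several cheap parametric sub-iterations, can be sketched with multiplicative EM-style updates. This is a schematic stand-in with toy matrices: B maps linear kinetic parameters to the image, and the inner loop fits theta to the intermediate EM image:

```python
import numpy as np

def nested_parametric_em(A, y, B, n_outer=300, n_inner=20):
    """Nested optimization-transfer sketch for direct linear parametric
    reconstruction: x = B @ theta, data y ~ A @ x. Not the authors' exact
    algorithm; a generic multiplicative-update stand-in.
    """
    theta = np.ones(B.shape[1])
    sens = A.sum(axis=0)                               # tomographic sensitivity
    for _ in range(n_outer):
        x = B @ theta
        ratio = y / np.maximum(A @ x, 1e-12)
        x_em = x * (A.T @ ratio) / np.maximum(sens, 1e-12)   # expensive: uses A
        for _ in range(n_inner):                       # cheap: uses only B
            num = B.T @ (sens * x_em / np.maximum(B @ theta, 1e-12))
            theta *= num / np.maximum(B.T @ sens, 1e-12)
    return theta
```

Because the inner update never touches A, running many inner sub-iterations costs little, which is exactly why the nesting accelerates overall convergence.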
Exploring service delivery in occupational therapy: The use of convergent interviewing.
van Biljon, Hester; du Toit, Sanetta H J; Masango, July; Casteleijn, Daleen
2017-01-01
Occupational therapy clinicians working in South Africa's public healthcare had views on what patients thought about their vocational rehabilitation services, but these views were based on anecdotal evidence; evidence-based practice requires more than that. Reliable information is important in patient-centred practice and in the assessment of service quality. Clinical occupational therapists used the convergent interviewing technique to explore patients' views of the vocational rehabilitation services on offer in public hospitals. An Action Learning Action Research (ALAR) approach was used to explore the vocational rehabilitation services occupational therapy clinicians provided over a two-week period in three settings. The majority (96%) of patients interviewed were not aware that occupational therapists offered vocational rehabilitation services. The convergent interview technique allowed continued unrestricted discussion of their vocational rehabilitation concerns and provided evidence that patients had significant concerns about work. Critical reflection on the interview experience and technique indicated that therapists were in favour of using convergent interviewing to obtain their patients' views about the services offered. Therapists found the convergent interview technique easy to apply in clinical practice. Establishing patients' views of a clinical service has multiple benefits; however, it is meaningless unless clinicians use that knowledge to improve service delivery to the patients who provided the views. Convergent interviewing was a valuable technique for occupational therapy clinicians to incorporate patients' views of their services into service development.
Iterative methods used in overlap astrometric reduction techniques do not always converge
NASA Astrophysics Data System (ADS)
Rapaport, M.; Ducourant, C.; Colin, J.; Le Campion, J. F.
1993-04-01
In this paper we prove that the classical Gauss-Seidel type iterative methods used for the solution of the reduced normal equations occurring in overlapping reduction methods of astrometry do not always converge. We exhibit examples of divergence. We then analyze an alternative algorithm proposed by Wang (1985). We prove the consistency of this algorithm and verify that it can be convergent while the Gauss-Seidel method is divergent. We conjecture the convergence of Wang's method for the solution of astrometric problems using overlap techniques.
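A concrete instance of the divergence the authors prove possible: for a 2×2 system that is neither symmetric positive definite nor diagonally dominant, the Gauss-Seidel residual grows by a factor of 6 per sweep. This is a textbook-style example, not one of the astrometric systems in the paper:

```python
import numpy as np

def gauss_seidel(A, b, x0, n_sweeps):
    """Plain Gauss-Seidel; returns the final iterate and per-sweep residual norms."""
    x = x0.astype(float).copy()
    res = []
    for _ in range(n_sweeps):
        for i in range(len(b)):
            x[i] = (b[i] - A[i, :i] @ x[:i] - A[i, i + 1:] @ x[i + 1:]) / A[i, i]
        res.append(np.linalg.norm(A @ x - b))
    return x, res

# neither SPD nor diagonally dominant: the iteration matrix -(D+L)^{-1} U
# has eigenvalues {0, 6}, so the spectral radius is 6 > 1 and GS diverges
A = np.array([[1.0, 2.0], [3.0, 1.0]])
b = np.array([1.0, 1.0])
x, res = gauss_seidel(A, b, np.zeros(2), 8)
```

Convergence of Gauss-Seidel is guaranteed only under conditions such as symmetric positive definiteness or strict diagonal dominance, neither of which the overlap normal equations need satisfy.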
Numerical Modeling of a Vortex Stabilized Arcjet. Ph.D. Thesis, 1991 Final Report
NASA Technical Reports Server (NTRS)
Pawlas, Gary E.
1992-01-01
Arcjet thrusters are being actively considered for use in Earth orbit maneuvering applications. Experimental studies are currently the chief means of determining an optimal thruster configuration. Earlier numerical studies have failed to include all of the effects found in typical arcjets including complex geometries, viscosity, and swirling flow. Arcjet geometries are large area ratio converging nozzles with centerbodies in the subsonic portion of the nozzle. The nozzle walls serve as the anode while the centerbody functions as the cathode. Viscous effects are important because the Reynolds number, based on the throat radius, is typically less than 1,000. Experimental studies have shown that a swirl or circumferential velocity component stabilizes a constricted arc. This dissertation describes the equations governing flow through a constricted arcjet thruster. An assumption that the flowfield is in local thermodynamic equilibrium leads to a single fluid plasma temperature model. An order of magnitude analysis reveals the governing fluid mechanics equations are uncoupled from the electromagnetic field equations. A numerical method is developed to solve the governing fluid mechanics equations, the Thin Layer Navier-Stokes equations. A coordinate transformation is employed in deriving the governing equations to simplify the application of boundary conditions in complex geometries. An axisymmetric formulation is employed to include the swirl velocity component as well as the axial and radial velocity components. The numerical method is an implicit finite-volume technique and allows for large time steps to reach a converged steady-state solution. The inviscid fluxes are flux-split, and Gauss-Seidel line relaxation is used to accelerate convergence. Converging-diverging nozzles with exit-to-throat area ratios up to 100:1 and annular nozzles were examined. Quantities examined included Mach number and static wall pressure distributions, and oblique shock structures. 
As the level of swirl and viscosity in the flowfield increased the mass flow rate and thrust decreased. The technique was used to predict the flow through a typical arcjet thruster geometry. Results indicate swirl and viscosity play an important role in the complex geometry of an arcjet.
Numerical modeling of a vortex stabilized arcjet
NASA Astrophysics Data System (ADS)
Pawlas, Gary E.
1992-11-01
Arcjet thrusters are being actively considered for use in Earth orbit maneuvering applications. Experimental studies are currently the chief means of determining an optimal thruster configuration. Earlier numerical studies have failed to include all of the effects found in typical arcjets including complex geometries, viscosity, and swirling flow. Arcjet geometries are large area ratio converging nozzles with centerbodies in the subsonic portion of the nozzle. The nozzle walls serve as the anode while the centerbody functions as the cathode. Viscous effects are important because the Reynolds number, based on the throat radius, is typically less than 1,000. Experimental studies have shown that a swirl or circumferential velocity component stabilizes a constricted arc. This dissertation describes the equations governing flow through a constricted arcjet thruster. An assumption that the flowfield is in local thermodynamic equilibrium leads to a single fluid plasma temperature model. An order of magnitude analysis reveals the governing fluid mechanics equations are uncoupled from the electromagnetic field equations. A numerical method is developed to solve the governing fluid mechanics equations, the Thin Layer Navier-Stokes equations. A coordinate transformation is employed in deriving the governing equations to simplify the application of boundary conditions in complex geometries. An axisymmetric formulation is employed to include the swirl velocity component as well as the axial and radial velocity components. The numerical method is an implicit finite-volume technique and allows for large time steps to reach a converged steady-state solution. The inviscid fluxes are flux-split, and Gauss-Seidel line relaxation is used to accelerate convergence. Converging-diverging nozzles with exit-to-throat area ratios up to 100:1 and annular nozzles were examined. Quantities examined included Mach number and static wall pressure distributions, and oblique shock structures. 
As the level of swirl and viscosity in the flowfield increased the mass flow rate and thrust decreased.
Angelis, G I; Reader, A J; Markiewicz, P J; Kotasidis, F A; Lionheart, W R; Matthews, J C
2013-08-07
Recent studies have demonstrated the benefits of a resolution model within iterative reconstruction algorithms in an attempt to account for effects that degrade the spatial resolution of the reconstructed images. However, these algorithms suffer from slower convergence rates, compared to algorithms where no resolution model is used, due to the additional need to solve an image deconvolution problem. In this paper, a recently proposed algorithm, which decouples the tomographic and image deconvolution problems within an image-based expectation maximization (EM) framework, was evaluated. This separation is convenient, because more computational effort can be placed on the image deconvolution problem and therefore accelerate convergence. Since the computational cost of solving the image deconvolution problem is relatively small, multiple image-based EM iterations do not significantly increase the overall reconstruction time. The proposed algorithm was evaluated using 2D simulations, as well as measured 3D data acquired on the high-resolution research tomograph. Results showed that bias reduction can be accelerated by interleaving multiple iterations of the image-based EM algorithm solving the resolution model problem, with a single EM iteration solving the tomographic problem. Significant improvements were observed particularly for voxels that were located on the boundaries between regions of high contrast within the object being imaged and for small regions of interest, where resolution recovery is usually more challenging. Minor differences were observed using the proposed nested algorithm, compared to the single iteration normally performed, when an optimal number of iterations are performed for each algorithm. However, using the proposed nested approach convergence is significantly accelerated enabling reconstruction using far fewer tomographic iterations (up to 70% fewer iterations for small regions). 
Nevertheless, the optimal number of nested image-based EM iterations is hard to define and should be selected according to the given application.
Thermal Conductivities in Solids from First Principles: Accurate Computations and Rapid Estimates
NASA Astrophysics Data System (ADS)
Carbogno, Christian; Scheffler, Matthias
In spite of significant research efforts, a first-principles determination of the thermal conductivity κ at high temperatures has remained elusive. Boltzmann transport techniques that account for anharmonicity perturbatively become inaccurate under such conditions. Ab initio molecular dynamics (MD) techniques using the Green-Kubo (GK) formalism capture the full anharmonicity, but can become prohibitively costly to converge in time and size. We developed a formalism that accelerates such GK simulations by several orders of magnitude and thus enables their application within the limited time and length scales accessible in ab initio MD. For this purpose, we determine the effective harmonic potential sampled during the MD, along with the associated temperature-dependent phonon properties and lifetimes. Interpolation in reciprocal and frequency space then allows extrapolation to the macroscopic scale. For both force-field and ab initio MD, we validate this approach by computing κ for Si and ZrO2, two materials known for their particularly harmonic and anharmonic character, respectively. Finally, we demonstrate how these techniques facilitate reasonable estimates of κ from existing MD calculations at virtually no additional computational cost.
NASA Technical Reports Server (NTRS)
Edwards, Jack R.; Mcrae, D. Scott
1991-01-01
An efficient method for computing two-dimensional compressible Navier-Stokes flow fields is presented. The solution algorithm is a fully implicit approximate factorization technique based on an unsymmetric line Gauss-Seidel splitting of the equation system Jacobian matrix. Convergence characteristics are improved by the addition of acceleration techniques based on Shamanskii's method for nonlinear equations and Broyden's quasi-Newton update. Characteristic-based differencing of the equations is provided by means of Van Leer's flux vector splitting. In this investigation, emphasis is placed on the fast and accurate computation of shock-wave/boundary-layer interactions with and without slot suction effects. In the latter context, a set of numerical boundary conditions for simulating the transpiration flow in an open slot is devised. Both laminar and turbulent cases are considered, with turbulent closure provided by a modified Cebeci-Smith algebraic model. Comparisons with computational and experimental data sets are presented for a variety of interactions, and a fully-coupled simulation of a plenum chamber/inlet flowfield with shock interaction and suction is also shown and discussed.
GPU acceleration of Eulerian-Lagrangian particle-laden turbulent flow simulations
NASA Astrophysics Data System (ADS)
Richter, David; Sweet, James; Thain, Douglas
2017-11-01
The Lagrangian point-particle approximation is a popular numerical technique for representing dispersed phases whose properties can substantially deviate from the local fluid. In many cases, particularly in the limit of one-way coupled systems, large numbers of particles are desired; this may be either because many physical particles are present (e.g. LES of an entire cloud), or because the use of many particles increases statistical convergence (e.g. high-order statistics). Solving the trajectories of very large numbers of particles can be problematic in traditional MPI implementations, however, and this study reports the benefits of using graphical processing units (GPUs) to integrate the particle equations of motion while preserving the original MPI version of the Eulerian flow solver. It is found that GPU acceleration becomes cost effective around one million particles, and performance enhancements of up to 15x can be achieved when O(108) particles are computed on the GPU rather than the CPU cluster. Optimizations and limitations will be discussed, as will prospects for expanding to two- and four-way coupled systems. ONR Grant No. N00014-16-1-2472.
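The one-way-coupled point-particle equations of motion that such a GPU port offloads can be sketched as follows, assuming simple Stokes drag and forward-Euler time stepping. This is a minimal serial illustration of the governing equations, not the study's solver; the function names and parameter values are hypothetical.

```python
import numpy as np

def advance_particles(xp, vp, fluid_velocity, tau_p, dt, steps):
    """One-way-coupled point particles with Stokes drag (forward Euler):
       dv/dt = (u(x) - v) / tau_p,   dx/dt = v
    where tau_p is the particle response time and u(x) the local fluid velocity."""
    for _ in range(steps):
        vp = vp + dt * (fluid_velocity(xp) - vp) / tau_p
        xp = xp + dt * vp
    return xp, vp

# Usage: particles released from rest relax toward a uniform carrier flow
xp = np.zeros(4)
vp = np.zeros(4)
xp, vp = advance_particles(xp, vp, lambda x: np.ones_like(x),
                           tau_p=0.1, dt=0.01, steps=300)
```

Each particle update is independent of the others, which is exactly why integrating O(10^8) such trajectories maps well onto a GPU while the Eulerian solver stays on MPI.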
Cosmological solutions in spatially curved universes with adiabatic particle production
NASA Astrophysics Data System (ADS)
Aresté Saló, Llibert; de Haro, Jaume
2017-03-01
We perform a qualitative and thermodynamic study of two models when one takes into account adiabatic particle production. In the first one, there is a constant particle production rate, which leads to solutions depicting the current cosmic acceleration but without inflation. The other one has solutions that unify the early and late time acceleration. These solutions converge asymptotically to the thermal equilibrium.
Local Improvement Results for Anderson Acceleration with Inaccurate Function Evaluations
Toth, Alex; Ellis, J. Austin; Evans, Tom; ...
2017-10-26
Here, we analyze the convergence of Anderson acceleration when the fixed point map is corrupted with errors. We also consider uniformly bounded errors and stochastic errors with infinite tails. We prove local improvement results which describe the performance of the iteration up to the point where the accuracy of the function evaluation causes the iteration to stagnate. We illustrate the results with examples from neutronics.
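For reference, the basic (error-free) Anderson acceleration iteration analyzed in this line of work can be sketched as below. This is a generic Type-II formulation with least-squares mixing over a finite window, not the authors' code; in the setting of the abstract, the map `g` would be evaluated inexactly, and the iteration stagnates at the level of that evaluation error.

```python
import numpy as np

def anderson(g, x0, m=5, iters=100, tol=1e-12):
    """Anderson acceleration of the fixed-point iteration x = g(x).

    Keeps up to m+1 recent iterates and mixes them with least-squares
    weights so the linearized residual is minimized (Type-II mixing)."""
    x = np.asarray(x0, dtype=float)
    g_hist, f_hist = [], []   # histories of g(x_k) and residuals f_k = g(x_k) - x_k
    for _ in range(iters):
        gx = np.asarray(g(x), dtype=float)
        f = gx - x
        g_hist.append(gx)
        f_hist.append(f)
        g_hist, f_hist = g_hist[-(m + 1):], f_hist[-(m + 1):]
        if np.linalg.norm(f) < tol:
            return gx
        if len(f_hist) > 1:
            dF = np.column_stack([f_hist[j + 1] - f_hist[j] for j in range(len(f_hist) - 1)])
            dG = np.column_stack([g_hist[j + 1] - g_hist[j] for j in range(len(g_hist) - 1)])
            gamma, *_ = np.linalg.lstsq(dF, f, rcond=None)
            x = gx - dG @ gamma   # accelerated iterate
        else:
            x = gx                # plain fixed-point step to build history
    return x

# Usage: the fixed point of cos(x), reached far faster than plain iteration
root = anderson(lambda v: np.cos(v), np.array([1.0]))
```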
NASA Technical Reports Server (NTRS)
Angelaki, D. E.; Dickman, J. D.
2000-01-01
Spatiotemporal convergence and two-dimensional (2-D) neural tuning have been proposed as a major neural mechanism in the signal processing of linear acceleration. To examine this hypothesis, we studied the firing properties of primary otolith afferents and central otolith neurons that respond exclusively to horizontal linear accelerations of the head (0.16-10 Hz) in alert rhesus monkeys. Unlike primary afferents, the majority of central otolith neurons exhibited 2-D spatial tuning to linear acceleration. As a result, central otolith dynamics vary as a function of movement direction. During movement along the maximum sensitivity direction, the dynamics of all central otolith neurons differed significantly from those observed for the primary afferent population. Specifically, at low frequencies (≤0.5 Hz), the firing rate of the majority of central otolith neurons peaked in phase with linear velocity, in contrast to primary afferents, which peaked in phase with linear acceleration. At least three different groups of central response dynamics were described according to the properties observed for motion along the maximum sensitivity direction. "High-pass" neurons exhibited increasing gains and phase values as a function of frequency. "Flat" neurons were characterized by relatively flat gains and constant phase lags (approximately 20-55 degrees). A few neurons ("low-pass") were characterized by decreasing gain and phase as a function of frequency. The response dynamics of central otolith neurons suggest that the approximately 90-degree phase lags observed at low frequencies are not the result of a neural integration but rather the effect of nonminimum phase behavior, which could arise at least partly through spatiotemporal convergence. Neither afferent nor central otolith neurons discriminated between gravitational and inertial components of linear acceleration. Thus, response sensitivity was indistinguishable during 0.5-Hz pitch oscillations and fore-aft movements.
The fact that otolith-only central neurons with "high-pass" filter properties exhibit semicircular canal-like dynamics during head tilts might have important consequences for the conclusions of previous studies of sensory convergence and sensorimotor transformations in central vestibular neurons.
Efficient self-consistent viscous-inviscid solutions for unsteady transonic flow
NASA Technical Reports Server (NTRS)
Howlett, J. T.
1985-01-01
An improved method is presented for coupling a boundary layer code with an unsteady inviscid transonic computer code in a quasi-steady fashion. At each fixed time step, the boundary layer and inviscid equations are successively solved until the process converges. An explicit coupling of the equations is described which greatly accelerates the convergence process. Computer times for converged viscous-inviscid solutions are about 1.8 times the comparable inviscid values. Comparison of the results obtained with experimental data on three airfoils are presented. These comparisons demonstrate that the explicitly coupled viscous-inviscid solutions can provide efficient predictions of pressure distributions and lift for unsteady two-dimensional transonic flows.
Convergence of Defect-Correction and Multigrid Iterations for Inviscid Flows
NASA Technical Reports Server (NTRS)
Diskin, Boris; Thomas, James L.
2011-01-01
Convergence of multigrid and defect-correction iterations is comprehensively studied within different incompressible and compressible inviscid regimes on high-density grids. Good smoothing properties of the defect-correction relaxation have been shown using both a modified Fourier analysis and a more general idealized-coarse-grid analysis. Single-grid defect correction alone has some slowly converging iterations on grids of medium density. The convergence is especially slow for near-sonic flows and for very low compressible Mach numbers. Additionally, the fast asymptotic convergence seen on medium-density grids deteriorates on high-density grids, where certain downstream-boundary modes are very slowly damped. The multigrid scheme accelerates convergence of the slow defect-correction iterations to the extent determined by the coarse-grid correction. The two-level asymptotic convergence rates are stable and significantly below one in most of the regimes, but slow convergence is noted for near-sonic and very low-Mach compressible flows. The multigrid solver has been applied to the NACA 0012 airfoil and to different flow regimes, such as near-tangency and stagnation. Certain convergence difficulties have been encountered within stagnation regions. Nonetheless, for the airfoil flow, with its sharp trailing edge, residuals converged rapidly for a subcritical flow on a sequence of grids. For supercritical flow, residuals converged more slowly on some intermediate grids than on the finest grid or the two coarsest grids.
NASA Technical Reports Server (NTRS)
Aston, G. (Inventor)
1981-01-01
A system is described that combines geometrical and electrostatic focusing to provide high ion extraction efficiency and good focusing of an accelerated ion beam. The apparatus includes a pair of curved extraction grids with multiple pairs of aligned holes positioned to direct a group of beamlets along converging paths. The extraction grids are closely spaced and maintained at a moderate potential to efficiently extract beamlets of ions and allow them to combine into a single beam. An accelerator electrode device downstream from the extraction grids is at a much lower potential than the grids to accelerate the combined beam. The application of the system to ion implantation is mentioned.
RADC Multi-Dimensional Signal-Processing Research Program.
1980-09-30
[Contents fragment] ... Formulation; 3.2.2 Methods of Accelerating Convergence; 3.2.3 Application to Image Deblurring; 3.2.4 Extensions; 3.3 Convergence of Iterative Signal...
[Abstract fragment] ... noise-driven linear filters permit development of the joint probability density function or likelihood function for the image. With an expression... spatial linear filter driven by white noise (see Fig. 1). If the probability density function for the white noise is known... (Fig. 1 caption fragment: "Model for image...")
A new ART code for tomographic interferometry
NASA Technical Reports Server (NTRS)
Tan, H.; Modarress, D.
1987-01-01
A new algebraic reconstruction technique (ART) code based on the iterative refinement method of least squares solution for tomographic reconstruction is presented. The accuracy and convergence of the technique are evaluated through the application of numerically generated interferometric data. It was found that, in general, the accuracy of the results was superior to that of other reported techniques. The iterative method unconditionally converged to a solution for which the residual was minimum. The effects of increased data were studied. The inversion error was found to be a function of the input data error only. The convergence rate, on the other hand, was affected by all three parameters. Finally, the technique was applied to experimental data, and the results are reported.
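The classical row-action form of ART (the Kaczmarz iteration) gives a flavor of such reconstruction codes. Note that the paper's variant is built on iterative refinement of a least-squares solution, so the following is only a generic textbook sketch, with a hypothetical small system for illustration.

```python
import numpy as np

def art(A, b, sweeps=100, relax=1.0):
    """Classical ART (Kaczmarz) row-action iteration for A x ≈ b.

    Each inner step projects the current estimate onto the hyperplane
    defined by one row of A; `relax` is the usual relaxation factor."""
    x = np.zeros(A.shape[1])
    for _ in range(sweeps):
        for ai, bi in zip(A, b):
            x = x + relax * (bi - ai @ x) / (ai @ ai) * ai
    return x

# Usage: a small consistent system (hypothetical "projection" rows)
A = np.array([[1.0, 0.0],
              [0.0, 2.0],
              [1.0, 1.0]])
x = art(A, A @ np.array([1.0, 2.0]))
```

For consistent systems the iteration converges to a solution; for noisy, inconsistent data the relaxed variants settle near the least-squares solution, which is the regime the paper's refinement method targets.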
A Conforming Multigrid Method for the Pure Traction Problem of Linear Elasticity: Mixed Formulation
NASA Technical Reports Server (NTRS)
Lee, Chang-Ock
1996-01-01
A multigrid method using conforming P-1 finite element is developed for the two-dimensional pure traction boundary value problem of linear elasticity. The convergence is uniform even as the material becomes nearly incompressible. A heuristic argument for acceleration of the multigrid method is discussed as well. Numerical results with and without this acceleration as well as performance estimates on a parallel computer are included.
Anderson, D L
1975-03-21
The concept of a stressed elastic lithospheric plate riding on a viscous asthenosphere is used to calculate the recurrence interval of great earthquakes at convergent plate boundaries, the separation of decoupling and lithospheric earthquakes, and the migration pattern of large earthquakes along an arc. It is proposed that plate motions accelerate after great decoupling earthquakes and that most of the observed plate motions occur during short periods of time, separated by periods of relative quiescence.
Collisional-radiative switching - A powerful technique for converging non-LTE calculations
NASA Technical Reports Server (NTRS)
Hummer, D. G.; Voels, S. A.
1988-01-01
A very simple technique has been developed to converge statistical equilibrium and model atmosphere calculations in extreme non-LTE conditions when the usual iterative methods fail to converge from an LTE starting model. The proposed technique is based on a smooth transition from a collision-dominated LTE situation to the desired non-LTE conditions in which radiation dominates, at least in the most important transitions. The approach was used to successfully compute stellar models with He abundances of 0.20, 0.30, and 0.50; Teff = 30,000 K; and log g = 2.9.
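The switching idea, ramping the radiative terms in gradually and restarting each solve from the previously converged state, can be sketched on a toy two-level atom. All rate values below are hypothetical, and the toy equilibrium has a closed-form solution; in a real non-LTE code each step of the ramp would be an iterative statistical-equilibrium solve seeded by the previous converged populations.

```python
import numpy as np

# Toy two-level "atom": collisional (C) and radiative (R) rates, all hypothetical.
C12, C21 = 1.0, 2.0      # collisional excitation / de-excitation
R12, R21 = 50.0, 800.0   # radiative rates that dominate the final (non-LTE) problem

def solve_level_populations(switch):
    """Equilibrium populations when radiative rates are scaled by `switch`.

    From n1 * up = n2 * down and n1 + n2 = 1."""
    up = C12 + switch * R12
    down = C21 + switch * R21
    n2 = up / (up + down)
    return np.array([1.0 - n2, n2])

# Collisional-radiative switching: start from the collision-dominated
# (LTE-like) problem and ramp the radiative rates in smoothly, reusing
# each converged solution as the starting point for the next step.
populations = solve_level_populations(0.0)
for switch in np.linspace(0.0, 1.0, 11)[1:]:
    populations = solve_level_populations(switch)   # real code: iterate to convergence here
```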
Ojeda-May, Pedro; Nam, Kwangho
2017-08-08
The strategy and implementation of scalable and efficient semiempirical (SE) QM/MM methods in CHARMM are described. The serial version of the code was first profiled to identify routines that required parallelization. Afterward, the code was parallelized and accelerated with three approaches. The first approach was the parallelization of the entire QM/MM routines, including the Fock matrix diagonalization routines, using the CHARMM message passing interface (MPI) machinery. In the second approach, two different self-consistent field (SCF) energy convergence accelerators were implemented using density and Fock matrices as targets for their extrapolations in the SCF procedure. In the third approach, the entire QM/MM and MM energy routines were accelerated by implementing the hybrid MPI/open multiprocessing (OpenMP) model in which both the task- and loop-level parallelization strategies were adopted to balance loads between different OpenMP threads. The present implementation was tested on two solvated enzyme systems (including <100 QM atoms) and a symmetric SN2 reaction in water. The MPI version exceeded existing SE QM methods in CHARMM, which include the SCC-DFTB and SQUANTUM methods, by at least 4-fold. The use of SCF convergence accelerators further accelerated the code by ∼12-35% depending on the size of the QM region and the number of CPU cores used. Although the MPI version displayed good scalability, the performance was diminished for large numbers of MPI processes due to the overhead associated with MPI communications between nodes. This issue was partially overcome by the hybrid MPI/OpenMP approach which displayed a better scalability for a larger number of CPU cores (up to 64 CPUs in the tested systems).
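A common family of SCF convergence accelerators extrapolates over previous Fock or density matrices; Pulay's DIIS is the textbook example and is sketched below. This is a generic sketch of the extrapolation step only, not necessarily the exact scheme implemented in CHARMM, and the matrices in the usage example are hypothetical.

```python
import numpy as np

def diis_extrapolate(mats, errs):
    """Pulay DIIS: mix previous Fock (or density) matrices with weights c
    that minimize the norm of the extrapolated error, subject to sum(c) = 1.

    Solves the bordered linear system with B_ij = <e_i | e_j>."""
    n = len(errs)
    B = -np.ones((n + 1, n + 1))
    B[-1, -1] = 0.0
    for i in range(n):
        for j in range(n):
            B[i, j] = np.dot(errs[i].ravel(), errs[j].ravel())
    rhs = np.zeros(n + 1)
    rhs[-1] = -1.0
    coef = np.linalg.solve(B, rhs)[:n]
    return sum(c * m for c, m in zip(coef, mats))

# Usage: two stored "Fock matrices" with opposite error vectors mix 50/50
F_hist = [np.array([[2.0]]), np.array([[4.0]])]
e_hist = [np.array([[1.0]]), np.array([[-1.0]])]
F_new = diis_extrapolate(F_hist, e_hist)
```

In an SCF loop, the extrapolated matrix replaces the raw one at each iteration; whether density or Fock matrices make the better extrapolation target is exactly the comparison the implementation above explores.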
Numerical simulation and experimental investigation about internal and external flows
NASA Astrophysics Data System (ADS)
Wang, Tao; Yang, Guowei; Huang, Guojun; Zhou, Liandi
2006-06-01
In this paper, TASCflow3D is used to solve the inner and outer 3D viscous incompressible turbulent flow (Re = 5.6×10^6) around an axisymmetric body with a duct. The governing equations are the RANS equations with the standard k-ε turbulence model. The discrete method used is a finite volume method based on the finite element approach. In this method, the description of geometry is very flexible and, at the same time, important conservative properties are retained. The multi-block and algebraic multi-grid techniques are used for convergence acceleration. Agreement between experimental results and calculation is good. It indicates that this approach can be used to simulate complex flows such as the interaction between rotor and stator or propulsion systems containing tip clearance and cavitation.
GPU-accelerated regularized iterative reconstruction for few-view cone beam CT
DOE Office of Scientific and Technical Information (OSTI.GOV)
Matenine, Dmitri, E-mail: dmitri.matenine.1@ulaval.ca; Goussard, Yves, E-mail: yves.goussard@polymtl.ca; Després, Philippe, E-mail: philippe.despres@phy.ulaval.ca
2015-04-15
Purpose: The present work proposes an iterative reconstruction technique designed for x-ray transmission computed tomography (CT). The main objective is to provide a model-based solution to the cone-beam CT reconstruction problem, yielding accurate low-dose images via few-views acquisitions in clinically acceptable time frames. Methods: The proposed technique combines a modified ordered subsets convex (OSC) algorithm and the total variation minimization (TV) regularization technique and is called OSC-TV. The number of subsets of each OSC iteration follows a reduction pattern in order to ensure the best performance of the regularization method. Considering the high computational cost of the algorithm, it is implemented on a graphics processing unit, using parallelization to accelerate computations. Results: The reconstructions were performed on computer-simulated as well as human pelvic cone-beam CT projection data and image quality was assessed. In terms of convergence and image quality, OSC-TV performs well in reconstruction of low-dose cone-beam CT data obtained via a few-view acquisition protocol. It compares favorably to the few-view TV-regularized projections onto convex sets (POCS-TV) algorithm. It also appears to be a viable alternative to full-dataset filtered backprojection. Execution times are 1–2 min and are compatible with the typical clinical workflow for nonreal-time applications. Conclusions: Considering the image quality and execution times, this method may be useful for reconstruction of low-dose clinical acquisitions. It may be of particular benefit to patients who undergo multiple acquisitions by reducing the overall imaging radiation dose and associated risks.
Reddy, S Srikanth; Revathi, Kakkirala; Reddy, S Kranthikumar
2013-01-01
The conventional casting technique is time-consuming compared with the accelerated casting technique. In this study, the marginal accuracy of castings fabricated using accelerated and conventional casting techniques was compared. Twenty wax patterns were fabricated, and the marginal discrepancy between the die and the patterns was measured using an optical stereomicroscope. Ten wax patterns were used for conventional casting and the rest for accelerated casting. A nickel-chromium alloy was used for the castings. The castings were measured for marginal discrepancies and compared. Castings fabricated using the conventional technique showed less vertical marginal discrepancy than those fabricated by the accelerated technique, and the difference was statistically highly significant. The conventional casting technique therefore produced better marginal accuracy; however, the vertical marginal discrepancy produced by the accelerated technique was well within the maximum clinical tolerance limits. Accelerated casting can thus be used to save laboratory time while fabricating clinical crowns with acceptable vertical marginal discrepancy.
15 CFR Supplement No. 8 to Part 742 - Self-Classification Report for Encryption Items
Code of Federal Regulations, 2011 CFR
2011-01-01
... forensics (v) Cryptographic accelerator (vi) Data backup and recovery (vii) Database (viii) Disk/drive... (MAN) (xxii) Modem (xxiii) Network convergence or infrastructure n.e.s. (xxiv) Network forensics (xxv...
A Pseudo-Temporal Multi-Grid Relaxation Scheme for Solving the Parabolized Navier-Stokes Equations
NASA Technical Reports Server (NTRS)
White, J. A.; Morrison, J. H.
1999-01-01
A multi-grid, flux-difference-split, finite-volume code, VULCAN, is presented for solving the elliptic and parabolized forms of the equations governing three-dimensional, turbulent, calorically perfect and non-equilibrium chemically reacting flows. The space-marching algorithms developed to improve convergence rate and/or reduce computational cost are emphasized. The algorithms presented are extensions to the class of implicit pseudo-time iterative, upwind space-marching schemes. A full approximation storage (FAS), full multi-grid scheme is also described, which is used to accelerate the convergence of a Gauss-Seidel relaxation method. The multi-grid algorithm is shown to significantly improve convergence on high-aspect-ratio grids.
Kermajani, Hamidreza; Gomez, Carles
2014-01-01
The IPv6 Routing Protocol for Low-power and Lossy Networks (RPL) has been recently developed by the Internet Engineering Task Force (IETF). Given its crucial role in enabling the Internet of Things, a significant amount of research effort has already been devoted to RPL. However, the RPL network convergence process has not yet been investigated in detail. In this paper we study the influence of the main RPL parameters and mechanisms on the network convergence process of this protocol in IEEE 802.15.4 multihop networks. We also propose and evaluate a mechanism that leverages an option available in RPL for accelerating the network convergence process. We carry out extensive simulations for a wide range of conditions, considering different network scenarios in terms of size and density. Results show that network convergence performance depends dramatically on the use and adequate configuration of key RPL parameters and mechanisms. The findings and contributions of this work provide an RPL configuration guideline for network convergence performance tuning, as well as a characterization of the related performance trade-offs. PMID:25004154
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gilmanov, Anvar, E-mail: agilmano@umn.edu; Le, Trung Bao, E-mail: lebao002@umn.edu; Sotiropoulos, Fotis, E-mail: fotis@umn.edu
We present a new numerical methodology for simulating fluid–structure interaction (FSI) problems involving thin flexible bodies in an incompressible fluid. The FSI algorithm uses the Dirichlet–Neumann partitioning technique. The curvilinear immersed boundary method (CURVIB) is coupled with a rotation-free finite element (FE) model for thin shells enabling the efficient simulation of FSI problems with arbitrarily large deformation. Turbulent flow problems are handled using large-eddy simulation with the dynamic Smagorinsky model in conjunction with a wall model to reconstruct boundary conditions near immersed boundaries. The CURVIB and FE solvers are coupled together on the flexible solid–fluid interfaces where the structural nodal positions, displacements, velocities and loads are calculated and exchanged between the two solvers. Loose and strong coupling FSI schemes are employed enhanced by the Aitken acceleration technique to ensure robust coupling and fast convergence especially for low mass ratio problems. The coupled CURVIB-FE-FSI method is validated by applying it to simulate two FSI problems involving thin flexible structures: 1) vortex-induced vibrations of a cantilever mounted in the wake of a square cylinder at different mass ratios and at low Reynolds number; and 2) the more challenging high Reynolds number problem involving the oscillation of an inverted elastic flag. For both cases the computed results are in excellent agreement with previous numerical simulations and/or experimental measurements. Grid convergence studies are carried out for both the cantilever and inverted-flag problems, demonstrating the convergence of the CURVIB-FE-FSI method. Finally, the capability of the new methodology in simulations of complex cardiovascular flows is demonstrated by applying it to simulate the FSI of a tri-leaflet, prosthetic heart valve in an anatomic aorta and under physiologic pulsatile conditions.
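The Aitken acceleration used to stabilize the partitioned FSI coupling is a small, self-contained algorithm: the relaxation factor for the interface update is recomputed each subiteration from successive residuals. Below is a minimal sketch on a generic interface fixed-point map (the map, names, and values are hypothetical, not the CURVIB-FE solver).

```python
import numpy as np

def aitken_fsi(interface_map, d0, omega0=0.5, iters=50, tol=1e-10):
    """Partitioned fixed-point subiteration with Aitken dynamic relaxation.

    d_{k+1} = d_k + omega_k * r_k, where r_k = G(d_k) - d_k and
    omega_k = -omega_{k-1} * (r_{k-1} . (r_k - r_{k-1})) / |r_k - r_{k-1}|^2.
    `interface_map` G plays the role of fluid-solve followed by structure-solve."""
    d = np.asarray(d0, dtype=float)
    omega, r_prev = omega0, None
    for _ in range(iters):
        r = interface_map(d) - d
        if np.linalg.norm(r) < tol:
            break
        if r_prev is not None:
            dr = r - r_prev
            denom = dr @ dr
            if denom > 0.0:
                omega = -omega * (r_prev @ dr) / denom
        d = d + omega * r
        r_prev = r
    return d

# Usage: a linear stand-in interface map with fixed point d = 10
d = aitken_fsi(lambda v: 0.9 * v + 1.0, np.array([0.0]))
```

For low mass ratios the plain (constant-relaxation) iteration converges slowly or diverges; the adaptive omega is what restores fast, robust convergence.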
Convergence Acceleration for Multistage Time-Stepping Schemes
NASA Technical Reports Server (NTRS)
Swanson, R. C.; Turkel, Eli L.; Rossow, C-C; Vasta, V. N.
2006-01-01
The convergence of a Runge-Kutta (RK) scheme with multigrid is accelerated by preconditioning with a fully implicit operator. With the extended stability of the Runge-Kutta scheme, CFL numbers as high as 1000 could be used. The implicit preconditioner addresses the stiffness in the discrete equations associated with stretched meshes. Numerical dissipation operators (based on the Roe scheme, a matrix formulation, and the CUSP scheme) as well as the number of RK stages are considered in evaluating the RK/implicit scheme. Both the numerical and computational efficiency of the scheme with the different dissipation operators are discussed. The RK/implicit scheme is used to solve the two-dimensional (2-D) and three-dimensional (3-D) compressible, Reynolds-averaged Navier-Stokes equations. In two dimensions, turbulent flows over an airfoil at subsonic and transonic conditions are computed. The effects of mesh cell aspect ratio on convergence are investigated for Reynolds numbers between 5.7×10^6 and 100.0×10^6. Results are also obtained for a transonic wing flow. For both 2-D and 3-D problems, the computational time of a well-tuned standard RK scheme is reduced by at least a factor of four.
A fast multigrid-based electromagnetic eigensolver for curved metal boundaries on the Yee mesh
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bauer, Carl A., E-mail: carl.bauer@colorado.edu; Werner, Gregory R.; Cary, John R.
For embedded boundary electromagnetics using the Dey–Mittra (Dey and Mittra, 1997) [1] algorithm, a special grad–div matrix constructed in this work allows use of multigrid methods for efficient inversion of Maxwell's curl–curl matrix. Efficient curl–curl inversions are demonstrated within a shift-and-invert Krylov-subspace eigensolver (open-sourced at https://github.com/bauerca/maxwell) on the spherical cavity and the 9-cell TESLA superconducting accelerator cavity. The accuracy of the Dey–Mittra algorithm is also examined: frequencies converge with second-order error, and surface fields are found to converge with nearly second-order error. In agreement with previous work (Nieter et al., 2009) [2], neglecting some boundary-cut cell faces (as is required in the time domain for numerical stability) reduces frequency convergence to first-order and surface-field convergence to zeroth-order (i.e. surface fields do not converge). Additionally and importantly, neglecting faces can reduce accuracy by an order of magnitude at low resolutions.
A linear recurrent kernel online learning algorithm with sparse updates.
Fan, Haijin; Song, Qing
2014-02-01
In this paper, we propose a recurrent kernel algorithm with selectively sparse updates for online learning. The algorithm introduces a linear recurrent term in the estimation of the current output, which makes past information reusable for updating the algorithm in the form of a recurrent gradient term. To ensure that the reuse of this recurrent gradient indeed accelerates the convergence speed, a novel hybrid recurrent training is proposed to switch learning of the recurrent information on or off according to the magnitude of the current training error. Furthermore, the algorithm includes a data-dependent adaptive learning rate which can provide guaranteed system weight convergence at each training iteration. The learning rate is set to zero when the training violates the derived convergence conditions, which makes the algorithm updating process sparse. Theoretical analyses of the weight convergence are presented, and experimental results show the good performance of the proposed algorithm in terms of convergence speed and estimation accuracy.
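The selective-update idea, skipping the update whenever it would not help, can be illustrated with a heavily simplified error-gated kernel LMS sketch. This deliberately omits the paper's recurrent gradient term and its derived convergence conditions; the gate here is a plain error-magnitude threshold, and all names and values are hypothetical.

```python
import numpy as np

def gaussian_kernel(x, y, sigma=1.0):
    return np.exp(-np.sum((x - y) ** 2) / (2.0 * sigma ** 2))

class SparseKLMS:
    """Kernel least-mean-squares with error-gated (sparse) updates.

    A simplified stand-in for selective sparse updating: the model only
    grows (and learns) when the instantaneous error is large enough."""
    def __init__(self, eta=0.5, threshold=0.05):
        self.eta, self.threshold = eta, threshold
        self.centers, self.alphas = [], []

    def predict(self, x):
        return sum(a * gaussian_kernel(c, x)
                   for c, a in zip(self.centers, self.alphas))

    def update(self, x, y):
        err = y - self.predict(x)
        if abs(err) > self.threshold:   # sparse update: skip small errors
            self.centers.append(x)
            self.alphas.append(self.eta * err)
        return err

# Usage: repeated presentations of one sample drive the error below the gate
model = SparseKLMS()
x0 = np.zeros(2)
for _ in range(10):
    model.update(x0, 1.0)
```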
NASA Technical Reports Server (NTRS)
Fay, John F.
1990-01-01
A calculation is made of the stability of various relaxation schemes for the numerical solution of partial differential equations. A multigrid acceleration method is introduced, and its effects on stability are explored. A detailed stability analysis of a simple case is carried out and verified by numerical experiment. It is shown that the use of multigrids can speed convergence by several orders of magnitude without adversely affecting stability.
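The multigrid acceleration described above can be illustrated with a minimal two-grid sketch for the 1-D Poisson equation (all function names, parameters, and the test problem here are illustrative, not from the report): weighted-Jacobi relaxation damps high-frequency error, and a coarse-grid correction removes the smooth components that relaxation alone reduces only slowly.

```python
import numpy as np

def jacobi_sweep(u, f, h, omega=2.0 / 3.0, sweeps=3):
    """Weighted-Jacobi relaxation for -u'' = f with Dirichlet boundaries."""
    for _ in range(sweeps):
        u[1:-1] = (1 - omega) * u[1:-1] + omega * 0.5 * (
            u[:-2] + u[2:] + h * h * f[1:-1])
    return u

def residual(u, f, h):
    r = np.zeros_like(u)
    r[1:-1] = f[1:-1] - (2 * u[1:-1] - u[:-2] - u[2:]) / (h * h)
    return r

def two_grid_cycle(u, f, h):
    """Smooth, restrict the residual, solve coarsely, prolong, smooth again."""
    u = jacobi_sweep(u, f, h)
    r = residual(u, f, h)
    rc = r[::2].copy()                       # injection restriction (simplest choice)
    hc, nc = 2 * h, len(rc)
    # Direct solve of the coarse tridiagonal Poisson operator
    A = (2.0 * np.eye(nc - 2) - np.eye(nc - 2, k=1) - np.eye(nc - 2, k=-1)) / (hc * hc)
    ec = np.zeros(nc)
    ec[1:-1] = np.linalg.solve(A, rc[1:-1])
    e = np.interp(np.arange(len(u)), np.arange(0, len(u), 2), ec)  # linear prolongation
    return jacobi_sweep(u + e, f, h)

# -u'' = pi^2 sin(pi x) on [0, 1]; exact solution sin(pi x)
n = 33
x = np.linspace(0.0, 1.0, n)
h = x[1] - x[0]
f = np.pi ** 2 * np.sin(np.pi * x)
u = np.zeros(n)
r0 = np.linalg.norm(residual(u, f, h))
for _ in range(3):
    u = two_grid_cycle(u, f, h)
r3 = np.linalg.norm(residual(u, f, h))
```

A full multigrid method applies this correction recursively on a hierarchy of grids; even this two-grid version reduces the residual by orders of magnitude in a few cycles, which pure relaxation on the fine grid cannot do for smooth error.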
NASA Technical Reports Server (NTRS)
Frady, Gregory P.; Duvall, Lowery D.; Fulcher, Clay W. G.; Laverde, Bruce T.; Hunt, Ronald A.
2011-01-01
A rich body of vibroacoustic test data was recently generated at Marshall Space Flight Center for a curved orthogrid panel typical of launch vehicle skin structures. Several test article configurations were produced by adding component equipment of differing weights to the flight-like vehicle panel. The test data were used to anchor computational predictions of a variety of spatially distributed responses including acceleration, strain and component interface force. Transfer functions relating the responses to the input pressure field were generated from finite element based modal solutions and test-derived damping estimates. A diffuse acoustic field model was employed to describe the assumed correlation of phased input sound pressures across the energized panel. This application demonstrates the ability to quickly and accurately predict a variety of responses of acoustically energized skin panels with mounted components. Favorable comparisons between the measured and predicted responses were established. The validated models were used to examine vibration response sensitivities to relevant modeling parameters such as pressure patch density, mesh density, weight of the mounted component and model form. Convergence metrics include spectral densities and cumulative root-mean-squared (RMS) functions for acceleration, velocity, displacement, strain and interface force. Minimum frequencies for response convergence were established, as were recommendations for modeling techniques, particularly in the early stages of a component design when accurate structural vibration requirements are needed relatively quickly. The results were compared with long-established guidelines for modeling accuracy of component-loaded panels. A theoretical basis for the Response/Pressure Transfer Function (RPTF) approach provides insight into trends observed in the response predictions and confirmed in the test data.
The software modules developed for the RPTF method can be easily adapted for quick replacement of the diffuse acoustic field with other pressure field models; for example a turbulent boundary layer (TBL) model suitable for vehicle ascent. Wind tunnel tests have been proposed to anchor the predictions and provide new insight into modeling approaches for this type of environment. Finally, component vibration environments for design were developed from the measured and predicted responses and compared with those derived from traditional techniques such as Barrett scaling methods for unloaded and component-loaded panels.
NASA Astrophysics Data System (ADS)
Safouhi, Hassan; Hoggan, Philip
2003-01-01
This review on molecular integrals for large electronic systems (MILES) places the problem of analytical integration over exponential-type orbitals (ETOs) in a historical context. After reference to the pioneering work, particularly by Barnett, Shavitt and Yoshimine, it focuses on recent progress towards rapid and accurate analytic solutions of MILES over ETOs. Software such as the hydrogenlike wavefunction package Alchemy by Yoshimine and collaborators is described. The review focuses on convergence acceleration of these highly oscillatory integrals and in particular it highlights suitable nonlinear transformations. Work by Levin and Sidi is described and applied to MILES. A step by step description of progress in the use of nonlinear transformation methods to obtain efficient codes is provided. The recent approach developed by Safouhi is also presented. The current state of the art in this field is summarized to show that ab initio analytical work over ETOs is now a viable option.
Effects of Coulomb collisions on cyclotron maser and plasma wave growth in magnetic loops
NASA Technical Reports Server (NTRS)
Hamilton, Russell J.; Petrosian, Vahe
1990-01-01
The evolution of nonthermal electrons accelerated in magnetic loops is determined by solving the kinetic equation, including magnetic field convergence and Coulomb collisions in order to determine the effects of these interactions on the induced cyclotron maser and plasma wave growth. It is found that the growth rates are larger and the possibility of cyclotron maser action is stronger for smaller loop column density, for larger magnetic field convergence, for a more isotropic injected electron pitch angle distribution, and for more impulsive acceleration. For modest values of the column density in the coronal portion of a flaring loop, the growth rates of instabilities are significantly reduced, and the reduction is much larger for the cyclotron modes than for the plasma wave modes. The rapid decrease in the growth rates with increasing loop column density suggests that, in flare loops when such phenomena occur, the densities are lower than commonly accepted.
Accurate and efficient spin integration for particle accelerators
Abell, Dan T.; Meiser, Dominic; Ranjbar, Vahid H.; ...
2015-02-01
Accurate spin tracking is a valuable tool for understanding spin dynamics in particle accelerators and can help improve the performance of an accelerator. In this paper, we present a detailed discussion of the integrators in the spin tracking code GPUSPINTRACK. We have implemented orbital integrators based on drift-kick, bend-kick, and matrix-kick splits. On top of the orbital integrators, we have implemented various integrators for the spin motion. These integrators use quaternions and Romberg quadratures to accelerate both the computation and the convergence of spin rotations. We evaluate their performance and accuracy in quantitative detail for individual elements as well as for the entire RHIC lattice. We exploit the inherently data-parallel nature of spin tracking to accelerate our algorithms on graphics processing units.
Tourism English Teaching Techniques Converged from Two Different Angles.
ERIC Educational Resources Information Center
Seong, Myeong-Hee
2001-01-01
Provides techniques converged from two different angles (learners and tourism English features) for effective tourism English teaching in a junior college in Korea. Used a questionnaire, needs analysis, an instrument for measuring learners' strategies for oral communication, a small-scale classroom study for learners' preferred teaching…
Application of the Convergence Technique to Basic Studies of the Reading Process. Final Report.
ERIC Educational Resources Information Center
Gephart, William J.
This study covers a program of research on problems in the area of reading undertaken and supported by the U.S. Office of Education. Due to the effectiveness of the Convergence Technique in the planning and management of complex programs of bio-medical research, this project was undertaken to develop plans for the application of this technique in…
ERIC Educational Resources Information Center
Munoz-Organero, Mario; Ramirez, Gustavo A.; Merino, Pedro Munoz; Kloos, Carlos Delgado
2010-01-01
The use of swarm intelligence techniques in e-learning scenarios provides a way to combine simple interactions of individual students to solve a more complex problem. After getting some data from the interactions of the first students with a central system, the use of these techniques converges to a solution that the rest of the students can…
Evaluating bump control techniques through convergence monitoring
DOE Office of Scientific and Technical Information (OSTI.GOV)
Campoli, A.A.
1987-07-01
A coal mine bump is the violent failure of a pillar or pillars due to overstress. Retreat coal mining concentrates stresses on the pillars directly outby gob areas, and the situation becomes critical when mining a coalbed encased in rigid associated strata. Bump control techniques employed by the Olga Mine, McDowell County, WV, were evaluated through convergence monitoring in a Bureau of Mines study. Olga uses a novel pillar splitting mining method to extract 55-ft by 70-ft chain pillars under 1,100 to 1,550 ft of overburden. Three rows of pillars are mined simultaneously to soften the pillar line and reduce strain energy storage capacity. Localized stress reduction (destressing) techniques, auger drilling and shot firing, induced approximately 0.1 in. of roof-to-floor convergence in 'high'-stress pillars near the gob line. Auger drilling of a 'low'-stress pillar located between two barrier pillars produced no convergence effects.
Convergence of Newton's method for a single real equation
NASA Technical Reports Server (NTRS)
Campbell, C. W.
1985-01-01
Newton's method for finding the zeroes of a single real function is investigated in some detail. Convergence is generally checked using the Contraction Mapping Theorem, which yields sufficient but not necessary conditions for convergence of the general single-point iteration method. The resulting convergence intervals are frequently considerably smaller than the actual convergence zones. For a specific single-point iteration method, such as Newton's method, better estimates of the regions of convergence should be possible. A technique is described which, under certain conditions (frequently satisfied by well-behaved functions), gives much larger zones where convergence is guaranteed.
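For concreteness, the single-point iteration the abstract analyzes is x_{n+1} = x_n − f(x_n)/f'(x_n); a minimal sketch (the test function, starting point, and tolerances are illustrative, not from the report):

```python
def newton(f, fprime, x0, tol=1e-12, max_iter=50):
    """Newton's method for a single real equation f(x) = 0.
    Iterates x <- x - f(x)/f'(x) until the step size falls below tol."""
    x = x0
    for _ in range(max_iter):
        step = f(x) / fprime(x)
        x -= step
        if abs(step) < tol:
            return x
    raise RuntimeError("Newton iteration did not converge")

# Example: solve x**2 - 2 = 0 starting from x0 = 1
root = newton(lambda x: x * x - 2.0, lambda x: 2.0 * x, 1.0)
```

Viewed as a fixed-point iteration x ← g(x) with g(x) = x − f(x)/f'(x), the Contraction Mapping Theorem guarantees convergence wherever |g'(x)| < 1, which is the sufficient-but-not-necessary condition the abstract refers to.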
Li, Xia; Guo, Meifang; Su, Yongfu
2016-01-01
In this article, a new multidirectional monotone hybrid iteration algorithm for finding a solution to the split common fixed point problem is presented for two countable families of quasi-nonexpansive mappings in Banach spaces. Strong convergence theorems are proved. As an application, the split common null point problem of maximal monotone operators in Banach spaces is considered, and strong convergence theorems for finding a solution of the split common null point problem are derived. This iteration algorithm can accelerate the convergence speed of the iterative sequence. The results of this paper improve and extend the recent results of Takahashi and Yao (Fixed Point Theory Appl 2015:87, 2015) and many others.
Three-dimensional unstructured grid Euler computations using a fully-implicit, upwind method
NASA Technical Reports Server (NTRS)
Whitaker, David L.
1993-01-01
A method has been developed to solve the Euler equations on a three-dimensional unstructured grid composed of tetrahedra. The method uses an upwind flow solver with a linearized, backward-Euler time integration scheme. Each time step results in a sparse linear system of equations which is solved by an iterative, sparse matrix solver. Local-time stepping, switched evolution relaxation (SER), preconditioning and reuse of the Jacobian are employed to accelerate the convergence rate. Implicit boundary conditions were found to be extremely important for fast convergence. Numerical experiments have shown that convergence rates comparable to that of a multigrid, central-difference scheme are achievable on the same mesh. Results are presented for several grids about an ONERA M6 wing.
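The switched evolution relaxation (SER) strategy mentioned above grows the pseudo-time step in inverse proportion to the residual norm, so the implicit scheme transitions from a robust small-step iteration toward a pure Newton step as the residual falls. A minimal scalar sketch (the test equation and parameters are illustrative, not from the paper):

```python
def ser_solve(F, J, u0, cfl0=1.0, tol=1e-10, max_steps=100):
    """Pseudo-transient continuation with Switched Evolution Relaxation:
    solve F(u) = 0 via backward-Euler steps (1/dt + J) du = -F(u),
    where the pseudo-time step dt ~ CFL grows as the residual shrinks."""
    u, cfl = u0, cfl0
    r_prev = abs(F(u))
    for _ in range(max_steps):
        r = abs(F(u))
        if r < tol:
            return u
        cfl *= r_prev / r          # SER update: CFL ~ 1/||residual||
        r_prev = r
        u -= F(u) / (1.0 / cfl + J(u))
    return u

# Solve u**3 = 8 by driving F(u) = u**3 - 8 to zero
root = ser_solve(lambda u: u ** 3 - 8.0, lambda u: 3.0 * u ** 2, u0=1.0)
```

As the CFL number grows, 1/dt vanishes from the linearized system and each update approaches the Newton step −F/J, recovering fast terminal convergence.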
NASA Astrophysics Data System (ADS)
Peng, Chengtao; Qiu, Bensheng; Zhang, Cheng; Ma, Changyu; Yuan, Gang; Li, Ming
2017-07-01
Over the years, X-ray computed tomography (CT) has been successfully used in clinical diagnosis. However, when the body of the patient to be examined contains metal objects, the reconstructed image is polluted by severe metal artifacts, which affect the doctor's diagnosis of disease. In this work, we proposed a dynamic re-weighted total variation (DRWTV) technique combined with the statistical iterative reconstruction (SIR) method to reduce the artifacts. The DRWTV method is based on the total variation (TV) and re-weighted total variation (RWTV) techniques, but it provides a sparser representation than TV and protects tissue details better than RWTV. Besides, the DRWTV can suppress artifacts and noise, and the SIR convergence speed is also accelerated. The performance of the algorithm is tested on both a simulated phantom dataset and a clinical dataset, which are a teeth phantom with two metal implants and a skull with three metal implants, respectively. The proposed algorithm (SIR-DRWTV) is compared with two traditional iterative algorithms, which are SIR and SIR constrained by RWTV regularization (SIR-RWTV). The results show that the proposed algorithm has the best performance in reducing metal artifacts and protecting tissue details.
Sainath, Kamalesh; Teixeira, Fernando L; Donderici, Burkay
2014-01-01
We develop a general-purpose formulation, based on two-dimensional spectral integrals, for computing electromagnetic fields produced by arbitrarily oriented dipoles in planar-stratified environments, where each layer may exhibit arbitrary and independent anisotropy in both its (complex) permittivity and permeability tensors. Among the salient features of our formulation are (i) computation of eigenmodes (characteristic plane waves) supported in arbitrarily anisotropic media in a numerically robust fashion, (ii) implementation of an hp-adaptive refinement for the numerical integration to evaluate the radiation and weakly evanescent spectra contributions, and (iii) development of an adaptive extension of an integral convergence acceleration technique to compute the strongly evanescent spectrum contribution. While other semianalytic techniques exist to solve this problem, none have full applicability to media exhibiting arbitrary double anisotropies in each layer, where one must account for the whole range of possible phenomena (e.g., mode coupling at interfaces and nonreciprocal mode propagation). Brute-force numerical methods can tackle this problem but only at a much higher computational cost. The present formulation provides an efficient and robust technique for field computation in arbitrary planar-stratified environments. We demonstrate the formulation for a number of problems related to geophysical exploration.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bashinov, Aleksei V; Gonoskov, Arkady A; Kim, A V
2013-04-30
A comparative analysis is performed of the electron emission characteristics as the electrons move in laser fields with ultra-relativistic intensity and different configurations corresponding to a plane or tightly focused wave. For a plane travelling wave, analytical expressions are derived for the emission characteristics, and it is shown that the angular distribution of the radiation intensity changes qualitatively even when the wave intensity is much less than that in the case of the radiation-dominated regime. An important conclusion is drawn that the electrons in a travelling wave tend to synchronised motion under the radiation reaction force. The characteristic features of the motion of electrons are found in a converging dipole wave, associated with the curvature of the phase front and nonuniformity of the field distribution. The values of the maximum achievable longitudinal momenta of electrons accelerated to the centre, as well as their distribution function, are determined. The existence of quasi-periodic trajectories near the focal region of the dipole wave is shown, and the characteristics of the emission of both accelerated and oscillating electrons are analysed. (extreme light fields and their applications)
NASA Technical Reports Server (NTRS)
Kutepov, A. A.; Feofilov, A. G.; Manuilova, R. O.; Yankovsky, V. A.; Rezac, L.; Pesnell, W. D.; Goldberg, R. A.
2008-01-01
The Accelerated Lambda Iteration (ALI) technique was developed in stellar astrophysics at the beginning of the 1990s for solving the non-LTE radiative transfer problem in atomic lines and multiplets in stellar atmospheres. It was later successfully applied to modeling the non-LTE emissions and radiative cooling/heating in the vibrational-rotational bands of molecules in planetary atmospheres. Like standard lambda iteration, ALI operates with matrices of minimal dimension. However, it provides a higher convergence rate and better stability by removing from the iterative process the photons trapped in the optically thick line cores. In the current ALI-ARMS (ALI for Atmospheric Radiation and Molecular Spectra) code version, additional acceleration is provided by utilizing the opacity distribution function (ODF) approach and "decoupling". The former allows replacing the band branches by single lines of special shape, whereas the latter treats the non-linearity caused by strong near-resonant vibration-vibrational level coupling without additionally linearizing the statistical equilibrium equations. The latest applications of the code for the non-LTE diagnostics of molecular band emissions of the Earth's and Martian atmospheres, as well as for non-LTE IR cooling/heating calculations, are discussed.
Ultrafast superpixel segmentation of large 3D medical datasets
NASA Astrophysics Data System (ADS)
Leblond, Antoine; Kauffmann, Claude
2016-03-01
Even with recent hardware improvements, superpixel segmentation of large 3D medical images at interactive speed (<500 ms) remains a challenge. We will describe methods to achieve such performance using a GPU-based hybrid framework implementing wavefront propagation and cellular automata resolution. Tasks will be scheduled in blocks (work units) using a wavefront propagation strategy, therefore allowing sparse scheduling. Because work units have been designed to be spatially cohesive, the fast Thread Group Shared Memory can be used and reused through a Gauss-Seidel-like acceleration. The work unit partitioning scheme will however vary on odd- and even-numbered iterations to reduce convergence barriers. Synchronization will be ensured by an 8-step 3D variant of the traditional Red-Black Ordering scheme. An attack model and early termination will also be described and implemented as additional acceleration techniques. Using our hybrid framework and typical operating parameters, we were able to compute the superpixels of a high-resolution 512x512x512 aortic angioCT scan in 283 ms using an AMD R9 290X GPU. We achieved a 22.3X speed-up factor compared to the published reference GPU implementation.
Multistage Schemes with Multigrid for Euler and Navier-Stokes Equations: Components and Analysis
NASA Technical Reports Server (NTRS)
Swanson, R. C.; Turkel, Eli
1997-01-01
A class of explicit multistage time-stepping schemes with centered spatial differencing and multigrids are considered for the compressible Euler and Navier-Stokes equations. These schemes are the basis for a family of computer programs (flow codes with multigrid (FLOMG) series) currently used to solve a wide range of fluid dynamics problems, including internal and external flows. In this paper, the components of these multistage time-stepping schemes are defined, discussed, and in many cases analyzed to provide additional insight into their behavior. Special emphasis is given to numerical dissipation, stability of Runge-Kutta schemes, and the convergence acceleration techniques of multigrid and implicit residual smoothing. Both the Baldwin and Lomax algebraic equilibrium model and the Johnson and King one-half equation nonequilibrium model are used to establish turbulence closure. Implementation of these models is described.
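The explicit multistage time-stepping schemes referred to above advance each pseudo-time step through a short sequence of stages of the form u^(k) = u^(0) − α_k Δt R(u^(k−1)). A minimal scalar sketch (the stage coefficients and test problem are illustrative, not the FLOMG values):

```python
def multistage_step(residual_fn, u, dt, alphas=(0.6, 0.6, 1.0)):
    """One explicit multistage (Runge-Kutta-like) pseudo-time step:
    each stage restarts from u^(0) and uses the latest residual."""
    u0 = u
    for a in alphas:
        u = u0 - a * dt * residual_fn(u)
    return u

# Drive du/dtau = -(u - 1) to its steady state u = 1
u = 5.0
for _ in range(50):
    u = multistage_step(lambda v: v - 1.0, u, dt=0.5)
```

The stage coefficients are chosen not for time accuracy but to maximize the stability region and damping of the spatial operator's error modes, which is what makes such schemes effective smoothers inside a multigrid cycle.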
Prediction of Business Jet Airloads Using The Overflow Navier-Stokes Code
NASA Technical Reports Server (NTRS)
Bounajem, Elias; Buning, Pieter G.
2001-01-01
The objective of this work is to evaluate the application of Navier-Stokes computational fluid dynamics technology for the purpose of predicting off-design-condition airloads on a business jet configuration in the transonic regime. The NASA Navier-Stokes flow solver OVERFLOW, with its Chimera overset grid capability, choice of several numerical schemes, and convergence acceleration techniques, was selected for this work. A set of scripts compiled to reduce the time required for the grid generation process is described. Several turbulence models are evaluated in the presence of separated flow regions on the wing. Computed results are compared to available wind tunnel data for two Mach numbers and a range of angles-of-attack. Comparisons of wing surface pressure from numerical simulation and wind tunnel measurements show good agreement up to fairly high angles-of-attack.
NASA Technical Reports Server (NTRS)
Sakai, Jun-Ichi
1992-01-01
We present a model for high-energy solar flares to explain prompt proton and electron acceleration, which occurs around a moving X-point magnetic field during the implosion phase of the current sheet. We derive the electromagnetic fields during the strong implosion phase of the current sheets, which is driven by the converging flow derived from the magnetohydrodynamic equations. It is shown that both protons and electrons can be promptly (within 1 second) accelerated to approximately 70 MeV and approximately 200 MeV, respectively. This acceleration mechanism is applicable to the impulsive phase of gradual gamma-ray and proton flares (gradual GR/P flares), which have been called two-ribbon flares.
3D shape reconstruction of specular surfaces by using phase measuring deflectometry
NASA Astrophysics Data System (ADS)
Zhou, Tian; Chen, Kun; Wei, Haoyun; Li, Yan
2016-10-01
The existing estimation methods for recovering height information from surface gradients are mainly divided into Modal and Zonal techniques. Since specular surfaces used in industry often have complex and large areas, consideration must be given both to the improvement of measurement accuracy and to the acceleration of on-line processing speed, which is beyond the capacity of the existing estimations. Incorporating the Modal and Zonal approaches into a unifying scheme, we introduce an improved 3D shape reconstruction method for specular surfaces based on Phase Measuring Deflectometry in this paper. The Modal estimation is first implemented to derive the coarse height information of the measured surface as initial iteration values. Then the real shape can be recovered utilizing a modified Zonal wave-front reconstruction algorithm. By combining the advantages of the Modal and Zonal estimations, the proposed method simultaneously achieves consistently high accuracy and rapid convergence. Moreover, the iterative process based on an advanced successive over-relaxation technique shows a consistent rejection of measurement errors, guaranteeing stability and robustness in practical applications. Both simulations and experimental measurements demonstrate the validity and efficiency of the proposed improved method. According to the experimental results, the computation time decreases by approximately 74.92% in contrast to the Zonal estimation, and the surface error is about 6.68 μm with reconstruction points of 391×529 pixels for an experimentally measured spherical mirror. In general, this method can be conducted with fast convergence speed and high accuracy, providing an efficient, stable and real-time approach for the shape reconstruction of specular surfaces in practical situations.
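The successive over-relaxation (SOR) technique used to accelerate the Zonal iteration above can be sketched for a generic linear system: each Gauss-Seidel update is scaled by a relaxation factor ω between 1 and 2 to speed convergence (the test matrix and ω here are illustrative, not from the paper):

```python
import numpy as np

def sor(A, b, omega=1.5, tol=1e-10, max_iter=500):
    """Successive over-relaxation for A x = b (A must have a nonzero diagonal).
    omega = 1 recovers plain Gauss-Seidel; 1 < omega < 2 over-relaxes."""
    n = len(b)
    x = np.zeros(n)
    for _ in range(max_iter):
        x_old = x.copy()
        for i in range(n):
            # Use already-updated entries x[:i] and old entries x_old[i+1:]
            sigma = A[i, :i] @ x[:i] + A[i, i + 1:] @ x_old[i + 1:]
            x[i] = (1 - omega) * x[i] + omega * (b[i] - sigma) / A[i, i]
        if np.linalg.norm(x - x_old) < tol:
            break
    return x

# A 1-D Poisson-like tridiagonal system, typical of wavefront reconstruction
A = np.diag(2.0 * np.ones(5)) - np.diag(np.ones(4), 1) - np.diag(np.ones(4), -1)
b = np.ones(5)
x = sor(A, b)
```

For symmetric positive-definite systems SOR converges for any 0 < ω < 2, and a well-chosen ω can reduce the iteration count dramatically relative to Gauss-Seidel.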
An Upwind Multigrid Algorithm for Calculating Flows on Unstructured Grids
NASA Technical Reports Server (NTRS)
Bonhaus, Daryl L.
1993-01-01
An algorithm is described that calculates inviscid, laminar, and turbulent flows on triangular meshes with an upwind discretization. A brief description of the base solver and the multigrid implementation is given, followed by results that consist mainly of convergence rates for inviscid and viscous flows over a NACA four-digit airfoil section. The results show that multigrid does accelerate convergence when the same relaxation parameters that yield good single-grid performance are used; however, larger gains in performance can be realized by doing less work in the relaxation scheme.
Model-independent particle accelerator tuning
Scheinker, Alexander; Pang, Xiaoying; Rybarcyk, Larry
2013-10-21
We present a new model-independent dynamic feedback technique, rotation rate tuning, for automatically and simultaneously tuning coupled components of uncertain, complex systems. The main advantages of the method are: 1) it has the ability to handle unknown, time-varying systems; 2) it gives known bounds on parameter update rates; 3) we give an analytic proof of its convergence and its stability; and 4) it has a simple digital implementation through a control system such as the Experimental Physics and Industrial Control System (EPICS). Because this technique is model independent it may be useful as a real-time, in-hardware, feedback-based optimization scheme for uncertain and time-varying systems. In particular, it is robust enough to handle uncertainty due to coupling, thermal cycling, misalignments, and manufacturing imperfections. As a result, it may be used as a fine-tuning supplement for existing accelerator tuning/control schemes. We present multi-particle simulation results demonstrating the scheme’s ability to simultaneously adaptively adjust the set points of twenty-two quadrupole magnets and two RF buncher cavities in the Los Alamos Neutron Science Center Linear Accelerator’s transport region, while the beam properties and RF phase shift are continuously varying. The tuning is based only on beam current readings, without knowledge of particle dynamics. We also present an outline of how to implement this general scheme in software for optimization, and in hardware for feedback-based control/tuning, for a wide range of systems.
Lagardère, Louis; Lipparini, Filippo; Polack, Étienne; Stamm, Benjamin; Cancès, Éric; Schnieders, Michael; Ren, Pengyu; Maday, Yvon; Piquemal, Jean-Philip
2014-02-28
In this paper, we present a scalable and efficient implementation of point dipole-based polarizable force fields for molecular dynamics (MD) simulations with periodic boundary conditions (PBC). The Smooth Particle-Mesh Ewald technique is combined with two optimal iterative strategies, namely, a preconditioned conjugate gradient solver and a Jacobi solver in conjunction with the Direct Inversion in the Iterative Subspace for convergence acceleration, to solve the polarization equations. We show that both solvers exhibit very good parallel performance and overall very competitive timings in an energy-force computation needed to perform a MD step. Various tests on large systems are provided in the context of the polarizable AMOEBA force field as implemented in the newly developed Tinker-HP package, which is the first implementation for a polarizable model making large scale experiments for massively parallel PBC point dipole models possible. We show that using a large number of cores offers a significant acceleration of the overall process involving the iterative methods within the context of SPME and a noticeable improvement of the memory management, giving access to very large systems (hundreds of thousands of atoms) as the algorithm naturally distributes the data on different cores. Coupled with advanced MD techniques, gains ranging from 2 to 3 orders of magnitude in time are now possible compared to non-optimized, sequential implementations, giving new directions for polarizable molecular dynamics in periodic boundary conditions using massively parallel implementations.
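One of the two solver strategies named above, conjugate gradient with a Jacobi (diagonal) preconditioner, can be sketched generically; this is a textbook PCG on a small dense SPD system, not the Tinker-HP implementation, and all names and sizes are illustrative:

```python
import numpy as np

def pcg(A, b, M_inv_diag, tol=1e-10, max_iter=200):
    """Conjugate gradient with a diagonal (Jacobi) preconditioner M = diag(A):
    each iteration applies z = M^{-1} r elementwise, which is cheap and
    parallelizes trivially."""
    x = np.zeros_like(b)
    r = b - A @ x
    z = M_inv_diag * r
    p = z.copy()
    rz = r @ z
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol:
            break
        z = M_inv_diag * r
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x

# Symmetric positive-definite test system
rng = np.random.default_rng(0)
B = rng.standard_normal((8, 8))
A = B @ B.T + 8 * np.eye(8)
b = rng.standard_normal(8)
x = pcg(A, b, 1.0 / np.diag(A))
```

In the polarization setting the matrix-vector product A @ p is the expensive dipole-field evaluation, so reducing the iteration count via preconditioning translates directly into wall-clock savings per MD step.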
Algorithms for accelerated convergence of adaptive PCA.
Chatterjee, C; Kang, Z; Roychowdhury, V P
2000-01-01
We derive and discuss new adaptive algorithms for principal component analysis (PCA) that are shown to converge faster than the traditional PCA algorithms due to Oja, Sanger, and Xu. It is well known that traditional PCA algorithms that are derived by using gradient descent on an objective function are slow to converge. Furthermore, the convergence of these algorithms depends on appropriate choices of the gain sequences. Since online applications demand faster convergence and an automatic selection of gains, we present new adaptive algorithms to solve these problems. We first present an unconstrained objective function, which can be minimized to obtain the principal components. We derive adaptive algorithms from this objective function by using: 1) gradient descent; 2) steepest descent; 3) conjugate direction; and 4) Newton-Raphson methods. Although gradient descent produces Xu's LMSER algorithm, the steepest descent, conjugate direction, and Newton-Raphson methods produce new adaptive algorithms for PCA. We also provide a discussion on the landscape of the objective function, and present a global convergence proof of the adaptive gradient descent PCA algorithm using stochastic approximation theory. Extensive experiments with stationary and nonstationary multidimensional Gaussian sequences show faster convergence of the new algorithms over the traditional gradient descent methods. We also compare the steepest descent adaptive algorithm with state-of-the-art methods on stationary and nonstationary sequences.
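For context, the classical gradient-style adaptive PCA baseline that such work improves upon is Oja's single-neuron rule; a minimal sketch for extracting the first principal component (the learning rate and synthetic data are illustrative, not from the paper):

```python
import numpy as np

def oja_pca(samples, eta=0.005):
    """Oja's rule: online stochastic-gradient estimate of the first
    principal component. The update w += eta * y * (x - y * w) keeps
    ||w|| near 1 while rotating w toward the dominant eigenvector."""
    w = samples[0] / np.linalg.norm(samples[0])
    for x in samples:
        y = w @ x
        w += eta * y * (x - y * w)
    return w / np.linalg.norm(w)

# Gaussian data with dominant variance along the first axis
rng = np.random.default_rng(1)
data = rng.standard_normal((5000, 2)) * np.array([3.0, 0.5])
w = oja_pca(data)
```

The fixed gain eta illustrates the sensitivity the abstract mentions: too large and the iterate oscillates, too small and convergence stalls, which is what motivates the adaptive gain selection of the newer algorithms.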
Numerical modeling of a vortex stabilized arcjet
NASA Astrophysics Data System (ADS)
Pawlas, Gary Edward
Arcjet thrusters are being actively considered for use in Earth orbit maneuvering applications. Satellite station-keeping is an example of a maneuvering application requiring the low thrust and high specific impulse of an arcjet. Experimental studies are currently the chief means of determining an optimal thruster configuration. Earlier numerical studies have failed to include all of the effects found in typical arcjets, including complex geometries, viscosity and swirling flow. Arcjet geometries are large area ratio converging-diverging nozzles with centerbodies in the subsonic portion of the nozzle. The nozzle walls serve as the anode while the centerbody functions as the cathode. Viscous effects are important because the Reynolds number, based on the throat radius, is typically less than 1,000. Experimental studies have shown that a swirl or circumferential velocity component stabilizes a constricted arc. The equations that govern the flow through a constricted arcjet thruster are described. An assumption that the flowfield is in local thermodynamic equilibrium leads to a single-fluid plasma temperature model. An order of magnitude analysis reveals that the governing fluid mechanics equations are uncoupled from the electromagnetic field equations. A numerical method is developed to solve the governing fluid mechanics equations, the Thin Layer Navier-Stokes equations. A coordinate transformation is used in deriving the governing equations to simplify the application of boundary conditions in complex geometries. An axisymmetric formulation is employed to include the swirl velocity component as well as the axial and radial velocity components. The numerical method is an implicit finite-volume technique and allows for large time steps to reach a converged steady-state solution. The inviscid fluxes are flux-split, and Gauss-Seidel line relaxation is used to accelerate convergence.
Converging-diverging nozzles with exit-to-throat area ratios up to 100:1 and annular nozzles were examined. Comparisons with experimental data and previous numerical results showed excellent agreement. Quantities examined included Mach number and static wall pressure distributions, and oblique shock structures.
An Enhanced Differential Evolution Algorithm Based on Multiple Mutation Strategies.
Xiang, Wan-li; Meng, Xue-lei; An, Mei-qing; Li, Yin-zhen; Gao, Ming-xia
2015-01-01
Differential evolution (DE) is a simple yet efficient metaheuristic for global optimization over continuous spaces. However, standard DE suffers from premature convergence, especially in DE/best/1/bin. In order to take advantage of the direction guidance information of the best individual in DE/best/1/bin while avoiding local traps, an enhanced differential evolution algorithm based on multiple mutation strategies, named EDE, is proposed in this paper. The EDE algorithm integrates an initialization technique, opposition-based learning initialization, for improving the initial solution quality; a new combined mutation strategy composed of DE/current/1/bin together with DE/pbest/1/bin for accelerating standard DE and preventing DE from clustering around the global best individual; and a perturbation scheme for further avoiding premature convergence. In addition, we introduce two linear time-varying functions, which are used to decide which solution search equation is chosen at the phases of mutation and perturbation, respectively. Experimental results on twenty-five benchmark functions show that EDE is far better than standard DE. In further comparisons, EDE is compared with five other state-of-the-art approaches, and the results show that EDE is still superior to or at least equal to these methods on most of the benchmark functions.
Fortmeier, Dirk; Mastmeyer, Andre; Schröder, Julian; Handels, Heinz
2016-01-01
This study presents a new visuo-haptic virtual reality (VR) training and planning system for percutaneous transhepatic cholangio-drainage (PTCD) based on partially segmented virtual patient models. We only use partially segmented image data instead of a full segmentation and circumvent the necessity of surface or volume mesh models. Haptic interaction with the virtual patient during virtual palpation, ultrasound probing and needle insertion is provided. Furthermore, the VR simulator includes X-ray and ultrasound simulation for image-guided training. The visualization techniques are GPU-accelerated by implementation in CUDA and include real-time volume deformations computed on the grid of the image data. Computation on the image grid enables straightforward integration of the deformed image data into the visualization components. To provide shorter rendering times, the performance of the volume deformation algorithm is improved by a multigrid approach. To evaluate the VR training system, a user evaluation has been performed and deformation algorithms are analyzed in terms of convergence speed with respect to a fully converged solution. The user evaluation shows positive results with increased user confidence after a training session. It is shown that using partially segmented patient data and direct volume rendering is suitable for the simulation of needle insertion procedures such as PTCD.
Higher-order finite-difference formulation of periodic Orbital-free Density Functional Theory
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ghosh, Swarnava; Suryanarayana, Phanish, E-mail: phanish.suryanarayana@ce.gatech.edu
2016-02-15
We present a real-space formulation and higher-order finite-difference implementation of periodic Orbital-free Density Functional Theory (OF-DFT). Specifically, utilizing a local reformulation of the electrostatic and kernel terms, we develop a generalized framework for performing OF-DFT simulations with different variants of the electronic kinetic energy. In particular, we propose a self-consistent field (SCF) type fixed-point method for calculations involving linear-response kinetic energy functionals. In this framework, evaluation of both the electronic ground-state and forces on the nuclei are amenable to computations that scale linearly with the number of atoms. We develop a parallel implementation of this formulation using the finite-difference discretization. We demonstrate that higher-order finite-differences can achieve relatively large convergence rates with respect to mesh-size in both the energies and forces. Additionally, we establish that the fixed-point iteration converges rapidly, and that it can be further accelerated using extrapolation techniques like Anderson's mixing. We validate the accuracy of the results by comparing the energies and forces with plane-wave methods for selected examples, including the vacancy formation energy in Aluminum. Overall, the suitability of the proposed formulation for scalable high performance computing makes it an attractive choice for large-scale OF-DFT calculations consisting of thousands of atoms.
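The Anderson-mixing acceleration mentioned for the SCF fixed-point iteration has a compact generic form. The sketch below is a standard type-II Anderson scheme applied to a toy linear contraction, not the paper's OF-DFT implementation; the history depth m and tolerance are illustrative.

```python
import numpy as np

def anderson_mixing(g, x0, m=5, tol=1e-10, maxit=200):
    """Type-II Anderson acceleration for the fixed point x = g(x)."""
    x = np.asarray(x0, float)
    gs, rs = [], []                          # histories of g(x) and residuals
    for k in range(maxit):
        gx = np.asarray(g(x), float)
        r = gx - x                           # fixed-point residual
        gs.append(gx); rs.append(r)
        if np.linalg.norm(r) < tol:
            return x, k
        if len(rs) > 1:
            dR = np.column_stack([rs[i + 1] - rs[i] for i in range(len(rs) - 1)])
            dG = np.column_stack([gs[i + 1] - gs[i] for i in range(len(gs) - 1)])
            # least-squares mixing coefficients over the residual differences
            gamma, *_ = np.linalg.lstsq(dR, r, rcond=None)
            x = gx - dG @ gamma
        else:
            x = gx                           # plain fixed-point step to seed history
        gs, rs = gs[-(m + 1):], rs[-(m + 1):]
    return x, maxit

# toy contraction: x <- A x + b, with known fixed point (I - A)^{-1} b
A = np.array([[0.5, 0.2], [0.1, 0.4]])
b = np.array([1.0, 2.0])
x_fix, iters = anderson_mixing(lambda x: A @ x + b, np.zeros(2))
```

For a linear map, Anderson mixing recovers the fixed point after only a few iterations, which mirrors the rapid SCF convergence reported in the abstract.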
NASA Astrophysics Data System (ADS)
Orkoulas, Gerassimos; Panagiotopoulos, Athanassios Z.
1994-07-01
In this work, we investigate the liquid-vapor phase transition of the restricted primitive model of ionic fluids. We show that at the low temperatures where the phase transition occurs, the system cannot be studied by conventional molecular simulation methods because convergence to equilibrium is slow. To accelerate convergence, we propose cluster Monte Carlo moves capable of moving more than one particle at a time. We then address the issue of charged particle transfers in grand canonical and Gibbs ensemble Monte Carlo simulations, for which we propose a biased particle insertion/destruction scheme capable of sampling short interparticle distances. We compute the chemical potential for the restricted primitive model as a function of temperature and density from grand canonical Monte Carlo simulations and the phase envelope from Gibbs Monte Carlo simulations. Our calculated phase coexistence curve is in agreement with recent results of Caillol obtained on the four-dimensional hypersphere and our own earlier Gibbs ensemble simulations with single-ion transfers, with the exception of the critical temperature, which is lower in the current calculations. Our best estimates for the critical parameters are T*c=0.053, ρ*c=0.025. We conclude with possible future applications of the biased techniques developed here for phase equilibrium calculations for ionic fluids.
A simplex method for the orbit determination of maneuvering satellites
NASA Astrophysics Data System (ADS)
Chen, JianRong; Li, JunFeng; Wang, XiJing; Zhu, Jun; Wang, DanNa
2018-02-01
A simplex method of orbit determination (SMOD) is presented to solve the problem of orbit determination for maneuvering satellites subject to small and continuous thrust. The objective function is established as the sum of the nth powers of the observation errors based on global positioning satellite (GPS) data. The convergence behavior of the proposed method is analyzed using a range of initial orbital parameter errors and n values to ensure the rapid and accurate convergence of the SMOD. For an uncontrolled satellite, the orbit obtained by the SMOD provides a position error compared with GPS data that is commensurate with that obtained by the least squares technique. For low Earth orbit satellite control, the precision of the acceleration produced by a small pulse thrust is less than 0.1% compared with the calibrated value. The orbit obtained by the SMOD is also compared with weak GPS data for a geostationary Earth orbit satellite over several days. The results show that the position accuracy is within 12.0 m. The working efficiency of the electric propulsion is about 67% of the designed value. The analyses provide guidance for subsequent satellite control. The method is suitable for orbit determination of maneuvering satellites subject to small and continuous thrust.
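The core idea, minimizing the sum of the nth powers of observation errors with a simplex search, can be illustrated on a toy problem. The linear "orbit model" and the minimal Nelder-Mead implementation below (with a simplified contraction step) are stand-ins for the paper's satellite dynamics and its SMOD code.

```python
import numpy as np

def nelder_mead(f, x0, step=0.5, tol=1e-12, maxit=2000):
    """Minimal Nelder-Mead simplex search (reflect/expand/contract/shrink)."""
    n = len(x0)
    simplex = [np.asarray(x0, float)]
    for i in range(n):                       # initial simplex around x0
        v = simplex[0].copy(); v[i] += step
        simplex.append(v)
    fvals = [f(v) for v in simplex]
    for _ in range(maxit):
        order = np.argsort(fvals)
        simplex = [simplex[i] for i in order]
        fvals = [fvals[i] for i in order]
        if fvals[-1] - fvals[0] < tol:       # simplex has collapsed in f
            break
        centroid = np.mean(simplex[:-1], axis=0)
        xr = centroid + (centroid - simplex[-1])            # reflection
        fr = f(xr)
        if fr < fvals[0]:
            xe = centroid + 2.0 * (centroid - simplex[-1])  # expansion
            fe = f(xe)
            simplex[-1], fvals[-1] = (xe, fe) if fe < fr else (xr, fr)
        elif fr < fvals[-2]:
            simplex[-1], fvals[-1] = xr, fr
        else:
            xc = centroid + 0.5 * (simplex[-1] - centroid)  # contraction
            fc = f(xc)
            if fc < fvals[-1]:
                simplex[-1], fvals[-1] = xc, fc
            else:                                           # shrink toward best
                simplex = [simplex[0]] + [simplex[0] + 0.5 * (v - simplex[0])
                                          for v in simplex[1:]]
                fvals = [fvals[0]] + [f(v) for v in simplex[1:]]
    return simplex[0]

# objective: sum of n-th powers of observation errors for a toy linear model
t = np.linspace(0.0, 10.0, 50)
obs = 3.0 * t + 1.0                          # synthetic noiseless observations
J = lambda p, n=2: np.sum(np.abs(obs - (p[0] * t + p[1])) ** n)
p = nelder_mead(J, np.array([0.0, 0.0]))
```

The exponent n plays the same role as in the abstract: n = 2 reproduces least squares, while other n values change how strongly outlying observations are weighted.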
NASA Astrophysics Data System (ADS)
Mönkölä, Sanna
2013-06-01
This study considers developing numerical solution techniques for computer simulations of time-harmonic fluid-structure interaction between acoustic and elastic waves. The focus is on the efficiency of an iterative solution method based on a controllability approach and spectral elements. We concentrate on the model in which the acoustic waves in the fluid domain are modeled by using the velocity potential and the elastic waves in the structure domain are modeled by using displacement. Traditionally, the complex-valued time-harmonic equations are used for solving time-harmonic problems. Instead, we focus on finding periodic solutions without solving the time-harmonic problems directly. The time-dependent equations can be simulated with respect to time until a time-harmonic solution is reached, but this approach suffers from poor convergence. To overcome this challenge, we follow the approach first suggested and developed for the acoustic wave equations by Bristeau, Glowinski, and Périaux. Thus, we accelerate the convergence rate by employing a controllability method. The problem is formulated as a least-squares optimization problem, which is solved with the conjugate gradient (CG) algorithm. Computation of the gradient of the functional is done directly for the discretized problem. A graph-based multigrid method is used for preconditioning the CG algorithm.
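The inner solver in the least-squares controllability formulation is the textbook conjugate gradient iteration. A generic unpreconditioned sketch on a random SPD system is given below; the paper additionally preconditions CG with a graph-based multigrid, which is not reproduced here.

```python
import numpy as np

def conjugate_gradient(A, b, x0=None, tol=1e-10, maxit=None):
    """Textbook CG for a symmetric positive definite system A x = b."""
    n = len(b)
    x = np.zeros(n) if x0 is None else np.asarray(x0, float)
    r = b - A @ x                      # initial residual
    p = r.copy()                       # initial search direction
    rs = r @ r
    for _ in range(maxit or 5 * n):
        Ap = A @ p
        alpha = rs / (p @ Ap)          # exact line search along p
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p      # A-conjugate direction update
        rs = rs_new
    return x

rng = np.random.default_rng(1)
M = rng.standard_normal((30, 30))
A = M @ M.T + 30.0 * np.eye(30)        # well-conditioned SPD test matrix
b = rng.standard_normal(30)
x = conjugate_gradient(A, b)
```

In exact arithmetic CG terminates in at most n iterations; a good preconditioner, such as the multigrid mentioned in the abstract, reduces the iteration count far below that.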
NASA Astrophysics Data System (ADS)
Zhang, Chuang; Guo, Zhaoli; Chen, Songze
2017-12-01
An implicit kinetic scheme is proposed to solve the stationary phonon Boltzmann transport equation (BTE) for multiscale heat transfer problems. Compared to the conventional discrete ordinate method, the present method employs a macroscopic equation to accelerate the convergence in the diffusive regime. The macroscopic equation can be taken as a moment equation for the phonon BTE. The heat flux in the macroscopic equation is evaluated from the nonequilibrium distribution function in the BTE, while the equilibrium state in the BTE is determined by the macroscopic equation. These two processes exchange information across scales, such that the method is applicable to problems with a wide range of Knudsen numbers. Implicit discretization is implemented to solve both the macroscopic equation and the BTE. In addition, a memory reduction technique, originally developed for the stationary kinetic equation, is extended to the phonon BTE. Numerical comparisons show that the present scheme can predict reasonable results in both the ballistic and diffusive regimes with high efficiency, while the memory requirement is of the same order as solving the Fourier law of heat conduction. The excellent agreement with benchmarks and the rapid convergence history show that the proposed macro-micro coupling is a feasible approach to multiscale heat transfer problems.
Acceleration of Monte Carlo SPECT simulation using convolution-based forced detection
NASA Astrophysics Data System (ADS)
de Jong, H. W. A. M.; Slijpen, E. T. P.; Beekman, F. J.
2001-02-01
Monte Carlo (MC) simulation is an established tool to calculate photon transport through tissue in Emission Computed Tomography (ECT). Since the first appearance of MC a large variety of variance reduction techniques (VRT) have been introduced to speed up these notoriously slow simulations. One example of a very effective and established VRT is known as forced detection (FD). In standard FD the path from the photon's scatter position to the camera is chosen stochastically from the appropriate probability density function (PDF), modeling the distance-dependent detector response. In order to speed up MC the authors propose a convolution-based FD (CFD) which involves replacing the sampling of the PDF by a convolution with a kernel which depends on the position of the scatter event. The authors validated CFD for parallel-hole Single Photon Emission Computed Tomography (SPECT) using a digital thorax phantom. Comparison of projections estimated with CFD and standard FD shows that both estimates converge to practically identical projections (maximum bias 0.9% of peak projection value), despite the slightly different photon paths used in CFD and standard FD. Projections generated with CFD converge, however, to a noise-free projection up to one or two orders of magnitude faster, which is extremely useful in many applications such as model-based image reconstruction.
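The substitution at the heart of CFD, replacing per-photon sampling of the distance-dependent detector-response PDF with a convolution, can be caricatured in one dimension. The geometry and kernel widths below are invented for illustration; the actual SPECT response model and calibration are in the paper.

```python
import numpy as np

def gaussian_kernel(sigma):
    r = max(1, int(3 * sigma))
    t = np.arange(-r, r + 1)
    k = np.exp(-0.5 * (t / sigma) ** 2)
    return k / k.sum()                       # normalized detector-response kernel

def cfd_project(layers, depths, sigma_per_unit=0.8):
    """Blur each scatter layer with a kernel whose width grows with its
    distance to the camera, then sum into one projection (1-D sketch)."""
    proj = np.zeros(layers.shape[1])
    for layer, d in zip(layers, depths):
        proj += np.convolve(layer, gaussian_kernel(sigma_per_unit * d), mode="same")
    return proj

layers = np.zeros((3, 41))
layers[:, 20] = 1.0                          # identical point sources at 3 depths
proj = cfd_project(layers, depths=[1.0, 2.0, 3.0])
```

Because every scatter event contributes its full blurred kernel instead of one sampled ray, the projection estimate is far less noisy per simulated photon, which is the source of the one-to-two-orders-of-magnitude speedup reported.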
Solarin, Sakiru Adebola; Gil-Alana, Luis Alberiko; Al-Mulali, Usama
2018-04-13
In this article, we have examined the hypothesis of convergence of renewable energy consumption in 27 OECD countries. However, instead of relying on classical techniques, which are based on the dichotomy between stationarity I(0) and nonstationarity I(1), we consider a more flexible approach based on fractional integration. We employ both parametric and semiparametric techniques. Using parametric methods, evidence of convergence is found in the cases of Mexico, Switzerland and Sweden along with the USA, Portugal, the Czech Republic, South Korea and Spain, and employing semiparametric approaches, we found evidence of convergence in all these eight countries along with Australia, France, Japan, Greece, Italy and Poland. For the remaining 13 countries, even though the orders of integration of the series are smaller than one in all cases except Germany, the confidence intervals are so wide that we cannot reject the hypothesis of unit roots thus not finding support for the hypothesis of convergence.
Semenov, Mikhail A; Terkel, Dmitri A
2003-01-01
This paper analyses the convergence of evolutionary algorithms using a technique that is based on a stochastic Lyapunov function and developed within martingale theory. This technique is used to investigate the convergence of a simple evolutionary algorithm with self-adaptation, which contains two types of parameters: fitness parameters, belonging to the domain of the objective function; and control parameters, responsible for the variation of fitness parameters. Although both parameters mutate randomly and independently, they converge to the "optimum" due to the direct (for fitness parameters) and indirect (for control parameters) selection. We show that the convergence velocity of the evolutionary algorithm with self-adaptation is asymptotically exponential, similar to the velocity of the optimal deterministic algorithm on the class of unimodal functions. Although some martingale inequalities have not been proved analytically, they have been numerically validated with 0.999 confidence using Monte-Carlo simulations.
Qing Liu; Zhihui Lai; Zongwei Zhou; Fangjun Kuang; Zhong Jin
2016-01-01
Low-rank matrix completion aims to recover a matrix from a small subset of its entries and has received much attention in the field of computer vision. Most existing methods formulate the task as a low-rank matrix approximation problem. The truncated nuclear norm has recently been proposed as a better approximation to the rank of a matrix than the nuclear norm. The corresponding optimization method, truncated nuclear norm regularization (TNNR), converges better than nuclear norm minimization-based methods. However, it is not robust to the number of subtracted singular values and requires a large number of iterations to converge. In this paper, a TNNR method based on weighted residual error (TNNR-WRE) for matrix completion and its extension model (ETNNR-WRE) are proposed. TNNR-WRE assigns different weights to the rows of the residual error matrix in an augmented Lagrange function to accelerate the convergence of the TNNR method. ETNNR-WRE is much more robust to the number of subtracted singular values than the TNNR-WRE, TNNR alternating direction method of multipliers, and TNNR accelerated proximal gradient with line search methods. Experimental results using both synthetic and real visual data sets show that the proposed TNNR-WRE and ETNNR-WRE methods perform better than the TNNR and Iteratively Reweighted Nuclear Norm (IRNN) methods.
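The rank surrogate these methods minimize is easy to state: the truncated nuclear norm is the sum of all but the r largest singular values, so it vanishes exactly on matrices of rank at most r. The definition is sketched below; the weighted-residual solver of the paper is not reproduced.

```python
import numpy as np

def truncated_nuclear_norm(X, r):
    """Sum of all but the r largest singular values of X."""
    s = np.linalg.svd(np.asarray(X, float), compute_uv=False)
    return s[r:].sum()

# a rank-one matrix: truncating one singular value leaves nothing
A = np.outer([1.0, 2.0, 3.0, 4.0], [1.0, 2.0, 3.0])
```

With r = 0 this reduces to the ordinary nuclear norm, which also penalizes the leading singular values and therefore biases the recovered matrix; truncation removes that bias for the dominant subspace.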
Convergence Acceleration of Runge-Kutta Schemes for Solving the Navier-Stokes Equations
NASA Technical Reports Server (NTRS)
Swanson, Roy C., Jr.; Turkel, Eli; Rossow, C.-C.
2007-01-01
The convergence of a Runge-Kutta (RK) scheme with multigrid is accelerated by preconditioning with a fully implicit operator. With the extended stability of the Runge-Kutta scheme, CFL numbers as high as 1000 can be used. The implicit preconditioner addresses the stiffness in the discrete equations associated with stretched meshes. This RK/implicit scheme is used as a smoother for multigrid. Fourier analysis is applied to determine damping properties. Numerical dissipation operators based on the Roe scheme, a matrix dissipation, and the CUSP scheme are considered in evaluating the RK/implicit scheme. In addition, the effect of the number of RK stages is examined. Both the numerical and computational efficiency of the scheme with the different dissipation operators are discussed. The RK/implicit scheme is used to solve the two-dimensional (2-D) and three-dimensional (3-D) compressible, Reynolds-averaged Navier-Stokes equations. Turbulent flows over an airfoil and wing at subsonic and transonic conditions are computed. The effects of the cell aspect ratio on convergence are investigated for Reynolds numbers between 5.7 x 10(exp 6) and 100 x 10(exp 6). It is demonstrated that the implicit preconditioner can reduce the computational time of a well-tuned standard RK scheme by a factor between four and ten.
A Fast and Accurate Algorithm for l1 Minimization Problems in Compressive Sampling (Preprint)
2013-01-22
However, updating u_{k+1} via the formulation of Step 2 in Algorithm 1 can be implemented through the use of the component-wise Gauss-Seidel iteration, which ... may accelerate the rate of convergence of the algorithm and therefore reduce the total CPU time consumed. The efficiency of component-wise Gauss-Seidel ... C. A. Micchelli, L. Shen, and Y. Xu, A proximity algorithm accelerated by Gauss-Seidel iterations for L1/TV denoising models, Inverse Problems, 28 (2012), p
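The component-wise Gauss-Seidel iteration referenced in the snippet has this generic form for a linear system: each component update immediately uses the freshest values of the others. This is a plain linear-system sweep, not the proximity-algorithm embedding of the cited work.

```python
import numpy as np

def gauss_seidel(A, b, x0=None, sweeps=100, tol=1e-12):
    """Component-wise Gauss-Seidel sweeps for A x = b."""
    n = len(b)
    x = np.zeros(n) if x0 is None else np.asarray(x0, float)
    for _ in range(sweeps):
        x_old = x.copy()
        for i in range(n):
            sigma = A[i] @ x - A[i, i] * x[i]   # uses already-updated components
            x[i] = (b[i] - sigma) / A[i, i]
        if np.linalg.norm(x - x_old, np.inf) < tol:
            break
    return x

# diagonally dominant test system, for which Gauss-Seidel converges
A = np.array([[4.0, 1.0, 0.0], [1.0, 4.0, 1.0], [0.0, 1.0, 4.0]])
b = np.array([1.0, 2.0, 3.0])
x = gauss_seidel(A, b)
```

Using fresh values within a sweep is what distinguishes Gauss-Seidel from Jacobi and is the source of the convergence acceleration the snippet refers to.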
The Effects of Dissipation and Coarse Grid Resolution for Multigrid in Flow Problems
NASA Technical Reports Server (NTRS)
Eliasson, Peter; Engquist, Bjoern
1996-01-01
The objective of this paper is to investigate the effects of the numerical dissipation and the resolution of the solution on coarser grids for multigrid with the Euler equation approximations. Convergence is accomplished by multi-stage explicit time-stepping to steady state, accelerated by FAS multigrid. A theoretical investigation is carried out for linear hyperbolic equations in one and two dimensions. The spectra reveal that, for stability and hence robustness of spatial discretizations with a small amount of numerical dissipation, the grid transfer operators have to be sufficiently accurate and the smoother of low temporal accuracy. Numerical results give grid-independent convergence in one dimension. For two-dimensional problems with a small amount of numerical dissipation, however, only a few grid levels contribute to an increased speed of convergence. This is explained by the small numerical dissipation leading to dispersion. Increasing the mesh density, and hence making the problem over-resolved, increases the number of mesh levels contributing to an increased speed of convergence. If the steady state equations are elliptic, all grid levels contribute to the convergence regardless of the mesh density.
Convergence Analysis of the Graph Allen-Cahn Scheme
2016-02-01
Luo, Xiyang; Bertozzi, Andrea L.
Graph partitioning problems have a wide range of ... optimization, convergence and monotonicity are shown for a class of schemes under a graph-independent timestep restriction. We also analyze the effects of spectral truncation, a common technique used to save computational cost. Convergence of the scheme with spectral truncation is also proved under a ...
Extrapolation methods for vector sequences
NASA Technical Reports Server (NTRS)
Smith, David A.; Ford, William F.; Sidi, Avram
1987-01-01
This paper derives, describes, and compares five extrapolation methods for accelerating convergence of vector sequences or transforming divergent vector sequences to convergent ones. These methods are the scalar epsilon algorithm (SEA), vector epsilon algorithm (VEA), topological epsilon algorithm (TEA), minimal polynomial extrapolation (MPE), and reduced rank extrapolation (RRE). MPE and RRE are first derived and proven to give the exact solution for the right 'essential degree' k. Then, Brezinski's (1975) generalization of the Shanks-Schmidt transform is presented; the generalized form leads from systems of equations to TEA. The necessary connections are then made with SEA and VEA. The algorithms are extended to the nonlinear case by cycling, the error analysis for MPE and VEA is sketched, and the theoretical support for quadratic convergence is discussed. Strategies for practical implementation of the methods are considered.
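Of the five methods, reduced rank extrapolation is the simplest to sketch. Below is a minimal RRE in the standard first/second-difference formulation, applied to a linear fixed-point iteration, for which k steps of history reproduce the limit exactly in exact arithmetic (k being the degree of the minimal polynomial of the iteration matrix); the test matrix and its 0.5 spectral-radius scaling are illustrative.

```python
import numpy as np

def rre(X):
    """Reduced rank extrapolation from iterates X = [x_0, ..., x_{k+1}]."""
    X = np.asarray(X, float)
    U = np.diff(X, axis=0).T              # first differences as columns
    W = np.diff(U, axis=1)                # second differences
    xi, *_ = np.linalg.lstsq(W, -U[:, 0], rcond=None)
    return X[0] + U[:, :-1] @ xi          # extrapolated limit

# linear fixed-point iteration x <- M x + c with known limit (I - M)^{-1} c
rng = np.random.default_rng(0)
M = rng.standard_normal((4, 4))
M *= 0.5 / np.max(np.abs(np.linalg.eigvals(M)))   # scale spectral radius to 0.5
c = rng.standard_normal(4)
X = [np.zeros(4)]
for _ in range(5):                        # six iterates give five differences
    X.append(M @ X[-1] + c)
s = rre(X)
```

The same routine also transforms a mildly divergent linear iteration (spectral radius slightly above one) into a convergent estimate, which is the second use case the abstract mentions.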
Newlands, Shawn D; Abbatematteo, Ben; Wei, Min; Carney, Laurel H; Luan, Hongge
2018-01-01
Roughly half of all vestibular nucleus neurons without eye movement sensitivity respond to both angular rotation and linear acceleration. Linear acceleration signals arise from otolith organs, and rotation signals arise from semicircular canals. In the vestibular nerve, these signals are carried by different afferents. Vestibular nucleus neurons represent the first point of convergence for these distinct sensory signals. This study systematically evaluated how rotational and translational signals interact in single neurons in the vestibular nuclei: multisensory integration at the first opportunity for convergence between these two independent vestibular sensory signals. Single-unit recordings were made from the vestibular nuclei of awake macaques during yaw rotation, translation in the horizontal plane, and combinations of rotation and translation at different frequencies. The overall response magnitude of the combined translation and rotation was generally less than the sum of the magnitudes in responses to the stimuli applied independently. However, we found that under conditions in which the peaks of the rotational and translational responses were coincident these signals were approximately additive. With presentation of rotation and translation at different frequencies, rotation was attenuated more than translation, regardless of which was at a higher frequency. These data suggest a nonlinear interaction between these two sensory modalities in the vestibular nuclei, in which coincident peak responses are proportionally stronger than other, off-peak interactions. These results are similar to those reported for other forms of multisensory integration, such as audio-visual integration in the superior colliculus. NEW & NOTEWORTHY This is the first study to systematically explore the interaction of rotational and translational signals in the vestibular nuclei through independent manipulation. 
The results of this study demonstrate nonlinear integration leading to maximum response amplitude when the timing and direction of peak rotational and translational responses are coincident.
Jiang, Wei; Roux, Benoît
2010-07-01
Free Energy Perturbation with Replica Exchange Molecular Dynamics (FEP/REMD) offers a powerful strategy to improve the convergence of free energy computations. In particular, it has been shown previously that a FEP/REMD scheme allowing random moves within an extended replica ensemble of thermodynamic coupling parameters "lambda" can improve the statistical convergence in calculations of absolute binding free energy of ligands to proteins [J. Chem. Theory Comput. 2009, 5, 2583]. In the present study, FEP/REMD is extended and combined with an accelerated MD simulations method based on Hamiltonian replica-exchange MD (H-REMD) to overcome the additional problems arising from the existence of kinetically trapped conformations within the protein receptor. In the combined strategy, each system with a given thermodynamic coupling factor lambda in the extended ensemble is further coupled with a set of replicas evolving on a biased energy surface with boosting potentials used to accelerate the inter-conversion among different rotameric states of the side chains in the neighborhood of the binding site. Exchanges are allowed to occur alternatively along the axes corresponding to the thermodynamic coupling parameter lambda and the boosting potential, in an extended dual array of coupled lambda- and H-REMD simulations. The method is implemented on the basis of new extensions to the REPDSTR module of the biomolecular simulation program CHARMM. As an illustrative example, the absolute binding free energy of p-xylene to the nonpolar cavity of the L99A mutant of T4 lysozyme was calculated. The tests demonstrate that the dual lambda-REMD and H-REMD simulation scheme greatly accelerates the configurational sampling of the rotameric states of the side chains around the binding pocket, thereby improving the convergence of the FEP computations.
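The exchange moves in any replica-exchange scheme rest on the Metropolis criterion. The temperature-exchange (parallel-tempering) form is sketched below as the generic case; the paper instead exchanges along the lambda and boosting-potential axes, with the corresponding energy terms in place of beta and E.

```python
import numpy as np

def swap_probability(E_i, E_j, beta_i, beta_j):
    """Metropolis acceptance probability for exchanging the configurations
    of replicas i and j (generic parallel-tempering form)."""
    delta = (beta_i - beta_j) * (E_i - E_j)
    return min(1.0, float(np.exp(delta)))
```

Accepted swaps let a configuration trapped at one condition escape via a neighboring replica, which is exactly how the dual lambda- and H-REMD array accelerates sampling of the side-chain rotamers.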
Maximum von Mises Stress in the Loading Environment of Mass Acceleration Curve
NASA Technical Reports Server (NTRS)
Glaser, Robert J.; Chen, Long Y.
2006-01-01
Method for calculating stress due to acceleration loading: 1) Part has been designed by FEA and hand calculation in one critical loading direction judged by the analyst; 2) Maximum stress can be due to loading in another direction; 3) Analysis procedure to be presented determines: a) The maximum von Mises stress at any point; and b) The direction of maximum loading associated with the "stress". Concept of Mass Acceleration Curves (MAC): 1) Developed by JPL to perform preliminary structural sizing (i.e. Mariners, Voyager, Galileo, Pathfinder, MER,...MSL); 2) Acceleration of physical masses are bounded by a curve; 3) G-levels of vibro-acoustic and transient environments; 4) Convergent process before the coupled loads cycle; and 5) Semi-empirical method to effectively bound the loads, not a simulation of the actual response.
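The quantity being maximized, von Mises stress as a function of loading direction, can be sketched by assuming the stress tensor responds linearly to the acceleration vector through three unit-load stress states. All numerical values below are invented for illustration; only the von Mises formula itself is standard.

```python
import numpy as np

def von_mises(s):
    """Von Mises stress of a 3x3 stress tensor (deviatoric formula)."""
    dev = s - np.trace(s) / 3.0 * np.eye(3)
    return np.sqrt(1.5 * np.sum(dev * dev))

# hypothetical unit-load stress states for unit accelerations along x, y, z
S_unit = [np.diag([120.0, 15.0, 0.0]),
          np.diag([10.0, 90.0, 5.0]),
          np.diag([0.0, 20.0, 60.0])]

def vm_for_direction(d):
    """Stress for a unit acceleration along d, by linear superposition."""
    d = np.asarray(d, float) / np.linalg.norm(d)
    return von_mises(sum(di * Si for di, Si in zip(d, S_unit)))

# coarse scan over the unit sphere to locate the worst-case loading direction
best = max(((th, ph) for th in np.linspace(0, np.pi, 37)
            for ph in np.linspace(0, 2 * np.pi, 73)),
           key=lambda a: vm_for_direction([np.sin(a[0]) * np.cos(a[1]),
                                           np.sin(a[0]) * np.sin(a[1]),
                                           np.cos(a[0])]))
```

This direction scan is the naive version of point 3 in the outline: the critical direction found this way need not coincide with the one the analyst judged critical.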
Globally convergent techniques in nonlinear Newton-Krylov
NASA Technical Reports Server (NTRS)
Brown, Peter N.; Saad, Youcef
1989-01-01
Some convergence theory is presented for nonlinear Krylov subspace methods. The basic idea of these methods is to use variants of Newton's iteration in conjunction with a Krylov subspace method for solving the Jacobian linear systems. These methods are variants of inexact Newton methods where the approximate Newton direction is taken from a subspace of small dimensions. The main focus is to analyze these methods when they are combined with global strategies such as linesearch techniques and model trust region algorithms. Most of the convergence results are formulated for projection onto general subspaces rather than just Krylov subspaces.
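The globalized Newton iteration under analysis can be sketched with a backtracking linesearch on the residual norm. For brevity the sketch solves the Jacobian system directly; the methods in the abstract would replace that solve with a Krylov method (e.g. GMRES) applied inexactly, and the test problem is illustrative.

```python
import numpy as np

def newton_linesearch(F, J, x0, tol=1e-10, maxit=50):
    """Newton's method globalized by backtracking on the merit ||F(x)||^2."""
    x = np.asarray(x0, float)
    for _ in range(maxit):
        Fx = F(x)
        if np.linalg.norm(Fx) < tol:
            break
        s = np.linalg.solve(J(x), -Fx)       # Newton direction (Krylov in practice)
        t, f0 = 1.0, Fx @ Fx
        # backtrack until a sufficient decrease in the merit function
        while np.sum(F(x + t * s) ** 2) > (1.0 - 1e-4 * t) * f0 and t > 1e-12:
            t *= 0.5
        x = x + t * s
    return x

# toy system: circle of radius 2 intersected with the line x0 = x1
F = lambda x: np.array([x[0] ** 2 + x[1] ** 2 - 4.0, x[0] - x[1]])
J = lambda x: np.array([[2.0 * x[0], 2.0 * x[1]], [1.0, -1.0]])
root = newton_linesearch(F, J, np.array([3.0, 1.0]))
```

The linesearch only shortens steps far from the solution; near the root full Newton steps are accepted and the quadratic convergence the theory addresses is recovered.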
NASA Astrophysics Data System (ADS)
Schmitz, Gunnar; Christiansen, Ove
2018-06-01
We study how geometry optimizations that rely on numerical gradients can be accelerated by means of Gaussian Process Regression (GPR). The GPR interpolates a local potential energy surface on which the structure is optimized. It is found to be efficient to combine results on a low computational level (HF or MP2) with the GPR-calculated gradient of the difference between the low level method and the target method, which in this study is a variant of explicitly correlated Coupled Cluster Singles and Doubles with perturbative Triples correction, CCSD(F12*)(T). Overall convergence is achieved if both the potential and the geometry are converged. Compared to numerical gradient-based algorithms, the number of required single point calculations is reduced. Although the interpolation introduces an error, the optimized structures are sufficiently close to the minimum of the target level of theory, meaning that the reference and predicted minima differ energetically only in the μEh regime.
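The low-level-plus-learned-difference idea can be sketched in one dimension: fit a GPR to the difference between a cheap and an expensive potential, then correct the cheap one. The two polynomial "potentials", kernel length scale, and jitter below are all invented stand-ins for the HF/MP2 and CCSD(F12*)(T) surfaces of the study.

```python
import numpy as np

def gpr_fit(X, y, length=0.3, noise=1e-10):
    """Plain GPR with a squared-exponential kernel; returns a predictor."""
    K = np.exp(-0.5 * ((X[:, None] - X[None, :]) / length) ** 2)
    alpha = np.linalg.solve(K + noise * np.eye(len(X)), y)   # kernel weights
    return lambda x: np.exp(-0.5 * ((x - X) / length) ** 2) @ alpha

low = lambda x: (x - 1.0) ** 2                 # cheap surrogate surface
high = lambda x: (x - 1.1) ** 2 + 0.05 * x     # expensive target surface

X_train = np.linspace(0.0, 2.0, 8)             # few expensive single points
delta = gpr_fit(X_train, high(X_train) - low(X_train))
corrected = lambda x: low(x) + delta(x)        # low level plus learned correction
```

Because the difference between two levels of theory is much smoother than either surface, few expensive single-point calculations suffice, which is the mechanism behind the reduced cost reported in the abstract.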
Diode magnetic-field influence on radiographic spot size
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ekdahl, Carl A. Jr.
2012-09-04
Flash radiography of hydrodynamic experiments driven by high explosives is a well-known diagnostic technique in use at many laboratories. The Dual-Axis Radiography for Hydrodynamic Testing (DARHT) facility at Los Alamos was developed for flash radiography of large hydrodynamic experiments. Two linear induction accelerators (LIAs) produce the bremsstrahlung radiographic source spots for orthogonal views of each experiment ('hydrotest'). The 2-kA, 20-MeV Axis-I LIA creates a single 60-ns radiography pulse. For time resolution of the hydrotest dynamics, the 1.7-kA, 16.5-MeV Axis-II LIA creates up to four radiography pulses by slicing them out of a longer pulse that has a 1.6-µs flattop. Both axes now routinely produce radiographic source spot sizes having full-width at half-maximum (FWHM) less than 1 mm. To further improve on the radiographic resolution, one must consider the major factors influencing the spot size: (1) Beam convergence at the final focus; (2) Beam emittance; (3) Beam canonical angular momentum; (4) Beam-motion blur; and (5) Beam-target interactions. Beam emittance growth and motion in the accelerators have been addressed by careful tuning. Defocusing by beam-target interactions has been minimized through tuning of the final focus solenoid for optimum convergence and other means. Finally, the beam canonical angular momentum is minimized by using a 'shielded source' of electrons. An ideal shielded source creates the beam in a region where the axial magnetic field is zero, and thus the canonical momentum is zero, since the beam is born with no mechanical angular momentum. It then follows from Busch's conservation theorem that the canonical angular momentum is minimized at the target, at least in principle. In the DARHT accelerators, the axial magnetic field at the cathode is minimized by using a 'bucking coil' solenoid with reverse polarity to cancel out whatever solenoidal beam transport field exists there.
This is imperfect in practice, because of radial variation of the total field across the cathode surface, solenoid misalignments, and long-term variability of solenoid fields for given currents. Therefore, it is useful to quantify the relative importance of canonical momentum in determining the focal spot, and to establish a systematic methodology for tuning the bucking coils for minimum spot size. That is the purpose of this article. Section II provides a theoretical foundation for understanding the relative importance of the canonical momentum. Section III describes the results of simulations used to quantify beam parameters, including the momentum, for each of the accelerators. Section IV compares the two accelerators, especially with respect to mis-tuned bucking coils. Finally, Section IV concludes with a methodology for optimizing the bucking coil settings.
Convergence of damped inertial dynamics governed by regularized maximally monotone operators
NASA Astrophysics Data System (ADS)
Attouch, Hedy; Cabot, Alexandre
2018-06-01
In a Hilbert space setting, we study the asymptotic behavior, as time t goes to infinity, of the trajectories of a second-order differential equation governed by the Yosida regularization of a maximally monotone operator with time-varying positive index λ (t). The dissipative and convergence properties are attached to the presence of a viscous damping term with positive coefficient γ (t). A suitable tuning of the parameters γ (t) and λ (t) makes it possible to prove the weak convergence of the trajectories towards zeros of the operator. When the operator is the subdifferential of a closed convex proper function, we estimate the rate of convergence of the values. These results are in line with the recent articles by Attouch-Cabot [3], and Attouch-Peypouquet [8]. In this last paper, the authors considered the case γ (t) = α/t, which is naturally linked to Nesterov's accelerated method. We unify, and often improve the results already present in the literature.
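The dynamics can be illustrated numerically in the special case where the operator is the gradient of a smooth convex function, so no Yosida regularization is needed. The sketch below integrates x'' + (alpha/t) x' + grad f(x) = 0 with a symplectic-Euler step, using the gamma(t) = alpha/t damping linked to Nesterov's method; the quadratic objective and all step parameters are illustrative.

```python
import numpy as np

def inertial_trajectory(grad, x0, alpha=3.1, h=0.01, steps=20000):
    """Integrate x'' + (alpha/t) x' + grad(x) = 0 by symplectic Euler."""
    x = np.asarray(x0, float)
    v = np.zeros_like(x)                     # initial velocity x'(1) = 0
    t = 1.0
    for _ in range(steps):
        v += h * (-(alpha / t) * v - grad(x))   # damped acceleration update
        x += h * v                               # position update with new v
        t += h
    return x

# convex test function f(x) = (x - 2)^2 / 2, whose gradient vanishes at x = 2
x = inertial_trajectory(lambda z: z - 2.0, np.array([10.0]))
```

The trajectory oscillates around the minimizer with an amplitude decaying like a power of t, consistent with the weak convergence toward zeros of the operator established in the paper; alpha > 3 is the regime highlighted in the cited Attouch-Peypouquet analysis.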
Warm Dense Matter: Another Application for Pulsed Power Hydrodynamics
2009-06-01
Pulsed power hydrodynamic techniques, such as large convergence liner compression of a large volume, modest density, low temperature plasma to...controlled than are similar high explosively powered hydrodynamic experiments. While the precision and controllability of gas-gun experiments is...well established, pulsed power techniques using imploding liners offer access to convergent conditions, difficult to obtain with guns – and essential
NASA Technical Reports Server (NTRS)
Barile, Ronald G.; Fogarty, Chris; Cantrell, Chris; Melton, Gregory S.
1994-01-01
NASA personnel at Kennedy Space Center's Material Science Laboratory have developed new environmentally sound precision cleaning and verification techniques for systems and components found at the center. This technology is required to replace existing methods traditionally employing CFC-113. The new patent-pending technique of precision cleaning verification is for large components of cryogenic fluid systems. These are stainless steel, sand cast valve bodies with internal surface areas ranging from 0.2 to 0.9 sq m. Extrapolation of this technique to components of even larger sizes (by orders of magnitude) is planned. Currently, the verification process is completely manual. In the new technique, a high velocity, low volume water stream impacts the part to be verified. This process is referred to as Breathing Air/Water Impingement and forms the basis for the Impingement Verification System (IVS). The system is unique in that a gas stream is used to accelerate the water droplets to high speeds. Water is injected into the gas stream in a small, continuous amount. The air/water mixture is then passed through a converging/diverging nozzle where the gas is accelerated to supersonic velocities. These droplets impart sufficient energy to the precision cleaned surface to place non-volatile residue (NVR) contaminants into suspension in the water. The sample water is collected and its NVR level is determined by total organic carbon (TOC) analysis at 880 C. The TOC, in ppm carbon, is used to establish the NVR level. A correlation between the present gravimetric CFC113 NVR and the IVS NVR is found from experimental sensitivity factors measured for various contaminants. The sensitivity has the units of ppm of carbon per mg/sq ft of contaminant. In this paper, the equipment is described and data are presented showing the development of the sensitivity factors from a test set including four NVRs impinged from witness plates of 0.05 to 0.75 sq m.
NASA Technical Reports Server (NTRS)
Barile, Ronald G.; Fogarty, Chris; Cantrell, Chris; Melton, Gregory S.
1995-01-01
NASA personnel at Kennedy Space Center's Material Science Laboratory have developed new environmentally sound precision cleaning and verification techniques for systems and components found at the center. This technology is required to replace existing methods traditionally employing CFC-113. The new patent-pending technique of precision cleaning verification is for large components of cryogenic fluid systems. These are stainless steel, sand cast valve bodies with internal surface areas ranging from 0.2 to 0.9 m(exp 2). Extrapolation of this technique to components of even larger sizes (by orders of magnitude) is planned. Currently, the verification process is completely manual. In the new technique, a high velocity, low volume water stream impacts the part to be verified. This process is referred to as Breathing Air/Water Impingement and forms the basis for the Impingement Verification System (IVS). The system is unique in that a gas stream is used to accelerate the water droplets to high speeds. Water is injected into the gas stream in a small, continuous amount. The air/water mixture is then passed through a converging-diverging nozzle where the gas is accelerated to supersonic velocities. These droplets impart sufficient energy to the precision cleaned surface to place non-volatile residue (NVR) contaminants into suspension in the water. The sample water is collected and its NVR level is determined by total organic carbon (TOC) analysis at 880 C. The TOC, in ppm carbon, is used to establish the NVR level. A correlation between the present gravimetric CFC-113 NVR and the IVS NVR is found from experimental sensitivity factors measured for various contaminants. The sensitivity has the units of ppm of carbon per mg/ft(exp 2) of contaminant. In this paper, the equipment is described and data are presented showing the development of the sensitivity factors from a test set including four NVRs impinged from witness plates of 0.05 to 0.75 m(exp 2).
NASA Astrophysics Data System (ADS)
Song, Bongyong; Park, Justin C.; Song, William Y.
2014-11-01
The Barzilai-Borwein (BB) 2-point step size gradient method is receiving attention for accelerating Total Variation (TV) based CBCT reconstructions. In order to become truly viable for clinical applications, however, its convergence property needs to be properly addressed. We propose a novel fast converging gradient projection BB method that requires ‘at most one function evaluation’ in each iterative step. This Selective Function Evaluation method, referred to as GPBB-SFE in this paper, exhibits the desired convergence property when it is combined with a ‘smoothed TV’ or any other differentiable prior. This way, the proposed GPBB-SFE algorithm offers fast and guaranteed convergence to the desired 3DCBCT image with minimal computational complexity. We first applied this algorithm to a Shepp-Logan numerical phantom. We then applied it to a CatPhan 600 physical phantom (The Phantom Laboratory, Salem, NY) and a clinically-treated head-and-neck patient, both acquired from the TrueBeam™ system (Varian Medical Systems, Palo Alto, CA). Furthermore, we accelerated the reconstruction by implementing the algorithm on an NVIDIA GTX 480 GPU card. We first compared GPBB-SFE with three recently proposed BB-based CBCT reconstruction methods available in the literature using the Shepp-Logan numerical phantom with 40 projections. It is found that GPBB-SFE shows either faster convergence speed/time or superior convergence property compared to existing BB-based algorithms. With the CatPhan 600 physical phantom, the GPBB-SFE algorithm requires only 3 function evaluations in 30 iterations and reconstructs the standard, 364-projection FDK reconstruction quality image using only 60 projections. We then applied the algorithm to a clinically-treated head-and-neck patient. It was observed that the GPBB-SFE algorithm requires only 18 function evaluations in 30 iterations.
Compared with the FDK algorithm with 364 projections, the GPBB-SFE algorithm produces visibly equivalent quality CBCT image for the head-and-neck patient with only 180 projections, in 131.7 s, further supporting its clinical applicability.
Song, Bongyong; Park, Justin C; Song, William Y
2014-11-07
The Barzilai-Borwein (BB) 2-point step size gradient method is receiving attention for accelerating Total Variation (TV) based CBCT reconstructions. In order to become truly viable for clinical applications, however, its convergence property needs to be properly addressed. We propose a novel fast converging gradient projection BB method that requires 'at most one function evaluation' in each iterative step. This Selective Function Evaluation method, referred to as GPBB-SFE in this paper, exhibits the desired convergence property when it is combined with a 'smoothed TV' or any other differentiable prior. This way, the proposed GPBB-SFE algorithm offers fast and guaranteed convergence to the desired 3DCBCT image with minimal computational complexity. We first applied this algorithm to a Shepp-Logan numerical phantom. We then applied it to a CatPhan 600 physical phantom (The Phantom Laboratory, Salem, NY) and a clinically-treated head-and-neck patient, both acquired from the TrueBeam™ system (Varian Medical Systems, Palo Alto, CA). Furthermore, we accelerated the reconstruction by implementing the algorithm on an NVIDIA GTX 480 GPU card. We first compared GPBB-SFE with three recently proposed BB-based CBCT reconstruction methods available in the literature using the Shepp-Logan numerical phantom with 40 projections. It is found that GPBB-SFE shows either faster convergence speed/time or superior convergence property compared to existing BB-based algorithms. With the CatPhan 600 physical phantom, the GPBB-SFE algorithm requires only 3 function evaluations in 30 iterations and reconstructs the standard, 364-projection FDK reconstruction quality image using only 60 projections. We then applied the algorithm to a clinically-treated head-and-neck patient. It was observed that the GPBB-SFE algorithm requires only 18 function evaluations in 30 iterations.
Compared with the FDK algorithm with 364 projections, the GPBB-SFE algorithm produces visibly equivalent quality CBCT image for the head-and-neck patient with only 180 projections, in 131.7 s, further supporting its clinical applicability.
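The two-point BB step at the heart of GPBB is easy to state. The sketch below applies gradient projection with the BB1 step size to a toy nonnegative least-squares problem standing in for the TV-regularized CBCT objective; the problem, sizes, and iteration count are illustrative assumptions, and the selective-function-evaluation safeguard of the paper is omitted.

```python
import numpy as np

def gpbb(A, b, iters=200):
    """Gradient projection with the Barzilai-Borwein (BB1) step for
    min 0.5*||Ax - b||^2 subject to x >= 0 (toy stand-in objective)."""
    x = np.zeros(A.shape[1])
    g = A.T @ (A @ x - b)
    alpha = 1.0 / np.linalg.norm(A, 2) ** 2      # conservative first step
    for _ in range(iters):
        x_new = np.maximum(x - alpha * g, 0.0)   # project onto x >= 0
        g_new = A.T @ (A @ x_new - b)
        s, y = x_new - x, g_new - g
        if s @ y > 0:
            alpha = (s @ s) / (s @ y)            # BB1 two-point step size
        x, g = x_new, g_new
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((40, 10))
x_true = np.abs(rng.standard_normal(10))         # nonnegative ground truth
b = A @ x_true
x_hat = gpbb(A, b)
residual = np.linalg.norm(A @ x_hat - b)
```

The BB step adapts to the local curvature from the last two iterates only, which is what makes the method attractive for large reconstruction problems where a line search would cost extra function evaluations.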
Upwind relaxation methods for the Navier-Stokes equations using inner iterations
NASA Technical Reports Server (NTRS)
Taylor, Arthur C., III; Ng, Wing-Fai; Walters, Robert W.
1992-01-01
A subsonic and a supersonic problem are treated by an upwind line-relaxation algorithm for the Navier-Stokes equations that uses inner iterations to accelerate steady-state convergence and thereby minimize CPU time. While the ability of the inner iterative procedure to mimic the quadratic convergence of the direct solver method is attested in both test problems, some of the nonquadratic inner iterative results proved more efficient than the quadratic ones. In the more successful, supersonic test case, inner iteration required only about 65 percent of the CPU time entailed by the line-relaxation method.
Zhang, Weizhe; Bai, Enci; He, Hui; Cheng, Albert M.K.
2015-01-01
Reducing energy consumption is becoming very important for extending battery life and lowering overall operational costs in heterogeneous real-time multiprocessor systems. In this paper, we first formulate this as a combinatorial optimization problem. Then, a successful meta-heuristic, called the Shuffled Frog Leaping Algorithm (SFLA), is proposed to reduce the energy consumption. Precocity remission and local optimal avoidance techniques are proposed to avoid precocity and improve the solution quality. Convergence acceleration significantly reduces the search time. Experimental results show that the SFLA-based energy-aware meta-heuristic uses 30% less energy than the Ant Colony Optimization (ACO) algorithm, and 60% less energy than the Genetic Algorithm (GA). Remarkably, the running time of the SFLA-based meta-heuristic is 20 and 200 times shorter than that of ACO and GA, respectively, for finding the optimal solution. PMID:26110406
On 3D inelastic analysis methods for hot section components
NASA Technical Reports Server (NTRS)
Mcknight, R. L.; Chen, P. C.; Dame, L. T.; Holt, R. V.; Huang, H.; Hartle, M.; Gellin, S.; Allen, D. H.; Haisler, W. E.
1986-01-01
Accomplishments are described for the 2-year program to develop advanced 3-D inelastic structural stress analysis methods and solution strategies for more accurate and cost effective analysis of combustors, turbine blades, and vanes. The approach was to develop a matrix of formulation elements and constitutive models. Three constitutive models were developed in conjunction with optimized iterating techniques, accelerators, and convergence criteria within a framework of dynamic time incrementing. Three formulation models were developed: an eight-noded mid-surface shell element, a nine-noded mid-surface shell element, and a twenty-noded isoparametric solid element. A separate computer program was developed for each combination of constitutive model and formulation model. Each program provides a functional stand-alone capability for performing cyclic nonlinear structural analysis. In addition, the analysis capabilities incorporated into each program can be abstracted in subroutine form for incorporation into other codes or to form new combinations.
The 3D inelastic analysis methods for hot section components
NASA Technical Reports Server (NTRS)
Mcknight, R. L.; Maffeo, R. J.; Tipton, M. T.; Weber, G.
1992-01-01
A two-year program to develop advanced 3D inelastic structural stress analysis methods and solution strategies for more accurate and cost effective analysis of combustors, turbine blades, and vanes is described. The approach was to develop a matrix of formulation elements and constitutive models. Three constitutive models were developed in conjunction with optimized iterating techniques, accelerators, and convergence criteria within a framework of dynamic time incrementing. Three formulation models were developed: an eight-noded midsurface shell element, a nine-noded midsurface shell element, and a twenty-noded isoparametric solid element. A separate computer program has been developed for each combination of constitutive model and formulation model. Each program provides a functional stand-alone capability for performing cyclic nonlinear structural analysis. In addition, the analysis capabilities incorporated into each program can be abstracted in subroutine form for incorporation into other codes or to form new combinations.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Müller, Florian, E-mail: florian.mueller@sam.math.ethz.ch; Jenny, Patrick, E-mail: jenny@ifd.mavt.ethz.ch; Meyer, Daniel W., E-mail: meyerda@ethz.ch
2013-10-01
Monte Carlo (MC) is a well known method for quantifying uncertainty arising for example in subsurface flow problems. Although robust and easy to implement, MC suffers from slow convergence. Extending MC by means of multigrid techniques yields the multilevel Monte Carlo (MLMC) method. MLMC has proven to greatly accelerate MC for several applications including stochastic ordinary differential equations in finance, elliptic stochastic partial differential equations and also hyperbolic problems. In this study, MLMC is combined with a streamline-based solver to assess uncertain two phase flow and Buckley–Leverett transport in random heterogeneous porous media. The performance of MLMC is compared to MC for a two dimensional reservoir with a multi-point Gaussian logarithmic permeability field. The influence of the variance and the correlation length of the logarithmic permeability on the MLMC performance is studied.
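The MLMC idea the abstract refers to can be illustrated on a toy stochastic differential equation. The sketch below (a standard Giles-style estimator, not the streamline solver of the study) telescopes E[P_L] = E[P_0] + Σ E[P_l − P_{l−1}], coupling fine and coarse Euler-Maruyama paths of geometric Brownian motion through shared Brownian increments; the parameters and per-level sample counts are illustrative assumptions.

```python
import numpy as np

def mlmc_level(l, n_samples, S0=1.0, r=0.05, sig=0.2, T=1.0, rng=None):
    """Mean of the level-l MLMC correction E[P_l - P_{l-1}] for P = S_T of
    geometric Brownian motion, using coupled Euler-Maruyama paths."""
    if rng is None:
        rng = np.random.default_rng()
    Mf = 2 ** (l + 1)                       # fine-level time steps
    dtf = T / Mf
    dW = rng.normal(0.0, np.sqrt(dtf), size=(n_samples, Mf))
    Sf = np.full(n_samples, S0)
    for i in range(Mf):                     # fine path
        Sf = Sf * (1.0 + r * dtf + sig * dW[:, i])
    if l == 0:
        return Sf.mean()                    # coarsest level: plain MC
    dtc = 2.0 * dtf
    dWc = dW[:, 0::2] + dW[:, 1::2]         # coarse increments reuse the noise
    Sc = np.full(n_samples, S0)
    for i in range(Mf // 2):                # coupled coarse path
        Sc = Sc * (1.0 + r * dtc + sig * dWc[:, i])
    return (Sf - Sc).mean()

rng = np.random.default_rng(1)
L = 4
est = sum(mlmc_level(l, n_samples=20000 // 2 ** l + 1000, rng=rng)
          for l in range(L + 1))
exact = np.exp(0.05)                        # E[S_T] = S0 * exp(r*T)
```

Because the coupled corrections have small variance, the sample counts can shrink geometrically with the level, which is the source of the MLMC speedup over plain MC.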
τ hadronic spectral function moments in a nonpower QCD perturbation theory
NASA Astrophysics Data System (ADS)
Abbas, Gauhar; Ananthanarayan, B.; Caprini, I.; Fischer, J.
2016-04-01
The moments of the hadronic spectral functions are of interest for the extraction of the strong coupling and other QCD parameters from the hadronic decays of the τ lepton. We consider the perturbative behavior of these moments in the framework of a QCD nonpower perturbation theory, defined by the technique of series acceleration by conformal mappings, which simultaneously implements renormalization-group summation and has a tame large-order behavior. Two recently proposed models of the Adler function are employed to generate the higher order coefficients of the perturbation series and to predict the exact values of the moments, required for testing the properties of the perturbative expansions. We show that the contour-improved nonpower perturbation theories and the renormalization-group-summed nonpower perturbation theories have very good convergence properties for a large class of moments of the so-called 'reference model', including moments that are poorly described by the standard expansions.
On Using Surrogates with Genetic Programming.
Hildebrandt, Torsten; Branke, Jürgen
2015-01-01
One way to accelerate evolutionary algorithms with expensive fitness evaluations is to combine them with surrogate models. Surrogate models are efficiently computable approximations of the fitness function, derived by means of statistical or machine learning techniques from samples of fully evaluated solutions. But these models usually require a numerical representation, and therefore cannot be used with the tree representation of genetic programming (GP). In this paper, we present a new way to use surrogate models with GP. Rather than using the genotype directly as input to the surrogate model, we propose using a phenotypic characterization. This phenotypic characterization can be computed efficiently and allows us to define approximate measures of equivalence and similarity. Using a stochastic, dynamic job shop scenario as an example of simulation-based GP with an expensive fitness evaluation, we show how these ideas can be used to construct surrogate models and improve the convergence speed and solution quality of GP.
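The surrogate-assisted loop described above can be sketched generically. In the toy example below (illustrative, not the paper's job-shop setup), a 1-nearest-neighbour surrogate built from already-evaluated solutions pre-screens a large candidate pool so that only the most promising few receive the expensive evaluation; the sphere function stands in for the costly simulation, and the plain coordinate vector plays the role of the paper's phenotypic characterization.

```python
import numpy as np

def expensive_fitness(x):
    return float(np.sum(x ** 2))             # stand-in for a costly simulation

def surrogate(x, X_arch, f_arch):
    # 1-nearest-neighbour surrogate: predict the fitness of the closest
    # already-evaluated solution in feature space.
    return f_arch[int(np.argmin(np.linalg.norm(X_arch - x, axis=1)))]

rng = np.random.default_rng(0)
dim, pop_size = 5, 20
pop = rng.uniform(-5.0, 5.0, size=(pop_size, dim))
fit = np.array([expensive_fitness(x) for x in pop])
X_arch, f_arch = pop.copy(), fit.copy()
evals = pop_size

for gen in range(30):
    # propose 3x more candidates than we can afford to evaluate ...
    parents = pop[rng.integers(pop_size, size=3 * pop_size)]
    cand = parents + rng.normal(0.0, 0.5, size=(3 * pop_size, dim))
    # ... pre-screen them with the cheap surrogate ...
    pred = np.array([surrogate(c, X_arch, f_arch) for c in cand])
    chosen = cand[np.argsort(pred)[:pop_size]]
    # ... and spend the expensive evaluations only on the survivors
    f_chosen = np.array([expensive_fitness(x) for x in chosen])
    evals += pop_size
    X_arch = np.vstack([X_arch, chosen])
    f_arch = np.concatenate([f_arch, f_chosen])
    merged = np.vstack([pop, chosen])
    merged_f = np.concatenate([fit, f_chosen])
    keep = np.argsort(merged_f)[:pop_size]   # elitist (mu + lambda) selection
    pop, fit = merged[keep], merged_f[keep]
```

The archive grows with every true evaluation, so the surrogate becomes more informative over time while the expensive-evaluation budget stays fixed per generation.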
NASA Technical Reports Server (NTRS)
Dulikravich, D. S.
1980-01-01
A computer program is presented which numerically solves an exact, full potential equation (FPE) for three dimensional, steady, inviscid flow through an isolated wind turbine rotor. The program automatically generates a three dimensional, boundary conforming grid and iteratively solves the FPE while fully accounting for both the rotating cascade and Coriolis effects. The numerical techniques incorporated involve rotated, type dependent finite differencing, a finite volume method, artificial viscosity in conservative form, and a successive line overrelaxation combined with the sequential grid refinement procedure to accelerate the iterative convergence rate. Consequently, the WIND program is capable of accurately analyzing incompressible and compressible flows, including those that are locally transonic and terminated by weak shocks. The program can also be used to analyze the flow around isolated aircraft propellers and helicopter rotors in hover as long as the total relative Mach number of the oncoming flow is subsonic.
Beam Dynamics Considerations in Electron Ion Colliders
NASA Astrophysics Data System (ADS)
Krafft, Geoffrey
2015-04-01
The nuclear physics community is converging on the idea that the next large project after FRIB should be an electron-ion collider. Both Brookhaven National Lab and Thomas Jefferson National Accelerator Facility have developed accelerator designs, both of which need novel solutions to accelerator physics problems. In this talk we discuss some of the problems that must be solved and their solutions. Examples in novel beam optics systems, beam cooling, and beam polarization control will be presented. Authored by Jefferson Science Associates, LLC under U.S. DOE Contract No. DE-AC05-06OR23177. The U.S. Government retains a non-exclusive, paid-up, irrevocable, world-wide license to publish or reproduce this manuscript for U.S. Government purposes.
NASA Astrophysics Data System (ADS)
Kacprzak, T.; Herbel, J.; Amara, A.; Réfrégier, A.
2018-02-01
Approximate Bayesian Computation (ABC) is a method to obtain a posterior distribution without a likelihood function, using simulations and a set of distance metrics. For that reason, it has recently been gaining popularity as an analysis tool in cosmology and astrophysics. Its drawback, however, is a slow convergence rate. We propose a novel method, which we call qABC, to accelerate ABC with Quantile Regression. In this method, we create a model of quantiles of the distance measure as a function of input parameters. This model is trained on a small number of simulations and estimates which regions of the prior space are likely to be accepted into the posterior. Other regions are then immediately rejected. This procedure is then repeated as more simulations become available. We apply it to the practical problem of estimating the redshift distribution of cosmological samples, using forward modelling developed in previous work. The qABC method converges to nearly the same posterior as the basic ABC. It uses, however, only 20% of the number of simulations compared to basic ABC, achieving a fivefold gain in execution time for our problem. For other problems the acceleration rate may vary; it depends on how close the prior is to the final posterior. We discuss possible improvements and extensions to this method.
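For context, the basic rejection-ABC step that qABC accelerates looks as follows. This minimal sketch (a toy Gaussian-mean problem, not the cosmological application) infers a parameter from a summary statistic by keeping the prior draws whose simulated summaries land closest to the observed one.

```python
import numpy as np

rng = np.random.default_rng(0)
data = rng.normal(3.0, 1.0, size=200)        # "observed" data, true mean = 3
obs_stat = data.mean()                       # summary statistic

def simulate(mu, rng):
    # forward model: draw a data set of the same size, return its summary
    return rng.normal(mu, 1.0, size=200).mean()

# Rejection ABC: sample the prior, keep parameters whose simulated
# summary lies within epsilon of the observed one.
prior_draws = rng.uniform(-10.0, 10.0, size=20000)
dist = np.abs(np.array([simulate(mu, rng) for mu in prior_draws]) - obs_stat)
eps = np.quantile(dist, 0.01)                # accept the closest 1%
posterior = prior_draws[dist <= eps]
```

The cost is dominated by the 20 000 forward simulations, almost all of which are wasted on rejected draws; qABC's quantile-regression model aims to skip exactly those hopeless regions of the prior.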
Zajączkowska, U; Barlow, P W
2017-07-01
Orbital movement of the Moon generates a system of gravitational fields that periodically alter the gravitational force on Earth. This lunar tidal acceleration (Etide) is known to act as an external environmental factor affecting many growth and developmental phenomena in plants. Our study focused on the lunar tidal influence on stem elongation growth, nutations and leaf movements of peppermint. Plants were continuously recorded with time-lapse photography under constant illumination as well as in constant illumination following 5 days of alternating dark-light cycles. Time courses of shoot movements were correlated with contemporaneous time courses of the Etide estimates. Optical microscopy and SEM were used in anatomical studies. All plant shoot movements were synchronised with changes in the lunisolar acceleration. Using a periodogram, wavelet analysis and a local correlation index, a convergence was found between the rhythms of lunisolar acceleration and the rhythms of shoot growth. Also observed were cyclical changes in the direction of rotation of stem apices when gravitational dynamics were at their greatest. After contrasting dark-light cycle experiments, nutational rhythms converged to an identical phase relationship with the Etide and almost immediately their renewed movements commenced. Amplitudes of leaf movements decreased during leaf growth up to the stage when the leaf was fully developed; the periodicity of leaf movements correlated with the Etide rhythms. For the first time, it was documented that lunisolar acceleration is an independent rhythmic environmental signal capable of influencing the dynamics of plant stem elongation. This phenomenon is synchronised with the known effects of Etide on nutations and leaf movements. © 2017 German Botanical Society and The Royal Botanical Society of the Netherlands.
Stability analysis of a deterministic dose calculation for MRI-guided radiotherapy.
Zelyak, O; Fallone, B G; St-Aubin, J
2017-12-14
Modern effort in radiotherapy to address the challenges of tumor localization and motion has led to the development of MRI guided radiotherapy technologies. Accurate dose calculations must properly account for the effects of the MRI magnetic fields. Previous work has investigated the accuracy of a deterministic linear Boltzmann transport equation (LBTE) solver that includes magnetic field, but not the stability of the iterative solution method. In this work, we perform a stability analysis of this deterministic algorithm including an investigation of the convergence rate dependencies on the magnetic field, material density, energy, and anisotropy expansion. The iterative convergence rate of the continuous and discretized LBTE including magnetic fields is determined by analyzing the spectral radius using Fourier analysis for the stationary source iteration (SI) scheme. The spectral radius is calculated when the magnetic field is included (1) as a part of the iteration source, and (2) inside the streaming-collision operator. The non-stationary Krylov subspace solver GMRES is also investigated as a potential method to accelerate the iterative convergence, and an angular parallel computing methodology is investigated as a method to enhance the efficiency of the calculation. SI is found to be unstable when the magnetic field is part of the iteration source, but unconditionally stable when the magnetic field is included in the streaming-collision operator. The discretized LBTE with magnetic fields using a space-angle upwind stabilized discontinuous finite element method (DFEM) was also found to be unconditionally stable, but the spectral radius rapidly reaches unity for very low-density media and increasing magnetic field strengths indicating arbitrarily slow convergence rates. However, GMRES is shown to significantly accelerate the DFEM convergence rate showing only a weak dependence on the magnetic field. 
In addition, the use of an angular parallel computing strategy is shown to potentially increase the efficiency of the dose calculation.
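The role of the spectral radius in the convergence of a stationary scheme like source iteration can be illustrated with a generic fixed-point sketch (a toy linear system, not the LBTE solver of the paper): the iteration x_{k+1} = Hx_k + q converges geometrically at a rate set by the spectral radius ρ(H), the quantity the Fourier analysis above estimates.

```python
import numpy as np

# Toy stationary iteration x_{k+1} = H x_k + q; the error contracts
# asymptotically by the spectral radius rho(H) per sweep.
rng = np.random.default_rng(0)
n = 50
M = rng.standard_normal((n, n))
H = 0.9 * M / np.abs(np.linalg.eigvals(M)).max()  # rescale so rho(H) = 0.9
q = rng.standard_normal(n)
x_star = np.linalg.solve(np.eye(n) - H, q)        # exact fixed point

x = np.zeros(n)
errs = []
for _ in range(200):
    x = H @ x + q                                 # one stationary sweep
    errs.append(np.linalg.norm(x - x_star))

# Average contraction factor over the last 150 sweeps ~ rho(H)
rate = (errs[-1] / errs[49]) ** (1.0 / 150.0)
```

As ρ(H) approaches 1 (the low-density, high-field regime described above) this rate approaches unity and convergence becomes arbitrarily slow, which is what motivates switching to a Krylov solver such as GMRES.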
Stability analysis of a deterministic dose calculation for MRI-guided radiotherapy
NASA Astrophysics Data System (ADS)
Zelyak, O.; Fallone, B. G.; St-Aubin, J.
2018-01-01
Modern effort in radiotherapy to address the challenges of tumor localization and motion has led to the development of MRI guided radiotherapy technologies. Accurate dose calculations must properly account for the effects of the MRI magnetic fields. Previous work has investigated the accuracy of a deterministic linear Boltzmann transport equation (LBTE) solver that includes magnetic field, but not the stability of the iterative solution method. In this work, we perform a stability analysis of this deterministic algorithm including an investigation of the convergence rate dependencies on the magnetic field, material density, energy, and anisotropy expansion. The iterative convergence rate of the continuous and discretized LBTE including magnetic fields is determined by analyzing the spectral radius using Fourier analysis for the stationary source iteration (SI) scheme. The spectral radius is calculated when the magnetic field is included (1) as a part of the iteration source, and (2) inside the streaming-collision operator. The non-stationary Krylov subspace solver GMRES is also investigated as a potential method to accelerate the iterative convergence, and an angular parallel computing methodology is investigated as a method to enhance the efficiency of the calculation. SI is found to be unstable when the magnetic field is part of the iteration source, but unconditionally stable when the magnetic field is included in the streaming-collision operator. The discretized LBTE with magnetic fields using a space-angle upwind stabilized discontinuous finite element method (DFEM) was also found to be unconditionally stable, but the spectral radius rapidly reaches unity for very low-density media and increasing magnetic field strengths indicating arbitrarily slow convergence rates. However, GMRES is shown to significantly accelerate the DFEM convergence rate showing only a weak dependence on the magnetic field. 
In addition, the use of an angular parallel computing strategy is shown to potentially increase the efficiency of the dose calculation.
Corrigendum to "Stability analysis of a deterministic dose calculation for MRI-guided radiotherapy".
Zelyak, Oleksandr; Fallone, B Gino; St-Aubin, Joel
2018-03-12
Modern effort in radiotherapy to address the challenges of tumor localization and motion has led to the development of MRI guided radiotherapy technologies. Accurate dose calculations must properly account for the effects of the MRI magnetic fields. Previous work has investigated the accuracy of a deterministic linear Boltzmann transport equation (LBTE) solver that includes magnetic field, but not the stability of the iterative solution method. In this work, we perform a stability analysis of this deterministic algorithm including an investigation of the convergence rate dependencies on the magnetic field, material density, energy, and anisotropy expansion. The iterative convergence rate of the continuous and discretized LBTE including magnetic fields is determined by analyzing the spectral radius using Fourier analysis for the stationary source iteration (SI) scheme. The spectral radius is calculated when the magnetic field is included (1) as a part of the iteration source, and (2) inside the streaming-collision operator. The non-stationary Krylov subspace solver GMRES is also investigated as a potential method to accelerate the iterative convergence, and an angular parallel computing methodology is investigated as a method to enhance the efficiency of the calculation. SI is found to be unstable when the magnetic field is part of the iteration source, but unconditionally stable when the magnetic field is included in the streaming-collision operator. The discretized LBTE with magnetic fields using a space-angle upwind stabilized discontinuous finite element method (DFEM) was also found to be unconditionally stable, but the spectral radius rapidly reaches unity for very low density media and increasing magnetic field strengths indicating arbitrarily slow convergence rates. However, GMRES is shown to significantly accelerate the DFEM convergence rate showing only a weak dependence on the magnetic field. 
In addition, the use of an angular parallel computing strategy is shown to potentially increase the efficiency of the dose calculation. © 2018 Institute of Physics and Engineering in Medicine.
Xu, Q; Yang, D; Tan, J; Anastasio, M
2012-06-01
To improve image quality and reduce imaging dose in CBCT for radiation therapy applications, and to realize near real-time image reconstruction based on use of a fast convergence iterative algorithm and acceleration by multiple GPUs. An iterative image reconstruction that sought to minimize a weighted least squares cost function that employed total variation (TV) regularization was employed to mitigate projection data incompleteness and noise. To achieve rapid 3D image reconstruction (< 1 min), a highly optimized multiple-GPU implementation of the algorithm was developed. The convergence rate and reconstruction accuracy were evaluated using a modified 3D Shepp-Logan digital phantom and a Catphan-600 physical phantom. The reconstructed images were compared with the clinical FDK reconstruction results. Digital phantom studies showed that only 15 iterations and 60 iterations are needed to achieve algorithm convergence for the 360-view and 60-view cases, respectively. The RMSE was reduced to 10^-4 and 10^-2, respectively, by using 15 iterations for each case. Our algorithm required 5.4 s to complete one iteration for the 60-view case using one Tesla C2075 GPU. The few-view study indicated that our iterative algorithm has great potential to reduce the imaging dose and preserve good image quality. For the physical Catphan studies, the images obtained from the iterative algorithm possessed better spatial resolution and higher SNRs than those obtained by use of a clinical FDK reconstruction algorithm. We have developed a fast convergence iterative algorithm for CBCT image reconstruction. The developed algorithm yielded images with better spatial resolution and higher SNR than those produced by a commercial FDK tool. In addition, from the few-view study, the iterative algorithm has shown great potential for significantly reducing imaging dose.
We expect that the developed reconstruction approach will facilitate applications including IGART and patient daily CBCT-based treatment localization. © 2012 American Association of Physicists in Medicine.
A microwave assisted intramolecular-furan-Diels-Alder approach to 4-substituted indoles.
Petronijevic, Filip; Timmons, Cody; Cuzzupe, Anthony; Wipf, Peter
2009-01-07
The key steps of a versatile new protocol for the convergent synthesis of 3,4-disubstituted indoles are the addition of an alpha-lithiated alkylaminofuran to a carbonyl compound, a microwave-accelerated intramolecular Diels-Alder cycloaddition and an in situ double aromatization reaction.
Research in navigation and optimization for space trajectories
NASA Technical Reports Server (NTRS)
Pines, S.; Kelley, H. J.
1979-01-01
Topics covered include: (1) initial Cartesian coordinates for rapid precision orbit prediction; (2) accelerating convergence in optimization methods using search routines by applying curvilinear projection ideas; (3) perturbation-magnitude control for difference-quotient estimation of derivatives; and (4) determining the accelerometer bias for in-orbit shuttle trajectories.
Patel, Ravi G.; Desjardins, Olivier; Kong, Bo; ...
2017-09-01
Here, we present a verification study of three simulation techniques for fluid–particle flows: an Euler–Lagrange approach (EL) inspired by Jackson's seminal work on fluidized particles, a quadrature-based moment method based on the anisotropic Gaussian closure (AG), and the traditional two-fluid model (TFM). We perform simulations of two problems: particles in frozen homogeneous isotropic turbulence (HIT) and cluster-induced turbulence (CIT). For verification, we evaluate various techniques for extracting statistics from EL and study the convergence properties of the three methods under grid refinement. The convergence is found to depend on the simulation method and on the problem, with CIT simulations posing fewer difficulties than HIT. Specifically, EL converges under refinement for both HIT and CIT, but statistics exhibit dependence on the postprocessing parameters. For CIT, AG produces similar results to EL. For HIT, converging both TFM and AG poses challenges. Overall, extracting converged, parameter-independent Eulerian statistics remains a challenge for all methods.
Effective matrix-free preconditioning for the augmented immersed interface method
NASA Astrophysics Data System (ADS)
Xia, Jianlin; Li, Zhilin; Ye, Xin
2015-12-01
We present effective and efficient matrix-free preconditioning techniques for the augmented immersed interface method (AIIM). AIIM has been developed recently and is shown to be very effective for interface problems and problems on irregular domains. GMRES is often used to solve for the augmented variable(s) associated with a Schur complement A in AIIM that is defined along the interface or the irregular boundary. The efficiency of AIIM relies on how quickly the system for A can be solved. For some applications, there are substantial difficulties involved, such as the slow convergence of GMRES (particularly for free boundary and moving interface problems), and the inconvenience in finding a preconditioner (due to the situation that only the products of A and vectors are available). Here, we propose matrix-free structured preconditioning techniques for AIIM via adaptive randomized sampling, using only the products of A and vectors to construct a hierarchically semiseparable matrix approximation to A. Several improvements over existing schemes are shown so as to enhance the efficiency and also avoid potential instability. The significance of the preconditioners includes: (1) they do not require the entries of A or the multiplication of A^T with vectors; (2) constructing the preconditioners needs only O(log N) matrix-vector products and O(N) storage, where N is the size of A; (3) applying the preconditioners needs only O(N) flops; (4) they are very flexible and do not require any a priori knowledge of the structure of A. The preconditioners are observed to significantly accelerate the convergence of GMRES, with heuristic justification of their effectiveness. Comprehensive tests on several important applications are provided, such as Navier-Stokes equations on irregular domains with traction boundary conditions, interface problems in incompressible flows, mixed boundary problems, and free boundary problems.
The preconditioning techniques are also useful for several other problems and methods.
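The matrix-free setting, where only products of A with vectors are available, can be illustrated with a far simpler stand-in for the paper's HSS construction: probing the diagonal of A with random sign vectors (a Hutchinson-style estimate, since E[z * (A z)] = diag(A) for Rademacher z) and using it as a Jacobi preconditioner for GMRES. The test matrix, sizes, and probe count below are invented for illustration.

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, gmres

# A is only accessible through a black-box matvec; we count how many
# products each GMRES solve consumes, with and without the probed
# Jacobi preconditioner.
rng = np.random.default_rng(1)
N = 200
A_dense = np.diag(np.linspace(1.0, 100.0, N)) + 0.01 * rng.standard_normal((N, N))

class CountingMatvec:
    """Wraps a matvec and counts how often it is called."""
    def __init__(self, f):
        self.f, self.n = f, 0
    def __call__(self, v):
        self.n += 1
        return self.f(v)

mv_plain = CountingMatvec(lambda v: A_dense @ v)
mv_prec = CountingMatvec(lambda v: A_dense @ v)

# Estimate diag(A) from k Rademacher probes, using products only.
k = 30
d = np.zeros(N)
for _ in range(k):
    z = rng.choice([-1.0, 1.0], size=N)
    d += z * (A_dense @ z)
d /= k
M = LinearOperator((N, N), matvec=lambda v: v / d, dtype=float)

b = rng.standard_normal(N)
x1, info1 = gmres(LinearOperator((N, N), matvec=mv_plain, dtype=float), b)
x2, info2 = gmres(LinearOperator((N, N), matvec=mv_prec, dtype=float), b, M=M)
```

With the probed diagonal as preconditioner, the solver needs far fewer products of A with vectors (compare mv_prec.n with mv_plain.n); the HSS approach in the abstract plays the same role for much harder, non-diagonally-dominant Schur complements.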
Tannamala, Pavan Kumar; Azhagarasan, Nagarasampatti Sivaprakasam; Shankar, K Chitra
2013-01-01
Conventional casting techniques following the manufacturers' recommendations are time consuming. Accelerated casting techniques have been reported, but their accuracy with base metal alloys has not been adequately studied. In this study we measured the vertical marginal gap of nickel-chromium copings made by conventional and accelerated casting techniques and determined the clinical acceptability of the cast copings. Design: in vitro experimental study in a laboratory setting. Ten copings each were cast by the conventional and accelerated techniques. All copings were identical; only their mold preparation schedules differed. Microscopic measurements were recorded at ×80 magnification perpendicular to the axial wall at four predetermined sites. The marginal gap values were evaluated by a paired t test. The mean marginal gap with the conventional technique (34.02 μm) is approximately 10 μm less than that with the accelerated casting technique (44.62 μm). As the P value is less than 0.0001, there is a highly significant difference between the two techniques with regard to vertical marginal gap. The accelerated casting technique is time saving, and the marginal gap measured was within clinically acceptable limits; it could be an alternative to the time-consuming conventional technique.
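The paired t test used above is a one-liner in SciPy. The measurements below are invented values constructed to match the reported means (34.02 μm and 44.62 μm); they are not the study's raw data.

```python
import numpy as np
from scipy import stats

# Hypothetical per-coping marginal gaps (micrometres); means match the
# abstract's 34.02 and 44.62 um, everything else is illustrative.
conventional = np.array([33.1, 35.0, 34.4, 32.8, 34.9, 33.6, 34.2, 35.1, 33.0, 34.1])
accelerated = np.array([43.9, 45.2, 44.8, 43.5, 45.6, 44.0, 44.7, 45.3, 43.8, 45.4])

t_stat, p_value = stats.ttest_rel(conventional, accelerated)  # paired t test
mean_diff = float(np.mean(accelerated - conventional))        # about 10.6 um
```

A negative t statistic with a tiny p value here means the conventional gaps are systematically smaller, mirroring the abstract's conclusion.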
Subduction initiation and Obduction: insights from analog models
NASA Astrophysics Data System (ADS)
Agard, P.; Zuo, X.; Funiciello, F.; Bellahsen, N.; Faccenna, C.; Savva, D.
2013-12-01
Subduction initiation and obduction are two poorly constrained geodynamic processes which are interrelated in a number of natural settings. Subduction initiation can be viewed as the result of a regional-scale change in plate convergence partitioning between the set of existing subduction (and collision or obduction) zones worldwide. Intraoceanic subduction initiation may also ultimately lead to obduction of dense oceanic "ophiolites" atop light continental plates. A classic example is the short-lived Peri-Arabic obduction, which took place along thousands of km almost synchronously (within ~5-10 myr), from Turkey to Oman, while the subduction zone beneath Eurasia became temporarily jammed. We herein present analog models designed to study both processes and more specifically (1) subduction initiation through the partitioning of deformation between two convergent zones (a preexisting and a potential one) and, as a consequence, (2) the possible development of obduction, which has so far never been modelled. These models explore the mechanisms of subduction initiation and obduction and test various triggering hypotheses (i.e., plate acceleration, slab crossing the 660 km discontinuity, ridge subduction; Agard et al., 2007). The experimental setup comprises an upper mantle modelled as a low-viscosity transparent Newtonian glucose syrup filling a rigid Plexiglas tank and high-viscosity silicone plates. Convergence is simulated by pushing on a piston at one end of the model with plate-tectonics-like velocities (1-10 cm/yr) onto (i) a continental margin, (ii) a weakness zone with variable resistance and dip (W), (iii) an oceanic plate - with or without a spreading ridge, (iv) a subduction zone (S) dipping away from the piston and (v) an upper active continental margin, below which the oceanic plate is being subducted at the start of the experiment (as for the Oman case). Several configurations were tested over thirty-five parametric experiments.
Special emphasis was placed on comparing different types of weakness zone (W) and the extent of mechanical coupling across them, particularly when plates were accelerated. Measurements of displacements and internal deformation allow for very precise and reproducible tracking of deformation. Experiments consistently demonstrate that subduction initiation chiefly depends on how the overall shortening (or convergence) is partitioned between the weakness zone (W) and the preexisting subduction zone (S). Part of the deformation is transferred to W as soon as the increased coupling across S results in 5-10% of the convergence being transferred to the upper plate. Whether obduction develops further depends on the effective strength of W. Results (1) constrain the range of physical conditions required for subduction initiation and obduction to develop/nucleate and (2) underline the key role of acceleration for triggering obduction, rather than ridge subduction or slab resistance to penetration at the 660 km discontinuity. [Agard P., Jolivet L., Vrielynck B., Burov E. & Monié P., 2007. Plate acceleration: the obduction trigger? Earth and Planetary Science Letters, 258, 428-441.]
1961-01-01
As presented by Gerhard Heller of Marshall Space Flight Center's Research Projects Division in 1961, this chart illustrates three basic types of electric propulsion systems then under consideration by NASA. The ion engine (top) utilized cesium atoms ionized by hot tungsten and accelerated by an electrostatic field to produce thrust. The arc engine (middle) achieved propulsion by heating a propellant with an electric arc and then producing an expansion of the hot gas or plasma in a convergent-divergent duct. The electromagnetic, or MFD engine (bottom) manipulated strong magnetic fields to interact with a plasma and produce acceleration.
Improved numerical methods for turbulent viscous recirculating flows
NASA Technical Reports Server (NTRS)
Turan, A.; Vandoormaal, J. P.
1988-01-01
The performance of discrete methods for the prediction of fluid flows can be enhanced by improving the convergence rate of solvers and by increasing the accuracy of the discrete representation of the equations of motion. This report evaluates the gains in solver performance that are available when various acceleration methods are applied. Various discretizations are also examined, and two are recommended because of their accuracy and robustness. Insertion of the improved discretization and solver accelerator into a TEACH code that has been widely applied to combustor flows illustrates the substantial gains to be achieved.
Genes involved in convergent evolution of eusociality in bees
Woodard, S. Hollis; Fischman, Brielle J.; Venkat, Aarti; Hudson, Matt E.; Varala, Kranthi; Cameron, Sydney A.; Clark, Andrew G.; Robinson, Gene E.
2011-01-01
Eusociality has arisen independently at least 11 times in insects. Despite this convergence, there are striking differences among eusocial lifestyles, ranging from species living in small colonies with overt conflict over reproduction to species in which colonies contain hundreds of thousands of highly specialized sterile workers produced by one or a few queens. Although the evolution of eusociality has been intensively studied, the genetic changes involved in the evolution of eusociality are relatively unknown. We examined patterns of molecular evolution across three independent origins of eusociality by sequencing transcriptomes of nine socially diverse bee species and combining these data with genome sequence from the honey bee Apis mellifera to generate orthologous sequence alignments for 3,647 genes. We found a shared set of 212 genes with a molecular signature of accelerated evolution across all eusocial lineages studied, as well as unique sets of 173 and 218 genes with a signature of accelerated evolution specific to either highly or primitively eusocial lineages, respectively. These results demonstrate that convergent evolution can involve a mosaic pattern of molecular changes in both shared and lineage-specific sets of genes. Genes involved in signal transduction, gland development, and carbohydrate metabolism are among the most prominent rapidly evolving genes in eusocial lineages. These findings provide a starting point for linking specific genetic changes to the evolution of eusociality. PMID:21482769
A highly parallel multigrid-like method for the solution of the Euler equations
NASA Technical Reports Server (NTRS)
Tuminaro, Ray S.
1989-01-01
We consider a highly parallel multigrid-like method for the solution of the two dimensional steady Euler equations. The new method, introduced as filtering multigrid, is similar to a standard multigrid scheme in that convergence on the finest grid is accelerated by iterations on coarser grids. In the filtering method, however, additional fine grid subproblems are processed concurrently with coarse grid computations to further accelerate convergence. These additional problems are obtained by splitting the residual into a smooth and an oscillatory component. The smooth component is then used to form a coarse grid problem (similar to standard multigrid) while the oscillatory component is used for a fine grid subproblem. The primary advantage in the filtering approach is that fewer iterations are required and that most of the additional work per iteration can be performed in parallel with the standard coarse grid computations. We generalize the filtering algorithm to a version suitable for nonlinear problems. We emphasize that this generalization is conceptually straightforward and relatively easy to implement. In particular, no explicit linearization (e.g., formation of Jacobians) needs to be performed (similar to the FAS multigrid approach). We illustrate the nonlinear version by applying it to the Euler equations and presenting numerical results. Finally, a performance evaluation is made based on execution time models and convergence information obtained from numerical experiments.
General analytic results on averaging Lemaître-Tolman-Bondi models
NASA Astrophysics Data System (ADS)
Sussman, Roberto A.
2010-12-01
An effective acceleration, which mimics the effect of dark energy, may arise in the context of Buchert's scalar averaging formalism. We examine the conditions for such an acceleration to occur in the asymptotic radial range in generic spherically symmetric Lemaître-Tolman-Bondi (LTB) dust models. By looking at the behavior of covariant scalars along space slices orthogonal to the 4-velocity, we show that this effective acceleration occurs in a class of models with negative spatial curvature that are asymptotically convergent to sections of Minkowski spacetime. As a consequence, the boundary conditions that favor LTB models with an effective acceleration are not a void inhomogeneity embedded in a homogeneous FLRW background (Swiss cheese models), but a local void or clump embedded in a large cosmic void region represented by asymptotically Minkowski conditions.
NASA Astrophysics Data System (ADS)
Wang, Zhen; Cui, Shengcheng; Yang, Jun; Gao, Haiyang; Liu, Chao; Zhang, Zhibo
2017-03-01
We present a novel hybrid scattering-order-dependent variance reduction method to accelerate the convergence rate in both forward and backward Monte Carlo radiative transfer simulations involving highly forward-peaked scattering phase functions. This method is built upon a newly developed theoretical framework that not only unifies both forward and backward radiative transfer in a scattering-order-dependent integral equation, but also generalizes the variance reduction formalism across a wide range of simulation scenarios. In previous studies, variance reduction is achieved either by the scattering phase function forward truncation technique or by the target directional importance sampling technique; our method combines both. A novel feature of our method is that all the tuning parameters used for the phase function truncation and importance sampling techniques at each order of scattering are automatically optimized by scattering-order-dependent numerical evaluation experiments. To make such experiments feasible, we present a new scattering order sampling algorithm that remodels the integral radiative transfer kernel for the phase function truncation method. The presented method has been implemented in our Multiple-Scaling-based Cloudy Atmospheric Radiative Transfer (MSCART) model for validation and evaluation. The main advantage of the method is that it greatly improves, order by order, the trade-off between numerical efficiency and accuracy.
Exploring possibilities of band gap measurement with off-axis EELS in TEM.
Korneychuk, Svetlana; Partoens, Bart; Guzzinati, Giulio; Ramaneti, Rajesh; Derluyn, Joff; Haenen, Ken; Verbeeck, Jo
2018-06-01
A technique to measure the band gap of dielectric materials with high refractive index by means of electron energy loss spectroscopy (EELS) is presented. The technique relies on the use of a circular (Bessel) aperture and suppresses Cherenkov losses and surface-guided light modes by enforcing a momentum transfer selection. The technique also strongly suppresses the elastic zero loss peak, making the acquisition, interpretation and signal to noise ratio of low loss spectra considerably better, especially for excitations in the first few eV of the EELS spectrum. Simulations of the low loss inelastic electron scattering probabilities demonstrate the beneficial influence of the Bessel aperture in this setup even for high accelerating voltages. The importance of selecting the optimal experimental convergence and collection angles is highlighted. The effect of the created off-axis acquisition conditions on the selection of the transitions from valence to conduction bands is discussed in detail on a simplified isotropic two band model. This opens the opportunity for deliberately selecting certain transitions by carefully tuning the microscope parameters. The suggested approach is experimentally demonstrated and provides good signal to noise ratio and interpretable band gap signals on reference samples of diamond, GaN and AlN while offering spatial resolution in the nm range. Copyright © 2018 Elsevier B.V. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schunert, Sebastian; Wang, Yaqi; Gleicher, Frederick
2017-02-21
This paper presents a flexible nonlinear diffusion acceleration (NDA) method that discretizes both the S_N transport equation and the diffusion equation using the discontinuous finite element method (DFEM). The method is flexible in that the diffusion equation can be discretized on a coarser mesh, with the only restriction that it is nested within the transport mesh, and the FEM shape function orders of the two equations can be different. The consistency of the transport and diffusion solutions at convergence is defined by using a projection operator mapping the transport into the diffusion FEM space. The diffusion weak form is based on the modified incomplete interior penalty (MIP) diffusion DFEM discretization, extended by volumetric drift, interior face, and boundary closure terms. In contrast to commonly used coarse mesh finite difference (CMFD) methods, the presented NDA method uses a fully FEM-discretized diffusion equation for acceleration. Suitable projection and prolongation operators arise naturally from the FEM framework. Via Fourier analysis and numerical experiments for a one-group, fixed-source problem, the following properties of the NDA method are established for structured quadrilateral meshes: (1) the presented method is unconditionally stable and effective in the presence of mild material heterogeneities if the same mesh and identical shape functions, either of the bilinear or biquadratic type, are used; (2) the NDA method remains unconditionally stable in the presence of strong heterogeneities; (3) the NDA method with bilinear elements extends the range of effectiveness and stability by a factor of two when compared to CMFD if a coarser diffusion mesh is selected. In addition, the method is tested for solving the C5G7 multigroup eigenvalue problem using coarse and fine mesh acceleration. Finally, while NDA does not offer an advantage over CMFD for fine mesh acceleration, it reduces the iteration count required for convergence by almost a factor of two in the case of coarse mesh acceleration.
Development and acceleration of an unstructured mesh-based CFD solver
NASA Astrophysics Data System (ADS)
Emelyanov, V.; Karpenko, A.; Volkov, K.
2017-06-01
The study was undertaken as part of a larger effort to establish a common computational fluid dynamics (CFD) code for simulation of internal and external flows and involves some basic validation studies. The governing equations are solved with a finite volume code on unstructured meshes. The computational procedure involves reconstruction of the solution in each control volume and extrapolation of the unknowns to find the flow variables on the faces of the control volume, solution of a Riemann problem for each face of the control volume, and evolution of the solution over the time step. The nonlinear CFD solver works in an explicit time-marching fashion, based on a three-step Runge-Kutta stepping procedure. Convergence to a steady state is accelerated by the use of a geometric technique and by the application of Jacobi preconditioning for high-speed flows, with a separate low Mach number preconditioning method for use with low-speed flows. The CFD code is implemented on graphics processing units (GPUs). Speedup of the solution on GPUs with respect to solution on central processing units (CPUs) is compared for different meshes and different methods of distributing input data into blocks. The results obtained provide a promising perspective for designing a GPU-based software framework for applications in CFD.
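The three-step Runge-Kutta time marching can be sketched with the standard SSP-RK3 scheme (an assumption; the abstract does not give the solver's coefficients) driving a first-order upwind finite-volume discretization of 1D linear advection, a minimal stand-in for the reconstruct/Riemann-solve/evolve loop described above.

```python
import numpy as np

# 1D linear advection u_t + c u_x = 0 on a periodic domain, upwind
# finite-volume fluxes, advanced with three-stage SSP-RK3.
nx, c, L = 100, 1.0, 1.0
dx = L / nx
dt = 0.5 * dx / c                        # CFL number 0.5
x = (np.arange(nx) + 0.5) * dx           # cell centres
u = np.exp(-200.0 * (x - 0.5) ** 2)      # initial Gaussian pulse
mass0 = u.sum() * dx                     # conserved total "mass"

def residual(u):
    """Upwind finite-volume residual: du/dt = -c (u_i - u_{i-1}) / dx."""
    return -c * (u - np.roll(u, 1)) / dx

for _ in range(int(round(L / (c * dt)))):    # advect one full period
    u1 = u + dt * residual(u)
    u2 = 0.75 * u + 0.25 * (u1 + dt * residual(u1))
    u = u / 3.0 + (2.0 / 3.0) * (u2 + dt * residual(u2))
# After one period the pulse returns to its start, smeared by upwind diffusion.
```

First-order upwinding is diffusive, so the pulse keeps its mass exactly while losing peak amplitude; a production solver would pair the RK stepper with higher-order reconstruction, as the abstract describes.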
Optimal Run Strategies in Monte Carlo Iterated Fission Source Simulations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Romano, Paul K.; Lund, Amanda L.; Siegel, Andrew R.
2017-06-19
The method of successive generations used in Monte Carlo simulations of nuclear reactor models is known to suffer from intergenerational correlation between the spatial locations of fission sites. One consequence of the spatial correlation is that the convergence rate of the variance of the mean for a tally becomes worse than O(N^-1). In this work, we consider how the true variance can be minimized, given a total amount of work available, as a function of the number of source particles per generation, the number of active/discarded generations, and the number of independent simulations. We demonstrate through both analysis and simulation that under certain conditions the solution time for highly correlated reactor problems may be significantly reduced either by running an ensemble of multiple independent simulations or simply by increasing the generation size to the extent that it is practical. However, if too many simulations or too large a generation size is used, the large fraction of source particles discarded can result in an increase in variance. We also show that there is a strong incentive to reduce the number of generations discarded through some source convergence acceleration technique. Furthermore, we discuss the efficient execution of large simulations on a parallel computer; we argue that several practical considerations favor using an ensemble of independent simulations over a single simulation with a very large generation size.
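The correlation penalty on the variance of the mean can be demonstrated with a toy AR(1) sequence standing in for correlated fission generations: for lag-one correlation rho, the variance of the sample mean is inflated over the uncorrelated value by roughly (1 + rho)/(1 - rho). All parameters below are illustrative; this is not a fission source simulation.

```python
import numpy as np

# Run many independent replicas of an AR(1) chain and compare the observed
# variance of the sample mean with the naive (uncorrelated) sigma^2 / N.
rng = np.random.default_rng(2)
rho, N, reps = 0.8, 1000, 400

x = rng.normal(0.0, np.sqrt(1.0 / (1 - rho**2)), reps)  # stationary start
acc = np.zeros(reps)
for _ in range(N):          # advance all replicas one "generation" at a time
    acc += x
    x = rho * x + rng.normal(0.0, 1.0, reps)
means = acc / N

var_mean = means.var()                        # observed variance of the mean
naive = (1.0 / (1 - rho**2)) / N              # what uncorrelated samples give
inflation = var_mean / naive                  # close to (1 + rho) / (1 - rho)
print(f"observed inflation {inflation:.1f}, theory {(1 + rho) / (1 - rho):.1f}")
```

The inflation factor is why a single long correlated chain converges more slowly than O(N^-1) effective behavior would suggest, and why an ensemble of independent simulations can pay off despite the extra discarded generations.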
NASA Astrophysics Data System (ADS)
Arisa, D.; Heki, K.
2014-12-01
The Izu-Bonin islands lie along the convergent boundary between the subducting Pacific plate (PA) and the overriding Philippine Sea plate (PH) in the western Pacific. Nishimura (2011), studying the horizontal velocities of GNSS stations on the Izu islands, found that back-arc rifting continues behind the Izu arc. Here we show, using GNSS data at the Aogashima, Hachijojima, and Mikurajima stations, that this rifting accelerated in 2004. The back-arc rifting behind the Izu islands can be seen as an increasing distance between stations in the Izu-Bonin islands and stations located in the stable part of PH. Their movement showed clear acceleration around the third quarter of 2004. Obtaining the Euler vector of PH is necessary to analyze the movement of each station relative to the other stations on the same plate. Analysis of the GPS time series leads to an initial conclusion that the accelerated movement started in the third quarter of 2004. This timing is close to the earthquake of May 29, 2004 in the Nankai Trough and the September 5, 2004 earthquake near the triple junction at the Sagami Trough. The analysis shows that the accelerated movement was not the afterslip of either of these earthquakes; rather, one of them appears to have triggered these areas to move faster and farther than before. We first rule out candidate causes by constraining the onset time of the accelerated movement and correlating it with the earthquakes. The May 29, 2004 earthquake (M6.5) at the PA-PH boundary clearly lacked the jump that should mark the onset of the eastward slow movement; moreover, the velocity vectors do not converge toward its epicenter, and the onset time that minimizes the post-fit residual is significantly later than May. We therefore conclude that the accelerated movement that started in 2004 was not afterslip of the May 29 interplate earthquake.
Instead, the onset time coincides with the occurrence of the September 5, 2004 earthquake. The accelerated movement vectors of these islands are almost parallel with each other and perpendicular to the rift axis. We hypothesize that the seismic waves radiated from the epicenter of this earthquake dynamically triggered the acceleration of back-arc opening in the Izu arc.
Slip flow through a converging microchannel: experiments and 3D simulations
NASA Astrophysics Data System (ADS)
Varade, Vijay; Agrawal, Amit; Pradeep, A. M.
2015-02-01
An experimental and 3D numerical study of gaseous slip flow through a converging microchannel is presented in this paper. The measurements reported are with nitrogen gas flowing through microchannels with convergence angles (4°, 8° and 12°), hydraulic diameters (118, 147 and 177 µm) and lengths (10, 20 and 30 mm). The measurements cover the entire slip flow regime and a part of the continuum and transition regimes (the Knudsen number is between 0.0004 and 0.14); the flow is laminar (the Reynolds number is between 0.5 and 1015). The static pressure drop is measured for various mass flow rates. The overall pressure drop increases with a decrease in the convergence angle and has a relatively large contribution from the viscous component. The numerical solutions of the Navier-Stokes equations with Maxwell's slip boundary condition reveal two different flow behaviors: uniform centerline velocity with linear pressure variation in the initial and middle parts of the microchannel, and flow acceleration with nonlinear pressure variation in the last part of the microchannel. The centerline velocity and the wall shear stress increase with a decrease in the convergence angle. The concept of a characteristic length scale for a converging microchannel is also explored. The location of the characteristic length is a function of the Knudsen number and approaches the microchannel outlet with rarefaction. These results on gaseous slip flow through converging microchannels are observed to be considerably different from those for continuum flow.
Techniques for Conducting Effective Concept Design and Design-to-Cost Trade Studies
NASA Technical Reports Server (NTRS)
Di Pietro, David A.
2015-01-01
Concept design plays a central role in project success as its product effectively locks the majority of system life cycle cost. Such extraordinary leverage presents a business case for conducting concept design in a credible fashion, particularly for first-of-a-kind systems that advance the state of the art and that have high design uncertainty. A key challenge, however, is to know when credible design convergence has been achieved in such systems. Using a space system example, this paper characterizes the level of convergence needed for concept design in the context of technical and programmatic resource margins available in preliminary design and highlights the importance of design and cost evaluation learning curves in determining credible convergence. It also provides techniques for selecting trade study cases that promote objective concept evaluation, help reveal unknowns, and expedite convergence within the trade space and conveys general practices for conducting effective concept design-to-cost studies.
Four-Dimensional Golden Search
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fenimore, Edward E.
2015-02-25
The Golden search technique is a method for searching a multiple-dimension space to find the minimum. It subdivides the possible ranges of the parameters until it brackets the minimum to within an arbitrarily small distance. It has the advantages that (1) the function to be minimized can be non-linear; (2) it does not require derivatives of the function; (3) the convergence criterion does not depend on the magnitude of the function (thus, if the function is a goodness-of-fit parameter such as chi-square, convergence does not depend on the noise being correctly estimated or on the function correctly following the chi-square statistic); and (4) the convergence criterion does not depend on the shape of the function, so long shallow surfaces can be searched without the problem of premature convergence. As with many methods, the Golden search technique can be confused by surfaces with multiple minima.
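A one-dimensional golden-section search, the building block of the technique described above, fits in a few lines (this is a generic textbook sketch, not the 4-D implementation itself).

```python
import math

def golden_section(f, a, b, tol=1e-8):
    """Bracket the minimum of a unimodal f on [a, b] to within tol."""
    invphi = (math.sqrt(5.0) - 1.0) / 2.0    # 1/phi, about 0.618
    c, d = b - invphi * (b - a), a + invphi * (b - a)
    fc, fd = f(c), f(d)
    while b - a > tol:
        if fc < fd:                          # minimum lies in [a, d]
            b, d, fd = d, c, fc
            c = b - invphi * (b - a)
            fc = f(c)
        else:                                # minimum lies in [c, b]
            a, c, fc = c, d, fd
            d = a + invphi * (b - a)
            fd = f(d)
    return (a + b) / 2.0

# Needs no derivatives and tolerates non-quadratic objectives:
xmin = golden_section(lambda t: (t - 2.0) ** 2 + 1.0, 0.0, 5.0)
```

Each pass shrinks the bracket by the factor 1/phi while reusing one interior function value, so convergence depends only on the bracket width, not on the magnitude or shape of f, matching advantages (3) and (4) above; applying such a search per parameter range is one naive route to the multi-dimensional version.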
On the Global and Linear Convergence of the Generalized Alternating Direction Method of Multipliers
2012-08-01
The less exact subproblem is solved later (assigned as step 4 of Algorithm 2) because at each iteration the ADM updates the variables in Gauss-Seidel fashion. The objective error of ADM descends at O(1/k), and that of an accelerated version descends at O(1/k^2); work [14] establishes the same rates for a Gauss-Seidel version that requires only one subproblem solve per iteration. [Fig. 5.1: Convergence curves of ADM for the elastic net problem.]
Constructing analytic solutions on the Tricomi equation
NASA Astrophysics Data System (ADS)
Ghiasi, Emran Khoshrouye; Saleh, Reza
2018-04-01
In this paper, the homotopy analysis method (HAM) and the variational iteration method (VIM) are utilized to derive approximate solutions of the Tricomi equation. Afterwards, the HAM is optimized to accelerate the convergence of the series solution by minimizing its square residual error at any order of the approximation. It is found that the effect of the optimal values of the auxiliary parameter on the convergence of the series solution is not negligible. Furthermore, the present results are found to agree well with those obtained through a closed-form equation available in the literature. To conclude, both methods are seen to be effective for obtaining solutions of such partial differential equations.
Internal performance of a hybrid axisymmetric/nonaxisymmetric convergent-divergent nozzle
NASA Technical Reports Server (NTRS)
Taylor, John G.
1991-01-01
An investigation was conducted in the static test facility of the Langley 16-Foot Transonic Tunnel to determine the internal performance of a hybrid axisymmetric/nonaxisymmetric nozzle in forward-thrust mode. Nozzle cross sections in the spherical convergent section were axisymmetric, whereas cross sections in the divergent flap region were nonaxisymmetric (two-dimensional). Nozzle concepts simulating dry and afterburning power settings were investigated. Both subsonic cruise and supersonic cruise expansion ratios were tested for the dry power nozzle concepts. Afterburning power configurations were tested at an expansion ratio typical for subsonic acceleration. The spherical convergent flaps were designed such that the transition from axisymmetric to nonaxisymmetric cross section occurred in the region of the nozzle throat. Three different nozzle throat geometries were tested for each nozzle power setting. High-pressure air was used to simulate jet exhaust at nozzle pressure ratios up to 12.0.
NASA Astrophysics Data System (ADS)
Somoza, R.
1998-05-01
Recently published seafloor data around the Antarctica plate boundaries, as well as calibration of the Cenozoic Magnetic Polarity Time Scale, allow a reevaluation of the Nazca (Farallon)-South America relative convergence kinematics since late Middle Eocene time. The new reconstruction parameters confirm the basic characteristics determined in previous studies. However, two features are notable in the present data set: a strong increase in convergence rate in Late Oligocene time, and a slowdown during Late Miocene time. The former is coeval with the early development of important tectonic characteristics of the present Central Andes, such as compressional failure in wide areas of the region, and the establishment of Late Cenozoic magmatism. This supports the idea that a relationship exists between strong acceleration of convergence and mountain building in the Central Andean region.
A Universal Tare Load Prediction Algorithm for Strain-Gage Balance Calibration Data Analysis
NASA Technical Reports Server (NTRS)
Ulbrich, N.
2011-01-01
An algorithm is discussed that may be used to estimate tare loads of wind tunnel strain-gage balance calibration data. The algorithm was originally developed by R. Galway of IAR/NRC Canada and has been described in the literature for the iterative analysis technique. Basic ideas of Galway's algorithm, however, are universally applicable and work for both the iterative and the non-iterative analysis technique. A recent modification of Galway's algorithm is presented that improves the convergence behavior of the tare load prediction process if it is used in combination with the non-iterative analysis technique. The modified algorithm allows an analyst to use an alternate method for the calculation of intermediate non-linear tare load estimates whenever Galway's original approach does not lead to a convergence of the tare load iterations. It is also shown in detail how Galway's algorithm may be applied to the non-iterative analysis technique. Hand load data from the calibration of a six-component force balance is used to illustrate the application of the original and modified tare load prediction method. During the analysis of the data both the iterative and the non-iterative analysis technique were applied. Overall, predicted tare loads for combinations of the two tare load prediction methods and the two balance data analysis techniques showed excellent agreement as long as the tare load iterations converged. The modified algorithm, however, appears to have an advantage over the original algorithm when absolute voltage measurements of gage outputs are processed using the non-iterative analysis technique. In these situations only the modified algorithm converged because it uses an exact solution of the intermediate non-linear tare load estimate for the tare load iteration.
2014-01-01
We propose a smooth approximation l0-norm constrained affine projection algorithm (SL0-APA) to improve the convergence speed and the steady-state error of the affine projection algorithm (APA) for sparse channel estimation. The proposed algorithm achieves improved convergence speed and steady-state error by incorporating a smooth approximation l0-norm (SL0) penalty on the coefficients into the standard APA cost function; this gives rise to a zero attractor that promotes the sparsity of the channel taps in the channel estimation and hence accelerates convergence and reduces the steady-state error when the channel is sparse. The simulation results demonstrate that the proposed SL0-APA is superior to the standard APA and its sparsity-aware variants in terms of both convergence speed and steady-state behavior in a designated sparse channel. Furthermore, SL0-APA is shown to have smaller steady-state error than previously proposed sparsity-aware algorithms when the number of nonzero taps in the sparse channel increases. PMID:24790588
Parallel/Vector Integration Methods for Dynamical Astronomy
NASA Astrophysics Data System (ADS)
Fukushima, Toshio
1999-01-01
This paper reviews three recent works on numerical methods to integrate ordinary differential equations (ODE), specially designed for parallel, vector, and/or multi-processor-unit (PU) computers. The first is the Picard-Chebyshev method (Fukushima, 1997a). It obtains a global solution of the ODE in the form of a Chebyshev polynomial of large (> 1000) degree by applying the Picard iteration repeatedly. The iteration converges for smooth problems and/or perturbed dynamics. The method runs around 100-1000 times faster in the vector mode than in the scalar mode of a certain computer with vector processors (Fukushima, 1997b). The second is a parallelization of a symplectic integrator (Saha et al., 1997). It regards the implicit midpoint rules covering thousands of timesteps as large-scale nonlinear equations and solves them by fixed-point iteration. The method is applicable to Hamiltonian systems and is expected to lead to an acceleration factor of around 50 on parallel computers with more than 1000 PUs. The last is a parallelization of the extrapolation method (Ito and Fukushima, 1997). It performs trial integrations in parallel; the trial integrations are further accelerated by balancing the computational load among PUs using the technique of folding. The method is all-purpose and achieves an acceleration factor of around 3.5 using several PUs. Finally, we give a perspective on the parallelization of some implicit integrators which require multiple corrections in solving implicit formulas, such as the implicit Hermitian integrators (Makino and Aarseth, 1992; Hut et al., 1995) or the implicit symmetric multistep methods (Fukushima, 1998, 1999).
Measurement of inflight shell areal density near peak velocity using a self backlighting technique
NASA Astrophysics Data System (ADS)
Pickworth, L. A.; Hammel, B. A.; Smalyuk, V. A.; MacPhee, A. G.; Scott, H. A.; Robey, H. F.; Landen, O. L.; Barrios, M. A.; Regan, S. P.; Schneider, M. B.; Hoppe, M., Jr.; Kohut, T.; Holunga, D.; Walters, C.; Haid, B.; Dayton, M.
2016-05-01
The growth of perturbations in inertial confinement fusion (ICF) capsules can lead to significant variation of inflight shell areal density (ρR), ultimately resulting in poor compression and ablator material mixing into the hotspot. As the capsule is accelerated inward, the perturbation growth results from the initial shock transit through the shell and subsequent amplification by Rayleigh-Taylor instability as the shell accelerates inward. Measurements of ρR perturbations near peak implosion velocity (PV) are essential to our understanding of ICF implosions because they reflect the integrity of the capsule, after the inward acceleration growth is complete, including the actual shell perturbations due to native capsule surface roughness and “isolated defects”. Quantitative measurements of shell-ρR perturbations in capsules near PV are challenging, requiring a new method with which to radiograph the shell. The innovative method utilized in this paper is to use the self-emission from the hotspot to “self-backlight” the shell in flight. However, with nominal capsule fills there is insufficient self-emission for this method until the capsule nears peak compression (PC). We produce a sufficiently bright continuum self-emission backlighter through the addition of a high-Z gas (∼1% Ar) to the capsule fill. This provides a significant (∼10x) increase in emission at hν ∼ 8 keV over nominal fills. “Self-backlit” radiographs are obtained at times when the shock is rebounding from the capsule center and expanding out to meet the incoming shell, providing a means to sample the capsule optical density through only one side as it converges through PV.
Method-independent, Computationally Frugal Convergence Testing for Sensitivity Analysis Techniques
NASA Astrophysics Data System (ADS)
Mai, J.; Tolson, B.
2017-12-01
The increasing complexity and runtime of environmental models lead to the current situation that the calibration of all model parameters, or the estimation of all of their uncertainty, is often computationally infeasible. Hence, techniques to determine the sensitivity of model parameters are used to identify the most important parameters. All subsequent model calibrations or uncertainty estimation procedures then focus only on these subsets of parameters and are hence less computationally demanding. While the examination of the convergence of calibration and uncertainty methods is state of the art, the convergence of the sensitivity methods is usually not checked. If it is, bootstrapping of the sensitivity results is used to determine the reliability of the estimated indexes. Bootstrapping, however, might as well become computationally expensive in the case of large model outputs and a high number of bootstraps. We, therefore, present a Model Variable Augmentation (MVA) approach to check the convergence of sensitivity indexes without performing any additional model run. This technique is method- and model-independent. It can be applied either during the sensitivity analysis (SA) or afterwards; the latter case enables the checking of already processed sensitivity indexes. To demonstrate the method's independence of the convergence testing method, we applied it to two widely used, global SA methods: the screening method known as the Morris method or Elementary Effects (Morris 1991) and the variance-based Sobol' method (Sobol' 1993). The new convergence testing method is first scrutinized using 12 analytical benchmark functions (Cuntz & Mai et al. 2015) for which the true indexes of the aforementioned methods are known. This proof of principle shows that the method reliably determines the uncertainty of the SA results when different budgets are used for the SA.
The results show that the new frugal method is able to test the convergence, and therefore the reliability, of SA results in an efficient way. The appealing feature of this new technique is that it requires no further model evaluations and therefore enables the checking of already processed sensitivity results. This is one step towards reliable, transferable, and published sensitivity results.
ERIC Educational Resources Information Center
Watkins, Jim
2012-01-01
Accelerating cross-border investing activity transformed global financial markets during the latter part of the 20th century. Due to a lack of trans-cultural consistency, comparability in financial reporting was compromised, hindering multinational investment. In light thereof, there is a movement afoot among international authorities to converge…
California Policy Options to Accelerate Latino Student Success in Higher Education
ERIC Educational Resources Information Center
Santiago, Deborah A.
2006-01-01
California policy makers and institutional leaders are making critical policy, programmatic, and budgetary decisions affecting segments of the state's population that lack sufficient levels of formal training and education. These decisions are occurring at a time when five critical trends are converging in the state. These trends are: (1)…
Plasma motion in the Venus ionosphere: Transition to supersonic flow
DOE Office of Scientific and Technical Information (OSTI.GOV)
Whitten, R.C.; Barnes, A.; McCormick, P.T.
1991-07-01
A remarkable feature of the ionosphere of Venus is the presence of nightward supersonic flows at high altitude near the terminator. In general, the steady flow of an ideal gas admits a subsonic-supersonic transition only in the presence of special conditions, such as a convergence of the flow followed by divergence, or external forces. In this paper, the authors show that the relatively high pressure dayside plasma wells up slowly, and at high altitude it is accelerated horizontally through a relatively constricted region near the terminator toward the low-density nightside. In effect, the plasma flows through a nozzle that is first converging, then diverging, permitting the transition to supersonic flow. Analysis of results from previously published models of the plasma flow in the upper ionosphere of Venus shows how such a nozzle is formed. The model plasma does indeed accelerate to supersonic speeds, reaching sonic speed just behind the terminator. The computed speeds prove to be close to those observed by the Pioneer Venus orbiter, and the ion transport rates are sufficient to produce and maintain the nightside ionosphere.
Effect of cervicolabyrinthine impulsation on the spinal reflex apparatus
NASA Technical Reports Server (NTRS)
Yarotskiy, A. I.
1980-01-01
In view of the fact that the convergence effect of vestibular impulsation may both stimulate and inhibit intra- and intersystemic coordination of physiological processes, an attempt was made to define the physiological effect of the convergence of cervicolabyrinthine impulsation on the spinal reflex apparatus, using a model of the unconditioned motor reflex as a mechanism of the common final pathway conditioning the formation and realization of a focused beneficial result of human motor activities. More than 100 persons subjected to rolling effect and angular acceleration during complexly coordinated muscular loading were divided according to typical variants of the functional structure of the patellar reflex in an experiment requiring 30 rapid counterclockwise head revolutions at 2/sec, with synchronous recording of a 20-item series of patellar reflex acts. A knee jerk coefficient was used in calculations. In 85 percent of the cases, two patellar reflexograms show typical braking and release of the knee reflex and one shows an extreme local variant. The diagnostic and prognostic value of these tests is suggested for determining adaptive possibilities of functional systems with respect to acceleration and proprioceptive stimuli.
An Adaptive 6-DOF Tracking Method by Hybrid Sensing for Ultrasonic Endoscopes
Du, Chengyang; Chen, Xiaodong; Wang, Yi; Li, Junwei; Yu, Daoyin
2014-01-01
In this paper, a novel hybrid sensing method for tracking an ultrasonic endoscope within the gastrointestinal (GI) tract is presented, and a prototype of the tracking system is also developed. We implement 6-DOF localization by sensing integration and information fusion. On the hardware level, a tri-axis gyroscope and accelerometer and a magnetic angular rate and gravity (MARG) sensor array are attached at the end of the endoscope, and three symmetric cylindrical coils are placed around the patient's abdomen. On the algorithm level, an adaptive fast quaternion convergence (AFQC) algorithm is introduced to determine the orientation by fusing inertial/magnetic measurements, in which the effects of magnetic disturbance and acceleration are estimated to yield an adaptive convergence output. A simplified electro-magnetic tracking (SEMT) algorithm for three-dimensional position is also implemented, which can easily integrate the AFQC's results and magnetic measurements. With reasonable settings, the average position error is under 0.3 cm, and the average orientation error is 1° without noise. If magnetic disturbance or acceleration exists, the average orientation error can be controlled to less than 3.5°. PMID:24915179
Booth, Jonathan; Vazquez, Saulo; Martinez-Nunez, Emilio; Marks, Alison; Rodgers, Jeff; Glowacki, David R; Shalashilin, Dmitrii V
2014-08-06
In this paper, we briefly review the boxed molecular dynamics (BXD) method, which allows analysis of thermodynamics and kinetics in complicated molecular systems. BXD is a multiscale technique in which thermodynamics and long-time dynamics are recovered from a set of short-time simulations. In this paper, we review previous applications of BXD to peptide cyclization, solution phase organic reaction dynamics and desorption of ions from self-assembled monolayers (SAMs). We also report preliminary results of simulations of diamond etching mechanisms and protein unfolding in atomic force microscopy experiments. The latter demonstrate a correlation between the protein's structural motifs and its potential of mean force. Simulation of these processes by standard molecular dynamics (MD) is typically not possible, because the experimental time scales are very long. However, BXD yields well-converged and physically meaningful results. Compared with other methods of accelerated MD, our BXD approach is very simple; it is easy to implement, and it provides an integrated approach for simultaneously obtaining both thermodynamics and kinetics. It also provides a strategy for obtaining statistically meaningful dynamical results in regions of configuration space that standard MD approaches would visit only very rarely.
Accelerated Training for Large Feedforward Neural Networks
NASA Technical Reports Server (NTRS)
Stepniewski, Slawomir W.; Jorgensen, Charles C.
1998-01-01
In this paper we introduce a new training algorithm, the scaled variable metric (SVM) method. Our approach attempts to increase the convergence rate of the modified variable metric method. It is also combined with the RBackprop algorithm, which computes the product of the matrix of second derivatives (Hessian) with an arbitrary vector. The RBackprop method allows us to avoid computationally expensive, direct line searches. In addition, it can be utilized in the new, 'predictive' updating technique of the inverse Hessian approximation. We have used directional slope testing to adjust the step size and found that this strategy works exceptionally well in conjunction with the RBackprop algorithm. Some supplementary, but nevertheless important, enhancements to the basic training scheme, such as an improved setting of the scaling factor for the variable metric update and a computationally more efficient procedure for updating the inverse Hessian approximation, are presented as well. We summarize by comparing the SVM method with four first- and second-order optimization algorithms, including a very effective implementation of the Levenberg-Marquardt method. Our tests indicate promising computational speed gains of the new training technique, particularly for large feedforward networks, i.e., for problems where the training process may be the most laborious.
Rotational Acceleration during Head Impact Resulting from Different Judo Throwing Techniques
MURAYAMA, Haruo; HITOSUGI, Masahito; MOTOZAWA, Yasuki; OGINO, Masahiro; KOYAMA, Katsuhiro
2014-01-01
Most severe head injuries in judo are reported as acute subdural hematoma. It is thus necessary to examine the rotational acceleration of the head to clarify the mechanism of head injuries. We determined the rotational acceleration of the head when the subject is thrown by judo techniques. One Japanese male judo expert threw an anthropomorphic test device using two throwing techniques, Osoto-gari and Ouchi-gari. Rotational and translational head accelerations were measured with and without an under-mat. For Osoto-gari, peak resultant rotational acceleration ranged from 4,284.2 rad/s² to 5,525.9 rad/s² and peak resultant translational acceleration ranged from 64.3 g to 87.2 g; for Ouchi-gari, the accelerations respectively ranged from 1,708.0 rad/s² to 2,104.1 rad/s² and from 120.2 g to 149.4 g. The resultant rotational acceleration did not decrease with installation of an under-mat for either Ouchi-gari or Osoto-gari. We found that head contact with the tatami could result in the peak values of translational and rotational accelerations. In general, because the kinematics of the body strongly affects translational and rotational accelerations of the head, both accelerations should be measured to analyze the underlying mechanism of head injury. As a primary preventative measure, throwing techniques should be restricted to participants demonstrating ability in ukemi techniques to avoid head contact with the tatami. PMID:24477065
BIOCONAID System (Bionic Control of Acceleration Induced Dimming). Final Report.
ERIC Educational Resources Information Center
Rogers, Dana B.; And Others
The system described represents a new technique for enhancing the fidelity of flight simulators during high acceleration maneuvers. This technique forces the simulator pilot into active participation and energy expenditure similar to the aircraft pilot undergoing actual accelerations. The Bionic Control of Acceleration Induced Dimming (BIOCONAID)…
Banerjee, Amartya S.; Suryanarayana, Phanish; Pask, John E.
2016-01-21
Pulay's Direct Inversion in the Iterative Subspace (DIIS) method is one of the most widely used mixing schemes for accelerating the self-consistent solution of electronic structure problems. In this work, we propose a simple generalization of DIIS in which Pulay extrapolation is performed at periodic intervals rather than on every self-consistent field iteration, with linear mixing performed on all other iterations. We then demonstrate through numerical tests on a wide variety of materials systems, in the framework of density functional theory, that the proposed generalization of Pulay's method significantly improves its robustness and efficiency.
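The periodic-Pulay idea can be sketched for a generic fixed-point problem x = g(x). This is a minimal illustration assuming an Anderson/DIIS-type extrapolation over a short residual history; the function name and the default parameters (beta, period, m) are illustrative choices, not the authors' electronic-structure implementation:

```python
import numpy as np

def periodic_pulay(g, x0, beta=0.2, period=4, m=5, tol=1e-10, maxiter=500):
    """Solve the fixed point x = g(x).

    Linear mixing x <- x + beta*(g(x) - x) on most iterations; every
    `period`-th iteration, a Pulay (DIIS) extrapolation over the last
    `m` stored (iterate, residual) pairs replaces the linear step.
    """
    x = np.asarray(x0, dtype=float)
    X, F = [], []  # histories of iterates and residuals
    for it in range(1, maxiter + 1):
        f = g(x) - x  # residual of the fixed-point map
        if np.linalg.norm(f) < tol:
            return x, it
        X.append(x.copy()); F.append(f.copy())
        if len(X) > m:
            X.pop(0); F.pop(0)
        if it % period == 0 and len(X) >= 2:
            # Pulay step: least-squares combination of residual
            # differences that best cancels the current residual
            dX = np.array([X[i + 1] - X[i] for i in range(len(X) - 1)]).T
            dF = np.array([F[i + 1] - F[i] for i in range(len(F) - 1)]).T
            gamma, *_ = np.linalg.lstsq(dF, f, rcond=None)
            x = x + beta * f - (dX + beta * dF) @ gamma
        else:
            x = x + beta * f  # plain linear mixing
    return x, maxiter
```

On linear model problems the extrapolation step recovers the fixed point once the residual differences span the space; for self-consistent field iterations, the periodic schedule interleaves cheap damped steps with occasional extrapolations, which is the robustness/efficiency trade the abstract describes.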
NASA Astrophysics Data System (ADS)
Quan, Haiyang; Wu, Fan; Hou, Xi
2015-10-01
A new method for reconstructing rotationally asymmetric surface deviation with pixel-level spatial resolution is proposed. It is based on a basic iterative scheme and accelerates the Gauss-Seidel method by introducing an acceleration parameter. This modified successive over-relaxation (SOR) method is effective for solving the rotationally asymmetric components with pixel-level spatial resolution, without the use of a fitting procedure. Compared to the Jacobi and Gauss-Seidel methods, the modified SOR method with an optimal relaxation factor converges much faster and saves computational cost and memory without reducing accuracy, as verified by real experimental results.
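The relaxation idea can be sketched for a generic linear system Ax = b (a textbook SOR loop, not the authors' pixel-level reconstruction code; omega is the relaxation factor, with omega = 1 recovering Gauss-Seidel and 1 < omega < 2 over-relaxing):

```python
import numpy as np

def sor_solve(A, b, omega=1.2, tol=1e-10, maxiter=10000):
    """Successive over-relaxation for A x = b.

    Each sweep updates x in Gauss-Seidel order, then blends the
    Gauss-Seidel value with the previous value via omega.
    """
    n = len(b)
    x = np.zeros(n)
    for it in range(1, maxiter + 1):
        x_prev = x.copy()
        for i in range(n):
            # contributions from already-updated and not-yet-updated entries
            sigma = A[i, :i] @ x[:i] + A[i, i + 1:] @ x[i + 1:]
            x[i] = (1.0 - omega) * x[i] + omega * (b[i] - sigma) / A[i, i]
        if np.linalg.norm(x - x_prev, np.inf) < tol:
            return x, it
    return x, maxiter
```

For symmetric positive definite A the sweep converges for any 0 < omega < 2; as the abstract notes, an optimal relaxation factor can cut the iteration count substantially relative to omega = 1.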
Accelerometer Data Analysis and Presentation Techniques
NASA Technical Reports Server (NTRS)
Rogers, Melissa J. B.; Hrovat, Kenneth; McPherson, Kevin; Moskowitz, Milton E.; Reckart, Timothy
1997-01-01
The NASA Lewis Research Center's Principal Investigator Microgravity Services project analyzes Orbital Acceleration Research Experiment and Space Acceleration Measurement System data for principal investigators of microgravity experiments. Principal investigators need a thorough understanding of data analysis techniques so that they can request appropriate analyses to best interpret accelerometer data. Accelerometer data sampling and filtering is introduced along with the related topics of resolution and aliasing. Specific information about the Orbital Acceleration Research Experiment and Space Acceleration Measurement System data sampling and filtering is given. Time domain data analysis techniques are discussed and example environment interpretations are made using plots of acceleration versus time, interval average acceleration versus time, interval root-mean-square acceleration versus time, trimmean acceleration versus time, quasi-steady three dimensional histograms, and prediction of quasi-steady levels at different locations. An introduction to Fourier transform theory and windowing is provided along with specific analysis techniques and data interpretations. The frequency domain analyses discussed are power spectral density versus frequency, cumulative root-mean-square acceleration versus frequency, root-mean-square acceleration versus frequency, one-third octave band root-mean-square acceleration versus frequency, and power spectral density versus frequency versus time (spectrogram). Instructions for accessing NASA Lewis Research Center accelerometer data and related information using the internet are provided.
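One of the time-domain quantities listed above, interval root-mean-square acceleration versus time, can be sketched as follows (a minimal illustration; the function name and arguments are hypothetical, not PIMS code):

```python
import math

def interval_rms(samples, fs, interval_s):
    """Interval RMS acceleration: one RMS value per non-overlapping
    window of interval_s seconds, for a 1-D record sampled at fs Hz.
    Trailing samples that do not fill a window are dropped.
    """
    n = int(fs * interval_s)  # samples per interval
    out = []
    for start in range(0, len(samples) - n + 1, n):
        window = samples[start:start + n]
        out.append(math.sqrt(sum(v * v for v in window) / n))
    return out
```

Plotted against the window start times, these values give the interval RMS acceleration versus time curve described above; the other interval statistics (interval average, trimmean) follow the same windowing pattern with a different reduction.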
Emerging interdisciplinary fields in the coming intelligence/convergence era
NASA Astrophysics Data System (ADS)
Noor, Ahmed K.
2012-09-01
Dramatic advances are on the horizon resulting from the rapid pace of development of several technologies, including computing, communication, mobile, robotic, and interactive technologies. These advances, along with the trend towards convergence of traditional engineering disciplines with physical, life, and other science disciplines, will result in the development of new interdisciplinary fields, as well as in new paradigms for engineering practice, in the coming intelligence/convergence era (post-information age). The interdisciplinary fields include Cyber Engineering, Living Systems Engineering, Biomechatronics/Robotics Engineering, Knowledge Engineering, Emergent/Complexity Engineering, and Multiscale Systems Engineering. The paper identifies some of the characteristics of the intelligence/convergence era, gives a broad definition of convergence, describes some of the emerging interdisciplinary fields, and lists some of the academic and other organizations working in these disciplines. The need for establishing a Hierarchical Cyber-Physical Ecosystem to facilitate interdisciplinary collaborations and accelerate the development of a skilled workforce in the new fields is described, and the major components of the ecosystem are listed. The new interdisciplinary fields will yield critical advances in engineering practice and help in addressing future challenges in a broad array of sectors, from manufacturing to energy, transportation, climate, and healthcare. They will also enable building large future complex adaptive systems-of-systems, such as intelligent multimodal transportation systems, optimized multi-energy systems, intelligent disaster prevention systems, and smart cities.
Convergence of Chahine's nonlinear relaxation inversion method used for limb viewing remote sensing
NASA Technical Reports Server (NTRS)
Chu, W. P.
1985-01-01
The application of Chahine's (1970) inversion technique to remote sensing problems utilizing the limb viewing geometry is discussed. The problem considered here involves occultation-type measurements and limb radiance-type measurements from either spacecraft or balloon platforms. The kernel matrix of the inversion problem is either an upper or lower triangular matrix. It is demonstrated that the Chahine inversion technique always converges, provided the diagonal elements of the kernel matrix are nonzero.
Vaidya, Sharad; Parkash, Hari; Bhargava, Akshay; Gupta, Sharad
2014-01-01
Abundant resources and techniques have been used for complete-coverage crown fabrication. Conventional investing and casting procedures for phosphate-bonded investments require a 2- to 4-h procedure before completion. Accelerated casting techniques have been used, but may not result in castings with matching marginal accuracy. The study measured the marginal gap and determined the clinical acceptability of single cast copings invested in a phosphate-bonded investment using conventional and accelerated methods. One hundred and twenty cast coping samples were fabricated using conventional and accelerated methods, with three finish lines: chamfer, shoulder, and shoulder with bevel. Sixty copings were prepared with each technique. Each coping was examined with a stereomicroscope at four predetermined sites, and measurements of marginal gaps were documented for each. A master chart was prepared for all the data, which were analyzed using the Statistical Package for the Social Sciences. Evidence of marginal gap was evaluated by t-test; analysis of variance and post-hoc analysis were used to compare the two groups as well as to make comparisons between the three subgroups. The measurements recorded showed no statistically significant difference between the conventional and accelerated groups. Among the three marginal designs studied, shoulder with bevel showed the best marginal fit with both conventional and accelerated casting techniques. The accelerated casting technique could be a viable alternative to the time-consuming conventional casting technique; the marginal fit between the two casting techniques showed no statistical difference.
NASA Astrophysics Data System (ADS)
Singh, Randhir; Das, Nilima; Kumar, Jitendra
2017-06-01
An effective analytical technique is proposed for the solution of the Lane-Emden equations. The proposed technique is based on the variational iteration method (VIM) and the convergence control parameter h. In order to avoid solving a sequence of nonlinear algebraic equations or complicated integrals for the derivation of the unknown constant, the boundary conditions are used before designing the recursive scheme for the solution. Series solutions are found that converge rapidly to the exact solution. Convergence analysis and error bounds are discussed. The accuracy and applicability of the method are examined by solving three singular problems: i) the nonlinear Poisson-Boltzmann equation, ii) the distribution of heat sources in the human head, iii) the second-kind Lane-Emden equation.
NASA Astrophysics Data System (ADS)
Colby, Eric R.; Len, L. K.
Most particle accelerators today are expensive devices found only in the largest laboratories, industries, and hospitals. Using techniques developed nearly a century ago, the limiting performance of these accelerators is often traceable to material limitations, power source capabilities, and the cost tolerance of the application. Advanced accelerator concepts aim to increase the gradient of accelerators by orders of magnitude, using new power sources (e.g. lasers and relativistic beams) and new materials (e.g. dielectrics, metamaterials, and plasmas). Worldwide, research in this area has grown steadily in intensity since the 1980s, resulting in demonstrations of accelerating gradients that are orders of magnitude higher than for conventional techniques. While research is still in the early stages, these techniques have begun to demonstrate the potential to radically change accelerators, making them much more compact, and extending the reach of these tools of science into the angstrom and attosecond realms. Maturation of these techniques into robust, engineered devices will require sustained interdisciplinary, collaborative R&D and coherent use of test infrastructure worldwide. The outcome can potentially transform how accelerators are used.
NASA Astrophysics Data System (ADS)
Artemenko, I. I.; Golovanov, A. A.; Kostyukov, I. Yu.; Kukushkina, T. M.; Lebedev, V. S.; Nerush, E. N.; Samsonov, A. S.; Serebryakov, D. A.
2016-12-01
Studies of phenomena accompanying the interaction of superstrong electromagnetic fields with matter, in particular, the generation of an electron-positron plasma, acceleration of electrons and ions, and the generation of hard electromagnetic radiation are briefly reviewed. The possibility of using thin films to initiate quantum electrodynamics cascades in the field of converging laser pulses is analyzed. A model is developed to describe the formation of a plasma cavity behind a laser pulse in the transversely inhomogeneous plasma and the generation of betatron radiation by electrons accelerated in this cavity. Features of the generation of gamma radiation, as well as the effect of quantum electrodynamics effects on the acceleration of ions, at the interaction of intense laser pulses with solid targets are studied.
Farr, W. M.; Mandel, I.; Stevens, D.
2015-01-01
Selection among alternative theoretical models given an observed dataset is an important challenge in many areas of physics and astronomy. Reversible-jump Markov chain Monte Carlo (RJMCMC) is an extremely powerful technique for performing Bayesian model selection, but it suffers from a fundamental difficulty: it requires jumps between model parameter spaces, but cannot efficiently explore both parameter spaces at once. Thus, a naive jump between parameter spaces is unlikely to be accepted in the Markov chain Monte Carlo (MCMC) algorithm, and convergence is correspondingly slow. Here, we demonstrate an interpolation technique that uses samples from single-model MCMCs to propose intermodel jumps from an approximation to the single-model posterior of the target parameter space. The interpolation technique, based on a kD-tree data structure, is adaptive and efficient in modest dimensionality. We show that our technique leads to improved convergence over naive jumps in an RJMCMC, and compare it to other proposals in the literature to improve the convergence of RJMCMCs. We also demonstrate the use of the same interpolation technique as a way to construct efficient ‘global’ proposal distributions for single-model MCMCs without prior knowledge of the structure of the posterior distribution, and discuss improvements that permit the method to be used efficiently in higher-dimensional spaces. PMID:26543580
Doran, Kara S.; Howd, Peter A.; Sallenger, Asbury H.
2016-01-04
Recent studies, and most of their predecessors, use tide gage data to quantify SL acceleration, ASL(t). In the current study, three techniques were used to calculate acceleration from tide gage data, and of those examined, it was determined that the two techniques based on sliding a regression window through the time series are more robust compared to the technique that fits a single quadratic form to the entire time series, particularly if there is temporal variation in the magnitude of the acceleration. The single-fit quadratic regression method has been the most commonly used technique in determining acceleration in tide gage data. The inability of the single-fit method to account for time-varying acceleration may explain some of the inconsistent findings between investigators. Properly quantifying ASL(t) from field measurements is of particular importance in evaluating numerical models of past, present, and future SLR resulting from anticipated climate change.
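The sliding-window regression idea described above can be illustrated with a small sketch: fit a quadratic to each window of a sea-level record and read the acceleration off the leading coefficient. This is a hypothetical minimal implementation (the function names and the synthetic record are invented for the example), not the authors' processing code; times are centered within each window purely for numerical stability.

```python
def quad_accel(t, h):
    """Least-squares quadratic fit h ~ c0 + c1*tau + c2*tau^2, with
    tau = t - mean(t) for stability; the acceleration estimate is 2*c2."""
    n = len(t)
    tm = sum(t) / n
    tc = [ti - tm for ti in t]
    S = [sum(ti ** k for ti in tc) for k in range(5)]
    rhs = [sum(h[i] * tc[i] ** k for i in range(n)) for k in range(3)]
    A = [[S[0], S[1], S[2]],
         [S[1], S[2], S[3]],
         [S[2], S[3], S[4]]]
    # solve the 3x3 normal equations by Gaussian elimination with pivoting
    for col in range(3):
        piv = max(range(col, 3), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        rhs[col], rhs[piv] = rhs[piv], rhs[col]
        for r in range(col + 1, 3):
            f = A[r][col] / A[col][col]
            for c in range(col, 3):
                A[r][c] -= f * A[col][c]
            rhs[r] -= f * rhs[col]
    coef = [0.0, 0.0, 0.0]
    for r in (2, 1, 0):
        coef[r] = (rhs[r] - sum(A[r][k] * coef[k] for k in range(r + 1, 3))) / A[r][r]
    return 2.0 * coef[2]

def sliding_accel(t, h, window):
    """Acceleration time series from a regression window slid along the record."""
    return [quad_accel(t[i:i + window], h[i:i + window])
            for i in range(len(t) - window + 1)]

# synthetic record with constant acceleration 0.02 (units arbitrary)
t = [float(i) for i in range(30)]
h = [100.0 + 3.0 * ti + 0.5 * 0.02 * ti * ti for ti in t]
acc_series = sliding_accel(t, h, 10)
```

Unlike the single-fit quadratic, the windowed estimates form a time series, which is what allows temporal variation in the acceleration to be resolved.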
Convergence behavior of delayed discrete cellular neural network without periodic coefficients.
Wang, Jinling; Jiang, Haijun; Hu, Cheng; Ma, Tianlong
2014-05-01
In this paper, we study the convergence behavior of delayed discrete cellular neural networks without periodic coefficients. By applying mathematical analysis techniques and the properties of inequalities, some sufficient conditions are derived to ensure that all solutions of such networks converge to a periodic function. Finally, some examples showing the effectiveness of the provided criterion are given. Copyright © 2014 Elsevier Ltd. All rights reserved.
NASA Technical Reports Server (NTRS)
Banks, H. T.; Ito, K.
1988-01-01
Numerical techniques for parameter identification in distributed-parameter systems are developed analytically. A general convergence and stability framework (for continuous dependence on observations) is derived for first-order systems on the basis of (1) a weak formulation in terms of sesquilinear forms and (2) the resolvent convergence form of the Trotter-Kato approximation. The extension of this framework to second-order systems is considered.
Weak lensing probe of cubic Galileon model
NASA Astrophysics Data System (ADS)
Dinda, Bikash R.
2018-06-01
The cubic Galileon model, containing the lowest non-trivial order of the full Galileon action, can produce stable late-time cosmic acceleration. This model can play a significant role in the growth of structures. The signatures of the cubic Galileon model in structure formation can be probed by weak-lensing statistics. Weak-lensing convergence statistics are among the strongest probes of structure formation and hence can probe dark energy or modified theories of gravity. In this work, we investigate the detectability of departures of the cubic Galileon model from the ΛCDM model or from the canonical quintessence model through the convergence power spectrum and bispectrum.
Measurement of bi-directional ion acceleration along a convergent-divergent magnetic nozzle
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang, Yunchao, E-mail: yunchao.zhang@anu.edu.au; Charles, Christine; Boswell, Rod
Bi-directional plasma expansion resulting in the formation of ion beams travelling in opposite directions is respectively measured in the converging and diverging parts of a magnetic nozzle created using a low-pressure helicon radio-frequency plasma source. The axial profile of ion saturation current along the nozzle is closely correlated to that of the magnetic flux density, and the ion “swarm” has a zero convective velocity at the magnetic throat where plasma generation is localized, thereby balancing the bi-directional particle loss. The ion beam potentials measured on both sides of the magnetic nozzle show results consistent with the maximum plasma potential measured at the throat.
NASA Astrophysics Data System (ADS)
Zhang, Qun; Yang, Yanfu; Xiang, Qian; Zhou, Zhongqing; Yao, Yong
2018-02-01
A joint compensation scheme based on a cascaded Kalman filter is proposed, which can implement polarization tracking, channel equalization, frequency offset compensation, and phase noise compensation simultaneously. The experimental results show that the proposed algorithm can not only compensate multiple channel impairments simultaneously but also improve the polarization tracking capacity and accelerate the convergence speed. The scheme has up to eight times faster convergence speed compared with radius-directed equalizer (RDE) + Max-FFT (maximum fast Fourier transform) + BPS (blind phase search), and can track polarization rotation 60 times and 15 times faster than RDE + Max-FFT + BPS and CMMA (cascaded multimodulus algorithm) + Max-FFT + BPS, respectively.
NASA Astrophysics Data System (ADS)
Parekh, Devang; Nguyen, Nguyen X.
2018-02-01
The recent advent of ultra-high-definition television (also known as Ultra HD television, Ultra HD, UHDTV, UHD and Super Hi-Vision) has accelerated demand for a Fiber-in-the-Premises video communication (VCOM) solution that converges toward 100 Gbps and beyond. Hybrid active optical cables (AOCs) are a holistic connectivity platform well suited for this "last yard" connectivity, as they combine copper and fiber optics to deliver the high data rate and power transmission needed. While technically feasible yet challenging to manufacture, hybrid AOCs could be the holy-grail fiber-optics solution that dwarfs the volume of both telecom and datacom connections in the foreseeable future.
A far-field non-reflecting boundary condition for two-dimensional wake flows
NASA Technical Reports Server (NTRS)
Danowitz, Jeffrey S.; Abarbanel, Saul A.; Turkel, Eli
1995-01-01
Far-field boundary conditions for external flow problems have been developed based upon long-wave perturbations of linearized flow equations about a steady-state far-field solution. The boundary condition improves convergence to steady state in single-grid temporal integration schemes using both regular time-stepping and local time-stepping. The far-field boundary may be placed near the trailing edge of the body, which significantly reduces the number of grid points, and therefore the computational time, of the numerical calculation. In addition, the solution produced is smoother in the far field than when using extrapolation conditions. The boundary condition maintains the convergence rate to steady state in schemes utilizing multigrid acceleration.
Dominant takeover regimes for genetic algorithms
NASA Technical Reports Server (NTRS)
Noever, David; Baskaran, Subbiah
1995-01-01
The genetic algorithm (GA) is a machine-based optimization routine which connects evolutionary learning to natural genetic laws. The present work addresses the problem of obtaining the dominant takeover regimes in the GA dynamics. Estimated GA run times are computed for slow and fast convergence in the limits of high and low fitness ratios. Using Euler's device for obtaining partial sums in closed forms, the result relaxes the previously held requirements for long time limits. Analytical solutions reveal that appropriately accelerated regimes can mark the ascendancy of the most fit solution. In virtually all cases, the weak (logarithmic) dependence of convergence time on problem size demonstrates the potential for the GA to solve large NP-complete problems.
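The notion of a takeover regime can be illustrated with the standard proportionate-selection recurrence (a Goldberg-and-Deb-style model, not taken from this paper): with fitness ratio r between the best class and the rest, the expected proportion p of the best class evolves as p' = r*p / (1 + (r-1)*p). The function names and parameters below are invented for the sketch; note how a 100-fold larger population only roughly doubles the takeover time, echoing the logarithmic dependence on size noted in the abstract.

```python
def takeover_curve(r, p0, steps):
    """Proportion of the fittest class under proportionate selection with
    fitness ratio r: p' = r*p / (1 + (r - 1)*p)."""
    p, out = p0, [p0]
    for _ in range(steps):
        p = r * p / (1.0 + (r - 1.0) * p)
        out.append(p)
    return out

def takeover_time(r, n):
    """Generations until the best solution fills all but one slot of a
    population of size n, starting from a single copy."""
    p, gens = 1.0 / n, 0
    while p < 1.0 - 1.0 / n:
        p = r * p / (1.0 + (r - 1.0) * p)
        gens += 1
    return gens

t_small = takeover_time(2.0, 100)      # population of 100
t_large = takeover_time(2.0, 10000)    # 100x larger population, ~2x the time
curve = takeover_curve(2.0, 0.01, 3)
```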
Jadhav, Vivek Dattatray; Motwani, Bhagwan K; Shinde, Jitendra; Adhapure, Prasad
2017-01-01
The aim of this study was to evaluate the marginal fit and surface roughness of complete cast crowns made by a conventional and an accelerated casting technique. This study was divided into three parts. In Part I, the marginal fit of full metal crowns made by both casting techniques was checked in the vertical direction; in Part II, the fit of sectional metal crowns made by both casting techniques was checked in the horizontal direction; and in Part III, the surface roughness of disc-shaped metal plate specimens made by both casting techniques was checked. A conventional technique was compared with an accelerated technique. In Part I of the study, the marginal fit of the full metal crowns, and in Part II, the horizontal fit of the sectional metal crowns made by both casting techniques were determined; in Part III, the surface roughness of castings made with the same techniques was compared. The results of the t-test and independent-samples test do not indicate statistically significant differences in the marginal discrepancy detected between the two casting techniques. For the marginal discrepancy and surface roughness, crowns fabricated with the accelerated technique were significantly different from those fabricated with the conventional technique. The accelerated casting technique showed quite satisfactory results, but the conventional technique was superior in terms of marginal fit and surface roughness.
The solution of transcendental equations
NASA Technical Reports Server (NTRS)
Agrawal, K. M.; Outlaw, R.
1973-01-01
Some of the existing methods for globally approximating the roots of transcendental equations, namely Graeffe's method, are studied. Summation of the reciprocated roots, the Whittaker-Bernoulli method, and the extension of Bernoulli's method via Koenig's theorem are presented. Aitken's delta-squared process is used to accelerate the convergence. Finally, the suitability of these methods is discussed in various cases.
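Aitken's delta-squared process mentioned above is simple to sketch. The following minimal Python version (function name invented for the example) accelerates a linearly convergent sequence, here the fixed-point iteration x_{n+1} = cos(x_n):

```python
import math

def aitken(seq):
    """Aitken's delta-squared acceleration: from s_n, s_{n+1}, s_{n+2}
    form s_n - (ds_n)^2 / (d2s_n), which converges faster than s_n
    for linearly convergent sequences."""
    out = []
    for i in range(len(seq) - 2):
        d1 = seq[i + 1] - seq[i]
        d2 = seq[i + 2] - 2.0 * seq[i + 1] + seq[i]
        out.append(seq[i] - d1 * d1 / d2 if d2 != 0.0 else seq[i + 2])
    return out

# fixed-point iteration x_{n+1} = cos(x_n), converging to ~0.739085
seq = [1.0]
for _ in range(9):
    seq.append(math.cos(seq[-1]))
acc = aitken(seq)
```

The accelerated value built from the last three iterates is substantially closer to the fixed point than the raw sequence's final term.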
Rapidly converging multigrid reconstruction of cone-beam tomographic data
NASA Astrophysics Data System (ADS)
Myers, Glenn R.; Kingston, Andrew M.; Latham, Shane J.; Recur, Benoit; Li, Thomas; Turner, Michael L.; Beeching, Levi; Sheppard, Adrian P.
2016-10-01
In the context of large-angle cone-beam tomography (CBCT), we present a practical iterative reconstruction (IR) scheme designed for rapid convergence as required for large datasets. The robustness of the reconstruction is provided by the "space-filling" source trajectory along which the experimental data is collected. The speed of convergence is achieved by leveraging the highly isotropic nature of this trajectory to design an approximate deconvolution filter that serves as a pre-conditioner in a multi-grid scheme. We demonstrate this IR scheme for CBCT and compare convergence to that of more traditional techniques.
The augmented Lagrangian method for parameter estimation in elliptic systems
NASA Technical Reports Server (NTRS)
Ito, Kazufumi; Kunisch, Karl
1990-01-01
In this paper a new technique for the estimation of parameters in elliptic partial differential equations is developed. It is a hybrid method combining the output-least-squares and the equation error method. The new method is realized by an augmented Lagrangian formulation, and convergence as well as rate of convergence proofs are provided. Technically the critical step is the verification of a coercivity estimate of an appropriately defined Lagrangian functional. To obtain this coercivity estimate a seminorm regularization technique is used.
Verification of Eulerian-Eulerian and Eulerian-Lagrangian simulations for fluid-particle flows
NASA Astrophysics Data System (ADS)
Kong, Bo; Patel, Ravi G.; Capecelatro, Jesse; Desjardins, Olivier; Fox, Rodney O.
2017-11-01
In this work, we study the performance of three simulation techniques for fluid-particle flows: (1) a volume-filtered Euler-Lagrange approach (EL), (2) a quadrature-based moment method using the anisotropic Gaussian closure (AG), and (3) a traditional two-fluid model. By simulating two problems: particles in frozen homogeneous isotropic turbulence (HIT), and cluster-induced turbulence (CIT), the convergence of the methods under grid refinement is found to depend on the simulation method and the specific problem, with CIT simulations facing fewer difficulties than HIT. Although EL converges under refinement for both HIT and CIT, its statistical results exhibit dependence on the techniques used to extract statistics for the particle phase. For HIT, converging both EE methods (TFM and AG) poses challenges, while for CIT, AG and EL produce similar results. Overall, all three methods face challenges when trying to extract converged, parameter-independent statistics due to the presence of shocks in the particle phase. National Science Foundation and National Energy Technology Laboratory.
Kelly, S C; O'Rourke, M J
2010-01-01
This work reports on the implementation and validation of a two-system, single-analysis, fluid-structure interaction (FSI) technique that uses the finite volume (FV) method for performing simulations on abdominal aortic aneurysm (AAA) geometries. This FSI technique, which was implemented in OpenFOAM, included fluid and solid mesh motion and incorporated a non-linear material model to represent AAA tissue. Fully implicit coupling was implemented, ensuring that both the fluid and solid domains reached convergence within each time step. The fluid and solid parts of the FSI code were validated independently through comparison with experimental data, before performing a complete FSI simulation on an idealized AAA geometry. Results from the FSI simulation showed that a vortex formed at the proximal end of the aneurysm during systolic acceleration, and moved towards the distal end of the aneurysm during diastole. Wall shear stress (WSS) values were found to peak at both the proximal and distal ends of the aneurysm and remain low along the centre of the aneurysm. The maximum von Mises stress in the aneurysm wall was found to be 408 kPa, and this occurred at the proximal end of the aneurysm, while the maximum displacement of 2.31 mm occurred in the centre of the aneurysm. These results were found to be consistent with results from other FSI studies in the literature.
Density-functional theory simulation of large quantum dots
NASA Astrophysics Data System (ADS)
Jiang, Hong; Baranger, Harold U.; Yang, Weitao
2003-10-01
Kohn-Sham spin-density functional theory provides an efficient and accurate model to study electron-electron interaction effects in quantum dots, but its application to large systems is a challenge. Here an efficient method for the simulation of quantum dots using density-functional theory is developed; it includes the particle-in-the-box representation of the Kohn-Sham orbitals, an efficient conjugate-gradient method to directly minimize the total energy, a Fourier convolution approach for the calculation of the Hartree potential, and a simplified multigrid technique to accelerate the convergence. We test the methodology in a two-dimensional model system and show that numerical studies of large quantum dots with several hundred electrons become computationally affordable. In the noninteracting limit, the classical dynamics of the system we study can be continuously varied from integrable to fully chaotic. The qualitative difference in the noninteracting classical dynamics has an effect on the quantum properties of the interacting system: integrable classical dynamics leads to higher-spin states and a broader distribution of spacing between Coulomb blockade peaks.
Semi-stochastic full configuration interaction quantum Monte Carlo
NASA Astrophysics Data System (ADS)
Holmes, Adam; Petruzielo, Frank; Khadilkar, Mihir; Changlani, Hitesh; Nightingale, M. P.; Umrigar, C. J.
2012-02-01
In the recently proposed full configuration interaction quantum Monte Carlo (FCIQMC) [1,2], the ground state is projected out stochastically, using a population of walkers each of which represents a basis state in the Hilbert space spanned by Slater determinants. The infamous fermion sign problem manifests itself in the fact that walkers of either sign can be spawned on a given determinant. We propose an improvement on this method in the form of a hybrid stochastic/deterministic technique, which we expect will improve the efficiency of the algorithm by ameliorating the sign problem. We test the method on atoms and molecules, e.g., carbon, the carbon dimer, the N2 molecule, and stretched N2. [1] Fermion Monte Carlo without fixed nodes: a Game of Life, death and annihilation in Slater Determinant space. George Booth, Alex Thom, Ali Alavi. J Chem Phys 131, 050106 (2009). [2] Survival of the fittest: Accelerating convergence in full configuration-interaction quantum Monte Carlo. Deidre Cleland, George Booth, and Ali Alavi. J Chem Phys 132, 041103 (2010).
Generalized conjugate-gradient methods for the Navier-Stokes equations
NASA Technical Reports Server (NTRS)
Ajmani, Kumud; Ng, Wing-Fai; Liou, Meng-Sing
1991-01-01
A generalized conjugate-gradient method is used to solve the two-dimensional, compressible Navier-Stokes equations of fluid flow. The equations are discretized with an implicit, upwind finite-volume formulation. Preconditioning techniques are incorporated into the new solver to accelerate convergence of the overall iterative method. The superiority of the new solver is demonstrated by comparisons with a conventional line Gauss-Seidel relaxation solver. Computational test results for transonic flow (trailing-edge flow in a transonic turbine cascade) and hypersonic flow (M = 6.0 shock-on-shock phenomena on a cylindrical leading edge) are presented. When applied to the transonic cascade case, the new solver is 4.4 times faster in terms of number of iterations and 3.1 times faster in terms of CPU time than the relaxation solver. For the hypersonic shock case, the new solver is 3.0 times faster in terms of number of iterations and 2.2 times faster in terms of CPU time than the relaxation solver.
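As a generic illustration of how preconditioning accelerates a Krylov solver, here is a minimal Jacobi-preconditioned conjugate-gradient iteration on a small symmetric positive-definite system. This is a textbook sketch in pure Python, not the paper's generalized CG solver for the upwind finite-volume Navier-Stokes discretization; the matrix and function name are invented for the example.

```python
def pcg(A, b, tol=1e-10, max_iter=100):
    """Conjugate gradients with a Jacobi (diagonal) preconditioner:
    a minimal example of preconditioning a Krylov solver."""
    n = len(b)
    Minv = [1.0 / A[i][i] for i in range(n)]    # Jacobi preconditioner
    x = [0.0] * n
    r = b[:]                                    # r = b - A x with x = 0
    z = [Minv[i] * r[i] for i in range(n)]      # preconditioned residual
    p = z[:]
    rz = sum(r[i] * z[i] for i in range(n))
    for k in range(max_iter):
        Ap = [sum(A[i][j] * p[j] for j in range(n)) for i in range(n)]
        alpha = rz / sum(p[i] * Ap[i] for i in range(n))
        x = [x[i] + alpha * p[i] for i in range(n)]
        r = [r[i] - alpha * Ap[i] for i in range(n)]
        if sum(ri * ri for ri in r) ** 0.5 < tol:
            return x, k + 1
        z = [Minv[i] * r[i] for i in range(n)]
        rz_new = sum(r[i] * z[i] for i in range(n))
        p = [z[i] + (rz_new / rz) * p[i] for i in range(n)]
        rz = rz_new
    return x, max_iter

# small symmetric positive-definite test system
A = [[4.0, 1.0, 0.0],
     [1.0, 3.0, 1.0],
     [0.0, 1.0, 2.0]]
b = [1.0, 2.0, 3.0]
x, n_iter = pcg(A, b)
```

In exact arithmetic CG solves an n-by-n system in at most n iterations; the preconditioner's payoff appears on large, ill-conditioned systems like the discretized flow equations.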
Mamoshina, Polina; Ojomoko, Lucy; Yanovich, Yury; Ostrovski, Alex; Botezatu, Alex; Prikhodko, Pavel; Izumchenko, Eugene; Aliper, Alexander; Romantsov, Konstantin; Zhebrak, Alexander; Ogu, Iraneus Obioma; Zhavoronkov, Alex
2018-01-01
The increased availability of data and recent advancements in artificial intelligence present unprecedented opportunities in healthcare and major challenges for the patients, developers, providers and regulators. Novel deep learning and transfer learning techniques are turning any data about a person into medical data, transforming simple facial pictures and videos into powerful sources of data for predictive analytics. Presently, patients do not have control over the access privileges to their medical records and remain unaware of the true value of the data they have. In this paper, we provide an overview of the next-generation artificial intelligence and blockchain technologies and present innovative solutions that may be used to accelerate biomedical research and enable patients with new tools to control and profit from their personal data, as well as with the incentives to undergo constant health monitoring. We introduce new concepts to appraise and evaluate personal records, including the combination-, time- and relationship-value of the data. We also present a roadmap for a blockchain-enabled decentralized personal health data ecosystem to enable novel approaches for drug discovery, biomarker development, and preventative healthcare. A secure and transparent distributed personal data marketplace utilizing blockchain and deep learning technologies may be able to resolve the challenges faced by the regulators and return the control over personal data including medical records back to the individuals. PMID:29464026
Digital electron diffraction – seeing the whole picture
Beanland, Richard; Thomas, Paul J.; Woodward, David I.; Thomas, Pamela A.; Roemer, Rudolf A.
2013-01-01
The advantages of convergent-beam electron diffraction for symmetry determination at the scale of a few nm are well known. In practice, the approach is often limited due to the restriction on the angular range of the electron beam imposed by the small Bragg angle for high-energy electron diffraction, i.e. a large convergence angle of the incident beam results in overlapping information in the diffraction pattern. Techniques have been generally available since the 1980s which overcome this restriction for individual diffracted beams, by making a compromise between illuminated area and beam convergence. Here a simple technique is described which overcomes all of these problems using computer control, giving electron diffraction data over a large angular range for many diffracted beams from the volume given by a focused electron beam (typically a few nm or less). The increase in the amount of information significantly improves the ease of interpretation and widens the applicability of the technique, particularly for thin materials or those with larger lattice parameters. PMID:23778099
Uncertainty Quantification and Statistical Convergence Guidelines for PIV Data
NASA Astrophysics Data System (ADS)
Stegmeir, Matthew; Kassen, Dan
2016-11-01
As Particle Image Velocimetry has continued to mature, it has developed into a robust and flexible technique for velocimetry used by expert and non-expert users. While historical estimates of PIV accuracy have typically relied heavily on "rules of thumb" and analysis of idealized synthetic images, recently increased emphasis has been placed on better quantifying real-world PIV measurement uncertainty. Multiple techniques have been developed to provide per-vector instantaneous uncertainty estimates for PIV measurements. Often real-world experimental conditions introduce complications in collecting "optimal" data, and the effect of these conditions is important to consider when planning an experimental campaign. The current work utilizes the results of PIV Uncertainty Quantification techniques to develop a framework for PIV users to utilize estimated PIV confidence intervals to compute reliable data convergence criteria for optimal sampling of flow statistics. Results are compared using experimental and synthetic data, and recommended guidelines and procedures leveraging estimated PIV confidence intervals for efficient sampling for converged statistics are provided.
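A convergence criterion of the kind described, sampling until the confidence interval of a flow statistic is tight enough, might be sketched as follows. The function name, tolerances, and the Gaussian stand-in for PIV velocity samples are assumptions for illustration, not the authors' framework; the running statistics use Welford's update.

```python
import random

def samples_until_converged(draw, rel_tol=0.01, z=1.96, min_n=30, max_n=100000):
    """Keep sampling until the confidence-interval half-width of the running
    mean drops below rel_tol times the mean's magnitude."""
    n, mean, m2 = 0, 0.0, 0.0
    while n < max_n:
        x = draw()
        n += 1
        delta = x - mean                  # Welford running mean/variance
        mean += delta / n
        m2 += delta * (x - mean)
        if n >= min_n:
            se = (m2 / (n * (n - 1))) ** 0.5    # standard error of the mean
            if z * se <= rel_tol * abs(mean):
                return n, mean
    return n, mean

random.seed(0)
draw = lambda: random.gauss(10.0, 2.0)    # stand-in for a PIV velocity sample
n, mean = samples_until_converged(draw)
```

For these settings the criterion triggers after roughly (z*sigma/(rel_tol*mean))^2 samples, which is the kind of sample-size guidance the abstract describes deriving from per-vector uncertainty estimates.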
Super-convergence of Discontinuous Galerkin Method Applied to the Navier-Stokes Equations
NASA Technical Reports Server (NTRS)
Atkins, Harold L.
2009-01-01
The practical benefits of the hyper-accuracy properties of the discontinuous Galerkin method are examined. In particular, we demonstrate that some flow attributes exhibit super-convergence even in the absence of any post-processing technique. Theoretical analysis suggests that flow features that are dominated by global propagation speeds and decay or growth rates should be super-convergent. Several discrete forms of the discontinuous Galerkin method are applied to the simulation of unsteady viscous flow over a two-dimensional cylinder. Convergence of the period of the naturally occurring oscillation is examined and shown to converge at a rate of 2p+1, where p is the polynomial degree of the discontinuous Galerkin basis. Comparisons are made between the different discretizations and with theoretical analysis.
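Observed convergence rates like the 2p+1 quoted above are typically estimated from errors on successively refined grids. A minimal sketch (with invented names and synthetic errors decaying at the super-convergent rate for p = 2) is:

```python
import math

def observed_order(h, e):
    """Observed convergence order from consecutive grid refinements:
    p_obs = log(e_k / e_{k+1}) / log(h_k / h_{k+1})."""
    return [math.log(e[k] / e[k + 1]) / math.log(h[k] / h[k + 1])
            for k in range(len(h) - 1)]

# synthetic errors decaying at the super-convergent rate 2p+1 for p = 2
p = 2
h = [0.1, 0.05, 0.025]
e = [0.5 * hk ** (2 * p + 1) for hk in h]
orders = observed_order(h, e)
```

For these synthetic errors the observed order comes out at 5, i.e. 2p+1 with p = 2, which is how super-convergence of a scalar quantity such as the shedding period would show up in a refinement study.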
NASA Astrophysics Data System (ADS)
Wang, Liwei; Liu, Xinggao; Zhang, Zeyin
2017-02-01
An efficient primal-dual interior-point algorithm using a new non-monotone line search filter method is presented for nonlinear constrained programming, which is widely applied in engineering optimization. The new non-monotone line search technique is introduced to lead to relaxed step acceptance conditions and improved convergence performance. It can also avoid the choice of the upper bound on the memory, which brings obvious disadvantages to traditional techniques. Under mild assumptions, the global convergence of the new non-monotone line search filter method is analysed, and fast local convergence is ensured by second order corrections. The proposed algorithm is applied to the classical alkylation process optimization problem and the results illustrate its effectiveness. Some comprehensive comparisons to existing methods are also presented.
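The non-monotone acceptance idea can be sketched as a backtracking Armijo search that compares against the worst of the last few objective values instead of the current one, which is what relaxes the step acceptance conditions. This is a simplified stand-in with invented names and constants, not the paper's filter algorithm.

```python
def nonmonotone_backtrack(f, g, x, d, history, memory=5,
                          c1=1e-4, shrink=0.5, max_backtracks=50):
    """Armijo backtracking that accepts a step improving on the WORST of the
    last `memory` objective values (non-monotone acceptance)."""
    f_ref = max(history[-memory:])                     # relaxed reference value
    g_dot_d = sum(gi * di for gi, di in zip(g, d))     # directional derivative
    alpha = 1.0
    for _ in range(max_backtracks):
        x_new = [xi + alpha * di for xi, di in zip(x, d)]
        if f(x_new) <= f_ref + c1 * alpha * g_dot_d:   # relaxed Armijo test
            return x_new, alpha
        alpha *= shrink
    return list(x), 0.0

# one steepest-descent step on f(x) = x0^2 + x1^2
f = lambda v: v[0] ** 2 + v[1] ** 2
x = [3.0, -2.0]
g = [2.0 * x[0], 2.0 * x[1]]
d = [-gi for gi in g]
x_mono, a_mono = nonmonotone_backtrack(f, g, x, d, history=[f(x)])
# an inflated history value relaxes the test and lets the full step through
x_relax, a_relax = nonmonotone_backtrack(f, g, x, d, history=[f(x), 100.0])
```

The second call shows the mechanism: with a worse value in the memory, the full step alpha = 1 is accepted even though it does not improve on the current iterate, avoiding the over-conservative behavior of a strictly monotone rule.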
Mofid, Omid; Mobayen, Saleh
2018-01-01
Adaptive control methods are developed for the stability and tracking control of flight systems in the presence of parametric uncertainties. This paper offers a design technique of adaptive sliding mode control (ASMC) for finite-time stabilization of unmanned aerial vehicle (UAV) systems with parametric uncertainties. Applying the Lyapunov stability concept and the finite-time convergence idea, the recommended control method guarantees that the states of the quad-rotor UAV converge to the origin at a finite-time convergence rate. Furthermore, an adaptive-tuning scheme is proposed to estimate the unknown parameters of the quad-rotor UAV at any moment. Finally, simulation results are presented to demonstrate the effectiveness of the proposed technique compared to previous methods. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ghoos, K., E-mail: kristel.ghoos@kuleuven.be; Dekeyser, W.; Samaey, G.
2016-10-01
The plasma and neutral transport in the plasma edge of a nuclear fusion reactor is usually simulated using coupled finite volume (FV)/Monte Carlo (MC) codes. However, under the conditions of future reactors like ITER and DEMO, convergence issues become apparent. This paper examines the convergence behaviour and the numerical error contributions with a simplified FV/MC model for three coupling techniques: Correlated Sampling, Random Noise, and Robbins-Monro. Practical procedures to estimate the errors in complex codes are also proposed. Moreover, first results with more complex models show that an order-of-magnitude speedup can be achieved without any loss in accuracy by making use of averaging in the Random Noise coupling technique.
Method-independent, Computationally Frugal Convergence Testing for Sensitivity Analysis Techniques
NASA Astrophysics Data System (ADS)
Mai, Juliane; Tolson, Bryan
2017-04-01
The increasing complexity and runtime of environmental models lead to the current situation in which the calibration of all model parameters, or the estimation of all of their uncertainties, is often computationally infeasible. Hence, techniques to determine the sensitivity of model parameters are used to identify the most important parameters or model processes. All subsequent model calibrations or uncertainty estimation procedures then focus only on these subsets of parameters and are hence less computationally demanding. While the examination of the convergence of calibration and uncertainty methods is state-of-the-art, the convergence of the sensitivity methods is usually not checked. When it is checked at all, bootstrapping of the sensitivity results is used to determine the reliability of the estimated indexes. Bootstrapping, however, can itself become computationally expensive in the case of large model outputs and a high number of bootstraps. We therefore present a Model Variable Augmentation (MVA) approach to check the convergence of sensitivity indexes without performing any additional model run. This technique is method- and model-independent. It can be applied either during the sensitivity analysis (SA) or afterwards. The latter case enables the checking of already processed sensitivity indexes. To demonstrate the method independence of the convergence testing method, we applied it to three widely used, global SA methods: the screening method known as the Morris method or Elementary Effects (Morris 1991, Campolongo et al. 2000), the variance-based Sobol' method (Sobol' 1993, Saltelli et al. 2010), and a derivative-based method known as the Parameter Importance index (Goehler et al. 2013). The new convergence testing method is first scrutinized using 12 analytical benchmark functions (Cuntz & Mai et al. 2015) where the true indexes of the aforementioned three methods are known.
This proof of principle shows that the method reliably determines the uncertainty of the SA results when different budgets are used for the SA. Subsequently, we focus on model independence by testing the frugal method using the hydrologic model mHM (www.ufz.de/mhm) with about 50 model parameters. The results show that the new frugal method is able to test the convergence, and therefore the reliability, of SA results in an efficient way. An appealing feature of this new technique is that it requires no further model evaluations, and it therefore enables the checking of already processed (and published) sensitivity results. This is one step towards reliable, transferable, published sensitivity results.
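For concreteness, the Elementary Effects screening mentioned above can be sketched in a few lines. This is a radial one-at-a-time design on the unit hypercube, a simplification rather than the full trajectory design of Morris (1991), and the function names are illustrative.

```python
import numpy as np

def morris_mu_star(f, dim, trajectories=20, delta=0.5, seed=0):
    """Minimal Morris/Elementary Effects screening: returns mu* (the mean
    absolute elementary effect) per input, a common importance measure."""
    rng = np.random.default_rng(seed)
    ee = np.zeros((trajectories, dim))
    for t in range(trajectories):
        x = rng.uniform(0.0, 0.5, dim)   # base point, leaving room for +delta
        fx = f(x)
        for i in range(dim):
            xp = x.copy()
            xp[i] += delta               # perturb one input at a time
            ee[t, i] = (f(xp) - fx) / delta
    return np.abs(ee).mean(axis=0)
```

On a linear model the elementary effects recover the coefficients exactly, which makes the screening behaviour easy to verify.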
Sparse-grid, reduced-basis Bayesian inversion: Nonaffine-parametric nonlinear equations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen, Peng, E-mail: peng@ices.utexas.edu; Schwab, Christoph, E-mail: christoph.schwab@sam.math.ethz.ch
2016-07-01
We extend the reduced basis (RB) accelerated Bayesian inversion methods for affine-parametric, linear operator equations which are considered in [16,17] to non-affine, nonlinear parametric operator equations. We generalize the analysis of sparsity of parametric forward solution maps in [20] and of Bayesian inversion in [48,49] to the fully discrete setting, including Petrov–Galerkin high-fidelity (“HiFi”) discretization of the forward maps. We develop adaptive, stochastic collocation based reduction methods for the efficient computation of reduced bases on the parametric solution manifold. The nonaffinity and nonlinearity with respect to (w.r.t.) the distributed, uncertain parameters and the unknown solution is collocated; specifically, by the so-called Empirical Interpolation Method (EIM). For the corresponding Bayesian inversion problems, computational efficiency is enhanced in two ways: first, expectations w.r.t. the posterior are computed by adaptive quadratures with dimension-independent convergence rates proposed in [49]; the present work generalizes [49] to account for the impact of the PG discretization in the forward maps on the convergence rates of the Quantities of Interest (QoI for short). Second, we propose to perform the Bayesian estimation only w.r.t. a parsimonious, RB approximation of the posterior density. Based on the approximation results in [49], the infinite-dimensional parametric, deterministic forward map and operator admit N-term RB and EIM approximations which converge at rates which depend only on the sparsity of the parametric forward map. In several numerical experiments, the proposed algorithms exhibit dimension-independent convergence rates which equal, at least, the currently known rate estimates for N-term approximation. We propose to accelerate Bayesian estimation by first offline construction of reduced basis surrogates of the Bayesian posterior density.
The parsimonious surrogates can then be employed for online data assimilation and for Bayesian estimation. They also open a perspective for optimal experimental design.
Optimal Lorentz-augmented spacecraft formation flying in elliptic orbits
NASA Astrophysics Data System (ADS)
Huang, Xu; Yan, Ye; Zhou, Yang
2015-06-01
An electrostatically charged spacecraft accelerates as it moves through the Earth's magnetic field due to the induced Lorentz force, providing a new means of propellantless electromagnetic propulsion for orbital maneuvers. The feasibility of Lorentz-augmented spacecraft formation flying in elliptic orbits is investigated in this paper. Assuming the Earth's magnetic field to be a tilted dipole corotating with the Earth, a nonlinear dynamical model that characterizes the orbital motion of Lorentz spacecraft in the vicinity of arbitrary elliptic orbits is developed. To establish a predetermined formation configuration at a given terminal time, a pseudospectral method is used to solve for the optimal open-loop trajectories of hybrid control inputs consisting of the Lorentz acceleration and thruster-generated control acceleration. A nontilted dipole model is also introduced to analyze the effect of the dipole tilt angle via comparisons with the tilted one. Meanwhile, to guarantee finite-time convergence and system robustness against external perturbations, a continuous fast nonsingular terminal sliding mode controller is designed, and the closed-loop system stability is proved by Lyapunov theory. Numerical simulations substantiate the validity of the proposed open-loop and closed-loop control schemes, and the results indicate that an almost propellantless formation establishment can be achieved by choosing an appropriate objective function in the pseudospectral method. Furthermore, compared to the nonsingular terminal sliding mode controller, the proposed closed-loop controller achieves a faster convergence rate with only slightly more control effort, and it can be applied to other Lorentz-augmented relative orbital control problems.
An accelerated proximal augmented Lagrangian method and its application in compressive sensing.
Sun, Min; Liu, Jing
2017-01-01
As a first-order method, the augmented Lagrangian method (ALM) is a benchmark solver for linearly constrained convex programming, and in practice some semi-definite proximal terms are often added to its primal variable's subproblem to make it more implementable. In this paper, we propose an accelerated PALM with indefinite proximal regularization (PALM-IPR) for convex programming with linear constraints, which generalizes the proximal terms from semi-definite to indefinite. Under mild assumptions, we establish the worst-case [Formula: see text] convergence rate of PALM-IPR in a non-ergodic sense. Finally, numerical results show that our new method is feasible and efficient for solving compressive sensing.
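A plain (non-accelerated, semi-definite-proximal) baseline of the proximal ALM idea, applied to the basis-pursuit model of compressive sensing, min ||x||_1 subject to Ax = b. This is a sketch for orientation only: the paper's PALM-IPR uses indefinite proximal terms and an acceleration step, neither of which appears here.

```python
import numpy as np

def soft(v, t):
    """Soft-thresholding: the proximal operator of t*||.||_1."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def linearized_alm_l1(A, b, beta=1.0, iters=5000):
    """Linearized proximal augmented Lagrangian method (sketch) for
    min ||x||_1 s.t. Ax = b. Each pass takes one proximal-gradient step on
    the augmented Lagrangian in x, then a dual ascent step on the multiplier."""
    m, n = A.shape
    x = np.zeros(n)
    lam = np.zeros(m)
    L = beta * np.linalg.norm(A, 2) ** 2       # Lipschitz const. of smooth AL part
    for _ in range(iters):
        g = A.T @ (lam + beta * (A @ x - b))   # gradient of the smooth AL term
        x = soft(x - g / L, 1.0 / L)           # proximal (linearized) x-update
        lam = lam + beta * (A @ x - b)         # multiplier update
    return x
```

The proximal-gradient step size 1/L with L = β||A||² is the semi-definite-proximal choice; PALM-IPR's contribution is showing that a strictly smaller (indefinite) proximal weight still converges, with an accelerated rate.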
Compressed sensing with gradient total variation for low-dose CBCT reconstruction
NASA Astrophysics Data System (ADS)
Seo, Chang-Woo; Cha, Bo Kyung; Jeon, Seongchae; Huh, Young; Park, Justin C.; Lee, Byeonghun; Baek, Junghee; Kim, Eunyoung
2015-06-01
This paper describes the improvement in convergence speed achieved with gradient total variation (GTV) in compressed sensing (CS) for low-dose cone-beam computed tomography (CBCT) reconstruction. We derive a fast algorithm for constrained total variation (TV)-based reconstruction from a minimum number of noisy projections. To achieve this task we combine the GTV with a TV-norm regularization term to exploit sparsity in the X-ray attenuation characteristics of the human body. The GTV is derived from the TV but is more computationally efficient and converges faster to a desired solution. The numerical algorithm is simple and converges relatively quickly. We apply a gradient projection algorithm that seeks a solution iteratively in the direction of the projected gradient while enforcing non-negativity of the solution. In comparison with the Feldkamp, Davis, and Kress (FDK) and conventional TV algorithms, the proposed GTV algorithm converged in ≤18 iterations, whereas the original TV algorithm needed at least 34 iterations, with the number of projections reduced by 50% relative to the FDK algorithm, to reconstruct the chest phantom images. Future investigation includes improving imaging quality, particularly regarding X-ray cone-beam scatter and motion artifacts in CBCT reconstruction.
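The gradient-projection step with a non-negativity constraint reads, in its simplest least-squares form, as below. The TV/GTV penalty is omitted for brevity, and this is a generic sketch of the projection idea, not the authors' reconstruction code.

```python
import numpy as np

def projected_gradient_nnls(A, b, iters=500):
    """Projected gradient for min 0.5*||Ax - b||^2 subject to x >= 0:
    take a gradient step, then project onto the non-negative orthant."""
    x = np.zeros(A.shape[1])
    step = 1.0 / np.linalg.norm(A, 2) ** 2     # 1 / Lipschitz constant
    for _ in range(iters):
        grad = A.T @ (A @ x - b)               # gradient of the data term
        x = np.maximum(x - step * grad, 0.0)   # projection onto x >= 0
    return x
```

In the CBCT setting, A would be the system (projection) matrix and the non-negativity reflects the physical attenuation coefficients.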
Jadhav, Vivek Dattatray; Motwani, Bhagwan K.; Shinde, Jitendra; Adhapure, Prasad
2017-01-01
Aims: The aim of this study was to evaluate the marginal fit and surface roughness of complete cast crowns made by a conventional and an accelerated casting technique. Settings and Design: This study was divided into three parts. In Part I, the marginal fit of full metal crowns made by both casting techniques was checked in the vertical direction; in Part II, the fit of sectional metal crowns was checked in the horizontal direction; and in Part III, the surface roughness of disc-shaped metal plate specimens made by both casting techniques was checked. Materials and Methods: A conventional technique was compared with an accelerated technique. In Part I of the study, the marginal fit of the full metal crowns and, in Part II, the horizontal fit of the sectional metal crowns made by both casting techniques were determined; in Part III, the surface roughness of castings made with the same techniques was compared. Statistical Analysis Used: The results of the t-test and independent-samples test do not indicate statistically significant differences in the marginal discrepancy detected between the two casting techniques. Results: For marginal discrepancy and surface roughness, crowns fabricated with the accelerated technique were significantly different from those fabricated with the conventional technique. Conclusions: The accelerated casting technique showed quite satisfactory results, but the conventional technique was superior in terms of marginal fit and surface roughness. PMID:29042726
Haynes, Jeffrey D [Stuart, FL; Sanders, Stuart A [Palm Beach Gardens, FL
2009-06-09
A nozzle for use in a cold spray technique is described. The nozzle has a passageway for spraying a powder material, the passageway having a converging section and a diverging section, and at least the diverging section being formed from polybenzimidazole. In one embodiment of the nozzle, the converging section is also formed from polybenzimidazole.
Accelerated Cartesian expansions for the rapid solution of periodic multiscale problems
Baczewski, Andrew David; Dault, Daniel L.; Shanker, Balasubramaniam
2012-07-03
We present an algorithm for the fast and efficient solution of integral equations that arise in the analysis of scattering from periodic arrays of PEC objects, such as multiband frequency selective surfaces (FSS) or metamaterial structures. Our approach relies upon the method of Accelerated Cartesian Expansions (ACE) to rapidly evaluate the requisite potential integrals. ACE is analogous to FMM in that it can be used to accelerate the matrix-vector product used in the solution of systems discretized using MoM. Here, ACE provides linear scaling in both CPU time and memory. Details regarding the implementation of this method within the context of periodic systems are provided, as well as results that establish error convergence and scalability. In addition, we also demonstrate the applicability of this algorithm by studying several exemplary electrically dense systems.
NASA Astrophysics Data System (ADS)
Wang, Xiaowei; Li, Huiping; Li, Zhichao
2018-04-01
The interfacial heat transfer coefficient (IHTC) is one of the most important thermal-physical parameters and has significant effects on the calculation accuracy of physical fields in numerical simulation. In this study, the artificial fish swarm algorithm (AFSA) was used to evaluate the IHTC between the heated sample and the quenchant in a one-dimensional heat conduction problem. AFSA is a global optimization method. To speed up convergence, a hybrid method combining AFSA with a normal distribution method (ZAFSA) is presented. The IHTC values evaluated by ZAFSA were compared with those attained by AFSA and by the advanced-retreat and golden section methods. The results show that a reasonable IHTC is obtained using ZAFSA and that the hybrid method converges well. The algorithm based on ZAFSA not only accelerates convergence but also reduces numerical oscillation in the evaluation of the IHTC.
NASA Astrophysics Data System (ADS)
Demianski, Marek; Piedipalumbo, Ester; Sawant, Disha; Amati, Lorenzo
2017-02-01
Context. Explaining the accelerated expansion of the Universe is one of the fundamental challenges in physics today. Cosmography provides information about the evolution of the Universe derived from measured distances, assuming only that the space-time geometry is described by the Friedmann-Lemaître-Robertson-Walker metric, and adopting an approach that effectively uses only Taylor expansions of basic observables. Aims: We perform a high-redshift analysis to constrain the cosmographic expansion up to the fifth order. It is based on the Union2 type Ia supernovae data set, the gamma-ray burst Hubble diagram, a data set of 28 independent measurements of the Hubble parameter, baryon acoustic oscillation measurements from galaxy clustering and the Lyman-α forest in the SDSS-III Baryon Oscillation Spectroscopic Survey (BOSS), and some Gaussian priors on h and ΩM. Methods: We performed a statistical analysis and explored the probability distributions of the cosmographic parameters. By building up their regions of confidence, we maximized our likelihood function using the Markov chain Monte Carlo method. Results: Our high-redshift analysis confirms that the expansion of the Universe currently accelerates; the estimation of the jerk parameter indicates a possible deviation from the standard ΛCDM cosmological model. Moreover, we investigate the implications of our results for the reconstruction of the dark energy equation of state (EOS) by comparing the standard technique of cosmography with an alternative approach based on generalized Padé approximations of the same observables. Because these expansions converge better, it is possible to improve the constraints on the cosmographic parameters and also on the dark energy EOS. Conclusions: The estimation of the jerk and the DE parameters indicates at 1σ a possible deviation from the ΛCDM cosmological model.
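The convergence advantage of Padé over Taylor expansions is easy to illustrate away from the expansion point with a toy function (not the cosmographic series themselves): both approximants below are built from the same second-order Taylor data for ln(1+x).

```python
import math

def taylor_log1p(x):
    """Second-order Taylor expansion of ln(1+x) about x = 0."""
    return x - x**2 / 2

def pade11_log1p(x):
    """[1/1] Pade approximant of ln(1+x), x / (1 + x/2), constructed from
    the same two Taylor coefficients."""
    return x / (1 + x / 2)
```

At x = 1 the exact value is ln 2 ≈ 0.6931; the Taylor sum gives 0.5 while the Padé form gives 2/3 ≈ 0.6667, a much smaller error from identical input data. This mirrors why Padé-based cosmography handles high-redshift observables better than plain Taylor series.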
Gaussian Accelerated Molecular Dynamics in NAMD
2016-01-01
Gaussian accelerated molecular dynamics (GaMD) is a recently developed enhanced sampling technique that provides efficient free energy calculations of biomolecules. Like the previous accelerated molecular dynamics (aMD), GaMD allows for “unconstrained” enhanced sampling without the need to set predefined collective variables and so is useful for studying complex biomolecular conformational changes such as protein folding and ligand binding. Furthermore, because the boost potential is constructed using a harmonic function that follows Gaussian distribution in GaMD, cumulant expansion to the second order can be applied to recover the original free energy profiles of proteins and other large biomolecules, which solves a long-standing energetic reweighting problem of the previous aMD method. Taken together, GaMD offers major advantages for both unconstrained enhanced sampling and free energy calculations of large biomolecules. Here, we have implemented GaMD in the NAMD package on top of the existing aMD feature and validated it on three model systems: alanine dipeptide, the chignolin fast-folding protein, and the M3 muscarinic G protein-coupled receptor (GPCR). For alanine dipeptide, while conventional molecular dynamics (cMD) simulations performed for 30 ns are poorly converged, GaMD simulations of the same length yield free energy profiles that agree quantitatively with those of 1000 ns cMD simulation. Further GaMD simulations have captured folding of the chignolin and binding of the acetylcholine (ACh) endogenous agonist to the M3 muscarinic receptor. The reweighted free energy profiles are used to characterize the protein folding and ligand binding pathways quantitatively. GaMD implemented in the scalable NAMD is widely applicable to enhanced sampling and free energy calculations of large biomolecules. PMID:28034310
Gaussian Accelerated Molecular Dynamics in NAMD.
Pang, Yui Tik; Miao, Yinglong; Wang, Yi; McCammon, J Andrew
2017-01-10
Gaussian accelerated molecular dynamics (GaMD) is a recently developed enhanced sampling technique that provides efficient free energy calculations of biomolecules. Like the previous accelerated molecular dynamics (aMD), GaMD allows for "unconstrained" enhanced sampling without the need to set predefined collective variables and so is useful for studying complex biomolecular conformational changes such as protein folding and ligand binding. Furthermore, because the boost potential is constructed using a harmonic function that follows Gaussian distribution in GaMD, cumulant expansion to the second order can be applied to recover the original free energy profiles of proteins and other large biomolecules, which solves a long-standing energetic reweighting problem of the previous aMD method. Taken together, GaMD offers major advantages for both unconstrained enhanced sampling and free energy calculations of large biomolecules. Here, we have implemented GaMD in the NAMD package on top of the existing aMD feature and validated it on three model systems: alanine dipeptide, the chignolin fast-folding protein, and the M3 muscarinic G protein-coupled receptor (GPCR). For alanine dipeptide, while conventional molecular dynamics (cMD) simulations performed for 30 ns are poorly converged, GaMD simulations of the same length yield free energy profiles that agree quantitatively with those of 1000 ns cMD simulation. Further GaMD simulations have captured folding of the chignolin and binding of the acetylcholine (ACh) endogenous agonist to the M3 muscarinic receptor. The reweighted free energy profiles are used to characterize the protein folding and ligand binding pathways quantitatively. GaMD implemented in the scalable NAMD is widely applicable to enhanced sampling and free energy calculations of large biomolecules.
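The harmonic boost and the second-order cumulant reweighting described above can be sketched numerically. This is a simplified illustration: in GaMD proper, the threshold energy E and force constant k are chosen adaptively from statistics of the potential, not passed in by hand.

```python
import numpy as np

def gamd_boost(V, E, k):
    """GaMD harmonic boost potential: dV = 0.5*k*(E - V)^2 when V < E,
    and zero otherwise (no boost above the threshold energy)."""
    V = np.asarray(V, dtype=float)
    return np.where(V < E, 0.5 * k * (E - V) ** 2, 0.0)

def log_reweight_cumulant2(dV, beta):
    """Second-order cumulant approximation of ln<exp(beta*dV)>, the
    reweighting factor used to recover unbiased free energy profiles:
    beta*<dV> + 0.5*beta^2*Var(dV)."""
    dV = np.asarray(dV, dtype=float)
    return beta * dV.mean() + 0.5 * beta**2 * dV.var()
```

Because the harmonic boost makes the distribution of dV near-Gaussian, truncating the cumulant expansion at second order is accurate; for exactly Gaussian dV it is exact, which is the key point the abstract makes about solving the aMD reweighting problem.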
Berkeley Proton Linear Accelerator
DOE R&D Accomplishments Database
Alvarez, L. W.; Bradner, H.; Franck, J.; Gordon, H.; Gow, J. D.; Marshall, L. C.; Oppenheimer, F. F.; Panofsky, W. K. H.; Richman, C.; Woodyard, J. R.
1953-10-13
A linear accelerator, which increases the energy of protons from a 4 Mev Van de Graaff injector, to a final energy of 31.5 Mev, has been constructed. The accelerator consists of a cavity 40 feet long and 39 inches in diameter, excited at resonance in a longitudinal electric mode with a radio-frequency power of about 2.2 x 10{sup 6} watts peak at 202.5 mc. Acceleration is made possible by the introduction of 46 axial "drift tubes" into the cavity, which is designed such that the particles traverse the distance between the centers of successive tubes in one cycle of the r.f. power. The protons are longitudinally stable as in the synchrotron, and are stabilized transversely by the action of converging fields produced by focusing grids. The electrical cavity is constructed like an inverted airplane fuselage and is supported in a vacuum tank. Power is supplied by 9 high powered oscillators fed from a pulse generator of the artificial transmission line type.
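The drift-tube spacing rule stated in the abstract, one βλ per RF cycle, can be checked against the quoted numbers. This is an illustrative calculation using standard constants, not material from the report.

```python
import math

C = 299_792_458.0      # speed of light, m/s
MP_MEV = 938.272       # proton rest energy, MeV

def beta_of_T(T_mev):
    """Relativistic beta of a proton with kinetic energy T (in MeV)."""
    gamma = 1.0 + T_mev / MP_MEV
    return math.sqrt(1.0 - 1.0 / gamma**2)

def cell_length(T_mev, f_hz=202.5e6):
    """Drift-tube cell length beta*lambda: the distance a proton travels
    in one RF cycle at the cavity frequency."""
    return beta_of_T(T_mev) * C / f_hz
```

The cell length grows from roughly 0.14 m at the 4 MeV injection energy to roughly 0.37 m at the 31.5 MeV output, consistent with 46 drift tubes fitting in a 40-foot (about 12.2 m) cavity.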
Variable convergence liquid layer implosions on the National Ignition Facility
NASA Astrophysics Data System (ADS)
Zylstra, A. B.; Yi, S. A.; Haines, B. M.; Olson, R. E.; Leeper, R. J.; Braun, T.; Biener, J.; Kline, J. L.; Batha, S. H.; Berzak Hopkins, L.; Bhandarkar, S.; Bradley, P. A.; Crippen, J.; Farrell, M.; Fittinghoff, D.; Herrmann, H. W.; Huang, H.; Khan, S.; Kong, C.; Kozioziemski, B. J.; Kyrala, G. A.; Ma, T.; Meezan, N. B.; Merrill, F.; Nikroo, A.; Peterson, R. R.; Rice, N.; Sater, J. D.; Shah, R. C.; Stadermann, M.; Volegov, P.; Walters, C.; Wilson, D. C.
2018-05-01
Liquid layer implosions using the "wetted foam" technique, where the liquid fuel is wicked into a supporting foam, have been recently conducted on the National Ignition Facility for the first time [Olson et al., Phys. Rev. Lett. 117, 245001 (2016)]. We report on a series of wetted foam implosions where the convergence ratio was varied between 12 and 20. Reduced nuclear performance is observed as convergence ratio increases. 2-D radiation-hydrodynamics simulations accurately capture the performance at convergence ratios (CR) ~ 12, but we observe a significant discrepancy at CR ~ 20. This may be due to suppressed hot-spot formation or an anomalous energy loss mechanism.
Supersonic coal water slurry fuel atomizer
Becker, Frederick E.; Smolensky, Leo A.; Balsavich, John
1991-01-01
A supersonic coal water slurry atomizer utilizing supersonic gas velocities to atomize coal water slurry is provided wherein atomization occurs externally of the atomizer. The atomizer has a central tube defining a coal water slurry passageway surrounded by an annular sleeve defining an annular passageway for gas. A converging/diverging section is provided for accelerating gas in the annular passageway to supersonic velocities.
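The converging/diverging passageway accelerates the gas through the standard isentropic area-Mach relation; a generic gas-dynamics helper (γ = 1.4 as for air is assumed here, and the actual atomizing gas may differ):

```python
import math

def area_ratio(M, gamma=1.4):
    """Isentropic A/A* for Mach number M: the duct must converge to a sonic
    throat (A/A* = 1 at M = 1) and then diverge to reach supersonic speeds."""
    t = (2.0 / (gamma + 1.0)) * (1.0 + 0.5 * (gamma - 1.0) * M * M)
    return t ** ((gamma + 1.0) / (2.0 * (gamma - 1.0))) / M
```

For example, an exit-to-throat area ratio of 1.6875 yields Mach 2 at the exit for γ = 1.4, which is why a diverging section downstream of the throat is what produces the supersonic atomizing jet.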
Numerical Modeling of Fuel Injection into an Accelerating, Turning Flow with a Cavity
NASA Astrophysics Data System (ADS)
Colcord, Ben James
Deliberate continuation of the combustion in the turbine passages of a gas turbine engine has the potential to increase the efficiency and the specific thrust or power of current gas-turbine engines. This concept, known as a turbine-burner, must overcome many challenges before becoming a viable product. One major challenge is the injection, mixing, ignition, and burning of fuel within a short residence time in a turbine passage characterized by large three-dimensional accelerations. One method of increasing the residence time is to inject the fuel into a cavity adjacent to the turbine passage, creating a low-speed zone for mixing and combustion. This situation is simulated numerically, with the turbine passage modeled as a turning, converging channel flow of high-temperature, vitiated air adjacent to a cavity. Both two- and three-dimensional, reacting and non-reacting calculations are performed, examining the effects of channel curvature and convergence, fuel and additional air injection configurations, and inlet conditions. Two-dimensional, non-reacting calculations show that higher aspect ratio cavities improve the fluid interaction between the channel flow and the cavity, and that the cavity dimensions are important for enhancing the mixing. Two-dimensional, reacting calculations show that converging channels improve the combustion efficiency. Channel curvature can be either beneficial or detrimental to combustion efficiency, depending on the location of the cavity and the fuel and air injection configuration. Three-dimensional, reacting calculations show that injecting fuel and air so as to disrupt the natural motion of the cavity stimulates three-dimensional instability and improves the combustion efficiency.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kraus, Adam; Merzari, Elia; Sofu, Tanju
2016-08-01
High-fidelity analysis has been utilized in the design of beam target options for an accelerator-driven subcritical system. Designs featuring stacks of plates with square cross section have been investigated for both tungsten and uranium target materials. The presented work includes the first thermal-hydraulic simulations of the full, detailed target geometry. The innovative target cooling manifold design features many regions with complex flow features, including 90° bends and merging jets, which necessitate three-dimensional fluid simulations. These were performed using the commercial computational fluid dynamics code STAR-CCM+. Conjugate heat transfer was modeled between the plates, cladding, manifold structure, and fluid. Steady-state simulations were performed but lacked good residual convergence. Unsteady simulations were then performed, which converged well and demonstrated that flow instability existed in the lower portion of the manifold. It was established that the flow instability had little effect on the peak plate temperatures, which were well below the melting point. The estimated plate surface temperatures and target-region pressure were shown to provide sufficient margin to subcooled boiling for standard operating conditions. This demonstrated the safety of both potential target configurations during normal operation.
Stable Adaptive Inertial Control of a Doubly-Fed Induction Generator
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kang, Moses; Muljadi, Eduard; Hur, Kyeon
2016-11-01
This paper proposes a stable adaptive inertial control scheme for a doubly-fed induction generator. The proposed power reference is defined in two sections: the deceleration period and the acceleration period. The power reference in the deceleration period consists of a constant and the reference for maximum power point tracking (MPPT) operation. The latter contributes to preventing a second frequency dip (SFD) in this period because its reduction rate is large at the early stage of an event but quickly decreases with time. To improve the frequency nadir (FN), the constant value is set to be proportional to the rotor speed prior to an event. The reference ensures that the rotor speed converges to a stable operating region. To accelerate the rotor speed while causing a small SFD, when the rotor speed converges, the power reference is reduced by a small amount and maintained until it meets the MPPT reference. The results show that the scheme causes a small SFD while improving the FN and the rate of change of frequency under any wind conditions, even in a grid that has a high penetration of wind power.
Ghanbari, Behzad
2014-01-01
We aim to study the convergence of the homotopy analysis method (HAM for short) for solving special nonlinear Volterra-Fredholm integrodifferential equations. A sufficient condition for the convergence of the method is briefly addressed. Some illustrative examples are also presented to demonstrate the validity and applicability of the technique. Comparison of the results obtained by HAM with the exact solutions shows that the method is reliable and capable of providing analytic treatment for solving such equations.
Visualizing and improving the robustness of phase retrieval algorithms
Tripathi, Ashish; Leyffer, Sven; Munson, Todd; ...
2015-06-01
Coherent x-ray diffractive imaging is a novel imaging technique that utilizes phase retrieval and nonlinear optimization methods to image matter at nanometer scales. We explore how the convergence properties of a popular phase retrieval algorithm, Fienup's HIO, behave by introducing a reduced dimensionality problem allowing us to visualize and quantify convergence to local minima and the globally optimal solution. We then introduce generalizations of HIO that improve upon the original algorithm's ability to converge to the globally optimal solution.
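A minimal 1-D sketch of Fienup's HIO update for orientation (toy dimensions and parameters; real coherent diffractive imaging operates on 2-D oversampled data, and the paper's generalizations modify this basic iteration):

```python
import numpy as np

def hio(magnitudes, support, iters=200, beta=0.9, seed=0):
    """Minimal 1-D Hybrid Input-Output (HIO) sketch: each iteration enforces
    the measured Fourier magnitudes, then relaxes the object-domain support
    constraint using the beta feedback term outside the support."""
    rng = np.random.default_rng(seed)
    x = rng.random(magnitudes.shape) * support      # random start in support
    for _ in range(iters):
        X = np.fft.fft(x)
        X = magnitudes * np.exp(1j * np.angle(X))   # keep measured magnitude
        xp = np.fft.ifft(X).real                    # tentative object estimate
        # inside support: accept xp; outside: push the input toward zero
        x = np.where(support > 0, xp, x - beta * xp)
    return x
```

The feedback step outside the support (rather than hard zeroing, as in error reduction) is what lets HIO escape some local minima, and it is exactly this behaviour the reduced-dimensionality visualization in the paper probes.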
Tokuda, Junichi; Plishker, William; Torabi, Meysam; Olubiyi, Olutayo I; Zaki, George; Tatli, Servet; Silverman, Stuart G; Shekher, Raj; Hata, Nobuhiko
2015-06-01
Accuracy and speed are essential for the intraprocedural nonrigid magnetic resonance (MR) to computed tomography (CT) image registration in the assessment of tumor margins during CT-guided liver tumor ablations. Although both accuracy and speed can be improved by limiting the registration to a region of interest (ROI), manual contouring of the ROI prolongs the registration process substantially. To achieve accurate and fast registration without the use of an ROI, we combined a nonrigid registration technique on the basis of volume subdivision with hardware acceleration using a graphics processing unit (GPU). We compared the registration accuracy and processing time of GPU-accelerated volume subdivision-based nonrigid registration technique to the conventional nonrigid B-spline registration technique. Fourteen image data sets of preprocedural MR and intraprocedural CT images for percutaneous CT-guided liver tumor ablations were obtained. Each set of images was registered using the GPU-accelerated volume subdivision technique and the B-spline technique. Manual contouring of ROI was used only for the B-spline technique. Registration accuracies (Dice similarity coefficient [DSC] and 95% Hausdorff distance [HD]) and total processing time including contouring of ROIs and computation were compared using a paired Student t test. Accuracies of the GPU-accelerated registrations and B-spline registrations, respectively, were 88.3 ± 3.7% versus 89.3 ± 4.9% (P = .41) for DSC and 13.1 ± 5.2 versus 11.4 ± 6.3 mm (P = .15) for HD. Total processing time of the GPU-accelerated registration and B-spline registration techniques was 88 ± 14 versus 557 ± 116 seconds (P < .000000002), respectively; there was no significant difference in computation time despite the difference in the complexity of the algorithms (P = .71). The GPU-accelerated volume subdivision technique was as accurate as the B-spline technique and required significantly less processing time. 
The GPU-accelerated volume subdivision technique may enable the implementation of nonrigid registration into routine clinical practice. Copyright © 2015 AUR. Published by Elsevier Inc. All rights reserved.
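The paired Student t test used in the comparison above can be sketched in a few lines; the per-case DSC values below are hypothetical stand-ins for illustration, not the study's data.

```python
import math

def paired_t(a, b):
    """Paired Student t statistic: mean of the per-case differences
    divided by its standard error."""
    d = [x - y for x, y in zip(a, b)]
    n = len(d)
    mean = sum(d) / n
    var = sum((x - mean) ** 2 for x in d) / (n - 1)  # sample variance
    return mean / math.sqrt(var / n)

# Hypothetical per-case Dice similarity coefficients for two methods
gpu_dsc     = [0.88, 0.90, 0.85, 0.87, 0.91, 0.86, 0.89]
bspline_dsc = [0.89, 0.91, 0.86, 0.88, 0.90, 0.87, 0.90]
t_stat = paired_t(gpu_dsc, bspline_dsc)
```

In practice one would use a library routine such as `scipy.stats.ttest_rel`, which also returns the p value.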
Linear-scaling explicitly correlated treatment of solids: periodic local MP2-F12 method.
Usvyat, Denis
2013-11-21
Theory and implementation of the periodic local MP2-F12 method in the 3*A fixed-amplitude ansatz is presented. The method is formulated in the direct space, employing local representation for the occupied, virtual, and auxiliary orbitals in the form of Wannier functions (WFs), projected atomic orbitals (PAOs), and atom-centered Gaussian-type orbitals, respectively. Local approximations are introduced, restricting the list of the explicitly correlated pairs, as well as occupied, virtual, and auxiliary spaces in the strong orthogonality projector to the pair-specific domains on the basis of spatial proximity of respective orbitals. The 4-index two-electron integrals appearing in the formalism are approximated via the direct-space density fitting technique. In this procedure, the fitting orbital spaces are also restricted to local fit-domains surrounding the fitted densities. The formulation of the method and its implementation exploits the translational symmetry and the site-group symmetries of the WFs. Test calculations are performed on LiH crystal. The results show that the periodic LMP2-F12 method substantially accelerates basis set convergence of the total correlation energy, and even more so the correlation energy differences. The resulting energies are quite insensitive to the resolution-of-the-identity domain sizes and the quality of the auxiliary basis sets. The convergence with the orbital domain size is somewhat slower, but still acceptable. Moreover, inclusion of slightly more diffuse functions, than those usually used in the periodic calculations, improves the convergence of the LMP2-F12 correlation energy with respect to both the size of the PAO-domains and the quality of the orbital basis set. At the same time, the essentially diffuse atomic orbitals from standard molecular basis sets, commonly utilized in molecular MP2-F12 calculations, but problematic in the periodic context, are not necessary for LMP2-F12 treatment of crystals.
Bridging the gap between high and low acceleration for planetary escape
NASA Astrophysics Data System (ADS)
Indrikis, Janis; Preble, Jeffrey C.
With the exception of the often time-consuming analysis by numerical optimization, no single orbit transfer analysis technique exists that can be applied over a wide range of accelerations. Using the simple planetary escape (parabolic trajectory) mission, some of the more common techniques are considered as the limiting bastions at the high and extremely low acceleration regimes. The brachistochrone, the minimum time of flight path, is proposed as the technique to bridge the gap between the high and low acceleration regions, providing a smooth bridge over the entire acceleration spectrum. A smooth and continuous velocity requirement is established for the planetary escape mission. By using these results, it becomes possible to determine the effect of finite accelerations on mission performance and to target propulsion and power system designs that are consistent with a desired mission objective.
NASA Technical Reports Server (NTRS)
Fromme, J.; Golberg, M.; Werth, J.
1979-01-01
The numerical computation of unsteady airloads acting upon thin airfoils with multiple leading and trailing-edge controls in two-dimensional ventilated subsonic wind tunnels is studied. The foundation of the computational method is strengthened with a new and more powerful mathematical existence and convergence theory for solving Cauchy singular integral equations of the first kind, and the method of convergence acceleration by extrapolation to the limit is introduced to analyze airfoils with flaps. New results are presented for steady and unsteady flow, including the effect of acoustic resonance between ventilated wind-tunnel walls and airfoils with oscillating flaps. The computer program TWODI is available for general use and a complete set of instructions is provided.
Coarsening strategies for unstructured multigrid techniques with application to anisotropic problems
NASA Technical Reports Server (NTRS)
Morano, E.; Mavriplis, D. J.; Venkatakrishnan, V.
1995-01-01
Over the years, multigrid has been demonstrated as an efficient technique for solving inviscid flow problems. However, for viscous flows, convergence rates often degrade. This is generally due to the required use of stretched meshes (i.e., the aspect-ratio AR = delta y/delta x is much less than 1) in order to capture the boundary layer near the body. Usual techniques for generating a sequence of grids that produce proper convergence rates on isotropic meshes are not adequate for stretched meshes. This work focuses on the solution of Laplace's equation, discretized through a Galerkin finite-element formulation on unstructured stretched triangular meshes. A coarsening strategy is proposed and results are discussed.
Coarsening Strategies for Unstructured Multigrid Techniques with Application to Anisotropic Problems
NASA Technical Reports Server (NTRS)
Morano, E.; Mavriplis, D. J.; Venkatakrishnan, V.
1996-01-01
Over the years, multigrid has been demonstrated as an efficient technique for solving inviscid flow problems. However, for viscous flows, convergence rates often degrade. This is generally due to the required use of stretched meshes (i.e. the aspect-ratio AR = (delta)y/(delta)x much less than 1) in order to capture the boundary layer near the body. Usual techniques for generating a sequence of grids that produce proper convergence rates on isotropic meshes are not adequate for stretched meshes. This work focuses on the solution of Laplace's equation, discretized through a Galerkin finite-element formulation on unstructured stretched triangular meshes. A coarsening strategy is proposed and results are discussed.
Prasad, Rahul; Al-Keraif, Abdulaziz Abdullah; Kathuria, Nidhi; Gandhi, P V; Bhide, S V
2014-02-01
The purpose of this study was to determine whether the ringless casting and accelerated wax-elimination techniques can be combined to offer a cost-effective, clinically acceptable, and time-saving alternative for fabricating single unit castings in fixed prosthodontics. Sixty standardized wax copings were fabricated on a type IV stone replica of a stainless steel die. The wax patterns were divided into four groups. The first group was cast using the ringless investment technique and conventional wax-elimination method; the second group was cast using the ringless investment technique and accelerated wax-elimination method; the third group was cast using the conventional metal ring investment technique and conventional wax-elimination method; the fourth group was cast using the metal ring investment technique and accelerated wax-elimination method. The vertical marginal gap was measured at four sites per specimen, using a digital optical microscope at 100× magnification. The results were analyzed using two-way ANOVA to determine statistical significance. The vertical marginal gaps of castings fabricated using the ringless technique (76.98 ± 7.59 μm) were significantly smaller (p < 0.05) than those of castings fabricated using the conventional metal ring technique (138.44 ± 28.59 μm); however, the difference between the conventional (102.63 ± 36.12 μm) and accelerated wax-elimination (112.79 ± 38.34 μm) castings was not statistically significant (p > 0.05). The ringless investment technique can produce castings with higher accuracy and can be favorably combined with the accelerated wax-elimination method as a viable alternative to the time-consuming conventional technique of casting restorations in fixed prosthodontics. © 2013 by the American College of Prosthodontists.
CUDA GPU based full-Stokes finite difference modelling of glaciers
NASA Astrophysics Data System (ADS)
Brædstrup, C. F.; Egholm, D. L.
2012-04-01
Many have stressed the limitations of using the shallow shelf and shallow ice approximations when modelling ice streams or surging glaciers. Using a full-Stokes approach requires either large amounts of computer power or time and is therefore seldom an option for most glaciologists. Recent advances in graphics card (GPU) technology for high performance computing have proven extremely efficient in accelerating many large scale scientific computations. The general purpose GPU (GPGPU) technology is cheap, has a low power consumption and fits into a normal desktop computer. It could therefore provide a powerful tool for many glaciologists. Our full-Stokes ice sheet model implements a Red-Black Gauss-Seidel iterative linear solver to solve the full Stokes equations. This technique has proven very effective when applied to the Stokes equations in geodynamics problems, and should therefore also perform well in glaciological flow problems. The Gauss-Seidel iterator is known to be robust, but several other linear solvers have much faster convergence. To aid convergence, the solver uses a multigrid approach in which values are interpolated and extrapolated between different grid resolutions to minimize the short wavelength errors efficiently. This reduces the iteration count by several orders of magnitude. The run-time is further reduced by using the GPGPU technology, where each card has up to 448 cores. Researchers utilizing the GPGPU technique in other areas have reported 2-11 times speedup compared to multicore CPU implementations on similar problems. The goal of these initial investigations into the possible usage of GPGPU technology in glacial modelling is to apply the enhanced resolution of a full-Stokes solver to ice streams and surging glaciers. This is an area of growing interest because ice streams are the main drainage conduits for large ice sheets. It is therefore crucial to understand this streaming behaviour and its impact up-ice.
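A minimal serial sketch of the Red-Black Gauss-Seidel smoother described above, here applied to a 2D Poisson/Laplace problem on a regular grid: within one colour, every update is independent of the others, which is what maps well onto GPU threads. Grid size, colouring convention, and sweep count are illustrative choices, not the model's settings.

```python
def red_black_gauss_seidel(u, f, h, sweeps):
    """Gauss-Seidel sweeps for -Laplace(u) = f on a square grid with
    Dirichlet boundaries, updating red nodes ((i+j) even) then black."""
    n = len(u)
    for _ in range(sweeps):
        for colour in (0, 1):  # 0 = red, 1 = black
            for i in range(1, n - 1):
                for j in range(1, n - 1):
                    if (i + j) % 2 == colour:
                        u[i][j] = 0.25 * (u[i - 1][j] + u[i + 1][j]
                                          + u[i][j - 1] + u[i][j + 1]
                                          + h * h * f[i][j])
    return u
```

In the full solver this smoother would sit inside a multigrid cycle; on its own it converges, just slowly for fine grids.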
A Critical Study of Agglomerated Multigrid Methods for Diffusion
NASA Technical Reports Server (NTRS)
Nishikawa, Hiroaki; Diskin, Boris; Thomas, James L.
2011-01-01
Agglomerated multigrid techniques used in unstructured-grid methods are studied critically for a model problem representative of laminar diffusion in the incompressible limit. The studied target-grid discretizations and discretizations used on agglomerated grids are typical of current node-centered formulations. Agglomerated multigrid convergence rates are presented using a range of two- and three-dimensional randomly perturbed unstructured grids for simple geometries with isotropic and stretched grids. Two agglomeration techniques are used within an overall topology-preserving agglomeration framework. The results show that multigrid with an inconsistent coarse-grid scheme using only the edge terms (also referred to in the literature as a thin-layer formulation) provides considerable speedup over single-grid methods but its convergence deteriorates on finer grids. Multigrid with a Galerkin coarse-grid discretization using piecewise-constant prolongation and a heuristic correction factor is slower and also grid-dependent. In contrast, grid-independent convergence rates are demonstrated for multigrid with consistent coarse-grid discretizations. Convergence rates of multigrid cycles are verified with quantitative analysis methods in which parts of the two-grid cycle are replaced by their idealized counterparts.
Statistical methods for convergence detection of multi-objective evolutionary algorithms.
Trautmann, H; Wagner, T; Naujoks, B; Preuss, M; Mehnen, J
2009-01-01
In this paper, two approaches for estimating the generation in which a multi-objective evolutionary algorithm (MOEA) shows statistically significant signs of convergence are introduced. A set-based perspective is taken where convergence is measured by performance indicators. The proposed techniques fulfill the requirements of proper statistical assessment on the one hand and efficient optimisation for real-world problems on the other hand. The first approach accounts for the stochastic nature of the MOEA by repeating the optimisation runs for increasing generation numbers and analysing the performance indicators using statistical tools. This technique results in a very robust offline procedure. Moreover, an online convergence detection method is introduced as well. This method automatically stops the MOEA when either the variance of the performance indicators falls below a specified threshold or a stagnation of their overall trend is detected. Both methods are analysed and compared for two MOEAs on different classes of benchmark functions. It is shown that the methods successfully operate on all stated problems, requiring fewer function evaluations while preserving good approximation quality.
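The variance branch of the online stopping rule described above can be sketched as follows; the window size and threshold are illustrative parameters, not the paper's settings.

```python
def should_stop(indicator_history, window=10, var_threshold=1e-6):
    """Stop when the sample variance of the last `window` performance
    indicator values falls below a threshold (one of the two online
    criteria: the trend-stagnation test is not sketched here)."""
    if len(indicator_history) < window:
        return False
    tail = indicator_history[-window:]
    mean = sum(tail) / window
    var = sum((x - mean) ** 2 for x in tail) / (window - 1)
    return var < var_threshold
```

The MOEA would call this once per generation with the running list of indicator values (e.g. hypervolume).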
Variable convergence liquid layer implosions on the National Ignition Facility
Zylstra, A. B.; Yi, S. A.; Haines, B. M.; ...
2018-03-19
Liquid layer implosions using the “wetted foam” technique, where the liquid fuel is wicked into a supporting foam, have been recently conducted on the National Ignition Facility for the first time [Olson et al., Phys. Rev. Lett. 117, 245001 (2016)]. In this paper, we report on a series of wetted foam implosions where the convergence ratio was varied between 12 and 20. Reduced nuclear performance is observed as convergence ratio increases. 2-D radiation-hydrodynamics simulations accurately capture the performance at convergence ratios (CR) ~ 12, but we observe a significant discrepancy at CR ~ 20. Finally, this may be due to suppressed hot-spot formation or an anomalous energy loss mechanism.
Variable convergence liquid layer implosions on the National Ignition Facility
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zylstra, A. B.; Yi, S. A.; Haines, B. M.
Liquid layer implosions using the “wetted foam” technique, where the liquid fuel is wicked into a supporting foam, have been recently conducted on the National Ignition Facility for the first time [Olson et al., Phys. Rev. Lett. 117, 245001 (2016)]. In this paper, we report on a series of wetted foam implosions where the convergence ratio was varied between 12 and 20. Reduced nuclear performance is observed as convergence ratio increases. 2-D radiation-hydrodynamics simulations accurately capture the performance at convergence ratios (CR) ~ 12, but we observe a significant discrepancy at CR ~ 20. Finally, this may be due to suppressed hot-spot formation or an anomalous energy loss mechanism.
Development of higher-order modal methods for transient thermal and structural analysis
NASA Technical Reports Server (NTRS)
Camarda, Charles J.; Haftka, Raphael T.
1989-01-01
A force-derivative method which produces higher-order modal solutions to transient problems is evaluated. These higher-order solutions converge to an accurate response using fewer degrees-of-freedom (eigenmodes) than lower-order methods such as the mode-displacement or mode-acceleration methods. Results are presented for non-proportionally damped structural problems as well as thermal problems modeled by finite elements.
CPTAC Announces New PTRCs, PCCs, and PGDACs | Office of Cancer Clinical Proteomics Research
This week, the Office of Cancer Clinical Proteomics Research (OCCPR) at the National Cancer Institute (NCI), part of the National Institutes of Health, announced its aim to further the convergence of proteomics with genomics – “proteogenomics,” to better understand the molecular basis of cancer and accelerate research in these areas by disseminating research resources to the scientific community.
Indole synthesis by palladium-catalyzed tandem allylic isomerization - furan Diels-Alder reaction.
Xu, Jie; Wipf, Peter
2017-08-30
A Pd(0)-catalyzed elimination of an allylic acetate generates a π-allyl complex that is postulated to initiate a novel intramolecular Diels-Alder cycloaddition to a tethered furan (IMDAF). Under the reaction conditions, this convergent, microwave-accelerated cascade process provides substituted indoles in moderate to good yields after Pd-hydride elimination, aromatization by dehydration, and in situ N-Boc cleavage.
Neural Networks for Modeling and Control of Particle Accelerators
DOE Office of Scientific and Technical Information (OSTI.GOV)
Edelen, A. L.; Biedron, S. G.; Chase, B. E.
Particle accelerators are host to myriad nonlinear and complex physical phenomena. They often involve a multitude of interacting systems, are subject to tight performance demands, and should be able to run for extended periods of time with minimal interruptions. Often times, traditional control techniques cannot fully meet these requirements. One promising avenue is to introduce machine learning and sophisticated control techniques inspired by artificial intelligence, particularly in light of recent theoretical and practical advances in these fields. Within machine learning and artificial intelligence, neural networks are particularly well-suited to modeling, control, and diagnostic analysis of complex, nonlinear, and time-varying systems, as well as systems with large parameter spaces. Consequently, the use of neural network-based modeling and control techniques could be of significant benefit to particle accelerators. For the same reasons, particle accelerators are also ideal test-beds for these techniques. Many early attempts to apply neural networks to particle accelerators yielded mixed results due to the relative immaturity of the technology for such tasks. The purpose of this paper is to re-introduce neural networks to the particle accelerator community and report on some work in neural network control that is being conducted as part of a dedicated collaboration between Fermilab and Colorado State University (CSU). We also describe some of the challenges of particle accelerator control, highlight recent advances in neural network techniques, discuss some promising avenues for incorporating neural networks into particle accelerator control systems, and describe a neural network-based control system that is being developed for resonance control of an RF electron gun at the Fermilab Accelerator Science and Technology (FAST) facility, including initial experimental results from a benchmark controller.
Neural Networks for Modeling and Control of Particle Accelerators
NASA Astrophysics Data System (ADS)
Edelen, A. L.; Biedron, S. G.; Chase, B. E.; Edstrom, D.; Milton, S. V.; Stabile, P.
2016-04-01
Particle accelerators are host to myriad nonlinear and complex physical phenomena. They often involve a multitude of interacting systems, are subject to tight performance demands, and should be able to run for extended periods of time with minimal interruptions. Often times, traditional control techniques cannot fully meet these requirements. One promising avenue is to introduce machine learning and sophisticated control techniques inspired by artificial intelligence, particularly in light of recent theoretical and practical advances in these fields. Within machine learning and artificial intelligence, neural networks are particularly well-suited to modeling, control, and diagnostic analysis of complex, nonlinear, and time-varying systems, as well as systems with large parameter spaces. Consequently, the use of neural network-based modeling and control techniques could be of significant benefit to particle accelerators. For the same reasons, particle accelerators are also ideal test-beds for these techniques. Many early attempts to apply neural networks to particle accelerators yielded mixed results due to the relative immaturity of the technology for such tasks. The purpose of this paper is to re-introduce neural networks to the particle accelerator community and report on some work in neural network control that is being conducted as part of a dedicated collaboration between Fermilab and Colorado State University (CSU). We describe some of the challenges of particle accelerator control, highlight recent advances in neural network techniques, discuss some promising avenues for incorporating neural networks into particle accelerator control systems, and describe a neural network-based control system that is being developed for resonance control of an RF electron gun at the Fermilab Accelerator Science and Technology (FAST) facility, including initial experimental results from a benchmark controller.
Neural Networks for Modeling and Control of Particle Accelerators
Edelen, A. L.; Biedron, S. G.; Chase, B. E.; ...
2016-04-01
Particle accelerators are host to myriad nonlinear and complex physical phenomena. They often involve a multitude of interacting systems, are subject to tight performance demands, and should be able to run for extended periods of time with minimal interruptions. Often times, traditional control techniques cannot fully meet these requirements. One promising avenue is to introduce machine learning and sophisticated control techniques inspired by artificial intelligence, particularly in light of recent theoretical and practical advances in these fields. Within machine learning and artificial intelligence, neural networks are particularly well-suited to modeling, control, and diagnostic analysis of complex, nonlinear, and time-varying systems, as well as systems with large parameter spaces. Consequently, the use of neural network-based modeling and control techniques could be of significant benefit to particle accelerators. For the same reasons, particle accelerators are also ideal test-beds for these techniques. Many early attempts to apply neural networks to particle accelerators yielded mixed results due to the relative immaturity of the technology for such tasks. The purpose of this paper is to re-introduce neural networks to the particle accelerator community and report on some work in neural network control that is being conducted as part of a dedicated collaboration between Fermilab and Colorado State University (CSU). We also describe some of the challenges of particle accelerator control, highlight recent advances in neural network techniques, discuss some promising avenues for incorporating neural networks into particle accelerator control systems, and describe a neural network-based control system that is being developed for resonance control of an RF electron gun at the Fermilab Accelerator Science and Technology (FAST) facility, including initial experimental results from a benchmark controller.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Banerjee, Srutarshi; Rajan, Rehim N.; Singh, Sandeep K.
2014-07-01
DC accelerators undergo different types of discharges during operation. A model depicting these discharges has been simulated to study the different transient conditions. The paper presents a physics-based approach to developing a compact circuit model of the DC accelerator using the Partial Element Equivalent Circuit (PEEC) technique. The equivalent RLC model aids in analyzing the transient behavior of the system and predicting anomalies in the system. The electrical discharges and their properties prevailing in the accelerator can be evaluated by this equivalent model. A parallel coupled voltage multiplier structure is simulated in small scale using a few stages of corona guards, and the theoretical and practical results are compared. The PEEC technique leads to a simple model for studying fault conditions in accelerator systems. Compared to finite element techniques, this technique gives a circuital representation. The lumped components of the PEEC are used to obtain the input impedance, and the result is also compared to that of the FEM technique for a frequency range of (0-200) MHz. (author)
Economic Load Dispatch Using Adaptive Social Acceleration Constant Based Particle Swarm Optimization
NASA Astrophysics Data System (ADS)
Jain, N. K.; Nangia, Uma; Jain, Jyoti
2018-04-01
In this paper, an Adaptive Social Acceleration Constant based Particle Swarm Optimization (ASACPSO) has been developed which uses the best value of the social acceleration constant (Csg). Three formulations of Csg have been used to search for the best value of Csg. These three formulations led to the development of three algorithms, ALDPSO, AELDPSO-I and AELDPSO-II, which were implemented for Economic Load Dispatch of IEEE 5 bus, 14 bus and 30 bus systems. The best value of Csg was selected based on the minimum number of Kounts, i.e., the number of function evaluations required to minimize the function. This value of Csg was directly used in the basic PSO algorithm, which led to the development of the ASACPSO algorithm. ASACPSO was found to converge faster and give more accurate results than BPSO for IEEE 5, 14 and 30 bus systems.
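A minimal sketch of the basic PSO update in which the social acceleration constant (Csg in the paper) appears: it weights the pull toward the swarm's global best. ASACPSO adapts this constant during the search; the sketch below simply exposes it as a fixed parameter, and all names, bounds, and settings are illustrative.

```python
import random

def pso_minimize(f, dim, n_particles=20, iters=200,
                 c_personal=2.0, c_social=2.0, inertia=0.7, seed=1):
    """Basic PSO; c_social plays the role of Csg in the velocity update."""
    rng = random.Random(seed)
    xs = [[rng.uniform(-5.0, 5.0) for _ in range(dim)] for _ in range(n_particles)]
    vs = [[0.0] * dim for _ in range(n_particles)]
    pbest, pval = [x[:] for x in xs], [f(x) for x in xs]
    best = min(range(n_particles), key=lambda k: pval[k])
    g, gval = pbest[best][:], pval[best]
    for _ in range(iters):
        for k in range(n_particles):
            for d in range(dim):
                vs[k][d] = (inertia * vs[k][d]
                            + c_personal * rng.random() * (pbest[k][d] - xs[k][d])
                            + c_social * rng.random() * (g[d] - xs[k][d]))
                xs[k][d] += vs[k][d]
            val = f(xs[k])
            if val < pval[k]:
                pval[k], pbest[k] = val, xs[k][:]
                if val < gval:
                    g, gval = xs[k][:], val
    return g, gval
```

For economic load dispatch, `f` would be the fuel-cost function of the generator outputs with constraint penalties.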
NASA Astrophysics Data System (ADS)
Kan, Guangyuan; He, Xiaoyan; Ding, Liuqian; Li, Jiren; Hong, Yang; Zuo, Depeng; Ren, Minglei; Lei, Tianjie; Liang, Ke
2018-01-01
Hydrological model calibration has been a hot issue for decades. The shuffled complex evolution method developed at the University of Arizona (SCE-UA) has been proved to be an effective and robust optimization approach. However, its computational efficiency deteriorates significantly when the amount of hydrometeorological data increases. In recent years, the rise of heterogeneous parallel computing has brought hope for the acceleration of hydrological model calibration. This study proposed a parallel SCE-UA method and applied it to the calibration of a watershed rainfall-runoff model, the Xinanjiang model. The parallel method was implemented on heterogeneous computing systems using OpenMP and CUDA. Performance testing and sensitivity analysis were carried out to verify its correctness and efficiency. Comparison results indicated that heterogeneous parallel computing-accelerated SCE-UA converged much more quickly than the original serial version and possessed satisfactory accuracy and stability for the task of fast hydrological model calibration.
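The core idea, evaluating the objective function for many parameter sets independently and in parallel, can be sketched with Python threads; the paper's implementation uses OpenMP and CUDA, so this is only an illustration of the parallel-evaluation pattern, with illustrative names throughout.

```python
from concurrent.futures import ThreadPoolExecutor

def evaluate_population(objective, params_list, workers=4):
    """Run the (typically expensive) rainfall-runoff objective for each
    parameter set concurrently; results come back in input order."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(objective, params_list))
```

In SCE-UA this evaluation step dominates the runtime, which is why parallelizing it accelerates the whole calibration.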
DOE Office of Scientific and Technical Information (OSTI.GOV)
Awe, Thomas James; Peterson, Kyle J.; Yu, Edmund P.
Enhanced implosion stability has been experimentally demonstrated for magnetically accelerated liners that are coated with 70 μm of dielectric. The dielectric tamps liner-mass redistribution from electrothermal instabilities and also buffers coupling of the drive magnetic field to the magneto-Rayleigh-Taylor instability. A dielectric-coated and axially premagnetized beryllium liner was radiographed at a convergence ratio [CR=R in,0/R in(z,t)] of 20, which is the highest CR ever directly observed for a strengthless magnetically driven liner. Lastly, the inner-wall radius R in(z,t) displayed unprecedented uniformity, varying from 95 to 130 μm over the 4.0 mm axial height captured by the radiograph.
Awe, Thomas James; Peterson, Kyle J.; Yu, Edmund P.; ...
2016-02-10
Enhanced implosion stability has been experimentally demonstrated for magnetically accelerated liners that are coated with 70 μm of dielectric. The dielectric tamps liner-mass redistribution from electrothermal instabilities and also buffers coupling of the drive magnetic field to the magneto-Rayleigh-Taylor instability. A dielectric-coated and axially premagnetized beryllium liner was radiographed at a convergence ratio [CR=R in,0/R in(z,t)] of 20, which is the highest CR ever directly observed for a strengthless magnetically driven liner. Lastly, the inner-wall radius R in(z,t) displayed unprecedented uniformity, varying from 95 to 130 μm over the 4.0 mm axial height captured by the radiograph.
NASA Technical Reports Server (NTRS)
Oden, J. Tinsley
1995-01-01
Underintegrated methods are investigated with respect to their stability and convergence properties. The focus was on identifying regions where they work and regions where techniques such as hourglass viscosity and hourglass control can be used. Results obtained show that underintegrated methods typically lead to finite element stiffness matrices with spurious modes in the solution. However, problems exist (scalar elliptic boundary value problems) where underintegration with hourglass control yields convergent solutions. Also, stress averaging in underintegrated stiffness calculations does not necessarily lead to stable or convergent stress states.
Community-level education accelerates the cultural evolution of fertility decline.
Colleran, Heidi; Jasienska, Grazyna; Nenko, Ilona; Galbarczyk, Andrzej; Mace, Ruth
2014-03-22
Explaining why fertility declines as populations modernize is a profound theoretical challenge. It remains unclear whether the fundamental drivers are economic or cultural in nature. Cultural evolutionary theory suggests that community-level characteristics, for example average education, can alter how low-fertility preferences are transmitted and adopted. These assumptions have not been empirically tested. Here, we show that community-level education accelerates fertility decline in a way that is neither predicted by individual characteristics, nor by the level of economic modernization in a population. In 22 high-fertility communities in Poland, fertility converged on a smaller family size as average education in the community increased; indeed, community-level education had a larger impact on fertility decline than did individual education. This convergence was not driven by educational levels being more homogeneous, but by less educated women having fewer children than expected, and more highly educated social networks, when living among more highly educated neighbours. The average level of education in a community may influence the social partners women interact with, both within and beyond their immediate social environments, altering the reproductive norms they are exposed to. Given a critical mass of highly educated women, less educated neighbours may adopt their reproductive behaviour, accelerating the pace of demographic transition. Individual characteristics alone cannot capture these dynamics and studies relying solely on them may systematically underestimate the importance of cultural transmission in driving fertility declines. Our results are inconsistent with a purely individualistic, rational-actor model of fertility decline and suggest that optimization of reproduction is partly driven by cultural dynamics beyond the individual.
Community-level education accelerates the cultural evolution of fertility decline
Colleran, Heidi; Jasienska, Grazyna; Nenko, Ilona; Galbarczyk, Andrzej; Mace, Ruth
2014-01-01
Explaining why fertility declines as populations modernize is a profound theoretical challenge. It remains unclear whether the fundamental drivers are economic or cultural in nature. Cultural evolutionary theory suggests that community-level characteristics, for example average education, can alter how low-fertility preferences are transmitted and adopted. These assumptions have not been empirically tested. Here, we show that community-level education accelerates fertility decline in a way that is neither predicted by individual characteristics, nor by the level of economic modernization in a population. In 22 high-fertility communities in Poland, fertility converged on a smaller family size as average education in the community increased—indeed community-level education had a larger impact on fertility decline than did individual education. This convergence was not driven by educational levels being more homogeneous, but by less educated women having fewer children than expected, and more highly educated social networks, when living among more highly educated neighbours. The average level of education in a community may influence the social partners women interact with, both within and beyond their immediate social environments, altering the reproductive norms they are exposed to. Given a critical mass of highly educated women, less educated neighbours may adopt their reproductive behaviour, accelerating the pace of demographic transition. Individual characteristics alone cannot capture these dynamics and studies relying solely on them may systematically underestimate the importance of cultural transmission in driving fertility declines. Our results are inconsistent with a purely individualistic, rational-actor model of fertility decline and suggest that optimization of reproduction is partly driven by cultural dynamics beyond the individual. PMID:24500166
Experiences with Markov Chain Monte Carlo Convergence Assessment in Two Psychometric Examples
ERIC Educational Resources Information Center
Sinharay, Sandip
2004-01-01
There is an increasing use of Markov chain Monte Carlo (MCMC) algorithms for fitting statistical models in psychometrics, especially in situations where the traditional estimation techniques are very difficult to apply. One of the disadvantages of using an MCMC algorithm is that it is not straightforward to determine the convergence of the…
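The convergence assessment the abstract refers to is typically carried out with quantitative multi-chain diagnostics. As an illustration only (the specific diagnostics used in the paper are not stated here), a minimal numpy sketch of the Gelman-Rubin potential scale reduction factor:

```python
import numpy as np

def gelman_rubin(chains):
    """Potential scale reduction factor (R-hat) for chains shaped (m, n)."""
    chains = np.asarray(chains, dtype=float)
    m, n = chains.shape
    chain_means = chains.mean(axis=1)
    B = n * chain_means.var(ddof=1)           # between-chain variance
    W = chains.var(axis=1, ddof=1).mean()     # mean within-chain variance
    var_hat = (n - 1) / n * W + B / n         # pooled variance estimate
    return np.sqrt(var_hat / W)

rng = np.random.default_rng(0)
mixed = rng.normal(0.0, 1.0, size=(4, 2000))             # chains sampling the same target
stuck = mixed + np.array([[0.0], [0.0], [0.0], [5.0]])   # one chain stuck off target
```

Values near 1 suggest the chains have mixed; a common rule of thumb flags R-hat above roughly 1.1 as evidence of non-convergence.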
Stochastic Leader Gravitational Search Algorithm for Enhanced Adaptive Beamforming Technique
Darzi, Soodabeh; Islam, Mohammad Tariqul; Tiong, Sieh Kiong; Kibria, Salehin; Singh, Mandeep
2015-01-01
In this paper, stochastic leader gravitational search algorithm (SL-GSA) based on randomized k is proposed. Standard GSA (SGSA) utilizes the best agents without any randomization, thus it is more prone to converge to suboptimal results. Initially, the new approach randomly chooses k agents from the set of all agents to improve the global search ability. Gradually, the set of agents is reduced by eliminating the agents with the poorest performances to allow rapid convergence. The performance of the SL-GSA was analyzed for six well-known benchmark functions, and the results are compared with SGSA and some of its variants. Furthermore, the SL-GSA is applied to the minimum variance distortionless response (MVDR) beamforming technique to ensure compatibility with real-world optimization problems. The proposed algorithm demonstrates a superior convergence rate and quality of solution for both real-world problems and benchmark functions compared to the original algorithm and other recent variants of SGSA. PMID:26552032
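The random-leader-selection and worst-agent-elimination ideas can be sketched in a few lines. The following is a deliberately simplified toy (agents are attracted toward a mass-weighted centroid of k randomly chosen leaders, and the worst agent is dropped each iteration), not the full SL-GSA force/velocity update:

```python
import numpy as np

def sl_gsa(f, dim=5, n_agents=30, k=10, iters=150, seed=1):
    """Toy sketch of stochastic-leader GSA minimizing f over [-5, 5]^dim."""
    rng = np.random.default_rng(seed)
    X = rng.uniform(-5, 5, (n_agents, dim))
    for t in range(iters):
        fit = np.array([f(x) for x in X])
        worst, best = fit.max(), fit.min()
        m = (worst - fit + 1e-12) / (worst - best + 1e-12)  # heavier mass = better fitness
        # Stochastic leader selection: k random agents, not just the best ones
        leaders = rng.choice(len(X), size=min(k, len(X)), replace=False)
        w = m[leaders] / m[leaders].sum()
        target = w @ X[leaders]                             # mass-weighted leader centroid
        X = X + rng.random((len(X), 1)) * (target - X)      # simplified gravitational pull
        if len(X) > k:                                      # eliminate the poorest performer
            fit = np.array([f(x) for x in X])
            X = np.delete(X, np.argmax(fit), axis=0)
    fit = np.array([f(x) for x in X])
    return X[np.argmin(fit)], float(fit.min())

sphere = lambda x: float(np.sum(x * x))   # benchmark function
x_best, f_best = sl_gsa(sphere)
```

Shrinking the population concentrates later iterations on the best agents, mirroring the rapid-convergence phase described in the abstract.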
Convergence of the Graph Allen-Cahn Scheme
NASA Astrophysics Data System (ADS)
Luo, Xiyang; Bertozzi, Andrea L.
2017-05-01
The graph Laplacian and the graph cut problem are closely related to Markov random fields, and have many applications in clustering and image segmentation. The diffuse interface model is widely used for modeling in material science, and can also be used as a proxy to total variation minimization. In Bertozzi and Flenner (Multiscale Model Simul 10(3):1090-1118, 2012), an algorithm was developed to generalize the diffuse interface model to graphs to solve the graph cut problem. This work analyzes the conditions for the graph diffuse interface algorithm to converge. Using techniques from numerical PDE and convex optimization, monotonicity in function value and convergence under an a posteriori condition are shown for a class of schemes under a graph-independent stepsize condition. We also generalize our results to incorporate spectral truncation, a common technique used to save computation cost, and also to the case of multiclass classification. Various numerical experiments are done to compare theoretical results with practical performance.
A systematic FPGA acceleration design for applications based on convolutional neural networks
NASA Astrophysics Data System (ADS)
Dong, Hao; Jiang, Li; Li, Tianjian; Liang, Xiaoyao
2018-04-01
Most FPGA accelerators for convolutional neural networks are designed to optimize the inner accelerator and neglect optimization of the data path between the inner accelerator and the outer system. This can lead to poor performance in applications such as real-time video object detection. We propose a new systematic FPGA acceleration design to solve this problem. The design takes the data path between the inner accelerator and the outer system into consideration and optimizes it using techniques such as hardware format transformation and frame compression. It also applies fixed-point arithmetic and a new pipeline technique to optimize the inner accelerator. Together these yield very good final system performance, about 10 times that of the original system.
Non-linear Multidimensional Optimization for use in Wire Scanner Fitting
NASA Astrophysics Data System (ADS)
Henderson, Alyssa; Terzic, Balsa; Hofler, Alicia; Center Advanced Studies of Accelerators Collaboration
2014-03-01
To ensure experiment efficiency and quality from the Continuous Electron Beam Accelerator Facility at Jefferson Lab, beam energy, size, and position must be measured. Wire scanners are devices inserted into the beamline to produce measurements which are used to obtain beam properties. Extracting physical information from the wire scanner measurements begins by fitting Gaussian curves to the data. This study focuses on optimizing and automating this curve-fitting procedure. We use a hybrid approach combining the efficiency of the Newton Conjugate Gradient (NCG) method with the global convergence of three nature-inspired (NI) optimization approaches: genetic algorithm, differential evolution, and particle swarm. In this Python-implemented approach, augmenting the locally convergent NCG with one of the globally convergent methods ensures the quality, robustness, and automation of curve-fitting. After comparing the methods, we establish that given an initial data-derived guess, each finds a solution with the same chi-square, a measure of the agreement between the fit and the data. NCG is the fastest method, so it is the first to attempt data-fitting. The curve-fitting procedure escalates to one of the globally convergent NI methods only if NCG fails, thereby ensuring a successful fit. This method yields an optimal signal fit and can be easily applied to similar problems.
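The escalation logic described above can be sketched generically. In this hypothetical stand-in, a cheap numeric-gradient descent plays the role of the fast local (NCG) fit, and a random global search plays the role of the nature-inspired fallback; the data-derived initial guess mirrors the abstract's approach:

```python
import numpy as np

def gaussian(x, a, mu, sig):
    return a * np.exp(-0.5 * ((x - mu) / sig) ** 2)

def chi_square(p, x, y):
    return float(np.sum((gaussian(x, *p) - y) ** 2))

def local_fit(x, y, p0, iters=5000, lr=0.002):
    """Cheap local optimizer (numeric-gradient descent) standing in for NCG."""
    p = np.asarray(p0, dtype=float)
    for _ in range(iters):
        g = np.zeros(3)
        for i in range(3):                    # central-difference gradient
            dp = np.zeros(3); dp[i] = 1e-6
            g[i] = (chi_square(p + dp, x, y) - chi_square(p - dp, x, y)) / 2e-6
        p = p - lr * g
    return p

def fit_with_escalation(x, y, tol=1e-3, seed=7):
    """Try the fast local fit first; escalate to a global search only if it fails."""
    p0 = [y.max(), x[np.argmax(y)], (x[-1] - x[0]) / 10]   # data-derived guess
    p = local_fit(x, y, p0)
    if chi_square(p, x, y) > tol:                          # local fit failed: escalate
        rng = np.random.default_rng(seed)
        cand = rng.uniform([0, x[0], 0.01],
                           [2 * y.max(), x[-1], x[-1] - x[0]], (5000, 3))
        p = min(cand, key=lambda q: chi_square(q, x, y))   # crude global stage
        p = local_fit(x, y, p)                             # polish the global winner
    return p

x = np.linspace(-5, 5, 101)
y = gaussian(x, 3.0, 0.5, 1.2)     # noise-free synthetic wire-scanner signal
p = fit_with_escalation(x, y)
```

The key design point is that the expensive global stage runs only when the chi-square of the local fit exceeds a tolerance, preserving speed in the common case.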
Henriksen, Niel M.; Roe, Daniel R.; Cheatham, Thomas E.
2013-01-01
Molecular dynamics force field development and assessment requires a reliable means for obtaining a well-converged conformational ensemble of a molecule in both a time-efficient and cost-effective manner. This remains a challenge for RNA because its rugged energy landscape results in slow conformational sampling and accurate results typically require explicit solvent which increases computational cost. To address this, we performed both traditional and modified replica exchange molecular dynamics simulations on a test system (alanine dipeptide) and an RNA tetramer known to populate A-form-like conformations in solution (single-stranded rGACC). A key focus is on providing the means to demonstrate that convergence is obtained, for example by investigating replica RMSD profiles and/or detailed ensemble analysis through clustering. We found that traditional replica exchange simulations still require prohibitive time and resource expenditures, even when using GPU accelerated hardware, and our results are not well converged even at 2 microseconds of simulation time per replica. In contrast, a modified version of replica exchange, reservoir replica exchange in explicit solvent, showed much better convergence and proved to be both a cost-effective and reliable alternative to the traditional approach. We expect this method will be attractive for future research that requires quantitative conformational analysis from explicitly solvated simulations. PMID:23477537
2D Inviscid and Viscous Inverse Design Using Continuous Adjoint and Lax-Wendroff Formulation
NASA Astrophysics Data System (ADS)
Proctor, Camron Lisle
The continuous adjoint (CA) technique for optimization and/or inverse-design of aerodynamic components has seen nearly 30 years of documented success in academia. The benefits of using CA versus a direct sensitivity analysis are shown repeatedly in the literature. However, the use of CA in industry is relatively unheard-of. The sparseness of industry contributions to the field may be attributed to the tediousness of the derivation and/or to the difficulties in implementation due to the lack of well-documented adjoint numerical methods. The focus of this work has been to thoroughly document the techniques required to build a two-dimensional CA inverse-design tool. To this end, this work begins with a short background on computational fluid dynamics (CFD) and the use of optimization tools in conjunction with CFD tools to solve aerodynamic optimization problems. A thorough derivation of the continuous adjoint equations and the accompanying gradient calculations for inviscid and viscous constraining equations follows the introduction. Next, the numerical techniques used for solving the partial differential equations (PDEs) governing the flow equations and the adjoint equations are described. Numerical techniques for the supplementary equations are discussed briefly. Subsequently, the efficacy of the inverse-design tool for the inviscid adjoint equations is verified, and possible numerical implementation pitfalls are discussed. The NACA0012 airfoil is used as the initial airfoil with the NACA16009 surface pressure distribution as the design target, and vice versa. Using a Savitzky-Golay gradient filter, convergence (defined as a cost function below 1E-5) is reached in approximately 220 design iterations using 121 design variables. The inviscid inverse-design results are followed by a discussion of the viscous inverse-design results and the techniques used to further the convergence of the optimizer.
The relationship between limiting step-size and convergence in a line-search optimization is shown to slightly decrease the final cost function at significant computational cost. A gradient damping technique is presented and shown to increase the convergence rate for the optimization in viscous problems, at a negligible increase in computational cost, but is insufficient to converge the solution. Systematically including adjacent surface vertices in the perturbation of a design variable, also a surface vertex, is shown to affect the convergence capability of the viscous optimizer. Finally, a comparison of using inviscid adjoint equations, as opposed to viscous adjoint equations, on viscous flow is presented, and the inviscid adjoint paired with viscous flow is found to reduce the cost function further than the viscous adjoint for the presented problem.
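Gradient filtering of the kind mentioned above (a Savitzky-Golay filter applied to the design gradient) fits a low-order polynomial over a sliding window by least squares and keeps the value at the window center. A small self-contained sketch, not the author's code, applied to a noisy gradient over 121 design variables:

```python
import numpy as np

def savgol_smooth(g, window=7, order=2):
    """Savitzky-Golay smoothing of a 1-D signal (interior points only)."""
    half = window // 2
    # Design matrix for a degree-`order` polynomial on window offsets -half..half
    A = np.vander(np.arange(-half, half + 1), order + 1, increasing=True)
    # First row of the pseudoinverse evaluates the local fit at the window center
    kernel = np.linalg.pinv(A)[0]
    smoothed = np.asarray(g, dtype=float).copy()
    for i in range(half, len(g) - half):
        smoothed[i] = kernel @ g[i - half:i + half + 1]
    return smoothed

rng = np.random.default_rng(2)
x = np.linspace(0, 1, 121)
noisy = x**2 + rng.normal(0, 0.05, x.size)   # noisy "gradient" over 121 design variables
smooth = savgol_smooth(noisy)
```

Because the filter reproduces polynomials up to the fit order exactly, it suppresses high-frequency gradient noise without flattening smooth trends, which is why it can help a line-search optimizer converge.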
Trescott, Peter C.; Pinder, George Francis; Larson, S.P.
1976-01-01
The model will simulate ground-water flow in an artesian aquifer, a water-table aquifer, or a combined artesian and water-table aquifer. The aquifer may be heterogeneous and anisotropic and have irregular boundaries. The source term in the flow equation may include well discharge, constant recharge, leakage from confining beds in which the effects of storage are considered, and evapotranspiration as a linear function of depth to water. The theoretical development includes presentation of the appropriate flow equations and derivation of the finite-difference approximations (written for a variable grid). The documentation emphasizes the numerical techniques that can be used for solving the simultaneous equations and describes the results of numerical experiments using these techniques. Of the three numerical techniques available in the model, the strongly implicit procedure, in general, requires less computer time and has fewer numerical difficulties than do the iterative alternating direction implicit procedure and line successive overrelaxation (which includes a two-dimensional correction procedure to accelerate convergence). The documentation includes a flow chart, program listing, an example simulation, and sections on designing an aquifer model and requirements for data input. It illustrates how model results can be presented on the line printer and pen plotters with a program that utilizes the graphical display software available from the Geological Survey Computer Center Division. In addition the model includes options for reading input data from a disk and writing intermediate results on a disk.
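The relaxation solvers the report compares share a common sweep-and-update structure. As a simplified illustration (point successive over-relaxation rather than the report's line SOR, on a steady-state head field with Dirichlet boundaries, assuming a uniform grid and a homogeneous, isotropic aquifer):

```python
import numpy as np

def sor_laplace(h, omega=1.7, tol=1e-8, max_iter=5000):
    """Successive over-relaxation for the steady 2-D Laplace equation."""
    h = h.copy()
    for it in range(max_iter):
        max_change = 0.0
        for i in range(1, h.shape[0] - 1):
            for j in range(1, h.shape[1] - 1):
                new = (1 - omega) * h[i, j] + omega * 0.25 * (
                    h[i - 1, j] + h[i + 1, j] + h[i, j - 1] + h[i, j + 1])
                max_change = max(max_change, abs(new - h[i, j]))
                h[i, j] = new
        if max_change < tol:
            return h, it + 1           # converged
    return h, max_iter

# Boundary head taken from the harmonic function x + y; interior starts at zero
n = 20
x, y = np.meshgrid(np.linspace(0, 1, n), np.linspace(0, 1, n), indexing="ij")
h0 = x + y
h0[1:-1, 1:-1] = 0.0
h, iters = sor_laplace(h0)
```

With omega = 1 this reduces to Gauss-Seidel; over-relaxation (1 < omega < 2) is what accelerates convergence, and line variants such as LSOR update a whole grid row implicitly per sweep.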
Process modelling for materials preparation experiments
NASA Technical Reports Server (NTRS)
Rosenberger, Franz; Alexander, J. Iwan D.
1994-01-01
The main goals of the research under this grant consist of the development of mathematical tools and measurement techniques for transport properties necessary for high fidelity modelling of crystal growth from the melt and solution. Of the tasks described in detail in the original proposal, two remain to be worked on: development of a spectral code for moving boundary problems, and development of an expedient diffusivity measurement technique for concentrated and supersaturated solutions. We have focused on developing a code to solve for interface shape, heat and species transport during directional solidification. The work involved the computation of heat, mass and momentum transfer during Bridgman-Stockbarger solidification of compound semiconductors. Domain decomposition techniques and preconditioning methods were used in conjunction with Chebyshev spectral methods to accelerate convergence while retaining the high-order spectral accuracy. During the report period we have further improved our experimental setup. These improvements include: temperature control of the measurement cell to 0.1 °C between 10 and 60 °C; enclosure of the optical measurement path outside the ZYGO interferometer in a metal housing that is temperature controlled to the same temperature setting as the measurement cell; simultaneous dispensing and partial removal of the lower concentration (lighter) solution above the higher concentration (heavier) solution through independently motor-driven syringes; three-fold increase in data resolution by orientation of the interferometer with respect to diffusion direction; and increase of the optical path length in the solution cell to 12 mm.
Studying the precision of ray tracing techniques with Szekeres models
NASA Astrophysics Data System (ADS)
Koksbang, S. M.; Hannestad, S.
2015-07-01
The simplest standard ray tracing scheme employing the Born and Limber approximations and neglecting lens-lens coupling is used for computing the convergence along individual rays in mock N-body data based on Szekeres swiss cheese and onion models. The results are compared with the exact convergence computed using the exact Szekeres metric combined with the Sachs formalism. A comparison is also made with an extension of the simple ray tracing scheme which includes the Doppler convergence. The exact convergence is reproduced very precisely as the sum of the gravitational and Doppler convergences along rays in Lemaitre-Tolman-Bondi swiss cheese and single void models. This is not the case when the swiss cheese models are based on nonsymmetric Szekeres models. For such models, there is a significant deviation between the exact and ray traced paths and hence also the corresponding convergences. There is also a clear deviation between the exact and ray tracing results obtained when studying both nonsymmetric and spherically symmetric Szekeres onion models.
NASA Astrophysics Data System (ADS)
LI, Q.; Lee, S.
2016-12-01
The relationship between Antarctic Circumpolar Current (ACC) jets and eddy fluxes in the Indo-western Pacific Southern Ocean (90°E-145°E) is investigated using an eddy-resolving model. In this region, transient eddy momentum flux convergence occurs at the latitude of the primary jet core, whereas eddy buoyancy flux is located over a broader region that encompasses the jet and the inter-jet minimum. In a small sector (120°E-144°E) where jets are especially zonal, a spatial and temporal decomposition of the eddy fluxes further reveals that fast eddies act to accelerate the jet with the maximum eddy momentum flux convergence at the jet center, while slow eddies tend to decelerate the zonal current at the inter-jet minimum. Transformed Eulerian mean (TEM) diagnostics reveals that the eddy momentum contribution accelerates the jets at all model depths, whereas the buoyancy flux contribution decelerates the jets at depths below 600 m. In ocean sectors where the jets are relatively well defined, there exist jet-scale overturning circulations (JSOC) with sinking motion on the equatorward flank, and rising motion on the poleward flank of the jets. The location and structure of these thermally indirect circulations suggest that they are driven by the eddy momentum flux convergence, much like the Ferrel cell in the atmosphere. This study also found that the JSOC plays a significant role in the oceanic heat transport and that it also contributes to the formation of a thin band of mixed layer that exists on the equatorward flank of the Indo-western Pacific ACC jets.
NASA Astrophysics Data System (ADS)
Karakatsanis, Nicolas A.; Rahmim, Arman
2014-03-01
Graphical analysis is employed in the research setting to provide quantitative estimation of PET tracer kinetics from dynamic images at a single bed. Recently, we proposed a multi-bed dynamic acquisition framework enabling clinically feasible whole-body parametric PET imaging by employing post-reconstruction parameter estimation. In addition, by incorporating linear Patlak modeling within the system matrix, we enabled direct 4D reconstruction in order to effectively circumvent noise amplification in dynamic whole-body imaging. However, direct 4D Patlak reconstruction exhibits a relatively slow convergence due to the presence of non-sparse spatial correlations in temporal kinetic analysis. In addition, the standard Patlak model does not account for reversible uptake, thus underestimating the influx rate Ki. We have developed a novel whole-body PET parametric reconstruction framework in the STIR platform, a widely employed open-source reconstruction toolkit, a) enabling accelerated convergence of direct 4D multi-bed reconstruction, by employing a nested algorithm to decouple the temporal parameter estimation from the spatial image update process, and b) enhancing the quantitative performance particularly in regions with reversible uptake, by pursuing a non-linear generalized Patlak 4D nested reconstruction algorithm. A set of published kinetic parameters and the XCAT phantom were employed for the simulation of dynamic multi-bed acquisitions. Quantitative analysis on the Ki images demonstrated considerable acceleration in the convergence of the nested 4D whole-body Patlak algorithm. In addition, our simulated and patient whole-body data in the post-reconstruction domain indicated the quantitative benefits of our extended generalized Patlak 4D nested reconstruction for tumor diagnosis and treatment response monitoring.
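The linear Patlak model underlying this framework estimates the influx rate Ki as the slope of a transformed time-activity plot. A toy numpy sketch of the standard graphical analysis (synthetic data with an assumed constant plasma input; this is not the STIR 4D implementation):

```python
import numpy as np

# Simulated constant plasma input and an irreversible (Patlak-linear) tissue curve
t = np.linspace(1, 60, 60)                     # minutes
Cp = np.full_like(t, 1.0)                      # plasma concentration (arbitrary units)
Ki_true, Vb = 0.05, 0.2                        # assumed influx rate and blood volume term
integral_Cp = np.cumsum(Cp) * (t[1] - t[0])    # running integral of the input
Ct = Ki_true * integral_Cp + Vb * Cp           # tissue time-activity curve

# Patlak plot: y = Ct/Cp versus x = (integral of Cp)/Cp; the slope is Ki
x = integral_Cp / Cp
y = Ct / Cp
Ki_est, intercept = np.polyfit(x, y, 1)
```

With reversible uptake the plot bends away from a straight line at late times, which is why the abstract pursues a generalized (non-linear) Patlak model to avoid underestimating Ki.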
Statistical Symbolic Execution with Informed Sampling
NASA Technical Reports Server (NTRS)
Filieri, Antonio; Pasareanu, Corina S.; Visser, Willem; Geldenhuys, Jaco
2014-01-01
Symbolic execution techniques have been proposed recently for the probabilistic analysis of programs. These techniques seek to quantify the likelihood of reaching program events of interest, e.g., assert violations. They have many promising applications but have scalability issues due to high computational demand. To address this challenge, we propose a statistical symbolic execution technique that performs Monte Carlo sampling of the symbolic program paths and uses the obtained information for Bayesian estimation and hypothesis testing with respect to the probability of reaching the target events. To speed up the convergence of the statistical analysis, we propose Informed Sampling, an iterative symbolic execution that first explores the paths that have high statistical significance, prunes them from the state space and guides the execution towards less likely paths. The technique combines Bayesian estimation with a partial exact analysis for the pruned paths leading to provably improved convergence of the statistical analysis. We have implemented statistical symbolic execution with informed sampling in the Symbolic PathFinder tool. We show experimentally that the informed sampling obtains more precise results and converges faster than a purely statistical analysis and may also be more efficient than an exact symbolic analysis. When the latter does not terminate, symbolic execution with informed sampling can give meaningful results under the same time and memory limits.
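The Bayesian estimation step can be sketched with a Beta-Bernoulli model over sampled executions. Here a toy input predicate stands in for a symbolic path condition, and the pruning/informed-exploration machinery is not modeled:

```python
import numpy as np

rng = np.random.default_rng(3)

def program_hits_target(x):
    """Toy stand-in for 'execution reaches the target event' (hypothetical)."""
    return x < 0.3            # assert-violation region with true probability 0.3

# Monte Carlo sampling of inputs, with a uniform Beta(1, 1) prior on the hit probability
n = 20000
hits = int(np.sum(program_hits_target(rng.random(n))))
posterior_mean = (1 + hits) / (2 + n)          # Beta posterior mean
# 95% credible-interval half-width (normal approximation to the Beta posterior)
half_width = 1.96 * np.sqrt(posterior_mean * (1 - posterior_mean) / n)
```

Informed sampling improves on this by computing the probability mass of the most likely paths exactly, so the sampler only has to estimate the small remaining mass, shrinking the credible interval faster.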
An improved NAS-RIF algorithm for image restoration
NASA Astrophysics Data System (ADS)
Gao, Weizhe; Zou, Jianhua; Xu, Rong; Liu, Changhai; Li, Hengnian
2016-10-01
Space optical images are inevitably degraded by atmospheric turbulence, error of the optical system and motion. In order to recover the true image, a novel nonnegativity and support constraints recursive inverse filtering (NAS-RIF) algorithm is proposed to restore the degraded image. Firstly, the image noise is weakened by a Contourlet denoising algorithm. Secondly, a reliable object support region estimation is used to accelerate the algorithm convergence. We introduce optimal threshold segmentation technology to improve the object support region. Finally, an object construction limit and the logarithm function are added to enhance algorithm stability. Experimental results demonstrate that the proposed algorithm can increase the PSNR and improve the quality of the restored images. The convergence speed of the proposed algorithm is faster than that of the original NAS-RIF algorithm.
Accelerated Learning Techniques for the Foreign Language Class: A Personal View.
ERIC Educational Resources Information Center
Bancroft, W. Jane
Foreign language instructors cope with problems of learner anxiety in the classroom, fossilization of language use and language skill loss. Relaxation and concentration techniques can alleviate stress and fatigue and improve students' capabilities. Three categories of accelerated learning techniques are: (1) those that serve as a preliminary to…
Numerical phase retrieval from beam intensity measurements in three planes
NASA Astrophysics Data System (ADS)
Bruel, Laurent
2003-05-01
A system and method have been developed at CEA to retrieve phase information from multiple intensity measurements along a laser beam. The device has been patented. Commonly used devices for beam measurement provide phase and intensity information separately or with a rather poor resolution, whereas the MIROMA method provides both at the same time, allowing direct use of the results in numerical models. Usual phase retrieval algorithms use two intensity measurements, typically the image plane and the focal plane (Gerchberg-Saxton algorithm) related by a Fourier transform, or the image plane and a slightly defocused plane (D. L. Misell). The principal drawback of such iterative algorithms is their inability to provide unambiguous convergence in all situations. The algorithms can stagnate on bad solutions and the error between measured and calculated intensities remains unacceptable. If three planes rather than two are used, the data redundancy created gives the method good convergence capability and noise immunity. It provides an excellent agreement between the intensity determined from the retrieved phase data set in the image plane and intensity measurements in any diffraction plane. The method employed for MIROMA is inspired by the GS algorithm, replacing Fourier transforms by a beam-propagating kernel with gradient-search acceleration techniques and special care for phase branch cuts. A fast one-dimensional algorithm provides an initial guess for the iterative algorithm. Applications of the algorithm on synthetic data identify the best reconstruction planes to choose. Robustness and sensitivity are evaluated. Results on collimated and distorted laser beams are presented.
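The Gerchberg-Saxton iteration that MIROMA builds on alternates between planes, keeping the computed phase while imposing the measured amplitude in each. A minimal two-plane, 1-D sketch with FFT propagation (the actual method uses three planes and a beam-propagation kernel; all data here are synthetic):

```python
import numpy as np

rng = np.random.default_rng(4)
n = 64
amp_near = np.exp(-np.linspace(-3, 3, n) ** 2)        # "measured" near-field amplitude
true_phase = rng.uniform(-np.pi, np.pi, n)            # unknown phase to recover
amp_far = np.abs(np.fft.fft(amp_near * np.exp(1j * true_phase)))  # "measured" far field

field = amp_near.astype(complex)                      # initial guess: zero phase
err0 = np.linalg.norm(np.abs(np.fft.fft(field)) - amp_far) / np.linalg.norm(amp_far)

# Gerchberg-Saxton: enforce the measured amplitude in each plane in turn
for _ in range(500):
    far = np.fft.fft(field)
    far = amp_far * np.exp(1j * np.angle(far))        # impose far-field amplitude
    field = np.fft.ifft(far)
    field = amp_near * np.exp(1j * np.angle(field))   # impose near-field amplitude

err = np.linalg.norm(np.abs(np.fft.fft(field)) - amp_far) / np.linalg.norm(amp_far)
```

The error between measured and computed amplitudes is non-increasing for this iteration, but it can stagnate short of zero, which is exactly the failure mode the three-plane redundancy is designed to avoid.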
Application of an enhanced fuzzy algorithm for MR brain tumor image segmentation
NASA Astrophysics Data System (ADS)
Hemanth, D. Jude; Vijila, C. Kezi Selva; Anitha, J.
2010-02-01
Image segmentation is one of the significant digital image processing techniques commonly used in the medical field. One of the specific applications is tumor detection in abnormal Magnetic Resonance (MR) brain images. Fuzzy approaches are widely preferred for tumor segmentation, as they generally yield superior results in terms of accuracy. But most fuzzy algorithms suffer from the drawback of a slow convergence rate, which makes the system practically infeasible. In this work, the application of a modified Fuzzy C-means (FCM) algorithm to tackle the convergence problem is explored in the context of brain image segmentation. This modified FCM algorithm employs the concept of quantization to improve the convergence rate besides yielding excellent segmentation efficiency. The algorithm is tested on real-time abnormal MR brain images collected from radiologists. A comprehensive feature vector is extracted from these images and used for the segmentation technique. An extensive feature selection process is performed, which reduces the convergence time and improves the segmentation efficiency. After segmentation, the tumor portion is extracted from the segmented image. Comparative analysis in terms of segmentation efficiency and convergence rate is performed between the conventional FCM and the modified FCM. Experimental results show superior results for the modified FCM algorithm in terms of the performance measures. Thus, this work highlights the application of the modified algorithm for brain tumor detection in abnormal MR brain images.
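The core FCM alternation that the modified algorithm accelerates (membership update, then center update) looks as follows on 1-D intensities. This is the conventional algorithm with a deterministic quantile initialization, not the quantized variant the abstract describes:

```python
import numpy as np

def fuzzy_cmeans(X, c=2, m=2.0, iters=100):
    """Standard fuzzy C-means on 1-D data: alternate membership/center updates."""
    centers = np.quantile(X, np.linspace(0.25, 0.75, c))   # spread initial centers
    for _ in range(iters):
        d = np.abs(X[:, None] - centers[None, :]) + 1e-9   # point-to-center distances
        u = 1.0 / (d ** (2 / (m - 1)))                     # unnormalized memberships
        u = u / u.sum(axis=1, keepdims=True)               # rows sum to 1
        um = u ** m
        centers = (um * X[:, None]).sum(axis=0) / um.sum(axis=0)  # weighted means
    return centers, u

# Two well-separated 1-D intensity clusters (toy stand-in for tissue classes)
rng = np.random.default_rng(6)
X = np.concatenate([rng.normal(0, 0.5, 200), rng.normal(10, 0.5, 200)])
centers, u = fuzzy_cmeans(X)
```

Each iteration touches every pixel-center pair, which is why quantizing the intensity histogram (as in the paper) cuts the per-iteration cost and speeds convergence.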
Modeling of ion acceleration through drift and diffusion at interplanetary shocks
NASA Technical Reports Server (NTRS)
Decker, R. B.; Vlahos, L.
1986-01-01
A test particle simulation designed to model ion acceleration through drift and diffusion at interplanetary shocks is described. The technique consists of integrating along exact particle orbits in a system where the angle between the shock normal and mean upstream magnetic field, the level of magnetic fluctuations, and the energy of injected particles can assume a range of values. The technique makes it possible to study time-dependent shock acceleration under conditions not amenable to analytical techniques. To illustrate the capability of the numerical model, proton acceleration was considered under conditions appropriate for interplanetary shocks at 1 AU, including large-amplitude transverse magnetic fluctuations derived from power spectra of both ambient and shock-associated MHD waves.
Solution Methods for 3D Tomographic Inversion Using A Highly Non-Linear Ray Tracer
NASA Astrophysics Data System (ADS)
Hipp, J. R.; Ballard, S.; Young, C. J.; Chang, M.
2008-12-01
To develop 3D velocity models to improve nuclear explosion monitoring capability, we have developed a 3D tomographic modeling system that traces rays using an implementation of the Um and Thurber ray pseudo-bending approach, with full enforcement of Snell's Law in 3D at the major discontinuities. Due to the highly non-linear nature of the ray tracer, however, we are forced to substantially damp the inversion in order to converge on a reasonable model. Unfortunately the amount of damping is not known a priori and can significantly extend the number of calls to the computationally expensive ray tracer and the least-squares matrix solver. If the damping term is too small, the solution step size produces either an unrealistic model velocity change or places the solution in or near a local minimum from which extrication is nearly impossible. If the damping term is too large, convergence can be very slow or premature convergence can occur. Standard approaches involve running inversions with a suite of damping parameters to find the best model. A better solution methodology is to take advantage of existing non-linear solution techniques such as Levenberg-Marquardt (LM) or quasi-Newton iterative solvers. In particular, the LM algorithm was specifically designed to find the minimum of a multivariate function that is expressed as the sum of squares of non-linear real-valued functions. It has become a standard technique for solving non-linear least-squares problems, and is widely adopted in a broad spectrum of disciplines, including the geosciences. At each iteration, the LM approach dynamically varies the level of damping to optimize convergence. When the current estimate of the solution is far from the ultimate solution, LM behaves as a steepest descent method, but transitions to Gauss-Newton behavior, with near quadratic convergence, as the estimate approaches the final solution.
We show typical linear solution techniques and how they can lead to local minima if the damping is set too low. We also describe the LM technique and show how it automatically determines the appropriate damping factor as it iteratively converges on the best solution. Sandia is a multiprogram laboratory operated by Sandia Corporation, a Lockheed Martin Company, for the United States Department of Energy's National Nuclear Security Administration under Contract DE-AC04-94AL85000.
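The adaptive damping behavior described above is easy to see in a bare-bones LM implementation (illustrative only, unrelated to the authors' tomography code): the damping parameter lam is raised when a trial step increases the misfit and lowered when it decreases, interpolating between steepest descent and Gauss-Newton:

```python
import numpy as np

def levenberg_marquardt(residual, jacobian, p0, iters=50):
    """Basic LM with Marquardt diagonal scaling and multiplicative damping updates."""
    p = np.asarray(p0, dtype=float)
    lam = 1e-3
    for _ in range(iters):
        r = residual(p)
        J = jacobian(p)
        JtJ, Jtr = J.T @ J, J.T @ r
        step = np.linalg.solve(JtJ + lam * np.diag(np.diag(JtJ)), -Jtr)
        if np.sum(residual(p + step) ** 2) < np.sum(r ** 2):
            p, lam = p + step, lam / 3      # good step: behave more like Gauss-Newton
        else:
            lam *= 3                        # bad step: behave more like steepest descent
    return p

# Fit y = a * exp(-b * t) to noise-free synthetic data with a=2, b=0.5
t = np.linspace(0, 5, 40)
y = 2.0 * np.exp(-0.5 * t)
res = lambda p: p[0] * np.exp(-p[1] * t) - y
jac = lambda p: np.column_stack([np.exp(-p[1] * t), -p[0] * t * np.exp(-p[1] * t)])
p = levenberg_marquardt(res, jac, [1.0, 1.0])
```

Because the damping is adjusted per iteration from the observed misfit change, no a priori damping sweep is needed, which is the practical advantage the abstract emphasizes for the tomographic inversion.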
NASA Astrophysics Data System (ADS)
Vernant, P.; Bilham, R.; Szeliga, W.; Drupka, D.; Kalita, S.; Bhattacharyya, A. K.; Gaur, V. K.; Pelgay, P.; Cattin, R.; Berthet, T.
2014-08-01
GPS data reveal that the Brahmaputra Valley has broken from the Indian Plate and rotates clockwise relative to India about a point a few hundred kilometers west of the Shillong Plateau. The GPS velocity vectors define two distinct blocks separated by the Kopili fault upon which 2-3 mm/yr of dextral slip is observed: the Shillong block between longitudes 89 and 93°E rotating clockwise at 1.15°/Myr and the Assam block from 93.5°E to 97°E rotating at ≈1.13°/Myr. These two blocks are more than 120 km wide in a north-south sense, but they extend locally a similar distance beneath the Himalaya and Tibet. A result of these rotations is that convergence across the Himalaya east of Sikkim decreases in velocity eastward from 18 to ≈12 mm/yr and convergence between the Shillong Plateau and Bangladesh across the Dauki fault increases from 3 mm/yr in the west to >8 mm/yr in the east. This fast convergence rate is inconsistent with inferred geological uplift rates on the plateau (if a 45°N dip is assumed for the Dauki fault) unless clockwise rotation of the Shillong block has increased substantially in the past 4-8 Myr. Such acceleration is consistent with the reported recent slowing in the convergence rate across the Bhutan Himalaya. The current slip potential near Bhutan, based on present-day convergence rates and assuming no great earthquake since 1713 A.D., is now ~5.4 m, similar to the slip reported from alluvial terraces that offsets across the Main Himalayan Thrust and sufficient to sustain a Mw ≥ 8.0 earthquake in this area.
Accelerator controls at CERN: Some converging trends
NASA Astrophysics Data System (ADS)
Kuiper, B.
1990-08-01
The growth of CERN's services to the high-energy physics community under frozen resources has led to the implementation of "Technical Boards", mandated to assist the management by making recommendations for rationalizations in various technological domains. The Board on Process Control and Electronics for Accelerators, TEBOCO, has emphasized four main lines which might yield economy in resources. First, a common architecture for accelerator controls has been agreed between the three accelerator divisions. Second, a common hardware/software kit has been defined, from which the large majority of future process interfacing may be composed. A support service for this kit is an essential part of the plan. Third, high-level protocols have been developed for standardizing access to process devices. They derive from agreed standard models of the devices and involve a standard control message. This should ease application development and mobility of equipment. Fourth, a common software engineering methodology and a commercial package of application development tools have been adopted. Some rationalization in the field of the man-machine interface and in matters of synchronization is also under way.
Marshall Convergent Spray Formulation Improvement for High Temperatures
NASA Technical Reports Server (NTRS)
Scarpa, Jack; Patterson, Chat
2011-01-01
The Marshall Convergent Coating-1 (MCC-1) formulation was produced in the 1990s, and uses a standard bisphenol A epoxy resin system with a triamine accelerator. With the increasing heat rates forecast for the next generation of vehicles, higher-temperature sprayable coatings are needed. This work substitutes the low-temperature epoxy resins used in the MCC-1 coating with epoxy phenolic, epoxy novolac, or resorcinolinic resins (higher carbon content), which will produce a higher char yield upon exposure to high heat and an increased glass transition temperature. High-temperature filler materials, such as granular cork and glass ecospheres, are also incorporated as part of the convergent spray process, but other sacrificial (ablative) materials are possible. In addition, the use of polyhedral oligomeric silsesquioxane (POSS) nanoparticle hybrids will both increase reinforcement and contribute to a tougher siliceous char, which will reduce recession at higher heat rates. Expanding (lightweight MCC) epoxy resin systems are also useful in that they reduce system weight, have greater insulative properties, and allow a decrease in application times.
Efficient Actor-Critic Algorithm with Hierarchical Model Learning and Planning
Zhong, Shan; Liu, Quan; Fu, QiMing
2016-01-01
To improve the convergence rate and the sample efficiency, two efficient learning methods AC-HMLP and RAC-HMLP (AC-HMLP with ℓ 2-regularization) are proposed by combining actor-critic algorithm with hierarchical model learning and planning. The hierarchical models consisting of the local and the global models, which are learned at the same time during learning of the value function and the policy, are approximated by local linear regression (LLR) and linear function approximation (LFA), respectively. Both the local model and the global model are applied to generate samples for planning; the former is used only if the state-prediction error does not surpass the threshold at each time step, while the latter is utilized at the end of each episode. The purpose of taking both models is to improve the sample efficiency and accelerate the convergence rate of the whole algorithm through fully utilizing the local and global information. Experimentally, AC-HMLP and RAC-HMLP are compared with three representative algorithms on two Reinforcement Learning (RL) benchmark problems. The results demonstrate that they perform best in terms of convergence rate and sample efficiency. PMID:27795704
NBIC-Convergence as a Paradigm Platform of Sustainable Development
NASA Astrophysics Data System (ADS)
Dotsenko, Elena
2017-11-01
Today, the fastest rates of scientific and technological development are found in nano-systems and the materials industry, in information and communication systems, and in the spheres of direct human impact on the environment: the power industry, urbanization, and industrial infrastructure. The accelerating replacement of humans by machines and robots, the construction of megacities, and the transportation of huge volumes of environmentally hazardous goods take place against the background of intensive generation of knowledge and the transition of fundamental research results into specific production technologies. In this process, on the one hand, a fundamentally new format for the technological restructuring of the world economy is being developed. On the other hand, a new platform for human-environment interaction is being formed, in which both positive and negative environmental impacts will in the near future be determined by as-yet-unstudied factors. The reason lies in the forthcoming replacement of the familiar, though dynamically developing, technologies by fundamentally new, convergent ones. Entering the front line of technological development - NBIC-convergence - requires a new paradigm of sustainable development.
Zhang, Tao; Zhu, Yongyun; Zhou, Feng; Yan, Yaxiong; Tong, Jinwu
2017-06-17
Initial alignment of the strapdown inertial navigation system (SINS) is intended to determine the initial attitude matrix in a short time with certain accuracy. The alignment accuracy of the quaternion filter algorithm is remarkable, but the convergence rate is slow. To solve this problem, this paper proposes an improved quaternion filter algorithm for faster initial alignment based on the error model of the quaternion filter algorithm. The improved quaternion filter algorithm constructs the K matrix based on the principle of the optimal quaternion algorithm, and rebuilds the measurement model to contain acceleration and velocity errors, making the convergence rate faster. A Doppler velocity log (DVL) provides the reference velocity for the improved quaternion filter alignment algorithm. In order to demonstrate the performance of the improved quaternion filter algorithm in the field, a turntable experiment and a vehicle test are carried out. The results of the experiments show that the convergence rate of the proposed improved quaternion filter is faster than that of the traditional quaternion filter algorithm. In addition, the improved quaternion filter algorithm also demonstrates advantages in terms of correctness, effectiveness, and practicability.
NASA Technical Reports Server (NTRS)
Wilcox, Eric M.; Lau, K. M.; Kim, Kyu-Myong
2010-01-01
The influence of Saharan dust outbreaks on the summertime North Atlantic Ocean inter-tropical convergence zone (ITCZ) is explored using nine years of continuous satellite observations and atmospheric reanalysis products. During dust outbreak events rainfall along the ITCZ shifts northward by 1 to 4 degrees latitude. Dust outbreaks coincide with warmer lower-tropospheric temperatures compared to low dust conditions, which is attributable to advection of the warm Saharan Air Layer, enhanced subtropical subsidence, and radiative heating of dust. The enhanced positive meridional temperature gradient coincident with dust outbreaks is accompanied by an acceleration of the easterly winds on the north side of the African Easterly Jet (AEJ). The center of the positive vorticity region south of the AEJ moves north, drawing the center of low-level convergence and ITCZ rainfall northward with it. The enhanced precipitation on the north side of the ITCZ occurs in spite of widespread sea surface temperature cooling north of the ITCZ owing to reduced surface solar insolation by dust scattering.
Solving NP-Hard Problems with Physarum-Based Ant Colony System.
Liu, Yuxin; Gao, Chao; Zhang, Zili; Lu, Yuxiao; Chen, Shi; Liang, Mingxin; Tao, Li
2017-01-01
NP-hard problems exist in many real world applications. Ant colony optimization (ACO) algorithms can provide approximate solutions for those NP-hard problems, but the performance of ACO algorithms is significantly reduced due to premature convergence and weak robustness, etc. With these observations in mind, this paper proposes a Physarum-based pheromone matrix optimization strategy in ant colony system (ACS) for solving NP-hard problems such as traveling salesman problem (TSP) and 0/1 knapsack problem (0/1 KP). In the Physarum-inspired mathematical model, one of the unique characteristics is that critical tubes can be reserved in the process of network evolution. The optimized updating strategy employs the unique feature and accelerates the positive feedback process in ACS, which contributes to the quick convergence of the optimal solution. Some experiments were conducted using both benchmark and real datasets. The experimental results show that the optimized ACS outperforms other meta-heuristic algorithms in accuracy and robustness for solving TSPs. Meanwhile, the convergence rate and robustness for solving 0/1 KPs are better than those of classical ACS.
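The ant colony system update that the optimized strategy builds on can be sketched as follows. The Physarum-based pheromone-matrix coupling itself is omitted, and the six city coordinates and parameter values below are illustrative only, not taken from the paper's datasets.

```python
import math, random

random.seed(0)
# Illustrative 6-city TSP solved with a plain ant colony system (ACS):
# pseudo-random-proportional tour construction plus a global pheromone
# update that evaporates everywhere and deposits on the best tour found.
cities = [(0, 0), (1, 5), (5, 2), (6, 6), (8, 3), (3, 8)]
n = len(cities)
dist = [[math.dist(a, c) or 1e-9 for c in cities] for a in cities]
tau = [[1.0] * n for _ in range(n)]       # pheromone matrix
beta, rho, q0 = 2.0, 0.1, 0.9             # heuristic weight, evaporation, greediness

def tour_len(t):
    return sum(dist[t[i]][t[(i + 1) % n]] for i in range(n))

best, best_len = None, float("inf")
for _ in range(100):
    for _ in range(8):                    # 8 ants per iteration
        tour = [random.randrange(n)]
        while len(tour) < n:
            i = tour[-1]
            cand = [j for j in range(n) if j not in tour]
            score = {j: tau[i][j] * dist[i][j] ** -beta for j in cand}
            if random.random() < q0:      # exploitation: best-scoring city
                j = max(cand, key=score.get)
            else:                         # biased exploration: roulette wheel
                r, j = random.random() * sum(score.values()), cand[-1]
                for k in cand:
                    r -= score[k]
                    if r <= 0:
                        j = k
                        break
            tour.append(j)
        L = tour_len(tour)
        if L < best_len:
            best, best_len = tour, L
    for i in range(n):                    # global update: evaporate ...
        for j in range(n):
            tau[i][j] *= (1 - rho)
    for i in range(n):                    # ... then reinforce the best tour
        a, c = best[i], best[(i + 1) % n]
        tau[a][c] += rho / best_len
        tau[c][a] += rho / best_len
```

The Physarum-inspired strategy in the paper modifies how this pheromone matrix is updated so that critical "tubes" (edges) are preserved, accelerating the positive feedback that the plain deposit step provides.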
Botello-Smith, Wesley M.; Luo, Ray
2016-01-01
Continuum solvent models have been widely used in biomolecular modeling applications. Recently much attention has been given to the inclusion of an implicit membrane into existing continuum Poisson-Boltzmann solvent models to extend their applications to membrane systems. Inclusion of an implicit membrane complicates numerical solutions of the underlying Poisson-Boltzmann equation due to the dielectric inhomogeneity on the boundary surfaces of a computation grid. This can be alleviated by the use of the periodic boundary condition, a common practice in electrostatic computations in particle simulations. The conjugate gradient and successive over-relaxation methods are relatively straightforward to adapt to periodic calculations, but their convergence rates are quite low, limiting their applications to free energy simulations that require a large number of conformations to be processed. To accelerate convergence, the incomplete Cholesky preconditioning and geometric multi-grid methods have been extended to incorporate periodicity for biomolecular applications. Impressive convergence behavior, similar to that in previous applications of these numerical methods, was found for the tested biomolecules and MMPBSA calculations. PMID:26389966
Efficient mixing scheme for self-consistent all-electron charge density
NASA Astrophysics Data System (ADS)
Shishidou, Tatsuya; Weinert, Michael
2015-03-01
In standard ab initio density-functional theory calculations, the charge density ρ is gradually updated using the "input" and "output" densities of the current and previous iteration steps. To accelerate the convergence, Pulay mixing has been widely used with great success. It expresses an "optimal" input density ρ_opt and its "residual" R_opt as a linear combination of the densities of the iteration sequence. In large-scale metallic systems, however, the long-range nature of the Coulomb interaction often causes the "charge sloshing" phenomenon and significantly impacts the convergence. Two treatments, represented in reciprocal space, are known to suppress the sloshing: (i) the inverse Kerker metric for the Pulay optimization and (ii) Kerker-type preconditioning in mixing R_opt. In all-electron methods, where the charge density does not have a converging Fourier representation, treatments equivalent or similar to (i) and (ii) have not been described so far. In this work, we show that, by going through the calculation of the Hartree potential, one can accomplish procedures (i) and (ii) without entering reciprocal space. Test calculations are done with a FLAPW method.
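For context, the conventional reciprocal-space treatment (ii), which the paper reproduces without a Fourier representation, can be sketched directly. The 1-D grid, mixing strength, and screening wave vector below are illustrative assumptions, not values from the paper.

```python
import numpy as np

# Sketch of reciprocal-space Kerker-type preconditioning of the residual.
# alpha (mixing strength) and G0 (screening wave vector) are assumed values.
alpha, G0 = 0.5, 1.5
G = 2 * np.pi * np.fft.fftfreq(64, d=0.1)       # 1-D reciprocal-space grid

def kerker_mix(rho_in_G, rho_out_G):
    """rho_new(G) = rho_in(G) + alpha * G^2/(G^2 + G0^2) * (rho_out - rho_in)(G)."""
    damp = G**2 / (G**2 + G0**2)                # -> 0 as G -> 0: kills sloshing
    return rho_in_G + alpha * damp * (rho_out_G - rho_in_G)

damp = G**2 / (G**2 + G0**2)
# A uniform (G = 0) residual is damped completely, long wavelengths strongly:
rho_new_G = kerker_mix(np.fft.fft(np.zeros(64)), np.fft.fft(np.ones(64)))
```

The damping factor vanishes at G = 0 and approaches alpha at large G, which is exactly the suppression of long-wavelength residual components that prevents charge sloshing.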
NASA Astrophysics Data System (ADS)
Neuhoff, John G.
2003-04-01
Increasing acoustic intensity is a primary cue to looming auditory motion. Perceptual overestimation of increasing intensity could provide an evolutionary selective advantage by specifying that an approaching sound source is closer than actual, thus affording advanced warning and more time than expected to prepare for the arrival of the source. Here, multiple lines of converging evidence for this evolutionary hypothesis are presented. First, it is shown that intensity change specifying accelerating source approach changes in loudness more than equivalent intensity change specifying decelerating source approach. Second, consistent with evolutionary hunter-gatherer theories of sex-specific spatial abilities, it is shown that females have a significantly larger bias for rising intensity than males. Third, using functional magnetic resonance imaging in conjunction with approaching and receding auditory motion, it is shown that approaching sources preferentially activate a specific neural network responsible for attention allocation, motor planning, and translating perception into action. Finally, it is shown that rhesus monkeys also exhibit a rising intensity bias by orienting longer to looming tones than to receding tones. Together these results illustrate an adaptive perceptual bias that has evolved because it provides a selective advantage in processing looming acoustic sources. [Work supported by NSF and CDC.]
Cockayne syndrome group A and B proteins converge on transcription-linked resolution of non-B DNA.
Scheibye-Knudsen, Morten; Tseng, Anne; Borch Jensen, Martin; Scheibye-Alsing, Karsten; Fang, Evandro Fei; Iyama, Teruaki; Bharti, Sanjay Kumar; Marosi, Krisztina; Froetscher, Lynn; Kassahun, Henok; Eckley, David Mark; Maul, Robert W; Bastian, Paul; De, Supriyo; Ghosh, Soumita; Nilsen, Hilde; Goldberg, Ilya G; Mattson, Mark P; Wilson, David M; Brosh, Robert M; Gorospe, Myriam; Bohr, Vilhelm A
2016-11-01
Cockayne syndrome is a neurodegenerative accelerated aging disorder caused by mutations in the CSA or CSB genes. Although the pathogenesis of Cockayne syndrome has remained elusive, recent work implicates mitochondrial dysfunction in the disease progression. Here, we present evidence that loss of CSA or CSB in a neuroblastoma cell line converges on mitochondrial dysfunction caused by defects in ribosomal DNA transcription and activation of the DNA damage sensor poly-ADP ribose polymerase 1 (PARP1). Indeed, inhibition of ribosomal DNA transcription leads to mitochondrial dysfunction in a number of cell lines. Furthermore, machine-learning algorithms predict that diseases with defects in ribosomal DNA (rDNA) transcription have mitochondrial dysfunction, and, accordingly, this is found when factors involved in rDNA transcription are knocked down. Mechanistically, loss of CSA or CSB leads to polymerase stalling at non-B DNA in a neuroblastoma cell line, in particular at G-quadruplex structures, and recombinant CSB can melt G-quadruplex structures. Indeed, stabilization of G-quadruplex structures activates PARP1 and leads to accelerated aging in Caenorhabditis elegans. In conclusion, this work supports a role for impaired ribosomal DNA transcription in Cockayne syndrome and suggests that transcription-coupled resolution of secondary structures may be a mechanism to repress spurious activation of a DNA damage response.
Cheng, Jian-jun; Xin, Guo-Wei; Zhi, Ling-yan; Jiang, Fu-qiang
2017-01-01
Windshield walls decrease the velocity of wind-drift sand flow in transit. This results in sand accumulating in the wind-shadow zone of both the windshield wall and the track line, causing severe sand sediment hazards. This study reveals the characteristics of sand accumulation and the laws of wind-blown sand removal in the wind-shadow areas of three different types of windshield walls, utilizing three-dimensional numerical simulations, wind tunnel experiments, and on-site sand sediment tests. The results revealed the formation of apparent vortex and acceleration zones on the leeward side of solid windshield walls. For uniform openings, the vortex area moved back and narrowed. When bottom-opening windshield walls were adopted, the track-supporting layer at the step became a conflux acceleration zone, forming a low velocity vortex zone near the track line. At high wind speeds, windshield walls with bottom openings achieved improved sand dredging. Considering hydrodynamic mechanisms, the flow field structure on the leeward side of different types of windshield structures is a result of convergence and diffusion of fluids caused by an obstacle. This convergence and diffusion effect of air fluid is more apparent at high wind velocities, but not obvious at low wind velocities. PMID:28120915
Comparison Tools for Assessing the Microgravity Environment of Missions, Carriers and Conditions
NASA Technical Reports Server (NTRS)
DeLombard, Richard; McPherson, Kevin; Moskowitz, Milton; Hrovat, Ken
1997-01-01
The Principal Component Spectral Analysis and the Quasi-steady Three-dimensional Histogram techniques provide the means to describe the microgravity acceleration environment of an entire mission on a single plot. This allows a straightforward comparison of the microgravity environment between missions, carriers, and conditions. As shown in this report, the PCSA and QTH techniques bring both the range and median of the microgravity environment onto a single page for an entire mission or another time period or condition of interest. These single pages may then be used to compare similar analyses of other missions, time periods or conditions. The PCSA plot is based on the frequency distribution of the vibrational energy and is normally used for an acceleration data set containing frequencies above the lowest natural frequencies of the vehicle. The QTH plot is based on the direction and magnitude of the acceleration and is normally used for acceleration data sets with frequency content less than 0.1 Hz. Various operating conditions are made evident by using PCSA and QTH plots. Equipment operating either full or part time with sufficient magnitude to be considered a disturbance is very evident, as is equipment contributing to the background acceleration environment. A source's magnitude and/or frequency variability is also evident from the source's appearance on a PCSA plot. The PCSA and QTH techniques are valuable tools for extracting useful information from acceleration data taken over large spans of time. This report shows that these techniques provide a tool for comparison between different sets of microgravity acceleration data, for example different missions, different activities within a mission, and/or different attitudes within a mission. These techniques, as well as others, may be employed in order to derive useful information from acceleration data.
2015-06-12
money laundering operations that support criminal and terrorist organizations. Transnational organizations transcend the borders and operate globally... Modlin Thesis Title: The Threat of Convergence of Terror Groups with Transnational Criminal Organizations in Order to Utilize Existing Smuggling
A successive overrelaxation iterative technique for an adaptive equalizer
NASA Technical Reports Server (NTRS)
Kosovych, O. S.
1973-01-01
An adaptive strategy for the equalization of pulse-amplitude-modulated signals in the presence of intersymbol interference and additive noise is reported. The successive overrelaxation iterative technique is used as the algorithm for the iterative adjustment of the equalizer coefficients during a training period for the minimization of the mean square error. With 2-cyclic and nonnegative Jacobi matrices substantial improvement is demonstrated in the rate of convergence over the commonly used gradient techniques. The Jacobi theorems are also extended to nonpositive Jacobi matrices. Numerical examples strongly indicate that the improvements obtained for the special cases are possible for general channel characteristics. The technique is analytically demonstrated to decrease the mean square error at each iteration for a large range of parameter values for light or moderate intersymbol interference and for small intervals for general channels. Analytically, convergence of the relaxation algorithm was proven in a noisy environment and the coefficient variance was demonstrated to be bounded.
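The basic successive overrelaxation iteration is easy to state in isolation. This sketch uses a generic diagonally dominant system rather than the equalizer's channel correlation matrices, which are not reproduced in the abstract.

```python
# Generic successive overrelaxation (SOR) sketch: solve A x = b by relaxing
# each component toward its Gauss-Seidel value with overrelaxation factor
# omega. The equalizer application would supply A and b from channel data.
A = [[4.0, 1.0, 0.0],
     [1.0, 4.0, 1.0],
     [0.0, 1.0, 4.0]]
b = [5.0, 6.0, 5.0]                  # exact solution is x = (1, 1, 1)
omega = 1.2                          # overrelaxation factor, 0 < omega < 2
x = [0.0, 0.0, 0.0]

for _ in range(60):                  # sweeps over all components
    for i in range(3):
        # Gauss-Seidel value using the freshest available components of x
        s = sum(A[i][j] * x[j] for j in range(3) if j != i)
        x[i] += omega * ((b[i] - s) / A[i][i] - x[i])
```

With omega = 1 this reduces to Gauss-Seidel; choosing omega > 1 overcorrects each component and, for suitable matrices, improves the convergence rate over gradient-type adjustment, which is the property the paper exploits.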
GPS-Based Reduced Dynamic Orbit Determination Using Accelerometer Data
NASA Technical Reports Server (NTRS)
VanHelleputte, Tom; Visser, Pieter
2007-01-01
Currently two gravity field satellite missions, CHAMP and GRACE, are equipped with high sensitivity electrostatic accelerometers, measuring the non-conservative forces acting on the spacecraft in three orthogonal directions. During the gravity field recovery these measurements help to separate gravitational and non-gravitational contributions in the observed orbit perturbations. For precise orbit determination purposes all these missions have a dual-frequency GPS receiver on board. The reduced dynamic technique combines the dense and accurate GPS observations with physical models of the forces acting on the spacecraft, complemented by empirical accelerations, which are stochastic parameters adjusted in the orbit determination process. When the spacecraft carries an accelerometer, these measured accelerations can be used to replace the models of the non-conservative forces, such as air drag and solar radiation pressure. This approach is implemented in a batch least-squares estimator of the GPS High Precision Orbit Determination Software Tools (GHOST), developed at DLR/GSOC and DEOS. It is extensively tested with data of the CHAMP and GRACE satellites. As accelerometer observations typically can be affected by an unknown scale factor and bias in each measurement direction, they require calibration during processing. Therefore the estimated state vector is augmented with six parameters: a scale and bias factor for the three axes. In order to converge efficiently to a good solution, reasonable a priori values for the bias factor are necessary. These are calculated by combining the mean value of the accelerometer observations with the mean value of the non-conservative force models and empirical accelerations, estimated when using these models. When replacing the non-conservative force models with accelerometer observations and still estimating empirical accelerations, a good orbit precision is achieved. 
100 days of GRACE B data processing results in a mean orbit fit of a few centimeters with respect to high-quality JPL reference orbits. This shows a slightly better consistency compared to the case when using force models. A purely dynamic orbit, without estimating empirical accelerations and thus only adjusting six state parameters and the bias and scale factors, gives an orbit fit for the GRACE B test case below the decimeter level. The in-orbit calibrated accelerometer observations can be used to validate the modelled accelerations and estimated empirical accelerations computed with the GHOST tools. In the along-track direction they show the best resemblance, with a mean correlation coefficient of 93% for the same period. In the radial and normal directions the correlation is smaller. During days of high solar activity the benefit of using accelerometer observations is clearly visible. The observations during these days show fluctuations which the modelled and empirical accelerations cannot follow.
Assessing the validity of discourse analysis: transdisciplinary convergence
NASA Astrophysics Data System (ADS)
Jaipal-Jamani, Kamini
2014-12-01
Research studies using discourse analysis approaches make claims about phenomena or issues based on interpretation of written or spoken text, which includes images and gestures. How are findings/interpretations from discourse analysis validated? This paper proposes transdisciplinary convergence as a way to validate discourse analysis approaches to research. The argument is made that discourse analysis explicitly grounded in semiotics, systemic functional linguistics, and critical theory, offers a credible research methodology. The underlying assumptions, constructs, and techniques of analysis of these three theoretical disciplines can be drawn on to show convergence of data at multiple levels, validating interpretations from text analysis.
Intensification and refraction of acoustical signals in partially choked converging ducts
NASA Technical Reports Server (NTRS)
Nayfeh, A. H.
1980-01-01
A computer code based on the wave-envelope technique is used to perform detailed numerical calculations for the intensification and refraction of sound in converging hard-walled and lined circular ducts carrying high mean Mach number flows. The results show that converging ducts produce substantial refraction toward the duct center for waves propagating against near-choked flows. As expected, the magnitude of the refraction decreases as the real part of the admittance increases. The pressure wave pattern is that of interference among the different modes, and hence the variation of the magnitude of pressure refraction with frequency is not monotonic.
A three-dimensional wide-angle BPM for optical waveguide structures.
Ma, Changbao; Van Keuren, Edward
2007-01-22
Algorithms for effective modeling of optical propagation in three- dimensional waveguide structures are critical for the design of photonic devices. We present a three-dimensional (3-D) wide-angle beam propagation method (WA-BPM) using Hoekstra's scheme. A sparse matrix algebraic equation is formed and solved using iterative methods. The applicability, accuracy and effectiveness of our method are demonstrated by applying it to simulations of wide-angle beam propagation, along with a technique for shifting the simulation window to reduce the dimension of the numerical equation and a threshold technique to further ensure its convergence. These techniques can ensure the implementation of iterative methods for waveguide structures by relaxing the convergence problem, which will further enable us to develop higher-order 3-D WA-BPMs based on Padé approximant operators.
A three-dimensional wide-angle BPM for optical waveguide structures
NASA Astrophysics Data System (ADS)
Ma, Changbao; van Keuren, Edward
2007-01-01
Algorithms for effective modeling of optical propagation in three- dimensional waveguide structures are critical for the design of photonic devices. We present a three-dimensional (3-D) wide-angle beam propagation method (WA-BPM) using Hoekstra’s scheme. A sparse matrix algebraic equation is formed and solved using iterative methods. The applicability, accuracy and effectiveness of our method are demonstrated by applying it to simulations of wide-angle beam propagation, along with a technique for shifting the simulation window to reduce the dimension of the numerical equation and a threshold technique to further ensure its convergence. These techniques can ensure the implementation of iterative methods for waveguide structures by relaxing the convergence problem, which will further enable us to develop higher-order 3-D WA-BPMs based on Padé approximant operators.
Zhen, Xin; Zhou, Ling-hong; Lu, Wen-ting; Zhang, Shu-xu; Zhou, Lu
2010-12-01
This work validates the efficiency and accuracy of an improved Demons deformable registration algorithm and evaluates its application to contour recontouring in 4D-CT. To increase the additional Demons force and reallocate the bilateral forces to accelerate convergence, we propose a novel energy function as the similarity measure, and utilize a BFGS method for optimization to avoid having to specify the number of iterations. Mathematically transformed deformable CT images and a home-made deformable phantom were used to validate the accuracy of the improved algorithm, and its effectiveness for contour recontouring was tested. The improved algorithm showed relatively high registration accuracy and speed when compared with the classic Demons algorithm and an optical flow based method. Visual inspection of the positions and shapes of the deformed contours agreed well with the physician-drawn contours. Deformable registration is a key technique in 4D-CT, and this improved Demons algorithm for contour recontouring can significantly reduce the workload of the physicians. The registration accuracy of this method proves to be sufficient for clinical needs.
Hamiltonian Monte Carlo acceleration using surrogate functions with random bases.
Zhang, Cheng; Shahbaba, Babak; Zhao, Hongkai
2017-11-01
For big data analysis, high computational cost for Bayesian methods often limits their applications in practice. In recent years, there have been many attempts to improve computational efficiency of Bayesian inference. Here we propose an efficient and scalable computational technique for a state-of-the-art Markov chain Monte Carlo method, namely, Hamiltonian Monte Carlo. The key idea is to explore and exploit the structure and regularity in parameter space for the underlying probabilistic model to construct an effective approximation of its geometric properties. To this end, we build a surrogate function to approximate the target distribution using properly chosen random bases and an efficient optimization process. The resulting method provides a flexible, scalable, and efficient sampling algorithm, which converges to the correct target distribution. We show that by choosing the basis functions and optimization process differently, our method can be related to other approaches for the construction of surrogate functions such as generalized additive models or Gaussian process models. Experiments based on simulated and real data show that our approach leads to substantially more efficient sampling algorithms compared to existing state-of-the-art methods.
NASA Astrophysics Data System (ADS)
Lee, Y.; Bescond, M.; Logoteta, D.; Cavassilas, N.; Lannoo, M.; Luisier, M.
2018-05-01
We propose an efficient method to quantum mechanically treat anharmonic interactions in the atomistic nonequilibrium Green's function simulation of phonon transport. We demonstrate that the so-called lowest-order approximation, implemented through a rescaling technique and analytically continued by means of the Padé approximants, can be used to accurately model third-order anharmonic effects. Although the paper focuses on a specific self-energy, the method is applicable to a very wide class of physical interactions. We apply this approach to the simulation of anharmonic phonon transport in realistic Si and Ge nanowires with uniform or discontinuous cross sections. The effect of increasing the temperature above 300 K is also investigated. In all the considered cases, we are able to obtain a good agreement with the routinely adopted self-consistent Born approximation, at a remarkably lower computational cost. In the more complicated case of high temperatures (≫300 K), we find that the first-order Richardson extrapolation applied to the sequence of [N-1/N] Padé approximants results in a significant acceleration of the convergence.
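First-order Richardson extrapolation of a convergent sequence can be illustrated on a toy scalar sequence, unrelated to the paper's phonon transmissions: if the error of a_N decays like 1/N, the combination b_N = (N+1) a_{N+1} - N a_N cancels the leading error term.

```python
import math

# Toy illustration of first-order Richardson extrapolation: a(N) -> e with an
# O(1/N) error term, which b = (N+1)*a(N+1) - N*a(N) cancels, leaving O(1/N^2).
def a(N):
    return (1.0 + 1.0 / N) ** N      # classic sequence converging to e

N = 50
b = (N + 1) * a(N + 1) - N * a(N)    # extrapolated estimate of e
```

At N = 50 the raw term is still off in the second decimal place, while the extrapolated value is accurate to roughly four decimals, which is the kind of convergence acceleration the paper applies to its Padé sequence.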
Recent advancements in GRACE mascon regularization and uncertainty assessment
NASA Astrophysics Data System (ADS)
Loomis, B. D.; Luthcke, S. B.
2017-12-01
The latest release of the NASA Goddard Space Flight Center (GSFC) global time-variable gravity mascon product applies a new regularization strategy along with new methods for estimating noise and leakage uncertainties. The critical design component of mascon estimation is the construction of the applied regularization matrices, and different strategies exist between the different centers that produce mascon solutions. The new approach from GSFC directly applies the pre-fit Level 1B inter-satellite range-acceleration residuals in the design of time-dependent regularization matrices, which are recomputed at each step of our iterative solution method. We summarize this new approach, demonstrating the simultaneous increase in recovered time-variable gravity signal and reduction in the post-fit inter-satellite residual magnitudes, until solution convergence occurs. We also present our new approach for estimating mascon noise uncertainties, which are calibrated to the post-fit inter-satellite residuals. Lastly, we present a new technique for end users to quickly estimate the signal leakage errors for any selected grouping of mascons, and we test the viability of this leakage assessment procedure on the mascon solutions produced by other processing centers.
Speed and convergence properties of gradient algorithms for optimization of IMRT.
Zhang, Xiaodong; Liu, Helen; Wang, Xiaochun; Dong, Lei; Wu, Qiuwen; Mohan, Radhe
2004-05-01
Gradient algorithms are the most commonly employed search methods in the routine optimization of IMRT plans. It is well known that local minima can exist for dose-volume-based and biology-based objective functions. The purpose of this paper is to compare the relative speed of different gradient algorithms, to investigate the strategies for accelerating the optimization process, to assess the validity of these strategies, and to study the convergence properties of these algorithms for dose-volume and biological objective functions. With these aims in mind, we implemented Newton's, conjugate gradient (CG), and the steepest descent (SD) algorithms for dose-volume- and EUD-based objective functions. Our implementation of Newton's algorithm approximates the second derivative matrix (Hessian) by its diagonal. The standard SD algorithm and the CG algorithm with "line minimization" were also implemented. In addition, we investigated the use of a variation of the CG algorithm, called the "scaled conjugate gradient" (SCG) algorithm. To accelerate the optimization process, we investigated the validity of the use of a "hybrid optimization" strategy, in which approximations to calculated dose distributions are used during most of the iterations. Published studies have indicated that getting trapped in local minima is not a significant problem. To investigate this issue further, we first obtained, by trial and error, and starting with uniform intensity distributions, the parameters of the dose-volume- or EUD-based objective functions which produced IMRT plans that satisfied the clinical requirements. Using the resulting optimized intensity distributions as the initial guess, we investigated the possibility of getting trapped in a local minimum. For most of the results presented, we used a lung cancer case. To illustrate the generality of our methods, the results for a prostate case are also presented.
For both dose-volume and EUD based objective functions, Newton's method far outperforms other algorithms in terms of speed. The SCG algorithm, which avoids expensive "line minimization," can speed up the standard CG algorithm by at least a factor of 2. For the same initial conditions, all algorithms converge essentially to the same plan. However, we demonstrate that for any of the algorithms studied, starting with previously optimized intensity distributions as the initial guess but for different objective function parameters, the solution frequently gets trapped in local minima. We found that the initial intensity distribution obtained from IMRT optimization utilizing objective function parameters, which favor a specific anatomic structure, would lead to a local minimum corresponding to that structure. Our results indicate that from among the gradient algorithms tested, Newton's method appears to be the fastest by far. Different gradient algorithms have the same convergence properties for dose-volume- and EUD-based objective functions. The hybrid dose calculation strategy is valid and can significantly accelerate the optimization process. The degree of acceleration achieved depends on the type of optimization problem being addressed (e.g., IMRT optimization, intensity modulated beam configuration optimization, or objective function parameter optimization). Under special conditions, gradient algorithms will get trapped in local minima, and reoptimization, starting with the results of previous optimization, will lead to solutions that are generally not significantly different from the local minimum.
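The speed advantage reported for the diagonal-Hessian Newton method can be sketched on a toy separable quadratic, where scaling each gradient component by the inverse diagonal Hessian removes the ill-conditioning that slows steepest descent. This is a simplified illustration, not the paper's IMRT objective:

```python
import numpy as np

# Ill-conditioned separable quadratic: f(x) = 0.5 * sum(a_i * x_i^2).
a = np.array([1.0, 100.0])
grad = lambda x: a * x
hess_diag = lambda x: a  # exact diagonal here; only an approximation in general

x_sd = np.array([1.0, 1.0])
x_newton = np.array([1.0, 1.0])

# Steepest descent with a safe fixed step 1/max(a): the stiff coordinate
# converges fast, the soft one crawls.
for _ in range(50):
    x_sd = x_sd - (1.0 / a.max()) * grad(x_sd)

# Diagonal-Newton: scale each gradient component by 1/H_ii, which here
# reaches the minimizer in a single step.
for _ in range(50):
    x_newton = x_newton - grad(x_newton) / hess_diag(x_newton)
```

After 50 iterations the diagonal-Newton iterate sits at the minimizer to machine precision, while steepest descent still carries a large error in the soft coordinate.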
Ballistic range experiments on the superboom generated at increasing flight Mach numbers
NASA Technical Reports Server (NTRS)
Sanai, M.; Toong, T.-Y.; Pierce, A. D.
1976-01-01
Ballistic range experiments for the study of the propagation of converging shocks are described and the similarity between the observed phenomenon and that expected for superbooms created by accelerating supersonic aircraft is discussed. For weak shocks (shock Mach numbers of about 1.03), a structure resembling that of a folded shock predicted by geometrical acoustics theory is observed while for stronger shocks, a concave front with enhanced overpressure is recorded. Other results are in general accord with the basic concepts of shock propagation and in conjunction with some theoretical scaling laws indicate that the peak magnification of sonic booms due to aircraft flight acceleration in the real atmosphere should be in the range of 6 to 13.
MC21 analysis of the MIT PWR benchmark: Hot zero power results
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kelly Iii, D. J.; Aviles, B. N.; Herman, B. R.
2013-07-01
MC21 Monte Carlo results have been compared with hot zero power measurements from an operating pressurized water reactor (PWR), as specified in a new full core PWR performance benchmark from the MIT Computational Reactor Physics Group. Included in the comparisons are axially integrated full core detector measurements, axial detector profiles, control rod bank worths, and temperature coefficients. Power depressions from grid spacers are seen clearly in the MC21 results. Application of Coarse Mesh Finite Difference (CMFD) acceleration within MC21 has been accomplished, resulting in a significant reduction of inactive batches necessary to converge the fission source. CMFD acceleration has also been shown to work seamlessly with the Uniform Fission Site (UFS) variance reduction method.
Gao, Yi Qin
2008-04-07
Here, we introduce a simple self-adaptive computational method to enhance the sampling in energy, configuration, and trajectory spaces. The method makes use of two strategies. It first uses a non-Boltzmann distribution method to enhance the sampling in the phase space, in particular, in the configuration space. The application of this method leads to a broad energy distribution in a large energy range and a quickly converged sampling of molecular configurations. In the second stage of simulations, the configuration space of the system is divided into a number of small regions according to preselected collective coordinates. An enhanced sampling of reactive transition paths is then performed in a self-adaptive fashion to accelerate kinetics calculations.
Fourier mode analysis of slab-geometry transport iterations in spatially periodic media
DOE Office of Scientific and Technical Information (OSTI.GOV)
Larsen, E; Zika, M
1999-04-01
We describe a Fourier analysis of the diffusion-synthetic acceleration (DSA) and transport-synthetic acceleration (TSA) iteration schemes for a spatially periodic, but otherwise arbitrarily heterogeneous, medium. Both DSA and TSA converge more slowly in a heterogeneous medium than in a homogeneous medium composed of the volume-averaged scattering ratio. In the limit of a homogeneous medium, our heterogeneous analysis contains eigenvalues of multiplicity two at "resonant" wave numbers. In the presence of material heterogeneities, error modes corresponding to these resonant wave numbers are "excited" more than other error modes. For DSA and TSA, the iteration spectral radius may occur at these resonant wave numbers, in which case the material heterogeneities most strongly affect iterative performance.
Interaction of strong converging shock wave with SF6 gas bubble
NASA Astrophysics Data System (ADS)
Liang, Yu; Zhai, ZhiGang; Luo, XiSheng
2018-06-01
Interaction of a strong converging shock wave with an SF6 gas bubble is studied, focusing on the effects of shock intensity and shock shape on interface evolution. Experimentally, the converging shock wave is generated by shock dynamics theory and the gas bubble is created by the soap film technique. The post-shock flow field is captured by schlieren photography combined with a high-speed video camera. A three-dimensional program is also adopted to provide more details of the flow field. After the strong converging shock wave impact, a wide and pronged outward jet, which differs from that in planar shock or weak converging shock conditions, is derived from the downstream interface pole. This specific phenomenon is considered to be closely associated with shock intensity and shock curvature. Disturbed by the gas bubble, the converging shocks approaching the convergence center have polygonal shapes, and the relationship between shock intensity and shock radius verifies the applicability of polygonal converging shock theory. Subsequently, the motion of the upstream point is discussed, and a modified nonlinear theory considering rarefaction wave and high-amplitude effects is proposed. In addition, the effects of shock shape on interface morphology and interface scales are elucidated. These results indicate that the shock shape, as well as the shock strength, plays an important role in interface evolution.
Comments on "Different techniques for finding best-fit parameters"
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fenimore, Edward E.; Triplett, Laurie A.
2014-07-01
A common data analysis problem is to find best-fit parameters through chi-square minimization. Levenberg-Marquardt is an often-used system that depends on gradients and converges when successive iterations do not change chi-square by more than a specified amount. We point out that in cases where the sought-after parameter only weakly affects the fit, and in cases where an overall scale factor is a parameter, a Golden Search technique can often do better. The Golden Search converges when the best-fit point is within a specified range, and that range can be made arbitrarily small; it does not depend on the value of chi-square.
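A minimal sketch of the golden-section bracketing behind such a Golden Search, applied to a hypothetical one-parameter objective (the logic is the standard textbook algorithm, not the authors' code):

```python
import math

def golden_search(f, lo, hi, tol=1e-8):
    """Golden-section search for the minimum of a unimodal f on [lo, hi]."""
    invphi = (math.sqrt(5.0) - 1.0) / 2.0  # 1/phi, about 0.618
    c = hi - invphi * (hi - lo)
    d = lo + invphi * (hi - lo)
    fc, fd = f(c), f(d)
    while hi - lo > tol:
        if fc < fd:          # minimum lies in [lo, d]
            hi, d, fd = d, c, fc
            c = hi - invphi * (hi - lo)
            fc = f(c)
        else:                # minimum lies in [c, hi]
            lo, c, fc = c, d, fd
            d = lo + invphi * (hi - lo)
            fd = f(d)
    return 0.5 * (lo + hi)

# Toy chi-square-like objective with its minimum at x = 3.
xmin = golden_search(lambda x: (x - 3.0) ** 2 + 1.0, 0.0, 10.0)
```

Note that the stopping rule is purely an interval width, exactly the property the comment highlights: convergence does not depend on the value of chi-square itself.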
NASA Technical Reports Server (NTRS)
Robertson, F. R.
1984-01-01
The role of cloud related diabatic processes in maintaining the structure of the South Pacific Convergence Zone is discussed. The method chosen to evaluate the condensational heating is a diagnostic cumulus mass flux technique which uses GOES digital IR data to characterize the cloud population. This method requires as input an estimate of time/area mean rainfall rate over the area in question. Since direct observation of rainfall in the South Pacific is not feasible, a technique using GOES IR data is being developed to estimate rainfall amounts for a 2.5 degree grid at 12h intervals.
NASA Technical Reports Server (NTRS)
Watson, Andrew I.; Holle, Ronald L.; Lopez, Raul E.; Nicholson, James R.
1991-01-01
Since 1986, USAF forecasters at NASA-Kennedy have had available a surface wind convergence technique for use during periods of convective development. In Florida during the summer, most of the thunderstorm development is forced by boundary layer processes. The basic premise is that the life cycle of convection is reflected in the surface wind field beneath these storms. Therefore the monitoring of the local surface divergence and/or convergence fields can be used to determine timing, location, longevity, and the lightning hazards which accompany these thunderstorms. This study evaluates four years of monitoring thunderstorm development using surface wind convergence, particularly the average over the area. Cloud-to-ground (CG) lightning is related in time and space with surface convergence for 346 days during the summers of 1987 through 1990 over the expanded wind network at KSC. The relationships are subdivided according to low level wind flow and midlevel moisture patterns. Results show a one in three chance of CG lightning when a convergence event is identified. However, when there is no convergence, the chance of CG lightning is negligible.
ERIC Educational Resources Information Center
Benner, Gregory J.; Uhing, Brad M.; Pierce, Corey D.; Beaudoin, Kathleen M.; Ralston, Nicole C.; Mooney, Paul
2009-01-01
We sought to extend instrument validation research for the Systematic Screening for Behavior Disorders (SSBD) (Walker & Severson, 1990) using convergent validation techniques. Associations between Critical Events, Adaptive Behavior, and Maladaptive Behavior indices of the SSBD were examined in relation to syndrome, broadband, and total scores…
First-order convex feasibility algorithms for x-ray CT
Sidky, Emil Y.; Jørgensen, Jakob S.; Pan, Xiaochuan
2013-01-01
Purpose: Iterative image reconstruction (IIR) algorithms in computed tomography (CT) are based on algorithms for solving a particular optimization problem. Design of the IIR algorithm, therefore, is aided by knowledge of the solution to the optimization problem on which it is based. Oftentimes, however, it is impractical to achieve an accurate solution to the optimization of interest, which complicates design of IIR algorithms. This issue is particularly acute for CT with a limited angular-range scan, which leads to poorly conditioned system matrices and difficult-to-solve optimization problems. In this paper, we develop IIR algorithms which solve a certain type of optimization called convex feasibility. The convex feasibility approach can provide alternatives to unconstrained optimization approaches and at the same time allow for rapidly convergent algorithms for their solution, thereby facilitating the IIR algorithm design process. Methods: An accelerated version of the Chambolle-Pock (CP) algorithm is adapted to various convex feasibility problems of potential interest to IIR in CT. One of the proposed problems is seen to be equivalent to least-squares minimization, and two other problems provide alternatives to penalized, least-squares minimization. Results: The accelerated CP algorithms are demonstrated on a simulation of circular fan-beam CT with a limited scanning arc of 144°. The CP algorithms are seen in the empirical results to converge to the solution of their respective convex feasibility problems. Conclusions: Formulation of convex feasibility problems can provide a useful alternative to unconstrained optimization when designing IIR algorithms for CT. The approach is amenable to recent methods for accelerating first-order algorithms which may be particularly useful for CT with limited angular-range scanning. The present paper demonstrates the methodology, and future work will illustrate its utility in actual CT application. PMID:23464295
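As a much-simplified stand-in for the accelerated Chambolle-Pock solver used in the paper, the convex feasibility idea itself can be sketched with plain alternating projections (POCS) onto two toy convex sets; the sets, starting point, and tolerances here are illustrative assumptions, not the CT problem:

```python
import numpy as np

# Two convex sets in R^2 with nonempty intersection: a halfspace
# {x : <n_vec, x> <= b} and a disk of radius r centered at c.
n_vec, b = np.array([1.0, 1.0]), 1.0
c, r = np.array([2.0, 0.0]), 1.5

def proj_halfspace(x):
    viol = n_vec @ x - b
    return x if viol <= 0 else x - viol * n_vec / (n_vec @ n_vec)

def proj_disk(x):
    d = np.linalg.norm(x - c)
    return x if d <= r else c + r * (x - c) / d

# Alternating projections converge to a point in the intersection.
x = np.array([5.0, 5.0])
for _ in range(200):
    x = proj_disk(proj_halfspace(x))
```

In IIR terms, each set would encode a constraint (data fidelity within tolerance, nonnegativity, a roughness bound), and any point in the intersection is an acceptable reconstruction.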
Memoryless cooperative graph search based on the simulated annealing algorithm
NASA Astrophysics Data System (ADS)
Hou, Jian; Yan, Gang-Feng; Fan, Zhen
2011-04-01
We have studied the problem of reaching a globally optimal segment in a graph-like environment with a single agent or a group of autonomous mobile agents. First, two efficient simulated-annealing-like algorithms are given for a single agent to solve the problem in a partially known environment and in an unknown environment, respectively. We show that under both proposed control strategies, the agent will eventually converge to a globally optimal segment with probability 1. Second, we use multi-agent searching to simultaneously reduce the computational complexity and accelerate convergence, based on the algorithms given for a single agent. By exploiting graph partition, a gossip-consensus-based scheme is presented to update the key parameter, the radius of the graph, ensuring that the agents spend much less time finding a globally optimal segment.
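A generic simulated-annealing search on a toy path graph illustrates the Metropolis acceptance rule underlying such algorithms; the costs, cooling schedule, and seed are arbitrary choices for this sketch, not the authors' control strategy:

```python
import math
import random

random.seed(1)

# Costs on the nodes of a path graph; the global optimum is at index 7.
cost = [5, 4, 6, 3, 5, 4, 2, 0, 3, 6]
neighbors = lambda i: [j for j in (i - 1, i + 1) if 0 <= j < len(cost)]

i, T = 0, 5.0
best = i
for step in range(2000):
    j = random.choice(neighbors(i))
    dE = cost[j] - cost[i]
    # Metropolis rule: always accept improvements; accept uphill moves
    # with probability exp(-dE / T), which shrinks as T cools.
    if dE <= 0 or random.random() < math.exp(-dE / T):
        i = j
    if cost[i] < cost[best]:
        best = i
    T *= 0.999   # geometric cooling
```

The uphill acceptances let the walker escape the local minima at indices 3 and 6 on its way to the global optimum.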
Improved Convergence and Robustness of USM3D Solutions on Mixed Element Grids (Invited)
NASA Technical Reports Server (NTRS)
Pandya, Mohagna J.; Diskin, Boris; Thomas, James L.; Frink, Neal T.
2015-01-01
Several improvements to the mixed-element USM3D discretization and defect-correction schemes have been made. A new methodology for nonlinear iterations, called the Hierarchical Adaptive Nonlinear Iteration Scheme (HANIS), has been developed and implemented. It provides two additional hierarchies around a simple and approximate preconditioner of USM3D. The hierarchies are a matrix-free linear solver for the exact linearization of Reynolds-averaged Navier Stokes (RANS) equations and a nonlinear control of the solution update. Two variants of the new methodology are assessed on four benchmark cases, namely, a zero-pressure gradient flat plate, a bump-in-channel configuration, the NACA 0012 airfoil, and a NASA Common Research Model configuration. The new methodology provides a convergence acceleration factor of 1.4 to 13 over the baseline solver technology.
Research of converter transformer fault diagnosis based on improved PSO-BP algorithm
NASA Astrophysics Data System (ADS)
Long, Qi; Guo, Shuyong; Li, Qing; Sun, Yong; Li, Yi; Fan, Youping
2017-09-01
The BP (Back Propagation) neural network and conventional Particle Swarm Optimization (PSO) tend to converge toward the global best particle too early, are easily trapped in local optima, and yield low diagnosis accuracy when applied to converter transformer fault diagnosis. To overcome these disadvantages, we propose an improved PSO-BP neural network to raise the accuracy rate. The algorithm modifies the inertia weight equation with an attenuation strategy based on a concave function to avoid premature convergence of PSO, and adopts a Time-Varying Acceleration Coefficient (TVAC) strategy to balance local search and global search ability. Simulation results show that the proposed approach better optimizes the BP neural network in terms of network output error, global searching performance, and diagnosis accuracy.
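A minimal PSO sketch with a concave-decay inertia weight and time-varying acceleration coefficients (TVAC), applied to a toy sphere function rather than the paper's fault-diagnosis network; the specific schedules and seed are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(3)
dim, n_particles, iters = 2, 20, 200

x = rng.uniform(-5, 5, (n_particles, dim))
v = np.zeros((n_particles, dim))
pbest, pbest_f = x.copy(), np.sum(x ** 2, axis=1)
g = pbest[np.argmin(pbest_f)].copy()

for t in range(iters):
    frac = t / iters
    w = 0.9 - 0.5 * frac ** 2     # concave-function decay of inertia weight
    c1 = 2.5 - 2.0 * frac         # TVAC: cognitive coefficient high -> low
    c2 = 0.5 + 2.0 * frac         # TVAC: social coefficient low -> high
    r1 = rng.random((n_particles, dim))
    r2 = rng.random((n_particles, dim))
    v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
    x = x + v
    f = np.sum(x ** 2, axis=1)
    improved = f < pbest_f
    pbest[improved], pbest_f[improved] = x[improved], f[improved]
    g = pbest[np.argmin(pbest_f)].copy()
```

Early on the large cognitive coefficient keeps particles exploring around their own bests; the growing social coefficient later pulls the swarm toward the global best, the balance the TVAC strategy is meant to strike.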
First-Order Hyperbolic System Method for Time-Dependent Advection-Diffusion Problems
NASA Technical Reports Server (NTRS)
Mazaheri, Alireza; Nishikawa, Hiroaki
2014-01-01
A time-dependent extension of the first-order hyperbolic system method for advection-diffusion problems is introduced. Diffusive/viscous terms are written and discretized as a hyperbolic system, which recovers the original equation in the steady state. The resulting scheme offers advantages over traditional schemes: a dramatic simplification in the discretization, high-order accuracy in the solution gradients, and orders-of-magnitude convergence acceleration. The hyperbolic advection-diffusion system is discretized by the second-order upwind residual-distribution scheme in a unified manner, and the system of implicit-residual-equations is solved by Newton's method over every physical time step. The numerical results are presented for linear and nonlinear advection-diffusion problems, demonstrating solutions and gradients produced to the same order of accuracy, with rapid convergence over each physical time step, typically less than five Newton iterations.
Circulation patterns in active lava lakes
NASA Astrophysics Data System (ADS)
Redmond, T. C.; Lev, E.
2014-12-01
Active lava lakes provide a unique window into magmatic conduit processes. We investigated circulation patterns of 4 active lava lakes: Kilauea's Halemaumau crater, Mount Erebus, Erta Ale and Nyiragongo, and in an artificial "lava lake" constructed at the Syracuse University Lava Lab. We employed visual and thermal video recordings collected at these volcanoes and used computer vision techniques to extract time-dependent, two-dimensional surface velocity maps. The large amount of data available from Halemaumau enabled us to identify several characteristic circulation patterns. One such pattern is a rapid acceleration followed by rapid deceleration, often to a level lower than the pre-acceleration level, and then a slow recovery. Another pattern is periodic asymmetric peaks of gradual acceleration and rapid deceleration, or vice versa, previously explained by gas pistoning. Using spectral analysis, we find that the dominant period of circulation cycles at approximately 30 minutes, 3 times longer than the dominant period identified previously for Mount Erebus. Measuring a complete surface velocity field allowed us to map and follow locations of divergence and convergence, therefore upwelling and downwelling, thus connecting the surface flow with that at depth. At Nyiragongo, the location of main upwelling shifts gradually, yet is usually at the interior of the lake; for Erebus it is usually along the perimeter, yet often there is catastrophic downwelling at the interior; for Halemaumau, the upwelling/downwelling position is almost always on the perimeter. In addition to velocity fields, we developed an automated tool for counting crustal plates at the surface of the lava lakes, and found a correlation, and a lag time, between changes in circulation vigor and the average size of crustal plates. Circulation in the artificial basaltic lava "lake" was limited by its size and degree of foaming, yet we measured surface velocities and identified patterns.
Maximum surface velocity showed symmetrical peaks of acceleration and deceleration. In summary, extended observations at lava lakes reveal patterns of circulations at different time scales, yielding insight into different processes controlling the exchange of gas and fluids between the magma chamber and conduit, and the surface and atmosphere.
Accelerating free breathing myocardial perfusion MRI using multi-coil radial k-t SLR
NASA Astrophysics Data System (ADS)
Goud Lingala, Sajan; DiBella, Edward; Adluru, Ganesh; McGann, Christopher; Jacob, Mathews
2013-10-01
The clinical utility of myocardial perfusion MR imaging (MPI) is often restricted by the inability of current acquisition schemes to simultaneously achieve high spatio-temporal resolution, good volume coverage, and high signal to noise ratio. Moreover, many subjects often find it difficult to hold their breath for sufficiently long durations making it difficult to obtain reliable MPI data. Accelerated acquisition of free breathing MPI data can overcome some of these challenges. Recently, an algorithm termed k-t SLR has been proposed to accelerate dynamic MRI by exploiting sparsity and low rank properties of dynamic MRI data. The main focus of this paper is to further improve k-t SLR and demonstrate its utility in considerably accelerating free breathing MPI. We extend its previous implementation to account for multi-coil radial MPI acquisitions. We perform k-t sampling experiments to compare different radial trajectories and determine the best sampling pattern. We also introduce a novel augmented Lagrangian framework to considerably improve the algorithm's convergence rate. The proposed algorithm is validated using free breathing rest and stress radial perfusion data sets from two normal subjects and one patient with ischemia. k-t SLR was observed to provide faithful reconstructions at high acceleration levels with minimal artifacts compared to existing MPI acceleration schemes such as spatio-temporal constrained reconstruction and k-t SPARSE/SENSE.
Pickup ion acceleration in the successive appearance of corotating interaction regions
NASA Astrophysics Data System (ADS)
Tsubouchi, K.
2017-04-01
Acceleration of pickup ions (PUIs) in an environment surrounded by a pair of corotating interaction regions (CIRs) was investigated by numerical simulations using a hybrid code. Energetic particles associated with CIRs have been considered to be a result of the acceleration at their shock boundaries, but recent observations identified the ion flux peaks in the sub-MeV to MeV energy range in the rarefaction region, where two separate CIRs were likely connected by the magnetic field. Our simulation results confirmed these observational features. As the accelerated PUIs repeatedly bounce back and forth along the field lines between the reverse shock of the first CIR and the forward shock of the second one, the energetic population is accumulated in the rarefaction region. It was also verified that PUI acceleration in the dual CIR system had two different stages. First, because PUIs have large gyroradii, multiple shock crossing is possible for several tens of gyroperiods, and there is an energy gain in the component parallel to the magnetic field via shock drift acceleration. Second, as the field rarefaction evolves and the radial magnetic field becomes dominant, Fermi-type reflection takes place at the shock. The converging nature of two shocks results in a net energy gain. The PUI energy acquired through these processes is close to 0.5 MeV, which may be large enough for further acceleration, possibly resulting in the source of anomalous cosmic rays.
Non-linear Multidimensional Optimization for use in Wire Scanner Fitting
NASA Astrophysics Data System (ADS)
Henderson, Alyssa; Terzic, Balsa; Hofler, Alicia; CASA and Accelerator Ops Collaboration
2013-10-01
To ensure experiment efficiency and quality from the Continuous Electron Beam Accelerator at Jefferson Lab, beam energy, size, and position must be measured. Wire scanners are devices inserted into the beamline to produce measurements which are used to obtain beam properties. Extracting physical information from the wire scanner measurements begins by fitting Gaussian curves to the data. This study focuses on optimizing and automating this curve-fitting procedure. We use a hybrid approach combining the efficiency of the Newton Conjugate Gradient (NCG) method with the global convergence of three nature-inspired (NI) optimization approaches: genetic algorithm, differential evolution, and particle-swarm. In this Python-implemented approach, augmenting the locally-convergent NCG with one of the globally-convergent methods ensures the quality, robustness, and automation of curve-fitting. After comparing the methods, we establish that given an initial data-derived guess, each finds a solution with the same chi-square, a measure of the agreement of the fit to the data. NCG is the fastest method, so it is the first to attempt data-fitting. The curve-fitting procedure escalates to one of the globally-convergent NI methods only if NCG fails, thereby ensuring a successful fit. This method allows for an optimal signal fit and can be easily applied to similar problems. Financial support from DoE, NSF, ODU, DoD, and Jefferson Lab.
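The data-derived initial guess followed by fast local refinement can be sketched as follows, using moment estimates plus a few Gauss-Newton steps on a synthetic noiseless profile. Gauss-Newton is a stand-in for the NCG stage, the nature-inspired fallback is omitted, and the profile parameters are invented for illustration:

```python
import numpy as np

# Synthetic wire-scanner-like profile: Gaussian with amplitude 2,
# center 1.0, and width 0.5 (all hypothetical values).
x = np.linspace(-3, 5, 81)
y = 2.0 * np.exp(-0.5 * ((x - 1.0) / 0.5) ** 2)

def model(p, x):
    A, mu, s = p
    return A * np.exp(-0.5 * ((x - mu) / s) ** 2)

# Data-derived initial guess from the peak and the first two moments.
A0 = y.max()
mu0 = np.sum(x * y) / np.sum(y)
s0 = np.sqrt(np.sum(y * (x - mu0) ** 2) / np.sum(y))
p = np.array([A0, mu0, s0])

# Gauss-Newton refinement: linearize the model, solve the least-squares
# subproblem for the parameter update.
for _ in range(20):
    A, mu, s = p
    g = np.exp(-0.5 * ((x - mu) / s) ** 2)
    J = np.column_stack([g,                               # d model / dA
                         A * g * (x - mu) / s ** 2,       # d model / dmu
                         A * g * (x - mu) ** 2 / s ** 3]) # d model / ds
    r = y - model(p, x)
    p = p + np.linalg.lstsq(J, r, rcond=None)[0]
```

Because the moment-based guess already lands close to the true parameters, the local solver converges in a handful of iterations, mirroring why the hybrid scheme tries the fast local method first.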
Research on particle swarm optimization algorithm based on optimal movement probability
NASA Astrophysics Data System (ADS)
Ma, Jianhong; Zhang, Han; He, Baofeng
2017-01-01
The particle swarm optimization (PSO) algorithm can improve control precision and has great application value in fields such as neural network training and fuzzy system control. When the traditional particle swarm algorithm is used to train feed-forward neural networks, however, the search efficiency is low and the algorithm easily falls into local convergence. We propose an improved particle swarm optimization algorithm based on error back-propagation gradient descent. Particles are ranked by fitness so that the optimization problem is considered as a whole; the BP neural network is trained by error back-propagation gradient descent, and each particle updates its velocity and position according to its individual optimum and the global optimum. By making particles learn more from the social optimum and less from their individual optimum, the algorithm prevents particles from falling into local optima, while the gradient information accelerates the local search ability of PSO and improves search efficiency. Simulation results show that the algorithm converges rapidly toward the global optimal solution in the initial stage and continues to close in on it; in the same running time, the improved algorithm has faster convergence speed and better search performance, with especially improved search efficiency in the later stage.
Textbook Multigrid Efficiency for Leading Edge Stagnation
NASA Technical Reports Server (NTRS)
Diskin, Boris; Thomas, James L.; Mineck, Raymond E.
2004-01-01
A multigrid solver is defined as having textbook multigrid efficiency (TME) if the solutions to the governing system of equations are attained in a computational work which is a small (less than 10) multiple of the operation count in evaluating the discrete residuals. TME in solving the incompressible inviscid fluid equations is demonstrated for leading-edge stagnation flows. The contributions of this paper include (1) a special formulation of the boundary conditions near stagnation allowing convergence of the Newton iterations on coarse grids, (2) the boundary relaxation technique to facilitate relaxation and residual restriction near the boundaries, (3) a modified relaxation scheme to prevent initial error amplification, and (4) new general analysis techniques for multigrid solvers. Convergence of algebraic errors below the level of discretization errors is attained by a full multigrid (FMG) solver with one full approximation scheme (FAS) cycle per grid. Asymptotic convergence rates of the FAS cycles for the full system of flow equations are very fast, approaching those for scalar elliptic equations.
The LeRC rail accelerators: Test designs and diagnostic techniques
NASA Technical Reports Server (NTRS)
Zana, L. M.; Kerslake, W. R.; Sturman, J. C.; Wang, S. Y.; Terdan, F. F.
1983-01-01
The feasibility of using rail accelerators for various in-space and to-space propulsion applications was investigated. A 1 meter, 24 sq mm bore accelerator was designed with the goal of demonstrating projectile velocities of 15 km/sec using a peak current of 200 kA. A second rail accelerator, 1 meter long with a 156.25 sq mm bore, was designed with clear polycarbonate sidewalls to permit visual observation of the plasma arc. A study of available diagnostic techniques and their application to the rail accelerator is presented. Specific topics of discussion include the use of interferometry and spectroscopy to examine the plasma armature as well as the use of optical sensors to measure rail displacement during acceleration. Standard diagnostics such as current and voltage measurements are also discussed.
Zhu, Dianwen; Li, Changqing
2014-12-01
Fluorescence molecular tomography (FMT) is a promising imaging modality that has been actively studied over the past two decades, since it can locate a specific tumor position three-dimensionally in small animals. However, it remains challenging to obtain fast, robust and accurate reconstructions of the fluorescent probe distribution due to the large computational burden, noisy measurements and the ill-posed nature of the inverse problem. In this paper we propose a nonuniform preconditioning method combined with L1 regularization and an ordered subsets technique (NUMOS) to address the different updating needs at different pixels, to enhance sparsity and suppress noise, and to further boost the convergence of approximate solutions for fluorescence molecular tomography. Using both simulated data and a phantom experiment, we found that the proposed nonuniform updating method outperforms its popular uniform counterpart, producing a more localized, less noisy, more accurate image. The computational cost was greatly reduced as well. The ordered subsets (OS) technique provided additional 5-fold and 3-fold speedups for the simulation and phantom experiments, respectively, without degrading image quality. When compared with popular L1 methods such as the iterative soft-thresholding algorithm (ISTA) and the fast iterative soft-thresholding algorithm (FISTA), NUMOS also outperforms them, obtaining a better image in a much shorter time.
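For reference, the ISTA baseline that the authors compare against can be sketched in a few lines. This is a generic sketch for min ½‖Ax−b‖² + λ‖x‖₁; the ordered-subsets and nonuniform-preconditioning ideas of NUMOS are not reproduced here, and the tiny 2×2 problem is purely illustrative.

```python
def soft(z, t):
    # soft-thresholding: the proximal operator of t*|.|
    return z - t if z > t else (z + t if z < -t else 0.0)

def ista(A, b, lam, step, iters):
    m, n = len(A), len(A[0])
    x = [0.0] * n
    for _ in range(iters):
        # gradient of the data term 0.5*||Ax - b||^2
        Ax = [sum(A[i][j] * x[j] for j in range(n)) for i in range(m)]
        g = [sum(A[i][j] * (Ax[i] - b[i]) for i in range(m)) for j in range(n)]
        # gradient step followed by the shrinkage (proximal) step
        x = [soft(x[j] - step * g[j], step * lam) for j in range(n)]
    return x

# tiny illustration: with A = I the solution is soft(b, lam) elementwise
x = ista([[1.0, 0.0], [0.0, 1.0]], [1.0, 0.05], lam=0.1, step=1.0, iters=50)
```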
NASA Astrophysics Data System (ADS)
Neveu, N.; Larson, J.; Power, J. G.; Spentzouris, L.
2017-07-01
Model-based, derivative-free, trust-region algorithms are increasingly popular for optimizing computationally expensive numerical simulations. A strength of such methods is their efficient use of function evaluations. In this paper, we use one such algorithm to optimize the beam dynamics in two cases of interest at the Argonne Wakefield Accelerator (AWA) facility. First, we minimize the emittance of a 1 nC electron bunch produced by the AWA rf photocathode gun by adjusting three parameters: rf gun phase, solenoid strength, and laser radius. The algorithm converges to a set of parameters that yield an emittance of 1.08 μm. Second, we expand the number of optimization parameters to model the complete AWA rf photoinjector (the gun and six accelerating cavities) at 40 nC. The optimization algorithm is used in a Pareto study that compares the trade-off between emittance and bunch length for the AWA 70 MeV photoinjector.
Grignolo, Alberto; Mingping, Zhang
2018-01-01
Sweeping reforms in the largest markets of the Asia-Pacific region are transforming the regulatory and commercial landscape for foreign pharmaceutical companies. Japan, South Korea, and China are leading the charge, establishing mechanisms and infrastructure that both reflect and help drive international regulatory convergence and accelerate delivery of needed, innovative products to patients. In this rapidly evolving regulatory and commercial environment, drug developers can benefit from reforms and proliferating accelerated pathway (AP) frameworks, but only with regulatory and evidence-generation strategies tailored to the region. Otherwise, they will confront significant pricing and reimbursement headwinds. Although APAC economies are at different stages of development, they share a common imperative: to balance pharmaceutical innovation with affordability. Despite the complexity of meeting these sometimes conflicting demands, companies that focus on demonstrating and delivering value for money, and that price new treatments reasonably and sustainably, can succeed both for their shareholders and the region's patient population.
Development of high intensity linear accelerator for heavy ion inertial fusion driver
NASA Astrophysics Data System (ADS)
Lu, Liang; Hattori, Toshiyuki; Hayashizaki, Noriyosu; Ishibashi, Takuya; Okamura, Masahiro; Kashiwagi, Hirotsugu; Takeuchi, Takeshi; Zhao, Hongwei; He, Yuan
2013-11-01
In order to verify the direct plasma injection scheme (DPIS), an acceleration test was carried out in 2001 using a radio frequency quadrupole (RFQ) heavy ion linear accelerator (linac) and a CO2-laser ion source (LIS) (Okamura et al., 2002) [1]. The accelerated carbon beam was observed successfully and the obtained current was 9.22 mA for C4+. To confirm the capability of the DPIS, we succeeded in accelerating 60 mA carbon ions with the DPIS in 2004 (Okamura et al., 2004; Kashiwagi and Hattori, 2004) [2,3]. We have studied a multi-beam type RFQ with an interdigital-H (IH) cavity that has a power-efficient structure in the low energy region. We designed and manufactured a two-beam type RFQ linac as a prototype for the multi-beam type linac; the beam acceleration test showed that carbon beams were successfully accelerated from 5 keV/u up to 60 keV/u with an output current of 108 mA (2×54 mA/channel) (Ishibashi et al., 2011) [4]. We believe that the acceleration techniques of DPIS and the multi-beam type IH-RFQ linac are technical breakthroughs for heavy-ion inertial confinement fusion (HIF). The conceptual design of the RF linac with these techniques for HIF is studied. New accelerator systems using these techniques for the HIF basic experiment are being designed to accelerate 400 mA carbon ions using four-beam type IH-RFQ linacs with DPIS. A model with a four-beam acceleration cavity was designed and manufactured to establish the proof of principle (PoP) of the accelerator.
On conforming mixed finite element methods for incompressible viscous flow problems
NASA Technical Reports Server (NTRS)
Gunzburger, M. D; Nicolaides, R. A.; Peterson, J. S.
1982-01-01
The application of conforming mixed finite element methods to obtain approximate solutions of linearized Navier-Stokes equations is examined. Attention is given to the convergence rates of various finite element approximations of the pressure and the velocity field. The optimality of the convergence rates is addressed in terms of comparisons of the approximation convergence to a smooth solution in relation to the best approximation available for the finite element space used. Consideration is also devoted to techniques for efficient use of a Gaussian elimination algorithm to obtain a solution to a system of linear algebraic equations derived by finite element discretizations of linear partial differential equations.
NASA Technical Reports Server (NTRS)
Rosen, I. G.
1988-01-01
An abstract approximation and convergence theory for the closed-loop solution of discrete-time linear-quadratic regulator problems for parabolic systems with unbounded input is developed. Under relatively mild stabilizability and detectability assumptions, functional analytic, operator techniques are used to demonstrate the norm convergence of Galerkin-based approximations to the optimal feedback control gains. The application of the general theory to a class of abstract boundary control systems is considered. Two examples, one involving the Neumann boundary control of a one-dimensional heat equation, and the other, the vibration control of a cantilevered viscoelastic beam via shear input at the free end, are discussed.
NASA Technical Reports Server (NTRS)
Banks, H. T.; Kunisch, K.
1982-01-01
Approximation results from linear semigroup theory are used to develop a general framework for convergence of approximation schemes in parameter estimation and optimal control problems for nonlinear partial differential equations. These ideas are used to establish theoretical convergence results for parameter identification using modal (eigenfunction) approximation techniques. Results from numerical investigations of these schemes for both hyperbolic and parabolic systems are given.
The effect of incipient presbyopia on the correspondence between accommodation and vergence.
Baker, Fiona J; Gilmartin, Bernard
2002-06-01
To investigate the accommodation-convergence relationship during the incipient phase of presbyopia. The study aimed to differentiate between the current theories of presbyopia and to explore the mechanisms by which the oculomotor system compensates for the change in the accommodation-convergence relationship contingent on a declining amplitude of accommodation. Using a Canon R-1 open-view autorefractor and a haploscope device, measurements were made of the stimulus and response accommodative convergence/accommodation ratios and the convergence accommodation/convergence ratio of 28 subjects aged 35-45 years at the commencement of the study. Amplitude of accommodation was assessed using a push-down technique. The measurements were repeated at 4-monthly intervals over a 2-year period. The results showed that with the decline in the amplitude of accommodation there is an increase in the accommodative convergence response per unit of accommodative response and a decrease in the convergence accommodation response per unit of convergence. The results of this study fail to support the Hess-Gullstrand theory of presbyopia in that the ciliary muscle effort required to produce a unit change in accommodation increases, rather than stays constant, with age. Data show that the near vision response is limited to the maximum vergence response that can be tolerated and, despite being within the amplitude of accommodation, a stimulus may still appear blurred because the vergence component determines the proportion of available accommodation utilised during near vision.
NASA Astrophysics Data System (ADS)
Denschlag, Robert; Lingenheil, Martin; Tavan, Paul
2008-06-01
Replica exchange (RE) molecular dynamics (MD) simulations are frequently applied to sample the folding-unfolding equilibria of β-hairpin peptides in solution, because efficiency gains are expected from this technique. Using a three-state Markov model featuring key aspects of β-hairpin folding we show that RE simulations can be less efficient than conventional techniques. Furthermore we demonstrate that one is easily seduced to erroneously assign convergence to the RE sampling, because RE ensembles can rapidly reach long-lived stationary states. We conclude that typical REMD simulations covering a few tens of nanoseconds are by far too short for sufficient sampling of β-hairpin folding-unfolding equilibria.
NASA Technical Reports Server (NTRS)
Mielke, Steven L.; Truhlar, Donald G.; Schwenke, David W.
1991-01-01
Improved techniques and well-optimized basis sets are presented for application of the outgoing wave variational principle to calculate converged quantum mechanical reaction probabilities. They are illustrated with calculations for the reactions D + H2 yields HD + H with total angular momentum J = 3 and F + H2 yields HF + H with J = 0 and 3. The optimization involves the choice of distortion potential, the grid for calculating half-integrated Green's functions, the placement, width, and number of primitive distributed Gaussians, and the computationally most efficient partition between dynamically adapted and primitive basis functions. Benchmark calculations with 224-1064 channels are presented.
NASA Astrophysics Data System (ADS)
Takeda, Kotaro; Honda, Kentaro; Takeya, Tsutomu; Okazaki, Kota; Hiraki, Tatsurou; Tsuchizawa, Tai; Nishi, Hidetaka; Kou, Rai; Fukuda, Hiroshi; Usui, Mitsuo; Nosaka, Hideyuki; Yamamoto, Tsuyoshi; Yamada, Koji
2015-01-01
We developed a design technique for a photonics-electronics convergence system by using an equivalent circuit of optical devices in an electrical circuit simulator. We used the transfer matrix method to calculate the response of an optical device. This method used physical parameters and dimensions of optical devices as calculation parameters to design a device in the electrical circuit simulator. It also used an intermediate frequency to express the wavelength dependence of optical devices. By using both techniques, we simulated bit error rates and eye diagrams of optical and electrical integrated circuits and calculated influences of device structure change and wavelength shift penalty.
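The abstract does not spell out its matrices; as an illustration, the standard characteristic-matrix (transfer matrix) calculation for a lossless thin-film stack at normal incidence looks like the sketch below. The quarter-wave MgF2 antireflection layer on glass is a hypothetical example, not a device from the paper.

```python
import cmath
import math

def layer_matrix(n, d, lam):
    # characteristic matrix of one lossless film at normal incidence
    delta = 2 * math.pi * n * d / lam
    return [[cmath.cos(delta), 1j * cmath.sin(delta) / n],
            [1j * n * cmath.sin(delta), cmath.cos(delta)]]

def matmul(A, B):
    return [[A[0][0]*B[0][0] + A[0][1]*B[1][0], A[0][0]*B[0][1] + A[0][1]*B[1][1]],
            [A[1][0]*B[0][0] + A[1][1]*B[1][0], A[1][0]*B[0][1] + A[1][1]*B[1][1]]]

def stack_response(layers, n0, ns, lam):
    # multiply the layer matrices, then read off reflectance and transmittance
    M = [[1, 0], [0, 1]]
    for n, d in layers:
        M = matmul(M, layer_matrix(n, d, lam))
    B = M[0][0] + M[0][1] * ns
    C = M[1][0] + M[1][1] * ns
    r = (n0 * B - C) / (n0 * B + C)
    R = abs(r) ** 2
    T = 4 * n0 * ns / abs(n0 * B + C) ** 2
    return R, T

lam = 550e-9
# single quarter-wave MgF2 (n = 1.38) layer on glass (n = 1.52)
R, T = stack_response([(1.38, lam / (4 * 1.38))], n0=1.0, ns=1.52, lam=lam)
```

For a lossless stack, R + T = 1 serves as a built-in energy-conservation check on the matrix arithmetic.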
NASA Technical Reports Server (NTRS)
Finley, Tom D.; Wong, Douglas T.; Tripp, John S.
1993-01-01
A newly developed data-reduction technique provides an improved procedure that makes least-squares minimization possible between data sets with unequal numbers of data points. This technique was applied in the Crew and Equipment Translation Aid (CETA) experiment on the STS-37 Shuttle flight in April 1991 to obtain the velocity profile from the acceleration data. The new technique uses a least-squares method to estimate the initial conditions and calibration constants. These initial conditions are estimated by least-squares fitting the displacements indicated by the Hall-effect sensor data to the corresponding displacements obtained from integrating the acceleration data. The velocity and displacement profiles can then be recalculated from the corresponding acceleration data using the estimated parameters. This technique, which enables instantaneous velocities to be obtained from the test data instead of only average velocities at varying discrete times, offers more detailed velocity information, particularly during periods of large acceleration or deceleration.
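The essence of the technique can be sketched as follows (a hypothetical reconstruction with simplified synthetic data in place of the CETA flight data): integrate the acceleration with zero initial conditions, then least-squares fit the initial displacement and velocity so that the integrated displacement matches the sparse Hall-effect displacement samples.

```python
def cumtrapz(y, dt):
    # cumulative trapezoidal integration, starting from zero
    out = [0.0]
    for i in range(1, len(y)):
        out.append(out[-1] + 0.5 * dt * (y[i-1] + y[i]))
    return out

def fit_initial_conditions(t_meas, d_meas, accel, dt):
    # s(t): displacement due to the measured acceleration alone (zero ICs)
    v_part = cumtrapz(accel, dt)
    s_part = cumtrapz(v_part, dt)
    # sample s at the measurement times (assumed to fall on the accel grid)
    s = [s_part[round(t / dt)] for t in t_meas]
    # least squares for d(t) = d0 + v0*t + s(t): 2-unknown normal equations
    n = len(t_meas)
    y = [d_meas[k] - s[k] for k in range(n)]
    St = sum(t_meas)
    Stt = sum(ti * ti for ti in t_meas)
    Sy = sum(y)
    Sty = sum(t_meas[k] * y[k] for k in range(n))
    det = n * Stt - St * St
    d0 = (Stt * Sy - St * Sty) / det
    v0 = (n * Sty - St * Sy) / det
    # instantaneous velocity profile recovered with the fitted v0
    return d0, v0, [v0 + v for v in v_part]

dt = 0.01
accel = [0.5] * 201                                  # 2 s of 0.5 m/s^2
t_meas = [0.5, 1.0, 1.5, 2.0]                        # sparse displacement samples
d_meas = [0.25 * t * t + 0.2 * t for t in t_meas]    # true v0 = 0.2, d0 = 0
d0, v0, vel = fit_initial_conditions(t_meas, d_meas, accel, dt)
```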
Design and optimization of resistance wire electric heater for hypersonic wind tunnel
NASA Astrophysics Data System (ADS)
Rehman, Khurram; Malik, Afzaal M.; Khan, I. J.; Hassan, Jehangir
2012-06-01
The range of flow velocities of high speed wind tunnels varies from Mach 1.0 to hypersonic order. In order to achieve such high speed flows, a high expansion nozzle is employed in the converging-diverging section of the wind tunnel nozzle. The air for flow is compressed and stored in pressure vessels at temperatures close to ambient conditions. The stored air is dried and has a minimum amount of moisture. However, when this air is expanded rapidly, its temperature drops significantly and liquefaction conditions can be encountered. Air at near room temperature will liquefy due to expansion cooling at a flow velocity of more than Mach 4.0 in a wind tunnel test section. Such liquefaction may not only be hazardous to the model under test and the wind tunnel structure; it may also affect the test results. In order to avoid liquefaction of air, a pre-heater is employed between the pressure vessel and the converging-diverging section of a wind tunnel. A number of techniques are used for heating the flow in high speed wind tunnels, including electric arc heating, pebble bed electric heating, pebble bed natural gas fired heaters, hydrogen burner heaters, and laser heater mechanisms. The most common are the pebble bed storage type heaters, which are inefficient, contaminating and time consuming. A well-designed electric heating system can be efficient, clean and simple in operation for accelerating the wind tunnel flow up to Mach 10. This paper presents a CFD analysis of an electric preheater in different configurations to optimize its design. The analysis was done using the ANSYS 12.1 FLUENT package, while geometry and meshing were done in GAMBIT.
Cocreating business's new social compact.
Brugmann, Jeb; Prahalad, C K
2007-02-01
Moving beyond decades of mutual distrust and animosity, corporations and nongovernmental organizations (NGOs) are learning to cooperate with each other. Realizing that their interests are converging, the two sides are working together to create innovative business models that are helping to grow new markets and accelerate the eradication of poverty. The path to convergence has proceeded in three stages. In the initial be-responsible stage, companies and NGOs, realizing that they had to coexist, started to look for ways to influence each other through joint social responsibility projects. This experience paved the way for the get-into-business stage, in which NGOs and companies sought to serve the poor by setting up successful businesses. In the process, NGOs learned business discipline from the private sector, while corporations gained an appreciation for the local knowledge, low-cost business models, and community-based marketing techniques that the NGOs have mastered. Increased success on both sides has laid the foundation for the cocreate-business stage, in which companies and NGOs become key parts of each other's capacity to deliver value. When BP sought to market a dual-fuel portable stove in India, it set up one such cocreation system with three Indian NGOs. The system allowed BP to bring the innovative stove to a geographically dispersed market through myriad local distributors without incurring distribution costs so high that the product would become unaffordable. The company sold its stoves profitably, the NGOs gained access to a lucrative revenue stream that could fund other projects, and consumers got more than the ability to sit down to a hot meal; they got the opportunity to earn incomes as the local distributors and thus to gain economic and social influence.
Gain and movement time of convergence-accommodation in preschool children.
Suryakumar, R; Bobier, W R
2004-11-01
Convergence-accommodation is the synkinetic change in accommodation driven by vergence. A few studies have investigated the static and dynamic properties of this cross-link in adults but little is known about convergence-accommodation in children. The purpose of this study was to develop a technique for measuring convergence-accommodation and to study its dynamics (gain and movement time) in a sample of pre-school children. Convergence-accommodation measures were examined on thirty-seven normal pre-school children (mean age = 4.0 +/- 1.31 yrs). Stimulus CA/C (sCA/C) ratios and movement time measures of convergence-accommodation were assessed using a photorefractor while subjects viewed a DOG target. Repeated measures were obtained on eight normal adults (mean age = 23 +/- 0.2 yrs). The mean sCA/C ratios and movement times were not significantly different between adults and children (0.10 D/Delta [0.61 D/M.A.], 743 +/- 70 ms and 0.11 D/Delta [0.50 D/M.A.], 787 +/- 216 ms). Repeated measures on adults showed a non-significant mean difference of 0.001 D/Delta. The results suggest that the possible differences in crystalline lens (plant) characteristics between children and adults do not appear to influence convergence-accommodation gain or duration.
Three-dimensional simulation of vortex breakdown
NASA Technical Reports Server (NTRS)
Kuruvila, G.; Salas, M. D.
1990-01-01
The integral form of the complete, unsteady, compressible, three-dimensional Navier-Stokes equations in conservation form, cast in a generalized coordinate system, is solved numerically to simulate the vortex breakdown phenomenon. The inviscid fluxes are discretized using Roe's upwind-biased flux-difference splitting scheme and the viscous fluxes are discretized using central differencing. Time integration is performed using a backward Euler ADI (alternating direction implicit) scheme. A full approximation multigrid is used to accelerate the convergence to steady state.
The Even-Rho and Even-Epsilon Algorithms for Accelerating Convergence of a Numerical Sequence
1981-12-01
...equal, leading to zero or very small divisors. Computer programs implementing these algorithms are given along with sample output. An appreciable amount... calculation of the array of Shanks transforms or, equivalently, of the related Padé table. The other, the even-rho algorithm, is closely related...
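The even-epsilon recursion referred to above is Wynn's ε-algorithm, ε_{k+1}^{(n)} = ε_{k-1}^{(n+1)} + 1/(ε_k^{(n+1)} - ε_k^{(n)}), whose even-numbered columns approximate the limit of the sequence. A minimal sketch follows, using the alternating series for ln 2 as a stand-in example (not from the report); the divisor in the recursion is exactly where nearly equal entries cause the small-divisor hazard the report discusses.

```python
import math

def wynn_epsilon(partial_sums):
    # Wynn's epsilon algorithm; even columns of the table approximate the limit.
    prev = [0.0] * (len(partial_sums) + 1)   # epsilon_{-1} column (all zeros)
    cur = list(partial_sums)                 # epsilon_0 column = partial sums
    for _ in range(len(partial_sums) - 1):
        # nearly equal neighbours make this divisor tiny -- the numerical
        # hazard noted in the report
        nxt = [prev[n+1] + 1.0 / (cur[n+1] - cur[n])
               for n in range(len(cur) - 1)]
        prev, cur = cur, nxt
    # top entry of the highest even-numbered column
    return cur[0] if len(partial_sums) % 2 == 1 else prev[0]

# partial sums of 1 - 1/2 + 1/3 - ... , which converge slowly to ln 2
s, sums = 0.0, []
for k in range(1, 8):
    s += (-1) ** (k + 1) / k
    sums.append(s)
accelerated = wynn_epsilon(sums)
```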
Analysis of L-band Multi-Channel Sea Clutter
2010-08-01
Some researchers found that the use of a hybrid algorithm of PS and GA could accelerate the convergence for array beamforming designs (Yeo and Lu...). ...to be shown is array failure correction using the PS algorithm. Assume element 5 of a 32-element, half-wavelength-spacing linear array is in failure. The goal... algorithm. The blue one is the 20 dB Chebyshev pattern and the template in red is the goal pattern to achieve. Two corrected beam patterns are...
Advanced Computational Methods for Study of Electromagnetic Compatibility
2011-03-31
following result establishes the super-algebraic convergence of G_k^{per,L} to G_k^{per}: Theorem 2.1 (Bruno, Shipman, Turc, Venakides). If k is not a Wood... |G_k^{per}(x,x') - G_k^{per,L}(x,x')| <= C L^{1/2 - p}. Figure 7 demonstrates the excellent accuracies arising from use of Theorem 2.1. Separable-variables... representations of non-adjacent interactions. In order to further accelerate the evaluation of G_k^{per,L}, we derive Taylor series expansions of quantities G_k
A New Zenith Tropospheric Delay Grid Product for Real-Time PPP Applications over China.
Lou, Yidong; Huang, Jinfang; Zhang, Weixing; Liang, Hong; Zheng, Fu; Liu, Jingnan
2017-12-27
Tropospheric delay is one of the major factors affecting the accuracy of electromagnetic distance measurements. To provide wide-area real-time high precision zenith tropospheric delay (ZTD), the temporal and spatial variations of ZTD with altitude were analyzed on the basis of the latest meteorological reanalysis product (ERA-Interim) provided by the European Center for Medium-Range Weather Forecasts (ECMWF). An inverse scale height model at given locations taking latitude, longitude and day of year as inputs was then developed and used to convert real-time ZTD at GPS stations in the Crustal Movement Observation Network of China (CMONOC) from station height to mean sea level (MSL). The real-time ZTD grid product (RtZTD) over China was then generated with a time interval of 5 min. Compared with ZTD estimated in post-processing mode, the bias and error RMS of ZTD at test GPS stations derived from RtZTD are 0.39 and 1.56 cm, which is significantly more accurate than commonly used empirical models. In addition, simulated real-time kinematic Precise Point Positioning (PPP) tests show that using RtZTD could accelerate the BDS-PPP convergence time by up to 32% and 65% in the horizontal and vertical components (with coordinate error thresholds set to 0.4 m), respectively. For GPS-PPP, the convergence time using RtZTD can be accelerated by up to 29% in the vertical component (0.2 m).
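The height reduction at the core of the grid product can be sketched as an exponential vertical profile with an inverse scale height β. In the paper, β is modeled as a function of latitude, longitude and day of year; in the sketch below it is simply a constant input, and the numerical values are illustrative, not taken from the paper.

```python
import math

def ztd_to_msl(ztd_station, h_station_m, beta_per_km):
    # Exponential profile ZTD(h) = ZTD(0) * exp(-beta * h), so
    # ZTD at mean sea level = ZTD(h) * exp(beta * h)
    return ztd_station * math.exp(beta_per_km * h_station_m / 1000.0)

def ztd_from_msl(ztd_msl, h_m, beta_per_km):
    # inverse operation: interpolate the gridded MSL value back to a height
    return ztd_msl * math.exp(-beta_per_km * h_m / 1000.0)

# station 1500 m above MSL with a 2.05 m ZTD; beta = 0.13 /km is illustrative
ztd_msl = ztd_to_msl(2.05, 1500.0, 0.13)
ztd_back = ztd_from_msl(ztd_msl, 1500.0, 0.13)
```

Reducing a high-altitude station's ZTD to MSL increases it, since the station sits above part of the atmosphere; the round trip recovers the station value exactly.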
Data Analysis and Non-local Parametrization Strategies for Organized Atmospheric Convection
NASA Astrophysics Data System (ADS)
Brenowitz, Noah D.
The intrinsically multiscale nature of moist convective processes in the atmosphere complicates scientific understanding, and, as a result, current coarse-resolution climate models poorly represent convective variability in the tropics. This dissertation addresses this problem by 1) studying new cumulus convective closures in a pair of idealized models for tropical moist convection, and 2) developing innovative strategies for analyzing high-resolution numerical simulations of organized convection. The first two chapters of this dissertation revisit a historical controversy about the use of convective closures based on the large-scale wind field or moisture convergence. In the first chapter, a simple coarse-resolution stochastic model for convective inhibition is designed which includes the non-local effects of wind convergence on convective activity. This model is designed to replicate the convective dynamics of a typical coarse-resolution climate prediction model. The non-local convergence coupling is motivated by the phenomenon of gregarious convection, whereby mesoscale convective systems emit gravity waves which can promote convection at distant locations. Linearized analysis and nonlinear simulations show that this convergence coupling allows for increased interaction between cumulus convection and the large-scale circulation, but does not suffer from the deleterious behavior of traditional moisture-convergence closures. In the second chapter, the non-local convergence coupling idea is extended to an idealized stochastic multicloud model. This model allows for stochastic transitions between three distinct cloud types, and non-local convergence coupling is most beneficial when applied to the transition from shallow to deep convection. This is consistent with recent observational and numerical modeling evidence, and there is a growing body of work highlighting the importance of this transition in tropical meteorology.
In a series of idealized Walker cell simulations, convergence coupling enhances the persistence of Kelvin wave analogs in dry regions of the domain while leaving the dynamics in moist regions largely unaltered. The final chapter of this dissertation presents a technique for analyzing the variability of a direct numerical simulation of Rayleigh-Benard convection at large aspect ratio, which is a basic prototype of convective organization. High resolution numerical models are an invaluable tool for studying atmospheric dynamics, but modern data analysis techniques struggle with the extreme size of the model outputs and the trivial symmetries of the underlying dynamical systems (e.g. shift-invariance). A new data analysis approach which is invariant to spatial symmetries is derived by combining a quasi-Lagrangian description of the data, time-lagged embedding, and manifold learning techniques. The quasi-Lagrangian description is obtained by a straightforward isothermal binning procedure, which compresses the data in a dynamically-aware fashion. A small number of orthogonal modes returned by this algorithm are able to explain the highly intermittent dynamics of the bulk heat transfer, as quantified by the Nusselt Number.
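Of the ingredients above, the time-lagged embedding step is easy to make concrete (a generic sketch; the isothermal binning and manifold-learning stages are not reproduced here). Each scalar sample x_t is replaced by a delay vector (x_t, x_{t-lag}, ..., x_{t-(dim-1)·lag}), which recovers dynamical state information from a single observable.

```python
def delay_embed(series, dim, lag):
    # map a scalar series x_t to vectors (x_t, x_{t-lag}, ..., x_{t-(dim-1)*lag})
    start = (dim - 1) * lag       # first index with a full history available
    return [[series[t - k * lag] for k in range(dim)]
            for t in range(start, len(series))]

# a ramp 0..9 embedded with dimension 3 and lag 2
vectors = delay_embed(list(range(10)), dim=3, lag=2)
```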
Convergent Evolution of Ribonuclease H in LTR Retrotransposons and Retroviruses
Ustyantsev, Kirill; Novikova, Olga; Blinov, Alexander; Smyshlyaev, Georgy
2015-01-01
Ty3/Gypsy long terminal repeat (LTR) retrotransposons are structurally and phylogenetically close to retroviruses. Two notable structural differences between these groups of genetic elements are 1) the presence in retroviruses of an additional envelope gene, env, which mediates infection, and 2) a specific dual ribonuclease H (RNH) domain encoded by the retroviral pol gene. However, similar to retroviruses, many Ty3/Gypsy LTR retrotransposons harbor additional env-like genes, promoting concepts of the infective mode of these retrotransposons. Here, we provide a further line of evidence of similarity between retroviruses and some Ty3/Gypsy LTR retrotransposons. We identify that, together with their additional genes, plant Ty3/Gypsy LTR retrotransposons of the Tat group have a second RNH, as do retroviruses. Most importantly, we show that the resulting dual RNHs of Tat LTR retrotransposons and retroviruses emerged independently, providing strong evidence for their convergent evolution. The convergent resemblance of Tat LTR retrotransposons and retroviruses may indicate similar selection pressures acting on these diverse groups of elements and reveal potential evolutionary constraints on their structure. We speculate that dual RNH is required to accelerate retrotransposon evolution through increased rates of strand transfer events and subsequent recombination events. PMID:25605791
Hernandez, Wilmar; de Vicente, Jesús; Sergiyenko, Oleg Y.; Fernández, Eduardo
2010-01-01
In this paper, the fast least-mean-squares (LMS) algorithm was used both to eliminate noise corrupting the important information coming from a piezoresistive accelerometer for automotive applications and to improve the convergence rate of the filtering process based on the conventional LMS algorithm. The response of the accelerometer under test was corrupted by process and measurement noise, and the signal processing stage was carried out by using both conventional filtering, which was already shown in a previous paper, and optimal adaptive filtering. The adaptive filtering process relied on the LMS adaptive filtering family, which has been shown to have very good convergence and robustness properties, and here a comparative analysis between the results of applying the conventional LMS algorithm and the fast LMS algorithm to solve a real-life filtering problem was carried out. In short, in this paper the piezoresistive accelerometer was tested under multi-frequency acceleration excitation. Due to the kind of test conducted in this paper, the use of conventional filtering was discarded and the choice of one adaptive filter over the other was based on the signal-to-noise ratio improvement and the convergence rate. PMID:22315579
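A minimal LMS sketch is shown below, using generic system identification with synthetic data rather than the authors' accelerometer set-up; the filter length, step size and signals are illustrative assumptions. The weight update w ← w + 2μ·e·x is the core of the whole LMS family.

```python
import random

def lms_filter(x, d, n_taps, mu):
    # adapt FIR weights w so that the filter output tracks the desired signal d
    w = [0.0] * n_taps
    errors = []
    for n in range(len(x)):
        xs = [x[n - k] if n - k >= 0 else 0.0 for k in range(n_taps)]
        y = sum(w[k] * xs[k] for k in range(n_taps))
        e = d[n] - y                                  # instantaneous error
        w = [w[k] + 2.0 * mu * e * xs[k] for k in range(n_taps)]
        errors.append(e)
    return w, errors

rng = random.Random(0)
x = [rng.uniform(-1.0, 1.0) for _ in range(2000)]
h = [0.5, -0.3]                                       # unknown system to identify
d = [h[0] * x[n] + (h[1] * x[n-1] if n > 0 else 0.0) for n in range(len(x))]
w, errors = lms_filter(x, d, n_taps=2, mu=0.05)
```

After adaptation the weights approach the unknown system coefficients and the error power falls by orders of magnitude, the convergence-rate behavior the paper compares across LMS variants.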
Muckley, Matthew J; Noll, Douglas C; Fessler, Jeffrey A
2015-02-01
Sparsity-promoting regularization is useful for combining compressed sensing assumptions with parallel MRI for reducing scan time while preserving image quality. Variable splitting algorithms are the current state-of-the-art algorithms for SENSE-type MR image reconstruction with sparsity-promoting regularization. These methods are very general and have been observed to work with almost any regularizer; however, the tuning of associated convergence parameters is a commonly-cited hindrance in their adoption. Conversely, majorize-minimize algorithms based on a single Lipschitz constant have been observed to be slow in shift-variant applications such as SENSE-type MR image reconstruction since the associated Lipschitz constants are loose bounds for the shift-variant behavior. This paper bridges the gap between the Lipschitz constant and the shift-variant aspects of SENSE-type MR imaging by introducing majorizing matrices in the range of the regularizer matrix. The proposed majorize-minimize methods (called BARISTA) converge faster than state-of-the-art variable splitting algorithms when combined with momentum acceleration and adaptive momentum restarting. Furthermore, the tuning parameters associated with the proposed methods are unitless convergence tolerances that are easier to choose than the constraint penalty parameters required by variable splitting algorithms.
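BARISTA itself requires the MRI system and regularizer matrices, but its momentum-with-adaptive-restart ingredient can be sketched on a toy smooth problem. Below is a gradient-based restart in the style of O'Donoghue and Candès applied to an ill-conditioned quadratic; the quadratic and all constants are illustrative assumptions, not the paper's reconstruction problem.

```python
import math

def quad_grad(x, q):
    # gradient of f(x) = 0.5 * sum(q[i] * x[i]^2)
    return [q[i] * x[i] for i in range(len(x))]

def accelerated_descent(x0, q, iters, restart=True):
    L = max(q)                         # Lipschitz constant of the gradient
    x, y, t = list(x0), list(x0), 1.0
    for _ in range(iters):
        g = quad_grad(y, q)
        x_new = [y[i] - g[i] / L for i in range(len(y))]
        t_new = 0.5 * (1.0 + math.sqrt(1.0 + 4.0 * t * t))
        if restart and sum(g[i] * (x_new[i] - x[i]) for i in range(len(y))) > 0:
            # momentum points uphill: reset it (adaptive restart)
            t_new, y = 1.0, list(x_new)
        else:
            beta = (t - 1.0) / t_new
            y = [x_new[i] + beta * (x_new[i] - x[i]) for i in range(len(y))]
        x, t = x_new, t_new
    return x

q = [1.0, 100.0]                       # condition number 100
f = lambda z: 0.5 * sum(q[i] * z[i] * z[i] for i in range(2))
x_plain = [1.0, 1.0]
for _ in range(200):                   # plain gradient descent baseline
    gr = quad_grad(x_plain, q)
    x_plain = [x_plain[i] - gr[i] / max(q) for i in range(2)]
x_acc = accelerated_descent([1.0, 1.0], q, 200)
```

On this problem the restarted accelerated iteration reaches a far lower objective than plain gradient descent in the same number of iterations, illustrating why momentum plus restarting pays off in shift-variant problems.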
Pelet, S; Previte, M J R; Laiho, L H; So, P T C
2004-10-01
Global fitting algorithms have been shown to effectively improve the accuracy and precision of the analysis of fluorescence lifetime imaging microscopy data. Global analysis performs better than unconstrained data fitting when prior information exists, such as the spatial invariance of the lifetimes of individual fluorescent species. The highly coupled nature of global analysis, however, often results in significantly slower convergence of the data fitting algorithm compared with unconstrained analysis. Convergence can be greatly accelerated by providing appropriate initial guesses. Realizing that image morphology often correlates with fluorophore distribution, a global fitting algorithm has been developed that assigns initial guesses throughout an image based on a segmentation analysis. This algorithm was tested on both simulated data sets and time-domain lifetime measurements. We have successfully measured fluorophore distribution in fibroblasts stained with Hoechst and calcein. The method further allows second harmonic generation from collagen and elastin autofluorescence to be differentiated in fluorescence lifetime imaging microscopy images of ex vivo human skin. On our experimental measurements, this algorithm increased convergence speed by over two orders of magnitude and achieved significantly better fits. Copyright 2004 Biophysical Society
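The structure of such a global fit, and why good initial guesses matter, can be sketched with a toy example in which one lifetime is shared across all pixels and each pixel has its own amplitude. The function below is an illustrative assumption, not the paper's algorithm; the lifetime guess `tau0` is simply passed in (in the paper it would come from the segmentation analysis), and the amplitude guesses come from the t = 0 counts:

```python
import numpy as np
from scipy.optimize import curve_fit

def global_lifetime_fit(t, decays, tau0):
    """Global single-exponential fit: one lifetime tau shared by all pixels,
    with an independent amplitude per pixel.

    t      : time axis, shape (n_t,)
    decays : per-pixel decay curves, shape (n_pixels, n_t)
    tau0   : initial lifetime guess (e.g. from a segmentation-based prefit)
    Returns the fitted shared lifetime and the per-pixel amplitudes.
    """
    n_pix, _ = decays.shape

    def model(_x, tau, *amps):
        # all pixels share tau; amplitudes are free per pixel
        fit = np.asarray(amps)[:, None] * np.exp(-t[None, :] / tau)
        return fit.ravel()

    p0 = [tau0, *decays[:, 0]]      # amplitude guesses from the t = 0 counts
    popt, _ = curve_fit(model, np.tile(t, n_pix), decays.ravel(), p0=p0)
    return popt[0], np.asarray(popt[1:])
```

Because every pixel's residual depends on the shared lifetime, the parameters are strongly coupled, which is why a poor `tau0` slows convergence far more than in independent per-pixel fits.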
Luque, Niceto R.; Garrido, Jesús A.; Carrillo, Richard R.; D'Angelo, Egidio; Ros, Eduardo
2014-01-01
The cerebellum is known to play a critical role in learning relevant patterns of activity for adaptive motor control, but the underlying network mechanisms are only partly understood. The classical long-term synaptic plasticity between parallel fibers (PFs) and Purkinje cells (PCs), which is driven by the inferior olive (IO), can only account for limited aspects of learning. Recently, the role of additional forms of plasticity in the granular layer, molecular layer and deep cerebellar nuclei (DCN) has been considered. In particular, learning at DCN synapses allows for generalization, but convergence to a stable state requires hundreds of repetitions. In this paper we have explored the putative role of the IO-DCN connection by endowing it with adaptable weights and exploring its implications in a closed-loop robotic manipulation task. Our results show that IO-DCN plasticity accelerates convergence of learning by up to two orders of magnitude without conflicting with the generalization properties conferred by DCN plasticity. Thus, this model suggests that multiple distributed learning mechanisms provide a key for explaining the complex properties of procedural learning and open up new experimental questions for synaptic plasticity in the cerebellar network. PMID:25177290
Implementation guidance for accelerated bridge construction in South Dakota
DOT National Transportation Integrated Search
2017-09-01
A study was conducted to investigate implementation of accelerated bridge construction (ABC) in South Dakota. Accelerated bridge construction is defined as construction practices that employ innovative techniques to reduce on-site construction time a...
Xu, Zheng; Wang, Sheng; Li, Yeqing; Zhu, Feiyun; Huang, Junzhou
2018-02-08
The most recent history of parallel Magnetic Resonance Imaging (pMRI) has in large part been devoted to finding ways to reduce acquisition time. Although the joint total variation (JTV) regularized model has been demonstrated to be a powerful tool for increasing sampling speed in pMRI, the major bottleneck is the inefficiency of the optimization method. All present state-of-the-art optimization methods for the JTV model reach only a sublinear convergence rate; in this paper, we improve on them by proposing a linearly convergent optimization method for the JTV model. The proposed method is based on the Iterative Reweighted Least Squares (IRLS) algorithm. Because of the complexity of the tangled JTV objective, we design a novel preconditioner to further accelerate the proposed method. Extensive experiments demonstrate the superior performance of the proposed algorithm for pMRI regarding both accuracy and efficiency compared with state-of-the-art methods.
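The IRLS idea with a preconditioned inner solve can be sketched on a scalar 1-D total-variation problem, a toy stand-in for the JTV objective; the Jacobi preconditioner below is only a simple placeholder for the paper's specialized preconditioner, and all parameter values are illustrative:

```python
import numpy as np
from scipy.sparse.linalg import cg

def irls_tv1d(A, b, lam, n_irls=30, eps=1e-4):
    """IRLS sketch for min_x 0.5*||Ax - b||^2 + lam*||Dx||_1 (1-D TV).

    Each outer iteration replaces ||Dx||_1 by a weighted quadratic and
    solves the resulting normal equations with preconditioned CG.
    """
    m, n = A.shape
    D = np.diff(np.eye(n), axis=0)            # 1-D finite-difference operator
    AtA, Atb = A.T @ A, A.T @ b
    x = np.zeros(n)
    for _ in range(n_irls):
        w = 1.0 / (np.abs(D @ x) + eps)       # reweighting of |Dx|
        H = AtA + lam * D.T @ (w[:, None] * D)
        M = np.diag(1.0 / np.diag(H))         # Jacobi (diagonal) preconditioner
        x, _ = cg(H, Atb, x0=x, M=M, maxiter=100)
    return x
```

The warm start `x0=x` and the preconditioner both matter here: as the weights grow near flat regions, the inner systems become increasingly ill-conditioned, which is the issue a problem-tailored preconditioner addresses.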
Efficient relaxed-Jacobi smoothers for multigrid on parallel computers
NASA Astrophysics Data System (ADS)
Yang, Xiang; Mittal, Rajat
2017-03-01
In this Technical Note, we present a family of Jacobi-based multigrid smoothers suitable for the solution of discretized elliptic equations. These smoothers are based on the idea of scheduled-relaxation Jacobi proposed recently by Yang & Mittal (2014) [18] and employ two or three successive relaxed Jacobi iterations with relaxation factors derived so as to maximize the smoothing property of these iterations. The performance of these new smoothers, measured in terms of convergence acceleration and computational workload, is assessed for multi-domain implementations typical of parallelized solvers and compared to the lexicographic point Gauss-Seidel smoother. The tests include the geometric multigrid method on structured grids as well as the algebraic multigrid method on unstructured grids. They demonstrate that, unlike Gauss-Seidel, the convergence of these Jacobi-based smoothers is unaffected by domain decomposition, and furthermore, that they outperform lexicographic Gauss-Seidel by factors that increase with domain partition count.
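The basic building block of such smoothers, a pair of successive relaxed Jacobi sweeps, can be sketched for a 1-D Poisson problem; the relaxation factors used below are illustrative, not the optimized values derived in the paper:

```python
import numpy as np

def relaxed_jacobi_sweeps(u, f, h, omegas):
    """Successive relaxed Jacobi sweeps for the 1-D Poisson problem
    -u'' = f on a uniform grid of spacing h, with Dirichlet boundaries.

    One sweep is performed per relaxation factor in `omegas`; pairing an
    under-relaxed and an over-relaxed sweep is the scheduled-relaxation idea.
    """
    for omega in omegas:
        u_jac = u.copy()
        # plain Jacobi update at interior points
        u_jac[1:-1] = (u[:-2] + u[2:] + h * h * f[1:-1]) / 2.0
        # relaxed combination of old iterate and Jacobi update
        u = (1.0 - omega) * u + omega * u_jac
    return u
```

Each sweep uses only the previous iterate, so, unlike Gauss-Seidel, the update has no sequential dependence across grid points or subdomains, which is what makes the smoother insensitive to domain decomposition.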
Improved Convergence and Robustness of USM3D Solutions on Mixed-Element Grids
NASA Technical Reports Server (NTRS)
Pandya, Mohagna J.; Diskin, Boris; Thomas, James L.; Frink, Neal T.
2016-01-01
Several improvements to the mixed-element USM3D discretization and defect-correction schemes have been made. A new methodology for nonlinear iterations, called the Hierarchical Adaptive Nonlinear Iteration Method, has been developed and implemented. The Hierarchical Adaptive Nonlinear Iteration Method provides two additional hierarchies around a simple and approximate preconditioner of USM3D. The hierarchies are a matrix-free linear solver for the exact linearization of Reynolds-averaged Navier-Stokes equations and a nonlinear control of the solution update. Two variants of the Hierarchical Adaptive Nonlinear Iteration Method are assessed on four benchmark cases, namely, a zero-pressure-gradient flat plate, a bump-in-channel configuration, the NACA 0012 airfoil, and a NASA Common Research Model configuration. The new methodology provides a convergence acceleration factor of 1.4 to 13 over the preconditioner-alone method representing the baseline solver technology.
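The matrix-free linear-solver layer of such a hierarchy can be illustrated with a generic Jacobian-free Newton-Krylov step, in which J·v products are formed by finite differences of the residual and a simpler operator may serve as preconditioner. This is a sketch of the general technique, not USM3D's implementation, and the finite-difference scaling is a common but arbitrary choice:

```python
import numpy as np
from scipy.sparse.linalg import gmres, LinearOperator

def jfnk_step(F, u, M=None, fd_eps=1e-7):
    """One Jacobian-free Newton step: solve J(u) du = -F(u) with GMRES.

    F : residual function, F(u) -> ndarray of the same shape as u
    M : optional preconditioner (e.g. an approximate linearization)
    The Jacobian is never formed; J*v is approximated by a directional
    finite difference of F.
    """
    n = u.size
    Fu = F(u)

    def Jv(v):
        nv = np.linalg.norm(v)
        if nv == 0.0:
            return np.zeros(n)
        eps = fd_eps / nv                      # scale step to the vector size
        return (F(u + eps * v) - Fu) / eps     # directional derivative of F

    J = LinearOperator((n, n), matvec=Jv)
    du, _ = gmres(J, -Fu, M=M, maxiter=50)
    return u + du
```

Wrapping such a step in an outer nonlinear control loop (damping or adapting the update when the residual fails to drop) gives the general shape of hierarchy the abstract describes.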