Anderson Acceleration for Fixed-Point Iterations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Walker, Homer F.
The purpose of this grant was to support research on acceleration methods for fixed-point iterations, with applications to computational frameworks and simulation problems that are of interest to DOE.
Zhao, Jing; Zong, Haili
2018-01-01
In this paper, we propose parallel and cyclic iterative algorithms for solving the multiple-set split equality common fixed-point problem of firmly quasi-nonexpansive operators. We also combine the cyclic and parallel iterative processes to propose two mixed iterative algorithms. None of our algorithms requires prior information about the operator norms. Under mild assumptions, we prove weak convergence of the proposed iterative sequences in Hilbert spaces. As applications, we obtain several iterative algorithms for solving the multiple-set split equality problem.
NASA Astrophysics Data System (ADS)
Zeng, Lu-Chuan; Yao, Jen-Chih
2006-09-01
Recently, Agarwal, Cho, Li and Huang [R.P. Agarwal, Y.J. Cho, J. Li, N.J. Huang, Stability of iterative procedures with errors approximating common fixed points for a couple of quasi-contractive mappings in q-uniformly smooth Banach spaces, J. Math. Anal. Appl. 272 (2002) 435-447] introduced the new iterative procedures with errors for approximating the common fixed point of a couple of quasi-contractive mappings and showed the stability of these iterative procedures with errors in Banach spaces. In this paper, we introduce a new concept of a couple of q-contractive-like mappings (q>1) in a Banach space and apply these iterative procedures with errors for approximating the common fixed point of the couple of q-contractive-like mappings. The results established in this paper improve, extend and unify the corresponding ones of Agarwal, Cho, Li and Huang [R.P. Agarwal, Y.J. Cho, J. Li, N.J. Huang, Stability of iterative procedures with errors approximating common fixed points for a couple of quasi-contractive mappings in q-uniformly smooth Banach spaces, J. Math. Anal. Appl. 272 (2002) 435-447], Chidume [C.E. Chidume, Approximation of fixed points of quasi-contractive mappings in Lp spaces, Indian J. Pure Appl. Math. 22 (1991) 273-386], Chidume and Osilike [C.E. Chidume, M.O. Osilike, Fixed points iterations for quasi-contractive maps in uniformly smooth Banach spaces, Bull. Korean Math. Soc. 30 (1993) 201-212], Liu [Q.H. Liu, On Naimpally and Singh's open questions, J. Math. Anal. Appl. 124 (1987) 157-164; Q.H. Liu, A convergence theorem of the sequence of Ishikawa iterates for quasi-contractive mappings, J. Math. Anal. Appl. 146 (1990) 301-305], Osilike [M.O. Osilike, A stable iteration procedure for quasi-contractive maps, Indian J. Pure Appl. Math. 27 (1996) 25-34; M.O. Osilike, Stability of the Ishikawa iteration method for quasi-contractive maps, Indian J. Pure Appl. Math. 28 (1997) 1251-1265] and many others in the literature.
NASA Astrophysics Data System (ADS)
Qiu, Mo; Yu, Simin; Wen, Yuqiong; Lü, Jinhu; He, Jianbin; Lin, Zhuosheng
In this paper, a novel design methodology and its FPGA hardware implementation for a universal chaotic signal generator are proposed via a Verilog HDL fixed-point algorithm and state-machine control. According to the continuous-time or discrete-time chaotic equations, a Verilog HDL fixed-point algorithm and its corresponding digital system are first designed. On the FPGA hardware platform, each operation step of the Verilog HDL fixed-point algorithm is then controlled by a state machine. The method is general in that any given chaotic equation can be decomposed into four basic operation procedures, i.e. nonlinear function calculation, iterative sequence operation, right-shifting and ceiling of iterative values, and output of the chaotic iterative sequences, each of which corresponds to a single state under state-machine control. Compared with a Verilog HDL floating-point algorithm, the fixed-point algorithm saves FPGA hardware resources and improves operation efficiency. FPGA-based hardware experimental results validate the feasibility and reliability of the proposed approach.
On iterative algorithms for quantitative photoacoustic tomography in the radiative transport regime
NASA Astrophysics Data System (ADS)
Wang, Chao; Zhou, Tie
2017-11-01
In this paper, we present a numerical reconstruction method for quantitative photoacoustic tomography (QPAT) based on the radiative transfer equation (RTE), which models light propagation more accurately than the diffusion approximation (DA). We investigate the reconstruction of the absorption and scattering coefficients of biological tissues. An improved fixed-point iterative method to retrieve the absorption coefficient, given the scattering coefficient, is proposed for its low computational cost, and the convergence of this method is proved. The Barzilai-Borwein (BB) method is applied to retrieve the two coefficients simultaneously. Since the reconstruction of optical coefficients involves the solutions of the original and adjoint RTEs in an optimization framework, an efficient solver with high accuracy is developed from Gao and Zhao (2009 Transp. Theory Stat. Phys. 38 149-92). Simulation experiments illustrate that the improved fixed-point iterative method and the BB method are competitive methods for QPAT in the relevant cases.
Li, Xia; Guo, Meifang; Su, Yongfu
2016-01-01
In this article, a new multidirectional monotone hybrid iteration algorithm for finding a solution to the split common fixed-point problem is presented for two countable families of quasi-nonexpansive mappings in Banach spaces. Strong convergence theorems are proved. As an application, the split common null point problem for maximal monotone operators in Banach spaces is considered, and strong convergence theorems for finding a solution of this problem are derived. The iteration algorithm can accelerate the convergence speed of the iterative sequence. The results of this paper improve and extend the recent results of Takahashi and Yao (Fixed Point Theory Appl 2015:87, 2015) and many others.
NASA Astrophysics Data System (ADS)
Storti, Mario A.; Nigro, Norberto M.; Paz, Rodrigo R.; Dalcín, Lisandro D.
2009-03-01
In this paper some results on the convergence of the Gauss-Seidel iteration when solving fluid/structure interaction problems with strong coupling via fixed-point iteration are presented. The flow-induced vibration of a flat plate aligned with the flow direction at supersonic Mach number is studied. The precision of different predictor schemes and the influence of the partitioned strong coupling on stability are discussed.
An hp symplectic pseudospectral method for nonlinear optimal control
NASA Astrophysics Data System (ADS)
Peng, Haijun; Wang, Xinwei; Li, Mingwu; Chen, Biaosong
2017-01-01
An adaptive symplectic pseudospectral method based on the dual variational principle is proposed and successfully applied to solving nonlinear optimal control problems in this paper. The proposed method satisfies the first-order necessary conditions of continuous optimal control problems, and the symplectic property of the original continuous Hamiltonian system is preserved. The original optimal control problem is transformed into a set of nonlinear equations which can be solved easily by Newton-Raphson iterations, and the Jacobian matrix is found to be sparse and symmetric. The proposed method exhibits, on one hand, exponential convergence as the number of collocation points increases with a fixed number of sub-intervals and, on the other hand, linear convergence as the number of sub-intervals increases with a fixed number of collocation points. Furthermore, combined with an hp method based on the residual error of the dynamic constraints, the proposed method can achieve given precisions within a few iterations. Five examples highlight the high precision and high computational efficiency of the proposed method.
NASA Astrophysics Data System (ADS)
Cartarius, Holger; Musslimani, Ziad H.; Schwarz, Lukas; Wunner, Günter
2018-03-01
The spectral renormalization method was introduced in 2005 as an effective way to compute ground states of nonlinear Schrödinger and Gross-Pitaevskii type equations. In this paper, we introduce an orthogonal spectral renormalization (OSR) method to compute ground and excited states (and their respective eigenvalues) of linear and nonlinear eigenvalue problems. The implementation of the algorithm follows four simple steps: (i) reformulate the underlying eigenvalue problem as a fixed-point equation, (ii) introduce a renormalization factor that controls the convergence properties of the iteration, (iii) perform a Gram-Schmidt orthogonalization process to prevent the iteration from converging to an unwanted mode, and (iv) compute the solution sought using a fixed-point iteration. The advantages of the OSR scheme over other known methods (such as Newton's method and self-consistency) are that (i) it allows the flexibility to choose a large variety of initial guesses without diverging, (ii) it is easy to implement, especially in higher dimensions, and (iii) it can easily handle problems with complex and random potentials. The OSR method is implemented on benchmark Hermitian linear and nonlinear eigenvalue problems as well as linear and nonlinear non-Hermitian PT-symmetric models.
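A toy rendering of the four steps for the special case of a Hermitian matrix eigenproblem (an assumed linear reduction, not the authors' implementation): within the subspace orthogonal to the modes already found, the iteration below is a renormalized fixed-point (power) map and converges to the dominant remaining mode.

    import numpy as np

    def osr_sketch(A, n_modes, max_iter=5000, tol=1e-10, seed=0):
        # Illustrative OSR-style solver for a Hermitian matrix A.
        rng = np.random.default_rng(seed)
        modes, lams = [], []
        for _ in range(n_modes):
            v = rng.standard_normal(A.shape[0])    # (i) flexible initial guess
            v /= np.linalg.norm(v)
            for _ in range(max_iter):
                w = A @ v                          # (i) fixed-point map
                for u in modes:                    # (iii) Gram-Schmidt deflation
                    w -= (u @ w) * u
                w /= np.linalg.norm(w)             # (ii) renormalization
                s = 1.0 if w @ v >= 0 else -1.0    # remove sign indeterminacy
                if np.linalg.norm(w - s * v) < tol:
                    break                          # (iv) converged fixed point
                v = w
            modes.append(w)
            lams.append(w @ A @ w)                 # Rayleigh-quotient eigenvalue
        return np.array(lams), np.array(modes)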
NASA Astrophysics Data System (ADS)
Aviat, Félix; Lagardère, Louis; Piquemal, Jean-Philip
2017-10-01
In a recent paper [F. Aviat et al., J. Chem. Theory Comput. 13, 180-190 (2017)], we proposed the Truncated Conjugate Gradient (TCG) approach to compute the polarization energy and forces in polarizable molecular simulations. The method consists in truncating the conjugate gradient algorithm at a fixed predetermined order, leading to a fixed computational cost, and can thus be considered "non-iterative." This makes it possible to derive analytical forces, avoiding the usual energy conservation (i.e., drift) issues occurring with iterative approaches. A key point concerns the evaluation of the analytical gradients, which is more complex than with a usual solver. In this paper, after reviewing the present state of the art of polarization solvers, we detail a viable strategy for the efficient implementation of the TCG calculation. The complete cost of the approach is then measured as it is tested using a multi-time-step scheme and compared to timings using usual iterative approaches. We show that the TCG methods are more efficient than traditional techniques, making them a method of choice for future long molecular dynamics simulations using polarizable force fields where energy conservation matters. We detail the various steps required for the implementation of the complete method by software developers.
Asgharzadeh, Hafez; Borazjani, Iman
2017-02-15
The explicit and semi-implicit schemes in flow simulations involving complex geometries and moving boundaries suffer from time-step size restriction and low convergence rates. Implicit schemes can be used to overcome these restrictions, but implementing them to solve the Navier-Stokes equations is not straightforward due to their non-linearity. Among the implicit schemes for nonlinear equations, Newton-based techniques are preferred over fixed-point techniques because of their high convergence rate, but each Newton iteration is more expensive than a fixed-point iteration. Krylov subspace methods are among the most advanced iterative methods that can be combined with Newton methods, i.e., Newton-Krylov Methods (NKMs), to solve non-linear systems of equations. The success of NKMs depends greatly on the scheme for forming the Jacobian, e.g., automatic differentiation is very expensive, and matrix-free methods without a preconditioner slow down as the mesh is refined. A novel, computationally inexpensive analytical Jacobian for NKM is developed to solve the unsteady incompressible Navier-Stokes momentum equations on staggered overset-curvilinear grids with immersed boundaries. Moreover, the analytical Jacobian is used to form a preconditioner for the matrix-free method in order to improve its performance. The NKM with the analytical Jacobian was validated and verified against the Taylor-Green vortex, inline oscillations of a cylinder in a fluid initially at rest, and pulsatile flow in a 90 degree bend. The capability of the method in handling complex geometries with multiple overset grids and immersed boundaries is shown by simulating an intracranial aneurysm. It was shown that the NKM with an analytical Jacobian is 1.17 to 14.77 times faster than the fixed-point Runge-Kutta method, and 1.74 to 152.3 times (excluding an intensively stretched grid) faster than automatic differentiation, depending on the grid (size) and the flow problem. In addition, it was shown that using only the diagonal of the Jacobian further improves the performance by 42-74% compared to the full Jacobian. The NKM with an analytical Jacobian showed better performance than the fixed-point Runge-Kutta because it converged with higher time steps and in approximately 30% fewer iterations, even when the grid was stretched and the Reynolds number was increased. In fact, stretching the grid decreased the performance of all methods, but the fixed-point Runge-Kutta performance decreased 4.57 and 2.26 times more than the NKM with a diagonal and full Jacobian, respectively, when the stretching factor was increased. The NKM with a diagonal analytical Jacobian and the matrix-free method with an analytical preconditioner are the fastest methods, and the superiority of one over the other depends on the flow problem. Furthermore, the implemented methods are fully parallelized, with parallel efficiency of 80-90% on the problems tested. The NKM with the analytical Jacobian can guide the building of preconditioners for other techniques to improve their performance in the future.
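As an illustration of the Jacobian-free Newton-Krylov machinery the abstract refers to (not the authors' curvilinear/immersed-boundary solver), a sketch using SciPy's matrix-free newton_krylov on a toy steady one-dimensional convection-diffusion residual:

    import numpy as np
    from scipy.optimize import newton_krylov

    # Toy steady problem  -u'' + u u' = sin(pi x),  u(0) = u(1) = 0,
    # standing in for a (much larger) discretized momentum equation.
    n = 129
    h = 1.0 / (n - 1)
    x = np.linspace(0.0, 1.0, n)
    rhs = np.sin(np.pi * x)

    def residual(u):
        r = np.empty_like(u)
        r[0], r[-1] = u[0], u[-1]                        # Dirichlet boundaries
        uxx = (u[2:] - 2.0 * u[1:-1] + u[:-2]) / h**2    # diffusion term
        ux = (u[2:] - u[:-2]) / (2.0 * h)                # convection term
        r[1:-1] = -uxx + u[1:-1] * ux - rhs[1:-1]
        return r

    # Jacobian-free: the Krylov solver only needs J @ v products, which
    # SciPy approximates with finite differences of `residual`.
    u = newton_krylov(residual, np.zeros(n), f_tol=1e-10, method="lgmres")
    print(np.max(np.abs(residual(u))))                   # ~1e-11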
Fixed point theorems and dissipative processes
NASA Technical Reports Server (NTRS)
Hale, J. K.; Lopes, O.
1972-01-01
Deficiencies are discussed in the theories that characterize the maximal compact invariant set of T as asymptotically stable and that guarantee some iterate of T has a fixed point. It is shown that this fixed-point condition is always satisfied for condensing and local dissipative T. Applications are given to a class of neutral functional differential equations.
NASA Astrophysics Data System (ADS)
Gilchrist, S. A.; Braun, D. C.; Barnes, G.
2016-12-01
Magnetohydrostatic models of the solar atmosphere are often based on idealized analytic solutions because the underlying equations are too difficult to solve in full generality. Numerical approaches, too, are often limited in scope and have tended to focus on the two-dimensional problem. In this article we develop a numerical method for solving the nonlinear magnetohydrostatic equations in three dimensions. Our method is a fixed-point iteration scheme that extends the method of Grad and Rubin ( Proc. 2nd Int. Conf. on Peaceful Uses of Atomic Energy 31, 190, 1958) to include a finite gravity force. We apply the method to a test case to demonstrate the method in general and our implementation in code in particular.
Precise Point Positioning with Partial Ambiguity Fixing.
Li, Pan; Zhang, Xiaohong
2015-06-10
Reliable and rapid ambiguity resolution (AR) is the key to fast precise point positioning (PPP). We propose a modified partial ambiguity resolution (PAR) method, in which an elevation and standard deviation criterion is first used to remove the low-precision ambiguity estimates for AR. Subsequently, the success rate and ratio test are used simultaneously in an iterative process to increase the possibility of finding a subset of decorrelated ambiguities that can be fixed with high confidence. One can apply the proposed PAR method to try to achieve an ambiguity-fixed solution when full ambiguity resolution (FAR) fails. We validate this method using data from 450 stations during DOY 021 to 027, 2012. Results demonstrate that the proposed PAR method can significantly shorten the time to first fix (TTFF) and increase the fixing rate. Compared with FAR, the average TTFF for PAR is reduced by 14.9% for static PPP and 15.1% for kinematic PPP. Moreover, using the PAR method, the average fixing rate can be increased from 83.5% to 98.2% for static PPP and from 80.1% to 95.2% for kinematic PPP, respectively. Kinematic PPP accuracy with PAR can also be significantly improved, compared to that with FAR, due to the higher fixing rate.
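A rough sketch of the subset-shrinking logic (heavily simplified assumptions: a naive rounding search stands in for a real integer least-squares/LAMBDA solver, and the bootstrapping-style success rate uses marginal rather than conditional standard deviations):

    import numpy as np
    from scipy.stats import norm

    def partial_ambiguity_fix(a_float, Q, sr_min=0.999, ratio_min=3.0):
        # Order ambiguities from most to least precise, then shrink the
        # candidate subset until both a success-rate check and a ratio
        # test pass; returns (indices, fixed integers) or None.
        order = np.argsort(np.diag(Q))
        for n in range(len(order), 0, -1):
            idx = order[:n]
            a = a_float[idx]
            sig = np.sqrt(np.diag(Q)[idx])
            sr = np.prod(2.0 * norm.cdf(0.5 / sig) - 1.0)  # success-rate proxy
            if sr < sr_min:
                continue
            z = np.round(a)                         # naive "best" integer vector
            d1 = ((a - z) / sig) ** 2
            alt = z + np.where(a >= z, 1.0, -1.0)   # second-nearest integers
            d2 = ((a - alt) / sig) ** 2
            r1 = d1.sum()
            r2 = r1 + np.min(d2 - d1)               # cheapest single change
            if r2 / max(r1, 1e-12) >= ratio_min:    # ratio test passed
                return idx, z.astype(int)
        return None                                 # fall back to float solution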
Augmenting the one-shot framework by additional constraints
Bosse, Torsten
2016-05-12
The (multistep) one-shot method for design optimization problems has been successfully implemented for various applications. To this end, a slowly convergent primal fixed-point iteration of the state equation is augmented by an adjoint iteration and a corresponding preconditioned design update. In this paper we present a modification of the method that allows for additional equality constraints besides the usual state equation. Finally, a retardation analysis and the local convergence of the method in terms of necessary and sufficient conditions are given, which depend on key characteristics of the underlying problem and the quality of the utilized preconditioner.
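To make the one-shot structure concrete, a toy scalar sketch (an illustrative example with an assumed objective and state map, not the paper's constrained formulation): each sweep performs one primal fixed-point step, one adjoint step, and one preconditioned design update, never fully converging the state equation before updating the design.

    # min J(y,u) = (y-1)^2 + 0.1*u^2  subject to the slowly convergent
    # state fixed point  y = G(y,u) = 0.5*y + u.
    y = ybar = u = 0.0
    tau = 0.2                                  # crude scalar "preconditioner"
    for _ in range(400):
        y = 0.5 * y + u                        # primal fixed-point step
        ybar = 0.5 * ybar + 2.0 * (y - 1.0)    # adjoint step: G_y*ybar + J_y
        grad = 0.2 * u + ybar                  # reduced gradient: J_u + G_u*ybar
        u -= tau * grad                        # preconditioned design update
    print(u, y, grad)                          # -> u ~ 0.488, y ~ 0.976, grad ~ 0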
Fixed-point Design of the Lattice-reduction-aided Iterative Detection and Decoding Receiver for Coded MIMO Systems
2011-01-01
This report addresses the fixed-point design of a lattice-reduction-aided iterative detection and decoding receiver for coded MIMO systems, considering error-correcting codes for reliability such as Turbo codes and Low Density Parity Check (LDPC) codes, the challenge of applying both MIMO and ECC in wireless systems, and the performance of coded lattice-reduction-aided detectors.
Computerized Business Calculus Using Calculators, Examples from Mathematics to Finance.
ERIC Educational Resources Information Center
Vest, Floyd
1991-01-01
After discussing the role of supercalculators within the business calculus curriculum, several examples are presented which allow the reader to examine the capabilities and codes of calculators specific to different major manufacturers. The topics examined include annuities, Newton's method, fixed point iteration, graphing, solvers, and…
NASA Astrophysics Data System (ADS)
Chen, Hao; Lv, Wen; Zhang, Tongtong
2018-05-01
We study preconditioned iterative methods for the linear system arising in the numerical discretization of a two-dimensional space-fractional diffusion equation. Our approach is based on a formulation of the discrete problem that is shown to be the sum of two Kronecker products. By making use of an alternating Kronecker product splitting iteration technique we establish a class of fixed-point iteration methods. Theoretical analysis shows that the new method converges to the unique solution of the linear system. Moreover, the optimal choice of the involved iteration parameters and the corresponding asymptotic convergence rate are computed exactly when the eigenvalues of the system matrix are all real. The basic iteration is accelerated by a Krylov subspace method like GMRES. The corresponding preconditioner is in a form of a Kronecker product structure and requires at each iteration the solution of a set of discrete one-dimensional fractional diffusion equations. We use structure preserving approximations to the discrete one-dimensional fractional diffusion operators in the action of the preconditioning matrix. Numerical examples are presented to illustrate the effectiveness of this approach.
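A dense-matrix sketch of an alternating-splitting fixed-point iteration for the assumed two-Kronecker form A = I⊗T1 + T2⊗I (the shift alpha and the direct solves are illustrative; an efficient version would exploit the Kronecker structure rather than forming A, and convergence holds, e.g., for symmetric positive definite T1, T2 with alpha > 0):

    import numpy as np

    def alternating_kronecker_splitting(T1, T2, b, alpha, tol=1e-10, max_iter=1000):
        # Solve (kron(I,T1) + kron(T2,I)) x = b by alternating between the
        # two shifted Kronecker terms, ADI/HSS-style.
        n1, n2 = T1.shape[0], T2.shape[0]
        N = n1 * n2
        H = np.kron(np.eye(n2), T1)             # first Kronecker term
        V = np.kron(T2, np.eye(n1))             # second Kronecker term
        M1 = alpha * np.eye(N) + H
        M2 = alpha * np.eye(N) + V
        x = np.zeros(N)
        for k in range(max_iter):
            xh = np.linalg.solve(M1, b + (alpha * np.eye(N) - V) @ x)
            x = np.linalg.solve(M2, b + (alpha * np.eye(N) - H) @ xh)
            if np.linalg.norm((H + V) @ x - b) <= tol * np.linalg.norm(b):
                return x, k
        return x, max_iter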
A new approach to blind deconvolution of astronomical images
NASA Astrophysics Data System (ADS)
Vorontsov, S. V.; Jefferies, S. M.
2017-05-01
We readdress the strategy of finding approximate regularized solutions to the blind deconvolution problem, when both the object and the point-spread function (PSF) have finite support. Our approach consists in addressing fixed points of an iteration in which both the object x and the PSF y are approximated in an alternating manner, discarding the previous approximation for x when updating x (similarly for y), and considering the resultant fixed points as candidates for a sensible solution. Alternating approximations are performed by truncated iterative least-squares descents. The number of descents in the object- and in the PSF-space play a role of two regularization parameters. Selection of appropriate fixed points (which may not be unique) is performed by relaxing the regularization gradually, using the previous fixed point as an initial guess for finding the next one, which brings an approximation of better spatial resolution. We report the results of artificial experiments with noise-free data, targeted at examining the potential capability of the technique to deconvolve images of high complexity. We also show the results obtained with two sets of satellite images acquired using ground-based telescopes with and without adaptive optics compensation. The new approach brings much better results when compared with an alternating minimization technique based on positivity-constrained conjugate gradients, where the iterations stagnate when addressing data of high complexity. In the alternating-approximation step, we examine the performance of three different non-blind iterative deconvolution algorithms. The best results are provided by the non-negativity-constrained successive over-relaxation technique (+SOR) supplemented with an adaptive scheduling of the relaxation parameter. Results of comparable quality are obtained with steepest descents modified by imposing the non-negativity constraint, at the expense of higher numerical costs. The Richardson-Lucy (or expectation-maximization) algorithm fails to locate stable fixed points in our experiments, due apparently to inappropriate regularization properties.
Nikazad, T; Davidi, R; Herman, G. T.
2013-01-01
We study the convergence of a class of accelerated perturbation-resilient block-iterative projection methods for solving systems of linear equations. We prove convergence to a fixed point of an operator even in the presence of summable perturbations of the iterates, irrespective of the consistency of the linear system. For a consistent system, the limit point is a solution of the system. In the inconsistent case, the symmetric version of our method converges to a weighted least squares solution. Perturbation resilience is utilized to approximate the minimum of a convex functional subject to the equations. A main contribution, as compared to previously published approaches to achieving similar aims, is a more than an order of magnitude speed-up, as demonstrated by applying the methods to problems of image reconstruction from projections. In addition, the accelerated algorithms are illustrated to be better, in a strict sense provided by the method of statistical hypothesis testing, than their unaccelerated versions for the task of detecting small tumors in the brain from X-ray CT projection data. PMID:23440911
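A bare-bones block-iterative projection sketch (a Cimmino-type averaged projection within each block, cycling over blocks; the accelerated, perturbation-resilient machinery analyzed in the paper is omitted). For a consistent system and relaxation in (0, 2), the iterates converge to a solution.

    import numpy as np

    def block_projection(A, b, n_blocks=4, relax=1.0, sweeps=200):
        # Cyclic block iteration: within each block, average the relaxed
        # projections of x onto the hyperplanes a_i . x = b_i.
        m, n = A.shape
        x = np.zeros(n)
        row_nsq = np.sum(A ** 2, axis=1)        # squared row norms
        for _ in range(sweeps):
            for idx in np.array_split(np.arange(m), n_blocks):
                r = b[idx] - A[idx] @ x
                x += (relax / len(idx)) * (A[idx].T @ (r / row_nsq[idx]))
        return x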
NASA Astrophysics Data System (ADS)
Woollands, Robyn M.; Read, Julie L.; Probe, Austin B.; Junkins, John L.
2017-12-01
We present a new method for solving the multiple revolution perturbed Lambert problem using the method of particular solutions and modified Chebyshev-Picard iteration. The method of particular solutions differs from the well-known Newton-shooting method in that integration of the state transition matrix (36 additional differential equations) is not required, and instead it makes use of a reference trajectory and a set of n particular solutions. Any numerical integrator can be used for solving two-point boundary problems with the method of particular solutions, however we show that using modified Chebyshev-Picard iteration affords an avenue for increased efficiency that is not available with other step-by-step integrators. We take advantage of the path approximation nature of modified Chebyshev-Picard iteration (nodes iteratively converge to fixed points in space) and utilize a variable fidelity force model for propagating the reference trajectory. Remarkably, we demonstrate that computing the particular solutions with only low fidelity function evaluations greatly increases the efficiency of the algorithm while maintaining machine precision accuracy. Our study reveals that solving the perturbed Lambert's problem using the method of particular solutions with modified Chebyshev-Picard iteration is about an order of magnitude faster compared with the classical shooting method and a tenth-twelfth order Runge-Kutta integrator. It is well known that the solution to Lambert's problem over multiple revolutions is not unique and to ensure that all possible solutions are considered we make use of a reliable preexisting Keplerian Lambert solver to warm start our perturbed algorithm.
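The fixed-point core of the approach, stripped of the Chebyshev machinery, is plain Picard iteration over a fixed node set; a sketch using uniform nodes and trapezoidal quadrature (MCPI proper uses Chebyshev nodes and orthogonal-polynomial integration):

    import numpy as np
    from scipy.integrate import cumulative_trapezoid

    def picard(f, t, x0, n_iter=40):
        # Iterate x(t) <- x0 + int_0^t f(s, x(s)) ds on fixed nodes t;
        # all nodes update at once, so the whole path converges together.
        x = np.full_like(t, x0)
        for _ in range(n_iter):
            x = x0 + cumulative_trapezoid(f(t, x), t, initial=0.0)
        return x

    t = np.linspace(0.0, 2.0, 201)
    x = picard(lambda t, x: -x, t, 1.0)        # dx/dt = -x, x(0) = 1
    print(np.max(np.abs(x - np.exp(-t))))      # ~1e-5 (quadrature-limited)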
A Vision-Based Self-Calibration Method for Robotic Visual Inspection Systems
Yin, Shibin; Ren, Yongjie; Zhu, Jigui; Yang, Shourui; Ye, Shenghua
2013-01-01
A vision-based robot self-calibration method is proposed in this paper to evaluate the kinematic parameter errors of a robot using a visual sensor mounted on its end-effector. This approach can be performed in the industrial field without external, expensive apparatus or an elaborate setup. A robot Tool Center Point (TCP) is defined in the structural model of a line-structured laser sensor and aligned to a reference point fixed in the robot workspace. A mathematical model is established to relate the misalignment errors to the kinematic parameter errors and TCP position errors. Based on the fixed-point constraints, the kinematic parameter errors and TCP position errors are identified with an iterative algorithm. Compared to conventional methods, the proposed method eliminates the need for robot base-frame and hand-eye calibrations, shortens the error propagation chain, and makes the calibration process more accurate and convenient. A validation experiment is performed on an ABB IRB2400 robot. An optimal configuration for the number and distribution of fixed points in the robot workspace is obtained from the experimental results. Comparative experiments reveal a significant improvement in the measuring accuracy of the robotic visual inspection system. PMID:24300597
On the assessment of spatial resolution of PET systems with iterative image reconstruction
NASA Astrophysics Data System (ADS)
Gong, Kuang; Cherry, Simon R.; Qi, Jinyi
2016-03-01
Spatial resolution is an important metric for performance characterization in PET systems. Measuring spatial resolution is straightforward with a linear reconstruction algorithm, such as filtered backprojection, and can be performed by reconstructing a point source scan and calculating the full-width-at-half-maximum (FWHM) along the principal directions. With the widespread adoption of iterative reconstruction methods, it is desirable to quantify the spatial resolution using an iterative reconstruction algorithm. However, the task can be difficult because the reconstruction algorithms are nonlinear and the non-negativity constraint can artificially enhance the apparent spatial resolution if a point source image is reconstructed without any background. Thus, it was recommended that a background should be added to the point source data before reconstruction for resolution measurement. However, there has been no detailed study on the effect of the point source contrast on the measured spatial resolution. Here we use point source scans from a preclinical PET scanner to investigate the relationship between measured spatial resolution and the point source contrast. We also evaluate whether the reconstruction of an isolated point source is predictive of the ability of the system to resolve two adjacent point sources. Our results indicate that when the point source contrast is below a certain threshold, the measured FWHM remains stable. Once the contrast is above the threshold, the measured FWHM monotonically decreases with increasing point source contrast. In addition, the measured FWHM also monotonically decreases with iteration number for maximum likelihood estimate. Therefore, when measuring system resolution with an iterative reconstruction algorithm, we recommend using a low-contrast point source and a fixed number of iterations.
Virtual fringe projection system with nonparallel illumination based on iteration
NASA Astrophysics Data System (ADS)
Zhou, Duo; Wang, Zhangying; Gao, Nan; Zhang, Zonghua; Jiang, Xiangqian
2017-06-01
Fringe projection profilometry has been widely applied in many fields. To set up an ideal measuring system, a virtual fringe projection technique has been studied to assist in the design of hardware configurations. However, existing virtual fringe projection systems use parallel illumination and have a fixed optical framework. This paper presents a virtual fringe projection system with nonparallel illumination. Using an iterative method to calculate intersection points between rays and reference planes or object surfaces, the proposed system can simulate projected fringe patterns and captured images. A new explicit calibration method has been presented to validate the precision of the system. Simulated results indicate that the proposed iterative method outperforms previous systems. Our virtual system can be applied to error analysis, algorithm optimization, and help operators to find ideal system parameter settings for actual measurements.
Analytic approximation for random muffin-tin alloys
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mills, R.; Gray, L.J.; Kaplan, T.
1983-03-15
The methods introduced in a previous paper under the name of "traveling-cluster approximation" (TCA) are applied, in a multiple-scattering approach, to the case of a random muffin-tin substitutional alloy. This permits the iterative part of a self-consistent calculation to be carried out entirely in terms of on-the-energy-shell scattering amplitudes. Off-shell components of the mean resolvent, needed for the calculation of spectral functions, are obtained by standard methods involving single-site scattering wave functions. The single-site TCA is just the usual coherent-potential approximation, expressed in a form particularly suited for iteration. A fixed-point theorem is proved for the general t-matrix TCA, ensuring convergence upon iteration to a unique self-consistent solution with the physically essential Herglotz properties.
Aviat, Félix; Levitt, Antoine; Stamm, Benjamin; Maday, Yvon; Ren, Pengyu; Ponder, Jay W; Lagardère, Louis; Piquemal, Jean-Philip
2017-01-10
We introduce a new class of methods, denoted Truncated Conjugate Gradient (TCG), to solve the many-body polarization energy and its associated forces in molecular simulations (i.e. molecular dynamics (MD) and Monte Carlo). The method consists in a fixed number of Conjugate Gradient (CG) iterations. TCG approaches provide a scalable solution to the polarization problem at a user-chosen cost and a corresponding optimal accuracy. The optimality of the CG method guarantees that the number of required matrix-vector products is reduced to a minimum compared to other iterative methods. This family of methods is non-empirical, fully adaptive, and provides analytical gradients, therefore avoiding any energy drift in MD as compared to popular iterative solvers. Besides speed, one great advantage of this class of approximate methods is that their accuracy is systematically improvable: as the CG method is a Krylov subspace method, the associated error is monotonically reduced at each iteration. On top of that, two improvements can be made at virtually no cost: (i) preconditioners can be employed, leading to the Truncated Preconditioned Conjugate Gradient (TPCG); (ii) since the residual of the final CG step is available, one additional Picard fixed-point iteration ("peek"), equivalent to one step of Jacobi Over Relaxation (JOR) with relaxation parameter ω, can be made at almost no cost. This method is denoted TCG-n(ω). Black-box adaptive methods to find good choices of ω are provided and discussed. Results show that TPCG-3(ω) is converged to high accuracy (a few kcal/mol) for various types of systems, including proteins and highly charged systems, at the fixed cost of four matrix-vector products: three CG iterations plus the initial CG descent direction. Alternatively, T(P)CG-2(ω) provides robust results at a reduced cost (three matrix-vector products) and offers new perspectives for long polarizable MD as a production algorithm. The T(P)CG-1(ω) level provides less accurate solutions for inhomogeneous systems, but its applicability to well-conditioned problems such as water is remarkable, with only two matrix-vector product evaluations.
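A skeletal TCG-n(ω) along the lines described above, for a dense symmetric positive definite polarization matrix T and field vector E (assumed inputs; the analytical-forces machinery is omitted):

    import numpy as np

    def tcg(T, E, n_iter=3, omega=None):
        # Exactly n_iter CG steps on T mu = E (fixed cost, no convergence
        # test), optionally followed by one JOR/Picard "peek" step with
        # relaxation parameter omega.
        mu = np.zeros_like(E)
        r = E.copy()                       # residual for mu = 0
        p = r.copy()
        rs = r @ r
        for _ in range(n_iter):
            Tp = T @ p
            a = rs / (p @ Tp)
            mu += a * p
            r -= a * Tp
            rs_new = r @ r
            p = r + (rs_new / rs) * p
            rs = rs_new
        if omega is not None:
            mu += omega * r / np.diag(T)   # the "peek": one JOR step
        return mu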
Broadband excitation in nuclear magnetic resonance
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tycko, Robert
1984-10-01
Theoretical methods for designing sequences of radio frequency (rf) radiation pulses for broadband excitation of spin systems in nuclear magnetic resonance (NMR) are described. The sequences excite spins uniformly over large ranges of resonant frequencies arising from static magnetic field inhomogeneity, chemical shift differences, or spin couplings, or over large ranges of rf field amplitudes. Specific sequences for creating a population inversion or transverse magnetization are derived and demonstrated experimentally in liquid and solid state NMR. One approach to broadband excitation is based on principles of coherent averaging theory. A general formalism for deriving pulse sequences is given, along with computational methods for specific cases. This approach leads to sequences that produce strictly constant transformations of a spin system. The importance of this feature in NMR applications is discussed. A second approach to broadband excitation makes use of iterative schemes, i.e. sets of operations that are applied repetitively to a given initial pulse sequence, generating a series of increasingly complex sequences with increasingly desirable properties. A general mathematical framework for analyzing iterative schemes is developed. An iterative scheme is treated as a function that acts on a space of operators corresponding to the transformations produced by all possible pulse sequences. The fixed points of the function and the stability of the fixed points are shown to determine the essential behavior of the scheme. Iterative schemes for broadband population inversion are treated in detail. Algebraic and numerical methods for performing the mathematical analysis are presented. Two additional topics are treated. The first is the construction of sequences for uniform excitation of double-quantum coherence and for uniform polarization transfer over a range of spin couplings. Double-quantum excitation sequences are demonstrated in a liquid crystal system. The second additional topic is the construction of iterative schemes for narrowband population inversion. The use of sequences that invert spin populations only over a narrow range of rf field amplitudes to spatially localize NMR signals in an rf field gradient is discussed.
Local Improvement Results for Anderson Acceleration with Inaccurate Function Evaluations
Toth, Alex; Ellis, J. Austin; Evans, Tom; ...
2017-10-26
Here, we analyze the convergence of Anderson acceleration when the fixed point map is corrupted with errors. We also consider uniformly bounded errors and stochastic errors with infinite tails. We prove local improvement results which describe the performance of the iteration up to the point where the accuracy of the function evaluation causes the iteration to stagnate. We illustrate the results with examples from neutronics.
Eigensolutions of nonviscously damped systems based on the fixed-point iteration
NASA Astrophysics Data System (ADS)
Lázaro, Mario
2018-03-01
In this paper, nonviscous, nonproportional, symmetric vibrating structures are considered. Nonviscously damped systems present dissipative forces that depend on the time history of the response via kernel hereditary functions. Solving the free-motion equation leads to a nonlinear eigenvalue problem involving the mass, stiffness and damping matrices, the latter dependent on frequency. Viscous damping can be considered as a particular case, with damping forces that are functions of the instantaneous velocities of the degrees of freedom. In this work, a new numerical procedure to compute eigensolutions is proposed. The method is based on the construction of certain recursive functions which, under an iterative scheme, allow eigenvalues and eigenvectors to be obtained simultaneously while avoiding the computation of eigensensitivities. Eigenvalues can then be read as fixed points of those functions. A deep analysis of the convergence is carried out, focusing especially on relating the convergence conditions and the error-decay rate to features of the damping model, such as nonproportionality and viscoelasticity. The method is validated using two 6-degree-of-freedom numerical examples involving both nonviscous and viscous damping and a continuous system with a local nonviscous damper. The convergence and the behavior of the sequences are in agreement with the results predicted by the theory.
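The flavor of such a fixed-point scheme on a single-degree-of-freedom example (an exponential, Biot-type kernel is assumed; the paper's recursive functions for the matrix case are more elaborate): freeze the frequency-dependent damping at the current eigenvalue estimate, solve the resulting quadratic, and iterate.

    import numpy as np

    m, k, c0, mu = 1.0, 100.0, 2.0, 50.0      # assumed SDOF data
    c = lambda s: c0 * mu / (mu + s)          # kernel-induced damping c(s)

    s = 1j * np.sqrt(k / m)                   # undamped eigenvalue as start
    for _ in range(100):
        roots = np.roots([m, c(s), k])        # quadratic with c frozen at s
        s_new = roots[np.argmax(roots.imag)]  # follow the upper-half-plane root
        if abs(s_new - s) < 1e-12:
            break                             # eigenvalue reached as fixed point
        s = s_new
    print(s, m * s**2 + c(s) * s + k)         # eigenvalue, residual ~ 0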
NASA Astrophysics Data System (ADS)
Mostafa, Mostafa E.
2005-10-01
The present study shows that reconstructing the reduced stress tensor (RST) from the measurable fault-slip data (FSD) and the immeasurable shear stress magnitudes (SSM) is a typical iteration problem. The result of direct inversion of FSD presented by Angelier [1990. Geophysical Journal International 103, 363-376] is considered as a starting point (zero-step iteration) where all SSM are assigned a constant value (λ = √3/2). By iteration, the SSM and RST update each other until they converge to fixed values. Angelier [1990. Geophysical Journal International 103, 363-376] designed the function upsilon (υ) and the two estimators, relative upsilon (RUP) and ANG, to express the divergence between the measured and calculated shear stresses. Plotting individual faults' RUP at successive iteration steps shows that they tend to zero (simulated data) or to fixed values (real data) at a rate depending on the orientation and homogeneity of the data. FSD of related origin tend to aggregate in clusters. Plots of the estimator ANG versus RUP show that, by iteration, labeled data points are disposed in clusters about a straight line. These two new plots form the basis of a technique for separating FSD into homogeneous clusters.
Goldberg, Daniel N.; Narayanan, Sri Hari Krishna; Hascoet, Laurent; ...
2016-05-20
We apply an optimized method to the adjoint generation of a time-evolving land ice model through algorithmic differentiation (AD). The optimization involves a special treatment of the fixed-point iteration required to solve the nonlinear stress balance, which differs from a straightforward application of AD software, and leads to smaller memory requirements and in some cases shorter computation times of the adjoint. The optimization is done via implementation of the algorithm of Christianson (1994) for reverse accumulation of fixed-point problems, with the AD tool OpenAD. For test problems, the optimized adjoint is shown to have far lower memory requirements, potentially enabling larger problem sizes on memory-limited machines. In the case of the land ice model, implementation of the algorithm allows further optimization by having the adjoint model solve a sequence of linear systems with identical (as opposed to varying) matrices, greatly improving performance. Finally, the methods introduced here will be of value to other efforts applying AD tools to ice models, particularly ones which solve a hybrid shallow ice/shallow shelf approximation to the Stokes equations.
An adaptive segment method for smoothing lidar signal based on noise estimation
NASA Astrophysics Data System (ADS)
Wang, Yuzhao; Luo, Pingping
2014-10-01
An adaptive segmentation smoothing method (ASSM) is introduced in this paper to smooth the signal and suppress the noise. In the ASSM, the noise level is defined as 3σ of the background signal. An integer N is chosen for finding the changing positions in the signal curve: if the difference between two adjacent points is greater than 3Nσ, the position is recorded as an end point of a smoothing segment. All end points detected in this way are recorded, and the curves between them are smoothed separately. In the traditional method, the end points of the smoothing windows are fixed; the ASSM instead derives changing end points from each signal, so the smoothing windows can be set adaptively. The windows are set to half of each segment, and an average smoothing method is then applied within the segments. An iterative process is required to reduce the end-point aberration effect of the average smoothing method; two or three iterations are enough. In the ASSM, the signals are smoothed in the spatial domain rather than the frequency domain, so frequency-domain disturbances are avoided. A lidar echo was simulated in the experimental work. The echo was assumed to be produced by a space-borne lidar (e.g. CALIOP), and white Gaussian noise was added to the echo to represent the random noise arising from the environment and the detector. The novel method, ASSM, was applied to the noisy echo to filter the noise. In the test, N was set to 3 and the number of iterations to two. The results show that the signal can be smoothed adaptively by the ASSM, although N and the number of iterations might need to be optimized when the ASSM is applied to a different lidar.
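A compact sketch of the procedure as described (assumed array inputs; the background portion bg supplies the σ estimate):

    import numpy as np

    def assm(signal, bg, N=3, n_smooth=2):
        # Segment boundaries where adjacent samples jump by more than
        # 3*N*sigma; each segment is box-smoothed with a window of half
        # its length, repeated n_smooth times to limit end-point aberration.
        sigma = np.std(bg)
        jumps = np.where(np.abs(np.diff(signal)) > 3 * N * sigma)[0] + 1
        edges = np.concatenate(([0], jumps, [len(signal)]))
        out = signal.astype(float).copy()
        for a, b in zip(edges[:-1], edges[1:]):
            seg = out[a:b]
            w = max(1, len(seg) // 2)            # window: half the segment
            kernel = np.ones(w) / w
            for _ in range(n_smooth):            # two or three passes suffice
                seg = np.convolve(seg, kernel, mode="same")
            out[a:b] = seg
        return out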
Successive Over-Relaxation Technique for High-Performance Blind Image Deconvolution
2015-06-08
Sensible approximate solutions to the ill-posed nonlinear inverse problem of blind image deconvolution are addressed as fixed points of an iteration that consists in alternating approximations (AA) for the object and for the point-spread function (PSF), performed with a prescribed number of inner iterative descents. Keywords: deconvolution, space surveillance, Gauss-Seidel iteration.
NASA Astrophysics Data System (ADS)
Del Carpio R., Maikol; Hashemi, M. Javad; Mosqueda, Gilberto
2017-10-01
This study examines the performance of integration methods for hybrid simulation of large and complex structural systems in the context of structural collapse due to seismic excitations. The target application is not necessarily real-time testing, but rather models that involve large-scale physical sub-structures and highly nonlinear numerical models. Four case studies are presented and discussed. In the first case study, the accuracy of integration schemes, including two widely used methods, namely a modified version of the implicit Newmark method with a fixed number of iterations (iterative) and the operator-splitting method (non-iterative), is examined through pure numerical simulations. The second case study presents the results of 10 hybrid simulations repeated with the two aforementioned integration methods considering various time steps and fixed numbers of iterations for the iterative integration method. The physical sub-structure in these tests consists of a single-degree-of-freedom (SDOF) cantilever column with replaceable steel coupons that provides repeatable, highly nonlinear behavior including fracture-type strength and stiffness degradation. In case study three, the implicit Newmark method with a fixed number of iterations is applied to hybrid simulations of a 1:2 scale steel moment frame that includes a relatively complex nonlinear numerical substructure. Lastly, a more complex numerical substructure is considered by constructing a nonlinear computational model of a moment frame coupled to a hybrid model of a 1:2 scale steel gravity frame. The last two case studies are conducted on the same prototype structure, and the selection of time steps and fixed numbers of iterations is closely examined in pre-test simulations. The generated unbalanced forces are used as an index to track the equilibrium error and to predict the accuracy and stability of the simulations.
New matrix bounds and iterative algorithms for the discrete coupled algebraic Riccati equation
NASA Astrophysics Data System (ADS)
Liu, Jianzhou; Wang, Li; Zhang, Juan
2017-11-01
The discrete coupled algebraic Riccati equation (DCARE) has wide applications in control theory and linear systems. In general, for the DCARE, each term of the coupled term is treated separately. In this paper, we consider the coupled term as a whole, which differs from recent results. When applying eigenvalue inequalities to the coupled term, our method incurs less error. Using the properties of special matrices and eigenvalue inequalities, we propose several upper and lower matrix bounds for the solution of the DCARE. Furthermore, we discuss iterative algorithms for the solution of the DCARE. In the fixed-point iterative algorithms, the admissible range of the Lipschitz factor is wider than in recent results. Finally, we offer corresponding numerical examples to illustrate the effectiveness of the derived results.
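For intuition about the fixed-point iterative algorithms mentioned above, here is a minimal sketch, in Python/NumPy, of a fixed-point iteration for a single, uncoupled discrete algebraic Riccati equation; the coupled equation of the paper links several such equations through its coupled term, which this sketch does not attempt. The system matrices and tolerance are illustrative assumptions.

```python
import numpy as np

def dare_fixed_point(A, B, Q, R, tol=1e-10, max_iter=10000):
    """Solve X = A'XA - A'XB (R + B'XB)^{-1} B'XA + Q by fixed-point iteration."""
    X = Q.copy()
    for _ in range(max_iter):
        G = B.T @ X @ B + R
        X_new = A.T @ X @ A - A.T @ X @ B @ np.linalg.solve(G, B.T @ X @ A) + Q
        if np.linalg.norm(X_new - X, ord="fro") < tol:
            return X_new
        X = X_new
    raise RuntimeError("fixed-point iteration did not converge")

# Illustrative stable system; convergence of this map requires standard
# stabilizability/detectability assumptions.
A = np.array([[0.9, 0.1], [0.0, 0.8]])
B = np.array([[0.0], [1.0]])
Q = np.eye(2)
R = np.array([[1.0]])
X = dare_fixed_point(A, B, Q, R)
```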
Strong Convergence of Iteration Processes for Infinite Family of General Extended Mappings
NASA Astrophysics Data System (ADS)
Hussein Maibed, Zena
2018-05-01
The aim of this paper is to introduce the concept of a general extended mapping, which is independent of nonexpansive mappings, and to give an iteration process for families of quasi-nonexpansive mappings and of general extended mappings. The existence of common fixed points for these processes in Hilbert spaces is also studied.
Models of Intracavity Frequency Doubled Lasers
1990-01-01
Topics include: Intermittency; Intermittency Theory; Entropies and Dimension with Intermittency; Resonances, Frobenius-Perron Operators and Power Spectra; and Scaling and... ...to finding a measure is to approximate the Frobenius-Perron operator, whose domain is the set of measures on M (see, e.g., Li, 1976). An invariant measure of the system is a fixed point of the Frobenius-Perron operator, and an iterative method using this operator can be shown to converge to an
Numerical Computation of Subsonic Conical Diffuser Flows with Nonuniform Turbulent Inlet Conditions
1977-09-01
Contents include: the Gauss-Seidel point iteration method; factors affecting the rate of convergence of the point iteration method. ...can be solved in several ways. For simplicity, a standard Gauss-Seidel iteration method is used to obtain the solution. The method updates the... The advantage of using the Gauss-Seidel point iteration method to
An Iterative Solver in the Presence and Absence of Multiplicity for Nonlinear Equations
Özkum, Gülcan
2013-01-01
We develop a high-order fixed-point-type method to approximate a multiple root. Using three functional evaluations per full cycle, a new class of fourth-order methods for this purpose is suggested and established. The methods in this class require knowledge of the multiplicity. We also present a method for nonlinear equations in the absence of multiplicity information. To attest to the efficiency of the obtained methods, we employ numerical comparisons along with basins of attraction to compare the methods in the complex plane according to their convergence speed and chaotic behavior. PMID:24453914
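The fourth-order schemes of the paper are not reproduced here, but the role played by the known multiplicity can be illustrated with the classical modified Newton method, which restores quadratic convergence at a multiple root; the test function and starting point below are assumptions for illustration.

```python
def modified_newton(f, df, x0, m, tol=1e-12, max_iter=100):
    """Modified Newton iteration for a root of known multiplicity m:
    x_{k+1} = x_k - m * f(x_k) / f'(x_k), which restores quadratic convergence."""
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:
            return x
        x -= m * fx / df(x)
    return x

# f(x) = (x - 1)^3 has a triple root at x = 1, so m = 3; plain Newton (m = 1)
# would converge only linearly here.
f = lambda x: (x - 1.0) ** 3
df = lambda x: 3.0 * (x - 1.0) ** 2
root = modified_newton(f, df, x0=2.0, m=3)
```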
Anderson acceleration and application to the three-temperature energy equations
NASA Astrophysics Data System (ADS)
An, Hengbin; Jia, Xiaowei; Walker, Homer F.
2017-10-01
The Anderson acceleration method is an algorithm for accelerating the convergence of fixed-point iterations, including the Picard method. Anderson acceleration was first proposed in 1965 and, for some years, has been used successfully to accelerate the convergence of self-consistent field iterations in electronic-structure computations. Recently, the method has attracted growing attention in other application areas and among numerical analysts. Compared with a Newton-like method, an advantage of Anderson acceleration is that there is no need to form the Jacobian matrix. Thus the method is easy to implement. In this paper, an Anderson-accelerated Picard method is employed to solve the three-temperature energy equations, which are a type of strong nonlinear radiation-diffusion equations. Two strategies are used to improve the robustness of the Anderson acceleration method. One strategy is to adjust the iterates when necessary to satisfy the physical constraint. Another strategy is to monitor and, if necessary, reduce the matrix condition number of the least-squares problem in the Anderson-acceleration implementation so that numerical stability can be guaranteed. Numerical results show that the Anderson-accelerated Picard method can solve the three-temperature energy equations efficiently. Compared with the Picard method without acceleration, Anderson acceleration can reduce the number of iterations by at least half. A comparison between a Jacobian-free Newton-Krylov method, the Picard method, and the Anderson-accelerated Picard method is conducted in this paper.
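As a concrete reference point, a compact Anderson acceleration loop for a generic fixed-point map g is sketched below in Python/NumPy. The memory depth, least-squares formulation, and test problem are illustrative assumptions; the sketch omits the paper's two robustness strategies (constraint enforcement on the iterates and condition-number monitoring of the least-squares problem).

```python
import numpy as np

def anderson(g, x0, m=3, tol=1e-10, max_iter=200):
    """Anderson acceleration for the fixed-point iteration x = g(x)."""
    x = np.atleast_1d(np.asarray(x0, dtype=float))
    X, G = [], []                      # histories of iterates and of g(iterates)
    for _ in range(max_iter):
        gx = g(x)
        if np.linalg.norm(gx - x) < tol:
            return gx
        X.append(x.copy()); G.append(gx.copy())
        if len(X) > m + 1:             # keep at most m+1 past iterates
            X.pop(0); G.pop(0)
        F = np.column_stack([gk - xk for gk, xk in zip(G, X)])  # residuals
        if F.shape[1] == 1:
            x = gx                     # plain Picard step on the first pass
            continue
        dF = np.diff(F, axis=1)        # residual differences
        dG = np.diff(np.column_stack(G), axis=1)
        gamma, *_ = np.linalg.lstsq(dF, F[:, -1], rcond=None)
        x = gx - dG @ gamma            # Anderson-mixed update
    raise RuntimeError("no convergence")

# Example: x = cos(x) in 1-D; plain Picard needs dozens of iterations,
# Anderson acceleration only a handful.
sol = anderson(lambda x: np.cos(x), x0=1.0)
```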
2014-06-02
[22] Li, Q., Micchelli, C., Shen, L., and Xu, Y. A proximity algorithm accelerated by Gauss-Seidel iterations for L1/TV denoising models. Inverse... ...system of equations and their relationship to the solution of Model (2), and present an algorithm with an iterative approach for finding these solutions. ...Using the fixed-point characterization above, the (k+1)th iteration of the proximity operator algorithm to find the solution of the Dantzig
Some estimation formulae for continuous time-invariant linear systems
NASA Technical Reports Server (NTRS)
Bierman, G. J.; Sidhu, G. S.
1975-01-01
In this brief paper we examine a Riccati equation decomposition due to Reid and Lainiotis and apply the result to the continuous time-invariant linear filtering problem. Exploitation of the time-invariant structure leads to integration-free covariance recursions which are of use in covariance analyses and in filter implementations. A super-linearly convergent iterative solution to the algebraic Riccati equation (ARE) is developed. The resulting algorithm, arranged in a square-root form, is thought to be numerically stable and competitive with other ARE solution methods. Certain covariance relations that are relevant to the fixed-point and fixed-lag smoothing problems are also discussed.
Shen, Junlin; Du, Xiangying; Guo, Daode; Cao, Lizhen; Gao, Yan; Yang, Qi; Li, Pengyu; Liu, Jiabin; Li, Kuncheng
2013-01-01
Objectives To evaluate the clinical value of a noise-based tube current reduction method with iterative reconstruction for obtaining consistent image quality with dose optimization in prospective electrocardiogram (ECG)-triggered coronary CT angiography (CCTA). Materials and Methods We performed a prospective randomized study evaluating 338 patients undergoing CCTA with prospective ECG-triggering. Patients were randomly assigned to fixed tube current with filtered back projection (Group 1, n = 113), noise-based tube current with filtered back projection (Group 2, n = 109), or noise-based tube current with iterative reconstruction (Group 3, n = 116). Tube voltage was fixed at 120 kV. Qualitative image quality was rated on a 5-point scale (1 = impaired, to 5 = excellent, with 3–5 defined as diagnostic). Image noise and signal intensity were measured; signal-to-noise ratio was calculated; radiation dose parameters were recorded. Statistical analyses included one-way analysis of variance, chi-square test, Kruskal-Wallis test and multivariable linear regression. Results Image noise was maintained at the target value of 35 HU with a small interquartile range for Group 2 (35.00–35.03 HU) and Group 3 (34.99–35.02 HU), while it ranged from 28.73 to 37.87 HU for Group 1. All images in the three groups were acceptable for diagnosis. Relative reductions of 20% and 51% in effective dose were achieved for Group 2 (2.9 mSv) and Group 3 (1.8 mSv), respectively, compared with Group 1 (3.7 mSv). After adjustment for scan characteristics, iterative reconstruction was associated with a 26% reduction in effective dose. Conclusion The noise-based tube current reduction method with iterative reconstruction maintains image noise precisely at the desired level and achieves consistent image quality. Meanwhile, effective dose can be reduced by more than 50%. PMID:23741444
Non-iterative distance constraints enforcement for cloth drapes simulation
NASA Astrophysics Data System (ADS)
Hidajat, R. L. L. G.; Wibowo, Arifin, Z.; Suyitno
2016-03-01
Cloth simulation, which represents the behavior of cloth objects such as flags, tablecloths, or even garments, has applications in clothing animation for games and virtual shops. Elastically deformable models have been widely used to provide realistic and efficient simulation; however, the problem of overstretching is encountered. We introduce a new cloth simulation algorithm that replaces iterative distance-constraint enforcement steps with non-iterative ones to prevent overstretching in a spring-mass system for cloth modeling. Our method is based on a simple position-correction procedure applied at one end of a spring. In our experiments, we developed a rectangular cloth model which is initially in a horizontal position with one point fixed and is allowed to drape under its own weight. Our simulation achieves a plausible cloth drape, as in reality. This paper aims to demonstrate the reliability of our approach in overcoming overstretching while decreasing the computational cost of the constraint enforcement process, since the iterative procedure is eliminated.
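The position-correction idea can be sketched in a few lines: if a spring is stretched beyond its rest length, its free end is projected back to an admissible position in one step, with no iteration. The single corrected endpoint and the parameter choices below are assumptions modeled on the description above, not the authors' exact procedure.

```python
import numpy as np

def enforce_distance(p_fixed, p_free, rest_length):
    """Move the free end of a spring so its length equals rest_length exactly.

    p_fixed : position of the end that stays put (e.g. attached or already corrected)
    p_free  : position of the end to correct
    """
    d = p_free - p_fixed
    dist = np.linalg.norm(d)
    if dist < 1e-12:                 # degenerate spring; leave as-is
        return p_free
    # Project the free end onto the sphere of radius rest_length around p_fixed.
    return p_fixed + d * (rest_length / dist)

# Example: an overstretched strand hanging from a fixed anchor point.
anchor = np.array([0.0, 0.0, 0.0])
particle = np.array([0.0, -1.5, 0.0])     # overstretched: rest length is 1.0
particle = enforce_distance(anchor, particle, rest_length=1.0)
```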
NASA Astrophysics Data System (ADS)
Muhiddin, F. A.; Sulaiman, J.
2017-09-01
The aim of this paper is to investigate the effectiveness of the Successive Over-Relaxation (SOR) iterative method, using the fourth-order Crank-Nicolson (CN) discretization scheme to derive a five-point Crank-Nicolson approximation equation for solving the diffusion equation. From this approximation equation, it can be shown that a corresponding system of five-point approximation equations can be generated and then solved iteratively. To assess the performance of the proposed iterative method with the fourth-order CN scheme, another point iterative method, Gauss-Seidel (GS), is also presented as a reference method. From the numerical results obtained with the fourth-order CN discretization scheme, it can be concluded that the SOR iterative method is superior in terms of number of iterations, execution time, and maximum absolute error.
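A generic SOR sweep for a linear system Ax = b is sketched below in Python; with relaxation factor ω = 1 it reduces to the Gauss-Seidel reference method, while 1 < ω < 2 typically accelerates convergence for diffusion-type systems such as the five-point equations above. The test matrix, ω, and tolerance are illustrative assumptions.

```python
import numpy as np

def sor(A, b, omega=1.5, tol=1e-10, max_iter=10000):
    """Successive Over-Relaxation for Ax = b (omega = 1 gives Gauss-Seidel)."""
    n = len(b)
    x = np.zeros(n)
    for it in range(max_iter):
        x_old = x.copy()
        for i in range(n):
            # Gauss-Seidel update using the freshest available values...
            s = A[i, :i] @ x[:i] + A[i, i + 1:] @ x_old[i + 1:]
            gs = (b[i] - s) / A[i, i]
            # ...then blend with the previous value by the relaxation factor.
            x[i] = (1 - omega) * x_old[i] + omega * gs
        if np.linalg.norm(x - x_old, ord=np.inf) < tol:
            return x, it + 1
    raise RuntimeError("no convergence")

# Tridiagonal system typical of a 1-D implicit diffusion step.
n = 50
A = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
b = np.ones(n)
x, iters = sor(A, b, omega=1.8)
```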
NASA Astrophysics Data System (ADS)
Li, Husheng; Betz, Sharon M.; Poor, H. Vincent
2007-05-01
This paper examines the performance of decision feedback based iterative channel estimation and multiuser detection in channel coded aperiodic DS-CDMA systems operating over multipath fading channels. First, explicit expressions describing the performance of channel estimation and parallel interference cancellation based multiuser detection are developed. These results are then combined to characterize the evolution of the performance of a system that iterates among channel estimation, multiuser detection and channel decoding. Sufficient conditions for convergence of this system to a unique fixed point are developed.
NASA Astrophysics Data System (ADS)
Zeng, Qinglei; Liu, Zhanli; Wang, Tao; Gao, Yue; Zhuang, Zhuo
2018-02-01
In the hydraulic fracturing process in shale rock, multiple fractures perpendicular to a horizontal wellbore are usually driven to propagate simultaneously by the pumping operation. In this paper, a numerical method is developed for the propagation of multiple hydraulic fractures (HFs) by fully coupling the deformation and fracturing of the solid formation, fluid flow in the fractures, fluid partitioning through a horizontal wellbore, and the perforation entry loss effect. The extended finite element method (XFEM) is adopted to model arbitrary growth of the fractures. Newton's iteration is proposed to solve these fully coupled nonlinear equations; it is more efficient than the fixed-point iteration widely adopted in the literature and avoids the need to impose a fluid pressure boundary condition when solving the flow equations. A secant iterative method based on the stress intensity factor (SIF) is proposed to capture the different propagation velocities of multiple fractures. The numerical results are compared with theoretical solutions in the literature to verify the accuracy of the method. The simultaneous propagation of multiple HFs is simulated by the newly proposed algorithm. The coupled influences of propagation regime, stress interaction, wellbore pressure loss and perforation entry loss on the simultaneous propagation of multiple HFs are investigated.
Time-dependent spectral renormalization method
NASA Astrophysics Data System (ADS)
Cole, Justin T.; Musslimani, Ziad H.
2017-11-01
The spectral renormalization method was introduced by Ablowitz and Musslimani (2005) as an effective way to numerically compute (time-independent) bound states for certain nonlinear boundary value problems. In this paper, we extend those ideas to the time domain and introduce a time-dependent spectral renormalization method as a numerical means to simulate linear and nonlinear evolution equations. The essence of the method is to convert the underlying evolution equation from its partial or ordinary differential form (using Duhamel's principle) into an integral equation. The solution sought is then viewed as a fixed point in both space and time. The resulting integral equation is then numerically solved using a simple renormalized fixed-point iteration method. Convergence is achieved by introducing a time-dependent renormalization factor which is numerically computed from the physical properties of the governing evolution equation. The proposed method has the ability to incorporate physics into the simulations in the form of conservation laws or dissipation rates. This novel scheme is implemented on benchmark evolution equations: the classical nonlinear Schrödinger (NLS), the integrable PT-symmetric nonlocal NLS, and the viscous Burgers' equations, each of which is a prototypical example of a conservative or dissipative dynamical system. Numerical implementation and algorithm performance are also discussed.
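The time-independent version of the idea is easy to sketch. The fragment below computes the 1-D NLS bound state of u_xx - u + u³ = 0 by a renormalized fixed-point iteration in Fourier space, in the spirit of the spectral renormalization method of Ablowitz and Musslimani (2005); the grid, initial guess, and renormalization exponent are illustrative assumptions, and the paper's time-dependent extension computes its renormalization factor from conservation laws or dissipation rates instead.

```python
import numpy as np

# Solve u_xx - u + u^3 = 0 on a periodic grid (exact soliton: u = sqrt(2)/cosh(x)).
# Fixed point in Fourier space: u_hat = F[u^3] / (1 + k^2).
n, L = 256, 40.0
x = (np.arange(n) - n // 2) * (L / n)
k = 2 * np.pi * np.fft.fftfreq(n, d=L / n)
M = 1.0 + k**2

u = np.exp(-x**2)                                  # rough initial guess
for _ in range(200):
    u_hat = np.fft.fft(u)
    N_hat = np.fft.fft(u**3)
    # Renormalization factor: equals 1 at the true solution; its power 3/2
    # rescales each iterate and prevents collapse to the trivial zero fixed point.
    gamma = np.sum(M * np.abs(u_hat)**2) / np.real(np.sum(np.conj(u_hat) * N_hat))
    u_new = np.real(np.fft.ifft(gamma**1.5 * N_hat / M))
    if np.max(np.abs(u_new - u)) < 1e-12:
        break
    u = u_new
# u should now approximate sqrt(2)/cosh(x) to spectral accuracy.
```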
Stability of Mixed-Strategy-Based Iterative Logit Quantal Response Dynamics in Game Theory
Zhuang, Qian; Di, Zengru; Wu, Jinshan
2014-01-01
Using the logit quantal response form as the response function in each step, the original definition of the static quantal response equilibrium (QRE) is extended into an iterative evolution process. QREs remain the fixed points of the dynamic process. However, depending on whether such fixed points are the long-term solutions of the dynamic process, they can be classified into stable (SQREs) and unstable (USQREs) equilibria. This extension resembles the extension from static Nash equilibria (NEs) to evolutionarily stable solutions in the framework of evolutionary game theory. The relation between SQREs and other solution concepts of games, including NEs and QREs, is discussed. Using experimental data from other published papers, we perform a preliminary comparison between SQREs, NEs, QREs and the observed behavioral outcomes of those experiments. For certain games, we determine that SQREs have better predictive power than QREs and NEs. PMID:25157502
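A minimal version of the iterative logit dynamics for a two-player bimatrix game is sketched below; a stable QRE appears as an attracting fixed point of the response map. The payoff matrices and rationality parameter λ are illustrative assumptions.

```python
import numpy as np

def logit_response(payoffs, lam):
    """Logit quantal response to a vector of expected payoffs."""
    z = np.exp(lam * (payoffs - payoffs.max()))   # shifted for numerical stability
    return z / z.sum()

def iterate_logit_qre(A, B, lam=0.5, n_steps=1000, tol=1e-12):
    """Iterate the logit response map for a bimatrix game (A: row payoffs, B: column payoffs)."""
    p = np.ones(A.shape[0]) / A.shape[0]
    q = np.ones(A.shape[1]) / A.shape[1]
    for _ in range(n_steps):
        p_new = logit_response(A @ q, lam)        # row player responds to q
        q_new = logit_response(B.T @ p, lam)      # column player responds to p
        if max(np.abs(p_new - p).max(), np.abs(q_new - q).max()) < tol:
            break
        p, q = p_new, q_new
    return p, q

# Matching pennies: the QRE is uniform play. For small lambda the uniform point
# attracts the iteration (an SQRE); for large lambda it can lose stability (a USQRE).
A = np.array([[1.0, -1.0], [-1.0, 1.0]])
p, q = iterate_logit_qre(A, -A)
```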
Basin boundaries and focal points in a map coming from Bairstow's method.
Gardini, Laura; Bischi, Gian-Italo; Fournier-Prunaret, Daniele
1999-06-01
This paper is devoted to the study of the global dynamical properties of a two-dimensional noninvertible map, with a denominator which can vanish, obtained by applying Bairstow's method to a cubic polynomial. It is shown that the complicated structure of the basins of attraction of the fixed points is due to the existence of singularities such as sets of nondefinition, focal points, and prefocal curves, which are specific to maps with a vanishing denominator, and have been recently introduced in the literature. Some global bifurcations that change the qualitative structure of the basin boundaries, are explained in terms of contacts among these singularities. The techniques used in this paper put in evidence some new dynamic behaviors and bifurcations, which are peculiar of maps with denominator; hence they can be applied to the analysis of other classes of maps coming from iterative algorithms (based on Newton's method, or others). (c) 1999 American Institute of Physics.
Continuation of probability density functions using a generalized Lyapunov approach
DOE Office of Scientific and Technical Information (OSTI.GOV)
Baars, S., E-mail: s.baars@rug.nl; Viebahn, J.P., E-mail: viebahn@cwi.nl; Mulder, T.E., E-mail: t.e.mulder@uu.nl
Techniques from numerical bifurcation theory are very useful for studying transitions between steady fluid flow patterns and the instabilities involved. Here, we provide computational methodology to use parameter continuation in determining probability density functions of systems of stochastic partial differential equations near fixed points, under a small-noise approximation. The key innovation is the efficient solution of a generalized Lyapunov equation using an iterative method involving low-rank approximations. We apply and illustrate the capabilities of the method using a problem in physical oceanography, i.e. the occurrence of multiple steady states of the Atlantic Ocean circulation.
NASA Technical Reports Server (NTRS)
Lundberg, J. B.; Feulner, M. R.; Abusali, P. A. M.; Ho, C. S.
1991-01-01
The method of modified back differences, a technique that significantly reduces the numerical integration errors associated with crossing shadow boundaries using a fixed-mesh multistep integrator without a significant increase in computer run time, is presented. While Hubbard's integral approach can produce significant improvements to the trajectory solution, the interpolation method provides the best overall results. It is demonstrated that iterating on the point mass term correction is also important for achieving the best overall results. It is also shown that the method of modified back differences can be implemented with only a small increase in execution time.
NASA Astrophysics Data System (ADS)
Nosikov, I. A.; Klimenko, M. V.; Bessarab, P. F.; Zhbankov, G. A.
2017-07-01
Point-to-point ray tracing is an important problem in many fields of science. While direct variational methods, in which some trajectory is transformed to an optimal one, are routinely used in calculations of pathways of seismic waves, chemical reactions, diffusion processes, etc., this approach is not widely known in ionospheric point-to-point ray tracing. We apply the Nudged Elastic Band (NEB) method to a radio wave propagation problem. In the NEB method, a chain of points which gives a discrete representation of the radio wave ray is adjusted iteratively to an optimal configuration satisfying Fermat's principle, while the endpoints of the trajectory are kept fixed according to the boundary conditions. Transverse displacements define the radio ray trajectory, while springs between the points control their distribution along the ray. The method is applied to a study of point-to-point ionospheric ray tracing, where the propagation medium is obtained with the International Reference Ionosphere model taking into account traveling ionospheric disturbances. A 2-dimensional representation of the optical path functional is developed and used to gain insight into the fundamental difference between high and low rays. We conclude that high and low rays are minima and saddle points of the optical path functional, respectively.
Nash equilibrium in differential games and the construction of the programmed iteration method
DOE Office of Scientific and Technical Information (OSTI.GOV)
Averboukh, Yurii V
This work is devoted to the study of nonzero-sum differential games. The set of payoffs in a situation of Nash equilibrium is examined. It is shown that the set of payoffs in a situation of Nash equilibrium coincides with the set of values of consistent functions which are fixed points of the program absorption operator. A condition for functions to be consistent is given in terms of the weak invariance of the graph of the functions under a certain differential inclusion. Bibliography: 18 titles.
Convergence Time towards Periodic Orbits in Discrete Dynamical Systems
San Martín, Jesús; Porter, Mason A.
2014-01-01
We investigate the convergence towards periodic orbits in discrete dynamical systems. We examine the probability that a randomly chosen point converges to a particular neighborhood of a periodic orbit in a fixed number of iterations, and we use linearized equations to examine the evolution near that neighborhood. The underlying idea is that the points of a stable periodic orbit are associated with intervals. We state and prove a theorem that details what regions of phase space are mapped into these intervals (once they are known) and how many iterations are required to get there. We also construct algorithms that allow our theoretical results to be implemented successfully in practice. PMID:24736594
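The question can be explored numerically: iterate a map with a stable periodic orbit from random initial points and record how many iterations each point needs to enter a fixed neighborhood of the orbit. The map, parameter (giving a stable period-2 orbit), and neighborhood size below are illustrative assumptions.

```python
import numpy as np

r = 3.2                                    # logistic map with a stable period-2 orbit
f = lambda x: r * x * (1 - x)
# The period-2 points are roots of f(f(x)) = x other than the fixed points.
p = (1 + r + np.sqrt((r - 3) * (r + 1))) / (2 * r)
orbit = np.array([p, f(p)])

def iterations_to_orbit(x, eps=1e-3, max_iter=10000):
    """Number of iterations until x falls within eps of the period-2 orbit."""
    for it in range(max_iter):
        if np.min(np.abs(orbit - x)) < eps:
            return it
        x = f(x)
    return max_iter

rng = np.random.default_rng(1)
counts = [iterations_to_orbit(x) for x in rng.uniform(0.01, 0.99, 1000)]
# The distribution of counts reflects the sizes of the preimage regions of the
# eps-interval around the orbit, in the spirit of the theorem discussed above.
```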
Stochastic derivative-free optimization using a trust region framework
Larson, Jeffrey; Billups, Stephen C.
2016-02-17
This study presents a trust region algorithm to minimize a function f when one has access only to noise-corrupted function values f¯. The model-based algorithm dynamically adjusts its step length, taking larger steps when the model and function agree and smaller steps when the model is less accurate. The method does not require the user to specify a fixed pattern of points used to build local models and does not repeatedly sample points. If f is sufficiently smooth and the noise is independent and identically distributed with mean zero and finite variance, we prove that our algorithm produces iterates such that the corresponding function gradients converge in probability to zero. As a result, we present a prototype of our algorithm that, while simplistic in its management of previously evaluated points, solves benchmark problems in fewer function evaluations than do existing stochastic approximation methods.
Multigrid-based reconstruction algorithm for quantitative photoacoustic tomography
Li, Shengfu; Montcel, Bruno; Yuan, Zhen; Liu, Wanyu; Vray, Didier
2015-01-01
This paper proposes a multigrid inversion framework for quantitative photoacoustic tomography reconstruction. The forward model of optical fluence distribution and the inverse problem are solved at multiple resolutions. A fixed-point iteration scheme is formulated for each resolution and used as a cost function. The simulated and experimental results for quantitative photoacoustic tomography reconstruction show that the proposed multigrid inversion can dramatically reduce the required number of iterations for the optimization process without loss of reliability in the results. PMID:26203371
Two Reconfigurable Flight-Control Design Methods: Robust Servomechanism and Control Allocation
NASA Technical Reports Server (NTRS)
Burken, John J.; Lu, Ping; Wu, Zheng-Lu; Bahm, Cathy
2001-01-01
Two methods for control system reconfiguration have been investigated. The first method is a robust servomechanism control approach (optimal tracking problem) that is a generalization of the classical proportional-plus-integral control to multiple input-multiple output systems. The second method is a control-allocation approach based on a quadratic programming formulation. A globally convergent fixed-point iteration algorithm has been developed to make onboard implementation of this method feasible. These methods have been applied to reconfigurable entry flight control design for the X-33 vehicle. Examples presented demonstrate simultaneous tracking of angle-of-attack and roll angle commands during failures of the right body flap actuator. Although simulations demonstrate success of the first method in most cases, the control-allocation method appears to provide uniformly better performance in all cases.
Liu, Xiaolei; Huang, Meng; Fan, Bin; Buckler, Edward S.; Zhang, Zhiwu
2016-01-01
False positives in a Genome-Wide Association Study (GWAS) can be effectively controlled by a fixed effect and random effect Mixed Linear Model (MLM) that incorporates population structure and kinship among individuals to adjust association tests on markers; however, the adjustment also compromises true positives. The modified MLM method, Multiple Loci Linear Mixed Model (MLMM), incorporates multiple markers simultaneously as covariates in a stepwise MLM to partially remove the confounding between testing markers and kinship. To completely eliminate the confounding, we divided MLMM into two parts, a Fixed Effect Model (FEM) and a Random Effect Model (REM), and use them iteratively. FEM contains the testing markers, one at a time, and multiple associated markers as covariates to control false positives. To avoid the model over-fitting problem in FEM, the associated markers are estimated in REM by using them to define kinship. The P values of the testing markers and the associated markers are unified at each iteration. We named the new method Fixed and random model Circulating Probability Unification (FarmCPU). Both real and simulated data analyses demonstrated that FarmCPU improves statistical power compared to current methods. Additional benefits include an efficient computing time that is linear in both the number of individuals and the number of markers. Now, a dataset with half a million individuals and half a million markers can be analyzed within three days. PMID:26828793
Reconfigurable Flight Control Designs With Application to the X-33 Vehicle
NASA Technical Reports Server (NTRS)
Burken, John J.; Lu, Ping; Wu, Zhenglu
1999-01-01
Two methods for control system reconfiguration have been investigated. The first method is a robust servomechanism control approach (optimal tracking problem) that is a generalization of the classical proportional-plus-integral control to multiple input-multiple output systems. The second method is a control-allocation approach based on a quadratic programming formulation. A globally convergent fixed-point iteration algorithm has been developed to make onboard implementation of this method feasible. These methods have been applied to reconfigurable entry flight control design for the X-33 vehicle. Examples presented demonstrate simultaneous tracking of angle-of-attack and roll angle commands during failures of the right body flap actuator. Although simulations demonstrate success of the first method in most cases, the control-allocation method appears to provide uniformly better performance in all cases.
Dynamics of a durable commodity market involving trade at disequilibrium
NASA Astrophysics Data System (ADS)
Panchuk, A.; Puu, T.
2018-05-01
The present work considers a simple model of a durable commodity market involving two agents who trade stocks of two different types. Stock commodities, in contrast to flow commodities, remain on the market from period to period; consequently, neither a unique demand function nor a unique supply function exists. We also set up exact conditions for trade at disequilibrium, an issue that is usually neglected, though it is a fact of reality. The induced iterative system has an infinite number of fixed points and path-dependent dynamics. We show that a typical orbit is either attracted to one of the fixed points or eventually sticks at a no-trade point. In the latter case the stock distribution always remains the same while the price displays periodic or chaotic oscillations.
Varying-energy CT imaging method based on EM-TV
NASA Astrophysics Data System (ADS)
Chen, Ping; Han, Yan
2016-11-01
For complicated structural components with wide x-ray attenuation ranges, conventional fixed-energy computed tomography (CT) imaging cannot obtain all the structural information. This limitation results in a shortage of CT information because the effective thickness of the components along the direction of x-ray penetration exceeds the dynamic range of the x-ray imaging system. To address this problem, a varying-energy x-ray CT imaging method is proposed. In this new method, the tube voltage is adjusted several times in fixed, small increments. Next, the fusion of grey consistency and logarithm demodulation is applied to obtain a complete, lower-noise projection with a high dynamic range (HDR). In addition, to address the noise suppression problem of the analytical method, EM-TV (expectation maximization-total variation) iterative reconstruction is used. In the process of iteration, the reconstruction result obtained at one x-ray energy is used as the initial condition of the next iteration. An accompanying experiment demonstrates that this EM-TV reconstruction can also extend the dynamic range of x-ray imaging systems and provide higher reconstruction quality relative to the fusion reconstruction method.
Bíró, Oszkár; Koczka, Gergely; Preis, Kurt
2014-01-01
An efficient finite element method to take account of the nonlinearity of the magnetic materials when analyzing three-dimensional eddy current problems is presented in this paper. The problem is formulated in terms of vector and scalar potentials approximated by edge and node based finite element basis functions. The application of Galerkin techniques leads to a large, nonlinear system of ordinary differential equations in the time domain. The excitations are assumed to be time-periodic and the steady-state periodic solution is of interest only. This is represented either in the frequency domain as a finite Fourier series or in the time domain as a set of discrete time values within one period for each finite element degree of freedom. The former approach is the (continuous) harmonic balance method and, in the latter one, discrete Fourier transformation will be shown to lead to a discrete harmonic balance method. Due to the nonlinearity, all harmonics, both continuous and discrete, are coupled to each other. The harmonics would be decoupled if the problem were linear, therefore, a special nonlinear iteration technique, the fixed-point method is used to linearize the equations by selecting a time-independent permeability distribution, the so-called fixed-point permeability in each nonlinear iteration step. This leads to uncoupled harmonics within these steps. As industrial applications, analyses of large power transformers are presented. The first example is the computation of the electromagnetic field of a single-phase transformer in the time domain with the results compared to those obtained by traditional time-stepping techniques. In the second application, an advanced model of the same transformer is analyzed in the frequency domain by the harmonic balance method with the effect of the presence of higher harmonics on the losses investigated. Finally a third example tackles the case of direct current (DC) bias in the coils of a single-phase transformer. PMID:24829517
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liu, W; Sawant, A; Ruan, D
Purpose: The development of high dimensional imaging systems (e.g. volumetric MRI, CBCT, photogrammetry systems) in image-guided radiotherapy provides important pathways to the ultimate goal of real-time volumetric/surface motion monitoring. This study aims to develop a prediction method for the high dimensional state subject to respiratory motion. Compared to conventional linear dimension reduction based approaches, our method utilizes manifold learning to construct a descriptive feature submanifold, where more efficient and accurate prediction can be performed. Methods: We developed a prediction framework for the high-dimensional state subject to respiratory motion. The proposed method performs dimension reduction in a nonlinear setting to permit more descriptive features compared to its linear counterparts (e.g., classic PCA). Specifically, a kernel PCA is used to construct a proper low-dimensional feature manifold, where low-dimensional prediction is performed. A fixed-point iterative pre-image estimation method is applied subsequently to recover the predicted value in the original state space. We evaluated and compared the proposed method with a PCA-based method on 200 level-set surfaces reconstructed from surface point clouds captured by the VisionRT system. The prediction accuracy was evaluated with respect to root-mean-squared-error (RMSE) for both 200ms and 600ms lookahead lengths. Results: The proposed method outperformed the PCA-based approach with statistically higher prediction accuracy. In one-dimensional feature subspace, our method achieved mean prediction accuracy of 0.86mm and 0.89mm for 200ms and 600ms lookahead lengths respectively, compared to 0.95mm and 1.04mm from the PCA-based method. The paired t-tests further demonstrated the statistical significance of the superiority of our method, with p-values of 6.33e-3 and 5.78e-5, respectively. Conclusion: The proposed approach benefits from the descriptiveness of a nonlinear manifold and the prediction reliability in such a low dimensional manifold. The fixed-point iterative approach turns out to work well practically for the pre-image recovery. Our approach is particularly suitable for facilitating the management of respiratory motion in image-guided radiotherapy. This work is supported in part by NIH grant R01 CA169102-02.
A Parallel Fast Sweeping Method for the Eikonal Equation
NASA Astrophysics Data System (ADS)
Baker, B.
2017-12-01
Recently, there has been an exciting emergence of probabilistic methods for travel time tomography. Unlike gradient-based optimization strategies, probabilistic tomographic methods are resistant to becoming trapped in a local minimum and provide a much better quantification of parameter resolution than, say, appealing to ray density or performing checkerboard reconstruction tests. The benefits associated with random sampling methods, however, are only realized by successive computation of predicted travel times in potentially strongly heterogeneous media. To this end, this abstract is concerned with expediting the solution of the Eikonal equation. While many Eikonal solvers use a fast marching method, the proposed solver uses the iterative fast sweeping method because the eight fixed sweep orderings in each iteration are natural targets for parallelization. To reduce the number of iterations and grid points required, the high-accuracy finite-difference stencil of Nobel et al. (2014) is implemented. A directed acyclic graph (DAG) is created with a priori knowledge of the sweep ordering and finite-difference stencil. By performing a topological sort of the DAG, sets of independent nodes are identified as candidates for concurrent updating. Additionally, the proposed solver addresses scalability during earthquake relocation, a necessary step in local and regional earthquake tomography and a barrier to extending probabilistic methods from active source to passive source applications, by introducing an asynchronous parallel forward solve phase for all receivers in the network. Synthetic examples using the SEG over-thrust model will be presented.
Strong Convergence for a Finite Family of Generalized Asymptotically Nonexpansive Mappings
NASA Astrophysics Data System (ADS)
Ma, Zhi-Hong; Chen, Ru-Dond
The purpose of this paper is to establish convergence theorems for generalized asymptotically nonexpansive mappings and asymptotically nonexpansive mappings in Banach spaces by using a new iteration which is a natural generalization of the implicit iteration. In the meantime, we give necessary and sufficient conditions for strong convergence of the iteration to a common fixed point and correct a flaw in the results of Thakur [11]. As one will see, the results presented in this paper are an extension of the corresponding results [8,11].
Twostep-by-twostep PIRK-type PC methods with continuous output formulas
NASA Astrophysics Data System (ADS)
Cong, Nguyen Huu; Xuan, Le Ngoc
2008-11-01
This paper deals with parallel predictor-corrector (PC) iteration methods based on collocation Runge-Kutta (RK) corrector methods with continuous output formulas for solving nonstiff initial-value problems (IVPs) for systems of first-order differential equations. At the nth step, the continuous output formulas are used not only for predicting the stage values in the PC iteration methods but also for calculating the step values at the (n+2)th step. In this case, the integration process can proceed twostep-by-twostep. The resulting twostep-by-twostep (TBT) parallel-iterated RK-type (PIRK-type) methods with continuous output formulas (twostep-by-twostep PIRKC methods or TBTPIRKC methods) give us a faster integration process. Fixed-stepsize applications of these TBTPIRKC methods to a few widely-used test problems reveal that the new PC methods are much more efficient than the well-known parallel-iterated RK methods (PIRK methods), the parallel-iterated RK-type PC methods with continuous output formulas (PIRKC methods), and the sequential explicit RK codes DOPRI5 and DOP853 available in the literature.
X-Divertors on ITER - with no hardware changes
NASA Astrophysics Data System (ADS)
Valanju, Prashant; Covele, Brent; Kotschenreuther, Mike; Mahajan, Swadesh; Kessel, Charles
2014-10-01
Using CORSICA, we have discovered that X-Divertor (XD) equilibria are possible on ITER - without any extra PF coils inside the TF coils, and with no changes to ITER's poloidal field (PF) coil set, divertor cassette, strike points, or first wall. Starting from the Standard Divertor (SD), a sequence of XD configurations (with increasing flux expansions at the divertor plate) can be made by reprogramming ITER PF coil currents while keeping them all under their design limits (Lackner and Zohm have shown this to be impossible for Snowflakes). The strike point is held fixed, so no changes in the divertor or pumping hardware will be needed. The main plasma shape is kept very close to the SD case, so no hardware changes to the main chamber will be needed. Time-dependent ITER-XD operational scenarios are being checked using TSC. This opens the possibility that many XDs could be tested and used to assist in high-power operation on ITER. Because of the toroidally segmented ITER divertor plates, strongly detached operation may be critical for making use of the largest XD flux expansion possible. The flux flaring in XDs is expected to increase the stability of detachment, so that H-mode confinement is not affected. Detachment stability is being examined with SOLPS. This work supported by US DOE Grants DE-FG02-04ER54742 and DE-FG02-04ER54754 and by TACC at UT Austin.
Calibration of stereo rigs based on the backward projection process
NASA Astrophysics Data System (ADS)
Gu, Feifei; Zhao, Hong; Ma, Yueyang; Bu, Penghui; Zhao, Zixin
2016-08-01
High-accuracy 3D measurement based on a binocular vision system is heavily dependent on the accurate calibration of the two rigidly-fixed cameras. In most traditional calibration methods, stereo parameters are iteratively optimized through the forward imaging process (FIP). However, the results can only guarantee minimal 2D pixel errors, not minimal 3D reconstruction errors. To address this problem, a simple method to calibrate a stereo rig based on the backward projection process (BPP) is proposed. The position of a spatial point can be determined separately from each camera using planar constraints provided by the planar pattern target. Then, combined with pre-defined spatial points, the intrinsic and extrinsic parameters of the stereo rig can be optimized by minimizing the total 3D errors of both left and right cameras. An extensive performance study of the method in the presence of image noise and lens distortions is implemented. Experiments conducted on synthetic and real data demonstrate the accuracy and robustness of the proposed method.
Function representation with circle inversion map systems
NASA Astrophysics Data System (ADS)
Boreland, Bryson; Kunze, Herb
2017-01-01
The fractals literature develops the now well-known concept of local iterated function systems (using affine maps) with grey-level maps (LIFSM) as an approach to function representation in terms of the associated fixed point of the so-called fractal transform. While originally explored as a method to achieve signal (and 2-D image) compression, more recent work has explored various aspects of signal and image processing using this machinery. In this paper, we develop a similar framework for function representation using circle inversion map systems. Given a circle C with centre õ and radius r, inversion with respect to C transforms the point p̃ to the point p̃′, such that p̃ and p̃′ lie on the same radial half-line from õ and d(õ, p̃) d(õ, p̃′) = r², where d is Euclidean distance. We demonstrate the results with an example.
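Circle inversion itself is a two-line computation, sketched below; the circle parameters are arbitrary illustrative choices. Points inside the circle map outside, and vice versa, while points on the circle are fixed.

```python
import numpy as np

def invert(p, center, r):
    """Invert point p in the circle with the given center and radius r:
    the image lies on the ray from center through p, and
    d(center, p) * d(center, image) = r**2."""
    d = p - center
    return center + d * (r**2 / np.dot(d, d))

center = np.array([0.0, 0.0])
p = np.array([0.5, 0.0])           # inside the unit circle...
q = invert(p, center, r=1.0)       # ...maps to (2, 0), outside it
```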
Parallel/Vector Integration Methods for Dynamical Astronomy
NASA Astrophysics Data System (ADS)
Fukushima, Toshio
1999-01-01
This paper reviews three recent works on numerical methods to integrate ordinary differential equations (ODE), which are specially designed for parallel, vector, and/or multi-processor-unit (PU) computers. The first is the Picard-Chebyshev method (Fukushima, 1997a). It obtains a global solution of the ODE in the form of a Chebyshev polynomial of large (> 1000) degree by applying the Picard iteration repeatedly. The iteration converges for smooth problems and/or perturbed dynamics. The method runs around 100-1000 times faster in the vector mode than in the scalar mode of a certain computer with vector processors (Fukushima, 1997b). The second is a parallelization of a symplectic integrator (Saha et al., 1997). It regards the implicit midpoint rules covering thousands of timesteps as large-scale nonlinear equations and solves them by fixed-point iteration. The method is applicable to Hamiltonian systems and is expected to lead to an acceleration factor of around 50 on parallel computers with more than 1000 PUs. The last is a parallelization of the extrapolation method (Ito and Fukushima, 1997). It performs trial integrations in parallel. The trial integrations are further accelerated by balancing the computational load among PUs by the technique of folding. The method is all-purpose and achieves an acceleration factor of around 3.5 by using several PUs. Finally, we give a perspective on the parallelization of some implicit integrators which require multiple corrections in solving implicit formulas, like the implicit Hermitian integrators (Makino and Aarseth, 1992), (Hut et al., 1995) or the implicit symmetric multistep methods (Fukushima, 1998), (Fukushima, 1999).
Layout compliance for triple patterning lithography: an iterative approach
NASA Astrophysics Data System (ADS)
Yu, Bei; Garreton, Gilda; Pan, David Z.
2014-10-01
As the semiconductor process further scales down, the industry encounters many lithography-related issues. In the 14nm logic node and beyond, triple patterning lithography (TPL) is one of the most promising techniques for Metal1 layer and possibly Via0 layer. As one of the most challenging problems in TPL, recently layout decomposition efforts have received more attention from both industry and academia. Ideally the decomposer should point out locations in the layout that are not triple patterning decomposable and therefore manual intervention by designers is required. A traditional decomposition flow would be an iterative process, where each iteration consists of an automatic layout decomposition step and manual layout modification task. However, due to the NP-hardness of triple patterning layout decomposition, automatic full chip level layout decomposition requires long computational time and therefore design closure issues continue to linger around in the traditional flow. Challenged by this issue, we present a novel incremental layout decomposition framework to facilitate accelerated iterative decomposition. In the first iteration, our decomposer not only points out all conflicts, but also provides the suggestions to fix them. After the layout modification, instead of solving the full chip problem from scratch, our decomposer can provide a quick solution for a selected portion of layout. We believe this framework is efficient, in terms of performance and designer friendly.
Implementation of the SPH Procedure Within the MOOSE Finite Element Framework
NASA Astrophysics Data System (ADS)
Laurier, Alexandre
The goal of this thesis was to implement the SPH homogenization procedure within the MOOSE finite element framework at INL. Before this project, INL relied on DRAGON for its SPH homogenization, which was not flexible enough for its needs. As such, the SPH procedure was implemented for the neutron diffusion equation with the traditional, Selengut, and true Selengut normalizations. Another aspect of this research was to derive the SPH-corrected neutron transport equations and implement them in the same framework. Following in the footsteps of other articles, this feature was implemented and tested successfully with both the PN and SN transport calculation schemes. Although the results obtained for the power distribution in PWR assemblies show no advantage over the use of the SPH diffusion equation, we believe the inclusion of this transport correction will allow for better results in cases where either PN or SN is required. An additional aspect of this research was the implementation of a novel way of solving the non-linear SPH problem. Traditionally, this was done through a Picard, fixed-point iterative process, whereas the new implementation relies on MOOSE's Preconditioned Jacobian-Free Newton Krylov (PJFNK) method to allow for a direct solution of the non-linear problem. This novel implementation showed a decrease in calculation time by a factor reaching 50 and generated SPH factors that correspond to those obtained through a fixed-point iterative process with a very tight convergence criterion: ε < 10⁻⁸. The use of the PJFNK SPH procedure also allows convergence to be reached in problems containing important reflector regions and void boundary conditions, something that the traditional SPH method has never been able to achieve. At times when the PJFNK method cannot reach convergence on the SPH problem, a hybrid method is used whereby the traditional SPH iteration forces the initial condition to be within the radius of convergence of the Newton method. This new method was tested, with great success, on a simplified model of INL's TREAT reactor, a problem that includes very important graphite reflector regions as well as vacuum boundary conditions. To demonstrate the power of PJFNK SPH on a more common case, the correction was applied to a simplified PWR reactor core from the BEAVRS benchmark that included 15 assemblies and the water reflector, obtaining very good results. This opens up the possibility of applying the SPH correction to full reactor cores in order to reduce homogenization errors for use in transient or multi-physics calculations.
NASA Technical Reports Server (NTRS)
Periaux, J.
1979-01-01
The numerical simulation of the transonic flows of idealized fluids and of incompressible viscous fluids by nonlinear least squares methods is presented. The nonlinear equations, the boundary conditions, and the various constraints controlling the two types of flow are described. The standard iterative methods for solving a quasi-elliptic nonlinear partial differential equation are reviewed, with emphasis placed on two examples: the fixed point method applied to the Gelder functional in the case of compressible subsonic flows, and the Newton method used in the technique of decomposition of the lifting potential. The new abstract least squares method is discussed. It consists of replacing the nonlinear equation by a minimization problem in an H⁻¹-type Sobolev space.
Dynamic Analysis of a Reaction-Diffusion Rumor Propagation Model
NASA Astrophysics Data System (ADS)
Zhao, Hongyong; Zhu, Linhe
2016-06-01
The rapid development of the Internet, especially the emergence of the social networks, leads rumor propagation into a new media era. Rumor propagation in social networks has brought new challenges to network security and social stability. This paper, based on partial differential equations (PDEs), proposes a new SIS rumor propagation model by considering the effect of the communication between the different rumor infected users on rumor propagation. The stabilities of a nonrumor equilibrium point and a rumor-spreading equilibrium point are discussed by linearization technique and the upper and lower solutions method, and the existence of a traveling wave solution is established by the cross-iteration scheme accompanied by the technique of upper and lower solutions and Schauder’s fixed point theorem. Furthermore, we add the time delay to rumor propagation and deduce the conditions of Hopf bifurcation and stability switches for the rumor-spreading equilibrium point by taking the time delay as the bifurcation parameter. Finally, numerical simulations are performed to illustrate the theoretical results.
NASA Astrophysics Data System (ADS)
An, Hyunuk; Ichikawa, Yutaka; Tachikawa, Yasuto; Shiiba, Michiharu
2012-11-01
Three different iteration methods for a three-dimensional coordinate-transformed saturated-unsaturated flow model are compared in this study. The Picard and Newton iteration methods are the common approaches for solving Richards' equation. The Picard method is simple to implement and cost-efficient (on an individual iteration basis); however, it converges more slowly than the Newton method. On the other hand, although the Newton method converges faster, it is more complex to implement and consumes more CPU resources per iteration than the Picard method. The comparison of the two methods in finite-element models (FEM) for saturated-unsaturated flow has been well evaluated in previous studies. However, the two iteration methods might exhibit different behavior in a coordinate-transformed finite-difference model (FDM). In addition, the Newton-Krylov method could be a suitable alternative for the coordinate-transformed FDM, because the Newton method there requires the evaluation of a 19-point stencil matrix, and the formation of a 19-point stencil is quite a complex and laborious procedure. Instead, the Newton-Krylov method calculates the matrix-vector product, which can be easily approximated by calculating differences of the original nonlinear function. In this respect, the Newton-Krylov method might be the most appropriate iteration method for the coordinate-transformed FDM. However, it involves the additional cost of taking an approximation at each Krylov iteration. In this paper, we evaluated the efficiency and robustness of three iteration methods (the Picard, Newton, and Newton-Krylov methods) for simulating saturated-unsaturated flow through porous media using a three-dimensional coordinate-transformed FDM.
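The difference between the two classical iterations is easy to see on a scalar model problem. The sketch below solves a toy nonlinear equation by Picard substitution and by Newton's method and counts iterations; the model problem and tolerance are illustrative assumptions, not the Richards' equation discretization studied in the paper.

```python
import numpy as np

# Toy nonlinear problem: find u with u = exp(-u)  (i.e. F(u) = u - exp(-u) = 0).
tol = 1e-12

# Picard: simple substitution u <- g(u); cheap per step, linear convergence.
u, picard_iters = 0.5, 0
while abs(u - np.exp(-u)) > tol:
    u = np.exp(-u)
    picard_iters += 1

# Newton: u <- u - F(u)/F'(u); dearer per step, quadratic convergence.
v, newton_iters = 0.5, 0
while abs(v - np.exp(-v)) > tol:
    F, dF = v - np.exp(-v), 1 + np.exp(-v)
    v -= F / dF
    newton_iters += 1

# Typically Picard needs dozens of iterations here and Newton only a handful,
# mirroring the cost/robustness trade-off discussed in the abstract above.
```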
Higher-order finite-difference formulation of periodic Orbital-free Density Functional Theory
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ghosh, Swarnava; Suryanarayana, Phanish, E-mail: phanish.suryanarayana@ce.gatech.edu
2016-02-15
We present a real-space formulation and higher-order finite-difference implementation of periodic Orbital-free Density Functional Theory (OF-DFT). Specifically, utilizing a local reformulation of the electrostatic and kernel terms, we develop a generalized framework for performing OF-DFT simulations with different variants of the electronic kinetic energy. In particular, we propose a self-consistent field (SCF) type fixed-point method for calculations involving linear-response kinetic energy functionals. In this framework, evaluation of both the electronic ground-state and forces on the nuclei are amenable to computations that scale linearly with the number of atoms. We develop a parallel implementation of this formulation using the finite-difference discretization. We demonstrate that higher-order finite-differences can achieve relatively large convergence rates with respect to mesh-size in both the energies and forces. Additionally, we establish that the fixed-point iteration converges rapidly, and that it can be further accelerated using extrapolation techniques like Anderson's mixing. We validate the accuracy of the results by comparing the energies and forces with plane-wave methods for selected examples, including the vacancy formation energy in Aluminum. Overall, the suitability of the proposed formulation for scalable high performance computing makes it an attractive choice for large-scale OF-DFT calculations consisting of thousands of atoms.
Chen, Ying-ping; Chen, Chao-Hong
2010-01-01
An adaptive discretization method, called split-on-demand (SoD), enables estimation of distribution algorithms (EDAs) for discrete variables to solve continuous optimization problems. SoD randomly splits a continuous interval if the number of search points within the interval exceeds a threshold, which is decreased at every iteration. After the split operation, the nonempty intervals are assigned integer codes, and the search points are discretized accordingly. As an example of using SoD with EDAs, the integration of SoD and the extended compact genetic algorithm (ECGA) is presented and numerically examined. In this integration, we adopt a local search mechanism as an optional component of our back end optimization engine. As a result, the proposed framework can be considered as a memetic algorithm, and SoD can potentially be applied to other memetic algorithms. The numerical experiments consist of two parts: (1) a set of benchmark functions on which ECGA with SoD and ECGA with two well-known discretization methods: the fixed-height histogram (FHH) and the fixed-width histogram (FWH) are compared; (2) a real-world application, the economic dispatch problem, on which ECGA with SoD is compared to other methods. The experimental results indicate that SoD is a better discretization method to work with ECGA. Moreover, ECGA with SoD works quite well on the economic dispatch problem and delivers solutions better than the best known results obtained by other methods in existence.
Efficient Implementation of an Optimal Interpolator for Large Spatial Data Sets
NASA Technical Reports Server (NTRS)
Memarsadeghi, Nargess; Mount, David M.
2007-01-01
Scattered data interpolation is a problem of interest in numerous areas such as electronic imaging, smooth surface modeling, and computational geometry. Our motivation arises from applications in geology and mining, which often involve large scattered data sets and a demand for high accuracy. The method of choice is ordinary kriging, because it is a best unbiased estimator. Unfortunately, this interpolant is computationally very expensive to compute exactly. For n scattered data points, computing the value of a single interpolant involves solving a dense linear system of size roughly n x n, which is infeasible for large n. In practice, kriging is solved approximately by local approaches that are based on considering only a relatively small number of points that lie close to the query point. There are many problems with this local approach, however. First, determining the proper neighborhood size is tricky, and is usually handled by ad hoc methods such as selecting a fixed number of nearest neighbors or all the points lying within a fixed radius. Such fixed neighborhood sizes may not work well for all query points, depending on the local density of the point distribution. Local methods also suffer from the problem that the resulting interpolant is not continuous. Meyer showed that while kriging produces smooth continuous surfaces, it has zero-order continuity along its borders. Thus, at interface boundaries where the neighborhood changes, the interpolant behaves discontinuously. Therefore, it is important to consider and solve the global system for each interpolant. However, solving such large dense systems for each query point is impractical. Recently, a more principled approach to approximating kriging has been proposed, based on a technique called covariance tapering. The problems arise from the fact that the covariance functions used in kriging have global support. Our implementations combine, utilize, and enhance a number of different approaches that have been introduced in the literature for solving large linear systems for interpolation of scattered data points. For very large systems, exact methods such as Gaussian elimination are impractical since they require O(n³) time and O(n²) storage. As Billings et al. suggested, we use an iterative approach. In particular, we use the SYMMLQ method for solving the large but sparse ordinary kriging systems that result from tapering. The main technical issue that needs to be overcome in our algorithmic solution is that the points' covariance matrix for kriging should be symmetric positive definite. The goal of tapering is to obtain a sparse approximate representation of the covariance matrix while maintaining its positive definiteness. Furrer et al. used tapering to obtain a sparse linear system of the form Ax = b, where A is the tapered symmetric positive definite covariance matrix; thus, Cholesky factorization could be used to solve their linear systems. They implemented an efficient sparse Cholesky decomposition method. They also showed that if these tapers are used for a limited class of covariance models, the solution of the system converges to the solution of the original system. The matrix A in the ordinary kriging system, while symmetric, is not positive definite, so their approach is not applicable to the ordinary kriging system. Therefore, we use tapering only to obtain a sparse linear system, and we then use SYMMLQ to solve the ordinary kriging system.
We show that solving large kriging systems becomes practical via tapering and iterative methods, and results in lower estimation errors compared to traditional local approaches, and significant memory savings compared to the original global system. We also developed a more efficient variant of the sparse SYMMLQ method for large ordinary kriging systems. This approach adaptively finds the correct local neighborhood for each query point in the interpolation process.
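As a rough illustration of the tapering-plus-iterative-solver idea, the following sketch builds a sparse tapered covariance, assembles the symmetric indefinite ordinary kriging system, and solves it iteratively. SciPy ships MINRES rather than SYMMLQ, so MINRES (a closely related Krylov solver for symmetric indefinite systems) stands in; the exponential covariance model, the Wendland-type taper, and the synthetic data are all assumptions for illustration, not the authors' implementation.

    import numpy as np
    from scipy.sparse import bmat, csr_matrix
    from scipy.sparse.linalg import minres
    from scipy.spatial import cKDTree

    rng = np.random.default_rng(0)
    pts = rng.random((2000, 2))                      # scattered sample sites (toy data)
    z = np.sin(6 * pts[:, 0]) + rng.normal(0, 0.05, len(pts))

    def covariance(h, sill=1.0, c_range=0.3):        # exponential covariance model
        return sill * np.exp(-3.0 * h / c_range)

    def taper(h, theta=0.15):                        # compactly supported Wendland-type taper
        t = np.clip(1.0 - h / theta, 0.0, None)
        return t * t * (1.0 + 2.0 * h / theta)

    # Sparse tapered covariance: only pairs within the taper support survive.
    tree = cKDTree(pts)
    pairs = tree.sparse_distance_matrix(tree, max_distance=0.15).tocoo()
    n = len(pts)
    C = csr_matrix((covariance(pairs.data) * taper(pairs.data),
                    (pairs.row, pairs.col)), shape=(n, n)).tolil()
    C.setdiag(covariance(0.0))                       # self-pairs (distance 0) are not stored
    C = C.tocsr()

    # Ordinary kriging system [[C, 1], [1^T, 0]]: symmetric but indefinite.
    ones = csr_matrix(np.ones((n, 1)))
    A = bmat([[C, ones], [ones.T, None]]).tocsr()

    q = np.array([0.5, 0.5])                         # query point
    h0 = np.linalg.norm(pts - q, axis=1)
    b = np.append(covariance(h0) * taper(h0), 1.0)

    sol, info = minres(A, b)                         # Krylov solver for symmetric indefinite systems
    print("kriging estimate at q:", sol[:n] @ z, "(exit flag:", info, ")")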
Transformation of two and three-dimensional regions by elliptic systems
NASA Technical Reports Server (NTRS)
Mastin, C. Wayne
1991-01-01
A reliable linear system is presented for grid generation in 2-D and 3-D. The method is robust in the sense that convergence is guaranteed but is not as reliable as other nonlinear elliptic methods in generating nonfolding grids. The construction of nonfolding grids depends on having reasonable approximations of cell aspect ratios and an appropriate distribution of grid points on the boundary of the region. Some guidelines are included on approximating the aspect ratios, but little help is offered on setting up the boundary grid other than to say that in 2-D the boundary correspondence should be close to that generated by a conformal mapping. It is assumed that the functions which control the grid distribution depend only on the computational variables and not on the physical variables. Whether this is actually the case depends on how the grid is constructed. In a dynamic adaptive procedure where the grid is constructed in the process of solving a fluid flow problem, the grid is usually updated at fixed iteration counts using the current value of the control function. Since the control function is not being updated during the iteration of the grid equations, the grid construction is a linear procedure. However, in the case of a static adaptive procedure where a trial solution is computed and used to construct an adaptive grid, the control functions may be recomputed at every step of the grid iteration.
Steady state numerical solutions for determining the location of MEMS on projectile
NASA Astrophysics Data System (ADS)
Abiprayu, K.; Abdigusna, M. F. F.; Gunawan, P. H.
2018-03-01
This paper is devoted to comparing the numerical solutions of the steady and unsteady state heat distribution models on a projectile. Here, the best location for installing the MEMS on the projectile, based on the surface temperature, is investigated. Numerical iteration methods, Jacobi and Gauss-Seidel, have been elaborated to solve the steady state heat distribution model on the projectile. The results using Jacobi and Gauss-Seidel are shown to be identical, but the iteration costs of the two methods differ. Using Jacobi's method, the iteration cost is 350 iterations. Meanwhile, Gauss-Seidel requires 188 iterations, faster than Jacobi's method. The comparison of the simulation by the steady state model with the unsteady state model from a reference shows satisfactory agreement. Moreover, the best candidate location for installing MEMS on the projectile is observed at point T(10, 0), which has the lowest temperature among the observed points. The temperatures using Jacobi and Gauss-Seidel for scenarios 1 and 2 at T(10, 0) are 307 and 309 Kelvin, respectively.
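The Jacobi versus Gauss-Seidel iteration-count gap reported above can be reproduced qualitatively on a toy steady-state problem. The sketch below solves the Laplace equation on a square grid with fixed boundary temperatures; the grid size, boundary values, and tolerance are assumptions, not the paper's projectile model.

    import numpy as np

    def solve(method="jacobi", n=30, tol=1e-6, max_iter=10000):
        """Steady-state Laplace equation on an n x n grid with fixed boundaries."""
        T = np.zeros((n, n))
        T[0, :] = 400.0                              # assumed hot boundary (Kelvin)
        T[-1, :] = T[:, 0] = T[:, -1] = 300.0        # assumed cool boundaries
        for k in range(1, max_iter + 1):
            old = T.copy()
            if method == "jacobi":                   # updates use only old values
                T[1:-1, 1:-1] = 0.25 * (old[:-2, 1:-1] + old[2:, 1:-1]
                                        + old[1:-1, :-2] + old[1:-1, 2:])
            else:                                    # Gauss-Seidel: fresh values in place
                for i in range(1, n - 1):
                    for j in range(1, n - 1):
                        T[i, j] = 0.25 * (T[i-1, j] + T[i+1, j]
                                          + T[i, j-1] + T[i, j+1])
            if np.max(np.abs(T - old)) < tol:
                return k
        return max_iter

    print("Jacobi iterations:      ", solve("jacobi"))
    print("Gauss-Seidel iterations:", solve("gauss-seidel"))

Gauss-Seidel typically needs roughly half as many sweeps as Jacobi on this model problem, consistent with the 350 versus 188 counts quoted in the abstract.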
An adaptive moving mesh method for two-dimensional ideal magnetohydrodynamics
NASA Astrophysics Data System (ADS)
Han, Jianqiang; Tang, Huazhong
2007-01-01
This paper presents an adaptive moving mesh algorithm for two-dimensional (2D) ideal magnetohydrodynamics (MHD) that utilizes a staggered constrained transport technique to keep the magnetic field divergence-free. The algorithm consists of two independent parts: MHD evolution and mesh-redistribution. The first part is a high-resolution, divergence-free, shock-capturing scheme on a fixed quadrangular mesh, while the second part is an iterative procedure. In each iteration, mesh points are first redistributed, and then a conservative-interpolation formula is used to calculate the remapped cell-averages of the mass, momentum, and total energy on the resulting new mesh; the magnetic potential is remapped to the new mesh in a non-conservative way and is reconstructed to give a divergence-free magnetic field on the new mesh. Several numerical examples are given to demonstrate that the proposed method can achieve high numerical accuracy, track and resolve strong shock waves in ideal MHD problems, and preserve the divergence-free property of the magnetic field. Numerical examples include the smooth Alfvén wave problem, 2D and 2.5D shock tube problems, two rotor problems, the stringent blast problem, and the cloud-shock interaction problem.
Billings, Seth D.; Boctor, Emad M.; Taylor, Russell H.
2015-01-01
We present a probabilistic registration algorithm that robustly solves the problem of rigid-body alignment between two shapes with high accuracy, by aptly modeling measurement noise in each shape, whether isotropic or anisotropic. For point-cloud shapes, the probabilistic framework additionally enables modeling locally-linear surface regions in the vicinity of each point to further improve registration accuracy. The proposed Iterative Most-Likely Point (IMLP) algorithm is formed as a variant of the popular Iterative Closest Point (ICP) algorithm, which iterates between point-correspondence and point-registration steps. IMLP’s probabilistic framework is used to incorporate a generalized noise model into both the correspondence and the registration phases of the algorithm, hence its name as a most-likely point method rather than a closest-point method. To efficiently compute the most-likely correspondences, we devise a novel search strategy based on a principal direction (PD)-tree search. We also propose a new approach to solve the generalized total-least-squares (GTLS) sub-problem of the registration phase, wherein the point correspondences are registered under a generalized noise model. Our GTLS approach has improved accuracy, efficiency, and stability compared to prior methods presented for this problem and offers a straightforward implementation using standard least squares. We evaluate the performance of IMLP relative to a large number of prior algorithms including ICP, a robust variant on ICP, Generalized ICP (GICP), and Coherent Point Drift (CPD), as well as drawing close comparison with the prior anisotropic registration methods of GTLS-ICP and A-ICP. The performance of IMLP is shown to be superior with respect to these algorithms over a wide range of noise conditions, outliers, and misalignments using both mesh and point-cloud representations of various shapes. PMID:25748700
Iterative inversion of deformation vector fields with feedback control.
Dubey, Abhishek; Iliopoulos, Alexandros-Stavros; Sun, Xiaobai; Yin, Fang-Fang; Ren, Lei
2018-05-14
Often, the inverse deformation vector field (DVF) is needed together with the corresponding forward DVF in four-dimensional (4D) reconstruction and dose calculation, adaptive radiation therapy, and simultaneous deformable registration. This study aims at improving both the accuracy and efficiency of iterative algorithms for DVF inversion, and advancing our understanding of divergence and latency conditions. We introduce a framework of fixed-point iteration algorithms with active feedback control for DVF inversion. Based on rigorous convergence analysis, we design control mechanisms for modulating the inverse consistency (IC) residual of the current iterate, to be used as feedback into the next iterate. The control is designed adaptively to the input DVF with the objective to enlarge the convergence area and expedite convergence. Three particular settings of feedback control are introduced: constant value over the domain throughout the iteration; alternating values between iteration steps; and spatially variant values. We also introduce three spectral measures of the displacement Jacobian for characterizing a DVF. These measures reveal the critical role of what we term the nontranslational displacement component (NTDC) of the DVF. We carry out inversion experiments with an analytical DVF pair, and with DVFs associated with thoracic CT images of six patients at end of expiration and end of inspiration. The NTDC-adaptive iterations are shown to attain a larger convergence region at a faster pace compared to previous nonadaptive DVF inversion iteration algorithms. In our numerical experiments, alternating control yields smaller IC residuals and inversion errors than constant control. Spatially variant control renders smaller residuals and errors by at least an order of magnitude, compared to other schemes, in no more than 10 steps. Inversion results also show remarkable quantitative agreement with analysis-based predictions. Our analysis captures properties of DVF data associated with clinical CT images, and provides new understanding of iterative DVF inversion algorithms with a simple residual feedback control. Adaptive control is necessary and highly effective in the presence of nonsmall NTDCs. The adaptive iterations or the spectral measures, or both, may potentially be incorporated into deformable image registration methods.
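A 1D sketch of the controlled fixed-point DVF inversion may help fix ideas. The forward displacement u, the gain mu, and the discretization are assumptions: u is a smooth toy field with |u'| < 1, and mu is a constant feedback gain, so this corresponds only to the "constant control" setting; mu = 1 recovers the classic uncontrolled iteration, while the paper's alternating and spatially variant controls go beyond this.

    import numpy as np

    # 1D toy DVF: given forward displacement u, seek v with v(x) + u(x + v(x)) = 0.
    u = lambda x: 0.12 * np.sin(2 * np.pi * x)   # assumed smooth forward DVF (|u'| < 1)

    x = np.linspace(0.0, 1.0, 201)
    v = np.zeros_like(x)                         # initial guess v = 0
    mu = 0.9                                     # constant feedback gain; mu = 1 gives the
                                                 # classic uncontrolled fixed-point iteration
    for k in range(100):
        r = v + u(x + v)                         # inverse-consistency (IC) residual
        v -= mu * r                              # feedback-controlled update
        if np.max(np.abs(r)) < 1e-12:
            break

    print(f"stopped after {k + 1} steps; max |IC residual| = {np.max(np.abs(v + u(x + v))):.2e}")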
Efficient Coupling of Fluid-Plasma and Monte-Carlo-Neutrals Models for Edge Plasma Transport
NASA Astrophysics Data System (ADS)
Dimits, A. M.; Cohen, B. I.; Friedman, A.; Joseph, I.; Lodestro, L. L.; Rensink, M. E.; Rognlien, T. D.; Sjogreen, B.; Stotler, D. P.; Umansky, M. V.
2017-10-01
UEDGE has been valuable for modeling transport in the tokamak edge and scrape-off layer due in part to its efficient, fully implicit solution of coupled fluid-neutrals and plasma models. We are developing an implicit coupling of the kinetic Monte-Carlo (MC) code DEGAS-2, as the neutrals model component, to the UEDGE plasma component, based on an extension of the Jacobian-free Newton-Krylov (JFNK) method to MC residuals. The coupling components build on the methods and coding already present in UEDGE. For the linear Krylov iterations, a procedure has been developed to "extract" a good preconditioner from that of UEDGE. This preconditioner may also be used to greatly accelerate the convergence rate of a relaxed fixed-point iteration, which may provide a useful "intermediate" algorithm. The JFNK method also requires calculation of Jacobian-vector products, for which any finite-difference procedure is inaccurate when an MC component is present. A semi-analytical procedure that retains the standard MC accuracy and fully kinetic neutrals physics is therefore being developed. Prepared for US DOE by LLNL under Contract DE-AC52-07NA27344 and LDRD project 15-ERD-059, by PPPL under Contract DE-AC02-09CH11466, and supported in part by the U.S. DOE, OFES.
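The flavor of JFNK can be shown with SciPy's newton_krylov on a toy deterministic residual; the 1D diffusion-plus-nonlinear-sink equation below is an assumption standing in for the coupled plasma/neutrals system, not the UEDGE/DEGAS-2 equations. Note that the finite-difference Jacobian-vector product used here is exactly the piece that becomes inaccurate when the residual contains MC noise, which is the abstract's motivation for a semi-analytical procedure.

    import numpy as np
    from scipy.optimize import newton_krylov

    # Toy 1D nonlinear residual (assumption): u'' = 10 u^2, u(0) = 1, u(1) = 0.
    n = 100
    h2 = (1.0 / (n - 1)) ** 2

    def residual(u):
        r = np.empty_like(u)
        r[0], r[-1] = u[0] - 1.0, u[-1]          # Dirichlet boundary conditions
        r[1:-1] = (u[2:] - 2 * u[1:-1] + u[:-2]) / h2 - 10.0 * u[1:-1] ** 2
        return r

    # Jacobian-free Newton-Krylov: Jacobian-vector products are approximated by
    # finite differences of the residual, so no Jacobian matrix is ever formed.
    sol = newton_krylov(residual, np.linspace(1.0, 0.0, n), method="lgmres", f_tol=1e-8)
    print("max |residual| at solution:", np.max(np.abs(residual(sol))))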
Numerical methods for solving moment equations in kinetic theory of neuronal network dynamics
NASA Astrophysics Data System (ADS)
Rangan, Aaditya V.; Cai, David; Tao, Louis
2007-02-01
Recently developed kinetic theory and related closures for neuronal network dynamics have been demonstrated to be a powerful theoretical framework for investigating coarse-grained dynamical properties of neuronal networks. The moment equations arising from the kinetic theory are a system of (1 + 1)-dimensional nonlinear partial differential equations (PDE) on a bounded domain with nonlinear boundary conditions. The PDEs themselves are self-consistently specified by parameters which are functions of the boundary values of the solution. The moment equations can be stiff in space and time. Numerical methods are presented here for efficiently and accurately solving these moment equations. The essential ingredients in our numerical methods include: (i) the system is discretized in time with an implicit Euler method within a spectral deferred correction framework, therefore, the PDEs of the kinetic theory are reduced to a sequence, in time, of boundary value problems (BVPs) with nonlinear boundary conditions; (ii) a set of auxiliary parameters is introduced to recast the original BVP with nonlinear boundary conditions as BVPs with linear boundary conditions - with additional algebraic constraints on the auxiliary parameters; (iii) a careful combination of two Newton's iterates for the nonlinear BVP with linear boundary condition, interlaced with a Newton's iterate for solving the associated algebraic constraints is constructed to achieve quadratic convergence for obtaining the solutions with self-consistent parameters. It is shown that a simple fixed-point iteration can only achieve a linear convergence for the self-consistent parameters. The practicability and efficiency of our numerical methods for solving the moment equations of the kinetic theory are illustrated with numerical examples. It is further demonstrated that the moment equations derived from the kinetic theory of neuronal network dynamics can very well capture the coarse-grained dynamical properties of integrate-and-fire neuronal networks.
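The abstract's contrast between linear convergence of a simple fixed-point iteration and quadratic convergence of a Newton scheme for the self-consistent parameters can be seen on a scalar toy problem. The map g and starting point below are assumptions for illustration, not the moment equations.

    import numpy as np

    # Scalar stand-in for a self-consistent parameter: solve a = g(a).
    g = lambda a: np.cos(a)            # assumed toy self-consistency map
    gp = lambda a: -np.sin(a)          # its derivative

    root = 0.7390851332151607          # fixed point of cos (the Dottie number)

    a = 0.5
    print("fixed-point iteration (linear convergence):")
    for k in range(5):
        a = g(a)
        print(f"  step {k + 1}: error = {abs(a - root):.2e}")

    a = 0.5
    print("Newton on f(a) = a - g(a) (quadratic convergence):")
    for k in range(5):
        a = a - (a - g(a)) / (1.0 - gp(a))
        print(f"  step {k + 1}: error = {abs(a - root):.2e}")

The fixed-point errors shrink by a roughly constant factor per step, while the Newton errors roughly square each step, mirroring the convergence rates discussed above.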
Structure-aware depth super-resolution using Gaussian mixture model
NASA Astrophysics Data System (ADS)
Kim, Sunok; Oh, Changjae; Kim, Youngjung; Sohn, Kwanghoon
2015-03-01
This paper presents a probabilistic optimization approach to enhance the resolution of a depth map. Conventionally, a high-resolution color image is considered as a cue for depth super-resolution, under the assumption that pixels with similar color likely have similar depth. This assumption may induce texture transfer from the color image into the depth map and edge-blurring artifacts at the depth boundaries. In order to alleviate these problems, we propose an efficient depth prior exploiting a Gaussian mixture model, in which an estimated depth map is considered as a feature for computing the affinity between two pixels. Furthermore, a fixed-point iteration scheme is adopted to address the non-linearity of a constraint derived from the proposed prior. The experimental results show that the proposed method outperforms state-of-the-art methods both quantitatively and qualitatively.
Symplectic exponential Runge-Kutta methods for solving nonlinear Hamiltonian systems
NASA Astrophysics Data System (ADS)
Mei, Lijie; Wu, Xinyuan
2017-06-01
Symplecticity is an important property for exponential Runge-Kutta (ERK) methods in the sense of structure preservation when the underlying problem is a Hamiltonian system, even though ERK methods already offer higher accuracy and better efficiency than classical Runge-Kutta (RK) methods in dealing with stiff problems y′(t) = My + f(y). On account of this observation, the main theme of this paper is to derive and analyze the symplectic conditions for ERK methods. Using the fundamental analysis of geometric integrators, we first establish one class of sufficient conditions for symplectic ERK methods. It is shown that these conditions reduce to the conventional ones when M → 0, which means that these symplecticity conditions extend the conventional ones in the literature. Furthermore, we also present a new class of structure-preserving ERK methods possessing the remarkable property of symplecticity. Meanwhile, the revised stiff order conditions are proposed and investigated in detail. Since the symplectic ERK methods are implicit and iterative solutions are required in practice, we also investigate the convergence of the corresponding fixed-point iterative procedure. Finally, numerical experiments, including a nonlinear Schrödinger equation, a sine-Gordon equation, a nonlinear Klein-Gordon equation, and the well-known Fermi-Pasta-Ulam problem, are implemented in comparison with the corresponding symplectic RK methods, and the numerical results coincide with the theory and conclusions of this paper.
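The role of the inner fixed-point iteration in an implicit symplectic integrator can be illustrated with the implicit midpoint rule, the simplest one-stage symplectic RK method; the pendulum Hamiltonian, step size, and sweep counts below are assumptions, and the exponential (ERK) machinery of the paper is not reproduced.

    import numpy as np

    # Implicit midpoint rule for the pendulum H(q, p) = p^2/2 - cos(q).
    # The implicit stage equation is solved by fixed-point iteration.
    def f(y):
        q, p = y
        return np.array([p, -np.sin(q)])

    def midpoint_step(y, h, tol=1e-13, max_sweeps=50):
        z = f(y)                                  # initial guess for the stage value
        for _ in range(max_sweeps):
            z_new = f(y + 0.5 * h * z)            # fixed-point sweep; a contraction for small h
            if np.max(np.abs(z_new - z)) < tol:
                break
            z = z_new
        return y + h * z_new

    y, h = np.array([1.2, 0.0]), 0.05
    H0 = -np.cos(1.2)                             # initial energy (p = 0)
    for _ in range(2000):
        y = midpoint_step(y, h)
    print("energy drift after 2000 steps:", abs(y[1]**2 / 2 - np.cos(y[0]) - H0))

The energy drift stays small and bounded over long times, the hallmark of a symplectic method, provided the inner fixed-point iteration is solved to sufficient accuracy.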
Rate-Compatible LDPC Codes with Linear Minimum Distance
NASA Technical Reports Server (NTRS)
Divsalar, Dariush; Jones, Christopher; Dolinar, Samuel
2009-01-01
A recently developed method of constructing protograph-based low-density parity-check (LDPC) codes provides for low iterative decoding thresholds and minimum distances proportional to block sizes, and can be used for various code rates. A code constructed by this method can have either fixed input block size or fixed output block size and, in either case, provides rate compatibility. The method comprises two submethods: one for fixed input block size and one for fixed output block size. The first submethod is useful for applications in which there are requirements for rate-compatible codes that have fixed input block sizes. These are codes in which only the numbers of parity bits are allowed to vary. The fixed-output-block-size submethod is useful for applications in which framing constraints are imposed on the physical layers of affected communication systems. An example of such a system is one that conforms to one of many new wireless-communication standards that involve the use of orthogonal frequency-division modulation.
Analytic Evolution of Singular Distribution Amplitudes in QCD
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tandogan Kunkel, Asli
2014-08-01
Distribution amplitudes (DAs) are the basic functions that contain information about the quark momentum. DAs are necessary to describe hard exclusive processes in quantum chromodynamics. We describe a method of analytic evolution of DAs that have singularities, such as nonzero values at the end points of the support region, jumps at some points inside the support region, and cusps. We illustrate the method by applying it to the evolution of a flat (constant) DA and an antisymmetric flat DA, and then use the method for the evolution of the two-photon generalized distribution amplitude. Our approach to DA evolution has advantages over the standard method of expansion in Gegenbauer polynomials [1, 2] and over a straightforward iteration of an initial distribution with the evolution kernel. Expansion in Gegenbauer polynomials requires an infinite number of terms in order to accurately reproduce functions in the vicinity of singular points. Straightforward iteration of an initial distribution produces logarithmically divergent terms at each iteration. In our method the logarithmic singularities are summed from the start, which immediately produces a continuous curve. Afterwards, in order to get precise results, only one or two iterations are needed.
Fixed-point image orthorectification algorithms for reduced computational cost
NASA Astrophysics Data System (ADS)
French, Joseph Clinton
Imaging systems have been applied to many new applications in recent years. With the advent of low-cost, low-power focal planes and more powerful, lower cost computers, remote sensing applications have become more widespread. Many of these applications require some form of geolocation, especially when relative distances are desired. However, when greater global positional accuracy is needed, orthorectification becomes necessary. Orthorectification is the process of projecting an image onto a Digital Elevation Map (DEM), which removes terrain distortions and corrects the perspective distortion by changing the viewing angle to be perpendicular to the projection plane. Orthorectification is used in disaster tracking, landscape management, wildlife monitoring and many other applications. However, orthorectification is a computationally expensive process due to floating point operations and divisions in the algorithm. To reduce the computational cost of on-board processing, two novel algorithm modifications are proposed. One modification is projection utilizing fixed-point arithmetic. Fixed-point arithmetic removes the floating point operations and reduces the processing time by operating only on integers. The second modification is replacement of the division inherent in projection with a multiplication by the inverse. Computing the inverse would itself require an iterative procedure; therefore, the inverse is replaced with a linear approximation. As a result of these modifications, the processing time of projection is reduced by a factor of 1.3x with an average pixel position error of 0.2% of a pixel size for 128-bit integer processing and over 4x with an average pixel position error of less than 13% of a pixel size for 64-bit integer processing. A secondary inverse function approximation is also developed that replaces the linear approximation with a quadratic. The quadratic approximation produces a more accurate approximation of the inverse, allowing for an integer multiplication calculation to be used in place of the traditional floating point division. This method increases the throughput of the orthorectification operation by 38% when compared to floating point processing. Additionally, this method improves the accuracy of the existing integer-based orthorectification algorithms in terms of average pixel distance, increasing the accuracy of the algorithm by more than 5x. The quadratic function reduces the pixel position error to 2% and is still 2.8x faster than the 128-bit floating point algorithm.
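The division-to-multiplication idea can be sketched in a small Q16.16 fixed-point toy rather than the dissertation's 64/128-bit pipeline. The Q-format, the [0.5, 1) normalization, and the linear-seed constants 48/17 and 32/17 (the standard Newton-Raphson division seed) are assumptions; the optional Newton refinement plays the role of the quadratic-accuracy variant.

    # Q16.16 fixed-point sketch: replace division by multiplication with an
    # approximated reciprocal (linear seed, optional Newton refinement).
    FRAC_BITS = 16
    ONE = 1 << FRAC_BITS

    def to_fixed(x: float) -> int:
        return int(round(x * ONE))

    def fmul(a: int, b: int) -> int:
        return (a * b) >> FRAC_BITS               # fixed-point multiply

    def recip(d: int, refine: bool = True) -> int:
        """Approximate 1/d for d normalized to [0.5, 1), all in Q16.16."""
        x = to_fixed(48 / 17) - fmul(to_fixed(32 / 17), d)   # linear approximation
        if refine:                                # one Newton step x <- x*(2 - d*x),
            x = fmul(x, 2 * ONE - fmul(d, x))     # quadratic error reduction
        return x

    d = to_fixed(0.8125)
    quotient = fmul(to_fixed(0.6), recip(d))      # 0.6 / 0.8125 without a divide
    print(quotient / ONE, "vs exact", 0.6 / 0.8125)

The printed value agrees with the exact quotient to roughly three decimal places; dropping the refinement leaves only the coarser linear-approximation accuracy, which is the trade-off the abstract quantifies.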
Newton's method: A link between continuous and discrete solutions of nonlinear problems
NASA Technical Reports Server (NTRS)
Thurston, G. A.
1980-01-01
Newton's method for nonlinear mechanics problems replaces the governing nonlinear equations by an iterative sequence of linear equations. When the linear equations are linear differential equations, the equations are usually solved by numerical methods. The iterative sequence in Newton's method can exhibit poor convergence properties when the nonlinear problem has multiple solutions for a fixed set of parameters, unless the iterative sequences are aimed at solving for each solution separately. The theory of the linear differential operators is often a better guide for solution strategies in applying Newton's method than the theory of linear algebra associated with the numerical analogs of the differential operators. In fact, the theory for the differential operators can suggest the choice of numerical linear operators. In this paper the method of variation of parameters from the theory of linear ordinary differential equations is examined in detail in the context of Newton's method to demonstrate how it might be used as a guide for numerical solutions.
Nonparametric estimation and testing of fixed effects panel data models
Henderson, Daniel J.; Carroll, Raymond J.; Li, Qi
2009-01-01
In this paper we consider the problem of estimating nonparametric panel data models with fixed effects. We introduce an iterative nonparametric kernel estimator. We also extend the estimation method to the case of a semiparametric partially linear fixed effects model. To determine whether a parametric, semiparametric or nonparametric model is appropriate, we propose test statistics to test between the three alternatives in practice. We further propose a test statistic for testing the null hypothesis of random effects against fixed effects in a nonparametric panel data regression model. Simulations are used to examine the finite sample performance of the proposed estimators and the test statistics. PMID:19444335
Liu, Wanli
2017-03-08
The time delay calibration between Light Detection and Ranging (LiDAR) and Inertial Measurement Units (IMUs) is an essential prerequisite for its applications. However, the correspondences between LiDAR and IMU measurements are usually unknown, and thus cannot be computed directly for the time delay calibration. In order to solve the problem of LiDAR-IMU time delay calibration, this paper presents a fusion method based on iterative closest point (ICP) and iterated sigma point Kalman filter (ISPKF), which combines the advantages of ICP and ISPKF. The ICP algorithm can precisely determine the unknown transformation between LiDAR-IMU; and the ISPKF algorithm can optimally estimate the time delay calibration parameters. First of all, the coordinate transformation from the LiDAR frame to the IMU frame is realized. Second, the measurement model and time delay error model of LiDAR and IMU are established. Third, the methodology of the ICP and ISPKF procedure is presented for LiDAR-IMU time delay calibration. Experimental results are presented that validate the proposed method and demonstrate the time delay error can be accurately calibrated.
Constructive methods of invariant manifolds for kinetic problems
NASA Astrophysics Data System (ADS)
Gorban, Alexander N.; Karlin, Iliya V.; Zinovyev, Andrei Yu.
2004-06-01
The concept of the slow invariant manifold is recognized as the central idea underpinning a transition from micro to macro and model reduction in kinetic theories. We present the Constructive Methods of Invariant Manifolds for model reduction in physical and chemical kinetics, developed during the last two decades. The physical problem of reduced description is studied in the most general form as a problem of constructing the slow invariant manifold. The invariance conditions are formulated as a differential equation for a manifold immersed in the phase space (the invariance equation). The equation of motion for immersed manifolds is obtained (the film extension of the dynamics). Invariant manifolds are fixed points for this equation, and slow invariant manifolds are Lyapunov-stable fixed points; thus slowness is presented as stability. A collection of methods to derive analytically and to compute numerically the slow invariant manifolds is presented. Among them, iteration methods based on incomplete linearization, the relaxation method, and the method of invariant grids are developed. The systematic use of thermodynamic structures and of the quasi-chemical representation allows the construction of approximations which are in concordance with physical restrictions. The following examples of applications are presented: nonperturbative derivation of physically consistent hydrodynamics from the Boltzmann equation and from the reversible dynamics, for Knudsen numbers Kn∼1; construction of the moment equations for nonequilibrium media and their dynamical correction (instead of an extension of the list of variables) to gain more accuracy in the description of highly nonequilibrium flows; determination of molecular dimensions (as diameters of equivalent hard spheres) from experimental viscosity data; model reduction in chemical kinetics; derivation and numerical implementation of constitutive equations for polymeric fluids; the limits of macroscopic description for polymer molecules, etc.
ISS Double-Gimbaled CMG Subsystem Simulation Using the Agile Development Method
NASA Technical Reports Server (NTRS)
Inampudi, Ravi
2016-01-01
This paper presents an evolutionary approach in simulating a cluster of 4 Control Moment Gyros (CMG) on the International Space Station (ISS) using a common sense approach (the agile development method) for concurrent mathematical modeling and simulation of the CMG subsystem. This simulation is part of Training systems for the 21st Century simulator which will provide training for crew members, instructors, and flight controllers. The basic idea of how the CMGs on the space station are used for its non-propulsive attitude control is briefly explained to set up the context for simulating a CMG subsystem. Next different reference frames and the detailed equations of motion (EOM) for multiple double-gimbal variable-speed control moment gyroscopes (DGVs) are presented. Fixing some of the terms in the EOM becomes the special case EOM for ISS's double-gimbaled fixed speed CMGs. CMG simulation development using the agile development method is presented in which customer's requirements and solutions evolve through iterative analysis, design, coding, unit testing and acceptance testing. At the end of the iteration a set of features implemented in that iteration are demonstrated to the flight controllers thus creating a short feedback loop and helping in creating adaptive development cycles. The unified modeling language (UML) tool is used in illustrating the user stories, class designs and sequence diagrams. This incremental development approach of mathematical modeling and simulating the CMG subsystem involved the development team and the customer early on, thus improving the quality of the working CMG system in each iteration and helping the team to accurately predict the cost, schedule and delivery of the software.
Nested Conjugate Gradient Algorithm with Nested Preconditioning for Non-linear Image Restoration.
Skariah, Deepak G; Arigovindan, Muthuvel
2017-06-19
We develop a novel optimization algorithm, which we call the Nested Non-Linear Conjugate Gradient algorithm (NNCG), for image restoration based on quadratic data fitting and smooth non-quadratic regularization. The algorithm is constructed as a nesting of two conjugate gradient (CG) iterations. The outer iteration is constructed as a preconditioned non-linear CG algorithm; the preconditioning is performed by the inner CG iteration, which is linear. The inner CG iteration, which performs preconditioning for the outer CG iteration, is itself accelerated by another FFT-based non-iterative preconditioner. We prove that the method converges to a stationary point for both convex and non-convex regularization functionals. We demonstrate experimentally that the proposed method outperforms the well-known majorization-minimization method used for convex regularization, and a non-convex inertial-proximal method for non-convex regularization functionals.
On iterative processes in the Krylov-Sonneveld subspaces
NASA Astrophysics Data System (ADS)
Ilin, Valery P.
2016-10-01
The iterative Induced Dimension Reduction (IDR) methods are considered for solving large systems of linear algebraic equations (SLAEs) with nonsingular nonsymmetric matrices. These approaches have been investigated by many authors and are sometimes characterized as an alternative to the classical processes of Krylov type. The key steps of the IDR algorithms consist in the construction of the embedded Sonneveld subspaces, which have decreasing dimensions and use orthogonalization to some fixed subspace. Other independent approaches to the study and optimization of the iterations are based on augmented and modified Krylov subspaces, using aggregation and deflation procedures that employ various low-rank approximations of the original matrices. The goal of this paper is to show that the IDR method in Sonneveld subspaces presents an original interpretation of the modified algorithms in the Krylov subspaces. In particular, such a description is given for the multi-preconditioned semi-conjugate direction methods, which are relevant for parallel algebraic domain decomposition approaches.
Cyclic Game Dynamics Driven by Iterated Reasoning
Frey, Seth; Goldstone, Robert L.
2013-01-01
Recent theories from complexity science argue that complex dynamics are ubiquitous in social and economic systems. These claims emerge from the analysis of individually simple agents whose collective behavior is surprisingly complicated. However, economists have argued that iterated reasoning–what you think I think you think–will suppress complex dynamics by stabilizing or accelerating convergence to Nash equilibrium. We report stable and efficient periodic behavior in human groups playing the Mod Game, a multi-player game similar to Rock-Paper-Scissors. The game rewards subjects for thinking exactly one step ahead of others in their group. Groups that play this game exhibit cycles that are inconsistent with any fixed-point solution concept. These cycles are driven by a “hopping” behavior that is consistent with other accounts of iterated reasoning: agents are constrained to about two steps of iterated reasoning and learn an additional one-half step with each session. If higher-order reasoning can be complicit in complex emergent dynamics, then cyclic and chaotic patterns may be endogenous features of real-world social and economic systems. PMID:23441191
Du, Shaoyi; Xu, Yiting; Wan, Teng; Hu, Huaizhong; Zhang, Sirui; Xu, Guanglin; Zhang, Xuetao
2017-01-01
The iterative closest point (ICP) algorithm is efficient and accurate for rigid registration, but it needs good initial parameters and easily fails when the rotation angle between two point sets is large. To deal with this problem, a new objective function is proposed by introducing a rotation invariant feature based on the Euclidean distance between each point and a global reference point, where the global reference point is a rotation invariant. After that, this optimization problem is solved by a variant of the ICP algorithm, which is an iterative method. Firstly, the accurate correspondence is established by using the weighted rotation invariant feature distance and position distance together. Secondly, the rigid transformation is solved by the singular value decomposition method. Thirdly, the weight is adjusted to control the relative contribution of the positions and features. Finally, this new algorithm accomplishes the registration in a coarse-to-fine way whatever the initial rotation angle is, and it is demonstrated to converge monotonically. The experimental results validate that the proposed algorithm is more accurate and robust compared with the original ICP algorithm.
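For reference, a minimal sketch of the classic point-to-point ICP baseline that such variants build on, with SVD-based (Kabsch) transform estimation; the synthetic data and the 20-degree misalignment are assumptions, and the rotation-invariant-feature weighting of the proposed method is not reproduced here.

    import numpy as np
    from scipy.spatial import cKDTree

    def best_rigid(P, Q):
        """SVD-based least-squares rigid transform mapping points P onto Q."""
        cP, cQ = P.mean(0), Q.mean(0)
        U, _, Vt = np.linalg.svd((P - cP).T @ (Q - cQ))
        R = Vt.T @ U.T
        if np.linalg.det(R) < 0:                  # guard against reflections
            Vt[-1] *= -1
            R = Vt.T @ U.T
        return R, cQ - R @ cP

    def icp(P, Q, iters=50):
        """Classic ICP: nearest-neighbor correspondence, then closed-form
        rigid registration, repeated; returns the accumulated transform."""
        tree = cKDTree(Q)
        R_tot, t_tot = np.eye(P.shape[1]), np.zeros(P.shape[1])
        for _ in range(iters):
            _, idx = tree.query(P)                # nearest-neighbor correspondences
            R, t = best_rigid(P, Q[idx])
            P = P @ R.T + t
            R_tot, t_tot = R @ R_tot, R @ t_tot + t
        return R_tot, t_tot

    rng = np.random.default_rng(1)
    Q = rng.random((400, 3))
    th = np.deg2rad(20)                           # modest rotation: classic ICP's comfort zone
    Rz = np.array([[np.cos(th), -np.sin(th), 0],
                   [np.sin(th),  np.cos(th), 0],
                   [0, 0, 1]])
    P = (Q - 0.1) @ Rz.T                          # rotated and shifted copy of Q
    R, t = icp(P, Q)
    err = np.sqrt(np.mean(np.sum((P @ R.T + t - Q) ** 2, axis=1)))
    print("RMS alignment error after ICP:", err)

With a larger initial rotation the nearest-neighbor correspondences become mostly wrong and this baseline stalls in a local minimum, which is exactly the failure mode the rotation-invariant feature above is designed to avoid.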
NASA Astrophysics Data System (ADS)
Gómez-Aguilar, J. F.
2018-03-01
In this paper, we analyze an alcoholism model which incorporates the impact of Twitter via Liouville-Caputo and Atangana-Baleanu-Caputo fractional derivatives with constant and variable order. Two fractional mathematical models are considered, with and without delay. Special solutions were obtained using an iterative scheme via the Laplace and Sumudu transforms. We studied the uniqueness and existence of the solutions employing the fixed-point theorem. The generalized model with variable order was solved numerically via the Adams method and the Adams-Bashforth-Moulton scheme. Stability and convergence of the numerical solutions are presented in detail. Numerical examples of the approximate solutions are provided to show that the numerical methods are computationally efficient. We believe that, by including both the fractional derivatives and finite time delays, we have established a more complete and more realistic alcoholism model that better reflects how the impact of Twitter affects the spread of drinking.
A minimization method on the basis of embedding the feasible set and the epigraph
NASA Astrophysics Data System (ADS)
Zabotin, I. Ya; Shulgina, O. N.; Yarullin, R. S.
2016-11-01
We propose a method for the conditional minimization of a convex nonsmooth function which belongs to the class of cutting-plane methods. While constructing iteration points, the feasible set and the epigraph of the objective function are approximated by polyhedral sets, so the auxiliary problems of constructing iteration points are linear programming problems. During the optimization process it is possible to update the sets which approximate the epigraph. These updates are performed by periodically dropping the cutting planes which form the embedding sets. Convergence of the proposed method is proved, and some realizations of the method are discussed.
Convergence of Newton's method for a single real equation
NASA Technical Reports Server (NTRS)
Campbell, C. W.
1985-01-01
Newton's method for finding the zeroes of a single real function is investigated in some detail. Convergence is generally checked using the Contraction Mapping Theorem, which yields sufficient but not necessary conditions for convergence of the general single point iteration method. The resulting convergence intervals are frequently considerably smaller than actual convergence zones. For a specific single point iteration method, such as Newton's method, better estimates of regions of convergence should be possible. A technique is described which, under certain conditions (frequently satisfied by well-behaved functions), gives much larger zones where convergence is guaranteed.
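To make the notion of a convergence zone concrete, one can scan starting points empirically. The cubic below is an assumption, a standard textbook example whose Newton iteration has a 2-cycle between 0 and 1, so some starting points never converge even though a real root exists.

    import numpy as np

    # Empirically map the convergence zone of Newton's method for one real equation.
    f  = lambda x: x**3 - 2*x + 2          # assumed test function (one real root near -1.77)
    fp = lambda x: 3*x**2 - 2

    def newton_converges(x0, tol=1e-12, max_iter=100):
        x = x0
        for _ in range(max_iter):
            d = fp(x)
            if abs(d) < 1e-14:
                return False               # near-flat point: the Newton step blows up
            x_new = x - f(x) / d
            if abs(x_new - x) < tol:
                return True
            x = x_new
        return False                       # catches the classic 0 <-> 1 two-cycle

    starts = np.linspace(-3, 3, 61)
    zone = [x0 for x0 in starts if newton_converges(x0)]
    print(f"{len(zone)}/{len(starts)} starting points converge")

The empirically mapped zone is noticeably larger than what a contraction-mapping bound certifies, which is the gap the abstract's technique aims to close.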
Liang, Shanshan; Yuan, Fusong; Luo, Xu; Yu, Zhuoren; Tang, Zhihui
2018-04-05
Marginal discrepancy is key to evaluating the accuracy of fixed dental prostheses, and an improved method of evaluating marginal discrepancy is needed. The purpose of this in vitro study was to evaluate the absolute marginal discrepancy of ceramic crowns fabricated using conventional and digital methods, with a digital method for the quantitative evaluation of absolute marginal discrepancy. The novel method was based on 3-dimensional scanning, iterative closest point registration techniques, and reverse engineering theory. Six standard tooth preparations for the right maxillary central incisor, right maxillary second premolar, right maxillary second molar, left mandibular lateral incisor, left mandibular first premolar, and left mandibular first molar were selected. Ten conventional ceramic crowns and 10 CEREC crowns were fabricated for each tooth preparation. A dental cast scanner was used to obtain 3-dimensional data of the preparations and ceramic crowns, and the data were compared with the "virtual seating" iterative closest point technique. Reverse engineering software used edge sharpening and other functional modules to extract the margins of the preparations and crowns. Finally, quantitative evaluation of the absolute marginal discrepancy of the ceramic crowns was obtained from the 2-dimensional cross-sectional straight-line distance between points on the margin of the ceramic crowns and the standard preparations, based on the circumferential function module along the long axis. The absolute marginal discrepancy of the ceramic crowns fabricated using conventional methods was 115 ±15.2 μm, and that of the crowns fabricated using the digital technique was 110 ±14.3 μm. ANOVA showed no statistical difference between the 2 methods or among ceramic crowns for different teeth (P>.05). The digital quantitative evaluation method for the absolute marginal discrepancy of ceramic crowns was established. The evaluations determined that the absolute marginal discrepancies were within a clinically acceptable range. This method is acceptable for the digital evaluation of the accuracy of complete crowns.
Designing gradient coils with reduced hot spot temperatures.
While, Peter T; Forbes, Larry K; Crozier, Stuart
2010-03-01
Gradient coil temperature is an important concern in the design and construction of MRI scanners. Closely spaced gradient coil windings cause temperature hot spots within the system as a result of Ohmic heating associated with large current being driven through resistive material, and can strongly affect the performance of the coils. In this paper, a model is presented for predicting the spatial temperature distribution of a gradient coil, including the location and extent of temperature hot spots. Subsequently, a method is described for designing gradient coils with improved temperature distributions and reduced hot spot temperatures. Maximum temperature represents a non-linear constraint, and a relaxed fixed-point iteration routine is proposed to adjust coil windings iteratively to minimise this coil feature. Several examples are considered that assume different thermal material properties and cooling mechanisms for the gradient system. Coil winding solutions are obtained for all cases considered that display a considerable drop in hot spot temperature (>20%) when compared to standard minimum power gradient coils with equivalent gradient homogeneity, efficiency and inductance. The method is semi-analytical in nature and can be adapted easily to consider other non-linear constraints in the design of gradient coils or similar systems.
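The relaxed fixed-point device itself is generic and can be sketched without the coil model: replace a plain update x = G(x) with a damped blend of old and new iterates so that an otherwise marginal or oscillatory iteration settles. The self-map G, the dimension, and the relaxation factor below are toy assumptions, not the paper's winding adjustment.

    import numpy as np

    # Generic relaxed fixed-point update x <- (1 - omega) x + omega G(x),
    # standing in for iterative adjustment of a design vector under a
    # non-linear constraint. G is an arbitrary smooth self-map (assumption).
    def G(x):
        return np.sqrt(np.abs(np.sin(x)) + 0.1)

    x = np.full(5, 2.0)
    omega = 0.5                                    # relaxation factor in (0, 1]
    for k in range(200):
        x_new = (1 - omega) * x + omega * G(x)     # relaxed fixed-point step
        if np.max(np.abs(x_new - x)) < 1e-12:
            break
        x = x_new
    print(f"converged in {k + 1} steps; residual {np.max(np.abs(x - G(x))):.1e}")

Smaller omega trades speed for robustness: it shrinks the effective Lipschitz constant of the update, which is why relaxation helps when the unrelaxed map is not a comfortable contraction.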
Controlled wavelet domain sparsity for x-ray tomography
NASA Astrophysics Data System (ADS)
Purisha, Zenith; Rimpeläinen, Juho; Bubba, Tatiana; Siltanen, Samuli
2018-01-01
Tomographic reconstruction is an ill-posed inverse problem that calls for regularization. One possibility is to require sparsity of the unknown in an orthonormal wavelet basis. This, in turn, can be achieved by variational regularization, where the penalty term is the sum of the absolute values of the wavelet coefficients. With the primal-dual fixed point algorithm, the minimizer of the variational regularization functional can be computed iteratively using a soft-thresholding operation. Choosing the soft-thresholding parameter
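The paper's primal-dual fixed point scheme is not reproduced here, but the soft-thresholding step it relies on can be illustrated with the closely related ISTA iteration. The one-level orthonormal Haar transform, the toy Gaussian blur operator standing in for the X-ray projector, and the parameter values are all assumptions.

    import numpy as np

    def haar(x):                      # one-level orthonormal Haar analysis
        a = (x[0::2] + x[1::2]) / np.sqrt(2)
        d = (x[0::2] - x[1::2]) / np.sqrt(2)
        return a, d

    def ihaar(a, d):                  # synthesis (exact inverse of haar)
        x = np.empty(2 * len(a))
        x[0::2] = (a + d) / np.sqrt(2)
        x[1::2] = (a - d) / np.sqrt(2)
        return x

    def soft(c, lam):                 # soft-thresholding: prox of lam * ||.||_1
        return np.sign(c) * np.maximum(np.abs(c) - lam, 0.0)

    # ISTA for min_x 0.5 ||A x - b||^2 + lam ||W x||_1 with orthonormal W (Haar).
    rng = np.random.default_rng(0)
    n = 128
    A = np.exp(-0.5 * ((np.arange(n)[:, None] - np.arange(n)[None, :]) / 2.0) ** 2)
    A /= A.sum(1, keepdims=True)      # toy blur operator (assumption)
    x_true = np.zeros(n); x_true[30:60] = 1.0
    b = A @ x_true + 0.01 * rng.normal(size=n)

    lam, L = 1e-3, np.linalg.norm(A, 2) ** 2      # step 1/L guarantees convergence
    x = np.zeros(n)
    for _ in range(300):
        g = A.T @ (A @ x - b)                     # gradient of the data-fit term
        a_c, d_c = haar(x - g / L)                # gradient step, then to wavelet domain
        x = ihaar(soft(a_c, lam / L), soft(d_c, lam / L))   # shrink and come back
    print("relative error:", np.linalg.norm(x - x_true) / np.linalg.norm(x_true))

The threshold lam/L is exactly the soft-thresholding parameter whose choice the abstract is concerned with: too small leaves noise, too large erases genuine wavelet detail.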
Radiation pattern synthesis of planar antennas using the iterative sampling method
NASA Technical Reports Server (NTRS)
Stutzman, W. L.; Coffey, E. L.
1975-01-01
A synthesis method is presented for determining an excitation of an arbitrary (but fixed) planar source configuration. The desired radiation pattern is specified over all or part of the visible region. It may have multiple and/or shaped main beams with low sidelobes. The iterative sampling method is used to find an excitation of the source which yields a radiation pattern that approximates the desired pattern to within a specified tolerance. In this paper the method is used to calculate excitations for line sources, linear arrays (equally and unequally spaced), rectangular apertures, rectangular arrays (arbitrary spacing grid), and circular apertures. Examples using these sources to form patterns with shaped main beams, multiple main beams, shaped sidelobe levels, and combinations thereof are given.
NASA Astrophysics Data System (ADS)
Arndt, S.; Merkel, P.; Monticello, D. A.; Reiman, A. H.
1999-04-01
Fixed- and free-boundary equilibria for Wendelstein 7-X (W7-X) [W. Lotz et al., Plasma Physics and Controlled Nuclear Fusion Research 1990 (Proc. 13th Int. Conf. Washington, DC, 1990), (International Atomic Energy Agency, Vienna, 1991), Vol. 2, p. 603] configurations are calculated using the Princeton Iterative Equilibrium Solver (PIES) [A. H. Reiman et al., Comput. Phys. Commun., 43, 157 (1986)] to deal with magnetic islands and stochastic regions. Usually, these W7-X configurations require a large number of iterations for PIES convergence. Here, two methods have been successfully tested in an attempt to decrease the number of iterations needed for convergence. First, periodic sequences of different blending parameters are used. Second, the initial guess is vastly improved by using results of the Variational Moments Equilibrium Code (VMEC) [S. P. Hirshman et al., Phys. Fluids 26, 3553 (1983)]. Use of these two methods has allowed verification of the Hamada condition, and a tendency of "self-healing" of islands has been observed.
Iterative algorithms for computing the feedback Nash equilibrium point for positive systems
NASA Astrophysics Data System (ADS)
Ivanov, I.; Imsland, Lars; Bogdanova, B.
2017-03-01
The paper studies N-player linear quadratic differential games on an infinite time horizon with deterministic feedback information structure. It introduces two iterative methods (the Newton method as well as its accelerated modification) in order to compute the stabilising solution of a set of generalised algebraic Riccati equations. The latter is related to the Nash equilibrium point of the considered game model. Moreover, we derive the sufficient conditions for convergence of the proposed methods. Finally, we discuss two numerical examples so as to illustrate the performance of both of the algorithms.
Liu, Wanli
2017-01-01
The time delay calibration between Light Detection and Ranging (LiDAR) and Inertial Measurement Units (IMUs) is an essential prerequisite for its applications. However, the correspondences between LiDAR and IMU measurements are usually unknown, and thus cannot be computed directly for the time delay calibration. In order to solve the problem of LiDAR-IMU time delay calibration, this paper presents a fusion method based on iterative closest point (ICP) and iterated sigma point Kalman filter (ISPKF), which combines the advantages of ICP and ISPKF. The ICP algorithm can precisely determine the unknown transformation between LiDAR-IMU; and the ISPKF algorithm can optimally estimate the time delay calibration parameters. First of all, the coordinate transformation from the LiDAR frame to the IMU frame is realized. Second, the measurement model and time delay error model of LiDAR and IMU are established. Third, the methodology of the ICP and ISPKF procedure is presented for LiDAR-IMU time delay calibration. Experimental results are presented that validate the proposed method and demonstrate the time delay error can be accurately calibrated. PMID:28282897
NASA Astrophysics Data System (ADS)
Chew, J. V. L.; Sulaiman, J.
2017-09-01
Partial differential equations that are used to describe nonlinear heat and mass transfer phenomena are difficult to solve. For cases where the exact solution is difficult to obtain, it is necessary to use a numerical procedure such as the finite difference method to solve a particular partial differential equation. In terms of numerical procedures, a particular method can be considered efficient if it can give an approximate solution within the specified error with the least computational complexity. Throughout this paper, the two-dimensional Porous Medium Equation (2D PME) is discretized by using the implicit finite difference scheme to construct the corresponding approximation equation. This approximation equation yields a large and sparse nonlinear system. After using the Newton method to linearize the nonlinear system, this paper deals with the application of the Four-Point Newton-EGSOR (4NEGSOR) iterative method for solving the 2D PMEs. In addition, the efficiency of the 4NEGSOR iterative method is studied by solving three example problems. For comparative analysis, the Newton-Gauss-Seidel (NGS) and the Newton-SOR (NSOR) iterative methods are also considered. The numerical findings show that the 4NEGSOR method is superior to the NGS and NSOR methods in terms of the number of iterations needed to reach converged solutions, the computation time, and the maximum absolute errors produced by the methods.
High-order computer-assisted estimates of topological entropy
NASA Astrophysics Data System (ADS)
Grote, Johannes
The concept of Taylor Models is introduced, which offers highly accurate C0-estimates for the enclosures of functional dependencies, combining high-order Taylor polynomial approximation of functions and rigorous estimates of the truncation error, performed using verified interval arithmetic. The focus of this work is on the application of Taylor Models in algorithms for strongly nonlinear dynamical systems. A method to obtain sharp rigorous enclosures of Poincare maps for certain types of flows and surfaces is developed and numerical examples are presented. Differential algebraic techniques allow the efficient and accurate computation of polynomial approximations for invariant curves of certain planar maps around hyperbolic fixed points. Subsequently we introduce a procedure to extend these polynomial curves to verified Taylor Model enclosures of local invariant manifolds with C0-errors of size 10^-10 to 10^-14, and proceed to generate the global invariant manifold tangle up to comparable accuracy through iteration in Taylor Model arithmetic. Knowledge of the global manifold structure up to finite iterations of the local manifold pieces enables us to find all homoclinic and heteroclinic intersections in the generated manifold tangle. Combined with the mapping properties of the homoclinic points and their ordering we are able to construct a subshift of finite type as a topological factor of the original planar system to obtain rigorous lower bounds for its topological entropy. This construction is fully automatic and yields homoclinic tangles with several hundred homoclinic points. As an example, rigorous lower bounds for the topological entropy of the Hénon map are computed, which to the best knowledge of the authors yield the largest such estimates published so far.
Uniform convergence of multigrid V-cycle iterations for indefinite and nonsymmetric problems
NASA Technical Reports Server (NTRS)
Bramble, James H.; Kwak, Do Y.; Pasciak, Joseph E.
1993-01-01
In this paper, we present an analysis of a multigrid method for nonsymmetric and/or indefinite elliptic problems. In this multigrid method various types of smoothers may be used. One type of smoother which we consider is defined in terms of an associated symmetric problem and includes point and line, Jacobi, and Gauss-Seidel iterations. We also study smoothers based entirely on the original operator. One is based on the normal form, that is, the product of the operator and its transpose. Other smoothers studied include point and line, Jacobi, and Gauss-Seidel. We show that the uniform estimates for symmetric positive definite problems carry over to these algorithms. More precisely, the multigrid iteration for the nonsymmetric and/or indefinite problem is shown to converge at a uniform rate provided that the coarsest grid in the multilevel iteration is sufficiently fine (but not depending on the number of multigrid levels).
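A two-grid cycle, the recursion base of a multigrid V-cycle, can be sketched on the classic symmetric positive definite model problem (1D Poisson with Gauss-Seidel smoothing); this is an assumption for illustration and does not cover the nonsymmetric/indefinite case analyzed in the paper. Grid size, sweep counts, and the injection restriction are also assumptions.

    import numpy as np

    def gauss_seidel(u, f, h, sweeps):
        for _ in range(sweeps):
            for i in range(1, len(u) - 1):
                u[i] = 0.5 * (u[i-1] + u[i+1] + h*h*f[i])
        return u

    def residual(u, f, h):                             # r = f - A u for -u'' = f
        r = np.zeros_like(u)
        r[1:-1] = f[1:-1] + (u[:-2] - 2*u[1:-1] + u[2:]) / (h*h)
        return r

    def two_grid(u, f, h):
        u = gauss_seidel(u, f, h, sweeps=3)            # pre-smoothing
        r = residual(u, f, h)
        rc = r[::2].copy()                             # restriction (injection)
        ec = gauss_seidel(np.zeros_like(rc), rc, 2*h, sweeps=50)   # coarse solve
        e = np.zeros_like(u)
        e[::2] = ec                                    # prolongation: copy plus
        e[1::2] = 0.5 * (ec[:-1] + ec[1:])             # linear interpolation
        u += e                                         # coarse-grid correction
        return gauss_seidel(u, f, h, sweeps=3)         # post-smoothing

    n = 65                                             # 2^6 + 1 points (assumption)
    h = 1.0 / (n - 1)
    x = np.linspace(0, 1, n)
    f = np.pi**2 * np.sin(np.pi * x)                   # exact solution sin(pi x)
    u = np.zeros(n)
    for cycle in range(10):
        u = two_grid(u, f, h)
        print(f"cycle {cycle + 1}: max error {np.max(np.abs(u - np.sin(np.pi * x))):.2e}")

The error drops by a roughly grid-independent factor per cycle until it hits the discretization error, which is the uniform convergence behavior the paper extends to nonsymmetric and indefinite operators.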
Floating-to-Fixed-Point Conversion for Digital Signal Processors
NASA Astrophysics Data System (ADS)
Menard, Daniel; Chillet, Daniel; Sentieys, Olivier
2006-12-01
Digital signal processing applications are specified with floating-point data types, but they are usually implemented in embedded systems with fixed-point arithmetic to minimise cost and power consumption. Thus, methodologies which automatically establish the fixed-point specification are required to reduce the application time-to-market. In this paper, a new methodology for the floating-to-fixed-point conversion is proposed for software implementations. The aim of our approach is to determine the fixed-point specification which minimises the code execution time for a given accuracy constraint. Compared to previous methodologies, our approach takes into account the DSP architecture to optimise the fixed-point formats, and the floating-to-fixed-point conversion process is coupled with the code generation process. The fixed-point data types and the positions of the scaling operations are optimised to reduce the code execution time. To evaluate the fixed-point computation accuracy, an analytical approach is used to reduce the optimisation time compared to the existing methods based on simulation. The methodology stages are described and several experimental results are presented to underline the efficiency of this approach.
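The core accuracy trade-off can be sketched with a toy float-to-fixed conversion: pick the integer bits from the observed dynamic range of a signal, spend the rest of the word on fraction bits, and measure the resulting signal-to-quantization-noise ratio (SQNR). The word lengths and the Gaussian test signal are assumptions; the paper's methodology additionally accounts for the DSP architecture and uses an analytical, rather than simulated, accuracy model.

    import numpy as np

    def to_fixed_format(x, word_len=16):
        """Quantize x to a signed Qm.n format chosen from its dynamic range."""
        int_bits = max(0, int(np.ceil(np.log2(np.max(np.abs(x)) + 1e-12))) + 1)  # +1 sign
        frac_bits = word_len - int_bits
        q = np.round(x * 2**frac_bits).astype(np.int64)
        return q, frac_bits

    def sqnr_db(x, q, frac_bits):
        err = x - q / 2**frac_bits
        return 10 * np.log10(np.sum(x**2) / np.sum(err**2))

    rng = np.random.default_rng(0)
    x = rng.normal(0, 1.5, 10000)                 # "floating-point" reference signal
    for wl in (8, 12, 16, 24):
        q, fb = to_fixed_format(x, wl)
        print(f"word length {wl:2d}: Q{wl - fb}.{fb}, SQNR = {sqnr_db(x, q, fb):5.1f} dB")

Each extra fraction bit buys roughly 6 dB of SQNR, which is the basic currency in which a fixed-point format optimiser trades accuracy against word length and execution time.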
Xu, Peng; Tian, Yin; Lei, Xu; Hu, Xiao; Yao, Dezhong
2008-12-01
Localizing the neural electric activities within the brain effectively and precisely from scalp electroencephalogram (EEG) recordings is a critical issue for current studies in clinical neurology and cognitive neuroscience. In this paper, based on the charge source model and the iterative re-weighted strategy, a new maximum-neighbor-weight based iterative sparse source imaging method is proposed, termed CMOSS (Charge source model based Maximum neighbOr weight Sparse Solution). Different from the weight used in the focal underdetermined system solver (FOCUSS), where the weight for each point in the discrete solution space is independently updated in iterations, the newly designed weight for each point in each iteration is determined by the source solution of the last iteration at both the point and its neighbors. Using such a new weight, the next iteration has a better chance of rectifying the local source location bias present in the previous iteration's solution. Simulation studies with comparisons to FOCUSS and LORETA for various source configurations were conducted on a realistic 3-shell head model, and the results confirmed the validity of CMOSS for sparse EEG source localization. Finally, CMOSS was applied to localize sources elicited in a visual stimuli experiment, and the result was consistent with the source areas involved in visual processing reported in previous studies.
Adaptive Dynamic Programming for Discrete-Time Zero-Sum Games.
Wei, Qinglai; Liu, Derong; Lin, Qiao; Song, Ruizhuo
2018-04-01
In this paper, a novel adaptive dynamic programming (ADP) algorithm, called "iterative zero-sum ADP algorithm," is developed to solve infinite-horizon discrete-time two-player zero-sum games of nonlinear systems. The present iterative zero-sum ADP algorithm permits arbitrary positive semidefinite functions to initialize the upper and lower iterations. A novel convergence analysis is developed to guarantee the upper and lower iterative value functions to converge to the upper and lower optimums, respectively. When the saddle-point equilibrium exists, it is emphasized that both the upper and lower iterative value functions are proved to converge to the optimal solution of the zero-sum game, where the existence criteria of the saddle-point equilibrium are not required. If the saddle-point equilibrium does not exist, the upper and lower optimal performance index functions are obtained, respectively, where the upper and lower performance index functions are proved to be not equivalent. Finally, simulation results and comparisons are shown to illustrate the performance of the present method.
An iterative method for the localization of a neutron source in a large box (container)
NASA Astrophysics Data System (ADS)
Dubinski, S.; Presler, O.; Alfassi, Z. B.
2007-12-01
The localization of an unknown neutron source in a bulky box was studied. This can be used for the inspection of cargo, to prevent the smuggling of neutron and α emitters. It is important to localize the source from the outside for safety reasons. Source localization is necessary in order to determine its activity. A previous study showed that, by using six detectors, three on each parallel face of the box (460×420×200 mm^3), the location of the source can be found with an average distance of 4.73 cm between the real source position and the calculated one and a maximal distance of about 9 cm. Accuracy was improved in this work by applying an iteration method based on four fixed detectors and the successive iteration of positioning of an external calibrating source. The initial positioning of the calibrating source is the plane of detectors 1 and 2. This method finds the unknown source location with an average distance of 0.78 cm between the real source position and the calculated one and a maximum distance of 3.66 cm for the same box. For larger boxes, localization without iterations requires an increase in the number of detectors, while localization with iterations requires only an increase in the number of iteration steps. In addition to source localization, two methods for determining the activity of the unknown source were also studied.
Nuclear reactor transient analysis via a quasi-static kinetics Monte Carlo method
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jo, YuGwon; Cho, Bumhee; Cho, Nam Zin, E-mail: nzcho@kaist.ac.kr
2015-12-31
The predictor-corrector quasi-static (PCQS) method is applied to the Monte Carlo (MC) calculation for reactor transient analysis. To solve the transient fixed-source problem of the PCQS method, fission source iteration is used, and a linear approximation of fission source distributions during a macro-time step is introduced to provide the delayed neutron source. The conventional particle-tracking procedure is modified to solve the transient fixed-source problem via MC calculation. The PCQS method with MC calculation is compared with the direct time-dependent method of characteristics (MOC) on a TWIGL two-group problem for verification of the computer code. Then, the results on a continuous-energy problem are presented.
Nonrigid iterative closest points for registration of 3D biomedical surfaces
NASA Astrophysics Data System (ADS)
Liang, Luming; Wei, Mingqiang; Szymczak, Andrzej; Petrella, Anthony; Xie, Haoran; Qin, Jing; Wang, Jun; Wang, Fu Lee
2018-01-01
Advanced 3D optical and laser scanners bring new challenges to computer graphics. We present a novel nonrigid surface registration algorithm based on the Iterative Closest Point (ICP) method with multiple correspondences. Our method, called the Nonrigid Iterative Closest Points (NICPs), can be applied to surfaces of arbitrary topology. It does not impose any restrictions on the deformation, e.g. rigidity or articulation, and it does not require parametrization of the input meshes. Our method is based on an objective function that combines distance and regularization terms. Unlike the standard ICP, the distance term is determined from multiple two-way correspondences rather than single one-way correspondences between surfaces. A Laplacian-based regularization term is proposed to take full advantage of the multiple two-way correspondences. This term regularizes the surface movement by enforcing vertices to move coherently with their 1-ring neighbors. The proposed method performs well when the models exhibit no global pose differences or significant bending, for example, families of similar shapes such as human femur and vertebra models.
Robust Airfoil Optimization to Achieve Consistent Drag Reduction Over a Mach Range
NASA Technical Reports Server (NTRS)
Li, Wu; Huyse, Luc; Padula, Sharon; Bushnell, Dennis M. (Technical Monitor)
2001-01-01
We prove mathematically that, in order to avoid point-optimization at the sampled design points in multipoint airfoil optimization, the number of design points must be greater than the number of free design variables. To overcome point-optimization at the sampled design points, a robust airfoil optimization method (called the profile optimization method) is developed and analyzed. This optimization method aims at a consistent drag reduction over a given Mach range and has three advantages: (a) it prevents severe degradation in off-design performance by using a smart descent direction in each optimization iteration, (b) there is no random airfoil shape distortion for any iterate it generates, and (c) it allows a designer to make a trade-off between a truly optimized airfoil and the amount of computing time consumed. For illustration purposes, we use the profile optimization method to solve a lift-constrained drag minimization problem for a 2-D airfoil in Euler flow with 20 free design variables. A comparison with other airfoil optimization methods is also included.
Robust non-rigid registration algorithm based on local affine registration
NASA Astrophysics Data System (ADS)
Wu, Liyang; Xiong, Lei; Du, Shaoyi; Bi, Duyan; Fang, Ting; Liu, Kun; Wu, Dongpeng
2018-04-01
To address the low precision and slow convergence of traditional point-set non-rigid registration algorithms on data with complex local deformations, this paper proposes a robust non-rigid registration algorithm based on local affine registration. The algorithm uses a hierarchical iterative method to complete the point-set non-rigid registration from coarse to fine. In each iteration, the sub data point sets and sub model point sets are divided and the shape control points of each sub point set are updated. A control-point-guided affine ICP algorithm then solves the local affine transformation between the corresponding sub point sets. Next, the local affine transformation obtained in the previous step is used to update the sub data point sets and their shape control point sets. When the algorithm reaches the maximum iteration layer K, the loop ends and the updated sub data point sets are output. Experimental results demonstrate that the accuracy and convergence of our algorithm are greatly improved compared with traditional point-set non-rigid registration algorithms.
Iterative approach as alternative to S-matrix in modal methods
NASA Astrophysics Data System (ADS)
Semenikhin, Igor; Zanuccoli, Mauro
2014-12-01
The continuously increasing complexity of opto-electronic devices and the rising demands on simulation accuracy lead to the need to solve very large systems of linear equations, making iterative methods promising and attractive from the computational point of view with respect to direct methods. In particular, an iterative approach potentially enables the reduction of the computational time required to solve Maxwell's equations by Eigenmode Expansion algorithms. Regardless of the particular eigenmode-finding method used, the expansion coefficients are as a rule computed by the scattering matrix (S-matrix) approach or similar techniques requiring on the order of M³ operations. In this work we consider alternatives to the S-matrix technique which are based on pure iterative or mixed direct-iterative approaches. The possibility of diminishing the impact of the M³-order calculations on the overall time, and in some cases even of reducing the number of arithmetic operations to M², by applying iterative techniques is discussed. Numerical results are presented to illustrate the validity and potential of the proposed approaches.
An efficient Monte Carlo-based algorithm for scatter correction in keV cone-beam CT
NASA Astrophysics Data System (ADS)
Poludniowski, G.; Evans, P. M.; Hansen, V. N.; Webb, S.
2009-06-01
A new method is proposed for scatter correction of cone-beam CT images. A coarse reconstruction is used in the initial iteration steps. Modelling of the x-ray tube spectrum and detector response is included in the algorithm. Photon diffusion inside the imaging subject is calculated using the Monte Carlo method. Photon scoring at the detector is calculated using forced detection to a fixed set of node points. The scatter profiles are then obtained by linear interpolation. The algorithm is referred to as the coarse reconstruction and fixed detection (CRFD) technique. Scatter predictions are quantitatively validated against a widely used general-purpose Monte Carlo code: BEAMnrc/EGSnrc (NRCC, Canada); agreement is excellent. The CRFD algorithm was applied to projection data acquired with a Synergy XVI CBCT unit (Elekta Limited, Crawley, UK), using RANDO and Catphan phantoms (The Phantom Laboratory, Salem NY, USA). The algorithm was shown to be effective in removing scatter-induced artefacts from CBCT images, and took as little as 2 min on a desktop PC. Image uniformity was greatly improved, as was CT-number accuracy in the reconstructions. This latter improvement was less marked where the expected CT-number of a material was very different from that of the background material in which it was embedded.
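The forced-detection-plus-interpolation step lends itself to a one-screen sketch; the node positions, Monte Carlo scatter scores, and projection below are illustrative assumptions, not CRFD data.

```python
import numpy as np

# Sketch of the fixed-detection interpolation step: Monte Carlo scatter is
# scored by forced detection only at a fixed set of node points on the
# detector, and the full scatter profile is recovered by linear interpolation.
pixels = np.arange(410.0)                      # assumed 1D detector row
nodes = np.linspace(0.0, 409.0, 32)            # fixed node points for forced detection
scatter_at_nodes = np.exp(-((nodes - 200.0) / 150.0) ** 2)  # assumed MC scatter scores

scatter_profile = np.interp(pixels, nodes, scatter_at_nodes)    # linear interpolation
raw_projection = 2.0 + 0.5 * np.cos(pixels / 80.0)              # assumed measured projection
primary = np.clip(raw_projection - scatter_profile, 0.0, None)  # scatter-corrected signal
```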
Second Iteration of Photogrammetric Pipeline to Enhance the Accuracy of Image Pose Estimation
NASA Astrophysics Data System (ADS)
Nguyen, T. G.; Pierrot-Deseilligny, M.; Muller, J.-M.; Thom, C.
2017-05-01
In the classical photogrammetric processing pipeline, automatic tie point extraction plays a key role in the quality of the achieved results. Image tie points are crucial to pose estimation and have a significant influence on the precision of the calculated orientation parameters. Therefore, both the relative and absolute orientations of the 3D model can be affected. By improving the precision of image tie point measurement, one can enhance the quality of the image orientation. The quality of image tie points is influenced by several factors such as multiplicity, measurement precision, and distribution in the 2D images as well as in the 3D scene. In complex acquisition scenarios such as indoor applications and oblique aerial images, tie point extraction is limited when only image information can be exploited. Hence, we propose here a method which improves the precision of pose estimation in complex scenarios by adding a second iteration to the classical processing pipeline. The result of the first iteration is used as a priori information to guide the extraction of new tie points with better quality. Evaluated with multiple case studies, the proposed method shows its validity and its high potential for precision improvement.
NASA Technical Reports Server (NTRS)
Zhang, Jun; Ge, Lixin; Kouatchou, Jules
2000-01-01
A new fourth-order compact difference scheme for the three-dimensional convection-diffusion equation with variable coefficients is presented. The novelty of this new difference scheme is that it only requires 15 grid points and that it can be decoupled with two colors. The entire computational grid can be updated in two parallel subsweeps with the Gauss-Seidel-type iterative method. This is compared with the known 19-point fourth-order compact difference scheme, which requires four colors to decouple the computational grid. Numerical results, with multigrid methods implemented on a shared-memory parallel computer, are presented to compare the 15-point and the 19-point fourth-order compact schemes.
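To illustrate the two-color idea in a form that is easy to check, here is a sketch of red-black Gauss-Seidel on the familiar 5-point 2D Laplacian; this is a lower-dimensional stand-in, not the paper's 15-point fourth-order 3D scheme, and the grid size and right-hand side are assumptions.

```python
import numpy as np

# Two-color (red-black) Gauss-Seidel on the standard 5-point 2D Laplacian:
# points of one color depend only on points of the other color, so each
# color can be updated in a fully parallel subsweep.
n = 64
h = 1.0 / (n + 1)
u = np.zeros((n + 2, n + 2))            # includes homogeneous Dirichlet boundary
f = np.ones((n + 2, n + 2))             # assumed right-hand side

ii, jj = np.meshgrid(np.arange(1, n + 1), np.arange(1, n + 1), indexing="ij")
for sweep in range(200):
    for color in (0, 1):
        mask = ((ii + jj) % 2) == color
        # sum of the four neighbors, using the most recent values
        nb = u[:-2, 1:-1] + u[2:, 1:-1] + u[1:-1, :-2] + u[1:-1, 2:]
        new = 0.25 * (nb + h * h * f[1:-1, 1:-1])
        interior = u[1:-1, 1:-1]        # a view into u, so the update is in place
        interior[mask] = new[mask]
```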
Calibration and Data Analysis of the MC-130 Air Balance
NASA Technical Reports Server (NTRS)
Booth, Dennis; Ulbrich, N.
2012-01-01
Design, calibration, calibration analysis, and intended use of the MC-130 air balance are discussed. The MC-130 balance is an 8.0 inch diameter force balance that has two separate internal air flow systems and one external bellows system. The manual calibration of the balance consisted of a total of 1854 data points with both unpressurized and pressurized air flowing through the balance. A subset of 1160 data points was chosen for the calibration data analysis. The regression analysis of the subset was performed using two fundamentally different analysis approaches. First, the data analysis was performed using a recently developed extension of the Iterative Method. This approach fits gage outputs as a function of both applied balance loads and bellows pressures while still allowing the application of the iteration scheme that is used with the Iterative Method. Then, for comparison, the axial force was also analyzed using the Non-Iterative Method. This alternate approach directly fits loads as a function of measured gage outputs and bellows pressures and does not require a load iteration. The regression models used by both the extended Iterative and Non-Iterative Method were constructed such that they met a set of widely accepted statistical quality requirements. These requirements lead to reliable regression models and prevent overfitting of data because they ensure that no hidden near-linear dependencies between regression model terms exist and that only statistically significant terms are included. Finally, a comparison of the axial force residuals was performed. Overall, axial force estimates obtained from both methods show excellent agreement as the differences of the standard deviation of the axial force residuals are on the order of 0.001 % of the axial force capacity.
Parallelized implicit propagators for the finite-difference Schrödinger equation
NASA Astrophysics Data System (ADS)
Parker, Jonathan; Taylor, K. T.
1995-08-01
We describe the application of block Gauss-Seidel and block Jacobi iterative methods to the design of implicit propagators for finite-difference models of the time-dependent Schrödinger equation. The block-wise iterative methods discussed here are mixed direct-iterative methods for solving simultaneous equations, in the sense that direct methods (e.g. LU decomposition) are used to invert certain block sub-matrices, and iterative methods are used to complete the solution. We describe parallel variants of the basic algorithm that are well suited to the medium- to coarse-grained parallelism of workstation clusters and MIMD supercomputers, and we show that under a wide range of conditions, fine-grained parallelism of the computation can be achieved. Numerical tests are conducted on a typical one-electron atom Hamiltonian. The methods converge robustly to machine precision (15 significant figures), in some cases in as few as 6 or 7 iterations. The rate of convergence is nearly independent of the finite-difference grid-point separations.
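A minimal sketch of the mixed direct-iterative idea, assuming a small diagonally dominant test matrix: the diagonal blocks are LU-factored once (the direct part) and a block Jacobi update completes the solution (the iterative part). This is a generic illustration, not the propagator code.

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve

# Mixed direct-iterative block Jacobi solve: invert diagonal blocks directly
# by LU decomposition, iterate on the rest. Matrix, block size and tolerance
# are illustrative assumptions.
rng = np.random.default_rng(1)
n, nb = 120, 30                          # matrix size, block size
A = rng.random((n, n)) + n * np.eye(n)   # strongly diagonally dominant test matrix
b = rng.random(n)

blocks = [slice(i, i + nb) for i in range(0, n, nb)]
lu = [lu_factor(A[s, s]) for s in blocks]  # direct factorization of each diagonal block

x = np.zeros(n)
for it in range(200):
    r = b - A @ x
    if np.linalg.norm(r) < 1e-12 * np.linalg.norm(b):
        break
    for s, f in zip(blocks, lu):
        x[s] += lu_solve(f, r[s])        # block Jacobi update from the same residual
print(it, np.linalg.norm(b - A @ x))
```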
NASA Astrophysics Data System (ADS)
Betté, Srinivas; Diaz, Julio C.; Jines, William R.; Steihaug, Trond
1986-11-01
A preconditioned residual-norm-reducing iterative solver is described. Based on a truncated form of the generalized-conjugate-gradient method for nonsymmetric systems of linear equations, the iterative scheme is very effective for linear systems generated in reservoir simulation of thermal oil recovery processes. As a consequence of employing an adaptive implicit finite-difference scheme to solve the model equations, the number of variables per cell-block varies dynamically over the grid. The data structure allows for 5- and 9-point operators in the areal model, 5-point in the cross-sectional model, and 7- and 11-point operators in the three-dimensional model. Block-diagonal-scaling of the linear system, done prior to iteration, is found to have a significant effect on the rate of convergence. Block-incomplete-LU-decomposition (BILU) and block-symmetric-Gauss-Seidel (BSGS) methods, which result in no fill-in, are used as preconditioning procedures. A full factorization is done on the well terms, and the cells are ordered in a manner which minimizes the fill-in in the well-column due to this factorization. The convergence criterion for the linear (inner) iteration is linked to that of the nonlinear (Newton) iteration, thereby enhancing the efficiency of the computation. The algorithm, with both BILU and BSGS preconditioners, is evaluated in the context of a variety of thermal simulation problems. The solver is robust and can be used with little or no user intervention.
A Method to Solve Interior and Exterior Camera Calibration Parameters for Image Resection
NASA Technical Reports Server (NTRS)
Samtaney, Ravi
1999-01-01
An iterative method is presented to solve the internal and external camera calibration parameters, given model target points and their images from one or more camera locations. The direct linear transform formulation was used to obtain a guess for the iterative method, and herein lies one of the strengths of the present method. In all test cases, the method converged to the correct solution. In general, an overdetermined system of nonlinear equations is solved in the least-squares sense. The iterative method presented is based on Newton-Raphson for solving systems of nonlinear algebraic equations. The Jacobian is analytically derived and the pseudo-inverse of the Jacobian is obtained by singular value decomposition.
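A compact sketch of the Newton-Raphson/least-squares machinery described above, with the pseudo-inverse of the Jacobian obtained via SVD (here through numpy's pinv): the toy exponential residual model, data, and starting guess are illustrative assumptions standing in for the camera reprojection equations.

```python
import numpy as np

# Gauss-Newton iteration for an overdetermined nonlinear least-squares problem,
# using the SVD-based pseudo-inverse of an analytically derived Jacobian.
def residual(p, t, y):
    return y - p[0] * np.exp(-p[1] * t)

def jacobian(p, t):
    # Analytic Jacobian of the residual with respect to the parameters.
    return np.column_stack([-np.exp(-p[1] * t), p[0] * t * np.exp(-p[1] * t)])

t = np.linspace(0, 4, 40)
y = 2.0 * np.exp(-1.3 * t) + 0.01 * np.random.default_rng(2).normal(size=t.size)

p = np.array([1.0, 1.0])   # initial guess (in the calibration problem, from the DLT)
for _ in range(20):
    J = jacobian(p, t)
    step = np.linalg.pinv(J) @ residual(p, t, y)   # pseudo-inverse via SVD
    p = p - step
    if np.linalg.norm(step) < 1e-12:
        break
print(p)   # converges near the true parameters (2.0, 1.3)
```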
A new solution procedure for a nonlinear infinite beam equation of motion
NASA Astrophysics Data System (ADS)
Jang, T. S.
2016-10-01
The goal of this paper is a purely theoretical question, which is nevertheless fundamental in computational partial differential equations: can a linear solution-structure for the equation of motion of an infinite nonlinear beam be directly manipulated to construct its nonlinear solution? Here, the equation of motion is mathematically modeled as a fourth-order nonlinear partial differential equation. To answer the question, a pseudo-parameter is first introduced to modify the equation of motion. An integral formalism for the modified equation is then derived and taken as a linear solution-structure. It enables us to formulate a nonlinear integral equation of the second kind, equivalent to the original equation of motion. The fixed-point approach, applied to the integral equation, yields a new iterative solution procedure for constructing the nonlinear solution of the original beam equation of motion, whose iterative process conveniently consists of just simple regular numerical integration; i.e., it is fairly simple as well as straightforward to apply. A mathematical analysis of both the convergence and the uniqueness of the iterative procedure is carried out by proving a contractive character of a nonlinear operator. It follows, therefore, that this would be one of the useful nonlinear strategies for integrating the equation of motion of a nonlinear infinite beam, whereby the preceding question may be answered. In addition, it is worth noticing that the pseudo-parameter introduced here plays a double role: first, it connects the original beam equation of motion with the integral equation; second, it is related to the convergence of the proposed iterative method.
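The iterative procedure boils down to a Picard (fixed-point) iteration w ← T[w] with a contractive operator T; the sketch below uses a simple toy contraction in place of the paper's integral operator, purely to illustrate the convergence loop guaranteed by the contraction mapping argument.

```python
import numpy as np

# Picard iteration w_{k+1} = T[w_k] for a contractive operator T.
# The operator here is an assumed toy contraction (Lipschitz constant 0.5 < 1),
# not the paper's integral operator for the beam equation.
def T(w, x):
    return 0.5 * np.sin(w) + np.cos(x)

x = np.linspace(0, 2 * np.pi, 200)
w = np.zeros_like(x)                     # initial iterate
for k in range(100):
    w_new = T(w, x)
    if np.max(np.abs(w_new - w)) < 1e-12:
        break
    w = w_new
print(k, np.max(np.abs(w - T(w, x))))    # residual at the computed fixed point
```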
On a new iterative method for solving linear systems and comparison results
NASA Astrophysics Data System (ADS)
Jing, Yan-Fei; Huang, Ting-Zhu
2008-10-01
In Ujevic [A new iterative method for solving linear systems, Appl. Math. Comput. 179 (2006) 725-730], the author obtained a new iterative method for solving linear systems, which can be considered as a modification of the Gauss-Seidel method. In this paper, we show that this is a special case from the point of view of projection techniques. A different approach is then established, which is shown both theoretically and numerically to be at least as good as, and in most cases better than, Ujevic's. As the presented numerical examples show, in most cases the convergence rate is more than one and a half times that of Ujevic's method.
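For reference, a plain Gauss-Seidel iteration of the kind Ujevic's method modifies is sketched below; the small diagonally dominant test system is an assumption for illustration.

```python
import numpy as np

# Classical Gauss-Seidel iteration: each unknown is updated in turn using the
# newest available values of the other unknowns.
def gauss_seidel(A, b, x0=None, tol=1e-12, maxit=10_000):
    n = len(b)
    x = np.zeros(n) if x0 is None else np.array(x0, dtype=float)
    for it in range(maxit):
        for i in range(n):
            s = A[i, :i] @ x[:i] + A[i, i + 1:] @ x[i + 1:]
            x[i] = (b[i] - s) / A[i, i]
        if np.linalg.norm(b - A @ x) < tol * np.linalg.norm(b):
            return x, it
    return x, maxit

A = np.array([[4.0, -1, 0], [-1, 4, -1], [0, -1, 4]])  # assumed test system
b = np.array([1.0, 2, 3])
x, its = gauss_seidel(A, b)
print(x, its)
```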
A New Method for Analyzing Near-Field Faraday Probe Data in Hall Thrusters
NASA Technical Reports Server (NTRS)
Huang, Wensheng; Shastry, Rohit; Herman, Daniel A.; Soulas, George C.; Kamhawi, Hani
2013-01-01
This paper presents a new method for analyzing near-field Faraday probe data obtained from Hall thrusters. Traditional methods spawned from far-field Faraday probe analysis rely on assumptions that are not applicable to near-field Faraday probe data. In particular, arbitrary choices for the point of origin and limits of integration have made interpretation of the results difficult. The new method, called iterative pathfinding, uses the evolution of the near-field plume with distance to provide feedback for determining the location of the point of origin. Although still susceptible to the choice of integration limits, this method presents a systematic approach to determining the origin point for calculating the divergence angle. The iterative pathfinding method is applied to near-field Faraday probe data taken in a previous study from the NASA-300M and NASA-457Mv2 Hall thrusters. Since these two thrusters use centrally mounted cathodes, the current density associated with the cathode plume is removed before applying iterative pathfinding. A procedure is presented for removing the cathode plume. The results of the analysis are compared to far-field probe analysis results. This paper ends with checks on the validity of the new method and discussions on the implications of the results.
Udzawa-type iterative method with parareal preconditioner for a parabolic optimal control problem
NASA Astrophysics Data System (ADS)
Lapin, A.; Romanenko, A.
2016-11-01
The article deals with an optimal control problem having a parabolic equation as the state problem. There are point-wise constraints on the state and control functions. The objective functional involves an observation given over the domain at each moment in time. Conditions for the convergence of the Udzawa-type iterative method are given. The parareal method is used to invert the preconditioner. The results of calculations are presented.
Adaptive Discrete Hypergraph Matching.
Yan, Junchi; Li, Changsheng; Li, Yin; Cao, Guitao
2018-02-01
This paper addresses the problem of hypergraph matching using higher-order affinity information. We propose a solver that iteratively updates the solution in the discrete domain by linear assignment approximation. The proposed method is guaranteed to converge to a stationary discrete solution and avoids the annealing procedure and ad hoc post-binarization step that are required in several previous methods. Specifically, we start with a simple iterative discrete gradient assignment solver. This solver can be trapped in an m-circle sequence under moderate conditions, where m is the order of the graph matching problem. We then devise an adaptive relaxation mechanism to jump out of this degenerate case and show that the resulting new path will converge to a fixed solution in the discrete domain. The proposed method is tested on both synthetic and real-world benchmarks. The experimental results corroborate the efficacy of our method.
NASA Astrophysics Data System (ADS)
Shanmugavadivu, P.; Eliahim Jeevaraj, P. S.
2014-06-01
The Adaptive Iterated Function Systems (AIFS) filter presented in this paper shows outstanding potential for attenuating fixed-value impulse noise in images. The filter has two distinct phases, namely noise detection and noise correction, which use a Measure of Statistics and Iterated Function Systems (IFS), respectively. The performance of the AIFS filter is assessed by three metrics: Peak Signal-to-Noise Ratio (PSNR), Mean Structural Similarity Index (MSSIM), and Human Visual Perception (HVP). The quantitative measures PSNR and MSSIM endorse the merit of this filter in terms of degree of noise suppression and detail/edge preservation, respectively, in comparison with the high-performing filters reported in the recent literature. The qualitative measure HVP confirms the noise suppression ability of the devised filter. This computationally simple noise filter finds broad application wherever images are highly degraded by fixed-value impulse noise.
Improved evaluation of optical depth components from Langley plot data
NASA Technical Reports Server (NTRS)
Biggar, S. F.; Gellman, D. I.; Slater, P. N.
1990-01-01
A simple, iterative procedure to determine the optical depth components of the extinction optical depth measured by a solar radiometer is presented. Simulated data show that the iterative procedure improves the determination of the exponent of a Junge law particle size distribution. The determination of the optical depth due to aerosol scattering is improved as compared to a method which uses only two points from the extinction data. The iterative method was used to determine spectral optical depth components for June 11-13, 1988 during the MAC III experiment.
Komorkiewicz, Mateusz; Kryjak, Tomasz; Gorgon, Marek
2014-01-01
This article presents an efficient hardware implementation of the Horn-Schunck algorithm that can be used in an embedded optical flow sensor. An architecture is proposed that realises the iterative Horn-Schunck algorithm in a pipelined manner. This modification makes it possible to achieve a data throughput of 175 Mpixels/s and to process a Full HD video stream (1,920 × 1,080 @ 60 fps). The structure of the optical flow module, as well as the pre- and post-filtering blocks and a flow reliability computation unit, is described in detail. Three versions of the optical flow module, with different numerical precision, operating frequency, and result accuracy, are proposed. The errors caused by switching from floating- to fixed-point computations are also evaluated. The described architecture was tested on popular sequences from the Middlebury University optical flow dataset. It achieves state-of-the-art results among hardware implementations of single-scale methods. The designed fixed-point architecture achieves a performance of 418 GOPS with a power efficiency of 34 GOPS/W. The proposed floating-point module achieves 103 GFLOPS, with a power efficiency of 24 GFLOPS/W. Moreover, a 100-times speedup compared to a modern CPU with SIMD support is reported. A complete, working vision system realized on a Xilinx VC707 evaluation board is also presented. It is able to compute optical flow for a Full HD video stream received from an HDMI camera in real time. The obtained results prove that FPGA devices are an ideal platform for embedded vision systems. PMID:24526303
NASA Astrophysics Data System (ADS)
Biazzo, Indaco; Braunstein, Alfredo; Zecchina, Riccardo
2012-08-01
We study the behavior of an algorithm derived from the cavity method for the prize-collecting Steiner tree (PCST) problem on graphs. The algorithm is based on the zero-temperature limit of the cavity equations and as such is formally simple (a fixed-point equation solved by iteration) and distributed (parallelizable). We provide a detailed comparison with state-of-the-art algorithms on a wide range of existing benchmarks, networks, and random graphs. Specifically, we consider an enhanced derivative of the Goemans-Williamson heuristic and the dhea solver, a branch-and-cut integer linear programming approach. The comparison shows that the cavity algorithm outperforms both algorithms on most large instances, in both running time and quality of the solution. Finally, we prove a few optimality properties of the solutions provided by our algorithm, including optimality under the two post-processing procedures defined in the Goemans-Williamson derivative and global optimality in some limit cases.
Effect of Impurities on the Freezing Point of Zinc
NASA Astrophysics Data System (ADS)
Sun, Jianping; Rudtsch, Steffen; Niu, Yalu; Zhang, Lin; Wang, Wei; Den, Xiaolong
2017-03-01
Knowledge of the liquidus slopes of impurities in the fixed-point metals defined by the International Temperature Scale of 1990 is important for the estimation of uncertainties and the correction of fixed points with the sum-of-individual-estimates method. Great attention is paid to the effect of ultra-trace impurities on the freezing point of zinc at the National Institute of Metrology. In the present work, the liquidus slopes of Ga-Zn and Ge-Zn were measured in doping experiments with the slim fixed-point cell developed for this purpose, and the temperature characteristics of the Fe-Zn phase diagram were further investigated. A quasi-adiabatic Zn fixed-point cell was developed, with the thermometer well surrounded by the crucible containing the pure metal, and a temperature uniformity of better than 20 mK was obtained in the region where the metal is located. The previous Pb-Zn doping experiment with the slim fixed-point cell was checked with the quasi-adiabatic Zn fixed-point cell, and the result supports the liquidus slope previously measured with the traditional fixed-point realization.
NASA Technical Reports Server (NTRS)
Walden, H.
1974-01-01
Methods for obtaining approximate solutions for the fundamental eigenvalue of the Laplace-Beltrami operator (also referred to as the membrane eigenvalue problem for the vibration equation) on the unit spherical surface are developed. Two specific types of spherical surface domains are considered: (1) the interior of a spherical triangle, i.e., the region bounded by arcs of three great circles, and (2) the exterior of a great circle arc extending for less than pi radians on the sphere (a spherical surface with a slit). In both cases, zero boundary conditions are imposed. In order to solve the resulting second-order elliptic partial differential equations in two independent variables, a finite difference approximation is derived. The symmetric (generally five-point) finite difference equations that develop are written in matrix form and then solved by the iterative method of point successive overrelaxation. Upon convergence of this iterative method, the fundamental eigenvalue is approximated by iteration utilizing the power method as applied to the finite Rayleigh quotient.
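As a pocket illustration of the final eigenvalue-extraction step, the sketch below applies inverse iteration (the power method on the inverse operator) with a Rayleigh-quotient estimate to the standard 1D Dirichlet Laplacian; the 1D operator and grid size are assumptions standing in for the spherical-surface difference operator, and the direct solve stands in for the SOR iteration.

```python
import numpy as np

# Inverse iteration plus Rayleigh quotient for the fundamental eigenvalue of
# the 1D Dirichlet Laplacian -u'' on (0, 1), discretized with second-order
# central differences. The fundamental eigenvalue tends to pi^2.
n = 100
h = 1.0 / (n + 1)
A = (np.diag(2.0 * np.ones(n)) - np.diag(np.ones(n - 1), 1)
     - np.diag(np.ones(n - 1), -1)) / h**2

v = np.random.default_rng(3).random(n)
for _ in range(200):
    w = np.linalg.solve(A, v)            # inverse iteration targets the smallest eigenvalue
    v = w / np.linalg.norm(w)
lam = v @ A @ v                          # Rayleigh quotient estimate
print(lam, np.pi**2)
```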
The Simple Map for a Single-null Divertor Tokamak: How to Find the Last Good Surface
NASA Astrophysics Data System (ADS)
Phan, Huong; Ali, Halima; Punjabi, Alkesh
2000-10-01
The Simple Map^1 is a representation of the magnetic field inside a single-null divertor tokamak. It is given by the equations X_n+1 = X_n - kY_n(1 - Y_n), Y_n+1 = Y_n + kX_n+1. These equations mimic the motion of the magnetic field lines in a single-null divertor tokamak. The stable fixed point is (0,0) and the unstable fixed point is (0,1). k is fixed at 0.60. In our work, the starting value of Y in the map is kept in the interval 0 to 1, and the starting value of X is 0. Using the successive bifurcation method, we first run these equations for 10^6 iterations to find the approximate value of Y at which chaos occurs. We then examine the neighborhood of this Y value to find the exact value of Y for the last good surface. We call this value Y_lgs. We find Y_lgs to be 0.997135768 for k=0.60 and X=0. This work is supported by US DOE OFES. Ms. Huong Phan is a HU CFRT Summer Fusion High School Workshop Scholar from Andrew P. Hill High School in California. She is supported by the NASA SHARP Plus Program. 1. Punjabi A, Verma A and Boozer A, Phys Rev Lett 69 3322 (1992) and J Plasma Phys 52 91 (1994)
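A direct transcription of the map makes the search easy to reproduce in outline; the escape threshold and the crude bounded-orbit test below are assumed simplifications of the successive-bifurcation procedure, not the authors' exact criterion.

```python
# Iterate the Simple Map with k = 0.60, X0 = 0 and Y0 scanned over (0, 1);
# an orbit whose Y grows beyond an assumed escape threshold is taken to lie
# outside the last good surface (a crude proxy for detecting chaos onset).
def iterate_map(y0, k=0.60, n=10**6, y_escape=1.5):
    x, y = 0.0, y0
    for i in range(n):
        x = x - k * y * (1.0 - y)
        y = y + k * x
        if abs(y) > y_escape:            # left the confined region
            return i                     # iteration count at escape
    return n                             # stayed bounded for all n iterations

for y0 in (0.90, 0.99, 0.9971, 0.998):   # straddling the reported Y_lgs = 0.997135768
    print(y0, iterate_map(y0, n=100_000))
```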
Estimation for the Linear Model With Uncertain Covariance Matrices
NASA Astrophysics Data System (ADS)
Zachariah, Dave; Shariati, Nafiseh; Bengtsson, Mats; Jansson, Magnus; Chatterjee, Saikat
2014-03-01
We derive a maximum a posteriori estimator for the linear observation model, where the signal and noise covariance matrices are both uncertain. The uncertainties are treated probabilistically by modeling the covariance matrices with prior inverse-Wishart distributions. The nonconvex problem of jointly estimating the signal of interest and the covariance matrices is tackled by a computationally efficient fixed-point iteration as well as an approximate variational Bayes solution. The statistical performance of the estimators is compared numerically to state-of-the-art estimators from the literature, and they are shown to perform favorably.
Electromagnetic scattering of large structures in layered earths using integral equations
NASA Astrophysics Data System (ADS)
Xiong, Zonghou; Tripp, Alan C.
1995-07-01
An electromagnetic scattering algorithm for large conductivity structures in stratified media has been developed, based on the method of system iteration and spatial symmetry reduction using volume electric integral equations. The method of system iteration divides a structure into many substructures and solves the resulting matrix equation using a block iterative method. The block submatrices usually need to be stored on disk in order to save computer core memory. However, this requires a large disk for large structures. If the body is discretized into equal-size cells, it is possible to use the spatial symmetry relations of the Green's functions to regenerate the scattering impedance matrix in each iteration, thus avoiding expensive disk storage. Numerical tests show that the system iteration converges much faster than the conventional point-wise Gauss-Seidel iterative method. The number of cells does not significantly affect the rate of convergence. Thus the algorithm effectively reduces the solution of the scattering problem to order O(N²), instead of O(N³) as with direct solvers.
de Lima, Camila; Salomão Helou, Elias
2018-01-01
Iterative methods for tomographic image reconstruction have the computational cost of each iteration dominated by the computation of the (back)projection operator, which takes roughly O(N³) floating-point operations (flops) for N × N pixel images. Furthermore, classical iterative algorithms may take too many iterations to achieve acceptable images, making these techniques impractical for high-resolution images. Techniques have been developed in the literature to reduce the computational cost of the (back)projection operator to O(N² log N) flops, and incremental algorithms have been devised that reduce by an order of magnitude the number of iterations required to achieve acceptable images. The present paper introduces an incremental algorithm with a cost of O(N² log N) flops per iteration and applies it to the reconstruction of very large tomographic images obtained from synchrotron-light-illuminated data.
Generalized Pattern Search methods for a class of nonsmooth optimization problems with structure
NASA Astrophysics Data System (ADS)
Bogani, C.; Gasparo, M. G.; Papini, A.
2009-07-01
We propose a Generalized Pattern Search (GPS) method to solve a class of nonsmooth minimization problems, where the set of nondifferentiability is included in the union of known hyperplanes and, therefore, is highly structured. Both unconstrained and linearly constrained problems are considered. At each iteration, the set of poll directions is required to conform to the geometry of both the nondifferentiability set and the boundary of the feasible region near the current iterate. This is the key issue in guaranteeing the convergence of certain subsequences of iterates to points which satisfy first-order optimality conditions. Numerical experiments on some classical problems validate the method.
NASA Astrophysics Data System (ADS)
Mahmoudabadi, H.; Briggs, G.
2016-12-01
Gridded data sets, such as geoid models or datum shift grids, are commonly used in coordinate transformation algorithms. Grid files typically contain known or measured values at regular fixed intervals. The process of computing a value at an unknown location from the values in the grid data set is called "interpolation". Generally, interpolation methods predict a value at a given point by computing a weighted average of the known values in the neighborhood of the point. Geostatistical kriging is a widely used interpolation method for irregular networks. Kriging interpolation first analyzes the spatial structure of the input data, then generates a general model to describe the spatial dependencies. This model is used to calculate values at unsampled locations by finding the direction, shape, size, and weight of neighborhood points. Because it is based on a linear formulation of the best estimate, kriging is the optimal interpolation method in statistical terms. The kriging algorithm produces an unbiased prediction, as well as the spatial distribution of uncertainty, allowing the error of the interpolation to be estimated at any particular point. Kriging is nevertheless not widely used in geospatial applications today, especially applications that run on low-power devices or deal with large data files, because of the computational power and memory requirements of standard kriging techniques. In this paper, improvements to the implementation of directional kriging are introduced that take advantage of the structure of the grid files. The regular spacing of points simplifies finding the neighborhood points and computing their pairwise distances, reducing the complexity and improving the execution time of the kriging algorithm. The proposed method also iteratively loads small portions of the area of interest in different directions to reduce the amount of required memory, which makes the technique feasible on almost any processor. Comparison between kriging and other standard interpolation methods demonstrated more accurate estimates on less dense data files.
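The index-arithmetic shortcut that regular spacing buys can be sketched in a few lines; bilinear weights are used below as the simplest stand-in for kriging weights (an assumption: kriging would derive the weights from a fitted variogram rather than from cell-local fractions).

```python
import numpy as np

# On a regular grid, neighborhood lookup and distances come from index
# arithmetic instead of a spatial search. Grid origin, spacing and values
# are illustrative assumptions.
x0, y0, dx, dy = 0.0, 0.0, 1.0, 1.0
grid = np.arange(25.0).reshape(5, 5)     # known values, shape (ny, nx)

def interpolate(x, y):
    i = int((x - x0) // dx)              # column of the cell's lower-left node
    j = int((y - y0) // dy)              # row of the cell's lower-left node
    tx = (x - (x0 + i * dx)) / dx        # fractional position inside the cell
    ty = (y - (y0 + j * dy)) / dy
    z00, z10 = grid[j, i], grid[j, i + 1]
    z01, z11 = grid[j + 1, i], grid[j + 1, i + 1]
    return (z00 * (1 - tx) * (1 - ty) + z10 * tx * (1 - ty)
            + z01 * (1 - tx) * ty + z11 * tx * ty)

print(interpolate(1.5, 2.25))            # 12.75 for this linear test field
```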
Block Iterative Methods for Elliptic and Parabolic Difference Equations.
1981-09-01
Parter, S. V.; Steuerwalt, M. (Computer Sciences Department, University of Wisconsin-Madison). The analysis suggests that iterative algorithms that solve for several points at once will converge more rapidly than point algorithms. The Gaussian elimination algorithm is seen in this light to converge in one step. Frankel [14], Young [34], Arms, Gates, and Zondek [1], and Varga [32], using the algebraic structure...
Simultaneous digital super-resolution and nonuniformity correction for infrared imaging systems.
Meza, Pablo; Machuca, Guillermo; Torres, Sergio; Martin, Cesar San; Vera, Esteban
2015-07-20
In this article, we present a novel algorithm to achieve simultaneous digital super-resolution and nonuniformity correction from a sequence of infrared images. We propose to use spatial regularization terms that exploit nonlocal means and the absence of spatial correlation between the scene and the nonuniformity noise sources. We derive an iterative optimization algorithm based on a gradient descent minimization strategy. Results from infrared image sequences corrupted with simulated and real fixed-pattern noise show a competitive performance compared with state-of-the-art methods. A qualitative analysis on the experimental results obtained with images from a variety of infrared cameras indicates that the proposed method provides super-resolution images with significantly less fixed-pattern noise.
Markov Chain Monte Carlo from Lagrangian Dynamics.
Lan, Shiwei; Stathopoulos, Vasileios; Shahbaba, Babak; Girolami, Mark
2015-04-01
Hamiltonian Monte Carlo (HMC) improves the computational efficiency of the Metropolis-Hastings algorithm by reducing its random walk behavior. Riemannian HMC (RHMC) further improves the performance of HMC by exploiting the geometric properties of the parameter space. However, the geometric integrator used for RHMC involves implicit equations that require fixed-point iterations. In some cases, the computational overhead for solving implicit equations undermines RHMC's benefits. In an attempt to circumvent this problem, we propose an explicit integrator that replaces the momentum variable in RHMC by velocity. We show that the resulting transformation is equivalent to transforming Riemannian Hamiltonian dynamics to Lagrangian dynamics. Experimental results suggest that our method improves RHMC's overall computational efficiency in the cases considered. All computer programs and data sets are available online (http://www.ics.uci.edu/~babaks/Site/Codes.html) in order to allow replication of the results reported in this paper.
Salient Point Detection in Protrusion Parts of 3D Object Robust to Isometric Variations
NASA Astrophysics Data System (ADS)
Mirloo, Mahsa; Ebrahimnezhad, Hosein
2018-03-01
In this paper, a novel method is proposed to detect salient points of a 3D object that are robust to isometric variations and stable against scaling and noise. Salient points can be used as representative points from the object's protrusion parts in order to improve object matching and retrieval algorithms. The proposed algorithm starts by determining the first salient point of the model based on the average geodesic distance of several random points. Then, in each iteration, a new point is added to the set according to the previously selected salient points. Each time a salient point is added, the decision function is updated; this creates a condition ensuring that the next point is not extracted from the same protrusion part, so that a representative point is drawn from every protrusion part. The method is stable against isometric transformations, scaling, and noise of different strength levels because it uses a feature robust to isometric variations and considers the relations between the salient points. In addition, the number of points used in the averaging process is decreased, which leads to lower computational complexity in comparison with other salient point detection algorithms.
NASA Astrophysics Data System (ADS)
Palenčár, Rudolf; Sopkuliak, Peter; Palenčár, Jakub; Ďuriš, Stanislav; Suroviak, Emil; Halaj, Martin
2017-06-01
Evaluation of the uncertainties of temperature measurement by a standard platinum resistance thermometer calibrated at the defining fixed points according to ITS-90 is a problem that can be solved in different ways. The paper presents a procedure based on the propagation of distributions using the Monte Carlo method. The procedure employs the generation of pseudo-random numbers for the input resistances at the defining fixed points, assuming a multivariate Gaussian distribution for the input quantities. This allows the correlations among the resistances at the defining fixed points to be taken into account. The assumption of a Gaussian probability density function is acceptable with respect to the several sources of uncertainty in the resistances. In the case of uncorrelated resistances at the defining fixed points, the method is applicable to any probability density function. Validation of the law of propagation of uncertainty using the Monte Carlo method is presented using specific data for a 25 Ω standard platinum resistance thermometer in the temperature range from 0 to 660 °C. Using this example, we demonstrate the suitability of the method by validating its results.
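The propagation-of-distributions step has a compact sketch: draw correlated fixed-point resistances from a multivariate Gaussian and push each draw through the evaluation function. The means, covariance, and resistance-ratio evaluation below are illustrative assumptions, not the paper's SPRT data.

```python
import numpy as np

# Monte Carlo propagation of distributions with correlated inputs: sample the
# resistances at two defining fixed points jointly, then evaluate the derived
# quantity per draw. All numbers are assumed for illustration.
rng = np.random.default_rng(4)
mu = np.array([25.000, 27.500])          # assumed R(TPW) and R at a fixed point, ohm
cov = np.array([[1e-8, 4e-9],            # assumed variances and covariance, ohm^2
                [4e-9, 2e-8]])

draws = rng.multivariate_normal(mu, cov, size=200_000)
W = draws[:, 1] / draws[:, 0]            # resistance ratio W = R(T)/R(TPW) per draw
print(W.mean(), W.std(ddof=1))           # Monte Carlo estimate and standard uncertainty
```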
Aerodynamic Optimization of Rocket Control Surface Geometry Using Cartesian Methods and CAD Geometry
NASA Technical Reports Server (NTRS)
Nelson, Andrea; Aftosmis, Michael J.; Nemec, Marian; Pulliam, Thomas H.
2004-01-01
Aerodynamic design is an iterative process involving geometry manipulation and complex computational analysis subject to physical constraints and aerodynamic objectives. A design cycle consists of first establishing the performance of a baseline design, which is usually created with low-fidelity engineering tools, and then progressively optimizing the design to maximize its performance. Optimization techniques have evolved from relying exclusively on designer intuition and insight in traditional trial-and-error methods to sophisticated local and global search methods. Recent attempts at automating the search through a large design space with formal optimization methods include both database-driven and direct evaluation schemes. Databases are being used in conjunction with surrogate and neural network models as a basis on which to run optimization algorithms. Optimization algorithms are also being driven by the direct evaluation of objectives and constraints using high-fidelity simulations. Surrogate methods use data points obtained from simulations, and possibly gradients evaluated at the data points, to create mathematical approximations of a database. Neural network models work in a similar fashion, using a number of high-fidelity database calculations as training iterations to create a database model. Optimal designs are obtained by coupling an optimization algorithm to the database model. Evaluation of the current best design then either gives a new local optimum and/or increases the fidelity of the approximation model for the next iteration. Surrogate methods have also been developed that iterate on the selection of data points to decrease the uncertainty of the approximation model prior to searching for an optimal design. The database approximation models for each of these cases, however, become computationally expensive as the dimensionality increases. Thus the method of using optimization algorithms to search a database model becomes problematic as the number of design variables grows.
Some Remarks on GMRES for Transport Theory
NASA Technical Reports Server (NTRS)
Patton, Bruce W.; Holloway, James Paul
2003-01-01
We review some work on the application of GMRES to the solution of the discrete ordinates transport equation in one-dimension. We note that GMRES can be applied directly to the angular flux vector, or it can be applied to only a vector of flux moments as needed to compute the scattering operator of the transport equation. In the former case we illustrate both the delights and defects of ILU right-preconditioners for problems with anisotropic scatter and for problems with upscatter. When working with flux moments we note that GMRES can be used as an accelerator for any existing transport code whose solver is based on a stationary fixed-point iteration, including transport sweeps and DSA transport sweeps. We also provide some numerical illustrations of this idea. We finally show how space can be traded for speed by taking multiple transport sweeps per GMRES iteration. Key Words: transport equation, GMRES, Krylov subspace
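As a sketch of the moments-based acceleration idea, the snippet below hands a fixed-point problem phi = K phi + q to GMRES as the linear system (I - K)phi = q through a matrix-free operator; the dense random K standing in for the scattering/sweep operator is an assumption, not a transport discretization.

```python
import numpy as np
from scipy.sparse.linalg import gmres, LinearOperator

# GMRES as an accelerator for a stationary fixed-point iteration phi = K phi + q,
# rewritten as (I - K) phi = q. In a transport code, matvec would wrap one
# sweep of the existing solver; here K is an assumed contractive dense matrix.
rng = np.random.default_rng(5)
n = 200
K = rng.random((n, n))
K *= 0.9 / np.abs(np.linalg.eigvals(K)).max()   # scale so the fixed-point iteration converges
q = rng.random(n)

def matvec(phi):
    return phi - K @ phi                  # action of (I - K): one "sweep" per call

A = LinearOperator((n, n), matvec=matvec, dtype=float)
phi, info = gmres(A, q)
print(info, np.linalg.norm(phi - (K @ phi + q)))   # 0 means converged; residual check
```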
A proximity algorithm accelerated by Gauss-Seidel iterations for L1/TV denoising models
NASA Astrophysics Data System (ADS)
Li, Qia; Micchelli, Charles A.; Shen, Lixin; Xu, Yuesheng
2012-09-01
Our goal in this paper is to improve the computational performance of the proximity algorithms for the L1/TV denoising model. This leads us to a new characterization of all solutions to the L1/TV model via fixed-point equations expressed in terms of the proximity operators. Based upon this observation we develop an algorithm for solving the model and establish its convergence. Furthermore, we demonstrate that the proposed algorithm can be accelerated through the use of the componentwise Gauss-Seidel iteration so that the CPU time consumed is significantly reduced. Numerical experiments using the proposed algorithm for impulsive noise removal are included, with a comparison to three recently developed algorithms. The numerical results show that while the proposed algorithm enjoys a high quality of the restored images, as the other three known algorithms do, it performs significantly better in terms of computational efficiency measured in the CPU time consumed.
DOT National Transportation Integrated Search
1978-04-01
Volume 2 defines a new algorithm for the network equilibrium model that works in the space of path flows and is based on the theory of the fixed-point method. The goals of the study were broadly defined as the identification of aggregation practices and ...
Spotting the difference in molecular dynamics simulations of biomolecules
NASA Astrophysics Data System (ADS)
Sakuraba, Shun; Kono, Hidetoshi
2016-08-01
Comparing two trajectories from molecular simulations conducted under different conditions is not a trivial task. In this study, we apply a method called Linear Discriminant Analysis with ITERative procedure (LDA-ITER) to compare two molecular simulation results by finding the appropriate projection vectors. Because LDA-ITER attempts to determine a projection such that the projections of the two trajectories do not overlap, the comparison does not suffer from a strong anisotropy, which is an issue in protein dynamics. LDA-ITER is applied to two test cases: the T4 lysozyme protein simulation with or without a point mutation and the allosteric protein PDZ2 domain of hPTP1E with or without a ligand. The projection determined by the method agrees with the experimental data and previous simulations. The proposed procedure, which complements existing methods, is a versatile analytical method that is specialized to find the "difference" between two trajectories.
Rigorous high-precision enclosures of fixed points and their invariant manifolds
NASA Astrophysics Data System (ADS)
Wittig, Alexander N.
The well-established concept of Taylor Models is introduced, which offers highly accurate C0 enclosures of functional dependencies, combining high-order polynomial approximation of functions with rigorous estimates of the truncation error, performed using verified arithmetic. The focus of this work is on the application of Taylor Models in algorithms for strongly non-linear dynamical systems. A method is proposed to extend the existing implementation of Taylor Models in COSY INFINITY from double-precision coefficients to arbitrary-precision coefficients. Great care is taken to maintain the highest efficiency possible by adaptively adjusting the precision of higher-order coefficients in the polynomial expansion. High-precision operations are based on clever combinations of elementary floating-point operations yielding exact values for round-off errors. An experimental high-precision interval data type is developed and implemented. Algorithms for the verified computation of intrinsic functions based on the high-precision interval data type are developed and described in detail. The application of these operations in the implementation of high-precision Taylor Models is discussed. An application of Taylor Model methods to the verification of fixed points is presented by verifying the existence of a period-15 fixed point in a near-standard Henon map. Verification is performed using different verified methods such as double-precision Taylor Models, high-precision intervals, and high-precision Taylor Models. Results and performance of each method are compared. An automated rigorous fixed-point finder is implemented, allowing the fully automated search for all fixed points of a function within a given domain. It returns a list of verified enclosures of each fixed point, optionally verifying uniqueness within these enclosures. An application of the fixed-point finder to the rigorous analysis of beam transfer maps in accelerator physics is presented. Previous work done by Johannes Grote is extended to compute very accurate polynomial approximations to invariant manifolds of discrete maps of arbitrary dimension around hyperbolic fixed points. The algorithm presented allows for automatic removal of resonances occurring during construction. A method for the rigorous enclosure of invariant manifolds of continuous systems is introduced. Using methods developed for discrete maps, polynomial approximations of invariant manifolds of hyperbolic fixed points of ODEs are obtained. These approximations are outfitted with a sharp error bound which is verified to rigorously contain the manifolds. While we focus on the three-dimensional case, verification in higher dimensions is possible using similar techniques. Integrating the resulting enclosures with the verified COSY VI integrator, the initial manifold enclosures are expanded to yield sharp enclosures of large parts of the stable and unstable manifolds. To demonstrate the effectiveness of this method, we construct enclosures of the invariant manifolds of the Lorenz system and show pictures of the resulting manifold enclosures. To the best of our knowledge, these are the largest verified enclosures of manifolds in the Lorenz system in existence.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Arndt, S.; Merkel, P.; Monticello, D.A.
Fixed- and free-boundary equilibria for Wendelstein 7-X (W7-X) [W. Lotz et al., Plasma Physics and Controlled Nuclear Fusion Research 1990 (Proc. 13th Int. Conf., Washington, DC, 1990) (International Atomic Energy Agency, Vienna, 1991), Vol. 2, p. 603] configurations are calculated using the Princeton Iterative Equilibrium Solver (PIES) [A. H. Reiman et al., Comput. Phys. Commun. 43, 157 (1986)] to deal with magnetic islands and stochastic regions. Usually, these W7-X configurations require a large number of iterations for PIES convergence. Here, two methods have been successfully tested in an attempt to decrease the number of iterations needed for convergence. First, periodic sequences of different blending parameters are used. Second, the initial guess is vastly improved by using results of the Variational Moments Equilibrium Code (VMEC) [S. P. Hirshman et al., Phys. Fluids 26, 3553 (1983)]. Use of these two methods has allowed verification of the Hamada condition, and a tendency toward "self-healing" of islands has been observed.
Multiview point clouds denoising based on interference elimination
NASA Astrophysics Data System (ADS)
Hu, Yang; Wu, Qian; Wang, Le; Jiang, Huanyu
2018-03-01
Newly emerging low-cost depth sensors offer huge potential for three-dimensional (3-D) modeling, but their high noise prevents them from obtaining accurate results. We therefore propose a method for denoising registered multiview point clouds with high noise. The proposed method aims at fully using redundant information to eliminate the interference among point clouds of different views through an iterative procedure. In each iteration, noisy points are either deleted or moved to their weighted average targets according to two cases. Simulated data and practical data captured by a Kinect v2 sensor were tested in experiments, qualitatively and quantitatively. Results showed that the proposed method can effectively reduce noise and recover local features from highly noisy multiview point clouds with good robustness, compared to the truncated signed distance function and moving least squares (MLS). Moreover, the resulting low-noise point clouds can be further smoothed by MLS to achieve improved results. This study demonstrates the feasibility of obtaining fine 3-D models with high-noise devices, especially depth sensors such as the Kinect.
An efficient algorithm for the generalized Foldy-Lax formulation
NASA Astrophysics Data System (ADS)
Huang, Kai; Li, Peijun; Zhao, Hongkai
2013-02-01
Consider the scattering of a time-harmonic plane wave incident on a two-scale heterogeneous medium, which consists of scatterers that are much smaller than the wavelength and extended scatterers that are comparable to the wavelength. In this work we treat the small scatterers as isotropic point scatterers and use a generalized Foldy-Lax formulation to model wave propagation and capture multiple scattering among point scatterers and extended scatterers. Our formulation is given as a coupled system, which combines the original Foldy-Lax formulation for the point scatterers and the regular boundary integral equation for the extended obstacle scatterers. The existence and uniqueness of the solution of the formulation are established in terms of physical parameters such as the scattering coefficient and the separation distances. Computationally, an efficient and physically motivated Gauss-Seidel iterative method is proposed to solve the coupled system, where only a linear system of algebraic equations for the point scatterers or a boundary integral equation for a single extended obstacle scatterer needs to be solved at each iteration step. The convergence of the iterative method is also characterized in terms of physical parameters. Numerical tests for the far-field patterns of scattered fields arising from uniformly or randomly distributed point scatterers and single or multiple extended obstacle scatterers are presented.
Aeroelastic optimization methodology for viscous and turbulent flows
NASA Astrophysics Data System (ADS)
Barcelos Junior, Manuel Nascimento Dias
2007-12-01
In recent years, the development of faster computers and parallel processing has allowed the application of high-fidelity analysis methods to the aeroelastic design of aircraft. However, these methods are restricted to final design verification, mainly because of the computational cost involved in iterative design processes. This work is therefore concerned with the creation of a robust and efficient aeroelastic optimization methodology for inviscid, viscous and turbulent flows using high-fidelity analysis and sensitivity analysis techniques. Most research in aeroelastic optimization, for practical reasons, treats the aeroelastic system as a quasi-static inviscid problem. In this work, as a first step toward a more complete aeroelastic optimization methodology for realistic problems, an analytical sensitivity computation technique was developed and tested for quasi-static aeroelastic viscous and turbulent flow configurations. Viscous and turbulent effects are included by using an averaged discretization of the Navier-Stokes equations, coupled with an eddy viscosity turbulence model. For quasi-static aeroelastic problems, the traditional staggered solution strategy performs unsatisfactorily when applied to cases with strong fluid-structure coupling. Consequently, this work also proposes a solution methodology for aeroelastic and sensitivity analyses of quasi-static problems, based on the fixed point of an iterative nonlinear block Gauss-Seidel scheme. The methodology can also be interpreted as the solution of the Schur complement of the linearized systems of equations for the aeroelastic and sensitivity analyses. The methodologies developed in this work are tested and verified on realistic aeroelastic systems.
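In skeleton form, the quasi-static coupling iteration is an under-relaxed fixed point. Everything in this sketch (the stiffness matrix, the load law, the relaxation factor) is a toy stand-in for the actual fluid and structure solvers:

```python
import numpy as np

K = np.diag([4.0, 6.0])                      # toy structural stiffness
def f_aero(u):                               # hypothetical aerodynamic load
    return np.array([1.0, 2.0]) - 0.8 * np.tanh(u)

u = np.zeros(2)
omega = 0.7                                  # under-relaxation factor
for k in range(100):
    u_star = np.linalg.solve(K, f_aero(u))   # staggered fluid->structure sweep
    u_new = (1.0 - omega) * u + omega * u_star   # relaxed fixed-point update
    if np.linalg.norm(u_new - u) < 1e-12:
        break
    u = u_new
```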
Automated Parameter Studies Using a Cartesian Method
NASA Technical Reports Server (NTRS)
Murman, Scott M.; Aftosmis, Michael J.; Nemec, Marian
2004-01-01
Computational Fluid Dynamics (CFD) is now routinely used to analyze isolated points in a design space by performing steady-state computations at fixed flight conditions (Mach number, angle of attack, sideslip) for a fixed geometric configuration of interest. This "point analysis" provides detailed information about the flowfield, which aids an engineer in understanding, or correcting, a design. A point analysis is typically performed using high-fidelity methods at a handful of critical design points, e.g. a cruise or landing configuration, or a sample of points along a flight trajectory.
Assessment of Preconditioner for a USM3D Hierarchical Adaptive Nonlinear Method (HANIM) (Invited)
NASA Technical Reports Server (NTRS)
Pandya, Mohagna J.; Diskin, Boris; Thomas, James L.; Frink, Neal T.
2016-01-01
Enhancements to the previously reported mixed-element USM3D Hierarchical Adaptive Nonlinear Iteration Method (HANIM) framework have been made to further improve robustness, efficiency, and accuracy of computational fluid dynamic simulations. The key enhancements include a multi-color line-implicit preconditioner, a discretely consistent symmetry boundary condition, and a line-mapping method for the turbulence source term discretization. The USM3D iterative convergence for the turbulent flows is assessed on four configurations. The configurations include a two-dimensional (2D) bump-in-channel, the 2D NACA 0012 airfoil, a three-dimensional (3D) bump-in-channel, and a 3D hemisphere cylinder. The Reynolds Averaged Navier Stokes (RANS) solutions have been obtained using a Spalart-Allmaras turbulence model and families of uniformly refined nested grids. Two types of HANIM solutions using line- and point-implicit preconditioners have been computed. Additional solutions using the point-implicit preconditioner alone (PA) method that broadly represents the baseline solver technology have also been computed. The line-implicit HANIM shows superior iterative convergence in most cases with progressively increasing benefits on finer grids.
Intelligent process mapping through systematic improvement of heuristics
NASA Technical Reports Server (NTRS)
Ieumwananonthachai, Arthur; Aizawa, Akiko N.; Schwartz, Steven R.; Wah, Benjamin W.; Yan, Jerry C.
1992-01-01
The present system for automatic learning/evaluation of novel heuristic methods applicable to the mapping of communication-process sets on a computer network is based on testing a population of competing heuristic methods within a fixed time constraint. The TEACHER 4.1 prototype learning system, implemented for learning new postgame analysis heuristic methods, iteratively generates and refines the mappings of a set of communicating processes on a computer network. A systematic exploration of the space of possible heuristic methods is shown to promise significant improvement.
NASA Astrophysics Data System (ADS)
Donlon, Kevan; Ninkov, Zoran; Baum, Stefi
2018-07-01
Interpixel capacitance (IPC) is a deterministic electronic coupling that results in a portion of the collected signal incident on one pixel of a hybridized detector array being measured in adjacent pixels. Data collected by light-sensitive HgCdTe arrays which exhibit this coupling typically go uncorrected or are corrected by treating the coupling as a fixed point-spread function. Evidence suggests that this IPC coupling is not uniform across different signal and background levels. This variation invalidates assumptions that are key in decoupling techniques such as Wiener filtering or application of the Lucy-Richardson algorithm. Additionally, the variable IPC makes the point-spread function (PSF) depend on a star's signal level relative to the background level, among other parameters. With an IPC ranging from 0.68% to 1.45% over the full well depth of a sensor, a reasonable range for the H2RG arrays, the FWHM of JWST's NIRCam 405N band is degraded from 2.080 pix (0.″132), as expected from the diffraction pattern, to 2.186 pix (0.″142) when the star is just breaching the sensitivity limit of the system. For example, when attempting to use fixed-PSF fitting (e.g., assuming the PSF observed from a bright star in the field) to untangle two sources with a flux ratio of 4:1 and a center-to-center distance of 3 pixels, flux estimation can be off by upwards of 1.5%, with a separation error of 50 millipixels. To deal with this issue, an iterative non-stationary method for deconvolution is here proposed, implemented, and evaluated that can account for the signal-dependent nature of IPC.
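One way to mimic signal-dependent IPC in a deconvolution loop is to re-derive the coupling kernel from the current estimate inside a Richardson-Lucy-style iteration. The sketch below is schematic only: the 3x3 kernel shape, the linear coupling law, the assumed 60 ke- full well, and the single global alpha per iteration are all assumptions, not the paper's actual non-stationary update:

```python
import numpy as np
from scipy.signal import convolve2d

def ipc_kernel(alpha):
    """3x3 nearest-neighbour IPC coupling kernel (sums to 1)."""
    return np.array([[0.0, alpha, 0.0],
                     [alpha, 1.0 - 4.0 * alpha, alpha],
                     [0.0, alpha, 0.0]])

# Hypothetical coupling law: alpha rises from 0.68% to 1.45% over an
# assumed 60 ke- full well, echoing the range quoted above.
alpha_of_signal = lambda s: 0.0068 + 0.0077 * np.clip(s / 6.0e4, 0.0, 1.0)

def nonstationary_rl(image, n_iter=25):
    """Richardson-Lucy-style loop whose kernel is re-derived each
    iteration from the current estimate (a schematic stand-in for a
    genuinely pixel-wise, signal-dependent deconvolution)."""
    est = image.astype(float).copy()
    for _ in range(n_iter):
        psf = ipc_kernel(alpha_of_signal(est.mean()))
        blurred = convolve2d(est, psf, mode="same", boundary="symm")
        ratio = image / np.maximum(blurred, 1e-12)
        est *= convolve2d(ratio, psf[::-1, ::-1], mode="same", boundary="symm")
    return est
```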
A multiresolution hierarchical classification algorithm for filtering airborne LiDAR data
NASA Astrophysics Data System (ADS)
Chen, Chuanfa; Li, Yanyan; Li, Wei; Dai, Honglei
2013-08-01
We present a multiresolution hierarchical classification (MHC) algorithm for separating ground from non-ground LiDAR point clouds based on point residuals from an interpolated raster surface. MHC includes three levels of hierarchy, with the cell resolution and residual threshold increasing simultaneously from the low to the high level of the hierarchy. At each level, the surface is iteratively interpolated towards the ground using a thin plate spline (TPS) until no new ground points are classified, and the classified ground points are used to update the surface in the next iteration. Fifteen groups of benchmark datasets, provided by the International Society for Photogrammetry and Remote Sensing (ISPRS) commission, were used to compare the performance of MHC with that of 17 other published filtering methods. Results indicated that MHC, with an average total error of 4.11% and an average Cohen's kappa coefficient of 86.27%, performs better than all the other filtering methods.
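The level-by-level loop can be sketched with SciPy's thin-plate-spline interpolator. The seeding rule, cell size, thresholds and smoothing value are illustrative guesses rather than the paper's tuned settings, and the accompanying increase in raster resolution is omitted:

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

def mhc_filter(xyz, thresholds=(0.2, 0.3, 0.5), seed_cell=10.0):
    """Schematic MHC-style ground filter: seed with the lowest point per
    coarse cell, then per level grow the ground set with points whose
    TPS residual is below that level's threshold."""
    xy, z = xyz[:, :2], xyz[:, 2]
    cells = np.floor(xy / seed_cell).astype(int)
    ground = np.zeros(len(xyz), dtype=bool)
    for c in np.unique(cells, axis=0):          # lowest point per cell
        m = np.all(cells == c, axis=1)
        ground[np.flatnonzero(m)[np.argmin(z[m])]] = True
    for thr in thresholds:                      # threshold grows per level
        while True:
            tps = RBFInterpolator(xy[ground], z[ground],
                                  kernel="thin_plate_spline", smoothing=1.0)
            new = (~ground) & (np.abs(z - tps(xy)) < thr)
            if not new.any():
                break
            ground |= new                       # surface updated next pass
    return ground
```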
A Scalable Architecture of a Structured LDPC Decoder
NASA Technical Reports Server (NTRS)
Lee, Jason Kwok-San; Lee, Benjamin; Thorpe, Jeremy; Andrews, Kenneth; Dolinar, Sam; Hamkins, Jon
2004-01-01
We present a scalable decoding architecture for a certain class of structured LDPC codes. The codes are designed using a small (n,r) protograph that is replicated Z times to produce a decoding graph for a (Z x n, Z x r) code. Using this architecture, we have implemented a decoder for a (4096,2048) LDPC code on a Xilinx Virtex-II 2000 FPGA, and achieved decoding speeds of 31 Mbps with 10 fixed iterations. The implemented message-passing algorithm uses an optimized 3-bit non-uniform quantizer that operates with 0.2dB implementation loss relative to a floating point decoder.
Twisted Burnside-Frobenius Theory for Endomorphisms of Polycyclic Groups
NASA Astrophysics Data System (ADS)
Fel'shtyn, A. L.; Troitsky, E. V.
2018-01-01
Let R(ϕ) be the number of ϕ-conjugacy (or Reidemeister) classes of an endomorphism ϕ of a group G. We prove, for several classes of groups (including polycyclic ones), that the number R(ϕ) is equal to the number of fixed points of the induced mapping on an appropriate subspace of the unitary dual space Ĝ, when R(ϕ) < ∞. Applying the result to iterations of ϕ, we obtain Gauss congruences for Reidemeister numbers. In contrast to the case of automorphisms, studied previously, there are plenty of examples satisfying the above finiteness condition, even among groups with the R∞ property.
Maximum likelihood clustering with dependent feature trees
NASA Technical Reports Server (NTRS)
Chittineni, C. B. (Principal Investigator)
1981-01-01
The decomposition of the mixture density of the data into its normal component densities is considered. The densities are approximated with first-order dependent feature trees using mutual information and distance measures as criteria. Expressions are presented for the criteria when the densities are Gaussian. By defining different types of nodes in a general dependent feature tree, maximum likelihood equations are developed for the estimation of parameters using fixed point iterations. The field structure of the data is also taken into account in developing the maximum likelihood equations. Experimental results from the processing of remotely sensed multispectral scanner imagery data are included.
A heuristic statistical stopping rule for iterative reconstruction in emission tomography.
Ben Bouallègue, F; Crouzet, J F; Mariano-Goulart, D
2013-01-01
We propose a statistical stopping criterion for iterative reconstruction in emission tomography based on a heuristic statistical description of the reconstruction process. The method was assessed for MLEM reconstruction. Based on Monte Carlo numerical simulations and using a perfectly modeled system matrix, our method was compared with classical iterative reconstruction followed by low-pass filtering in terms of Euclidean distance to the exact object, noise, and resolution. The stopping criterion was then evaluated with realistic PET data of a Hoffman brain phantom produced using the GATE platform for different count levels. The numerical experiments showed that, compared with the classical method, our technique yielded a significant improvement in the noise-resolution tradeoff for a wide range of counting statistics compatible with routine clinical settings. When working with realistic data, the stopping rule allowed a qualitatively and quantitatively efficient determination of the optimal image. Our method appears to give a reliable estimate of the optimal stopping point for iterative reconstruction. It should thus be of practical interest, as it produces images with similar or better quality than classical post-filtered iterative reconstruction with a mastered computation time.
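For context, the MLEM update itself is a one-line multiplicative fixed point; the sketch below pairs it with a simple relative-change stopping heuristic standing in for the paper's statistical rule:

```python
import numpy as np

def mlem_with_stopping(A, y, n_max=200, rel_change=2e-3):
    """MLEM for y ~ Poisson(A x) with a simple heuristic stop: quit once
    the relative image update falls below a preset level (a stand-in
    for the statistical rule proposed in the paper)."""
    x = np.ones(A.shape[1])
    sens = A.sum(axis=0)                     # sensitivity image A^T 1
    for k in range(n_max):
        proj = np.maximum(A @ x, 1e-12)
        x_new = x * (A.T @ (y / proj)) / np.maximum(sens, 1e-12)
        if np.linalg.norm(x_new - x) < rel_change * np.linalg.norm(x):
            return x_new, k + 1
        x = x_new
    return x, n_max
```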
SU-E-T-446: Group-Sparsity Based Angle Generation Method for Beam Angle Optimization
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gao, H
2015-06-15
Purpose: This work develops an effective algorithm for beam angle optimization (BAO), with emphasis on enabling further improvement over existing treatment-dependent templates based on clinical knowledge and experience. Methods: The proposed BAO algorithm uses a priori beam angle templates as the initial guess and iteratively generates angular updates for this initial set (the angle generation method), with improved dose conformality as quantitatively measured by the objective function. During each iteration, we select "the test angle" in the initial set and use group-sparsity based fluence map optimization to identify "the candidate angle" for updating "the test angle": all angles in the initial set except "the test angle", namely "the fixed set", are set free, i.e., with no group-sparsity penalty, while the rest of the angles, including "the test angle", form "the working set" during this iteration. "The candidate angle" is then selected as the angle in "the working set" with locally maximal group sparsity and the smallest objective function value, and it replaces "the test angle" if "the fixed set" plus "the candidate angle" yields a smaller objective function value when solving the standard fluence map optimization (with no group-sparsity regularization). The other angles in the initial set are in turn selected as "the test angle" for angular updates, and this chain of updates is iterated until a full loop identifies no new angular update. Results: Tests using the MGH public prostate dataset demonstrated the effectiveness of the proposed BAO algorithm. For example, the optimized angular set from the proposed BAO algorithm was better than the MGH template. Conclusion: A new BAO algorithm is proposed based on the angle generation method via group sparsity, with improved dose conformality relative to the given template. Hao Gao was partially supported by the NSFC (#11405105), the 973 Program (#2015CB856000) and the Shanghai Pujiang Talent Program (#14PJ1404500).
Iterative feature refinement for accurate undersampled MR image reconstruction
NASA Astrophysics Data System (ADS)
Wang, Shanshan; Liu, Jianbo; Liu, Qiegen; Ying, Leslie; Liu, Xin; Zheng, Hairong; Liang, Dong
2016-05-01
Accelerating MR scanning is of great significance for clinical, research and advanced applications, and one main effort to achieve this is the utilization of compressed sensing (CS) theory. Nevertheless, existing CS-MRI approaches still have limitations such as fine-structure loss or high computational complexity. This paper proposes a novel iterative feature refinement (IFR) module for accurate MR image reconstruction from undersampled k-space data. Integrating IFR with CS-MRI equipped with fixed transforms, we develop an IFR-CS method to restore meaningful structures and details that would otherwise be discarded, without introducing too much additional complexity. Specifically, the proposed IFR-CS is realized with three iterative steps, namely sparsity-promoting denoising, feature refinement and Tikhonov regularization. Experimental results on both simulated and in vivo MR datasets have shown that the proposed module has a strong capability to capture image details, and that IFR-CS is comparable and even superior to other state-of-the-art reconstruction approaches.
Multi-point objective-oriented sequential sampling strategy for constrained robust design
NASA Astrophysics Data System (ADS)
Zhu, Ping; Zhang, Siliang; Chen, Wei
2015-03-01
Metamodelling techniques are widely used to approximate system responses of expensive simulation models. In association with the use of metamodels, objective-oriented sequential sampling methods have been demonstrated to be effective in balancing the need for searching an optimal solution versus reducing the metamodelling uncertainty. However, existing infilling criteria are developed for deterministic problems and restricted to one sampling point in one iteration. To exploit the use of multiple samples and identify the true robust solution in fewer iterations, a multi-point objective-oriented sequential sampling strategy is proposed for constrained robust design problems. In this article, earlier development of objective-oriented sequential sampling strategy for unconstrained robust design is first extended to constrained problems. Next, a double-loop multi-point sequential sampling strategy is developed. The proposed methods are validated using two mathematical examples followed by a highly nonlinear automotive crashworthiness design example. The results show that the proposed method can mitigate the effect of both metamodelling uncertainty and design uncertainty, and identify the robust design solution more efficiently than the single-point sequential sampling approach.
NASA Astrophysics Data System (ADS)
da Silva, Rodrigo; Pearce, Jonathan V.; Machin, Graham
2017-06-01
The fixed points of the International Temperature Scale of 1990 (ITS-90) are the basis of the calibration of standard platinum resistance thermometers (SPRTs). Impurities in the fixed point material at the level of parts per million can give rise to an elevation or depression of the fixed point temperature of the order of millikelvins, which often represents the most significant contribution to the uncertainty of SPRT calibrations. A number of methods for correcting for the effect of impurities have been advocated, but it is becoming increasingly evident that no single method can be used in isolation. In this investigation, a suite of five aluminium fixed point cells (defined ITS-90 freezing temperature 660.323 °C) have been constructed, each cell using metal sourced from a different supplier. The five cells have very different levels and types of impurities. For each cell, chemical assays based on the glow discharge mass spectroscopy (GDMS) technique have been obtained from three separate laboratories. In addition, a series of high quality, long duration freezing curves have been obtained for each cell, using three different high quality SPRTs, all measured under nominally identical conditions. The set of GDMS analyses and freezing curves were then used to compare the different proposed impurity correction methods. It was found that the most consistent corrections were obtained with a hybrid correction method based on the sum of individual estimates (SIE) and overall maximum estimate (OME), namely the SIE/Modified-OME method. Also highly consistent was the correction technique based on fitting a Scheil solidification model to the measured freezing curves, provided certain well-defined constraints are applied. Importantly, the most consistent methods are those which do not depend significantly on the chemical assay.
Adaptive θ-methods for pricing American options
NASA Astrophysics Data System (ADS)
Khaliq, Abdul Q. M.; Voss, David A.; Kazmi, Kamran
2008-12-01
We develop adaptive θ-methods for solving the Black-Scholes PDE for American options. By adding a small, continuous term, the Black-Scholes PDE becomes an advection-diffusion-reaction equation on a fixed spatial domain. Standard implementation of θ-methods would require a Newton-type iterative procedure at each time step, thereby increasing the computational complexity of the methods. Our linearly implicit approach avoids such complications. We establish a general framework under which θ-methods satisfy a discrete version of the positivity constraint characteristic of American options, and numerically demonstrate the sensitivity of the constraint. The positivity results are established for the single-asset and independent two-asset models. In addition, we have incorporated and analyzed an adaptive time-step control strategy to increase the computational efficiency. Numerical experiments are presented for one- and two-asset American options, using adaptive exponential splitting for two-asset problems. The approach is compared with an iterative solution of the two-asset problem in terms of computational efficiency.
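The linearly implicit idea is that only the linear spatial operator is treated with the θ-method while the small penalty term is lagged at the old time level, so each step is a single sparse solve. The grid handling and penalty function below are illustrative assumptions, and boundary rows are left untreated for brevity:

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import spsolve

def theta_step(V, S, dt, theta, r, sigma, penalty):
    """One linearly implicit theta-step for a penalised Black-Scholes
    PDE: the linear operator L is split theta-implicitly and the small
    penalty term is lagged, so no Newton iteration is needed."""
    dS = S[1] - S[0]
    a = 0.5 * sigma**2 * S**2 / dS**2          # diffusion coefficients
    b = r * S / (2.0 * dS)                     # advection coefficients
    L = sp.diags([(a - b)[1:], -2.0 * a - r, (a + b)[:-1]], [-1, 0, 1],
                 format="csc")
    I = sp.identity(len(S), format="csc")
    rhs = (I + (1.0 - theta) * dt * L) @ V + dt * penalty(V)  # lagged term
    return spsolve(I - theta * dt * L, rhs)
```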
NASA Astrophysics Data System (ADS)
Wähmer, M.; Anhalt, K.; Hollandt, J.; Klein, R.; Taubert, R. D.; Thornagel, R.; Ulm, G.; Gavrilov, V.; Grigoryeva, I.; Khlevnoy, B.; Sapritsky, V.
2017-10-01
Absolute spectral radiometry is currently the only established primary thermometric method for the temperature range above 1300 K. Up to now, the ongoing improvements of high-temperature fixed points and their formal implementation into an improved temperature scale with the mise en pratique for the definition of the kelvin have relied solely on single-wavelength absolute radiometry traceable to the cryogenic radiometer. Two alternative primary thermometric methods, yielding comparable or possibly even smaller uncertainties, have been proposed in the literature. They use ratios of irradiances to determine the thermodynamic temperature traceable to blackbody radiation and synchrotron radiation. At PTB, a project has been established in cooperation with VNIIOFI to use, for the first time, all three methods simultaneously for the determination of the phase transition temperatures of high-temperature fixed points. For this, a dedicated four-wavelength ratio filter radiometer was developed. With all three thermometric methods performed independently and in parallel, we aim to compare the potential and practical limitations of all three methods, disclose possibly undetected systematic effects of each method, and thereby confirm or improve the previous measurements traceable to the cryogenic radiometer. This will give further and independent confidence in the thermodynamic temperature determination of the phase transitions of high-temperature fixed points.
Area- and energy-efficient CORDIC accelerators in deep sub-micron CMOS technologies
NASA Astrophysics Data System (ADS)
Vishnoi, U.; Noll, T. G.
2012-09-01
The COordinate Rotation DIgital Computer (CORDIC) algorithm is a well-known, versatile approach that is widely applied in today's SoCs, especially but not exclusively for digital communications. Dedicated CORDIC blocks can be implemented in deep sub-micron CMOS technologies at very low area and energy costs and are attractive as hardware accelerators for Application Specific Instruction Processors (ASIPs), thereby overcoming the well-known energy vs. flexibility conflict. Optimizing Global Navigation Satellite System (GNSS) receivers to reduce hardware complexity is an important research topic at present. In such receivers, CORDIC accelerators can be used for digital baseband processing (fixed-point) and in Position-Velocity-Time estimation (floating-point). A micro-architecture well suited to such applications is presented. This architecture is parameterized by the wordlengths as well as the number of iterations and can easily be extended to floating-point data formats. Moreover, area can be traded for throughput by partially or even fully unrolling the iterations, with the degree of pipelining organized as one CORDIC iteration per cycle. From the architectural description, the macro layout can be generated fully automatically using an in-house datapath generator tool. Since the adders and shifters play an important role in optimizing the CORDIC block, they must be carefully optimized for high area and energy efficiency in the underlying technology; for this purpose, carry-select adders and logarithmic shifters were chosen. Device dimensioning was automatically optimized with respect to dynamic and static power, area and performance using the in-house tool. The fully sequential CORDIC block for fixed-point digital baseband processing features a wordlength of 16 bits, requires 5232 transistors, is implemented in a 40-nm CMOS technology and occupies a silicon area of only 1560 μm². The maximum clock frequency from circuit simulation of the extracted netlist is 768 MHz under typical, and 463 MHz under worst-case, technology and application corner conditions. Simulated dynamic power dissipation is 0.24 μW MHz⁻¹ at 0.9 V; static power is 38 μW in the slow corner, 65 μW in the typical corner and 518 μW in the fast corner. The latter can be reduced by 43% in a 40-nm CMOS technology using 0.5 V reverse back-bias. These features are compared with results from different design styles as well as with an implementation in 28-nm CMOS technology. Interestingly, in the latter case area scales as expected, but worst-case performance and energy no longer scale well.
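The per-cycle shift-add structure of a rotation-mode CORDIC iteration is easy to emulate with integer arithmetic. The 16-iteration, 14-fractional-bit wordlength below is an assumption chosen to echo the 16-bit datapath described above:

```python
import math

N_ITER, FRAC = 16, 14                        # iterations, fractional bits
ANGLES = [round(math.atan(2.0**-i) * (1 << FRAC)) for i in range(N_ITER)]
K = 1.0
for i in range(N_ITER):
    K *= math.cos(math.atan(2.0**-i))        # CORDIC gain compensation

def cordic_rotate(angle_rad):
    """Return (cos, sin) of angle via fixed-point shift-add iterations,
    one add/sub and two shifts per 'cycle' as in the sequential block."""
    x, y = round(K * (1 << FRAC)), 0         # pre-scaled start vector
    z = round(angle_rad * (1 << FRAC))
    for i in range(N_ITER):
        d = 1 if z >= 0 else -1              # steer the residual z toward 0
        x, y = x - d * (y >> i), y + d * (x >> i)
        z -= d * ANGLES[i]
    return x / (1 << FRAC), y / (1 << FRAC)

print(cordic_rotate(0.6))                    # ~ (cos 0.6, sin 0.6)
```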
Marginal Consistency: Upper-Bounding Partition Functions over Commutative Semirings.
Werner, Tomás
2015-07-01
Many inference tasks in pattern recognition and artificial intelligence lead to partition functions in which addition and multiplication are abstract binary operations forming a commutative semiring. By generalizing max-sum diffusion (one of the convergent message passing algorithms for approximate MAP inference in graphical models), we propose an iterative algorithm to upper-bound such partition functions over commutative semirings. The iteration of the algorithm is remarkably simple: change any two factors of the partition function such that their product remains the same and their overlapping marginals become equal. In many commutative semirings, repeating this iteration for different pairs of factors converges to a fixed point where the overlapping marginals of every pair of factors coincide. We call this state marginal consistency. Meanwhile, an upper bound on the partition function monotonically decreases. This abstract algorithm unifies several existing algorithms, including max-sum diffusion and basic constraint propagation (or local consistency) algorithms in constraint programming. We further construct a hierarchy of marginal consistencies of increasingly higher levels and show that any such level can be enforced by adding identity factors of higher arity (order). Finally, we discuss instances of the framework for several semirings, including the distributive lattice and the max-sum and sum-product semirings.
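For two factors sharing one variable in the sum-product semiring, the iteration admits a very short sketch: rescale both factors so their overlapping marginals coincide while their product, and hence the partition function, is unchanged. With only two factors one step already equalizes the marginals; the loop stands in for the cyclic repetition over many factor pairs:

```python
import numpy as np

# Factors f(x1, x2) and g(x2, x3) over the sum-product semiring.
rng = np.random.default_rng(0)
f = rng.random((4, 5)) + 0.1
g = rng.random((5, 3)) + 0.1

for _ in range(100):                # cyclic repetition over factor pairs
    mf = f.sum(axis=0)              # marginal of f over shared x2
    mg = g.sum(axis=1)              # marginal of g over shared x2
    c = np.sqrt(mg / mf)
    f = f * c                       # f(x1,x2) *= c(x2)
    g = g / c[:, None]              # g(x2,x3) /= c(x2): product unchanged

assert np.allclose(f.sum(axis=0), g.sum(axis=1))   # marginal consistency
```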
Construction, classification and parametrization of complex Hadamard matrices
NASA Astrophysics Data System (ADS)
Szöllősi, Ferenc
To improve the design of nuclear systems, high-fidelity neutron fluxes are required. Leadership-class machines provide platforms on which very large problems can be solved. Computing such fluxes efficiently requires numerical methods with good convergence properties and algorithms that can scale to hundreds of thousands of cores. Many 3-D deterministic transport codes are decomposable in space and angle only, limiting them to tens of thousands of cores. Most codes rely on methods such as Gauss-Seidel for fixed source problems and power iteration for eigenvalue problems, which can be slow to converge for challenging problems like those with highly scattering materials or high dominance ratios. Three methods have been added to the 3-D SN transport code Denovo that are designed to improve convergence and enable the full use of cutting-edge computers. The first is a multigroup Krylov solver that converges more quickly than Gauss-Seidel and parallelizes the code in energy such that Denovo can use hundreds of thousands of cores effectively. The second is Rayleigh quotient iteration (RQI), an old method applied in a new context. This eigenvalue solver finds the dominant eigenvalue in a mathematically optimal way and should converge in fewer iterations than power iteration. RQI creates energy-block-dense equations that the new Krylov solver treats efficiently. However, RQI can have convergence problems because it creates poorly conditioned systems. This can be overcome with preconditioning. The third method is a multigrid-in-energy preconditioner. The preconditioner takes advantage of the new energy decomposition because the grids are in energy rather than space or angle. The preconditioner greatly reduces the iteration count for many problem types and scales well in energy. It also allows RQI to succeed on problems it could not solve otherwise. The methods added to Denovo accomplish the goals of this work. They converge in fewer iterations than traditional methods and enable the use of hundreds of thousands of cores. Each method can be used individually, with the multigroup Krylov solver and multigrid-in-energy preconditioner being particularly successful on their own. The largest benefit, though, comes from using these methods in concert.
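The RQI kernel referred to above is compact; the sketch below uses a dense symmetric toy matrix, whereas in the transport setting each shifted solve is a large preconditioned system:

```python
import numpy as np

def rayleigh_quotient_iteration(A, x0, tol=1e-12, max_iter=50):
    """Rayleigh quotient iteration for a symmetric matrix A: shift by
    the current Rayleigh quotient and solve the (nearly singular)
    shifted system each step. Convergence is cubic near an eigenpair;
    the ill-conditioned solves are why preconditioning matters at scale."""
    x = x0 / np.linalg.norm(x0)
    rho = x @ (A @ x)
    for k in range(max_iter):
        try:
            y = np.linalg.solve(A - rho * np.eye(len(x)), x)
        except np.linalg.LinAlgError:        # shift hit an eigenvalue exactly
            break
        x = y / np.linalg.norm(y)
        rho = x @ (A @ x)
        if np.linalg.norm(A @ x - rho * x) < tol:
            break
    return rho, x

A = np.diag([1.0, 2.0, 9.0]) + 0.01          # symmetric toy operator
rho, x = rayleigh_quotient_iteration(A, np.array([1.0, 1.0, 1.0]))
```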
NASA Astrophysics Data System (ADS)
Edler, F.; Huang, K.
2016-12-01
Fifteen miniature fixed-point cells made of three different ceramic crucible materials (Al2O3, ZrO2, and Al2O3(86%)+ZrO2(14%)) were filled with pure palladium and used to calibrate type B thermocouples (Pt30%Rh/Pt6%Rh). A critical point when using miniature fixed points with small amounts of fixed-point material is the analysis of the melting curves, which exhibit significant slopes during the melting process compared with the flat melting plateaus obtainable using conventional fixed-point cells. The method of the extrapolated starting-point temperature, using a straight-line approximation of the melting plateau, was applied to analyze the melting curves. This method allowed an unambiguous determination of an electromotive force (emf) assignable as the melting temperature. The strict consideration of two constraints resulted in a unique, repeatable and objective method to determine the emf at the melting temperature within an uncertainty of about 0.1 μV. The lifetime and long-term stability of the miniature fixed points were investigated by performing more than 100 melt/freeze cycles for each crucible of the different ceramic materials. No failure of the crucibles occurred, indicating excellent mechanical stability of the investigated miniature cells. The consequent limitation of heating rates to below ±3.5 K min⁻¹ above 1100 °C and the carefully and completely filled crucibles (the liquid palladium occupies the whole volume of the crucible) are the reasons the crucibles were successfully prevented from breaking. The thermal stability of the melting temperature of palladium was excellent when using the crucibles made of Al2O3(86%)+ZrO2(14%) and ZrO2. Emf drifts over the total duration of the long-term investigation were below a temperature equivalent of about 0.1 K to 0.2 K.
From cat's eyes to disjoint multicellular natural convection flow in tall tilted cavities
NASA Astrophysics Data System (ADS)
Nicolás, Alfredo; Báez, Elsa; Bermúdez, Blanca
2011-07-01
Numerical results for two-dimensional natural convection problems in air-filled tall cavities are reported to study the change of the cat's eyes flow as some parameters vary, namely the aspect ratio A and the angle of inclination ϕ of the cavity, with the Rayleigh number Ra mostly fixed; explicitly, the ranges of variation are 12⩽A⩽20 and 0°⩽ϕ⩽270°, at about Ra=1.1×10. A novel contribution of this work is the transition from the cat's eyes changes, as A varies, to a disjoint multicellular flow, as ϕ varies. These flows may be modeled by the unsteady Boussinesq approximation in stream function and vorticity variables, which is solved with a fixed point iterative process applied to the nonlinear elliptic system that results after time discretization. The validation of the results relies on mesh-size and time-step independence studies.
An iterative method for obtaining the optimum lightning location on a spherical surface
NASA Technical Reports Server (NTRS)
Chao, Gao; Qiming, MA
1991-01-01
A brief introduction to the basic principles of an eigen method used to obtain the optimum source location of lightning is presented. The location of the optimum source is obtained by using multiple direction finders (DFs) on a spherical surface. An improvement of this method, which takes the source-DF distance as a constant, is presented. It is pointed out that using signal strength as a weight factor is not ideal, because of the inexact inverse relation between signal strength and distance and the inaccuracy of the signal amplitude. An iterative calculation method is presented that uses the distance from the source to each DF as a weight factor. This improved method has higher accuracy and needs only a little more calculation time. Computer simulations for a four-DF system are presented to show the improvement in location accuracy achieved through the iterative method.
NASA Astrophysics Data System (ADS)
Guillemaut, C.; Lennholm, M.; Harrison, J.; Carvalho, I.; Valcarcel, D.; Felton, R.; Griph, S.; Hogben, C.; Lucock, R.; Matthews, G. F.; Perez Von Thun, C.; Pitts, R. A.; Wiesen, S.; contributors, JET
2017-04-01
Burning plasmas with 500 MW of fusion power on ITER will rely on partially detached divertor operation to keep target heat loads at manageable levels. Such divertor regimes will be maintained by a real-time control system using the seeding of radiative impurities like nitrogen (N), neon or argon as the actuator and one or more diagnostic signals as sensors. Recently, real-time control of divertor detachment has been successfully achieved in Type I ELMy H-mode JET ITER-like-wall discharges by using saturation current (I_sat) measurements from divertor Langmuir probes as feedback signals to control the level of N seeding. The degree of divertor detachment is calculated in real time by comparing the outer-target peak I_sat measurements to the peak I_sat value at the roll-over in order to control the opening of the N injection valve. Real-time control of detachment has been achieved in both fixed and swept strike-point experiments. The system has been progressively improved and can now automatically drive the divertor conditions from attached through high recycling and roll-over down to a user-defined level of detachment. Such a demonstration is a successful proof of principle in the context of future operation on ITER, which will be extensively equipped with divertor target probes.
Automatic extraction of the mid-sagittal plane using an ICP variant
NASA Astrophysics Data System (ADS)
Fieten, Lorenz; Eschweiler, Jörg; de la Fuente, Matías; Gravius, Sascha; Radermacher, Klaus
2008-03-01
Precise knowledge of the mid-sagittal plane is important for the assessment and correction of several deformities. Furthermore, the mid-sagittal plane can be used to define standardized coordinate systems such as pelvis or skull coordinate systems. A popular approach to mid-sagittal plane computation is based on the selection of anatomical landmarks located either directly on the plane or symmetrically to it. However, the manual selection of landmarks is a tedious, time-consuming and error-prone task requiring great care. To overcome this drawback, it was previously suggested to use the iterative closest point (ICP) algorithm: after an initial mirroring of the data points on a default mirror plane, the mirrored data points are registered iteratively to the model points using rigid transforms, and finally a reflection transform approximating the cumulative transform is extracted. In this work, we present an ICP variant for the iterative optimization of the reflection parameters. It is based on a closed-form solution to the least-squares problem of matching data points to model points using a reflection. In experiments on CT pelvis and skull datasets our method showed a better ability to match homologous areas.
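For known correspondences, a closed-form least-squares reflection can be obtained by a small modification of the SVD-based rigid alignment, forcing the determinant of the orthogonal factor to -1. This is one standard way to write it and is not necessarily the paper's exact derivation:

```python
import numpy as np

def best_fit_reflection(data, model):
    """Closed-form least-squares orthogonal map with det = -1 (a
    reflection) plus translation, taking data points onto corresponding
    model points: Kabsch-style SVD with the sign of the last singular
    direction forced so that det(Q) = -1."""
    dc, mc = data.mean(axis=0), model.mean(axis=0)
    H = (data - dc).T @ (model - mc)        # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    S = np.eye(3)
    S[2, 2] = -np.linalg.det(Vt.T @ U.T)    # force det(Q) = -1
    Q = Vt.T @ S @ U.T
    t = mc - Q @ dc
    return Q, t                             # x -> Q x + t

# quick check: recover a mirroring through the plane x = 0
M = np.random.default_rng(0).random((10, 3))
D = M * np.array([-1.0, 1.0, 1.0])
Q, t = best_fit_reflection(D, M)
assert np.allclose(Q @ D.T + t[:, None], M.T)
```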
NASA Astrophysics Data System (ADS)
Ogorodnikov, Yuri; Khachay, Michael; Pljonkin, Anton
2018-04-01
We describe the possibility of employing a special case of the 3-SAT problem, stemming from the well-known integer factorization problem, for quantum cryptography. It is known that, for every instance of our 3-SAT setting, the given 3-CNF is satisfiable by a unique truth assignment, and the goal is to find this assignment. Since the complexity status of the factorization problem is still undefined, the development of approximation algorithms and heuristics attracts the interest of numerous researchers. One promising approach to constructing approximation techniques is based on a real-valued relaxation of the given 3-CNF, followed by minimization of an appropriate differentiable loss function and subsequent rounding of the fractional minimizer obtained. Algorithms developed this way differ essentially in the rounding scheme applied at the final stage. We propose a new rounding scheme based on Bayesian learning. The article shows that the proposed method can be used to assess security in quantum key distribution systems. In quantum key distribution, Shannon's rules apply, and the factorization problem is paramount when decrypting secret keys.
A Lagrange multiplier and Hopfield-type barrier function method for the traveling salesman problem.
Dang, Chuangyin; Xu, Lei
2002-02-01
A Lagrange multiplier and Hopfield-type barrier function method is proposed for approximating a solution of the traveling salesman problem. The method is derived from applications of Lagrange multipliers and a Hopfield-type barrier function and attempts to produce a solution of high quality by generating a minimum point of a barrier problem for a sequence of descending values of the barrier parameter. For any given value of the barrier parameter, the method searches for a minimum point of the barrier problem in a feasible descent direction, which has a desired property that lower and upper bounds on variables are always satisfied automatically if the step length is a number between zero and one. At each iteration, the feasible descent direction is found by updating Lagrange multipliers with a globally convergent iterative procedure. For any given value of the barrier parameter, the method converges to a stationary point of the barrier problem without any condition on the objective function. Theoretical and numerical results show that the method seems more effective and efficient than the softassign algorithm.
ERIC Educational Resources Information Center
Rodriguez, Gabriel R.
2017-01-01
A growing number of schools are implementing PLCs to address school improvement; staff engage with data to identify student needs and determine instructional interventions. This is a starting point for engaging in the iterative process of learning for the teacher in order to increase student learning (Hord & Sommers, 2008). The iterative process…
Positive contraction mappings for classical and quantum Schrödinger systems
NASA Astrophysics Data System (ADS)
Georgiou, Tryphon T.; Pavon, Michele
2015-03-01
The classical Schrödinger bridge seeks the most likely probability law for a diffusion process, in path space, that matches marginals at two end points in time; the likelihood is quantified by the relative entropy between the sought law and a prior. Jamison proved that the new law is obtained through a multiplicative functional transformation of the prior. This transformation is characterised by an automorphism on the space of endpoints probability measures, which has been studied by Fortet, Beurling, and others. A similar question can be raised for processes evolving in a discrete time and space as well as for processes defined over non-commutative probability spaces. The present paper builds on earlier work by Pavon and Ticozzi and begins by establishing solutions to Schrödinger systems for Markov chains. Our approach is based on the Hilbert metric and shows that the solution to the Schrödinger bridge is provided by the fixed point of a contractive map. We approach, in a similar manner, the steering of a quantum system across a quantum channel. We are able to establish existence of quantum transitions that are multiplicative functional transformations of a given Kraus map for the cases where the marginals are either uniform or pure states. As in the Markov chain case, and for uniform density matrices, the solution of the quantum bridge can be constructed from the fixed point of a certain contractive map. For arbitrary marginal densities, extensive numerical simulations indicate that iteration of a similar map leads to fixed points from which we can construct a quantum bridge. For this general case, however, a proof of convergence remains elusive.
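For Markov chains, the fixed-point construction reduces to a Fortet/Sinkhorn-type scaling iteration. The kernel and marginals below are toy data; convergence follows from the contraction in the Hilbert metric described above:

```python
import numpy as np

def schroedinger_bridge(K, mu0, mu1, n_iter=500):
    """Fortet/Sinkhorn-type iteration: find scalings a, b so that
    diag(a) K diag(b) has the prescribed marginals mu0 and mu1. The
    composed map on (a, b) is contractive in the Hilbert metric,
    which provides the fixed point mentioned above."""
    a, b = np.ones(len(mu0)), np.ones(len(mu1))
    for _ in range(n_iter):
        a = mu0 / (K @ b)                  # match the initial marginal
        b = mu1 / (K.T @ a)                # match the final marginal
    return a[:, None] * K * b[None, :]     # bridge transition law

K = np.exp(-np.abs(np.subtract.outer(np.arange(5.0), np.arange(5.0))))
mu0 = np.full(5, 0.2)
mu1 = np.array([0.4, 0.2, 0.2, 0.1, 0.1])
P = schroedinger_bridge(K, mu0, mu1)
print(P.sum(axis=1), P.sum(axis=0))        # ~ mu0 and mu1
```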
An adaptive Gaussian process-based iterative ensemble smoother for data assimilation
NASA Astrophysics Data System (ADS)
Ju, Lei; Zhang, Jiangjiang; Meng, Long; Wu, Laosheng; Zeng, Lingzao
2018-05-01
Accurate characterization of subsurface hydraulic conductivity is vital for modeling of subsurface flow and transport. The iterative ensemble smoother (IES) has been proposed to estimate the heterogeneous parameter field. As a Monte Carlo-based method, IES requires a relatively large ensemble size to guarantee its performance. To improve the computational efficiency, we propose an adaptive Gaussian process (GP)-based iterative ensemble smoother (GPIES) in this study. At each iteration, the GP surrogate is adaptively refined by adding a few new base points chosen from the updated parameter realizations. Then the sensitivity information between model parameters and measurements is calculated from a large number of realizations generated by the GP surrogate with virtually no computational cost. Since the original model evaluations are only required for base points, whose number is much smaller than the ensemble size, the computational cost is significantly reduced. The applicability of GPIES in estimating heterogeneous conductivity is evaluated by the saturated and unsaturated flow problems, respectively. Without sacrificing estimation accuracy, GPIES achieves about an order of magnitude of speed-up compared with the standard IES. Although subsurface flow problems are considered in this study, the proposed method can be equally applied to other hydrological models.
He, Ying; Liang, Bin; Yang, Jun; Li, Shunzhi; He, Jin
2017-08-11
The Iterative Closest Point (ICP) algorithm is the mainstream algorithm used in the accurate registration of 3D point cloud data. The algorithm requires a proper initial value and approximate registration of the two point clouds to prevent it from falling into local extrema, but in the practical point cloud matching process it is difficult to guarantee this requirement. In this paper, we propose an ICP algorithm based on point cloud features (GF-ICP). This method uses geometric features of the point clouds to be registered, such as curvature, surface normals and point cloud density, to search for correspondences between the two point clouds, and introduces the geometric features into the error function to achieve accurate registration. The experimental results showed that the algorithm can improve the convergence speed and the interval of convergence without requiring a proper initial value.
Adaptive form-finding method for form-fixed spatial network structures
NASA Astrophysics Data System (ADS)
Lan, Cheng; Tu, Xi; Xue, Junqing; Briseghella, Bruno; Zordan, Tobia
2018-02-01
An effective form-finding method for form-fixed spatial network structures is presented in this paper. The adaptive form-finding method is introduced through the example of designing an ellipsoidal network dome with bar length variations kept as small as possible. A typical spherical geodesic network is selected as the initial state, having bar lengths in a limited number of groups. Next, this network is transformed into the desired ellipsoidal shape by applying compressions on bars according to the bar length variations caused by the transformation. Afterwards, the dynamic relaxation method is employed to explicitly integrate the node positions by applying residual forces. During the form-finding process, the boundary condition constraining nodes to the ellipsoid surface is innovatively treated as reactions along the normal direction of the surface at the node positions, balanced with the components of the nodal forces in the reverse direction induced by the compressions on bars. The node positions are also corrected according to the fixed-form condition in each explicit iteration step. The optimal solution is found from a time history of states by properly choosing convergence criteria, and the presented form-finding procedure proves applicable to form-fixed problems.
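The explicit integration loop at the heart of dynamic relaxation is compact. In this sketch, residual_force and project (the operator enforcing the fixed-form surface condition) are user-supplied stand-ins, and unit nodal masses with simple viscous damping are assumed:

```python
import numpy as np

def dynamic_relaxation(nodes, residual_force, project, dt=0.01,
                       damping=0.98, n_steps=5000, tol=1e-8):
    """Explicit dynamic relaxation: integrate damped pseudo-dynamics of
    the nodes under residual forces, re-projecting onto the fixed target
    surface each step (the form-fixed condition). Unit nodal masses."""
    v = np.zeros_like(nodes)
    for _ in range(n_steps):
        f = residual_force(nodes)
        if np.abs(f).max() < tol:              # equilibrium reached
            break
        v = damping * v + dt * f               # damped velocity update
        nodes = project(nodes + dt * v)        # enforce surface constraint
    return nodes
```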
NASA Astrophysics Data System (ADS)
Zhang, Yuyan; Guo, Quanli; Wang, Zhenchun; Yang, Degong
2018-03-01
This paper proposes a non-contact, non-destructive evaluation method for surface damage of high-speed sliding electrical contact rails. The proposed method establishes a model for damage identification and calculation. A laser scanning system is built to obtain 3D point cloud data of the rail surface. To extract the damage region of the rail surface, the 3D point cloud data are processed using iterative difference, nearest-neighbour search and a data registration algorithm. The curvature of the point cloud data in the damage region is mapped to RGB color information, which directly reflects the trend of the curvature in the damage region. The extracted damage region is divided into triangular prism elements by triangulation. The volume and mass of a single element are calculated by geometric segmentation. Finally, the total volume and mass of the damage region are obtained by superposition. The proposed method is applied to several typical damage cases and the results are discussed. The experimental results show that the algorithm can identify damage shapes and calculate damage mass with milligram precision, which is useful for evaluating the damage in a further research stage.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Willert, Jeffrey; Taitano, William T.; Knoll, Dana
In this note we demonstrate that using Anderson Acceleration (AA) in place of a standard Picard iteration can not only increase the convergence rate but also make the iteration more robust for two transport applications. We also compare the convergence acceleration provided by AA to that provided by moment-based acceleration methods. Additionally, we demonstrate that those two acceleration methods can be used together in a nested fashion. We begin by describing the AA algorithm. We then describe two application problems, one from neutronics and one from plasma physics, on which we apply AA. We provide computational results which highlight the benefits of using AA, namely that we can compute solutions using fewer function evaluations and larger time-steps, and achieve a more robust iteration.
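For reference, a minimal Anderson Acceleration loop (type-II mixing, no damping or regularization) can be written as follows; the memory depth m and the toy test map are illustrative choices:

```python
import numpy as np

def anderson(F, x0, m=5, tol=1e-10, max_iter=100):
    """Anderson Acceleration of x <- F(x): combine the last m residuals
    by least squares and extrapolate (type-II mixing, beta = 1)."""
    x = np.asarray(x0, dtype=float)
    X, G = [], []                          # histories of x and g = F(x)
    for k in range(max_iter):
        g = F(x)
        X.append(x); G.append(g)
        if len(X) > m + 1:                 # sliding memory window
            X.pop(0); G.pop(0)
        R = np.array(G) - np.array(X)      # residuals f_i = F(x_i) - x_i
        if np.linalg.norm(R[-1]) < tol:
            return g, k + 1
        if len(X) == 1:
            x = g                          # plain Picard step to start
            continue
        dR = (R[1:] - R[:-1]).T            # residual differences
        gamma, *_ = np.linalg.lstsq(dR, R[-1], rcond=None)
        dG = (np.array(G)[1:] - np.array(G)[:-1]).T
        x = g - dG @ gamma                 # Anderson mixing step
    return x, max_iter

# Picard on x = cos(x) needs dozens of iterations; AA needs a handful.
sol, n = anderson(np.cos, np.array([1.0]), m=3)
print(sol, n)
```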
Application of a GPU-Assisted Maxwell Code to Electromagnetic Wave Propagation in ITER
NASA Astrophysics Data System (ADS)
Kubota, S.; Peebles, W. A.; Woodbury, D.; Johnson, I.; Zolfaghari, A.
2014-10-01
The Low Field Side Reflectometer (LSFR) on ITER is envisioned to provide capabilities for electron density profile and fluctuation measurements in both the plasma core and edge. The current design for the Equatorial Port Plug 11 (EPP11) employs seven monostatic antennas for use with both fixed-frequency and swept-frequency systems. The present work examines the characteristics of this layout using the 3-D version of the GPU-Assisted Maxwell Code (GAMC-3D). Previous studies in this area were performed with either 2-D full-wave codes or 3-D ray and beam tracing. GAMC-3D is based on the FDTD method and can be run with either a fixed-frequency or modulated (e.g. FMCW) source, and with either a stationary or moving target (e.g. Doppler backscattering). The code is designed to run on a single NVIDIA Tesla GPU accelerator, and utilizes a technique based on the moving-window method to overcome the size limitation of the onboard memory. Effects such as beam drift, linear mode conversion, and diffraction/scattering will be examined. Comparisons will be made with beam-tracing calculations using the complex eikonal method. Supported by U.S. DoE Grants DE-FG02-99ER54527 and DE-AC02-09CH11466, and the DoE SULI Program at PPPL.
A general CPL-AdS methodology for fixing dynamic parameters in dual environments.
Huang, De-Shuang; Jiang, Wen
2012-10-01
The Continuous Point Location with Adaptive d-ary Search (CPL-AdS) strategy is efficient at solving stochastic point location (SPL) problems. However, this CPL-AdS strategy has one bottleneck: when the dimension of the feature, i.e., the number of subintervals into which the interval is divided at each iteration, d, is large, the decision table for the elimination process is practically unavailable. On the other hand, a larger dimension d generally allows the CPL-AdS strategy to avoid oscillation and converge faster. This paper presents a generalized universal decision formula to solve this bottleneck problem. In fact, this decision formula has wider uses beyond SPL problems, such as handling deterministic point location problems and searching data in Single Instruction Stream-Multiple Data Stream architectures based on the Concurrent Read, Exclusive Write parallel computer model. Meanwhile, we generalize the CPL-AdS strategy with an extended formula capable of tracking an unknown dynamic parameter λ in both informative and deceptive environments. Furthermore, we employ different learning automata in the generalized CPL-AdS method to find out whether a faster learning algorithm leads to a better realization of the generalized CPL-AdS method. All of these contributions are vitally important both in theory and in practical applications. Finally, extensive experiments show that our proposed approaches are efficient and feasible.
Feature-based three-dimensional registration for repetitive geometry in machine vision
Gong, Yuanzheng; Seibel, Eric J.
2016-01-01
As an important step in three-dimensional (3D) machine vision, 3D registration is the process of aligning two or more 3D point clouds, collected from different perspectives, into a complete one. The most popular approach to registering point clouds is to minimize the difference between the point clouds iteratively with the Iterative Closest Point (ICP) algorithm. However, ICP does not work well for repetitive geometries. To solve this problem, a feature-based 3D registration algorithm is proposed to align point clouds generated by vision-based 3D reconstruction. By utilizing texture information of the object and the robustness of image features, 3D correspondences can be retrieved so that the 3D registration of two point clouds reduces to solving for a rigid transformation. The comparison of our method with different ICP algorithms demonstrates that the proposed algorithm is more accurate, efficient and robust for repetitive-geometry registration. Moreover, this method can also be used to mitigate the high depth uncertainty caused by a small camera baseline in vision-based 3D reconstruction. PMID:28286703
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schaefferkoetter, Joshua, E-mail: dnrjds@nus.edu.sg; Ouyang, Jinsong; Rakvongthai, Yothin
2014-06-15
Purpose: A study was designed to investigate the impact of time-of-flight (TOF) and point spread function (PSF) modeling on the detectability of myocardial defects. Methods: Clinical FDG-PET data were used to generate populations of defect-present and defect-absent images. Defects were incorporated at three contrast levels, and images were reconstructed by ordered subset expectation maximization (OSEM) iterative methods including ordinary Poisson, alone and with PSF, TOF, and PSF+TOF. Channelized Hotelling observer signal-to-noise ratio (SNR) was the surrogate for human observer performance. Results: For three iterations, 12 subsets, and no postreconstruction smoothing, TOF improved overall defect detection SNR by 8.6% as compared to its non-TOF counterpart for all defect contrasts. Due to the slow convergence of PSF reconstruction, PSF yielded 4.4% less SNR than non-PSF. For reconstruction parameters (iteration number and postreconstruction smoothing kernel size) optimizing observer SNR, PSF showed a larger improvement for faint defects. The combination of TOF and PSF improved mean detection SNR over the non-TOF and non-PSF counterparts by 3.0% and 3.2%, respectively. Conclusions: For the typical reconstruction protocol used in clinical practice, i.e., fewer than five iterations, TOF improved defect detectability. In contrast, PSF generally yielded less detectability. For large numbers of iterations, TOF+PSF yields the best observer performance.
SU-D-206-03: Segmentation Assisted Fast Iterative Reconstruction Method for Cone-Beam CT
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wu, P; Mao, T; Gong, S
2016-06-15
Purpose: Total Variation (TV) based iterative reconstruction (IR) methods enable accurate CT image reconstruction from low-dose measurements with sparse projection acquisition, due to the sparsifiable nature of most CT images under the gradient operator. However, conventional solutions require a large number of iterations to generate a decent reconstructed image. One major reason is that the expected piecewise-constant property is not taken into consideration at the optimization starting point. In this work, we propose an iterative reconstruction method for cone-beam CT (CBCT) that uses image segmentation to guide the optimization path more efficiently through the regularization term at the beginning of the optimization trajectory. Methods: Our method applies the general knowledge that one tissue component in a CT image contains a relatively uniform distribution of CT numbers. This knowledge is incorporated into the proposed reconstruction by using image segmentation to generate a piecewise-constant template from the first-pass low-quality CT image reconstructed with an analytical algorithm. The template image is used as the initial value in the optimization process. Results: The proposed method is evaluated on the Shepp-Logan phantom at low and high noise levels, and on a head patient. The number of iterations is reduced by 40% overall. Moreover, our proposed method tends to generate a smoother reconstructed image with the same TV value. Conclusion: We propose a computationally efficient iterative reconstruction method for CBCT imaging. Our method achieves a better optimization trajectory and a faster convergence behavior. It does not rely on prior information and can be readily incorporated into existing iterative reconstruction frameworks. Our method is thus practical and attractive as a general solution to CBCT iterative reconstruction. This work is supported by the Zhejiang Provincial Natural Science Foundation of China (Grant No. LR16F010001), National High-tech R&D Program for Young Scientists by the Ministry of Science and Technology of China (Grant No. 2015AA020917).
Lv, Peijie; Liu, Jie; Zhang, Rui; Jia, Yan
2015-01-01
Objective To assess the lesion conspicuity and image quality in CT evaluation of small (≤ 3 cm) hepatocellular carcinomas (HCCs) using automatic tube voltage selection (ATVS) and automatic tube current modulation (ATCM) with or without iterative reconstruction. Materials and Methods One hundred and five patients with 123 HCC lesions were included. Fifty-seven patients were scanned using both ATVS and ATCM and images were reconstructed using either filtered back-projection (FBP) (group A1) or sinogram-affirmed iterative reconstruction (SAFIRE) (group A2). Forty-eight patients were imaged using only ATCM, with a fixed tube potential of 120 kVp and FBP reconstruction (group B). Quantitative parameters (image noise in Hounsfield unit and contrast-to-noise ratio of the aorta, the liver, and the hepatic tumors) and qualitative visual parameters (image noise, overall image quality, and lesion conspicuity as graded on a 5-point scale) were compared among the groups. Results Group A2 scanned with the automatically chosen 80 kVp and 100 kVp tube voltages ranked the best in lesion conspicuity and subjective and objective image quality (p values ranging from < 0.001 to 0.004) among the three groups, except for overall image quality between group A2 and group B (p = 0.022). Group A1 showed higher image noise (p = 0.005) but similar lesion conspicuity and overall image quality as compared with group B. The radiation dose in group A was 19% lower than that in group B (p = 0.022). Conclusion CT scanning with combined use of ATVS and ATCM and image reconstruction with SAFIRE algorithm provides higher lesion conspicuity and better image quality for evaluating small hepatic HCCs with radiation dose reduction. PMID:25995682
NASA Astrophysics Data System (ADS)
Wang, Yihan; Lu, Tong; Zhang, Songhe; Song, Shaoze; Wang, Bingyuan; Li, Jiao; Zhao, Huijuan; Gao, Feng
2018-02-01
Quantitative photoacoustic tomography (q-PAT) is a nontrivial technique that can be used to reconstruct the absorption image with high spatial resolution. Several implementations have been investigated using point sources or fixed-angle illuminations. However, in practical applications these schemes normally suffer from low signal-to-noise ratio (SNR) or poor quantification, especially for large-size domains, due to the ANSI safety limit on incidence and incompleteness in the data acquisition. We herein present a q-PAT implementation that uses multi-angle light-sheet illuminations and a calibrated iterative multi-angle reconstruction. The approach can acquire more complete information on the intrinsic absorption, and SNR-boosted photoacoustic signals at selected planes, from the multi-angle wide-field light-sheet excitations. The sliced absorption maps over the whole body can therefore be recovered in a measurement-flexible, noise-robust and computation-economic way. The proposed approach is validated by phantom experiments, exhibiting promising performance in image fidelity and quantitative accuracy.
An improved parallel fuzzy connected image segmentation method based on CUDA.
Wang, Liansheng; Li, Dong; Huang, Shaohui
2016-05-12
The fuzzy connectedness method (FC) is an effective method for extracting fuzzy objects from medical images. However, when FC is applied to large medical image datasets, its running time becomes very expensive. A parallel CUDA version of FC (CUDA-kFOE) was therefore proposed by Ying et al. to accelerate the original FC. Unfortunately, CUDA-kFOE does not consider the edges between GPU blocks, which causes miscalculation of edge points. In this paper, an improved algorithm is proposed that adds a correction step on the edge points, which greatly enhances the calculation accuracy. The improved method proceeds in an iterative manner. In the first iteration, the affinity computation strategy is changed and a lookup table is employed to reduce memory use. In the second iteration, the voxels mis-evaluated because of asynchronism are updated again. Three hepatic vascular CT sequences of different sizes were used in the experiments, each with three different seeds. An NVIDIA Tesla C2075 was used to evaluate the improved method on these three data sets. Experimental results show that the improved algorithm achieves faster segmentation than the CPU version and higher accuracy than CUDA-kFOE. The calculated results were consistent with the CPU version, demonstrating that the method corrects the edge-point calculation error of the original CUDA-kFOE, at comparable time cost and with fewer errors. In the future, we will focus on automatic acquisition and automatic processing.
Application Of Iterative Reconstruction Techniques To Conventional Circular Tomography
NASA Astrophysics Data System (ADS)
Ghosh Roy, D. N.; Kruger, R. A.; Yih, B. C.; Del Rio, S. P.; Power, R. L.
1985-06-01
Two "point-by-point" iteration procedures, namely, Iterative Least Square Technique (ILST) and Simultaneous Iterative Reconstructive Technique (SIRT) were applied to classical circular tomographic reconstruction. The technique of tomosynthetic DSA was used in forming the tomographic images. Reconstructions of a dog's renal and neck anatomy are presented.
Heterogeneous growth-induced prestrain in the heart
Genet, M.; Rausch, M.; Lee, L.C.; Choy, S.; Zhao, X.; Kassab, G.S.; Kozerke, S.; Guccione, J.M.; Kuhl, E.
2015-01-01
Even when entirely unloaded, biological structures are not stress-free, as shown by Y.C. Fung's seminal opening-angle experiment on arteries and the left ventricle. As a result of this prestrain, subject-specific geometries extracted from medical imaging do not represent the unloaded reference configuration necessary for mechanical analysis, even if the structure is externally unloaded. Here we propose a new computational method to create physiological residual stress fields in subject-specific left ventricular geometries using the continuum theory of fictitious configurations combined with a fixed-point iteration. We also reproduced the opening-angle experiment on four swine models to characterize the range of normal opening-angle values. The proposed method generates residual stress fields which reliably reproduce the experimentally measured range of opening angles, between 8.7 ± 1.8° and 16.6 ± 13.7°. We demonstrate that including the effects of prestrain reduces the left ventricular stiffness by up to 40%, thus facilitating ventricular filling, which has a significant impact on cardiac function. This method can improve the fidelity of subject-specific models, improving our understanding of cardiac diseases and helping to optimize treatment options. PMID:25913241
Subsonic panel method for designing wing surfaces from pressure distribution
NASA Technical Reports Server (NTRS)
Bristow, D. R.; Hawk, J. D.
1983-01-01
An iterative method has been developed for designing wing section contours corresponding to a prescribed subcritical distribution of pressure. The calculations are initialized by using a surface panel method to analyze a baseline wing or wing-fuselage configuration. A first-order expansion to the baseline panel method equations is then used to calculate a matrix containing the partial derivative of potential at each control point with respect to each unknown geometry parameter. In every iteration cycle, the matrix is used both to calculate the geometry perturbation and to analyze the perturbed geometry. The distribution of potential on the perturbed geometry is established by simple linear extrapolation from the baseline solution. The extrapolated potential is converted to pressure by Bernoulli's equation. Not only is the accuracy of the approach good for very large perturbations, but the computing cost of each complete iteration cycle is substantially less than one analysis solution by a conventional panel method.
Computer-Aided Engineering of Semiconductor Integrated Circuits
1979-07-01
equation using a five-point finite difference approximation. Section 4.3.6 describes the numerical techniques and iterative algorithms which are used...neighbor points. This is generally referred to as a five-point finite difference scheme on a rectangular grid, as described below. The finite difference ...problems in steady state have been analyzed by the finite difference method [4.16], [4.17] or the finite element method [4.18], [4.19] as reported last
Minimizing hot spot temperature in asymmetric gradient coil design.
While, Peter T; Forbes, Larry K; Crozier, Stuart
2011-08-01
Heating caused by gradient coils is a considerable concern in the operation of magnetic resonance imaging (MRI) scanners. Hot spots can occur in regions where the gradient coil windings are closely spaced. These problem areas are particularly common in the design of gradient coils with asymmetrically located target regions. In this paper, an extension of an existing coil design method is described, to enable the design of asymmetric gradient coils with reduced hot spot temperatures. An improved model is presented for predicting steady-state spatial temperature distributions for gradient coils. A great amount of flexibility is afforded by this model to consider a wide range of geometries and system material properties. A feature of the temperature distribution related to the temperature gradient is used in a relaxed fixed point iteration routine for successively altering coil windings to have a lower hot spot temperature. Results show that significant reductions in peak temperature are possible at little or no cost to coil performance when compared to minimum power coils of equivalent field error.
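The relaxed fixed-point iteration named above has the generic damped form x_{k+1} = (1 - ω) x_k + ω G(x_k); a minimal Python sketch follows, with the paper's coil-winding update G replaced by a toy map, since the actual update is specific to the temperature model.

import numpy as np

def relaxed_fixed_point(g, x0, omega=0.5, tol=1e-10, max_iter=1000):
    # Damped fixed-point iteration: omega < 1 trades speed for stability.
    x = np.asarray(x0, dtype=float)
    for k in range(max_iter):
        x_new = (1.0 - omega) * x + omega * np.asarray(g(x), dtype=float)
        if np.linalg.norm(x_new - x) < tol:
            return x_new, k + 1
        x = x_new
    return x, max_iter

# Usage on a scalar toy problem, x = cos(x):
root, iters = relaxed_fixed_point(np.cos, np.array([0.5]), omega=0.7)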
A method of undifferenced ambiguity resolution for GPS+GLONASS precise point positioning
Yi, Wenting; Song, Weiwei; Lou, Yidong; Shi, Chuang; Yao, Yibin
2016-01-01
Integer ambiguity resolution is critical for achieving positions of high precision and for shortening the convergence time of precise point positioning (PPP). However, GLONASS adopts the signal processing technology of frequency division multiple access and results in inter-frequency code biases (IFCBs), which are currently difficult to correct. This bias makes the methods proposed for GPS ambiguity fixing unsuitable for GLONASS. To realize undifferenced GLONASS ambiguity fixing, we propose an undifferenced ambiguity resolution method for GPS+GLONASS PPP, which considers the IFCBs estimation. The experimental result demonstrates that the success rate of GLONASS ambiguity fixing can reach 75% through the proposed method. Compared with the ambiguity float solutions, the positioning accuracies of ambiguity-fixed solutions of GLONASS-only PPP are increased by 12.2%, 20.9%, and 10.3%, and that of the GPS+GLONASS PPP by 13.0%, 35.2%, and 14.1% in the North, East and Up directions, respectively. PMID:27222361
Probabilistic cluster labeling of imagery data
NASA Technical Reports Server (NTRS)
Chittineni, C. B. (Principal Investigator)
1980-01-01
The problem of obtaining the probabilities of class labels for clusters, using spectral and spatial information from a given set of labeled patterns and their neighbors, is considered. A relationship is developed between class-conditional and cluster-conditional densities in terms of the probabilities of class labels for the clusters. Expressions are presented for updating the a posteriori probabilities of the classes of a pixel using information from its local neighborhood. Fixed-point iteration schemes are developed for obtaining the optimal probabilities of class labels for the clusters. These schemes utilize spatial information and also the probabilities of label imperfections. Experimental results from the processing of remotely sensed multispectral scanner imagery data are presented.
NASA Astrophysics Data System (ADS)
Levine, Zachary H.; Pintar, Adam L.
2015-11-01
A simple algorithm for averaging a stochastic sequence of 1D arrays in a moving, expanding window is provided. The samples are grouped in bins which increase exponentially in size so that a constant fraction of the samples is retained at any point in the sequence. The algorithm is shown to have particular relevance for a class of Monte Carlo sampling problems which includes one characteristic of iterative reconstruction in computed tomography. The code is available in the CPC program library in both Fortran 95 and C and is also available in R through CRAN.
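A Python sketch of the binning idea (the published implementations are in Fortran 95, C, and R; the names here are ours): bin sizes grow geometrically, so averaging over the most recent bins always retains a roughly constant fraction of the samples seen so far.

import numpy as np

def binned_running_means(samples, growth=2):
    # Group a stream of 1D arrays into bins of exponentially growing
    # size (1, growth, growth**2, ...) and return each bin's mean.
    means, bin_size, buf = [], 1, []
    for s in samples:
        buf.append(np.asarray(s, dtype=float))
        if len(buf) == bin_size:
            means.append(np.mean(buf, axis=0))
            buf, bin_size = [], bin_size * growth
    return means

# Usage: 1 + 2 + 4 + 8 + 16 = 31 noisy samples of a 1D array.
rng = np.random.default_rng(2)
stream = (rng.normal(0.0, 1.0, 8) for _ in range(31))
bin_means = binned_running_means(stream)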
On the efficient and reliable numerical solution of rate-and-state friction problems
NASA Astrophysics Data System (ADS)
Pipping, Elias; Kornhuber, Ralf; Rosenau, Matthias; Oncken, Onno
2016-03-01
We present a mathematically consistent numerical algorithm for the simulation of earthquake rupture with rate-and-state friction. Its main features are adaptive time stepping, a novel algebraic solution algorithm involving nonlinear multigrid and a fixed point iteration for the rate-and-state decoupling. The algorithm is applied to a laboratory scale subduction zone which allows us to compare our simulations with experimental results. Using physical parameters from the experiment, we find a good fit of recurrence time of slip events as well as their rupture width and peak slip. Computations in 3-D confirm efficiency and robustness of our algorithm.
Drawing dynamical and parameters planes of iterative families and methods.
Chicharro, Francisco I; Cordero, Alicia; Torregrosa, Juan R
2013-01-01
The complex dynamical analysis of the parametric fourth-order Kim's iterative family is made on quadratic polynomials, showing the MATLAB codes generated to draw the fractal images necessary to complete the study. The parameter spaces associated with the free critical points have been analyzed, showing the stable (and unstable) regions where the selection of the parameter provides excellent schemes (or dreadful ones).
Universal single level implicit algorithm for gasdynamics
NASA Technical Reports Server (NTRS)
Lombard, C. K.; Venkatapthy, E.
1984-01-01
A single-level, effectively explicit, implicit algorithm for gasdynamics is presented. The method meets all the requirements for unconditionally stable global iteration over flows with mixed subsonic and supersonic zones, including blunt-body flow and boundary-layer flows with strong interaction and streamwise separation. For hyperbolic (supersonic flow) regions the method is automatically equivalent to contemporary space-marching methods. For elliptic (subsonic flow) regions, rapid convergence is facilitated by alternating-direction solution sweeps which bring both sets of eigenvectors and the influence of both boundaries of a coordinate line equally into play. Point-by-point updating of the data, with local iteration on the solution procedure at each spatial step as the sweeps progress, not only renders the method single-level in storage but also improves nonlinear accuracy, accelerating convergence by an order of magnitude over related two-level linearized implicit methods. The method derives robust stability from the combination of an eigenvector-split upwind difference method (CSCM) with diagonally dominant ADI (DDADI) approximate factorization and computed characteristic boundary approximations.
Alignment Solution for CT Image Reconstruction using Fixed Point and Virtual Rotation Axis.
Jun, Kyungtaek; Yoon, Seokhwan
2017-01-25
Since X-ray tomography is now widely adopted in many different areas, it becomes ever more crucial to find robust routines for handling tomographic data to obtain better-quality reconstructions. Though several techniques exist, a more automated method for removing the errors that hinder clear image reconstruction would be helpful. Here, we propose an alternative method and a new algorithm using the sinogram and the fixed point. The physical concept of the Center of Attenuation (CA) is also introduced to show how this fixed point applies to the reconstruction of images having the errors we categorize in this article. Our technique showed promising performance in restoring images having translation and vertical tilt errors.
On stability of fixed points and chaos in fractional systems.
Edelman, Mark
2018-02-01
In this paper, we propose a method to calculate asymptotically period two sinks and define the range of stability of fixed points for a variety of discrete fractional systems of the order 0<α<2. The method is tested on various forms of fractional generalizations of the standard and logistic maps. Based on our analysis, we make a conjecture that chaos is impossible in the corresponding continuous fractional systems.
Impurity Correction Techniques Applied to Existing Doping Measurements of Impurities in Zinc
NASA Astrophysics Data System (ADS)
Pearce, J. V.; Sun, J. P.; Zhang, J. T.; Deng, X. L.
2017-01-01
Impurities represent the most significant source of uncertainty in most metal fixed points used for the realization of the International Temperature Scale of 1990 (ITS-90). There are a number of different methods for quantifying the effect of impurities on the freezing temperature of ITS-90 fixed points, many of which rely on an accurate knowledge of the liquidus slope in the limit of low concentration. A key method of determining the liquidus slope is to measure the freezing temperature of a fixed-point material as it is progressively doped with a known amount of impurity. Recently, a series of measurements of the freezing and melting temperature of `slim' Zn fixed-point cells doped with Ag, Fe, Ni, and Pb were presented. Here, additional measurements of the Zn-X system are presented using Ga as a dopant, and the data (Zn-Ag, Zn-Fe, Zn-Ni, Zn-Pb, and Zn-Ga) have been re-analyzed to demonstrate the use of a fitting method based on Scheil solidification which is applied to both melting and freezing curves. In addition, the utility of the Sum of Individual Estimates method is explored with these systems in the context of a recently enhanced database of liquidus slopes of impurities in Zn in the limit of low concentration.
Side Effects in Time Discounting Procedures: Fixed Alternatives Become the Reference Point
2016-01-01
Typical research on intertemporal choice utilizes a two-alternative forced choice (2AFC) paradigm requiring participants to choose between a smaller sooner and larger later payoff. In the adjusting-amount procedure (AAP) one of the alternatives is fixed and the other is adjusted according to particular choices made by the participant. Such a method makes the alternatives unequal in status and is speculated to make the fixed alternative a reference point for choices, thereby affecting the decision made. The current study shows that fixing different alternatives in the AAP influences discount rates in intertemporal choices. Specifically, individuals’ (N = 283) choices were affected to just the same extent by merely fixing an alternative as when choices were preceded by scenarios explicitly imposing reference points. PMID:27768759
Solving large test-day models by iteration on data and preconditioned conjugate gradient.
Lidauer, M; Strandén, I; Mäntysaari, E A; Pösö, J; Kettunen, A
1999-12-01
A preconditioned conjugate gradient method was implemented into an iteration-on-data program for the estimation of breeding values, and its convergence characteristics were studied. A reference algorithm was used in which one fixed effect was solved by the Gauss-Seidel method and the other effects were solved by a second-order Jacobi method. Implementation of the preconditioned conjugate gradient required storing four vectors (of size equal to the number of unknowns in the mixed model equations) in random access memory and reading the data at each round of iteration. The preconditioner comprised diagonal blocks of the coefficient matrix. Comparison of the algorithms was based on solutions of mixed model equations obtained by a single-trait animal model and a single-trait, random regression test-day model. Data sets for both models used milk yield records of primiparous Finnish dairy cows. The animal model data comprised 665,629 lactation milk yields, and the random regression test-day model data 6,732,765 test-day milk yields. Both models included pedigree information on 1,099,622 animals. The animal model [random regression test-day model] required 122 [305] rounds of iteration to converge with the reference algorithm, but only 88 [149] with the preconditioned conjugate gradient. Solving the random regression test-day model with the preconditioned conjugate gradient required 237 megabytes of random access memory and took 14% of the computation time needed by the reference algorithm.
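A compact Python/NumPy sketch of the preconditioned conjugate gradient iteration; for brevity the preconditioner here is the inverse diagonal of A (Jacobi), whereas the study uses diagonal blocks of the coefficient matrix and re-reads the data at every round instead of forming A explicitly.

import numpy as np

def pcg(A, b, M_inv_diag, tol=1e-8, max_iter=500):
    # Preconditioned conjugate gradient for SPD systems A x = b.
    x = np.zeros_like(b)
    r = b - A @ x
    z = M_inv_diag * r
    p = z.copy()
    rz = r @ z
    for k in range(max_iter):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol * np.linalg.norm(b):
            return x, k + 1
        z = M_inv_diag * r
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x, max_iter

# Toy SPD system standing in for the mixed model equations.
rng = np.random.default_rng(3)
n = 50
Q = rng.normal(size=(n, n))
A = Q @ Q.T + n * np.eye(n)
x, iters = pcg(A, np.ones(n), 1.0 / np.diag(A))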
Prediction of high-dimensional states subject to respiratory motion: a manifold learning approach
NASA Astrophysics Data System (ADS)
Liu, Wenyang; Sawant, Amit; Ruan, Dan
2016-07-01
The development of high-dimensional imaging systems in image-guided radiotherapy provides important pathways to the ultimate goal of real-time full volumetric motion monitoring. Effective motion management during radiation treatment usually requires prediction to account for system latency and extra signal/image processing time. It is challenging to predict high-dimensional respiratory motion due to the complexity of the motion pattern combined with the curse of dimensionality. Linear dimension reduction methods such as PCA have been used to construct a linear subspace from the high-dimensional data, followed by efficient predictions on the lower-dimensional subspace. In this study, we extend this rationale to a more general manifold and propose a framework for high-dimensional motion prediction with manifold learning, which allows one to learn more descriptive features than linear methods of comparable dimension. Specifically, a kernel PCA is used to construct a proper low-dimensional feature manifold, where accurate and efficient prediction can be performed. A fixed-point iterative pre-image estimation method is used to recover the predicted value in the original state space. We evaluated and compared the proposed method with a PCA-based approach on level-set surfaces reconstructed from point clouds captured by a 3D photogrammetry system. The prediction accuracy was evaluated in terms of root-mean-square error. Our proposed method achieved consistently higher (sub-millimeter) prediction accuracy for both 200 ms and 600 ms lookahead lengths compared to the PCA-based approach, and the performance gain was statistically significant.
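The fixed-point pre-image step for a Gaussian kernel is commonly written (after Mika et al.) as z <- sum_i gamma_i k(z, x_i) x_i / sum_i gamma_i k(z, x_i); a hedged NumPy sketch, where gamma holds the feature-space expansion coefficients of the predicted point (the paper's exact scheme may differ in detail):

import numpy as np

def preimage_fixed_point(X, gamma, sigma, z0, n_iter=100, tol=1e-9):
    # X: (n, d) training points; gamma: (n,) expansion coefficients.
    z = z0.copy()
    for _ in range(n_iter):
        w = gamma * np.exp(-np.sum((X - z) ** 2, axis=1) / (2.0 * sigma**2))
        denom = w.sum()
        if abs(denom) < 1e-12:  # iteration can stagnate; restart in practice
            break
        z_new = (w[:, None] * X).sum(axis=0) / denom
        if np.linalg.norm(z_new - z) < tol:
            return z_new
        z = z_new
    return z

# Illustrative call with uniform coefficients (placeholders, not the paper's data).
X = np.random.default_rng(5).normal(size=(20, 3))
z = preimage_fixed_point(X, np.full(20, 0.05), sigma=1.0, z0=X.mean(axis=0))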
Singular boundary method for global gravity field modelling
NASA Astrophysics Data System (ADS)
Cunderlik, Robert
2014-05-01
The singular boundary method (SBM) and the method of fundamental solutions (MFS) are meshless boundary collocation techniques that use the fundamental solution of a governing partial differential equation (e.g. the Laplace equation) as their basis functions. They have been developed to avoid singular numerical integration as well as mesh generation in the traditional boundary element method (BEM). SBM has been proposed to overcome a main drawback of MFS, namely its controversial fictitious boundary outside the domain. The key idea of SBM is to introduce origin intensity factors that isolate the singularities of the fundamental solution and its derivatives using appropriate regularization techniques. Consequently, the source points can be placed directly on the real boundary and coincide with the collocation nodes. In this study we apply SBM to high-resolution global gravity field modelling. The first numerical experiment presents a numerical solution to the fixed gravimetric boundary value problem; the achieved results are compared with the numerical solutions obtained by MFS and the direct BEM, indicating the efficiency of all three methods. In the second numerical experiment, SBM is used to derive the geopotential and its first derivatives from the Tzz components of the gravity disturbing tensor observed by the GOCE satellite mission. A determination of the origin intensity factors allows evaluation of the disturbing potential and gravity disturbances directly on the Earth's surface, where the source points are located. To achieve high-resolution numerical solutions, large-scale parallel computations are performed on a cluster with 1 TB of distributed memory, and an iterative elimination of far-zone contributions is applied.
Fractional Programming for Communication Systems—Part I: Power Control and Beamforming
NASA Astrophysics Data System (ADS)
Shen, Kaiming; Yu, Wei
2018-05-01
This two-part paper explores the use of fractional programming (FP) in the design and optimization of communication systems. Part I focuses on FP theory and on solving continuous problems. The main theoretical contribution is a novel quadratic transform technique for tackling the multiple-ratio concave-convex FP problem, in contrast to conventional FP techniques that can mostly deal only with the single-ratio or max-min-ratio case. Multiple-ratio FP problems are important for the optimization of communication networks, because system-level design often involves multiple signal-to-interference-plus-noise ratio terms. This paper considers applications of FP to solving continuous problems in communication system design, particularly power control, beamforming, and energy-efficiency maximization. These applications illustrate that the proposed quadratic transform can greatly facilitate optimization involving ratios by recasting the original nonconvex problem as a sequence of convex problems. This FP-based problem reformulation gives rise to an efficient iterative optimization algorithm with provable convergence to a stationary point. The paper further demonstrates close connections between the proposed FP approach and other well-known algorithms in the literature, such as fixed-point iteration and weighted minimum mean-square-error beamforming. The optimization of discrete problems is discussed in Part II.
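For the sum-of-ratios maximization treated in the paper, the quadratic transform replaces each concave-convex ratio by a quadratic in an auxiliary variable y_m (a sketch of the published result, stated without derivation):

\max_{x} \sum_{m=1}^{M} \frac{A_m(x)}{B_m(x)}
\quad\longrightarrow\quad
\max_{x,\,y} \sum_{m=1}^{M} \Bigl( 2 y_m \sqrt{A_m(x)} - y_m^{2}\, B_m(x) \Bigr),
\qquad
y_m^{\star} = \frac{\sqrt{A_m(x)}}{B_m(x)} .

For fixed x, the inner maximization over y_m is attained at y_m^*, which recovers the original ratio; alternating updates of y and x therefore yield the convergent iterative algorithm described above.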
NASA Astrophysics Data System (ADS)
Borazjani, Iman; Asgharzadeh, Hafez
2015-11-01
Flow simulations involving complex geometries and moving boundaries suffer from time-step size restrictions and low convergence rates with explicit and semi-implicit schemes. Implicit schemes can be used to overcome these restrictions, but implementing an implicit solver for nonlinear equations such as the Navier-Stokes equations is not straightforward. Newton-Krylov subspace methods (NKMs) are among the most advanced iterative methods for solving nonlinear equations such as the implicit discretization of the Navier-Stokes equations. The efficiency of NKMs depends heavily on the Jacobian formation method: automatic differentiation is very expensive, and matrix-free methods slow down as the mesh is refined. An analytical Jacobian is inexpensive, but its derivation for the Navier-Stokes equations on a staggered grid is challenging. An NKM with a novel analytical Jacobian was developed and validated against the Taylor-Green vortex and pulsatile flow in a 90 degree bend. The developed method successfully handled complex geometries such as an intracranial aneurysm with multiple overset grids and immersed boundaries. It is shown that the NKM with an analytical Jacobian is 3 to 25 times faster than the fixed-point implicit Runge-Kutta method, and more than 100 times faster than automatic differentiation, depending on the grid (size) and the flow problem. The developed methods are fully parallelized, with a parallel efficiency of 80-90% on the problems tested.
Analyzing survival curves at a fixed point in time for paired and clustered right-censored data
Su, Pei-Fang; Chi, Yunchan; Lee, Chun-Yi; Shyr, Yu; Liao, Yi-De
2018-01-01
In clinical trials, information about certain time points may be of interest in making decisions about treatment effectiveness. Rather than comparing entire survival curves, researchers can focus on the comparison at fixed time points that may have a clinical utility for patients. For two independent samples of right-censored data, Klein et al. (2007) compared survival probabilities at a fixed time point by studying a number of tests based on some transformations of the Kaplan-Meier estimators of the survival function. However, to compare the survival probabilities at a fixed time point for paired right-censored data or clustered right-censored data, their approach would need to be modified. In this paper, we extend the statistics to accommodate the possible within-paired correlation and within-clustered correlation, respectively. We use simulation studies to present comparative results. Finally, we illustrate the implementation of these methods using two real data sets. PMID:29456280
Drawing Dynamical and Parameters Planes of Iterative Families and Methods
Chicharro, Francisco I.
2013-01-01
The complex dynamical analysis of the parametric fourth-order Kim's iterative family is made on quadratic polynomials, showing the MATLAB codes generated to draw the fractal images necessary to complete the study. The parameter spaces associated with the free critical points have been analyzed, showing the stable (and unstable) regions where the selection of the parameter will provide us the excellent schemes (or dreadful ones). PMID:24376386
Self-adaptive demodulation for polarization extinction ratio in distributed polarization coupling.
Zhang, Hongxia; Ren, Yaguang; Liu, Tiegen; Jia, Dagong; Zhang, Yimo
2013-06-20
A self-adaptive method for distributed polarization extinction ratio (PER) demodulation is demonstrated. It is characterized by a dynamic PER threshold coupling intensity (TCI) and a nonuniform PER iteration step length (ISL). Based on the preset PER calculation accuracy and the original distributed coupling intensity, the TCI and ISL adapt automatically to determine the contributing coupling points inside the polarizing devices. The distributed PER is calculated by accumulating those coupling points automatically and selectively. Two different kinds of polarization-maintaining fibers were tested, and PERs were obtained after merely 3-5 iterations using the proposed method. Comparison experiments with a Thorlabs commercial instrument were also conducted, and the results show high consistency. In addition, an optimum preset PER calculation accuracy of 0.05 dB was obtained through many repeated experiments.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nakhai, B.
A new method for solving radiation transport problems is presented. The heart of the technique is a new cross-section processing procedure for the calculation of group-to-point and point-to-group cross-section sets. The method is ideally suited to problems which involve media with highly fluctuating cross sections, where the results of traditional multigroup calculations are beclouded by the group-averaging procedures employed. The extensive computational effort that would be required to numerically evaluate the double integrals in the multigroup treatment prohibits iterating to optimize the energy boundaries. On the other hand, use of point-to-point techniques (as in the stochastic technique) is often prohibitively expensive due to the large computer storage requirement. The pseudo-point code is a hybrid of the two aforementioned methods (group-to-group and point-to-point) - hence the name pseudo-point - that reduces the computational effort of the former and the large core requirements of the latter. The pseudo-point code generates the group-to-point or point-to-group transfer matrices and can be coupled with existing transport codes to calculate pointwise energy-dependent fluxes. This approach yields much more detail than is available from conventional energy-group treatments. Due to the speed of this code, several iterations can be performed (at affordable computing cost) to optimize the energy boundaries and the weighting functions. The pseudo-point technique is demonstrated by solving six problems, each depicting a certain aspect of the technique. The results are presented as flux vs. energy at various spatial intervals. The sensitivity of the technique to the energy grid and the savings in computational effort are clearly demonstrated.
An improved 2D MoF method by using high order derivatives
NASA Astrophysics Data System (ADS)
Chen, Xiang; Zhang, Xiong
2017-11-01
The MoF (Moment of Fluid) method is one of the most accurate approaches among interface reconstruction algorithms. Like other second-order methods, the MoF method needs to solve an implicit optimization problem to obtain the optimal approximate interface, so an iteration process is inevitable under most circumstances. To solve the optimization efficiently, the properties of the objective function are worth studying. In 2D problems, the first-order derivative was deduced and applied in previous research. In this paper, the high-order derivatives of the objective function are deduced on a convex polygon. We show that the nth-order (n ≥ 2) derivatives are discontinuous, and that the number of discontinuity points is twice the number of polygon edges. A rotation algorithm is proposed to successively locate these discontinuity points, so that the target interval containing the optimal solution can be determined. Since the high-order derivatives of the objective function are continuous within the target interval, iteration schemes based on high-order derivatives can be used to improve the convergence rate. Moreover, when iterating within the target interval, the value of the objective function and its derivatives can be updated directly, without explicitly solving the volume conservation equation. This direct update further improves efficiency, especially as the number of polygon edges increases. Halley's method, which is based on the first three derivatives, is applied as the iteration scheme in this paper, and the numerical results indicate that the CPU time is about half that of the previous method on a quadrilateral cell and about one sixth on a decagon cell.
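Halley's method, the third-order scheme named above, updates x <- x - 2 f f' / (2 f'^2 - f f''); a small self-contained Python sketch follows (the MoF objective and its derivatives are problem-specific and not reproduced).

def halley(f, df, d2f, x0, tol=1e-12, max_iter=50):
    # Third-order Halley iteration for a scalar root of f.
    x = x0
    for k in range(max_iter):
        fx, dfx, d2fx = f(x), df(x), d2f(x)
        step = 2.0 * fx * dfx / (2.0 * dfx**2 - fx * d2fx)
        x -= step
        if abs(step) < tol:
            return x, k + 1
    return x, max_iter

# Usage on a smooth toy function, f(x) = x**3 - 2:
root, iters = halley(lambda x: x**3 - 2, lambda x: 3 * x**2, lambda x: 6 * x, 1.0)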
Retinal biometrics based on Iterative Closest Point algorithm.
Hatanaka, Yuji; Tajima, Mikiya; Kawasaki, Ryo; Saito, Koko; Ogohara, Kazunori; Muramatsu, Chisako; Sunayama, Wataru; Fujita, Hiroshi
2017-07-01
The pattern of blood vessels in the eye is unique to each person because it rarely changes over time; it is therefore well known that retinal blood vessels are useful for biometrics. This paper describes a biometrics method using the Jaccard similarity coefficient (JSC) based on blood vessel regions in retinal image pairs. The retinal image pairs were roughly matched using the centers of their optic discs, and then aligned using the Iterative Closest Point algorithm on detailed blood vessel skeletons. For registration, a perspective transform was applied to the retinal images. Finally, the pairs were classified as either correct or incorrect using the JSC of the blood vessel regions in the image pairs. The proposed method was applied to temporal retinal images obtained in 2009 (695 images) and 2013 (87 images). The 87 images acquired in 2013 were all from persons already examined in 2009. The accuracy of the proposed method reached 100%.
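The final classification step reduces to the Jaccard similarity coefficient of the two aligned binary vessel masks; a minimal NumPy sketch (the ICP alignment itself is omitted, and the decision threshold is illustrative):

import numpy as np

def jaccard(mask_a, mask_b):
    # Jaccard similarity coefficient |A ∩ B| / |A ∪ B| of binary masks.
    a, b = mask_a.astype(bool), mask_b.astype(bool)
    union = np.logical_or(a, b).sum()
    return np.logical_and(a, b).sum() / union if union else 1.0

# After ICP alignment, a pair would be accepted as the same person when
# the JSC of the aligned vessel masks exceeds a chosen threshold.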
DOE Office of Scientific and Technical Information (OSTI.GOV)
Birge, J. R.; Qi, L.; Wei, Z.
In this paper we give a variant of the Topkis-Veinott method for solving inequality constrained optimization problems. This method uses a linearly constrained positive semidefinite quadratic problem to generate a feasible descent direction at each iteration. Under mild assumptions, the algorithm is shown to be globally convergent in the sense that every accumulation point of the sequence generated by the algorithm is a Fritz-John point of the problem. We introduce a Fritz-John (FJ) function, an FJ1 strong second-order sufficiency condition (FJ1-SSOSC), and an FJ2 strong second-order sufficiency condition (FJ2-SSOSC), and then show, without any constraint qualification (CQ), that (i) if an FJ point z satisfies the FJ1-SSOSC, then there exists a neighborhood N(z) of z such that, for any FJ point y ∈ N(z)\{z}, f0(y) ≠ f0(z), where f0 is the objective function of the problem; (ii) if an FJ point z satisfies the FJ2-SSOSC, then z is a strict local minimum of the problem. Result (i) implies that the entire iteration point sequence generated by the method converges to an FJ point. We also show that if the parameters are chosen large enough, a unit step length can be accepted by the proposed algorithm.
Ambiguity resolution in systems using Omega for position location
NASA Technical Reports Server (NTRS)
Frenkel, G.; Gan, D. G.
1974-01-01
The lane ambiguity problem prevents the utilization of the Omega system for many applications such as locating buoys and balloons. The method of multiple lines of position introduced herein uses signals from four or more Omega stations for ambiguity resolution. The coordinates of the candidate points are determined first through the use of the Newton iterative procedure. Subsequently, a likelihood function is generated for each point, and the ambiguity is resolved by selecting the most likely point. The method was tested through simulation.
A Sparsity-Promoted Method Based on Majorization-Minimization for Weak Fault Feature Enhancement.
Ren, Bangyue; Hao, Yansong; Wang, Huaqing; Song, Liuyang; Tang, Gang; Yuan, Hongfang
2018-03-28
Fault transient impulses induced by faulty components in rotating machinery usually contain substantial interference. Fault features are comparatively weak in the initial fault stage, which renders fault diagnosis more difficult. In this case, a sparse representation method based on the Majorization-Minimization (MM) algorithm is proposed to enhance weak fault features and extract them from strong background noise. However, the traditional MM algorithm suffers from two issues: the choice of the sparse basis and complicated calculations. To address these challenges, a modified MM algorithm is proposed in which a sparse optimization objective function is designed first. Inspired by the Basis Pursuit (BP) model, the optimization function integrates an impulsive feature-preserving factor and a penalty-function factor. Second, a modified majorization iterative method is applied to address the convex optimization problem of the designed function. A series of sparse coefficients can be achieved through iterating, which contain only the transient components. Notably, there is no need to select the sparse basis in the proposed iterative method because it is fixed as a unit matrix, and the reconstruction step is omitted, which can significantly increase detection efficiency. Eventually, envelope analysis of the sparse coefficients is performed to extract the weak fault features. Simulated and experimental signals including bearings and gearboxes are employed to validate the effectiveness of the proposed method. In addition, comparisons are made to prove that the proposed method outperforms the traditional MM algorithm in terms of detection results and efficiency.
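A minimal NumPy illustration of the MM idea with the sparse basis fixed as a unit (identity) matrix, so no reconstruction step is needed: here |x| is majorized at the current iterate by a quadratic, which turns each iteration into an elementwise rescaling. This is a simplified stand-in for the paper's objective, which additionally integrates an impulse-preserving factor.

import numpy as np

def mm_sparse_denoise(y, lam=0.5, n_iter=100, eps=1e-12):
    # MM for min_x 0.5*||y - x||^2 + lam*||x||_1 with identity basis:
    # |x| <= x^2 / (2|x_k|) + |x_k| / 2 gives the closed-form update below,
    # whose fixed point is the soft-thresholded signal.
    x = y.copy()
    for _ in range(n_iter):
        x = y * np.abs(x) / (np.abs(x) + lam + eps)
    return x

# Usage: recover sparse transient impulses buried in noise.
rng = np.random.default_rng(4)
clean = np.zeros(256)
clean[[40, 120, 200]] = [3.0, -2.5, 4.0]
coeffs = mm_sparse_denoise(clean + rng.normal(0.0, 0.3, 256), lam=0.8)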
APMP Scale Comparison with Three Radiation Thermometers and Six Fixed-Point Blackbodies
NASA Astrophysics Data System (ADS)
Yamada, Y.; Shimizu, Y.; Ishii, J.
2015-08-01
New Asia Pacific Metrology Programme (APMP) comparisons of radiation thermometry standards, APMP TS-11 and TS-12, have recently been initiated. These new APMP comparisons cover the temperature range from to . Three radiation thermometers with central wavelengths of 1.6 μm, 0.9 μm, and 0.65 μm are the transfer devices for the radiation thermometer scale comparison, conducted in the so-called star configuration. In parallel, a compact fixed-point blackbody furnace that houses six types of fixed-point cells (In, Sn, Zn, Al, Ag, and Cu) is circulated, again in a star-type comparison, to substantiate fixed-point calibration capabilities. Twelve APMP national metrology institutes are taking part in this endeavor, with the National Metrology Institute of Japan acting as the pilot. In this article, the comparison scheme is described with emphasis on the features of the transfer devices, i.e., the radiation thermometers and the fixed-point blackbodies. Results of preliminary evaluations of the performance and characteristics of these instruments, as well as the evaluation method for the comparison results, are presented.
Long-Term Stability of WC-C Peritectic Fixed Point
NASA Astrophysics Data System (ADS)
Khlevnoy, B. B.; Grigoryeva, I. A.
2015-03-01
The tungsten carbide-carbon peritectic (WC-C) melting transition is an attractive high-temperature fixed point with a temperature of . Earlier investigations showed high repeatability, a small melting range, low sensitivity to impurities, and robustness of WC-C, which makes it a prospective candidate for the highest fixed point of the temperature scale. This paper presents a further study of the fixed point, namely an investigation of the long-term stability of the WC-C melting temperature. For this purpose, a new WC-C cell of the blackbody type was built using tungsten powder of 99.999% purity. The stability of the cell was investigated during aging for 50 h at the cell working temperature, which took 140 melting/freezing cycles. The method of investigation was based on comparison of the tested WC-C cell against a reference Re-C fixed-point cell, which reduces the influence of any instability of the radiation thermometer. It was shown that, after the aging period, the deviation of the WC-C cell melting temperature was with an uncertainty of .
Q-Learning-Based Adjustable Fixed-Phase Quantum Grover Search Algorithm
NASA Astrophysics Data System (ADS)
Guo, Ying; Shi, Wensha; Wang, Yijun; Hu, Jiankun
2017-02-01
We demonstrate that the rotation phase can be suitably chosen to increase the efficiency of the phase-based quantum search algorithm, leading to a dynamic balance between iterations and success probabilities of the fixed-phase quantum Grover search algorithm with Q-learning for a given number of solutions. In this search algorithm, the proposed Q-learning algorithm, which is in essence a model-free reinforcement learning strategy, is used to perform a matching between the fraction of marked items λ and the rotation phase α. After establishing the policy function α = π(λ), we complete the fixed-phase Grover algorithm, where the phase parameter is selected via the learned policy. Simulation results show that the Q-learning-based Grover search algorithm (QLGA) requires fewer iterations and yields higher success probabilities. Compared with conventional Grover algorithms, it avoids locally optimal situations, thereby enabling success probabilities to approach one.
Interior-Point Methods for Linear Programming: A Challenge to the Simplex Method
1988-07-01
subsequently found that the method was first proposed by Dikin in 1967 [6]. Search directions are generated by the same system (5). Any hint of quadratic...1982). Inexact Newton methods, SIAM Journal on Numerical Analysis 19, 400-408. [6] I. I. Dikin (1967). Iterative solution of problems of linear and
Digital micromirror device as amplitude diffuser for multiple-plane phase retrieval
NASA Astrophysics Data System (ADS)
Abregana, Timothy Joseph T.; Hermosa, Nathaniel P.; Almoro, Percival F.
2017-06-01
Previous implementations of the phase diffuser used in the multiple-plane phase retrieval method have included a diffuser glass plate with fixed optical properties or a programmable yet expensive spatial light modulator. Here, a model for phase retrieval based on a digital micromirror device used as an amplitude diffuser is presented. The technique offers a programmable, convenient, and low-cost amplitude diffuser for non-stagnating iterative phase retrieval, and is demonstrated through reconstructions of smooth object wavefronts.
NASA Astrophysics Data System (ADS)
Dana, Saumik; Ganis, Benjamin; Wheeler, Mary F.
2018-01-01
In coupled flow and poromechanics phenomena representing hydrocarbon production or CO2 sequestration in deep subsurface reservoirs, the spatial domain in which fluid flow occurs is usually much smaller than the spatial domain over which significant deformation occurs. The typical approach is to either impose an overburden pressure directly on the reservoir thus treating it as a coupled problem domain or to model flow on a huge domain with zero permeability cells to mimic the no flow boundary condition on the interface of the reservoir and the surrounding rock. The former approach precludes a study of land subsidence or uplift and further does not mimic the true effect of the overburden on stress sensitive reservoirs whereas the latter approach has huge computational costs. In order to address these challenges, we augment the fixed-stress split iterative scheme with upscaling and downscaling operators to enable modeling flow and mechanics on overlapping nonmatching hexahedral grids. Flow is solved on a finer mesh using a multipoint flux mixed finite element method and mechanics is solved on a coarse mesh using a conforming Galerkin method. The multiscale operators are constructed using a procedure that involves singular value decompositions, a surface intersections algorithm and Delaunay triangulations. We numerically demonstrate the convergence of the augmented scheme using the classical Mandel's problem solution.
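A skeleton of the augmented fixed-stress split loop in Python; every callable below (flow_solve, mech_solve, upscale, downscale) is a placeholder for the paper's components (multipoint flux mixed finite element flow on the fine mesh, conforming Galerkin mechanics on the coarse mesh, and the SVD-based multiscale transfer operators), not an implementation of them.

import numpy as np

def fixed_stress_split(flow_solve, mech_solve, upscale, downscale,
                       p0, max_iter=50, tol=1e-8):
    # Iterate: flow on the fine grid with the volumetric (mean) stress
    # frozen, then mechanics on the coarse grid loaded by the upscaled
    # pressure, until the pressure iterates stop changing.
    p, sigma_v, u = np.asarray(p0, dtype=float), None, None
    for _ in range(max_iter):
        p_new = flow_solve(p, sigma_v)   # sigma_v is None on the first pass
        u = mech_solve(upscale(p_new))   # coarse-grid displacement field
        sigma_v = downscale(u)           # volumetric stress back to fine grid
        if np.linalg.norm(p_new - p) <= tol * max(np.linalg.norm(p), 1e-30):
            return p_new, u
        p = p_new
    return p, u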
Simulation of design-unbiased point-to-particle sampling compared to alternatives on plantation rows
Thomas B. Lynch; David Hamlin; Mark J. Ducey
2016-01-01
Total quantities of tree attributes can be estimated in plantations by sampling on plantation rows using several methods. At random sample points on a row, either fixed row lengths or variable row lengths with a fixed number of sample trees can be assessed. Ratio of means or mean of ratios estimators can be developed for the fixed number of trees option but are not...
A Novel Real-Time Reference Key Frame Scan Matching Method.
Mohamed, Haytham; Moussa, Adel; Elhabiby, Mohamed; El-Sheimy, Naser; Sesay, Abu
2017-05-07
Unmanned aerial vehicles represent an effective technology for indoor search and rescue operations, where most mission environments are unknown, unstructured, and/or dynamic. Navigation of UAVs in such environments is addressed by simultaneous localization and mapping (SLAM), using either local or global approaches. Both suffer from accumulated errors and high processing time due to the iterative nature of the scan-matching method, and point-to-point scan matching is prone to outlier association. This paper proposes a low-cost novel method for 2D real-time scan matching based on a reference key frame (RKF), a hybrid scan-matching technique combining feature-to-feature and point-to-point approaches. The algorithm aims to mitigate error accumulation using the key frame technique, which is inspired by the video streaming broadcast process. It falls back on the iterative closest point algorithm when linear features are lacking, as is typical of unstructured environments, and switches back to the RKF once linear features are detected. To validate and evaluate the algorithm, the mapping performance and time consumption are compared with various algorithms in static and dynamic environments. The algorithm exhibits promising navigation and mapping results and very short computational times, indicating its potential for use in real-time systems.
Computationally efficient finite-difference modal method for the solution of Maxwell's equations.
Semenikhin, Igor; Zanuccoli, Mauro
2013-12-01
In this work, a new implementation of the finite-difference (FD) modal method (FDMM), based on an iterative approach to calculating the eigenvalues and corresponding eigenfunctions of the Helmholtz equation, is presented. Two enhancements that significantly increase the speed and accuracy of the method are introduced. First, the solution of the complete eigenvalue problem is avoided in favor of finding only the meaningful part of the eigenmodes using iterative methods. Second, a multigrid algorithm and Richardson extrapolation are implemented. Simultaneous use of these techniques leads to an enhancement in accuracy which allows a simple method such as the FDMM with a typical three-point difference scheme to be highly competitive with an analytical modal method.
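Richardson extrapolation, one of the two enhancements named above, combines solutions at mesh sizes h and h/2 of a method with leading error O(h^p) to cancel that term; a one-function sketch:

import numpy as np

def richardson(f_h, f_h2, order=2):
    # (2**order * f(h/2) - f(h)) / (2**order - 1) cancels the O(h**order) term.
    r = 2.0 ** order
    return (r * f_h2 - f_h) / (r - 1.0)

# Usage: second-order central difference of d/dx sin(x) at x = 1.
g = lambda h: (np.sin(1 + h) - np.sin(1 - h)) / (2 * h)
approx = richardson(g(0.1), g(0.05))  # far closer to cos(1) than either input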
Cartesian-Grid Simulations of a Canard-Controlled Missile with a Free-Spinning Tail
NASA Technical Reports Server (NTRS)
Murman, Scott M.; Aftosmis, Michael J.; Kwak, Dochan (Technical Monitor)
2002-01-01
This paper presents a series of simulations of a geometrically complex, canard-controlled, supersonic missile with free-spinning tail fins. Time-dependent simulations were performed using an inviscid Cartesian-grid-based method, with results compared to both experimental data and high-resolution Navier-Stokes computations. At fixed freestream conditions and canard deflections, the tail spin rate was iteratively determined such that the net rolling moment on the empennage is zero; this rate corresponds to the time-asymptotic rate of the free-to-spin fin system. After obtaining spin-averaged aerodynamic coefficients for the missile, the investigation seeks a fixed-tail approximation to the spin-averaged aerodynamic coefficients and examines the validity of this approximation over a variety of freestream conditions.
Hart, Michael L.; Drakopoulos, Michael; Reinhard, Christina; Connolley, Thomas
2013-01-01
A complete calibration method to characterize a static planar two-dimensional detector for use in X-ray diffraction at an arbitrary wavelength is described. This method is based upon geometry describing the point of intersection between a cone’s axis and its elliptical conic section. This point of intersection is neither the ellipse centre nor one of the ellipse focal points, but some other point which lies in between. The presented solution is closed form, algebraic and non-iterative in its application, and gives values for the X-ray beam energy, the sample-to-detector distance, the location of the beam centre on the detector surface and the detector tilt relative to the incident beam. Previous techniques have tended to require prior knowledge of either the X-ray beam energy or the sample-to-detector distance, whilst other techniques have been iterative. The new calibration procedure is performed by collecting diffraction data, in the form of diffraction rings from a powder standard, at known displacements of the detector along the beam path. PMID:24068840
Implicit integration methods for dislocation dynamics
Gardner, D. J.; Woodward, C. S.; Reynolds, D. R.; ...
2015-01-20
In dislocation dynamics simulations, strain hardening simulations require integrating stiff systems of ordinary differential equations in time, with expensive force calculations, discontinuous topological events, and rapidly changing problem size. Current solvers in use often result in small time steps and long simulation times. Faster solvers may help dislocation dynamics simulations accumulate plastic strain at strain rates comparable to experimental observations. This paper investigates the viability of high-order implicit time integrators and robust nonlinear solvers to reduce simulation run times while maintaining the accuracy of the computed solution. In particular, implicit Runge-Kutta time integrators are explored as a way of providing greater accuracy over a larger time step than is typically achieved with the standard second-order trapezoidal method. In addition, both accelerated fixed-point iteration and Newton's method are investigated to provide fast and effective solves for the nonlinear systems that must be resolved within each time step. Results show that third-order integrators are the most effective, while accelerated fixed point and Newton's method both improve solver performance over the standard fixed-point method used for the solution of the nonlinear systems.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang, Le; Yu, Yu; Zhang, Pengjie, E-mail: lezhang@sjtu.edu.cn
Photo-z error is one of the major sources of systematics degrading the accuracy of weak-lensing cosmological inferences. Zhang et al. proposed a self-calibration method combining galaxy–galaxy correlations and galaxy–shear correlations between different photo-z bins. Fisher matrix analysis shows that it can determine the rate of photo-z outliers at a level of 0.01%–1% using photometric data alone, without relying on any prior knowledge. In this paper, we develop a new algorithm to implement this method by solving a constrained nonlinear optimization problem arising in the self-calibration process. Based on the techniques of fixed-point iteration and non-negative matrix factorization, the proposed algorithm can efficiently and robustly reconstruct the scattering probabilities between the true-z and photo-z bins. The algorithm has been tested extensively by applying it to mock data from simulated stage IV weak-lensing projects. We find that the algorithm provides a successful recovery of the scatter rates at the level of 0.01%–1%, and of the true mean redshifts of the photo-z bins at the level of 0.001, which may satisfy the requirements of future lensing surveys.
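For reference, the standard Lee-Seung multiplicative updates for non-negative matrix factorization, which preserve non-negativity automatically (one reason the technique suits scattering probabilities). This is generic NMF only; the paper's constrained reconstruction couples such updates with fixed-point iteration on the self-calibration equations and is not reproduced here.

import numpy as np

def nmf_multiplicative(V, rank, n_iter=500, eps=1e-12, seed=0):
    # Minimize ||V - W H||_F with elementwise multiplicative updates:
    # H <- H * (W^T V) / (W^T W H),  W <- W * (V H^T) / (W H H^T).
    rng = np.random.default_rng(seed)
    m, n = V.shape
    W, H = rng.random((m, rank)), rng.random((rank, n))
    for _ in range(n_iter):
        H *= (W.T @ V) / (W.T @ W @ H + eps)
        W *= (V @ H.T) / (W @ H @ H.T + eps)
    return W, H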
Wu, Huai-Ning; Luo, Biao
2012-12-01
It is well known that the nonlinear H∞ state feedback control problem relies on the solution of the Hamilton-Jacobi-Isaacs (HJI) equation, which is a nonlinear partial differential equation that has proven to be impossible to solve analytically. In this paper, a neural network (NN)-based online simultaneous policy update algorithm (SPUA) is developed to solve the HJI equation, in which knowledge of internal system dynamics is not required. First, we propose an online SPUA which can be viewed as a reinforcement learning technique for two players to learn their optimal actions in an unknown environment. The proposed online SPUA updates control and disturbance policies simultaneously; thus, only one iterative loop is needed. Second, the convergence of the online SPUA is established by proving that it is mathematically equivalent to Newton's method for finding a fixed point in a Banach space. Third, we develop an actor-critic structure for the implementation of the online SPUA, in which only one critic NN is needed for approximating the cost function, and a least-square method is given for estimating the NN weight parameters. Finally, simulation studies are provided to demonstrate the effectiveness of the proposed algorithm.
A Fast Solver for Implicit Integration of the Vlasov--Poisson System in the Eulerian Framework
Garrett, C. Kristopher; Hauck, Cory D.
2018-04-05
In this paper, we present a domain decomposition algorithm to accelerate the solution of Eulerian-type discretizations of the linear, steady-state Vlasov equation. The steady-state solver then forms a key component in the implementation of fully implicit or nearly fully implicit temporal integrators for the nonlinear Vlasov--Poisson system. The solver relies on a particular decomposition of phase space that enables the use of sweeping techniques commonly used in radiation transport applications. The original linear system for the phase space unknowns is then replaced by a smaller linear system involving only unknowns on the boundary between subdomains, which can then be solved efficiently with Krylov methods such as GMRES. Steady-state solves are combined to form an implicit Runge--Kutta time integrator, and the Vlasov equation is coupled self-consistently to the Poisson equation via a linearized procedure or a nonlinear fixed-point method for the electric field. Finally, numerical results for standard test problems demonstrate the efficiency of the domain decomposition approach when compared to the direct application of an iterative solver to the original linear system.
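The generic pattern of eliminating interior unknowns and solving a smaller boundary (Schur complement) system with GMRES can be sketched as follows; the random diagonally dominant matrix stands in for the actual Vlasov discretization, and the block sizes are arbitrary assumptions.

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, gmres

# Block system [[A_II, A_IB], [A_BI, A_BB]] [x_I; x_B] = [f_I; f_B].
# Eliminating interior unknowns gives the boundary Schur system
#   S x_B = f_B - A_BI A_II^{-1} f_I,  with  S = A_BB - A_BI A_II^{-1} A_IB.
rng = np.random.default_rng(0)
nI, nB = 40, 10
A = rng.random((nI + nB, nI + nB)) + (nI + nB) * np.eye(nI + nB)  # diag-dominant
A_II, A_IB = A[:nI, :nI], A[:nI, nI:]
A_BI, A_BB = A[nI:, :nI], A[nI:, nI:]
f = rng.random(nI + nB)
f_I, f_B = f[:nI], f[nI:]

def schur_mv(xB):
    # Apply S without forming it: one interior solve per matvec.
    return A_BB @ xB - A_BI @ np.linalg.solve(A_II, A_IB @ xB)

S = LinearOperator((nB, nB), matvec=schur_mv, dtype=float)
rhs = f_B - A_BI @ np.linalg.solve(A_II, f_I)
xB, info = gmres(S, rhs, atol=1e-10)
xI = np.linalg.solve(A_II, f_I - A_IB @ xB)   # recover interior unknowns
print(info, np.linalg.norm(A @ np.concatenate([xI, xB]) - f))
```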
Experimental realization of self-guided quantum coherence freezing
NASA Astrophysics Data System (ADS)
Yu, Shang; Wang, Yi-Tao; Ke, Zhi-Jin; Liu, Wei; Zhang, Wen-Hao; Chen, Geng; Tang, Jian-Shun; Li, Chuan-Feng; Guo, Guang-Can
2017-12-01
Quantum coherence is the most essential characteristic of quantum physics; specifically, when it is subject to the resource-theoretical framework, it is considered the most fundamental resource for quantum technologies. Other quantum resources, e.g., entanglement, are all based on coherence. Therefore, it becomes urgently important to learn how to preserve coherence in quantum channels. The best preservation is coherence freezing, which has been studied recently. However, in these studies, the freezing condition is theoretically calculated, and a practical way to achieve this freezing is still lacking; in addition, the channels are usually fixed, but in practice there are also degrees of freedom that can be used to adapt the channels to quantum states. Here we develop a self-guided quantum coherence freezing method, which can guide either the quantum channels (tunable-channel scheme with upgraded channels) or the initial state (fixed-channel scheme) to the coherence-freezing zone from any starting estimate. Specifically, in the fixed-channel scheme, the final iterated quantum states all satisfy the previously calculated freezing condition. This coincidence demonstrates the validity of our method. Our work will be helpful for the better protection of quantum coherence.
Liu, Ruijie Rachel; Erwin, William D
2006-08-01
An algorithm was developed to estimate the noncircular orbit (NCO) single-photon emission computed tomography (SPECT) detector radius on a SPECT/CT imaging system using the CT images, for incorporation into collimator resolution modeling for iterative SPECT reconstruction. Medium-energy SPECT/CT scans of simulated male abdominal (arms up), male head and neck (arms down), and female chest (arms down) anthropomorphic phantoms, and of ten patients, were acquired on a hybrid imaging system. The algorithm simulated inward SPECT detector radial motion and object contour detection at each projection angle, employing the calculated average CT image and a fixed Hounsfield unit (HU) threshold. Calculated radii were compared to the observed true radii, and optimal CT threshold values, corresponding to patient bed and clothing surfaces, were found to be between -970 and -950 HU. The algorithm was constrained by the 45 cm CT field-of-view (FOV), which limited the detected radii to ≤22.5 cm and led to occasional radius underestimation in the case of object truncation by CT. Two methods incorporating the algorithm were implemented: physical model (PM) and best fit (BF). The PM method computed an offset that produced maximum overlap of calculated and true radii for the phantom scans, and applied that offset as a calculated-to-true radius transformation. For the BF method, the calculated-to-true radius transformation was based upon a linear regression between calculated and true radii. For the PM method, a fixed offset of +2.75 cm provided maximum calculated-to-true radius overlap for the phantom study, which accounted for the camera system's object contour detection sensor surface-to-detector face distance. For the BF method, a linear regression of true versus calculated radius from a reference patient scan was used as the calculated-to-true radius transform. Both methods were applied to ten patient scans. For the -970 and -950 HU thresholds, the combined overall average root-mean-square (rms) error in radial position for the eight patient scans without truncation was 3.37 cm (12.9%) for PM and 1.99 cm (8.6%) for BF, indicating that BF is superior to PM in the absence of truncation. For the two patient scans with truncation, the rms error was 3.24 cm (12.2%) for PM and 4.10 cm (18.2%) for BF. The slightly better performance of PM in the case of truncation is anomalous, due to FOV edge truncation artifacts in the CT reconstruction, and thus is suspect. The calculated NCO contour for a patient SPECT/CT scan was used with an iterative reconstruction algorithm that incorporated compensation for system resolution. The resulting image was qualitatively superior to the image obtained by reconstructing the data using the fixed radius stored by the scanner. The result was also superior to the image reconstructed using the iterative algorithm provided with the system, which does not incorporate resolution modeling. These results suggest that, under conditions of no or only mild lateral truncation of the CT scan, the algorithm is capable of providing radius estimates suitable for iterative SPECT reconstruction collimator geometric resolution modeling.
Highly multiplexed single-cell analysis of formalin-fixed, paraffin-embedded cancer tissue
Gerdes, Michael J.; Sevinsky, Christopher J.; Sood, Anup; Adak, Sudeshna; Bello, Musodiq O.; Bordwell, Alexander; Can, Ali; Corwin, Alex; Dinn, Sean; Filkins, Robert J.; Hollman, Denise; Kamath, Vidya; Kaanumalle, Sireesha; Kenny, Kevin; Larsen, Melinda; Lazare, Michael; Lowes, Christina; McCulloch, Colin C.; McDonough, Elizabeth; Pang, Zhengyu; Rittscher, Jens; Santamaria-Pang, Alberto; Sarachan, Brion D.; Seel, Maximilian L.; Seppo, Antti; Shaikh, Kashan; Sui, Yunxia; Zhang, Jingyu; Ginty, Fiona
2013-01-01
Limitations on the number of unique protein and DNA molecules that can be characterized microscopically in a single tissue specimen impede advances in understanding the biological basis of health and disease. Here we present a multiplexed fluorescence microscopy method (MxIF) for quantitative, single-cell, and subcellular characterization of multiple analytes in formalin-fixed paraffin-embedded tissue. Chemical inactivation of fluorescent dyes after each image acquisition round allows reuse of common dyes in iterative staining and imaging cycles. The mild inactivation chemistry is compatible with total and phosphoprotein detection, as well as DNA FISH. Accurate computational registration of sequential images is achieved by aligning nuclear counterstain-derived fiducial points. Individual cells, plasma membrane, cytoplasm, nucleus, tumor, and stromal regions are segmented to achieve cellular and subcellular quantification of multiplexed targets. In a comparison of pathologist scoring of diaminobenzidine staining of serial sections and automated MxIF scoring of a single section, human epidermal growth factor receptor 2, estrogen receptor, p53, and androgen receptor staining by diaminobenzidine and MxIF methods yielded similar results. Single-cell staining patterns of 61 protein antigens by MxIF in 747 colorectal cancer subjects reveal extensive tumor heterogeneity, and cluster analysis of divergent signaling through ERK1/2, S6 kinase 1, and 4E binding protein 1 provides insights into the spatial organization of mechanistic target of rapamycin and MAPK signal transduction. Our results suggest MxIF should be broadly applicable to problems in the fields of basic biological research, drug discovery and development, and clinical diagnostics. PMID:23818604
Highly multiplexed single-cell analysis of formalin-fixed, paraffin-embedded cancer tissue.
Gerdes, Michael J; Sevinsky, Christopher J; Sood, Anup; Adak, Sudeshna; Bello, Musodiq O; Bordwell, Alexander; Can, Ali; Corwin, Alex; Dinn, Sean; Filkins, Robert J; Hollman, Denise; Kamath, Vidya; Kaanumalle, Sireesha; Kenny, Kevin; Larsen, Melinda; Lazare, Michael; Li, Qing; Lowes, Christina; McCulloch, Colin C; McDonough, Elizabeth; Montalto, Michael C; Pang, Zhengyu; Rittscher, Jens; Santamaria-Pang, Alberto; Sarachan, Brion D; Seel, Maximilian L; Seppo, Antti; Shaikh, Kashan; Sui, Yunxia; Zhang, Jingyu; Ginty, Fiona
2013-07-16
Limitations on the number of unique protein and DNA molecules that can be characterized microscopically in a single tissue specimen impede advances in understanding the biological basis of health and disease. Here we present a multiplexed fluorescence microscopy method (MxIF) for quantitative, single-cell, and subcellular characterization of multiple analytes in formalin-fixed paraffin-embedded tissue. Chemical inactivation of fluorescent dyes after each image acquisition round allows reuse of common dyes in iterative staining and imaging cycles. The mild inactivation chemistry is compatible with total and phosphoprotein detection, as well as DNA FISH. Accurate computational registration of sequential images is achieved by aligning nuclear counterstain-derived fiducial points. Individual cells, plasma membrane, cytoplasm, nucleus, tumor, and stromal regions are segmented to achieve cellular and subcellular quantification of multiplexed targets. In a comparison of pathologist scoring of diaminobenzidine staining of serial sections and automated MxIF scoring of a single section, human epidermal growth factor receptor 2, estrogen receptor, p53, and androgen receptor staining by diaminobenzidine and MxIF methods yielded similar results. Single-cell staining patterns of 61 protein antigens by MxIF in 747 colorectal cancer subjects reveal extensive tumor heterogeneity, and cluster analysis of divergent signaling through ERK1/2, S6 kinase 1, and 4E binding protein 1 provides insights into the spatial organization of mechanistic target of rapamycin and MAPK signal transduction. Our results suggest MxIF should be broadly applicable to problems in the fields of basic biological research, drug discovery and development, and clinical diagnostics.
Newell, Felicity L.; Sheehan, James; Wood, Petra Bohall; Rodewald, Amanda D.; Buehler, David A.; Keyser, Patrick D.; Larkin, Jeffrey L.; Beachy, Tiffany A.; Bakermans, Marja H.; Boves, Than J.; Evans, Andrea; George, Gregory A.; McDermott, Molly E.; Perkins, Kelly A.; White, Matthew; Wigley, T. Bently
2013-01-01
Point counts are commonly used to assess changes in bird abundance, including analytical approaches such as distance sampling that estimate density. Point-count methods have come under increasing scrutiny because the effects of detection probability and field error are difficult to quantify. For seven forest songbirds, we compared fixed-radius counts (50 m and 100 m) and density estimates obtained from distance sampling to known numbers of birds determined by territory mapping. We applied point-count analytic approaches to a typical forest management question and compared results to those obtained by territory mapping. We used a before-after control-impact (BACI) analysis with a data set collected across seven study areas in the central Appalachians from 2006 to 2010. Using a 50-m fixed radius, the variance in error was at least 1.5 times that of the other methods, whereas a 100-m fixed radius underestimated actual density by >3 territories per 10 ha for the most abundant species. Distance sampling improved accuracy and precision compared to fixed-radius counts, although estimates were affected by birds counted outside 10-ha units. In the BACI analysis, territory mapping detected an overall treatment effect for five of the seven species, and effects were generally consistent each year. In contrast, all point-count methods failed to detect two treatment effects due to variance and error in annual estimates. Overall, our results highlight the need for adequate sample sizes to reduce variance, and skilled observers to reduce the level of error in point-count data. Ultimately, the advantages and disadvantages of different survey methods should be considered in the context of overall study design and objectives, allowing for trade-offs among effort, accuracy, and power to detect treatment effects.
USDA-ARS?s Scientific Manuscript database
False positives in a Genome-Wide Association Study (GWAS) can be effectively controlled by a fixed effect and random effect Mixed Linear Model (MLM) that incorporates population structure and kinship among individuals to adjust association tests on markers; however, the adjustment also compromises t...
Omisore, Olatunji Mumini; Han, Shipeng; Ren, Lingxue; Zhang, Nannan; Ivanov, Kamen; Elazab, Ahmed; Wang, Lei
2017-08-01
The snake-like robot is an emerging form of serial-link manipulator with the morphologic design of biological snakes. Such a redundant robot can be used to assist medical experts in accessing internal organs with minimal or no invasion. Several snake-like robotic designs have been proposed for minimally invasive surgery; however, the few that were developed are yet to be fully explored for clinical procedures. This is due to a lack of capability for full-fledged spatial navigation. In the rare cases where such snake-like designs are spatially flexible, there exists no inverse kinematics (IK) solution with both precise control and fast response. In this study, we propose a non-iterative geometric method for solving the IK of the lead module of a snake-like robot designed for therapy or ablation of abdominal tumors. The proposed method is aimed at providing an accurate and fast IK solution for given target points in the robot's workspace. n-1 virtual points (VPs) were geometrically computed and set as coordinates of intermediary joints in an n-link module. Suitable joint angles that can place the end-effector at given target points were then computed by vectorizing the coordinates of the VPs, in addition to the coordinates of the base point, target point, and tip of the first link in its default pose. The proposed method is applied to solve the IK of two-link and redundant four-link modules. Both two-link and four-link modules were simulated with Robotics Toolbox in Matlab 8.3 (R2014a). Implementation results show that the proposed method can solve the IK of the spatially flexible robot with minimal error values. Furthermore, analyses of results from both modules show that the geometric method can reach 99.21 and 88.61% of points in their workspaces, respectively, with an error threshold of 1 mm. The proposed method is non-iterative and has a maximum execution time of 0.009 s. This paper focuses on solving the IK problem of a spatially flexible robot which is part of a developmental project for abdominal surgery through minimal invasion or natural orifices. The study showed that the proposed geometric method can resolve the IK of the snake-like robot with negligible error offset. Evaluation against well-known methods shows that the proposed method can reach several points in the robot's workspace with high accuracy and shorter computational time simultaneously.
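The paper's n-link virtual-point construction is not reproduced here, but the flavor of a non-iterative, geometric IK solution can be shown with the textbook closed-form solution for a planar two-link arm (law of cosines); the link lengths and target point below are arbitrary assumptions.

```python
import numpy as np

def two_link_ik(x, y, l1, l2, elbow_up=True):
    """Closed-form (non-iterative) IK for a planar two-link arm."""
    r2 = x * x + y * y
    c2 = (r2 - l1 * l1 - l2 * l2) / (2 * l1 * l2)   # law of cosines
    if not -1.0 <= c2 <= 1.0:
        raise ValueError("target out of reach")
    s2 = np.sqrt(1.0 - c2 * c2) * (1.0 if elbow_up else -1.0)
    theta2 = np.arctan2(s2, c2)
    theta1 = np.arctan2(y, x) - np.arctan2(l2 * s2, l1 + l2 * c2)
    return theta1, theta2

# usage: solve for a target, then forward-check the end-effector position
t1, t2 = two_link_ik(0.7, 0.5, 0.5, 0.5)
fx = 0.5 * np.cos(t1) + 0.5 * np.cos(t1 + t2)
fy = 0.5 * np.sin(t1) + 0.5 * np.sin(t1 + t2)
print(t1, t2, fx, fy)   # (fx, fy) should equal the target (0.7, 0.5)
```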
Uniscale multi-view registration using double dog-leg method
NASA Astrophysics Data System (ADS)
Chen, Chao-I.; Sargent, Dusty; Tsai, Chang-Ming; Wang, Yuan-Fang; Koppel, Dan
2009-02-01
3D computer models of body anatomy can have many uses in medical research and clinical practice. This paper describes a robust method that uses videos of body anatomy to construct multiple, partial 3D structures and then fuse them to form a larger, more complete computer model using the structure-from-motion framework. We employ the Double Dog-Leg (DDL) method, a trust-region based nonlinear optimization method, to jointly optimize the camera motion parameters (rotation and translation) and determine a global scale that all partial 3D structures should agree upon. These optimized motion parameters are used for constructing local structures, and the global scale is essential for multi-view registration after all these partial structures are built. In order to provide a good initial guess of the camera movement parameters and outlier-free 2D point correspondences for DDL, we also propose a two-stage scheme in which multi-RANSAC with a normalized eight-point algorithm is first performed and then a few iterations of an over-determined five-point algorithm are used to polish the results. Our experimental results using colonoscopy video show that the proposed scheme always produces more accurate outputs than the standard RANSAC scheme. Furthermore, since we have obtained many reliable point correspondences, time-consuming and error-prone registration methods like the iterative closest point (ICP)-based algorithms can be replaced by a simple rigid-body transformation solver when merging partial structures into a larger model.
Fixed-Rate Compressed Floating-Point Arrays.
Lindstrom, Peter
2014-12-01
Current compression schemes for floating-point data commonly take fixed-precision values and compress them to a variable-length bit stream, complicating memory management and random access. We present a fixed-rate, near-lossless compression scheme that maps small blocks of 4^d values in d dimensions to a fixed, user-specified number of bits per block, thereby allowing read and write random access to compressed floating-point data at block granularity. Our approach is inspired by fixed-rate texture compression methods widely adopted in graphics hardware, but has been tailored to the high dynamic range and precision demands of scientific applications. Our compressor is based on a new, lifted, orthogonal block transform and embedded coding, allowing each per-block bit stream to be truncated at any point if desired, thus facilitating bit rate selection using a single compression scheme. To avoid compression or decompression upon every data access, we employ a software write-back cache of uncompressed blocks. Our compressor has been designed with computational simplicity and speed in mind to allow for the possibility of a hardware implementation, and uses only a small number of fixed-point arithmetic operations per compressed value. We demonstrate the viability and benefits of lossy compression in several applications, including visualization, quantitative data analysis, and numerical simulation.
NASA Astrophysics Data System (ADS)
Wang, Bo; Tian, Kuo; Zhao, Haixin; Hao, Peng; Zhu, Tianyu; Zhang, Ke; Ma, Yunlong
2017-06-01
In order to improve the post-buckling optimization efficiency of hierarchical stiffened shells, a multilevel optimization framework accelerated by an adaptive equivalent strategy is presented in this paper. Firstly, the Numerical-based Smeared Stiffener Method (NSSM) for hierarchical stiffened shells is derived by means of the numerical implementation of asymptotic homogenization (NIAH) method. Based on the NSSM, a reasonable adaptive equivalent strategy for hierarchical stiffened shells is developed from the concept of hierarchy reduction. Its core idea is to self-adaptively decide which hierarchy of the structure should be made equivalent according to the critical buckling mode rapidly predicted by the NSSM. Compared with the detailed model, the high prediction accuracy and efficiency of the proposed model are highlighted. On the basis of this adaptive equivalent model, a multilevel optimization framework is then established by decomposing the complex entire optimization process into major-stiffener-level and minor-stiffener-level sub-optimizations, during which Fixed Point Iteration (FPI) is employed to accelerate convergence. Finally, illustrative examples of the multilevel framework are carried out to demonstrate its efficiency and effectiveness in searching for the global optimum, in contrast with the single-level optimization method. Remarkably, the high efficiency and flexibility of the adaptive equivalent strategy are indicated by comparison with the single equivalent strategy.
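As a generic illustration of the Fixed Point Iteration used to accelerate convergence in such nested loops, the sketch below implements a damped fixed-point update with a relative stopping test; the test function, relaxation factor, and tolerance are placeholders rather than the paper's stiffness-equivalence maps.

```python
import numpy as np

def fixed_point_iterate(g, x0, relax=1.0, tol=1e-10, max_iter=200):
    """Damped fixed-point iteration x <- (1 - relax) * x + relax * g(x)."""
    x = np.atleast_1d(np.asarray(x0, dtype=float))
    for k in range(max_iter):
        x_new = (1.0 - relax) * x + relax * np.asarray(g(x))
        if np.linalg.norm(x_new - x) < tol * (1.0 + np.linalg.norm(x)):
            return x_new, k + 1
        x = x_new
    return x, max_iter

# toy usage: the contraction x = cos(x) converges without derivatives
x_star, iters = fixed_point_iterate(np.cos, 1.0)
print(float(x_star[0]), iters)
```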
Monte Carlo approaches to sampling forested tracts with lines or points
Harry T. Valentine; Jeffrey H. Gove; Timothy G. Gregoire
2001-01-01
Several line- and point-based sampling methods can be employed to estimate the aggregate dimensions of trees standing on a forested tract or pieces of coarse woody debris lying on the forest floor. Line methods include line intersect sampling, horizontal line sampling, and transect relascope sampling; point methods include variable- and fixed-radius plot sampling, and...
Introducing soft systems methodology plus (SSM+): why we need it and what it can contribute.
Braithwaite, Jeffrey; Hindle, Don; Iedema, Rick; Westbrook, Johanna I
2002-01-01
There are many complicated and seemingly intractable problems in the health care sector. Past ways to address them have involved political responses, economic restructuring, biomedical and scientific studies, and managerialist or business-oriented tools. Few methods have enabled us to develop a systematic response to problems. Our version of soft systems methodology, SSM+, seems to improve problem solving processes by providing an iterative, staged framework that emphasises collaborative learning and systems redesign involving both technical and cultural fixes.
Evolution families of conformal mappings with fixed points and the Löwner-Kufarev equation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Goryainov, V V
2015-01-31
The paper is concerned with evolution families of conformal mappings of the unit disc to itself that fix an interior point and a boundary point. Conditions are obtained for the evolution families to be differentiable, and an existence and uniqueness theorem for an evolution equation is proved. A convergence theorem is established which describes the topology of locally uniform convergence of evolution families in terms of infinitesimal generating functions. The main result in this paper is the embedding theorem, which shows that any conformal mapping of the unit disc to itself with two fixed points can be embedded into a differentiable evolution family of such mappings. This result extends the range of the parametric method in the theory of univalent functions. In this way the problem of the mutual change of the derivative at an interior point and the angular derivative at a fixed point on the boundary is solved for a class of mappings of the unit disc to itself. In particular, the rotation theorem is established for this class of mappings. Bibliography: 27 titles.
User-centered design and evaluation of a next generation fixed-split ergonomic keyboard.
McLoone, Hugh E; Jacobson, Melissa; Hegg, Chau; Johnson, Peter W
2010-01-01
Research has shown that fixed-split, ergonomic keyboards reduce pain and improve functional status in symptomatic individuals, as well as reduce the likelihood of developing musculoskeletal disorders in asymptomatic typists over extended use. The goal of this study was to evaluate design features to determine whether the current fixed-split ergonomic keyboard design could be improved. Thirty-nine, adult-aged, fixed-split ergonomic keyboard users were recruited to participate in one of three studies. First utilizing non-functional models and later a functional prototype, three studies evaluated keyboard design features including: 1) keyboard lateral inclination, 2) wrist rest height, 3) keyboard slope, and 4) curved "gull-wing" key layouts. The findings indicated that keyboard lateral inclination could be increased from 8° to 14°; wrist rest height could be increased up to 10 mm from the current setting; positive, flat, and negative slope settings were equally preferred and facilitated greater postural variation; and participants preferred a new gull-wing key layout. The design changes reduced forearm pronation and wrist extension while not adversely affecting typing performance. This research demonstrated how iterative-evaluative, user-centered research methods can be utilized to improve a product's design, such as a fixed-split ergonomic keyboard.
NASA Astrophysics Data System (ADS)
Zamzamir, Zamzana; Murid, Ali H. M.; Ismail, Munira
2014-06-01
The numerical solution of a uniquely solvable exterior Riemann-Hilbert problem on a region with corners has previously been explored by discretizing the related integral equation at off-corner points using the Picard iteration method, without any modifications to the left-hand side (LHS) and right-hand side (RHS) of the integral equation. The numerical errors for all iterations converge to the required solution; however, for certain problems this gives lower accuracy. Hence, this paper presents a new numerical approach for the problem by treating the generalized Neumann kernel on the LHS and the function on the RHS of the integral equation. Due to the existence of the corner points, Gaussian quadrature is employed, which avoids the corner points during numerical integration. A numerical example on a test region is presented to demonstrate the effectiveness of this formulation.
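A minimal sketch of Picard iteration for a Fredholm integral equation of the second kind, discretized with Gauss-Legendre quadrature (whose nodes avoid the interval endpoints, loosely analogous to avoiding corner points), might look as follows; the kernel, right-hand side, and the parameter lam are illustrative choices, not the generalized Neumann kernel from the paper.

```python
import numpy as np

# Picard iteration for u(x) = f(x) + lam * \int_{-1}^{1} K(x, t) u(t) dt,
# discretized with Gauss-Legendre nodes (no endpoint evaluation).
n = 32
nodes, weights = np.polynomial.legendre.leggauss(n)
lam = 0.25                                        # ensures a contraction here
K = lambda x, t: np.exp(-np.abs(x - t))           # sample smooth kernel
f = lambda x: np.ones_like(x)

Kmat = K(nodes[:, None], nodes[None, :]) * weights[None, :]
u = f(nodes)                                      # initial guess u_0 = f
for _ in range(100):
    u_new = f(nodes) + lam * Kmat @ u             # Picard update
    if np.max(np.abs(u_new - u)) < 1e-12:
        break
    u = u_new
print(u[:4])
```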
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nakashima, Hiroyuki; Hijikata, Yuh; Nakatsuji, Hiroshi
2008-04-21
Very accurate variational calculations with the free iterative-complement-interaction (ICI) method for solving the Schroedinger equation were performed for the 1sNs singlet and triplet excited states of the helium atom up to N=24. This is the first extensive application of the free ICI method to the calculation of excited states to very high levels. We performed the calculations with the fixed-nucleus Hamiltonian and the moving-nucleus Hamiltonian. The latter case is the Schroedinger equation for the electron-nuclear Hamiltonian and includes the quantum effect of nuclear motion. This solution corresponds to the nonrelativistic limit and reproduced the experimental values up to five decimal figures. The small differences from the experimental values are not theoretical errors but represent physical effects that are not included in the present calculations, such as the relativistic effect, the quantum electrodynamic effect, and even the experimental errors. The present calculations constitute a small step toward accurately predictive quantum chemistry.
Deconvolution of interferometric data using interior point iterative algorithms
NASA Astrophysics Data System (ADS)
Theys, C.; Lantéri, H.; Aime, C.
2016-09-01
We address the problem of deconvolution of astronomical images that could be obtained with future large interferometers in space. The presentation is made in two complementary parts. The first part gives an introduction to image deconvolution with linear and nonlinear algorithms. The emphasis is on nonlinear iterative algorithms that verify the constraints of non-negativity and constant flux. The Richardson-Lucy algorithm appears there as a special case for photon counting conditions. More generally, the algorithm published recently by Lanteri et al. (2015) is based on scale-invariant divergences without any assumption on the statistical model of the data. The two proposed algorithms are interior-point algorithms, the latter being more efficient in terms of speed of calculation. These algorithms are applied to the deconvolution of simulated images corresponding to an interferometric system of 16 diluted telescopes in space. Two non-redundant configurations, one disposed around a circle and the other on a hexagonal lattice, are compared for their effectiveness on a simple astronomical object. The comparison is made in the direct and Fourier spaces. Raw "dirty" images have many artifacts due to replicas of the original object. Linear methods cannot remove these replicas, while iterative methods clearly show their efficacy in these examples.
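For reference, the Richardson-Lucy algorithm mentioned above can be sketched in a few lines; its multiplicative updates keep the estimate non-negative and approximately flux-conserving. The Gaussian PSF and toy object below are assumptions for demonstration only.

```python
import numpy as np
from scipy.signal import fftconvolve

def richardson_lucy(image, psf, n_iter=50, eps=1e-12):
    """Richardson-Lucy deconvolution; the multiplicative updates keep the
    estimate non-negative and approximately conserve total flux."""
    psf = psf / psf.sum()
    psf_mirror = psf[::-1, ::-1]
    estimate = np.full(image.shape, image.mean())   # flat initial guess
    for _ in range(n_iter):
        blurred = fftconvolve(estimate, psf, mode="same")
        ratio = image / (blurred + eps)
        estimate = estimate * fftconvolve(ratio, psf_mirror, mode="same")
    return estimate

# toy usage: blur two point-like sources, then deconvolve
obj = np.zeros((64, 64))
obj[20, 24] = 1.0
obj[40, 40] = 0.5
x, y = np.mgrid[-7:8, -7:8]
psf = np.exp(-(x**2 + y**2) / 8.0)
data = fftconvolve(obj, psf / psf.sum(), mode="same")
restored = richardson_lucy(data, psf, n_iter=100)
print(restored.max(), restored.sum(), obj.sum())   # flux roughly preserved
```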
ERIC Educational Resources Information Center
Bellver-Cebreros, Consuelo; Rodriguez-Danta, Marcelo
2009-01-01
An apparently unnoticed analogy between the torque-free motion of a rotating rigid body about a fixed point and the propagation of light in anisotropic media is stated. First, a new plane construction for visualizing this torque-free motion is proposed. This method uses an intrinsic representation alternative to angular momentum and independent of…
Traveling-cluster approximation for uncorrelated amorphous systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sen, A.K.; Mills, R.; Kaplan, T.
1984-11-15
We have developed a formalism for including cluster effects in the one-electron Green's function for a positionally disordered (liquid or amorphous) system without any correlation among the scattering sites. This method is an extension of the technique known as the traveling-cluster approximation (TCA) originally obtained and applied to a substitutional alloy by Mills and Ratanavararaksa. We have also proved the appropriate fixed-point theorem, which guarantees, for a bounded local potential, that the self-consistent equations always converge upon iteration to a unique, Herglotz solution. To our knowledge, this is the only analytic theory for considering cluster effects. Furthermore, we have performed some computer calculations in the pair TCA, for the model case of delta-function potentials on a one-dimensional random chain. These results have been compared with "exact calculations" (which, in principle, take into account all cluster effects) and with the coherent-potential approximation (CPA), which is the single-site TCA. The density of states for the pair TCA clearly shows some improvement over the CPA and yet, apparently, the pair approximation distorts some of the features of the exact results.
Symmetry dependence of holograms for optical trapping
NASA Astrophysics Data System (ADS)
Curtis, Jennifer E.; Schmitz, Christian H. J.; Spatz, Joachim P.
2005-08-01
No iterative algorithm is necessary to calculate holograms for most holographic optical trapping patterns. Instead, holograms may be produced by a simple extension of the prisms-and-lenses method. This formulaic approach yields the same diffraction efficiency as iterative algorithms for any asymmetric or symmetric but nonperiodic pattern of points while requiring less calculation time. A slight spatial disordering of periodic patterns significantly reduces intensity variations between the different traps without extra calculation costs. Eliminating laborious hologram calculations should greatly facilitate interactive holographic trapping.
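A minimal sketch of the prisms-and-lenses construction reads as follows: each trap contributes a blazed-grating (prism) phase for lateral displacement plus a Fresnel-lens phase for axial displacement, and the hologram is the argument of the complex sum. All optical parameters below (wavelength, focal length, SLM pitch, trap positions) are illustrative assumptions.

```python
import numpy as np

# "Prisms-and-lenses" hologram: superpose a blazed grating and a Fresnel
# lens phase per trap, then keep only the argument of the complex sum.
N = 512                                  # SLM resolution (pixels), assumed
pitch = 15e-6                            # pixel pitch [m], assumed
lam, f = 1.064e-6, 0.2                   # wavelength, focal length [m], assumed
u = (np.arange(N) - N // 2) * pitch
U, V = np.meshgrid(u, u)

traps = [(-20e-6, 0.0, 0.0), (20e-6, 10e-6, 5e-6)]   # (x, y, z) per trap [m]
field = np.zeros((N, N), dtype=complex)
for x, y, z in traps:
    prism = 2 * np.pi * (x * U + y * V) / (lam * f)  # lateral displacement
    lens = np.pi * z * (U**2 + V**2) / (lam * f**2)  # axial displacement
    field += np.exp(1j * (prism + lens))

hologram = np.angle(field)               # phase-only pattern in [-pi, pi]
print(hologram.shape, hologram.min(), hologram.max())
```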
Deep learning methods to guide CT image reconstruction and reduce metal artifacts
NASA Astrophysics Data System (ADS)
Gjesteby, Lars; Yang, Qingsong; Xi, Yan; Zhou, Ye; Zhang, Junping; Wang, Ge
2017-03-01
The rapidly rising field of machine learning, including deep learning, has inspired applications across many disciplines. In medical imaging, deep learning has been primarily used for image processing and analysis. In this paper, we integrate a convolutional neural network (CNN) into the computed tomography (CT) image reconstruction process. Our first task is to monitor the quality of CT images during iterative reconstruction and decide when to stop the process according to an intelligent numerical observer instead of using a traditional stopping rule, such as a fixed error threshold or a maximum number of iterations. After training on ground truth images, the CNN was successful in guiding an iterative reconstruction process to yield high-quality images. Our second task is to improve a sinogram to correct for artifacts caused by metal objects. A large number of interpolation and normalization-based schemes were introduced for metal artifact reduction (MAR) over the past four decades. The normalized MAR (NMAR) algorithm is considered a state-of-the-art method, although residual errors often remain in the reconstructed images, especially in cases of multiple metal objects. Here we merge NMAR with deep learning in the projection domain to achieve additional correction in critical image regions. Our results indicate that deep learning can be a viable tool to address CT reconstruction challenges.
LoCoH: Nonparametric kernel methods for constructing home ranges and utilization distributions
Getz, Wayne M.; Fortmann-Roe, Scott; Cross, Paul C.; Lyons, Andrew J.; Ryan, Sadie J.; Wilmers, Christopher C.
2007-01-01
Parametric kernel methods currently dominate the literature regarding the construction of animal home ranges (HRs) and utilization distributions (UDs). These methods frequently fail to capture the kinds of hard boundaries common to many natural systems. Recently a local convex hull (LoCoH) nonparametric kernel method, which generalizes the minimum convex polygon (MCP) method, was shown to be more appropriate than parametric kernel methods for constructing HRs and UDs, because of its ability to identify hard boundaries (e.g., rivers, cliff edges) and convergence to the true distribution as sample size increases. Here we extend the LoCoH method in two ways: a "fixed sphere-of-influence," or r-LoCoH (kernels constructed from all points within a fixed radius r of each reference point), and an "adaptive sphere-of-influence," or a-LoCoH (kernels constructed from all points within a radius a such that the distances of all points within the radius to the reference point sum to a value less than or equal to a), and compare them to the original "fixed-number-of-points," or k-LoCoH (all kernels constructed from the k-1 nearest neighbors of root points). We also compare these nonparametric LoCoH methods to parametric kernel methods using manufactured data and data collected from GPS collars on African buffalo in the Kruger National Park, South Africa. Our results demonstrate that LoCoH methods are superior to parametric kernel methods in estimating areas used by animals, excluding unused areas (holes) and, generally, in constructing UDs and HRs arising from the movement of animals influenced by hard boundaries and irregular structures (e.g., rocky outcrops). We also demonstrate that a-LoCoH is generally superior to k- and r-LoCoH (with software for all three methods available at http://locoh.cnr.berkeley.edu).
Learning Efficient Sparse and Low Rank Models.
Sprechmann, P; Bronstein, A M; Sapiro, G
2015-09-01
Parsimony, including sparsity and low rank, has been shown to successfully model data in numerous machine learning and signal processing tasks. Traditionally, such modeling approaches rely on an iterative algorithm that minimizes an objective function with parsimony-promoting terms. The inherently sequential structure and data-dependent complexity and latency of iterative optimization constitute a major limitation in many applications requiring real-time performance or involving large-scale data. Another limitation encountered by these modeling techniques is the difficulty of their inclusion in discriminative learning scenarios. In this work, we propose to move the emphasis from the model to the pursuit algorithm, and develop a process-centric view of parsimonious modeling, in which a learned deterministic fixed-complexity pursuit process is used in lieu of iterative optimization. We show a principled way to construct learnable pursuit process architectures for structured sparse and robust low rank models, derived from the iteration of proximal descent algorithms. These architectures learn to approximate the exact parsimonious representation at a fraction of the complexity of the standard optimization methods. We also show that appropriate training regimes allow parsimonious models to be naturally extended to discriminative settings. State-of-the-art results are demonstrated on several challenging problems in image and audio processing with several orders of magnitude speed-up compared to the exact optimization algorithms.
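The core idea of replacing iterative optimization with a fixed-complexity unrolled process can be sketched with plain ISTA unrolled to a fixed depth; in a LISTA-style learned pursuit the matrices W, S and the threshold would be trained, whereas in this assumption-laden sketch they are simply initialized from the dictionary.

```python
import numpy as np

def soft(x, theta):
    """Soft-thresholding, the proximal operator of the l1 norm."""
    return np.sign(x) * np.maximum(np.abs(x) - theta, 0.0)

def unrolled_ista(D, y, depth=100, lam=0.1):
    """Fixed-depth unrolled ISTA: z <- soft(W y + S z, theta).
    W, S, theta are initialized from D as in plain ISTA; learning them
    (LISTA) would approximate the sparse code with far fewer layers."""
    L = np.linalg.norm(D, 2) ** 2          # Lipschitz constant of grad step
    W = D.T / L
    S = np.eye(D.shape[1]) - (D.T @ D) / L
    theta = lam / L
    z = np.zeros(D.shape[1])
    for _ in range(depth):                 # fixed complexity: 'depth' layers
        z = soft(W @ y + S @ z, theta)
    return z

# toy usage: recover the support of a sparse code from noisy measurements
rng = np.random.default_rng(0)
D = rng.standard_normal((30, 60))
D /= np.linalg.norm(D, axis=0)
z_true = np.zeros(60)
z_true[[3, 17, 42]] = [1.0, -0.5, 0.8]
y = D @ z_true + 0.01 * rng.standard_normal(30)
print(np.nonzero(np.round(unrolled_ista(D, y, depth=200), 2))[0])
```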
A Novel Real-Time Reference Key Frame Scan Matching Method
Mohamed, Haytham; Moussa, Adel; Elhabiby, Mohamed; El-Sheimy, Naser; Sesay, Abu
2017-01-01
Unmanned aerial vehicles represent an effective technology for indoor search and rescue operations. Typically, most indoor mission environments are unknown, unstructured, and/or dynamic. Navigation of UAVs in such environments is addressed by a simultaneous localization and mapping approach using either local or global methods. Both approaches suffer from accumulated errors and high processing time due to the iterative nature of the scan-matching method. Moreover, point-to-point scan matching is prone to outlier association. This paper proposes a low-cost novel method for 2D real-time scan matching based on a reference key frame (RKF). RKF is a hybrid scan-matching technique comprising feature-to-feature and point-to-point approaches. The algorithm aims at mitigating error accumulation using the key frame technique, which is inspired by the video streaming broadcast process. The algorithm falls back on the iterative closest point algorithm in the absence of linear features, as is typical of unstructured environments, and switches back to the RKF once linear features are detected. To validate and evaluate the algorithm, the mapping performance and time consumption are compared with various algorithms in static and dynamic environments. The performance of the algorithm exhibits promising navigation and mapping results and very short computational time, which indicates the potential use of the new algorithm in real-time systems. PMID:28481285
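For context, a bare-bones point-to-point ICP, the baseline the RKF method falls back on, can be written as nearest-neighbour association followed by a closed-form (SVD/Kabsch) rigid update; the brute-force matching and toy data below are simplifying assumptions, not the paper's implementation.

```python
import numpy as np

def best_rigid_2d(P, Q):
    """Least-squares rotation R and translation t mapping P onto Q (Kabsch)."""
    cP, cQ = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cP).T @ (Q - cQ)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, cQ - R @ cP

def icp_2d(P, Q, n_iter=30):
    """Point-to-point ICP: nearest-neighbour association + rigid update."""
    P = P.copy()
    for _ in range(n_iter):
        d = np.linalg.norm(P[:, None, :] - Q[None, :, :], axis=2)
        matches = Q[np.argmin(d, axis=1)]   # brute-force nearest neighbours
        R, t = best_rigid_2d(P, matches)
        P = P @ R.T + t
    return P

# toy usage: undo a known rotation + translation of the same point set
rng = np.random.default_rng(0)
Q = rng.random((100, 2))
theta = 0.3
R = np.array([[np.cos(theta), -np.sin(theta)], [np.sin(theta), np.cos(theta)]])
P = (Q - np.array([0.2, 0.1])) @ R.T
print(np.abs(icp_2d(P, Q) - Q).max())       # residual misalignment
```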
NASA Astrophysics Data System (ADS)
Trujillo Bueno, J.; Fabiani Bendicho, P.
1995-12-01
Iterative schemes based on Gauss-Seidel (G-S) and optimal successive over-relaxation (SOR) iteration are shown to provide a dramatic increase in the speed with which non-LTE radiation transfer (RT) problems can be solved. The convergence rates of these new RT methods are identical to those of upper triangular nonlocal approximate operator splitting techniques, but the computing time per iteration and the memory requirements are similar to those of a local operator splitting method. In addition to these properties, both methods are particularly suitable for multidimensional geometry, since they neither require the actual construction of nonlocal approximate operators nor the application of any matrix inversion procedure. Compared with the currently used Jacobi technique, which is based on the optimal local approximate operator (see Olson, Auer, & Buchler 1986), the G-S method presented here is faster by a factor 2. It gives excellent smoothing of the high-frequency error components, which makes it the iterative scheme of choice for multigrid radiative transfer. This G-S method can also be suitably combined with standard acceleration techniques to achieve even higher performance. Although the convergence rate of the optimal SOR scheme developed here for solving non-LTE RT problems is much higher than G-S, the computing time per iteration is also minimal, i.e., virtually identical to that of a local operator splitting method. While the conventional optimal local operator scheme provides the converged solution after a total CPU time (measured in arbitrary units) approximately equal to the number n of points per decade of optical depth, the time needed by this new method based on the optimal SOR iterations is only √n/(2√2). This method is competitive with those that result from combining the above-mentioned Jacobi and G-S schemes with the best acceleration techniques. Contrary to what happens with the local operator splitting strategy currently in use, these novel methods remain effective even under extreme non-LTE conditions in very fine grids.
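The Gauss-Seidel and SOR building blocks can be illustrated on an ordinary linear system; the sketch below uses a 1D Poisson matrix, for which the optimal over-relaxation parameter is known in closed form, and is only an analogy for the radiative-transfer operators discussed above.

```python
import numpy as np

def sor_solve(A, b, omega=1.0, tol=1e-10, max_iter=10000):
    """Successive over-relaxation; omega = 1 reduces to Gauss-Seidel."""
    x = np.zeros_like(b)
    n = len(b)
    for k in range(max_iter):
        x_old = x.copy()
        for i in range(n):
            sigma = A[i, :i] @ x[:i] + A[i, i + 1:] @ x_old[i + 1:]
            x[i] = (1 - omega) * x_old[i] + omega * (b[i] - sigma) / A[i, i]
        if np.linalg.norm(x - x_old, np.inf) < tol:
            return x, k + 1
    return x, max_iter

# 1D Poisson test matrix: SOR with a good omega needs far fewer sweeps
n = 50
A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
b = np.ones(n)
omega_opt = 2 / (1 + np.sin(np.pi / (n + 1)))   # optimal omega for this matrix
_, it_gs = sor_solve(A, b, omega=1.0)
_, it_sor = sor_solve(A, b, omega=omega_opt)
print(it_gs, it_sor)   # SOR converges in far fewer iterations than G-S
```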
Gaussian mixed model in support of semiglobal matching leveraged by ground control points
NASA Astrophysics Data System (ADS)
Ma, Hao; Zheng, Shunyi; Li, Chang; Li, Yingsong; Gui, Li
2017-04-01
Semiglobal matching (SGM) has been widely applied to large aerial images because of its good tradeoff between complexity and robustness. The concept of ground control points (GCPs) is adopted to make SGM more robust. We model the effect of GCPs as two data terms for stereo matching between high-resolution aerial epipolar images in an iterative scheme. One term, based on GCPs, is formulated by a Gaussian mixture model, which strengthens the relation between the GCPs and the pixels to be estimated and encodes some degree of consistency between them with respect to disparity values. The other term depends on pixel-wise confidence, and we further design a confidence-updating equation based on three rules. With this confidence-based term, the assignment of disparity can be heuristically selected among the disparity search ranges during the iteration process. Several iterations are sufficient to produce satisfactory results according to our experiments. Experimental results validate that the proposed method outperforms surface reconstruction, which is a representative variant of SGM and performs excellently on aerial images.
NASA Technical Reports Server (NTRS)
Hubeny, I.; Lanz, T.
1995-01-01
A new numerical method for computing non-Local Thermodynamic Equilibrium (non-LTE) model stellar atmospheres is presented. The method, called the hybrid complete linearization/accelerated lambda iteration (CL/ALI) method, combines the advantages of both its constituents. Its rate of convergence is virtually as high as for the standard CL method, while the computer time per iteration is almost as low as for the standard ALI method. The method is formulated as the standard complete linearization, the only difference being that the radiation intensity at selected frequency points is not explicitly linearized; instead, it is treated by means of the ALI approach. The scheme offers a wide spectrum of options, ranging from the full CL to the full ALI method. We demonstrate that the method works optimally if the majority of frequency points are treated in the ALI mode, while the radiation intensity at a few (typically two to 30) frequency points is explicitly linearized. We show how this method can be applied to calculate metal line-blanketed non-LTE model atmospheres by using the idea of 'superlevels' and 'superlines' introduced originally by Anderson (1989). We calculate several illustrative models taking into account several tens of thousands of lines of Fe III to Fe IV and show that the hybrid CL/ALI method provides a robust method for calculating non-LTE line-blanketed model atmospheres for a wide range of stellar parameters. The results for individual stellar types will be presented in subsequent papers in this series.
Iterative Minimum Variance Beamformer with Low Complexity for Medical Ultrasound Imaging.
Deylami, Ali Mohades; Asl, Babak Mohammadzadeh
2018-06-04
The minimum variance beamformer (MVB) improves the resolution and contrast of medical ultrasound images compared with the delay and sum (DAS) beamformer. The weight vector of this beamformer must be calculated for each imaging point independently, at the cost of increased computational complexity. The large number of necessary calculations limits the application of this beamformer in real-time systems. A beamformer is proposed based on the MVB with lower computational complexity while preserving its advantages. This beamformer avoids matrix inversion, which is the most complex part of the MVB, by solving the optimization problem iteratively. The received signals from two imaging points close together do not vary much in medical ultrasound imaging. Therefore, using the previously optimized weight vector for one point as the initial weight vector for a neighboring point can improve the convergence speed and decrease the computational complexity. The proposed method was applied to several data sets, and it has been shown that the method can regenerate the results obtained by the MVB while the order of complexity is decreased from O(L^3) to O(L^2).
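One way to realize such an inversion-free, warm-startable iteration (not necessarily the authors' exact update) is projected gradient descent on the MV objective w^H R w subject to the distortionless constraint w^H a = 1, with the previous point's weights as the starting guess:

```python
import numpy as np

def mvb_iterative(R, a, w0=None, mu=0.1, n_iter=1000):
    """Projected-gradient minimization of w^H R w subject to w^H a = 1.
    Avoids the matrix inversion of the closed-form MVB; warm-starting w0
    from a neighbouring imaging point would cut the iteration count."""
    w = a / (a.conj() @ a) if w0 is None else w0.copy()
    for _ in range(n_iter):
        w = w - mu * (R @ w)                             # gradient step
        w = w + a * (1 - a.conj() @ w) / (a.conj() @ a)  # re-impose w^H a = 1
    return w

# toy usage: compare against the closed-form MVB weights
rng = np.random.default_rng(0)
L = 16
X = rng.standard_normal((L, 200))
R = X @ X.T / 200 + 1e-3 * np.eye(L)    # sample covariance + diagonal loading
a = np.ones(L)                          # steering vector (broadside, assumed)
w_it = mvb_iterative(R, a)
w_cf = np.linalg.solve(R, a)
w_cf /= a @ w_cf                        # closed-form R^{-1}a / (a^H R^{-1} a)
print(np.abs(w_it - w_cf).max())
```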
Improved image decompression for reduced transform coding artifacts
NASA Technical Reports Server (NTRS)
Orourke, Thomas P.; Stevenson, Robert L.
1994-01-01
The perceived quality of images reconstructed from low bit rate compression is severely degraded by the appearance of transform coding artifacts. This paper proposes a method for producing higher quality reconstructed images based on a stochastic model for the image data. Quantization (scalar or vector) partitions the transform coefficient space and maps all points in a partition cell to a representative reconstruction point, usually taken as the centroid of the cell. The proposed image estimation technique selects the reconstruction point within the quantization partition cell which results in a reconstructed image which best fits a non-Gaussian Markov random field (MRF) image model. This approach results in a convex constrained optimization problem which can be solved iteratively. At each iteration, the gradient projection method is used to update the estimate based on the image model. In the transform domain, the resulting coefficient reconstruction points are projected to the particular quantization partition cells defined by the compressed image. Experimental results will be shown for images compressed using scalar quantization of block DCT and using vector quantization of subband wavelet transform. The proposed image decompression provides a reconstructed image with reduced visibility of transform coding artifacts and superior perceived quality.
Jaddi, Najmeh Sadat; Abdullah, Salwani; Abdul Malek, Marlinda
2017-01-01
Artificial neural networks (ANNs) have been employed to solve a broad variety of tasks. The selection of an ANN model with appropriate weights is important in achieving accurate results. This paper presents an optimization strategy for ANN model selection based on the cuckoo search (CS) algorithm, which is rooted in the obligate brood parasitic actions of some cuckoo species. In order to enhance the convergence ability of basic CS, some modifications are proposed. The fraction Pa of the n nests replaced by new nests is a fixed parameter in basic CS. As the selection of Pa is a challenging issue and has a direct effect on exploration and therefore on convergence ability, in this work the Pa is set to a maximum value at initialization to achieve more exploration in early iterations and it is decreased during the search to achieve more exploitation in later iterations until it reaches the minimum value in the final iteration. In addition, a novel master-leader-slave multi-population strategy is used where the slaves employ the best fitness function among all slaves, which is selected by the leader under a certain condition. This fitness function is used for subsequent Lévy flights. In each iteration a copy of the best solution of each slave is migrated to the master and then the best solution is found by the master. The method is tested on benchmark classification and time series prediction problems and the statistical analysis proves the ability of the method. This method is also applied to a real-world water quality prediction problem with promising results. PMID:28125609
NASA Astrophysics Data System (ADS)
Eck, Brendan; Fahmi, Rachid; Brown, Kevin M.; Raihani, Nilgoun; Wilson, David L.
2014-03-01
Model observers were created and compared to human observers for the detection of low-contrast targets in computed tomography (CT) images reconstructed with an advanced, knowledge-based, iterative image reconstruction method for low x-ray dose imaging. A 5-channel Laguerre-Gauss Hotelling Observer (CHO) was used with internal noise added to the decision variable (DV) and/or channel outputs (CO). Models were defined by parameters: (k1) DV-noise with standard deviation (std) proportional to the DV std; (k2) DV-noise with constant std; (k3) CO-noise with constant std across channels; and (k4) CO-noise in each channel with std proportional to the CO variance. Four-alternative forced choice (4AFC) human observer studies were performed on sub-images extracted from phantom images with and without a "pin" target. Model parameters were estimated using maximum likelihood comparison to human probability correct (PC) data. PC in human and all model observers increased with dose, contrast, and size, and was much higher for advanced iterative reconstruction (IMR) than for filtered back projection (FBP). Detection in IMR was better than FBP at 1/3 dose, suggesting significant dose savings. Model(k1,k2,k3,k4) gave the best overall fit to humans across the independent variables (dose, size, contrast, and reconstruction) at a fixed display window. However, Model(k1) performed better when considering model complexity using the Akaike information criterion. Model(k1) fit the extraordinary detectability difference between IMR and FBP, despite the different noise quality. It is anticipated that the model observer will predict results from iterative reconstruction methods having similar noise characteristics, enabling rapid comparison of methods.
NASA Astrophysics Data System (ADS)
Pearce, Jonathan V.; Gisby, John A.; Steur, Peter P. M.
2016-08-01
A knowledge of the effect of impurities at the level of parts per million on the freezing temperature of very pure metals is essential for realisation of ITS-90 fixed points. New information has become available for use with the thermodynamic modelling software MTDATA, permitting calculation of liquidus slopes, in the low concentration limit, of a wider range of binary alloy systems than was previously possible. In total, calculated values for 536 binary systems are given. In addition, new experimental determinations of phase diagrams, in the low impurity concentration limit, have recently appeared. All available data have been combined to provide a comprehensive set of liquidus slopes for impurities in ITS-90 metal fixed points. In total, liquidus slopes for 838 systems are tabulated for the fixed points Hg, Ga, In, Sn, Zn, Al, Ag, Au, and Cu. It is shown that the value of the liquidus slope as a function of impurity element atomic number can be approximated using a simple formula, and good qualitative agreement with the existing data is observed for the fixed points Al, Ag, Au and Cu, but curiously the formula is not applicable to the fixed points Hg, Ga, In, Sn, and Zn. Some discussion is made concerning the influence of oxygen on the liquidus slopes, and some calculations using MTDATA are discussed. The BIPM’s consultative committee for thermometry has long recognised that the sum of individual estimates method is the ideal approach for assessing uncertainties due to impurities, but the community has been largely powerless to use the model due to lack of data. Here, not only is data provided, but a simple model is given to enable known thermophysical data to be used directly to estimate impurity effects for a large fraction of the ITS-90 fixed points.
Feng, Zhao; Ling, Jie; Ming, Min; Xiao, Xiao-Hui
2017-08-01
For precision motion, high-bandwidth and flexible tracking are the two important issues for significant performance improvement. Iterative learning control (ILC) is an effective feedforward control method, but only for systems that operate strictly repetitively. Although projection ILC can track varying references, its performance is still limited by the fixed-bandwidth Q-filter, especially for the triangular waves commonly used to drive a piezo nanopositioner. In this paper, a wavelet transform-based linear time-varying (LTV) Q-filter design for projection ILC is proposed to compensate high-frequency errors and improve the ability to track varying references simultaneously. The LTV Q-filter is designed based on the modulus maxima of the wavelet detail coefficients calculated by the wavelet transform to determine the high-frequency locations of each iteration, with the advantages of avoiding cross-terms and manual segmentation. The proposed approach was verified on a piezo nanopositioner. Experimental results indicate that the proposed approach can locate the high-frequency regions accurately and achieve the best performance under varying references compared with traditional frequency-domain ILC and projection ILC with a fixed-bandwidth Q-filter, which validates that, by implementing the LTV filter in projection ILC, high-bandwidth and flexible tracking can be achieved simultaneously.
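The underlying ILC recursion u_{k+1} = Q(u_k + L e_k) can be sketched as below; for brevity the Q-filter here is a fixed FFT low-pass, which is exactly the limitation the paper's wavelet-based time-varying Q-filter is designed to remove. The first-order plant, gains, and cutoff are illustrative assumptions.

```python
import numpy as np

def plant(u, a=0.7):
    """Toy first-order lag, y[k] = a*y[k-1] + (1-a)*u[k-1] (assumed model)."""
    y = np.zeros_like(u)
    for k in range(1, u.size):
        y[k] = a * y[k - 1] + (1 - a) * u[k - 1]
    return y

def ilc_trial(u, y_ref, L_gain=0.8, q_cutoff=0.05):
    """One ILC trial: u_{k+1} = Q(u_k + L*e_k), Q = fixed FFT low-pass."""
    e = y_ref - plant(u)
    u_new = u + L_gain * e
    U = np.fft.rfft(u_new)
    f = np.fft.rfftfreq(u_new.size)      # cycles/sample
    U[f > q_cutoff] = 0.0                # zero out high-frequency content
    return np.fft.irfft(U, n=u_new.size), e

t = np.linspace(0.0, 1.0, 256)
y_ref = np.abs(2 * t - 1)                # triangular reference
u = np.zeros_like(t)
for k in range(30):                      # error shrinks trial over trial
    u, e = ilc_trial(u, y_ref)
print(np.max(np.abs(e)))                 # residual limited by the Q cutoff
```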
Method of controlling chaos in laser equations
NASA Astrophysics Data System (ADS)
Duong-van, Minh
1993-01-01
A method of controlling chaotic flow to laminar flow in the Lorenz equations, using fixed points dictated by minimizing the Lyapunov functional, was proposed by Singer, Wang, and Bau [Phys. Rev. Lett. 66, 1123 (1991)]. Using different fixed points, we find that the solutions in a chaotic regime can also be periodic. Since the laser equations are isomorphic to the Lorenz equations, we use this method to control chaos when the laser is operated over the pump threshold. Furthermore, by solving the laser equations with an occasional proportional feedback mechanism, we recover the essential laser-controlling features experimentally discovered by Roy, Murphy, Jr., Maier, Gills, and Hunt [Phys. Rev. Lett. 68, 1259 (1992)].
NASA Astrophysics Data System (ADS)
Wang, Q.; Alfalou, A.; Brosseau, C.
2016-04-01
Here, we report a brief review of the recent developments in correlation algorithms. Several implementation schemes and specific applications proposed in recent years are also given to illustrate powerful applications of these methods. Following a discussion and comparison of the implementation of these schemes, we believe that all-numerical implementation is the most practical choice for applying the correlation method, because the advantages of optical processing cannot compensate for the technical and/or financial cost of an optical implementation platform. We also present a simple iterative algorithm to optimize the training images of composite correlation filters. By making use of three or four iterations, the peak-to-correlation energy (PCE) value of the correlation plane can be significantly enhanced. A simulation test using the Pointing Head Pose Image Database (PHPID) illustrates the effectiveness of this statement. Our method can be applied in many composite filters based on linear combinations of training images as an optimization means.
Fast sweeping method for the factored eikonal equation
NASA Astrophysics Data System (ADS)
Fomel, Sergey; Luo, Songting; Zhao, Hongkai
2009-09-01
We develop a fast sweeping method for the factored eikonal equation. The solution of a general eikonal equation is decomposed as the product of two factors: the first factor is the solution to a simple eikonal equation (such as distance) or a previously computed solution to an approximate eikonal equation; the second factor is a necessary modification/correction. Appropriate discretization and a fast sweeping strategy are designed for the equation of the correction part. The key idea is to enforce the causality of the original eikonal equation during the Gauss-Seidel iterations. Using extensive numerical examples we demonstrate that (1) the convergence behavior of the fast sweeping method for the factored eikonal equation is the same as for the original eikonal equation, i.e., the number of Gauss-Seidel iterations is independent of the mesh size, and (2) the numerical solution from the factored eikonal equation is more accurate than the numerical solution computed directly from the original eikonal equation, especially for point sources.
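A compact 2D version of the (unfactored) fast sweeping scheme is given below: four alternating Gauss-Seidel sweep orderings with the standard upwind update. For a point source the first-order error concentrates near the source, which is precisely what the factored formulation is designed to cure; the grid size and constant slowness field are arbitrary assumptions.

```python
import numpy as np

def fast_sweep_eikonal(f, h, src, n_sweeps=8):
    """Fast sweeping for |grad T| = f on a 2D grid: Gauss-Seidel updates in
    four alternating sweep orderings enforce the causality of the equation."""
    n, m = f.shape
    T = np.full((n, m), np.inf)
    T[src] = 0.0
    orders = [(range(n), range(m)),
              (range(n - 1, -1, -1), range(m)),
              (range(n), range(m - 1, -1, -1)),
              (range(n - 1, -1, -1), range(m - 1, -1, -1))]
    for s in range(n_sweeps):
        rows, cols = orders[s % 4]
        for i in rows:
            for j in cols:
                a = min(T[i - 1, j] if i > 0 else np.inf,
                        T[i + 1, j] if i < n - 1 else np.inf)
                b = min(T[i, j - 1] if j > 0 else np.inf,
                        T[i, j + 1] if j < m - 1 else np.inf)
                if not np.isfinite(min(a, b)):
                    continue
                fh = f[i, j] * h
                if abs(a - b) >= fh:        # one-sided (upwind) update
                    t = min(a, b) + fh
                else:                       # two-sided quadratic update
                    t = 0.5 * (a + b + np.sqrt(2 * fh * fh - (a - b) ** 2))
                if t < T[i, j]:
                    T[i, j] = t
    return T

# point source with unit slowness: the exact T is the Euclidean distance
n = 101
h = 1.0 / (n - 1)
T = fast_sweep_eikonal(np.ones((n, n)), h, src=(n // 2, n // 2))
x = np.linspace(0.0, 1.0, n)
X, Y = np.meshgrid(x, x, indexing="ij")
print(np.abs(T - np.hypot(X - 0.5, Y - 0.5)).max())  # error largest near source
```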
Precise and fast spatial-frequency analysis using the iterative local Fourier transform.
Lee, Sukmock; Choi, Heejoo; Kim, Dae Wook
2016-09-19
The use of the discrete Fourier transform has decreased since the introduction of the fast Fourier transform (fFT), which is a numerically efficient computing process. This paper presents the iterative local Fourier transform (ilFT), a set of new processing algorithms that iteratively apply the discrete Fourier transform within a local and optimal frequency domain. The new technique achieves 2^10 times higher frequency resolution than the fFT within a comparable computation time. The method's superb computing efficiency, high resolution, spectrum zoom-in capability, and overall performance are evaluated and compared to other advanced high-resolution Fourier transform techniques, such as the fFT combined with several fitting methods. The effectiveness of the ilFT is demonstrated through the data analysis of a set of Talbot self-images (1280 × 1024 pixels) obtained with an experimental setup using a grating in a diverging beam produced by a coherent point source.
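The abstract does not give the ilFT update rule; the sketch below shows one plausible reading, where the discrete Fourier transform is re-evaluated on a locally refined frequency grid around the current peak and the window is halved at each level, so ten levels refine the fFT bin spacing by about 2^10. The window policy and grid size are assumptions:

```python
import numpy as np

def local_dft(x, freqs, fs):
    """Evaluate the DTFT of x at arbitrary frequencies (Hz)."""
    n = np.arange(len(x))
    return np.exp(-2j * np.pi * np.outer(freqs, n) / fs) @ x

def ilft_peak(x, fs, levels=10, n_local=32):
    """Iteratively zoom the DFT around the strongest spectral peak."""
    spec = np.fft.rfft(x)
    f = np.fft.rfftfreq(len(x), 1.0 / fs)
    f0, df = f[np.argmax(np.abs(spec))], f[1] - f[0]
    for _ in range(levels):
        local = np.linspace(f0 - df, f0 + df, n_local)   # local grid
        f0 = local[np.argmax(np.abs(local_dft(x, local, fs)))]
        df /= 2.0                                        # shrink the window
    return f0

fs = 1000.0
t = np.arange(4096) / fs
x = np.sin(2 * np.pi * 123.4567 * t)
print(ilft_peak(x, fs))   # close to 123.4567 Hz
```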
Point cloud registration from local feature correspondences-Evaluation on challenging datasets.
Petricek, Tomas; Svoboda, Tomas
2017-01-01
Registration of laser scans, or point clouds in general, is a crucial step of localization and mapping with mobile robots or in object modeling pipelines. A coarse alignment of the point clouds is generally needed before applying local methods such as the Iterative Closest Point (ICP) algorithm. We propose a feature-based approach to point cloud registration and evaluate the proposed method and its individual components on challenging real-world datasets. For a moderate overlap between the laser scans, the method provides a superior registration accuracy compared to state-of-the-art methods including Generalized ICP, 3D Normal-Distribution Transform, Fast Point-Feature Histograms, and 4-Points Congruent Sets. Compared to the surface normals, the points as the underlying features yield higher performance in both keypoint detection and establishing local reference frames. Moreover, sign disambiguation of the basis vectors proves to be an important aspect in creating repeatable local reference frames. A novel method for sign disambiguation is proposed which yields highly repeatable reference frames.
2D and 3D registration methods for dual-energy contrast-enhanced digital breast tomosynthesis
NASA Astrophysics Data System (ADS)
Lau, Kristen C.; Roth, Susan; Maidment, Andrew D. A.
2014-03-01
Contrast-enhanced digital breast tomosynthesis (CE-DBT) uses an iodinated contrast agent to image the three-dimensional breast vasculature. The University of Pennsylvania is conducting a CE-DBT clinical study in patients with known breast cancers. The breast is compressed continuously and imaged at four time points (1 pre-contrast; 3 post-contrast). A hybrid subtraction scheme is proposed. First, dual-energy (DE) images are obtained by a weighted logarithmic subtraction of the high-energy and low-energy image pairs. Then, post-contrast DE images are subtracted from the pre-contrast DE image. This hybrid temporal subtraction of DE images is performed to analyze iodine uptake, but it suffers from motion artifacts. Image registration helps to correct for this motion, enhancing the evaluation of vascular kinetics. Registration using ANTS (Advanced Normalization Tools) is performed in an iterative manner. Mutual information optimization first corrects large-scale motion. Normalized cross-correlation optimization then iteratively corrects fine-scale misalignment. Two methods have been evaluated: a 2D method using a slice-by-slice approach, and a 3D method using a volumetric approach to account for out-of-plane breast motion. Our results demonstrate that the registration improves qualitatively with each iteration (five iterations total). Motion artifacts near the edge of the breast are corrected effectively, and structures within the breast (e.g., blood vessels, surgical clips) are better visualized. Statistical and clinical evaluations of registration accuracy in the CE-DBT images are ongoing.
[Treatment of calcaneal avulsion fractures with TwinFix suture anchor fixation].
Zhao, Bin-xiu; Wang, Kun-zheng; Wang, Chun-sheng; Xie, Yue; Dai, Zhi-tang; Liu, Gang; Liu, Wei-dong
2011-06-01
For calcaneal avulsion fractures, the most common current fixation methods use screws or Kirschner wires. This article explores the feasibility and clinical efficacy of treating these avulsion fractures with TwinFix suture anchors. From July 2007 to November 2010, 21 patients were reviewed, including 15 males and 6 females, ranging in age from 49 to 65 years, with a mean of 58.7 years. Twelve patients had the fracture in the right heel and 9 in the left heel. All the patients had closed fractures. The typical preoperative symptoms were pain in the upper heel and weakness in heel lift; physical examination found palpable bone crepitus at the back of the heel and swelling of the heel. Surgical treatment with TwinFix suture anchors was performed as follows: the TwinFix suture anchors were fixed into the calcaneal body, the fracture fragment was drilled, the double-strand sutures were passed through the drill holes and knotted to each other to fix the fragment, and the remaining suture ends were stitched into the Achilles tendon to reinforce the fixation. The American Orthopaedic Foot and Ankle Society (AOFAS) ankle-hindfoot score was used to evaluate functional recovery. The total average score was (95.5 +/- 3.12) points, including an average pain score of (38.5 +/- 2.18) points, an average function score of (49.5 +/- 3.09) points, and an alignment score of 10 points in all patients. Of the 21 patients, 16 obtained an excellent result and 5 a good one. Treatment of calcaneal avulsion fractures with TwinFix suture anchors is a simple operation with excellent clinical results, and is worthy of wider adoption.
NASA Astrophysics Data System (ADS)
Wang, Xu; Le, Anh-Thu; Zhou, Zhaoyan; Wei, Hui; Lin, C. D.
2017-08-01
We provide a unified theoretical framework for recently emerging experiments that retrieve fixed-in-space molecular information through time-domain rotational coherence spectroscopy. Unlike a previous approach by Makhija et al. (V. Makhija et al., arXiv:1611.06476), our method can be applied to the retrieval of both real-valued (e.g., ionization yield) and complex-valued (e.g., induced dipole moment) molecular response information. It is also a direct retrieval method without using iterations. We also demonstrate that experimental parameters, such as the fluence of the aligning laser pulse and the rotational temperature of the molecular ensemble, can be quite accurately determined using a statistical method.
Strategies for efficient resolution analysis in full-waveform inversion
NASA Astrophysics Data System (ADS)
Fichtner, A.; van Leeuwen, T.; Trampert, J.
2016-12-01
Full-waveform inversion is developing into a standard method in the seismological toolbox. It combines numerical wave propagation for heterogeneous media with adjoint techniques in order to improve tomographic resolution. However, resolution becomes increasingly difficult to quantify because of the enormous computational requirements. Here we present two families of methods that can be used for efficient resolution analysis in full-waveform inversion. They are based on the targeted extraction of resolution proxies from the Hessian matrix, which is too large to store and compute explicitly. Fourier methods rest on the application of the Hessian to Earth models with harmonic oscillations. This yields the Fourier spectrum of the Hessian for a few selected wavenumbers, from which we can extract properties of the tomographic point-spread function for any point in space. Random probing methods use uncorrelated, random test models instead of harmonic oscillations. Auto-correlating the Hessian-model applications for sufficiently many test models also characterises the point-spread function. Both Fourier and random probing methods provide a rich collection of resolution proxies. These include position- and direction-dependent resolution lengths, and the volume of point-spread functions as an indicator of amplitude recovery and inter-parameter trade-offs. The computational requirements of these methods are equivalent to approximately 7 conjugate-gradient iterations in full-waveform inversion. This is significantly less than the optimisation itself, which may require tens to hundreds of iterations to reach convergence. In addition to the theoretical foundations of the Fourier and random probing methods, we show various illustrative examples from real-data full-waveform inversion for crustal and mantle structure.
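A minimal sketch of the random probing idea, using matrix-free Hessian-vector products. The Rademacher probes and plain Hutchinson diagonal estimator are assumptions; the paper extracts richer point-spread-function proxies from the same Hessian-model applications:

```python
import numpy as np

def probe_hessian(hess_apply, n_model, n_probes=7, seed=0):
    """Estimate a resolution proxy, diag(H), by random probing.

    hess_apply: callable returning H @ m for a model vector m
    (the Hessian itself is never formed). E[v * (H v)] = diag(H)
    for Rademacher probes v with independent entries.
    """
    rng = np.random.default_rng(seed)
    acc = np.zeros(n_model)
    for _ in range(n_probes):
        v = rng.choice([-1.0, 1.0], size=n_model)   # random test model
        acc += v * hess_apply(v)                    # v * (H v), elementwise
    return acc / n_probes

# toy "Hessian": blurring operator whose diagonal we try to recover
n = 200
H = np.diag(np.linspace(1.0, 2.0, n)) + 0.01 * np.ones((n, n))
est = probe_hessian(lambda m: H @ m, n, n_probes=50)
print(np.allclose(est, np.diag(H), atol=0.2))
```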
NASA Astrophysics Data System (ADS)
Yeckel, Andrew; Lun, Lisa; Derby, Jeffrey J.
2009-12-01
A new, approximate block Newton (ABN) method is derived and tested for the coupled solution of nonlinear models, each of which is treated as a modular, black box. Such an approach is motivated by a desire to maintain software flexibility without sacrificing solution efficiency or robustness. Though block Newton methods of similar type have been proposed and studied, we present a unique derivation and use it to sort out some of the more confusing points in the literature. In particular, we show that our ABN method behaves like a Newton iteration preconditioned by an inexact Newton solver derived from subproblem Jacobians. The method is demonstrated on several conjugate heat transfer problems modeled after melt crystal growth processes. These problems are represented by partitioned spatial regions, each modeled by independent heat transfer codes and linked by temperature and flux matching conditions at the boundaries common to the partitions. Whereas a typical block Gauss-Seidel iteration fails about half the time for the model problem, quadratic convergence is achieved by the ABN method under all conditions studied here. Additional performance advantages over existing methods are demonstrated and discussed.
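A toy contrast between the two coupling strategies, with two hypothetical scalar black-box solvers standing in for the heat transfer codes. The finite-difference Newton iteration below is a generic matrix-free scheme, not the authors' ABN algorithm:

```python
import numpy as np

# Two "black-box" subproblem solvers, coupled through their outputs
# (hypothetical toy models standing in for the partitioned codes).
def solver1(x2): return np.cos(x2)          # black box 1
def solver2(x1): return 0.5 * np.sin(x1)    # black box 2

def gauss_seidel(x, n_it=50):
    """Plain block Gauss-Seidel coupling (the baseline that can fail)."""
    for _ in range(n_it):
        x = np.array([solver1(x[1]), solver2(x[0])])
    return x

def newton_coupling(x, n_it=20, eps=1e-7):
    """Newton on the coupling residual F(x) = x - G(x), with a
    finite-difference Jacobian built only from black-box calls."""
    def F(x): return x - np.array([solver1(x[1]), solver2(x[0])])
    for _ in range(n_it):
        Fx = F(x)
        if np.linalg.norm(Fx) < 1e-12:
            break
        J = np.empty((2, 2))
        for j in range(2):
            e = np.zeros(2); e[j] = eps
            J[:, j] = (F(x + e) - Fx) / eps   # one column per perturbation
        x = x - np.linalg.solve(J, Fx)
    return x

x = newton_coupling(np.zeros(2))
print(x, gauss_seidel(np.zeros(2)))   # both converge on this benign toy
```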
Multi-Maneuver Clohessy-Wiltshire Targeting
NASA Technical Reports Server (NTRS)
Dannemiller, David P.
2011-01-01
Orbital rendezvous involves execution of a sequence of maneuvers by a chaser vehicle to bring the chaser to a desired state relative to a target vehicle while meeting intermediate and final relative constraints. Intermediate and final relative constraints are necessary to meet a multitude of requirements such as to control approach direction, ensure relative position is adequate for operation of space-to-space communication systems and relative sensors, provide fail-safe trajectory features, and provide contingency hold points. The effect of maneuvers on constraints is often coupled, so the maneuvers must be solved for as a set. For example, maneuvers that affect orbital energy change both the chaser's height and downrange position relative to the target vehicle. Rendezvous designers use experience and rules-of-thumb to design a sequence of maneuvers and constraints. A non-iterative method is presented for targeting a rendezvous scenario that includes a sequence of maneuvers and relative constraints. This method is referred to as Multi-Maneuver Clohessy-Wiltshire Targeting (MM_CW_TGT). When a single maneuver is targeted to a single relative position, the classic CW targeting solution is obtained. The MM_CW_TGT method involves manipulation of the CW state transition matrix to form a linear system. As a starting point for forming the algorithm, the effects of a series of impulsive maneuvers on the state are derived. Simple and moderately complex examples are used to demonstrate the pattern of the resulting linear system. The general form of the pattern results in an algorithm for formation of the linear system. The resulting linear system relates the effect of maneuver components and initial conditions on relative constraints specified by the rendezvous designer. Solution of the linear system involves the straightforward inverse of a square matrix. Inversion of the square matrix is assured if the designer poses a controllable scenario - a scenario where the constraints can be met by the sequence of maneuvers. Matrices in the linear system are dependent on the selection of maneuvers and constraints by the designer, but the matrices are independent of the chaser's initial conditions. For scenarios where the sequence of maneuvers and constraints is fixed, the linear system can be formed and the square matrix inverted prior to real-time operations. Example solutions are presented for several rendezvous scenarios to illustrate the utility of the method. The MM_CW_TGT method has been used during the preliminary design of rendezvous scenarios and is expected to be useful for iterative methods in the generation of an initial guess and corrections.
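For reference, the classic single-maneuver Clohessy-Wiltshire targeting that MM_CW_TGT generalizes: the state transition matrix blocks relate an impulsive velocity change to the relative position reached after time t. The mean motion value and geometry below are assumptions:

```python
import numpy as np

def cw_stm(n, t):
    """Clohessy-Wiltshire state transition matrix blocks (x radial,
    y along-track, z cross-track), standard textbook form."""
    s, c = np.sin(n * t), np.cos(n * t)
    Prr = np.array([[4 - 3 * c, 0, 0],
                    [6 * (s - n * t), 1, 0],
                    [0, 0, c]])
    Prv = np.array([[s / n, 2 * (1 - c) / n, 0],
                    [-2 * (1 - c) / n, (4 * s - 3 * n * t) / n, 0],
                    [0, 0, s / n]])
    return Prr, Prv

def cw_target(r0, v0, rf, n, t):
    """Impulsive delta-v moving the chaser from (r0, v0) to relative
    position rf after time t. MM_CW_TGT stacks such relations for
    many maneuvers and constraints into one linear system."""
    Prr, Prv = cw_stm(n, t)
    v_req = np.linalg.solve(Prv, rf - Prr @ r0)   # required initial velocity
    return v_req - v0

n = 0.0011                                # mean motion, rad/s (~ISS, assumed)
r0 = np.array([0.0, -1000.0, 0.0])        # 1 km behind the target
dv = cw_target(r0, np.zeros(3), np.zeros(3), n, t=1800.0)
print("delta-v [m/s]:", dv)
```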
NASA Technical Reports Server (NTRS)
Cooke, C. H.; Blanchard, D. K.
1975-01-01
A finite element algorithm for the solution of fluid flow problems characterized by the two-dimensional compressible Navier-Stokes equations was developed. The program is intended for viscous compressible high-speed flow; hence, primitive variables are utilized. The physical solution was approximated by trial functions which at a fixed time are piecewise cubic on triangular elements. The Galerkin technique was employed to determine the finite-element model equations. A leapfrog time integration is used for marching asymptotically from initial to steady state, with iterated integrals evaluated by numerical quadratures. The nonsymmetric linear systems of equations governing the time transition from step to step are solved using a rather economical block iterative triangular decomposition scheme. The concept was applied to the numerical computation of a free shear flow. Numerical results of the finite-element method are in excellent agreement with those obtained from a finite difference solution of the same problem.
NASA Astrophysics Data System (ADS)
Rocco, E. M. R.; Prado, A. F. B. A. P.; Souza, M. L. O. S.
In this work, the problem of bi-impulsive orbital transfers between coplanar elliptical orbits with minimum fuel consumption, but with a time limit for the transfer, is studied. As a first method, the equations presented by Lawden (1993) were used. Those equations furnish the optimal fixed-time transfer orbit between two elliptical coplanar orbits considering fixed terminal points. The method was adapted to cases with free terminal points, and the equations were solved to develop a software package for orbital maneuvers. As a second method, the equations presented by Eckel and Vinh (1984) were used; those equations provide the transfer orbit between non-coplanar elliptical orbits with minimum fuel and fixed transfer time, or with minimum transfer time for a prescribed fuel consumption, considering free terminal points. In this work only the fixed-time transfer problem was considered; the case of minimum time for a prescribed fuel consumption was already studied in Rocco et al. (2000). The method was then modified to handle coplanar orbital transfers, and a second software package for orbital maneuvers was developed. Therefore, two software packages that solve the same problem using different methods were developed. The first method, presented by Lawden, uses primer vector theory. The second method, presented by Eckel and Vinh, uses the ordinary theory of maxima and minima. To test the methods we chose the same terminal orbits and the same transfer time as input, and found that the two methods do not give exactly the same results. In this work, which is an extension of Rocco et al. (2002), these differences in the results are explored with the objective of determining why they occur and which modifications would eliminate them.
Performance Analysis of Entropy Methods on K Means in Clustering Process
NASA Astrophysics Data System (ADS)
Dicky Syahputra Lubis, Mhd.; Mawengkang, Herman; Suwilo, Saib
2017-12-01
K Means is a non-hierarchical data clustering method that attempts to partition the data into one or more clusters/groups, so that data sharing the same characteristics are grouped into the same cluster and data with different characteristics are grouped into other clusters. The purpose of this clustering is to minimize an objective function defined over the clustering process, which generally attempts to minimize variation within a cluster and maximize the variation between clusters. However, a main disadvantage of the method is that the number k is often not known beforehand. Furthermore, randomly chosen starting points may place two initial centroids too close to each other. Therefore, the entropy method is used here to determine the starting points for K Means; it is a method for assigning weights and making a decision over a set of alternatives. Entropy quantifies the discriminating power of the criteria across the data: criteria with the highest variation receive the highest weight. The entropy method can thus assist the K Means process by supplying the starting points that are usually chosen at random, so that the clustering converges faster than standard K Means. On the postoperative-patient dataset from the UCI Machine Learning Repository, using only 12 records as a worked example, the entropy-based initialization reaches the desired end result in only 2 iterations.
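A sketch of the standard entropy-weight computation the abstract appeals to; how the resulting weights seed the K Means starting centroids follows the paper's procedure, which the abstract does not fully specify:

```python
import numpy as np

def entropy_weights(X):
    """Entropy weighting of features (criteria) in a data matrix X >= 0.

    Standard entropy-weight method: features whose values vary most
    across the records carry the most information and get the largest
    weights; a nearly constant feature gets a weight close to zero.
    """
    P = X / X.sum(axis=0, keepdims=True)          # column-wise proportions
    with np.errstate(divide="ignore", invalid="ignore"):
        plogp = np.where(P > 0, P * np.log(P), 0.0)
    e = -plogp.sum(axis=0) / np.log(X.shape[0])   # entropy per feature
    d = 1.0 - e                                   # degree of diversification
    return d / d.sum()

X = np.array([[85., 90., 1.], [60., 88., 5.], [75., 91., 9.], [90., 89., 2.]])
print(entropy_weights(X))   # the 3rd feature varies most -> largest weight
```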
NASA Astrophysics Data System (ADS)
Ngastiti, P. T. B.; Surarso, Bayu; Sutimin
2018-05-01
The transportation problem distributes a commodity or goods from supply points to demand points at minimum transportation cost. A fuzzy transportation problem is one in which the transport costs, supply, and demand are fuzzy quantities. In a case study at CV. Bintang Anugerah Elektrik, a company engaged in the manufacture of gensets that has more than one distributor, we use the zero point and zero suffix methods to find the minimum transportation cost. In implementing both methods, we use the robust ranking technique for the defuzzification process. The study results show that the zero suffix method needs fewer iterations than the zero point method.
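A sketch of the defuzzification step, assuming triangular fuzzy costs; the robust ranking integral reduces to (a + 2b + c)/4 for a triangular number (a, b, c). The zero point and zero suffix methods are manual tableau procedures, so a generic LP solver stands in for them here, and the cost data are hypothetical:

```python
import numpy as np
from scipy.optimize import linprog

def robust_rank(tri):
    """Robust ranking of a triangular fuzzy number (a, b, c):
    R = integral_0^1 0.5 * (lower_alpha + upper_alpha) d(alpha)
      = (a + 2b + c) / 4."""
    a, b, c = tri
    return (a + 2 * b + c) / 4.0

# hypothetical 2x3 fuzzy cost matrix; crisp supply/demand for brevity
fuzzy_costs = [[(1, 2, 3), (3, 4, 5), (8, 9, 10)],
               [(4, 5, 6), (6, 7, 8), (2, 3, 4)]]
supply, demand = [30.0, 40.0], [20.0, 25.0, 25.0]

c = np.array([[robust_rank(t) for t in row] for row in fuzzy_costs]).ravel()
# equality constraints: row sums = supply, column sums = demand
A_eq = np.zeros((5, 6))
for i in range(2): A_eq[i, 3 * i:3 * i + 3] = 1.0   # supply rows
for j in range(3): A_eq[2 + j, [j, 3 + j]] = 1.0    # demand rows
res = linprog(c, A_eq=A_eq, b_eq=supply + demand, method="highs")
print(res.x.reshape(2, 3), res.fun)
```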
Carbon fiber composites application in ITER plasma facing components
NASA Astrophysics Data System (ADS)
Barabash, V.; Akiba, M.; Bonal, J. P.; Federici, G.; Matera, R.; Nakamura, K.; Pacher, H. D.; Rödig, M.; Vieider, G.; Wu, C. H.
1998-10-01
Carbon Fiber Composites (CFCs) are one of the candidate armour materials for the plasma facing components of the International Thermonuclear Experimental Reactor (ITER). For the present reference design, CFC has been selected as the armour for the divertor target near the plasma strike point, mainly because of its unique resistance to high normal and off-normal heat loads: it does not melt under disruptions and may have a longer erosion lifetime than other possible armour materials. Issues related to CFC application in ITER are described in this paper, including erosion lifetime, tritium codeposition with eroded material and possible methods for removing the codeposited layers, neutron irradiation effects, development of joining technologies with heat sink materials, and thermomechanical performance. The status of the development of new advanced CFCs for ITER application is also described. Finally, the remaining R&D needs are critically discussed.
Wind Tunnel Strain-Gage Balance Calibration Data Analysis Using a Weighted Least Squares Approach
NASA Technical Reports Server (NTRS)
Ulbrich, N.; Volden, T.
2017-01-01
A new approach is presented that uses a weighted least squares fit to analyze wind tunnel strain-gage balance calibration data. The weighted least squares fit is specifically designed to increase the influence of single-component loadings during the regression analysis. The weighted least squares fit also reduces the impact of calibration load schedule asymmetries on the predicted primary sensitivities of the balance gages. A weighting factor between zero and one is assigned to each calibration data point that depends on a simple count of its intentionally loaded load components or gages. The greater the number of a data point's intentionally loaded load components or gages is, the smaller its weighting factor becomes. The proposed approach is applicable to both the Iterative and Non-Iterative Methods that are used for the analysis of strain-gage balance calibration data in the aerospace testing community. The Iterative Method uses a reasonable estimate of the tare corrected load set as input for the determination of the weighting factors. The Non-Iterative Method, on the other hand, uses gage output differences relative to the natural zeros as input for the determination of the weighting factors. Machine calibration data of a six-component force balance is used to illustrate benefits of the proposed weighted least squares fit. In addition, a detailed derivation of the PRESS residuals associated with a weighted least squares fit is given in the appendices of the paper as this information could not be found in the literature. These PRESS residuals may be needed to evaluate the predictive capabilities of the final regression models that result from a weighted least squares fit of the balance calibration data.
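A minimal sketch of the weighted fit, with an illustrative weighting rule: the paper defines its own factor in [0, 1] that decreases as more load components are intentionally loaded, while the inverse-count rule and toy calibration data below are assumptions:

```python
import numpy as np

def wls_fit(X, y, n_loaded):
    """Weighted least squares fit of balance calibration data.

    X: regressor matrix built from the applied loads, y: gage output,
    n_loaded: number of intentionally loaded components per data point.
    Single-component loadings get the largest weight, so they dominate
    the estimated primary sensitivities.
    """
    w = 1.0 / np.maximum(n_loaded, 1)           # illustrative weighting rule
    W = np.diag(w)
    beta = np.linalg.solve(X.T @ W @ X, X.T @ W @ y)
    return beta, w

# toy calibration: 2 load components, linear model with interaction term
rng = np.random.default_rng(1)
F = rng.uniform(-1, 1, size=(50, 2))
X = np.column_stack([F[:, 0], F[:, 1], F[:, 0] * F[:, 1]])
y = X @ np.array([2.0, -1.0, 0.3]) + 1e-3 * rng.standard_normal(50)
n_loaded = (np.abs(F) > 1e-12).sum(axis=1)
print(wls_fit(X, y, n_loaded)[0])
```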
SU-E-T-217: Intrinsic Respiratory Gating in Small Animal CT
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liu, Y; Smith, M; Mistry, N
Purpose: Preclinical animal models of lung cancer can provide a controlled test-bed for dose escalation or function-based treatment-planning studies. However, to extract lung function, i.e. ventilation, one needs to image the lung at different phases of ventilation (inhale/exhale). Most respiratory-gated micro-CT imaging involves an external ventilator and surgical intervention, limiting its utility in longitudinal studies. A new intrinsic retrospective respiratory gating method was developed and tested in mice. Methods: A fixed region of interest (ROI) covering the diaphragm was selected on all projection images to estimate the mean intensity (M), which depends on the projection angle and the diaphragm position. A 3-point moving average (A) of consecutive M values (Mpre, Mcurrent, and Mpost) was calculated and subtracted from Mcurrent. A fixed threshold was then used for amplitude-based sorting into 4 phases of respiration. Images at the full-inhale and end-exhale phases were reconstructed using the open source OSCaR. Lung volumes estimated at the 2 phases of respiration were validated against literature values. Results: Intrinsic retrospective gating was accomplished without the use of any external breathing waveform. While projection images were acquired at 360 different angles, only 138 and 104 projections were used to reconstruct the full-inhale and end-exhale images, respectively. This non-uniform angular under-sampling leads to some minor streaking artifacts. The calculated expiratory, inspiratory, and tidal lung volumes correlated well with values known from the literature. Conclusion: Our initial result demonstrates an intrinsic gating method suitable for flat-panel cone-beam small animal CT systems. Streaking artifacts can be reduced by oversampling the data or by using iterative reconstruction methods. This initial experience will enable free-breathing small animal micro-CT imaging to fuel longitudinal studies of lung function.
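A sketch of the sorting step as described (ROI mean, 3-point moving average subtraction, fixed threshold); the 4-phase binning boundaries and the synthetic breathing trace are assumptions:

```python
import numpy as np

def sort_phases(roi_mean, threshold):
    """Amplitude-based respiratory sorting of projection images.

    roi_mean: mean diaphragm-ROI intensity per projection angle. The
    3-point moving average tracks the slow angular dependence; its
    subtraction leaves the faster respiratory modulation.
    """
    m = np.asarray(roi_mean, dtype=float)
    avg = np.convolve(m, np.ones(3) / 3.0, mode="same")   # 3-point average
    resp = m - avg                                        # respiratory signal
    phases = np.empty(len(m), dtype=int)
    phases[resp >= threshold] = 0                         # full inhale
    phases[(resp >= 0.0) & (resp < threshold)] = 1
    phases[(resp < 0.0) & (resp > -threshold)] = 2
    phases[resp <= -threshold] = 3                        # end exhale
    return resp, phases

# synthetic trace: slow angular trend plus faster breathing modulation
angles = np.linspace(0.0, 2.0 * np.pi, 360)
roi = 100.0 + 5.0 * np.cos(angles) + 4.0 * np.sin(40.0 * angles)
resp, phases = sort_phases(roi, threshold=0.3)
print(np.bincount(phases))   # projections available per phase
```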
Dauwalter, D.C.; Fisher, W.L.; Belt, K.C.
2006-01-01
We tested the precision and accuracy of the Trimble GeoXT global positioning system (GPS) handheld receiver on point and area features and compared estimates of stream habitat dimensions (e.g., lengths and areas of riffles and pools) that were made in three different Oklahoma streams using the GPS receiver and a tape measure. The precision of differentially corrected GPS (DGPS) points was not affected by the number of GPS position fixes (i.e., geographic location estimates) averaged per DGPS point. Horizontal error of points ranged from 0.03 to 2.77 m and did not differ with the number of position fixes per point. The error of area measurements ranged from 0.1% to 110.1% but decreased as the area increased. Again, error was independent of the number of position fixes averaged per polygon corner. The estimates of habitat lengths, widths, and areas did not differ when measured using two methods of data collection (GPS and a tape measure), nor did the differences among methods change at three stream sites with contrasting morphologies. Measuring features with a GPS receiver was up to 3.3 times faster on average than using a tape measure, although signal interference from high streambanks or overhanging vegetation occasionally limited satellite signal availability and prolonged measurements with a GPS receiver. There were also no differences in precision of habitat dimensions when mapped using a continuous versus a position fix average GPS data collection method. Despite there being some disadvantages to using the GPS in stream habitat studies, measuring stream habitats with a GPS resulted in spatially referenced data that allowed the assessment of relative habitat position and changes in habitats over time, and was often faster than using a tape measure. For most spatial scales of interest, the precision and accuracy of DGPS data are adequate and have logistical advantages when compared to traditional methods of measurement. © 2006 Springer Science+Business Media, Inc.
First Monte Carlo analysis of fragmentation functions from single-inclusive e + e - annihilation
Sato, Nobuo; Ethier, J. J.; Melnitchouk, W.; ...
2016-12-02
Here, we perform the first iterative Monte Carlo (IMC) analysis of fragmentation functions constrained by all available data from single-inclusive $e^+ e^-$ annihilation into pions and kaons. The IMC method eliminates the potential bias that traditional analyses based on single fits introduce by fixing parameters not well constrained by the data, and provides a statistically rigorous determination of uncertainties. Our analysis reveals specific differences between the features of fragmentation functions obtained with the new IMC methodology and those obtained from previous analyses, especially for light quarks and for strange quark fragmentation to kaons.
Improved Real-Time Scan Matching Using Corner Features
NASA Astrophysics Data System (ADS)
Mohamed, H. A.; Moussa, A. M.; Elhabiby, M. M.; El-Sheimy, N.; Sesay, Abu B.
2016-06-01
The automation of unmanned vehicle operation has gained a lot of research attention in the last few years because of its numerous applications. Vehicle localization is more challenging in indoor environments, where absolute positioning measurements (e.g. GPS) are typically unavailable. Laser range finders are among the most widely used sensors that help unmanned vehicles localize themselves in indoor environments. Typically, automatic real-time matching of successive scans is performed either explicitly or implicitly by any localization approach that utilizes laser range finders. Many established approaches, such as Iterative Closest Point (ICP), Iterative Matching Range Point (IMRP), Iterative Dual Correspondence (IDC), and Polar Scan Matching (PSM), handle the scan matching problem in an iterative fashion, which significantly affects the time consumption; furthermore, convergence is not guaranteed, especially in cases of sharp maneuvers or fast movement. This paper proposes an automated real-time scan matching algorithm in which the matching process is initialized using detected corners. This initialization step aims to increase the convergence probability and to limit the number of iterations needed to reach convergence. The corner detection is preceded by line extraction from the laser scans. To evaluate the probability of line availability in indoor environments, various data sets offered by different research groups have been tested; the mean numbers of extracted lines per scan for these data sets range from 4.10 to 8.86 lines of more than 7 points. The set of all intersections between extracted lines is detected as corners, regardless of whether the line segments physically intersect in the scan. To account for the uncertainties of the detected corners, the covariance of the corners is estimated from the variances of the extracted lines. The detected corners are used to estimate the transformation parameters between successive scans using least squares, and these estimated parameters provide an adjusted initialization for the scan matching process. The presented method can be employed on its own to match successive scans, and it can also be used to aid other established iterative methods to achieve more effective and faster convergence. The performance and time consumption of the proposed approach are compared with the ICP algorithm without initialization in different scenarios, such as static periods, fast straight movement, and sharp maneuvers.
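A sketch of the least-squares transformation estimate from matched corners, using the standard SVD (Procrustes) solution; the paper additionally weights corners by covariances propagated from the line fits, which is omitted here:

```python
import numpy as np

def rigid_from_corners(P, Q):
    """Least-squares rigid transform (R, t) aligning corner sets P -> Q.

    P, Q: (n, 2) arrays of matched corner coordinates from two scans.
    """
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cp).T @ (Q - cq)                      # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, np.linalg.det(Vt.T @ U.T)])  # guard against reflection
    R = Vt.T @ D @ U.T
    return R, cq - R @ cp

# toy check: corners rotated by 10 degrees and shifted
th = np.deg2rad(10.0)
R_true = np.array([[np.cos(th), -np.sin(th)], [np.sin(th), np.cos(th)]])
P = np.array([[0.0, 0.0], [2.0, 0.0], [2.0, 1.0], [0.0, 1.5]])
Q = P @ R_true.T + np.array([0.3, -0.2])
R, t = rigid_from_corners(P, Q)
print(np.allclose(R, R_true), np.allclose(t, [0.3, -0.2]))
```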
DOE Office of Scientific and Technical Information (OSTI.GOV)
Miller, Gregory H.
2003-08-06
In this paper we present a general iterative method for the solution of the Riemann problem for hyperbolic systems of PDEs. The method is based on the multiple shooting method for free boundary value problems. We demonstrate the method by solving one-dimensional Riemann problems for hyperelastic solid mechanics. Even for conditions representative of routine laboratory conditions and military ballistics, dramatic differences are seen between the exact and approximate Riemann solutions. The greatest discrepancy arises from misallocation of energy between compressional and thermal modes by the approximate solver, resulting in nonphysical entropy and temperature estimates. Several pathological conditions arise in common practice, and modifications to the method to handle these are discussed. These include points where genuine nonlinearity is lost, degeneracies, and eigenvector deficiencies that occur upon melting.
Study on Huizhou architecture of point cloud registration based on optimized ICP algorithm
NASA Astrophysics Data System (ADS)
Zhang, Runmei; Wu, Yulu; Zhang, Guangbin; Zhou, Wei; Tao, Yuqian
2018-03-01
Current point cloud registration software has high hardware requirements and a heavy, highly interactive workload, and the sources of the packages with better processing results are not open. In view of this, a two-step registration method combining a normal-vector distribution feature with a coarse-feature-based iterative closest point (ICP) algorithm is proposed in this paper. The method combines the fast point feature histogram (FPFH) algorithm with a definition of the adjacency region of the point cloud and a calculation model of the distribution of normal vectors, sets up a local coordinate system for each key point, and obtains the transformation matrix to finish coarse registration; the coarse registration results of the two stations are then accurately registered using the ICP algorithm. Experimental results show that, compared with the traditional ICP algorithm, the method used in this paper has obvious time and precision advantages for large point clouds.
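A bare-bones sketch of the fine-registration stage only; the paper's contribution is the FPFH/normal-vector coarse alignment that would supply the initial pose before this loop. SciPy's cKDTree is assumed for nearest-neighbor queries:

```python
import numpy as np
from scipy.spatial import cKDTree

def icp(src, dst, n_iter=30):
    """Minimal point-to-point ICP refining an alignment of src onto dst."""
    R, t = np.eye(3), np.zeros(3)
    tree = cKDTree(dst)
    for _ in range(n_iter):
        moved = src @ R.T + t
        _, idx = tree.query(moved)               # nearest-neighbor matches
        q = dst[idx]
        cp, cq = moved.mean(axis=0), q.mean(axis=0)
        H = (moved - cp).T @ (q - cq)
        U, _, Vt = np.linalg.svd(H)
        D = np.diag([1, 1, np.linalg.det(Vt.T @ U.T)])
        dR = Vt.T @ D @ U.T                      # incremental rotation
        R, t = dR @ R, dR @ t + cq - dR @ cp     # compose transforms
    return R, t

rng = np.random.default_rng(0)
dst = rng.uniform(-1, 1, size=(500, 3))
src = dst - np.array([0.05, 0.02, 0.0])          # small known offset
R, t = icp(src, dst)
print(t)   # should approach [0.05, 0.02, 0.0]
```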
Finite Volume Element (FVE) discretization and multilevel solution of the axisymmetric heat equation
NASA Astrophysics Data System (ADS)
Litaker, Eric T.
1994-12-01
The axisymmetric heat equation, resulting from a point source of heat applied to a metal block, is solved numerically; both iterative and multilevel solutions are computed in order to compare the two processes. The continuum problem is discretized in two stages: finite differences are used to discretize the time derivatives, resulting in a fully implicit backward time-stepping scheme, and the Finite Volume Element (FVE) method is used to discretize the spatial derivatives. The application of the FVE method to a problem in cylindrical coordinates is new, and results in stencils which are analyzed extensively. Several iteration schemes are considered, including both Jacobi and Gauss-Seidel; a thorough analysis of these schemes is done using both the spectral radii of the iteration matrices and local mode analysis. Using this discretization, a Gauss-Seidel relaxation scheme is used to solve the heat equation iteratively. A multilevel solution process is then constructed, including the development of intergrid transfer and coarse grid operators. Local mode analysis is performed on the components of the amplification matrix, resulting in the two-level convergence factors for various combinations of the operators. A multilevel solution process is implemented by using multigrid V-cycles; the iterative and multilevel results are compared and discussed in detail. The computational savings resulting from the multilevel process are then discussed.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fukuda, Ryoichi; Ehara, Masahiro
A perturbative approximation of the state-specific polarizable continuum model (PCM) symmetry-adapted cluster-configuration interaction (SAC-CI) method is proposed for efficient calculations of the electronic excitations and absorption spectra of molecules in solution. This first-order PCM SAC-CI method considers the solvent effects on the energies of excited states up to first order using the zeroth-order wavefunctions, and can thus avoid the costly iterative procedure of self-consistent reaction field calculations. The first-order PCM SAC-CI calculations reproduce well the results obtained by the iterative method for various types of excitations of molecules in polar and nonpolar solvents; the first-order contribution is significant for the excitation energies. The results obtained by the zeroth-order PCM SAC-CI, which considers a fixed ground-state reaction field for the excited-state calculations, deviate from the results of the iterative method by about 0.1 eV, and the zeroth-order PCM SAC-CI cannot predict even the direction of the solvent shifts in n-hexane in many cases. The first-order PCM SAC-CI is applied to studying the solvatochromism of (2,2'-bipyridine)tetracarbonyltungsten [W(CO)4(bpy), bpy = 2,2'-bipyridine] and bis(pentacarbonyltungsten)pyrazine [(OC)5W(pyz)W(CO)5, pyz = pyrazine]. The SAC-CI calculations reveal the detailed character of the excited states and the mechanisms of the solvent shifts. The energies of metal-to-ligand charge transfer states are significantly sensitive to solvents. The first-order PCM SAC-CI reproduces well the observed absorption spectra of the tungsten carbonyl complexes in several solvents.
Composing chaotic music from the letter m
NASA Astrophysics Data System (ADS)
Sotiropoulos, Anastasios D.
Chaotic music is composed from a proposed iterative map depicting the letter m, relating the pitch, duration and loudness of successive steps. Each of the two curves of the letter m is based on the classical logistic map. Thus, the generating map is x_{n+1} = r x_n (1/2 - x_n) for x_n between 0 and 1/2, defining the first curve, and x_{n+1} = r (x_n - 1/2)(1 - x_n) for x_n between 1/2 and 1, representing the second curve. The parameter r, which determines the height(s) of the letter m, varies from 2 to 16, the latter value ensuring fully developed chaotic solutions for the whole letter m; r = 8 yields fully chaotic solutions only for its first curve. The m-model yields fixed points, bifurcation points and chaotic regions for each separate curve, as well as, for values of the parameter r greater than 8, inter-fixed points, inter-bifurcation points and inter-chaotic regions arising from the interplay of the two curves. Based on this, music is composed by mapping the solutions of the m-recurrence model onto actual notes. The resulting musical score strongly depends on the sequence of notes chosen by the composer to define the musical range corresponding to the range of the chaotic mathematical solutions x from 0 to 1. Here, two musical ranges are used: one is the middle chromatic scale and the other is the seven-octave range. At the composer's will, and for aesthetics, notes within the same composition can be the outcome of different values of r and/or shifted to any octave. Compositions with endings of non-repeating note patterns result from values of r in the m-model that do not produce bifurcations. Scores of chaotic music composed from the m-model and the classical logistic model are presented.
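A sketch of the m-map and one way to map its orbit onto notes; the single-octave chromatic mapping and the starting value are assumed choices standing in for the composer's:

```python
import numpy as np

NOTES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]

def m_map(x, r):
    """One step of the letter-m map: two back-to-back logistic arches."""
    if x < 0.5:
        return r * x * (0.5 - x)
    return r * (x - 0.5) * (1.0 - x)

def compose(x0=0.1, r=16.0, length=16):
    """Map the orbit onto one chromatic octave (r = 16 keeps x in [0, 1])."""
    x, notes = x0, []
    for _ in range(length):
        x = m_map(x, r)
        notes.append(NOTES[min(int(x * 12), 11)])
    return notes

print(compose())
```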
NASA Astrophysics Data System (ADS)
Hashimoto, Hiroyuki; Takaguchi, Yusuke; Nakamura, Shizuka
Instability of the calculation process and growth of the calculation time with increasing problem size remain the major issues to be solved before continuous optimization techniques can be applied to practical industrial systems. This paper proposes an enhanced quadratic programming algorithm based on the interior point method, aimed mainly at improving calculation stability. The proposed method has a dynamic estimation mechanism for the active constraints on the variables, which fixes variables that approach their upper/lower limits and afterwards releases the fixed ones as needed during the optimization process. It can be regarded as an algorithm-level integration of the solution strategy of the active-set method into the interior point method framework. We describe numerical results on the commonly used benchmark problems of the CUTEr collection to show the effectiveness of the proposed method. Furthermore, test results on a large-sized economic load dispatching (ELD) problem arising in electric power supply scheduling are also described as a practical industrial application.
NASA Astrophysics Data System (ADS)
Duong-van, Minh
1993-11-01
A method of steering chaotic flows to laminar ones in the Lorenz equations, using fixed points dictated by minimizing the Lyapunov functional, was proposed by Singer, Wang and Bau. Using different fixed points, we find that the solutions in a chaotic regime can also be made periodic. Since the laser equations are isomorphic to the Lorenz equations, we use this new method to control chaos when the laser is operated above the pump threshold. Furthermore, by solving the laser equations with an occasional proportional feedback mechanism, we recover the essential laser-control features experimentally discovered by Roy, Murphy, Jr., Maier, Gills and Hunt. This method of controlling chaos is now being extended to various medical and biological systems.
NASA Astrophysics Data System (ADS)
Monnier, F.; Vallet, B.; Paparoditis, N.; Papelard, J.-P.; David, N.
2013-10-01
This article presents a generic and efficient method to register terrestrial mobile mapping data of imperfect location on a geographic database that has better overall accuracy but less detail. The registration method proposed in this paper is based on a semi-rigid point-to-plane ICP ("Iterative Closest Point"). The main applications of such registration are to improve existing geographic databases, particularly in terms of accuracy, level of detail and diversity of represented objects; other applications include fine geometric modelling and facade texturing, and the extraction of objects such as trees, poles, road signs and markings, facilities, vehicles, etc. The geopositioning system of mobile mapping systems is affected by GPS masks that are only partially corrected by the Inertial Navigation System (INS), which can cause a significant drift. As this drift varies non-linearly, but slowly in time, it is modelled by a translation defined as a piecewise linear function of time whose variation over time is minimized (rigidity term). For each iteration of the ICP, the drift is estimated in order to minimise the distance between the laser points and the planar model primitives (data attachment term). The method has been tested on real data (a scan of the city of Paris of 3.6 million laser points, registered on a 3D model of approximately 71,400 triangles).
The multigrid preconditioned conjugate gradient method
NASA Technical Reports Server (NTRS)
Tatebe, Osamu
1993-01-01
A multigrid preconditioned conjugate gradient method (MGCG method), which uses the multigrid method as a preconditioner of the PCG method, is proposed. The multigrid method has inherent high parallelism and improves the convergence of long-wavelength components, which is important in iterative methods. By using it as a preconditioner of the PCG method, an efficient method with high parallelism and fast convergence is obtained. First, a necessary condition is derived for the multigrid method to satisfy the requirements of a PCG preconditioner. Numerical experiments then show the behavior of the MGCG method and that it is superior to both the ICCG method and the multigrid method alone in terms of fast convergence and high parallelism. This fast convergence is understood through an eigenvalue analysis of the preconditioned matrix. From this observation of the multigrid preconditioner, it is seen that the MGCG method converges in very few iterations and that the multigrid preconditioner is a desirable preconditioner for the conjugate gradient method.
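A minimal PCG implementation with the preconditioner as a black-box callback; in the MGCG method the callback would perform one multigrid cycle, for which a simple Jacobi preconditioner stands in below:

```python
import numpy as np

def pcg(A, b, precond, tol=1e-10, max_it=200):
    """Preconditioned conjugate gradients with a black-box preconditioner.

    precond(r) should approximate A^{-1} r; in MGCG this is one
    multigrid cycle applied to the residual.
    """
    x = np.zeros_like(b)
    r = b - A @ x
    z = precond(r)
    p = z.copy()
    rz = r @ z
    for k in range(max_it):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol:
            return x, k + 1
        z = precond(r)
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x, max_it

# 1D Poisson matrix; Jacobi preconditioning as a cheap placeholder
n = 100
A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
b = np.ones(n)
x, iters = pcg(A, b, precond=lambda r: r / np.diag(A))
print(iters, np.linalg.norm(A @ x - b))
```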
Evidential analysis of difference images for change detection of multitemporal remote sensing images
NASA Astrophysics Data System (ADS)
Chen, Yin; Peng, Lijuan; Cremers, Armin B.
2018-03-01
In this article, we develop two methods for unsupervised change detection in multitemporal remote sensing images based on Dempster-Shafer's theory of evidence (DST). In most unsupervised change detection methods, the distribution of the difference image is assumed to be characterized by mixture models whose parameters are estimated by the expectation maximization (EM) method. The main drawback of the EM method, however, is that it does not consider spatial contextual information, which may entail rather noisy detection results with numerous spurious alarms. To remedy this, we first develop an evidence theory based EM method (EEM) which incorporates spatial contextual information into EM by iteratively fusing the belief assignments of neighboring pixels into the central pixel. Second, an evidential labeling method in the sense of maximizing the a posteriori probability (MAP) is proposed in order to further enhance the detection result. It first uses the parameters estimated by EEM to initialize the class labels of the difference image. Then it iteratively fuses class conditional information and spatial contextual information, and updates the labels and class parameters. Finally it converges to a fixed state which gives the detection result. A simulated image set and two real remote sensing data sets are used to evaluate the two evidential change detection methods. Experimental results show that the new evidential methods are comparable to other prevalent methods in terms of total error rate.
On the convergence of a linesearch based proximal-gradient method for nonconvex optimization
NASA Astrophysics Data System (ADS)
Bonettini, S.; Loris, I.; Porta, F.; Prato, M.; Rebegoldi, S.
2017-05-01
We consider a variable metric linesearch based proximal gradient method for the minimization of the sum of a smooth, possibly nonconvex function plus a convex, possibly nonsmooth term. We prove convergence of this iterative algorithm to a critical point if the objective function satisfies the Kurdyka-Łojasiewicz property at each point of its domain, under the assumption that a limit point exists. The proposed method is applied to a wide collection of image processing problems, and our numerical tests show that the algorithm is flexible, robust and competitive when compared to recently proposed approaches able to address the optimization problems arising in the considered applications.
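A fixed-metric, convex sketch of the iteration on the LASSO problem; the paper's setting (nonconvex smooth term, variable metric, Kurdyka-Łojasiewicz analysis) is richer, and the backtracking rule below is the standard quadratic-upper-bound test:

```python
import numpy as np

def soft_threshold(v, tau):
    """Proximal operator of tau * ||.||_1 (the convex, nonsmooth term)."""
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def prox_grad(A, b, lam, alpha0=1.0, eta=0.5, n_it=500):
    """Proximal gradient with backtracking linesearch on
    0.5 * ||Ax - b||^2 + lam * ||x||_1."""
    x = np.zeros(A.shape[1])
    f = lambda v: 0.5 * np.sum((A @ v - b) ** 2)
    for _ in range(n_it):
        g = A.T @ (A @ x - b)            # gradient of the smooth part
        alpha = alpha0
        while True:                      # backtrack until the bound holds
            z = soft_threshold(x - alpha * g, alpha * lam)
            d = z - x
            if f(z) <= f(x) + g @ d + (d @ d) / (2 * alpha):
                break
            alpha *= eta
        x = z
    return x

rng = np.random.default_rng(2)
A = rng.standard_normal((60, 100))
x_true = np.zeros(100); x_true[:5] = 3.0
x = prox_grad(A, A @ x_true, lam=0.1)
print(np.round(x[:8], 2))   # sparse support recovered approximately
```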
Impedance computed tomography using an adaptive smoothing coefficient algorithm.
Suzuki, A; Uchiyama, A
2001-01-01
In impedance computed tomography, a fixed-coefficient regularization algorithm has frequently been used to improve the ill-conditioning of the Newton-Raphson algorithm. However, a lot of experimental data and a long computation time are needed to determine a good smoothing coefficient, because it has to be chosen manually from a number of candidates and is held constant for every iteration. Thus the fixed-coefficient regularization algorithm sometimes distorts the information or fails to have any effect. In this paper, a new adaptive smoothing coefficient algorithm is proposed. This algorithm automatically calculates the smoothing coefficient from the eigenvalues of the ill-conditioned matrix, so effective images can be obtained within a short computation time. The smoothing coefficient is also automatically adjusted using information related to the real resistivity distribution and the data collection method. In our impedance system, we have reconstructed the resistivity distributions of two phantoms using this algorithm. As a result, this algorithm needs only one-fifth of the computation time of the fixed-coefficient regularization algorithm, showing that images are obtained rapidly enough for the method to be applicable to real-time monitoring of blood vessels.
A note on the preconditioner P_m=(I+S_m)
NASA Astrophysics Data System (ADS)
Kohno, Toshiyuki; Niki, Hiroshi
2009-03-01
Kotakemori et al. [H. Kotakemori, K. Harada, M. Morimoto, H. Niki, A comparison theorem for the iterative method with the preconditioner (I+S_max), Journal of Computational and Applied Mathematics 145 (2002) 373-378] reported that the convergence rate of the iterative method with the preconditioner P_m=(I+S_m) was superior to that of the modified Gauss-Seidel method under certain conditions, and derived a theorem comparing the proposed method with the Gauss-Seidel method. However, by means of a counterexample, Wen Li [Wen Li, A note on the preconditioned Gauss-Seidel (GS) method for linear systems, Journal of Computational and Applied Mathematics 182 (2005) 81-91] pointed out that there exists a matrix that does not satisfy this comparison theorem. In this note, we analyze why such a counterexample can be produced, and propose a preconditioner that overcomes this problem.
An adaptive gridless methodology in one dimension
DOE Office of Scientific and Technical Information (OSTI.GOV)
Snyder, N.T.; Hailey, C.E.
1996-09-01
Gridless numerical analysis offers great potential for accurately solving for flow about complex geometries or moving-boundary problems. Because gridless methods do not require point connectivity, the mesh cannot twist or distort. The gridless method utilizes a Taylor series about each point to obtain the unknown derivative terms from the current field-variable estimates. The governing equation is then numerically integrated to determine the field variables for the next iteration. Effects of point spacing and Taylor-series order on accuracy are studied, and they follow trends similar to those of traditional numerical techniques. Introducing adaption by point movement using a spring analogy allows the solution method to track a moving boundary. The adaptive gridless method models linear, nonlinear, steady, and transient problems, and comparisons with known analytic solutions are given for these examples. Although point-movement adaption does not provide a significant increase in accuracy, it helps capture important features and provides an improved solution.
NASA Astrophysics Data System (ADS)
Citro, V.; Luchini, P.; Giannetti, F.; Auteri, F.
2017-09-01
The study of the stability of a dynamical system described by a set of partial differential equations (PDEs) requires the computation of unstable states as the control parameter exceeds its critical threshold. Unfortunately, the discretization of the governing equations, especially for fluid dynamic applications, often leads to very large discrete systems. As a consequence, matrix-based methods, such as the Newton-Raphson algorithm coupled with direct inversion of the Jacobian matrix, lead to computational costs that are too large in terms of both memory and execution time. We present a novel iterative algorithm, Boostconv, inspired by Krylov-subspace methods, which is able to compute unstable steady states and/or accelerate the convergence to stable configurations. The algorithm is based on the minimization of the residual norm at each iteration step, with a projection basis updated at each iteration rather than at periodic restarts as in the classical GMRES method. It can stabilize a dynamical system without increasing the computational time of the original numerical procedure used to solve the governing equations, and it can easily be inserted into a pre-existing relaxation (integration) procedure with a call to a single black-box subroutine. The procedure is discussed for problems of different sizes, ranging from a small two-dimensional system to a large three-dimensional problem involving the Navier-Stokes equations. We show that the proposed algorithm improves the convergence of existing iterative schemes. In particular, the procedure is applied to the subcritical flow inside a lid-driven cavity. We also discuss the application of Boostconv to compute the unstable steady flow past a fixed circular cylinder (2D) and the boundary-layer flow over a hemispherical roughness element (3D) for supercritical values of the Reynolds number. We show that Boostconv can be used effectively with any spatial discretization, be it a finite-difference, finite-volume, finite-element or spectral method.
Shen, Chongfei; Liu, Hongtao; Xie, Xb; Luk, Keith Dk; Hu, Yong
2007-01-01
An adaptive noise canceller (ANC) has been used to improve the signal-to-noise ratio (SNR) of somatosensory evoked potentials (SEP). For an efficient hardware implementation of the ANC, a fixed-point algorithm allows a fast, cost-efficient construction with low power consumption in an FPGA design. However, it is questionable whether the SNR improvement achieved by the fixed-point algorithm is as good as that achieved by the floating-point algorithm. This study compares the outputs of floating-point and fixed-point ANC algorithms applied to SEP signals. The selection of the step-size parameter (mu) was found to differ between the fixed-point and floating-point algorithms. In this simulation study, the output of the fixed-point ANC showed higher distortion from the real SEP signals than that of the floating-point ANC; however, the difference decreases with increasing mu. With an optimal selection of mu, the fixed-point ANC can achieve results as good as the floating-point algorithm.
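A sketch of an LMS-based ANC with optional quantization of the datapath; the rounding model and word lengths are assumptions, since the paper's exact fixed-point format is not given in the abstract:

```python
import numpy as np

def lms_anc(d, x, mu, taps=8, frac_bits=None):
    """LMS adaptive noise canceller; optionally quantized to fixed point.

    d: primary channel (evoked potential plus noise), x: noise reference.
    frac_bits: if given, weights and outputs are rounded to a signed
    fixed-point grid with that many fractional bits (a crude datapath model).
    """
    q = (lambda v: v) if frac_bits is None else \
        (lambda v: np.round(v * 2.0 ** frac_bits) / 2.0 ** frac_bits)
    w = np.zeros(taps)
    e = np.zeros(len(d))
    for n in range(taps - 1, len(d)):
        u = x[n - taps + 1:n + 1][::-1]   # newest sample first
        y = q(w @ u)                      # filter output = noise estimate
        e[n] = d[n] - y                   # error = cleaned SEP estimate
        w = q(w + mu * e[n] * u)          # LMS weight update
    return e

rng = np.random.default_rng(3)
noise = rng.standard_normal(5000)
sep = 0.1 * np.sin(2 * np.pi * np.arange(5000) / 250.0)
d = sep + np.convolve(noise, [0.8, -0.3, 0.1])[:5000]   # causal noise path
for fb in (None, 15, 7):
    out = lms_anc(d, noise, mu=0.01, frac_bits=fb)
    print(fb, np.mean((out[1000:] - sep[1000:]) ** 2))  # residual distortion
```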
Positron Emission Mammography with Multiple Angle Acquisition
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mark F. Smith; Stan Majewski; Raymond R. Raylman
2002-11-01
Positron emission mammography (PEM) of F-18 fluorodeoxyglucose (FDG) uptake in breast tumors with dedicated detectors has typically been accomplished with two planar detectors in a fixed position with the breast under compression. The potential use of PEM imaging at two detector positions to guide stereotactic breast biopsy has motivated us to use PEM coincidence data acquired at two or more detector positions together in a single image reconstruction. Multiple-angle PEM acquisition and iterative image reconstruction were investigated using point source and compressed breast phantom acquisitions with 5, 9, 12 and 15 mm diameter spheres and a simulated tumor:background activity concentration ratio of 6:1. Image reconstruction was performed with an iterative MLEM algorithm that used coincidence events between any two detector pixels on opposed detector heads at each detector position. The present study compared two acquisition protocols: a 2-angle acquisition with detector angular positions of -15 and +15 degrees, and an 11-angle acquisition with detector positions spaced at 3 degree increments over the range -15 to +15 degrees. Three-dimensional image resolution was assessed for the point source acquisitions, and contrast and signal-to-noise metrics were evaluated for the compressed breast phantom with different simulated tumor sizes. Radial and tangential resolutions were similar for the two protocols, while normal resolution was better for the 2-angle acquisition; the analysis is complicated by the asymmetric point spread functions. Signal-to-noise vs. contrast tradeoffs were better for the 11-angle acquisition for the smallest visible 9 mm sphere, while tradeoff results were mixed for the larger and more easily visible 12 mm and 15 mm diameter spheres. Additional study is needed to better understand the performance of limited-angle tomography for PEM. PEM tomography experiments with complete angular sampling are planned.
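A minimal MLEM sketch; stacking rows from several detector angles into one system matrix is how the multiple-angle data enter a single reconstruction. The toy system matrix is random rather than a PEM geometry:

```python
import numpy as np

def mlem(A, y, n_iter=50):
    """Maximum-likelihood expectation maximization for emission tomography.

    A: system matrix mapping image voxels to detector-pair bins,
    y: measured coincidence counts. The multiplicative update keeps
    the image nonnegative.
    """
    x = np.ones(A.shape[1])               # uniform initial image
    sens = A.sum(axis=0)                  # sensitivity (back-projected ones)
    for _ in range(n_iter):
        proj = A @ x                      # forward projection
        ratio = np.where(proj > 0, y / proj, 0.0)
        x *= (A.T @ ratio) / np.maximum(sens, 1e-12)
    return x

rng = np.random.default_rng(4)
A = rng.uniform(0, 1, size=(300, 64))     # toy system matrix
x_true = np.zeros(64); x_true[20:24] = 10.0
y = rng.poisson(A @ x_true)               # Poisson coincidence counts
print(np.round(mlem(A, y)[18:26], 1))
```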
Experimental research of iterated dynamics for the complex exponentials with linear term
NASA Astrophysics Data System (ADS)
Matyushkin, Igor V.; Zapletina, Maria A.
2018-03-01
The orbit of the point zero, the fixed points, and the Julia and Fatou sets of iterated complex-valued exponentials are studied by means of computer experiment. The objects of study are three one-parameter families based on exp(iz): f : z → (1 + μ) exp(iz), g : z → (1 + μ|z - z*|) exp(iz), h : z → (1 + μ(z - z*)) exp(iz). For the first family, 17- and 2-periodic regimes are detected when passing near the bifurcation value μ ≈ 2.475i, where the multiplier equals 1. The second family shows more interesting behavior: (i) a three-valley structure of the isolines of the convergence rate near the fixed point z* at μ = 0 + 1 + i; (ii) a saddle-node transition as the parameter moves along the line Re μ = 0, leading to the appearance of a second fixed point and the loss of stability of the old fixed point at Im μ = 2.1682; (iii) the nontrivial character of the orbits of points in the vicinity of the new fixed point, and the presence of false fixed points in the portrait of the Julia set; (iv) a second phase transition leading to a radical change in the form of the Julia and Fatou sets at μ ≃ 2.5i. The dynamics of the third family along Re μ = 0 are similar to the first case, but the 17- and 2-periodic modes are replaced by 39- and 3-periodic ones. The transitions 17 → 2 and 39 → 3 appear rapid and discrete, and their geometric interpretation matches the ratios 17 = 1 + 2*8 and 39 = 13*3. At μ = |z*| - 1 the Julia set of the h-family fills the entire complex plane.
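A sketch of the computer experiment for the f-family: iterate the map, detect escape, and search the orbit tail for a period. The tolerances, iteration counts, and probed parameter values are assumptions:

```python
import numpy as np

def orbit(mu, z0=0.0, n=2000, n_keep=64):
    """Iterate f(z) = (1 + mu) * exp(i z) and return the orbit tail."""
    z = complex(z0)
    tail = []
    for k in range(n):
        z = (1.0 + mu) * np.exp(1j * z)
        if abs(z) > 1e6:                 # treat as escaped
            return None
        if k >= n - n_keep:
            tail.append(z)
    return np.array(tail)

def period(tail, tol=1e-6):
    """Smallest p with |z_{k+p} - z_k| < tol over the stored tail."""
    for p in range(1, len(tail) // 2):
        if np.all(np.abs(tail[p:] - tail[:-p]) < tol):
            return p
    return None

# probe the f-family near the bifurcation value reported in the paper
for mu in (2.40j, 2.50j):
    t = orbit(mu)
    print(mu, "period:", None if t is None else period(t))
```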
Influence of Iterative Reconstruction Algorithms on PET Image Resolution
NASA Astrophysics Data System (ADS)
Karpetas, G. E.; Michail, C. M.; Fountos, G. P.; Valais, I. G.; Nikolopoulos, D.; Kandarakis, I. S.; Panayiotakis, G. S.
2015-09-01
The aim of the present study was to assess the image quality of PET scanners using a thin layer chromatography (TLC) plane source. The source was simulated using a previously validated Monte Carlo model, developed with the GATE MC package, and images were reconstructed with the STIR software for tomographic image reconstruction. The simulated PET scanner was the GE DiscoveryST. A plane source consisting of a TLC plate was simulated by a layer of silica gel on an aluminum (Al) foil substrate immersed in an 18F-FDG bath solution (1 MBq). Image quality was assessed in terms of the modulation transfer function (MTF). MTF curves were estimated from transverse reconstructed images of the plane source. Images were reconstructed by the maximum likelihood estimation (MLE) OSMAPOSL, the ordered subsets separable paraboloidal surrogate (OSSPS), the median root prior (MRP), and the OSMAPOSL-with-quadratic-prior algorithms. OSMAPOSL reconstruction was assessed using fixed subsets and various iterations, as well as various beta (hyper)parameter values. MTF values were found to increase with increasing iterations, and MTF also improves with lower beta values. The simulated PET evaluation method, based on the TLC plane source, can be useful in the resolution assessment of PET scanners.
NASA Technical Reports Server (NTRS)
Jaggers, R. F.
1977-01-01
A derivation of an explicit solution to the two-point boundary-value problem of exoatmospheric guidance and trajectory optimization is presented. Fixed initial conditions and continuous-burn, multistage thrusting are assumed. Any number of end conditions from one to six (throttling is required in the case of six) can be satisfied in an explicit and practically optimal manner. The explicit equations converge for off-nominal conditions such as engine failure, abort, target switch, etc. The self-starting predictor/corrector solution involves no Newton-Raphson iterations, numerical integration, or first-guess values, and converges rapidly whenever convergence is physically possible. A form of this algorithm has been chosen for onboard guidance, as well as for real-time and preflight ground targeting and trajectory shaping, for the NASA Space Shuttle Program.
Computer method for identification of boiler transfer functions
NASA Technical Reports Server (NTRS)
Miles, J. H.
1971-01-01
An iterative computer method is described for identifying boiler transfer functions using frequency response data. An objective penalized performance measure and a nonlinear minimization technique are used to make the locus of points generated by a transfer function resemble the locus of points obtained from frequency response measurements. Different transfer functions can be tried until a satisfactory empirical transfer function for the system is found. To illustrate the method, some examples and some results from a study of a set of data consisting of measurements of the inlet impedance of a single-tube forced-flow boiler with inserts are given.
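A minimal sketch of this identification idea, assuming a simple first-order-plus-dead-time model form and a quadratic positivity penalty (both assumptions, not the paper's actual model): fit the transfer function to measured frequency-response points by nonlinear minimization.

```python
import numpy as np
from scipy.optimize import minimize

def fit_transfer_function(omega, H_meas):
    """Fit G(s) = K exp(-s T) / (tau s + 1) to measured frequency response."""
    def model(p, w):
        K, tau, T = p
        s = 1j * w
        return K * np.exp(-s * T) / (tau * s + 1)

    def cost(p):
        misfit = np.abs(model(p, omega) - H_meas) ** 2      # locus mismatch
        penalty = 1e3 * sum(max(0.0, -x) ** 2 for x in p)   # keep parameters positive
        return misfit.sum() + penalty

    return minimize(cost, x0=[1.0, 1.0, 0.1], method="Nelder-Mead")
```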
Achieving algorithmic resilience for temporal integration through spectral deferred corrections
Grout, Ray; Kolla, Hemanth; Minion, Michael; ...
2017-05-08
Spectral deferred corrections (SDC) is an iterative approach for constructing higher-order-accurate numerical approximations of ordinary differential equations. SDC starts with an initial approximation of the solution defined at a set of Gaussian or spectral collocation nodes over a time interval and uses an iterative application of lower-order time discretizations applied to a correction equation to improve the solution at these nodes. Each deferred correction sweep increases the formal order of accuracy of the method up to the limit inherent in the accuracy defined by the collocation points. In this paper, we demonstrate that SDC is well suited to recovering from soft (transient) hardware faults in the data. A strategy where extra correction iterations are used to recover from soft errors and provide algorithmic resilience is proposed. Specifically, in this approach the iteration is continued until the residual (a measure of the error in the approximation) is small relative to the residual of the first correction iteration and changes slowly between successive iterations. Here, we demonstrate the effectiveness of this strategy for both canonical test problems and a comprehensive situation involving a mature scientific application code that solves the reacting Navier-Stokes equations for combustion research.
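A minimal sketch of the resilience strategy, with the SDC sweep and residual evaluation abstracted as user-supplied callables (the tolerances are illustrative assumptions): sweeping continues until the residual is both small relative to the first sweep's residual and slowly changing, so a soft fault simply triggers extra sweeps.

```python
def resilient_sdc(sweep, residual, state, rel_tol=1e-8,
                  slow_change=0.1, max_sweeps=50):
    """Run SDC sweeps until the residual is small relative to the first
    sweep's residual and changes slowly between successive sweeps."""
    state = sweep(state)
    r_first = r_prev = residual(state)
    for _ in range(max_sweeps - 1):
        state = sweep(state)
        r = residual(state)
        small = r < rel_tol * r_first                     # converged relative to first sweep
        stalled = abs(r - r_prev) < slow_change * r_prev  # no longer improving quickly
        if small and stalled:
            break                # a soft fault just costs extra sweeps here
        r_prev = r
    return state
```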
A possibilistic approach to clustering
NASA Technical Reports Server (NTRS)
Krishnapuram, Raghu; Keller, James M.
1993-01-01
Fuzzy clustering has been shown to be advantageous over crisp (or traditional) clustering methods in image pattern recognition, in that total commitment of a vector to a given class is not required at each iteration. Recently, fuzzy clustering methods have shown a spectacular ability to detect not only hypervolume clusters, but also clusters which are actually 'thin shells', i.e., curves and surfaces. Most analytic fuzzy clustering approaches are derived from the 'Fuzzy C-Means' (FCM) algorithm. The FCM uses the probabilistic constraint that the memberships of a data point across classes sum to one. This constraint was used to generate the membership update equations for an iterative algorithm. Recently, we cast the clustering problem into the framework of possibility theory using an approach in which the resulting partition of the data can be interpreted as a possibilistic partition, and the membership values may be interpreted as degrees of possibility of the points belonging to the classes. We show the ability of this approach to detect linear and quartic curves in the presence of considerable noise.
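A minimal sketch of one possibilistic c-means update in the spirit of this approach (the fuzzifier m and the per-cluster scales eta are assumed inputs; in practice eta is often initialized from an FCM run). Note that, unlike in FCM, the memberships below do not sum to one across classes:

```python
import numpy as np

def pcm_iteration(X, centers, eta, m=2.0):
    """One possibilistic c-means update: memberships and prototypes."""
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)   # (n, c) squared distances
    u = 1.0 / (1.0 + (d2 / eta[None, :]) ** (1.0 / (m - 1.0)))  # possibilistic memberships
    w = u ** m
    centers = (w[:, :, None] * X[:, None, :]).sum(0) / w.sum(0)[:, None]
    return u, centers
```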
Line Segmentation of 2D Laser Scanner Point Clouds for Indoor SLAM Based on a Range of Residuals
NASA Astrophysics Data System (ADS)
Peter, M.; Jafri, S. R. U. N.; Vosselman, G.
2017-09-01
Indoor mobile laser scanning (IMLS) based on the Simultaneous Localization and Mapping (SLAM) principle has proven to be the preferred method to acquire data of indoor environments at a large scale. In previous work, we proposed a backpack IMLS system containing three 2D laser scanners and a corresponding SLAM approach. The feature-based SLAM approach solves all six degrees of freedom simultaneously and builds on the association of lines to planes. Because of the iterative character of the SLAM process, the quality and reliability of the segmentation of linear segments in the scanlines plays a crucial role in the quality of the derived poses and consequently of the point clouds. The orientations of the lines resulting from the segmentation can be influenced negatively by narrow objects which are nearly coplanar with walls (e.g. doors): if such objects are not detected as separate segments, the fitted line is tilted. State-of-the-art methods from the robotics domain, such as Iterative End Point Fit and Line Tracking, were found not to handle such situations well. Thus, we describe a novel segmentation method based on the comparison of a range of residuals to a range of thresholds. For the definition of the thresholds we employ the fact that the expected value for the average of the residuals of n points with respect to the line is σ/√n. Our method, as shown by the experiments and the comparison to other methods, delivers more accurate results than the two approaches it was tested against.
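A minimal sketch of the threshold test, assuming a known scanner noise level sigma and an illustrative acceptance factor (the paper compares a whole range of residuals to a range of thresholds; only one window test is shown here): a segment is grown while the averaged residual of the next points relative to the fitted line stays near the expected σ/√n.

```python
import numpy as np

def line_fit(points):
    """Total-least-squares line: centroid and unit normal."""
    c = points.mean(axis=0)
    _, _, Vt = np.linalg.svd(points - c, full_matrices=False)
    return c, Vt[-1]

def can_extend(points, segment_end, sigma, window=5, factor=3.0):
    """Grow a scanline segment while the averaged residual of the next
    `window` points stays within factor * sigma / sqrt(window)."""
    c, normal = line_fit(points[:segment_end])
    nxt = points[segment_end:segment_end + window]
    avg_residual = abs(((nxt - c) @ normal).mean())   # signed residuals averaged
    return avg_residual < factor * sigma / np.sqrt(window)
```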
NASA Astrophysics Data System (ADS)
Massambone de Oliveira, Rafael; Salomão Helou, Elias; Fontoura Costa, Eduardo
2016-11-01
We present a method for non-smooth convex minimization which is based on subgradient directions and string-averaging techniques. In this approach, the set of available data is split into sequences (strings), and a given iterate is processed independently along each string, possibly in parallel, by an incremental subgradient method (ISM). The end-points of all strings are averaged to form the next iterate. The method is useful for solving sparse and large-scale non-smooth convex optimization problems, such as those arising in tomographic imaging. A convergence analysis is provided under realistic, standard conditions. Numerical tests are performed in a tomographic image reconstruction application, showing good convergence speed, measured as the decrease ratio of the objective function, in comparison to classical ISM.
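A minimal sketch of one string-averaging ISM step, assuming the objective is a sum of convex components with a user-supplied subgradient oracle (the step size rule is an illustrative assumption):

```python
import numpy as np

def sa_ism_step(x, strings, subgrad, step):
    """One string-averaging incremental subgradient step.

    strings: list of lists of component indices; subgrad(i, y) returns a
    subgradient of the i-th component at y.
    """
    ends = []
    for string in strings:
        y = x.copy()
        for i in string:
            y = y - step * subgrad(i, y)   # incremental pass along the string
        ends.append(y)
    return np.mean(ends, axis=0)           # average the string end-points
```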
NASA Technical Reports Server (NTRS)
Chang, S. C.
1986-01-01
An algorithm for solving a large class of two- and three-dimensional nonseparable elliptic partial differential equations (PDEs) is developed and tested. It uses a modified D'Yakonov-Gunn iterative procedure in which the relaxation factor is grid-point dependent. It is easy to implement and applicable to a variety of boundary conditions. It is also computationally efficient, as indicated by the results of numerical comparisons with other established methods. Furthermore, the current algorithm has the advantage of possessing two important properties which traditional iterative methods lack: (1) the convergence rate is relatively insensitive to grid-cell size and aspect ratio, and (2) the convergence rate can be easily estimated by using the coefficient of the PDE being solved.
Multiple Point Statistics algorithm based on direct sampling and multi-resolution images
NASA Astrophysics Data System (ADS)
Julien, S.; Renard, P.; Chugunova, T.
2017-12-01
Multiple Point Statistics (MPS) has been popular for more than a decade in the Earth Sciences, because these methods can generate random fields reproducing the highly complex spatial features given in a conceptual model, the training image, whereas classical geostatistics techniques based on two-point statistics (covariance or variogram) fail to generate realistic models. Among MPS methods, direct sampling consists in borrowing patterns from the training image to populate a simulation grid. The grid is filled sequentially by visiting its nodes in random order; the patterns, whose number of nodes is fixed, become narrower during the simulation process as the simulation grid becomes more densely informed. Hence, large-scale structures are captured at the beginning of the simulation and small-scale ones at the end. However, MPS may mix spatial characteristics distinguishable at different scales in the training image, and thereby lose the spatial arrangement of the different structures. To overcome this limitation, we propose to perform MPS simulation using a decomposition of the training image into a set of images at multiple resolutions. Applying a Gaussian kernel onto the training image (convolution) results in a lower-resolution image, and by iterating this process a pyramid of images depicting fewer details at each level is built, as is done in image processing, for example, to reduce the storage size of a photograph. Direct sampling is then employed to simulate the lowest resolution level, and then each level in turn, up to the finest resolution, conditioned on the level one rank coarser. This scheme helps reproduce the spatial structures of the training image at any scale and thus generate more realistic models. We illustrate the method with aerial photographs (satellite images) and natural textures. Indeed, these kinds of images often display typical structures at different scales and are well suited for MPS simulation techniques.
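A minimal sketch of the multi-resolution decomposition step (the number of levels and the kernel width are illustrative assumptions): repeated Gaussian convolution of the training image yields the pyramid that is then simulated coarse-to-fine by direct sampling.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def gaussian_pyramid(training_image, levels=4, sigma=1.0):
    """Pyramid of progressively smoother versions of the training image."""
    pyramid = [np.asarray(training_image, dtype=float)]
    for _ in range(levels - 1):
        pyramid.append(gaussian_filter(pyramid[-1], sigma))
    return pyramid    # pyramid[0] is the finest level, pyramid[-1] the coarsest
```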
An overview of NSPCG: A nonsymmetric preconditioned conjugate gradient package
NASA Astrophysics Data System (ADS)
Oppe, Thomas C.; Joubert, Wayne D.; Kincaid, David R.
1989-05-01
The most recent research-oriented software package developed as part of the ITPACK Project is called "NSPCG" since it contains many nonsymmetric preconditioned conjugate gradient procedures. It is designed to solve large sparse systems of linear algebraic equations by a variety of different iterative methods. One of the main purposes for the development of the package is to provide a common modular structure for research on iterative methods for nonsymmetric matrices. Another purpose for the development of the package is to investigate the suitability of several iterative methods for vector computers. Since the vectorizability of an iterative method depends greatly on the matrix structure, NSPCG allows great flexibility in the operator representation. The coefficient matrix can be passed in one of several different matrix data storage schemes. These sparse data formats allow matrices with a wide range of structures from highly structured ones such as those with all nonzeros along a relatively small number of diagonals to completely unstructured sparse matrices. Alternatively, the package allows the user to call the accelerators directly with user-supplied routines for performing certain matrix operations. In this case, one can use the data format from an application program and not be required to copy the matrix into one of the package formats. This is particularly advantageous when memory space is limited. Some of the basic preconditioners that are available are point methods such as Jacobi, Incomplete LU Decomposition and Symmetric Successive Overrelaxation as well as block and multicolor preconditioners. The user can select from a large collection of accelerators such as Conjugate Gradient (CG), Chebyshev (SI, for semi-iterative), Generalized Minimal Residual (GMRES), Biconjugate Gradient Squared (BCGS) and many others. The package is modular so that almost any accelerator can be used with almost any preconditioner.
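Not NSPCG itself, but a minimal sketch of its simplest preconditioner/accelerator pairing, point Jacobi with conjugate gradient acceleration, for a dense NumPy matrix standing in for the package's sparse storage formats:

```python
import numpy as np

def jacobi_pcg(A, b, tol=1e-10, maxit=1000):
    """Conjugate gradient accelerated by a point-Jacobi preconditioner."""
    x = np.zeros_like(b, dtype=float)
    Minv = 1.0 / np.diag(A)            # inverse of the diagonal of A
    r = b - A @ x
    z = Minv * r
    p = z.copy()
    rz = r @ z
    for _ in range(maxit):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol * np.linalg.norm(b):
            break
        z = Minv * r
        rz_new = r @ z
        p = z + (rz_new / rz) * p      # preconditioned CG direction update
        rz = rz_new
    return x
```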
Pilot Comparison of Radiance Temperature Scale Realization Between NIMT and NMIJ
NASA Astrophysics Data System (ADS)
Keawprasert, T.; Yamada, Y.; Ishii, J.
2015-03-01
A pilot comparison of radiance temperature scale realizations between the National Institute of Metrology Thailand (NIMT) and the National Metrology Institute of Japan (NMIJ) was conducted. At the two national metrology institutes (NMIs), a 900 nm radiation thermometer, used as the transfer artifact, was calibrated by means of a multiple fixed-point method using fixed-point blackbodies at the Zn, Al, Ag, and Cu points, and by means of relative spectral responsivity measurements according to the International Temperature Scale of 1990 (ITS-90) definition. The Sakuma-Hattori equation is used for interpolating the radiance temperature scale between the four fixed points and also for extrapolating the ITS-90 temperature scale to 2000 °C. This paper compares the calibration results in terms of fixed-point measurements, relative spectral responsivity, and finally the radiance temperature scale. Good agreement for the fixed-point measurements was found when a correction for the change of the internal temperature of the artifact was applied, using the temperature coefficient measured at the NMIJ. For the realized radiance temperature range from 400 °C to 1100 °C, the resulting scale differences between the two NMIs are well within the combined scale comparison uncertainty of 0.12 °C. The spectral responsivity measured at the NIMT has a curve comparable to that measured at the NMIJ, especially in the out-of-band region, yielding an ITS-90 scale difference within 1.0 °C from the Cu point to 2000 °C, whereas the combined realization comparison uncertainty of NIMT and NMIJ is 1.2 °C at 2000 °C.
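A minimal sketch of the interpolation/extrapolation step with the Planck form of the Sakuma-Hattori equation, S(T) = C / (exp(c₂/(AT + B)) − 1); the fixed-point temperatures are ITS-90 values in kelvin, while the signal values are hypothetical (constructed here to be roughly consistent with a 900 nm thermometer):

```python
import numpy as np
from scipy.optimize import curve_fit

C2 = 0.014388  # second radiation constant, m*K

def sakuma_hattori(T, A, B, C):
    return C / (np.exp(C2 / (A * T + B)) - 1.0)

def temperature_from_signal(S, A, B, C):
    # closed-form inversion of the Planck-form Sakuma-Hattori equation
    return (C2 / np.log(C / S + 1.0) - B) / A

T_fp = np.array([692.677, 933.473, 1234.93, 1357.77])   # Zn, Al, Ag, Cu points (K)
S_fp = np.array([9.4e-11, 3.65e-8, 2.39e-6, 7.72e-6])   # hypothetical signals
(A, B, C), _ = curve_fit(sakuma_hattori, T_fp, S_fp, p0=[9.0e-7, 0.0, 1.0])
print(temperature_from_signal(7.72e-6, A, B, C))        # ~1357.77 K (Cu point)
```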
Multi-limit unsymmetrical MLIBD image restoration algorithm
NASA Astrophysics Data System (ADS)
Yang, Yang; Cheng, Yiping; Chen, Zai-wang; Bo, Chen
2012-11-01
A novel multi-limit unsymmetrical iterative blind deconvolution (MLIBD) algorithm is presented to enhance the performance of adaptive optics image restoration. The algorithm enhances the reliability of iterative blind deconvolution by introducing a bandwidth limit into the frequency domain of the point spread function (PSF), and adopts dynamic estimation of the PSF support region to improve the convergence speed. The unsymmetrical factor is computed automatically to improve its adaptivity. Image deconvolution experiments comparing Richardson-Lucy IBD and MLIBD were performed, and the results indicate that the iteration number is reduced by 22.4% and the peak signal-to-noise ratio is improved by 10.18 dB with the MLIBD method. The MLIBD algorithm performs outstandingly in the restoration of FK5-857 adaptive optics images and double-star adaptive optics images.
NASA Technical Reports Server (NTRS)
Hendershott, M. C.; Munk, W. H.; Zetler, B. D.
1974-01-01
Two procedures for the evaluation of global tides from SEASAT-A altimetry data are elaborated: an empirical method leading to the response functions for a grid of about 500 points from which the tide can be predicted for any point in the oceans, and a dynamic method which consists of iteratively modifying the parameters in a numerical solution to Laplace tide equations. It is assumed that the shape of the received altimeter signal can be interpreted for sea state and that orbit calculations are available so that absolute sea levels can be obtained.
Hypercuboidal renormalization in spin foam quantum gravity
NASA Astrophysics Data System (ADS)
Bahr, Benjamin; Steinhaus, Sebastian
2017-06-01
In this article, we apply background-independent renormalization group methods to spin foam quantum gravity. It is aimed at extending and elucidating the analysis of a companion paper, in which the existence of a fixed point in the truncated renormalization group flow for the model was reported. Here, we repeat the analysis with various modifications and find that both qualitative and quantitative features of the fixed point are robust in this setting. We also go into details about the various approximation schemes employed in the analysis.
Designing a freeform optic for oblique illumination
NASA Astrophysics Data System (ADS)
Uthoff, Ross D.; Ulanch, Rachel N.; Williams, Kaitlyn E.; Ruiz Diaz, Liliana; King, Page; Koshel, R. John
2017-11-01
The Functional Freeform Fitting (F4) method is utilized to design a freeform optic for oblique illumination of Mark Rothko's Green on Blue (1956). Shown are preliminary results from an iterative freeform design process: from problem definition and specification development to surface fit, ray tracing results, and optimization. This method is applicable to both point and extended sources of various geometries.
SU-E-J-184: Stereo Time-Of-Flight System for Patient Positioning in Radiotherapy
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wentz, T; Gilles, M; Visvikis, D
2014-06-01
Purpose: The objective of this work is to test the advantage of using the surface acquired by two stereo time-of-flight (ToF) cameras, in comparison with the use of one camera only, for patient positioning in radiotherapy. Methods: A first step consisted of validating the use of a stereo ToF-camera system for the position management of a phantom mounted on a linear actuator producing very accurate and repeatable displacements. The displacements between two positions were computed from the surface point clouds acquired by either one or two cameras by means of an iterative closest point algorithm. A second step consisted of determining the displacements on patient datasets, with two cameras fixed on the ceiling of the radiotherapy room. Measurements were done first on a volunteer subject with fixed translations, then on patients during the normal clinical radiotherapy routine. Results: The phantom tests showed a major improvement along the lateral and depth axes for motions above 10 mm when using the stereo system instead of a single camera (Fig. 1). Patient measurements validate these results, with mean differences between real and measured displacements in the depth direction of 1.5 mm when using one camera and 0.9 mm when using two cameras (Fig. 2). In the lateral direction, a mean difference of 1 mm was obtained by the stereo system instead of 3.2 mm. Along the longitudinal axis, mean differences of 5.4 and 3.4 mm with one and two cameras, respectively, were noticed, but these measurements were still inaccurate and globally underestimated in this direction, as in the literature. Similar results were also found for patient subjects, with mean difference reductions of 35%, 7%, and 25% for the lateral, depth, and longitudinal displacements with the stereo system. Conclusion: The addition of a second ToF camera to determine patient displacement strongly improves patient repositioning results and therefore ensures better radiation delivery.
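A minimal sketch of the displacement-estimation step, using a plain point-to-point ICP with the Kabsch/SVD rigid transform (the clinical pipeline is more elaborate; this only shows the iteration pattern):

```python
import numpy as np
from scipy.spatial import cKDTree

def icp(source, target, iters=50):
    """Point-to-point ICP: returns the accumulated rigid transform (R, t)."""
    R, t = np.eye(3), np.zeros(3)
    tree = cKDTree(target)
    src = source.copy()
    for _ in range(iters):
        _, idx = tree.query(src)               # nearest-neighbour matching
        q = target[idx]
        cs, cq = src.mean(0), q.mean(0)
        H = (src - cs).T @ (q - cq)            # cross-covariance of matched pairs
        U, _, Vt = np.linalg.svd(H)
        D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
        Ri = Vt.T @ D @ U.T                    # best rotation (Kabsch, no reflection)
        ti = cq - Ri @ cs
        src = src @ Ri.T + ti
        R, t = Ri @ R, Ri @ t + ti             # compose with previous estimate
    return R, t
```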
An iterative reconstruction of cosmological initial density fields
NASA Astrophysics Data System (ADS)
Hada, Ryuichiro; Eisenstein, Daniel J.
2018-05-01
We present an iterative method to reconstruct the linear-theory initial conditions from the late-time cosmological matter density field, with the intent of improving the recovery of the cosmic distance scale from the baryon acoustic oscillations (BAOs). We present tests using the dark matter density field in both real and redshift space generated from an N-body simulation. In redshift space at z = 0.5, we find that the reconstructed displacement field obtained with our iterative method is more than 80% correlated with the true displacement field of the dark matter particles on scales k < 0.10h Mpc⁻¹. Furthermore, we show that the two-point correlation function of our reconstructed density field matches that of the initial density field substantially better, especially on small scales (<40h⁻¹ Mpc). Our redshift-space results are improved if we use an anisotropic smoothing so as to account for the reduced small-scale information along the line of sight in redshift space.
Numerical evaluation of mobile robot navigation in static indoor environment via EGAOR Iteration
NASA Astrophysics Data System (ADS)
Dahalan, A. A.; Saudi, A.; Sulaiman, J.; Din, W. R. W.
2017-09-01
One of the key issues in mobile robot navigation is the ability of the robot to move from an arbitrary start location to a specified goal location without colliding with any obstacles while traveling; this is known as the mobile robot path planning problem. In this paper we examine the performance of a robust searching algorithm that relies on the harmonic potentials of the environment to generate a smooth and safe path for mobile robot navigation in a static, known indoor environment. The harmonic potentials are discretized by using the Laplace operator to form a system of algebraic approximation equations. This algebraic linear system is then computed via the 4-Point Explicit Group Accelerated Over-Relaxation (4-EGAOR) iterative method for rapid computation. The performance of the proposed algorithm is then compared and analyzed against existing algorithms in terms of number of iterations and execution time. The results show that the proposed algorithm performs better than the existing methods.
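A minimal sketch of the harmonic-potential idea, substituting plain point SOR for the paper's 4-EGAOR block solver (simpler but slower): relax Laplace's equation with obstacles and walls held at 1 and the goal at 0, then follow steepest descent from the start.

```python
import numpy as np

def harmonic_potential(occ, goal, omega=1.8, sweeps=500):
    """Relax Laplace's equation by point SOR; occ marks obstacle cells."""
    u = np.ones(occ.shape, dtype=float)        # obstacles/walls stay at 1
    u[goal] = 0.0                              # goal is the global minimum
    for _ in range(sweeps):
        for i in range(1, occ.shape[0] - 1):
            for j in range(1, occ.shape[1] - 1):
                if occ[i, j] or (i, j) == goal:
                    continue
                gs = 0.25 * (u[i-1, j] + u[i+1, j] + u[i, j-1] + u[i, j+1])
                u[i, j] += omega * (gs - u[i, j])   # over-relaxed update
    return u

def descend(u, start, max_steps=10000):
    """Follow steepest descent of the potential from start to the goal."""
    path, p = [start], start
    for _ in range(max_steps):
        if u[p] == 0.0:
            break
        i, j = p
        p = min([(i-1, j), (i+1, j), (i, j-1), (i, j+1)], key=lambda q: u[q])
        path.append(p)
    return path
```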
The computational core and fixed point organization in Boolean networks
NASA Astrophysics Data System (ADS)
Correale, L.; Leone, M.; Pagnani, A.; Weigt, M.; Zecchina, R.
2006-03-01
In this paper, we analyse large random Boolean networks in terms of a constraint satisfaction problem. We first develop an algorithmic scheme which allows us to prune simple logical cascades and underdetermined variables, returning thereby the computational core of the network. Second, we apply the cavity method to analyse the number and organization of fixed points. We find in particular a phase transition between an easy and a complex regulatory phase, the latter being characterized by the existence of an exponential number of macroscopically separated fixed point clusters. The different techniques developed are reinterpreted as algorithms for the analysis of single Boolean networks, and they are applied in the analysis of and in silico experiments on the gene regulatory networks of baker's yeast (Saccharomyces cerevisiae) and the segment-polarity genes of the fruitfly Drosophila melanogaster.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Diaz, J. I.; Henry, J.; Ramos, A. M.
We prove the approximate controllability of several nonlinear parabolic boundary-value problems by means of two different methods: the first one can be called a Cancellation method and the second one uses the Kakutani fixed-point theorem.
Inferring Pre-shock Acoustic Field From Post-shock Pitot Pressure Measurement
NASA Astrophysics Data System (ADS)
Wang, Jian-Xun; Zhang, Chao; Duan, Lian; Xiao, Heng; Virginia Tech Team; Missouri Univ of Sci; Tech Team
2017-11-01
Linear interaction analysis (LIA) and an iterative ensemble Kalman method are used to convert post-shock Pitot pressure fluctuations to static pressure fluctuations in front of the shock. LIA serves as the forward model for the transfer function associated with a homogeneous field of acoustic waves passing through a nominally normal shock wave. The iterative ensemble Kalman method is then employed to infer the spectrum of upstream acoustic waves from the post-shock Pitot pressure measured at a single point. Several test cases with synthetic and real measurement data are used to demonstrate the merits of the proposed inference scheme. The study provides the basis for measuring tunnel freestream noise with intrusive probes in noisy supersonic wind tunnels.
Is scale-invariance in gauge-Yukawa systems compatible with the graviton?
NASA Astrophysics Data System (ADS)
Christiansen, Nicolai; Eichhorn, Astrid; Held, Aaron
2017-10-01
We explore whether perturbative interacting fixed points in matter systems can persist under the impact of quantum gravity. We first focus on semisimple gauge theories and show that the leading order gravity contribution evaluated within the functional Renormalization Group framework preserves the perturbative fixed-point structure in these models discovered in [J. K. Esbensen, T. A. Ryttov, and F. Sannino, Phys. Rev. D 93, 045009 (2016)., 10.1103/PhysRevD.93.045009]. We highlight that the quantum-gravity contribution alters the scaling dimension of the gauge coupling, such that the system exhibits an effective dimensional reduction. We secondly explore the effect of metric fluctuations on asymptotically safe gauge-Yukawa systems which feature an asymptotically safe fixed point [D. F. Litim and F. Sannino, J. High Energy Phys. 12 (2014) 178., 10.1007/JHEP12(2014)178]. The same effective dimensional reduction that takes effect in pure gauge theories also impacts gauge-Yukawa systems. There, it appears to lead to a split of the degenerate free fixed point into an interacting infrared attractive fixed point and a partially ultraviolet attractive free fixed point. The quantum-gravity induced infrared fixed point moves towards the asymptotically safe fixed point of the matter system, and annihilates it at a critical value of the gravity coupling. Even after that fixed-point annihilation, graviton effects leave behind new partially interacting fixed points for the matter sector.
NASA Astrophysics Data System (ADS)
Milić, Ivan; Atanacković, Olga
2014-10-01
State-of-the-art methods in multidimensional NLTE radiative transfer are based on the use of a local approximate lambda operator within either Jacobi or Gauss-Seidel iterative schemes. Here we propose another approach to the solution of 2D NLTE RT problems, Forth-and-Back Implicit Lambda Iteration (FBILI), developed earlier for 1D geometry. In order to present the method and examine its convergence properties we use the well-known instance of two-level atom line formation with complete frequency redistribution. In the formal solution of the RT equation we employ short characteristics with a two-point algorithm. Using an implicit representation of the source function in the computation of the specific intensities, we compute and store the coefficients of the linear relations J = a + bS between the mean intensity J and the corresponding source function S. The use of iteration factors in the 'local' coefficients of these implicit relations in two 'inward' sweeps of the 2D grid, along with the update of the source function in the other two 'outward' sweeps, leads to a solution four times faster than Jacobi's. Moreover, the update made in all four consecutive sweeps of the grid leads to an acceleration by a factor of 6-7 compared to the Jacobi iterative scheme.
Honda, O; Yanagawa, M; Inoue, A; Kikuyama, A; Yoshida, S; Sumikawa, H; Tobino, K; Koyama, M; Tomiyama, N
2011-01-01
Objective: We investigated the image quality of multiplanar reconstruction (MPR) using adaptive statistical iterative reconstruction (ASIR). Methods: Inflated and fixed lungs were scanned with a garnet-detector CT in high-resolution mode (HR mode) or non-high-resolution mode (non-HR mode), and MPR images were then reconstructed. Observers compared 15 MPR images of ASIR (40%) and ASIR (80%) with those of ASIR (0%), and assessed image quality using a visual five-point scale (1, definitely inferior; 5, definitely superior), with particular emphasis on normal pulmonary structures, artefacts, noise and overall image quality. Results: The mean overall image quality scores in HR mode were 3.67 with ASIR (40%) and 4.97 with ASIR (80%). Those in non-HR mode were 3.27 with ASIR (40%) and 3.90 with ASIR (80%). The mean artefact scores in HR mode were 3.13 with ASIR (40%) and 3.63 with ASIR (80%), but those in non-HR mode were 2.87 with ASIR (40%) and 2.53 with ASIR (80%). The mean scores of the other parameters were greater than 3, and those in HR mode were higher than those in non-HR mode. There were significant differences between ASIR (40%) and ASIR (80%) in overall image quality (p<0.01). Contrast medium in an injection syringe was scanned to analyse image quality; ASIR did not suppress the severe artefacts of the contrast medium. Conclusion: In general, MPR image quality with ASIR (80%) was superior to that with ASIR (40%). However, there was an increased incidence of artefacts with ASIR when CT images were obtained in non-HR mode. PMID:21081572
A Method for Scheduling Air Traffic with Uncertain En Route Capacity Constraints
NASA Technical Reports Server (NTRS)
Arneson, Heather; Bloem, Michael
2009-01-01
A method for scheduling ground delay and airborne holding for flights scheduled to fly through airspace with uncertain capacity constraints is presented. The method iteratively solves linear programs for departure rates and airborne holding as new probabilistic information about future airspace constraints becomes available. The objective function is the expected value of the weighted sum of ground and airborne delay. In order to limit operationally costly changes to departure rates, they are updated only when such an update would lead to a significant cost reduction. Simulation results show a 13% cost reduction over a rough approximation of current practices. Comparison between the proposed as-needed replanning method and a similar method that uses fixed-frequency replanning shows a typical cost reduction of 1% to 2%, and up to a 20% cost reduction in some cases.
Measurement Uncertainty of Dew-Point Temperature in a Two-Pressure Humidity Generator
NASA Astrophysics Data System (ADS)
Martins, L. Lages; Ribeiro, A. Silva; Alves e Sousa, J.; Forbes, Alistair B.
2012-09-01
This article describes the measurement uncertainty evaluation of the dew-point temperature when using a two-pressure humidity generator as a reference standard. The estimation of the dew-point temperature involves the solution of a non-linear equation for which iterative solution techniques, such as the Newton-Raphson method, are required. Previous studies have already been carried out using the GUM method and the Monte Carlo method but have not discussed the impact of the approximate numerical method used to provide the temperature estimation. One of the aims of this article is to take this approximation into account. Following the guidelines presented in the GUM Supplement 1, two alternative approaches can be developed: the forward measurement uncertainty propagation by the Monte Carlo method when using the Newton-Raphson numerical procedure; and the inverse measurement uncertainty propagation by Bayesian inference, based on prior available information regarding the usual dispersion of values obtained by the calibration process. The measurement uncertainties obtained using these two methods can be compared with previous results. Other relevant issues concerning this research are the broad application to measurements that require hygrometric conditions obtained from two-pressure humidity generators and, also, the ability to provide a solution that can be applied to similar iterative models. The research also studied the factors influencing both the use of the Monte Carlo method (such as the seed value and the convergence parameter) and the inverse uncertainty propagation using Bayesian inference (such as the pre-assigned tolerance, prior estimate, and standard deviation) in terms of their accuracy and adequacy.
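A minimal sketch of the forward (Monte Carlo through Newton-Raphson) route, assuming the two-pressure principle e_s(T_dew) = e_s(T_sat) · P_low / P_high with a Magnus-type saturation formula and illustrative input uncertainties (enhancement factors and other corrections omitted):

```python
import numpy as np

def e_s(t):
    """Magnus-type saturation vapour pressure (hPa), t in deg C."""
    return 6.112 * np.exp(17.62 * t / (243.12 + t))

def dew_point(t_sat, p_high, p_low, tol=1e-9):
    """Newton-Raphson solve of e_s(t_dew) = e_s(t_sat) * p_low / p_high."""
    target = e_s(t_sat) * p_low / p_high
    t = np.asarray(t_sat, dtype=float).copy()   # start from saturator temperature
    for _ in range(50):
        f = e_s(t) - target
        dfdt = e_s(t) * 17.62 * 243.12 / (243.12 + t) ** 2
        step = f / dfdt
        t -= step
        if np.all(np.abs(step) < tol):
            break
    return t

rng = np.random.default_rng(0)
n = 100_000                                     # Monte Carlo trials (GUM-S1 style)
td = dew_point(rng.normal(20.0, 0.01, n),       # saturator temperature, deg C
               rng.normal(2000.0, 0.5, n),      # saturator (high) pressure, hPa
               rng.normal(1000.0, 0.3, n))      # chamber (low) pressure, hPa
print(td.mean(), td.std())                      # estimate and standard uncertainty
```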
Analysis of a Multi-Fidelity Surrogate for Handling Real Gas Equations of State
NASA Astrophysics Data System (ADS)
Ouellet, Frederick; Park, Chanyoung; Rollin, Bertrand; Balachandar, S.
2017-06-01
The explosive dispersal of particles is a complex multiphase and multi-species fluid flow problem. In these flows, the detonation products of the explosive must be treated as real gas while the ideal gas equation of state is used for the surrounding air. As the products expand outward from the detonation point, they mix with ambient air and create a mixing region where both state equations must be satisfied. One of the most accurate, yet computationally expensive, methods to handle this problem is an algorithm that iterates between both equations of state until pressure and thermal equilibrium are achieved inside of each computational cell. This work aims to use a multi-fidelity surrogate model to replace this process. A Kriging model is used to produce a curve fit which interpolates selected data from the iterative algorithm using Bayesian statistics. We study the model performance with respect to the iterative method in simulations using a finite volume code. The model's (i) computational speed, (ii) memory requirements and (iii) computational accuracy are analyzed to show the benefits of this novel approach. Also, optimizing the combination of model accuracy and computational speed through the choice of sampling points is explained. This work was supported by the U.S. Department of Energy, National Nuclear Security Administration, Advanced Simulation and Computing Program as a Cooperative Agreement under the Predictive Science Academic Alliance Program under Contract No. DE-NA0002378.
47 CFR 101.137 - Interconnection of private operational fixed point-to-point microwave stations.
Code of Federal Regulations, 2013 CFR
2013-10-01
... point-to-point microwave stations. 101.137 Section 101.137 Telecommunication FEDERAL COMMUNICATIONS COMMISSION (CONTINUED) SAFETY AND SPECIAL RADIO SERVICES FIXED MICROWAVE SERVICES Technical Standards § 101.137 Interconnection of private operational fixed point-to-point microwave stations. Private...
47 CFR 101.137 - Interconnection of private operational fixed point-to-point microwave stations.
Code of Federal Regulations, 2011 CFR
2011-10-01
... point-to-point microwave stations. 101.137 Section 101.137 Telecommunication FEDERAL COMMUNICATIONS COMMISSION (CONTINUED) SAFETY AND SPECIAL RADIO SERVICES FIXED MICROWAVE SERVICES Technical Standards § 101.137 Interconnection of private operational fixed point-to-point microwave stations. Private...
47 CFR 101.137 - Interconnection of private operational fixed point-to-point microwave stations.
Code of Federal Regulations, 2012 CFR
2012-10-01
... point-to-point microwave stations. 101.137 Section 101.137 Telecommunication FEDERAL COMMUNICATIONS COMMISSION (CONTINUED) SAFETY AND SPECIAL RADIO SERVICES FIXED MICROWAVE SERVICES Technical Standards § 101.137 Interconnection of private operational fixed point-to-point microwave stations. Private...
47 CFR 101.137 - Interconnection of private operational fixed point-to-point microwave stations.
Code of Federal Regulations, 2014 CFR
2014-10-01
... point-to-point microwave stations. 101.137 Section 101.137 Telecommunication FEDERAL COMMUNICATIONS COMMISSION (CONTINUED) SAFETY AND SPECIAL RADIO SERVICES FIXED MICROWAVE SERVICES Technical Standards § 101.137 Interconnection of private operational fixed point-to-point microwave stations. Private...
47 CFR 101.137 - Interconnection of private operational fixed point-to-point microwave stations.
Code of Federal Regulations, 2010 CFR
2010-10-01
... point-to-point microwave stations. 101.137 Section 101.137 Telecommunication FEDERAL COMMUNICATIONS COMMISSION (CONTINUED) SAFETY AND SPECIAL RADIO SERVICES FIXED MICROWAVE SERVICES Technical Standards § 101.137 Interconnection of private operational fixed point-to-point microwave stations. Private...
How nonperturbative is the infrared regime of Landau gauge Yang-Mills correlators?
NASA Astrophysics Data System (ADS)
Reinosa, U.; Serreau, J.; Tissier, M.; Wschebor, N.
2017-07-01
We study the Landau gauge correlators of Yang-Mills fields for infrared Euclidean momenta in the context of a massive extension of the Faddeev-Popov Lagrangian which, we argue, underlies a variety of continuum approaches. Standard (perturbative) renormalization group techniques with a specific, infrared-safe renormalization scheme produce so-called decoupling and scaling solutions for the ghost and gluon propagators, which correspond to nontrivial infrared fixed points. The decoupling fixed point is infrared stable and weakly coupled, while the scaling fixed point is unstable and generically strongly coupled except for low dimensions d → 2. Under the assumption that such a scaling fixed point exists beyond one-loop order, we find that the corresponding ghost and gluon scaling exponents are, respectively, 2α_F = 2 − d and 2α_G = d at all orders of perturbation theory in the present renormalization scheme. We discuss the relation between the ghost wave function renormalization, the gluon screening mass, the scale of spectral positivity violation, and the gluon mass parameter. We also show that this scaling solution does not realize the standard Becchi-Rouet-Stora-Tyutin symmetry of the Faddeev-Popov Lagrangian. Finally, we discuss our findings in relation to the results of nonperturbative continuum methods.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Vargas, L.S.; Quintana, V.H.; Vannelli, A.
This paper deals with the use of Successive Linear Programming (SLP) for the solution of the Security-Constrained Economic Dispatch (SCED) problem. The authors give a tutorial description of an Interior Point Method (IPM) for the solution of Linear Programming (LP) problems, discussing important implementation issues that make this method far superior to the simplex method. A study of the convergence of the SLP technique and a practical criterion to avoid oscillatory behavior in the iteration process are also proposed. A comparison of the proposed method with an efficient simplex code (MINOS) is carried out by solving SCED problems on two standard IEEE systems. The results show that the interior point technique is reliable, accurate, and more than two times faster than the simplex algorithm.
Vectorized and multitasked solution of the few-group neutron diffusion equations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zee, S.K.; Turinsky, P.J.; Shayer, Z.
1989-03-01
A numerical algorithm with parallelism was used to solve the two-group, multidimensional neutron diffusion equations on computers characterized by shared memory, vector pipeline, and multi-CPU architecture features. Specifically, solutions were obtained on the Cray X/MP-48, the IBM-3090 with vector facilities, and the FPS-164. The material-centered mesh finite difference method approximation and outer-inner iteration method were employed. Parallelism was introduced in the inner iterations using the cyclic line successive overrelaxation iterative method and solving in parallel across lines. The outer iterations were completed using the Chebyshev semi-iterative method, which allows parallelism to be introduced in both space and energy groups. For the three-dimensional model, power, soluble boron, and transient fission product feedbacks were included. Concentrating on the pressurized water reactor (PWR), the thermal-hydraulic calculation of moderator density assumed single-phase flow and a closed flow channel, allowing parallelism to be introduced in the solution across the radial plane. Using a pinwise-detail, quarter-core model of a typical PWR in cycle 1, for the two-dimensional model without feedback the measured million floating point operations per second (MFLOPS)/vector speedups were 83/11.7, 18/2.2, and 2.4/5.6 on the Cray, IBM, and FPS without multitasking, respectively. Lower performance was observed with a coarser mesh, i.e., shorter vector length, due to vector pipeline start-up. For an 18 x 18 x 30 (x-y-z) three-dimensional model with feedback of the same core, MFLOPS/vector speedups of about 61/6.7 and an execution time of 0.8 CPU seconds on the Cray without multitasking were measured. Finally, using two CPUs and the vector pipelines of the Cray, a multitasking efficiency of 81% was noted for the three-dimensional model.
Implicit unified gas-kinetic scheme for steady state solutions in all flow regimes
NASA Astrophysics Data System (ADS)
Zhu, Yajun; Zhong, Chengwen; Xu, Kun
2016-06-01
This paper presents an implicit unified gas-kinetic scheme (UGKS) for non-equilibrium steady state flow computation. The UGKS is a direct modeling method for flow simulation in all regimes with the updates of both macroscopic flow variables and microscopic gas distribution function. By solving the macroscopic equations implicitly, a predicted equilibrium state can be obtained first through iterations. With the newly predicted equilibrium state, the evolution equation of the gas distribution function and the corresponding collision term can be discretized in a fully implicit way for fast convergence through iterations as well. The lower-upper symmetric Gauss-Seidel (LU-SGS) factorization method is implemented to solve both macroscopic and microscopic equations, which improves the efficiency of the scheme. Since the UGKS is a direct modeling method and its physical solution depends on the mesh resolution and the local time step, a physical time step needs to be fixed before using an implicit iterative technique with a pseudo-time marching step. Therefore, the physical time step in the current implicit scheme is determined by the same way as that in the explicit UGKS for capturing the physical solution in all flow regimes, but the convergence to a steady state speeds up through the adoption of a numerical time step with large CFL number. Many numerical test cases in different flow regimes from low speed to hypersonic ones, such as the Couette flow, cavity flow, and the flow passing over a cylinder, are computed to validate the current implicit method. The overall efficiency of the implicit UGKS can be improved by one or two orders of magnitude in comparison with the explicit one.
A superlinear interior points algorithm for engineering design optimization
NASA Technical Reports Server (NTRS)
Herskovits, J.; Asquier, J.
1990-01-01
We present a quasi-Newton interior points algorithm for nonlinear constrained optimization. It is based on a general approach consisting of the iterative solution, in the primal and dual spaces, of the equalities in the Karush-Kuhn-Tucker optimality conditions. This is done in such a way as to have primal and dual feasibility at each iteration, which ensures satisfaction of those optimality conditions at the limit points. This approach is very strong and efficient, since at each iteration it only requires the solution of two linear systems with the same matrix, instead of quadratic programming subproblems. It is also particularly appropriate for engineering design optimization, inasmuch as at each iteration a feasible design is obtained. The present algorithm uses a quasi-Newton approximation of the second derivative of the Lagrangian function in order to have superlinear asymptotic convergence. We discuss theoretical aspects of the algorithm and its computer implementation.
Low speed airfoil design and analysis
NASA Technical Reports Server (NTRS)
Eppler, R.; Somers, D. M.
1979-01-01
A low speed airfoil design and analysis program was developed which contains several unique features. In the design mode, the velocity distribution is specified not for just one but for many different angles of attack. Several iteration options are included which allow the trailing edge angle to be specified while other parameters are iterated. For airfoil analysis, a panel method is available which uses third-order panels having parabolic vorticity distributions. The flow condition is satisfied at the end points of the panels. Both sharp and blunt trailing edges can be analyzed. The integral boundary layer method, with its laminar separation bubble analog, empirical transition criterion, and precise turbulent boundary layer equations, compares very favorably with other methods, both integral and finite difference. Comparisons with experiment for several airfoils over a very wide Reynolds number range are discussed. Applications to high lift airfoil design are also demonstrated.
Blade design and analysis using a modified Euler solver
NASA Technical Reports Server (NTRS)
Leonard, O.; Vandenbraembussche, R. A.
1991-01-01
An iterative method for blade design based on an Euler solver, described in an earlier paper, is used to design compressor and turbine blades providing shock-free transonic flows. The method shows rapid convergence and indicates how sensitive the flow is to small modifications of the blade geometry, which the classical iterative use of analysis methods might not be able to capture. The relationship between the required Mach number distribution and the resulting geometry is discussed. Examples show how geometrical constraints imposed upon the blade shape can be respected by using free geometrical parameters or by relaxing the required Mach number distribution. The same code is used both for the design of the required geometry and for the off-design calculations. Examples illustrate the difficulty of designing blade shapes with optimal performance outside the design point as well.
Rapid water and lipid imaging with T2 mapping using a radial IDEAL-GRASE technique.
Li, Zhiqiang; Graff, Christian; Gmitro, Arthur F; Squire, Scott W; Bilgin, Ali; Outwater, Eric K; Altbach, Maria I
2009-06-01
Three-point Dixon methods have been investigated as a means to generate water and fat images without the effects of field inhomogeneities. Recently, an iterative algorithm (IDEAL, iterative decomposition of water and fat with echo asymmetry and least squares estimation) was combined with a gradient and spin-echo acquisition strategy (IDEAL-GRASE) to provide a time-efficient method for lipid-water imaging with correction for the effects of field inhomogeneities. The method presented in this work combines IDEAL-GRASE with radial data acquisition. Radial data sampling offers robustness to motion over Cartesian trajectories as well as the possibility of generating high-resolution T(2) maps in addition to the water and fat images. The radial IDEAL-GRASE technique is demonstrated in phantoms and in vivo for various applications including abdominal, pelvic, and cardiac imaging.
NASA Astrophysics Data System (ADS)
Xin, Meiting; Li, Bing; Yan, Xiao; Chen, Lei; Wei, Xiang
2018-02-01
A robust coarse-to-fine registration method based on the backpropagation (BP) neural network and shift window technology is proposed in this study. Specifically, there are three steps: coarse alignment between the model data and measured data, data simplification based on the BP neural network and point reservation in the contour region of point clouds, and fine registration with the reweighted iterative closest point algorithm. In the process of rough alignment, the initial rotation matrix and the translation vector between the two datasets are obtained. After performing subsequent simplification operations, the number of points can be reduced greatly. Therefore, the time and space complexity of the accurate registration can be significantly reduced. The experimental results show that the proposed method improves the computational efficiency without loss of accuracy.
Curvelet-domain multiple matching method combined with cubic B-spline function
NASA Astrophysics Data System (ADS)
Wang, Tong; Wang, Deli; Tian, Mi; Hu, Bin; Liu, Chengming
2018-05-01
Since the large number of surface-related multiples in marine data can seriously affect the results of data processing and interpretation, many researchers have attempted to develop effective methods to remove them. The most successful surface-related multiple elimination method was proposed based on data-driven theory. However, its elimination effect is unsatisfactory due to the existence of amplitude and phase errors. Although the subsequent curvelet-domain multiple-primary separation method achieved better results, poor computational efficiency prevented its application. In this paper, we adopt the cubic B-spline function to improve the traditional curvelet multiple matching method. First, we select a small number of unknowns as the basis points of the matching coefficient; second, we apply the cubic B-spline function to these basis points to reconstruct the matching array; third, we build the constrained solving equation based on the relationships between the predicted multiples, the matching coefficients, and the actual data; finally, we use the BFGS algorithm to iterate and realize fast solving of the sparsity-constrained multiple matching algorithm. Moreover, a soft-threshold method is used to make the method perform better. With the cubic B-spline function, the differences between the predicted multiples and the original data diminish, which results in less processing time to obtain optimal solutions and fewer iterative loops in the solving procedure based on the L1-norm constraint. Applications to synthetic and field data both validate the practicability and validity of the method.
Numerical method for solving the nonlinear four-point boundary value problems
NASA Astrophysics Data System (ADS)
Lin, Yingzhen; Lin, Jinnan
2010-12-01
In this paper, a new reproducing kernel space is constructed skillfully in order to solve a class of nonlinear four-point boundary value problems. The exact solution of the linear problem can be expressed in the form of series and the approximate solution of the nonlinear problem is given by the iterative formula. Compared with known investigations, the advantages of our method are that the representation of exact solution is obtained in a new reproducing kernel Hilbert space and accuracy of numerical computation is higher. Meanwhile we present the convergent theorem, complexity analysis and error estimation. The performance of the new method is illustrated with several numerical examples.
Rahman, Nafisur; Kashif, Mohammad
2010-03-01
Point and interval hypothesis tests performed to validate two simple and economical kinetic spectrophotometric methods for the assay of lansoprazole are described. The methods are based on the formation of chelate complexes of the drug with Fe(III) and Zn(II). The reaction is followed spectrophotometrically by measuring the rate of change of absorbance of the coloured chelates of the drug with Fe(III) and Zn(II) at 445 and 510 nm, respectively. The stoichiometric ratios of lansoprazole to Fe(III) and Zn(II) in the complexes were found to be 1:1 and 2:1, respectively. Initial-rate and fixed-time methods are adopted for the determination of drug concentrations. The calibration graphs are linear in the ranges 50-200 µg ml⁻¹ (initial-rate method) and 20-180 µg ml⁻¹ (fixed-time method) for the lansoprazole-Fe(III) complex, and 120-300 µg ml⁻¹ (initial-rate method) and 90-210 µg ml⁻¹ (fixed-time method) for the lansoprazole-Zn(II) complex. The inter-day and intra-day precision data showed good accuracy and precision of the proposed procedure for the analysis of lansoprazole. The point and interval hypothesis tests indicate that the proposed procedures are not biased.
A Weighted Closed-Form Solution for RGB-D Data Registration
NASA Astrophysics Data System (ADS)
Vestena, K. M.; Dos Santos, D. R.; Oilveira, E. M., Jr.; Pavan, N. L.; Khoshelham, K.
2016-06-01
Existing 3D indoor mapping approaches for RGB-D data are predominantly point-based and feature-based methods. In most cases, iterative closest point (ICP) and its variants are used for the pairwise registration process. Considering that the ICP algorithm requires a relatively accurate initial transformation and high overlap, a weighted closed-form solution for RGB-D data registration is proposed. In this solution, we weight and normalize the 3D points based on the theoretical random errors, and dual-number quaternions are used to represent the 3D rigid body motion. Basically, dual-number quaternions provide a closed-form solution by minimizing a cost function. The most important advantage of the closed-form solution is that it provides the optimal transformation in one step; it does not need good initial estimates and greatly decreases the demand for computer resources, in contrast to iterative methods. Our method first exploits the RGB information. We employ the scale-invariant feature transform (SIFT) for extracting, detecting, and matching features; it is able to detect and describe local features that are invariant to scaling and rotation. To detect and filter outliers, we use the random sample consensus (RANSAC) algorithm together with a statistical dispersion measure called the interquartile range (IQR). Afterwards, a new RGB-D loop-closure solution is implemented based on the volumetric information between pairs of point clouds and the dispersion of the random errors. The loop closure consists in recognizing when the sensor revisits some region. Finally, a globally consistent map is created to minimize the registration errors via graph-based optimization. The effectiveness of the proposed method is demonstrated with a Kinect dataset. The experimental results show that the proposed method can properly map an indoor environment, with an absolute accuracy of around 1.5% of the trajectory length.
Multidimensional radiative transfer with multilevel atoms. II. The non-linear multigrid method.
NASA Astrophysics Data System (ADS)
Fabiani Bendicho, P.; Trujillo Bueno, J.; Auer, L.
1997-08-01
A new iterative method for solving non-LTE multilevel radiative transfer (RT) problems in 1D, 2D or 3D geometries is presented. The scheme obtains the self-consistent solution of the kinetic and RT equations at the cost of only a few (<10) formal solutions of the RT equation. It combines, for the first time, non-linear multigrid iteration (Brandt, 1977, Math. Comp. 31, 333; Hackbusch, 1985, Multi-Grid Methods and Applications, Springer-Verlag, Berlin), an efficient multilevel RT scheme based on Gauss-Seidel iterations (cf. Trujillo Bueno & Fabiani Bendicho, 1995, ApJ, 455, 646), and accurate short-characteristics formal solution techniques. By combining a valid stopping criterion with a nested-grid strategy, a converged solution with the desired true error is automatically guaranteed. Contrary to current operator-splitting methods, the very high convergence speed of the new RT method does not deteriorate when the grid spatial resolution is increased. With this non-linear multigrid method, non-LTE problems discretized on N grid points are solved in O(N) operations. The nested multigrid RT method presented here is thus particularly attractive in complicated multilevel transfer problems where small grid sizes are required. The properties of the method are analyzed both analytically and with illustrative multilevel calculations for Ca II in 1D and 2D schematic model atmospheres.
The Most Remote Point Method for the Site Selection of the Future VGOS Network
NASA Astrophysics Data System (ADS)
Hase, Hayo; Pedreros, Felipe
2014-12-01
The VLBI Global Observing System (VGOS) will be part of the Global Geodetic Observing System (GGOS) and will consist of globally well-distributed geodetic observatories. The most remote point (MRP) method is used to identify gaps in the network geometry. In each iteration step, the identified most remote points are assumed to become new observatory sites, improving the homogeneity of the global network. New locations for VGOS observatories have been found in La Plata, Tahiti, O'Higgins, Galapagos, Colombo, and Syowa. This contribution is an excerpt of a work published in the Journal of Geodesy (DOI: 10.1007/s00190-014-0731-y) covering the site selection for the GGOS.
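A toy sketch of the MRP iteration under stated assumptions: great-circle distances on a unit sphere, a 1-degree scanning grid, and a made-up starting site list. The real method works with the actual observatory network and additional siting constraints.

```python
import numpy as np

sites = np.radians([[52.0, 13.0], [39.0, -77.0], [-33.0, 151.0]])  # lat, lon (made up)

def arc(lat1, lon1, lat2, lon2):
    """Great-circle distance in radians between points on the unit sphere."""
    return np.arccos(np.clip(
        np.sin(lat1) * np.sin(lat2) +
        np.cos(lat1) * np.cos(lat2) * np.cos(lon1 - lon2), -1.0, 1.0))

lat = np.radians(np.arange(-89.0, 90.0, 1.0))
lon = np.radians(np.arange(-180.0, 180.0, 1.0))
LAT, LON = np.meshgrid(lat, lon, indexing="ij")

for step in range(5):                      # adopt five most remote points
    # distance from every grid node to its nearest existing site
    d = np.min(np.stack([arc(LAT, LON, s[0], s[1]) for s in sites]), axis=0)
    i, j = np.unravel_index(np.argmax(d), d.shape)
    mrp = np.array([lat[i], lon[j]])
    print(f"MRP {step}: lat={np.degrees(mrp[0]):.0f}, "
          f"lon={np.degrees(mrp[1]):.0f}, gap={np.degrees(d[i, j]):.0f} deg")
    sites = np.vstack([sites, mrp])        # assume it becomes a new site
```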
An integrated method for atherosclerotic carotid plaque segmentation in ultrasound image.
Qian, Chunjun; Yang, Xiaoping
2018-01-01
Carotid artery atherosclerosis is an important cause of stroke, and ultrasound imaging has been widely used in the diagnosis of atherosclerosis. Segmenting atherosclerotic carotid plaque in ultrasound images is therefore an important task, and accurate plaque segmentation is helpful for the measurement of carotid plaque burden. In this paper, we propose and evaluate a novel learning-based integrated framework for plaque segmentation. In our study, four different classification algorithms, along with the auto-context iterative algorithm, were employed to effectively integrate features from ultrasound images, together with the iteratively estimated and refined probability maps, for pixel-wise classification. The four classification algorithms were support vector machine with linear kernel, support vector machine with radial basis function kernel, AdaBoost, and random forest. The plaque segmentation was implemented on the generated probability map. The performance of the four learning-based plaque segmentation methods was tested on 29 B-mode ultrasound images. The evaluation indices for our proposed methods consisted of sensitivity, specificity, Dice similarity coefficient, overlap index, error of area, absolute error of area, point-to-point distance, and Hausdorff point-to-point distance, along with the area under the ROC curve. The segmentation method integrating the random forest and an auto-context model obtained the best results (sensitivity 80.4 ± 8.4%, specificity 96.5 ± 2.0%, Dice similarity coefficient 81.0 ± 4.1%, overlap index 68.3 ± 5.8%, error of area -1.02 ± 18.3%, absolute error of area 14.7 ± 10.9%, point-to-point distance 0.34 ± 0.10 mm, Hausdorff point-to-point distance 1.75 ± 1.02 mm, and area under the ROC curve 0.897), which were nearly the best compared with those from existing methods. The proposed learning-based integrated framework could be useful for atherosclerotic carotid plaque segmentation, which will be helpful for the measurement of carotid plaque burden. Copyright © 2017 Elsevier B.V. All rights reserved.
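A minimal sketch of the auto-context iteration with a random forest, assuming synthetic image data and toy per-pixel features; the paper's ultrasound features, training protocol, and evaluation are not reproduced. Each round appends the previous round's probability map to the feature vector.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(1)
img = rng.random((64, 64))
img[20:40, 20:44] += 0.8                        # bright "plaque" region (synthetic)
truth = np.zeros((64, 64), dtype=int)
truth[20:40, 20:44] = 1

def features(image, prob):
    """Per-pixel features: intensity, 3x3 local mean, current probability."""
    pad = np.pad(image, 1, mode="edge")
    mean3 = sum(pad[dy:dy + 64, dx:dx + 64] for dy in range(3) for dx in range(3)) / 9
    return np.stack([image.ravel(), mean3.ravel(), prob.ravel()], axis=1)

prob = np.full((64, 64), 0.5)                   # uninformative initial map
for it in range(3):                             # auto-context iterations
    X = features(img, prob)
    clf = RandomForestClassifier(n_estimators=50, random_state=0)
    clf.fit(X, truth.ravel())
    prob = clf.predict_proba(X)[:, 1].reshape(64, 64)  # refined map feeds next round
seg = prob > 0.5
print("pixel accuracy:", (seg == truth).mean())
```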
Associative memory in an analog iterated-map neural network
NASA Astrophysics Data System (ADS)
Marcus, C. M.; Waugh, F. R.; Westervelt, R. M.
1990-03-01
The behavior of an analog neural network with parallel dynamics is studied analytically and numerically for two associative-memory learning algorithms, the Hebb rule and the pseudoinverse rule. Phase diagrams in the parameter space of analog gain β and storage ratio α are presented. For both learning rules, the networks have large ``recall'' phases in which retrieval states exist and convergence to a fixed point is guaranteed by a global stability criterion. We also demonstrate numerically that using a reduced analog gain increases the probability of recall starting from a random initial state. This phenomenon is comparable to thermal annealing used to escape local minima but has the advantage of being deterministic, and therefore easily implemented in electronic hardware. Similarities and differences between analog neural networks and networks with two-state neurons at finite temperature are also discussed.
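A small sketch of such an analog iterated-map network with Hebb-rule storage and parallel dynamics x <- tanh(beta W x); the network size, storage ratio, and gain values are illustrative, not the phase-diagram values of the paper.

```python
import numpy as np

rng = np.random.default_rng(2)
N, p = 200, 10                                  # neurons, stored patterns
xi = rng.choice([-1.0, 1.0], size=(p, N))       # random binary memories
W = (xi.T @ xi) / N                             # Hebb rule
np.fill_diagonal(W, 0.0)

def iterate(x, beta, steps=100):
    for _ in range(steps):                      # parallel (synchronous) update
        x = np.tanh(beta * (W @ x))
    return x

probe = xi[0] * np.where(rng.random(N) < 0.2, -1, 1)  # corrupt 20% of the bits
for beta in (1.5, 4.0, 50.0):                   # reduced vs. high analog gain
    x = iterate(probe.astype(float), beta)
    overlap = abs(x @ xi[0]) / N                # recall quality
    print(f"beta={beta:5.1f}  overlap with stored pattern: {overlap:.2f}")
```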
Dielectric response of periodic systems from quantum Monte Carlo calculations.
Umari, P; Williamson, A J; Galli, Giulia; Marzari, Nicola
2005-11-11
We present a novel approach that allows us to calculate the dielectric response of periodic systems in the quantum Monte Carlo formalism. We employ a many-body generalization of the electric-enthalpy functional, where the coupling with the field is expressed via the Berry-phase formulation of the macroscopic polarization. A self-consistent local Hamiltonian then determines the ground-state wave function, allowing for accurate diffusion quantum Monte Carlo calculations in which the fixed point of the polarization is estimated from the average over an iterative sequence, sampled via forward walking. This approach has been validated for the case of an isolated hydrogen atom and then applied to a periodic system, to calculate the dielectric susceptibility of molecular-hydrogen chains. The results are in excellent agreement with the best estimates obtained from the extrapolation of quantum-chemistry calculations.
Information flow in layered networks of non-monotonic units
NASA Astrophysics Data System (ADS)
Schittler Neves, Fabio; Martim Schubert, Benno; Erichsen, Rubem, Jr.
2015-07-01
Layered neural networks are feedforward structures that yield robust parallel and distributed pattern recognition. Even though much attention has been paid to pattern retrieval properties in such systems, many aspects of their dynamics are not yet well characterized or understood. In this work we study, at different temperatures, the memory activity and information flow through layered networks in which the elements are the simplest binary odd non-monotonic functions. Our results show that, considering a standard Hebbian learning approach, the network information content always has its maximum at the monotonic limit, even though the maximum memory capacity can be found at non-monotonic values for small enough temperatures. Furthermore, we show that such systems exhibit rich macroscopic dynamics, including not only fixed-point solutions of their iterative map, but also cyclic and chaotic attractors that likewise carry information.
NASA Astrophysics Data System (ADS)
Aristoff, Jeffrey M.; Horwood, Joshua T.; Poore, Aubrey B.
2014-01-01
We present a new variable-step Gauss-Legendre implicit-Runge-Kutta-based approach for orbit and uncertainty propagation, VGL-IRK, which includes adaptive step-size error control and which collectively, rather than individually, propagates nearby sigma points or states. The performance of VGL-IRK is compared to a professional (variable-step) implementation of Dormand-Prince 8(7) (DP8) and to a fixed-step, optimally-tuned, implementation of modified Chebyshev-Picard iteration (MCPI). Both nearly-circular and highly-elliptic orbits are considered using high-fidelity gravity models and realistic integration tolerances. VGL-IRK is shown to be up to eleven times faster than DP8 and up to 45 times faster than MCPI (for the same accuracy), in a serial computing environment. Parallelization of VGL-IRK and MCPI is also discussed.
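A sketch of the core building block: one step of the two-stage (fourth-order) Gauss-Legendre implicit Runge-Kutta method, with the implicit stage equations solved by plain fixed-point iteration. VGL-IRK's variable step-size error control and collective sigma-point propagation are not reproduced; the test problem and step size are illustrative.

```python
import numpy as np

s3 = np.sqrt(3.0)
A = np.array([[0.25, 0.25 - s3 / 6.0],
              [0.25 + s3 / 6.0, 0.25]])          # Butcher matrix (2-stage GL)
b = np.array([0.5, 0.5])
c = np.array([0.5 - s3 / 6.0, 0.5 + s3 / 6.0])

def gl2_step(f, t, y, h, iters=20):
    """Advance y' = f(t, y) by one 4th-order Gauss-Legendre IRK step."""
    k = np.array([f(t, y), f(t, y)])             # initial stage guesses
    for _ in range(iters):                       # fixed-point iteration on stages
        k = np.array([f(t + c[i] * h, y + h * (A[i] @ k)) for i in range(2)])
    return y + h * (b @ k)

def orbit(t, y):
    """Kepler-like planar orbit: y = (position, velocity)."""
    r = np.linalg.norm(y[:2])
    return np.concatenate([y[2:], -y[:2] / r**3])

y = np.array([1.0, 0.0, 0.0, 1.0])               # unit circular orbit
h, T = 0.05, 2 * np.pi
for _ in range(int(T / h)):
    y = gl2_step(orbit, 0.0, y, h)
print("radius drift after one period:", abs(np.linalg.norm(y[:2]) - 1.0))
```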
Multiscale Reconstruction for Magnetic Resonance Fingerprinting
Pierre, Eric Y.; Ma, Dan; Chen, Yong; Badve, Chaitra; Griswold, Mark A.
2015-01-01
Purpose: To reduce the acquisition time needed to obtain reliable parametric maps with Magnetic Resonance Fingerprinting (MRF). Methods: An iterative denoising algorithm is initialized by reconstructing the MRF image series at low image resolution. For subsequent iterations, the method enforces pixel-wise fidelity to the best-matching dictionary template and then enforces fidelity to the acquired data at slightly higher spatial resolution. After convergence, parametric maps with the desired spatial resolution are obtained through template matching of the final image series. The proposed method was evaluated on phantom and in vivo data using a highly undersampled, variable-density spiral trajectory and compared with the original MRF method; the benefits of additional sparsity constraints were also evaluated. When available, gold-standard parameter maps were used to quantify the performance of each method. Results: The proposed approach converged to accurate parametric maps with as few as 300 time points of acquisition, compared to 1000 in the original MRF work. Simultaneous quantification of T1, T2, proton density (PD) and B0 field variations in the brain was achieved in vivo for a 256×256 matrix with a total acquisition time of 10.2 s, representing a 3-fold reduction in acquisition time. Conclusions: The proposed iterative multiscale reconstruction reliably increases MRF acquisition speed and accuracy. PMID:26132462
Synthesis of a controller for stabilizing the motion of a rigid body about a fixed point
NASA Astrophysics Data System (ADS)
Zabolotnov, Yu. M.; Lobanov, A. A.
2017-05-01
A method for the approximate design of an optimal controller for stabilizing the motion of a rigid body about a fixed point is considered. It is assumed that the rigid body motion is nearly the motion in the classical Lagrange case. The method is based on the combined use of the Bellman dynamic programming principle and the averaging method, the latter being used to solve the Hamilton-Jacobi-Bellman equation approximately, which permits synthesizing the controller. The proposed method for controller design can be used in many problems close to the problem of motion of the Lagrange top (the motion of a rigid body in the atmosphere, the motion of a rigid body fastened to a cable during deployment of an orbital cable system, etc.).
A Hardware-Accelerated Quantum Monte Carlo framework (HAQMC) for N-body systems
NASA Astrophysics Data System (ADS)
Gothandaraman, Akila; Peterson, Gregory D.; Warren, G. Lee; Hinde, Robert J.; Harrison, Robert J.
2009-12-01
Interest in the study of structural and energetic properties of highly quantum clusters, such as inert gas clusters, has motivated the development of a hardware-accelerated framework for Quantum Monte Carlo simulations. In the Quantum Monte Carlo method, the properties of a system of atoms, such as the ground-state energies, are averaged over a number of iterations. Our framework is aimed at accelerating the computations in each iteration of the QMC application by offloading the calculation of properties, namely energy and trial wave function, onto reconfigurable hardware. This gives a user the capability to run simulations for a large number of iterations, thereby reducing the statistical uncertainty in the properties, and for larger clusters. This framework is designed to run on the Cray XD1 high performance reconfigurable computing platform, which exploits the coarse-grained parallelism of the processor along with the fine-grained parallelism of the reconfigurable computing devices available in the form of field-programmable gate arrays. In this paper, we illustrate the functioning of the framework, which can be used to calculate the energies for a model cluster of helium atoms. In addition, we present the capabilities of the framework that allow the user to vary the chemical identities of the simulated atoms. Program summary. Program title: Hardware Accelerated Quantum Monte Carlo (HAQMC) Catalogue identifier: AEEP_v1_0 Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEEP_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 691 537 No. of bytes in distributed program, including test data, etc.: 5 031 226 Distribution format: tar.gz Programming language: C/C++ for the QMC application, VHDL and Xilinx 8.1 ISE/EDK tools for FPGA design and development Computer: Cray XD1 consisting of a dual-core, dual-processor AMD Opteron 2.2 GHz with a Xilinx Virtex-4 (V4LX160) or Xilinx Virtex-II Pro (XC2VP50) FPGA per node. We use the compute node with the Xilinx Virtex-4 FPGA Operating system: Red Hat Enterprise Linux OS Has the code been vectorised or parallelized?: Yes Classification: 6.1 Nature of problem: Quantum Monte Carlo is a practical method to solve the Schrödinger equation for large many-body systems and obtain the ground-state properties of such systems. This method involves the sampling of a number of configurations of atoms and averaging the properties of the configurations over a number of iterations. We are interested in applying the QMC method to obtain the energy and other properties of highly quantum clusters, such as inert gas clusters. Solution method: The proposed framework provides a combined hardware-software approach, in which the QMC simulation is performed on the host processor, with the computationally intensive functions such as energy and trial wave function computations mapped onto the field-programmable gate array (FPGA) logic device attached as a co-processor to the host processor. We perform the QMC simulation for a number of iterations as in the case of our original software QMC approach, to reduce the statistical uncertainty of the results.
However, our proposed HAQMC framework accelerates each iteration of the simulation by significantly reducing the time taken to calculate the ground-state properties of the configurations of atoms, thereby accelerating the overall QMC simulation. We provide a generic interpolation framework that can be extended to study a variety of pure and doped atomic clusters, irrespective of the chemical identities of the atoms. For the FPGA implementation of the properties, we use a two-region approach for accurately computing the properties over the entire domain, and we employ deep pipelines and fixed-point arithmetic for all our calculations, guaranteeing the accuracy required for our simulation.
Wall shear stress fixed points in blood flow
NASA Astrophysics Data System (ADS)
Arzani, Amirhossein; Shadden, Shawn
2017-11-01
Patient-specific computational fluid dynamics produces large datasets, and wall shear stress (WSS) is one of the most important parameters due to its close connection with the biological processes at the wall. While some studies have investigated WSS vectorial features, the WSS fixed points have not received much attention. In this talk, we will discuss the importance of WSS fixed points from three viewpoints. First, we will review how WSS fixed points relate to the flow physics away from the wall. Second, we will discuss how certain types of WSS fixed points lead to high biochemical surface concentration in cardiovascular mass transport problems. Finally, we will introduce a new measure to track the exposure of endothelial cells to WSS fixed points.
Sparse and Adaptive Diffusion Dictionary (SADD) for recovering intra-voxel white matter structure.
Aranda, Ramon; Ramirez-Manzanares, Alonso; Rivera, Mariano
2015-12-01
On the analysis of Diffusion-Weighted Magnetic Resonance Images, multi-compartment models overcome the limitations of the well-known Diffusion Tensor model for fitting in vivo brain axonal orientations at voxels with fiber crossings, branchings, kissings or bifurcations. Some successful multi-compartment methods are based on diffusion dictionaries. The diffusion dictionary-based methods assume that the observed Magnetic Resonance signal at each voxel is a linear combination of fixed dictionary elements (dictionary atoms), where the atoms are fixed along different orientations and diffusivity profiles. In this work, we present a sparse and adaptive diffusion dictionary method based on the Diffusion Basis Functions Model to estimate in vivo brain axonal fiber populations. Our proposal overcomes the following limitations of diffusion dictionary-based methods: the limited angular resolution and the fixed shapes of the atom set. We propose to iteratively re-estimate the orientations and the diffusivity profile of the atoms independently at each voxel by using a simplified and easier-to-solve mathematical approach. As a result, we improve the fitting of the Diffusion-Weighted Magnetic Resonance signal. The advantages with respect to the former Diffusion Basis Functions method are demonstrated on the synthetic dataset used in the 2012 HARDI Reconstruction Challenge and on in vivo human data. We demonstrate that the improvements obtained in the intra-voxel fiber structure estimations benefit brain research by allowing better tractography estimates, and hence an accurate computation of brain connectivity patterns. Copyright © 2015 Elsevier B.V. All rights reserved.
Huang, Jinhong; Guo, Li; Feng, Qianjin; Chen, Wufan; Feng, Yanqiu
2015-07-21
Image reconstruction from undersampled k-space data accelerates magnetic resonance imaging (MRI) by exploiting image sparseness in certain transform domains. Employing image patch representation over a learned dictionary has the advantage of being adaptive to local image structures and thus can better sparsify images than using fixed transforms (e.g. wavelets and total variations). Dictionary learning methods have recently been introduced to MRI reconstruction, and these methods demonstrate significantly reduced reconstruction errors compared to sparse MRI reconstruction using fixed transforms. However, the synthesis sparse coding problem in dictionary learning is NP-hard and computationally expensive. In this paper, we present a novel sparsity-promoting orthogonal dictionary updating method for efficient image reconstruction from highly undersampled MRI data. The orthogonality imposed on the learned dictionary enables the minimization problem in the reconstruction to be solved by an efficient optimization algorithm which alternately updates representation coefficients, orthogonal dictionary, and missing k-space data. Moreover, both sparsity level and sparse representation contribution using updated dictionaries gradually increase during iterations to recover more details, assuming the progressively improved quality of the dictionary. Simulation and real data experimental results both demonstrate that the proposed method is approximately 10 to 100 times faster than the K-SVD-based dictionary learning MRI method and simultaneously improves reconstruction accuracy.
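A minimal sketch of the orthogonality idea under stated assumptions: with a square orthogonal dictionary D, sparse coding reduces to thresholding D^T x, and the dictionary update given the codes is an orthogonal Procrustes problem with a closed-form SVD solution. The data, patch size, and sparsity level are made up, and the paper's k-space data-consistency step is omitted.

```python
import numpy as np

rng = np.random.default_rng(3)
n, m = 64, 500
X = rng.normal(size=(n, m))                     # training patches (columns)

def hard_threshold(Z, k):
    """Keep the k largest-magnitude coefficients per column."""
    out = np.zeros_like(Z)
    idx = np.argsort(-np.abs(Z), axis=0)[:k]
    cols = np.arange(Z.shape[1])
    out[idx, cols] = Z[idx, cols]
    return out

D = np.linalg.qr(rng.normal(size=(n, n)))[0]    # random orthogonal initialization
for it in range(10):                            # alternate coding / dictionary update
    A = hard_threshold(D.T @ X, k=8)            # exact sparse coding: D is orthogonal
    U, _, Vt = np.linalg.svd(X @ A.T)           # Procrustes: min ||X - D A||_F
    D = U @ Vt                                  # closed-form orthogonal update
    print(f"iter {it}: fit error {np.linalg.norm(X - D @ A):.2f}")
```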
A Robust False Matching Points Detection Method for Remote Sensing Image Registration
NASA Astrophysics Data System (ADS)
Shan, X. J.; Tang, P.
2015-04-01
Given the influences of illumination, imaging angle, and geometric distortion, among others, false matching points still occur in all image registration algorithms. Therefore, false matching point detection is an important step in remote sensing image registration. Random Sample Consensus (RANSAC) is typically used to detect false matching points, but the RANSAC method cannot detect all false matching points in some remote sensing images. Therefore, a robust false matching point detection method based on a K-nearest-neighbour (K-NN) graph (KGD) is proposed in this paper to obtain robust and highly accurate results. The KGD method starts with the construction of the K-NN graph in one image: a K-NN graph is first generated for each matching point and its K nearest matching points. A local transformation model for each matching point is then obtained by using its K nearest matching points, and the error of each matching point is computed by using its transformation model. Last, the L matching points with the largest errors are identified as false matching points and removed. This process iterates until all errors are smaller than a given threshold. In addition, the KGD method can be used in combination with other methods, such as RANSAC. Several remote sensing images with different resolutions and terrains are used in the experiment. We evaluate the performance of the KGD method, the RANSAC + KGD method, RANSAC, and Graph Transformation Matching (GTM). The experimental results demonstrate the superior performance of the KGD and RANSAC + KGD methods.
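An illustrative sketch of the KGD loop as described above, assuming a local affine transformation model, synthetic matches, and arbitrary values for K, L, and the error threshold.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 200
src = rng.random((n, 2)) * 1000
dst = src @ np.array([[1.01, 0.02], [-0.02, 1.01]]) + np.array([5.0, -3.0])
bad = rng.choice(n, 20, replace=False)
dst[bad] += rng.normal(scale=80.0, size=(20, 2))          # inject false matches

def kgd(src, dst, K=8, L=5, tol=3.0):
    keep = np.arange(len(src))
    while True:
        s, d = src[keep], dst[keep]
        errs = np.empty(len(keep))
        for i in range(len(keep)):
            nn = np.argsort(np.linalg.norm(s - s[i], axis=1))[1:K + 1]
            Ah = np.hstack([s[nn], np.ones((K, 1))])      # local affine fit
            T, *_ = np.linalg.lstsq(Ah, d[nn], rcond=None)
            pred = np.hstack([s[i], 1.0]) @ T             # transfer the point
            errs[i] = np.linalg.norm(pred - d[i])
        if errs.max() < tol:
            return keep
        keep = np.delete(keep, np.argsort(-errs)[:L])     # drop the L worst matches

good = kgd(src, dst)
print(len(good), "matches kept;", np.isin(bad, good).sum(), "false matches survived")
```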
A new extrapolation cascadic multigrid method for three dimensional elliptic boundary value problems
NASA Astrophysics Data System (ADS)
Pan, Kejia; He, Dongdong; Hu, Hongling; Ren, Zhengyong
2017-09-01
In this paper, we develop a new extrapolation cascadic multigrid method, which makes it possible to solve three-dimensional elliptic boundary value problems with over 100 million unknowns on a desktop computer in half a minute. First, by combining Richardson extrapolation and quadratic finite element (FE) interpolation of the numerical solutions on two levels of grids (current and previous), we provide a quite good initial guess for the iterative solution on the next finer grid, which is a third-order approximation to the FE solution. The resulting large linear system from the FE discretization is then solved by the Jacobi-preconditioned conjugate gradient (JCG) method with the obtained initial guess. Additionally, instead of performing a fixed number of iterations as in existing cascadic multigrid methods, a relative residual tolerance is introduced in the JCG solver, which enables us to conveniently obtain the numerical solution with the desired accuracy. Moreover, a simple method based on the midpoint extrapolation formula is proposed to achieve higher-order accuracy on the finest grid cheaply and directly. Test results from four examples, including two smooth problems with both constant and variable coefficients, an H3-regular problem, and an anisotropic problem, are reported to show that the proposed method has much better efficiency than the classical V-cycle and W-cycle multigrid methods. Finally, we explain why our method is highly efficient for solving these elliptic problems.
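A sketch of the initial-guess ingredient on a 1D model problem: a coarse-grid solution is interpolated to the finer grid and handed to a conjugate-gradient solver with a relative-residual stopping tolerance. Linear interpolation stands in for the paper's Richardson-extrapolation-plus-quadratic-FE interpolation, and plain CG for JCG; the point is only that a good initial guess cuts the iteration count.

```python
import numpy as np

def A_mul(u, h):
    """Apply the 1D Poisson operator -u'' (Dirichlet) to interior unknowns."""
    Au = 2.0 * u
    Au[:-1] -= u[1:]
    Au[1:] -= u[:-1]
    return Au / h**2

def cg(f, h, u0, rtol=1e-8):
    """Conjugate gradients with a relative-residual stopping tolerance."""
    u = u0.copy()
    r = f - A_mul(u, h)
    p, rs, it = r.copy(), r @ r, 0
    while np.sqrt(rs) > rtol * np.linalg.norm(f) and it < 5000:
        Ap = A_mul(p, h)
        alpha = rs / (p @ Ap)
        u += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        p = r + (rs_new / rs) * p
        rs, it = rs_new, it + 1
    return u, it

nc, nf = 127, 255                          # interior points on coarse/fine grids
hc, hf = 1.0 / (nc + 1), 1.0 / (nf + 1)
xc, xf = np.arange(1, nc + 1) * hc, np.arange(1, nf + 1) * hf
rhs = lambda x: np.exp(x)                  # smooth, non-trivial right-hand side
uc, _ = cg(rhs(xc), hc, np.zeros(nc))      # cheap coarse-grid solve
u0 = np.interp(xf, np.concatenate([[0.0], xc, [1.0]]),
               np.concatenate([[0.0], uc, [0.0]]))  # interpolate up as guess
_, it_cold = cg(rhs(xf), hf, np.zeros(nf))
_, it_warm = cg(rhs(xf), hf, u0)
print(f"CG iterations from zero: {it_cold}, from interpolated guess: {it_warm}")
```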
NASA Astrophysics Data System (ADS)
Pokhodun, A. I.; Ivanova, A. G.; Duysebayeva, K. K.; Ivanova, K. P.
2015-01-01
Regional comparison of type S thermocouples at the freezing points of zinc, aluminium and copper was initiated by COOMET TC1.1-10 (the COOMET technical committee 'Thermometry and thermal physics'). Three NMIs took part in the COOMET regional comparison: the D I Mendeleev Institute for Metrology (VNIIM, Russian Federation), the National Scientific Centre 'Institute of Metrology' (NSC IM, Ukraine), and the Republic State Enterprise 'Kazakhstan Institute of Metrology' (KazInMetr, Republic of Kazakhstan). VNIIM (Russia) was chosen as the coordinator and pilot of the regional comparison, and a star-type comparison was used. The participants KazInMetr and NSC IM constructed the type S thermocouples and calibrated them at three fixed points (the zinc, aluminium and copper points) using methods of ITS-90 fixed-point realization. The thermocouples were sent to VNIIM together with the results of the calibration at the three fixed points, the values of the inhomogeneity at a temperature of 200 °C, and the uncertainty evaluations of the results. For the calibration of the thermocouples the same VNIIM fixed-point cells were used. The participating laboratories repeated the calibration of the thermocouples at the zinc, aluminium and copper points after their return, to determine the stability of the results. The aim of the comparison was to evaluate the equivalence of type S thermocouple calibration at fixed points by the NMIs, in order to confirm the corresponding lines of the international database of NMIs' Calibration and Measurement Capabilities (CMC). This paper is the final report of the comparison, including an analysis of the uncertainty of the measurement results. Note that this text is that which appears in Appendix B of the BIPM key comparison database kcdb.bipm.org/. The final report has been peer-reviewed and approved for publication by the CCT WG-KC, according to the provisions of the CIPM Mutual Recognition Arrangement (CIPM MRA).
NASA Astrophysics Data System (ADS)
Katzav, Eytan
2013-04-01
In this paper, a mode of using the Dynamic Renormalization Group (DRG) method is suggested in order to cope with inconsistent results obtained when applying it to a continuous family of one-dimensional nonlocal models. The key observation is that the correct fixed-point dynamical system has to be identified during the analysis in order to account for all the relevant terms that are generated under renormalization. This is well established for static problems, but poorly implemented in dynamical ones. An application of this approach to a nonlocal extension of the Kardar-Parisi-Zhang equation resolves certain problems in one dimension: obviously problematic predictions are eliminated, and the existing exact analytic results are recovered.
Tunnel Point Cloud Filtering Method Based on Elliptic Cylindrical Model
NASA Astrophysics Data System (ADS)
Zhu, Ningning; Jia, Yonghong; Luo, Lun
2016-06-01
The large number of bolts and screws attached to the subway shield ring plates, along with the many accessory metal stents and electrical equipment mounted on the tunnel walls, cause the laser point cloud data to include many non-tunnel-section points (hereinafter referred to as non-points), thereby affecting the accuracy of modeling and deformation monitoring. This paper proposes a filtering method for the point cloud based on an elliptic cylindrical model. The original laser point cloud data is first projected onto a horizontal plane, and a searching algorithm is given to extract the edge points of both sides, which are then used to fit the tunnel central axis. Along the axis the point cloud is segmented regionally and then fitted as a smooth elliptic cylindrical surface by means of iteration. This processing enables the automatic filtering of the inner-wall non-points. Two groups of experiments showed consistent results: the elliptic cylindrical model-based method can effectively filter out the non-points and meet the accuracy requirements for subway deformation monitoring. The method provides a new mode for the periodic monitoring of all-around tunnel section deformation in routine subway operation and maintenance.
Novel switching method for single-phase NPC three-level inverter with neutral-point voltage control
NASA Astrophysics Data System (ADS)
Lee, June-Seok; Lee, Seung-Joo; Lee, Kyo-Beum
2018-02-01
This paper proposes a novel switching method with neutral-point voltage control for a single-phase neutral-point-clamped three-level inverter (SP-NPCI) used in photovoltaic systems. The proposed switching method improves the efficiency of the SP-NPCI: the main concept is to fix the switching state of one leg, so that the switching loss decreases and the total efficiency is improved. In addition, it enables the maximum power-point-tracking operation to be performed by applying the proposed neutral-point voltage control algorithm, which is implemented by modifying the reference signal. Simulation and experimental results verify the performance of the novel switching method with neutral-point voltage control.
Common fixed points in best approximation for Banach operator pairs with Ciric type I-contractions
NASA Astrophysics Data System (ADS)
Hussain, N.
2008-02-01
The common fixed point theorems, similar to those of Ciric [Lj.B. Ciric, On a common fixed point theorem of a Gregus type, Publ. Inst. Math. (Beograd) (N.S.) 49 (1991) 174-178; Lj.B. Ciric, On Diviccaro, Fisher and Sessa open questions, Arch. Math. (Brno) 29 (1993) 145-152; Lj.B. Ciric, On a generalization of Gregus fixed point theorem, Czechoslovak Math. J. 50 (2000) 449-458], Fisher and Sessa [B. Fisher, S. Sessa, On a fixed point theorem of Gregus, Internat. J. Math. Math. Sci. 9 (1986) 23-28], Jungck [G. Jungck, On a fixed point theorem of Fisher and Sessa, Internat. J. Math. Math. Sci. 13 (1990) 497-500] and Mukherjee and Verma [R.N. Mukherjee, V. Verma, A note on fixed point theorem of Gregus, Math. Japon. 33 (1988) 745-749], are proved for a Banach operator pair. As applications, common fixed point and approximation results for Banach operator pair satisfying Ciric type contractive conditions are obtained without the assumption of linearity or affinity of either T or I. Our results unify and generalize various known results to a more general class of noncommuting mappings.
NASA Technical Reports Server (NTRS)
Harp, J. L., Jr.
1977-01-01
A two-dimensional time-dependent computer code was utilized to calculate the three-dimensional steady flow within the impeller blading. The numerical method is an explicit time marching scheme in two spatial dimensions. Initially, an inviscid solution is generated on the hub blade-to-blade surface by the method of Katsanis and McNally (1973). Starting with the known inviscid solution, the viscous effects are calculated through iteration. The approach makes it possible to take into account principal impeller fluid-mechanical effects. It is pointed out that the second iterate provides a complete solution to the three-dimensional, compressible, Navier-Stokes equations for flow in a centrifugal impeller. The problems investigated are related to the study of a radial impeller and a backswept impeller.
NASA Astrophysics Data System (ADS)
Li, Xianye; Meng, Xiangfeng; Wang, Yurong; Yang, Xiulun; Yin, Yongkai; Peng, Xiang; He, Wenqi; Dong, Guoyan; Chen, Hongyi
2017-09-01
A multiple-image encryption method is proposed that is based on row scanning compressive ghost imaging, (t, n) threshold secret sharing, and phase retrieval in the Fresnel domain. In the encryption process, after wavelet transform and Arnold transform of the target image, the ciphertext matrix can be first detected using a bucket detector. Based on a (t, n) threshold secret sharing algorithm, the measurement key used in the row scanning compressive ghost imaging can be decomposed and shared into two pairs of sub-keys, which are then reconstructed using two phase-only mask (POM) keys with fixed pixel values, placed in the input plane and transform plane 2 of the phase retrieval scheme, respectively; and the other POM key in the transform plane 1 can be generated and updated by the iterative encoding of each plaintext image. In each iteration, the target image acts as the input amplitude constraint in the input plane. During decryption, each plaintext image possessing all the correct keys can be successfully decrypted by measurement key regeneration, compression algorithm reconstruction, inverse wavelet transformation, and Fresnel transformation. Theoretical analysis and numerical simulations both verify the feasibility of the proposed method.
Applying Evolutionary Prototyping In Developing LMIS: A Spatial Web-Based System For Land Management
NASA Astrophysics Data System (ADS)
Agustiono, W.
2018-01-01
Software development projects are difficult tasks. Especially for software designed to comply with regulations that are constantly being introduced or changed, it is almost impossible to get by with a single change during the development process; even when a change is possible, developers may face a bulk of rework to fix the design to meet the specified needs. Such iterative work also takes additional time and potentially leads to missing the original schedule and budget. Given such inevitable changes, it is essential for developers to carefully consider and use an appropriate method to help them carry out software project development. This research examines the implementation of a software development method called evolutionary prototyping for developing regulatory-compliance software. It investigates the development of a Land Management Information System (pseudonym), initiated by the Australian government for use by farmers to meet regulatory demands imposed by the Soil and Land Conservation Act. By doing so, it seeks to provide an understanding of the efficacy of evolutionary prototyping in helping developers address frequently changing requirements and iterative work while still remaining on schedule. The findings also offer useful practical insights for other developers who seek to build similar regulatory-compliance software.
Discrete Fourier Transform in a Complex Vector Space
NASA Technical Reports Server (NTRS)
Dean, Bruce H. (Inventor)
2015-01-01
An image-based phase retrieval technique has been developed that can be used on board a space-based iterative transformation system. Image-based wavefront sensing is computationally demanding due to the floating-point nature of the process. The discrete Fourier transform (DFT) calculation is presented in "diagonal" form; by diagonal we mean that a transformation of basis is introduced by an application of the similarity transform of linear algebra. The current method exploits the diagonal structure of the DFT in a special way, particularly in that parts of the calculation do not have to be repeated at each iteration as the algorithm converges to an acceptable solution for focusing an image.
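A small sketch of the kind of FFT-based iterative phase-retrieval loop (Gerchberg-Saxton style) that such wavefront-sensing computations involve; the pupil, data, and iteration count are synthetic, and the patented diagonalization itself is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(5)
n = 64
aperture = np.zeros((n, n))
aperture[16:48, 16:48] = 1.0                    # known pupil amplitude (synthetic)
true_phase = rng.normal(scale=0.3, size=(n, n)) * aperture
field = aperture * np.exp(1j * true_phase)
image_amp = np.abs(np.fft.fft2(field))          # "measured" focal-plane amplitude

g = aperture * np.exp(1j * rng.normal(size=(n, n)))  # random starting phase
for it in range(200):
    G = np.fft.fft2(g)
    G = image_amp * np.exp(1j * np.angle(G))    # impose measured amplitude
    g = np.fft.ifft2(G)
    g = aperture * np.exp(1j * np.angle(g))     # impose pupil support/amplitude
err = np.linalg.norm(np.abs(np.fft.fft2(g)) - image_amp) / np.linalg.norm(image_amp)
print("relative amplitude error:", err)
```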
Miniature Fixed Points as Temperature Standards for In Situ Calibration of Temperature Sensors
NASA Astrophysics Data System (ADS)
Hao, X. P.; Sun, J. P.; Xu, C. Y.; Wen, P.; Song, J.; Xu, M.; Gong, L. Y.; Ding, L.; Liu, Z. L.
2017-06-01
Miniature Ga and Ga-In alloy fixed points are developed as temperature standards at the National Institute of Metrology, China, for the in situ calibration of temperature sensors. A quasi-adiabatic vacuum measurement system is constructed to study the phase-change plateaus of the fixed points. The system comprises a high-stability bath, a quasi-adiabatic vacuum chamber, and a temperature control and measurement system. The melting plateau of the Ga fixed point is longer than 2 h at 0.008 W. The standard deviation of the melting temperature of the Ga and Ga-In alloy fixed points is better than 2 mK. The results suggest that the melting temperature of the Ga or Ga-In alloy fixed points is linearly related to the heating power.
Wall shear stress fixed points in cardiovascular fluid mechanics.
Arzani, Amirhossein; Shadden, Shawn C
2018-05-17
Complex blood flow in large arteries creates rich wall shear stress (WSS) vectorial features. WSS acts as a link between blood flow dynamics and the biology of various cardiovascular diseases. WSS has been of great interest in a wide range of studies and has been the most popular measure to correlate blood flow to cardiovascular disease. Recent studies have emphasized different vectorial features of WSS. However, fixed points in the WSS vector field have not received much attention. A WSS fixed point is a point on the vessel wall where the WSS vector vanishes. In this article, WSS fixed points are classified and the aspects by which they could influence cardiovascular disease are reviewed. First, the connection between WSS fixed points and the flow topology away from the vessel wall is discussed. Second, the potential role of time-averaged WSS fixed points in biochemical mass transport is demonstrated using the recent concept of Lagrangian WSS structures. Finally, simple measures are proposed to quantify the exposure of the endothelial cells to WSS fixed points. Examples from various arterial flow applications are demonstrated. Copyright © 2018 Elsevier Ltd. All rights reserved.
Tympanic thermometer performance validation by use of a body-temperature fixed point blackbody
NASA Astrophysics Data System (ADS)
Machin, Graham; Simpson, Robert
2003-04-01
The use of infrared tympanic thermometers within the medical community (and more generically in the public domain) has recently grown rapidly, displacing more traditional forms of thermometry such as mercury-in-glass. Besides the obvious health concerns over mercury, the increase in the use of tympanic thermometers is related to a number of factors such as their speed and relatively non-invasive method of operation. The calibration and testing of such devices is covered by a number of international standards (ASTM, prEN, JIS) which specify the design of calibration blackbodies. However, these calibration sources are impractical for day-to-day in situ validation purposes. In addition, several studies (e.g. Modell et al., Craig et al.) have thrown doubt on the accuracy of tympanic thermometers in clinical use. With this in mind, the NPL is developing a practical, portable and robust primary reference fixed-point source for tympanic thermometer validation. The aim of this simple device is to give the clinician a rapid way of validating the performance of their tympanic thermometer, enabling the detection of malfunctioning thermometers and giving confidence in the measurement to the clinician (and patient!) at the point of use. The reference fixed point operates at a temperature of 36.3 °C (97.3 °F) with a repeatability of approximately +/- 20 mK. The fixed-point design has taken into consideration the optical characteristics of tympanic thermometers, enabling wide-angled field-of-view devices to be successfully tested. The overall uncertainty of the device is estimated to be less than 0.1 °C. The paper gives a description of the fixed point, its design and construction, as well as the results to date of validation tests.
A minimal approach to the scattering of physical massless bosons
NASA Astrophysics Data System (ADS)
Boels, Rutger H.; Luo, Hui
2018-05-01
Tree and loop level scattering amplitudes which involve physical massless bosons are derived directly from physical constraints such as locality, symmetry and unitarity, bypassing path integral constructions. Amplitudes can be projected onto a minimal basis of kinematic factors through linear algebra, by employing four dimensional spinor helicity methods or at its most general using projection techniques. The linear algebra analysis is closely related to amplitude relations, especially the Bern-Carrasco-Johansson relations for gluon amplitudes and the Kawai-Lewellen-Tye relations between gluons and graviton amplitudes. Projection techniques are known to reduce the computation of loop amplitudes with spinning particles to scalar integrals. Unitarity, locality and integration-by-parts identities can then be used to fix complete tree and loop amplitudes efficiently. The loop amplitudes follow algorithmically from the trees. A number of proof-of-concept examples are presented. These include the planar four point two-loop amplitude in pure Yang-Mills theory as well as a range of one loop amplitudes with internal and external scalars, gluons and gravitons. Several interesting features of the results are highlighted, such as the vanishing of certain basis coefficients for gluon and graviton amplitudes. Effective field theories are naturally and efficiently included into the framework. Dimensional regularisation is employed throughout; different regularisation schemes are worked out explicitly. The presented methods appear most powerful in non-supersymmetric theories in cases with relatively few legs, but with potentially many loops. For instance, in the introduced approach iterated unitarity cuts of four point amplitudes for non-supersymmetric gauge and gravity theories can be computed by matrix multiplication, generalising the so-called rung-rule of maximally supersymmetric theories. The philosophy of the approach to kinematics also leads to a technique to control colour quantum numbers of scattering amplitudes with matter, especially efficient in the adjoint and fundamental representations.
Options for Robust Airfoil Optimization under Uncertainty
NASA Technical Reports Server (NTRS)
Padula, Sharon L.; Li, Wu
2002-01-01
A robust optimization method is developed to overcome point-optimization at the sampled design points. This method combines the best features from several preliminary methods proposed by the authors and their colleagues. The robust airfoil shape optimization is a direct method for drag reduction over a given range of operating conditions and has three advantages: (1) it prevents severe degradation in the off-design performance by using a smart descent direction in each optimization iteration, (2) it uses a large number of spline control points as design variables yet the resulting airfoil shape does not need to be smoothed, and (3) it allows the user to make a tradeoff between the level of optimization and the amount of computing time consumed. For illustration purposes, the robust optimization method is used to solve a lift-constrained drag minimization problem for a two-dimensional (2-D) airfoil in Euler flow with 20 geometric design variables.
NASA Astrophysics Data System (ADS)
Philipps, V.; Malaquias, A.; Hakola, A.; Karhunen, J.; Maddaluno, G.; Almaviva, S.; Caneve, L.; Colao, F.; Fortuna, E.; Gasior, P.; Kubkowska, M.; Czarnecka, A.; Laan, M.; Lissovski, A.; Paris, P.; van der Meiden, H. J.; Petersson, P.; Rubel, M.; Huber, A.; Zlobinski, M.; Schweer, B.; Gierse, N.; Xiao, Q.; Sergienko, G.
2013-09-01
Analysis and understanding of wall erosion, material transport and fuel retention are among the most important tasks for ITER and future devices, since these questions determine largely the lifetime and availability of the fusion reactor. These data are also of extreme value to improve the understanding and validate the models of the in vessel build-up of the T inventory in ITER and future D-T devices. So far, research in these areas is largely supported by post-mortem analysis of wall tiles. However, access to samples will be very much restricted in the next-generation devices (such as ITER, JT-60SA, W7-X, etc) with actively cooled plasma-facing components (PFC) and increasing duty cycle. This has motivated the development of methods to measure the deposition of material and retention of plasma fuel on the walls of fusion devices in situ, without removal of PFC samples. For this purpose, laser-based methods are the most promising candidates. Their feasibility has been assessed in a cooperative undertaking in various European associations under EFDA coordination. Different laser techniques have been explored both under laboratory and tokamak conditions with the emphasis to develop a conceptual design for a laser-based wall diagnostic which is integrated into an ITER port plug, aiming to characterize in situ relevant parts of the inner wall, the upper region of the inner divertor, part of the dome and the upper X-point region.
Investigations on Two Co-C Fixed-Point Cells Prepared at INRIM and LNE-Cnam
NASA Astrophysics Data System (ADS)
Battuello, M.; Florio, M.; Sadli, M.; Bourson, F.
2011-08-01
INRIM and LNE-Cnam agreed to undertake a collaboration aimed at verifying, through the use of metal-carbon eutectic fixed-point cells, the methods and facilities used for defining the transition temperature of eutectic fixed points and the manufacturing procedures of the cells. For this purpose, and as a first step of the cooperation, a Co-C cell manufactured at LNE-Cnam was measured at INRIM and compared with a local cell. The two cells were of different designs: the INRIM cell had an inner volume of 10 cm3 and the LNE-Cnam cell 3.9 cm3. The external dimensions of the two cells were noticeably different, namely, 40 mm in length and 24 mm in diameter for the LNE-Cnam cell 3Co4, and 110 mm in length and 42 mm in diameter for the INRIM cell. Consequently, the effect of temperature distributions in the heating furnace was investigated by implementing the cells inside single-zone and three-zone furnaces. The transition temperature of the cell was determined at the two institutes using different techniques: at INRIM, radiation scales at 900 nm, 950 nm, and 1.6 μm were realized from In to Cu and then used to define T90(Co-C) by extrapolation; at LNE-Cnam, a radiance comparator based on a grating monochromator was used for the extrapolation from the Cu fixed point. This paper presents a comparative description of the cells and the manufacturing methods, and the results in terms of equivalence between the two cells and the melting temperatures determined at INRIM and LNE-Cnam.
47 CFR 101.101 - Frequency availability.
Code of Federal Regulations, 2012 CFR
2012-10-01
... Telecommunication FEDERAL COMMUNICATIONS COMMISSION (CONTINUED) SAFETY AND SPECIAL RADIO SERVICES FIXED MICROWAVE...—(Part 78) CC: Common Carrier Fixed Point-to-Point Microwave Service—(Part 101, Subparts C & I) DBS... Distribution Service—(Part 21) OFS: Private Operational Fixed Point-to-Point Microwave Service—(Part 101...
47 CFR 101.101 - Frequency availability.
Code of Federal Regulations, 2013 CFR
2013-10-01
... Telecommunication FEDERAL COMMUNICATIONS COMMISSION (CONTINUED) SAFETY AND SPECIAL RADIO SERVICES FIXED MICROWAVE...—(Part 78) CC: Common Carrier Fixed Point-to-Point Microwave Service—(Part 101, Subparts C & I) DBS... Distribution Service—(Part 21) OFS: Private Operational Fixed Point-to-Point Microwave Service—(Part 101...
47 CFR 101.21 - Technical content of applications.
Code of Federal Regulations, 2013 CFR
2013-10-01
... SERVICES FIXED MICROWAVE SERVICES Applications and Licenses General Filing Requirements § 101.21 Technical... Private Operational Fixed Point-to-Point Microwave Service and the Common Carrier Fixed Point-to-Point Microwave Service must include the following information: Applicant's name and address. Transmitting station...
47 CFR 101.21 - Technical content of applications.
Code of Federal Regulations, 2014 CFR
2014-10-01
... SERVICES FIXED MICROWAVE SERVICES Applications and Licenses General Filing Requirements § 101.21 Technical... Private Operational Fixed Point-to-Point Microwave Service and the Common Carrier Fixed Point-to-Point Microwave Service must include the following information: Applicant's name and address. Transmitting station...
47 CFR 101.107 - Frequency tolerance.
Code of Federal Regulations, 2011 CFR
2011-10-01
... Telecommunication FEDERAL COMMUNICATIONS COMMISSION (CONTINUED) SAFETY AND SPECIAL RADIO SERVICES FIXED MICROWAVE... to private operational fixed point-to-point microwave and stations providing MVDDS. 5 For private operational fixed point-to-point microwave systems, with a channel greater than or equal to 50 KHz bandwidth...
47 CFR 101.101 - Frequency availability.
Code of Federal Regulations, 2014 CFR
2014-10-01
... Telecommunication FEDERAL COMMUNICATIONS COMMISSION (CONTINUED) SAFETY AND SPECIAL RADIO SERVICES FIXED MICROWAVE...—(Part 78) CC: Common Carrier Fixed Point-to-Point Microwave Service—(Part 101, Subparts C & I) DBS... Distribution Service—(Part 21) OFS: Private Operational Fixed Point-to-Point Microwave Service—(Part 101...
Seeking fixed points in multiple coupling scalar theories in the ɛ expansion
NASA Astrophysics Data System (ADS)
Osborn, Hugh; Stergiou, Andreas
2018-05-01
Fixed points for scalar theories in 4 - ɛ, 6 - ɛ and 3 - ɛ dimensions are discussed. It is shown how a large range of known fixed points for the four dimensional case can be obtained by using a general framework with two couplings. The original maximal symmetry, O(N), is broken to various subgroups, both discrete and continuous. A similar discussion is applied to the six dimensional case. Perturbative applications of the a-theorem are used to help classify potential fixed points. At lowest order in the ɛ-expansion it is shown that at fixed points there is a lower bound for a which is saturated at bifurcation points.
Shi, Hongli; Yang, Zhi; Luo, Shuqian
2017-01-01
The beam hardening artifact is one of the most important forms of metal artifact in polychromatic X-ray computed tomography (CT) and can seriously impair image quality. An iterative approach is proposed to reduce the beam hardening artifact caused by metallic components in polychromatic X-ray CT. According to the Lambert-Beer law, the detected projections can be expressed as monotonic nonlinear functions of the element geometry projections, which are the theoretical projections produced only by the pixel intensities (image grayscale) of a certain element (component). With the help of prior knowledge of the spectrum distribution of the X-ray source and the energy-dependent attenuation coefficients, these functions have explicit expressions, and the Newton-Raphson algorithm is employed to solve them. The solutions are named the synthetical geometry projections, which are nearly linear weighted sums of the element geometry projections with respect to the mean of each attenuation coefficient. In this process, the attenuation coefficients are modified to make the Newton-Raphson iteration functions satisfy the convergence conditions of fixed-point iteration (FPI), so that the solutions approach the true synthetical geometry projections stably. The underlying images are obtained from the projections by general reconstruction algorithms such as filtered back projection (FBP), and the image gray values are adjusted according to the attenuation coefficient means to obtain proper CT numbers. Several examples demonstrate that the proposed approach is efficient in reducing beam hardening artifacts and performs satisfactorily in terms of some general criteria. In a simulation example, the normalized root mean square difference (NRMSD) is reduced by 17.52% compared to a recent algorithm. Since the element geometry projections are free from the effect of beam hardening, their nearly linear weighted sums, the synthetical geometry projections, are almost free from the effect of beam hardening as well. By working out the synthetical geometry projections, the proposed approach becomes quite efficient in reducing beam hardening artifacts.
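A per-ray sketch of the Newton-Raphson projection linearization for a single material, assuming a known (made-up) source spectrum and attenuation coefficients; the full method's multi-element decomposition, FPI-motivated coefficient modification, and reconstruction steps are not shown.

```python
import numpy as np

E = np.array([40.0, 60.0, 80.0, 100.0])        # keV bins (illustrative)
S = np.array([0.2, 0.4, 0.3, 0.1])             # normalized spectrum weights
mu = np.array([0.6, 0.35, 0.25, 0.20])         # attenuation per energy, 1/cm

def detected(L):
    """Polychromatic projection for path length L (weighted Lambert-Beer)."""
    return -np.log(np.sum(S * np.exp(-mu * L)))

def invert(m, L0=0.0, tol=1e-10):
    """Newton-Raphson: solve detected(L) = m for the geometry projection L."""
    L = L0
    for _ in range(50):
        w = S * np.exp(-mu * L)
        g = -np.log(w.sum()) - m               # residual of the projection equation
        dg = np.sum(mu * w) / w.sum()          # derivative of detected(L)
        step = g / dg
        L -= step
        if abs(step) < tol:
            break
    return L

L_true = 7.5                                    # cm of material along the ray
m = detected(L_true)
print("recovered path length:", invert(m))      # ~7.5, free of beam hardening
```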
spsann - optimization of sample patterns using spatial simulated annealing
NASA Astrophysics Data System (ADS)
Samuel-Rosa, Alessandro; Heuvelink, Gerard; Vasques, Gustavo; Anjos, Lúcia
2015-04-01
There are many algorithms and computer programs to optimize sample patterns, some private and others publicly available; a few have only been presented in scientific articles and textbooks. This dispersion and somewhat poor availability holds back their wider adoption and further development. We introduce spsann, a new R-package for the optimization of sample patterns using spatial simulated annealing. R is the most popular environment for data processing and analysis. Spatial simulated annealing is a well-known method, widely used to solve optimization problems in the soil and geosciences, mainly because of its robustness against local optima and its ease of implementation. spsann offers many optimization criteria: for variogram estimation (number of points or point-pairs per lag distance class - PPL), trend estimation (association/correlation and marginal distribution of the covariates - ACDC), and spatial interpolation (mean squared shortest distance - MSSD). spsann also includes the mean or maximum universal kriging variance (MUKV) as an optimization criterion, used when the model of spatial variation is known. PPL, ACDC and MSSD were combined (PAN) for sampling when we are ignorant about the model of spatial variation. spsann solves this multi-objective optimization problem by scaling the objective function values using their maximum absolute value or the mean value computed over 1000 random samples; scaled values are aggregated using the weighted-sum method. A graphical display allows the user to follow how the sample pattern is being perturbed during the optimization, as well as the evolution of its energy state. It is possible to start by perturbing many points and exponentially reduce the number of perturbed points; the maximum perturbation distance decreases linearly with the number of iterations, and the acceptance probability also decreases exponentially with the number of iterations. R is memory-hungry and spatial simulated annealing is a computationally intensive method. As such, several strategies were used to reduce the computation time and memory usage: a) bottlenecks were implemented in C++, b) a finite set of candidate locations is used for perturbing the sample points, and c) data matrices are computed only once and then updated at each iteration instead of being recomputed. spsann is available on GitHub under a GPL Version 2.0 licence and will be further developed to: a) allow the use of a cost surface, b) implement other sensitive parts of the source code in C++, c) implement other optimization criteria, and d) allow points to be added to or deleted from an existing point pattern.
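A minimal sketch of spatial simulated annealing with the MSSD criterion, assuming a finite candidate set and a simple exponential cooling schedule; spsann's actual perturbation-distance and acceptance schedules, and its C++ internals, are not reproduced.

```python
import numpy as np

rng = np.random.default_rng(6)
cand = rng.random((1000, 2))                   # finite set of candidate locations
sample = cand[rng.choice(len(cand), 20, replace=False)].copy()

def mssd(pts):
    """Mean squared distance from every candidate to its nearest sample point."""
    d2 = ((cand[:, None, :] - pts[None, :, :]) ** 2).sum(axis=2)
    return d2.min(axis=1).mean()

energy = mssd(sample)
for it in range(2000):
    T = 0.01 * 0.999 ** it                     # simple exponential cooling
    new = sample.copy()
    new[rng.integers(len(sample))] = cand[rng.integers(len(cand))]  # move one point
    e_new = mssd(new)
    # accept improvements always; accept worse states with decaying probability
    if e_new < energy or rng.random() < np.exp((energy - e_new) / T):
        sample, energy = new, e_new
print("final MSSD:", round(energy, 5))
```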
Arc detection for the ICRF system on ITER
NASA Astrophysics Data System (ADS)
D'Inca, R.
2011-12-01
The ICRF system for ITER is designed to respect the high-voltage breakdown limits. However, arcs can still statistically happen and must be quickly detected and suppressed by shutting the RF power down. For the conception of a reliable and efficient detector, an analysis of the mechanism of arcs is necessary to find their unique signature. Numerous systems have been conceived to address the issues of arc detection: VSWR-based detectors, RF noise detectors, sound detectors, optical detectors, and S-matrix-based detectors. Until now, none of them has succeeded in demonstrating the fulfillment of all requirements, and the studies for ITER now follow three directions: improvement of the existing concepts to fix their flaws, development of new, theoretically fully compliant detectors (like the GUIDAR), and combination of several detectors to benefit from the advantages of each of them. Together with the physical and engineering challenges, the development of an arc detection system for ITER raises methodological concerns about extrapolating the results from basic experiments and present machines to the ITER-scale ICRF system and about conducting a relevant risk analysis.
A transversal approach for patch-based label fusion via matrix completion
Sanroma, Gerard; Wu, Guorong; Gao, Yaozong; Thung, Kim-Han; Guo, Yanrong; Shen, Dinggang
2015-01-01
Recently, multi-atlas patch-based label fusion has received an increasing interest in the medical image segmentation field. After warping the anatomical labels from the atlas images to the target image by registration, label fusion is the key step to determine the latent label for each target image point. Two popular types of patch-based label fusion approaches are (1) reconstruction-based approaches that compute the target labels as a weighted average of atlas labels, where the weights are derived by reconstructing the target image patch using the atlas image patches; and (2) classification-based approaches that determine the target label as a mapping of the target image patch, where the mapping function is often learned using the atlas image patches and their corresponding labels. Both approaches have their advantages and limitations. In this paper, we propose a novel patch-based label fusion method to combine the above two types of approaches via matrix completion (and hence, we call it transversal). As we will show, our method overcomes the individual limitations of both reconstruction-based and classification-based approaches. Since the labeling confidences may vary across the target image points, we further propose a sequential labeling framework that first labels the highly confident points and then gradually labels more challenging points in an iterative manner, guided by the label information determined in the previous iterations. We demonstrate the performance of our novel label fusion method in segmenting the hippocampus in the ADNI dataset, subcortical and limbic structures in the LONI dataset, and mid-brain structures in the SATA dataset. We achieve more accurate segmentation results than both reconstruction-based and classification-based approaches. Our label fusion method is also ranked 1st in the online SATA Multi-Atlas Segmentation Challenge. PMID:26160394
Experimental determination of material damping using vibration analyzer
NASA Technical Reports Server (NTRS)
Chowdhury, Mostafiz R.; Chowdhury, Farida
1990-01-01
Structural damping is an important dynamic characteristic of engineering materials that helps to damp vibrations by reducing their amplitudes. In this investigation, an experimental method is illustrated for determining the damping characteristics of engineering materials using a dual-channel Fast Fourier Transform (FFT) analyzer. A portable Compaq III computer, which houses the analyzer, is used to collect the dynamic responses of three metal rods. Time-domain information is analyzed to obtain the logarithmic decrement of their damping. The damping coefficients are then compared to determine the variation of damping from material to material. The variation of damping from one point to another of the same material, due to a fixed-point excitation, and the variable damping at a fixed point due to excitation at different points, are also demonstrated.
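A short sketch of the logarithmic-decrement calculation on a synthetic free-decay record: successive peak amplitudes give delta = ln(x_k / x_{k+1}) and hence the damping ratio zeta = delta / sqrt(4 pi^2 + delta^2). The signal parameters are made up.

```python
import numpy as np

zeta_true, fn = 0.02, 50.0                      # damping ratio, natural freq (Hz)
t = np.linspace(0.0, 1.0, 20000)
wd = 2 * np.pi * fn * np.sqrt(1 - zeta_true**2) # damped natural frequency
x = np.exp(-zeta_true * 2 * np.pi * fn * t) * np.cos(wd * t)

peaks = np.where((x[1:-1] > x[:-2]) & (x[1:-1] > x[2:]))[0] + 1  # local maxima
amps = x[peaks]
delta = np.mean(np.log(amps[:-1] / amps[1:]))   # average logarithmic decrement
zeta = delta / np.sqrt(4 * np.pi**2 + delta**2)
print(f"estimated damping ratio: {zeta:.4f} (true {zeta_true})")
```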
Variable frequency iteration MPPT for resonant power converters
Zhang, Qian; Batarseh, Issa; Chen, Lin
2015-06-30
A method of maximum power point tracking (MPPT) uses an MPPT algorithm to determine a switching frequency for a resonant power converter, including initializing by setting an initial boundary frequency range that is divided into initial frequency sub-ranges bounded by initial frequencies including an initial center frequency and first and second initial bounding frequencies. A first iteration includes measuring initial powers at the initial frequencies to determine a maximum power initial frequency that is used to set a first reduced frequency search range centered on or bounded by the maximum power initial frequency, including at least a first additional bounding frequency. A second iteration includes calculating first and second center frequencies by averaging adjacent frequency values in the first reduced frequency search range and measuring second power values at the first and second center frequencies. The switching frequency is determined from measured power values including the second power values.
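The claimed iteration amounts to a shrinking-interval search over switching frequency. The sketch below is a loose reading of that search, not the patented algorithm: measure_power is a hypothetical stand-in for measuring converter output power at a given frequency, and the interval-shrinking rule is an assumption.

```python
def mppt_frequency_search(measure_power, f_lo, f_hi, iters=8):
    """Iteratively narrow a frequency range around the maximum-power point."""
    freqs = [f_lo, 0.5 * (f_lo + f_hi), f_hi]     # bounding + center frequencies
    step = (f_hi - f_lo) / 4.0
    for _ in range(iters):
        powers = [measure_power(f) for f in freqs]
        best = freqs[powers.index(max(powers))]   # maximum-power frequency so far
        # Next candidates: averages of adjacent frequency values around `best`
        freqs = [best - step, best, best + step]
        step *= 0.5                               # shrink the search range
    return freqs[1]
```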
Three Boundary Conditions for Computing the Fixed-Point Property in Binary Mixture Data.
van Maanen, Leendert; Couto, Joaquina; Lebreton, Mael
2016-01-01
The notion of "mixtures" has become pervasive in behavioral and cognitive sciences, due to the success of dual-process theories of cognition. However, providing support for such dual-process theories is not trivial, as it crucially requires properties in the data that are specific to mixture of cognitive processes. In theory, one such property could be the fixed-point property of binary mixture data, applied-for instance- to response times. In that case, the fixed-point property entails that response time distributions obtained in an experiment in which the mixture proportion is manipulated would have a common density point. In the current article, we discuss the application of the fixed-point property and identify three boundary conditions under which the fixed-point property will not be interpretable. In Boundary condition 1, a finding in support of the fixed-point will be mute because of a lack of difference between conditions. Boundary condition 2 refers to the case in which the extreme conditions are so different that a mixture may display bimodality. In this case, a mixture hypothesis is clearly supported, yet the fixed-point may not be found. In Boundary condition 3 the fixed-point may also not be present, yet a mixture might still exist but is occluded due to additional changes in behavior. Finding the fixed-property provides strong support for a dual-process account, yet the boundary conditions that we identify should be considered before making inferences about underlying psychological processes.
Three Boundary Conditions for Computing the Fixed-Point Property in Binary Mixture Data
Couto, Joaquina; Lebreton, Mael
2016-01-01
The notion of “mixtures” has become pervasive in behavioral and cognitive sciences, due to the success of dual-process theories of cognition. However, providing support for such dual-process theories is not trivial, as it crucially requires properties in the data that are specific to a mixture of cognitive processes. In theory, one such property could be the fixed-point property of binary mixture data, applied, for instance, to response times. In that case, the fixed-point property entails that response time distributions obtained in an experiment in which the mixture proportion is manipulated would have a common density point. In the current article, we discuss the application of the fixed-point property and identify three boundary conditions under which the fixed-point property will not be interpretable. In Boundary condition 1, a finding in support of the fixed point will be moot because of a lack of difference between conditions. Boundary condition 2 refers to the case in which the extreme conditions are so different that a mixture may display bimodality. In this case, a mixture hypothesis is clearly supported, yet the fixed point may not be found. In Boundary condition 3, the fixed point may also not be present, yet a mixture might still exist but is occluded due to additional changes in behavior. Finding the fixed-point property provides strong support for a dual-process account, yet the boundary conditions that we identify should be considered before making inferences about underlying psychological processes. PMID:27893868
47 CFR 101.21 - Technical content of applications.
Code of Federal Regulations, 2010 CFR
2010-10-01
... SERVICES FIXED MICROWAVE SERVICES Applications and Licenses General Filing Requirements § 101.21 Technical...) [Reserved] (e) Each application in the Private Operational Fixed Point-to-Point Microwave Service and the Common Carrier Fixed Point-to-Point Microwave Service must include the following information: Applicant's...
47 CFR 101.5 - Station authorization required.
Code of Federal Regulations, 2014 CFR
2014-10-01
... SERVICES FIXED MICROWAVE SERVICES Applications and Licenses General Filing Requirements § 101.5 Station... stations authorized under subpart H (Private Operational Fixed Point-to-Point Microwave Service), subpart I (Common Carrier Fixed Point-to-Point Microwave Service), and subpart L of this part (Local Multipoint...
47 CFR 101.5 - Station authorization required.
Code of Federal Regulations, 2010 CFR
2010-10-01
... SERVICES FIXED MICROWAVE SERVICES Applications and Licenses General Filing Requirements § 101.5 Station... stations authorized under subpart H (Private Operational Fixed Point-to-Point Microwave Service), subpart I (Common Carrier Fixed Point-to-Point Microwave Service), and subpart L of this part (Local Multipoint...
47 CFR 101.21 - Technical content of applications.
Code of Federal Regulations, 2011 CFR
2011-10-01
... SERVICES FIXED MICROWAVE SERVICES Applications and Licenses General Filing Requirements § 101.21 Technical...) [Reserved] (e) Each application in the Private Operational Fixed Point-to-Point Microwave Service and the Common Carrier Fixed Point-to-Point Microwave Service must include the following information: Applicant's...
47 CFR 101.21 - Technical content of applications.
Code of Federal Regulations, 2012 CFR
2012-10-01
... SERVICES FIXED MICROWAVE SERVICES Applications and Licenses General Filing Requirements § 101.21 Technical...) [Reserved] (e) Each application in the Private Operational Fixed Point-to-Point Microwave Service and the Common Carrier Fixed Point-to-Point Microwave Service must include the following information: Applicant's...
47 CFR 101.5 - Station authorization required.
Code of Federal Regulations, 2013 CFR
2013-10-01
... SERVICES FIXED MICROWAVE SERVICES Applications and Licenses General Filing Requirements § 101.5 Station... stations authorized under subpart H (Private Operational Fixed Point-to-Point Microwave Service), subpart I (Common Carrier Fixed Point-to-Point Microwave Service), and subpart L of this part (Local Multipoint...
47 CFR 101.5 - Station authorization required.
Code of Federal Regulations, 2012 CFR
2012-10-01
... SERVICES FIXED MICROWAVE SERVICES Applications and Licenses General Filing Requirements § 101.5 Station... stations authorized under subpart H (Private Operational Fixed Point-to-Point Microwave Service), subpart I (Common Carrier Fixed Point-to-Point Microwave Service), and subpart L of this part (Local Multipoint...
47 CFR 101.5 - Station authorization required.
Code of Federal Regulations, 2011 CFR
2011-10-01
... SERVICES FIXED MICROWAVE SERVICES Applications and Licenses General Filing Requirements § 101.5 Station... stations authorized under subpart H (Private Operational Fixed Point-to-Point Microwave Service), subpart I (Common Carrier Fixed Point-to-Point Microwave Service), and subpart L of this part (Local Multipoint...
Tučník, Petr; Bureš, Vladimír
2016-01-01
Multi-criteria decision-making (MCDM) can be formally implemented by various methods. This study compares the suitability of four selected MCDM methods, namely WPM, TOPSIS, VIKOR, and PROMETHEE, for future applications in agent-based computational economic (ACE) models of larger scale (i.e., over 10 000 agents in one geographical region). These four MCDM methods were selected according to their appropriateness for computational processing in ACE applications. Tests of the selected methods were conducted on four hardware configurations. For each method, 100 tests were performed, which represented one testing iteration. With four testing iterations conducted on each hardware setting and separate testing of all configurations with the server parameter de/activated, altogether 12,800 data points were collected and subsequently analyzed. An illustrative decision-making scenario was used which allows the mutual comparison of all of the selected decision-making methods. Our test results suggest that although all methods are convenient and can be used in practice, the VIKOR method accomplished the tests with the best results and thus can be recommended as the most suitable for simulations of large-scale agent-based models.
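Of the four compared methods, TOPSIS is the most compact to write down, so a minimal sketch follows; the vector normalization and closeness coefficient are the textbook formulation, not necessarily the exact variant benchmarked in the study.

```python
import numpy as np

def topsis(X, w, benefit):
    """Rank alternatives with TOPSIS.

    X       : (n, m) decision matrix (n alternatives, m criteria)
    w       : (m,) criteria weights, summing to 1
    benefit : (m,) True for benefit criteria, False for cost criteria
    """
    R = X / np.linalg.norm(X, axis=0)          # vector-normalized matrix
    V = R * w                                  # weighted normalized matrix
    ideal = np.where(benefit, V.max(axis=0), V.min(axis=0))
    anti  = np.where(benefit, V.min(axis=0), V.max(axis=0))
    d_pos = np.linalg.norm(V - ideal, axis=1)  # distance to ideal solution
    d_neg = np.linalg.norm(V - anti, axis=1)   # distance to anti-ideal solution
    return d_neg / (d_pos + d_neg)             # closeness: higher is better
```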
A Newton method for the magnetohydrodynamic equilibrium equations
NASA Astrophysics Data System (ADS)
Oliver, Hilary James
We have developed and implemented a (J, B) space Newton method to solve the full nonlinear three-dimensional magnetohydrodynamic equilibrium equations in toroidal geometry. Various cases have been run successfully, demonstrating significant improvement over Picard iteration, including a 3D stellarator equilibrium at β = 2%. The algorithm first solves the equilibrium force balance equation for the current density J, given a guess for the magnetic field B. This step is taken from the Picard-iterative PIES 3D equilibrium code. Next, we apply Newton's method to Ampere's Law by expansion of the functional J(B), which is defined by the first step. An analytic calculation in magnetic coordinates, of how the Pfirsch-Schlüter currents vary in the plasma in response to a small change in the magnetic field, yields the Newton gradient term (analogous to ∇f · δx in Newton's method for f(x) = 0). The algorithm is computationally feasible because we do this analytically, and because the gradient term is flux-surface local when expressed in terms of a vector potential in an A_r = 0 gauge. The equations are discretized by a hybrid spectral/offset-grid finite difference technique, and leading-order radial dependence is factored from Fourier coefficients to improve finite-difference accuracy near the polar-like origin. After calculating the Newton gradient term we transfer the equation from the magnetic grid to a fixed background grid, which greatly improves the code's performance.
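The contrast with Picard iteration can be seen on a generic fixed-point problem x = G(x). The sketch below only illustrates the two iteration styles on a small dense system, with a finite-difference Jacobian standing in for the analytic Pfirsch-Schlüter gradient term computed in this work.

```python
import numpy as np

def picard(G, x, tol=1e-10, kmax=500):
    """Plain fixed-point (Picard) iteration x <- G(x)."""
    for k in range(kmax):
        x_new = G(x)
        if np.linalg.norm(x_new - x) < tol:
            return x_new, k
        x = x_new
    return x, kmax

def newton_fixed_point(G, x, tol=1e-12, kmax=50, h=1e-7):
    """Newton's method on F(x) = x - G(x) = 0, with a finite-difference
    Jacobian standing in for an analytic gradient term."""
    n = x.size
    for k in range(kmax):
        F = x - G(x)
        if np.linalg.norm(F) < tol:
            return x, k
        J = np.empty((n, n))
        for j in range(n):                   # one Jacobian column per probe
            e = np.zeros(n); e[j] = h
            J[:, j] = ((x + e) - G(x + e) - F) / h
        x = x - np.linalg.solve(J, F)
    return x, kmax
```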
A Fast Implementation of the ISOCLUS Algorithm
NASA Technical Reports Server (NTRS)
Memarsadeghi, Nargess; Mount, David M.; Netanyahu, Nathan S.; LeMoigne, Jacqueline
2003-01-01
Unsupervised clustering is a fundamental building block in numerous image processing applications. One of the most popular and widely used clustering schemes for remote sensing applications is the ISOCLUS algorithm, which is based on the ISODATA method. The algorithm is given a set of n data points in d-dimensional space, an integer k indicating the initial number of clusters, and a number of additional parameters. The general goal is to compute the coordinates of a set of cluster centers in d-space, such that those centers minimize the mean squared distance from each data point to its nearest center. This clustering algorithm is similar to another well-known clustering method, called k-means. One significant feature of ISOCLUS over k-means is that the actual number of clusters reported might be fewer or more than the number supplied as part of the input. The algorithm uses different heuristics to determine whether to merge or split clusters. As ISOCLUS can run very slowly, particularly on large data sets, there has been a growing interest in the remote sensing community in computing it efficiently. We have developed a faster implementation of the ISOCLUS algorithm. Our improvement is based on a recent acceleration to the k-means algorithm of Kanungo et al. They showed that, by using a kd-tree data structure for storing the data, it is possible to reduce the running time of k-means. We have adapted this method for the ISOCLUS algorithm, and we show that it is possible to achieve essentially the same results as ISOCLUS on large data sets, but with significantly lower running times. This adaptation involves computing a number of cluster statistics that are needed for ISOCLUS but not for k-means. Both the k-means and ISOCLUS algorithms are based on iterative schemes, in which nearest neighbors are calculated until some convergence criterion is satisfied. Each iteration requires that the nearest center for each data point be computed. Naively, this requires O(kn) time, where k denotes the current number of centers. Traditional techniques for accelerating nearest neighbor searching involve storing the k centers in a data structure. However, because of the iterative nature of the algorithm, this data structure would need to be rebuilt with each new iteration. Our approach is to store the data points in a kd-tree data structure. The assignment of points to nearest neighbors is carried out by a filtering process, which successively eliminates centers that cannot possibly be the nearest neighbor for a given region of space. This algorithm is significantly faster, because large groups of data points can be assigned to their nearest center in a single operation. Preliminary results on a number of real Landsat datasets show that our revised ISOCLUS-like scheme runs about twice as fast.
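For reference, a baseline assignment-and-update iteration is sketched below using scipy's cKDTree built on the centers; the filtering method described above instead stores the data points in the kd-tree so that whole groups of points are assigned to a center at once, which this sketch deliberately does not implement.

```python
import numpy as np
from scipy.spatial import cKDTree

def assign_and_update(data, centers):
    """One k-means-style iteration: assign each point to its nearest center,
    then recompute the centers. The center kd-tree must be rebuilt every
    iteration, which is the cost the filtering approach avoids."""
    tree = cKDTree(centers)               # rebuilt each iteration
    _, idx = tree.query(data)             # nearest center for each data point
    new_centers = np.array([
        data[idx == j].mean(axis=0) if np.any(idx == j) else centers[j]
        for j in range(len(centers))
    ])
    return new_centers, idx
```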
Sparse Unorganized Point Cloud Based Relative Pose Estimation for Uncooperative Space Target.
Yin, Fang; Chou, Wusheng; Wu, Yun; Yang, Guang; Xu, Song
2018-03-28
This paper proposes an autonomous algorithm to determine the relative pose between a chaser spacecraft and an uncooperative space target, which is essential in advanced space applications, e.g., on-orbit servicing missions. The proposed method, named the Congruent Tetrahedron Align (CTA) algorithm, uses the very sparse unorganized 3D point cloud acquired by a LIDAR sensor and does not require any prior pose information. The core of the method is to determine the relative pose by looking for congruent tetrahedra in the scanning point cloud and the model point cloud on the basis of the target's known model. A two-level index hash table is built to speed up the search. In addition, the Iterative Closest Point (ICP) algorithm is used for pose tracking after CTA. In order to evaluate the method at arbitrary initial attitudes, a simulation system is presented. Specifically, the ability of the proposed method to provide the initial pose needed by the tracking algorithm is demonstrated, as well as its robustness against noise. Finally, a field experiment is conducted and the results demonstrate the effectiveness of the proposed method.
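The ICP pose-tracking stage alternates nearest-neighbor matching with a closed-form rigid alignment. Below is a minimal sketch of one such iteration using the standard SVD (Kabsch) solution; the array shapes and the absence of outlier rejection are simplifying assumptions.

```python
import numpy as np
from scipy.spatial import cKDTree

def icp_step(scan, model):
    """One ICP iteration: match (n, 3) scan points to their nearest (m, 3)
    model points, then fit the best rigid transform R, t via SVD."""
    idx = cKDTree(model).query(scan)[1]
    matched = model[idx]
    p0, q0 = scan.mean(axis=0), matched.mean(axis=0)
    H = (scan - p0).T @ (matched - q0)            # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T                            # proper rotation (det = +1)
    t = q0 - R @ p0
    return R, t                                   # apply as scan @ R.T + t
```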
Sparse Unorganized Point Cloud Based Relative Pose Estimation for Uncooperative Space Target
Chou, Wusheng; Wu, Yun; Yang, Guang; Xu, Song
2018-01-01
This paper proposes an autonomous algorithm to determine the relative pose between a chaser spacecraft and an uncooperative space target, which is essential in advanced space applications, e.g., on-orbit servicing missions. The proposed method, named the Congruent Tetrahedron Align (CTA) algorithm, uses the very sparse unorganized 3D point cloud acquired by a LIDAR sensor and does not require any prior pose information. The core of the method is to determine the relative pose by looking for congruent tetrahedra in the scanning point cloud and the model point cloud on the basis of the target's known model. A two-level index hash table is built to speed up the search. In addition, the Iterative Closest Point (ICP) algorithm is used for pose tracking after CTA. In order to evaluate the method at arbitrary initial attitudes, a simulation system is presented. Specifically, the ability of the proposed method to provide the initial pose needed by the tracking algorithm is demonstrated, as well as its robustness against noise. Finally, a field experiment is conducted and the results demonstrate the effectiveness of the proposed method. PMID:29597323
A Single LiDAR-Based Feature Fusion Indoor Localization Algorithm.
Wang, Yun-Ting; Peng, Chao-Chung; Ravankar, Ankit A; Ravankar, Abhijeet
2018-04-23
In past years, there has been significant progress in the field of indoor robot localization. To precisely recover its position, a robot usually relies on multiple on-board sensors. Nevertheless, this affects the overall system cost and increases computation. In this research work, we consider a light detection and ranging (LiDAR) device as the only sensor for detecting surroundings and propose an efficient indoor localization algorithm. To attenuate the computation effort and preserve localization robustness, a weighted parallel iterative closed point (WP-ICP) method with interpolation is presented. Compared to traditional ICP, the point cloud is first processed to extract corners and line features before applying point registration. Later, points labeled as corners are only matched with the corner candidates. Similarly, points labeled as lines are only matched with the line candidates. Moreover, their ICP confidence levels are also fused in the algorithm, which makes the pose estimation less sensitive to environment uncertainties. The proposed WP-ICP architecture reduces the probability of mismatch and thereby reduces the ICP iterations. Finally, based on given well-constructed indoor layouts, experimental comparisons are carried out under both clean and perturbed environments. It is shown that the proposed method is effective in significantly reducing computation effort and is simultaneously able to preserve localization precision.
A Single LiDAR-Based Feature Fusion Indoor Localization Algorithm
Wang, Yun-Ting; Peng, Chao-Chung; Ravankar, Ankit A.; Ravankar, Abhijeet
2018-01-01
In past years, there has been significant progress in the field of indoor robot localization. To precisely recover its position, a robot usually relies on multiple on-board sensors. Nevertheless, this affects the overall system cost and increases computation. In this research work, we consider a light detection and ranging (LiDAR) device as the only sensor for detecting surroundings and propose an efficient indoor localization algorithm. To attenuate the computation effort and preserve localization robustness, a weighted parallel iterative closed point (WP-ICP) method with interpolation is presented. Compared to traditional ICP, the point cloud is first processed to extract corners and line features before applying point registration. Later, points labeled as corners are only matched with the corner candidates. Similarly, points labeled as lines are only matched with the line candidates. Moreover, their ICP confidence levels are also fused in the algorithm, which makes the pose estimation less sensitive to environment uncertainties. The proposed WP-ICP architecture reduces the probability of mismatch and thereby reduces the ICP iterations. Finally, based on given well-constructed indoor layouts, experimental comparisons are carried out under both clean and perturbed environments. It is shown that the proposed method is effective in significantly reducing computation effort and is simultaneously able to preserve localization precision. PMID:29690624
NASA Astrophysics Data System (ADS)
Battuello, M.; Girard, F.; Florio, M.
2009-02-01
Four independent radiation temperature scales approximating the ITS-90 at 900 nm, 950 nm and 1.6 µm have been realized from the indium point (429.7485 K) to the copper point (1357.77 K), and were used to derive, by extrapolation, the transition temperature T90(Co-C) of the cobalt-carbon eutectic fixed point. An INRIM cell was investigated and an average value T90(Co-C) = 1597.20 K was found, with the four values lying within 0.25 K. Alternatively, approximated thermodynamic scales were realized by assigning to the fixed points the best presently available thermodynamic values and deriving T(Co-C). An average value of 1597.27 K was found (four values lying within 0.25 K). The standard uncertainties associated with T90(Co-C) and T(Co-C) were 0.16 K and 0.17 K, respectively. The INRIM determinations are compatible with recent thermodynamic determinations on three different cells (values lying between 1597.11 K and 1597.25 K) and with the result of a comparison on the same cell by an absolute radiation thermometer and an irradiance measurement with filter radiometers, which give values of 1597.11 K and 1597.43 K, respectively (Anhalt et al 2006 Metrologia 43 S78-83). The INRIM approach allows the determination of both the ITS-90 and thermodynamic temperature of a fixed point in a simple way and can provide valuable support to absolute radiometric methods in defining the transition temperatures of new high-temperature fixed points.
Derivative free Davidon-Fletcher-Powell (DFP) for solving symmetric systems of nonlinear equations
NASA Astrophysics Data System (ADS)
Mamat, M.; Dauda, M. K.; Mohamed, M. A. bin; Waziri, M. Y.; Mohamad, F. S.; Abdullah, H.
2018-03-01
Problems arising in engineering, economics, modelling, industry, computing, and science are mostly nonlinear in nature, and the numerical solution of such systems is widely applied in these areas of mathematics. Over the years there has been significant theoretical study to develop methods for solving such systems; despite these efforts, the methods developed have deficiencies. As a contribution to solving systems of the form F(x) = 0, x ∈ R^n, a derivative-free method via the classical Davidon-Fletcher-Powell (DFP) update is presented. This is achieved by simply approximating the inverse Hessian matrix Q_{k+1}^{-1} by θ_k I. The modified method satisfies the descent condition and possesses local superlinear convergence properties. Interestingly, without computing any derivative, the proposed method never failed to converge throughout the numerical experiments. The output is based on the number of iterations and CPU time; different initial starting points were used to solve 40 benchmark test problems. With the aid of the squared-norm merit function and a derivative-free line search technique, the approach yields a method for solving symmetric systems of nonlinear equations that is capable of significantly reducing the CPU time and the number of iterations, as compared to its counterparts. A comparison between the proposed method and the classical DFP update shows that the proposed method is the top performer, outperforming the existing method in almost all cases. In terms of the number of iterations, out of the 40 problems, the proposed method solved 38 successfully (95%), while the classical DFP solved 2 (5%). In terms of CPU time, the proposed method solved 29 of the 40 problems (72.5%) successfully, whereas the classical DFP solved 11 (27.5%). The method is valid in terms of derivation, reliable in terms of the number of iterations, and accurate in terms of CPU time; it is thus suitable and achieves its objective.
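A rough sketch of the idea, reading the approximation Q_{k+1}^{-1} ≈ θ_k I as a scalar-scaled residual direction with backtracking on the squared-norm merit function, is given below. The θ update shown is a Barzilai-Borwein-type secant guess, an assumption rather than the paper's exact formula.

```python
import numpy as np

def df_dfp_solve(F, x, tol=1e-8, kmax=500):
    """Derivative-free iteration for F(x) = 0 with the DFP inverse Hessian
    collapsed to theta_k * I (sketch under stated assumptions)."""
    theta = 1.0
    Fx = F(x)
    for k in range(kmax):
        if np.linalg.norm(Fx) < tol:
            return x, k
        d = -theta * Fx                     # search direction, no derivatives
        alpha, merit = 1.0, Fx @ Fx         # squared-norm merit function
        while alpha > 1e-12:                # simple backtracking line search
            x_new = x + alpha * d
            F_new = F(x_new)
            if F_new @ F_new < merit:
                break
            alpha *= 0.5
        s, y = x_new - x, F_new - Fx
        theta = abs(s @ y) / max(y @ y, 1e-30)   # scalar secant estimate
        x, Fx = x_new, F_new
    return x, kmax
```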
Bardhan, Jaydeep P
2008-10-14
The importance of molecular electrostatic interactions in aqueous solution has motivated extensive research into physical models and numerical methods for their estimation. The computational costs associated with simulations that include many explicit water molecules have driven the development of implicit-solvent models, with generalized-Born (GB) models among the most popular of these. In this paper, we analyze a boundary-integral equation interpretation for the Coulomb-field approximation (CFA), which plays a central role in most GB models. This interpretation offers new insights into the nature of the CFA, which traditionally has been assessed using only a single point charge in the solute. The boundary-integral interpretation of the CFA allows the use of multiple point charges, or even continuous charge distributions, leading naturally to methods that eliminate the interpolation inaccuracies associated with the Still equation. This approach, which we call boundary-integral-based electrostatic estimation by the CFA (BIBEE/CFA), is most accurate when the molecular charge distribution generates a smooth normal displacement field at the solute-solvent boundary, and CFA-based GB methods perform similarly. Conversely, both methods are least accurate for charge distributions that give rise to rapidly varying or highly localized normal displacement fields. Supporting this analysis are comparisons of the reaction-potential matrices calculated using GB methods and boundary-element-method (BEM) simulations. An approximation similar to BIBEE/CFA exhibits complementary behavior, with superior accuracy for charge distributions that generate rapidly varying normal fields and poorer accuracy for distributions that produce smooth fields. This approximation, BIBEE by preconditioning (BIBEE/P), essentially generates initial guesses for preconditioned Krylov-subspace iterative BEMs. Thus, iterative refinement of the BIBEE/P results recovers the BEM solution; excellent agreement is obtained in only a few iterations. The boundary-integral-equation framework may also provide a means to derive rigorous results explaining how the empirical correction terms in many modern GB models significantly improve accuracy despite their simple analytical forms.
NASA Astrophysics Data System (ADS)
Péli, Zoltán; Nagy, Sándor; Sailer, Kornel
2018-02-01
The effect of the O(∂⁴) terms of the gradient expansion on the anomalous dimension η and the correlation length's critical exponent ν of the Wilson-Fisher fixed point has been determined for the Euclidean 3-dimensional O(N) models with N ≥ 2. Wetterich's effective average action renormalization group method is used with field-independent derivative couplings and Litim's optimized regulator. It is shown that the critical theory is well approximated by the effective average action preserving O(N) symmetry with an accuracy of O(η).
Numerical solution of second order ODE directly by two point block backward differentiation formula
NASA Astrophysics Data System (ADS)
Zainuddin, Nooraini; Ibrahim, Zarina Bibi; Othman, Khairil Iskandar; Suleiman, Mohamed; Jamaludin, Noraini
2015-12-01
The direct two-point block backward differentiation formula (BBDF2) for solving second-order ordinary differential equations (ODEs) is presented in this paper. The method is derived by differentiating the interpolating polynomial using three back values. In BBDF2, two approximate solutions are produced simultaneously at each step of integration. The derived method is implemented using a fixed step size, and the numerical results demonstrate the advantage of the direct method as compared to the reduction method.
NASA Astrophysics Data System (ADS)
MacDonald, Christopher L.; Bhattacharya, Nirupama; Sprouse, Brian P.; Silva, Gabriel A.
2015-09-01
Computing numerical solutions to fractional differential equations can be computationally intensive due to the effect of non-local derivatives, in which all previous time points contribute to the current iteration. In general, numerical approaches that depend on truncating part of the system history, while efficient, can suffer from high degrees of error and inaccuracy. Here we present an adaptive time step memory method for smooth functions, applied to the Grünwald-Letnikov fractional diffusion derivative. This method is computationally efficient and results in smaller errors during numerical simulations. Sampled points along the system's history at progressively longer intervals are assumed to reflect the values of neighboring time points. By including progressively fewer points backward in time, a temporally 'weighted' history is computed that includes contributions from the entire past of the system, maintaining accuracy but with fewer points actually calculated, greatly improving computational efficiency.
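For orientation, the full-history Grünwald-Letnikov sum that the adaptive-memory method economizes is sketched below; the coefficient recurrence is standard, while the subsampling of older history described in the text is deliberately omitted.

```python
import numpy as np

def gl_weights(alpha, n):
    """Grünwald-Letnikov coefficients w_j = (-1)^j * C(alpha, j), built with
    the recurrence w_j = w_{j-1} * (1 - (alpha + 1) / j)."""
    w = np.empty(n + 1)
    w[0] = 1.0
    for j in range(1, n + 1):
        w[j] = w[j - 1] * (1.0 - (alpha + 1.0) / j)
    return w

def gl_derivative(f_hist, alpha, dt):
    """Full-history GL fractional derivative at the latest time point.
    f_hist holds f(t_0), ..., f(t_n); an adaptive-memory scheme would
    subsample the older part of this sum instead of using every point."""
    n = len(f_hist) - 1
    w = gl_weights(alpha, n)
    return (w * np.asarray(f_hist)[::-1]).sum() / dt**alpha
```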
Stitching-aware in-design DPT auto fixing for sub-20nm logic devices
NASA Astrophysics Data System (ADS)
Choi, Soo-Han; Sai Krishna, K. V. V. S.; Pemberton-Smith, David
2017-03-01
As technology continues to shrink below 20nm, Double Patterning Technology (DPT) becomes one of the mandatory solutions for routing metal layers. From the viewpoint of Place and Route (P&R), the major concern is how to prevent DPT odd-cycles automatically without sacrificing chip area. Even though the leading-edge P&R tools have advanced algorithms to prevent DPT odd-cycles, it is very hard to prevent localized DPT odd-cycles, especially in Engineering Change Order (ECO) routing. In the last several years, we developed an In-design DPT Auto Fixing method in order to significantly reduce localized DPT odd-cycles during ECO, and could achieve remarkable design Turn-Around Times (TATs). But subsequently, as design complexity continued increasing and chip size continued decreasing, we needed a new In-design DPT Auto Fixing approach to improve the auto-fixing rate. In this paper, we present the Stitching-Aware In-design DPT Auto Fixing method for better fixing rates and smaller chip designs. The previous In-design DPT Auto Fixing method detected all DPT odd-cycles and tried to remove odd-cycles by increasing the adjacent space. As metal congestion increases in the newer technology nodes, the older Auto Fixing method has limitations in increasing the adjacent space between routing metals. Consequently, the auto-fixing rate of the older method gets worse with the introduction of the smaller design rules. With DPT stitching enabled in the In-design DRC checking procedure, the new Stitching-Aware DPT Auto Fixing method detects the most critical odd-cycles and resolves them automatically. The accuracy of the new flow ensures better usage of space in congested areas, and helps design smaller chips. By applying the Stitching-Aware DPT Auto Fixing method to sub-20nm logic devices, we confirm that the auto-fixing rate is improved by 2X compared with auto fixing without stitching. Additionally, by developing a better heuristic algorithm and flow for DPT stitching, we can obtain DPT-compliant layouts with acceptable design TATs.
Chiral Modes at Exceptional Points in Exciton-Polariton Quantum Fluids
NASA Astrophysics Data System (ADS)
Gao, T.; Li, G.; Estrecho, E.; Liew, T. C. H.; Comber-Todd, D.; Nalitov, A.; Steger, M.; West, K.; Pfeiffer, L.; Snoke, D. W.; Kavokin, A. V.; Truscott, A. G.; Ostrovskaya, E. A.
2018-02-01
We demonstrate the generation of chiral modes, i.e., vortex flows with fixed handedness, in exciton-polariton quantum fluids. The chiral modes arise in the vicinity of exceptional points (non-Hermitian spectral degeneracies) in an optically induced resonator for exciton polaritons. In particular, a vortex is generated by driving two dipole modes of the non-Hermitian ring resonator into degeneracy. Transition through the exceptional point in the space of the system's parameters is enabled by precise manipulation of the real and imaginary parts of the closed-wall potential forming the resonator. As the system is driven to the vicinity of the exceptional point, we observe the formation of a vortex state with a fixed orbital angular momentum (topological charge). This method can be extended to generate higher-order orbital angular momentum states through coalescence of multiple non-Hermitian spectral degeneracies. Our Letter demonstrates the possibility of exploiting nontrivial and counterintuitive properties of waves near exceptional points in macroscopic quantum systems.
Stress-Constrained Structural Topology Optimization with Design-Dependent Loads
NASA Astrophysics Data System (ADS)
Lee, Edmund
Topology optimization is commonly used to distribute a given amount of material to obtain the stiffest structure, with predefined fixed loads. The present work investigates the result of applying stress constraints to topology optimization for problems with design-dependent loading, such as self-weight and pressure. In order to apply pressure loading, a material boundary identification scheme is proposed, iteratively connecting points of equal density. In previous research, design-dependent loading problems have been limited to compliance minimization. The present study employs a more practical approach by minimizing mass subject to failure constraints, and uses a stress relaxation technique to avoid stress constraint singularities. The results show that these design-dependent loading problems may converge to a local minimum when stress constraints are enforced. Comparisons between compliance minimization solutions and stress-constrained solutions are also given. The resulting topologies of these two solutions are usually vastly different, demonstrating the need for stress-constrained topology optimization.
Bernstein, Andrey; Wang, Cong; Dall'Anese, Emiliano; ...
2018-01-01
This paper considers unbalanced multiphase distribution systems with generic topology and different load models, and extends the Z-bus iterative load-flow algorithm based on a fixed-point interpretation of the AC load-flow equations. Explicit conditions for existence and uniqueness of load-flow solutions are presented. These conditions also guarantee convergence of the load-flow algorithm to the unique solution. The proposed methodology is applicable to generic systems featuring (i) wye connections; (ii) ungrounded delta connections; (iii) a combination of wye-connected and delta-connected sources/loads; and (iv) a combination of line-to-line and line-to-grounded-neutral devices at the secondary of distribution transformers. Further, a sufficient condition for the non-singularity of the load-flow Jacobian is proposed. Finally, linear load-flow models are derived, and their approximation accuracy is analyzed. Theoretical results are corroborated through experiments on IEEE test feeders.
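A single-phase caricature of the fixed-point iteration at the heart of the Z-bus method is sketched below; the constant-power-only injections and the variable names are assumptions, and the paper's multiphase wye/delta device models are not represented.

```python
import numpy as np

def zbus_loadflow(Zbus, v_noload, s_inj, tol=1e-10, kmax=100):
    """Z-bus fixed-point load flow sketch (single-phase equivalent).

    Zbus     : (n, n) complex bus impedance matrix
    v_noload : (n,) no-load voltage vector (slack contribution)
    s_inj    : (n,) complex power injections (constant-power model)

    Iterates v <- v_noload + Zbus @ conj(s_inj / v).
    """
    v = v_noload.copy()
    for k in range(kmax):
        i_inj = np.conj(s_inj / v)          # bus current injections
        v_new = v_noload + Zbus @ i_inj
        if np.max(np.abs(v_new - v)) < tol:
            return v_new, k
        v = v_new
    return v, kmax
```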
NASA Astrophysics Data System (ADS)
Ivanyi, P.; Ivanyi, A.
2015-02-01
In this paper, one column of a telescopic construction of a bell tower is investigated. The hinges at the support of the column and at the connecting joint between the upper and lower columns are modelled with rotational springs. The characteristics of the springs are assumed to be non-linear, and their hysteresis is represented with the Preisach hysteresis model. The mass of the columns and the bell with the fly are concentrated at the top of the column. The tolling process is simulated with a cyclic load. The elements of the column are considered completely rigid. The time iteration of the non-linear equations of motion is evaluated by the Crank-Nicolson scheme, and the implemented non-linear hysteresis is handled by the fixed-point technique. The numerical simulation of the dynamic system is carried out under different combinations of soft, medium and hard hysteresis properties of the hinges.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bernstein, Andrey; Wang, Cong; Dall'Anese, Emiliano
This paper considers unbalanced multiphase distribution systems with generic topology and different load models, and extends the Z-bus iterative load-flow algorithm based on a fixed-point interpretation of the AC load-flow equations. Explicit conditions for existence and uniqueness of load-flow solutions are presented. These conditions also guarantee convergence of the load-flow algorithm to the unique solution. The proposed methodology is applicable to generic systems featuring (i) wye connections; (ii) ungrounded delta connections; (iii) a combination of wye-connected and delta-connected sources/loads; and (iv) a combination of line-to-line and line-to-grounded-neutral devices at the secondary of distribution transformers. Further, a sufficient condition for the non-singularity of the load-flow Jacobian is proposed. Finally, linear load-flow models are derived, and their approximation accuracy is analyzed. Theoretical results are corroborated through experiments on IEEE test feeders.
A New Metrics for Countries' Fitness and Products' Complexity
NASA Astrophysics Data System (ADS)
Tacchella, Andrea; Cristelli, Matthieu; Caldarelli, Guido; Gabrielli, Andrea; Pietronero, Luciano
2012-10-01
Classical economic theories prescribe specialization of countries' industrial production. Inspection of country databases of exported products shows that this is not the case: successful countries are extremely diversified, in analogy with biosystems evolving in a competitive dynamical environment. The challenge is assessing quantitatively the non-monetary competitive advantage of diversification, which represents the hidden potential for development and growth. Here we develop a new statistical approach based on coupled non-linear maps, whose fixed point defines a new metrics for the country Fitness and product Complexity. We show that a non-linear iteration is necessary to bound the complexity of products by the fitness of the less competitive countries exporting them. We show that, given the paradigm of economic complexity, the correct and simplest approach to measure the competitiveness of countries is the one presented in this work. Furthermore, our metrics appears to be economically well-grounded.
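The coupled non-linear maps can be sketched directly. The iteration below follows the commonly cited form of the Fitness-Complexity algorithm (fitness as a sum of exported products' complexities; complexity bounded by the weakest exporters), with per-step normalization; the binary export matrix M and the iteration count are illustrative.

```python
import numpy as np

def fitness_complexity(M, iters=200):
    """Iterate country fitness F and product complexity Q to their fixed
    point, given the binary country-product export matrix M (nc x np)."""
    nc, npr = M.shape
    F, Q = np.ones(nc), np.ones(npr)
    for _ in range(iters):
        F_new = M @ Q                          # fitness: sum of complexities
        Q_new = 1.0 / (M.T @ (1.0 / F))        # bounded by weakest exporters
        F = F_new / F_new.mean()               # normalize at every step
        Q = Q_new / Q_new.mean()
    return F, Q
```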
Numerical simulation and comparison of nonlinear self-focusing based on iteration and ray tracing
NASA Astrophysics Data System (ADS)
Li, Xiaotong; Chen, Hao; Wang, Weiwei; Ruan, Wangchao; Zhang, Luwei; Cen, Zhaofeng
2017-05-01
Self-focusing is observed in nonlinear materials owing to the interaction between laser and matter as a laser beam propagates. Numerical simulation strategies such as the beam propagation method (BPM), based on the nonlinear Schrödinger equation, and ray tracing, based on Fermat's principle, have been applied to simulate the self-focusing process. In this paper we present an iterative nonlinear ray tracing method in which the nonlinear material is likewise cut into many slices, as in the existing approaches, but instead of the paraxial approximation and split-step Fourier transform, a large number of sampled real rays are traced step by step through the system, with the refractive index and laser intensity updated by iteration. In this process a smoothing treatment is employed to generate a laser density distribution at each slice to decrease the error caused by under-sampling. The characteristic of this method is that the nonlinear refractive indices of the points on the current slice are calculated by iteration, so as to solve the problem of unknown parameters in the material caused by the causal relationship between laser intensity and nonlinear refractive index. Compared with the beam propagation method, this algorithm is more suitable for engineering applications, with lower time complexity, and has the capacity for numerical simulation of the self-focusing process in systems that include both linear and nonlinear optical media. If the sampled rays are traced with their complex amplitudes and light paths or phases, it will be possible to simulate the superposition effects of different beams. At the end of the paper, the advantages and disadvantages of this algorithm are discussed.
Identification of boiler inlet transfer functions and estimation of system parameters
NASA Technical Reports Server (NTRS)
Miles, J. H.
1972-01-01
An iterative computer method is described for identifying boiler transfer functions using frequency response data. An objective penalized performance measure and a nonlinear minimization technique are used to cause the locus of points generated by a transfer function to resemble the locus of points obtained from frequency response measurements. Different transfer functions can be tried until a satisfactory empirical transfer function of the system is found. To illustrate the method, some examples and some results from a study of a set of data consisting of measurements of the inlet impedance of a single tube forced flow boiler with inserts are given.
Numerical simulations of microwave heating of liquids: enhancements using Krylov subspace methods
NASA Astrophysics Data System (ADS)
Lollchund, M. R.; Dookhitram, K.; Sunhaloo, M. S.; Boojhawon, R.
2013-04-01
In this paper, we compare the performance of three iterative solvers for the large sparse linear systems arising in the numerical computation of the incompressible Navier-Stokes (NS) equations. These equations are employed mainly in the simulation of microwave heating of liquids. The emphasis of this work is on the application of Krylov projection techniques, such as the Generalized Minimal Residual (GMRES) method, to solve the pressure Poisson equations that result from discretisation of the NS equations. The performance of the GMRES method is compared with the traditional Gauss-Seidel (GS) and point successive over-relaxation (PSOR) techniques through their application to simulating the dynamics of water housed inside a vertical cylindrical vessel subjected to microwave radiation. It is found that as the mesh size increases, GMRES gives the fastest convergence rate in terms of computational time and number of iterations.
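A minimal reproduction of the GMRES leg of such a comparison is sketched below on a 2-D Poisson system standing in for the discretized pressure Poisson equation; the grid size and restart length are arbitrary assumptions, not the study's settings.

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import gmres

# 2-D Poisson system as a stand-in for a discretized pressure Poisson equation
n = 64
T = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n))
I = sp.identity(n)
A = (sp.kron(I, T) + sp.kron(T, I)).tocsr()   # 5-point Laplacian, n*n unknowns
b = np.ones(n * n)

x, info = gmres(A, b, restart=50)             # info == 0 signals convergence
print(info, np.linalg.norm(A @ x - b))        # residual check
```

A Gauss-Seidel or PSOR comparison would instead sweep the unknowns point by point each iteration, which is what makes the Krylov solver's mesh-independence advantage visible as the grid is refined.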
OpenMDAO: Framework for Flexible Multidisciplinary Design, Analysis and Optimization Methods
NASA Technical Reports Server (NTRS)
Heath, Christopher M.; Gray, Justin S.
2012-01-01
The OpenMDAO project is underway at NASA to develop a framework which simplifies the implementation of state-of-the-art tools and methods for multidisciplinary design, analysis and optimization. Foremost, OpenMDAO has been designed to handle variable problem formulations, encourage reconfigurability, and promote model reuse. This work demonstrates the concept of iteration hierarchies in OpenMDAO to achieve a flexible environment for supporting advanced optimization methods which include adaptive sampling and surrogate modeling techniques. In this effort, two efficient global optimization methods were applied to solve a constrained, single-objective and a constrained, multiobjective version of a joint aircraft/engine sizing problem. The aircraft model, NASA's next-generation advanced single-aisle civil transport, is being studied as part of the Subsonic Fixed Wing project to help meet simultaneous program goals for reduced fuel burn, emissions, and noise. This analysis serves as a realistic test problem to demonstrate the flexibility and reconfigurability offered by OpenMDAO.
A new method for the automatic interpretation of Schlumberger and Wenner sounding curves
Zohdy, A.A.R.
1989-01-01
A fast iterative method for the automatic interpretation of Schlumberger and Wenner sounding curves is based on obtaining interpreted depths and resistivities from shifted electrode spacings and adjusted apparent resistivities, respectively. The method is fully automatic. It does not require an initial guess of the number of layers, their thicknesses, or their resistivities; and it does not require extrapolation of incomplete sounding curves. The number of layers in the interpreted model equals the number of digitized points on the sounding curve. The resulting multilayer model is always well-behaved with no thin layers of unusually high or unusually low resistivities. For noisy data, interpretation is done in two sets of iterations (two passes). Anomalous layers, created because of noise in the first pass, are eliminated in the second pass. Such layers are eliminated by considering the best-fitting curve from the first pass to be a smoothed version of the observed curve and automatically reinterpreting it (second pass). The application of the method is illustrated by several examples. -Author
NASA Astrophysics Data System (ADS)
Xu, Jiayuan; Yu, Chengtao; Bo, Bin; Xue, Yu; Xu, Changfu; Chaminda, P. R. Dushantha; Hu, Chengbo; Peng, Kai
2018-03-01
The automatic recognition of high-voltage isolation switches by remote video monitoring is an effective means to ensure the safety of personnel and equipment. Existing methods mainly follow two approaches: improving monitoring accuracy through equipment transformation, and adopting target detection technology. Such methods are often tied to specific scenarios, with limited application scope and high cost. To solve this problem, a high-voltage isolation switch state recognition method based on background difference and iterative search is proposed in this paper. The initial position of the switch is detected in real time through the background difference method. When the switch starts to open or close, a target tracking algorithm is used to track the motion trajectory of the switch. The opening and closing state of the switch is determined according to the angle variation between the switch tracking point and the center line. The effectiveness of the method is verified by experiments on video frames of different switching states. Compared with traditional methods, this method is more robust and effective.
Fast non-overlapping Schwarz domain decomposition methods for solving the neutron diffusion equation
NASA Astrophysics Data System (ADS)
Jamelot, Erell; Ciarlet, Patrick
2013-05-01
Studying numerically the steady state of a nuclear reactor core is expensive in terms of memory storage and computational time. In order to address both requirements, one can use a domain decomposition method implemented on a parallel computer. We present here such a method for the mixed neutron diffusion equations, discretized with Raviart-Thomas-Nédélec finite elements. This method is based on the Schwarz iterative algorithm with Robin interface conditions to handle communications. We analyse this method from the continuous to the discrete point of view, and we give some numerical results in a realistic, highly heterogeneous 3D configuration. Computations are carried out with the MINOS solver of the APOLLO3® neutronics code. APOLLO3 is a registered trademark in France.
Contact stresses in gear teeth: A new method of analysis
NASA Technical Reports Server (NTRS)
Somprakit, Paisan; Huston, Ronald L.; Oswald, Fred B.
1991-01-01
A new, innovative procedure called point load superposition is presented for determining the contact stresses in mating gear teeth. It is believed that this procedure will greatly extend both the range of applicability and the accuracy of gear contact stress analysis. Point load superposition is based upon fundamental solutions from the theory of elasticity. It is an iterative numerical procedure which has distinct advantages over the classical Hertz method, the finite element method, and over existing applications of the boundary element method. Specifically, friction and sliding effects, which are either excluded from or difficult to study with the classical methods, are routinely handled with the new procedure. Presented here are the basic theory and the algorithms. Several examples are given. Results are consistent with those of the classical theories. Applications to spur gears are discussed.
Solution of effective Hamiltonian of impurity hopping between two sites in a metal
NASA Astrophysics Data System (ADS)
Ye, Jinwu
1998-03-01
We analyze in detail all the possible fixed points of the effective Hamiltonian of a non-magnetic impurity hopping between two sites in a metal, obtained by Moustakas and Fisher (MF). We find a line of non-Fermi-liquid fixed points which continuously interpolates between the two-channel Kondo (2CK) fixed point and the one-channel, two-impurity Kondo (2IK) fixed point. There is one relevant direction with scaling dimension 1/2 and one leading irrelevant operator with dimension 3/2. There is also one marginal operator in the spin sector moving along this line. The additional non-Fermi-liquid fixed point found by MF has the same symmetry as the 2IK; it has two relevant directions with scaling dimension 1/2 and is therefore also unstable. The system is shown to flow to a line of Fermi-liquid fixed points which continuously interpolates between the non-interacting fixed point and the two-channel spin-flavor Kondo (2CSFK) fixed point discussed by the author previously. The effect of particle-hole symmetry breaking is discussed. The effective Hamiltonian in an external magnetic field is analysed. The scaling functions for the physically measurable quantities are derived in the different regimes; their predictions for experiments are given. Finally, implications are given for a non-magnetic impurity hopping among three sites with triangular symmetry, discussed by MF.
Investigation on the pinch point position in heat exchangers
NASA Astrophysics Data System (ADS)
Pan, Lisheng; Shi, Weixiu
2016-06-01
The pinch point is important for analyzing heat transfer in thermodynamic cycles. With the aim of revealing the importance of determining the accurate pinch point, research on the pinch point position is carried out by theoretical methods. The results show that the pinch point position depends on the parameters of the heat transfer fluids and the major fluid properties. In most cases, the pinch point is located at the bubble point for the evaporator and the dew point for the condenser. However, the pinch point shifts to the supercooled liquid state in near-critical conditions for the evaporator. Similarly, it shifts to the superheated vapor state as the condensing temperature approaches the critical temperature for the condenser. It can even shift to the working fluid entrance of the evaporator or the supercritical heater when the heat source fluid temperature is very high compared with the heat absorption temperature. A wrong position for the pinch point may generate serious mistakes. In brief, the pinch point should be found by an iterative method in all conditions rather than taken for granted.
Barbee, David L; Flynn, Ryan T; Holden, James E; Nickles, Robert J; Jeraj, Robert
2010-01-01
Tumor heterogeneities observed in positron emission tomography (PET) imaging are frequently compromised by partial volume effects, which may affect treatment prognosis, assessment, or future implementations such as biologically optimized treatment planning (dose painting). This paper presents a method for partial volume correction of PET-imaged heterogeneous tumors. A point source was scanned on a GE Discovery LS at positions of increasing radii from the scanner’s center to obtain the spatially varying point spread function (PSF). PSF images were fit in three dimensions to Gaussian distributions using least squares optimization. Continuous expressions were devised for each Gaussian width as a function of radial distance, allowing for generation of the system PSF at any position in space. A spatially varying partial volume correction (SV-PVC) technique was developed using expectation maximization (EM) and a stopping criterion based on the method’s correction matrix generated at each iteration. The SV-PVC was validated using a standard tumor phantom and a tumor heterogeneity phantom, and was applied to a heterogeneous patient tumor. SV-PVC results were compared to results obtained from spatially invariant partial volume correction (SINV-PVC), which used directionally uniform three-dimensional kernels. SV-PVC of the standard tumor phantom increased the maximum observed sphere activity by 55 and 40% for 10 and 13 mm diameter spheres, respectively. Tumor heterogeneity phantom results demonstrated that as net changes in the EM correction matrix decreased below 35%, further iterations improved overall quantitative accuracy by less than 1%. SV-PVC of clinically observed tumors frequently exhibited changes of ±30% in regions of heterogeneity. The SV-PVC method implemented spatially varying kernel widths and automatically determined the number of iterations for optimal restoration, parameters which are arbitrarily chosen in SINV-PVC. Comparing SV-PVC to SINV-PVC demonstrated that similar results could be reached using both methods, but large differences result from the arbitrary selection of SINV-PVC parameters. The presented SV-PVC method was performed without user intervention, requiring only a tumor mask as input. Research involving PET-imaged tumor heterogeneity should include correcting for partial volume effects to improve the quantitative accuracy of results. PMID:20009194
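An EM-style deconvolution loop with an update-based stopping test is sketched below. It is a Richardson-Lucy-type sketch with a stationary Gaussian PSF; the spatially varying PSF and the exact correction-matrix stopping rule of the paper are replaced by illustrative stand-ins.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def em_pvc(img, sigma, max_iter=50, stop_change=0.35):
    """EM (Richardson-Lucy-style) partial volume correction sketch."""
    blur = lambda u: gaussian_filter(u, sigma)   # stationary PSF model
    est = img.astype(float).copy()
    for _ in range(max_iter):
        corr = blur(img / np.maximum(blur(est), 1e-12))  # EM correction matrix
        est = est * corr
        if np.abs(corr - 1.0).max() < stop_change:       # small net change: stop
            break
    return est
```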
Fast Appearance Modeling for Automatic Primary Video Object Segmentation.
Yang, Jiong; Price, Brian; Shen, Xiaohui; Lin, Zhe; Yuan, Junsong
2016-02-01
Automatic segmentation of the primary object in a video clip is a challenging problem, as there is no prior knowledge of the primary object. Most existing techniques thus adopt an iterative approach to foreground and background appearance modeling, i.e., fix the appearance model while optimizing the segmentation and fix the segmentation while optimizing the appearance model. However, these approaches may rely on good initialization and can easily be trapped in local optima. In addition, they are usually time-consuming for analyzing videos. To address these limitations, we propose a novel and efficient appearance modeling technique for automatic primary video object segmentation in the Markov random field (MRF) framework. It embeds the appearance constraint as auxiliary nodes and edges in the MRF structure, and can optimize both the segmentation and the appearance model parameters simultaneously in one graph cut. Extensive experimental evaluations validate the superiority of the proposed approach over state-of-the-art methods, in both efficiency and effectiveness.
Strehl-constrained iterative blind deconvolution for post-adaptive-optics data
NASA Astrophysics Data System (ADS)
Desiderà, G.; Carbillet, M.
2009-12-01
Aims: We aim to improve blind deconvolution applied to post-adaptive-optics (AO) data by taking into account one of their basic characteristics, resulting from the necessarily partial AO correction: the Strehl ratio. Methods: We apply a Strehl constraint in the framework of iterative blind deconvolution (IBD) of post-AO near-infrared images simulated in a detailed end-to-end manner and considering a case that is as realistic as possible. Results: The results obtained clearly show the advantage of using such a constraint, from the point of view of both performance and stability, especially for poorly AO-corrected data. The proposed algorithm has been implemented in the freely-distributed and CAOS-based Software Package AIRY.
NASA Technical Reports Server (NTRS)
1972-01-01
The QL module of the Performance Analysis and Design Synthesis (PADS) computer program is described. Execution of this module is initiated when and if subroutine PADSI calls subroutine GROPE. Subroutine GROPE controls the high level logical flow of the QL module. The purpose of the module is to determine a trajectory that satisfies the necessary variational conditions for optimal performance. The module achieves this by solving a nonlinear multi-point boundary value problem. The numerical method employed is described. It is an iterative technique that converges quadratically when it does converge. The three basic steps of the module are: (1) initialization, (2) iteration, and (3) culmination. For Volume 1 see N73-13199.
Floquet stability analysis of the longitudinal dynamics of two hovering model insects
Wu, Jiang Hao; Sun, Mao
2012-01-01
Because of the periodically varying aerodynamic and inertial forces of the flapping wings, a hovering or constant-speed flying insect is a cyclically forcing system, and, generally, the flight is not in a fixed-point equilibrium, but in a cyclic-motion equilibrium. Current stability theory of insect flight is based on the averaged model and treats the flight as a fixed-point equilibrium. In the present study, we treated the flight as a cyclic-motion equilibrium and used the Floquet theory to analyse the longitudinal stability of insect flight. Two hovering model insects were considered—a dronefly and a hawkmoth. The former had relatively high wingbeat frequency and small wing-mass to body-mass ratio, and hence very small amplitude of body oscillation; while the latter had relatively low wingbeat frequency and large wing-mass to body-mass ratio, and hence relatively large amplitude of body oscillation. For comparison, analysis using the averaged-model theory (fixed-point stability analysis) was also made. Results of both the cyclic-motion stability analysis and the fixed-point stability analysis were tested by numerical simulation using complete equations of motion coupled with the Navier–Stokes equations. The Floquet theory (cyclic-motion stability analysis) agreed well with the simulation for both the model dronefly and the model hawkmoth; but the averaged-model theory gave good results only for the dronefly. Thus, for an insect with relatively large body oscillation at wingbeat frequency, cyclic-motion stability analysis is required, and for their control analysis, the existing well-developed control theories for systems of fixed-point equilibrium are no longer applicable and new methods that take the cyclic variation of the flight dynamics into account are needed. PMID:22491980
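The Floquet analysis reduces to computing eigenvalues of the monodromy matrix of the periodic linearized dynamics. The sketch below shows that computation for a generic periodic system matrix A(t); the flapping-flight A(t) itself would come from the coupled Navier-Stokes/equations-of-motion simulation and is not modeled here.

```python
import numpy as np
from scipy.integrate import solve_ivp

def floquet_multipliers(A_of_t, T, n):
    """Floquet multipliers of dx/dt = A(t) x with A(t + T) = A(t): integrate
    the fundamental matrix over one period and take the eigenvalues of the
    monodromy matrix Phi(T). The cyclic-motion equilibrium is asymptotically
    stable when every multiplier lies strictly inside the unit circle."""
    def rhs(t, phi_flat):
        Phi = phi_flat.reshape(n, n)
        return (A_of_t(t) @ Phi).ravel()
    sol = solve_ivp(rhs, (0.0, T), np.eye(n).ravel(), rtol=1e-10, atol=1e-12)
    M = sol.y[:, -1].reshape(n, n)           # monodromy matrix Phi(T)
    return np.linalg.eigvals(M)
```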
NASA Astrophysics Data System (ADS)
Yamamoto, K.; Smith, M. C.
2016-09-01
This paper studies the problem of passive control of a multi-storey building subjected to an earthquake disturbance. The building is represented as a homogeneous mass chain model, i.e., a chain of identical masses in which there is an identical passive connection between neighbouring masses and a similar connection to a movable point. The paper considers passive interconnections of the most general type, which may require the use of inerters in addition to springs and dampers. It is shown that the scalar transfer functions from the disturbance to a given inter-storey drift can be represented as complex iterative maps. Using these expressions, two graphical approaches are proposed: one gives a method to achieve a prescribed value for the uniform boundedness of these transfer functions independent of the length of the mass chain, and the other is for a fixed length of the mass chain. A case study is presented to demonstrate the effectiveness of the proposed techniques using a 10-storey building model. The disturbance suppression performance of the designed interconnection is also verified for a 10-storey building model which has a different stiffness distribution but with the same undamped first natural frequency as the homogeneous model.
Milner, Allison; Aitken, Zoe; Kavanagh, Anne; LaMontagne, Anthony D; Pega, Frank; Petrie, Dennis
2017-06-23
Previous studies suggest that poor psychosocial job quality is a risk factor for mental health problems, but they use conventional regression analytic methods that cannot rule out reverse causation, unmeasured time-invariant confounding and reporting bias. This study combines two quasi-experimental approaches to improve causal inference by better accounting for these biases: (i) linear fixed effects regression analysis and (ii) linear instrumental variable analysis. We extract 13 annual waves of national cohort data including 13 260 working-age (18-64 years) employees. The exposure variable is self-reported level of psychosocial job quality. The instruments used are two common workplace entitlements. The outcome variable is the Mental Health Inventory (MHI-5). We adjust for measured time-varying confounders. In the fixed effects regression analysis adjusted for time-varying confounders, a 1-point increase in psychosocial job quality is associated with a 1.28-point improvement in mental health on the MHI-5 scale (95% CI: 1.17, 1.40; P < 0.001). When the fixed effects analysis is combined with the instrumental variable analysis, a 1-point increase in psychosocial job quality is related to a 1.62-point improvement on the MHI-5 scale (95% CI: -0.24, 3.48; P = 0.088). Our quasi-experimental results provide evidence to confirm job stressors as risk factors for mental ill health using methods that improve causal inference.
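The fixed effects ("within") estimator can be written as ordinary least squares on person-demeaned data, which is what removes time-invariant confounding. A schematic version with hypothetical column names (the IV stage would additionally replace the demeaned exposure by its projection onto the demeaned instruments):

```python
import pandas as pd

def within_estimator(df, y="mhi5", x="job_quality", unit="person_id"):
    """Linear fixed effects slope: regress person-demeaned y on
    person-demeaned x, sweeping out all time-invariant confounders.
    For 2SLS, x_t would be replaced by its fit on demeaned instruments."""
    g = df.groupby(unit)
    y_t = df[y] - g[y].transform("mean")
    x_t = df[x] - g[x].transform("mean")
    return (x_t * y_t).sum() / (x_t ** 2).sum()

# toy panel: three waves for two people
df = pd.DataFrame({
    "person_id":   [1, 1, 1, 2, 2, 2],
    "job_quality": [3, 4, 5, 2, 2, 3],
    "mhi5":        [60, 62, 65, 50, 49, 52],
})
print(within_estimator(df))
```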
ERIC Educational Resources Information Center
Hathout, Leith
2007-01-01
Counting the number of internal intersection points made by the diagonals of irregular convex polygons where no three diagonals are concurrent is an interesting problem in discrete mathematics. This paper uses an iterative approach to develop a summation relation which tallies the total number of intersections, and shows that this total can be…
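A standard result makes the tally concrete: with no three diagonals concurrent, every choice of four vertices of a convex n-gon produces exactly one interior crossing, so the total is C(n, 4); an iterative summation form of the same count follows from the hockey-stick identity. A quick check (my illustration, not the paper's derivation):

```python
from math import comb

def intersections(n):
    """Interior intersection points of the diagonals of a convex n-gon,
    no three diagonals concurrent: each 4-subset of vertices yields
    exactly one crossing, so the count is C(n, 4)."""
    return comb(n, 4)

def intersections_iterative(n):
    # the same total built up iteratively (hockey-stick identity):
    # C(n, 4) = sum_{k=3}^{n-1} C(k, 3)
    total = 0
    for k in range(3, n):
        total += comb(k, 3)
    return total

for n in range(4, 10):
    assert intersections(n) == intersections_iterative(n)
    print(n, intersections(n))
```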
Intrinsic alignments of galaxies in the MassiveBlack-II simulation: analysis of two-point statistics
NASA Astrophysics Data System (ADS)
Tenneti, Ananth; Singh, Sukhdeep; Mandelbaum, Rachel; di Matteo, Tiziana; Feng, Yu; Khandai, Nishikanta
2015-04-01
The intrinsic alignment of galaxies with the large-scale density field is an important astrophysical contaminant in upcoming weak lensing surveys. We present detailed measurements of the galaxy intrinsic alignments and associated ellipticity-direction (ED) and projected shape (wg+) correlation functions for galaxies in the cosmological hydrodynamic MassiveBlack-II simulation. We carefully assess the effects on galaxy shapes, misalignment of the stellar component with the dark matter shape and two-point statistics of iterative weighted (by mass and luminosity) definitions of the (reduced and unreduced) inertia tensor. We find that iterative procedures must be adopted for a reliable measurement of the reduced tensor but that luminosity versus mass weighting has only negligible effects. Both ED and wg+ correlations increase in amplitude with subhalo mass (in the range of 1010-6.0 × 1014 h-1 M⊙), with a weak redshift dependence (from z = 1 to 0.06) at fixed mass. At z ˜ 0.3, we predict a wg+ that is in reasonable agreement with Sloan Digital Sky Survey luminous red galaxy measurements and that decreases in amplitude by a factor of ˜5-18 for galaxies in the Large Synoptic Survey Telescope survey. We also compared the intrinsic alignments of centrals and satellites, with clear detection of satellite radial alignments within their host haloes. Finally, we show that wg+ (using subhaloes as tracers of density) and wδ+ (using dark matter density) predictions from the simulations agree with that of non-linear alignment (NLA) models at scales where the two-halo term dominates in the correlations (and tabulate associated NLA fitting parameters). The one-halo term induces a scale-dependent bias at small scales which is not modelled in the NLA model.
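The iterative reduced inertia tensor used for the shape measurements can be sketched as a fixed-point loop: weight each particle by the inverse square of an ellipsoidal radius measured in the current tensor's eigenbasis, recompute the tensor, and repeat until the axis ratios settle. A simplified, uniform-mass version (tolerances and normalizations are my choices):

```python
import numpy as np

def reduced_inertia_tensor(pos, n_iter=100, tol=1e-6):
    """Iterative reduced (1/r^2-weighted) inertia tensor of a particle set.
    pos : (N, 3) particle positions relative to the subhalo centre."""
    I = np.eye(3)
    for _ in range(n_iter):
        vals, vecs = np.linalg.eigh(I)
        vals = np.clip(vals, 1e-12 * vals.max(), None)
        axes = np.sqrt(vals / vals.max())          # normalized axis ratios
        q = (pos @ vecs) / axes                    # ellipsoidal coordinates
        r2 = np.einsum("ij,ij->i", q, q) + 1e-30
        w = 1.0 / r2                               # reduced-tensor weight
        I_new = np.einsum("i,ij,ik->jk", w, pos, pos) / w.sum()
        if np.allclose(I_new, I, rtol=tol):
            return I_new
        I = I_new
    return I

# toy flattened "galaxy"
rng = np.random.default_rng(0)
pts = rng.normal(size=(5000, 3)) * [1.0, 0.7, 0.4]
vals = np.linalg.eigvalsh(reduced_inertia_tensor(pts))
print("recovered axis ratios:", np.sqrt(vals / vals.max()))
```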
NASA Astrophysics Data System (ADS)
Budge, Scott E.; Badamikar, Neeraj S.; Xie, Xuan
2015-03-01
Several photogrammetry-based methods have been proposed that derive three-dimensional (3-D) information from digital images taken from different perspectives, and lidar-based methods have been proposed that merge lidar point clouds and texture the merged point clouds with digital imagery. Image registration alone has difficulty with smooth regions of low contrast, whereas point-cloud merging alone has difficulty with outliers and a lack of proper convergence in the merging process. This paper presents a method to create 3-D images that uses the unique properties of texel images (pixel-fused lidar and digital imagery) to improve the quality and robustness of fused 3-D images. The proposed method uses both image processing and point-cloud merging to combine texel images in an iterative technique. Since the digital image pixels and the lidar 3-D points are fused at the sensor level, more accurate 3-D images are generated because registration of the image data automatically improves the merging of the point clouds, and vice versa. Examples illustrate the value of this method over other methods. The proposed method also includes modifications for the situation where an estimate of the position and attitude of the sensor is known, such as that obtained from low-cost global positioning system (GPS) and inertial measurement unit (IMU) sensors.
A Registration Method Based on Contour Point Cloud for 3D Whole-Body PET and CT Images
Yang, Qiyao; Wang, Zhiguo; Zhang, Guoxu
2017-01-01
The PET and CT fusion image, combining the anatomical and functional information, has important clinical meaning. An effective registration of PET and CT images is the basis of image fusion. This paper presents a multithread registration method based on contour point cloud for 3D whole-body PET and CT images. Firstly, a geometric feature-based segmentation (GFS) method and a dynamic threshold denoising (DTD) method are creatively proposed to preprocess CT and PET images, respectively. Next, a new automated trunk slices extraction method is presented for extracting feature point clouds. Finally, the multithread Iterative Closest Point (ICP) algorithm is adopted to derive an affine transform. We compare our method with a multiresolution registration method based on Mattes Mutual Information on 13 pairs (246~286 slices per pair) of 3D whole-body PET and CT data. Experimental results demonstrate the registration effectiveness of our method with lower negative normalization correlation (NC = −0.933) on feature images and less Euclidean distance error (ED = 2.826) on landmark points, outperforming the source data (NC = −0.496, ED = 25.847) and the compared method (NC = −0.614, ED = 16.085). Moreover, our method is about ten times faster than the compared one. PMID:28316979
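The registration engine here is Iterative Closest Point: alternate nearest-neighbour correspondence with a best-fit transform until the error stalls. A minimal single-threaded rigid-ICP sketch (the paper's version is multithreaded and drives an affine transform; the Kabsch rotation below is just the simplest instance):

```python
import numpy as np
from scipy.spatial import cKDTree

def best_rigid(P, Q):
    """Least-squares rotation R and translation t mapping P onto Q (Kabsch)."""
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cp).T @ (Q - cq)
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    return R, cq - R @ cp

def icp(source, target, n_iter=50, tol=1e-8):
    """Classic ICP: nearest-neighbour matching + best-fit rigid transform."""
    tree = cKDTree(target)
    src, prev = source.copy(), np.inf
    for _ in range(n_iter):
        d, idx = tree.query(src)            # closest-point correspondences
        R, t = best_rigid(src, target[idx])
        src = src @ R.T + t
        err = (d ** 2).mean()
        if abs(prev - err) < tol:
            break
        prev = err
    return src, err
```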
Unified Lambert Tool for Massively Parallel Applications in Space Situational Awareness
NASA Astrophysics Data System (ADS)
Woollands, Robyn M.; Read, Julie; Hernandez, Kevin; Probe, Austin; Junkins, John L.
2018-03-01
This paper introduces a parallel-compiled tool that combines several of our recently developed methods for solving the perturbed Lambert problem using modified Chebyshev-Picard iteration. This tool (unified Lambert tool) consists of four individual algorithms, each of which is unique and better suited for solving a particular type of orbit transfer. The first is a Keplerian Lambert solver, which is used to provide a good initial guess (warm start) for solving the perturbed problem. It is also used to determine the appropriate algorithm to call for solving the perturbed problem. The arc length or true anomaly angle spanned by the transfer trajectory is the parameter that governs the automated selection of the appropriate perturbed algorithm, and is based on the respective algorithm convergence characteristics. The second algorithm solves the perturbed Lambert problem using the modified Chebyshev-Picard iteration two-point boundary value solver. This algorithm does not require a Newton-like shooting method and is the most efficient of the perturbed solvers presented herein; however, the domain of convergence is limited to about a third of an orbit and is dependent on eccentricity. The third algorithm extends the domain of convergence of the modified Chebyshev-Picard iteration two-point boundary value solver to about 90% of an orbit, through regularization with the Kustaanheimo-Stiefel transformation. This is the second most efficient of the perturbed set of algorithms. The fourth algorithm uses the method of particular solutions and the modified Chebyshev-Picard iteration initial value solver for solving multiple-revolution perturbed transfers. This method does require "shooting", but differs from Newton-like shooting methods in that it does not require propagation of a state transition matrix. The unified Lambert tool makes use of the General Mission Analysis Tool, and we use it to compute thousands of perturbed Lambert trajectories in parallel on the Space Situational Awareness computer cluster at the LASR Lab, Texas A&M University. We demonstrate the power of our tool by solving a highly parallel example problem, namely the generation of extremal field maps for optimal spacecraft rendezvous (and eventual orbit-debris removal). In addition, we demonstrate the need for including perturbative effects in simulations for satellite tracking or data association. The unified Lambert tool is ideal for, but not limited to, space situational awareness applications.
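Modified Chebyshev-Picard iteration expands the Picard integral in Chebyshev polynomials; the underlying fixed-point structure is easiest to see in a bare-bones Picard iteration on a time grid, x_{k+1}(t) = x_0 + ∫_0^t f(x_k(s)) ds. In the sketch below, trapezoidal quadrature stands in for the Chebyshev machinery and an unperturbed planar Kepler problem for the real dynamics:

```python
import numpy as np

def picard(f, x0, t, n_iter=200, tol=1e-12):
    """Picard iteration x_{k+1}(t) = x0 + int_0^t f(x_k(s)) ds on a grid.
    f : (n_t, d) -> (n_t, d), pointwise right-hand side."""
    x = np.tile(x0, (len(t), 1))
    for _ in range(n_iter):
        fx = f(x)
        # cumulative trapezoidal integral of each state component
        integ = np.concatenate([np.zeros((1, x.shape[1])),
                                np.cumsum(0.5 * (fx[1:] + fx[:-1])
                                          * np.diff(t)[:, None], axis=0)])
        x_new = x0 + integ
        if np.max(np.abs(x_new - x)) < tol:
            return x_new
        x = x_new
    return x

# planar Kepler problem, state [rx, ry, vx, vy], mu = 1
def kepler(x):
    r = np.linalg.norm(x[:, :2], axis=1, keepdims=True)
    return np.hstack([x[:, 2:], -x[:, :2] / r**3])

t = np.linspace(0.0, 1.0, 200)   # well inside the convergence domain
x = picard(kepler, np.array([1.0, 0.0, 0.0, 1.0]), t)
print(x[-1])
```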
Co-C and Pd-C Fixed Points for the Evaluation of Facilities and Scales Realization at INRIM and NMC
NASA Astrophysics Data System (ADS)
Battuello, M.; Wang, L.; Girard, F.; Ang, S. H.
2014-04-01
Two hybrid cells for realizing the Co-C and Pd-C fixed points, constructed at Istituto Nazionale di Ricerca Metrologica (INRIM), were used for an evaluation of the facilities and procedures adopted by INRIM and the National Metrology Centre (NMC) of Singapore for the realization of the solid-liquid phase transitions of high-temperature fixed points and for determining their transition temperatures. Four different furnaces were used for the investigations: two single-zone furnaces, one of them of the direct-heating type, and two identical three-zone furnaces. The transition temperatures were measured at both institutes by adopting different procedures for realizing the radiation scales: at INRIM, a scheme based on the extrapolation of fixed-point interpolated scales, and at NMC, an International Temperature Scale of 1990 (ITS-90) approach. The point of inflection (POI) of the melting curves was determined and assumed as a practical representation of the melting temperature. Different methods for deriving the POI were used, and differences as large as some hundredths of a kelvin were found among the different approaches. The POIs of the different melting curves were analyzed with respect to the different possible operating conditions with the aim of deriving reproducibility figures to improve the estimated uncertainty. As regards the inter-comparison between the institutes, differences of 0.13 K and 0.29 K were found between the INRIM and NMC determinations at the Co-C and Pd-C points, respectively. Such differences are compatible with the combined standard uncertainties of the comparison, which are estimated to be 0.33 K and 0.36 K at the Co-C and Pd-C points, respectively.
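The point of inflection of a melting curve is where its curvature changes sign, i.e., where the first derivative of the signal peaks; as the abstract notes, different numerical recipes for extracting it can disagree at the level of hundredths of a kelvin. A minimal version with a boxcar-smoothed finite-difference derivative (smoothing width and synthetic data are arbitrary choices):

```python
import numpy as np

def point_of_inflection(t, s, width=25):
    """POI of a melting curve s(t): the time where ds/dt peaks.
    width : boxcar half-width for smoothing; this choice alone can shift
            the POI, which is one source of the spread discussed above."""
    kernel = np.ones(2 * width + 1) / (2 * width + 1)
    s_smooth = np.convolve(s, kernel, mode="same")
    ds = np.gradient(s_smooth, t)
    return t[np.argmax(ds)]

# synthetic melting plateau: sigmoid ramp plus measurement noise
t = np.linspace(0.0, 600.0, 6000)                 # seconds
s = 1.0 / (1.0 + np.exp(-(t - 300.0) / 20.0))     # normalized radiance
s += np.random.default_rng(1).normal(0.0, 0.005, t.size)
print("POI at t =", point_of_inflection(t, s))    # ~300
```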
Spacecraft Attitude Maneuver Planning Using Genetic Algorithms
NASA Technical Reports Server (NTRS)
Kornfeld, Richard P.
2004-01-01
A key enabling technology that leads to greater spacecraft autonomy is the capability to autonomously and optimally slew the spacecraft between different attitudes while operating under a number of celestial and dynamic constraints. The task of finding an attitude trajectory that meets all the constraints is a formidable one, in particular for orbiting or fly-by spacecraft, where the constraints and the initial and final conditions are time-varying. The approach to attitude path planning presented here makes full use of a priori constraint knowledge and is computationally tractable enough to be executed onboard a spacecraft. The approach is based on incorporating the constraints into a cost function and using a genetic algorithm to iteratively search for and optimize the solution. This results in a directed random search that explores a large part of the solution space while maintaining the knowledge of good solutions from iteration to iteration. A solution obtained this way may be used as is or as an initial guess for additional deterministic optimization algorithms. A number of representative case examples for time-fixed and time-varying conditions yielded search times that are typically on the order of minutes, thus demonstrating the viability of this method. This approach is applicable to all deep-space and planet-Earth missions requiring greater spacecraft autonomy, and greatly facilitates navigation and science observation planning.
An improved genetic algorithm for designing optimal temporal patterns of neural stimulation
NASA Astrophysics Data System (ADS)
Cassar, Isaac R.; Titus, Nathan D.; Grill, Warren M.
2017-12-01
Objective. Electrical neuromodulation therapies typically apply constant-frequency stimulation, but non-regular temporal patterns of stimulation may be more effective and more efficient. However, the design space for temporal patterns is exceedingly large, and model-based optimization is required for pattern design. We designed and implemented a modified genetic algorithm (GA) intended for designing optimal temporal patterns of electrical neuromodulation. Approach. We tested and modified standard GA methods for application to designing temporal patterns of neural stimulation. We evaluated each modification individually and all modifications collectively by comparing performance to the standard GA across three test functions and two biophysically-based models of neural stimulation. Main results. The proposed modifications of the GA significantly improved performance across the test functions and performed best when all were used collectively. The standard GA found patterns that outperformed fixed-frequency, clinically-standard patterns in biophysically-based models of neural stimulation, but the modified GA, in many fewer iterations, consistently converged to higher-scoring, non-regular patterns of stimulation. Significance. The proposed improvements to standard GA methodology reduced the number of iterations required for convergence and identified superior solutions.
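A compact GA of the kind being modified, for binary temporal patterns (each gene gating one stimulation pulse), with tournament selection, one-point crossover, bit-flip mutation, and elitism. The fitness function is a placeholder, since in the paper the score comes from a biophysical model of neural stimulation:

```python
import numpy as np

rng = np.random.default_rng(0)

def fitness(pattern):
    # placeholder score: favour ~40% duty cycle with many transitions;
    # the paper replaces this with a biophysical model evaluation
    duty = pattern.mean()
    transitions = np.abs(np.diff(pattern)).sum()
    return -abs(duty - 0.4) + 0.01 * transitions

def tournament(pop, scores, k=3):
    idx = rng.choice(len(pop), k, replace=False)
    return pop[idx[np.argmax(scores[idx])]]

def ga(n_genes=60, pop_size=80, n_gen=100, p_mut=0.02, n_elite=2):
    pop = rng.integers(0, 2, size=(pop_size, n_genes))
    for _ in range(n_gen):
        scores = np.array([fitness(ind) for ind in pop])
        elite = pop[np.argsort(scores)[::-1][:n_elite]]
        children = [e.copy() for e in elite]             # elitism
        while len(children) < pop_size:
            a, b = tournament(pop, scores), tournament(pop, scores)
            cut = rng.integers(1, n_genes)               # one-point crossover
            child = np.concatenate([a[:cut], b[cut:]])
            child[rng.random(n_genes) < p_mut] ^= 1      # bit-flip mutation
            children.append(child)
        pop = np.array(children)
    scores = np.array([fitness(ind) for ind in pop])
    return pop[np.argmax(scores)], scores.max()

pattern, score = ga()
print(score, "".join(map(str, pattern)))
```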
NASA Astrophysics Data System (ADS)
Rosenberg, D. E.; Alafifi, A.
2016-12-01
Water resources systems analysis often focuses on finding optimal solutions. Yet an optimal solution is optimal only for the modelled issues, and managers often seek near-optimal alternatives that address un-modelled objectives, preferences, limits, uncertainties, and other issues. Early on, Modelling to Generate Alternatives (MGA) formalized near-optimal as the region comprising the original problem constraints plus a new constraint that allowed performance within a specified tolerance of the optimal objective function value. MGA identified a few maximally different alternatives from the near-optimal region. Subsequent work applied Markov Chain Monte Carlo (MCMC) sampling to generate a larger number of alternatives that span the near-optimal region of linear problems, or selected portions of it for non-linear problems. We extend the MCMC Hit-And-Run method to generate alternatives that span the full extent of the near-optimal region for non-linear, non-convex problems. First, start at a feasible hit point within the near-optimal region, then run a random distance in a random direction to a new hit point. Next, repeat until the desired number of alternatives is generated. The key step at each iteration is to run a random distance along the line in the specified direction to a new hit point. If linear equality constraints exist, we construct an orthogonal basis and use a null-space transformation to confine hits and runs to a lower-dimensional space. Linear inequality constraints define the convex bounds on the line that runs through the current hit point in the specified direction. We then use slice sampling to identify a new hit point along the line within bounds defined by the non-linear inequality constraints. This technique is computationally efficient compared to prior near-optimal alternative generation techniques such as MGA, MCMC Metropolis-Hastings, evolutionary, or firefly algorithms, because the search at each iteration is confined to the hit line, the algorithm can move in one step to any point in the near-optimal region, and each iteration generates a new, feasible alternative. We use the method to generate alternatives that span the near-optimal regions of simple and more complicated water management problems and that may be preferred to the optimal solutions. We also discuss extensions to handle non-linear equality constraints.
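One hit-and-run step is: draw a random direction, bracket the segment of the line inside the convex (box/linear) constraints, then pick a point on that segment that also satisfies the non-convex near-optimality constraint. The sketch below uses plain rejection along the line where the paper uses slice sampling; the toy objective and tolerance are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(0)

def near_optimal_hit_and_run(f, x0, f_star, tol, lo, hi, n_samples=1000):
    """Hit-and-run over {lo <= x <= hi, f(x) <= (1 + tol) * f_star}.
    x0 must start inside the region; f may be non-convex."""
    level = (1.0 + tol) * f_star
    inside = lambda x: np.all(x >= lo) and np.all(x <= hi) and f(x) <= level
    x, samples = x0.copy(), []
    for _ in range(n_samples):
        d = rng.normal(size=x.size)
        d /= np.linalg.norm(d)                 # random direction
        # segment of the line inside the box constraints (the convex part)
        t1, t2 = (lo - x) / d, (hi - x) / d
        t_lo = np.max(np.minimum(t1, t2))
        t_hi = np.min(np.maximum(t1, t2))
        # rejection along the segment handles the non-convex f-constraint
        # (the paper uses slice sampling here instead)
        for _ in range(100):
            t = rng.uniform(t_lo, t_hi)
            if inside(x + t * d):
                x = x + t * d
                break
        samples.append(x.copy())
    return np.array(samples)

# toy: f(x) = |x - 0.3|^2 + 1 with optimum f* = 1, kept within 10% of optimal
f = lambda x: np.sum((x - 0.3) ** 2) + 1.0
pts = near_optimal_hit_and_run(f, np.full(2, 0.3), f_star=1.0, tol=0.1,
                               lo=-np.ones(2), hi=np.ones(2))
print(pts.mean(axis=0), pts.std(axis=0))
```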
Dang, C; Xu, L
2001-03-01
In this paper a globally convergent Lagrange and barrier function iterative algorithm is proposed for approximating a solution of the traveling salesman problem. The algorithm employs an entropy-type barrier function to deal with nonnegativity constraints and Lagrange multipliers to handle linear equality constraints, and attempts to produce a solution of high quality by generating a minimum point of a barrier problem for a sequence of descending values of the barrier parameter. For any given value of the barrier parameter, the algorithm searches for a minimum point of the barrier problem in a feasible descent direction, which has a desired property that the nonnegativity constraints are always satisfied automatically if the step length is a number between zero and one. At each iteration the feasible descent direction is found by updating Lagrange multipliers with a globally convergent iterative procedure. For any given value of the barrier parameter, the algorithm converges to a stationary point of the barrier problem without any condition on the objective function. Theoretical and numerical results show that the algorithm seems more effective and efficient than the softassign algorithm.
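The interplay of an entropy barrier with Lagrange multipliers on assignment constraints can be illustrated with the closely related softassign/Sinkhorn update: for each fixed barrier parameter (temperature), alternate row and column balancing until the doubly stochastic constraints hold, then anneal the parameter downward. This is a generic illustration of the barrier-plus-multipliers idea, not the authors' exact algorithm:

```python
import numpy as np

def entropy_barrier_assignment(C, T0=1.0, T_min=1e-2, cool=0.9, n_inner=200):
    """Approximate a permutation minimizing <C, X> over doubly stochastic X:
    X_ij ~ exp(-C_ij / T) (entropy barrier), with row/column scalings (the
    Lagrange multipliers) enforced by Sinkhorn balancing, T descending."""
    T = T0
    while T > T_min:                           # descending barrier parameter
        # row-min shift keeps the exponentials well scaled at small T
        X = np.exp(-(C - C.min(axis=1, keepdims=True)) / T)
        for _ in range(n_inner):               # multiplier (balancing) updates
            X /= X.sum(axis=1, keepdims=True)
            X /= X.sum(axis=0, keepdims=True)
        T *= cool
    return X                                   # approaches a permutation matrix

rng = np.random.default_rng(0)
C = rng.random((6, 6))
X = entropy_barrier_assignment(C)
print(np.round(X, 2))
print("assignment cost:", (C * X).sum())
```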
Parallel Preconditioning for CFD Problems on the CM-5
NASA Technical Reports Server (NTRS)
Simon, Horst D.; Kremenetsky, Mark D.; Richardson, John; Lasinski, T. A. (Technical Monitor)
1994-01-01
To date, preconditioning methods on massively parallel systems have faced a major difficulty. The most successful preconditioning methods in terms of accelerating the convergence of the iterative solver, such as incomplete LU factorizations, are notoriously difficult to implement on parallel machines for two reasons: (1) the actual computation of the preconditioner is not very floating-point intensive but requires a large amount of unstructured communication, and (2) the application of the preconditioning matrix in the iteration phase (i.e., the triangular solves) is difficult to parallelize because of the recursive nature of the computation. Here we present a new approach to preconditioning for very large, sparse, unsymmetric linear systems which avoids both difficulties. We explicitly compute an approximate inverse of our original matrix. This new preconditioning matrix can be applied most efficiently in iterative methods on massively parallel machines, since the preconditioning phase involves only a matrix-vector multiplication, possibly with a dense matrix. Furthermore, the actual computation of the preconditioning matrix has natural parallelism: for a problem of size n, the preconditioning matrix can be computed by solving n independent small least-squares problems. The algorithm and its implementation on the Connection Machine CM-5 are discussed in detail and supported by extensive timings obtained from real problem data.
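The construction is embarrassingly parallel: column j of the approximate inverse solves a small least-squares problem that touches only a few rows of A. A serial sketch, with the sparsity pattern of A itself as a (hypothetical) pattern choice:

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import norm as spnorm

def spai(A, pattern=None):
    """Sparse approximate inverse M ~= A^{-1}: column j minimizes
    ||A m_j - e_j||_2 with m_j restricted to a sparsity pattern.
    The n least-squares problems are independent, hence parallelizable."""
    A = sp.csc_matrix(A)
    n = A.shape[0]
    P = sp.csc_matrix(pattern if pattern is not None else A)
    cols = []
    for j in range(n):
        J = P.indices[P.indptr[j]:P.indptr[j + 1]]   # allowed nonzeros of m_j
        Asub = A[:, J].toarray()
        rows = np.nonzero(Asub.any(axis=1))[0]       # rows actually involved
        rhs = (rows == j).astype(float)              # e_j restricted to rows
        m, *_ = np.linalg.lstsq(Asub[rows], rhs, rcond=None)
        col = np.zeros(n)
        col[J] = m
        cols.append(col)
    return sp.csc_matrix(np.column_stack(cols))

A = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(50, 50), format="csc")
M = spai(A)
print("||I - A M||_F =", spnorm(sp.eye(50) - A @ M))
```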
ULTRA: Underwater Localization for Transit and Reconnaissance Autonomy
NASA Technical Reports Server (NTRS)
Huntsberger, Terrance L.
2013-01-01
This software addresses the issue of underwater localization of unmanned vehicles and the inherent drift in their onboard sensors. The software gives a factor of 2 to 3 improvement over state-of-the-art underwater localization algorithms. The software determines the localization (position, heading) of an AUV (autonomous underwater vehicle) in environments where there is no GPS signal. It accomplishes this using only the commanded position, onboard gyros/accelerometers, and the bathymetry of the bottom provided by an onboard sonar system. The software does not rely on an onboard bathymetry dataset, but instead incrementally determines the position of the AUV while mapping the bottom. In order to enable long-distance underwater navigation by AUVs, a localization method called ULTRA uses registration of the bathymetry data products produced by the onboard forward-looking sonar system for hazard avoidance during a transit to derive the motion and pose of the AUV in order to correct the DR (dead reckoning) estimates. The registration algorithm uses iterative point matching (IPM) combined with the surface interpolation of the Iterative Closest Point (ICP) algorithm. This method was used previously at JPL for onboard unmanned ground vehicle localization, and has been optimized for efficient computational and memory use.
Modeling and analysis of the effect of training on V̇O2 kinetics and anaerobic capacity.
Stirling, J R; Zakynthinaki, M S; Billat, V
2008-07-01
In this paper, we present an application of a number of tools and concepts for modeling and analyzing raw, unaveraged, and unedited breath-by-breath oxygen-uptake data. A method for calculating anaerobic capacity is used together with a model, in the form of a set of coupled nonlinear ordinary differential equations, to make predictions of the V̇O2 kinetics, the time to achieve a percentage of a certain constant oxygen demand, and the time limit to exhaustion at intensities other than those for which we have data. Speeded oxygen kinetics and increased time limit to exhaustion are also investigated using the eigenvalues of the fixed points of our model. We also analyze the oxygen-uptake kinetics using a phase plot of the rate of change of V̇O2(t) against V̇O2(t), which allows one to observe both the fixed-point solutions and the presence of speeded oxygen kinetics following training. A plot of the eigenvalue versus oxygen demand is also used, which allows one to observe where the maximum amplitude of the so-called slow component will occur and also how training has changed the oxygen-uptake kinetics by changing the strength of the attracting fixed point for a particular demand.
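For a one-dimensional uptake model v' = f(v), the fixed point solves f(v*) = 0 and its eigenvalue is f'(v*): more negative means faster kinetics, and its drift with demand locates the slow component. A toy version of the eigenvalue-versus-demand analysis, with a hypothetical relaxation model in place of the authors' coupled equations:

```python
import numpy as np

def f(v, demand, tau_base=30.0, k=0.02):
    # toy uptake model: relaxation toward demand, slowing at high demand
    tau = tau_base * (1.0 + k * demand)
    return (demand - v) / tau

def fixed_point_and_eigenvalue(demand, h=1e-6):
    # fixed point v*: solve f(v*) = 0 by bisection on [0, demand + 1]
    lo, hi = 0.0, demand + 1.0
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if f(lo, demand) * f(mid, demand) <= 0:
            hi = mid
        else:
            lo = mid
    v_star = 0.5 * (lo + hi)
    # eigenvalue f'(v*) by central difference; negative => attracting
    eig = (f(v_star + h, demand) - f(v_star - h, demand)) / (2 * h)
    return v_star, eig

for demand in (10, 20, 40):
    v_star, eig = fixed_point_and_eigenvalue(demand)
    print(f"demand {demand}: v* = {v_star:.2f}, eigenvalue = {eig:.4f}")
```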
NASA Astrophysics Data System (ADS)
Cheng, X. Y.; Wang, H. B.; Jia, Y. L.; Dong, Y. H.
2018-05-01
In this paper, an open-closed-loop iterative learning control (ILC) algorithm is constructed for a class of nonlinear systems subject to random data dropouts. The ILC algorithm is implemented by a networked control system (NCS), where only the off-line data are transmitted over the network while the real-time data are delivered point-to-point. Thus, there are two controllers rather than one in the control system, which makes better use of the stored and current information and thereby improves the performance achieved by open-loop control alone. During the transfer of off-line data between the nonlinear plant and the remote controller, data dropout occurs randomly, and the dropout is modeled as a binary Bernoulli random variable. Both measurement and control data dropouts are taken into consideration simultaneously. The convergence criterion is derived based on rigorous analysis. Finally, simulation results verify the effectiveness of the proposed method.
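The flavour of the update law can be seen in a scalar open-closed-loop ILC sketch: the feedforward (open-loop) term learns from the previous trial's error, gated by a Bernoulli variable because it crosses the lossy network, while the feedback (closed-loop) term uses the locally available current error. Plant, gains, and dropout rate are all hypothetical, and the between-trial feedback pass is a crude stand-in for the real-time term:

```python
import numpy as np

rng = np.random.default_rng(0)

def run_trial(u, x0=0.0):
    # toy nonlinear plant: x_{t+1} = 0.8 x_t + 0.1 sin(x_t) + u_t, y_t = x_{t+1}
    x, y = x0, np.empty(len(u))
    for t, ut in enumerate(u):
        x = 0.8 * x + 0.1 * np.sin(x) + ut
        y[t] = x
    return y

T = 50
y_ref = np.sin(np.linspace(0.0, 2.0 * np.pi, T))   # desired trajectory
u = np.zeros(T)
L_ff, L_fb, p_drop = 0.5, 0.3, 0.2                 # learning gains, dropout prob.

for k in range(60):
    e = y_ref - run_trial(u)
    # open-loop term: previous-trial error crosses the network, so each
    # sample survives with probability 1 - p_drop (Bernoulli gating)
    gamma = (rng.random(T) >= p_drop).astype(float)
    u_ff = u + L_ff * gamma * e
    # closed-loop term: current error, delivered point-to-point (no dropout)
    u = u_ff + L_fb * (y_ref - run_trial(u_ff))

print("max tracking error after learning:",
      np.max(np.abs(y_ref - run_trial(u))))
```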
A novel surrogate-based approach for optimal design of electromagnetic-based circuits
NASA Astrophysics Data System (ADS)
Hassan, Abdel-Karim S. O.; Mohamed, Ahmed S. A.; Rabie, Azza A.; Etman, Ahmed S.
2016-02-01
A new geometric design centring approach for optimal design of central processing unit-intensive electromagnetic (EM)-based circuits is introduced. The approach uses norms related to the probability distribution of the circuit parameters to find distances from a point to the feasible region boundaries by solving nonlinear optimization problems. Based on these normed distances, the design centring problem is formulated as a max-min optimization problem. A convergent iterative boundary search technique is exploited to find the normed distances. To alleviate the computation cost associated with the EM-based circuits design cycle, space-mapping (SM) surrogates are used to create a sequence of iteratively updated feasible region approximations. In each SM feasible region approximation, the centring process using normed distances is implemented, leading to a better centre point. The process is repeated until a final design centre is attained. Practical examples are given to show the effectiveness of the new design centring method for EM-based circuits.
NASA Astrophysics Data System (ADS)
Chen, Hui; Deng, Ju-Zhi; Yin, Min; Yin, Chang-Chun; Tang, Wen-Wu
2017-03-01
To speed up three-dimensional (3D) DC resistivity modeling, we present a new multigrid method, the aggregation-based algebraic multigrid method (AGMG). We first discretize the differential equation of the secondary potential field with mixed boundary conditions by using a seven-point finite-difference method to obtain a large sparse system of linear equations. Then, we introduce the theory behind the pairwise aggregation algorithms for AGMG and use the conjugate-gradient method with the V-cycle AGMG preconditioner (AGMG-CG) to solve the linear equations. We use typical geoelectrical models to test the proposed AGMG-CG method and compare the results with analytical solutions and the 3DDCXH algorithm for 3D DC modeling (3DDCXH). In addition, we apply the AGMG-CG method to different grid sizes and geoelectrical models and compare it to different iterative methods, such as ILU-BICGSTAB, ILU-GCR, and SSOR-CG. The AGMG-CG method yields nearly linearly decreasing errors, whereas the number of iterations increases slowly with increasing grid size. The AGMG-CG method is precise and converges fast, and thus can improve the computational efficiency in forward modeling of three-dimensional DC resistivity.
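The outer loop is standard preconditioned conjugate gradients; the AGMG V-cycle simply plays the role of apply_prec below. A generic PCG sketch, with a Jacobi preconditioner standing in for the multigrid cycle:

```python
import numpy as np
import scipy.sparse as sp

def pcg(A, b, apply_prec, tol=1e-8, max_iter=500):
    """Preconditioned conjugate gradients for SPD A; apply_prec(r) should
    approximate A^{-1} r (one AGMG V-cycle, in the paper's setting)."""
    x = np.zeros_like(b)
    r = b - A @ x
    z = apply_prec(r)
    p = z.copy()
    rz = r @ z
    for k in range(max_iter):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) <= tol * np.linalg.norm(b):
            return x, k + 1
        z = apply_prec(r)
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x, max_iter

# 2-D Poisson model problem; Jacobi stands in for the multigrid V-cycle
n = 64
D = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n))
A = sp.kronsum(D, D, format="csr")
diag = A.diagonal()
x, iters = pcg(A, np.ones(A.shape[0]), lambda r: r / diag)
print("converged in", iters, "iterations")
```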
Multiscale reconstruction for MR fingerprinting.
Pierre, Eric Y; Ma, Dan; Chen, Yong; Badve, Chaitra; Griswold, Mark A
2016-06-01
To reduce the acquisition time needed to obtain reliable parametric maps with Magnetic Resonance Fingerprinting (MRF), an iterative-denoising algorithm is initialized by reconstructing the MRF image series at low image resolution. For subsequent iterations, the method enforces pixel-wise fidelity to the best-matching dictionary template and then enforces fidelity to the acquired data at slightly higher spatial resolution. After convergence, parametric maps with the desired spatial resolution are obtained through template matching of the final image series. The proposed method was evaluated on phantom and in vivo data using a highly undersampled, variable-density spiral trajectory and compared with the original MRF method. The benefits of additional sparsity constraints were also evaluated. When available, gold-standard parameter maps were used to quantify the performance of each method. The proposed approach allowed convergence to accurate parametric maps with as few as 300 time points of acquisition, as compared to 1000 in the original MRF work. Simultaneous quantification of T1, T2, proton density (PD), and B0 field variations in the brain was achieved in vivo for a 256 × 256 matrix in a total acquisition time of 10.2 s, representing a three-fold reduction in acquisition time. The proposed iterative multiscale reconstruction reliably increases MRF acquisition speed and accuracy. Magn Reson Med 75:2481-2492, 2016.
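The template-matching step at the heart of MRF is a normalized inner product between each voxel's signal evolution and every dictionary atom. A minimal sketch with a synthetic exponential-decay dictionary standing in for the Bloch-simulated one:

```python
import numpy as np

def match_fingerprints(signals, dictionary, params):
    """signals    : (n_voxels, n_timepoints) measured evolutions
    dictionary : (n_atoms, n_timepoints) simulated evolutions
    params     : (n_atoms,) parameter value (e.g. T2) of each atom
    Returns the best-matching parameter per voxel (max normalized dot)."""
    D = dictionary / np.linalg.norm(dictionary, axis=1, keepdims=True)
    S = signals / np.linalg.norm(signals, axis=1, keepdims=True)
    best = np.argmax(np.abs(S @ D.T), axis=1)     # inner-product matching
    return params[best]

t = np.linspace(0.0, 0.3, 300)                    # 300 time points, seconds
T2_grid = np.linspace(0.02, 0.2, 200)             # dictionary parameters
dictionary = np.exp(-t[None, :] / T2_grid[:, None])

rng = np.random.default_rng(0)
true_T2 = np.array([0.05, 0.11])
signals = np.exp(-t[None, :] / true_T2[:, None])
signals += rng.normal(0.0, 0.02, signals.shape)   # measurement noise
print(match_fingerprints(signals, dictionary, T2_grid))  # ~ [0.05, 0.11]
```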
Evaluation of coupling approaches for thermomechanical simulations
Novascone, S. R.; Spencer, B. W.; Hales, J. D.; ...
2015-08-10
Many problems of interest, particularly in the nuclear engineering field, involve coupling between the thermal and mechanical response of an engineered system. The strength of the two-way feedback between the thermal and mechanical solution fields can vary significantly depending on the problem. Contact problems exhibit a particularly high degree of two-way feedback between those fields. This paper describes and demonstrates the application of a flexible simulation environment that permits the solution of coupled physics problems using either a tightly coupled approach or a loosely coupled approach. In the tight coupling approach, Newton iterations include the coupling effects between all physics, while in the loosely coupled approach, the individual physics models are solved independently, and fixed-point iterations are performed until the coupled system is converged. These approaches are applied to simple demonstration problems and to realistic nuclear engineering applications. The demonstration problems consist of single and multi-domain thermomechanics with and without thermal and mechanical contact. Simulations of a reactor pressure vessel under pressurized thermal shock conditions and a simulation of light water reactor fuel are also presented. Here, problems that include thermal and mechanical contact, such as the contact between the fuel and cladding in the fuel simulation, exhibit much stronger two-way feedback between the thermal and mechanical solutions, and as a result, are better solved using a tight coupling strategy.
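The loose coupling described above is a fixed-point (Picard) iteration between single-physics solves, and it converges only while the product of the cross-coupling sensitivities stays below one; that is exactly why strongly coupled contact problems favour tight Newton coupling. A schematic with two scalar "solvers" as hypothetical stand-ins for the thermal and mechanical physics:

```python
def solve_thermal(u_mech):
    # single-physics thermal solve with mechanics frozen (toy closed form)
    return 1.0 + 0.3 * u_mech            # temperature given displacement

def solve_mech(u_temp, feedback):
    # single-physics mechanical solve with temperature frozen
    return feedback * u_temp             # thermal expansion drives displacement

def loose_coupling(feedback, tol=1e-10, max_iter=200):
    """Fixed-point iteration between the two solvers. The product of the
    coupling sensitivities (0.3 * feedback) must be < 1 to converge;
    stronger feedback (e.g. contact) calls for tight Newton coupling."""
    T, u = 0.0, 0.0
    for k in range(max_iter):
        T_new = solve_thermal(u)
        u_new = solve_mech(T_new, feedback)
        if abs(T_new - T) + abs(u_new - u) < tol:
            return T_new, u_new, k + 1
        T, u = T_new, u_new
    raise RuntimeError("loose coupling did not converge")

for fb in (0.5, 4.0):                    # weak vs. too-strong feedback
    try:
        print(fb, loose_coupling(fb))
    except RuntimeError as err:
        print(fb, err)
```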
Mang, Andreas; Biros, George
2017-01-01
We propose an efficient numerical algorithm for the solution of diffeomorphic image registration problems. We use a variational formulation constrained by a partial differential equation (PDE), where the constraints are a scalar transport equation. We use a pseudospectral discretization in space and a second-order accurate semi-Lagrangian time-stepping scheme for the transport equations. We solve for a stationary velocity field using a preconditioned, globalized, matrix-free Newton-Krylov scheme. We propose and test a two-level Hessian preconditioner. We consider two strategies for inverting the preconditioner on the coarse grid: a nested preconditioned conjugate gradient method (exact solve) and a nested Chebyshev iterative method (inexact solve) with a fixed number of iterations. We test the performance of our solver in different synthetic and real-world two-dimensional application scenarios. We study grid convergence and computational efficiency of our new scheme. We compare the performance of our solver against our initial implementation that uses the same spatial discretization but a standard, explicit, second-order Runge-Kutta scheme for the numerical time integration of the transport equations and a single-level preconditioner. Our improved scheme delivers significant speedups over our original implementation. As a highlight, we observe a 20× speedup for a two-dimensional, real-world multi-subject medical image registration problem.
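The "inexact solve" option replaces the inner CG by a fixed number of Chebyshev iterations, which need eigenvalue bounds but no inner products, so the work per application is fixed a priori. A standalone sketch of Chebyshev iteration for an SPD model problem (the recurrence follows the textbook form; the eigenvalue bounds of the 1-D Laplacian are known in closed form):

```python
import numpy as np

def chebyshev(A, b, lam_min, lam_max, n_iter=50):
    """Chebyshev iteration for SPD A with spectrum in [lam_min, lam_max]
    (cf. Saad, Iterative Methods for Sparse Linear Systems, Alg. 12.1).
    No inner products are needed, so the iteration count can be fixed
    in advance, which is convenient inside a preconditioner."""
    theta = 0.5 * (lam_max + lam_min)     # centre of the spectrum
    delta = 0.5 * (lam_max - lam_min)     # half-width
    x = np.zeros_like(b)
    r = b - A @ x
    sigma1 = theta / delta
    rho = 1.0 / sigma1
    d = r / theta
    for _ in range(n_iter):
        x = x + d
        r = r - A @ d
        rho_new = 1.0 / (2.0 * sigma1 - rho)
        d = rho_new * rho * d + (2.0 * rho_new / delta) * r
        rho = rho_new
    return x

# 1-D Laplacian test: eigenvalues 4 sin^2(k pi / (2(n+1))), k = 1..n
n = 100
A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
lmin = 4 * np.sin(np.pi / (2 * (n + 1))) ** 2
lmax = 4 * np.cos(np.pi / (2 * (n + 1))) ** 2
b = np.ones(n)
x = chebyshev(A, b, lmin, lmax, n_iter=200)
print("residual:", np.linalg.norm(b - A @ x))
```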
Implicit Contractive Mappings in Modular Metric and Fuzzy Metric Spaces
Hussain, N.; Salimi, P.
2014-01-01
The notion of modular metric spaces, a natural generalization of classical modulars over linear spaces like Lebesgue, Orlicz, Musielak-Orlicz, Lorentz, Orlicz-Lorentz, and Calderon-Lozanovskii spaces, was recently introduced. In this paper we investigate the existence of fixed points of generalized α-admissible modular contractive mappings in modular metric spaces. As applications, we derive some new fixed point theorems in partially ordered modular metric spaces, Suzuki-type fixed point theorems in modular metric spaces, and new fixed point theorems for integral contractions. In the last section, we develop an important relation between fuzzy metric and modular metric and deduce certain new fixed point results in triangular fuzzy metric spaces. Moreover, some examples are provided to illustrate the usability of the obtained results. PMID:25003157
Sitnikov cyclic configuration of N+1-body problem
NASA Astrophysics Data System (ADS)
Shahbaz Ullah, M.; Hassan, M. R.
2014-12-01
This manuscript deals with the generalisation of all previous works on series solutions and linear stability of equilibrium points of the Sitnikov problem. Following Giacaglia (1967), in Sect. 2 we have derived the equation of motion of the infinitesimal mass moving along the z-axis, about which the plane of motion is rotating with unit angular velocity. In Sects. 3, 4 and 5 the series solutions of the Sitnikov problem have been developed by the method of MacMillan, the method of Lindstedt-Poincaré, and iteration of Green's function, respectively. In Sect. 6 the three series solutions have been compared graphically by putting N=2, 3, 4. In Sect. 7 the coordinates of the equilibrium points have been calculated. In Sect. 8 the linear stability of the equilibrium points has been examined by the method of Murray and Dermott (Solar System Dynamics, Cambridge University Press, Cambridge, 1999), and it was found that the equilibrium points are stable in the Sitnikov problem.
2016-01-01
Multi-criteria decision-making (MCDM) can be formally implemented by various methods. This study compares the suitability of four selected MCDM methods, namely WPM, TOPSIS, VIKOR, and PROMETHEE, for future applications in agent-based computational economic (ACE) models of larger scale (i.e., over 10 000 agents in one geographical region). These four MCDM methods were selected according to their appropriateness for computational processing in ACE applications. Tests of the selected methods were conducted on four hardware configurations. For each method, 100 tests were performed, which represented one testing iteration. With four testing iterations conducted on each hardware setting and separate testing of all configurations with the -server parameter activated and deactivated, altogether 12 800 data points were collected and subsequently analyzed. An illustrative decision-making scenario was used which allows the mutual comparison of all of the selected decision-making methods. Our test results suggest that although all methods are convenient and can be used in practice, the VIKOR method accomplished the tests with the best results and thus can be recommended as the most suitable for simulations of large-scale agent-based models. PMID:27806061
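Of the four methods, TOPSIS is the most compact to state: normalize the decision matrix, weight it, and score each alternative by its relative closeness to the ideal and anti-ideal points. A sketch with hypothetical criteria and weights (VIKOR, the study's best performer, differs mainly in how it aggregates group utility and individual regret):

```python
import numpy as np

def topsis(X, w, benefit):
    """X       : (m alternatives, n criteria) decision matrix
    w       : (n,) criterion weights summing to 1
    benefit : (n,) True where larger is better
    Returns closeness scores in [0, 1]; higher ranks better."""
    R = X / np.linalg.norm(X, axis=0)          # vector normalization
    V = R * w                                  # weighted normalized matrix
    ideal = np.where(benefit, V.max(axis=0), V.min(axis=0))
    anti = np.where(benefit, V.min(axis=0), V.max(axis=0))
    d_pos = np.linalg.norm(V - ideal, axis=1)
    d_neg = np.linalg.norm(V - anti, axis=1)
    return d_neg / (d_pos + d_neg)

# three alternatives scored on cost (lower better), capacity, speed
X = np.array([[250.0, 16.0, 12.0],
              [200.0, 16.0,  8.0],
              [300.0, 32.0, 16.0]])
w = np.array([0.4, 0.4, 0.2])
benefit = np.array([False, True, True])
print(topsis(X, w, benefit))                   # closeness of each alternative
```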
NASA Astrophysics Data System (ADS)
Zhang, Lijuan; Li, Yang; Wang, Junnan; Liu, Ying
2018-03-01
In this paper, we propose a point spread function (PSF) reconstruction method and a joint maximum a posteriori (JMAP) estimation method for adaptive optics (AO) image restoration. Using the JMAP method as the basic principle, we establish the joint log-likelihood function of multi-frame AO images based on Gaussian image-noise models. Combining the observing conditions and the AO system characteristics, a predicted PSF model accounting for the wavefront phase effect is developed; we then derive iterative solution formulas for the AO image under the proposed algorithm and describe the implementation of the multi-frame AO image joint deconvolution method. We conduct a series of experiments on simulated and real degraded AO images to evaluate the proposed algorithm. Compared with the Wiener iterative blind deconvolution (Wiener-IBD) algorithm and the Richardson-Lucy IBD algorithm, our algorithm achieves better restoration, including higher peak signal-to-noise ratio (PSNR) and Laplacian sum (LS) values. The results have application value for actual AO image restoration.
Lax-Friedrichs sweeping scheme for static Hamilton-Jacobi equations
NASA Astrophysics Data System (ADS)
Kao, Chiu Yen; Osher, Stanley; Qian, Jianliang
2004-05-01
We propose a simple, fast sweeping method based on the Lax-Friedrichs monotone numerical Hamiltonian to approximate viscosity solutions of arbitrary static Hamilton-Jacobi equations in any number of spatial dimensions. By using the Lax-Friedrichs numerical Hamiltonian, we can easily obtain the solution at a specific grid point in terms of its neighbors, so that a Gauss-Seidel type nonlinear iterative method can be utilized. Furthermore, by incorporating a group-wise causality principle into the Gauss-Seidel iteration by following a finite group of characteristics, we have an easy-to-implement, sweeping-type, and fast convergent numerical method. However, unlike other methods based on the Godunov numerical Hamiltonian, some computational boundary conditions are needed in the implementation. We give a simple recipe which enforces a version of discrete min-max principle. Some convergence analysis is done for the one-dimensional eikonal equation. Extensive 2-D and 3-D numerical examples illustrate the efficiency and accuracy of the new approach. To our knowledge, this is the first fast numerical method based on discretizing the Hamilton-Jacobi equation directly without assuming convexity and/or homogeneity of the Hamiltonian.
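In one dimension the update has a closed form: setting the Lax-Friedrichs numerical Hamiltonian equal to the right-hand side and solving for u_i gives u_i = (h/σ)(f_i − H((u_{i+1} − u_{i−1})/(2h))) + (u_{i+1} + u_{i−1})/2, swept alternately left-to-right and right-to-left. A minimal sketch for |u'| = 1 on [0, 1] with zero boundary data (initial guess and sweep count are arbitrary choices):

```python
import numpy as np

def lf_sweep_eikonal(f, n=201, sigma=1.0, n_sweeps=50, big=1e6):
    """Lax-Friedrichs sweeping for |u'(x)| = f(x) on [0, 1], u(0) = u(1) = 0.
    sigma must bound |H'(p)| (= 1 for the eikonal Hamiltonian H(p) = |p|)."""
    x = np.linspace(0.0, 1.0, n)
    h = x[1] - x[0]
    u = np.full(n, big)                # large initial guess in the interior
    u[0] = u[-1] = 0.0                 # boundary data
    for _ in range(n_sweeps):
        # alternate left-to-right and right-to-left Gauss-Seidel sweeps
        for order in (range(1, n - 1), range(n - 2, 0, -1)):
            for i in order:
                p = (u[i + 1] - u[i - 1]) / (2.0 * h)
                u[i] = (h / sigma) * (f(x[i]) - abs(p)) \
                       + 0.5 * (u[i + 1] + u[i - 1])
    return x, u

x, u = lf_sweep_eikonal(lambda s: 1.0)
# viscosity solution of |u'| = 1 with zero boundary data: distance to boundary
print("max error vs. min(x, 1-x):", np.abs(u - np.minimum(x, 1.0 - x)).max())
```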
Preconditioned Mixed Spectral Element Methods for Elasticity and Stokes Problems
NASA Technical Reports Server (NTRS)
Pavarino, Luca F.
1996-01-01
Preconditioned iterative methods for the indefinite systems obtained by discretizing the linear elasticity and Stokes problems with mixed spectral elements in three dimensions are introduced and analyzed. The resulting stiffness matrices have the structure of saddle-point problems with a penalty term, which is associated with the Poisson ratio for elasticity problems or with stabilization techniques for Stokes problems. The main results of this paper show that the convergence rate of the resulting algorithms is independent of the penalty parameter and the number of spectral elements N, and mildly dependent on the spectral degree n via the inf-sup constant. The preconditioners proposed for the whole indefinite system are block-diagonal and block-triangular. Numerical experiments presented in the final section show that these algorithms are a practical and efficient strategy for the iterative solution of the indefinite problems arising from mixed spectral element discretizations of elliptic systems.