Sample records for faster convergence rate

  1. A Multistrategy Optimization Improved Artificial Bee Colony Algorithm

    PubMed Central

    Liu, Wen

    2014-01-01

    The artificial bee colony algorithm is prone to premature convergence and a slow convergence rate, so an improved algorithm was proposed. A chaotic reverse-learning strategy was used to initialize the swarm, improving the global search ability of the algorithm and preserving population diversity; the similarity between individuals was used to characterize that diversity, and the diversity measure served as an indicator for dynamically and adaptively adjusting nectar positions, effectively avoiding premature and local convergence. A dual-population search mechanism was introduced in the search stage of the algorithm, and the parallel search of the two populations considerably improved the convergence rate. Simulation experiments on 10 standard test functions and comparisons with other algorithms showed that the improved algorithm converged faster and escaped local optima more readily. PMID:24982924

  2. On adaptive learning rate that guarantees convergence in feedforward networks.

    PubMed

    Behera, Laxmidhar; Kumar, Swagat; Patnaik, Awhan

    2006-09-01

    This paper investigates new learning algorithms (LF I and LF II) based on a Lyapunov function for the training of feedforward neural networks. Such algorithms have an interesting parallel with the popular backpropagation (BP) algorithm, in which the fixed learning rate is replaced by an adaptive learning rate computed using a convergence theorem based on Lyapunov stability theory. LF II, a modified version of LF I, has been introduced with the aim of avoiding local minima; this modification also helps improve the convergence speed in some cases. Conditions for achieving the global minimum for this kind of algorithm have been studied in detail. The performance of the proposed algorithms is compared with the BP algorithm and extended Kalman filtering (EKF) on three benchmark function approximation problems: XOR, 3-bit parity, and the 8-3 encoder. The comparisons are made in terms of the number of learning iterations and the computational time required for convergence. The proposed algorithms (LF I and II) are found to converge much faster than the other two algorithms in attaining the same accuracy. Finally, the comparison is made on a complex two-dimensional (2-D) Gabor function, and the effect of the adaptive learning rate on faster convergence is verified. In a nutshell, the investigations in this paper help us better understand the learning procedure of feedforward neural networks in terms of adaptive learning rate, convergence speed, and local minima.
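
    The general adaptive-learning-rate idea above can be illustrated with a minimal sketch. This uses a simple "bold driver" adaptation rule on a 1-D quadratic, which is only a stand-in for the idea of replacing a fixed rate with an adaptive one; it is not the paper's Lyapunov-derived LF I/II rates.

```python
def train(adaptive, lr=0.01, steps=200):
    """Minimise f(w) = (w - 3)^2 by gradient descent.

    With adaptive=True, an illustrative 'bold driver' rule (NOT the
    paper's Lyapunov-based rate) grows the learning rate after an
    improving step and shrinks it after a worsening one.
    """
    w, prev_loss = 0.0, float("inf")
    for _ in range(steps):
        loss = (w - 3.0) ** 2
        grad = 2.0 * (w - 3.0)
        if adaptive:
            lr = lr * 1.1 if loss < prev_loss else lr * 0.5
        prev_loss = loss
        w -= lr * grad
    return w

w_fixed = train(adaptive=False)  # fixed small rate: still far from w* = 3
w_adapt = train(adaptive=True)   # adaptive rate: much closer after the same budget
```

    With the same step budget, the adaptive run ends far closer to the optimum, which is the qualitative effect the paper establishes rigorously.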

  3. Comparison results on preconditioned SOR-type iterative method for Z-matrices linear systems

    NASA Astrophysics Data System (ADS)

    Wang, Xue-Zhong; Huang, Ting-Zhu; Fu, Ying-Ding

    2007-09-01

    In this paper, we present some comparison theorems on preconditioned iterative methods for solving Z-matrix linear systems. The comparison results show that the rate of convergence of the Gauss-Seidel-type method is faster than that of the SOR-type iterative method.
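
    For reference, a minimal sketch of the plain (unpreconditioned) SOR iteration, which reduces to Gauss-Seidel at ω = 1. The small Z-matrix below is a hypothetical example, and the sketch omits the paper's preconditioners, on which the comparison result actually depends.

```python
import numpy as np

def sor(A, b, omega=1.0, tol=1e-10, max_iter=10_000):
    """SOR iteration for A x = b; omega = 1 gives Gauss-Seidel."""
    n = len(b)
    x = np.zeros(n)
    for k in range(1, max_iter + 1):
        x_old = x.copy()
        for i in range(n):
            # Updated entries x[:i] from this sweep, old entries beyond i.
            sigma = A[i, :i] @ x[:i] + A[i, i + 1:] @ x_old[i + 1:]
            x[i] = (1 - omega) * x_old[i] + omega * (b[i] - sigma) / A[i, i]
        if np.linalg.norm(x - x_old, ord=np.inf) < tol:
            return x, k
    return x, max_iter

# A small diagonally dominant Z-matrix (non-positive off-diagonal entries).
A = np.array([[ 4.0, -1.0, -1.0],
              [-1.0,  4.0, -1.0],
              [-1.0, -1.0,  4.0]])
b = np.array([2.0, 2.0, 2.0])

x_gs, it_gs = sor(A, b, omega=1.0)   # Gauss-Seidel
x_sor, it_sor = sor(A, b, omega=1.2) # SOR with an arbitrary omega
```

    Both runs converge to the exact solution (here x = [1, 1, 1]); which variant needs fewer sweeps depends on ω and on the preconditioning that this sketch omits.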

  4. Suicide rates in European OECD nations converged during the period 1990-2010.

    PubMed

    Bremberg, Sven G

    2017-05-01

    The aim of this study was to investigate, with multiple regression analyses, the effect of selected characteristics on the rate of decrease of suicide rates in 21 OECD (Organisation for Economic Co-operation and Development) nations over the period 1990-2010, with initial levels of suicide rates taken into account. The rate of decrease seems mainly (83%) to be determined by the initial suicide rates in 1990. In nations with relatively high initial rates, the rates decreased faster. The suicide rates also converged. The study indicates that beta convergence alone explained most of the cross-national variations.

  5. Coarse Alignment Technology on Moving base for SINS Based on the Improved Quaternion Filter Algorithm.

    PubMed

    Zhang, Tao; Zhu, Yongyun; Zhou, Feng; Yan, Yaxiong; Tong, Jinwu

    2017-06-17

    Initial alignment of the strapdown inertial navigation system (SINS) is intended to determine the initial attitude matrix in a short time with a certain accuracy. The alignment accuracy of the quaternion filter algorithm is remarkable, but its convergence rate is slow. To solve this problem, this paper proposes an improved quaternion filter algorithm for faster initial alignment, based on the error model of the quaternion filter algorithm. The improved algorithm constructs the K matrix based on the principle of the optimal quaternion algorithm and rebuilds the measurement model to include acceleration and velocity errors, making the convergence rate faster. A Doppler velocity log (DVL) provides the reference velocity for the improved alignment algorithm. To demonstrate the performance of the improved quaternion filter algorithm in the field, a turntable experiment and a vehicle test were carried out. The results show that the convergence rate of the proposed improved quaternion filter is faster than that of the traditional quaternion filter algorithm. In addition, the improved algorithm also demonstrates advantages in terms of correctness, effectiveness, and practicability.

  6. Generalized Bregman distances and convergence rates for non-convex regularization methods

    NASA Astrophysics Data System (ADS)

    Grasmair, Markus

    2010-11-01

    We generalize the notion of Bregman distance using concepts from abstract convexity in order to derive convergence rates for Tikhonov regularization with non-convex regularization terms. In particular, we study the non-convex regularization of linear operator equations on Hilbert spaces, showing that the conditions required for the application of the convergence rates results are strongly related to the standard range conditions from the convex case. Moreover, we consider the setting of sparse regularization, where we show that a rate of order δ^{1/p} holds if the regularization term has a slightly faster growth at zero than |t|^p.

  7. Distributed Sensing and Processing: A Graphical Model Approach

    DTIC Science & Technology

    2005-11-30

    ... that Ramanujan graph topologies maximize the convergence rate of distributed detection consensus algorithms, improving over three orders of ... small-world-type network designs. Subject terms: Ramanujan graphs, sensor network topology. ... that Ramanujan graphs, for which there are explicit algebraic constructions, have large eigenratios, converging much faster than structured graphs.

  8. A Coarse-Alignment Method Based on the Optimal-REQUEST Algorithm

    PubMed Central

    Zhu, Yongyun

    2018-01-01

    In this paper, we proposed a coarse-alignment method for strapdown inertial navigation systems based on attitude determination. The observation vectors, which can be obtained by inertial sensors, usually contain various types of noise, which affects the convergence rate and the accuracy of the coarse alignment. Given this drawback, we studied an attitude-determination method named optimal-REQUEST, which is an optimal method for attitude determination that is based on observation vectors. Compared to the traditional attitude-determination method, the filtering gain of the proposed method is tuned autonomously; thus, the convergence rate of the attitude determination is faster than in the traditional method. Within the proposed method, we developed an iterative method for determining the attitude quaternion. We carried out simulation and turntable tests, which we used to validate the proposed method’s performance. The experiment’s results showed that the convergence rate of the proposed optimal-REQUEST algorithm is faster and that the coarse alignment’s stability is higher. In summary, the proposed method has a high applicability to practical systems. PMID:29337895

  9. Weighted least squares phase unwrapping based on the wavelet transform

    NASA Astrophysics Data System (ADS)

    Chen, Jiafeng; Chen, Haiqin; Yang, Zhengang; Ren, Haixia

    2007-01-01

    The weighted least squares phase unwrapping algorithm is a robust and accurate method for solving the phase unwrapping problem. This method usually leads to a large sparse linear equation system. The Gauss-Seidel relaxation iterative method is usually used to solve this large system; however, it is not practical due to its extremely slow convergence. The multigrid method is an efficient algorithm for improving the convergence rate, but it needs an additional weight restriction operator which is very complicated. For this reason, a multiresolution analysis method based on the wavelet transform is proposed. By applying the wavelet transform, the original system is decomposed into its coarse and fine resolution levels, and an equivalent equation system with a better convergence condition can be obtained. Fast convergence in the separate coarse resolution levels speeds up the overall convergence rate. The simulated experiment shows that the proposed method converges faster and provides better results than the multigrid method.

  10. The application of improved neural network in hydrocarbon reservoir prediction

    NASA Astrophysics Data System (ADS)

    Peng, Xiaobo

    2013-03-01

    This paper uses BP neural network techniques to make hydrocarbon reservoir prediction in oil wells of the Tarim Basin easier and faster. A grey-cascade neural network model is proposed, offering a faster convergence speed and a lower error rate. The new method overcomes the shortcomings of the traditional BP neural network: slow convergence and easily becoming trapped in local minima. In this study, 220 sets of measured logging data were used as training samples. By varying the number of neurons and the transfer functions of the hidden layers, the best prediction model was identified. The resulting model produces good prediction results in general and can be used for hydrocarbon reservoir prediction.

  11. Finite-time containment control of perturbed multi-agent systems based on sliding-mode control

    NASA Astrophysics Data System (ADS)

    Yu, Di; Ji, Xiang Yang

    2018-01-01

    Aiming at a faster convergence rate, this paper investigates the finite-time containment control problem for second-order multi-agent systems with norm-bounded non-linear perturbation. When the topology among the followers is strongly connected, the nonsingular fast terminal sliding-mode error is defined, a corresponding discontinuous control protocol is designed, and the appropriate value range of the control parameter is obtained by applying finite-time stability analysis, so that the followers converge to and move along the desired trajectories within the convex hull formed by the leaders in finite time. Furthermore, on the basis of the sliding-mode error defined, corresponding distributed continuous control protocols are investigated with fast exponential and double exponential reaching laws, so as to make the followers move to small neighbourhoods of their desired locations and stay within the dynamic convex hull formed by the leaders in finite time, achieving practical finite-time containment control. Meanwhile, we develop the faster control scheme based on a comparison of the convergence rates of these two reaching laws. Simulation examples are given to verify the correctness of the theoretical results.

  12. On stochastic differential equations with arbitrarily slow convergence rates for strong approximation in two space dimensions.

    PubMed

    Gerencsér, Máté; Jentzen, Arnulf; Salimova, Diyora

    2017-11-01

    In a recent article (Jentzen et al. 2016 Commun. Math. Sci. 14, 1477-1500 (doi:10.4310/CMS.2016.v14.n6.a1)), it has been established that, for every arbitrarily slow convergence speed and every natural number d ∈ {4,5,…}, there exist d-dimensional stochastic differential equations with infinitely often differentiable and globally bounded coefficients such that no approximation method based on finitely many observations of the driving Brownian motion can converge in absolute mean to the solution faster than the given speed of convergence. In this paper, we strengthen the above result by proving that this slow convergence phenomenon also arises in two (d = 2) and three (d = 3) space dimensions.

  13. The performance of monotonic and new non-monotonic gradient ascent reconstruction algorithms for high-resolution neuroreceptor PET imaging.

    PubMed

    Angelis, G I; Reader, A J; Kotasidis, F A; Lionheart, W R; Matthews, J C

    2011-07-07

    Iterative expectation maximization (EM) techniques have been extensively used to solve maximum likelihood (ML) problems in positron emission tomography (PET) image reconstruction. Although EM methods offer a robust approach to solving ML problems, they usually suffer from slow convergence rates. The ordered subsets EM (OSEM) algorithm provides significant improvements in the convergence rate, but it can cycle between estimates converging towards the ML solution of each subset. In contrast, gradient-based methods, such as the recently proposed non-monotonic maximum likelihood (NMML) and the more established preconditioned conjugate gradient (PCG), offer a globally convergent, yet equally fast, alternative to OSEM. Reported results showed that NMML provides faster convergence compared to OSEM; however, it has never been compared to other fast gradient-based methods, like PCG. Therefore, in this work we evaluate the performance of two gradient-based methods (NMML and PCG) and investigate their potential as an alternative to the fast and widely used OSEM. All algorithms were evaluated using 2D simulations, as well as a single [(11)C]DASB clinical brain dataset. Results on simulated 2D data show that both PCG and NMML achieve orders of magnitude faster convergence to the ML solution compared to MLEM and exhibit comparable performance to OSEM. Equally fast performance is observed between OSEM and PCG for clinical 3D data, but NMML seems to perform poorly. However, with the addition of a preconditioner term to the gradient direction, the convergence behaviour of NMML can be substantially improved. Although PCG is a fast convergent algorithm, the use of a (bent) line search increases the complexity of the implementation, as well as the computational time involved per iteration. Contrary to previous reports, NMML offers no clear advantage over OSEM or PCG for noisy PET data.
Therefore, we conclude that there is little evidence to replace OSEM as the algorithm of choice for many applications, especially given that in practice convergence is often not desired for algorithms seeking ML estimates.
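
    The slowly converging MLEM baseline that OSEM and the gradient-based methods accelerate can be sketched as follows. This is an illustrative implementation on synthetic noise-free data with a hypothetical random system matrix, not the paper's NMML or PCG codes.

```python
import numpy as np

def mlem(A, y, n_iter=2000):
    """Plain MLEM multiplicative update for Poisson data y ~ A x."""
    x = np.ones(A.shape[1])               # positive initial image
    sens = A.T @ np.ones(A.shape[0])      # sensitivity image A^T 1
    for _ in range(n_iter):
        proj = np.maximum(A @ x, 1e-12)   # forward projection, guarded
        x *= (A.T @ (y / proj)) / sens    # multiplicative EM update
    return x

rng = np.random.default_rng(0)
A = rng.random((40, 10))                  # hypothetical nonnegative system matrix
x_true = rng.random(10) + 0.5
y = A @ x_true                            # noise-free data for the demo
x_hat = mlem(A, y)
```

    Even on this tiny noise-free problem MLEM needs many iterations to drive the residual down, which is the slow-convergence behaviour motivating OSEM and the gradient-based alternatives discussed above.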

  14. Seismic behaviour of mountain belts controlled by plate convergence rate

    NASA Astrophysics Data System (ADS)

    Dal Zilio, Luca; van Dinther, Ylona; Gerya, Taras V.; Pranger, Casper C.

    2018-01-01

    The relative contribution of tectonic and kinematic processes to seismic behaviour of mountain belts is still controversial. To understand the partitioning between these processes we developed a model that simulates both tectonic and seismic processes in a continental collision setting. These 2D seismo-thermo-mechanical (STM) models obtain a Gutenberg-Richter frequency-magnitude distribution due to spontaneous events occurring throughout the orogen. Our simulations suggest that both the corresponding slope (b value) and maximum earthquake magnitude (MWmax) correlate linearly with plate convergence rate. By analyzing 1D rheological profiles and isotherm depths we demonstrate that plate convergence rate controls the brittle strength through a rheological feedback with temperature and strain rate. Faster convergence leads to cooler temperatures and larger seismogenic domains, thereby increasing both MWmax and the relative number of large earthquakes (decreasing b value). This mechanism also predicts a more seismogenic lower crust, which is confirmed by a transition from uni- to bi-modal hypocentre depth distributions in our models. This transition and a linear relation between convergence rate and b value and MWmax is supported by our comparison of earthquakes recorded across the Alps, Apennines, Zagros and Himalaya. These results imply that deformation in the Alps occurs in a more ductile manner compared to the Himalayas, thereby reducing its seismic hazard. Furthermore, a second set of experiments with higher temperature and different orogenic architecture shows the same linear relation with convergence rate, suggesting that large-scale tectonic structure plays a subordinate role. We thus propose that plate convergence rate, which also controls the average differential stress of the orogen and its linear relation to the b value, is the first-order parameter controlling seismic hazard of mountain belts.

  15. Optimizing the learning rate for adaptive estimation of neural encoding models

    PubMed Central

    2018-01-01

    Closed-loop neurotechnologies often need to adaptively learn an encoding model that relates the neural activity to the brain state, and is used for brain state decoding. The speed and accuracy of adaptive learning algorithms are critically affected by the learning rate, which dictates how fast model parameters are updated based on new observations. Despite the importance of the learning rate, currently an analytical approach for its selection is largely lacking and existing signal processing methods vastly tune it empirically or heuristically. Here, we develop a novel analytical calibration algorithm for optimal selection of the learning rate in adaptive Bayesian filters. We formulate the problem through a fundamental trade-off that learning rate introduces between the steady-state error and the convergence time of the estimated model parameters. We derive explicit functions that predict the effect of learning rate on error and convergence time. Using these functions, our calibration algorithm can keep the steady-state parameter error covariance smaller than a desired upper-bound while minimizing the convergence time, or keep the convergence time faster than a desired value while minimizing the error. We derive the algorithm both for discrete-valued spikes modeled as point processes nonlinearly dependent on the brain state, and for continuous-valued neural recordings modeled as Gaussian processes linearly dependent on the brain state. Using extensive closed-loop simulations, we show that the analytical solution of the calibration algorithm accurately predicts the effect of learning rate on parameter error and convergence time. Moreover, the calibration algorithm allows for fast and accurate learning of the encoding model and for fast convergence of decoding to accurate performance. Finally, larger learning rates result in inaccurate encoding models and decoders, and smaller learning rates delay their convergence. 
The calibration algorithm provides a novel analytical approach to predictably achieve a desired level of error and convergence time in adaptive learning, with application to closed-loop neurotechnologies and other signal processing domains. PMID:29813069
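
    The trade-off the paper formalises, between steady-state error and convergence time as the learning rate varies, can be seen in a toy scalar adaptive filter. This is a hypothetical example with made-up parameters, not the paper's calibration algorithm for adaptive Bayesian filters.

```python
import numpy as np

def adapt(mu, n=5000, theta=2.0, noise=0.5, seed=0):
    """Scalar LMS-style estimator tracking theta from y_t = theta + noise.

    Returns (steps until the estimate is within 10% of theta,
             steady-state variance of the estimate).
    """
    rng = np.random.default_rng(seed)
    est = 0.0
    hist = []
    for _ in range(n):
        y = theta + noise * rng.standard_normal()
        est += mu * (y - est)          # learning rate mu controls the update
        hist.append(est)
    hist = np.array(hist)
    t_conv = int(np.argmax(np.abs(hist - theta) < 0.1 * theta))
    ss_var = float(np.var(hist[n // 2:]))
    return t_conv, ss_var

fast = adapt(mu=0.2)    # large rate: quick convergence, noisy steady state
slow = adapt(mu=0.005)  # small rate: slow convergence, smooth steady state
```

    The large learning rate converges in far fewer steps but settles with a much larger steady-state variance, which is exactly the trade-off the calibration algorithm is designed to balance analytically.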

  16. Optimizing the learning rate for adaptive estimation of neural encoding models.

    PubMed

    Hsieh, Han-Lin; Shanechi, Maryam M

    2018-05-01

    Closed-loop neurotechnologies often need to adaptively learn an encoding model that relates the neural activity to the brain state, and is used for brain state decoding. The speed and accuracy of adaptive learning algorithms are critically affected by the learning rate, which dictates how fast model parameters are updated based on new observations. Despite the importance of the learning rate, currently an analytical approach for its selection is largely lacking and existing signal processing methods vastly tune it empirically or heuristically. Here, we develop a novel analytical calibration algorithm for optimal selection of the learning rate in adaptive Bayesian filters. We formulate the problem through a fundamental trade-off that learning rate introduces between the steady-state error and the convergence time of the estimated model parameters. We derive explicit functions that predict the effect of learning rate on error and convergence time. Using these functions, our calibration algorithm can keep the steady-state parameter error covariance smaller than a desired upper-bound while minimizing the convergence time, or keep the convergence time faster than a desired value while minimizing the error. We derive the algorithm both for discrete-valued spikes modeled as point processes nonlinearly dependent on the brain state, and for continuous-valued neural recordings modeled as Gaussian processes linearly dependent on the brain state. Using extensive closed-loop simulations, we show that the analytical solution of the calibration algorithm accurately predicts the effect of learning rate on parameter error and convergence time. Moreover, the calibration algorithm allows for fast and accurate learning of the encoding model and for fast convergence of decoding to accurate performance. Finally, larger learning rates result in inaccurate encoding models and decoders, and smaller learning rates delay their convergence. 
The calibration algorithm provides a novel analytical approach to predictably achieve a desired level of error and convergence time in adaptive learning, with application to closed-loop neurotechnologies and other signal processing domains.

  17. Algorithms for accelerated convergence of adaptive PCA.

    PubMed

    Chatterjee, C; Kang, Z; Roychowdhury, V P

    2000-01-01

    We derive and discuss new adaptive algorithms for principal component analysis (PCA) that are shown to converge faster than the traditional PCA algorithms due to Oja, Sanger, and Xu. It is well known that traditional PCA algorithms that are derived by using gradient descent on an objective function are slow to converge. Furthermore, the convergence of these algorithms depends on appropriate choices of the gain sequences. Since online applications demand faster convergence and an automatic selection of gains, we present new adaptive algorithms to solve these problems. We first present an unconstrained objective function, which can be minimized to obtain the principal components. We derive adaptive algorithms from this objective function by using: 1) gradient descent; 2) steepest descent; 3) conjugate direction; and 4) Newton-Raphson methods. Although gradient descent produces Xu's LMSER algorithm, the steepest descent, conjugate direction, and Newton-Raphson methods produce new adaptive algorithms for PCA. We also provide a discussion on the landscape of the objective function, and present a global convergence proof of the adaptive gradient descent PCA algorithm using stochastic approximation theory. Extensive experiments with stationary and nonstationary multidimensional Gaussian sequences show faster convergence of the new algorithms over the traditional gradient descent methods. We also compare the steepest descent adaptive algorithm with state-of-the-art methods on stationary and nonstationary sequences.
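
    As a reference point, Oja's single-neuron rule, one of the traditional gradient-style adaptive PCA algorithms the paper improves upon, can be sketched on synthetic correlated data (the covariance and gain below are illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(1)
# Correlated 2-D Gaussian data; the leading eigenvector of the
# covariance C is [1, 1] / sqrt(2) (eigenvalue 5).
C = np.array([[3.0, 2.0], [2.0, 3.0]])
L = np.linalg.cholesky(C)
X = rng.standard_normal((20_000, 2)) @ L.T

w = np.array([1.0, 0.0])
eta = 0.01                         # fixed gain, chosen by hand here, which is
for x in X:                        # precisely the manual tuning the paper's
    y = w @ x                      # adaptive schemes aim to avoid
    w += eta * y * (x - y * w)     # Oja's update: Hebbian term with decay

pc1 = w / np.linalg.norm(w)        # estimated first principal component
```

    After one pass the weight vector aligns (up to sign) with the leading eigenvector; the paper's steepest-descent, conjugate-direction, and Newton-Raphson variants reach this alignment in far fewer samples.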

  18. Multi-modulus algorithm based on global artificial fish swarm intelligent optimization of DNA encoding sequences.

    PubMed

    Guo, Y C; Wang, H; Wu, H P; Zhang, M Q

    2015-12-21

    To address the defects of large mean square error (MSE) and slow convergence speed in equalizing multi-modulus signals with the constant modulus algorithm (CMA), a multi-modulus algorithm (MMA) based on global artificial fish swarm (GAFS) intelligent optimization of DNA encoding sequences (GAFS-DNA-MMA) was proposed. To improve the convergence rate and reduce the MSE, the proposed algorithm adopted an encoding method based on DNA nucleotide chains to provide a possible solution to the problem. Furthermore, the GAFS algorithm, with its fast convergence and global search ability, was used to find the best sequence. The real and imaginary parts of the initial optimal weight vector of the MMA were obtained through DNA coding of the best sequence. The simulation results show that the proposed algorithm has a faster convergence speed and smaller MSE in comparison with the CMA, the MMA, and the AFS-DNA-MMA.

  19. Comparison of three explicit multigrid methods for the Euler and Navier-Stokes equations

    NASA Technical Reports Server (NTRS)

    Chima, Rodrick V.; Turkel, Eli; Schaffer, Steve

    1987-01-01

    Three explicit multigrid methods, Ni's method, Jameson's finite-volume method, and a finite-difference method based on Brandt's work, are described and compared for two model problems. All three methods use an explicit multistage Runge-Kutta scheme on the fine grid, and this scheme is also described. Convergence histories for inviscid flow over a bump in a channel for the fine-grid scheme alone show that convergence rate is proportional to Courant number and that implicit residual smoothing can significantly accelerate the scheme. Ni's method was slightly slower than the implicitly-smoothed scheme alone. Brandt's and Jameson's methods are shown to be equivalent in form but differ in their node versus cell-centered implementations. They are about 8.5 times faster than Ni's method in terms of CPU time. Results for an oblique shock/boundary layer interaction problem verify the accuracy of the finite-difference code. All methods slowed considerably on the stretched viscous grid but Brandt's method was still 2.1 times faster than Ni's method.

  20. optGpSampler: an improved tool for uniformly sampling the solution-space of genome-scale metabolic networks.

    PubMed

    Megchelenbrink, Wout; Huynen, Martijn; Marchiori, Elena

    2014-01-01

    Constraint-based models of metabolic networks are typically underdetermined, because they contain more reactions than metabolites. Therefore the solutions to this system do not consist of unique flux rates for each reaction, but rather a space of possible flux rates. By uniformly sampling this space, an estimated probability distribution for each reaction's flux in the network can be obtained. However, sampling a high dimensional network is time-consuming. Furthermore, the constraints imposed on the network give rise to an irregularly shaped solution space. Therefore more tailored, efficient sampling methods are needed. We propose an efficient sampling algorithm (called optGpSampler), which implements the Artificial Centering Hit-and-Run algorithm in a different manner than the sampling algorithm implemented in the COBRA Toolbox for metabolic network analysis, here called gpSampler. Results of extensive experiments on different genome-scale metabolic networks show that optGpSampler is up to 40 times faster than gpSampler. Application of existing convergence diagnostics on small network reconstructions indicate that optGpSampler converges roughly ten times faster than gpSampler towards similar sampling distributions. For networks of higher dimension (i.e. containing more than 500 reactions), we observed significantly better convergence of optGpSampler and a large deviation between the samples generated by the two algorithms. optGpSampler for Matlab and Python is available for non-commercial use at: http://cs.ru.nl/~wmegchel/optGpSampler/.
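
    A minimal hit-and-run sampler over a polytope illustrates the family of methods optGpSampler belongs to. The box constraints below are a hypothetical stand-in for a metabolic flux space, and the sketch omits the artificial centering and solver-level optimisations that make the real tool fast.

```python
import numpy as np

def hit_and_run(A, b, x0, n_samples=2000, seed=0):
    """Plain hit-and-run sampling of the polytope {x : A x <= b}."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    samples = []
    for _ in range(n_samples):
        d = rng.standard_normal(x.size)
        d /= np.linalg.norm(d)             # random direction on the sphere
        num = b - A @ x                    # slack of each constraint
        den = A @ d
        # The line x + t d stays feasible for t in [t_lo, t_hi].
        t_hi = np.min(num[den > 1e-12] / den[den > 1e-12])
        t_lo = np.max(num[den < -1e-12] / den[den < -1e-12])
        x = x + rng.uniform(t_lo, t_hi) * d
        samples.append(x.copy())
    return np.array(samples)

# Hypothetical feasible region: the box [-1, 1]^2 written as A x <= b.
A = np.array([[1.0, 0.0], [-1.0, 0.0], [0.0, 1.0], [0.0, -1.0]])
b = np.ones(4)
S = hit_and_run(A, b, x0=[0.0, 0.0])
```

    Every sample stays feasible by construction; the convergence diagnostics mentioned in the abstract measure how quickly such a chain approaches the uniform distribution over the polytope.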

  1. Indirect addressing and load balancing for faster solution to Mandelbrot Set on SIMD architectures

    NASA Technical Reports Server (NTRS)

    Tomboulian, Sherryl

    1989-01-01

    SIMD computers with local indirect addressing allow programs to have queues and buffers, making certain kinds of problems much more efficient. Examined here is a class of problems characterized by computations on data points where the computation is identical but the convergence rate is data dependent. Normally, in this situation, the algorithm time is governed by the maximum number of iterations required by any point. Using indirect addressing allows a processor to proceed to the next data point when it is done, reducing the overall number of iterations required to approach the mean convergence rate when a sufficiently large problem set is solved. Load balancing techniques can be applied for additional performance improvement. Simulations of this technique applied to solving Mandelbrot Sets indicate significant performance gains.
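
    The data-dependent iteration counts that motivate the indirect-addressing scheme can be seen in a plain escape-time computation. This is a vectorized NumPy sketch with an illustrative grid and iteration cap, not the SIMD implementation.

```python
import numpy as np

def escape_counts(width=200, height=200, max_iter=256):
    """Mandelbrot escape-time counts over a grid.

    The per-point work is identical, but the number of iterations each
    point needs is data dependent -- exactly the load imbalance that
    letting finished processors pick up new points is meant to fix.
    """
    xs = np.linspace(-2.0, 0.5, width)
    ys = np.linspace(-1.25, 1.25, height)
    c = xs[None, :] + 1j * ys[:, None]
    z = np.zeros_like(c)
    counts = np.zeros(c.shape, dtype=int)
    alive = np.ones(c.shape, dtype=bool)
    for _ in range(max_iter):
        z[alive] = z[alive] ** 2 + c[alive]   # iterate only surviving points
        escaped = alive & (np.abs(z) > 2.0)
        alive &= ~escaped                     # freeze escaped points
        counts[alive] += 1
    return counts

counts = escape_counts()
```

    Points inside the set run to the full iteration cap while many others escape almost immediately, so the mean iteration count sits far below the maximum that a naive lockstep schedule would pay for every point.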

  2. An improved affine projection algorithm for active noise cancellation

    NASA Astrophysics Data System (ADS)

    Zhang, Congyan; Wang, Mingjiang; Han, Yufei; Sun, Yunzhuo

    2017-08-01

    The affine projection algorithm is a signal-reuse algorithm with a good convergence rate compared to other traditional adaptive filtering algorithms. Two factors affect the performance of the algorithm: the step-size factor and the projection order. In this paper, we propose a new variable step size affine projection algorithm (VSS-APA). It dynamically changes the step size according to certain rules, so that it achieves a smaller steady-state error and a faster convergence speed. Simulation results show that its performance is superior to the traditional affine projection algorithm and that, in active noise control (ANC) applications, the new algorithm achieves very good results.
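
    For context, a fixed-step affine projection algorithm identifying an unknown FIR system can be sketched as follows. The filter order, projection order, and step size are illustrative choices, and the sketch is the traditional fixed-mu APA baseline, not the paper's VSS-APA, which additionally adapts the step size.

```python
import numpy as np

def apa(x, d, order=8, proj=4, mu=0.5, delta=1e-3):
    """Fixed-step affine projection algorithm for system identification."""
    w = np.zeros(order)
    for k in range(order + proj - 2, len(x)):
        # Columns are the input regressors at times k, k-1, ..., k-proj+1.
        U = np.column_stack([x[k - j - order + 1:k - j + 1][::-1]
                             for j in range(proj)])
        dv = np.array([d[k - j] for j in range(proj)])
        e = dv - U.T @ w                       # errors over the last proj samples
        # Regularised projection update reusing proj past signal vectors.
        w += mu * U @ np.linalg.solve(U.T @ U + delta * np.eye(proj), e)
    return w

rng = np.random.default_rng(2)
h = rng.standard_normal(8)            # unknown FIR system (hypothetical)
x = rng.standard_normal(4000)         # white excitation
d = np.convolve(x, h)[:len(x)]        # noise-free desired signal
w = apa(x, d)                         # w converges to h
```

    With noise-free data the weights converge to the true impulse response; the step size mu trades convergence speed against steady-state error, which is the knob VSS-APA turns automatically.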

  3. Variable input observer for structural health monitoring of high-rate systems

    NASA Astrophysics Data System (ADS)

    Hong, Jonathan; Laflamme, Simon; Cao, Liang; Dodson, Jacob

    2017-02-01

    The development of high-rate structural health monitoring methods is intended to provide damage detection on timescales of 10 µs-10 ms, where speed of detection is critical to maintaining structural integrity. Here, a novel variable input observer (VIO) coupled with an adaptive observer is proposed as a potential solution for complex high-rate problems. The VIO is designed to adapt its input space based on real-time identification of the system's essential dynamics. By selecting appropriate time-delayed coordinates, defined by both a time delay and an embedding dimension, the proper input space is chosen, which allows more accurate estimation of the current state and faster convergence. The optimal time delay is estimated based on mutual information, and the embedding dimension is based on false nearest neighbors. A simulation of the VIO is conducted on a two-degree-of-freedom system with simulated damage. Results are compared with an adaptive Luenberger observer, a fixed time-delay observer, and a Kalman filter. Under its preliminary design, the VIO converges significantly faster than the Luenberger and fixed observers. It performs similarly to the Kalman filter in terms of convergence, but with greater accuracy.

  4. Performance of Nonlinear Finite-Difference Poisson-Boltzmann Solvers

    PubMed Central

    Cai, Qin; Hsieh, Meng-Juei; Wang, Jun; Luo, Ray

    2014-01-01

    We implemented and optimized seven finite-difference solvers for the full nonlinear Poisson-Boltzmann equation in biomolecular applications, including four relaxation methods, one conjugate gradient method, and two inexact Newton methods. The performance of the seven solvers was extensively evaluated with a large number of nucleic acids and proteins. The inexact Newton method in our analysis is particularly worth noting. We investigated the role of linear solvers in its performance by incorporating the incomplete Cholesky conjugate gradient and the geometric multigrid into its inner linear loop. We tailored and optimized both linear solvers for a faster convergence rate. In addition, we explored strategies to optimize the successive over-relaxation method to reduce its convergence failures without too much sacrifice in its convergence rate. Specifically, we attempted to adaptively change the relaxation parameter and to utilize the damping strategy from the inexact Newton method to improve the successive over-relaxation method. Our analysis shows that the nonlinear methods accompanied by a functional-assisted strategy, such as the conjugate gradient method and the inexact Newton method, can guarantee convergence in the tested molecules. In particular, the inexact Newton method exhibits impressive performance when it is combined with highly efficient linear solvers that are tailored for its special requirement. PMID:24723843

  5. Orthogonalizing EM: A design-based least squares algorithm.

    PubMed

    Xiong, Shifeng; Dai, Bin; Huling, Jared; Qian, Peter Z G

    We introduce an efficient iterative algorithm, intended for various least squares problems, based on a design of experiments perspective. The algorithm, called orthogonalizing EM (OEM), works for ordinary least squares and can be easily extended to penalized least squares. The main idea of the procedure is to orthogonalize a design matrix by adding new rows and then solve the original problem by embedding the augmented design in a missing data framework. We establish several attractive theoretical properties concerning OEM. For ordinary least squares with a singular regression matrix, an OEM sequence converges to the Moore-Penrose generalized inverse-based least squares estimator. For ordinary and penalized least squares with various penalties, it converges to a point having grouping coherence for fully aliased regression matrices. Convergence and the convergence rate of the algorithm are examined. Finally, we demonstrate that OEM is highly efficient for large-scale least squares and penalized least squares problems, and is considerably faster than competing methods when n is much larger than p. Supplementary materials for this article are available online.
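    For ordinary least squares, the orthogonalization idea reduces to a simple fixed-point iteration: augmenting the design so that the augmented matrix X_c satisfies X_c'X_c = dI turns the EM step into beta <- beta + X'(y - X beta)/d. A minimal sketch, with d taken as the largest eigenvalue of X'X and all names invented for the example:

```python
import numpy as np

def oem_ols(X, y, iters=5000):
    """OEM fixed-point iteration for ordinary least squares.

    With active orthogonalization the OLS update reduces to
    beta <- beta + X'(y - X beta) / d, where d bounds the largest
    eigenvalue of X'X.
    """
    d = np.linalg.eigvalsh(X.T @ X).max()
    beta = np.zeros(X.shape[1])
    for _ in range(iters):
        beta = beta + X.T @ (y - X @ beta) / d
    return beta

rng = np.random.default_rng(0)
X = rng.standard_normal((50, 4))
y = rng.standard_normal(50)
beta_oem = oem_ols(X, y)
beta_lstsq, *_ = np.linalg.lstsq(X, y, rcond=None)
print(np.allclose(beta_oem, beta_lstsq, atol=1e-6))
```

    Started from zero, the iterates approach the least squares solution; for a singular X the same recursion heads toward the minimum-norm (Moore-Penrose) solution, matching the property quoted above.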

  6. High resolution near on-axis digital holography using constrained optimization approach with faster convergence

    NASA Astrophysics Data System (ADS)

    Pandiyan, Vimal Prabhu; Khare, Kedar; John, Renu

    2017-09-01

    A constrained optimization approach with faster convergence is proposed to recover the complex object field from near on-axis digital holography (DH). We subtract the DC from the hologram after recording the object beam and reference beam intensities separately. The DC-subtracted hologram is used to recover the complex object information using a constrained optimization approach with faster convergence. The recovered complex object field is back propagated to the image plane using the Fresnel back-propagation method. This approach provides high-resolution images compared with the conventional Fourier filtering approach and is 25% faster than the previously reported constrained optimization approach, due to the subtraction of two DC terms in the cost function. We report this approach in DH and digital holographic microscopy using the U.S. Air Force resolution target as the object to retrieve the high-resolution image without DC and twin-image interference. We also demonstrate the high potential of this technique on a transparent microelectrode patterned on indium tin oxide-coated glass, by reconstructing a high-resolution quantitative phase microscope image. We also demonstrate this technique by imaging yeast cells.

  7. Conjugate-gradient preconditioning methods for shift-variant PET image reconstruction.

    PubMed

    Fessler, J A; Booth, S D

    1999-01-01

    Gradient-based iterative methods often converge slowly for tomographic image reconstruction and image restoration problems, but can be accelerated by suitable preconditioners. Diagonal preconditioners offer some improvement in convergence rate, but do not incorporate the structure of the Hessian matrices in imaging problems. Circulant preconditioners can provide remarkable acceleration for inverse problems that are approximately shift-invariant, i.e., for those with approximately block-Toeplitz or block-circulant Hessians. However, in applications with nonuniform noise variance, such as arises from Poisson statistics in emission tomography and in quantum-limited optical imaging, the Hessian of the weighted least-squares objective function is quite shift-variant, and circulant preconditioners perform poorly. Additional shift-variance is caused by edge-preserving regularization methods based on nonquadratic penalty functions. This paper describes new preconditioners that approximate more accurately the Hessian matrices of shift-variant imaging problems. Compared to diagonal or circulant preconditioning, the new preconditioners lead to significantly faster convergence rates for the unconstrained conjugate-gradient (CG) iteration. We also propose a new efficient method for the line-search step required by CG methods. Applications to positron emission tomography (PET) illustrate the method.
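    The effect of a preconditioner on CG iteration counts can be illustrated on a small synthetic SPD system (a generic diagonal, i.e. Jacobi, preconditioner on an invented badly scaled matrix, not the paper's shift-variant imaging preconditioners):

```python
import numpy as np

def pcg(A, b, M_inv_diag, tol=1e-10, max_iter=500):
    """Preconditioned conjugate gradients with a diagonal preconditioner."""
    x = np.zeros_like(b)
    r = b - A @ x
    z = M_inv_diag * r                 # apply M^-1 (here M is diagonal)
    p = z.copy()
    rz = r @ z
    for it in range(1, max_iter + 1):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol:
            return x, it
        z = M_inv_diag * r
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x, max_iter

# SPD system with wildly varying row/column scales: Jacobi preconditioning
# (M = diag(A)) cuts the effective condition number and the iteration count.
rng = np.random.default_rng(1)
B = rng.standard_normal((40, 40))
D = np.diag(10.0 ** rng.uniform(-2, 2, 40))
A = D @ (B @ B.T + 40 * np.eye(40)) @ D        # SPD, badly scaled
b = rng.standard_normal(40)
x_plain, it_plain = pcg(A, b, np.ones(40))      # no preconditioning
x_prec, it_prec = pcg(A, b, 1.0 / np.diag(A))   # Jacobi preconditioner
print(it_plain, it_prec)
```

    The unpreconditioned run typically hits the iteration cap on this badly scaled system, while the diagonally preconditioned run converges in a few dozen iterations.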

  8. A STRICTLY CONTRACTIVE PEACEMAN-RACHFORD SPLITTING METHOD FOR CONVEX PROGRAMMING.

    PubMed

    Bingsheng, He; Liu, Han; Wang, Zhaoran; Yuan, Xiaoming

    2014-07-01

    In this paper, we focus on the application of the Peaceman-Rachford splitting method (PRSM) to a convex minimization model with linear constraints and a separable objective function. Compared to the Douglas-Rachford splitting method (DRSM), another splitting method from which the alternating direction method of multipliers originates, PRSM requires more restrictive assumptions to ensure its convergence, while it is always faster whenever it is convergent. We first illustrate that the reason for this difference is that the iterative sequence generated by DRSM is strictly contractive, while that generated by PRSM is only contractive with respect to the solution set of the model. With only the convexity assumption on the objective function of the model under consideration, the convergence of PRSM is not guaranteed. But for this case, we show that the first t iterations of PRSM still enable us to find an approximate solution with an accuracy of O(1/t). A worst-case O(1/t) convergence rate of PRSM in the ergodic sense is thus established under mild assumptions. After that, we suggest attaching an underdetermined relaxation factor with PRSM to guarantee the strict contraction of its iterative sequence and thus propose a strictly contractive PRSM. A worst-case O(1/t) convergence rate of this strictly contractive PRSM in a nonergodic sense is established. We show the numerical efficiency of the strictly contractive PRSM by some applications in statistical learning and image processing.
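    A minimal numerical sketch of the strictly contractive variant, here applied to a small lasso-type problem min 0.5||Ax - b||^2 + mu||z||_1 with x = z (the problem instance and parameter values are invented for illustration; alpha = 1 would recover the original PRSM):

```python
import numpy as np

def soft_threshold(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def sc_prsm_lasso(A, b, mu, alpha=0.9, beta=1.0, iters=300):
    """Strictly contractive PRSM sketch for min 0.5||Ax-b||^2 + mu||z||_1, x = z.

    alpha in (0, 1) is the relaxation factor attached to both dual updates.
    """
    n = A.shape[1]
    x = np.zeros(n); z = np.zeros(n); lam = np.zeros(n)
    K = A.T @ A + beta * np.eye(n)       # x-subproblem normal matrix
    Atb = A.T @ b
    for _ in range(iters):
        x = np.linalg.solve(K, Atb + beta * z - lam)
        lam = lam - alpha * beta * (z - x)       # first (relaxed) dual update
        z = soft_threshold(x + lam / beta, mu / beta)
        lam = lam - alpha * beta * (z - x)       # second (relaxed) dual update
    return z

rng = np.random.default_rng(2)
A = rng.standard_normal((60, 20))
x_true = np.zeros(20); x_true[:3] = (1.0, -2.0, 0.5)
b = A @ x_true + 0.01 * rng.standard_normal(60)
x_hat = sc_prsm_lasso(A, b, mu=0.1)
print(np.round(x_hat[:5], 2))
```

    The recovered vector reproduces the three planted coefficients and leaves the remaining entries (essentially) zero.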

  11. Importance sampling studies of helium using the Feynman-Kac path integral method

    NASA Astrophysics Data System (ADS)

    Datta, S.; Rejcek, J. M.

    2018-05-01

    In the Feynman-Kac path integral approach the eigenvalues of a quantum system can be computed using Wiener measure, which uses Brownian particle motion. In our previous work on such systems we have observed that the Wiener process numerically converges slowly for dimensions greater than two because almost all trajectories will escape to infinity. One can speed up this process by using a generalized Feynman-Kac (GFK) method, in which the new measure associated with the trial function is stationary, so that the convergence rate becomes much faster. We thus achieve an example of "importance sampling" and, in the present work, we apply it to the Feynman-Kac (FK) path integrals for the ground and first few excited-state energies for He to speed up the convergence rate. We calculate the path integrals using space averaging rather than the time averaging as done in the past. The best previous calculations from variational computations report precisions of 10^-16 Hartrees, whereas in most cases our path integral results obtained for the ground and first excited states of He are lower than these results by about 10^-6 Hartrees or more.

  12. Augmented l1 and Nuclear-Norm Models with a Globally Linearly Convergent Algorithm. Revision 1

    DTIC Science & Technology

    2012-10-17

    … nonzero and sampled from the standard Gaussian distribution (for Figure 2) or the Bernoulli distribution (for Figure 3). Both tests had the same sensing … dual variable y(k). Figure 3: Convergence of primal and dual variables of three algorithms on a Bernoulli sparse x0 … was the slowest. Besides the obvious … slower convergence than the final stage. Comparing the results of the two tests, convergence was faster on the Bernoulli sparse signal than the …

  13. Development of a Compound Optimization Approach Based on Imperialist Competitive Algorithm

    NASA Astrophysics Data System (ADS)

    Wang, Qimei; Yang, Zhihong; Wang, Yong

    In this paper, an improved approach is developed for the imperialist competitive algorithm to achieve greater performance. The Nelder-Mead simplex method is executed alternately with the original procedures of the algorithm. The approach is tested on twelve widely used benchmark functions and is also compared with other related studies. It is shown that the proposed approach has a faster convergence rate, better search ability, and higher stability than the original algorithm and other related methods.

  14. Investigation and appreciation of optimal output feedback. Volume 1: A convergent algorithm for the stochastic infinite-time discrete optimal output feedback problem

    NASA Technical Reports Server (NTRS)

    Halyo, N.; Broussard, J. R.

    1984-01-01

    The stochastic, infinite time, discrete output feedback problem for time invariant linear systems is examined. Two sets of sufficient conditions for the existence of a stable, globally optimal solution are presented. An expression for the total change in the cost function due to a change in the feedback gain is obtained. This expression is used to show that a sequence of gains can be obtained by an algorithm, so that the corresponding cost sequence is monotonically decreasing and the corresponding sequence of the cost gradient converges to zero. The algorithm is guaranteed to obtain a critical point of the cost function. The computational steps necessary to implement the algorithm on a computer are presented. The results are applied to a digital outer loop flight control problem. The numerical results for this 13th order problem indicate a rate of convergence considerably faster than two other algorithms used for comparison.

  15. The new and computationally efficient MIL-SOM algorithm: potential benefits for visualization and analysis of a large-scale high-dimensional clinically acquired geographic data.

    PubMed

    Oyana, Tonny J; Achenie, Luke E K; Heo, Joon

    2012-01-01

    The objective of this paper is to introduce an efficient algorithm, namely, the mathematically improved learning-self organizing map (MIL-SOM) algorithm, which speeds up the self-organizing map (SOM) training process. In the proposed MIL-SOM algorithm, the weight updates of Kohonen's SOM are based on the proportional-integral-derivative (PID) controller. Thus, in a typical SOM learning setting, this improvement translates to faster convergence. The basic idea is primarily motivated by the urgent need to develop algorithms with the competence to converge faster and more efficiently than conventional techniques. The MIL-SOM algorithm is tested on four training geographic datasets representing biomedical and disease informatics application domains. Experimental results show that the MIL-SOM algorithm provides a competitive and improved updating procedure, better performance, good robustness, and runs faster than Kohonen's SOM.

  17. Numerical algorithms for scatter-to-attenuation reconstruction in PET: empirical comparison of convergence, acceleration, and the effect of subsets.

    PubMed

    Berker, Yannick; Karp, Joel S; Schulz, Volkmar

    2017-09-01

    The use of scattered coincidences for attenuation correction of positron emission tomography (PET) data has recently been proposed. For practical applications, convergence speeds require further improvement, yet there exists a trade-off between convergence speed and the risk of non-convergence. In this respect, a maximum-likelihood gradient-ascent (MLGA) algorithm and a previously proposed two-branch back-projection (2BP) algorithm were evaluated. MLGA was combined with the Armijo step-size rule and accelerated using conjugate gradients, Nesterov's momentum method, and data subsets of different sizes. In 2BP, we varied the subset size, an important determinant of convergence speed and computational burden. We used three sets of simulation data to evaluate the impact of a spatial scale factor. The Armijo step-size rule allowed 10-fold larger steps than native MLGA. Conjugate gradients and Nesterov momentum led to slightly faster, yet non-uniform, convergence; improvements were mostly confined to later iterations, possibly due to the non-linearity of the problem. MLGA with data subsets achieved faster, uniform, and predictable convergence, with a speed-up factor equivalent to the number of subsets and no increase in computational burden. By contrast, 2BP computational burden increased linearly with the number of subsets due to repeated evaluation of the objective function, and convergence was limited to the case of many (and therefore small) subsets, which resulted in high computational burden. Possibilities of improving 2BP appear limited. While general-purpose acceleration methods appear insufficient for MLGA, results suggest that data subsets are a promising way of improving MLGA performance.
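    The Armijo rule mentioned above, in its generic form (not the paper's MLGA implementation), backtracks from an initial step until a sufficient-increase condition holds. A sketch for gradient ascent on a toy concave quadratic, with illustrative parameter choices (sigma may be any value in (0, 1)):

```python
import numpy as np

def armijo_ascent(f, grad, x0, s=1.0, sigma=0.3, eta=0.5, iters=300):
    """Gradient ascent with Armijo backtracking: start from step s and halve
    until f(x + t g) >= f(x) + sigma * t * ||g||^2."""
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        g = grad(x)
        t = s
        while f(x + t * g) < f(x) + sigma * t * (g @ g):
            t *= eta
            if t < 1e-12:          # step collapsed: effectively stationary
                return x
        x = x + t * g
    return x

# Maximize a concave quadratic f(x) = -0.5 x'Qx + c'x; maximizer solves Qx = c.
Q = np.array([[3.0, 0.5], [0.5, 1.0]])
c = np.array([1.0, -1.0])
f = lambda x: -0.5 * x @ Q @ x + c @ x
grad = lambda x: c - Q @ x
x_star = armijo_ascent(f, grad, np.zeros(2))
print(np.round(x_star, 4), np.round(np.linalg.solve(Q, c), 4))
```

    Because backtracking never accepts a step that overshoots the sufficient-increase bound, the iteration is stable even though the initial step s = 1 exceeds the inverse curvature of Q.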

  18. {lambda} elements for one-dimensional singular problems with known strength of singularity

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wong, K.K.; Surana, K.S.

    1996-10-01

    This paper presents a new and general procedure for designing special elements called λ elements for one-dimensional singular problems where the strength of the singularity is known. The λ elements presented here are of type C^0. These elements also provide inter-element C^0 continuity with p-version elements. The λ elements do not require precise knowledge of the extent of the singular zone, i.e., their use may be extended beyond the singular zone. When λ elements are used at the singularity, a singular problem behaves like a smooth problem, thereby eliminating the need for h- and p-adaptive processes altogether. One-dimensional steady-state radial flow of an upper convected Maxwell fluid is considered as a sample problem. A least squares approach (least squares finite element formulation: LSFEF) is used to construct the integral form (error functional I) from the differential equations. Numerical results presented for radially inward flow with inner radius r_i = 0.1, 0.01, 0.001, 0.0001, 0.00001, and Deborah number of 2 (De = 2) demonstrate the accuracy, faster convergence of the iterative solution procedure, faster convergence rate of the error functional, and mesh-independent characteristics of the λ elements regardless of the severity of the singularity.

  19. Enhanced Fuel-Optimal Trajectory-Generation Algorithm for Planetary Pinpoint Landing

    NASA Technical Reports Server (NTRS)

    Acikmese, Behcet; Blackmore, James C.; Scharf, Daniel P.

    2011-01-01

    An enhanced algorithm is developed that builds on a previous innovation of fuel-optimal powered-descent guidance (PDG) for planetary pinpoint landing. The PDG problem is to compute constrained, fuel-optimal trajectories to land a craft at a prescribed target on a planetary surface, starting from a parachute cut-off point and using a throttleable descent engine. The previous innovation showed the minimal-fuel PDG problem can be posed as a convex optimization problem, in particular, as a Second-Order Cone Program, which can be solved to global optimality with deterministic convergence properties, and hence is a candidate for onboard implementation. To increase the speed and robustness of this convex PDG algorithm for possible onboard implementation, the following enhancements are incorporated: 1) Fast detection of infeasibility (i.e., control authority is not sufficient for soft-landing) for subsequent fault response. 2) The use of a piecewise-linear control parameterization, providing smooth solution trajectories and increasing computational efficiency. 3) An enhanced line-search algorithm for optimal time-of-flight, providing quicker convergence and bounding the number of path-planning iterations needed. 4) An additional constraint that analytically guarantees inter-sample satisfaction of glide-slope and non-sub-surface flight constraints, allowing larger discretizations and, hence, faster optimization. 5) Explicit incorporation of Mars rotation rate into the trajectory computation for improved targeting accuracy. These enhancements allow faster convergence to the fuel-optimal solution and, more importantly, remove the need for a "human-in-the-loop," as constraints will be satisfied over the entire path-planning interval independent of step-size (as opposed to just at the discrete time points) and infeasible initial conditions are immediately detected. 
Finally, while the PDG stage typically lasts only a few minutes, ignoring the rotation rate of Mars can introduce tens of meters of error. By incorporating it, the enhanced PDG algorithm becomes capable of pinpoint targeting.

  20. New spatial diversity equalizer based on PLL

    NASA Astrophysics Data System (ADS)

    Rao, Wei

    2011-10-01

    A new Spatial Diversity Equalizer (SDE) based on a phase-locked loop (PLL) is proposed to overcome inter-symbol interference (ISI) and phase rotations simultaneously in digital communication systems. The proposed SDE consists of an equal-gain combining technique based on the constant modulus algorithm (CMA), a well-known blind equalization algorithm, and a PLL. Compared with a conventional SDE, the proposed SDE has not only a faster convergence rate and lower residual error but also the ability to recover carrier phase rotation. The efficiency of the method is proved by computer simulation.
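    The CMA core of such an equalizer can be sketched for a real-valued baseband signal (a generic single-channel CMA, without the spatial-diversity combining or the PLL; the BPSK signal, channel taps, and step size are invented for the example):

```python
import numpy as np

def cma_equalizer(received, n_taps=11, mu=1e-3, R2=1.0):
    """Constant modulus algorithm: adapt FIR taps w to drive |y|^2 toward R2.

    Stochastic-gradient update: w <- w - mu * y * (y^2 - R2) * x_window
    (for complex signals the window would be conjugated).
    """
    w = np.zeros(n_taps)
    w[n_taps // 2] = 1.0                      # center-spike initialization
    out = []
    for k in range(n_taps, len(received)):
        x = received[k - n_taps:k][::-1]      # regressor (most recent first)
        y = w @ x
        w = w - mu * y * (y * y - R2) * x     # CMA step
        out.append(y)
    return w, np.array(out)

# BPSK (+/-1) through a mild ISI channel; CMA restores a near-constant modulus.
rng = np.random.default_rng(3)
symbols = rng.choice([-1.0, 1.0], size=20000)
channel = np.array([1.0, 0.4, 0.2])           # introduces inter-symbol interference
received = np.convolve(symbols, channel, mode="full")[:len(symbols)]
w, y = cma_equalizer(received)
dispersion_before = np.mean((received**2 - 1.0) ** 2)
dispersion_after = np.mean((y[-2000:] ** 2 - 1.0) ** 2)
print(dispersion_before, dispersion_after)
```

    The dispersion (the CMA cost) of the equalizer output after adaptation is far below that of the raw channel output, i.e., the eye has been reopened blindly, without a training sequence.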

  1. Projection methods for line radiative transfer in spherical media.

    NASA Astrophysics Data System (ADS)

    Anusha, L. S.; Nagendra, K. N.

    An efficient numerical method called the Preconditioned Bi-Conjugate Gradient (Pre-BiCG) method is presented for the solution of the radiative transfer equation in spherical geometry. A variant of this method called the Stabilized Preconditioned Bi-Conjugate Gradient (Pre-BiCG-STAB) is also presented. These methods are based on projections onto subspaces of the n-dimensional Euclidean space R^n called Krylov subspaces. The methods are shown to be faster, in terms of convergence rate, than contemporary iterative methods such as Jacobi, Gauss-Seidel, and Successive Over-Relaxation (SOR).

  2. The application of Legendre-tau approximation to parameter identification for delay and partial differential equations

    NASA Technical Reports Server (NTRS)

    Ito, K.

    1983-01-01

    Approximation schemes based on Legendre-tau approximation are developed for application to parameter identification problems for delay and partial differential equations. The tau method is based on representing the approximate solution as a truncated series of orthonormal functions. The characteristic feature of the Legendre-tau approach is that when the solution to a problem is infinitely differentiable, the rate of convergence is faster than any finite power of 1/N; higher accuracy is thus achieved, making the approach suitable for small N.
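    The spectral-accuracy claim is easy to illustrate with a truncated Legendre least-squares fit of a smooth function (a generic sketch using NumPy's Legendre utilities, not the tau discretization of the paper; the target function is invented for the example):

```python
import numpy as np
from numpy.polynomial import legendre

# Least-squares Legendre fit of a smooth function on [-1, 1]: for an
# infinitely differentiable target, the max error falls faster than any
# fixed power of 1/N (spectral accuracy).
x = np.linspace(-1.0, 1.0, 2001)
f = np.exp(np.sin(2 * x))
errs = []
for N in (4, 8, 16, 24):
    coef = legendre.legfit(x, f, N)
    errs.append(np.max(np.abs(legendre.legval(x, coef) - f)))
    print(N, errs[-1])
```

    Doubling the truncation order does far better than halving the error: the error drops by several orders of magnitude at each step until it reaches the rounding floor.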

  3. Primal-dual and forward gradient implementation for quantitative susceptibility mapping.

    PubMed

    Kee, Youngwook; Deh, Kofi; Dimov, Alexey; Spincemaille, Pascal; Wang, Yi

    2017-12-01

    To investigate the computational aspects of the prior term in quantitative susceptibility mapping (QSM) by (i) comparing the Gauss-Newton conjugate gradient (GNCG) algorithm that uses numerical conditioning (i.e., modifies the prior term) with a primal-dual (PD) formulation that avoids this, and (ii) carrying out a comparison between a central and a forward difference scheme for the discretization of the prior term. A spatially continuous formulation of the regularized QSM inversion problem and its PD formulation were derived. The Chambolle-Pock algorithm for PD was implemented and its convergence behavior was compared with that of GNCG for the original QSM. Forward and central difference schemes were compared in terms of the presence of checkerboard artifacts. All methods were tested and validated on a gadolinium phantom, ex vivo brain blocks, and in vivo brain MRI data with respect to COSMOS. The PD approach provided a faster convergence rate than GNCG. The GNCG convergence rate slowed considerably with smaller (more accurate) values of the conditioning parameter. Using a forward difference suppressed the checkerboard artifacts in QSM, as compared with the central difference. The accuracy of PD and GNCG were validated based on excellent correlation with COSMOS. The PD approach with forward difference for the gradient showed improved convergence and accuracy over the GNCG method using central difference. Magn Reson Med 78:2416-2427, 2017. © 2017 International Society for Magnetic Resonance in Medicine.

  4. Convergence analysis of surrogate-based methods for Bayesian inverse problems

    NASA Astrophysics Data System (ADS)

    Yan, Liang; Zhang, Yuan-Xiang

    2017-12-01

    The major challenges in Bayesian inverse problems arise from the need for repeated evaluations of the forward model, as required by Markov chain Monte Carlo (MCMC) methods for posterior sampling. Many attempts at accelerating Bayesian inference have relied on surrogates for the forward model, typically constructed through repeated forward simulations that are performed in an offline phase. Although such approaches can be quite effective at reducing computation cost, there has been little analysis of the effect of the approximation on posterior inference. In this work, we prove error bounds on the Kullback-Leibler (KL) distance between the true posterior distribution and the approximation based on surrogate models. Our rigorous error analysis shows that if the forward model approximation converges at a certain rate in the prior-weighted L2 norm, then the posterior distribution generated by the approximation converges to the true posterior at least two times faster in the KL sense. The error bound on the Hellinger distance is also provided. To provide concrete examples focusing on the use of surrogate-model-based methods, we present an efficient technique for constructing stochastic surrogate models to accelerate the Bayesian inference approach. The Christoffel least squares algorithms, based on generalized polynomial chaos, are used to construct a polynomial approximation of the forward solution over the support of the prior distribution. The numerical strategy and the predicted convergence rates are then demonstrated on nonlinear inverse problems involving the inference of parameters appearing in partial differential equations.
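    The offline-surrogate idea can be sketched in one dimension: fit an inexpensive polynomial to the forward model over the prior's support, then evaluate the surrogate in place of the model (plain polynomial least squares here, standing in for the paper's Christoffel least squares / polynomial chaos construction; the forward model is invented for the example):

```python
import numpy as np

# Offline phase: fit a polynomial surrogate to an "expensive" forward model
# over the prior's support [-1, 1]; online, the surrogate replaces the model.
forward = lambda theta: np.sin(3 * theta) + theta**2   # stand-in forward model
theta_design = np.linspace(-1.0, 1.0, 40)              # offline design points
coef = np.polynomial.polynomial.polyfit(theta_design, forward(theta_design), 10)
surrogate = lambda theta: np.polynomial.polynomial.polyval(theta, coef)

# Check the surrogate's accuracy over the prior support.
theta_test = np.linspace(-1.0, 1.0, 1000)
err = np.max(np.abs(surrogate(theta_test) - forward(theta_test)))
print(err)
```

    Once the surrogate error is small in the prior-weighted sense, the result quoted above guarantees the surrogate-induced posterior is close to the true posterior in the KL sense.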

  5. Joint compensation scheme of polarization crosstalk, intersymbol interference, frequency offset, and phase noise based on cascaded Kalman filter

    NASA Astrophysics Data System (ADS)

    Zhang, Qun; Yang, Yanfu; Xiang, Qian; Zhou, Zhongqing; Yao, Yong

    2018-02-01

    A joint compensation scheme based on a cascaded Kalman filter is proposed, which can implement polarization tracking, channel equalization, frequency offset compensation, and phase noise compensation simultaneously. The experimental results show that the proposed algorithm can not only compensate multiple channel impairments simultaneously but also improve the polarization tracking capacity and accelerate convergence. The scheme converges up to eight times faster than RDE (radius-directed equalizer) + Max-FFT (maximum fast Fourier transform) + BPS (blind phase search) and can track polarization rotation 60 times and 15 times faster than RDE + Max-FFT + BPS and CMMA (cascaded multimodulus algorithm) + Max-FFT + BPS, respectively.

  6. Noise can speed convergence in Markov chains.

    PubMed

    Franzke, Brandon; Kosko, Bart

    2011-10-01

    A new theorem shows that noise can speed convergence to equilibrium in discrete finite-state Markov chains. The noise applies to the state density and helps the Markov chain explore improbable regions of the state space. The theorem ensures that a stochastic-resonance noise benefit exists for states that obey a vector-norm inequality. Such noise leads to faster convergence because the noise reduces the norm components. A corollary shows that a noise benefit still occurs if the system states obey an alternate norm inequality. This leads to a noise-benefit algorithm that requires knowledge of the steady state. An alternative blind algorithm uses only past state information to achieve a weaker noise benefit. Simulations illustrate the predicted noise benefits in three well-known Markov models. The first model is a two-parameter Ehrenfest diffusion model that shows how noise benefits can occur in the class of birth-death processes. The second model is a Wright-Fisher model of genotype drift in population genetics. The third model is a chemical reaction network of zeolite crystallization. A fourth simulation shows a convergence rate increase of 64% for states that satisfy the theorem and an increase of 53% for states that satisfy the corollary. A final simulation shows that even suboptimal noise can speed convergence if the noise applies over successive time cycles. Noise benefits tend to be sharpest in Markov models that do not converge quickly and that do not have strong absorbing states.
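    The quantity being accelerated here, the distance to equilibrium, can be measured directly. A baseline sketch (without any injected noise, so not the paper's noise-benefit algorithm) of geometric convergence for a small chain, with an invented transition matrix:

```python
import numpy as np

# For an irreducible aperiodic chain, the total-variation distance to the
# stationary distribution shrinks geometrically, at a rate governed by the
# second-largest eigenvalue modulus of the transition matrix.
P = np.array([[0.9, 0.1, 0.0],
              [0.1, 0.8, 0.1],
              [0.0, 0.2, 0.8]])                 # row-stochastic transition matrix
evals, evecs = np.linalg.eig(P.T)
pi = np.real(evecs[:, np.argmax(np.real(evals))])
pi /= pi.sum()                                  # stationary distribution: pi P = pi
mu0 = np.array([1.0, 0.0, 0.0])                 # start in state 0
tvs = []
for k in (1, 10, 50):
    mu_k = mu0 @ np.linalg.matrix_power(P, k)
    tvs.append(0.5 * np.abs(mu_k - pi).sum())   # total-variation distance
    print(k, tvs[-1])
```

    A stochastic-resonance noise benefit, in the sense of the abstract, would show up as this distance shrinking faster when suitable noise perturbs the state density.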

  7. Acceleration of the direct reconstruction of linear parametric images using nested algorithms.

    PubMed

    Wang, Guobao; Qi, Jinyi

    2010-03-07

    Parametric imaging using dynamic positron emission tomography (PET) provides important information for biological research and clinical diagnosis. Indirect and direct methods have been developed for reconstructing linear parametric images from dynamic PET data. Indirect methods are relatively simple and easy to implement because the image reconstruction and kinetic modeling are performed in two separate steps. Direct methods estimate parametric images directly from raw PET data and are statistically more efficient. However, the convergence rate of direct algorithms can be slow due to the coupling between the reconstruction and kinetic modeling. Here we present two fast gradient-type algorithms for direct reconstruction of linear parametric images. The new algorithms decouple the reconstruction and linear parametric modeling at each iteration by employing the principle of optimization transfer. Convergence speed is accelerated by running more sub-iterations of linear parametric estimation because the computation cost of the linear parametric modeling is much less than that of the image reconstruction. Computer simulation studies demonstrated that the new algorithms converge much faster than the traditional expectation maximization (EM) and the preconditioned conjugate gradient algorithms for dynamic PET.

  8. A viscoplastic shear-zone model for deep (15-50 km) slow-slip events at plate convergent margins

    NASA Astrophysics Data System (ADS)

    Yin, An; Xie, Zhoumin; Meng, Lingsen

    2018-06-01

    A key issue in understanding the physics of deep (15-50 km) slow-slip events (D-SSE) at plate convergent margins is how their initially unstable motion becomes stabilized. Here we address this issue by quantifying a rate-strengthening mechanism using a viscoplastic shear-zone model inspired by recent advances in field observations and laboratory experiments. The well-established segmentation of slip modes in the downdip direction of a subduction shear zone allows discretization of an interseismic forearc system into the (1) frontal segment bounded by an interseismically locked megathrust, (2) middle segment bounded by an episodically locked and unlocked viscoplastic shear zone, and (3) interior segment that slips freely. The three segments are assumed to be linked laterally by two springs that tighten with time, and the increasing elastic stress due to spring tightening eventually leads to plastic failure and initial viscous shear. This simplification leads to seven key model parameters that dictate a wide range of mechanical behaviors of an idealized convergent margin. Specifically, the viscoplastic rheology requires the initially unstable sliding to be terminated nearly instantaneously at a characteristic velocity, which is followed by stable sliding (i.e., slow slip). The characteristic velocity, which is on the order of <10^-7 m/s for the convergent margins examined in this study, depends on the (1) effective coefficient of friction, (2) thickness, (3) depth, and (4) viscosity of the viscoplastic shear zone. As viscosity decreases exponentially with temperature, our model predicts faster slow-slip rates, shorter slow-slip durations, more frequent slow-slip occurrences, and larger slow-slip magnitudes at warmer convergent margins.

  9. Local multiplicative Schwarz algorithms for convection-diffusion equations

    NASA Technical Reports Server (NTRS)

    Cai, Xiao-Chuan; Sarkis, Marcus

    1995-01-01

    We develop a new class of overlapping Schwarz type algorithms for solving scalar convection-diffusion equations discretized by finite element or finite difference methods. The preconditioners consist of two components, namely, the usual two-level additive Schwarz preconditioner and the sum of some quadratic terms constructed by using products of ordered neighboring subdomain preconditioners. The ordering of the subdomain preconditioners is determined by considering the direction of the flow. We prove that the algorithms are optimal in the sense that the convergence rates are independent of the mesh size, as well as the number of subdomains. We show by numerical examples that the new algorithms are less sensitive to the direction of the flow than the classical multiplicative Schwarz algorithms, and converge faster than the additive Schwarz algorithms. Thus, the new algorithms are more suitable for fluid flow applications than the classical additive or multiplicative Schwarz algorithms.

  10. Teaching-learning-based Optimization Algorithm for Parameter Identification in the Design of IIR Filters

    NASA Astrophysics Data System (ADS)

    Singh, R.; Verma, H. K.

    2013-12-01

    This paper presents a teaching-learning-based optimization (TLBO) algorithm to solve parameter identification problems in the design of digital infinite impulse response (IIR) filters. TLBO-based filter modelling is applied to calculate the parameters of an unknown plant in simulations. Unlike other heuristic search algorithms, TLBO is free of algorithm-specific parameters. In this paper, big bang-big crunch (BB-BC) optimization and PSO algorithms are also applied to the filter design for comparison. The unknown filter parameters are treated as a vector to be optimized by these algorithms. MATLAB programming is used to implement the proposed algorithms. Experimental results show that TLBO estimates the filter parameters more accurately than the BB-BC optimization algorithm and has a faster convergence rate than the PSO algorithm. TLBO is thus preferable where accuracy matters more than convergence speed.
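A minimal sketch of the two TLBO phases (teacher and learner), run here on a generic sphere objective as a stand-in for the IIR-filter cost function; population size, bounds, and iteration count are illustrative choices, not the paper's settings:

```python
import numpy as np

def sphere(x):
    return float(np.sum(x * x))

rng = np.random.default_rng(1)
n_pop, dim, iters = 20, 5, 100
lo, hi = -5.0, 5.0
pop = rng.uniform(lo, hi, size=(n_pop, dim))
fit = np.array([sphere(x) for x in pop])

for _ in range(iters):
    # Teacher phase: shift learners toward the best individual ("teacher")
    teacher = pop[np.argmin(fit)]
    mean = pop.mean(axis=0)
    TF = rng.integers(1, 3)                  # teaching factor, 1 or 2
    for i in range(n_pop):
        cand = np.clip(pop[i] + rng.random(dim) * (teacher - TF * mean), lo, hi)
        f = sphere(cand)
        if f < fit[i]:                       # greedy acceptance
            pop[i], fit[i] = cand, f
    # Learner phase: learn pairwise from a random classmate
    for i in range(n_pop):
        j = rng.integers(n_pop)
        if j == i:
            continue
        d = pop[i] - pop[j] if fit[i] < fit[j] else pop[j] - pop[i]
        cand = np.clip(pop[i] + rng.random(dim) * d, lo, hi)
        f = sphere(cand)
        if f < fit[i]:
            pop[i], fit[i] = cand, f

best = fit.min()
```

Note the "algorithm-specific parameter-less" property the abstract mentions: beyond population size and iteration count, no tunable coefficients appear.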

  11. Fast Back-Propagation Learning Using Steep Activation Functions and Automatic Weight

    Treesearch

    Tai-Hoon Cho; Richard W. Conners; Philip A. Araman

    1992-01-01

    In this paper, several back-propagation (BP) learning speed-up algorithms that employ the "gain" parameter, i.e., the steepness of the activation function, are examined. Simulations will show that increasing the gain seemingly increases the speed of convergence and that these algorithms can converge faster than the standard BP learning algorithm on some problems. However,...

  12. Practical differences among probabilities, possibilities, and credibilities

    NASA Astrophysics Data System (ADS)

    Grandin, Jean-Francois; Moulin, Caroline

    2002-03-01

    This paper presents some important differences between theories that allow uncertainty management in data fusion. The main comparative results illustrated in this paper are the following. Incompatibility between decisions obtained from probabilities and credibilities is highlighted. In the dynamic frame, as remarked in [19] or [17], the belief and plausibility of the Dempster-Shafer model do not bracket the Bayesian probability. This bracketing can, however, be obtained by the Modified Dempster-Shafer approach. It can also be obtained in the Bayesian framework either by simulation techniques or with a studentization. The uncommitted mass in the Dempster-Shafer approach, i.e. the mass accorded to ignorance, provides a mechanism similar to reliability in the Bayesian model. Uncommitted mass in Dempster-Shafer theory and reliability in Bayes theory act like a filter that weakens extracted information and improves robustness to outliers. So it is logical to observe, on examples like the one presented particularly by D.M. Buede, faster convergence of a Bayesian method that does not take reliability into account over a Dempster-Shafer method that uses uncommitted mass. But, on Bayesian masses, if reliability is taken into account at the same level as the uncommitted mass, e.g. F=1-m, we observe an equivalent convergence rate. When the Dempster-Shafer and Bayes operators are informed by uncertainty, faster or slower convergence can be exhibited on non-Bayesian masses. This is due to positive or negative synergy between the information delivered by the sensors, a direct consequence of non-additivity when considering non-Bayesian masses. Ignorance of the prior in Bayesian techniques can be quickly compensated by the information accumulated over time by a set of sensors. All these results are presented on simple examples and developed when necessary.
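The build-up of belief under repeated fusion can be sketched with Dempster's rule on a two-element frame; the report masses below are invented for illustration, with 0.3 left uncommitted (on ignorance), which is exactly the mass that slows convergence relative to a fully committed Bayesian update:

```python
# Dempster's rule on the frame {A, B}; a mass assignment is a tuple
# (m({A}), m({B}), m({A,B})), the last entry being the uncommitted mass.
def dempster(m1, m2):
    a1, b1, t1 = m1
    a2, b2, t2 = m2
    K = a1 * b2 + b1 * a2                        # conflicting mass
    a = (a1 * a2 + a1 * t2 + t1 * a2) / (1 - K)  # mass landing on {A}
    b = (b1 * b2 + b1 * t2 + t1 * b2) / (1 - K)  # mass landing on {B}
    return (a, b, 1 - a - b)

report = (0.5, 0.2, 0.3)   # sensor report reserving 0.3 for ignorance
acc = report
for _ in range(9):         # fuse ten identical reports in total
    acc = dempster(acc, report)

bel_A = acc[0]             # belief: mass committed to {A}
pl_A = acc[0] + acc[2]     # plausibility: mass not committed against A
```

Belief in A grows toward 1 as consistent evidence accumulates, with plausibility bounding it from above; shrinking the uncommitted mass to zero reduces the rule to a Bayesian-style normalized product and speeds this up, mirroring the comparison discussed above.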

  13. Formation and stability of a double subduction system: a numerical study

    NASA Astrophysics Data System (ADS)

    Pusok, A. E.; Stegman, D. R.

    2017-12-01

    Examples of double subduction systems can be found in both the modern (Izu-Bonin-Marianas and Ryukyu arcs, e.g. Hall [1997]) and ancient (Kohistan arc in the Western Himalayas, e.g. Burg et al. [2006]) tectonic record. A double subduction system has been proposed to explain the high convergence rate observed for the India-Eurasia convergence [Aitchison et al., 2000; Jagoutz et al., 2015; Holt et al., 2017]. Rates of convergence across coupled double subduction systems can be significantly faster than across single subduction systems because of slab pull by two slabs. However, despite significant geological and geophysical observations, questions regarding double subduction remain largely unexplored. For example, it is unclear how a double subduction system forms and remains stable over millions of years. Previous numerical studies of double subduction either introduced weak zones to initiate subduction [Mishin et al., 2008] or started with both subduction systems already initiated [Jagoutz et al., 2015; Holt et al., 2017], thus assuming a priori information regarding the initial position of the two subduction zones. Moreover, the driving forces initiating a stable double subduction system remain unclear. In the context of India-Eurasia, Cande and Stegman [2011] found evidence that the Réunion mantle plume head provided an ephemeral driving force on both the Indian and African plates for as long as 25 million years and had significant influence on plate boundaries in the region. In this study, we perform 2D and 3D numerical simulations using the code LaMEM [Kaus et al., 2016] to investigate (i) subduction initiation of a secondary system in an already initiated single subduction system, and (ii) the dynamics and stability of the newly formed double subduction system. We start from a single subduction setup, where subduction is already initiated (mature), and we stress the system by controlling its convergence rate (i.e. imposing influx/outflux boundary conditions).
Under certain conditions, a second subduction may develop and transform into a stable double subduction system. Results suggest that the fate of the incipient secondary subduction depends on internal factors (i.e. buoyancy and rheology), but also on the dynamics of the primary subduction zone and the boundary conditions (i.e. convergence rate).

  14. Direct measurement of asperity contact growth in quartz at hydrothermal conditions

    USGS Publications Warehouse

    Beeler, Nicholas M.; Hickman, Stephen H.

    2015-01-01

    Earthquake recurrence requires interseismic fault restrengthening, which results from solid-state deformation in room-temperature friction and indentation experiments. In contrast, exhumed fault zones show that solution-transport processes such as pressure solution and contact overgrowths influence fault zone properties. In the absence of fluid flow, overgrowths are driven by gradients in surface curvature, where material is dissolved, diffuses, and precipitates at the contact without convergence normal to the contact. To determine the rate of overgrowth for quartz, we conducted single-contact experiments in an externally heated pressure vessel. Convergence was continuously monitored using reflected-light interferometry through a long-working-distance microscope. Contact normal force was constant with an initial effective normal stress of 1.7 MPa, temperature was between 350 and 530 °C, and water pressure was constant at 150 MPa. Two control experiments were conducted: one dry at 425 °C and one bi-material (sapphire) at 425 °C and 150 MPa water pressure. No contact growth or convergence was observed in the controls. For wet single-phase contacts, growth was initially rapid and then decreased with time. No convergence was observed. Fluid inclusions indicate that the contact is not uniformly wetted. The contact is bounded by small regions of high aperture, reflecting local free-face dissolution as the source for the overgrowth. The apparent activation energy is ~125 kJ/mol. Extrapolation predicts rates of contact area increase orders of magnitude faster than in dry, room-temperature and hydrothermal friction experiments, suggesting that natural strength recovery near the base of the seismogenic zone could be dominated by contact overgrowth.
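A back-of-envelope check on what the reported apparent activation energy implies (this uses only the ~125 kJ/mol figure from the abstract; the Arrhenius prefactor cancels in the ratio):

```python
import math

# Arrhenius rate ratio between the two experimental temperature extremes,
# rate(T2)/rate(T1) = exp(Ea/R * (1/T1 - 1/T2)), using Ea ~ 125 kJ/mol.
R = 8.314                                  # J/(mol K)
Ea = 125e3                                 # J/mol, from the abstract
T1, T2 = 350.0 + 273.15, 530.0 + 273.15    # K

ratio = math.exp(Ea / R * (1.0 / T1 - 1.0 / T2))
```

The Arrhenius factor alone makes contact growth at 530 °C a few hundred times faster than at 350 °C, consistent with the strong temperature sensitivity the abstract invokes when extrapolating to natural conditions.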

  15. Pointwise convergence of derivatives of Lagrange interpolation polynomials for exponential weights

    NASA Astrophysics Data System (ADS)

    Damelin, S. B.; Jung, H. S.

    2005-01-01

    For a general class of exponential weights on the line and on (-1,1), we study pointwise convergence of the derivatives of Lagrange interpolation. Our weights include even weights of smooth polynomial decay near ±∞ (Freud weights), even weights of faster than smooth polynomial decay near ±∞ (Erdős weights) and even weights which vanish strongly near ±1, for example Pollaczek-type weights.

  16. Acceleration of Convergence to Equilibrium in Markov Chains by Breaking Detailed Balance

    NASA Astrophysics Data System (ADS)

    Kaiser, Marcus; Jack, Robert L.; Zimmer, Johannes

    2017-07-01

    We analyse and interpret the effects of breaking detailed balance on the convergence to equilibrium of conservative interacting particle systems and their hydrodynamic scaling limits. For finite systems of interacting particles, we review existing results showing that irreversible processes converge faster to their steady state than reversible ones. We show how this behaviour appears in the hydrodynamic limit of such processes, as described by macroscopic fluctuation theory, and we provide a quantitative expression for the acceleration of convergence in this setting. We give a geometrical interpretation of this acceleration, in terms of currents that are antisymmetric under time-reversal and orthogonal to the free energy gradient, which act to drive the system away from states where (reversible) gradient-descent dynamics result in slow convergence to equilibrium.

  17. Decentralized finite-time attitude synchronization for multiple rigid spacecraft via a novel disturbance observer.

    PubMed

    Zong, Qun; Shao, Shikai

    2016-11-01

    This paper investigates decentralized finite-time attitude synchronization for a group of rigid spacecraft by using quaternions, with consideration of environmental disturbances, inertia uncertainties and actuator saturation. Nonsingular terminal sliding mode (TSM) is used for controller design. Firstly, a theorem is proven that there always exists a kind of TSM that converges faster than fast terminal sliding mode (FTSM) for quaternion-described attitude control systems. A controller with this kind of TSM has faster convergence and lower computational cost than an FTSM controller. Then, combined with an adaptive parameter estimation strategy, a novel terminal sliding mode disturbance observer is proposed. The proposed disturbance observer needs no upper-bound information on the lumped uncertainties or their derivatives. On the basis of undirected topology and the disturbance observer, decentralized attitude synchronization control laws are designed and all attitude errors are ensured to converge to small regions in finite time. As for the actuator saturation problem, an auxiliary variable is introduced and accommodated by the disturbance observer. Finally, simulation results are given and the effectiveness of the proposed control scheme is verified. Copyright © 2016. Published by Elsevier Ltd.
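The finite-time property that distinguishes terminal sliding mode from ordinary (asymptotic) sliding can be seen in a one-dimensional caricature, not the paper's controller: the fractional-power law ẋ = -k·x^(1/3) reaches zero exactly at t = (3/2k)·x₀^(2/3), while the linear law ẋ = -k·x only decays exponentially.

```python
# Forward-Euler comparison of terminal (finite-time) vs exponential decay.
k, dt, T = 1.0, 1e-4, 2.0
x_tsm, x_lin = 1.0, 1.0          # same initial error for both laws
for _ in range(int(T / dt)):
    # terminal dynamics: x**(1/3) stays large near zero, forcing a
    # finite hitting time (analytically t* = 1.5 here)
    x_tsm = max(x_tsm - dt * k * x_tsm ** (1.0 / 3.0), 0.0)
    if x_tsm < 1e-9:             # snap to zero once numerically converged
        x_tsm = 0.0
    # ordinary linear dynamics: only asymptotic convergence
    x_lin = x_lin - dt * k * x_lin
```

By t = 2 the terminal trajectory has hit zero exactly, while the exponential one still carries error ≈ e⁻² ≈ 0.135.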

  18. Efficiency of Adaptive Temperature-Based Replica Exchange for Sampling Large-Scale Protein Conformational Transitions.

    PubMed

    Zhang, Weihong; Chen, Jianhan

    2013-06-11

    Temperature-based replica exchange (RE) is now considered a principal technique for enhanced sampling of protein conformations. It is also recognized that existence of sharp cooperative transitions (such as protein folding/unfolding) can lead to temperature exchange bottlenecks and significantly reduce the sampling efficiency. Here, we revisit two adaptive temperature-based RE protocols, namely, exchange equalization (EE) and current maximization (CM), that were previously examined using atomistic simulations (Lee and Olson, J. Chem. Phys. 2011, 134, 24111). Both protocols aim to overcome exchange bottlenecks by adaptively adjusting the simulation temperatures, either to achieve uniform exchange rates (in EE) or to maximize temperature diffusion (CM). By designing a realistic yet computationally tractable coarse-grained protein model, one can sample many reversible folding/unfolding transitions using conventional constant temperature molecular dynamics (MD), standard REMD, EE-REMD, and CM-REMD. This allows rigorous evaluation of the sampling efficiency, by directly comparing the rates of folding/unfolding transitions and convergence of various thermodynamic properties of interest. The results demonstrate that both EE and CM can indeed enhance temperature diffusion compared to standard RE, by ∼3- and over 10-fold, respectively. Surprisingly, the rates of reversible folding/unfolding transitions are similar in all three RE protocols. The convergence rates of several key thermodynamic properties, including the folding stability and various 1D and 2D free energy surfaces, are also similar. Therefore, the efficiency of RE protocols does not appear to be limited by temperature diffusion, but by the inherent rates of spontaneous large-scale conformational rearrangements. 
This is particularly true considering that virtually all RE simulations of proteins in practice involve exchange attempt frequencies (∼ps⁻¹) that are several orders of magnitude faster than the slowest protein motions (∼μs⁻¹). Our results also suggest that the efficiency of RE will not likely be improved by other protocols that aim to accelerate exchange or temperature diffusion. Instead, protocols with some type of guided tempering will likely be necessary to drive faster large-scale conformational transitions.

  19. A Kalman Filter for SINS Self-Alignment Based on Vector Observation.

    PubMed

    Xu, Xiang; Xu, Xiaosu; Zhang, Tao; Li, Yao; Tong, Jinwu

    2017-01-29

    In this paper, a self-alignment method for strapdown inertial navigation systems based on the q-method is studied. In addition, an improved method based on integrating gravitational apparent motion to form apparent velocity is designed, which can reduce the random noises of the observation vectors. For further analysis, a novel self-alignment method using a Kalman filter based on adaptive filter technology is proposed, which transforms the self-alignment procedure into an attitude estimation using the observation vectors. In the proposed method, a linear pseudo-measurement equation is adopted by employing the transfer method between the quaternion and the observation vectors. Analysis and simulation indicate that the accuracy of the self-alignment is improved. Meanwhile, to improve the convergence rate of the proposed method, a new method based on parameter recognition and a reconstruction algorithm for apparent gravitation is devised, which can reduce the influence of the random noises of the observation vectors. Simulations and turntable tests are carried out, and the results indicate that the proposed method can acquire sound alignment results with lower standard variances, and can obtain higher alignment accuracy and a faster convergence rate.
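The vector-observation core the alignment builds on, Davenport's q-method, can be sketched compactly: the optimal attitude quaternion is the dominant eigenvector of a 4×4 matrix K assembled from weighted reference/body vector pairs. The rotation and vectors below are invented test data, not the paper's alignment scenario.

```python
import numpy as np

def q_method(refs, bods, weights):
    """Davenport's q-method: returns the attitude matrix A with A @ r = b."""
    B = sum(w * np.outer(b, r) for w, b, r in zip(weights, bods, refs))
    S, sigma = B + B.T, np.trace(B)
    z = np.array([B[1, 2] - B[2, 1], B[2, 0] - B[0, 2], B[0, 1] - B[1, 0]])
    K = np.zeros((4, 4))
    K[:3, :3] = S - sigma * np.eye(3)
    K[:3, 3] = z
    K[3, :3] = z
    K[3, 3] = sigma
    _, vecs = np.linalg.eigh(K)
    q = vecs[:, -1]                 # eigenvector of the largest eigenvalue
    v, s = q[:3], q[3]              # vector part, scalar part
    cross = np.array([[0, -v[2], v[1]],
                      [v[2], 0, -v[0]],
                      [-v[1], v[0], 0]])
    return (s * s - v @ v) * np.eye(3) + 2 * np.outer(v, v) - 2 * s * cross

# Truth: a 40-degree rotation about z, observed through three vector pairs.
th = np.deg2rad(40.0)
R_true = np.array([[np.cos(th), -np.sin(th), 0.0],
                   [np.sin(th),  np.cos(th), 0.0],
                   [0.0, 0.0, 1.0]])
refs = [np.array([1.0, 0, 0]), np.array([0, 1.0, 0]), np.array([0, 0, 1.0])]
bods = [R_true @ r for r in refs]
R_est = q_method(refs, bods, [1.0, 1.0, 1.0])
```

With noise-free observations the recovered attitude matrix matches the true rotation to machine precision; with noisy vectors the same eigenproblem gives the weighted least-squares (Wahba) optimum.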

  20. QPSO-Based Adaptive DNA Computing Algorithm

    PubMed Central

    Karakose, Mehmet; Cigdem, Ugur

    2013-01-01

    DNA (deoxyribonucleic acid) computing, a computation model that uses DNA molecules for information storage, has been increasingly used for optimization and data analysis in recent years. However, the DNA computing algorithm has some limitations in terms of convergence speed, adaptability, and effectiveness. In this paper, a new approach for the improvement of DNA computing is proposed. This approach aims to run the DNA computing algorithm with adaptive parameters towards the desired goal using quantum-behaved particle swarm optimization (QPSO). The contributions provided by the proposed QPSO-based adaptive DNA computing algorithm are as follows: (1) the population size, crossover rate, maximum number of operations, enzyme and virus mutation rates, and fitness function of the DNA computing algorithm are simultaneously tuned for the adaptive process, (2) the adaptive algorithm is performed using the QPSO algorithm for goal-driven progress, faster operation, and flexibility in data, and (3) a numerical realization of the DNA computing algorithm with the proposed approach is implemented for system identification. Two experiments with different systems were carried out to evaluate the performance of the proposed approach with comparative results. Experimental results obtained with Matlab and FPGA demonstrate the ability to provide effective optimization, considerable convergence speed, and high accuracy relative to the plain DNA computing algorithm. PMID:23935409
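A minimal QPSO sketch (the sampling rule with the mean-best position and contraction-expansion coefficient β), run on a generic sphere objective as a stand-in for the DNA-computing parameter-tuning problem; all settings below are illustrative assumptions:

```python
import numpy as np

def f(x):
    return float(np.sum(x * x))

rng = np.random.default_rng(2)
n, dim, iters, beta = 20, 5, 100, 0.75     # beta: contraction-expansion
x = rng.uniform(-5, 5, size=(n, dim))
pbest = x.copy()
pcost = np.array([f(p) for p in pbest])

for _ in range(iters):
    g = pbest[np.argmin(pcost)]            # global best position
    mbest = pbest.mean(axis=0)             # mean of all personal bests
    for i in range(n):
        phi = rng.random(dim)
        p = phi * pbest[i] + (1 - phi) * g          # local attractor
        u = 1.0 - rng.random(dim)                   # in (0, 1], avoids log(0)
        sign = np.where(rng.random(dim) < 0.5, -1.0, 1.0)
        # quantum-behaved position update around the attractor
        x[i] = p + sign * beta * np.abs(mbest - x[i]) * np.log(1.0 / u)
        c = f(x[i])
        if c < pcost[i]:
            pbest[i], pcost[i] = x[i].copy(), c

best_cost = pcost.min()
```

Unlike classical PSO there is no velocity term; the single coefficient β controls the balance between contraction toward the attractor and exploratory jumps.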

  1. A Kalman Filter for SINS Self-Alignment Based on Vector Observation

    PubMed Central

    Xu, Xiang; Xu, Xiaosu; Zhang, Tao; Li, Yao; Tong, Jinwu

    2017-01-01

    In this paper, a self-alignment method for strapdown inertial navigation systems based on the q-method is studied. In addition, an improved method based on integrating gravitational apparent motion to form apparent velocity is designed, which can reduce the random noises of the observation vectors. For further analysis, a novel self-alignment method using a Kalman filter based on adaptive filter technology is proposed, which transforms the self-alignment procedure into an attitude estimation using the observation vectors. In the proposed method, a linear pseudo-measurement equation is adopted by employing the transfer method between the quaternion and the observation vectors. Analysis and simulation indicate that the accuracy of the self-alignment is improved. Meanwhile, to improve the convergence rate of the proposed method, a new method based on parameter recognition and a reconstruction algorithm for apparent gravitation is devised, which can reduce the influence of the random noises of the observation vectors. Simulations and turntable tests are carried out, and the results indicate that the proposed method can acquire sound alignment results with lower standard variances, and can obtain higher alignment accuracy and a faster convergence rate. PMID:28146059

  2. Efficient Controls for Finitely Convergent Sequential Algorithms

    PubMed Central

    Chen, Wei; Herman, Gabor T.

    2010-01-01

    Finding a feasible point that satisfies a set of constraints is a common task in scientific computing: examples are the linear feasibility problem and the convex feasibility problem. Finitely convergent sequential algorithms can be used for solving such problems; an example of such an algorithm is ART3, which is defined in such a way that its control is cyclic in the sense that during its execution it repeatedly cycles through the given constraints. Previously we found a variant of ART3 whose control is no longer cyclic, but which is still finitely convergent and in practice it usually converges faster than ART3 does. In this paper we propose a general methodology for automatic transformation of finitely convergent sequential algorithms in such a way that (i) finite convergence is retained and (ii) the speed of convergence is improved. The first of these two properties is proven by mathematical theorems, the second is illustrated by applying the algorithms to a practical problem. PMID:20953327
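The cyclic control described above can be illustrated with the classic Kaczmarz iteration, a simpler relative of ART3 for the linear feasibility problem: sweep through the constraints ⟨aᵢ, x⟩ = bᵢ in a fixed cyclic order, projecting the iterate onto each hyperplane in turn. The random consistent system below is invented test data.

```python
import numpy as np

rng = np.random.default_rng(3)
A = rng.normal(size=(10, 5))
x_true = rng.normal(size=5)
b = A @ x_true                       # consistent linear feasibility problem

x = np.zeros(5)
for sweep in range(200):             # cyclic control: repeat rows in order
    for i in range(A.shape[0]):
        a = A[i]
        # orthogonal projection of x onto the hyperplane <a, x> = b[i]
        x = x + (b[i] - a @ x) / (a @ a) * a

residual = np.linalg.norm(A @ x - b)
```

For a consistent system the cyclic sweeps converge linearly to a feasible point; the paper's transformation replaces this fixed cyclic order with a data-driven control that keeps finite convergence while typically needing fewer projections.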

  3. Modified GMDH-NN algorithm and its application for global sensitivity analysis

    NASA Astrophysics Data System (ADS)

    Song, Shufang; Wang, Lu

    2017-11-01

    Global sensitivity analysis (GSA) is a very useful tool for evaluating the influence of input variables over their whole distribution range. Sobol' method is the most commonly used of the variance-based methods, which are efficient and popular GSA techniques. High dimensional model representation (HDMR) is a popular way to compute Sobol' indices; however, its drawbacks cannot be ignored. We show that a modified GMDH-NN algorithm can calculate the coefficients of the metamodel efficiently, so this paper combines it with HDMR and proposes the GMDH-HDMR method. The new method shows higher precision and a faster convergence rate. Several numerical and engineering examples are used to confirm its advantages.
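The quantity being targeted, the first-order Sobol' index, can be estimated without any metamodel by plain pick-freeze Monte Carlo; the toy function below is an invented example with a known analytic answer, S₁ = (1/12)/(1/12 + 4/45) ≈ 0.484 for x₁, x₂ ~ U(0,1):

```python
import numpy as np

rng = np.random.default_rng(4)
N = 200_000
f = lambda x1, x2: x1 + x2 ** 2

Ax1, Ax2 = rng.random(N), rng.random(N)   # sample matrix A
Bx2 = rng.random(N)                       # independent redraw of x2

yA = f(Ax1, Ax2)
yBA = f(Ax1, Bx2)          # "freeze" x1 from A, redraw x2
# pick-freeze estimator of S1 = Var(E[f|x1]) / Var(f)
S1 = (np.mean(yA * yBA) - np.mean(yA) ** 2) / np.var(yA)
```

The cost is the pain point motivating metamodels such as GMDH-HDMR: each index needs O(N) extra model evaluations, which is prohibitive when the model is an expensive simulation.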

  4. Asian children's verbal development: A comparison of the United States and Australia.

    PubMed

    Choi, Kate H; Hsin, Amy; McLanahan, Sara S

    2015-07-01

    Using longitudinal cohort studies from Australia and the United States, we assess the pervasiveness of the Asian academic advantage by documenting White-Asian differences in verbal development from early to middle childhood. In the United States, Asian children begin school with higher verbal scores than Whites, but their advantage erodes over time. The initial verbal advantage of Asian American children is partly due to their parents' socioeconomic advantage and would have been larger had it not been for their mothers' English deficiency. In Australia, Asian children have lower verbal scores than Whites at age 4, but their scores grow at a faster rate and converge towards those of Whites by age 8. The initial verbal disadvantage of Asian Australian children is partly due to their mothers' English deficiency and would have been larger had it not been for their Asian parents' educational advantage. Asian Australian children's verbal scores grow at a faster pace, in part, because of their parents' educational advantage. Copyright © 2015 Elsevier Inc. All rights reserved.

  5. Numerical Evaluation of P-Multigrid Method for the Solution of Discontinuous Galerkin Discretizations of Diffusive Equations

    NASA Technical Reports Server (NTRS)

    Atkins, H. L.; Helenbrook, B. T.

    2005-01-01

    This paper describes numerical experiments with P-multigrid to corroborate analysis, validate the present implementation, and examine issues that arise in implementations of the various combinations of relaxation schemes, discretizations and P-multigrid methods. The two approaches to implementing P-multigrid presented here are equivalent for most high-order discretization methods, such as spectral element, SUPG, and discontinuous Galerkin applied to advection; however, it is discovered that the approach that mimics the common geometric multigrid implementation is less robust, and frequently unstable, when applied to discontinuous Galerkin discretizations of diffusion. Gauss-Seidel relaxation converges 40% faster than block Jacobi, as predicted by analysis; however, the implementation of Gauss-Seidel is considerably more expensive than one would expect, because gradients in most neighboring elements must be updated. A compromise quasi-Gauss-Seidel relaxation method that evaluates the gradient in each element twice per iteration converges at rates similar to those predicted for true Gauss-Seidel.
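The Jacobi/Gauss-Seidel speed gap discussed above can be reproduced on the simplest model problem, a 1-D Poisson system (not the paper's DG setting): for this matrix the Gauss-Seidel spectral radius is the square of Jacobi's, so it needs roughly half the iterations to a given tolerance.

```python
import numpy as np

n = 20
A = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)   # 1-D Poisson
b = np.ones(n)
tol = 1e-8

def iterate(update):
    x, its = np.zeros(n), 0
    while np.linalg.norm(A @ x - b) > tol and its < 100_000:
        x = update(x)
        its += 1
    return its

def jacobi(x):
    return x + (b - A @ x) / 2.0         # diagonal of A is 2

def gauss_seidel(x):
    x = x.copy()
    for i in range(n):                   # sweep uses freshly updated values
        x[i] = (b[i] - A[i] @ x + 2.0 * x[i]) / 2.0
    return x

it_j, it_gs = iterate(jacobi), iterate(gauss_seidel)
```

The trade-off the abstract reports also shows here in miniature: each Gauss-Seidel sweep is inherently sequential, whereas the Jacobi update is a single parallel matrix-vector operation.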

  6. Selective advantage of implementing optimal contributions selection and timescales for the convergence of long-term genetic contributions.

    PubMed

    Howard, David M; Pong-Wong, Ricardo; Knap, Pieter W; Kremer, Valentin D; Woolliams, John A

    2018-05-10

    Optimal contributions selection (OCS) provides animal breeders with a framework for maximising genetic gain for a predefined rate of inbreeding. Simulation studies have indicated that the source of the selective advantage of OCS is derived from breeding decisions being more closely aligned with estimates of Mendelian sampling terms ([Formula: see text]) of selection candidates, rather than estimated breeding values (EBV). This study represents the first attempt to assess the source of the selective advantage provided by OCS using a commercial pig population and by testing three hypotheses: (1) OCS places more emphasis on [Formula: see text] compared to EBV for determining which animals were selected as parents, (2) OCS places more emphasis on [Formula: see text] compared to EBV for determining which of those parents were selected to make a long-term genetic contribution (r), and (3) OCS places more emphasis on [Formula: see text] compared to EBV for determining the magnitude of r. The population studied also provided an opportunity to investigate the convergence of r over time. Selection intensity limited the number of males available for analysis, but females provided some evidence that the selective advantage derived from applying an OCS algorithm resulted from greater weighting being placed on [Formula: see text] during the process of decision-making. Male r were found to converge initially at a faster rate than female r, with approximately 90% convergence achieved within seven generations across both sexes. This study of commercial data provides some support to results from theoretical and simulation studies that the source of selective advantage from OCS comes from [Formula: see text]. The implication that genomic selection (GS) improves estimation of [Formula: see text] should allow for even greater genetic gains for a predefined rate of inbreeding, once the synergistic benefits of combining OCS and GS are realised.

  7. Perturbations in the initial soil moisture conditions: Impacts on hydrologic simulation in a large river basin

    NASA Astrophysics Data System (ADS)

    Niroula, Sundar; Halder, Subhadeep; Ghosh, Subimal

    2018-06-01

    Real-time hydrologic forecasting requires a near-accurate initial condition of soil moisture; however, continuous monitoring of soil moisture is not operational in many regions, such as the Ganga basin, which extends across Nepal, India and Bangladesh. Here, we examine the impacts of perturbation/error in the initial soil moisture conditions on simulated soil moisture and streamflow in the Ganga basin, and its propagation, during the summer monsoon season (June to September). This provides information regarding the minimum duration of model simulation required for attaining model stability. We use the Variable Infiltration Capacity model for hydrological simulations after validation. Multiple hydrologic simulations are performed, each of 21 days, initialized on every 5th day of the monsoon season for deficit, surplus and normal monsoon years. Each of these simulations is performed with the initial soil moisture condition obtained from long-term runs, along with positive and negative perturbations. The time required for the convergence of initial errors is obtained for all the cases. We find a quick convergence for the year with high rainfall as well as for the wet spells within a season. We further find high spatial variations in the time required for convergence; regions with high precipitation, such as the Lower Ganga basin, attain convergence at a faster rate. Furthermore, deeper soil layers need more time for convergence. Our analysis is the first attempt at understanding the sensitivity of hydrological simulations of the Ganga basin to initial soil moisture conditions. The results obtained here may be useful in understanding the spin-up requirements for operational hydrologic forecasts.
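The wet-spells-converge-faster finding has a simple qualitative analogue in a nonlinear "leaky bucket" toy (not the VIC model; the loss law and numbers are invented): with outflow k·S², a perturbation decays locally at rate 2kS, so a wetter regime (larger storage S) forgets initial-condition errors more quickly.

```python
# Nonlinear reservoir: storage gains forcing P, loses k*S**2 per step.
def run(P, S0, k=0.01, steps=30):
    S = S0
    for _ in range(steps):
        S = S + P - k * S * S
    return S

# Same 1-unit initial-storage error under wet (P=5) and dry (P=1) forcing.
wet_gap = abs(run(5.0, 16.0) - run(5.0, 15.0))
dry_gap = abs(run(1.0, 16.0) - run(1.0, 15.0))
```

After 30 steps the wet-forcing trajectories are far closer together than the dry ones, mirroring the faster spin-up the study reports for high-rainfall years and wet spells.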

  8. Validation of an improved 'diffeomorphic demons' algorithm for deformable image registration in image-guided radiation therapy.

    PubMed

    Zhou, Lu; Zhou, Linghong; Zhang, Shuxu; Zhen, Xin; Yu, Hui; Zhang, Guoqian; Wang, Ruihao

    2014-01-01

    Deformable image registration (DIR) is widely used in radiation therapy, for example in automatic contour generation, dose accumulation, and tumor growth or regression analysis. To achieve higher registration accuracy and faster convergence, an improved 'diffeomorphic demons' registration algorithm is proposed and validated. Based on Brox et al.'s gradient constancy assumption and Malis's efficient second-order minimization (ESM) algorithm, a grey-value gradient similarity term and a transformation error term were added to the demons energy function, and a formula was derived to calculate the update of the transformation field. The limited-memory Broyden-Fletcher-Goldfarb-Shanno (L-BFGS) algorithm was used to optimize the energy function so that the iteration number could be determined automatically. The proposed algorithm was validated using mathematically deformed images and physically deformed phantom images. Compared with the original 'diffeomorphic demons' algorithm, the proposed registration method achieves higher precision and a faster convergence speed. Due to the influence of different scanning conditions in fractionated radiation therapy, the density ranges of the treatment image and the planning image may differ. In such cases, the improved demons algorithm can achieve faster and more accurate registration for radiotherapy.

  9. Evaluation of the CPU time for solving the radiative transfer equation with high-order resolution schemes applying the normalized weighting-factor method

    NASA Astrophysics Data System (ADS)

    Xamán, J.; Zavala-Guillén, I.; Hernández-López, I.; Uriarte-Flores, J.; Hernández-Pérez, I.; Macías-Melo, E. V.; Aguilar-Castro, K. M.

    2018-03-01

    In this paper, we evaluated the convergence rate (CPU time) of a new mathematical formulation for the numerical solution of the radiative transfer equation (RTE) with several High-Order (HO) and High-Resolution (HR) schemes. In computational fluid dynamics, this procedure is known as the Normalized Weighting-Factor (NWF) method, and it is adopted here. The NWF method is used to incorporate the high-order resolution schemes into the discretized RTE. The NWF method is compared, in terms of the computer time needed to obtain a converged solution, with the widely used deferred-correction (DC) technique for calculations of a two-dimensional cavity with emitting-absorbing-scattering gray media using the discrete ordinates method. Six parameters, viz. the grid size, the order of quadrature, the absorption coefficient, the emissivity of the boundary surface, the under-relaxation factor, and the scattering albedo, are considered to evaluate ten schemes. The results showed that, using the DC method, the scheme with the lowest CPU time is in general the SOU. In contrast with the results of the DC procedure, the CPU times for the DIAMOND and QUICK schemes using the NWF method are shown to be between 3.8 and 23.1% faster and between 12.6 and 56.1% faster, respectively. However, the other schemes are more time consuming when the NWF method is used instead of the DC method. Additionally, a second test case was presented, and the results showed that, depending on the problem under consideration, the NWF procedure may be computationally faster or slower than the DC method. As an example, the CPU times for the QUICK and SMART schemes are 61.8 and 203.7% slower, respectively, when the NWF formulation is used for the second test case. Finally, future research is required to explore the computational cost of the NWF method in more complex problems.

  10. A fast iterative scheme for the linearized Boltzmann equation

    NASA Astrophysics Data System (ADS)

    Wu, Lei; Zhang, Jun; Liu, Haihu; Zhang, Yonghao; Reese, Jason M.

    2017-06-01

    Iterative schemes to find steady-state solutions to the Boltzmann equation are efficient for highly rarefied gas flows, but can be very slow to converge in the near-continuum flow regime. In this paper, a synthetic iterative scheme is developed to speed up the solution of the linearized Boltzmann equation by penalizing the collision operator L into the form L = (L + Nδh) - Nδh, where δ is the gas rarefaction parameter, h is the velocity distribution function, and N is a tuning parameter controlling the convergence rate. The velocity distribution function is first solved by the conventional iterative scheme, then it is corrected such that the macroscopic flow velocity is governed by a diffusion-type equation that is asymptotic-preserving into the Navier-Stokes limit. The efficiency of this new scheme is assessed by calculating the eigenvalue of the iteration, as well as solving for Poiseuille and thermal transpiration flows. We find that the fastest convergence of our synthetic scheme for the linearized Boltzmann equation is achieved when Nδ is close to the average collision frequency. The synthetic iterative scheme is significantly faster than the conventional iterative scheme in both the transition and the near-continuum gas flow regimes. Moreover, due to its asymptotic-preserving properties, the synthetic iterative scheme does not need high spatial resolution in the near-continuum flow regime, which makes it even faster than the conventional iterative scheme. Using this synthetic scheme, with the fast spectral approximation of the linearized Boltzmann collision operator, Poiseuille and thermal transpiration flows between two parallel plates, through channels of circular/rectangular cross sections and various porous media are calculated over the whole range of gas rarefaction. 
Finally, the flow of a Ne-Ar gas mixture is solved based on the linearized Boltzmann equation with the Lennard-Jones intermolecular potential for the first time, and the difference between these results and those using the hard-sphere potential is discussed.
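    The convergence of such iterative schemes is governed by the spectral radius of the iteration matrix, which is what the eigenvalue analysis above estimates. A minimal matrix-splitting sketch (a generic stationary iteration on an invented matrix, not the authors' kinetic scheme):

```python
import numpy as np

# Generic stationary iteration x^{k+1} = M^{-1}(K x^k + b) for A x = b
# with the splitting A = M - K. The asymptotic convergence rate is set
# by the spectral radius of M^{-1} K, mirroring how the eigenvalue of
# the iteration is used to assess the synthetic scheme's speed.
# The matrix below is illustrative, not a discretized Boltzmann operator.
rng = np.random.default_rng(0)
A = np.diag([4.0, 5.0, 6.0]) + 0.1 * rng.standard_normal((3, 3))
b = np.array([1.0, 2.0, 3.0])

M = np.diag(np.diag(A))      # easily invertible part (Jacobi-like split)
K = M - A                    # remainder, treated explicitly

rho = max(abs(np.linalg.eigvals(np.linalg.solve(M, K))))
assert rho < 1.0             # spectral radius < 1: the iteration converges

x = np.zeros(3)
for _ in range(200):
    x = np.linalg.solve(M, K @ x + b)

print(rho, np.max(np.abs(x - np.linalg.solve(A, b))))
```

The smaller the spectral radius, the fewer iterations are needed; a well-chosen splitting (like the penalized collision operator above) is precisely one that shrinks it.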

  11. Modification Of Learning Rate With Lvq Model Improvement In Learning Backpropagation

    NASA Astrophysics Data System (ADS)

    Tata Hardinata, Jaya; Zarlis, Muhammad; Budhiarti Nababan, Erna; Hartama, Dedy; Sembiring, Rahmat W.

    2017-12-01

    One type of artificial neural network is backpropagation. This algorithm is trained on a given network architecture and is expected to produce correct outputs for inputs that are similar to, but not the same as, those seen during training. The selection of appropriate parameters also affects the outcome; the learning rate is one of the parameters that influence the training process, since it controls the speed of learning on the network architecture. If the learning rate is set too large, the algorithm becomes unstable; if it is set too small, the algorithm converges only over a very long period of time. This study was therefore made to determine the value of the learning rate for the backpropagation algorithm. The LVQ learning-rate model is one of the models used to determine the learning rate of the LVQ algorithm. Here, this LVQ model is modified and applied to the backpropagation algorithm. The experimental results show that with the modified LVQ learning-rate model applied to backpropagation, the learning process becomes faster (fewer epochs).
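    The trade-off described above — too large a rate diverges, too small a rate converges slowly — can be seen on a toy problem. A hedged sketch (plain gradient descent on a 1-D quadratic loss; this is not the paper's LVQ-based rule):

```python
import math

# Gradient descent on L(w) = 0.5 * a * w**2 (gradient a * w): count the
# epochs until |w| < tol. A moderate rate converges quickly, a tiny rate
# needs many epochs, and an oversized rate diverges. Illustrative only.
def epochs_to_converge(lr, a=1.0, w0=1.0, tol=1e-6, max_epochs=10_000):
    w = w0
    for epoch in range(1, max_epochs + 1):
        w -= lr * a * w                       # gradient step
        if not math.isfinite(w):
            return epoch, False               # diverged
        if abs(w) < tol:
            return epoch, True                # converged
    return max_epochs, True

fast, ok_fast = epochs_to_converge(lr=0.5)    # moderate rate: few epochs
slow, ok_slow = epochs_to_converge(lr=0.01)   # too small: many epochs
_, ok_big = epochs_to_converge(lr=2.5)        # too large: unstable
print(fast, slow, ok_big)
```

Adaptive schemes like the one studied here aim to pick the rate automatically so that training lands in the "fast and stable" regime.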

  12. Convergence acceleration of molecular dynamics methods for shocked materials using velocity scaling

    NASA Astrophysics Data System (ADS)

    Taylor, DeCarlos E.

    2017-03-01

    In this work, a convergence acceleration method applicable to extended system molecular dynamics techniques for shock simulations of materials is presented. The method uses velocity scaling to reduce the instantaneous value of the Rankine-Hugoniot conservation of energy constraint used in extended system molecular dynamics methods to more rapidly drive the system towards a converged Hugoniot state. When used in conjunction with the constant stress Hugoniostat method, the velocity scaled trajectories show faster convergence to the final Hugoniot state with little difference observed in the converged Hugoniot energy, pressure, volume and temperature. A derivation of the scale factor is presented and the performance of the technique is demonstrated using the boron carbide armour ceramic as a test material. It is shown that simulations of boron carbide Hugoniot states, from 5 to 20 GPa, using both a classical Tersoff potential and an ab initio density functional, are more rapidly convergent when the velocity scaling algorithm is applied. The accelerated convergence afforded by the current algorithm enables more rapid determination of Hugoniot states thus reducing the computational demand of such studies when using expensive ab initio or classical potentials.
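    Velocity scaling in its simplest thermostat form multiplies all velocities by s = sqrt(T_target / T_current) so the kinetic energy matches a target; the paper derives its scale factor from the Rankine-Hugoniot energy constraint instead, which this generic sketch does not reproduce:

```python
import numpy as np

# Generic velocity-rescaling sketch in reduced units (k_B = 1): the
# instantaneous kinetic temperature of 64 particles is driven exactly
# onto a target value by a single multiplicative scale factor.
rng = np.random.default_rng(1)
m = 1.0
v = rng.standard_normal((64, 3))            # particle velocities

def kinetic_temperature(v, m=1.0):
    # T = (1 / 3N) * sum(m v^2) with 3N translational degrees of freedom
    return m * np.sum(v**2) / (3 * v.shape[0])

T_target = 2.0
s = np.sqrt(T_target / kinetic_temperature(v, m))
v *= s
print(kinetic_temperature(v, m))            # matches T_target
```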

  13. How hot? Systematic convergence of the replica exchange method using multiple reservoirs.

    PubMed

    Ruscio, Jory Z; Fawzi, Nicolas L; Head-Gordon, Teresa

    2010-02-01

    We have devised a systematic approach to converge a replica exchange molecular dynamics simulation by dividing the full temperature range into a series of higher temperature reservoirs and a finite number of lower temperature subreplicas. A defined highest temperature reservoir of equilibrium conformations is used to help converge a lower but still hot temperature subreplica, which in turn serves as the high-temperature reservoir for the next set of lower temperature subreplicas. The process is continued until an optimal temperature reservoir is reached to converge the simulation at the target temperature. This gradual convergence of subreplicas allows for better and faster convergence at the temperature of interest and all intermediate temperatures for thermodynamic analysis, as well as optimizing the use of multiple processors. We illustrate the overall effectiveness of our multiple reservoir replica exchange strategy by comparing sampling and computational efficiency with respect to replica exchange, as well as comparing methods when converging the structural ensemble of the disordered Abeta(21-30) peptide simulated with explicit water by comparing calculated Rotating Overhauser Effect Spectroscopy intensities to experimentally measured values. Copyright 2009 Wiley Periodicals, Inc.

  14. CNSFV code development, virtual zone Navier-Stokes computations of oscillating control surfaces and computational support of the laminar flow supersonic wind tunnel

    NASA Technical Reports Server (NTRS)

    Klopfer, Goetz H.

    1993-01-01

    The work performed during the past year on this cooperative agreement covered two major areas and two lesser ones. The two major items included further development and validation of the Compressible Navier-Stokes Finite Volume (CNSFV) code and providing computational support for the Laminar Flow Supersonic Wind Tunnel (LFSWT). The two lesser items involve a Navier-Stokes simulation of an oscillating control surface at transonic speeds and improving the basic algorithm used in the CNSFV code for faster convergence rates and more robustness. The work done in all four areas is in support of the High Speed Research Program at NASA Ames Research Center.

  15. SCoPE: an efficient method of Cosmological Parameter Estimation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Das, Santanu; Souradeep, Tarun, E-mail: santanud@iucaa.ernet.in, E-mail: tarun@iucaa.ernet.in

    The Markov Chain Monte Carlo (MCMC) sampler is widely used for cosmological parameter estimation from CMB and other data. However, due to the intrinsic serial nature of the MCMC sampler, convergence is often very slow. Here we present a fast and independently written Monte Carlo method for cosmological parameter estimation named the Slick Cosmological Parameter Estimator (SCoPE), which employs delayed rejection to increase the acceptance rate of a chain, and pre-fetching, which helps an individual chain to run on parallel CPUs. An inter-chain covariance update is also incorporated to prevent clustering of the chains, allowing faster and better mixing of the chains. We use an adaptive method for covariance calculation to calculate and update the covariance automatically as the chains progress. Our analysis shows that the acceptance probability of each step in SCoPE is more than 95% and the convergence of the chains is faster. Using SCoPE, we carry out cosmological parameter estimations for different cosmological models using WMAP-9 and Planck results. One of the current research interests in cosmology is quantifying the nature of dark energy. We analyze the cosmological parameters from two illustrative, commonly used parameterisations of dark energy models. We also assess whether the primordial helium fraction in the universe can be constrained by the present CMB data from WMAP-9 and Planck. The results from our MCMC analysis on the one hand help us to understand the workability of SCoPE better, and on the other hand provide a completely independent estimation of cosmological parameters from the WMAP-9 and Planck data.
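    One of the ingredients named above, adaptive covariance updating, can be sketched on a toy 2-D Gaussian target. This is our own minimal random-walk Metropolis, not SCoPE itself, which additionally uses delayed rejection, pre-fetching, and inter-chain updates:

```python
import numpy as np

# Random-walk Metropolis whose proposal covariance is periodically
# re-estimated from the chain's own history, so the proposals adapt to
# the target's shape as sampling progresses. Target and tuning numbers
# are invented for illustration.
rng = np.random.default_rng(2)
target_cov = np.array([[1.0, 0.8], [0.8, 1.0]])
inv_cov = np.linalg.inv(target_cov)

def log_target(x):
    return -0.5 * x @ inv_cov @ x

x = np.zeros(2)
prop_cov = np.eye(2)
samples = []
for i in range(20_000):
    y = x + rng.multivariate_normal(np.zeros(2), 0.5 * prop_cov)
    if np.log(rng.random()) < log_target(y) - log_target(x):
        x = y                                   # accept
    samples.append(x)
    if i > 500 and i % 500 == 0:
        # adapt: use the empirical covariance of the chain so far
        prop_cov = np.cov(np.array(samples).T) + 1e-6 * np.eye(2)

samples = np.array(samples[5000:])              # discard burn-in
print(np.cov(samples.T))                        # approaches target_cov
```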

  16. Electromagnetic scattering of large structures in layered earths using integral equations

    NASA Astrophysics Data System (ADS)

    Xiong, Zonghou; Tripp, Alan C.

    1995-07-01

    An electromagnetic scattering algorithm for large conductivity structures in stratified media has been developed and is based on the method of system iteration and spatial symmetry reduction using volume electric integral equations. The method of system iteration divides a structure into many substructures and solves the resulting matrix equation using a block iterative method. The block submatrices usually need to be stored on disk in order to save computer core memory. However, this requires a large disk for large structures. If the body is discretized into equal-size cells it is possible to use the spatial symmetry relations of the Green's functions to regenerate the scattering impedance matrix in each iteration, thus avoiding expensive disk storage. Numerical tests show that the system iteration converges much faster than the conventional point-wise Gauss-Seidel iterative method. The number of cells does not significantly affect the rate of convergence. Thus the algorithm effectively reduces the solution of the scattering problem to an order of O(N^2), instead of O(N^3) as with direct solvers.
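    The system iteration described above is essentially a block Gauss-Seidel sweep: each substructure is solved exactly while the other blocks are frozen at their latest values. A small dense sketch (the matrix is illustrative, not an integral-equation impedance matrix):

```python
import numpy as np

# Block Gauss-Seidel for A x = b: sweep over blocks of size nb, solving
# each diagonal block exactly with the most recent values of the other
# blocks substituted on the right-hand side.
rng = np.random.default_rng(3)
n, nb = 8, 2                                   # 8 unknowns, block size 2
A = 4.0 * np.eye(n) + 0.3 * rng.standard_normal((n, n))  # dominant diagonal
b = rng.standard_normal(n)

x = np.zeros(n)
for sweep in range(100):
    for k in range(0, n, nb):
        sl = slice(k, k + nb)
        # b_block minus the coupling to all other (frozen) blocks
        rhs = b[sl] - A[sl] @ x + A[sl, sl] @ x[sl]
        x[sl] = np.linalg.solve(A[sl, sl], rhs)

print(np.max(np.abs(x - np.linalg.solve(A, b))))   # near machine precision
```

Solving whole blocks at once propagates corrections faster than updating one unknown at a time, which is why the block method outpaces point-wise Gauss-Seidel.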

  17. Effects of mesh style and grid convergence on particle deposition in bifurcating airway models with comparisons to experimental data.

    PubMed

    Longest, P Worth; Vinchurkar, Samir

    2007-04-01

    A number of research studies have employed a wide variety of mesh styles and levels of grid convergence to assess velocity fields and particle deposition patterns in models of branching biological systems. Generating structured meshes based on hexahedral elements requires significant time and effort; however, these meshes are often associated with high quality solutions. Unstructured meshes that employ tetrahedral elements can be constructed much faster but may increase levels of numerical diffusion, especially in tubular flow systems with a primary flow direction. The objective of this study is to better establish the effects of mesh generation techniques and grid convergence on velocity fields and particle deposition patterns in bifurcating respiratory models. In order to achieve this objective, four widely used mesh styles including structured hexahedral, unstructured tetrahedral, flow adaptive tetrahedral, and hybrid grids have been considered for two respiratory airway configurations. Initial particle conditions tested are based on the inlet velocity profile or the local inlet mass flow rate. Accuracy of the simulations has been assessed by comparisons to experimental in vitro data available in the literature for the steady-state velocity field in a single bifurcation model as well as the local particle deposition fraction in a double bifurcation model. Quantitative grid convergence was assessed based on a grid convergence index (GCI), which accounts for the degree of grid refinement. The hexahedral mesh was observed to have GCI values that were an order of magnitude below the unstructured tetrahedral mesh values for all resolutions considered. Moreover, the hexahedral mesh style provided GCI values of approximately 1% and reduced run times by a factor of 3. Based on comparisons to empirical data, it was shown that inlet particle seedings should be consistent with the local inlet mass flow rate. 
Furthermore, the mesh style was found to have an observable effect on cumulative particle depositions with the hexahedral solution most closely matching empirical results. Future studies are needed to assess other mesh generation options including various forms of the hybrid configuration and unstructured hexahedral meshes.
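    The grid convergence index referred to above is commonly computed in Roache's form from solutions on three systematically refined grids. The sketch below assumes a constant refinement ratio r and the usual safety factor of 1.25; the numerical inputs are invented, since the paper's raw values are not given here:

```python
import math

# Roache-style grid convergence index (GCI) from three grid solutions
# f_fine, f_med, f_coarse with constant refinement ratio r. The observed
# order p comes from the ratio of successive solution differences.
def gci(f_fine, f_med, f_coarse, r=2.0, safety=1.25):
    p = math.log(abs(f_coarse - f_med) / abs(f_med - f_fine)) / math.log(r)
    rel_err = abs((f_med - f_fine) / f_fine)
    return safety * rel_err / (r**p - 1), p

value, order = gci(f_fine=1.00, f_med=1.02, f_coarse=1.10, r=2.0)
print(value, order)   # small GCI; observed order 2 for these inputs
```

A smaller GCI at a given resolution is what distinguishes the hexahedral meshes in the study: the same formula yields values an order of magnitude below those of the tetrahedral meshes.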

  18. Fast Marching Tree: a Fast Marching Sampling-Based Method for Optimal Motion Planning in Many Dimensions*

    PubMed Central

    Janson, Lucas; Schmerling, Edward; Clark, Ashley; Pavone, Marco

    2015-01-01

    In this paper we present a novel probabilistic sampling-based motion planning algorithm called the Fast Marching Tree algorithm (FMT*). The algorithm is specifically aimed at solving complex motion planning problems in high-dimensional configuration spaces. This algorithm is proven to be asymptotically optimal and is shown to converge to an optimal solution faster than its state-of-the-art counterparts, chiefly PRM* and RRT*. The FMT* algorithm performs a “lazy” dynamic programming recursion on a predetermined number of probabilistically-drawn samples to grow a tree of paths, which moves steadily outward in cost-to-arrive space. As such, this algorithm combines features of both single-query algorithms (chiefly RRT) and multiple-query algorithms (chiefly PRM), and is reminiscent of the Fast Marching Method for the solution of Eikonal equations. As a departure from previous analysis approaches that are based on the notion of almost sure convergence, the FMT* algorithm is analyzed under the notion of convergence in probability: the extra mathematical flexibility of this approach allows for convergence rate bounds—the first in the field of optimal sampling-based motion planning. Specifically, for a certain selection of tuning parameters and configuration spaces, we obtain a convergence rate bound of order O(n−1/d+ρ), where n is the number of sampled points, d is the dimension of the configuration space, and ρ is an arbitrarily small constant. We go on to demonstrate asymptotic optimality for a number of variations on FMT*, namely when the configuration space is sampled non-uniformly, when the cost is not arc length, and when connections are made based on the number of nearest neighbors instead of a fixed connection radius. 
Numerical experiments over a range of dimensions and obstacle configurations confirm our theoretical and heuristic arguments by showing that FMT*, for a given execution time, returns substantially better solutions than either PRM* or RRT*, especially in high-dimensional configuration spaces and in scenarios where collision-checking is expensive. PMID:27003958

  19. Controlling chaos faster.

    PubMed

    Bick, Christian; Kolodziejski, Christoph; Timme, Marc

    2014-09-01

    Predictive feedback control is an easy-to-implement method to stabilize unknown unstable periodic orbits in chaotic dynamical systems. Predictive feedback control is severely limited because asymptotic convergence speed decreases with stronger instabilities which in turn are typical for larger target periods, rendering it harder to effectively stabilize periodic orbits of large period. Here, we study stalled chaos control, where the application of control is stalled to make use of the chaotic, uncontrolled dynamics, and introduce an adaptation paradigm to overcome this limitation and speed up convergence. This modified control scheme is not only capable of stabilizing more periodic orbits than the original predictive feedback control but also speeds up convergence for typical chaotic maps, as illustrated in both theory and application. The proposed adaptation scheme provides a way to tune parameters online, yielding a broadly applicable, fast chaos control that converges reliably, even for periodic orbits of large period.
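    Predictive feedback control in its basic form applies x_{n+1} = f(x_n) + K (f(x_n) - x_n): the feedback term vanishes on a period-1 orbit, so the unstable fixed point is stabilized without knowing its location. A sketch on the logistic map with a hand-picked gain (this is the original scheme, not the stalled/adaptive variant introduced in the paper):

```python
# Predictive feedback control of the logistic map f(x) = r x (1 - x).
# For r = 3.9 the fixed point x* = 1 - 1/r is unstable; with gain
# K = -0.6 the controlled map has multiplier (1 + K) f'(x*) - K of
# magnitude < 1, so the orbit is driven onto x*.
r, K = 3.9, -0.6
f = lambda x: r * x * (1 - x)

x = 0.3
for _ in range(500):
    x = f(x) + K * (f(x) - x)     # control vanishes once x = f(x)

x_star = 1 - 1 / r                # fixed point of f
print(x, x_star)                  # x has converged onto x_star
```

The limitation discussed above shows up here too: for longer target periods the required gain window shrinks, which is what the adaptation scheme in the paper is designed to overcome.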

  20. Agreement dynamics on interaction networks with diverse topologies

    NASA Astrophysics Data System (ADS)

    Barrat, Alain; Baronchelli, Andrea; Dall'Asta, Luca; Loreto, Vittorio

    2007-06-01

    We review the behavior of a recently introduced model of agreement dynamics, called the "Naming Game." This model describes the self-organized emergence of linguistic conventions and the establishment of simple communication systems in a population of agents with pairwise local interactions. The mechanisms of convergence towards agreement strongly depend on the network of possible interactions between the agents. In particular, the mean-field case in which all agents communicate with all the others is not efficient, since a large temporary memory is required of the agents. On the other hand, regular lattice topologies lead to fast local convergence but to slow global dynamics similar to coarsening phenomena. The embedding of the agents in a small-world network represents an interesting tradeoff: a local consensus is easily reached, while the long-range links allow the agents to bypass coarsening-like convergence. We also consider alternative adaptive strategies which can lead to faster global convergence.
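    The minimal Naming Game dynamics can be simulated in a few lines. The sketch below is the mean-field case (any pair may interact); population size, seed, and step budget are arbitrary choices of ours:

```python
import random

# Naming Game: a random speaker utters a word from its inventory
# (inventing a fresh one if the inventory is empty). On success both
# agents collapse their inventories to that word; on failure the hearer
# adds it. The population converges to a single shared word.
random.seed(4)
N = 50
inventories = [set() for _ in range(N)]
next_word = 0

for step in range(200_000):
    s, h = random.sample(range(N), 2)          # speaker, hearer
    if not inventories[s]:
        inventories[s].add(next_word)          # invent a new word
        next_word += 1
    word = random.choice(sorted(inventories[s]))
    if word in inventories[h]:
        inventories[s] = {word}                # success: both collapse
        inventories[h] = {word}
    else:
        inventories[h].add(word)               # failure: hearer learns it
    if all(len(inv) == 1 for inv in inventories) \
            and len(set().union(*inventories)) == 1:
        break                                  # global consensus reached

print(step, set().union(*inventories))
```

The "large temporary memory" noted above is visible here as the transient growth of the inventories before the final collapse to one word.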

  1. Implicit flux-split schemes for the Euler equations

    NASA Technical Reports Server (NTRS)

    Thomas, J. L.; Walters, R. W.; Van Leer, B.

    1985-01-01

    Recent progress in the development of implicit algorithms for the Euler equations using the flux-vector splitting method is described. Comparisons of the relative efficiency of relaxation and spatially-split approximately factored methods on a vector processor for two-dimensional flows are made. For transonic flows, the higher convergence rate per iteration of the Gauss-Seidel relaxation algorithms, which are only partially vectorizable, is amply compensated for by the faster computational rate per iteration of the approximately factored algorithm. For supersonic flows, the fully-upwind line-relaxation method is more efficient since the numerical domain of dependence is more closely matched to the physical domain of dependence. A hybrid three-dimensional algorithm using relaxation in one coordinate direction and approximate factorization in the cross-flow plane is developed and applied to a forebody shape at supersonic speeds and a swept, tapered wing at transonic speeds.

  2. An analysis of value function learning with piecewise linear control

    NASA Astrophysics Data System (ADS)

    Tutsoy, Onder; Brown, Martin

    2016-05-01

    Reinforcement learning (RL) algorithms attempt to learn optimal control actions by iteratively estimating a long-term measure of system performance, the so-called value function. For example, RL algorithms have been applied to walking robots to examine the connection between robot motion and the brain, which is known as embodied cognition. In this paper, RL algorithms are analysed using an exemplar test problem. A closed form solution for the value function is calculated and this is represented in terms of a set of basis functions and parameters, which is used to investigate parameter convergence. The value function expression is shown to have a polynomial form where the polynomial terms depend on the plant's parameters and the value function's discount factor. It is shown that the temporal difference error introduces a null space for the differenced higher order basis associated with the effects of controller switching (saturated to linear control or terminating an experiment) apart from the time of the switch. This leads to slow convergence in the relevant subspace. It is also shown that badly conditioned learning problems can occur, and this is a function of the value function discount factor and the controller switching points. Finally, a comparison is performed between the residual gradient and TD(0) learning algorithms, and it is shown that the former has a faster rate of convergence for this test problem.
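    The two learning rules compared in the paper can be contrasted on a much simpler tabular toy (a deterministic 3-state chain of our own, not the paper's piecewise-linear control problem): TD(0) updates only V(s), while the residual-gradient rule also descends the squared Bellman error with respect to V(s'):

```python
import numpy as np

# Deterministic chain s0 -> s1 -> s2 (terminal), reward 1 per step,
# gamma = 0.9, so the true values are [1.9, 1.0, 0.0]. Both rules share
# the TD error delta = r + gamma V(s') - V(s); the residual-gradient
# rule adds the extra -alpha * gamma * delta update on V(s').
gamma, alpha = 0.9, 0.1

def run(rule, episodes=2000):
    V = np.zeros(3)                      # V[2] is terminal, pinned at 0
    for _ in range(episodes):
        for s in (0, 1):
            delta = 1.0 + gamma * V[s + 1] - V[s]
            V[s] += alpha * delta
            if rule == "residual" and s + 1 < 2:
                V[s + 1] -= alpha * gamma * delta   # residual-gradient term
    return V

print(run("td"), run("residual"))        # both approach [1.9, 1.0, 0.0]
```

On this tabular toy both rules reach the same fixed point; the paper's analysis concerns their differing convergence rates under function approximation and controller switching.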

  3. Learning in fully recurrent neural networks by approaching tangent planes to constraint surfaces.

    PubMed

    May, P; Zhou, E; Lee, C W

    2012-10-01

    In this paper we present a new variant of the online real time recurrent learning algorithm proposed by Williams and Zipser (1989). Whilst the original algorithm utilises gradient information to guide the search towards the minimum training error, it is very slow in most applications and often gets stuck in local minima of the search space. It is also sensitive to the choice of learning rate and requires careful tuning. The new variant adjusts weights by moving to the tangent planes to constraint surfaces. It is simple to implement and requires no parameters to be set manually. Experimental results show that this new algorithm gives significantly faster convergence whilst avoiding problems like local minima. Copyright © 2012 Elsevier Ltd. All rights reserved.

  4. Frequency tracking and variable bandwidth for line noise filtering without a reference.

    PubMed

    Kelly, John W; Collinger, Jennifer L; Degenhart, Alan D; Siewiorek, Daniel P; Smailagic, Asim; Wang, Wei

    2011-01-01

    This paper presents a method for filtering line noise using an adaptive noise canceling (ANC) technique. This method effectively eliminates the sinusoidal contamination while achieving a narrower bandwidth than typical notch filters and without relying on the availability of a noise reference signal as ANC methods normally do. A sinusoidal reference is instead digitally generated and the filter efficiently tracks the power line frequency, which drifts around a known value. The filter's learning rate is also automatically adjusted to achieve faster and more accurate convergence and to control the filter's bandwidth. In this paper the focus of the discussion and the data will be electrocorticographic (ECoG) neural signals, but the presented technique is applicable to other recordings.
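    The reference-free ANC idea reduces to LMS weights on a digitally generated sine/cosine pair at the nominal line frequency; the cleaned signal is the adaptation error itself. A fixed-frequency sketch (the paper additionally tracks frequency drift and auto-adjusts the learning rate, which this sketch omits; all signal parameters are invented):

```python
import numpy as np

# LMS cancellation of a 60 Hz sinusoid without a measured noise
# reference: two weights combine generated sin/cos components to match
# the contamination, and the residual error is the cleaned signal.
fs, f0, mu = 1000.0, 60.0, 0.01
t = np.arange(4000) / fs
rng = np.random.default_rng(5)
signal = 0.1 * rng.standard_normal(t.size)          # "neural" signal
noisy = signal + 1.5 * np.sin(2 * np.pi * f0 * t + 0.7)

w = np.zeros(2)
cleaned = np.empty_like(noisy)
for n in range(t.size):
    ref = np.array([np.sin(2 * np.pi * f0 * t[n]),
                    np.cos(2 * np.pi * f0 * t[n])])
    e = noisy[n] - w @ ref        # error = cleaned sample
    w += 2 * mu * e * ref         # LMS weight update
    cleaned[n] = e

print(np.std(noisy), np.std(cleaned[2000:]))   # residual near signal level
```

Because the cancellation acts only on the two quadrature components, its effective bandwidth is far narrower than a conventional notch filter's, which is the property the paper exploits.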

  5. A Model of Substitution Trajectories in Sequence Space and Long-Term Protein Evolution

    PubMed Central

    Usmanova, Dinara R.; Ferretti, Luca; Povolotskaya, Inna S.; Vlasov, Peter K.; Kondrashov, Fyodor A.

    2015-01-01

    The nature of factors governing the tempo and mode of protein evolution is a fundamental issue in evolutionary biology. Specifically, whether or not interactions between different sites, or epistasis, are important in directing the course of evolution became one of the central questions. Several recent reports have scrutinized patterns of long-term protein evolution claiming them to be compatible only with an epistatic fitness landscape. However, these claims have not yet been substantiated with a formal model of protein evolution. Here, we formulate a simple covarion-like model of protein evolution focusing on the rate at which the fitness impact of amino acids at a site changes with time. We then apply the model to the data on convergent and divergent protein evolution to test whether or not the incorporation of epistatic interactions is necessary to explain the data. We find that convergent evolution cannot be explained without the incorporation of epistasis and the rate at which an amino acid state switches from being acceptable at a site to being deleterious is faster than the rate of amino acid substitution. Specifically, for proteins that have persisted in modern prokaryotic organisms since the last universal common ancestor for one amino acid substitution approximately ten amino acid states switch from being accessible to being deleterious, or vice versa. Thus, molecular evolution can only be perceived in the context of rapid turnover of which amino acids are available for evolution. PMID:25415964

  6. Some effects of horizontal discretization on linear baroclinic and symmetric instabilities

    NASA Astrophysics Data System (ADS)

    Barham, William; Bachman, Scott; Grooms, Ian

    2018-05-01

    The effects of horizontal discretization on linear baroclinic and symmetric instabilities are investigated by analyzing the behavior of the hydrostatic Eady problem in ocean models on the B and C grids. On the C grid a spurious baroclinic instability appears at small wavelengths. This instability does not disappear as the grid scale decreases; instead, it simply moves to smaller horizontal scales. The peak growth rate of the spurious instability is independent of the grid scale as the latter decreases. It is equal to c f/√Ri where Ri is the balanced Richardson number, f is the Coriolis parameter, and c is a nondimensional constant that depends on the Richardson number. As the Richardson number increases c increases towards an upper bound of approximately 1/2; for large Richardson numbers the spurious instability is faster than the Eady instability. To suppress the spurious instability it is recommended to use fourth-order centered tracer advection along with biharmonic viscosity and diffusion with coefficients (Δx)^4 f/(32√Ri) or larger where Δx is the grid scale. On the B grid, the growth rates of baroclinic and symmetric instabilities are too small, and converge upwards towards the correct values as the grid scale decreases; no spurious instabilities are observed. In B grid models at eddy-permitting resolution, the reduced growth rate of baroclinic instability may contribute to partially-resolved eddies being too weak. On the C grid the growth rate of symmetric instability is better (larger) than on the B grid, and converges upwards towards the correct value as the grid scale decreases.
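    The recommended biharmonic coefficient is a one-line formula; the sketch below evaluates it for illustrative mid-latitude values (f = 1e-4 s^-1, Ri = 1, both our choices, not values from the paper):

```python
import math

# Recommended C-grid biharmonic coefficient from the text:
# nu4 >= (dx)^4 * f / (32 * sqrt(Ri)), here in m^4/s for dx in metres.
def biharmonic_coefficient(dx, f=1e-4, Ri=1.0):
    return dx**4 * f / (32.0 * math.sqrt(Ri))

# Spurious-instability growth rate bound, sigma = c * f / sqrt(Ri),
# using the quoted large-Ri upper bound c ~ 1/2.
def spurious_growth_rate(f=1e-4, Ri=1.0, c=0.5):
    return c * f / math.sqrt(Ri)

print(biharmonic_coefficient(dx=1000.0))   # coefficient for a 1 km grid
print(spurious_growth_rate())              # e-folding rate in s^-1
```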

  7. On Muthen's Maximum Likelihood for Two-Level Covariance Structure Models

    ERIC Educational Resources Information Center

    Yuan, Ke-Hai; Hayashi, Kentaro

    2005-01-01

    Data in social and behavioral sciences are often hierarchically organized. Special statistical procedures that take into account the dependence of such observations have been developed. Among procedures for 2-level covariance structure analysis, Muthen's maximum likelihood (MUML) has the advantage of easier computation and faster convergence. When…

  8. Common Spatial Organization of Number and Emotional Expression: A Mental Magnitude Line

    ERIC Educational Resources Information Center

    Holmes, Kevin J.; Lourenco, Stella F.

    2011-01-01

    Converging behavioral and neural evidence suggests that numerical representations are mentally organized in left-to-right orientation. Here we show that this format of spatial organization extends to emotional expression. In Experiment 1, right-side responses became increasingly faster as number (represented by Arabic numerals) or happiness…

  9. Modeling and quantification of repolarization feature dependency on heart rate.

    PubMed

    Minchole, A; Zacur, E; Pueyo, E; Laguna, P

    2014-01-01

    This article is part of the Focus Theme of Methods of Information in Medicine on "Biosignal Interpretation: Advanced Methods for Studying Cardiovascular and Respiratory Systems". This work aims at providing an efficient method to estimate the parameters of a nonlinear model including memory, previously proposed to characterize rate adaptation of repolarization indices. The physiological restrictions on the model parameters have been included in the cost function in such a way that unconstrained optimization techniques, such as descent optimization methods, can be used for parameter estimation. The proposed method has been evaluated on electrocardiogram (ECG) recordings of healthy subjects performing a tilt test, where rate adaptation of QT and Tpeak-to-Tend (Tpe) intervals has been characterized. The proposed strategy results in an efficient methodology to characterize rate adaptation of repolarization features, improving the convergence time with respect to previous strategies. Moreover, the Tpe interval adapts faster to changes in heart rate than the QT interval. In this work an efficient estimation of the parameters of a model aimed at characterizing rate adaptation of repolarization features has been proposed. The Tpe interval has been shown to be rate related, with a shorter memory lag than the QT interval.
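    The memory component of such rate-adaptation models can be caricatured as a first-order exponential filter on the RR series followed by a static nonlinearity. This is a loose illustration of the general idea, not the paper's model or its constrained fit; all constants are invented:

```python
import numpy as np

# The "effective" RR seen by repolarization is a weighted average of
# recent RR intervals (first-order exponential memory); a QT-like
# interval then follows the filtered RR through a static Bazett-like
# nonlinearity. A step change in heart rate therefore shows up in QT
# only gradually, over a memory lag of ~1/(1 - lam) beats.
rr = np.concatenate([np.full(200, 1.0), np.full(400, 0.6)])  # HR step-up
lam = 0.98                            # memory factor (~50-beat lag)
rr_eff = np.empty_like(rr)
acc = rr[0]
for i, x in enumerate(rr):
    acc = lam * acc + (1 - lam) * x   # exponential memory
    rr_eff[i] = acc
qt = 0.40 * np.sqrt(rr_eff)           # static nonlinearity (seconds)
print(qt[0], qt[-1])                  # QT settles slowly at the new rate
```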

  10. Radiative opacities of iron using a difference algebraic converging method at temperatures near solar convection zone

    NASA Astrophysics Data System (ADS)

    Fan, Zhixiang; Sun, Weiguo; Zhang, Yi; Fu, Jia; Hu, Shide; Fan, Qunchao

    2018-03-01

    An interpolation method named difference algebraic converging method for opacity (DACMo) is proposed to study the opacities and transmissions of metal plasmas. The studies on iron plasmas at temperatures near the solar convection zone show that (1) the DACMo values reproduce most spectral structures and magnitudes of experimental opacities and transmissions. (2) The DACMo can be used to predict unknown opacities at other temperature Te' and density ρ' using the opacity constants obtained at ( Te , ρ). (3) The DACMo may predict reasonable opacities which may not be available experimentally but the least-squares (LS) method does not. (4) The computational speed of the DACMo is at least 10 times faster than that of the original difference converging method for opacity.

  11. Spiral bacterial foraging optimization method: Algorithm, evaluation and convergence analysis

    NASA Astrophysics Data System (ADS)

    Kasaiezadeh, Alireza; Khajepour, Amir; Waslander, Steven L.

    2014-04-01

    A biologically-inspired algorithm called Spiral Bacterial Foraging Optimization (SBFO) is investigated in this article. SBFO, previously proposed by the same authors, is a multi-agent, gradient-based algorithm that minimizes both the main objective function (local cost) and the distance between each agent and a temporary central point (global cost). A random jump is included normal to the connecting line of each agent to the central point, which produces a vortex around the temporary central point. This random jump is also suitable to cope with premature convergence, which is a feature of swarm-based optimization methods. The most important advantages of this algorithm are as follows: First, this algorithm involves a stochastic type of search with a deterministic convergence. Second, as gradient-based methods are employed, faster convergence is demonstrated over GA, DE, BFO, etc. Third, the algorithm can be implemented in a parallel fashion in order to decentralize large-scale computation. Fourth, the algorithm has a limited number of tunable parameters, and finally SBFO has a strong certainty of convergence which is rare in existing global optimization algorithms. A detailed convergence analysis of SBFO for continuously differentiable objective functions has also been investigated in this article.
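    The two cost terms and the normal random jump can be caricatured as follows. This is a loose sketch of our own with invented step sizes, not the authors' SBFO update:

```python
import numpy as np

# Each agent takes a gradient step on the objective (local cost), is
# attracted to the swarm's temporary central point (global cost), and
# receives a random kick normal to the line joining it to that centre,
# producing the vortex-like motion described above.
rng = np.random.default_rng(7)
f = lambda x: np.sum(x**2, axis=-1)            # 2-D sphere objective
grad = lambda x: 2 * x
agents = rng.uniform(-5, 5, size=(20, 2))

for _ in range(300):
    centre = agents.mean(axis=0)
    to_centre = centre - agents
    # 90-degree rotation of to_centre: direction normal to that line
    normal = np.stack([-to_centre[:, 1], to_centre[:, 0]], axis=1)
    agents += (-0.05 * grad(agents)                            # local cost
               + 0.05 * to_centre                              # global cost
               + 0.1 * rng.standard_normal((20, 1)) * normal)  # vortex jump

print(f(agents).mean())    # near 0: swarm has collapsed onto the minimum
```

Because the random kick scales with the distance to the centre, it dies out as the swarm contracts, which is one way such schemes reconcile stochastic search with deterministic convergence.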

  12. Approximate, computationally efficient online learning in Bayesian spiking neurons.

    PubMed

    Kuhlmann, Levin; Hauser-Raspe, Michael; Manton, Jonathan H; Grayden, David B; Tapson, Jonathan; van Schaik, André

    2014-03-01

    Bayesian spiking neurons (BSNs) provide a probabilistic interpretation of how neurons perform inference and learning. Online learning in BSNs typically involves parameter estimation based on maximum-likelihood expectation-maximization (ML-EM) which is computationally slow and limits the potential of studying networks of BSNs. An online learning algorithm, fast learning (FL), is presented that is more computationally efficient than the benchmark ML-EM for a fixed number of time steps as the number of inputs to a BSN increases (e.g., 16.5 times faster run times for 20 inputs). Although ML-EM appears to converge 2.0 to 3.6 times faster than FL, the computational cost of ML-EM means that ML-EM takes longer to simulate to convergence than FL. FL also provides reasonable convergence performance that is robust to initialization of parameter estimates that are far from the true parameter values. However, parameter estimation depends on the range of true parameter values. Nevertheless, for a physiologically meaningful range of parameter values, FL gives very good average estimation accuracy, despite its approximate nature. The FL algorithm therefore provides an efficient tool, complementary to ML-EM, for exploring BSN networks in more detail in order to better understand their biological relevance. Moreover, the simplicity of the FL algorithm means it can be easily implemented in neuromorphic VLSI such that one can take advantage of the energy-efficient spike coding of BSNs.

  13. Comparing generalized ensemble methods for sampling of systems with many degrees of freedom

    DOE PAGES

    Lincoff, James; Sasmal, Sukanya; Head-Gordon, Teresa

    2016-11-03

    Here, we compare two standard replica exchange methods using temperature and dielectric constant as the scaling variables for independent replicas against two new corresponding enhanced sampling methods based on non-equilibrium statistical cooling (temperature) or descreening (dielectric). We test the four methods on a rough 1D potential as well as for alanine dipeptide in water, for which their relatively small phase space allows for the ability to define quantitative convergence metrics. We show that both dielectric methods are inferior to the temperature enhanced sampling methods, and in turn show that temperature cool walking (TCW) systematically outperforms the standard temperature replica exchange (TREx) method. We extend our comparisons of the TCW and TREx methods to the 5 residue met-enkephalin peptide, in which we evaluate the Kullback-Leibler divergence metric to show that the rate of convergence between two independent trajectories is faster for TCW compared to TREx. Finally we apply the temperature methods to the 42 residue amyloid-β peptide in which we find non-negligible differences in the disordered ensemble using TCW compared to the standard TREx. All four methods have been made available as software through the OpenMM Omnia software consortium.

  15. A deep convolutional neural network to analyze position averaged convergent beam electron diffraction patterns.

    PubMed

    Xu, W; LeBeau, J M

    2018-05-01

    We establish a series of deep convolutional neural networks to automatically analyze position averaged convergent beam electron diffraction patterns. The networks first calibrate the zero-order disk size, center position, and rotation without the need for pretreating the data. With the aligned data, additional networks then measure the sample thickness and tilt. The performance of the network is explored as a function of a variety of variables including thickness, tilt, and dose. A methodology to explore the response of the neural network to various pattern features is also presented. Processing patterns at a rate of ∼0.1 s/pattern, the network is shown to be orders of magnitude faster than a brute force method while maintaining accuracy. The approach is thus suitable for automatically processing big, 4D STEM data. We also discuss the generality of the method to other materials/orientations as well as a hybrid approach that combines the features of the neural network with least squares fitting for even more robust analysis. The source code is available at https://github.com/subangstrom/DeepDiffraction. Copyright © 2018 Elsevier B.V. All rights reserved.

  16. Phase retrieval via incremental truncated amplitude flow algorithm

    NASA Astrophysics Data System (ADS)

    Zhang, Quanbing; Wang, Zhifa; Wang, Linjie; Cheng, Shichao

    2017-10-01

    This paper considers the phase retrieval problem of recovering an unknown signal from quadratic measurements. A phase retrieval algorithm based on Incremental Truncated Amplitude Flow (ITAF), which combines the ITWF and TAF algorithms, is proposed. The proposed ITAF algorithm enhances the initialization by performing both of the truncation methods used in ITWF and TAF, and improves the gradient stage by applying the incremental method of ITWF to the loop stage of TAF. Moreover, the original sampling vectors and measurements are preprocessed before initialization according to the variance of the sensing matrix. Simulation experiments verify the feasibility and validity of the proposed ITAF algorithm. The experimental results show that it obtains a higher success rate and faster convergence than competing algorithms. In particular, for noiseless random Gaussian signals, ITAF can accurately recover any real-valued signal from a number of magnitude measurements about 2.5 times the signal length, which is close to the theoretical limit (about 2 times the signal length). It usually converges to the optimal solution within 20 iterations, far fewer than state-of-the-art algorithms require.
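
    The core of amplitude-flow methods (spectral initialization followed by gradient refinement on the amplitude loss) can be sketched for the real-valued case. This is a plain amplitude-flow sketch without the truncation and incremental refinements that define ITAF; all parameter values are illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)
n, m = 16, 8 * 16                  # signal length and number of measurements
A = rng.normal(size=(m, n))        # Gaussian sensing matrix
x = rng.normal(size=n)             # unknown real signal
y = np.abs(A @ x)                  # magnitude-only measurements

# Spectral initialization: leading eigenvector of (1/m) A^T diag(y^2) A,
# scaled by the norm estimate sqrt(mean(y^2)).
B = (A.T * y ** 2) @ A / m
v = rng.normal(size=n)
for _ in range(200):               # power iteration
    v = B @ v
    v /= np.linalg.norm(v)
z = v * np.sqrt(np.mean(y ** 2))

# Plain amplitude-flow gradient refinement (no truncation/incremental steps).
mu = 0.5
for _ in range(500):
    r = A @ z
    z -= (mu / m) * A.T @ (r - y * np.sign(r))

# The signal is recoverable only up to a global sign flip.
err = min(np.linalg.norm(z - x), np.linalg.norm(z + x)) / np.linalg.norm(x)
```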

  17. Trophic specialization drives morphological evolution in sea snakes.

    PubMed

    Sherratt, Emma; Rasmussen, Arne R; Sanders, Kate L

    2018-03-01

    Viviparous sea snakes are the most rapidly speciating reptiles known, yet the ecological factors underlying this radiation are poorly understood. Here, we reconstructed dated trees for 75% of sea snake species and quantified body shape (forebody relative to hindbody girth), maximum body length and trophic diversity to examine how dietary specialization has influenced morphological diversification in this rapid radiation. We show that sea snake body shape and size are strongly correlated with the proportion of burrowing prey in the diet. Specialist predators of burrowing eels have convergently evolved a 'microcephalic' morphotype with dramatically reduced forebody relative to hindbody girth and intermediate body length. By comparison, snakes that predominantly feed on burrowing gobies are generally short-bodied and small-headed, but there is no evidence of convergent evolution. The eel specialists also exhibit faster rates of size and shape evolution compared to all other sea snakes, including those that feed on gobies. Our results suggest that trophic specialization to particular burrowing prey (eels) has invoked strong selective pressures that manifest as predictable and rapid morphological changes. Further studies are needed to examine the genetic and developmental mechanisms underlying these dramatic morphological changes and assess their role in sea snake speciation.

  18. Online selective kernel-based temporal difference learning.

    PubMed

    Chen, Xingguo; Gao, Yang; Wang, Ruili

    2013-12-01

    In this paper, an online selective kernel-based temporal difference (OSKTD) learning algorithm is proposed to deal with large scale and/or continuous reinforcement learning problems. OSKTD includes two online procedures: online sparsification and parameter updating for the selective kernel-based value function. A new sparsification method (i.e., a kernel distance-based online sparsification method) is proposed based on selective ensemble learning, which is computationally less complex than other sparsification methods. With the proposed sparsification method, the sparsified dictionary of samples is constructed online by checking if a sample needs to be added to the sparsified dictionary. In addition, based on local validity, a selective kernel-based value function is proposed to select the best samples from the sample dictionary for the selective kernel-based value function approximator. The parameters of the selective kernel-based value function are iteratively updated by using the temporal difference (TD) learning algorithm combined with the gradient descent technique. The complexity of the online sparsification procedure in the OSKTD algorithm is O(n). In addition, two typical experiments (Maze and Mountain Car) are used to compare with both traditional and up-to-date O(n) algorithms (GTD, GTD2, and TDC using the kernel-based value function), and the results demonstrate the effectiveness of our proposed algorithm. In the Maze problem, OSKTD converges to an optimal policy and converges faster than both traditional and up-to-date algorithms. In the Mountain Car problem, OSKTD converges, requires less computation time than other sparsification methods, reaches a better local optimum than the traditional algorithms, and converges much faster than the up-to-date algorithms. In addition, OSKTD reaches a final optimum competitive with the up-to-date algorithms.
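
    The kernel distance-based online sparsification idea admits a compact sketch: a sample joins the dictionary only if its feature-space distance to every stored sample exceeds a threshold. The Gaussian kernel and threshold below are illustrative assumptions, not the paper's settings:

```python
import numpy as np

def gaussian_kernel(a, b, sigma=1.0):
    return np.exp(-np.sum((a - b) ** 2) / (2.0 * sigma ** 2))

def online_sparsify(samples, threshold=0.5, sigma=1.0):
    """Kernel-distance-based online sparsification (illustrative): add a
    sample to the dictionary only if its squared feature-space distance
    ||phi(x) - phi(d)||^2 = k(x,x) + k(d,d) - 2 k(x,d) = 2 - 2 k(x,d)
    to every dictionary element exceeds the threshold."""
    dictionary = []
    for x in samples:
        if all(2.0 - 2.0 * gaussian_kernel(x, d, sigma) > threshold
               for d in dictionary):
            dictionary.append(x)
    return dictionary

rng = np.random.default_rng(0)
samples = rng.normal(size=(200, 2))
D = online_sparsify(samples)
```

    Because the dictionary is checked once per incoming sample, the per-step cost grows only with the dictionary size, which stays much smaller than the stream.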

  19. Faster Mass Spectrometry-based Protein Inference: Junction Trees are More Efficient than Sampling and Marginalization by Enumeration

    PubMed Central

    Serang, Oliver; Noble, William Stafford

    2012-01-01

    The problem of identifying the proteins in a complex mixture using tandem mass spectrometry can be framed as an inference problem on a graph that connects peptides to proteins. Several existing protein identification methods make use of statistical inference methods for graphical models, including expectation maximization, Markov chain Monte Carlo, and full marginalization coupled with approximation heuristics. We show that, for this problem, the majority of the cost of inference usually comes from a few highly connected subgraphs. Furthermore, we evaluate three different statistical inference methods using a common graphical model, and we demonstrate that junction tree inference substantially improves rates of convergence compared to existing methods. The python code used for this paper is available at http://noble.gs.washington.edu/proj/fido. PMID:22331862

  20. Highly Accurate Analytical Approximate Solution to a Nonlinear Pseudo-Oscillator

    NASA Astrophysics Data System (ADS)

    Wu, Baisheng; Liu, Weijia; Lim, C. W.

    2017-07-01

    A second-order Newton method is presented to construct analytical approximate solutions to a nonlinear pseudo-oscillator in which the restoring force is inversely proportional to the dependent variable. The nonlinear equation is first expressed in a specific form, and it is then solved in two steps, a predictor and a corrector step. In each step, the harmonic balance method is used in an appropriate manner to obtain a set of linear algebraic equations. With only one simple second-order Newton iteration step, a short, explicit, and highly accurate analytical approximate solution can be derived. The approximate solutions are valid for all amplitudes of the pseudo-oscillator. Furthermore, the method incorporates a second-order Taylor expansion in a natural way and achieves a significantly faster convergence rate.

  1. Generalized conjugate-gradient methods for the Navier-Stokes equations

    NASA Technical Reports Server (NTRS)

    Ajmani, Kumud; Ng, Wing-Fai; Liou, Meng-Sing

    1991-01-01

    A generalized conjugate-gradient method is used to solve the two-dimensional, compressible Navier-Stokes equations of fluid flow. The equations are discretized with an implicit, upwind finite-volume formulation. Preconditioning techniques are incorporated into the new solver to accelerate convergence of the overall iterative method. The superiority of the new solver is demonstrated by comparisons with a conventional line Gauss-Seidel Relaxation solver. Computational test results for transonic flow (trailing edge flow in a transonic turbine cascade) and hypersonic flow (M = 6.0 shock-on-shock phenomena on a cylindrical leading edge) are presented. When applied to the transonic cascade case, the new solver is 4.4 times faster in terms of number of iterations and 3.1 times faster in terms of CPU time than the Relaxation solver. For the hypersonic shock case, the new solver is 3.0 times faster in terms of number of iterations and 2.2 times faster in terms of CPU time than the Relaxation solver.
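
    The flavor of the comparison (a preconditioned Krylov solver versus classical relaxation) can be illustrated on a model symmetric positive-definite system. The sketch below is Jacobi-preconditioned conjugate gradient, not the upwind finite-volume Navier-Stokes solver itself:

```python
import numpy as np

def pcg(A, b, M_inv_diag, tol=1e-10, max_iter=500):
    """Conjugate gradient with a diagonal (Jacobi) preconditioner."""
    x = np.zeros_like(b)
    r = b - A @ x
    z = M_inv_diag * r          # apply the preconditioner
    p = z.copy()
    rz = r @ z
    for k in range(max_iter):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol:
            return x, k + 1
        z = M_inv_diag * r
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x, max_iter

# 1-D Poisson-like SPD test matrix
n = 50
A = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
b = np.ones(n)
x, iters = pcg(A, b, 1.0 / np.diag(A))
```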

  2. Superalgebraically convergent smoothly windowed lattice sums for doubly periodic Green functions in three-dimensional space

    PubMed Central

    Bruno, Oscar P.; Turc, Catalin; Venakides, Stephanos

    2016-01-01

    This work, part I in a two-part series, presents: (i) a simple and highly efficient algorithm for evaluation of quasi-periodic Green functions, as well as (ii) an associated boundary-integral equation method for the numerical solution of problems of scattering of waves by doubly periodic arrays of scatterers in three-dimensional space. Except for certain ‘Wood frequencies’ at which the quasi-periodic Green function ceases to exist, the proposed approach, which is based on smooth windowing functions, gives rise to tapered lattice sums which converge superalgebraically fast to the Green function—that is, faster than any power of the number of terms used. This is in sharp contrast to the extremely slow convergence exhibited by the lattice sums in the absence of smooth windowing. (The Wood-frequency problem is treated in part II.) This paper establishes rigorously the superalgebraic convergence of the windowed lattice sums. A variety of numerical results demonstrate the practical efficiency of the proposed approach. PMID:27493573
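
    A one-dimensional toy analogue conveys the windowing idea: tapering a slowly convergent alternating series with a smooth cutoff gives partial sums that converge far faster than abrupt truncation. This sketch (with an illustrative C-infinity window) is not the 3-D quasi-periodic Green function sum itself:

```python
import numpy as np

def window(t, t0=0.5):
    """Smooth cutoff: 1 for t <= t0, 0 for t >= 1, C-infinity in between."""
    w = np.zeros_like(t)
    w[t <= t0] = 1.0
    mid = (t > t0) & (t < 1.0)
    u = (t[mid] - t0) / (1.0 - t0)
    w[mid] = np.exp(2.0 * np.exp(-1.0 / u) / (u - 1.0))
    return w

N = 2000
n = np.arange(1, N + 1)
terms = (-1.0) ** (n + 1) / np.sqrt(n)   # slowly convergent alternating series

exact = 0.6048986434216303               # eta(1/2) = (1 - sqrt(2)) * zeta(1/2)
abrupt = terms.sum()                     # hard truncation at N
tapered = (terms * window(n / N)).sum()  # smoothly windowed partial sum

err_abrupt = abs(abrupt - exact)
err_tapered = abs(tapered - exact)
```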

  3. Intelligent Control of a Sensor-Actuator System via Kernelized Least-Squares Policy Iteration

    PubMed Central

    Liu, Bo; Chen, Sanfeng; Li, Shuai; Liang, Yongsheng

    2012-01-01

    In this paper a new framework, called Compressive Kernelized Reinforcement Learning (CKRL), for computing near-optimal policies in sequential decision making with uncertainty is proposed via incorporating non-adaptive, data-independent Random Projections and nonparametric Kernelized Least-squares Policy Iteration (KLSPI). Random Projections are a fast, non-adaptive dimensionality reduction framework in which high-dimensional data is projected onto a random lower-dimensional subspace via spherically random rotation and coordinate sampling. KLSPI introduces the kernel trick into the LSPI framework for Reinforcement Learning, often achieving faster convergence and providing automatic feature selection via various kernel sparsification approaches. In this approach, policies are computed in a low-dimensional subspace generated by projecting the high-dimensional features onto a set of random basis vectors. We first show how Random Projections constitute an efficient sparsification technique and how our method often converges faster than regular LSPI, while at lower computational costs. The theoretical foundation underlying this approach is a fast approximation of the Singular Value Decomposition (SVD). Finally, simulation results are exhibited on benchmark MDP domains, which confirm gains both in computation time and in performance in large feature spaces.
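
    The random-projection step can be sketched on its own: a spherically random matrix scaled by 1/sqrt(k) approximately preserves pairwise distances (the Johnson-Lindenstrauss effect). Dimensions below are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
n_points, d_high, d_low = 50, 1000, 200

X = rng.normal(size=(n_points, d_high))
# Random projection matrix, scaled so norms are preserved in expectation
P = rng.normal(size=(d_high, d_low)) / np.sqrt(d_low)
Y = X @ P

# Compare all pairwise distances before and after projection
ratios = []
for i in range(n_points):
    for j in range(i + 1, n_points):
        d_orig = np.linalg.norm(X[i] - X[j])
        d_proj = np.linalg.norm(Y[i] - Y[j])
        ratios.append(d_proj / d_orig)
ratios = np.array(ratios)
```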

  4. Degradation of phenanthrene by Burkholderia sp. C3: initial 1,2- and 3,4-dioxygenation and meta- and ortho-cleavage of naphthalene-1,2-diol.

    PubMed

    Seo, Jong-Su; Keum, Young-Soo; Hu, Yuting; Lee, Sung-Eun; Li, Qing X

    2007-02-01

    Burkholderia sp. C3 was isolated from a polycyclic aromatic hydrocarbon (PAH)-contaminated site in Hilo, Hawaii, USA, and studied for its degradation of phenanthrene as a sole carbon source. The initial 3,4-C dioxygenation was faster than 1,2-C dioxygenation in the first 3-day culture. However, 1-hydroxy-2-naphthoic acid derived from 3,4-C dioxygenation degraded much slower than 2-hydroxy-1-naphthoic acid derived from 1,2-C dioxygenation. Slow degradation of 1-hydroxy-2-naphthoic acid relative to 2-hydroxy-1-naphthoic acid may trigger 1,2-C dioxygenation faster after 3 days of culture. High concentrations of 5,6- and 7,8-benzocoumarins indicated that meta-cleavage was the major degradation mechanism of phenanthrene-1,2- and -3,4-diols. Separate cultures with 2-hydroxy-1-naphthoic acid and 1-hydroxy-2-naphthoic acid showed that the degradation rate of the former to naphthalene-1,2-diol was much faster than that of the latter. The two upper metabolic pathways of phenanthrene are converged into naphthalene-1,2-diol that is further metabolized to 2-carboxycinnamic acid and 2-hydroxybenzalpyruvic acid by ortho- and meta-cleavages, respectively. Transformation of naphthalene-1,2-diol to 2-carboxycinnamic acid by this strain represents the first observation of ortho-cleavage of two rings-PAH-diols by a Gram-negative species.

  5. Mitigation of crosstalk based on CSO-ICA in free space orbital angular momentum multiplexing systems

    NASA Astrophysics Data System (ADS)

    Xing, Dengke; Liu, Jianfei; Zeng, Xiangye; Lu, Jia; Yi, Ziyao

    2018-09-01

    Orbital angular momentum (OAM) multiplexing has attracted considerable attention and research in recent years because of its great spectral efficiency, and many free-space OAM systems have been demonstrated. However, due to atmospheric turbulence, the power of OAM beams diffuses to beams with neighboring topological charges, and inter-mode crosstalk emerges in these systems, rendering them unusable in severe cases. In this paper, we introduce independent component analysis (ICA), a popular method of signal separation, to mitigate inter-mode crosstalk effects; furthermore, to address the fixed iteration step size of the traditional ICA algorithm, we propose a joint algorithm, CSO-ICA, which improves the computation of the separation matrix by exploiting the fast convergence rate and high convergence precision of chicken swarm optimization (CSO). The optimal separation matrix is obtained by adjusting the step size according to the previous iteration in CSO-ICA. Simulation results indicate that the proposed algorithm performs well in inter-mode crosstalk mitigation: the optical signal-to-noise ratio (OSNR) requirement of the received signals (OAM+2, OAM+4, OAM+6, OAM+8) is reduced by about 3.2 dB at a bit error ratio (BER) of 3.8 × 10⁻³. Meanwhile, convergence is much faster than with the traditional ICA algorithm, reducing the number of iterations by about an order of magnitude.

  6. Evaluation of enhanced sampling provided by accelerated molecular dynamics with Hamiltonian replica exchange methods.

    PubMed

    Roe, Daniel R; Bergonzo, Christina; Cheatham, Thomas E

    2014-04-03

    Many problems studied via molecular dynamics require accurate estimates of various thermodynamic properties, such as the free energies of different states of a system, which in turn requires well-converged sampling of the ensemble of possible structures. Enhanced sampling techniques are often applied to provide faster convergence than is possible with traditional molecular dynamics simulations. Hamiltonian replica exchange molecular dynamics (H-REMD) is a particularly attractive method, as it allows the incorporation of a variety of enhanced sampling techniques through modifications to the various Hamiltonians. In this work, we study the enhanced sampling of the RNA tetranucleotide r(GACC) provided by H-REMD combined with accelerated molecular dynamics (aMD), where a boosting potential is applied to torsions, and compare this to the enhanced sampling provided by H-REMD in which torsion potential barrier heights are scaled down to lower force constants. We show that H-REMD and multidimensional REMD (M-REMD) combined with aMD does indeed enhance sampling for r(GACC), and that the addition of the temperature dimension in the M-REMD simulations is necessary to efficiently sample rare conformations. Interestingly, we find that the rate of convergence can be improved in a single H-REMD dimension by simply increasing the number of replicas from 8 to 24 without increasing the maximum level of bias. The results also indicate that factors beyond replica spacing, such as round trip times and time spent at each replica, must be considered in order to achieve optimal sampling efficiency.
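
    All of these replica exchange variants rest on a simple Metropolis swap criterion between neighboring replicas. A minimal two-replica, double-well sketch (illustrative temperatures and step sizes, not the H-REMD/aMD protocol of the paper):

```python
import numpy as np

def swap_prob(E_i, E_j, T_i, T_j):
    """Metropolis acceptance probability for exchanging configurations
    between replicas at temperatures T_i and T_j (k_B = 1)."""
    delta = (1.0 / T_i - 1.0 / T_j) * (E_i - E_j)
    return min(1.0, np.exp(delta))

def U(x):                      # 1-D double well with barrier height 1
    return (x * x - 1.0) ** 2

rng = np.random.default_rng(3)
T = [0.1, 2.0]                 # cold and hot replicas
x = np.array([-1.0, -1.0])     # both start in the left well
cold_trace = []
for step in range(50000):
    for r in range(2):         # Metropolis move within each replica
        prop = x[r] + rng.normal(0.0, 0.3)
        if rng.random() < np.exp(-(U(prop) - U(x[r])) / T[r]):
            x[r] = prop
    if step % 10 == 0:         # periodic swap attempt
        if rng.random() < swap_prob(U(x[0]), U(x[1]), T[0], T[1]):
            x[0], x[1] = x[1], x[0]
    cold_trace.append(x[0])
cold_trace = np.array(cold_trace)
```

    The cold replica alone would rarely cross the barrier at T = 0.1; the swaps let it inherit barrier crossings made by the hot replica.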

  7. Gauss Seidel-type methods for energy states of a multi-component Bose Einstein condensate

    NASA Astrophysics Data System (ADS)

    Chang, Shu-Ming; Lin, Wen-Wei; Shieh, Shih-Feng

    2005-01-01

    In this paper, we propose two iterative methods, a Jacobi-type iteration (JI) and a Gauss-Seidel-type iteration (GSI), for the computation of energy states of the time-independent vector Gross-Pitaevskii equation (VGPE) which describes a multi-component Bose-Einstein condensate (BEC). A discretization of the VGPE leads to a nonlinear algebraic eigenvalue problem (NAEP). We prove that the GSI method converges locally and linearly to a solution of the NAEP if and only if the associated minimized energy functional problem has a strictly local minimum. The GSI method can thus be used to compute ground states and positive bound states, as well as the corresponding energies of a multi-component BEC. Numerical experiments show that GSI converges much faster than JI, converging globally within 10-20 steps.
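
    The linear analogues of JI and GSI show why using freshly updated components helps: on a diagonally dominant model system, a Gauss-Seidel sweep converges in roughly half the iterations of Jacobi. This classical sketch is not the VGPE solver itself:

```python
import numpy as np

def jacobi(A, b, tol=1e-10, max_iter=10000):
    """Jacobi iteration: every component updated from the old iterate."""
    x = np.zeros_like(b)
    D = np.diag(A)
    R = A - np.diagflat(D)
    for k in range(max_iter):
        x_new = (b - R @ x) / D
        if np.linalg.norm(x_new - x) < tol:
            return x_new, k + 1
        x = x_new
    return x, max_iter

def gauss_seidel(A, b, tol=1e-10, max_iter=10000):
    """Gauss-Seidel: freshly updated components are used immediately."""
    x = np.zeros_like(b)
    n = len(b)
    for k in range(max_iter):
        x_old = x.copy()
        for i in range(n):
            s = A[i] @ x - A[i, i] * x[i]
            x[i] = (b[i] - s) / A[i, i]
        if np.linalg.norm(x - x_old) < tol:
            return x, k + 1
    return x, max_iter

n = 30
A = 4.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)  # diagonally dominant
b = np.ones(n)
x_j, iters_j = jacobi(A, b)
x_gs, iters_gs = gauss_seidel(A, b)
```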

  8. Optimal convolution SOR acceleration of waveform relaxation with application to semiconductor device simulation

    NASA Technical Reports Server (NTRS)

    Reichelt, Mark

    1993-01-01

    In this paper we describe a novel generalized SOR (successive overrelaxation) algorithm for accelerating the convergence of the dynamic iteration method known as waveform relaxation. A new convolution SOR algorithm is presented, along with a theorem for determining the optimal convolution SOR parameter. Both analytic and experimental results are given to demonstrate that the convergence of the convolution SOR algorithm is substantially faster than that of the more obvious frequency-independent waveform SOR algorithm. Finally, to demonstrate the general applicability of this new method, it is used to solve the differential-algebraic system generated by spatial discretization of the time-dependent semiconductor device equations.
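
    Classical (frequency-independent) SOR illustrates why the relaxation parameter matters so much: on a 1-D Laplace model problem, the optimal omega cuts the iteration count dramatically relative to plain Gauss-Seidel (omega = 1). The paper's convolution SOR generalizes exactly this parameter to a frequency-dependent kernel. A standard sketch:

```python
import numpy as np

def sor(A, b, omega, tol=1e-10, max_iter=20000):
    """Successive overrelaxation: a Gauss-Seidel sweep blended with the
    previous iterate through the relaxation parameter omega."""
    x = np.zeros_like(b)
    n = len(b)
    for k in range(max_iter):
        x_old = x.copy()
        for i in range(n):
            s = A[i] @ x - A[i, i] * x[i]
            x[i] = (1.0 - omega) * x[i] + omega * (b[i] - s) / A[i, i]
        if np.linalg.norm(x - x_old) < tol:
            return x, k + 1
    return x, max_iter

n = 50
A = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)   # 1-D Laplacian
b = np.ones(n)

# Optimal omega for this model problem: 2 / (1 + sin(pi / (n + 1)))
omega_opt = 2.0 / (1.0 + np.sin(np.pi / (n + 1)))
x1, iters_gs = sor(A, b, omega=1.0)        # omega = 1 is plain Gauss-Seidel
x2, iters_opt = sor(A, b, omega=omega_opt)
```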

  9. Fast sparse recovery and coherence factor weighting in optoacoustic tomography

    NASA Astrophysics Data System (ADS)

    He, Hailong; Prakash, Jaya; Buehler, Andreas; Ntziachristos, Vasilis

    2017-03-01

    Sparse recovery algorithms have shown great potential for reconstructing images from limited-view datasets in optoacoustic tomography, with the disadvantage of being computationally expensive. In this paper, we improve the fast-convergent Split Augmented Lagrangian Shrinkage Algorithm (SALSA) method based on a least-squares QR (LSQR) formulation to perform accelerated reconstructions. Further, a coherence factor is calculated to weight the final reconstruction result, which can further reduce artifacts arising in limited-view scenarios and acoustically heterogeneous media. Several phantom and biological experiments indicate that the accelerated SALSA method with coherence factor (ASALSA-CF) can provide improved reconstructions and much faster convergence compared to existing sparse recovery methods.
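
    The sparse-recovery objective that SALSA-type solvers accelerate can be illustrated with the simplest proximal method, ISTA (iterative soft thresholding), on a toy compressed-sensing problem. This is a baseline sketch, not the accelerated ASALSA-CF method; parameters are illustrative:

```python
import numpy as np

def soft_threshold(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def ista(A, y, lam, n_iter=5000):
    """Iterative soft thresholding for min_x 0.5||Ax - y||^2 + lam ||x||_1."""
    L = np.linalg.norm(A, 2) ** 2      # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x = soft_threshold(x - (A.T @ (A @ x - y)) / L, lam / L)
    return x

rng = np.random.default_rng(0)
n, m, k = 100, 40, 4                   # signal length, measurements, sparsity
A = rng.normal(size=(m, n)) / np.sqrt(m)
x_true = np.zeros(n)
support = rng.choice(n, size=k, replace=False)
x_true[support] = rng.normal(0.0, 3.0, size=k)   # a few large coefficients
y = A @ x_true                          # noiseless limited measurements
x_hat = ista(A, y, lam=0.01)
```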

  10. A pheromone-rate-based analysis on the convergence time of ACO algorithm.

    PubMed

    Huang, Han; Wu, Chun-Guo; Hao, Zhi-Feng

    2009-08-01

    Ant colony optimization (ACO) has been widely applied to solve combinatorial optimization problems in recent years. There are few studies, however, on its convergence time, which reflects how many iterations an ACO algorithm needs to converge to the optimal solution. Based on the absorbing Markov chain model, we analyze the ACO convergence time in this paper. First, we present a general result for the estimation of convergence time to reveal the relationship between convergence time and pheromone rate. This general result is then extended to a two-step analysis of the convergence time, which includes the following: 1) the number of iterations the pheromone rate needs to reach the objective value and 2) the convergence time calculated with the objective pheromone rate in expectation. Furthermore, four brief ACO algorithms are investigated by using the proposed theoretical results as case studies. Finally, the conclusion of the case studies, that the pheromone rate and its deviation determine the expected convergence time, is numerically verified with the experimental results of four one-ant ACO algorithms and four ten-ant ACO algorithms.
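
    The role of the pheromone rate can be seen in a deterministic two-edge sketch: with evaporation rate rho and a unit deposit only on the optimal edge, the optimal edge's pheromone share converges geometrically, so the iteration count to any threshold follows from a closed form. Parameters are illustrative:

```python
import numpy as np

rho = 0.1                       # pheromone evaporation rate
tau_opt, tau_sub = 1.0, 1.0     # equal initial pheromone on both edges

history = []
for t in range(200):
    tau_opt = (1.0 - rho) * tau_opt + 1.0   # deposit only on the optimal edge
    tau_sub = (1.0 - rho) * tau_sub         # suboptimal edge only evaporates
    history.append(tau_opt / (tau_opt + tau_sub))
share = np.array(history)       # probability mass on the optimal edge

# Closed form after t steps: tau_opt(t) = (1-rho)^t + (1 - (1-rho)^t) / rho
t = np.arange(1, 201)
tau_opt_exact = (1 - rho) ** t + (1 - (1 - rho) ** t) / rho
```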

  11. Strong diffusion formulation of Markov chain ensembles and its optimal weaker reductions

    NASA Astrophysics Data System (ADS)

    Güler, Marifi

    2017-10-01

    Two self-contained diffusion formulations, in the form of coupled stochastic differential equations, are developed for the temporal evolution of state densities over an ensemble of Markov chains evolving independently under a common transition rate matrix. Our first formulation derives from Kurtz's strong approximation theorem of density-dependent Markov jump processes [Stoch. Process. Their Appl. 6, 223 (1978), 10.1016/0304-4149(78)90020-0] and, therefore, strongly converges with an error bound of the order of ln N/N for ensemble size N. The second formulation eliminates some fluctuation variables, and correspondingly some noise terms, within the governing equations of the strong formulation, with the objective of achieving a simpler analytic formulation and a faster computation algorithm when the transition rates are constant or slowly varying. There, the reduction of the structural complexity is optimal in the sense that the elimination of any given set of variables takes place with the lowest attainable increase in the error bound. The resultant formulations are supported by numerical simulations.
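
    A minimal two-state version conveys the construction: the fraction of chains in one state follows a scalar SDE whose drift is the rate imbalance and whose noise amplitude shrinks like 1/sqrt(N). The Euler-Maruyama sketch below (illustrative rates, not the paper's formulation) can be checked against the exact stationary mean of the chain:

```python
import numpy as np

rng = np.random.default_rng(0)
a, b = 1.0, 2.0        # transition rates: state 1 -> 0 and state 0 -> 1
N = 10000              # ensemble size; fluctuations scale like 1/sqrt(N)
dt, steps = 0.01, 50000

x = 0.0                # fraction of chains in state 1
trace = np.empty(steps)
for k in range(steps):
    drift = b * (1.0 - x) - a * x
    diffusion = np.sqrt(max(b * (1.0 - x) + a * x, 0.0) / N)
    x += drift * dt + diffusion * np.sqrt(dt) * rng.normal()
    x = min(max(x, 0.0), 1.0)          # keep the density in [0, 1]
    trace[k] = x

stationary_mean = b / (a + b)          # exact value for the two-state chain
empirical_mean = trace[steps // 10:].mean()   # discard the transient
```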

  12. Detection of Foreign Matter in Transfusion Solution Based on Gaussian Background Modeling and an Optimized BP Neural Network

    PubMed Central

    Zhou, Fuqiang; Su, Zhen; Chai, Xinghua; Chen, Lipeng

    2014-01-01

    This paper proposes a new method to detect and identify foreign matter mixed in a plastic bottle filled with transfusion solution. A spin-stop mechanism and mixed illumination style are applied to obtain high contrast images between moving foreign matter and a static transfusion background. The Gaussian mixture model is used to model the complex background of the transfusion image and to extract moving objects. A set of features of moving objects are extracted and selected by the ReliefF algorithm, and optimal feature vectors are fed into the back propagation (BP) neural network to distinguish between foreign matter and bubbles. The mind evolutionary algorithm (MEA) is applied to optimize the connection weights and thresholds of the BP neural network to obtain a higher classification accuracy and faster convergence rate. Experimental results show that the proposed method can effectively detect visible foreign matter in 250-mL transfusion bottles. The misdetection rate and false alarm rate are low, and the detection accuracy and detection speed are satisfactory. PMID:25347581

  13. Gauge invariant spectral Cauchy characteristic extraction

    NASA Astrophysics Data System (ADS)

    Handmer, Casey J.; Szilágyi, Béla; Winicour, Jeffrey

    2015-12-01

    We present gauge invariant spectral Cauchy characteristic extraction. We compare gravitational waveforms extracted from a head-on black hole merger simulated in two different gauges by two different codes. We show rapid convergence, demonstrating both gauge invariance of the extraction algorithm and consistency between the legacy Pitt null code and the much faster spectral Einstein code (SpEC).

  14. Convergence of broad-scale migration strategies in terrestrial birds.

    PubMed

    La Sorte, Frank A; Fink, Daniel; Hochachka, Wesley M; Kelling, Steve

    2016-01-27

    Migration is a common strategy used by birds that breed in seasonal environments. Selection for greater migration efficiency is likely to be stronger for terrestrial species whose migration strategies require non-stop transoceanic crossings. If multiple species use the same transoceanic flyway, then we expect the migration strategies of these species to converge geographically towards the most optimal solution. We test this by examining population-level migration trajectories within the Western Hemisphere for 118 migratory species using occurrence information from eBird. Geographical convergence of migration strategies was evident within specific terrestrial regions where geomorphological features such as mountains or isthmuses constrained overland migration. Convergence was also evident for transoceanic migrants that crossed the Gulf of Mexico or Atlantic Ocean. Here, annual population-level movements were characterized by clockwise looped trajectories, which resulted in faster but more circuitous journeys in the spring and more direct journeys in the autumn. These findings suggest that the unique constraints and requirements associated with transoceanic migration have promoted the spatial convergence of migration strategies. The combination of seasonal atmospheric and environmental conditions that has facilitated the use of similar broad-scale migration strategies may be especially prone to disruption under climate and land-use change. © 2016 The Author(s).

  15. A parallel variable metric optimization algorithm

    NASA Technical Reports Server (NTRS)

    Straeter, T. A.

    1973-01-01

    An algorithm designed to exploit the parallel computing or vector streaming (pipeline) capabilities of computers is presented. When p is the degree of parallelism, one cycle of the parallel variable metric algorithm is defined as follows: first, the function and its gradient are computed in parallel at p different values of the independent variable; then the metric is modified by p rank-one corrections; and finally, a single univariate minimization is carried out in the Newton-like direction. Several properties of this algorithm are established. The convergence of the iterates to the solution is proved for a quadratic functional on a real separable Hilbert space. For a finite-dimensional space the convergence is in one cycle when p equals the dimension of the space. Results of numerical experiments indicate that the new algorithm will exploit parallel or pipeline computing capabilities to effect faster convergence than serial techniques.

  16. Modelling and finite-time stability analysis of psoriasis pathogenesis

    NASA Astrophysics Data System (ADS)

    Oza, Harshal B.; Pandey, Rakesh; Roper, Daniel; Al-Nuaimi, Yusur; Spurgeon, Sarah K.; Goodfellow, Marc

    2017-08-01

    A new systems model of psoriasis is presented and analysed from the perspective of control theory. Cytokines are treated as actuators to the plant model that govern the cell population under the reasonable assumption that cytokine dynamics are faster than the cell population dynamics. The analysis of various equilibria is undertaken based on singular perturbation theory. Finite-time stability and stabilisation have been studied in various engineering applications where the principal paradigm uses non-Lipschitz functions of the states. A comprehensive study of the finite-time stability properties of the proposed psoriasis dynamics is carried out. It is demonstrated that the dynamics are finite-time convergent to certain equilibrium points rather than asymptotically or exponentially convergent. This feature of finite-time convergence motivates the development of a modified version of the Michaelis-Menten function, frequently used in biology. This framework is used to model cytokines as fast finite-time actuators.
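
    The canonical non-Lipschitz example behind finite-time stability is x' = -k sign(x) |x|^alpha with 0 < alpha < 1, which reaches zero exactly at the settling time T = |x0|^(1-alpha) / (k (1 - alpha)) rather than decaying only asymptotically. A forward-Euler sketch with illustrative constants (for alpha = 1/2 and x0 = 1, the exact solution is x(t) = (1 - t/2)^2 until T = 2, then 0):

```python
import numpy as np

k, alpha = 1.0, 0.5
x0 = 1.0
T = abs(x0) ** (1.0 - alpha) / (k * (1.0 - alpha))   # settling time, = 2 here

dt = 1e-4
n_steps = 25000
xs = np.empty(n_steps + 1)
xs[0] = x0
x = x0
for step in range(n_steps):
    x += -k * np.sign(x) * abs(x) ** alpha * dt      # non-Lipschitz at x = 0
    xs[step + 1] = x

x_mid = xs[10000]      # t = 1.0, exact value (1 - 0.5)^2 = 0.25
x_after = xs[21000]    # t = 2.1 > T, should have settled at zero
```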

  17. Entropy and gravity concepts as new methodological indexes to investigate technological convergence: patent network-based approach.

    PubMed

    Cho, Yongrae; Kim, Minsung

    2014-01-01

    The volatility and uncertainty in the process of technological developments are growing faster than ever due to rapid technological innovations. Such phenomena result in integration among disparate technology fields. At this point, it is a critical research issue to understand the different roles and the propensity of each element technology for technological convergence. In particular, the network-based approach provides a holistic view in terms of technological linkage structures. Furthermore, the development of new indicators based on network visualization can reveal the dynamic patterns among disparate technologies in the process of technological convergence and provide insights for future technological developments. This research attempts to analyze and discover the patterns of the international patent classification codes of the United States Patent and Trademark Office's patent data in printed electronics, which is a representative technology in the technological convergence process. To this end, we apply physical concepts as a new methodological approach to interpreting technological convergence. More specifically, the concepts of entropy and gravity are applied to measure the activities among patent citations and the binding forces among heterogeneous technologies during technological convergence. By applying the entropy and gravity indexes, we could distinguish the characteristic role of each technology in printed electronics. At the technological convergence stage, each technology exhibits idiosyncratic dynamics which tend to decrease technological differences and heterogeneity. Furthermore, through nonlinear regression analysis, we have found the decreasing patterns of disparity over a given total period in the evolution of technological convergence. This research has discovered the specific role of each element technology field and has consequently identified the co-evolutionary patterns of technological convergence.
These new findings on the evolutionary patterns of technological convergence provide some implications for engineering and technology foresight research, as well as for corporate strategy and technology policy.

  18. Ozone levels in European and USA cities are increasing more than at rural sites, while peak values are decreasing.

    PubMed

    Paoletti, Elena; De Marco, Alessandra; Beddows, David C S; Harrison, Roy M; Manning, William J

    2014-09-01

    Ground-level ozone (O3) levels are usually lower in urban centers than nearby rural sites. To compare trends in O3 levels during the period 1990-2010, we obtained monitoring data from paired urban and rural sites from the European Environment Agency and the US Environmental Protection Agency. Ozone peaks decreased at both station types, with no significant differences between urban and rural stations. Ozone annual averages increased at both urban and rural sites, with a faster rate of increase for urban centers. The overall trend was for convergence between urban and rural O3 data. Ozone levels exceeded the criteria established for the protection of human and vegetation health at both urban and rural sites. Copyright © 2014 Elsevier Ltd. All rights reserved.

  19. Numerical simulation of three-dimensional transonic turbulent projectile aerodynamics by TVD schemes

    NASA Technical Reports Server (NTRS)

    Shiau, Nae-Haur; Hsu, Chen-Chi; Chyu, Wei-Jao

    1989-01-01

    The two-dimensional symmetric TVD scheme proposed by Yee has been extended to and investigated for three-dimensional thin-layer Navier-Stokes simulation of complex aerodynamic problems. An existing three-dimensional Navier-Stokes code based on the Beam-Warming algorithm is modified to provide an option of using the TVD algorithm, and the flow problem considered is a transonic turbulent flow past a projectile with sting at a ten-degree angle of attack. Numerical experiments conducted for three flow cases (free-stream Mach numbers of 0.91, 0.96, and 1.20) show that the symmetric TVD algorithm can provide surface pressure distributions in excellent agreement with measured data; moreover, the rate of convergence to attain a steady-state solution is about two times faster than that of the original Beam-Warming algorithm.

  20. Photovoltaic Inverter Controllers Seeking AC Optimal Power Flow Solutions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dall'Anese, Emiliano; Dhople, Sairaj V.; Giannakis, Georgios B.

    This paper considers future distribution networks featuring inverter-interfaced photovoltaic (PV) systems, and addresses the synthesis of feedback controllers that seek real- and reactive-power inverter setpoints corresponding to AC optimal power flow (OPF) solutions. The objective is to bridge the temporal gap between long-term system optimization and real-time inverter control, and enable seamless PV-owner participation without compromising system efficiency and stability. The design of the controllers is grounded on a dual ε-subgradient method, while semidefinite programming relaxations are advocated to bypass the non-convexity of AC OPF formulations. Global convergence of inverter output powers is analytically established for diminishing stepsize rules for cases where: i) computational limits dictate asynchronous updates of the controller signals, and ii) inverter reference inputs may be updated at a faster rate than the power-output settling time.
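The diminishing-stepsize dual update underlying such controllers can be sketched on a toy problem. Everything below is a generic projected dual subgradient iteration with an invented one-dimensional problem; it is not the paper's AC OPF formulation or its ε-subgradient variant.

```python
# Toy sketch of a projected dual subgradient method with a diminishing stepsize.
# Primal problem: minimize x^2 subject to g(x) = 1 - x <= 0 (optimum x* = 1).
# Lagrangian: L(x, lam) = x^2 + lam * (1 - x), minimized at x(lam) = lam / 2.

def dual_subgradient(iters=5000):
    lam = 0.0
    for k in range(1, iters + 1):
        x = lam / 2.0                 # primal minimizer of the Lagrangian
        g = 1.0 - x                   # subgradient of the dual function at lam
        lam = max(0.0, lam + g / k)   # stepsize 1/k, projected onto lam >= 0
    return lam / 2.0                  # primal iterate recovered from the multiplier

x_star = dual_subgradient()           # approaches the optimum x* = 1
```

The stepsize sequence 1/k is square-summable but not summable, the standard condition under which dual subgradient iterations converge; this is the sense in which the abstract's "diminishing stepsize rules" should be read.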

  1. Convergence analysis of two-node CMFD method for two-group neutron diffusion eigenvalue problem

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jeong, Yongjin; Park, Jinsu; Lee, Hyun Chul

    2015-12-01

    In this paper, the nonlinear coarse-mesh finite difference method with two-node local problem (CMFD2N) is proven to be unconditionally stable for neutron diffusion eigenvalue problems. The explicit current correction factor (CCF) is derived based on the two-node analytic nodal method (ANM2N), and a Fourier stability analysis is applied to the linearized algorithm. It is shown that the analytic convergence rate obtained by the Fourier analysis compares very well with the numerically measured convergence rate. It is also shown that the theoretical convergence rate is governed only by the converged second-harmonic buckling and the mesh size. It is also noted that the convergence rate of the CCF of the CMFD2N algorithm depends on the mesh size, but not on the total problem size, which is contrary to expectation for an eigenvalue problem. The novel points of this paper are the analytical derivation of the convergence rate of the CMFD2N algorithm for the eigenvalue problem, and the convergence analysis based on the analytic derivations.

  2. Cerebellar Purkinje Cells Generate Highly Correlated Spontaneous Slow-Rate Fluctuations.

    PubMed

    Cao, Ying; Liu, Yu; Jaeger, Dieter; Heck, Detlef H

    2017-01-01

    Cerebellar Purkinje cells (PC) fire action potentials at high, sustained rates. Changes in spike rate that last a few tens of milliseconds encode sensory and behavioral events. Here we investigated spontaneous fluctuations of PC simple spike rate at a slow time scale on the order of 1 s. Simultaneous recordings from pairs of PCs that were aligned either along the sagittal or transversal axis of the cerebellar cortex revealed that simple spike rate fluctuations at the 1 s time scale were highly correlated. Each pair of PCs had either a predominantly positive or negative slow-rate correlation, with negative correlations observed only in PC pairs aligned along the transversal axis. Slow-rate correlations were independent of faster rate changes that were correlated with fluid licking behavior. Simultaneous recordings from PCs and cerebellar nuclear (CN) neurons showed that slow-rate fluctuations in PC and CN activity were also highly correlated, but their correlations continually alternated between periods of positive and negative correlation. The functional significance of this new aspect of cerebellar spike activity remains to be determined. Correlated slow-rate fluctuations seem too slow to be involved in the real-time control of ongoing behavior. However, slow-rate fluctuations of PCs converging on the same CN neuron are likely to modulate the excitability of the CN neuron, thus introducing a possible slow modulation of cerebellar output activity.

  3. Smoothed Biasing Forces Yield Unbiased Free Energies with the Extended-System Adaptive Biasing Force Method

    PubMed Central

    2016-01-01

    We report a theoretical description and numerical tests of the extended-system adaptive biasing force method (eABF), together with an unbiased estimator of the free energy surface from eABF dynamics. Whereas the original ABF approach uses its running estimate of the free energy gradient as the adaptive biasing force, eABF is built on the idea that the exact free energy gradient is not necessary for efficient exploration, and that it is still possible to recover the exact free energy separately with an appropriate estimator. eABF does not directly bias the collective coordinates of interest, but rather fictitious variables that are harmonically coupled to them; therefore it does not require second derivative estimates, making it easily applicable to a wider range of problems than ABF. Furthermore, the extended variables present a smoother, coarse-grain-like sampling problem on a mollified free energy surface, leading to faster exploration and convergence. We also introduce CZAR, a simple, unbiased free energy estimator from eABF trajectories. eABF/CZAR converges to the physical free energy surface faster than standard ABF for a wide range of parameters. PMID:27959559

  4. Attitude Sensor and Gyro Calibration for Messenger

    NASA Technical Reports Server (NTRS)

    O'Shaughnessy, Daniel; Pittelkau, Mark E.

    2007-01-01

    The Redundant Inertial Measurement Unit Attitude Determination/Calibration (RADICAL(TM)) filter was used to estimate star tracker and gyro calibration parameters using MESSENGER telemetry data from three calibration events. An overview of the MESSENGER attitude sensors and their configuration is given, the calibration maneuvers are described, the results are compared with previous calibrations, and variations and trends in the estimated calibration parameters are examined. The warm restart and covariance bump features of the RADICAL(TM) filter were used to estimate calibration parameters from two disjoint telemetry streams. Results show that the calibration parameters converge faster, with much less transient variation during convergence, than when the filter is cold-started at the start of each telemetry stream.

  5. Research on WNN Modeling for Gold Price Forecasting Based on Improved Artificial Bee Colony Algorithm

    PubMed Central

    2014-01-01

    Gold price forecasting has been a hot issue in economics recently. In this work, wavelet neural network (WNN) combined with a novel artificial bee colony (ABC) algorithm is proposed for this gold price forecasting issue. In this improved algorithm, the conventional roulette selection strategy is discarded. Besides, the convergence statuses in a previous cycle of iteration are fully utilized as feedback messages to manipulate the searching intensity in a subsequent cycle. Experimental results confirm that this new algorithm converges faster than the conventional ABC when tested on some classical benchmark functions and is effective to improve modeling capacity of WNN regarding the gold price forecasting scheme. PMID:24744773

  6. Recognition of digital characteristics based new improved genetic algorithm

    NASA Astrophysics Data System (ADS)

    Wang, Meng; Xu, Guoqiang; Lin, Zihao

    2017-08-01

    In the field of digital signal processing, estimating the characteristics of signal modulation parameters is a significant research direction. Based on a detailed study of a new improved genetic algorithm, this paper determines a set of eigenvalues that captures the differences among digital signal modulations. First, these eigenvalues are taken as the initial gene pool; second, the gene pool is evolved through selection, crossover, and elimination; finally, a strategy of further enhanced competition and punishment is adopted to optimize the gene pool and ensure that each generation consists of high-quality genes. The simulation results show that this method offers global convergence, stability, and a faster convergence speed.
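The selection/crossover/elimination loop described in the abstract can be sketched generically. The following is a minimal textbook genetic algorithm with invented parameters; it does not reproduce the paper's competition-and-punishment strategy or its eigenvalue-based gene pool.

```python
import random

# Minimal genetic algorithm: elitist selection, one-point crossover, Gaussian
# mutation, and elimination of the worst individuals each generation.
# All parameters (population size, elite fraction, mutation scale) are
# illustrative choices, not values from the paper.

def evolve(fitness, dim=8, pop_size=30, generations=200, seed=0):
    rng = random.Random(seed)
    pop = [[rng.uniform(-1, 1) for _ in range(dim)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness)                 # lower fitness = better
        elite = pop[: pop_size // 3]          # the retained "gene pool"
        children = []
        while len(elite) + len(children) < pop_size:
            a, b = rng.sample(elite, 2)
            cut = rng.randrange(1, dim)
            child = a[:cut] + b[cut:]         # one-point crossover
            i = rng.randrange(dim)
            child[i] += rng.gauss(0, 0.1)     # small Gaussian mutation
            children.append(child)
        pop = elite + children                # worst individuals eliminated
    return min(pop, key=fitness)

best = evolve(lambda x: sum(v * v for v in x))  # minimize the sphere function
```

Keeping the elite unchanged each generation guarantees the best-so-far solution is never lost, which is one standard way such algorithms obtain global convergence in the limit.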

  7. Convergence in Patient-Therapist Therapeutic Alliance Ratings and Its Relation to Outcome in Chronic Depression Treatment

    PubMed Central

    Laws, Holly B.; Constantino, Michael J.; Sayer, Aline G.; Klein, Daniel N.; Kocsis, James H.; Manber, Rachel; Markowitz, John C.; Rothbaum, Barbara O.; Steidtmann, Dana; Thase, Michael E.; Arnow, Bruce A.

    2016-01-01

    Objective This study tested whether discrepancy between patients' and therapists' ratings of the therapeutic alliance, as well as convergence in their alliance ratings over time, predicted outcome in chronic depression treatment. Method Data derived from a controlled trial of partial or non-responders to open-label pharmacotherapy subsequently randomized to 12 weeks of algorithm-driven pharmacotherapy alone or pharmacotherapy plus psychotherapy (Kocsis et al., 2009). The current study focused on the psychotherapy conditions (N = 357). Dyadic multilevel modeling was used to assess alliance discrepancy and alliance convergence over time as predictors of two depression measures: one pharmacotherapist-rated (Quick Inventory of Depressive Symptoms-Clinician; QIDS-C), the other blind interviewer-rated (Hamilton Rating Scale for Depression; HAMD). Results Patients' and therapists' alliance ratings became more similar, or convergent, over the course of psychotherapy. Higher alliance convergence was associated with greater reductions in QIDS-C depression across psychotherapy. Alliance convergence was not significantly associated with declines in HAMD depression; however, greater alliance convergence was related to lower HAMD scores at 3-month follow-up. Conclusions The results partially support the hypothesis that increasing patient-therapist consensus on alliance quality during psychotherapy may improve treatment and longer-term outcomes. PMID:26829714

  8. TH-E-17A-06: Anatomical-Adaptive Compressed Sensing (AACS) Reconstruction for Thoracic 4-Dimensional Cone-Beam CT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shieh, C; Kipritidis, J; OBrien, R

    2014-06-15

    Purpose: The Feldkamp-Davis-Kress (FDK) algorithm currently used for clinical thoracic 4-dimensional (4D) cone-beam CT (CBCT) reconstruction suffers from noise and streaking artifacts due to projection under-sampling. Compressed sensing theory enables reconstruction of under-sampled datasets via total-variation (TV) minimization, but TV-minimization algorithms such as adaptive-steepest-descent-projection-onto-convex-sets (ASD-POCS) often converge slowly and are prone to over-smoothing anatomical details. These disadvantages can be overcome by incorporating general anatomical knowledge via anatomy segmentation. Based on this concept, we have developed an anatomical-adaptive compressed sensing (AACS) algorithm for thoracic 4D-CBCT reconstruction. Methods: AACS is based on the ASD-POCS framework, where each iteration consists of a TV-minimization step and a data fidelity constraint step. Prior to every AACS iteration, four major thoracic anatomical structures - soft tissue, lungs, bony anatomy, and pulmonary details - were segmented from the updated solution image. Based on the segmentation, an anatomical-adaptive weighting was applied to the TV-minimization step, so that TV-minimization was enhanced at noisy/streaky regions and suppressed at anatomical structures of interest. The image quality and convergence speed of AACS were compared to conventional ASD-POCS using an XCAT digital phantom and a patient scan. Results: For the XCAT phantom, the AACS image represented the ground truth better than the ASD-POCS image, giving a higher structural similarity index (0.93 vs. 0.84) and lower absolute difference (1.1x10^4 vs. 1.4x10^4). For the patient case, while both algorithms resulted in much less noise and streaking than FDK, the AACS image showed considerably better contrast and sharpness of the vessels, tumor, and fiducial marker than the ASD-POCS image. In addition, AACS converged over 50% faster than ASD-POCS in both cases.
Conclusions: The proposed AACS algorithm was shown to reconstruct thoracic 4D-CBCT images more accurately and with faster convergence compared to ASD-POCS. The superior image quality and rapid convergence make AACS promising for future clinical use.

  9. Evaluation of Enhanced Sampling Provided by Accelerated Molecular Dynamics with Hamiltonian Replica Exchange Methods

    PubMed Central

    2015-01-01

    Many problems studied via molecular dynamics require accurate estimates of various thermodynamic properties, such as the free energies of different states of a system, which in turn requires well-converged sampling of the ensemble of possible structures. Enhanced sampling techniques are often applied to provide faster convergence than is possible with traditional molecular dynamics simulations. Hamiltonian replica exchange molecular dynamics (H-REMD) is a particularly attractive method, as it allows the incorporation of a variety of enhanced sampling techniques through modifications to the various Hamiltonians. In this work, we study the enhanced sampling of the RNA tetranucleotide r(GACC) provided by H-REMD combined with accelerated molecular dynamics (aMD), where a boosting potential is applied to torsions, and compare this to the enhanced sampling provided by H-REMD in which torsion potential barrier heights are scaled down to lower force constants. We show that H-REMD and multidimensional REMD (M-REMD) combined with aMD does indeed enhance sampling for r(GACC), and that the addition of the temperature dimension in the M-REMD simulations is necessary to efficiently sample rare conformations. Interestingly, we find that the rate of convergence can be improved in a single H-REMD dimension by simply increasing the number of replicas from 8 to 24 without increasing the maximum level of bias. The results also indicate that factors beyond replica spacing, such as round trip times and time spent at each replica, must be considered in order to achieve optimal sampling efficiency. PMID:24625009

  10. Present-day stress field in subduction zones: Insights from 3D viscoelastic models and data

    NASA Astrophysics Data System (ADS)

    Petricca, Patrizio; Carminati, Eugenio

    2016-01-01

    3D viscoelastic finite element (FE) models were run to investigate the impact of geometry and kinematics on the lithospheric stress in convergent margins. Generic geometries were designed to resemble natural subduction zones. Our model predictions mirror the results of previous 2D models concerning the effects of lithosphere-mantle relative flow on stress regimes, and allow a better understanding of the lateral variability of the stress field. In particular, in both upper and lower plates, stress axes orientations depend on the adopted geometry and axes rotations occur following the trench shape. Generally stress axes are oriented perpendicular or parallel to the trench, with the exception of the slab lateral tips where rotations occur. Overall compression results in the upper plate when convergence rate is faster than mantle flow rate, suggesting a major role for convergence. In the slab, along-strike tension occurs at intermediate and deeper depths (> 100 km) in case of mantle flow sustaining the sinking lithosphere and slab convex geometry facing mantle flow or in case of opposing mantle flow and slab concave geometry facing mantle flow. Along-strike compression is predicted in case of sustaining mantle flow and concave slabs or in case of opposing mantle flow and convex slabs. The slab stress field is thus controlled by the direction of impact of mantle flow onto the slab and by slab longitudinal curvature. Slab pull produces not only tension in the bending region of subducted plate but also compression where upper and lower plates are coupled. A qualitative comparison between results and data in selected subductions indicates a good match for South America, Mariana and Tonga-Kermadec subductions. Discrepancies, as for Sumatra-Java, emerge due to missing geometric (e.g., occurrence of fault systems and local changes in the orientation of plate boundaries) and rheological (e.g., plasticity associated with slab bending, anisotropy) complexities in the models.

  11. The convergence rate of approximate solutions for nonlinear scalar conservation laws

    NASA Technical Reports Server (NTRS)

    Nessyahu, Haim; Tadmor, Eitan

    1991-01-01

    The convergence rate of approximate solutions for the nonlinear scalar conservation law is discussed. The linear convergence theory is extended into a weak regime. The extension is based on the usual two ingredients of stability and consistency. On the one hand, counterexamples show that one must strengthen the linearized L^2-stability requirement. It is assumed that the approximate solutions are Lip^+-stable in the sense that they satisfy a one-sided Lipschitz condition, in agreement with Oleinik's E-condition for the entropy solution. On the other hand, the lack of smoothness requires weakening the consistency requirement, which is measured in the Lip'-(semi)norm. It is proved for Lip^+-stable approximate solutions that their Lip'-convergence rate to the entropy solution is of the same order as their Lip'-consistency. The Lip'-convergence rate is then converted into stronger L^p convergence rate estimates.
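A standard statement of the Lip^+ (one-sided Lipschitz) stability condition referenced above is the following; the paper's exact constants and norm conventions are not reproduced here. A family of approximate solutions v is Lip^+-stable if

```latex
\|v(\cdot,t)\|_{\mathrm{Lip}^{+}}
:= \operatorname*{ess\,sup}_{x \neq y}
\left( \frac{v(x,t) - v(y,t)}{x - y} \right)^{\!+}
\le \mathrm{Const}, \qquad t \ge 0,
```

which bounds only the positive part of the difference quotients, so rarefactions are controlled while shocks (negative jumps) remain admissible, mirroring Oleinik's E-condition for the entropy solution.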

  12. Optical Oversampled Analog-to-Digital Conversion

    DTIC Science & Technology

    1992-06-29

    hologram weights and interconnects in the digital image halftoning configuration. First, no temporal error diffusion occurs in the digital image... halftoning error diffusion architecture as demonstrated by Equation (6.1). Equation (6.2) ensures that the hologram weights sum to one so that the exact... optimum halftone image should be faster. Similarly, decreased convergence time suggests that an error diffusion filter with larger spatial dimensions

  13. Regional Growth and income convergence in the western black belt counties of Alabama: evidence from census block group data

    Treesearch

    Buddhi Gyawali; Rory Fraser; James Bukenya; John Schelhas

    2010-01-01

    This paper examines the effects of growth in African American population, employment, and human capital on growth in per capita income at the census block group (CBG) level using ordinary least squares and spatial regression models. The results indicate the presence of conditional income convergence between 1980 and 2000, with poorer CBGs growing faster than the...

  14. A composite step conjugate gradients squared algorithm for solving nonsymmetric linear systems

    NASA Astrophysics Data System (ADS)

    Chan, Tony; Szeto, Tedd

    1994-03-01

    We propose a new and more stable variant of the CGS method [27] for solving nonsymmetric linear systems. The method is based on squaring the Composite Step BCG method, introduced recently by Bank and Chan [1,2], which itself is a stabilized variant of BCG in that it skips over steps for which the BCG iterate is not defined, a situation that causes one kind of breakdown in BCG. By doing this, we obtain a method (Composite Step CGS, or CSCGS) which not only handles the breakdowns described above, but does so with the advantages of CGS, namely, no multiplications by the transpose matrix and a faster convergence rate than BCG. Our strategy for deciding whether to skip a step does not involve any machine-dependent parameters and is designed to skip near-breakdowns as well as produce smoother iterates. Numerical experiments show that the new method does produce improved performance over CGS on practical problems.

  15. PAPR reduction based on tone reservation scheme for DCO-OFDM indoor visible light communications.

    PubMed

    Bai, Jurong; Li, Yong; Yi, Yang; Cheng, Wei; Du, Huimin

    2017-10-02

    High peak-to-average power ratio (PAPR) leads to out-of-band power and in-band distortion in direct current-biased optical orthogonal frequency division multiplexing (DCO-OFDM) systems. In order to effectively reduce the PAPR with faster convergence and lower complexity, this paper proposes a tone reservation based scheme that combines the signal-to-clipping noise ratio (SCR) procedure and the least squares approximation (LSA) procedure. In the proposed scheme, the transmitter of the DCO-OFDM indoor visible light communication (VLC) system is designed to transform the PAPR-reduced signal into a real-valued positive OFDM signal without doubling the transmission bandwidth. Moreover, the communication distance and the light emitting diode (LED) irradiance angle are taken into consideration in the evaluation of the system bit error rate (BER). The PAPR reduction efficiency of the proposed scheme is remarkable for DCO-OFDM indoor VLC systems.
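The PAPR metric being minimized can be illustrated with a short sketch using the generic definition PAPR = max|x|^2 / mean|x|^2 applied to one OFDM symbol. The symbol length and subcarrier values below are arbitrary assumptions, not parameters from the paper.

```python
import cmath
import math

# Compute the PAPR (in dB) of one OFDM symbol: take the time-domain samples via
# an inverse DFT of the subcarrier symbols, then form peak power / mean power.

def papr_db(symbols):
    n = len(symbols)
    x = [sum(s * cmath.exp(2j * cmath.pi * k * t / n)
             for k, s in enumerate(symbols)) / n
         for t in range(n)]
    power = [abs(v) ** 2 for v in x]
    return 10 * math.log10(max(power) / (sum(power) / n))

# Identical symbols on all subcarriers concentrate all energy in one time sample,
# giving the worst-case PAPR of 10*log10(n) dB for n subcarriers.
worst_case = papr_db([1.0] * 64)
```

Tone reservation schemes like the one in the abstract reduce this peak by adding a correction signal carried only on reserved subcarriers, so the data-bearing tones are left untouched.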

  16. A neural network controller for automated composite manufacturing

    NASA Technical Reports Server (NTRS)

    Lichtenwalner, Peter F.

    1994-01-01

    At McDonnell Douglas Aerospace (MDA), an artificial neural network based control system has been developed and implemented to control laser heating for the fiber placement composite manufacturing process. This neurocontroller learns an approximate inverse model of the process on-line to provide performance that improves with experience and exceeds that of conventional feedback control techniques. When untrained, the control system behaves as a proportional plus integral (PI) controller. However, after learning from experience, the neural network feedforward control module provides control signals that greatly improve temperature tracking performance. Faster convergence to new temperature set points and reduced temperature deviation due to changing feed rate have been demonstrated on the machine. A Cerebellar Model Articulation Controller (CMAC) network is used for inverse modeling because of its rapid learning performance. This control system is implemented in an IBM compatible 386 PC with an A/D board interface to the machine.
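The control structure described above, PI feedback augmented by a learned feedforward term, can be sketched on a scalar toy system. The first-order plant, the gains, and the simple gradient learner below are all invented for illustration; the paper uses a CMAC network, not this scalar weight.

```python
# PI feedback plus an adaptive feedforward term learned from the tracking error.
# Untrained (w = 0) the loop is a plain PI controller; as w adapts, the
# feedforward term supplies most of the control effort, as the abstract describes.

def run(setpoint=1.0, steps=200, kp=0.5, ki=0.1, lr=0.05):
    y, integ, w = 0.0, 0.0, 0.0         # plant output, integrator, ff weight
    for _ in range(steps):
        err = setpoint - y
        integ += err
        u_ff = w * setpoint             # feedforward: crude learned inverse model
        u = kp * err + ki * integ + u_ff
        y += 0.2 * (u - y)              # invented first-order plant dynamics
        w += lr * err * setpoint        # adapt the feedforward weight on-line
    return y

final = run()                           # settles at the setpoint
```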

  17. The edge- and basal-plane-specific electrochemistry of a single-layer graphene sheet

    PubMed Central

    Yuan, Wenjing; Zhou, Yu; Li, Yingru; Li, Chun; Peng, Hailin; Zhang, Jin; Liu, Zhongfan; Dai, Liming; Shi, Gaoquan

    2013-01-01

    Graphene has a unique atom-thick two-dimensional structure and excellent properties, making it attractive for a variety of electrochemical applications, including electrosynthesis, electrochemical sensors or electrocatalysis, and energy conversion and storage. However, the electrochemistry of single-layer graphene has not yet been well understood, possibly due to the technical difficulties in handling individual graphene sheets. Here, we report the electrochemical behavior at single-layer graphene-based electrodes, comparing the basal plane of graphene to its edge. The graphene edge showed 4 orders of magnitude higher specific capacitance, a much faster electron transfer rate, and stronger electrocatalytic activity than those of the graphene basal plane. A convergent diffusion effect was observed at the sub-nanometer-thick graphene edge-electrode to accelerate the electrochemical reactions. Coupled with the high conductivity of a high-quality graphene basal plane, the graphene edge is an ideal electrode for electrocatalysis and for the storage of capacitive charges. PMID:23896697

  18. Numerical analysis of spectral properties of coupled oscillator Schroedinger operators. I - Single and double well anharmonic oscillators

    NASA Technical Reports Server (NTRS)

    Isaacson, D.; Isaacson, E. L.; Paes-Leme, P. J.; Marchesin, D.

    1981-01-01

    Several methods for computing many eigenvalues and eigenfunctions of a single anharmonic oscillator Schroedinger operator whose potential may have one or two minima are described. One of the methods requires the solution of an ill-conditioned generalized eigenvalue problem. This method has the virtue of using a bounded amount of work to achieve a given accuracy in both the single and double well regions. Rigorous bounds are given, and it is proved that the approximations converge faster than any inverse power of the size of the matrices needed to compute them. The results of computations for the g:phi(4):1 theory are presented. These results indicate that the methods actually converge exponentially fast.

  19. Joint polarization tracking and channel equalization based on radius-directed linear Kalman filter

    NASA Astrophysics Data System (ADS)

    Zhang, Qun; Yang, Yanfu; Zhong, Kangping; Liu, Jie; Wu, Xiong; Yao, Yong

    2018-01-01

    We propose a joint polarization tracking and channel equalization scheme based on a radius-directed linear Kalman filter (RD-LKF) by introducing the butterfly finite-impulse-response (FIR) filter into our previously proposed RD-LKF method. Along with fast polarization tracking, it can also simultaneously compensate inter-symbol interference (ISI) effects, including residual chromatic dispersion and polarization mode dispersion. Compared with the conventional radius-directed equalizer (RDE) algorithm, it is demonstrated experimentally that three times faster convergence speed, an order of magnitude better tracking capability, and better BER performance are obtained in a polarization division multiplexing 16 quadrature amplitude modulation system. In addition, the influences of the algorithm parameters on the convergence and tracking performance are investigated by numerical simulation.

  20. Driven Metadynamics: Reconstructing Equilibrium Free Energies from Driven Adaptive-Bias Simulations

    PubMed Central

    2013-01-01

    We present a novel free-energy calculation method that constructively integrates two distinct classes of nonequilibrium sampling techniques, namely, driven (e.g., steered molecular dynamics) and adaptive-bias (e.g., metadynamics) methods. By employing nonequilibrium work relations, we design a biasing protocol with an explicitly time- and history-dependent bias that uses on-the-fly work measurements to gradually flatten the free-energy surface. The asymptotic convergence of the method is discussed, and several relations are derived for free-energy reconstruction and error estimation. Isomerization reaction of an atomistic polyproline peptide model is used to numerically illustrate the superior efficiency and faster convergence of the method compared with its adaptive-bias and driven components in isolation. PMID:23795244

  1. Acoustic echo cancellation for full-duplex voice transmission on fading channels

    NASA Technical Reports Server (NTRS)

    Park, Sangil; Messer, Dion D.

    1990-01-01

    This paper discusses the implementation of an adaptive acoustic echo canceler for a hands-free cellular phone operating on a fading channel. The adaptive lattice structure, known for faster convergence relative to the conventional tapped-delay-line (TDL) structure, is used in the initialization stage. After convergence, the lattice coefficients are converted into coefficients for the TDL structure, which can accommodate a larger number of taps in real-time operation due to its computational simplicity. The method for converting the lattice coefficients into TDL coefficients is derived, and the DSP56001 assembly code for the lattice and TDL structures is included, as well as simulation results and the schematic diagram for the hardware implementation.
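The TDL adaptation stage can be sketched with a standard normalized-LMS (NLMS) update identifying a short echo path. The lattice initialization and the lattice-to-TDL coefficient conversion from the paper are not reproduced; the echo path, stepsize, and signal model below are invented for illustration.

```python
import random

# Identify an unknown echo path with a tapped-delay-line filter adapted by NLMS.
# In an echo canceler, "echo" is the microphone signal and the far-end samples
# feed the delay line; here the echo is synthesized from a known toy path.

def nlms_echo_cancel(echo_path, n_samples=4000, mu=0.5, seed=1):
    rng = random.Random(seed)
    taps = len(echo_path)
    w = [0.0] * taps                    # TDL coefficients being adapted
    buf = [0.0] * taps                  # delay line of far-end samples
    for _ in range(n_samples):
        buf = [rng.uniform(-1, 1)] + buf[:-1]
        echo = sum(h * x for h, x in zip(echo_path, buf))
        est = sum(wi * x for wi, x in zip(w, buf))
        err = echo - est                # residual echo after cancellation
        norm = sum(x * x for x in buf) + 1e-8
        w = [wi + mu * err * x / norm for wi, x in zip(w, buf)]  # NLMS step
    return w

w = nlms_echo_cancel([0.5, -0.3, 0.1])  # converges to the toy echo path
```

The lattice structure mentioned in the abstract converges faster than this TDL/NLMS update when the input is strongly correlated (speech), which is why the paper uses it only for initialization and then switches to the cheaper TDL form.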

  2. Demultiplexing based on frequency-domain joint decision MMA for MDM system

    NASA Astrophysics Data System (ADS)

    Caili, Gong; Li, Li; Guijun, Hu

    2016-06-01

    In this paper, we propose a demultiplexing method based on a frequency-domain joint decision multi-modulus algorithm (FD-JDMMA) for mode division multiplexing (MDM) systems. The performance of FD-JDMMA is compared with the frequency-domain multi-modulus algorithm (FD-MMA) and the frequency-domain least mean square (FD-LMS) algorithm. The simulation results show that FD-JDMMA outperforms FD-MMA in terms of BER and convergence speed for mQAM (m = 4, 16 and 64) formats. It is also demonstrated that FD-JDMMA achieves better BER performance and converges faster than FD-LMS for 16QAM and 64QAM. Furthermore, FD-JDMMA maintains a computational complexity similar to that of both equalization algorithms.
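    For context, the core of a multi-modulus tap update can be written compactly; the following is a simplified, baud-spaced time-domain sketch with a joint-decision-style choice of reference ring (illustrative only, not the paper's frequency-domain implementation):

```python
import numpy as np

def mma_step(w, x, radii, mu):
    """One stochastic-gradient tap update of a multi-modulus equalizer.
    w: tap vector, x: input samples (same length), radii: reference moduli
    of the constellation rings, mu: step size."""
    y = np.vdot(w, x)                                # equalizer output y = w^H x
    R = radii[np.argmin(np.abs(np.abs(y) - radii))]  # decide the nearest ring
    e = y * (R**2 - np.abs(y)**2)                    # multi-modulus error
    return w + mu * np.conj(e) * x                   # LMS-style tap update
```

    When the output already sits on a reference ring the error vanishes and the taps are left unchanged.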

  3. An Autonomous Self-Aware and Adaptive Fault Tolerant Routing Technique for Wireless Sensor Networks

    PubMed Central

    Abba, Sani; Lee, Jeong-A

    2015-01-01

    We propose an autonomous self-aware and adaptive fault-tolerant routing technique (ASAART) for wireless sensor networks. We address the limitations of self-healing routing (SHR) and self-selective routing (SSR) techniques for routing sensor data. We also examine the integration of autonomic self-aware and adaptive fault detection and resiliency techniques for route formation and route repair to provide resilience to errors and failures. We achieved this by using a combined continuous and slotted prioritized transmission back-off delay to obtain local and global network state information, as well as multiple random functions for attaining faster routing convergence and reliable route repair despite transient and permanent node failure rates and efficient adaptation to instantaneous network topology changes. The results of simulations based on a comparison of the ASAART with the SHR and SSR protocols for five different simulated scenarios in the presence of transient and permanent node failure rates exhibit a greater resiliency to errors and failure and better routing performance in terms of the number of successfully delivered network packets, end-to-end delay, delivered MAC layer packets, packet error rate, as well as efficient energy conservation in a highly congested, faulty, and scalable sensor network. PMID:26295236

  4. An Autonomous Self-Aware and Adaptive Fault Tolerant Routing Technique for Wireless Sensor Networks.

    PubMed

    Abba, Sani; Lee, Jeong-A

    2015-08-18

    We propose an autonomous self-aware and adaptive fault-tolerant routing technique (ASAART) for wireless sensor networks. We address the limitations of self-healing routing (SHR) and self-selective routing (SSR) techniques for routing sensor data. We also examine the integration of autonomic self-aware and adaptive fault detection and resiliency techniques for route formation and route repair to provide resilience to errors and failures. We achieved this by using a combined continuous and slotted prioritized transmission back-off delay to obtain local and global network state information, as well as multiple random functions for attaining faster routing convergence and reliable route repair despite transient and permanent node failure rates and efficient adaptation to instantaneous network topology changes. The results of simulations based on a comparison of the ASAART with the SHR and SSR protocols for five different simulated scenarios in the presence of transient and permanent node failure rates exhibit a greater resiliency to errors and failure and better routing performance in terms of the number of successfully delivered network packets, end-to-end delay, delivered MAC layer packets, packet error rate, as well as efficient energy conservation in a highly congested, faulty, and scalable sensor network.

  5. Global sensitivity analysis for urban water quality modelling: Terminology, convergence and comparison of different methods

    NASA Astrophysics Data System (ADS)

    Vanrolleghem, Peter A.; Mannina, Giorgio; Cosenza, Alida; Neumann, Marc B.

    2015-03-01

    Sensitivity analysis represents an important step in improving the understanding and use of environmental models. Indeed, by means of global sensitivity analysis (GSA), modellers may identify both important (factor prioritisation) and non-influential (factor fixing) model factors. No general rule has yet been defined for verifying the convergence of GSA methods. In order to fill this gap, this paper presents a convergence analysis of three widely used GSA methods (SRC, Extended FAST and Morris screening) for an urban drainage stormwater quality-quantity model. After convergence was achieved, the results of each method were compared. In particular, a discussion of the peculiarities, applicability and reliability of the three methods is presented. Moreover, a graphical Venn-diagram-based classification scheme and a precise terminology for better identifying important, interacting and non-influential factors for each method are proposed. In terms of convergence, it was shown that sensitivity indices related to factors of the quantity model achieve convergence faster. Results for the Morris screening method deviated considerably from the other methods. Factors related to the quality model require a much higher number of simulations than suggested in the literature for achieving convergence with this method. In fact, the results have shown that the term "screening" is used improperly, as the method may exclude important factors from further analysis. Moreover, for the presented application the convergence analysis shows more stable sensitivity coefficients for the Extended-FAST method than for SRC and Morris screening. Substantial agreement in terms of factor fixing was found between the Morris screening and Extended FAST methods. In general, the water-quality-related factors exhibited more important interactions than factors related to water quantity. Furthermore, in contrast to water quantity model outputs, water quality model outputs were found to be characterised by high non-linearity.
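    As an illustration of the simplest of the three methods, standardized regression coefficients (SRC) are obtained by fitting a linear model to the Monte Carlo sample and standardizing the coefficients; a minimal sketch (not the authors' code):

```python
import numpy as np

def src_indices(X, y):
    """Standardized regression coefficients: fit a linear model to the
    sample (X, y) and scale each coefficient by std(x_j)/std(y).
    |SRC_j| ranks factor importance when the model behaves near-linearly."""
    A = np.column_stack([np.ones(len(y)), X])        # add intercept column
    beta = np.linalg.lstsq(A, y, rcond=None)[0][1:]  # drop the intercept
    return beta * X.std(axis=0) / y.std()
```

    For a purely linear model the squared SRC values sum to one, which is one common convergence check for this index.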

  6. Widespread neural oscillations in the delta band dissociate rule convergence from rule divergence during creative idea generation.

    PubMed

    Boot, Nathalie; Baas, Matthijs; Mühlfeld, Elisabeth; de Dreu, Carsten K W; van Gaal, Simon

    2017-09-01

    Critical to creative cognition and performance is both the generation of multiple alternative solutions in response to open-ended problems (divergent thinking) and a series of cognitive operations that converges on the correct or best possible answer (convergent thinking). Although the neural underpinnings of divergent and convergent thinking are still poorly understood, several electroencephalography (EEG) studies point to differences in alpha-band oscillations between these thinking modes. We reason that, because most previous studies employed typical block designs, these pioneering findings may mainly reflect the more sustained aspects of creative processes that extend over longer time periods, and that still much is unknown about the faster-acting neural mechanisms that dissociate divergent from convergent thinking during idea generation. To this end, we developed a new event-related paradigm, in which we measured participants' tendency to implicitly follow a rule set by examples, versus breaking that rule, during the generation of novel names for specific categories (e.g., pasta, planets). This approach allowed us to compare the oscillatory dynamics of rule convergent and rule divergent idea generation and at the same time enabled us to measure spontaneous switching between these thinking modes on a trial-to-trial basis. We found that, relative to more systematic, rule convergent thinking, rule divergent thinking was associated with widespread decreases in delta band activity. Therefore, this study contributes to advancing our understanding of the neural underpinnings of creativity by addressing some methodological challenges that neuroscientific creativity research faces. Copyright © 2017 Elsevier Ltd. All rights reserved.

  7. Irreversible Local Markov Chains with Rapid Convergence towards Equilibrium.

    PubMed

    Kapfer, Sebastian C; Krauth, Werner

    2017-12-15

    We study the continuous one-dimensional hard-sphere model and present irreversible local Markov chains that mix on faster time scales than the reversible heat bath or Metropolis algorithms. The mixing time scales appear to fall into two distinct universality classes, both faster than for reversible local Markov chains. The event-chain algorithm, the infinitesimal limit of one of these Markov chains, belongs to the class presenting the fastest decay. For the lattice-gas limit of the hard-sphere model, reversible local Markov chains correspond to the symmetric simple exclusion process (SEP) with periodic boundary conditions. The two universality classes for irreversible Markov chains are realized by the totally asymmetric SEP (TASEP), and by a faster variant (lifted TASEP) that we propose here. We discuss how our irreversible hard-sphere Markov chains generalize to arbitrary repulsive pair interactions and carry over to higher dimensions through the concept of lifted Markov chains and the recently introduced factorized Metropolis acceptance rule.
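    A minimal sketch of the irreversible lattice-gas move discussed above, the TASEP on a periodic lattice (illustrative, not the authors' implementation): a randomly chosen particle hops one site to the right iff the target site is empty.

```python
import random

def tasep_sweep(occ, rng=random):
    """One attempted move of the totally asymmetric simple exclusion process
    (TASEP) on a periodic lattice of 0/1 occupation numbers."""
    n = len(occ)
    i = rng.randrange(n)
    j = (i + 1) % n                  # periodic boundary conditions
    if occ[i] == 1 and occ[j] == 0:
        occ[i], occ[j] = 0, 1        # irreversible local move (always rightward)
    return occ
```

    The move conserves particle number and never proposes a leftward hop, which is what breaks detailed balance relative to the symmetric SEP.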

  8. Irreversible Local Markov Chains with Rapid Convergence towards Equilibrium

    NASA Astrophysics Data System (ADS)

    Kapfer, Sebastian C.; Krauth, Werner

    2017-12-01

    We study the continuous one-dimensional hard-sphere model and present irreversible local Markov chains that mix on faster time scales than the reversible heat bath or Metropolis algorithms. The mixing time scales appear to fall into two distinct universality classes, both faster than for reversible local Markov chains. The event-chain algorithm, the infinitesimal limit of one of these Markov chains, belongs to the class presenting the fastest decay. For the lattice-gas limit of the hard-sphere model, reversible local Markov chains correspond to the symmetric simple exclusion process (SEP) with periodic boundary conditions. The two universality classes for irreversible Markov chains are realized by the totally asymmetric SEP (TASEP), and by a faster variant (lifted TASEP) that we propose here. We discuss how our irreversible hard-sphere Markov chains generalize to arbitrary repulsive pair interactions and carry over to higher dimensions through the concept of lifted Markov chains and the recently introduced factorized Metropolis acceptance rule.

  9. Cosmic Microwave Background Mapmaking with a Messenger Field

    NASA Astrophysics Data System (ADS)

    Huffenberger, Kevin M.; Næss, Sigurd K.

    2018-01-01

    We apply a messenger field method to solve the linear minimum-variance mapmaking equation in the context of Cosmic Microwave Background (CMB) observations. In simulations, the method produces sky maps that converge significantly faster than those from a conjugate gradient descent algorithm with a diagonal preconditioner, even though the computational cost per iteration is similar. The messenger method recovers large scales in the map better than conjugate gradient descent, and yields a lower overall χ2. In the single, pencil beam approximation, each iteration of the messenger mapmaking procedure produces an unbiased map, and the iterations become more optimal as they proceed. A variant of the method can handle differential data or perform deconvolution mapmaking. The messenger method requires no preconditioner, but a high-quality solution needs a cooling parameter to control the convergence. We study the convergence properties of this new method and discuss how the algorithm is feasible for the large data sets of current and future CMB experiments.
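    To illustrate the idea, here is a toy messenger-field Wiener filter in which the signal covariance S and noise covariance N are both diagonal in the same basis (a deliberate simplification: in CMB mapmaking they are diagonal in different domains, which is the method's whole point):

```python
import numpy as np

def messenger_wiener(d, S, N, n_iter=200, lam=1.0):
    """Toy messenger-field Wiener filter. d: data, S, N: arrays of signal
    and noise variances. lam is the cooling parameter; lam = 1 targets the
    exact Wiener solution."""
    tau = N.min()                    # messenger covariance T = tau * I
    Nbar = lam * N - tau             # remaining noise after splitting off T
    s = np.zeros_like(d)
    for _ in range(n_iter):
        # the messenger field t mediates between the noise and signal bases
        t = (Nbar * s + tau * d) / (Nbar + tau)
        s = S * t / (S + tau)
    return s
```

    At the fixed point s equals the Wiener solution S/(S+N) d; in the real method the two update steps are applied in the pixel and harmonic domains respectively.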

  10. Entropy and Gravity Concepts as New Methodological Indexes to Investigate Technological Convergence: Patent Network-Based Approach

    PubMed Central

    Cho, Yongrae; Kim, Minsung

    2014-01-01

    The volatility and uncertainty in the process of technological developments are growing faster than ever due to rapid technological innovations. Such phenomena result in integration among disparate technology fields. At this point, it is a critical research issue to understand the different roles and the propensity of each element technology for technological convergence. In particular, the network-based approach provides a holistic view in terms of technological linkage structures. Furthermore, the development of new indicators based on network visualization can reveal the dynamic patterns among disparate technologies in the process of technological convergence and provide insights for future technological developments. This research attempts to analyze and discover the patterns of the international patent classification codes of the United States Patent and Trademark Office's patent data in printed electronics, which is a representative technology in the technological convergence process. To this end, we apply the physical idea as a new methodological approach to interpret technological convergence. More specifically, the concepts of entropy and gravity are applied to measure the activities among patent citations and the binding forces among heterogeneous technologies during technological convergence. By applying the entropy and gravity indexes, we could distinguish the characteristic role of each technology in printed electronics. At the technological convergence stage, each technology exhibits idiosyncratic dynamics which tend to decrease technological differences and heterogeneity. Furthermore, through nonlinear regression analysis, we have found the decreasing patterns of disparity over a given total period in the evolution of technological convergence. This research has discovered the specific role of each element technology field and has consequently identified the co-evolutionary patterns of technological convergence. These new findings on the evolutionary patterns of technological convergence provide some implications for engineering and technology foresight research, as well as for corporate strategy and technology policy. PMID:24914959

  11. An enhanced version of a bone-remodelling model based on the continuum damage mechanics theory.

    PubMed

    Mengoni, M; Ponthot, J P

    2015-01-01

    The purpose of this work was to propose an enhancement of Doblaré and García's internal bone remodelling model based on the continuum damage mechanics (CDM) theory. In their paper, they stated that the evolution of the internal variables of the bone microstructure, and its incidence on the modification of the elastic constitutive parameters, may be formulated following the principles of CDM, although no actual damage was considered. The resorption and apposition criteria (similar to the damage criterion) were expressed in terms of a mechanical stimulus. However, the resorption criterion lacks dimensional consistency with the remodelling rate. We propose here an enhancement to this resorption criterion, ensuring dimensional consistency while retaining the physical properties of the original remodelling model. We then analyse the change in the resorption criterion hypersurface in the stress space for a two-dimensional (2D) analysis. We finally apply the new formulation to analyse the structural evolution of a 2D femur. This analysis gives results consistent with the original model but with a faster and more stable convergence rate.

  12. Improved artificial bee colony algorithm for wavefront sensor-less system in free space optical communication

    NASA Astrophysics Data System (ADS)

    Niu, Chaojun; Han, Xiang'e.

    2015-10-01

    Adaptive optics (AO) technology is an effective way to alleviate the effect of turbulence on free space optical communication (FSO). A new adaptive compensation method can be used without a wavefront sensor. The artificial bee colony algorithm (ABC) is a population-based heuristic evolutionary algorithm inspired by the intelligent foraging behaviour of the honeybee swarm, with the advantages of simplicity, a good convergence rate, robustness, and few parameters to set. In this paper, we simulate the application of the improved ABC to correct the distorted wavefront and demonstrate its effectiveness. We then simulate the application of the ABC algorithm, the differential evolution (DE) algorithm and the stochastic parallel gradient descent (SPGD) algorithm to the FSO system and analyze their wavefront correction capabilities by comparing the coupling efficiency, the error rate and the intensity fluctuation in different turbulence conditions before and after correction. The results show that the ABC algorithm has a much faster correction speed than the DE algorithm and better correction capability for strong turbulence than the SPGD algorithm. Intensity fluctuation can be effectively reduced in strong turbulence, but less so in weak turbulence.
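    The SPGD baseline the authors compare against admits a compact sketch: perturb all control channels at once, measure the resulting change in the performance metric, and update all channels in parallel (the gain and perturbation amplitude below are illustrative, not from the paper):

```python
import numpy as np

def spgd_step(u, metric, gain=0.5, sigma=0.05, rng=np.random):
    """One stochastic parallel gradient descent (SPGD) update that ascends
    the scalar performance metric. u: control vector (e.g. DM voltages)."""
    du = sigma * rng.choice([-1.0, 1.0], size=len(u))  # bipolar perturbation
    dJ = metric(u + du) - metric(u - du)               # two-sided measurement
    return u + gain * dJ * du                          # parallel update

# maximizing a toy quadratic metric drives the controls toward the optimum at 1
rng = np.random.default_rng(0)
metric = lambda v: -np.sum((v - 1.0) ** 2)
u = np.zeros(2)
for _ in range(3000):
    u = spgd_step(u, metric, rng=rng)
```

    Only metric evaluations are needed, which is why SPGD (like ABC and DE) suits wavefront-sensor-less AO.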

  13. SOLAR MERIDIONAL FLOW IN THE SHALLOW INTERIOR DURING THE RISING PHASE OF CYCLE 24

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhao, Junwei; Bogart, R. S.; Kosovichev, A. G.

    2014-07-01

    Solar subsurface zonal- and meridional-flow profiles during the rising phase of solar cycle 24 are studied using the time-distance helioseismology technique. The faster zonal bands in the torsional-oscillation pattern show strong hemispheric asymmetries and temporal variations in both width and speed. The faster band in the northern hemisphere is located closer to the equator than the band in the southern hemisphere and migrates past the equator when the magnetic activity in the southern hemisphere is reaching maximum. The meridional-flow speed decreases substantially with the increase of magnetic activity, and the flow profile shows two zonal structures in each hemisphere. The residual meridional flow, after subtracting a mean meridional-flow profile, converges toward the activity belts and shows faster and slower bands like the torsional-oscillation pattern. More interestingly, the meridional-flow speed above latitude 30° shows an anti-correlation with the poleward-transporting magnetic flux, slower when the following-polarity flux is transported and faster when the leading-polarity flux is transported. It is expected that this phenomenon slows the process of magnetic cancellation and polarity reversal in high-latitude areas.

  14. A Least-Squares Commutator in the Iterative Subspace Method for Accelerating Self-Consistent Field Convergence.

    PubMed

    Li, Haichen; Yaron, David J

    2016-11-08

    A least-squares commutator in the iterative subspace (LCIIS) approach is explored for accelerating self-consistent field (SCF) calculations. LCIIS is similar to direct inversion of the iterative subspace (DIIS) methods in that the next iterate of the density matrix is obtained as a linear combination of past iterates. However, whereas DIIS methods find the linear combination by minimizing a sum of error vectors, LCIIS minimizes the Frobenius norm of the commutator between the density matrix and the Fock matrix. This minimization leads to a quartic problem that can be solved iteratively through a constrained Newton's method. The relationship between LCIIS and DIIS is discussed. Numerical experiments suggest that LCIIS leads to faster convergence than other SCF convergence accelerating methods in a statistically significant sense, and in a number of cases LCIIS leads to stable SCF solutions that are not found by other methods. The computational cost involved in solving the quartic minimization problem is small compared to the typical cost of SCF iterations and the approach is easily integrated into existing codes. LCIIS can therefore serve as a powerful addition to SCF convergence accelerating methods in computational quantum chemistry packages.
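    For comparison, the DIIS extrapolation that LCIIS builds on solves a small constrained least-squares system for the mixing coefficients; a minimal sketch:

```python
import numpy as np

def diis_coefficients(errors):
    """Pulay DIIS: given stored error vectors e_i, find coefficients c_i
    minimizing ||sum_i c_i e_i||^2 subject to sum_i c_i = 1, via the usual
    bordered linear system with a Lagrange multiplier."""
    n = len(errors)
    B = np.empty((n + 1, n + 1))
    for i, ei in enumerate(errors):
        for j, ej in enumerate(errors):
            B[i, j] = np.dot(ei, ej)         # overlaps of error vectors
    B[n, :n] = B[:n, n] = -1.0               # constraint rows/columns
    B[n, n] = 0.0
    rhs = np.zeros(n + 1)
    rhs[n] = -1.0
    return np.linalg.solve(B, rhs)[:n]
```

    The next density matrix is the same linear combination of the stored iterates; LCIIS instead minimizes the Frobenius norm of the density-Fock commutator, which leads to the quartic problem described above.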

  15. Compressed sensing with gradient total variation for low-dose CBCT reconstruction

    NASA Astrophysics Data System (ADS)

    Seo, Chang-Woo; Cha, Bo Kyung; Jeon, Seongchae; Huh, Young; Park, Justin C.; Lee, Byeonghun; Baek, Junghee; Kim, Eunyoung

    2015-06-01

    This paper describes the improvement of convergence speed with gradient total variation (GTV) in compressed sensing (CS) for low-dose cone-beam computed tomography (CBCT) reconstruction. We derive a fast algorithm for constrained total variation (TV)-based reconstruction from a minimum number of noisy projections. To achieve this, we combine the GTV with a TV-norm regularization term to promote sparsity in the X-ray attenuation characteristics of the human body. The GTV is derived from the TV and is computationally more efficient, converging faster to a desired solution. The numerical algorithm is simple and converges relatively quickly. We apply a gradient projection algorithm that seeks a solution iteratively in the direction of the projected gradient while enforcing non-negativity of the found solution. In comparison with the Feldkamp, Davis, and Kress (FDK) and conventional TV algorithms, the proposed GTV algorithm converged in ≤18 iterations, whereas the original TV algorithm needs at least 34 iterations, when reconstructing the chest phantom images from 50% fewer projections than the FDK algorithm. Future investigation includes improving imaging quality, particularly regarding X-ray cone-beam scatter, and motion artifacts of CBCT reconstruction.
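    The gradient-projection step described above, stripped to its essentials (an illustrative sketch, not the authors' implementation), moves along the negative gradient of the objective and then projects onto the feasible set of non-negative attenuation values:

```python
import numpy as np

def projected_gradient_step(x, grad, step):
    """One gradient-projection iteration: gradient descent on the
    data-fidelity-plus-TV objective followed by projection onto x >= 0."""
    return np.maximum(x - step * grad, 0.0)
```

    The projection onto the non-negative orthant is a simple componentwise clamp, which keeps each iteration cheap.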

  16. Naming Game with Multiple Hearers

    NASA Astrophysics Data System (ADS)

    Li, Bing; Chen, Guanrong; Chow, Tommy W. S.

    2013-05-01

    A new model called Naming Game with Multiple Hearers (NGMH) is proposed in this paper. A naming game over a population of individuals aims to reach consensus on the name of an object through pair-wise local interactions among all the individuals. The proposed NGMH model describes the learning process of a new word, in a population with one speaker and multiple hearers, at each interaction towards convergence. The characteristics of NGMH are examined on three types of network topologies, namely the ER random-graph network, the WS small-world network, and the BA scale-free network. Comparative analysis of the convergence time is performed, revealing that a topology with a larger average (node) degree can reach consensus faster than the others over the same population. It is found that, for a homogeneous network, the average degree is the limiting value of the number of hearers, which reduces the individual ability of learning new words, consequently decreasing the convergence time; for a scale-free network, this limiting value is the deviation of the average degree. It is also found that a network with a larger clustering coefficient takes a longer time to converge; in particular, a small-world network with the smallest rewiring probability takes the longest time to reach convergence. As more new nodes are added to scale-free networks with different degree distributions, their convergence time appears to be robust against the network-size variation. Most new findings reported in this paper differ from those of the single-speaker/single-hearer naming games documented in the literature.
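    A minimal sketch of one interaction with a single speaker and multiple hearers, under a simplified per-hearer success rule (illustrative; the paper's exact update may differ):

```python
import random

def ngmh_round(vocab, speaker, hearers, rng=random):
    """One naming-game interaction: the speaker utters a word; each hearer
    that already knows it collapses its inventory to that word, otherwise
    it learns the word. vocab maps node id -> list of known words."""
    if not vocab[speaker]:                       # invent a name if needed
        vocab[speaker] = ["w%d" % rng.randrange(10**9)]
    word = rng.choice(vocab[speaker])
    for h in hearers:
        if word in vocab[h]:
            vocab[h] = [word]                    # success: collapse inventory
        else:
            vocab[h].append(word)                # failure: learn the word
    return vocab
```

    Consensus is reached when every node's inventory has collapsed to the same single word.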

  17. Cogging effect minimization in PMSM position servo system using dual high-order periodic adaptive learning compensation.

    PubMed

    Luo, Ying; Chen, Yangquan; Pi, Youguo

    2010-10-01

    The cogging effect, which can be treated as a type of position-dependent periodic disturbance, is a serious disadvantage of the permanent magnet synchronous motor (PMSM). In this paper, based on a simulation model of PMSM position servo control, the cogging force, viscous friction, and applied load in the real PMSM control system are considered and presented. A dual high-order periodic adaptive learning compensation (DHO-PALC) method is proposed to minimize the cogging effect on the PMSM position and velocity servo system. In this DHO-PALC scheme, stored information from more than one previous period of both the composite tracking error and the estimate of the cogging force is used to update the control law. An asymptotic stability proof for the proposed DHO-PALC scheme is presented. Simulation is implemented on the PMSM servo system model to illustrate the proposed method. When a constant speed reference is applied, the DHO-PALC achieves a faster learning convergence speed than the first-order periodic adaptive learning compensation (FO-PALC). Moreover, when the designed reference signal changes periodically, the proposed DHO-PALC obtains not only a faster convergence speed but also a much smaller final error bound than the FO-PALC. Copyright © 2010 ISA. Published by Elsevier Ltd. All rights reserved.

  18. Iterative integral parameter identification of a respiratory mechanics model.

    PubMed

    Schranz, Christoph; Docherty, Paul D; Chiew, Yeong Shiong; Möller, Knut; Chase, J Geoffrey

    2012-07-18

    Patient-specific respiratory mechanics models can support the evaluation of optimal lung protective ventilator settings during ventilation therapy. Clinical application requires that the individual's model parameter values must be identified with information available at the bedside. Multiple linear regression or gradient-based parameter identification methods are highly sensitive to noise and initial parameter estimates. Thus, they are difficult to apply at the bedside to support therapeutic decisions. An iterative integral parameter identification method is applied to a second order respiratory mechanics model. The method is compared to the commonly used regression methods and error-mapping approaches using simulated and clinical data. The clinical potential of the method was evaluated on data from 13 Acute Respiratory Distress Syndrome (ARDS) patients. The iterative integral method converged to error minima 350 times faster than the Simplex Search Method using simulation data sets and 50 times faster using clinical data sets. Established regression methods reported erroneous results due to sensitivity to noise. In contrast, the iterative integral method was effective independent of initial parameter estimations, and converged successfully in each case tested. These investigations reveal that the iterative integral method is beneficial with respect to computing time, operator independence and robustness, and thus applicable at the bedside for this clinical application.

  19. Natural learning in NLDA networks.

    PubMed

    González, Ana; Dorronsoro, José R

    2007-07-01

    Non Linear Discriminant Analysis (NLDA) networks combine a standard Multilayer Perceptron (MLP) transfer function with the minimization of a Fisher analysis criterion. In this work we define natural-like gradients for NLDA network training. Instead of a more principled approach, which would require the definition of an appropriate Riemannian structure on the NLDA weight space, we follow a simpler procedure, based on the observation that the gradient of the NLDA criterion function J can be written as the expectation ∇J(W) = E[Z(X, W)] of a certain random vector Z, and then defining I = E[Z(X, W) Z(X, W)^T] as the Fisher information matrix in this case. This definition of I formally coincides with that of the information matrix for the MLP or other square error functions; the NLDA J criterion, however, does not have this structure. Although very simple, the proposed approach shows much faster convergence than standard gradient descent, even when its higher per-iteration cost is taken into account. While the faster convergence of natural MLP batch training can also be explained in terms of its relationship with the Gauss-Newton minimization method, this is not the case for NLDA training, as we show analytically and numerically that the Hessian and information matrices are different.
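    Using the quantities defined in the abstract, ∇J(W) = E[Z] and I = E[Z Z^T], a natural-gradient-style step can be sketched as follows (the regularizer eps is an assumption added for numerical stability, not from the paper):

```python
import numpy as np

def natural_gradient_step(w, Z, lr=0.1, eps=1e-6):
    """Natural-gradient-style update: precondition the mean gradient by the
    (regularized) Fisher-like matrix. Z: (n_samples, n_params) array whose
    rows are the per-sample gradient vectors Z(X, W)."""
    g = Z.mean(axis=0)                           # estimate of grad J(W) = E[Z]
    I = Z.T @ Z / len(Z)                         # estimate of E[Z Z^T]
    step = np.linalg.solve(I + eps * np.eye(len(g)), g)
    return w - lr * step
```

    Solving against I rather than inverting it explicitly is the standard, better-conditioned way to apply the preconditioner.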

  20. Performance improvement of robots using a learning control scheme

    NASA Technical Reports Server (NTRS)

    Krishna, Ramuhalli; Chiang, Pen-Tai; Yang, Jackson C. S.

    1987-01-01

    Many applications of robots require that the same task be repeated a number of times. In such applications, the errors associated with one cycle are also repeated every cycle of the operation. An off-line learning control scheme is used here to modify the command function which would result in smaller errors in the next operation. The learning scheme is based on a knowledge of the errors and error rates associated with each cycle. Necessary conditions for the iterative scheme to converge to zero errors are derived analytically considering a second order servosystem model. Computer simulations show that the errors are reduced at a faster rate if the error rate is included in the iteration scheme. The results also indicate that the scheme may increase the magnitude of errors if the rate information is not included in the iteration scheme. Modification of the command input using a phase and gain adjustment is also proposed to reduce the errors with one attempt. The scheme is then applied to a computer model of a robot system similar to PUMA 560. Improved performance of the robot is shown by considering various cases of trajectory tracing. The scheme can be successfully used to improve the performance of actual robots within the limitations of the repeatability and noise characteristics of the robot.
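    The off-line update described above, using both the error and the error rate of the previous cycle, is a PD-type iterative learning control law; a minimal sketch with illustrative gains (kp, kd are assumptions, not values from the paper):

```python
def ilc_update(u, e, dt, kp=0.5, kd=0.05):
    """PD-type iterative learning control: the command for the next cycle
    is the current command corrected by this cycle's sampled tracking
    error e and its finite-difference rate."""
    e_rate = [(e[i + 1] - e[i]) / dt for i in range(len(e) - 1)] + [0.0]
    return [u[i] + kp * e[i] + kd * e_rate[i] for i in range(len(u))]
```

    Dropping the kd term recovers the pure error-based iteration that, as the simulations indicate, converges more slowly and may even diverge.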

  1. A novel constructive-optimizer neural network for the traveling salesman problem.

    PubMed

    Saadatmand-Tarzjan, Mahdi; Khademi, Morteza; Akbarzadeh-T, Mohammad-R; Moghaddam, Hamid Abrishami

    2007-08-01

    In this paper, a novel constructive-optimizer neural network (CONN) is proposed for the traveling salesman problem (TSP). CONN uses a feedback structure similar to Hopfield-type neural networks and a competitive training algorithm similar to Kohonen-type self-organizing maps (K-SOMs). Consequently, CONN is composed of a constructive part, which grows the tour, and an optimizer part, which optimizes it. In the training algorithm, an initial tour is created first and introduced to CONN. Then, it is trained in the constructive phase, adding a number of cities to the tour. Next, the training algorithm switches to the optimizer phase, optimizing the current tour by displacing tour cities. After convergence in this phase, the training algorithm switches to the constructive phase anew, and this continues until all cities are added to the tour. Furthermore, we investigate a relationship between the number of TSP cities and the number of cities to be added in each constructive phase. CONN was tested on nine sets of benchmark TSPs from TSPLIB to demonstrate its performance and efficiency. It performed better than several typical neural networks (NNs), including KNIES_TSP_Local, KNIES_TSP_Global, Budinich's SOM, Co-Adaptive Net, and the multivalued Hopfield network, as well as computationally comparable variants of the simulated annealing algorithm, in terms of both CPU time and accuracy. Furthermore, CONN converged considerably faster than expanding SOM and evolved integrated SOM, and generated shorter tours compared to KNIES_DECOMPOSE. Although CONN is not yet comparable in terms of accuracy with some sophisticated, computationally intensive algorithms, it converges significantly faster than they do. Generally speaking, CONN provides the best compromise between CPU time and accuracy among currently reported NNs for the TSP.

  2. Implicit solution of Navier-Stokes equations on staggered curvilinear grids using a Newton-Krylov method with a novel analytical Jacobian.

    NASA Astrophysics Data System (ADS)

    Borazjani, Iman; Asgharzadeh, Hafez

    2015-11-01

    Flow simulations involving complex geometries and moving boundaries suffer from time-step size restrictions and low convergence rates with explicit and semi-implicit schemes. Implicit schemes can be used to overcome these restrictions, but implementing an implicit solver for nonlinear equations such as the Navier-Stokes equations is not straightforward. Newton-Krylov subspace methods (NKMs) are among the most advanced iterative methods for solving nonlinear equations such as the implicit discretization of the Navier-Stokes equations. The efficiency of NKMs depends heavily on the Jacobian formation method; e.g., automatic differentiation is very expensive, and matrix-free methods slow down as the mesh is refined. The analytical Jacobian is an inexpensive alternative, but deriving the analytical Jacobian for the Navier-Stokes equations on a staggered grid is challenging. An NKM with a novel analytical Jacobian was developed and validated against the Taylor-Green vortex and pulsatile flow in a 90 degree bend. The developed method successfully handled complex geometries such as an intracranial aneurysm with multiple overset grids and immersed boundaries. It is shown that the NKM with an analytical Jacobian is 3 to 25 times faster than the fixed-point implicit Runge-Kutta method, and more than 100 times faster than automatic differentiation, depending on the grid (size) and the flow problem. The developed methods are fully parallelized, with a parallel efficiency of 80-90% on the problems tested.
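    At the heart of any Newton-type solver is the linear solve against the Jacobian of the residual; a dense toy version is sketched below (Newton-Krylov methods replace the direct solve with a Krylov iteration, and the paper's analytical Jacobian is for the staggered-grid Navier-Stokes discretization, not this toy system):

```python
import numpy as np

def newton_solve(F, J, x, tol=1e-12, max_iter=50):
    """Newton iteration with an analytical Jacobian J(x) for the nonlinear
    system F(x) = 0 (dense toy version with a direct linear solve)."""
    for _ in range(max_iter):
        r = F(x)
        if np.linalg.norm(r) < tol:
            break
        x = x - np.linalg.solve(J(x), r)     # Newton step: J dx = -F
    return x
```

    For instance, solving x^3 + x - 1 = 0 componentwise converges quadratically to the root near 0.6823.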

  3. Measuring the global information society - explaining digital inequality by economic level and education standard

    NASA Astrophysics Data System (ADS)

    Ünver, H.

    2017-02-01

    A main focus of this research paper is to explain the ‘digital inequality’ or ‘digital divide’ by the economic level and education standard of about 150 countries worldwide. Inequality in GDP per capita, literacy and the so-called UN Education Index appears to be an important factor affecting ICT usage, in particular Internet penetration, mobile phone usage and mobile Internet services. Empirical methods and (multivariate) regression analysis with linear and non-linear functions are useful for measuring some crucial factors in a country's or culture's progress towards an information- and knowledge-based society. Overall, the study concludes that convergence in ICT usage is proceeding worldwide faster than convergence in economic wealth and education in general. The results, based on a large data analysis, show that the digital divide declined over more than a decade between 2000 and 2013, as more people worldwide came to use mobile phones and the Internet. But a high degree of digital inequality, explained to a significant extent by the functional relation between technology penetration rates, education level and average income, still exists. Furthermore, the study supports the actions of countries at UN/G20/OECD level to provide ICT access to all people for a more balanced world in the context of sustainable development, postulating that policymakers need to promote comprehensive education worldwide by means of ICT.

  4. Three-dimensional full waveform inversion of short-period teleseismic wavefields based upon the SEM-DSM hybrid method

    NASA Astrophysics Data System (ADS)

    Monteiller, Vadim; Chevrot, Sébastien; Komatitsch, Dimitri; Wang, Yi

    2015-08-01

    We present a method for high-resolution imaging of lithospheric structures based on full waveform inversion of teleseismic waveforms. We model the propagation of seismic waves using our recently developed direct solution method/spectral-element method hybrid technique, which allows us to simulate the propagation of short-period teleseismic waves through a regional 3-D model. We implement an iterative quasi-Newton method based upon the L-BFGS algorithm, where the gradient of the misfit function is computed using the adjoint-state method. Compared to gradient or conjugate-gradient methods, the L-BFGS algorithm has a much faster convergence rate. We illustrate the potential of this method on a synthetic test case that consists of a crustal model with a crustal discontinuity at 25 km depth and a sharp Moho jump. This model contains short- and long-wavelength heterogeneities along the lateral and vertical directions. The iterative inversion starts from a smooth 1-D model derived from the IASP91 reference Earth model. We invert both radial and vertical component waveforms, starting from long-period signals filtered at 10 s and gradually decreasing the cut-off period down to 1.25 s. This multiscale algorithm quickly converges towards a model that is very close to the true model, in contrast to inversions involving short-period waveforms only, which always get trapped into a local minimum of the cost function.
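    The claimed advantage of L-BFGS over gradient and conjugate-gradient methods can be illustrated with a small, self-contained experiment (unrelated to the seismic inversion code itself) using SciPy's optimizers on the Rosenbrock test function:

```python
# A small demonstration (assumed setup, not the waveform-inversion code):
# L-BFGS builds a low-memory curvature approximation from gradient history,
# so it typically needs fewer iterations than a first-order method.
import numpy as np
from scipy.optimize import minimize, rosen, rosen_der

x0 = np.array([-1.2, 1.0])                     # standard Rosenbrock start
res_lbfgs = minimize(rosen, x0, jac=rosen_der, method="L-BFGS-B")
res_cg = minimize(rosen, x0, jac=rosen_der, method="CG")
print(res_lbfgs.nit, res_cg.nit)               # iteration counts to converge
```

Both runs supply the analytic gradient, mirroring the adjoint-state gradient of the misfit function used in the paper.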

  5. Convergence and rate analysis of neural networks for sparse approximation.

    PubMed

    Balavoine, Aurèle; Romberg, Justin; Rozell, Christopher J

    2012-09-01

    We present an analysis of the Locally Competitive Algorithm (LCA), which is a Hopfield-style neural network that efficiently solves sparse approximation problems (e.g., approximating a vector from a dictionary using just a few nonzero coefficients). This class of problems plays a significant role in both theories of neural coding and applications in signal processing. However, the LCA lacks analysis of its convergence properties, and previous results on neural networks for nonsmooth optimization do not apply to the specifics of the LCA architecture. We show that the LCA has desirable convergence properties, such as stability and global convergence to the optimum of the objective function when it is unique. Under some mild conditions, the support of the solution is also proven to be reached in finite time. Furthermore, some restrictions on the problem specifics allow us to characterize the convergence rate of the system by showing that the LCA converges exponentially fast with an analytically bounded convergence rate. We support our analysis with several illustrative simulations.
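    As a rough illustration of the system analyzed here, the following sketch integrates standard LCA dynamics (soft-threshold output, lateral inhibition through the dictionary Gram matrix) on a toy sparse-approximation problem; the dictionary, threshold and step size are illustrative choices, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
m, n = 20, 50
Phi = rng.normal(size=(m, n))
Phi /= np.linalg.norm(Phi, axis=0)            # unit-norm dictionary atoms
x_true = np.zeros(n)
x_true[[3, 17, 40]] = [1.0, -0.8, 0.6]        # sparse ground truth
s = Phi @ x_true                               # signal to approximate

lam, dt, tau = 0.1, 0.01, 1.0                  # threshold, step, time constant
b = Phi.T @ s                                  # feedforward drive
G = Phi.T @ Phi - np.eye(n)                    # lateral inhibition (Gram - I)

def thresh(u):                                 # soft-threshold activation
    return np.sign(u) * np.maximum(np.abs(u) - lam, 0.0)

u = np.zeros(n)                                # internal node states
for _ in range(5000):                          # forward-Euler integration
    u += (dt / tau) * (b - u - G @ thresh(u))
a = thresh(u)
print(np.flatnonzero(a))                       # recovered sparse support
```

At the fixed point the output a approximates the unique LASSO solution, consistent with the global-convergence result the abstract describes.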

  6. An improved NAS-RIF algorithm for image restoration

    NASA Astrophysics Data System (ADS)

    Gao, Weizhe; Zou, Jianhua; Xu, Rong; Liu, Changhai; Li, Hengnian

    2016-10-01

    Space optical images are inevitably degraded by atmospheric turbulence, optical system errors and motion. To recover the true image, a novel nonnegativity and support constraints recursive inverse filtering (NAS-RIF) algorithm is proposed to restore the degraded image. First, the image noise is reduced by a Contourlet denoising algorithm. Second, a reliable estimate of the object support region is used to accelerate convergence; we introduce optimal threshold segmentation to improve the object support region. Finally, an object construction limit and the logarithm function are added to enhance the algorithm's stability. Experimental results demonstrate that the proposed algorithm increases the PSNR and improves the quality of the restored images. Its convergence is also faster than that of the original NAS-RIF algorithm.

  7. Improving control and estimation for distributed parameter systems utilizing mobile actuator-sensor network.

    PubMed

    Mu, Wenying; Cui, Baotong; Li, Wen; Jiang, Zhengxian

    2014-07-01

    This paper proposes a scheme for non-collocated moving actuating and sensing devices that is utilized to improve performance in distributed parameter systems. Using the Lyapunov stability theorem, the velocity of each moving actuator/sensor agent is obtained. To enhance state estimation of a spatially distributed process, two kinds of filters with consensus terms, which penalize disagreement among the estimates, are considered. Both filters yield well-posed collective dynamics of the state errors and converge to the plant state. Numerical simulations demonstrate the effectiveness of such a moving actuator-sensor network in enhancing system performance, and show that the consensus filters converge faster to the plant state when consensus terms are included. Copyright © 2014 ISA. Published by Elsevier Ltd. All rights reserved.

  8. Neural Generalized Predictive Control: A Newton-Raphson Implementation

    NASA Technical Reports Server (NTRS)

    Soloway, Donald; Haley, Pamela J.

    1997-01-01

    An efficient implementation of Generalized Predictive Control using a multi-layer feedforward neural network as the plant's nonlinear model is presented. By using Newton-Raphson as the optimization algorithm, the number of iterations needed for convergence is significantly reduced compared with other techniques. The main cost of the Newton-Raphson algorithm is the calculation of the Hessian, but even with this overhead the low iteration count makes Newton-Raphson faster than other techniques and a viable algorithm for real-time control. This paper presents a detailed derivation of the Neural Generalized Predictive Control algorithm with Newton-Raphson as the minimization algorithm. Simulation results show convergence to a good solution within two iterations, and timing data show that real-time control is possible. Comments on the algorithm's implementation are also included.
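    The core idea, a Newton-Raphson step scaled by the Hessian rather than a fixed learning rate, can be sketched on a toy one-dimensional cost standing in for the GPC objective (the real algorithm minimizes over a horizon of control moves through the neural plant model; all names here are illustrative):

```python
# Toy convex cost J(u) standing in for the GPC objective.
def J(u):   return (u - 2.0) ** 4 + u ** 2
def dJ(u):  return 4.0 * (u - 2.0) ** 3 + 2.0 * u
def d2J(u): return 12.0 * (u - 2.0) ** 2 + 2.0

u = 0.0                                        # initial control guess
for k in range(20):
    step = dJ(u) / d2J(u)                      # Hessian-scaled Newton step
    u -= step
    if abs(step) < 1e-10:                      # converged
        break
print(u, k)                                    # minimizer, iterations used
```

The Hessian scaling is what buys the low iteration count the abstract reports, at the price of computing second derivatives each step.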

  9. A simplified analysis of the multigrid V-cycle as a fast elliptic solver

    NASA Technical Reports Server (NTRS)

    Decker, Naomi H.; Taasan, Shlomo

    1988-01-01

    For special model problems, Fourier analysis gives exact convergence rates for the two-grid multigrid cycle and, for more general problems, provides estimates of the two-grid convergence rates via local mode analysis. A method is presented for obtaining multigrid convergence rate estimates for cycles involving more than two grids (using essentially the same analysis as for the two-grid cycle). For the simple case of the V-cycle used as a fast Laplace solver on the unit square, the k-grid convergence rate bounds obtained by this method are sharper than the bounds predicted by the variational theory. Both theoretical justification and experimental evidence are presented.
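    For readers unfamiliar with the cycle being analyzed, a minimal V-cycle for the 1-D Poisson problem -u'' = f with weighted-Jacobi smoothing, full-weighting restriction and linear-interpolation prolongation (a textbook sketch, not the paper's Fourier analysis) looks like this:

```python
import numpy as np

def smooth(u, f, h, iters=3, w=2/3):
    # weighted-Jacobi smoothing for -u'' = f with homogeneous Dirichlet BCs
    for _ in range(iters):
        u[1:-1] = (1 - w) * u[1:-1] + w * 0.5 * (u[:-2] + u[2:] + h * h * f[1:-1])
    return u

def residual(u, f, h):
    r = np.zeros_like(u)
    r[1:-1] = f[1:-1] - (2 * u[1:-1] - u[:-2] - u[2:]) / (h * h)
    return r

def vcycle(u, f, h):
    n = len(u) - 1
    if n == 2:                          # coarsest grid: solve the single unknown
        u[1] = 0.5 * h * h * f[1]
        return u
    u = smooth(u, f, h)                 # pre-smoothing
    r = residual(u, f, h)
    rc = np.zeros(n // 2 + 1)           # full-weighting restriction of residual
    rc[1:-1] = 0.25 * r[1:-2:2] + 0.5 * r[2:-1:2] + 0.25 * r[3::2]
    ec = vcycle(np.zeros(n // 2 + 1), rc, 2 * h)
    e = np.zeros(n + 1)                 # linear-interpolation prolongation
    e[::2] = ec
    e[1::2] = 0.5 * (ec[:-1] + ec[1:])
    u += e                              # coarse-grid correction
    return smooth(u, f, h)              # post-smoothing

n = 64
h = 1.0 / n
x = np.linspace(0.0, 1.0, n + 1)
f = np.pi**2 * np.sin(np.pi * x)        # exact solution is sin(pi * x)
u = np.zeros(n + 1)
for _ in range(12):
    u = vcycle(u, f, h)
print(np.max(np.abs(u - np.sin(np.pi * x))))   # down to discretization error
```

A fixed number of such cycles reduces the algebraic error below the discretization error, which is the "fast elliptic solver" behavior whose rate the paper bounds.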

  10. Effects of heterogeneous convergence rate on consensus in opinion dynamics

    NASA Astrophysics Data System (ADS)

    Huang, Changwei; Dai, Qionglin; Han, Wenchen; Feng, Yuee; Cheng, Hongyan; Li, Haihong

    2018-06-01

    The Deffuant model has attracted much attention in the study of opinion dynamics. Here, we propose a modified version by introducing a heterogeneous convergence rate, which depends on the opinion difference between interacting agents and a tunable parameter κ. We study the effects of the heterogeneous convergence rate on consensus by investigating the probability of complete consensus, the size of the largest opinion cluster, the number of opinion clusters, and the relaxation time. We find that decreasing the convergence rate helps lower the confidence threshold above which the population always reaches complete consensus, and that there exists an optimal κ resulting in the minimal bounded-confidence threshold. Moreover, we find that there exists a window below the confidence threshold in which complete consensus may be reached with a nonzero probability when κ is not too large. We also find that, within a certain confidence range, decreasing the convergence rate reduces the relaxation time, which is somewhat counterintuitive.
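    A minimal simulation of a Deffuant-type model with an opinion-difference-dependent convergence rate conveys the setup; the functional form of the rate used below, mu = 0.5(1 - |d|/eps)^kappa, is an illustrative guess rather than the paper's exact definition:

```python
import numpy as np

def deffuant(n=200, eps=0.8, kappa=1.0, steps=100_000, seed=1):
    rng = np.random.default_rng(seed)
    x = rng.uniform(0.0, 1.0, n)              # initial opinions in [0, 1]
    for _ in range(steps):
        i, j = rng.integers(n, size=2)        # random interacting pair
        d = x[j] - x[i]
        if 0.0 < abs(d) < eps:                # bounded-confidence rule
            # heterogeneous convergence rate: pairs with larger
            # disagreement move together more slowly (illustrative form)
            mu = 0.5 * (1.0 - abs(d) / eps) ** kappa
            x[i] += mu * d
            x[j] -= mu * d
    return x

x = deffuant()
print(np.std(x))                               # near zero once consensus forms
```

Sweeping eps and kappa in such a simulation is how quantities like the consensus probability and relaxation time studied in the paper would be measured.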

  11. The brief negative symptom scale: validation of the German translation and convergent validity with self-rated anhedonia and observer-rated apathy.

    PubMed

    Bischof, Martin; Obermann, Caitriona; Hartmann, Matthias N; Hager, Oliver M; Kirschner, Matthias; Kluge, Agne; Strauss, Gregory P; Kaiser, Stefan

    2016-11-22

    Negative symptoms are considered core symptoms of schizophrenia. The Brief Negative Symptom Scale (BNSS) was developed to measure this symptomatic dimension according to a current consensus definition. The present study examined the psychometric properties of the German version of the BNSS. To expand former findings on convergent validity, we employed the Temporal Experience of Pleasure Scale (TEPS), a hedonic self-report that distinguishes between consummatory and anticipatory pleasure. Additionally, we addressed convergent validity with an observer-rated assessment of apathy, the Apathy Evaluation Scale (AES), which was completed by each patient's primary nurse. Data were collected from 75 inpatients and outpatients of the Psychiatric Hospital, University of Zurich, diagnosed with either schizophrenia or schizoaffective disorder. We assessed convergent and discriminant validity, internal consistency and inter-rater reliability. We largely replicated the findings for the original version, showing good psychometric properties of the BNSS. In addition, the primary nurses' evaluations correlated moderately with the interview-based clinician ratings. The BNSS anhedonia items showed good convergent validity with the TEPS. Overall, the German BNSS shows good psychometric properties comparable to the original English version. Convergent validity extends beyond interview-based assessments of negative symptoms to self-rated anhedonia and observer-rated apathy.

  12. A Newton-Krylov method with an approximate analytical Jacobian for implicit solution of Navier-Stokes equations on staggered overset-curvilinear grids with immersed boundaries.

    PubMed

    Asgharzadeh, Hafez; Borazjani, Iman

    2017-02-15

    The explicit and semi-implicit schemes in flow simulations involving complex geometries and moving boundaries suffer from time-step size restriction and low convergence rates. Implicit schemes can be used to overcome these restrictions, but implementing them to solve the Navier-Stokes equations is not straightforward due to their non-linearity. Among the implicit schemes for nonlinear equations, Newton-based techniques are preferred over fixed-point techniques because of their high convergence rate, but each Newton iteration is more expensive than a fixed-point iteration. Krylov subspace methods are among the most advanced iterative methods and can be combined with Newton methods, i.e., Newton-Krylov methods (NKMs), to solve non-linear systems of equations. The success of NKMs vastly depends on the scheme for forming the Jacobian, e.g., automatic differentiation is very expensive, and matrix-free methods without a preconditioner slow down as the mesh is refined. A novel, computationally inexpensive analytical Jacobian for the NKM is developed to solve the unsteady incompressible Navier-Stokes momentum equations on staggered overset-curvilinear grids with immersed boundaries. Moreover, the analytical Jacobian is used to form a preconditioner for the matrix-free method in order to improve its performance. The NKM with the analytical Jacobian was validated and verified against the Taylor-Green vortex, inline oscillations of a cylinder in a fluid initially at rest, and pulsatile flow in a 90 degree bend. The capability of the method in handling complex geometries with multiple overset grids and immersed boundaries is shown by simulating an intracranial aneurysm. It was shown that the NKM with an analytical Jacobian is 1.17 to 14.77 times faster than the fixed-point Runge-Kutta method, and 1.74 to 152.3 times (excluding an intensively stretched grid) faster than automatic differentiation, depending on the grid (size) and the flow problem.
In addition, it was shown that using only the diagonal of the Jacobian further improves the performance by 42-74% compared to the full Jacobian. The NKM with an analytical Jacobian showed better performance than the fixed-point Runge-Kutta method because it converged with higher time steps and in approximately 30% fewer iterations, even when the grid was stretched and the Reynolds number was increased. In fact, stretching the grid decreased the performance of all methods, but the fixed-point Runge-Kutta performance decreased 4.57 and 2.26 times more than the NKM with a diagonal and full Jacobian, respectively, when the stretching factor was increased. The NKM with a diagonal analytical Jacobian and the matrix-free method with an analytical preconditioner are the fastest methods, and the superiority of one over the other depends on the flow problem. Furthermore, the implemented methods are fully parallelized, with a parallel efficiency of 80-90% on the problems tested. The NKM with the analytical Jacobian can guide the building of preconditioners for other techniques to improve their performance in the future.

  13. A Newton–Krylov method with an approximate analytical Jacobian for implicit solution of Navier–Stokes equations on staggered overset-curvilinear grids with immersed boundaries

    PubMed Central

    Asgharzadeh, Hafez; Borazjani, Iman

    2016-01-01

    The explicit and semi-implicit schemes in flow simulations involving complex geometries and moving boundaries suffer from time-step size restriction and low convergence rates. Implicit schemes can be used to overcome these restrictions, but implementing them to solve the Navier-Stokes equations is not straightforward due to their non-linearity. Among the implicit schemes for nonlinear equations, Newton-based techniques are preferred over fixed-point techniques because of their high convergence rate, but each Newton iteration is more expensive than a fixed-point iteration. Krylov subspace methods are among the most advanced iterative methods and can be combined with Newton methods, i.e., Newton-Krylov methods (NKMs), to solve non-linear systems of equations. The success of NKMs vastly depends on the scheme for forming the Jacobian, e.g., automatic differentiation is very expensive, and matrix-free methods without a preconditioner slow down as the mesh is refined. A novel, computationally inexpensive analytical Jacobian for the NKM is developed to solve the unsteady incompressible Navier-Stokes momentum equations on staggered overset-curvilinear grids with immersed boundaries. Moreover, the analytical Jacobian is used to form a preconditioner for the matrix-free method in order to improve its performance. The NKM with the analytical Jacobian was validated and verified against the Taylor-Green vortex, inline oscillations of a cylinder in a fluid initially at rest, and pulsatile flow in a 90 degree bend. The capability of the method in handling complex geometries with multiple overset grids and immersed boundaries is shown by simulating an intracranial aneurysm. It was shown that the NKM with an analytical Jacobian is 1.17 to 14.77 times faster than the fixed-point Runge-Kutta method, and 1.74 to 152.3 times (excluding an intensively stretched grid) faster than automatic differentiation, depending on the grid (size) and the flow problem.
In addition, it was shown that using only the diagonal of the Jacobian further improves the performance by 42-74% compared to the full Jacobian. The NKM with an analytical Jacobian showed better performance than the fixed-point Runge-Kutta method because it converged with higher time steps and in approximately 30% fewer iterations, even when the grid was stretched and the Reynolds number was increased. In fact, stretching the grid decreased the performance of all methods, but the fixed-point Runge-Kutta performance decreased 4.57 and 2.26 times more than the NKM with a diagonal and full Jacobian, respectively, when the stretching factor was increased. The NKM with a diagonal analytical Jacobian and the matrix-free method with an analytical preconditioner are the fastest methods, and the superiority of one over the other depends on the flow problem. Furthermore, the implemented methods are fully parallelized, with a parallel efficiency of 80-90% on the problems tested. The NKM with the analytical Jacobian can guide the building of preconditioners for other techniques to improve their performance in the future. PMID:28042172

  14. A Newton-Krylov method with an approximate analytical Jacobian for implicit solution of Navier-Stokes equations on staggered overset-curvilinear grids with immersed boundaries

    NASA Astrophysics Data System (ADS)

    Asgharzadeh, Hafez; Borazjani, Iman

    2017-02-01

    The explicit and semi-implicit schemes in flow simulations involving complex geometries and moving boundaries suffer from time-step size restriction and low convergence rates. Implicit schemes can be used to overcome these restrictions, but implementing them to solve the Navier-Stokes equations is not straightforward due to their non-linearity. Among the implicit schemes for non-linear equations, Newton-based techniques are preferred over fixed-point techniques because of their high convergence rate but each Newton iteration is more expensive than a fixed-point iteration. Krylov subspace methods are one of the most advanced iterative methods that can be combined with Newton methods, i.e., Newton-Krylov Methods (NKMs) to solve non-linear systems of equations. The success of NKMs vastly depends on the scheme for forming the Jacobian, e.g., automatic differentiation is very expensive, and matrix-free methods without a preconditioner slow down as the mesh is refined. A novel, computationally inexpensive analytical Jacobian for NKM is developed to solve unsteady incompressible Navier-Stokes momentum equations on staggered overset-curvilinear grids with immersed boundaries. Moreover, the analytical Jacobian is used to form a preconditioner for matrix-free method in order to improve its performance. The NKM with the analytical Jacobian was validated and verified against Taylor-Green vortex, inline oscillations of a cylinder in a fluid initially at rest, and pulsatile flow in a 90 degree bend. The capability of the method in handling complex geometries with multiple overset grids and immersed boundaries is shown by simulating an intracranial aneurysm. It was shown that the NKM with an analytical Jacobian is 1.17 to 14.77 times faster than the fixed-point Runge-Kutta method, and 1.74 to 152.3 times (excluding an intensively stretched grid) faster than automatic differentiation depending on the grid (size) and the flow problem. 
In addition, it was shown that using only the diagonal of the Jacobian further improves the performance by 42-74% compared to the full Jacobian. The NKM with an analytical Jacobian showed better performance than the fixed-point Runge-Kutta method because it converged with higher time steps and in approximately 30% fewer iterations, even when the grid was stretched and the Reynolds number was increased. In fact, stretching the grid decreased the performance of all methods, but the fixed-point Runge-Kutta performance decreased 4.57 and 2.26 times more than the NKM with a diagonal and full Jacobian, respectively, when the stretching factor was increased. The NKM with a diagonal analytical Jacobian and the matrix-free method with an analytical preconditioner are the fastest methods, and the superiority of one over the other depends on the flow problem. Furthermore, the implemented methods are fully parallelized, with a parallel efficiency of 80-90% on the problems tested. The NKM with the analytical Jacobian can guide the building of preconditioners for other techniques to improve their performance in the future.

  15. Rotor cascade shape optimization with unsteady passing wakes using implicit dual time stepping method

    NASA Astrophysics Data System (ADS)

    Lee, Eun Seok

    2000-10-01

    Improved aerodynamic performance of a turbine cascade can be achieved through an understanding of the flow field associated with the stator-rotor interaction. In this research, an axial gas turbine airfoil cascade shape is optimized for improved aerodynamic performance by using an unsteady Navier-Stokes solver and a parallel genetic algorithm. The objective of the research is twofold: (1) to develop a computational fluid dynamics code with a faster convergence rate and unsteady flow simulation capabilities, and (2) to optimize a turbine airfoil cascade shape with unsteady passing wakes for improved aerodynamic performance. The computer code solves the Reynolds-averaged Navier-Stokes equations. It is based on the explicit, finite-difference, Runge-Kutta time-marching scheme and the Diagonalized Alternating Direction Implicit (DADI) scheme, with Baldwin-Lomax algebraic and k-epsilon turbulence modeling. Improvements to the code focused on cascade shape design capability, convergence acceleration and unsteady formulation. First, an inverse shape design method was implemented to provide the design capability, in which a surface transpiration concept was employed as an inverse technique to modify the geometry to satisfy the user-specified pressure distribution on the airfoil surface. Second, an approximation storage multigrid method was implemented as an acceleration technique. Third, a preconditioning method was adopted to speed up the convergence rate in solving low Mach number flows. Finally, the implicit dual time stepping method was incorporated in order to simulate unsteady flow fields. For validation of the unsteady code, Stokes' second problem and Poiseuille flow were chosen, and the computed results were compared with analytic solutions.
To test the code's ability to capture natural unsteady flow phenomena, vortex shedding past a cylinder and shock oscillation over a bicircular airfoil were simulated and compared with experiments and other published results. The rotor cascade shape optimization with unsteady passing wakes was performed to obtain improved aerodynamic performance using the unsteady Navier-Stokes solver. Two objective functions were defined: minimization of total pressure loss and maximization of lift, with the mass flow rate fixed. A parallel genetic algorithm was used as the optimizer, and the penalty method was introduced. Each individual's objective function was computed simultaneously on a 32-processor distributed-memory computer. One optimization run took about four days.

  16. Retrieval of ice thickness from polarimetric SAR data

    NASA Technical Reports Server (NTRS)

    Kwok, R.; Yueh, S. H.; Nghiem, S. V.; Huynh, D. D.

    1993-01-01

    We describe a potential procedure for retrieving thin-ice thickness from multi-frequency polarimetric SAR data. The procedure first masks out the thicker ice types with a simple classifier and then derives the thickness of the remaining pixels using a model-inversion technique. The technique used to derive ice thickness from polarimetric observations is a numerical estimator, or neural network. A three-layer perceptron trained with the backpropagation algorithm is used in this investigation, with several aspects improved for a faster convergence rate and better accuracy: weight initialization, normalization of the output range, selection of the offset constant, and a heuristic learning algorithm. The performance of the neural network is demonstrated using training data generated by a theoretical scattering model for sea ice matched to the database of interest. The training data comprise the polarimetric backscattering coefficients of thin ice and the corresponding input ice parameters to the scattering model. The ice thickness retrieved from the theoretical backscattering coefficients is compared with the ice thickness input to the scattering model to illustrate the accuracy of the inversion method. Results indicate that the network's convergence rate and accuracy are higher when multi-frequency training sets are presented. In addition, the dominant backscattering coefficients for retrieving ice thickness are found by comparing the behavior of the network trained with backscattering data at various incidence angles. After the neural network is trained with the theoretical backscattering data at various incidence angles, the interconnection weights between nodes are saved and applied to the experimental data to be investigated. In this paper, we illustrate the effectiveness of this technique using polarimetric SAR data collected by the JPL DC-8 radar over a sea ice scene.

  17. Inverse regression-based uncertainty quantification algorithms for high-dimensional models: Theory and practice

    NASA Astrophysics Data System (ADS)

    Li, Weixuan; Lin, Guang; Li, Bing

    2016-09-01

    Many uncertainty quantification (UQ) approaches suffer from the curse of dimensionality, that is, their computational costs become intractable for problems involving a large number of uncertain parameters. In these situations, classic Monte Carlo (MC) often remains the method of choice because its convergence rate O(n^(-1/2)), where n is the required number of model simulations, does not depend on the dimension of the problem. However, many high-dimensional UQ problems are intrinsically low-dimensional, because the variation of the quantity of interest (QoI) is often caused by only a few latent parameters varying within a low-dimensional subspace, known as the sufficient dimension reduction (SDR) subspace in the statistics literature. Motivated by this observation, we propose two inverse regression-based UQ algorithms (IRUQ) for high-dimensional problems. Both algorithms use inverse regression to convert the original high-dimensional problem to a low-dimensional one, which is then efficiently solved by building a response surface for the reduced model, for example via the polynomial chaos expansion. The first algorithm, which applies when an exact SDR subspace exists, is proved to converge at rate O(n^(-1)), hence much faster than MC. The second algorithm, which does not require an exact SDR, employs the reduced model as a control variate to reduce the error of the MC estimate. The accuracy gain can still be significant, depending on how well the reduced model approximates the original high-dimensional one. IRUQ also provides several practical advantages: it is non-intrusive; it does not require computing the high-dimensional gradient of the QoI; and it reports an error bar so the user knows how reliable the result is.
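    The dimension-independent O(n^(-1/2)) Monte Carlo rate referred to here is easy to verify empirically (a generic check, not the IRUQ method): increasing the sample size by a factor of 100 should cut the RMSE by a factor of about 10:

```python
import numpy as np

# Empirical check of the O(n**-0.5) Monte Carlo convergence rate on a
# toy quantity of interest g(x) = x**2 with x ~ U(0, 1); E[g] = 1/3.
rng = np.random.default_rng(0)
TRUE_MEAN = 1.0 / 3.0

def mc_rmse(n, reps=200):
    # root-mean-square error of the n-sample MC estimator over many runs
    ests = np.array([np.mean(rng.uniform(size=n) ** 2) for _ in range(reps)])
    return np.sqrt(np.mean((ests - TRUE_MEAN) ** 2))

ratio = mc_rmse(100) / mc_rmse(10_000)
print(ratio)       # close to sqrt(10_000 / 100) = 10
```

An O(n^(-1)) method like the paper's first algorithm would instead show the error ratio scaling like the sample-size ratio itself.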

  18. Cued Memory Retrieval Exhibits Reinstatement of High Gamma Power on a Faster Timescale in the Left Temporal Lobe and Prefrontal Cortex

    PubMed Central

    Shaikhouni, Ammar

    2017-01-01

    Converging evidence suggests that reinstatement of neural activity underlies our ability to successfully retrieve memories. However, the temporal dynamics of reinstatement in the human cortex remain poorly understood. One possibility is that neural activity during memory retrieval, like replay of spiking neurons in the hippocampus, occurs at a faster timescale than during encoding. We tested this hypothesis in 34 participants who performed a verbal episodic memory task while we recorded high gamma (62–100 Hz) activity from subdural electrodes implanted for seizure monitoring. We show that reinstatement of distributed patterns of high gamma activity occurs faster than during encoding. Using a time-warping algorithm, we quantify the timescale of the reinstatement and identify brain regions that show significant timescale differences between encoding and retrieval. Our data suggest that temporally compressed reinstatement of cortical activity is a feature of cued memory retrieval. SIGNIFICANCE STATEMENT We show that cued memory retrieval reinstates neural activity on a faster timescale than was present during encoding. Our data therefore provide a link between reinstatement of neural activity in the cortex and spontaneous replay of cortical and hippocampal spiking activity, which also exhibits temporal compression, and suggest that temporal compression may be a universal feature of memory retrieval. PMID:28336569

  19. Recruitment of faster motor units is associated with greater rates of fascicle strain and rapid changes in muscle force during locomotion

    PubMed Central

    Lee, Sabrina S. M.; de Boef Miara, Maria; Arnold, Allison S.; Biewener, Andrew A.; Wakeling, James M.

    2013-01-01


  20. Recruitment of faster motor units is associated with greater rates of fascicle strain and rapid changes in muscle force during locomotion.

    PubMed

    Lee, Sabrina S M; de Boef Miara, Maria; Arnold, Allison S; Biewener, Andrew A; Wakeling, James M

    2013-01-15

    Animals modulate the power output needed for different locomotor tasks by changing muscle forces and fascicle strain rates. To generate the necessary forces, appropriate motor units must be recruited. Faster motor units have faster activation-deactivation rates than slower motor units, and they contract at higher strain rates; therefore, recruitment of faster motor units may be advantageous for tasks that involve rapid movements or high rates of work. This study identified motor unit recruitment patterns in the gastrocnemii muscles of goats and examined whether faster motor units are recruited when locomotor speed is increased. The study also examined whether locomotor tasks that elicit faster (or slower) motor units are associated with increased (or decreased) in vivo tendon forces, force rise and relaxation rates, fascicle strains and/or strain rates. Electromyography (EMG), sonomicrometry and muscle-tendon force data were collected from the lateral and medial gastrocnemius muscles of goats during level walking, trotting and galloping and during inclined walking and trotting. EMG signals were analyzed using wavelet and principal component analyses to quantify changes in the EMG frequency spectra across the different locomotor conditions. Fascicle strain and strain rate were calculated from the sonomicrometric data, and force rise and relaxation rates were determined from the tendon force data. The results of this study showed that faster motor units were recruited as goats increased their locomotor speeds from level walking to galloping. Slow inclined walking elicited EMG intensities similar to those of fast level galloping but different EMG frequency spectra, indicating that recruitment of the different motor unit types depended, in part, on characteristics of the task. For the locomotor tasks and muscles analyzed here, recruitment patterns were generally associated with in vivo fascicle strain rates, EMG intensity and tendon force. 
Together, these data provide new evidence that changes in motor unit recruitment have an underlying mechanical basis, at least for certain locomotor tasks.

  1. A Novel Iterative Scheme for the Very Fast and Accurate Solution of Non-LTE Radiative Transfer Problems

    NASA Astrophysics Data System (ADS)

    Trujillo Bueno, J.; Fabiani Bendicho, P.

    1995-12-01

    Iterative schemes based on Gauss-Seidel (G-S) and optimal successive over-relaxation (SOR) iteration are shown to provide a dramatic increase in the speed with which non-LTE radiation transfer (RT) problems can be solved. The convergence rates of these new RT methods are identical to those of upper triangular nonlocal approximate operator splitting techniques, but the computing time per iteration and the memory requirements are similar to those of a local operator splitting method. In addition to these properties, both methods are particularly suitable for multidimensional geometry, since they neither require the actual construction of nonlocal approximate operators nor the application of any matrix inversion procedure. Compared with the currently used Jacobi technique, which is based on the optimal local approximate operator (see Olson, Auer, & Buchler 1986), the G-S method presented here is faster by a factor of 2. It gives excellent smoothing of the high-frequency error components, which makes it the iterative scheme of choice for multigrid radiative transfer. This G-S method can also be suitably combined with standard acceleration techniques to achieve even higher performance. Although the convergence rate of the optimal SOR scheme developed here for solving non-LTE RT problems is much higher than that of G-S, the computing time per iteration is also minimal, i.e., virtually identical to that of a local operator splitting method. While the conventional optimal local operator scheme provides the converged solution after a total CPU time (measured in arbitrary units) approximately equal to the number n of points per decade of optical depth, the time needed by this new method based on the optimal SOR iterations is only √n/(2√2). This method is competitive with those that result from combining the above-mentioned Jacobi and G-S schemes with the best acceleration techniques. 
Contrary to what happens with the local operator splitting strategy currently in use, these novel methods remain effective even under extreme non-LTE conditions in very fine grids.
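    The Jacobi vs. Gauss-Seidel comparison above holds for stationary iterations generally: G-S reuses freshly updated unknowns within a sweep and, for diffusion-like operators, asymptotically needs about half as many sweeps. A minimal sketch on a 1D Laplacian system (an illustrative stand-in, not the paper's radiative transfer operators):

```python
import numpy as np

def jacobi_step(A, b, x):
    """One Jacobi sweep: every component is updated from the previous iterate."""
    D = np.diag(A)
    return (b - (A @ x - D * x)) / D

def gauss_seidel_step(A, b, x):
    """One Gauss-Seidel sweep: each component reuses values already updated
    earlier in the same sweep."""
    x = x.copy()
    for i in range(len(b)):
        x[i] = (b[i] - A[i, :i] @ x[:i] - A[i, i + 1:] @ x[i + 1:]) / A[i, i]
    return x

def sweeps_to_converge(step, A, b, tol=1e-8, max_iter=10_000):
    x = np.zeros_like(b)
    for k in range(1, max_iter + 1):
        x_new = step(A, b, x)
        if np.linalg.norm(x_new - x, np.inf) < tol:
            return k
        x = x_new
    return max_iter

# 1D Laplacian (tridiagonal), a simple diffusion-like test operator
n = 20
A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
b = np.ones(n)

kj = sweeps_to_converge(jacobi_step, A, b)
kg = sweeps_to_converge(gauss_seidel_step, A, b)
```

    In this toy run the Gauss-Seidel sweep count comes out at roughly half the Jacobi count, mirroring the factor-of-2 speedup cited in the abstract.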

  2. Rapid divergence and convergence of life-history in experimentally evolved Drosophila melanogaster.

    PubMed

    Burke, Molly K; Barter, Thomas T; Cabral, Larry G; Kezos, James N; Phillips, Mark A; Rutledge, Grant A; Phung, Kevin H; Chen, Richard H; Nguyen, Huy D; Mueller, Laurence D; Rose, Michael R

    2016-09-01

    Laboratory selection experiments are alluring in their simplicity, power, and ability to inform us about how evolution works. A longstanding challenge facing evolution experiments with metazoans is that significant generational turnover takes a long time. In this work, we present data from a unique system of experimentally evolved laboratory populations of Drosophila melanogaster that have experienced three distinct life-history selection regimes. The goal of our study was to determine how quickly populations of a certain selection regime diverge phenotypically from their ancestors, and how quickly they converge with independently derived populations that share a selection regime. Our results indicate that phenotypic divergence from an ancestral population occurs rapidly, within dozens of generations, regardless of that population's evolutionary history. Similarly, populations sharing a selection treatment converge on common phenotypes in this same time frame, regardless of selection pressures those populations may have experienced in the past. These patterns of convergence and divergence emerged much faster than expected, suggesting that intermediate evolutionary history has transient effects in this system. The results we draw from this system are applicable to other experimental evolution projects, and suggest that many relevant questions can be sufficiently tested on shorter timescales than previously thought. © 2016 The Author(s). Evolution © 2016 The Society for the Study of Evolution.

  3. The Nazca-South American convergence rate and the recurrence of the great 1960 Chilean earthquake

    NASA Technical Reports Server (NTRS)

    Stein, S.; Engeln, J. F.; Demets, C.; Gordon, R. G.; Woods, D.

    1986-01-01

    The seismic slip rate along the Chile Trench estimated from the slip in the great 1960 earthquake and the recurrence history of major earthquakes has been interpreted as consistent with the subduction rate of the Nazca plate beneath South America. The convergence rate, estimated from global relative plate motion models, depends significantly on closure of the Nazca - Antarctica - South America circuit. NUVEL-1, a new plate motion model which incorporates recently determined spreading rates on the Chile Rise, shows that the average convergence rate over the last three million years is slower than previously estimated. If this time-averaged convergence rate provides an appropriate upper bound for the seismic slip rate, either the characteristic Chilean subduction earthquake is smaller than the 1960 event, the average recurrence interval is greater than observed in the last 400 years, or both. These observations bear out the nonuniformity of plate motions on various time scales, the variability in characteristic subduction zone earthquake size, and the limitations of recurrence time estimates.

  4. Faster and More Accurate Transport Procedures for HZETRN

    NASA Technical Reports Server (NTRS)

    Slaba, Tony C.; Blattnig, Steve R.; Badavi, Francis F.

    2010-01-01

    Several aspects of code verification are examined for HZETRN. First, a detailed derivation of the numerical marching algorithms is given. Next, a new numerical method for light particle transport is presented, and improvements to the heavy ion transport algorithm are discussed. A summary of various coding errors is also given, and the impact of these errors on exposure quantities is shown. Finally, a coupled convergence study is conducted. From this study, it is shown that past efforts in quantifying the numerical error in HZETRN were hindered by single precision calculations and computational resources. It is also determined that almost all of the discretization error in HZETRN is caused by charged target fragments below 50 AMeV. Total discretization errors are given for the old and new algorithms, and the improved accuracy of the new numerical methods is demonstrated. Run time comparisons are given for three applications in which HZETRN is commonly used. The new algorithms are found to be almost 100 times faster for solar particle event simulations and almost 10 times faster for galactic cosmic ray simulations.

  5. Geophysical constraints on geodynamic processes at convergent margins: A global perspective

    NASA Astrophysics Data System (ADS)

    Artemieva, Irina; Thybo, Hans; Shulgin, Alexey

    2016-04-01

    Convergent margins, being the boundaries between colliding lithospheric plates, form the most disastrous areas in the world due to intense seismicity and volcanism. We review global geophysical data in order to illustrate the effects of the plate tectonic processes at convergent margins on the crustal and upper mantle structure, seismicity, and geometry of the subducting slab. We present global maps of free-air and Bouguer gravity anomalies, heat flow, seismicity, seismic Vs anomalies in the upper mantle, and plate convergence rate, as well as 20 profiles across different convergent margins. A global analysis of these data for three types of convergent margins, formed by ocean-ocean, ocean-continent, and continent-continent collisions, allows us to recognize the following patterns. (1) Plate convergence rate depends on the type of convergent margin and is significantly larger when at least one of the plates is oceanic. However, the oldest oceanic plate in the Pacific ocean has the smallest convergence rate. (2) The presence of an oceanic plate is, in general, required for the generation of high-magnitude (M > 8.0) earthquakes and for generating intermediate and deep seismicity along the convergent margins. When oceanic slabs subduct beneath a continent, a gap in the seismogenic zone exists at depths between ca. 250 km and 500 km. Given that the seismogenic zone terminates at ca. 200 km depth in the case of continent-continent collision, we propose an oceanic origin of the subducting slabs beneath the Zagros, the Pamir, and the Vrancea zone. (3) The dip angle of the subducting slab in continent-ocean collision correlates neither with the age of the subducting oceanic slab nor with the convergence rate. For ocean-ocean subduction, clear trends are recognized: steeply dipping slabs are characteristic of young subducting plates and of oceanic plates with high convergence rate, with slab rotation towards a near-vertical dip angle at depths below ca. 500 km at very high convergence rate. (4) Local isostasy is not satisfied at the convergent margins, as evidenced by strong free-air gravity anomalies of positive and negative signs. However, near-isostatic equilibrium may exist in broad zones of distributed deformation such as Tibet. (5) No systematic patterns are recognized in the heat flow data due to the strong heterogeneity of measured values, which are strongly affected by hydrothermal circulation, magmatic activity, crustal faulting, and horizontal heat transfer, and also due to the low number of heat flow measurements across many margins. (6) Low upper mantle Vs seismic velocities beneath the convergent margins are restricted to the upper 150 km and may be related to mantle wedge melting, which is confined to shallow mantle levels. Artemieva, I.M., Thybo, H., and Shulgin, A., 2015. Geophysical constraints on geodynamic processes at convergent margins: A global perspective. Gondwana Research, http://dx.doi.org/10.1016/j.gr.2015.06.010

  6. Dynamics of the near response under natural viewing conditions with an open-view sensor

    PubMed Central

    Chirre, Emmanuel; Prieto, Pedro; Artal, Pablo

    2015-01-01

    We have studied the temporal dynamics of the near response (accommodation, convergence and pupil constriction) in healthy subjects when accommodation was performed under natural binocular and monocular viewing conditions. A binocular open-view multi-sensor based on an invisible infrared Hartmann-Shack sensor was used for non-invasive measurements of both eyes simultaneously in real time at 25 Hz. Response times for each process under different conditions were measured. The accommodative responses for binocular vision were faster than for monocular conditions. When one eye was blocked, accommodation and convergence were triggered simultaneously and synchronized, despite the fact that no retinal disparity was available. We found that upon the onset of the near target, the unblocked eye rapidly changes its line of sight to fix it on the stimulus while the blocked eye moves in the same direction, producing the equivalent of a saccade, but then converges to the (blocked) target in synchrony with accommodation. This open-view instrument could be further used for additional experiments with other tasks and conditions. PMID:26504666

  7. Linear homotopy solution of nonlinear systems of equations in geodesy

    NASA Astrophysics Data System (ADS)

    Paláncz, Béla; Awange, Joseph L.; Zaletnyik, Piroska; Lewis, Robert H.

    2010-01-01

    A fundamental task in geodesy is solving systems of equations. Many geodetic problems are represented as systems of multivariate polynomials. A common problem in solving such systems is improper initial starting values for iterative methods, leading to convergence to solutions with no physical meaning, or to convergence that requires global methods. Though symbolic methods such as Groebner bases or resultants have been shown to be very efficient, i.e., providing solutions for determined systems such as the 3-point problem of 3D affine transformation, the symbolic algebra can be very time consuming, even with special Computer Algebra Systems (CAS). This study proposes the Linear Homotopy method, which can be implemented easily in high-level computer languages like C++ and Fortran that are faster than CAS by at least two orders of magnitude. Using Mathematica, the power of Homotopy is demonstrated in solving three nonlinear geodetic problems: resection, GPS positioning, and affine transformation. The method, which enlarges the domain of convergence, is found to be efficient, less sensitive to rounding errors, and of lower complexity compared with other local methods like Newton-Raphson.
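    The core idea of linear homotopy can be sketched generically: deform a trivially solvable problem into the target system while tracking the root with Newton corrections. A minimal sketch (a generic fixed-point homotopy, not the authors' implementation; the 2x2 polynomial system is a hypothetical stand-in for a geodetic problem):

```python
import numpy as np

def linear_homotopy_solve(F, J, x0, steps=50, newton_iters=5):
    """Track the root of H(x, t) = t*F(x) + (1 - t)*(x - x0) from the trivial
    solution x = x0 at t = 0 to a root of F at t = 1, applying a few Newton
    corrections at each continuation step."""
    x = np.asarray(x0, dtype=float)
    for t in np.linspace(0.0, 1.0, steps + 1)[1:]:
        for _ in range(newton_iters):
            H = t * F(x) + (1 - t) * (x - x0)
            dH = t * J(x) + (1 - t) * np.eye(len(x))
            x = x - np.linalg.solve(dH, H)
    return x

# Toy 2x2 polynomial system (hypothetical stand-in for a geodetic problem):
#   x^2 + y^2 = 5  and  x*y = 2, which has (2, 1) among its solutions.
F = lambda v: np.array([v[0]**2 + v[1]**2 - 5.0, v[0] * v[1] - 2.0])
J = lambda v: np.array([[2 * v[0], 2 * v[1]], [v[1], v[0]]])

root = linear_homotopy_solve(F, J, x0=np.array([1.5, 0.5]))
```

    Which of the system's roots the path reaches depends on the start point x0; the appeal of the method, as the abstract notes, is that the basin of convergence is much larger than plain Newton iteration from the same start.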

  8. Convergence of shock waves generated by underwater electrical explosion of cylindrical wire arrays between different boundary geometries

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yanuka, D.; Zinowits, H. E.; Krasik, Ya. E.

    The results of experiments and numerical simulations of a shock wave propagating between either conical or parabolic bounding walls are presented. The shock wave was generated by a microsecond timescale underwater electrical explosion of a cylindrical wire array supplied by a current pulse having an amplitude of ∼230 kA and a rise time of ∼1 μs. It is shown that with the same energy density deposition into the exploding wire array, the shock wave converges faster between parabolic walls, and as a result, the pressure in the vicinity of convergence is ∼2.3 times higher than in the case of conical walls. The results obtained are compared to those of earlier experiments [Antonov et al., Appl. Phys. Lett. 102, 124104 (2013)] with explosions of spherical wire arrays. It is shown that at a distance of ∼400 μm from the implosion origin the pressure obtained in the current experiments is higher than for the case of spherical wire arrays.

  9. Computational and Physical Analysis of Catalytic Compounds

    NASA Astrophysics Data System (ADS)

    Wu, Richard; Sohn, Jung Jae; Kyung, Richard

    2015-03-01

    Nanoparticles exhibit unique physical and chemical properties depending on their geometrical properties. For this reason, synthesis of nanoparticles with controlled shape and size is important to exploit their unique properties. Catalyst supports are usually made of high-surface-area porous oxides or carbon nanomaterials. These support materials stabilize metal catalysts against sintering at high reaction temperatures. Many studies have demonstrated large enhancements of catalytic behavior due to the role of the oxide-metal interface. In this paper, the catalyzing ability of supported nano metal oxides, such as silicon oxide and titanium oxide compounds, has been analyzed using computational chemistry methods. Computational programs such as Gamess and Chemcraft have been used to compute the efficiencies of the catalytic compounds and the bonding energy changes during optimization convergence. The results illustrate how the metal oxides stabilize and the steps involved. The graph of computation step (N) versus energy (kcal/mol) shows that the energy of titania converges faster, by the 7th iteration, whereas that of silica converges by the 9th iteration.

  10. An analysis of numerical convergence in discrete velocity gas dynamics for internal flows

    NASA Astrophysics Data System (ADS)

    Sekaran, Aarthi; Varghese, Philip; Goldstein, David

    2018-07-01

    The Discrete Velocity Method (DVM) for solving the Boltzmann equation has significant advantages in the modeling of non-equilibrium and near-equilibrium flows as compared to other methods, in terms of reduced statistical noise, faster solutions and the ability to handle transient flows. However, the performance of the DVM for rarefied flow in complex, small-scale geometries, for instance in microelectromechanical systems (MEMS) devices, has yet to be studied in detail. The present study focuses on the performance of the DVM for locally large Knudsen number flows of argon around sharp corners and other sources of discontinuities in the distribution function. Our analysis details the nature of the solution for some benchmark cases and introduces the concept of solution convergence for the transport terms in the discrete velocity Boltzmann equation. The limiting effects of the velocity space discretization are also investigated and the constraints on obtaining a robust, consistent solution are derived. We propose techniques to maintain solution convergence and demonstrate the implementation of a specific strategy and its effect on the fidelity of the solution for some benchmark cases.

  11. An Angular Method with Position Control for Block Mesh Squareness Improvement

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yao, J.; Stillman, D.

    We optimize a target function defined by angular properties with a position control term for a basic stencil on a block-structured mesh, to improve element squareness in 2D and 3D. Comparison with the condition number method shows that, while a similar mesh quality with respect to orthogonality is achieved, the new method converges faster and provides more uniform global mesh spacing in our numerical tests.

  12. Biased Dropout and Crossmap Dropout: Learning towards effective Dropout regularization in convolutional neural network.

    PubMed

    Poernomo, Alvin; Kang, Dae-Ki

    2018-08-01

    Training a deep neural network with a large number of parameters often leads to overfitting. Recently, Dropout has been introduced as a simple yet effective regularization approach to combat overfitting in such models. Although Dropout has shown remarkable results in many deep neural network cases, its actual effect on CNNs has not been thoroughly explored. Moreover, training a Dropout model significantly increases the training time, as it takes longer to converge than a non-Dropout model with the same architecture. To deal with these issues, we propose Biased Dropout and Crossmap Dropout, two novel extensions of Dropout based on the behavior of hidden units in a CNN model. Biased Dropout divides the hidden units in a certain layer into two groups based on their magnitude and applies a different Dropout rate to each group. Hidden units with higher activation values, which contribute more to the network's final performance, are retained with a lower Dropout rate, while units with lower activation values are exposed to a higher Dropout rate to compensate. The second approach is Crossmap Dropout, an extension of regular Dropout in the convolution layer. Feature maps in a convolution layer are strongly correlated with one another, particularly at identical pixel locations. Crossmap Dropout maintains this important correlation, yet at the same time breaks the correlation between adjacent pixels across all feature maps, by applying the same Dropout mask to every feature map, so that units in equivalent positions in each feature map are either all dropped or all active during training. Our experiments with various benchmark datasets show that our approaches provide better generalization than regular Dropout. Moreover, our Biased Dropout converges faster during the training phase, suggesting that assigning noise appropriately to hidden units can lead to effective regularization. Copyright © 2018 Elsevier Ltd. All rights reserved.
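    The magnitude-based split of Biased Dropout can be sketched in a few lines of NumPy (an illustrative reading of the approach, not the authors' code; the median split and the two drop rates are assumed values):

```python
import numpy as np

def biased_dropout(a, p_drop_strong=0.2, p_drop_weak=0.6, rng=None):
    """Illustrative Biased Dropout at training time: units whose activation
    magnitude is at or above the median are dropped with a lower probability
    than the weaker units. Each group is rescaled by its own keep probability
    (inverted-dropout convention) so the expected activation is preserved.
    The median split and the two rates are assumed values, not the paper's."""
    rng = np.random.default_rng(rng)
    strong = np.abs(a) >= np.median(np.abs(a))
    p_drop = np.where(strong, p_drop_strong, p_drop_weak)
    keep = rng.random(a.shape) >= p_drop
    return a * keep / (1.0 - p_drop)

a = np.array([0.1, 0.9, 0.05, 1.2, 0.4, 0.8])
out = biased_dropout(a, rng=0)  # high-magnitude units survive more often
```

    Because each group is divided by its own keep probability, the layer's expected output matches the no-dropout forward pass, as with standard inverted dropout.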

  13. Terminal Sliding Mode Tracking Controller Design for Automatic Guided Vehicle

    NASA Astrophysics Data System (ADS)

    Chen, Hongbin

    2018-03-01

    Based on sliding mode variable structure control theory, the path tracking problem of an automatic guided vehicle is studied, and a controller design method based on terminal sliding mode is proposed. First, by analyzing the movement characteristics of the automatic guided vehicle, the kinematics model is presented. Then, improving on the traditional formulation of terminal sliding mode, a nonlinear sliding mode whose convergence speed is faster than the former is designed; theoretical analysis verifies that the designed sliding mode is stable and converges in finite time. Finally, the Lyapunov method is used to design the tracking control law of the automatic guided vehicle, such that the controller makes the vehicle track the desired trajectory in the global sense as well as in finite time. The simulation results verify the correctness and effectiveness of the control law.
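    The advantage of a terminal over a conventional linear sliding surface can be made concrete with a schematic form commonly used in the literature (not necessarily the exact surface designed in this paper):

```latex
% Conventional linear sliding surface: on s = 0 the error decays only exponentially
s = \dot{e} + c\,e, \qquad c > 0
% Terminal sliding surface, with \beta > 0 and p > q positive odd integers:
s = \dot{e} + \beta\, e^{q/p}
% On s = 0 the error obeys \dot{e} = -\beta e^{q/p}, so it reaches zero in the
% finite time obtained by separating variables and integrating:
t_f = \frac{p}{\beta\,(p - q)}\, \lvert e(0) \rvert^{(p-q)/p}
```

    The fractional power q/p makes the restoring term large relative to a linear term when the error is small, which is what yields finite-time rather than asymptotic convergence.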

  14. Maternal and child mortality indicators across 187 countries of the world: converging or diverging.

    PubMed

    Goli, Srinivas; Arokiasamy, Perianayagam

    2014-01-01

    This study reassessed the progress achieved since 1990 in maternal and child mortality indicators to test whether the progress is converging or diverging across countries worldwide. The convergence process is examined using standard parametric and non-parametric econometric models of convergence. The results of absolute convergence estimates reveal that progress in maternal and child mortality indicators is diverging for the entire period of 1990-2010 [maternal mortality ratio (MMR) - β = .00033, p < .574; neonatal mortality rate (NNMR) - β = .04367, p < .000; post-neonatal mortality rate (PNMR) - β = .02677, p < .000; under-five mortality rate (U5MR) - β = .00828, p < .000]. In the recent period, such divergence has been replaced with convergence for MMR but persists for all the child mortality indicators. The Kernel density estimates reveal a considerable reduction in the divergence of MMR for the recent period; however, the Kernel density distribution plots show more than one 'peak', which indicates the emergence of convergence clubs based on mortality levels. For child mortality indicators, the Kernel estimates suggest that divergence is in progress across countries worldwide but tends toward convergence for countries with low mortality levels. Mere progress in the global averages of maternal and child mortality indicators among a global cross-section of countries does not warrant convergence unless there is a considerable reduction in the variance, skewness and range of change.
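    The absolute (β-)convergence test used in such studies regresses the average annual rate of change of an indicator on its initial log level; a significantly negative slope means high-mortality countries improve faster, i.e. convergence. A minimal sketch on synthetic data (all numbers are illustrative, not from the study):

```python
import numpy as np

def beta_convergence(initial, final, years):
    """Absolute beta-convergence test: regress the average annual growth rate
    of an indicator on its initial log level. A negative slope (beta) means
    units with higher initial levels decline faster, i.e. convergence."""
    x = np.log(initial)
    y = (np.log(final) - np.log(initial)) / years  # average annual growth rate
    beta, _intercept = np.polyfit(x, y, 1)
    return beta

# Synthetic "converging world": countries with higher initial maternal
# mortality decline proportionally faster (all numbers illustrative).
rng = np.random.default_rng(42)
mmr_1990 = rng.uniform(10, 1000, size=100)
mmr_2010 = mmr_1990 * np.exp(-0.02 * np.log(mmr_1990) + rng.normal(0, 0.05, 100))

b = beta_convergence(mmr_1990, mmr_2010, years=20)
# b < 0 here, signalling convergence in this synthetic sample
```

    With a data-generating process in which decline is unrelated to the initial level, the fitted slope would hover around zero, which is the divergence pattern the abstract reports for the child mortality indicators.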

  15. Kinematics, Exhumation, and Sedimentation of the North Central Andes (Bolivia): An Integrated Thermochronometer and Thermokinematic Modeling Approach

    NASA Astrophysics Data System (ADS)

    Rak, Adam J.; McQuarrie, Nadine; Ehlers, Todd A.

    2017-11-01

    Quantifying mountain building processes in convergent orogens requires determination of the timing and rate of deformation in the overriding plate. In the central Andes, large discrepancies in both the timing and rate of deformation prevent evaluating the shortening history in light of internal or external forcing factors. Geologic map patterns, the age and location of reset thermochronometer systems, and synorogenic sediment distribution are all a function of the geometry, kinematics, and rate of deformation in a fold-thrust-belt-foreland basin (FTB-FB) system. To determine the timing and rate of deformation in the northern Bolivian Andes, we link thermokinematic modeling to a sequentially forward modeled, balanced cross section isostatically accounting for thrust loads and erosion. Displacement vectors, in 10 km increments, are assigned variable ages to create velocity fields in a thermokinematic model for predicting thermochronometer ages. We match both the pattern of predicted cooling ages with the across-strike pattern of measured zircon fission track, apatite fission track, and apatite (U-Th)/He cooling ages as well as the modeled age of FB formations to published sedimentary sections. Results indicate that northern Bolivian FTB deformation started at 50 Ma and may have begun as early as 55 Ma. Acceptable rates of shortening permit either a constant rate of shortening (~4-5 mm/yr) or varying shortening rates, with faster rates (7-10 mm/yr) at 45-50 Ma and 12-8 Ma and significantly slower rates (2-4 mm/yr) from 35 to 15 Ma, and indicate that the northern Bolivian Subandes started deforming between 19 and 14 Ma.

  16. Development of advanced Navier-Stokes solver

    NASA Technical Reports Server (NTRS)

    Yoon, Seokkwan

    1994-01-01

    The objective of research was to develop and validate new computational algorithms for solving the steady and unsteady Euler and Navier-Stokes equations. The end-products are new three-dimensional Euler and Navier-Stokes codes that are faster, more reliable, more accurate, and easier to use. The three-dimensional Euler and full/thin-layer Reynolds-averaged Navier-Stokes equations for compressible/incompressible flows are solved on structured hexahedral grids. The Baldwin-Lomax algebraic turbulence model is used for closure. The space discretization is based on a cell-centered finite-volume method augmented by a variety of numerical dissipation models with optional total variation diminishing limiters. The governing equations are integrated in time by an implicit method based on lower-upper factorization and symmetric Gauss-Seidel relaxation. The algorithm is vectorized on diagonal planes of sweep using two-dimensional indices in three dimensions. Convergence rates and the robustness of the codes are enhanced by the use of an implicit full approximation storage multigrid method.

  17. Regulation of Dynamical Systems to Optimal Solutions of Semidefinite Programs: Algorithms and Applications to AC Optimal Power Flow

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dall'Anese, Emiliano; Dhople, Sairaj V.; Giannakis, Georgios B.

    2015-07-01

    This paper considers a collection of networked nonlinear dynamical systems, and addresses the synthesis of feedback controllers that seek optimal operating points corresponding to the solution of pertinent network-wide optimization problems. Particular emphasis is placed on the solution of semidefinite programs (SDPs). The design of the feedback controller is grounded on a dual ε-subgradient approach, with the dual iterates utilized to dynamically update the dynamical-system reference signals. Global convergence is guaranteed for diminishing stepsize rules, even when the reference inputs are updated at a faster rate than the dynamical-system settling time. The application of the proposed framework to the control of power-electronic inverters in AC distribution systems is discussed. The objective is to bridge the time-scale separation between real-time inverter control and network-wide optimization. Optimization objectives assume the form of SDP relaxations of prototypical AC optimal power flow problems.

  18. Faster PET reconstruction with a stochastic primal-dual hybrid gradient method

    NASA Astrophysics Data System (ADS)

    Ehrhardt, Matthias J.; Markiewicz, Pawel; Chambolle, Antonin; Richtárik, Peter; Schott, Jonathan; Schönlieb, Carola-Bibiane

    2017-08-01

    Image reconstruction in positron emission tomography (PET) is computationally challenging due to Poisson noise, constraints and potentially non-smooth priors, let alone the sheer size of the problem. An algorithm that can cope well with the first three of the aforementioned challenges is the primal-dual hybrid gradient algorithm (PDHG) studied by Chambolle and Pock in 2011. However, PDHG updates all variables in parallel and is therefore computationally demanding on the large problem sizes encountered with modern PET scanners, where the number of dual variables easily exceeds 100 million. In this work, we numerically study the usage of SPDHG, a stochastic extension of PDHG that is still guaranteed to converge to a solution of the deterministic optimization problem with rates similar to PDHG. Numerical results on a clinical data set show that by introducing randomization into PDHG, results similar to the deterministic algorithm can be achieved using only around 10% of the operator evaluations, making significant progress towards the feasibility of sophisticated mathematical models in a clinical setting.
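    Schematically, PDHG for a problem of the form min_x G(x) + F(Ax) alternates primal and dual proximal steps, and SPDHG replaces the full dual update with a single randomly chosen block per iteration (standard textbook form, not the paper's exact step-size rules):

```latex
% Deterministic PDHG (Chambolle--Pock) for \min_x G(x) + F(Ax):
x^{k+1} = \operatorname{prox}_{\tau G}\!\bigl(x^{k} - \tau A^{\top} y^{k}\bigr)
\bar{x}^{k+1} = x^{k+1} + \theta\,\bigl(x^{k+1} - x^{k}\bigr)
y^{k+1} = \operatorname{prox}_{\sigma F^{*}}\!\bigl(y^{k} + \sigma A \bar{x}^{k+1}\bigr)
% SPDHG: split A into row blocks (A_i), draw a random index i_k each iteration,
% and update only the dual block y_{i_k}; all other dual blocks are carried
% over, so each iteration applies only A_{i_k} and A_{i_k}^{\top}.
```

    This is where the reported savings come from: each stochastic iteration touches one block of the dual variable instead of all of them, so far fewer full operator evaluations are needed.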

  19. An Improved Quantum-Behaved Particle Swarm Optimization Algorithm with Elitist Breeding for Unconstrained Optimization.

    PubMed

    Yang, Zhen-Lun; Wu, Angus; Min, Hua-Qing

    2015-01-01

    An improved quantum-behaved particle swarm optimization with elitist breeding (EB-QPSO) for unconstrained optimization is presented and empirically studied in this paper. In EB-QPSO, the novel elitist breeding strategy acts on the elitists of the swarm to escape from likely local optima and guide the swarm to perform a more efficient search. During the iterative optimization process of EB-QPSO, when certain criteria are met, the personal best of each particle and the global best of the swarm are used to generate new diverse individuals through the transposon operators. The newly generated individuals with better fitness are selected to be the new personal best particles and global best particle to guide the swarm for further solution exploration. A comprehensive simulation study is conducted on a set of twelve benchmark functions. Compared with five state-of-the-art quantum-behaved particle swarm optimization algorithms, the proposed EB-QPSO performs more competitively on all of the benchmark functions in terms of better global search capability and faster convergence rate.

  20. Elastic least-squares reverse time migration with velocities and density perturbation

    NASA Astrophysics Data System (ADS)

    Qu, Yingming; Li, Jinli; Huang, Jianping; Li, Zhenchun

    2018-02-01

    Elastic least-squares reverse time migration (LSRTM) based on the non-density-perturbation assumption can generate falsely migrated interfaces caused by density variations. We perform an elastic LSRTM scheme with density variations for multicomponent seismic data to produce high-quality images in the Vp, Vs and ρ components. However, the migrated images may suffer from crosstalk artefacts caused by P- and S-wave coupling in elastic LSRTM, regardless of the model parametrization used. We propose an elastic LSRTM method with density variations based on wave-mode separation to reduce these crosstalk artefacts, using P- and S-wave decoupled elastic velocity-stress equations to derive demigration equations and gradient formulae with respect to Vp, Vs and ρ. Numerical experiments with synthetic data demonstrate the capability and superiority of the proposed method. The imaging results show higher quality and a faster residual convergence rate. Sensitivity analysis with respect to migration velocity, migration density and stochastic noise verifies the robustness of the proposed method for field data.

  1. GLOBAL RATES OF CONVERGENCE OF THE MLES OF LOG-CONCAVE AND s-CONCAVE DENSITIES

    PubMed Central

    Doss, Charles R.; Wellner, Jon A.

    2017-01-01

    We establish global rates of convergence for the Maximum Likelihood Estimators (MLEs) of log-concave and s-concave densities on ℝ. The main finding is that the rate of convergence of the MLE in the Hellinger metric is no worse than n^(−2/5) when −1 < s < ∞, where s = 0 corresponds to the log-concave case. We also show that the MLE does not exist for the classes of s-concave densities with s < −1. PMID:28966409

  2. Magnified gradient function with deterministic weight modification in adaptive learning.

    PubMed

    Ng, Sin-Chun; Cheung, Chi-Chung; Leung, Shu-Hung

    2004-11-01

    This paper presents two novel approaches, backpropagation (BP) with magnified gradient function (MGFPROP) and deterministic weight modification (DWM), to speed up the convergence rate and improve the global convergence capability of the standard BP learning algorithm. The purpose of MGFPROP is to increase the convergence rate by magnifying the gradient function of the activation function, while the main objective of DWM is to reduce the system error by changing the weights of a multilayered feedforward neural network in a deterministic way. Simulation results show that the performance of the above two approaches is better than that of BP and other modified BP algorithms for a number of learning problems. Moreover, the integration of the above two approaches, forming a new algorithm called MDPROP, can further improve the performance of MGFPROP and DWM. From our simulation results, the MDPROP algorithm always outperforms BP and other modified BP algorithms in terms of convergence rate and global convergence capability.

  3. On convergence and convergence rates for Ivanov and Morozov regularization and application to some parameter identification problems in elliptic PDEs

    NASA Astrophysics Data System (ADS)

    Kaltenbacher, Barbara; Klassen, Andrej

    2018-05-01

    In this paper we provide a convergence analysis of some variational methods alternative to the classical Tikhonov regularization, namely Ivanov regularization (also called the method of quasi solutions) with some versions of the discrepancy principle for choosing the regularization parameter, and Morozov regularization (also called the method of the residuals). After motivating nonequivalence with Tikhonov regularization by means of an example, we prove well-definedness of the Ivanov and the Morozov method, convergence in the sense of regularization, as well as convergence rates under variational source conditions. Finally, we apply these results to some linear and nonlinear parameter identification problems in elliptic boundary value problems.

  4. Size-extensive QCISDT — implementation and application

    NASA Astrophysics Data System (ADS)

    Cremer, Dieter; He, Zhi

    1994-05-01

    A size-extensive quadratic CI method with single (S), double (D), and triple (T) excitations, QCISDT, has been derived by appropriate cancellation of disconnected terms in the CISDT projection equations. Matrix elements of the new QCI method have been evaluated in terms of two-electron integrals and applied to a number of atoms and small molecules. While QCISDT results are of similar accuracy to CCSDT results, the new method is easier to implement, converges faster in many cases and thereby offers advantages over CCSDT.

  5. Speeding up N-body simulations of modified gravity: chameleon screening models

    NASA Astrophysics Data System (ADS)

    Bose, Sownak; Li, Baojiu; Barreira, Alexandre; He, Jian-hua; Hellwing, Wojciech A.; Koyama, Kazuya; Llinares, Claudio; Zhao, Gong-Bo

    2017-02-01

    We describe and demonstrate the potential of a new and very efficient method for simulating certain classes of modified gravity theories, such as the widely studied f(R) gravity models. High resolution simulations for such models are currently very slow due to the highly nonlinear partial differential equation that needs to be solved exactly to predict the modified gravitational force. This nonlinearity is partly inherent, but is also exacerbated by the specific numerical algorithm used, which employs a variable redefinition to prevent numerical instabilities. The standard Newton-Gauss-Seidel iterative method used to tackle this problem has a poor convergence rate. Our new method not only avoids this, but also allows the discretised equation to be written in a form that is analytically solvable. We show that this new method greatly improves the performance and efficiency of f(R) simulations. For example, a test simulation with 512³ particles in a box of size 512 Mpc/h is now 5 times faster than before, while a Millennium-resolution simulation for f(R) gravity is estimated to be more than 20 times faster than with the old method. Our new implementation will be particularly useful for running very high resolution, large-sized simulations which, to date, are only possible for the standard model, and also makes it feasible to run large numbers of lower resolution simulations for covariance analyses. We hope that the method will bring us to a new era for precision cosmological tests of gravity.

  6. Performance Ratings: Designs for Evaluating Their Validity and Accuracy.

    DTIC Science & Technology

    1986-07-01

    ratees with substantial validity and with little bias due to the method for rating. Convergent validity and discriminant validity account for approximately... The expanded research design suggests that purpose for the ratings has little influence on the multitrait-multimethod properties of the ratings... Convergent and discriminant validity again account for substantial differences in the ratings of performance. Little method bias is present; both methods of

  7. A multi-reference filtered-x-Newton narrowband algorithm for active isolation of vibration and experimental investigations

    NASA Astrophysics Data System (ADS)

    Wang, Chun-yu; He, Lin; Li, Yan; Shuai, Chang-geng

    2018-01-01

    In engineering applications, ship machinery vibration may be induced by multiple rotational machines sharing a common vibration isolation platform and operating at the same time, so that multiple sinusoidal components may be excited. These components may be located at frequencies with large differences or at very close frequencies. A multi-reference filtered-x Newton narrowband (MRFx-Newton) algorithm is proposed to control these multiple sinusoidal components in an MIMO (multiple input and multiple output) system, especially those located at very close frequencies. The proposed MRFx-Newton algorithm can decouple and suppress multiple sinusoidal components located in the same narrow frequency band even though such components cannot be separated from each other by a narrowband-pass filter. As with the Fx-Newton algorithm, good real-time performance is achieved through the faster convergence brought by the 2nd-order inverse secondary-path filter in the time domain. Experiments are also conducted to verify the feasibility and test the performance of the proposed algorithm, installed in an active-passive vibration isolation system, in suppressing the vibration excited by an artificial source and air compressors. The results show that the proposed algorithm not only has a convergence rate comparable to the Fx-Newton algorithm but also better real-time performance and robustness than the Fx-Newton algorithm in active control of the vibration induced by multiple sound sources/rotational machines working on a shared platform.

  8. Solving Upwind-Biased Discretizations: Defect-Correction Iterations

    NASA Technical Reports Server (NTRS)

    Diskin, Boris; Thomas, James L.

    1999-01-01

    This paper considers defect-correction solvers for a second order upwind-biased discretization of the 2D convection equation. The following important features are reported: (1) The asymptotic convergence rate is about 0.5 per defect-correction iteration. (2) If the operators involved in defect-correction iterations have different approximation order, then the initial convergence rates may be very slow. The number of iterations required to get into the asymptotic convergence regime might grow on fine grids as a negative power of h. In the case of a second order target operator and a first order driver operator, this number of iterations is roughly proportional to h^(−1/3). (3) If both the operators have the second approximation order, the defect-correction solver demonstrates the asymptotic convergence rate after three iterations at most. The same three iterations are required to converge algebraic error below the truncation error level. A novel comprehensive half-space Fourier mode analysis (which, by the way, can take into account the influence of discretized outflow boundary conditions as well) for the defect-correction method is developed. This analysis explains many phenomena observed in solving non-elliptic equations and provides a close prediction of the actual solution behavior. It predicts the convergence rate for each iteration and the asymptotic convergence rate. As a result of this analysis, a new very efficient adaptive multigrid algorithm solving the discrete problem to within a given accuracy is proposed. Numerical simulations confirm the accuracy of the analysis and the efficiency of the proposed algorithm. The results of the numerical tests are reported.
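    The defect-correction iteration analysed above can be sketched on the 1D model problem u'(x) = f(x), u(0) = 0, with the second-order upwind-biased operator as the target and the first-order upwind operator as the driver (the grid size and right-hand side are illustrative assumptions):

```python
import numpy as np

N = 64
h = 1.0 / N
xg = h * np.arange(1, N + 1)
f = np.cos(xg)                                  # exact solution: u(x) = sin(x)

A1 = (np.eye(N) - np.eye(N, k=-1)) / h          # first-order upwind driver
A2 = np.zeros((N, N))
A2[0, 0] = 1.0 / h                              # first-order at the inflow node
for i in range(1, N):
    A2[i, i] = 1.5 / h                          # (3u_i - 4u_{i-1} + u_{i-2}) / (2h)
    A2[i, i - 1] = -2.0 / h
    if i >= 2:
        A2[i, i - 2] = 0.5 / h                  # for i == 1 this term is the
                                                # known boundary value u(0) = 0

u = np.zeros(N)
for _ in range(400):                            # defect-correction iterations
    u = u + np.linalg.solve(A1, f - A2 @ u)     # driver solve on the residual
```

Here both operators are lower triangular and the residual iteration matrix has eigenvalues 0 and −0.5, so the asymptotic contraction factor is 0.5 per iteration; consistent with observation (2) above, the mixed-order pairing can stagnate for many iterations before the asymptotic rate sets in, which is why the sketch runs well past the asymptotic regime.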

  9. Cued Memory Retrieval Exhibits Reinstatement of High Gamma Power on a Faster Timescale in the Left Temporal Lobe and Prefrontal Cortex.

    PubMed

    Yaffe, Robert B; Shaikhouni, Ammar; Arai, Jennifer; Inati, Sara K; Zaghloul, Kareem A

    2017-04-26

    Converging evidence suggests that reinstatement of neural activity underlies our ability to successfully retrieve memories. However, the temporal dynamics of reinstatement in the human cortex remain poorly understood. One possibility is that neural activity during memory retrieval, like replay of spiking neurons in the hippocampus, occurs at a faster timescale than during encoding. We tested this hypothesis in 34 participants who performed a verbal episodic memory task while we recorded high gamma (62-100 Hz) activity from subdural electrodes implanted for seizure monitoring. We show that reinstatement of distributed patterns of high gamma activity occurs faster than during encoding. Using a time-warping algorithm, we quantify the timescale of the reinstatement and identify brain regions that show significant timescale differences between encoding and retrieval. Our data suggest that temporally compressed reinstatement of cortical activity is a feature of cued memory retrieval. SIGNIFICANCE STATEMENT We show that cued memory retrieval reinstates neural activity on a faster timescale than was present during encoding. Our data therefore provide a link between reinstatement of neural activity in the cortex and spontaneous replay of cortical and hippocampal spiking activity, which also exhibits temporal compression, and suggest that temporal compression may be a universal feature of memory retrieval. Copyright © 2017 the authors.

  10. Faster and more accurate transport procedures for HZETRN

    NASA Astrophysics Data System (ADS)

    Slaba, T. C.; Blattnig, S. R.; Badavi, F. F.

    2010-12-01

    The deterministic transport code HZETRN was developed for research scientists and design engineers studying the effects of space radiation on astronauts and instrumentation protected by various shielding materials and structures. In this work, several aspects of code verification are examined. First, a detailed derivation of the light particle (A ⩽ 4) and heavy ion (A > 4) numerical marching algorithms used in HZETRN is given. References are given for components of the derivation that already exist in the literature, and discussions are given for details that may have been absent in the past. The present paper provides a complete description of the numerical methods currently used in the code and is identified as a key component of the verification process. Next, a new numerical method for light particle transport is presented, and improvements to the heavy ion transport algorithm are discussed. A summary of round-off error is also given, and the impact of this error on previously predicted exposure quantities is shown. Finally, a coupled convergence study is conducted by refining the discretization parameters (step-size and energy grid-size). From this study, it is shown that past efforts in quantifying the numerical error in HZETRN were hindered by single precision calculations and computational resources. It is determined that almost all of the discretization error in HZETRN is caused by the use of discretization parameters that violate a numerical convergence criterion related to charged target fragments below 50 AMeV. Total discretization errors are given for the old and new algorithms to 100 g/cm² in aluminum and water, and the improved accuracy of the new numerical methods is demonstrated. Run time comparisons between the old and new algorithms are given for one, two, and three layer slabs of 100 g/cm² of aluminum, polyethylene, and water.
The new algorithms are found to be almost 100 times faster for solar particle event simulations and almost 10 times faster for galactic cosmic ray simulations.

  11. Faster and more accurate transport procedures for HZETRN

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Slaba, T.C., E-mail: Tony.C.Slaba@nasa.go; Blattnig, S.R., E-mail: Steve.R.Blattnig@nasa.go; Badavi, F.F., E-mail: Francis.F.Badavi@nasa.go

    The deterministic transport code HZETRN was developed for research scientists and design engineers studying the effects of space radiation on astronauts and instrumentation protected by various shielding materials and structures. In this work, several aspects of code verification are examined. First, a detailed derivation of the light particle (A ≤ 4) and heavy ion (A > 4) numerical marching algorithms used in HZETRN is given. References are given for components of the derivation that already exist in the literature, and discussions are given for details that may have been absent in the past. The present paper provides a complete description of the numerical methods currently used in the code and is identified as a key component of the verification process. Next, a new numerical method for light particle transport is presented, and improvements to the heavy ion transport algorithm are discussed. A summary of round-off error is also given, and the impact of this error on previously predicted exposure quantities is shown. Finally, a coupled convergence study is conducted by refining the discretization parameters (step-size and energy grid-size). From this study, it is shown that past efforts in quantifying the numerical error in HZETRN were hindered by single precision calculations and computational resources. It is determined that almost all of the discretization error in HZETRN is caused by the use of discretization parameters that violate a numerical convergence criterion related to charged target fragments below 50 AMeV. Total discretization errors are given for the old and new algorithms to 100 g/cm² in aluminum and water, and the improved accuracy of the new numerical methods is demonstrated. Run time comparisons between the old and new algorithms are given for one, two, and three layer slabs of 100 g/cm² of aluminum, polyethylene, and water.
The new algorithms are found to be almost 100 times faster for solar particle event simulations and almost 10 times faster for galactic cosmic ray simulations.

  12. Using Informatics-, Bioinformatics- and Genomics-Based Approaches for the Molecular Surveillance and Detection of Biothreat Agents

    NASA Astrophysics Data System (ADS)

    Seto, Donald

    The convergence and wealth of informatics, bioinformatics and genomics methods and associated resources allow a comprehensive and rapid approach for the surveillance and detection of bacterial and viral organisms. Coupled with the continuing race for the fastest, most cost-efficient and highest-quality DNA sequencing technology, that is, "next generation sequencing", the detection of biological threat agents by `cheaper and faster' means is possible. With the application of improved bioinformatic tools for the understanding of these genomes and for parsing unique pathogen genome signatures, along with `state-of-the-art' informatics which include faster computational methods, equipment and databases, it is feasible to apply new algorithms to biothreat agent detection. Two such methods are high-throughput DNA sequencing-based and resequencing microarray-based identification. These are illustrated and validated by two examples involving human adenoviruses, both from real-world test beds.

  13. A robust nonlinear position observer for synchronous motors with relaxed excitation conditions

    NASA Astrophysics Data System (ADS)

    Bobtsov, Alexey; Bazylev, Dmitry; Pyrkin, Anton; Aranovskiy, Stanislav; Ortega, Romeo

    2017-04-01

    A robust, nonlinear and globally convergent rotor position observer for surface-mounted permanent magnet synchronous motors was recently proposed by the authors. The key feature of this observer is that it requires only knowledge of the motor's resistance and inductance. Using some particular properties of the mathematical model, it is shown that the problem of state observation can be translated into one of estimating two constant parameters, which is carried out with a standard gradient algorithm. In this work, we propose to replace this estimator with a new one, called dynamic regressor extension and mixing, which has the following advantages with respect to gradient estimators: (1) the stringent persistence of excitation (PE) condition on the regressor is not necessary to ensure parameter convergence; (2) convergence is instead guaranteed under a non-square-integrability condition that has a clear physical meaning in terms of signal energy; (3) if the regressor is PE, the new observer (like the old one) ensures exponential convergence, entailing some robustness properties of the observer; (4) the new estimator includes an additional filter that constitutes an additional degree of freedom to satisfy the non-square-integrability condition. Realistic simulation results show significant performance improvement of the position observer using the new parameter estimator, with less oscillatory behaviour and a faster convergence speed.
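    A minimal sketch of the standard gradient estimator that the paper replaces: two constant parameters are recovered from scalar measurements under a persistently exciting regressor (the signals, gain and parameter values are illustrative assumptions):

```python
import numpy as np

theta = np.array([1.5, -0.7])          # the two unknown constants
est = np.zeros(2)                      # current estimate
gamma = 0.1                            # adaptation gain
for k in range(5000):
    phi = np.array([np.sin(0.05 * k), np.cos(0.05 * k)])  # PE regressor
    y = phi @ theta                    # scalar measurement y_k = phi_k^T theta
    est = est + gamma * phi * (y - phi @ est)             # gradient update
```

With PE lost (for example, a regressor that settles into a single fixed direction), this gradient scheme can stall; that is the situation the dynamic regressor extension and mixing estimator is designed to handle.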

  14. Research on particle swarm optimization algorithm based on optimal movement probability

    NASA Astrophysics Data System (ADS)

    Ma, Jianhong; Zhang, Han; He, Baofeng

    2017-01-01

    Particle swarm optimization (PSO) can improve control precision and has great application value in training neural networks, fuzzy system control and related fields. When the traditional particle swarm algorithm is used to train feedforward neural networks, however, search efficiency is low and the algorithm easily falls into local convergence. An improved particle swarm optimization algorithm based on error back-propagation gradient descent is therefore proposed. The particles are ranked by fitness so that the optimization problem is considered as a whole, and the BP neural network is trained by error back-propagation gradient descent. Each particle updates its velocity and position according to its individual optimum and the global optimum, with the update weighted more towards the social (global) optimum and less towards its individual optimum, which helps particles avoid local optima; the gradient information is used to accelerate the local search ability of PSO and improve search efficiency. Simulation results show that the algorithm converges rapidly towards the global optimal solution in the initial stage and then keeps approaching it, and that it achieves faster convergence speed and better search performance in the same running time, improving in particular the efficiency of the later stages of the search.
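    A minimal sketch of a standard global-best PSO on the sphere function, showing the velocity and position updates toward personal and global bests that the proposed algorithm modifies (swarm size and coefficients are conventional, illustrative choices; the gradient-descent refinement of the paper is not included):

```python
import numpy as np

rng = np.random.default_rng(2)
n, dim = 30, 5
X = rng.uniform(-5.0, 5.0, (n, dim))   # particle positions
V = np.zeros((n, dim))                 # particle velocities
pbest = X.copy()                       # personal best positions
pval = (X ** 2).sum(axis=1)            # sphere-function values of the personal bests
g = pbest[pval.argmin()].copy()        # global best position
w, c1, c2 = 0.7, 1.5, 1.5              # inertia and acceleration coefficients
for _ in range(500):
    r1 = rng.random((n, dim))
    r2 = rng.random((n, dim))
    V = w * V + c1 * r1 * (pbest - X) + c2 * r2 * (g - X)
    X = X + V
    val = (X ** 2).sum(axis=1)
    better = val < pval                # update personal and global bests
    pbest[better], pval[better] = X[better], val[better]
    g = pbest[pval.argmin()].copy()
```

The coefficients satisfy the usual convergence region (w < 1, c1 + c2 < 2(1 + w)), so the swarm contracts onto the minimum of the sphere function.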

  15. CHRR: coordinate hit-and-run with rounding for uniform sampling of constraint-based models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Haraldsdóttir, Hulda S.; Cousins, Ben; Thiele, Ines

    In constraint-based metabolic modelling, physical and biochemical constraints define a polyhedral convex set of feasible flux vectors. Uniform sampling of this set provides an unbiased characterization of the metabolic capabilities of a biochemical network. However, reliable uniform sampling of genome-scale biochemical networks is challenging due to their high dimensionality and inherent anisotropy. Here, we present an implementation of a new sampling algorithm, coordinate hit-and-run with rounding (CHRR). This algorithm is based on the provably efficient hit-and-run random walk and crucially uses a preprocessing step to round the anisotropic flux set. CHRR provably converges to a uniform stationary sampling distribution. We apply it to metabolic networks of increasing dimensionality. We show that it converges several times faster than a popular artificial centering hit-and-run algorithm, enabling reliable and tractable sampling of genome-scale biochemical networks.
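    A minimal sketch of the coordinate hit-and-run step at the core of CHRR, sampling uniformly from a 2-D polytope (the rounding preprocessing step is omitted, and the triangle, burn-in and sample counts are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(1)
A = np.array([[-1.0,  0.0],
              [ 0.0, -1.0],
              [ 1.0,  1.0]])
b = np.array([0.0, 0.0, 1.0])          # the triangle x >= 0, y >= 0, x + y <= 1

x = np.array([0.25, 0.25])             # feasible interior starting point
samples = []
for t in range(20000):
    i = rng.integers(2)                # pick a random coordinate direction e_i
    slack = b - A @ x                  # nonnegative while x is feasible
    col = A[:, i]
    hi = (slack[col > 0] / col[col > 0]).min()   # furthest feasible step along +e_i
    lo = (slack[col < 0] / col[col < 0]).max()   # furthest feasible step along -e_i
    x = x.copy()
    x[i] += rng.uniform(lo, hi)        # uniform point on the feasible chord
    if t >= 1000:                      # discard burn-in
        samples.append(x)
samples = np.asarray(samples)
mean = samples.mean(axis=0)            # ~ the triangle centroid (1/3, 1/3)
```

On anisotropic, high-dimensional flux sets the rounding step is what keeps these chords well scaled; without it, the walk mixes slowly.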

  16. Optimization of Low-Thrust Spiral Trajectories by Collocation

    NASA Technical Reports Server (NTRS)

    Falck, Robert D.; Dankanich, John W.

    2012-01-01

    As NASA examines potential missions in the post space shuttle era, there has been a renewed interest in low-thrust electric propulsion for both crewed and uncrewed missions. While much progress has been made in the field of software for the optimization of low-thrust trajectories, many of the tools utilize higher-fidelity methods which, while excellent, result in extremely high run-times and poor convergence when dealing with planetocentric spiraling trajectories deep within a gravity well. Conversely, faster tools like SEPSPOT provide a reasonable solution but typically fail to account for other forces such as third-body gravitation, aerodynamic drag, and solar radiation pressure. SEPSPOT is further constrained by its solution method, which may require a very good guess to yield a converged optimal solution. Here the authors have developed an approach using collocation, intended to provide solution times comparable to those given by SEPSPOT while allowing for greater robustness and extensible force models.

  17. ``Dual Society Ever Precedes through Trevor SWAN & Wassily Leontief''

    NASA Astrophysics Data System (ADS)

    Maksoed, Wh-

    ``Dual Society'', introduced by E.F. Schumacher, is classified as a non-stable society that is easily shaken by political uncertainties. In ``Convergence'', Robert J. Barro & X. Sala-i-Martin state: ``a key economic issue is whether poor countries or regions tend to grow faster than rich ones''. For the growth models of Roy Forbes Harrod & Evsey Domar, three assumptions are described by Eduardo Ley: (i) output is proportional to capital, (ii) investment ex ante equals saving, and (iii) saving is proportional to output. As Trevor Swan underlines, developing countries differ significantly among themselves. Economic growth models comprise the Harrod-Domar growth model, the Solow growth model and endogenous growth models. Further, regarding the five stages of economic growth from Rostow and Leontief technology, we retrieve Jens Beckert's ``Institutional Isomorphism Revisited: Convergence & Divergence in Institutional Change'' alongside Frumkin's ``Institutional Isomorphism & Public Sector Organizations''. Acknowledgment is devoted to the late HE. Mr. Brigadier General-TNI (rtd.) Prof. Ir. HANDOJO.

  18. Development of surrogate models for the prediction of the flow around an aircraft propeller

    NASA Astrophysics Data System (ADS)

    Salpigidou, Christina; Misirlis, Dimitris; Vlahostergios, Zinon; Yakinthos, Kyros

    2018-05-01

    In the present work, the derivation of two surrogate models (SMs) for modelling the flow around a propeller for small aircraft is presented. Both methodologies use derived functions based on computations with the detailed propeller geometry. The computations were performed using the k-ω shear stress transport model for turbulence. In the SMs, the propeller was modelled in a computational domain of disk-like geometry, where source terms were introduced in the momentum equations. In the first SM, the source terms were polynomial functions of swirl and thrust, mainly related to the propeller radius. In the second SM, regression analysis was used to correlate the source terms with the velocity distribution through the propeller. The proposed SMs achieved faster convergence relative to the detailed model, while also providing results closer to the available operational data. The regression-based model was the most accurate and required less computational time for convergence.

  19. Efficient isoparametric integration over arbitrary space-filling Voronoi polyhedra for electronic structure calculations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Alam, Aftab; Khan, S. N.; Wilson, Brian G.

    2011-07-06

    A numerically efficient, accurate, and easily implemented integration scheme over convex Voronoi polyhedra (VP) is presented for use in ab initio electronic-structure calculations. We combine a weighted Voronoi tessellation with isoparametric integration via Gauss-Legendre quadratures to provide rapidly convergent VP integrals for a variety of integrands, including those with a Coulomb singularity. We showcase the capability of our approach by first applying it to an analytic charge-density model, achieving machine-precision accuracy with expected convergence properties in milliseconds. For contrast, we compare our results to those using shape-functions and show our approach is greater than 10⁵ times faster and 10⁷ times more accurate. Furthermore, a weighted Voronoi tessellation also allows for a physics-based partitioning of space that guarantees convex, space-filling VP while reflecting accurate atomic size and site charges, as we show within KKR methods applied to Fe-Pd alloys.
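    A minimal sketch of isoparametric integration via Gauss-Legendre quadrature, here over a bilinearly mapped quadrilateral as a 2-D analogue of the VP integration above (the vertices and quadrature order are illustrative assumptions):

```python
import numpy as np

# vertices of a 2x1 rectangle, ordered counter-clockwise
Vtx = np.array([[0.0, 0.0], [2.0, 0.0], [2.0, 1.0], [0.0, 1.0]])

def shape(xi, eta):
    # bilinear shape functions on the reference square [-1, 1]^2
    return 0.25 * np.array([(1 - xi) * (1 - eta), (1 + xi) * (1 - eta),
                            (1 + xi) * (1 + eta), (1 - xi) * (1 + eta)])

def dshape(xi, eta):
    # shape-function derivatives w.r.t. (xi, eta); array of shape (4, 2)
    return 0.25 * np.array([[-(1 - eta), -(1 - xi)],
                            [ (1 - eta), -(1 + xi)],
                            [ (1 + eta),  (1 + xi)],
                            [-(1 + eta),  (1 - xi)]])

def integrate(f, order=4):
    pts, wts = np.polynomial.legendre.leggauss(order)
    total = 0.0
    for xi, wx in zip(pts, wts):
        for eta, wy in zip(pts, wts):
            J = Vtx.T @ dshape(xi, eta)        # 2x2 Jacobian of the map
            xy = shape(xi, eta) @ Vtx          # physical quadrature point
            total += wx * wy * f(*xy) * abs(np.linalg.det(J))
    return total

area = integrate(lambda x, y: 1.0)       # area of the rectangle -> 2.0
moment = integrate(lambda x, y: x * y)   # integral of x*y over it -> 1.0
```

For polynomial integrands, the tensor-product rule is exact up to degree 2·order − 1 in each reference coordinate, which is what gives the rapid convergence reported above.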

  20. Multigrid methods for numerical simulation of laminar diffusion flames

    NASA Technical Reports Server (NTRS)

    Liu, C.; Liu, Z.; Mccormick, S.

    1993-01-01

    This paper documents the results of a computational study of multigrid methods for numerical simulation of 2D diffusion flames. The focus is on a simplified combustion model, which is assumed to be a single-step, infinitely fast and irreversible chemical reaction with five species (C3H8, O2, N2, CO2 and H2O). A fully-implicit second-order hybrid scheme is developed on a staggered grid, which is stretched in the streamwise coordinate direction. A full approximation multigrid scheme (FAS) based on line distributive relaxation is developed as a fast solver for the algebraic equations arising at each time step. Convergence of the process for the simplified model problem is more than two orders of magnitude faster than other iterative methods, and the computational results show good grid convergence, with second-order accuracy, as well as qualitative agreement with the results of other researchers.

  1. Convergence optimization of parametric MLEM reconstruction for estimation of Patlak plot parameters.

    PubMed

    Angelis, Georgios I; Thielemans, Kris; Tziortzi, Andri C; Turkheimer, Federico E; Tsoumpas, Charalampos

    2011-07-01

    In dynamic positron emission tomography, many researchers have attempted to exploit kinetic models within reconstruction such that parametric images are estimated directly from measurements. This work studies a direct parametric maximum likelihood expectation maximization algorithm applied to [(18)F]DOPA data using a reference-tissue input function. We use a modified version for direct reconstruction with a gradually descending scheme of subsets (i.e. 18-6-1), initialized with the FBP parametric image for faster convergence and higher accuracy. The results, compared with analytic reconstructions, show quantitative robustness (i.e. minimal bias) and clinical reproducibility within six human acquisitions in the region of clinical interest. Bland-Altman plots for all the studies showed sufficient quantitative agreement between the direct reconstructed parametric maps and the indirect FBP (−0.035x + 0.48E−5). Copyright © 2011 Elsevier Ltd. All rights reserved.
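    For reference, a sketch of the (indirect) Patlak graphical analysis whose parameters the direct method estimates: after equilibration the tissue curve obeys C_t(t) = Ki·∫Cp dτ + V·Cp(t), so regressing C_t/Cp against (∫Cp)/Cp over late frames recovers the slope Ki and intercept V (the input function and parameter values below are illustrative assumptions, not from the paper):

```python
import numpy as np

t = np.linspace(0.1, 60.0, 120)                    # frame mid-times (min)
Cp = 10.0 * np.exp(-0.1 * t) + 1.0                 # plasma input function (a.u.)
intCp = np.concatenate(([0.0],                     # running trapezoidal integral of Cp
        np.cumsum(0.5 * (Cp[1:] + Cp[:-1]) * np.diff(t))))

Ki_true, V_true = 0.05, 0.3                        # illustrative kinetic parameters
Ct = Ki_true * intCp + V_true * Cp                 # noiseless tissue curve

xp = intCp / Cp                                    # "Patlak time"
yp = Ct / Cp
late = t > 20.0                                    # use the late, linear portion
Ki_hat, V_hat = np.polyfit(xp[late], yp[late], 1)  # slope = Ki, intercept = V
```

The direct approach studied above folds this linear model into the MLEM update instead of fitting it to independently reconstructed frames.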

  2. Automatic Regionalization Algorithm for Distributed State Estimation in Power Systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Dexin; Yang, Liuqing; Florita, Anthony

    The deregulation of the power system and the incorporation of generation from renewable energy sources necessitates faster state estimation in the smart grid. Distributed state estimation (DSE) has become a promising and scalable solution to this urgent demand. In this paper, we investigate regionalization algorithms for the power system, a necessary step before distributed state estimation can be performed. To the best of the authors' knowledge, this is the first investigation of automatic regionalization (AR). We propose three spectral clustering based AR algorithms. Simulations show that our proposed algorithms outperform the two investigated manual regionalization cases. With the help of AR algorithms, we also show how the number of regions impacts the accuracy and convergence speed of the DSE and conclude that the number of regions needs to be chosen carefully to improve the convergence speed of DSEs.
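    A minimal sketch of spectral bisection, the building block of spectral-clustering-based regionalization: the sign pattern of the Fiedler vector of the graph Laplacian splits a toy 8-bus network across its weakest tie (the adjacency is an illustrative assumption; k-way regionalization would use k eigenvectors followed by k-means):

```python
import numpy as np

W = np.zeros((8, 8))
edges = [(0, 1), (0, 2), (1, 2), (2, 3), (1, 3),   # densely tied region A: buses 0-3
         (4, 5), (4, 6), (5, 6), (6, 7), (5, 7),   # densely tied region B: buses 4-7
         (3, 4)]                                   # single weak tie between regions
for i, j in edges:
    W[i, j] = W[j, i] = 1.0

L = np.diag(W.sum(axis=1)) - W                     # combinatorial graph Laplacian
vals, vecs = np.linalg.eigh(L)                     # eigenvalues in ascending order
fiedler = vecs[:, 1]                               # eigenvector of the 2nd-smallest eigenvalue
region = (fiedler > 0).astype(int)                 # sign pattern gives the bisection
```

The bisection recovers the two densely connected groups because cutting the single tie between them minimizes the (relaxed) cut objective.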

  3. Automatic Regionalization Algorithm for Distributed State Estimation in Power Systems: Preprint

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Dexin; Yang, Liuqing; Florita, Anthony

    The deregulation of the power system and the incorporation of generation from renewable energy sources necessitates faster state estimation in the smart grid. Distributed state estimation (DSE) has become a promising and scalable solution to this urgent demand. In this paper, we investigate regionalization algorithms for the power system, a necessary step before distributed state estimation can be performed. To the best of the authors' knowledge, this is the first investigation of automatic regionalization (AR). We propose three spectral clustering based AR algorithms. Simulations show that our proposed algorithms outperform the two investigated manual regionalization cases. With the help of AR algorithms, we also show how the number of regions impacts the accuracy and convergence speed of the DSE and conclude that the number of regions needs to be chosen carefully to improve the convergence speed of DSEs.

  4. CHRR: coordinate hit-and-run with rounding for uniform sampling of constraint-based models

    DOE PAGES

    Haraldsdóttir, Hulda S.; Cousins, Ben; Thiele, Ines; ...

    2017-01-31

    In constraint-based metabolic modelling, physical and biochemical constraints define a polyhedral convex set of feasible flux vectors. Uniform sampling of this set provides an unbiased characterization of the metabolic capabilities of a biochemical network. However, reliable uniform sampling of genome-scale biochemical networks is challenging due to their high dimensionality and inherent anisotropy. Here, we present an implementation of a new sampling algorithm, coordinate hit-and-run with rounding (CHRR). This algorithm is based on the provably efficient hit-and-run random walk and crucially uses a preprocessing step to round the anisotropic flux set. CHRR provably converges to a uniform stationary sampling distribution. We apply it to metabolic networks of increasing dimensionality. We show that it converges several times faster than a popular artificial centering hit-and-run algorithm, enabling reliable and tractable sampling of genome-scale biochemical networks.
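As a rough illustration of the underlying random walk (without the rounding preprocessing that CHRR adds), here is a minimal coordinate hit-and-run sampler for a polytope {x : Ax ≤ b}: pick a random axis, find the feasible chord through the current point along that axis, and jump to a uniform point on the chord. The example polytope and all parameters are assumptions for illustration only.

```python
import numpy as np

def coordinate_hit_and_run(A, b, x0, n_samples, burn=1000, seed=0):
    """Sample approximately uniformly from {x : A x <= b}.
    Each step moves along a random coordinate axis to a uniform
    point on the feasible chord through the current point."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float).copy()
    n = len(x)
    out = []
    for t in range(burn + n_samples):
        i = rng.integers(n)
        slack = b - A @ x            # feasibility of x + s*e_i: A[:,i]*s <= slack
        lo, hi = -np.inf, np.inf
        for aj, sj in zip(A[:, i], slack):
            if aj > 0:
                hi = min(hi, sj / aj)
            elif aj < 0:
                lo = max(lo, sj / aj)
        x[i] += rng.uniform(lo, hi)  # uniform point on the chord
        if t >= burn:
            out.append(x.copy())
    return np.array(out)

# unit square [0,1]^2 written as {x : Ax <= b}
A = np.array([[1, 0], [-1, 0], [0, 1], [0, -1]], dtype=float)
b = np.array([1, 0, 1, 0], dtype=float)
samples = coordinate_hit_and_run(A, b, [0.5, 0.5], n_samples=5000)
```

On an anisotropic (elongated) polytope this walk mixes slowly, which is exactly why CHRR's rounding step matters: after rounding, the chords have comparable lengths in all directions.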

  5. Faster-X evolution: Theory and evidence from Drosophila.

    PubMed

    Charlesworth, Brian; Campos, José L; Jackson, Benjamin C

    2018-02-12

    A faster rate of adaptive evolution of X-linked genes compared with autosomal genes can be caused by the fixation of recessive or partially recessive advantageous mutations, due to the full expression of X-linked mutations in hemizygous males. Other processes, including recombination rate and mutation rate differences between X chromosomes and autosomes, may also cause faster evolution of X-linked genes. We review population genetics theory concerning the expected relative values of variability and rates of evolution of X-linked and autosomal DNA sequences. The theoretical predictions are compared with data from population genomic studies of several species of Drosophila. We conclude that there is evidence for adaptive faster-X evolution of several classes of functionally significant nucleotides. We also find evidence for potential differences in mutation rates between X-linked and autosomal genes, due to differences in mutational bias towards GC to AT mutations. Many aspects of the data are consistent with the male hemizygosity model, although not all possible confounding factors can be excluded. © 2018 John Wiley & Sons Ltd.

  6. Nonlinear convergence active vibration absorber for single and multiple frequency vibration control

    NASA Astrophysics Data System (ADS)

    Wang, Xi; Yang, Bintang; Guo, Shufeng; Zhao, Wenqiang

    2017-12-01

    This paper presents a nonlinear convergence algorithm for an active dynamic undamped vibration absorber (ADUVA). The damping of the absorber is ignored in this algorithm, which both strengthens the vibration-suppressing effect and simplifies the algorithm. The simulation and experimental results indicate that this nonlinear convergence ADUVA can significantly suppress vibration caused by both single- and multiple-frequency excitation. The proposed nonlinear algorithm is composed of equivalent dynamic modeling equations and a frequency estimator. Both the single- and multiple-frequency ADUVA are mathematically imitated by the same mechanical structure, with a mass body and a voice coil motor (VCM). The nonlinear convergence estimator simultaneously satisfies the requirements of fast convergence rate and small steady-state frequency error, which are incompatible for a linear convergence estimator. The convergence of the nonlinear algorithm is mathematically proved, and its non-divergent characteristic is theoretically guaranteed. The vibration-suppressing experiments demonstrate that the nonlinear ADUVA converges faster and achieves greater oscillation attenuation than the linear ADUVA.

  7. Hillslope-scale experiment demonstrates role of convergence during two-step saturation

    USGS Publications Warehouse

    Gevaert, A. I.; Teuling, A. J.; Uijlenhoet, R.; DeLong, Stephen B.; Huxman, T. E.; Pangle, L. A.; Breshears, David D.; Chorover, J.; Pelletier, John D.; Saleska, S. R.; Zeng, X.; Troch, Peter A.

    2014-01-01

    Subsurface flow and storage dynamics at hillslope scale are difficult to ascertain, often in part due to a lack of sufficient high-resolution measurements and an incomplete understanding of boundary conditions, soil properties, and other environmental aspects. A continuous and extreme rainfall experiment on an artificial hillslope at Biosphere 2's Landscape Evolution Observatory (LEO) resulted in saturation excess overland flow and gully erosion in the convergent hillslope area. An array of 496 soil moisture sensors revealed a two-step saturation process. First, the downward movement of the wetting front brought soils to a relatively constant but still unsaturated moisture content. Second, soils were brought to saturated conditions from below in response to rising water tables. Convergent areas responded faster than upslope areas, due to contributions from lateral subsurface flow driven by the topography of the bottom boundary, which is comparable to impermeable bedrock in natural environments. This led to the formation of a groundwater ridge in the convergent area, triggering saturation excess runoff generation. This unique experiment demonstrates, at very high spatial and temporal resolution, the role of convergence on subsurface storage and flow dynamics. The results bring into question the representation of saturation excess overland flow in conceptual rainfall-runoff models and land-surface models, since flow is gravity-driven in many of these models and upper layers cannot become saturated from below. The results also provide a baseline to study the role of the co-evolution of ecological and hydrological processes in determining landscape water dynamics during future experiments in LEO.

  8. Subterranean mammals show convergent regression in ocular genes and enhancers, along with adaptation to tunneling

    PubMed Central

    Partha, Raghavendran; Chauhan, Bharesh K; Ferreira, Zelia; Robinson, Joseph D; Lathrop, Kira; Nischal, Ken K

    2017-01-01

    The underground environment imposes unique demands on life that have led subterranean species to evolve specialized traits, many of which evolved convergently. We studied convergence in evolutionary rate in subterranean mammals in order to associate phenotypic evolution with specific genetic regions. We identified a strong excess of vision- and skin-related genes that changed at accelerated rates in the subterranean environment due to relaxed constraint and adaptive evolution. We also demonstrate that ocular-specific transcriptional enhancers were convergently accelerated, whereas enhancers active outside the eye were not. Furthermore, several uncharacterized genes and regulatory sequences demonstrated convergence and thus constitute novel candidate sequences for congenital ocular disorders. The strong evidence of convergence in these species indicates that evolution in this environment is recurrent and predictable and can be used to gain insights into phenotype–genotype relationships. PMID:29035697

  9. Orderings for conjugate gradient preconditionings

    NASA Technical Reports Server (NTRS)

    Ortega, James M.

    1991-01-01

    The effect of orderings on the rate of convergence of the conjugate gradient method with SSOR or incomplete Cholesky preconditioning is examined. Some results also are presented that help to explain why red/black ordering gives an inferior rate of convergence.
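For context, a minimal preconditioned conjugate gradient loop with the SSOR preconditioner the abstract refers to (taken here with relaxation parameter ω = 1, i.e., symmetric Gauss–Seidel, M = (D+L) D⁻¹ (D+L)ᵀ for SPD A). The ordering question under study enters through the row/column permutation applied to A before the triangular sweeps are formed. This is a generic textbook sketch, not the paper's code.

```python
import numpy as np

def forward_sub(Lo, r):
    """Solve Lo y = r for lower-triangular Lo."""
    y = np.zeros_like(r)
    for i in range(len(r)):
        y[i] = (r[i] - Lo[i, :i] @ y[:i]) / Lo[i, i]
    return y

def backward_sub(Up, r):
    """Solve Up y = r for upper-triangular Up."""
    y = np.zeros_like(r)
    for i in range(len(r) - 1, -1, -1):
        y[i] = (r[i] - Up[i, i + 1:] @ y[i + 1:]) / Up[i, i]
    return y

def pcg_ssor(A, b, tol=1e-10, max_iter=200):
    """CG preconditioned by symmetric Gauss-Seidel (SSOR, omega = 1):
    M = (D+L) D^{-1} (D+L)^T, applied via two triangular sweeps."""
    D = np.diag(np.diag(A))
    Lo = np.tril(A)                        # D + L
    def apply_Minv(r):
        y = forward_sub(Lo, r)
        return backward_sub(Lo.T, D @ y)
    x = np.zeros_like(b)
    r = b - A @ x
    z = apply_Minv(r)
    p = z.copy()
    rz = r @ z
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol:
            break
        z = apply_Minv(r)
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x

# 1-D Poisson system (SPD, tridiagonal) in its natural ordering
n = 50
A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
x = pcg_ssor(A, np.ones(n))
```

Reordering A (e.g., red/black) changes `np.tril(A)` and hence the preconditioner M, which is why the ordering affects the convergence rate even though A itself is only permuted.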

  10. Fast parallel MR image reconstruction via B1-based, adaptive restart, iterative soft thresholding algorithms (BARISTA).

    PubMed

    Muckley, Matthew J; Noll, Douglas C; Fessler, Jeffrey A

    2015-02-01

    Sparsity-promoting regularization is useful for combining compressed sensing assumptions with parallel MRI for reducing scan time while preserving image quality. Variable splitting algorithms are the current state-of-the-art algorithms for SENSE-type MR image reconstruction with sparsity-promoting regularization. These methods are very general and have been observed to work with almost any regularizer; however, the tuning of associated convergence parameters is a commonly-cited hindrance in their adoption. Conversely, majorize-minimize algorithms based on a single Lipschitz constant have been observed to be slow in shift-variant applications such as SENSE-type MR image reconstruction since the associated Lipschitz constants are loose bounds for the shift-variant behavior. This paper bridges the gap between the Lipschitz constant and the shift-variant aspects of SENSE-type MR imaging by introducing majorizing matrices in the range of the regularizer matrix. The proposed majorize-minimize methods (called BARISTA) converge faster than state-of-the-art variable splitting algorithms when combined with momentum acceleration and adaptive momentum restarting. Furthermore, the tuning parameters associated with the proposed methods are unitless convergence tolerances that are easier to choose than the constraint penalty parameters required by variable splitting algorithms.
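BARISTA itself uses B1-based majorizing matrices; as a simpler stand-in, the momentum-plus-adaptive-restart machinery it builds on can be sketched with plain FISTA and a gradient-based restart test (O'Donoghue–Candès style) on an l1-regularized least-squares toy problem. The single Lipschitz constant used here is exactly the loose bound the abstract criticizes; all problem data below are illustrative assumptions.

```python
import numpy as np

def fista_restart(A, b, lam, n_iter=500):
    """FISTA (momentum-accelerated proximal gradient) for
    0.5*||Ax - b||^2 + lam*||x||_1, with adaptive restart:
    momentum is reset whenever it points uphill."""
    L = np.linalg.norm(A, 2) ** 2                 # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    y = x.copy()
    t = 1.0
    for _ in range(n_iter):
        g = A.T @ (A @ y - b)
        step = y - g / L
        x_new = np.sign(step) * np.maximum(np.abs(step) - lam / L, 0.0)  # soft-threshold
        t_new = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t * t))
        y_new = x_new + ((t - 1.0) / t_new) * (x_new - x)
        if g @ (x_new - x) > 0:                   # gradient-based restart test
            t_new, y_new = 1.0, x_new.copy()
        x, y, t = x_new, y_new, t_new
    return x

A = np.array([[1.0, 0.0, 0.5],
              [0.0, 1.0, 0.5],
              [0.3, 0.2, 1.0]])
b = A @ np.array([1.0, 0.0, 2.0])
x_hat = fista_restart(A, b, lam=1e-3)
```

Note that the only tuning parameter here beyond the regularizer weight is the iteration count or a convergence tolerance, mirroring the abstract's point that such methods avoid the penalty parameters of variable splitting.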

  11. Fast Parallel MR Image Reconstruction via B1-based, Adaptive Restart, Iterative Soft Thresholding Algorithms (BARISTA)

    PubMed Central

    Noll, Douglas C.; Fessler, Jeffrey A.

    2014-01-01

    Sparsity-promoting regularization is useful for combining compressed sensing assumptions with parallel MRI for reducing scan time while preserving image quality. Variable splitting algorithms are the current state-of-the-art algorithms for SENSE-type MR image reconstruction with sparsity-promoting regularization. These methods are very general and have been observed to work with almost any regularizer; however, the tuning of associated convergence parameters is a commonly-cited hindrance in their adoption. Conversely, majorize-minimize algorithms based on a single Lipschitz constant have been observed to be slow in shift-variant applications such as SENSE-type MR image reconstruction since the associated Lipschitz constants are loose bounds for the shift-variant behavior. This paper bridges the gap between the Lipschitz constant and the shift-variant aspects of SENSE-type MR imaging by introducing majorizing matrices in the range of the regularizer matrix. The proposed majorize-minimize methods (called BARISTA) converge faster than state-of-the-art variable splitting algorithms when combined with momentum acceleration and adaptive momentum restarting. Furthermore, the tuning parameters associated with the proposed methods are unitless convergence tolerances that are easier to choose than the constraint penalty parameters required by variable splitting algorithms. PMID:25330484

  12. Designing a multistage supply chain in cross-stage reverse logistics environments: application of particle swarm optimization algorithms.

    PubMed

    Chiang, Tzu-An; Che, Z H; Cui, Zhihua

    2014-01-01

    This study designed a cross-stage reverse logistics course for defective products so that damaged products generated in downstream partners can be directly returned to upstream partners throughout the stages of a supply chain for rework and maintenance. To solve this reverse supply chain design problem, an optimal cross-stage reverse logistics mathematical model was developed. In addition, we developed a genetic algorithm (GA) and three particle swarm optimization (PSO) algorithms: the inertia weight method (PSOA_IWM), V(Max) method (PSOA_VMM), and constriction factor method (PSOA_CFM), which we employed to find solutions to support this mathematical model. Finally, a real case and five simulative cases with different scopes were used to compare the execution times, convergence times, and objective function values of the four algorithms used to validate the model proposed in this study. Regarding system execution time, the GA consumed more time than the other three PSOs did. Regarding objective function value, the GA, PSOA_IWM, and PSOA_CFM could obtain a lower convergence value than PSOA_VMM could. Finally, PSOA_IWM demonstrated a faster convergence speed than PSOA_VMM, PSOA_CFM, and the GA did.
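A minimal sketch of the inertia-weight PSO update at the core of the PSOA_IWM variant, applied to a toy sphere function; the swarm size, coefficients, and bounds are generic textbook choices, not the study's settings.

```python
import numpy as np

def pso_inertia(f, dim, n=30, iters=200, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Particle swarm minimization with the inertia-weight velocity
    update: v <- w*v + c1*r1*(pbest - x) + c2*r2*(gbest - x)."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(-5, 5, (n, dim))
    v = np.zeros((n, dim))
    pbest = x.copy()
    pval = np.apply_along_axis(f, 1, x)
    g = pbest[pval.argmin()].copy()               # global best position
    for _ in range(iters):
        r1, r2 = rng.random((n, dim)), rng.random((n, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = x + v
        val = np.apply_along_axis(f, 1, x)
        better = val < pval                       # update personal bests
        pbest[better], pval[better] = x[better], val[better]
        g = pbest[pval.argmin()].copy()
    return g, pval.min()

best, fbest = pso_inertia(lambda z: (z ** 2).sum(), dim=3)
```

The inertia weight w trades off exploration (large w, particles keep momentum) against convergence speed (small w, particles settle toward the best-known positions), which is the lever behind the convergence-speed differences the study reports.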

  13. Designing a Multistage Supply Chain in Cross-Stage Reverse Logistics Environments: Application of Particle Swarm Optimization Algorithms

    PubMed Central

    Chiang, Tzu-An; Che, Z. H.

    2014-01-01

    This study designed a cross-stage reverse logistics course for defective products so that damaged products generated in downstream partners can be directly returned to upstream partners throughout the stages of a supply chain for rework and maintenance. To solve this reverse supply chain design problem, an optimal cross-stage reverse logistics mathematical model was developed. In addition, we developed a genetic algorithm (GA) and three particle swarm optimization (PSO) algorithms: the inertia weight method (PSOA_IWM), V Max method (PSOA_VMM), and constriction factor method (PSOA_CFM), which we employed to find solutions to support this mathematical model. Finally, a real case and five simulative cases with different scopes were used to compare the execution times, convergence times, and objective function values of the four algorithms used to validate the model proposed in this study. Regarding system execution time, the GA consumed more time than the other three PSOs did. Regarding objective function value, the GA, PSOA_IWM, and PSOA_CFM could obtain a lower convergence value than PSOA_VMM could. Finally, PSOA_IWM demonstrated a faster convergence speed than PSOA_VMM, PSOA_CFM, and the GA did. PMID:24772026

  14. Data fitting and image fine-tuning approach to solve the inverse problem in fluorescence molecular imaging

    NASA Astrophysics Data System (ADS)

    Gorpas, Dimitris; Politopoulos, Kostas; Yova, Dido; Andersson-Engels, Stefan

    2008-02-01

    One of the most challenging problems in medical imaging is to "see" a tumour embedded in tissue, which is a turbid medium, by using fluorescent probes for tumour labeling. This problem, despite the efforts made during recent years, has not yet been fully addressed, due to the non-linear nature of the inverse problem and the convergence failures of many optimization techniques. This paper describes a robust solution of the inverse problem, based on data fitting and image fine-tuning techniques. As a forward solver, the coupled radiative transfer equation and diffusion approximation model is proposed and solved via a finite element method, enhanced with adaptive multi-grids for faster and more accurate convergence. A database is constructed by applying the forward model to virtual tumours with known geometry, and thus known fluorophore distribution, embedded in simulated tissues. The fitting procedure produces the best match between the real and virtual data, and thus provides the initial estimate of the fluorophore distribution. Using this information, the coupled radiative transfer equation and diffusion approximation model has the initial values required for computationally reasonable and successful convergence during the image fine-tuning stage.

  15. An Experience Oriented-Convergence Improved Gravitational Search Algorithm for Minimum Variance Distortionless Response Beamforming Optimum.

    PubMed

    Darzi, Soodabeh; Tiong, Sieh Kiong; Tariqul Islam, Mohammad; Rezai Soleymanpour, Hassan; Kibria, Salehin

    2016-01-01

    An experience oriented-convergence improved gravitational search algorithm (ECGSA), based on two new modifications, searching through the best experiments and the use of a dynamic gravitational damping coefficient (α), is introduced in this paper. ECGSA saves its best fitness function evaluations and uses those as the agents' positions in the searching process. In this way, the optimal trajectories found are retained and the search starts from these trajectories, which allows the algorithm to avoid local optima. Also, the agents can move faster in the search space to obtain better exploration during the first stage of the searching process, and they can converge rapidly to the optimal solution at the final stage of the search by means of the proposed dynamic gravitational damping coefficient. The performance of ECGSA has been evaluated by applying it to eight standard benchmark functions along with six complicated composite test functions. It is also applied to the adaptive beamforming problem as a practical issue, to improve the weight vectors computed by the minimum variance distortionless response (MVDR) beamforming technique. The results of the proposed algorithm are compared with those of some well-known heuristic methods, verifying the proposed method in terms of both reaching optimal solutions and robustness.

  16. The application of mean field theory to image motion estimation.

    PubMed

    Zhang, J; Hanauer, G G

    1995-01-01

    Previously, Markov random field (MRF) model-based techniques have been proposed for image motion estimation. Since motion estimation is usually an ill-posed problem, various constraints are needed to obtain a unique and stable solution. The main advantage of the MRF approach is its capacity to incorporate such constraints, for instance, motion continuity within an object and motion discontinuity at the boundaries between objects. In the MRF approach, motion estimation is often formulated as an optimization problem, and two frequently used optimization methods are simulated annealing (SA) and iterative-conditional mode (ICM). Although the SA is theoretically optimal in the sense of finding the global optimum, it usually takes many iterations to converge. The ICM, on the other hand, converges quickly, but its results are often unsatisfactory due to its "hard decision" nature. Previously, the authors have applied the mean field theory to image segmentation and image restoration problems. It provides results nearly as good as SA but with much faster convergence. The present paper shows how the mean field theory can be applied to MRF model-based motion estimation. This approach is demonstrated on both synthetic and real-world images, where it produced good motion estimates.

  17. Increased Coal Plant Flexibility Can Improve Renewables Integration |

    Science.gov Websites

    Operating practices that enable lower turndowns, faster starts and stops, and faster ramping between load set-points give coal plants faster ramp rates and faster, less expensive starts. Flexible load is a complementary resource: demand response (DR) is a load-management practice of deliberately reducing or adding load to balance the system.

  18. DQM: Decentralized Quadratically Approximated Alternating Direction Method of Multipliers

    NASA Astrophysics Data System (ADS)

    Mokhtari, Aryan; Shi, Wei; Ling, Qing; Ribeiro, Alejandro

    2016-10-01

    This paper considers decentralized consensus optimization problems where nodes of a network have access to different summands of a global objective function. Nodes cooperate to minimize the global objective by exchanging information with neighbors only. A decentralized version of the alternating directions method of multipliers (DADMM) is a common method for solving this category of problems. DADMM exhibits linear convergence rate to the optimal objective but its implementation requires solving a convex optimization problem at each iteration. This can be computationally costly and may result in large overall convergence times. The decentralized quadratically approximated ADMM algorithm (DQM), which minimizes a quadratic approximation of the objective function that DADMM minimizes at each iteration, is proposed here. The consequent reduction in computational time is shown to have minimal effect on convergence properties. Convergence still proceeds at a linear rate with a guaranteed constant that is asymptotically equivalent to the DADMM linear convergence rate constant. Numerical results demonstrate advantages of DQM relative to DADMM and other alternatives in a logistic regression problem.

  19. Faster eating rates are associated with higher energy intakes during an ad libitum meal, higher BMI and greater adiposity among 4·5-year-old children: results from the Growing Up in Singapore Towards Healthy Outcomes (GUSTO) cohort.

    PubMed

    Fogel, Anna; Goh, Ai Ting; Fries, Lisa R; Sadananthan, Suresh A; Velan, S Sendhil; Michael, Navin; Tint, Mya-Thway; Fortier, Marielle V; Chan, Mei Jun; Toh, Jia Ying; Chong, Yap-Seng; Tan, Kok Hian; Yap, Fabian; Shek, Lynette P; Meaney, Michael J; Broekman, Birit F P; Lee, Yung Seng; Godfrey, Keith M; Chong, Mary F F; Forde, Ciarán G

    2017-04-01

    Faster eating rates are associated with increased energy intake, but little is known about the relationship between children's eating rate, food intake and adiposity. We examined whether children who eat faster consume more energy and whether this is associated with higher weight status and adiposity. We hypothesised that eating rate mediates the relationship between child weight and ad libitum energy intake. Children (n 386) from the Growing Up in Singapore Towards Healthy Outcomes cohort participated in a video-recorded ad libitum lunch at 4·5 years to measure acute energy intake. Videos were coded for three eating-behaviours (bites, chews and swallows) to derive a measure of eating rate (g/min). BMI and anthropometric indices of adiposity were measured. A subset of children underwent MRI scanning (n 153) to measure abdominal subcutaneous and visceral adiposity. Children above/below the median eating rate were categorised as slower and faster eaters, and compared across body composition measures. There was a strong positive relationship between eating rate and energy intake (r 0·61, P<0·001) and a positive linear relationship between eating rate and children's BMI status. Faster eaters consumed 75 % more energy content than slower eating children (Δ548 kJ (Δ131 kcal); 95 % CI 107·6, 154·4, P<0·001), and had higher whole-body (P<0·05) and subcutaneous abdominal adiposity (Δ118·3 cc; 95 % CI 24·0, 212·7, P=0·014). Mediation analysis showed that eating rate mediates the link between child weight and energy intake during a meal (b 13·59; 95 % CI 7·48, 21·83). Children who ate faster had higher energy intake, and this was associated with increased BMI z-score and adiposity.

  20. Faster eating rates are associated with higher energy intakes during an Ad libitum meal, higher BMI and greater adiposity among 4.5 year old children – Results from the GUSTO cohort

    PubMed Central

    Fogel, Anna; Goh, Ai Ting; Fries, Lisa R.; Sadananthan, Suresh Anand; Velan, S. Sendhil; Michael, Navin; Tint, Mya Thway; Fortier, Marielle Valerie; Chan, Mei Jun; Toh, Jia Ying; Chong, Yap-Seng; Tan, Kok Hian; Yap, Fabian; Shek, Lynette P.; Meaney, Michael J.; Broekman, Birit F.P.; Lee, Yung Seng; Godfrey, Keith M.; Chong, Mary Foong Fong; Forde, Ciarán Gerard

    2017-01-01

    Faster eating rates are associated with increased energy intake, but less is known about the relationship between children’s eating rate, food intake and adiposity. We examined whether children who eat faster consume more energy and whether this is associated with higher weight status and adiposity. We hypothesized that eating rate mediates the relationship between child weight and ad libitum energy intake. Children (N=386) from the Growing Up in Singapore towards Healthy Outcomes (GUSTO) cohort participated in a video-recorded ad libitum lunch at 4.5 years to measure acute energy intake. Videos were coded for three eating-behaviours (bites, chews and swallows) to derive a measure of eating rate (g/min). Body mass index (BMI) and anthropometric indices of adiposity were measured. A subset of children underwent MRI scanning (n=153) to measure abdominal subcutaneous and visceral adiposity. Children above/below the median eating rate were categorised as slower and faster eaters, and compared across body composition measures. There was a strong positive relationship between eating rate and energy intake (r=0.61, p<0.001) and a positive linear relationship between eating rate and children’s BMI status. Faster eaters consumed 75% more calories than slower eating children (Δ131 kcal, 95%CI [107.6, 154.4], p<0.001), and had higher whole-body (p<0.05) and subcutaneous abdominal adiposity (Δ118.3 cc; 95%CI [24.0, 212.7], p=0.014). Mediation analysis showed that eating rate mediates the link between child weight and energy intake during a meal (b=13.59, 95% CI [7.48, 21.83]). Children who ate faster had higher energy intake, and this was associated with increased BMIz and adiposity. PMID:28462734

  1. Consistency of Eating Rate, Oral Processing Behaviours and Energy Intake across Meals

    PubMed Central

    McCrickerd, Keri; Forde, Ciaran G.

    2017-01-01

    Faster eating has been identified as a risk factor for obesity and the current study tested whether eating rate is consistent within an individual and linked to energy intake across multiple meals. Measures of ad libitum intake, eating rate, and oral processing at the same or similar test meal were recorded on four non-consecutive days for 146 participants (117 male, 29 female) recruited across four separate studies. All the meals were video recorded, and oral processing behaviours were derived through behavioural coding. Eating behaviours showed good to excellent consistency across the meals (intra-class correlation coefficients > 0.76, p < 0.001) and participants who ate faster took larger bites (β ≥ 0.39, p < 0.001) and consistently consumed more energy, independent of meal palatability, sex, body composition and reported appetite (β ≥ 0.17, p ≤ 0.025). Importantly, eating faster at one meal predicted faster eating and increased energy intake at subsequent meals (β > 0.20, p < 0.05). Faster eating is relatively consistent within individuals and is predictive of faster eating and increased energy intake at subsequent similar meals consumed in a laboratory context, independent of individual differences in body composition. PMID:28817066

  2. Continental collision slowing due to viscous mantle lithosphere rather than topography.

    PubMed

    Clark, Marin Kristen

    2012-02-29

    Because the inertia of tectonic plates is negligible, plate velocities result from the balance of forces acting at plate margins and along their base. Observations of past plate motion derived from marine magnetic anomalies provide evidence of how continental deformation may contribute to plate driving forces. A decrease in convergence rate at the inception of continental collision is expected because of the greater buoyancy of continental than oceanic lithosphere, but post-collisional rates are less well understood. Slowing of convergence has generally been attributed to the development of high topography that further resists convergent motion; however, the role of deforming continental mantle lithosphere on plate motions has not previously been considered. Here I show that the rate of India's penetration into Eurasia has decreased exponentially since their collision. The exponential decrease in convergence rate suggests that contractional strain across Tibet has been constant throughout the collision at a rate of 7.03 × 10(-16) s(-1), which matches the current rate. A constant bulk strain rate of the orogen suggests that convergent motion is resisted by constant average stress (constant force) applied to a relatively uniform layer or interface at depth. This finding follows new evidence that the mantle lithosphere beneath Tibet is intact, which supports the interpretation that the long-term strain history of Tibet reflects deformation of the mantle lithosphere. Under conditions of constant stress and strength, the deforming continental lithosphere creates a type of viscous resistance that affects plate motion irrespective of how topography evolved.
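The link between a constant bulk strain rate and exponentially slowing convergence can be made explicit with a back-of-envelope sketch consistent with the abstract's argument. If the deforming orogen has width $L(t)$ and shortens at constant strain rate $\dot\varepsilon$, then:

```latex
\dot\varepsilon \;=\; -\frac{1}{L}\frac{\mathrm{d}L}{\mathrm{d}t} \;=\; \text{const}
\quad\Longrightarrow\quad
L(t) \;=\; L_0\, e^{-\dot\varepsilon t},
\qquad
v(t) \;=\; -\frac{\mathrm{d}L}{\mathrm{d}t} \;=\; \dot\varepsilon\, L_0\, e^{-\dot\varepsilon t}.
```

The convergence velocity $v(t)$ thus decays exponentially with decay constant $\dot\varepsilon$, matching the observed exponential slowing of India's penetration into Eurasia at the quoted rate of $7.03 \times 10^{-16}\ \mathrm{s}^{-1}$.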

  3. Notes on Accuracy of Finite-Volume Discretization Schemes on Irregular Grids

    NASA Technical Reports Server (NTRS)

    Diskin, Boris; Thomas, James L.

    2011-01-01

    Truncation-error analysis is a reliable tool in predicting convergence rates of discretization errors on regular smooth grids. However, it is often misleading in application to finite-volume discretization schemes on irregular (e.g., unstructured) grids. Convergence of truncation errors severely degrades on general irregular grids; a design-order convergence can be achieved only on grids with a certain degree of geometric regularity. Such degradation of truncation-error convergence does not necessarily imply a lower-order convergence of discretization errors. In these notes, irregular-grid computations demonstrate that the design-order discretization-error convergence can be achieved even when truncation errors exhibit a lower-order convergence or, in some cases, do not converge at all.

  4. Convergence properties of halo merger trees; halo and substructure merger rates across cosmic history

    NASA Astrophysics Data System (ADS)

    Poole, Gregory B.; Mutch, Simon J.; Croton, Darren J.; Wyithe, Stuart

    2017-12-01

    We introduce GBPTREES: an algorithm for constructing merger trees from cosmological simulations, designed to identify and correct for pathological cases introduced by errors or ambiguities in the halo finding process. GBPTREES is built upon a halo matching method utilizing pseudo-radial moments constructed from radially sorted particle ID lists (no other information is required) and a scheme for classifying merger tree pathologies from networks of matches made to-and-from haloes across snapshots ranging forward-and-backward in time. Focusing on SUBFIND catalogues for this work, a sweep of parameters influencing our merger tree construction yields the optimal snapshot cadence and scanning range required for converged results. Pathologies proliferate when snapshots are spaced by ≲0.128 dynamical times; conveniently similar to that needed for convergence of semi-analytical modelling, as established by Benson et al. Total merger counts are converged at the level of ∼5 per cent for friends-of-friends (FoF) haloes of size np ≳ 75 across a factor of 512 in mass resolution, but substructure rates converge more slowly with mass resolution, reaching convergence of ∼10 per cent for np ≳ 100 and particle mass mp ≲ 109 M⊙. We present analytic fits to FoF and substructure merger rates across nearly all observed galactic history (z ≤ 8.5). While we find good agreement with the results presented by Fakhouri et al. for FoF haloes, a slightly flatter dependence on merger ratio and increased major merger rates are found, reducing previously reported discrepancies with extended Press-Schechter estimates. When appropriately defined, substructure merger rates show a similar mass ratio dependence as FoF rates, but with stronger mass and redshift dependencies for their normalization.

  5. On singlet s-wave electron-hydrogen scattering.

    NASA Technical Reports Server (NTRS)

    Madan, R. N.

    1973-01-01

    Discussion of various zeroth-order approximations to s-wave scattering of electrons by hydrogen atoms below the first excitation threshold. The formalism previously developed by the author (1967, 1968) is applied to Feshbach operators to derive integro-differential equations, with the optical-potential set equal to zero, for the singlet and triplet cases. Phase shifts of s-wave scattering are computed in the zeroth-order approximation of the Feshbach operator method and in the static-exchange approximation. It is found that the convergence of numerical computations is faster in the former approximation than in the latter.

  6. IMPROVED ALGORITHMS FOR RADAR-BASED RECONSTRUCTION OF ASTEROID SHAPES

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Greenberg, Adam H.; Margot, Jean-Luc

    We describe our implementation of a global-parameter optimizer and Square Root Information Filter into the asteroid-modeling software shape. We compare the performance of our new optimizer with that of the existing sequential optimizer when operating on various forms of simulated data and actual asteroid radar data. In all cases, the new implementation performs substantially better than its predecessor: it converges faster, produces shape models that are more accurate, and solves for spin axis orientations more reliably. We discuss potential future changes to improve shape's fitting speed and accuracy.

  7. Faster Heavy Ion Transport for HZETRN

    NASA Technical Reports Server (NTRS)

    Slaba, Tony C.

    2013-01-01

    The deterministic particle transport code HZETRN was developed to enable fast and accurate space radiation transport through materials. As more complex transport solutions are implemented for neutrons, light ions (Z ≤ 2), mesons, and leptons, it is important to maintain overall computational efficiency. In this work, the heavy ion (Z > 2) transport algorithm in HZETRN is reviewed, and a simple modification is shown to provide an approximate 5× decrease in execution time for galactic cosmic ray transport. Convergence tests and other comparisons are carried out to verify that numerical accuracy is maintained in the new algorithm.

  8. Transition-Tempered Metadynamics: Robust, Convergent Metadynamics via On-the-Fly Transition Barrier Estimation.

    PubMed

    Dama, James F; Rotskoff, Grant; Parrinello, Michele; Voth, Gregory A

    2014-09-09

    Well-tempered metadynamics has proven to be a practical and efficient adaptive enhanced sampling method for the computational study of biomolecular and materials systems. However, choosing its tunable parameter can be challenging and requires balancing a trade-off between fast escape from local metastable states and fast convergence of an overall free energy estimate. In this article, we present a new smoothly convergent variant of metadynamics, transition-tempered metadynamics, that removes that trade-off and is more robust to changes in its own single tunable parameter, resulting in substantial speed and accuracy improvements. The new method is specifically designed to study state-to-state transitions in which the states of greatest interest are known ahead of time, but transition mechanisms are not. The design is guided by a picture of adaptive enhanced sampling as a means to increase dynamical connectivity of a model's state space until percolation between all points of interest is reached, and it uses the degree of dynamical percolation to automatically tune the convergence rate. We apply the new method to Brownian dynamics on 48 random 1D surfaces, blocked alanine dipeptide in vacuo, and aqueous myoglobin, finding that transition-tempered metadynamics substantially and reproducibly improves upon well-tempered metadynamics in terms of first barrier crossing rate, convergence rate, and robustness to the choice of tuning parameter. Moreover, the trade-off between first barrier crossing rate and convergence rate is eliminated: the new method drives escape from an initial metastable state as fast as metadynamics without tempering, regardless of tuning.
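    The well-tempered rule that the new method builds upon is compact enough to sketch: each deposited Gaussian hill is scaled by exp(-V_bias/ΔT), so hills shrink wherever bias has already accumulated, and ΔT is the single tunable parameter discussed above. Below is a minimal, illustrative sketch on a 1-D double well with overdamped Brownian dynamics; the function name, parameter values, and potential are assumptions for illustration, not taken from the paper.

```python
import numpy as np

def well_tempered_metadynamics(x0, force, kT, delta_T, w0=0.5, sigma=0.2,
                               n_steps=20000, stride=100, dt=1e-3, seed=0):
    """Overdamped Brownian dynamics with a well-tempered metadynamics
    bias acting on a single collective variable x (illustrative sketch)."""
    rng = np.random.default_rng(seed)
    centers, heights = [], []

    def bias(x_):
        if not centers:
            return 0.0
        c, h = np.array(centers), np.array(heights)
        return float(np.sum(h * np.exp(-(x_ - c) ** 2 / (2 * sigma ** 2))))

    def bias_force(x_):
        if not centers:
            return 0.0
        c, h = np.array(centers), np.array(heights)
        return float(np.sum(h * (x_ - c) / sigma ** 2
                            * np.exp(-(x_ - c) ** 2 / (2 * sigma ** 2))))

    x = x0
    for step in range(n_steps):
        # Euler-Maruyama step on the biased potential
        x += dt * (force(x) + bias_force(x)) \
             + np.sqrt(2 * kT * dt) * rng.standard_normal()
        if step % stride == 0:
            # well-tempered rule: hill height decays where bias has built up
            heights.append(w0 * np.exp(-bias(x) / delta_T))
            centers.append(x)
    return x, np.array(centers), np.array(heights)
```

    The first hill carries the full height w0 and later hills shrink as bias accumulates; transition-tempered metadynamics, per the abstract, replaces this fixed-ΔT schedule with tempering governed by an on-the-fly estimate of the barrier between the predefined states.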

  9. Procedural Pain Heart Rate Responses in Massaged Preterm Infants

    PubMed Central

    Diego, Miguel A.; Field, Tiffany; Hernandez-Reif, Maria

    2009-01-01

    Heart rate (HR) responses to the removal of a monitoring lead were assessed in 56 preterm infants who received moderate pressure, light pressure or no massage therapy. The infants who received moderate pressure massage therapy exhibited lower increases in HR suggesting an attenuated pain response. The heart rate of infants who received moderate pressure massage also returned to baseline faster than the heart rate of the other two groups, suggesting a faster recovery rate. PMID:19185352

  10. Articulation rate across dialect, age, and gender

    PubMed Central

    Jacewicz, Ewa; Fox, Robert A.; O’Neill, Caitlin; Salmons, Joseph

    2009-01-01

    The understanding of sociolinguistic variation is growing rapidly, but basic gaps still remain. Whether some languages or dialects are spoken faster or slower than others constitutes such a gap. Speech tempo is interconnected with social, physical and psychological markings of speech. This study examines regional variation in articulation rate and its manifestations across speaker age, gender and speaking situations (reading vs. free conversation). The results of an experimental investigation show that articulation rate differs significantly between two regional varieties of American English examined here. A group of Northern speakers (from Wisconsin) spoke significantly faster than a group of Southern speakers (from North Carolina). With regard to age and gender, young adults read faster than older adults in both regions; in free speech, only Northern young adults spoke faster than older adults. Effects of gender were smaller and less consistent; men generally spoke slightly faster than women. As the body of work on the sociophonetics of American English continues to grow in scope and depth, we argue that it is important to include fundamental phonetic information as part of our catalog of regional differences and patterns of change in American English. PMID:20161445

  11. Robust and efficient pharmacokinetic parameter non-linear least squares estimation for dynamic contrast enhanced MRI of the prostate.

    PubMed

    Kargar, Soudabeh; Borisch, Eric A; Froemming, Adam T; Kawashima, Akira; Mynderse, Lance A; Stinson, Eric G; Trzasko, Joshua D; Riederer, Stephen J

    2018-05-01

    To describe an efficient numerical optimization technique using non-linear least squares to estimate perfusion parameters for the Tofts and extended Tofts models from dynamic contrast enhanced (DCE) MRI data and apply the technique to prostate cancer. Parameters were estimated by fitting the two Tofts-based perfusion models to the acquired data via non-linear least squares. We apply Variable Projection (VP) to convert the fitting problem from a multi-dimensional to a one-dimensional line search to improve computational efficiency and robustness. Using simulation and DCE-MRI studies in twenty patients with suspected prostate cancer, the VP-based solver was compared against the traditional Levenberg-Marquardt (LM) strategy for accuracy, noise amplification, convergence robustness, and computation time. The simulation demonstrated that VP and LM were both accurate in that the medians closely matched assumed values across typical signal to noise ratio (SNR) levels for both Tofts models. VP and LM showed similar noise sensitivity. Studies using the patient data showed that the VP method reliably converged and matched results from LM with approximately 3× and 2× reductions in computation time for the standard (two-parameter) and extended (three-parameter) Tofts models. While LM failed to converge in 14% of the patient data, VP converged in 100% of cases. The VP-based method for non-linear least squares estimation of perfusion parameters for prostate MRI is equivalent in accuracy and robustness to noise, while being more reliably (100%) convergent and computationally about 3× (Tofts model) and 2× (extended Tofts model) faster than the LM-based method. Copyright © 2017 Elsevier Inc. All rights reserved.
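    The variable-projection idea is easy to illustrate on a toy separable model. For y ≈ a·exp(-k·t) the linear amplitude a has a closed-form optimum for any fixed k, so a two-parameter fit collapses to a one-dimensional search over k — the same dimension reduction the abstract describes (the actual Tofts kernel is a convolution and more involved). A minimal sketch with a grid-based line search; all names and values here are illustrative:

```python
import numpy as np

def fit_separable(t, y, k_lo=1e-3, k_hi=10.0, n_grid=4001):
    """Variable projection for y ≈ a*exp(-k*t): eliminate the linear
    parameter a in closed form, then line-search over the nonlinear k."""
    best_r2, best_a, best_k = np.inf, None, None
    for k in np.linspace(k_lo, k_hi, n_grid):
        phi = np.exp(-k * t)           # basis vector for this trial k
        a = (phi @ y) / (phi @ phi)    # optimal amplitude, closed form
        r2 = float(np.sum((y - a * phi) ** 2))
        if r2 < best_r2:
            best_r2, best_a, best_k = r2, a, k
    return best_a, best_k
```

    Because the inner solve is exact, only the one nonlinear parameter is searched, which is why VP is both faster and less prone to convergence failures than a joint nonlinear solve over all parameters.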

  12. Direct observation of salt effects on molecular interactions through explicit-solvent molecular dynamics simulations: differential effects on electrostatic and hydrophobic interactions and comparisons to Poisson-Boltzmann theory.

    PubMed

    Thomas, Andrew S; Elcock, Adrian H

    2006-06-21

    Proteins and other biomolecules function in cellular environments that contain significant concentrations of dissolved salts and even simple salts such as NaCl can significantly affect both the kinetics and thermodynamics of macromolecular interactions. As one approach to directly observing the effects of salt on molecular associations, explicit-solvent molecular dynamics (MD) simulations have been used here to model the association of pairs of the amino acid analogues acetate and methylammonium in aqueous NaCl solutions of concentrations 0, 0.1, 0.3, 0.5, 1, and 2 M. By performing simulations of 500 ns duration for each salt concentration, properly converged estimates of the free energy of interaction of the two molecules have been obtained for all intermolecular separation distances and geometries. The resulting free energy surfaces are shown to give significant new insights into the way salt modulates interactions between molecules containing both charged and hydrophobic groups and are shown to provide valuable new benchmarks for testing the description of salt effects provided by the simpler but faster Poisson-Boltzmann method. In addition, the complex many-dimensional free energy surfaces are shown to be decomposable into a number of one-dimensional effective energy functions. This decomposition (a) allows an unambiguous view of the qualitative differences between the salt dependence of electrostatic and hydrophobic interactions, (b) gives a clear rationalization for why salt exerts different effects on protein-protein association and dissociation rates, and (c) produces simplified energy functions that can be readily used in much faster Brownian dynamics simulations.

  13. How to model supernovae in simulations of star and galaxy formation

    NASA Astrophysics Data System (ADS)

    Hopkins, Philip F.; Wetzel, Andrew; Kereš, Dušan; Faucher-Giguère, Claude-André; Quataert, Eliot; Boylan-Kolchin, Michael; Murray, Norman; Hayward, Christopher C.; El-Badry, Kareem

    2018-06-01

    We study the implementation of mechanical feedback from supernovae (SNe) and stellar mass loss in galaxy simulations, within the Feedback In Realistic Environments (FIRE) project. We present the FIRE-2 algorithm for coupling mechanical feedback, which can be applied to any hydrodynamics method (e.g. fixed-grid, moving-mesh, and mesh-less methods), and to black hole as well as stellar feedback. This algorithm ensures manifest conservation of mass, energy, and momentum, and avoids imprinting `preferred directions' on the ejecta. We show that it is critical to incorporate both momentum and thermal energy of mechanical ejecta in a self-consistent manner, accounting for SNe cooling radii when they are not resolved. Using idealized simulations of single SN explosions, we show that the FIRE-2 algorithm, independent of resolution, reproduces converged solutions in both energy and momentum. In contrast, common `fully thermal' (energy-dump) or `fully kinetic' (particle-kicking) schemes in the literature depend strongly on resolution: when applied at mass resolution ≳100 M⊙, they diverge by orders of magnitude from the converged solution. In galaxy-formation simulations, this divergence leads to orders-of-magnitude differences in galaxy properties, unless those models are adjusted in a resolution-dependent way. We show that all models that individually time-resolve SNe converge to the FIRE-2 solution at sufficiently high resolution (<100 M⊙). However, in both idealized single-SN simulations and cosmological galaxy-formation simulations, the FIRE-2 algorithm converges much faster than other sub-grid models without re-tuning parameters.

  14. Ridge-trench collision in Archean and Post-Archean crustal growth: Evidence from southern Chile

    NASA Technical Reports Server (NTRS)

    Nelson, E. P.; Forsythe, R. D.

    1988-01-01

    The growth of continental crust at convergent plate margins involves both continuous and episodic processes. Ridge-trench collision is one episodic process that can cause significant magmatic and tectonic effects on convergent plate margins. Because the sites of ridge collision (ridge-trench triple junctions) generally migrate along convergent plate boundaries, the effects of ridge collision will be highly diachronous in Andean-type orogenic belts and may not be adequately recognized in the geologic record. The Chile margin triple junction (CMTJ, 46 deg S), where the actively spreading Chile rise is colliding with the sediment-filled Peru-Chile trench, is geometrically and kinematically the simplest modern example of ridge collision. The south Chile margin illustrates the importance of the ridge-collision tectonic setting in crustal evolution at convergent margins. Similarities between ridge-collision features in southern Chile and features of Archean greenstone belts raise the question of the importance of ridge collision in Archean crustal growth. Archean plate tectonic processes were probably different from those of today; these differences may have affected the nature and importance of ridge collision during Archean crustal growth. In conclusion, it is suggested that smaller plates, greater ridge length, and/or faster spreading all point to the likelihood that ridge collision played a greater role in crustal growth and development of the greenstone-granite terranes during the Archean. However, the effects of modern ridge collision, and the processes involved, are not well enough known to develop specific models for Archean ridge collision.

  15. Optimizing convergence rates of alternating minimization reconstruction algorithms for real-time explosive detection applications

    NASA Astrophysics Data System (ADS)

    Bosch, Carl; Degirmenci, Soysal; Barlow, Jason; Mesika, Assaf; Politte, David G.; O'Sullivan, Joseph A.

    2016-05-01

    X-ray computed tomography reconstruction for medical, security and industrial applications has evolved through 40 years of experience with rotating gantry scanners using analytic reconstruction techniques such as filtered back projection (FBP). In parallel, research into statistical iterative reconstruction algorithms has evolved to apply to sparse view scanners in nuclear medicine, low data rate scanners in Positron Emission Tomography (PET) [5, 7, 10] and more recently to reduce exposure to ionizing radiation in conventional X-ray CT scanners. Multiple approaches to statistical iterative reconstruction have been developed based primarily on variations of expectation maximization (EM) algorithms. The primary benefit of EM algorithms is the guarantee of convergence that is maintained when iterative corrections are made within the limits of convergent algorithms. The primary disadvantage, however, is that strict adherence to correction limits of convergent algorithms extends the number of iterations and ultimate timeline to complete a 3D volumetric reconstruction. Researchers have studied methods to accelerate convergence through more aggressive corrections [1], ordered subsets [1, 3, 4, 9] and spatially variant image updates. In this paper we describe the development of an alternating minimization (AM) reconstruction algorithm with accelerated convergence for use in a real-time explosive detection application for aviation security. By judiciously applying multiple acceleration techniques and advanced GPU processing architectures, we are able to perform 3D reconstruction of scanned passenger baggage at a rate of 75 slices per second. Analysis of the results on stream-of-commerce passenger bags demonstrates accelerated convergence by factors of 8 to 15, when comparing images from accelerated and strictly convergent algorithms.
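    The ordered-subsets idea mentioned above is simple to sketch: an EM-style multiplicative update is applied using only a subset of the measurements at a time, cycling through the subsets, so one full pass makes roughly n_subsets times the progress of a single plain MLEM iteration. The following is a minimal emission-style OSEM sketch on a toy system — not the authors' AM algorithm; names and the test problem are illustrative:

```python
import numpy as np

def os_em(A, y, n_iter=50, n_subsets=4, eps=1e-12):
    """Ordered-subsets EM (OSEM): each MLEM-style multiplicative update
    uses only a subset of the measurement rows, cycling through subsets."""
    m, n = A.shape
    x = np.ones(n)                               # positive initial estimate
    subsets = [np.arange(s, m, n_subsets) for s in range(n_subsets)]
    for _ in range(n_iter):
        for idx in subsets:
            As = A[idx]
            sens = As.sum(axis=0)                # subset sensitivity A_s^T 1
            ratio = y[idx] / np.maximum(As @ x, eps)
            x = x * (As.T @ ratio) / np.maximum(sens, eps)
    return x
```

    The multiplicative form preserves non-negativity automatically; the acceleration comes at the cost of the strict convergence guarantee of plain EM, which is exactly the trade-off the abstract discusses.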

  16. Flight mechanics and control of escape manoeuvres in hummingbirds. I. Flight kinematics.

    PubMed

    Cheng, Bo; Tobalske, Bret W; Powers, Donald R; Hedrick, Tyson L; Wethington, Susan M; Chiu, George T C; Deng, Xinyan

    2016-11-15

    Hummingbirds are nature's masters of aerobatic manoeuvres. Previous research shows that hummingbirds and insects converged evolutionarily upon similar aerodynamic mechanisms and kinematics in hovering. Herein, we use three-dimensional kinematic data to begin to test for similar convergence of kinematics used for escape flight and to explore the effects of body size upon manoeuvring. We studied four hummingbird species in North America including two large species (magnificent hummingbird, Eugenes fulgens, 7.8 g, and blue-throated hummingbird, Lampornis clemenciae, 8.0 g) and two smaller species (broad-billed hummingbird, Cynanthus latirostris, 3.4 g, and black-chinned hummingbird, Archilochus alexandri, 3.1 g). Starting from a steady hover, hummingbirds consistently manoeuvred away from perceived threats using a drastic escape response that featured body pitch and roll rotations coupled with a large linear acceleration. Hummingbirds changed their flapping frequency and wing trajectory in all three degrees of freedom on a stroke-by-stroke basis, likely causing rapid and significant alteration of the magnitude and direction of aerodynamic forces. Thus, it appears that the flight control of hummingbirds does not obey the 'helicopter model' that is valid for similar escape manoeuvres in fruit flies. Except for broad-billed hummingbirds, the hummingbirds had faster reaction times than those reported for visual feedback control in insects. The two larger hummingbird species performed pitch rotations and global-yaw turns with considerably larger magnitude than the smaller species, but roll rates and cumulative roll angles were similar among the four species. © 2016. Published by The Company of Biologists Ltd.

  17. Short‐term time step convergence in a climate model

    PubMed Central

    Rasch, Philip J.; Taylor, Mark A.; Jablonowski, Christiane

    2015-01-01

    Abstract This paper evaluates the numerical convergence of very short (1 h) simulations carried out with a spectral‐element (SE) configuration of the Community Atmosphere Model version 5 (CAM5). While the horizontal grid spacing is fixed at approximately 110 km, the process‐coupling time step is varied between 1800 and 1 s to reveal the convergence rate with respect to the temporal resolution. Special attention is paid to the behavior of the parameterized subgrid‐scale physics. First, a dynamical core test with reduced dynamics time steps is presented. The results demonstrate that the experimental setup is able to correctly assess the convergence rate of the discrete solutions to the adiabatic equations of atmospheric motion. Second, results from full‐physics CAM5 simulations with reduced physics and dynamics time steps are discussed. It is shown that the convergence rate is 0.4—considerably slower than the expected rate of 1.0. Sensitivity experiments indicate that, among the various subgrid‐scale physical parameterizations, the stratiform cloud schemes are associated with the largest time‐stepping errors, and are the primary cause of slow time step convergence. While the details of our findings are model specific, the general test procedure is applicable to any atmospheric general circulation model. The need for more accurate numerical treatments of physical parameterizations, especially the representation of stratiform clouds, is likely common in many models. The suggested test technique can help quantify the time‐stepping errors and identify the related model sensitivities. PMID:27660669
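    The convergence rate quoted above is the exponent p in error ≈ C·Δt^p, estimated from runs with successively reduced time steps and a log-log fit. The procedure can be sketched with a toy ODE and forward Euler, whose known first-order behaviour the fit should recover; the solver and problem below are illustrative, not CAM5:

```python
import numpy as np

def convergence_order(solver, dts, reference):
    """Estimate the temporal order p from errors e(dt) ~ C*dt**p
    via a least-squares fit in log-log space."""
    errs = np.array([abs(solver(dt) - reference) for dt in dts])
    p, _ = np.polyfit(np.log(dts), np.log(errs), 1)
    return p

def euler_decay(dt, k=1.0, T=1.0, y0=1.0):
    """Forward Euler for y' = -k*y integrated to time T."""
    n = int(round(T / dt))
    y = y0
    for _ in range(n):
        y -= dt * k * y
    return y
```

    Applied to a full model, `solver` would be one short simulation at the given time step and `reference` a high-resolution control run; a fitted slope well below the scheme's formal order (0.4 versus 1.0 in the abstract) then points to the parameterizations rather than the dynamical core.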

  18. Single neuropsychological test scores associated with rate of cognitive decline in early Alzheimer disease.

    PubMed

    Parikh, Mili; Hynan, Linda S; Weiner, Myron F; Lacritz, Laura; Ringe, Wendy; Cullum, C Munro

    2014-01-01

    Alzheimer disease (AD) characteristically begins with episodic memory impairment followed by other cognitive deficits; however, the course of illness varies, with substantial differences in the rate of cognitive decline. For research and clinical purposes it would be useful to distinguish persons who will progress slowly from persons who will progress at an average or faster rate. Our objective was to use neurocognitive performance features and disease-specific and health information to determine a predictive model for the rate of cognitive decline in participants with mild AD. We reviewed the records of a series of 96 consecutive participants with mild AD from 1995 to 2011 who had been administered selected neurocognitive tests and clinical measures. Based on Clinical Dementia Rating (CDR) of functional and cognitive decline over 2 years, participants were classified as Faster (n = 45) or Slower (n = 51) Progressors. Stepwise logistic regression analyses using neurocognitive performance features, disease-specific, health, and demographic variables were performed. Neuropsychological scores that distinguished Faster from Slower Progressors included Trail Making Test - A, Digit Symbol, and California Verbal Learning Test (CVLT) Total Learned and Primacy Recall. No disease-specific, health, or demographic variable predicted rate of progression; however, history of heart disease showed a trend. Among the neuropsychological variables, Trail Making Test - A best distinguished Faster from Slower Progressors, with an overall accuracy of 68%. In an omnibus model including neuropsychological, disease-specific, health, and demographic variables, only Trail Making Test - A distinguished between groups. Several neuropsychological performance features were associated with the rate of cognitive decline in mild AD, with baseline Trail Making Test - A performance best separating those who declined at an average or faster rate from those who showed slower progression.

  19. Efficient Development of High Fidelity Structured Volume Grids for Hypersonic Flow Simulations

    NASA Technical Reports Server (NTRS)

    Alter, Stephen J.

    2003-01-01

    A new technique for the control of grid line spacing and intersection angles of a structured volume grid, using elliptic partial differential equations (PDEs) is presented. Existing structured grid generation algorithms make use of source term hybridization to provide control of grid lines, imposing orthogonality implicitly at the boundary and explicitly on the interior of the domain. A bridging function between the two types of grid line control is typically used to blend the different orthogonality formulations. It is shown that utilizing such a bridging function with source term hybridization can result in the excessive use of computational resources and diminishes robustness. A new approach, Anisotropic Lagrange Based Trans-Finite Interpolation (ALBTFI), is offered as a replacement for source term hybridization. The ALBTFI technique captures the essence of the desired grid controls while improving the convergence rate of the elliptic PDEs when compared with source term hybridization. Grid generation on a blunt cone and a Shuttle Orbiter is used to demonstrate and assess the ALBTFI technique, which is shown to be as much as 50% faster and more robust, and to produce higher quality grids than source term hybridization.
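    Trans-finite interpolation, the Lagrange-based building block that the elliptic smoothing then refines, fills the interior of a block from its four boundary curves alone. A minimal bilinear TFI sketch — without the anisotropic controls of ALBTFI, and with illustrative array conventions:

```python
import numpy as np

def tfi_grid(bottom, top, left, right):
    """Bilinear transfinite interpolation: fill a structured 2-D grid from
    its four boundary curves. bottom/top have shape (ni, 2); left/right
    have shape (nj, 2); the curves must share corner points."""
    ni, nj = bottom.shape[0], left.shape[0]
    u = np.linspace(0, 1, ni)[:, None, None]      # (ni, 1, 1)
    v = np.linspace(0, 1, nj)[None, :, None]      # (1, nj, 1)
    B, T = bottom[:, None, :], top[:, None, :]    # (ni, 1, 2)
    L, R = left[None, :, :], right[None, :, :]    # (1, nj, 2)
    P00, P10 = bottom[0], bottom[-1]              # shared corner points
    P01, P11 = top[0], top[-1]
    # boundary blending minus the doubly counted corner contribution
    grid = ((1 - v) * B + v * T + (1 - u) * L + u * R
            - ((1 - u) * (1 - v) * P00 + u * (1 - v) * P10
               + (1 - u) * v * P01 + u * v * P11))
    return grid                                   # (ni, nj, 2)
```

    On a unit square the formula reproduces a uniform grid exactly; on curved boundaries it gives the algebraic initial grid that elliptic PDE smoothing then iterates toward the desired spacing and orthogonality.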

  20. Improved dichotomous search frequency offset estimator for burst-mode continuous phase modulation

    NASA Astrophysics Data System (ADS)

    Zhai, Wen-Chao; Li, Zan; Si, Jiang-Bo; Bai, Jun

    2015-11-01

    A data-aided technique for carrier frequency offset estimation with continuous phase modulation (CPM) in burst-mode transmission is presented. The proposed technique first exploits a special pilot sequence, or training sequence, to form a sinusoidal waveform. Then, an improved dichotomous search frequency offset estimator is introduced to determine the frequency offset using the sinusoid. Theoretical analysis and simulation results indicate that our estimator is noteworthy in the following aspects. First, the estimator can operate independently of timing recovery. Second, it has a relatively low outlier threshold, i.e., the minimum signal-to-noise ratio (SNR) required to guarantee estimation accuracy. Finally, the most important property is that our estimator is complexity-reduced compared to the existing dichotomous search methods: it eliminates the need for fast Fourier transform (FFT) and modulation removal, and exhibits a faster convergence rate without accuracy degradation. Project supported by the National Natural Science Foundation of China (Grant No. 61301179), the Doctorial Programs Foundation of the Ministry of Education, China (Grant No. 20110203110011), and the Programme of Introducing Talents of Discipline to Universities, China (Grant No. B08038).
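    The dichotomous-search principle can be sketched independently of the CPM details: locate a coarse periodogram peak, then repeatedly halve the frequency step around the current estimate, roughly doubling accuracy per iteration without any FFT. The sketch below operates on a plain complex exponential; it is an illustration of the search strategy, not the paper's estimator:

```python
import numpy as np

def periodogram(x, f):
    """Periodogram value |sum_n x[n] e^{-j2*pi*f*n}|^2 at one frequency."""
    n = np.arange(x.size)
    return np.abs(x @ np.exp(-2j * np.pi * f * n)) ** 2

def dichotomous_freq(x, f_lo=0.0, f_hi=0.5, n_coarse=64, n_refine=30):
    """Coarse grid search followed by dichotomous (step-halving)
    refinement of the periodogram peak -- no FFT required."""
    fs = np.linspace(f_lo, f_hi, n_coarse, endpoint=False)
    f = fs[np.argmax([periodogram(x, fi) for fi in fs])]
    step = (f_hi - f_lo) / n_coarse
    for _ in range(n_refine):
        step /= 2
        # keep the best of the current estimate and its two neighbours
        f = max((f - step, f, f + step), key=lambda fi: periodogram(x, fi))
    return f
```

    Each refinement needs only two extra periodogram evaluations, which is the complexity advantage dichotomous search holds over dense FFT zero-padding.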

  1. Analysis of sensorless control of brushless DC motor using unknown input observer with different gains

    NASA Astrophysics Data System (ADS)

    Astik, Mitesh B.; Bhatt, Praghnesh; Bhalja, Bhavesh R.

    2017-03-01

    A sensorless control scheme based on an unknown input observer is presented in this paper in which back EMF of the brushless DC motor (BLDC) is continuously estimated from available line voltages and currents. During negative rotation of the motor, actual and estimated speed fail to track the reference speed and if the corrective action is not taken by the observer, the motor goes into saturation. To overcome this problem, the speed estimation algorithm has been implemented in this paper to control the dynamic behavior of the motor during negative rotation. Ackermann's method was used to calculate the gains of the unknown input observer, based on an appropriate choice of the eigenvalues in advance. The criterion for choosing the eigenvalues is to balance a faster convergence rate against the noise level. Simulations have been carried out for different disturbances such as step changes in motor reference speed and load torque. The comparative simulation results clearly show that the disturbance effects in the actual and estimated responses diminish as the observer gain setting increases.
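    Ackermann's method places the observer poles at the chosen eigenvalues: for x' = Ax, y = Cx, the gain is L = q(A)·O⁻¹·e_n, where O is the observability matrix and q the desired characteristic polynomial. A minimal generic sketch of that formula (not the BLDC observer itself; the example system is illustrative):

```python
import numpy as np

def ackermann_observer_gain(A, C, poles):
    """Observer gain via Ackermann's formula (dual of pole placement):
    L = q(A) @ inv(O) @ e_n, with O the observability matrix and
    q the characteristic polynomial of the desired poles."""
    n = A.shape[0]
    O = np.vstack([C @ np.linalg.matrix_power(A, k) for k in range(n)])
    q = np.poly(poles)                     # [1, a1, ..., an]
    qA = sum(c * np.linalg.matrix_power(A, n - k) for k, c in enumerate(q))
    e_n = np.zeros(n)
    e_n[-1] = 1.0
    return qA @ np.linalg.solve(O, e_n)    # gain vector L, shape (n,)
```

    Faster poles (eigenvalues further left) speed up convergence of the estimation error but amplify measurement noise through the larger gain, which is exactly the balance the abstract describes.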

  2. Smoothing of cost function leads to faster convergence of neural network learning

    NASA Astrophysics Data System (ADS)

    Xu, Li-Qun; Hall, Trevor J.

    1994-03-01

    One of the major problems in supervised learning of neural networks is the inevitable local minima inherent in the cost function f(W, D). This often renders powerless the classic gradient-descent-based learning algorithms that calculate the weight updates for each iteration according to ΔW(t) = -η ∇_W f(W, D). In this paper we describe a new strategy to solve this problem which, adaptively, changes the learning rate and manipulates the gradient estimator simultaneously. The idea is to implicitly convert the local-minima-laden cost function f(·) into a sequence of its smoothed versions {f_{β_t}}, t = 1, …, T, which, subject to the parameter β_t, bears few details at t = 1 and gradually more later on; the learning is actually performed on this sequence of functionals. The corresponding smoothed global minima obtained in this way, {W_t}, t = 1, …, T, thus progressively approximate W*, the desired global minimum. Experimental results on a nonconvex function minimization problem and a typical neural network learning task are given, and analyses and discussions of some important issues are provided.
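    The strategy can be sketched as graduated optimization: minimize a heavily smoothed version of the cost first, then track the minimizer as the smoothing parameter shrinks to zero. Below, the smoothed cost f_β(w) = E_z[f(w + βz)], z ~ N(0,1), is approximated by a Monte Carlo mean on a 1-D cost with local minima; the function names, schedule, and test function are illustrative assumptions, not the paper's exact scheme:

```python
import numpy as np

def graduated_descent(f, w0, betas, lr=0.05, steps=200, n_mc=4096, seed=1):
    """Gradient descent on a sequence of smoothed costs
    f_beta(w) = E_z[f(w + beta*z)], with beta shrinking toward zero.
    The expectation is a Monte Carlo mean over a fixed z sample
    (common random numbers keep it smooth in w); the gradient is a
    central finite difference."""
    rng = np.random.default_rng(seed)
    w = float(w0)
    for beta in betas:            # large beta first: few details survive
        z = rng.standard_normal(n_mc)
        def f_smooth(w_):
            return float(np.mean(f(w_ + beta * z)))
        for _ in range(steps):
            h = 1e-4
            g = (f_smooth(w + h) - f_smooth(w - h)) / (2 * h)
            w -= lr * g
    return w
```

    With a large initial β the oscillatory component of the cost is averaged away, so descent reaches the neighbourhood of the global basin; the later, sharper stages then refine the estimate — whereas plain descent (a schedule of β = 0 only) stays trapped in the nearest local minimum.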

  3. Deep Convolutional Extreme Learning Machine and Its Application in Handwritten Digit Classification

    PubMed Central

    Pang, Shan; Yang, Xinyi

    2016-01-01

    In recent years, some deep learning methods have been developed and applied to image classification applications, such as convolutional neural network (CNN) and deep belief network (DBN). However, they suffer from problems such as local minima, slow convergence rates, and intensive human intervention. In this paper, we propose a rapid learning method, namely, deep convolutional extreme learning machine (DC-ELM), which combines the power of CNN and fast training of ELM. It uses multiple alternate convolution layers and pooling layers to effectively abstract high level features from input images. Then the abstracted features are fed to an ELM classifier, which leads to better generalization performance with faster learning speed. DC-ELM also introduces stochastic pooling in the last hidden layer to reduce dimensionality of features greatly, thus saving much training time and computation resources. We systematically evaluated the performance of DC-ELM on two handwritten digit data sets: MNIST and USPS. Experimental results show that our method achieved better testing accuracy with significantly shorter training time in comparison with deep learning methods and other ELM methods. PMID:27610128
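    The ELM stage that gives DC-ELM its speed is strikingly simple: the hidden-layer weights are random and never trained, and only the output weights are computed, in a single least-squares solve. A minimal regression sketch with illustrative names and data (the convolutional and pooling front end of DC-ELM is omitted):

```python
import numpy as np

def elm_train(X, y, n_hidden=200, seed=0):
    """Extreme learning machine: random, untrained hidden layer;
    only the output weights beta are fit, by one least-squares solve."""
    rng = np.random.default_rng(seed)
    W = rng.standard_normal((X.shape[1], n_hidden))  # random input weights
    b = rng.standard_normal(n_hidden)                # random biases
    H = np.tanh(X @ W + b)                           # hidden activations
    beta, *_ = np.linalg.lstsq(H, y, rcond=None)     # one-shot solve
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta
```

    Because no iterative gradient descent is involved, training cost is dominated by a single linear solve, which is why ELM-based hybrids train orders of magnitude faster than backpropagated networks of similar size.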

  5. Faster-X Evolution of Gene Expression in Drosophila

    PubMed Central

    Meisel, Richard P.; Malone, John H.; Clark, Andrew G.

    2012-01-01

    DNA sequences on X chromosomes often have a faster rate of evolution when compared to similar loci on the autosomes, and well-articulated models provide reasons why the X-linked mode of inheritance may be responsible for the faster evolution of X-linked genes. We analyzed microarray and RNA-seq data collected from females and males of six Drosophila species and found that the expression levels of X-linked genes also diverge faster than autosomal gene expression, similar to the “faster-X” effect often observed in DNA sequence evolution. Faster-X evolution of gene expression was recently described in mammals, but it was limited to the evolutionary lineages shortly following the creation of the therian X chromosome. In contrast, we detect a faster-X effect along both deep lineages and those on the tips of the Drosophila phylogeny. In Drosophila males, the dosage compensation complex (DCC) binds the X chromosome, creating a unique chromatin environment that promotes the hyper-expression of X-linked genes. We find that DCC binding, chromatin environment, and breadth of expression are all predictive of the rate of gene expression evolution. In addition, estimates of the intraspecific genetic polymorphism underlying gene expression variation suggest that X-linked expression levels are not under relaxed selective constraints. We therefore hypothesize that the faster-X evolution of gene expression is the result of the adaptive fixation of beneficial mutations at X-linked loci that change expression level in cis. This adaptive faster-X evolution of gene expression is limited to genes that are narrowly expressed in a single tissue, suggesting that relaxed pleiotropic constraints permit a faster response to selection. Finally, we present a conceptual framework to explain faster-X expression evolution, and we use this framework to examine differences in the faster-X effect between Drosophila and mammals. PMID:23071459

  6. Temporal Stability and Convergent Validity of the Behavior Assessment System for Children.

    ERIC Educational Resources Information Center

    Merydith, Scott P.

    2001-01-01

    Assesses the temporal stability and convergent validity of the Behavior Assessment System for Children (BASC). Teachers and parents rated kindergarten and first-grade students using the BASC. Teachers were more stable in rating children's externalizing behaviors and attention problems. Discusses results in terms of the accuracy of information…

  7. Evaluating the Convergence of Muscle Appearance Attitude Measures

    ERIC Educational Resources Information Center

    Cafri, Guy; Thompson, J. Kevin

    2004-01-01

    There has been growing interest in the assessment of muscular appearance. Given the importance of assessing muscle appearance attitudes, the aim of this study was to explore the convergence of the Drive for Muscularity Scale, Somatomorphic Matrix, Contour Drawing Rating Scale, Male Figure Drawings, and the Muscularity Rating Scale. Participants…

  8. Direct measurement of asperity contact growth in quartz at hydrothermal conditions

    NASA Astrophysics Data System (ADS)

    Beeler, N. M.; Hickman, S. H.

    2008-12-01

    Room-temperature friction and indentation experiments suggest fault strengthening during the interseismic period results from increases in asperity contact area due to solid-state deformation. However, field observations on exhumed fault zones indicate that solution-transport processes (pressure solution, crack healing and contact overgrowth) influence fault zone rheology near the base of the seismogenic zone. Contact overgrowths result from gradients in surface curvature, where material is dissolved from the pore walls, diffuses through the fluid and precipitates at the contact between two asperities, cementing the asperities together without convergence normal to the contact. To determine the mechanisms and kinetics of asperity cementation, we conducted laboratory experiments in which convex and flat lenses prepared from quartz single crystals were pressed together in an externally heated pressure vessel equipped with an optical observation port. Convergence between the two lenses and contact morphology were continuously monitored during these experiments using reflected-light interferometry through a long-working-distance microscope. Contact normal force is constant with an initial effective normal stress of 1.7 MPa. Four single-phase experiments were conducted at temperatures between 350 and 530°C at 150 MPa water pressure, along with two controls: one single phase, dry at 425°C and one bimaterial (qtz/sapphire) at 425°C and 150 MPa water pressure. No contact growth or convergence was observed in either of the controls. For wet single-phase contacts, however, growth was initially rapid and then decreased with time following an inverse squared dependence of contact radius on aperture. No convergence was observed over the duration of these experiments, suggesting that neither significant pressure solution nor crystal plasticity occurred at these stresses and temperatures.
The formation of fluid inclusions between the lenses indicates that the contact is not uniformly wetted. The contact is bounded by small regions of high aperture, reflecting local free-face dissolution as the source for the overgrowth, a definitive indication of diffusion-limited growth. Diffusion-limited growth is also consistent with the inverse squared aperture dependence. However, the apparent activation energy is ~125 kJ/mol, much higher than expected for silica diffusion in bulk water; at present we do not have a complete explanation for the high activation energy. When our lab-measured overgrowth rates are extrapolated to the 5 to 30 micron radius contacts inferred from near-field recordings of M-2 sized earthquakes in deep drill holes and mines (i.e., SAFOD and NELSAM), we predict rates of contact area increase that are orders of magnitude faster than seen in dry, room-temperature friction experiments. This suggests that natural strength recovery should be dominated by fluid-assisted processes at hypocentral conditions near the base of the seismogenic zone.

  9. Psychometric properties of the mobility inventory for agoraphobia: convergent, discriminant, and criterion-related validity.

    PubMed

    Chambless, Dianne L; Sharpless, Brian A; Rodriguez, Dianeth; McCarthy, Kevin S; Milrod, Barbara L; Khalsa, Shabad-Ratan; Barber, Jacques P

    2011-12-01

    Aims of this study were (a) to summarize the psychometric literature on the Mobility Inventory for Agoraphobia (MIA), (b) to examine the convergent and discriminant validity of the MIA's Avoidance Alone and Avoidance Accompanied rating scales relative to clinical severity ratings of anxiety disorders from the Anxiety Disorders Interview Schedule (ADIS), and (c) to establish a cutoff score indicative of interviewers' diagnosis of agoraphobia for the Avoidance Alone scale. A meta-analytic synthesis of 10 published studies yielded positive evidence for internal consistency and convergent and discriminant validity of the scales. Participants in the present study were 129 people with a diagnosis of panic disorder. Internal consistency was excellent for this sample, α=.95 for AAC and .96 for AAL. When the MIA scales were correlated with interviewer ratings, evidence for convergent and discriminant validity for AAL was strong (convergent r with agoraphobia severity ratings=.63 vs. discriminant rs of .10-.29 for other anxiety disorders) and more modest but still positive for AAC (.54 vs. .01-.37). Receiver operating curve analysis indicated that the optimal operating point for AAL as an indicator of ADIS agoraphobia diagnosis was 1.61, which yielded sensitivity of .87 and specificity of .73. Copyright © 2011. Published by Elsevier Ltd.

  10. Psychometric Properties of the Mobility Inventory for Agoraphobia: Convergent, Discriminant, and Criterion-Related Validity

    PubMed Central

    Chambless, Dianne L.; Sharpless, Brian A.; Rodriguez, Dianeth; McCarthy, Kevin S.; Milrod, Barbara L.; Khalsa, Shabad-Ratan; Barber, Jacques P.

    2012-01-01

    Aims of this study were (a) to summarize the psychometric literature on the Mobility Inventory for Agoraphobia (MIA), (b) to examine the convergent and discriminant validity of the MIA’s Avoidance Alone and Avoidance Accompanied rating scales relative to clinical severity ratings of anxiety disorders from the Anxiety Disorders Interview Schedule (ADIS), and (c) to establish a cutoff score indicative of interviewers’ diagnosis of agoraphobia for the Avoidance Alone scale. A meta-analytic synthesis of 10 published studies yielded positive evidence for internal consistency and convergent and discriminant validity of the scales. Participants in the present study were 129 people with a diagnosis of panic disorder. Internal consistency was excellent for this sample, α = .95 for AAC and .96 for AAL. When the MIA scales were correlated with interviewer ratings, evidence for convergent and discriminant validity for AAL was strong (convergent r with agoraphobia severity ratings = .63 vs. discriminant rs of .10-.29 for other anxiety disorders) and more modest but still positive for AAC (.54 vs. .01-.37). Receiver operating curve analysis indicated that the optimal operating point for AAL as an indicator of ADIS agoraphobia diagnosis was 1.61, which yielded sensitivity of .87 and specificity of .73. PMID:22035997

  11. Cognitive Load Reduces Perceived Linguistic Convergence Between Dyads.

    PubMed

    Abel, Jennifer; Babel, Molly

    2017-09-01

    Speech convergence is the tendency of talkers to become more similar to someone they are listening or talking to, whether that person is a conversational partner or merely a voice heard repeating words. To elucidate the nature of the mechanisms underlying convergence, this study examines the effect of different levels of task difficulty on speech convergence within dyads collaborating on a task. Dyad members had to build identical LEGO® constructions without being able to see each other's construction, and with each member having half of the instructions required to complete the construction. Three levels of task difficulty were created, with five dyads at each level (30 participants total). Task difficulty was also measured using completion time and error rate. Listeners who heard pairs of utterances from each dyad judged convergence to be occurring in the Easy condition and to a lesser extent in the Medium condition, but not in the Hard condition. Amplitude envelope acoustic similarity analyses of the same utterance pairs showed that convergence occurred in dyads with shorter completion times and lower error rates. Together, these results suggest that while speech convergence is a highly variable behavior, it may occur more in contexts of low cognitive load. The relevance of these results for current automatic and socially-driven models of convergence is discussed.

  12. Trait and State Variance in Oppositional Defiant Disorder Symptoms: A Multi-Source Investigation with Spanish Children

    PubMed Central

    Preszler, Jonathan; Burns, G. Leonard; Litson, Kaylee; Geiser, Christian; Servera, Mateu

    2016-01-01

    The objective was to determine and compare the trait and state components of oppositional defiant disorder (ODD) symptom reports across multiple informants. Mothers, fathers, primary teachers, and secondary teachers rated the occurrence of the ODD symptoms in 810 Spanish children (55% boys) on two occasions (end of first and second grades). Single source latent state-trait (LST) analyses revealed that ODD symptom ratings from all four sources showed more trait (M = 63%) than state residual (M = 37%) variance. A multiple source LST analysis revealed substantial convergent validity of mothers’ and fathers’ trait variance components (M = 68%) and modest convergent validity of state residual variance components (M = 35%). In contrast, primary and secondary teachers showed low convergent validity relative to mothers for trait variance (Ms = 31%, 32%, respectively) and essentially zero convergent validity relative to mothers for state residual variance (Ms = 1%, 3%, respectively). Although ODD symptom ratings reflected slightly more trait- than state-like constructs within each of the four sources separately across occasions, strong convergent validity for the trait variance only occurred within settings (i.e., mothers with fathers; primary with secondary teachers), with the convergent validity of the trait and state residual variance components being low to non-existent across settings. These results suggest that ODD symptom reports are trait-like across time within each source, but that this trait variance has convergent validity only within settings. Implications for assessment of ODD are discussed. PMID:27148784

  13. Rapid computation of directional wellbore drawdown in a confined aquifer via Poisson resummation

    NASA Astrophysics Data System (ADS)

    Blumenthal, Benjamin J.; Zhan, Hongbin

    2016-08-01

    We have derived a rapidly computed analytical solution for drawdown caused by a partially or fully penetrating directional wellbore (vertical, horizontal, or slant) via Green's function method. The mathematical model assumes an anisotropic, homogeneous, confined, box-shaped aquifer. Any dimension of the box can have one of six possible boundary conditions: 1) both sides no-flux; 2) one side no-flux - one side constant-head; 3) both sides constant-head; 4) one side no-flux; 5) one side constant-head; 6) free boundary conditions. The solution has been optimized for rapid computation via Poisson resummation, derivation of convergence rates, and numerical optimization of integration techniques. Upon application of the Poisson resummation method, we were able to derive two sets of solutions with inverse convergence rates, namely an early-time rapidly convergent series (solution-A) and a late-time rapidly convergent series (solution-B). From this work we were able to link Green's function method (solution-B) back to image well theory (solution-A). We then derived an equation defining when the convergence rates of solution-A and solution-B are equal, which we termed the switch time. Utilizing the more rapidly convergent solution at the appropriate time, we obtained rapid convergence at all times. We have also shown that one may simplify each of the three infinite series for the three-dimensional solution to 11 terms and still maintain a maximum relative error of less than 10^-14.
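
    The dual-series idea can be illustrated on a textbook case: the Jacobi theta function satisfies a Poisson-resummation identity that yields two series with inverse convergence rates and a switch time at t = 1. The sketch below is a generic illustration of that mechanism, not the paper's drawdown solution:

```python
import math

def theta_direct(t, n_terms):
    # Direct series: sum_{n} exp(-pi n^2 t).
    # Converges rapidly at late time (large t), slowly at early time.
    return sum(math.exp(-math.pi * n * n * t) for n in range(-n_terms, n_terms + 1))

def theta_poisson(t, n_terms):
    # Poisson-resummed dual series: t^(-1/2) sum_{n} exp(-pi n^2 / t).
    # Converges rapidly at early time (small t), slowly at late time.
    return sum(math.exp(-math.pi * n * n / t)
               for n in range(-n_terms, n_terms + 1)) / math.sqrt(t)

# At small t the dual series needs only a few terms where the direct
# series needs hundreds; the two agree to machine precision.
early = theta_poisson(0.01, 3)       # 7 terms
early_check = theta_direct(0.01, 200)  # 401 terms needed for the same value
```

    Picking whichever series converges faster on each side of the switch time (here t = 1) gives rapid convergence at all times, which is the strategy the abstract describes.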

  14. Control of heart rate during thermoregulation in the heliothermic lizard Pogona barbata: importance of cholinergic and adrenergic mechanisms.

    PubMed

    Seebacher, F; Franklin, C E

    2001-12-01

    During thermoregulation in the bearded dragon Pogona barbata, heart rate when heating is significantly faster than when cooling at any given body temperature (heart rate hysteresis), resulting in faster rates of heating than cooling. However, the mechanisms that control heart rate during heating and cooling are unknown. The aim of this study was to test the hypothesis that changes in cholinergic and adrenergic tone on the heart are responsible for the heart rate hysteresis during heating and cooling in P. barbata. Heating and cooling trials were conducted before and after the administration of atropine, a muscarinic antagonist, and sotalol, a beta-adrenergic antagonist. Cholinergic and beta-adrenergic blockade did not abolish the heart rate hysteresis, as the heart rate during heating was significantly faster than during cooling in all cases. Adrenergic tone was extremely high (92.3 %) at the commencement of heating, and decreased to 30.7 % at the end of the cooling period. Moreover, in four lizards there was an instantaneous drop in heart rate (up to 15 beats min(-1)) as the heat source was switched off, and this drop in heart rate coincided with either a drop in beta-adrenergic tone or an increase in cholinergic tone. Rates of heating were significantly faster during the cholinergic blockade, and least with a combined cholinergic and beta-adrenergic blockade. The results showed that cholinergic and beta-adrenergic systems are not the only control mechanisms acting on the heart during heating and cooling, but they do have a significant effect on heart rate and on rates of heating and cooling.

  15. Auditory perceptual simulation: Simulating speech rates or accents?

    PubMed

    Zhou, Peiyun; Christianson, Kiel

    2016-07-01

    When readers engage in Auditory Perceptual Simulation (APS) during silent reading, they mentally simulate characteristics of voices attributed to a particular speaker or a character depicted in the text. Previous research found that auditory perceptual simulation of a faster native English speaker during silent reading led to shorter reading times than auditory perceptual simulation of a slower non-native English speaker. Yet, it was uncertain whether this difference was triggered by the different speech rates of the speakers, or by the difficulty of simulating an unfamiliar accent. The current study investigates this question by comparing faster Indian-English speech and slower American-English speech in the auditory perceptual simulation paradigm. Analyses of reading times of individual words and the full sentence reveal that the auditory perceptual simulation effect again modulated reading rate, and auditory perceptual simulation of the faster Indian-English speech led to faster reading rates compared to auditory perceptual simulation of the slower American-English speech. The comparison between this experiment and the data from Zhou and Christianson (2016) demonstrates further that the "speakers'" speech rates, rather than the difficulty of simulating a non-native accent, are the primary mechanism underlying auditory perceptual simulation effects. Copyright © 2016 Elsevier B.V. All rights reserved.

  16. A globally convergent MC algorithm with an adaptive learning rate.

    PubMed

    Peng, Dezhong; Yi, Zhang; Xiang, Yong; Zhang, Haixian

    2012-02-01

    This brief deals with the problem of minor component analysis (MCA). Artificial neural networks can be exploited to achieve the task of MCA. Recent research shows that convergence of neural-network-based MCA algorithms can be guaranteed if the learning rates are less than certain thresholds. However, the computation of these thresholds needs information about the eigenvalues of the autocorrelation matrix of the data set, which is unavailable in online extraction of the minor component from an input data stream. In this correspondence, we introduce an adaptive learning rate into the OJAn MCA algorithm, such that its convergence condition does not depend on any unobtainable information and can be easily satisfied in practical applications.
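
    As a sketch of the general idea only (this is not the OJAn update or the adaptive rate derived in the brief; the covariance, step-size rule, and constants below are illustrative assumptions), a minor component can be extracted online with an anti-Hebbian rule whose step size adapts to the data rather than to precomputed eigenvalue thresholds:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic zero-mean data stream whose covariance has its minor component
# (the smallest-eigenvalue eigenvector) along the third axis.
C_true = np.diag([5.0, 3.0, 0.5])
X = rng.multivariate_normal(np.zeros(3), C_true, size=20000)

w = rng.standard_normal(3)
w /= np.linalg.norm(w)

for x in X:
    y = w @ x
    # Heuristic data-dependent step (an assumption for illustration): it shrinks
    # when the instantaneous output energy y^2 is large, with no eigenvalue info.
    eta = 1e-3 / (1.0 + y * y)
    w -= eta * (y * x - y * y * w)   # anti-Hebbian minor-component update
    w /= np.linalg.norm(w)           # keep the weight vector on the unit sphere

alignment = abs(w[2])                # overlap with the true minor component
```

    After one pass over the stream, `w` aligns with the minor eigenvector even though no eigenvalue of the autocorrelation matrix was ever computed.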

  17. System-size convergence of point defect properties: The case of the silicon vacancy

    NASA Astrophysics Data System (ADS)

    Corsetti, Fabiano; Mostofi, Arash A.

    2011-07-01

    We present a comprehensive study of the vacancy in bulk silicon in all its charge states from 2+ to 2-, using a supercell approach within plane-wave density-functional theory, and systematically quantify the various contributions to the well-known finite-size errors associated with calculating formation energies and stable charge state transition levels of isolated defects with periodic boundary conditions. Furthermore, we find that transition levels converge faster with respect to supercell size when only the Γ-point is sampled in the Brillouin zone, as opposed to a dense k-point sampling. This arises from the fact that the defect level at the Γ-point quickly converges to a fixed value which correctly describes the bonding at the defect center. Our calculated transition levels with 1000-atom supercells and Γ-point-only sampling are in good agreement with available experimental results. We also demonstrate two simple and accurate approaches for calculating the valence band offsets that are required for computing formation energies of charged defects, one based on a potential averaging scheme and the other using maximally-localized Wannier functions (MLWFs). Finally, we show that MLWFs provide a clear description of the nature of the electronic bonding at the defect center that verifies the canonical Watkins model.

  18. An Experience Oriented-Convergence Improved Gravitational Search Algorithm for Minimum Variance Distortionless Response Beamforming Optimum

    PubMed Central

    Darzi, Soodabeh; Tiong, Sieh Kiong; Tariqul Islam, Mohammad; Rezai Soleymanpour, Hassan; Kibria, Salehin

    2016-01-01

    An experience oriented-convergence improved gravitational search algorithm (ECGSA) based on two new modifications, searching through the best experiments and using of a dynamic gravitational damping coefficient (α), is introduced in this paper. ECGSA saves its best fitness function evaluations and uses those as the agents’ positions in searching process. In this way, the optimal found trajectories are retained and the search starts from these trajectories, which allow the algorithm to avoid the local optimums. Also, the agents can move faster in search space to obtain better exploration during the first stage of the searching process and they can converge rapidly to the optimal solution at the final stage of the search process by means of the proposed dynamic gravitational damping coefficient. The performance of ECGSA has been evaluated by applying it to eight standard benchmark functions along with six complicated composite test functions. It is also applied to adaptive beamforming problem as a practical issue to improve the weight vectors computed by minimum variance distortionless response (MVDR) beamforming technique. The results of implementation of the proposed algorithm are compared with some well-known heuristic methods and verified the proposed method in both reaching to optimal solutions and robustness. PMID:27399904

  19. Zipf rank approach and cross-country convergence of incomes

    NASA Astrophysics Data System (ADS)

    Shao, Jia; Ivanov, Plamen Ch.; Urošević, Branko; Stanley, H. Eugene; Podobnik, Boris

    2011-05-01

    We employ a concept popular in physics, the Zipf rank approach, in order to estimate the number of years that EU members would need in order to achieve "convergence" of their per capita incomes. Assuming that trends in the past twenty years continue to hold in the future, we find that after t ≈ 30 years both developing and developed EU countries indexed by i will have comparable values of their per capita gross domestic product G_{i,t}. Besides the traditional Zipf rank approach we also propose a weighted Zipf rank method. In contrast to the EU block, on the world level the Zipf rank approach shows that, between 1960 and 2009, cross-country income differences increased over time. For a brief period during the 2007-2008 global economic crisis, at the world level the G_{i,t} of richer countries declined more rapidly than the G_{i,t} of poorer countries, in contrast to the EU, where the G_{i,t} of developing EU countries declined faster than the G_{i,t} of developed EU countries, indicating that the recession interrupted the convergence between EU members. We propose a simple model of GDP evolution that accounts for the scaling we observe in the data.
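
    The flavor of such a trend-extrapolation estimate can be sketched in a few lines: if each country's per capita GDP grows at a constant exponential rate, the two trajectories meet at a computable time. The growth rates and starting incomes below are illustrative assumptions, not data or estimates from the study, and this back-of-envelope calculation is not the Zipf rank method itself:

```python
import math

def years_to_converge(g_poor_0, g_rich_0, growth_poor, growth_rich):
    # Years until per capita GDPs meet, assuming constant exponential growth
    # g(t) = g(0) * exp(rate * t). Requires growth_poor > growth_rich.
    return math.log(g_rich_0 / g_poor_0) / (growth_poor - growth_rich)

# Hypothetical numbers: a developing country at 10k growing 4%/yr
# versus a developed country at 40k growing 1.5%/yr.
t = years_to_converge(10_000, 40_000, 0.04, 0.015)
```

    With these illustrative inputs the catch-up time is a few decades, the same order of magnitude as the t ≈ 30 years reported for the EU.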

  20. The multi-reference retaining the excitation degree perturbation theory: A size-consistent, unitary invariant, and rapidly convergent wavefunction based ab initio approach

    NASA Astrophysics Data System (ADS)

    Fink, Reinhold F.

    2009-02-01

    The retaining the excitation degree (RE) partitioning [R.F. Fink, Chem. Phys. Lett. 428 (2006) 461] is reformulated and applied to multi-reference cases with complete active space (CAS) reference wave functions. The generalised van Vleck perturbation theory is employed to set up the perturbation equations. It is demonstrated that this leads to a consistent and well defined theory which fulfils all important criteria of a generally applicable ab initio method: the theory is proven numerically and analytically to be size-consistent and invariant with respect to unitary orbital transformations within the inactive, active and virtual orbital spaces. In contrast to most previously proposed multi-reference perturbation theories, the necessary condition for a proper perturbation theory to fulfil the zeroth order perturbation equation is exactly satisfied with the RE partitioning itself, without additional projectors on configurational spaces. The theory is applied to several excited states of the benchmark systems CH2, SiH2, and NH2, as well as to the lowest states of the carbon, nitrogen and oxygen atoms. In all cases comparisons are made with full configuration interaction results. The multi-reference (MR)-RE method is shown to provide very rapidly converging perturbation series. Energy differences between states of similar configurations converge even faster.

  1. Hybrid spiral-dynamic bacteria-chemotaxis algorithm with application to control two-wheeled machines.

    PubMed

    Goher, K M; Almeshal, A M; Agouri, S A; Nasir, A N K; Tokhi, M O; Alenezi, M R; Al Zanki, T; Fadlallah, S O

    2017-01-01

    This paper presents the implementation of the hybrid spiral-dynamic bacteria-chemotaxis (HSDBC) approach to control two different configurations of a two-wheeled vehicle. The HSDBC is a combination of the bacterial chemotaxis used in the bacterial foraging algorithm (BFA) and the spiral-dynamic algorithm (SDA). BFA provides a good exploration strategy due to the chemotaxis approach. However, it endures an oscillation problem near the end of the search process when using a large step size; conversely, for a small step size, it affords better exploitation and accuracy with slower convergence. SDA provides better stability when approaching an optimum point and has a faster convergence speed, but it may cause the search agents to become trapped in local optima, resulting in less accurate solutions. HSDBC exploits the chemotactic strategy of BFA and the fitness accuracy and convergence speed of SDA so as to overcome the problems associated with the SDA and BFA algorithms alone. The HSDBC thus developed is evaluated in optimizing the performance and energy consumption of two highly nonlinear platforms, namely single and double inverted pendulum-like vehicles with an extended rod. Comparative results with BFA and SDA show that the proposed algorithm is able to achieve better performance on the highly nonlinear systems.
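
    A minimal sketch of the spiral-dynamic step that underlies SDA (rotate the offset from the best-so-far point and contract it, so agents spiral inward) on the standard sphere benchmark. The contraction rate r, rotation angle theta, and population size are illustrative assumptions, and the BFA chemotaxis half of the hybrid is not shown:

```python
import math
import random

random.seed(1)

def sphere(point):
    # Benchmark objective: f(x) = sum x_i^2, with its minimum 0 at the origin.
    return sum(v * v for v in point)

def spiral_step(point, best, r=0.95, theta=math.pi / 4):
    # SDA-style 2-D update: rotate the offset from the best-so-far point by
    # theta and contract it by r, so the agent spirals in toward the attractor.
    dx, dy = point[0] - best[0], point[1] - best[1]
    c, s = math.cos(theta), math.sin(theta)
    return [best[0] + r * (c * dx - s * dy),
            best[1] + r * (s * dx + c * dy)]

agents = [[random.uniform(-5, 5), random.uniform(-5, 5)] for _ in range(10)]
best = min(agents, key=sphere)
f0 = sphere(best)

for _ in range(300):
    agents = [spiral_step(a, best) for a in agents]
    candidate = min(agents, key=sphere)
    if sphere(candidate) < sphere(best):
        best = list(candidate)
```

    The steady contraction gives the stable, fast convergence credited to SDA; it is also why agents can stall near a local optimum on multimodal objectives, which is the weakness the chemotactic exploration of BFA is meant to offset.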

  2. Statistical Symbolic Execution with Informed Sampling

    NASA Technical Reports Server (NTRS)

    Filieri, Antonio; Pasareanu, Corina S.; Visser, Willem; Geldenhuys, Jaco

    2014-01-01

    Symbolic execution techniques have been proposed recently for the probabilistic analysis of programs. These techniques seek to quantify the likelihood of reaching program events of interest, e.g., assert violations. They have many promising applications but have scalability issues due to high computational demand. To address this challenge, we propose a statistical symbolic execution technique that performs Monte Carlo sampling of the symbolic program paths and uses the obtained information for Bayesian estimation and hypothesis testing with respect to the probability of reaching the target events. To speed up the convergence of the statistical analysis, we propose Informed Sampling, an iterative symbolic execution that first explores the paths that have high statistical significance, prunes them from the state space, and guides the execution towards less likely paths. The technique combines Bayesian estimation with a partial exact analysis for the pruned paths, leading to provably improved convergence of the statistical analysis. We have implemented statistical symbolic execution with informed sampling in the Symbolic PathFinder tool. We show experimentally that informed sampling obtains more precise results and converges faster than a purely statistical analysis, and may also be more efficient than an exact symbolic analysis. When the latter does not terminate, symbolic execution with informed sampling can give meaningful results under the same time and memory limits.
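
    The combination of a partial exact analysis with Bayesian estimation can be sketched as follows. All probability values below are hypothetical, and the simple Beta-posterior estimator stands in for the tool's actual estimators: the exact mass contributed by pruned high-probability paths is added to a posterior-mean estimate over the remaining, sampled mass:

```python
import random

random.seed(42)

# Suppose exact analysis of the pruned high-probability paths established that
# they carry total probability mass p_pruned, of which p_pruned_target reaches
# the target event (both values are hypothetical).
p_pruned = 0.6
p_pruned_target = 0.45

# The remaining mass (1 - p_pruned) is explored by Monte Carlo sampling of
# paths; each sample reports whether the target was reached. The simulated
# (unknown to the estimator) conditional hit rate here is 0.2.
n = 5000
hits = sum(random.random() < 0.2 for _ in range(n))

# Beta(1, 1) prior on the conditional hit rate; posterior mean after n samples.
posterior_mean = (hits + 1) / (n + 2)

# Combine the exact and the estimated parts into one probability estimate.
p_target = p_pruned_target + (1 - p_pruned) * posterior_mean
```

    Because the pruned mass is handled exactly, the Monte Carlo uncertainty applies only to the residual (1 - p_pruned) fraction, which is why pruning the dominant paths provably tightens the estimate for a given sampling budget.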

  3. [Preliminary application of an improved Demons deformable registration algorithm in tumor radiotherapy].

    PubMed

    Zhou, Lu; Zhen, Xin; Lu, Wenting; Dou, Jianhong; Zhou, Linghong

    2012-01-01

    To validate the efficiency of an improved Demons deformable registration algorithm and evaluate its application in registration of the treatment image and the planning image in image-guided radiotherapy (IGRT). Based on Brox's gradient constancy assumption and Malis's efficient second-order minimization algorithm, a grey value gradient similarity term was added into the original energy function, and a formula was derived to calculate the update of transformation field. The limited Broyden-Fletcher-Goldfarb-Shanno (L-BFGS) algorithm was used to optimize the energy function for automatic determination of the iteration number. The proposed algorithm was validated using mathematically deformed images, physically deformed phantom images and clinical tumor images. Compared with the original Additive Demons algorithm, the improved Demons algorithm achieved a higher precision and a faster convergence speed. Due to the influence of different scanning conditions in fractionated radiation, the density range of the treatment image and the planning image may be different. The improved Demons algorithm can achieve faster and more accurate radiotherapy.
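
    For reference, the classical (Thirion) demons force that such registration methods build on can be written in a few lines of NumPy. This is only the baseline per-pixel displacement update, not the paper's improved variant with the gradient-constancy term and L-BFGS optimization; the toy images are hypothetical:

```python
import numpy as np

def demons_step(fixed, moving, eps=1e-9):
    # One classical demons update: displacement driven by the intensity
    # difference and the fixed-image gradient, with the squared difference in
    # the denominator acting as the usual stabilizer.
    gy, gx = np.gradient(fixed)            # per-axis derivatives (rows, cols)
    diff = moving - fixed
    denom = gx**2 + gy**2 + diff**2 + eps  # eps guards flat, identical regions
    ux = diff * gx / denom
    uy = diff * gy / denom
    return ux, uy

# Tiny example: a smooth blob and the same blob shifted by one pixel in x.
y, x = np.mgrid[0:32, 0:32]
fixed = np.exp(-((x - 16.0)**2 + (y - 16.0)**2) / 20.0)
moving = np.exp(-((x - 17.0)**2 + (y - 16.0)**2) / 20.0)

ux, uy = demons_step(fixed, moving)
```

    In a full registration loop this step is iterated, with the displacement field smoothed (e.g., Gaussian-filtered) between iterations and the moving image warped accordingly.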

  4. Non-perturbative calculation of orbital and spin effects in molecules subject to non-uniform magnetic fields

    NASA Astrophysics Data System (ADS)

    Sen, Sangita; Tellgren, Erik I.

    2018-05-01

    External non-uniform magnetic fields acting on molecules induce non-collinear spin densities and spin-symmetry breaking. This necessitates a general two-component Pauli spinor representation. In this paper, we report the implementation of a general Hartree-Fock method, without any spin constraints, for non-perturbative calculations with finite non-uniform fields. London atomic orbitals are used to ensure faster basis convergence as well as invariance under constant gauge shifts of the magnetic vector potential. The implementation has been applied to investigate the joint orbital and spin response to a field gradient—quantified through the anapole moments—of a set of small molecules. The relative contributions of orbital and spin-Zeeman interaction terms have been studied both theoretically and computationally. Spin effects are stronger and show a general paramagnetic behavior for closed shell molecules while orbital effects can have either direction. Basis set convergence and size effects of anapole susceptibility tensors have been reported. The relation of the mixed anapole susceptibility tensor to chirality is also demonstrated.

  5. Optimized norm-conserving Hartree-Fock pseudopotentials for plane-wave calculations

    NASA Astrophysics Data System (ADS)

    Al-Saidi, W. A.; Walter, E. J.; Rappe, A. M.

    2008-02-01

    We report Hartree-Fock (HF)-based pseudopotentials suitable for plane-wave calculations. Unlike typical effective core potentials, the present pseudopotentials are finite at the origin and exhibit rapid convergence in a plane-wave basis; the optimized pseudopotential method [A. M. Rappe, Phys. Rev. B 41, 1227 (1990)] improves plane-wave convergence. Norm-conserving HF pseudopotentials are found to develop long-range non-Coulombic behavior which does not decay faster than 1/r, and is nonlocal. This behavior, which stems from the nonlocality of the exchange potential, is remedied using a recently developed self-consistent procedure [J. R. Trail and R. J. Needs, J. Chem. Phys. 122, 014112 (2005)]. The resulting pseudopotentials slightly violate the norm conservation of the core charge. We calculated several atomic properties using these pseudopotentials, and the results are in good agreement with all-electron HF values. The dissociation energies, equilibrium bond lengths, and frequencies of vibration of several dimers obtained with these HF pseudopotentials and plane waves are also in good agreement with all-electron results.

  6. An Effective Hybrid Evolutionary Algorithm for Solving the Numerical Optimization Problems

    NASA Astrophysics Data System (ADS)

    Qian, Xiaohong; Wang, Xumei; Su, Yonghong; He, Liu

    2018-04-01

    There are many different algorithms for solving complex optimization problems. Each algorithm has been applied successfully to some optimization problems, but not efficiently to others. In this paper the Cauchy mutation and the multi-parent crossover operator are combined to propose a communication-based hybrid evolutionary algorithm (Mixed Evolutionary Algorithm based on Communication), hereinafter referred to as CMEA. The basic idea of the CMEA algorithm is that the initial population is divided into two subpopulations. Cauchy mutation operators and multi-parent crossover operators evolve the two subpopulations in parallel until the stopping conditions are met; when the subpopulations are reorganized, individuals are exchanged together with their information. The algorithm flow is given and the performance of the algorithm is compared using a number of standard test functions. Simulation results have shown that this algorithm converges significantly faster than the FEP (Fast Evolutionary Programming) algorithm, has good global convergence and stability, and is superior to the other compared algorithms.

  7. Computation of neutron fluxes in clusters of fuel pins arranged in hexagonal assemblies (2D and 3D)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Prabha, H.; Marleau, G.

    2012-07-01

    For computations of fluxes, we have used Carlvik's method of collision probabilities. This method requires tracking algorithms. An algorithm to compute tracks (in 2D and 3D) has been developed for seven hexagonal geometries with clusters of fuel pins and implemented in the NXT module of the code DRAGON. The flux distribution in clusters of pins has been computed by using this code. For testing, the results are compared when possible with the EXCELT module of the code DRAGON. Tracks are plotted in the NXT module by using MATLAB; these plots are also presented here. Results are presented with an increasing number of lines to show the convergence of these results. We have numerically computed volumes, surface areas, and the percentage errors in these computations. These results show that 2D results converge faster than 3D results. Accuracy in the computation of fluxes up to the second decimal is achieved with fewer lines. (authors)

  8. Robust and fast-converging level set method for side-scan sonar image segmentation

    NASA Astrophysics Data System (ADS)

    Liu, Yan; Li, Qingwu; Huo, Guanying

    2017-11-01

    A robust and fast-converging level set method is proposed for side-scan sonar (SSS) image segmentation. First, the noise in each sonar image is removed using an adaptive nonlinear complex diffusion filter. Second, k-means clustering is used to obtain an initial presegmentation image from the denoised image, and the distance maps of the initial contours are reinitialized to guarantee the accuracy of the numerical calculation used in the level set evolution. Finally, the segmentation is obtained using a robust variational level set model whose evolution control parameters are generated from the presegmentation. The proposed method is successfully applied to both synthetic images with speckle noise and real SSS images. Experimental results show that the proposed method needs far fewer iterations and is therefore much faster than the fuzzy local information c-means clustering method, the level set method using a gamma observation model, and the enhanced region-scalable fitting method. Moreover, the proposed method usually obtains more accurate segmentation results than the other methods.
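
    The presegmentation step in the pipeline above is ordinary k-means on pixel intensities. A minimal sketch (scalar k-means on a toy intensity list; the real method clusters the denoised 2D sonar image):

    ```python
    import random

    def kmeans_1d(values, k=2, n_iter=20, seed=0):
        """Plain k-means on scalar intensities: the presegmentation step
        that supplies an initial contour for the level set evolution."""
        rng = random.Random(seed)
        centers = rng.sample(values, k)
        labels = [0] * len(values)
        for _ in range(n_iter):
            # assignment step: each pixel joins its nearest center
            labels = [min(range(k), key=lambda j: (v - centers[j]) ** 2)
                      for v in values]
            # update step: each center moves to the mean of its members
            for j in range(k):
                members = [v for v, l in zip(values, labels) if l == j]
                if members:
                    centers[j] = sum(members) / len(members)
        return labels, centers

    # toy "sonar" intensities: dark shadow (~0.1) and bright object (~0.9)
    pixels = [0.1, 0.12, 0.08, 0.9, 0.88, 0.93, 0.11, 0.91]
    labels, centers = kmeans_1d(pixels)
    ```

    The resulting binary label map is what would be turned into an initial contour, whose signed distance map is then reinitialized before the level set evolution.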

  9. Topological analysis of polymeric melts: chain-length effects and fast-converging estimators for entanglement length.

    PubMed

    Hoy, Robert S; Foteinopoulou, Katerina; Kröger, Martin

    2009-09-01

    Primitive path analyses of entanglements are performed over a wide range of chain lengths for both bead-spring and atomistic polyethylene polymer melts. Estimators for the entanglement length N_{e} that operate on results for a single chain length N are shown to produce systematic O(1/N) errors. The mathematical roots of these errors are identified as (a) treating chain ends as entanglements and (b) neglecting non-Gaussian corrections to chain and primitive path dimensions. The prefactors for the O(1/N) errors may be large; in general their magnitude depends both on the polymer model and on the method used to obtain primitive paths. We propose, derive, and test new estimators which eliminate these systematic errors using information obtainable from the variation in entanglement characteristics with chain length. The new estimators produce accurate results for N_{e} from marginally entangled systems. Formulas based on direct enumeration of entanglements appear to converge faster and are simpler to apply.
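
    The central idea, that a systematic O(1/N) bias can be cancelled by combining results from two chain lengths, is a Richardson-style extrapolation in 1/N. The toy functional form and numbers below are illustrative assumptions, not the authors' actual estimators:

    ```python
    def single_n_estimate(ne_true, coef, n):
        """Toy single-chain-length estimator carrying the systematic
        O(1/N) bias described in the abstract: Ne_est = Ne_true + coef/N."""
        return ne_true + coef / n

    def two_length_estimate(est1, n1, est2, n2):
        """Combine estimates at two chain lengths to cancel the O(1/N)
        term: solving est_i = ne + c/N_i for ne gives the formula below."""
        return (est1 * n1 - est2 * n2) / (n1 - n2)

    e100 = single_n_estimate(85.0, 200.0, 100)        # biased estimate at N=100
    e400 = single_n_estimate(85.0, 200.0, 400)        # smaller bias at N=400
    ne = two_length_estimate(e100, 100, e400, 400)    # 1/N bias cancelled
    ```

    With noisy data one would fit Ne(N) against 1/N over several chain lengths instead of solving exactly from two.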

  10. Phonon-limited carrier mobility and resistivity from carbon nanotubes to graphene

    NASA Astrophysics Data System (ADS)

    Li, Jing; Miranda, Henrique Pereira Coutada; Niquet, Yann-Michel; Genovese, Luigi; Duchemin, Ivan; Wirtz, Ludger; Delerue, Christophe

    2015-08-01

    Under which conditions do the electrical transport properties of one-dimensional (1D) carbon nanotubes (CNTs) and 2D graphene become equivalent? We have performed atomistic calculations of the phonon-limited electrical mobility in graphene and in a wide range of CNTs of different types to address this issue. The theoretical study is based on a tight-binding method and a force-constant model from which all possible electron-phonon couplings are computed. The electrical resistivity of graphene is found to be in very good agreement with experiments performed at high carrier density. A common methodology is applied to study the transition from one to two dimensions by considering CNTs with diameters up to 16 nm. It is found that the mobility in CNTs of increasing diameter converges to the same value, i.e., the mobility in graphene. This convergence is much faster at high temperature and high carrier density. For small-diameter CNTs, the mobility depends strongly on chirality, diameter, and the existence of a band gap.

  11. Fast and Epsilon-Optimal Discretized Pursuit Learning Automata.

    PubMed

    Zhang, JunQi; Wang, Cheng; Zhou, MengChu

    2015-10-01

    Learning automata (LA) are powerful tools for reinforcement learning. The discretized pursuit LA is the most popular among them. During an iteration its operation consists of three basic phases: 1) selecting the next action; 2) finding the optimal estimated action; and 3) updating the state probabilities. However, when the number of actions is large, learning becomes extremely slow because there are too many updates to be made at each iteration. The increased updates come mostly from phases 1 and 3. A new fast discretized pursuit LA with assured ε-optimality is proposed that performs both phases 1 and 3 with computational complexity independent of the number of actions. Apart from its low computational complexity, it achieves a faster convergence speed than the classical one when operating in stationary environments. This work can promote the application of LA in large-scale-action areas that require efficient reinforcement learning tools with assured ε-optimality, fast convergence speed, and low computational complexity per iteration.
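
    The three phases named in the abstract can be sketched for the classical discretized pursuit automaton that the paper accelerates. This is a generic pursuit reward-estimation sketch, not the paper's fast variant; the step resolution, prior estimate, and Bernoulli environment are illustrative assumptions.

    ```python
    import random

    def pursuit_la(reward_probs, n_steps=5000, resolution=100, seed=1):
        """Discretized pursuit automaton (sketch): probabilities move in
        steps of size delta toward the action whose estimated reward
        probability is currently highest."""
        rng = random.Random(seed)
        r = len(reward_probs)
        delta = 1.0 / (r * resolution)     # discretization step
        p = [1.0 / r] * r                  # action-selection probabilities
        counts = [0] * r                   # times each action was chosen
        rewards = [0] * r                  # rewards received per action
        best = 0
        for _ in range(n_steps):
            # phase 1: select the next action according to p
            a = rng.choices(range(r), weights=p)[0]
            counts[a] += 1
            rewards[a] += rng.random() < reward_probs[a]
            # phase 2: find the optimal estimated action
            est = [rewards[i] / counts[i] if counts[i] else 0.5 for i in range(r)]
            best = max(range(r), key=est.__getitem__)
            # phase 3: move probability mass (in multiples of delta) to it
            for i in range(r):
                if i != best:
                    moved = min(p[i], delta)
                    p[i] -= moved
                    p[best] += moved
        return p, best

    p, best = pursuit_la([0.2, 0.8, 0.5])
    ```

    The paper's contribution is making phases 1 and 3 cost O(1) rather than O(r) per iteration; the loop above is the slow classical form they start from.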

  12. Stochastic Spectral Descent for Discrete Graphical Models

    DOE PAGES

    Carlson, David; Hsieh, Ya-Ping; Collins, Edo; ...

    2015-12-14

    Interest in deep probabilistic graphical models has increased in recent years, due to their state-of-the-art performance on many machine learning applications. Such models are typically trained with the stochastic gradient method, which can take a significant number of iterations to converge. Since the computational cost of gradient estimation is prohibitive even for modestly sized models, training becomes slow and practically usable models are kept small. In this paper we propose a new, largely tuning-free algorithm to address this problem. Our approach derives novel majorization bounds based on the Schatten-∞ norm. Intriguingly, the minimizers of these bounds can be interpreted as gradient methods in a non-Euclidean space. We thus propose using a stochastic gradient method in non-Euclidean space. We both provide simple conditions under which our algorithm is guaranteed to converge, and demonstrate empirically that our algorithm leads to dramatically faster training and improved predictive ability compared to stochastic gradient descent for both directed and undirected graphical models.

  13. Mini-batch optimized full waveform inversion with geological constrained gradient filtering

    NASA Astrophysics Data System (ADS)

    Yang, Hui; Jia, Junxiong; Wu, Bangyu; Gao, Jinghuai

    2018-05-01

    High computational cost and solutions without geological sense have hindered the wide application of Full Waveform Inversion (FWI). The source encoding technique can dramatically reduce the cost of FWI but is subject to a fixed-spread acquisition requirement and to slow convergence caused by the suppression of cross-talk. Traditionally, gradient regularization or preconditioning is applied to mitigate the ill-posedness. An isotropic smoothing filter applied to gradients generally gives non-geological inversion results and can also introduce artifacts. In this work, we propose to address both the efficiency and the ill-posedness of FWI with a geologically constrained mini-batch gradient optimization method. Mini-batch gradient descent reduces the computation time by choosing a subset of the entire set of shots for each iteration. By jointly applying structure-oriented smoothing to the mini-batch gradient, the inversion converges faster and gives results with more geological meaning. The stylized Marmousi model is used to show the performance of the proposed method on a realistic synthetic model.
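
    The optimization loop described here, stack a gradient over a random subset of shots, smooth it, then update the model, can be sketched on a toy 1-D inverse problem. Everything below is an illustrative stand-in: the least-squares misfit, the shot perturbation, and the moving-average blend replacing the structure-oriented filter are all assumptions, not the paper's FWI operators.

    ```python
    import math
    import random

    def minibatch_descent(shot_ids, model, grad_fn, smooth,
                          n_iter=100, batch=4, lr=0.1, seed=0):
        """Mini-batch descent: each iteration stacks the gradient over a
        random subset of shots, smooths it, and updates the model."""
        rng = random.Random(seed)
        for _ in range(n_iter):
            shots = rng.sample(shot_ids, batch)
            g = [sum(grad_fn(model, s)[i] for s in shots) / batch
                 for i in range(len(model))]
            g = smooth(g)
            model = [m - lr * gi for m, gi in zip(model, g)]
        return model

    # toy inverse problem: recover a smooth 1-D "velocity" profile
    true_model = [1.0, 1.2, 1.4, 1.6, 1.8]

    def grad_fn(model, shot):
        # gradient of a per-shot least-squares misfit, with a small
        # shot-dependent perturbation so that shots differ
        return [m - t + 0.02 * math.sin(shot + i)
                for i, (m, t) in enumerate(zip(model, true_model))]

    def smooth(g):
        # mild 3-point moving-average blend as a placeholder smoother
        out = []
        for i in range(len(g)):
            window = g[max(0, i - 1): i + 2]
            out.append(0.5 * g[i] + 0.5 * sum(window) / len(window))
        return out

    model = minibatch_descent(list(range(12)), [1.5] * 5, grad_fn, smooth)
    misfit = sum((m - t) ** 2 for m, t in zip(model, true_model))
    ```

    The batch of 4 out of 12 shots mirrors the cost saving the abstract describes; the smoothing is applied to the stacked gradient, not to the model, exactly as in the text.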

  14. A new look at the convergence of a famous sequence

    NASA Astrophysics Data System (ADS)

    Dobrescu, Mihaela

    2010-12-01

    A new proof of the monotonicity of the sequence (1 + 1/n)^n is given as a special case of a large family of monotonic and bounded, hence convergent, sequences. The new proof is based on basic calculus results rather than induction, which makes it accessible to a larger audience, including business and life sciences students and faculty. The slow rate of convergence of the two sequences is also discussed, and convergence bounds are found.
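
    A classic sequence of this monotone-and-bounded type, with exactly the slow convergence discussed, is a_n = (1 + 1/n)^n with limit e; its error decays only like O(1/n). A quick numerical check (an illustration, not the paper's computation):

    ```python
    import math

    def a(n):
        """The monotone, bounded sequence a_n = (1 + 1/n)^n, whose limit is e."""
        return (1.0 + 1.0 / n) ** n

    # the error behaves like e/(2n): ten times more terms buys
    # roughly one extra decimal digit of accuracy
    errors = {n: math.e - a(n) for n in (10, 100, 1000)}
    ```

    The scaled error n * (e - a_n) tends to e/2, which is one concrete form the convergence bounds mentioned in the abstract can take.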

  15. The survey of preconditioners used for accelerating the rate of convergence in the Gauss-Seidel method

    NASA Astrophysics Data System (ADS)

    Niki, Hiroshi; Harada, Kyouji; Morimoto, Munenori; Sakakihara, Michio

    2004-03-01

    Several preconditioned iterative methods reported in the literature have been used to improve the convergence rate of the Gauss-Seidel method. In this article, comparisons between some splittings for such preconditioned matrices are derived on the basis of nonnegative matrix theory. Simple numerical examples are also given.
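
    The flavor of such preconditioners can be shown concretely: left-multiply the system by a matrix P = I + S before running Gauss-Seidel. The specific S below, chosen to eliminate the first superdiagonal, is one classic choice from this literature used here as an illustrative assumption:

    ```python
    def gauss_seidel(A, b, tol=1e-10, max_iter=500):
        """Plain Gauss-Seidel iteration; returns (solution, iterations used)."""
        n = len(b)
        x = [0.0] * n
        for it in range(1, max_iter + 1):
            diff = 0.0
            for i in range(n):
                s = sum(A[i][j] * x[j] for j in range(n) if j != i)
                new = (b[i] - s) / A[i][i]
                diff = max(diff, abs(new - x[i]))
                x[i] = new
            if diff < tol:
                return x, it
        return x, max_iter

    def precondition(A, b):
        """Form PA and Pb with P = I + S, where S is chosen to zero out
        the first superdiagonal of A (a Gunawardena-type splitting)."""
        n = len(b)
        PA = [row[:] for row in A]
        Pb = list(b)
        for i in range(n - 1):
            c = -A[i][i + 1] / A[i + 1][i + 1]
            for j in range(n):
                PA[i][j] += c * A[i + 1][j]
            Pb[i] += c * b[i + 1]
        return PA, Pb

    A = [[4.0, -1.0, 0.0, 0.0],
         [-1.0, 4.0, -1.0, 0.0],
         [0.0, -1.0, 4.0, -1.0],
         [0.0, 0.0, -1.0, 4.0]]
    b = [3.0, 2.0, 2.0, 3.0]          # exact solution: all ones
    x_plain, it_plain = gauss_seidel(A, b)
    x_pre, it_pre = gauss_seidel(*precondition(A, b))
    ```

    Since P is unit upper triangular it is nonsingular, so PAx = Pb has the same solution; on this diagonally dominant example the preconditioned iteration reaches the tolerance in fewer sweeps.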

  16. The effects of sex-biased gene expression and X-linkage on rates of sequence evolution in Drosophila.

    PubMed

    Campos, José Luis; Johnston, Keira; Charlesworth, Brian

    2017-12-08

    A faster rate of adaptive evolution of X-linked genes compared with autosomal genes (the faster-X effect) can be caused by the fixation of recessive or partially recessive advantageous mutations. This effect should be largest for advantageous mutations that affect only male fitness, and least for mutations that affect only female fitness. We tested these predictions in Drosophila melanogaster by using coding and functionally significant non-coding sequences of genes with different levels of sex-biased expression. Consistent with theory, nonsynonymous substitutions in most male-biased and unbiased genes show faster adaptive evolution on the X. However, genes with very low recombination rates do not show such an effect, possibly as a consequence of Hill-Robertson interference. Contrary to expectation, there was a substantial faster-X effect for female-biased genes. After correcting for recombination rate differences, however, female-biased genes did not show a faster X-effect. Similar analyses of non-coding UTRs and long introns showed a faster-X effect for all groups of genes, other than introns of female-biased genes. Given the strong evidence that deleterious mutations are mostly recessive or partially recessive, we would expect a slower rate of evolution of X-linked genes for slightly deleterious mutations that become fixed by genetic drift. Surprisingly, we found little evidence for this after correcting for recombination rate, implying that weakly deleterious mutations are mostly close to being semidominant. This is consistent with evidence from polymorphism data, which we use to test how models of selection that assume semidominance with no sex-specific fitness effects may bias estimates of purifying selection. © The Author 2017. Published by Oxford University Press on behalf of the Society for Molecular Biology and Evolution. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.

  17. Dental development in living and fossil orangutans.

    PubMed

    Smith, Tanya M

    2016-05-01

    Numerous studies have investigated molar development in extant and fossil hominoids, yet relatively little is known about orangutans, the only great ape with an extensive fossil record. This study characterizes aspects of dental development, including cuspal enamel daily secretion rate, long-period line periodicities, cusp-specific molar crown formation times and extension rates, and initiation and completion ages in living and fossil orangutan postcanine teeth. Daily secretion rate and periodicities in living orangutans are similar to previous reports, while crown formation times often exceed published values, although direct comparisons are limited. One wild Bornean individual died at 4.5 years of age with fully erupted first molars (M1s), while a captive individual and a wild Sumatran individual likely erupted their M1s around five or six years of age. These data underscore the need for additional samples of orangutans of known sex, species, and developmental environment to explore potential sources of variation in molar emergence and their relationship to life history variables. Fossil orangutans possess larger crowns than living orangutans, show similarities in periodicities, and have faster daily secretion rates, longer crown formation times, and slower extension rates. Molar crown formation times exceed reported values for other fossil apes, including Gigantopithecus blacki. When compared to African apes, both living and fossil orangutans show greater cuspal enamel thickness values and periodicities, resulting in longer crown formation times and slower extension rates. Several of these variables are similar to modern humans, representing examples of convergent evolution. Molar crown formation does not appear to be equivalent among extant great apes or consistent within living and fossil members of Pongo or Homo. Copyright © 2016 The Author. Published by Elsevier Ltd. All rights reserved.

  18. Operator induced multigrid algorithms using semirefinement

    NASA Technical Reports Server (NTRS)

    Decker, Naomi; Vanrosendale, John

    1989-01-01

    A variant of multigrid based on zebra relaxation and a new family of restriction/prolongation operators are described. Using zebra relaxation in combination with an operator-induced prolongation leads to fast convergence, since the coarse grid can correct all error components. The resulting algorithms are not only fast but also robust, in the sense that the convergence rate is insensitive to the mesh aspect ratio. This is true even though line relaxation is performed in only one direction. Multigrid becomes a direct method if an operator-induced prolongation is used together with the induced coarse grid operators. Unfortunately, this approach leads to stencils which double in size on each coarser grid. An implicit three-point restriction can be used to factor these large stencils, in order to retain the usual five- or nine-point stencils while still achieving fast convergence. This algorithm achieves a V-cycle convergence rate of 0.03 on Poisson's equation using 1.5 zebra sweeps per level, while the convergence rate improves to 0.003 if optimal nine-point stencils are used. Numerical results for two- and three-dimensional model problems are presented, together with a two-level analysis explaining these results.

  19. Quaternion normalization in spacecraft attitude determination

    NASA Technical Reports Server (NTRS)

    Deutschmann, J.; Markley, F. L.; Bar-Itzhack, Itzhack Y.

    1993-01-01

    Attitude determination of spacecraft usually utilizes vector measurements such as Sun, center of Earth, star, and magnetic field direction to update the quaternion which determines the spacecraft orientation with respect to some reference coordinates in the three dimensional space. These measurements are usually processed by an extended Kalman filter (EKF) which yields an estimate of the attitude quaternion. Two EKF versions for quaternion estimation were presented in the literature; namely, the multiplicative EKF (MEKF) and the additive EKF (AEKF). In the multiplicative EKF, it is assumed that the error between the correct quaternion and its a-priori estimate is, by itself, a quaternion that represents the rotation necessary to bring the attitude which corresponds to the a-priori estimate of the quaternion into coincidence with the correct attitude. The EKF basically estimates this quotient quaternion and then the updated quaternion estimate is obtained by the product of the a-priori quaternion estimate and the estimate of the difference quaternion. In the additive EKF, it is assumed that the error between the a-priori quaternion estimate and the correct one is an algebraic difference between two four-tuple elements and thus the EKF is set to estimate this difference. The updated quaternion is then computed by adding the estimate of the difference to the a-priori quaternion estimate. If the quaternion estimate converges to the correct quaternion, then, naturally, the quaternion estimate has unity norm. This fact was utilized in the past to obtain superior filter performance by applying normalization to the filter measurement update of the quaternion. It was observed for the AEKF that when the attitude changed very slowly between measurements, normalization merely resulted in a faster convergence; however, when the attitude changed considerably between measurements, without filter tuning or normalization, the quaternion estimate diverged. 
However, when the quaternion estimate was normalized, the estimate converged faster and to a lower error than with tuning only. In last year's symposium we presented three new AEKF normalization techniques and compared them to the brute-force method presented in the literature. The present paper addresses the issue of normalization of the MEKF and examines several MEKF normalization techniques.
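
The brute-force normalization discussed for the AEKF is simple to state: after the additive measurement update, scale the quaternion estimate back to unit norm. A minimal sketch (the numerical values are illustrative, not from the paper):

```python
import math

def normalize_quaternion(q):
    """Scale a quaternion estimate back to unit norm (the brute-force
    normalization applied after the filter's measurement update)."""
    norm = math.sqrt(sum(c * c for c in q))
    return [c / norm for c in q]

# AEKF-style additive update: a-priori estimate plus estimated difference
q_apriori = [0.96, 0.02, 0.01, 0.27]      # roughly unit norm
dq_hat = [0.02, -0.01, 0.005, 0.01]       # estimated additive correction
q_updated = [a + d for a, d in zip(q_apriori, dq_hat)]
q_normalized = normalize_quaternion(q_updated)
```

Normalization changes only the magnitude, not the direction, of the four-tuple, which is why it restores a valid rotation without discarding the filter's directional information.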

  20. Federated learning of predictive models from federated Electronic Health Records.

    PubMed

    Brisimi, Theodora S; Chen, Ruidi; Mela, Theofanie; Olshevsky, Alex; Paschalidis, Ioannis Ch; Shi, Wei

    2018-04-01

    In an era of "big data," computationally efficient and privacy-aware solutions for large-scale machine learning problems become crucial, especially in the healthcare domain, where large amounts of data are stored in different locations and owned by different entities. Past research has focused on centralized algorithms, which assume the existence of a central data repository (database) that stores and can process the data from all participants. Such an architecture, however, can be impractical when data are not centrally located, does not scale well to very large datasets, and introduces single-point-of-failure risks which could compromise the integrity and privacy of the data. Given scores of data widely spread across hospitals/individuals, a decentralized, computationally scalable methodology is very much in need. We aim at solving a binary supervised classification problem to predict hospitalizations for cardiac events using a distributed algorithm. We seek to develop a general decentralized optimization framework enabling multiple data holders to collaborate and converge to a common predictive model, without explicitly exchanging raw data. We focus on the soft-margin l1-regularized sparse Support Vector Machine (sSVM) classifier. We develop an iterative cluster Primal Dual Splitting (cPDS) algorithm for solving the large-scale sSVM problem in a decentralized fashion. Such a distributed learning scheme is relevant for multi-institutional collaborations or peer-to-peer applications, allowing the data holders to collaborate while keeping every participant's data private. We test cPDS on the problem of predicting hospitalizations due to heart diseases within a calendar year based on information in the patients' Electronic Health Records prior to that year. cPDS converges faster than centralized methods at the cost of some communication between agents.
It also converges faster and with less communication overhead compared to an alternative distributed algorithm. In both cases, it achieves similar prediction accuracy measured by the Area Under the Receiver Operating Characteristic Curve (AUC) of the classifier. We extract important features discovered by the algorithm that are predictive of future hospitalizations, thus providing a way to interpret the classification results and inform prevention efforts. Copyright © 2018 Elsevier B.V. All rights reserved.

  1. Assessments of astronaut effectiveness

    NASA Technical Reports Server (NTRS)

    Rose, Robert M.; Helmreich, Robert L.; Fogg, Louis; Mcfadden, Terry J.

    1993-01-01

    This study examined the reliability and convergent validity of three methods of peer and supervisory ratings of the effectiveness of individual NASA astronauts and their relationships with flight assignments. These methods were found to be reliable and relatively convergent. Seniority and a peer-rated Performance and Competence factor proved to be most closely associated with flight assignments, while supervisor ratings and a peer-rated Group Living and Personality factor were found to be unrelated. The results have implications for the selection and training of astronauts.

  2. Colder environments did not select for a faster metabolism during experimental evolution of Drosophila melanogaster.

    PubMed

    Alton, Lesley A; Condon, Catriona; White, Craig R; Angilletta, Michael J

    2017-01-01

    The effect of temperature on the evolution of metabolism has been the subject of debate for a century; however, no consistent patterns have emerged from comparisons of metabolic rate within and among species living at different temperatures. We used experimental evolution to determine how metabolism evolves in populations of Drosophila melanogaster exposed to one of three selective treatments: a constant 16°C, a constant 25°C, or temporal fluctuations between 16 and 25°C. We tested August Krogh's controversial hypothesis that colder environments select for a faster metabolism. Given that colder environments also experience greater seasonality, we also tested the hypothesis that temporal variation in temperature may be the factor that selects for a faster metabolism. We measured the metabolic rate of flies from each selective treatment at 16, 20.5, and 25°C. Although metabolism was faster at higher temperatures, flies from the selective treatments had similar metabolic rates at each measurement temperature. Based on variation among genotypes within populations, heritable variation in metabolism was likely sufficient for adaptation to occur. We conclude that colder or seasonal environments do not necessarily select for a faster metabolism. Rather, other factors besides temperature likely contribute to patterns of metabolic rate over thermal clines in nature. © 2016 The Author(s). Evolution © 2016 The Society for the Study of Evolution.

  3. High Speed Solution of Spacecraft Trajectory Problems Using Taylor Series Integration

    NASA Technical Reports Server (NTRS)

    Scott, James R.; Martini, Michael C.

    2008-01-01

    Taylor series integration is implemented in a spacecraft trajectory analysis code, the Spacecraft N-body Analysis Program (SNAP), and compared with the code's existing eighth-order Runge-Kutta Fehlberg time integration scheme. Nine trajectory problems, including near-Earth, lunar, Mars, and Europa missions, are analyzed. Head-to-head comparison at five different error tolerances shows that, on average, Taylor series integration is faster than Runge-Kutta Fehlberg by a factor of 15.8. Results further show that Taylor series has superior convergence properties. Taylor series integration demonstrates that it can provide rapid, highly accurate solutions to spacecraft trajectory problems.
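
    The idea behind Taylor series integration is to generate many derivatives of the solution recursively at each step and sum the series, rather than sampling the right-hand side as Runge-Kutta does. The sketch below is not SNAP's implementation; it uses the linear ODE y' = y, for which the Taylor coefficient recurrence c_{k+1} = c_k / (k+1) is trivial, to keep the idea visible:

    ```python
    import math

    def taylor_step(y, h, order=10):
        """One Taylor-series step for y' = y: build coefficients via the
        recurrence c_{k+1} = c_k / (k+1), then sum the series at h."""
        c = y          # current Taylor coefficient (starts at y itself)
        total = y
        hk = 1.0
        for k in range(1, order + 1):
            c = c / k          # next Taylor coefficient of the solution
            hk *= h
            total += c * hk
        return total

    def integrate(y0, t_end, n_steps, order=10):
        """March y' = y from 0 to t_end in fixed Taylor steps."""
        h = t_end / n_steps
        y = y0
        for _ in range(n_steps):
            y = taylor_step(y, h, order)
        return y

    y = integrate(1.0, 1.0, 10)   # exact answer is e
    ```

    With order 10 and step 0.1, the local error is about h^11/11!, far below machine precision; dropping to order 4 visibly degrades the result, which is the convergence advantage the abstract refers to.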

  4. An effective hybrid firefly algorithm with harmony search for global numerical optimization.

    PubMed

    Guo, Lihong; Wang, Gai-Ge; Wang, Heqi; Wang, Dinan

    2013-01-01

    A hybrid metaheuristic approach hybridizing harmony search (HS) and the firefly algorithm (FA), namely HS/FA, is proposed for function optimization. In HS/FA, the exploration of HS and the exploitation of FA are fully exerted, so HS/FA has a faster convergence speed than HS and FA. Also, a top-fireflies scheme is introduced to reduce running time, and HS is utilized to mutate between fireflies when updating them. The HS/FA method is verified on various benchmarks. The experiments show that HS/FA performs better than the standard FA and eight other optimization methods.

  5. Parameter estimation of a pulp digester model with derivative-free optimization strategies

    NASA Astrophysics Data System (ADS)

    Seiça, João C.; Romanenko, Andrey; Fernandes, Florbela P.; Santos, Lino O.; Fernandes, Natércia C. P.

    2017-07-01

    The work concerns the parameter estimation in the context of the mechanistic modelling of a pulp digester. The problem is cast as a box bounded nonlinear global optimization problem in order to minimize the mismatch between the model outputs with the experimental data observed at a real pulp and paper plant. MCSFilter and Simulated Annealing global optimization methods were used to solve the optimization problem. While the former took longer to converge to the global minimum, the latter terminated faster at a significantly higher value of the objective function and, thus, failed to find the global solution.

  6. A Method to Compute the Force Signature of a Body Impacting on a Linear Elastic Structure Using Fourier Analysis

    DTIC Science & Technology

    1982-09-17

    [Equations garbled in the source scan.] The convolution of two transforms in the time domain is the inverse transform of their product in the frequency domain. A very accurate numerical method is used to compute the Fourier transforms; when the inverse transform is taken, the cosine transform is used because it converges faster than the sine transform.

  7. Diffractive shear interferometry for extreme ultraviolet high-resolution lensless imaging

    NASA Astrophysics Data System (ADS)

    Jansen, G. S. M.; de Beurs, A.; Liu, X.; Eikema, K. S. E.; Witte, S.

    2018-05-01

    We demonstrate a novel imaging approach and associated reconstruction algorithm for far-field coherent diffractive imaging, based on the measurement of a pair of laterally sheared diffraction patterns. The differential phase profile retrieved from such a measurement leads to improved reconstruction accuracy, increased robustness against noise, and faster convergence compared to traditional coherent diffractive imaging methods. We measure laterally sheared diffraction patterns using Fourier-transform spectroscopy with two phase-locked pulse pairs from a high harmonic source. Using this approach, we demonstrate spectrally resolved imaging at extreme ultraviolet wavelengths between 28 and 35 nm.

  8. Iterative algorithm for reconstructing rotationally asymmetric surface deviation with pixel-level spatial resolution

    NASA Astrophysics Data System (ADS)

    Quan, Haiyang; Wu, Fan; Hou, Xi

    2015-10-01

    A new method for reconstructing rotationally asymmetric surface deviation with pixel-level spatial resolution is proposed. It is based on a basic iterative scheme and accelerates the Gauss-Seidel method by introducing an acceleration parameter. This modified Successive Over-Relaxation (SOR) method is effective for solving for the rotationally asymmetric components with pixel-level spatial resolution without a fitting procedure. Compared to the Jacobi and Gauss-Seidel methods, the modified SOR method with an optimal relaxation factor converges much faster and saves computational cost and memory space without reducing accuracy. This has been confirmed by real experimental results.
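
    The acceleration-parameter idea can be demonstrated generically: SOR with relaxation factor omega reduces to Gauss-Seidel at omega = 1, and a well-chosen omega cuts the iteration count sharply. The demonstration below uses a model tridiagonal system (for which the optimal omega has a known closed form), not the authors' surface-deviation equations:

    ```python
    import math

    def sor_solve(A, b, omega, tol=1e-10, max_iter=1000):
        """Successive over-relaxation; omega = 1 reduces to Gauss-Seidel.
        Returns (solution, iterations used)."""
        n = len(b)
        x = [0.0] * n
        for it in range(1, max_iter + 1):
            diff = 0.0
            for i in range(n):
                s = sum(A[i][j] * x[j] for j in range(n) if j != i)
                gs = (b[i] - s) / A[i][i]          # Gauss-Seidel value
                new = x[i] + omega * (gs - x[i])   # over-relaxed update
                diff = max(diff, abs(new - x[i]))
                x[i] = new
            if diff < tol:
                return x, it
        return x, max_iter

    # 1-D Poisson-like tridiagonal system [-1, 2, -1]
    n = 10
    A = [[0.0] * n for _ in range(n)]
    for i in range(n):
        A[i][i] = 2.0
        if i > 0:
            A[i][i - 1] = -1.0
        if i < n - 1:
            A[i][i + 1] = -1.0
    b = [0.0] * n
    b[0] = b[-1] = 1.0                   # exact solution: all ones

    rho = math.cos(math.pi / (n + 1))    # Jacobi spectral radius for this A
    omega_opt = 2.0 / (1.0 + math.sqrt(1.0 - rho * rho))
    x_gs, it_gs = sor_solve(A, b, 1.0)
    x_sor, it_sor = sor_solve(A, b, omega_opt)
    ```

    On this system the optimal factor is about 1.56 and SOR reaches the tolerance in a small fraction of the Gauss-Seidel iteration count, mirroring the speedup the abstract reports.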

  9. A kernel function method for computing steady and oscillatory supersonic aerodynamics with interference.

    NASA Technical Reports Server (NTRS)

    Cunningham, A. M., Jr.

    1973-01-01

    The method presented uses a collocation technique with the nonplanar kernel function to solve supersonic lifting surface problems with and without interference. A set of pressure functions are developed based on conical flow theory solutions which account for discontinuities in the supersonic pressure distributions. These functions permit faster solution convergence than is possible with conventional supersonic pressure functions. An improper integral of a 3/2 power singularity along the Mach hyperbola of the nonplanar supersonic kernel function is described and treated. The method is compared with other theories and experiment for a variety of cases.

  10. Oscillatory supersonic kernel function method for interfering surfaces

    NASA Technical Reports Server (NTRS)

    Cunningham, A. M., Jr.

    1974-01-01

    In the method presented in this paper, a collocation technique is used with the nonplanar supersonic kernel function to solve multiple lifting surface problems with interference in steady or oscillatory flow. The pressure functions used are based on conical flow theory solutions and provide faster solution convergence than is possible with conventional functions. In the application of the nonplanar supersonic kernel function, an improper integral of a 3/2 power singularity along the Mach hyperbola is described and treated. The method is compared with other theories and experiment for two wing-tail configurations in steady and oscillatory flow.

  11. Hundreds of Genes Experienced Convergent Shifts in Selective Pressure in Marine Mammals

    PubMed Central

    Chikina, Maria; Robinson, Joseph D.; Clark, Nathan L.

    2016-01-01

    Mammal species have made the transition to the marine environment several times, and their lineages represent one of the classical examples of convergent evolution in morphological and physiological traits. Nevertheless, the genetic mechanisms of their phenotypic transition are poorly understood, and investigations into convergence at the molecular level have been inconclusive. While past studies have searched for convergent changes at specific amino acid sites, we propose an alternative strategy to identify those genes that experienced convergent changes in their selective pressures, visible as changes in evolutionary rate specifically in the marine lineages. We present evidence of widespread convergence at the gene level by identifying parallel shifts in evolutionary rate during three independent episodes of mammalian adaptation to the marine environment. Hundreds of genes accelerated their evolutionary rates in all three marine mammal lineages during their transition to aquatic life. These marine-accelerated genes are highly enriched for pathways that control recognized functional adaptations in marine mammals, including muscle physiology, lipid-metabolism, sensory systems, and skin and connective tissue. The accelerations resulted from both adaptive evolution as seen in skin and lung genes, and loss of function as in gustatory and olfactory genes. In regard to sensory systems, this finding provides further evidence that reduced senses of taste and smell are ubiquitous in marine mammals. Our analysis demonstrates the feasibility of identifying genes underlying convergent organism-level characteristics on a genome-wide scale and without prior knowledge of adaptations, and provides a powerful approach for investigating the physiological functions of mammalian genes. PMID:27329977

  12. Spatially weighted mutual information image registration for image guided radiation therapy.

    PubMed

    Park, Samuel B; Rhee, Frank C; Monroe, James I; Sohn, Jason W

    2010-09-01

    To develop a new metric for image registration that incorporates the (sub)pixelwise differential importance along spatial location and to demonstrate its application for image guided radiation therapy (IGRT). It is well known that rigid-body image registration with mutual information is dependent on the size and location of the image subset on which the alignment analysis is based [the designated region of interest (ROI)]. Therefore, careful review and manual adjustments of the resulting registration are frequently necessary. Although there were some investigations of weighted mutual information (WMI), these efforts could not apply the differential importance to a particular spatial location since WMI only applies the weight to the joint histogram space. The authors developed the spatially weighted mutual information (SWMI) metric by incorporating an adaptable weight function with spatial localization into mutual information. SWMI enables the user to apply the selected transform to medically "important" areas such as tumors and critical structures, so SWMI is neither dominated by, nor neglects, the neighboring structures. Since SWMI can be utilized with any weight function form, the authors presented two examples of weight functions for IGRT application: a Gaussian-shaped weight function (GW) applied to a user-defined location and a structures-of-interest (SOI) based weight function. An image registration example using a synthesized 2D image is presented to illustrate the efficacy of SWMI. The convergence and feasibility of the registration method as applied to clinical imaging is illustrated by fusing a prostate treatment planning CT with a clinical cone beam CT (CBCT) image set acquired for patient alignment. Forty-one trials were run to test the speed of convergence. The authors also applied SWMI registration using two types of weight functions to two head and neck cases and a prostate case with clinically acquired CBCT/MVCT image sets.
The SWMI registration with a Gaussian weight function (SWMI-GW) was tested between two different imaging modalities: CT and MRI image sets. SWMI-GW converges 10% faster than registration using mutual information with an ROI. SWMI-GW as well as SWMI with SOI-based weight function (SWMI-SOI) shows better compensation of the target organ's deformation and neighboring critical organs' deformation. SWMI-GW was also used to successfully fuse MRI and CT images. Rigid-body image registration using our SWMI-GW and SWMI-SOI as cost functions can achieve better registration results in (a) designated image region(s) as well as faster convergence. With the theoretical foundation established, we believe SWMI could be extended to larger clinical testing.
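    The record above describes SWMI only in prose. As a rough sketch of the core idea (not the authors' implementation), the change relative to ordinary mutual information is that each pixel contributes its spatial weight, rather than a unit count, to the joint histogram; a Gaussian weight centred on a user-defined location is one of the two weight forms mentioned:

```python
import numpy as np

def spatially_weighted_mi(fixed, moving, weight, bins=32):
    """Mutual information in which each pixel pair contributes weight[x]
    (instead of 1) to the joint intensity histogram."""
    hist, _, _ = np.histogram2d(fixed.ravel(), moving.ravel(),
                                bins=bins, weights=weight.ravel())
    pxy = hist / hist.sum()
    px = pxy.sum(axis=1, keepdims=True)   # weighted marginal of the fixed image
    py = pxy.sum(axis=0, keepdims=True)   # weighted marginal of the moving image
    nz = pxy > 0
    return float((pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])).sum())

def gaussian_weight(shape, center, sigma):
    """Gaussian-shaped weight (GW) centred on a user-chosen pixel."""
    yy, xx = np.mgrid[:shape[0], :shape[1]]
    return np.exp(-((yy - center[0]) ** 2 + (xx - center[1]) ** 2)
                  / (2.0 * sigma ** 2))
```

    With a constant weight this reduces to ordinary mutual information, which is a useful sanity check before plugging it into a registration optimiser as the cost function.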

  13. An Adaptive Deghosting Method in Neural Network-Based Infrared Detectors Nonuniformity Correction

    PubMed Central

    Li, Yiyang; Jin, Weiqi; Zhu, Jin; Zhang, Xu; Li, Shuo

    2018-01-01

    The problems of the neural network-based nonuniformity correction algorithm for infrared focal plane arrays mainly concern slow convergence speed and ghosting artifacts. In general, the more stringent the inhibition of ghosting, the slower the convergence speed. The factors that affect these two problems are the estimated desired image and the learning rate. In this paper, we propose a learning rate rule that combines adaptive threshold edge detection and a temporal gate. Through the noise estimation algorithm, the adaptive spatial threshold is related to the residual nonuniformity noise in the corrected image. The proposed learning rate is used to effectively and stably suppress ghosting artifacts without slowing down the convergence speed. The performance of the proposed technique was thoroughly studied with infrared image sequences with both simulated nonuniformity and real nonuniformity. The results show that the deghosting performance of the proposed method is superior to that of other neural network-based nonuniformity correction algorithms and that the convergence speed is equivalent to the tested deghosting methods. PMID:29342857

  14. An Adaptive Deghosting Method in Neural Network-Based Infrared Detectors Nonuniformity Correction.

    PubMed

    Li, Yiyang; Jin, Weiqi; Zhu, Jin; Zhang, Xu; Li, Shuo

    2018-01-13

    The problems of the neural network-based nonuniformity correction algorithm for infrared focal plane arrays mainly concern slow convergence speed and ghosting artifacts. In general, the more stringent the inhibition of ghosting, the slower the convergence speed. The factors that affect these two problems are the estimated desired image and the learning rate. In this paper, we propose a learning rate rule that combines adaptive threshold edge detection and a temporal gate. Through the noise estimation algorithm, the adaptive spatial threshold is related to the residual nonuniformity noise in the corrected image. The proposed learning rate is used to effectively and stably suppress ghosting artifacts without slowing down the convergence speed. The performance of the proposed technique was thoroughly studied with infrared image sequences with both simulated nonuniformity and real nonuniformity. The results show that the deghosting performance of the proposed method is superior to that of other neural network-based nonuniformity correction algorithms and that the convergence speed is equivalent to the tested deghosting methods.
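    Neither record includes an algorithm listing. A toy sketch of the general scheme (not the paper's exact rule): a Scribner-style LMS update of per-pixel gain and offset in which the learning rate is gated to zero wherever the residual exceeds an adaptive, noise-scaled threshold, so strong edges do not burn in as ghosts. The MAD-based noise estimate below stands in for the paper's noise-estimation algorithm, and the temporal gate is omitted:

```python
import numpy as np

def nuc_step(frame, gain, offset, base_lr=0.05, k_sigma=3.0):
    """One LMS sweep of neural-network-style nonuniformity correction with an
    edge-gated learning rate: pixels whose residual exceeds an adaptive
    threshold are not updated, suppressing ghosting artifacts."""
    corrected = gain * frame + offset
    # Desired image: the classic 4-neighbour spatial average of the output.
    pad = np.pad(corrected, 1, mode='edge')
    desired = (pad[:-2, 1:-1] + pad[2:, 1:-1]
               + pad[1:-1, :-2] + pad[1:-1, 2:]) / 4.0
    err = corrected - desired
    # Adaptive spatial threshold from a robust (MAD) estimate of the residual noise.
    thresh = k_sigma * 1.4826 * np.median(np.abs(err - np.median(err)))
    lr = np.where(np.abs(err) > thresh, 0.0, base_lr)  # gate out strong edges
    gain = gain - lr * err * frame
    offset = offset - lr * err
    return gain, offset, corrected
```

    On a static scene with simulated fixed-pattern noise, repeated sweeps drive the residual nonuniformity down while the gated pixels are left untouched.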

  15. Analysis of Online Composite Mirror Descent Algorithm.

    PubMed

    Lei, Yunwen; Zhou, Ding-Xuan

    2017-03-01

    We study the convergence of the online composite mirror descent algorithm, which involves a mirror map to reflect the geometry of the data and a convex objective function consisting of a loss and a regularizer possibly inducing sparsity. Our error analysis provides convergence rates in terms of properties of the strongly convex differentiable mirror map and the objective function. For a class of objective functions with Hölder continuous gradients, the convergence rates of the excess (regularized) risk under polynomially decaying step sizes have the order [Formula: see text] after [Formula: see text] iterates. Our results improve the existing error analysis for the online composite mirror descent algorithm by avoiding averaging and removing boundedness assumptions, and they sharpen the existing convergence rates of the last iterate for online gradient descent without any boundedness assumptions. Our methodology mainly depends on a novel error decomposition in terms of an excess Bregman distance, refined analysis of self-bounding properties of the objective function, and the resulting one-step progress bounds.
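    The analysis above is stated abstractly. A minimal concrete member of the family it covers (an assumed setup, not taken from the paper): the Euclidean mirror map Psi(w) = ||w||^2/2, which is strongly convex, a squared loss, an l1 regularizer, and polynomially decaying step sizes eta_t = eta0 * t^(-theta), for which the composite mirror-descent update is exactly online proximal gradient with soft-thresholding:

```python
import numpy as np

def soft_threshold(v, tau):
    """Proximal operator of tau * ||.||_1."""
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def online_composite_md(data, lam=0.01, theta=0.5, eta0=0.05):
    """Online composite mirror descent with the Euclidean mirror map,
    squared loss, l1 regularizer lam*||w||_1, and steps eta_t = eta0*t^-theta.
    Returns the last iterate (no averaging), matching the setting analysed."""
    d = data[0][0].shape[0]
    w = np.zeros(d)
    for t, (x, y) in enumerate(data, start=1):
        eta = eta0 * t ** (-theta)
        grad = (w @ x - y) * x                          # gradient of the loss at (x, y)
        w = soft_threshold(w - eta * grad, eta * lam)   # composite (prox) step
    return w
```

    Returning the final iterate rather than an average mirrors the paper's focus on last-iterate convergence rates.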

  16. Strong convergence and convergence rates of approximating solutions for algebraic Riccati equations in Hilbert spaces

    NASA Technical Reports Server (NTRS)

    Ito, Kazufumi

    1987-01-01

    The linear quadratic optimal control problem on the infinite time interval for linear time-invariant systems defined on Hilbert spaces is considered. The optimal control is given in feedback form in terms of the solution pi to the associated algebraic Riccati equation (ARE). A Ritz type approximation is used to obtain a sequence pi sup N of finite dimensional approximations of the solution to ARE. A sufficient condition that shows pi sup N converges strongly to pi is obtained. Under this condition, a formula is derived which can be used to obtain a rate of convergence of pi sup N to pi. The results are demonstrated by applying the Galerkin approximation to parabolic systems and the averaging approximation to hereditary differential systems.
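    A small numerical illustration in the spirit of the record, though not its operator-theoretic framework: take N-mode spectral truncations of the 1D heat equation (Laplacian eigenvalues -(k*pi)^2, with a single actuator that excites every mode equally, an illustrative choice), solve each finite-dimensional ARE by a Newton-Kleinman iteration, and watch the leading entry of pi sup N settle as N grows:

```python
import numpy as np

def solve_lyapunov(M, Q):
    """Solve M'P + P M = -Q by Kronecker vectorisation (column-major vec)."""
    n = M.shape[0]
    I = np.eye(n)
    L = np.kron(I, M.T) + np.kron(M.T, I)
    return np.linalg.solve(L, -Q.reshape(-1, order='F')).reshape((n, n), order='F')

def solve_are(A, B, Q, R, iters=30):
    """Newton-Kleinman iteration for A'P + PA - PBR^-1B'P + Q = 0.
    K0 = 0 is stabilising here because A itself is stable."""
    K = np.zeros((B.shape[1], A.shape[0]))
    for _ in range(iters):
        P = solve_lyapunov(A - B @ K, Q + K.T @ R @ K)
        K = np.linalg.solve(R, B.T @ P)
    return P

def pi_N(N):
    """ARE solution for the N-mode heat-equation truncation."""
    A = np.diag([-(k * np.pi) ** 2 for k in range(1, N + 1)])
    B = np.ones((N, 1))
    return solve_are(A, B, np.eye(N), np.eye(1))

# The leading entry of pi^N stabilises quickly as the truncation order grows:
vals = [pi_N(N)[0, 0] for N in (1, 2, 4, 8)]
```

    For N = 1 the ARE is scalar and has the closed form p = a + sqrt(a^2 + 1) with a = -pi^2, which gives an exact check on the iteration.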

  17. Galaxy Rotation and Rapid Supermassive Binary Coalescence

    NASA Astrophysics Data System (ADS)

    Holley-Bockelmann, Kelly; Khan, Fazeel Mahmood

    2015-09-01

    Galaxy mergers usher the supermassive black hole (SMBH) in each galaxy to the center of the potential, where they form an SMBH binary. The binary orbit shrinks by ejecting stars via three-body scattering, but ample work has shown that in spherical galaxy models, the binary separation stalls after ejecting all the stars in its loss cone—this is the well-known final parsec problem. However, it has been shown that SMBH binaries in non-spherical galactic nuclei harden at a nearly constant rate until reaching the gravitational wave regime. Here we use a suite of direct N-body simulations to follow SMBH binary evolution in both corotating and counterrotating flattened galaxy models. For N > 500 K, we find that the evolution of the SMBH binary is convergent and is independent of the particle number. Rotation in general increases the hardening rate of SMBH binaries even more effectively than galaxy geometry alone. SMBH binary hardening rates are similar for co- and counterrotating galaxies. In the corotating case, the center of mass of the SMBH binary settles into an orbit that is in corotation resonance with the background rotating model, and the coalescence time is roughly a few hundred Myr faster than a non-rotating flattened model. We find that counterrotation drives SMBHs to coalesce on a nearly radial orbit promptly after forming a hard binary. We discuss the implications for gravitational wave astronomy, hypervelocity star production, and the effect on the structure of the host galaxy.

  18. GALAXY ROTATION AND RAPID SUPERMASSIVE BINARY COALESCENCE

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Holley-Bockelmann, Kelly; Khan, Fazeel Mahmood, E-mail: k.holley@vanderbilt.edu

    2015-09-10

    Galaxy mergers usher the supermassive black hole (SMBH) in each galaxy to the center of the potential, where they form an SMBH binary. The binary orbit shrinks by ejecting stars via three-body scattering, but ample work has shown that in spherical galaxy models, the binary separation stalls after ejecting all the stars in its loss cone—this is the well-known final parsec problem. However, it has been shown that SMBH binaries in non-spherical galactic nuclei harden at a nearly constant rate until reaching the gravitational wave regime. Here we use a suite of direct N-body simulations to follow SMBH binary evolution in both corotating and counterrotating flattened galaxy models. For N > 500 K, we find that the evolution of the SMBH binary is convergent and is independent of the particle number. Rotation in general increases the hardening rate of SMBH binaries even more effectively than galaxy geometry alone. SMBH binary hardening rates are similar for co- and counterrotating galaxies. In the corotating case, the center of mass of the SMBH binary settles into an orbit that is in corotation resonance with the background rotating model, and the coalescence time is roughly a few hundred Myr faster than a non-rotating flattened model. We find that counterrotation drives SMBHs to coalesce on a nearly radial orbit promptly after forming a hard binary. We discuss the implications for gravitational wave astronomy, hypervelocity star production, and the effect on the structure of the host galaxy.

  19. Predicting clinical decline in progressive agrammatic aphasia and apraxia of speech.

    PubMed

    Whitwell, Jennifer L; Weigand, Stephen D; Duffy, Joseph R; Clark, Heather M; Strand, Edythe A; Machulda, Mary M; Spychalla, Anthony J; Senjem, Matthew L; Jack, Clifford R; Josephs, Keith A

    2017-11-28

    To determine whether baseline clinical and MRI features predict rate of clinical decline in patients with progressive apraxia of speech (AOS). Thirty-four patients with progressive AOS, with AOS either in isolation or in the presence of agrammatic aphasia, were followed up longitudinally for up to 4 visits, with clinical testing and MRI at each visit. Linear mixed-effects regression models including all visits (n = 94) were used to assess baseline clinical and MRI variables that predict rate of worsening of aphasia, motor speech, parkinsonism, and behavior. Clinical predictors included baseline severity and AOS type. MRI predictors included baseline frontal, premotor, motor, and striatal gray matter volumes. More severe parkinsonism at baseline was associated with faster rate of decline in parkinsonism. Patients with predominant sound distortions (AOS type 1) showed faster rates of decline in aphasia and motor speech, while patients with segmented speech (AOS type 2) showed faster rates of decline in parkinsonism. On MRI, we observed trends for fastest rates of decline in aphasia in patients with relatively small left, but preserved right, Broca area and precentral cortex. Bilateral reductions in lateral premotor cortex were associated with faster rates of decline of behavior. No associations were observed between volumes and decline in motor speech or parkinsonism. Rate of decline of each of the 4 clinical features assessed was associated with different baseline clinical and regional MRI predictors. Our findings could help improve prognostic estimates for these patients. © 2017 American Academy of Neurology.

  20. An implicit iterative algorithm with a tuning parameter for Itô Lyapunov matrix equations

    NASA Astrophysics Data System (ADS)

    Zhang, Ying; Wu, Ai-Guo; Sun, Hui-Jie

    2018-01-01

    In this paper, an implicit iterative algorithm is proposed for solving a class of Lyapunov matrix equations arising in Itô stochastic linear systems. A tuning parameter is introduced in this algorithm, and thus the convergence rate of the algorithm can be changed. Some conditions are presented such that the developed algorithm is convergent. In addition, an explicit expression is also derived for the optimal tuning parameter, which guarantees that the obtained algorithm achieves its fastest convergence rate. Finally, numerical examples are employed to illustrate the effectiveness of the given algorithm.
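    A simple relative of such a scheme (not the paper's implicit algorithm) shows how a tuning parameter enters. For the Itô Lyapunov equation A'P + PA + C'PC + Q = 0, solve the deterministic part each sweep with the stochastic term lagged, then relax with a parameter omega; omega = 1 recovers the plain fixed-point sweep, while other values change the contraction factor and hence the convergence rate:

```python
import numpy as np

def ito_lyapunov(A, C, Q, omega=1.0, iters=200):
    """Relaxed fixed-point iteration for A'P + PA + C'PC + Q = 0.
    Each sweep solves A'P + PA = -(Q + C'P_k C) by Kronecker vectorisation,
    then blends: P_{k+1} = (1 - omega) P_k + omega * P_new."""
    n = A.shape[0]
    I = np.eye(n)
    L = np.kron(I, A.T) + np.kron(A.T, I)    # matrix of the Lyapunov operator
    P = np.zeros((n, n))
    for _ in range(iters):
        rhs = -(Q + C.T @ P @ C).reshape(-1, order='F')
        P_new = np.linalg.solve(L, rhs).reshape((n, n), order='F')
        P = (1.0 - omega) * P + omega * P_new
    return P
```

    Convergence requires A stable and the stochastic term C'PC small enough relative to it, which holds in the test below.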

  1. Variation in promiscuity and sexual selection drives avian rate of Faster-Z evolution.

    PubMed

    Wright, Alison E; Harrison, Peter W; Zimmer, Fabian; Montgomery, Stephen H; Pointer, Marie A; Mank, Judith E

    2015-03-01

    Higher rates of coding sequence evolution have been observed on the Z chromosome relative to the autosomes across a wide range of species. However, despite a considerable body of theory, we lack empirical evidence explaining variation in the strength of the Faster-Z Effect. To assess the magnitude and drivers of Faster-Z Evolution, we assembled six de novo transcriptomes, spanning 90 million years of avian evolution. Our analysis combines expression, sequence and polymorphism data with measures of sperm competition and promiscuity. In doing so, we present the first empirical evidence demonstrating the positive relationship between Faster-Z Effect and measures of promiscuity, and therefore variance in male mating success. Our results from multiple lines of evidence indicate that selection is less effective on the Z chromosome, particularly in promiscuous species, and that Faster-Z Evolution in birds is due primarily to genetic drift. Our results reveal the power of mating system and sexual selection in shaping broad patterns in genome evolution. © 2015 John Wiley & Sons Ltd.

  2. Adaptive-gain fast super-twisting sliding mode fault tolerant control for a reusable launch vehicle in reentry phase.

    PubMed

    Zhang, Yao; Tang, Shengjing; Guo, Jie

    2017-11-01

    In this paper, a novel adaptive-gain fast super-twisting (AGFST) sliding mode attitude control synthesis is carried out for a reusable launch vehicle (RLV) subject to actuator faults and unknown disturbances. According to the fast nonsingular terminal sliding mode surface (FNTSMS) and adaptive-gain fast super-twisting algorithm, an adaptive fault tolerant control law for attitude stabilization is derived to protect against actuator faults and unknown uncertainties. Firstly, a second-order nonlinear control-oriented model for the RLV is established by the feedback linearization method. On this basis, a fast nonsingular terminal sliding mode (FNTSM) manifold is designed, which provides fast finite-time global convergence and avoids the singularity problem as well as the chattering phenomenon. Based on the merits of the standard super-twisting (ST) algorithm and a fast reaching law with adaptation, a novel adaptive-gain fast super-twisting (AGFST) algorithm is proposed for the finite-time fault tolerant attitude control problem of the RLV without any knowledge of the bounds of uncertainties and actuator faults. The important features of the AGFST algorithm include non-overestimation of the control gains and faster convergence than the standard ST algorithm. A formal proof of the finite-time stability of the closed-loop system is derived using the Lyapunov function technique. An estimation of the convergence time and an accurate expression of the convergence region are also provided. Finally, simulations are presented to illustrate the effectiveness and superiority of the proposed control scheme. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.
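    The standard super-twisting law that the AGFST algorithm builds on can be stated compactly. A scalar toy simulation (with assumed fixed gains, not the paper's adaptive ones) of s' = u + d(t) under the ST control u = -k1*|s|^(1/2)*sign(s) + v, v' = -k2*sign(s), showing the sliding variable driven to a small neighbourhood of zero despite a bounded matched disturbance:

```python
import numpy as np

def simulate_st(k1=1.5, k2=1.1, dt=1e-3, T=10.0):
    """Euler simulation of the standard super-twisting algorithm on the
    scalar sliding dynamics s' = u + d(t).  The disturbance derivative is
    bounded by 0.8, and k2 > 0.8 satisfies the usual gain condition."""
    s, v = 1.0, 0.0
    for k in range(int(T / dt)):
        t = k * dt
        d = 0.4 * np.sin(2.0 * t)                      # bounded disturbance
        u = -k1 * np.sqrt(abs(s)) * np.sign(s) + v     # continuous ST control
        v += -k2 * np.sign(s) * dt                     # integral (twisting) term
        s += (u + d) * dt
    return s
```

    The square-root term gives the characteristic faster-than-linear reaching behaviour near s = 0 that the fast variants then sharpen further.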

  3. A description of an ‘obesogenic’ eating style that promotes higher energy intake and is associated with greater adiposity in 4.5 year-old children: Results from the GUSTO cohort

    PubMed Central

    Fogel, Anna; Goh, Ai Ting; Fries, Lisa R.; Sadananthan, Suresh Anand; Velan, S. Sendhil; Michael, Navin; Tint, Mya Thway; Fortier, Marielle Valerie; Chan, Mei Jun; Toh, Jia Ying; Chong, Yap-Seng; Tan, Kok Hian; Yap, Fabian; Shek, Lynette P.; Meaney, Michael J.; Broekman, Birit F. P.; Lee, Yung Seng; Godfrey, Keith M.; Chong, Mary Foong Fong; Forde, Ciarán G.

    2017-01-01

    Recent findings confirm that faster eating rates support higher energy intakes within a meal and are associated with increased body weight and adiposity in children. The current study sought to identify the eating behaviours that underpin faster eating rates and energy intake in children, and to investigate their variations by weight status and other individual differences. Children (N=386) from the Growing Up in Singapore towards Healthy Outcomes (GUSTO) cohort took part in a video-recorded ad libitum lunch at 4.5 years of age to measure acute energy intake. Videos were coded for three eating behaviours (bites, chews and swallows) to derive a measure of eating rate (g/min) and measures of eating microstructure: eating rate (g/min), total oral exposure (minutes), average bite size (g/bite), chews per gram, oral exposure per bite (seconds), total bites and proportion of active to total mealtime. Children’s BMIs were calculated and a subset of children underwent MRI scanning to establish abdominal adiposity. Children were grouped into faster and slower eaters, and into healthy and overweight groups to compare their eating behaviours. Results demonstrate that faster eating rates were correlated with larger average bite size (r=0.55, p<0.001), fewer chews per gram (r=-0.71, p<0.001) and shorter oral exposure time per bite (r=-0.25, p<0.001), and with higher energy intakes (r=0.61, p<0.001). Children with overweight and higher adiposity had faster eating rates (p<0.01) and higher energy intakes (p<0.01), driven by larger bite sizes (p<0.05). Eating behaviours varied by sex, ethnicity and early feeding regimes, partially attributable to BMI. We propose that these behaviours describe an ‘obesogenic eating style’ that is characterised by faster eating rates, achieved through larger bites, reduced chewing and shorter oral exposure time. 
This obesogenic eating style supports acute energy intake within a meal and is more prevalent among, though not exclusive to, children with overweight. PMID:28213204

  4. The convergence of health care financing structures: empirical evidence from OECD-countries.

    PubMed

    Leiter, Andrea M; Theurl, Engelbert

    2012-02-01

    The convergence/divergence of health care systems between countries is an interesting facet of the health care system research from a macroeconomic perspective. In this paper, we concentrate on an important dimension of every health care system, namely the convergence/divergence of health care financing (HCF). Based on data from 22 OECD countries in the time period 1970-2005, we use the public financing ratio (public financing in % of total HCF) and per capita public HCF as indicators for convergence. By applying different concepts of convergence, we find that HCF is converging. This conclusion also holds when we look at smaller subgroups of countries and shorter time periods. However, we find evidence that countries do not move towards a common mean and that the rate of convergence is decreasing over time.
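    The convergence concepts applied in the record can be made concrete. A minimal sketch on synthetic data (not the OECD panel) of two standard measures: sigma-convergence, a falling cross-country dispersion of the indicator, and beta-convergence, a negative relation between initial level and subsequent growth:

```python
import numpy as np

def sigma_convergence(panel):
    """Year-by-year cross-country dispersion (std of logs) of an HCF
    indicator; panel is (countries x years), strictly positive.
    A declining series indicates sigma-convergence."""
    return np.log(panel).std(axis=0)

def beta_convergence(panel):
    """Slope of average growth regressed on the initial log level.
    A negative slope indicates beta-convergence (catching up)."""
    y0 = np.log(panel[:, 0])
    growth = (np.log(panel[:, -1]) - y0) / (panel.shape[1] - 1)
    return np.polyfit(y0, growth, 1)[0]
```

    On a synthetic panel whose deviations from a common mean shrink each year, both measures signal convergence, matching the pattern the paper reports for public health care financing.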

  5. Convergence and divergence in a long-term old-field succession: the importance of spatial scale and species abundance.

    PubMed

    Li, Shao-Peng; Cadotte, Marc W; Meiners, Scott J; Pu, Zhichao; Fukami, Tadashi; Jiang, Lin

    2016-09-01

    Whether plant communities in a given region converge towards a particular stable state during succession has long been debated, but rarely tested at a sufficiently long time scale. By analysing a 50-year continuous study of post-agricultural secondary succession in New Jersey, USA, we show that the extent of community convergence varies with the spatial scale and species abundance classes. At the larger field scale, abundance-based dissimilarities among communities decreased over time, indicating convergence of dominant species, whereas incidence-based dissimilarities showed little temporal tend, indicating no sign of convergence. In contrast, plots within each field diverged in both species composition and abundance. Abundance-based successional rates decreased over time, whereas rare species and herbaceous plants showed little change in temporal turnover rates. Initial abandonment conditions only influenced community structure early in succession. Overall, our findings provide strong evidence for scale and abundance dependence of stochastic and deterministic processes over old-field succession. © 2016 John Wiley & Sons Ltd/CNRS.

  6. A linear recurrent kernel online learning algorithm with sparse updates.

    PubMed

    Fan, Haijin; Song, Qing

    2014-02-01

    In this paper, we propose a recurrent kernel algorithm with selectively sparse updates for online learning. The algorithm introduces a linear recurrent term in the estimation of the current output. This makes the past information reusable for updating the algorithm in the form of a recurrent gradient term. To ensure that the reuse of this recurrent gradient indeed accelerates the convergence speed, a novel hybrid recurrent training is proposed to switch on or off learning the recurrent information according to the magnitude of the current training error. Furthermore, the algorithm includes a data-dependent adaptive learning rate which can provide guaranteed system weight convergence at each training iteration. The learning rate is set as zero when the training violates the derived convergence conditions, which makes the algorithm updating process sparse. Theoretical analyses of the weight convergence are presented and experimental results show the good performance of the proposed algorithm in terms of convergence speed and estimation accuracy. Copyright © 2013 Elsevier Ltd. All rights reserved.
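    As a rough illustration of the sparse-update idea only (the recurrent gradient term and the paper's convergence-based learning-rate rule are omitted), here is a Gaussian-kernel LMS learner that skips its update entirely whenever the current training error is below a tolerance, so easy samples leave the model untouched:

```python
import numpy as np

def kernel_lms_sparse(stream, eta=0.5, gamma=1.0, tol=0.05):
    """Online Gaussian-kernel LMS with sparse updates: a new kernel centre is
    added only when the prediction error exceeds tol (i.e. the learning rate
    is effectively zero on small-error samples).  Returns the update count."""
    centers, alphas = [], []
    n_updates = 0
    for x, y in stream:
        if centers:
            k = np.exp(-gamma * np.sum((np.array(centers) - x) ** 2, axis=1))
            pred = float(np.dot(alphas, k))
        else:
            pred = 0.0
        err = y - pred
        if abs(err) > tol:            # sparse update: skip small errors
            centers.append(x)
            alphas.append(eta * err)
            n_updates += 1
    return n_updates
```

    On a stream with repeated samples, the error on a repeat shrinks geometrically, so many samples are skipped and the model stays compact.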

  7. The Structure of Working Memory Abilities across the Adult Life Span

    PubMed Central

    Hale, Sandra; Rose, Nathan S.; Myerson, Joel; Strube, Michael J; Sommers, Mitchell; Tye-Murray, Nancy; Spehar, Brent

    2010-01-01

    The present study addresses three questions regarding age differences in working memory: (1) whether performance on complex span tasks decreases as a function of age at a faster rate than performance on simple span tasks; (2) whether spatial working memory decreases at a faster rate than verbal working memory; and (3) whether the structure of working memory abilities is different for different age groups. Adults, ages 20–89 (n=388), performed three simple and three complex verbal span tasks and three simple and three complex spatial memory tasks. Performance on the spatial tasks decreased at faster rates as a function of age than performance on the verbal tasks, but within each domain, performance on complex and simple span tasks decreased at the same rates. Confirmatory factor analyses revealed that domain-differentiated models yielded better fits than models involving domain-general constructs, providing further evidence of the need to distinguish verbal and spatial working memory abilities. Regardless of which domain-differentiated model was examined, and despite the faster rates of decrease in the spatial domain, age group comparisons revealed that the factor structure of working memory abilities was highly similar in younger and older adults and showed no evidence of age-related dedifferentiation. PMID:21299306

  8. Convergence analysis of a monotonic penalty method for American option pricing

    NASA Astrophysics Data System (ADS)

    Zhang, Kai; Yang, Xiaoqi; Teo, Kok Lay

    2008-12-01

    This paper is devoted to the convergence analysis of a monotonic penalty method for pricing American options. A monotonic penalty method is first proposed to solve the complementarity problem arising from the valuation of American options, which produces a nonlinear degenerate parabolic PDE with the Black-Scholes operator. Based on variational theory, the solvability and convergence properties of this penalty approach are established in a proper infinite dimensional space. Moreover, the convergence rate of the combination of two power penalty functions is obtained.
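    A minimal numerical sketch of the penalty approach (a linear, k = 1, penalty and an explicit time-stepping scheme; the paper analyses general power penalties at the PDE level): the American constraint V >= payoff is imposed by adding lam * max(payoff - V, 0) to the Black-Scholes operator, which converts the complementarity problem into a single nonlinear PDE:

```python
import numpy as np

def american_put_penalty(K=1.0, r=0.05, sigma=0.3, T=1.0,
                         S_max=3.0, M=150, N=4000, lam=2000.0):
    """Explicit finite-difference solve (marching in time-to-maturity) of the
    penalised Black-Scholes PDE for an American put.  The penalty term
    lam*max(payoff - V, 0) pushes V back above the payoff wherever the
    early-exercise constraint binds; the penalty error is O(1/lam)."""
    S = np.linspace(0.0, S_max, M + 1)
    dS, dt = S[1] - S[0], T / N
    payoff = np.maximum(K - S, 0.0)
    V = payoff.copy()                      # terminal condition
    for _ in range(N):
        Vss = (V[2:] - 2.0 * V[1:-1] + V[:-2]) / dS ** 2
        Vs = (V[2:] - V[:-2]) / (2.0 * dS)
        pen = lam * np.maximum(payoff[1:-1] - V[1:-1], 0.0)
        V[1:-1] += dt * (0.5 * sigma ** 2 * S[1:-1] ** 2 * Vss
                         + r * S[1:-1] * Vs - r * V[1:-1] + pen)
        V[0], V[-1] = K, 0.0               # boundary conditions
    return S, V
```

    The step sizes are chosen so that both the diffusion stability limit and dt*lam < 1 hold; with these parameters the at-the-money value lands near the known American put premium over the European price of about 0.094.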

  9. Role of climate change in reforestation and nursery practices

    Treesearch

    Mary I. Williams; R. Kasten Dumroese

    2014-01-01

    Ecosystems have been adjusting to changes in climate over time, but projections are that future global climate will change at rates faster than that previously experienced in geologic time. It is not necessarily the amount of change, but rather this rate of change that is most threatening to plant species - the climate appears to be changing faster than plants can...

  10. Application of 1D Finite Element Method in Combination with Laminar Solution Method for Pipe Network Analysis

    NASA Astrophysics Data System (ADS)

    Dudar, O. I.; Dudar, E. S.

    2017-11-01

    The features of application of the one-dimensional (1D) finite element method (FEM) in combination with the laminar solutions method (LSM) for the calculation of underground ventilating networks are considered. In this case the processes of heat and mass transfer change the properties of a fluid (binary vapour-air mix). Under the action of gravitational forces, this leads to such phenomena as natural draft, local circulation, etc. The FEM relations considering the action of gravity, the mass conservation law, and the dependence of vapour-air mix properties on the thermodynamic parameters are derived so as to allow one to model the mentioned phenomena. The analogy of the elastic and plastic rod deformation processes to the processes of laminar and turbulent flow in a pipe is described. Owing to this analogy, the guaranteed convergence of the elastic solutions method for materials of plastic type means the guaranteed convergence of the LSM for any regime of a turbulent flow in a rough pipe. By means of numerical experiments the convergence rate of the FEM - LSM is investigated. This convergence rate proved to be much higher than the convergence rate of the Cross - Andriyashev method. Data of other authors on the convergence rate comparison for the finite element method, the Newton method and the gradient method are provided. These data allow one to conclude that the FEM in combination with the LSM is one of the most effective methods of calculation of hydraulic and ventilating networks. The FEM - LSM has been used to create the research application programme package “MineClimate”, allowing one to calculate the microclimate parameters in underground ventilating networks.

  11. Clockwise rotation of the Brahmaputra Valley relative to India: Tectonic convergence in the eastern Himalaya, Naga Hills, and Shillong Plateau

    NASA Astrophysics Data System (ADS)

    Vernant, P.; Bilham, R.; Szeliga, W.; Drupka, D.; Kalita, S.; Bhattacharyya, A. K.; Gaur, V. K.; Pelgay, P.; Cattin, R.; Berthet, T.

    2014-08-01

    GPS data reveal that the Brahmaputra Valley has broken from the Indian Plate and rotates clockwise relative to India about a point a few hundred kilometers west of the Shillong Plateau. The GPS velocity vectors define two distinct blocks separated by the Kopili fault upon which 2-3 mm/yr of dextral slip is observed: the Shillong block between longitudes 89 and 93°E rotating clockwise at 1.15°/Myr and the Assam block from 93.5°E to 97°E rotating at ≈1.13°/Myr. These two blocks are more than 120 km wide in a north-south sense, but they extend locally a similar distance beneath the Himalaya and Tibet. A result of these rotations is that convergence across the Himalaya east of Sikkim decreases in velocity eastward from 18 to ≈12 mm/yr and convergence between the Shillong Plateau and Bangladesh across the Dauki fault increases from 3 mm/yr in the west to >8 mm/yr in the east. This fast convergence rate is inconsistent with inferred geological uplift rates on the plateau (if a 45°N dip is assumed for the Dauki fault) unless clockwise rotation of the Shillong block has increased substantially in the past 4-8 Myr. Such acceleration is consistent with the reported recent slowing in the convergence rate across the Bhutan Himalaya. The current slip potential near Bhutan, based on present-day convergence rates and assuming no great earthquake since 1713 A.D., is now ~5.4 m, similar to the slip reported from alluvial terraces that offsets across the Main Himalayan Thrust and sufficient to sustain a Mw ≥ 8.0 earthquake in this area.

  12. Acceleration of Lateral Equilibration in Mixed Lipid Bilayers Using Replica Exchange with Solute Tempering

    PubMed Central

    2015-01-01

    The lateral heterogeneity of cellular membranes plays an important role in many biological functions such as signaling and regulating membrane proteins. This heterogeneity can result from preferential interactions between membrane components or interactions with membrane proteins. One major difficulty in molecular dynamics simulations aimed at studying the membrane heterogeneity is that lipids diffuse slowly and collectively in bilayers, and therefore, it is difficult to reach equilibrium in lateral organization in bilayer mixtures. Here, we propose the use of the replica exchange with solute tempering (REST) approach to accelerate lateral relaxation in heterogeneous bilayers. REST is based on the replica exchange method but tempers only the solute, leaving the temperature of the solvent fixed. Since the number of replicas in REST scales approximately only with the degrees of freedom in the solute, REST enables us to enhance the configuration sampling of lipid bilayers with fewer replicas, in comparison with the temperature replica exchange molecular dynamics simulation (T-REMD) where the number of replicas scales with the degrees of freedom of the entire system. We apply the REST method to a cholesterol and 1,2-dipalmitoyl-sn-glycero-3-phosphocholine (DPPC) bilayer mixture and find that the lateral distribution functions of all molecular pair types converge much faster than in the standard MD simulation. The relative diffusion rate between molecules in REST is, on average, an order of magnitude faster than in the standard MD simulation. Although REST was initially proposed to study protein folding and its efficiency in protein folding is still under debate, we find a unique application of REST to accelerate lateral equilibration in mixed lipid membranes and suggest a promising way to probe membrane lateral heterogeneity through molecular dynamics simulation. PMID:25328493

  13. Iterative Methods for the Non-LTE Transfer of Polarized Radiation: Resonance Line Polarization in One-dimensional Atmospheres

    NASA Astrophysics Data System (ADS)

    Trujillo Bueno, Javier; Manso Sainz, Rafael

    1999-05-01

    This paper shows how to generalize to non-LTE polarization transfer some operator splitting methods that were originally developed for solving unpolarized transfer problems. These are the Jacobi-based accelerated Λ-iteration (ALI) method of Olson, Auer, & Buchler and the iterative schemes based on Gauss-Seidel and successive overrelaxation (SOR) iteration of Trujillo Bueno and Fabiani Bendicho. The theoretical framework chosen for the formulation of polarization transfer problems is the quantum electrodynamics (QED) theory of Landi Degl'Innocenti, which specifies the excitation state of the atoms in terms of the irreducible tensor components of the atomic density matrix. This first paper establishes the grounds of our numerical approach to non-LTE polarization transfer by concentrating on the standard case of scattering line polarization in a gas of two-level atoms, including the Hanle effect due to a weak microturbulent and isotropic magnetic field. We begin demonstrating that the well-known Λ-iteration method leads to the self-consistent solution of this type of problem if one initializes using the ``exact'' solution corresponding to the unpolarized case. We show then how the above-mentioned splitting methods can be easily derived from this simple Λ-iteration scheme. We show that our SOR method is 10 times faster than the Jacobi-based ALI method, while our implementation of the Gauss-Seidel method is 4 times faster. These iterative schemes lead to the self-consistent solution independently of the chosen initialization. The convergence rate of these iterative methods is very high; they do not require either the construction or the inversion of any matrix, and the computing time per iteration is similar to that of the Λ-iteration method.
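
    The relative performance quoted above is easy to reproduce on a scalar model problem. The sketch below (plain NumPy, unrelated to the radiative-transfer code itself) compares iteration counts of Jacobi, Gauss-Seidel, and SOR on a 1-D Poisson system; the ordering mirrors the ranking reported in the paper:

```python
import numpy as np

def solve_iterative(A, b, method="jacobi", omega=1.8, tol=1e-8, max_iter=20000):
    """Solve A x = b by Jacobi, Gauss-Seidel, or SOR; return (x, iterations)."""
    n = len(b)
    x = np.zeros(n)
    for k in range(1, max_iter + 1):
        x_new = x.copy()
        for i in range(n):
            # Gauss-Seidel and SOR use the already-updated entries x_new[:i]
            left = x_new[:i] if method != "jacobi" else x[:i]
            s = A[i, :i] @ left + A[i, i + 1:] @ x[i + 1:]
            gs_value = (b[i] - s) / A[i, i]
            x_new[i] = x[i] + omega * (gs_value - x[i]) if method == "sor" else gs_value
        if np.linalg.norm(x_new - x, np.inf) < tol:
            return x_new, k
        x = x_new
    return x, max_iter

# 1-D Poisson model problem: tridiagonal (2, -1) matrix
n = 30
A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
b = np.ones(n)
iters = {m: solve_iterative(A, b, method=m)[1]
         for m in ("jacobi", "gauss-seidel", "sor")}
print(iters)
```

    The overrelaxation factor omega = 1.8 is close to optimal for this matrix size; the exact 10x and 4x speedups in the paper are specific to the polarized-transfer problem.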

  14. Acceleration of Lateral Equilibration in Mixed Lipid Bilayers Using Replica Exchange with Solute Tempering.

    PubMed

    Huang, Kun; García, Angel E

    2014-10-14

    The lateral heterogeneity of cellular membranes plays an important role in many biological functions such as signaling and regulating membrane proteins. This heterogeneity can result from preferential interactions between membrane components or interactions with membrane proteins. One major difficulty in molecular dynamics simulations aimed at studying the membrane heterogeneity is that lipids diffuse slowly and collectively in bilayers, and therefore, it is difficult to reach equilibrium in lateral organization in bilayer mixtures. Here, we propose the use of the replica exchange with solute tempering (REST) approach to accelerate lateral relaxation in heterogeneous bilayers. REST is based on the replica exchange method but tempers only the solute, leaving the temperature of the solvent fixed. Since the number of replicas in REST scales approximately only with the degrees of freedom in the solute, REST enables us to enhance the configuration sampling of lipid bilayers with fewer replicas, in comparison with the temperature replica exchange molecular dynamics simulation (T-REMD) where the number of replicas scales with the degrees of freedom of the entire system. We apply the REST method to a cholesterol and 1,2-dipalmitoyl-sn-glycero-3-phosphocholine (DPPC) bilayer mixture and find that the lateral distribution functions of all molecular pair types converge much faster than in the standard MD simulation. The relative diffusion rate between molecules in REST is, on average, an order of magnitude faster than in the standard MD simulation. Although REST was initially proposed to study protein folding and its efficiency in protein folding is still under debate, we find a unique application of REST to accelerate lateral equilibration in mixed lipid membranes and suggest a promising way to probe membrane lateral heterogeneity through molecular dynamics simulation.
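
    The exchange step at the heart of REST is compact enough to sketch. The toy below is a hypothetical illustration (not the authors' implementation): it applies the standard replica-exchange Metropolis criterion, but feeds it only the tempered (solute) part of the potential energy, which is what distinguishes REST from T-REMD:

```python
import math

def rest_swap_probability(beta_i, beta_j, e_solute_i, e_solute_j):
    """Metropolis acceptance for exchanging configurations between replicas
    i and j. In REST only the tempered solute energy enters the criterion;
    the untempered solvent-solvent energy cancels out of the ratio."""
    delta = (beta_i - beta_j) * (e_solute_i - e_solute_j)
    return 1.0 if delta >= 0.0 else math.exp(delta)

# hypothetical energies in kcal/mol; beta = 1.68 corresponds to roughly 300 K
p = rest_swap_probability(1.68, 1.20, -50.0, -48.0)
print(p)
```

    Production REST implementations also rescale solute-solvent cross terms; this sketch keeps only the essential bookkeeping behind the replica-count advantage described above.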

  15. Learning from adaptive neural dynamic surface control of strict-feedback systems.

    PubMed

    Wang, Min; Wang, Cong

    2015-06-01

    Learning plays an essential role in autonomous control systems. However, how to achieve learning in a nonstationary environment for nonlinear systems is a challenging problem. In this paper, we present a learning method for a class of nth-order strict-feedback systems based on adaptive dynamic surface control (DSC) technology, which achieves the human-like ability of learning by doing and doing with learned knowledge. To achieve the learning, this paper first proposes a stable adaptive DSC with auxiliary first-order filters, which ensures the boundedness of all the signals in the closed-loop system and the convergence of tracking errors in a finite time. With the help of DSC, the derivative of the filter output variable is used as the neural network (NN) input instead of traditional intermediate variables. As a result, the proposed adaptive DSC method greatly reduces the dimension of NN inputs, especially for high-order systems. After the stable DSC design, we decompose the stable closed-loop system into a series of linear time-varying perturbed subsystems. Using a recursive design, the recurrent property of NN input variables is easily verified since the complexity is overcome using DSC. Subsequently, the partial persistent excitation condition of the radial basis function NN is satisfied. By combining a state transformation, accurate approximations of the closed-loop system dynamics are recursively achieved in a local region along recurrent orbits. Then, a learning control method using the learned knowledge is proposed to achieve closed-loop stability and improved control performance. Simulation studies demonstrate that the proposed scheme can not only reuse the learned knowledge to achieve better control performance, with a faster tracking convergence rate and a smaller tracking error, but also greatly alleviate the computational burden by reducing the number and complexity of NN input variables.

  16. Cretaceous to present kinematics of the Indian, African and Seychelles plates

    NASA Astrophysics Data System (ADS)

    Eagles, Graeme; Hoang, Ha H.

    2014-01-01

    An iterative inverse model of seafloor spreading data from the Mascarene and Madagascar basins and the flanks of the Carlsberg Ridge describes a continuous history of Indian-African Plate divergence since 84 Ma. Visual-fit modelling of conjugate magnetic anomaly data from near the Seychelles platform and Laxmi Ridge documents rapid rotation of a Seychelles Plate about a nearby Euler pole in Palaeocene times. As the Euler pole migrated during this rotation, the Amirante Trench on the western side of the plate accommodated first convergence and later divergence with the African Plate. The unusual present-day morphology of the Amirante Trench and neighbouring Amirante Banks can be related to crustal thickening by thrusting and folding during the convergent phase and the subsequent development of a spreading centre with a median valley during the divergent phase. The model fits FZ trends in the north Arabian and east Somali basins, suggesting that they formed in India-Africa Plate divergence. Seafloor fabric in and between the basins shows that they initially hosted a segmented spreading ridge that accommodated slow plate divergence until 71-69 Ma, and that upon arrival of the Deccan-Réunion plume and an increase to faster plate divergence rates in the period 69-65 Ma, segments of the ridge lengthened and propagated. Ridge propagation into the Indian continental margin led first to the formation of the Laxmi Basin, which accompanied extensive volcanism onshore at the Deccan Traps and offshore at the Saurashtra High and Somnath Ridge. A second propagation episode initiated the ancestral Carlsberg Ridge at which Seychelles-India and India-Africa Plate motions were accommodated. With the completion of this propagation, the plate boundaries in the Mascarene Basin were abandoned. Seafloor spreading between this time and the present has been accommodated solely at the Carlsberg Ridge.

  17. The (B)link Between Creativity and Dopamine: Spontaneous Eye Blink Rates Predict and Dissociate Divergent and Convergent Thinking

    ERIC Educational Resources Information Center

    Chermahini, Soghra Akbari; Hommel, Bernhard

    2010-01-01

    Human creativity has been claimed to rely on the neurotransmitter dopamine, but evidence is still sparse. We studied whether individual performance (N=117) in divergent thinking (alternative uses task) and convergent thinking (remote association task) can be predicted by the individual spontaneous eye blink rate (EBR), a clinical marker of…

  18. Construct validity of ADHD/ODD rating scales: recommendations for the evaluation of forthcoming DSM-V ADHD/ODD scales.

    PubMed

    Burns, G Leonard; Walsh, James A; Servera, Mateu; Lorenzo-Seva, Urbano; Cardo, Esther; Rodríguez-Fornells, Antoni

    2013-01-01

    Exploratory structural equation modeling (SEM) was applied to a multiple indicator (26 individual symptom ratings) by multitrait (ADHD-IN, ADHD-HI and ODD factors) by multiple source (mothers, fathers and teachers) model to test the invariance, convergent and discriminant validity of the Child and Adolescent Disruptive Behavior Inventory with 872 Thai adolescents and the ADHD Rating Scale-IV and ODD scale of the Disruptive Behavior Inventory with 1,749 Spanish children. Most of the individual ADHD/ODD symptoms showed convergent and discriminant validity with the loadings and thresholds being invariant over mothers, fathers and teachers in both samples (the three latent factor means were higher for parents than teachers). The ADHD-IN, ADHD-HI and ODD latent factors demonstrated convergent and discriminant validity between mothers and fathers within the two samples. Convergent and discriminant validity between parents and teachers for the three factors was either absent (Thai sample) or only partial (Spanish sample). The application of exploratory SEM to a multiple indicator by multitrait by multisource model should prove useful for the evaluation of the construct validity of the forthcoming DSM-V ADHD/ODD rating scales.

  19. Convergence Rates of Finite Difference Stochastic Approximation Algorithms

    DTIC Science & Technology

    2016-06-01

    Convergence rates of the Kiefer-Wolfowitz algorithm and the mirror descent algorithm are studied under various updating schemes using finite differences as gradient approximations. It is shown that the convergence of these algorithms can be accelerated by controlling the implementation of the…
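
    As a concrete illustration (not drawn from the report itself), a minimal Kiefer-Wolfowitz recursion replaces the exact gradient with a central finite difference of noisy function evaluations, driven by decaying gain and perturbation sequences:

```python
import random

def kiefer_wolfowitz(noisy_f, x0, iters=5000, a=1.0, c=1.0):
    """1-D Kiefer-Wolfowitz stochastic approximation: gradient descent where
    the gradient is estimated by a central finite difference of noisy values."""
    x = x0
    for n in range(1, iters + 1):
        a_n = a / n               # gain sequence: sum a_n diverges
        c_n = c / n ** (1 / 6)    # perturbation sequence: c_n -> 0 slowly
        grad_est = (noisy_f(x + c_n) - noisy_f(x - c_n)) / (2 * c_n)
        x -= a_n * grad_est
    return x

random.seed(0)
noisy = lambda x: (x - 3.0) ** 2 + random.gauss(0.0, 0.1)   # minimum at x = 3
x_star = kiefer_wolfowitz(noisy, x0=0.0)
print(x_star)
```

    How the sequences a_n and c_n are scheduled is exactly the kind of implementation choice whose effect on the convergence rate the report studies.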

  20. Simulated Tempering Distributed Replica Sampling, Virtual Replica Exchange, and Other Generalized-Ensemble Methods for Conformational Sampling.

    PubMed

    Rauscher, Sarah; Neale, Chris; Pomès, Régis

    2009-10-13

    Generalized-ensemble algorithms in temperature space have become popular tools to enhance conformational sampling in biomolecular simulations. A random walk in temperature leads to a corresponding random walk in potential energy, which can be used to cross over energetic barriers and overcome the problem of quasi-nonergodicity. In this paper, we introduce two novel methods: simulated tempering distributed replica sampling (STDR) and virtual replica exchange (VREX). These methods are designed to address the practical issues inherent in the replica exchange (RE), simulated tempering (ST), and serial replica exchange (SREM) algorithms. RE requires a large, dedicated, and homogeneous cluster of CPUs to function efficiently when applied to complex systems. ST and SREM both have the drawback of requiring extensive initial simulations, possibly adaptive, for the calculation of weight factors or potential energy distribution functions. STDR and VREX alleviate the need for lengthy initial simulations, and for synchronization and extensive communication between replicas. Both methods are therefore suitable for distributed or heterogeneous computing platforms. We perform an objective comparison of all five algorithms in terms of both implementation issues and sampling efficiency. We use disordered peptides in explicit water as test systems, for a total simulation time of over 42 μs. Efficiency is defined in terms of both structural convergence and temperature diffusion, and we show that these definitions of efficiency are in fact correlated. Importantly, we find that ST-based methods exhibit faster temperature diffusion and correspondingly faster convergence of structural properties compared to RE-based methods. Within the RE-based methods, VREX is superior to both SREM and RE. On the basis of our observations, we conclude that ST is ideal for simple systems, while STDR is well-suited for complex systems.
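
    The simulated tempering core is a single chain that random-walks in both configuration and temperature. The toy below is a hypothetical sketch (not the STDR or VREX code); note the per-temperature weights in the temperature move, whose estimation is precisely the practical burden STDR is designed to remove:

```python
import math
import random

def accept(log_ratio):
    """Metropolis test; the exponent is clipped at 0 to avoid overflow."""
    return random.random() < math.exp(min(0.0, log_ratio))

def simulated_tempering(energy, betas, weights, steps=20000, step=0.5):
    """Alternate Metropolis configuration moves with Metropolis moves
    between neighboring temperature rungs; return rung visit counts."""
    x, k = 0.0, 0
    visits = [0] * len(betas)
    for _ in range(steps):
        # configuration move at the current inverse temperature
        x_new = x + random.uniform(-step, step)
        if accept(-betas[k] * (energy(x_new) - energy(x))):
            x = x_new
        # temperature move to a neighboring rung of the ladder
        k_new = k + random.choice((-1, 1))
        if 0 <= k_new < len(betas):
            if accept(-(betas[k_new] - betas[k]) * energy(x)
                      + (weights[k_new] - weights[k])):
                k = k_new
        visits[k] += 1
    return visits

random.seed(1)
double_well = lambda x: (x * x - 1.0) ** 2
betas = [2.0, 1.0, 0.5, 0.25]            # cold to hot
visits = simulated_tempering(double_well, betas, [0.0] * len(betas))
print(visits)
```

    Flat weights suffice for this toy; for realistic systems, uniform rung occupation requires weights close to the dimensionless free energies beta_k * F_k, which must normally be estimated in advance.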

  1. A parallel algorithm for the eigenvalues and eigenvectors for a general complex matrix

    NASA Technical Reports Server (NTRS)

    Shroff, Gautam

    1989-01-01

    A new parallel Jacobi-like algorithm is developed for computing the eigenvalues of a general complex matrix. Most parallel methods for this problem typically display only linear convergence. Sequential norm-reducing algorithms also exist, and they display quadratic convergence in most cases. The new algorithm is a parallel form of the norm-reducing algorithm due to Eberlein. It is proven that the asymptotic convergence rate of this algorithm is quadratic. Numerical experiments are presented which demonstrate the quadratic convergence of the algorithm, and certain situations where the convergence is slow are also identified. The algorithm promises to be very competitive on a variety of parallel architectures.
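
    The symmetric real case conveys the underlying idea: each sweep annihilates off-diagonal entries with 2x2 plane rotations, and the norm-reducing Eberlein variant referenced above extends this to general complex matrices. The sketch below is an illustrative classical cyclic Jacobi, not the parallel algorithm of the paper:

```python
import math
import numpy as np

def jacobi_eigenvalues(A, sweeps=10):
    """Cyclic Jacobi for a real symmetric matrix: repeatedly zero each
    off-diagonal pair with a plane rotation; the diagonal converges to
    the eigenvalues."""
    A = A.copy().astype(float)
    n = A.shape[0]
    for _ in range(sweeps):
        for p in range(n - 1):
            for q in range(p + 1, n):
                if abs(A[p, q]) < 1e-15:
                    continue
                # rotation angle that annihilates A[p, q] in J.T @ A @ J
                theta = 0.5 * math.atan2(2 * A[p, q], A[p, p] - A[q, q])
                c, s = math.cos(theta), math.sin(theta)
                J = np.eye(n)
                J[p, p] = J[q, q] = c
                J[p, q], J[q, p] = -s, s
                A = J.T @ A @ J
    return np.sort(np.diag(A))

rng = np.random.default_rng(0)
M = rng.standard_normal((5, 5))
S = (M + M.T) / 2
print(jacobi_eigenvalues(S))
```

    Each rotation partially restores previously zeroed entries, which is why sweeps are repeated; asymptotically the off-diagonal norm decays quadratically per sweep, matching the convergence rate discussed above.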

  2. Stochastic Mixing Model with Power Law Decay of Variance

    NASA Technical Reports Server (NTRS)

    Fedotov, S.; Ihme, M.; Pitsch, H.

    2003-01-01

    Here we present a simple stochastic mixing model based on the law of large numbers (LLN). The reason why the LLN is involved in our formulation of the mixing problem is that the random conserved scalar c = c(t,x(t)) appears to behave as a sample mean. It converges to the mean value mu, while the variance sigma(sup 2)(sub c)(t) decays approximately as t(exp -1). Since the variance of the scalar decays faster than that of a sample mean (the decay exponent is typically greater than unity), we will introduce some non-linear modifications into the corresponding pdf-equation. The main idea is to develop a robust model which is independent of restrictive assumptions about the shape of the pdf. The remainder of this paper is organized as follows. In Section 2 we derive the integral equation from a stochastic difference equation describing the evolution of the pdf of a passive scalar in time. The stochastic difference equation introduces an exchange rate gamma(sub n), which we model in a first step as a deterministic function. In a second step, we generalize gamma(sub n) as a stochastic variable taking fluctuations in the inhomogeneous environment into account. In Section 3 we solve the non-linear integral equation numerically and analyze the influence of the different parameters on the decay rate. The paper finishes with a conclusion.
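
    The t(exp -1) decay invoked here is the ordinary behavior of a sample mean, which a quick numerical check (illustrative only) makes concrete:

```python
import numpy as np

# Variance of the running mean over many independent runs decays ~ 1/t
rng = np.random.default_rng(42)
samples = rng.standard_normal((2000, 1000))    # 2000 runs, 1000 steps each
running_means = np.cumsum(samples, axis=1) / np.arange(1, 1001)
var_10 = running_means[:, 9].var()             # variance at t = 10
var_1000 = running_means[:, 999].var()         # variance at t = 1000
print(var_10, var_1000)
```

    With unit-variance samples the two values sit near 0.1 and 0.001, a ratio of about 100 for a 100-fold increase in t.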

  3. Convergence in Reports of Adolescents' Psychopathology: A Focus on Disorganized Attachment and Reflective Functioning.

    PubMed

    Borelli, Jessica L; Palmer, Alexandra; Vanwoerden, Salome; Sharp, Carla

    2017-12-13

    Although convergence in parent-youth reports of adolescent psychopathology is critical for treatment planning, research documents a pervasive lack of agreement in ratings of adolescents' symptoms. Attachment insecurity (particularly disorganized attachment) and impoverished reflective functioning (RF) are 2 theoretically implicated predictors of low convergence that have not been examined in the literature. In a cross-sectional investigation of adolescents receiving inpatient psychiatric treatment, we examined whether disorganized attachment and low (adolescent and parent) RF were associated with patterns of convergence in adolescent internalizing and externalizing symptoms. Compared with organized adolescents, disorganized adolescents had lower parent-youth convergence in reports of their internalizing symptoms and higher convergence in reports of their externalizing symptoms; low adolescent self-focused RF was associated with low convergence in parent-adolescent reports of internalizing symptoms, whereas low adolescent global RF was associated with high convergence in parent-adolescent reports of externalizing symptoms. Among adolescents receiving inpatient psychiatric treatment, disorganized attachment and lower RF were associated with weaker internalizing symptom convergence and greater externalizing symptom convergence, which, if replicated, could inform assessment strategies and treatment planning in this setting.

  4. On conforming mixed finite element methods for incompressible viscous flow problems

    NASA Technical Reports Server (NTRS)

    Gunzburger, M. D; Nicolaides, R. A.; Peterson, J. S.

    1982-01-01

    The application of conforming mixed finite element methods to obtain approximate solutions of linearized Navier-Stokes equations is examined. Attention is given to the convergence rates of various finite element approximations of the pressure and the velocity field. The optimality of the convergence rates are addressed in terms of comparisons of the approximation convergence to a smooth solution in relation to the best approximation available for the finite element space used. Consideration is also devoted to techniques for efficient use of a Gaussian elimination algorithm to obtain a solution to a system of linear algebraic equations derived by finite element discretizations of linear partial differential equations.

  5. Simultaneous measurement of bacterial flagellar rotation rate and swimming speed.

    PubMed Central

    Magariyama, Y; Sugiyama, S; Muramoto, K; Kawagishi, I; Imae, Y; Kudo, S

    1995-01-01

    Swimming speeds and flagellar rotation rates of individual free-swimming Vibrio alginolyticus cells were measured simultaneously by laser dark-field microscopy at 25, 30, and 35 degrees C. A roughly linear relation between swimming speed and flagellar rotation rate was observed. The ratio of swimming speed to flagellar rotation rate was 0.113 microns, which indicated that a cell progressed by 7% of the pitch of the flagellar helix during one flagellar rotation. At each temperature, however, swimming speed had a tendency to saturate at high flagellar rotation rate. That is, the cell with a faster-rotating flagellum did not always swim faster. To analyze the bacterial motion, we proposed a model in which the torque characteristics of the flagellar motor were considered. The model could be analytically solved, and it qualitatively explained the experimental results. The discrepancy between the experimental and the calculated ratios of swimming speed to flagellar rotation rate was about 20%. The apparent saturation in swimming speed was considered to be caused by shorter flagella that rotated faster but produced less propelling force. PMID:8580359
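
    The quoted numbers implicitly fix the helical pitch; a small arithmetic check, using only values stated in the abstract:

```python
# a cell advances 0.113 um per revolution, stated to be 7% of the pitch
advance_per_rev_um = 0.113
fraction_of_pitch = 0.07
pitch_um = advance_per_rev_um / fraction_of_pitch
print(round(pitch_um, 2))   # implied pitch of roughly 1.6 um
```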

  6. Mutual coupling, channel model, and BER for curvilinear antenna arrays

    NASA Astrophysics Data System (ADS)

    Huang, Zhiyong

    This dissertation introduces a wireless communications system with an adaptive beam-former and investigates its performance with different antenna arrays. Mutual coupling, real antenna elements and channel models are included to examine the system performance. In a beamforming system, mutual coupling (MC) among the elements can significantly degrade the system performance. However, MC effects can be compensated if an accurate model of mutual coupling is available. A mutual coupling matrix model is utilized to compensate mutual coupling in the beamforming of a uniform circular array (UCA). Its performance is compared with other models in uplink and downlink beamforming scenarios. In addition, the predictions are compared with measurements and verified with results from full-wave simulations. In order to accurately investigate the minimum mean-square-error (MSE) of an adaptive array in MC, two different noise models, the environmental and the receiver noise, are modeled. The minimum MSEs with and without data domain MC compensation are analytically compared. The influence of mutual coupling on the convergence is also examined. In addition, the weight compensation method is proposed to attain the desired array pattern. Adaptive arrays with different geometries are implemented with the minimum MSE algorithm in the wireless communications system to combat interference at the same frequency. The bit-error-rate (BER) of systems with UCA, uniform rectangular array (URA) and UCA with center element are investigated in additive white Gaussian noise plus well-separated signals or random direction signals scenarios. The output SINR of an adaptive array with multiple interferers is analytically examined. The influence of the adaptive algorithm convergence on the BER is investigated. The UCA is then investigated in a narrowband Rician fading channel. The channel model is built and the space correlations are examined. 
The influence of the number of signal paths, number of the interferers, Doppler spread and convergence are investigated. The tracking mode is introduced to the adaptive array system, and it further improves the BER. The benefit of using faster data rate (wider bandwidth) is discussed. In order to have better performance in a 3D space, the geometries of uniform spherical array (USAs) are presented and different configurations of USAs are discussed. The LMS algorithm based on temporal a priori information is applied to UCAs and USAs to beamform the patterns. Their performances are compared based on simulation results. Based on the analytical and simulation results, it can be concluded that mutual coupling slightly influences the performance of the adaptive array in communication systems. In addition, arrays with curvilinear geometries perform well in AWGN and fading channels.
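
    The LMS weight update underlying the adaptive beamforming above is only a few lines. The sketch below is a generic complex LMS trained on known symbols; the array geometry, arrival angles, and signal powers are invented for illustration and are not taken from the dissertation:

```python
import numpy as np

def lms_beamformer(X, d, mu=0.005):
    """Complex LMS: for each snapshot x, output y = w^H x, error e = d - y,
    and weight update w <- w + mu * conj(e) * x (stochastic gradient descent)."""
    w = np.zeros(X.shape[1], dtype=complex)
    for k in range(X.shape[0]):
        e = d[k] - np.vdot(w, X[k])
        w = w + mu * np.conj(e) * X[k]
    return w

rng = np.random.default_rng(7)
n_elems, n_snap = 8, 2000
# half-wavelength uniform linear array steering vectors (hypothetical angles)
a_sig = np.exp(1j * np.pi * np.arange(n_elems) * np.sin(np.deg2rad(10)))
a_int = np.exp(1j * np.pi * np.arange(n_elems) * np.sin(np.deg2rad(-40)))
s = rng.choice([-1, 1], n_snap).astype(complex)         # training symbols
i_sig = 2 * (rng.standard_normal(n_snap) + 1j * rng.standard_normal(n_snap))
noise = 0.1 * (rng.standard_normal((n_snap, n_elems))
               + 1j * rng.standard_normal((n_snap, n_elems)))
X = np.outer(s, a_sig) + np.outer(i_sig, a_int) + noise
w = lms_beamformer(X, s)
print(abs(np.vdot(w, a_sig)), abs(np.vdot(w, a_int)))   # signal vs interferer gain
```

    The converged weights should pass the desired direction while attenuating the interferer; the step size mu trades convergence speed against steady-state misadjustment, the convergence effect on BER examined above.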

  7. Analysis on Inter-Provincial Disparities of China's Rural Education and Convergence Rate: Empirical Analysis on 31 Provinces' (Municipalities') Panel Data from 2001 to 2008

    ERIC Educational Resources Information Center

    Xie, Tongwei

    2011-01-01

    Purpose: This article aims to analyze inter-provincial disparities of rural education and the convergence rate, and to discuss the effects of compulsory education reform after 2001. Design/methodology/approach: The article estimates the rural average education years and education Gini coefficients of China's 31 provinces (municipalities) beside…

  8. Numerical Computation of Subsonic Conical Diffuser Flows with Nonuniform Turbulent Inlet Conditions

    DTIC Science & Technology

    1977-09-01

    Gauss-Seidel point iteration method; factors affecting the rate of convergence of the point iteration method. …can be solved in several ways. For simplicity, a standard Gauss-Seidel iteration method is used to obtain the solution. The method updates the… The advantage of using the Gauss-Seidel point iteration method to…

  9. A low-complexity 2-point step size gradient projection method with selective function evaluations for smoothed total variation based CBCT reconstructions

    NASA Astrophysics Data System (ADS)

    Song, Bongyong; Park, Justin C.; Song, William Y.

    2014-11-01

    The Barzilai-Borwein (BB) 2-point step size gradient method is receiving attention for accelerating Total Variation (TV) based CBCT reconstructions. In order to become truly viable for clinical applications, however, its convergence property needs to be properly addressed. We propose a novel fast converging gradient projection BB method that requires ‘at most one function evaluation’ in each iterative step. This Selective Function Evaluation method, referred to as GPBB-SFE in this paper, exhibits the desired convergence property when it is combined with a ‘smoothed TV’ or any other differentiable prior. This way, the proposed GPBB-SFE algorithm offers fast and guaranteed convergence to the desired 3DCBCT image with minimal computational complexity. We first applied this algorithm to a Shepp-Logan numerical phantom. We then applied it to a CatPhan 600 physical phantom (The Phantom Laboratory, Salem, NY) and a clinically-treated head-and-neck patient, both acquired from the TrueBeam™ system (Varian Medical Systems, Palo Alto, CA). Furthermore, we accelerated the reconstruction by implementing the algorithm on an NVIDIA GTX 480 GPU card. We first compared GPBB-SFE with three recently proposed BB-based CBCT reconstruction methods available in the literature using the Shepp-Logan numerical phantom with 40 projections. It is found that GPBB-SFE shows either faster convergence speed/time or superior convergence property compared to existing BB-based algorithms. With the CatPhan 600 physical phantom, the GPBB-SFE algorithm requires only 3 function evaluations in 30 iterations and reconstructs the standard, 364-projection FDK reconstruction quality image using only 60 projections. We then applied the algorithm to a clinically-treated head-and-neck patient. It was observed that the GPBB-SFE algorithm requires only 18 function evaluations in 30 iterations. 
Compared with the FDK algorithm with 364 projections, the GPBB-SFE algorithm produces visibly equivalent quality CBCT image for the head-and-neck patient with only 180 projections, in 131.7 s, further supporting its clinical applicability.
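
    The BB step itself is just two inner products. The sketch below is a hypothetical projected-gradient toy on nonnegative least squares, not the GPBB-SFE algorithm (which adds the selective function-evaluation safeguard and a smoothed-TV objective):

```python
import numpy as np

def gpbb(grad, project, x0, iters=100):
    """Projected gradient descent with the Barzilai-Borwein 2-point step
    alpha = (s.s)/(s.y), where s = x_k - x_{k-1} and y = g_k - g_{k-1}."""
    x = project(x0)
    g = grad(x)
    alpha = 1e-3                        # conservative first step
    for _ in range(iters):
        x_new = project(x - alpha * g)
        g_new = grad(x_new)
        s, y = x_new - x, g_new - g
        sy = s @ y
        alpha = (s @ s) / sy if sy > 1e-12 else 1e-3
        x, g = x_new, g_new
    return x

# toy problem: min ||A x - b||^2 subject to x >= 0
rng = np.random.default_rng(3)
A = rng.standard_normal((30, 10))
x_true = np.abs(rng.standard_normal(10))
b = A @ x_true
x_hat = gpbb(lambda x: 2 * A.T @ (A @ x - b), lambda x: np.maximum(x, 0.0),
             np.zeros(10))
print(np.linalg.norm(A @ x_hat - b))
```

    The BB step is nonmonotone, which is why safeguards such as the at-most-one function evaluation per iteration of GPBB-SFE matter for guaranteed convergence.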

  10. A low-complexity 2-point step size gradient projection method with selective function evaluations for smoothed total variation based CBCT reconstructions.

    PubMed

    Song, Bongyong; Park, Justin C; Song, William Y

    2014-11-07

    The Barzilai-Borwein (BB) 2-point step size gradient method is receiving attention for accelerating Total Variation (TV) based CBCT reconstructions. In order to become truly viable for clinical applications, however, its convergence property needs to be properly addressed. We propose a novel fast converging gradient projection BB method that requires 'at most one function evaluation' in each iterative step. This Selective Function Evaluation method, referred to as GPBB-SFE in this paper, exhibits the desired convergence property when it is combined with a 'smoothed TV' or any other differentiable prior. This way, the proposed GPBB-SFE algorithm offers fast and guaranteed convergence to the desired 3DCBCT image with minimal computational complexity. We first applied this algorithm to a Shepp-Logan numerical phantom. We then applied it to a CatPhan 600 physical phantom (The Phantom Laboratory, Salem, NY) and a clinically-treated head-and-neck patient, both acquired from the TrueBeam™ system (Varian Medical Systems, Palo Alto, CA). Furthermore, we accelerated the reconstruction by implementing the algorithm on an NVIDIA GTX 480 GPU card. We first compared GPBB-SFE with three recently proposed BB-based CBCT reconstruction methods available in the literature using the Shepp-Logan numerical phantom with 40 projections. It is found that GPBB-SFE shows either faster convergence speed/time or superior convergence property compared to existing BB-based algorithms. With the CatPhan 600 physical phantom, the GPBB-SFE algorithm requires only 3 function evaluations in 30 iterations and reconstructs the standard, 364-projection FDK reconstruction quality image using only 60 projections. We then applied the algorithm to a clinically-treated head-and-neck patient. It was observed that the GPBB-SFE algorithm requires only 18 function evaluations in 30 iterations. 
Compared with the FDK algorithm with 364 projections, the GPBB-SFE algorithm produces visibly equivalent quality CBCT image for the head-and-neck patient with only 180 projections, in 131.7 s, further supporting its clinical applicability.

  11. Depressurization and two-phase flow of water containing high levels of dissolved nitrogen gas

    NASA Technical Reports Server (NTRS)

    Simoneau, R. J.

    1981-01-01

    Depressurization of water containing various concentrations of dissolved nitrogen gas was studied. In a nonflow depressurization experiment, water with very high nitrogen content was depressurized at rates from 0.09 to 0.50 MPa per second and a metastable behavior which was a strong function of the depressurization rate was observed. Flow experiments were performed in an axisymmetric, converging diverging nozzle, a two dimensional, converging nozzle with glass sidewalls, and a sharp edge orifice. The converging diverging nozzle exhibited choked flow behavior even at nitrogen concentration levels as low as 4 percent of the saturation level. The flow rates were independent of concentration level. Flow in the two dimensional, converging, visual nozzle appeared to have a sufficient pressure drop at the throat to cause nitrogen to come out of solution, but choking occurred further downstream. The orifice flow motion pictures showed considerable oscillation downstream of the orifice and parallel to the flow. Nitrogen bubbles appeared in the flow at back pressures as high as 3.28 MPa, and the level at which bubbles were no longer visible was a function of nitrogen concentration.

  12. A Critical Study of Agglomerated Multigrid Methods for Diffusion

    NASA Technical Reports Server (NTRS)

    Nishikawa, Hiroaki; Diskin, Boris; Thomas, James L.

    2011-01-01

    Agglomerated multigrid techniques used in unstructured-grid methods are studied critically for a model problem representative of laminar diffusion in the incompressible limit. The studied target-grid discretizations and discretizations used on agglomerated grids are typical of current node-centered formulations. Agglomerated multigrid convergence rates are presented using a range of two- and three-dimensional randomly perturbed unstructured grids for simple geometries with isotropic and stretched grids. Two agglomeration techniques are used within an overall topology-preserving agglomeration framework. The results show that multigrid with an inconsistent coarse-grid scheme using only the edge terms (also referred to in the literature as a thin-layer formulation) provides considerable speedup over single-grid methods but its convergence deteriorates on finer grids. Multigrid with a Galerkin coarse-grid discretization using piecewise-constant prolongation and a heuristic correction factor is slower and also grid-dependent. In contrast, grid-independent convergence rates are demonstrated for multigrid with consistent coarse-grid discretizations. Convergence rates of multigrid cycles are verified with quantitative analysis methods in which parts of the two-grid cycle are replaced by their idealized counterparts.
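
    For readers new to the method, the structure being analyzed (smooth, restrict, correct on a coarse grid, prolong, smooth) is easiest to see in one dimension. The sketch below is a textbook two-grid correction scheme for 1-D Poisson, illustrative only and unrelated to the agglomeration machinery:

```python
import numpy as np

def jacobi_smooth(u, f, h, sweeps=3, omega=2 / 3):
    """Damped Jacobi smoothing for the 1-D Poisson problem -u'' = f."""
    for _ in range(sweeps):
        u[1:-1] = (1 - omega) * u[1:-1] + omega * 0.5 * (
            u[:-2] + u[2:] + h * h * f[1:-1])
    return u

def residual(u, f, h):
    r = np.zeros_like(u)
    r[1:-1] = f[1:-1] - (2 * u[1:-1] - u[:-2] - u[2:]) / (h * h)
    return r

def two_grid_cycle(u, f, h):
    """Smooth, restrict the residual, solve the coarse-grid correction
    exactly, prolong it back to the fine grid, and smooth again."""
    u = jacobi_smooth(u, f, h)
    r = residual(u, f, h)
    rc = r[::2].copy()                        # restriction by injection
    hc, nc = 2 * h, rc.size
    Ac = (2 * np.eye(nc - 2) - np.eye(nc - 2, k=1)
          - np.eye(nc - 2, k=-1)) / hc ** 2
    ec = np.zeros(nc)
    ec[1:-1] = np.linalg.solve(Ac, rc[1:-1])  # exact coarse solve
    fine = np.arange(u.size)
    u = u + np.interp(fine, fine[::2], ec)    # linear prolongation
    return jacobi_smooth(u, f, h)

n = 129
h = 1.0 / (n - 1)
x = np.linspace(0.0, 1.0, n)
f = np.pi ** 2 * np.sin(np.pi * x)            # exact solution: sin(pi x)
u = np.zeros(n)
for _ in range(20):
    u = two_grid_cycle(u, f, h)
print(np.max(np.abs(u - np.sin(np.pi * x))))
```

    Replacing the exact coarse solve with an inconsistent coarse-grid discretization is precisely the kind of change whose grid-dependent convergence the study quantifies.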

  13. Simultaneous quaternion estimation (QUEST) and bias determination

    NASA Technical Reports Server (NTRS)

    Markley, F. Landis

    1989-01-01

    Tests of a new method for the simultaneous estimation of spacecraft attitude and sensor biases, based on a quaternion estimation algorithm minimizing Wahba's loss function are presented. The new method is compared with a conventional batch least-squares differential correction algorithm. The estimates are based on data from strapdown gyros and star trackers, simulated with varying levels of Gaussian noise for both inertially-fixed and Earth-pointing reference attitudes. Both algorithms solve for the spacecraft attitude and the gyro drift rate biases. They converge to the same estimates at the same rate for inertially-fixed attitude, but the new algorithm converges more slowly than the differential correction for Earth-pointing attitude. The slower convergence of the new method for non-zero attitude rates is believed to be due to the use of an inadequate approximation for a partial derivative matrix. The new method requires about twice the computational effort of the differential correction. Improving the approximation for the partial derivative matrix in the new method is expected to improve its convergence at the cost of increased computational effort.

  14. Dynamic change in mitral regurgitant orifice area: comparison of color Doppler echocardiographic and electromagnetic flowmeter-based methods in a chronic animal model.

    PubMed

    Shiota, T; Jones, M; Teien, D E; Yamada, I; Passafini, A; Ge, S; Sahn, D J

    1995-08-01

    The aim of the present study was to investigate dynamic changes in the mitral regurgitant orifice using electromagnetic flow probes and flowmeters and the color Doppler flow convergence method. Methods for determining mitral regurgitant orifice areas have been described using flow convergence imaging with a hemispheric isovelocity surface assumption. However, the shape of flow convergence isovelocity surfaces depends on many factors that change during regurgitation. In seven sheep with surgically created mitral regurgitation, 18 hemodynamic states were studied. The aliasing distances of flow convergence were measured at 10 sequential points using two ranges of aliasing velocities (0.20 to 0.32 and 0.56 to 0.72 m/s), and instantaneous flow rates were calculated using the hemispheric assumption. Instantaneous regurgitant areas were determined from the regurgitant flow rates obtained from both electromagnetic flowmeters and flow convergence divided by the corresponding continuous wave velocities. The regurgitant orifice sizes obtained using the electromagnetic flow method usually increased to maximal size in early to midsystole and then decreased in late systole. Patterns of dynamic changes in orifice area obtained by flow convergence were not the same as those delineated by the electromagnetic flow method. Time-averaged regurgitant orifice areas obtained by flow convergence using lower aliasing velocities overestimated the areas obtained by the electromagnetic flow method ([mean +/- SD] 0.27 +/- 0.14 vs. 0.12 +/- 0.06 cm2, p < 0.001), whereas flow convergence, using higher aliasing velocities, estimated the reference areas more reliably (0.15 +/- 0.06 cm2). The electromagnetic flow method studies uniformly demonstrated dynamic change in mitral regurgitant orifice area and suggested limitations of the flow convergence method.

  15. Trends and determinants of weight gains among OECD countries: an ecological study.

    PubMed

    Nghiem, S; Vu, X-B; Barnett, A

    2018-06-01

Obesity has become a global issue with abundant evidence to indicate that the prevalence of obesity in many nations has increased over time. The literature also reports a strong association between obesity and economic development, but the trend that obesity growth rates may converge over time has not been examined. We propose a conceptual framework and conduct an ecological analysis on the relationship between economic development and weight gain. We also test the hypothesis that weight gain converges among countries over time and examine determinants of weight gains. This is a longitudinal study of 34 Organisation for Economic Cooperation and Development (OECD) countries in the years 1980-2008 using publicly available data. We apply a dynamic economic growth model to test the hypothesis that the rate of weight gains across countries may converge over time. We also investigate the determinants of weight gains using a longitudinal regression tree analysis. We do not find evidence that growth rates of body weight converged across all countries. However, there were groups of countries in which the growth rates of body weight converge, with five groups for males and seven groups for females. The predicted growth rates of body weight peak when gross domestic product (GDP) per capita reaches US$47,000 for males and US$37,000 for females in OECD countries. National levels of consumption of sugar, fat and alcohol were the most important contributors to national weight gains. National weight gains follow an inverse U-shape curve with economic development. Excessive calorie intake is the main contributor to weight gains.

  16. A higher order numerical method for time fractional partial differential equations with nonsmooth data

    NASA Astrophysics Data System (ADS)

    Xing, Yanyuan; Yan, Yubin

    2018-03-01

Gao et al. [11] (2014) introduced a numerical scheme to approximate the Caputo fractional derivative with the convergence rate O(k^{3-α}), 0 < α < 1, by directly approximating the integer-order derivative with some finite difference quotients in the definition of the Caputo fractional derivative, see also Lv and Xu [20] (2016), where k is the time step size. Under the assumption that the solution of the time fractional partial differential equation is sufficiently smooth, Lv and Xu [20] (2016) proved by using the energy method that the corresponding numerical method for solving the time fractional partial differential equation has the convergence rate O(k^{3-α}), 0 < α < 1, uniformly with respect to the time variable t. However, in general the solution of the time fractional partial differential equation has low regularity, and in this case the numerical method fails to have the convergence rate O(k^{3-α}), 0 < α < 1, uniformly with respect to the time variable t. In this paper, we first obtain a similar approximation scheme to the Riemann-Liouville fractional derivative with the convergence rate O(k^{3-α}), 0 < α < 1, as in Gao et al. [11] (2014) by approximating the Hadamard finite-part integral with piecewise quadratic interpolation polynomials. Based on this scheme, we introduce a time discretization scheme to approximate the time fractional partial differential equation and show by using Laplace transform methods that the time discretization scheme has the convergence rate O(k^{3-α}), 0 < α < 1, for any fixed t_n > 0 for smooth and nonsmooth data in both homogeneous and inhomogeneous cases. Numerical examples are given to show that the theoretical results are consistent with the numerical results.
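
    For reference, the Caputo fractional derivative that these schemes approximate has the standard definition (notation ours, not quoted from [11] or [20]):

```latex
{}^{C}_{0}D^{\alpha}_{t}\, u(t)
  \;=\; \frac{1}{\Gamma(1-\alpha)} \int_{0}^{t} (t-s)^{-\alpha}\, u'(s)\, \mathrm{d}s,
  \qquad 0 < \alpha < 1,
```

    so that the weakly singular kernel (t-s)^{-α} is what limits the attainable convergence order when u lacks smoothness near t = 0.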

  17. A Conjugate Gradient Algorithm with Function Value Information and N-Step Quadratic Convergence for Unconstrained Optimization.

    PubMed

    Li, Xiangrong; Zhao, Xupei; Duan, Xiabin; Wang, Xiaoliang

    2015-01-01

It is generally acknowledged that the conjugate gradient (CG) method achieves global convergence, with at most a linear convergence rate, because CG formulas are generated by linear approximations of the objective functions. Quadratically convergent results are very limited. We introduce a new PRP method that also uses a restart strategy. Moreover, the developed method not only achieves n-step quadratic convergence but also exploits both function value and gradient information. In this paper, we show that the new PRP method (with either the Armijo line search or the Wolfe line search) is both linearly and quadratically convergent. The numerical experiments demonstrate that the new PRP algorithm is competitive with the normal CG method.
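
    For orientation, the baseline that the paper modifies can be sketched as standard PRP nonlinear CG with a periodic restart and an Armijo backtracking line search. This is a generic textbook sketch, not the paper's modified PRP formula.

```python
import numpy as np

def prp_cg(f, grad, x0, tol=1e-8, max_iter=500):
    """Polak-Ribiere-Polyak nonlinear CG with periodic restarts and a
    backtracking Armijo line search (illustrative baseline only)."""
    x = np.asarray(x0, dtype=float)
    n = len(x)
    g = grad(x)
    d = -g
    for k in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        if g @ d >= 0:                     # safeguard: keep a descent direction
            d = -g
        fx, gd, t = f(x), g @ d, 1.0
        while f(x + t * d) > fx + 1e-4 * t * gd:   # Armijo condition
            t *= 0.5
        x_new = x + t * d
        g_new = grad(x_new)
        beta = g_new @ (g_new - g) / (g @ g)       # PRP beta
        if (k + 1) % n == 0 or beta < 0:
            beta = 0.0                     # restart with steepest descent
        d = -g_new + beta * d
        x, g = x_new, g_new
    return x
```

    The restart every n steps is what an n-step quadratic convergence argument typically hinges on: after a restart near the minimizer, the method behaves like linear CG on the local quadratic model.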

  18. Microgeographic differentiation in thermal performance curves between rural and urban populations of an aquatic insect.

    PubMed

    Tüzün, Nedim; Op de Beeck, Lin; Brans, Kristien I; Janssens, Lizanne; Stoks, Robby

    2017-12-01

The rapidly increasing rate of urbanization has a major impact on the ecology and evolution of species. While increased temperatures are a key aspect of urbanization ("urban heat islands"), we have very limited knowledge of whether this generates differentiation in thermal responses between rural and urban populations. In a common garden experiment, we compared the thermal performance curves (TPCs) for growth rate and mortality in larvae of the damselfly Coenagrion puella from three urban and three rural populations. TPCs for growth rate shifted vertically, consistent with the faster-slower theoretical model whereby the cold-adapted rural larvae grew faster than the warm-adapted urban larvae across temperatures. In line with costs of rapid growth, rural larvae showed lower survival than urban larvae across temperatures. The relatively lower temperatures, and hence the expected shorter growing seasons, in rural populations compared with populations in the urban heat islands likely impose stronger time constraints to reach a certain developmental stage before winter, thereby selecting for faster growth rates. In addition, higher predation rates at higher temperature may have contributed to the growth rate differences between urban and rural ponds. A faster-slower differentiation in TPCs may be a widespread pattern along the urbanization gradient. The observed microgeographic differentiation in TPCs supports the view that urbanization may drive life-history evolution. Moreover, because of the urban heat island effect, urban environments have the potential to aid in developing predictions on the impact of climate change on rural populations.

  19. Relative Motion of the Nazca (farallon) and South American Plates Since Late Cretaceous Time

    NASA Astrophysics Data System (ADS)

    Pardo-Casas, Federico; Molnar, Peter

    1987-06-01

    By combining reconstructions of the South American and African plates, the African and Antarctic plates, the Antarctic and Pacific plates, and the Pacific and Nazca plates, we calculated the relative positions and history of convergence of the Nazca and South American plates. Despite variations in convergence rates along the Andes, periods of rapid convergence (averaging more than 100 mm/a) between the times of anomalies 21 (49.5 Ma) and 18 (42 Ma) and since anomaly 7 (26 Ma) coincide with two phases of relatively intense tectonic activity in the Peruvian Andes, known as the late Eocene Incaic and Mio-Pliocene Quechua phases. The periods of relatively slow convergence (50 to 55 ± 30 mm/a at the latitude of Peru and less farther south) between the times of anomalies 30-31 (68.5 Ma) and 21 and between those of anomalies 13 (36 Ma) and 7 correlate with periods during which tectonic activity was relatively quiescent. Thus these reconstructions provide quantitative evidence for a correlation of the intensity of tectonic activity in the overriding plate at subduction zones with variations in the convergence rate.

  20. Well-conditioning global-local analysis using stable generalized/extended finite element method for linear elastic fracture mechanics

    NASA Astrophysics Data System (ADS)

    Malekan, Mohammad; Barros, Felicio Bruzzi

    2016-11-01

Using the locally-enriched strategy to enrich a small/local part of the problem by the generalized/extended finite element method (G/XFEM) leads to a non-optimal convergence rate and an ill-conditioned system of equations due to the presence of blending elements. The local enrichment can be chosen from polynomial, singular, branch or numerical types. The so-called stable version of the G/XFEM method provides a well-conditioned approach when only singular functions are used in the blending elements. This paper combines numerical enrichment functions obtained from the global-local G/XFEM method with polynomial enrichment, along with a well-conditioned approach, stable G/XFEM, in order to show the robustness and effectiveness of the approach. In global-local G/XFEM, the enrichment functions are constructed numerically from the solution of a local problem. Furthermore, several enrichment strategies are adopted along with the global-local enrichment. The results obtained with these enrichment strategies are discussed in detail, considering the convergence rate in strain energy, the growth rate of the condition number, and computational processing. Numerical experiments show that using geometrical enrichment along with stable G/XFEM for the global-local strategy improves the convergence rate and the conditioning of the problem. In addition, the results show that using polynomial enrichment for the global problem simultaneously with global-local enrichments leads to ill-conditioned system matrices and a poor convergence rate.

  1. A Wireless Electronic Nose System Using a Fe2O3 Gas Sensing Array and Least Squares Support Vector Regression

    PubMed Central

    Song, Kai; Wang, Qi; Liu, Qi; Zhang, Hongquan; Cheng, Yingguo

    2011-01-01

This paper describes the design and implementation of a wireless electronic nose (WEN) system which can online detect the combustible gases methane and hydrogen (CH4/H2) and estimate their concentrations, either singly or in mixtures. The system is composed of two wireless sensor nodes: a slave node and a master node. The former comprises a Fe2O3 gas sensing array for combustible gas detection, a digital signal processor (DSP) system for real-time sampling and processing of the sensor array data, and a wireless transceiver unit (WTU) by which the detection results can be transmitted to the master node connected to a computer. A type of Fe2O3 gas sensor insensitive to humidity is developed for resistance to environmental influences. A threshold-based least squares support vector regression (LS-SVR) estimator is implemented on a DSP for classification and concentration measurements. Experimental results confirm that LS-SVR produces higher accuracy compared with artificial neural networks (ANNs) and a faster convergence rate than the standard support vector regression (SVR). The designed WEN system effectively achieves gas mixture analysis in a real-time process. PMID:22346587
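
    One reason LS-SVR converges faster than standard SVR is that training reduces to a single linear solve rather than a quadratic program. A textbook sketch of that reduction (with an RBF kernel; this is not the WEN system's threshold-based DSP implementation):

```python
import numpy as np

def lssvr_fit(X, y, gamma=100.0, sigma=1.0):
    """Least squares SVR: solve the (n+1)-dimensional KKT system
    [[0, 1^T], [1, K + I/gamma]] [b; alpha] = [0; y] in closed form."""
    n = len(X)
    sq = np.sum((X[:, None, :] - X[None, :, :])**2, axis=-1)
    K = np.exp(-sq / (2 * sigma**2))                  # RBF kernel matrix
    A = np.block([[np.zeros((1, 1)), np.ones((1, n))],
                  [np.ones((n, 1)),  K + np.eye(n) / gamma]])
    sol = np.linalg.solve(A, np.concatenate(([0.0], y)))
    b, alpha = sol[0], sol[1:]
    def predict(Xq):
        d = np.sum((Xq[:, None, :] - X[None, :, :])**2, axis=-1)
        return np.exp(-d / (2 * sigma**2)) @ alpha + b
    return predict
```

    Because every training point gets a (non-sparse) multiplier alpha, prediction cost grows with the training set size, which is the usual trade-off against the sparser standard SVR.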

  2. Incorrect predictions reduce switch costs.

    PubMed

    Kleinsorge, Thomas; Scheil, Juliane

    2015-07-01

In three experiments, we combined two sources of conflict within a modified task-switching procedure. The first source of conflict was the one inherent in any task-switching situation, namely the conflict between a task set activated by the recent performance of another task and the task set needed to perform the actually relevant task. The second source of conflict was induced by requiring participants to guess aspects of the upcoming task (Exps. 1 & 2: task identity; Exp. 3: position of task precue). In case of an incorrect guess, a conflict arises between the representation of the guessed task and the actually relevant task. In Experiments 1 and 2, incorrect guesses led to an overall increase of reaction times and error rates, but they reduced task switch costs compared to conditions in which participants predicted the correct task. In Experiment 3, incorrect guesses resulted in faster performance overall and in a selective decrease of reaction times in task switch trials when the cue-target interval was long. We interpret these findings in terms of an enhanced level of controlled processing induced by a combination of two sources of conflict converging upon the same target of cognitive control.

  3. Study of consensus-based time synchronization in wireless sensor networks.

    PubMed

    He, Jianping; Li, Hao; Chen, Jiming; Cheng, Peng

    2014-03-01

Recently, various consensus-based protocols have been developed for time synchronization in wireless sensor networks. However, due to the uncertainties lying in both the hardware fabrication and network communication processes, it is not clear how most of these protocols will perform in real implementations. To narrow this gap, this paper investigates whether and how typical consensus-based time synchronization protocols can tolerate the uncertainties in practical sensor networks through extensive testbed experiments. For two typical protocols, i.e., Average Time Synchronization (ATS) and Maximum Time Synchronization (MTS), we first analyze how the time synchronization accuracy will be affected by various uncertainties in the system. Then, we implement both protocols on our sensor network testbed consisting of MicaZ nodes, and investigate the time synchronization performance and robustness under various network settings. Noticing that the synchronized clocks under MTS may be slightly faster than the desirable clock, by adopting both maximum consensus and minimum consensus, we propose a modified protocol, MMTS, which is able to drive the synchronized clocks closer to the desirable clock while maintaining the convergence rate and synchronization accuracy of MTS.
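
    The max-min idea behind MMTS can be illustrated on static clock offsets: each node tracks the largest and smallest offsets it has heard and reports their midpoint, so all nodes converge to (max + min)/2 instead of the max alone. This toy sketch ignores clock skew and message delay, which the real protocols must handle.

```python
import numpy as np

def max_min_consensus(offsets, adjacency, rounds):
    """Combined max- and min-consensus over a static neighbor graph;
    returns each node's midpoint estimate after the given rounds."""
    hi = np.array(offsets, dtype=float)     # running max per node
    lo = hi.copy()                          # running min per node
    for _ in range(rounds):
        for i, nbrs in enumerate(adjacency):
            hi[i] = max(hi[i], *(hi[j] for j in nbrs))
            lo[i] = min(lo[i], *(lo[j] for j in nbrs))
    return (hi + lo) / 2                    # common virtual clock

# example: 4-node line network 0-1-2-3
adj = [[1], [0, 2], [1, 3], [2]]
sync = max_min_consensus([0.0, 3.0, -1.0, 5.0], adj, rounds=5)
```

    Both the max and the min spread through the network in at most diameter-many rounds, so this inherits the fast, finite-time convergence of pure max-consensus while removing its one-sided bias.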

  4. Identifying Interacting Genetic Variations by Fish-Swarm Logic Regression

    PubMed Central

    Yang, Aiyuan; Yan, Chunxia; Zhu, Feng; Zhao, Zhongmeng; Cao, Zhi

    2013-01-01

Understanding associations between genotypes and complex traits is a fundamental problem in human genetics. A major open problem in mapping phenotypes is that of identifying a set of interacting genetic variants, which might contribute to complex traits. Logic regression (LR) is a powerful multivariate association tool. Several LR-based approaches have been successfully applied to different datasets. However, these approaches are not adequate with regard to accuracy and efficiency. In this paper, we propose a new LR-based approach, called fish-swarm logic regression (FSLR), which improves the logic regression process by incorporating swarm optimization. In our approach, a school of fish agents is run in parallel. Each fish agent holds a regression model, while the school searches for better models through various preset behaviors. A swarm algorithm improves the accuracy and the efficiency by speeding up convergence and preventing it from getting trapped in local optima. We apply our approach to a real screening dataset and a series of simulation scenarios. Compared to three existing LR-based approaches, our approach outperforms them by having lower type I and type II error rates, identifying more preset causal sites, and running at faster speeds. PMID:23984382

  5. 16QAM Blind Equalization via Maximum Entropy Density Approximation Technique and Nonlinear Lagrange Multipliers

    PubMed Central

    Mauda, R.; Pinchas, M.

    2014-01-01

Recently a new blind equalization method was proposed for the 16QAM constellation input inspired by the maximum entropy density approximation technique with improved equalization performance compared to the maximum entropy approach, Godard's algorithm, and others. In addition, an approximated expression for the minimum mean square error (MSE) was obtained. The idea was to find those Lagrange multipliers that bring the approximated MSE to minimum. Since differentiating the obtained MSE with respect to the Lagrange multipliers leads to a nonlinear equation for the Lagrange multipliers, the part of the MSE expression that caused the nonlinearity in the equation for the Lagrange multipliers was ignored. Thus, the obtained Lagrange multipliers were not those Lagrange multipliers that bring the approximated MSE to minimum. In this paper, we derive a new set of Lagrange multipliers based on the nonlinear expression for the Lagrange multipliers obtained from minimizing the approximated MSE with respect to the Lagrange multipliers. Simulation results indicate that for the high signal to noise ratio (SNR) case, a faster convergence rate is obtained for a channel causing a high initial intersymbol interference (ISI) while the same equalization performance is obtained for an easy channel (low initial ISI). PMID:24723813

  6. Biomechanics Simulations Using Cubic Hermite Meshes with Extraordinary Nodes for Isogeometric Cardiac Modeling

    PubMed Central

    Gonzales, Matthew J.; Sturgeon, Gregory; Segars, W. Paul; McCulloch, Andrew D.

    2016-01-01

Cubic Hermite hexahedral finite element meshes have some well-known advantages over linear tetrahedral finite element meshes in biomechanical and anatomic modeling using isogeometric analysis. These include faster convergence rates as well as the ability to easily model rule-based anatomic features such as cardiac fiber directions. However, it is not possible to create closed complex objects with only regular nodes; these objects require the presence of extraordinary nodes (nodes with 3 or >= 5 adjacent elements in 2D) in the mesh. The presence of extraordinary nodes requires new constraints on the derivatives of adjacent elements to maintain continuity. We have developed a new method that uses an ensemble coordinate frame at the nodes and a local-to-global mapping to maintain continuity. In this paper, we make use of this mapping to create cubic Hermite models of the human ventricles and a four-chamber heart. We also extend the methods to the finite element equations to perform biomechanics simulations using these meshes. The new methods are validated using simple test models and applied to anatomically accurate ventricular meshes with valve annuli to run complete cardiac cycle simulations. PMID:27182096

  7. Sequential Nonlinear Learning for Distributed Multiagent Systems via Extreme Learning Machines.

    PubMed

    Vanli, Nuri Denizcan; Sayin, Muhammed O; Delibalta, Ibrahim; Kozat, Suleyman Serdar

    2017-03-01

    We study online nonlinear learning over distributed multiagent systems, where each agent employs a single hidden layer feedforward neural network (SLFN) structure to sequentially minimize arbitrary loss functions. In particular, each agent trains its own SLFN using only the data that is revealed to itself. On the other hand, the aim of the multiagent system is to train the SLFN at each agent as well as the optimal centralized batch SLFN that has access to all the data, by exchanging information between neighboring agents. We address this problem by introducing a distributed subgradient-based extreme learning machine algorithm. The proposed algorithm provides guaranteed upper bounds on the performance of the SLFN at each agent and shows that each of these individual SLFNs asymptotically achieves the performance of the optimal centralized batch SLFN. Our performance guarantees explicitly distinguish the effects of data- and network-dependent parameters on the convergence rate of the proposed algorithm. The experimental results illustrate that the proposed algorithm achieves the oracle performance significantly faster than the state-of-the-art methods in the machine learning and signal processing literature. Hence, the proposed method is highly appealing for the applications involving big data.

  8. Application of a trigonometric finite difference procedure to numerical analysis of compressive and shear buckling of orthotropic panels

    NASA Technical Reports Server (NTRS)

    Stein, M.; Housner, J. D.

    1978-01-01

    A numerical analysis developed for the buckling of rectangular orthotropic layered panels under combined shear and compression is described. This analysis uses a central finite difference procedure based on trigonometric functions instead of using the conventional finite differences which are based on polynomial functions. Inasmuch as the buckle mode shape is usually trigonometric in nature, the analysis using trigonometric finite differences can be made to exhibit a much faster convergence rate than that using conventional differences. Also, the trigonometric finite difference procedure leads to difference equations having the same form as conventional finite differences; thereby allowing available conventional finite difference formulations to be converted readily to trigonometric form. For two-dimensional problems, the procedure introduces two numerical parameters into the analysis. Engineering approaches for the selection of these parameters are presented and the analysis procedure is demonstrated by application to several isotropic and orthotropic panel buckling problems. Among these problems is the shear buckling of stiffened isotropic and filamentary composite panels in which the stiffener is broken. Results indicate that a break may degrade the effect of the stiffener to the extent that the panel will not carry much more load than if the stiffener were absent.
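
    To illustrate the idea (our reconstruction, under the assumption that the difference formula is made exact for a trigonometric mode sin(λx); the paper's exact formulation and its two numerical parameters may differ), the conventional central second difference can be rescaled as

```latex
u''(x_i) \;\approx\; \frac{\lambda^{2}}{2\bigl(1 - \cos \lambda h\bigr)}
  \left( u_{i-1} - 2u_{i} + u_{i+1} \right),
```

    which is exact whenever u(x) = sin(λx) or cos(λx), and reduces to the conventional difference (u_{i-1} - 2u_i + u_{i+1})/h² as λh → 0 since 2(1 - cos λh) ≈ (λh)². Because only the scalar prefactor changes, existing polynomial-based difference formulations convert directly, as the abstract notes.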

  9. Solving complex photocycle kinetics. Theory and direct method.

    PubMed Central

    Nagle, J F

    1991-01-01

    A direct nonlinear least squares method is described that obtains the true kinetic rate constants and the temperature-independent spectra of n intermediates from spectroscopic data taken in the visible at three or more temperatures. A theoretical analysis, which is independent of implementation of the direct method, proves that well determined local solutions are not possible for fewer than three temperatures. This analysis also proves that measurements at more than n wavelengths are redundant, although the direct method indicates that convergence is faster if n + m wavelengths are measured, where m is of order one. This suggests that measurements should concentrate on high precision for a few measuring wavelengths, rather than lower precision for many wavelengths. Globally, false solutions occur, and the ability to reject these depends upon the precision of the data, as shown by explicit example. An optimized way to analyze vibrational spectroscopic data is also presented. Such data yield unique results, which are comparably accurate to those obtained from data taken in the visible with comparable noise. It is discussed how use of both kinds of data is advantageous if the data taken in the visible are significantly less noisy. PMID:2009362

  10. Improving the convergence rate in affine registration of PET and SPECT brain images using histogram equalization.

    PubMed

    Salas-Gonzalez, D; Górriz, J M; Ramírez, J; Padilla, P; Illán, I A

    2013-01-01

    A procedure to improve the convergence rate for affine registration methods of medical brain images when the images differ greatly from the template is presented. The methodology is based on a histogram matching of the source images with respect to the reference brain template before proceeding with the affine registration. The preprocessed source brain images are spatially normalized to a template using a general affine model with 12 parameters. A sum of squared differences between the source images and the template is considered as objective function, and a Gauss-Newton optimization algorithm is used to find the minimum of the cost function. Using histogram equalization as a preprocessing step improves the convergence rate in the affine registration algorithm of brain images as we show in this work using SPECT and PET brain images.
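
    The histogram-matching preprocessing step can be sketched with the classic CDF-mapping rule: each source intensity is sent to the template intensity occupying the same quantile. This is a generic sketch of the preprocessing only; the Gauss-Newton affine registration itself is not shown.

```python
import numpy as np

def match_histogram(source, template):
    """Remap source intensities so their empirical distribution matches
    the template's, via inverse-CDF lookup."""
    s_vals, s_idx, s_counts = np.unique(source.ravel(),
                                        return_inverse=True,
                                        return_counts=True)
    t_vals, t_counts = np.unique(template.ravel(), return_counts=True)
    s_cdf = np.cumsum(s_counts) / source.size    # empirical CDFs in (0, 1]
    t_cdf = np.cumsum(t_counts) / template.size
    # send each source quantile to the template intensity at that quantile
    matched = np.interp(s_cdf, t_cdf, t_vals)
    return matched[s_idx].reshape(source.shape)
```

    After this remapping, the sum-of-squared-differences cost compares images on a common intensity scale, which is why the Gauss-Newton iterations need fewer steps when source and template differ greatly in contrast.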

  11. FIR digital filter-based ZCDPLL for carrier recovery

    NASA Astrophysics Data System (ADS)

    Nasir, Qassim

    2016-04-01

The objective of this work is to analyse the performance of the newly proposed two-tap FIR digital filter-based first-order zero-crossing digital phase-locked loop (ZCDPLL) in the absence or presence of additive white Gaussian noise (AWGN). The introduction of the two-tap FIR digital filter widens the lock range of a ZCDPLL and improves the loop's operation in the presence of AWGN. The FIR digital filter tap coefficients affect the loop convergence behaviour, and the appropriate selection of those gains should be taken into consideration. The newly proposed loop has a wider locking range and faster acquisition time, and reduces phase error variations in the presence of noise.

  12. pySecDec: A toolbox for the numerical evaluation of multi-scale integrals

    NASA Astrophysics Data System (ADS)

    Borowka, S.; Heinrich, G.; Jahn, S.; Jones, S. P.; Kerner, M.; Schlenk, J.; Zirke, T.

    2018-01-01

We present pySecDec, a new version of the program SecDec, which performs the factorization of dimensionally regulated poles in parametric integrals and the subsequent numerical evaluation of the finite coefficients. The algebraic part of the program is now written in the form of python modules, which allow very flexible usage. The optimization of the C++ code, generated using FORM, is improved, leading to faster numerical convergence. The new version also creates a library of the integrand functions, such that it can be linked to user-specific codes for the evaluation of matrix elements in a way similar to analytic integral libraries.

  13. An Effective Hybrid Firefly Algorithm with Harmony Search for Global Numerical Optimization

    PubMed Central

    Guo, Lihong; Wang, Gai-Ge; Wang, Heqi; Wang, Dinan

    2013-01-01

A hybrid metaheuristic approach hybridizing harmony search (HS) and the firefly algorithm (FA), namely HS/FA, is proposed for function optimization. In HS/FA, the exploration of HS and the exploitation of FA are fully exerted, so HS/FA has a faster convergence speed than HS and FA. Also, a top-fireflies scheme is introduced to reduce running time, and HS is utilized to mutate between fireflies when updating fireflies. The HS/FA method is verified on various benchmarks. The experiments show that HS/FA performs better than the standard FA and eight other optimization methods. PMID:24348137

  14. Quadratic Finite Element Method for 1D Deterministic Transport

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tolar, Jr., D R; Ferguson, J M

    2004-01-06

In the discrete ordinates, or S_N, numerical solution of the transport equation, both the spatial (r) and angular (Ω) dependences of the angular flux ψ(r, Ω) are modeled discretely. While significant effort has been devoted toward improving the spatial discretization of the angular flux, we focus on improving the angular discretization of ψ(r, Ω). Specifically, we employ a Petrov-Galerkin quadratic finite element approximation for the differencing of the angular variable (μ) in developing the one-dimensional (1D) spherical geometry S_N equations. We develop an algorithm that shows faster convergence with angular resolution than conventional S_N algorithms.

  15. Nonlinear dynamics optimization with particle swarm and genetic algorithms for SPEAR3 emittance upgrade

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Huang, Xiaobiao; Safranek, James

    2014-09-01

Nonlinear dynamics optimization is carried out for a low emittance upgrade lattice of SPEAR3 in order to improve its dynamic aperture and Touschek lifetime. Two multi-objective optimization algorithms, a genetic algorithm and a particle swarm algorithm, are used for this study. The performance of the two algorithms is compared. The result shows that the particle swarm algorithm converges significantly faster to similar or better solutions than the genetic algorithm, and it does not require seeding of good solutions in the initial population. These advantages of the particle swarm algorithm may make it more suitable for many accelerator optimization applications.
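
    For reference, the core particle swarm update can be sketched in its standard inertia-weight form. The parameter values below are common defaults, illustrative only and not those used in the SPEAR3 study (which is also multi-objective, unlike this single-objective sketch).

```python
import random

def pso_minimize(f, bounds, n_particles=20, iters=100,
                 w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimal single-objective PSO: velocities blend inertia, a pull
    toward each particle's personal best, and a pull toward the global best."""
    rng = random.Random(seed)
    dim = len(bounds)
    pos = [[rng.uniform(lo, hi) for lo, hi in bounds]
           for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                      # personal bests
    pbest_f = [f(p) for p in pos]
    g_i = min(range(n_particles), key=pbest_f.__getitem__)
    gbest, gbest_f = pbest[g_i][:], pbest_f[g_i]     # global best
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            fi = f(pos[i])
            if fi < pbest_f[i]:
                pbest[i], pbest_f[i] = pos[i][:], fi
                if fi < gbest_f:
                    gbest, gbest_f = pos[i][:], fi
    return gbest, gbest_f
```

    The pull toward the shared global best is one intuition for the fast convergence reported in the abstract: information about good regions propagates to the whole swarm in a single iteration, with no need to seed good solutions.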

  16. Improved multivariate polynomial factoring algorithm

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, P.S.

    1978-10-01

A new algorithm for factoring multivariate polynomials over the integers, based on an algorithm by Wang and Rothschild, is described. The new algorithm has improved strategies for dealing with the known problems of the original algorithm, namely, the leading coefficient problem, the bad-zero problem and the occurrence of extraneous factors. It has an algorithm for correctly predetermining leading coefficients of the factors. A new and efficient p-adic algorithm named EEZ is described. Basically it is a linearly convergent variable-by-variable parallel construction. The improved algorithm is generally faster and requires less storage than the original algorithm. Machine examples with comparative timings are included.

  17. Reverse engineering a gene network using an asynchronous parallel evolution strategy

    PubMed Central

    2010-01-01

    Background The use of reverse engineering methods to infer gene regulatory networks by fitting mathematical models to gene expression data is becoming increasingly popular and successful. However, increasing model complexity means that more powerful global optimisation techniques are required for model fitting. The parallel Lam Simulated Annealing (pLSA) algorithm has been used in such approaches, but recent research has shown that island Evolutionary Strategies can produce faster, more reliable results. However, no parallel island Evolutionary Strategy (piES) has yet been demonstrated to be effective for this task. Results Here, we present synchronous and asynchronous versions of the piES algorithm, and apply them to a real reverse engineering problem: inferring parameters in the gap gene network. We find that the asynchronous piES exhibits very little communication overhead, and shows significant speed-up for up to 50 nodes: the piES running on 50 nodes is nearly 10 times faster than the best serial algorithm. We compare the asynchronous piES to pLSA on the same test problem, measuring the time required to reach particular levels of residual error, and show that it shows much faster convergence than pLSA across all optimisation conditions tested. Conclusions Our results demonstrate that the piES is consistently faster and more reliable than the pLSA algorithm on this problem, and scales better with increasing numbers of nodes. In addition, the piES is especially well suited to further improvements and adaptations: Firstly, the algorithm's fast initial descent speed and high reliability make it a good candidate for being used as part of a global/local search hybrid algorithm. Secondly, it has the potential to be used as part of a hierarchical evolutionary algorithm, which takes advantage of modern multi-core computing architectures. PMID:20196855

  18. Men's Increase in College Enrollment Breaks Long-Term Trend--Rises Faster than Rate for Women. Fact Book Bulletin

    ERIC Educational Resources Information Center

    Southern Regional Education Board (SREB), 2012

    2012-01-01

    From 2005 to 2010, in an historic turnaround, the number of men enrolled in college increased faster than the number of women in the Southern Regional Education Board (SREB) region, the West and the Northeast. The Midwest was the only region where enrollment by women increased faster than for men over this period. In the SREB region, men's…

  19. Adaptive Inverse Control for Rotorcraft Vibration Reduction

    NASA Technical Reports Server (NTRS)

    Jacklin, Stephen A.

    1985-01-01

    This thesis extends the Least Mean Square (LMS) algorithm to solve the multiple-input, multiple-output problem of alleviating N/Rev (revolutions per minute by number of blades) helicopter fuselage vibration by means of adaptive inverse control. A frequency domain locally linear model is used to represent the transfer matrix relating the higher harmonic pitch control inputs to the harmonic vibration outputs to be controlled. By using the inverse matrix as the controller gain matrix, an adaptive inverse regulator is formed to alleviate the N/Rev vibration. The stability and rate of convergence properties of the extended LMS algorithm are discussed. It is shown that the stability ranges for the elements of the stability gain matrix are directly related to the eigenvalues of the vibration signal information matrix for the learning phase, but not for the control phase. The overall conclusion is that the LMS adaptive inverse control method can form a robust vibration control system, but will require some tuning of the input sensor gains, the stability gain matrix, and the amount of control relaxation to be used. The learning curve of the controller during the learning phase is shown to be quantitatively close to that predicted by averaging the learning curves of the normal modes. For higher order transfer matrices, a rough estimate of the inverse is needed to start the algorithm efficiently. The simulation results indicate that the factor which most influences LMS adaptive inverse control is the product of the control relaxation and the stability gain matrix. A small stability gain matrix makes the controller less sensitive to relaxation selection, and permits faster and more stable vibration reduction than choosing the stability gain matrix large and the control relaxation term small.
It is shown that the best selections of the stability gain matrix elements and the amount of control relaxation are basically a compromise between slow, stable convergence and fast convergence with an increased possibility of unstable identification. In the simulation studies, the LMS adaptive inverse control algorithm is shown to be capable of adapting the inverse (controller) matrix to track changes in the flight conditions. The algorithm converges quickly for moderate disturbances, while taking longer for larger disturbances. Perfect knowledge of the inverse matrix is not required for good control of the N/Rev vibration. However, it is shown that measurement noise will prevent the LMS adaptive inverse control technique from controlling the vibration, unless the signal averaging method presented is incorporated into the algorithm.

  20. Fast algorithms for chiral fermions in 2 dimensions

    NASA Astrophysics Data System (ADS)

    Hyka (Xhako), Dafina; Osmanaj (Zeqirllari), Rudina

    2018-03-01

    In lattice QCD simulations, the formulation of the theory on the lattice should be chiral so that symmetry breaking happens dynamically from interactions. To guarantee this symmetry on the lattice, one uses overlap and domain wall fermions. On the other hand, the high computational cost of lattice QCD simulations with overlap or domain wall fermions remains a major obstacle to research in the field of elementary particles. We have developed the preconditioned GMRESR algorithm as a fast inverting algorithm for chiral fermions in U(1) lattice gauge theory. In this algorithm we used the geometric multigrid idea along the extra dimension. The main result of this work is that the preconditioned GMRESR accelerates convergence by a factor of 2 to 12 compared to the other optimal algorithms (SHUMR) for different coupling constants on a 32×32 lattice. In this paper we also tested it for a larger lattice size, 64×64. The simulation results show that our algorithm is faster than SHUMR. This is a very promising indication that this algorithm can also be adapted to 4 dimensions.

  1. Use of the preconditioned conjugate gradient algorithm as a generic solver for mixed-model equations in animal breeding applications.

    PubMed

    Tsuruta, S; Misztal, I; Strandén, I

    2001-05-01

    Utility of the preconditioned conjugate gradient algorithm with a diagonal preconditioner for solving mixed-model equations in animal breeding applications was evaluated with 16 test problems. The problems included single- and multiple-trait analyses, with data on beef, dairy, and swine ranging from small examples to national data sets. Multiple-trait models considered low and high genetic correlations. Convergence was based on relative differences between left- and right-hand sides. The ordering of equations was fixed effects followed by random effects, with no special ordering within random effects. The preconditioned conjugate gradient program implemented with double precision converged for all models. However, when implemented in single precision, the preconditioned conjugate gradient algorithm did not converge for seven large models. The preconditioned conjugate gradient and successive overrelaxation algorithms were subsequently compared for 13 of the test problems. The preconditioned conjugate gradient algorithm was easy to implement with the iteration on data for general models. However, successive overrelaxation requires specific programming for each set of models. On average, the preconditioned conjugate gradient algorithm converged in three times fewer rounds of iteration than successive overrelaxation. With straightforward implementations, programs using the preconditioned conjugate gradient algorithm may be two or more times faster than those using successive overrelaxation. However, programs using the preconditioned conjugate gradient algorithm would use more memory than would comparable implementations using successive overrelaxation. Extensive optimization of either algorithm can influence rankings. 
The preconditioned conjugate gradient implemented with iteration on data, a diagonal preconditioner, and in double precision may be the algorithm of choice for solving mixed-model equations when sufficient memory is available and ease of implementation is essential.
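    The diagonally preconditioned conjugate gradient iteration evaluated in this record is compact enough to sketch. The following minimal Python illustration uses a generic symmetric positive definite system as a stand-in for actual mixed-model equations; the matrix, sizes, and function names are illustrative assumptions, not material from the study:

```python
import numpy as np

def pcg(A, b, tol=1e-10, max_iter=1000):
    """Conjugate gradient with a diagonal (Jacobi) preconditioner:
    M = diag(A), so applying M^-1 is a cheap elementwise scaling."""
    m_inv = 1.0 / np.diag(A)
    x = np.zeros_like(b)
    r = b - A @ x                        # residual
    z = m_inv * r                        # preconditioned residual
    p = z.copy()
    rz = r @ z
    for k in range(1, max_iter + 1):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol * np.linalg.norm(b):
            return x, k
        z = m_inv * r
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x, max_iter

# demo on a small SPD system standing in for mixed-model equations
rng = np.random.default_rng(0)
B = rng.standard_normal((50, 50))
A = B @ B.T + 50.0 * np.eye(50)          # symmetric positive definite
b = rng.standard_normal(50)
x, iters = pcg(A, b)
```

    The diagonal preconditioner costs only one extra vector multiply per iteration and needs no additional memory beyond a vector, which is consistent with the record's point about easy iteration-on-data implementations.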

  2. A general solution strategy of modified power method for higher mode solutions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, Peng; Lee, Hyunsuk; Lee, Deokjung, E-mail: deokjung@unist.ac.kr

    2016-01-15

    A general solution strategy of the modified power iteration method for calculating higher eigenmodes has been developed and applied in continuous energy Monte Carlo simulation. The new approach adopts four features: 1) the eigen decomposition of transfer matrix, 2) weight cancellation for higher modes, 3) population control with higher mode weights, and 4) stabilization technique of statistical fluctuations using multi-cycle accumulations. The numerical tests of neutron transport eigenvalue problems successfully demonstrate that the new strategy can significantly accelerate the fission source convergence with stable convergence behavior while obtaining multiple higher eigenmodes at the same time. The advantages of the new strategy can be summarized as 1) the replacement of the cumbersome solution step of high order polynomial equations required by Booth's original method with the simple matrix eigen decomposition, 2) faster fission source convergence in inactive cycles, 3) more stable behaviors in both inactive and active cycles, and 4) smaller variances in active cycles. Advantages 3 and 4 can be attributed to the lower sensitivity of the new strategy to statistical fluctuations due to the multi-cycle accumulations. The application of the modified power method to continuous energy Monte Carlo simulation and the higher eigenmodes up to 4th order are reported for the first time in this paper. Highlights: •Modified power method is applied to continuous energy Monte Carlo simulation. •Transfer matrix is introduced to generalize the modified power method. •All mode based population control is applied to get the higher eigenmodes. •Statistic fluctuation can be greatly reduced using accumulated tally results. •Fission source convergence is accelerated with higher mode solutions.

  3. Fast alternating projection methods for constrained tomographic reconstruction

    PubMed Central

    Liu, Li; Han, Yongxin

    2017-01-01

    The alternating projection algorithms are easy to implement and effective for large-scale complex optimization problems, such as constrained reconstruction of X-ray computed tomography (CT). A typical method is to use projection onto convex sets (POCS) for data fidelity and nonnegative constraints combined with total variation (TV) minimization (so-called TV-POCS) for sparse-view CT reconstruction. However, this type of method relies on empirically selected parameters for satisfactory reconstruction, is generally slow, and lacks convergence analysis. In this work, we use a convex feasibility set approach to address the problems associated with TV-POCS and propose a framework using full sequential alternating projections or POCS (FS-POCS) to find the solution in the intersection of convex constraints of bounded TV function, bounded data fidelity error and non-negativity. The rationale behind FS-POCS is that the mathematically optimal solution of the constrained objective function may not be the physically optimal solution. The breakdown of constrained reconstruction into an intersection of several feasible sets can lead to faster convergence and better quantification of reconstruction parameters in a physically meaningful way, rather than through empirical trial-and-error. In addition, for large-scale optimization problems, first order methods are usually used. Not only is the condition for convergence of gradient-based methods derived, but also a primal-dual hybrid gradient (PDHG) method is used for fast convergence of bounded TV. The newly proposed FS-POCS is evaluated and compared with TV-POCS and another convex feasibility projection method (CPTV) using both digital phantom and pseudo-real CT data to show its superior performance on reconstruction speed, image quality and quantification. PMID:28253298

  4. Human-in-the-loop Bayesian optimization of wearable device parameters

    PubMed Central

    Malcolm, Philippe; Speeckaert, Jozefien; Siviy, Christoper J.; Walsh, Conor J.; Kuindersma, Scott

    2017-01-01

    The increasing capabilities of exoskeletons and powered prosthetics for walking assistance have paved the way for more sophisticated and individualized control strategies. In response to this opportunity, recent work on human-in-the-loop optimization has considered the problem of automatically tuning control parameters based on realtime physiological measurements. However, the common use of metabolic cost as a performance metric creates significant experimental challenges due to its long measurement times and low signal-to-noise ratio. We evaluate the use of Bayesian optimization—a family of sample-efficient, noise-tolerant, and global optimization methods—for quickly identifying near-optimal control parameters. To manage experimental complexity and provide comparisons against related work, we consider the task of minimizing metabolic cost by optimizing walking step frequencies in unaided human subjects. Compared to an existing approach based on gradient descent, Bayesian optimization identified a near-optimal step frequency with a faster time to convergence (12 minutes, p < 0.01), smaller inter-subject variability in convergence time (± 2 minutes, p < 0.01), and lower overall energy expenditure (p < 0.01). PMID:28926613

  5. CHRR: coordinate hit-and-run with rounding for uniform sampling of constraint-based models.

    PubMed

    Haraldsdóttir, Hulda S; Cousins, Ben; Thiele, Ines; Fleming, Ronan M T; Vempala, Santosh

    2017-06-01

    In constraint-based metabolic modelling, physical and biochemical constraints define a polyhedral convex set of feasible flux vectors. Uniform sampling of this set provides an unbiased characterization of the metabolic capabilities of a biochemical network. However, reliable uniform sampling of genome-scale biochemical networks is challenging due to their high dimensionality and inherent anisotropy. Here, we present an implementation of a new sampling algorithm, coordinate hit-and-run with rounding (CHRR). This algorithm is based on the provably efficient hit-and-run random walk and crucially uses a preprocessing step to round the anisotropic flux set. CHRR provably converges to a uniform stationary sampling distribution. We apply it to metabolic networks of increasing dimensionality. We show that it converges several times faster than a popular artificial centering hit-and-run algorithm, enabling reliable and tractable sampling of genome-scale biochemical networks. https://github.com/opencobra/cobratoolbox . ronan.mt.fleming@gmail.com or vempala@cc.gatech.edu. Supplementary data are available at Bioinformatics online. © The Author 2017. Published by Oxford University Press.
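    The coordinate hit-and-run step at the core of CHRR (leaving out the rounding preprocessing the paper adds) can be sketched on a toy polytope. This is a hypothetical illustration, not the COBRA Toolbox implementation; all names and the example constraints are assumptions:

```python
import numpy as np

def coordinate_hit_and_run(lb, ub, A, b, x0, n_samples, burn=500, seed=0):
    """Uniform sampling of {x : lb <= x <= ub, A @ x <= b} by coordinate
    hit-and-run: pick a coordinate, find the feasible segment along it,
    and redraw that coordinate uniformly on the segment."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float).copy()
    dim = x.size
    out = []
    for t in range(burn + n_samples):
        i = rng.integers(dim)
        lo, hi = lb[i], ub[i]                    # box limits
        # tighten with each row a @ x <= b_j; slack_j = b_j - a @ x + a_i * x_i
        slack = b - A @ x + A[:, i] * x[i]
        for a_ij, s in zip(A[:, i], slack):
            if a_ij > 0:
                hi = min(hi, s / a_ij)
            elif a_ij < 0:
                lo = max(lo, s / a_ij)
        x[i] = rng.uniform(lo, hi)
        if t >= burn:
            out.append(x.copy())
    return np.array(out)

# demo: uniform samples from the triangle x, y >= 0, x + y <= 1
lb, ub = np.zeros(2), np.ones(2)
A, b = np.array([[1.0, 1.0]]), np.array([1.0])
S = coordinate_hit_and_run(lb, ub, A, b, x0=[0.2, 0.2], n_samples=2000)
```

    On an anisotropic (elongated) set, this plain walk mixes slowly; the rounding step the paper describes transforms the set toward isotropy first, which is what makes the method practical at genome scale.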

  6. By-passing the sign-problem in Fermion Path Integral Monte Carlo simulations by use of high-order propagators

    NASA Astrophysics Data System (ADS)

    Chin, Siu A.

    2014-03-01

    The sign-problem in PIMC simulations of non-relativistic fermions increases in severity with the number of fermions and the number of beads (or time-slices) of the simulation. A large number of beads is usually needed, because the conventional primitive propagator is only second-order and the usual thermodynamic energy-estimator converges very slowly from below with the total imaginary time. The Hamiltonian energy-estimator, while more complicated to evaluate, is a variational upper-bound and converges much faster with the total imaginary time, thereby requiring fewer beads. This work shows that when the Hamiltonian estimator is used in conjunction with fourth-order propagators with optimizable parameters, the ground state energies of 2D parabolic quantum-dots with approximately 10 completely polarized electrons can be obtained with only 3-5 beads, before the onset of severe sign problems. This work was made possible by NPRP GRANT #5-674-1-114 from the Qatar National Research Fund (a member of Qatar Foundation). The statements made herein are solely the responsibility of the author.

  7. Design and Analysis of Optimization Algorithms to Minimize Cryptographic Processing in BGP Security Protocols.

    PubMed

    Sriram, Vinay K; Montgomery, Doug

    2017-07-01

    The Internet is subject to attacks due to vulnerabilities in its routing protocols. One proposed approach to attain greater security is to cryptographically protect network reachability announcements exchanged between Border Gateway Protocol (BGP) routers. This study proposes and evaluates the performance and efficiency of various optimization algorithms for validation of digitally signed BGP updates. In particular, this investigation focuses on the BGPSEC (BGP with SECurity extensions) protocol, currently under consideration for standardization in the Internet Engineering Task Force. We analyze three basic BGPSEC update processing algorithms: Unoptimized, Cache Common Segments (CCS) optimization, and Best Path Only (BPO) optimization. We further propose and study cache management schemes to be used in conjunction with the CCS and BPO algorithms. The performance metrics used in the analyses are: (1) routing table convergence time after BGPSEC peering reset or router reboot events and (2) peak-second signature verification workload. Both analytical modeling and detailed trace-driven simulation were performed. Results show that the BPO algorithm is 330% to 628% faster than the unoptimized algorithm for routing table convergence in a typical Internet core-facing provider edge router.

  8. A comparison of PCA/ICA for data preprocessing in remote sensing imagery classification

    NASA Astrophysics Data System (ADS)

    He, Hui; Yu, Xianchuan

    2005-10-01

    In this paper a performance comparison of a variety of data preprocessing algorithms in remote sensing image classification is presented. The selected algorithms are principal component analysis (PCA) and three different independent component analyses (ICA): Fast-ICA (Aapo Hyvarinen, 1999), Kernel-ICA (KCCA and KGV; Bach & Jordan, 2002), and EFFICA (Aiyou Chen & Peter Bickel, 2003). These algorithms were applied to a remote sensing image (1600×1197) obtained from Shunyi, Beijing. For classification, an MLC method is used on the raw and preprocessed data. The results show that classification with the preprocessed data gives more confident results than with the raw data; among the preprocessing algorithms, the ICA algorithms improve on PCA, and EFFICA performs better than the others. The convergence of these ICA algorithms (for more than a million data points) is also studied; the results show EFFICA converges much faster than the others. Furthermore, because EFFICA is a one-step maximum likelihood estimate (MLE) that reaches asymptotic Fisher efficiency, its computational cost is quite small and its memory demand is greatly reduced, which resolves the "out of memory" problem that occurred with the other algorithms.

  9. Association of female sex and heart rate with increased arterial stiffness in patients with type 2 diabetes mellitus

    PubMed Central

    Kang, Min-Kyung; Yu, Jae Myung; Chun, Kwang Jin; Choi, Jaehuk; Choi, Seonghoon; Lee, Namho; Cho, Jung Rae

    2017-01-01

    Objective: This study aimed to evaluate the factors associated with increased arterial stiffness (IAS) measured by pulse wave velocity (PWV) and its clinical implications in patients with type 2 diabetes mellitus (DM). Methods: This was an observational, cross-sectional study. The ankle–brachial PWV was used to measure arterial stiffness, and 310 patients (mean age, 49±9 years; 180 men) with type 2 DM were divided into two groups according to the results of PWV: Group 1 (IAS; n=214) and Group 2 (normal arterial stiffness; n=96). Results: Patients in Group 1 were predominantly females (48% vs. 28%, p=0.001) and showed higher blood pressure and faster heart rate (HR). The glomerular filtration rate was lower and the urine microalbumin level was higher in patients with IAS. In multiple regression analysis, female sex and faster HR were independently associated with IAS. In subgroup analysis among female patients, prior stroke was more common in patients with IAS, and faster HR and increased postprandial 2-h C-peptide level were independently associated with IAS. Conclusion: Female sex and faster HR were independently associated with IAS in patients with type 2 DM. In a subgroup analysis among female patients, prior stroke was more common in patients with IAS, and faster HR and elevated postprandial 2-h C-peptide level were found to be independently associated with IAS. PMID:29145217

  10. Rates and mechanics of rapid frontal accretion along the very obliquely convergent southern Hikurangi margin, New Zealand

    NASA Astrophysics Data System (ADS)

    Barnes, Philip M.; de Lépinay, Bernard Mercier

    1997-11-01

    Analysis of seismic reflection profiles, swath bathymetry, side-scan sonar imagery, and sediment samples reveals the three-dimensional structure, morphology, and stratigraphic evolution of the central to southern Hikurangi margin accretionary wedge, which is developing in response to thick trench fill sediment and oblique convergence between the Australian and Pacific plates. A seismic stratigraphy of the trench fill turbidites and frontal part of the wedge is constrained by seismic correlations to an already established stratigraphic succession nearby, by coccolith and foraminifera biostratigraphy of three core and dredge samples, and by estimates of stratigraphic thicknesses and rates of accumulation of compacted sediment. Structural and stratigraphic analyses of the frontal part of the wedge yield quantitative data on the timing of inception of thrust faults and folds, on the growth and mechanics of frontal accretion under variable convergence obliquity, and on the amounts and rates of horizontal shortening. The data place constraints on the partitioning of geological strain across the entire southern Hikurangi margin. The principal deformation front at the toe of the wedge is discontinuous and represented by right-stepping thrust faulted and folded ridges up to 1 km high, which develop initially from discontinuous protothrusts. In the central part of the margin near 41°S, where the convergence obliquity is 50°, orthogonal convergence rate is slow (27 mm/yr), and about 75% of the total 4 km of sediment on the Pacific Plate is accreted frontally, the seismically resolvable structures within 30 km of the deformation front accommodate about 6 km of horizontal shortening. At least 80% of this shortening has occurred within the last 0.4±0.1 m.y. at an average rate of 12±3 mm/yr. This rate indicates that the frontal 30 km of the wedge accounts for about 33-55% of the predicted orthogonal contraction across the entire plate boundary zone.
Despite plate convergence obliquity of 50°, rapid frontal accretion has occurred during the late Quaternary with the principal deformation front migrating seaward up to 50 km within the last 0.5 m.y. (i.e., at a rate of 100 km/m.y.). The structural response to this accretion rate has been a reduction in wedge taper and, consequently, internal deformation behind the present deformation front. Near the southwestern termination of the wedge, where there is an along-the-margin transition to continental transpressional tectonics, the convergence obliquity increases to >56°, and the orthogonal convergence rate decreases to 22 mm/yr, the wedge narrows to 13 km and is characterized simply by two frontal backthrusts and landward-verging folds. These structures have accommodated not more than 0.5 km of horizontal shortening at a rate of < 1 mm/yr, which represents < 5% of the predicted orthogonal shortening across the entire plate boundary in southern North Island. The landward-vergent structural domain may represent a transition zone from rapid frontal accretion associated with low basal friction and high pore pressure ratio in the central part of the margin, to the northern South Island region where the upper and lower plates are locked or at least very strongly coupled.

  11. Artificial dissipation and central difference schemes for the Euler and Navier-Stokes equations

    NASA Technical Reports Server (NTRS)

    Swanson, R. C.; Turkel, Eli

    1987-01-01

    An artificial dissipation model, including boundary treatment, that is employed in many central difference schemes for solving the Euler and Navier-Stokes equations is discussed. Modifications of this model such as the eigenvalue scaling suggested by upwind differencing are examined. Multistage time stepping schemes with and without a multigrid method are used to investigate the effects of changes in the dissipation model on accuracy and convergence. Improved accuracy for inviscid and viscous airfoil flow is obtained with the modified eigenvalue scaling. Slower convergence rates are experienced with the multigrid method using such scaling. The rate of convergence is improved by applying a dissipation scaling function that depends on mesh cell aspect ratio.

  12. An improved VSS NLMS algorithm for active noise cancellation

    NASA Astrophysics Data System (ADS)

    Sun, Yunzhuo; Wang, Mingjiang; Han, Yufei; Zhang, Congyan

    2017-08-01

    In this paper, an improved variable step size NLMS algorithm is proposed. NLMS has a fast convergence rate and low steady-state error compared to other traditional adaptive filtering algorithms, but there is a contradiction between convergence speed and steady-state error that affects the performance of the algorithm. We propose a new variable step size NLMS algorithm that dynamically changes the step size according to the current error and the iteration count. The proposed algorithm has a simple formulation and easily set parameters, and effectively resolves the contradiction in NLMS. The simulation results show that the proposed algorithm simultaneously achieves good tracking ability, a fast convergence rate, and low steady-state error.
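    The normalized LMS update with an error-driven step size can be sketched as follows. The schedule below (step size shrinking smoothly as the error decreases) is an illustrative assumption standing in for the paper's exact rule, and the channel-identification demo is hypothetical:

```python
import numpy as np

def vss_nlms(x, d, order=8, mu_max=1.0, mu_min=0.05, eps=1e-8):
    """Normalized LMS with an error-driven variable step size: the step
    is large while the error is large and shrinks toward mu_min as the
    filter converges (one simple illustrative schedule)."""
    n = len(x)
    w = np.zeros(order)
    e = np.zeros(n)
    for k in range(order - 1, n):
        u = x[k - order + 1:k + 1][::-1]     # newest sample first
        e[k] = d[k] - w @ u                  # a-priori error
        mu = mu_min + (mu_max - mu_min) * (1.0 - np.exp(-5.0 * e[k] ** 2))
        w += mu * e[k] * u / (eps + u @ u)   # normalized update
    return w, e

# demo: identify an unknown 8-tap FIR channel from its input/output
rng = np.random.default_rng(1)
h = np.array([0.6, -0.3, 0.1, 0.05, 0.02, -0.01, 0.0, 0.0])
x = rng.standard_normal(5000)
d = np.convolve(x, h)[: len(x)]              # noise-free channel output
w, e = vss_nlms(x, d)
```

    A large step early gives fast initial convergence, while the small step near the solution keeps the steady-state error low, which is exactly the trade-off the abstract describes.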

  13. Application of an enhanced fuzzy algorithm for MR brain tumor image segmentation

    NASA Astrophysics Data System (ADS)

    Hemanth, D. Jude; Vijila, C. Kezi Selva; Anitha, J.

    2010-02-01

    Image segmentation is one of the significant digital image processing techniques commonly used in the medical field. One specific application is tumor detection in abnormal Magnetic Resonance (MR) brain images. Fuzzy approaches are widely preferred for tumor segmentation, as they generally yield superior results in terms of accuracy. But most fuzzy algorithms suffer from the drawback of a slow convergence rate, which makes the system practically infeasible. In this work, the application of a modified Fuzzy C-means (FCM) algorithm to tackle the convergence problem is explored in the context of brain image segmentation. This modified FCM algorithm employs the concept of quantization to improve the convergence rate besides yielding excellent segmentation efficiency. The algorithm is experimented on real abnormal MR brain images collected from radiologists. A comprehensive feature vector is extracted from these images and used for the segmentation technique. An extensive feature selection process is performed, which reduces the convergence time and improves the segmentation efficiency. After segmentation, the tumor portion is extracted from the segmented image. Comparative analysis in terms of segmentation efficiency and convergence rate is performed between the conventional FCM and the modified FCM. Experimental results show superior results for the modified FCM algorithm in terms of the performance measures. Thus, this work highlights the application of the modified algorithm for brain tumor detection in abnormal MR brain images.
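    For reference, the conventional FCM iteration whose convergence the paper improves alternates centroid and membership updates. A minimal sketch of standard FCM only (the quantization modification is not reproduced, and the two-blob demo data are illustrative):

```python
import numpy as np

def fuzzy_c_means(X, c=2, m=2.0, max_iter=100, tol=1e-5, seed=0):
    """Conventional fuzzy c-means: alternate centroid and membership
    updates until the membership matrix stops changing."""
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    U = rng.random((n, c))
    U /= U.sum(axis=1, keepdims=True)            # memberships sum to 1
    for _ in range(max_iter):
        Um = U ** m
        centers = (Um.T @ X) / Um.sum(axis=0)[:, None]
        # distance of every point to every centre
        dist = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        dist = np.fmax(dist, 1e-10)
        U_new = dist ** (-2.0 / (m - 1.0))       # u_ik ∝ d_ik^(-2/(m-1))
        U_new /= U_new.sum(axis=1, keepdims=True)
        if np.abs(U_new - U).max() < tol:
            return centers, U_new
        U = U_new
    return centers, U

# demo: two well-separated Gaussian blobs
rng = np.random.default_rng(2)
X = np.vstack([rng.normal(0.0, 0.3, (100, 2)),
               rng.normal(4.0, 0.3, (100, 2))])
centers, U = fuzzy_c_means(X)
```

    Each sweep over the data is cheap, but many sweeps may be needed; speeding up this loop is the convergence problem the modified algorithm targets.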

  14. Multi-element microelectropolishing method

    DOEpatents

    Lee, Peter J.

    1994-01-01

    A method is provided for microelectropolishing a transmission electron microscopy nonhomogeneous multi-element compound foil. The foil is electrolyzed at different polishing rates for different elements by rapidly cycling between different current densities. During a first portion of each cycle at a first voltage a first element electrolyzes at a higher current density than a second element such that the material of the first element leaves the anode foil at a faster rate than the second element and creates a solid surface film, and such that the solid surface film is removed at a faster rate than the first element leaves the anode foil. During a second portion of each cycle at a second voltage the second element electrolyzes at a higher current density than the first element, and the material of the second element leaves the anode foil at a faster rate than the first element and creates a solid surface film, and the solid surface film is removed at a slower rate than the second element leaves the foil. The solid surface film is built up during the second portion of the cycle, and removed during the first portion of the cycle.

  15. On the Convergence of an Implicitly Restarted Arnoldi Method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lehoucq, Richard B.

    We show that Sorensen's [35] implicitly restarted Arnoldi method (including its block extension) is simultaneous iteration with an implicit projection step to accelerate convergence to the invariant subspace of interest. By using the geometric convergence theory for simultaneous iteration due to Watkins and Elsner [43], we prove that an implicitly restarted Arnoldi method can achieve a super-linear rate of convergence to the dominant invariant subspace of a matrix. Moreover, we show how an IRAM computes a nested sequence of approximations for the partial Schur decomposition associated with the dominant invariant subspace of a matrix.

  16. Movement amplitude and tempo change in piano performance

    NASA Astrophysics Data System (ADS)

    Palmer, Caroline

    2004-05-01

    Music performance places stringent temporal and cognitive demands on individuals that should yield large speed/accuracy tradeoffs. Skilled piano performance, however, shows consistently high accuracy across a wide variety of rates. Movement amplitude may affect the speed/accuracy tradeoff, so that high accuracy can be obtained even at very fast tempi. The contribution of movement amplitude changes in rate (tempo) is investigated with motion capture. Cameras recorded pianists with passive markers on hands and fingers, who performed on an electronic (MIDI) keyboard. Pianists performed short melodies at faster and faster tempi until they made errors (altering the speed/accuracy function). Variability of finger movements in the three motion planes indicated most change in the plane perpendicular to the keyboard across tempi. Surprisingly, peak amplitudes of motion before striking the keys increased as tempo increased. Increased movement amplitudes at faster rates may reduce or compensate for speed/accuracy tradeoffs. [Work supported by Canada Research Chairs program, NIMH R01 45764.]

  17. Growth rate effects on the formation of dislocation loops around deep helium bubbles in Tungsten

    DOE PAGES

    Sandoval, Luis; Perez, Danny; Uberuaga, Blas P.; ...

    2016-11-15

    Here, the growth process of spherical helium bubbles located 6 nm below a (100) surface is studied using molecular dynamics and parallel replica dynamics simulations, over growth rates from 10^6 to 10^12 helium atoms per second. Slower growth rates lead to a release of pressure and lower helium content as compared with fast growth cases. In addition, at slower growth rates, helium bubbles are not decorated by multiple dislocation loops, as these tend to merge or be emitted given sufficient time. At faster rates, dislocation loops nucleate faster than they can be emitted, leading to a more complicated dislocation structure around the bubble.

  18. Fourier analysis of the SOR iteration

    NASA Technical Reports Server (NTRS)

    Leveque, R. J.; Trefethen, L. N.

    1986-01-01

    The SOR iteration for solving linear systems of equations depends upon an overrelaxation factor omega. It is shown that for the standard model problem of Poisson's equation on a rectangle, the optimal omega and corresponding convergence rate can be rigorously obtained by Fourier analysis. The trick is to tilt the space-time grid so that the SOR stencil becomes symmetrical. The tilted grid also gives insight into the relation between convergence rates of several variants.
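    For the Poisson model problem the classical result referenced here gives the optimal overrelaxation factor ω_opt = 2/(1 + sin(πh)). A small sketch comparing Gauss-Seidel (ω = 1) with the optimal ω on the standard 5-point stencil; grid size, tolerance, and right-hand side are illustrative choices:

```python
import numpy as np

def sor_poisson(n, omega, tol=1e-8, max_iter=10000):
    """SOR sweeps for -Laplace(u) = f on the unit square with the 5-point
    stencil and zero Dirichlet boundary; returns the number of sweeps
    until the largest Gauss-Seidel correction falls below tol."""
    h = 1.0 / (n + 1)
    f = np.ones((n, n))                      # right-hand side
    u = np.zeros((n + 2, n + 2))             # interior plus boundary ring
    for it in range(1, max_iter + 1):
        biggest = 0.0
        for i in range(1, n + 1):
            for j in range(1, n + 1):
                gs = 0.25 * (u[i - 1, j] + u[i + 1, j]
                             + u[i, j - 1] + u[i, j + 1]
                             + h * h * f[i - 1, j - 1])
                change = gs - u[i, j]
                u[i, j] += omega * change    # overrelax the GS correction
                biggest = max(biggest, abs(change))
        if biggest < tol:
            return it
    return max_iter

n = 20
omega_opt = 2.0 / (1.0 + np.sin(np.pi / (n + 1)))  # classical optimal omega
it_gs = sor_poisson(n, omega=1.0)                   # omega = 1 is Gauss-Seidel
it_sor = sor_poisson(n, omega=omega_opt)
```

    With the optimal ω the iteration count drops from hundreds of sweeps to a few dozen on this grid, the order-of-magnitude gain that makes choosing ω correctly worthwhile.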

  19. Muscle fibre recruitment can respond to the mechanics of the muscle contraction.

    PubMed

    Wakeling, James M; Uehli, Katrin; Rozitis, Antra I

    2006-08-22

    This study investigates the motor unit recruitment patterns between and within muscles of the triceps surae during cycling on a stationary ergometer at a range of pedal speeds and resistances. Muscle activity was measured from the soleus (SOL), medial gastrocnemius (MG) and lateral gastrocnemius (LG) using surface electromyography (EMG) and quantified using wavelet and principal component analysis. Muscle fascicle strain rates were quantified using ultrasonography, and the muscle-tendon unit lengths were calculated from the segmental kinematics. The EMG intensities showed that the body uses the SOL relatively more for the higher-force, lower-velocity contractions than the MG and LG. The EMG spectra showed a shift to higher frequencies at faster muscle fascicle strain rates for MG: these shifts were independent of the level of muscle activity, the locomotor load and the muscle fascicle strain. These results indicated that a selective recruitment of the faster motor units occurred within the MG muscle in response to the increasing muscle fascicle strain rates. This preferential recruitment of the faster fibres for the faster tasks indicates that in some circumstances motor unit recruitment during locomotion can match the contractile properties of the muscle fibres to the mechanical demands of the contraction.

  20. Validity of faculty and resident global assessment of medical students' clinical knowledge during their pediatrics clerkship.

    PubMed

    Dudas, Robert A; Colbert, Jorie M; Goldstein, Seth; Barone, Michael A

    2012-01-01

    Medical knowledge is one of six core competencies in medicine. Medical student assessments should be valid and reliable. We assessed the relationship between faculty and resident global assessment of pediatric medical student knowledge and performance on a standardized test in medical knowledge. Retrospective cross-sectional study of medical students on a pediatric clerkship in academic year 2008-2009 at one academic health center. Faculty and residents rated students' clinical knowledge on a 5-point Likert scale. The inter-rater reliability of clinical knowledge ratings was assessed by calculating the intra-class correlation coefficient (ICC) for residents' ratings, faculty ratings, and both rating types combined. Convergent validity between clinical knowledge ratings and scores on the National Board of Medical Examiners (NBME) clinical subject examination in pediatrics was assessed with the Pearson product-moment correlation coefficient and the coefficient of determination. There was moderate agreement for global clinical knowledge ratings by faculty and moderate agreement for ratings by residents. The agreement was also moderate when faculty and resident ratings were combined. Global ratings of clinical knowledge had high convergent validity with pediatric examination scores when students were rated by both residents and faculty. Our findings provide evidence for convergent validity of global assessment of medical students' clinical knowledge with NBME subject examination scores in pediatrics. Copyright © 2012 Academic Pediatric Association. Published by Elsevier Inc. All rights reserved.

  1. Super-convergence of Discontinuous Galerkin Method Applied to the Navier-Stokes Equations

    NASA Technical Reports Server (NTRS)

    Atkins, Harold L.

    2009-01-01

    The practical benefits of the hyper-accuracy properties of the discontinuous Galerkin method are examined. In particular, we demonstrate that some flow attributes exhibit super-convergence even in the absence of any post-processing technique. Theoretical analysis suggests that flow features dominated by global propagation speeds and decay or growth rates should be super-convergent. Several discrete forms of the discontinuous Galerkin method are applied to the simulation of unsteady viscous flow over a two-dimensional cylinder. The period of the naturally occurring oscillation is examined and shown to converge at a rate of 2p+1, where p is the polynomial degree of the discontinuous Galerkin basis. Comparisons are made between the different discretizations and with theoretical analysis.
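
    The quoted 2p+1 rate is typically verified by measuring the observed order of accuracy from errors on successively refined grids. A minimal sketch of that estimate (with synthetic errors standing in for measured period errors; all numbers invented for illustration):

```python
import math

def observed_order(e_coarse, e_fine, refinement=2.0):
    """Estimate the convergence order q from errors on two grids, assuming e ~ C*h^q."""
    return math.log(e_coarse / e_fine) / math.log(refinement)

# Synthetic errors behaving like h^(2p+1) with p = 2, i.e. fifth order.
p = 2
h = [0.1 * 0.5 ** k for k in range(3)]          # three successively halved grids
errors = [hk ** (2 * p + 1) for hk in h]
orders = [observed_order(errors[k], errors[k + 1]) for k in range(2)]
print(orders)
```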

  2. Flow cytometry apparatus

    DOEpatents

    Pinkel, D.

    1987-11-30

    An obstruction across the flow chamber creates a one-dimensional convergence of a sheath fluid. A passageway in the obstruction directs flat cells near to the area of one-dimensional convergence in the sheath fluid to provide proper orientation of flat cells at fast rates. 6 figs.

  3. Multigrid Strategies for Viscous Flow Solvers on Anisotropic Unstructured Meshes

    NASA Technical Reports Server (NTRS)

    Mavriplis, Dimitri J.

    1998-01-01

    Unstructured multigrid techniques for relieving the stiffness associated with high-Reynolds number viscous flow simulations on extremely stretched grids are investigated. One approach consists of employing a semi-coarsening or directional-coarsening technique, based on the directions of strong coupling within the mesh, in order to construct more optimal coarse grid levels. An alternate approach is developed which employs directional implicit smoothing with regular fully coarsened multigrid levels. The directional implicit smoothing is obtained by constructing implicit lines in the unstructured mesh based on the directions of strong coupling. Both approaches yield large increases in convergence rates over the traditional explicit full-coarsening multigrid algorithm. However, maximum benefits are achieved by combining the two approaches in a coupled manner into a single algorithm. An order of magnitude increase in convergence rate over the traditional explicit full-coarsening algorithm is demonstrated, and convergence rates for high-Reynolds number viscous flows which are independent of the grid aspect ratio are obtained. Further acceleration is provided by incorporating low-Mach-number preconditioning techniques, and a Newton-GMRES strategy which employs the multigrid scheme as a preconditioner. The compounding effects of these various techniques on the speed of convergence are documented through several example test cases.

  4. Nubia-Arabia-Eurasia plate motions and the dynamics of Mediterranean and Middle East tectonics

    NASA Astrophysics Data System (ADS)

    Reilinger, Robert; McClusky, Simon

    2011-09-01

    We use geodetic and plate tectonic observations to constrain the tectonic evolution of the Nubia-Arabia-Eurasia plate system. Two phases of slowing of Nubia-Eurasia convergence, each of which resulted in an ˜50 per cent decrease in the rate of convergence, coincided with the initiation of Nubia-Arabia continental rifting along the Red Sea and Somalia-Arabia rifting along the Gulf of Aden at 24 ± 4 Ma, and the initiation of oceanic rifting along the full extent of the Gulf of Aden at 11 ± 2 Ma. In addition, both the northern and southern Red Sea (Nubia-Arabia plate boundary) underwent changes in the configuration of extension at 11 ± 2 Ma, including the transfer of extension from the Suez Rift to the Gulf of Aqaba/Dead Sea fault system in the north, and from the central Red Sea Basin (Bab al Mandab) to the Afar volcanic zone in the south. While Nubia-Eurasia convergence slowed, the rate of Arabia-Eurasia convergence remained constant within the resolution of our observations, and is indistinguishable from the present-day global positioning system rate. The timing of the initial slowing of Nubia-Eurasia convergence (24 ± 4 Ma) corresponds to the initiation of extensional tectonics in the Mediterranean Basin, and the second phase of slowing to changes in the character of Mediterranean extension reported at ˜11 Ma. These observations are consistent with the hypothesis that changes in Nubia-Eurasia convergence, and associated Nubia-Arabia divergence, are the fundamental cause of both Mediterranean and Middle East post-Late Oligocene tectonics. We speculate about the implications of these kinematic relationships for the dynamics of Nubia-Arabia-Eurasia plate interactions, and favour the interpretation that slowing of Nubia-Eurasia convergence, and the resulting tectonic changes in the Mediterranean Basin and Middle East, resulted from a decrease in slab pull transmitted from the subducted Arabian lithosphere across the evolving Nubia-Arabia plate boundary.

  5. Quaternary tectonic evolution of the Pamir-Tian Shan convergence zone, Northwest China

    NASA Astrophysics Data System (ADS)

    Thompson Jobe, Jessica Ann; Li, Tao; Chen, Jie; Burbank, Douglas W.; Bufe, Aaron

    2017-12-01

    The Pamir-Tian Shan collision zone in the western Tarim Basin, northwest China, formed from rapid and ongoing convergence in response to the Indo-Eurasian collision. The arid landscape preserves suites of fluvial terraces crossing structures active since the late Neogene that create fault and fold scarps recording Quaternary deformation. Using geologic and geomorphic mapping, differential GPS surveys of deformed terraces, and optically stimulated luminescence dating, we create a synthesis of the active structures that delineate the timing, rate, and migration of Quaternary deformation during ongoing convergence. New deformation rates on eight faults and folds, when combined with previous studies, highlight the spatial and temporal patterns of deformation within the Pamir-Tian Shan convergence zone during the Quaternary. Terraces spanning 130 to 8 ka record deformation rates between 0.1 and 5.6 mm/yr on individual structures. In the westernmost Tarim Basin, where the Pamir and Tian Shan are already juxtaposed, the fastest rates occur on actively deforming structures at the interface of the Pamir-Tian Shan orogens. Farther east, as the separation between the Pamir-Tian Shan orogens increases, the deformation has not been concentrated on a single structure, but rather has been concurrently distributed across a zone of faults and folds in the Kashi-Atushi fold-and-thrust belt and along the NE Pamir margin, where shortening rates vary on individual structures during the Quaternary. Although numerous structures accommodate the shortening and the locus of deformation shifts during the Quaternary, the total shortening across the western Tarim Basin has remained steady and approximately matches the current geodetic rate of 6-9 mm/yr.

  6. Strain accumulation across the Prince William Sound asperity, Southcentral Alaska

    NASA Astrophysics Data System (ADS)

    Savage, J. C.; Svarc, J. L.; Lisowski, M.

    2015-03-01

    The surface velocities predicted by the conventional subduction model are compared to velocities measured in a GPS array (surveyed in 1993, 1995, 1997, 2000, and 2004) spanning the Prince William Sound asperity. The observed velocities in the comparison have been corrected to remove the contributions from postseismic (1964 Alaska earthquake) mantle relaxation. Except at the most seaward monument (located on Middleton Island at the seaward edge of the continental shelf, just 50 km landward of the deformation front in the Aleutian Trench), the corrected velocities qualitatively agree with those predicted by an improved, two-dimensional, back slip, subduction model in which the locked megathrust coincides with the plate interface identified by seismic refraction surveys, and the back slip rate is equal to the plate convergence rate. A better fit to the corrected velocities is furnished by either a back slip rate 20% greater than the plate convergence rate or a 30% shallower megathrust. The shallow megathrust in the latter fit may be an artifact of the uniform half-space Earth model used in the inversion. Backslip at the plate convergence rate on the megathrust mapped by refraction surveys would fit the data as well if the rigidity of the underthrust plate was twice that of the overlying plate, a rigidity contrast higher than expected. The anomalous motion at Middleton Island is attributed to continuous slip at nearly the plate convergence rate on a postulated, listric fault that splays off the megathrust at a depth of about 12 km and outcrops on the continental slope south-southeast of Middleton Island.

  7. Strain accumulation across the Prince William Sound asperity, Southcentral Alaska

    USGS Publications Warehouse

    Savage, James C.; Svarc, Jerry L.; Lisowski, Michael

    2015-01-01

    The surface velocities predicted by the conventional subduction model are compared to velocities measured in a GPS array (surveyed in 1993, 1995, 1997, 2000, and 2004) spanning the Prince William Sound asperity. The observed velocities in the comparison have been corrected to remove the contributions from postseismic (1964 Alaska earthquake) mantle relaxation. Except at the most seaward monument (located on Middleton Island at the seaward edge of the continental shelf, just 50 km landward of the deformation front in the Aleutian Trench), the corrected velocities qualitatively agree with those predicted by an improved, two-dimensional, back slip, subduction model in which the locked megathrust coincides with the plate interface identified by seismic refraction surveys, and the back slip rate is equal to the plate convergence rate. A better fit to the corrected velocities is furnished by either a back slip rate 20% greater than the plate convergence rate or a 30% shallower megathrust. The shallow megathrust in the latter fit may be an artifact of the uniform half-space Earth model used in the inversion. Backslip at the plate convergence rate on the megathrust mapped by refraction surveys would fit the data as well if the rigidity of the underthrust plate was twice that of the overlying plate, a rigidity contrast higher than expected. The anomalous motion at Middleton Island is attributed to continuous slip at nearly the plate convergence rate on a postulated, listric fault that splays off the megathrust at a depth of about 12 km and outcrops on the continental slope south-southeast of Middleton Island.

  8. Convergence behavior of the random phase approximation renormalized correlation energy

    NASA Astrophysics Data System (ADS)

    Bates, Jefferson E.; Sensenig, Jonathon; Ruzsinszky, Adrienn

    2017-05-01

    Based on the random phase approximation (RPA), RPA renormalization [J. E. Bates and F. Furche, J. Chem. Phys. 139, 171103 (2013), 10.1063/1.4827254] is a robust many-body perturbation theory that works for molecules and materials because it does not diverge as the Kohn-Sham gap approaches zero. Additionally, RPA renormalization enables the simultaneous calculation of RPA and beyond-RPA correlation energies since the total correlation energy is the sum of a series of independent contributions. The first-order approximation (RPAr1) yields the dominant beyond-RPA contribution to the correlation energy for a given exchange-correlation kernel, but systematically underestimates the total beyond-RPA correction. For both the homogeneous electron gas model and real systems, we demonstrate numerically that RPA renormalization beyond first order converges monotonically to the infinite-order beyond-RPA correlation energy for several model exchange-correlation kernels and that the rate of convergence is principally determined by the choice of the kernel and spin polarization of the ground state. The monotonic convergence is rationalized from an analysis of the RPA renormalized correlation energy corrections, assuming the exchange-correlation kernel and response functions satisfy some reasonable conditions. For spin-unpolarized atoms, molecules, and bulk solids, we find that RPA renormalization is typically converged to 1 meV error or less by fourth order regardless of the band gap or dimensionality. Most spin-polarized systems converge at a slightly slower rate, with errors on the order of 10 meV at fourth order and typically requiring up to sixth order to reach 1 meV error or less. Open-shell atoms, however, are the slowest to converge, presenting the most challenging case and requiring many more orders.

  9. Discrete-Time Deterministic $Q$-Learning: A Novel Convergence Analysis.

    PubMed

    Wei, Qinglai; Lewis, Frank L; Sun, Qiuye; Yan, Pengfei; Song, Ruizhuo

    2017-05-01

    In this paper, a novel discrete-time deterministic Q-learning algorithm is developed. In each iteration of the developed Q-learning algorithm, the iterative Q function is updated over the entire state and control spaces, rather than for a single state and a single control as in the traditional Q-learning algorithm. A new convergence criterion is established to guarantee that the iterative Q function converges to the optimum, which simplifies the learning-rate convergence criterion of traditional Q-learning algorithms. During the convergence analysis, the upper and lower bounds of the iterative Q function are analyzed to obtain the convergence criterion, instead of analyzing the iterative Q function itself. For convenience of analysis, the convergence properties of the deterministic Q-learning algorithm are first developed for the undiscounted case. Then, considering the discount factor, the convergence criterion for the discounted case is established. Neural networks are used to approximate the iterative Q function and to compute the iterative control law, facilitating the implementation of the deterministic Q-learning algorithm. Finally, simulation results and comparisons are given to illustrate the performance of the developed algorithm.
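
    A minimal sketch of this full-sweep style of deterministic Q-learning on a toy three-state deterministic MDP (the MDP, discount factor, and iteration count are invented for illustration; the paper's neural-network approximation is omitted):

```python
import numpy as np

# A toy deterministic MDP: next-state table P[s, a] and reward table R[s, a].
P = np.array([[1, 2], [2, 0], [2, 2]])
R = np.array([[0.0, 1.0], [0.0, 2.0], [0.0, 0.0]])
gamma = 0.9

Q = np.zeros((3, 2))
for _ in range(200):
    # Full sweep: update Q for ALL state-action pairs at once,
    # Q(s, a) <- R(s, a) + gamma * max_a' Q(P(s, a), a').
    Q = R + gamma * Q[P].max(axis=2)
print(Q.round(3))
```

    Because the sweep is a gamma-contraction, Q converges geometrically to the optimal Q function of this toy problem.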

  10. Non-LTE radiative transfer with lambda-acceleration - Convergence properties using exact full and diagonal lambda-operators

    NASA Technical Reports Server (NTRS)

    Macfarlane, J. J.

    1992-01-01

    We investigate the convergence properties of Lambda-acceleration methods for non-LTE radiative transfer problems in planar and spherical geometry. Matrix elements of the 'exact' Lambda-operator are used to accelerate convergence to a solution in which both the radiative transfer and atomic rate equations are simultaneously satisfied. Convergence properties of two-level and multilevel atomic systems are investigated for methods using: (1) the complete Lambda-operator, and (2) the diagonal of the Lambda-operator. We find that the convergence properties for the method utilizing the complete Lambda-operator are significantly better than those of the diagonal Lambda-operator method, often reducing the number of iterations needed for convergence by a factor of between two and seven. However, the overall computational time required for large-scale calculations - that is, those with many atomic levels and spatial zones - is typically a factor of a few larger for the complete Lambda-operator method, suggesting that the approach is best applied to problems in which convergence is especially difficult.

  11. Three-dimensional unstructured grid Euler computations using a fully-implicit, upwind method

    NASA Technical Reports Server (NTRS)

    Whitaker, David L.

    1993-01-01

    A method has been developed to solve the Euler equations on a three-dimensional unstructured grid composed of tetrahedra. The method uses an upwind flow solver with a linearized, backward-Euler time integration scheme. Each time step results in a sparse linear system of equations which is solved by an iterative, sparse matrix solver. Local-time stepping, switched evolution relaxation (SER), preconditioning and reuse of the Jacobian are employed to accelerate the convergence rate. Implicit boundary conditions were found to be extremely important for fast convergence. Numerical experiments have shown that convergence rates comparable to that of a multigrid, central-difference scheme are achievable on the same mesh. Results are presented for several grids about an ONERA M6 wing.

  12. Distributed support vector machine in master-slave mode.

    PubMed

    Chen, Qingguo; Cao, Feilong

    2018-05-01

    It is well known that the support vector machine (SVM) is an effective learning algorithm. The alternating direction method of multipliers (ADMM) algorithm has emerged as a powerful technique for solving distributed optimisation models. This paper proposes a distributed SVM algorithm in a master-slave mode (MS-DSVM), which integrates a distributed SVM and ADMM in a master-slave configuration in which the master node and slave nodes are connected so that results can be broadcast. The distributed SVM is regarded as a regularised optimisation problem and modelled as a series of convex optimisation sub-problems that are solved by ADMM. Additionally, the over-relaxation technique is utilised to accelerate the convergence rate of the proposed MS-DSVM. Our theoretical analysis demonstrates that the proposed MS-DSVM achieves linear convergence, the fastest convergence rate attained by existing standard distributed ADMM algorithms. Numerical examples demonstrate that the convergence and accuracy of the proposed MS-DSVM are superior to those of existing methods under the ADMM framework. Copyright © 2018 Elsevier Ltd. All rights reserved.
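
    The over-relaxation technique mentioned here replaces the x-iterate with alpha*x + (1 - alpha)*z for alpha in (1, 2) before the z- and multiplier updates. A minimal sketch on a lasso problem rather than the paper's SVM model (problem sizes and all parameters invented for illustration):

```python
import numpy as np

def soft(v, k):
    """Soft-thresholding: the proximal operator of the l1 norm."""
    return np.sign(v) * np.maximum(np.abs(v) - k, 0.0)

def admm_lasso(A, b, lam, rho=1.0, alpha=1.0, tol=1e-8, max_iter=5000):
    """Scaled-form ADMM for min 0.5||Ax-b||^2 + lam||z||_1 s.t. x = z.
    alpha in (1, 2) applies over-relaxation to the x-iterate."""
    n = A.shape[1]
    Minv = np.linalg.inv(A.T @ A + rho * np.eye(n))   # small problem: direct inverse
    Atb = A.T @ b
    x, z, u = np.zeros(n), np.zeros(n), np.zeros(n)
    for it in range(1, max_iter + 1):
        x = Minv @ (Atb + rho * (z - u))
        xh = alpha * x + (1.0 - alpha) * z            # over-relaxed iterate
        z_old = z
        z = soft(xh + u, lam / rho)
        u = u + xh - z
        if np.linalg.norm(z - z_old) < tol and np.linalg.norm(x - z) < tol:
            return z, it
    return z, max_iter

rng = np.random.default_rng(0)
A = rng.normal(size=(30, 10))
b = rng.normal(size=30)
z1, it1 = admm_lasso(A, b, lam=0.5, alpha=1.0)
z2, it2 = admm_lasso(A, b, lam=0.5, alpha=1.8)
print(it1, it2)
```

    Both runs reach the same minimiser; over-relaxation typically trims the iteration count, which is the acceleration role it plays in MS-DSVM.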

  13. Convergence among cave catfishes: long-branch attraction and a Bayesian relative rates test.

    PubMed

    Wilcox, T P; García de León, F J; Hendrickson, D A; Hillis, D M

    2004-06-01

    Convergence has long been of interest to evolutionary biologists. Cave organisms appear to be ideal candidates for studying convergence in morphological, physiological, and developmental traits. Here we report apparent convergence in two cave-catfishes that were described on morphological grounds as congeners: Prietella phreatophila and Prietella lundbergi. We collected mitochondrial DNA sequence data from 10 species of catfishes, representing five of the seven genera in Ictaluridae, as well as seven species from a broad range of siluriform outgroups. Analysis of the sequence data under parsimony supports a monophyletic Prietella. However, both maximum-likelihood and Bayesian analyses support polyphyly of the genus, with P. lundbergi sister to Ictalurus and P. phreatophila sister to Ameiurus. The topological difference between parsimony and the other methods appears to result from long-branch attraction between the Prietella species. Similarly, the sequence data do not support several other relationships within Ictaluridae supported by morphology. We develop a new Bayesian method for examining variation in molecular rates of evolution across a phylogeny.

  14. (99)Tc(VII) Retardation, Reduction, and Redox Rate Scaling in Naturally Reduced Sediments.

    PubMed

    Liu, Yuanyuan; Liu, Chongxuan; Kukkadapu, Ravi K; McKinley, James P; Zachara, John; Plymale, Andrew E; Miller, Micah D; Varga, Tamas; Resch, Charles T

    2015-11-17

    An experimental and modeling study was conducted to investigate pertechnetate (Tc(VII)O4(-)) retardation, reduction, and rate scaling in three sediments from the Ringold Formation at the U.S. Department of Energy's Hanford Site, where (99)Tc is a major groundwater contaminant. Tc(VII) was reduced in all the sediments in both batch reactors and diffusion columns, with a faster rate in a sediment containing a higher concentration of HCl-extractable Fe(II). Tc(VII) migration in the diffusion columns was reductively retarded, with degrees of retardation correlated with Tc(VII) reduction rates. The reduction rates were faster in the diffusion columns than in the batch reactors, apparently influenced by the spatial distribution of redox-reactive minerals along the transport paths that supplied Tc(VII). X-ray computed tomography and autoradiography were performed to identify the spatial locations of Tc(VII) reduction and transport paths in the sediments, and the results generally confirmed the observed change in reaction rates from batch to column systems. The results from this study imply that Tc(VII) migration can be reductively retarded at the Hanford Site, with the degree of retardation dependent on reactive Fe(II) content and its distribution in sediments. This study also demonstrated that an effective reaction rate may be faster in transport systems than in well-mixed reactors.

  15. Active Control of Wind Tunnel Noise

    NASA Technical Reports Server (NTRS)

    Hollis, Patrick (Principal Investigator)

    1991-01-01

    The need for an adaptive active control system was recognized, since a wind tunnel is subject to variations in air velocity, temperature, and turbulence, as well as other factors such as nonlinearity. Among the many adaptive algorithms, the Least Mean Squares (LMS) algorithm, the simplest, has been used in Active Noise Control (ANC) systems by several researchers. However, Eriksson's (1985) results showed instability in an ANC system with an IIR filter for random noise input. The Recursive Least Squares (RLS) algorithm, although computationally more complex than the LMS algorithm, has better convergence and stability properties. The ANC system in the present work was simulated by using an FIR filter with an RLS algorithm for different inputs and for a number of plant models. Simulation results for the ANC system with acoustic feedback showed better robustness when used with the RLS algorithm than with the LMS algorithm for all types of inputs. Overall attenuation in the frequency domain was better in the case of the RLS adaptive algorithm. Simulation results with a more realistic plant model and an RLS adaptive algorithm showed a slower convergence rate than the case with an acoustic plant modeled as a delay. However, the attenuation properties were satisfactory for the simulated system with the modified plant. The effect of filter length on the rate of convergence and attenuation was studied. It was found that the rate of convergence decreases with increasing filter length, whereas the attenuation increases with it. The final design of the ANC system was simulated and found to have a reasonable convergence rate and good attenuation properties for an input containing discrete frequencies and random noise.
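
    A minimal system-identification sketch contrasting LMS and RLS convergence on a synthetic 4-tap FIR plant (not the wind-tunnel plant; all signals, tap weights, and parameters invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)
w_true = np.array([0.5, -0.3, 0.2, 0.1])    # unknown 4-tap FIR plant
N, taps = 60, 4
x = rng.normal(size=N)                      # white-noise input

def u_vec(n):
    """Regressor [x[n], x[n-1], ..., x[n-taps+1]], zero-padded at the start."""
    return np.array([x[n - k] if n - k >= 0 else 0.0 for k in range(taps)])

d = np.array([u_vec(n) @ w_true for n in range(N)])   # noiseless desired signal

# LMS: fixed step size mu
w_lms = np.zeros(taps)
mu = 0.05
for n in range(N):
    u = u_vec(n)
    w_lms += mu * (d[n] - u @ w_lms) * u

# RLS: recursive least squares, forgetting factor 1
w_rls = np.zeros(taps)
Pm = 100.0 * np.eye(taps)                   # large initial inverse-correlation matrix
for n in range(N):
    u = u_vec(n)
    k = Pm @ u / (1.0 + u @ Pm @ u)
    w_rls += k * (d[n] - u @ w_rls)
    Pm -= np.outer(k, u @ Pm)

err_lms = np.linalg.norm(w_lms - w_true)
err_rls = np.linalg.norm(w_rls - w_true)
print(err_lms, err_rls)
```

    After the same 60 samples, the RLS weight error is orders of magnitude below the LMS error, at the price of the matrix update per sample.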

  16. Initial-value semiclassical propagators for the Wigner phase space representation: Formulation based on the interpretation of the Moyal equation as a Schrödinger equation.

    PubMed

    Koda, Shin-ichi

    2015-12-28

    We formulate various semiclassical propagators for the Wigner phase space representation from a unified point of view. As is shown in several studies, the Moyal equation, which is an equation of motion for the Wigner distribution function, can be regarded as the Schrödinger equation of an extended Hamiltonian system where its "position" and "momentum" correspond to the middle point of two points of the original phase space and the difference between them, respectively. Then we show that various phase-space semiclassical propagators can be formulated just by applying existing semiclassical propagators to the extended system. As a result, a phase space version of the Van Vleck propagator, the initial-value Van Vleck propagator, the Herman-Kluk propagator, and the thawed Gaussian approximation are obtained. In addition, we numerically compare the initial-value phase-space Van Vleck propagator, the phase-space Herman-Kluk propagator, and classical mechanical propagation as approximation methods for the time propagation of the Wigner distribution function in terms of both accuracy and convergence speed. As a result, we find that the convergence speed of the Van Vleck propagator is far slower than the others, as in the Hilbert-space case, and that the Herman-Kluk propagator retains its accuracy over a long period compared with classical mechanical propagation, although the latter converges faster than the former.

  17. Manifold regularized discriminative nonnegative matrix factorization with fast gradient descent.

    PubMed

    Guan, Naiyang; Tao, Dacheng; Luo, Zhigang; Yuan, Bo

    2011-07-01

    Nonnegative matrix factorization (NMF) has become a popular data-representation method and has been widely used in image processing and pattern-recognition problems. This is because the learned bases can be interpreted as a natural parts-based representation of data, and this interpretation is consistent with the psychological intuition of combining parts to form a whole. For practical classification tasks, however, NMF ignores both the local geometry of data and the discriminative information of different classes. In addition, existing research results show that the learned basis is not necessarily parts-based, because there is neither an explicit nor an implicit constraint to ensure that the representation is parts-based. In this paper, we introduce manifold regularization and margin maximization to NMF and obtain the manifold regularized discriminative NMF (MD-NMF) to overcome the aforementioned problems. The multiplicative update rule (MUR) can be applied to optimizing MD-NMF, but it converges slowly. In this paper, we propose a fast gradient descent (FGD) to optimize MD-NMF. FGD contains a Newton method that searches for the optimal step length, and thus, FGD converges much faster than MUR. In addition, FGD includes MUR as a special case and can be applied to optimizing NMF and its variants. For a problem with 165 samples in R^1600, FGD converges in 28 s, while MUR requires 282 s. We also apply FGD in a variant of MD-NMF, and experimental results confirm its efficiency. Experimental results on several face image datasets suggest the effectiveness of MD-NMF.
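
    For plain Frobenius-norm NMF the multiplicative update rule is the Lee-Seung update; a minimal sketch (matrix sizes and rank invented for illustration) showing its monotone, but typically slow, decrease of the reconstruction error:

```python
import numpy as np

rng = np.random.default_rng(2)
V = rng.random((20, 15))               # nonnegative data matrix
r = 4                                  # factorization rank
W = rng.random((20, r)) + 0.1
H = rng.random((r, 15)) + 0.1

errs = []
for _ in range(100):
    # Lee-Seung multiplicative updates for min ||V - W H||_F^2;
    # each update keeps the factors nonnegative and never increases the error.
    H *= (W.T @ V) / (W.T @ W @ H + 1e-12)
    W *= (V @ H.T) / (W @ H @ H.T + 1e-12)
    errs.append(np.linalg.norm(V - W @ H))
print(errs[0], errs[-1])
```

    The slow tail of this monotone decrease is exactly what the paper's FGD step-length search is designed to shortcut.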

  18. Convergence of Defect-Correction and Multigrid Iterations for Inviscid Flows

    NASA Technical Reports Server (NTRS)

    Diskin, Boris; Thomas, James L.

    2011-01-01

    Convergence of multigrid and defect-correction iterations is comprehensively studied within different incompressible and compressible inviscid regimes on high-density grids. Good smoothing properties of the defect-correction relaxation have been shown using both a modified Fourier analysis and a more general idealized-coarse-grid analysis. Single-grid defect correction alone has some slowly converging iterations on grids of medium density. The convergence is especially slow for near-sonic flows and for very low compressible Mach numbers. Additionally, the fast asymptotic convergence seen on medium-density grids deteriorates on high-density grids, where certain downstream-boundary modes are very slowly damped. The multigrid scheme accelerates convergence of the slow defect-correction iterations to the extent determined by the coarse-grid correction. The two-level asymptotic convergence rates are stable and significantly below one in most of the regimes, but slow convergence is noted for near-sonic and very low-Mach compressible flows. The multigrid solver has been applied to the NACA 0012 airfoil and to different flow regimes, such as near-tangency and stagnation, and certain convergence difficulties were encountered within stagnation regions. Nonetheless, for the airfoil flow with a sharp trailing edge, residuals converged quickly for a subcritical flow on a sequence of grids. For supercritical flow, residuals converged more slowly on some intermediate grids than on the finest grid or the two coarsest grids.

  19. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gyrya, Vitaliy; Mourad, Hashem Mohamed

    We present a family of C1-continuous high-order Virtual Element Methods for the Poisson-Kirchhoff plate bending problem. The convergence of the methods is tested on a variety of meshes, including rectangular, quadrilateral, and meshes obtained by edge removal (i.e. highly irregular meshes). The convergence rates are presented for all of these tests.

  20. C–IBI: Targeting cumulative coordination within an iterative protocol to derive coarse-grained models of (multi-component) complex fluids

    DOE PAGES

    de Oliveira, Tiago E.; Netz, Paulo A.; Kremer, Kurt; ...

    2016-05-03

    We present a coarse-graining strategy that we test for aqueous mixtures. The method uses pair-wise cumulative coordination as a target function within an iterative Boltzmann inversion (IBI) like protocol. We name this method coordination iterative Boltzmann inversion (C–IBI). While the underlying coarse-grained model is still structure based and, thus, preserves pair-wise solution structure, our method also reproduces solvation thermodynamics of binary and/or ternary mixtures. In addition, we observe much faster convergence within C–IBI compared to IBI. To validate the robustness, we apply C–IBI to test cases of the solvation thermodynamics of aqueous urea and of triglycine in aqueous urea.
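
    The plain IBI step that C–IBI builds on updates the pair potential from the mismatch between the current and target radial distribution functions, V_{i+1}(r) = V_i(r) + kT ln(g_i(r)/g_target(r)). A minimal sketch of that single update (toy RDF values invented for illustration; C–IBI instead targets the cumulative coordination, an integral of the RDF):

```python
import numpy as np

kT = 1.0  # energy in units of k_B T

def ibi_update(V, g_current, g_target, eps=1e-12):
    """One iterative Boltzmann inversion step:
    V_{i+1}(r) = V_i(r) + kT * ln(g_i(r) / g_target(r)).
    Where the current RDF is too high, the potential is raised (more repulsive);
    where it is too low, the potential is deepened."""
    return V + kT * np.log((g_current + eps) / (g_target + eps))

r = np.linspace(0.5, 3.0, 6)
g_target = np.array([0.0, 0.8, 1.5, 1.1, 1.0, 1.0])
g_current = np.array([0.0, 1.2, 1.2, 1.0, 1.0, 1.0])
V0 = np.zeros_like(r)
V1 = ibi_update(V0, g_current, g_target)
print(V1.round(3))
```

    In a real workflow this update alternates with a coarse-grained simulation that produces the next g_current, iterating until the target structure (or, for C–IBI, the cumulative coordination) is matched.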

  1. Application of multi-grid methods for solving the Navier-Stokes equations

    NASA Technical Reports Server (NTRS)

    Demuren, A. O.

    1989-01-01

    The application of a class of multi-grid methods to the solution of the Navier-Stokes equations for two-dimensional laminar flow problems is discussed. The methods consist of combining the full approximation scheme-full multi-grid technique (FAS-FMG) with point-, line-, or plane-relaxation routines for solving the Navier-Stokes equations in primitive variables. The performance of the multi-grid methods is compared to that of several single-grid methods. The results show that much faster convergence can be procured through the use of the multi-grid approach than through the various suggestions for improving single-grid methods. The importance of the choice of relaxation scheme for the multi-grid method is illustrated.
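
    A minimal sketch of the multi-grid idea on a 1D Poisson problem: a two-grid cycle with a weighted-Jacobi smoother, compared against the same number of single-grid smoothing sweeps (all grid sizes and sweep counts invented for illustration; the FAS-FMG scheme above generalizes this to nonlinear equations and more levels):

```python
import numpy as np

N, h = 65, 1.0 / 64.0             # fine grid: 65 points on [0, 1], u = 0 at both ends
f = np.ones(N)                    # right-hand side of -u'' = f

def residual(u, f, h):
    r = np.zeros_like(u)
    r[1:-1] = f[1:-1] - (2.0 * u[1:-1] - u[:-2] - u[2:]) / h**2
    return r

def jacobi(u, f, h, sweeps, w=2.0 / 3.0):
    """Weighted-Jacobi smoother (simultaneous update)."""
    for _ in range(sweeps):
        un = u.copy()
        un[1:-1] = u[1:-1] + w * ((u[:-2] + u[2:] + h * h * f[1:-1]) / 2.0 - u[1:-1])
        u = un
    return u

# Exact solver on the coarse grid (33 points, spacing 2h, 31 interior points).
H, nc = 2.0 * h, 31
Ac = (2.0 * np.eye(nc) - np.eye(nc, k=1) - np.eye(nc, k=-1)) / H**2

def twogrid_cycle(u):
    u = jacobi(u, f, h, 3)                        # pre-smooth
    rc = residual(u, f, h)[::2]                   # restrict residual by injection
    ec = np.zeros(33)
    ec[1:-1] = np.linalg.solve(Ac, rc[1:-1])      # exact coarse-grid correction
    u = u + np.interp(np.arange(N), np.arange(0, N, 2), ec)  # linear prolongation
    return jacobi(u, f, h, 3)                     # post-smooth

u_mg = np.zeros(N)
for _ in range(40):
    u_mg = twogrid_cycle(u_mg)
u_sg = jacobi(np.zeros(N), f, h, 240)             # same number of fine-grid sweeps

r_mg = np.linalg.norm(residual(u_mg, f, h))
r_sg = np.linalg.norm(residual(u_sg, f, h))
print(r_mg, r_sg)
```

    With identical fine-grid smoothing work, the two-grid residual drops by many orders of magnitude while the single-grid residual barely moves: the smoother kills only high-frequency error, and the coarse grid removes the smooth remainder.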

  2. Controller design approach based on linear programming.

    PubMed

    Tanaka, Ryo; Shibasaki, Hiroki; Ogawa, Hiromitsu; Murakami, Takahiro; Ishida, Yoshihisa

    2013-11-01

    This study explains and demonstrates the design method for a control system with a load disturbance observer. Observer gains are determined by linear programming (LP) in terms of the Routh-Hurwitz stability criterion and the final-value theorem. In addition, the control model has a feedback structure, and feedback gains are determined to be the linear quadratic regulator. The simulation results confirmed that compared with the conventional method, the output estimated by our proposed method converges to a reference input faster when a load disturbance is added to a control system. In addition, we also confirmed the effectiveness of the proposed method by performing an experiment with a DC motor.

  3. High-Order Energy Stable WENO Schemes

    NASA Technical Reports Server (NTRS)

    Yamaleev, Nail K.; Carpenter, Mark H.

    2009-01-01

    A third-order Energy Stable Weighted Essentially Non-Oscillatory (ESWENO) finite difference scheme developed by Yamaleev and Carpenter was proven to be stable in the energy norm for both continuous and discontinuous solutions of systems of linear hyperbolic equations. Herein, a systematic approach is presented that enables 'energy stable' modifications for existing WENO schemes of any order. The technique is demonstrated by developing a one-parameter family of fifth-order upwind-biased ESWENO schemes; ESWENO schemes up to eighth order are presented in the appendix. New weight functions are also developed that provide (1) formal consistency, (2) much faster convergence for smooth solutions with an arbitrary number of vanishing derivatives, and (3) improved resolution near strong discontinuities.
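    For context on what the "weight functions" are: a third-order WENO reconstruction blends two candidate stencils with nonlinear weights built from smoothness indicators. The sketch below uses the classical Jiang-Shu weight form for illustration; ESWENO-type schemes modify this weight function (for energy stability and faster convergence on smooth solutions), so this is a hedged stand-in, not the paper's scheme:

```python
def weno3_reconstruct(um1, u0, up1, eps=1e-6):
    """Classical third-order WENO value at the i+1/2 interface from
    cell averages u_{i-1}, u_i, u_{i+1} (left-biased reconstruction)."""
    q0 = -0.5 * um1 + 1.5 * u0      # candidate from stencil {i-1, i}
    q1 = 0.5 * u0 + 0.5 * up1       # candidate from stencil {i, i+1}
    b0 = (u0 - um1) ** 2            # smoothness indicators
    b1 = (up1 - u0) ** 2
    # ideal weights 1/3, 2/3 recover third order in smooth regions
    a0 = (1.0 / 3.0) / (eps + b0) ** 2
    a1 = (2.0 / 3.0) / (eps + b1) ** 2
    w0, w1 = a0 / (a0 + a1), a1 / (a0 + a1)
    return w0 * q0 + w1 * q1

smooth = weno3_reconstruct(-1.0, 0.0, 1.0)  # linear data: both stencils agree
shock = weno3_reconstruct(0.0, 0.0, 1.0)    # jump: weight shifts to smooth side
```

On smooth (here linear) data both candidates give the exact interface value; across a jump the weights collapse onto the smooth stencil, which is the non-oscillatory mechanism the new weight functions refine.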

  4. Application of multi-grid methods for solving the Navier-Stokes equations

    NASA Technical Reports Server (NTRS)

    Demuren, A. O.

    1989-01-01

    This paper presents the application of a class of multi-grid methods to the solution of the Navier-Stokes equations for two-dimensional laminar flow problems. The methods consist of combining the full approximation scheme-full multi-grid technique (FAS-FMG) with point-, line- or plane-relaxation routines for solving the Navier-Stokes equations in primitive variables. The performance of the multi-grid methods is compared to that of several single-grid methods. The results show that much faster convergence can be achieved through the use of the multi-grid approach than through the various suggestions for improving single-grid methods. The importance of the choice of relaxation scheme for the multi-grid method is illustrated.

  5. A brief history of the introduction of generalized ensembles to Markov chain Monte Carlo simulations

    NASA Astrophysics Data System (ADS)

    Berg, Bernd A.

    2017-03-01

    The most efficient weights for Markov chain Monte Carlo calculations of physical observables are not necessarily those of the canonical ensemble. Generalized ensembles, which do not exist in nature but can be simulated on computers, lead often to a much faster convergence. In particular, they have been used for simulations of first order phase transitions and for simulations of complex systems in which conflicting constraints lead to a rugged free energy landscape. Starting off with the Metropolis algorithm and Hastings' extension, I present a minireview which focuses on the explosive use of generalized ensembles in the early 1990s. Illustrations are given, which range from spin models to peptides.
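    The review's starting point, the Metropolis algorithm, accepts a proposed move with probability min(1, exp(-beta*dE)); generalized ensembles replace the canonical weight exp(-beta*E) with a different weight function. A minimal sketch with a toy harmonic energy (all names and parameters are illustrative):

```python
import math
import random

def metropolis(energy, x0, proposal, steps, beta=1.0, rng=None):
    """Metropolis sampling: accept x' with prob min(1, exp(-beta*(E(x')-E(x)))).
    Generalized ensembles swap exp(-beta*E) for another weight function."""
    rng = rng or random.Random(0)
    x = x0
    e = energy(x)
    samples = []
    for _ in range(steps):
        xp = proposal(x, rng)
        ep = energy(xp)
        if ep <= e or rng.random() < math.exp(-beta * (ep - e)):
            x, e = xp, ep           # accept; otherwise keep the old state
        samples.append(x)
    return samples

# Harmonic energy E(x) = x^2/2 gives stationary density ~ exp(-x^2/2),
# i.e., mean 0 and variance 1 in the canonical ensemble.
samples = metropolis(lambda x: 0.5 * x * x, 0.0,
                     lambda x, rng: x + rng.uniform(-1.0, 1.0), 200000)
mean = sum(samples) / len(samples)
var = sum(s * s for s in samples) / len(samples)
```

The long-run sample moments approach those of the target ensemble, which is the property generalized-ensemble weights exploit to reach difficult regions of a rugged free energy landscape faster.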

  6. A self-taught artificial agent for multi-physics computational model personalization.

    PubMed

    Neumann, Dominik; Mansi, Tommaso; Itu, Lucian; Georgescu, Bogdan; Kayvanpour, Elham; Sedaghat-Hamedani, Farbod; Amr, Ali; Haas, Jan; Katus, Hugo; Meder, Benjamin; Steidl, Stefan; Hornegger, Joachim; Comaniciu, Dorin

    2016-12-01

    Personalization is the process of fitting a model to patient data, a critical step towards application of multi-physics computational models in clinical practice. Designing robust personalization algorithms is often a tedious, time-consuming, model- and data-specific process. We propose to use artificial intelligence concepts to learn this task, inspired by how human experts manually perform it. The problem is reformulated in terms of reinforcement learning. In an off-line phase, Vito, our self-taught artificial agent, learns a representative decision process model through exploration of the computational model: it learns how the model behaves under change of parameters. The agent then automatically learns an optimal strategy for on-line personalization. The algorithm is model-independent; applying it to a new model requires only adjusting a few hyper-parameters of the agent and defining the observations to match. Full knowledge of the model itself is not required. Vito was tested in a synthetic scenario, showing that it could learn how to optimize cost functions generically. Then Vito was applied to the inverse problem of cardiac electrophysiology and the personalization of a whole-body circulation model. The obtained results suggested that Vito could achieve equivalent, if not better, goodness of fit than standard methods, while being more robust (up to 11% higher success rates) and converging faster (up to seven times). Our artificial intelligence approach could thus make personalization algorithms generalizable and self-adaptable to any patient and any model.

  7. Inter-rater agreement of comorbid DSM-IV personality disorders in substance abusers.

    PubMed

    Hesse, Morten; Thylstrup, Birgitte

    2008-05-17

    Little is known about the inter-rater agreement of personality disorders in clinical settings. Clinicians rated 75 patients with substance use disorders on the DSM-IV criteria of personality disorders in random order, and on rating scales representing the severity of each. Convergent validity agreement was moderate (r = 0.55 to 0.67) for cluster B disorders rated with DSM-IV criteria, and discriminant validity was moderate for eight of the ten personality disorders. Convergent validity of the rating scales was only moderate for antisocial and narcissistic personality disorder. Dimensional ratings may be used in research studies and clinical practice with some caution, and may be collected as one of several sources of information to describe the personality of a patient.

  8. The Pace of Cultural Evolution

    PubMed Central

    Perreault, Charles

    2012-01-01

    Today, humans inhabit most of the world’s terrestrial habitats. This observation has been explained by the fact that we possess a secondary inheritance mechanism, culture, in addition to a genetic system. Because it is assumed that cultural evolution occurs faster than biological evolution, humans can adapt to new ecosystems more rapidly than other animals. This assumption, however, has never been tested empirically. Here, I compare rates of change in human technologies to rates of change in animal morphologies. I find that rates of cultural evolution are inversely correlated with the time interval over which they are measured, which is similar to what is known for biological rates. This correlation explains why the pace of cultural evolution appears faster when measured over recent time periods, where time intervals are often shorter. Controlling for the correlation between rates and time intervals, I show that (1) cultural evolution is faster than biological evolution; (2) this effect holds true even when the generation time of species is controlled for; and (3) culture allows us to evolve over short time scales, which are normally accessible only to short-lived species, while at the same time allowing us to enjoy the benefits of having a long life history. PMID:23024804

  9. Multi-element microelectropolishing method

    DOEpatents

    Lee, P.J.

    1994-10-11

    A method is provided for microelectropolishing a transmission electron microscopy nonhomogeneous multi-element compound foil. The foil is electrolyzed at different polishing rates for different elements by rapidly cycling between different current densities. During a first portion of each cycle at a first voltage a first element electrolyzes at a higher current density than a second element such that the material of the first element leaves the anode foil at a faster rate than the second element and creates a solid surface film, and such that the solid surface film is removed at a faster rate than the first element leaves the anode foil. During a second portion of each cycle at a second voltage the second element electrolyzes at a higher current density than the first element, and the material of the second element leaves the anode foil at a faster rate than the first element and creates a solid surface film, and the solid surface film is removed at a slower rate than the second element leaves the foil. The solid surface film is built up during the second portion of the cycle, and removed during the first portion of the cycle. 10 figs.

  10. Bivariate tensor product [Formula: see text]-analogue of Kantorovich-type Bernstein-Stancu-Schurer operators.

    PubMed

    Cai, Qing-Bo; Xu, Xiao-Wei; Zhou, Guorong

    2017-01-01

    In this paper, we construct a bivariate tensor product generalization of Kantorovich-type Bernstein-Stancu-Schurer operators based on the concept of [Formula: see text]-integers. We obtain moments and central moments of these operators, give the rate of convergence by using the complete modulus of continuity for the bivariate case and estimate a convergence theorem for the Lipschitz continuous functions. We also give some graphs and numerical examples to illustrate the convergence properties of these operators to certain functions.

  11. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Larsen, E.W.

    A class of Projected Discrete-Ordinates (PDO) methods is described for obtaining iterative solutions of discrete-ordinates problems with convergence rates comparable to those observed using Diffusion Synthetic Acceleration (DSA). The spatially discretized PDO solutions are generally not equal to the DSA solutions, but unlike DSA, which requires great care in the use of spatial discretizations to preserve stability, the PDO solutions remain stable and rapidly convergent with essentially arbitrary spatial discretizations. Numerical results are presented which illustrate the rapid convergence and the accuracy of solutions obtained using PDO methods with commonplace differencing methods.

  12. Convergence of strain energy release rate components for edge-delaminated composite laminates

    NASA Technical Reports Server (NTRS)

    Raju, I. S.; Crews, J. H., Jr.; Aminpour, M. A.

    1987-01-01

    Strain energy release rates for edge-delaminated composite laminates were obtained using a quasi-three-dimensional finite element analysis. The problem of edge delamination at the -35/90 interfaces of an 8-ply composite laminate subjected to uniform axial strain was studied. The individual components of the strain energy release rates did not show convergence as the delamination tip elements were made smaller. In contrast, the total strain energy release rate converged, remained unchanged as the delamination tip elements were made smaller, and agreed with that calculated using classical laminated plate theory. Studies of the near-field solutions for a delamination at an interface between two dissimilar isotropic or orthotropic plates showed that the imaginary part of the singularity is the cause of the nonconvergent behavior of the individual components. To evaluate the accuracy of the results, an 8-ply laminate with the delamination modeled in a thin resin layer, which exists between the -35 and 90 plies, was analyzed. Because the delamination then lies in a homogeneous isotropic material, the oscillatory component of the singularity vanishes.

  13. Higher rate alternative non-drug reinforcement produces faster suppression of cocaine seeking but more resurgence when removed.

    PubMed

    Craig, Andrew R; Nall, Rusty W; Madden, Gregory J; Shahan, Timothy A

    2016-06-01

    Relapse following removal of an alternative source of reinforcement introduced during extinction of a target behavior is called resurgence. This form of relapse may be related to relapse of drug taking following loss of alternative non-drug reinforcement in human populations. Laboratory investigations of factors mediating resurgence with food-maintained behavior suggest higher rates of alternative reinforcement produce faster suppression of target behavior but paradoxically generate more relapse when alternative reinforcement is discontinued. At present, it is unknown if a similar effect occurs when target behavior is maintained by drug reinforcement and the alternative is a non-drug reinforcer. In the present experiment three groups of rats were trained to lever press for infusions of cocaine during baseline. Next, during treatment, cocaine reinforcement was suspended and an alternative response was reinforced with either high-rate, low-rate, or no alternative food reinforcement. Finally, all reinforcement was suspended to test for relapse of cocaine seeking. Higher rate alternative reinforcement produced faster elimination of cocaine seeking than lower rates or extinction alone, but when treatment was suspended resurgence of cocaine seeking occurred following only high-rate alternative reinforcement. Thus, although higher rate alternative reinforcement appears to more effectively suppress drug seeking, should it become unavailable, it can have the unfortunate effect of increasing relapse.

  14. Shanghai: a study on the spatial growth of population and economy in a Chinese metropolitan area.

    PubMed

    Zhu, J

    1995-01-01

    In this study of the growth in population and industry in Shanghai, China, between the 1982 and 1990 censuses, data on administrative divisions were normalized through digitization and spatial analysis. Analysis focused on spatial units, intensity of growth, time period, distance, rate of growth, and direction of spatial growth. The trisection method divided the city into city proper, outskirts, and suburbs. The distance function method treated distance from the city center as a function: exponential, power, trigonometric, logarithmic, and polynomial. Population growth and employment in all sectors increased in the outskirts and suburbs and decreased in the city proper, except in the tertiary sector. Primary sector employment decreased in all three sections. Secondary sector employment increased faster in the outskirts and suburbs than total population and employment. In the city proper, secondary sector employment decreased faster than total population and employment. The tertiary sector had the highest rate of growth in all sections, and its employment grew faster than that of the secondary sector. Tertiary growth was highest in real estate, finance, and insurance. Industrial growth in the secondary sector was 160.2% in the suburbs, 156.6% in the outskirts, and 80.9% in the city. In the distance function analysis, industry expanded further out than the entire secondary sector. Commerce grew fastest in areas 15.4 km from the city center. Economic growth was faster after the economic reforms of 1978. Growth was led by industry, followed by the secondary sector, the tertiary sector, and population. Industrial expansion resulted from inner pressure, political factors controlling size, the social and economic system, and the housing construction and distribution system. Initially, sociopsychological factors affected urban concentration.

  15. Reynolds and Prandtl number scaling of viscous heating in isotropic turbulence

    NASA Astrophysics Data System (ADS)

    Pushkarev, Andrey; Balarac, Guillaume; Bos, Wouter J. T.

    2017-08-01

    Viscous heating is investigated using high-resolution direct numerical simulations. Scaling relations are derived and verified for different values of the Reynolds and Prandtl numbers. The scaling of the heat fluctuations is shown to depend on Lagrangian correlation times and on the scaling of dissipation-rate fluctuations. The convergence of the temperature spectrum to asymptotic scaling is observed to be slow, due to the broadband character of the temperature production spectrum and the slow convergence of the dissipation-rate spectrum to its asymptotic form.

  16. An error analysis of least-squares finite element method of velocity-pressure-vorticity formulation for Stokes problem

    NASA Technical Reports Server (NTRS)

    Chang, Ching L.; Jiang, Bo-Nan

    1990-01-01

    A theoretical proof of the optimal rate of convergence for the least-squares method is developed for the Stokes problem based on the velocity-pressure-vorticity formulation. The 2D Stokes problem is analyzed to define the product space and its inner product, and a priori estimates are derived for the finite-element approximation. The least-squares method is found to converge at the optimal rate for equal-order interpolation.

  17. Faster Self-paced Rate of Drinking for Alcohol Mixed with Energy Drinks versus Alcohol Alone

    PubMed Central

    Marczinski, Cecile A.; Fillmore, Mark T.; Maloney, Sarah F.; Stamates, Amy L.

    2016-01-01

    The consumption of alcohol mixed with energy drinks (AmED) has been associated with higher rates of binge drinking and impaired driving when compared to alcohol alone. However, it remains unclear why the risks of AmED use are heightened compared to alcohol alone even when the doses of alcohol consumed are similar. Therefore, the purpose of this laboratory study was to investigate whether the rate of self-paced beverage consumption was faster for a dose of AmED versus alcohol alone using a double-blind, within-subjects, placebo-controlled study design. Participants (n = 16, equally split by gender) who were social drinkers attended 4 separate test sessions that involved consumption of alcohol (1.97 ml/kg vodka) and energy drinks, alone and in combination. On each test day, the assigned dose was divided into 10 cups, and participants were informed that they would have a two-hour period to consume the 10 drinks. After the self-paced drinking period, participants completed a cued go/no-go reaction time task and subjective ratings of stimulation and sedation. The results indicated that participants consumed the AmED dose significantly faster (by approximately 16 minutes) than the alcohol dose. For the performance task, participants’ mean reaction times were slower in the alcohol conditions and faster in the energy drink conditions. In conclusion, alcohol consumers should be made aware that rapid drinking might occur with AmED beverages, thus heightening alcohol-related safety risks. The fast rate of drinking may be related to the generalized speeding of responses following energy drink consumption. PMID:27819431

  18. Intrasubject Predictions of Vocational Preference: Convergent Validation via the Decision Theoretic Paradigm.

    ERIC Educational Resources Information Center

    Monahan, Carlyn J.; Muchinsky, Paul M.

    1985-01-01

    The degree of convergent validity among four methods of identifying vocational preferences is assessed via the decision theoretic paradigm. Vocational preferences identified by Holland's Vocational Preference Inventory (VPI), a rating procedure, and ranking were compared with preferences identified from a policy-capturing model developed from an…

  19. The Early Development Instrument: An Examination of Convergent and Discriminant Validity

    ERIC Educational Resources Information Center

    Hymel, Shelley; LeMare, Lucy; McKee, William

    2011-01-01

    The convergent and discriminant validity of the Early Development Instrument (EDI), a teacher-rated assessment of children's "school readiness", was investigated in a multicultural sample of 267 kindergarteners (53% male). Teachers' evaluations on the EDI, both overall and in five domains (physical health/well-being, social competence,…

  20. Effects of music tempo upon submaximal cycling performance.

    PubMed

    Waterhouse, J; Hudson, P; Edwards, B

    2010-08-01

    In an in vivo laboratory controlled study, 12 healthy male students cycled at self-chosen work-rates while listening to a program of six popular music tracks of different tempi. The program lasted about 25 min and was performed on three occasions--unknown to the participants, its tempo was normal, increased by 10% or decreased by 10%. Work done, distance covered and cadence were measured at the end of each track, as were heart rate and subjective measures of exertion, thermal comfort and how much the music was liked. Speeding up the music program increased distance covered/unit time, power and pedal cadence by 2.1%, 3.5% and 0.7%, respectively; slowing the program produced falls of 3.8%, 9.8% and 5.9%. Average heart rate changes were +0.1% (faster program) and -2.2% (slower program). Perceived exertion and how much the music was liked increased (faster program) by 2.4% and 1.3%, respectively, and decreased (slower program) by 3.6% and 35.4%. That is, healthy individuals performing submaximal exercise not only worked harder with faster music but also chose to do so and enjoyed the music more when it was played at a faster tempo. Implications of these findings for improving training regimens are discussed.

  1. Sparsistency and Rates of Convergence in Large Covariance Matrix Estimation.

    PubMed

    Lam, Clifford; Fan, Jianqing

    2009-01-01

    This paper studies the sparsistency and rates of convergence for estimating sparse covariance and precision matrices based on penalized likelihood with nonconvex penalty functions. Here, sparsistency refers to the property that all parameters that are zero are actually estimated as zero with probability tending to one. Depending on the application, sparsity may occur a priori on the covariance matrix, its inverse, or its Cholesky decomposition. We study these three sparsity exploration problems under a unified framework with a general penalty function. We show that the rates of convergence for these problems under the Frobenius norm are of order (s(n) log p(n)/n)^(1/2), where s(n) is the number of nonzero elements, p(n) is the size of the covariance matrix, and n is the sample size. This explicitly spells out that the contribution of high dimensionality is merely a logarithmic factor. The conditions on the rate with which the tuning parameter λ(n) goes to 0 have been made explicit and compared under different penalties. As a result, for the L(1)-penalty, to guarantee the sparsistency and optimal rate of convergence, the number of nonzero elements should be small: s'(n) = O(p(n)) at most, among O(p(n)^2) parameters, for estimating a sparse covariance or correlation matrix, sparse precision or inverse correlation matrix, or sparse Cholesky factor, where s'(n) is the number of nonzero off-diagonal elements. On the other hand, with the SCAD or hard-thresholding penalty functions, there is no such restriction.
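    The simplest relatives of these estimators are element-wise soft and hard thresholding of the off-diagonal sample covariance entries; the sketch below is that stand-in (with hypothetical names and a toy matrix), not the authors' penalized-likelihood estimator:

```python
def threshold_covariance(S, lam, kind="soft"):
    """Threshold off-diagonal entries of a sample covariance matrix S.
    soft: sign(s) * max(|s| - lam, 0)   (analogue of the L1 penalty)
    hard: keep s only if |s| > lam      (analogue of hard-thresholding)"""
    p = len(S)
    out = [row[:] for row in S]
    for i in range(p):
        for j in range(p):
            if i == j:
                continue            # diagonal (variances) left unpenalized
            s = S[i][j]
            if kind == "hard":
                out[i][j] = s if abs(s) > lam else 0.0
            else:
                out[i][j] = (1.0 if s > 0 else -1.0) * max(abs(s) - lam, 0.0)
    return out

S = [[1.0, 0.3, 0.05],
     [0.3, 1.0, 0.02],
     [0.05, 0.02, 1.0]]
T_soft = threshold_covariance(S, lam=0.1)
T_hard = threshold_covariance(S, lam=0.1, kind="hard")
```

Soft thresholding shrinks surviving entries (0.3 becomes 0.2), while hard thresholding keeps them unchanged; both set small entries to zero, which is the sparsistency behavior the rates in the abstract quantify.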

  2. RATES OF SOLVOLYSIS OF SOME DEUTERATED 2-PHENYLETHYL p-TOLUENESULFONATES

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Saunders, W.H. Jr.; Asperger, S.; Edison, D.H.

    1958-05-20

    Rates of solvolysis of 2-phenylethyl p-toluenesulfonate (Ia) and two deuterated analogues (Ib and Ic) were determined in formic and in acetic acid. In formolysis Ia and Ic react at the same rate, but Ia reacts 17 plus or minus 2% faster than Ib. In acetolysis small effects are observed with both deuterated compounds: Ia is 3 plus or minus 1% faster than Ib and 4 plus or minus 3% faster than Ic. The formates and acetates produced in the solvolyses were converted to the corresponding 2-phenylethanols (II). Comparison of the infrared spectra of the products with those of synthetic mixtures of IIb and IIc revealed that ca. 45% phenyl migration had occurred in formolysis and ca. 10% in acetolysis. These results suggest that phenyl participation predominates in formolysis, but is unimportant in acetolysis. The nature of the transition state in phenyl-participation reactions and the factors contributing to secondary deuterium isotope effects are discussed. (auth)

  3. Use of Picard and Newton iteration for solving nonlinear ground water flow equations

    USGS Publications Warehouse

    Mehl, S.

    2006-01-01

    This study examines the use of Picard and Newton iteration to solve the nonlinear, saturated ground water flow equation. Here, a simple three-node problem is used to demonstrate the convergence difficulties that can arise when solving the nonlinear, saturated ground water flow equation in both homogeneous and heterogeneous systems with and without nonlinear boundary conditions. For these cases, the characteristic types of convergence patterns are examined. Viewing these convergence patterns as orbits of an attractor in a dynamical system provides further insight. It is shown that the nonlinearity that arises from nonlinear head-dependent boundary conditions can cause more convergence difficulties than the nonlinearity that arises from flow in an unconfined aquifer. Furthermore, the effects of damping on both convergence and convergence rate are investigated. It is shown that no single strategy is effective for all problems and how understanding pitfalls and merits of several methods can be helpful in overcoming convergence difficulties. Results show that Picard iterations can be a simple and effective method for the solution of nonlinear, saturated ground water flow problems.
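    The contrast between the two iterations can be sketched on a scalar fixed-point problem: damped Picard converges linearly when the map is contractive, while Newton converges quadratically near the root. An illustrative sketch (the paper's ground water systems are of course multidimensional, and all names here are my own):

```python
import math

def picard(g, x0, theta=1.0, tol=1e-10, maxit=500):
    """Damped Picard (fixed-point) iteration: x <- (1-theta)*x + theta*g(x).
    theta < 1 is the damping strategy discussed in the abstract."""
    x = x0
    for k in range(1, maxit + 1):
        xn = (1 - theta) * x + theta * g(x)
        if abs(xn - x) < tol:
            return xn, k
        x = xn
    return x, maxit

def newton(f, fprime, x0, tol=1e-10, maxit=100):
    """Newton iteration on the residual form f(x) = 0."""
    x = x0
    for k in range(1, maxit + 1):
        step = f(x) / fprime(x)
        x -= step
        if abs(step) < tol:
            return x, k
    return x, maxit

# Toy problem x = cos(x), i.e., residual f(x) = x - cos(x).
xp, p_it = picard(math.cos, 1.0)
xn, n_it = newton(lambda x: x - math.cos(x),
                  lambda x: 1 + math.sin(x), 1.0)
```

Both reach the same root, but Newton does so in far fewer iterations; as the abstract notes, the trade-off in practice is that Picard is simpler and often more robust far from the solution.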

  4. Cosmic Reionization On Computers: Numerical and Physical Convergence

    DOE PAGES

    Gnedin, Nickolay Y.

    2016-04-01

    In this paper I show that simulations of reionization performed under the Cosmic Reionization On Computers (CROC) project do converge in space and mass, albeit rather slowly. A fully converged solution (for a given star formation and feedback model) can be determined at a level of precision of about 20%, but such a solution is useless in practice, since achieving it in production-grade simulations would require a large set of runs at various mass and spatial resolutions, and computational resources for such an undertaking are not yet readily available. In order to make progress in the interim, I introduce a weak convergence correction factor in the star formation recipe, which allows one to approximate the fully converged solution with finite resolution simulations. The accuracy of weakly converged simulations approaches a comparable, ~20% level of precision for star formation histories of individual galactic halos and other galactic properties that are directly related to star formation rates, like stellar masses and metallicities. Yet other properties of model galaxies, for example, their HI masses, are recovered in the weakly converged runs only within a factor of two.

  5. Cosmic Reionization On Computers: Numerical and Physical Convergence

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gnedin, Nickolay Y.

    In this paper I show that simulations of reionization performed under the Cosmic Reionization On Computers (CROC) project do converge in space and mass, albeit rather slowly. A fully converged solution (for a given star formation and feedback model) can be determined at a level of precision of about 20%, but such a solution is useless in practice, since achieving it in production-grade simulations would require a large set of runs at various mass and spatial resolutions, and computational resources for such an undertaking are not yet readily available. In order to make progress in the interim, I introduce a weak convergence correction factor in the star formation recipe, which allows one to approximate the fully converged solution with finite resolution simulations. The accuracy of weakly converged simulations approaches a comparable, ~20% level of precision for star formation histories of individual galactic halos and other galactic properties that are directly related to star formation rates, like stellar masses and metallicities. Yet other properties of model galaxies, for example, their HI masses, are recovered in the weakly converged runs only within a factor of two.

  6. Probing the water distribution in porous model sands with two immiscible fluids: A nuclear magnetic resonance micro-imaging study

    NASA Astrophysics Data System (ADS)

    Lee, Bum Han; Lee, Sung Keun

    2017-10-01

    The effect of the structural heterogeneity of porous networks on the water distribution in porous media, initially saturated with immiscible fluid followed by increasing durations of water injection, remains one of the important problems in hydrology. Relationships among convergence rates (i.e., the rate of fluid saturation with varying injection time) and the macroscopic properties and structural parameters of porous media have been anticipated. Here, we used nuclear magnetic resonance (NMR) micro-imaging to obtain images (down to ∼50 μm resolution) of the distribution of water injected for varying durations into porous networks that were initially saturated with silicone oil. We then established the relationships among the convergence rates, structural parameters, and transport properties of porous networks. The volume fraction of the water phase increases as the water injection duration increases. The 3D images of the water distributions for silica gel samples are similar to those of the glass bead samples. The changes in water saturation (and the accompanying removal of silicone oil) and the variations in the volume fraction, specific surface area, and cube-counting fractal dimension of the water phase fit well with the single-exponential recovery function f(t) = a[1 - exp(-λt)]. The asymptotic values a (i.e., saturated values) of the volume fraction, specific surface area, and cube-counting fractal dimension for the glass bead samples were greater than those for the silica gel samples, primarily because of the intrinsic differences in the porous networks and the local distribution of pore size and connectivity. The convergence rates of all of the properties are inversely proportional to the entropy length and permeability.
    Despite limitations of the current study, such as insufficient resolution and uncertainty in the estimated parameters due to the sparsely sampled short injection times, the observed trends highlight the first analyses of the cube-counting fractal dimension (and other structural properties) and convergence rates in porous networks consisting of two fluid components. These results indicate that the convergence rates correlate with the geometric factors that characterize the porous networks and with their transport properties.
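    Fitting the single-exponential recovery function f(t) = a[1 - exp(-λt)] can be sketched with a scan over candidate rates, since for a fixed λ the amplitude a has a closed-form least-squares solution. This is an illustrative sketch with synthetic data (names, grid, and values are my own, not the study's fitting procedure):

```python
import math

def fit_recovery(ts, ys, lams):
    """Least-squares fit of y = a*(1 - exp(-lam*t)) over candidate rates.
    For fixed lam, the optimal amplitude is a = sum(y*b) / sum(b*b),
    where b(t) = 1 - exp(-lam*t). Returns (a, lam, sse) with minimal SSE."""
    best = None
    for lam in lams:
        b = [1 - math.exp(-lam * t) for t in ts]
        a = sum(y * bi for y, bi in zip(ys, b)) / sum(bi * bi for bi in b)
        sse = sum((y - a * bi) ** 2 for y, bi in zip(ys, b))
        if best is None or sse < best[2]:
            best = (a, lam, sse)
    return best

# Synthetic saturation data generated with a = 0.8, lam = 0.5.
ts = [0.5 * i for i in range(1, 21)]
ys = [0.8 * (1 - math.exp(-0.5 * t)) for t in ts]
a_hat, lam_hat, sse = fit_recovery(ts, ys, [0.1 * k for k in range(1, 31)])
```

The recovered rate λ plays the role of the convergence rate in the abstract: larger λ means the property saturates after shorter injection times.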

  7. Present-day uplift of the western Alps.

    PubMed

    Nocquet, J-M; Sue, C; Walpersdorf, A; Tran, T; Lenôtre, N; Vernant, P; Cushing, M; Jouanne, F; Masson, F; Baize, S; Chéry, J; van der Beek, P A

    2016-06-27

    Collisional mountain belts grow as a consequence of continental plate convergence and eventually disappear under the combined effects of gravitational collapse and erosion. Using a decade of GPS data, we show that the western Alps are currently characterized by zero horizontal velocity boundary conditions, offering the opportunity to investigate orogen evolution at the time of cessation of plate convergence. We find no significant horizontal motion within the belt, but GPS and levelling measurements independently show a regional pattern of uplift reaching ~2.5 mm/yr in the northwestern Alps. Unless a low viscosity crustal root under the northwestern Alps locally enhances the vertical response to surface unloading, the summed effects of isostatic responses to erosion and glaciation explain at most 60% of the observed uplift rates. Rock-uplift rates corrected from transient glacial isostatic adjustment contributions likely exceed erosion rates in the northwestern Alps. In the absence of active convergence, the observed surface uplift must result from deep-seated processes.

  8. An automatic multigrid method for the solution of sparse linear systems

    NASA Technical Reports Server (NTRS)

    Shapira, Yair; Israeli, Moshe; Sidi, Avram

    1993-01-01

An automatic version of the multigrid method for the solution of linear systems arising from the discretization of elliptic PDE's is presented. This version is based solely on the structure of the algebraic system, and does not use the original partial differential operator. Numerical experiments show that for the Poisson equation the rate of convergence of our method is equal to that of classical multigrid methods. Moreover, the method is robust in the sense that its high rate of convergence is conserved for other classes of problems: non-symmetric, hyperbolic (even with closed characteristics), and problems on non-uniform grids. No double discretization or special treatment of sub-domains (e.g. boundaries) is needed. When supplemented with a vector extrapolation method, high rates of convergence are also achieved for anisotropic and discontinuous problems and for indefinite Helmholtz equations. A new double discretization strategy is proposed for finite and spectral element schemes and is found to perform better than known strategies.
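The grid-size-independent convergence that multigrid achieves for the Poisson equation can be seen in a textbook two-grid cycle for the 1-D problem. This is a geometric sketch, not the purely algebraic (black-box) method of the abstract; all parameter choices are illustrative:

```python
import numpy as np

def poisson_matrix(n):
    """Standard 3-point finite-difference Laplacian on n interior points of (0,1)."""
    h = 1.0 / (n + 1)
    return (2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)) / h**2

def weighted_jacobi(A, b, x, sweeps=3, omega=2.0 / 3.0):
    d = np.diag(A)
    for _ in range(sweeps):
        x = x + omega * (b - A @ x) / d
    return x

def two_grid_cycle(A, b, x):
    n = A.shape[0]
    nc = (n - 1) // 2                                  # coarse grid size
    x = weighted_jacobi(A, b, x)                       # pre-smoothing
    r = b - A @ x                                      # fine-grid residual
    rc = 0.25 * r[:-2:2] + 0.5 * r[1:-1:2] + 0.25 * r[2::2]   # full weighting
    ec = np.linalg.solve(poisson_matrix(nc), rc)       # exact coarse solve
    e = np.zeros(n)                                    # linear interpolation back
    e[1::2] = ec
    e[2:-1:2] = 0.5 * (ec[:-1] + ec[1:])
    e[0], e[-1] = 0.5 * ec[0], 0.5 * ec[-1]
    return weighted_jacobi(A, b, x + e)                # post-smoothing

n = 63                        # 2**6 - 1 interior points
A = poisson_matrix(n)
b = np.ones(n)
x = np.zeros(n)
for _ in range(8):
    x = two_grid_cycle(A, b, x)
res = np.linalg.norm(b - A @ x) / np.linalg.norm(b)
print(f"relative residual after 8 cycles: {res:.2e}")
```

The residual drops by roughly an order of magnitude per cycle regardless of n, which is the hallmark the abstract's experiments confirm for the algebraic variant.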

  9. Topics in global convergence of density estimates

    NASA Technical Reports Server (NTRS)

    Devroye, L.

    1982-01-01

The problem of estimating a density f on R^d from a sample X(1),...,X(n) of independent identically distributed random vectors is critically examined, and some recent results in the field are reviewed. The following statements are qualified: (1) For any sequence of density estimates f(n), an arbitrarily slow rate of convergence to 0 is possible for E(∫|f(n)-f|); (2) In theoretical comparisons of density estimates, ∫|f(n)-f| should be used and not ∫|f(n)-f|^p, p > 1; and (3) For most reasonable nonparametric density estimates, either there is convergence of ∫|f(n)-f| (and then the convergence is in the strongest possible sense for all f), or there is no convergence (even in the weakest possible sense for a single f). There is no intermediate situation.
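The L1 error criterion of statement (2), ∫|f(n)-f|, is easy to evaluate numerically for a kernel estimate. A sketch with a standard normal target; the sample sizes and evaluation grid are arbitrary illustrative choices:

```python
import numpy as np
from scipy.stats import gaussian_kde, norm

# L1 distance between a Gaussian kernel density estimate and the true density,
# approximated by a Riemann sum on a fine grid.
rng = np.random.default_rng(1)
grid = np.linspace(-6.0, 6.0, 2001)
dx = grid[1] - grid[0]

def l1_error(n_samples):
    f_n = gaussian_kde(rng.normal(size=n_samples))
    return np.sum(np.abs(f_n(grid) - norm.pdf(grid))) * dx

err_small, err_large = l1_error(100), l1_error(5000)
print(f"L1 error: n=100 -> {err_small:.3f}, n=5000 -> {err_large:.3f}")
```

The error shrinks with sample size, but Devroye's point (1) is that no uniform rate over all densities can be guaranteed.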

  10. Muscular Oxygen Uptake Kinetics in Aged Adults.

    PubMed

    Koschate, J; Drescher, U; Baum, K; Eichberg, S; Schiffer, T; Latsch, J; Brixius, K; Hoffmann, U

    2016-06-01

Pulmonary oxygen uptake (V˙O2) kinetics and heart rate kinetics are influenced by age and fitness. Muscular V˙O2 kinetics can be estimated from heart rate and pulmonary V˙O2. In this study the applicability of a test using pseudo-random binary sequences in combination with a model to estimate muscular V˙O2 kinetics was tested. Muscular V˙O2 kinetics were expected to be faster than pulmonary V˙O2 kinetics, slowed in aged subjects, and correlated with maximum V˙O2 and heart rate kinetics. 27 elderly subjects (73±3 years; 81.1±8.2 kg; 175±4.7 cm) participated. Cardiorespiratory kinetics were assessed using the maximum of cross-correlation functions, higher maxima implying faster kinetics. Muscular V˙O2 kinetics were faster than pulmonary V˙O2 kinetics (0.31±0.1 vs. 0.29±0.1 s; p=0.004). Heart rate kinetics were not correlated with muscular or pulmonary V˙O2 kinetics or maximum V˙O2. Muscular V˙O2 kinetics correlated with maximum V˙O2 (r=0.35; p=0.033). This suggests that muscular V˙O2 kinetics are faster than estimates from pulmonary V˙O2 and related to maximum V˙O2 in aged subjects. In the future this experimental approach may help to characterize alterations in muscular V˙O2 under various conditions independent of motivation and maximal effort. © Georg Thieme Verlag KG Stuttgart · New York.

  11. Solving large-scale dynamic systems using band Lanczos method in Rockwell NASTRAN on CRAY X-MP

    NASA Technical Reports Server (NTRS)

    Gupta, V. K.; Zillmer, S. D.; Allison, R. E.

    1986-01-01

Better models, more accurate and faster algorithms, and large-scale computing improve cost effectiveness and permit more representative dynamic analyses. The band Lanczos eigensolution method was implemented in Rockwell's version of the 1984 COSMIC-released NASTRAN finite element structural analysis computer program to effectively solve for structural vibration modes, including those of large complex systems exceeding 10,000 degrees of freedom. The Lanczos vectors were re-orthogonalized locally using the Lanczos method and globally using the modified Gram-Schmidt method for sweeping rigid-body modes and previously generated modes and Lanczos vectors. The truncated band matrix was solved for vibration frequencies and mode shapes using Givens rotations. Numerical examples are included to demonstrate the cost effectiveness and accuracy of the method as implemented in Rockwell NASTRAN. The CRAY version is based on RPK's COSMIC/NASTRAN. The band Lanczos method was more reliable and accurate, and converged faster, than the single-vector Lanczos method. The band Lanczos method was comparable to the subspace iteration method, which is a block version of the inverse power method. However, the subspace matrix tended to be fully populated in the case of subspace iteration, and not as sparse as a band matrix.
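The single-vector Lanczos iteration that the band (block) method generalizes can be sketched in a few lines; full re-orthogonalization stands in for the local/global re-orthogonalization described above. The test matrix, spectrum, and iteration count are illustrative:

```python
import numpy as np

# Single-vector Lanczos with full re-orthogonalization. The test matrix has a
# known spectrum {1,...,199, 300}; the well-separated extreme eigenvalue is
# recovered with far fewer iterations than the matrix dimension.
def lanczos_eigs(A, v0, m):
    n = v0.size
    V = np.zeros((n, m))
    alpha, beta = np.zeros(m), np.zeros(m - 1)
    v = v0 / np.linalg.norm(v0)
    for j in range(m):
        V[:, j] = v
        w = A @ v
        alpha[j] = v @ w
        w -= V[:, :j + 1] @ (V[:, :j + 1].T @ w)   # re-orthogonalize vs. all V
        if j < m - 1:
            beta[j] = np.linalg.norm(w)
            v = w / beta[j]
    T = np.diag(alpha) + np.diag(beta, 1) + np.diag(beta, -1)
    return np.linalg.eigvalsh(T)   # Ritz values approximate extreme eigenvalues

rng = np.random.default_rng(2)
n = 200
Q, _ = np.linalg.qr(rng.standard_normal((n, n)))
eigs = np.arange(1.0, n + 1.0)
eigs[-1] = 300.0                   # isolated largest eigenvalue
A = Q @ np.diag(eigs) @ Q.T
ritz = lanczos_eigs(A, rng.standard_normal(n), 40)
print(f"largest Ritz value: {ritz[-1]:.6f}")
```

A band/block variant runs the same recurrence on several start vectors at once, which is what improves reliability for clustered vibration modes.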

  12. Semantic size does not matter: "bigger" words are not recognized faster.

    PubMed

    Kang, Sean H K; Yap, Melvin J; Tse, Chi-Shing; Kurby, Christopher A

    2011-06-01

    Sereno, O'Donnell, and Sereno (2009) reported that words are recognized faster in a lexical decision task when their referents are physically large than when they are small, suggesting that "semantic size" might be an important variable that should be considered in visual word recognition research and modelling. We sought to replicate their size effect, but failed to find a significant latency advantage in lexical decision for "big" words (cf. "small" words), even though we used the same word stimuli as Sereno et al. and had almost three times as many subjects. We also examined existing data from visual word recognition megastudies (e.g., English Lexicon Project) and found that semantic size is not a significant predictor of lexical decision performance after controlling for the standard lexical variables. In summary, the null results from our lab experiment--despite a much larger subject sample size than Sereno et al.--converged with our analysis of megastudy lexical decision performance, leading us to conclude that semantic size does not matter for word recognition. Discussion focuses on why semantic size (unlike some other semantic variables) is unlikely to play a role in lexical decision.

  13. Implementation of a roughness element to trip transition in large-eddy simulation

    NASA Astrophysics Data System (ADS)

    Boudet, J.; Monier, J.-F.; Gao, F.

    2015-02-01

In aerodynamics, the laminar or turbulent regime of a boundary layer has a strong influence on friction or heat transfer. In practical applications, it is sometimes necessary to trip the transition to turbulence, and a common way is by use of a roughness element (e.g. a step) on the wall. The present paper is concerned with the numerical implementation of such a trip in large-eddy simulations. The study is carried out on a flat-plate boundary layer configuration, with Reynolds number Rex=1.3×106. First, this work brings the opportunity to introduce a practical methodology to assess convergence in large-eddy simulations. Second, concerning the trip implementation, a volume source term is proposed and is shown to yield a smoother and faster transition than a grid step. Moreover, it is easier to implement and more adaptable. Finally, two subgrid-scale models are tested: the WALE model of Nicoud and Ducros (Flow Turbul. Combust., vol. 62, 1999) and the shear-improved Smagorinsky model of Lévêque et al. (J. Fluid Mech., vol. 570, 2007). Both models allow transition, but the former appears to yield a faster transition and a better prediction of friction in the turbulent regime.

  14. Direct Methods for Predicting Movement Biomechanics Based Upon Optimal Control Theory with Implementation in OpenSim.

    PubMed

    Porsa, Sina; Lin, Yi-Chung; Pandy, Marcus G

    2016-08-01

    The aim of this study was to compare the computational performances of two direct methods for solving large-scale, nonlinear, optimal control problems in human movement. Direct shooting and direct collocation were implemented on an 8-segment, 48-muscle model of the body (24 muscles on each side) to compute the optimal control solution for maximum-height jumping. Both algorithms were executed on a freely-available musculoskeletal modeling platform called OpenSim. Direct collocation converged to essentially the same optimal solution up to 249 times faster than direct shooting when the same initial guess was assumed (3.4 h of CPU time for direct collocation vs. 35.3 days for direct shooting). The model predictions were in good agreement with the time histories of joint angles, ground reaction forces and muscle activation patterns measured for subjects jumping to their maximum achievable heights. Both methods converged to essentially the same solution when started from the same initial guess, but computation time was sensitive to the initial guess assumed. Direct collocation demonstrates exceptional computational performance and is well suited to performing predictive simulations of movement using large-scale musculoskeletal models.
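Direct collocation's structure, decision variables at every time node plus "defect" equality constraints enforcing the dynamics, can be illustrated on a far smaller problem than the 48-muscle model: a minimum-effort double integrator with trapezoidal defects. This sketch uses scipy's SLSQP, not the solvers of the study, and every name and parameter is illustrative:

```python
import numpy as np
from scipy.optimize import minimize

# Trapezoidal direct collocation: drive a double integrator from rest at x=0 to
# rest at x=1 in unit time with minimum control effort. States x, v and control
# u at every node are all decision variables; defect constraints enforce x'=v,
# v'=u between neighbouring nodes.
N = 21
t = np.linspace(0.0, 1.0, N)
h = t[1] - t[0]

def unpack(z):
    return z[:N], z[N:2 * N], z[2 * N:]

def effort(z):
    _, _, u = unpack(z)
    return h * np.sum((u[:-1] ** 2 + u[1:] ** 2) / 2.0)   # trapezoidal int of u^2

def defects(z):
    x, v, u = unpack(z)
    dx = x[1:] - x[:-1] - h * (v[1:] + v[:-1]) / 2.0      # x' = v
    dv = v[1:] - v[:-1] - h * (u[1:] + u[:-1]) / 2.0      # v' = u
    bc = [x[0], v[0], x[-1] - 1.0, v[-1]]                  # boundary conditions
    return np.concatenate([dx, dv, bc])

res = minimize(effort, np.zeros(3 * N), method="SLSQP",
               constraints={"type": "eq", "fun": defects})
x, v, u = unpack(res.x)
print(f"cost = {res.fun:.3f}, u(0) = {u[0]:.2f}")
```

The analytic optimum is u(t) = 6 − 12t with cost 12 and u(0) = 6, which the discrete solution approaches as N grows; shooting would instead integrate the dynamics forward at every objective evaluation, which is the source of its cost.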

  15. Chaos Quantum-Behaved Cat Swarm Optimization Algorithm and Its Application in the PV MPPT

    PubMed Central

    2017-01-01

The Cat Swarm Optimization (CSO) algorithm was put forward in 2006. Despite a faster convergence speed than the Particle Swarm Optimization (PSO) algorithm, the application of CSO is greatly limited by the drawback of “premature convergence”: the possibility of becoming trapped in a local optimum when dealing with nonlinear optimization problems that have a large number of local extreme values. To surmount these shortcomings, a Chaos Quantum-behaved Cat Swarm Optimization (CQCSO) algorithm is proposed in this paper. Firstly, a Quantum-behaved Cat Swarm Optimization (QCSO) algorithm improves the accuracy of CSO, which otherwise falls into local optima easily in its later stages. The CQCSO algorithm is then obtained by introducing a tent map for jumping out of local optima. Secondly, CQCSO has been applied in the simulation of five different test functions, showing higher accuracy and less time consumption than CSO and QCSO. Finally, a photovoltaic MPPT model and experimental platform are established, and a global maximum power point tracking control strategy is achieved by the CQCSO algorithm, the effectiveness and efficiency of which have been verified by both simulation and experiment. PMID:29181020

  16. Chaos Quantum-Behaved Cat Swarm Optimization Algorithm and Its Application in the PV MPPT.

    PubMed

    Nie, Xiaohua; Wang, Wei; Nie, Haoyao

    2017-01-01

The Cat Swarm Optimization (CSO) algorithm was put forward in 2006. Despite a faster convergence speed than the Particle Swarm Optimization (PSO) algorithm, the application of CSO is greatly limited by the drawback of "premature convergence": the possibility of becoming trapped in a local optimum when dealing with nonlinear optimization problems that have a large number of local extreme values. To surmount these shortcomings, a Chaos Quantum-behaved Cat Swarm Optimization (CQCSO) algorithm is proposed in this paper. Firstly, a Quantum-behaved Cat Swarm Optimization (QCSO) algorithm improves the accuracy of CSO, which otherwise falls into local optima easily in its later stages. The CQCSO algorithm is then obtained by introducing a tent map for jumping out of local optima. Secondly, CQCSO has been applied in the simulation of five different test functions, showing higher accuracy and less time consumption than CSO and QCSO. Finally, a photovoltaic MPPT model and experimental platform are established, and a global maximum power point tracking control strategy is achieved by the CQCSO algorithm, the effectiveness and efficiency of which have been verified by both simulation and experiment.
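The tent map that CQCSO introduces is a one-line chaotic iteration. A sketch, with the slope set just below 2 because the exact value 2 collapses to zero in binary floating point (the starting point and length are arbitrary):

```python
import numpy as np

# Tent-map iteration, the kind of chaotic sequence CQCSO uses to perturb
# candidates out of local optima. With mu exactly 2.0, doubling shifts all
# mantissa bits out and the orbit collapses to 0, so mu is set slightly below.
def tent_map(x0, n, mu=1.9999):
    xs = np.empty(n)
    x = x0
    for i in range(n):
        x = mu * x if x < 0.5 else mu * (1.0 - x)
        xs[i] = x
    return xs

seq = tent_map(0.3712, 5000)
print(f"min={seq.min():.4f} max={seq.max():.4f} std={seq.std():.3f}")
```

The iterates wander over (0, 1) without settling, which is what makes them useful as cheap, ergodic perturbations.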

  17. Big Data Challenges of High-Dimensional Continuous-Time Mean-Variance Portfolio Selection and a Remedy.

    PubMed

    Chiu, Mei Choi; Pun, Chi Seng; Wong, Hoi Ying

    2017-08-01

Investors interested in the global financial market must analyze financial securities internationally. Making an optimal global investment decision involves processing a huge amount of data for a high-dimensional portfolio. This article investigates the big data challenges of two mean-variance optimal portfolios: continuous-time precommitment and constant-rebalancing strategies. We show that both optimized portfolios implemented with the traditional sample estimates converge to the worst-performing portfolio when the portfolio size becomes large. The crux of the problem is the estimation error accumulated from the huge dimension of stock data. We then propose a linear programming optimal (LPO) portfolio framework, which applies a constrained ℓ1 minimization to the theoretical optimal control to mitigate the risk associated with the dimensionality issue. The resulting portfolio becomes a sparse portfolio that selects stocks with a data-driven procedure and hence offers a stable mean-variance portfolio in practice. When the number of observations becomes large, the LPO portfolio converges to the oracle optimal portfolio, which is free of estimation error, even though the number of stocks grows faster than the number of observations. Our numerical and empirical studies demonstrate the superiority of the proposed approach. © 2017 Society for Risk Analysis.
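Constrained ℓ1 minimization of the kind the LPO framework relies on can be illustrated with basis pursuit, min ‖x‖₁ subject to Ax = b, cast as a linear program. The dimensions and data below are a synthetic sparse-recovery instance, not the paper's portfolio formulation:

```python
import numpy as np
from scipy.optimize import linprog

# Basis pursuit as an LP via the split x = u - v with u, v >= 0:
# minimize sum(u) + sum(v) = ||x||_1 subject to A(u - v) = b.
rng = np.random.default_rng(7)
m, n = 30, 80
A = rng.standard_normal((m, n))
x_true = np.zeros(n)
x_true[[5, 17, 42]] = [1.5, -2.0, 1.0]        # 3-sparse ground truth
b = A @ x_true

c = np.ones(2 * n)
res = linprog(c, A_eq=np.hstack([A, -A]), b_eq=b, bounds=(0, None))
x_hat = res.x[:n] - res.x[n:]
print(f"max recovery error: {np.abs(x_hat - x_true).max():.2e}")
```

With enough random measurements, the ℓ1 solution recovers the sparse vector exactly; in the portfolio setting this same mechanism yields a sparse, stable set of weights.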

  18. Cenozoic forearc tectonics in northeastern Japan: Relationships between outer forearc subsidence and plate boundary kinematics

    NASA Astrophysics Data System (ADS)

    Regalla, Christine

    Here we investigate the relationships between outer forearc subsidence, the timing and kinematics of upper plate deformation and plate convergence rate in Northeast Japan to evaluate the role of plate boundary dynamics in driving forearc subsidence. The Northeastern Japan margin is one of the first non-accretionary subduction zones where regional forearc subsidence was argued to reflect tectonic erosion of large volumes of upper crustal rocks. However, we propose that a significant component of forearc subsidence could be the result of dynamic changes in plate boundary geometry. We provide new constraints on the timing and kinematics of deformation along inner forearc faults, new analyses of the evolution of outer forearc tectonic subsidence, and updated calculations of plate convergence rate. These data collectively reveal a temporal correlation between the onset of regional forearc subsidence, the initiation of upper plate extension, and an acceleration in local plate convergence rate. A similar analysis of the kinematic evolution of the Tonga, Izu-Bonin, and Mariana subduction zones indicates that the temporal correlations observed in Japan are also characteristic of these three non-accretionary margins. Comparison of these data with published geodynamic models suggests that forearc subsidence is the result of temporal variability in slab geometry due to changes in slab buoyancy and plate convergence rate. These observations suggest that a significant component of forearc subsidence at these four margins is not the product of tectonic erosion, but instead reflects changes in plate boundary dynamics driven by variable plate kinematics.

  19. A robust, finite element model for hydrostatic surface water flows

    USGS Publications Warehouse

    Walters, R.A.; Casulli, V.

    1998-01-01

A finite element scheme is introduced for the 2-dimensional shallow water equations using semi-implicit methods in time. A semi-Lagrangian method is used to approximate the effects of advection. A wave equation is formed at the discrete level such that the equations decouple into an equation for surface elevation and a momentum equation for the horizontal velocity. The convergence rates and relative computational efficiency are examined with the use of three test cases representing various degrees of difficulty. A test with a polar-quadrant grid investigates the response to local grid-scale forcing and the presence of spurious modes, a channel test case establishes convergence rates, and a field-scale test case examines problems with highly irregular grids.

  20. Thematic knowledge, artifact concepts, and the left posterior temporal lobe: Where action and object semantics converge

    PubMed Central

    Kalénine, Solène; Buxbaum, Laurel J.

    2016-01-01

Converging evidence supports the existence of functionally and neuroanatomically distinct taxonomic (similarity-based; e.g., hammer-screwdriver) and thematic (event-based; e.g., hammer-nail) semantic systems. Processing of thematic relations between objects has been shown to selectively recruit the left posterior temporoparietal cortex. Similar posterior regions have also been shown to be critical for knowledge of relationships between actions and manipulable human-made objects (artifacts). Based on the hypothesis that thematic relationships for artifacts are based, at least in part, on action relationships, we assessed the prediction that the same regions of the left posterior temporoparietal cortex would be critical for conceptual processing of artifact-related actions and thematic relations for artifacts. To test this hypothesis, we evaluated processing of taxonomic and thematic relations for artifact and natural objects as well as artifact action knowledge (gesture recognition) abilities in a large sample of 48 stroke patients with a range of lesion foci in the left hemisphere. Like control participants, patients identified thematic relations faster than taxonomic relations for artifacts, whereas they identified taxonomic relations faster than thematic relations for natural objects. Moreover, response times for identifying thematic relations for artifacts selectively predicted performance in gesture recognition. Whole brain Voxel Based Lesion-Symptom Mapping (VLSM) analyses and Region of Interest (ROI) regression analyses further demonstrated that lesions to the left posterior temporal cortex, overlapping with LTO and visual motion area hMT+, were associated both with relatively slower response times in identifying thematic relations for artifacts and poorer artifact action knowledge in patients.
These findings provide novel insights into the functional role of left posterior temporal cortex in thematic knowledge, and suggest that the close association between thematic relations for artifacts and action representations may reflect their common dependence on visual motion and manipulation information. PMID:27389801

  1. Temperament Measures of African-American Infants: Change and Convergence with Age

    ERIC Educational Resources Information Center

    Worobey, John; Islas-Lopez, Maria

    2009-01-01

    Studies of infant temperament are inconsistent with regard to convergence across measurement sources. In addition, little published work is available that describes temperament in minority infants. In this study, measures of temperament at three and six months were made for 24 African-American infants. Although maternal ratings of activity and…

  2. Convergent and Divergent Validity of the Grammaticality and Utterance Length Instrument

    ERIC Educational Resources Information Center

    Castilla-Earls, Anny; Fulcher-Rood, Katrina

    2018-01-01

    Purpose: This feasibility study examines the convergent and divergent validity of the Grammaticality and Utterance Length Instrument (GLi), a tool designed to assess the grammaticality and average utterance length of a child's prerecorded story retell. Method: Three raters used the GLi to rate audio-recorded story retells from 100 English-speaking…

  3. Radio-manganese, -iron, -phosphorus uptake by water hyacinth and economic implications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Colley, T.N.; Gonzalez, M.H.; Martin, D.F.

To determine the effects of the deprivation of specific micronutrients on the water hyacinth (Eichhornia crassipes), the rate of uptake by the water hyacinth of iron and manganese in comparison with phosphorus was studied. Materials and methodology are described. Experimentation indicates that all three elements are actively absorbed by the root systems, but the rates of absorption differ markedly. The rate of absorption of manganese by roots is 13 and 21 times that for radio-iron and -phosphorus, and iron was taken up by the roots at nearly twice the rate of phosphorus. Manganese translocation appeared to be faster than phosphorus translocation by an order of magnitude and 65 times faster than iron translocation. 9 references, 2 tables.

  4. Naming game with biased assimilation over adaptive networks

    NASA Astrophysics Data System (ADS)

    Fu, Guiyuan; Zhang, Weidong

    2018-01-01

The dynamics of a two-word naming game incorporating the influence of biased assimilation over an adaptive network is investigated in this paper. Firstly, an extended naming game with biased assimilation (NGBA) is proposed. The hearer in NGBA accepts the received information in a biased manner: he may refuse, with a predefined probability, to accept the conveyed word from the speaker if that word differs from his current memory. Secondly, the adaptive network is formulated by rewiring the links. Theoretical analysis shows that the population in NGBA will eventually reach global consensus on either A or B. Numerical simulation results show that the larger the strength of biased assimilation on both words, the slower the convergence, while a larger strength of biased assimilation on only one word can slightly accelerate convergence; a larger population size makes the rate of convergence considerably slower as the size increases from a relatively small value, while this effect becomes minor once the population is large; and adaptively reconnecting existing links can greatly accelerate the rate of convergence, especially on a sparsely connected network.
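A minimal NGBA-style simulation on a complete graph shows the mechanics; the paper's adaptive rewiring is omitted, and `beta` here is an assumed refusal probability, not the paper's parameterization:

```python
import random

# Two-word naming game with biased assimilation: on failure, the hearer learns
# the speaker's word only with probability 1 - beta; on success, both agents
# collapse their memories to the agreed word.
def naming_game(n=100, beta=0.3, steps=100_000, seed=3):
    rng = random.Random(seed)
    memory = [{'A'} if i < n // 2 else {'B'} for i in range(n)]
    for t in range(steps):
        speaker, hearer = rng.sample(range(n), 2)
        word = rng.choice(sorted(memory[speaker]))
        if word in memory[hearer]:
            memory[speaker] = {word}        # success: both collapse to the word
            memory[hearer] = {word}
        elif rng.random() > beta:           # biased assimilation: may refuse
            memory[hearer].add(word)
        if all(len(m) == 1 and m == memory[0] for m in memory):
            return t                        # games played until global consensus
    return steps

t_weak, t_strong = naming_game(beta=0.1), naming_game(beta=0.6)
print(t_weak, t_strong)
```

Both runs reach consensus; stronger symmetric bias tends to require more games, consistent with the abstract's first finding.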

  5. An adaptive Bayesian inference algorithm to estimate the parameters of a hazardous atmospheric release

    NASA Astrophysics Data System (ADS)

    Rajaona, Harizo; Septier, François; Armand, Patrick; Delignon, Yves; Olry, Christophe; Albergel, Armand; Moussafir, Jacques

    2015-12-01

In the eventuality of an accidental or intentional atmospheric release, the reconstruction of the source term using measurements from a set of sensors is an important and challenging inverse problem. A rapid and accurate estimation of the source allows faster and more efficient action for first-response teams, in addition to providing better damage assessment. This paper presents a Bayesian probabilistic approach to estimate the location and the temporal emission profile of a pointwise source. The release rate is evaluated analytically by using a Gaussian assumption on its prior distribution, and is enhanced with a positivity constraint to improve the estimation. The source location is obtained by means of an advanced iterative Monte-Carlo technique called Adaptive Multiple Importance Sampling (AMIS), which uses a recycling process at each iteration to accelerate its convergence. The proposed methodology is tested using synthetic and real concentration data in the framework of the Fusion Field Trials 2007 (FFT-07) experiment. The quality of the obtained results is comparable to that of the Markov Chain Monte Carlo (MCMC) algorithm, a popular Bayesian method used for source estimation. Moreover, the adaptive processing of the AMIS provides a better sampling efficiency by reusing all the generated samples.
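The self-normalized importance sampling estimate that AMIS adapts and recycles can be sketched in isolation. This is a toy 1-D stand-in for the source-location posterior; the target, proposal, and sample count are all illustrative:

```python
import numpy as np

# Self-normalized importance sampling: weight proposal draws by
# (unnormalized posterior) / (proposal density), then take the weighted mean.
rng = np.random.default_rng(5)

def unnorm_posterior(x):
    # Flat prior times a Gaussian likelihood centred on a "true" location 2.0.
    return np.exp(-0.5 * (x - 2.0) ** 2)

sigma_q = 3.0
samples = rng.normal(0.0, sigma_q, 100_000)     # deliberately mismatched proposal
q_pdf = np.exp(-0.5 * (samples / sigma_q) ** 2) / (sigma_q * np.sqrt(2 * np.pi))

w = unnorm_posterior(samples) / q_pdf           # importance weights
posterior_mean = np.sum(w * samples) / np.sum(w)
print(f"estimated posterior mean: {posterior_mean:.3f}")   # true value is 2.0
```

AMIS improves on this single-shot estimate by updating the proposal at each iteration and re-weighting all past samples under the new mixture.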

  6. Independent tasks scheduling in cloud computing via improved estimation of distribution algorithm

    NASA Astrophysics Data System (ADS)

    Sun, Haisheng; Xu, Rui; Chen, Huaping

    2018-04-01

To minimize makespan when scheduling independent tasks in cloud computing, an improved estimation of distribution algorithm (IEDA) is proposed to tackle the investigated problem in this paper. Considering that the problem is a multi-dimensional discrete problem, an improved population-based incremental learning (PBIL) algorithm is applied, in which the parameter for each component is independent of the other components. In order to improve the performance of PBIL, on the one hand, an integer encoding scheme is used and the probability calculation of PBIL is improved by using the task average processing time; on the other hand, an effective adaptive learning rate function related to the number of iterations is constructed to trade off the exploration and exploitation of IEDA. In addition, enhanced Max-Min and Min-Min algorithms are introduced to form two initial individuals. In the proposed IEDA, an improved genetic algorithm (IGA) is applied to generate part of the initial population by evolving the two initial individuals, and the rest of the initial individuals are generated at random. Finally, the sampling process is divided into two parts: sampling by the probabilistic model and by the IGA, respectively. The experimental results show that the proposed IEDA not only finds better solutions but also has a faster convergence speed.
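PBIL's component-wise probability model is compact enough to sketch on a toy OneMax problem. The adaptive learning rate, Max-Min/Min-Min seeding, and IGA-based sampling of the paper's IEDA are all omitted; names and parameters are illustrative:

```python
import numpy as np

# Bare-bones population-based incremental learning (PBIL): one independent
# probability per bit, nudged toward the best sample of each generation.
def pbil(bits=20, pop=50, iters=100, lr=0.1, seed=4):
    rng = np.random.default_rng(seed)
    p = np.full(bits, 0.5)                    # component-wise probability vector
    best = 0
    for _ in range(iters):
        samples = rng.random((pop, bits)) < p
        fitness = samples.sum(axis=1)         # OneMax: number of ones
        p = (1 - lr) * p + lr * samples[fitness.argmax()]   # pull toward elite
        best = max(best, int(fitness.max()))
    return best

best = pbil()
print(best)
```

Because each bit's probability is updated independently, exactly as the abstract notes, PBIL handles multi-dimensional discrete encodings like task-to-machine assignments naturally.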

  7. Planetary Torque in 3D Isentropic Disks

    NASA Astrophysics Data System (ADS)

    Fung, Jeffrey; Masset, Frédéric; Lega, Elena; Velasco, David

    2017-03-01

Planetary migration is inherently a three-dimensional (3D) problem, because Earth-size planetary cores are deeply embedded in protoplanetary disks. Simulations of these 3D disks remain challenging due to the steep resolution requirements. Using two different hydrodynamics codes, FARGO3D and PEnGUIn, we simulate disk-planet interaction for a one to five Earth-mass planet embedded in an isentropic disk. We measure the torque on the planet and ensure that the measurements are converged both in resolution and between the two codes. We find that the torque is independent of the smoothing length of the planet’s potential (r s), and that it has a weak dependence on the adiabatic index of the gaseous disk (γ). The torque values correspond to an inward migration rate qualitatively similar to previous linear calculations. We perform additional simulations with explicit radiative transfer using FARGOCA, and again find agreement between 3D simulations and existing torque formulae. We also present the flow pattern around the planets, which shows that active flow is present within the planet’s Hill sphere and that meridional vortices are shed downstream. The vertical flow speed near the planet is faster for a smaller r s or γ, up to supersonic speeds for the smallest r s and γ in our study.

  8. Neural Control of a Tracking Task via Attention-Gated Reinforcement Learning for Brain-Machine Interfaces.

    PubMed

    Wang, Yiwen; Wang, Fang; Xu, Kai; Zhang, Qiaosheng; Zhang, Shaomin; Zheng, Xiaoxiang

    2015-05-01

Reinforcement learning (RL)-based brain-machine interfaces (BMIs) enable the user to learn from the environment through interactions to complete the task without desired signals, which is promising for clinical applications. Previous studies exploited Q-learning techniques to discriminate neural states into simple directional actions providing the trial initial timing. However, the movements in BMI applications can be quite complicated, and the action timing explicitly shows the intention of when to move. The rich actions and the corresponding neural states form a large state-action space, imposing generalization difficulty on Q-learning. In this paper, we propose to adopt attention-gated reinforcement learning (AGREL) as a new learning scheme for BMIs to adaptively decode high-dimensional neural activities into seven distinct movements (directional moves, holdings and resting), owing to its efficient weight updating. We apply AGREL on neural data recorded from M1 of a monkey to directly predict a seven-action set in a time sequence to reconstruct the trajectory of a center-out task. Compared to Q-learning techniques, AGREL could improve the target acquisition rate to 90.16% on average with faster convergence and more stability to follow neural activity over multiple days, indicating the potential to achieve better online decoding performance for more complicated BMI tasks.
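For contrast with AGREL, the tabular Q-learning update that earlier BMI studies exploited can be sketched on a toy reach-the-target task. States, rewards, and parameters are illustrative, not neural data:

```python
import numpy as np

# Tabular Q-learning on a line of states 0..4, actions 0=left/1=right, with
# reward 1 on reaching state 4. Q[s, a] is updated toward r + gamma * max Q[s'].
def q_learning(episodes=500, alpha=0.5, gamma=0.9, eps=0.1, seed=6):
    rng = np.random.default_rng(seed)
    Q = np.ones((5, 2))                       # optimistic init speeds exploration
    for _ in range(episodes):
        s = 0
        while s != 4:
            a = int(rng.integers(2)) if rng.random() < eps else int(Q[s].argmax())
            s2 = max(s - 1, 0) if a == 0 else s + 1
            r = 1.0 if s2 == 4 else 0.0
            target = r + gamma * Q[s2].max() * (s2 != 4)   # no bootstrap at goal
            Q[s, a] += alpha * (target - Q[s, a])
            s = s2
    return Q

Q = q_learning()
print(Q.argmax(axis=1)[:4])   # learned policy: move right in every state
```

The table grows with the product of states and actions, which is exactly the scaling problem the abstract cites when the action set and neural state space become rich.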

  9. Limited-memory BFGS based least-squares pre-stack Kirchhoff depth migration

    NASA Astrophysics Data System (ADS)

    Wu, Shaojiang; Wang, Yibo; Zheng, Yikang; Chang, Xu

    2015-08-01

Least-squares migration (LSM) is a linearized inversion technique for subsurface reflectivity estimation. Compared to conventional migration algorithms, it can improve spatial resolution significantly with a few iterative calculations. There are three key steps in LSM: (1) calculate data residuals between observed data and demigrated data using the inverted reflectivity model; (2) migrate data residuals to form the reflectivity gradient; and (3) update the reflectivity model using optimization methods. In order to obtain an accurate and high-resolution inversion result, a good estimation of the inverse Hessian matrix plays a crucial role. However, due to the large size of the Hessian matrix, the inverse matrix calculation is always a tough task. The limited-memory BFGS (L-BFGS) method can evaluate the Hessian matrix indirectly using a limited amount of computer memory, maintaining only a history of the past m gradients (often m < 10). We combine the L-BFGS method with least-squares pre-stack Kirchhoff depth migration. We then validate the introduced approach on the 2-D Marmousi synthetic data set and a 2-D marine data set. The results show that the introduced method can effectively recover the reflectivity model and has a faster convergence rate than two gradient methods used for comparison. It might be significant for general complex subsurface imaging.
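The limited-memory update is available off the shelf. A sketch minimizing a 50-dimensional Rosenbrock function with scipy's L-BFGS-B, where `maxcor` plays the role of the history length m < 10 mentioned above; the objective is a stand-in for the migration misfit, not the paper's operator:

```python
import numpy as np
from scipy.optimize import minimize

# L-BFGS keeps only the last m gradient/step pairs instead of a dense inverse
# Hessian, which is what makes it viable for huge unknowns such as reflectivity
# grids. Demonstrated on the (multidimensional) Rosenbrock function.
def rosen(x):
    return np.sum(100.0 * (x[1:] - x[:-1] ** 2) ** 2 + (1.0 - x[:-1]) ** 2)

def rosen_grad(x):
    g = np.zeros_like(x)
    g[:-1] = -400.0 * x[:-1] * (x[1:] - x[:-1] ** 2) - 2.0 * (1.0 - x[:-1])
    g[1:] += 200.0 * (x[1:] - x[:-1] ** 2)
    return g

x0 = np.full(50, 1.2)
res = minimize(rosen, x0, jac=rosen_grad, method="L-BFGS-B",
               options={"maxcor": 10})       # keep only the last 10 pairs
print(f"f* = {res.fun:.2e} after {res.nit} iterations")
```

The memory cost is O(m n) rather than the O(n²) of a dense Hessian estimate, which is the property the abstract exploits.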

  10. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fung, Jeffrey; Masset, Frédéric; Velasco, David

Planetary migration is inherently a three-dimensional (3D) problem, because Earth-size planetary cores are deeply embedded in protoplanetary disks. Simulations of these 3D disks remain challenging due to the steep resolution requirements. Using two different hydrodynamics codes, FARGO3D and PEnGUIn, we simulate disk–planet interaction for a one to five Earth-mass planet embedded in an isentropic disk. We measure the torque on the planet and ensure that the measurements are converged both in resolution and between the two codes. We find that the torque is independent of the smoothing length of the planet’s potential (r_s), and that it has a weak dependence on the adiabatic index of the gaseous disk (γ). The torque values correspond to an inward migration rate qualitatively similar to previous linear calculations. We perform additional simulations with explicit radiative transfer using FARGOCA, and again find agreement between 3D simulations and existing torque formulae. We also present the flow pattern around the planets, which shows that active flow is present within the planet’s Hill sphere and that meridional vortices are shed downstream. The vertical flow speed near the planet is faster for a smaller r_s or γ, up to supersonic speeds for the smallest r_s and γ in our study.

  11. Slab temperature controls on the Tonga double seismic zone and slab mantle dehydration

    PubMed Central

    Wei, S. Shawn; Wiens, Douglas A.; van Keken, Peter E.; Cai, Chen

    2017-01-01

    Double seismic zones are two-layered distributions of intermediate-depth earthquakes that provide insight into the thermomechanical state of subducting slabs. We present new precise hypocenters of intermediate-depth earthquakes in the Tonga subduction zone obtained using data from local island–based, ocean-bottom, and global seismographs. The results show a downdip compressional upper plane and a downdip tensional lower plane with a separation of about 30 km. The double seismic zone in Tonga extends to a depth of about 300 km, deeper than in any other subduction system. This is due to the lower slab temperatures resulting from faster subduction, as indicated by a global trend toward deeper double seismic zones in colder slabs. In addition, a line of high seismicity in the upper plane is observed at a depth of 160 to 280 km, which shallows southward as the convergence rate decreases. Thermal modeling shows that the earthquakes in this “seismic belt” occur at various pressures but at a nearly constant temperature, highlighting the important role of temperature in triggering intermediate-depth earthquakes. This seismic belt may correspond to regions where the subducting mantle first reaches a temperature of ~500°C, implying that metamorphic dehydration of mantle minerals in the slab provides water to enhance faulting. PMID:28097220

  12. Global Plate Velocities from the Global Positioning System

    NASA Technical Reports Server (NTRS)

    Larson, Kristine M.; Freymueller, Jeffrey T.; Philipsen, Steven

    1997-01-01

    We have analyzed 204 days of Global Positioning System (GPS) data from the global GPS network spanning January 1991 through March 1996. On the basis of these GPS coordinate solutions, we have estimated velocities for 38 sites, mostly located on the interiors of the Africa, Antarctica, Australia, Eurasia, Nazca, North America, Pacific, and South America plates. The uncertainties of the horizontal velocity components range from 1.2 to 5.0 mm/yr. With the exception of sites on the Pacific and Nazca plates, the GPS velocities agree with absolute plate model predictions within 95% confidence. For most of the sites in North America, Antarctica, and Eurasia, the agreement is better than 2 mm/yr. We find no persuasive evidence for significant vertical motions (less than 3 standard deviations), except at four sites. Three of these four were sites constrained to geodetic reference frame velocities. The GPS velocities were then used to estimate angular velocities for eight tectonic plates. Absolute angular velocities derived from the GPS data agree with the no-net-rotation (NNR) NUVEL-1A model within 95% confidence except for the Pacific plate. Our pole of rotation for the Pacific plate lies 11.5 deg west of the NNR NUVEL-1A pole, with an angular speed 10% faster. Our relative angular velocities agree with NUVEL-1A except for some involving the Pacific plate. While our Pacific-North America angular velocity differs significantly from NUVEL-1A, our model and NUVEL-1A predict very small differences in relative motion along the Pacific-North America plate boundary itself. Our Pacific-Australia and Pacific-Eurasia angular velocities are significantly faster than NUVEL-1A, predicting more rapid convergence at these two plate boundaries. Along the East Pacific Rise, our Pacific-Nazca angular velocity agrees in both rate and azimuth with NUVEL-1A.
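    The site velocities such a plate model predicts follow from v = ω × r, the cross product of the plate's angular velocity vector with the site position vector. A minimal sketch on a spherical Earth; the pole location, rotation rate, and site below are hypothetical illustration values, not the paper's estimates.

```python
import numpy as np

R_EARTH_KM = 6371.0
DEG = np.pi / 180.0

def unit_vector(lat_deg, lon_deg):
    """Cartesian unit vector of a geographic position (spherical Earth)."""
    lat, lon = lat_deg * DEG, lon_deg * DEG
    return np.array([np.cos(lat) * np.cos(lon),
                     np.cos(lat) * np.sin(lon),
                     np.sin(lat)])

def site_velocity(pole_lat, pole_lon, rate_deg_per_myr, site_lat, site_lon):
    """Predicted surface velocity v = omega x r, returned in mm/yr
    (km/Myr and mm/yr are numerically the same unit)."""
    omega = rate_deg_per_myr * DEG * unit_vector(pole_lat, pole_lon)  # rad/Myr
    r = R_EARTH_KM * unit_vector(site_lat, site_lon)                  # km
    return np.cross(omega, r)

# hypothetical Euler pole and site, for illustration only
v = site_velocity(-63.0, 110.0, 0.67, 21.3, 202.0)
speed = np.linalg.norm(v)   # horizontal speed in mm/yr
```

    A useful sanity check: a site 90° from the pole of a plate rotating at 1°/Myr moves at about 111 mm/yr, i.e. (π/180) × 6371 km/Myr.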

  13. Faster self-paced rate of drinking for alcohol mixed with energy drinks versus alcohol alone.

    PubMed

    Marczinski, Cecile A; Fillmore, Mark T; Maloney, Sarah F; Stamates, Amy L

    2017-03-01

    The consumption of alcohol mixed with energy drinks (AmED) has been associated with higher rates of binge drinking and impaired driving when compared with alcohol alone. However, it remains unclear why the risks of AmED use are heightened compared with alcohol alone even when the doses of alcohol consumed are similar. Therefore, the purpose of this laboratory study was to investigate whether the rate of self-paced beverage consumption was faster for a dose of AmED versus alcohol alone, using a double-blind, within-subjects, placebo-controlled study design. Participants (n = 16; equal numbers of men and women) who were social drinkers attended 4 separate test sessions that involved consumption of alcohol (1.97 ml/kg vodka) and energy drinks, alone and in combination. On each test day, the dose assigned was divided into 10 cups. Participants were informed that they would have a 2-h period to consume the 10 drinks. After the self-paced drinking period, participants completed a cued go/no-go reaction time (RT) task and subjective ratings of stimulation and sedation. The results indicated that participants consumed the AmED dose significantly faster (by ∼16 min) than the alcohol dose. For the performance task, participants' mean RTs were slower in the alcohol conditions and faster in the energy-drink conditions. In conclusion, alcohol consumers should be made aware that rapid drinking might occur for AmED beverages, thus heightening alcohol-related safety risks. The fast rate of drinking may be related to the generalized speeding of responses after energy-drink consumption. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  14. Preliminary study of the use of the STAR-100 computer for transonic flow calculations

    NASA Technical Reports Server (NTRS)

    Keller, J. D.; Jameson, A.

    1977-01-01

    An explicit method for solving the transonic small-disturbance potential equation is presented. This algorithm, which is suitable for the new vector-processor computers such as the CDC STAR-100, is compared to successive line over-relaxation (SLOR) on a simple test problem. The convergence rate of the explicit scheme is slower than that of SLOR; however, the efficiency of the explicit scheme on the STAR-100 computer is sufficient to overcome the slower convergence rate and allow an overall speedup compared to SLOR on the CYBER 175 computer.

  15. Comparison of cursive models for handwriting instruction.

    PubMed

    Karlsdottir, R

    1997-12-01

    The efficiency of four different cursive handwriting styles as model alphabets for handwriting instruction of primary school children was compared in a cross-sectional field experiment spanning Grades 3 to 6, in terms of the average handwriting speed developed by the children and the average rate at which the children's handwriting converged to the style of their model. It was concluded that styles with regular entry-stroke patterns give the steadiest rate of convergence to the model, and that styles with short ascenders and descenders and strokes of moderate curvature give the highest handwriting speed.

  16. Convergent evolution of marine mammals is associated with distinct substitutions in common genes

    PubMed Central

    Zhou, Xuming; Seim, Inge; Gladyshev, Vadim N.

    2015-01-01

    Phenotypic convergence is thought to be driven by parallel substitutions coupled with natural selection at the sequence level. Multiple independent evolutionary transitions of mammals to an aquatic environment offer an opportunity to test this thesis. Here, whole genome alignment of coding sequences identified widespread parallel amino acid substitutions in marine mammals; however, the majority of these changes were not unique to these animals. Conversely, we report that candidate aquatic adaptation genes, identified by signatures of likelihood convergence and/or elevated ratio of nonsynonymous to synonymous nucleotide substitution rate, are characterized by very few parallel substitutions and exhibit distinct sequence changes in each group. Moreover, no significant positive correlation was found between likelihood convergence and positive selection in all three marine lineages. These results suggest that convergence in protein coding genes associated with aquatic lifestyle is mainly characterized by independent substitutions and relaxed negative selection. PMID:26549748

  17. Unified gas-kinetic scheme with multigrid convergence for rarefied flow study

    NASA Astrophysics Data System (ADS)

    Zhu, Yajun; Zhong, Chengwen; Xu, Kun

    2017-09-01

    The unified gas kinetic scheme (UGKS) is based on direct modeling of gas dynamics on the mesh-size and time-step scales. By modeling particle transport and collision in a time-dependent flux function within a finite-volume framework, the UGKS connects the flow physics smoothly from kinetic particle transport to hydrodynamic wave propagation. In comparison with the direct simulation Monte Carlo (DSMC) method, the current equation-based UGKS can implement implicit techniques in the updates of macroscopic conservative variables and microscopic distribution functions. The implicit UGKS significantly increases the convergence speed for steady flow computations, especially in the highly rarefied and near-continuum regimes. To further improve the computational efficiency, a geometric multigrid technique is introduced into the implicit UGKS for the first time, where the prediction step for the equilibrium state and the evolution step for the distribution function are both treated with multigrid acceleration. More specifically, a full approximation scheme for the nonlinear system is employed in the prediction step for fast evaluation of the equilibrium state, and a correction linear equation is solved in the evolution step for the update of the gas distribution function. As a result, the convergence speed is greatly improved in all flow regimes, from rarefied to continuum. The multigrid implicit UGKS (MIUGKS) is applied to non-equilibrium flows, including microflows, such as lid-driven cavity flow and flow through a finite-length flat plate, and high-speed flows, such as supersonic flow over a square cylinder. The MIUGKS shows a 5-9 times efficiency increase over the previous implicit scheme. For low-speed microflow, the efficiency of the MIUGKS is several orders of magnitude higher than that of the DSMC. Even for hypersonic flow at Mach number 5 and Knudsen number 0.1, the MIUGKS is still more than 100 times faster than the DSMC method in obtaining a convergent steady-state solution.
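    The geometric multigrid idea of smoothing on a fine grid and correcting with a coarser one can be illustrated far more simply than in the UGKS setting. Below is a minimal two-grid cycle for the 1-D Poisson problem, assuming a weighted-Jacobi smoother, full-weighting restriction, and a dense exact coarse solve; it is a sketch of the general technique, not the MIUGKS algorithm.

```python
import numpy as np

def jacobi(u, f, h, sweeps, w=2.0/3.0):
    """Weighted-Jacobi smoothing sweeps for -u'' = f, zero Dirichlet ends."""
    for _ in range(sweeps):
        u[1:-1] = (1 - w) * u[1:-1] + w * 0.5 * (u[:-2] + u[2:] + h * h * f[1:-1])
    return u

def residual(u, f, h):
    r = np.zeros_like(u)
    r[1:-1] = f[1:-1] - (2 * u[1:-1] - u[:-2] - u[2:]) / (h * h)
    return r

def two_grid(u, f, h):
    """Pre-smooth, coarse-grid correction (exact coarse solve), post-smooth."""
    u = jacobi(u, f, h, 3)
    r = residual(u, f, h)
    nc = (u.size + 1) // 2               # coarse grid keeps every other point
    rc = np.zeros(nc)                    # full-weighting restriction
    rc[1:-1] = 0.25 * r[1:-2:2] + 0.5 * r[2:-1:2] + 0.25 * r[3::2]
    H = 2 * h                            # coarse spacing
    Ac = (np.diag(2 * np.ones(nc - 2)) - np.diag(np.ones(nc - 3), 1)
          - np.diag(np.ones(nc - 3), -1)) / (H * H)
    ec = np.zeros(nc)
    ec[1:-1] = np.linalg.solve(Ac, rc[1:-1])
    e = np.zeros_like(u)                 # linear interpolation back to fine
    e[::2] = ec
    e[1::2] = 0.5 * (ec[:-1] + ec[1:])
    u += e
    return jacobi(u, f, h, 3)

n = 129
h = 1.0 / (n - 1)
x = np.linspace(0.0, 1.0, n)
f = np.pi ** 2 * np.sin(np.pi * x)       # exact solution: sin(pi x)
u = np.zeros(n)
for _ in range(10):
    u = two_grid(u, f, h)
```

    The smoother damps only the high-frequency error; the coarse solve removes the smooth remainder cheaply, which is what yields grid-independent convergence in a full multigrid hierarchy.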

  18. Split Bregman multicoil accelerated reconstruction technique: A new framework for rapid reconstruction of cardiac perfusion MRI

    PubMed Central

    Kamesh Iyer, Srikant; Tasdizen, Tolga; Likhite, Devavrat; DiBella, Edward

    2016-01-01

    Purpose: Rapid reconstruction of undersampled multicoil MRI data with iterative constrained reconstruction methods is a challenge. The authors sought to develop a new substitution-based variable-splitting algorithm for faster reconstruction of multicoil cardiac perfusion MRI data. Methods: The new method, the split Bregman multicoil accelerated reconstruction technique (SMART), uses a combination of split Bregman based variable splitting and iterative reweighting techniques to achieve fast convergence. Total variation constraints are used along the spatial and temporal dimensions. The method is tested on nine ECG-gated dog perfusion datasets, acquired with a 30-ray golden-ratio radial sampling pattern, and ten ungated human perfusion datasets, acquired with a 24-ray golden-ratio radial sampling pattern. Image quality and reconstruction speed are evaluated and compared to a gradient descent (GD) implementation and to multicoil k-t SLR, a reconstruction technique that uses a combination of sparsity and low-rank constraints. Results: Comparisons based on a blur metric and visual inspection showed that SMART images had lower blur and better texture than the GD implementation; on average, the GD-based images had an ∼18% higher blur metric. Reconstruction of dynamic contrast enhanced (DCE) cardiac perfusion images using the SMART method was ∼6 times faster than standard gradient descent methods. k-t SLR and SMART produced images with comparable image quality, though SMART was ∼6.8 times faster than k-t SLR. Conclusions: The SMART method is a promising approach for rapidly reconstructing good quality multicoil images from undersampled DCE cardiac perfusion data. PMID:27036592
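    The split Bregman variable-splitting idea behind SMART (substitute an auxiliary variable for the TV term, alternate two easy subproblems, and accumulate a Bregman variable) is easiest to see in a 1-D total-variation denoising toy problem. This is a sketch of the general technique, not the SMART reconstruction itself; the penalty parameters lam and mu are illustrative choices.

```python
import numpy as np

def shrink(x, t):
    """Soft thresholding: the closed-form proximal step for the l1 term."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def tv_denoise_split_bregman(f, lam, mu=5.0, n_iter=100):
    """Solve min_u 0.5||u - f||^2 + lam*||D u||_1 by substituting d = D u:
    a linear solve in u, a shrinkage in d, and a Bregman update in b."""
    n = f.size
    D = np.diff(np.eye(n), axis=0)          # forward-difference operator
    Asys = np.eye(n) + mu * (D.T @ D)       # u-subproblem matrix (fixed)
    d = np.zeros(n - 1)
    b = np.zeros(n - 1)
    u = f.copy()
    for _ in range(n_iter):
        u = np.linalg.solve(Asys, f + mu * (D.T @ (d - b)))
        Du = D @ u
        d = shrink(Du + b, lam / mu)        # enforce d ~ D u sparsely
        b = b + Du - d                      # Bregman (running residual) update
    return u

rng = np.random.default_rng(1)
truth = np.concatenate([np.zeros(50), np.ones(50)])   # piecewise-constant signal
noisy = truth + 0.1 * rng.normal(size=100)
clean = tv_denoise_split_bregman(noisy, lam=0.5)
```

    Because the u-subproblem matrix never changes, its factorization can be reused every iteration, which is one source of the speed advantage over plain gradient descent on the composite objective.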

  19. Faster fertilization rate in conspecific versus heterospecific matings in house mice.

    PubMed

    Dean, Matthew D; Nachman, Michael W

    2009-01-01

    Barriers to gene flow can arise at any stage in the reproductive sequence. Most studies of reproductive isolation focus on premating or postzygotic phenotypes, leaving the importance of differences in fertilization rate overlooked. Two closely related species of house mice, Mus domesticus and M. musculus, form a narrow hybrid zone in Europe, suggesting that one or more isolating factors operate in the face of ongoing gene flow. Here, we test for differences in fertilization rate using laboratory matings as well as in vitro sperm competition assays. In noncompetitive matings, we show that fertilization occurs significantly faster in conspecific versus heterospecific matings and that this difference arises after mating and before zygotes form. To further explore the mechanisms underlying this conspecific advantage, we used competitive in vitro assays to isolate gamete interactions. Surprisingly, we discovered that M. musculus sperm consistently outcompeted M. domesticus sperm regardless of which species donated ova. These results suggest that in vivo fertilization rate is mediated by interactions between sperm, the internal female environment, and/or contributions from male seminal fluid. We discuss the implications of faster conspecific fertilization in terms of reproductive isolation between these two naturally hybridizing species.

  20. Music increases alcohol consumption rate in young females.

    PubMed

    Stafford, Lorenzo D; Dodd, Hannah

    2013-10-01

    Previous field research has shown that individuals consumed more alcohol, and at a faster rate, in environments paired with loud music. Theoretically, this effect has been linked to approach/avoidance accounts of how music influences arousal and mood, but no work has tested this experimentally. In the present study, female participants (n = 45) consumed an alcoholic (4% alcohol-by-volume) beverage in one of three contexts: slow-tempo music, fast-tempo music, or a no-music control. Results revealed that, compared with the control, the beverage was consumed fastest in the two music conditions. Interestingly, whereas arousal and negative mood declined in the control condition, this was not the case for either of the music conditions, suggesting a downregulation of alcohol effects. We additionally found evidence that music disrupts sensory processing: counterintuitively, faster consumption was driven by increases in perceived alcohol strength, which, in turn, predicted lower breath alcohol level (BrAL). These findings suggest a unique interaction of the music environment and the psychoactive effects of alcohol itself on consumption rate. Because alcohol consumed at a faster rate induces greater intoxication, these findings have implications for applied and theoretical work. PsycINFO Database Record (c) 2013 APA, all rights reserved.

  1. The effects of L-theanine, caffeine and their combination on cognition and mood.

    PubMed

    Haskell, Crystal F; Kennedy, David O; Milne, Anthea L; Wesnes, Keith A; Scholey, Andrew B

    2008-02-01

    L-Theanine is an amino acid found naturally in tea. Despite the common consumption of L-theanine, predominantly in combination with caffeine in the form of tea, only one study to date has examined the cognitive effects of this substance alone, and none have examined its effects when combined with caffeine. The present randomised, placebo-controlled, double-blind, balanced crossover study investigated the acute cognitive and mood effects of L-theanine (250 mg), and caffeine (150 mg), in isolation and in combination. Salivary caffeine levels were co-monitored. L-Theanine increased 'headache' ratings and decreased correct serial seven subtractions. Caffeine led to faster digit vigilance reaction time, improved Rapid Visual Information Processing (RVIP) accuracy and attenuated increases in self-reported 'mental fatigue'. In addition to improving RVIP accuracy and 'mental fatigue' ratings, the combination also led to faster simple reaction time, faster numeric working memory reaction time and improved sentence verification accuracy. 'Headache' and 'tired' ratings were reduced and 'alert' ratings increased. There was also a significant positive caffeine x L-theanine interaction on delayed word recognition reaction time. These results suggest that beverages containing L-theanine and caffeine may have a different pharmacological profile to those containing caffeine alone.

  2. Modeling weakly-ionized plasmas in magnetic field: A new computationally-efficient approach

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Parent, Bernard, E-mail: parent@pusan.ac.kr; Macheret, Sergey O.; Shneider, Mikhail N.

    2015-11-01

    Despite its success at simulating accurately both non-neutral and quasi-neutral weakly-ionized plasmas, the drift-diffusion model has been observed to be a particularly stiff set of equations. Recently, it was demonstrated that the stiffness of the system could be relieved by rewriting the equations such that the potential is obtained from Ohm's law rather than Gauss's law, while adding some source terms to the ion transport equation to ensure that Gauss's law is satisfied in non-neutral regions. Although the latter was applicable to multicomponent and multidimensional plasmas, it could not be used for plasmas in which the magnetic field was significant. This paper hence proposes a new computationally-efficient set of electron and ion transport equations that can be used not only for a plasma with multiple types of positive and negative ions, but also for a plasma in a magnetic field. Because the proposed set of equations is obtained from the same physical model as the conventional drift-diffusion equations without introducing new assumptions or simplifications, it results in the same exact solution when the grid is refined sufficiently while being more computationally efficient: not only is the proposed approach considerably less stiff, and hence requires fewer iterations to reach convergence, but it yields a converged solution that exhibits a significantly higher resolution. The combined faster convergence and higher resolution is shown to result in a hundredfold increase in computational efficiency for some typical steady and unsteady plasma problems, including non-neutral cathode and anode sheaths as well as quasi-neutral regions.

  3. Mean-variance analysis of block-iterative reconstruction algorithms modeling 3D detector response in SPECT

    NASA Astrophysics Data System (ADS)

    Lalush, D. S.; Tsui, B. M. W.

    1998-06-01

    We study the statistical convergence properties of two fast iterative reconstruction algorithms, the rescaled block-iterative (RBI) and ordered subset (OS) EM algorithms, in the context of cardiac SPECT with 3D detector response modeling. The Monte Carlo method was used to generate nearly noise-free projection data modeling the effects of attenuation, detector response, and scatter from the MCAT phantom. One thousand noise realizations were generated with an average count level approximating a typical Tl-201 cardiac study. Each noise realization was reconstructed using the RBI and OS algorithms for cases with and without detector response modeling. For each iteration up to twenty, we generated mean and variance images, as well as covariance images for six specific locations. Both OS and RBI converged in the mean to results that were close to the noise-free ML-EM result using the same projection model. When detector response was not modeled in the reconstruction, RBI exhibited considerably lower noise variance than OS for the same resolution. When 3D detector response was modeled, the RBI-EM provided a small improvement in the tradeoff between noise level and resolution recovery, primarily in the axial direction, while OS required about half the number of iterations of RBI to reach the same resolution. We conclude that OS is faster than RBI, but may be sensitive to errors in the projection model. Both OS-EM and RBI-EM are effective alternatives to the ML-EM algorithm, but noise level and speed of convergence depend on the projection model used.
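    The OS-EM speedup over plain ML-EM comes from applying one multiplicative EM update per data subset, so each full pass over the projections performs several updates instead of one. The toy sketch below uses a small random nonnegative system matrix, not a SPECT projector; the sizes and subset count are illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)
n_pix, n_proj = 32, 96
A = rng.uniform(0.2, 1.0, size=(n_proj, n_pix))   # toy nonnegative system matrix
x_true = rng.uniform(0.5, 2.0, size=n_pix)
y = A @ x_true                                     # noise-free "projections"

def osem(A, y, n_subsets, n_iter):
    """Ordered-subsets EM: one multiplicative EM update per projection
    subset; n_subsets=1 reduces to the standard ML-EM iteration."""
    x = np.ones(A.shape[1])
    subsets = np.array_split(np.arange(A.shape[0]), n_subsets)
    for _ in range(n_iter):
        for idx in subsets:
            As, ys = A[idx], y[idx]
            sensitivity = As.sum(axis=0)           # A_s^T 1 normalization
            x = x * (As.T @ (ys / (As @ x))) / sensitivity
    return x

x_mlem = osem(A, y, n_subsets=1, n_iter=40)   # 40 full EM passes
x_osem = osem(A, y, n_subsets=8, n_iter=5)    # ~8x fewer full passes
```

    The multiplicative form keeps the estimate nonnegative automatically, which is the property that makes these algorithms attractive for emission tomography.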

  4. Ab initio elastic tensor of cubic Ti0.5Al0.5N alloys: Dependence of elastic constants on size and shape of the supercell model and their convergence

    NASA Astrophysics Data System (ADS)

    Tasnádi, Ferenc; Odén, M.; Abrikosov, Igor A.

    2012-04-01

    In this study we discuss the performance of the special quasirandom structure (SQS) method in predicting the elastic properties of B1 (rocksalt) Ti0.5Al0.5N alloy. We use a symmetry-based projection technique, which gives the closest cubic approximate of the elastic tensor and allows us to align SQSs of different shapes and sizes for comparison in modeling elastic tensors. We show that the derived closest cubic approximate of the elastic tensor converges faster with respect to SQS size than the elastic tensor itself. That establishes a less demanding computational strategy for achieving convergence of the elastic constants. We determine the cubic elastic constants (Cij) and the Zener-type elastic anisotropy (A) of Ti0.5Al0.5N. Optimal supercells, which capture accurately both the configurational disorder and the cubic symmetry of the elastic tensor, result in C11=447 GPa, C12=158 GPa, and C44=203 GPa within 3% error, and A=1.40 within 6% error. In addition, we establish the general importance of selecting a proper SQS with symmetry arguments to reliably model the elasticity of alloys. We suggest calculating nine elastic tensor elements: C11, C22, C33, C12, C13, C23, C44, C55, and C66, to analyze the performance of SQSs and predict the elastic constants of cubic alloys. The described methodology is general enough to be extended to alloys of other symmetry at arbitrary composition.
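    For a tensor expressed in the cubic axes, one common form of the closest cubic approximate is simply the average of the symmetry-equivalent Voigt components, after which Zener's ratio follows from the three cubic constants. A small sketch using the constants quoted in the abstract; the slightly non-cubic input matrix is hypothetical, invented to mimic raw SQS output.

```python
import numpy as np

def closest_cubic(C):
    """Average the symmetry-equivalent Voigt components; for a tensor
    expressed in the cubic axes this projects onto cubic symmetry."""
    C11 = (C[0, 0] + C[1, 1] + C[2, 2]) / 3.0
    C12 = (C[0, 1] + C[0, 2] + C[1, 2]) / 3.0
    C44 = (C[3, 3] + C[4, 4] + C[5, 5]) / 3.0
    return C11, C12, C44

def zener_anisotropy(C11, C12, C44):
    """Zener ratio A = 2*C44/(C11 - C12); A = 1 means elastic isotropy."""
    return 2.0 * C44 / (C11 - C12)

# hypothetical, slightly non-cubic 6x6 Voigt matrix (upper triangle shown)
C = np.zeros((6, 6))
C[0, 0], C[1, 1], C[2, 2] = 449.0, 446.0, 446.0
C[0, 1], C[0, 2], C[1, 2] = 157.0, 159.0, 158.0
C[3, 3], C[4, 4], C[5, 5] = 204.0, 202.0, 203.0

c11, c12, c44 = closest_cubic(C)           # -> 447, 158, 203 GPa
A_zener = zener_anisotropy(c11, c12, c44)  # ~1.40, as quoted above
```

    Averaging the nine elements the authors suggest (C11, C22, C33, C12, C13, C23, C44, C55, C66) is exactly what makes the projected constants converge faster than any individual tensor element.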

  5. Reliability enhancement of Navier-Stokes codes through convergence acceleration

    NASA Technical Reports Server (NTRS)

    Merkle, Charles L.; Dulikravich, George S.

    1995-01-01

    Methods for enhancing the reliability of Navier-Stokes computer codes through improved convergence characteristics are presented. Improving these characteristics decreases the likelihood of code unreliability and of user intervention in a design environment. The problem referred to as 'stiffness' in the governing equations for propulsion-related flowfields is investigated, particularly common sources of equation stiffness that lead to convergence degradation of CFD algorithms. Von Neumann stability theory is employed as a tool to study the convergence difficulties involved. Based on the stability results, improved algorithms are devised to ensure efficient convergence in different situations. A number of test cases are considered to confirm the correlation between stability theory and numerical convergence. Examples of turbulent and reacting flow are presented, and a generalized form of the preconditioning matrix is derived to handle these problems, i.e., problems involving additional differential equations describing the transport of turbulent kinetic energy, dissipation rate, and chemical species. Algorithms for unsteady computations are considered. The extension of the preconditioning techniques and algorithms derived for Navier-Stokes computations to three-dimensional flow problems is discussed. New methods to accelerate the convergence of iterative schemes for the numerical integration of systems of partial differential equations are developed, with special emphasis on accelerating convergence on highly clustered grids.

  6. Cellulose nanomaterials as additives for cementitious materials

    Treesearch

    Tengfei Fu; Robert J. Moon; Pablo Zavatierri; Jeffrey Youngblood; William Jason Weiss

    2017-01-01

    Cementitious materials cover a very broad area of industries/products (buildings, streets and highways, water and waste management, and many others; see Fig. 20.1). Annual production of cements is on the order of 4 billion metric tons [2]. In general these industries want stronger, cheaper, more durable concrete, with faster setting times, faster rates of strength gain...

  7. Stability analysis of a deterministic dose calculation for MRI-guided radiotherapy.

    PubMed

    Zelyak, O; Fallone, B G; St-Aubin, J

    2017-12-14

    Modern effort in radiotherapy to address the challenges of tumor localization and motion has led to the development of MRI guided radiotherapy technologies. Accurate dose calculations must properly account for the effects of the MRI magnetic fields. Previous work has investigated the accuracy of a deterministic linear Boltzmann transport equation (LBTE) solver that includes magnetic field, but not the stability of the iterative solution method. In this work, we perform a stability analysis of this deterministic algorithm including an investigation of the convergence rate dependencies on the magnetic field, material density, energy, and anisotropy expansion. The iterative convergence rate of the continuous and discretized LBTE including magnetic fields is determined by analyzing the spectral radius using Fourier analysis for the stationary source iteration (SI) scheme. The spectral radius is calculated when the magnetic field is included (1) as a part of the iteration source, and (2) inside the streaming-collision operator. The non-stationary Krylov subspace solver GMRES is also investigated as a potential method to accelerate the iterative convergence, and an angular parallel computing methodology is investigated as a method to enhance the efficiency of the calculation. SI is found to be unstable when the magnetic field is part of the iteration source, but unconditionally stable when the magnetic field is included in the streaming-collision operator. The discretized LBTE with magnetic fields using a space-angle upwind stabilized discontinuous finite element method (DFEM) was also found to be unconditionally stable, but the spectral radius rapidly reaches unity for very low-density media and increasing magnetic field strengths indicating arbitrarily slow convergence rates. However, GMRES is shown to significantly accelerate the DFEM convergence rate showing only a weak dependence on the magnetic field. In addition, the use of an angular parallel computing strategy is shown to potentially increase the efficiency of the dose calculation.
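    The link between the spectral radius and the convergence of a stationary iteration, and the way a Krylov solver such as GMRES can outrun it when the spectrum is clustered, can be demonstrated on a small dense system. The sketch below uses a bare-bones GMRES written out explicitly (not the paper's transport solver) and an invented symmetric iteration matrix with spectral radius 0.95.

```python
import numpy as np

def gmres_simple(matvec, b, tol=1e-10, max_iter=150):
    """Bare-bones GMRES (no restarts, no preconditioner): Arnoldi
    process plus a small dense least-squares solve each step."""
    n = b.size
    Q = np.zeros((n, max_iter + 1))
    H = np.zeros((max_iter + 1, max_iter))
    beta = np.linalg.norm(b)
    Q[:, 0] = b / beta
    for k in range(max_iter):
        w = matvec(Q[:, k])
        for j in range(k + 1):                  # modified Gram-Schmidt
            H[j, k] = Q[:, j] @ w
            w -= H[j, k] * Q[:, j]
        H[k + 1, k] = np.linalg.norm(w)
        if H[k + 1, k] > 1e-14:
            Q[:, k + 1] = w / H[k + 1, k]
        e1 = np.zeros(k + 2); e1[0] = beta
        y, *_ = np.linalg.lstsq(H[:k + 2, :k + 1], e1, rcond=None)
        if np.linalg.norm(H[:k + 2, :k + 1] @ y - e1) < tol * beta:
            return Q[:, :k + 1] @ y, k + 1
    return Q[:, :max_iter] @ y, max_iter

# symmetric iteration matrix M with rho(M) = 0.95 and a clustered spectrum
rng = np.random.default_rng(3)
n = 200
Qb, _ = np.linalg.qr(rng.normal(size=(n, n)))
eigs = np.concatenate([np.linspace(0.80, 0.95, 5), 0.1 * rng.uniform(size=n - 5)])
M = (Qb * eigs) @ Qb.T
c = rng.normal(size=n)
x_exact = np.linalg.solve(np.eye(n) - M, c)

# stationary "source iteration" x <- M x + c: error contracts by ~rho(M) per sweep
x = np.zeros(n)
si_iters = 0
while np.linalg.norm(x - x_exact) > 1e-8 * np.linalg.norm(x_exact):
    x = M @ x + c
    si_iters += 1

# the same fixed point solves (I - M) x = c, which GMRES handles directly
x_g, g_iters = gmres_simple(lambda v: v - M @ v, c)
```

    Because rho(M) = 0.95, the stationary sweep needs hundreds of iterations, while GMRES exploits the clustered spectrum of I - M and converges in a few dozen at most; this mirrors the acceleration reported for the DFEM system above.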

  8. Stability analysis of a deterministic dose calculation for MRI-guided radiotherapy

    NASA Astrophysics Data System (ADS)

    Zelyak, O.; Fallone, B. G.; St-Aubin, J.

    2018-01-01

    Modern effort in radiotherapy to address the challenges of tumor localization and motion has led to the development of MRI guided radiotherapy technologies. Accurate dose calculations must properly account for the effects of the MRI magnetic fields. Previous work has investigated the accuracy of a deterministic linear Boltzmann transport equation (LBTE) solver that includes magnetic field, but not the stability of the iterative solution method. In this work, we perform a stability analysis of this deterministic algorithm including an investigation of the convergence rate dependencies on the magnetic field, material density, energy, and anisotropy expansion. The iterative convergence rate of the continuous and discretized LBTE including magnetic fields is determined by analyzing the spectral radius using Fourier analysis for the stationary source iteration (SI) scheme. The spectral radius is calculated when the magnetic field is included (1) as a part of the iteration source, and (2) inside the streaming-collision operator. The non-stationary Krylov subspace solver GMRES is also investigated as a potential method to accelerate the iterative convergence, and an angular parallel computing methodology is investigated as a method to enhance the efficiency of the calculation. SI is found to be unstable when the magnetic field is part of the iteration source, but unconditionally stable when the magnetic field is included in the streaming-collision operator. The discretized LBTE with magnetic fields using a space-angle upwind stabilized discontinuous finite element method (DFEM) was also found to be unconditionally stable, but the spectral radius rapidly reaches unity for very low-density media and increasing magnetic field strengths indicating arbitrarily slow convergence rates. However, GMRES is shown to significantly accelerate the DFEM convergence rate showing only a weak dependence on the magnetic field. In addition, the use of an angular parallel computing strategy is shown to potentially increase the efficiency of the dose calculation.

  9. Corrigendum to "Stability analysis of a deterministic dose calculation for MRI-guided radiotherapy".

    PubMed

    Zelyak, Oleksandr; Fallone, B Gino; St-Aubin, Joel

    2018-03-12

    Modern effort in radiotherapy to address the challenges of tumor localization and motion has led to the development of MRI guided radiotherapy technologies. Accurate dose calculations must properly account for the effects of the MRI magnetic fields. Previous work has investigated the accuracy of a deterministic linear Boltzmann transport equation (LBTE) solver that includes magnetic field, but not the stability of the iterative solution method. In this work, we perform a stability analysis of this deterministic algorithm including an investigation of the convergence rate dependencies on the magnetic field, material density, energy, and anisotropy expansion. The iterative convergence rate of the continuous and discretized LBTE including magnetic fields is determined by analyzing the spectral radius using Fourier analysis for the stationary source iteration (SI) scheme. The spectral radius is calculated when the magnetic field is included (1) as a part of the iteration source, and (2) inside the streaming-collision operator. The non-stationary Krylov subspace solver GMRES is also investigated as a potential method to accelerate the iterative convergence, and an angular parallel computing methodology is investigated as a method to enhance the efficiency of the calculation. SI is found to be unstable when the magnetic field is part of the iteration source, but unconditionally stable when the magnetic field is included in the streaming-collision operator. The discretized LBTE with magnetic fields using a space-angle upwind stabilized discontinuous finite element method (DFEM) was also found to be unconditionally stable, but the spectral radius rapidly reaches unity for very low density media and increasing magnetic field strengths indicating arbitrarily slow convergence rates. However, GMRES is shown to significantly accelerate the DFEM convergence rate showing only a weak dependence on the magnetic field. In addition, the use of an angular parallel computing strategy is shown to potentially increase the efficiency of the dose calculation. © 2018 Institute of Physics and Engineering in Medicine.

  10. Fatigue crack growth in SA508-CL2 steel in a high temperature, high purity water environment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gerber, T.L.; Heald, J.D.; Kiss, E.

    1974-10-01

    Fatigue crack growth tests were conducted with 1 in. plate specimens of SA508-CL 2 steel in room-temperature air, 550°F air, and a 550°F high-purity water environment. Zero-tension load-controlled tests were run at cyclic frequencies as low as 0.037 CPM. Results show that growth rates in the simulated Boiling Water Reactor (BWR) water environment are faster than those observed in 550°F air, which in turn are faster than the room-temperature rate. In the BWR water environment, lowering the cyclic frequency from 0.37 to 0.037 CPM caused only a slight increase in the fatigue crack growth rate. All growth rates measured in these tests were below the upper-bound design curve presented in Section XI of the ASME Code. (auth)

  11. A Systematic Methodology for Constructing High-Order Energy-Stable WENO Schemes

    NASA Technical Reports Server (NTRS)

    Yamaleev, Nail K.; Carpenter, Mark H.

    2008-01-01

    A third-order Energy Stable Weighted Essentially Non-Oscillatory (ESWENO) finite difference scheme developed by Yamaleev and Carpenter (AIAA 2008-2876, 2008) was proven to be stable in the energy norm for both continuous and discontinuous solutions of systems of linear hyperbolic equations. Herein, a systematic approach is presented that enables "energy stable" modifications for existing WENO schemes of any order. The technique is demonstrated by developing a one-parameter family of fifth-order upwind-biased ESWENO schemes; ESWENO schemes up to eighth order are presented in the appendix. New weight functions are also developed that provide (1) formal consistency, (2) much faster convergence for smooth solutions with an arbitrary number of vanishing derivatives, and (3) improved resolution near strong discontinuities.
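For context, the nonlinear weights that ESWENO schemes replace can be sketched with the classic fifth-order Jiang-Shu weights (an illustrative Python sketch of the standard formulation, not the paper's new weight functions):

```python
# Classic fifth-order WENO (Jiang-Shu) nonlinear weights, shown only to fix
# ideas; the record's ESWENO schemes introduce different weight functions.
# Stencil values v[0..4] approximate u at x_{i-2}..x_{i+2}.

def weno5_weights(v, eps=1e-6):
    # Smoothness indicators of the three 3-point substencils.
    b0 = 13/12*(v[0]-2*v[1]+v[2])**2 + 1/4*(v[0]-4*v[1]+3*v[2])**2
    b1 = 13/12*(v[1]-2*v[2]+v[3])**2 + 1/4*(v[1]-v[3])**2
    b2 = 13/12*(v[2]-2*v[3]+v[4])**2 + 1/4*(3*v[2]-4*v[3]+v[4])**2
    d = (0.1, 0.6, 0.3)                       # ideal (linear) weights
    a = [d[k]/(eps + b)**2 for k, b in enumerate((b0, b1, b2))]
    s = sum(a)
    return [ak/s for ak in a]

# Smooth (linear) data: all indicators agree, weights recover the ideal ones.
smooth = weno5_weights([1.0, 2.0, 3.0, 4.0, 5.0])

# Data with a jump in the left substencil: its weight collapses toward zero.
shock = weno5_weights([100.0, 2.0, 3.0, 4.0, 5.0])
```

How quickly the nonlinear weights approach the ideal weights `d` for smooth data is exactly what governs the "much faster convergence for smooth solutions" property item (2) refers to.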

  12. Enantioselective behaviour of tetraconazole during strawberry wine-making process.

    PubMed

    Liu, Na; Pan, Xinglu; Zhang, Shuang; Ji, Mingshan; Zhang, Zhihong

    2018-05-01

    The fate of tetraconazole enantiomers in strawberries during the wine-making process was studied. The residues were determined by ultra-performance convergence chromatography tandem triple quadrupole mass spectrometry after each processing step. Results indicated significant enantioselective dissipation of tetraconazole during the fermentation process, with (-)-tetraconazole degrading faster than (+)-tetraconazole. The half-lives of (-)-tetraconazole and (+)-tetraconazole were 3.12 and 3.76 days with the washing procedure and 3.18 and 4.05 days without it. The processing factors of strawberry wine samples after each step were generally less than 1; those of the fermentation process were the lowest. The results could help facilitate more accurate risk assessments of tetraconazole during the wine-making process. © 2018 Wiley Periodicals, Inc.
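Half-lives like those reported typically come from fitting a first-order dissipation model; a minimal Python sketch with illustrative numbers (an assumed model, not the study's raw data) shows the arithmetic:

```python
import math

# First-order dissipation: C(t) = C0 * exp(-k t), so t_half = ln(2) / k.
# Concentrations below are invented for illustration only.

def half_life(c0, ct, t_days):
    k = math.log(c0 / ct) / t_days      # first-order rate constant (1/day)
    return math.log(2) / k

# e.g. an enantiomer dropping from 1.0 to 0.33 mg/kg over 5 days:
t_half = half_life(1.0, 0.33, 5.0)      # roughly 3.1 days

# Processing factor = residue after a step / residue before it;
# PF < 1 means the step reduces the residue, as reported for fermentation.
pf = 0.33 / 1.0
```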

  13. A Systematic Methodology for Constructing High-Order Energy Stable WENO Schemes

    NASA Technical Reports Server (NTRS)

    Yamaleev, Nail K.; Carpenter, Mark H.

    2009-01-01

    A third-order Energy Stable Weighted Essentially Non-Oscillatory (ESWENO) finite difference scheme developed by Yamaleev and Carpenter [1] was proven to be stable in the energy norm for both continuous and discontinuous solutions of systems of linear hyperbolic equations. Herein, a systematic approach is presented that enables "energy stable" modifications for existing WENO schemes of any order. The technique is demonstrated by developing a one-parameter family of fifth-order upwind-biased ESWENO schemes; ESWENO schemes up to eighth order are presented in the appendix. New weight functions are also developed that provide (1) formal consistency, (2) much faster convergence for smooth solutions with an arbitrary number of vanishing derivatives, and (3) improved resolution near strong discontinuities.

  14. On optimal improvements of classical iterative schemes for Z-matrices

    NASA Astrophysics Data System (ADS)

    Noutsos, D.; Tzoumas, M.

    2006-04-01

    Many researchers have considered preconditioners, applied to linear systems whose matrix coefficient is a Z- or an M-matrix, that make the associated Jacobi and Gauss-Seidel methods converge asymptotically faster than the unpreconditioned ones. Such preconditioners are chosen so that they eliminate the off-diagonal elements of the same column or the elements of the first upper diagonal [Milaszewicz, LAA 93 (1987) 161-170; Gunawardena et al., LAA 154-156 (1991) 123-143]. In this work we generalize these preconditioners to obtain optimal methods. "Good" Jacobi and Gauss-Seidel algorithms are given, and preconditioners that eliminate more than one entry per row are also proposed and analyzed. Moreover, the behavior of the above preconditioners with Krylov subspace methods is studied.
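The kind of preconditioner discussed (eliminating the first upper diagonal, in the spirit of Gunawardena et al.) can be sketched on a small made-up Z-matrix; this is an illustrative Python sketch, not the optimal methods proposed in the record:

```python
# Gunawardena-type preconditioning P = I + S, where S negates the first upper
# diagonal of A so that PA has zeros there; Jacobi on PA then converges faster.
# The 3x3 Z-matrix A below is invented for illustration.

def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k]*B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def matvec(A, x):
    return [sum(A[i][j]*x[j] for j in range(len(x))) for i in range(len(A))]

def jacobi(A, b, tol=1e-10, max_it=1000):
    n = len(A)
    x = [0.0]*n
    for it in range(1, max_it + 1):
        x_new = [(b[i] - sum(A[i][j]*x[j] for j in range(n) if j != i))/A[i][i]
                 for i in range(n)]
        if max(abs(x_new[i] - x[i]) for i in range(n)) < tol:
            return x_new, it
        x = x_new
    return x, max_it

A = [[ 1.0, -0.4, -0.2],
     [-0.3,  1.0, -0.4],
     [-0.2, -0.3,  1.0]]
b = [1.0, 1.0, 1.0]

# P = I + S eliminates the first upper diagonal of A.
P = [[1.0, 0.4, 0.0],
     [0.0, 1.0, 0.4],
     [0.0, 0.0, 1.0]]

x_plain, it_plain = jacobi(A, b)
x_pre, it_pre = jacobi(matmul(P, A), matvec(P, b))
```

For this example the preconditioned Jacobi iteration reaches the same solution in noticeably fewer sweeps, matching the asymptotic-speedup results the record generalizes.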

  15. Single-pass incremental force updates for adaptively restrained molecular dynamics.

    PubMed

    Singh, Krishna Kant; Redon, Stephane

    2018-03-30

    Adaptively restrained molecular dynamics (ARMD) allows users to perform more integration steps in wall-clock time by switching positional degrees of freedom on and off. This article presents new single-pass incremental force update algorithms to efficiently simulate a system using ARMD. We assessed different algorithms for speedup measurements and implemented them in the LAMMPS MD package. We validated the single-pass incremental force update algorithm on four different benchmarks using diverse pair potentials. The proposed algorithm allows us to perform simulations of a system faster than traditional MD in both NVE and NVT ensembles. Moreover, ARMD using the new single-pass algorithm speeds up the convergence of observables in wall-clock time. © 2017 Wiley Periodicals, Inc.
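The incremental-update idea can be shown in miniature (a conceptual Python sketch with a toy pair force, not the LAMMPS/ARMD implementation): when only one particle has moved, only its pair contributions need to be subtracted and re-added, instead of recomputing all O(n^2) pairs.

```python
# Concept sketch of an incremental pair-force update. The 1-D "spring-like"
# pair force is invented for illustration; real MD uses LJ/Coulomb etc.

def pair_force(xi, xj):
    # Toy force on particle at xi due to particle at xj.
    return xj - xi

def full_forces(xs):
    n = len(xs)
    f = [0.0]*n
    for i in range(n):
        for j in range(i + 1, n):
            fij = pair_force(xs[i], xs[j])
            f[i] += fij          # action on i ...
            f[j] -= fij          # ... equal and opposite on j
    return f

xs = [0.0, 1.0, 3.0, 6.0]
f = full_forces(xs)

# Only particle m moved: patch f in a single pass over its neighbors.
m, new_pos = 2, 3.5
for j in range(len(xs)):
    if j == m:
        continue
    old = pair_force(xs[m], xs[j])
    new = pair_force(new_pos, xs[j])
    f[m] += new - old
    f[j] -= new - old
xs[m] = new_pos
```

With most particles restrained (frozen) between steps, this single pass over the moved particles' pairs is what lets ARMD buy extra integration steps per unit wall-clock time.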

  16. Choice of Control Variables in Variational Data Assimilation and Its Analysis and Forecast Impact

    NASA Astrophysics Data System (ADS)

    Xie, Yuanfu; Sun, Jenny; Fang, Wei-ting

    2014-05-01

    Choice of control variables directly impacts the analysis quality of a variational data assimilation system and its forecasts. A theory on selecting control variables for wind and moisture fields is introduced for 3DVAR and 4DVAR. For a good control variable selection, Parseval's theorem is applied to 3/4DVAR, and the behavior of different control variables is illustrated in physical and Fourier space in terms of the minimization condition, meteorological dynamic scales, and practical implementation. The computational and meteorological benefits are discussed. Numerical experiments have been performed using WRF-DA for wind control variables and CRTM for moisture control variables. The results show improved WRF forecasts and faster convergence of CRTM satellite data assimilation.
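Why the choice of control variable affects minimization can be seen in a minimal diagonal example (assumed values, not the WRF-DA/CRTM configuration): transforming from model space x to control space v = B^{-1/2} x preconditions the cost-function Hessian.

```python
# Minimal 2-variable 3DVAR conditioning sketch. Assumptions: diagonal
# background covariance B, one observation of the second component
# (H = [0, 1]), unit observation error (R = I). Everything stays diagonal,
# so condition numbers are just max/min of the Hessian diagonals.

b_var = [100.0, 1.0]     # assumed background-error variances

# Model-space Hessian of J(x): B^{-1} + H^T R^{-1} H
hess_x = [1.0/b_var[0] + 0.0,    # unobserved component
          1.0/b_var[1] + 1.0]    # observed component

# Control-space Hessian of J(v), v = B^{-1/2} x: I + B^{1/2} H^T R^{-1} H B^{1/2}
hess_v = [1.0 + 0.0,
          1.0 + b_var[1]*1.0]

cond_x = max(hess_x)/min(hess_x)  # large: slow gradient-based minimization
cond_v = max(hess_v)/min(hess_v)  # small: much faster convergence
```

The control-variable transform flattens the background term to the identity, which is the standard reason variational systems minimize in control space; the record's faster CRTM convergence is the same mechanism at full scale.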

  17. Convergent Validity of and Bias in Maternal Reports of Child Emotion

    ERIC Educational Resources Information Center

    Durbin, C. Emily; Wilson, Sylia

    2012-01-01

    This study examined the convergent validity of maternal reports of child emotion in a sample of 190 children between the ages of 3 and 6. Children completed a battery of 10 emotion-eliciting laboratory tasks; their mothers and untrained naive observers rated child emotions (happiness, surprise, fear, sadness, and anger) following each task, and…

  18. Convergent Validity of the Autism Spectrum Disorder-Diagnostic for Children (ASD-DC) and Childhood Autism Rating Scales (CARS)

    ERIC Educational Resources Information Center

    Matson, Johnny L.; Mahan, Sara; Hess, Julie A.; Fodstad, Jill C.; Neal, Daniene

    2010-01-01

    Previous studies analyzed the reliability as well as sensitivity and specificity of the Autism Spectrum Disorder-Diagnostic for Children (ASD-DC). This study further examines the psychometric properties of the ASD-DC by assessing whether the ASD-DC has convergent validity against a psychometrically sound observational instrument for Autistic…

  19. Further Validation of the IDAS: Evidence of Convergent, Discriminant, Criterion, and Incremental Validity

    ERIC Educational Resources Information Center

    Watson, David; O'Hara, Michael W.; Chmielewski, Michael; McDade-Montez, Elizabeth A.; Koffel, Erin; Naragon, Kristin; Stuart, Scott

    2008-01-01

    The authors explicated the validity of the Inventory of Depression and Anxiety Symptoms (IDAS; D. Watson et al., 2007) in 2 samples (306 college students and 605 psychiatric patients). The IDAS scales showed strong convergent validity in relation to parallel interview-based scores on the Clinician Rating version of the IDAS; the mean convergent…

  20. A Reformulated Correlated Trait-Correlated Method Model for Multitrait-Multimethod Data Effectively Increases Convergence and Admissibility Rates

    ERIC Educational Resources Information Center

    Fan, Yi; Lance, Charles E.

    2017-01-01

    The correlated trait-correlated method (CTCM) model for the analysis of multitrait-multimethod (MTMM) data is known to suffer convergence and admissibility (C&A) problems. We describe a little known and seldom applied reparameterized version of this model (CTCM-R) based on Rindskopf's reparameterization of the simpler confirmatory factor…
